# Critical Phenomena and Reentrant Phase Transition of Asymptotically Reissner–Nordström Black Holes
Mehrab Momennia¹,² and Seyed Hossein Hendi¹,²,³
¹Department of Physics, School of Science, Shiraz University, Shiraz 71454, Iran
²Biruni Observatory, School of Science, Shiraz University, Shiraz 71454, Iran
³Canadian Quantum Research Center, 204-3002 32 Ave Vernon, BC V1T 2L7, Canada
###### Abstract
By considering a small correction to the Maxwell field, we show that the resultant black hole solutions (known as asymptotically Reissner–Nordström black holes) undergo a reentrant phase transition and can exhibit novel phase behavior. We also show that such a small nonlinear correction to the Reissner–Nordström black holes strongly affects the phase structure of the solutions. It leads to a new classification in the canonical ensemble of the extended phase space, according to whether the nonlinearity parameter $\alpha$ satisfies $\alpha\lesseqqgtr 4q^{2}/7$. We shall study these three classes and investigate deviations from the standard Reissner–Nordström solutions. Interestingly, we find a reentrant phase transition for $\alpha<4q^{2}/7$, whereas for $\alpha=4q^{2}/7$ there is no phase transition below (or at) the critical point. For the last case, one finds that small and large black holes are thermodynamically distinguishable at temperatures and pressures above the critical ones.
## I Introduction
It is well known that a black hole can be treated as an ordinary thermodynamic system Davies ; Davies1989 ; Wald with a characteristic entropy Bekenstein and temperature Hawking , such that in most cases it obeys the first law of thermodynamics Bardeen . It was also shown that these highly dense compact objects behave as ordinary thermodynamic systems exhibiting phase transitions HP . More interestingly, a van der Waals-like (vdW-like) phase transition arises in charged black hole systems by considering the correspondence $\left(Q,\Phi\right)\leftrightarrow\left(P,V\right)$ between conserved quantities and thermodynamic variables Chamblin ; ChamblinEmparan . Recently, in the context of black hole thermodynamics, an interpretation of the cosmological constant as a thermodynamic pressure has been proposed Caldarelli ; Kastor , which has attracted much attention. This relation reads
$P=-\frac{\Lambda}{8\pi},$ (1)
in which the thermodynamical volume $V$ is the conjugate quantity to pressure
as
$V=\left(\frac{\partial M}{\partial P}\right)_{rep},$ (2)
where "$rep$" refers to "residual extensive parameters". Indeed, the primary motivation for treating $\Lambda$ as a thermodynamic pressure comes from the fact that several physical constants, such as the Yukawa coupling, gauge coupling constants, and Newton's constant, are not fixed values in some fundamental theories. In addition, in the Tolman–Oppenheimer–Volkoff equation, $\Lambda$ is added to the pressure, which shows that the cosmological constant can play the role of a thermodynamic pressure. Besides, $\Lambda$ is a slowly varying parameter and has dimension $(\mathrm{length})^{-2}$, which is the dimension of pressure. Usually, a vdW-like small-large black hole (SBH-LBH) phase transition can be observed in black hole systems whenever $\Lambda$ behaves as a thermodynamic pressure. This type of phase transition has been studied extensively in the background spacetime of various black hole solutions (for instance, see the incomplete list KubiznakMann ; Banerjee ; Wei ; Mo ; Zou ; Xu ; HendiFaizal ; Mandal ; Miao ; RainbowYM ; StetskoPRD ; MassiveYM ; Estrada ; Stetsko2021 and references therein).
The reentrant phase transition (RPT) phenomenon is observed in an ordinary thermodynamic system when a monotonic change of a thermodynamic variable produces more than one phase transition such that the final phase is macroscopically similar to the initial phase. For the asymptotically Reissner–Nordström (ARN) black holes, there is a special range of temperatures in which these solutions enjoy a large-small-large phase transition under a monotonic change of the pressure. This interesting phase behavior has been observed in ordinary thermodynamic systems, such as a nicotine-water mixture Hudson , liquid crystals, binary gases, multicomponent fluids, and other typical thermodynamic systems Narayanan . In the context of black hole thermodynamics, the RPT has been reported for Born–Infeld solutions BIadS ; AminRPT , rotating black holes rotatingadS , asymptotically dS black holes deSitter , hairy black holes hairy , black hole solutions in massive gravity dRGTmassive , and Born–Infeld-dilaton black holes reentrant .
In this paper, we study the thermodynamics of ARN black holes, investigate the RPT in the extended phase space, and find deviations from the standard Reissner–Nordström (RN) solutions. We also discuss the novel phenomena of our black hole case study and compare them with the standard RN black holes.
## II Review of Solutions and Thermodynamics
In this section, we briefly review the solutions and thermodynamics of black holes in the presence of quadratic nonlinear electrodynamics. Before proceeding, it is worthwhile to give some motivations. Nonlinear field theories are of interest in various branches of mathematical physics since most physical systems are intrinsically nonlinear. Nonlinear electrodynamic (NED) fields are much richer than the Maxwell theory, and in special cases they reduce to the linear Maxwell field. Various limitations of the Maxwell field, such as describing radiation propagation inside specific materials NLmaterial ; Lorenci ; Novello ; Bittencourt and the self-interaction of virtual electron-positron pairs H-E ; Yajima ; Schwinger , motivate one to consider NED theories as effective fields DelphenichQED ; Delphenich . Moreover, a well-known outstanding problem is that most gravitational theories predict a singularity at the center of black holes. It was shown that by employing NED fields, the big bang and black hole singularities can be removed bigbang ; Ayon ; Klippert ; Dymnikova ; Corda ; Cuesta . Besides, NEDs have important effects on the structure of superstrongly magnetized compact objects, such as pulsars and strange stars Birula ; Salim ; CuestaSalim .
The Lagrangian of Born–Infeld-type NED theories Born ; Infeld ; Soleng ; Hendi , each of which was constructed with different motivations, reduces to the following form in the weak-nonlinearity limit Topologicalinstability
$L(F)=-F+\alpha F^{2}+O\left(\alpha^{2}\right),$ (3)
where $F=F_{\mu\nu}F^{\mu\nu}$ is the Maxwell invariant, $F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}$ is the electromagnetic field tensor, and $A_{\mu}$ is the gauge potential. In this equation, $\alpha$ denotes the nonlinearity parameter, a small quantity proportional to the inverse of the nonlinearity parameter of Born–Infeld-type NED fields. Indeed, although different models of NEDs have been constructed with various primitive aims, they are of physical and experimental importance only for weak nonlinearity, since the Maxwell field already leads to nearly accurate or acceptable results in various branches. Thus, in passing from the Maxwell field to NEDs, considering weak nonlinearity effects is a reasonable and logical decision. In other words, we expect to obtain precise physical results in agreement with experiments whenever the nonlinearity is treated as a correction to the Maxwell theory. In this context, up to constant parameters that are absorbed into $\alpha$, most NED Lagrangians reduce to Eq. (3) for weak nonlinearity, and we shall consider this Lagrangian as an effective matter source coupled to gravity.
The mentioned motivations have led to some interesting works employing Eq. (3) as an effective Lagrangian of electrodynamics H-E ; Yajima ; Schwinger ; Stehle ; DelphenichQED ; Delphenich ; Kats ; Nie ; Anninos ; BIString ; Fradkin ; Matsaev ; Pope ; Tseytlin ; Gross . Heisenberg and Euler demonstrated that quantum corrections lead to nonlinear properties of the vacuum H-E ; Yajima ; Schwinger ; Stehle ; DelphenichQED ; Delphenich . Besides, it was shown that a quartic correction to the Maxwell invariant appears in the low-energy limit of heterotic string theory Kats ; Nie ; Anninos ; BIString ; Fradkin ; Matsaev ; Pope ; Tseytlin ; Gross . Therefore, considering a correction term to the Maxwell field and adopting Eq. (3) as an effective Lagrangian of electrodynamics, instead of the Maxwell and other NED fields, is a reasonable choice.
According to the mentioned motivations, we consider the topological black
holes in $(n+1)$-dimensional spacetime with perturbative nonlinear
electrodynamics Topologicalinstability . The $(n+1)$-dimensional line element
reads
$ds^{2}=-f(r)dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}d\Omega_{n-1}^{2},$ (4)
where $f(r)$ is the metric function and $d\Omega_{n-1}^{2}$ represents the
line element of $\left(n-1\right)$-dimensional hypersurface with constant
curvature $\left(n-1\right)\left(n-2\right)k$ and volume $\omega_{n-1}$ with
the following explicit form
$d\Omega_{n-1}^{2}=\left\{\begin{array}{cc}d\theta_{1}^{2}+\sum\limits_{i=2}^{n-1}\prod\limits_{j=1}^{i-1}\sin^{2}\theta_{j}\,d\theta_{i}^{2}&k=1\\ d\theta_{1}^{2}+\sinh^{2}\theta_{1}\left(d\theta_{2}^{2}+\sum\limits_{i=3}^{n-1}\prod\limits_{j=2}^{i-1}\sin^{2}\theta_{j}\,d\theta_{i}^{2}\right)&k=-1\\ \sum\limits_{i=1}^{n-1}d\phi_{i}^{2}&k=0\end{array}\right.$ (5)
The metric function of these black holes can be obtained as
Topologicalinstability
$f(r)=k-\frac{m}{r^{n-2}}-\frac{2\Lambda
r^{2}}{n\left(n-1\right)}+\frac{2q^{2}}{\left(n-1\right)\left(n-2\right)r^{2n-4}}-\frac{4q^{4}}{\left[2\left(n-2\right)\left(n+2\right)+\left(n-3\right)\left(n-4\right)\right]r^{4n-6}}\alpha+\mathcal{O}\left(\alpha^{2}\right),$
(6)
in which $m$ and $q$ are two integration constants which are related to the
total mass and total electric charge of the black hole, and the last term
indicates the effect of nonlinearity.
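As a quick numerical sanity check of the metric function (6), one can locate the event horizon as the outermost root of $f(r_{+})=0$. The following sketch does this for the $4$-dimensional ($n=3$) spherical case; the parameter values ($\Lambda=-3$, $q=0.8$, $\alpha=0.1$, $m=2$) are illustrative choices, not taken from the paper.

```python
import math

# Illustrative parameters (not taken from the paper's figures): n = 3, k = 1
k, Lam, q, alpha, m = 1, -3.0, 0.8, 0.1, 2.0

def f(r):
    """Metric function (6) for n = 3; the alpha term is the nonlinear correction."""
    return (k - m / r - Lam * r**2 / 3
            + q**2 / r**2 - 0.4 * q**4 * alpha / r**6)

def bisect(g, a, b, tol=1e-12):
    """Bisection for a root of g on [a, b], assuming a single sign change."""
    ga = g(a)
    while b - a > tol:
        c = 0.5 * (a + b)
        if ga * g(c) <= 0:
            b = c
        else:
            a, ga = c, g(c)
    return 0.5 * (a + b)

r_plus = bisect(f, 0.3, 1.0)   # event horizon radius: outermost root of f
print(r_plus)
```

For these sample values, $f$ is negative at small $r$ (the $\alpha$ correction dominates there) and positive beyond the horizon, so a single sign change brackets $r_{+}$.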
The Hawking temperature of these black holes can be obtained by using the definition of the surface gravity on the outermost horizon, $r_{+}$,
$T=\frac{1}{2\pi\left(n-1\right)}\left(\frac{\left(n-1\right)\left(n-2\right)k}{2r_{+}}-\Lambda
r_{+}-\frac{q^{2}}{r_{+}^{2n-3}}+\frac{2q^{4}}{r_{+}^{4n-5}}\alpha\right)+\mathcal{O}\left(\alpha^{2}\right).$
(7)
Moreover, as we are working in Einstein gravity, the entropy of the black
holes can be calculated via the quarter of the event horizon area
$S=\frac{r_{+}^{n-1}}{4},$ (8)
which is the entropy per unit volume $\omega_{n-1}$. The electric potential $\Phi$, measured at infinity with respect to the event horizon, is given by
$\Phi=\frac{q}{\left(n-2\right)r_{+}^{n-2}}-\frac{4q^{3}}{\left(3n-4\right)r_{+}^{3n-4}}\alpha+\mathcal{O}\left(\alpha^{2}\right).$
(9)
Besides, the total electric charge per unit volume $\omega_{n-1}$, can be
obtained by considering the flux of the electric field at infinity as
$Q=\frac{q}{4\pi}.$ (10)
At the final stage of calculating the conserved and thermodynamic quantities,
one can get the total mass of obtained black holes by using the behavior of
the metric at large $r$. Therefore, the total mass per unit volume
$\omega_{n-1}$ is given by
$M=\frac{\left(n-1\right)m}{16\pi}.$ (11)
Considering the entropy and electric charge as a complete set of extensive
parameters, one can show that these conserved and thermodynamic quantities
satisfy the first law of thermodynamics Topologicalinstability
$dM=TdS+\Phi dQ.$ (12)
It is worthwhile to mention that Eqs. (6)-(12) represent the background geometry and thermodynamics of the higher dimensional ARN black holes, and they reduce to the higher dimensional standard RN solutions in the special limit $\alpha=0$.
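The first law (12) can be verified numerically for $n=3$ by eliminating $m$ through $f(r_{+})=0$ and comparing finite-difference derivatives of $M$ with the temperature (17) and the potential (9). The sample point below ($\Lambda=-3$, $q=0.8$, $\alpha=0.1$, $r_{+}=1.3$) is an arbitrary illustrative choice.

```python
import math

# Hypothetical sample point (n = 3, k = 1); Lam, q, alpha, rp are illustrative
k, Lam, q, alpha, rp = 1, -3.0, 0.8, 0.1, 1.3

def M(rp, q):
    """Mass (11) for n = 3: M = m/(8*pi), with m fixed by the horizon condition f(r+) = 0."""
    m = rp * (k - Lam * rp**2 / 3 + q**2 / rp**2 - 0.4 * q**4 * alpha / rp**6)
    return m / (8 * math.pi)

def S(rp):
    """Entropy (8) for n = 3."""
    return rp**2 / 4

def T(rp, q):
    """Temperature (17)."""
    return (-Lam * rp / (4 * math.pi) + 1 / (4 * math.pi * rp)
            - q**2 / (4 * math.pi * rp**3) + q**4 * alpha / (2 * math.pi * rp**7))

def Phi(rp, q):
    """Electric potential (9) for n = 3."""
    return q / rp - 4 * q**3 * alpha / (5 * rp**5)

# Central differences: dM/dS must equal T, and dM/dQ must equal Phi (with Q = q/(4*pi))
h = 1e-6
dM_dS = (M(rp + h, q) - M(rp - h, q)) / (S(rp + h) - S(rp - h))
dM_dQ = (M(rp, q + h) - M(rp, q - h)) / (2 * h / (4 * math.pi))
print(dM_dS - T(rp, q), dM_dQ - Phi(rp, q))
```

Both differences vanish to within the finite-difference error, confirming $dM=TdS+\Phi dQ$ at this point.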
## III Reentrant Phase Transition
It has been proved that the Schwarzschild (AdS) black holes have no vdW-like phase transition Zhang , while this phenomenon has been observed in the RN black holes KubiznakMann . Thus, the electric charge of black holes usually plays a key role in observing such a phase transition, and it is interesting to include the effects of the black hole's charge in the thermodynamic calculations. In this paper, we show how a small correction to the Maxwell field strongly affects the phase transition structure of the RN solutions and splits the thermodynamic phase space into three different regions. The mentioned correction is motivated by the nonlinear properties of the vacuum generated by quantum corrections, by the appearance of a quartic correction to the Maxwell invariant in the low-energy limit of heterotic string theory, and by the physical and experimental importance of adding a weak nonlinearity to the Maxwell field.
In what follows, we concentrate on spherically symmetric black holes with a negative cosmological constant in $4$-dimensional spacetime. Calculations show that the RPT can also occur for higher dimensional solutions, but this is not the case for a positive cosmological constant and/or flat or hyperbolic solutions. The negative cosmological constant in the extended phase space plays the role of a positive thermodynamic pressure as follows Caldarelli ; Kastor
$P=-\frac{\Lambda}{8\pi}.$ (13)
In this scenario, the total mass (11) behaves as the enthalpy of system, and
the Smarr formula and first law of thermodynamics are modified as
$M=2TS+\Phi Q-2VP+2\mathcal{A}\alpha;\ \ \ \ \
\mathcal{A}=\left(\frac{\partial M}{\partial\alpha}\right)_{S,Q,P},$ (14)
$dM=TdS+\Phi dQ+VdP+\mathcal{A}d\alpha,$ (15)
where $\mathcal{A}$ is a new thermodynamical variable conjugate to $\alpha$
and as mentioned before, $V$ is the thermodynamical volume conjugate to $P$ as
follows
$V=\left(\frac{\partial M}{\partial
P}\right)_{S,Q,\alpha}=\frac{1}{3}r_{+}^{3}.$ (16)
Here, we study the thermodynamics of $4$-dimensional black holes in the
canonical ensemble (fixed $Q$ and $\alpha$) of extended phase space. So, by
using the temperature (7) for $n=3$
$T=-\frac{\Lambda r_{+}}{4\pi}+\frac{1}{4\pi r_{+}}-\frac{q^{2}}{4\pi
r_{+}^{3}}+\frac{q^{4}\alpha}{2\pi r_{+}^{7}},$ (17)
and the relation between the cosmological constant and pressure (13), it is
straightforward to show that the equation of state, $P=P\left(r_{+},T\right)$,
is given by
$P=\frac{T}{2r_{+}}-\frac{1}{8\pi r_{+}^{2}}+\frac{q^{2}}{8\pi
r_{+}^{4}}-\frac{q^{4}\alpha}{4\pi r_{+}^{8}}.$ (18)
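A quick numerical check of the equation of state (18) illustrates the vdW-like behavior: for a temperature below the critical one, the isotherm $P(r_{+})$ is non-monotonic. The values $q=0.8$, $\alpha=0.1$, and $T=0.045$ below are illustrative choices.

```python
import math

q, alpha = 0.8, 0.1   # a point in the RPT region of Fig. 1 (illustrative)
T = 0.045             # a temperature below the critical one for these values

def P(rp, T):
    """Equation of state (18)."""
    return (T / (2 * rp) - 1 / (8 * math.pi * rp**2)
            + q**2 / (8 * math.pi * rp**4) - q**4 * alpha / (4 * math.pi * rp**8))

# Below T_c the isotherm oscillates: P falls, rises, then falls again,
# which is the vdW-like structure that allows a phase transition.
print(P(1.0, T), P(1.5, T), P(3.0, T), P(4.0, T))
```

The printed values decrease, pass through a local minimum near $r_{+}\approx 1.5$, rise, and decrease again at large $r_{+}$.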
Figure 1: $r_{+c}$ and $\hat{r}_{+c}$ in the $q-\alpha$ plane. The black region on the left indicates complex $r_{+c}$ and $\hat{r}_{+c}$, whereas the colorful area represents real $r_{+c}$ and $\hat{r}_{+c}$. At the border between the black and colorful areas, $r_{+c}$ and $\hat{r}_{+c}$ are both equal to $2q$.
Figure 2: $P-r_{+}$ and $G-T$ diagrams for different regions of temperatures and pressures. In the right panel, the solid lines refer to $C_{P}>0$ while the dashed red lines correspond to $C_{P}<0$; $C_{P}$ diverges at the junctions of the dashed and solid lines. Besides, the $G-T$ curves are shifted for clarity. The vertical black line at $T_{0}$, with $T_{t}<T_{0}<T_{z}$, shows a discontinuity in the Gibbs free energy and indicates a zeroth-order SBH-IBH phase transition.
Figure 3: $P-T$ diagram. The yellow region illustrates the no-black-hole area, and the right panel shows a close-up of the RPT region. The blue curve is the coexistence line of SBHs and LBHs, whereas the green curve is the coexistence line of IBHs and SBHs. On crossing the blue (green) line, the system undergoes a first-order (zeroth-order) phase transition between SBHs and LBHs (IBHs and SBHs).
The thermodynamic behavior of the system and its global stability are governed by the free energy, and thus we obtain the Gibbs free energy as well. We can determine the Gibbs free energy per unit volume $\omega_{2}$ in the extended phase space by employing the following relation
$G=M-TS=\frac{r_{+}}{16\pi}-\frac{r_{+}^{3}P}{6}+\frac{3q^{2}}{16\pi
r_{+}}-\frac{7q^{4}\alpha}{40\pi r_{+}^{5}}.$ (19)
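One can check Eq. (19) by computing $G=M-TS$ directly from Eqs. (11), (8), and (17) with $\Lambda=-8\pi P$, and comparing with the closed form; the sample values below are illustrative.

```python
import math

q, alpha, Pr, rp = 0.8, 0.1, 0.003, 1.7   # illustrative sample values

Lam = -8 * math.pi * Pr                   # Eq. (13)
m = rp * (1 - Lam * rp**2 / 3 + q**2 / rp**2 - 0.4 * q**4 * alpha / rp**6)
M = m / (8 * math.pi)                     # mass (11) for n = 3, k = 1
S = rp**2 / 4                             # entropy (8)
T = (-Lam * rp / (4 * math.pi) + 1 / (4 * math.pi * rp)
     - q**2 / (4 * math.pi * rp**3) + q**4 * alpha / (2 * math.pi * rp**7))  # Eq. (17)

G_def = M - T * S                         # definition of the Gibbs free energy
G_closed = (rp / (16 * math.pi) - rp**3 * Pr / 6
            + 3 * q**2 / (16 * math.pi * rp)
            - 7 * q**4 * alpha / (40 * math.pi * rp**5))   # closed form (19)
print(G_def - G_closed)
```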
On the other hand, the heat capacity helps us to determine the local thermal stability, and thus we calculate it in the extended phase space at constant pressure as
$C_{P}=T\left(\frac{\partial S}{\partial
T}\right)_{P}=\frac{r_{+}^{2}\left(8\pi
Pr_{+}^{8}+r_{+}^{6}-q^{2}r_{+}^{4}+2q^{4}\alpha\right)}{2\left(8\pi
Pr_{+}^{8}-r_{+}^{6}+3q^{2}r_{+}^{4}-14q^{4}\alpha\right)}.$ (20)
Here, since we are working in the canonical ensemble, $C_{P}$ is the heat capacity at constant $P$, $Q$, and $\alpha$. Negative $C_{P}$ indicates unstable solutions, while positive $C_{P}$ signals local stability (or at least metastability).
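The divergences of $C_{P}$, where the denominator of Eq. (20) vanishes, separate the stable and unstable branches. The sketch below counts them on a grid for a pressure chosen (illustratively) below the critical one for $q=0.8$, $\alpha=0.1$; three divergences appear, matching the three-branch structure discussed below.

```python
import math

q, alpha, Pr = 0.8, 0.1, 0.004   # pressure chosen (illustratively) below P_c

def denom(rp):
    """Denominator of C_P in Eq. (20); C_P diverges where it vanishes."""
    return 8 * math.pi * Pr * rp**8 - rp**6 + 3 * q**2 * rp**4 - 14 * q**4 * alpha

# Count sign changes of the denominator on a fine grid: each one is a divergence
# of C_P separating locally stable (C_P > 0) and unstable (C_P < 0) branches.
rs = [0.3 + 0.005 * i for i in range(741)]   # r+ in [0.3, 4.0]
signs = [denom(r) > 0 for r in rs]
changes = sum(1 for a, b in zip(signs, signs[1:]) if a != b)
print(changes)
```

By Descartes' rule of signs, the denominator has at most three positive roots, so the grid count is exact here.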
In order to study the phase transition of black holes, one can use the inflection-point conditions at the critical point of the isothermal $P-V$ (or, equivalently, $P-r_{+}$) diagram
$\left.\frac{\partial P(r_{+},T)}{\partial
r_{+}}\right|_{T}=\left.\frac{\partial^{2}P(r_{+},T)}{\partial
r_{+}^{2}}\right|_{T}=0,$ (21)
which can be used to obtain the critical horizon radius $r_{+c}$ and critical
temperature $T_{c}$. One can easily show that this equation leads to the
following equation for the critical horizon radius
$r_{+}^{6}-6q^{2}r_{+}^{4}+56q^{4}\alpha=0,$ (22)
with at most two real positive solutions as follows
$r_{+c}=\sqrt{2q^{2}\left(1+\frac{q^{2}}{\mathcal{X}}\right)+2\mathcal{X}},$
(23)
$\hat{r}_{+c}=\sqrt{2q^{2}\left(1+\frac{i\left(i+\sqrt{3}\right)q^{2}}{2\mathcal{X}}\right)+i\left(i-\sqrt{3}\right)\mathcal{X}},$
(24)
where
$\mathcal{X}=\left(q^{6}-\frac{7}{2}q^{4}\alpha+\frac{1}{2}\sqrt{7\alpha
q^{8}\left(7\alpha-4q^{2}\right)}\right)^{1/3}.$ (25)
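Since the intermediate quantity $\mathcal{X}$ in Eq. (25) is complex for $\alpha<4q^{2}/7$, the closed forms (23) and (24) can be checked with complex arithmetic: both radii come out real and satisfy the sextic (22). A minimal sketch, using the illustrative values $q=0.8$, $\alpha=0.1$:

```python
import cmath

q, alpha = 0.8, 0.1   # alpha < 4*q**2/7: the colorful (RPT) region of Fig. 1

# X of Eq. (25); the square root is of a negative number here, so cmath is needed
X = (q**6 - 3.5 * q**4 * alpha
     + 0.5 * cmath.sqrt(7 * alpha * q**8 * (7 * alpha - 4 * q**2))) ** (1 / 3)

# Critical radii (23) and (24); both are real despite the complex intermediate X
r_c = cmath.sqrt(2 * q**2 * (1 + q**2 / X) + 2 * X)
r_hat = cmath.sqrt(2 * q**2 * (1 + 1j * (1j + 3**0.5) * q**2 / (2 * X))
                   + 1j * (1j - 3**0.5) * X)

def sextic(r):
    """Left-hand side of Eq. (22); it must vanish at both critical radii."""
    return r**6 - 6 * q**2 * r**4 + 56 * q**4 * alpha

print(r_c.real, r_hat.real)
```

This is the casus irreducibilis of the cubic in $r_{+}^{2}$: real roots arise from complex intermediate quantities.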
From now on, the thermodynamic behavior of these black holes depends on the values of (23) and (24), which are illustrated in Fig. 1: (a) When $r_{+c}$ and $\hat{r}_{+c}$ are complex, there is neither a vdW-like phase transition nor an RPT (black region of Fig. 1). In addition, the behavior is not like that of an ideal gas, and we shall discuss this region later. (b) In the colorful area of Fig. 1, where both $r_{+c}$ and $\hat{r}_{+c}$ are real, the RPT is observed, which we investigate in this section. (c) At the border between the black and colorful areas, $r_{+c}$ equals $\hat{r}_{+c}$, which is determined by $\alpha=4q^{2}/7$. In this case, there is a critical point such that no phase transition occurs below or at this point. Besides, the SBHs and LBHs are thermodynamically distinguishable above this critical point, and we will study this border in the next section. (d) For some higher values of $q$ and $\alpha$, $\hat{r}_{+c}$ is complex while $r_{+c}$ is real. This leads to the standard vdW-like (first-order SBH-LBH) phase transition, which has been investigated extensively before (for instance, see KubiznakMann for the standard RN black holes and vdW for our black hole case study), and we do not consider it in this paper. The remaining option, namely real $\hat{r}_{+c}$ and complex $r_{+c}$, is not accessible for the system.
One may note that in the absence of the nonlinearity (the RN black hole case), $r_{+c}$ reduces to $\sqrt{6}q$ and $\hat{r}_{+c}$ vanishes, as it should. Therefore, the RN black hole can only undergo the vdW-like phase transition, and only for nonzero values of the electric charge. This fact uncovers the significant role of the nonlinearity parameter $\alpha$ in the phase transition structure of these black holes. However, there is a constraint on choosing the values of $q$ and $\alpha$. Considering the last (correction) term of Eqs. (18) and (19), we should choose values of $q$ and $\alpha$ such that this term is negligible compared with the third term and can therefore be treated as a perturbation. Hence, as the simplest option, we can consider values of $q$ and $\alpha$ such that $\alpha q^{2}\ll r_{+}^{4}/2$. However, we can relax this restriction by regarding the last term as a genuinely nonlinear term rather than just a perturbative correction.
As a typical example, and without loss of generality, we consider $q=0.8$ and $\alpha=0.1$, which is a point in the colorful area of Fig. 1, and thus we expect to see the RPT. Now, we can obtain the critical temperature and pressure as follows
$T_{c}=\frac{1}{2\pi r_{+c}}-\frac{q^{2}}{\pi
r_{+c}^{3}}+\frac{4q^{4}\alpha}{\pi r_{+c}^{7}},$ (26)
$P_{c}=\frac{T_{c}}{2r_{+c}}-\frac{1}{8\pi r_{+c}^{2}}+\frac{q^{2}}{8\pi
r_{+c}^{4}}-\frac{q^{4}\alpha}{4\pi r_{+c}^{8}}.$ (27)
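For $q=0.8$ and $\alpha=0.1$, the critical point can be obtained numerically by bisecting the sextic (22) for its larger root and inserting it into Eq. (26); finite differences then confirm the inflection-point conditions (21). A sketch:

```python
import math

q, alpha = 0.8, 0.1

def sextic(r):
    """Critical-radius equation (22)."""
    return r**6 - 6 * q**2 * r**4 + 56 * q**4 * alpha

# Bisect for the larger root of (22), i.e. r_{+c} of Eq. (23)
a, b = 1.5, 2.5
while b - a > 1e-13:
    c = 0.5 * (a + b)
    if sextic(a) * sextic(c) <= 0:
        b = c
    else:
        a = c
r_c = 0.5 * (a + b)

# Critical temperature, Eq. (26)
T_c = (1 / (2 * math.pi * r_c) - q**2 / (math.pi * r_c**3)
       + 4 * q**4 * alpha / (math.pi * r_c**7))

def P(rp, T):
    """Equation of state (18)."""
    return (T / (2 * rp) - 1 / (8 * math.pi * rp**2)
            + q**2 / (8 * math.pi * rp**4) - q**4 * alpha / (4 * math.pi * rp**8))

# Inflection-point conditions (21): both derivatives vanish at (r_c, T_c)
h = 1e-4
dP = (P(r_c + h, T_c) - P(r_c - h, T_c)) / (2 * h)
d2P = (P(r_c + h, T_c) - 2 * P(r_c, T_c) + P(r_c - h, T_c)) / h**2
print(r_c, T_c, dP, d2P)
```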
For the fixed values $q=0.8$ and $\alpha=0.1$, the general behavior of the ARN black holes is shown in Fig. 2. This figure is plotted for various ranges of temperature (or, equivalently, pressure) in the $P-r_{+}$ (or, equivalently, $G-T$) diagram. The dashed red lines show negative heat capacity (20) and refer to unstable black holes, while the solid lines stand for positive heat capacity representing stable (or metastable) black holes (for more discussion of the relation between the Gibbs free energy and the heat capacity, see MassiveYM ). Considering Fig. 2, we see that a critical point is located at $P=P_{c}$ in the $G-T$ diagram (at $T=T_{c}$ in the $P-r_{+}$ diagram, with an inflection point) and demonstrates a second-order phase transition from SBHs to large ones. In the $G-T$ diagram, the curve looks like the Hawking-Page phase transition for $P>P_{c}$ HP . For $P_{t}<P<P_{c}$ and $T_{t}<T<T_{c}$, there is a region in which black holes undergo the standard first-order SBH-LBH phase transition. Besides, there are three different phases, including intermediate black holes (IBHs), SBHs, and LBHs, for $P\in\left(P_{t},P_{z}\right)$. The vertical line at $T=T_{0}\in\left(T_{t},T_{z}\right)$ indicates a zeroth-order phase transition between SBHs and IBHs, which is characterized by a discontinuity in the Gibbs energy. In this region of pressures and temperatures, black holes undergo a first-order SBH-LBH phase transition as well. This behavior is known as the RPT. Note that IBHs are macroscopically similar to large ones, and thus black holes enjoy the large-small-large phase transition in this region of pressures and temperatures. Finally, we have only LBHs for $P<P_{t}$ and $T<T_{t}$.
Figure 3 describes the coexistence lines of SBHs+LBHs (the blue line) and IBHs+SBHs (the green line) at different scales. The blue line runs between the critical point ($T_{c}$, $P_{c}$) and the triple point ($T_{t}$, $P_{t}$) of SBHs, IBHs, and LBHs. Similarly, the green line is bounded between this triple point and the point ($T_{z}$, $P_{z}$). The black holes undergo a first-order (zeroth-order) phase transition from SBHs to LBHs (IBHs to SBHs) whenever they cross the blue (green) line from left to right or top to bottom. Therefore, we observe the RPT behavior of the ARN black holes for a narrow range of temperatures $T\in\left(T_{t},T_{z}\right)$ and pressures $P\in\left(P_{t},P_{z}\right)$.
Now, it is worthwhile to compare the ARN black holes with the RN ones to see how this small perturbation of the Maxwell field changes the thermodynamic behavior of the RN black holes significantly. Indeed, observing the RPT for this kind of black hole is very interesting, since such behavior cannot be seen for a large class of black holes even with more complicated generalizations in the matter field and/or the gravitational sector of the field equations.
Figure 4 shows the differences between the ARN black holes and the RN ones. From the left panel of this figure, we find that the nonlinearity parameter reduces the pressure of SBHs significantly, whereas the pressure of LBHs remains almost unchanged. Besides, high-pressure SBHs at low temperatures are not allowed to exist, while this is not the case for high-temperature SBHs. These facts can also be seen analytically from the equation of state (18). For SBHs, the nonlinear term grows significantly and, carrying a negative sign, reduces the pressure, eventually leading to a negative pressure for black holes that are therefore not allowed to exist. But at high temperatures, the first term dominates the pressure and SBHs do exist. On the other hand, the correction term is very small for LBHs and does not affect the pressure of these black holes.
It is worthwhile to mention that the minimum accessible size for SBHs at (and below) the critical point is about $r_{+}\sim 0.8$, and therefore the ratio of the correction and Maxwell terms ($correction/Maxwell$ ratio) is at most about $\sim 0.3$. Thus, the nonlinear term is small even in the worst case and never dominates the behavior of the system. In addition, from the right panel of Fig. 4, one finds that the nonlinear term creates a new region of IBHs and increases the critical temperature and pressure. This behavior can also be understood from Eqs. (26) and (27) by noting that the last term is negligible since $r_{+c}^{4}\gg 2\alpha q^{2}$ (see the left panel of Fig. 1). However, the nonlinearity parameter does not affect the SBH-LBH phase transition point significantly (the right panel of Fig. 4).
Figure 4: $P-r_{+}$ and $P-T$ diagrams including the RN black hole. The dotted
lines in both figures show the behavior of the RN black hole. The nonlinear
parameter highly affects the SBHs and this modification term creates a new
black hole region as IBHs and increases the critical temperature and pressure.
## IV Special Case $\alpha=4q^{2}/7$
In the previous section, we observed that the colorful region, corresponding to $\alpha<4q^{2}/7$, leads to the RPT, which we studied in detail. Now, we are interested in what happens in the other areas. Here, we investigate two special cases of thermodynamic behavior of the ARN black holes in the extended phase space: the border between the black and colorful areas of Fig. 1, specified by $\alpha=4q^{2}/7$, and the black area of this figure, determined by $\alpha>4q^{2}/7$. For $\alpha=4q^{2}/7$, Eqs. (23) and (24) give the same result,
$r_{+c}=\hat{r}_{+c}=2q,$ (28)
which is a critical point since the response function (20) diverges at this
point. In this case, the critical temperature (26) and critical pressure (27)
reduce to
$T_{c}=\frac{1}{7\pi q};\ \ \ \ \ P_{c}=\frac{3}{256\pi q^{2}}.$ (29)
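The collapse to Eq. (29) is easy to verify numerically: at $\alpha=4q^{2}/7$, the radius $r_{+c}=2q$ annihilates the critical-radius equation (22), and substituting it into Eqs. (26) and (27) reproduces $T_{c}=1/(7\pi q)$ and $P_{c}=3/(256\pi q^{2})$ exactly:

```python
import math

q = 0.8                  # any q > 0 works; 0.8 matches the value used in the text
alpha = 4 * q**2 / 7     # borderline case
r_c = 2 * q              # degenerate critical radius, Eq. (28)

# r_c = 2q must solve the critical-radius equation (22)
residual = r_c**6 - 6 * q**2 * r_c**4 + 56 * q**4 * alpha

# The critical temperature (26) and pressure (27) collapse to Eq. (29)
T_c = (1 / (2 * math.pi * r_c) - q**2 / (math.pi * r_c**3)
       + 4 * q**4 * alpha / (math.pi * r_c**7))
P_c = (T_c / (2 * r_c) - 1 / (8 * math.pi * r_c**2)
       + q**2 / (8 * math.pi * r_c**4) - q**4 * alpha / (4 * math.pi * r_c**8))
print(residual, T_c - 1 / (7 * math.pi * q), P_c - 3 / (256 * math.pi * q**2))
```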
Here, we fix $q=0.8$ to plot Fig. 5. Interestingly, this figure indicates that the small correction strongly affects the thermodynamic behavior of the ARN black holes in the case of $\alpha=4q^{2}/7$ as well. In this case, the correction term is very small, hence negligible, for all accessible event horizon radii $r_{+}$. From the left panel, we find that the nonlinearity parameter converts the stable small RN black holes into unconditionally unstable SBHs and slightly affects the LBHs. The right panel shows the effect of $\alpha$ on LBHs: the region of these black holes is extended and the pressure is increased. In the case of the RN black holes (and also in the critical phenomena of other black hole solutions), the SBHs and LBHs are thermodynamically indistinguishable above the critical point, since the coexistence curve always terminates at the critical point, whereas for our black hole case study there is always a border between the (unconditionally unstable) SBHs and the large ones (blue line in the right panel of Fig. 5). This distinguishability is a novel feature observed in this special type of black hole and is due to the fact that there is no SBH-LBH phase transition in this special case, the SBHs being always unstable. Indeed, another interesting and new behavior is that there is no SBH-LBH phase transition at (or below) the critical point.
It is worthwhile to mention that the thermodynamic behavior in the black area of Fig. 1, determined by $\alpha>4q^{2}/7$, is very similar to the $\alpha=4q^{2}/7$ case, but $r_{+c}$ and $\hat{r}_{+c}$ are complex, and thus the critical point specified by (28) and (29) is absent. Hence, in this case, the SBHs and LBHs are always distinguishable while the SBHs are unconditionally unstable.
Figure 5: $P-r_{+}$ and $P-T$ diagrams for the special case $\alpha=4q^{2}/7$
including the RN black hole. The dotted lines in both figures show the
behavior of the RN black hole. The nonlinear parameter converts the stable
small RN black holes to unconditionally unstable SBHs. The nonlinear term
extends the LBH region and increases the pressure of these black holes.
## V Conclusions
In this paper, we have considered the cosmological constant as a thermodynamic pressure, studied the thermodynamics of $4$-dimensional ARN black holes in the canonical ensemble of the extended phase space, and investigated deviations from the standard RN black holes. We found, interestingly, that by considering a small correction to the Maxwell field, the thermodynamic behavior of the RN black holes changes significantly and a novel critical phenomenon can be observed. Based on the values of the nonlinearity parameter, the phase space is classified into three regions, and thus three kinds of behavior have been found, one of which is the RPT and another a novel behavior in extended phase space thermodynamics.
Specifically, we have seen that, in addition to the standard vdW-like phase transition of the black hole case study vdW and of the RN black holes KubiznakMann , these solutions can enjoy the RPT once this small correction to the Maxwell field is taken into account. It was shown that this behavior happens for a narrow range of temperatures and pressures. In this RPT range, black holes undergo a zeroth-order IBH-SBH phase transition and a first-order SBH-LBH phase transition, and this behavior is seen for values of the nonlinearity parameter with $\alpha<4q^{2}/7$. In comparison with the RN black holes, the nonlinearity parameter strongly affects the SBHs and converts them into unstable ones. This modification term creates a new black hole region of IBHs and increases the critical temperature and pressure as well.
Moreover, it was shown that for values of the nonlinearity parameter with $\alpha\geq 4q^{2}/7$, the correction term strongly affects the thermodynamic behavior of the solutions as well. Specifically, in the case of $\alpha=4q^{2}/7$, we observed a novel critical point such that below and at this point the black holes have no phase transition, while above this critical point the SBHs and LBHs are thermodynamically distinguishable. Besides, the stable small RN black holes are converted into unconditionally unstable SBHs. The nonlinear term extends the region of LBHs and increases the pressure of these black holes.
As a final remark, since introducing a small correction to the Maxwell field has such significant effects on the thermodynamic structure of the RN black holes, it would be interesting to consider dynamical perturbations in the background geometry of the ARN black holes, investigate the effects of the nonlinearity parameter on the dynamical stability and quasinormal modes, and compare them with those of the RN solutions.
###### Acknowledgements.
We wish to thank Shiraz University Research Council.
EUROPEAN ORGANIZATION FOR NUCLEAR RESEARCH (CERN)
LHCb-DP-2021-001 May 17, 2021
A parametrized Kalman filter for fast track fitting at LHCb
P. Billoir1, M. De Cian2, P. A. Günther3, S. Stemmle3,†
1LPNHE, Sorbonne Université, Paris Diderot Sorbonne Paris Cité, CNRS/IN2P3,
Paris, France 2Institute of Physics, Ecole Polytechnique Fédérale de Lausanne
(EPFL), Lausanne, Switzerland
3Physikalisches Institut, Ruprecht-Karls-Universität Heidelberg, Heidelberg,
Germany
†Author was at institute at time work was performed.
We present an alternative implementation of the Kalman filter employed for
track fitting within the LHCb experiment. It uses simple parametrizations for
the extrapolation of particle trajectories in the field of the LHCb dipole
magnet and for the effects of multiple scattering in the detector material. A
speedup of more than a factor of four is achieved while maintaining the
quality of the estimated track quantities. This Kalman filter implementation
could be used in the purely software-based trigger of the LHCb upgrade.
Published in Computer Physics Communications 265, 108026 (2021)
© 2024 CERN for the benefit of the LHCb collaboration. CC-BY-4.0 licence.
## 1 Introduction
The LHCb experiment is a dedicated heavy flavour physics experiment at the LHC
focusing on the study of hadrons containing $b$ and $c$ quarks [1]. Due to the
high luminosity at the LHC and the high proton-proton interaction cross
section, a sophisticated trigger system is needed to reduce the rate of
collisions saved for offline analysis. During Runs 1 and 2 of the LHC, this
trigger system consisted of a hardware stage, reducing the rate from
$40\text{\,MHz}$ to $1\text{\,MHz}$, followed by a two-stage software trigger.
In the latter, the full tracking system was read out and a partial (first
stage) and full (second stage) event reconstruction were performed [2]. Both
software stages included a fit of selected track candidates using a Kalman
filter to extract their parameters and to reject fake tracks. In addition, the
software trigger allowed an online calibration and alignment of the detector
[3].
During Run 3 of the LHC, LHCb will be provided with a factor five higher
luminosity compared to Run 2. In this scope, most of the subdetectors are
currently being replaced or upgraded [4, 5, 6, 7] and a new trigger strategy
has been developed [8]. The hardware trigger will be removed and a two-stage,
fully software-based trigger will process the full $30\text{\,MHz}$
bunch-crossing rate (the nominal bunch-crossing frequency of the LHC is
40 MHz, but empty and non-colliding bunches reduce this to a collision
frequency of 30 MHz at LHCb). In the first stage, tracks with a high transverse
momentum ($p_{\mathrm{T}}$) and primary vertices will be reconstructed. These
objects are used to select events with displaced topologies typical for
$b$-hadron and $c$-hadron decays, and to select high-$p_{\mathrm{T}}$ objects
from decays of heavy vector bosons. In the second stage, a full event
reconstruction will be performed, without any requirement on the
$p_{\mathrm{T}}$ and including particle identification. A large number of
exclusive and several universal event selections based on the decay topology
will be applied.
In LHCb, track reconstruction is split into a pattern recognition and a Kalman
filtering [9, 10] stage. During pattern recognition, sets of signals that
potentially result from the passage of a single charged particle are
constructed in each subdetector. Simple parametrizations are used throughout
this procedure, since it is only concerned with finding the right sets of
signals rather than providing the best estimate of the track parameters.
During the filtering
stage, an estimate for the track parameters is calculated, and fake tracks are
rejected. Given that the output of the filtering stage is used for physics
selections, the best possible precision needs to be achieved; hence an
(extended) Kalman filter is used for track fitting. Ideally, Kalman filtering
of the track candidates is already performed during the first trigger stage.
However, the Kalman filter used during Runs 1 and 2 in LHCb, in the
following called the default Kalman filter, is significantly too slow. It
relies on lookup tables, so-called maps, for the magnetic field and the
material distribution of the detector [11]. In addition, it uses Runge-Kutta
methods to solve
the differential equations necessary to propagate the particle through the
regions with an inhomogeneous magnetic field. Accessing the values in the
lookup table and solving the differential equations are time consuming and
prohibit the usage of the current Kalman filter in the first stage of the
upgraded trigger system. This conclusion is independent of the choice of
computing architecture (CPU or GPU) which is used for the first trigger stage.
In this paper, a fully parametrized version of the Kalman filter in LHCb,
called parametrized Kalman, is presented. It obtains precise values of track
parameters and track quality variables, while relying on neither
computationally costly extrapolation methods nor material or magnetic field
maps.
## 2 Detector and simulation
The LHCb detector [1] is a single-arm forward spectrometer covering the
pseudorapidity range $2<\eta<5$. Its Run 3 configuration includes a high-
precision tracking system consisting of a silicon-pixel vertex detector
surrounding the $pp$ interaction region [5] (VELO), a large-area silicon-strip
detector (Upstream Tracker (UT)) [7] located upstream of a dipole magnet with
a bending power of about $4{\mathrm{\,Tm}}$ [12], and three stations of
scintillating-fibre detectors (SciFi) [7] placed downstream of the magnet.
Different types of charged hadrons are distinguished using information from
two ring-imaging Cherenkov detectors [13, 6]. Photons, electrons and hadrons
are identified by a calorimeter system consisting of an electromagnetic and a
hadronic calorimeter [14, 6]. Muons are identified by a system composed of
alternating layers of iron and multiwire proportional chambers [15, 6].
Given the lack of collision data at this point for Run 3, simulation is
required to model the effects of the detector response, the detector
acceptance and the imposed selection requirements. In the simulation, $pp$
collisions are generated using Pythia [16, 17] with a specific
LHCb configuration [18]. Decays of unstable particles are described by EvtGen
[19], in which final-state radiation is generated using Photos [20]. The
interaction of the generated particles with the detector, and its response,
are implemented using the Geant4 toolkit [21, 22] as
described in Ref. [23].
## 3 Principles
In the following, the Kalman filter formalism and its application in the LHCb
track reconstruction is outlined. During Kalman filtering, the information
from measurements at detector planes is successively combined to obtain
optimal estimates of the track parameters. The track is represented as a set
of states at fixed $z$-positions, which are typically detector layers (the
detector coordinate system is chosen such that the $z$-axis is parallel to
the beam line and charged particles are deflected in the direction of the
$x$-axis). Each of these states is given by
$\boldsymbol{x}=(x,y,t_{x},t_{y},\frac{q}{p})$ and the corresponding
covariance matrix $\boldsymbol{P}$, where $t_{x}$ and $t_{y}$ are the slopes
with respect to the $z$ axis, $q$ the charge of the particle in units of the
electron charge and $p$ its absolute momentum.
The Kalman filter procedure needs an estimate of a state as a starting point.
Filtering is then a repeated application of two steps. Firstly, the current
state is extrapolated to the next detector layer, and secondly, the
extrapolated state is updated using the measurement in this layer. If the
track has no associated measurement in this layer, the update step is omitted.
These steps can be formalized as follows: given the state
($\boldsymbol{x}_{k-1|k-1}$, $\boldsymbol{P}_{k-1|k-1}$) at position
$z_{k-1}$, the extrapolated state ($\boldsymbol{x}_{k|k-1}$,
$\boldsymbol{P}_{k|k-1}$) at position $z_{k}$ is given by
$\displaystyle\boldsymbol{x}_{k|k-1}$
$\displaystyle=\boldsymbol{f}_{k}(\boldsymbol{x}_{k-1|k-1}),$ (1)
$\displaystyle\boldsymbol{P}_{k|k-1}$
$\displaystyle=\boldsymbol{F}_{k}\boldsymbol{P}_{k-1|k-1}\boldsymbol{F}_{k}^{T}+\boldsymbol{Q}_{k},$
(2)
where the extrapolation function $\boldsymbol{f}_{k}(\boldsymbol{x})$ is given
by five individual mappings
$\boldsymbol{f}_{k}=(f_{k}^{x},f_{k}^{y},f_{k}^{t_{x}},f_{k}^{t_{y}},f_{k}^{\frac{q}{p}})$.
This leads to the transport matrix $\boldsymbol{F}_{k}$ as
$\displaystyle F_{k}^{ij}=\frac{\partial f_{k}^{i}}{\partial x_{j}}.$ (3)
The noise matrix $\boldsymbol{Q}_{k}$ accounts for uncertainties of the
extrapolation, e.g. due to scattering at the material of the detector layers
or the material in between.
The extrapolated state is then combined with the measurement
$\boldsymbol{m}_{k}$ in the respective detector layer to obtain the new state
estimate at the position $\boldsymbol{z}_{k}$, $\boldsymbol{x}_{k|k}$ and
$\boldsymbol{P}_{k|k}$, using the following steps:
$\displaystyle\boldsymbol{r}_{k}$
$\displaystyle=\boldsymbol{m}_{k}-\boldsymbol{H}_{k}\boldsymbol{x}_{k|k-1},$
(4) $\displaystyle\boldsymbol{S}_{k}$
$\displaystyle=\boldsymbol{H}_{k}\boldsymbol{P}_{k|k-1}\boldsymbol{H}_{k}^{T}+\boldsymbol{R}_{k},$
(5) $\displaystyle\boldsymbol{K}_{k}$
$\displaystyle=\boldsymbol{P}_{k|k-1}\boldsymbol{H}_{k}^{T}\boldsymbol{S}_{k}^{-1},$
(6) $\displaystyle\boldsymbol{x}_{k|k}$
$\displaystyle=\boldsymbol{x}_{k|k-1}+\boldsymbol{K}_{k}\boldsymbol{r}_{k},$
(7) $\displaystyle\boldsymbol{P}_{k|k}$
$\displaystyle=(\boldsymbol{1}-\boldsymbol{K}_{k}\boldsymbol{H}_{k})\boldsymbol{P}_{k|k-1}.$
(8)
Here $\boldsymbol{H}_{k}$ projects the estimated state vector to the
measurement space in order to allow a calculation of the residual
$\boldsymbol{r}_{k}$. The covariance matrix of this residual is given by
$\boldsymbol{S}_{k}$ and is combined with the covariance matrix of the state
to obtain the Kalman gain $\boldsymbol{K}_{k}$. The latter defines then how
the estimated state is modified by the residual. The variance of the residual
is given by $\boldsymbol{R}_{k}$.
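As a concrete illustration of Eqs. (1)-(8), the following sketch reduces the filter to one dimension, where the state and the measurement are both a single position, $f$ is the identity, and $H=1$; the noise values $Q$ and $R$ are invented for illustration:

```python
def kf_step(x, P, m, Q=0.01, R=0.25):
    """One predict+update cycle of Eqs. (1)-(8) in one dimension.

    Here f is the identity, so F = df/dx = 1, and the measurement
    directly observes the state, so H = 1.
    """
    # Prediction, Eqs. (1)-(2).
    x_pred = x
    P_pred = P + Q
    # Update, Eqs. (4)-(8).
    r = m - x_pred            # residual
    S = P_pred + R            # residual covariance
    K = P_pred / S            # Kalman gain
    x_new = x_pred + K * r
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

# Filter noisy measurements of a constant quantity, starting from a
# deliberately vague initial estimate.
x, P = 0.0, 100.0
for m in [1.1, 0.9, 1.05, 0.97, 1.02]:
    x, P = kf_step(x, P, m)
```

After a few measurements the estimate settles near their common value and the variance shrinks well below the single-measurement noise $R$.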
Starting at the most upstream measurement, the measurements are successively
added and the track parameters updated until the last detector layer is
reached. The same procedure is repeated starting at the most downstream
measurement and successively including more upstream measurements. This yields
two sets of states at every measurement position, which can be combined to
obtain the respective optimal state.
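The combination step is not spelled out here; a standard choice, shown below in a one-dimensional sketch, is the inverse-variance weighted mean of the forward-filtered and backward-filtered estimates:

```python
def combine(x_f, P_f, x_b, P_b):
    """Inverse-variance weighted mean of the forward-filtered and
    backward-filtered estimates at one measurement position (scalar)."""
    P = 1.0 / (1.0 / P_f + 1.0 / P_b)
    x = P * (x_f / P_f + x_b / P_b)
    return x, P

# Two estimates of equal quality average evenly and halve the variance.
x_opt, P_opt = combine(1.0, 0.5, 2.0, 0.5)
```

In the full fit the same combination is applied to five-dimensional states and their covariance matrices.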
The quality of a track can be estimated by its $\chi^{2}_{\text{track}}$
value. The value at each measurement is given by:
$\displaystyle\chi^{2}_{k}$
$\displaystyle=\chi^{2}_{k-1}+\boldsymbol{r}_{k}^{T}\boldsymbol{S}_{k}^{-1}\boldsymbol{r}_{k},$
(9)
and $\chi^{2}_{\text{track}}$ is then simply $\chi^{2}_{k}$ after all
measurements have been added using the combined, optimal states.
The optimal state estimates and the measurement information can also be used
to remove measurements that show a large separation from the fitted trajectory
by having a large contribution to the $\chi^{2}_{\text{track}}$ value. They
are therefore likely to be wrongly associated to the respective track, and are
so-called outliers. Once an outlier is removed, all Kalman filter steps are
performed again. This procedure can be repeated until the maximum allowed
number of outliers has been removed, or no more outliers are present.
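The outlier-removal loop can be sketched as follows, again in one dimension; the $\chi^{2}$ threshold, the noise values, and the maximum number of outliers are made up for illustration:

```python
def fit(measurements, Q=0.01, R=0.25):
    """One-dimensional filter pass; returns the final estimate and each
    measurement's chi2 contribution, in the spirit of Eq. (9)."""
    x, P, contribs = 0.0, 100.0, []
    for m in measurements:
        P_pred = P + Q
        r = m - x
        S = P_pred + R
        K = P_pred / S
        x = x + K * r
        P = (1.0 - K) * P_pred
        contribs.append(r * r / S)
    return x, contribs

def fit_with_outlier_removal(measurements, max_outliers=2, threshold=9.0):
    """Drop the worst measurement and refit, as long as its chi2
    contribution exceeds the threshold."""
    ms = list(measurements)
    for _ in range(max_outliers):
        _, contribs = fit(ms)
        worst = max(range(len(ms)), key=lambda i: contribs[i])
        if contribs[worst] < threshold:
            break
        del ms[worst]
    return fit(ms)[0]

# The gross outlier 5.0 is rejected and barely biases the result.
result = fit_with_outlier_removal([1.0, 1.02, 0.98, 5.0, 1.01])
```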
The above formalism is also the basis of the Kalman filter that is currently
used for track fitting in the LHCb experiment. The extrapolation functions
$\boldsymbol{f}_{k}$ are based on maps of the magnetic field along the
trajectory and numerical models for the extrapolations. Their complexities
range up to a fifth-order Runge-Kutta method. The noise matrices
$\boldsymbol{Q_{k}}$ are obtained by a dedicated model for the multiple
scattering and a map of the material traversed by the particle.
In the parametrized Kalman filter presented in this paper, these two costly
steps are replaced by simple parametrizations. The extrapolation functions
$\boldsymbol{f}_{k}$ are given by analytic expressions that allow a fast
evaluation and calculation of the derivatives in Equation 3. The noise
matrices $\boldsymbol{Q}_{k}$ depend on the momentum of the particle and are
parametrized by a few parameters per extrapolation step.
An important difference with respect to the default Kalman filter is the
treatment of energy loss due to the interaction with the detector material.
While the multiple scattering is taken directly into account, the energy loss
is not part of the extrapolation functions $\boldsymbol{f}_{k}$, i.e.
$f_{k}^{\frac{q}{p}}$ is the identity transformation. This shortcoming is
compensated by choosing the momentum of the state vectors to represent the
momentum at the moment of production of the particle. Thereby, the
extrapolation functions also take this initial momentum as input and thus
indirectly take into account all energy loss that happened on average up to
the respective detector layer. The only caveat is that $\frac{q}{p}$ after
the filtering is the best representation of the true value only at the
production point of the particle.
## 4 Parametrizations
Depending on the strength of the magnetic field and the typical distance
between detector layers, different empirical analytical functions for the
extrapolation are used.
Inside the VELO, where the magnetic field is very weak, these functions and
the noise matrix are given by:
$\displaystyle\boldsymbol{f}(\boldsymbol{x})=\begin{pmatrix}f^{x}(\boldsymbol{x})\\\
f^{y}(\boldsymbol{x})\\\ f^{t_{x}}(\boldsymbol{x})\\\
f^{t_{y}}(\boldsymbol{x})\\\
f^{\frac{q}{p}}(\boldsymbol{x})\end{pmatrix}=\begin{pmatrix}x+0.5[t_{x}+f^{t_{x}}(\boldsymbol{x})]\Delta
z\ \\\ y+t_{y}\Delta z\\\
t_{x}+p^{\text{V}}_{0}\frac{q}{p}(z_{0}+p^{\text{V}}_{1})\Delta z\\\ t_{y}\\\
\frac{q}{p}\end{pmatrix}$ (10)
and
$\displaystyle\boldsymbol{Q}=\begin{pmatrix}\left(\tilde{p}^{\text{V}}_{1}\Delta
z\right)^{2}Q^{t_{x}t_{x}}&0&\tilde{p}^{\text{V}}_{2}\sqrt{Q^{xx}Q^{t_{x}t_{x}}}&0&0\\\
0&\left(\tilde{p}^{\text{V}}_{1}\Delta
z\right)^{2}Q^{t_{y}t_{y}}&0&\tilde{p}^{\text{V}}_{3}\sqrt{Q^{yy}Q^{t_{y}t_{y}}}&0\\\
\tilde{p}^{\text{V}}_{2}\sqrt{Q^{xx}Q^{t_{x}t_{x}}}&0&\left(\tilde{p}^{\text{V}}_{0}\left|\frac{q}{p}\right|\right)^{2}&0&0\\\
0&\tilde{p}^{\text{V}}_{3}\sqrt{Q^{yy}Q^{t_{y}t_{y}}}&0&\left(\tilde{p}^{\text{V}}_{0}\left|\frac{q}{p}\right|\right)^{2}&0\\\
0&0&0&0&0\\\ \end{pmatrix},$ (11)
where $\Delta z$ is the extrapolation distance along the $z$-direction and
$z_{0}$ the initial or final $z$ coordinate for a downstream or upstream
extrapolation, respectively. The parameters $p^{\text{V}}_{0}$,
$p^{\text{V}}_{1}$ and $\tilde{p}^{\text{V}}_{0}$ to
$\tilde{p}^{\text{V}}_{3}$ are the same for all upstream and downstream
extrapolations inside the VELO. They are determined using simulated
${{B}^{0}_{s}}\\!\rightarrow\phi\phi$ decays within the LHCb software
framework, where $\phi\\!\rightarrow{{K}^{+}}{{K}^{-}}$. This simulated sample
is used to create a dataset $D$ containing pairs of states that represent two
consecutive measurements of one track inside the VELO. In addition to the true
state parameters obtained from the simulation, an extrapolation of each
state to the $z$ position of the respective other state is included in the
dataset. This extrapolation is based on the default extrapolation algorithm in
LHCb [11]. The parameters are then tuned by minimizing
the following likelihood-inspired function:
$\displaystyle\prod_{D}\left[\mathcal{G}\left(f^{s}(\boldsymbol{x_{1}})-\boldsymbol{x_{2}}^{s},\sqrt{Q^{ss}}\right)+c\right].$
(12)
Here, $\mathcal{G}(x,\sigma_{x})$ is a normalized Gaussian distribution
centered around $0$ with width $\sigma_{x}$. The two states of each dataset
entry are represented by $\boldsymbol{x}_{1}$ and $\boldsymbol{x}_{2}$, and
the variable $s$ is one of the state variables, $s\in\\{x,t_{x},y,t_{y}\\}$.
The positive empirical constant $c$ is chosen to be small with respect to the
amplitude of the Gaussian function and softens the impact of outliers.
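A minimal sketch of the objective in Eq. (12) for one state variable; the residuals, the width, and the value of $c$ are invented. It illustrates why $c$ softens outliers: an arbitrarily bad entry contributes at most $-\log c$ to the negative log of the product:

```python
import math

def gauss(x, sigma):
    """Normalized Gaussian centered at zero."""
    return math.exp(-0.5 * (x / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def objective(residuals, sigma, c=1e-3):
    """Negative log of the product in Eq. (12) for one state variable."""
    return -sum(math.log(gauss(r, sigma) + c) for r in residuals)

# A well-behaved set of residuals, then the same set plus a gross outlier.
sane = objective([0.1, -0.2, 0.05], sigma=0.3)
with_outlier = objective([0.1, -0.2, 0.05, 50.0], sigma=0.3)
```

Without the constant $c$, the outlier term would diverge and dominate the minimization; with it, the extra cost saturates.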
In a first step, the extrapolation functions $f^{x}$ to $f^{t_{x}}$ are tuned
individually, taking into account that $f^{x}$ depends on the previously
determined parameters for $f^{t_{x}}$. These tuning minimizations employ the
state vector $\boldsymbol{x}_{2}$ that is obtained by the extrapolation of the
state vector $\boldsymbol{x}_{1}$. This choice improves the precision of the
parametrized extrapolation, by removing the effect of multiple scattering that
would be present if instead the true state was chosen for
$\boldsymbol{x}_{2}$.
In a second step, the parameters of the extrapolation functions are fixed, and
a minimization of the following function is performed:
$\displaystyle\prod_{D}\left[\mathcal{G}_{2}\left(f^{d}(\boldsymbol{x_{1}})-\boldsymbol{x_{2}}^{d},f^{t_{d}}(\boldsymbol{x_{1}})-\boldsymbol{x_{2}}^{t_{d}},\sqrt{Q^{dd}},\sqrt{Q^{t_{d}t_{d}}},Q^{dt_{d}}/\sqrt{Q^{dd}Q^{t_{d}t_{d}}}\right)+c\right].$
(13)
Here, $\mathcal{G}_{2}(x,y,\sigma_{x},\sigma_{y},\rho)$ is a normalized two-
dimensional Gaussian distribution centered around $0$ with widths $\sigma_{x}$
and $\sigma_{y}$ and a correlation factor $\rho$. The variable $d$ is either
$x$ or $y$. In this minimization, the true state vector $\boldsymbol{x}_{2}$
is used in order to get the correct estimate of the parameters for the
respective elements of the noise matrix $\boldsymbol{Q}$.
Inside the UT and the SciFi detector stations, the magnetic field is
significantly stronger than inside the VELO and higher order terms are needed
for the extrapolation functions:
$\displaystyle\boldsymbol{f}(\boldsymbol{x})=\begin{pmatrix}x+\left[p^{\text{T}}_{3}t_{x}+(1-p^{\text{T}}_{3})f^{t_{x}}(\boldsymbol{x})\right]\Delta
z\\\
y+\left[p^{\text{T}}_{5}t_{y}+(1-p^{\text{T}}_{5})f^{t_{y}}(\boldsymbol{x})\right]\Delta
z\\\
t_{x}+\left[p^{\text{T}}_{0}\frac{q}{p}+p^{\text{T}}_{1}(\frac{q}{p})^{3}+p^{\text{T}}_{2}y^{2}\frac{q}{p}\right]\Delta
z\\\ t_{y}+p^{\text{T}}_{4}\frac{q}{p}t_{x}\frac{y}{|y|}\\\
\frac{q}{p}\end{pmatrix}.$ (14)
The noise matrix is given in full analogy to Equation 11 with the parameters
$\tilde{p}^{\text{T}}_{0}$ to $\tilde{p}^{\text{T}}_{3}$, where T either
stands for the UT or the SciFi detector. These parameters and the parameters
$p^{\text{T}}_{0}$ to $p^{\text{T}}_{4}$ are individually determined on
simulation for every step from one detector layer to the next and for the
upstream and downstream extrapolation separately. The same strategy as for the
tuning of the parameters related to the extrapolation inside the VELO is
followed.
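The extrapolation of Eq. (14) is straightforward to code; the parameter values below are invented (in the real fit they are tuned per layer pair on simulation), and the check uses $q/p=0$, where the mapping must reduce to a straight line:

```python
def extrapolate_station(state, p, dz):
    """Eq. (14): extrapolation between layers of the UT or SciFi.

    state = (x, y, tx, ty, q/p); p = (p0, ..., p5) are per-layer-pair
    parameters, here filled with invented numbers.
    """
    x, y, tx, ty, qop = state
    sign_y = 1.0 if y >= 0 else -1.0
    f_tx = tx + (p[0] * qop + p[1] * qop ** 3 + p[2] * y ** 2 * qop) * dz
    f_ty = ty + p[4] * qop * tx * sign_y
    f_x = x + (p[3] * tx + (1.0 - p[3]) * f_tx) * dz
    f_y = y + (p[5] * ty + (1.0 - p[5]) * f_ty) * dz
    return (f_x, f_y, f_tx, f_ty, qop)

params = (0.1, 0.01, 0.001, 0.6, 0.02, 0.55)   # invented
# With q/p = 0 the field terms vanish and the step is a straight line.
straight = extrapolate_station((1.0, 2.0, 0.1, 0.05, 0.0), params, 100.0)
```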
For the long extrapolations between the different tracking subdetectors, more
sophisticated parametrizations are necessary. In the case of the step between
the VELO and the UT, where the magnetic field is still weak, the extrapolation
is based on two equations. The first describes the change in momentum along
the $x$-direction of the particle:
$\displaystyle\Delta
p_{x}=p\left(\frac{t_{x,\text{UT}}}{\sqrt{1+t^{2}_{x,\text{UT}}+t^{2}_{y,\text{UT}}}}-\frac{t_{x,\text{V}}}{\sqrt{1+t^{2}_{x,\text{V}}+t^{2}_{y,\text{V}}}}\right)=q\int\left(\text{d}\boldsymbol{l}\times\boldsymbol{B}\right)_{x},$
(15)
where $t_{x/y,\text{UT}}$ and $t_{x/y,\text{V}}$ are the state variables at
the first UT detector layer and the last measurement inside the VELO,
respectively. The right-hand side of the equation consists of an integral of
the magnetic field along the trajectory of the particle. Note that the
integral expression is simply a parameter that is fitted on the dataset.
The second ingredient for the extrapolation is to model the effect of the
magnetic field as a single kink of the trajectory at a certain $z$-position
$z_{\text{mag}}$ between the VELO and the UT:
$\displaystyle
x_{\text{UT}}=x_{\text{V}}+(z_{\text{mag}}-z_{\text{V}})t_{x,\text{V}}+(z_{\text{UT}}-z_{\text{mag}})t_{x,\text{UT}},$
(16)
where $z_{\text{V}}$ and $z_{\text{UT}}$ are the positions of the states
inside the VELO and the UT, respectively.
Equation 15 can be solved for $t_{x,\text{UT}}$ and Equation 16 is then
employed to get an expression for $x_{\text{UT}}$. The unknowns in these
expressions are parametrized as a function of the state variables inside the
VELO:
$\displaystyle t_{y,\text{UT}}$
$\displaystyle=t_{y,\text{V}}+p^{\text{S}}_{0}\frac{q}{p}t_{x,\text{V}}\frac{y_{\text{V}}}{|y_{\text{V}}|}$
(17)
$\displaystyle\int\left(\text{d}\boldsymbol{l}\times\boldsymbol{B}\right)_{x}$
$\displaystyle=p^{\text{S}}_{1}+p^{\text{S}}_{2}z_{\text{V}}+p^{\text{S}}_{3}t^{2}_{y,\text{V}}$
(18) $\displaystyle z_{\text{mag}}$
$\displaystyle=p^{\text{S}}_{4}+p^{\text{S}}_{5}z_{\text{V}}+p^{\text{S}}_{6}z^{2}_{\text{V}}+p^{\text{S}}_{7}t^{2}_{y,\text{V}}.$
(19)
In addition, the $y$-position of the extrapolated state is given by:
$\displaystyle
y_{\text{UT}}=y_{\text{V}}+\left[p^{\text{S}}_{8}t_{y,\text{V}}+(1-p^{\text{S}}_{8})t_{y,\text{UT}}\right]\Delta
z,$ (20)
where $\Delta z$ is defined as the difference between $z_{\text{UT}}$ and
$z_{\text{V}}$. The noise matrix is defined in analogy to Equation 11 with the
parameters $\tilde{p}^{\text{S}}_{0}$ to $\tilde{p}^{\text{S}}_{3}$. These
parameters and the parameters $p^{\text{S}}_{0}$ to $p^{\text{S}}_{8}$ are
individually determined for the upstream and downstream extrapolation. The
same strategy as for the tuning of the parameters related to the extrapolation
inside the VELO is followed.
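A sketch of the VELO-to-UT step of Eqs. (15)-(16): Eq. (15) is inverted in closed form for $t_{x,\text{UT}}$ at fixed $t_{y,\text{UT}}$, and the kink model then gives $x_{\text{UT}}$. All numerical inputs below (positions, slopes, field integral, $z_{\text{mag}}$) are invented; in the fit they come from the parametrizations of Eqs. (17)-(19):

```python
import math

def velo_to_ut(x_v, z_v, tx_v, ty_v, ty_ut, q_over_p, field_integral,
               z_ut, z_mag):
    """Eqs. (15)-(16): single-kink model for the VELO-to-UT step.

    field_integral, z_mag and ty_ut stand in for the parametrizations
    of Eqs. (17)-(19).
    """
    # Direction cosine along x before the magnet.
    u_v = tx_v / math.sqrt(1.0 + tx_v ** 2 + ty_v ** 2)
    # Eq. (15): the kick changes the x direction cosine by (q/p) * integral.
    u_ut = u_v + q_over_p * field_integral
    # Invert u = tx / sqrt(1 + tx^2 + ty^2) for tx at fixed ty_ut.
    tx_ut = u_ut * math.sqrt((1.0 + ty_ut ** 2) / (1.0 - u_ut ** 2))
    # Eq. (16): straight lines before and after the kink at z_mag.
    x_ut = x_v + (z_mag - z_v) * tx_v + (z_ut - z_mag) * tx_ut
    return x_ut, tx_ut

# Invented numbers: a track at z = 700 with slope 0.05, kicked by the field.
x_ut, tx_ut = velo_to_ut(10.0, 700.0, 0.05, 0.02, 0.02, 2e-4, 100.0,
                         2400.0, 1500.0)
```

Plugging the solution back into Eq. (15) closes the relation exactly, and a vanishing field integral reduces the step to a straight line.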
The extrapolation from the UT to the SciFi detector is more delicate because
it is done over a distance of more than 5 meters through a strong magnetic
field. Moreover, this field is far from uniform; in particular, it varies
rapidly in the upper and lower regions, close to the magnet yoke. To ensure a
good quality of the global track fit, the error on the extrapolation should be
well below the other sources of error, mainly multiple scattering. The chosen
solution is an expansion of the magnetic deviation in powers of $q/p$. The
parametrization aims at giving good precision for charged particles used in
physics analyses, that is, for trajectories that roughly come from the origin.
To this end, the ideal direction $(t_{x}^{0},t_{y}^{0})$ is defined as that of
a particle of charge $q$ and momentum $p$ starting from the origin and hitting
the UT detector layer at a given point $(x,y)$. As a good approximation,
we can take $t_{x}^{0}=x/z+{\cal B}q/p$, $t_{y}^{0}=y/z$, where ${\cal B}$ is
proportional to the integrated field between the origin and the UT. The
deviations from the ideal direction, $\delta t_{x}=t_{x}-t_{x}^{0}$, $\delta
t_{y}=t_{y}-t_{y}^{0}$, are small, so only a first order expansion in $\delta
t_{x},\delta t_{y}$ is considered. Corrections of higher order would be
negligible compared to multiple scattering errors.
Finally, a polynomial expansion in $q/p$ for the ideal direction is built, and
a correction in $\delta t_{x},\delta t_{y}$ with coefficients which are
themselves polynomials of $q/p$ is added:
$\displaystyle f^{x}(\boldsymbol{x})=x+t_{x}\Delta
z+\sum_{k=1}^{K_{1}}A^{x}_{k}(x,y)\left(\frac{q}{p}\right)^{k}+\sum_{k=1}^{K_{2}}\left(B^{x}_{k}(x,y)\,\delta
t_{x}+C^{x}_{k}(x,y)\,\delta t_{y}\right)\left(\frac{q}{p}\right)^{k},$ (21)
where the first two terms are the straight line extrapolation, and the next
ones the curvature correction. Similar expressions are used for the other
state parameters $f^{y}(\boldsymbol{x})$, $f^{t_{x}}(\boldsymbol{x})$,
$f^{t_{y}}(\boldsymbol{x})$. The degrees of expansion $K_{1}$ and $K_{2}$ are
tuned for each parameter to obtain the required precision. In practice
$K_{1}=9$, $K_{2}=7$ for $f^{x}$ and $f^{t_{x}}$ and $K_{1}=7$, $K_{2}=5$ for
$f^{y}$ and $f^{t_{y}}$ are used.
The dependence on $x,y$ of the coefficients $A^{u}_{k}$, $B^{u}_{k}$,
$C^{u}_{k}$, with $u=x,y,t_{x},t_{y}$, is described through a tabulation on a
grid of 50$\times$50 points regularly spaced on the rectangle defined by
$|x/z|\leq 0.25$, $|y/z|\leq 0.25$, by steps $\Delta X$, $\Delta Y$. In order
to avoid a systematic convexity bias of a bilinear interpolation, the values
at $x,y$ are computed by a quadratic interpolation between the tabulated
values at the six closest points on the grid: if $(X,Y)$ is the closest one,
these values are: $F_{00}=F(X,Y)$, $F_{+0}=F(X+\Delta X,Y)$, $F_{-0}=F(X-\Delta
X,Y)$, $F_{0+}=F(X,Y+\Delta Y)$, $F_{0-}=F(X,Y-\Delta Y)$, and
$F_{\varepsilon_{x}\varepsilon_{y}}=F(X+\varepsilon_{x}\Delta
X,Y+\varepsilon_{y}\Delta Y)$, where $\varepsilon_{x}$ and $\varepsilon_{y}$
are the signs of $\xi=(x-X)/\Delta X$ and $\psi=(y-Y)/\Delta Y$, respectively.
With these notations, the interpolation formula for a quantity $F$ is given
by:
$\displaystyle F(x,y)=F_{00}+F_{d}\,\xi\psi+\big{(}$
$\displaystyle(F_{+0}-F_{-0})\,\xi+(F_{0+}-F_{0-})\,\psi$
$\displaystyle+(F_{+0}+F_{-0}-2F_{00})\,\xi^{2}+(F_{0+}+F_{0-}-2F_{00})\,\psi^{2}\big{)}/2$
(22) $\displaystyle\text{with}\;\;\;F_{d}=$
$\displaystyle\varepsilon_{x}\varepsilon_{y}(F_{00}+F_{\varepsilon_{x}\varepsilon_{y}}-F_{\varepsilon_{x}0}-F_{0\varepsilon_{y}}).$
(23)
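The six-point interpolation of Eqs. (22)-(23) can be written directly; by construction it is exact for any quadratic of the form $a+bx+cy+dx^{2}+ey^{2}+fxy$, which the check below exploits (the grid and test function are arbitrary):

```python
def interpolate(F, x, y, X, Y, dX, dY):
    """Quadratic interpolation of Eqs. (22)-(23) around the grid node
    (X, Y) closest to (x, y); F plays the role of the tabulated values."""
    xi = (x - X) / dX
    psi = (y - Y) / dY
    ex = 1 if xi >= 0 else -1
    ey = 1 if psi >= 0 else -1
    F00 = F(X, Y)
    Fp0, Fm0 = F(X + dX, Y), F(X - dX, Y)
    F0p, F0m = F(X, Y + dY), F(X, Y - dY)
    Fee = F(X + ex * dX, Y + ey * dY)
    Fex0 = Fp0 if ex > 0 else Fm0
    F0ey = F0p if ey > 0 else F0m
    Fd = ex * ey * (F00 + Fee - Fex0 - F0ey)      # Eq. (23)
    return (F00 + Fd * xi * psi                   # Eq. (22)
            + ((Fp0 - Fm0) * xi + (F0p - F0m) * psi
               + (Fp0 + Fm0 - 2.0 * F00) * xi ** 2
               + (F0p + F0m - 2.0 * F00) * psi ** 2) / 2.0)

# Exact on quadratics, including the cross term that a bilinear scheme biases.
quad = lambda u, v: 2.0 + 3.0 * u - v + 0.5 * u * u + 0.25 * v * v + 1.5 * u * v
val = interpolate(quad, 0.3, -0.4, 0.0, 0.0, 1.0, 1.0)
```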
The tabulated values are obtained using the standard Runge-Kutta method of
order 4, with 20 values of $q/p$ in the range $(-1/p_{min},1/p_{min})$, with
$p_{min}=3000\text{\,MeV}/c$, followed by a polynomial fit in $q/p$.
As a consequence, they do not give a reliable result for momenta below
$p_{min}$. Another limitation is the larger errors on the edges of the
acceptance, especially for $|t_{y}|\simeq 0.25$, where the field has strong
spatial variations.
## 5 Performance
A sample of simulated proton-proton collisions that include a
$B^{0}_{s}\rightarrow\phi\phi$, $\phi\\!\rightarrow{{K}^{+}}{{K}^{-}}$ decay
is used to compare the reconstruction quality of the parametrized and the
default Kalman filter. The extrapolation of the most upstream state estimate
to the beam line is the same in both filters and is based on a simplified
material map of the detector [11]. Therefore, not the state near the beam
line but the state at the most upstream measurement is employed for the
comparison of the two Kalman filters. Although only tracks with measurements
in each of the subdetectors are considered for this study, this is in
principle not a requirement for operating the parametrized Kalman filter.
Figure 1 compares the resolution of the momentum, the $x$-position and the
slope $t_{x}$ as a function of the true momentum of a particle.
Figure 1: Comparison of the resolution in simulation in (top left) momentum,
(top right) $x$-position and (bottom) slope $t_{x}$ between the default and
parametrized Kalman filter. The resolution is represented by the root mean
square of the residual distribution when comparing to the true value.
Since the position and slope are nearly exclusively determined by the
measurements in the VELO, where only a very weak magnetic field is present,
the parametrizations of the parametrized Kalman filter are sufficient to
obtain results comparable to the default Kalman filter in these variables. In
contrast, the momentum estimate strongly depends on the extrapolations in
regions with a strong magnetic field. There, especially at momenta below
$10\text{\,GeV}/c$, a resolution worse by up to $20\%$ is
observed for the parametrized Kalman filter.
The Kalman filter does not only provide an estimate of the state parameters,
but also a corresponding covariance matrix. In Figure 2 the pull distributions
of the estimated momentum, $x$-position and slope $t_{x}$ for the parametrized
Kalman filter are shown.
Figure 2: Pull distributions of the momentum, $x$-position and slope $t_{x}$
estimates of the parametrized Kalman filter at the most upstream measurement.
The given values correspond to the mean, width and root mean square of a
Gaussian function that is fitted to the distribution.
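The pull of an estimate is its residual divided by the uncertainty reported by the covariance matrix; for a well-calibrated filter the pulls follow a unit Gaussian. A minimal numpy sketch with toy inputs (not LHCb data):

```python
import numpy as np

def pulls(estimate, truth, variance):
    """Pull = (estimate - truth) / sqrt(estimated variance).
    For a well-calibrated covariance the pulls follow a unit Gaussian."""
    return (estimate - truth) / np.sqrt(variance)

# toy check: a perfectly calibrated estimator gives mean ~ 0, width ~ 1
rng = np.random.default_rng(2)
truth = rng.uniform(5.0, 50.0, 100_000)
sigma = 0.01 * truth
estimate = truth + rng.normal(0.0, sigma)
p = pulls(estimate, truth, sigma ** 2)
print(f"mean = {p.mean():.3f}, width = {p.std():.3f}")
```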
In all three cases, good uncertainty estimates are visible. However, in
analogy to the observations made for the resolution, the pull distribution of
the momentum features slightly more pronounced tails.
Besides the estimate of the state near the beam line, which is used for the
reconstruction of charged particles, an important output of the Kalman filter
is the fit quality described by the $\chi^{2}_{\text{track}}$ per degrees of
freedom $N_{\text{dof}}$. In Figure 3, this quantity is shown for the
parametrized Kalman filter for real tracks coming from a particle and fake
tracks consisting of random combinations of clusters. In addition, the real
track efficiencies and fake track rejection rates are shown for both Kalman
filter versions when applying upper bounds on this quantity.
Figure 3: Track quality estimate, $\chi^{2}_{\text{track}}/N_{\text{dof}}$, in
simulation for the parametrized filter (left). Fake tracks are shown in red
and real tracks in black. Real track efficiency and fake track rejection for
the parametrized and default Kalman filter (right).
The parametrized Kalman filter shows a slightly worse but overall comparable
performance in separating the two track classes.
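The efficiency and rejection curves obtained by cutting on $\chi^{2}_{\text{track}}/N_{\text{dof}}$ can be sketched as follows; the distributions below are toy stand-ins for the real and fake track samples:

```python
import numpy as np

def efficiency_and_rejection(chi2ndof_real, chi2ndof_fake, cut):
    """Fraction of real tracks kept and fake tracks rejected by chi2/ndof < cut."""
    eff = np.mean(chi2ndof_real < cut)   # real-track efficiency
    rej = np.mean(chi2ndof_fake >= cut)  # fake-track rejection
    return eff, rej

# toy distributions: real tracks peak at low chi2/ndof, fakes are broader
rng = np.random.default_rng(3)
real = rng.chisquare(df=5, size=20_000) / 5
fake = rng.chisquare(df=5, size=20_000) / 5 * 4.0
for cut in (2.0, 3.0, 5.0):
    eff, rej = efficiency_and_rejection(real, fake, cut)
    print(f"cut {cut}: efficiency {eff:.2f}, fake rejection {rej:.2f}")
```

Loosening the cut trades fake rejection for real-track efficiency, which is the trade-off shown in Figure 3 (right).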
The fitted tracks are combined to reconstruct $B^{0}_{s}\rightarrow\phi\phi$
candidates. Figure 4 shows the invariant mass distribution of candidates based
on the two Kalman filter versions.
Figure 4: Reconstructed $B^{0}_{s}$ mass in simulated
$B^{0}_{s}\rightarrow\phi\phi$ decays for the parametrized and the default
Kalman filter. Fit projections are overlaid.
A single Gaussian distribution and a first order polynomial are employed to
model the signal peak and the combinatorial background, respectively. This
yields nearly identical estimated mass resolutions of 12.8 MeV/$c^{2}$ and
12.9 MeV/$c^{2}$ for the default and the parametrized Kalman filter,
respectively.
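The fit model of Figure 4, a single Gaussian signal on a first-order polynomial background, can be sketched with a simple binned least-squares fit. This numpy-only toy fixes the peak position for brevity and scans the Gaussian width, solving the amplitudes linearly at each step; it is an illustration, not the fit machinery actually used:

```python
import numpy as np

def fit_gauss_plus_linear(centers, counts, mu, sigmas):
    """Fit counts ~ A*Gauss(mu, sigma) + b0 + b1*x.
    For each trial sigma the model is linear in (A, b0, b1), so it is solved
    with linear least squares; the sigma with the smallest residual wins."""
    best = None
    for s in sigmas:
        gauss = np.exp(-0.5 * ((centers - mu) / s) ** 2)
        design = np.column_stack([gauss, np.ones_like(centers), centers])
        coef, *_ = np.linalg.lstsq(design, counts, rcond=None)
        resid = np.sum((design @ coef - counts) ** 2)
        if best is None or resid < best[0]:
            best = (resid, s, coef)
    return best[1], best[2]  # fitted width and (A, b0, b1)

# toy mass spectrum: peak at 5367 MeV/c^2 with width 13, on a flat background
rng = np.random.default_rng(4)
signal = rng.normal(5367.0, 13.0, 5_000)
background = rng.uniform(5300.0, 5440.0, 3_000)
counts, edges = np.histogram(np.concatenate([signal, background]), bins=70,
                             range=(5300.0, 5440.0))
centers = 0.5 * (edges[:-1] + edges[1:])
width, _ = fit_gauss_plus_linear(centers, counts.astype(float), 5367.0,
                                 np.linspace(8.0, 20.0, 61))
print(f"fitted resolution: {width:.1f} MeV/c^2")
```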
In order to compare the timing performance of the parametrized Kalman filter
and the default Kalman filter, throughput studies on a machine with two
Intel(R) Xeon(R) Silver 4214 processors were performed. Simulated proton-
proton collisions were used in order to mimic the situation of real data
taking. Depending on the configuration of the outlier removal strategy, an
overall speedup factor between 4 and 5.5 with respect to the default Kalman
filter was achieved. The largest speedup is achieved when no iterations for
the outlier removal are performed. Singling out the calculation steps of the
Kalman filter, i.e. neglecting the part of the algorithms where the
measurement information is constructed, the speedup factor is even larger and
ranges from 5.7 to 10.
In the case of the parametrized Kalman filter, and singling out again the
calculation step of the Kalman filter, $50\%$ of the time is spent
extrapolating the states between the detector layers. Here, the extrapolation
between the UT and the SciFi constitutes the biggest component with a relative
fraction of $40\%$. The remaining Kalman filter steps, consisting of updating
the states with the cluster information and the combination of upstream and
downstream filtered states, are responsible for $16\%$ and $14\%$ of the time
spent, respectively. The extrapolation to the beam line, which is based on the
default LHCb extrapolation algorithm, is responsible for the remaining $20\%$
of the time budget.
## 6 Conclusion
We presented an alternative implementation of a Kalman filter for the LHCb
experiment. Based on simple parametrizations of material effects and the
extrapolation through the magnetic field of the detector, this algorithm
achieves a significant speedup with respect to the current implementation,
while retaining comparable quality of the track parameters. In the future,
further improvements of the parametrizations might allow an even better
estimate of the track parameters and a subsequent speedup. Ideas currently
under discussion include, for example, an analytic parametrization of the $x$
and $y$ dependence of the parameters employed in the extrapolation from the UT
to the SciFi detector and a better treatment of the limited acceptance of low-
momentum particles. The version presented in this document or a future
implementation might therefore be well suited for the usage in the LHCb
software trigger system for Run 3 of the LHC.
## Acknowledgements
The authors would like to thank the LHCb computing and simulation teams for
their support and for producing the simulated LHCb samples used in the paper.
We also would like to thank the LHCb RTA team for supporting this publication
and reviewing the work. M. De Cian acknowledges support from the Swiss
National Science Foundation grant “Probing right-handed currents in quark
flavour physics”, PZ00P2_174016.
## References
* [1] LHCb collaboration, A. A. Alves Jr. et al., _The LHCb detector at the LHC_, JINST 3 (2008) S08005
* [2] R. Aaij et al., _Performance of the LHCb trigger and full real-time reconstruction in Run 2 of the LHC_ , JINST 14 (2019) P04013, arXiv:1812.10790
* [3] S. Borghi, _Novel real-time alignment and calibration of the LHCb detector and its performance_ , Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 845 (2017) 560, Proceedings of the Vienna Conference on Instrumentation 2016
* [4] LHCb collaboration, _Framework TDR for the LHCb Upgrade: Technical Design Report_ , CERN-LHCC-2012-007, 2012
* [5] LHCb collaboration, _LHCb VELO Upgrade Technical Design Report_ , CERN-LHCC-2013-021, 2013
* [6] LHCb collaboration, _LHCb PID Upgrade Technical Design Report_ , CERN-LHCC-2013-022, 2013
* [7] LHCb collaboration, _LHCb Tracker Upgrade Technical Design Report_ , CERN-LHCC-2014-001, 2014
* [8] LHCb collaboration, _LHCb Trigger and Online Technical Design Report_ , CERN-LHCC-2014-016, 2014
* [9] R. Kalman, _A new approach to linear filtering and prediction problems_ , Journal of Basic Engineering 35 (1960)
* [10] R. Frühwirth, _Application of kalman filtering to track and vertex fitting_ , Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 262 (1987) 444
* [11] E. Bos and E. Rodrigues, _The LHCb Track Extrapolator Tools_ , Tech. Rep. LHCb-2007-140. CERN-LHCb-2007-140, CERN, Geneva, 2007
* [12] LHCb collaboration, _LHCb magnet: Technical Design Report_ , CERN-LHCC-2000-007, 2000
* [13] LHCb collaboration, _LHCb RICH: Technical Design Report_ , CERN-LHCC-2000-037, 2000
* [14] LHCb collaboration, _LHCb calorimeters: Technical Design Report_ , CERN-LHCC-2000-036, 2000
* [15] LHCb collaboration, _LHCb muon system: Technical Design Report_ , CERN-LHCC-2001-010, 2001
* [16] T. Sjöstrand, S. Mrenna, and P. Skands, _A brief introduction to PYTHIA 8.1_ , Comput. Phys. Commun. 178 (2008) 852, arXiv:0710.3820
* [17] T. Sjöstrand, S. Mrenna, and P. Skands, _PYTHIA 6.4 physics and manual_ , JHEP 05 (2006) 026, arXiv:hep-ph/0603175
* [18] I. Belyaev et al., _Handling of the generation of primary events in Gauss, the LHCb simulation framework_ , J. Phys. Conf. Ser. 331 (2011) 032047
* [19] D. J. Lange, _The EvtGen particle decay simulation package_ , Nucl. Instrum. Meth. A462 (2001) 152
* [20] P. Golonka and Z. Was, _PHOTOS Monte Carlo: A precision tool for QED corrections in $Z$ and $W$ decays_, Eur. Phys. J. C45 (2006) 97, arXiv:hep-ph/0506026
* [21] Geant4 collaboration, J. Allison et al., _Geant4 developments and applications_ , IEEE Trans. Nucl. Sci. 53 (2006) 270
* [22] Geant4 collaboration, S. Agostinelli et al., _Geant4: A simulation toolkit_ , Nucl. Instrum. Meth. A506 (2003) 250
* [23] M. Clemencic et al., _The LHCb simulation application, Gauss: Design, evolution and experience_, J. Phys. Conf. Ser. 331 (2011) 032023
# Uncertainty aware and explainable diagnosis of retinal disease
Amitojdeep Singh Theoretical and Experimental Epistemology Laboratory, School
of Optometry and Vision Science, University of Waterloo, Waterloo, ON, Canada
Department of Systems Design Engineering, University of Waterloo, Waterloo,
ON, Canada Sourya Sengupta Theoretical and Experimental Epistemology
Laboratory, School of Optometry and Vision Science, University of Waterloo,
Waterloo, ON, Canada Department of Systems Design Engineering, University of
Waterloo, Waterloo, ON, Canada Mohammed Abdul Rasheed Theoretical and
Experimental Epistemology Laboratory, School of Optometry and Vision Science,
University of Waterloo, Waterloo, ON, Canada Varadharajan Jayakumar
Theoretical and Experimental Epistemology Laboratory, School of Optometry and
Vision Science, University of Waterloo, Waterloo, ON, Canada Vasudevan
Lakshminarayanan Theoretical and Experimental Epistemology Laboratory, School
of Optometry and Vision Science, University of Waterloo, Waterloo, ON, Canada
Department of Systems Design Engineering, University of Waterloo, Waterloo,
ON, Canada
###### keywords:
Uncertainty, explainability, deep learning, retinal imaging, Bayesian,
attributions, retina, retinal disease
## ABSTRACT
Deep learning methods for ophthalmic diagnosis have shown considerable success
in tasks like segmentation and classification. However, their widespread
application is limited due to the models being opaque and vulnerable to making
a wrong decision in complicated cases. Explainability methods show the
features that a system used to make a prediction, while uncertainty awareness
is the ability of a system to highlight when it is not sure about a decision.
This is one of the first studies using uncertainty and explanations for
informed clinical decision making. We perform uncertainty analysis of a deep
learning model for diagnosis of four retinal diseases - age-related macular
degeneration (AMD), central serous retinopathy (CSR), diabetic retinopathy
(DR), and macular hole (MH) using images from a publicly available (OCTID)
dataset. Monte Carlo (MC) dropout is used at the test time to generate a
distribution of parameters and the predictions approximate the predictive
posterior of a Bayesian model. A threshold is computed using the distribution
and uncertain cases can be referred to the ophthalmologist thus avoiding an
erroneous diagnosis. The features learned by the model are visualized using a
proven attribution method from a previous study. The effects of uncertainty on
model performance and the relationship between uncertainty and explainability
are discussed in terms of clinical significance. The uncertainty information
along with the heatmaps make the system more trustworthy for use in clinical
settings.
## 1 Introduction
Over the last decade, advances in automated detection of diseases using deep
learning methods have resulted in performance comparable to human observers in
multiple domains e.g., retinal image diagnosis [1, 2, 3]. The adoption of
these methods in clinics is slow, if not negligible. The key factors hindering
trust from end-users, regulators, and patients on deep learning methods in
medical imaging are the opaqueness of the algorithms and the tendency to make
wrong decisions on complex cases [4, 5]. There is a need to describe the
factors used for making a decision and separating samples with higher
uncertainty for expert inspection. The former is the domain of explaining the
models and is discussed extensively in the literature [6, 7, 8]. Studies have
been published using a set of explainability methods (called attributions) for
retinal image diagnosis [9, 10, 11, 12, 13]. The attribution methods represent
the model output as a sum of the contributions from each input feature. This
can be visualized as heatmaps for images and various plots for other data
[14].
The final layer of deep learning models used as classifiers typically consists
of softmax outputs corresponding to each of the target classes (usually only
one output for a binary classifier). This output is often erroneously
interpreted as the class probability. A model can be uncertain in a prediction
despite a high softmax value, since during training it learns to push the
output to its extremes to lower the loss function.
Uncertainty is described as the ability of a model to convey the ambiguity in
the decision [15]. Deep learning based classifiers typically return the class
with the highest softmax output as the result, even if there is a small
margin. There is a need for the end-users to be alerted when a model is
uncertain to avoid potential wrong decisions. There are two sources of
uncertainty in a system - aleatoric and epistemic [4, 15]. Aleatoric
uncertainty or doubt is the inherent uncertainty in the system arising from
the modeling process, e.g., stochastic behavior. On the other hand, epistemic
uncertainty or ambiguity arises due to limited data, leading to lower knowledge
of the system. The former is inherent and cannot be changed for a given
model. The latter can be reduced effectively with larger training data and by
ensuring that new kinds of data (or adversarial inputs) are identified.
Bayesian theory can be used to model the uncertainty of networks, but the
computational complexity makes it prohibitive for high dimensional problems
like image classification. However, existing deep learning models can be cast
as Bayesian models without changing the models [16]. The dropout layer which
is used only during training is enabled during the testing and the model is
run several times for each test sample. This results in an approximate
posterior probability distribution for each image. This uncertainty
information can improve the diagnostic performance as shown in a study for
diagnosing diabetic retinopathy from retinal fundus image [17]. It helped
identify unfamiliar images that should be referred to a clinician instead of
making an ambiguous choice.
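The test-time dropout recipe described above can be illustrated with a toy two-layer network in numpy; the architecture, weights, and dropout rate here are hypothetical and serve only to show how repeated stochastic passes yield a predictive distribution:

```python
import numpy as np

rng = np.random.default_rng(5)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(x, w1, w2, drop_rate, rng):
    """One stochastic forward pass: dropout stays ON at test time."""
    h = np.maximum(x @ w1, 0.0)              # ReLU hidden layer
    mask = rng.random(h.shape) >= drop_rate  # Bernoulli dropout mask
    h = h * mask / (1.0 - drop_rate)         # inverted dropout scaling
    return softmax(h @ w2)

# toy 2-layer net and a single test sample
w1 = rng.normal(size=(8, 32))
w2 = rng.normal(size=(32, 5))
x = rng.normal(size=8)

# T stochastic passes approximate the predictive posterior
T = 1000
samples = np.array([forward(x, w1, w2, 0.3, rng) for _ in range(T)])
mean_prob = samples.mean(axis=0)  # predictive mean per class
std_prob = samples.std(axis=0)    # spread = uncertainty per class
print(mean_prob.round(3), std_prob.round(3))
```

The spread of the per-class samples, not the single softmax value, is what conveys how uncertain the model is.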
A physician can identify when they are uncertain about a case and can access
other sources of information (e.g. additional tests, medical history, etc). On
the other hand, deep learning methods have been proposed without specific
measures to access their uncertainty. This leads to the challenge of
calibrating user trust [4] on the system and consequently reduced acceptance.
The samples with high uncertainty can be referred to a human expert to reduce
mistakes. Previous studies have indicated that uncertain cases are strongly
correlated with errors and identifying them increased model performance e.g.
[17]. There are some recent studies combining uncertainty and explainability
information for 3D object detection [18] and clinical time series prediction
[19, 20].
Here, we use a random dropout methodology to approximate a Bayesian neural
network to obtain a predictive posterior distribution and use this to quantify
uncertainty in diagnosis. We use retinal OCT images in this study. This
uncertainty information along with features (represented as heat maps) used by
the model in making the decision is considered. To the best of our knowledge,
this is the first study that combines uncertainty with explainability for
informed medical diagnosis in addition to being amongst the first studies of
its kind in the 2D computer vision domain.
The custom model with dropout and 6 convolutional layers and the process of
finding the uncertainty and explanations are described in section 2. The
effect of the uncertainty information on the system’s performance is studied
by adjusting thresholds and removing most uncertain examples in section 3.
Section 4 used the explanation heatmaps to discuss the effect of features and
input data on model uncertainty. The conclusions and directions for further
study are provided in section 5.
## 2 Methods
### 2.1 Dataset
The OCTID dataset [21], with OCT images of four diseases (AMD, CSR, DR, and
MH) as well as normal scans, was used in this study. It provides broad
coverage of the most common retinal conditions diagnosed using OCT. A total
of 572 images are
distributed unequally over the classes as shown in Table 1. The normal class
is dominant and has about 4 times the number of AMD scans. The training/test
split was 80:20 and a further 10% of the training data was used for
validation. After choosing the hyperparameters using a validation set, the
model was trained on the entire training set consisting of 459 scans. Data
augmentation was performed using rotation, width shift, height shift, shear,
and zoom of up to 10%, as well as horizontal flips.
Table 1: Dataset description showing the class-level split for training and test sets.
Data | AMD | CSR | DR | MH | Normal | Total
---|---|---|---|---|---|---
Training | 44 | 82 | 86 | 82 | 165 | 459
Test | 11 | 20 | 21 | 20 | 41 | 113
Total | 55 | 102 | 107 | 102 | 206 | 572
% of total | 9.62% | 17.83% | 18.71% | 17.83% | 36.01% | 100%
### 2.2 Model
A compact deep learning model with 6 convolutional layers and a dense layer
was used. Each block of two convolutional layers of size 16, 32, and 64 was
followed by a dropout of 0.2 while the dense layer of 512 neurons was followed
by a dropout of 0.3. Initially, it was trained on the UCSD dataset [22]
consisting of 85k images. Since the dataset was large and had low noise, the
model had very low uncertainty in most cases. Then it was trained on the
smaller OCTID dataset which had fewer images and five classes covering
multiple retinal diseases simulating a clinical setting. The weights from the
model trained on the UCSD dataset were finetuned for OCTID using transfer
learning, similar to that used in [23].
The training progress over 45 epochs in Figure 1 (left) shows a final training
accuracy of 84.8% and a validation accuracy of 90.9%. The validation accuracy
curve is noisy due to the small number of validation images. The resulting
model obtained a
test accuracy of 88.5% and the confusion matrix is shown in Figure 1 (right).
It shows that the model had the weakest performance for DR, possibly because
of the large amount of variation among the images. All the normal images were
correctly predicted, but 9% of the AMD images were categorized as normal.
Figure 1: The metrics of the original model - (left) evolution of training and
validation accuracy during training, (right) confusion matrix for the test set
### 2.3 Uncertainty evaluation and explanation
Some of the model weights were removed randomly using the 4 dropout layers at
the test time to approximate a Bayesian neural network. The 10th percentile
level from the training set was used to generate thresholds for each class in
the initial analysis. The test samples with uncertainty higher than the
threshold (lower probability of prediction) were marked as referrals for
manual diagnosis by an expert.
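The per-class thresholding and referral rule can be sketched as follows, assuming arrays of predicted class probabilities; all names and the toy data are illustrative:

```python
import numpy as np

def class_thresholds(train_probs, train_labels, n_classes, percentile=10):
    """10th percentile of the predicted probability of the true class,
    computed per class on the training set."""
    return np.array([
        np.percentile(train_probs[train_labels == c, c], percentile)
        for c in range(n_classes)
    ])

def refer(test_probs, thresholds):
    """A test sample is referred when the probability of its predicted
    class falls below that class's threshold."""
    pred = test_probs.argmax(axis=1)
    return test_probs[np.arange(len(pred)), pred] < thresholds[pred]

# toy example with 3 classes
rng = np.random.default_rng(6)
n = 300
labels = rng.integers(0, 3, n)
probs = rng.dirichlet([1.0, 1.0, 1.0], n)
probs[np.arange(n), labels] += 2.0  # bias toward the true class
probs /= probs.sum(axis=1, keepdims=True)
thr = class_thresholds(probs, labels, 3)
flags = refer(probs, thr)
print(f"referred {flags.sum()} of {n} samples")
```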
The explanations were produced using an attribution method called Deep Taylor
[24]. It was the best performing attribution method from a previous
comparative study [13]. It finds a root point near a given neuron with a value
close to the input but an output of 0. Taylor decomposition is then recursively
applied from the output to the input layer to estimate the attributions of
each neuron. The iNNvestigate library [25] was used to implement the
attribution method.
## 3 Results
A threshold was computed using the 10th percentile of the uncertainty in the
training set for each class. These values varied across the classes: AMD
0.65, CSR 0.92, DR 0.93, MH 0.97, and normal 0.93. The lower threshold for AMD
indicates a wider spread in uncertainty levels and the presence of false
negatives. The thresholds were used to remove 27 of 113 test images and the
model accuracy improved from 88.5% to 93.7% for the remaining samples. These
improvements for each class are shown by metrics in Table 2 and the new
confusion matrix is shown in Figure 2.
DR, which had the worst performance originally as shown in Figure 1 (right),
showed the largest improvement from 0.76 to 0.92 at the cost of separating
6/21 images for referral. In contrast, only one normal image was sent for
referral and zero false positives were maintained. Interestingly, all the
false negatives were also eliminated by referring 4/13 AMD images. The
fraction of correct classifications also increased for the other three
classes: 0.82 to 0.89 for AMD, 0.85 to 0.88 for CSR, and 0.88 for MH. It is
apparent
from this case that the largest improvements were made in the classes with
weakest performance, but no conclusions can be reached without an analysis
involving larger and more diverse datasets.
Table 2: Effect of threshold on precision (PR), recall (RE), F1 score and support samples | Base metrics | After uncertainty
---|---|---
Condition | PR | RE | F1 | Support | PR | RE | F1 | Support
AMD | 0.90 | 0.82 | 0.86 | 11 | 0.89 | 0.89 | 0.89 | 9
CSR | 0.85 | 0.85 | 0.85 | 20 | 0.88 | 0.94 | 0.91 | 16
DR | 0.80 | 0.76 | 0.78 | 21 | 0.92 | 0.80 | 0.86 | 15
MH | 0.81 | 0.85 | 0.83 | 20 | 0.88 | 0.93 | 0.90 | 15
Normal | 0.95 | 0.95 | 0.95 | 41 | 1.00 | 1.00 | 1.00 | 40
Figure 2: Confusion matrix for the test set after applying the uncertainty threshold.
Figure 3: The effect of removing the samples with the most uncertainty on the
accuracy. The points in the scatter plot are the observations and the 5-sample
moving average shows the overall trend.
Probability distributions of the output were generated by activating random
dropout at the test time and repeating 1000 times for the entire dataset.
These probability distribution histograms approximating the Bayesian
uncertainty, and the corresponding OCT images with Deep Taylor heatmaps, are
shown for some typical samples in Figure 6. The brighter the color on a
given part of the image, the greater the contribution to the model output. The
original OCT images are also provided for reference. The threshold is plotted
as a solid green line and the medians of each class are shown by dotted lines.
the class median is below the threshold level, the image should be sent for a
referral. The observed relationship between the explanations, uncertainty
levels and the model predictions is discussed in section 4.
An analysis performed by removing the test samples with the most uncertainty
and observing the effect on the model accuracy is plotted in Figure 3. The
samples were ordered by the median value of the uncertainty and removed one by
one. The small sample size of 113 makes the overall trend noisy, and hence a
moving average with window size 5 was used for smoothing. Almost all the
incorrectly classified images were in the initial quarter of uncertainty
scores. The moving average has a shape similar to logistic growth as it
reaches the 100% accuracy level. In practice, such a graph could be used to
set a tolerance level, and images below that level can be referred for
diagnosis by a human expert. In further studies with larger datasets, this
could be used to produce class-wise thresholds from the test set.
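The removal analysis above can be sketched as: order samples by median certainty, drop the least certain one by one, and smooth the resulting accuracy curve with a 5-sample moving average (toy data, not the study's results):

```python
import numpy as np

def accuracy_vs_removal(median_certainty, correct):
    """Accuracy over the remaining samples after removing the least certain
    ones one by one (most uncertain = lowest median softmax)."""
    order = np.argsort(median_certainty)  # most uncertain first
    correct = correct[order].astype(float)
    n = len(correct)
    # accuracy of the tail that remains after removing the first k samples
    remaining = np.cumsum(correct[::-1])[::-1]
    return remaining / np.arange(n, 0, -1)

def moving_average(x, window=5):
    return np.convolve(x, np.ones(window) / window, mode="valid")

# toy data: errors concentrate among the least certain samples
rng = np.random.default_rng(7)
certainty = rng.uniform(0.4, 1.0, 113)
correct = rng.random(113) < certainty  # low certainty -> more errors
acc = accuracy_vs_removal(certainty, correct)
smooth = moving_average(acc, 5)
print(f"start {acc[0]:.2f} -> after 30 removals {acc[30]:.2f}")
```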
## 4 Discussion
The black-box nature of deep learning models affects their acceptance in
practical settings such as healthcare. A clinician is supposed to present the
reasoning for decisions. Humans also associate a degree of certainty to their
decisions. This is in contrast to the opaqueness of advanced artificial
intelligence methods such as deep learning which are hard to understand due to
millions of features and give the result as a single class. This study
presented an uncertainty aware and explainable deep learning model for retinal
diagnosis. The heatmaps using Deep Taylor and the uncertainty using
probability histograms supplement the model decision. The end-user can infer
the regions the model looked at as well as how sure the model is about the
predictions and make a final decision. Also, separating the cases with more
uncertainty for referral could help improve model performance and inculcate
trust. The improved confusion matrix and accuracy in Figures 2 and 3
respectively indicate the efficacy of uncertainty information. Referring
uncertain images to a clinician instead of misdiagnosing can improve patient
care by increasing the confidence in the underlying system.
Previous studies [12, 13] compared the heatmaps of each explainability method
for a given sample and concluded that Deep Taylor performed the best for
retinal OCT. There are also studies showing the effect of uncertainty for
diagnosis from retinal fundus images [17]. The present study is one of the
first to report both the explanations and uncertainty analysis. It is observed
that the samples classified with higher certainty also have more relevant
heatmaps. Some examples covering all 5 classes and some typical cases are
shown in Figure 6. A greater emphasis is laid on the ability to identify model
errors using uncertainty and explanations. Figure 5(a) shows the retinal OCT
of a normal eye for reference.
The model is observed to be the most sensitive to high-contrast regions, i.e.
the inner retina or the vitreous-retinal interface and the photoreceptor
layer. Figure 5(b) shows a correctly classified AMD scan where the model
showed certainty higher than the corresponding threshold. The model correctly
picks up the relevant deposits and new blood vessels in the photoreceptor
layer. Figures 5(c)-6(a) show correctly classified CSR, DR and MH cases.
(a) A correctly classified normal image with high certainty. The probability
was close to 1 for all the runs. The heatmaps focus on the normal shapes of
inner retina and the photoreceptor layer.
(b) An AMD image correctly classified with probability higher than the
threshold. The explanations highlight the anomalous photoreceptor layer in the
macula.
(c) A CSR image with prominent features correctly classified with complete
certainty.
(d) A correctly classified DR image with high certainty. The model detected
new blood vessels visible as bright structures with shadows underneath.
(a) A correctly classified MH image. The certainty is high and the model
looked at the curved boundaries of the hole.
(b) A noisy AMD image misclassified as CSR with certainty lower than the
threshold. Hence, the model refers it to a clinician for further diagnosis.
(c) An image labelled as MH in the dataset, with both MH and AMD present,
classified as AMD with high certainty.
Figure 6: (Left to right) The input OCT images, explanations using Deep Taylor,
and Bayesian uncertainty histograms. The brighter magenta regions had more
effect on the output. The histograms show the distribution of softmax outputs
for 1000 runs of the model, with medians as dotted lines and the threshold in
solid green.
A typical example of CSR is shown in Figure 5(c), with a large fluid deposit
in the central part of the eye. The model looked at the boundaries of the
deposit and correctly classified it with high certainty. Similarly, Figure
5(d) shows a DR case which is also classified correctly with high certainty. The
model looked at the irregular structures, possibly blood clots or scars in the
central retina. The abnormalities in the periphery are not highlighted in most
cases. A correctly classified and highly certain mh case is shown in Figure
6(a) where the model mainly looked at the main clinical feature - the
boundaries of the hole. Figure 6(b) is a high noise image with diffuse
photoreceptor layer. The model emphasized the curvature of the retina while
focusing less on the telltale fluid deposits whose boundaries are highlighted
in a light magenta. Consequently, mh received the highest probability. The
higher threshold of mh ensured the case is sent for a referral instead of
misdiagnosing it. Figure 6(c) shows an image which has a clear mh and a
possible wet amd. It was labeled as mh in the dataset and the model classified
it as amd. It put emphasis on the macular deposits compared to the relatively
smaller structure of the hole, thus predicting the secondary diagnosis.
Some general observations can also be drawn about the relationship between
uncertainty and the features highlighted by explanations. The cases with
higher contrast between the structures seem to be classified with higher
certainty. The model looked at the sharp transitions between layers such as
the vitreous - retinal and the photoreceptors along with some bright deposits
as in Figure 5(d). Clinicians also consider the darker regions such as the
fluid accumulations in Figure 5(c) and the shadows of new blood vessels in
Figure 5(d). Overall, there is a relationship between the uncertainty,
explanations, and the accuracy of a deep learning model for retinal OCT
images. This study is an initial exploratory analysis and points to several
areas of further research.
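The referral mechanism discussed in this study (many stochastic forward passes, median softmax, refer when the winning class falls below its class-specific threshold) can be sketched as below. This is a minimal illustration: `model`, `n_runs` and the threshold values are placeholders, not the authors' actual implementation.

```python
import numpy as np

def mc_dropout_predict(model, x, n_runs=1000):
    """Collect softmax outputs from n_runs stochastic forward passes.

    `model` is assumed to be a callable that keeps dropout active at
    inference time, so repeated calls give different softmax vectors.
    """
    return np.stack([model(x) for _ in range(n_runs)])

def refer_or_decide(probs, thresholds):
    """Take the median softmax per class; flag the scan for clinician
    referral when the winning class's median probability is below its
    class-specific threshold."""
    median = np.median(probs, axis=0)
    cls = int(np.argmax(median))
    certain = bool(median[cls] >= thresholds[cls])
    return cls, float(median[cls]), certain
```

A scan flagged `certain=False` would be referred rather than auto-diagnosed, mirroring the noisy AMD case of Figure 6(b).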
## 5 Conclusion
In this study, the Bayesian uncertainty and the explanations of a deep
learning model for retinal diagnosis are demonstrated. It is shown that the
threshold with uncertainty successfully improved the model performance by
leaving out a few uncertain samples. The effect of removing uncertain samples
on the accuracy follows a trend similar to the logistic curve. The uncertainty
and the correctness of a decision are related to the explanations: in cases
with higher certainty and correct decisions, the explanations highlighted the
clinically relevant regions best. Using both uncertainty and explainability
has implications for the acceptance of deep learning models.
Both the clinical community and AI researchers stand to benefit from this, the
former with more holistic information and the latter with a tool to improve
models.
Future studies can expand this approach to more domains for improving the deep
learning models for real-world applications. This information can be used to
alter the features learned by a model and hence improve its robustness.
Another direction could be to quantify pathological features such as the
brightness and thickness of the retinal pigment epithelium and relate them to
the explanations and uncertainty. Designing an uncertainty-aware and
explainable system would alleviate the major barriers for acceptance of deep
learning methods in multiple domains including medical diagnosis.
## Acknowledgement
This work is supported by an NSERC Discovery Grant and NVIDIA Titan V GPU
Grant to V.L. This research was enabled in part by Compute Canada
(www.computecanada.ca).
## References
* [1] De Fauw, J., Ledsam, J. R., Romera-Paredes, B., Nikolov, S., Tomasev, N., Blackwell, S., Askham, H., Glorot, X., O’Donoghue, B., Visentin, D., and Others, “Clinically applicable deep learning for diagnosis and referral in retinal disease,” Nature medicine 24(9), 1342–1350 (2018).
* [2] Sengupta, S., Singh, A., Leopold, H. A., Gulati, T., and Lakshminarayanan, V., “Ophthalmic diagnosis using deep learning with fundus images – A critical review,” Artificial Intelligence in Medicine 102, 101758 (2020).
* [3] Leopold, H. A., Singh, A., Sengupta, S., Zelek, J. S., and Lakshminarayanan, V., [Recent Advances in Deep Learning Applications for Retinal Diagnosis using OCT ], Elsevier, NY, in press (2021).
* [4] Tomsett, R., Preece, A., Braines, D., Cerutti, F., Chakraborty, S., Srivastava, M., Pearson, G., and Kaplan, L., “Rapid Trust Calibration through Interpretable and Uncertainty-Aware AI,” Patterns 1(4), 100049 (2020).
* [5] Singh, A., Sengupta, S., and Lakshminarayanan, V., “Explainable deep learning models in medical image analysis,” Journal of Imaging 6(6), 52 (2020).
* [6] Holzinger, A., Biemann, C., Pattichis, C. S., and Kell, D. B., “What do we need to build explainable AI systems for the medical domain?,” arXiv preprint arXiv:1712.09923 (2017).
* [7] Arrieta, A. B., D′iaz-Rodr′iguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garc′ia, S., Gil-López, S., Molina, D., Benjamins, R., and Others, “Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI,” Information Fusion 58, 82–115 (2020).
* [8] Arya, V., Bellamy, R. K., Chen, P.-Y., Dhurandhar, A., and Hind, M. e. a., “One explanation does not fit all: A toolkit and taxonomy of ai explainability techniques,” arXiv preprint arXiv:1909.03012 (2019).
* [9] Yang, H.-L., Kim, J. J., Kim, J. H., Kang, Y. K., and et. al. Park, D. H., “Weakly supervised lesion localization for age-related macular degeneration detection using optical coherence tomography images,” PloS One 14(4), e0215076 (2019).
* [10] Sayres, R., Taly, A., Rahimy, E., Blumer, K., and et. al. Coz, D., “Using a deep learning algorithm and integrated gradients explanation to assist grading for diabetic retinopathy,” Ophthalmology 126(4), 552–564 (2019).
* [11] Kaur, H., Nori, H., Jenkins, S., Caruana, R., Wallach, H., and Wortman Vaughan, J., “Interpreting Interpretability: Understanding Data Scientists’ Use of Interpretability Tools for Machine Learning,” in [Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems ], 1–14 (2020).
* [12] Singh, A., Sengupta, S., Abdul Rasheed, M., Zelek, J., and Lakshminarayanan, V., “Interpretation of deep learning using attributions : application to ophthalmic diagnosis,” in [In Proc. Applications of Machine Learning, SPIE ], 11511, 115110A, International Society for Optics and Photonics (SPIE) (2020).
* [13] Singh, A., Sengupta, S., Mohammed, A. R., Faruq, I., Jayakumar, V., Zelek, J., Lakshminarayanan, V., et al., “What is the optimal attribution method for explainable ophthalmic disease classification?,” in [International Workshop on Ophthalmic Medical Image Analysis ], 21–31, Springer (2020).
* [14] Chen, H., Lundberg, S., and Lee, S.-I., “Explaining Models by Propagating Shapley Values of Local Components,” arXiv preprint arXiv:1911.11888 (2019).
* [15] Kendall, A. and Gal, Y., “What uncertainties do we need in bayesian deep learning for computer vision?,” in [Advances in neural information processing systems ], 5574–5584 (2017).
* [16] Gal, Y. and Ghahramani, Z., “Dropout as a bayesian approximation: Representing model uncertainty in deep learning,” in [International Conference on machine learning ], 1050–1059 (2016).
* [17] Leibig, C., Allken, V., Ayhan, M. S., Berens, P., and Wahl, S., “Leveraging uncertainty information from deep neural networks for disease detection,” Scientific reports 7(1), 1–14 (2017).
* [18] Pan, H., Wang, Z., Zhan, W., and Tomizuka, M., “Towards better performance and more explainable uncertainty for 3d object detection of autonomous vehicles,” in [2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC) ], 1–7, IEEE (2020).
* [19] Tan, Q., Ye, M., Ma, A. J., Yang, B., Yip, T. C.-F., Wong, G. L.-H., and Yuen, P. C., “Explainable uncertainty-aware convolutional recurrent neural network for irregular medical time series,” IEEE Transactions on Neural Networks and Learning Systems , 1–15 (2020).
* [20] Wickstrøm, K., Kampffmeyer, M., and Jenssen, R., “Uncertainty and interpretability in convolutional neural networks for semantic segmentation of colorectal polyps,” Medical Image Analysis 60, 101619 (2020).
* [21] Gholami, P., Roy, P., Parthasarathy, M. K., and Lakshminarayanan, V., “OCTID: Optical coherence tomography image database,” Computers & Electrical Engineering 81, 106532 (2020).
* [22] Kermany, D. and Goldbaum, M., “Labeled optical coherence tomography (OCT) and Chest X-Ray images for classification,” Mendeley Data 2 (2018).
* [23] Singh, A., Sengupta, S., and Lakshminarayanan, V., “Glaucoma diagnosis using transfer learning methods,” in [In Proc. Applications of Machine Learning, SPIE ], 11139, 111390U, International Society for Optics and Photonics (SPIE) (2019).
* [24] Montavon, G., Lapuschkin, S., Binder, A., Samek, W., and Müller, K.-R., “Explaining nonlinear classification decisions with deep taylor decomposition,” Pattern Recognition 65, 211–222 (2017).
* [25] Alber, M., Lapuschkin, S., Seegerer, P., Hägele, M., and et. al. Schütt, K. T., “iNNvestigate neural networks,” Journal of Machine Learning Research 20(93), 1–8 (2019).
# A $75\%$ Occurrence Rate of Debris Discs around F stars
in the $\beta$ Pic Moving Group
Nicole Pawellek,1,2 Mark Wyatt,1 Luca Matrà,3 Grant Kennedy,4 Ben Yelverton1
1Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge
CB3 0HA, UK
2Konkoly Observatory, Research Centre for Astronomy and Earth Sciences,
Konkoly-Thege Miklós út 15-17, H-1121 Budapest, Hungary
3 School of Physics, National University of Ireland Galway, University Road,
Galway, Ireland
4Department of Physics and Centre for Exoplanets and Habitability, University
of Warwick, Gibbet Hill Road, Coventry CV4 7AL, UK
E-mail<EMAIL_ADDRESS>
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
Only 20% of old field stars have detectable debris discs, leaving open the
question of what disc, if any, is present around the remaining 80%. Young
moving groups allow us to probe this population, since discs are expected to have
been brighter early on. This paper considers the population of F stars in the
23 Myr-old BPMG where we find that 9/12 targets possess discs. We also analyse
archival ALMA data to derive radii for 4 of the discs, presenting the first
image of the 63au radius disc of HD 164249. Comparing the BPMG results to disc
samples from $\sim 45$ Myr and $\sim 150$ Myr-old moving groups, and to discs
found around field stars, we find the disc incidence rate in young moving
groups is comparable to that of the BPMG and significantly higher than that of
field stars. The BPMG discs tend to be smaller than those around field stars.
However, this difference is not statistically significant due to the small
number of targets. Yet, by analysing the fractional luminosity vs disc radius
parameter space we find that the fractional luminosities in the populations
considered drop by two orders of magnitude within the first 100 Myr. This is
much faster than expected from collisional evolution, implying a decay
equivalent to $1/\text{age}^{2}$. We attribute this depletion to embedded
planets, which would need to be around 170 $M_{\text{earth}}$ to cause a
depletion on the appropriate timescale. However, we cannot rule out that the
different birth environments of nearby young clusters result in brighter
debris discs than around the progenitors of field stars, which likely formed
in a denser environment.
###### keywords:
infrared: planetary systems – planet-disc interactions – planets and
satellites: dynamical evolution and stability
(pubyear: 2021)
## 1 Introduction
After its protoplanetary disc has dispersed, a star is left with - if anything
- a system of planets and debris belts. The dust in those debris belts is
inferred to originate in the break-up of planetesimals at least kilometres in
size (e.g., Wyatt, 2008; Krivov, 2010; Hughes et al., 2018), and is seen in
far-infrared (FIR) surveys towards $\sim$20% of nearby several Gyr-old stars
(e.g., Eiroa et al., 2013; Sibthorpe et al., 2018), where a slightly higher
detection rate is noted for earlier type stars (e.g., Su et al., 2006;
Sibthorpe et al., 2018).
FIR surveys of nearby stars also show that debris disc luminosities decrease
with age in a manner explained by population models in which all stars are
born with a debris belt that is depleted by collisions amongst the
planetesimals (Wyatt et al., 2007; Gáspár et al., 2013). This canonical model
successfully explains the detection statistics (as a function of wavelength
and stellar age), with the implication that all stars are born with a
planetesimal belt of initial mass drawn from a log-normal distribution like
that of protoplanetary discs, and concentrated at a radius drawn from a
$n(r)\propto r^{-1.7}$ distribution in the range 1-1000 au (Sibthorpe et al.,
2018).
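The canonical population described above can be sampled directly. In the sketch below the radius distribution is the $n(r)\propto r^{-1.7}$ power law over 1-1000 au quoted in the text, drawn by inverse-CDF sampling; the centre and width of the log-normal mass distribution are illustrative placeholders, not the fitted values of Sibthorpe et al. (2018).

```python
import numpy as np

rng = np.random.default_rng(42)
n_stars = 100_000

# Initial belt mass: log-normal, like protoplanetary discs.
# Centre and width here are placeholders, not the fitted values.
mass = rng.lognormal(mean=np.log(10.0), sigma=1.1, size=n_stars)

# Belt radius: n(r) ∝ r^(-1.7) on 1-1000 au via inverse-CDF sampling.
g = 1.0 - 1.7                       # exponent after integrating n(r)
r_min, r_max = 1.0, 1000.0
u = rng.random(n_stars)
radius = (r_min**g + u * (r_max**g - r_min**g)) ** (1.0 / g)
```

Most belts drawn this way sit at a few au, which is why the canonical model can hide the undetectable 80% at small, rapidly depleted radii.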
However, while this model makes accurate predictions for the 20% of Gyr-old
stars with detectable discs, it is almost completely unconstrained for the 80%
of stars without detectable discs for which model predictions rely on the log-
normal or power law assumptions about the underlying initial mass and radius
distributions. For example, stars in the canonical model population without
detectable discs are the 80% with 1-10 au discs that are rapidly depleted and
so never seen, whereas it would be equally valid to put these undetected discs
at 30-100 au with very low initial mass. A further challenge comes from the
inference that planetesimal belt radii depend on stellar luminosity. Belts
imaged at millimetre wavelengths are larger around higher luminosity stars in
a way that may be attributed to preferential formation at protoplanetary disc
ice-lines (Matrà et al., 2018), a possibility not currently included in the
model.
It is inevitably challenging to determine the properties of the planetesimal
belts of the 80% of nearby stars without detectable dust. Our only hope is to
probe these by studying stars that are young ($\ll 100$ Myr when their discs
are brightest) and nearby. Young nearby moving groups are ideal for sample
selection, having also the benefit of being co-eval. Given the stellar
luminosity dependence mentioned above, and that disc detection is maximised
for higher luminosity stars, the optimal sample would include stars of similar
early spectral type in the nearest youngest moving group. The number of A-type
stars in nearby young moving groups for which the disc detection peaks is very
limited while late-type stars are common. The best compromise between a high
stellar luminosity and a reasonably large number of targets within the same
moving group is given by F-type stars.
An example fulfilling the aforementioned requirements is the $\beta$ Pictoris
moving group (BPMG) which contains stars of $\sim$23 Myr age (Bell et al.,
2015). Based on a survey of 30 BPMG stars of different spectral types, Rebull
et al. (2008) found that more than 37% of the targets show evidence for a
circumstellar disc. By considering the known F-type stars in the BPMG,
Churcher et al. (2011) inferred a debris disc detection rate of 6/9 ($\sim
67\%$). This is higher than the $20\%$ seen for Gyr-old stars (e.g., Su et
al., 2006; Marshall et al., 2016; Sibthorpe et al., 2018), raising the
question of why the detection frequency is so high. One explanation could be
that over the two orders of magnitude in age between $\sim$20 Myr and
$\sim$2 Gyr, the majority of discs are collisionally depleted below detectable
levels. Another possibility might be that the formation conditions of the BPMG
differ significantly from those of field stars, leading to debris discs with
atypical properties (i.e. unusually bright).
In the first part of this study we consider the population of F star debris
discs in the BPMG. In § 2 we revisit membership in the BPMG sample in the
light of recent studies and note stellar multiplicity and planetary companions
for the sample since those are possible influences on the occurrence of discs.
We investigate evidence for infrared excesses indicative of a surrounding
debris disc in § 3, then § 4 presents ALMA observations to determine the radii
of the belts. We use this spatial information to generate SED models including
a size distribution of dust particles in § 5.
In the second part of this study the properties of the BPMG disc population
are compared with those of other nearby F-star populations in § 6 to identify
similarities and differences between the samples. In § 7 we analyse possible
scenarios explaining the high detection rate of F star debris discs in the
BPMG, before concluding in § 8.
## 2 Sample selection
### 2.1 Reassessing the BPMG sample of F stars
The BPMG is one of the nearest moving groups. Shkolnik et al. (2017)
identified 146 objects belonging to this group, where five stars are found to
be A-type, eleven F-type, six G-type, 27 K-type and 97 M- and L-type. Using
data from the Gaia data release 2 (Gaia Collaboration et al., 2018), several
additional members of the BPMG were found by Gagné & Faherty (2018). While the
majority found in that study are M- and L-type, one F-type star and one A-type
star could also be added to the sample given by Shkolnik et al. (2017). Thus,
the sample of nine F-type members of the BPMG analysed by Churcher et al.
(2011) is now increased to twelve by combining the samples of Shkolnik et al.
(2017) and Gagné & Faherty (2018). These twelve targets will be the basis of
our analysis. All of them lie between 25 and 66 pc (Gaia Collaboration et al.,
2018; Bailer-Jones et al., 2018), with stellar properties listed in Tab. 1.
Table 1: Stellar parameters of the sample of 12 F-type stars belonging to the
$\beta$ Pic moving group.
HD | HIP | SpT | $L/L_{\text{sun}}$ | $T_{\text{eff}}$ [K] | d [pc] | Companion | Comp. SpT | Sep. [arcsec] | Sep. [au] | Ref | Planet sep. [au] | Planet mass [$M_{\text{Jup}}$]
---|---|---|---|---|---|---|---|---|---|---|---|---
203 | 560 | F2V | $4.26\pm 0.03$ | $6830\pm 30$ | 40.0 | … | … | … | … | … | … | …
14082A | 10680 | F5V | $2.00\pm 0.01$ | $6170\pm 20$ | 39.8 | HD 14082B | G2V | 14 | 557 | 1, 4 | … | …
15115 | 11360 | F4V | $3.6\pm 0.1$ | $6720\pm 20$ | 49.0 | … | … | … | … | … | … | …
29391a | 21547 | F0V | $5.71\pm 0.06$ | $7330\pm 30$ | 29.8 | GJ 3305AB | M1V | 66 | 1957 | 1 | 13 | 1-12
35850 | 25486 | F8V | $1.84\pm 0.01$ | $6050\pm 20$ | 26.9 | HD 35850B | … | $7.8\times 10^{-4}$ | 0.021 | 6, 7 | … | …
160305 | 86598 | F9V | $1.69\pm 0.03$ | $6050\pm 40$ | 65.7 | … | … | … | … | … | … | …
164249 | 88399 | F5V | $3.20\pm 0.04$ | $6340\pm 40$ | 49.6 | HD 164249B | M2V | 6.5 | 323 | 1 | … | …
| | | | | | 2MASS J18011138-5125594 | … | … | … | 3 | … | …
173167 | - | F5V | $2.4\pm 0.1$ | $6270\pm 90$ | 50.6 | TYC 9073-0762 | M1V | 571 | 28894 | 1, 2 | … | …
181327 | 95270 | F5V | $2.87\pm 0.02$ | $6480\pm 20$ | 48.2 | HD181296 | A0V+M7/8V | 416 | 20072 | 1 | … | …
191089 | 99273 | F5V | $2.74\pm 0.02$ | $6460\pm 30$ | 50.1 | … | … | … | … | … | … | …
199143 | 103311 | F8V | $2.21\pm 0.02$ | $5930\pm 20$ | 45.7 | HD 199143B | M2V | 1.1 | 50 | 4, 8 | … | …
| | | | | | HD 358623 | K7 | 325.0 | 14764 | 1, 8 | … | …
213429 | 111170 | F8V | $1.92\pm 0.06$ | $5970\pm 20$ | 25.5 | HD 213429B | … | $\sim 0.08^{b}$ | $\sim 2^{b}$ | 1, 5 | … | …
Notes: (a) HD 29391 is also known as 51 Eridani. The references for the
planetary companion are given in Section 2.3. (b) The orbital period is 631 d.
It is converted to a separation assuming the mass of the binary companion to
be equal to that of the primary HD 213429 with $1.19M_{\odot}$.
References for multiplicity: [1] Elliott & Bayo (2016), [2] Moór et al.
(2013), [3] Gagné et al. (2018a), [4] Mamajek & Bell (2014), [5] Kovaleva et
al. (2015), [6] Eker et al. (2008), [7] Rodriguez & Zuckerman (2012), [8]
Tokovinin (1997)
### 2.2 Stellar multiplicity
Investigating the stellar multiplicity, we found that 67% (8/12) of our F-star
sample are multiple systems, including wide (separations $>1000$ au) and very
wide (separations $>10000$ au) configurations.
Elliott & Bayo (2016) studied the occurrence of such system configurations and
found that the high fraction of multiples in the BPMG can be explained by the
unfolding of primordial triple systems which was investigated by Reipurth &
Mikkola (2012). The term “unfolding” means that in triple systems, while born
compact, one component is dynamically scattered into a very distant orbit
within a few Myr. Reipurth & Mikkola (2012) showed that if the component
scattered into a wide separation is of low mass while the close components are
more massive, the triple system is likely to be unstable and disrupted on a
short timescale into a massive binary system and a single low-mass star.
Elliott & Bayo (2016) find that the majority of the multiples’ distant
components in the BPMG are of low mass and therefore, the study predicts that
these multiple systems should decay within 100 Myr. In our sample, the systems
with low-mass binary companions are HD 29391, HD 164249, HD 173167 and HD
199143 for which we might expect a decay within the aforementioned time frame.
### 2.3 Planetary companions
In our sample of F stars only the most luminous star HD 29391, also known as
51 Eridani, is known to possess a planetary companion (e.g., Macintosh et al.,
2015; Nielsen et al., 2019). The system is located at a distance of 29.8 pc
and forms a multiple stellar system with the M-type binary star GJ 3305AB
(e.g., Janson et al., 2014). The companion 51 Eri b was discovered by the
Gemini Planet Imager Exoplanet Survey (GPIES, Patience et al., 2015; Nielsen
et al., 2019) with a projected separation of 13 au. Depending on the formation
model the estimated mass of the planet varies between 1…2$M_{\text{Jup}}$ for
a so-called “hot start” model (Marley et al., 2007; Rajan et al., 2017) and
2…12$M_{\text{Jup}}$ for a “cold start” model (Marley et al., 2007; Fortney et
al., 2008).
## 3 Assessing the sample for IR excess
### 3.1 Modelling procedure
We collected photometric data for all twelve targets in our sample from
published catalogues, such as 2MASS (Cutri et al., 2003), the WISE All-Sky
Release Catalog (Wright et al., 2010), the AKARI All-Sky Catalogue (Ishihara
et al., 2010), the Spitzer Heritage Archive (Carpenter et al., 2008;
Lebouteiller et al., 2011; Chen et al., 2014; Sierchio et al., 2014) and the
Herschel Point Source Catalogue (Marton et al., 2015). These data allowed us
to analyse the spectral energy distributions (SEDs) and therefore the
occurrence of infrared emission in excess of that expected from the stellar
photosphere. Mid- and far-infrared excesses are an indicator of the presence
of a debris disc surrounding a host star.
To find excesses we fit an SED model consisting of a star and a disc. We fit
PHOENIX stellar photosphere models (Brott & Hauschildt, 2005) for each target
using the stellar luminosity and the stellar temperature as model parameters.
The resulting stellar luminosities and temperatures are listed in Tab. 1.
Knowing the stellar contribution to the mid- and far-infrared data we were
able to derive the excess emission in the appropriate wavelength bands between
22 and 100$\mu$m taking into account the uncertainties of the photometry and
the photospheric model (Yelverton et al., 2019). The results are given in Tab.
2.
After subtracting the stellar emission the disc is fitted with a modified
blackbody model (Backman & Paresce, 1993) for which the thermal emission of
the dust is described as
$F_{\nu}\sim B_{\nu}(\lambda,T_{\text{dust}})\left[H(\lambda_{0}-\lambda)+H(\lambda-\lambda_{0})\left(\frac{\lambda}{\lambda_{0}}\right)^{-\beta}\right],$ (1)
where $B_{\nu}$ is the Planck function and $H$ the Heaviside step function.
The parameter $\lambda_{0}$ represents the characteristic wavelength while
$\beta$ is the opacity index. From this model we derive the dust temperature,
$T_{\text{BB}}$, and the resulting blackbody radius of the disc,
$R_{\text{BB}}$, as well as the fractional luminosity, $f_{\text{d}}$ (see
Tab. 3). Here, $R_{\text{BB}}$ is the distance from the star implied by the
dust temperature if the grains acted as blackbodies in equilibrium with the
stellar radiation. In § 5 we apply a disc model including dust size
distributions, which are not part of the framework of Yelverton et al. (2019).
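Eq. (1) and the blackbody radius can be written down directly. In the sketch below the normalisation is left arbitrary (the fractional luminosity $f_{\text{d}}$ is fitted separately), and the $278.3\,$K prefactor is the standard solar-normalised equilibrium temperature at 1 au; this is an illustration, not the authors' fitting code.

```python
import numpy as np

H = 6.62607015e-34   # Planck constant [J s]
C = 2.99792458e8     # speed of light [m s^-1]
KB = 1.380649e-23    # Boltzmann constant [J K^-1]

def planck_nu(lam_um, T):
    """Planck function B_nu at wavelength lam_um [micron], temperature T [K]."""
    nu = C / (lam_um * 1e-6)
    return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * T))

def modified_blackbody(lam_um, T_dust, lam0_um, beta):
    """Eq. (1): blackbody spectrum up to lambda_0, suppressed by
    (lambda/lambda_0)^-beta beyond it (arbitrary normalisation)."""
    lam_um = np.asarray(lam_um, dtype=float)
    damp = np.where(lam_um <= lam0_um, 1.0, (lam_um / lam0_um) ** (-beta))
    return planck_nu(lam_um, T_dust) * damp

def r_blackbody_au(T_bb, L_star):
    """Blackbody radius [au] implied by T_bb [K] around a star of L_star [L_sun]."""
    return (278.3 / T_bb) ** 2 * np.sqrt(L_star)
```

For example, dust at $278.3\,$K around a $1\,L_{\text{sun}}$ star sits at 1 au, and cooler dust moves outward as $T^{-2}$.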
The uncertainties of the fit parameters were inferred in the following way. We
start at the position of the minimum $\chi^{2}$ in parameter space, i.e. from
the best-fitting $f_{\text{d}}$ and $R_{\text{BB}}$. A set of new parameter
values is randomly generated, from which we calculate the SED. This leads to a
new $\chi^{2}$ value, which is compared to the former minimum value. The
$\chi^{2}$ value quantifies how well the set of parameter values fits the SED;
if the acceptance probability is larger than a certain threshold, the set is
saved. Finally, we count how often the code visits a given set of
$f_{\text{d}}$ and $R_{\text{BB}}$: the closer the parameters are to the
best-fitting values, the more often they are visited. The resulting
distribution in parameter space is an estimate of the probability distribution
of the parameters and thus allows us to calculate confidence levels, assuming
the values follow a normal distribution (simulated annealing; e.g., Pawellek,
2017).
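The accept-and-count procedure just described is essentially a Metropolis-style random walk over the $\chi^{2}$ surface. The sketch below is a minimal 1-D illustration assuming Gaussian proposal steps and the usual $\exp(-\Delta\chi^{2}/2)$ acceptance rule; the paper's own simulated-annealing variant may differ in detail.

```python
import numpy as np

def sample_chi2(chi2, x0, step, n_samples=20000, rng=None):
    """Random-walk sampling of a chi^2 surface: propose a new parameter
    set, accept it with probability exp(-(chi2_new - chi2_old)/2), and
    record every visited state.  Visit counts then approximate the
    probability distribution of the parameters."""
    rng = rng or np.random.default_rng()
    x = np.asarray(x0, dtype=float)
    c = chi2(x)
    chain = []
    for _ in range(n_samples):
        prop = x + step * rng.standard_normal(x.size)
        c_prop = chi2(prop)
        # Always accept downhill moves; accept uphill ones stochastically.
        if c_prop <= c or rng.random() < np.exp(-0.5 * (c_prop - c)):
            x, c = prop, c_prop
        chain.append(x.copy())
    return np.array(chain)
```

For a quadratic $\chi^{2}=((x-\mu)/\sigma)^{2}$ the chain recovers mean $\mu$ and width $\sigma$, i.e. the kind of $1\sigma$ confidence level quoted for $f_{\text{d}}$ and $R_{\text{BB}}$.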
### 3.2 Stars with IR excess
We identified nine out of twelve stars ($75\%$) that show infrared excess and
therefore suggest the presence of a debris disc. Since we cannot draw strong
conclusions on HD 173167 (see § 3.3) we might even say that nine out of eleven
systems ($\sim 82\%$) possess debris discs. A comparably high detection rate
of 6/9 F stars for the BPMG was noted by Churcher et al. (2011). As noted in
the introduction, this is in contrast to the results of studies which find a
typical occurrence rate for debris discs of $\sim$20% around FGK-stars for
volume limited samples with older mean ages around Gyr (e.g., Su et al., 2006;
Eiroa et al., 2013; Chen et al., 2014; Marshall et al., 2016; Sibthorpe et
al., 2018).
The fractional luminosities of the excess emission lie between $1.2\times
10^{-5}$ and $4.1\times 10^{-3}$, which are typical values for debris discs
(e.g., Eiroa et al., 2013; Chen et al., 2014; Holland et al., 2017; Sibthorpe
et al., 2018). The inferred blackbody temperatures lie between 51 and 600 K
corresponding to blackbody radii between 0.3 and 52 au.
Pawellek & Krivov (2015) found a relation between the ratio of the spatially
resolved disc radius seen at FIR wavelengths to blackbody radius and the
stellar luminosity of the form
$\frac{R_{\text{FIR}}}{R_{\text{BB}}}=A\left(\frac{L}{L_{\text{sun}}}\right)^{B}.$ (2)
The disc radius seen at FIR wavelengths in this relation is that inferred from
resolved Herschel/PACS imaging, and the blackbody radius that of a fit to the
spectrum that is comparable with the modified blackbody fit used here. We use
the updated values of $A$ and $B$ from Pawellek (2017) with $A=6.49\pm 0.86$
and $B=-0.37\pm 0.05$ assuming pure astronomical silicate (Draine, 2003) for
the dust material.
Estimates of the FIR radii of the discs using eq. (2) give values between 1.5
and 215 au which are $\sim$4 times larger than $R_{\text{BB}}$. In §4.4 we
compare those estimates to the observed disc radii of the spatially resolved
discs.
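As a consistency check, eq. (2) with the Pawellek (2017) coefficients reproduces the HD 29391 estimate quoted in § 3.3 ($R_{\text{BB}}=9$ au, $L=5.71\,L_{\text{sun}}$); the helper name below is ours, not from the paper.

```python
def r_fir_from_bb(r_bb_au, L_star, A=6.49, B=-0.37):
    """Eq. (2): R_FIR = A * (L/L_sun)^B * R_BB, coefficients from
    Pawellek (2017) for pure astronomical silicate."""
    return A * L_star**B * r_bb_au

# HD 29391 (51 Eri): R_BB = 9 au, L = 5.71 L_sun  ->  R_FIR ~ 30.7 au
r_fir = r_fir_from_bb(9.0, 5.71)
```

The ratio $R_{\text{FIR}}/R_{\text{BB}}\approx 3.4$ for this star is consistent with the sample-wide factor of $\sim$4 quoted above.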
### 3.3 Notes for particular targets
HD 14082A: For HD 14082A all WISE bands (3.4, 4.6, 12 and 22$\mu$m, see WISE
All-Sky Catalog, Wright et al., 2010) and Spitzer/MIPS (24$\mu$m, Chen et al.,
2014) exhibit significant excess emission, but no excess was found with
Spitzer/MIPS (70$\mu$m) or Herschel/PACS. The star forms a binary system
(Mamajek & Bell, 2014; Elliott & Bayo, 2016) with its companion (HD 14082B)
known to exhibit IR excess in the mid- and far-infrared (e.g., Riviere-
Marichalar et al., 2014). After checking the WISE and MIPS data we found the
photometry to be confused in all bands since those instruments were not able
to differentiate between the two stellar components. Thus, we assume no
significant excess emission for HD 14082A while the excess found around HD
14082B is real.
HD 29391: The star HD 29391 (51 Eri) shows significant excess at MIPS24,
MIPS70 and PACS100 providing a good constraint on the disc as noted previously
(Riviere-Marichalar et al., 2014). The target possesses the only planetary
companion detected in our sample (see §2.3). The planet’s separation is $\sim
13$ au. With the disc’s $R_{\text{BB}}=9\pm 2$ au and an estimated FIR radius
of $R_{\text{FIR}}=30.7\pm 13.4$ au we assume the planet to be located closer
to the star than the planetesimal belt.
HD 173167: The star HD 173167 only possesses photometric data up to 22$\mu$m
so that we cannot draw any conclusions about a possible far-infrared excess.
However, the mid-infrared data do not reveal significant excess.
HD 199143: Considering HD 199143, there are mid-infrared data available as
well as data from Herschel/PACS. The excess emission is significant only for
the MIPS24 band, but WISE data also show a marginal detection of excess
emission at 22$\mu$m. Despite the presence of a close binary companion we
could rule out confusion issues since the companion is several orders of
magnitude fainter than the primary. Therefore, we assume that we detect
emission from hot dust close to the star. In our sample this is the only
target with a close-in disc, with a dust temperature of 600 K and a blackbody
radius of 0.3 au. The FIR radius is estimated to be 1.5 au.
While Tab. 2 gives the significance of the 24$\mu$m excess as $7\sigma$, this
might be overestimated because the SED fit includes both star and disc.
Fitting the star without the disc component results in a 24$\mu$m excess of
$3\sigma$. Thus the excess is real, albeit at low significance.
Table 2: IR excesses. For each band the flux density $F_{\nu}$ and the predicted photospheric flux $F_{\nu,\star}$ are given in mJy, and the excess (Exc.) in units of $\sigma$.
HD | WISE22 $F_{\nu}$ | $F_{\nu,\star}$ | Exc. | MIPS24 $F_{\nu}$ | $F_{\nu,\star}$ | Exc. | MIPS70 $F_{\nu}$ | $F_{\nu,\star}$ | Exc. | PACS70 $F_{\nu}$ | $F_{\nu,\star}$ | Exc. | PACS100 $F_{\nu}$ | $F_{\nu,\star}$ | Exc.
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
203 | $126.5\pm 7.2$ | 64.2 | 8.6 | $120.5\pm 2.4$ | 55.9 | 27 | $61.0\pm 10.4$ | 6.1 | 5.3 | $71.5\pm 4.4$ | 6.4 | 15 | $41.3\pm 2.5$ | 3.2 | 15
14082A | $64.1\pm 3.8$ | 41.3 | 6.0∗ | $39.9\pm 0.8$ | 36.0 | 4.9∗ | … | 3.9 | … | $<13$ | 4.1 | … | $<8$ | 2.0 | …
15115 | $63.1\pm 3.8$ | 39.0 | 6.3 | $58.3\pm 2.3$ | 33.9 | 11 | $451.9\pm 32.6$ | 3.7 | 14 | $463.0\pm 14.5$ | 3.9 | 32 | … | 1.9 | …
29391 | $141.8\pm 8.0$ | 123 | 2.3 | $129.7\pm 2.6$ | 107 | 8.8 | $23.0\pm 0.92$ | 11.5 | 12 | $21.8\pm 3.8$ | 12.2 | 2.5 | $19.0\pm 3.0$ | 6.0 | 4.3
35850 | $96.9\pm 5.7$ | 87.5 | 1.6 | $83.5\pm 3.4$ | 76.2 | 2.1 | $40.3\pm 9.20$ | 8.3 | 3.5 | … | 8.8 | … | $46.1\pm 2.6$ | 4.3 | 16
160305 | $18.5\pm 1.7$ | 13.2 | 3.1 | … | 11.5 | … | … | 1.3 | … | $31.9\pm 1.6$ | 1.3 | 18 | … | 0.65 | …
164249 | $85.4\pm 5.1$ | 39.2 | 9.0 | $77.4\pm 1.6$ | 34.1 | 28 | $624.1\pm 62.4$ | 4.1 | 10 | … | 3.9 | … | $513.0\pm 17.7$ | 1.9 | 29
173167 | $33.9\pm 2.2$ | 29.3 | 2.0 | … | 25.5 | … | … | 3.7 | … | … | 2.9 | … | … | 1.4 | …
181327 | $212.2\pm 12.1$ | 34.6 | 15 | $205.4\pm 4.1$ | 30.1 | 43 | $1468\pm 249$ | 3.3 | 5.9 | … | 3.5 | … | $1463\pm 47$ | 1.7 | 31
191089 | $192.8\pm 10.9$ | 30.9 | 15 | $187.5\pm 3.8$ | 26.9 | 43 | $544.3\pm 50.6$ | 2.9 | 11 | … | 3.1 | … | $422.6\pm 13.5$ | 1.5 | 31
199143 | $45.2\pm 2.9$ | 37.8 | 2.5 | $38.8\pm 0.9$ | 32.9 | 7.0 | $<9$ | 3.6 | … | $4.89\pm 1.37$ | 3.8 | 0.80 | … | 1.9 | …
213429 | $107.3\pm 6.3$ | 105 | 0.40 | $93.1\pm 1.9$ | 91.2 | 1.0 | $22.2\pm 4.1$ | 10.0 | 2.9 | … | 10.5 | … | … | 5.2 | …
Notes: Errors include both statistical and systematic uncertainties. WISE data
from WISE All-Sky Catalog (Wright et al., 2010), MIPS24 and MIPS70 from
Spitzer Heritage Archive (Carpenter et al., 2008; Chen et al., 2014; Sierchio
et al., 2014), PACS70 and PACS100 from Herschel Science Archive (Eiroa et al.,
2013). Upper limits stem from Riviere-Marichalar et al. (2014). The thermal
emission caused by the dust material surrounding the star is given as excess
from the stellar photosphere in units of $\sigma$ and is considered to be
significant if it reaches a value larger than 3. (∗) The WISE and MIPS data of
HD 14082A were found to be confused and thus, are not taken into account when
checking the presence of IR excess.
Table 3: SED fitting results.
HD | Resolved | Modified blackbody model | | | | | Grain size distribution model | | | | | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---
 | | $f_{\text{d}}$ | $T_{\text{bb}}$ | $R_{\text{bb}}$ | $R_{\text{est, sub-mm}}$ | | $R_{\text{sub-mm}}$ | $s_{\text{blow}}$ | $s_{\text{min}}$ | $s_{\text{min}}/s_{\text{blow}}$ | $q_{\text{SED}}$ | $T_{\text{dust}}$ | $f_{\text{d}}$
 | | [$10^{-5}$] | [K] | [au] | [au] | | [au] | [$\mu$m] | [$\mu$m] | | | [K] | [$10^{-5}$]
203 | No | $15$ | $134\pm 4$ | $8.3\pm 1.5$ | $30\pm 8$ | | … | 1.18 | … | … | … | … | …
14082A | … | … | … | … | … | | … | 0.87 | … | … | … | … | …
15115 | Yes | $51$ | $62\pm 1$ | $38\pm 7$ | $129\pm 27$ | | $93\pm 21$ | 0.91 | $4.6\pm 0.2$ | $5.1\pm 0.2$ | $3.84\pm 0.06$ | $61.4\pm 1.9$ | $60.0$
29391 | No | $0.5$ | $101\pm 20$ | $18\pm 5$ | $21\pm 9$ | | … | 1.81 | … | … | … | … | …
35850 | No | $4.0$ | $74\pm 4$ | $19\pm 8$ | $54\pm 6$ | | … | 0.51 | … | … | … | … | …
160305 | Yes | $14$ | $56\pm 10$ | $32\pm 9$ | $71\pm 23$ | | $88\pm 2$ | 0.67 | $0.6\pm 0.6$ | $0.9\pm 0.8$ | 3.5${}^{a}$ | $77.4\pm 14.5$ | 22.4
164249 | Yes | $88$ | $60\pm 1$ | $39\pm 8$ | $93\pm 15$ | | $63\pm 24$ | 0.89 | $2.8\pm 0.1$ | $3.2\pm 0.2$ | $3.73\pm 0.05$ | $77.4\pm 2.6$ | $94.2$
173167 | … | … | … | … | … | | … | 0.85 | … | … | … | … | …
181327 | Yes | $264$ | $78\pm 1$ | $22\pm 4$ | $125\pm 22$ | | $81\pm 16$ | 1.02 | $1.1\pm 0.2$ | $1.1\pm 0.2$ | $3.45\pm 0.05$ | $61.4\pm 4.7$ | $227$
191089 | Yes | $151$ | $92\pm 1$ | $15\pm 3$ | $37\pm 4$ | | $45\pm 16$ | 0.98 | $1.2\pm 0.3$ | $1.2\pm 0.3$ | $3.43\pm 0.08$ | $83.6\pm 4.9$ | $118$
199143 | No | $47$ | $1000\pm 100$ | $0.2\pm 0.1$ | $0.8\pm 0.3$ | | … | 0.74 | … | … | … | … | …
213429 | … | … | … | … | … | | … | 0.74 | … | … | … | … | …
Notes: The blow-out limit is calculated assuming Mie theory and pure
astronomical silicate (Draine, 2003) with a bulk density of $3.3$ g/cm3. The
estimated disc radius seen at sub-mm wavelengths, $R_{\text{est, sub-mm}}$, is
calculated by eq. (2) using the parameters inferred in this work. A grain size distribution fit is done if the disc is spatially resolved. (a) The parameter $q_{\text{SED}}$ was fixed due to a lack of photometric data in the far-infrared. Only HD 15115 shows evidence for a warm disc component.
Figure 1: SEDs for the debris discs detected around F stars in the BPMG. Solid
lines show the modified blackbody fit. For spatially resolved targets, dashed
lines show the size distribution fit using Mie theory. Blue lines represent
the outer ring, red lines the inner ring (if present). For the SED of HD 15115
both disc components were fitted with a modified blackbody model (solid line)
and a size distribution model (dashed line).
### 3.4 IR excess in multiple systems
We found that all four stars in our sample without a known stellar companion
possess a debris disc (HD 203, HD 15115, HD 160305 and HD 191089).
Furthermore, three out of the five systems with companions at projected
separations larger than 135 au (HD 14082A, HD 29391, HD 164249, HD 173167 and
HD 181327) harbour a disc as well. Two systems have companions at projected separations below 25 au, of which one shows evidence of debris: HD 35850 has debris and a companion at a distance of 0.021 au, while HD 213429 has no debris and a companion with an estimated separation of $\sim$2 au. Only HD
199143 has a stellar companion at an intermediate separation of $\sim$50 au
(in addition to a wide separation component at $\sim 15000$ au, Tokovinin,
1997; Mamajek & Bell, 2014). Significant mid-infrared excess (see § 3.3) hints
at the presence of a close-in debris disc with $R_{\text{BB}}=0.3$ au.
These results are broadly consistent with those of Yelverton et al. (2019).
That study analysed a sample of 341 multiple systems and found that for binary
stars with separations between $\sim$25 and 135 au no discs could be detected.
Since these values are comparable to typical debris disc radii (e.g., Booth et
al., 2013; Pawellek et al., 2014; Matrà et al., 2018) it was suggested that
the binaries are clearing the primordial circumstellar or circumbinary
material via dynamical perturbation. While the detection rates for separations
larger than 135 au were found to be similar to the rates for single stars (at
$\sim$20%), only $\sim 8$% of binary systems with separations below 25 au
showed evidence for debris.
Thus, considering the three aforementioned targets (HD 35850, HD 199143, HD 213429), we would expect a lower number of disc detections for these systems, but with only three such systems we cannot draw any statistical conclusions about detection rates.
## 4 Disc imaging
Different observational wavelengths are sensitive to different sizes of dust
grains. While the emission seen by (sub-)mm telescopes such as ALMA is
expected to be dominated by thermal emission from mm-sized particles, near-
infrared instruments such as VLT/SPHERE (Beuzit et al., 2019) are expected to
trace scattered light from micron-sized grains. Particles as small as micron-
size are significantly affected by radiation pressure and other transport
processes (e.g., Burns et al., 1979) so that their distribution is expected to
extend far beyond their birth environment. In contrast, the large mm-sized
grains are expected to stay close to the parent belt. Hence, the disc size
inferred in sub-mm observations is the best tracer of the location of a
system’s planetesimal belt, which might differ from the radial extent seen in
near-infrared scattered light. Nevertheless, such short wavelength
observations can also be modelled to infer the planetesimal belt location, and
comparing the disc structure seen at different wavelengths provides
information on the physical mechanisms shaping debris discs.
### 4.1 ALMA reduction procedure
Atacama Large Millimeter/submillimeter Array (ALMA) observations of six out of
the twelve stars in our sample were retrieved from the ALMA archive. Three of
the analysed datasets have already been presented in literature work (HD
15115, HD 29391, HD 191089, MacGregor et al., 2019; Pérez et al., 2019; Kral
et al., 2020), but are re-analysed here to maintain consistency across the
sample. We also present the first ALMA image of HD 164249 and new images of
the disc around HD 181327. For the latter target another dataset was published
by Marino et al. (2016), but due to their lower resolution we do not use those
data here. In addition, we present new constraints for HD 14082A for
which dust emission was not detected (as was also the case for HD 29391).
The targets were observed as single pointings with the ALMA 12 m array within
the context of a variety of projects, over different ALMA Cycles, leading to
inhomogeneous sensitivities, resolutions, and wavelengths (see Table 4). For
each target, and each observing date, we carried out standard calibration
steps to obtain calibrated visibility datasets; we used the same CASA and
pipeline version as used in the original reduction delivered by the ALMA
observatory.
Later processing was carried out homogeneously in CASA v5.4.0. If available,
for each target, we concatenated multiple datasets at similar frequencies to
obtain a final combined visibility dataset. We also averaged in time (to 30s
integrations) and frequency (to 2 GHz-wide channels) to reduce the size of
each dataset and speed up imaging and modelling.
We then carried out continuum imaging using the CLEAN algorithm implemented
through the tclean CASA task. We used multiscale deconvolution (Cornwell,
2008) in multi-frequency synthesis mode, adapting the choice of visibility
weighting and/or tapering schemes to achieve a good trade-off between image
sensitivity and resolution. The weighting choices, RMS noise levels (measured
in image regions free of source emission), and synthesised beam sizes achieved
are listed in Table 4.
Table 4: BPMG F stars ALMA observations summary
Target | Date | Project ID | Band | Continuum RMS | Continuum Beam size | Image weighting | Original Reference for Dataset
---|---|---|---|---|---|---|---
 | [dd-mm-yyyy] | | | [$\mu$Jy beam$^{-1}$] | | |
HD14082A | 31-08-2015 | 2013.1.01147 | 6 | 150a | 1.8$\arcsec$ $\times$1.6$\arcsec$ | Natural | This work
HD15115 | 01-01-2016 | 2015.1.00633 | 6 | 15 | 0.6$\arcsec$ $\times$0.6$\arcsec$ | Briggs 0.5 | MacGregor et al. (2019)
| 09-06-2016 | 2015.1.00633 | 6 | | | | MacGregor et al. (2019)
HD29391 | 13-10-2016 | 2016.1.00358 | 6 | 23 | 0.2$\arcsec$ $\times$0.2$\arcsec$ | Natural | Pérez et al. (2019)
HD164249 | 10-03-2014 | 2012.1.00437 | 6 | 45 | 1.1$\arcsec$ $\times$1.0$\arcsec$ | 0.7$\arcsec$ Taper | This work
| 11-08-2015 | 2013.1.01147 | 6 | | | | This work
HD181327 | 25-07-2016 | 2015.1.00032 | 7 | 27 | 0.2$\arcsec$ $\times$0.2$\arcsec$ | Natural | This work
HD191089 | 23-03-2014 | 2012.1.00437 | 6 | 12 | 0.9$\arcsec$ $\times$0.6$\arcsec$ | Briggs 0.0 | This work
| 30-05-2018 | 2017.1.00704 | 6 | | | | Kral et al. (2020)
| 14-09-2018 | 2017.1.00200 | 6 | | | | Matrà et al. (in prep.)
Notes: (a) At field center. HD 14082A is however $\sim 14\arcsec$ from field center, where the primary beam level drops to 46%. The sensitivity per beam at that location would be $326$ $\mu$Jy beam$^{-1}$. RMS noise levels, beam sizes and weightings of multiple datasets refer to imaging of the joint datasets.
Discs are detected and resolved around four out of the six BPMG F stars with
existing ALMA observations. Fig. 2 shows the ALMA images for the resolved
discs, as well as the best-fit models, residuals, and deprojected
visibilities. No detection was achieved near the location of HD 14082A and HD
29391 in the respective images. We conservatively derive 3$\sigma$ upper
limits of $<5.8$ and $<3.5$ mJy for the flux density of the two belts,
respectively, by spatially integrating emission within a 5$\arcsec$ radius
circular region centered on the expected stellar location. The high values for
the upper limits are caused by the relatively small beam used for the
observation of HD 29391 and the fact that HD 14082A is significantly offset
from the phase center, increasing the already high RMS of that observation.
For both targets no (sub-)mm observations were reported in the literature
before.
For the visibility modelling, we follow the method described e.g. in Matrà et al. (2019), using RADMC-3D (http://www.ita.uni-heidelberg.de/~dullemond/software/radmc-3d/) to calculate model images from a given density distribution, which we here assume to be a radial and vertical Gaussian described by
$\rho=\Sigma_{0}\ e^{-\frac{(r-r_{\rm c})^{2}}{2\sigma^{2}}}\frac{e^{-\frac{z^{2}}{2(hr)^{2}}}}{\sqrt{2\pi}hr},$ (3)
with symbols having the same meaning as in Eq. 1 of Matrà et al. (2018). The
vertical aspect ratio $h=H/R$ is radially constant and fixed to 0.03 for belts
that are too face-on or too low S/N for it to be meaningfully constrained.
Additionally, rather than fitting for the normalization factor $\Sigma_{0}$,
we fit for the total flux density of the belt in the model images. When
calculating model images, we also assume the grains to act as blackbodies and
therefore to have a temperature scaling as $r^{-0.5}$.
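The belt model of Eq. (3) and the blackbody temperature scaling can be sketched in a few lines (a minimal numpy illustration; the function names, the unit normalisation $\Sigma_{0}=1$, and the standard 278.3 K blackbody prefactor are our own choices, not quantities from the fit):

```python
import numpy as np

def ring_density(r, z, r_c, sigma, h, Sigma0=1.0):
    """Radial and vertical Gaussian belt density of Eq. (3)."""
    radial = Sigma0 * np.exp(-(r - r_c) ** 2 / (2.0 * sigma ** 2))
    # Vertical Gaussian with scale height H = h * r, normalised over z
    vertical = (np.exp(-z ** 2 / (2.0 * (h * r) ** 2))
                / (np.sqrt(2.0 * np.pi) * h * r))
    return radial * vertical

def blackbody_temperature(r_au, L_star=1.0):
    """Blackbody grain temperature, T ∝ r^-0.5 (278.3 K at 1 au, 1 L_sun)."""
    return 278.3 * L_star ** 0.25 / np.sqrt(r_au)
```

For example, the HD 164249 belt would be described by `r_c = 63`, `sigma = 24` and the fixed aspect ratio `h = 0.03` used for low-S/N or face-on belts.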
After producing a model image, we Fourier transform it and sample the model
visibility function at the same u-v locations as the data using the GALARIO
software package (Tazzari et al., 2018). This produces model visibilities that
can be directly compared with the observed ones. This process is then used to
fit the model to the data through a Markov Chain Monte Carlo (MCMC)
implemented using the EMCEE v3 software package (Foreman-Mackey et al., 2013;
Foreman-Mackey et al., 2019). This samples the posterior probability function
of the n-dimensional parameter space of our model using the affine-invariant
sampler of Goodman & Weare (2010). We use a likelihood function $\propto
e^{-\chi^{2}}$ and uniform priors on all model parameters. In addition to the
model parameters entering the equation describing the density distribution, we
fit for RA and Dec offsets of the belt’s geometric center from the phase
center of the observations, and for a weight rescaling factor to account for
the inaccurate data weights (and hence uncertainties) typically delivered by
ALMA (e.g. Marino et al., 2018; Matrà et al., 2019). We fit these additional,
nuisance parameters separately to each observing date for any given target.
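The structure of the fit — uniform priors on all parameters and a likelihood $\propto e^{-\chi^{2}}$ — is captured by the log-probability function handed to the sampler. Below is a simplified stand-in: `model_fn` is a generic placeholder, whereas in the actual fit the model visibilities come from RADMC-3D images sampled with GALARIO, and nuisance parameters (offsets, weight rescaling) enter `theta` alongside the belt parameters:

```python
import numpy as np

def log_probability(theta, bounds, data, sigma, model_fn):
    """ln-posterior: top-hat (uniform) priors and ln L = -chi^2."""
    for p, (lo, hi) in zip(theta, bounds):
        if not (lo <= p <= hi):
            return -np.inf              # outside the uniform prior box
    resid = (data - model_fn(theta)) / sigma
    return -np.sum(np.abs(resid) ** 2)  # likelihood proportional to exp(-chi^2)
```

In the paper this function is sampled with the affine-invariant ensemble sampler of Goodman & Weare (2010) via emcee v3.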
Tab. 5 shows best-fit parameters and uncertainties derived for each of the resolved belts, taken as the median (50th percentile) of the marginalised posterior probability distribution of each parameter, with uncertainties given by the 16th and 84th percentiles. Fig. 2
shows full-resolution model images and residuals obtained by subtracting the
best fit visibility model from the data, and imaging without CLEAN
deconvolution. The residual images look mostly consistent with noise,
indicating that our models provide a very good fit to the data. However, we
note that some residual, extended emission is detected 1) interior to the HD
181327 ring, to the SE of the star, and 2) at the SW ansa, and along the S
side of the HD 191089 belt. While this could be due to true substructure in
the dust morphology of these systems, this does not significantly affect the
measurement of the radius of the bulk of the planetesimal belt material, which
we are most interested in.
Figure 2: ALMA models for four resolved debris discs. From top to bottom: HD
15115, HD 164249, HD 181327, HD 191089. Table 5: Resolved discs and fitting
parameters for ALMA models.
HD | $\lambda$ | $F_{\nu_{\star}}$ | $F_{\nu_{\rm belt}}$ | $R$ | $\Delta R$ | $h$ | $i$ | PA | $\Delta$RA | $\Delta$Dec
---|---|---|---|---|---|---|---|---|---|---
| [$\mu$m] | [$\mu$Jy] | [mJy] | [au] | [au] | | [∘] | [∘] | [$\arcsec$] | [$\arcsec$]
15115 | $1340$ | ${}^{a}43^{+20}_{-20}$ | $2.02^{+0.06}_{-0.06}$ | $93.4^{+1.0}_{-1.3}$ | ${}^{a}21^{+6}_{-7}$ | ${}^{a}0.051^{+0.012}_{-0.016}$ | ${}^{b}87.8^{+1.4}_{-1.3}$ | $98.5^{+0.3}_{-0.3}$ | $0.08^{+0.03}_{-0.03}$ | $-0.04^{+0.01}_{-0.01}$
164249 | $1350$ | $-$ | $0.96^{+0.14}_{-0.13}$ | $63^{+4}_{-3}$ | ${}^{a}24^{+11}_{-11}$ | $-$ | $<49$ | ${}^{c}113$ | $-0.08^{+0.08}_{-0.09}$ | $-0.17^{+0.08}_{-0.08}$
⋆181327 | $880$ | ${}^{a}39^{+25}_{-21}$ | $18.8^{+0.3}_{-0.3}$ | $81.3^{+0.3}_{-0.3}$ | $16.0^{+0.5}_{-0.6}$ | $<0.09$ | $30.0^{+0.5}_{-0.5}$ | $97.8^{+1.0}_{-1.0}$ | $-0.005^{+0.005}_{-0.005}$ | $-0.028^{+0.004}_{-0.004}$
⋆191089 | $1270$ | ${}^{a}45^{+21}_{-21}$ | $1.83^{+0.03}_{-0.03}$ | $44.8^{+0.9}_{-0.9}$ | $16^{+3}_{-3}$ | ${}^{a}0.10^{+0.04}_{-0.05}$ | $60^{+1}_{-1}$ | $73^{+1}_{-1}$ | ${}^{d}0.032^{+0.012}_{-0.012}$ | ${}^{d}-0.012^{+0.008}_{-0.008}$
Notes: ⋆The model fit leaves significant residuals. a Marginally
resolved/detected, i.e. having a posterior probability distribution with a
non-zero peak but consistent with zero at the $3\sigma$ level. b Inclination
consistent with 90∘ (perfectly edge-on) at the 3$\sigma$ level. c Quantity
unconstrained at the 3$\sigma$ level, but with a pronounced peak at the median
value reported. d Offsets refer to 2018 dataset. For 2014 dataset, offsets
were $\Delta$RA=$0.12^{+0.04}_{-0.04}$ and $\Delta$Dec=$0.02^{+0.02}_{-0.02}$.
### 4.2 ALMA Results
#### 4.2.1 A newly resolved disc around HD 164249
The disc around HD 164249 was observed with ALMA at 1.35 mm and is spatially resolved for the first time, increasing the number of resolved debris discs reported in the literature to 153 according to the catalogue of resolved discs (https://www.astro.uni-jena.de/index.php/theory/catalog-of-resolved-disks.html). It shows a face-on orientation with an inclination below 49∘. The
planetesimal belt is found at 63 au with a disc width of 24 au using a
Gaussian disc model. The disc was not resolved at any other wavelength before.
#### 4.2.2 Previously resolved discs
We re-analysed the data sets of two targets (HD 15115, HD 191089) presented in
former studies to infer the system parameters, such as the disc radius, in a
consistent way and present the results of new high-resolution data for HD
181327.
HD 15115: We find the edge-on disc of HD 15115 with an inclination of 88∘ to
be located at $93.4^{+1.0}_{-1.3}$ au with a disc width of $21^{+6}_{-7}$ au
using a Gaussian ring model. The results of MacGregor et al. (2015) and MacGregor et al. (2019), which are based on the same dataset as our study, suggest that the disc extends from 44 to 92 au with a 14 au wide gap at 59 au. MacGregor et al. (2019) suggest that a planet with a mass of $0.16\,M_{\text{Jup}}$ is creating this gap, but so far no planet has been detected (see § 2.3). Our results do not show evidence for a
the two models; MacGregor et al. (2019) assumes a 2D disc model using a power
law for the radial surface density distribution and an infinitesimally small
vertical scale height, whereas our disc model assumes Gaussian radial and
vertical density distributions (the latter was found to be marginally resolved
in our analysis).
HD 181327: The face-on disc around HD 181327 was inferred to have a radius of
$81.3\pm 0.3$ au and width of $16^{+0.5}_{-0.6}$ au using a Gaussian ring
model. This is comparable with the 86 au radius and width of 23 au found by
Marino et al. (2016) from lower resolution ALMA Band 6 data.
HD 191089: The debris disc around HD 191089 was observed at 1.27 mm and formerly presented in Kral et al. (2020), which reported a disc ring at $43.4\pm 2.9$ au with a width of $<22.5$ au and an inclination of $\sim 52^{\circ}$. With our Gaussian ring model we inferred an inclination of 60∘
and a disc radius of $44.8\pm 0.9$ au with a width of $16\pm 3$ au. The scale
height was constrained to be smaller than 0.1 at a $3\sigma$ significance. We
note that our data-set does not only contain the data used in Kral et al.
(2020), but a combination of those with data from the “Resolved ALMA and SMA
Observations of Nearby Stars” (REASONS) programme (Sepulveda et al., 2019)
which have a higher spatial resolution, as well as older observations from
2012 (see Tab. 4).
#### 4.2.3 Gas emission
We visually checked the data cubes of the four ALMA resolved targets for CO
gas emission, but did not detect any. HD 181327 is the only target in our
sample of F stars in the BMPG with a gas detection presented in Marino et al.
(2016). That study found a significant amount of $^{12}$CO in its disc based on the J=2-1 transition and inferred a total CO-gas mass of $1.2\ldots
2.9\times 10^{-6}M_{\oplus}$. The gas is consistent with a secondary origin if
the planetesimals in the disc around HD 181327 possess a similar volatile
fraction compared to Solar system comets. Our observations covered the J=3-2 transition. The non-detection could be consistent with the J=2-1
detection depending on excitation conditions, but a full gas analysis,
including optimising detection, and considering the wide range of possible
excitation conditions is needed to draw a definitive conclusion.
### 4.3 Imaging at other wavelengths
#### 4.3.1 Scattered light and MIR observations
Scattered light observations give us an additional opportunity to estimate the
planetesimal belt radii of discs especially if they were not observed in
thermal emission. Furthermore, observations at wavelengths shorter than sub-mm
trace dust grain sizes smaller than those seen with ALMA and thus can help to
investigate transport processes within the discs. While most of the spatially
resolved discs in the BPMG were observed with ALMA, there is one disc (HD
160305) only observed in scattered light.
HD 160305: The disc around HD 160305 was recently detected with VLT/SPHERE by
Perrot et al. (2019) in scattered light. The debris dust is confined to a
narrow ring between 86 and 90 au. It shows a near edge-on inclination and a
brightness asymmetry between its western and eastern side. Perrot et al.
(2019) suggest different scenarios as the reason for this asymmetry, such as
strong recent collisions of planetesimals, interactions with massive
companions, or pericentre glow effects, but were not able to differentiate between these scenarios.
HD 15115: Scattered light observations of HD 15115 (e.g., Kalas et al., 2007;
Engler et al., 2018) revealed a strong asymmetry of the disc which is not seen
in ALMA observations. Kalas et al. (2007) report a disc extent up to 580 au on
the west side and 340 au on the east side. MacGregor et al. (2019) concluded
that the mechanism causing the asymmetry is only affecting the smallest
grains, suggesting interaction with the local ISM as a likely reason for it.
Engler et al. (2018) derived the maximum of polarised flux density at a
location of $94\pm 2$ au, which is assumed to correspond to the location of
the planetesimal belt (similar to the radius we find in Tab. 5).
HD 181327: From HST/NICMOS scattered light observations of HD 181327, Schneider et al. (2006) derived a disc radius of 86 au with a width of 36 au. While the radius is in agreement with our ALMA-based results, the disc
width is broader in scattered light than at sub-mm wavelengths. We would
expect a broader disc at shorter wavelengths since such observations trace
smaller particles which are more susceptible to transport processes.
Asymmetries were reported by Stark et al. (2014), who suggested a recent catastrophic disruption or a warping of the disc by the ISM as probable causes.
HD 191089: Churcher et al. (2011) observed HD 191089 at 18.3$\mu$m with T-ReCS
on Gemini South and found excess emission between 28 and 90 au. This is in
agreement with the belt location inferred from observations of the HD 191089 disc performed with HST/NICMOS, HST/STIS and Gemini/GPI (Ren et al., 2019). That
study detected scattered light between 26 and 78 au. In addition to the dust
ring a halo was found extending to $\sim 640$ au, but no brightness
asymmetries were identified. However, similar to HD 181327 the disc is broader
in scattered light than at mm wavelengths.
#### 4.3.2 FIR observations with Herschel/PACS
Three discs in the BPMG sample were spatially resolved in the FIR with
Herschel/PACS (HD 15115, HD 164249, and HD 181327). To infer their radii in a
homogeneous way we apply the method described in Yelverton et al. (2019) and
Xuan et al. (2020). The PSF of the star is derived from observations of the
Herschel calibration star HD 164058 and then rotated to the appropriate
orientation and scaled to the stellar flux derived from the SED. Then we
generate an axisymmetric, optically thin disc model and assume that the
surface brightness is proportional to $r^{-1.5}$ with $r$ being the distance
to the star. The dust temperature at a distance $r$ is assumed to follow
$r^{-0.5}$ as for blackbody grains. The free parameters of the model are the
total disc flux density, the inner and outer edges of the disc,
$R_{\text{in}}$ and $R_{\text{out}}$, inclination, and position angle. We also
include a 2D offset to account for non-perfect Herschel pointing. The model
disc is convolved with the PSF and then compared to the Herschel/PACS image by
calculating the goodness of fit, $\chi^{2}$. We estimate the parameters using
the emcee package. To get disc radii to be compared with those inferred from
ALMA images that assumed a Gaussian radial profile, we derive a radius from
Herschel/PACS, $R_{\text{FIR}}=0.5\times(R_{\text{in}}+R_{\text{out}})$. We
note that in some cases the inner and outer edge of a disc might be poorly
constrained, and in any case $R_{\text{FIR}}$ could differ from the location
of the peak of the surface brightness that would have been inferred if
observed at higher spatial resolution.
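The geometry of this model and the definition of $R_{\text{FIR}}$ can be sketched as follows (a face-on toy version; inclination, position angle, PSF convolution and the pointing offsets of the real fit are omitted, and the function names are our own):

```python
import numpy as np

def disc_model_image(npix, pixel_au, r_in, r_out, index=-1.5):
    """Axisymmetric, optically thin disc: surface brightness ∝ r^-1.5
    between R_in and R_out, zero elsewhere (face-on sketch)."""
    y, x = np.indices((npix, npix))
    r = np.hypot(x - npix // 2, y - npix // 2) * pixel_au
    inside = (r >= r_in) & (r <= r_out)
    # maximum() guards the central pixel against r = 0
    return np.where(inside, np.maximum(r, pixel_au) ** index, 0.0)

def r_fir(r_in, r_out):
    """Representative FIR radius, R_FIR = 0.5 * (R_in + R_out)."""
    return 0.5 * (r_in + r_out)
```

For HD 15115, `r_fir(40.6, 145.4)` gives 93.0 au, consistent with the ALMA radius of 93.4 au.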
The modelling results of the Herschel/PACS images for the BPMG sample are
listed in Tab. 6. In all cases the inclination and position angle are in
agreement with ALMA observations listed in Tab. 5. For HD 15115 and HD 181327
the radii, $R_{\text{FIR}}$, are in agreement with values inferred from ALMA
observations, but possess uncertainties of up to 25%. The disc widths seem to
be larger compared to ALMA, but only show deviations within $2\sigma$ so that
we assume the discs to be similar in ALMA and Herschel. A broader extent in
Herschel might indicate the presence of transport mechanisms altering the
orbits of smaller dust particles towards larger eccentricities. For HD 164249
the Herschel results show large uncertainties due to the low spatial
resolution. The Herschel radius is a factor 2.2 smaller than that from ALMA,
however the two radii are not significantly different ($\sim 2\sigma$) and the
higher resolution ALMA result is a better estimate of the planetesimal belt
location.
Table 6: Discs resolved with Herschel/PACS.
HD | $R_{\text{in}}$ | $R_{\text{out}}$ | $R_{\text{FIR}}$ | $\Delta R_{\text{FIR}}$ | $i$ | PA
---|---|---|---|---|---|---
| [au] | [au] | [au] | [au] | [∘] | [∘]
15115 | $40.6\pm 23.7$ | $145.4\pm 29.0$ | $93.0\pm 18.7$ | $52.4\pm 18.7$ | $85.9\pm 3.7$ | $98.9\pm 2.4$
164249 | $13.3\pm 11.5$ | $43.0\pm 21.2$ | $28.2\pm 12.0$ | $14.8\pm 12.0$ | $68.1\pm 20.8$ | $175.1\pm 66.5$
181327 | $25.7\pm 25.5$ | $134.4\pm 30.8$ | $80.0\pm 20.0$ | $54.3\pm 20.0$ | $30.1\pm 8.9$ | $101.6\pm 11.6$
Notes: $R_{\text{in}}$ and $R_{\text{out}}$ give the inner and outer radii for
Herschel/PACS inferred with the method described in § 4.3.2 following the
procedure of Yelverton et al. (2019) and Xuan et al. (2020). $R_{\text{FIR}}$
is the central radius defined as
$R_{\text{FIR}}=0.5*(R_{\text{in}}+R_{\text{out}})$, $\Delta R_{\text{FIR}}$
gives the disc width. The parameters $i$ and PA give the inclination and
position angle.
### 4.4 Ratio of spatially resolved disc radius to blackbody radius
We calculate the ratio of sub-mm radius to blackbody radius for the four ALMA
resolved discs in our sample. In addition, we infer the ratio of the scattered
light radius to blackbody radius for HD 160305. Given the knowledge that there
is a potential trend in sub-mm disc sizes with stellar luminosity (Matrà et
al., 2018), and also a trend in the far-infrared size to blackbody radius
ratio with stellar luminosity (Booth et al., 2013; Pawellek & Krivov, 2015),
we compare the five discs to a sample of ALMA-resolved discs with a broader
stellar luminosity range (Matrà et al., 2018). In Fig. 3 we plot the radius
ratio as a function of stellar luminosity. The actual disc width inferred by
ALMA observations (see Tab. 5) is given by the error bars.
For our sample of F stars the values of this ratio lie between 1.6 and 3.4, with the system of lowest stellar luminosity (HD 160305) possessing the highest value. Including the ALMA sample of Matrà et al. (2018) and fitting a trend of the form of eq. 2 for how the ratio depends on stellar luminosity, we infer a slight decrease of the ratio with stellar luminosity, finding parameter values of $A=2.92\pm 0.50$ and $B=-0.13\pm 0.07$.
Figure 3: Spatially resolved disc radius to blackbody radius ratio as a
function of stellar luminosity for NIR, FIR, and sub-mm wavelengths. Black
circles show the ALMA sample from Matrà et al. (2018), red asterisks show the
ALMA-resolved F stars and the blue square the SPHERE-resolved F star HD
160305. The grey shaded areas depict the 1, 2, and $3\sigma$ levels of the
correlation. The green dashed line shows the trend found in Pawellek (2017)
from the Herschel resolved disc radius.
This result is different from the trend presented in Pawellek & Krivov (2015)
and Pawellek (2017) based on disc sizes from Herschel images which showed
parameter values of $A=6.49\pm 0.86$ and $B=-0.37\pm 0.05$ (see green dashed
line in Fig. 3). For systems with stellar luminosities larger than
$5L_{\text{sun}}$ the radius ratio of the ALMA sample is in agreement with
Pawellek & Krivov (2015). The different fit is caused by a number of systems
with lower luminosities including our sample of ALMA resolved F stars that
show relatively small ratios. Possible reasons for the different trends could
be that Herschel had a lower resolution and so there may be systematic
uncertainties in the derived disc radii, or the discs could be larger when
traced in the far-IR due to such wavelengths tracing small dust in the halo
that extends beyond the planetesimal belt. However, our analysis of the
Herschel images of the BPMG F stars in § 4.3.2 inferred radii that are
consistent with those from ALMA images. Considering Pawellek & Krivov (2015),
none of the BPMG targets was used to derive the radius ratio vs luminosity
trend, but the study inferred radii between 93 and 112 au for HD 181327
depending on the dust composition assumed, which is in agreement with the
results of ALMA and Herschel. A more detailed analysis is needed to
investigate possible causes for the different outcomes between the Herschel
and ALMA samples. A systematic difference might indicate the presence of
dynamical processes affecting the size distribution in a way not considered
before.
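The fitted trend has the power-law form $\Gamma=A\,L_{\star}^{B}$, which reduces to linear regression in log–log space; a minimal sketch (using `np.polyfit` as a stand-in for the actual fitting procedure described above):

```python
import numpy as np

def fit_ratio_trend(L_star, ratio):
    """Fit ratio = A * L^B by least squares in log-log space; returns (A, B)."""
    B, logA = np.polyfit(np.log10(L_star), np.log10(ratio), 1)
    return 10.0 ** logA, B

def predicted_ratio(L_star, A=2.92, B=-0.13):
    """Ratio predicted with the ALMA-based parameters inferred in this section."""
    return A * np.asarray(L_star, dtype=float) ** B
```

For a 3 $L_{\text{sun}}$ F star this predicts a ratio of about 2.5, versus about 4.3 from the Herschel-based trend ($A=6.49$, $B=-0.37$).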
## 5 SED modelling revisited
As mentioned before, five discs of our sample were spatially resolved (four
with ALMA and one with SPHERE, see Tab. 5). This allows us to apply a more
detailed model to fit the SEDs of these five discs rather than using a simple
modified blackbody model as in §3. In the following approach we model the dust
size distribution and composition.
### 5.1 Modelling approach
We use the SONATA code (Pawellek et al., 2014; Pawellek & Krivov, 2015) and
apply the same PHOENIX-GAIA stellar photospheric models (Brott & Hauschildt,
2005) to determine the host star contribution in a similar approach as for the
modified blackbody fits (MBB, see §3). While for the MBB model we simply fitted a dust temperature and a fractional luminosity without consideration of dust properties, the SONATA code calculates the temperature and the thermal
emission of dust particles at different distances to the star. It assumes
compact spherical grains and uses Mie theory to derive the absorption
efficiencies (Bohren & Huffman, 1983). The dust composition is assumed to be
pure astronomical silicate (Draine, 2003) with a bulk density of 3.3 g/cm3.
The code sums up the emission of particles within a range of sizes to generate
the SED. The flux densities given for wavelengths shorter than 5 $\mu$m are
not used to fit the dust disc since in this wavelength regime the stellar
photosphere rather than the dust dominates the emission.
We apply a power law for the size distribution and assume a Gaussian radial
distribution of the dust using the surface number density $N(r,s)$:
$N_{\text{SED}}(r,s)\sim s^{-q_{\text{SED}}}\frac{1}{\sqrt{2\pi}\Delta
R_{\text{disc}}}\exp\left[-\frac{1}{2}\left(\frac{r-R_{\text{disc}}}{\Delta
R_{\text{disc}}}\right)^{2}\right].$ (4)
Here, $r$ represents the distance to the star, $R_{\text{disc}}$ the peak and
$\Delta R_{\text{disc}}$ the width of the radial distribution. The parameter
$s$ is the grain radius and $q_{\text{SED}}$ is the SED power-law index for
the size distribution. The surface number density is directly connected to the
surface density, $\Sigma$, by $\Sigma(r,s)\,ds=\pi s^{2}\,N(r,s)\,ds$.
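As a sketch of how Eq. (4) is evaluated, the snippet below implements the unnormalised surface number density; the ring parameters (a 50 au belt of 10 au width) and the fixed size-distribution index are hypothetical placeholder values, not fit results from the text:

```python
import math

def surface_number_density(r, s, q_sed=3.5, R_disc=50.0, dR_disc=10.0, s_min=1.0):
    """Unnormalised N(r, s) of Eq. (4): power-law size distribution times a
    Gaussian radial profile. r, R_disc, dR_disc in au; s, s_min in micron.
    The absolute scaling (set by the dust mass) is omitted here."""
    radial = math.exp(-0.5 * ((r - R_disc) / dR_disc) ** 2)
    radial /= math.sqrt(2.0 * math.pi) * dR_disc
    return (s / s_min) ** (-q_sed) * radial

def surface_density(r, s):
    """Sigma(r, s) = pi s^2 N(r, s), per unit grain size."""
    return math.pi * s ** 2 * surface_number_density(r, s)

# The radial profile peaks at R_disc, and q_SED > 3 means small grains
# dominate the total cross-section:
grid = [0.1 * i for i in range(1001)]   # 0..100 au
peak = max(grid, key=lambda r: surface_number_density(r, 1.0))
print(peak)  # peaks at R_disc = 50 au
```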
The grain sizes lie between a minimum and a maximum value, $s_{\text{min}}$
and $s_{\text{max}}$, where we fix the maximum grain size to 10 cm. Larger
grains do not contribute to the SED in the observed wavelength range for the
size distributions considered here with $q_{\text{SED}}>3$. In addition, we
fix the radial parameters to the values inferred from our resolved images (see
Tab. 5). Therefore, we are left with three free parameters to fit: the minimum
grain size, $s_{\text{min}}$, the size distribution index, $q_{\text{SED}}$,
and the amount of dust, $M_{\text{dust}}$, for particles between
$s_{\text{min}}$ and $s_{\text{max}}$ assuming a bulk density $\varrho$.
We follow the three criteria given in Ballering et al. (2013) and Pawellek et
al. (2014) to check for the presence of a warm component in the five discs.
First, there has to be a significant excess ($\geq 3\sigma$) in either the
WISE/22 or MIPS/24 band beyond what could originate from a single ring fitted
to the longer-wavelength data. Secondly, the two-component SED fit has to be
much better than the one-component fit. While the former studies assumed a
better two-component fit when $\chi^{2}_{\text{one}}/\chi^{2}_{\text{two}}>3$
was fulfilled, we use the Bayesian information criterion (BIC) instead, which
is
$\text{BIC}=\chi^{2}+J\log_{e}{(N)},$ (5)
where $J$ represents the number of free parameters and $N$ the number of data
points. We use the classification given in Kass & Raftery (1995) to infer
whether a one- or a two-component model is more likely. As a third criterion
we require the inferred ring containing the warm dust to be located outside
the sublimation radius $R_{\text{sub}}$ (assuming 1300 K as the sublimation
temperature).
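The second and third criteria can be sketched as follows. The BIC of Eq. (5) is compared on the Kass & Raftery (1995) scale (the threshold of 6, "strong evidence", is our assumption for illustration), and the sublimation-radius check below assumes blackbody grains, for which $T_{\text{BB}}\simeq 278.3\,(L/L_{\text{sun}})^{1/4}(r/\text{au})^{-1/2}$ K; the example numbers are hypothetical:

```python
import math

def bic(chi2, n_free, n_data):
    """Bayesian information criterion of Eq. (5): BIC = chi^2 + J ln(N)."""
    return chi2 + n_free * math.log(n_data)

def prefer_two_component(chi2_one, j_one, chi2_two, j_two, n_data, threshold=6.0):
    """True when the two-component fit is favoured; a BIC difference above
    ~6 counts as strong evidence on the Kass & Raftery (1995) scale (the
    exact cut used in the paper is not stated here)."""
    return bic(chi2_one, j_one, n_data) - bic(chi2_two, j_two, n_data) > threshold

def r_sub_au(lum, t_sub=1300.0):
    """Blackbody sublimation radius in au, from
    T_BB = 278.3 (L/Lsun)^(1/4) (r/au)^(-1/2) K."""
    return (278.3 / t_sub) ** 2 * math.sqrt(lum)

# Hypothetical example: 20 data points, chi^2 drops from 60 to 30 when
# going from 3 to 5 free parameters -> two components preferred:
print(prefer_two_component(60.0, 3, 30.0, 5, 20))  # -> True
print(round(r_sub_au(3.0), 3))  # sublimation radius for a 3 Lsun star [au]
```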
If all three criteria are fulfilled we obtain the two-component model in the
following way. In a first step we assume the warm dust to be modelled by a
pure blackbody to infer its blackbody temperature and radius. We assume this
radius to be the location of the warm dust belt and fix the belt width to
$\Delta R_{\text{disc}}/R_{\text{disc}}=10\%$. Finally, we fit both disc
components assuming that the warm and cold dust ring possess the same size
distribution of dust grains.
As for the cold dust ring, it is likely that the sub-mm radius of the warm
belt is larger than its blackbody radius. Applying the newly inferred values
presented in § 4.4 would give a factor of $\sim 2.5$, but it could be smaller
or larger, since a systematically different dust temperature or composition
for the warm belt could shift this ratio.
For the disc around HD 160305 only four mid- and far-infrared data points
(WISE12, WISE22, PACS70, PACS160) are listed in the literature. Therefore, we
fix the size distribution index, $q_{\text{SED}}$, to 3.5 (the outcome of an
ideal collisional cascade, Dohnanyi, 1969) to reduce the number of free
parameters.
### 5.2 Fitting results
Following the criteria for two-component models, we first checked the SEDs of
the four resolved discs around HD 15115, HD 164249, HD 181327 and HD 191089
for the presence of a warm inner component. Only HD 15115 fulfills all three
criteria, so we fitted this disc with a two-component model. The SED fitting
results for the whole sample are summarised in Tab. 3 and the SEDs are
depicted in Fig. 1.
Collisional evolution models show that grains smaller than a certain blow-out
size, $s_{\text{blow}}$, are expelled from the stellar system due to radiation
pressure. The blow-out size depends on the optical parameters of the dust
material and increases with increasing stellar luminosity. We would expect the
minimum grain size, $s_{\text{min}}$, to be comparable to $s_{\text{blow}}$.
However, previous studies of grain size distributions (e.g., Pawellek et al.,
2014; Pawellek & Krivov, 2015) found that $s_{\text{min}}$ is only weakly
connected to the stellar luminosity; it might even be independent of it,
since those studies found an average value of $\sim 5\mu$m to fit the
majority of the debris discs analysed therein. It was also found that the
ratio between $s_{\text{min}}$ and $s_{\text{blow}}$ is $\sim 4\ldots 5$ for
discs around host stars with stellar luminosities between 2 and 5
$L_{\text{sun}}$ (Pawellek et al., 2014). The
$s_{\text{min}}/s_{\text{blow}}$ ratio is thought to be connected to the
dynamical excitation of the planetesimals producing the visible dust (e.g.,
Krijt & Kama, 2014; Thebault, 2016). Earlier studies, such as Krivov et al.
(2006) or Thébault & Augereau (2007) suggest a value around 2 for
collisionally active discs.
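For orientation, the blow-out size follows from the ratio $\beta$ of the radiation pressure force to gravity. The sketch below assumes compact grains with a radiation pressure efficiency $Q_{\text{pr}}=1$ (geometric optics), so the numbers are indicative only:

```python
import math

G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
C = 2.998e8        # speed of light [m/s]
L_SUN = 3.828e26   # solar luminosity [W]
M_SUN = 1.989e30   # solar mass [kg]

def s_blow_micron(lum, mass, rho=3300.0, q_pr=1.0):
    """Blow-out grain radius in micron for grains released on circular orbits.

    beta = F_rad / F_grav = 3 L Q_pr / (16 pi G M c rho s); grains with
    beta > 0.5 are unbound, giving s_blow = 3 L Q_pr / (8 pi G M c rho).
    lum, mass in solar units; rho in kg/m^3 (3300 = astrosilicate).
    """
    s = 3.0 * lum * L_SUN * q_pr / (8.0 * math.pi * G * mass * M_SUN * C * rho)
    return 1e6 * s

# Sun-like star: s_blow ~ 0.35 micron; the size grows linearly with L/M:
print(round(s_blow_micron(1.0, 1.0), 2))  # -> 0.35
```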
For three targets in our sample our modelling finds that $s_{\text{min}}$ is
close to $s_{\text{blow}}$ leading to a $s_{\text{min}}/s_{\text{blow}}$ ratio
of $\sim 1$. Only the results for HD 15115 reveal a $s_{\text{min}}$ close to
$5\mu$m and a $s_{\text{min}}/s_{\text{blow}}$ ratio of $\sim 5$. However, the
difference in $s_{\text{min}}$ between this disc and the others in the sample
should be treated with caution, since the minimum grain size that we infer may
be influenced by how we treated the warm component that is only present in
this system. Besides our four targets, there is a range of different discs at
the
same stellar luminosity investigated by Pawellek & Krivov (2015) and Matrà et
al. (2018) and shown as black dots in Fig. 3, most of which possess a larger
$s_{\text{min}}$. The low $s_{\text{min}}/s_{\text{blow}}$ ratio for F stars
in the BPMG, which is reported for the first time, could indicate high levels
of dynamical excitation similar to that found for discs around A-type stars
(see Fig. 16 in Pawellek & Krivov, 2015).
The size distribution index, $q_{\text{SED}}$, lies between 3.4 and 3.8 for
our sample. These values are consistent with collisional models (e.g., Löhne
et al., 2008; Gáspár et al., 2012; Kral et al., 2013; Löhne et al., 2017).
Overall, the results from our SED modelling suggest that all four spatially
resolved discs are in agreement with a stirred debris disc scenario which
means that the dust seen in the SED is consistent with being created by the
collisional destruction of planetesimals in a belt traced by the ALMA images.
## 6 Comparison with nearby F stars
In the first part of this study we analysed the properties of the BPMG in
detail. So far we do not know whether the high incidence rate of debris discs
is a peculiarity of this moving group or whether we simply detect more discs
because of its young age. In the second part of this study we therefore put
the results for the BPMG into context with discs around other nearby F-type
stars. First we investigate the evolution of spectral type to ensure that we
compare stellar populations with similar properties. Then we look at the
appropriate systems in samples of field stars and other young moving groups.
### 6.1 Stellar population at different ages
The stellar spectral type is determined by the effective temperature of the
star. Due to ongoing thermonuclear reactions, stars and their
physical/chemical properties, such as metallicity, stellar radius or
temperature, evolve over time, so that the spectral type might change as
well. Therefore, it is not self-evident that stars with similar spectral
types but different ages represent the same stellar population at different
evolutionary phases.
We use the "Modules for Experiments in Stellar Astrophysics" (MESA, Paxton
et al., 2011; Paxton et al., 2013, 2015; Choi et al., 2016) to check the
evolution of stellar temperature over time. MESA consists of a one-dimensional
stellar evolution module simultaneously solving the fully coupled structure
and composition equations. The results are shown in Fig. 4. We use the lowest
($1.15M_{\text{sun}}$) and highest ($1.58M_{\text{sun}}$) stellar masses in
our sample of F stars to analyse its parameter space and assume a stellar
metallicity of $[\text{Fe}/\text{H}]=0.0$.
The stellar temperature increases up to an age of $\sim 10$ Myr and then stays
constant until $\sim 1$ Gyr. Our sample of F stars belongs to the BPMG with an
age of 23 Myr. Fig. 4 shows that at this age the temperature is already
constant so that the spectral type is not changing. As a result, stars with
similar spectral types and ages between that of the BPMG and 1 Gyr should
represent the same population of stars.
Figure 4: Stellar temperature as function of age.
As stars leave the main sequence, their position in the HR diagram shifts and
the stellar temperature starts to decrease. Higher-mass stars (e.g., those
that were A stars on the main sequence) evolve to become F stars as they
leave the main sequence. Therefore, a sample of F stars may be contaminated
by post-main-sequence (higher-mass) stars. Their fraction should be small in
a volume-limited population, since high-mass stars are rarer. Furthermore,
such stars do not spend long on the post-main sequence appearing as F stars
before they evolve noticeably, so they should be identifiable from their
stellar luminosity.
### 6.2 Field stars
Sibthorpe et al. (2018) analysed an unbiased sample of 275 FGK stars,
including 92 F-type stars, observed with Herschel/PACS in the DEBRIS survey.
The F-type stars all lie within 24 pc and none of them belongs to the BPMG.
All targets are older than 160 Myr following the age determination of Vican
(2012), with the exception of HD 56986 with an age of $\sim 20$ Myr and
HD 7788A, for which no age is given.
Based on Sibthorpe et al. (2018), 22 of the 92 stars show evidence for a
debris disc. However, we note that the target HD 19994 (Wiegert et al., 2016)
previously assumed to have spatially resolved IR emission shows evidence of
being confused rather than possessing an actual disc (Yelverton et al., 2019).
Thus, we update the number of detections for F stars in the DEBRIS sample to
21 out of 92 targets leading to a detection rate of $22.8^{+6.2}_{-4.9}$%. Due
to the large beam size of Herschel/PACS other targets might show confusion as
well. However, after checking the PACS images available we did not identify
more potentially confused discs. The HR-diagram of the whole DEBRIS sample is
presented in Figs. 1 and 2 in Phillips et al. (2010) and shows that all F
stars in the DEBRIS sample which possess debris discs are compatible with
belonging to the main sequence. Therefore, we assume that in the DEBRIS sample
no debris discs around post-main sequence F stars are included.
Table 7: F stars in the DEBRIS sample with debris disc detections.
HD | d | SpT | $L/L_{\text{sun}}$ | $f_{\text{d}}$ | $R_{\text{BB}}$ | $R_{\text{FIR}}$ | $\Delta R_{\text{FIR}}$
---|---|---|---|---|---|---|---
| [pc] | | | [$10^{-5}$] | [au] | [au] | [au]
1581 | 8.6 | F9.5V | $1.29\pm 0.01$ | 0.05 | $124\pm 45$ | … | …
7570 | 15.2 | F9VFe+0.4 | $1.96\pm 0.01$ | 1.20 | $28\pm 10$ | … | …
∗10647 | 17.3 | F9V | $1.55\pm 0.01$ | 32.7 | $25\pm 6$ | $112.3\pm 2.5$ | $69.4\pm 2.5$
11171 | 23.2 | F0V | $5.80\pm 0.10$ | 0.68 | $32\pm 17$ | … | …
16673 | 21.9 | F8VFe-0.4 | $1.93\pm 0.02$ | 0.33 | $33\pm 23$ | … | …
∗22484 | 14.0 | F9IV-V | $3.22\pm 0.06$ | 0.72 | $21\pm 8$ | $39.7\pm 20.5$ | $29.4\pm 20.5$
∗27290 | 20.5 | F1V | $6.67\pm 0.09$ | 1.53 | $77\pm 23$ | $151.2\pm 32.2$ | $121.1\pm 32.2$
33262 | 11.6 | F9VFe-0.5 | $1.47\pm 0.01$ | 1.29 | $6.1\pm 2.9$ | … | …
∗48682 | 16.7 | F9V | $1.86\pm 0.02$ | 4.74 | $69\pm 16$ | $134.4\pm 7.6$ | $73.7\pm 7.6$
55892 | 21.4 | F3VFe-1.0 | $5.68\pm 0.07$ | 1.21 | $2.1\pm 1.5$ | … | …
$^{a}$56986 | 18.5 | F2VkF0mF0 | $11.8\pm 0.20$ | 160 | $0.1\pm 0.1$ | … | …
∗90089 | 22.7 | F4VkF2mF2 | $3.31\pm 0.04$ | 1.01 | $140\pm 37$ | $58.1\pm 30.8$ | $34.9\pm 30.8$
102870 | 10.9 | F9V | $3.73\pm 0.04$ | 0.06 | $48\pm 68$ | … | …
∗109085 | 18.3 | F2V | $4.85\pm 0.09$ | 2.76 | $67\pm 18$ | $150.4\pm 10.6$ | $56.7\pm 10.6$
∗110897 | 17.6 | F9VFe-0.3 | $1.11\pm 0.01$ | 1.89 | $52\pm 15$ | $97.14\pm 48.3$ | $65.7\pm 48.3$
128167 | 15.7 | F4VkF2mF1 | $3.22\pm 0.03$ | 1.38 | $8.3\pm 22$ | … | …
160032 | 21.2 | F4V | $4.55\pm 0.06$ | 0.29 | $64\pm 35$ | … | …
∗165908 | 15.6 | F7VgF7mF5 | $2.87\pm 0.07$ | 0.80 | $150\pm 36$ | $138.5\pm 40.8$ | $64.4\pm 40.8$
199260 | 21.3 | F6V | $1.97\pm 0.01$ | 1.57 | $26\pm 12$ | … | …
∗219482 | 20.5 | F6V | $1.90\pm 0.01$ | 3.26 | $15\pm 6$ | $20.6\pm 12.2$ | $12.3\pm 12.2$
222368 | 13.7 | F7V | $3.33\pm 0.03$ | 0.98 | $5.9\pm 6.7$ | … | …
Notes: ∗Target was reported in Sibthorpe et al. (2018) to possess extended
disc emission. The radii, $R_{\text{FIR}}$, from Herschel/PACS were derived
from the model presented in Yelverton et al. (2019) and are defined in the
same way as described in §4.3.2. $^{a}$HD 56986 possesses a marginal excess
at MIPS24. The image at PACS160 seems to be confused with a nearby background
object, making the SED model very uncertain.
The SEDs are fitted using the same process outlined in § 4.3.2. The modelling
results are listed in Tab. 7 and the SEDs are shown in Figs. 13 and 14. We
find blackbody radii between 2 and 200 au for the whole sample with the
exception of HD 56986 with a blackbody radius around 0.1 au based on a
marginal mid-IR excess. The excess found at PACS160 is confused by a nearby
background object so that the SED model is very uncertain. We therefore
exclude this target from our further analysis.
Ignoring HD 56986 for the aforementioned reasons, we find no discs smaller
than 1 au, one disc out of 92 targets with a blackbody radius between 1 and 3
au (1.1%), three with radii between 3 and 10 au (3.3%), five between 10 and
30 au (5.4%), seven between 30 and 100 au (7.6%), and four larger than 100 au
(4.3%). Nine targets were reported as spatially resolved in the FIR
(Sibthorpe et al., 2018), excluding HD 19994. Only HD 10647 and HD 109085
were observed with ALMA (see § B.3). However, using the method of Yelverton
et al. (2019) we infer radii and disc widths from the Herschel/PACS images in
the same manner as described in § 4.3.2 (see Tab. 7). The discs range from 20
au to more than 150 au. The smallest discs are located around HD 22484 and
HD 219482 with radii of 39.7 and 20.6 au, respectively. The disc widths are
uncertain because of the relatively poor spatial resolution, so we cannot
draw strong conclusions from them.
### 6.3 Other young moving groups
The question arises whether the high occurrence rate of debris discs around
F-type stars in the BPMG is peculiar to this moving group or whether it is
common in other associations of comparable age and distance. Here we compare
the BPMG disc incidence rates with those of other
clusters. When doing so we need to recognise that some stars lack FIR data and
so have limited constraints on the presence of circumstellar dust. We will
consider detection rates for the whole sample (e.g. the 9/12 rate from the
BPMG) and separately we will consider the rate amongst those with FIR data
(e.g. the 9/11 rate for the BPMG).
Following studies of young associations (e.g., Fig. 7 in Gagné et al., 2018c;
Gagné & Faherty, 2018) we identified five groups with similar peaks in their
distance distributions around 50 pc comparable to the BPMG: the
Tucana/Horologium association (THA), Columba (COL), Carina (CAR), AB Doradus
(ABDMG) and Carina-Near (CARN). The groups THA, COL and CAR have an age of
$\sim 45$ Myr, the groups ABDMG and CARN of $\sim 150$ Myr.
For the purpose of our analysis a differentiation between the individual
moving groups is not necessary. Indeed, Torres et al. (2008) and Zuckerman et
al. (2011) found that THA, COL and CAR are closely located, making it
difficult to place members in one or the other group. Therefore, we generated
two samples: one, referred to as the 45 Myr group, sums up all F-type targets
belonging to THA, COL and CAR; the other, referred to as the 150 Myr group,
combines the targets of ABDMG and CARN. Both samples are unbiased towards the
presence of IR excess.
Table 8: F stars of the 45 and 150 Myr groups.
HD | Group | d | SpT | $L/L_{\text{sun}}$ | Disc | $f_{\text{d}}$ | $R_{\text{BB}}$
---|---|---|---|---|---|---|---
| | [pc] | | | excess | [$10^{-5}$] | [au]
984 | 45 Myr | 45.9 | F7V | $2.04\pm 0.02$ | No | - | -
1466 | 45 Myr | 43.0 | F8V | $1.58\pm 0.01$ | Yes | 6.3 | $7.8\pm 1.8$
8671 | 45 Myr | 42.7 | F7V | $6.08\pm 0.04$ | … | … | …
10269 | 45 Myr | 46.7 | F5V | $2.60\pm 0.02$ | Yes | 12 | $8.8\pm 5.2$
10863 | 45 Myr | 45.0 | F2V | $4.39\pm 0.03$ | No | - | -
12894 | 45 Myr | 46.3 | F4V | $4.50\pm 0.04$ | No | - | -
13246 | 45 Myr | 45.6 | F7V | $1.72\pm 0.04$ | Yes | 14 | $5.4\pm 1.4$
14691 | 45 Myr | 30.0 | F3V | $4.76\pm 0.04$ | No | - | -
17250 | 45 Myr | 57.1 | F8 | $1.91\pm 0.02$ | … | … | …
20121 | 45 Myr | 42.5 | F3V+A8V | $5.70\pm 0.60$ | … | … | …
20385 | 45 Myr | 48.8 | F6V | $2.08\pm 0.02$ | No | - | -
21024 | 45 Myr | 29.3 | F5IV-V | $4.25\pm 0.03$ | … | … | …
24636 | 45 Myr | 57.1 | F3IV/V | $3.66\pm 0.02$ | Yes | 9.9 | $13\pm 3$
29329 | 45 Myr | 32.7 | F7V | $2.25\pm 0.02$ | … | … | …
30051 | 45 Myr | 67.6 | F2/3IV/V | $5.20\pm 0.20$ | Yes | 3.0 | $48\pm 19$
30132 | 45 Myr | 121 | F6/7V | $3.03\pm 0.03$ | … | … | …
30447 | 45 Myr | 80.5 | F3V | $3.68\pm 0.03$ | Yes | 92 | $34\pm 8$
30984 | 45 Myr | 82.6 | F5V | $2.09\pm 0.02$ | … | … | …
31359 | 45 Myr | 112 | F5V | $3.30\pm 0.20$ | … | … | …
32195 | 45 Myr | 62.8 | F7V | $1.34\pm 0.01$ | Yes | 8.5 | $14\pm 7$
35114 | 45 Myr | 47.7 | F6V | $2.08\pm 0.02$ | Yes | 4.0 | $6.4\pm 2.7$
35996 | 45 Myr | 92.1 | F3/5IV/V | $3.40\pm 0.03$ | Yes | 9.1 | $3.9\pm 2.1$
37402 | 45 Myr | 69.6 | F6V | $0.82\pm 0.03$ | No | - | -
37484 | 45 Myr | 59.1 | F4V | $3.49\pm 0.02$ | Yes | 31 | $18\pm 5$
40216 | 45 Myr | 52.8 | F7V | $2.38\pm 0.02$ | No | - | -
43199 | 45 Myr | 76.8 | F0III/IV | $4.88\pm 0.05$ | … | … | …
53842 | 45 Myr | 57.9 | F5V | $2.84\pm 0.02$ | Yes | 1.9 | $93\pm 16$
207575 | 45 Myr | 47.0 | F6V | $2.31\pm 0.02$ | No | - | -
207964 | 45 Myr | 46.5 | F0V+F5V | $9.90\pm 0.4$ | No | - | -
3454 | 150 Myr | 45.4 | F5 | $1.69\pm 0.02$ | … | … | …
4277 | 150 Myr | 52.5 | F8V | $1.70\pm 0.10$ | No | - | -
15407 | 150 Myr | 49.4 | F5V | $3.23\pm 0.03$ | Yes | 430 | $1.01\pm 0.35$
25457 | 150 Myr | 18.8 | F7V | $2.05\pm 0.01$ | Yes | 13.0 | $17\pm 4$
25953 | 150 Myr | 57.0 | F6V | $1.97\pm 0.02$ | No | - | -
31949 | 150 Myr | 63.1 | F8V | $1.84\pm 0.02$ | … | … | …
61518 | 150 Myr | 61.5 | F5V | $2.18\pm 0.02$ | No | - | -
69051 | 150 Myr | 84.7 | F0III | $9.27\pm 0.09$ | … | … | …
103774 | 150 Myr | 56.5 | F6V | $3.62\pm 0.03$ | … | … | …
121560 | 150 Myr | 24.5 | F6V | $1.70\pm 0.01$ | No | - | -
218382 | 150 Myr | 192 | F8 | $8.10\pm 0.20$ | … | … | …
219693 | 150 Myr | 34.1 | F4V | $5.66\pm 0.06$ | … | … | …
CD-26 1643 | 150 Myr | 54.8 | F9V | $1.24\pm 0.01$ | No | - | -
Notes: The data are taken from Zuckerman et al. (2011); Faherty et al. (2018);
Gagné et al. (2018b); Gagné & Faherty (2018). The excess emission is given for
Spitzer/MIPS at 24$\mu$m and/or $70\mu$m. The excess emission for stars with
only WISE22 data or upper limits from IRAS is shown as dots. The fractional
luminosities are inferred from a modified blackbody SED model.
Using the studies of members of young moving groups (Zuckerman et al., 2011;
Faherty et al., 2018; Gagné et al., 2018b; Gagné & Faherty, 2018), we
identified 29 F stars in Tab. 8 for the 45 Myr group and 13 for the 150 Myr
group. For several targets only data up to mid-infrared wavelengths (WISE22)
are available or upper limits from IRAS at 25, 60 and 100$\mu$m, but no
Spitzer/MIPS or other far-infrared data. The presence of far-infrared emission
for these targets cannot be ruled out, but none of their SEDs shows excess in
the mid-infrared. The detection rates are listed in Tab. 9 and given for both
the complete samples and the sub-samples only including targets with FIR data.
Table 9: Detection rates for the different samples.
Sample | $N_{\text{Discs}}$ | $N_{\text{total}}$ | $N_{\text{FIR}}$ | Rate${}_{\text{total}}$ [%] | Rate${}_{\text{FIR}}$ [%]
---|---|---|---|---|---
BPMG | 9 | 12 | 11 | $75.0^{+25.0}_{-24.5}$ | $81.8^{+18.2}_{-26.8}$
45 Myr | 11 | 29 | 20 | $37.9^{+15.2}_{-11.3}$ | $55.0^{+22.1}_{-16.3}$
150 Myr | 2 | 13 | 7 | $15.4^{+20.3}_{-9.9}$ | $28.6^{+37.7}_{-28.5}$
DEBRIS | 21 | 92 | 92 | $22.8^{+6.2}_{-4.9}$ | $22.8^{+6.2}_{-4.9}$
Notes: $N_{\text{Discs}}$ gives the number of disc detections,
$N_{\text{total}}$ is the total number of targets in the sample,
$N_{\text{FIR}}$ is the number of targets with FIR data. The detection rates
are given for the complete samples and the sub-samples composed of targets
with FIR data assuming the number of disc detections, $N_{\text{Discs}}$
divided by the sample size. The uncertainties were calculated using the method
of Gehrels (1986).
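The asymmetric counting uncertainties in Tab. 9 can be reproduced with the approximate 1σ Poisson limits of Gehrels (1986); the closed-form approximations below are our reading of that paper, and upper errors exceeding 100% are capped in the table:

```python
import math

def poisson_limits_1sigma(n):
    """Approximate 84.13% (1 sigma) Poisson confidence limits on n counts,
    using the closed-form approximations of Gehrels (1986)."""
    upper = n + math.sqrt(n + 0.75) + 1.0
    lower = 0.0 if n == 0 else n * (1.0 - 1.0 / (9.0 * n)
                                    - 1.0 / (3.0 * math.sqrt(n))) ** 3
    return lower, upper

def detection_rate(n_disc, n_total):
    """Detection rate in per cent with asymmetric counting uncertainties."""
    lo, hi = poisson_limits_1sigma(n_disc)
    rate = 100.0 * n_disc / n_total
    return rate, 100.0 * hi / n_total - rate, rate - 100.0 * lo / n_total

# DEBRIS sample, 21 detections out of 92 targets:
rate, err_up, err_down = detection_rate(21, 92)
print(round(rate, 1), round(err_up, 1), round(err_down, 1))  # -> 22.8 6.2 4.9
```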
We applied the same modified blackbody model to fit the SEDs of the systems
in the 45 Myr and 150 Myr groups (see Figs. 11 and 12) and inferred stellar
parameters, fractional luminosities and blackbody radii using the same method
as in § 3 (see Tab. 8). In the 45 Myr group we find no disc with a blackbody
radius below 1 au. Two out of 29 targets possess belts with blackbody radii
between 1 and 3 au (6.9%), five discs lie between 3 and 10 au (17.2%), two
between 10 and 30 au and two between 30 and 100 au (each 6.9%). There were
only two discs detected in the 150 Myr group: one lies at 1 au, the other at
17 au. We note that the disc around 1 au (HD 15407) is only poorly fitted,
since a strong solid-state feature is visible in the SED, but the conclusion
of a small blackbody radius is reliable.
Considering NIR, FIR or sub-mm disc radii, only HD 30447 was reported as
spatially resolved in scattered light (Soummer et al., 2014) with a detection
between 60 and 200 au.
### 6.4 Comparing the samples
Figure 5: Incidence rates for the different samples ordered by age: BPMG (23
Myr), 45 Myr group, 150 Myr group, DEBRIS ($>160$ Myr). The uncertainties are
calculated using the method of Gehrels (1986). Only targets with FIR data are
taken into account (see Tab. 9). Frequencies are not corrected for
completeness.
In Fig. 5 we compare the fractions of stars with debris disc detections for
each sample which suggests that there might be a decrease of disc frequency
with increasing age. Using the DEBRIS sample as reference and Fisher’s exact
test (Fisher, 1956) we tested the hypothesis that the incidence rates for the
BPMG, the 45 Myr group and the 150 Myr group are similar to the DEBRIS sample.
We found that for the BPMG the probability $p=7.9\times 10^{-4}$, for the 45
Myr group $p=0.013$ and for the 150 Myr group $p=0.68$. The hypothesis is
rejected if the $p$-value is smaller than a chosen significance level,
$\alpha$, which we set to 0.05. Therefore, we can say that for the BPMG and
the 45 Myr group the detection rates are not similar to that of the DEBRIS
sample. In
addition, we tested whether the rates of the BPMG and the 45 Myr group are
different from each other and found $p=0.45$. This means that the BPMG and the
45 Myr group show similar detection rates. The result leads to the impression
that a high frequency of debris discs might be common for F stars younger than
100 Myr.
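A minimal stdlib implementation of the two-sided Fisher's exact test is sketched below; it sums all tables with the same margins that are no more probable than the observed one. Other two-sided conventions exist, and the exact sample counts entering each comparison are not restated here, so the example table is illustrative:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)

    def prob(k):
        # Hypergeometric probability of k successes in the first row.
        return comb(row1, k) * comb(n - row1, col1 - k) / denom

    p_obs = prob(a)
    k_min = max(0, col1 - (n - row1))
    k_max = min(row1, col1)
    return sum(p for p in (prob(k) for k in range(k_min, k_max + 1))
               if p <= p_obs * (1.0 + 1e-9))

# Illustrative contingency table: 9 of 11 discs in one sample versus
# 21 of 92 in another (columns: detected | not detected):
p = fisher_exact_two_sided(9, 2, 21, 71)
print(p < 0.05)  # -> True
```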
### 6.5 Fractional luminosity versus radius
Plotting detection rate versus age can be misleading, since different surveys
reach different sensitivities to discs, for example due to the different
distance of the stars in their samples. This sensitivity can be understood
within the context of a modified blackbody model, since for each star the
region of fractional luminosity vs blackbody radius for which a disc detection
would have been possible can be readily quantified. Combining this information
for all stars in a given sample it is then possible to determine the fraction
of stars for which discs could have been detected in different regions of
parameter space. This is the basis of the approach taken in Fig. 6, which
follows on from that used in Sibthorpe et al. (2018) and Wyatt (2018). There
we plot the parameter space of fractional luminosity vs blackbody radius for
the four samples of F stars (BPMG, the 45 Myr group, the 150 Myr group, and
DEBRIS), noting that the sub-mm disc radius is expected to be $\sim 2.5$ times
larger than the blackbody radius (§ 4.4).
Figure 6: Fractional luminosity as function of blackbody radius for the four
samples (BMPG, 45 Myr group, 150 Myr group, and DEBRIS). The colour scale
shows the disc incidence, per log(au), per log(unit $f_{\text{d}}$). The
contour lines show the levels of completeness for 0.1, 0.3, 0.5, 0.7 and 1.0
starting from the bottom of the plot.
To estimate how many discs can be detected in a certain area of parameter
space we analysed the targets of each sample independently of whether they
were reported to possess a disc or not. Using blackbody radii between 0.01 and
1000 au and fractional luminosities between $10^{-7}$ and $10^{-2}$ we
generated a grid of fiducial discs assuming a pure blackbody model. We
inferred the flux density of each model disc at wavelengths corresponding to
those of observations of each star, e.g. with WISE, Spitzer/MIPS,
Herschel/PACS and ALMA. If the total flux density of the fiducial model (star
+ disc) satisfied
$F_{\nu}>F_{\nu}^{\text{star}}+3\sqrt{\left(\Delta
F_{\nu}^{\text{obs}}\right)^{2}+\left(\Delta
F_{\nu}^{\text{star}}\right)^{2}},$ (6)
with $F_{\nu}^{\text{star}}$ being the flux density of the stellar
photosphere, $\Delta F_{\nu}^{\text{star}}$ its uncertainty and $\Delta
F_{\nu}^{\text{obs}}$ being the uncertainty of the observation, we assumed the
model disc to be detected at the wavelength analysed. A model disc is counted
as a detection as soon as one wavelength band fulfills eq. (6). As a result we
get the area of parameter space where discs around a certain host star can be
detected. For a given sample we calculate the number of stars for which discs
could be detected for each node of the grid generating the contour lines shown
in Fig. 6. The contour lines are an estimate for the level of completeness of
the disc detections. For example, if 10 discs are found at a location where
discs could have been detected towards 50% of stars, this suggests that the
true number of discs at this location is 10*100/50, since for half of the
stars the observations provide no information about the presence of discs at
this level.
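The detection criterion of Eq. (6) and the completeness correction described above can be sketched as follows (flux units are arbitrary but consistent):

```python
import math

def disc_detected(f_model, f_star, df_obs, df_star):
    """Eq. (6): the model flux (star + disc) must exceed the photosphere
    by 3x the combined observational and photospheric uncertainty."""
    return f_model > f_star + 3.0 * math.hypot(df_obs, df_star)

def completeness_corrected(n_obs, completeness):
    """Estimated true number of discs at a grid node where only a fraction
    `completeness` of the stars had sufficiently deep observations."""
    return n_obs / completeness

# Example from the text: 10 discs found where discs could have been
# detected towards 50% of the stars suggests ~20 discs in total:
print(completeness_corrected(10, 0.5))  # -> 20.0
```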
For the BPMG sample we find that discs with blackbody radii of $\sim 20$ au
could be detected around 100% of the stars at fractional luminosities of
$1\times 10^{-4}$. Discs around $\sim 100$ au could be detected around 10% of
the stars for $f_{\text{d}}=6\times 10^{-6}$. For the 45 Myr group discs at
100 au could be detected around 10% of the stars for $f_{\text{d}}=7\times
10^{-5}$ while for the 150 Myr group $f_{\text{d}}=2\times 10^{-4}$ and for
the DEBRIS sample $f_{\text{d}}=3\times 10^{-5}$. The reason for the
different sensitivity limits lies in the observations themselves. Some
targets have not been studied in detail, so that we do not have data
longwards of 70$\mu$m and only upper limits (e.g., from IRAS) are available,
which barely constrain the sensitivity limits.
A second aspect of Fig. 6 is the actual disc detections. Each detection is
spread over the range of $f_{\text{d}}$ and $R_{\text{BB}}$ allowed by its
likelihood, which was inferred through the SED fitting described in § 3. In
Fig. 6 we use the colour scale to
show the fraction of stars for which discs are present. The scale gives the
incidence rate of a disc per $\log(\text{au})$, per $\log(\text{unit
}f_{\text{d}})$, per number of targets in the sample. The incidence rate has
been corrected for completeness by dividing the observed incidence rates by
the completeness levels given by the contour lines and was then smoothed with
a Gaussian kernel of width one order of magnitude in blackbody radius and
fractional luminosity.
Although discs could have been detected down to fractional luminosities of
$\sim 10^{-6}$, we find that the majority of discs in the BPMG sample lie
around $f_{\text{d}}=10^{-3}$, the area where 100% of fiducial discs
can be detected. The 45 Myr group and DEBRIS discs are found in areas closer
to the sensitivity limits ($f_{\text{d}}=7\times 10^{-5}$ for the 45 Myr
group, $f_{\text{d}}=3\times 10^{-5}$ for DEBRIS), some in areas where less
than 10% of the model discs are observable, which results in a higher
corrected incidence rate. For the 150 Myr group we only have two disc
detections, one lying within the area of 100% completeness the other close to
the detection limit.
Assuming that Fig. 6 shows comparable debris disc populations at different
ages starting from 23 Myr (BPMG) over 45 Myr to older field stars (DEBRIS) we
see a decay of fractional luminosity with increasing age which is in agreement
with Fig. 5 where we see a decrease in detection rates. While we would expect
such a decrease due to collisional evolution it seems that the process takes
place in the first 100 Myr (see § 7.1). Furthermore, the blackbody radii seem
to show a slight increase from the BPMG ($\sim 30$ au) to DEBRIS ($\sim 100$
au). Possible reasons, besides observational biases, will be discussed in §
6.6.
### 6.6 Radius distribution
In this section we compare the radii of discs found in the BPMG with those of
other young moving groups and field stars. Since most of the targets are not
spatially resolved we will look at both blackbody and spatially resolved disc
radii to identify possible differences between the samples.
#### 6.6.1 Blackbody radii
We focus on the SED results listed in Tabs. 3, 7, and 8 that were used to
produce Fig. 6. We compare the blackbody radius of each sample in Fig. 7
applying four radius bins in logarithmic spacing: $R_{\text{BB}}<1$ au
comparable to hot dust, $1-10\,\text{au}$ comparable to the warm asteroid
belt, $10-100\,\text{au}$ comparable to the cold Kuiper belt, and $\geqslant
100\,\text{au}$ for larger discs. The frequencies plotted are taken from Tab.
9 by comparing the number detected with the total number of targets in each
sample, noting that there could be more discs in each radius bin that are
below the detection threshold.
Figure 7: Frequency of disc radii for different radius bins assuming the total
number of targets in each sample taken from Tab. 9. The uncertainties were
calculated using Gehrels (1986).
Most of the discs are found with blackbody radii between 1 and 100 au. For the
BPMG sample and the 45 Myr group the majority lies between 10 and 30 au and
for the DEBRIS sample between 30 and 100 au.
The latter is the only sample containing discs with blackbody radii larger
than 100 au. The DEBRIS sample has a detection limit for large discs down to
$f_{\text{d}}=3\times 10^{-5}$ (Fig. 6) where the 45 Myr group only shows
discs when $f_{\text{d}}>7\times 10^{-5}$, while in the 150 Myr group discs
must be brighter than $f_{\text{d}}=2\times 10^{-4}$ to be detected. Thus, it
is possible that we miss those large and faint discs in the 45 Myr and 150 Myr
group as they would lie below the respective detection limits. In the BPMG
however, the detection limit lies at $6\times 10^{-6}$ and is lower than for
the DEBRIS sample. Yet, we did not find any large discs in the BPMG. This
might be a result of the low number of targets compared to the DEBRIS sample.
For example, the probability of detecting one or more $>100$ au discs in the BPMG sample of only twelve stars would be 41.3% if their incidence rate were the same as that of the DEBRIS sample (4 of 92 stars).
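The quoted 41.3% follows directly from binomial statistics; a quick check, assuming each of the 12 BPMG stars independently hosts a $>100$ au disc with the DEBRIS incidence of 4/92:

```python
# probability of at least one >100 au disc among 12 stars,
# given an incidence of 4/92 per star
p_star = 4 / 92
p_any = 1 - (1 - p_star) ** 12
print(f"{p_any:.1%}")  # 41.3%
```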
Nevertheless, it seems that the discs in moving groups (BPMG, 45 Myr group)
tend to be smaller compared to discs around field stars as seen in DEBRIS (see
§ 6.5). This could reflect a systematic increase in physical size with increasing age, or discs in young moving groups being hotter (and so appearing smaller by the $R_{\text{BB}}$ metric) than those around older stars. Smaller discs in young
moving groups might be expected from collisional theory as those could have
been depleted around older field stars (see § 4.2.4 of Wyatt et al., 2007). On
the other hand, the discs in the BPMG possess a high fraction of small grains
(see § 5) while the particles around comparable field stars are found to be
larger (Pawellek & Krivov, 2015). This might support the idea of hotter discs
in young moving groups. Nevertheless, the number of targets in each sample is
small and the uncertainties are large so that we cannot draw strong
conclusions on the difference in the radius distribution. We will consider the
influence of collisional evolution in more detail in § 7.
#### 6.6.2 Spatially resolved disc radii
In this section we compare the NIR, FIR, and sub-mm radii inferred from
spatially resolved observations from ALMA, Herschel/PACS and VLT/SPHERE data.
Using ALMA, four targets were resolved in the BPMG and two discs (HD 10647 and
HD 109085, Tab. 10) in the DEBRIS sample. With Herschel/PACS three discs in
the BPMG and nine discs in the DEBRIS sample were resolved (Tabs. 5, 7).
Considering scattered light observations, four discs in the BPMG were
resolved. In the 45 Myr group only HD 30447 was reported as spatially resolved
with SPHERE.
In Fig. 8 we compare the ALMA radii to the Herschel/PACS and VLT/SPHERE radii
for the BPMG and DEBRIS to infer possible biases between the values from the
different observations. In § 4.3.2 we already found that the Herschel and ALMA
radii for the BPMG are in good agreement. This is also the case for HD 109085
from the DEBRIS sample, while for HD 10647 the Herschel radius seems larger
compared to ALMA. Additionally, SPHERE data show broad extended discs for HD
15115, HD 181327, and HD 191089 with the location of the surface brightness
peak being in good agreement with the ALMA radii as well.
Figure 8: Resolved disc radii inferred from Herschel/PACS and VLT/SPHERE
compared to ALMA radii. The central radius of Herschel is assumed to be
$0.5\times(R_{\text{in, FIR}}+R_{\text{out, FIR}})$. The error bars indicate
the disc width inferred from observations.
Fig. 8 complements the results found in Pawellek et al. (2019). That study
used collisional models and showed that at high resolution the peak of the
discs’ surface brightness is at the same location in sub-mm and far-infrared
images (and is nearly coincident with the planetesimal belt). However, the low
surface brightness halo made of small grains that extends beyond the belt gets
brighter at shorter wavelengths. It is thus possible that due to the halo and
the lower resolution of Herschel the radii inferred from Herschel could appear
larger than ALMA radii, which might be the case for HD 10647. Based on Fig. 8
we assume that the disc radii inferred from different telescopes give
comparable values.
In Fig. 9 the FIR and sub-mm radii of the BPMG and DEBRIS sample are shown as
a function of stellar luminosity with error bars indicating the disc width.
There are three discs in the DEBRIS sample with FIR radii below 50 au, but the majority of targets (six) possess radii around $\sim 100-150$ au. All discs are found to be broad. Five of the discs with FIR radii between 100 and 150 au also possess blackbody radii larger than 50 au (Tab. 7), consistent with the expected sub-mm to blackbody radius ratio of $\sim$2.5 (§ 4.4).
Figure 9: Resolved disc radii of the BPMG and DEBRIS as function of stellar
luminosity. The radii are inferred from Herschel and ALMA images. The error
bars indicate the disc width.
In contrast to the DEBRIS sample, the discs in the BPMG are found within 100 au.
Nevertheless, we have to keep in mind that while most of the discs in the
DEBRIS sample are larger than the discs in the BPMG the number of spatially
resolved discs in both samples is low. Furthermore, seven of the nine resolved
discs in DEBRIS were observed with Herschel/PACS, but not with ALMA.
Herschel/PACS has a lower spatial resolution and thus might bias the sample of
resolved discs towards larger radii.
## 7 Origin of high detection rate
We found that the detection rate for debris discs around F-type stars is
significantly higher in the BPMG than in the DEBRIS sample. In this section we
investigate different scenarios which might explain this phenomenon:
(i) The BPMG is representative of the population of stars that become field
stars. Hence, the discs seen in both the BPMG and DEBRIS samples are
essentially the same population but seen at different ages. This is considered
in § 7.1.
(ii) The BPMG is representative of the population of stars that become field
stars. However, the discs seen in BPMG and DEBRIS are not the same population
seen at different ages. This is considered in § 7.2.
(iii) The BPMG is not representative of the population of stars that become
field stars, since the environment of young moving groups is different to that
in which field stars formed, and more conducive to the retention of bright
discs. This is considered in § 7.3.
### 7.1 Same population scenario
In the first scenario we assume that the BPMG and DEBRIS samples possess the
same population of discs seen at different ages. Therefore, the discs in the
BPMG should evolve into discs comparable to the DEBRIS sample. To describe the
evolution process we use the collisional evolution model (§ B) and assume that
the disc radius stays constant while the fractional luminosity decreases over
time.
If the largest planetesimals are in collisional equilibrium at the age of the
BPMG then the fractional luminosity decreases with $f_{\text{d}}\propto 1/t$,
where $t$ is the time (see eq. 9). However, it could also have decreased less
than this or even stayed constant if the biggest bodies were not yet in
collisional equilibrium.
#### 7.1.1 Modelling detection rates
To predict the population that the BPMG would evolve into by DEBRIS ages, we
make a generic model. We generate 100,000 artificial samples of 92 targets
similar to the size of the DEBRIS sample. Each target is randomly chosen from
the 12 systems of the BPMG sample so that each artificial sample is completely
made of BPMG targets including those without a disc detection. We assume that
the fractional luminosity follows
$f_{\text{d}}(t)=f_{\text{d}}(t_{0})\,\left(\frac{t}{t_{0}}\right)^{\alpha},$ (7)
with $f_{\text{d}}(t_{0})$ being the fractional luminosity at the time $t_{0}$
and $\alpha$ being a free parameter. DEBRIS is an unbiased sample of field
stars and as such its stars should possess random ages up to the main sequence
lifetime. We therefore generate random ages ($t$) for the 92 targets in each
of our artificial samples and calculate the fractional luminosity
$f_{\text{d}}(t)$ using those inferred from SED modelling of the BPMG sample
as values for $f_{\text{d}}(t_{0})$. In the next step we consider the parameter space ($R_{\text{BB}}$, $f_{\text{d}}(t)$) as shown in Fig. 6. For each location in this space we assume that the probability that a star was observed with sufficient sensitivity to detect its disc is the same as for the actually observed DEBRIS sample (Fig. 6). We then generate a random number between 0 and 1 for each target in the 100,000 samples; if this number is smaller than the detection probability at the target's location in parameter space, we count the disc as a detection. As a result, we get a distribution of detection rates for the 100,000 artificial samples, which is shown in Fig. 10.
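The sampling procedure above can be sketched in code. This is a schematic toy implementation, not the paper's code: the fractional luminosities, the age range, and the step-function detection probability below are placeholders, whereas the real model uses the BPMG SED values and the observed DEBRIS sensitivity map of Fig. 6.

```python
import random

def simulate_detection_rate(fd0_values, t0, alpha, t_max, detect_prob,
                            n_targets=92, n_samples=1000, seed=1):
    """Monte Carlo detection rates for artificial DEBRIS-like samples built
    from BPMG discs evolved as f_d(t) = f_d(t0) * (t / t0)**alpha (eq. 7)."""
    rng = random.Random(seed)
    rates = []
    for _ in range(n_samples):
        detections = 0
        for _ in range(n_targets):
            fd_t0 = rng.choice(fd0_values)   # draw a BPMG system (None = no disc)
            if fd_t0 is None:
                continue
            t = rng.uniform(t0, t_max)       # random field-star age in Myr
            fd = fd_t0 * (t / t0) ** alpha   # evolve the fractional luminosity
            if rng.random() < detect_prob(fd):
                detections += 1
        rates.append(detections / n_targets)
    return rates

# toy inputs (NOT the paper's values): 9 discs plus 3 non-detections,
# and a step-function sensitivity at f_d = 3e-5
fd_bpmg = [1e-3, 5e-4, 3e-4, 2e-4, 1e-4, 8e-5, 5e-5, 2e-5, 1e-5,
           None, None, None]
rates = simulate_detection_rate(fd_bpmg, t0=23, alpha=-1, t_max=2000,
                                detect_prob=lambda fd: float(fd > 3e-5))
print(sum(rates) / len(rates))
```

Replacing `detect_prob` with the observed ($R_{\text{BB}}$, $f_{\text{d}}$) sensitivity map and running 100,000 samples of 92 targets each corresponds to the procedure described in the text.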
Figure 10: Detection rates of a sample made of 92 discs around F-type stars
similar to DEBRIS with disc properties similar to the discs in BPMG. The grey
region shows the detection rate inferred from the DEBRIS sample ($23.9^{+6.3}_{-5.1}\%$; Sibthorpe et al., 2018).
To test the model we considered what it would predict for the BPMG, i.e. for a set of 100,000 samples each containing 12 targets randomly taken from the actual BPMG sample with the same age as the BPMG, and applied the disc detection probability distribution inferred for the BPMG. In Fig. 10 the solid blue line shows the resulting distribution of detection rates, peaking around 75% as for the actual BPMG sample.
#### 7.1.2 Expected detection rates
Given the derived age of the BPMG we set $t_{0}$ to 23 Myr and $\alpha=-1$ as
expected from the collisional evolution model (Wyatt et al., 2007). The dashed
red line in Fig. 10 shows that in this case the DEBRIS sample should have a
detection rate of $41\pm 10\,\%$ following Poisson statistics with a 95%
confidence level. This is incompatible with the observed detection rate of
$22.8^{+6.2}_{-4.9}$% for the DEBRIS sample. Since $\alpha=-1$ is the fastest
possible collisional evolution, this suggests that collisional evolution
cannot be the explanation if the BPMG and DEBRIS sample possess the same
population of discs at different ages.
#### 7.1.3 Delayed stirring
The observed detection rate of the DEBRIS sample could be explained if the
cascade was initiated only recently, i.e., if the collision age was closer to
$t_{0}\sim 2$ Myr instead of the stellar age of 23 Myr which is shown with the
dotted green line in Fig. 10.
However, this is not realistic since protoplanetary discs dissipate within a
few Myr (e.g., Pascucci et al., 2006). While delayed stirring is expected in
some models (e.g., Kenyon & Bromley, 2008), this explanation would require all
discs to wait 21 Myr before being ignited, whereas all discs are located at
different radii and so should have different evolution timescales.
#### 7.1.4 Fast depletion
The other explanation that is compatible with the assumption of similar disc
populations in the BPMG and DEBRIS samples is that the evolution is faster
than $t^{-1}$. The dash-dotted orange line in Fig. 10 shows that $t^{-2}$ is
in agreement with this hypothesis. However, such rapid evolution is no longer
consistent with simple collisional evolution models. Collisional models might
still be able to reproduce rapid evolution of disc luminosity if in addition
to depletion of the large bodies there is also a change in the equilibrium
dust size distribution, e.g., an over-abundance of small grains above steady
state at BPMG ages.
There might be physical motivation for the dust size distribution to evolve,
say if the quantity of sub-blowout grains destroying the larger particles is
changing or if there is gas preventing small dust being removed at young ages.
Indeed, the collision model introduced by Löhne et al. (2017) shows an
increase in particles slightly larger than the blow-out size (e.g., Fig. 6 in
Löhne et al., 2017). However, there is no evidence for a change in small dust
properties in the SED models’ $s_{\text{min}}$ or $q$, and there is not
significant gas in these systems. The only target with a gas detection is HD
181327 (see § 4.2.3) for which Marino et al. (2016) found only low gas masses
making the presence of gas unlikely to influence the dust in this disc. Hence,
gas is also unlikely to explain the fast depletion of the whole sample. This
leads to the conclusion that something other than collisions is depleting the
BPMG discs.
One potential problem with this scenario is that the evolution with $t^{-2}$
suggested by our model is not compatible with other studies which have shown
that the discs of Sun-like stars evolve slowly on the main sequence. While
ages are hard to determine for those stars, there seems to be slow evolution
($\sim t^{-0.5}$) beyond a few 100 Myr (e.g., Trilling et al., 2008; Holland
et al., 2017). However, the $t^{-2}$ trend only represents an average
evolution. A solution to this problem is thus that there is a process that
depletes the BPMG discs that acts even faster than $t^{-2}$, but only on
timescales of order 100 Myr following which the slower collisional depletion
resumes for any remaining planetesimals that go on to supply the dust seen in
the DEBRIS population.
The Solar System’s Edgeworth-Kuiper belt underwent depletion by two orders of
magnitude in mass on a timescale of 10-100 Myr as a result of the dynamical
instability in the planetary system, so this is one possibility (Gomes et al.,
2005). Others could be embedded planetary embryos, or planets that migrate
into the discs (e.g., Gomes et al., 2005; Levison et al., 2008; Izidoro et
al., 2014; Nesvorný, 2015). Given the depletion timescale inferred above, we
use eq. (3) of Shannon et al. (2016) to estimate the mass of potential
planetary perturbers by setting the timescale of such perturbers to clear
their surrounding area from dust to 100 Myr. We find masses between 20 and 170 Earth masses (corresponding to roughly 1 to 10 Neptune masses). With currently
available telescopes such planets could very well be undetected within the
discs in the BPMG. Based on results from different observational instruments
like Kepler or HARPS summarised in the exoplanet
database (http://exoplanet.eu/; Schneider, 2011), $\sim$260 close-in planets
were detected around F-type stars by radial velocity or transit observations,
$\sim 50$ of them possessing masses between 1 and 10 Neptune masses. This is
just an estimate, since in many cases the total masses of the planets could
not be inferred. Furthermore, most of the planets detected are located close
to the host star and are far away from the typical debris discs investigated
in our study. Suzuki et al. (2016) analysed data from the Microlensing Observations in Astrophysics (MOA) survey and estimated that cold Neptunes located beyond the snow line of stellar systems, and thus closer to debris discs, might be more common than their close-in hot siblings.
51 Eri is the only system with a planet detection in our sample. Due to a lack
of spatially resolved images of the disc, we cannot rule out that the Jupiter-mass planet is located close to the disc and thus accelerating its depletion. An indicator for this scenario could be the disc's already low fractional luminosity of $5\times 10^{-6}$. Indeed, a Jupiter-mass planet
located within our BPMG discs would only need $\sim 20$ Myr to clear its
surrounding area (eq. 3, Shannon et al., 2016).
Another possible depletion mechanism is the disintegration of planetesimals
due to heating. Lisse et al. (2020) showed that high energy stellar flares are
able to heat dust in close-in debris discs to temperatures of $\sim 1000$ K.
According to studies of Kepler/K2 stars (Davenport, 2016; Van Doorsselaere et
al., 2017), 1.6% of young F-type stars show such flares. However, the majority
of the discs in the BPMG lie too far out ($\sim 80$ au) for this to be
important. The close-in disc around HD 199143 might be a candidate for such a depletion mechanism, though collisions would be expected to deplete this disc by DEBRIS ages (Wyatt et al., 2007).
Planetesimals could also be depleted by stellar radiation forces such as the
YORP effect. However, the relatively slow evolution of Solar system asteroids
(Ďurech et al., 2018) suggests that this radiation effect might be negligible,
since it should be weaker for planetesimals at several tens of au.
### 7.2 Two population scenario
In § 7.1 we assumed that the discs seen in the BPMG and DEBRIS samples are the
same population seen at different ages. While it is possible that the BPMG is
representative of the population of stars that become field stars, comparable
to the DEBRIS sample, it is also possible that the BPMG and DEBRIS samples do
not show the same population of discs.
This could be the case for example if the discs in the DEBRIS sample are belts
of planetesimals like the Edgeworth-Kuiper belt that formed as part of the
planet formation process, whereas those in the BPMG are remnants of the primordial dust that is swept up into a ring by the depleting gas and forms planetesimals by the streaming instability (e.g., Johansen et al., 2007, 2012;
Carrera et al., 2015; Carrera et al., 2017; Schaffer et al., 2018). If that is
the case, the implication is that BPMG stars would have two belts: the bright primordial dust belt and a DEBRIS-like belt with large planetesimals, which may be too faint to detect at this young age. This scenario may be supported
by the tentative outer belt found around HD 181327 (e.g., Marino et al.,
2017).
However, the planetesimals in the two proposed belts of this scenario would deplete by collisions, and thus as $t^{-1}$ as predicted by collisional models, which is incompatible with the $t^{-2}$ rate seen in Fig. 10.
Nevertheless, a key difference between the two populations might be that the
DEBRIS belts would be planetesimals formed in stable regions of the planetary
system (like the Edgeworth-Kuiper belt and Asteroid belt), whereas the BPMG
belts could be deposited anywhere, since this would depend on how the gas disc
depletes which could be driven by photo-evaporation processes. Thus, unstable
regions liable to dynamical depletion as discussed in § 7.1 may be more likely
for such BPMG belts.
Another possibility is that the “planetesimals” formed in these unstable
regions could be more loosely bound and liable to disruption. To consider
this, we compare the minimum sizes of planetesimals inferred in § B.3 of discs
in both the BPMG and DEBRIS samples. Despite the large variation of the sizes,
and the small number of discs being compared, we see a similar range between
both samples. This does not support the hypothesis of two belts with different
planetesimal properties, but it should be noted that if planetesimals form
differently then they may have different $Q_{\text{D}}^{*}$ and so different
planetesimal sizes would be inferred. Nevertheless, the small planetesimal
sizes are in agreement with recent studies suggesting the absence of large
planetesimals (Krivov & Wyatt, 2020).
Considering the disc radii inferred from spatially resolved images (§ 4) the
radii of discs in the BPMG lie between 45 and 94 au. We might expect
systematic differences in the disc radius between the BPMG and DEBRIS samples
for this scenario. This might be supported by the fact that the spatially
resolved discs in DEBRIS tend to be larger than those in the BPMG (see §
6.6.2). However, while the bright belts seen in the BPMG should be depleted by DEBRIS ages, leaving only the fainter DEBRIS-like belts detectable, both planetesimal rings should be present at BPMG ages.
We investigated the possibility of detecting a DEBRIS-like outer planetesimal
belt around a BPMG star with a bright inner belt. We took HD 181327 with its
bright ring at 80 au and considered an additional fainter outer belt at 150 au
with a width of 46 au similar to that of HD 109085 from the DEBRIS sample.
Originally, HD 109085 was observed with ALMA at 880$\mu$m and a sensitivity of
30$\mu$Jy/beam (Marino et al., 2017) while HD 181327 was observed at a higher
spatial resolution and a sensitivity of 27$\mu$Jy/beam (see Tab. 5). The
surface brightness of HD 109085 is $\sim 200\mu$Jy/beam (Fig. 1 in Marino et
al., 2017). If the disc was located around HD 181327 and observed with the
sensitivity of 27$\mu$Jy/beam we would detect it at a 0.15$\sigma$ level (in
each beam). With azimuthal and radial averaging the detection would reach a
$3.4\sigma$ level with ALMA band 7 for the whole disc. Applying an
observational setting like that in Marino et al. (2016) with a lower spatial
resolution we would even reach a level of $\sim 18\sigma$.
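The jump from 0.15$\sigma$ per beam to 3.4$\sigma$ for the whole disc is what one expects if the noise in $N$ independent beams adds in quadrature, so the integrated significance grows as $\sqrt{N}$. A back-of-the-envelope check (the beam count is inferred from the quoted significances, not taken from the observations):

```python
from math import ceil

snr_beam = 0.15   # per-beam significance quoted for the hypothetical outer belt
snr_disc = 3.4    # disc-integrated significance after azimuthal/radial averaging

# number of independent beams implied by snr_disc = snr_beam * sqrt(N)
n_beams = (snr_disc / snr_beam) ** 2
print(ceil(n_beams))  # ~514 independent beams across the belt
```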
We do not detect such outer discs in the BPMG (the only exception being HD
181327 for which there is a tentative detection at $\sim 200$ au). This could
either mean those outer belts do not exist or they are too faint to detect at
that age (e.g., because the collisional cascade has yet to be fully
initiated).
### 7.3 Different star formation environments
The third scenario to explain the higher detection rate supposes that the
environment of young moving groups is different to that in which field stars
form, such that these regions might be more conducive to the retention of
bright discs.
Deacon & Kraus (2020) analysed the binary fraction of open clusters such as
the Pleiades and compared it to less dense associations including the BPMG.
That study found that the rate of wide multiples (between 300 and 3000 au) is
higher in young moving groups (14.6%) than in field stars (7.8%) or open
clusters (Hyades, 2.5%) which is in agreement with our results (see § 2.2).
Deacon & Kraus (2020) concluded that the rate of multiple systems might be
influenced more strongly by environmental factors than by age which supports
the idea of different formation environments for young moving groups and field
stars. It seems that wide separation multiple systems are more effectively
formed in less dense regions such as young moving groups. However, as shown by
Yelverton et al. (2019), an influence of wide-separation binaries on the
detection rate of debris discs could not be found so far (§ 2.2).
Of greater importance might be the evolution of multiple stellar systems.
Reipurth & Mikkola (2012) and Elliott & Bayo (2016) suggested that the
fraction of such systems might decrease over time as the stellar systems
become unstable and break up within $\sim 100$ Myr. It is
possible that firstly, the break-ups destroy the debris discs in the system,
and secondly the break-ups lead to a higher rate of stellar flybys in the
moving group which truncate and/or deplete the debris discs. As a result, the discs in the field might on average be smaller and fainter than those in the BPMG.
This idea however is not supported by our results on the radial distribution
in the BPMG and DEBRIS where the discs in DEBRIS tend to be larger than in the
BPMG (see § 6.6.2). Lestrade et al. (2011) investigated the depletion of
debris discs due to flybys during the first 100 Myr and found that only high-
density regions like Orion with star densities $>20,000$ pc$^{-3}$ have a
significant impact on the discs. Similarly, Vincke & Pfalzner (2018) analysed
the impact of the high-density environment found in open clusters, such as
Trumpler 14, on discs and planetary systems. That study found that during the
initial phase of evolution stellar flybys resulted in $\sim 90\%$ of discs
having a radial extent smaller than 50 au. For $\sim 47\%$ of the discs the
radii were even smaller than 10 au. At later evolution stages of the clusters
the discs were barely influenced by stellar interactions. Assuming that field
stars formed in dense clusters (Eggen, 1958) it is possible that stellar
flybys truncated a number of their protoplanetary discs leading to a smaller
fraction of debris discs with large radii and/or a lower incidence of debris
discs around field stars (Hands et al., 2019). Again, this is not supported by
the radii of spatially resolved discs (see § 6.6.2), but since we analysed
only a small number of them around field stars these might be the ones which
were not altered by stellar flybys.
Nevertheless, it might be possible that the detection rate of debris discs
around field stars is low from an early phase, since their protoplanetary
predecessors were already truncated. In contrast, the discs which formed in
less dense regions like the BPMG might retain their high detection rates since
stellar flybys are less frequent. This might be observationally testable by
comparing disc incidences in § 6 for different clusters with those of more
dense clusters at a comparable age. Recently, Miret-Roig et al. (2020) derived
a disc detection rate of $9\pm 9$% for stars ranging from F5 to K5 in the 30
Myr old cluster IC 4665 based on Spitzer and WISE data. This is much lower
compared to the rates we find for the BPMG and the 45 Myr group (75% and 38%,
Tab. 9). However, we note that the cluster has a distance of 350 pc in
contrast to the close-by targets analysed in our study so that many discs
might be undetected. A more detailed analysis, for example repeating the $f_{\text{d}}$ vs $R_{\text{BB}}$ parameter-space analysis of § 6.5 for IC 4665, would be needed to draw reliable conclusions on this scenario.
## 8 Conclusions
In the first part of this study we analysed a sample of twelve F-type stars in
the BPMG and investigated different properties of the systems. In the second
part we compared the results of the BPMG to those of other samples of young
moving groups and field stars to analyse possible disc evolution processes.
We found that nine stars in the BPMG possess debris discs leading to a
detection rate of 75%. This is significantly higher than found in unbiased
samples of field stars where only $\sim$23% of the targets show evidence for
debris discs (Sibthorpe et al., 2018).
Five out of the nine discs were spatially resolved with either ALMA or
VLT/SPHERE allowing us to study their radial and grain size distribution in
more detail. The disc around HD 164249 was spatially resolved with ALMA for
the first time. The disc radii lie between 45 and 94 au and are comparable to
the radii found for other debris discs and protoplanetary discs, but tend to
be slightly smaller compared to spatially resolved discs found in the DEBRIS
sample of field stars.
We compared the disc radius to blackbody radius ratio derived from SED
modelling to the relation based on Herschel data presented in Pawellek &
Krivov (2015) and found that the resolved discs in the BPMG possess smaller
radii than expected. Since ALMA has a higher spatial resolution than Herschel, we inferred the relation between the sub-mm disc to blackbody radius ratio and the stellar luminosity from a sample of ALMA data (Matrà et al., 2018). The resulting
relation shows a weaker decrease of the radius ratio with increasing stellar
luminosity.
The minimum grain sizes of the SED models are in agreement with the blow-out
grain sizes of the discs as we would expect from collisional evolution. The
exception is HD 15115 with an $s_{\text{min}}$ of $\sim$5$\mu$m which is also
the only disc showing the presence of a warm inner component. This result is
somewhat different to earlier studies (Pawellek et al., 2014) which found an
average size of 5$\mu$m for a sample of 34 discs. A reason might be that 66%
of those discs were fitted with a warm inner component, but nevertheless, the
small $s_{\text{min}}/s_{\text{blow}}$ ratio indicates that the discs are
collisionally very active with high levels of dynamical excitation. However, a
more detailed analysis is needed to draw strong conclusions.
We compared the sample of BPMG stars to other young moving groups and old
field stars, finding that the detection rate of debris discs is significantly
higher in young moving groups than in the field star sample. Furthermore, the
discs in the BPMG possess a higher fractional luminosity. From collisional
evolution models we would expect the same discs around older stars to be
fainter, which might also cause a lower detection rate. However, applying those models we found that evolving the BPMG sample to DEBRIS ages results in a population with a significantly higher detection rate than that observed for the actual DEBRIS sample. We investigated different scenarios to explain this.
In the first scenario we assumed that the BPMG and the DEBRIS samples show the
same disc population at different ages. We found that the observed detection
rate could be explained by a delayed ignition of the collisional cascade, but
that this option seems unlikely since all discs would need to be delayed by
the same $\sim 20$ Myr timescale. A more likely scenario is that additional
depleting processes are at work so that the disc evolution cannot be explained
by collisional processes alone. Depletion through gravitational interaction
with unseen planets is one possibility. We found that Neptune-sized planets
orbiting within discs can cause depletion on the required $\sim 100$ Myr
timescales, and are small enough to remain undetected in current observations.
For discs close to the star high energy stellar flares and other radiation
effects (e.g., YORP) are also possible but less likely. Whatever the processes
are they have to work between 10 and 100 Myr since previous studies showed
that disc evolution is slower at older ages (e.g., Holland et al., 2017).
The second scenario assumed that the discs in young moving groups and around
old field stars are not part of the same population. It is possible that discs
in the BPMG possess two belts, one made of large planetesimals formed by
planet formation processes comparable to the Edgeworth-Kuiper belt, and
another made of remnants of the primordial dust that grow to planetesimal
sizes during the disc dispersal process. This might be supported by the
different radii found for the BPMG and DEBRIS samples, but since we studied
only a small number of discs, the actual radius distribution is not well
characterised yet. However, while the two-population scenario is not
impossible, we would still need to invoke a rapid depletion as proposed in the
first scenario (§ 7.1).
In the third scenario we assumed that the birth environment of stars is
different for young moving groups and field stars so that their respective
discs might be different as well. The influence of stellar flybys in
circumstellar discs is significant at early stages of the evolution for dense
stellar clusters like Orion (e.g., Lestrade et al., 2011; Vincke & Pfalzner,
2018), but barely contributes to the depletion of debris discs found in less
dense associations like the BPMG. On the other hand, field stars are supposed
to form in regions of higher stellar density so that stellar flybys might
truncate the discs at an early evolutionary stage. Therefore, a large fraction
of discs around field stars might possess a radius too small to be detected
while discs with larger radii in moving groups remain detectable. This is not
supported by the radii of spatially resolved discs in the BPMG and DEBRIS, but
it is possible that we only see those discs around field stars that were not
truncated. A possibility to test this hypothesis is to analyse the detection
rates of debris discs in young dense clusters. Indeed, studies found lower
disc detections for the clusters (e.g. IC 4665, Miret-Roig et al., 2020), but
this might be biased by the large distance of IC 4665 rather than an actual
difference in the fraction of stars with discs.
## Acknowledgements
NP thanks Alexander Krivov and Torsten Löhne for fruitful discussions. GMK was
supported by the Royal Society as a Royal Society University Research Fellow.
The Combined Atlas of Sources with Spitzer/IRS Spectra (CASSIS) is a product
of the Infrared Science Center at Cornell University, supported by NASA and
JPL.
ALMA is a partnership of ESO (representing its member states), NSF (USA) and
NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI
(Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA
Observatory is operated by ESO, AUI/NRAO and NAOJ.
## Data availability
The data underlying this article will be shared on request to the
corresponding author. The ALMA and Herschel data are publicly available and
can be queried and downloaded directly from the ALMA archive at
https://almascience.nrao.edu/asax/ and from the Herschel archive at
http://archives.esac.esa.int/hsa/whsa/.
## References
* Backman & Paresce (1993) Backman D., Paresce F., 1993, in Levy E. H., Lunine J. I., eds, Protostars and Planets III. Univ. of Arizona Press, pp 1253–1304
* Bailer-Jones et al. (2018) Bailer-Jones C. A. L., Rybizki J., Fouesneau M., Mantelet G., Andrae R., 2018, AJ, 156, 58
* Ballering et al. (2013) Ballering N. P., Rieke G. H., Su K. Y. L., Montiel E., 2013, ApJ, 775, 55
* Bell et al. (2015) Bell C. P. M., Mamajek E. E., Naylor T., 2015, MNRAS, 454, 593
* Benz & Asphaug (1999) Benz W., Asphaug E., 1999, Icarus, 142, 5
* Beuzit et al. (2019) Beuzit J.-L., et al., 2019, arXiv e-prints,
* Bohren & Huffman (1983) Bohren C. F., Huffman D. R., 1983, Absorption and Scattering of Light by Small Particles. Wiley and Sons: New York – Chichester – Brisbane – Toronto – Singapore
* Booth et al. (2013) Booth M., et al., 2013, MNRAS, 428, 1263
* Brott & Hauschildt (2005) Brott I., Hauschildt P. H., 2005, in C. Turon, K. S. O’Flaherty, & M. A. C. Perryman ed., ESA SP Vol. 576, The Three-Dimensional Universe with Gaia. p. 565 (arXiv:astro-ph/0503395)
* Burns et al. (1979) Burns J. A., Lamy P. L., Soter S., 1979, Icarus, 40, 1
* Carpenter et al. (2008) Carpenter J. M., et al., 2008, ApJS, 179, 423
* Carrera et al. (2015) Carrera D., Johansen A., Davies M. B., 2015, A& A, 579, A43
* Carrera et al. (2017) Carrera D., Gorti U., Johansen A., Davies M. B., 2017, ApJ, 839, 16
* Chen et al. (2014) Chen C. H., Mittal T., Kuchner M., Forrest W. J., Lisse C. M., Manoj P., Sargent B. A., Watson D. M., 2014, ApJS, 211, 25
* Choi et al. (2016) Choi J., Dotter A., Conroy C., Cantiello M., Paxton B., Johnson B. D., 2016, ApJ, 823, 102
* Churcher et al. (2011) Churcher L., Wyatt M., Smith R., 2011, MNRAS, 410, 2
* Cornwell (2008) Cornwell T. J., 2008, IEEE Journal of Selected Topics in Signal Processing, 2, 793
* Cutri et al. (2003) Cutri R. M., et al., 2003, 2MASS All Sky Catalog of point sources.
* Davenport (2016) Davenport J. R. A., 2016, ApJ, 829, 23
* Deacon & Kraus (2020) Deacon N. R., Kraus A. L., 2020, MNRAS,
* Dohnanyi (1969) Dohnanyi J. S., 1969, J. Geophys. Res., 74, 2531
* Draine (2003) Draine B. T., 2003, ARA& A, 41, 241
* Eggen (1958) Eggen O. J., 1958, MNRAS, 118, 65
* Eiroa et al. (2013) Eiroa C., et al., 2013, A& A, 555, A11
* Eker et al. (2008) Eker Z., et al., 2008, MNRAS, 389, 1722
* Elliott & Bayo (2016) Elliott P., Bayo A., 2016, MNRAS, 459, 4499
* Engler et al. (2018) Engler N., et al., 2018, arXiv e-prints,
* Faherty et al. (2018) Faherty J. K., Bochanski J. J., Gagné J., Nelson O., Coker K., Smithka I., Desir D., Vasquez C., 2018, ApJ, 863, 91
* Fisher (1956) Fisher S. R. A., 1956, The World of Mathematics, 3
* Foreman-Mackey et al. (2013) Foreman-Mackey D., Hogg D. W., Lang D., Goodman J., 2013, PASP, 125, 306
* Foreman-Mackey et al. (2019) Foreman-Mackey D., et al., 2019, The Journal of Open Source Software, 4, 1864
* Fortney et al. (2008) Fortney J. J., Marley M. S., Saumon D., Lodders K., 2008, ApJ, 683, 1104
* Gagné & Faherty (2018) Gagné J., Faherty J. K., 2018, ApJ, 862, 138
* Gagné et al. (2018a) Gagné J., et al., 2018a, ApJ, 856, 23
* Gagné et al. (2018b) Gagné J., Roy-Loubier O., Faherty J. K., Doyon R., Malo L., 2018b, ApJ, 860, 43
* Gagné et al. (2018c) Gagné J., Faherty J. K., Mamajek E. E., 2018c, ApJ, 865, 136
* Gaia Collaboration et al. (2018) Gaia Collaboration et al., 2018, A& A, 616, A1
* Gáspár et al. (2012) Gáspár A., Psaltis D., Rieke G. H., Özel F., 2012, ApJ, 754, 74
* Gáspár et al. (2013) Gáspár A., Rieke G. H., Balog Z., 2013, ApJ, 768, 25
* Gehrels (1986) Gehrels N., 1986, ApJ, 303, 336
* Geiler et al. (2019) Geiler F., Krivov A. V., Booth M., Löhne T., 2019, MNRAS, 483, 332
* Gomes et al. (2005) Gomes R., Levison H. F., Tsiganis K., Morbidelli A., 2005, Nature, 435, 466
* Goodman & Weare (2010) Goodman J., Weare J., 2010, Commun. Appl. Math. Comput. Sci., 5, 65
* Hands et al. (2019) Hands T. O., Dehnen W., Gration A., Stadel J., Moore B., 2019, MNRAS, 490, 21
* Holland et al. (2017) Holland W. S., et al., 2017, MNRAS, 470, 3606
* Hughes et al. (2018) Hughes A. M., Duchêne G., Matthews B. C., 2018, ARA& A, 56, 541
* Ishihara et al. (2010) Ishihara D., et al., 2010, A& A, 514, A1
* Izidoro et al. (2014) Izidoro A., Morbidelli A., Raymond S. N., 2014, ApJ, 794, 11
* Janson et al. (2014) Janson M., et al., 2014, ApJS, 214, 17
* Johansen et al. (2007) Johansen A., Oishi J. S., Low M.-M. M., Klahr H., Henning T., Youdin A., 2007, Nature, 448, 1022
* Johansen et al. (2012) Johansen A., Youdin A. N., Lithwick Y., 2012, A& A, 537, A125
* Kalas et al. (2007) Kalas P., Fitzgerald M. P., Graham J. R., 2007, ApJL, 661, L85
* Kass & Raftery (1995) Kass R. E., Raftery A. E., 1995, Journal of the American Statistical Association, 90, 773
* Kenyon & Bromley (2008) Kenyon S. J., Bromley B. C., 2008, ApJS, 179, 451
* Klahr & Schreiber (2020) Klahr H., Schreiber A., 2020, arXiv e-prints, p. arXiv:2007.10696
* Kovaleva et al. (2015) Kovaleva D., Kaygorodov P., Malkov O., Debray B., Oblak E., 2015, Astronomy and Computing, 11, 119
* Kral et al. (2013) Kral Q., Thébault P., Charnoz S., 2013, A& A, 558, A121
* Kral et al. (2020) Kral Q., Matra L., Kennedy G., Marino S., Wyatt M., 2020, arXiv e-prints, p. arXiv:2005.05841
* Krijt & Kama (2014) Krijt S., Kama M., 2014, A& A, 566, L2
* Krivov (2010) Krivov A. V., 2010, Research in Astron. Astrophys., 10, 383
* Krivov & Wyatt (2020) Krivov A. V., Wyatt M. C., 2020, MNRAS,
* Krivov et al. (2006) Krivov A. V., Löhne T., Sremčević M., 2006, A& A, 455, 509
* Krivov et al. (2018) Krivov A. V., Ide A., Löhne T., Johansen A., Blum J., 2018, MNRAS, 474, 2564
* Lebouteiller et al. (2011) Lebouteiller V., Barry D. J., Spoon H. W. W., Bernard-Salas J., Sloan G. C., Houck J. R., Weedman D. W., 2011, ApJS, 196, 8
* Lestrade et al. (2011) Lestrade J. F., Morey E., Lassus A., Phou N., 2011, A& A, 532, A120
* Levison et al. (2008) Levison H. F., Morbidelli A., Vanlaerhoven C., Gomes R., Tsiganis K., 2008, Icarus, 196, 258
* Lisse et al. (2020) Lisse C. M., et al., 2020, ApJ, 894, 116
* Löhne et al. (2008) Löhne T., Krivov A. V., Rodmann J., 2008, ApJ, 673, 1123
* Löhne et al. (2017) Löhne T., Krivov A. V., Kirchschlager F., Sende J. A., Wolf S., 2017, A& A, 605, A7
* MacGregor et al. (2015) MacGregor M. A., Wilner D. J., Andrews S. M., Hughes A. M., 2015, ApJ, 801, 59
* MacGregor et al. (2019) MacGregor M. A., et al., 2019, ApJL, 877, L32
* Macintosh et al. (2015) Macintosh B., et al., 2015, Science, 350, 64
* Mamajek & Bell (2014) Mamajek E. E., Bell C. P. M., 2014, MNRAS, 445, 2169
* Marino et al. (2016) Marino S., et al., 2016, MNRAS, 460, 2933
* Marino et al. (2017) Marino S., et al., 2017, MNRAS, 465, 2595
* Marino et al. (2018) Marino S., et al., 2018, MNRAS, 479, 5423
* Marley et al. (2007) Marley M. S., Fortney J. J., Hubickyj O., Bodenheimer P., Lissauer J. J., 2007, ApJ, 655, 541
* Marshall et al. (2016) Marshall J. P., Booth M., Holland W., Matthews B. C., Greaves J. S., Zuckerman B., 2016, MNRAS,
* Marton et al. (2015) Marton G., et al., 2015, in IAU General Assembly. p. 2253107
* Matrà et al. (2018) Matrà L., Marino S., Kennedy G. M., Wyatt M. C., Öberg K. I., Wilner D. J., 2018, ApJ, 859, 72
* Matrà et al. (2019) Matrà L., Wyatt M. C., Wilner D. J., Dent W. R. F., Marino S., Kennedy G. M., Milli J., 2019, AJ, 157, 135
* Miret-Roig et al. (2020) Miret-Roig N., Huélamo N., Bouy H., 2020, arXiv e-prints, p. arXiv:2007.04992
* Moór et al. (2013) Moór A., et al., 2013, ApJL, 775, L51
* Nesvorný (2015) Nesvorný D., 2015, AJ, 150, 73
* Nielsen et al. (2019) Nielsen E., et al., 2019, in AAS/Division for Extreme Solar Systems Abstracts. p. 100.02
* O’Brien & Greenberg (2003) O’Brien D. P., Greenberg R., 2003, Icarus, 164, 334
* Pascucci et al. (2006) Pascucci I., et al., 2006, ApJ, 651, 1177
* Patience et al. (2015) Patience J., et al., 2015, in AAS/Division for Extreme Solar Systems Abstracts. p. 202.01
* Pawellek (2017) Pawellek N., 2017, PhD thesis, Jena, https://www.db-thueringen.de/receive/dbt_mods_00031595
* Pawellek & Krivov (2015) Pawellek N., Krivov A. V., 2015, MNRAS, 454, 3207
* Pawellek et al. (2014) Pawellek N., Krivov A. V., Marshall J. P., Montesinos B., Ábrahám P., Moór A., Bryden G., Eiroa C., 2014, ApJ, 792, 65
* Pawellek et al. (2019) Pawellek N., Moór A., Pascucci I., Krivov A. V., 2019, MNRAS, 487, 5874
* Paxton et al. (2011) Paxton B., Bildsten L., Dotter A., Herwig F., Lesaffre P., Timmes F., 2011, ApJS, 192, 3
* Paxton et al. (2013) Paxton B., et al., 2013, ApJS, 208, 4
* Paxton et al. (2015) Paxton B., et al., 2015, ApJS, 220, 15
* Pérez et al. (2019) Pérez S., Marino S., Casassus S., Baruteau C., Zurlo A., Flores C., Chauvin G., 2019, MNRAS, 488, 1005
* Perrot et al. (2019) Perrot C., et al., 2019, A& A, 626, A95
* Phillips et al. (2010) Phillips N. M., Greaves J. S., Dent W. R. F., Matthews B. C., Holland W. S., Wyatt M. C., Sibthorpe B., 2010, MNRAS, 403, 1089
* Rajan et al. (2017) Rajan A., et al., 2017, AJ, 154, 10
* Rebull et al. (2008) Rebull L. M., et al., 2008, ApJ, 681, 1484
* Reipurth & Mikkola (2012) Reipurth B., Mikkola S., 2012, Nature, 492, 221
* Ren et al. (2019) Ren B., et al., 2019, ApJ, 882, 64
* Riviere-Marichalar et al. (2014) Riviere-Marichalar P., et al., 2014, A& A, 565, A68
* Rodriguez & Zuckerman (2012) Rodriguez D. R., Zuckerman B., 2012, ApJ, 745, 147
* Schaffer et al. (2018) Schaffer N., Yang C.-C., Johansen A., 2018, A& A, 618, A75
* Schneider (2011) Schneider J., 2011, in EPSC-DPS Joint Meeting 2011. p. 3
* Schneider et al. (2006) Schneider G., et al., 2006, ApJ, 650, 414
* Schüppler et al. (2014) Schüppler C., Löhne T., Krivov A. V., Ertel S., Marshall J. P., Eiroa C., 2014, ArXiv: 1404.6144,
* Sepulveda et al. (2019) Sepulveda A. G., et al., 2019, ApJ, 881, 84
* Shannon et al. (2016) Shannon A., Bonsor A., Kral Q., Matthews E., 2016, MNRAS, 462, L116
* Shkolnik et al. (2017) Shkolnik E. L., Allers K. N., Kraus A. L., Liu M. C., Flagg L., 2017, AJ, 154, 69
* Sibthorpe et al. (2018) Sibthorpe B., Kennedy G. M., Wyatt M. C., Lestrade J. F., Greaves J. S., Matthews B. C., Duchêne G., 2018, MNRAS, 475, 3046
* Sierchio et al. (2014) Sierchio J. M., Rieke G. H., Su K. Y. L., Gaspar A., 2014, preprint, (arXiv:1402.6308)
* Soummer et al. (2014) Soummer R., et al., 2014, ApJL, 786, L23
* Stark et al. (2014) Stark C. C., Schneider G., Weinberger A. J., Debes J. H., Grady C. A., Jang-Condell H., Kuchner M. J., 2014, ApJ, 789, 58
* Stern et al. (2019) Stern S. A., et al., 2019, in Lunar and Planetary Science Conference. Lunar and Planetary Science Conference. p. 1742
* Stewart & Leinhardt (2009) Stewart S. T., Leinhardt Z. M., 2009, ApJL, 691, L133
* Su et al. (2006) Su K. Y. L., et al., 2006, ApJ, 653, 675
* Suzuki et al. (2016) Suzuki D., et al., 2016, ApJ, 833, 145
* Tazzari et al. (2018) Tazzari M., Beaujean F., Testi L., 2018, MNRAS, 476, 4527
* Thébault & Augereau (2007) Thébault P., Augereau J.-C., 2007, A& A, 472, 169
* Thebault (2016) Thebault P., 2016, A& A, 587, A88
* Tokovinin (1997) Tokovinin A. A., 1997, AApS, 124, 75
* Torres et al. (2008) Torres C. A. O., Quast G. R., Melo C. H. F., Sterzik M. F., 2008, Young Nearby Loose Associations. p. 757
* Trilling et al. (2008) Trilling D. E., et al., 2008, ApJ, 674, 1086
* Van Doorsselaere et al. (2017) Van Doorsselaere T., Shariati H., Debosscher J., 2017, ApJS, 232, 26
* Vican (2012) Vican L., 2012, AJ, 143, 135
* Vincke & Pfalzner (2018) Vincke K., Pfalzner S., 2018, ApJ, 868, 1
* Wiegert et al. (2016) Wiegert J., Faramaz V., Cruz-Saenz de Miera F., 2016, MNRAS, 462, 1735
* Wright et al. (2010) Wright E. L., et al., 2010, AJ, 140, 1868
* Wyatt (2008) Wyatt M. C., 2008, ARA& A, 46, 339
* Wyatt (2018) Wyatt M. C., 2018, Debris Disks: Probing Planet Formation. p. 146, doi:10.1007/978-3-319-55333-7_146
* Wyatt & Dent (2002) Wyatt M. C., Dent W. R. F., 2002, MNRAS, 334, 589
* Wyatt et al. (2007) Wyatt M. C., Smith R., Greaves J. S., Beichman C. A., Bryden G., Lisse C. M., 2007, ApJ, 658, 569
* Xuan et al. (2020) Xuan J. W., Kennedy G. M., Wyatt M. C., Yelverton B., 2020, MNRAS,
* Yelverton et al. (2019) Yelverton B., Kennedy G. M., Su K. Y. L., Wyatt M. C., 2019, MNRAS, 488, 3588
* Zuckerman et al. (2011) Zuckerman B., Rhee J. H., Song I., Bessell M. S., 2011, ApJ, 732, 61
* Ďurech et al. (2018) Ďurech J., et al., 2018, A& A, 609, A86
## Appendix A SEDs of debris discs around nearby F stars
### A.1 45 Myr group
We analysed the sample of 29 F-type stars found in the 45 Myr group (see §
6.3) and found that eleven of them exhibit significant mid-infrared excess. We
fitted the SEDs of these targets with a modified blackbody model which is
described in detail in Section 3. The results are shown in Fig. 11.
Figure 11: SEDs for the debris discs detected around F stars in the 45 Myr
group.
### A.2 150 Myr group
We analysed the sample of 13 F-type stars found in the 150 Myr group (see §
6.3) and found that two of them exhibit significant mid-infrared excess. We
fitted the SEDs of these targets with a modified blackbody model which is
described in detail in Section 3. The results are shown in Fig. 12.
Figure 12: SEDs for the debris discs detected around F stars in the 150 Myr
group.
### A.3 DEBRIS
We analysed the sample of 92 F-type stars in DEBRIS (see § 6.2) and found that
21 of them exhibit significant mid-infrared excess. We fitted the SEDs of
these targets with a modified blackbody model which is described in detail in
Section 3. The results are shown in Fig. 13 and 14.
Figure 13: SEDs for the debris discs detected around F stars in the DEBRIS
sample. Figure 14: SEDs for the debris discs detected around F stars in the
DEBRIS sample (continued).
## Appendix B Collisional disc evolution
While we inferred the minimum dust sizes from SED modelling in § 5, we can
further constrain the size distribution by inferring the minimum size of the
planetesimals that must feed the collisional cascade. To this end, we
extrapolate the dust size distribution up to the size at which the collisional
lifetime equals the age of the star, using the lifetimes calculated with the
analytical collisional evolution model introduced by Wyatt et al. (2007).
### B.1 Collision model
The model uses a similar power law size distribution as the SED model
following
$N_{\text{coll}}(s)\propto s^{2-3q},$ (8)
with $s$ being the radius of a spherical body and $N(s)ds$ the number of
bodies in the size range $s$ to $s+ds$. We note that the parameter $q$ is
different from the size distribution index inferred by SED modelling (§ 5).
They are related by $q_{\text{SED}}=-(2-3q)$ leading to $q_{\text{SED}}=3.5$
and $q=1.83$ for an ideal collisional cascade (Dohnanyi, 1969).
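As a sanity check of this relation, the conversion between the two slope conventions can be sketched as follows (the function names are ours, purely illustrative):

```python
def q_from_q_sed(q_sed):
    """Invert q_SED = -(2 - 3q) to obtain the collisional-model slope q."""
    return (q_sed + 2.0) / 3.0

def q_sed_from_q(q):
    """Size-distribution index in the SED-modelling convention."""
    return -(2.0 - 3.0 * q)

# An ideal Dohnanyi (1969) cascade has q_SED = 3.5, i.e. q ~ 1.83:
print(round(q_from_q_sed(3.5), 2))  # → 1.83
```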
Following eq. (12) from Wyatt et al. (2007) and eq. (22) from Löhne et al.
(2008), we obtain an equation for the collisional timescale, $t_{\text{c}}$, as
a function of the minimum size of the planetesimals necessary to feed the
collisional cascade, $s_{\text{c}}$:
$\begin{split}t_{\text{c}}=\frac{r^{1/2}\,dr\,i}{(\gamma M_{\text{star}})^{1/2}f_{\text{d}}\left(\frac{5}{4}e^{2}+i^{2}\right)^{1/2}}\left(\frac{s_{\text{c}}}{s_{\text{blow}}}\right)^{3q-5}\\ \left\{\left[X_{\text{c}}^{5-3q}-1\right]+2\frac{q-5/3}{q-4/3}\left[X_{\text{c}}^{4-3q}-1\right]+\frac{q-5/3}{q-1}\left[X_{\text{c}}^{3-3q}-1\right]\right\}^{-1}.\end{split}$
(9)
The timescale depends on the fractional luminosity, $f_{\text{d}}$, the blow-
out grain size, $s_{\text{blow}}$, the stellar mass, $M_{\text{star}}$, the
disc radius, $r$, the disc width, $dr$, the eccentricity, $e$, the
inclination, $i$, and the parameter $X_{\text{c}}$, which is defined as
$X_{\text{c}}=\left[\frac{2Q_{\text{D}}^{*}}{v^{2}_{\text{imp}}}\right]^{1/3}.$ (10)
Here, $Q_{\text{D}}^{*}$ is the catastrophic disruption threshold and
$v_{\text{imp}}$ is the impact velocity of the colliding bodies, given as
$v_{\text{imp}}=\sqrt{\gamma M_{\text{star}}\,r^{-1}\,(5/4\,e^{2}+i^{2})}$ with
$\gamma$ the gravitational constant. In the following section we fix both
eccentricity and inclination to a value of 0.1.
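A direct transcription of eqs. (9) and (10) can be sketched as below. The inputs must be supplied in mutually consistent units, and the example values are illustrative only, not taken from the paper:

```python
def x_c(q_d_star, v_imp):
    """Eq. (10): X_c = (2 Q_D* / v_imp^2)^(1/3), in consistent units."""
    return (2.0 * q_d_star / v_imp**2) ** (1.0 / 3.0)

def t_coll(r, dr, m_star, f_d, e, i, q, s_c, s_blow, xc, gamma=1.0):
    """Eq. (9): collisional timescale of bodies of size s_c."""
    prefac = (r**0.5 * dr * i) / (
        (gamma * m_star) ** 0.5 * f_d * (1.25 * e**2 + i**2) ** 0.5
    )
    size_term = (s_c / s_blow) ** (3.0 * q - 5.0)
    bracket = (
        (xc ** (5.0 - 3.0 * q) - 1.0)
        + 2.0 * (q - 5.0 / 3.0) / (q - 4.0 / 3.0) * (xc ** (4.0 - 3.0 * q) - 1.0)
        + (q - 5.0 / 3.0) / (q - 1.0) * (xc ** (3.0 - 3.0 * q) - 1.0)
    )
    return prefac * size_term / bracket

# Illustrative call with e = i = 0.1 as in the text (units left symbolic):
xc = x_c(q_d_star=1.0e7, v_imp=4.0e4)
tc = t_coll(r=1.0, dr=0.2, m_star=1.0, f_d=1e-4, e=0.1, i=0.1,
            q=1.83, s_c=1.0e4, s_blow=1.0, xc=xc)
```

Note that for $q>5/3$ the timescale grows with $s_{\text{c}}$, so larger planetesimals survive longer, which is what allows the age of the system to set a minimum feeding size.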
### B.2 The catastrophic disruption threshold
The collisional timescale strongly depends on the catastrophic disruption
threshold, $Q_{\text{D}}^{*}$ of the planetesimals which is the specific
energy necessary to disperse a target (e.g., Benz & Asphaug, 1999). The
parameter can be described by a two-power law function taking into account the
material strength, the self-gravity of particles and the impact velocity
(Stewart & Leinhardt, 2009):
$Q_{\text{D}}^{*}=\left[A\left(\frac{s}{1\,{\rm cm}}\right)^{a}+B\left(\frac{s}{1\,{\rm cm}}\right)^{b}\right]\left(v_{\text{imp}}\right)^{k}.$ (11)
The parameters $A$, $B$, $a$, $b$ and $k$ are material constants. We found
that $v_{\text{imp}}$ is on average $\sim 0.4$ km/s for the discs in our
samples assuming that $e=i=0.1$. Following O’Brien & Greenberg (2003), we can
infer the size distribution index, $q_{\text{SED}}$, from $Q_{\text{D}}^{*}$
using the parameter $a$ from eq. (11):
$q_{\text{SED}}=\frac{7+a/3}{2+a/3}.$ (12)
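Eqs. (11) and (12) can be sketched as below. The constants $A$, $a$, $B$, $b$, $k$ here are illustrative placeholders with a strength regime ($a<0$) and a gravity regime ($b>0$); they are not the fitted values of Benz & Asphaug (1999) or Stewart & Leinhardt (2009):

```python
def q_d_star(s_cm, A=1.0e7, a=-0.4, B=0.1, b=1.3, k=0.0, v_imp=1.0):
    """Eq. (11): two-power-law catastrophic disruption threshold; s_cm in cm."""
    return (A * s_cm**a + B * s_cm**b) * v_imp**k

def q_sed_from_a(a):
    """Eq. (12): size-distribution index implied by the strength-regime slope a."""
    return (7.0 + a / 3.0) / (2.0 + a / 3.0)
```

With these placeholder constants, $Q_{\text{D}}^{*}$ decreases with size in the strength regime and rises again in the gravity regime, reproducing the qualitative shape of Fig. 15, and eq. (12) then ties the strength-regime slope $a$ to the size distribution of the fragments.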
Hence, not only the collisional timescale but also the size distribution of
planetesimals depends on $Q_{\text{D}}^{*}$.
Figure 15: $Q_{\text{D}}^{*}$ as a function of size. The thin black dashed and
dotted lines show values for basalt at 3 and 5 km/s taken from Benz & Asphaug
(1999). The parameters of the HD 109085 system are assumed. The thick blue
dash-dotted line shows the scaling result taken from Wyatt & Dent (2002) and
the thick green dashed line the lab results taken from Stewart & Leinhardt
(2009) for basalt at 0.4 km/s. The thick solid red line shows the result for
“sand” at 0.4 km/s taken from Stewart & Leinhardt (2009).
Studies of the collisional evolution of debris discs (e.g., Wyatt & Dent,
2002; Schüppler et al., 2014; Löhne et al., 2017; Krivov et al., 2018; Geiler
et al., 2019) often assume the materials “sand” (Stewart & Leinhardt, 2009)
and basalt (Benz & Asphaug, 1999). In Fig. 15, $Q_{\text{D}}^{*}$ is depicted
as a function of size for both materials. Using eq. (11), we find that the
values for basalt colliding at 0.4 km/s lie one order of magnitude below those
of Benz & Asphaug (1999) colliding at 5 km/s.
Another approach to infer values of $Q_{\text{D}}^{*}$ at the appropriate
impact velocity is given by Wyatt & Dent (2002), who introduced a scaling
method where $Q_{\text{D}}^{*}\propto v_{\text{imp}}^{\delta}$. Here, $\delta$
is found by comparing the two impact velocity curves given in Benz & Asphaug
(1999). The method gives values one order of magnitude below those from
Stewart & Leinhardt (2009) for sizes smaller than $\sim 100$ m (strength
regime). For larger sizes ($\sim 1$ km, gravity regime) the values are
comparable to each other. The results using sand as material at 0.4 km/s lie
between those of basalt from Wyatt & Dent (2002) and Stewart & Leinhardt
(2009) and show a flatter decrease than basalt in the strength regime. The
values in the gravity regime are close to those of basalt, but the increase is
flatter.
The approaches of Wyatt & Dent (2002) and Stewart & Leinhardt (2009) to scale
$Q_{\text{D}}^{*}$ to the impact velocity are both used in the literature.
Therefore, we emphasise that $Q_{\text{D}}^{*}$ strongly depends on the method
applied and shows variations of one order of magnitude even for the same
material. Furthermore, $Q_{\text{D}}^{*}$ varies for the material chosen.
### B.3 Minimum sizes of planetesimals feeding the cascade
We calculate the minimum sizes of the planetesimals feeding the collisional
cascade using eq. (9) assuming that the collisional timescale, $t_{\text{c}}$,
is similar to the age of the system.
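Because eq. (9) gives $t_{\text{c}}\propto s_{\text{c}}^{3q-5}$ once $X_{\text{c}}$ is held fixed (a simplification we adopt here purely for illustration, since $X_{\text{c}}$ itself depends on $Q_{\text{D}}^{*}$ and hence on size), the minimum feeding size can be sketched by rescaling from a reference size whose lifetime is known:

```python
def s_c_from_age(t_age, s_ref, t_ref, q=1.83):
    """Size whose collisional lifetime equals t_age, assuming t_c ~ s^(3q-5)
    with X_c held fixed; s_ref is a reference size with known lifetime t_ref."""
    return s_ref * (t_age / t_ref) ** (1.0 / (3.0 * q - 5.0))
```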
Table 10: Sizes of planetesimals.
HD | r [au] | dr [au] | $M_{\text{star}}$ [$M_{\odot}$] | $s_{\text{blow}}$ [$\mu$m] | $f_{\text{d}}$ | $t_{\text{c}}$ [Myr] | $s_{\text{c}}$ [m] (Basalt, WD02) | $M_{\text{disc}}$ [$M_{\text{earth}}$] (Basalt, WD02) | $s_{\text{c}}$ [m] (Basalt, SL09) | $M_{\text{disc}}$ [$M_{\text{earth}}$] (Basalt, SL09) | $s_{\text{c}}$ [m] (Sand, SL09) | $M_{\text{disc}}$ [$M_{\text{earth}}$] (Sand, SL09)
---|---|---|---|---|---|---|---|---|---|---|---|---
15115 | 93 | 21 | 1.37 | 0.91 | $5.3\times 10^{-4}$ | 23 | 338 | 23 | 104 | 7.0 | 381 | 26
160305 | 88 | 4 | 1.13 | 0.67 | $1.5\times 10^{-4}$ | 23 | 351 | 6.0 | 115 | 2.0 | 401 | 6.9
164249 | 63 | 24 | 1.30 | 0.89 | $9.4\times 10^{-4}$ | 23 | 514 | 28 | 413 | 23 | 678 | 37
181327 | 81 | 16 | 1.36 | 1.02 | $4.1\times 10^{-3}$ | 23 | 1417 | 562 | 1601 | 634 | 2188 | 867
191089 | 45 | 16 | 1.36 | 0.98 | $1.6\times 10^{-3}$ | 23 | 1105 | 53 | 1310 | 63 | 1703 | 81
10647 | 82 | 49 | 1.12 | 0.48 | $2.9\times 10^{-4}$ | 1000 | 960 | 28 | 1006 | 29 | 1405 | 40
109085 | 152 | 46 | 1.53 | 1.24 | $1.7\times 10^{-5}$ | 1000 | 239 | 1.4 | 19 | 0.11 | 213 | 1.2
Notes: The system data for HD 10647 were taken from Lovell et al. (in prep.),
the data for HD 109085 from Matrà et al. (2018). The age estimates for both
stars show large uncertainties, so we fix the age to 1000 Myr for
simplicity. “Basalt (WD02)” refers to the scaling method of
$Q_{\text{D}}^{*}$ used in Wyatt & Dent (2002) while “Basalt (SL09)” assumes
the velocity dependence found in Stewart & Leinhardt (2009). “Sand (SL09)”
refers to the weak rock material introduced in Stewart & Leinhardt (2009).
Tab. 10 lists the planetesimal sizes and the corresponding minimum disc masses
(since the size distribution must extend up to these sizes, and could extend
further) assuming the two different approaches to scale $Q_{\text{D}}^{*}$ to
the impact velocity of the colliding bodies as well as the two materials
basalt and sand. We added the discs HD 10647 and HD 109085 from the DEBRIS
sample to compare the planetesimal sizes in systems of different age. Both
discs were spatially resolved with ALMA.
We find that the smallest planetesimals feeding the collisional cascade show
sizes from several metres up to $\sim 2$ kilometres independently of the age
of the system. For some discs this is somewhat smaller than assumed by former
studies which found sizes around kilometres (e.g., Wyatt & Dent, 2002; Marino
et al., 2017; Krivov et al., 2018). This is also smaller than the predicted
sizes of hundreds of kilometres from planetesimal formation scenarios (e.g.,
Klahr & Schreiber, 2020) and might indicate a lack of those large
planetesimals as was proposed by Krivov & Wyatt (2020).
However, we find that the analysed discs show size variations of one order
of magnitude for all materials. Applying eq. (11), the maximum sizes using
basalt are smaller than those using sand. While for large planetesimals in the
gravity regime the differences between the materials are small they become
more pronounced for metre-sized planetesimals close to the strength regime
similar to the trend of $Q_{\text{D}}^{*}$ shown in Fig. 15. Considering the
two scaling methods we find a comparable trend – the method chosen to infer
$Q_{\text{D}}^{*}$ becomes more important for smaller sizes.
The large variation in sizes leads to different disc masses depending on the
material applied. Again, discs for which the planetesimals feeding the dust
belt are only required to be metre-sized are more sensitive to the material
and method used. The masses vary between 2$M_{\oplus}$ (HD 160305) and
900$M_{\oplus}$ (HD 181327).
Studies of planetesimal formation (e.g., Klahr & Schreiber, 2020) found that
typical planetesimal sizes tend to decrease with increasing distance to the
star and with the time of the formation of planetesimals. While an early
formation might lead to sizes of $\sim 100$ km, planetesimals formed at a
later stage tend to be as small as $\sim 10$ km (e.g., Stern et al., 2019).
This is still somewhat larger than the sizes of $\sim 1$ km we infer for our
discs, but we note that our estimated sizes are minimum sizes necessary to
feed the collisional cascade.
# VAE2: Preventing Posterior Collapse of Variational Video Predictions in the
Wild
Yizhou Zhou,1 Chong Luo,2 Xiaoyan Sun,2 Zheng-Jun Zha,1 Wenjun Zeng2
1University of Science and Technology of China 2Microsoft Research Asia
<EMAIL_ADDRESS><EMAIL_ADDRESS>{xysun, cluo<EMAIL_ADDRESS>
###### Abstract
Predicting future frames of video sequences is challenging due to the complex
and stochastic nature of the problem. Video prediction methods based on
variational auto-encoders (VAEs) have been a great success, but they require
the training data to contain multiple possible futures for an observed video
sequence. This is hard to be fulfilled when videos are captured in the wild
where any given observation only has a determinate future. As a result,
training a vanilla VAE model with these videos inevitably causes posterior
collapse. To alleviate this problem, we propose a novel VAE structure, dabbed
VAE-in-VAE or VAE2. The key idea is to explicitly introduce stochasticity into
the VAE. We treat part of the observed video sequence as a random transition
state that bridges its past and future, and maximize the likelihood of a
Markov Chain over the video sequence under all possible transition states. A
tractable lower bound is proposed for this intractable objective function and
an end-to-end optimization algorithm is designed accordingly. VAE2 can
mitigate the posterior collapse problem to a large extent, as it breaks the
direct dependence between future and observation and does not directly regress
the determinate future provided by the training data. We carry out experiments
on a large-scale dataset called Cityscapes, which contains videos collected
from a number of urban cities. Results show that VAE2 is capable of predicting
diverse futures and is more resistant to posterior collapse than the other
state-of-the-art VAE-based approaches. We believe that VAE2 is also applicable
to other stochastic sequence prediction problems where the training data lack
stochasticity.
## Introduction
Video prediction finds many applications in robotics and autonomous driving,
such as action recognition(Zhou et al. 2018), planning(Thrun et al. 2006), and
object tracking(Guo et al. 2017). Initially, video prediction was formulated
as a reconstruction problem (Ranzato et al. 2014) where the trained model
regresses a determinate future for any given observation. However, real-world
events are full of stochasticity. For example, a person standing may sit down,
jump up, or even fall down at the next moment. A deterministic model is not
capable of predicting multiple possible futures, but such capability is
tremendously desired by intelligent agents as it makes them aware of different
possible consequences of their actions in real applications. In order to bring
stochasticity into video prediction, methods based on autoregressive models
(Oord, Kalchbrenner, and Kavukcuoglu 2016), generative adversarial networks
(GANs) (Goodfellow et al. 2014; Mirza and Osindero 2014), and variational
auto-encoders (VAEs)(Kingma and Welling 2013) have been proposed. Among these,
VAE-based methods have received the most attention and are referred to as
variational video prediction (Babaeizadeh et al. 2017).
Variational video prediction learns a latent variable model that maximizes the
likelihood of the data. A key to the success of variational video prediction
is that the training dataset should provide multiple futures for observed
frames. Not surprisingly, VAE-based video prediction methods have been using
synthetic videos or scripted videos111A human actor or robot conducts
predefined activities in well-controlled environment. for training. These
videos can provide multiple futures as desired, but they only cover a small
subset of real-world scenarios. In order to build a practically applicable
video prediction model, it is necessary to train it with non-scripted real-
world videos such as videos captured in the wild. However, such videos
are usually determinate, which means only one of many possible futures is
available. This situation will easily collapse a VAE model. As Fig. 1(a)
shows, if there is always a unique future $v$ corresponding to each
observation $I$, the hidden code $z$ becomes trivial due to the inference
preference property(Chen et al. 2016). In such a case, VAE loses the
capability to predict diverse futures.
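Posterior collapse of this kind is typically diagnosed by the KL term of the ELBO dropping to zero. A minimal sketch of that diagnostic (ours, not the paper's code) for a diagonal-Gaussian posterior $q(z|I,v)$:

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """Batch-averaged KL( N(mu, diag(exp(log_var))) || N(0, I) )."""
    return 0.5 * np.mean(np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=1))

rng = np.random.default_rng(0)
# A posterior that still encodes the input carries a positive KL...
informative = kl_to_standard_normal(rng.normal(size=(8, 4)), rng.normal(size=(8, 4)))
# ...while a collapsed posterior q(z|I, v) = N(0, I) has KL exactly zero:
# z is ignored and the decoder simply regresses the single determinate future.
collapsed = kl_to_standard_normal(np.zeros((8, 4)), np.zeros((8, 4)))
```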
VAEs are known to suffer from posterior collapse due to various reasons such
as the ‘optimization challenges’(Bowman et al. 2015; Razavi et al. 2019).
Although many approaches (Higgins et al. 2017; Alemi et al. 2017; Goyal et al.
2017; Razavi et al. 2019; Bowman et al. 2015) have been proposed to mitigate
the problem, they hold the basic assumption that the training data have a
stochastic nature. To our best knowledge, none of the previous work has looked
into the model collapse problem caused by the determinate training data in
video prediction. The intuition behind our solution is to explicitly inject
stochasticity into the VAEs. As illustrated in Fig. 1(b), we intentionally set
aside a part of the observed video sequence $I^{s}$ and treat it as a random
transition state that bridges its past and future. By doing so, we can
maximize the likelihood of a Markov Chain over the sequence under all possible
transition states. Such formulation converts the likelihood that only relies
on the observed data pairs ($I,v$) into an expectation which contains extra
dependence on the distribution of the entire dataset.
However, this new formulation contains an expectation term and a likelihood
term which are both intractable. To tackle it, we first derive a practical
lower bound which can be optimized with a vanilla VAE structure, and then use
another VAE to approximate the remaining intractable part in this lower bound.
In addition, we find that the two objective functions for the two VAEs can be
merged into a single expression in practice. This saves the effort of
iterative optimization. As a result, we derive a nested VAE structure,
which we name VAE-in-VAE, or VAE2 in short. VAE2 can
be optimized in an end-to-end manner.
In a nutshell, the contributions of this work are: 1) We propose a novel VAE2
framework to explicitly inject stochasticity into the VAE to mitigate the
posterior collapse problem caused by determinate training data. 2) We turn the
objective function tractable and develop an efficient end-to-end optimization
algorithm. 3) We make evaluations on non-scripted real-world video prediction
as well as a simple number sequence prediction task. Both quantitative and
qualitative results demonstrate the efficacy of our method.
Figure 1: Schematic diagrams of vanilla VAE-based video prediction approach
and our VAE2 approach. $\phi$ is the encoder that maps the observation $I$ and
future $v$ to a hidden random variable $z$. $\theta$ is the decoder which
predicts stochastic future based on the observation and random signal sampled
from $z$. When the future is determinate, the decoder $\theta$ can well
reconstruct the future without accessing the random hidden code $z$. As a
consequence, the VAE degenerates into a deterministic model. Unlike the vanilla VAE, which
maximizes the log likelihood of determinate pair ($v$, $I$), we propose to
treat part of the observation as an unknown transition state (frames $I^{s}$),
and maximize the likelihood of the Markov Chain under all possible transition
states. Such a formulation breaks the direct dependence between future and
observation, so that our method models a stochastic process. A nested VAE
structure is then derived for end-to-end optimization, where $\psi$ and
$\theta^{\prime}$ are two additional decoders that produce the transition
frames and reconstruct the observed frames, respectively. $D_{\omega}$ is a
discriminator that helps generate more realistic transition-state frames.
## Related Work
The video prediction problem can be addressed by either deterministic or non-
deterministic models. Deterministic models directly reconstruct future frames
with recurrent neural networks (Ranzato et al. 2014; Oh et al. 2015;
Srivastava, Mansimov, and Salakhudinov 2015; Villegas et al. 2017; Finn,
Goodfellow, and Levine 2016; Lu, Hirsch, and Scholkopf 2017) or feed-forward
networks (Jia et al. 2016; Vondrick and Torralba 2017; Liu et al. 2017;
Walker, Gupta, and Hebert 2015). The reconstruction loss assumes a
deterministic environment. Such models cannot capture the stochastic nature of
real-world videos and usually result in an averaged prediction over all
possibilities.
Non-deterministic models can be further classified into autoregressive models,
GANs, and VAEs. In pixel-level autoregressive models (Oord, Kalchbrenner, and
Kavukcuoglu 2016), spatiotemporal dependencies are jointly modeled, where each
pixel in the predicted video frames fully depends on the previously predicted
pixels via the chain rule (Kalchbrenner et al. 2017). Although autoregressive
models can directly construct the video likelihood through full factorization
over each pixel, they are impractical due to their high inference complexity.
Besides, they have been observed to fail on globally coherent structures
(Razavi et al. 2019) and to generate very noisy predictions (Babaeizadeh et
al. 2017). GANs (Goodfellow et al. 2014) and conditional GANs (c-GANs) (Mirza
and Osindero 2014) are also employed for stochastic video prediction, for
their capability to generate data close to the target distribution. GAN-based
approaches can predict sharp and realistic video sequences (Vondrick,
Pirsiavash, and Torralba 2016), but they are prone to mode collapse and often
fail to produce diverse futures (Lee et al. 2018).
VAE-based models have received the most attention among the non-deterministic
models. In particular, conditional VAEs (c-VAEs) have been shown to be able to
forecast diverse future actions from a single static image (Walker et al.
2016; Xue et al. 2016). c-VAEs have also been used to predict diverse future
sequences from an observed video sequence (Babaeizadeh et al. 2017; Lee et al.
2018; Denton and Fergus 2018; Minderer et al. 2019). In some of these
approaches, the human body skeleton (Minderer et al. 2019) and dense
trajectories (Walker et al. 2016) are incorporated in addition to the RGB
frames to enhance the prediction quality. Besides, RNN-based encoder-decoder
structures such as the LSTM (Hochreiter and Schmidhuber 1997) have also been
employed for long-term prediction (Wichers et al. 2018). Some other works
leverage GANs to further boost the visual quality (Lee et al. 2018).
So far, the success of VAE-based video prediction methods has been limited to
synthetic or scripted videos, such as video games or synthetic shapes rendered
with multiple futures (Xue et al. 2016; Babaeizadeh et al. 2017), the BAIR
robotic pushing dataset (Ebert et al. 2017) with randomly moving robotic arms
that conduct multiple possible movements (Babaeizadeh et al. 2017; Lee et al.
2018), or KTH (Schuldt, Laptev, and Caputo 2004) and Human3.6M (Ionescu et al.
2013), where one volunteer repeatedly conducts predefined activities (such as
hand clapping and walking) in well-controlled environments (Lee et al. 2018;
Denton and Fergus 2018; Babaeizadeh et al. 2017; Minderer et al. 2019). In
these datasets, multiple futures are created for a given observation, so that
the VAE model can be trained as desired. However, videos captured in the wild
are usually determinate. There is always a unique future for a given
observation. Such a data distribution can easily result in posterior collapse
and degenerate the VAE into a deterministic model. A possible fix is to treat
future frames at multiple time steps as multiple futures at a single time step
(Xue et al. 2016; Walker et al. 2016), but such manipulation creates a non-negligible
gap between the distributions of the training data and the real-world data. In
this paper, we propose VAE2 to alleviate the posterior collapse problem in
variational video prediction. Different from previous attempts that address
model collapse in VAEs, such as employing a weak decoder (Bowman et al. 2015;
Gulrajani et al. 2016), involving stronger constraints (Higgins et al. 2017;
Goyal et al. 2017), or annealing strategies (Kim et al. 2018; Gulrajani et al.
2016; Bowman et al. 2015), VAE2 is specially designed to handle the collapse
problem caused by videos lacking stochasticity.
## VAE-in-VAE
In this section, we elaborate on the proposed VAE2 step by step. We start by
introducing the VAE’s posterior collapse in video prediction caused by
determinate data pairs. Next, we describe how to overcome this problem by
injecting stochasticity into the vanilla VAE’s objective function. Then, we
propose a tractable lower bound to facilitate gradient-based solution and
finally derive the VAE-in-VAE structure for end-to-end optimization.
### Posterior Collapse of Vanilla VAE with Determinate Data Pair
We first briefly introduce the formulation of traditional VAE-based solutions
for video prediction. Let $\mathcal{D}=\\{(I_{k},v_{k})\\}$ denote a dataset
of $K$ i.i.d. data pairs. Assuming $v$ is conditioned on $I$ through some
random process involving an unobserved hidden variable $z$, one can maximize
the conditional data likelihood $\sum_{k=1}^{K}\log p(v_{k}\mid I_{k})$ with
conditional VAEs (c-VAEs) by maximizing the
variational lower bound
$\mathbb{E}_{q_{\phi}(z_{k}\mid I_{k},v_{k})}[\log p_{\theta}(v_{k}\mid
I_{k},z_{k})]-KL(q_{\phi}(z_{k}\mid I_{k},v_{k})\mid\mid p(z)),$ (1)
where $q_{\phi}$ is a parametric model that approximates the true posterior
$p(z_{k}\mid I_{k},v_{k})$, and $p_{\theta}$ is a generative model conditioned
on the hidden code $z_{k}$ and the data $I_{k}$. $KL$ denotes the
Kullback–Leibler (KL) divergence.
When VAE is used for stochastic video prediction, each
$I_{k}\in\mathbb{R}^{T_{I}\times H\times W}$ represents an observed video
sequence consisting of $T_{I}$ consecutive frames with $H\times W$ spatial
size, $v_{k}\in\mathbb{R}^{T_{v}\times H\times W}$ is the future sequence of
the observation consisting of $T_{v}$ consecutive frames. A general framework
for variational video prediction is illustrated in Fig. 1 (a), where an
encoder $\phi$ and a decoder $\theta$ are employed to instantiate $q_{\phi}$
and $p_{\theta}$, respectively.
In Eq. 1, there is a regression term $p_{\theta}(v_{k}\mid I_{k},z_{k})$ which
is usually modeled by a deep decoder that takes as inputs the observation
$I_{k}$ and hidden code $z_{k}$ and directly regresses the future $v_{k}$.
Since the videos in the wild are determinately captured, each $I_{k}$ is only
associated with a unique $v_{k}$ without stochasticity. In principle, such
determinate data pair $(I_{k},v_{k})$ can be easily fit by the decoder
$\theta$, since networks with sufficient capacity are capable of representing
arbitrarily complex functions (Hornik et al. 1989) or even memorizing samples
(Arpit et al. 2017). Therefore, the hidden code $z$ can be entirely ignored by
the decoder to fulfill the KL divergence term, based on the information
preference property (Chen et al. 2016). As a result, the VAE is modeling a
deterministic process under this scenario.
### From VAE to VAE2 by Introducing Stochasticity
The reason for the aforementioned collapse issue is that traditional c-VAEs
maximize the likelihood $p(v_{k}\mid I_{k})$ over the data pair
$(v_{k},I_{k})$, which has no stochastic behavior. The key to avoiding such
collapse is to ensure that the VAE models a stochastic process instead of a
deterministic one.
To achieve this, we split each observation $I_{k}$ into two parts: determinate
observation $I^{e}_{k}\in\mathbb{R}^{\frac{T_{I}}{2}\times H\times W}$ and
random transition state $I^{s}_{k}\in\mathbb{R}^{\frac{T_{I}}{2}\times H\times
W}$. Different from the determinate $I^{e}_{k}$, $I^{s}_{k}$ is treated as a
random event that bridges the determinate observation and the unknown future.
Assuming that the evolution of a video sequence is subject to a Markov
process, where the generation of a sequence is conditioned only on its
previous sequence, we propose to maximize the likelihood of the Markov chain
over observations and futures under all possible transition states. The
optimization problem can be expressed as
$\text{maximize}~{}\log\mathbb{E}_{p(I^{s}_{k}\mid v_{k},I^{e}_{k})}[p(v_{k}\mid
I^{s}_{k})p(I^{s}_{k}\mid I^{e}_{k})].$ (2)
Intuitively, the proposed objective function introduces stochastic information
by explicitly relaxing a part of the determinate observation to a random
transition state. However, such a formulation is intractable in terms of both
the likelihood of the Markov chain and the expectation term.
### A Tractable Objective Function for VAE2
In this section, we demonstrate that a tractable lower bound for Eq. 2 can be
derived by applying the Cauchy–Schwarz inequality. Specifically, we have
$\begin{split}&\log\mathbb{E}_{p(I^{s}\mid v,I^{e})}[p(v\mid I^{s})p(I^{s}\mid
I^{e})]\\\ &\geq\mathbb{E}_{q_{\phi}(z\mid
I^{e},v)}[\log\mathbb{E}_{p(I^{s}\mid I^{e},z)}p_{\theta}(v\mid I^{s},z)]\\\
&~{}~{}~{}~{}~{}~{}~{}~{}-KL(q_{\phi}(z\mid I^{e},v)\mid\mid p(z)).\\\
\end{split}$ (3)
Here we omit the index $k$ for simplicity. The above lower bound serves as our
first-level objective function. It has a similar form to Eq. 1, except for an
additional transition variable $I^{s}$ in the decoder model $p_{\theta}$. To
maximize this lower bound, we need to calculate the
expectation term $\mathbb{E}_{p(I^{s}\mid I^{e},z)}p_{\theta}(v\mid I^{s},z)$
in Eq. 3, which requires an approximation towards $p(I^{s}\mid I^{e},z)$.
Here, we employ a generation model $q_{\psi}(I^{s}\mid I^{e},z)$ parameterized
with $\psi$ for the approximation by minimizing $KL(q_{\psi}(I^{s}\mid
I^{e},z)\mid\mid p(I^{s}\mid I^{e},z))$, which induces our second-level
objective function
$\mathbb{E}_{q_{\psi}(I^{s}\mid I^{e},z)}\log
p_{\theta^{{}^{\prime}}}(I^{e}\mid I^{s},z)-KL(q_{\psi}(I^{s}\mid
I^{e},z)\mid\mid p(I^{s})).$ (4)
For the full derivation of Eq. 3 and Eq. 4, please refer to our appendix.
Since the transition state $I^{s}$ is a high-dimensional signal (a video
sequence), the exact functional form of its prior distribution is not
accessible. We follow adversarial autoencoders (Makhzani et al. 2015) and
leverage adversarial training to minimize the distance between
$q_{\psi}(I^{s}\mid I^{e},z)$ and the prior $p(I^{s})$. As such,
$-KL(q_{\psi}(I^{s}\mid I^{e},z)\mid\mid p(I^{s}))$ in Eq. 4 is replaced by
$D_{\omega}(I^{s},I_{prior}),~{}where~{}I_{prior}\sim p(I^{s}),I^{s}\sim
q_{\psi}(I^{s}\mid I^{e},z).$ (5)
Here, $D_{\omega}$ is a discriminator network parameterized with $\omega$ and
$I_{prior}$ is randomly sampled from the video dataset. By doing so, the
second-level objective function in Eq. 4 is now tractable. Ideally, we would
optimize the first-level and second-level objective functions iteratively,
which requires an unaffordable training time for convergence. In practice, we
simplify this iterative learning process by merging the two objective
functions into a single one as
$\begin{split}&\mathcal{L}(I^{e},I^{s},v;\theta,\phi)+\\\
&\lambda[\mathbb{E}_{q_{\psi}(I^{s}\mid I^{e},z)}\log
p_{\theta^{{}^{\prime}}}(I^{e}\mid
I^{s},z)+{D}_{\omega}(I^{s},I_{prior})],\end{split}$ (6)
where $\lambda$ is a loss weight applied on Eq. 4. This simplification enables
us to simultaneously optimize the two objective functions and we use a weight
$\lambda$ to adjust the optimization speed of our second-level objective
function to mimic the original iterative process.
### End-to-end Optimization
We employ two VAE structures to maximize our final objective function in Eq.
6. As illustrated in Fig. 1(b), the first VAE consisting of an encoder $\phi$
and a decoder $\theta$ is incorporated to maximize the first-level objective
function $\mathcal{L}(I^{e},I^{s},v;\theta,\phi)$. The second VAE with an
encoder $\psi$, a decoder ${\theta^{{}^{\prime}}}$ and a discriminator
$D_{\omega}$ is used for maximizing the second-level objective function. We
assume $p_{\theta}$ and $p_{\theta^{{}^{\prime}}}$ to be Laplace distributions
(which leads to an L1 regression loss) and assume $q_{\phi}$ to be Gaussian.
The training procedure can be summarized as follows:
* •
Encoder $\phi$ takes $I^{e}$ and $v$ as inputs and produces hidden codes $z$.
$z$ is fed into $\psi$ to generate $L$ different $I^{s}$. For each $I^{s}$,
decoder $\theta$ and $\theta^{{}^{\prime}}$ reconstruct $v$ and $I^{e}$,
respectively.
* •
Estimate the term $\log\mathbb{E}_{p(I^{s}\mid I^{e},z)}p_{\theta}(v\mid I^{s},z)$
in Eq. 3 with $\log\sum_{i=1}^{L}p_{\theta}(v\mid I^{s}_{i},z)$. Estimate
$\mathbb{E}_{q_{\psi}(I^{s}\mid I^{e},z)}\log
p_{\theta^{{}^{\prime}}}(I^{e}\mid I^{s},z)$ in Eq. 6 with $\sum_{i=1}^{L}\log
p_{\theta^{{}^{\prime}}}(I^{e}\mid I^{s}_{i},z)$.
* •
Compute gradients w.r.t. Eq. 6 and update $\phi$, $\psi$, $\theta$,
$\theta^{{}^{\prime}}$. Update $D_{\omega}$ with adversarial learning.
In practice, we observe that this simplified process works well even with
$L=1$. This further reduces the algorithm complexity.
## Number Sequence Prediction
Figure 2: Visualization of the number sequence prediction for two model
parameters. The 100 predictions made by the VAE baseline are almost identical,
while those made by the proposed VAE2 are more scattered around the ground
truth. This shows that VAE2 captures the inherent stochastic law of the number
sequence model.
We first use a simple number sequence prediction task to demonstrate how VAE2
mitigates the collapse problem when only determinate data are available. More
specifically, we explicitly design a world model which can stochastically
produce number sequences w.r.t. the given model parameter.
Sequence Model Design. The stochastic model that generates number sequence is
defined by $G(\alpha,H(\boldsymbol{\epsilon}))=\frac{1}{1+e^{-\alpha
H(\boldsymbol{\epsilon})}}$. Here,
$H(\boldsymbol{\epsilon})=[h_{0}(\epsilon_{0}),h_{1}(\epsilon_{1}),...,h_{N-1}(\epsilon_{N-1})]\in\mathbb{R}^{N}$
and $h_{i}(\epsilon_{i})=C+i\cdot S+\epsilon_{i}$, where
$\epsilon_{i}\sim\mathrm{Uniform}(0,S)$. This world model is parameterized by
$\alpha$, and its stochastic nature is characterized by
$H(\boldsymbol{\epsilon})$.
Constructing the Dataset. In order to mimic the determinate video sequences
captured in the wild, we use $G(\alpha,H(\boldsymbol{\epsilon}))$ to generate
only one number sequence for each world model parameter $\alpha$. We then
split each number
sequence into three even parts. For VAE2, the three parts correspond to
$I^{e}$, $I^{s}$ and $v$, respectively. For the baseline VAE, the first and
the second parts together correspond to $I$ and the third part corresponds to
$v$. The dataset
$D=\\{G(\alpha_{k},H(\boldsymbol{\hat{\epsilon}})),k\in[1,K]\\}$ contains $K$
number sequences generated from $K$ different world model parameters, where
$H(\boldsymbol{\hat{\epsilon}})$ denotes a single sampling result of
$H(\boldsymbol{\epsilon})$. We set $C$, $S$, $N$, and $K$ to -1.5, 0.1, 30, and
10,000, respectively. The constructed dataset contains 10,000 number
sequences, each of which has 30 data points.
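The world model and data construction described above fit in a few lines; the following is a sketch, where the helper name `generate_sequence` and the fixed seed are our own illustrative choices:

```python
import numpy as np

def generate_sequence(alpha, C=-1.5, S=0.1, N=30, rng=None):
    """One sequence from G(alpha, H(eps)) = sigmoid(alpha * H(eps)),
    with h_i = C + i*S + eps_i and eps_i ~ Uniform(0, S)."""
    if rng is None:
        rng = np.random.default_rng()
    H = C + np.arange(N) * S + rng.uniform(0.0, S, size=N)
    return 1.0 / (1.0 + np.exp(-alpha * H))

# Determinate data: exactly one sequence per world parameter alpha,
# split into three even parts: observation I^e, transition state I^s, future v.
seq = generate_sequence(alpha=2.0, rng=np.random.default_rng(0))
I_e, I_s, v = np.split(seq, 3)
assert seq.shape == (30,) and I_e.shape == I_s.shape == v.shape == (10,)
```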
Evaluation and Visualization. We train VAE2, VAE (Babaeizadeh et al. 2017),
and VAE-GAN (Lee et al. 2018) models on this number sequence dataset. The
training and architecture details can be found in the appendix. After
training, we make 100 predictions on $v$ with each method. In
Fig. 2, we plot original data points in red and predicted ones in green. The
different shapes correspond to different hidden variables
$h_{i}(\epsilon_{i})$.
We observe from the figure that the predictions of $v$ from the baseline VAE
model are almost identical among the 100 samples (samples of the same shape
are grouped together), showing that the baseline VAE degrades to a
deterministic model. In contrast, the proposed VAE2 provides much more diverse
predictions
(samples of each shape scatter around the ground-truth). Although we do not
provide multiple futures in the training dataset, VAE2 is able to explore the
underlying stochastic information through our innovative design. More
visualizations can be found in the appendix.
In addition to visualizing the predicted numbers, we can use the standard
deviation of the L1 loss of different samples to measure the diversity of
predictions. Fig. 3(a) plots the mean and the standard deviation of the
predictions from VAE2 and its counterparts on the entire dataset. It is clear
that the number sequences predicted by the proposed VAE2 are more diverse than
other methods.
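The diversity measure described above — the spread of the per-sample L1 loss over repeated predictions — can be sketched with synthetic data (a toy NumPy example, not the evaluation code of the paper):

```python
import numpy as np

def prediction_diversity(preds, gt):
    """Std over M stochastic samples of the per-sample mean L1 distance to the ground truth."""
    l1 = np.abs(preds - gt).reshape(preds.shape[0], -1).mean(axis=1)
    return float(l1.std())

gt = np.zeros((4, 4))                                      # toy ground-truth "frame"
collapsed = np.stack([gt + 0.5] * 100)                     # 100 identical predictions
diverse = np.stack([gt + 0.01 * i for i in range(100)])    # 100 scattered predictions

assert prediction_diversity(collapsed, gt) == 0.0          # collapsed model: zero spread
assert prediction_diversity(diverse, gt) > 0.0             # stochastic model: positive spread
```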
## Experiments on Videos Captured in the Wild
Figure 3: Standard deviation of 100 prediction samples from different methods
under four criteria. Each box and whisker indicate the mean $\pm$ three and
five times the standard deviation, respectively. The larger deviation range of
VAE2 indicates that it significantly improves the diversity of the
predictions. The number beside each legend denotes the averaged Inception
Score of the 100 samples from the corresponding method over the entire
Cityscapes dataset.
Dataset and Evaluation Metrics. We evaluate the proposed VAE2 on the
Cityscapes dataset (Cordts et al. 2016), which contains urban street scenes
from 50 different cities. The videos are captured at 17 fps with cameras
mounted on
moving cars. Since a car and its mounted camera pass each street only once,
every video sequence is determinate and there are no multiple futures
available. The average number of humans and vehicles per frame is 7.0 and
11.8, respectively, providing a fairly complex instance pattern.
There is no consensus yet on how to evaluate a video prediction scheme.
Previous work has tried to evaluate the perceived visual quality of the
predicted frames. However, this work is focused on mitigating the model
collapse problem caused by determinate training data. In addition to the
perceived visual quality, we will mainly evaluate how seriously a prediction
model suffers from the model collapse problem. In general, the more diverse
the predicted frames are, the better the model handles the model collapse
problem. The diversity of predicted frames can be evaluated both
quantitatively and qualitatively. In particular, we use the standard deviation
of some conventional image quality evaluation metrics, such as SSIM (Wang et
al. 2004), PSNR (Huynh-Thu and Ghanbari 2008), and L1 loss, for quantitative
evaluation. We also compute the optical flows between the ground-truth future
frame and the predicted future frames by various prediction models. Since
optical flow reflects per-pixel displacement, it can be a very intuitive way
to show the pixel-level difference between the two frames.
Reference Schemes and Implementation Details. Existing VAE-based video
prediction approaches are designed with different backbone networks and data
structures, including RGB frames, skeleton, or dense trajectory. In this work,
we only consider methods that directly operate on raw RGB videos to
demonstrate the efficacy of the proposed VAE2. These prediction models can be
categorized into two VAE variants, namely vanilla VAEs (Babaeizadeh et al.
2017; Xue et al. 2016; Denton and Fergus 2018) and VAE-GAN (Lee et al. 2018;
Larsen et al. 2015). In addition, we evaluate $\beta$-VAE (Higgins et al.
2017) and VAE with annealing (Bowman et al. 2015), which are designed to
address the model collapse problem without considering the situation where the
training data lack stochasticity. We also construct a deterministic model as a
baseline.
We employ an 18-layer HRNet (wd-18-small) (Sun et al. 2019) to instantiate all
the encoders, decoders, and GANs in VAE2. For a fair comparison, we replace
the original backbone network in the reference schemes (Babaeizadeh et al.
2017; Lee et al. 2018; Higgins et al. 2017; Bowman et al. 2015) with the same
HRNet.
The deterministic baseline is also an 18-layer HRNet that directly regresses
the future frames. We set $\lambda$ to 0.1 and use the Adam optimizer to train
all methods for 1,000 epochs. More training details can be found in the
appendix.
Figure 4: Prediction quality under different numbers of samples. VAE2 can
generate predictions very close to the ground-truth future when sampled a
sufficient number of times.
Diversity and Quality. In order to measure the diversity of the predictions,
we make 100 random predictions with each method and compute the PSNR, L1
distance, SSIM, and multi-scale SSIM (MS-SSIM) (Wang, Simoncelli, and Bovik
2003) between each predicted frame and the ground-truth future frame. In Fig.
3, we use box-and-whisker charts to plot the mean (the center of each box),
the 3-sigma (solid box), and the 5-sigma (whiskers) of each metric, where
sigma denotes the standard deviation. In each sub-figure, different colors
represent different methods. It is clear from the figure that the future
frames predicted by VAE2 have a larger standard deviation than the reference
schemes. This corroborates that VAE2 can predict much more diverse futures
than existing methods. Besides, we find that approaches such as $\beta$-VAE
and Anneal-VAE, which are designed for model collapse, fail in this scenario:
they bring only a marginal diversity improvement. This implies that we need a
specific solution like VAE2 when the source data lack stochasticity.
Another intuitive way to check whether the proposed VAE2 helps alleviate the
model collapse problem is to evaluate the KL loss of the hidden code $z$. A
variational model that collapses to its deterministic counterpart usually has
a very small KL loss. As can be seen in Fig. 5, the converged KL loss of the
counterpart methods is significantly smaller than that of VAE2.
Figure 5: KL loss (without normalization) of the latent variable $z$ during
training for different schemes. A small KL loss indicates that the learned
distribution of the hidden variable $z$ is very close to the prior
distribution. When the KL loss approaches zero, the latent variable does not
encode any useful information.
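A complementary collapse diagnostic, not used in the paper but common in the VAE literature, is to count "active" latent units — dimensions whose posterior mean actually varies across inputs; a collapsed posterior has none. A minimal NumPy sketch:

```python
import numpy as np

def active_units(mu, threshold=1e-2):
    """Count latent dims whose posterior mean varies across the dataset
    (variance above a small threshold); zero active units signals collapse."""
    return int(np.sum(np.var(mu, axis=0) > threshold))

rng = np.random.default_rng(1)
collapsed_mu = np.zeros((500, 16))        # every input mapped to the prior mean
healthy_mu = rng.normal(size=(500, 16))   # posterior means vary with the input

assert active_units(collapsed_mu) == 0
assert active_units(healthy_mu) == 16
```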
Finally, we follow the method in (Babaeizadeh et al. 2017) to evaluate the
video prediction quality. In Fig. 4, we plot the best prediction among
different numbers of samples under various metrics. The reference is a
deterministic model (Finn, Goodfellow, and Levine 2016) that directly regresses
the ground-truth future. We can see from the figure that as the number of
samples increases, the proposed VAE2 can predict video sequences at a quality
that is comparable with the reference model in terms of the reconstruction
fidelity. As the time-step increases, VAE2 achieves an even better performance
than the reference. We also employ the Inception Score (IS) (Salimans et al.
2016), a frequently used no-reference image quality assessment method, to
measure the visual quality of the predicted futures. As illustrated by the
number beside each legend in Fig. 3, the scores are close to each other, which
suggests that the overall visual quality of the futures predicted by VAE2 is
on par with that of other approaches.
Figure 6: Visualization of predictions and their corresponding optical flow
w.r.t. the ground-truth future. We randomly sample four futures with each
method. The diverse patterns of the flow images under VAE2 indicate its
stochastic behavior. The last row shows the standard deviations of the flow
images over 100 samples. The large value of VAE2 supports that it predicts
much more diverse future. More visualizations can be found in the appendix.
Visualizations. Due to limited space, we only visualize single frame
predictions with three baseline methods in this section. More visualizations
can be found in the appendix. Fig. 7 shows three sampled predictions of a
single prediction step for each of the schemes to be compared. We notice that
the future positions of the fast-moving truck predicted by VAE (Babaeizadeh et
al. 2017) and VAE-GAN (Lee et al. 2018) are almost identical to those of the
deterministic baseline. There is little stochasticity among different samples.
In contrast, VAE2 achieves noticeable randomness on the moving truck and the
moving camera. The predicted camera motion can be observed from the parking
car on the right.
Figure 7: Visualization of three predictions for the next frame of an observed
video sequence. Each column contains the three predictions made by each
method. It can be observed that VAE2 predicts quite different futures while
the others make almost identical predictions.
In order to show the detailed differences among predictions, we compute and
colorize (Baker et al. 2011) the optical flow between each prediction and the
ground-truth future frame. In Fig. 6, we show four predictions of each method
and their corresponding optical flow. The color code for optical flows is
illustrated at the top-left corner. For example, the first flow map under VAE2
has a green hue, suggesting that the whole background shifts left w.r.t. the
ground truth future. This means the camera car is predicted to turn right in
the coming future (although it is not the ground-truth future in the dataset).
It can also be observed that the displacement patterns of the frames predicted
by our method are much more diverse than the other approaches. To quantify
such diversity, we compute and visualize the (normalized) standard deviation
of the optical flows on 100 different samples for each method. As can be
viewed in fifth row, the flow variance of VAE2 has much larger responses on
the moving car and the whole background region. The statistics of such
diversity on the entire dataset are presented in the table at the last row to
illustrate the efficacy of VAE2 on predicting stochastic futures.
## Conclusion and Discussion
In this paper, we investigate the posterior collapse problem in variational
video prediction caused by the videos determinately captured in the wild. We
effectively mitigate this problem by explicitly introducing stochasticity into
vanilla VAEs and propose an end-to-end framework, VAE2, for optimization. The
proposed VAE2 demonstrates its capability of capturing the stochastic
information of videos in the wild, which makes variational video prediction
more practical in real-world applications. In addition, we believe that VAE2
can be effectively extended to other sequential prediction problems where the
training data lack stochasticity. We leave this to future work.
We also notice that the inference structure of VAE2 looks similar to the
recently proposed two-stage VAE (Dai and Wipf 2019). However, this two-stage
VAE is designed to address the problem that the hidden code drawn from the
encoder is incongruous with the prior, and the first VAE is used to predict
the distribution of the hidden code instead of the randomized partial
observation as in VAE2.
## References
* Alemi et al. (2017) Alemi, A. A.; Poole, B.; Fischer, I.; Dillon, J. V.; Saurous, R. A.; and Murphy, K. 2017. Fixing a broken ELBO. _arXiv preprint arXiv:1711.00464_ .
* Arpit et al. (2017) Arpit, D.; Jastrzebski, S.; Ballas, N.; Krueger, D.; Bengio, E.; Kanwal, M. S.; Maharaj, T.; Fischer, A.; Courville, A.; Bengio, Y.; et al. 2017. A closer look at memorization in deep networks. _arXiv preprint arXiv:1706.05394_ .
* Babaeizadeh et al. (2017) Babaeizadeh, M.; Finn, C.; Erhan, D.; Campbell, R. H.; and Levine, S. 2017. Stochastic variational video prediction. _arXiv preprint arXiv:1710.11252_ .
* Baker et al. (2011) Baker, S.; Scharstein, D.; Lewis, J.; Roth, S.; Black, M. J.; and Szeliski, R. 2011. A database and evaluation methodology for optical flow. _International journal of computer vision_ 92(1): 1–31.
* Bowman et al. (2015) Bowman, S. R.; Vilnis, L.; Vinyals, O.; Dai, A. M.; Jozefowicz, R.; and Bengio, S. 2015. Generating sentences from a continuous space. _arXiv preprint arXiv:1511.06349_ .
* Chen et al. (2016) Chen, X.; Kingma, D. P.; Salimans, T.; Duan, Y.; Dhariwal, P.; Schulman, J.; Sutskever, I.; and Abbeel, P. 2016. Variational lossy autoencoder. _arXiv preprint arXiv:1611.02731_ .
* Cordts et al. (2016) Cordts, M.; Omran, M.; Ramos, S.; Rehfeld, T.; Enzweiler, M.; Benenson, R.; Franke, U.; Roth, S.; and Schiele, B. 2016. The cityscapes dataset for semantic urban scene understanding. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 3213–3223.
* Dai and Wipf (2019) Dai, B.; and Wipf, D. 2019. Diagnosing and enhancing vae models. _arXiv preprint arXiv:1903.05789_ .
* Denton and Fergus (2018) Denton, E.; and Fergus, R. 2018. Stochastic video generation with a learned prior. _arXiv preprint arXiv:1802.07687_ .
* Ebert et al. (2017) Ebert, F.; Finn, C.; Lee, A. X.; and Levine, S. 2017. Self-supervised visual planning with temporal skip connections. _arXiv preprint arXiv:1710.05268_ .
* Finn, Goodfellow, and Levine (2016) Finn, C.; Goodfellow, I.; and Levine, S. 2016. Unsupervised learning for physical interaction through video prediction. In _Advances in neural information processing systems_ , 64–72.
* Goodfellow et al. (2014) Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2014. Generative adversarial nets. In _Advances in neural information processing systems_ , 2672–2680.
* Goyal et al. (2017) Goyal, A. G. A. P.; Sordoni, A.; Côté, M.-A.; Ke, N. R.; and Bengio, Y. 2017. Z-forcing: Training stochastic recurrent networks. In _Advances in neural information processing systems_ , 6713–6723.
* Gulrajani et al. (2016) Gulrajani, I.; Kumar, K.; Ahmed, F.; Taiga, A. A.; Visin, F.; Vazquez, D.; and Courville, A. 2016. Pixelvae: A latent variable model for natural images. _arXiv preprint arXiv:1611.05013_ .
* Guo et al. (2017) Guo, Q.; Feng, W.; Zhou, C.; Huang, R.; Wan, L.; and Wang, S. 2017. Learning dynamic siamese network for visual object tracking. In _Proceedings of the IEEE International Conference on Computer Vision_ , 1763–1771.
* Higgins et al. (2017) Higgins, I.; Matthey, L.; Pal, A.; Burgess, C.; Glorot, X.; Botvinick, M.; Mohamed, S.; and Lerchner, A. 2017. beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework. _Iclr_ 2(5): 6.
* Hochreiter and Schmidhuber (1997) Hochreiter, S.; and Schmidhuber, J. 1997. Long short-term memory. _Neural computation_ 9(8): 1735–1780.
* Hornik et al. (1989) Hornik, K.; Stinchcombe, M.; White, H.; et al. 1989. Multilayer feedforward networks are universal approximators. _Neural networks_ 2(5): 359–366.
* Huynh-Thu and Ghanbari (2008) Huynh-Thu, Q.; and Ghanbari, M. 2008. Scope of validity of PSNR in image/video quality assessment. _Electronics letters_ 44(13): 800–801.
# Enhancing Sequence-to-Sequence Neural Lemmatization with External Resources
Kirill Milintsevich
Institute of Computer Science
University of Tartu
Tartu, Estonia
<EMAIL_ADDRESS>
Kairit Sirts
Institute of Computer Science
University of Tartu
Tartu, Estonia
<EMAIL_ADDRESS>
###### Abstract
We propose a novel hybrid approach to
lemmatization111https://github.com/501Good/lexicon-enhanced-lemmatization that
enhances the seq2seq neural model with additional lemmas extracted from an
external lexicon or a rule-based system. During training, the enhanced
lemmatizer learns both to generate lemmas via a sequential decoder and copy
the lemma characters from the external candidates supplied during run-time.
Our lemmatizer enhanced with candidates extracted from the Apertium
morphological analyzer achieves statistically significant improvements over
baseline models that do not use additional lemma information, reaching an
average accuracy of 97.25% on a set of 23 UD languages, 0.55% higher than that
of the Stanford Stanza model on the same languages.
into lemmatization and show that our enhanced system performs considerably
better than a simple lexicon extension method based on the Stanza system, and
it achieves complementary improvements w.r.t. the data augmentation method.
## 1 Introduction
State-of-the-art lemmatization systems are based on attentional sequence-to-
sequence neural architectures operating on characters that transform the
surface word form into its lemma (Kanerva et al., 2018; Qi et al., 2018). Like
any other supervised learning model, these systems are dependent on the amount
and quality of the existing training data. Attempts to develop even more
accurate lemmatization systems can focus on improving the model’s architecture
or obtaining additional data. While annotating additional data is an ongoing
process for many smaller languages in the Universal Dependencies (UD)
collection, there are also other data sources available that can be useful for
improving lemmatization systems. In particular, we refer to existing rule-
based morphological analyzers, lexicons, and other such resources.
Three potential sources for extracting additional lemma candidates are
Apertium, Unimorph, and UD Lexicons initiatives.
Apertium222https://www.apertium.org is an open-source rule-based machine
translation platform (Forcada et al., 2011). It also includes rule-based
morphological analyzers based on finite-state transducers that cover 80
languages. Unimorph333http://unimorph.org/ is a project aimed at collecting
annotated morphological inflection data, including lemmas, from Wiktionary
(Kirov et al., 2016), a free open dictionary for many languages. Currently,
the Unimorph project covers 110 languages. UD
Lexicons444http://atoll.inria.fr/~sagot/ is a collection of 53 morphological
lexicons in CoNLL-UL format covering 38 languages. UD Lexicons mostly use
Apertium and Giellatekno systems to generate the annotations (Sagot, 2018).
Several previous works have proposed methods to improve lemmatization systems
by augmenting the training data with additional instances (Bergmanis and
Goldwater, 2019; Kanerva et al., 2020). In this paper, we propose another
approach that both modifies the model architecture and leverages additional
data. Unlike previous work where the model gains from extracting extra
knowledge from the additional data provided for training, our primary goal is
to teach the model to use external resources, even those that may only be
available later during test time. In particular, the proposed system is a
dual-encoder model, which receives two inputs for each word: 1) the word form
itself to be lemmatized and 2) (optionally) the lemma candidates for that word
form extracted from a lexicon or generated by a rule-based system. Both inputs
are encoded with two different encoders and passed to the decoder. The decoder
then learns, via two separate attention mechanisms, to generate the lemma by
combining regular transduction with copying characters from the external
candidates. This way, the model is trained to use two sources of information:
the regular training set and the candidates proposed by an external resource.
The experiments with several models enhanced with external data on 23 UD
languages show that the best model using additional lemma candidates generated
by the Apertium system achieves significantly higher results than the baseline
models trained on the UD training set only. Also, we compare our method with
other methods using external data. The enhanced system performs considerably
better than a simple lexicon extension method based on the Stanza system, and
it achieves complementary improvements w.r.t. the data augmentation method of
Kanerva et al. (2020).
## 2 Related Works
Nowadays, state-of-the-art lemmatization systems are typically based on a
neural sequence-to-sequence architecture, as demonstrated by the variety of
systems presented at the CoNLL 2018 (Zeman et al., 2018) and SIGMORPHON 2019
(McCarthy et al., 2019) shared tasks. Several systems, including the TurkuNLP
pipeline, the winner of the lemmatization track at CoNLL 2018 Shared task, use
an attention-based translation model (Kanerva et al., 2018; Qi et al., 2018).
The input to the system is the character sequence of a surface form (SF),
which is “translated” into the lemma by an attention-based decoder. The input
sequence can also be extended with POS tags (Qi et al., 2018) and
morphological features (Kanerva et al., 2018).
Another approach was used by the UDPipe Future system, the second-best model
at the CoNLL 2018 Shared Task. Straka (2018) proposed to produce a lemma by
constructing a set of rules that transform the SF into a lemma. These rules
can include copying, moving, or deleting a character in the SF, as well as
additional rules for changing or preserving the casing. Thus, the
lemmatization task is rendered into a multi-class classification task of
choosing the correct transformation rule among the set of all possible rules
generated from the training set. A year later, Straka et al. (2019) improved
the result for the lemmatization by adding BERT contextual embeddings (Devlin
et al., 2019) to the input, which made them the best lemmatization system at
the SIGMORPHON 2019 Shared Task.
Several previous works have proposed to leverage additional data to improve
lemmatization. In the simplest form, training data itself can be used to
create a lexicon that maps word forms to its lemma. This strategy has been
adopted by the Stanford neural lemmatization system (Qi et al., 2018), which
creates such lexicons from the training sets and resorts to lemma generation
only when the lexicon lookup fails. One can easily imagine extending such a
lexicon with external resources. Rosa and Mareček (2018) adopted another
simple way of using Unimorph lexicons to post-fix the morphological features
and lemmas predicted by the UDPipe system (Straka and Straková, 2017). The
post-fix is performed by simply looking up the SF from the Unimorph lexicon
and, if the match is found, replacing the model prediction with the tags and
lemmas found in the lexicon.
Another line of work has used additional data to augment the training data
set. Bergmanis and Goldwater (2019) augmented their training set by first
listing all non-ambiguous word-lemma pairs from Unimorph lexicons and then
extracted sentences from Wikipedia that contained these words. They then
trained the context-sensitive Lematus model (Bergmanis and Goldwater, 2018) on
this extended partially lemmatized data set. Kanerva et al. (2018) used
Apertium’s morphological analyzer module to extend the training set for
languages with tiny UD datasets. Apertium was used to generate all possible
morphological analyses to 5000 sentences selected from the Wikipedia of the
respective language. For each sentence, the most likely analysis sequence was
then obtained via a disambiguating language model. The words that were
assigned an Apertium-generated lemma during this process were added to the
lemmatizer training set. In the subsequent work, Kanerva et al. (2020)
extended the training data even more. They used Apertium to analyze all words
found in the CoNLL 2017 web crawl dataset (Ginter et al., 2017) or in the
Wikipedia of the respective language. All new words with unambiguous lemma and
morphological analysis were added to the augmented training set.
## 3 Method
Figure 1: The architecture of the dual-encoder enhanced lemmatizer. Layers
that comprise the original Stanza lemmatizer are marked with a bold red
border.
The core of the proposed model is the Stanford lemmatizer (Qi et al., 2018,
2020) which is a sequence-to-sequence model with attention. It takes
character-level word representation and the POS tag as input and processes
them with a bidirectional LSTM encoder. Then, it passes the encoder outputs to
an LSTM decoder, which applies a soft dot attention layer after every LSTM
cell. Finally, the output is constructed via greedy decoding.
We make several changes to the model architecture, as shown in Figure 1. The
components comprising the original Stanford lemmatizer are marked in the
figure with a bold red border. First, we add another encoder that encodes the
lemma candidates provided by the external system; the output representations
of both encoders are combined with a linear layer and fed to the decoder.
Second, we add another attention layer to the decoder that attends to the
outputs of the second encoder, and the two attention outputs are combined
with a linear layer. Third, in addition to the POS tag, we also add
morphological features to the first encoder’s input.
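As a rough illustration of how the decoder combines the two attention contexts, consider the following pure-Python sketch. It is not the actual implementation: vectors are plain lists, the learned linear combination layer is replaced by a fixed mixing coefficient `mix`, and all function names are ours.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot_attention(query, memory):
    # Soft dot attention: score each memory vector by its dot product
    # with the query, then return the softmax-weighted sum (context).
    scores = [sum(q * v for q, v in zip(query, vec)) for vec in memory]
    weights = softmax(scores)
    dim = len(query)
    return [sum(w * vec[i] for w, vec in zip(weights, memory)) for i in range(dim)]

def dual_attention_step(query, word_memory, cand_memory, mix=0.5):
    # One decoder step attending to both encoder outputs; a fixed mixing
    # coefficient stands in for the learned linear combination layer.
    ctx_word = dot_attention(query, word_memory)
    ctx_cand = dot_attention(query, cand_memory)
    return [mix * a + (1.0 - mix) * b for a, b in zip(ctx_word, ctx_cand)]
```

When the two memories are identical, the mixed context reduces to ordinary single-encoder attention, which is the sanity check one would expect from such a combination.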
Additionally, we implement the encoder dropout to simulate the situation when
the external candidates are absent. The encoder dropout value, which lies in
the range $[0.0,1.0]$, defines the probability of discarding all candidates
from a batch during training; for such a batch, only the main encoder is
trained. This helps the model perform robustly both when the candidates in
the second encoder are present and when they are absent.
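A minimal sketch of this batch-level encoder dropout, assuming candidates are stored as one list of strings per word; the function name is ours:

```python
import random

def encoder_dropout(batch_candidates, p_drop, rng=random):
    """With probability p_drop, discard the external lemma candidates
    for the whole batch, so this batch trains the main encoder only."""
    if rng.random() < p_drop:
        return [[] for _ in batch_candidates]
    return batch_candidates
```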
Treebank | Size | Def | Lex | Uni | Apt | Stanza | Def (OOV) | Apt (OOV) | Diff | OOV%
---|---|---|---|---|---|---|---|---|---|---
cs_pdt | 1,503K | 98.51 | 98.66 | 98.67 | 98.55 | 98.58 | 90.51 | 90.95 | 0.44 | 7.53
ru_syntagrus | 1,107K | 97.82 | 97.92 | 98.00 | 98.11 | 97.91 | 89.48 | 91.65 | 2.17 | 10.56
es_ancora | 547K | 99.31 | 99.28 | 99.31 | 99.35 | 99.21 | 95.06 | 95.40 | 0.34 | 5.90
ca_ancora | 530K | 98.85 | 98.83 | 98.83 | 98.89 | 98.49 | 95.82 | 96.79 | 0.97 | 5.43
fr_gsd | 389K | 98.05 | 98.07 | 98.10 | 98.13 | 98.15 | 89.50 | 90.83 | 1.33 | 6.19
hi_hdtb | 351K | 98.77 | 98.71 | 98.71 | 98.80 | 96.66 | 93.49 | 94.62 | 1.13 | 4.67
de_gsd | 287K | 96.87 | 96.91 | 97.04 | 96.80 | 96.78 | 85.53 | 85.22 | -0.31 | 13.04
it_isdt | 278K | 98.19 | 98.39 | 98.31 | 98.48 | 98.32 | 90.28 | 92.22 | 1.94 | 5.86
en_ewt | 254K | 98.21 | 98.19 | 98.22 | 98.26 | 98.18 | 90.10 | 90.49 | 0.39 | 10.05
ro_rrt | 218K | 98.33 | 98.28 | 98.32 | 98.53 | 98.16 | 91.46 | 93.22 | 1.76 | 11.60
pt_bosque | 210K | 98.24 | 98.20 | 98.23 | 98.32 | 98.12 | 93.15 | 94.27 | 1.12 | 8.85
nl_alpino | 208K | 97.08 | 96.61 | 96.89 | 96.74 | 96.99 | 86.34 | 84.88 | -1.46 | 15.81
bg_btb | 156K | 97.97 | 98.20 | 98.17 | 98.07 | 97.36 | 91.07 | 91.02 | 0.05 | 13.97
ur_udtb | 138K | 97.16 | 97.29 | 97.28 | 97.28 | 95.62 | 91.83 | 91.93 | 0.01 | 6.79
gl_ctg | 126K | 98.48 | 98.48 | 98.51 | 98.93 | 98.59 | 89.73 | 93.55 | 3.82 | 10.94
uk_iu | 122K | 97.03 | 97.07 | 97.06 | 97.12 | 96.70 | 91.15 | 91.32 | 0.17 | 33.62
eu_bdt | 121K | 96.48 | 96.62 | 96.63 | 96.68 | 96.52 | 86.18 | 86.81 | 0.63 | 21.68
da_ddt | 100K | 97.87 | 97.7 | 97.81 | 98.03 | 97.36 | 89.86 | 90.31 | 0.45 | 18.13
sv_talbanken | 96K | 97.36 | 97.59 | 97.64 | 98.27 | 97.53 | 87.66 | 92.33 | 4.67 | 17.52
el_gdt | 61K | 96.84 | 97.06 | 97.25 | 97.38 | 96.66 | 84.18 | 86.42 | 2.24 | 19.59
tr_imst | 56K | 97.03 | 97.23 | 97.13 | 97.39 | 96.73 | 92.27 | 93.18 | 0.91 | 36.25
hy_armtdp | 52K | 95.55 | 95.84 | 94.87 | 96.01 | 95.55 | 86.11 | 87.34 | 1.23 | 38.54
be_hse | 13K | 81.91 | 81.86 | 82.36 | 82.63 | 79.98 | 68.78 | 70.30 | 1.52 | 93.28
Average | | 97.04 | 97.09 | 97.10 | 97.25 | 96.70 | 89.11 | 90.22 | 1.11 |
Table 1: Lemmatization accuracy of the models enhanced with the training set
lexicon (Lex), the Unimorph lexicon (Uni), and the Apertium system (Apt), as
well as the Default (Def) and Stanza baselines, on 23 UD languages. The first
five accuracy columns report results on all words; the last four report
results on out-of-vocabulary words only.
## 4 Experiments
#### Data
The models are trained and tested on the Universal Dependencies (UD) v2.5
corpora (Zeman et al., 2019). As additional external data, the lexicons from
the Unimorph project (Kirov et al., 2016), UD Lexicons (Sagot, 2018), and
lemmas generated with the Apertium morphological analyzer module (Forcada et
al., 2011) are used. We also experiment with the lexicon constructed from the
training set to simulate the situation when no additional data is
available—this scenario assesses the effect of the second encoder without
external data. The experiments are conducted on 23 languages from the UD
collection. These languages were selected because all of them are supported
by Unimorph, UD Lexicons, and Apertium.
To extract lemmas from the Unimorph lexicon, the input surface form (SF) is
queried from the lexicon to retrieve the corresponding lemma. Some
morphological forms in the Unimorph lexicons consist of several space-
separated tokens; these were discarded. UD Lexicons are presented in the
CoNLL-UL format, which is an extension of the CoNLL-U format. This makes the
extraction process trivial since the lexicons are already pretokenized. For
Apertium, all generated lemmas were stripped from special annotation symbols,
and duplicate lemmas were removed. Finally, the simple training set based
lexicon solution, similar to Qi et al. (2018), consists of two lookup
dictionaries. The first dictionary maps SF-POS pairs to their lemmas; the
second maps SFs alone to their possible lemmas found in the training set. The
lemma candidates for an SF are selected by first querying the input SF and POS
tag from the SF-POS dictionary and, in case of failure, falling back to the SF
dictionary.
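This two-step lookup can be sketched as follows; the helper names are ours and the actual implementation may differ:

```python
def build_lexicons(training_triples):
    """training_triples: iterable of (surface_form, pos, lemma) tuples."""
    sf_pos, sf_only = {}, {}
    for sf, pos, lemma in training_triples:
        sf_pos.setdefault((sf, pos), set()).add(lemma)
        sf_only.setdefault(sf, set()).add(lemma)
    return sf_pos, sf_only

def lemma_candidates(sf, pos, sf_pos, sf_only):
    """Query the SF-POS dictionary first; on failure, fall back to the
    SF-only dictionary. Returns a (possibly empty) sorted list."""
    hit = sf_pos.get((sf, pos))
    if hit:
        return sorted(hit)
    return sorted(sf_only.get(sf, ()))
```

For example, an SF seen in training only under other POS tags still yields its training-set lemmas through the SF-only fallback, while a fully unseen SF yields no candidates at all.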
#### Baselines
As the first baseline, we compare our results with Stanza, the lemmatization
module from the Stanza pipeline Qi et al. (2020), which is a repackaging of
the Stanford lemmatization system from the CoNLL 2018 Shared Task (Qi et al.,
2018). We used the lemmatization models trained on the UDv2.5 available on the
Stanza web page. As the Default baseline, we use our enhanced model, with the
second encoder always being empty.
#### Experimental Setup
We train four enhanced dual-encoder models that differ in the input to the
second encoder. For all models, the input to the first encoder is the
concatenation of SF characters, POS tag, and morphological features. During
the training phase, gold POS tags and morphological features are supplied,
while during inference, POS tags predicted with the Stanza tagger are used.
The input to the second encoder is the following: for the second baseline
(Default), it is always empty; for the Lexicon, Unimorph, and Apertium
enhanced models, it contains the lemma candidate(s) from the training set
based lexicon, Unimorph lexicons, and Apertium analyses respectively. If
several possible candidates are returned for a SF, then these are
concatenated. The encoder dropout for the Lexicon model is set to $0.8$ to
simulate the situation during testing for out-of-vocabulary (OOV) words where
the second encoder will be empty. All models were trained on the HPC cluster
of the University of Tartu (University of Tartu, 2018) for a maximum of $60$
epochs, with early stopping if the development accuracy did not improve within
$10$ epochs.
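The training schedule described above amounts to patience-based early stopping, which can be sketched as follows (a hypothetical helper, not the training code we used):

```python
def train_with_early_stopping(eval_epoch, max_epochs=60, patience=10):
    """eval_epoch(i) -> development-set accuracy after training epoch i.
    Stop after max_epochs, or once `patience` epochs pass with no
    improvement over the best development accuracy seen so far."""
    best, best_epoch = float("-inf"), -1
    for epoch in range(max_epochs):
        acc = eval_epoch(epoch)
        if acc > best:
            best, best_epoch = acc, epoch
        elif epoch - best_epoch >= patience:
            break
    return best, best_epoch
```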
## 5 Results
Table 1 shows the results for all three enhanced systems and two baselines.
The Apertium model outperforms other models for most languages, although the
absolute differences are quite small. The Lexicon model and the Default
baseline are on the same level on average, suggesting that supplying the model
with lemmas extracted from the training set via the second encoder does not
help to leverage the training data better. However, all enhanced models,
including the Default model, perform better than the Stanza baseline,
suggesting that omitting the lexicon heuristics and supplying the input tokens
with both POS and morphological features might improve performance.
A one-way ANOVA was performed to detect statistically significant differences between the
systems.555The results for be_hse were extreme outliers and were not included
in the comparison. The Unimorph-enhanced model was excluded from this test as
its results did not conform to the normality requirement. A significant
difference between the scores at the $p<0.05$ level ($p=0.038$) was found.
Post hoc comparisons using one-sided paired t-tests showed that the mean
accuracy of the Apertium-enhanced model is significantly greater compared to
the the Default ($p_{adj}=0.0005$), Lexicon ($p_{adj}<0.0001$), Unimorph
($p_{adj}=0.0001$) and Stanza ($p_{adj}<0.0001$) systems with the $p$-value
adjusted for multiple comparisons using the Bonferroni correction.
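The Bonferroni adjustment used here simply scales each raw p-value by the number of comparisons; a minimal sketch:

```python
def bonferroni_adjust(p_values):
    """Adjust raw p-values for multiple comparisons by multiplying
    each by the number of tests performed, capping at 1.0."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]
```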
As the baseline model performances are already very high and the external
information is expected to improve the lemmatization most for the new words
unseen during training, we computed the accuracy on out-of-vocabulary (OOV)
words for the best-performing Apertium model and the Default baseline.
In this context, OOV words are those words in the test set that were not seen
by the model during training. The results are shown in the right-most section
of Table 1. The improvements on OOV words vary by language, although on
average the Apertium model improves over the Default baseline by more than
1%. We hypothesize that the direction and
the magnitude of these effects are dependent on the coverage and the quality
of the Apertium morphological analyzer.
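The OOV accuracy reported above restricts evaluation to test words absent from the training vocabulary; a sketch of the computation (the data layout is our assumption):

```python
def oov_accuracy(test_items, train_vocab):
    """test_items: (surface_form, gold_lemma, predicted_lemma) triples.
    Returns (accuracy restricted to out-of-vocabulary words, OOV rate)."""
    oov = [(gold, pred) for sf, gold, pred in test_items if sf not in train_vocab]
    if not oov:
        return None, 0.0
    correct = sum(gold == pred for gold, pred in oov)
    return correct / len(oov), len(oov) / len(test_items)
```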
## 6 Analysis of the Results
In this section, we analyze more thoroughly the potential of the proposed
method. First, we compare our enhanced system with alternative methods for
deploying external data, particularly with the data augmentation method
proposed by Kanerva et al. (2020) and a lexicon extension method implemented
based on the Stanza system Qi et al. (2020). Secondly, we present more
analyses to provide evidence towards the conclusion that the improvements
presented for the enhanced model in the previous section can be attributed to
our system’s ability to make use of external resources supplied to the model
via the second encoder.
(Columns Def and Apt: our models; Def+8K and Apt+8K: augmented models;
Apt0.8, Apt+E, Apt+Uni, Apt+UD: the second encoder input varies.)
Treebank | Def | Apt | Def+8K | Apt+8K | Apt0.8 | Apt+E | Apt+Uni | Apt+UD
---|---|---|---|---|---|---|---|---
cs_pdt | 98.51 | 98.55 | 98.49 | 98.57 | 98.49 | 98.39 | 98.51 | 98.50
ru_syntagrus | 97.82 | 98.11 | 97.86 | 98.06 | 97.98 | 97.83 | 97.98 | 97.97
es_ancora | 99.31 | 99.35 | 99.53 | 99.60 | 99.33 | 99.29 | 99.33 | 99.33
ca_ancora | 98.85 | 98.89 | 98.86 | 98.89 | 98.85 | 98.80 | 98.85 | 98.85
fr_gsd | 98.05 | 98.13 | 98.98 | 99.05 | 97.98 | 97.79 | 97.97 | 97.98
hi_hdtb | 98.77 | 98.80 | 98.83 | 98.78 | 98.84 | 98.66 | 98.83 | 98.84
de_gsd | 96.87 | 96.80 | 96.79 | 96.67 | 96.83 | 96.49 | 96.83 | 96.84
it_isdt | 98.19 | 98.48 | 98.98 | 98.99 | 98.36 | 98.3 | 98.36 | 98.37
en_ewt | 98.21 | 98.26 | 97.24 | 98.12 | 98.21 | 98.17 | 98.22 | 98.20
ro_rrt | 98.33 | 98.53 | 97.56 | 98.48 | 98.44 | 98.29 | 98.46 | 98.41
pt_bosque | 98.24 | 98.32 | 98.13 | 98.29 | 98.30 | 98.32 | 98.30 | 98.31
nl_alpino | 97.08 | 96.74 | 96.80 | 96.82 | 96.89 | 96.86 | 96.81 | 96.85
bg_btb | 97.97 | 98.07 | 98.84 | 98.82 | 98.02 | 98.06 | 98.02 | 98.02
ur_udtb | 97.16 | 97.28 | 96.90 | 97.31 | 97.13 | 97.13 | 97.13 | 97.13
gl_ctg | 98.48 | 98.93 | 98.27 | 98.84 | 98.74 | 97.02 | 98.74 | 98.70
uk_iu | 97.03 | 97.12 | 97.25 | 97.35 | 97.22 | 97.11 | 97.22 | 97.22†
eu_bdt | 96.48 | 96.68 | 96.66 | 96.71 | 96.63 | 96.33 | 96.63 | 96.62
da_ddt | 97.87 | 98.03 | 97.74 | 97.95 | 97.87 | 97.57 | 97.91 | 97.87
sv_talbanken | 97.36 | 98.27 | 97.49 | 98.16 | 98.41 | 97.64 | 97.84 | 97.95
el_gdt | 96.66 | 97.38 | 97.02 | 96.96 | 97.49 | 97.38 | 97.56 | 97.47
tr_imst | 97.03 | 97.39 | 97.01 | 97.24 | 97.17 | 96.89 | 97.17 | 97.14
hy_armtdp | 95.55 | 96.01 | 95.74 | 95.66 | 95.86 | 95.68 | 95.86 | 95.86†
be_hse | 81.91 | 82.63 | 83.33 | 82.92 | 83.51 | 82.13 | 83.51 | 83.51†
Average | 97.03 | 97.25 | 97.17 | 97.31 | 97.24 | 96.96 | 97.22 | 97.21
Table 2: Comparison of the enhanced models with the augmentation method: Def is the Default model, Apt is the Apertium-enhanced model, Def+8K and Apt+8K are the same Default and Apertium-enhanced models with augmented data. For the models marked with $\dagger$, the UD Lexicon is absent and is replaced with Apertium candidates instead.
Treebank | Stanza | Stanza+UD | Stanza+8K | Apt | Lex+UD | Lex+8K
---|---|---|---|---|---|---
cs_pdt | 98.58 | 98.76 | 98.60 | 98.49 | 98.70 | 98.66
ru_syntagrus | 97.91 | 96.76† | 97.92 | 97.98 | 97.36† | 97.97
es_ancora | 99.21 | 99.25 | 99.15 | 99.33 | 99.27 | 99.28
ca_ancora | 98.49 | 98.29 | 98.51 | 98.85 | 98.83 | 98.83
fr_gsd | 98.15 | 97.69 | 98.24 | 97.98 | 97.09 | 96.75
hi_hdtb | 96.66 | 96.75 | 96.66 | 98.84 | 98.76 | 98.71
de_gsd | 96.78 | 97.53 | 96.86 | 96.83 | 97.01 | 96.94
it_isdt | 98.32 | 98.60 | 98.46 | 98.36 | 98.38 | 98.39
en_ewt | 98.18 | 98.21 | 98.17 | 98.21 | 98.21 | 98.19
ro_rrt | 98.16 | 98.44 | 98.27 | 98.44 | 98.38 | 98.28
pt_bosque | 98.12 | 98.32 | 98.12 | 98.30 | 97.98 | 98.20
nl_alpino | 96.99 | 97.22 | 96.97 | 96.89 | 96.63 | 96.61
bg_btb | 97.36 | 96.26 | 96.62 | 98.02 | 98.12 | 98.20
ur_udtb | 95.62 | 95.66 | 95.64 | 97.13 | 97.28 | 97.29
gl_ctg | 98.59 | 98.64 | 98.60 | 98.74 | 98.48 | 98.48
eu_bdt | 96.52 | 96.51 | 96.41 | 96.63 | 96.66 | 96.62
da_ddt | 97.36 | 97.89 | 97.55 | 97.87 | 97.82 | 97.70
sv_talbanken | 97.53 | 98.45 | 97.63 | 98.41 | 97.78 | 97.59
el_gdt | 96.66 | 96.49 | 96.89 | 97.49 | 97.52 | 97.54
tr_imst | 96.73 | 96.90 | 96.83 | 97.17 | 97.17 | 97.23
Average | 97.60 | 97.63 | 97.61 | 98.00 | 97.87 | 97.87
Table 3: Evaluation of the effect of the Stanza-based lexicon extension
method; comparison with the Apertium-enhanced (Apt) and the Lexicon-enhanced
systems (Lex+UD and Lex+8K).
### 6.1 Data Augmentation
We implemented the transducer augmentation method described by Kanerva et al.
(2020). This method’s basic idea relies on applying existing morphological
analyzers (in this case, Apertium) to unannotated data to generate additional
training instances. To obtain the augmentation data, we recreated the
experiments of Kanerva et al. (2020) with 8K additional data. First, we
collected a word frequency list for each language based on automatically
annotated CoNLL2017 corpora Ginter et al. (2017). For the languages not
present in this dataset (Belarusian and Armenian), we used the wikidump to
extract the word frequency list. Next, all words in the list were analyzed
with the Apertium morphological analyzer. Then, we used the
scripts666https://github.com/jmnybl/universal-lemmatizer from the original
experiments of Kanerva et al. (2020) to convert the Apertium analyses to the
UD format and filter out ambiguous cases. Finally, the 8K most frequent words
not already present in the training set together with their analyses were
chosen and appended to the UD training set.
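The selection step can be sketched as follows; `analyze` stands in for the Apertium analysis plus the UD conversion and ambiguity filtering described above, and all names are ours:

```python
def select_augmentation(freq_ranked_words, analyze, train_vocab, k=8000):
    """freq_ranked_words: words ordered from most to least frequent.
    analyze(word) -> list of candidate (lemma, features) analyses.
    Keep the top-k unambiguous words not already in the training set."""
    chosen = []
    for word in freq_ranked_words:
        if word in train_vocab:
            continue
        analyses = analyze(word)
        if len(analyses) == 1:  # unambiguous analysis only
            chosen.append((word, analyses[0]))
            if len(chosen) == k:
                break
    return chosen
```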
Although both the enhanced and augmented systems utilize Apertium as the
external source, additional data usage differs. The augmented system uses
Apertium to create extra labeled training data, while our enhanced model uses
Apertium to generate additional lemma candidates to the words of the same
initial training set. On the other hand, during test time, the augmented model
must fully rely on the regularities learned during training, while our
enhanced model can additionally look at the lemmas for words that were never
seen during training.
The comparison of our Apertium-enhanced model and the augmented model is shown
in the first two blocks of Table 2. The first two columns reintroduce the
Default and Apertium-enhanced models’ results from Table 1, while the third and
the fourth columns show the same two models trained on the augmented training
sets. Overall, the average results for both Apertium-enhanced and the
augmented Default model (the column Def+8K) are very similar, with the average
of the Apertium-enhanced model being slightly higher (97.25 vs. 97.17). The
Apertium-enhanced model is better in 15 languages out of 23 (underlined in the
table), while the augmented model surpasses the enhanced model on 8 languages.
The Apt+8K column shows the results of a model combining both augmentation and
enhanced methods—the training data is first augmented with the additional 8K
words and then additionally enhanced with the Apertium candidates via the
second encoder. The combined approach scores are the best for 8 languages out
of 23, resulting in an average improvement over the augmented Default model of
0.14% and over the Apertium-enhanced model of 0.06% in absolute. These results
show that both augmentation and enhancement methods can contribute in
complementary ways.
### 6.2 Lexicon Extension
Another simple baseline method for using external data is to use a lexicon or
an external system first and only resort to neural generation when the surface
form (SF) is not present in the lexicon. This is essentially how the Stanza
lemmatizer works. Stanza constructs a lexicon based on the training set.
During inference, the prediction goes through a cascade of three steps: 1) if
the SF is present in the lexicon, then the lemma is immediately retrieved from
the lexicon. 2) If the SF is novel and is missing from the lexicon, an edit
operation is generated that decides whether the SF itself or its lowered form
is the lemma, or whether neither is. 3) Only in the last case is the lemma
generated by the sequential decoder. To test the lexicon extension
system, we used the pretrained Stanza models but extended the lexicon stored
in the Stanza system with additional items. Note that Stanza lexicons can only
store one lemma per SF-POS combination. Thus, if any of the external lexicons
contain ambiguous lemmas, the first encountered lemma is chosen for each
word.
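The three-step cascade can be sketched as below. This is a simplified illustration of the behavior described above, not Stanza's actual code; `edit_op` and `generate` stand in for the edit-operation classifier and the seq2seq decoder:

```python
def cascade_lemma(sf, pos, lexicon, edit_op, generate):
    """Stanza-style cascade: 1) exact lexicon lookup on (SF, POS);
    2) an edit-operation classifier deciding identity / lowercase;
    3) full seq2seq generation as the last resort."""
    lemma = lexicon.get((sf, pos))
    if lemma is not None:
        return lemma
    op = edit_op(sf, pos)  # one of: "identity", "lower", "generate"
    if op == "identity":
        return sf
    if op == "lower":
        return sf.lower()
    return generate(sf, pos)
```

Because the lexicon lookup sits first in the cascade, any noise in an extended lexicon is returned verbatim, which is one plausible reason the dual-encoder model, free to ignore bad candidates, fares better.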
We extended the Stanza lexicons with both the Apertium 8K datasets used for
training the augmented models in section 6.1 and the UD lexicons Sagot (2018).
The results of these evaluations are shown in Table 3. The set of languages in
this table differs slightly from Table 1, including only those languages for
which UD lexicons exist. The left block shows the
results with various Stanza models. The first column shows the baseline Stanza
results (taken from Table 1), the second and the third columns present the
Stanza model with its lexicon extended with the UD lexicons and the 8K words,
respectively. The original UD lexicon for Russian contained many erroneous
lemmas due to poor post-processing, which skewed the average accuracy. Thus,
we did additional post-processing to put it in line with other languages.
The average scores of the Stanza systems extended with both UD and 8K lexicons
remain roughly the same. However, when extending the Stanza with UD lexicons,
most languages improve at least slightly, as shown with the underlined scores
in the column Stanza+UD. Overall, on average, the simple lexicon extension
method falls considerably behind our Apertium-enhanced model (97.63 vs.
98.00), the scores of which are again replicated in the first column of the
right-most block.
However, the Apertium-enhanced model is not directly comparable to the Stanza
models with extended lexicons because 1) the training data differs as the
enhanced model has access to extra lemma candidates of the training set words
during training and 2) the lexicons available during the test time are
different. Thus, we also show in the last two columns of the right-hand block
of Table 3 the results of two Lexicon-enhanced models (recall Section 4 and
Table 1), similarly extended with the UD and 8K lexicons. The Lexicon-enhanced
model has access to the same data as the Stanza model during both training
(training set + the training set based lexicon) and testing.
While the Lexicon-enhanced model alone does not perform better than the
Default baseline (see results in Table 1), adopting additional UD or 8K
lexicons during test time increases the results to the same level as the
Apertium-enhanced model. This shows that our proposed approach does not need
additional resources during training—the model can be trained to use external
sources based on the lexicon created from the training set. Then, the system’s
real benefits can be achieved when using extra resources later during test
time. Without those resources, the model still performs at the same level as
the non-enhanced baseline.
We hypothesize that our dual-encoder approach performs better than the Stanza
with extended lexicon partly because of the differences in the usage of the
external data. Since Stanza uses the lexicon resources as a first step in the
cascade, it is prone to potential errors and noise in the lexicons. The dual-
encoder model is safer against noise in this respect because the lemma
candidates are not simply chosen as the prediction if present but are rather
fed through the system that can decide how much to take or ignore from the
given candidates. Also, because Stanza lexicons have the restriction of only
one lemma per word-POS pair, the system might solve some ambiguities
erroneously. Our approach is also more flexible in this respect, as the second
encoder can be given several candidates, and again, the system itself learns
how much to take from each candidate. On average, there are
$0.71$ lemma candidates per input word, and $1.09$ lemma candidates per input
word when excluding those words that do not have external lemma candidates.
### 6.3 Effect of the Second Encoder
Next, we performed a set of evaluations to assess the effect of the second
encoder in the enhanced model. We suggest that the improvements presented in
Table 1 for the Apertium-enhanced model over the Default baseline are indeed
due to the input provided via the second encoder. To demonstrate that, we
evaluated the test set for each language again, on the same model that was
trained with Apertium lemma candidates but leaving the second encoder empty
for the test time. For that, we retrained the Apertium-enhanced models with
the encoder dropout of $0.8$. This means that during training, 80% of the
time, the lemma candidates provided for the second encoder are dropped, and
the model trains only the main encoder. The reasoning for using the dropout is
similar to the one provided for the Lexicon-enhanced model in Section 3—if the
lemma candidates are always provided during training, the model learns to rely
equally on both encoders. Due to that, if the second encoder remains empty
during testing, the performance degrades considerably. If, on the other hand,
the dropout is used, then the model learns to make predictions both when the
candidates in the second encoder are present and also when they are absent.
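The dropout scheme above can be sketched as follows (an illustration with a hypothetical helper, not the actual training code):

```python
import random

def candidate_input(candidates, training, drop_p=0.8):
    """Return the input for the second encoder.

    During training, the external lemma candidates are dropped with
    probability drop_p (0.8 in our experiments), so the model also
    learns to predict from the main encoder alone.  At test time the
    candidates are always passed through when available.
    """
    if training and random.random() < drop_p:
        return []            # second encoder left empty
    return list(candidates)
```

With this scheme the model sees both regimes during training and therefore degrades gracefully, rather than catastrophically, when the second encoder is empty at test time.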
The results of these experiments are shown in the right-most block of Table 2.
We first show in Table 2 that the results of the Apertium-enhanced models
trained with dropout are equivalent to the results obtained without dropout as
evidenced by the column Apt0.8. Next, when the second encoder is empty (column
Apt+E), the test results are similar to the ones obtained with the Default
model, providing evidence that the improvements are indeed due to the extra
info supplied via the second encoder during test time. Additionally, we
emulated the scenario when extra lexicon information becomes available after
training the model. In this case, it is straightforward to integrate this
information into the system without having to retrain the model. The last two
columns in Table 2 show the following scenarios in this respect: 1) Unimorph
lexicons in addition to Apertium (7th column Apt+Uni) and 2) UD lexicons (the
last column Apt+UD) in addition to Apertium. The results in Table 2 show that,
on average, extending the Apertium system with these particular lexicons does
not add any benefit. The reasons for that can be two-fold: 1) The UD lexicons
are for most languages constructed based on the Apertium system and thus might
not add any extra information; 2) The coverage of Unimorph lexicons in terms
of lemmas is typically smaller than that of the Apertium systems.
Table 4 shows some examples where the Default model predicted an incorrect
lemma while the Apertium-enhanced model predicted the correct one. In some cases,
Apertium provided the only and correct candidate for the Apertium-enhanced
model, which was picked as a final prediction. In other cases, several
candidates are provided to the second encoder, and the enhanced model chooses
the correct one in most cases. This indicates that the second encoder
effectively learns how to use the candidates to better control the lemma
generation.
### 6.4 Effect of Morphological Features
All dual-encoder models were trained with both POS and morphological features
in the input, while the Stanza baseline only uses POS-tag information. Thus,
the effect of the morphological features is a potential confounding factor
when comparing the performance of the enhanced models to the Stanza baseline.
To evaluate the effect of the morphological features, we trained the Default
and Apertium-enhanced models with only providing POS-tag information to the
input.
Figure 2 shows the improvement in accuracy over the Default model trained with
POS-tags only of 1) the Default model trained with both POS-tags and
morphological features, 2) the Apertium-enhanced model trained with only POS-
tags, and 3) the Apertium-enhanced model trained with both POS-tags and
morphological features. It can be seen that for some of the languages, the
most improvement comes from adding morphological features to the input, while
for other languages adding the second encoder gives the main boost. However,
for most languages, combining the second encoder and morphological features
provides the largest effect, which seems to be more complex than a linear
combination of the two. We suppose that, in this scenario, the attention
mechanism works differently—it presumably takes the morphological features into
account when picking the correct lemma from the multiple candidates.
Input | Def | Apt | Candidate(s)
---|---|---|---
папері _⟨paperi⟩_ | *папер _⟨paper⟩_ | **папір** _⟨papir⟩_ | папір _⟨papir⟩_
чотирьох _⟨čotyr’oh⟩_ | *четвери _⟨četvery⟩_ | **чотири** _⟨čotyry⟩_ | четверо, чотири _⟨četvero, čotyry⟩_
Antworten | *Antworte | **Antworten** | antworten, antwort
besten | bester | **gut** | gut
раскладзе _⟨raskladze⟩_ | *раскладз _⟨raskladz⟩_ | **расклад** _⟨rasklad⟩_ | раскласці, расклад _⟨rasklasci, rasklad⟩_
стаіць _⟨staic’⟩_ | стаіць _⟨staic’⟩_ | **стаяць** _⟨stajac’⟩_ | стаяць, стаіць _⟨stajac’, staic’⟩_
Table 4: Examples of Ukrainian, German, and Belarusian words corrected by the
enhanced model. All predictions of the Default model (Def) are incorrect; the
ungrammatical ones are marked with *. The correct predictions of the Apertium-
enhanced (Apt) model are in bold. The last column shows the external
candidates.
Figure 2: Independent and cumulative effects of the second encoder and the
morphological features on the model’s performance. The origin of the x-axis is
the performance of the Default model with POS-tags only.
## 7 Conclusion
We proposed a method for enhancing neural lemmatization by integrating
external input into the model via a second encoder and showed that the system
incorporating Apertium morphological analyzer significantly improved the
performance over the baselines. Both Bergmanis and Goldwater (2019) and
Kanerva et al. (2020) used external resources to augment the training data,
and thus, the improvement of their systems depends on the amount and
quality of the extended data supplied during training. On the other hand, our
method trains the system to use the external information provided during run-
time, thus making it independent of the particular external data available
during training.
We experimentally showed that the enhancing method is both slightly better
than and complementary to the data augmentation method of Kanerva et al. (2020). We
also compared our system with a simple lexicon extension method implemented
based on the Stanza system. When trained and tested in a comparable setting,
the proposed enhanced system achieves considerably higher results.
Although the model’s computational complexity is increased by introducing the
second encoder, it is counterbalanced by our model being more robust to noise
and the ambiguities stemming from the external lexicons. Moreover, the main
computational bottleneck stems not from the neural network’s increased size
but from the external system: in our experiments, it was the execution of the
transducer-based Apertium morphological analyser. To overcome this bottleneck,
one possible trade-off between the speed and accuracy is to precompile a
candidate list large enough to cover the most frequent words for a given
language. This is a problem that also simpler baseline methods adopting
external resources have to address.
Finally, it is worth noting that the proposed method could be beneficial for
less-resourced languages. However, establishing this claim would require more
systematic experiments focused specifically on this question, which we did
not focus on in this paper. Still, because the significant improvements shown
in this work are obtained on languages with larger datasets, the possible
gains on smaller datasets can be larger.
## Acknowledgments
The first author was supported by the IT Academy Program (StudyITin.ee).
## References
* Bergmanis and Goldwater (2018) Toms Bergmanis and Sharon Goldwater. 2018. Context Sensitive Neural Lemmatization with Lematus. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_ , pages 1391–1400.
* Bergmanis and Goldwater (2019) Toms Bergmanis and Sharon Goldwater. 2019. Training Data Augmentation for Context-Sensitive Neural Lemmatizer Using Inflection Tables and Raw Text. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4119–4128.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4171–4186.
* Forcada et al. (2011) Mikel L. Forcada, Mireia Ginestí-Rosell, Jacob Nordfalk, Jim O’Regan, Sergio Ortiz-Rojas, Juan Antonio Pérez-Ortiz, Felipe Sánchez-Martínez, Gema Ramírez-Sánchez, and Francis M. Tyers. 2011. Apertium: A Free/Open-Source Platform for Rule-Based Machine Translation. _Machine translation_ , 25(2):127–144.
* Ginter et al. (2017) Filip Ginter, Jan Hajic, Juhani Luotolahti, Milan Straka, and Daniel Zeman. 2017. CoNLL 2017 Shared Task–Automatically Annotated Raw Texts and Word Embeddings. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics, Charles University.
* Kanerva et al. (2018) Jenna Kanerva, Filip Ginter, Niko Miekka, Akseli Leino, and Tapio Salakoski. 2018. Turku Neural Parser Pipeline: An End-to-End System for the CoNLL 2018 Shared Task. In _Proceedings of the CoNLL 2018 Shared Task: Multilingual parsing from raw text to universal dependencies_ , pages 133–142.
* Kanerva et al. (2020) Jenna Kanerva, Filip Ginter, and Tapio Salakoski. 2020. Universal Lemmatizer: A Sequence to Sequence Model for Lemmatizing Universal Dependencies Treebanks. _Natural Language Engineering_ , pages 1–30.
* Kirov et al. (2016) Christo Kirov, John Sylak-Glassman, Roger Que, and David Yarowsky. 2016. Very-large Scale Parsing and Normalization of Wiktionary Morphological Paradigms. In _Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)_ , Paris, France. European Language Resources Association (ELRA).
* McCarthy et al. (2019) Arya D. McCarthy, Ekaterina Vylomova, Shijie Wu, Chaitanya Malaviya, Lawrence Wolf-Sonkin, Garrett Nicolai, Christo Kirov, Miikka Silfverberg, Sebastian J. Mielke, Jeffrey Heinz, et al. 2019. The SIGMORPHON 2019 Shared Task: Morphological Analysis in Context and Cross-Lingual Transfer for Inflection. In _Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology_ , pages 229–244.
* Qi et al. (2018) Peng Qi, Timothy Dozat, Yuhao Zhang, and Christopher D Manning. 2018. Universal Dependency Parsing from Scratch. In _Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies_ , pages 160–170.
* Qi et al. (2020) Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A Python Natural Language Processing Toolkit for Many Human Languages. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations_.
* Rosa and Mareček (2018) Rudolf Rosa and David Mareček. 2018. CUNI x-ling: Parsing Under-Resourced Languages in CoNLL 2018 UD Shared Task. In _Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies_ , pages 187–196.
* Sagot (2018) Benoît Sagot. 2018. A Multilingual Collection of CoNLL-U-Compatible Morphological Lexicons. In _Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)_.
* Straka (2018) Milan Straka. 2018. UDPipe 2.0 Prototype at CoNLL 2018 UD Shared Task. In _Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies_ , pages 197–207.
* Straka and Straková (2017) Milan Straka and Jana Straková. 2017. Tokenizing, POS tagging, lemmatizing and parsing UD 2.0 with UDPipe. In _Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies_ , pages 88–99, Vancouver, Canada. Association for Computational Linguistics.
* Straka et al. (2019) Milan Straka, Jana Straková, and Jan Hajic. 2019. UDPipe at SIGMORPHON 2019: Contextualized Embeddings, Regularization with Morphological Categories, Corpora Merging. In _Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology_ , pages 95–103.
* University of Tartu (2018) University of Tartu. 2018. UT Rocket. share.neic.no.
* Zeman et al. (2018) Daniel Zeman, Jan Hajic, Martin Popel, Martin Potthast, Milan Straka, Filip Ginter, Joakim Nivre, and Slav Petrov. 2018. CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. In _Proceedings of the CoNLL 2018 Shared Task: Multilingual parsing from raw text to universal dependencies_ , pages 1–21.
* Zeman et al. (2019) Daniel Zeman, Joakim Nivre, Mitchell Abrams, and et al. 2019. Universal Dependencies 2.5. LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University.
# Counting regions of the boxed threshold arrangement
Priyavrat Deshpande Chennai Mathematical Institute<EMAIL_ADDRESS>,
Krishna Menon Chennai Mathematical Institute<EMAIL_ADDRESS>and
Anurag Singh Chennai Mathematical Institute<EMAIL_ADDRESS>
###### Abstract.
In this paper we consider the hyperplane arrangement in $\mathbb{R}^{n}$ whose
hyperplanes are $\\{x_{i}+x_{j}=1\mid 1\leq i<j\leq n\\}\cup\\{x_{i}=0,1\mid
1\leq i\leq n\\}$. We call it the _boxed threshold arrangement_ since we show
that the bounded regions of this arrangement are contained in an $n$-cube and
are in one-to-one correspondence with the labeled threshold graphs on $n$
vertices. The problem of counting regions of this arrangement was studied
earlier by Joungmin Song. He determined the characteristic polynomial of this
arrangement by relating its coefficients to the count of certain graphs. Here,
we provide bijective arguments to determine the number of regions. In
particular, we construct certain signed partitions of the set
$\\{-n,\dots,n\\}\setminus\\{0\\}$ and also construct colored threshold graphs
on $n$ vertices and show that both these objects are in bijection with the
regions of the boxed threshold arrangement. We independently count these
objects and provide a closed form formula for the number of regions.
###### Key words and phrases:
threshold graph, hyperplane arrangement, finite field method.
###### 2010 Mathematics Subject Classification:
52C35, 32S22, 05C30, 11B68
PD and AS are partially supported by a grant from the Infosys Foundation
## 1\. Introduction
A _hyperplane arrangement_ $\mathcal{A}$ is a finite collection of affine
hyperplanes (i.e., codimension $1$ subspaces and their translates) in
$\mathbb{R}^{n}$. Without loss of generality we assume that arrangements we
consider are _essential_ , i.e., the subspace spanned by the normal vectors is
the ambient vector space. Removing all the hyperplanes of $\mathcal{A}$ leaves
$\mathbb{R}^{n}$ disconnected; counting the number of connected components
using diverse combinatorial methods is an active area of research. A _flat_ of
$\mathcal{A}$ is a nonempty intersection of some of the hyperplanes in
$\mathcal{A}$; the ambient vector space is a flat since it is an intersection
of no hyperplanes. Flats are naturally ordered by reverse set inclusion; the
resulting poset is called the _intersection poset_ and is denoted by
$\mathrm{L}(\mathcal{A})$. A _region_ of $\mathcal{A}$ is a connected
component of $\mathbb{R}^{n}\setminus\bigcup\mathcal{A}$. The _characteristic
polynomial_ of $\mathcal{A}$ is defined as
$\chi_{\mathcal{A}}(t):=\sum_{x\in\mathrm{L}(\mathcal{A})}\mu(\hat{0},x)\,t^{\dim(x)}$
where $\mu$ is the Möbius function of the intersection poset and $\hat{0}$
corresponds to the flat $\mathbb{R}^{n}$. The characteristic polynomial is a
fundamental combinatorial and topological invariant of the arrangement and
plays a significant role throughout the theory of hyperplane arrangements.
Among other things, the polynomial encodes enumerative information about the
stratification of the space $\mathbb{R}^{n}$, induced by the arrangement. We
refer the reader to Stanley’s notes [9] for more on the enumerative aspects of
hyperplane arrangements. In particular we have the following seminal result by
Zaslavsky.
###### Theorem 1.1 ([10]).
Let $\mathcal{A}$ be an arrangement in $\mathbb{R}^{n}$. Then the number of
regions of $\mathcal{A}$ is given by
$r(\mathcal{A})=(-1)^{n}\chi_{\mathcal{A}}(-1)$
and the number of bounded regions is given by
$b(\mathcal{A})=(-1)^{n}\chi_{\mathcal{A}}(1).$
The finite field method, developed by Athanasiadis [1], converts the
computation of the characteristic polynomial to a point counting problem. A
combination of these two results allowed for the computation of the number of
regions of several arrangements of interest.
Another way to count the number of regions is to give a bijective proof. This
approach involves finding a combinatorially defined set whose elements are in
bijection with the regions of the given arrangement and are easier to count.
For example, the regions of the braid arrangement (whose hyperplanes are given
by the equations $x_{i}-x_{j}=0$ for $1\leq i<j\leq n$) correspond to the $n!$
permutations of $[n]$. The regions of the Shi arrangement (whose
hyperplanes are given by the equations $x_{i}-x_{j}=0,1$ for $1\leq i<j\leq
n$) are in bijection with the parking functions on $[n]$, hence the number of
regions is $(n+1)^{n-1}$. Refer to Stanley’s notes [9, Lecture 5] for details.
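The parking-function count itself is easy to verify by brute force for small $n$ (a quick illustration, not needed for the bijection):

```python
from itertools import product

def is_parking(seq):
    # (a_1, ..., a_n) is a parking function iff, after sorting,
    # the i-th smallest value is at most i (1-indexed)
    return all(v <= i + 1 for i, v in enumerate(sorted(seq)))

def count_parking(n):
    # enumerate all n^n sequences over {1, ..., n} and count
    return sum(is_parking(p) for p in product(range(1, n + 1), repeat=n))
```

For $n=2,3,4$ this gives $3$, $16$, $125$, matching $(n+1)^{n-1}$.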
In the present article we consider the following hyperplane arrangement
$\mathcal{BT}_{n}:=\\{X_{i}+X_{j}=1\mid 1\leq i<j\leq n\\}\cup\\{X_{i}=0,1\mid
1\leq i\leq n\\}.$
The problem of counting regions of this arrangement was recently solved by
Song in a series of papers. His first approach [6, 5] involved relating the
coefficients of the characteristic polynomial to the generating functions for
the number of certain graphs. These generating functions [5, Theorem 1] have
quite a complicated expression and consequently make it difficult to determine
a compact form for the characteristic polynomial. Later Song [7] used the
finite field method to get a slightly simpler expression for the
characteristic polynomial.
The main aim of this paper is to provide bijective proofs for the number of
regions of this arrangement. In particular, in Section 3 we construct certain
(signed) ordered partitions of the set
$\\{-n,-(n-1),\dots,n-1,n\\}\setminus\\{0\\}$ and show that they are in
bijection with the regions of $\mathcal{BT}_{n}$. We also show how to count
these partitions. In Section 4 we spell out a recipe to color the vertices of
a labeled threshold graph on $n$ vertices such that the number of such colored
threshold graphs is equal to the regions of $\mathcal{BT}_{n}$. However, we
begin the article by establishing a simpler form for the characteristic
polynomial in Section 2. There we also justify the term “ _boxed threshold_ ”.
## 2\. The characteristic polynomial
We first translate the hyperplanes in $\mathcal{BT}_{n}$ in order to obtain a
combinatorially isomorphic arrangement with the same characteristic
polynomial. Putting $X_{i}=x_{i}+\frac{1}{2}$ for every $i$ we get
(1) $\\{x_{i}+x_{j}=0\mid 1\leq i<j\leq
n\\}\cup\\{x_{i}=-\frac{1}{2},\frac{1}{2}\mid 1\leq i\leq n\\}.$
We will stick to the notation $\mathcal{BT}_{n}$ to denote the above
arrangement. The notation $[k]$ is used to denote the set $\\{1,\dots,k\\}$.
We begin by stating a generalization of the finite field method, given by
Athanasiadis [2], in our context.
###### Theorem 2.1 ([2, Theorem 2.1]).
If $\mathcal{A}$ is a sub-arrangement of the type $C$ arrangement in
$\mathbb{R}^{n}$, that is, a sub-arrangement of $\\{x_{i}\pm x_{j}=0\mid 1\leq
i<j\leq n\\}\cup\\{x_{i}=0\mid i\in[n]\\}$, there exists an integer $k$ such
that for all odd integers $q$ greater than $k$,
$\chi_{\mathcal{A}}(q)=\\#(\mathbb{Z}_{q}^{n}\setminus V_{\mathcal{A}})$
where $V_{\mathcal{A}}$ is the union of hyperplanes in $\mathbb{Z}_{q}^{n}$
obtained by reducing $\mathcal{A}$ mod q.
We use this result to prove the following relationship between the
characteristic polynomial of sub-arrangements of the type $C$ arrangement and
that of their “boxed” versions.
###### Proposition 2.2.
Let $\mathcal{A}$ be an arrangement in $\mathbb{R}^{n}$ that is a sub-
arrangement of the type $C$ arrangement and let
$\mathcal{BA}=\mathcal{A}\cup\\{x_{i}=-\frac{1}{2},\frac{1}{2}\mid
i\in[n]\\}$. Then
$\chi_{\mathcal{BA}}(t)=\chi_{\mathcal{A}}(t-2).$
###### Proof.
Let $q$ be any large odd number. Set
$D_{q}^{n}:=\\{(a_{1},\dots,a_{n})\in\mathbb{Z}_{q}^{n}\mid
a_{i}\neq\pm\frac{q-1}{2}\\}$. Define a bijection
$f:\mathbb{Z}_{q-2}\rightarrow\mathbb{Z}_{q}\setminus\\{\frac{q-1}{2},-\frac{q-1}{2}\\}$
as
$f(i)=i\quad\text{for }i\in[-\frac{q-3}{2},\frac{q-3}{2}].$
It is clear that for any $a,b\in\mathbb{Z}_{q-2}$
1. (1)
$a+b=0\Leftrightarrow f(a)+f(b)=0$,
2. (2)
$a-b=0\Leftrightarrow f(a)-f(b)=0$, and
3. (3)
$a=0\Leftrightarrow f(a)=0$.
Using $f$, we can define a bijection $F:\mathbb{Z}_{q-2}^{n}\rightarrow
D_{q}^{n}$ as
$F(a_{1},\dots,a_{n})=(f(a_{1}),\dots,f(a_{n}))\quad\text{for
}(a_{1},\dots,a_{n})\in\mathbb{Z}_{q-2}^{n}.$
By the properties of $f$, we can see that $F$ induces a bijection between
those tuples in $\mathbb{Z}_{q-2}^{n}$ that do not satisfy the defining
equation of any hyperplane in $\mathcal{A}$ and those tuples in
$\mathbb{Z}_{q}^{n}$ that do not satisfy the defining equation of any
hyperplane in $\mathcal{BA}$. So, we get that for large odd numbers $q$,
$\chi_{\mathcal{BA}}(q)=\chi_{\mathcal{A}}(q-2).$
Since $\chi_{\mathcal{BA}}$ and $\chi_{\mathcal{A}}$ are polynomials, we get
the required result. ∎
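The bijection is easy to check by brute force for the threshold arrangement: counting the points of $\mathbb{Z}_{q}^{n}$ that avoid the hyperplanes of $\mathcal{BA}$ must match the count for $\mathcal{A}$ over $\mathbb{Z}_{q-2}$. A small sketch (note that over $\mathbb{Z}_{q}$ with $q$ odd, $\pm\frac{1}{2}$ reduce to $\mp\frac{q-1}{2}$ in balanced residues):

```python
from itertools import product

def points_off_T(n, q, boxed=False):
    """Count points of Z_q^n (q odd) avoiding x_i + x_j = 0 for i < j;
    with boxed=True also avoid x_i = 1/2 and x_i = -1/2, which in
    balanced residues mod q are the two extremes ±(q-1)/2."""
    half = (q - 1) // 2
    reps = [r for r in range(-half, half + 1)
            if not (boxed and abs(r) == half)]
    return sum(
        all((a[i] + a[j]) % q != 0
            for i in range(n) for j in range(i + 1, n))
        for a in product(reps, repeat=n))
```

For instance, `points_off_T(2, 7, boxed=True)` and `points_off_T(2, 5)` both equal $20$, in agreement with $\chi_{\mathcal{BT}_{2}}(7)=\chi_{\mathcal{T}_{2}}(5)$.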
Let $\mathcal{T}_{n}$ denote the threshold arrangement in $\mathbb{R}^{n}$,
i.e., $\mathcal{T}_{n}:=\\{x_{i}+x_{j}=0\mid 1\leq i<j\leq n\\}$. The reason
this is called the threshold arrangement is that its regions are in bijection
with labeled threshold graphs on $n$ vertices (see Section 4 for details).
This is clearly a sub-arrangement of the type $C$ arrangement.
###### Corollary 2.3.
The characteristic polynomials of $\mathcal{BT}_{n}$ and $\mathcal{T}_{n}$ are
related as follows:
$\chi_{\mathcal{BT}_{n}}(t)=\chi_{\mathcal{T}_{n}}(t-2).$
Consequently, the number of bounded regions of $\mathcal{BT}_{n}$ is equal to
the number of regions of $\mathcal{T}_{n}$. Moreover, these bounded regions
are contained in the cube (or a _box_)
$\left[-\frac{1}{2},\frac{1}{2}\right]^{n}$. Next, we derive a closed form
expression for $\chi_{\mathcal{T}_{n}}(t)$ using the finite field method.
###### Proposition 2.4.
The characteristic polynomial of the threshold arrangement $\mathcal{T}_{n}$
is
$\chi_{\mathcal{T}_{n}}(t)=\sum_{k=1}^{n}(S(n,k)+n\cdot
S(n-1,k))\prod_{i=1}^{k}(t-(2i-1)).$
Here $S(n,k)$ are the Stirling numbers of the second kind.
###### Proof.
Using Theorem 2.1, we see that the characteristic polynomial of
$\mathcal{T}_{n}$ satisfies, for large odd values of $q$,
$\chi_{\mathcal{T}_{n}}(q)=|\\{(a_{1},\dots,a_{n})\in\mathbb{Z}_{q}^{n}\mid
a_{i}+a_{j}\neq 0\text{ for all }1\leq i<j\leq n\\}|.$
This means that we need to count the functions
$f:[n]\rightarrow\mathbb{Z}_{q}$ such that
1. (1)
$|f^{-1}(0)|\leq 1$.
2. (2)
$f$ can take at most one value from each of the sets
$\\{1,-1\\},\\{2,-2\\},\dots,\\{\frac{q-1}{2},-\frac{q-1}{2}\\}.$
We split the count into two cases. If 0 is not attained by $f$, then all
values must be from
$\\{1,-1\\}\cup\\{2,-2\\}\cup\cdots\cup\\{\frac{q-1}{2},-\frac{q-1}{2}\\}$
with at most one value attained in each set. So, there are
$\binom{\frac{q-1}{2}}{k}\cdot 2^{k}\cdot k!\cdot S(n,k)$
ways for $f$ to attain values from exactly $k$ of these sets. This is because
we have $\binom{\frac{q-1}{2}}{k}\cdot 2^{k}$ ways to choose the $k$ sets and
which element of each set $f$ should attain and $k!S(n,k)$ ways to choose the
images of the elements of $[n]$ after making this choice. So the total number
of $f$ such that 0 is not attained is
$\sum\limits_{k=1}^{n}\binom{\frac{q-1}{2}}{k}\cdot 2^{k}\cdot k!\cdot
S(n,k).$
When 0 is attained, there are $n$ ways to choose which element of $[n]$ gets
mapped to 0 and using a similar logic for choosing the images of the other
elements, we get that the total number of $f$ where 0 is attained is
$n\cdot\sum\limits_{k=1}^{n-1}\binom{\frac{q-1}{2}}{k}\cdot 2^{k}\cdot k!\cdot
S(n-1,k).$
So we get that for large $q$,
$\displaystyle\chi_{\mathcal{T}_{n}}(q)$
$\displaystyle=\sum\limits_{k=1}^{n}\binom{\frac{q-1}{2}}{k}\cdot 2^{k}\cdot
k!\cdot S(n,k)+n\cdot\sum\limits_{k=1}^{n-1}\binom{\frac{q-1}{2}}{k}\cdot
2^{k}\cdot k!\cdot S(n-1,k)$ $\displaystyle=\sum_{k=1}^{n}(S(n,k)+n\cdot
S(n-1,k))\prod_{i=1}^{k}(q-(2i-1)).$
Since $\chi_{\mathcal{T}_{n}}$ is a polynomial, we get the required result. ∎
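As a sanity check, the formula can be evaluated directly; combining it with Corollary 2.3 and Theorem 1.1 recovers the region counts listed in Table 1. A short verification sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def S(n, k):
    # Stirling numbers of the second kind via the standard recurrence
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * S(n - 1, k) + S(n - 1, k - 1)

def chi_T(n, t):
    # characteristic polynomial of the threshold arrangement at t
    total = 0
    for k in range(1, n + 1):
        prod = 1
        for i in range(1, k + 1):
            prod *= t - (2 * i - 1)
        total += (S(n, k) + n * S(n - 1, k)) * prod
    return total

def regions_BT(n):
    # r(BT_n) = (-1)^n chi_BT(-1) = (-1)^n chi_T(-3)  (Corollary 2.3 + Zaslavsky)
    return (-1) ** n * chi_T(n, -3)
```

For $n=2,\dots,6$ this produces $12$, $64$, $436$, $3624$, $35516$.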
###### Remark 2.5.
Note that the absolute value of the coefficient of $t^{j}$ in
$(t-1)(t-3)\cdots(t-(2k-1))$ counts the number of signed permutations on $[k]$
with $j$ odd cycles (see A039757 in the OEIS [4]). Let us denote that number
by $a(k,j)$. Using this we get a compact expression for the coefficient of
$t^{j}$ in $\chi_{\mathcal{T}_{n}}(t)$ as
$\sum_{k=j}^{n}(S(n,k)+n\cdot S(n-1,k))a(k,j).$
###### Corollary 2.6.
The coefficient of $t^{j}$ in $\chi_{\mathcal{BT}_{n}}(t)$ is
$\sum_{k=j}^{n}(S(n,k)+n\cdot S(n-1,k))b(k,j).$
where $b(k,j)$ is the coefficient of $t^{j}$ in $(t-3)(t-5)\cdots(t-(2k+1))$.
###### Proof.
Using Corollary 2.3 we get
$\chi_{\mathcal{BT}_{n}}(t)=\sum_{k=1}^{n}(S(n,k)+n\cdot
S(n-1,k))\prod_{i=1}^{k}(t-(2i+1)).$
Expanding the product gives us the required expression. Note that
$b(k,j)=-\sum\limits_{i=0}^{j}a(k+1,i)$ where $a(k,j)$ is defined in Remark
2.5. ∎
###### Remark 2.7.
We can also derive an expression for the exponential generating function for
the characteristic polynomial. Using Problem $25(c)$ in Stanley’s notes [9,
Lecture 5] we get
$\sum_{n\geq
0}\chi_{\mathcal{BT}_{n}}(t)\frac{x^{n}}{n!}=(1+x)(2e^{x}-1)^{\frac{(t-3)}{2}}.$
The generating function for the number of regions is
$\sum_{n\geq
0}r(\mathcal{BT}_{n})\frac{x^{n}}{n!}=\frac{e^{2x}(1-x)}{(2-e^{x})^{2}}.$
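Expanding this series with exact rational arithmetic reproduces the region counts of Table 1 (a consistency sketch using truncated power series; `N` is the truncation order):

```python
from fractions import Fraction
from math import factorial

N = 8  # truncation order

def mul(a, b):
    # product of two truncated power series (coefficient lists)
    return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(N)]

def inv(a):
    # multiplicative inverse of a power series with a[0] != 0
    b = [Fraction(0)] * N
    b[0] = 1 / a[0]
    for k in range(1, N):
        b[k] = -b[0] * sum(a[i] * b[k - i] for i in range(1, k + 1))
    return b

exp2x = [Fraction(2 ** k, factorial(k)) for k in range(N)]   # e^{2x}
expx = [Fraction(1, factorial(k)) for k in range(N)]         # e^{x}
two_minus_expx = [Fraction(2) - expx[0]] + [-c for c in expx[1:]]
one_minus_x = [Fraction(1), Fraction(-1)] + [Fraction(0)] * (N - 2)

# e^{2x}(1 - x) / (2 - e^x)^2, truncated at order N
series = mul(mul(exp2x, one_minus_x), inv(mul(two_minus_expx, two_minus_expx)))
regions = [int(series[n] * factorial(n)) for n in range(N)]
```

The resulting list begins $1, 3, 12, 64, 436, 3624, 35516, \dots$, matching Table 1 (with $r(\mathcal{BT}_{0})=1$ and $r(\mathcal{BT}_{1})=3$).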
For the sake of completeness we enumerate the coefficients of the
characteristic polynomial for smaller values of $n$ (see Table 1). Song [5]
also computed the characteristic polynomial for $n\leq 10$; however, there are
typos in all the expressions for $n\geq 4$, and consequently the region numbers
are wrong. The sequence of the numbers of regions of $\mathcal{BT}_{n}$ can be
found at the entry A341769 in the OEIS [4].
$n$ | $\chi_{\mathcal{BT}_{n}}(t)$ | $r(\mathcal{BT}_{n})$
---|---|---
2 | $t^{2}-5t+6$ | 12
3 | $t^{3}-9t^{2}+27t-27$ | 64
4 | $t^{4}-14t^{3}+75t^{2}-181t+165$ | 436
5 | $t^{5}-20t^{4}+165t^{3}-695t^{2}+1480t-1263$ | 3624
6 | $t^{6}-27t^{5}+315t^{4}-2010t^{3}+7320t^{2}-14284t+11559$ | 35516
7 | $t^{7}-35t^{6}+546t^{5}-4865t^{4}+26460t^{3}-87010t^{2}+158753t-122874$ | 400544
8 | $t^{8}-44t^{7}+882t^{6}-10402t^{5}+78155t^{4}-379666t^{3}+1154965t^{2}-1995487t+1486578$ | 5106180
9 | $t^{9}-54t^{8}+1350t^{7}-20286t^{6}+200025t^{5}-1331022t^{4}+5932143t^{3}-16952157t^{2}+27979203t-20158695$ | 72574936
10 | $t^{10}-65t^{9}+1980t^{8}-36840t^{7}+459585t^{6}-3986031t^{5}+24172575t^{4}-100548090t^{3}+272771475t^{2}-432836011t+302751327$ | 1137563980
Table 1. Characteristic polynomial and the number of regions of
$\mathcal{BT}_{n}$ for $n\leq 10$.
## 3\. The signed ordered partitions
In this section we prove a bijection between the regions of $\mathcal{BT}_{n}$
and certain ordered partitions of
$([-n,n]\setminus\\{0\\})\cup\\{-\frac{1}{2},\frac{1}{2}\\}$ (the notation
$[-n,n]$ is used for the set $\\{-n,-n+1,\dots,n-1,n\\}$). We will denote
$-x_{i}$ by $x_{-i}$ for all $i\in[n]$. Let $R$ be a region of
$\mathcal{BT}_{n}$. We write
1. (1)
$i\prec_{R}-j$ if $x_{i}+x_{j}<0$ in $R$ where $i\neq j$ in $[n]$.
2. (2)
$i\succ_{R}-j$ if $x_{i}+x_{j}>0$ in $R$ where $i\neq j$ in $[n]$.
3. (3)
$i\succ_{R}\frac{1}{2}$ if $x_{i}>\frac{1}{2}$ in $R$ where
$i\in[-n,n]\setminus\\{0\\}$.
4. (4)
Similarly define $i\prec_{R}\frac{1}{2}$, $i\succ_{R}-\frac{1}{2}$ and
$i\prec_{R}-\frac{1}{2}$ for any $i\in[-n,n]\setminus\\{0\\}$.
So $\prec_{R}$ is a relation on
$([-n,n]\setminus\\{0\\})\cup\\{-\frac{1}{2},\frac{1}{2}\\}$ where the comparable
elements are
1. (1)
Elements of $[-n,n]\setminus\\{0\\}$ of different signs and different absolute
values.
2. (2)
Elements of $[-n,n]\setminus\\{0\\}$ and $\pm\frac{1}{2}$.
###### Remark 3.1.
The reader can consider the relation $i\prec_{R}-j$ as $\overset{+}{i}$
appearing before $\overset{-}{j}$ in a signed permutation on $[n]$ and
similarly for other such relations. The equivalence defined in the following
lemma corresponds to choosing a signed permutation representative for each
region. This is similar to the way Spiro [8] associated ‘threshold pairs in
standard form’ to labeled threshold graphs.
###### Lemma 3.2.
Let $\sim$ be the relation defined on the set $N=[-n,n]\setminus\\{0\\}$ as
$a\sim b$ if the following hold:
1. (1)
$a$ and $b$ are of the same sign, and
2. (2)
there does not exist any $c\in N\cup\\{-\frac{1}{2},\frac{1}{2}\\}$ such that
$a\prec_{R}c\prec_{R}b$ or $b\prec_{R}c\prec_{R}a$.
Then, $\sim$ is an equivalence relation on $N$.
###### Proof.
It is clear that $a\sim a$ for any $a\in N$ and that $a\sim b$ implies that
$b\sim a$ for any $a,b\in N$. Let $a\sim b$ and $b\sim c$ for some distinct
$a,b,c\in N$. So $a,b,c$ are of the same sign. By definition of $\prec_{R}$
and $\sim$, the only possible $d\in N\cup\\{-\frac{1}{2},\frac{1}{2}\\}$ such
that $a\prec_{R}d\prec_{R}c$ is $-b$. We must show that this is not possible
(a similar argument works if $c\prec_{R}-b\prec_{R}a$). Suppose
$a\prec_{R}-b\prec_{R}c$. We then have $-c\prec_{R}b\prec_{R}-a$. If
$a\prec_{R}-c$, we will have $a\prec_{R}-c\prec_{R}b$, which contradicts
$a\sim b$. So we must have $-c\prec_{R}a$. But this gives $-a\prec_{R}c$ and
hence $b\prec_{R}-a\prec_{R}c$, which is a contradiction to $b\sim c$. Hence
$\sim$ is an equivalence on $N$. ∎
###### Definition 3.3.
The equivalence classes of $\sim$ defined in Lemma 3.2 will be called boxed
threshold blocks (corresponding to the region $R$). Positive blocks refer to
those blocks that contain positive numbers and similarly we define negative
blocks.
###### Remark 3.4.
Since all the elements of a boxed threshold block are of the same sign, we
sometimes consider a block as a subset of $[n]$ with a sign ($+$ or $-$)
assigned to it.
The proof of the following lemma follows from the definition of the
equivalence.
###### Lemma 3.5.
If $B$ is a boxed threshold block then so is $-B=\\{-b:b\in B\\}$.
###### Proposition 3.6.
Define an order $<$ on the set of boxed threshold blocks along with
$\pm\frac{1}{2}$ as follows:
1. (1)
$B<B^{\prime}$ where $B,B^{\prime}$ are boxed threshold blocks if there exists
a sequence $c_{0},c_{1},\dots,c_{k}$ of elements in
$N\cup\\{-\frac{1}{2},\frac{1}{2}\\}$ such that
$c_{0}\prec_{R}c_{1}\prec_{R}\cdots\prec_{R}c_{k}$, $c_{0}\in B$ and $c_{k}\in
B^{\prime}$.
2. (2)
$-\frac{1}{2}<\frac{1}{2}$.
3. (3)
$B<\frac{1}{2}$ if $b\prec_{R}\frac{1}{2}$ for some $b\in B$. Similarly define
other relations between blocks and $\pm\frac{1}{2}$.
This is a total order in all cases except when there is a unique $i\in[n]$
such that $-\frac{1}{2}\prec_{R}i\prec_{R}\frac{1}{2}$.
###### Proof.
The transitivity of this order is straightforward. If we show that $B<B$ is
not possible for any block, the order is well-defined. Suppose
$c_{0}\prec_{R}c_{1}\prec_{R}\cdots\prec_{R}c_{k}$ where $c_{0},c_{k}\in B$.
If $c_{1}\neq-c_{k}$ we get a contradiction to $c_{0}\sim c_{k}$ and similarly
if $c_{k-1}\neq-c_{1}$. So we must have $c_{1}=-c_{k}$ and $c_{k-1}=-c_{1}$.
But then we get $c_{1}\prec_{R}-c_{k}$ and $-c_{1}\prec_{R}c_{k}$, which is a
contradiction.
We will now show that this order is a total order in all cases except when
there is a unique $i\in[n]$ such that
$-\frac{1}{2}\prec_{R}i\prec_{R}\frac{1}{2}$. The relationship of any block
with $\pm\frac{1}{2}$ is always defined. So we only have to check that any two
blocks are comparable. Let $B,B^{\prime}$ be boxed threshold blocks. If
$B,B^{\prime}$ are distinct blocks of the same sign, they are comparable by
definition of the equivalence relation. If they are of opposite sign and not
of the form $\\{i\\},\\{-i\\}$ for some $i\in[n]$, they are comparable since
there is some $a\in B$ and $b\in B^{\prime}$ of opposite signs such that
$|a|\neq|b|$. So we have to deal with the case $B=\\{i\\}$ and
$B^{\prime}=\\{-i\\}$ for some $i\in[n]$. If $i\succ_{R}\frac{1}{2}$ or
$i\prec_{R}-\frac{1}{2}$, $\\{-i\\}$ and $\\{i\\}$ are comparable.
Suppose $-\frac{1}{2}\prec_{R}i\prec_{R}\frac{1}{2}$ and $\\{i\\}$ and
$\\{-i\\}$ are blocks and they are not comparable. We have already seen that
all the positive blocks are comparable and so are all the negative blocks. Let
the order on the positive blocks be $B_{1}<B_{2}<\cdots<B_{k}$ and hence the
order on the negative blocks is $-B_{k}<\cdots<-B_{2}<-B_{1}$.
Suppose $B_{1}=\\{i\\}$. Since $i$ is not the only number satisfying
$-\frac{1}{2}\prec_{R}i\prec_{R}\frac{1}{2}$, we have $b\prec_{R}\frac{1}{2}$
for all $b\in B_{2}$. Taking some $b\in B_{2}$, since $i\notin B_{2}$, there
is some $k\in[n]$ such that $i\prec_{R}-k\prec_{R}b$. Since $\\{-i\\}$ is a
block, and $k\neq i$, there is some $l\in[n]$ such that
$-i\prec_{R}l\prec_{R}-k$ (we cannot have $-k\prec_{R}l\prec_{R}-i$ since this
would mean $\\{i\\}$ and $\\{-i\\}$ are comparable). But this gives
$k\prec_{R}-l\prec_{R}i$, which contradicts the fact that there is no positive
block less than $\\{i\\}$. We get a similar contradiction if $B_{k}=\\{i\\}$.
Suppose $B_{m}=\\{i\\}$ for some $m\in[2,k-1]$. Take some $b_{p}\in B_{m-1}$
and $b_{s}\in B_{m+1}$. There are three possibilities:
1. (1)
There are $k_{p},k_{s}\in[n]$ such that
$b_{p}\prec_{R}-k_{p}\prec_{R}i\prec_{R}-k_{s}\prec_{R}b_{s}.$
Since $k_{p},k_{s}\neq i$, and $\\{-i\\}$ is a block, there are
$c_{p},c_{s}\in[n]$ such that
$\displaystyle-k_{p}$ $\displaystyle\prec_{R}c_{p}\prec_{R}-i$
$\displaystyle-i$ $\displaystyle\prec_{R}c_{s}\prec_{R}-k_{s}$
where the other possibilities are not possible since $\\{i\\}$ and $\\{-i\\}$
are not comparable. So we get
$b_{p}\prec_{R}-k_{p}\prec_{R}c_{p}\prec_{R}-i\prec_{R}c_{s}\prec_{R}-k_{s}\prec_{R}b_{s}.$
But this means that $c_{p}$ and $c_{s}$ are in blocks between $B_{m-1}$ and
$B_{m+1}$ and hence in $\\{i\\}$, which is a contradiction.
2. (2)
There is no $k_{s}\in[n]$ such that $i\prec_{R}-k_{s}\prec_{R}b_{s}$. So we must have
$i\prec_{R}\frac{1}{2}\prec_{R}b_{s}$,
$-\frac{1}{2}\prec_{R}b_{p}\prec_{R}\frac{1}{2}$ and $k_{p}\in[n]$ such that
$b_{p}\prec_{R}-k_{p}\prec_{R}i\prec_{R}\frac{1}{2}\prec_{R}b_{s}.$
This is because $i$ satisfies $-\frac{1}{2}\prec_{R}i\prec_{R}\frac{1}{2}$ and
is not the only such number. Since $k_{p}\neq i$, and $\\{-i\\}$ is a block,
there is some $c_{p}\in[n]$ such that
$-k_{p}\prec_{R}c_{p}\prec_{R}-i$
where the other possibility is not possible since $\\{i\\}$ and $\\{-i\\}$ are
not comparable. So we get
$-\frac{1}{2}\prec_{R}b_{p}\prec_{R}-k_{p}\prec_{R}c_{p}\prec_{R}-i$
which, along with previous observations, gives
$i\prec_{R}-c_{p}\prec_{R}k_{p}\prec_{R}\frac{1}{2}\prec_{R}b_{s}.$
But this means $k_{p}$ is in a positive block after $\\{i\\}$ but before
$B_{m+1}$, which is a contradiction.
3. (3)
The case when there is no $k_{p}\in[n]$ such that
$b_{p}\prec_{R}-k_{p}\prec_{R}i$ is handled similarly.
In the case when there is a unique $i\in[n]$ such that
$-\frac{1}{2}\prec_{R}i\prec_{R}\frac{1}{2}$, the only order that is not
defined is between the blocks $\\{i\\}$ and $\\{-i\\}$ and the order is of the
form
(2)
$-B_{k}<\cdots<-B_{2}<-\frac{1}{2}<\\{i\\},\\{-i\\}<\frac{1}{2}<B_{2}<\cdots<B_{k}$
where $B_{2},\dots,B_{k}$ are blocks (not necessarily positive). This can be
proved using similar arguments as before. ∎
If there is a unique $i\in[n]$ such that
$-\frac{1}{2}\prec_{R}i\prec_{R}\frac{1}{2}$ and the blocks are ordered as in
(2), we make the convention that $\\{-i\\}<\\{i\\}$ if $B_{2}$ is a positive
block and $\\{i\\}<\\{-i\\}$ if $B_{2}$ is a negative block. Once this
convention is made, we get a total order on the boxed threshold blocks,
$\frac{1}{2}$ and $-\frac{1}{2}$ in all cases. The proof of the following
lemma follows from the definition of the order.
###### Lemma 3.7.
For any blocks $B,B^{\prime}$, $B<B^{\prime}$ implies that $-B^{\prime}<-B$.
Similarly, other usual properties of taking the negative hold; such as
$B<\frac{1}{2}$ implies that $-\frac{1}{2}<-B$.
###### Proposition 3.8.
The total order associated to a region (defined in Proposition 3.6) is always
of one of the following forms:
1. (1)
$-B_{k}<\cdots<-B_{2}<-B_{1}<-\frac{1}{2}<\frac{1}{2}<B_{1}<B_{2}<\cdots<B_{k}$
where $B_{i}$ and $B_{i+1}$ are of opposite signs for all $i\in[k-1]$.
2. (2)
$-B_{k}<\cdots<-B_{l+1}<-\frac{1}{2}<-B_{l}<\cdots<-B_{1}<B_{1}<\cdots<B_{l}<\frac{1}{2}<B_{l+1}<\cdots<B_{k}$
where the size of $B_{1}$ is greater than 1 and $B_{i}$ and $B_{i+1}$ are of
opposite signs for all $i\in[l-1]$ and $i\in[l+1,k-1]$.
3. (3)
$-B_{k}<\cdots<-B_{2}<-\frac{1}{2}<-B_{1}<B_{1}<\frac{1}{2}<B_{2}<\cdots<B_{k}$
where the size of $B_{1}$ is 1, $B_{1}$ and $B_{2}$ are of the same sign, and
$B_{i}$ and $B_{i+1}$ are of opposite signs for all $i\in[2,k-1]$.
The association of such an order is a bijection between the regions of
$\mathcal{BT}_{n}$ and total orders of the types mentioned above.
###### Proof.
The fact that the first half of the order is the negative of the second
follows from Lemma 3.7. By the definition of blocks, for any two blocks of the
same sign which are in the same position with respect to $\pm\frac{1}{2}$,
there is some block of opposite sign between them.
The first form is when there is no $i\in[n]$ such that
$-\frac{1}{2}\prec_{R}i\prec_{R}\frac{1}{2}$. The third form is by the
convention we made at the end of the proof of Proposition 3.6, when there is a
unique $i\in[n]$ such that $-\frac{1}{2}\prec_{R}i\prec_{R}\frac{1}{2}$.
So we have to show that when there is more than one $i\in[n]$ such that
$-\frac{1}{2}\prec_{R}i\prec_{R}\frac{1}{2}$, the block $B_{1}$ has size
greater than 1. Suppose $B_{1}=\\{a\\}$ for some $a\in N$. Let $b\in B_{2}$.
$b$ has the same sign as $-a$ and is in a different block. Hence there is some
$k\in N$ of opposite sign to $-a$ and $b$ such that $-a\prec_{R}k\prec_{R}b$.
But this means $k$ is in a block between $\\{-a\\}$ and $B_{2}$ and hence in
$\\{a\\}$, which is a contradiction since $k\neq a$.
We will now show that the association of such an order is a bijection between
the regions of $\mathcal{BT}_{n}$ and total orders of the types mentioned
above. First note that, from the total order associated to a region, we can
get back the inequalities describing the region as follows: For any $i\neq j$
in $[n]$, $x_{i}+x_{j}>0$ if and only if the block containing $-j$ is before
that containing $i$ and the relationship between $x_{i}$ and $\pm\frac{1}{2}$
is obtained in the same way. On the other hand, given an order of one of the
forms given above, choosing some real numbers $0<c_{1}<c_{2}<\cdots<c_{k}$
such that $c_{i}<\frac{1}{2}$ if $B_{i}<\frac{1}{2}$ and $c_{i}>\frac{1}{2}$
if $B_{i}>\frac{1}{2}$ and putting $x_{a}=c_{i}$ for all $a\in B_{i}$ for all
$i\in[k]$, gives a point satisfying the required inequalities. It can also be
shown that different such orders correspond to different regions. ∎
Hence, to count the number of regions of $\mathcal{BT}_{n}$, we just have to
count the number of orders of the forms mentioned in Proposition 3.8. Note
that the first half of the order is the negative of the second, so we just
consider the second half. That is, we count orders of the following types,
where in all cases, $B_{1},\dots,B_{k}$ is a partition of $[n]$ with a sign
assigned to each block:
1. (1)
$\frac{1}{2}<B_{1}<B_{2}<\cdots<B_{k}$ where $B_{i}$ and $B_{i+1}$ are of
opposite signs for all $i\in[k-1]$.
2. (2)
$B_{1}<\cdots<B_{l}<\frac{1}{2}<B_{l+1}<\cdots<B_{k}$ where the size of
$B_{1}$ is greater than 1 and $B_{i}$ and $B_{i+1}$ are of opposite signs for
all $i\in[l-1]$ and $i\in[l+1,k-1]$.
3. (3)
$B_{1}<\frac{1}{2}<B_{2}<\cdots<B_{k}$ where the size of $B_{1}$ is 1, $B_{1}$
and $B_{2}$ are of the same sign, and $B_{i}$ and $B_{i+1}$ are of opposite
signs for all $i\in[2,k-1]$.
###### Proposition 3.9.
The total number of orders of the forms mentioned above is
$4\cdot a(n)+\sum_{k=1}^{n}4(k!-(k-1)!)(k\cdot S(n,k)-n\cdot S(n-1,k-1)).$
Here $a(n)$ is the $n^{th}$ ordered Bell number.
###### Proof.
We will count the number of orders of each of the above forms.
1. (1)
In the first case, we just have to define an ordered partition of $[n]$ and
assign alternating signs to them. The number of ways this can be done is
$\sum_{k=1}^{n}\sum\limits_{\begin{subarray}{c}(a_{1},\dots,a_{k})\\\
a_{1}+\cdots+a_{k}=n\end{subarray}}2\cdot\dfrac{n!}{a_{1}!\cdots
a_{k}!}=2\cdot a(n).$
2. (2)
In the second case, we consider two sub-cases:
1. (a)
There is no block after $\frac{1}{2}$. In this case, we have to define an
ordered partition of $[n]$ whose first part has size greater than $1$ and
assign alternating signs to them. The number of ways this can be done is
$\sum_{k=1}^{n-1}\sum\limits_{\begin{subarray}{c}(a_{1},\dots,a_{k})\\\
a_{1}+\cdots+a_{k}=n,\ a_{1}\neq 1\end{subarray}}2\cdot\dfrac{n!}{a_{1}!\cdots
a_{k}!}=2(a(n)-n\cdot a(n-1))$
where the equality is because the number of ordered partitions of $[n]$ with
first block having size $1$ is $n\cdot a(n-1)$.
2. (b)
There is some block after $\frac{1}{2}$. In this case, we have to again define
an ordered partition of $[n]$ whose first part has size greater than $1$. But
we then have to choose a spot between two blocks to place $\frac{1}{2}$ and
then choose a sign for the first block and the block after $\frac{1}{2}$. The
number of ways this can be done is
$\sum_{k=1}^{n-1}\sum\limits_{\begin{subarray}{c}(a_{1},\dots,a_{k})\\\
a_{1}+\cdots+a_{k}=n,\ a_{1}\neq 1\end{subarray}}4(k-1)\dfrac{n!}{a_{1}!\cdots
a_{k}!}.$
Making the following substitution for all $k\in[n-1]$
$\displaystyle\sum\limits_{\begin{subarray}{c}(a_{1},\dots,a_{k})\\\
a_{1}+\cdots+a_{k}=n,\ a_{1}\neq 1\end{subarray}}\dfrac{n!}{a_{1}!\cdots
a_{k}!}$ $\displaystyle=\sum\limits_{\begin{subarray}{c}(a_{1},\dots,a_{k})\\\
a_{1}+\cdots+a_{k}=n\end{subarray}}\dfrac{n!}{a_{1}!\cdots a_{k}!}\hskip
5.69046pt-\sum\limits_{\begin{subarray}{c}(1,a_{2},\dots,a_{k})\\\
1+a_{2}+\cdots+a_{k}=n\end{subarray}}\dfrac{n!}{1!a_{2}!\cdots a_{k}!}$
$\displaystyle=k!\cdot S(n,k)-n\cdot(k-1)!\cdot S(n-1,k-1)$
we get that the initial expression is the same as
$\sum_{k=1}^{n}4(k!-(k-1)!)(k\cdot S(n,k)-n\cdot S(n-1,k-1)).$
3. (3)
In the third case, we have to choose the element of $[n]$ in $B_{1}$ and then
define an ordered partition of the remaining $(n-1)$ elements and assign
alternating signs to them. Since we want $B_{1}$ and $B_{2}$ to have the same
sign, we just need to assign a sign to $B_{2}$. So, the number of orders of
the third type is
$n\cdot\sum_{k=1}^{n-1}\sum\limits_{\begin{subarray}{c}(a_{1},\dots,a_{k})\\\
a_{1}+\cdots+a_{k}=n-1\end{subarray}}2\cdot\dfrac{(n-1)!}{a_{1}!\cdots
a_{k}!}=n\cdot 2\cdot a(n-1).$
Adding up the counts made for each form gives us the required result. ∎
So, from the observations made above, we have proved the following theorem:
###### Theorem 3.10.
The number of regions of $\mathcal{BT}_{n}$ is
$\displaystyle 4\cdot a(n)+\sum_{k=1}^{n}4(k!-(k-1)!)(k\cdot S(n,k)-n\cdot
S(n-1,k-1)).$
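The closed form in Theorem 3.10 can be checked against the region counts in Table 1. A minimal Python sketch (function names are ours), using the standard recurrences for the Stirling numbers of the second kind and the ordered Bell numbers:

```python
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def stirling2(n, k):
    # Stirling numbers of the second kind S(n, k)
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def ordered_bell(n):
    # a(n) = sum_k k! * S(n, k), the number of ordered set partitions of [n]
    return sum(factorial(k) * stirling2(n, k) for k in range(n + 1))

def regions_BT(n):
    # Number of regions of the boxed threshold arrangement (Theorem 3.10)
    total = 4 * ordered_bell(n)
    for k in range(1, n + 1):
        total += 4 * (factorial(k) - factorial(k - 1)) * (
            k * stirling2(n, k) - n * stirling2(n - 1, k - 1)
        )
    return total

# Values agree with the last column of Table 1
print([regions_BT(n) for n in range(4, 8)])  # [436, 3624, 35516, 400544]
```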
Similar arguments can be applied to the threshold arrangement
$\mathcal{T}_{n}$.
###### Proposition 3.11.
The regions of $\mathcal{T}_{n}$ are in bijection with ordered partitions of
$[-n,n]\setminus\\{0\\}$ of the form
$-B_{k}<\cdots<-B_{2}<-B_{1}<B_{1}<B_{2}<\cdots<B_{k}$
where all elements of each block have the same sign, the size of $B_{1}$ is
greater than $1$ and $B_{i}$ and $B_{i+1}$ are of opposite signs for all
$i\in[k-1]$.
The bijection in Proposition 3.11 is defined just as was done for
$\mathcal{BT}_{n}$. That is, the region associated to such an order satisfies
$x_{i}+x_{j}>0$ if and only if $-j$ appears before $i$ in the order. Again,
such orders are completely specified by their second half, which are ordered
partition of $[n]$ with first block size greater than $1$ and a sign assigned
to the first block (and alternate signs for consecutive blocks). So, we get
the following theorem:
###### Theorem 3.12.
The number of regions of $\mathcal{T}_{n}$ is
$2(a(n)-n\cdot a(n-1)).$
###### Remark 3.13.
It can be shown that the orders considered in Proposition 3.11 are in
bijection with the set of threshold pairs (of size $n$) in standard form
considered by Spiro [8]. (A pair $(\pi,\omega)$ is a _threshold pair_ of size
$n$ if $\pi$ is a permutation of size $n$ and $\omega$ is a word in
$\\{+1,-1\\}^{n}$; a threshold pair $(\pi,\omega)$ of size $n\geq 2$ is in
_standard form_ if $\omega_{1}=\omega_{2}$ and if $\omega_{i}=\omega_{i+1}$
implies $\pi_{i}<\pi_{i+1}$ for all $1\leq i<n$.) In fact, he showed that the
threshold pairs are in bijection with the labeled threshold graphs.
The known formula for the number of labeled threshold graphs is in terms of
Eulerian numbers. Hence for the sake of completeness, we now show that the
formula in Theorem 3.12 is the same as the one containing Eulerian numbers. We
first recall a few identities related to Eulerian numbers $A(n,k)$ and the
ordered Bell number $a(n)$. A quick reference for these identities is Bóna’s
book [3, Section 1.1].
* •
$a(n)=\sum\limits_{k=0}^{n-1}2^{k}\cdot A(n,k)$
* •
$A(n,0)=1$, $A(n,n-1)=1$ and $A(n,k)=0$ for all $k\geq n$.
* •
$A(n,k)=(n-k)A(n-1,k-1)+(k+1)A(n-1,k)$ for $k\geq 1$.
###### Proposition 3.14.
We have the following equality
$r(\mathcal{T}_{n})=\sum\limits_{k=1}^{n-1}2^{k}(n-k)A(n-1,k-1).$
###### Proof.
From Theorem 3.12, we have
$\begin{split}r(\mathcal{T}_{n})&=2(a(n)-n\cdot a(n-1))\\\
&=2\cdot\Bigg{(}\sum\limits_{k=0}^{n-1}2^{k}\cdot
A(n,k)-n\cdot\Big{(}\sum\limits_{k=0}^{n-2}2^{k}\cdot
A(n-1,k)\Big{)}\Bigg{)}\\\
\frac{r(\mathcal{T}_{n})}{2}&=A(n,0)+\sum\limits_{k=1}^{n-1}2^{k}\big{(}(n-k)A(n-1,k-1)+(k+1)A(n-1,k)\big{)}\\\
&\hskip 14.22636pt-n\Big{(}\sum\limits_{k=0}^{n-2}2^{k}\cdot
A(n-1,k)\Big{)}\\\
&=\sum\limits_{k=1}^{n-1}2^{k}(n-k)A(n-1,k-1)+\Big{(}A(n,0)+\sum\limits_{k=1}^{n-1}2^{k}(k+1)A(n-1,k)\Big{)}\\\
&\hskip 14.22636pt-\sum\limits_{k=0}^{n-2}n\cdot 2^{k}\cdot
A(n-1,k)\end{split}$
Since $A(n-1,n-1)=0$,
$\begin{split}\frac{r(\mathcal{T}_{n})}{2}&=\sum\limits_{k=1}^{n-1}2^{k}(n-k)A(n-1,k-1)+\sum\limits_{k=0}^{n-2}2^{k}(k+1)A(n-1,k)-\sum\limits_{k=0}^{n-2}n\cdot
2^{k}\cdot A(n-1,k)\\\
&=\sum\limits_{k=1}^{n-1}2^{k}(n-k)A(n-1,k-1)-\sum\limits_{k=0}^{n-2}(n-k-1)2^{k}\cdot
A(n-1,k)\end{split}$
Replacing $k+1$ by $t$ in the second sum, we get
$\begin{split}r(\mathcal{T}_{n})&=2\cdot\Bigg{(}\sum\limits_{k=1}^{n-1}2^{k}(n-k)A(n-1,k-1)-\sum\limits_{t=1}^{n-1}2^{t-1}(n-t)A(n-1,t-1)\Bigg{)}\\\
&=\sum\limits_{k=1}^{n-1}2^{k}(n-k)A(n-1,k-1).\end{split}$
∎
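The equality of Theorem 3.12 and Proposition 3.14 is easy to confirm numerically. A minimal sketch (function names are ours), computing Eulerian numbers by the recurrence recalled above:

```python
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def stirling2(n, k):
    # Stirling numbers of the second kind S(n, k)
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def ordered_bell(n):
    # a(n) = sum_k k! * S(n, k)
    return sum(factorial(k) * stirling2(n, k) for k in range(n + 1))

@lru_cache(maxsize=None)
def eulerian(n, k):
    # Eulerian number A(n, k): A(n,0) = 1, A(n,k) = 0 for k >= n,
    # A(n,k) = (n-k)A(n-1,k-1) + (k+1)A(n-1,k)
    if k == 0:
        return 1
    if k >= n:
        return 0
    return (n - k) * eulerian(n - 1, k - 1) + (k + 1) * eulerian(n - 1, k)

def regions_T(n):
    # Theorem 3.12
    return 2 * (ordered_bell(n) - n * ordered_bell(n - 1))

def regions_T_eulerian(n):
    # Proposition 3.14
    return sum(2 ** k * (n - k) * eulerian(n - 1, k - 1) for k in range(1, n))

assert all(regions_T(n) == regions_T_eulerian(n) for n in range(2, 12))
print([regions_T(n) for n in range(2, 7)])  # [2, 8, 46, 332, 2874]
```

The printed values are the numbers of labeled threshold graphs on $2,\dots,6$ vertices, as expected from Spiro's bijection [8].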
## 4\. The colored threshold graphs
Before defining the colored threshold graphs that are in bijection with the
regions of the boxed threshold arrangement, we recall the bijection between
regions of the threshold arrangement and labeled threshold graphs.
###### Definition 4.1.
A threshold graph is defined recursively as follows:
1. (1)
The empty graph is a threshold graph.
2. (2)
A graph obtained by adding an isolated vertex to a threshold graph is a
threshold graph.
3. (3)
A graph obtained by adding a vertex adjacent to all vertices of a threshold
graph is a threshold graph.
###### Definition 4.2.
A labeled threshold graph is a threshold graph having $n$ vertices with
vertices labeled distinctly using $[n]$.
Such labeled threshold graphs can be specified by a signed permutation of
$[n]$, that is, a permutation of $[n]$ with a sign associated to each number.
The signed permutation $i_{1}\ i_{2}\ \cdots\ i_{n}$ would correspond to the
labeled threshold graph obtained by adding vertices labeled
$|i_{1}|,|i_{2}|,\dots,|i_{n}|$ in order where a positive $i_{k}$ means that
$|i_{k}|$ is added adjacent to all previous vertices and a negative $i_{k}$
means that it is added as an isolated vertex. A maximal string of
positive numbers or negative numbers in a signed permutation will be called a
block.
###### Example 4.3.
The labeled threshold graph associated to the signed permutation on $[5]$
given by
$\overset{-}{2}\overset{-}{3}\overset{+}{1}\overset{+}{4}\overset{-}{5}$ is
shown in Figure 1.
Figure 1. Construction of the threshold graph corresponding to
$\overset{-}{2}\overset{-}{3}\overset{+}{1}\overset{+}{4}\overset{-}{5}$.
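The left-to-right construction just described is straightforward to implement. A minimal sketch (the function name is ours), which reproduces the edge set of the graph in Example 4.3:

```python
def threshold_graph(signed_perm):
    """Build the edge set of the labeled threshold graph encoded by a
    signed permutation, given as a list of nonzero integers: a positive
    entry i adds vertex i adjacent to all previous vertices, a negative
    entry -i adds vertex i as an isolated vertex."""
    vertices, edges = [], set()
    for entry in signed_perm:
        v = abs(entry)
        if entry > 0:  # dominating vertex: join to everything added so far
            edges.update(frozenset((v, u)) for u in vertices)
        vertices.append(v)
    return edges

# Example 4.3: the signed permutation -2 -3 +1 +4 -5
edges = threshold_graph([-2, -3, 1, 4, -5])
print(sorted(tuple(sorted(e)) for e in edges))
# [(1, 2), (1, 3), (1, 4), (2, 4), (3, 4)]
```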
The following facts can be verified:
1. (1)
The sign of the first number in the permutation does not matter and hence we
can make the first block have size greater than 1.
2. (2)
Elements in the same block can be reordered.
Hence, labeled threshold graphs can be specified by an ordered partition of
$[n]$ with first block size greater than 1 and alternating signs assigned to
the blocks. In fact, this association is a bijection.
Given a threshold graph $G_{1}$, we can obtain this alternating signed ordered
partition of $[n]$ as follows: Since $G_{1}$ is a threshold graph, it has at
least one isolated vertex or at least one vertex that is adjacent to all other
vertices. If it has an isolated vertex, set $D_{1}$ to be the set of all
isolated vertices, assign it a negative sign and set $G_{2}$ to be the graph
obtained by deleting all the vertices of $D_{1}$ from $G_{1}$. If $G_{1}$ has
at least one vertex adjacent to all other vertices, set $D_{1}$ to be the set
of all such vertices, assign it a positive sign and set $G_{2}$ to be the
graph obtained by deleting all the vertices of $D_{1}$ from $G_{1}$. We repeat
this process until we obtain a graph $G_{k}$ which is complete, in which case
we set $D_{k}$ to be all vertices of $G_{k}$ and assign it a positive sign, or
$G_{k}$ has no edges, in which case we set $D_{k}$ to be all vertices of
$G_{k}$ and assign it a negative sign. Then set $B_{i}=D_{k-i+1}$ and assign
it the same sign as $D_{k-i+1}$ for all $i\in[k]$. The signed ordered
partition $B_{1},\dots,B_{k}$ is the one associated to $G_{1}$.
###### Example 4.4.
Figure 2 shows an example of obtaining the signed blocks from a threshold
graph. The corresponding signed ordered partition for this example is
$\overset{-}{\\{2,3\\}}\overset{+}{\\{1,4\\}}\overset{-}{\\{5\\}}$.
Figure 2. Obtaining blocks from a threshold graph.
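The peeling procedure above can be sketched in a few lines (the function name is ours). Running it on the graph of Example 4.4 recovers the signed ordered partition $\overset{-}{\\{2,3\\}}\overset{+}{\\{1,4\\}}\overset{-}{\\{5\\}}$:

```python
def signed_blocks(vertices, edges):
    """Decompose a labeled threshold graph into its signed ordered
    partition B_1, ..., B_k by repeatedly stripping either all isolated
    vertices ('-') or all dominating vertices ('+')."""
    vertices = set(vertices)
    edges = {frozenset(e) for e in edges}
    peeled = []
    while vertices:
        deg = {v: sum(v in e for e in edges) for v in vertices}
        isolated = {v for v in vertices if deg[v] == 0}
        if isolated:
            # a final single vertex is both complete and edgeless; by
            # Fact (1) above, the sign chosen for it does not matter
            block, sign = isolated, '-'
        else:
            block = {v for v in vertices if deg[v] == len(vertices) - 1}
            sign = '+'
        peeled.append((sign, sorted(block)))
        vertices -= block
        edges = {e for e in edges if e <= vertices}
    return peeled[::-1]  # B_i = D_{k-i+1}

# Example 4.4
blocks = signed_blocks(
    [1, 2, 3, 4, 5],
    [(1, 2), (1, 3), (1, 4), (2, 4), (3, 4)],
)
print(blocks)  # [('-', [2, 3]), ('+', [1, 4]), ('-', [5])]
```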
Hence, regions of $\mathcal{T}_{n}$ and labeled threshold graphs on $n$
vertices are both in bijection with ordered partitions of $[n]$ with first
block size greater than 1 and alternating signs assigned to the blocks. Hence
we obtain a bijection between regions of $\mathcal{T}_{n}$ and labeled
threshold graphs on $n$ vertices. By combining the definitions of the two
bijections we see that to a labeled threshold graph on $n$ vertices we assign
the region where $x_{i}+x_{j}>0$ if and only if there is an edge between $i$
and $j$.
This can be proved as follows: If $-B_{k}<\cdots<-B_{1}<B_{1}<\cdots<B_{k}$ is
the threshold block order corresponding to some region $R$ of
$\mathcal{T}_{n}$, $-j\prec_{R}i$ for some $i\neq j$ in $[n]$ if and only if
one of the following holds:
1. (1)
$-j$ and $i$ both appear in $B_{1},\dots,B_{k}$ with $-j$ appearing first.
2. (2)
$-j$ appears in $-B_{k},\dots,-B_{1}$ and $i$ appears in $B_{1},\dots,B_{k}$.
3. (3)
$-j$ and $i$ both appear in $-B_{k},\dots,-B_{1}$ with $-j$ appearing first.
Equivalently, one of the following holds:
1. (1)
$-j$ and $i$ both appear in $B_{1},\dots,B_{k}$ with $-j$ appearing first.
2. (2)
$i$ and $j$ both appear in $B_{1},\dots,B_{k}$.
3. (3)
$-i$ and $j$ both appear in $B_{1},\dots,B_{k}$ with $-i$ appearing first.
But this is precisely the condition for there to be an edge between $i$ and
$j$ in the threshold graph corresponding to $B_{1}<\cdots<B_{k}$.
We now move on to the boxed threshold arrangement.
###### Definition 4.5.
A colored threshold graph is defined recursively as follows:
1. (1)
The empty graph is a colored threshold graph.
2. (2)
A graph obtained by adding an isolated vertex to a colored threshold graph is
a colored threshold graph. If there are colored vertices in the initial
colored threshold graph, the new vertex should be colored red. If not, the new
vertex can be left uncolored or colored red.
3. (3)
A graph obtained by adding a vertex adjacent to all vertices of a colored
threshold graph is a colored threshold graph. If there are colored vertices in
the initial colored threshold graph, the new vertex should be colored blue. If
not, the new vertex can be left uncolored or colored blue.
###### Definition 4.6.
A labeled colored threshold graph is a colored threshold graph with $n$
vertices with the vertices labeled distinctly with elements of $[n]$.
Just as for threshold graphs, labeled colored threshold graphs can be
represented as a signed permutation. However, we also have to specify if and
when the coloring of the vertices starts. This is done by using the symbol
$\frac{1}{2}$. Having $\frac{1}{2}$ at the end of the signed permutation means
that none of the vertices should be colored.
###### Example 4.7.
The sequence
$\overset{+}{2}\frac{1}{2}\overset{+}{1}\overset{+}{3}\overset{-}{4}\overset{-}{5}$
corresponds to the graph shown in Figure 3.
Figure 3. Labeled colored threshold graph corresponding to
$\overset{+}{2}\frac{1}{2}\overset{+}{1}\overset{+}{3}\overset{-}{4}\overset{-}{5}$.
Using similar observations about these sequences associated to labeled colored
threshold graphs as done for labeled threshold graphs, we get that labeled
colored threshold graphs are in bijection with orders of the forms counted in
Proposition 3.9. Since these orders also correspond to regions of
$\mathcal{BT}_{n}$, we get a bijection between labeled colored threshold
graphs with $n$ vertices and regions of $\mathcal{BT}_{n}$. Just as before,
the inequalities describing the region associated to a colored threshold graph
are as follows: $x_{i}+x_{j}>0$ if and only if there is an edge between $i$
and $j$, $-\frac{1}{2}<x_{i}<\frac{1}{2}$ if $i$ is not colored,
$x_{i}>\frac{1}{2}$ if $i$ is colored blue and $x_{i}<-\frac{1}{2}$ if $i$ is
colored red. Notice that the underlying labeled threshold graph corresponds to
the $\mathcal{T}_{n}$ region that the $\mathcal{BT}_{n}$ region lies in.
Also, we can see that the bounded regions of $\mathcal{BT}_{n}$ are in
bijection with the regions of $\mathcal{T}_{n}$. Both are represented by
labeled threshold graphs with $n$ vertices. The bounded region of
$\mathcal{BT}_{n}$ corresponding to a region of $\mathcal{T}_{n}$ is the one
satisfying the same inequalities between $x_{i}+x_{j}$ and $0$ for all $i\neq
j$ in $[n]$ and having $-\frac{1}{2}<x_{i}<\frac{1}{2}$ for all $i\in[n]$.
Figure 4. Regions of $\mathcal{BT}_{2}$ represented by labeled colored
threshold graphs.
## References
* [1] C. A. Athanasiadis, Characteristic polynomials of subspace arrangements and finite fields, Adv. Math. 122 (1996), 193–233.
* [2] C. A. Athanasiadis, Extended Linial hyperplane arrangements for root systems and a conjecture of Postnikov and Stanley, J. Algebraic Combin. 10 (1999), 207–225.
* [3] M. Bóna, Combinatorics of Permutations, Second Edition, CRC Press, 2012.
* [4] N. J. A. Sloane et al., The On-Line Encyclopedia of Integer Sequences, 2021. Available at https://oeis.org.
* [5] J. Song, Enumeration of graphs and the characteristic polynomial of the hyperplane arrangements $\mathcal{J}_{n}$, J. Korean Math. Soc. 54 (2017), 1595–1604.
* [6] J. Song, On certain hyperplane arrangements and colored graphs, Bull. Korean Math. Soc. 54 (2017), 375–382.
* [7] J. Song, Characteristic polynomial of the hyperplane arrangements $\mathcal{J}_{n}$ via finite field method, Commun. Korean Math. Soc. 33 (2018), 759–765.
* [8] S. Spiro, Counting labeled threshold graphs with Eulerian numbers, Australas. J. Combin. 77 (2020), 249–255.
* [9] R. P. Stanley, An introduction to hyperplane arrangements, in E. Miller, V. Reiner, and B. Sturmfels, eds., Geometric Combinatorics, Amer. Math. Soc., 2007, pp. 389–496.
* [10] T. Zaslavsky, Facing up to arrangements: face-count formulas for partitions of space by hyperplanes, Mem. Amer. Math. Soc. 1 (1975), No. 154.
# Permutations Avoiding Certain Partially-ordered Patterns
Kai Ting Keshia Yap Department of Mathematics & Statistics, Queen's
University, 48 University Ave. Jeffery Hall Kingston, ON Canada K7L 3N6
<EMAIL_ADDRESS>, David Wehlau∗ Department of Mathematics & Computer
Science, Royal Military College of Canada, P.O.Box 17000, Station Forces,
Kingston, Ontario, Canada K7K 7B4<EMAIL_ADDRESS>and Imed
Zaguia∗1 Department of Mathematics & Computer Science, Royal Military College
of Canada, P.O.Box 17000, Station Forces, Kingston, Ontario, Canada K7K 7B4
<EMAIL_ADDRESS>
###### Abstract.
A permutation $\pi$ contains a pattern $\sigma$ if and only if there is a
subsequence in $\pi$ whose letters are in the same relative order as those
in $\sigma$. Partially ordered patterns (POPs) provide a convenient way to
denote patterns in which the relative order of some of the letters does not
matter. This paper elucidates connections between the avoidance sets of a few
POPs with other combinatorial objects, directly answering five open questions
posed by Gao and Kitaev [5]. This was done by thoroughly analysing the
avoidance sets and developing recursive algorithms to derive these sets and
their corresponding combinatorial objects in parallel, which yielded a natural
bijection. We also analysed an avoidance set whose simple permutations are
enumerated by the Fibonacci numbers and derived an algorithm to obtain them
recursively.
###### Key words and phrases:
bijection; pattern avoidance; permutation; POP avoidance; simple permutation
###### 2010 Mathematics Subject Classification:
05A05, 05A15
∗ Partially supported by NSERC
1 Corresponding author.
## 1\. Introduction
This paper elucidates connections between the avoidance sets of a few
Partially Ordered Patterns (POPs) with other combinatorial objects, directly
answering five open questions posed by Gao and Kitaev [5]. Results in this
article appeared in the first author’s MSc dissertation.
We write $[n]$ to denote the set of integers $\\{1,2,\dots,n\\}$ for $n\geq
1$. A permutation is a bijection from $[n]$ to itself for some $n\geq 1$. We
call such a permutation an $n$-permutation and typically denote it by
$\pi=\pi_{1}\,\pi_{2}\,\dots\,\pi_{n}$, where $\pi_{i}=\pi(i)$. We say that
its length or size (denoted ${\left|\pi\right|}$) is $n$. We write $S_{n}$ to
denote the set of all $n$-permutations. We denote the size of any set $S$ by
$\\#S$ or ${\left|S\right|}$.
### 1.1. Background
A permutation $\pi$ contains a pattern $\sigma$ if and only if there is a
subsequence in $\pi$ (of the same length as $\sigma$) whose letters are in
the same relative order as those in $\sigma$. For instance, the pattern $312$
occurs in 42531 (as the subsequence 423), but not in 132465. The permutations
that avoid a pattern or a set of patterns make up an avoidance set. Avoidance
sets have been studied extensively and research in this area has important
applications to numerous fields. Examples include sorting devices in
theoretical computer science, Schubert varieties and Kazhdan-Lusztig
polynomials, statistical mechanics, the tandem duplication-random loss model
in computational biology and bijective combinatorics (see [6] and references
therein).
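Pattern containment can be tested by brute force over subsequences. A minimal sketch (the function name is ours), checking the two examples above:

```python
from itertools import combinations

def contains(perm, pattern):
    """True if perm contains a subsequence order-isomorphic to pattern,
    i.e. whose letters appear in the same relative order."""
    k = len(pattern)
    # the relative order of a word with distinct letters is captured by
    # its positions sorted by value
    order = sorted(range(k), key=lambda i: pattern[i])
    for positions in combinations(range(len(perm)), k):
        word = [perm[i] for i in positions]
        if sorted(range(k), key=lambda i: word[i]) == order:
            return True
    return False

print(contains([4, 2, 5, 3, 1], [3, 1, 2]))     # True (e.g. subsequence 4, 2, 3)
print(contains([1, 3, 2, 4, 6, 5], [3, 1, 2]))  # False: 132465 avoids 312
```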
A partially ordered pattern (abbreviated POP) is a partially ordered set
(poset) that generalizes the notion of a pattern when we are not concerned
with the relative order of some of its letters, and therefore may represent
multiple patterns. Specifically, a POP is a poset with $n$ elements labelled
$1,\,2,\,\dots,\,n$, for some $n\geq 1$. For any pattern that the POP
represents, the partial order of the elements stipulates the relative order of
letters in the pattern, where the labels of the elements indicate the
positional order of these letters. For example, let $p$ be the POP on four
elements whose only relation is $3<1$; then $p$ represents all the patterns
of length 4 whose first letter is larger than the third letter. That is, $p$
represents the twelve patterns
$2314,\,2413,\,3124,\,3421,\,3214,\,3412,\,4213,\,4312,\,4123,\,4321,\,4132\,\text{and
}\,4231.$
A POP may represent a single pattern. For example, the pattern 3241
represented as a POP is the chain of four elements labelled 1, 2, 3 and 4 with
the order $4<2<1<3$. Note that 3241 is the permutation inverse of 4213, and
this is not a coincidence.
A permutation contains a POP if and only if it contains at least one of the
patterns represented by that POP. Otherwise, it avoids the POP. For example,
the permutation 3472615 contains 21 occurrences of the POP $p$ (defined above)
whereas 132456 avoids $p$.
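POP containment can be sketched the same way (the helper below is ours, not code from the paper): a subsequence is an occurrence exactly when it satisfies every relation of the poset, so for $p$ we only need to test whether the first letter exceeds the third.

```python
from itertools import combinations

def pop_occurrences(perm, size, greater_than):
    """Count length-`size` subsequences of perm satisfying every relation
    (a, b) in `greater_than`, read as: the letter in position a of the
    subsequence exceeds the letter in position b (positions are 1-based)."""
    return sum(1 for sub in combinations(perm, size)
               if all(sub[a - 1] > sub[b - 1] for a, b in greater_than))

# The POP p of size 4 with the single relation 1 > 3, as in the text.
print(pop_occurrences((3, 4, 7, 2, 6, 1, 5), 4, [(1, 3)]))  # 21
print(pop_occurrences((1, 3, 2, 4, 5, 6), 4, [(1, 3)]))     # 0
```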
### 1.2. Motivation and structure
Enumerating the permutations of different lengths in the avoidance set of a
pattern or set of patterns and finding one-to-one correspondences to well-
known combinatorial objects is a topic of great interest. Several classical
combinatorial objects may be related to a single avoidance set, and finding
these connections would allow us to understand seemingly disparate objects
under a common framework [6]. With the aid of computer software, Gao and
Kitaev [5] conducted a systematic search of connections between sequences in
The Online Encyclopedia of Integer Sequences (OEIS) [9] and the enumeration of
permutations avoiding POPs with 4 or 5 elements. They observed connections to
38 sequences in OEIS and listed 15 combinatorial objects with which
potentially interesting bijections might occur with the avoidance sets of
certain POPs (see Tables 6 and 7 of their paper).
The goal of this paper is to construct meaningful bijections between as many
of these pairs of objects as possible. With the help of the interactive
software PermLab [1], we successfully construct nontrivial bijections for five of these
pairs and find generalizations whenever possible. We list the objects in Table
1 and discuss the bijections in Sections 2, 3, 4 and 5. One bijection
(discussed in Section 3.2) emerges directly from the original proof of the
enumeration of ground-state juggling sequences by Chung and Graham [4]. For
each of the remaining four bijections, we first realize that both sets in the
corresponding pair could be partitioned into subsets of corresponding sizes.
This allows us to construct similar recursive algorithms that can build the
sets in parallel, which in turn yield (one or many) bijections that could be
constructed directly and explicitly. Thus, we end up with a thorough
understanding of the permutations that avoid each POP and of the corresponding
combinatorial objects.
During our analysis, we discovered a set of patterns that are avoided by
infinitely many simple permutations (to be defined in Section 1.3), which are,
in fact, enumerated by a translation of the well-known Fibonacci sequence. We
construct an algorithm that obtains these simple permutations recursively and
prove its correctness in Section 6. Section 1.3 defines all the relevant
terms and concepts in detail and Section 7 summarises our research and lists
possible avenues of further research.
POP | OEIS sequence (beginning with $n=1$) | Equinumerous structures | Location
---|---|---|---
$\lambda$ (size 4; $1>2$ and $1>4$) | A111281 1, 2, 6, 16, 40, 100, 252, 636, 1604, 4044, 10196, 25708, … | permutations avoiding the patterns 2413, 2431, 4213, 3412, 3421, 4231, 4321, 4312 | Section 2
(Hasse diagram) | A084509 1, 2, 6, 24, 96, 384, 1536, 6144, 24576, 98304, 393216, 1572864, … | number of ground-state 3-ball juggling sequences of period $n$ | Section 3.2
(Hasse diagram) | A025192 1, 2, 6, 18, 54, 162, 486, 1458, 4374, 13122, 39366, 118098, … | 2-ary shrub forests of $n$ heaps avoiding the patterns 231, 312, 321 | Section 3.3
(Hasse diagram) | A045925 1, 2, 6, 12, 25, 48, 91, 168, 306, 550, 979, 1728, 3029, … | levels in all compositions of $n+1$ with only ones and twos | Section 4.2
(Hasse diagram) | A214663 and A232164 1, 2, 6, 12, 25, 57, 124, 268, 588, 1285, 2801, 6118, 13362, … | number of $n$-permutations for which the partial sums of signed displacements do not exceed 2 | Section 5
Table 1. List of POPs studied
### 1.3. Preliminaries
For an $n$-permutation $\pi$, we say that
$\pi_{i_{1}}\,\pi_{i_{2}}\,\cdots\,\pi_{i_{k}}$ is a subsequence of $\pi$ if
and only if $1\leq i_{1}<i_{2}<\cdots<i_{k}\leq n$ and $k\in[n]$. For an
$n$-permutation $\pi$ and any $1\leq i,\,j\leq n$, the contiguous substring
$\pi_{i}\pi_{i+1}\,\cdots\,\pi_{j}$ is called a factor of $\pi$. We denote
$\pi_{i}\pi_{i+1}\,\cdots\,\pi_{j}$ simply as $\pi_{[i,j]}$. Note that if
$i=j$, then $\pi_{[i,j]}=\pi_{i}$ has length 1 and we call it a point, term or
an element. If $i>j$ then $\pi_{[i,j]}$ has length 0, and we say that it is
empty. Let $\alpha=\pi_{[i_{1},j_{1}]}$ and $\beta=\pi_{[i_{2},j_{2}]}$ be
non-empty factors of $\pi$. We write $\alpha<\beta$ if and only if
$\pi_{\ell_{1}}<\pi_{\ell_{2}}$ for all $\ell_{1}\in[i_{1},j_{1}]$ and
$\ell_{2}\in[i_{2},j_{2}]$.
Note that we may extend the definition of factors of permutations to factors
of factors. If $\pi$ is a factor of size $n$ of a larger $m$-permutation
$\zeta$, say $\pi=\zeta_{[i,j]}$ for some $1\leq i\leq j\leq m$, then we use
$\pi_{k}$ to denote $\zeta_{i+k-1}$ for any $k\in[n]$. Then
$\pi_{[k,\ell]}=\zeta_{[i+k-1,\,i+\ell-1]}$ for any $1\leq k\leq\ell\leq n$.
We say that a factor $\sigma=\pi_{[i,j]}$ (for some $1\leq i\leq j\leq n$) of
an $n$-permutation $\pi$ contains the number $x$, denoted as $x\in\sigma$, if
and only if $\pi_{\ell}=x$ for some $\ell\in[i,j]$. Otherwise, we say that
$\sigma$ does not contain the number $x$ and write $x\not\in\sigma$.
For an $n$-permutation $\pi$, we say that a factor $\pi_{[i,j]}$ is an
interval if and only if it contains exactly the numbers in a contiguous
interval of $[n]$. That is, if and only if
$\\{\pi_{\ell}\mid\ell\in[i,j]\\}=[s,t]$ for some $s,t\in[n]$. For example,
in the permutation 52413, the factor 2413 is an interval while the factor 241 is not an interval. An
interval of an $n$-permutation is trivial if and only if its length is 0, 1 or
$n$.
Let $\pi_{i_{1}}\pi_{i_{2}}\,\cdots\,\pi_{i_{k}}$ be a subsequence of an
$n$-permutation $\pi$. The reduced subsequence
$\text{red}(\pi_{i_{1}}\pi_{i_{2}}\,\cdots\,\pi_{i_{k}})$ is defined as the
$k$-permutation that is order-isomorphic to the subsequence. That is,
$\text{red}(\pi_{i_{1}}\,\pi_{i_{2}}\,\cdots\,\pi_{i_{k}})=\sigma$ is the
$k$-permutation where $\sigma_{s}<\sigma_{t}$ if and only if
$\pi_{i_{s}}<\pi_{i_{t}}$. We say that $\sigma$ is the reduction of the
subsequence $\pi_{i_{1}}\pi_{i_{2}}\,\cdots\,\pi_{i_{k}}$.
###### Example 1.1.
Let $\pi=1435726$. Then the following statements are true:
1. (a)
$1576$ is a subsequence of $\pi$
2. (b)
$\text{red}(1576)=1243$
3. (c)
$\pi_{[3,5]}=357$ and $\pi_{[6,7]}=26$ are factors of $\pi$
4. (d)
$\pi_{[2,4]}=435$ is an interval of $\pi$
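The reduction and the interval test both admit short sketches (helpers are our own); a factor is an interval exactly when the spread of its values equals its length:

```python
def reduction(seq):
    """Return the permutation order-isomorphic to seq."""
    ranks = sorted(seq)
    return tuple(ranks.index(x) + 1 for x in seq)

def is_interval(perm, i, j):
    """True if the factor perm[i..j] (1-based, inclusive) contains exactly
    a contiguous range of values."""
    vals = perm[i - 1:j]
    return max(vals) - min(vals) + 1 == len(vals)

pi = (1, 4, 3, 5, 7, 2, 6)
print(reduction((1, 5, 7, 6)))  # (1, 2, 4, 3), as in Example 1.1(b)
print(is_interval(pi, 2, 4))    # True: 435 is an interval of pi
print(is_interval(pi, 3, 5))    # False: 357 contains {3, 5, 7}
```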
### 1.4. Simple permutations
###### Definition 1.1.
An $n$-permutation is simple if and only if all its intervals are trivial.
That is, if and only if its intervals are all of length 0, 1 or $n$.
Simple permutations were first considered in [8].
###### Definition 1.2.
Let $\sigma$ be a $k$-permutation, and for $\ell\in[k]$, let $\alpha^{(\ell)}$
be a permutation of length $i_{\ell}$. We define the inflation of $\sigma$ by
$\alpha^{(1)},\alpha^{(2)},\dots,\alpha^{(k)}$ as the permutation
$\pi=\sigma[\alpha^{(1)},\alpha^{(2)},\dots,\alpha^{(k)}]$
of length $n:=i_{1}+i_{2}+\cdots+i_{k}$ where given $s_{0}:=0$, $\ell\in[k]$,
$s_{\ell}:=i_{1}+i_{2}+\cdots+i_{\ell}$, the following hold:
* •
the factors $\pi_{[s_{\ell-1}+1,\,s_{\ell}]}$ are intervals
* •
$\text{red}(\pi_{[s_{\ell-1}+1,\,s_{\ell}]})=\alpha^{(\ell)}$
* •
$\pi_{[s_{t-1}+1,\,s_{t}]}<\pi_{[s_{u-1}+1,\,s_{u}]}$ if and only if
$\sigma_{t}<\sigma_{u}$.
We call $\sigma$ a quotient of $\pi$.
###### Example 1.2.
The permutation 526314 is simple while 4215763 is not. The following
statements are true:
1. (a)
$4215763=3142[1,21,132,1]$. Note that 4, 21, 576 and 3 are intervals of 4215763.
2. (b)
3142 is a quotient of 4215763
3. (c)
4215763 is an inflation of 3142
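The inflation of Definition 1.2 can be sketched as follows (our own helper, not code from the paper): each block receives a contiguous range of values, with the ranges ordered according to $\sigma$.

```python
def inflate(sigma, blocks):
    """Inflation sigma[alpha^(1), ..., alpha^(k)] per Definition 1.2.
    sigma is a k-permutation; blocks is a list of k permutations."""
    k = len(sigma)
    sizes = [len(b) for b in blocks]
    starts = [0] * k
    offset = 0
    for rank in range(1, k + 1):       # assign value ranges bottom-up
        t = sigma.index(rank)          # position-block receiving this range
        starts[t] = offset
        offset += sizes[t]
    return tuple(starts[t] + x for t in range(k) for x in blocks[t])

# Example 1.2(a): 4215763 = 3142[1, 21, 132, 1]
print(inflate((3, 1, 4, 2), [(1,), (2, 1), (1, 3, 2), (1,)]))
# (4, 2, 1, 5, 7, 6, 3)
```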
###### Proposition 1.1 (Albert and Atkinson [2]).
Every permutation may be written as the inflation of a unique simple
permutation. Moreover, if $\pi$ can be written as
$\sigma[\alpha^{(1)},\,\alpha^{(2)},\,\dots,\,\alpha^{(m)}]$ where $\sigma$ is
simple and $m\geq 4$, then the $\alpha^{(i)}$s are unique.
### 1.5. Separable permutations
###### Definition 1.3.
Suppose $\pi$ and $\sigma$ are permutations of length $n$ and $m$
respectively. We define the direct sum (or simply, sum), using the operator
$\oplus$, and the skew sum, using the operator $\ominus$, of $\pi$ and
$\sigma$ as the permutations of length $m+n$ as follows:
$\pi\oplus\sigma=12[\pi,\sigma]\quad\text{and}\quad\pi\ominus\sigma=21[\pi,\sigma].$
###### Definition 1.4.
If a permutation is an inflation of 12 or 21, we call it sum decomposable and
skew sum decomposable respectively. If a permutation is not sum decomposable
we say it is sum indecomposable, and if it is not skew sum decomposable we say
it is skew-sum indecomposable.
###### Definition 1.5.
A permutation is separable if it can be obtained by repeatedly applying the
$\oplus$ and $\ominus$ operations on the permutation 1.
###### Example 1.3.
The permutation 587694231 is separable, since
$\displaystyle 587694231$ $\displaystyle=14325\ominus 4231$
$\displaystyle=(1\oplus 3214)\ominus(1\ominus 231)$
$\displaystyle=(1\oplus(321\oplus 1)\ominus(1\ominus(12\ominus 1)))$
$\displaystyle=(1\oplus((1\ominus 21)\oplus 1)\ominus(1\ominus(12\ominus 1)))$
$\displaystyle=(1\oplus((1\ominus(1\ominus 1))\oplus
1)\ominus(1\ominus((1\oplus 1)\ominus 1))).$
###### Example 1.4.
All permutations of length 3 are separable. The only permutations of length 4
that are not separable are 2413 and 3142.
###### Theorem 1.1 (folklore).
A permutation is separable if and only if it avoids 2413 and 3142.
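Separability can be checked by following Definition 1.5 directly: peel off a sum or skew-sum split and recurse on the two halves. The sketch below (our own helper) works because every separable permutation of length at least 2 splits at some cut point into a part that is entirely below, or entirely above, the rest.

```python
def reduction(seq):
    ranks = sorted(seq)
    return tuple(ranks.index(x) + 1 for x in seq)

def is_separable(perm):
    """True if perm can be built from 1 by direct and skew sums."""
    n = len(perm)
    if n == 1:
        return True
    for cut in range(1, n):
        left, right = perm[:cut], perm[cut:]
        if max(left) < min(right) or min(left) > max(right):
            # direct sum (left below right) or skew sum (left above right)
            return is_separable(reduction(left)) and is_separable(reduction(right))
    return False

print(is_separable((5, 8, 7, 6, 9, 4, 2, 3, 1)))  # True, as in Example 1.3
print(is_separable((2, 4, 1, 3)))                 # False
print(is_separable((3, 1, 4, 2)))                 # False
```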
###### Proposition 1.2 (Albert and Atkinson [2]).
If $\pi$ is an inflation of 12, say $\pi=12[\alpha,\,\beta]$, then $\alpha$
and $\beta$ are unique if $\alpha$ is sum indecomposable. The same holds with
12 replaced by 21 and “sum” replaced by “skew sum”.
###### Corollary 1.1.1.
All simple permutations of length at least 4 must contain the patterns 132,
213, 231 and 312.
###### Proof.
It is clear that simple permutations are not separable, so by Theorem 1.1,
they must contain at least one of 2413 or 3142, both of which contain the four
patterns of length three. ∎
### 1.6. Pattern/POP containment and avoidance
###### Definition 1.6.
A pattern is a permutation of length at least 2. We say that a permutation
$\pi$ contains a pattern $p$ if and only if there exists some subsequence
$\pi_{i_{1}}\,\pi_{i_{2}}\,\cdots\,\pi_{i_{k}}$ of $\pi$ where
$\text{red}(\pi_{i_{1}}\,\pi_{i_{2}}\,\cdots\,\pi_{i_{k}})=p$. That is,
$p_{j}<p_{\ell}$ if and only if $\pi_{i_{j}}<\pi_{i_{\ell}}$ for all
$j,\,\ell\in[k]$. Otherwise, we say that $\pi$ avoids $p$. If $P$ is a set of
patterns, we say that $\pi$ contains $P$ if $\pi$ contains any pattern in $P$.
Otherwise we say that $\pi$ avoids $P$.
A partially ordered pattern generalizes the notion of a pattern whereby the
order between certain elements do not have to be considered. We are left with
a partial order on the elements, which we can represent using a labelled
partially ordered set. Recall that a partial order is a binary relation $\leq$
over a set $P$ that is reflexive, antisymmetric and transitive. That is, for
all $a,b,c\in P$, the following hold:
1. 1.
$a\leq a$ (_reflexivity_);
2. 2.
If $a\leq b$ and $b\leq a$ then $a=b$ (_antisymmetry_);
3. 3.
If $a\leq b$ and $b\leq c$ then $a\leq c$ (_transitivity_).
A set $P$ with a partial order $\leq$ is called a partially ordered set
(poset), denoted $(P,\leq)$. We may write $b\geq a$ as an equivalent statement
to $a\leq b$ for any $a,\,b\in P$. We write $a<b$ to mean that $a\leq b$ and
that $a$ and $b$ are distinct.
###### Definition 1.7.
A partially ordered pattern (POP) $p$ of size $k$ is a poset with $k$ elements
labelled $1,\,2,\,\dots,\,k$. A POP can be expressed in one-line notation by
indicating its size and the minimal set of relations that defines the
respective poset.
###### Definition 1.8.
An $n$-permutation $\pi$ contains such a POP $p$ if and only if $\pi$ has a
subsequence $\pi_{i_{1}}\pi_{i_{2}}\cdots\pi_{i_{k}}$ such that
$\pi_{i_{j}}<\pi_{i_{m}}$ whenever $j<m$ in the poset. Otherwise, we say that
$\pi$ avoids $p$.
###### Example 1.5.
The pattern 3241 represented as a POP is the 4-element chain with its elements
labelled 1, 2, 3 and 4, where $4<2<1<3$. Note that the permutation 4213 is the
inverse of 3241.
Recall that a poset can be represented visually as a Hasse diagram. A Hasse
diagram of a finite poset is a visual representation of the elements and
relations in the poset, where only the covering relations are shown. Recall
that a covering relation in a poset $(P,\leq)$ is a binary relation $i\prec j$
for some $i$ and $j$ in $P$ where $i<j$ and there does not exist any $k\in P$
such that both $i<k$ and $k<j$ hold. A Hasse diagram uniquely determines the
partial order.
A Hasse diagram of a poset with $n$ elements can also be understood as a
simple directed graph $(V,E)$ with an implicit upward orientation where $V$ is
a set of $n$ vertices and $E$ is a set of ordered pairs of distinct elements
in $V$, i.e. $E\subseteq\\{(i,j)\mid i,j\in V,\,i\neq j\\}$, that satisfies
the following three conditions:
1. 1.
if $(i,j)$ is in $E$ then $(j,i)$ is not in $E$ (antisymmetry),
2. 2.
if $(i,j)$ and $(j,k)$ are in $E$ then $(i,k)$ is not in $E$ (transitive
reduction),
3. 3.
if $(i,j)$ is in $E$ then $i\leq j$ is a relation in the poset.
A POP is a labelled poset, and can therefore be represented visually as a
labelled Hasse diagram. That is, as a graph $(V,E)$ defined as above where the
vertices in $V$ are labelled $1,\,2,\,\dots,\,k$.
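The covering relations (and hence the Hasse diagram) can be computed from any generating set of relations by taking the transitive closure and then its transitive reduction; a sketch with our own helper names:

```python
def covering_relations(elements, less_than):
    """Given strict relations (a, b) meaning a < b, return the covering
    relations of the poset, i.e. the edges of its Hasse diagram."""
    closure = set(less_than)
    changed = True
    while changed:                      # transitive closure
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    # Keep (a, b) only if no element sits strictly between a and b.
    return {(a, b) for (a, b) in closure
            if not any((a, k) in closure and (k, b) in closure
                       for k in elements)}

# The chain 4 < 2 < 1 < 3 from Example 1.5.
print(sorted(covering_relations([1, 2, 3, 4], [(4, 2), (2, 1), (1, 3)])))
# [(1, 3), (2, 1), (4, 2)]
```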
###### Example 1.6.
The POP of size 4 where $1>3$ is illustrated in Figure 1. It represents the
patterns 2314, 2413, 3124, 3421, 3214, 3412, 4213, 4312, 4123, 4321, 4132,
4231. The permutation 3472615 contains 21 occurrences of the POP whereas
132456 avoids it.
Figure 1. The Hasse diagram representation of the POP in Example 1.6
###### Definition 1.9.
Let $P$ be a pattern, a set of patterns, or a POP. We denote $Av(P)$ as the
set of permutations that avoid $P$ (called the avoidance set of $P$), and
$Av_{n}(P)$ as the set of $n$-permutations that avoid $P$. That is,
$Av_{n}(P):=Av(P)\cap S_{n}$.
###### Definition 1.10.
Let $P_{1}$ and $P_{2}$ each be a set of patterns or a POP. We say that
$P_{1}$ and $P_{2}$ are Wilf-equivalent if and only if
${\left|Av_{n}(P_{1})\right|}={\left|Av_{n}(P_{2})\right|}$ for all $n\geq 1$.
It is not hard to check that containment is a partial order on any set of
permutations. In the literature, sets of permutations which are closed
downward under this order are called permutation classes, or sometimes just
classes. That is, $\mathcal{C}$ is a permutation class if and only if for any
$\pi\in\mathcal{C}$ and any $\sigma$ contained in $\pi$, we have
$\sigma\in\mathcal{C}$. If a permutation $\pi$ avoids a pattern $p$, then
every reduced subsequence of $\pi$ avoids $p$. In other words, every pattern
contained in $\pi$ avoids $p$. Therefore $Av(p)$ is a
permutation class. The same is true if $p$ is a set of patterns or a POP.
Observe that if $k_{P}$ is the length of the shortest pattern in a set of
patterns $P$, then all permutations of length less than $k_{P}$ avoid $P$.
This means that ${\left|Av_{n}(P)\right|}=n!$ for all $n<k_{P}$. Therefore it
suffices to enumerate $Av_{n}(P)$ for $n\geq k_{P}$ for every POP or set of
patterns $P$ discussed in subsequent sections.
### 1.7. Matrix representations of permutations
###### Definition 1.11.
For an $n$-permutation $\pi$, its permutation matrix is a binary $n\times n$
matrix, denoted $M(\pi)$ where
$M(\pi)_{n-i+1,j}=1\iff\pi_{j}=i.$
Moreover, its pattern matrix is an $n\times n$ matrix, denoted
$M^{\prime}(\pi)$, where
$M^{\prime}(\pi)_{n-i+1,j}=\begin{cases}i&\text{ if }\pi_{j}=i,\\\ 0&\text{
otherwise}.\end{cases}$
We may refer to the non-zero entries in a permutation matrix or pattern matrix
as points. Sometimes, we may omit displaying the 0s and the traditional
brackets if no confusion would arise.
###### Example 1.7.
Let $\pi=312$. Its permutation matrix is
$M(\pi)\quad=\quad\begin{pmatrix}1&0&0\\\ 0&0&1\\\
0&1&0\end{pmatrix}\quad=\quad\begin{matrix}1&&\\\ &&1\\\ &1&\end{matrix}$
and its pattern matrix is
$M^{\prime}(\pi)\quad=\quad\begin{pmatrix}3&0&0\\\ 0&0&2\\\
0&1&0\end{pmatrix}\quad=\quad\begin{matrix}3&&\\\ &&2\\\ &1&\end{matrix}.$
###### Definition 1.12.
The weight of a matrix is the number of non-zero entries it contains. We
denote the weight of a matrix $A$ by ${\left|A\right|}$.
###### Example 1.8.
The weight of the permutation matrix of $\pi$ is equal to the length of $\pi$.
The weight of any column or row of a permutation matrix is 1.
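Definitions 1.11 and 1.12 can be sketched as follows (helper names are ours); note the row flip, which places the largest value in the top row:

```python
def permutation_matrix(perm):
    """M(pi) per Definition 1.11: row n-i+1, column j holds 1 iff pi_j = i.
    Rows and columns are 0-indexed here."""
    n = len(perm)
    M = [[0] * n for _ in range(n)]
    for j, v in enumerate(perm):
        M[n - v][j] = 1
    return M

def weight(M):
    """Number of non-zero entries of the matrix."""
    return sum(1 for row in M for x in row if x)

M = permutation_matrix((3, 1, 2))
print(M)          # [[1, 0, 0], [0, 0, 1], [0, 1, 0]], matching Example 1.7
print(weight(M))  # 3, the length of the permutation
```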
### 1.8. Lattice matrices
###### Definition 1.13.
We call a matrix (or submatrix) void if it has no rows or no columns. A matrix
(or submatrix) is trivial if its weight is 0, and nontrivial otherwise. That
is, a trivial matrix is either void or is a zero matrix. All void matrices are
trivial.
If we know that a permutation contains a certain pattern, it might be helpful
to represent its permutation as a block matrix in order to better understand
the permutation. We will show an example before stating formal definitions:
Suppose we know that the $n$-permutation $\pi$ contains the pattern $p:=312$.
That is, there exist $i$, $j$ and $k$ where $1\leq i<j<k\leq n$ and
$\pi_{i}\,\pi_{j}\,\pi_{k}$ reduces to 312. The columns $i$, $j$ and $k$
partition the rest of the permutation matrix $M(\pi)$ into 4 (possibly
trivial) blocks of columns, and the rows $\pi_{i},\,\pi_{j}$ and $\pi_{k}$
partition the rest of the permutation matrix $M(\pi)$ into 4 (possibly
trivial) blocks of rows. This gives rise to another representation of $M(\pi)$
as a $7\times 7$ block matrix. In this representation we can find the $3\times
3$ matrix $M(p)$ interwoven with a $4\times 4$ block matrix
$(\alpha_{ij})_{i,j\in[4]}$ as depicted in Figure 2, with the following
alterations:
* •
the ones in columns $i,$ $j$ and $k$ are replaced by $\pi_{i}$, $\pi_{j}$ and
$\pi_{k}$ respectively,
in other words, we replace the submatrix corresponding to $M(p)$ with the pattern
matrix $M^{\prime}(p)$
* •
the zeroes in columns $i,$ $j$ and $k$ that are also in row $\pi_{i}$,
$\pi_{j}$ or $\pi_{k}$ are replaced by plus signs,
* •
the trivial blocks in columns $i,$ $j$ and $k$ are replaced by vertical bars,
* •
the trivial blocks in rows $\pi_{i}$, $\pi_{j}$ and $\pi_{k}$ are replaced by
horizontal bars, and finally,
* •
the conventional matrix brackets are omitted.
Note that we may also alter the ones, zeroes and $\alpha_{ij}$ blocks (where
$i,j\in[4]$) differently based on which properties of the permutation we are
trying to highlight. This figure is reminiscent of the lattice structure of
gridded window panes, so we call it the $p$-lattice matrix of $\pi$, or simply
a lattice matrix, and denote it by $L_{p}(\pi)$.
$\alpha_{11}$ | — | $\alpha_{12}$ | — | $\alpha_{13}$ | — | $\alpha_{14}$
---|---|---|---|---|---|---
——– | 3 | ——– | $+$ | ——– | $+$ | ——–
$\alpha_{21}$ | — | $\alpha_{22}$ | — | $\alpha_{23}$ | — | $\alpha_{24}$
——– | $+$ | ——– | $+$ | ——– | 2 | ——–
$\alpha_{31}$ | — | $\alpha_{32}$ | — | $\alpha_{33}$ | — | $\alpha_{34}$
——– | $+$ | ——– | 1 | ——– | $+$ | ——–
$\alpha_{41}$ | — | $\alpha_{42}$ | — | $\alpha_{43}$ | — | $\alpha_{44}$
Figure 2. The lattice matrix $L_{312}(\pi)$
In general, we may use the following definition:
###### Definition 1.14.
Consider a permutation $\pi$ on $n$ letters and choose $m$ indices $1\leq
i_{1}<i_{2}<\cdots<i_{m}\leq n$. Write $I=(i_{1},\,i_{2},\,\dots,\,i_{m})$ and
let $p:=\text{red}(\pi_{i_{1}}\pi_{i_{2}}\,\cdots\,\pi_{i_{m}})$. We proceed
to define the lattice matrix $L_{p}(\pi)$. Put $i_{0}=0$ and $i_{m+1}=n+1$. We
use the values $i_{1},\,i_{2},\,\dots,\,i_{m}$ to partition the column indices
into subintervals and the values
$\pi_{i_{1}},\,\pi_{i_{2}},\,\dots,\,\pi_{i_{m}}$ to partition the row indices
into subintervals.
A block in $L_{p}(\pi)$ is a (possibly trivial) contiguous block submatrix of
the permutation matrix $M(\pi)$ with its column and row indices each given by
a relevant subinterval defined above. To make this more explicit, we note that
$\pi(i_{p^{-1}(1)})<\pi(i_{p^{-1}(2)})<\cdots<\pi(i_{p^{-1}(m)})$.
Write $j_{k}:=\pi(i_{p^{-1}(k)})$. Put $j_{0}=0$ and $j_{m+1}=n+1$. Then
$M_{S\times T}$ where $S=[j_{s-1}+1,\,j_{s}-1]$ and $T=[i_{t-1}+1,\,i_{t}-1]$
is a block for all $s$ and $t$ in $[m+1]$. We label the block $M_{S\times T}$
as $\alpha_{s,t}$. Note that $\alpha_{s,t}$ is void if either $i_{t}=i_{t-1}+1$
or $j_{s}=j_{s-1}+1$. We may write $\alpha_{s,t}$ as $\alpha_{st}$ if no
confusion would arise. Note that $M_{i_{k},\pi(i_{k})}$, $M_{i_{k}\times T}$
and $M_{S\times j_{k}}$ may also be referred to as blocks for any $k\in[m]$
and subintervals $S$ and $T$ defined as above.
When depicting $L_{p}(\pi)$, we replace the zero entries in column $i_{k}$ by
a vertical line and the zero entries in row $j_{k}$ by a horizontal line for
every $k\in[m+1]$. We also omit the traditional parentheses around the entire
matrix $L_{p}(\pi)$.
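The block partition can be sketched numerically (our own helper; for simplicity the row blocks here are indexed from smallest to largest values, whereas the paper's figures draw them top-down): cut the positions at the occurrence and the values at the letters of the occurrence, and sort each remaining point into its cell.

```python
def lattice_blocks(perm, occ):
    """Given a permutation and the 1-based positions `occ` of an occurrence
    of a pattern, return the weights of the blocks of the lattice matrix:
    column blocks are cut by the positions in `occ`, row blocks by the
    values at those positions."""
    n, m = len(perm), len(occ)
    i_cuts = [0] + list(occ) + [n + 1]                         # position cuts
    j_cuts = [0] + sorted(perm[i - 1] for i in occ) + [n + 1]  # value cuts
    w = [[0] * (m + 1) for _ in range(m + 1)]
    for pos in range(1, n + 1):
        if pos in occ:
            continue                    # pattern points are not in any block
        val = perm[pos - 1]
        t = next(t for t in range(1, m + 2) if i_cuts[t - 1] < pos < i_cuts[t])
        s = next(s for s in range(1, m + 2) if j_cuts[s - 1] < val < j_cuts[s])
        w[s - 1][t - 1] += 1
    return w

# 42531 contains 312 as the subsequence 4 2 3 at positions (1, 2, 4).
w = lattice_blocks((4, 2, 5, 3, 1), (1, 2, 4))
print(sum(map(sum, w)))  # 2: the points not in the occurrence (5 and 1)
```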
###### Proposition 1.3.
It is easy to deduce the following properties of the lattice matrix
$L_{p}(\pi)$ defined above:
1. 1.
$(\alpha_{s,t})_{a,b}=M_{j_{s-1}+a,\,i_{t-1}+b}$ for $a\in[j_{s}-j_{s-1}-1]$
and $b\in[i_{t}-i_{t-1}-1]$
2. 2.
Each horizontal (respectively, vertical) bar is either void, or is a single
row (respectively, column) of zeroes.
3. 3.
All block matrices in the same row (respectively, column) of the lattice
matrix have the same number of rows (respectively, columns).
4. 4.
Any square block submatrix of the lattice matrix has the same total number of
rows as columns.
We would like to describe the relationships between points in different blocks
in a lattice matrix precisely. The following definitions provide an intuitive
way to do so.
###### Definition 1.15.
Let $p$ and $\pi$ be permutations of length $m$ and $n$ respectively where
$m\leq n$ and $\pi$ contains $p$. Let $L_{p}(\pi)$ be the $p$-lattice matrix
of $\pi$ with its blocks denoted by $\alpha_{ij}$ for $i,\,j\in[m+1]$.
1. (a)
For the points in $L_{p}(\pi)$ that correspond to the pattern $p$, we define a
point being adjacent to a block in a natural way. For example, in Figure 2,
the point labelled 1 is adjacent to the blocks
$\alpha_{32},\,\alpha_{33},\,\alpha_{42}$ and $\alpha_{43}$ while the point
labelled 2 is adjacent to the blocks $\alpha_{23},\,\alpha_{24},\,\alpha_{33}$
and $\alpha_{34}$. We note that every point is adjacent to exactly four
blocks.
2. (b)
We use the four cardinal directions, north, south, east and west, to
indicate where one point lies in relation to another in the visual
representation of the permutation matrix $M(\pi)$. Explicitly, a point is
north (respectively, south) of another point if and only if the row index in
$M(\pi)$ of the former point is smaller than that of the latter, and is west
(respectively, east) of another point if and only if the column index in
$M(\pi)$ of the former point is smaller than that of the latter.
3. (c)
For $i_{1},i_{2}\in[m+1]$, we say that $\alpha_{i_{1}j}$ is to the left
(respectively, to the right) of $\alpha_{i_{2}j}$ if and only if all the
points in $\alpha_{i_{1}j}$ are west (respectively, east) of all the points in
$\alpha_{i_{2}j}$.
4. (d)
For $j_{1},j_{2}\in[m+1]$, we say that $\alpha_{ij_{1}}$ is superior
(respectively, inferior) to $\alpha_{ij_{2}}$ if and only if all the points in
$\alpha_{ij_{1}}$ are north (respectively, south) of all the points in
$\alpha_{ij_{2}}$.
5. (e)
We say that $\alpha_{ij}$ is leftmost (respectively, rightmost) if and only if
the first (respectively, last) column of $\alpha_{ij}$ is nontrivial.
6. (f)
We say that $\alpha_{ij}$ is topmost (respectively, bottommost) if and only if
the first (respectively, last) row of $\alpha_{ij}$ is nontrivial.
## 2\. Permutations avoiding $\lambda$
Gao and Kitaev [5] observed that the POP $\lambda$ of size 4 where $1>2$
and $1>4$ (illustrated in Figure 3) and the set of patterns
$\mathcal{P}=\\{2413,2431,4213,3412,3421,4231,4321,4312\\}$
are Wilf-equivalent. This was done by proving the recursive equation
${\left|Av_{n}(\lambda)\right|}=3{\left|Av_{n-1}(\lambda)\right|}-2{\left|Av_{n-2}(\lambda)\right|}+2{\left|Av_{n-3}(\lambda)\right|},$
which corresponds to the OEIS sequence A111281. In this section, we will give
a new proof that $\lambda$ and $\mathcal{P}$ are Wilf-equivalent as well as
provide a new recursive formula for the OEIS sequence. We also analyse each
avoidance set in detail and construct an explicit bijection between them.
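The recursive equation above can be cross-checked against the initial terms of A111281 listed in Table 1 (a small sketch):

```python
def a111281(n_max):
    """First n_max terms of |Av_n(lambda)| via the linear recursion
    a(n) = 3a(n-1) - 2a(n-2) + 2a(n-3), with a(1..3) = 1, 2, 6."""
    a = [1, 2, 6]
    while len(a) < n_max:
        a.append(3 * a[-1] - 2 * a[-2] + 2 * a[-3])
    return a[:n_max]

print(a111281(8))  # [1, 2, 6, 16, 40, 100, 252, 636]
```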
Figure 3. The POP $\lambda$
First, we show that both avoidance sets have only finitely many simple
permutations. Using this fact, we analyse the possible inflations of these
simple permutations in each avoidance set and derive a method to construct
each set recursively. A recursively-defined bijection on the two sets follows
immediately from this analysis.
In this section, we use the symbol $I_{k}$ to denote the identity permutation
$1\,2\,3\,\cdots\,k$ for all $k\geq 1$.
### 2.1. Structure of permutations avoiding $\lambda$
###### Lemma 2.1.
The only simple permutations that avoid $\lambda$ are 12, 21 and 2413.
###### Proof.
It is clear that 2413 and all simple permutations of length 3 or less avoid
$\lambda$. Consider the permutation matrix of a simple permutation $\pi$ with
length at least $4$. It must contain the pattern $312$ by Corollary 1.1.1, say
$1\leq i<j<k\leq n$ where $\text{red}(\pi_{i}\pi_{j}\pi_{k})=312$. We can then
consider the lattice matrix $L_{312}(\pi)$ which is illustrated in Figure 2.
It suffices to prove that $\alpha_{31}$ has weight 1, while the other blocks
are trivial.
Upon inspection, it is clear that if any of $\alpha_{11},\,\alpha_{13}$ or
$\alpha_{ij}$, where $i,\,j\in[2,4]$, were not trivial, then the permutation
would contain $\lambda$. For the reader’s convenience, we reproduce the figure
with those $\alpha_{ij}$’s omitted in Figure 4. We now proceed to show that
the remaining blocks, except for $\alpha_{31}$, are trivial:
 | — | $\alpha_{12}$ | — | | — | $\alpha_{14}$
---|---|---|---|---|---|---
——– | 3 | ——– | $+$ | ——– | $+$ | ——–
$\alpha_{21}$ | — | | — | | — |
——– | $+$ | ——– | $+$ | ——– | 2 | ——–
$\alpha_{31}$ | — | | — | | — |
——– | $+$ | ——– | 1 | ——– | $+$ | ——–
$\alpha_{41}$ | — | | — | | — |
Figure 4. The lattice matrix $L_{312}(\pi)$, with some blocks omitted. The
omitted blocks must be trivial for $\pi$ to avoid $\lambda$.
1. (a)
Suppose $\alpha_{41}$ is not trivial. It cannot be leftmost, since otherwise
the permutation would be sum decomposable. However, if $\alpha_{21}$ or
$\alpha_{31}$ contains a point west of a point in $\alpha_{41}$, then $\pi$
would contain $\lambda$ (consider that point in $\alpha_{21}$ or $\alpha_{31}$,
together with the point in $\alpha_{41}$ and the points labelled 3 and 1). So
$\alpha_{41}$ is trivial.
2. (b)
Suppose $\alpha_{21}$ is not trivial. Since it is adjacent to the block
labelled 3, it cannot be rightmost in its column by simpleness. However, if
$\alpha_{31}$ contains a point east of a point of $\alpha_{21}$, then $\pi$
would contain $\lambda$ (consider these points in $\alpha_{21}$ and $\alpha_{31}$,
together with the points labelled 3 and 1). So $\alpha_{21}$ is trivial.
3. (c)
Suppose $\alpha_{14}$ is not trivial. If $\alpha_{14}$ is superior to
$\alpha_{12}$, then the permutation would be sum decomposable. So
$\alpha_{12}$ must contain a point north of some point of
$\alpha_{14}$. However, $\pi$ would then contain $\lambda$ (consider these points in
$\alpha_{12}$ and $\alpha_{14}$, together with the points labelled 1 and 2). So
$\alpha_{14}$ is trivial.
4. (d)
Observe that $\alpha_{12}$ is adjacent to the block labelled 3. Since the
blocks in the same row or column as $\alpha_{12}$ are trivial, $\alpha_{12}$
together with the point labelled 3 would form a nontrivial proper interval
unless $\alpha_{12}$ is trivial. So $\alpha_{12}$ is trivial.
Finally, since all the blocks in the same row or column as $\alpha_{31}$ are
trivial, $\alpha_{31}$ can have weight at most 1 by simpleness. We have thus
eliminated the possibility of there being a simple permutation of length at
least 4 avoiding $\lambda$ that is not 2413, so our list is exhaustive. ∎
###### Lemma 2.2.
For $n\geq 4$, there are $n$ skew sum decomposable permutations in
$Av_{n}(\lambda)$, namely $21[I_{n-2},12]$, $21[I_{n-2},21]$ and
$2431[I_{\ell},\,I_{n-\ell-2},\,1,\,1]$ where $\ell\in[n-2]$.
###### Proof.
Let $\pi=21[\alpha,\beta]$ be an $n$-permutation avoiding $\lambda$. It is not
hard to see that if ${\left|\beta\right|}\geq 3$, then $\pi$ contains
$\lambda$. So ${\left|\beta\right|}=1$ or 2:
1. 1)
Suppose ${\left|\beta\right|}=2$. If $\alpha$ contains a descent, then the
elements that make up the descent, together with $\beta$, make up $\lambda$. So
$\alpha$ must be an increasing sequence, and indeed both $21[I_{n-2},12]$ and
$21[I_{n-2},21]$ avoid $\lambda$.
2. 2)
Suppose ${\left|\beta\right|}=1$. Then $\pi$ avoids $\lambda$ if and only if
$\alpha$ avoids the POP of size 3 where $1>2$. This is exactly when the first
$n-2$ elements of $\alpha$ are increasing. Since $\alpha$ is assumed to be
skew sum indecomposable (for uniqueness), the last element of $\alpha$ cannot
be $1$. Therefore there are $n-2$ choices for the last element of $\alpha$,
and only one way to order the initial elements. Indeed, for all $1\leq\ell\leq
n-2$, the following permutation avoids $\lambda$:
$\displaystyle\pi$
$\displaystyle=21[12\cdots\ell(\ell+2)\cdots(n-1)(\ell+1),1]$
$\displaystyle=21[132[I_{\ell},\,I_{n-\ell-2},1],1]$
$\displaystyle=2431[I_{\ell},\,I_{n-\ell-2},\,1,\,1]$
Therefore, there are $(n-2)+2=n$ skew sum decomposable $n$-permutations
avoiding $\lambda$ in total. ∎
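The count in Lemma 2.2 can be verified by brute force for small $n$ (a sketch; the helper names are ours):

```python
from itertools import permutations, combinations

def contains_lambda(perm):
    """lambda is the POP of size 4 with 1 > 2 and 1 > 4."""
    return any(a > b and a > d for a, b, c, d in combinations(perm, 4))

def is_skew_decomposable(perm):
    """True if perm = 21[alpha, beta] for some nonempty alpha, beta."""
    return any(min(perm[:cut]) > max(perm[cut:])
               for cut in range(1, len(perm)))

for n in range(4, 8):
    count = sum(1 for p in permutations(range(1, n + 1))
                if not contains_lambda(p) and is_skew_decomposable(p))
    print(n, count)  # count equals n for each n
```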
###### Lemma 2.3.
For $n\geq 4$, there are $n-3$ permutations in $Av_{n}(\lambda)$ that are
inflations of 2413. Specifically, they are of the form
$2413[I_{\ell},\,I_{n-\ell-2},\,1,\,1]$ for $\ell\in[n-3]$.
###### Proof.
Let $\pi:=2413[\alpha,\beta,\gamma,\delta]$ be an $n$-permutation avoiding
$\lambda$. We will show that ${\left|\gamma\right|}={\left|\delta\right|}=1$,
while $\alpha$ and $\beta$ are increasing sequences but can be of variable
length.
Suppose ${\left|\gamma\right|}\geq 2$ or ${\left|\delta\right|}\geq 2$. Then
$\pi$ contains $\lambda$ (consider one point from $\beta$ and three points
total from $\gamma$ and $\delta$). Now suppose that $\alpha$ or $\beta$
contains a descent. Then the two elements that make up the descent, together
with one element from $\gamma$ and one element from $\delta$ make $\lambda$.
Therefore, ${\left|\gamma\right|}={\left|\delta\right|}=1$ and $\alpha$ and
$\beta$ are increasing.
Finally, it is not hard to see that for all $\ell\in[n-3]$, the permutation
$2413[I_{\ell},\,I_{n-\ell-2},\,1,\,1]$ avoids $\lambda$. So there are $n-3$
inflations of 2413 avoiding $\lambda$. ∎
###### Theorem 2.4.
For all $n\geq 4$,
$\displaystyle{\left|Av_{n}(\lambda)\right|}=2n-3+{\left|Av_{n-1}(\lambda)\right|}+\sum_{i=2}^{n-1}(2i-3){\left|Av_{n-i}(\lambda)\right|}$.
###### Proof.
Lemmas 2.1, 2.2 and 2.3 together imply that for $n\geq 4$ there are $2n-3$
sum indecomposable permutations in $Av_{n}(\lambda)$; this count also holds
for $n=2$ and 3 (namely 21, and 231, 312, 321), while the single sum
indecomposable permutation of length 1 contributes with coefficient $1$.
Moreover, a sum decomposable permutation $12[\alpha,\beta]$ avoids $\lambda$
if and only if $\alpha$ and $\beta$ both avoid $\lambda$. Therefore, there are
$\displaystyle{\left|Av_{n-1}(\lambda)\right|}+\sum_{i=2}^{n-1}(2i-3){\left|Av_{n-i}(\lambda)\right|}$
sum decomposable permutations of the form $12[\alpha,\beta]$ in
$Av_{n}(\lambda)$, where $\alpha$ is sum indecomposable. Thus we get the
recursive formula for ${\left|Av_{n}(\lambda)\right|}$. ∎
### 2.2. Structure of permutations avoiding $\mathcal{P}$
Next, we demonstrate the $n$-permutations of $Av(\mathcal{P})$ explicitly.
Recall that
$\mathcal{P}=\\{2413,\,2431,\,4213,\,3412,\,3421,\,4231,\,4321,\,4312\\}.$
###### Lemma 2.5.
The only simple permutations that avoid $\mathcal{P}$ are 12, 21, 3142 and
41352.
###### Proof.
It is clear that 3142, 41352 and all simple permutations of length $3$ or less
avoid $\mathcal{P}$. Consider a simple permutation $\pi$ of length at least
$4$ avoiding $\mathcal{P}$. Since $\mathcal{P}$ contains 2413, $\pi$ avoids
2413 and must contain $3142$, since otherwise it would be a separable
permutation and not simple by Theorem 1.1.
We present its lattice matrix $L_{3142}(\pi)$ in Figure 5, with some
alterations explained in the caption. It suffices to show that $\alpha_{33}$
can have weight at most 1, while the remaining $\alpha_{ij}$ are trivial:
4312 | 3412 | 2431 | $\alpha_{14}$ | $\alpha_{15}$
---|---|---|---|---
4312 | 3412 | $\alpha_{23}$ | 2431 | 2413
3412 | 4312 | $\alpha_{33}$ | 3421 | 3412
2413 | 4231 | $\alpha_{43}$ | 3412 | 3421
$\alpha_{51}$ | $\alpha_{52}$ | 4213 | 3412 | 3421
Figure 5. The lattice matrix $L_{3142}(\pi)$ with some $\alpha_{ij}$ replaced
by a pattern in $\mathcal{P}$ that $\pi$ would contain if that $\alpha_{ij}$
were nontrivial, for $i,j\in[5]$. For example, if $\alpha_{12}$ were
nontrivial, then $\pi$ would contain 3412.
We proceed to show that $\alpha_{ij}$ must be trivial for all $i,j\in[5]$,
except for $i=j=3$.
1. (a)
Suppose $\alpha_{52}$ were nontrivial. Since it is adjacent to the point 1,
the block $\alpha_{51}$ cannot be trivial and must be superior to
$\alpha_{52}$ by simpleness. However, this would mean the inclusion of the
pattern $2413$ (consider any submatrix containing
$\alpha_{51}\,3\,\alpha_{52}\,1$).
2. (b)
Since $\alpha_{5j}$ are trivial for all $j\in[2,5]$, the block $\alpha_{51}$
must be trivial as well, for otherwise $\pi$ would be sum decomposable.
3. (c)
Suppose $\alpha_{14}$ were nontrivial. Since it is adjacent to the point 4,
the block $\alpha_{15}$ cannot be trivial and must be inferior to
$\alpha_{14}$ by simpleness. However, this would mean the inclusion of the
pattern $2413$ (consider any submatrix containing
$4\,\alpha_{14}\,2\,\alpha_{15}$).
4. (d)
Suppose $\alpha_{23}$ were nontrivial. Since it is adjacent to the point 4, it
cannot be rightmost by simpleness. However, this would mean the inclusion of
the pattern $3412$ (consider any submatrix containing
$3\,\alpha_{23}\,\alpha_{33}\,2$ or $3\,\alpha_{23}\,\alpha_{43}\,2$).
5. (e)
Suppose $\alpha_{43}$ were nontrivial. Since it is adjacent to the point 1, it
cannot be leftmost by simpleness. Then $\alpha_{33}$ must be nontrivial and
lie to the left of $\alpha_{43}$. However, this would mean the inclusion of
the pattern $4312$ (consider any submatrix containing
$3\,\alpha_{33}\,\alpha_{43}\,2$).
Since all the blocks in the same row or column as $\alpha_{33}$ are trivial,
it can have weight at most 1 by simpleness. Moreover, since $\pi$ has length
at least 5 and all the other $\alpha_{ij}$s are trivial, $\alpha_{33}$ cannot
be trivial, and inflating the central block of 3142 by a single point yields
41352. We have thus eliminated the possibility of there being a simple
permutation of length at least 5 avoiding $\mathcal{P}$ that is not 41352, so
our list is exhaustive. ∎
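For small lengths, this classification can be confirmed by brute force. The sketch below (Python; the helper names are ours, not from the text) tests simplicity via the interval criterion and checks pattern containment directly:

```python
from itertools import combinations, permutations

# The eight classical patterns in the set P of Section 2.2.
PATTERNS = [(2, 4, 1, 3), (2, 4, 3, 1), (4, 2, 1, 3), (3, 4, 1, 2),
            (3, 4, 2, 1), (4, 2, 3, 1), (4, 3, 2, 1), (4, 3, 1, 2)]

def contains(pi, pat):
    """True if pi contains pat as a classical pattern."""
    return any(
        all((a < b) == (p < q)
            for (a, p), (b, q) in combinations(zip(sub, pat), 2))
        for sub in combinations(pi, len(pat))
    )

def is_simple(pi):
    """A permutation is simple if no window of 2..n-1 consecutive
    positions carries an interval of consecutive values."""
    n = len(pi)
    for i in range(n):
        for j in range(i + 1, n):
            if j - i + 1 < n and max(pi[i:j + 1]) - min(pi[i:j + 1]) == j - i:
                return False
    return True

simples = {n: [pi for pi in permutations(range(1, n + 1))
               if is_simple(pi)
               and not any(contains(pi, p) for p in PATTERNS)]
           for n in range(4, 7)}
assert simples[4] == [(3, 1, 4, 2)]
assert simples[5] == [(4, 1, 3, 5, 2)]
assert simples[6] == []
```

Note that 2413 itself does not appear at length 4 because it lies in $\mathcal{P}$.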
###### Lemma 2.6.
There are $4$ skew sum decomposable $n$-permutations in $Av(\mathcal{P})$.
Namely, they are $21[1,\,I_{n-1}]$, $312[1,\,I_{n-3},\,21]$, $21[I_{n-1},\,1]$
and $231[21,\,I_{n-3},\,1]$.
###### Proof.
Let $\pi=21[\alpha,\beta]$ be a permutation avoiding $\mathcal{P}$. We have 3
cases:
1. 1)
If ${\left|\alpha\right|},{\left|\beta\right|}\geq 2$, then $\pi$ contains
$3412,\,3421,\,4312$ or $4321$.
2. 2)
Suppose ${\left|\alpha\right|}=1$ and ${\left|\beta\right|}\geq 2$. Then $\pi$
avoids $\mathcal{P}$ if and only if $\beta$ avoids $213,\,231,\,321,\,312$.
That is, $\pi$ avoids $\mathcal{P}$ if and only if all but the last two terms
of $\beta$ are strictly increasing.
There are only two such permutations, namely $21[1,\,I_{n-1}]$ and
$312[1,\,I_{n-3},\,21]$.
3. 3)
Suppose ${\left|\beta\right|}=1$ and ${\left|\alpha\right|}\geq 2$. Then $\pi$
avoids $\mathcal{P}$ if and only if $\alpha$ avoids
$\text{red}(243)=132,\quad\text{red}(342)=231,\quad\text{red}(423)=312\quad\text{and}\quad\text{red}(432)=321.$
That is, if and only if $\alpha$ is increasing except possibly for a descent
between its first two terms.
There are only two such permutations, namely $21[I_{n-1},\,1]$ and
$231[21,\,I_{n-3},\,1]$.
∎
###### Lemma 2.7.
There are $n-3$ inflations of 3142 of length $n$ avoiding $\mathcal{P}$.
Specifically, they are of the form $3142[1,\,I_{\ell},\,I_{n-\ell-2},\,1]$
where $\ell\in[n-3]$.
###### Proof.
Let $\pi=3142[\alpha,\beta,\gamma,\delta]$ be a permutation avoiding
$\mathcal{P}$. We will show that
${\left|\alpha\right|}={\left|\delta\right|}=1$, while $\beta$ and $\gamma$
are increasing sequences of variable length.
1. a.
If $\alpha$ contains an ascent, then $\pi$ contains 3412 (consider
$312[\alpha,\beta,\delta]$).
On the other hand, if $\alpha$ contains a descent, then $\pi$ contains 4312
(consider $312[\alpha,\beta,\delta]$).
2. b.
If $\delta$ contains an ascent, then $\pi$ contains 3412 (consider
$342[\alpha,\gamma,\delta]$).
On the other hand, if $\delta$ contains a descent, then $\pi$ contains 3421
(consider the subpermutation $342[\alpha,\gamma,\delta]$).
3. c.
If $\beta$ contains a descent, then $\pi$ contains 4213 (consider
$312[\alpha,\beta,\delta]$).
If $\gamma$ contains a descent, then $\pi$ contains 2431 (consider
$342[\alpha,\gamma,\delta]$). Therefore $\beta$ and $\gamma$ are increasing.
Finally, it is not hard to see that for all $\ell\in[n-3]$, the permutation
$3142[1,\,I_{\ell},\,I_{n-\ell-2},\,1]$ avoids $\mathcal{P}$. Therefore, there
are $n-3$ inflations of 3142 in $Av_{n}(\mathcal{P})$. ∎
###### Lemma 2.8.
There are $n-4$ inflations of 41352 of length $n$ avoiding $\mathcal{P}$.
Specifically, they are of the form $41352[1,I_{\ell},1,I_{n-\ell-3},1]$ where
$\ell\in[n-4]$.
###### Proof.
Let $41352[\alpha,\beta,\gamma,\delta,\zeta]$ be a permutation avoiding
$\mathcal{P}$. We will show that
${\left|\alpha\right|}={\left|\gamma\right|}={\left|\zeta\right|}=1$, while
$\beta$ and $\delta$ are both increasing sequences of variable length.
1. a.
If $\alpha$ contains an ascent, then $\pi$ contains 3412 (consider
$413[\alpha,\beta,\gamma]$).
If $\alpha$ contains a descent, then $\pi$ contains 4312 (consider
$413[\alpha,\beta,\gamma]$).
Therefore ${\left|\alpha\right|}=1$.
2. b.
If $\gamma$ contains an ascent, then $\pi$ contains 4231 (consider
$432[\alpha,\gamma,\zeta]$).
If $\gamma$ contains a descent, then $\pi$ contains 4321 (consider
$432[\alpha,\gamma,\zeta]$).
Therefore ${\left|\gamma\right|}=1$.
3. c.
If $\zeta$ contains an ascent, then $\pi$ contains 4312 (consider
$432[\alpha,\gamma,\zeta]$).
If $\zeta$ contains a descent, then $\pi$ contains 4321 (consider
$432[\alpha,\gamma,\zeta]$).
Therefore ${\left|\zeta\right|}=1$.
4. d.
If $\beta$ contains a descent, then $\pi$ contains 4213 (consider
$412[\alpha,\beta,\zeta]$).
Therefore $\beta$ is increasing.
5. e.
If $\delta$ contains a descent, then $\pi$ contains 2431 (consider
$452[\alpha,\delta,\zeta]$).
Therefore $\delta$ is increasing.
Finally, it is not hard to see that for all $\ell\in[n-4]$, the permutation
$41352[1,I_{\ell},1,I_{n-\ell-3},1]$ avoids $\mathcal{P}$. Therefore, there
are $n-4$ inflations of 41352 in $Av_{n}(\mathcal{P})$. ∎
###### Theorem 2.9.
For all $n\geq 4$,
$\displaystyle{\left|Av_{n}(\mathcal{P})\right|}=2n-3+{\left|Av_{n-1}(\mathcal{P})\right|}+\sum_{i=2}^{n-1}(2i-3){\left|Av_{n-i}(\mathcal{P})\right|}$.
###### Proof.
From Lemmas 2.5, 2.6, 2.7 and 2.8, we can easily see that for $n\geq 4$ there
are $2n-3$ sum indecomposable $n$-permutations in $Av(\mathcal{P})$; this
count also holds for $n=2$ and 3 (namely 21, and 231, 312, 321), while the
single sum indecomposable permutation of length 1 contributes with coefficient
$1$. Moreover, a sum decomposable $n$-permutation $12[\alpha,\beta]$ avoids
$\mathcal{P}$ if and only if $\alpha$ and $\beta$ both avoid $\mathcal{P}$, so
there are
$\displaystyle{\left|Av_{n-1}(\mathcal{P})\right|}+\sum_{i=2}^{n-1}(2i-3){\left|Av_{n-i}(\mathcal{P})\right|}$
permutations of the form $12[\alpha,\beta]$ in $Av_{n}(\mathcal{P})$, where
$\alpha$ is sum indecomposable. Thus we obtain the recursive formula. ∎
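As a sanity check, $Av_{n}(\mathcal{P})$ can be counted by brute force for small $n$ and compared with the recursion; note that in the sum the length-1 indecomposable block contributes ${\left|Av_{n-1}(\mathcal{P})\right|}$ with coefficient $1$, while $2i-3$ applies for blocks of length $i\geq 2$. A sketch in Python (helper names are ours):

```python
from itertools import combinations, permutations

# The eight classical patterns of Section 2.2.
PATTERNS = [(2, 4, 1, 3), (2, 4, 3, 1), (4, 2, 1, 3), (3, 4, 1, 2),
            (3, 4, 2, 1), (4, 2, 3, 1), (4, 3, 2, 1), (4, 3, 1, 2)]

def contains(pi, pat):
    """True if pi contains pat as a classical pattern."""
    return any(
        all((a < b) == (p < q)
            for (a, p), (b, q) in combinations(zip(sub, pat), 2))
        for sub in combinations(pi, len(pat))
    )

def av(n):
    """Brute-force size of Av_n(P)."""
    return sum(1 for pi in permutations(range(1, n + 1))
               if not any(contains(pi, pat) for pat in PATTERNS))

a = {n: av(n) for n in range(1, 8)}
for n in range(4, 8):
    # i = 1 contributes a[n-1] with coefficient 1; 2i - 3 applies for i >= 2
    rhs = 2 * n - 3 + a[n - 1] + sum((2 * i - 3) * a[n - i]
                                     for i in range(2, n))
    assert a[n] == rhs
```

For instance, at $n=4$ this gives $a(4)=24-8=16$, since a 4-permutation contains a pattern of $\mathcal{P}$ exactly when it equals one.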
### 2.3. The bijection
Having thoroughly analysed the structure of the two avoidance sets, we are
ready to construct the bijection explicitly:
###### Theorem 2.10.
Define $g$ to be a function that maps
$\displaystyle 1\mapsto 1,\qquad 321\mapsto 321,\qquad 312\mapsto 231,$
$\displaystyle 21[I_{k},\,1]\mapsto 21[1,\,I_{k}],\qquad
21[I_{k},\,12]\mapsto 21[I_{k+1},\,1],\qquad
21[I_{k+1},\,21]\mapsto 231[21,\,I_{k},\,1],$
$\displaystyle 231[I_{k},\,21,\,1]\mapsto 312[1,\,I_{k},\,21],$
$\displaystyle 2431[I_{k},\,I_{j+1},\,1,\,1]\mapsto
41352[1,\,I_{k},\,1,\,I_{j},\,1],\qquad 2413[I_{k},\,I_{j},\,1,\,1]\mapsto
3142[1,\,I_{k},\,I_{j},\,1]$
for all $k,j\geq 1$. Define $f:Av(\lambda)\rightarrow Av(\mathcal{P})$ by
$\displaystyle f(\pi)=\begin{cases}g(\pi)&\text{if }\pi\text{ is sum
indecomposable}\\\ 12[g(\alpha),f(\beta)]&\text{if
}\pi=12[\alpha,\beta],\text{ and }\alpha\text{ is sum
indecomposable}.\end{cases}$
Then $f$ is a bijection. In fact, $f$ restricted to $Av_{n}(\lambda)$ is a
bijection onto $Av_{n}(\mathcal{P})$.
###### Proof.
From the lemmas in the previous two sections, it is clear that $g$ maps the
sum indecomposable permutations in $Av_{n}(\lambda)$ to the sum indecomposable
permutations in $Av_{n}(\mathcal{P})$. Moreover $f$ maps permutations in
$Av_{n}(\lambda)$ into $Av_{n}(\mathcal{P})$ by the proofs of Theorem 2.4 and
Theorem 2.9, and it is clear that $f$ is an injection. Since
${\left|Av_{n}(\lambda)\right|}={\left|Av_{n}(\mathcal{P})\right|}$, $f$
restricted to $Av_{n}(\lambda)$ is a bijection onto $Av_{n}(\mathcal{P})$ for
all $n\geq 1$. Thus $f$ is a bijection from $Av(\lambda)$ to
$Av(\mathcal{P})$. ∎
## 3\. $Q_{k}$ and two connections to classical combinatorial objects
###### Definition 3.1.
Let $Q_{k}$ be the POP with $k$ elements where $1>t$ for $t\in[2,k]$. The
Hasse diagram of $Q_{k}$ is the following:
Figure 6. The POP $Q_{k}$
Gao and Kitaev [5] enumerated the avoidance set of $Q_{k}$ in Theorem 2 of
their paper and observed that the sequences
${\left|Av_{n}(Q_{4})\right|}_{n\geq 1}$ and
${\left|Av_{n}(Q_{5})\right|}_{n\geq 1}$ are listed in the OEIS database as
A025192 and A084509 respectively. These sequences enumerate the 2-ary shrub
forests of $n$ heaps avoiding the patterns 231, 312 and 321, and the ground
state 3-ball juggling sequence of period $n$ respectively. These objects will
be defined in the following sections.
We show that there are intimate connections between the relevant sets by
constructing explicit bijections. In fact, we construct an explicit bijection
from $Av_{n}(Q_{k})$ to the set of ground state $(k-2)$-ball juggling
sequences of period $n$, which yields a bijection between the original two
sets as a special case.
The study that led us to this bijection also produced a new way of enumerating
$Av_{n}(Q_{k})$ using the concept of matrix permanents.
### 3.1. Enumeration of $Av_{n}(Q_{k})$
We will enumerate $Av_{n}(Q_{k})$ for $k,\,n\geq 1$ using the concept of
matrix permanents. Recall that the permanent is defined on square matrices and
is similar to the definition of the determinant of a matrix, where the signs
of the summands are all positive instead of alternating. We restate Gao and
Kitaev’s proof first as a reference for comparison.
###### Theorem 3.1 (Gao and Kitaev (2019) [5]).
For $n\geq k$, ${\left|Av_{n}(Q_{k})\right|}=(k-1)!\times(k-1)^{n-k+1}$.
###### Proof.
We proceed by induction. It is clear that all $(k-1)$-permutations avoid
$Q_{k}$, so ${\left|Av_{k-1}(Q_{k})\right|}=(k-1)!$. For $n\geq k$, $\pi$ is
an $n$-permutation avoiding $Q_{k}$ if and only if $n\in\pi_{[n-k+2,\,n]}$ and
the $(n-1)$-permutation obtained from removing $n$ from the permutation $\pi$
avoids $Q_{k}$. So
${\left|Av_{n}(Q_{k})\right|}=(k-1)\times{\left|Av_{n-1}(Q_{k})\right|}$. The
desired formula can then be easily obtained from this recursion via induction
on $n$. ∎
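The closed formula is easy to check by brute force for small parameters. In the sketch below (Python, our helper names), an occurrence of $Q_{k}$ is an entry with at least $k-1$ smaller entries to its right:

```python
from itertools import permutations
from math import factorial

def avoids_Qk(pi, k):
    """pi avoids Q_k iff no entry has k - 1 smaller entries to its right."""
    return all(sum(y < x for y in pi[i + 1:]) <= k - 2
               for i, x in enumerate(pi))

for k in (3, 4, 5):
    for n in range(k, 8):
        count = sum(avoids_Qk(pi, k)
                    for pi in permutations(range(1, n + 1)))
        assert count == factorial(k - 1) * (k - 1) ** (n - k + 1)
```

For example, at $k=4$ and $n=4$ the only way to contain $Q_{4}$ is to place the value 4 first, so the count is $24-6=18=3!\times 3$.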
###### Definition 3.2.
The permanent of an $n\times n$ matrix $A_{n}=(a_{ij})_{i,j\in[n]}$ is
$\text{Perm}(A_{n}):=\sum_{\pi\in S_{n}}\prod_{i=1}^{n}a_{i\pi_{i}}.$
The following proposition states a well-known property of permanents. Its
proof is similar to the proof for an analogous statement for determinants and
is therefore omitted.
###### Proposition 3.1 (folklore).
The permanent of a matrix is invariant under arbitrary permutations of the
rows and/or columns, as well as transposition. That is, for all $n\times n$
matrices $M$ and $n$-permutation matrices $P$ and $Q$,
Perm$(M^{T})=$ Perm$(M)=$ Perm$(PMQ)$.
Observe that the permanent of the $n\times n$ matrix of ones is equal to $n!$
for all positive $n$, and recall that there are $n!$ permutations in $S_{n}$.
One may ask whether there is a matrix associated with subsets of $S_{n}$, such
that the problem of enumerating those subsets can be converted into computing
a certain value of a matrix. This is in fact possible for certain subsets of
$S_{n}$, as we shall see.
###### Definition 3.3.
Let $n\geq 1$. Suppose $K$ is a set of restrictions that indicate whether
$i\mapsto j$ in an $n$-permutation is allowed for all $i,\,j\in[n]$. Then
define $A_{n}^{K}=(a_{ij})$ as the binary $n\times n$ matrix where, for all
$i,\,j\in[n]$, the $ij$th element of $A_{n}^{K}$ is denoted $a_{ij}$ and is
equal to 1 if and only if having $i\mapsto j$ is allowed by $K$. We say that
the matrix $A_{n}^{K}$ represents $K$.
###### Lemma 3.2 (Percus (1971) [7]).
Given a set of restrictions $K$ and the matrix $A_{n}^{K}$ defined as above,
the number of $n$-permutations that satisfy $K$ is the permanent of
$A_{n}^{K}$.
###### Proof.
Let $f$ be the function from $[n]$ to the set of subsets of $[n]$ such that
$i\rightarrow\pi_{i}$ is allowed in a permutation if and only if $\pi_{i}\in
f(i)$. Let $a_{ij}$ denote the $ij$th entry of $A_{n}^{K}$. Then the number of
$n$-permutations induced by $f$ is
$\displaystyle\\#\\{\pi\in S_{n}\mid\pi_{i}\in f(i)\text{ for all
}i\in[1,n]\\}$ $\displaystyle=\\#\\{\pi\in S_{n}\mid a_{i\pi_{i}}=1\text{ for
all }i\in[1,n]\\}$ $\displaystyle=\sum_{\pi\in
S_{n}}a_{1\pi_{1}}a_{2\pi_{2}}\cdots a_{n\pi_{n}}$ $\displaystyle=\sum_{\pi\in
S_{n}}\prod_{i=1}^{n}a_{i\pi_{i}}$ $\displaystyle=\text{Perm}(A_{n}^{K}).$
∎
We can now apply the lemma to the enumeration of the avoidance set of
$Q_{k}$ for all $k\geq 1$: Observe that for all $n$, an $n$-permutation $\pi$
avoids $Q_{k}$ if and only if $t\in\pi_{[t-k+2,n]}$ for all $t\in[k,n]$.
Therefore we have the following theorem:
###### Theorem 3.3.
The $n$-permutations that avoid $Q_{k}$ are represented by the binary $n\times
n$ matrix $A_{n}=(a_{ij})_{i,j\in[n]}$ where $a_{ij}=1$ if and only if $i\geq
j-k+2$.
Since the permanent of $A_{n}$ is exactly $(k-1)!(k-1)^{n-k+1}$ for all $n\geq
k$, the theorem above proves the enumeration of the avoidance set of $Q_{k}$
for all $n$.
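A direct computation of the permanent of $A_{n}$ for small $n$ and $k$ agrees with the closed formula; a minimal sketch in Python (our helper names), with the permanent taken straight from Definition 3.2:

```python
from itertools import permutations
from math import factorial, prod

def permanent(M):
    """Permanent by its definition: sum over all permutations."""
    n = len(M)
    return sum(prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def A(n, k):
    """Binary matrix of Theorem 3.3: a_ij = 1 iff i >= j - k + 2 (1-indexed)."""
    return [[1 if i >= j - k + 2 else 0 for j in range(1, n + 1)]
            for i in range(1, n + 1)]

for k in (3, 4):
    for n in range(k, 8):
        assert permanent(A(n, k)) == factorial(k - 1) * (k - 1) ** (n - k + 1)
```

As a baseline, the permanent of the all-ones $n\times n$ matrix is $n!$, matching the observation above.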
### 3.2. Juggling sequences
Suppose we have $b$ balls and a binary vector
$\sigma=(\sigma_{1},\sigma_{2},\dots,\sigma_{n})\in\\{0,1\\}^{n}$ for some
integer $n\geq b$. Given a reference time, we can throw one ball at the $i$th
second such that it lands in our hand again after exactly $t_{i}$ seconds
(that is, $i+t_{i}$ seconds after the reference time) where $t_{i}$ is a
positive integer for all $i\in[n]$, if $\sigma_{i}=1$. Otherwise, we put
$t_{i}:=0$. We say that the $n$-tuple of non-negative integers
$T=(t_{1},t_{2},\dots,t_{n})$ is a juggling sequence of period $n$ and state
$\sigma$ if and only if no two balls land in our hand at the same time, and
our sequence of throws is infinitely repeatable. That is, the following two
conditions hold:
* •
$i+t_{i}\equiv j+t_{j}\pmod{n}$ if and only if $i=j$ for all $i,j\in[n]$,
* •
$\sigma_{i}=1$ if and only if there is some $j\in[n]$ such that $i\equiv
j+t_{j}\pmod{n}$.
We say that $\sigma$ is a ground state if and only if $\sigma_{i}=1$ for
$i\in[b]$ and $\sigma_{i}=0$ otherwise. We refer the reader to [3] and [4] for
diagrams and further analysis on juggling sequences.
Gao and Kitaev [5] observed that the avoidance set $Av_{n}(Q_{5})$ and the
number of ground-state 3-ball juggling sequences of period $n$ are enumerated
by the same OEIS sequence A084509. We will prove a natural bijection between a
generalization of these two combinatorial objects, which is inspired by the
proof of Theorem 1 of [4], restated below:
###### Theorem 3.4.
The number of ground state juggling sequences of period $n$ using $b$ balls is
$J(n,b)=\begin{cases}(b+1)^{n-b}b!&\text{if }n\geq b,\\\
n!&\text{otherwise}.\end{cases}$
###### Proof.
Observe that $T=(t_{1},t_{2},\dots,t_{n})$ is a conforming juggling sequence
if and only if every ball still in the air at the end of the period lands at
some time $t\in[n+1,n+b]$ and no two balls land on the same second, so that
the next period again starts in the ground state. This is equivalent to
requiring $\\{t_{i}+i\mid i\in[n]\\}=[b+1,n+b]$, that is, $\\{t_{i}+i-b\mid
i\in[n]\\}=[n]$. In other words, the map $i\mapsto t_{i}+i-b$
defines a permutation on the set $[n]$, with the additional condition that
$t_{i}\geq 0$ for all $i\in[n]$. Thus every conforming juggling sequence $T$
can be associated uniquely to a permutation $\pi$ where $\pi_{i}=t_{i}+i-b$.
The number of such permutations is then exactly the permanent of the matrix
$M$ where its $ij$th entry is 1 if and only if $j-i+b\geq 0$, by Lemma 3.2.
Computing the permanent of $M$ yields the formula assigned to $J(n,b)$. ∎
State, period, # balls, # juggling sequences | Juggling sequences $T=(t_{1},\dots,t_{n})$ | $(t_{1}+1,\dots,t_{n}+n)$ $=(\sigma_{1},\dots,\sigma_{n})$ $+(b,\dots,b)$ | $\sigma{{}^{-1}}=\pi$ $\in Av_{n}(Q_{b+2})$; $\sigma_{i}=t_{i}+i-b$ | Matrix transpose $M^{t}=M_{n,b}^{t}$
---|---|---|---|---
$\sigma=(1,0)$; $n=2$; $b=1$; $J(n,b)=2$ | (1,1) (2,0) | (2,3) (3,2) | $12{{}^{-1}}=12$ $21{{}^{-1}}=21$ | $\left(\begin{array}[]{cc}1&1\\\ 1&1\end{array}\right)$
$\sigma=(1,0,0)$; $n=3$; $b=1$; $J(n,b)=4$ | (1,1,1) (3,0,0) (2,0,1) (1,2,0) | (2,3,4) (4,2,3) (3,2,4) (2,4,3) | $123{{}^{-1}}=123$ $312{{}^{-1}}=231$ $213{{}^{-1}}=213$ $132{{}^{-1}}=132$ | $\left(\begin{array}[]{ccc}0&1&1\\\ 1&1&1\\\ 1&1&1\end{array}\right)$
$\sigma=(1,0,0,0)$; $n=4$; $b=1$, $J(n,b)=8$ | (1,1,1,1) (3,0,0,1) (2,0,1,1) (1,2,0,1) (2,0,2,0) (4,0,0,0) (1,3,0,0) (1,1,2,0) | (2,3,4,5) (4,2,3,5) (3,2,4,5) (2,4,3,5) (3,2,5,4) (5,2,3,4) (2,5,3,4) (2,3,5,4) | $1234{{}^{-1}}=1234$ $3124{{}^{-1}}=2314$ $2134{{}^{-1}}=2134$ $1324{{}^{-1}}=1324$ $2143{{}^{-1}}=2143$ $4123{{}^{-1}}=2341$ $1423{{}^{-1}}=1342$ $1243{{}^{-1}}=1243$ | $\left(\begin{array}[]{cccc}0&0&1&1\\\ 0&1&1&1\\\ 1&1&1&1\\\ 1&1&1&1\end{array}\right)$
$\sigma=(1,1)$; $n=2$; $b=2$; $J(n,b)=2$ | (2,2) (3,1) | (3,4) (4,3) | $12{{}^{-1}}=12$ $21{{}^{-1}}=21$ | $\left(\begin{array}[]{cc}1&1\\\ 1&1\end{array}\right)$
$\sigma=(1,1,0)$; $n=3$; $b=2$; $J(n,b)=6$ | (2,2,2) (2,3,1) (3,1,2) (3,3,0) (4,1,1) (4,2,0) | (3,4,5) (3,5,4) (4,3,5) (4,5,3) (5,3,4) (5,4,3) | $123{{}^{-1}}=123$ $132{{}^{-1}}=132$ $213{{}^{-1}}=213$ $231{{}^{-1}}=312$ $312{{}^{-1}}=231$ $321{{}^{-1}}=321$ | $\left(\begin{array}[]{ccc}1&1&1\\\ 1&1&1\\\ 1&1&1\end{array}\right)$
Table 2. Sample values for juggling sequences and their images under $\theta$
for small $n$ and $b$. Note that we are using the permutation matrix notation
system where rows are numbered from bottom to top in increasing order.
The desired bijection can then be easily obtained by realizing that the
permutations mentioned in the proof of Theorem 3.4 are exactly the permutation
inverses of those avoiding $Q_{b+2}$, as is carefully fleshed out in the
following theorem:
###### Theorem 3.5.
Let $\theta$ be a function from the set of ground state juggling sequences of
period $n$ using $b$ balls to $Av_{n}(Q_{b+2})$ given by
$\theta((t_{1},t_{2},\dots,t_{n}))=\pi$ where $\pi_{t_{i}+i-b}=i$ for all
$i\in[n]$. Then $\theta$ is a bijection.
###### Proof.
Since $t_{i}$ is nonnegative for all $i\in[n]$, setting $j:=t_{i}+i-b$ we have
$\pi_{j}=i=j+b-t_{i}\leq j+b,\quad\text{so}\quad
j\geq\pi_{j}-b=\pi_{j}-(b+2)+2\text{ for all }j\in[n].$
Therefore, for a given $n,b$, the matrix $M$ defined in Chung and Graham’s
paper is exactly the transpose of the matrix that represents the avoidance of
$Q_{b+2}$ as defined in Theorem 3.3. So the codomain of $\theta$ is indeed
$Av_{n}(Q_{b+2})$. By Theorem 3.1, the number of ground state $b$-ball
juggling sequences of period $n$ is equal to the size of $Av_{n}(Q_{b+2})$
since
${\left|Av_{n}(Q_{b+2})\right|}=\begin{cases}(b+1)!\times(b+1)^{n-b-1}=(b+1)^{n-b}b!&\text{if
}n\geq b+2,\\\ n!&\text{otherwise.}\end{cases}$
Therefore, since $\theta$ is injective, it must also be bijective. ∎
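The correspondence can be exercised directly: inverting $\theta$ gives $t_{i}=\pi^{-1}(i)-i+b$, and every $\pi$ avoiding $Q_{b+2}$ (equivalently, with all $t_{i}\geq 0$) yields a valid ground state sequence. A sketch in Python (our helper names):

```python
from itertools import permutations
from math import factorial

def sequences(n, b):
    """Ground state b-ball juggling sequences of period n, obtained from
    permutations via t_i = pi^{-1}(i) - i + b (the inverse of theta)."""
    out = []
    for pi in permutations(range(1, n + 1)):
        pos = {v: i + 1 for i, v in enumerate(pi)}   # 1-indexed positions
        t = [pos[i] - i + b for i in range(1, n + 1)]
        if all(x >= 0 for x in t):                   # pi avoids Q_{b+2}
            # no two balls land at the same time modulo the period
            assert len({(i + 1 + x) % n for i, x in enumerate(t)}) == n
            out.append(tuple(t))
    return out

assert len(sequences(3, 1)) == 4          # matches Table 2
for n in range(1, 7):
    for b in range(1, n + 1):
        assert len(sequences(n, b)) == (b + 1) ** (n - b) * factorial(b)
```

For instance, `sequences(3, 1)` recovers the four period-3 one-ball sequences listed in Table 2, including $(1,1,1)$.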
We refer the reader to Table 2 for sample values for small $n$ and $b$.
### 3.3. Shrub forests of $n$ heaps
###### Definition 3.4.
Let $\mathcal{P}_{3n}$ denote the set of permutations of length $3n$ that
avoid the patterns 231, 312 and 321 and satisfy $\pi_{3i+1}<\pi_{3i+2}$ and
$\pi_{3i+1}<\pi_{3i+3}$ for all $i\in[0,n-1]$.
###### Remark 1.
$\mathcal{P}_{3n}$ is also known as the set of 2-ary shrub forests of $n$
heaps avoiding the patterns 231, 312 and 321.
Gao and Kitaev [5] observed that $Av_{n}(Q_{4})$ and $\mathcal{P}_{3n}$ are
enumerated by the same OEIS sequence A025192. We will show a natural
bijection between these two sets for all $n\geq 1$.
###### Theorem 3.6.
Let
$P_{3n}=\begin{cases}\\{123,\,132\\}&\text{if }n=1,\\\
\\{1\oplus\tau\oplus\pi_{[2,3n-3]}\mid\tau\in\\{123,\,132,\,213\\}\text{ and
}\pi\in\mathcal{P}_{3n-3}\\}&\text{if }n\geq 2.\end{cases}$
Then $P_{3n}=\mathcal{P}_{3n}$.
###### Proof.
It is easy to see that $\mathcal{P}_{3}=\\{123,\,132\\}$. Let
$\sigma:=1\oplus\tau\oplus\pi_{[2,3n-3]}$ for some
$\tau\in\\{123,\,132,\,213\\}$ and $\pi\in\mathcal{P}_{3n-3}$. It is clear
that
${\left|\sigma\right|}={\left|1\oplus\tau\oplus\pi_{[2,3n-3]}\right|}=1+3+{\left|\pi_{[2,3n-3]}\right|}=4+(3n-3)-1=3n,$
so $\sigma$ is a $3n$-permutation. We know that $\pi\in\mathcal{P}_{3n-3}$, so
$\sigma_{3i+1}<\sigma_{3i+2},\,\sigma_{3i+3}$ for $i=2,3,\dots,n-1$. By the
definition of the direct sum $\oplus$ we have
$\sigma_{1}<\sigma_{[2,4]}<\sigma_{[5,3n]}$, so $\sigma_{3i+1}<\sigma_{3i+2}$
and $\sigma_{3i+1}<\sigma_{3i+3}$ for $i=0$ and 1 as well. Since
$\pi_{[2,3n-3]}$ and $\tau\in S_{3}\setminus\\{231,\,312,\,321\\}$ both avoid
231, 312 and 321, so do the factors $\sigma_{[1,4]}$ and $\sigma_{[5,3n]}$. It
is easy to see (by the definition of $\oplus$) that none of the forbidden
patterns may span across $\sigma_{[1,4]}$ and $\sigma_{[5,3n]}$. Therefore
$\sigma\in\mathcal{P}_{3n}$.
We claim that ${\left|P_{3n}\right|}=2\times 3^{n-1}$ for all $n\geq 1$.
Indeed, ${\left|P_{3}\right|}=2=2\times 3^{0}$. Suppose this is true for some
$k\geq 1$. Since $\pi_{1}=1$ for all $\pi\in P_{3k}$, we must have that
$\pi_{[2,3k]}$ is distinct for each $\pi\in P_{3k}$. Therefore
${\left|P_{3k+3}\right|}=3\times{\left|P_{3k}\right|}=2\times 3^{k}$, and
the claim is true for all $n$. Since $P_{3n}\subseteq\mathcal{P}_{3n}$ and
${\left|P_{3n}\right|}={\left|\mathcal{P}_{3n}\right|}$, they must be equal
for all $n\geq 1$. ∎
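For small $n$ the count $2\times 3^{n-1}$ can be confirmed by brute force; the sketch below (Python, our helper names) tests membership in $\mathcal{P}_{3n}$ directly from Definition 3.4:

```python
from itertools import combinations, permutations

BAD = {(2, 3, 1), (3, 1, 2), (3, 2, 1)}   # forbidden patterns

def pattern(vals):
    order = sorted(vals)
    return tuple(order.index(v) + 1 for v in vals)

def in_scriptP(pi):
    """Membership in P_{3n} per Definition 3.4."""
    if any(pattern(t) in BAD for t in combinations(pi, 3)):
        return False
    return all(pi[3 * i] < pi[3 * i + 1] and pi[3 * i] < pi[3 * i + 2]
               for i in range(len(pi) // 3))

counts = [sum(map(in_scriptP, permutations(range(1, 3 * n + 1))))
          for n in (1, 2)]
assert counts == [2, 6]                   # 2 * 3^(n-1) for n = 1, 2
```

The six members at $n=2$ are exactly the permutations $1\oplus\tau\oplus\pi_{[2,3]}$ of the recursive construction.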
###### Theorem 3.7.
Let $\theta:Av_{n}(Q_{4})\rightarrow\mathcal{P}_{3n-3}$ be given by
$\displaystyle\theta(12)$
$\displaystyle=123,\quad\theta(21)=132,\quad\text{and for
}{\left|\pi\right|}=n\geq 3,$ $\displaystyle\theta(\pi)$
$\displaystyle=\begin{cases}1\oplus
132\oplus\theta\left(\pi_{[1,n-1]}\right)_{[2,3n-6]},&\text{if }\pi_{n}=n,\\\
1\oplus 123\oplus\theta\left(\pi_{[1,n-2]}\,\pi_{n}\right)_{[2,3n-6]},&\text{if
}\pi_{n-1}=n,\\\ 1\oplus
213\oplus\theta\left(\pi_{[1,n-3]}\,\pi_{[n-1,n]}\right)_{[2,3n-6]}&\text{if
}\pi_{n-2}=n.\end{cases}$
Then $\theta$ is a bijection.
Figure 7. The POP $Q_{4}$
###### Proof.
Recall that $n$ can only be $\pi_{n}$, $\pi_{n-1}$ or $\pi_{n-2}$ by the proof
of Theorem 3.3. So $\theta$ is defined on $Av_{n}(Q_{4})$ for $n\geq 3$.
Moreover, $\theta(Av_{n}(Q_{4}))\subseteq S_{3n-3}$ for all $n\geq 2$. The
base case holds: ${\left|\theta(12)\right|}={\left|\theta(21)\right|}=3$.
Suppose $\theta(Av_{k-1}(Q_{4}))\subseteq S_{3k-6}$ for some $k\geq 3$. Then
if $\pi\in Av_{k}(Q_{4})$, we have
${\left|\theta(\pi)\right|}=1+3+(3(k-1)-3-1)=3k-3.$
So $\theta$ maps into $S_{3n-3}$ for all $n$. Obviously
$\theta(Av_{2}(Q_{4}))=\mathcal{P}_{3}$. Suppose
$\theta(Av_{k-1}(Q_{4}))\subseteq\mathcal{P}_{3k-6}$ for some $k\geq 3$. Then
by Theorem 3.6, $\theta$ indeed maps $Av_{k}(Q_{4})$ into
$\mathcal{P}_{3k-3}$. So $\theta(Av_{n}(Q_{4}))\subseteq\mathcal{P}_{3n-3}$
for all $n\geq 2$ by induction.
Finally, it is clear from the definition of $\theta$ that it is injective. By
Theorem 3.1, ${\left|Av_{n}(Q_{4})\right|}=3!\times 3^{n-3}=2\times 3^{n-2}$
for $n\geq 4$. Since
${\left|Av_{n}(Q_{4})\right|}={\left|\mathcal{P}_{3n-3}\right|}$ for all $n$,
the map $\theta$ must be surjective as well. ∎
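The recursion defining $\theta$ is straightforward to implement; the sketch below (Python, our helper names) reproduces the images listed in Table 3 and checks injectivity for $n=3$:

```python
from itertools import permutations

def dsum(*parts):
    """Direct sum of permutations (as tuples)."""
    out, shift = [], 0
    for p in parts:
        out.extend(x + shift for x in p)
        shift += len(p)
    return tuple(out)

def theta(pi):
    n = len(pi)
    if n == 2:
        return {(1, 2): (1, 2, 3), (2, 1): (1, 3, 2)}[pi]
    pos = pi.index(n)                      # n sits in one of the last 3 slots
    tau = {n - 1: (1, 3, 2), n - 2: (1, 2, 3), n - 3: (2, 1, 3)}[pos]
    rest = tuple(x for x in pi if x != n)  # the shorter permutation in each case
    inner = theta(rest)
    # inner always starts with 1; drop it and renormalize before summing
    return dsum((1,), tau, tuple(x - 1 for x in inner[1:]))

assert theta((1, 2, 3)) == (1, 2, 4, 3, 5, 6)   # first image in Table 3
assert theta((2, 1, 3)) == (1, 2, 4, 3, 6, 5)   # second image in Table 3
# all 3-permutations avoid Q_4, and their images are pairwise distinct
images = {theta(pi) for pi in permutations(range(1, 4))}
assert len(images) == 6
```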
$n$ | $Av_{n}(Q_{4})$ | $\mathcal{P}_{3n-3}$
---|---|---
2 | 12 21 | 123 132
3 | 123 213 132 231 312 321 | 124356 124365 123456 123465 132456 132465
Table 3. Sample values of $Av_{n}(Q_{4})$ and their images under $\theta$ in
$\mathcal{P}_{3n-3}$
We note that our bijection is easily generalized:
###### Definition 3.5.
Let $Q_{k,j}$ be the POP of size $k$ where $j>i$ for all
$i\in[k]\setminus\\{j\\}$, as illustrated in Figure 8.
Figure 8. The POP $Q_{k,j}$, where $\\{j,i_{1},i_{2},\dots,i_{k-1}\\}=[k]$
###### Theorem 3.8.
There is a natural bijection between $Av_{n}(Q_{4,j})$ and
$\mathcal{P}_{3n-3}$ for all $1\leq j\leq 4$.
###### Proof.
Due to the symmetry of $Q_{k,j}$ and the fact that $Q_{k}=Q_{k,1}$, it is not
hard to see that $\pi\in Av_{n}(Q_{k})$ if and only if
$\pi_{j}\,\pi_{[2,j-1]}\,\pi_{1}\,\pi_{[j+1,n]}\in Av_{n}(Q_{k,j})$. We can
then obtain a bijection between $Av_{n}(Q_{4,j})$ and
$\mathcal{P}_{3n-3}$ for all $1\leq j\leq 4$ by composing $\theta$ from
Theorem 3.7 with the bijection
$\pi\mapsto\pi_{j}\,\pi_{[2,j-1]}\,\pi_{1}\,\pi_{[j+1,n]}$. ∎
## 4\. Levels in compositions of $n$ of ones and twos
###### Definition 4.1.
Define $P_{k}$ to be a POP with $k$ elements where $1>3$. Its Hasse diagram is
illustrated below.
Figure 9. The POP $P_{k}$
###### Definition 4.2.
A composition of $n$ is a way of writing $n$ as a sum of positive integers,
that is, an expression $i_{1}+i_{2}+\cdots+i_{k}=n$ for some
$k\geq 1$. A composition of $n$ of ones and twos has the added restriction
that $i_{j}\in\\{1,2\\}$ for all $j\in[k]$. In this paper, we will refer to a
composition of $n$ of ones and twos simply as a “composition of $n$”, and
denote the set of such compositions by $\mathcal{C}_{n}$.
###### Definition 4.3.
A level in a composition of $n$ (of ones and twos) is a pair of consecutive
ones or twos separated by a $+$ sign. We define a marked composition of $n$ to
be a composition of $n$ with exactly one level marked with a line above the
pair of ones or twos. We may denote a single summand that is part of a level
by including a line above it, i.e. $\overline{1}+\overline{1}=\overline{1+1}$
and $\overline{2}+\overline{2}=\overline{2+2}$. We denote the set of marked
compositions of $n$ by $\mathcal{L}_{n}$.
We refer the reader to Table 4 for examples.
Gao and Kitaev [5] discovered that the sequence
${\left|Av_{n}(P_{4})\right|}_{n\geq 1}$ corresponds to the OEIS sequence
A045925 that enumerates the number of levels in all compositions of $n+1$ of
ones and twos. In this section, we will demonstrate an explicit bijection
between the two sets. To do so, we first construct an explicit bijection from
$\mathcal{C}_{n}$ to $Av_{n}(P_{3})$.
$n$ | ${\left|\mathcal{C}_{n}\right|}$ | $\mathcal{C}_{n}$ | ${\left|\mathcal{L}_{n}\right|}$ | $\mathcal{L}_{n}$
---|---|---|---|---
1 | 1 | $1$ | 0 | none
2 | 2 | $1+1,2$ | 1 | $\overline{1+1}$
3 | 3 | $1+1+1$, $2+1$, $1+2$ | 2 | $\overline{1+1}+1$, $1+\overline{1+1}$
4 | 5 | $1+1+1+1$, $1+1+2,$ $1+2+1$, $2+1+1$ $2+2$ | 6 | $\overline{1+1}+1+1$, $1+\overline{1+1}+1$, $1+1+\overline{1+1}$, $\overline{1+1}+2$, $2+\overline{1+1}$, $\overline{2+2}$
Table 4. Compositions and levels of $n$ for small $n$
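The counts behind Tables 4 and 5 are easy to verify computationally. The following Python sketch (ours, not part of the formal argument) enumerates compositions of $n$ into ones and twos together with their levels, and checks that, with the convention $F(0)=0$, $F(1)=1$, there are $F(n+1)$ compositions and $(n-1)F(n-1)$ marked compositions:

```python
def compositions_12(n):
    """All compositions of n into parts 1 and 2, as tuples."""
    if n == 0:
        return [()]
    out = [c + (1,) for c in compositions_12(n - 1)]
    if n >= 2:
        out += [c + (2,) for c in compositions_12(n - 2)]
    return out

def fib(n):
    """Fibonacci numbers with F(0)=0, F(1)=1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def count_levels(c):
    """Number of levels (adjacent equal parts) in a composition."""
    return sum(1 for x, y in zip(c, c[1:]) if x == y)

for n in range(1, 15):
    cs = compositions_12(n)
    assert len(cs) == fib(n + 1)                # |C_n| = F(n+1)
    total_levels = sum(count_levels(c) for c in cs)
    assert total_levels == (n - 1) * fib(n - 1) # |L_n| = (n-1)F(n-1)
```

Each marked composition corresponds to one choice of a level in some composition, so summing the level counts enumerates $\mathcal{L}_{n}$.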
### 4.1. Compositions of $n$
###### Lemma 4.1.
Let $F(n)$ denote the $n$th Fibonacci number, where $F(0)=0$ and $F(1)=1$. The
size of $\mathcal{C}_{n}$ is $F(n+1)$ for all $n\geq 1$.
###### Proof.
The sizes ${\left|\mathcal{C}_{1}\right|}=1=F(2)$ and
${\left|\mathcal{C}_{2}\right|}=2=F(3)$ can be verified easily. For
$n\geq 3$, observe that
$c\in\mathcal{C}_{n-1}\iff c+1\in\mathcal{C}_{n}\quad\text{and}\quad
c\in\mathcal{C}_{n-2}\iff c+2\in\mathcal{C}_{n},$
so
${\left|\mathcal{C}_{n}\right|}={\left|\mathcal{C}_{n-1}\right|}+{\left|\mathcal{C}_{n-2}\right|}$,
and by induction ${\left|\mathcal{C}_{n}\right|}=F(n+1)$. ∎
###### Lemma 4.2.
The size of $Av_{n}(P_{3})$ is $F(n+1)$ for all $n\geq 1$. Moreover, all
$n$-permutations avoiding $P_{3}$ for $n\geq 2$ are sum decomposable.
###### Proof.
It is easy to see that ${\left|Av_{1}(P_{3})\right|}=1=F(2)$ and
${\left|Av_{2}(P_{3})\right|}=2=F(3)$. We refer the reader to Table 6 for
examples. It is not difficult to see that for $n\geq 3$,
$\sigma\in Av_{n-1}(P_{3})\iff 12[\sigma,1]\in
Av_{n}(P_{3})\quad\text{and}\quad\sigma\in Av_{n-2}(P_{3})\iff
12[\sigma,21]\in Av_{n}(P_{3}).$
So by Theorem 1.2,
${\left|Av_{n}(P_{3})\right|}={\left|Av_{n-1}(P_{3})\right|}+{\left|Av_{n-2}(P_{3})\right|}=F(n)+F(n-1)=F(n+1)$
for all $n\geq 3$ as well, proving both statements. ∎
The previous two lemmas suggest that there is a natural bijection from the set
$\mathcal{C}_{n}$ to $Av_{n}(P_{3})$. Indeed there is, as we will see in the
following theorem:
$n$ | $F(n)$ | Compositions of $n$ | Permutations in $Av_{n}(P_{3})$
---|---|---|---
1 | 1 | 1 | 1
2 | 2 | $1+1$ | $12=12[1,1]$
| | 2 | $21=21[1,1]$
3 | 3 | $1+1+1$ | $123=123[1,1,1]$
| | $1+2$ | $132=12[1,21]$
| | $2+1$ | $213=12[21,1]$
4 | 5 | $1+1+1+1$ | $1234=1234[1,1,1,1]$
| | $2+1+1$ | $2134=123[21,1,1]$
| | $1+1+2$ | $1243=123[1,1,21]$
| | $1+2+1$ | $1324=123[1,21,1]$
| | $2+2$ | $2143=12[21,21]$
Table 5. Compositions of $n$ and their images under $f$ for small $n$
###### Theorem 4.3.
Let $f:\mathcal{C}_{n}\rightarrow Av_{n}(P_{3})$ be defined by
$f(r_{1}+r_{2}+\cdots+r_{k})=123\cdots
k[\alpha_{1},\alpha_{2},\dots,\alpha_{k}]$
where $r_{1}+r_{2}+\cdots+r_{k}$ is a composition of $n$ of ones and twos, and
$\alpha_{i}=\begin{cases}1&\text{if }r_{i}=1\\\ 21&\text{if
}r_{i}=2\end{cases}.$
Then $f$ is a bijection.
###### Proof.
We refer the reader to Table 5 for examples for small $n$. First we check that
if $r_{1}+r_{2}+\cdots+r_{k}$ is a composition of $n$, then
$f(r_{1}+r_{2}+\cdots+r_{k})$ is a permutation in $S_{n}$:
$\displaystyle{\left|f(r_{1}+r_{2}+\cdots+r_{k})\right|}$
$\displaystyle={\left|123\cdots
k[\alpha_{1},\alpha_{2},\dots,\alpha_{k}]\right|}$
$\displaystyle={\left|\alpha_{1}\right|}+{\left|\alpha_{2}\right|}+\cdots+{\left|\alpha_{k}\right|}$
$\displaystyle=r_{1}+r_{2}+\cdots+r_{k}$ $\displaystyle=n.$
Moreover, it is clear that if $\alpha_{i}\in\\{1,21\\}$ for all $i\in[k]$,
then the permutation
$\pi=123\cdots k[\alpha_{1},\alpha_{2},\dots,\alpha_{k}]$
avoids $P_{3}$. It is easy to see that $f$ is injective by definition. By
Lemmas 4.1 and 4.2, $\mathcal{C}_{n}$ and $Av_{n}(P_{3})$ both have $F(n+1)$
elements, so $f$ maps bijectively onto $Av_{n}(P_{3})$.
∎
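Since an occurrence of the POP $P_{3}$ only compares its first and third points, a permutation avoids $P_{3}$ exactly when $\pi_{i}<\pi_{k}$ whenever $k\geq i+2$. The following Python sketch (ours, for verification only) brute-forces $Av_{n}(P_{3})$ from this description and checks that the images of $f$ fill it out exactly:

```python
from itertools import permutations

def avoids_P3(p):
    """p avoids P_3 iff p[i] < p[k] whenever k - i >= 2 (0-indexed)."""
    n = len(p)
    return all(p[i] < p[k] for i in range(n) for k in range(i + 2, n))

def compositions_12(n):
    """All compositions of n into parts 1 and 2, as tuples."""
    if n == 0:
        return [()]
    res = [c + (1,) for c in compositions_12(n - 1)]
    if n >= 2:
        res += [c + (2,) for c in compositions_12(n - 2)]
    return res

def f(comp):
    """The map f: build 123...k[a_1,...,a_k] with a_i = 1 or 21."""
    out, base = [], 0
    for r in comp:
        if r == 1:
            out.append(base + 1)
        else:              # r == 2: place the block 21 on top of what came before
            out += [base + 2, base + 1]
        base += r
    return tuple(out)

for n in range(1, 9):
    av = {p for p in permutations(range(1, n + 1)) if avoids_P3(p)}
    images = {f(c) for c in compositions_12(n)}
    assert images == av
```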
### 4.2. Levels of compositions of $n$
$n$ | ${\left|Av_{n}(P_{3})\right|}$ | $Av_{n}(P_{3})$ | ${\left|Av_{n}(P_{4})\right|}$ | $Av_{n}(P_{4})$
---|---|---|---|---
1 | 1 | 1 | 1 | 1
2 | 2 | 12, 21 | 2 | 12, 21
3 | 3 | 123, 132, 213 | 6 | 123,132,213, 231,312,321
4 | 5 | 1234, 1243, 1324, 2134, 2143 | 12 | 1234,1243,1324,1342, 1423,1432,2134,2143, 2341,2431,3142,3241
Table 6. $Av_{n}(P_{3})$ and $Av_{n}(P_{4})$ for small $n$
###### Lemma 4.4.
For $n\geq 1$, the size of the set $\mathcal{L}_{n}$ is $(n-1)F(n-1)$.
Moreover, for $n\geq 3$, we can partition $\mathcal{L}_{n}$ into four sets:
$\displaystyle A^{\prime}_{n}$ $\displaystyle=\\{\text{marked compositions of
}n\text{ that end with }1\text{ (not }\overline{1})\\}$ $\displaystyle
B^{\prime}_{n}$ $\displaystyle=\\{\text{marked compositions of }n\text{ that
end with }2\text{ (not }\overline{2})\\}$ $\displaystyle C^{\prime}_{n}$
$\displaystyle=\\{\text{marked compositions of }n\text{ that end with
}\overline{1+1}\\}$ $\displaystyle D^{\prime}_{n}$
$\displaystyle=\\{\text{marked compositions of }n\text{ that end with
}\overline{2+2}\\}$
###### Proof.
For $n\in[3]$, it is easy to check that
${\left|\mathcal{L}_{n}\right|}=(n-1)F(n-1)$, as we show in Table 4. For $n\geq
4$, we proceed by induction. Suppose that for some $k\geq 4$, the statement is
true for all $n\in[k-1]$. It is clear that $A^{\prime}_{k}$,
$B^{\prime}_{k}$, $C^{\prime}_{k}$, and $D^{\prime}_{k}$ together make up
$\mathcal{L}_{k}$, and that the following statements are true for all $n$:
$\displaystyle m\in\mathcal{L}_{n}$ $\displaystyle\iff
m+1\in\mathcal{L}_{n+1},\qquad$ $\displaystyle c\in\mathcal{C}_{n-1}$
$\displaystyle\iff c+\overline{1+1}\in\mathcal{L}_{n+1}$ $\displaystyle
m\in\mathcal{L}_{n-1}$ $\displaystyle\iff m+2\in\mathcal{L}_{n+1},\qquad$
$\displaystyle c\in\mathcal{C}_{n-3}$ $\displaystyle\iff
c+\overline{2+2}\in\mathcal{L}_{n+1}.$
So by the inductive hypothesis,
${\left|A^{\prime}_{k}\right|}=(k-2)F(k-2)\qquad\text{and}\qquad{\left|B^{\prime}_{k}\right|}=(k-3)F(k-3),$
and by Lemma 4.1 (with the convention that $\mathcal{C}_{0}$ consists of the
empty composition, so ${\left|\mathcal{C}_{0}\right|}=1=F(1)$),
${\left|C^{\prime}_{k}\right|}={\left|\mathcal{C}_{k-2}\right|}=F(k-1)\qquad\text{and}\qquad{\left|D^{\prime}_{k}\right|}={\left|\mathcal{C}_{k-4}\right|}=F(k-3).$
Therefore the size of $\mathcal{L}_{k}$ is
$\displaystyle{\left|A^{\prime}_{k}\right|}+{\left|B^{\prime}_{k}\right|}+{\left|C^{\prime}_{k}\right|}+{\left|D^{\prime}_{k}\right|}$
$\displaystyle=$ $\displaystyle\quad(k-2)F(k-2)+(k-3)F(k-3)+F(k-1)+F(k-3)$
$\displaystyle=$ $\displaystyle\quad(k-2)F(k-2)+(k-2)F(k-3)+F(k-1)$
$\displaystyle=$ $\displaystyle\quad(k-2)\left(F(k-2)+F(k-3)\right)+F(k-1)$
$\displaystyle=$ $\displaystyle\quad(k-1)F(k-1),$
and the statement is true by induction on $n$. ∎
Figure 10. The POP $P_{4}$
###### Lemma 4.5.
For $n\geq 1$, the size of $Av_{n}(P_{4})$ is $nF(n)$. Moreover, for $n\geq
4$, we can partition $Av_{n}(P_{4})$ into four sets. Specifically,
$Av_{n}(P_{4})=A_{n}\sqcup B_{n}\sqcup C_{n}\sqcup D_{n}$ where
$\displaystyle A_{n}:$ $\displaystyle=\\{12[1,\sigma]\,:\,\sigma\in
Av_{n-1}(P_{4})\\},$ $\displaystyle B_{n}:$
$\displaystyle=\\{12[21,\sigma]\,:\,\sigma\in Av_{n-2}(P_{4})\\},$
$\displaystyle C_{n}:$ $\displaystyle=\\{21[\sigma,1]\,:\,\sigma\in
Av_{n-1}(P_{3})\\},\quad\text{and}$ $\displaystyle D_{n}:$
$\displaystyle=\\{3142[1,1,\sigma,1]\,:\,\sigma\in Av_{n-3}(P_{3})\\}.$
###### Proof.
For $n\leq 3$, it is easy to check that
${\left|Av_{n}(P_{4})\right|}=n!=nF(n)$. We refer the reader to Table 6 for
examples.
For $n\geq 4$, we may proceed by induction. First, we leave it to the reader
to check that the following four claims are true for all $n\geq 4$:
1. 1.
$12[1,\sigma]\in Av_{n}(P_{4})\iff\sigma\in Av_{n-1}(P_{4})$,
2. 2.
$12[21,\sigma]\in Av_{n}(P_{4})\iff\sigma\in Av_{n-2}(P_{4})$,
3. 3.
$21[\sigma,1]\in Av_{n}(P_{4})\iff\sigma\in Av_{n-1}(P_{3})$, and
4. 4.
$3142[1,1,\sigma,1]\in Av_{n}(P_{4})\iff\sigma\in Av_{n-3}(P_{3})$.
These imply that $A_{n},B_{n},C_{n}$ and $D_{n}$ are subsets of
$Av_{n}(P_{4})$.
Now suppose that for some $k\geq 4$, the statement is true for all
$n\in[k-1]$. Then by our inductive hypothesis, we have that $A_{k}$ contains
$(k-1)F(k-1)$ elements and $B_{k}$ contains $(k-2)F(k-2)$ elements.
By Lemma 4.2, $Av_{n}(P_{3})$ is counted by the $(n+1)$th Fibonacci number for
all $n$, so $C_{k}$ contains $F(k)$ elements and $D_{k}$ contains $F(k-2)$
elements.
It is not hard to see that the four sets are disjoint, so the total number of
items in the four sets is
$\displaystyle{\left|A_{k}\right|}+{\left|B_{k}\right|}+{\left|C_{k}\right|}+{\left|D_{k}\right|}$
$\displaystyle=$ $\displaystyle\quad(k-1)F(k-1)+(k-2)F(k-2)+F(k)+F(k-2)$
$\displaystyle=$ $\displaystyle\quad
kF(k-1)-F(k-1)+kF(k-2)-2F(k-2)+F(k)+F(k-2)$ $\displaystyle=$
$\displaystyle\quad k(F(k-1)+F(k-2))-F(k-1)-F(k-2)+F(k)$ $\displaystyle=$
$\displaystyle\quad kF(k).$
It remains to check that any $k$-permutation avoiding $P_{4}$ indeed lies in
$A_{k},B_{k},C_{k}$ or $D_{k}$. By Theorem 1.2, it suffices to show that
1. (a)
12, 21 and 3142 are the only simple permutations that avoid $P_{4}$,
2. (b)
if $21[\alpha,\beta]$ avoids $P_{4}$ and $\alpha\neq 1$ is skew-sum
indecomposable, then $\beta=1$, and
3. (c)
if $12[\alpha,\beta]$ avoids $P_{4}$ and $\alpha$ is sum indecomposable, then
$\alpha=1$ or 21.
For (a), it is clear that 12 and 21 avoid $P_{4}$. Since 2413 contains
$P_{4}$, any simple permutation of length at least 4 avoiding $P_{4}$ must
contain $3142$ by Theorem 1.1. Let $\pi$ be such a simple permutation. We can
view it as a lattice matrix. This is shown in Figure 11, with some alterations
explained in the caption. The blocks $\alpha_{13},\,\alpha_{14},\,\alpha_{23}$
and $\alpha_{24}$ are adjacent to the bullet point representing the number 4,
so they must be trivial since $\pi$ is simple. Finally, $\alpha_{51}$ must be
trivial, otherwise $\pi$ would be sum decomposable.
For (b), it is clear that $\beta$ can have size at most 1, since otherwise
$21[\alpha,\beta]$ would contain $P_{4}$ for any ${\left|\alpha\right|}\geq
2$.
For (c), suppose $\alpha$ is sum indecomposable and of length at least 3.
Since ${\left|\beta\right|}\geq 1$, $\alpha$ must avoid $P_{3}$. By (a),
$\alpha$ is an inflation of $21$ or $3142$. The only inflation of 21 avoiding
$P_{3}$ is 21 itself, while 3142 contains $P_{3}$; either way
${\left|\alpha\right|}\leq 2$, a contradiction. So $\alpha=1$ or 21.
Figure 11. The lattice matrix $L_{3142}(\pi)$, with alterations. The 1s
corresponding to the pattern $3142$ are replaced by bullet points. For all
$i,\,j\in[5]$, $\alpha_{ij}$ is replaced by some $\ell\in[4]$ if and only if
$\pi$ would contain $P_{4}$ if $\alpha_{ij}$ were non-trivial and any point
$\alpha_{ij}$ could take the place of the point labelled $\ell$ in the POP.
∎
Once again, the previous two lemmas suggest that there is a natural bijection
from the set $\mathcal{L}_{n+1}$ to $Av_{n}(P_{4})$ \- and indeed there is, as
we will see in the main theorem of this section:
###### Theorem 4.6.
Let $g:\mathcal{L}_{n+1}\rightarrow Av_{n}(P_{4})$, where $n\geq 1$, be
defined on a marked composition $r_{1}+r_{2}+\cdots+r_{k}$ of $n+1$ with
$r_{i}\in\\{1,2,\overline{1},\overline{2}\\}$ for $i\in[k]$ and $k\geq 2$ by
$\displaystyle g(r_{1}+r_{2}+\cdots+r_{k})$
$\displaystyle=\begin{cases}1&\text{ if }\quad k=2,\quad
r_{1}=r_{2}=\overline{1},\\\ 312&\text{ if }\quad k=2,\quad
r_{1}=r_{2}=\overline{2},\\\
12[1,g(r_{1}+\cdots+r_{k-1})]&\text{ if }\quad r_{k}=1,\\\
12[21,g(r_{1}+\cdots+r_{k-1})]&\text{ if }\quad r_{k}=2,\\\
21[f(r_{1}+\cdots+r_{k-2}),1]&\text{ if }\quad k\geq 3,\quad r_{k-1}=r_{k}=\overline{1},\\\
3142[1,1,f(r_{1}+\cdots+r_{k-2}),1]&\text{ if }\quad k\geq 3,\quad
r_{k-1}=r_{k}=\overline{2}.\end{cases}$
Then $g$ is a bijection.
$n$ | Marked compositions of $n+1$ | Sum decomposables in $Av_{n}(P_{4})$ | Marked compositions of $n+1$ | Sum indecomposables in $Av_{n}(P_{4})$
---|---|---|---|---
1 | - | - | $\overline{1+1}$ | $1$
2 | $\overline{1+1}+1$ | $12=12[1,1]$ | $1+\overline{1+1}$ | $21=21[1,1]$
3 | $\overline{1+1}+1+1$, | $123=12[1,12]$ | $1+1+\overline{1+1}$, | $231=21[12,1]$
| $1+\overline{1+1}+1$ | $132=12[1,21]$ | $2+\overline{1+1}$ | $321=21[1,21]$
| $\overline{1+1}+2$ | $213=12[21,1]$ | $\overline{2+2}$ | $312=21[1,12]$
4 | $\overline{1+1}+1+1+1$ | $1234=12[1,123]$ | $1+1+1+\overline{1+1}$ | $2341=21[123,1]$
| $1+\overline{1+1}+1+1$ | $1243=12[1,132]$ | $2+1+\overline{1+1}$ | $2431=21[132,1]$
| $1+1+\overline{1+1}+1$ | $1324=12[1,213]$ | $1+2+\overline{1+1}$ | $3241=21[213,1]$
| $\overline{1+1}+2+1$ | $1342=12[1,231]$ | $1+\overline{2+2}$ | $3142=3142[1,1,1,1]$
| $2+\overline{1+1}+1$ | $1432=12[1,321]$ | |
| $\overline{2+2}+1$ | $1423=12[1,312]$ | |
| $\overline{1+1}+1+2$ | $2134=12[21,12]$ | |
| $1+\overline{1+1}+2$ | $2143=12[21,21]$ | |
Table 7. Marked compositions of $n+1$ and their images under $g$ for small $n$
###### Proof.
Recall from Theorem 4.3 that $f$ is a bijection, so it is clear from the
definition that $g$ is injective. We refer the reader to Tables 7, 8 and 9 for
enumerations for small $n$.
By Lemmas 4.4 and 4.5, the sets $\mathcal{L}_{n+1}$ and $Av_{n}(P_{4})$ both
contain $nF(n)$ elements, so $g$ must be bijective.
In addition, it can easily be seen that the inverse of $g$ is the following:
$g^{-1}(\pi)=\begin{cases}g^{-1}(\alpha_{2})+{\left|\alpha_{1}\right|}&\text{
if }\quad\pi=12[\alpha_{1},\alpha_{2}],\\\
f^{-1}(\alpha)+\overline{1+1}&\text{ if }\quad\pi=21[\alpha,1],\\\
f^{-1}(\alpha)+\overline{2+2}&\text{ if }\quad\pi=3142[1,1,\alpha,1].\\\
\end{cases}$
∎
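An occurrence of $P_{4}$ uses four increasing positions but only compares the first and third values, so a permutation avoids $P_{4}$ exactly when $\pi_{i}<\pi_{k}$ whenever $k\geq i+2$ and a fourth position still exists to the right of $k$. A small Python check (ours, for verification only) confirms that marked compositions of $n+1$ and $Av_{n}(P_{4})$ both have $nF(n)$ elements:

```python
from itertools import permutations

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def comps12(m):
    """Compositions of m into parts 1 and 2."""
    if m == 0:
        return [()]
    res = [c + (1,) for c in comps12(m - 1)]
    if m >= 2:
        res += [c + (2,) for c in comps12(m - 2)]
    return res

def marked_compositions(n):
    """Pairs (composition of n, index of the marked level)."""
    return [(c, i) for c in comps12(n)
            for i in range(len(c) - 1) if c[i] == c[i + 1]]

def avoids_P4(p):
    """p avoids P_4 iff p[i] < p[k] whenever k >= i+2 and k <= n-2
    (0-indexed), so that a fourth position remains to the right of k."""
    n = len(p)
    return all(p[i] < p[k] for i in range(n) for k in range(i + 2, n - 1))

for n in range(1, 9):
    av = sum(avoids_P4(p) for p in permutations(range(1, n + 1)))
    assert av == n * fib(n) == len(marked_compositions(n + 1))
```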
$n$ | ${\left|\mathcal{L}_{n+1}\right|}$ | Number of compositions in $\mathcal{L}_{n+1}$ where the last summand is not marked | Number of compositions in $\mathcal{L}_{n+1}$ where the last two terms are marked
---|---|---|---
1 | 1 | 0 | 1
2 | 2 | 1 | 1
3 | 6 | 3 | 3
4 | 12 | 8 | 4
5 | 25 | 18 | 7
6 | 48 | 37 | 11
7 | 91 | 73 | 18
8 | 168 | 139 | 29
9 | 306 | 259 | 47
10 | 550 | 474 | 76
11 | 979 | 856 | 123
$k$ | $kF(k)$ | $(k-1)F(k-1)+(k-2)F(k-2)$ | $L(k-1)=F(k-2)+F(k)$
Table 8. Enumerating marked compositions for small $n$
$n$ | ${\left|Av_{n}(P_{4})\right|}$ | Number of sum decomposable $n$-permutations avoiding $P_{4}$ | Number of sum indecomposable $n$-permutations avoiding $P_{4}$
---|---|---|---
1 | 1 | 0 | 1
2 | 2 | 1 | 1
3 | 6 | 3 | 3
4 | 12 | 8 | 4
5 | 25 | 18 | 7
6 | 48 | 37 | 11
7 | 91 | 73 | 18
8 | 168 | 139 | 29
9 | 306 | 259 | 47
10 | 550 | 474 | 76
11 | 979 | 856 | 123
$k$ | $kF(k)$ | $(k-1)F(k-1)+(k-2)F(k-2)$ | $L(k-1)=F(k-2)+F(k)$
Table 9. Enumerating sum decomposable and sum indecomposable permutations for
small $n$
## 5\. Partial sums of signed displacements
###### Definition 5.1.
Let $R_{k}$ be the POP of size $k$ where $1>k$.
Figure 12. The POP $R_{k}$
It is easy to see that the avoidance set of $R_{2}$ contains only the identity
permutations. Observe that $R_{3}$ is equivalent to the POP $P_{3}$ introduced
in Section 4, where we showed that its avoidance set is enumerated by the
well-known Fibonacci numbers. Gao and Kitaev [5] enumerated the avoidance sets
of $R_{4}$ and $R_{5}$ in Theorems 14 and 33 of their paper respectively.
Moreover, they showed that for all $k\geq 3$, the avoidance set of $R_{k}$ is
in one-to-one correspondence with $n$-permutations such that for each cycle
$c$, the smallest integer interval containing all elements of $c$ has at most
$k-1$ elements.
They also observed that $Av_{n}(R_{4})$ and the set of $n$-permutations for
which the partial sums of signed displacements do not exceed 2 (to be defined
later) are the same size for all $n\geq 1$, and asked if there is an
interesting bijection on one set to the other. We construct such a bijection
in this section by analyzing the two sets in detail.
### 5.1. Analysing $Av_{n}(R_{4})$
###### Theorem 5.1.
If $\pi$ is an $n$-permutation that avoids $R_{4}$ and $n\geq 5$, then $\pi$
is sum decomposable. Moreover, if $12[\alpha,\beta]$ avoids $R_{4}$ for
permutations $\alpha$ and $\beta$ where $\alpha$ is sum indecomposable, then
$\alpha$ is 1, 21, 231, 321, 312 or 2413.
###### Proof.
We claim that the only simple permutations avoiding $R_{4}$ are 1, 12, 21 and
2413. Recall that any simple $n$-permutation contains the patterns 132, 213
and 312 if $n\geq 4$. Thus, we can write such a simple permutation as the
string of factors
$\alpha^{(1)}\,p\,\alpha^{(2)}\,q\,\alpha^{(3)}\,r\,\alpha^{(4)}$
where the $\alpha^{(i)}$ are possibly empty factors and $p$, $q$ and
$r$ are factors of length 1 where $\text{red}(p\,q\,r)=312$. It is clear that
if $\alpha^{(2)}$ or $\alpha^{(3)}$ were not empty, if $\alpha^{(1)}>p$ or if
$\alpha^{(4)}<p$, then the permutation would contain $R_{4}$. So $\alpha^{(2)}$
and $\alpha^{(3)}$ are empty and if $\alpha^{(1)}$ and $\alpha^{(4)}$ are not
empty then we must have $\alpha^{(1)}<p<\alpha^{(4)}$. However, this implies
that if $\alpha^{(4)}$ is not empty then the permutation must be sum
decomposable, so $\alpha^{(4)}$ is empty. If
${\left|\alpha^{(1)}\right|}=t\geq 2$, then we must have
$\alpha^{(1)}_{[1,t-1]}<q$, otherwise the permutation contains $R_{4}$. This
forces $\alpha^{(1)}_{[1,t-1]}$ to be the interval $[t-1]$ which would mean
that the permutation is sum decomposable. So ${\left|\alpha^{(1)}\right|}=1$
if it is not empty. In this case we must have $q<\alpha^{(1)}<r$, which yields
the permutation 2413.
Finally we show that all $n$-permutations avoiding $R_{4}$ are sum
decomposable for $n\geq 5$. If $21[\beta_{1},\beta_{2}]$ avoids $R_{4}$, then
$\beta_{1}$ and $\beta_{2}$ must have lengths 1 or 2, and if
$2413[\beta_{1},\beta_{2},\beta_{3},\beta_{4}]$ avoids $R_{4}$, then each
$\beta_{i}$ must be of length exactly 1 for all $i\in[4]$. Hence every sum
indecomposable permutation avoiding $R_{4}$ has length at most 4, so every
longer permutation avoiding $R_{4}$ is sum decomposable. The only sum
indecomposable permutations avoiding $R_{4}$ are 1, 21, 321, 312, 231 and
2413, so $\alpha$ in the theorem must be one of these.
∎
###### Corollary 5.1.1.
Let $a_{n}:={\left|Av_{n}(R_{4})\right|}$. Then
$a_{n}=\begin{cases}n!&\text{if }\,1\leq n\leq 3,\\\ 12&\text{if }\,n=4,\\\
a_{n-1}+a_{n-2}+3a_{n-3}+a_{n-4}&\text{if }\,n\geq 5.\end{cases}$
###### Proof.
The formula is clear for $n\leq 3$. Since $R_{4}$ represents a set of $12$
permutations, ${\left|Av_{4}(R_{4})\right|}=4!-12=12$. For $n\geq 5$, recall that if
$\pi=12[\alpha,\beta]$ for some permutations $\alpha$ and $\beta$ where
$\alpha$ is sum indecomposable, then $\alpha$ and $\beta$ are unique by
Theorem 1.2. The recursive formula then follows directly from Theorem 5.1. ∎
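Since an occurrence of $R_{4}$ only compares its first and fourth points, a permutation avoids $R_{4}$ exactly when $\pi_{i}<\pi_{k}$ whenever $k\geq i+3$. The recursion in Corollary 5.1.1 can be checked against a brute force built from this description (Python, ours):

```python
from itertools import permutations
import math

def avoids_R4(p):
    """p avoids R_4 iff p[i] < p[k] whenever k - i >= 3 (0-indexed)."""
    n = len(p)
    return all(p[i] < p[k] for i in range(n) for k in range(i + 3, n))

def a(n):
    """The recursion from Corollary 5.1.1."""
    if n <= 3:
        return math.factorial(n)
    if n == 4:
        return 12
    return a(n - 1) + a(n - 2) + 3 * a(n - 3) + a(n - 4)

for n in range(1, 9):
    brute = sum(avoids_R4(p) for p in permutations(range(1, n + 1)))
    assert brute == a(n)
```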
### 5.2. Partial sums of signed displacements
###### Definition 5.2.
The $j$th signed displacement of an $n$-permutation $\pi$ for $j\in[n]$ is
defined as $\pi_{j}-j$.
The $i$th partial sum of signed displacements of an $n$-permutation $\pi$ for
$i\in[n]$ is defined as $\sum_{j=1}^{i}(\pi_{j}-j)$, and denoted
$\pi_{\Sigma}^{i}$.
We denote the set of $n$-permutations for which the partial sums of signed
displacements do not exceed 2 by $\mathfrak{S}_{n}$. For examples, refer to
Figures 13 and 14.
$\pi_{j}$ | 3 | 1 | 4 | 2
---|---|---|---|---
$j$ | 1 | 2 | 3 | 4
$\pi_{j}-j$ | 2 | -1 | 1 | -2
partial sum | 2 | 1 | 2 | 0
Figure 13. A permutation in $\mathfrak{S}_{4}$
$\pi_{j}$ | 4 | 1 | 3 | 2
---|---|---|---|---
$j$ | 1 | 2 | 3 | 4
$\pi_{j}-j$ | 3 | -1 | 0 | -2
partial sum | 3 | 2 | 2 | 0
Figure 14. A permutation that is not in $\mathfrak{S}_{4}$
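Definition 5.2 translates directly into code. A minimal Python sketch (ours) reproduces the computations of Figures 13 and 14:

```python
def partial_sums(p):
    """Partial sums of the signed displacements of a permutation p."""
    s, out = 0, []
    for j, v in enumerate(p, start=1):
        s += v - j          # j-th signed displacement is p_j - j
        out.append(s)
    return out

def conforming(p):
    """p belongs to the set S_n iff no partial sum exceeds 2."""
    return max(partial_sums(p)) <= 2

assert partial_sums((3, 1, 4, 2)) == [2, 1, 2, 0]   # Figure 13
assert conforming((3, 1, 4, 2))
assert partial_sums((4, 1, 3, 2)) == [3, 2, 2, 0]   # Figure 14
assert not conforming((4, 1, 3, 2))
```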
###### Lemma 5.2.
For an $n$-permutation $\pi$ and $k\in[n]$, we have $\pi_{\Sigma}^{k}=0$ if
and only if $\pi_{[1,k]}$ is itself a $k$-permutation (without reducing it).
###### Proof.
We know that the $\pi_{j}$ are positive and all distinct for $j\in[k]$. So let
$i_{1},i_{2},\dots,i_{k}$ be the rearrangement of $[k]$ such that
$\pi_{i_{1}}<\pi_{i_{2}}<\dots<\pi_{i_{k}}$, and let
$r_{j}=\pi_{i_{j}}-j$ for all $j\in[k]$. Then $r_{j}$ is non-negative for all
$j\in[k]$, and
$\displaystyle\pi_{\Sigma}^{k}=0\iff\sum_{j=1}^{k}(\pi_{j}-j)=0$
$\displaystyle\iff\sum_{j=1}^{k}\pi_{j}=\sum_{j=1}^{k}j$
$\displaystyle\iff\sum_{j=1}^{k}\pi_{i_{j}}=\sum_{j=1}^{k}j$
$\displaystyle\iff\sum_{j=1}^{k}(j+r_{j})=\sum_{j=1}^{k}j$
$\displaystyle\iff\sum_{j=1}^{k}r_{j}=0.$
The last statement is true if and only if $r_{j}=0$ for all $j\in[k]$. That
is, we must have $\\{\pi_{i}\mid i\in[k]\\}=[k]$, i.e. $\pi_{[1,k]}$ is a
$k$-permutation.
∎
The proof of the following theorem was inspired by the proof on the OEIS page
for A214663.
###### Theorem 5.3.
If $\pi\in\mathfrak{S}_{n}$ and $n\geq 5$, then $\pi$ must be of the form
$12[\alpha,\beta]$, where
$\beta\in\\{1,21,231,312,321,3142\\}$
and $\alpha$ is a permutation of length $n-{\left|\beta\right|}$ whose partial
sums of signed displacements do not exceed 2. We call such permutations
conforming, so that $\mathfrak{S}_{m}$ is exactly the set of conforming
$m$-permutations.
###### Proof.
For $n\geq 5$, observe that the number $n$ can only occur as one of the last 3
terms of a conforming $n$-permutation. Note also that the direct sum of two
conforming permutations is conforming. Let $\pi$ be a conforming
$n$-permutation. Then we have the following cases:
* •
Case 1: $\pi_{n}=n$. So $\pi_{n}-n=0$, and $\pi$ is conforming if and only if
$\pi_{\Sigma}^{k}$ does not exceed 2 for all $k\in[n-1]$. That is, $\pi$ is of
the form $\alpha\oplus 1$ where $\alpha$ is a conforming $(n-1)$-permutation.
* •
Case 2: $\pi_{n-1}=n$. So $\pi_{n-1}-(n-1)=1$, and $\pi$ is conforming only if
$\pi_{\Sigma}^{n-2}=1$ or 0. We can further break down into cases:
1. (a)
If $\pi_{\Sigma}^{n-2}=0$, then $\pi_{[1,n-2]}$ must itself be an
$(n-2)$-permutation by Lemma 5.2. Then we must have $\pi_{n}=n-1$; that is,
$\pi$ is of the form $\alpha\oplus 21$ where $\alpha$ is a conforming
$(n-2)$-permutation.
2. (b)
If $\pi_{\Sigma}^{n-2}=1$ then $\pi_{\Sigma}^{n-1}=2$, so $\pi_{n}=n-2$.
* –
If $\pi_{\Sigma}^{n-3}=0$, then $\pi_{[1,n-3]}$ must itself be an
$(n-3)$-permutation by Lemma 5.2. Since $\pi_{n-2}=n-1$ and $\pi_{n}=n-2$,
$\pi$ must be of the form $\alpha\oplus 231$ where $\alpha$ is a conforming
$(n-3)$-permutation.
* –
If $\pi_{\Sigma}^{n-3}=1=\pi_{\Sigma}^{n-2}$, then $\pi_{n-2}=n-2$, which is
impossible since $\pi_{n}=n-2$.
* –
If $\pi_{\Sigma}^{n-3}=2$, then $\pi_{n-2}=n-3$. Since the partial sums of
signed displacements of $\pi$ cannot exceed 2 and
$\pi_{[n-2,n]}=(n-3)\,n\,(n-2)$, we must have $\pi_{n-3}=n-1$. This means that
$\pi_{\Sigma}^{n-4}=0$. Then $\pi_{[1,n-4]}$ must itself be an
$(n-4)$-permutation by Lemma 5.2, and $\pi$ is of the form $\alpha\oplus 3142$
where $\alpha$ is a conforming $(n-4)$-permutation.
* •
Case 3: $\pi_{n-2}=n$. So $\pi_{n-2}-(n-2)=2$, and $\pi$ is conforming if and
only if the $(n-3)$th partial sum of signed displacements is 0. By Lemma 5.2,
the factor $\pi_{[1,n-3]}$ must be an $(n-3)$-permutation itself. It is easy
to check that $321$ and $312$ are both conforming, so $\pi$ is of the form
$\alpha\oplus 321$ or $\alpha\oplus 312$, where $\alpha$ is a conforming
$(n-3)$-permutation.
∎
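The case analysis above can be checked exhaustively for small $n$: every conforming permutation of length at least 5 should end in one of the six suffixes of Theorem 5.3, placed on top of a conforming prefix. A Python sketch (ours, for verification only):

```python
from itertools import permutations

def conforming(p):
    """True iff no partial sum of signed displacements exceeds 2."""
    s = 0
    for j, v in enumerate(p, start=1):
        s += v - j
        if s > 2:
            return False
    return True

# The suffix patterns beta from Theorem 5.3.
SUFFIXES = [(1,), (2, 1), (2, 3, 1), (3, 1, 2), (3, 2, 1), (3, 1, 4, 2)]

def has_good_suffix(p):
    """Check p = alpha (+) beta with beta in SUFFIXES and alpha conforming."""
    n = len(p)
    for beta in SUFFIXES:
        m = len(beta)
        if m >= n:
            continue
        shifted = tuple(v + (n - m) for v in beta)  # beta placed on top of alpha
        if p[n - m:] == shifted and conforming(p[:n - m]):
            return True
    return False

for n in range(5, 9):
    for p in permutations(range(1, n + 1)):
        if conforming(p):
            assert has_good_suffix(p)
```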
###### Corollary 5.3.1.
Let $b_{n}:={\left|\mathfrak{S}_{n}\right|}$. Then
$b_{n}=\begin{cases}n!&\text{if }1\leq n\leq 3,\\\ 12&\text{if }n=4,\\\
b_{n-1}+b_{n-2}+3b_{n-3}+b_{n-4}&\text{if }n\geq 5.\end{cases}$
###### Proof.
It is easy to check that all $n$-permutations are conforming for $1\leq n\leq
3$. For $n=4$, there are exactly twelve conforming $n$-permutations, namely
1234, 1243, 1324, 1342, 1423, 1432, 2134, 2143, 2314, 3124, 3142 and 3214. For
$n\geq 5$, the recursive formula for $b_{n}$ follows directly from Theorem
5.3. ∎
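As with Corollary 5.1.1, the recursion for $b_{n}$ can be checked against a brute-force enumeration of $\mathfrak{S}_{n}$ (Python, ours):

```python
from itertools import permutations
import math

def conforming(p):
    """True iff no partial sum of signed displacements exceeds 2."""
    s = 0
    for j, v in enumerate(p, start=1):
        s += v - j
        if s > 2:
            return False
    return True

def b(n):
    """The recursion from Corollary 5.3.1."""
    if n <= 3:
        return math.factorial(n)
    if n == 4:
        return 12
    return b(n - 1) + b(n - 2) + 3 * b(n - 3) + b(n - 4)

for n in range(1, 9):
    brute = sum(conforming(p) for p in permutations(range(1, n + 1)))
    assert brute == b(n)
```

Since $a_{n}$ and $b_{n}$ satisfy the same recursion with the same initial values, this also confirms the size equality used in Theorem 5.4.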
### 5.3. The bijection
###### Theorem 5.4.
Define $\theta:Av_{n}(R_{4})\rightarrow\mathfrak{S}_{n}$ as
$\theta(\pi)=\begin{cases}\pi&\text{ if }{\left|\pi\right|}\in[3],\\\
\theta(\beta)\oplus 3142&\text{ if }\pi=2413\oplus\beta\text{ for some
(possibly empty) permutation }\beta,\text{ and }\\\
\theta(\beta)\oplus\alpha&\text{ if }\pi=\alpha\oplus\beta\text{ for some sum
indecomposable permutation }\alpha\neq 2413\\\ &\text{ and some (possibly
empty) permutation }\beta,\end{cases}$
where $\theta$ of the empty permutation is the empty permutation.
Then $\theta$ is a bijection.
###### Proof.
We know that $\theta$ is indeed a function from $Av_{n}(R_{4})$ to
$\mathfrak{S}_{n}$ by Theorems 5.1 and 5.3. It is clear from the definition
that $\theta$ is an injection. Corollaries 5.1.1 and 5.3.1 demonstrate that
${\left|\mathfrak{S}_{n}\right|}={\left|Av_{n}(R_{4})\right|}$ for all $n\geq
1$, so $\theta$ must be a bijection. ∎
## 6\. Simple permutations avoiding 2413, 3412 and 3421
Albert and Atkinson [2] proved that a permutation class with only finitely
many simple permutations has a readily computable algebraic generating
function and has a finite basis. So far, we have been dealing mostly with
avoidance sets that contain only finitely many simple permutations. In this
section, we will show an example of a finitely based permutation class that
contains infinitely many simple permutations, and we will construct an
algorithm that allows us to obtain the entire set of its simple permutations
recursively.
###### Definition 6.1.
Let $P$ be a pattern, a set of patterns, or a POP. Denote $Av_{n}^{S}(P)$ as
the set of simple $n$-permutations avoiding $P$.
###### Theorem 6.1.
The set of simple permutations avoiding $P:=\\{2413,\,3412,\,3421\\}$ is
enumerated by a translate of the well-known Fibonacci sequence. Specifically,
if $n\geq 3$, then ${\left|Av_{n}^{S}(P)\right|}=F(n-3)$ where $F(0)=0$,
$F(1)=1$ and $F(n)=F(n-1)+F(n-2)$ for all $n\geq 2$.
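Theorem 6.1 can be verified for small $n$ by brute force, testing simplicity (no nontrivial window of consecutive positions whose values are consecutive) and classical avoidance of the three basis patterns. A Python sketch (ours, not part of the proof):

```python
from itertools import permutations, combinations

def contains(p, q):
    """Classical containment: some subsequence of p reduces to the pattern q."""
    k = len(q)
    for idx in combinations(range(len(p)), k):
        vals = [p[i] for i in idx]
        order = sorted(range(k), key=lambda t: vals[t])
        red = [0] * k
        for rank, t in enumerate(order, start=1):
            red[t] = rank
        if tuple(red) == q:
            return True
    return False

def is_simple(p):
    """No interval of length strictly between 1 and n."""
    n = len(p)
    for i in range(n):
        for j in range(i + 1, n):
            if 1 < j - i + 1 < n:
                seg = p[i:j + 1]
                if max(seg) - min(seg) == j - i:
                    return False
    return True

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

BASIS = [(2, 4, 1, 3), (3, 4, 1, 2), (3, 4, 2, 1)]

for n in range(3, 9):
    count = sum(1 for p in permutations(range(1, n + 1))
                if is_simple(p) and not any(contains(p, q) for q in BASIS))
    assert count == fib(n - 3)
```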
### 6.1. Partitioning the simple permutations into 3 sets
In order to enumerate the set of simple permutations avoiding 2413, 3412 and
3421, we will identify the types of permutations that appear in it.
###### Lemma 6.2.
Let $\pi$ be a simple $n$-permutation that avoids $P$ for some $n\geq 4$. Then
$\pi_{n-1}=n$ and $\pi_{1}=n-1$.
###### Proof.
We first prove that $\pi_{n-1}=n$. Let $j$ and $k$ be in $[n]$ such that
$\pi_{k}=n$ and $\pi_{j}:=\max(\\{\pi_{1},\,\pi_{2},\,\dots,\,\pi_{k-1}\\})$.
Suppose there exists some $\ell>k$ such that $\pi_{\ell}<\pi_{j}$. Note that
$\text{red}(\pi_{j}\,\pi_{k}\,\pi_{\ell_{1}}\,\pi_{\ell_{2}})$ would be 3412
or 3421 if $\ell_{1}<\ell_{2}$ and both $\ell_{1}$ and $\ell_{2}$
satisfied the conditions for $\ell$. So $\ell$ must be unique. If $\ell\neq n$
then $\pi_{j}<\pi_{n}$ so
$\pi_{j}\,\pi_{k}\,\pi_{\ell}\,\pi_{n}=\pi_{j}\,n\,\pi_{\ell}\,\pi_{n}$
reduces to 2413. So $\ell=n$. We know that if $i>k$ and $i\neq\ell$ then
$\pi_{i}>\pi_{j}$, which implies that $\pi_{[k,n-1]}$ is an interval. Since
$\pi$ is simple and $k>1$, we must have $k=n-1$.
Next, we prove that $\pi_{1}=n-1$. Let $\pi$ be represented as the string of
factors $\alpha\,(n-1)\,\beta\,n\,k$ where $k\in[n-2]$. We know that $\beta$
cannot be empty, since then $(n-1)n$ would be a nontrivial interval in $\pi$.
Suppose $\alpha$ is not empty. Note that the subsequence
$\hat{\alpha}\,\hat{\beta}\,k$ must reduce to 123 or 132 or 231 for all
$\hat{\alpha}\in\alpha$ and $\hat{\beta}\in\beta$ by elimination (since $n-1$
is larger than the three points and $\hat{\alpha}\,(n-1)\,\hat{\beta}\,k$
cannot reduce to 2413, 3412 or 3421). The set of patterns
$\\{123,\,132,\,231\\}$ is equivalent to the POP of size three where $1<2$, so
the above observation implies that $\alpha<\beta$. If $\alpha>k$ then $\pi$
would be skew sum decomposable, while if $\alpha<k$ then $\pi$ would be sum
decomposable, both of which contradict the simpleness of $\pi$. But this
implies that $\beta>k$ which makes $(n-1)\,\beta\,n$ a nontrivial interval of
$\pi$. So $\alpha$ must be empty, meaning that $\pi_{1}=n-1$. ∎
###### Lemma 6.3.
Let $\pi$ be a simple $n$-permutation that avoids $P$ where $n\geq 5$. Then
exactly one of the following three cases holds:
1. a.
$\pi_{2}=1$ and $\pi_{3}=n-2$,
2. b.
$\pi_{2}=n-3$ and $\pi_{n-2}=n-2$, or
3. c.
$\pi_{2}=1$, $\pi_{3}=n-3$ and $\pi_{n-2}=n-2$.
###### Proof.
We know from Lemma 6.2 that $\pi_{n-1}=n$ and $\pi_{1}=n-1$. Write $\pi$ as
the string of factors $(n-1)\,\gamma\,n\,k$ where $k$ has length 1. If $k=n-2$
then $\gamma$ is an interval of length $n-3\geq 2$. So $k\in[n-3]$ and
$n-2\in\gamma$. Let $s:={\left|\gamma\right|}$ and $t\in[s]$ such that
$\gamma_{t}=n-2$.
Suppose $t<s$. We want to show that $t=2$ and $\gamma_{1}=1$, which will
satisfy case a. If $t=1$ then $\pi_{[1,2]}=(n-1)\,(n-2)$ is a nontrivial
interval, so $t>1$. So $t\in[2,s-1]$. Note that
$\gamma_{i}\,(n-2)\,\gamma_{j}\,k$ reduces to 1423, 1432 or 2431 for all
$1\leq i<t<j\leq s$ by elimination since $n-2$ is the largest of the four
points, and the subsequence cannot reduce to 2413, 3412 or 3421. This
observation implies that $\gamma_{[1,t-1]}<\gamma_{[t+1,s-1]}$. So
$\gamma_{[1,t-1]}<\gamma_{[t,s]}$. If $k<\gamma_{[t,s]}$ then $\gamma_{[t,s]}$
is a nontrivial interval. So $\gamma_{[1,t-1]}<k$ which means that
$\gamma_{[1,t-1]}$ is an interval. By the simpleness of $\pi$ we have $t=2$.
Since $\gamma_{[1,t-1]}=\gamma_{1}$ is smaller than all the other points,
$\gamma_{1}=1$.
Now suppose that $t=s$. That is, $\pi_{n-2}=n-2$. If $k=n-3$ then $\gamma$ is
an interval of length 1. This is only possible if $\pi=41352$, which satisfies
case a. So if $n\geq 6$ then $n-3$ is in $\gamma$ instead of $k$. Let $n\geq
6$ and $m\in[s]$ such that $\gamma_{m}=n-3$. If $m=1$ then $\pi$ satisfies
case b. So suppose $m>1$. We know that $m\neq s-1$ since that would mean that
$\gamma_{[s-1,s]}=(n-3)\,(n-2)$ is a nontrivial interval in $\pi$. So
$m\in[2,s-2]$. Note that $\gamma_{i}\,(n-3)\,\gamma_{j}\,k$ reduces to 1423,
1432 or 2431 for all $1\leq i<m<j\leq s-1$, since $n-3$ is the largest of the
four points and the subsequence cannot reduce to 2413, 3412 or 3421. This
observation implies that $\gamma_{[1,m-1]}<\gamma_{[m+1,s-1]}$. So
$\gamma_{[1,m-1]}<\gamma_{[m,s]}<n-1$. If $k<\gamma_{[m,s]}$ then
$\gamma_{[m,s]}$ is a nontrivial interval. So $\gamma_{[1,m-1]}<k$ which means
that $\gamma_{[1,m-1]}$ is an interval. By the simpleness of $\pi$ we have
$m=2$. Since $\gamma_{[1,m-1]}=\gamma_{1}$ is smaller than all the other
points, $\gamma_{1}=1$.
∎
By Lemma 6.2 and 6.3, we can partition the set of simple permutations of
length $n$ avoiding $P$ by their initial terms.
###### Definition 6.2.
For $n\geq 4$, define $A_{n}$, $B_{n}$ and $C_{n}$ to be the set of simple
$n$-permutations avoiding $P$ beginning with $(n-1)\,1\,(n-2)$, $(n-1)\,(n-3)$
and $(n-1)\,1\,(n-3)$ respectively.
Denote their cardinalities by $a_{n}$, $b_{n}$ and $c_{n}$.
###### Remark 2.
The above definition holds for all $n\geq 4$ even if $A_{n}$, $B_{n}$ or
$C_{n}$ are empty. Table 10 summarizes the types of permutations in these sets
due to Lemmas 6.2 and 6.3. Table 12 gives sample values of $a_{n}$, $b_{n}$
and $c_{n}$ for $4\leq n\leq 8$.
Set | Permutations in the set are of the form
---|---
$A_{n}$ | $(n-1)\,1\,(n-2)\,\cdots\,n\,k$, where $k\in[2,n-3]$
$B_{n}$ | 3142 or $(n-1)\,(n-3)\,\cdots\,(n-2)\,n\,k$, where $k\in[2,n-4]$
$C_{n}$ | $(n-1)\,1\,(n-3)\,\cdots\,(n-2)\,n\,k$, where $k\in[2,n-4]$
Table 10. Summary of the types of permutations in $Av_{n}^{S}(2413,3412,3421)$
for $n\geq 4$. The ellipses represent (possibly empty) factors of the
permutations.
### 6.2. Recursive functions
We will show that for all $n\geq 6$, we can obtain simple permutations of
length $n$ that avoid $P$ by adding points to smaller simple permutations that
avoid $P$.
###### Definition 6.3.
For $n\geq 6$, let $S_{n}$ denote the set of permutations on $[n]$, and let
$\displaystyle f_{A}:Av_{n-2}^{S}(P)\rightarrow S_{n},\quad
f_{B}:Av_{n-2}^{S}(P)\rightarrow S_{n},\quad\text{and}\quad
f_{C}:B_{n-1}\rightarrow S_{n},$
be functions defined as follows:
$\displaystyle f_{A}(\pi_{1}\pi_{2}\cdots\pi_{n-2})$
$\displaystyle:=(n-1)\,1\,(\pi_{1}+1)\,(\pi_{2}+1)\,\cdots\,(\pi_{n-3}+2)\,(\pi_{n-2}+1),$
$\displaystyle f_{B}(\pi_{1}\pi_{2}\cdots\pi_{n-2})$
$\displaystyle:=(n-1)\,\pi_{1}\,\pi_{2}\,\cdots\,\pi_{n-3}\,n\,\pi_{n-2},\quad\text{and}$
$\displaystyle f_{C}(\pi_{1}\pi_{2}\cdots\pi_{n-1})$
$\displaystyle:=(\pi_{1}+1)\,1\,(\pi_{2}+1)\,\cdots\,(\pi_{n-1}+1).$
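Definition 6.3 can be sketched directly in code (our 1-indexed tuple convention; note that in $f_{A}$ only the $(n-3)$rd entry is shifted up by two):

```python
def f_A(p):
    # p has length n-2; the image starts (n-1) 1 and shifts p up by one,
    # except the second-to-last entry, which is shifted up by two.
    n = len(p) + 2
    body = [x + 1 for x in p]
    body[-2] += 1
    return tuple([n - 1, 1] + body)

def f_B(p):
    # insert n-1 at the front and n just before the last entry
    n = len(p) + 2
    return tuple([n - 1] + list(p[:-1]) + [n, p[-1]])

def f_C(p):
    # p has length n-1; shift all values up by one and insert 1 in position 2
    return tuple([p[0] + 1, 1] + [x + 1 for x in p[1:]])
```

Consistently with Table 11, `f_A((3, 1, 4, 2))` gives 514263, `f_B((3, 1, 4, 2))` gives 531462, and `f_C` applied to 531462 gives 6142573.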
###### Definition 6.4.
For $n\geq 6$, let
$\displaystyle g_{A}:A_{n}\rightarrow S_{n-2},\quad g_{B}:B_{n}\rightarrow
S_{n-2},\quad\text{and}\quad g_{C}:C_{n}\rightarrow S_{n-1},$
be functions defined as
$\displaystyle g_{A}(\sigma)$
$\displaystyle:=(\sigma_{3}-1)\,(\sigma_{4}-1)\,\cdots\,(\sigma_{n-2}-1)\,(\sigma_{n-1}-2)\,(\sigma_{n}-1),$
$\displaystyle g_{B}(\sigma)$
$\displaystyle:=\sigma_{2}\,\sigma_{3}\,\cdots\,\sigma_{n-2}\,\sigma_{n},$
$\displaystyle g_{C}(\sigma)$
$\displaystyle:=(\sigma_{1}-1)\,(\sigma_{3}-1)\,(\sigma_{4}-1)\,\cdots\,(\sigma_{n}-1).$
where $\sigma=\sigma_{1}\sigma_{2}\cdots\sigma_{n}\in Av_{n}^{S}(P)$. Note
that $g_{A}(\sigma)=\text{red}(\sigma_{[3,n]})$,
$g_{B}(\sigma)=\text{red}(\sigma_{[2,n-1]}\,\sigma_{n})$ and
$g_{C}(\sigma)=\text{red}(\sigma_{1}\,\sigma_{[3,n]})$.
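A sketch of Definition 6.4 in the same convention; on the examples from Table 11 these maps recover the smaller simple permutations:

```python
def g_A(s):
    # drop the first two entries and reduce; only sigma_{n-1} = n drops by 2
    out = [x - 1 for x in s[2:]]
    out[-2] -= 1
    return tuple(out)

def g_B(s):
    # drop the first entry and the second-to-last entry (which is n)
    return tuple(list(s[1:-2]) + [s[-1]])

def g_C(s):
    # drop the second entry (which is 1) and reduce
    return tuple([s[0] - 1] + [x - 1 for x in s[2:]])
```

For instance, `g_A((5, 1, 4, 2, 6, 3))` returns `(3, 1, 4, 2)`, inverting `f_A` on this example.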
###### Lemma 6.4.
If $\pi\in Av_{n-2}^{S}(P)$ and $n\geq 6$, then $f_{A}(\pi)$ is simple and
avoids $P$.
###### Proof.
Recall that $f_{A}:Av_{n-2}^{S}(P)\rightarrow Av_{n}^{S}(P)$ and
$f_{A}(\pi_{1}\,\pi_{2}\,\cdots\,\pi_{n-2}):=(n-1)\,1\,(\pi_{1}+1)\,(\pi_{2}+1)\,\cdots\,(\pi_{n-3}+2)\,(\pi_{n-2}+1).$
Suppose that $f_{A}(\pi)_{[i,j]}$ is an interval for some $i,\,j\in[n]$. The
following cases show that all intervals in $f_{A}(\pi)$ are of length 0, 1 or
$n$ thereby proving that $f_{A}(\pi)$ is simple:
* •
Case 1: $i=1$. Recall that $f_{A}(\pi)_{1}=n-1$ and $f_{A}(\pi)_{2}=1$, so if
$j\geq 2$, then $f_{A}(\pi)_{[i,j]}$ must contain all the numbers in $[n-1]$
and so must have length at least $n-1$. Observe that
$f_{A}(\pi)_{n-1}=\pi_{n-3}+2=n$. So $f_{A}(\pi)_{[1,j]}$ must contain $[n]$,
i.e. $j=n$.
* •
Case 2: $i=2$. Observe that $f_{A}(\pi)_{2}=1$, so $f_{A}(\pi)_{[2,j]}$ is the
interval $[m]$ for some $m\in[n]$. If $j>2$, then $f_{A}(\pi)_{[3,j]}$ is the
interval $[2,m]$. Recall that
$f_{A}(\pi)_{[3,j]}=(\pi_{1}+1)\,(\pi_{2}+1)\,\cdots\,(\pi_{j-2}+1)$, so
$\pi_{[1,j-2]}$ is an interval as well, and is a trivial interval only if
$j\leq 3$. We know that $f_{A}(\pi)_{[2,3]}=1\,(n-2)$ is not an interval, so
$j\leq 2$.
* •
Case 3: $i\geq 3$. If $j<n-1$ then
$f_{A}(\pi)_{[i,j]}=(\pi_{i-2}+1)\,(\pi_{i-1}+1)\,\cdots\,(\pi_{j-2}+1)$. So
$f_{A}(\pi)_{[i,j]}$ is an interval only if $\pi_{[i-2,j-2]}$ is an interval,
which is true only if $j\leq i$ since $j-2<n-3$. Note that
$f_{A}(\pi)_{n-1}=\pi_{n-3}+2=n$. So if $j\geq n-1$ then $f_{A}(\pi)_{[i,j]}$
contains the point $n$. This means that $f_{A}(\pi)_{[i,j]}$ can only be a
nontrivial interval if it contains $\pi_{1}=n-1$ as well, which is impossible
since $i\geq 3$. So $f_{A}(\pi)_{[i,j]}$ is an interval only if $j\leq i$.
Finally, we check that $f_{A}(\pi)$ avoids $P$. Suppose we have $1\leq
i<j<k<\ell\leq n$ such that
$f_{A}(\pi)_{i}\,f_{A}(\pi)_{j}\,f_{A}(\pi)_{k}\,f_{A}(\pi)_{\ell}$ reduces to
a pattern in $P$. Since $\pi$ avoids $P$, such a subsequence must contain the
value $n-1$ or the value $1$. If it contains $n-1=f_{A}(\pi)_{1}$, then $n-1$
is the first letter of the subsequence and must play the role of 2 or 3, so
the subsequence also contains a larger value; the only term larger than $n-1$
is $n=\pi_{n-3}+2=f_{A}(\pi)_{n-1}$ by Lemma 6.2. But every pattern in $P$ has
its largest term 4 as the third last letter, whereas only one term of
$f_{A}(\pi)$ follows position $n-1$. So the subsequence cannot contain $n-1$.
On the other hand, 1 is the second term of $f_{A}(\pi)$, but is the third or
fourth letter in the patterns in $P$. So it cannot contain $1$ either. Hence
$f_{A}(\pi)$ avoids $P$. ∎
###### Lemma 6.5.
If $\pi\in Av_{n-2}^{S}(P)$ and $n\geq 6$, then $f_{B}(\pi)$ is simple and
avoids $P$.
###### Proof.
Recall that $f_{B}:Av_{n-2}^{S}(P)\rightarrow Av_{n}^{S}(P)$, and
$f_{B}(\pi_{1}\pi_{2}\cdots\pi_{n-2}):=(n-1)\,\pi_{1}\,\pi_{2}\,\cdots\,\pi_{n-3}\,n\,\pi_{n-2}.$
Suppose that $f_{B}(\pi)_{[i,j]}$ is an interval for some $i,\,j\in[n]$. The
following cases show that all intervals in $f_{B}(\pi)$ are of length 0, 1 or
$n$ thereby proving that $f_{B}(\pi)$ is simple:
* •
Case 1: $2\leq i,\,j\leq n-2$. Observe that
$f_{B}(\pi)_{[i,j]}=\pi_{[i-1,j-1]}$ is an interval only if $i\geq j$, since
$\pi$ is simple.
* •
Case 2: $i=1\leq j\leq n-2$. Observe that
$f_{B}(\pi)_{\ell}<f_{B}(\pi)_{1}=n-1$ for all $\ell\in[2,j]$. So the factor
$f_{B}(\pi)_{[2,j]}=\pi_{[1,j-1]}$ must also be an interval. Since $\pi$ is
simple, and $j-1<n-2$, the factor $\pi_{[1,j-1]}$ is an interval only if
$j\leq 2$. We know that $f_{B}(\pi)_{[1,2]}=(n-1)\pi_{1}=(n-1)\,(n-3)$ is not
an interval, so we must have $j=1$.
* •
Case 3: $n-1\leq j$. Observe that $f_{B}(\pi)_{n-1}=n$. If $i<j$ then
$f_{B}(\pi)_{[i,j]}$ must also contain $n-1=f_{B}(\pi)_{1}$, so $i=1$ and the
factor has length at least $n-1$. But $f_{B}(\pi)_{[1,n-1]}$ is not an
interval since it omits $f_{B}(\pi)_{n}=\pi_{n-2}$, which is in $[2,n-2]$.
This implies that $i=1$ and $j=n$. Otherwise $i\geq j$.
Finally, we prove that $f_{B}(\pi)$ avoids $P$. Suppose we have $1\leq
i<j<k<\ell\leq n$ such that
$f_{B}(\pi)_{i}\,f_{B}(\pi)_{j}\,f_{B}(\pi)_{k}\,f_{B}(\pi)_{\ell}$ reduces to
a pattern in $P$. Then such a subsequence must contain the number
$n-1=f_{B}(\pi)_{1}$ or $n=f_{B}(\pi)_{n-1}$ since we know that $\pi$ avoids
$P$. Clearly it cannot contain the number $n$, since $n$ is the second last
term of the permutation, but 4 never occurs as the last or second last letter
of 2413, 3412 or 3421. Then we must have $i=1$. But no pattern in $P$ starts
with 4, so $f_{B}(\pi)$ indeed avoids $P$. ∎
###### Lemma 6.6.
If $\pi\in B_{n-1}$ and $n\geq 6$, then $f_{C}(\pi)$ is simple and avoids $P$.
###### Proof.
Recall that $f_{C}:B_{n-1}\rightarrow Av_{n}^{S}(P)$ and
$f_{C}(\pi_{1}\,\pi_{2}\,\cdots\,\pi_{n-1}):=(\pi_{1}+1)\,1\,(\pi_{2}+1)\,\cdots\,(\pi_{n-1}+1).$
It is easy to see that $f_{C}(\pi)$ avoids $P$. The only way that $f_{C}(\pi)$
could contain a pattern in $P$ due to the addition of the point 1 is if at
least two terms precede it, since 1 occurs as the third or fourth letter of
every pattern in $P$. However, it is inserted in the second position, so
$f_{C}(\pi)$ does not contain any pattern in $P$.
Suppose that $f_{C}(\pi)_{[i,j]}$ is an interval for some $i,\,j\in[n]$. The
following cases show that all intervals in $f_{C}(\pi)$ are of length 0, 1 or
$n$, thereby proving that $f_{C}(\pi)$ is simple:
* •
Case 1: $i=1$. Suppose $j\geq 2$. Observe that $f_{C}(\pi)_{1}=\pi_{1}+1=n-1$
and $f_{C}(\pi)_{2}=1$. Then the interval $f_{C}(\pi)_{[i,j]}$ must contain
all the numbers in $[n-1]$. This includes $f_{C}(\pi)_{n}=\pi_{n-1}+1$ which
is in $[3,n-4]$ for $n\geq 6$. This is because $\pi_{n-1}$ cannot be $n-4$,
$n-3$ or $n-2$ by Lemmas 6.2 and 6.3, and is not $1$ or $n-1$ since $\pi$ is
simple. So $j=n$.
* •
Case 2: $i=2$. Observe that $f_{C}(\pi)_{2}=1$ and
$f_{C}(\pi)_{3}=\pi_{2}+1=n-3$. If $j>2$, then $f_{C}(\pi)_{[i,j]}$ is an
interval only if it contains $[n-3]$. So the factor must have length at least
$n-3$, meaning that $j\geq n-2$. Observe that
$f_{C}(\pi)_{n-2}=\pi_{n-3}+1=n-2$ by Lemma 6.3, so $f_{C}(\pi)_{[i,j]}$ must
contain $[n-2]$, and in particular $f_{C}(\pi)_{n}=\pi_{n-1}+1\in[3,n-4]$.
Hence $j=n$, and the factor also contains $f_{C}(\pi)_{n-1}=\pi_{n-2}+1=n$, so
it must contain $[n]$. However, this is impossible since it omits
$f_{C}(\pi)_{1}=\pi_{1}+1=n-1$. So $j\leq 2$.
* •
Case 3: $i\geq 3$. Observe that
$f_{C}(\pi)_{[i,j]}=(\pi_{i-1}+1)\,(\pi_{i}+1)\,\cdots\,(\pi_{j-1}+1)$. Since
$\pi$ is simple, the factor $\pi_{[i-1,j-1]}$ is an interval only if $i\geq
j$. So $i\geq j$.
∎
###### Lemma 6.7.
If $\sigma=(n-1)\,1\,(n-2)\,\cdots\,n\,k\in A_{n}$ and $n\geq 6$, then
$g_{A}(\sigma)$ is simple and avoids $P$.
###### Proof.
The mapping $g_{A}$ removes the first two entries and then reduces the result.
Since $g_{A}(\sigma)$ is a reduced subsequence of $\sigma$, it avoids $P$.
Suppose that $g_{A}(\sigma)$ is not simple. Then there exists
$[a,b]\subsetneq[3,n]$ and $[c,d]\subseteq[2,n]$ with $d\neq n-1$ and $b\geq
a+2$ such that $\\{\sigma_{i}:i\in[a,b]\\}=[c,d]^{0}$ where
$[c,d]^{0}:=[c,d]\setminus\\{n-1\\}$.
If $d=n$ then $n-2\in[c,d]^{0}$. Hence $a=3$ and
$2\in\\{\sigma_{i}:i\in[a,b]\\}=[c,d]^{0}$, which implies that
$k=\sigma_{n}\in[c,d]^{0}$ as well. But we have excluded the case $[a,b]=[3,n]$
since it is a trivial interval for $g_{A}(\sigma)$. This shows that $d\leq
n-2$. But then $[c,d]^{0}=[c,d]$ and $\\{\sigma_{i}:i\in[a,b]\\}=[c,d]$ is a
nontrivial interval for $\sigma$, contradicting the simpleness of $\sigma$.
∎
###### Lemma 6.8.
If $\sigma\in B_{n}$ and $n\geq 6$, then $g_{B}(\sigma)$ is simple and avoids
$P$.
###### Proof.
The mapping $g_{B}$ removes the first entry and the second last entry of
$\sigma$. Since $g_{B}(\sigma)$ is a reduced subsequence of the permutation
$\sigma$, it avoids $P$.
Suppose that $g_{B}(\sigma)$ is not simple. Then there exists
$[a,b]\subsetneq[2,n]$ and $[c,d]\subsetneq[1,n-2]$ with $b\neq n-1$ and $c<d$
such that $\\{\sigma_{i}:i\in[a,b]^{0}\\}=[c,d]$ where
$[a,b]^{0}:=[a,b]\setminus\\{n-1\\}$. If $b=n$ then $n-2\in[a,b]^{0}$.
Therefore $n-2=\sigma_{n-2}\in[c,d]$ and that implies that $n-3\in[c,d]$.
Hence $a=2$, which means $[a,b]=[2,n]$. This contradiction implies that
$b\leq n-2$. But then $[a,b]^{0}=[a,b]$ and
$\\{\sigma_{i}:i\in[a,b]\\}=[c,d]$ is a nontrivial interval for $\sigma$,
contradicting the simpleness of $\sigma$.
∎
###### Lemma 6.9.
If $\sigma\in C_{n}$ and $n\geq 6$, then $g_{C}(\sigma)$ is simple and avoids
$P$.
###### Proof.
The mapping $g_{C}$ removes the second entry (which is 1) and then reduces the
result. It is not hard to see that $g_{C}(\sigma)$ is a reduced subsequence of
the permutation $\sigma$, so it must avoid $P$.
Suppose that $g_{C}(\sigma)$ is not simple. Then there exists
$[a,b]\subsetneq[1,n]$ and $[c,d]\subsetneq[2,n]$ with $a\neq 2$ and $d\geq
c+2$ such that $\\{\sigma_{i}:i\in[a,b]^{0}\\}=[c,d]$ where
$[a,b]^{0}:=[a,b]\setminus\\{2\\}$.
If $a=1$ then $n-1\in[c,d]$. Therefore $n-2$ or $n$ lies in $[c,d]$. Thus
$b\geq n-3$ and this implies $2\in\\{\sigma_{i}:i\in[a,b]^{0}\\}=[c,d]$.
Therefore $k=\sigma_{n}\in[c,d]$ and thus $b=n$, but we have excluded the case
where $[a,b]=[1,n]$. This contradiction implies that $a\geq 3$. But then
$[a,b]^{0}=[a,b]$ and $\\{\sigma_{i}:i\in[a,b]\\}=[c,d]$ is a nontrivial
interval for $\sigma$, contradicting the simpleness of $\sigma$.
∎
$n$ | $A_{n}:=$ permutations that start with $(n-1)\,1\,(n-2)$ | $B_{n}:=$ permutations that start with $(n-1)\,(n-3)$ | $C_{n}:=$ permutations that start with $(n-1)\,1\,(n-3)$
---|---|---|---
4 | - | 3142 | -
5 | 41352 | - | -
6 | 514263 | 531462 | -
7 | 6152473 | 6413572 | 6142573
8 | 71625384, 71642583 | 75142683, 75314682 | 71524683
Table 11. Simple $n$-permutations avoiding $2413,\,3412,\,3421$ for $4\leq n\leq 8$

$n$ | $Av_{n}^{S}(P)$ | ${\left|Av_{n}^{S}(P)\right|}$ | $a_{n}$ | $b_{n}$ | $c_{n}$
---|---|---|---|---|---
4 | 3142 | 1 | 0 | 1 | 0
5 | 41352 | 1 | 1 | 0 | 0
6 | 514263, 531462 | 2 | 1 | 1 | 0
7 | 6152473, 6142573, 6413572 | 3 | 1 | 1 | 1
8 | 71625384, 71642583, 71524683, 75142683, 75314682 | 5 | 2 | 2 | 1
$k\geq 6$ | $Av_{k}^{S}(P)$ | $F(k-3)$ | $F(k-5)$ | $F(k-5)$ | $F(k-6)$
Table 12. The number of simple permutations avoiding $2413,\,3412,\,3421$ for
$4\leq n\leq 8$, together with the general formulas for $k\geq 6$.
### 6.3. Proof of Theorem 6.1
It is well known that there are no simple permutations of length $3$, so it is
obvious that ${\left|Av_{3}^{S}(P)\right|}=0=F(0)$. For $n=4,\,5,\,6,\,7$, the
values in Table 12 are easy to verify.
We proceed to prove the enumeration for $n\geq 6$ via induction. Recall that
every simple $n$-permutation avoiding $P$ begins with $n-1$ by Lemma 6.2. So
for all $n\geq 6$, if $\pi\in Av_{n-2}^{S}(P)$ then we have
$\displaystyle f_{A}(\pi_{1}\pi_{2}\cdots\pi_{n-2})_{[1,3]}=(n-1)\,1\,(\pi_{1}+1)=(n-1)\,1\,(n-2)\quad\text{and}$
$\displaystyle f_{B}(\pi_{1}\pi_{2}\cdots\pi_{n-2})_{[1,2]}=(n-1)\,\pi_{1}=(n-1)\,(n-3),$
and for all $n\geq 6$ and $\pi\in B_{n-1}$ we have
$\displaystyle f_{C}(\pi_{1}\pi_{2}\cdots\pi_{n-1})_{[1,3]}=(n-1)\,1\,(\pi_{2}+1)=(n-1)\,1\,(n-3).$
Therefore, from Lemmas 6.4, 6.5 and 6.6,
$\displaystyle\pi\in Av_{n-2}^{S}(P)$ $\displaystyle\implies f_{A}(\pi)\in
A_{n}\quad\text{and}\quad f_{B}(\pi)\in B_{n},$
$\displaystyle\text{and}\quad\pi\in B_{n-1}$ $\displaystyle\implies
f_{C}(\pi)\in C_{n},$
which means that $f_{A}(Av_{n-2}^{S}(P))\subseteq A_{n}$,
$f_{B}(Av_{n-2}^{S}(P))\subseteq B_{n}$ and $f_{C}(B_{n-1})\subseteq C_{n}$.
On the other hand, Lemmas 6.7, 6.8 and 6.9 clearly show that
$\displaystyle\sigma\in A_{n}$ $\displaystyle\implies g_{A}(\sigma)\in
Av_{n-2}^{S}(P),\quad$ $\displaystyle\sigma\in B_{n}$
$\displaystyle\implies g_{B}(\sigma)\in Av_{n-2}^{S}(P),\quad\text{and}\quad$
$\displaystyle\sigma\in C_{n}$ $\displaystyle\implies g_{C}(\sigma)\in
B_{n-1},$
which means that $g_{A}(A_{n})\subseteq Av_{n-2}^{S}(P)$,
$g_{B}(B_{n})\subseteq Av_{n-2}^{S}(P)$ and $g_{C}(C_{n})\subseteq B_{n-1}$.
Therefore, $g_{A}$, $g_{B}$ and $g_{C}$ are in fact the inverses of $f_{A}$,
$f_{B}$ and $f_{C}$ respectively. Moreover, we claim that for all $n\geq 6$,
$a_{n}={\left|Av_{n-2}^{S}(P)\right|},\quad
b_{n}={\left|Av_{n-2}^{S}(P)\right|},\quad\text{and}\quad c_{n}=b_{n-1}.$
Suppose that for some $k\geq 8$,
$a_{\ell}=F(\ell-5),\quad b_{\ell}=F(\ell-5),\quad\text{and}\quad
c_{\ell}=F(\ell-6)$
for all $\ell\in\\{6,7,\dots,k-1\\}$; the cases $\ell=6,\,7$ are verified
directly in Table 12. Then from the statements above,
$\displaystyle a_{k}$ $\displaystyle={\left|Av_{k-2}^{S}(P)\right|}$
$\displaystyle=a_{k-2}+b_{k-2}+c_{k-2}$ $\displaystyle=F(k-7)+F(k-7)+F(k-8)$
$\displaystyle=F(k-7)+F(k-6)$ $\displaystyle=F(k-5),$
$\displaystyle\text{so}\quad b_{k}={\left|Av_{k-2}^{S}(P)\right|}$
$\displaystyle=F(k-5)\quad\text{and}\quad c_{k}=b_{k-1}=F(k-6).$
So the claim is true by induction. Therefore, for all $n\geq 6$,
$\displaystyle{\left|Av_{n}^{S}(P)\right|}$ $\displaystyle=a_{n}+b_{n}+c_{n}$
$\displaystyle=F(n-5)+F(n-5)+F(n-6)$ $\displaystyle=F(n-5)+F(n-4)$
$\displaystyle=F(n-3).$
This concludes the proof of Theorem 6.1. ∎
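The enumeration can be checked by brute force for small $n$ (a verification sketch, not part of the proof; it simply counts simple permutations avoiding 2413, 3412, 3421 and compares against $F(n-3)$):

```python
from itertools import combinations, permutations

P = [(2, 4, 1, 3), (3, 4, 1, 2), (3, 4, 2, 1)]

def contains(p, patt):
    """Does p contain the classical pattern patt?"""
    for idx in combinations(range(len(p)), len(patt)):
        vals = [p[i] for i in idx]
        ranks = sorted(vals)
        if tuple(ranks.index(v) + 1 for v in vals) == patt:
            return True
    return False

def is_simple(p):
    # a factor p[i:j] is an interval iff its values are consecutive
    n = len(p)
    return not any(max(p[i:j]) - min(p[i:j]) == j - i - 1
                   for i in range(n) for j in range(i + 2, n + 1)
                   if j - i < n)

def fib(m):
    # Fibonacci numbers with F(1) = F(2) = 1
    a, b = 0, 1
    for _ in range(m):
        a, b = b, a + b
    return a

counts = [sum(1 for p in permutations(range(1, n + 1))
              if is_simple(p) and not any(contains(p, q) for q in P))
          for n in range(4, 9)]
```

The counts come out as 1, 1, 2, 3, 5 for $n=4,\dots,8$, agreeing with Table 12 and with $F(n-3)$.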
## 7\. Summary
In this paper, we elucidated connections between the avoidance sets of some
POPs and other combinatorial objects by constructing explicit bijections
between the relevant sets, as a direct response to five of the 15 open
questions posed by Gao and Kitaev [5]. These bijections were derived primarily
by analysing the simple permutations of the avoidance sets and how the rest of
the set could be obtained from their inflations. This was made possible by
illustrating the permutation matrices as lattice matrices, which is a novel
concept introduced in this paper. The bijections constructed in this paper are
a testament to the fundamental role that simple permutations play in the study
of pattern-avoiding permutations. It also demonstrates the intricate
connections that avoidance sets of POPs have with many other combinatorial
objects, and provides a way to relate seemingly disparate combinatorial
objects through their connections to the family of POPs. We also enumerated
the number of simple $n$-permutations avoiding the patterns 2413, 3412 and
3421 for all $n$, giving a concrete example of an avoidance set with a finite
basis and infinitely many simple permutations.
## 8\. Further Work
The remaining ten questions posed by Gao and Kitaev [5] remain open. Given the
bijections we have constructed, it would be interesting to know whether they
can be generalized further, by studying generalizations of the POPs or of the
combinatorial objects. The following questions are natural extensions of the
problem that was discussed in Section 4:
1. 1.
Are there any combinatorial objects that have a natural bijective relationship
with the avoidance set of $P_{k}$ for any $k\geq 5$?
2. 2.
Is there a POP whose avoidance set is in bijection with the levels in
compositions of ones, twos and threes?
3. 3.
Are there any combinatorial objects that have a natural bijective relationship
with the avoidance set of the POP with $k$ elements labelled
$1,\,2,\,\dots,\,k$ where $1>3>5$, or, more generally, with $1>3>\cdots>2i+1$
for some $i\geq 2$?
The enumeration of $Av_{n}(R_{k})$ for $k\geq 6$, $n\geq 1$, where $R_{k}$ is
defined in Section 5, is an open question. It could also be interesting to
enumerate $\mathfrak{S}_{k,n}$, which we define as the set of permutations
whose partial sums of signed displacements do not exceed $k$, for all $k\geq
3$, and check if there exist any $k$ and $\ell$ such that
${\left|\mathfrak{S}_{k,n}\right|}={\left|Av_{n}(R_{\ell})\right|}$ for all
$n\geq 1$. Finally, Gao and Kitaev [5] observed that sequence
${\left|Av_{n-1}(R_{4})\right|}_{n\geq 2}$ corresponds to sequence A232164 as
well. The latter sequence counts the number of Weyl group elements, not
containing an $s_{r}$ factor, which contribute nonzero terms to Kostant’s
weight multiplicity formula when computing the multiplicity of the zero-weight
in the adjoint representation for the Lie algebra of type $C$ and rank $n$.
Using our analysis on the set $Av(R_{4})$, one may be able to construct a
natural bijection between these two sets more easily.
During our study of the simple permutations that avoid the patterns 2413, 3412
and 3421, we discovered using the PermLab software that the addition of the
pattern 2431 to the basis does not change the set of simple permutations for
small $n$. It can be proved that the simple permutations constructed by the
recursive functions to build the set $Av_{n}^{S}(2413,3412,3421)$ indeed avoid
2431. This observation leads us to an interesting question: Which avoidance
sets have the same set of simples?
## References
* [1] Michael Albert. PermLab. 2012. URL : http://www.cs.otago.ac.nz/PermLab/
* [2] M.H. Albert and M.D. Atkinson. _Simple Permutations and Pattern Restricted Permutations_. Discrete Mathematics 300. 1-3 (2005), pp. 1–15.
* [3] Joe Buhler, David Eisenbud, Ron Graham and Colin Wright. _Juggling Drops and Descents_. The American Mathematical Monthly 101.6 (1994), pp. 507–519. ISSN : 00029890, 19300972. URL : http://www.jstor.org/stable/2975316 .
* [4] Fan Chung and Ron Graham. _Primitive Juggling Sequences_. The American Mathematical Monthly 115.3 (2008), pp. 185–194. URL : http://www.jstor.com/stable/27642443 .
* [5] Alice Gao and Sergey Kitaev. _On Partially Ordered Patterns of Length 4 and 5 in Permutations_. The Electronic Journal Of Combinatorics 26.3 (2019). DOI : https://doi.org/10.37236/8605 .
* [6] Sergey Kitaev. _Patterns in Permutations and Words_. Springer Berlin Heidelberg, 2011. DOI : https://doi.org/10.1007/978-3-642-17333-2 .
* [7] J.K. Percus. _Combinatorial Methods_ , Applied Mathematical Sciences #4. New York: Springer-Verlag, 1971. ISBN : 978-0-387-90027-8.
* [8] Akihiro Nozaki, Masahiro Miyakawa, Grant Pogosyan and Ivo G. Rosenberg. _The number of orthogonal permutations_. European Journal of Combinatorics 16.1 (1995), pp. 71–85. DOI : https://doi.org/10.1016/0195-6698(95)90091-8
* [9] N. J. A. Sloane. _The Online Encyclopedia of Integer Sequences_. URL : https://oeis.org
# Aharonov-Bohm effect in three-dimensional higher-order topological
insulators
Kun Luo National Laboratory of Solid State Microstructures and school of
Physics, Nanjing University, Nanjing, 210093, China Hao Geng National
Laboratory of Solid State Microstructures and school of Physics, Nanjing
University, Nanjing, 210093, China Collaborative Innovation Center of
Advanced Microstructures, Nanjing University, Nanjing 210093, China Li Sheng
National Laboratory of Solid State Microstructures and school of Physics,
Nanjing University, Nanjing, 210093, China Collaborative Innovation Center of
Advanced Microstructures, Nanjing University, Nanjing 210093, China Wei Chen
Corresponding author<EMAIL_ADDRESS>National Laboratory of Solid State
Microstructures and school of Physics, Nanjing University, Nanjing, 210093,
China Collaborative Innovation Center of Advanced Microstructures, Nanjing
University, Nanjing 210093, China D. Y. Xing National Laboratory of Solid
State Microstructures and school of Physics, Nanjing University, Nanjing,
210093, China Collaborative Innovation Center of Advanced Microstructures,
Nanjing University, Nanjing 210093, China
###### Abstract
The 1D hinge states are the hallmark of the 3D higher-order topological
insulators (HOTI), which may lead to interesting transport properties. Here,
we study the Aharonov-Bohm (AB) effect in the interferometer constructed by
the hinge states in the normal metal-HOTI junctions with a transverse magnetic
field. We show that the AB oscillation of the conductance can clearly manifest
the spatial configurations of such hinge states. The magnetic fluxes encircled
by various interfering loops are composed of two basic ones, so that the
oscillation of the conductance by varying the magnetic field contains
different frequency components universally related to each other.
Specifically, the four dominant frequencies $\omega_{x,y}$ and $\omega_{x\pm
y}$ satisfy the relations $\omega_{x\pm y}=\omega_{x}\pm\omega_{y}$, which
generally hold for different magnetic fields, sample sizes, bias voltages and
weak disorder. Our results provide a unique and robust signature of the hinge
states and pave the way for exploring AB effect in the 3D HOTI.
## I INTRODUCTION
Over the past two decades, topological phases of matter such as topological
insulator and superconductor have become an active research field of condensed
matter physics Qi and Zhang (2011); Hasan and Kane (2010). These materials are
characterized by the nontrivial band topology and the resultant gapless
$(d-1)$-dimensional edge states. Very recently, the concepts of higher-order
topological insulators (HOTI) and superconductors are theoretically proposed,
which are featured by the $(d-2)$-dimensional edge states Benalcazar _et al._
(2017a, b); Song _et al._ (2017); Langbehn _et al._ (2017); Schindler _et
al._ (2018a); Franca _et al._ (2018); Park _et al._ (2019); Khalaf (2018);
Vu _et al._ (2020); Zhang _et al._ (2020); Yan (2019a); Călugăru _et al._
(2019); Yan (2019b); Yan _et al._ (2018); Schindler _et al._ (2018b).
Specifically, for the 3D HOTI there exist 1D gapless states along the hinges
of the sample, so-called hinge states, while the surface and the bulk states
are both insulating. Recent progress has shown evidence of the hinge
states in bismuth by scanning-tunnelling spectroscopy and Josephson
interferometry Schindler _et al._ (2018b), which paves the way for exploring
more intriguing properties of such topological states in the HOTI.
The 1D nature of the hinge states indicates that it is a good playground for
exploring various interference effects, such as Aharonov-Bohm (AB) and Fabry-
Pérot interferometers Liang _et al._ (2001); van Wees _et al._ (1989); Ji
_et al._ (2003); Ofek _et al._ (2010); McClure _et al._ (2009); Nakamura
_et al._ (2019). Actually, the chiral edge states of the quantum Hall phase
have become an important platform for the study of mesoscopic physics, in
which a variety of novel phenomena have been observed Ji _et al._ (2003);
Henny _et al._ (1999); Neder _et al._ (2007); Weisz _et al._ (2014) due to
its long coherence length and high adjustability. Compared with the chiral
edge states, the hinge states in the HOTI open additional possibilities for
the implementation of novel effects due to their 3D configurations, which
enrich the possible interference paths in real space. Moreover, such effects
cannot be realized in any 2D system and, in turn, can serve as deterministic
evidence of the hinge states.
Figure 1: (a) The interferometer constructed by HOTI (green block) and normal
metal electrodes (orange blocks). The magnetic field $B$ is imposed in $x$-$y$
plane with a polar angle $\theta$. (b, c) Two elemental interfering loops with
encircled flux $\Phi_{x}$, $\Phi_{y}$. (d, e) Other two dominant interfering
loops with fluxes $\Phi_{x}+\Phi_{y}$, $|\Phi_{x}-\Phi_{y}|$.
The manifestation of the AB effect in an electron system is the periodic
oscillation of conductance as the closed trajectory of electrons encircles a
magnetic flux $\Phi$ Aharonov and Bohm (1959); Webb _et al._ (1985); Holloway
_et al._ (2015); Lahiri _et al._ (2018); Aleiner _et al._ (2015);
Tserkovnyak and Halperin (2006). The dominant period of oscillation is equal
to the flux quantum $\Phi_{0}=h/e$, with a main frequency $2\pi/\Phi_{0}$. The
frequency of oscillation can be found by taking the fast Fourier transform
(FFT) of the conductance pattern Webb _et al._ (1985); Holloway _et al._
(2015); Lahiri _et al._ (2018). Recently, AB effect has been used as an
effective way to detect edge states in various topological systems, such as
edge states of topological insulators Peng _et al._ (2010); Bardarson _et
al._ (2010); Zhang and Vishwanath (2010); Xypakis _et al._ (2020), Majorana
fermions of topological superconductors Li _et al._ (2012); Ueda and Yokoyama
(2014); Tripathi _et al._ (2016); Bartolo _et al._ (2020), surface states of
topological semimetal Wang _et al._ (2016); Lin _et al._ (2017), and non-
Abelian anyons of fractional quantum Hall systems Kim (2006); Halperin _et
al._ (2011); Willett _et al._ (2013); Nakamura _et al._ (2019, 2020).
In this work, we investigate the AB effect in the interferometer composed of
the hinge states of the quadrangular HOTI by imposing an external magnetic
field. The insulating bulk and surface states indicate that the electron can
only propagate along the hinges of the sample, by which the enclosed magnetic
flux can lead to a coherent oscillation of the transmission probability.
Different from the edge states in any 2D systems, the 3D network of the hinge
states results in peculiar interfering trajectories, which relies not only on
the magnitude of the magnetic field but also on its orientation. The AB
interferometer is sketched in Fig. 1(a), where the HOTI is connected to two
leads made of the normal metal and a magnetic field
$\bm{B}=(B_{x},B_{y})=B(\cos\theta,\sin\theta)$ is applied in the $x$-$y$
plane with $\theta$ being the polar angle. The electrons injected from the
leads propagate along four chiral hinge states, which comprise a variety of
interfering loops; see Figs. 1(b)-1(e). The elemental interfering loops shown
in Figs. 1(b) and 1(c) are exactly the boundaries of the ($\pm$1,0,0) and
(0,$\pm$1,0) surfaces. The basic loops in Figs. 1(b) and 1(c) encircle a
magnetic flux of $\Phi_{x}=B\cos\theta S_{x}$ and $\Phi_{y}=B\sin\theta
S_{y}$, respectively, with $S_{x,y}$ the surface areas. Accordingly, the
frequency components $\omega_{x}=2\pi\cos\theta S_{x}/\Phi_{0}$ and
$\omega_{y}=2\pi\sin\theta S_{y}/\Phi_{0}$ naturally appear in the oscillating
pattern of the conductance as the magnetic field $B$ varies. Interestingly,
the magnetic flux in other interfering loops can all be interpreted by the two
elemental ones, among which two typical loops in Figs. 1(d) and 1(e) contain a
flux of $\Phi_{x\pm y}=\Phi_{x}\pm\Phi_{y}$, and the corresponding oscillating
frequency components satisfy $\omega_{x\pm y}=\omega_{x}\pm\omega_{y}$. It
turns out that the aforementioned four interfering loops and the corresponding
oscillating frequencies dominate the coherent oscillation of the conductance.
The relation $\omega_{x\pm y}=\omega_{x}\pm\omega_{y}$ generally holds
independent of various parameters such as the magnetic field, sample size and
the energy of electron, thus providing a universal and deterministic signature
of the hinge states and HOTI.
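How the four frequency components would appear in an FFT spectrum can be illustrated with a synthetic conductance trace (the amplitudes and frequency values below are our illustrative choices, not parameters of this paper; they are taken commensurate with the FFT grid so the peaks fall on exact bins):

```python
import numpy as np

N, B_range = 4000, 200.0
dw = 2 * np.pi / B_range          # FFT frequency resolution in the field B
wx, wy = 160 * dw, 73 * dw        # stand-ins for 2*pi*cos(theta)*S_x/Phi_0, etc.

B = np.arange(N) * (B_range / N)
# four oscillating components at omega_x, omega_y and omega_x +/- omega_y
g = (np.cos(wx * B) + np.cos(wy * B)
     + 0.5 * np.cos((wx + wy) * B) + 0.5 * np.cos((wx - wy) * B))

spec = np.abs(np.fft.rfft(g))
peaks = np.sort(np.argsort(spec)[-4:])  # bins of the four dominant peaks
```

The four peaks land at bins 73, 87, 160 and 233, i.e. at $\omega_{y}$, $\omega_{x}-\omega_{y}$, $\omega_{x}$ and $\omega_{x}+\omega_{y}$, satisfying $\omega_{x\pm y}=\omega_{x}\pm\omega_{y}$.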
The rest of this paper is organized as follows. In Sec. II, we elucidate the
model of the HOTI adopted in our work. In Sec. III, we apply the scattering
matrix approach to analyze the coherent transport through the interferometer
and the AB oscillation of the conductance. Detailed numerical simulations on
the lattice model are conducted in Sec. IV, which verify the universality of
the physical results. Finally, a brief summary and outlook are given in Sec.
V.
## II Model of HOTI
We adopt the model of 3D chiral HOTI introduced by Schindler et al. Schindler
_et al._ (2018a) as
$\begin{split}H_{\text{HOTI}}&=\left(M+t\sum_{i}{\cos
k_{i}}\right)\tau_{z}\sigma_{0}+\Delta_{1}\sum_{i}{\sin
k_{i}\tau_{x}\sigma_{i}}\\\ &\ \ \ +\Delta_{2}(\cos k_{x}-\cos
k_{y})\tau_{y}\sigma_{0},\end{split}$ (1)
where $\sigma_{i=x,y,z}$ and $\tau_{i}$ are the Pauli matrices acting on the
spin and orbital space, respectively. For $1<|M/t|<3$ and
$\Delta_{1},\Delta_{2}\neq 0$, the system lies in a chiral 3D HOTI phase. The
energy spectra are gapped in both the bulk and four surfaces parallel to the
$z$-axis. Importantly, the mass term is opposite in sign between adjacent
surfaces, which results in the Jackiw-Rebbi-type bound states Jackiw and Rebbi
(1976) propagating only along the $\pm z$-direction, or the so-called
topological hinge states. Time-reversal symmetry is broken in Eq. (1), so that
the hinge states are unidirectional or chiral, without any backscattering
states within a given hinge. Notably, gapless Dirac cones protected by the
$\hat{C}_{4}^{z}\hat{T}$ symmetry persist on the surfaces perpendicular to the
$z$-axis Schindler _et al._ (2018a). Therefore, it is beneficial to explore
pure signature of the hinge states through the transport in the $z$-direction.
## III Scattering matrix analysis
In this section, we study the coherent transport of electrons through the
interferometer sketched in Fig. 1(a) based on the low-energy effective model
of the hinge states using the scattering matrix approach. The scattering
matrix of the whole interferometer can be obtained by combining those at two
normal metal-HOTI interfaces and the matrix of phase accumulation during
propagation in the hinge states. The matrix at the lower interface [cf. Fig.
1(a)] can be parameterized as
$S_{l}=\left(\begin{array}[]{cccc}r_{1}&r_{3}&t_{1}^{\prime}&t_{3}^{\prime}\\\
r_{2}&r_{4}&t_{2}^{\prime}&t_{4}^{\prime}\\\
t_{1}&t_{3}&r_{1}^{\prime}&r_{3}^{\prime}\\\
t_{2}&t_{4}&r_{2}^{\prime}&r_{4}^{\prime}\\\ \end{array}\right),$ (2)
which relates the incoming ($a_{l}$) and outgoing ($b_{l}$) waves in the
normal metal and the HOTI via $b_{l}=S_{l}a_{l}$. The matrix is assumed to be
$4\times 4$ such that two incoming/outgoing waves are taken into account on
both sides. For the HOTI, the number of channels corresponds to that of pairs
of the hinge states. The unitary condition $S_{l}S_{l}^{\dagger}=\text{I}$ is
ensured by the law of current conservation. Here, $t_{1,\cdots,4}$ are the
transmission amplitudes from the normal metal to the chiral hinge states of
the HOTI and $r_{1,\cdots,4}$ are the corresponding reflection amplitudes. The
scattering amplitudes corresponding to the incident waves from the hinge
states of HOTI are defined by
$t^{\prime}_{1,\cdots,4},r^{\prime}_{1,\cdots,4}$ in a similar way. The
scattering matrix for the upper interface can be defined as
$S_{u}=\left(\begin{array}[]{cccc}r_{1}^{u}&r_{3}^{u}&t_{1}^{u\prime}&t_{3}^{u\prime}\\\
r_{2}^{u}&r_{4}^{u}&t_{2}^{u\prime}&t_{4}^{u\prime}\\\
t_{1}^{u}&t_{3}^{u}&r_{1}^{u\prime}&r_{3}^{u\prime}\\\
t_{2}^{u}&t_{4}^{u}&r_{2}^{u\prime}&r_{4}^{u\prime}\\\ \end{array}\right).$
(3)
The phase modulation of the wave function due to the magnetic field can be
described by the matrix as
$S_{m}=\left(\begin{array}[]{cccc}0&0&e^{i\tilde{\phi_{1}}}&0\\\
0&0&0&e^{-i\tilde{\phi_{2}}}\\\ e^{-i\phi_{2}}&0&0&0\\\ 0&e^{i\phi_{1}}&0&0\\\
\end{array}\right)$ (4)
where the phases $\phi_{1},\,\phi_{2},\,\tilde{\phi}_{1},\,\tilde{\phi}_{2}$
are related to the magnetic fluxes through
$\phi_{1}+\tilde{\phi}_{1}=\phi_{2}+\tilde{\phi}_{2}=\phi_{x}=2\pi\Phi_{x}/\Phi_{0}$,
$\phi_{1}-\tilde{\phi}_{2}=\phi_{2}-\tilde{\phi}_{1}=\phi_{y}=2\pi\Phi_{y}/\Phi_{0}$,
$\phi_{1}+\phi_{2}=\phi_{x+y}=2\pi\Phi_{x+y}/\Phi_{0}$, and
$\tilde{\phi}_{1}+\tilde{\phi}_{2}=\phi_{x-y}=2\pi\Phi_{x-y}/\Phi_{0}$,
with $\phi_{x,y}$ and $\phi_{x\pm y}$ being gauge invariant.
By combining the three matrices $S_{l},S_{m},S_{u}$ in the standard way, we obtain
the total scattering matrix of the whole system. Here, we focus on the
periods of the AB oscillation, so an overall phase shift of the pattern is
unimportant. Therefore, we can choose $S_{l},S_{u}$ to be real for simplicity,
which does not change the main results. For an electron incident from the
lower terminal, its transmission probability $T$ to the upper terminal is obtained
after some algebra as
$\begin{split}T&=F^{-1}\Big{[}C+C_{X}\cos\phi_{x}+C_{Y}\cos\phi_{y}+C_{XY}\cos\phi_{x+y}+C_{XY}^{\prime}\cos\phi_{x-y}\Big{]},\\\
F&=M_{C}+M_{XY}\cos\phi_{x+y}+M_{XY}^{\prime}\cos\phi_{x-y}+M_{2X}\cos(2\phi_{x})+M_{2Y}\cos(2\phi_{y})-M_{X}\cos\phi_{x}-M_{Y}\cos\phi_{y},\end{split}$
(5)
where the explicit forms of the relevant parameters are given in Appendix A.
The numerator of the transmission in Eq. (5) shows that there are four
dominant periodic terms contributed by four interference loops in Figs.
1(b)-1(e) which correspond to four frequencies related by
$\omega_{x,y}$ and $\omega_{x\pm y}=\omega_{x}\pm\omega_{y}$. Note that these
relations are fixed by the spatial configuration of the hinge states and thus
offer a clear and robust signature for their detection, one that relies little
on the sample details and the energy. Although a magnetic field applied in
different directions changes the values of the frequencies, it does not affect
the general relations between them.
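The composition of $S_{l}$, $S_{m}$, and $S_{u}$ can be sketched numerically. The snippet below is a minimal illustration, not the paper's fitted amplitudes: it draws random unitary interface matrices in the block form $[[r,t^{\prime}],[t,r^{\prime}]]$ of Eqs. (2) and (3), inserts the propagation phases of Eq. (4), and eliminates the internal hinge amplitudes to obtain the lower-to-upper transmission. A basic consistency check is that $T$ depends only on the gauge-invariant fluxes $\phi_{x},\phi_{y}$, not on the gauge choice of the individual phase $\phi_{1}$.

```python
import numpy as np

rng = np.random.default_rng(7)

def haar_unitary(n, rng):
    # random unitary via QR, standing in for an interface S-matrix
    z = rng.normal(size=(n, n)) + 1j*rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

# block form of Eqs. (2)-(3): [[r, t'], [t, r']] over (2 lead + 2 hinge) channels
Sl, Su = haar_unitary(4, rng), haar_unitary(4, rng)
t_l, rp_l = Sl[2:, :2], Sl[2:, 2:]   # lead->hinge transmission, hinge reflection
tp_u, rp_u = Su[:2, 2:], Su[2:, 2:]  # hinge->lead transmission, hinge reflection

def transmission(phi_x, phi_y, phi1=0.0):
    """Lower-to-upper transmission after composing S_l, S_m, S_u.
    phi1 is a gauge choice; T must depend on phi_x and phi_y only."""
    phit1 = phi_x - phi1             # phi1 + phit1 = phi_x
    phit2 = phi1 - phi_y             # phi1 - phit2 = phi_y
    phi2 = phi_x + phi_y - phi1      # phi1 + phi2 = phi_{x+y}
    A = np.diag([np.exp(1j*phit1), np.exp(-1j*phit2)])  # upper -> lower legs
    B = np.diag([np.exp(-1j*phi2), np.exp(1j*phi1)])    # lower -> upper legs
    # eliminate the internal hinge amplitudes (sums all round trips)
    G = np.linalg.inv(np.eye(2) - rp_l @ A @ rp_u @ B)
    t_tot = tp_u @ B @ G @ t_l
    return float(np.real(np.trace(t_tot @ t_tot.conj().T)))
```

Sweeping the field as $\phi_{x}=\omega_{x}s$, $\phi_{y}=\omega_{y}s$ and Fourier transforming $T(s)$ then produces spectral weight at $\omega_{x}$, $\omega_{y}$, and $\omega_{x}\pm\omega_{y}$, the structure displayed in Eq. (5).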
Next, we provide numerical verification of such an observation using specific
scattering amplitudes. The AB oscillation of the conductance as a function of
$B$ and its FFT spectrum are shown in Figs. 2(a) and 2(b), respectively. The
polar angle of the magnetic field is set to $\theta=\pi/6$ and the unit of
frequency is chosen as $1/B_{0}$ with $B_{0}=\Phi_{0}/(2\pi S)$ and
$S=S_{x}=S_{y}$ the surface area. One can find multiple periods in the
conductance spectrum in Fig. 2(a). The FFT spectrum in Fig. 2(b) shows that
there are four dominant frequencies with
$\omega_{x}=0.86/B_{0},\omega_{y}=0.5/B_{0},\omega_{x-y}=0.36/B_{0},\omega_{x+y}=1.36/B_{0}$,
which confirms the aforementioned relation $\omega_{x\pm
y}=\omega_{x}\pm\omega_{y}$. Higher frequencies such as
$\omega_{2x},\omega_{2y}$ should also appear, as in the conventional 2D AB
effect. In stark contrast, the frequencies $\omega_{x\pm y}$ can exist only in
the 3D HOTI, which thus provides unique evidence of the hinge states.
Figure 2: (a) The conductance pattern calculated by the scattering matrices.
(b) The FFT spectrum of the oscillation pattern. The relevant scattering
coefficients are set to
$t_{1}=t_{3}=t_{4}=r_{1}^{\prime}=r_{1}^{u}=r_{2}^{u}=\sqrt{0.4},t_{2}=r_{3}^{\prime}=\sqrt{0.3},r_{1}=r_{2}^{\prime}=\sqrt{0.2},r_{2}=r_{3}=r_{4}=t_{1}^{u}=t_{2}^{u}=t_{3}^{u}=\sqrt{0.1},r_{4}^{\prime}=t_{4}^{u}=-\sqrt{0.1},r_{3}^{u}=r_{4}^{u}=-\sqrt{0.4}.$
## IV Lattice model simulation
Based on the scattering matrix analysis, we see that there are four dominant
frequencies satisfying universal relations $\omega_{x\pm
y}=\omega_{x}\pm\omega_{y}$. In this section, we perform numerical simulation
to give rigorous results. We write the model in Eq. (1) on a cubic lattice as
Levitan and Pereg-Barnea (2020)
$\displaystyle H^{\text{Lattice}}_{\text{HOTI}}$
$\displaystyle=\sum_{i}{c_{i}^{\dagger}M\sigma_{0}\tau_{z}c_{i}}$ (6)
$\displaystyle+\Bigg{\\{}\sum_{i}{{c_{i+x}^{\dagger}\bigg{[}\frac{e^{i\varphi_{x}}}{2}(\Delta_{2}\sigma_{0}\tau_{y}+t\sigma_{0}\tau_{z}+i\Delta_{1}\sigma_{x}\tau_{x})\bigg{]}c_{i}}}$
$\displaystyle+\sum_{i}{c_{i+y}^{\dagger}\frac{1}{2}(-\Delta_{2}\sigma_{0}\tau_{y}+t\sigma_{0}\tau_{z}+i\Delta_{1}\sigma_{y}\tau_{x})c_{i}}$
$\displaystyle+\sum_{i}{c_{i+z}^{\dagger}\frac{e^{i\varphi_{z}}}{2}(t\sigma_{0}\tau_{z}+i\Delta_{1}\sigma_{z}\tau_{x})c_{i}}+h.c.\Bigg{\\}},$
where
$c_{i}=(c_{a,\uparrow,i},c_{b,\uparrow,i},c_{a,\downarrow,i},c_{b,\downarrow,i})$
are the annihilation operators at lattice site $i$ with two spin
($\uparrow,\downarrow$) and two orbital ($a,b$) components. The Peierls phase
is $\varphi_{x,z}=\frac{e}{\hbar}\int_{r_{i}}^{r_{j}}\bm{A}(\bm{r})\cdot
d\bm{r}$, where $\bm{A}(\bm{r})=(B_{y}z,0,B_{x}y)$ is the vector potential
in the Landau gauge. The lattice model of the normal metal electrodes is
$\displaystyle H_{\text{NM}}$
$\displaystyle=\sum_{i}{(-6t+U)c_{i}^{\dagger}c_{i}}$ (7)
$\displaystyle+\sum_{i}{t(c_{i+x}^{\dagger}c_{i}+c_{i+y}^{\dagger}c_{i}+c_{i+z}^{\dagger}c_{i})}+h.c.$
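To make the hinge-state picture concrete, the sketch below diagonalizes the lattice model of Eq. (6) at zero magnetic field in a rod geometry (open boundaries in $x,y$, Bloch momentum $k_{z}$), using the parameters of Fig. 3. This is an independent reconstruction, not the code behind the figures, and the cross section is smaller than the $30a$ cube in the text for speed. Chiral hinge modes appear as states crossing $E=0$ inside the bulk and surface gaps, with weight concentrated at the corners of the cross section.

```python
import numpy as np

# Pauli matrices; sigma acts on spin, tau on the orbital (a, b) index
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
tx, ty, tz = sx, sy, sz

t, M, D1, D2 = -1.0, 2.3, 0.8, 0.5   # parameters of Fig. 3
L = 14                               # cross-section L x L (smaller than 30a here)

# nearest-neighbour hoppings of Eq. (6) at zero field
hx = 0.5*( D2*np.kron(s0, ty) + t*np.kron(s0, tz) + 1j*D1*np.kron(sx, tx))
hy = 0.5*(-D2*np.kron(s0, ty) + t*np.kron(s0, tz) + 1j*D1*np.kron(sy, tx))

def h_rod(kz):
    """Hamiltonian of a z-directed rod: open in x and y, Bloch momentum kz."""
    onsite = (M + t*np.cos(kz))*np.kron(s0, tz) - D1*np.sin(kz)*np.kron(sz, tx)
    H = np.zeros((4*L*L, 4*L*L), dtype=complex)
    idx = lambda x, y: 4*(x*L + y)
    for x in range(L):
        for y in range(L):
            i = idx(x, y)
            H[i:i+4, i:i+4] += onsite
            if x + 1 < L:
                j = idx(x + 1, y)
                H[j:j+4, i:i+4] += hx
                H[i:i+4, j:j+4] += hx.conj().T
            if y + 1 < L:
                j = idx(x, y + 1)
                H[j:j+4, i:i+4] += hy
                H[i:i+4, j:j+4] += hy.conj().T
    return H

# chiral hinge modes must cross E = 0 somewhere in the rod spectrum
kzs = np.linspace(-np.pi, np.pi, 41)
min_abs_E = [np.min(np.abs(np.linalg.eigvalsh(h_rod(k)))) for k in kzs]
k_star = kzs[int(np.argmin(min_abs_E))]

# weight of the mid-gap state, resolved over the cross section
E, V = np.linalg.eigh(h_rod(k_star))
w = (np.abs(V[:, np.argmin(np.abs(E))])**2).reshape(L, L, 4).sum(axis=2)
corner_weight = (w[:4, :4].sum() + w[:4, -4:].sum()
                 + w[-4:, :4].sum() + w[-4:, -4:].sum())
```

A `corner_weight` well above the uniform-background value signals hinge localization; adding the Peierls phases of Eq. (6) then reproduces the interferometer threaded by flux.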
Figure 3: (a) Conductance oscillation for different incident energy and angle
$\theta$ by the lattice simulation. An offset $2e^{2}/h$ is imposed to
adjacent curves for clarity. (b, c) Corresponding FFT spectra of the
conductance patterns. Four dominant frequencies are marked by the dashed
lines. The model parameters are set to
$t=-1,M=2.3,\Delta_{1}=0.8,\Delta_{2}=0.5,U=2$. The lattice of the HOTI is set
to a $30a\times 30a\times 30a$ cube, where $a=1$ is the lattice constant.
The AB interferometer constructed from the 3D HOTI (green block) and the normal
metal leads (two orange blocks) is shown in Fig. 1(a). The blue arrowed lines
denote the chiral hinge states. The cross section of the HOTI in the $x$-$y$
plane is set as $30a\times 30a$ and that for the normal metal leads is
$5a\times 5a$ with $a$ being the lattice constant. The magnetic field $B$
exists only in the HOTI region and is parallel to the $x$-$y$ plane.
Consider an electron impinging from the normal metal towards the HOTI with its
energy lying within both the bulk gap ($\simeq$0.7$|t|$) and the surface gap
($\simeq$0.32$|t|$) of the HOTI, so that only the hinge channels are available
for propagation. Backscattering can occur at the interfaces, giving rise to
various interference loops. The two terminal conductance $G$ is calculated
using the KWANT package Groth _et al._ (2014), and the AB conductance oscillations
for different incident energies ($ie$) and polar angles $\theta$ of the magnetic
field are shown in Fig. 3(a) (curves are offset by $2e^{2}/h$ for clarity). To
get the dominant frequencies, we perform FFT calculation whose spectra are
shown in Figs. 3(b) and 3(c).
The numerical results are consistent with those of the scattering matrix
analysis in Fig. 2(b). For different incident energies, the oscillation
patterns in Fig. 3(a) look starkly different. However, the dominant
frequencies remain almost the same; see Fig. 3(b). To check the general
relations between oscillation frequencies, we first locate two notable peaks
$\omega_{x}$ and $\omega_{y}$ by the dashed lines and the other two peaks
$\omega_{x\pm y}=\omega_{x}\pm\omega_{y}$ are marked accordingly in Fig. 3(b).
One can see that the dominant peaks match the dashed lines very well, apart
from a small deviation from $\omega_{x+y}$ for $ie=0.1$, which is attributed to
the limited resolution of the numerical calculation. For different polar angles
$\theta$, similar results can be seen in Fig. 3(c). Although the locations of
the peaks change with $\theta$, the general relations between them persist.
Figure 4: (a) $\omega_{x+y}$ and (b) $\omega_{x-y}$ as a function of elemental
frequencies $\omega_{x}$ and $\omega_{y}$. Black and orange dots denote
numerical results for different length of the HOTI and polar angle $\theta$ of
the magnetic field. The reference planes satisfy $\omega_{x\pm
y}=\omega_{x}\pm\omega_{y}$. Insets: side view of the plots which reveal the
deviation of the dots from the planes. The parameters are the same as those in
Fig. 3.
In Fig. 4, we present more general results by varying both the polar angle
$\theta$ and the thickness of the HOTI in the $z$-direction. Each pair of
parameters generate one point in both Figs. 4(a) and 4(b), with its
coordinates extracted in the same way as done in Fig. 3(b). The reference
planes therein correspond to the frequency rule $\omega_{x\pm
y}=\omega_{x}\pm\omega_{y}$. One can see that the numerical results labeled by
the black and orange dots are well located around the reference planes, which
indicates the universality of the frequency rule. Note that there are a few
dots with negative frequencies in Fig. 4(b), for which $\omega_{x}<\omega_{y}$.
In experiments, one should measure $|\omega_{x}-\omega_{y}|$ instead.
Figure 5: (a) Conductance patterns with different disorder strength $\alpha$.
(b) FFT spectra of the conductance. The incident energy is $ie=0.1$ and the
polar angle of the field is $\theta=\pi/6$. Other parameters are the same as
those in Fig. 3.
Disorder generally exists in real samples, and the topological chiral hinge
states, and thus the AB effect, are expected to be robust against it, just as
for the quantum Hall edge states. In Fig. 5 we show numerical results for
disorder distributed over the whole HOTI region with different strengths
$\alpha$. For weak disorder $\alpha<0.7|t|$ (the gap of the bulk states), the
oscillation pattern and the frequency rules $\omega_{x\pm
y}=\omega_{x}\pm\omega_{y}$ are retained. For strong disorder $\alpha>0.7|t|$,
the oscillation pattern is quenched, stemming from disorder-induced coupling
between the surface/bulk states and the hinge states. Therefore, as long as the
disorder in the HOTI sample is not too strong, the AB effect should be
observable. A similar conclusion holds for surface roughness.
Although the AB effect is quite robust against the disorder effect, the
observation should be carried out within the phase coherence length of the
system. The dephasing effect always reduces the visibility of the coherent
oscillation until it vanishes Golizadeh-Mojarad and Datta (2007); Lahiri _et
al._ (2018). One more remark is that the interference here is all of the AB
type without Al’tshuler-Aronov-Spivak (AAS) type contribution Al’Tshuler _et
al._ (1981); Sharvin and Sharvin (1981); Al’tshuler _et al._ (1982). The
model in Eq. (1) breaks time-reversal symmetry so that the AAS effect is
absent.
## V Summary and Outlook
In summary, we have investigated the AB effect in the chiral hinge states of
the 3D HOTI. Due to the spatial configurations of the hinge states, new types
of interfering loops appear compared with the 2D AB interference. Importantly,
we predict a universal relationship $\omega_{x\pm y}=\omega_{x}\pm\omega_{y}$
between the dominant oscillating frequencies, which offers a unique signature
of the hinge states as well as the HOTI. Our study can be generalized
straightforwardly to the AB effect in 3D HOTIs with helical hinge states.
Note added. Recently, we became aware of a related work Li _et al._ (2021),
which focuses on different aspects.
###### Acknowledgements.
This work was supported by the National Natural Science Foundation of China
under Grant No. 12074172 (W.C.), No. 11674160 and No. 11974168 (L.S.), the
startup grant at Nanjing University (W.C.), the State Key Program for Basic
Researches of China under Grants No. 2017YFA0303203 (D.Y.X.) and the Excellent
Programme at Nanjing University.
## Appendix A Explicit forms of the coefficients in the scattering analysis
In Eq. (5) of the main text, the coefficients are expressed as
$\begin{split}C&=(t_{1}^{2}+t_{3}^{2})(W_{0}^{2}+W_{1}^{2}+W_{2}^{2}+X_{0}^{2}+X_{1}^{2}+X_{2}^{2})+(t_{2}^{2}+t_{4}^{2})(Y_{0}^{2}+Y_{1}^{2}+Y_{2}^{2}+Z_{0}^{2}+Z_{1}^{2}+Z_{2}^{2}),\\\
C_{X}&=2(W_{0}W_{1}+X_{0}X_{1})(t_{1}^{2}+t_{3}^{2})+2(Y_{0}Y_{1}+Z_{0}Z_{1})(t_{2}^{2}+t_{4}^{2})+2(W_{0}Y_{2}+W_{2}Y_{0}+X_{0}Z_{2}+X_{2}Z_{0})(t_{1}t_{2}+t_{3}t_{4}),\\\
C_{Y}&=2(W_{0}W_{2}+X_{0}X_{2})(t_{1}^{2}+t_{3}^{2})+2(Y_{0}Y_{2}+Z_{0}Z_{2})(t_{2}^{2}+t_{4}^{2})+2(W_{0}Y_{1}+W_{1}Y_{0}+X_{0}Z_{1}+X_{1}Z_{0})(t_{1}t_{2}+t_{3}t_{4}),\\\
C_{XY}&=2(W_{0}Y_{0}+X_{0}Z_{0})(t_{1}t_{2}+t_{3}t_{4}),\\\
C_{XY}^{\prime}&=2(W_{1}W_{2}+X_{1}X_{2})(t_{1}^{2}+t_{3}^{2})+2(Y_{1}Y_{2}+Z_{1}Z_{2})(t_{2}^{2}+t_{4}^{2})+2(W_{1}Y_{1}+W_{2}Y_{2}+X_{1}Z_{1}+X_{2}Z_{2})(t_{1}t_{2}+t_{3}t_{4}),\\\
M_{XY}&=2M_{1}M_{4}+2M_{2}M_{3},\ \ \ \ \
M_{XY}^{\prime}=2M_{1}M_{3}+2M_{2}M_{4},\ \ \ \ \ M_{2X}=2M_{1}M_{2},\ \ \ \ \
M_{2Y}=2M_{3}M_{4},\\\ M_{X}&=2M_{0}M_{1}+2M_{0}M_{2},\ \ \ \ \
M_{Y}=2M_{0}M_{3}+2M_{0}M_{4},\ \ \ \ \
M_{C}=M_{0}^{2}+M_{1}^{2}+M_{2}^{2}+M_{3}^{2}+M_{4}^{2},\end{split}$ (8)
which contain the parameters defined by the elements of the scattering
matrices as
$\begin{split}W_{0}&=t_{1}^{u},W_{1}=t_{3}^{u}r_{2}^{\prime}r_{1}^{u}-t_{1}^{u}r_{2}^{\prime}r_{3}^{u},W_{2}=t_{3}^{u}r_{4}^{\prime}r_{2}^{u}-t_{1}^{u}r_{4}^{\prime}r_{4}^{u},\\\
X_{0}&=t_{2}^{u},X_{1}=t_{4}^{u}r_{2}^{\prime}r_{1}^{u}-t_{2}^{u}r_{2}^{\prime}r_{3}^{u},X_{2}=t_{4}^{u}r_{4}^{\prime}r_{2}^{u}-t_{2}^{u}r_{4}^{\prime}r_{4}^{u},\\\
Y_{0}&=t_{3}^{u},Y_{1}=t_{1}^{u}r_{3}^{\prime}r_{4}^{u}-t_{3}^{u}r_{3}^{\prime}r_{2}^{u},Y_{2}=t_{1}^{u}r_{1}^{\prime}r_{3}^{u}-t_{3}^{u}r_{1}^{\prime}r_{1}^{u},\\\
Z_{0}&=t_{4}^{u},Z_{1}=t_{2}^{u}r_{3}^{\prime}r_{4}^{u}-t_{4}^{u}r_{3}^{\prime}r_{2}^{u},Z_{2}=t_{2}^{u}r_{1}^{\prime}r_{3}^{u}-t_{4}^{u}r_{1}^{\prime}r_{1}^{u},\\\
M_{0}&=1+r_{1}^{\prime}r_{1}^{u}r_{4}^{\prime}r_{4}^{u}+r_{3}^{\prime}r_{2}^{u}r_{2}^{\prime}r_{3}^{u}-r_{1}^{\prime}r_{3}^{u}r_{4}^{\prime}r_{2}^{u}-r_{3}^{\prime}r_{4}^{u}r_{2}^{\prime}r_{1}^{u},\\\
M_{1}&=r_{2}^{\prime}r_{3}^{u},M_{2}=r_{3}^{\prime}r_{2}^{u},M_{3}=r_{4}^{\prime}r_{4}^{u},M_{4}=r_{1}^{\prime}r_{1}^{u}.\end{split}$
(9)
## References
* Qi and Zhang (2011) X.-L. Qi and S.-C. Zhang, Rev. Mod. Phys. 83, 1057 (2011).
* Hasan and Kane (2010) M. Z. Hasan and C. L. Kane, Rev. Mod. Phys. 82, 3045 (2010).
* Benalcazar _et al._ (2017a) W. A. Benalcazar, B. A. Bernevig, and T. L. Hughes, Science 357, 61 (2017a).
* Benalcazar _et al._ (2017b) W. A. Benalcazar, B. A. Bernevig, and T. L. Hughes, Phys. Rev. B 96, 245115 (2017b).
* Song _et al._ (2017) Z. Song, Z. Fang, and C. Fang, Phys. Rev. Lett. 119, 246402 (2017).
* Langbehn _et al._ (2017) J. Langbehn, Y. Peng, L. Trifunovic, F. von Oppen, and P. W. Brouwer, Phys. Rev. Lett. 119, 246401 (2017).
* Schindler _et al._ (2018a) F. Schindler, A. M. Cook, M. G. Vergniory, Z. Wang, S. S. Parkin, B. A. Bernevig, and T. Neupert, Science Advances 4, eaat0346 (2018a).
* Franca _et al._ (2018) S. Franca, J. van den Brink, and I. C. Fulga, Phys. Rev. B 98, 201114 (2018).
* Park _et al._ (2019) M. J. Park, Y. Kim, G. Y. Cho, and S. Lee, Phys. Rev. Lett. 123, 216803 (2019).
* Khalaf (2018) E. Khalaf, Phys. Rev. B 97, 205136 (2018).
* Vu _et al._ (2020) D. Vu, R.-X. Zhang, and S. Das Sarma, Phys. Rev. Research 2, 043223 (2020).
* Zhang _et al._ (2020) R.-X. Zhang, Y.-T. Hsu, and S. Das Sarma, Phys. Rev. B 102, 094503 (2020).
* Yan (2019a) Z. Yan, Phys. Rev. Lett. 123, 177001 (2019a).
* Călugăru _et al._ (2019) D. Călugăru, V. Juričić, and B. Roy, Phys. Rev. B 99, 041301 (2019).
* Yan (2019b) Z. Yan, Phys. Rev. B 100, 205406 (2019b).
* Yan _et al._ (2018) Z. Yan, F. Song, and Z. Wang, Phys. Rev. Lett. 121, 096803 (2018).
* Schindler _et al._ (2018b) F. Schindler, Z. Wang, M. G. Vergniory, A. M. Cook, A. Murani, S. Sengupta, A. Y. Kasumov, R. Deblock, S. Jeon, I. Drozdov, _et al._ , Nature Physics 14, 918 (2018b).
* Liang _et al._ (2001) W. Liang, M. Bockrath, D. Bozovic, J. H. Hafner, M. Tinkham, and H. Park, Nature 411, 665 (2001).
* van Wees _et al._ (1989) B. J. van Wees, L. P. Kouwenhoven, C. J. P. M. Harmans, J. G. Williamson, C. E. Timmering, M. E. I. Broekaart, C. T. Foxon, and J. J. Harris, Phys. Rev. Lett. 62, 2523 (1989).
* Ji _et al._ (2003) Y. Ji, Y. Chung, D. Sprinzak, M. Heiblum, D. Mahalu, and H. Shtrikman, Nature 422, 415 (2003).
* Ofek _et al._ (2010) N. Ofek, A. Bid, M. Heiblum, A. Stern, V. Umansky, and D. Mahalu, Proceedings of the National Academy of Sciences 107, 5276 (2010).
* McClure _et al._ (2009) D. T. McClure, Y. Zhang, B. Rosenow, E. M. Levenson-Falk, C. M. Marcus, L. N. Pfeiffer, and K. W. West, Phys. Rev. Lett. 103, 206806 (2009).
* Nakamura _et al._ (2019) J. Nakamura, S. Fallahi, H. Sahasrabudhe, R. Rahman, S. Liang, G. C. Gardner, and M. J. Manfra, Nature Physics 15, 563 (2019).
* Henny _et al._ (1999) M. Henny, S. Oberholzer, C. Strunk, T. Heinzel, K. Ensslin, M. Holland, and C. Schönenberger, Science 284, 296 (1999).
* Neder _et al._ (2007) I. Neder, N. Ofek, Y. Chung, M. Heiblum, D. Mahalu, and V. Umansky, Nature 448, 333 (2007).
* Weisz _et al._ (2014) E. Weisz, H. Choi, I. Sivan, M. Heiblum, Y. Gefen, D. Mahalu, and V. Umansky, Science 344, 1363 (2014).
* Aharonov and Bohm (1959) Y. Aharonov and D. Bohm, Phys. Rev. 115, 485 (1959).
* Webb _et al._ (1985) R. A. Webb, S. Washburn, C. P. Umbach, and R. B. Laibowitz, Phys. Rev. Lett. 54, 2696 (1985).
* Holloway _et al._ (2015) G. W. Holloway, D. Shiri, C. M. Haapamaki, K. Willick, G. Watson, R. R. LaPierre, and J. Baugh, Phys. Rev. B 91, 045422 (2015).
* Lahiri _et al._ (2018) A. Lahiri, K. Gharavi, J. Baugh, and B. Muralidharan, Phys. Rev. B 98, 125417 (2018).
* Aleiner _et al._ (2015) I. L. Aleiner, A. V. Andreev, and V. Vinokur, Phys. Rev. Lett. 114, 076802 (2015).
* Tserkovnyak and Halperin (2006) Y. Tserkovnyak and B. I. Halperin, Phys. Rev. B 74, 245327 (2006).
* Peng _et al._ (2010) H. Peng, K. Lai, D. Kong, S. Meister, Y. Chen, X.-L. Qi, S.-C. Zhang, Z.-X. Shen, and Y. Cui, Nature materials 9, 225 (2010).
* Bardarson _et al._ (2010) J. H. Bardarson, P. W. Brouwer, and J. E. Moore, Phys. Rev. Lett. 105, 156803 (2010).
* Zhang and Vishwanath (2010) Y. Zhang and A. Vishwanath, Phys. Rev. Lett. 105, 206601 (2010).
* Xypakis _et al._ (2020) E. Xypakis, J.-W. Rhim, J. H. Bardarson, and R. Ilan, Phys. Rev. B 101, 045401 (2020).
* Li _et al._ (2012) J. Li, G. Fleury, and M. Büttiker, Phys. Rev. B 85, 125440 (2012).
* Ueda and Yokoyama (2014) A. Ueda and T. Yokoyama, Phys. Rev. B 90, 081405 (2014).
* Tripathi _et al._ (2016) K. M. Tripathi, S. Das, and S. Rao, Phys. Rev. Lett. 116, 166401 (2016).
* Bartolo _et al._ (2020) T. C. Bartolo, J. S. Smith, B. Muralidharan, C. Müller, T. M. Stace, and J. H. Cole, Phys. Rev. Research 2, 043430 (2020).
* Wang _et al._ (2016) L.-X. Wang, C.-Z. Li, D.-P. Yu, and Z.-M. Liao, Nat. Commun. 7, 10769 (2016).
* Lin _et al._ (2017) B.-C. Lin, S. Wang, L.-X. Wang, C.-Z. Li, J.-G. Li, D. Yu, and Z.-M. Liao, Phys. Rev. B 95, 235436 (2017).
* Kim (2006) E.-A. Kim, Phys. Rev. Lett. 97, 216404 (2006).
* Halperin _et al._ (2011) B. I. Halperin, A. Stern, I. Neder, and B. Rosenow, Phys. Rev. B 83, 155440 (2011).
* Willett _et al._ (2013) R. L. Willett, C. Nayak, K. Shtengel, L. N. Pfeiffer, and K. W. West, Phys. Rev. Lett. 111, 186401 (2013).
* Nakamura _et al._ (2020) J. Nakamura, S. Liang, G. C. Gardner, and M. J. Manfra, Nature Physics 16, 931 (2020).
* Jackiw and Rebbi (1976) R. Jackiw and C. Rebbi, Phys. Rev. D 13, 3398 (1976).
* Levitan and Pereg-Barnea (2020) B. A. Levitan and T. Pereg-Barnea, Phys. Rev. Research 2, 033327 (2020).
* Groth _et al._ (2014) C. W. Groth, M. Wimmer, A. R. Akhmerov, and X. Waintal, New Journal of Physics 16, 063065 (2014).
* Golizadeh-Mojarad and Datta (2007) R. Golizadeh-Mojarad and S. Datta, Phys. Rev. B 75, 081301 (2007).
* Al’Tshuler _et al._ (1981) B. Al’Tshuler, A. Aronov, and B. Spivak, JETP Lett. 33, 94 (1981).
* Sharvin and Sharvin (1981) D. Y. Sharvin and Y. V. Sharvin, JETP Lett. 34, 272 (1981).
* Al’tshuler _et al._ (1982) B. Al’tshuler, A. Aronov, B. Spivak, D. Y. Sharvin, and Y. V. Sharvin, JETP Lett. 35, 588 (1982).
* Li _et al._ (2021) C.-A. Li, S.-B. Zhang, J. Li, and B. Trauzettel, Phys. Rev. Lett. 127, 026803 (2021).
Publication of the U.S. government, not subject to U.S. copyright.
# Determining the Angle-of-Arrival of a Radio-Frequency Source with a Rydberg
Atom-Based Sensor
Amy K. Robinson and Nikunjkumar Prajapati, Department of Electrical Engineering, University of Colorado, Boulder, CO 80305, USA;
Damir Senic, ANSYS, Inc., Boulder, CO, USA;
Matthew T. Simons and Christopher L. Holloway<EMAIL_ADDRESS>National Institute of Standards and Technology, Boulder, CO 80305, USA
###### Abstract
In this work, we demonstrate the use of a Rydberg atom-based sensor for
determining the angle-of-arrival of an incident radio-frequency (RF) wave or
signal. The technique uses electromagnetically induced transparency in Rydberg
atomic vapor in conjunction with a heterodyne Rydberg atom-based mixer. The
Rydberg atom mixer measures the phase of the incident RF wave at two different
locations inside an atomic vapor cell. The phase difference at these two
locations is related to the direction of arrival of the incident RF wave. To
demonstrate this approach, we measure phase differences of an incident 19.18
GHz wave at two locations inside a vapor cell filled with cesium atoms for
various incident angles. Comparisons of these measurements to both full-wave
simulation and to a plane-wave theoretical model show that these atom-based
sub-wavelength phase measurements can be used to determine the angle-of-
arrival of an RF field.
The ability to measure angle-of-arrival (AoA) is of great importance to radar
and advanced communications applications. Here we present a method of
determining AoA based on Rydberg-atom sensors. Atom-based sensors have
garnered a lot of attention in the past several years because of their many
possible advantages over other conventional technologies. Measurement
standards have evolved towards atom-based measurements over the last couple of
decades, most notably the length (m), frequency (Hz), and time (s) standards.
Recently there has been a great interest in extending this to magnetic and
electric (E) field sensors. In particular, since the initiation and completion
of DARPA’s QuASAR program, NIST and other groups have made great progress in
the development of Rydberg atom-based radio-frequency (RF) E-field sensors
gor1 ; sed1 ; holl1 ; holl2 ; holl3 ; sed2 ; tan1 ; gor2 ; fan ; sim1 ; sim2 ;
anderson1 . The Rydberg atom-based sensors now have the capability of
measuring amplitude, polarization, and phase of the RF field. As such, various
applications are beginning to emerge. These include SI-traceable E-field
probes holl1 ; holl2 , power sensors holl5 , receivers for communication
signals (AM/FM modulated and digital phase-modulated signals) song1 ; meyer1
; holl6 ; cox1 ; holl4 ; anderson2 , and even recording musical
instruments holl7 . In this paper, we investigate the capability of a Rydberg
atom-based sensor for determining AoA of an incident RF field.
The majority of the work on Rydberg atom-based E-field sensors uses on-
resonant electromagnetically induced transparency (EIT) and Autler-Townes (AT)
splitting techniques sed2 ; holl2 ; holl3 . The concept uses a vapor of alkali
atoms placed in a glass cell (referred to as a “vapor cell”) as a means of
detecting and receiving the RF E-field or signal. The EIT technique involves
using two lasers. One laser (called a “probe” laser) is used to monitor the
optical response of the medium in the vapor cell and a second laser (called a
“coupling” laser) is used to establish a coherence in the atomic system. When
the RF E-field is applied, it alters the susceptibility of the atomic vapor
seen by the probe laser. By detecting the power in the probe laser propagating
through the cell, the RF E-field strength can be determined. This approach has
been shown to be very successful for determining the magnitude of an RF E-field.
However, an alternative approach is required to measure phase, which is
necessary to determine AoA. Recently, we developed a heterodyne technique
using a Rydberg atom-based mixer sim3 . In this approach, a reference RF field
is applied to the atoms. This reference RF field is on-resonance with the
Rydberg-atom transition, and acts as a local oscillator (LO). The LO field
causes the EIT/AT effect in the Rydberg atoms which is used to down-convert a
second, co-polarized RF field (referred to as SIG and is the field for which
the phase is desired). The SIG field is detuned (by a few kHz) from the LO
field. The frequency difference between the LO and the SIG is an intermediate
frequency (IF) and the IF is detected by optically probing the Rydberg atoms.
This IF is essentially the beat-note between the LO and SIG frequencies. The
phase of the IF signal corresponds directly to the relative phase between the
LO and SIG signals. In effect, the atoms down-convert the SIG to the IF, and
the phase of SIG is obtained by the probe laser propagating through the atomic
vapor.
In order to determine the AoA, the phase ($\phi$) of SIG is needed at two
different locations, see Fig. 1. Once the phase of SIG is determined at the
two different locations, the relationship between AoA (defined as $\theta$ in
Fig. 1) and the phase difference at the two locations (location 1 and 2 in the
figure) can be calculated. Assuming SIG is a plane wave, the relationship
between $\theta$ and $\phi$ is:
$\displaystyle\Delta\phi_{2,1}=\phi_{2}-\phi_{1}\approx
k\,d\,\,\sin(\theta):\theta\approx\sin^{-1}\left(\frac{\Delta\phi_{2,1}}{k\,d}\right)$
(1)
where $d$ is the separation between the two locations, $\phi_{1,2}$ are the
phases of SIG at the two locations, $k=2\pi/\lambda$, and $\lambda$ is the
wavelength of SIG. This expression assumes that the line formed by locations 1
and 2 is perpendicular to the line for which the angle $\theta$ is measured.
If the two locations (say locations 1 and 3 in Fig. 1) form a line that is not
perpendicular to the line that determines $\theta$, then the phase difference
between locations 1 and 3 is given by
$\displaystyle\Delta\phi_{3,1}$ $\displaystyle=\phi_{3}-\phi_{1}$
$\displaystyle\approx
k\sqrt{d^{2}+t^{2}}\sin\left[\theta+\tan^{-1}\left(t/d\right)\right]$ (2)
$\displaystyle\theta\approx\sin^{-1}\left(\frac{\Delta\phi_{3,1}}{k\,\sqrt{d^{2}+t^{2}}}\right)\,-\tan^{-1}\left(t/d\right)\,\,,$
(3)
where $t$ is defined in Fig. 1. Eqs. (1)-(3) relate AoA to the measured phase
of the SIG and LO signals at two locations in the cell, assuming that the AoA
is defined in a plane orthogonal to the probe laser propagation. Future work
will include the measurement of AoA in two dimensions; see the discussion below.
Figure 1: Incident plane wave (SIG) onto three locations separated by $d$ and
offset by $t$.
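As a numerical companion to Eqs. (1)-(3), the short function below inverts a measured phase difference into an AoA for the plane-wave model. The example uses the experimental values quoted later in the text ($f=19.18$ GHz, $d=2.6$ mm, $t=0.3$ mm); the function name and interface are illustrative choices, not part of the experiment.

```python
import numpy as np

C0 = 299_792_458.0  # speed of light (m/s)

def aoa(delta_phi, d, freq, t_off=0.0):
    """Angle of arrival (rad) from the phase difference delta_phi (rad)
    between two sensing points: Eq. (1) for t_off = 0, Eq. (3) otherwise."""
    k = 2*np.pi*freq/C0
    return np.arcsin(delta_phi/(k*np.hypot(d, t_off))) - np.arctan2(t_off, d)

# experimental numbers: 19.18 GHz carrier, d = 2.6 mm, t = 0.3 mm
f, d, t_off = 19.18e9, 2.6e-3, 0.3e-3
```

For these numbers $kd\approx 1.05$ rad, so the full $\pm 40^{\circ}$ scan maps to phase differences within roughly $\pm 0.7$ rad, comfortably inside the unambiguous $\pm\pi$ range.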
To measure the phase at any two different locations, we generate EIT in two
locations inside a vapor cell filled with 133Cs, see Fig. 2(a). The probe
laser is split with a beam cube and passed through the vapor cell at two
locations. The full power of the coupling laser is passed through each of the
two locations, see Fig. 2(a). The beam directions are chosen to ensure that at
both locations in the cell, the probe and coupling lasers are counter-
propagating. To generate EIT at the two locations in the cell, we tune the
probe laser to the ${\rm D}_{2}$ transition for 133Cs ($6S_{1/2}$-$6P_{3/2}$
or wavelength of $\lambda_{p}=852.35$ nm) focused to a full-width at half
maximum (FWHM) of 390$~{}\mu$m, with a power of 96 $\mu$W. To produce an EIT
signal, we couple to the 133Cs $6P_{3/2}$-$58S_{1/2}$ states by applying a
counter-propagating coupling laser at $\lambda_{c}=509.26$ nm with a power of
60 mW, focused to a FWHM of 450 $\mu$m.
(a) Laser field schematic
(b) antenna arrangement
Figure 2: (a) Schematic of the orientation of the optical fields. The probe
beam is split in two by a beam cube, and one coupling field is re-circulated
using a dichroic mirror to counter-propagate along each probe beam. (b) The LO
antenna is suspended above the cell, such that the LO field is incident nearly
perpendicular to the line between the two optical beams. The SIG is held by an
adjustable arm to vary the angle of incidence, which is measured using an
electronic compass attached to the horn mount.
The LO and SIG are applied to the vapor cell as shown in Fig. 2(b), where the
LO is at a fixed position and the SIG is rotated to different incident
directions ($\theta$). We use a signal generator (SG) to apply a continuous
wave (CW) LO field at 19.18 GHz to couple states $58S_{1/2}$ and $59P_{3/2}$.
While we use 19.18 GHz in these experiments, this approach can work at
carriers from 100 MHz to 1 THz (because of the broadband nature of the EIT/AT
approach holl1 ; holl2 ). A second SG is used to generate a CW SIG field at
19.18 GHz$+f_{IF}$ (where $f_{IF}=50$ kHz). The outputs of the two SGs
are connected to two standard-gain horn antennas via RF cables. The LO horn is
mounted directly above the vapor cell and is stationary, whereas the SIG horn
sits on a rotating arm which sets the incident angle ($\theta$).
Two different photodetectors are used to monitor the two probe beams that
travel through the vapor cell. The output of the photodetectors are sent to an
oscilloscope and a lock-in amplifier. Fig 3(a) shows the beam position at the
two locations inside the vapor cell. These beam positions correspond to
locations 1 and 3 as defined in Fig. 1, and the phase relationship is given in
eq. (3). In our experiments, $d=2.6$ mm and $t=0.3$ mm. The lock-in is
referenced to a 50 kHz signal from a mixer that is fed by the two signal
generators. The Rydberg atoms automatically down-convert the CW carrier (i.e.,
SIG) to the IF (the amplitude of the probe laser transmission) and the phase
of SIG is determined.
(a) (b)
Figure 3: (a) The $x$-$y$ locations of the lasers inside the vapor cell, where the
origin is the center of the cell, and (b) beat-notes for two locations inside
the vapor cell.
The Rydberg-atom sensor and the photodetectors act like a mixer and low pass
filter in a classic RF heterodyne setup. The LO and SIG create a beat-note and
the atoms respond directly to this beat-note, which is detected by the probe
laser transmission measured on the photodetectors. At each location inside the
vapor cell, the total electric field ($E_{atoms}$) is the sum of the LO and
SIG fields ($E_{LO}$ and $E_{SIG}$). The atoms demodulate the high-frequency
$\omega_{LO}$ field and the probe transmission as a function of time at
locations $i$ and $j$ (1 and 3 as defined in Fig. 1) is given by sim3 ; gor3
$T_{(i,j)}\propto|E_{atoms}|\approx
E_{LO}+E_{SIG}\,\cos\left(\Delta\omega\,t+\phi_{i,j}\right)\,\,,$ (4)
where $\phi_{i,j}$ corresponds to the phase of SIG at locations $i$ and $j$, and
$\Delta\omega=\omega_{LO}-\omega_{SIG}$. Once $\phi_{i}$ and $\phi_{j}$ are
determined from the probe laser transmissions measured on the two different
photodetectors, the phase difference ($\Delta\phi$) between the two locations
is given by
$\Delta\phi=\phi_{j}-\phi_{i}~{}~{}~{}\textrm{.}$ (5)
To be more exact, $\phi_{i,j}$ is actually the phase difference (at each
location) between the LO and SIG sim3 . In these experiments, LO is at a fixed
location such that a measurement of $\Delta\phi$ is a measurement of the phase
change of SIG between the two locations.
For a given incident angle $\theta$, the beat-notes as measured from the two
photodetectors are shown in Fig. 3(b). From the figure, we see the “cosine”
behavior as predicted by eq. (4) with a period of 20 $\mu$s (or the IF
frequency of 50 kHz used in the experiments). In this figure we see that the
two beat-notes are shifted in phase. This shift is the phase difference, for
the given incident angle, that is defined in eq. (3).
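The phase extraction from the photodetector traces can be emulated with a software lock-in. The sketch below synthesizes two beat-notes of the form of Eq. (4) at the experimental IF of 50 kHz with a known phase offset, and recovers that offset by IQ demodulation over an integer number of IF periods; the amplitudes and phases are arbitrary illustrative values, not measured data.

```python
import numpy as np

def if_phase(signal, times, f_if):
    """Software lock-in: demodulate at the IF and return the phase."""
    return np.angle(np.sum(signal * np.exp(-2j*np.pi*f_if*times)))

f_if = 50e3                          # IF used in the experiment
times = np.arange(20000) * 1e-7      # 2 ms record = exactly 100 IF periods
# two probe-transmission beat-notes of the form of Eq. (4), offset by 0.55 rad
s1 = 1.0 + 0.3*np.cos(2*np.pi*f_if*times + 0.40)
s2 = 1.0 + 0.3*np.cos(2*np.pi*f_if*times + 0.95)
dphi = if_phase(s2, times, f_if) - if_phase(s1, times, f_if)  # recovers 0.55 rad
```

Integrating over an integer number of periods makes the DC offset and the $2f_{IF}$ term average away exactly, which is why the recovered phase is clean even for a short record.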
Using the setup shown in Fig. 2(b), the SIG antenna is scanned over
$\theta=\pm 40^{\circ}$. The phase difference ($\Delta\phi$) at each $\theta$
position was determined and the measured $\Delta\phi$ for each incident angle
is shown in Fig. 4(a). The error bars correspond to the standard deviation of
5 data sets. The uncertainties of Rydberg atom-based measurements in general
are discussed in Ref. emcconf , and it is shown in Ref. ieeeaccess that the
heterodyne Rydberg atom-based mixer approach can measure the phase to within
$1^{\circ}$. Also shown in this figure are the theoretical results given in eq.
(3). Upon comparing the experimental results to the theoretical results, we
see that while the standard deviation for the phase measurement for each
incident angle is small (i.e., small error bars), the measurements do not lie
exactly on the theoretical results. The reason why the data does not exactly
follow the theoretical model is twofold. First, from Fig. 2(b) we see there
are several objects in the apparatus used to rotate the SIG antenna. These
objects cause scattering that is not accounted for in the theoretical
results. The second reason is the vapor cell itself. Because the vapor
cell is a dielectric, the RF fields can exhibit multiple reflections inside the
cell, and RF standing waves (or resonances) in the field strength can develop
in the cell holl2 ; holl3 ; sim5 ; fan5 . Thus, for a given location inside the
cell, the RF field can be larger or smaller than the incident field, and the
phase of the field at a given location will be perturbed as well. Hence, the
standing wave can generate differences in the measured AoA when compared to
the expected sinusoidal relationship given in eqs. (1) and (3). Numerical
models can be used to investigate this effect. While modeling the entire
structure used to support the SIG antenna is difficult, we can use full-wave
numerical tools to simulate the vapor cell effects.
We use ANSYS HFSS (High Frequency Structure Simulator)hfss to simulate only
the SIG antenna and the vapor cell (including the plastic vapor-cell holder),
see Fig. 5. The HFSS convergence criterion was based on the energy of a plane
wave converging to 0.01 W; the mesh around the cell was seeded using a
curvilinear approximation, and the mesh inside the cell was length-restricted
to 1 mm, with first-order polynomial solving. With this model, we determine the
phase at locations 1 and 3 (as defined in Fig. 1), and the $\Delta\phi_{3,1}$ obtained
from HFSS are shown in Fig. 4(a). To ensure that the phases are being
calculated correctly with the HFSS simulation, we first determine
$\Delta\phi_{3,1}$ with no vapor cell present. These results are shown in Fig.
4(a) and match the theoretical calculation closely, as expected. Now that we
have confirmed that the HFSS is implemented correctly, the result from the
HFSS for the case when the vapor cell is included are shown in Fig. 4(a). We
see that the HFSS results (including the vapor cell) correspond well to the
measured data for angles $\theta>-25^{\circ}$. As with the experimental results, the HFSS
results indicate that the vapor cell does perturb the phase measurement and
causes deviation from the theoretical results. We see that the HFSS results do
not correspond exactly to the measured data over all the angles, but do show
the same trends. The deviations between the measured data and HFSS are
twofold. First, the exact permittivity ($\epsilon_{r}$) of the glass is not
known; $\epsilon_{r}$ ranges from 3 to 6 tropf (in this numerical model we
assume $\epsilon_{r}=5$). Secondly, upon comparing the photo of the
experimental setup in Fig. 2(b) and the HFSS model in Fig. 5, we see that not
all the objects used to rotate the SIG antenna are included in the HFSS model.
With that said, the measured data and the HFSS model compare well and follow the same
trends, especially for angles $\theta>-25^{\circ}$. There are asymmetries in the apparatus
used in the experiments. For angles $\theta<-25^{\circ}$, the apparatus used to
support the SIG antenna and vapor cell begins to influence
the phase of the measured data. The additional scattering caused by this
apparatus is not included in the HFSS numerical model. While the vapor cell
does perturb the measurement of $\Delta\phi_{3,1}$, the results in Fig. 4(a)
show that the Rydberg-atom based sensor can detect the relative phase
difference between two locations inside the vapor cell and work as an AoA
detector.
Figure 4: (a) Experimental and HFSS data for $\Delta\phi$. The error bars
correspond to the standard deviation of 5 data sets, and (b) AoA from the
experimental data.
Figure 5: HFSS model for the cell and horn antenna. The horn is rotated by an
angle $\Theta$ around the $x-$axis, such that it points towards the cell. The
model also shows the vapor cell holder.
With the measured $\Delta\phi$, the AoA can be determined from eq. (3). Fig.
4(b) shows the AoA obtained from the measured $\Delta\phi_{3,1}$ for incident
angles ranging over $\theta=\pm 40^{\circ}$. The solid line in this figure
represents the one-to-one correspondence of the incident angle and AoA; the
measured AoA should lie on this line. While the measured AoA follows the line,
it does not lie exactly on it. Also shown in this figure is the AoA obtained by
using the HFSS results for $\Delta\phi$ in eq. (3). Here again, we see that the
HFSS results deviate from the solid line. As with the measured $\Delta\phi$,
the deviation in the measured AoA (and the HFSS results for AoA) is due to the
vapor cell perturbation and to the apparatus used to support the experimental
equipment (the SIG antenna and vapor cell). This demonstrates that the Rydberg
atom-based sensor can be used to determine the AoA of an incident RF signal.
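Eqs. (1) and (3) are not reproduced in this excerpt, so the sketch below assumes the standard two-element interferometer relation $\Delta\phi=2\pi d\sin\theta/\lambda$ and its inversion; the spacing $d$ between the two probe locations and the RF wavelength $\lambda$ are illustrative values, not taken from the experiment:

```python
import numpy as np

# Hypothetical geometry: probe-location spacing d and RF wavelength lam (metres).
d, lam = 0.004, 0.015

def delta_phi(theta_deg):
    """Two-element interferometer phase difference (rad) at incidence angle theta."""
    return 2 * np.pi * d * np.sin(np.radians(theta_deg)) / lam

def aoa(dphi):
    """Invert the assumed relation to recover the angle of arrival (degrees)."""
    return np.degrees(np.arcsin(dphi * lam / (2 * np.pi * d)))

theta = 25.0
print(round(aoa(delta_phi(theta)), 6))  # 25.0
```

With $d<\lambda/2$, as here, $\Delta\phi$ stays within $\pm\pi$ and the inversion is unambiguous over the full $\pm 90^{\circ}$ range.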
While the cell does perturb the AoA measurement, two approaches can be pursued
to mitigate this effect. One approach is to design a vapor cell that can
minimize and even eliminate the vapor cell perturbations. Various groups are
investigating different approaches to modify the vapor cell used for these
Rydberg atom-based sensors. Two examples include the use of vapor cells with
honeycomb sides schafer or the use of metamaterials on the sides of the vapor
cells meta . A second approach is to use the HFSS results to calibrate the
vapor cell to reduce the perturbation effects. This is done by defining a
calibration factor as
${\cal C}={\rm AoA}_{HFSS}-{\rm AoA}_{theory}$ (6)
and subtracting this from the measured AoA
${\rm AoA}_{cal}={\rm AoA}_{meas}-{\cal C}$ (7)
where ${\rm AoA}_{HFSS}$, ${\rm AoA}_{theory}$, and ${\rm AoA}_{meas}$ are the
AoA obtained from the HFSS results, theory, and experimental results,
respectively. Fig. 4(b) shows ${\rm AoA}_{cal}$. While the calibration based
on the HFSS results does not produce a perfect correspondence with the solid
line, we do see that it improved the AoA measurement, especially for
angles $\theta>-25^{\circ}$. Once again, the deviations from the theory and HFSS simulation
for angles $\theta<-25^{\circ}$ are due to the asymmetries associated with the apparatus
used to support the experiments. The larger discrepancies could be handled with a
more accurate model of the experimental setup.
This paper demonstrates that it is possible to determine the angle of arrival
of an RF signal using an atom-based sub-wavelength phase measurement method.
While the vapor cell perturbs this measurement, we see that this effect can be
mostly accounted for, or at least explained, and these Rydberg atom-based
sensors have the capability of measuring the AoA of an incident RF signal.
Future iterations of this experiment will explore the permittivity of the
glass for more precise modeling of the standing wave in the glass cell. We are
also investigating different types of vapor cell designs and beam orientations
in order to minimize or eliminate the vapor cell effect on the AoA
measurements.
Now that we have demonstrated that it is possible to determine the AoA with a
Rydberg atom-based sensor, one can envision (1) developing arrays of these
Rydberg atom sensors, or (2) sampling the phase at numerous locations inside
one vapor cell, in order to detect the AoA of more general incidence angles or
to simultaneously detect the AoA of several sources at once. We are currently
developing these two types of sensors, and they will be the topic of a future
publication.
## References
* (1) Gordon, J.A., et al., “Quantum-Based SI Traceable Electric-Field Probe,” Proc of 2010 IEEE International Symposium on Electromagnetic Compatibility, July 25-30, 321-324, 2010.
* (2) Sedlacek, J.A., et al., Nature Phys., 8, 819, 2012.
* (3) Holloway, C.L., et al., IEEE Trans. on Antenna and Propag., 62(12), 6169-6182, 2014.
* (4) Holloway, C.L., et al., IEEE Trans. on Electromagnetic Compat., 59(2), 717-728, 2017.
* (5) Holloway, C.L., et al., Applied Phys. Lett., 104, 244102-1-5, 2014.
* (6) Sedlacek, J.A., et al., , Phys. Rev. Lett., 111, 063001, 2013.
* (7) Tanasittikosol, M., et al., J. Phys B, 44, 184020, 2011.
* (8) Gordon, J.A., et al., Applied Physics Letters, 105, 024104, 2014.
* (9) Fan, H., et al., J. Phys. B: At. Mol. Opt. Phys., 48, 202001, 2015.
* (10) Simons, M.T., et al., IEEE Access, 7, 164975-164985, 2019.
* (11) Simons, M.T., et al., Applied Optics, 57(22), pp. 6456-6460, 2018.
* (12) Anderson, D.A., et al. Physical Review Applied, 5, 034003, 2016.
* (13) Holloway, C.L., et al., Applied Phys. Letters, 113, 094101, 2018.
* (14) Song, Z., et al., Optics Express, 27(6), 2019.
* (15) Meyer, D.H., et al., Appl. Phys. Lett., 12, 211108, 2018.
* (16) Holloway, C.L., et al., IEEE Antenna and Wireless Propag. Lett., 18(9), 1853-1857, 2019.
* (17) Cox, K.C., et al., Phys. Rev. Lett. 121, 110502, 2018.
* (18) Holloway, C.L., et al., “A Multi-Band Rydberg-Atom Based Receiver: AM/FM Stereo Reception”, IEEE Antenna and Propagation Magazine, 2020.
* (19) Anderson, D.A., et al., arXiv:1808.08589v1, Aug. 26, 2018.
  * (20) Holloway, C.L., et al., AIP Advances, 9(6), 065110, 2019.
* (21) Simons, M.T., et al., Applied Physics Letters, 114, 114101 2019.
* (22) Gordon, J.A., et al., AIP Advances, 9, 045030, 2019.
* (23) Simons, M.T., et al., “Uncertainties in Rydberg atom-based RF E-field measurements,” in Proc. EMC Eur., Amsterdam, The Netherlands, pp. 376–380, Aug. 2018.
* (24) Simons, M.T., et al., IEEE Access, vol. 7, pp. 164975-164985, 2019, doi: 10.1109/ACCESS.2019.2949017.
  * (25) Simons, M.T., et al., “Applications with a Rydberg Atom-based Radio Frequency Antenna/Receiver,” Proc. EMC Europe 2019, Barcelona, Spain, Sept. 2019.
* (26) Fan H., et al., Physical Review Applied, 4, 044015, November, 2015.
* (27) Ansys® HFSS, Release 2020R2, https://www.ansys.com/products/electronics/ansys-hfss. Mentioning this product does not imply an endorsement by NIST, but serves to clarify the software used.
  * (28) Tropf, W.J., et al., Properties of Crystals and Glasses, Chapter 33, p. 33.7.
* (29) J.P. Shaffer, “Atom-based electromagnetic field sensing (Conference Presentation),” Proc. SPIE 11296, Optical, Opto-Atomic, and Entanglement-Enhanced Precision Metrology II, 112960Q (27 March 2020); https://doi.org/10.1117/12.2552626.
* (30) H. Mu, et al., “A Low Permittivity Metamaterial on a Glass Substrate for Fabricating an Atomic Vapor Cell,” 2019 Photonics & Electromagnetics Research Symposium - Fall (PIERS - Fall), Xiamen, China, 2019, pp. 344-350, doi: 10.1109/PIERS-Fall48861.2019.9021792.
# Autoregressive Denoising Diffusion Models for Multivariate Probabilistic
Time Series Forecasting
Kashif Rasul Calvin Seward Ingmar Schuster Roland Vollgraf
###### Abstract
In this work, we propose TimeGrad, an autoregressive model for multivariate
probabilistic time series forecasting which samples from the data distribution
at each time step by estimating its gradient. To this end, we use diffusion
probabilistic models, a class of latent variable models closely connected to
score matching and energy-based methods. Our model learns gradients by
optimizing a variational bound on the data likelihood and at inference time
converts white noise into a sample of the distribution of interest through a
Markov chain using Langevin sampling. We demonstrate experimentally that the
proposed autoregressive denoising diffusion model is the new state-of-the-art
multivariate probabilistic forecasting method on real-world data sets with
thousands of correlated dimensions. We hope that this method is a useful tool
for practitioners and lays the foundation for future research in this area.
Time Series and Sequences, Generative Models
## 1 Introduction
Classical time series forecasting methods such as those in (Hyndman &
Athanasopoulos, 2018) typically provide univariate point forecasts, require
hand-tuned features to model seasonality, and are trained individually on each
time series. Deep learning based time series models (Benidis et al., 2020) are
popular alternatives due to their end-to-end training of a global model, ease
of incorporating exogenous covariates, and automatic feature extraction
abilities. The task of modeling uncertainties is of vital importance for
downstream problems that use these forecasts for (business) decision making.
More often the individual time series for a problem data set are statistically
dependent on each other. Ideally, deep learning models need to incorporate
this inductive bias in the form of multivariate (Tsay, 2014) probabilistic
methods to provide accurate forecasts.
To model the full predictive distribution, methods typically resort to
tractable distribution classes or some type of low-rank approximations,
regardless of the true data distribution. To model the distribution in a
general fashion, one needs probabilistic methods with tractable likelihoods.
To date, several deep learning methods have been proposed for this purpose,
such as autoregressive ones (van den Oord et al., 2016c) or generative ones
based on normalizing flows (Papamakarios et al., 2019), which can learn
flexible models of high dimensional multivariate time series. Even if the full
likelihood is not tractable, one can often optimize a tractable lower bound
on the likelihood. Still, these methods require a certain structure in the
functional approximators, for example on the determinant of the Jacobian (Dinh
et al., 2017) for normalizing flows. _Energy-based models_ (EBM) (Hinton,
2002; LeCun et al., 2006) on the other hand have a much less restrictive
functional form. They approximate the unnormalized log-probability so that
density estimation reduces to a non-linear regression problem. EBMs have been
shown to perform well in learning high dimensional data distributions at the
cost of being difficult to train (Song & Kingma, 2021).
In this work, we propose autoregressive EBMs to solve the multivariate
probabilistic time series forecasting problem via a model we call TimeGrad and
show that not only are we able to train such a model with all the inductive
biases of probabilistic time series forecasting, but this model performs
exceptionally well when compared to other modern methods. This autoregressive-
EBM combination retains the power of autoregressive models, such as good
performance in extrapolation into the future, with the flexibility of EBMs as
a general purpose high-dimensional distribution model, while remaining
computationally tractable.
The paper is organized as follows. In Section 2 we first set up the notation
and detail the EBM of (Ho et al., 2020) which forms the basis of our per time-
step distribution model. Section 3 introduces the multivariate probabilistic
time series problem and we detail the TimeGrad model. The experiments with
extensive results are detailed in Section 4. We cover related work in Section
5 and conclude with some discussion in Section 6.
## 2 Diffusion Probabilistic Model
Let $\mathbf{x}^{0}\sim q_{\mathcal{X}}(\mathbf{x}^{0})$ denote the
multivariate training vector from some input space
$\mathcal{X}=\mathbb{R}^{D}$ and let $p_{\theta}(\mathbf{x}^{0})$ denote the
probability density function (PDF) which aims to approximate
$q_{\mathcal{X}}(\mathbf{x}^{0})$ and allows for easy sampling. Diffusion
models (Sohl-Dickstein et al., 2015) are latent variable models of the form
$p_{\theta}(\mathbf{x}^{0}):=\int
p_{\theta}(\mathbf{x}^{0:N})\,\mathrm{d}\mathbf{x}^{1:N}$, where
$\mathbf{x}^{1},\ldots,\mathbf{x}^{N}$ are latents of dimension
$\mathbb{R}^{D}$. Unlike in variational autoencoders (Kingma & Welling, 2019)
the approximate posterior $q(\mathbf{x}^{1:N}|\mathbf{x}^{0})$,
$q(\mathbf{x}^{1:N}|\mathbf{x}^{0})=\Pi_{n=1}^{N}q(\mathbf{x}^{n}|\mathbf{x}^{n-1})$
is not trainable but fixed to a Markov chain (called the _forward_ process)
that gradually adds Gaussian noise to the signal:
$q(\mathbf{x}^{n}|\mathbf{x}^{n-1}):=\mathcal{N}(\mathbf{x}^{n};\sqrt{1-\beta_{n}}\mathbf{x}^{n-1},\beta_{n}\mathbf{I}).$
The forward process uses an increasing variance schedule
$\beta_{1},\ldots,\beta_{N}$ with $\beta_{n}\in(0,1)$. The joint distribution
$p_{\theta}(\mathbf{x}^{0:N})$ is called the _reverse_ process, and is defined
as a Markov chain with learned Gaussian transitions starting with
$p(\mathbf{x}^{N})=\mathcal{N}(\mathbf{x}^{N};\mathbf{0},\mathbf{I})$, where
each subsequent transition of
$p_{\theta}(\mathbf{x}^{0:N}):=p(\mathbf{x}^{N})\Pi_{n=N}^{1}p_{\theta}(\mathbf{x}^{n-1}|\mathbf{x}^{n})$
is given by a parametrization of our choosing denoted by
$p_{\theta}(\mathbf{x}^{n-1}|\mathbf{x}^{n}):=\mathcal{N}(\mathbf{x}^{n-1};\mu_{\theta}(\mathbf{x}^{n},n),\Sigma_{\theta}(\mathbf{x}^{n},n)\mathbf{I}),$
(1)
with shared parameters $\theta$. Both
$\mu_{\theta}:\mathbb{R}^{D}\times\mathbb{N}\to\mathbb{R}^{D}$ and
$\Sigma_{\theta}:\mathbb{R}^{D}\times\mathbb{N}\to\mathbb{R}^{+}$ take two
inputs, namely the variable $\mathbf{x}^{n}\in\mathbb{R}^{D}$ as well as the
noise index $n\in\mathbb{N}$. The goal of
$p_{\theta}(\mathbf{x}^{n-1}|\mathbf{x}^{n})$ is to eliminate the Gaussian
noise added in the diffusion process. The parameters $\theta$ are learned to
fit the data distribution $q_{\mathcal{X}}(\mathbf{x}^{0})$ by minimizing the
negative log-likelihood via a variational bound using Jensen’s inequality:
$\begin{split}\min_{\theta}\mathbb{E}_{q(\mathbf{x}^{0})}[-\log
p_{\theta}(\mathbf{x}^{0})]\leq\\\
\min_{\theta}\mathbb{E}_{q(\mathbf{x}^{0:N})}[-\log
p_{\theta}(\mathbf{x}^{0:N})+\log
q(\mathbf{x}^{1:N}|\mathbf{x}^{0})].\end{split}$
This upper bound can be shown to be equal to
$\min_{\theta}\mathbb{E}_{q(\mathbf{x}^{0:N})}\left[-\log
p(\mathbf{x}^{N})-\sum_{n=1}^{N}\log\frac{p_{\theta}(\mathbf{x}^{n-1}|\mathbf{x}^{n})}{q(\mathbf{x}^{n}|\mathbf{x}^{n-1})}\right].$
(2)
As shown by (Ho et al., 2020), a property of the forward process is that it
admits sampling $\mathbf{x}^{n}$ at any arbitrary noise level $n$ in closed
form, since if $\alpha_{n}:=1-\beta_{n}$ and
$\bar{\alpha}_{n}:=\Pi_{i=1}^{n}\alpha_{i}$ its cumulative product, we have:
$q(\mathbf{x}^{n}|\mathbf{x}^{0})=\mathcal{N}(\mathbf{x}^{n};\sqrt{\bar{\alpha}_{n}}\mathbf{x}^{0},(1-\bar{\alpha}_{n})\mathbf{I}).$
(3)
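As a concrete illustration, eq. (3) lets one noise a training vector to any level $n$ in a single step. The sketch below uses the linear variance schedule quoted later in the experiments section ($\beta_{1}=10^{-4}$ to $\beta_{N}=0.1$ with $N=100$):

```python
import numpy as np

# Linear variance schedule, as used in the experiments section.
N = 100
beta = np.linspace(1e-4, 0.1, N)
alpha = 1.0 - beta
alpha_bar = np.cumprod(alpha)  # cumulative products \bar{alpha}_n

def q_sample(x0, n, rng):
    """Draw x^n ~ q(x^n | x^0) in closed form, eq. (3); n is 1-indexed."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[n - 1]) * x0 + np.sqrt(1.0 - alpha_bar[n - 1]) * eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal(8)   # a D=8 training vector
xN = q_sample(x0, N, rng)     # heavily noised: close to N(0, I)
print(alpha_bar[-1] < 1e-2)   # True: almost all signal destroyed at n = N
```

Since $\bar{\alpha}_{N}\approx 0.006$ for this schedule, $\mathbf{x}^{N}$ is nearly pure white noise, which is what the reverse process starts from.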
By using the fact that these processes are Markov chains, the objective in (2)
can be written as the KL-divergence between Gaussian distributions:
$-\log
p_{\theta}(\mathbf{x}^{0}|\mathbf{x}^{1})+D_{\mathrm{KL}}(q(\mathbf{x}^{N}|\mathbf{x}^{0})||p(\mathbf{x}^{N}))\\\
+\sum_{n=2}^{N}D_{\mathrm{KL}}(q(\mathbf{x}^{n-1}|\mathbf{x}^{n},\mathbf{x}^{0})||p_{\theta}(\mathbf{x}^{n-1}|\mathbf{x}^{n})),$
(4)
and (Ho et al., 2020) shows that, by property (3), the forward process
posteriors in these KL divergences, when conditioned on $\mathbf{x}^{0}$, i.e.
$q(\mathbf{x}^{n-1}|\mathbf{x}^{n},\mathbf{x}^{0})$, are tractable and given by
$q(\mathbf{x}^{n-1}|\mathbf{x}^{n},\mathbf{x}^{0})={\cal{N}}(\mathbf{x}^{n-1};\tilde{\mu}_{n}(\mathbf{x}^{n},\mathbf{x}^{0}),\tilde{\beta}_{n}\mathbf{I}),$
where
$\tilde{\mu}_{n}(\mathbf{x}^{n},\mathbf{x}^{0}):=\frac{\sqrt{\bar{\alpha}_{n-1}}\beta_{n}}{1-\bar{\alpha}_{n}}\mathbf{x}^{0}+\frac{\sqrt{\alpha_{n}}(1-\bar{\alpha}_{n-1})}{1-\bar{\alpha}_{n}}\mathbf{x}^{n}$
and
$\tilde{\beta}_{n}:=\frac{1-\bar{\alpha}_{n-1}}{1-\bar{\alpha}_{n}}\beta_{n}.$
(5)
Further, (Ho et al., 2020) shows that the KL-divergence between Gaussians can
be written as:
$D_{\mathrm{KL}}(q(\mathbf{x}^{n-1}|\mathbf{x}^{n},\mathbf{x}^{0})||p_{\theta}(\mathbf{x}^{n-1}|\mathbf{x}^{n}))=\\\
\mathbb{E}_{q}\left[\frac{1}{2\Sigma_{\theta}}\|\tilde{\mu}_{n}(\mathbf{x}^{n},\mathbf{x}^{0})-\mu_{\theta}(\mathbf{x}^{n},n)\|^{2}\right]+C,$
(6)
where $C$ is a constant which does not depend on $\theta$. So instead of a
parametrization (1) of $p_{\theta}$ that predicts $\tilde{\mu}$, one can
instead use the property (3) to write
$\mathbf{x}^{n}(\mathbf{x}^{0},\mathbf{\epsilon})=\sqrt{\bar{\alpha}_{n}}\mathbf{x}^{0}+\sqrt{1-\bar{\alpha}_{n}}\mathbf{\epsilon}$
for $\mathbf{\epsilon}\sim{\cal{N}}(\mathbf{0},\mathbf{I})$ and the formula
for $\tilde{\mu}$ to obtain that $\mu_{\theta}$ must predict
$(\mathbf{x}^{n}-\beta_{n}\mathbf{\epsilon}/\sqrt{1-\bar{\alpha}_{n}})/\sqrt{\alpha_{n}}$,
but since $\mathbf{x}^{n}$ is available to the network, we can choose:
$\mu_{\theta}(\mathbf{x}^{n},n)=\frac{1}{\sqrt{\alpha_{n}}}\left(\mathbf{x}^{n}-\frac{\beta_{n}}{\sqrt{1-\bar{\alpha}_{n}}}\mathbf{\epsilon}_{\theta}(\mathbf{x}^{n},n)\right),$
where $\mathbf{\epsilon}_{\theta}$ is a network which predicts
$\mathbf{\epsilon}\sim{\cal{N}}(\mathbf{0},\mathbf{I})$ from $\mathbf{x}^{n}$,
so that the objective simplifies to:
$\mathbb{E}_{\mathbf{x}^{0},\mathbf{\epsilon}}\left[\frac{\beta_{n}^{2}}{2\Sigma_{\theta}\alpha_{n}(1-\bar{\alpha}_{n})}\|\mathbf{\epsilon}-\mathbf{\epsilon}_{\theta}(\sqrt{\bar{\alpha}_{n}}\mathbf{x}^{0}+\sqrt{1-\bar{\alpha}_{n}}\mathbf{\epsilon},n)\|^{2}\right]$
(7)
resembling the loss in Noise Conditional Score Networks (Song & Ermon, 2019,
2020) using score matching. Once trained, to sample from the reverse process
$\mathbf{x}^{n-1}\sim p_{\theta}(\mathbf{x}^{n-1}|\mathbf{x}^{n})$ (1) we can
compute
$\mathbf{x}^{n-1}=\frac{1}{\sqrt{\alpha_{n}}}\left(\mathbf{x}^{n}-\frac{\beta_{n}}{\sqrt{1-\bar{\alpha}_{n}}}\mathbf{\epsilon}_{\theta}(\mathbf{x}^{n},n)\right)+\sqrt{\Sigma_{\theta}}\mathbf{z}$
where $\mathbf{z}\sim{\cal{N}}(\mathbf{0},\mathbf{I})$ for $n=N,\ldots,2$ and
$\mathbf{z}=\mathbf{0}$ when $n=1$. The full sampling procedure for
$\mathbf{x}^{0}$, starting from white noise sample $\mathbf{x}^{N}$, resembles
Langevin dynamics where we sample from the most noise-perturbed distribution
and reduce the magnitude of the noise scale until we reach the smallest one.
## 3 TimeGrad Method
We denote the entities of a multivariate time series by
$x_{i,t}^{0}\in\mathbb{R}$ for $i\in\\{1,\ldots,D\\}$ where $t$ is the time
index. Thus the multivariate vector at time $t$ is given by
$\mathbf{x}_{t}^{0}\in\mathbb{R}^{D}$. We are tasked with predicting the
multivariate distribution some given prediction time steps into the future and
so in what follows consider time series with $t\in[1,T]$, sampled from the
complete time series history of the training data, where we will split this
contiguous sequence into a context window of size $[1,t_{0})$ and prediction
interval $[t_{0},T]$, reminiscent of seq-to-seq models (Sutskever et al.,
2014) in language modeling.
In the univariate probabilistic DeepAR model (Salinas et al., 2019b), the log-
likelihood of each entity $x^{0}_{i,t}$ at a time step $t\in[t_{0},T]$ is
maximized over an individual time series’ prediction window. This is done with
respect to the parameters of some chosen distributional model via the state of
an RNN derived from its previous time step $x^{0}_{i,t-1}$ and its
corresponding covariates $\mathbf{c}_{i,t-1}$. The emission distribution
model, which is typically Gaussian for real-valued data or negative binomial
for count data, is selected to best match the statistics of the time series
and the network incorporates activation functions that satisfy the constraints
of the distribution’s parameters, e.g. a softplus() for the scale parameter of
the Gaussian.
A straightforward time series model for multivariate real-valued data could
use a factorizing output distribution instead. Shared parameters can then
learn patterns across the individual time series entities through the temporal
component — but the model falls short of capturing dependencies in the
emissions of the model. For this, a full joint distribution at each time step
has to be modeled, for example by using a multivariate Gaussian. However,
modeling the full covariance matrix not only increases the number of
parameters of the neural network by $O(D^{2})$, making learning difficult, but
also makes computing the loss $O(D^{3})$, which is impractical. Furthermore,
statistical dependencies for such distributions would be limited to second-
order effects. Approximating Gaussians with low-rank covariance matrices does
work, however, and these models are referred to as Vec-LSTM in (Salinas et al.,
2019a).
Instead, in this work we propose TimeGrad which aims to learn a model of the
conditional distribution of the future time steps of a multivariate time
series given its past and covariates as:
$q_{\mathcal{X}}(\mathbf{x}_{t_{0}:T}^{0}|\mathbf{x}_{1:t_{0}-1}^{0},\mathbf{c}_{1:T})=\Pi_{t=t_{0}}^{T}q_{\mathcal{X}}(\mathbf{x}_{t}^{0}|\mathbf{x}_{1:t-1}^{0},\mathbf{c}_{1:T}),$
(8)
where we assume that the covariates are known for all the time points and each
factor is learned via a _conditional_ denoising diffusion model introduced
above. To model the temporal dynamics we employ the autoregressive recurrent
neural network (RNN) architecture from (Graves, 2013; Sutskever et al., 2014)
which utilizes the LSTM (Hochreiter & Schmidhuber, 1997) or GRU (Chung et al.,
2014) to encode the time series sequence up to time point $t$, given the
covariates $\mathbf{c}_{t}$, via the updated hidden state $\mathbf{h}_{t}$:
$\mathbf{h}_{t}=\mathrm{RNN}_{\theta}(\mathtt{concat}(\mathbf{x}_{t}^{0},\mathbf{c}_{t}),\mathbf{h}_{t-1}),$
(9)
where $\mathrm{RNN}_{\theta}$ is a multi-layer LSTM or GRU parameterized by
shared weights $\theta$ and $\mathbf{h}_{0}=\mathbf{0}$. Thus we can
approximate (8) by the model
$\Pi_{t=t_{0}}^{T}p_{\theta}(\mathbf{x}_{t}^{0}|\mathbf{h}_{t-1}),$ (10)
where now $\theta$ comprises the weights of the RNN as well as denoising
diffusion model. This model is autoregressive as it consumes the observations
at the time step $t-1$ as input to learn the distribution of, or sample, the
next time step as shown in Figure 1.
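The state update (9) can be sketched as follows; a single Elman-style cell stands in for the paper's multi-layer LSTM/GRU, and all sizes and weights here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
D, C, H = 4, 3, 16  # series dim, covariate dim, hidden size (illustrative)

# One Elman-style recurrent cell in place of the multi-layer LSTM/GRU.
W = rng.standard_normal((H, D + C + H)) * 0.1
b = np.zeros(H)

def rnn_step(x_t, c_t, h_prev):
    """Eq. (9): h_t = RNN(concat(x_t, c_t), h_{t-1})."""
    z = np.concatenate([x_t, c_t, h_prev])
    return np.tanh(W @ z + b)

# Encode a context window autoregressively, starting from h_0 = 0.
T = 10
h = np.zeros(H)
for t in range(T):
    x_t = rng.standard_normal(D)   # observed multivariate value x_t^0
    c_t = rng.standard_normal(C)   # known covariates c_t
    h = rnn_step(x_t, c_t, h)
print(h.shape)  # (16,)
```

Each factor of the product (10) then conditions the denoising diffusion model on the hidden state $\mathbf{h}_{t-1}$ produced this way.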
### 3.1 Training
Training is performed by randomly sampling context and adjoining prediction
sized windows from the training time series data and optimizing the parameters
$\theta$ that minimize the negative log-likelihood of the model (10):
$\sum_{t=t_{0}}^{T}-\log p_{\theta}(\mathbf{x}_{t}^{0}|\mathbf{h}_{t-1}),$
starting with the hidden state $\mathbf{h}_{t_{0}-1}$ obtained by running the
RNN on the chosen context window. Via a similar derivation as in the previous
section, we have that the conditional variant of the objective (4) for time
step $t$ and noise index $n$ is given by the following simplification of (7)
(Ho et al., 2020):
$\mathbb{E}_{\mathbf{x}_{t}^{0},\epsilon,n}\left[\|\mathbf{\epsilon}-\mathbf{\epsilon}_{\theta}(\sqrt{\bar{\alpha}_{n}}\mathbf{x}^{0}_{t}+\sqrt{1-\bar{\alpha}_{n}}\mathbf{\epsilon},\mathbf{h}_{t-1},n)\|^{2}\right],$
when we choose the variance in (1) to be $\Sigma_{\theta}=\tilde{\beta}_{n}$
(5), where now the $\epsilon_{\theta}$ network is also _conditioned_ on the
hidden state. Algorithm 1 is the training procedure for each time step in the
prediction window using this objective.
Algorithm 1 Training for each time series step $t\in[t_{0},T]$
Input: data $\mathbf{x}_{t}^{0}\sim q_{\cal{X}}(\mathbf{x}_{t}^{0})$ and state
$\mathbf{h}_{t-1}$
repeat
Initialize $n\sim\mathrm{Uniform}({1,\ldots,N})$ and
$\epsilon\sim{\cal{N}}(\mathbf{0},\mathbf{I})$ Take gradient step on
$\nabla_{\theta}\|\mathbf{\epsilon}-\mathbf{\epsilon}_{\theta}(\sqrt{\bar{\alpha}_{n}}\mathbf{x}^{0}_{t}+\sqrt{1-\bar{\alpha}_{n}}\mathbf{\epsilon},\mathbf{h}_{t-1},n)\|^{2}$
until converged
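The objective inside Algorithm 1 can be sketched as below; `eps_theta` is a stand-in for the trained conditional noise-prediction network, and in practice an autodiff framework would take the gradient step on this quantity:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100
beta = np.linspace(1e-4, 0.1, N)
alpha_bar = np.cumprod(1.0 - beta)

def eps_theta(x_n, h, n):
    """Stand-in for the conditional network (a real model would be trained)."""
    return np.zeros_like(x_n)

def timegrad_loss(x0_t, h_prev):
    """One Monte Carlo draw of the Algorithm 1 objective for time step t."""
    n = int(rng.integers(1, N + 1))            # n ~ Uniform{1, ..., N}
    eps = rng.standard_normal(x0_t.shape)      # eps ~ N(0, I)
    x_n = np.sqrt(alpha_bar[n - 1]) * x0_t + np.sqrt(1 - alpha_bar[n - 1]) * eps
    return np.sum((eps - eps_theta(x_n, h_prev, n)) ** 2)

loss = timegrad_loss(rng.standard_normal(8), np.zeros(16))
print(loss >= 0.0)  # True; SGD minimizes this over the shared parameters theta
```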
Figure 1: TimeGrad schematic: an RNN _conditioned_ diffusion probabilistic
model at some time $t-1$ depicting the fixed forward process that adds
Gaussian noise and the learned reverse processes.
### 3.2 Inference
After training, we wish to predict for each time series in our data set some
prediction steps into the future and compare with the corresponding test set
time series. As in training, we run the RNN over the last context sized window
of the training set to obtain the hidden state $\mathbf{h}_{T}$ via (9). Then
we follow the sampling procedure in Algorithm 2 to obtain a sample
$\mathbf{x}_{T+1}^{0}$ of the next time step, which we can pass
autoregressively to the RNN together with the covariates $\mathbf{c}_{T+1}$ to
obtain the next hidden state $\mathbf{h}_{T+1}$ and repeat until the desired
forecast horizon has been reached. This process of sampling trajectories from
the “warm-up” state $\mathbf{h}_{T}$ can be repeated many times (e.g. $S=100$)
to obtain empirical quantiles of the uncertainty of our predictions.
Algorithm 2 Sampling $\mathbf{x}_{t}^{0}$ via annealed Langevin dynamics
Input: noise $\mathbf{x}_{t}^{N}\sim{\cal{N}}(\mathbf{0},\mathbf{I})$ and
state $\mathbf{h}_{t-1}$
for $n=N$ to $1$ do
if $n>1$ then
$\mathbf{z}\sim{\cal{N}}(\mathbf{0},\mathbf{I})$
else
$\mathbf{z}=\mathbf{0}$
end if
$\mathbf{x}_{t}^{n-1}=\frac{1}{\sqrt{\alpha_{n}}}(\mathbf{x}^{n}_{t}-\frac{\beta_{n}}{\sqrt{1-\bar{\alpha}_{n}}}\mathbf{\epsilon}_{\theta}(\mathbf{x}^{n}_{t},\mathbf{h}_{t-1},n))+\sqrt{\Sigma_{\theta}}\mathbf{z}$
end for
Return: $\mathbf{x}_{t}^{0}$
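Algorithm 2 can be transcribed directly; the sketch below uses the posterior variance choice $\Sigma_{\theta}=\tilde{\beta}_{n}$ from eq. (5) and an untrained stand-in for $\epsilon_{\theta}$:

```python
import numpy as np

N = 100
beta = np.linspace(1e-4, 0.1, N)
alpha = 1.0 - beta
alpha_bar = np.cumprod(alpha)
# Posterior variances beta_tilde_n, eq. (5), with the convention alpha_bar_0 = 1.
alpha_bar_prev = np.concatenate([[1.0], alpha_bar[:-1]])
beta_tilde = (1.0 - alpha_bar_prev) / (1.0 - alpha_bar) * beta

def sample_step(x_n, n, eps_theta, h, rng):
    """One reverse transition of Algorithm 2 (n is 1-indexed)."""
    z = rng.standard_normal(x_n.shape) if n > 1 else np.zeros_like(x_n)
    mean = (x_n - beta[n - 1] / np.sqrt(1 - alpha_bar[n - 1])
            * eps_theta(x_n, h, n)) / np.sqrt(alpha[n - 1])
    return mean + np.sqrt(beta_tilde[n - 1]) * z

rng = np.random.default_rng(0)
eps_theta = lambda x, h, n: np.zeros_like(x)  # untrained stand-in network
x = rng.standard_normal(8)                    # x_t^N ~ N(0, I)
h = np.zeros(16)                              # hidden state h_{t-1}
for n in range(N, 0, -1):
    x = sample_step(x, n, eps_theta, h, rng)
print(x.shape)  # (8,)
```

Note that $\tilde{\beta}_{1}=0$ under this convention, consistent with setting $\mathbf{z}=\mathbf{0}$ at $n=1$.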
### 3.3 Scaling
In real-world data, the magnitudes of different time series entities can vary
drastically. To normalize scales, we divide each time series entity by its
context window mean (or by $1$ if that mean is zero) before feeding it into the model. At
inference, the samples are then multiplied by the same mean values to match
the original scale. This rescaling technique simplifies the problem for the
model, which is reflected in significantly improved empirical performance as
shown in (Salinas et al., 2019b). The other method of a short-cut connection
from the input to the output of the function approximator, as done in the
multivariate point forecasting method LSTNet (Lai et al., 2018), is not
applicable here.
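The mean-scaling described above, as we read it, amounts to the following:

```python
import numpy as np

def mean_scale(context):
    """Per-entity context-window mean, with zeros replaced by 1."""
    v = context.mean(axis=-1)
    return np.where(v == 0.0, 1.0, v)

ctx = np.array([[2.0, 4.0, 6.0],   # entity with mean 4
                [0.0, 0.0, 0.0]])  # all-zero entity -> scale 1
v = mean_scale(ctx)
scaled = ctx / v[:, None]          # fed to the model during training
restored = scaled * v[:, None]     # applied to samples at inference
print(v.tolist())  # [4.0, 1.0]
```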
### 3.4 Covariates
We employ embeddings for categorical features (Charrington, 2018), which allow
relationships within a category, or its context, to be captured when
training time series models. Combining these embeddings as features for
forecasting yields powerful models like the first place winner of the Kaggle
Taxi Trajectory Prediction111https://www.kaggle.com/c/pkdd-15-predict-taxi-
service-trajectory-i challenge (De Brébisson et al., 2015). The covariates
$\mathbf{c}_{t}$ we use are composed of time-dependent (e.g. day of week, hour
of day) and time-independent embeddings, if applicable, as well as lag
features depending on the time frequency of the data set we are training on.
All covariates are thus known for the periods we wish to forecast.
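For instance, time-dependent covariates such as day of week and hour of day for an hourly series can be built as follows (the specific feature choice here is illustrative):

```python
from datetime import datetime, timedelta

# Hour-of-day and day-of-week covariates for an hourly series.
start = datetime(2021, 1, 1)  # a Friday
stamps = [start + timedelta(hours=k) for k in range(6)]
cov = [(ts.weekday(), ts.hour) for ts in stamps]  # (day-of-week, hour-of-day)
print(cov)  # [(4, 0), (4, 1), (4, 2), (4, 3), (4, 4), (4, 5)]
```

Since these are pure functions of the timestamp, they are known for the forecast horizon as well, as the text requires.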
## 4 Experiments
We benchmark TimeGrad on _six_ real-world data sets and evaluate against
several competitive baselines. The source code of the model will be made
available after the review process.
### 4.1 Evaluation Metric and Data Set
For evaluation, we compute the Continuous Ranked Probability Score (CRPS)
(Matheson & Winkler, 1976) on each time series dimension, as well as on the
sum of all time series dimensions (the latter denoted by
$\mathrm{CRPS}_{\mathrm{sum}}$). CRPS measures the compatibility of a
cumulative distribution function $F$ with an observation $x$ as
$\mathrm{CRPS}(F,x)=\int_{\mathbb{R}}(F(z)-\mathbb{I}\\{x\leq
z\\})^{2}\,\mathrm{d}z,$
where $\mathbb{I}\\{x\leq z\\}$ is the indicator function which is one if
$x\leq z$ and zero otherwise. CRPS is a _proper scoring function_ , hence CRPS
attains its minimum when the predictive distribution $F$ and the data
distribution are equal. Employing the empirical CDF of $F$, i.e.
$\hat{F}(z)=\frac{1}{S}\sum_{s=1}^{S}\mathbb{I}\\{X_{s}\leq z\\}$ with $S$
samples $X_{s}\sim F$ as a natural approximation of the predictive CDF, CRPS
can be directly computed from simulated samples of the conditional
distribution (8) at each time point (Jordan et al., 2019). Finally,
$\mathrm{CRPS}_{\mathrm{sum}}$ is obtained by first summing across the $D$
time-series — both for the ground-truth data, and sampled data (yielding
$\hat{F}_{\mathrm{sum}}(t)$ for each time point). The results are then
averaged over the prediction horizon, i.e. formally
$\mathrm{CRPS}_{\mathrm{sum}}=\mathbb{E}_{t}\left[\mathrm{CRPS}\left(\hat{F}_{\mathrm{sum}}(t),\sum_{i}x_{i,t}^{0}\right)\right]$.
As proved in (de Bézenac et al., 2020), $\mathrm{CRPS}_{\mathrm{sum}}$ is also
a proper scoring function. We use it instead of likelihood-based metrics since
not all methods we compare against yield analytical forecast distributions,
and for some, likelihoods are not meaningfully defined.
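As an illustration (not the paper's code), the sample-based CRPS above can be estimated with the equivalent energy form $\mathrm{CRPS}(F,x)=\mathbb{E}|X-x|-\tfrac{1}{2}\mathbb{E}|X-X^{\prime}|$, and $\mathrm{CRPS}_{\mathrm{sum}}$ by first summing across the $D$ dimensions; the function names below are our own:

```python
import numpy as np

def crps_from_samples(samples, x):
    """Sample-based CRPS via the energy form,
    CRPS(F, x) = E|X - x| - 0.5 * E|X - X'|,
    which equals the integral definition for distributions with finite mean."""
    s = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(s - x))
    term2 = 0.5 * np.mean(np.abs(s[:, None] - s[None, :]))
    return term1 - term2

def crps_sum(samples, x):
    """samples: (S, T, D) forecast trajectories; x: (T, D) ground truth.
    Sum across the D series first, then average CRPS over the horizon T."""
    s_sum = samples.sum(axis=-1)   # (S, T)
    x_sum = x.sum(axis=-1)         # (T,)
    return np.mean([crps_from_samples(s_sum[:, t], x_sum[t])
                    for t in range(x_sum.shape[0])])
```

With $S=100$ sampled trajectories per rolling window, this is exactly the Monte Carlo approximation of the predictive CDF described above.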
For our experiments we use the Exchange (Lai et al., 2018), Solar (Lai et al.,
2018), Electricity (https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014),
Traffic (https://archive.ics.uci.edu/ml/datasets/PEMS-SF),
Taxi (https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page) and
Wikipedia (https://github.com/mbohlkeschneider/gluon-ts/tree/mv_release/datasets)
open data sets, preprocessed exactly as in (Salinas et al., 2019a), with their
properties listed in Table 1. As can be noted in the table, we do not need to
normalize scales for Traffic.
Table 1: Dimension, domain, frequency, total training time steps and
prediction length properties of the training data sets used in the
experiments.
Data set | Dim. $D$ | Dom. | Freq. | Time steps | Pred. steps
---|---|---|---|---|---
Exchange | $8$ | $\mathbb{R}^{+}$ | day | $6,071$ | $30$
Solar | $137$ | $\mathbb{R}^{+}$ | hour | $7,009$ | $24$
Elec. | $370$ | $\mathbb{R}^{+}$ | hour | $5,833$ | $24$
Traffic | $963$ | $(0,1)$ | hour | $4,001$ | $24$
Taxi | $1,214$ | $\mathbb{N}$ | 30-min | $1,488$ | $24$
Wiki. | $2,000$ | $\mathbb{N}$ | day | $792$ | $30$
### 4.2 Model Architecture
We train TimeGrad via SGD using Adam (Kingma & Ba, 2015) with learning rate of
$1\text{\times}{10}^{-3}$ on the training split of each data set with $N=100$
diffusion steps, using a linear variance schedule from
$\beta_{1}=1\text{\times}{10}^{-4}$ to $\beta_{N}=0.1$. We construct
batches of size $64$ by taking random windows (with possible overlaps), with
the context size set to the number of prediction steps, from the total time
steps of each data set (see Table 1). For testing we use rolling-window
prediction starting from the last context window before the start of the
prediction range and compare it to the ground truth in the test set by
sampling $S=100$ trajectories.
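The linear variance schedule described above can be sketched as follows (an illustrative snippet using the stated endpoints, not the authors' code):

```python
import numpy as np

N = 100                                # diffusion steps
betas = np.linspace(1e-4, 0.1, N)      # beta_1 ... beta_N, linear schedule
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)         # cumulative signal retention after n steps
# alpha_bar decays towards 0, so x_N is close to pure Gaussian noise
```

The monotone decay of `alpha_bar` is what guarantees that after $N$ forward steps the data is essentially destroyed into noise, from which the reverse process then samples.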
The RNN consists of $2$ layers of an LSTM with the hidden state
$\mathbf{h}_{t}\in\mathbb{R}^{40}$ and we encode the noise index
$n\in\\{1,\ldots,N\\}$ using the Transformer’s (Vaswani et al., 2017) Fourier
positional embeddings, with $N_{\max}=500$, into $\mathbb{R}^{32}$ vectors.
The network $\epsilon_{\theta}$ consists of conditional 1-dim dilated ConvNets
with residual connections adapted from the WaveNet (van den Oord et al.,
2016a) and DiffWave (Kong et al., 2021) models. Figure 2 shows the schematics
of a single residual block $i\in\\{0,\ldots,7\\}$ together with the final output
from the sum of all the $8$ skip-connections. All, but the last, convolutional
network layers have an output channel size of $8$ and we use a _bidirectional_
dilated convolution in each block $i$ by setting its dilation to $2^{i\%2}$.
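A minimal sketch of a Transformer-style Fourier (sinusoidal) embedding of the noise index, as used to condition the network; the function name and exact frequency scaling are our own illustrative choices:

```python
import numpy as np

def noise_embedding(n, dim=32, n_max=500):
    """Sinusoidal (Fourier) positional embedding of diffusion step n into R^dim,
    following the Transformer construction with geometric frequencies."""
    half = dim // 2
    freqs = np.exp(-np.log(n_max) * np.arange(half) / half)
    angles = n * freqs
    return np.concatenate([np.sin(angles), np.cos(angles)])
```

Each diffusion step $n\in\\{1,\ldots,N\\}$ thus maps to a distinct, smoothly varying vector in $\mathbb{R}^{32}$ that the residual blocks can condition on.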
We use a validation set from the training data of the same size as the test
set to tune the number of epochs for early stopping.
All experiments run on a single Nvidia V100 GPU with $16$GB of memory.
Figure 2: The network architecture of $\epsilon_{\theta}$ consisting of
$\mathtt{residual\\_layers}=8$ conditional residual blocks with the Gated
Activation Unit $\sigma(\cdot)\odot\tanh(\cdot)$ from (van den Oord et al.,
2016b); whose skip-connection outputs are summed up to compute the final
output. Conv1x1 and Conv1d are 1D convolutional layers with filter size of $1$
and $3$, respectively, circular padding so that the spatial size remains $D$,
and all but the last convolutional layer has output channels
$\mathtt{residual\\_channels}=8$. FC are linear layers used to up/down-sample
the input to the appropriate size for broadcasting.
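The Gated Activation Unit in the caption splits the dilated convolution's doubled channel output in half and combines the halves elementwise; a small numpy sketch of that operation (illustrative, not the authors' implementation):

```python
import numpy as np

def gated_activation(h):
    """h: (2*C, L) pre-activation from the dilated conv.
    Returns sigma(a) * tanh(b), where a, b are the two channel halves."""
    a, b = np.split(h, 2, axis=0)
    gate = 1.0 / (1.0 + np.exp(-a))   # sigma(a), acts as a soft gate in (0, 1)
    return gate * np.tanh(b)          # bounded filter output in (-1, 1)
```

The sigmoid half gates how much of the tanh half passes through, which is the same construction used in PixelCNN and WaveNet.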
### 4.3 Results
Table 2: Test set $\mathrm{CRPS}_{\mathrm{sum}}$ comparison (lower is better)
of models on six real world data sets. Mean and standard error metrics for
TimeGrad obtained by re-training and evaluating $10$ times.
Method | Exchange | Solar | Electricity | Traffic | Taxi | Wikipedia
---|---|---|---|---|---|---
VES | $\mathbf{0.005}\scriptstyle{\pm 0.000}$ | $0.9\scriptstyle{\pm 0.003}$ | $0.88\scriptstyle{\pm 0.0035}$ | $0.35\scriptstyle{\pm 0.0023}$ | - | -
VAR | $\mathbf{0.005}\scriptstyle{\pm 0.000}$ | $0.83\scriptstyle{\pm 0.006}$ | $0.039\scriptstyle{\pm 0.0005}$ | $0.29\scriptstyle{\pm 0.005}$ | - | -
VAR-Lasso | $0.012\scriptstyle{\pm 0.0002}$ | $0.51\scriptstyle{\pm 0.006}$ | $0.025\scriptstyle{\pm 0.0002}$ | $0.15\scriptstyle{\pm 0.002}$ | - | $3.1\scriptstyle{\pm 0.004}$
GARCH | $0.023\scriptstyle{\pm 0.000}$ | $0.88\scriptstyle{\pm 0.002}$ | $0.19\scriptstyle{\pm 0.001}$ | $0.37\scriptstyle{\pm 0.0016}$ | - | -
KVAE | $0.014\scriptstyle{\pm 0.002}$ | $0.34\scriptstyle{\pm 0.025}$ | $0.051\scriptstyle{\pm 0.019}$ | $0.1\scriptstyle{\pm 0.005}$ | - | $0.095\scriptstyle{\pm 0.012}$
Vec-LSTM ind-scaling | $0.008\scriptstyle{\pm 0.001}$ | $0.391\scriptstyle{\pm 0.017}$ | $0.025\scriptstyle{\pm 0.001}$ | $0.087\scriptstyle{\pm 0.041}$ | $0.506\scriptstyle{\pm 0.005}$ | $0.133\scriptstyle{\pm 0.002}$
Vec-LSTM lowrank-Copula | $0.007\scriptstyle{\pm 0.000}$ | $0.319\scriptstyle{\pm 0.011}$ | $0.064\scriptstyle{\pm 0.008}$ | $0.103\scriptstyle{\pm 0.006}$ | $0.326\scriptstyle{\pm 0.007}$ | $0.241\scriptstyle{\pm 0.033}$
GP scaling | $0.009\scriptstyle{\pm 0.000}$ | $0.368\scriptstyle{\pm 0.012}$ | $0.022\scriptstyle{\pm 0.000}$ | $0.079\scriptstyle{\pm 0.000}$ | $0.183\scriptstyle{\pm 0.395}$ | $1.483\scriptstyle{\pm 1.034}$
GP Copula | $0.007\scriptstyle{\pm 0.000}$ | $0.337\scriptstyle{\pm 0.024}$ | $0.0245\scriptstyle{\pm 0.002}$ | $0.078\scriptstyle{\pm 0.002}$ | $0.208\scriptstyle{\pm 0.183}$ | $0.086\scriptstyle{\pm 0.004}$
Transformer MAF | $\mathbf{0.005}\scriptstyle{\pm 0.003}$ | $0.301\scriptstyle{\pm 0.014}$ | $0.0207\scriptstyle{\pm 0.000}$ | $0.056\scriptstyle{\pm 0.001}$ | $0.179\scriptstyle{\pm 0.002}$ | $0.063\scriptstyle{\pm 0.003}$
TimeGrad | $0.006\scriptstyle{\pm 0.001}$ | $\mathbf{0.287}\scriptstyle{\pm 0.02}$ | $\mathbf{0.0206}\scriptstyle{\pm 0.001}$ | $\mathbf{0.044}\scriptstyle{\pm 0.006}$ | $\mathbf{0.114}\scriptstyle{\pm 0.02}$ | $\mathbf{0.0485}\scriptstyle{\pm 0.002}$
Using the $\mathrm{CRPS}_{\mathrm{sum}}$ as an evaluation metric, we compare
test time predictions of TimeGrad to a wide range of existing methods
including classical multivariate methods:
* •
VAR (Lütkepohl, 2007) a multivariate linear vector auto-regressive model with
lags corresponding to the periodicity of the data,
* •
VAR-Lasso a Lasso regularized VAR,
* •
GARCH (van der Weide, 2002) a multivariate conditional heteroskedastic model
and
* •
VES an innovations state space model (Hyndman et al., 2008);
as well as deep learning based methods namely:
* •
KVAE (Fraccaro et al., 2017) a variational autoencoder to represent the data
on top of a linear state space model which describes the dynamics,
* •
Vec-LSTM-ind-scaling (Salinas et al., 2019a) which models the dynamics via an
RNN and outputs the parameters of an _independent_ Gaussian distribution with
mean-scaling,
* •
Vec-LSTM-lowrank-Copula (Salinas et al., 2019a) which instead parametrizes a
low-rank plus diagonal covariance via Copula process,
* •
GP-scaling (Salinas et al., 2019a) which unrolls an LSTM with scaling on each
individual time series before reconstructing the joint distribution via a low-
rank Gaussian,
* •
GP-Copula (Salinas et al., 2019a) which unrolls an LSTM on each individual
time series and then the joint emission distribution is given by a low-rank
plus diagonal covariance Gaussian copula and
* •
Transformer-MAF (Rasul et al., 2021) which uses Transformer (Vaswani et al.,
2017) to model the temporal conditioning and Masked Autoregressive Flow
(Papamakarios et al., 2017) for the distribution emission model.
Table 2 lists the corresponding $\mathrm{CRPS}_{\mathrm{sum}}$ values averaged
over $10$ independent runs together with their empirical standard deviations
and shows that the TimeGrad model sets the new state-of-the-art on all but the
smallest of the benchmark data sets. Note that flow based models must apply
continuous transformations onto a continuously connected distribution, making
it difficult to model disconnected modes. Flow models assign spurious density
to connections between these modes leading to potential inaccuracies.
Similarly the generator network in variational autoencoders must learn to map
from some continuous space to a possibly disconnected space which might not be
possible to learn. In contrast, EBMs do not suffer from these issues (Du &
Mordatch, 2019).
### 4.4 Ablation
The length $N$ of the forward process is a crucial hyperparameter, as a bigger
$N$ allows the reverse process to be approximately Gaussian (Sohl-Dickstein et
al., 2015) which assists the Gaussian parametrization (1) to approximate it
better. We evaluate to which extent, if any at all, larger $N$ affects
prediction performance, with an ablation study where we record the test set
$\mathrm{CRPS}_{\mathrm{sum}}$ of the Electricity data set for different total
diffusion process lengths $N=2,4,8,\ldots,256$ while keeping all other
hyperparameters unchanged. The results are plotted in Figure 3, where we
note that $N$ can be reduced down to $\approx 10$ without significant
performance loss. An optimal value is achieved at $N\approx 100$ and larger
levels are not beneficial if all else is kept fixed.
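To make the cost of larger $N$ concrete: sampling runs the learned reverse process for $N$ denoising steps. A generic ancestral-sampling sketch in the standard DDPM form (the network `eps_theta` is a stand-in here, not the paper's exact code):

```python
import numpy as np

def reverse_sample(eps_theta, shape, betas, rng):
    """Ancestral sampling: len(betas) denoising steps from pure noise.
    eps_theta(x, n) predicts the noise component at diffusion step n."""
    alphas = 1.0 - betas
    abar = np.cumprod(alphas)
    x = rng.standard_normal(shape)                  # x_N ~ N(0, I)
    for n in reversed(range(len(betas))):
        z = rng.standard_normal(shape) if n > 0 else np.zeros(shape)
        eps = eps_theta(x, n)
        mean = (x - betas[n] / np.sqrt(1.0 - abar[n]) * eps) / np.sqrt(alphas[n])
        x = mean + np.sqrt(betas[n]) * z            # one reverse step
    return x
```

Since every sampled forecast value requires this $N$-iteration loop, reducing $N$ from $100$ to $\approx 10$ with little loss of accuracy (Figure 3) translates directly into roughly an order of magnitude faster inference.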
Figure 3: TimeGrad test set $\mathrm{CRPS}_{\mathrm{sum}}$ for Electricity
data by varying total diffusion length $N$. Good performance is established
already at $N\approx 10$, with the optimal value at $N\approx 100$. Mean and
standard errors were obtained over $5$ independent runs. We see similar behaviour
with other data sets.
To highlight the predictions of TimeGrad we show in Figure 4 the predicted
median, $50\%$ and $90\%$ distribution intervals of the first $6$ dimensions
of the full $963$ dimensional multivariate forecast of the Traffic benchmark.
Figure 4: TimeGrad prediction intervals and test set ground-truth for Traffic
data of the first $6$ of $963$ dimensions from first rolling-window. Note that
neighboring entities have an order of magnitude difference in scales.
## 5 Related Work
### 5.1 Energy-Based Methods
The EBM of (Ho et al., 2020) that we adapt is based on methods that learn the
gradient of the log-density with respect to the _inputs_ , called Stein Score
function (Hyvärinen, 2005; Vincent, 2011), and at inference time use this
gradient estimate via Langevin dynamics to sample from the model of this
complicated data distribution (Song & Ermon, 2019). These models achieve
impressive results for image generation (Ho et al., 2020; Song & Ermon, 2020)
when trained in an unsupervised fashion without requiring adversarial
optimization. By perturbing the data using multiple noise scales, the learnt
Score network captures both coarse and fine-grained data features.
The closest related work to TimeGrad is in the recent non-autoregressive
conditional methods for high fidelity waveform generation (Chen et al., 2021;
Kong et al., 2021). Although these methods learn the distribution of vector
valued data via denoising diffusion methods, as done here, they do not
consider its temporal development. Also neighboring dimensions of waveform
data are highly correlated and have a uniform scale, which is not necessarily
true for multivariate time series problems where neighboring entities occur
arbitrarily (but in a fixed order) and can have different scales. (Du &
Mordatch, 2019) also use EBMs to model one and multiple steps for a trajectory
modeling task in a non-autoregressive fashion.
### 5.2 Time Series Forecasting
Neural time series methods have recently become popular ways of solving the
prediction problem via univariate point forecasting methods (Oreshkin et al.,
2020; Smyl, 2020) or univariate probabilistic methods (Salinas et al., 2019b).
In the multivariate setting we also have point forecasting methods (Lai et
al., 2018; Li et al., 2019) as well as probabilistic methods, like this
method, which explicitly model the data distribution using Gaussian copulas
(Salinas et al., 2019a), GANs (Yoon et al., 2019), or normalizing flows (de
Bézenac et al., 2020; Rasul et al., 2021). Bayesian neural networks can also
be used to provide _epistemic_ uncertainty in forecasts as well as detect
distributional shifts (Zhu & Laptev, 2018), although these methods often do
not perform as well empirically (Wenzel et al., 2020).
## 6 Conclusion and Future Work
We have presented TimeGrad, a versatile multivariate probabilistic time series
forecasting method that leverages the exceptional performance of EBMs to learn
and sample from the distribution of the next time step, autoregressively.
Analysis of TimeGrad on six commonly used time series benchmarks establishes
the new state-of-the-art against competitive methods.
We note that while training TimeGrad we do not need to loop over the EBM
function approximator $\epsilon_{\theta}$, unlike in the normalizing flow
setting where we have multiple stacks of bijections. However while sampling we
do loop $N$ times over $\epsilon_{\theta}$. A possible strategy to improve
sampling times introduced in (Chen et al., 2021) uses a combination of
improved variance schedule and an $L_{1}$ loss to allow sampling with fewer
steps at the cost of a small reduction in quality if such a trade-off is
required. A recent paper (Song et al., 2021) generalizes the diffusion
process via a class of non-Markovian processes, which also allows for faster
sampling.
The use of normalizing flows for discrete-valued data requires that one
dequantizes it (Theis et al., 2016) by adding uniform noise to the data
before training the flows. Dequantization is not needed in the EBM
setting and future work could explore methods of explicitly modeling discrete
distributions.
As noted in (Du & Mordatch, 2019) EBMs exhibit better out-of-distribution
(OOD) detection than other likelihood models. Such a task requires models to
have a high likelihood on the data manifold and low at all other locations.
Surprisingly (Nalisnick et al., 2019) showed that likelihood models, including
flows, were assigning higher likelihoods to OOD data whereas EBMs do not
suffer from this issue since they penalize high probability under the model
but low probability under the data distribution explicitly. Future work could
evaluate the usage of TimeGrad for anomaly detection tasks.
For long time sequences, one could replace the RNN with a Transformer
architecture (Rasul et al., 2021) to provide better conditioning for the EBM
emission head. Concurrently, since EBMs are not constrained by the form of
their functional approximators, one natural way to improve the model would be
to incorporate architectural choices that best encode the inductive bias of
the problem being tackled, for example with graph neural networks (Niu et al.,
2020) when the relationships between entities are known.
## References
* Benidis et al. (2020) Benidis, K., Rangapuram, S. S., Flunkert, V., Wang, B., Maddix, D., Turkmen, C., Gasthaus, J., Bohlke-Schneider, M., Salinas, D., Stella, L., Callot, L., and Januschowski, T. Neural forecasting: Introduction and literature overview, 2020.
* Charrington (2018) Charrington, S. TWiML & AI Podcast: Systems and Software for Machine Learning at Scale with Jeff Dean, 2018. URL https://bit.ly/2G0LmGg.
* Chen et al. (2021) Chen, N., Zhang, Y., Zen, H., Weiss, R. J., Norouzi, M., and Chan, W. WaveGrad: Estimating gradients for waveform generation. In _International Conference on Learning Representations 2021 (Conference Track)_ , 2021. URL https://openreview.net/forum?id=NsMLjcFaO8O.
* Chung et al. (2014) Chung, J., Gulcehre, C., Cho, K., and Bengio, Y. Empirical evaluation of gated recurrent neural networks on sequence modeling. In _NIPS 2014 Workshop on Deep Learning, December 2014_ , 2014.
* de Bézenac et al. (2020) de Bézenac, E., Rangapuram, S. S., Benidis, K., Bohlke-Schneider, M., Kurle, R., Stella, L., Hasson, H., Gallinari, P., and Januschowski, T. Normalizing Kalman Filters for Multivariate Time series Analysis. In _Advances in Neural Information Processing Systems_ , volume 33. Curran Associates, Inc., 2020.
* De Brébisson et al. (2015) De Brébisson, A., Simon, E., Auvolat, A., Vincent, P., and Bengio, Y. Artificial Neural Networks Applied to Taxi Destination Prediction. In _Proceedings of the 2015th International Conference on ECML PKDD Discovery Challenge - Volume 1526_ , ECMLPKDDDC’15, pp. 40–51, Aachen, Germany, Germany, 2015. CEUR-WS.org. URL http://dl.acm.org/citation.cfm?id=3056172.3056178.
* Dinh et al. (2017) Dinh, L., Sohl-Dickstein, J., and Bengio, S. Density estimation using Real NVP. In _5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings_. OpenReview.net, 2017. URL https://openreview.net/forum?id=HkpbnH9lx.
* Du & Mordatch (2019) Du, Y. and Mordatch, I. Implicit Generation and Modeling with Energy Based Models. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R. (eds.), _Advances in Neural Information Processing Systems_ , volume 32, pp. 3608–3618. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/378a063b8fdb1db941e34f4bde584c7d-Paper.pdf.
* Fraccaro et al. (2017) Fraccaro, M., Kamronn, S., Paquet, U., and Winther, O. A Disentangled Recognition and Nonlinear Dynamics Model for Unsupervised Learning. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R. (eds.), _Advances in Neural Information Processing Systems_ , volume 30, pp. 3601–3610. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper/2017/file/7b7a53e239400a13bd6be6c91c4f6c4e-Paper.pdf.
* Graves (2013) Graves, A. Generating Sequences With Recurrent Neural Networks. _arXiv preprint arXiv:1308.0850_ , 2013.
* Hinton (2002) Hinton, G. E. Training Products of Experts by Minimizing Contrastive Divergence. _Neural Computation_ , 14(8):1771–1800, August 2002. ISSN 0899-7667. doi: 10.1162/089976602760128018. URL https://doi.org/10.1162/089976602760128018.
* Ho et al. (2020) Ho, J., Jain, A., and Abbeel, P. Denoising Diffusion Probabilistic Models. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R. (eds.), _Advances in Neural Information Processing Systems_ , volume 33. Curran Associates, Inc., 2020. URL https://papers.nips.cc/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf.
* Hochreiter & Schmidhuber (1997) Hochreiter, S. and Schmidhuber, J. Long Short-Term Memory. _Neural Computation_ , 9(8):1735–1780, November 1997. ISSN 0899-7667. doi: 10.1162/neco.1997.9.8.1735.
* Hyndman & Athanasopoulos (2018) Hyndman, R. and Athanasopoulos, G. _Forecasting: Principles and practice_. OTexts, 2018. ISBN 9780987507112.
* Hyndman et al. (2008) Hyndman, R., Koehler, A., Ord, K., and Snyder, R. _Forecasting with exponential smoothing. The state space approach_ , chapter 17, pp. 287–300. Springer-Verlag, 2008. doi: 10.1007/978-3-540-71918-2.
* Hyvärinen (2005) Hyvärinen, A. Estimation of Non-Normalized Statistical Models by Score Matching. _Journal of Machine Learning Research_ , 6(24):695–709, 2005. URL http://jmlr.org/papers/v6/hyvarinen05a.html.
* Jordan et al. (2019) Jordan, A., Krüger, F., and Lerch, S. Evaluating Probabilistic Forecasts with scoringRules. _Journal of Statistical Software, Articles_ , 90(12):1–37, 2019. ISSN 1548-7660. doi: 10.18637/jss.v090.i12. URL https://www.jstatsoft.org/v090/i12.
* Kingma & Ba (2015) Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. In _International Conference on Learning Representations (ICLR)_ , 2015.
* Kingma & Welling (2019) Kingma, D. P. and Welling, M. An Introduction to Variational Autoencoders. _Foundations and Trends in Machine Learning_ , 12(4):307–392, 2019. doi: 10.1561/2200000056. URL https://doi.org/10.1561/2200000056.
* Kong et al. (2021) Kong, Z., Ping, W., Huang, J., Zhao, K., and Catanzaro, B. DiffWave: A Versatile Diffusion Model for Audio Synthesis. In _International Conference on Learning Representations 2021 (Conference Track)_ , 2021. URL https://openreview.net/forum?id=a-xFK8Ymz5J.
* Lai et al. (2018) Lai, G., Chang, W.-C., Yang, Y., and Liu, H. Modeling Long- and Short-Term Temporal Patterns with Deep Neural Networks. In _The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval_, SIGIR ’18, pp. 95–104, New York, NY, USA, 2018. ACM. ISBN 978-1-4503-5657-2. doi: 10.1145/3209978.3210006. URL http://doi.acm.org/10.1145/3209978.3210006.
* LeCun et al. (2006) LeCun, Y., Chopra, S., Hadsell, R., Ranzato, M., and Huang, F. A Tutorial on Energy-Based Learning. In Bakir, G., Hofman, T., Schölkopf, B., Smola, A., and Taskar, B. (eds.), _Predicting Structured Data_. MIT Press, 2006.
* Li et al. (2019) Li, S., Jin, X., Xuan, Y., Zhou, X., Chen, W., Wang, Y.-X., and Yan, X. Enhancing the locality and breaking the memory bottleneck of transformer on time series forecasting. In Wallach, H., Larochelle, H., Beygelzimer, A., d’Alché Buc, F., Fox, E., and Garnett, R. (eds.), _Advances in Neural Information Processing Systems 32_ , pp. 5244–5254. Curran Associates, Inc., 2019.
* Lütkepohl (2007) Lütkepohl, H. _New Introduction to Multiple Time Series Analysis_. Springer Berlin Heidelberg, 2007. ISBN 9783540262398. URL https://books.google.de/books?id=muorJ6FHIiEC.
* Matheson & Winkler (1976) Matheson, J. E. and Winkler, R. L. Scoring Rules for Continuous Probability Distributions. _Management Science_ , 22(10):1087–1096, 1976.
* Nalisnick et al. (2019) Nalisnick, E., Matsukawa, A., Teh, Y. W., Gorur, D., and Lakshminarayanan, B. Do Deep Generative Models Know What They Don’t Know? In _International Conference on Learning Representations_ , 2019. URL https://openreview.net/forum?id=H1xwNhCcYm.
* Niu et al. (2020) Niu, C., Song, Y., Song, J., Zhao, S., Grover, A., and Ermon, S. Permutation Invariant Graph Generation via Score-Based Generative Modeling. In Chiappa, S. and Calandra, R. (eds.), _The 23rd International Conference on Artificial Intelligence and Statistics, AISTATS 2020, 26-28 August 2020, Online [Palermo, Sicily, Italy]_ , volume 108 of _Proceedings of Machine Learning Research_ , pp. 4474–4484. PMLR, 2020.
* Oreshkin et al. (2020) Oreshkin, B. N., Carpov, D., Chapados, N., and Bengio, Y. N-BEATS: Neural basis expansion analysis for interpretable time series forecasting. In _International Conference on Learning Representations_ , 2020. URL https://openreview.net/forum?id=r1ecqn4YwB.
* Papamakarios et al. (2017) Papamakarios, G., Pavlakou, T., and Murray, I. Masked Autoregressive Flow for Density Estimation. _Advances in Neural Information Processing Systems 30_ , 2017.
* Papamakarios et al. (2019) Papamakarios, G., Nalisnick, E., Rezende, D. J., Mohamed, S., and Lakshminarayanan, B. Normalizing Flows for Probabilistic Modeling and Inference, 2019.
* Rasul et al. (2021) Rasul, K., Sheikh, A.-S., Schuster, I., Bergmann, U., and Vollgraf, R. Multivariate Probabilistic Time Series Forecasting via Conditioned Normalizing Flows. In _International Conference on Learning Representations 2021 (Conference Track)_ , 2021. URL https://openreview.net/forum?id=WiGQBFuVRv.
* Salinas et al. (2019a) Salinas, D., Bohlke-Schneider, M., Callot, L., Medico, R., and Gasthaus, J. High-dimensional multivariate forecasting with low-rank Gaussian Copula Processes. In Wallach, H., Larochelle, H., Beygelzimer, A., d’Alché Buc, F., Fox, E., and Garnett, R. (eds.), _Advances in Neural Information Processing Systems 32_ , pp. 6824–6834. Curran Associates, Inc., 2019a.
* Salinas et al. (2019b) Salinas, D., Flunkert, V., Gasthaus, J., and Januschowski, T. DeepAR: Probabilistic forecasting with autoregressive recurrent networks. _International Journal of Forecasting_ , 2019b. ISSN 0169-2070. URL http://www.sciencedirect.com/science/article/pii/S0169207019301888.
* Smyl (2020) Smyl, S. A hybrid method of exponential smoothing and recurrent neural networks for time series forecasting. _International Journal of Forecasting_ , 36(1):75–85, 2020. ISSN 0169-2070. doi: https://doi.org/10.1016/j.ijforecast.2019.03.017. URL http://www.sciencedirect.com/science/article/pii/S0169207019301153. M4 Competition.
* Sohl-Dickstein et al. (2015) Sohl-Dickstein, J., Weiss, E., Maheswaranathan, N., and Ganguli, S. Deep Unsupervised Learning using Nonequilibrium Thermodynamics. In Bach, F. and Blei, D. (eds.), _Proceedings of the 32nd International Conference on Machine Learning_ , volume 37 of _Proceedings of Machine Learning Research_ , pp. 2256–2265, Lille, France, 2015. PMLR. URL http://proceedings.mlr.press/v37/sohl-dickstein15.html.
* Song et al. (2021) Song, J., Meng, C., and Ermon, S. Denoising Diffusion Implicit Models. In _International Conference on Learning Representations 2021 (Conference Track)_ , 2021. URL https://openreview.net/pdf?id=St1giarCHLP.
* Song & Ermon (2019) Song, Y. and Ermon, S. Generative Modeling by Estimating Gradients of the Data Distribution. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R. (eds.), _Advances in Neural Information Processing Systems_ , volume 32, pp. 11918–11930. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/3001ef257407d5a371a96dcd947c7d93-Paper.pdf.
* Song & Ermon (2020) Song, Y. and Ermon, S. Improved Techniques for Training Score-Based Generative Models. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R. (eds.), _Advances in Neural Information Processing Systems_ , volume 33. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/92c3b916311a5517d9290576e3ea37ad-Paper.pdf.
* Song & Kingma (2021) Song, Y. and Kingma, D. P. How to Train Your Energy-Based Models. 2021. URL https://arxiv.org/abs/2101.03288.
* Sutskever et al. (2014) Sutskever, I., Vinyals, O., and Le, Q. V. Sequence to Sequence Learning with Neural Networks. In Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N., and Weinberger, K. (eds.), _Advances in Neural Information Processing Systems 27_ , pp. 3104–3112. Curran Associates, Inc., 2014.
* Theis et al. (2016) Theis, L., van den Oord, A., and Bethge, M. A note on the evaluation of generative models. In _International Conference on Learning Representations_ , 2016. URL http://arxiv.org/abs/1511.01844. arXiv:1511.01844.
* Tsay (2014) Tsay, R. S. _Multivariate Time Series Analysis: With R and Financial Applications_. Wiley Series in Probability and Statistics. Wiley, 2014. ISBN 9781118617908.
* van den Oord et al. (2016a) van den Oord, A., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., Kalchbrenner, N., Senior, A., and Kavukcuoglu, K. WaveNet: A Generative Model for Raw Audio. In _The 9th ISCA Speech Synthesis Workshop, Sunnyvale, CA, USA, 13-15 September 2016_ , pp. 125. ISCA, 2016a. URL http://www.isca-speech.org/archive/SSW_2016/abstracts/ssw9_DS-4_van_den_Oord.html.
* van den Oord et al. (2016b) van den Oord, A., Kalchbrenner, N., Espeholt, L., kavukcuoglu, k., Vinyals, O., and Graves, A. Conditional Image Generation with PixelCNN Decoders. In Lee, D., Sugiyama, M., Luxburg, U., Guyon, I., and Garnett, R. (eds.), _Advances in Neural Information Processing Systems_ , volume 29, pp. 4790–4798. Curran Associates, Inc., 2016b. URL https://proceedings.neurips.cc/paper/2016/file/b1301141feffabac455e1f90a7de2054-Paper.pdf.
* van den Oord et al. (2016c) van den Oord, A., Kalchbrenner, N., and Kavukcuoglu, K. Pixel Recurrent Neural Networks. In Balcan, M. F. and Weinberger, K. Q. (eds.), _Proceedings of The 33rd International Conference on Machine Learning_ , volume 48 of _Proceedings of Machine Learning Research_ , pp. 1747–1756, New York, New York, USA, 20–22 Jun 2016c. PMLR. URL http://proceedings.mlr.press/v48/oord16.html.
* van der Weide (2002) van der Weide, R. GO-GARCH: a multivariate generalized orthogonal GARCH model. _Journal of Applied Econometrics_ , 17(5):549–564, 2002. doi: 10.1002/jae.688.
* Vaswani et al. (2017) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L. u., and Polosukhin, I. Attention is All you Need. In Guyon, I., Luxburg, U., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R. (eds.), _Advances in Neural Information Processing Systems 30_ , pp. 5998–6008. Curran Associates, Inc., 2017. URL http://papers.nips.cc/paper/7181-attention-is-all-you-need.pdf.
* Vincent (2011) Vincent, P. A Connection Between Score Matching and Denoising Autoencoders. _Neural Computation_ , 23(7):1661–1674, 2011. URL https://doi.org/10.1162/NECO_a_00142.
* Wenzel et al. (2020) Wenzel, F., Roth, K., Veeling, B., Swiatkowski, J., Tran, L., Mandt, S., Snoek, J., Salimans, T., Jenatton, R., and Nowozin, S. How good is the Bayes posterior in deep neural networks really? In III, H. D. and Singh, A. (eds.), _Proceedings of the 37th International Conference on Machine Learning_ , volume 119 of _Proceedings of Machine Learning Research_ , pp. 10248–10259. PMLR, 13–18 Jul 2020. URL http://proceedings.mlr.press/v119/wenzel20a.html.
* Yoon et al. (2019) Yoon, J., Jarrett, D., and van der Schaar, M. Time-series Generative Adversarial Networks. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R. (eds.), _Advances in Neural Information Processing Systems_ , volume 32, pp. 5508–5518. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/c9efe5f26cd17ba6216bbe2a7d26d490-Paper.pdf.
* Zhu & Laptev (2018) Zhu, L. and Laptev, N. Deep and Confident Prediction for Time Series at Uber. In _2017 IEEE International Conference on Data Mining Workshops (ICDMW)_ , volume 00, pp. 103–110, November 2018. doi: 10.1109/ICDMW.2017.19. URL doi.ieeecomputersociety.org/10.1109/ICDMW.2017.19.
# S++: A Fast and Deployable Secure-Computation Framework for Privacy-
Preserving Neural Network Training
Prashanthi Ramachandran 1 Shivam Agarwal 1 Arup Mondal 1
Aastha Shah 1 Debayan Gupta 1
###### Abstract
We introduce S++, a simple, robust, and deployable framework for training a
neural network (NN) using private data from multiple sources, using secret-
shared secure function evaluation. In short, consider a virtual third party to
whom every data-holder sends their inputs, and which computes the neural
network: in our case, this virtual third party is actually a set of servers
which individually learn nothing, even with a malicious (but non-colluding)
adversary.
Previous work in this area has been limited to just one specific activation
function: ReLU, rendering the approach impractical for many use-cases. For the
first time, we provide fast and verifiable protocols for all common activation
functions and optimize them for running in a secret-shared manner. The ability
to quickly, verifiably, and robustly compute exponentiation, softmax, sigmoid,
etc., allows us to use previously written NNs without modification, vastly
reducing developer effort and complexity of code. In recent times, ReLU has
been found to converge much faster and be more computationally efficient as
compared to non-linear functions like sigmoid or tanh. However, we argue that
it would be remiss not to extend the mechanism to non-linear functions such as
the logistic sigmoid, tanh, and softmax that are fundamental due to their
ability to express outputs as probabilities and their universal approximation
property. Their role in RNNs and a few recent advancements also make them
more relevant.
## Introduction
Neural networks (NN) are used in areas ranging from image classification to
machine translation, and there are often multiple parties that contribute
possibly private data to the training process. However, interaction with the
training data and the resulting model may still leak a significant amount of
private information. Thus, there arises a need for secure neural networks, which can
help parties collaboratively train a neural network without revealing their
private input data.
Several approaches have been proposed to overcome possible data leakage, based
on secure multi-party computation (MPC). MPC is a powerful way to protect
sensitive training data which uses a range of cryptographic primitives to
allow multiple parties to compute a function without revealing the inputs of
any individual party (beyond what is implied by the output of the function
itself). However, previously proposed MPC-based schemes (Ohrimenko et al.
2016; Hunt et al. 2018; Mohassel and Zhang 2017; Wagh, Gupta, and Chandran
2018; Patra and Suresh 2020) do not support models that use exponentiation-
based activation functions such as logistic sigmoid, softmax, and tanh. These
protocols are largely restricted to ReLU and its variants, which might
restrict applicability in some specific cases (as described in the motivation
section).
In this paper, we propose S++, a three-party secure computation framework for
secure exponentiation, and exponentiation-based activation functions such as
logistic sigmoid, tanh, and softmax. This enables us to construct three-party
secure protocols for training and inference of several NN architectures such
that no single party learns any information about the data. S++ is an
efficient MPC-based privacy-preserving neural network training framework based
on (Wagh, Gupta, and Chandran 2018) for 3PC with the activation functions
logistic sigmoid, tanh, their derivatives, and softmax. In our setting, there
are $D$ (where $D$ can be arbitrary) data owners who wish to jointly train a
model over their data with the help of $3$-servers. In the setup phase, these
data owners create "additive secret shares" of their input data and send one
share each to the two primary servers. In the computation phase, the $3$-servers (the $2$
primary servers and the helper server) collectively run an interactive
protocol to train a neural network on the data owners’ data without learning
any information beyond the trained model.
### Contributions
We summarize our key contributions as follows:
* •
We first propose a secure protocol for exponentiation in the three-party
setting. This protocol is based on SCALE MAMBA’s (Aly et al. 2020) protocol
for base 2 exponentiation. It also uses primitives from (Catrina and Saxena
2010). We modify these ideas for our three-party setting and additively
secret-shared data.
* •
We describe novel secure protocols for the logistic sigmoid, softmax, and
tanh—popular activation functions that are significantly more complex than
ReLU (described by (Wagh, Gupta, and Chandran 2018))—along with their
derivatives with the help of the above-mentioned exponentiation protocol. The
inclusion of these protocols in a secure and private setting vastly increases
the practicality of the framework and enables people to convert their
protocols into secure ones without having to redesign the actual internal
structure of their NNs.
### Organization of the paper
The remainder of this paper is organized as follows: we first go over our
motivation for extending secure protocols to functions such as the logistic
sigmoid, tanh and softmax. Then, we look at recent work in this area, largely
in the realm of MPC. After that, we describe the notations used in our
protocols and explain some cryptographic primitives. In the Protocols section,
we delve into our architecture and the supporting protocols, specifically the
supporting 3-server protocols that serve as building blocks for our main
protocols. We describe our main protocols: exponentiation, logistic sigmoid,
tanh, their derivatives, and softmax. After that, we describe the theoretical
efficiency of our protocols and provide an evaluation of our experiments.
Lastly, we describe the future plans and directions for this work.
### Motivation
#### Logistic sigmoid and variants
The logistic sigmoid is a common activation function to introduce non-
linearity, where sigmoid$(x)=\displaystyle\frac{1}{1+e^{-x}}$. It essentially
squashes a real value in [0, 1] and has been used widely over the years for
its simplicity, especially in binary classification tasks with a maximum
likelihood loss function. It also has important variants like the tanh and
softmax:
* •
Tanh as a rescaled logistic sigmoid: The tanh is a rescaling and stretching of
the logistic sigmoid that maps to [-1, 1] i.e., tanh$(x)$ =
2$\times$sigmoid$(2x)-1$. Extending the protocol to the tanh facilitates
applications in recurrent neural networks where it is widely used.
* •
Logistic sigmoid as a special case of the softmax: The softmax is often used
for classification tasks, where
softmax$(z_{i})=\displaystyle\frac{e^{z_{i}}}{\sum_{j=1}^{k}e^{z_{j}}}$. The
logistic sigmoid happens to be a special case of this function, often used for
binary classification. Enabling softmax thus also extends the framework to
multi-class classification problems.
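These identities are easy to sanity-check numerically. The sketch below is plain Python for illustration only (no secret sharing involved; the function names are ours):

```python
import math

def sigmoid(x):
    # logistic sigmoid: squashes a real value into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def softmax(z):
    # softmax over a list of logits
    exps = [math.exp(v) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

x = 0.7
# tanh as a rescaled, stretched sigmoid: tanh(x) = 2*sigmoid(2x) - 1
assert abs(math.tanh(x) - (2 * sigmoid(2 * x) - 1)) < 1e-12

# sigmoid as a special case of softmax: softmax([x, 0])[0] = sigmoid(x)
assert abs(softmax([x, 0.0])[0] - sigmoid(x)) < 1e-12
```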
#### Why is ReLU not enough?
Despite ReLU being more computationally efficient than logistic sigmoid or
tanh (Krizhevsky, Sutskever, and Hinton 2017), we argue that it is worth
extending the secure computation protocol to popular functions particularly
involving exponentiation such as logistic sigmoid, tanh, and softmax for the
following reasons (with more details in the appendix):
* •
For classification tasks, output layers usually use softmax or logistic
sigmoid because they make for more interpretable values in [0, 1] as opposed
to ReLU and variations that are unbounded. In fact, though previously missing
in the literature, (Asadi and Jiang 2020) provide theoretical proofs on the
universal approximation of softmax used in the output layer for multiclass
classification tasks as well.
* •
ReLU’s unbounded nature limits its applicability in RNNs. Capturing long-term
dependencies becomes harder with an unbounded function due to exploding
gradients, as explored in (Pascanu, Mikolov, and Bengio 2013). Because of the
temporal correlations captured by RNNs (by forming connections between hidden
layers), long-term components add up and the norm of the gradient of the
hidden layers explode and ReLU’s unbounded nature exacerbates this problem.
There have been a few notable improvements to assuage this, but even in LSTMs,
the tanh often gives more consistent results (Kent and Salem 2019) and ReLU
tends to require greater fine-tuning.
* •
’Leaky’ ReLU has been used to overcome ReLU’s dying-neuron problem, in which
units with negative pre-activations stop updating. (Xu, Huang, and Li 2016)
have also revised the logistic sigmoid to a scaled version and the tanh to a
‘leaky’ tanh that penalizes the negative part of the domain. They found the
leaky tanh to be much faster than the tanh, to give better results than ReLU,
and to behave almost identically to leaky ReLU. Such potential improvements
highlight the need for secure protocols for such functions.
Though ReLU has become widely adopted in the past few years, there are still
some tasks wherein sigmoids are preferred over ReLU, and it is worthwhile to
extend such secure protocols to these as well.
### Recent Work
There have been many privacy-preserving machine learning (PPML) frameworks for
various situations, such as Decision Trees (Agrawal and Srikant 2000; Lindell
and Pinkas 2000), Linear Regression (Du and Atallah 2001; Sanil et al. 2004),
k-means clustering (Jagannathan and Wright 2005; Bunn and Ostrovsky 2007), SVM
classification (Yu, Vaidya, and Jiang 2006; Vaidya, Yu, and Jiang 2008), and
logistic regression (Slavkovic, Nardi, and Tibbits 2007). However, these
cannot generalize to a number of (complex and high accuracy) standard machine
learning applications.
To overcome this, SecureML (Mohassel and Zhang 2017) provided secure protocols
for linear regression, logistic regression, and neural networks with linear
activations, using a combination of arithmetic and garbled circuits for a
single semi-honest adversary in both the 2 and 3-server models. It also
provided secure protocols for logistic sigmoid and softmax in the 2-server
setting (although very briefly) and a method for truncating decimal numbers.
MiniONN (Liu et al. 2017) optimized the protocols of SecureML by reducing the
offline cost of matrix multiplications and increasing the online cost for the
2-server model in the semi-honest adversary setting. Chameleon (Riazi et al.
2018) and Gazelle (Juvekar, Vaikuntanathan, and Chandrakasan 2018) provided
secure inference protocols which are computationally secure against one semi-
honest adversary in the 3-server and 2-server models. Chameleon removes
expensive oblivious transfer protocols by using the third party as a dealer,
while Gazelle focuses on making the linear layers more communication efficient
by providing specialized packing schemes for additively homomorphic encryption
schemes.
SecureNN (Wagh, Gupta, and Chandran 2018) provides secure neural network
training protocols for non-linear activation functions in the 3-party setting
with up to one malicious corruption and shows that the honest-majority can
improve the performance by several orders of magnitude. For linear regression,
logistic regression and neural networks, the problem is even more challenging
as the training procedure computes many instances of non-linear activation
functions such as logistic sigmoid, tanh, and softmax that are expensive to
compute securely in both the 2-server and 3-server settings.
SecureNN provides efficient protocols for computing the ReLU activation,
maxpool and their derivatives. It suggests that the general idea can be extended
to other variants and piecewise linear approximations. However, (Mohassel and
Zhang 2017) show experimentally that low-degree polynomial approximations,
like piecewise linear approximations, are ineffective in terms of accuracy. It
has been observed that hard sigmoid, a piecewise linear approximation of the
logistic sigmoid, performs worse than the logistic sigmoid, especially in the
case of linear regression (Maksutov 2018). SecureML also provides higher-order
approximations of logistic sigmoid and softmax using a combination of ReLU
functions in place of exponentiation. However, they do not provide a detailed
algorithm and find the running time in the offline phase to be high.
To overcome this research gap, we propose S++, based on (Wagh, Gupta, and
Chandran 2018), for the 3-server setting with efficient protocols for logistic
sigmoid and tanh and their derivatives, as well as the softmax. This can be
extended to the derivative of the softmax as well.
## Cryptographic Primitives
### Notation
In our work, we use 2-out-of-2 additive sharing over the even ring $Z_{L}$
where $L=2^{l}$. For our purposes, we use the same ring as (Wagh, Gupta, and
Chandran 2018), i.e., $Z_{2^{64}}$. The 2-out-of-2 additive shares are
represented by $\langle x\rangle_{0}^{L}$ and $\langle x\rangle_{1}^{L}$,
where $L$ indicates the ring $Z_{L}$ over which the value $x$ has been shared.
We use the notation $\langle a\rangle_{j}$ to denote party $P_{j}$’s additive
secret share of the value $a$, and $\lfloor x\rfloor$ to denote the floor of
the value $x$. The notation $(p_{0},p_{1},\cdots,p_{n-1})$ is used to denote
the bits of an $n$-bit value $p$.
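As a concrete illustration of this sharing scheme (our sketch, not a protocol from this paper), a value can be split and reconstructed over $Z_{2^{64}}$ as follows; note that addition of two shared values is a purely local operation:

```python
import secrets

L = 2 ** 64  # the ring Z_{2^64}, as in SecureNN

def share(x):
    # split x into additive shares <x>_0, <x>_1 with x = <x>_0 + <x>_1 mod L
    x0 = secrets.randbelow(L)
    x1 = (x - x0) % L
    return x0, x1

def reconstruct(x0, x1):
    return (x0 + x1) % L

a0, a1 = share(42)
b0, b1 = share(100)
# each party adds its own shares locally; no communication is needed
c0, c1 = (a0 + b0) % L, (a1 + b1) % L
assert reconstruct(a0, a1) == 42
assert reconstruct(c0, c1) == 142
```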
Further, we use the following primitives from (Wagh, Gupta, and Chandran 2018)
in our work:
1. 1.
Matrix multiplication of two secret-shared matrices, represented by
$\mathcal{F}_{MatMul}(\\{P_{0},P_{1}\\},P_{2})$.
2. 2.
Division of two secret-shared values, represented by
$\mathcal{F}_{Division}(\\{P_{0},P_{1}\\},P_{2})$.
These protocols are described in the following section.
## Protocols of S++
### Architecture
S++ works with the $3$-server setting similar to SecureNN (Wagh, Gupta, and
Chandran 2018). In this setting, there are two primary servers and one helper
server. The two primary servers hold additive secret shares of the data. Let
$P_{0},P_{1},P_{2}$ be the three servers and $C$ be a set of $n$ participants
$\\{C_{1},C_{2},\dots C_{n}\\}$, where the $i^{th}$ party $C_{i}$ holds its
own private dataset, $\mathbb{D}_{i}$. $\mathcal{M}_{nn}$ is the neural
network model to be trained on the parties’ private data. The participants
$\\{C_{1},C_{2},\dots C_{n}\\}$ split their data into 2-out-of-2 additive
secret shares and give one share to each of the two servers $P_{0}$ and
$P_{1}$. The third server, $P_{2}$, similar to (Wagh, Gupta, and Chandran
2018), is crucial to the protocols, but does not hold any data. We use the
same integer representation for fixed point numbers as (Wagh, Gupta, and
Chandran 2018). However, to tackle any overflow that can render the
exponentiation protocol impractical for use, we also consider extending the
integer ring to $Z_{2^{128}}$. S++ provides security against one semi-honest
adversary and up to one malicious corruption.
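To illustrate the fixed-point representation, the sketch below mirrors a SecureML-style encoding on plaintext values; the choice of 13 fractional bits and the sign-aware truncation rule are our illustrative assumptions, not exact parameters quoted from the cited works:

```python
L = 2 ** 64       # ring size
F = 13            # number of fractional bits (illustrative choice)

def encode(x):
    # map a real number to an unsigned ring element; negatives wrap around
    return int(round(x * (1 << F))) % L

def decode(u):
    # interpret the upper half of the ring as negative values
    if u >= L // 2:
        u -= L
    return u / (1 << F)

def truncate(u):
    # after a fixed-point multiply, the product carries 2F fractional bits;
    # a sign-aware shift by F restores the original scale
    v = u if u < L // 2 else u - L
    return (v >> F) % L

a, b = encode(1.5), encode(-2.25)
prod = (a * b) % L            # carries 2F fractional bits
assert abs(decode(truncate(prod)) - (-3.375)) < 2 ** -F
```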
Figure 1: Function Dependencies in S++.
### Supporting Protocols
#### Matrix Multiplication ($\mathcal{F}_{MatMul}(\\{P_{0},P_{1}\\}P_{2})$):
In S++, this function from (Wagh, Gupta, and Chandran 2018) has been used for
multiplying fixed point values (represented as unsigned integers in the
$Z_{L}$ ring) by considering the values as $1\times 1$ matrices.
Two parties, $P_{0}$ and $P_{1}$ are required to hold shares of $X\in
Z_{L}^{m\times n}$ and $Y\in Z_{L}^{n\times v}$. After running this
interactive secure protocol, $P_{0}$ gets $\langle X.Y\rangle^{L}_{0}$ and
$P_{1}$ gets $\langle X.Y\rangle^{L}_{1}$.
#### Division ($\mathcal{F}_{Division}(\\{P_{0},P_{1}\\}P_{2})$):
This function, also from (Wagh, Gupta, and Chandran 2018), is used to perform
secure division on secret shared values. In this protocol, two parties,
$P_{0}$ and $P_{1}$ are required to hold shares of values $x_{j}$ and $y_{j}$
respectively, for $j\in\\{0,1\\}$. After running the protocol, they
obtain $\langle x/y\rangle^{L}_{0}$ and $\langle x/y\rangle^{L}_{1}$
respectively.
#### Taylor Series Expansion
($\mathcal{F}_{taylorExp}(\\{P_{0},P_{1}\\},P_{2})$; see algorithm 1):
In this protocol, we compute $e^{x}$, where $x$ is a secret fractional value.
We do so by evaluating the Taylor expansion of $e^{x}$ up to the fourth-order
term. At the end of the protocol, parties $P_{0}$ and $P_{1}$ obtain shares of
$e^{x}$.
Algorithm 1 Taylor Expansion,
$\mathcal{F}_{taylorExp}(\\{P_{0},P_{1}\\},P_{2})$
Input: $P_{0},P_{1}$ hold $\\{\langle x\rangle_{0}^{L}\\}$ and $\\{\langle
x\rangle_{1}^{L}\\}$ respectively where $x<1$.
Output: $P_{0}$ and $P_{1}$ obtain $\langle y\rangle_{j}^{L}$ = $\langle
e^{x}\rangle_{j}^{L}$.
Common Randomness: $P_{0}$ and $P_{1}$ hold random shares of zero - $u_{0}$
and $u_{1}$ respectively.
1: Parties $P_{0}$ and $P_{1}$ compute $\langle c\rangle_{j}^{L}=j$.
2: Parties $P_{0}$ and $P_{1}$ compute $\langle
numerator\rangle_{j}^{L}=\langle x\rangle_{j}^{L}$ and $denominator=1$.
3: Now, Parties $P_{0}$ and $P_{1}$ compute $\langle c\rangle_{j}^{L}=\langle
c\rangle_{j}^{L}+\langle numerator\rangle_{j}^{L}$.
4: for $i=2,\ldots,4$ do
5: Parties $P_{0}$, $P_{1}$ and $P_{2}$ invoke
$\mathcal{F}_{MatMul}(\\{P_{0},P_{1}\\}P_{2})$ with inputs $\langle
numerator\rangle_{j}^{L}$ and $\langle x\rangle_{j}^{L}$ to obtain $\langle
numerator\rangle_{j}^{L}$ for $j\in\\{0,1\\}$.
6: $denominator=denominator\times i$
7: Parties $P_{0}$ and $P_{1}$ compute $\langle c\rangle_{j}^{L}=\langle
c\rangle_{j}^{L}+\frac{\langle numerator\rangle_{j}^{L}}{denominator}$
8: end for
9: $P_{j}$ for $j\in\\{0,1\\}$ output $\langle c\rangle_{j}^{L}+u_{j}$.
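Over plaintext values, the arithmetic of Algorithm 1 reduces to the following sketch (the shares and the zero-masking are omitted; only the loop structure is mirrored, and the function name is ours):

```python
import math

def taylor_exp(x, terms=4):
    # mirrors Algorithm 1 on plaintext: c = 1 + x + x^2/2! + x^3/3! + x^4/4!
    assert 0 <= x < 1, "the protocol assumes a fractional input"
    c = 1.0 + x                 # steps 1-3: constant plus first-order term
    numerator, denominator = x, 1
    for i in range(2, terms + 1):
        numerator *= x          # F_MatMul in the secure version
        denominator *= i
        c += numerator / denominator
    return c

# the 4-term truncation is accurate for fractional inputs
assert abs(taylor_exp(0.5) - math.exp(0.5)) < 1e-3
```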
#### Exponentiation ($\mathcal{F}_{Exp}(\\{P_{0},P_{1}\\},P_{2})$); see
algorithm 2:
In this protocol, we compute $e^{x}$, where $x$ is secret. This protocol is
based on SCALE MAMBA’s (Aly et al. 2020) base-2 exponentiation protocol. It
uses primitives like $\mathcal{F}_{Trunc}$, $\mathcal{F}_{BitDecomp}$,
$\mathcal{F}_{PreMult}$ from (Catrina and Saxena 2010).
While (Aly et al. 2020) uses the polynomial $P_{1045}(X)$ from (Hart 1978) to
compute the exponentiation for the fractional part of a number, we use a
secure protocol to compute the output using the Taylor series expansion of
$e^{x}$. We introduce function $\mathcal{F}_{taylorExp}$ defined above for
this purpose.
This protocol first uses $\mathcal{F}_{Trunc}$ to split the input $a$ into its
integer ($a_{\text{int}}$) and fractional ($a_{\text{frac}}$) parts. After
that, it evaluates $e^{a_{\text{int}}}$ using primitives
$\mathcal{F}_{BitDecomp}$ and $\mathcal{F}_{PreMult}$ described above. It
further evaluates $e^{a_{\text{frac}}}$ using $\mathcal{F}_{taylorExp}$. It
then multiplies these two values to obtain
$e^{a_{\text{int}}+a_{\text{frac}}}=e^{a}$.
Algorithm 2 Exponentiation $\mathcal{F}_{Exp}(\\{P_{0},P_{1}\\},P_{2})$
Input: $P_{0}$ and $P_{1}$ hold $\langle x\rangle_{0}^{L}$ and $\langle
x\rangle_{1}^{L}$ (shares of a value x).
Output: $P_{0}$ and $P_{1}$ obtain $\langle y\rangle_{j}^{L}$ = $\langle
e^{x}\rangle_{j}^{L}$.
1: For $j\in\\{0,1\\}$, party $P_{j}$ calls
$\mathcal{F}_{Trunc}(\\{P_{0},P_{1}\\})$ to obtain $\langle a\rangle_{j}^{L}$,
which is the $j^{\text{th}}$ share of $\lfloor x\rfloor$.
2: For $j\in\\{0,1\\}$, party $P_{j}$ gets the fractional part of x by locally
computing: $\langle b\rangle_{j}^{L}=\langle x\rangle_{j}^{L}-\langle
a\rangle_{j}^{L}$
3: For $j\in\\{0,1\\}$ party $P_{j}$ invokes
$\mathcal{F}_{BitDecomp}(\\{P_{0},P_{1}\\})$ on $\langle a\rangle_{j}^{L}$ to
obtain $(\langle c_{0}\rangle_{j}^{L},\langle
c_{1}\rangle_{j}^{L},\cdots,\langle c_{m-1}\rangle_{j}^{L})$: shares of the
bits $(a_{0},a_{1},\cdots,a_{m-1})$, where $a$ is an m-bit value.
4: for $i=0,1,\ldots,m-1$ do
5: $P_{j}$ for $j\in\\{0,1\\}$ computes $\langle
v_{i}\rangle_{j}^{L}=e^{2^{i}}\cdot\langle c_{i}\rangle_{j}^{L}+j-\langle
c_{i}\rangle_{j}^{L}$.
6: end for
7: $P_{j}$ for $j\in\\{0,1\\}$ invokes
$\mathcal{F}_{PreMult}(\\{P_{0},P_{1}\\})$ to get $\langle m\rangle_{j}^{L}$.
8: $P_{j}$ for $j\in\\{0,1\\}$ invokes
$\mathcal{F}_{taylorExp}(\\{P_{0},P_{1}\\},P_{2})$ with input $\langle
b\rangle_{j}^{L}$ to get $\langle n\rangle_{j}^{L}$
9: Finally, $P_{j}$ for $j\in\\{0,1\\}$ invokes
$\mathcal{F}_{MatMul}(\\{P_{0},P_{1}\\},P_{2})$ with inputs $\langle
m\rangle_{j}^{L}$ and $\langle n\rangle_{j}^{L}$ to obtain share $\langle
y\rangle_{j}^{L}$ of $y=m\times n$.
### Main Protocols
Our main protocols include logistic sigmoid ($\mathcal{F}_{Sigmoid}$;
described in algorithm 3), tanh ($\mathcal{F}_{Tanh}$; described in algorithm
5), and their derivatives; see algorithms 4 and 6, as well as the softmax
function ($\mathcal{F}_{Softmax}$; described in algorithm 7).
#### Sigmoid, $\mathcal{F}_{Sigmoid}(\\{P_{0},P_{1}\\},P_{2})$; see algorithm
3):
This protocol can be used to securely compute logistic sigmoid. Parties
$P_{0}$, $P_{1}$ and $P_{2}$ invoke $\mathcal{F}_{Exp}$ with shares of the
input $x$ to obtain output shares of $e^{x}$. They then jointly compute shares
of $1+e^{x}$. They use (Wagh, Gupta, and Chandran 2018)’s secure division
protocol to obtain $\sigma(x)=\frac{e^{x}}{1+e^{x}}$.
Algorithm 3 Sigmoid, $\mathcal{F}_{Sigmoid}(\\{P_{0},P_{1}\\},P_{2})$
Input: $P_{0},P_{1}$ hold $\\{\langle x\rangle_{0}^{L}\\}$ and $\\{\langle
x\rangle_{1}^{L}\\}$ respectively.
Output: $P_{0},P_{1}$ get $\\{\langle\sigma(x)\rangle_{0}^{L}\\}$ and
$\\{\langle\sigma(x)\rangle_{1}^{L}\\}$ respectively where
$\sigma(x)=\frac{e^{x}}{1+e^{x}}$.
Common Randomness: $P_{0}$ and $P_{1}$ hold random shares of zero - $u_{0}$
and $u_{1}$ respectively.
1: Parties $P_{0}$, $P_{1}$ and $P_{2}$ invoke
$F_{Exp}(\\{P_{0},P_{1}\\},P_{2})$ with inputs $\langle x\rangle_{0}^{L}$ and
$\langle x\rangle_{1}^{L}$ and obtain $\langle a\rangle_{j}^{L}=\langle
e^{x}\rangle_{j}^{L}$.
2: Now, parties $P_{0}$ and $P_{1}$ compute $\langle b\rangle_{j}^{L}=\langle
a\rangle_{j}^{L}+j$.
3: $P_{0}$, $P_{1}$ and $P_{2}$ invoke
$\mathcal{F}_{Division}(\\{P_{0},P_{1}\\},P_{2})$ with inputs $\langle
a\rangle_{j}^{L}$ and $\langle b\rangle_{j}^{L}$ to obtain $\langle
c\rangle_{j}^{L}$ for $j\in\\{0,1\\}$.
4: $P_{j}$ for $j\in\\{0,1\\}$ output $\langle c\rangle_{j}^{L}+u_{j}$.
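Stripped of shares and masking, the three steps of Algorithm 3 amount to the following plaintext computation (our illustrative mirror):

```python
import math

def sigmoid_plaintext(x):
    a = math.exp(x)   # step 1: F_Exp yields shares of e^x
    b = a + 1.0       # step 2: each party adds j, so the shares sum to 1 + e^x
    return a / b      # step 3: F_Division yields shares of e^x / (1 + e^x)

# e^x / (1 + e^x) coincides with the usual form 1 / (1 + e^{-x})
assert abs(sigmoid_plaintext(0.3) - 1.0 / (1.0 + math.exp(-0.3))) < 1e-12
```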
#### Derivative of Sigmoid,
$\mathcal{F^{D}}_{Sigmoid}(\\{P_{0},P_{1}\\},P_{2})$; see algorithm 4):
This protocol can be used to securely compute the derivative of the logistic
sigmoid. We use the identity
$\frac{d\sigma(x)}{dx}=\sigma^{\prime}(x)=\sigma(x)\cdot(1-\sigma(x))$. At the
end of the protocol, parties $P_{0}$ and $P_{1}$ obtain the shares of
$\sigma^{\prime}(x)$.
Algorithm 4 Derivative of Sigmoid
$\langle{\mathcal{F^{D}}_{sigmoid}(x)}\rangle^{L}$
Input: $P_{0}$, $P_{1}$ have inputs $\langle{x}\rangle_{j}^{L}$ for $P_{j}$
such that $j\in\\{0,1\\}$.
Output: $P_{0}$, $P_{1}$ get $\langle{\sigma^{\prime}(x)}\rangle_{0}^{L}$ and
$\langle{\sigma^{\prime}(x)}\rangle_{1}^{L}$ where
$\sigma^{\prime}(x)=\sigma(x)(1-\sigma(x))$
Common Randomness: $P_{0}$ and $P_{1}$ hold random shares of zero - $u_{0}$
and $u_{1}$ respectively.
1: $P_{0}$, $P_{1}$ and $P_{2}$ invoke
$\mathcal{F}_{Sigmoid}{(\\{P_{0},P_{1}\\},P_{2})}$ with $P_{j}$ for
$j\in\\{0,1\\}$ having input $\langle{x}\rangle_{j}^{L}$ to learn
$\langle{a}\rangle_{j}^{L}=\langle{\sigma(x)}\rangle_{j}^{L}$.
2: $P_{j}$ computes $\langle{b}\rangle_{j}^{L}$ =
$-1\times\langle{a}\rangle_{j}^{L}$ for $j\in\\{0,1\\}$
3: $P_{j}$ computes $\langle{b}\rangle_{j}^{L}$ =
$\langle{b}\rangle_{j}^{L}+j$ for $j\in\\{0,1\\}$, so that $b=1-\sigma(x)$
4: $P_{0}$, $P_{1}$ and $P_{2}$ invoke
$\mathcal{F}_{MatMul}{(\\{P_{0},P_{1}\\},P_{2})}$ with $P_{j}$ for
$j\in\\{0,1\\}$ having inputs $\langle{a}\rangle_{j}^{L}$ and
$\langle{b}\rangle_{j}^{L}$ to learn
$\langle{c}\rangle_{j}^{L}=\langle{a}\rangle_{j}^{L}\times\langle{b}\rangle_{j}^{L}$.
5: $P_{j}$ for $j\in\\{0,1\\}$ output:
$\langle{c}\rangle_{j}^{L}+\langle{u}\rangle_{j}^{L}$
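The identity that Algorithm 4 evaluates on shares can be checked against a numerical derivative; the sketch below is plaintext-only and the function names are ours:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_prime(x):
    # Algorithm 4 computes sigma(x) * (1 - sigma(x)) on secret shares
    s = sigmoid(x)
    return s * (1.0 - s)

# compare with a central finite difference
x, h = 0.8, 1e-6
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)
assert abs(sigmoid_prime(x) - numeric) < 1e-8
```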
#### Tanh, $\mathcal{F}_{Tanh}(\\{P_{0},P_{1}\\},P_{2})$; see algorithm 5):
This protocol can be used to securely compute the tanh function. For this
protocol, we use the fact that $\tanh(x)$ is related to $\sigma(x)$ by
$\tanh(x)=2\sigma(2x)-1$.
Algorithm 5 Tanh $\langle{\mathcal{F}_{Tanh}(x)}\rangle^{L}$
Input: $P_{0}$, $P_{1}$ have inputs $\langle{x}\rangle_{j}^{L}$ for $P_{j}$
such that $j\in\\{0,1\\}$.
Output: $P_{0}$, $P_{1}$ get $\langle{\tanh(x)}\rangle_{0}^{L}$ and
$\langle{\tanh(x)}\rangle_{1}^{L}$ where $\tanh(x)=\frac{2}{1+e^{-2x}}-1$
Common Randomness: $P_{0}$ and $P_{1}$ hold random shares of zero - $u_{0}$
and $u_{1}$ respectively.
1: $P_{j}$ computes $\langle{a}\rangle_{j}^{L}$ =
$2\times\langle{x}\rangle_{j}^{L}$ for $j\in\\{0,1\\}$
2: $P_{0}$, $P_{1}$ and $P_{2}$ invoke
$\mathcal{F}_{Sigmoid}{(\\{P_{0},P_{1}\\},P_{2})}$ with $P_{j}$ for
$j\in\\{0,1\\}$ having input $\langle{a}\rangle_{j}^{L}$ to learn
$\langle{p}\rangle_{j}^{L}=\langle{\sigma(a)}\rangle_{j}^{L}$.
3: $P_{j}$ for $j\in\\{0,1\\}$ output:
$2\times\langle{p}\rangle_{j}^{L}-j+\langle{u}\rangle_{j}^{L}$
#### Derivative of Tanh, $\mathcal{F^{D}}_{Tanh}(\\{P_{0},P_{1}\\},P_{2})$;
see algorithm 6):
This protocol can be used to securely compute the derivative of the tanh
function. For this protocol, we use the fact that $\tanh^{\prime}(x)$ is
related to $\sigma^{\prime}(x)$ by $\tanh^{\prime}(x)=4\sigma^{\prime}(2x)$.
Algorithm 6 Derivative of Tanh $\langle{\mathcal{F^{D}}_{Tanh}(x)}\rangle^{L}$
Input: $P_{0}$, $P_{1}$ have inputs $\langle{x}\rangle_{j}^{L}$ for $P_{j}$
such that $j\in\\{0,1\\}$.
Output: $P_{0}$ and $P_{1}$ get $\langle{\tanh^{\prime}(x)}\rangle_{0}^{L}$
and $\langle{\tanh^{\prime}(x)}\rangle_{1}^{L}$ respectively.
Common Randomness: $P_{0}$ and $P_{1}$ hold random shares of zero - $u_{0}$
and $u_{1}$ respectively.
1: $P_{0}$ and $P_{1}$ compute
$\langle{a}\rangle_{j}^{L}=2\times\langle{x}\rangle_{j}^{L}$.
2: $P_{0}$, $P_{1}$, $P_{2}$ invoke $\mathcal{F^{D}}_{sigmoid}$ with
$\langle{a}\rangle_{j}^{L}$ to obtain shares $\langle{b}\rangle_{j}^{L}$.
3: $P_{0}$ and $P_{1}$ output:
$4.\langle{b}\rangle_{j}^{L}+\langle{u}\rangle_{j}^{L}$
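The rescaling identity follows by differentiating $\tanh(x)=2\sigma(2x)-1$, which gives $\tanh^{\prime}(x)=4\sigma^{\prime}(2x)$; a quick plaintext check (our sketch):

```python
import math

def sigmoid_prime(x):
    s = 1.0 / (1.0 + math.exp(-x))
    return s * (1.0 - s)

def tanh_prime(x):
    # Algorithm 6: tanh'(x) = 4 * sigma'(2x)
    return 4.0 * sigmoid_prime(2.0 * x)

# tanh'(x) = 1 - tanh(x)^2, so the two expressions must agree
x = 0.45
assert abs(tanh_prime(x) - (1.0 - math.tanh(x) ** 2)) < 1e-12
```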
#### Softmax, $\mathcal{F}_{Softmax}(\\{P_{0},P_{1}\\},P_{2})$; see algorithm
7):
This protocol can be used to securely compute the softmax function. The two
parties compute the numerator $e^{z_{i}}$ and the denominator
$\sum_{i=1}^{k}e^{z_{i}}$ and invoke the $\mathcal{F}_{Division}$ function to
perform secure division.
Algorithm 7 Softmax, $\mathcal{F}_{Softmax}(\\{P_{0},P_{1}\\},P_{2})$
Input: $P_{0},P_{1}$ hold $\\{\langle z_{i}\rangle_{0}^{L}\\}_{i\in[k]}$ and
$\\{\langle z_{i}\rangle_{1}^{L}\\}_{i\in[k]}$ respectively.
Output: $P_{0},P_{1}$ get $\\{\langle
s_{max}(z_{i})\rangle_{0}^{L}\\}_{i\in[k]}$ and $\\{\langle
s_{max}(z_{i})\rangle_{1}^{L}\\}_{i\in[k]}$ respectively where
$s_{max}(z_{i})=\frac{e^{z_{i}}}{\sum_{j=1}^{k}e^{z_{j}}}$.
Common Randomness: $P_{0}$ and $P_{1}$ hold random shares of zero - $u_{0}$
and $u_{1}$ respectively.
1: for $i=1,2,\ldots,k$ do
2: Parties $P_{0}$, $P_{1}$ and $P_{2}$ invoke
$F_{Exp}(\\{P_{0},P_{1}\\},P_{2})$ with inputs $\\{\langle
z_{i}\rangle_{0}^{L}\\}_{i\in[k]}$ and $\\{\langle
z_{i}\rangle_{1}^{L}\\}_{i\in[k]}$ and $P_{0}$, $P_{1}$ obtain shares $\langle
c_{i}\rangle_{0}^{L}$, $\langle c_{i}\rangle_{1}^{L}$ resp. of $c_{i}^{L}$ =
$e^{z_{i}}$.
3: end for
4: for $j\in\\{0,1\\}$, $P_{j}$ calculates $S_{j}=\sum_{i=1}^{k}\langle
c_{i}\rangle_{j}^{L}$.
5: for $i=1,2,\dots,k$ do
6: Parties $P_{0}$, $P_{1}$ and $P_{2}$ invoke
$\mathcal{F}_{Division}(\\{P_{0},P_{1}\\},P_{2})$ with inputs $\langle
c_{i}\rangle_{j}^{L}$ and $\langle S\rangle_{j}^{L}$.
7: Parties $P_{j}$ for $j\in\\{0,1\\}$ output $\\{\langle
s_{max}(z_{i})\rangle_{j}^{L}\\}_{i\in[k]}+u_{j}$.
8: end for
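On plaintext values, Algorithm 7 reduces to the textbook computation below (shares, masking, and the secure division are elided; the function name is ours):

```python
import math

def softmax_plaintext(z):
    # steps 1-3: exponentiate each logit (F_Exp in the secure version)
    c = [math.exp(v) for v in z]
    # step 4: each party sums its shares of the exponentials locally
    s = sum(c)
    # steps 5-8: one secure division per output coordinate (F_Division)
    return [ci / s for ci in c]

out = softmax_plaintext([2.0, 1.0, 0.1])
assert abs(sum(out) - 1.0) < 1e-12     # outputs form a distribution
assert out[0] > out[1] > out[2]        # the order of the logits is preserved
```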
## Empirical Evaluation
### Implementation and Experimental Setup
Our system is implemented in C++ using standard libraries. To be able to
execute machine learning protocols, we use a fixed-point representation in the
integer ring $\mathbb{Z}_{2^{64}}$. This allows us to compute precise values
using our machine learning protocols. The fixed-point arithmetic used in our
implementation is the same as in (Wagh, Gupta, and Chandran 2018), which
borrowed the idea from (Mohassel and Zhang 2017).
Exponentiation is complicated and may overflow the integer ring, so we
consider the ring $\mathbb{Z}_{2^{128}}$ as proposed by (Aly et al. 2020). So far,
we have implemented the Taylor series exponentiation (refer to step 7 of
Algorithm 1) which works for values less than 1. We provide benchmarks for our
logistic sigmoid, tanh and softmax functions, their derivatives and the Taylor
series exponentiation in the secure setting with input vectors of varying
sizes. These benchmarks include the execution time and the total communication
(in MB) of these protocols.
We run our protocols in a LAN setting on an Intel i5 7th Gen with 8GB RAM. The
average bandwidth measured was 40Mb/s for download and 9.2Mb/s for upload and
the average ping was 12ms.
### Experimental Results
We show the average time taken by the protocols in Table 1.
Protocol | Dimension | Time (s) | Comm. (MB)
---|---|---|---
$\mathcal{F}_{Exp}$ | 64x16 | 0.08 | 0.025
$\mathcal{F}_{Exp}$ | 128x128 | 2.134 | 0.393
$\mathcal{F}_{Exp}$ | 576x20 | 0.882 | 0.276
$\mathcal{F}_{Sigmoid}$ | 64x16 | 0.252 | 2.58
$\mathcal{F}_{Sigmoid}$ | 128x128 | 5.631 | 41.288
$\mathcal{F}_{Sigmoid}$ | 576x20 | 2.615 | 29.03
$\mathcal{F}_{Tanh}$ | 64x16 | 0.275 | 2.58
$\mathcal{F}_{Tanh}$ | 128x128 | 5.32 | 41.288
$\mathcal{F}_{Tanh}$ | 576x20 | 2.613 | 29.03
$\mathcal{F}_{Softmax}$ | 64x16 | 0.324 | 2.58
$\mathcal{F}_{Softmax}$ | 128x128 | 5.438 | 41.288
$\mathcal{F}_{Softmax}$ | 576x20 | 2.617 | 29.03
$\mathcal{F^{D}}_{Sigmoid}$ | 64x16 | 0.464 | 2.597
$\mathcal{F^{D}}_{Sigmoid}$ | 128x128 | 8.033 | 41.55
$\mathcal{F^{D}}_{Sigmoid}$ | 576x20 | 4.121 | 29.214
$\mathcal{F^{D}}_{Tanh}$ | 64x16 | 0.383 | 2.58
$\mathcal{F^{D}}_{Tanh}$ | 128x128 | 4.465 | 41.288
$\mathcal{F^{D}}_{Tanh}$ | 576x20 | 2.84 | 29.03
$\mathcal{F}_{taylorExp}$ | 64x16 | 0.032 | 0.005
$\mathcal{F}_{taylorExp}$ | 128x128 | 0.092 | 0.079
$\mathcal{F}_{taylorExp}$ | 576x20 | 0.427 | 0.055
Table 1: Benchmarks in LAN setting on i5 7th Gen
## Conclusion and Future Work
Previous work in secure computation of neural networks has achieved
reasonable efficiency for many real-world applications. However, the limited
choice of protocols that these implementations can run restricts the range of
activities a user can perform in the secure setting.
Through the addition of the exponentiation function, it becomes possible to
add a large number of secure activation functions that were previously
impossible to compute. We have shown logistic sigmoid, tanh, softmax and their
derivatives as examples of such functions. We implement these functions with
similar overheads as the previously proposed activation functions.
In its current form, the exponentiation function overflows even for small
values, making it impractical for use in a regular neural network. This in
turn prevents us from using, in practical settings, the functions that invoke
exponentiation as part of their execution.
Exponentiation protocols in a fixed-point MPC setting have been provided by
(Aly and Smart 2019) and implemented by (Aly et al. 2019). Ideas from these
protocols can be used to avoid using a workaround that would involve
increasing the size of shares to accommodate the overflow.
The future work on S++ will be focused on dealing with the overflow such that
the exponentiation function can handle the magnitude of the values that it
would encounter in a regular neural network. Once this is solved, the other
functions that we have proposed, that use exponentiation, will automatically
become suitable for practical usage.
> ## References
>
> * Agrawal and Srikant (2000) Agrawal, R.; and Srikant, R. 2000. Privacy-
> preserving data mining. In _Proceedings of the 2000 ACM SIGMOD
> international conference on Management of data_ , 439–450.
> * Aly et al. (2020) Aly, A.; Cong, K.; Cozzo, D.; Keller, M.; Orsini, E.;
> Rotaru, D.; Scherer, O.; Scholl, P.; Smart, N.; Tanguy, T.; et al. 2020.
> SCALE–MAMBA v1. 10: Documentation .
> * Aly et al. (2019) Aly, A.; Orsini, E.; Rotaru, D.; Smart, N. P.; and
> Wood, T. 2019. Zaphod: Efficiently combining lsss and garbled circuits in
> scale. In _Proceedings of the 7th ACM Workshop on Encrypted Computing &
> Applied Homomorphic Cryptography_, 33–44.
> * Aly and Smart (2019) Aly, A.; and Smart, N. P. 2019. Benchmarking
> privacy preserving scientific operations. In _International Conference on
> Applied Cryptography and Network Security_ , 509–529. Springer.
> * Asadi and Jiang (2020) Asadi, B.; and Jiang, H. 2020. On Approximation
> Capabilities of ReLU Activation and Softmax Output Layer in Neural Networks.
> _arXiv preprint arXiv:2002.04060_ .
> * Bunn and Ostrovsky (2007) Bunn, P.; and Ostrovsky, R. 2007. Secure two-
> party k-means clustering. In _Proceedings of the 14th ACM conference on
> Computer and communications security_ , 486–497.
> * Catrina and Saxena (2010) Catrina, O.; and Saxena, A. 2010. Secure
> Computation with Fixed-Point Numbers. In _International Conference on
> Financial Cryptography and Data Security_. Springer.
> * Du and Atallah (2001) Du, W.; and Atallah, M. J. 2001. Privacy-
> preserving cooperative scientific computations. In _CSFW_ , 273. Citeseer.
> * Hart (1978) Hart, J. F. 1978. _Computer Approximations_. Krieger
> Publishing Co., Inc.
> * Hunt et al. (2018) Hunt, T.; Song, C.; Shokri, R.; Shmatikov, V.; and
> Witchel, E. 2018. Chiron: Privacy-preserving machine learning as a service.
> _arXiv preprint arXiv:1803.05961_ .
> * Jagannathan and Wright (2005) Jagannathan, G.; and Wright, R. N. 2005.
> Privacy-preserving distributed k-means clustering over arbitrarily
> partitioned data. In _Proceedings of the eleventh ACM SIGKDD international
> conference on Knowledge discovery in data mining_ , 593–599.
> * Juvekar, Vaikuntanathan, and Chandrakasan (2018) Juvekar, C.;
> Vaikuntanathan, V.; and Chandrakasan, A. 2018. GAZELLE: A low latency
> framework for secure neural network inference. In _27th USENIX Security
> Symposium (USENIX Security 18)_ , 1651–1669.
> * Kent and Salem (2019) Kent, D.; and Salem, F. 2019. Performance of
> three slim variants of the long short-term memory (LSTM) layer. In _2019
> IEEE 62nd International Midwest Symposium on Circuits and Systems (MWSCAS)_
> , 307–310. IEEE.
> * Krizhevsky, Sutskever, and Hinton (2017) Krizhevsky, A.; Sutskever, I.;
> and Hinton, G. E. 2017. Imagenet classification with deep convolutional
> neural networks. _Communications of the ACM_ 60(6): 84–90.
> * Le, Jaitly, and Hinton (2015) Le, Q. V.; Jaitly, N.; and Hinton, G. E.
> 2015. A simple way to initialize recurrent networks of rectified linear
> units. _arXiv preprint arXiv:1504.00941_ .
> * Lindell and Pinkas (2000) Lindell, Y.; and Pinkas, B. 2000. Privacy
> preserving data mining. In _Annual International Cryptology Conference_ ,
> 36–54. Springer.
> * Liu et al. (2017) Liu, J.; Juuti, M.; Lu, Y.; and Asokan, N. 2017.
> Oblivious neural network predictions via minionn transformations. In
> _Proceedings of the 2017 ACM SIGSAC Conference on Computer and
> Communications Security_ , 619–631.
> * Maksutov (2018) Maksutov, R. 2018. Deep study of a not very deep neural
> network. Part 2: Activation functions. https://towardsdatascience.com/deep-
> study-of-a-not-very-deep-neural-network-part-2-activation-functions-
> fd9bd8d406fc. Accessed: 2020-11-06.
> * Mohassel and Zhang (2017) Mohassel, P.; and Zhang, Y. 2017. SecureML: A
> system for scalable privacy-preserving machine learning. In _2017 IEEE
> Symposium on Security and Privacy (SP)_ , 19–38. IEEE.
> * Ohrimenko et al. (2016) Ohrimenko, O.; Schuster, F.; Fournet, C.; Mehta,
> A.; Nowozin, S.; Vaswani, K.; and Costa, M. 2016. Oblivious multi-party
> machine learning on trusted processors. In _25th USENIX Security Symposium
> (USENIX Security 16)_, 619–636.
> * Pascanu, Mikolov, and Bengio (2013) Pascanu, R.; Mikolov, T.; and
> Bengio, Y. 2013. On the difficulty of training recurrent neural networks.
> In _International conference on machine learning_ , 1310–1318.
> * Patra and Suresh (2020) Patra, A.; and Suresh, A. 2020. BLAZE: Blazing
> Fast Privacy-Preserving Machine Learning. _arXiv preprint arXiv:2005.09042_
> .
> * Riazi et al. (2018) Riazi, M. S.; Weinert, C.; Tkachenko, O.; Songhori,
> E. M.; Schneider, T.; and Koushanfar, F. 2018. Chameleon: A hybrid secure
> computation framework for machine learning applications. In _Proceedings of
> the 2018 on Asia Conference on Computer and Communications Security_ ,
> 707–721.
> * Sanil et al. (2004) Sanil, A. P.; Karr, A. F.; Lin, X.; and Reiter, J.
> P. 2004. Privacy preserving regression modelling via distributed
> computation. In _Proceedings of the tenth ACM SIGKDD international
> conference on Knowledge discovery and data mining_ , 677–682.
> * Slavkovic, Nardi, and Tibbits (2007) Slavkovic, A. B.; Nardi, Y.; and
> Tibbits, M. M. 2007. "Secure" Logistic Regression of Horizontally and
> Vertically Partitioned Distributed Databases. In _Seventh IEEE
> International Conference on Data Mining Workshops (ICDMW 2007)_ , 723–728.
> IEEE.
> * Vaidya, Yu, and Jiang (2008) Vaidya, J.; Yu, H.; and Jiang, X. 2008.
> Privacy-preserving SVM classification. _Knowledge and Information Systems_
> 14(2): 161–178.
> * Wagh, Gupta, and Chandran (2018) Wagh, S.; Gupta, D.; and Chandran, N.
> 2018. SecureNN: Efficient and Private Neural Network Training. _IACR
> Cryptology ePrint Archive_ 2018: 442.
> * Xu, Huang, and Li (2016) Xu, B.; Huang, R.; and Li, M. 2016. Revise
> saturated activation functions. _arXiv preprint arXiv:1602.05980_ .
> * Yu, Vaidya, and Jiang (2006) Yu, H.; Vaidya, J.; and Jiang, X. 2006.
> Privacy-preserving svm classification on vertically partitioned data. In
> _Pacific-asia conference on knowledge discovery and data mining_ , 647–656.
> Springer.
>
## Appendix
Universal approximation of softmax: (Asadi and Jiang 2020) provide theoretical
proofs on the universal approximation of ReLU used in hidden layers of deep
networks and softmax used in the output layer for multiclass classification
tasks. They note that such a proof for the softmax was missing from the
literature despite its wide use for these tasks, and supply their own proof
that softmax output layers can approximate indicator functions.
ReLU and gradients: (Pascanu, Mikolov, and Bengio 2013) delve into the
exploding and vanishing gradients that tend to cause problems in recurrent
neural networks because of the temporal correlations they capture (by forming
connections between hidden states across time steps). Long-term components add
up and the norm of the gradient of the hidden layers explodes. ReLU's
unbounded nature exacerbates this problem. There have been a few notable
improvements to assuage this, namely gradient clipping, identity
initialization (Le, Jaitly, and Hinton 2015), and gated RNNs like LSTMs and
GRUs. However, even in LSTMs, the tanh often gives more consistent results
(Kent and Salem 2019) and ReLU tends to require greater fine-tuning.
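The gradient-clipping remedy mentioned above can be sketched in a few lines. This is a minimal global-norm clipping routine of our own, not the exact procedure of any cited paper; the threshold value 1.0 is an arbitrary illustration:

```python
def clip_by_global_norm(grads, max_norm=1.0):
    """Rescale a list of gradient vectors so their global L2 norm is at most max_norm."""
    total = sum(g * g for vec in grads for g in vec) ** 0.5
    if total <= max_norm:
        return grads
    scale = max_norm / total
    return [[g * scale for g in vec] for vec in grads]

# An exploding gradient of norm 5.0 is shrunk back onto the unit ball.
clipped = clip_by_global_norm([[3.0, 4.0]], max_norm=1.0)
```

The rescaling preserves the gradient's direction while bounding its magnitude, which is exactly why clipping tames the exploding (but not the vanishing) case.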
The Leaky Tanh: (Xu, Huang, and Li 2016) propose a ‘leaky’ tanh that penalizes
the negative part, i.e., instead of the regular tanh function, the penalized
tanh behaves as follows:
$penalized\\_tanh(x)=\begin{dcases}\tanh(x)&x>0\\\ a\cdot\tanh(x)&\text{otherwise}\\\
\end{dcases}$ (1)
where $a\in(0,1)$. They also found this variation to be two times faster than
the tanh.
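The piecewise definition in Eq. (1) can be implemented directly. The slope value a = 0.25 below is an arbitrary choice for demonstration, not one prescribed by the paper:

```python
import math

def penalized_tanh(x: float, a: float = 0.25) -> float:
    """Penalized ('leaky') tanh: plain tanh for x > 0, scaled by a otherwise."""
    t = math.tanh(x)
    return t if x > 0 else a * t

# The positive side matches tanh exactly; the negative side is damped by a.
print(penalized_tanh(1.0))   # equals math.tanh(1.0)
print(penalized_tanh(-1.0))  # equals 0.25 * math.tanh(-1.0)
```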
00footnotetext: Support of the research by the Austrian Science Fund (FWF),
project I 4579-N, and the Czech Science Foundation (GAČR), project 20-09869L,
entitled “The many facets of orthomodularity”, as well as by ÖAD, project CZ
02/2019, entitled “Function algebras and ordered structures related to logic
and data fusion”, and, concerning the first author, by IGA, project PřF 2020
014, is gratefully acknowledged.
# Sheffer operation in relational systems
Ivan Chajda and Helmut Länger
###### Abstract
The concept of a Sheffer operation known for Boolean algebras and orthomodular
lattices is extended to arbitrary directed relational systems with involution.
It is proved that to every such relational system there can be assigned a
Sheffer groupoid and also, conversely, every Sheffer groupoid induces a
directed relational system with involution. Hence, investigations of these
relational systems can be transformed into the study of special groupoids
which form a variety of algebras. If the Sheffer operation is also commutative
then the induced binary relation is antisymmetric. Moreover, commutative
Sheffer groupoids form a congruence distributive variety. We characterize
symmetry, antisymmetry and transitivity of binary relations by identities and quasi-
identities satisfied by an assigned Sheffer operation. The concepts of twist-
products of relational systems and of Kleene relational systems are
introduced. We prove that every directed relational system can be embedded
into a directed relational system with involution via the twist-product
construction. If the relation in question is even transitive, then the
directed relational system can be embedded into a Kleene relational system.
Any Sheffer operation assigned to a directed relational system $\mathbf{A}$
with involution induces a Sheffer operation assigned to the twist-product of
$\mathbf{A}$.
AMS Subject Classification: 08A02, 08A05, 08A40, 05C76
Keywords: Relational system, directed relational system, involution, Sheffer
operation, Sheffer groupoid, twist-product, Kleene relational system
## 1 Introduction
Relational systems form one of the most general mathematical structures.
Almost all structures appearing in algebra can be considered as relational
structures. Such structures have been studied for a long time, see the pioneering
work by J. Riguet ([13]) from 1948 containing elementary properties and
constructions with binary relations and the paper by R. Fraissé ([11]) from
1954. On the other hand, in contrast to publications in algebra, not so many
papers are devoted to relational systems. One of the reasons is that the tools
available for investigating relations are not as powerful as those for
algebras. This is also the reason why relational systems do not appear so
often in applications, both in mathematics and outside it. One important
application of relational systems is, e.g., in Kripke systems used in the
formalization of several non-classical logical systems.
The authors formerly introduced several methods in which relational systems
are connected with various accompanying algebras; their properties can thus be
translated into algebraic language and the resulting problems solved by tools
developed in general algebra. Let us mention e.g. [6] and [7] where certain
groupoids similar to directoids are assigned or [8] and [10] where this
approach is applied to relational systems equipped with a unary operation. For
ternary relations, such an approach was used in [5]. However, the spectrum of
algebraic tools used is not restricted to these more or less elementary
cases; in [2], relational systems are treated similarly to residuated ordered
sets. In the present paper we extend this list of used tools by the so-called
Sheffer operation.
Remember that the Sheffer operation introduced by H. M. Sheffer ([14]) in 1913
was used in Boolean algebras as a very successful tool since this operation
can replace all other Boolean operations. Namely, every Boolean operation,
both basic and derived, can be expressed by repeatedly applying the Sheffer
operation, see e.g. [1]. In today's terminology, the clone of Boolean functions
is generated by the Sheffer operation. This has a surprising and very
successful application in technology because in switching circuits, in
particular in computer processors, it suffices to use only one binary
operation, namely the Sheffer one. The technology for producing such
chips is then much easier and cheaper than at the beginning of the computer
era, when several parts of a computer were composed of at least two different
kinds of diodes (e.g. one for conjunction and the other for negation). As
it was shown by the first author in [3], a Sheffer operation can be introduced
not only in Boolean algebras but also in orthomodular lattices or even in
ortholattices (see [1] for these concepts). These algebras form an algebraic
axiomatization of the logic of quantum mechanics. Since not all authors agreed
that such lattices are suitable for modeling the propositional calculus of the
logic of quantum mechanics, it was recognized that disjunction in this logic
does not necessarily exist for all elements, i.e. that the supremum of two
elements need not exist if these elements are not orthogonal. Hence so-called
orthomodular posets and orthoposets were introduced. This was the reason why
the concept of Sheffer operation was transferred from ortholattices to
orthomodular posets and orthoposets, or, more generally, to ordered sets with
an involution or a complementation, see [4].
The next natural step is to extend this method from posets to more general
relational systems. In order to avoid difficulties with not everywhere defined
operations and some other drawbacks, we consider so-called directed relational
systems where the relation is reflexive and equipped with a unary involution
operation. The authors show that also in this case a kind of Sheffer operation
can be introduced and the corresponding groupoid characterizes the given
relational system. The benefit of this method is two-fold. First, we show
that similarly as for Boolean algebras, using an assigned Sheffer operation we
can conversely recover not only the involution but also the given binary
relation. Hence the Sheffer operation reduces the type of the relational
system and substitutes a binary relation by an everywhere defined operation.
Secondly, we show that some basic properties of binary relations can be
advantageously characterized by means of this operation and that the Sheffer
operation also enables more advanced constructions for relational systems not
considered so far.
## 2 Basic concepts
The Sheffer operation was introduced by H. M. Sheffer ([14]) in Boolean
algebras. If $\mathbf{B}=(B,\vee,\wedge,{}^{\prime},0,1)$ is a Boolean algebra
and one defines
$x|y:=x^{\prime}\vee y^{\prime}$
then $|$ is just the Sheffer operation on $\mathbf{B}$. For our purposes, we
define it as follows.
###### Definition 2.1.
A Sheffer operation on a non-void set $A$ is a binary operation $|$ on $A$
satisfying the following identities:
$\displaystyle(x|y)|(x|x)\approx x,$ (1) $\displaystyle(x|y)|(y|y)\approx y.$
(2)
A Sheffer groupoid is a groupoid $(A,|)$ where $|$ is a Sheffer operation on
$A$.
Hence, the class of Sheffer groupoids forms a variety of algebras.
###### Example 2.2.
If $A:=\\{a,b,c,d\\}$ and the binary operation $|$ on $A$ is defined by
$\begin{array}[]{c|cccc}|&a&b&c&d\\\ \hline\cr a&a&c&d&c\\\ b&c&b&d&c\\\
c&a&b&d&c\\\ d&a&b&d&c\end{array}$
then $(A,|)$ is a Sheffer groupoid.
It is worth noticing that the Sheffer operation in a Boolean algebra satisfies
the identities (1) and (2) and hence our new concept is sound.
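A finite structure such as the one in Example 2.2 can be checked mechanically. The following sketch verifies identities (1) and (2) over all pairs; the encoding of the operation table as a Python dictionary is ours:

```python
# Operation table of Example 2.2, rows indexed by the left argument.
table = {
    'a': {'a': 'a', 'b': 'c', 'c': 'd', 'd': 'c'},
    'b': {'a': 'c', 'b': 'b', 'c': 'd', 'd': 'c'},
    'c': {'a': 'a', 'b': 'b', 'c': 'd', 'd': 'c'},
    'd': {'a': 'a', 'b': 'b', 'c': 'd', 'd': 'c'},
}

def s(x, y):  # the Sheffer operation |
    return table[x][y]

# Identity (1): (x|y)|(x|x) = x; identity (2): (x|y)|(y|y) = y.
assert all(s(s(x, y), s(x, x)) == x for x in table for y in table)
assert all(s(s(x, y), s(y, y)) == y for x in table for y in table)
```

Since both identities hold for all 16 pairs, $(A,|)$ is indeed a Sheffer groupoid.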
An antitone involution on a lattice $(L,\vee,\wedge)$ is a unary operation ′
on $L$ satisfying
1. (i)
$x^{\prime\prime}\approx x$,
2. (ii)
$x\leq y$ implies $y^{\prime}\leq x^{\prime}$
for all $x,y\in L$.
The following lemma was shown for ortholattices in [3].
###### Lemma 2.3.
Let $(L,\vee,\wedge,{}^{\prime})$ be a lattice with an antitone involution.
Then (i) and (ii) hold:
1. (i)
If $x|y:=x^{\prime}\vee y^{\prime}$ for all $x,y\in L$ then $(L,|)$ is a
Sheffer groupoid.
2. (ii)
If $x|y:=x^{\prime}\wedge y^{\prime}$ for all $x,y\in L$ then $(L,|)$ is a
Sheffer groupoid.
###### Proof.
1. (i)
Since $x|x\approx x^{\prime}\vee x^{\prime}\approx x^{\prime}$, (1) and (2)
are equivalent to
$\displaystyle(x^{\prime}\vee y^{\prime})^{\prime}\vee x^{\prime\prime}$
$\displaystyle\approx x,$ $\displaystyle(x^{\prime}\vee
y^{\prime})^{\prime}\vee y^{\prime\prime}$ $\displaystyle\approx y,$
respectively. These hold since $x^{\prime}\leq x^{\prime}\vee y^{\prime}$ and
$y^{\prime}\leq x^{\prime}\vee y^{\prime}$ imply, by antitonicity,
$(x^{\prime}\vee y^{\prime})^{\prime}\leq x^{\prime\prime}=x$ and
$(x^{\prime}\vee y^{\prime})^{\prime}\leq y^{\prime\prime}=y$.
2. (ii)
Since $x|x\approx x^{\prime}\wedge x^{\prime}\approx x^{\prime}$, (1) and (2)
are equivalent to
$\displaystyle(x^{\prime}\wedge y^{\prime})^{\prime}\wedge x^{\prime\prime}$
$\displaystyle\approx x,$ $\displaystyle(x^{\prime}\wedge
y^{\prime})^{\prime}\wedge y^{\prime\prime}$ $\displaystyle\approx y,$
respectively. These hold since $x^{\prime}\wedge y^{\prime}\leq x^{\prime}$
and $x^{\prime}\wedge y^{\prime}\leq y^{\prime}$ imply, by antitonicity,
$x=x^{\prime\prime}\leq(x^{\prime}\wedge y^{\prime})^{\prime}$ and
$y=y^{\prime\prime}\leq(x^{\prime}\wedge y^{\prime})^{\prime}$.
∎
###### Lemma 2.4.
Axioms (1) and (2) are independent.
###### Proof.
If $A:=\\{a,b\\}$ and the binary operation $|$ on $A$ is defined by $x|y:=x$
for all $x,y\in A$ then $|$ satisfies (1), but not (2) since
$(a|b)|(b|b)=a|b=a\neq b$. If $A:=\\{a,b,c\\}$ and the binary operation
$|$ on $A$ is defined by
$\begin{array}[]{c|ccc}|&a&b&c\\\ \hline\cr a&a&b&c\\\ b&c&b&c\\\
c&a&a&c\end{array}$
then $|$ satisfies (2), but not (1) since $(a|b)|(a|a)=b|a=c\neq a$. ∎
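The two counterexamples in the proof are small enough to verify exhaustively; the encoding below is ours:

```python
# First groupoid on {a, b}: x|y := x satisfies (1) but violates (2).
A1 = 'ab'
s1 = lambda x, y: x
assert all(s1(s1(x, y), s1(x, x)) == x for x in A1 for y in A1)  # (1) holds
assert s1(s1('a', 'b'), s1('b', 'b')) == 'a'                     # (2) fails: a, not b

# Second groupoid on {a, b, c} from the displayed table: (2) holds, (1) fails.
t2 = {'a': {'a': 'a', 'b': 'b', 'c': 'c'},
      'b': {'a': 'c', 'b': 'b', 'c': 'c'},
      'c': {'a': 'a', 'b': 'a', 'c': 'c'}}
s2 = lambda x, y: t2[x][y]
assert all(s2(s2(x, y), s2(y, y)) == y for x in t2 for y in t2)  # (2) holds
assert s2(s2('a', 'b'), s2('a', 'a')) == 'c'                     # (1) fails: c, not a
```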
Let us recall some concepts from the theory of relations.
Let $A$ be a non-void set, $a,b\in A$, $R$ a binary relation on $A$ and ′ a
unary operation on $A$. We define
$\displaystyle U(a,b)$ $\displaystyle:=\\{x\in A\mid(a,x),(b,x)\in R\\},$
$\displaystyle L(a,b)$ $\displaystyle:=\\{x\in A\mid(x,a),(x,b)\in R\\}$
and call these sets the upper cone and lower cone of $a$ and $b$ with respect
to $R$, respectively. The relational system $\mathbf{A}=(A,R)$ is called
directed if $U(x,y)\neq\emptyset$ and $L(x,y)\neq\emptyset$ for all $x,y\in
A$. The operation ′ is called antitone if $(x,y)\in R$ implies
$(y^{\prime},x^{\prime})\in R$ and an involution on $\mathbf{A}$ if it is
antitone and if it satisfies the identity $x^{\prime\prime}\approx x$. It can
be shown that in a relational system $(A,R,{}^{\prime})$ with involution, if
$U(x,y)\neq\emptyset$ for all $x,y\in A$ then $L(x,y)\neq\emptyset$ for all
$x,y\in A$ since $L(x,y)\approx(U(x^{\prime},y^{\prime}))^{\prime}$, where
$B^{\prime}:=\\{b^{\prime}\mid b\in B\\}$ for every subset $B$ of $A$.
###### Definition 2.5.
A directed relational system with involution is an ordered triple
$(A,R,{}^{\prime})$ consisting of a non-void set $A$, a binary relation $R$ on
$A$ and a unary operation ′ on $A$ satisfying the following conditions:
$\displaystyle R\text{ is reflexive},$ (3) $\displaystyle(A,R)\text{ is
directed},$ (4) ${}^{\prime}\text{ is an involution on }(A,R).$ (5)
## 3 Representation of relational systems by Sheffer
groupoids
The following result shows how a Sheffer groupoid is connected with a directed
relational system with involution.
###### Theorem 3.1.
Let $\mathbf{A}=(A,|)$ be a Sheffer groupoid and define a unary operation ′ on
$A$ and a binary relation $R$ on $A$ by
$\displaystyle x^{\prime}$ $\displaystyle:=x|x\text{ for all }x\in A,$
$\displaystyle R$ $\displaystyle:=\\{(x,y)\in A^{2}\mid
x^{\prime}|y^{\prime}=y\\}.$
Then $\mathbb{R}(\mathbf{A}):=(A,R,{}^{\prime})$ is a directed relational
system with involution, the so-called directed relational system with
involution induced by $\mathbf{A}$.
###### Proof.
Let $a,b\in A$. (1) implies $x^{\prime\prime}\approx x$ and that $R$ is
reflexive. (1) and (2) can be written in the equivalent form
$(x|y)|x^{\prime}\approx x$ and $(x|y)|y^{\prime}\approx y$, respectively. If
$(a,b)\in R$ then $a^{\prime}|b^{\prime}=b$ and hence
$b|a=(a^{\prime}|b^{\prime})|a=a^{\prime}$, i.e. $(b^{\prime},a^{\prime})\in
R$ showing that ′ is an involution on $(A,R)$. Since
$(a^{\prime}|b^{\prime})|a=a^{\prime}$ and
$(a^{\prime}|b^{\prime})|b=b^{\prime}$ we have
$((a^{\prime}|b^{\prime})^{\prime},a^{\prime}),((a^{\prime}|b^{\prime})^{\prime},b^{\prime})\in
R$ and hence $(a,a^{\prime}|b^{\prime}),(b,a^{\prime}|b^{\prime})\in R$, i.e.
$a^{\prime}|b^{\prime}\in U(a,b)$ which shows $U(a,b)\neq\emptyset$ proving
that $(A,R)$ is directed. ∎
###### Example 3.2.
$(A,A^{2}\setminus\\{(a,b),(b,a)\\},{}^{\prime})$ where $a^{\prime}=a$,
$b^{\prime}=b$, $c^{\prime}=d$ and $d^{\prime}=c$ is the directed relational
system induced by the Sheffer groupoid $\mathbf{A}$ from Example 2.2.
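The construction of Theorem 3.1 can be replayed on Example 2.2; the computation below recovers exactly the relational system described in Example 3.2 (the Python encoding is ours):

```python
# Operation table of Example 2.2.
table = {
    'a': {'a': 'a', 'b': 'c', 'c': 'd', 'd': 'c'},
    'b': {'a': 'c', 'b': 'b', 'c': 'd', 'd': 'c'},
    'c': {'a': 'a', 'b': 'b', 'c': 'd', 'd': 'c'},
    'd': {'a': 'a', 'b': 'b', 'c': 'd', 'd': 'c'},
}
s = lambda x, y: table[x][y]

inv = {x: s(x, x) for x in table}                       # x' := x|x
R = {(x, y) for x in table for y in table if s(inv[x], inv[y]) == y}

full = {(x, y) for x in table for y in table}
assert inv == {'a': 'a', 'b': 'b', 'c': 'd', 'd': 'c'}
assert R == full - {('a', 'b'), ('b', 'a')}             # as stated in Example 3.2
```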
In the following we show that also conversely, to every directed relational
system with involution a Sheffer groupoid can be assigned.
Let $\mathbf{A}=(A,R,{}^{\prime})$ be a directed relational system with
involution. Define a binary operation $|$ on $A$ as follows: Put
$x|y:=y^{\prime}$ if $(x^{\prime},y^{\prime})\in R$ and let $x|y$ be an
arbitrary element of $U(x^{\prime},y^{\prime})$ otherwise ($x,y\in A$). Then
$|$ will be called an operation assigned to $\mathbf{A}$.
###### Lemma 3.3.
Let $\mathbf{A}=(A,R,{}^{\prime})$ be a directed relational system with
involution and $|$ a binary operation on $A$. Then $|$ is assigned to
$\mathbf{A}$ if and only if
1. (i)
$(x,y)\in R$ if and only if $x^{\prime}|y^{\prime}=y$,
2. (ii)
$x|y\in U(x^{\prime},y^{\prime})$ for all $x,y\in A$.
###### Proof.
Let $a,b\in A$. First assume $|$ to be assigned to $\mathbf{A}$. If $(a,b)\in
R$ then $(a^{\prime\prime},b^{\prime\prime})\in R$ and hence
$a^{\prime}|b^{\prime}=b^{\prime\prime}=b$. Conversely, assume
$a^{\prime}|b^{\prime}=b$. Then $(a,b)\notin R$ would imply
$(a^{\prime\prime},b^{\prime\prime})\notin R$ and hence
$b=a^{\prime}|b^{\prime}\in U(a^{\prime\prime},b^{\prime\prime})=U(a,b)$ and
hence $(a,b)\in R$, a contradiction. Hence $(a,b)\in R$. This shows (i). If
$(a^{\prime},b^{\prime})\in R$ then $a|b=b^{\prime}\in
U(a^{\prime},b^{\prime})$. Otherwise, $a|b\in U(a^{\prime},b^{\prime})$, too.
This shows (ii). Conversely, if $|$ satisfies (i) and (ii) then clearly $|$ is
assigned to $\mathbf{A}$. ∎
It should be remarked that if $(A,R,{}^{\prime})$ is a directed relational
system with involution and $|$ an assigned operation then condition (ii) of
Lemma 3.3 is equivalent to
$(x|y)|(x|y)\in L(x,y)\text{ for all }x,y\in A.$
In the following we will often use this lemma. Now we prove the converse of
Theorem 3.1.
###### Theorem 3.4.
Let $\mathbf{A}=(A,R,{}^{\prime})$ be a directed relational system with
involution and $|$ an operation assigned to $\mathbf{A}$. Then $|$ is a
Sheffer operation, a so-called Sheffer operation assigned to $\mathbf{A}$,
i.e. $\mathbb{G}(\mathbf{A}):=(A,|)$ is a Sheffer groupoid, a so-called
Sheffer groupoid assigned to $\mathbf{A}$.
###### Proof.
Let $a,b\in A$. Since $(x^{\prime},x^{\prime})\in R$ we have $x|x\approx
x^{\prime}$. If $(a^{\prime},b^{\prime})\in R$ then $(b,a)\in R$ and hence
$(a|b)|a^{\prime}=b^{\prime}|a^{\prime}=a$ and
$(a|b)|b^{\prime}=b^{\prime}|b^{\prime}=b$. If $(a^{\prime},b^{\prime})\notin
R$ then $a|b\in U(a^{\prime},b^{\prime})$ and hence
$(a^{\prime},a|b),(b^{\prime},a|b)\in R$ which implies
$((a|b)^{\prime},a),((a|b)^{\prime},b)\in R$, i.e. $(a|b)|a^{\prime}=a$ and
$(a|b)|b^{\prime}=b$. ∎
###### Remark 3.5.
In general, $\mathbb{G}(\mathbf{A})$ is not uniquely defined. However, it
contains all the information on the directed relational system $\mathbf{A}$
with involution. In other words, the given directed relational system with
involution can be completely recovered from an assigned Sheffer groupoid, see
the following result.
###### Theorem 3.6.
Let $\mathbf{A}=(A,R,{}^{\prime})$ be a directed relational system with
involution. Then $\mathbb{R}(\mathbb{G}(\mathbf{A}))=\mathbf{A}$.
###### Proof.
If
$\displaystyle\mathbb{G}(\mathbf{A})$ $\displaystyle=(A,|),$
$\displaystyle\mathbb{R}(\mathbb{G}(\mathbf{A}))$ $\displaystyle=(A,S,{}^{*})$
then according to the proof of Lemma 3.3,
$\displaystyle S$ $\displaystyle=\\{(x,y)\in A^{2}\mid
x^{\prime}|y^{\prime}=y\\}=\\{(x,y)\in A^{2}\mid(x,y)\in R\\}=R,$
$\displaystyle x^{*}$ $\displaystyle\approx x|x\approx x^{\prime}.$
∎
On the other hand, we can show for which pairs of elements a Sheffer operation
assigned to $\mathbb{R}(A,|)$ coincides with the Sheffer operation $|$ of a
given Sheffer groupoid $(A,|)$.
###### Theorem 3.7.
Let $\mathbf{A}=(A,|)$ be a Sheffer groupoid and
$\mathbb{G}(\mathbb{R}(\mathbf{A}))=(A,\circ)$. Then $x\circ y=x|y$ if
$x|y=y|y$.
###### Proof.
If $\mathbb{R}(\mathbf{A})=(A,R,{}^{\prime})$ then any of the following
assertions implies the next one:
$\displaystyle x|y$ $\displaystyle=y|y,$ $\displaystyle x|y$
$\displaystyle=y^{\prime},$ $\displaystyle(x^{\prime},y^{\prime})$
$\displaystyle\in R,$ $\displaystyle x\circ y$ $\displaystyle=y^{\prime},$
$\displaystyle x\circ y$ $\displaystyle=x|y.$
∎
In fact, $\circ$ need not coincide with $|$ as can be seen by the following
example.
###### Example 3.8.
If $|$ is the Sheffer operation from Example 2.2 then $\circ$ has the
operation table
$\begin{array}[]{c|cccc}\circ&a&b&c&d\\\ \hline\cr a&a&x&d&c\\\ b&y&b&d&c\\\
c&a&b&d&c\\\ d&a&b&d&c\end{array}$
where $x,y\in\\{c,d\\}$ since $U(a,b)=\\{c,d\\}$ in the induced relational
system. Hence, if we take $x=d$ or $y=d$ then $\circ$ differs from $|$.
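The construction of an assigned operation, and the freedom it leaves, can be replayed in code: wherever $(x^{\prime},y^{\prime})\notin R$, any element of $U(x^{\prime},y^{\prime})$ may be chosen, and Theorem 3.7 guarantees agreement with $|$ at all other pairs. The choice function below (taking the minimum) is ours:

```python
# Operation table of Example 2.2 and the system it induces (Theorem 3.1).
table = {
    'a': {'a': 'a', 'b': 'c', 'c': 'd', 'd': 'c'},
    'b': {'a': 'c', 'b': 'b', 'c': 'd', 'd': 'c'},
    'c': {'a': 'a', 'b': 'b', 'c': 'd', 'd': 'c'},
    'd': {'a': 'a', 'b': 'b', 'c': 'd', 'd': 'c'},
}
s = lambda x, y: table[x][y]
inv = {x: s(x, x) for x in table}
R = {(x, y) for x in table for y in table if s(inv[x], inv[y]) == y}

def U(a, b):
    """Upper cone of a and b with respect to R."""
    return {z for z in table if (a, z) in R and (b, z) in R}

def circ(x, y, choose=min):
    """An operation assigned to the induced system: y' when (x', y') is in R,
    otherwise an arbitrarily chosen element of U(x', y')."""
    if (inv[x], inv[y]) in R:
        return inv[y]
    return choose(U(inv[x], inv[y]))

assert U('a', 'b') == {'c', 'd'}        # the freedom observed in Example 3.8
for x in table:
    for y in table:
        if s(x, y) == s(y, y):          # premise of Theorem 3.7
            assert circ(x, y) == s(x, y)
```

Only the pairs $(a,b)$ and $(b,a)$ fall outside the premise, which is where $\circ$ may differ from $|$.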
We have shown that directed relational systems with involution are nearly in a
one-to-one correspondence with Sheffer groupoids. Analogously to Boolean
algebras, where the Sheffer operation substitutes all other operations since
they can be derived from it, also here the Sheffer operation substitutes both
the binary relation and the unary operation. Hence it enables us to reduce the
type of the directed relational system with involution.
## 4 Elementary properties of relations
In the following we characterize some properties of the relation $R$ of a
directed relational system $\mathbf{A}=(A,R,{}^{\prime})$ with involution by
means of identities and quasi-identities for a Sheffer operation assigned to
$\mathbf{A}$.
###### Theorem 4.1.
Let $\mathbf{A}=(A,R,{}^{\prime})$ be a directed relational system with
involution and $|$ an assigned Sheffer operation. Then $R$ is symmetric if and
only if $|$ satisfies the identity
$((x|y)|(x|y))|x\approx x|x.$ (6)
###### Proof.
If $R$ is symmetric then any of the following assertions implies the next one:
$\displaystyle x|y$ $\displaystyle\in U(x^{\prime},y^{\prime}),$
$\displaystyle(x^{\prime},x|y)$ $\displaystyle\in R,$
$\displaystyle(x|y,x^{\prime})$ $\displaystyle\in R,$
$\displaystyle(x|y)^{\prime}|x$ $\displaystyle\approx x^{\prime},$
$\displaystyle((x|y)|(x|y))|x$ $\displaystyle\approx x|x.$
If, conversely, $|$ satisfies identity (6) then any of the following
assertions implies the next one:
$\displaystyle(x,y)$ $\displaystyle\in R,$ $\displaystyle
x^{\prime}|y^{\prime}$ $\displaystyle=y,$ $\displaystyle
y^{\prime}|x^{\prime}$
$\displaystyle=(x^{\prime}|y^{\prime})^{\prime}|x^{\prime}=x,$
$\displaystyle(y,x)$ $\displaystyle\in R.$
∎
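The relation induced by Example 2.2 is symmetric (it is the full square minus the symmetric pair $\\{(a,b),(b,a)\\}$), so Theorem 4.1 predicts that identity (6) holds in that groupoid. This can be confirmed directly (the encoding is ours):

```python
# Operation table of Example 2.2.
table = {
    'a': {'a': 'a', 'b': 'c', 'c': 'd', 'd': 'c'},
    'b': {'a': 'c', 'b': 'b', 'c': 'd', 'd': 'c'},
    'c': {'a': 'a', 'b': 'b', 'c': 'd', 'd': 'c'},
    'd': {'a': 'a', 'b': 'b', 'c': 'd', 'd': 'c'},
}
s = lambda x, y: table[x][y]

# Identity (6): ((x|y)|(x|y))|x = x|x, checked over all 16 pairs.
assert all(s(s(s(x, y), s(x, y)), x) == s(x, x) for x in table for y in table)
```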
Another important property of a binary relation is antisymmetry. Recall that a
binary relation $R$ is antisymmetric if $(x,y),(y,x)\in R$ implies $x=y$.
###### Theorem 4.2.
Let $\mathbf{A}=(A,R,{}^{\prime})$ be a directed relational system with
involution and $|$ a Sheffer operation assigned to it. Then the following
hold:
1. (i)
$R$ is antisymmetric if and only if $x|y=y^{\prime}$ and $y|x=x^{\prime}$
imply $x=y$.
2. (ii)
If $x|y\approx y|x$ then $R$ is antisymmetric.
###### Proof.
1. (i)
is clear.
2. (ii)
This follows from (i) since $x|y\approx y|x$, $x|y=y^{\prime}$ and
$y|x=x^{\prime}$ imply $x=(y|x)^{\prime}=(x|y)^{\prime}=y$.
∎
Transitivity of a binary relation can be expressed by an identity for an
assigned Sheffer operation as follows.
###### Theorem 4.3.
Let $\mathbf{A}=(A,R,{}^{\prime})$ be a directed relational system with
involution and $|$ an assigned Sheffer operation. Then $R$ is transitive if
and only if $|$ satisfies the identity
$x|((((x|y)|(x|y))|z)|(((x|y)|(x|y))|z))\approx((x|y)|(x|y))|z.$ (7)
###### Proof.
If $R$ is transitive then any of the following assertions implies the next
one:
$\displaystyle x|y\in U(x^{\prime},y^{\prime})$ $\displaystyle\text{ and
}(x|y)^{\prime}|z\in U(x|y,z^{\prime}),$
$\displaystyle(x^{\prime},x|y),(x|y,(x|y)^{\prime}|z)$ $\displaystyle\in R,$
$\displaystyle(x^{\prime},(x|y)^{\prime}|z)$ $\displaystyle\in R,$
$\displaystyle x|((x|y)^{\prime}|z)^{\prime}$
$\displaystyle\approx(x|y)^{\prime}|z,$ $\displaystyle
x|(((x|y)|(x|y))|z)|(((x|y)|(x|y))|z)$ $\displaystyle\approx((x|y)|(x|y))|z.$
If, conversely, $|$ satisfies identity (7) then any of the following
assertions implies the next one:
$\displaystyle(x,y),(y,z)$ $\displaystyle\in R,$ $\displaystyle
x^{\prime}|y^{\prime}=y$ $\displaystyle\text{ and }y^{\prime}|z^{\prime}=z,$
$\displaystyle x^{\prime}|z^{\prime}$
$\displaystyle=x^{\prime}|(y^{\prime}|z^{\prime})^{\prime}=x^{\prime}|((x^{\prime}|y^{\prime})^{\prime}|z^{\prime})^{\prime}=(x^{\prime}|y^{\prime})^{\prime}|z^{\prime}=y^{\prime}|z^{\prime}=z,$
$\displaystyle(x,z)$ $\displaystyle\in R.$
∎
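By contrast, the relation induced by Example 2.2 is not transitive: $(a,c)$ and $(c,b)$ belong to $R$ while $(a,b)$ does not. Accordingly, the quasi-identity used in the proof above fails at $x=a$, $y=c$, $z=b$ (the encoding is ours):

```python
# Operation table of Example 2.2 and the involution x' := x|x.
table = {
    'a': {'a': 'a', 'b': 'c', 'c': 'd', 'd': 'c'},
    'b': {'a': 'c', 'b': 'b', 'c': 'd', 'd': 'c'},
    'c': {'a': 'a', 'b': 'b', 'c': 'd', 'd': 'c'},
    'd': {'a': 'a', 'b': 'b', 'c': 'd', 'd': 'c'},
}
s = lambda x, y: table[x][y]
inv = {x: s(x, x) for x in table}

# (x, y) is in R exactly when x'|y' = y.
x, y, z = 'a', 'c', 'b'
assert s(inv[x], inv[y]) == y      # (a, c) in R
assert s(inv[y], inv[z]) == z      # (c, b) in R
assert s(inv[x], inv[z]) != z      # yet (a, b) is not in R: transitivity fails
```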
Let us introduce the following concepts. A bounded relational system with
involution is an ordered quintuple $\mathbf{A}=(A,R,{}^{\prime},0,1)$ such
that $(A,R,{}^{\prime})$ is a directed relational system with involution,
$0,1\in A$ and $(0,x),(x,1)\in R$ hold for all $x\in A$. $\mathbf{A}$ is
called complemented if it is bounded and if $U(x,x^{\prime})\approx 1\approx
0^{\prime}$. In such a case $L(x,x^{\prime})\approx 0$. Also these properties
of relational systems can be characterized by identities and quasi-identities
for an assigned Sheffer operation.
###### Theorem 4.4.
Let $(A,R,{}^{\prime})$ be a directed relational system with involution and
$|$ a Sheffer operation assigned to it. Moreover, let $0,1\in A$ and put
$\mathbf{A}:=(A,R,{}^{\prime},0,1)$. Then the following hold:
1. (i)
$\mathbf{A}$ is bounded if and only if it satisfies the identities
$(0|0)|x\approx x|x$ and $x|(1|1)\approx 1$.
2. (ii)
$\mathbf{A}$ is complemented if it is bounded, $0|0\approx 1$ and if for every
$x,y\in A$,
$x|(y|y)=(x|x)|(y|y)=y\text{ implies }y=1.$
###### Proof.
1. (i)
The assertions $(0,x^{\prime})\in R$ and $(x^{\prime},1)\in R$ are equivalent
to $0^{\prime}|x\approx x^{\prime}$ and $x|1^{\prime}\approx 1$, respectively.
2. (ii)
The following are equivalent:
$\displaystyle y$ $\displaystyle\in U(x,x^{\prime}),$
$\displaystyle(x,y),(x^{\prime},y)$ $\displaystyle\in R,$ $\displaystyle
x|y^{\prime}$ $\displaystyle=x^{\prime}|y^{\prime}=y,$ $\displaystyle x|(y|y)$
$\displaystyle=(x|x)|(y|y)=y.$
∎
As mentioned in Section 2, the class of Sheffer groupoids forms a variety
$\mathcal{V}$. We can require one more condition, namely commutativity of $|$. As
shown in Theorems 3.6 and 4.2, the directed relational systems with involution
induced by commutative Sheffer groupoids will have antisymmetric binary
relations. We present a subvariety of $\mathcal{V}$ containing all commutative
Sheffer groupoids which has an important congruence property.
We recall that a variety $\mathcal{V}$ of algebras is called congruence
distributive if every member of $\mathcal{V}$ has a distributive congruence
lattice.
###### Theorem 4.5.
The variety of Sheffer groupoids $(A,|)$ satisfying the identities
$\displaystyle(x|y)|(x|x)\approx(x|x)|(x|y),$ (8)
$\displaystyle(x|y)|(y|y)\approx(y|y)|(x|y)$ (9)
is congruence distributive.
###### Proof.
If $x^{\prime}:=x|x$ and $m(x,y,z):=((x|y)|(x|z))^{\prime}|(y|z)$ then
$\displaystyle m(x,z,z)$
$\displaystyle\approx((x|z)|(x|z))^{\prime}|(z|z)\approx(x|z)|z^{\prime}\approx
z\text{ by (2)},$ $\displaystyle m(x,y,x)$
$\displaystyle\approx((x|y)|(x|x))^{\prime}|(y|x)\approx
x^{\prime}|(y|x)\approx x\text{ by (1), (9) and (2)},$ $\displaystyle m(x,x,z)$
$\displaystyle\approx((x|x)|(x|z))^{\prime}|(x|z)\approx
x^{\prime}|(x|z)\approx x\text{ by (8) and (1)}.$
Hence $m$ is a majority term and the variety is congruence distributive by
Jónsson's characterization.
∎
## 5 Kleene relational systems and twist-products
First, we show how homomorphisms of Sheffer groupoids are related to
homomorphisms of induced directed relational systems with involution. Because
in the literature there are different concepts of homomorphism of relational
systems, we recall the following one.
Let $(A,R)$ and $(B,S)$ be relational systems. A mapping $f:A\rightarrow B$ is
called a homomorphism from $(A,R)$ to $(B,S)$ if
$(x,y)\in R\text{ implies }(f(x),f(y))\in S.$
A homomorphism $f$ is called strong if
$(x,y)\in R\text{ if and only if }(f(x),f(y))\in S.$
If $(A,R,{}^{\prime})$ and $(B,S,^{*})$ are relational systems with unary
operation then $f$ is a homomorphism from $(A,R,{}^{\prime})$ to $(B,S,^{*})$
if it is a homomorphism from $(A,R)$ to $(B,S)$ satisfying
$f(x^{\prime})=(f(x))^{*}\text{ for all }x\in A.$
###### Theorem 5.1.
Let $\mathbf{A}=(A,|_{A})$ and $\mathbf{B}=(B,|_{B})$ be Sheffer groupoids and
$f$ a homomorphism from $\mathbf{A}$ to $\mathbf{B}$. Then $f$ is a
homomorphism between the induced directed relational systems
$\mathbb{R}(\mathbf{A})$ and $\mathbb{R}(\mathbf{B})$ with involution.
###### Proof.
Let $a,b\in A$, $\mathbb{R}(\mathbf{A})=(A,R,{}^{\prime})$ and
$\mathbb{R}(\mathbf{B})=(B,S,{}^{*})$. We have $f(x^{\prime})\approx
f(x|_{A}x)\approx f(x)|_{B}f(x)\approx(f(x))^{*}$ and hence any of the
following assertions implies the next one:
$\displaystyle(a,b)$ $\displaystyle\in R,$ $\displaystyle
a^{\prime}|_{A}b^{\prime}$ $\displaystyle=b,$ $\displaystyle
f(a^{\prime}|_{A}b^{\prime})$ $\displaystyle=f(b),$ $\displaystyle
f(a^{\prime})|_{B}f(b^{\prime})$ $\displaystyle=f(b),$
$\displaystyle(f(a))^{*}|_{B}(f(b))^{*}$ $\displaystyle=f(b),$
$\displaystyle(f(a),f(b))$ $\displaystyle\in S.$
∎
For the converse direction, we first mention the following result for
bounded relational systems.
###### Lemma 5.2.
Let $(A,R,{}^{\prime},0_{A},1_{A})$ and $(B,S,^{*},0_{B},1_{B})$ be bounded
relational systems with involution and $f$ a strong homomorphism from
$\mathbf{A}=(A,R,{}^{\prime})$ to $\mathbf{B}=(B,S,^{*})$. Further assume that
$f(1_{A})=1_{B}$. Define binary operations $|_{A}$ and $|_{B}$ on $A$ and $B$,
respectively, by
$x|_{A}y:=\left\\{\begin{array}[]{ll}y^{\prime}&\text{if
}(x^{\prime},y^{\prime})\in R,\\\
1_{A}&\text{otherwise}\end{array}\right.\quad\quad
x|_{B}y:=\left\\{\begin{array}[]{ll}y^{*}&\text{if }(x^{*},y^{*})\in S,\\\
1_{B}&\text{otherwise}\end{array}\right.$
Then $(A,|_{A})$ and $(B,|_{B})$ are Sheffer groupoids assigned to
$\mathbf{A}$ and $\mathbf{B}$, respectively, and $f$ is a homomorphism from
$(A,|_{A})$ to $(B,|_{B})$.
###### Proof.
Let $a,b\in A$. Obviously, $(A,|_{A})$ and $(B,|_{B})$ are Sheffer groupoids.
If $(a^{\prime},b^{\prime})\in R$ then
$((f(a))^{*},(f(b))^{*})=(f(a^{\prime}),f(b^{\prime}))\in S$ and hence
$f(a|_{A}b)=f(b^{\prime})=(f(b))^{*}=f(a)|_{B}f(b)$. If
$(a^{\prime},b^{\prime})\notin R$ then
$((f(a))^{*},(f(b))^{*})=(f(a^{\prime}),f(b^{\prime}))\notin S$ and hence
$f(a|_{A}b)=f(1_{A})=1_{B}=f(a)|_{B}f(b)$. ∎
We are going to determine conditions under which the converse of Theorem 5.1
holds.
###### Theorem 5.3.
Let $\mathbf{A}=(A,R,{}^{\prime})$ and $\mathbf{B}=(B,S,{}^{*})$ be directed
relational systems with involution, $f$ a strong surjective homomorphism from
$\mathbf{A}$ to $\mathbf{B}$ and $|_{A}$ a Sheffer operation assigned to
$\mathbf{A}$ and assume that the equivalence relation $\ker f$ on $A$ is a
congruence on $(A,|_{A})$. Then there exists a Sheffer operation $|_{B}$ on
$B$ such that $f$ is a homomorphism from $(A,|_{A})$ to $(B,|_{B})$ and
$|_{B}$ is assigned to $\mathbf{B}$.
###### Proof.
Define $f(x)|_{B}f(y):=f(x|_{A}y)$ for all $x,y\in A$. Since $\ker
f\in\operatorname{Con}(A,|_{A})$, $|_{B}$ is well-defined. Let $a,b\in A$.
Then any of the following assertions implies the next one:
$\displaystyle((f(a))^{*},(f(b))^{*})$ $\displaystyle\in S,$
$\displaystyle(f(a^{\prime}),f(b^{\prime}))$ $\displaystyle\in S,$
$\displaystyle(a^{\prime},b^{\prime})$ $\displaystyle\in R,$ $\displaystyle
a|_{A}b$ $\displaystyle=b^{\prime},$ $\displaystyle f(a|_{A}b)$
$\displaystyle=f(b^{\prime}),$ $\displaystyle f(a)|_{B}f(b)$
$\displaystyle=(f(b))^{*}.$
Moreover, any of the following assertions implies the next one:
$\displaystyle((f(a))^{*},(f(b))^{*})$ $\displaystyle\notin S,$
$\displaystyle(f(a^{\prime}),f(b^{\prime}))$ $\displaystyle\notin S,$
$\displaystyle(a^{\prime},b^{\prime})$ $\displaystyle\notin R,$ $\displaystyle
a|_{A}b$ $\displaystyle\in U(a^{\prime},b^{\prime}),$ $\displaystyle
f(a|_{A}b)$ $\displaystyle\in U(f(a^{\prime}),f(b^{\prime})),$ $\displaystyle
f(a)|_{B}f(b)$ $\displaystyle\in U((f(a))^{*},(f(b))^{*}).$
This shows that $|_{B}$ is a Sheffer operation on $B$ assigned to
$\mathbf{B}$. According to the definition of $|_{B}$ we have
$f(x|_{A}y)=f(x)|_{B}f(y)$ for all $x,y\in A$. ∎
For a lattice $\mathbf{L}=(L,\vee,\wedge)$ its twist-product
$(L^{2},\sqcup,\sqcap)$ is defined by
$\displaystyle(x,y)\sqcup(z,v)$ $\displaystyle:=(x\vee z,v\wedge y),$
$\displaystyle(x,y)\sqcap(z,v)$ $\displaystyle:=(x\wedge z,v\vee y)$
for all $(x,y),(z,v)\in L^{2}$. We extend this concept to relational systems
as follows.
Let $A$ be a non-void set and $R$ a binary relation on $A$. Then
$(A^{2},S,{}^{*})$ with
$\displaystyle S$
$\displaystyle:=\\{((x,y),(z,v))\in(A^{2})^{2}\mid(x,z),(v,y)\in R\\},$
$\displaystyle(x,y)^{*}$ $\displaystyle:=(y,x)$
for all $(x,y)\in A^{2}$ will be called the twist-product of $(A,R)$.
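The twist-product is straightforward to explore computationally. The following sketch (ours; the three-element chain is an arbitrary choice) builds $S$ from $(A,R)$ and checks that ${}^{*}$ is an antitone involution of period two and that $x\mapsto(x,a)$ embeds $(A,R)$ into $(A^{2},S)$:

```python
# Twist-product of the chain 0 < 1 < 2 (illustrative example only).
A = [0, 1, 2]
R = {(x, y) for x in A for y in A if x <= y}

AA = [(x, y) for x in A for y in A]
S = {(p, q) for p in AA for q in AA
     if (p[0], q[0]) in R and (q[1], p[1]) in R}
star = lambda p: (p[1], p[0])          # (x, y)* = (y, x)

# * has period two and reverses S (antitone involution):
assert all(star(star(p)) == p for p in AA)
assert all((star(q), star(p)) in S for (p, q) in S)

# x -> (x, a) is an embedding of (A, R) into (A^2, S):
a = 0
assert all((((x, a), (y, a)) in S) == ((x, y) in R)
           for x in A for y in A)
```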
Recall that an embedding of a relational system $\mathbf{A}$ into a relational
system $\mathbf{B}$ is an injective strong homomorphism from $\mathbf{A}$ to
$\mathbf{B}$.
The importance of twist-products is illuminated by the next result.
###### Theorem 5.4.
Let $\mathbf{A}=(A,R)$ be a relational system, $a\in A$ and
$\mathbf{B}=(A^{2},S,{}^{*})$ the twist-product of $\mathbf{A}$. Then the
following hold:
1. (i)
If $\mathbf{A}$ is directed then $\mathbf{B}$ is a directed relational system
with involution ∗,
2. (ii)
the mapping $x\mapsto(x,a)$ is an embedding of $\mathbf{A}$ into $(A^{2},S)$.
###### Proof.
Let $a,b,c,d\in A$.
1. (i)
Assume $\mathbf{A}$ to be directed. Since $(a,a),(b,b)\in R$ we have
$((a,b),(a,b))\in S$ showing reflexivity of $S$. Because of
$U((a,b),(c,d))=U(a,c)\times L(b,d)$, $(A^{2},S)$ is directed. Moreover,
$(x,y)^{**}\approx(y,x)^{*}\approx(x,y)$, and the following are equivalent:
$\displaystyle((a,b),(c,d))$ $\displaystyle\in S,$ $\displaystyle(a,c),(d,b)$
$\displaystyle\in R,$ $\displaystyle(d,b),(a,c)$ $\displaystyle\in R,$
$\displaystyle((d,c),(b,a))$ $\displaystyle\in S,$
$\displaystyle((c,d)^{*},(a,b)^{*})$ $\displaystyle\in S.$
Hence, ∗ is an involution on $(A^{2},S)$.
2. (ii)
The mapping $x\mapsto(x,a)$ is injective. Moreover, $((b,a),(c,a))\in S$ if
and only if $(b,c)\in R$.
∎
Hence, every directed relational system can be embedded into a directed
relational system with involution.
The question arises whether a Sheffer operation assigned to the twist-product
of a directed relational system $\mathbf{A}$ with involution can be derived from a
Sheffer operation assigned to $\mathbf{A}$. We give a positive answer in the
following theorem.
###### Theorem 5.5.
Let $(A,R,{}^{\prime})$ be a directed relational system with involution,
$|_{A}$ an assigned Sheffer operation on $A$ and define
$(x,y)|_{B}(z,v):=(y^{\prime}|_{A}v^{\prime},(x|_{A}z)^{\prime})$
for all $(x,y),(z,v)\in A^{2}$. Then $|_{B}$ is a Sheffer operation on $A^{2}$
assigned to the twist-product of $(A,R)$.
###### Proof.
For $a,b,c,d\in A$ the following are equivalent:
$\displaystyle((a,b)^{*},(c,d)^{*})$ $\displaystyle\in S,$
$\displaystyle((b,a),(d,c))$ $\displaystyle\in S,$ $\displaystyle(b,d),(c,a)$
$\displaystyle\in R,$
$\displaystyle(b^{\prime\prime},d^{\prime\prime}),(a^{\prime},c^{\prime})$
$\displaystyle\in R,$ $\displaystyle(b^{\prime}|_{A}d^{\prime},a|_{A}c)$
$\displaystyle=(d^{\prime\prime},c^{\prime}),$
$\displaystyle(b^{\prime}|_{A}d^{\prime},(a|_{A}c)^{\prime})$
$\displaystyle=(d,c),$ $\displaystyle(a,b)|_{B}(c,d)$ $\displaystyle=(d,c),$
$\displaystyle(a,b)|_{B}(c,d)$ $\displaystyle=(c,d)^{*}$
and the following are equivalent:
$\displaystyle(b^{\prime}|_{A}d^{\prime},a|_{A}c)$ $\displaystyle\in
U(b^{\prime\prime},d^{\prime\prime})\times U(a^{\prime},c^{\prime}),$
$\displaystyle(b^{\prime}|_{A}d^{\prime},(a|_{A}c)^{\prime})$
$\displaystyle\in U(b,d)\times L(a,c),$
$\displaystyle(b^{\prime}|_{A}d^{\prime},(a|_{A}c)^{\prime})$
$\displaystyle\in U((b,a),(d,c)),$ $\displaystyle(a,b)|_{B}(c,d)$
$\displaystyle\in U((a,b)^{*},(c,d)^{*}).$
∎
In order to simplify notation we extend binary relations between elements of a
non-void set $A$ to relations between subsets of $A$.
Let $A$ be a non-void set, $b,c$ be elements of $A$, $B,C$ be subsets of $A$
and $R$ be a binary relation on $A$. We say $(B,C)\in R$ if $B\times
C\subseteq R$. Instead of $(\\{b\\},C)\in R$ and $(B,\\{c\\})\in R$ we shortly
write $(b,C)\in R$ and $(B,c)\in R$, respectively.
The concept of a Kleene lattice was introduced by J. A. Kalman ([12]). Recall
that a distributive lattice $(L,\vee,\wedge,{}^{\prime})$ with antitone
involution is called a Kleene lattice if it satisfies the so-called normality
condition, i.e. the inequality
$x\wedge x^{\prime}\leq y\vee y^{\prime}\text{ for all }x,y\in L.$
These lattices are used in logic in order to formalize certain De Morgan
propositional logics. For posets with involution, this notion was already
generalized by the authors in [9] in the following way: A distributive poset
$(P,\leq,{}^{\prime})$ with involution is called a Kleene poset if
$L(x,x^{\prime})\leq U(y,y^{\prime})\text{ for all }x,y\in P$
which means that $z\leq v$ for all $x,y\in P$ and all $(z,v)\in
L(x,x^{\prime})\times U(y,y^{\prime})$.
###### Definition 5.6.
1. (i)
A Kleene relational system is a relational system $(A,R,{}^{\prime})$ with an
antitone involution satisfying
$(L(x,x^{\prime}),U(y,y^{\prime}))\in R\text{ for all }x,y\in A.$
2. (ii)
If $\mathbf{A}=(A,R)$ is a relational system, $a\in A$ and $(A^{2},S,$
${}^{*})$ the twist-product of $\mathbf{A}$ then we define the following
subset of $A^{2}$:
$P_{a}(\mathbf{A}):=\\{(x,y)\in A^{2}\mid(L(x,y),a),(a,U(x,y))\in R\\}.$
It is worth noticing that Kleene lattices and Kleene posets are Kleene
relational systems according to our previous definition.
Using the above defined subset of the twist-product, we can show that every
directed relational system with a transitive relation can be embedded into a
Kleene relational system.
###### Theorem 5.7.
Let $\mathbf{A}=(A,R)$ be a directed relational system, $a\in A$,
$(A^{2},S,{}^{*})$ the twist-product of $\mathbf{A}$ and
$T:=S\cap(P_{a}(\mathbf{A}))^{2}$. Then the following hold:
1. (i)
If $R$ is transitive then $(P_{a}(\mathbf{A}),T,{}^{*})$ is a directed
relational system with involution which is a Kleene relational system,
2. (ii)
the mapping $x\mapsto(x,a)$ is an embedding of $\mathbf{A}$ into
$(P_{a}(\mathbf{A}),T)$.
###### Proof.
Let $(b,c),(d,e)\in P_{a}(\mathbf{A})$.
1. (i)
Put $\mathbf{B}:=(P_{a}(\mathbf{A}),T,{}^{*})$. From $(b,c)\in
P_{a}(\mathbf{A})$ we conclude $(L(b,c),a),(a,U(b,c))\in R$ and hence
$(L(c,b),a),(a,U(c,b))\in R$, i.e. $(b,c)^{*}=(c,b)\in P_{a}(\mathbf{A})$
which shows that $P_{a}(\mathbf{A})$ is closed with respect to ∗. According to
Theorem 5.4, $\mathbf{B}$ is a directed relational system with involution.
Because of
$\displaystyle(L((b,c),(c,b)),(a,a))=(L(b,c)\times U(b,c),(a,a))$
$\displaystyle\in S,$ $\displaystyle((a,a),U(d,e)\times
L(d,e))=((a,a),U((d,e),(e,d)))$ $\displaystyle\in S$
we have $(L((b,c),(c,b)),U((d,e),(e,d)))\in S$ due to transitivity of $S$
(which follows from the transitivity of $R$) and hence $\mathbf{B}$ is a
Kleene relational system.
2. (ii)
For all $x\in A$ we have $(L(x,a),a),(a,U(x,a))\in R$ and hence $(x,a)\in
P_{a}(\mathbf{A})$. The rest follows from Theorem 5.4.
∎
It should be remarked that if $R$ is transitive then
$(P_{a}(\mathbf{A}),T,{}^{*})$ is a relational subsystem of the twist-product
$(A^{2},S,{}^{*})$ of $\mathbf{A}$.
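As a mechanical check of Theorem 5.7 (our illustration only), one can compute $P_{a}(\mathbf{A})$ for the three-element chain with $a=1$ and verify the Kleene condition for $(P_{a}(\mathbf{A}),T,{}^{*})$:

```python
# Theorem 5.7 checked on the chain A = {0, 1, 2}, R = <= (transitive), a = 1.
A = [0, 1, 2]
R = {(x, y) for x in A for y in A if x <= y}
a = 1

def L(x, y, univ, rel):   # lower cone of {x, y} in (univ, rel)
    return [z for z in univ if (z, x) in rel and (z, y) in rel]

def U(x, y, univ, rel):   # upper cone of {x, y} in (univ, rel)
    return [z for z in univ if (x, z) in rel and (y, z) in rel]

AA = [(x, y) for x in A for y in A]
S = {(p, q) for p in AA for q in AA
     if (p[0], q[0]) in R and (q[1], p[1]) in R}
star = lambda p: (p[1], p[0])

# P_a(A): pairs whose lower cone lies below a and upper cone above a
Pa = [(x, y) for (x, y) in AA
      if all((z, a) in R for z in L(x, y, A, R))
      and all((a, z) in R for z in U(x, y, A, R))]
T = {(p, q) for (p, q) in S if p in Pa and q in Pa}

def LT(p):  # lower cone of {p, p*} in (P_a(A), T)
    return [r for r in Pa if (r, p) in T and (r, star(p)) in T]

def UT(q):  # upper cone of {q, q*} in (P_a(A), T)
    return [r for r in Pa if (q, r) in T and (star(q), r) in T]

# Kleene condition: (L(p, p*), U(q, q*)) in T for all p, q in P_a(A)
assert all((u, v) in T for p in Pa for q in Pa
           for u in LT(p) for v in UT(q))
print(len(Pa))  # 7 of the 9 pairs survive
```

Only the pairs $(0,0)$ and $(2,2)$ fail the defining condition, so $7$ of the $9$ pairs of $A^{2}$ survive.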
## References
* [1] G. Birkhoff, Lattice Theory. Amer. Math. Soc., Providence, R.I., 1979. ISBN 0-8218-1025-1.
* [2] S. Bonzio and I. Chajda, Residuated relational systems. Asian-Eur. J. Math. 11 (2018), 1850024, 14 pp.
* [3] I. Chajda, Sheffer operation in ortholattices. Acta Univ. Palack. Olomuc. Fac. Rerum Natur. Math. 44 (2005), 19–23.
* [4] I. Chajda and M. Kolařík, Sheffer operations in complemented posets. Mathematics for Applications (to appear).
* [5] I. Chajda, M. Kolařík and H. Länger, Algebras assigned to ternary relations. Miskolc Math. Notes 14 (2013), 827–844.
* [6] I. Chajda and H. Länger, Groupoids assigned to relational systems. Math. Bohem. 138 (2013), 15–23.
* [7] I. Chajda and H. Länger, Groupoids corresponding to relational systems. Miskolc Math. Notes 17 (2016), 111–118.
* [8] I. Chajda and H. Länger, Relational systems with involution. Asian-Eur. J. Math. 9 (2016), 1650087, 8 pp.
* [9] I. Chajda and H. Länger, Kleene posets and pseudo-Kleene posets. Miskolc Math. Notes (submitted). http://arxiv.org/abs/2006.04417.
* [10] I. Chajda, H. Länger and P. Ševčik, An algebraic approach to binary relations. Asian-Eur. J. Math. 8 (2015), 1550017, 13 pp.
* [11] R. Fraissé, Sur l’extension aux relations de quelques propriétés des ordres. Ann. Sci. École Norm. Sup. 71 (1954), 363–388.
* [12] J. A. Kalman, Lattices with involution. Trans. Amer. Math. Soc. 87 (1958), 485–491.
* [13] J. Riguet, Relations binaires, fermetures, correspondances de Galois. Bull. Soc. Math. France 76 (1948), 114–155.
* [14] H. M. Sheffer, A set of five independent postulates for Boolean algebras, with application to logical constants. Trans. Amer. Math. Soc. 14 (1913), 481–488.
Authors’ addresses:
Ivan Chajda
Palacký University Olomouc
Faculty of Science
Department of Algebra and Geometry
17. listopadu 12
771 46 Olomouc
Czech Republic
<EMAIL_ADDRESS>
Helmut Länger
TU Wien
Faculty of Mathematics and Geoinformation
Institute of Discrete Mathematics and Geometry
Wiedner Hauptstraße 8-10
1040 Vienna
Austria, and
Palacký University Olomouc
Faculty of Science
Department of Algebra and Geometry
17. listopadu 12
771 46 Olomouc
Czech Republic
<EMAIL_ADDRESS>
# Two-Sided Matching Markets in the ELLIS 2020 PhD Program
Maximilian Mordig1,2 Riccardo Della Vecchia3 Nicolò Cesa-Bianchi4 Bernhard
Schölkopf1,2
(1 Max-Planck Institute for Intelligent Systems Tübingen
2 ETH Zürich
3 Artificial Intelligence Lab, Bocconi University, Milano, Italy
4 Dept. of Computer Science & DSRC, University of Milan, Italy
)
###### Abstract
The ELLIS PhD program is a European initiative that supports excellent young
researchers by connecting them to leading researchers in AI. In particular,
PhD students are supervised by two advisors from different countries: an
advisor and a co-advisor. In this work we summarize the procedure that, in its
final step, matches students to advisors in the ELLIS 2020 PhD program. The
steps of the procedure are based on the extensive literature of two-sided
matching markets and the college admissions problem (Knuth and De Bruijn,
1997; Gale and Shapley, 1962; Roth and Sotomayor, 1992). We introduce PolyGS,
an algorithm for the case of two-sided markets with quotas on both sides (also
known as many-to-many markets) which we use throughout the selection procedure
of pre-screening, interview matching and final matching with advisors. The
algorithm returns a stable matching in the sense that no two persons who are
not matched together would both prefer to be matched with each other rather
than with their current partners (given
their indicated preferences). Roth (1984) gives evidence that only stable
matchings are likely to be adhered to over time. Additionally, the matching is
student-optimal. Preferences are constructed based on the rankings each side
gives to the other side and the overlaps of research fields. We present and
discuss the matchings that the algorithm produces in the ELLIS 2020 PhD
program.
## 1 Introduction
The ELLIS PhD program is a European initiative that supports excellent young
researchers by connecting them to leading researchers in AI. In particular,
PhD students are supervised by two advisors from different countries. This
document summarizes the procedure that, in its final step, matches students to
advisors in the ELLIS 2020 PhD program. After an overview of the selection
procedure, we present the theory of two-sided matching markets which is used
first to match the students to the evaluators for an initial pre-screening,
then to match candidates to advisors in the interviews, and finally is
responsible for matching students to advisors in ELLIS 2020. We propose an
algorithm that matches students to advisors. This matching is a suggestion and
may be inspected and corrected manually to satisfy additional criteria. The
co-advisor must be manually matched to a student and must be from a different
country than the main advisor in ELLIS. Briefly, each student ranks research
fields and/or professors they would like to work with and vice versa for
professors. This happens in the following order. First, when students apply,
they are pre-screened using just the preferences of students over professors
and the overlaps between fields of students and advisors. After this, advisors
also rank the acceptable candidates and the interviews are set. After the
interviews, both students and professors can subsequently update their
preferences and the final matching takes place, matching each student to an
advisor. The co-advisor is added separately, taking care of possible
constraints imposed by the program.
In (Mordig et al., 2021), we address the case when both advisors and co-
advisors should be matched algorithmically to a student without human aid. We
assume that all preferences are available to us. When the acquisition of
preferences is too expensive, Charlin and Zemel (2013) use machine-learning
approaches to interpolate sparse preference data to other persons. Furthermore,
our procedure relies heavily upon the seminal work on two-sided markets by
Gale and Shapley (1962), which has had high practical impact since its
introduction, leading to improved matching systems for high-school admissions
and labor markets (Roth, 1984), house allocations with existing tenants
(Abdulkadiroğlu and Sönmez, 1999), content delivery networks (Maggs and
Sitaraman, 2015), and kidney exchanges (Roth et al., 2005).
We introduce the different phases of the selection procedure in a broad and
schematic way in Section 2. In Section 3 we present the algorithm that is used
(with different parameters) in the different phases. In Section 4 we state the
choices of parameters and give additional details about the procedure adopted
in the phases of the selection procedure.
## 2 Phases of the Selection Procedure
The ELLIS PhD selection procedure consists of the following phases:
* •
_Phase 1 (Student-Evaluator Matching for Pre-Screening)_ : Students state up
to five research fields and optionally rank up to 10 professors. Each
professor provides up to five research fields and specifies an evaluator who
pre-screens applications (e.g. a postdoc).
Based on research fields and on the ranking of professors made by students,
both students and evaluators are assigned preferences over the other side of
the market. A matching algorithm runs to match students and evaluators. Each
student is matched to three evaluators. Each evaluator is assigned the same
maximum capacity of students to pre-screen. Evaluators score the assigned
students.
* •
_Phase 2 (Student-Advisor Matching for Interviews)_ : After filtering out
badly scoring students, professors rank students based on the pre-screening
scores and other means. Each professor ranks a number of students from 1 - 4
(best to worst) reflecting their priorities for interviews. 5 and 6 indicate
that the scoring advisor does not consider the applicant for interviews. 5
specifies that the student is not a good fit for the scoring advisor, but may
be good for someone else; 6 marks the student as unfit. An advisor may be
assigned to interview any of their ranked students. Each professor specifies
their (maximum) interviewing capacity.
Using preferences of students from _Phase 1_ , the ranking assigned by
advisors to students in this phase, plus the overlap of research fields, the
matching for interviews is computed. Each student must be interviewed by three
advisors (ELLIS Excellence Criterion). (Due to the limited interviewing
capacity of advisors and the high number of ranked students, it proved
impossible to match all promising candidates to 3 interviews; in practice,
students are also matched for 1 or 2 interviews.) The algorithm provides
each advisor with a number of students to interview not greater than their
interviewing capacity. Advisors must interview the first 80% of the assigned
students; the remaining 20% are provided on a voluntary basis.
* •
_Phase 3 (Student-Advisor Matching for Hiring)_ : Advisors and students rank
the other side again. Advisors also provide their (maximum) hiring capacity.
Based on their updated preferences, advisors and students are matched
together. Each student is matched to at most one advisor and each advisor is
matched up to their specified hiring capacity.
* •
_Phase 4 (Student-Co-advisor Matching for Hiring)_ : Advisors send out
acceptance letters to all the students they got matched with. They make sure
that they find a co-advisor for any student who accepts. This year, there is
no algorithmic support to find co-advisors.
* •
_Phase 5 (Matching the Unmatched)_ : Students who are ranked for offers, but
not matched, will enter the pool for the rematching phase. Advisors who were
not matched or still have extra hiring capacity can see in the system which
students are still unmatched but positively scored. This year, there is no
algorithmic support for the rematching stage.
## 3 Matching theory – polygamous market
This section explains the theory behind the matching algorithm (Algorithm 1)
which is used in each of the phases of the ELLIS PhD program selection. Our
work is based on an extensive literature (Knuth and De Bruijn, 1997; Gale and
Shapley, 1962; Roth and Sotomayor, 1992) which culminated in the 2012 Economic
Nobel Prize to Alvin E. Roth and Lloyd Shapley for their work on stable
allocations and the practice of market
design (https://www.nobelprize.org/nobel_prizes/economics/laureates/2012/).
Two-sided matching markets are game-theoretic abstractions which correspond to
bipartite matchings. We recall some basic results for marriage markets in
Appendix A and their extensions to the college admission problem in Appendix
B. The theory below provides the theoretical background for the case of two-
sided markets with quotas on both sides, also known as many-to-many markets.
Consider the setting where students need to be matched to advisors for
interviews as in the case at hand. Each student can take a maximum number of
interviews and each professor can interview a limited number of students.
Furthermore, no student should be assigned to the same advisor more than once.
Students and advisors form the two sides of a matching market. We can phrase
this problem as follows.
We adopt the convention of men and women for consistency with the
literature and intend no statement about gender. We use the pronouns
“his/her” to make clear that we refer to a man/woman in the model, so “person
or himself” can also mean “person or herself”. Let $M$ and $W$ be finite sets
of men and women, and let each person be endowed with a strict order of
preference with respect to the members of the opposite sex. For each person
$p\in W\cup M,$ define $q_{p}$ to be the “quota”/“capacity” of this person,
i.e., the amount of spouses from the opposite sex this person seeks. We call
this market a _“polygamous market”_ , in reference to the classical marriage
market problem introduced by Gale and Shapley (1962). In this case, we want to
allow quotas on both sides, unlike college admission markets and marriage
markets in Appendix A and B.
Let us formally define preference lists and the polygamous market.
###### Definition 1 (preference lists).
For a market over men $M$ and women $W,$ each man $m$ has preferences $P(m)$
over all women in $W\cup\\{m\\}$ defined by the binary relations
$\geq_{m},=_{m}$ (which defines $>_{m},<_{m},\leq_{m}$). A man has strict
preferences if $=_{m}$ is equal to $=$, i.e. if he is not indifferent between
any two women. Analogously, each woman $w$ has preferences $P(w)$ over all
persons in $M\cup\\{w\\}$. Person $p_{1}$ is acceptable to $p$ if
$p_{1}>_{p}p$. For persons $p,p_{1}$ and a set of persons $K$, we define
$p_{1}>_{p}K\iff\exists\tau\in K:p_{1}>_{p}\tau$.
We assume that the spots are independent such that individual preferences
suffice to express the preferences over assignments of up to $q_{p}$ persons.
###### Definition 2 (polygamous market).
A polygamous market over men $M$ and women $W$ is defined by the quadruple
$(M,W,P,Q)$, where:
* •
$Q=\\{q_{m}\mid m\in M\\}\cup\\{q_{w}\mid w\in W\\}$ are the quotas of men and
women,
* •
$P$ is the set of preference lists of men and women:
$P=\left\\{P\left(m_{1}\right),\dots,P\left(m_{|M|}\right),P\left(w_{1}\right),\dots,P\left(w_{|W|}\right)\right\\}.$
###### Definition 3 (valid matching).
A matching on the polygamous market $(M,W,P,Q)$ is a set $\mu\subseteq
M\times W$ such that:
* •
$|\mu(w)|\leq q_{w}\;\forall w\in W$, where $\mu(w)=\\{m\mid(m,w)\in\mu\\}$,
* •
$|\mu(m)|\leq q_{m}\;\forall m\in M$, where $\mu(m)=\\{w\mid(m,w)\in\mu\\}$.
We have the property $m\in\mu(w)\iff w\in\mu(m)$. $|\mu(m)|\leq q_{m}$ means
that $m$ is matched $q_{m}-|\mu(m)|$ times to itself and we fill $\mu(m)$ with
$m$ up to size $q_{m}$, same for women. Since a set does not allow duplicate
elements, the same man and woman can match at most once. (If $\mu$ were a
multi-set, they could match several times; in this case, stable matchings can
be obtained by running the traditional GS algorithm on the extended market,
where both sides are replicated according to their capacities.) When the
quotas $Q$ satisfy the college admission market assumption, i.e. one side of
the market has quotas all equal to one, this definition coincides with the
definition given in Appendix B.
A matching $\mu$ is unstable if any two persons prefer to be together rather
than with their assigned partners. From this, a new matching can be
constructed. It should be valid in the sense that it matches any pair at most
once. So $(m,w)$ is only a blocking pair if it is not matched already in
$\mu$. On marriage and college admission markets, this definition coincides
with the definitions in Appendix B.
###### Definition 4 (stability).
A matching $\mu$ is unstable if there exists a pair $(m,w)\in(M\times
W)\cup\\{(m,m)\mid m\in M\\}\cup\\{(w,w)\mid w\in W\\}$ such that $(m=w\lor
m\notin\mu(w))$ and $w>_{m}\mu(m)$ and $m>_{w}\mu(w)$. $(m,w)$ is called a
blocking pair of $\mu$. (The case $m=w$ is also known as individual
rationality.)
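Definition 4 translates directly into a blocking-pair checker. The sketch below is our illustration (names and the data layout are assumptions, not from the paper); each preference list is ranked best-first and contains the person themself, as in Definition 1, and $\mu(p)$ is filled with $p$ up to $q_{p}$ as in Definition 3:

```python
def prefers(p, x, y, pref):
    """x >_p y under p's preference list pref[p] (lower index = better)."""
    return pref[p].index(x) < pref[p].index(y)

def filled(p, partners, quota):
    """mu(p) filled up with p itself to quota q_p, as in Definition 3."""
    return partners + [p] * (quota - len(partners))

def blocking_pairs(mu, men, women, pref, q):
    """All pairs violating Definition 4, including m = w (individual
    rationality)."""
    pairs = []
    for m in men:
        for w in women:
            if w in mu[m]:
                continue                      # already matched together
            mm, mw = filled(m, mu[m], q[m]), filled(w, mu[w], q[w])
            if (any(prefers(m, w, t, pref) for t in mm)
                    and any(prefers(w, m, t, pref) for t in mw)):
                pairs.append((m, w))
    for p in men + women:                     # the case m = w
        if any(prefers(p, p, t, pref) for t in mu[p]):
            pairs.append((p, p))
    return pairs

men, women = ["m1"], ["w1"]
pref = {"m1": ["w1", "m1"], "w1": ["m1", "w1"]}
q = {"m1": 1, "w1": 1}
print(blocking_pairs({"m1": [], "w1": []}, men, women, pref, q))
# [('m1', 'w1')]
```

In this tiny instance both persons are unmatched yet acceptable to each other, so they form a blocking pair; matching them removes it.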
In Algorithm 1, we introduce PolyGS, which is a direct reformulation of the
Gale-Shapley (GS) algorithm to the case of quotas on both sides of the
polygamous market. All men are initially unmatched and women start by matching
with themselves as many times as they have capacity. At each step of the
algorithm, a man with available quotas proposes to the woman he prefers most
among all women who have not (yet) rejected him. Next, the woman compares this
offer with the least favourite man among all the ones she is provisionally
engaged to. If the new proposal is worse according to her preference list, she
directly rejects it. If the new proposal is better according to her preference
list, she disengages the least favourite man and provisionally engages with
the new one. As long as a man has not filled his quota, he continues
proposing. If no acceptable woman is left, he fills the remaining spots with
himself. Furthermore, we use the notation $[w]^{q_{w}}$ to refer to the list
$[w,\dots,w]$ of size $q_{w}$. In Algorithm 1, the function
$\textsc{offerNext}(P,m)$ returns $m$’s most preferred woman who did not
reject him yet, given his preferences $P(m)$. In the algorithm, we define
$\mu(m)=\\{w\mid(m,w)\in\mu\\}$, so it is not filled to capacity (during the
algorithm). The function $\textsc{weakestMatch}(w,\mu(w),P)$ returns the
weakest match $p\in\mu(w)$ of the woman $w$ ($p$ can be either a man or the
woman herself) according to her preferences $P(w)$. Note that
$m>_{w}\textsc{weakestMatch}(w,\mu(w),P)$ is equivalent to $m>_{w}\mu(w)$.
Data: Market $(M,W,P,Q)$, quotas $Q$
Result: Matching $\mu$
$\mu\leftarrow\cup_{w\in W}\cup_{i=1}^{q_{w}}\\{(w,w)\\}$ // match every woman
$q_{w}$ times to herself
while _there is a man $m$ with available capacity, i.e. $|\mu(m)|<q_{m}$_ do
$w\leftarrow\textsc{offerNext}(P,m)$ // best woman $w$ to which $m$ has not
yet proposed, otherwise $m$
if _$w=m$_ then
// man proposed to himself and accepts
$\mu\leftarrow\mu\cup\\{(m,m)\\}$
else if _$m >_{w}m^{\prime}=\textsc{weakestMatch}(w,\mu(w),P)$_ then
$\mu\leftarrow\mu\setminus\\{(m^{\prime},w)\\}$ // unmatch $(m^{\prime},w)$
(once only)
$\mu\leftarrow\mu\cup\\{(m,w)\\}$ // match $(m,w)$
end if
end while
return _$\mu$_
Algorithm 1 PolyGS
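A compact Python rendering of Algorithm 1 might look as follows (our sketch: variable names are ours, preference lists contain only acceptable partners ranked best-first, and spots a man cannot fill are simply left open rather than explicitly self-matched):

```python
def poly_gs(men, women, pref, q):
    """PolyGS with men proposing; pref[p] lists p's acceptable partners
    best-first, q[p] is p's quota."""
    mu = {p: [] for p in men + women}     # mu[p] = current partners of p
    next_choice = {m: 0 for m in men}     # pointer into m's preference list
    free = [m for m in men for _ in range(q[m])]  # one entry per open spot
    while free:
        m = free.pop()
        if next_choice[m] >= len(pref[m]):
            continue                      # m keeps this spot for himself
        w = pref[m][next_choice[m]]
        next_choice[m] += 1
        rank = {p: i for i, p in enumerate(pref[w])}
        if m not in rank:
            free.append(m)                # w finds m unacceptable
        elif len(mu[w]) < q[w]:
            mu[w].append(m); mu[m].append(w)
        else:
            worst = max(mu[w], key=lambda p: rank[p])
            if rank[m] < rank[worst]:     # w prefers m to her weakest match
                mu[w].remove(worst); mu[worst].remove(w)
                free.append(worst)
                mu[w].append(m); mu[m].append(w)
            else:
                free.append(m)            # w rejects m; m proposes again
    return mu

men, women = ["m1", "m2"], ["w1", "w2"]
pref = {"m1": ["w1", "w2"], "m2": ["w1"],
        "w1": ["m2", "m1"], "w2": ["m1", "m2"]}
q = {p: 1 for p in men + women}
print(poly_gs(men, women, pref, q))
# {'m1': ['w2'], 'm2': ['w1'], 'w1': ['m2'], 'w2': ['m1']}
```

With all quotas equal to one, the run above is the classical GS outcome: $m_1$ is displaced from $w_1$ by $w_1$'s preference for $m_2$ and settles for $w_2$.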
The algorithm coincides with the GS algorithm on marriage markets and with the
college GS algorithm on college admission markets. We now prove that this
algorithm returns a matching which matches the same man and woman at most
once. Additionally, it is stable and optimal for the side which is proposing.
The proofs are adaptations of the many-to-one case Gale and Shapley (1962).
###### Proposition 5.
PolyGS terminates and returns a valid and stable matching (Definition 4).
###### Proof.
The algorithm terminates because each man proposes to each woman at most once.
The returned matching $\mu$ satisfies the quotas because $|\mu(w)|=q_{w}$ is
preserved over iterations for all women $w$ and the algorithm terminates only
when $|\mu(m)|=q_{m}$ for all men $m$. Since each man proposes to each woman
at most once, a man can be matched to a woman at most once. Therefore, the
matching is valid.
To prove stability, we use the following property: Over iterations, the
weakest match of any woman $w$ cannot decrease. By contradiction, let $(m,w)$
be a blocking pair of the matching $\mu$. If $m=w=:p\in M\cup W$, this implies
$p>_{p}\mu(p)$, but this can never happen because only acceptable partners are
matched. Otherwise, $m$ is a man and $w$ is a woman and there exist
$m^{\prime},w^{\prime}$ (not necessarily man and woman) such that
$w>_{m}w^{\prime}\in\mu(m)$ and $m>_{w}m^{\prime}\in\mu(w)$, where
$m^{\prime}$ is the weakest element of $\mu(w)$. Since
$w>_{m}w^{\prime}\in\mu(m)$, $m$ must have proposed to $w$. Since
$m>_{w}m^{\prime}\in\mu(w)$ and the weakest match can never decrease, $w$
would never have rejected $m$, which contradicts $m\notin\mu(w)$. ∎
Looking at the proof of optimality in the college admission market (without
passing via the extended market), we see that it can be extended without
problems to this setting. The difference is that a matching $\mu$ is unstable
only if the blocking pair $(m,w)$ is not already part of $\mu$, i.e.
$m\notin\mu(w)$.
###### Proposition 6 (Optimality).
Assume strict preferences and men propose (with quotas on both sides). PolyGS
returns a man-optimal result $\mu$, i.e. for every man $m$:
$\mu(m)_{i}\geq_{m}\mu^{\prime}(m)_{i}\;\forall m,i$. The assignments
$\mu(m),\mu^{\prime}(m)$ are ordered from best to worst in terms of $\geq_{m}$
and $\mu^{\prime}$ is stable according to Definition 4. Under strict
preferences, PolyGS returns a unique result (independently of the order in
which men propose).
###### Proof.
Uniqueness follows immediately from optimality. A woman $w$ is achievable to
man $m$ if there exists a stable matching $\mu^{\prime}$ such that
$m\in\mu^{\prime}(w)$. It is enough to prove that a man $m$ is never rejected
by an achievable woman. Indeed, assume there exists a man $m$ and let $i$ the
first index such that $\mu(m)_{i}<_{m}\mu^{\prime}(m)_{i}$. By assumption,
$\mu(m)_{j}\geq_{m}\mu^{\prime}(m)_{j}>_{m}\mu^{\prime}(m)_{i}$ for all $j<i$.
Also, $\mu^{\prime}(m)_{i}>_{m}\mu(m)_{i}\geq_{m}\mu(m)_{j}$ for all $j\geq i$
and this means that man $m$ was rejected by $\mu^{\prime}(m)_{i}$ since $m$
applied to $\mu(m)_{i}<_{m}\mu^{\prime}(m)_{i}$ and is not matched to
$\mu^{\prime}(m)_{i}$.
Let $m$ be the first man who is rejected by an achievable woman $w$ during the
execution of the algorithm. Since $w$ is achievable to $m$, let $\mu^{\prime}$
be the stable matching in which $m$ is matched to $w$, i.e. $m\in\mu^{\prime}(w)$
(and $w\in\mu^{\prime}(m)$). Since $m$ was rejected (and is acceptable to
$w$), there must be $q_{w}$ other men who are all preferred by the woman:
$m_{i}>_{w}m\in\mu^{\prime}(w)\;\forall i=1,\dots,q_{w}$. Because none of
these other men $m_{i}$ was yet rejected by an achievable woman, $\forall
i\,\exists j_{i}:w\geq_{m_{i}}\mu^{\prime}(m_{i})_{j_{i}}$. Indeed, by
contradiction, assume that there exists an index $i$ s.t.
$w<_{m_{i}}\mu^{\prime}(m_{i})_{j}\;\forall j=1,\dots,q_{m_{i}}$. $m_{i}$
cannot be matched to $w$. Since $m_{i}$ applied to $w$, this means that
$m_{i}$ was rejected by the achievable $\mu^{\prime}(m_{i})_{j_{i}}$ for some
$j_{i}$, which is a contradiction with $m$ being the first man with this
property.
Since $m\in\mu^{\prime}(w)$, there must exist $m_{i}\notin\mu^{\prime}(w)$.
Thus $w\neq\mu^{\prime}(m_{i})_{j_{i}}$ and
$w>_{m_{i}}\mu^{\prime}(m_{i})_{j_{i}}\in\mu^{\prime}(m_{i})$. Thus,
$(m_{i},w)$ (with the additional property $m_{i}\notin\mu^{\prime}(w)$) blocks
$\mu^{\prime}$, which contradicts the stability of $\mu^{\prime}$. Hence $w$
is not achievable to $m$. ∎
In fact, one can also prove that it is woman-pessimal, i.e.
$\mu^{\prime}(w)_{i}\geq_{w}\mu(w)_{i}\;\forall w,i$ for all stable matchings
$\mu^{\prime}$.
As for the GS algorithm, women can propose and we obtain a woman-optimal
matching. Optimality generally does not hold when preferences are not strict.
When preferences are non-strict, we can break ties and the algorithm returns a
matching that is also stable with respect to the original preferences.
## 4 Application to ELLIS 2020
As part of the selection procedure, ELLIS aims to match people on one side of
the market to the other side, possibly several times. We will always use the
PolyGS algorithm with students proposing. It reduces to the traditional GS algorithm and
college admission algorithm when quotas are all equal to one or equal to one
on one side respectively. Students always propose to ensure student-
optimality. This means that students get the best matches among all the stable
ones. In each of the phases, we need to specify the market participants and
their preferences as well as their capacities. When the preferences provided
to the algorithm are non-strict, they are broken arbitrarily. To have more
control over the tie-breaking, we break some of the ties beforehand. We
construct preferences based on the similarity score between fields of
research. For person $p_{1}$, the research similarity score with person
$p_{2}$ is
$\displaystyle S(p_{1},p_{2})=R(p_{1})^{T}\cdot R(p_{2}),$
where $R(p)$ denotes the multi-one-hot encoding of the research interests of
person $p$. Say $[A,B,C]$ are the available research fields and person $p$ is
interested in fields $A$ and $C$, then $R(p)=[1,0,1]^{T}$. A person can use
this score to rank the people on the other side of the market.
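The score above can be computed directly from the encodings. A minimal sketch (the field list and interest sets are illustrative):

```python
FIELDS = ["A", "B", "C"]          # available research fields (illustrative)

def encode(interests):
    """Multi-one-hot encoding R(p) of a person's research interests."""
    return [1 if f in interests else 0 for f in FIELDS]

def similarity(i1, i2):
    """S(p1, p2) = R(p1)^T . R(p2): the number of shared research fields."""
    return sum(a * b for a, b in zip(encode(i1), encode(i2)))
```

For a person interested in fields A and C, `encode({"A", "C"})` gives `[1, 0, 1]` as in the text.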
We now describe each of the phases. Incomplete, fake and blatantly bad
students are removed from the system before each phase. Removed students may
be added again after manual inspection at each phase, e.g. students who got
kicked out because they weren’t matched to three evaluators (Phase 1), three
interviews (Phase 2) or to an advisor (Phase 3).
### 4.1 Phase 1 - Pre-Screening
By December 1, students and professors specify their areas of interest.
Students may additionally rank up to 10 professors. Each professor provides an
evaluator to pre-screen applications. The evaluator’s research fields are
those of the corresponding professor and the market consists of evaluators and
students. Evaluators rank students based on research similarity score.
Students give the best ranks to the (up to 10) advisors (evaluators) they
listed, followed by all others ordered by research overlap. More precisely,
given a student, let $\mathcal{A}$ be the ordered set of (up to 10) advisors who
were ranked by the student, and let $\mathcal{B}$ be the ordered set of all advisors
ordered by decreasing research overlap score. Then, the student’s preferences
become:
$\displaystyle\mathcal{A}+(\mathcal{B}\setminus\mathcal{A}),$
where the $+$ operation appends the second list to the first list preserving
the order. Since preferences are based on discrete scores, ties can occur.
Advisors break ties between any indifferent students such that they prefer
students who listed them. For example, if
$\mathcal{A}=\\{a_{1},[a_{2},a_{3}]\\},\mathcal{B}=\\{[a_{1},a_{4}],a_{2},[a_{3},a_{5},a_{6}]\\}$,
the new preferences are $\\{a_{1},[a_{2},a_{3}],a_{4},[a_{5},a_{6}]\\}$. If
advisor $a_{1}$ has preferences
$\\{[s_{1},s_{2},s_{3},s_{4}],[s_{5},s_{6},s_{7}]\\}$ and $s_{1},s_{2},s_{5}$
are the only students who listed $a_{1}$, the advisor’s preferences become
$\\{[s_{1},s_{2}],[s_{3},s_{4}],s_{5},[s_{6},s_{7}]\\}$. The remaining ties
are broken arbitrarily. Students all have quota $3$. Each evaluator has quota
$\left\lceil\frac{3|S|}{|A|}\right\rceil$, where $|S|,|A|$ are the total
number of students and advisors. In the period December 2 - 4, the algorithm
runs. Students are assigned to evaluators. From December 5 - 10, evaluators
score each student (using the scores “A”, “A-B”, “B”, “B-C” and “C”). Because
a student may not get assigned to three evaluators (e.g. in the scenario with
only one evaluator), we break ties again randomly up to 10 times. If this is
unfruitful, we remove (a subset of) insufficiently matched students and rerun
the algorithm. This is repeated until a solution is found.
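The two constructions above, $\mathcal{A}+(\mathcal{B}\setminus\mathcal{A})$ and the advisor-side tie-breaking, can be sketched as follows. Preferences are represented as lists of indifference classes; a singleton class like `['a4']` corresponds to the bare $a_{4}$ in the text's notation, and the names are illustrative:

```python
def merge_prefs(A, B):
    """A + (B \\ A): listed advisors first (in the student's order), then
    the remaining advisors ordered by decreasing research overlap.
    Preferences are lists of indifference classes (lists of names)."""
    seen = {a for cls in A for a in cls}
    out = [list(cls) for cls in A]
    for cls in B:
        rest = [a for a in cls if a not in seen]
        if rest:
            out.append(rest)
    return out

def break_advisor_ties(classes, listed_by):
    """Split each indifference class so that students who listed the
    advisor come first, preserving the within-class order."""
    out = []
    for cls in classes:
        fans = [s for s in cls if s in listed_by]
        rest = [s for s in cls if s not in listed_by]
        out += [c for c in (fans, rest) if c]
    return out
```

On the examples from the text, `merge_prefs` reproduces $\\{a_{1},[a_{2},a_{3}],a_{4},[a_{5},a_{6}]\\}$ and `break_advisor_ties` reproduces $\\{[s_{1},s_{2}],[s_{3},s_{4}],s_{5},[s_{6},s_{7}]\\}$.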
In Figure 1, we plot the rank distribution of the students’ matches, showing
that a very large proportion of students is matched to their first choice.
#### Bound on the number of removed students:
Since we remove students, the matching may not be stable with respect to the
original preferences over all students (including the removed students). We
can bound the maximum number of removed students. Let $q_{s}$ be the maximum (and
target) capacity of students and $q_{s}^{\text{min}}$ be the minimum number of
matches a student must have in order not to be removed. The evaluator capacity
is $q_{e}=\left\lceil\frac{q_{s}|S|}{|E|}\right\rceil$. Suppose $k$ students
were removed and consider the next iteration (one iteration corresponds to tie
breaking up to 10 times). This means that at most $q_{s}(|S|-k)$ evaluator
spots are occupied, i.e. at least $q_{e}|E|-q_{s}(|S|-k)\geq q_{s}k$ evaluator
spots are free. In this iteration, a student can fail to be matched only if all
these free spots are concentrated in at most $q_{s}^{\text{min}}-1$ evaluators. Once
$q_{s}k>(q_{s}^{\text{min}}-1)\cdot q_{e}$, every student is guaranteed to
find enough evaluators and the algorithm terminates. This holds when
$q_{s}k>(q_{s}^{\text{min}}-1)\cdot(\frac{q_{s}|S|}{|E|}+1)$. Therefore, the
maximum number of removed students is
$k\leq\left\lfloor(q_{s}^{\text{min}}-1)\cdot(\frac{|S|}{|E|}+\frac{1}{q_{s}})+1\right\rfloor$.
The fraction of removed students is
$\frac{k}{|S|}\leq(q_{s}^{\text{min}}-1)\cdot(\frac{1}{|E|}+\frac{1}{|S|q_{s}})+\frac{1}{|S|}$.
As expected, $k$ is smaller the smaller the ratio $\frac{|S|}{|E|}$ is. In
Phase 1 of ELLIS, $q_{s}=3,q_{s}^{\text{min}}=3,|E|\approx 100,|S|\approx
500$, therefore $k\leq 11$ and $\frac{k}{|S|}\leq 2.2\%$.
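The bound can be checked numerically; a small sketch reproducing the Phase 1 numbers (the function name is ours):

```python
import math

def max_removed(qs, qs_min, S, E):
    """Bound from the text: k <= floor((qs_min - 1) * (|S|/|E| + 1/qs) + 1)."""
    return math.floor((qs_min - 1) * (S / E + 1 / qs) + 1)

k = max_removed(qs=3, qs_min=3, S=500, E=100)
print(k, k / 500)   # 11 students, i.e. at most 2.2% of the students
```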
Figure 1: Rank distribution of the students’ matches in Phase 1: The top-left
shows the number of students whose best match corresponds to their $i$th
ranked person (with $i$ on the $x$-axis). The top-right is for the second-best
match and the bottom-right is for the third-best match.
### 4.2 Phase 2 - Interview Matching
From December 14 2020 - January 15 2021, professors score students (1 - 4, 5,
6) based on the application documents, scores from the pre-screening phase,
and other means (in the system, professors can see which students listed them,
so they are more likely to rank those). The score “5” means “not a good fit
for me, but still good”; “6” means “this student should not be part of ELLIS”
(and is an indication to remove this student from the system). If an advisor
ranks a student “5” or “6”, they will not be matched to this student.
Professors also specify their interviewing capacity. Based on the scores and
research overlaps (to break ties between students with the same scores), a
ranking over the scored students is created for each professor. A professor can only be matched
to students they scored and didn’t score “5” or “6”. Students are assigned
preferences in the same fashion as before. Each student has quota $3$ (the
“ELLIS Excellence Criterion”, one of the requirements decided by the ELLIS
committee to ensure the excellence of the hired students). While it is
certainly possible that an advisor interviews all students that they are
interested in, the goal is to also give less “visible/good” candidates a
chance to get interviewed and possibly hired. Therefore, even the best
students can only be assigned three interviews. Additional interviews can be
organized individually if necessary. Each advisor is assigned a quota which is
80% of their specified interviewing capacity (when the stated interview
capacity is less than $3$, it is left as is). We decrease the capacity to 80%
because an advisor must interview all of the assigned students, and requiring
the full 100% may be too severe.
Advisors are not constrained regarding their remaining capacity; we give
suggestions for these remaining interviews as outlined below.
Because the ELLIS Excellence Criterion is hard to satisfy, we require each
student to be matched to at least 2 interviews rather than 3. If this student
should be hired in Phase 3, the criterion can be satisfied by manually
arranging an interview. Therefore, each student has a minimum capacity of 2
and a maximum capacity of 3. When a student is assigned to less than 2
interview slots, they are removed and the algorithm reruns (as described in
Phase 1). Since many students may be matched to less than 2 interviews, we
remove at most 20 students at a time (based on average advisor ratings). Note
that a student with only one interview who was not removed can be matched to
an interview slot freed up by a removed student, which avoids their removal.
Finally, when a student has at least one “1” and at least one “5”, or at least
two “1”s, they are never removed even if they only have one interview slot.
The algorithm runs in the time frame January 15 - January 22 2021. To gain
insight into the next-best matches, students and advisors participate again
with their remaining quota (including the additional 20%). No students are
removed in this second matching if they have too few interviews. These new
matches are included in the optional list of candidates an advisor may wish to
interview. However, ELLIS does not put any restrictions on how advisors
eventually fill their additional 20% capacity. In Figure 2, we plot the
distribution of the rank of the $i$th best match for students with
$i=1,\dots,3$, analogous to Figure 1.
Figure 2: Rank distribution of the students’ matches in Phase 2: The top-left
shows the number of students whose best match corresponds to their $i$th most
preferred person (with $i$ on the $x$-axis). The top-right is for the second-
best match and the bottom-right is for the third-best match. If a person is
matched less than three times, the unmatched spots are assigned rank -1 (top-
right and bottom-left).
### 4.3 Phase 3 - Advisor Matching
Between January 23 - February 19 2021, students and professors arrange
interviews. Professors specify their hiring capacity. Students and advisors
update their preferences by February 19. Advisors can be indifferent (e.g.
score two students “1”), but students cannot (and rank up to 10 advisors). For
each advisor, we break ties between equally-scored students based on the total
number of scores other than “6” that a student received, and randomly break any
remaining ties. These preferences are input unmodified to the matching
algorithm (and not modified as before based on the overlap of research
fields). Students have quota 1 and advisors have quota equal to their hiring
capacity. The algorithm is run. Since the number of matches can vary depending
on how ties are broken, the algorithm was rerun 10 times and the matching with
the maximum number of matches was taken. This resulted in 1 or 2 additional
matches. Around March, advisors send out letters of acceptance to all students
they matched with. In Figure 3, we plot the distribution of the rank of the
$i$th best match for students and advisors respectively, analogous to
Figure 2. All students entering Phase 3 were matched. Most
students were matched to their first choice, indicating that people’s
preferences had crystallized over the previous phases. The matching
algorithm was helpful in assigning the remaining persons. Moreover, we see
that all participating advisors were matched at least once. Some advisors were
only matched once, either because they had only one spot or the students they
were interested in found different matches.
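The rerun-and-keep-best procedure can be sketched as follows. Here `run_matching` is a hypothetical stand-in for one matching run with random tie-breaking, not the production pipeline:

```python
import random

def best_of_reruns(run_matching, preferences, n_runs=10, seed=0):
    """Rerun the matching with fresh random tie-breaking and keep the
    run that produces the most matched pairs."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_runs):
        mu = run_matching(preferences, rng)  # one tie-broken matching run
        if best is None or len(mu) > len(best):
            best = mu
    return best
```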
Figure 3: Rank distribution of the matches in Phase 3 for students on the left
and advisors on the right. Rank of match #i plots a histogram over all persons
of the rank each person assigns to their $i$-th best match. If an advisor is
not matched up to rank #i, the histogram counts it as rank -1. For example,
all advisors were matched at their first spot, whereas 35 advisors were not
matched at their second spot (either because they had hiring capacity one or
could not find a student).
### 4.4 Phase 4 - Co-Advisor Matching
This phase is manual. When a student accepts, the advisor has to find a co-
advisor from a different country as required for admission to the ELLIS PhD
Program. Students and advisors may already discuss potential co-supervisors
during the interview stage.
## 5 Acknowledgements
We want to thank the following persons for the initial idea and help with the
practical aspects of this project: Andreas Geiger, Lynn Anthonissen, Leila
Masri, and the other members of the ELLIS PhD Committee.
## References
* Abdulkadiroğlu and Sönmez [1999] Atila Abdulkadiroğlu and Tayfun Sönmez. House allocation with existing tenants. _Journal of Economic Theory_ , 88(2):233–260, 1999.
* Charlin and Zemel [2013] Laurent Charlin and Richard Zemel. The Toronto paper matching system: an automated paper-reviewer assignment system. 2013.
* Gale and Shapley [1962] David Gale and Lloyd S Shapley. College admissions and the stability of marriage. _The American Mathematical Monthly_ , 69(1):9–15, 1962.
* Knuth and De Bruijn [1997] Donald Ervin Knuth and NG De Bruijn. _Stable marriage and its relation to other combinatorial problems: An introduction to the mathematical analysis of algorithms_ , volume 10. American Mathematical Soc., 1997.
* Maggs and Sitaraman [2015] Bruce M Maggs and Ramesh K Sitaraman. Algorithmic nuggets in content delivery. _ACM SIGCOMM Computer Communication Review_ , 45(3):52–66, 2015.
* Mordig et al. [2021] Maximilian Mordig, Riccardo Della Vecchia, Nicolò Cesa-Bianchi, and Bernhard Schölkopf. Multi-sided matching markets with consistent preferences and cooperative partners, 2021.
* Roth [1984] Alvin E Roth. The evolution of the labor market for medical interns and residents: a case study in game theory. _Journal of political Economy_ , 92(6):991–1016, 1984.
* Roth and Sotomayor [1992] Alvin E Roth and Marilda Sotomayor. Two-sided matching. _Handbook of game theory with economic applications_ , 1:485–541, 1992.
* Roth et al. [2005] Alvin E Roth, Tayfun Sönmez, and M Utku Ünver. Pairwise kidney exchange. _Journal of Economic theory_ , 125(2):151–188, 2005.
## Appendix A Marriage-market
In the seminal work by Gale and Shapley [1962], the authors introduced two
types of matching markets, the marriage markets that we are going to recall in
this section, and the college admission markets (Section B). These notions are
at the basis of the extension in Section 3. In a classical marriage market
consisting of a set of men $M$, women $W$ and preferences of each person over
the persons of the other side, the goal is to match each man to at most one
woman. Each woman can get married to one man at most. In fact, a person may
also choose to match with themself rather than match with some unacceptable
partner. We start with some formal definitions, which are illustrated in
Example 1.
###### Definition 7 (preference lists).
For a market over men $M$ and women $W,$ each man $m$ has preferences $P(m)$
over all persons in $W\cup\\{m\\}$ defined by the binary relations
$\geq_{m}$ and $=_{m}$ (which define $>_{m},<_{m},\leq_{m}$). A man has strict
preferences if $=_{m}$ is plain equality, i.e. he is not indifferent between any
two people. Analogously, each woman $w$ has preferences $P(w)$ over all
persons in $M\cup\\{w\\}$. Person $p_{1}$ is acceptable to $p$ if
$p_{1}>_{p}p$.
These preferences can be represented as ordered lists as shown in Example 1.
###### Definition 8 (marriage market).
A marriage market over men $M$ and women $W$ is defined by the triple
$(M,W,P)$, where $P$ is the set of preference lists,
$P=\left\\{P\left(m_{1}\right),\dots,P\left(m_{n}\right),P\left(w_{1}\right),\dots,P\left(w_{p}\right)\right\\}$.
A matching $\mu$ is valid if each person is matched to exactly one partner
from the opposite sex or themself.
###### Definition 9 (valid matching).
A matching on the marriage market $(M,W,P)$ is a correspondence $\mu:M\cup
W\rightarrow M\cup W$ such that:
* •
$\mu(m)\in W\cup\\{m\\},$
* •
$\mu(w)\in M\cup\\{w\\},$
* •
$\mu(m)=w\iff m=\mu(w).$
While there exist many valid matchings, a matching should not quickly fall
apart because people find better partners and disregard it. This can be
ensured if the matching is stable. A matching is stable if there
does not exist a man $m$ and woman $w$ such that $(m,w)$ strictly prefer each
other to the partners they are currently matched with. In addition, each
person prefers to stay single rather than match with some unacceptable
partner. This leads to the following definition.
###### Definition 10 (stability).
A matching $\mu$ is unstable if there exists a pair $(m,w)\in(M\times
W)\cup\\{(m,m)\mid m\in M\\}\cup\\{(w,w)\mid w\in W\\}$ such that
$w>_{m}\mu(m)$ and $m>_{w}\mu(w)$. $(m,w)$ is called a blocking pair of
$\mu$ (the case $m=w$ is also known as individual rationality in the
literature).
Stability implies that it is enough to list all acceptable partners up to the
position of the person themself. A person will always prefer to match to
themself rather than match to anyone coming afterwards in their preferences.
At this point, let us consider an example.
###### Example 1.
Consider the market with men $M=\\{m_{1},m_{2}\\}$ and women
$W=\\{w_{1},w_{2},w_{3}\\}$ and preferences $P$ (expressed as a list in the
order of decreasing preference):
$\displaystyle P(m_{1})$ $\displaystyle=\\{w_{1},w_{2},w_{3},m_{1}\\},\quad
P(m_{2})=\\{w_{2},w_{1},m_{2}\\},$ $\displaystyle P(w_{1})$
$\displaystyle=\\{m_{2},m_{1},w_{1}\\},\quad
P(w_{2})=\\{m_{1},m_{2},w_{2}\\},\quad P(w_{3})=\\{m_{1},w_{3}\\}.$
We could equivalently write $P(m_{2})=\\{w_{2},w_{1},m_{2},w_{3}\\}$ with
woman $w_{3}$ unacceptable to $m_{2}$, but this does not affect the set of
stable matchings because $m_{2}$ will always prefer himself to $w_{3}$. One
can verify that the matching
$\mu=\\{(m_{1},w_{2}),(m_{2},w_{1}),(w_{3},w_{3})\\}$ is stable with woman
$w_{3}$ matched to herself. Another stable matching is
$\mu=\\{(m_{1},w_{1}),(m_{2},w_{2}),(w_{3},w_{3})\\}$. Indifferent preferences
are represented by square brackets:
$\displaystyle P(m_{1})=\\{[w_{1},w_{2}],w_{3},[w_{4},w_{5}],m_{1}\\}.$
It means that
$w_{1}=_{m}w_{2}>_{m}w_{3}>_{m}w_{4}=_{m}w_{5}>_{m}m_{1}>_{m}\textrm{anyone
else}$. A person can never be indifferent between themself and anyone else,
i.e. $[w_{5},m_{1}]$ is not allowed in the preferences of $m_{1}$.
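Stability is easy to check mechanically. A minimal sketch for strict preferences, using the representation of Example 1 (ordered lists ending with the person themself; people absent from a list are unacceptable):

```python
def is_stable(matching, prefs, men, women):
    """Check Definition 10 for strict preferences.  `matching` maps every
    person to their partner (self-matched people map to themselves)."""
    def prefers(p, a, b):  # does p strictly prefer a to b?
        lst = prefs[p]
        ia = lst.index(a) if a in lst else len(lst)
        ib = lst.index(b) if b in lst else len(lst)
        return ia < ib
    # individual rationality: nobody is matched to an unacceptable partner
    for p in men | women:
        if matching[p] != p and prefers(p, p, matching[p]):
            return False
    # no classical blocking pair (m, w)
    for m in men:
        for w in women:
            if prefers(m, w, matching[m]) and prefers(w, m, matching[w]):
                return False
    return True
```

On Example 1, both matchings listed above pass this check, while e.g. matching $m_{1}$ to $w_{3}$ is blocked by $(m_{1},w_{2})$.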
Though desirable, it is not obvious whether a stable matching can be found
in general. The celebrated Gale-Shapley algorithm (GS algorithm), presented in
[Gale and Shapley, 1962], shows by construction that a stable matching exists
in all marriage markets. The pseudocode of the GS algorithm is provided in
Algorithm 2. The GS algorithm is typically presented by letting all men propose
simultaneously; the version presented here makes the generalization in Section
3 more straightforward.
Data: Marriage market $(M,W,P)$
Result: Matching $\mu_{M}$
$\mu\leftarrow\\{(w,w)\mid w\in W\\}$ // match every woman to herself
while _there is an unmatched man $m$_ do
$w\leftarrow\textsc{offerNext}(P,m)$ // partner: woman or man himself
if _$w=m$_ then
$\mu\leftarrow\mu\cup\\{(m,m)\\}$ // man proposed to himself and accepts
else if _$m >_{w}\mu(w)$_ then
$m^{\prime}\leftarrow\mu(w)$ // $w$'s current partner: a man or $w$ herself
$\mu\leftarrow\mu\setminus\\{(m^{\prime},w)\\}$ // unmatch $(m^{\prime},w)$
$\mu\leftarrow\mu\cup\\{(m,w)\\}$ // match $(m,w)$
end if
end while
return _$\mu$_
Algorithm 2 Deferred Acceptance Algorithm or Gale-Shapley Algorithm
The GS algorithm works by letting men propose to women and women conditionally
accept unless they get an offer from a better man later on. It starts with all
men unmatched and all women matched to themselves. As long as a man is
unmatched, consider any unmatched man. He proposes to his next most preferred
woman he has not proposed to already. The function that does this in Algorithm
2 is offerNext. In case of indifferent preferences, a man arbitrarily picks
any of the equally preferred women. If a man has proposed to all of his
acceptable women, he proposes to himself instead (he accepts and remains
single). If the woman prefers the man to her current partner, she disengages
from her old partner (a man or herself), leaving her old partner unmatched
again. She engages/matches with this new man. The algorithm stops once all men
are matched (with a woman or themselves). The matched men and women are
married. The algorithm is also termed _deferred acceptance algorithm_ since a
woman confirms her engagement only once the algorithm terminates and may break
up for a better man any time before. The algorithm does not specify the order
in which free men are chosen or how a man decides between indifferent women.
When preferences are strict, the returned matching is always the same.
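A minimal Python sketch of Algorithm 2 (preferences are given as ordered lists with any ties already broken; each man's own name in his list marks the "stay single" cutoff; the dict representation is ours):

```python
from collections import deque
from math import inf

def gale_shapley(men_prefs, women_prefs):
    """Men-proposing deferred acceptance (Algorithm 2).
    Returns the engagement map woman -> man; unmatched women are absent."""
    # rank[w][p]: position of p in w's list (lower is better)
    rank = {w: {p: i for i, p in enumerate(lst)} for w, lst in women_prefs.items()}
    next_i = {m: 0 for m in men_prefs}   # index of each man's next proposal
    engaged = {}                         # woman -> man
    free = deque(men_prefs)
    while free:
        m = free.popleft()
        w = men_prefs[m][next_i[m]]
        next_i[m] += 1
        if w == m:                       # exhausted all acceptable women:
            continue                     # m accepts himself and stays single
        current = engaged.get(w, w)      # initially each woman is matched to herself
        if rank[w].get(m, inf) < rank[w].get(current, inf):
            if current != w:
                free.append(current)     # displaced man proposes again later
            engaged[w] = m
        else:
            free.append(m)               # rejected; will try his next choice
    return engaged
```

On Example 1 this returns the man-optimal stable matching $\\{(m_{1},w_{1}),(m_{2},w_{2})\\}$ with $w_{3}$ single.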
###### Theorem 11 (Gale and Shapley).
The GS algorithm terminates and returns a valid and stable matching.
###### Proof.
The algorithm terminates because the preference lists of men are finite and a
man always accepts himself. The returned matching is valid because a man is
matched to exactly one partner, himself or a woman.
By contradiction, we prove that the matching is stable. Assume $(m,w)$ blocks
$\mu$. If $m=w$, this means that $m$ is either a man or a woman and matched to
an unacceptable partner in $\mu$. This is impossible because only acceptable
partners are matched in the algorithm. Otherwise, $m$ is a man and $w$ a woman
and $w>_{m}\mu(m)$, $m>_{w}\mu(w)$. Since $w>_{m}\mu(m)$, man $m$ proposed to
$w$ before proposing to $\mu(m)$ and was rejected at some point. A woman only
rejects a man in favor of someone she prefers, and her match never gets worse
over iterations, so $\mu(w)\geq_{w}m$, which contradicts $m>_{w}\mu(w)$. ∎
Assuming that an unmatched man can be identified in $O(1)$, the running time
is $O(|M|\,|W|)$ since each man proposes to each woman at most once. Whilst it is
possible to find an unmatched man in $O(1)$ by storing the free men in a set,
uniformly picking a free man is not $O(1)$. One could obtain a random sample
from a set in $O(\text{\\# free men})$ by passing via a list.
Under strict preferences, it can be shown that the GS matching is optimal in
the sense that each man is matched to the best partner he could get among all
stable matchings. In addition, the matching is woman-pessimal, meaning that
each woman gets the worst partner among all stable matchings.
###### Proposition 12 (Optimality).
Assume strict preferences and let $\mu$ be the matching returned by the GS
algorithm. Then $\mu(m)\geq_{m}\mu^{\prime}(m)$ for each man $m$ and any
stable matching $\mu^{\prime}$. Also, $\mu^{\prime}(w)\geq_{w}\mu(w)$ for each
woman $w$ and any stable matching $\mu^{\prime}$.
The proof is a special case of the proof in Proposition 6. Since optimality
implies uniqueness, the GS algorithm returns a unique matching under strict
preferences, independently of the order in which men propose. By inverting the
roles of men and women, an equivalent version of the GS algorithm lets women
propose to men; the resulting matching is generally different. Under strict
preferences, this matching is woman-optimal and man-pessimal.
## Appendix B College Admission Problem
The marriage market can be generalized to the college admission setting.
Instead of men and women, the market consists of colleges and students. A
student can go to at most one college, but a college can accept more than one
student, up to its capacity. We state more general definitions than in the
previous section; some of them also apply to the extension in Section 3,
where students can have capacities as well (polygamous market).
###### Definition 13 (college admission market).
A college admission market over students $S$ and colleges $C$ is defined by
the quadruple $(S,C,P,Q)$, where:
* •
$Q=\\{q_{c}\mid c\in C\\}\cup\\{q_{s}\mid s\in S\\}$ are the capacities of
colleges and students,
* •
$P$ is the set of preference lists of colleges and students:
$P=\left\\{P\left(c_{1}\right),\dots,P\left(c_{n}\right),P\left(s_{1}\right),\dots,P\left(s_{p}\right)\right\\}.$
We say that the quotas $Q$ satisfy the college admission market assumption if
one side of the market has quotas all equal to one. Without loss of
generality, we call students the side with quotas all equal to one.
The college admission market is the market where only one side has quotas
greater than one, i.e. $q_{s}=1\;\forall s\in S$.
###### Definition 14 (valid matching).
A matching on the college admission market $(S,C,P,Q)$ is a set $\mu\subset
S\times C$ such that:
* •
$|\mu(c)|\leq q_{c}\;\forall c\in C$, where $\mu(c)=\\{s\mid(s,c)\in\mu\\}$,
* •
$|\mu(s)|\leq q_{s}(=1)\;\forall s\in S$, where
$\mu(s)=\\{c\mid(s,c)\in\mu\\}$.
$\mu(s)$ and $\mu(c)$ equivalently characterize the matching via the
property $s\in\mu(c)\iff c\in\mu(s)$. If $|\mu(c)|<q_{c}$, we regard $c$ as
matched to itself $q_{c}-|\mu(c)|$ times and fill $\mu(c)$ with copies of $c$
up to size $q_{c}$; likewise for students.
We give an example of a college admission market.
###### Example 2.
Consider the college admission market with students
$S=\\{s_{1},s_{2},s_{3}\\}$ and colleges $C=\\{c_{1},c_{2}\\}$ with capacities
$2$ and $3$. Preferences for students over colleges are expressed as before.
For colleges, they express their preferences over groups of students of size
less than or equal to their capacity. For some preferences, a valid matching could
be $\mu=\\{(c_{1},\\{s_{1},s_{2}\\}),(c_{2},\\{\\}),(s_{3},s_{3})\\}$. This
can be equivalently written by filling spots to capacity,
$\mu=\\{(c_{1},\\{s_{1},s_{2}\\}),(c_{2},\\{c_{2},c_{2},c_{2}\\}),(s_{3},s_{3})\\}$,
or by listing the set $\mu=\\{(c_{1},s_{1}),(c_{1},s_{2})\\}$.
We define preferences between a single person and a set of persons as follows.
###### Definition 15.
Given a set of persons $K$, we say that $p_{1}>_{p}K$ if there exists
$p_{2}\in K$ such that $p_{1}>_{p}p_{2}$.
The stability definition carries over and we restate it here for clarity. It
assumes that $\mu(s)$ and $\mu(c)$ are filled up to their capacity, as
described in Definition 14.
###### Definition 16 (stability).
A matching $\mu$ is unstable if there exists a pair $(s,c)\in(S\times
C)\cup\\{(s,s)\mid s\in S\\}\cup\\{(c,c)\mid c\in C\\}$ such that
$s>_{c}\mu(c)$ and $c>_{s}\mu(s)$, i.e. there exist
$\tau_{c}\in\mu(c),\tau_{s}\in\mu(s)$ such that $s>_{c}\tau_{c}$ and
$c>_{s}\tau_{s}$. In this case, $(s,c)$ is called a blocking pair of $\mu$.
###### Theorem 17.
The college admission market admits a valid and stable matching.
###### Proof (sketch).
It is possible to find a stable matching by relying on the GS algorithm. This
is illustrated in Algorithm 3. It works by mapping the college admission
market to an extended marriage market, where college $c$ is replicated
according to its capacity to $c_{1},\dots,c_{q_{c}}$, each with the same
preferences over students as in the original market (with $c$ replaced by
$c_{i}$). Students equally prefer any of the replicated colleges, i.e. any
occurrence of $c$ is replaced by the indifferent $[c_{1},\dots,c_{q_{c}}]$,
see Example 3. It is easy to show that a matching is stable on the college
admission market if and only if it is stable on the extended marriage market.
Therefore, the GS algorithm can be used to find a stable matching on the
extended market and then map it back. ∎
Data: Market $(S,C,P,Q)$, quotas $Q$ for one side
Result: Matching $\mu$
$\tilde{S},\tilde{C},\tilde{P},\textrm{mapping}\leftarrow\textsc{mapToExtendedMarket}(S,C,P,Q)$
$\tilde{\mu}\leftarrow\textsc{Gale-Shapley}(\tilde{S},\tilde{C},\tilde{P})$
$\mu\leftarrow\textsc{mapFromExtendedMarket}(\tilde{\mu},\textrm{mapping})$
return _$\mu$_
Algorithm 3 College Admission Algorithm
In the GS algorithm, either $\tilde{S}$ can propose to $\tilde{C}$ or vice
versa. Traditionally, quotas are such that only one side has quotas greater
than one, but the algorithm continues to work for quotas on both sides if we
define $\mu$ to be a multi-set so that the same pair can match more than once.
In Section 3, we consider the case when both sides have quotas, but the same
pair can match at most once.
###### Example 3 (extended market construction).
Consider students $S=\\{s,\tilde{s}\\}$ with capacities 2, 1 and colleges
$C=\\{c,\tilde{c}\\}$ with capacities 1, 2 and preferences (omitting the
person themself in the preferences):
$\displaystyle P(s)$ $\displaystyle=\\{c,\tilde{c}\\},\quad
P(\tilde{s})=\\{\tilde{c}\\},$ $\displaystyle P(c)$
$\displaystyle=\\{[\tilde{s},s]\\},\quad P(\tilde{c})=\\{s\\}.$
The extended market maps
$s\mapsto[s_{1},s_{2}],\tilde{s}\mapsto[\tilde{s}_{1}],c\mapsto[c_{1}],\tilde{c}\mapsto[\tilde{c}_{1},\tilde{c}_{2}]$.
It is defined over students $S^{ext}=\\{s_{1},s_{2},\tilde{s}_{1}\\}$ and
colleges $C^{ext}=\\{c_{1},\tilde{c}_{1},\tilde{c}_{2}\\}$ with preferences:
$\displaystyle P(s_{1})$
$\displaystyle=\\{[c_{1}],[\tilde{c}_{1},\tilde{c}_{2}]\\},\quad
P(s_{2})=P(s_{1}),\quad P(\tilde{s}_{1})=\\{[\tilde{c}_{1},\tilde{c}_{2}]\\},$
$\displaystyle P(c_{1})$
$\displaystyle=\\{[\tilde{s}_{1},s_{1},s_{2}]\\},\quad
P(\tilde{c}_{1})=\\{[s_{1},s_{2}]\\},\quad P(\tilde{c}_{2})=P(\tilde{c}_{1}),$
More precisely, $P(s_{2})=P(s_{1})$ means
$P(s_{1})=\\{[c_{1}],[\tilde{c}_{1},\tilde{c}_{2}],s_{1}\\}$ and
$P(s_{2})=\\{[c_{1}],[\tilde{c}_{1},\tilde{c}_{2}],s_{2}\\}$. Observe what
happens to the indifferent preferences of $c$. The GS algorithm runs on this
extended market and the obtained matching is transformed back to give a stable
matching on the original market.
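The replication step of the proof can be sketched as follows. Students' indifference among replicas is broken here by replica index (one valid tie-breaking), self-entries are omitted for brevity, and the `c#i` naming is ours:

```python
def extend_market(student_prefs, college_prefs, college_caps):
    """Map a college admission market to a marriage market (Theorem 17):
    each college c becomes replicas c#1..c#q_c with c's preference list,
    and every occurrence of c in a student's list is expanded in order."""
    replicas = {c: [f"{c}#{i}" for i in range(1, college_caps[c] + 1)]
                for c in college_prefs}
    ext_students = {s: [r for c in prefs for r in replicas.get(c, [c])]
                    for s, prefs in student_prefs.items()}
    ext_colleges = {r: list(prefs) for c, prefs in college_prefs.items()
                    for r in replicas[c]}
    return ext_students, ext_colleges
```

The GS algorithm then runs on `(ext_students, ext_colleges)` and the matching is mapped back by dropping the replica indices.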
Optimality carries over to this setting: the college-optimal matching is
student-pessimal and vice versa. This can be proven by passing via the
extended market, and the argument extends easily to many-to-many markets.
When the same pair can match at most once, optimality continues to hold
because the set of stable matchings that match the same pair at most once is
a subset of the set of stable matchings that may match the same pair more
than once, and the former inherits the ordering of the latter.
# Eye: Program Visualizer for CS2
Aman Bansal, Preey Shah, and Sahil Shah (Indian Institute of Technology
Bombay)
###### Abstract.
In recent years, programming has witnessed a shift towards using standard
libraries as a black box. However, there has not been a synchronous
development of tools that can help demonstrate the working of such libraries
in general programs, which poses an impediment to improved learning outcomes
and makes debugging exasperating. In this paper, we present a tool _Eye_ ,
which is an interactive visual interpreter that provides an intuitive
representation of code execution and commonly used data structures in the C++
STL library. Eye provides a comprehensive overview at each stage during run
time including the execution stack and the state of data structures. The
modular implementation allows for extension to other languages and
modification of the graphics as desired.
_Eye_ opens up a gateway for CS2 students to more easily understand myriads of
programs that are available on online programming websites, lowering the
barrier towards self-learning of coding. It expands the scope of visualizing
data structures from standard algorithms to general cases, benefiting both
teachers as well as programmers who face issues in debugging. The interpreting
nature of Eye also provides space for a visualizer that can describe the
execution and not only the current state. We also conduct experiments to
evaluate the efficacy of _Eye_ for debugging and comprehending a completely
new code. Our findings show that it becomes faster and less frustrating to
debug certain problems using this tool, and that the tool makes understanding
new code a much more pleasant experience.
Program Visualization, CS2, Data Structures, Debug, Code Comprehension,
Introductory Programming Education
## 1. Introduction
With the increasing popularity of Computer Science (CS), the number of
students interested in a formal CS education is ever-growing and thus is
growing the need for CS instructors to move from a standard write-on-board
teaching style to a more productive methodology. The advent of COVID-19 and
social distancing has globally amplified this demand by disadvantaging the
conventional methods. Instructors now need to both effectively teach over
video conference and empower students to continue learning on their own
without direct support from the instructor or the TAs. Satisfying this demand
requires access to tools that can facilitate self-learning and allow students
to further expand their skill set by making use of existing programming
resources (such as online programming websites). Appropriate tools that would
help understand standard libraries and the relevant algorithms would go a long
way in furthering this goal.
In this paper, we restrict ourselves to the education of students who satisfy
the following criteria: (i) Have sufficient introductory programming knowledge
(‘CS1’ curriculum equivalent), (ii) Learning data structures and algorithms
(‘CS2’ curriculum equivalent), and (iii) Practicing programming problems to
hone programming skills. We refer to these students as _beginners_ in the
paper. We now enumerate the specific difficulties that challenge these
beginners:
Figure 1. Main Screen of _Eye_. (a) Explanation of code being executed. (b)
Program with code snippet being executed in red. The names of data structure
classes are kept intuitive for better understanding. (c) Variables in the
execution stack, with a different block for each scope. (d) Visualization of
data structures like queue, stack, binary search tree, and array. (e)
Interactive buttons to allow the users to move back and forward.
1. (1)
The primary challenge they face is understanding the working of the well-known
data structures and learning the different algorithms that manipulate them.
While there are standard algorithms that demonstrate the usage of such
structures, there is no tool that visualizes data structures in an arbitrary
program. Henceforth, we refer to this problem as _learning_.
2. (2)
The second challenge they face is that while practicing programming problems,
much to their dismay, an inordinate amount of their time is spent debugging,
which is a very frustrating process (Perscheid et al., 2017; Perkins et al.,
1986). In fact, it is considered by many as the most difficult part of
learning programming (Lahtinen et al., 2005). Compile-time or run time errors
have some helpful message or stack trace which can be used intelligently to
simplify debugging (Hristova et al., 2003; Cosman et al., 2020), but errors
that cause the program to give wrong results without obstructing it are much
harder to find and fix. This problem is accentuated for bugs resulting from an
incorrect understanding or usage of standard libraries and their algorithms.
We call these bugs _logical bugs_ and we refer to this problem as _logical
debugging_.
3. (3)
Moreover, the beginners would inevitably be unable to solve some problems,
engendering a need for explanatory solutions. Most of the time, these
solutions do not exist, and the most readily available option, sometimes the
only one, is to find the working code of the problem setter (or someone else) and
understand it. This is markedly more pronounced for problems on online
programming websites. Here they face their third challenge: understanding
completely unfamiliar code. We refer to this problem as _code comprehension_.
Notably, most of these problems require the usage of standard data structures,
whose solutions are written by other peers in varying programming styles.
Our contributions toward mitigation of these problems are:
1. (1)
We introduce _Eye_ , an interactive pedagogical tool that visualizes a
program’s execution as it runs. It demonstrates properties and usage of data
structures in a general environment, thereby helping in learning, logical
debugging, and code comprehension.
2. (2)
We present two experiments, along with their methodology and results, which
analyze the efficacy of _Eye_ for logical debugging and code comprehension.
The first experiment measures the benefit of using _Eye_ in debugging programs
of which the subjects have some high-level knowledge, including the algorithm,
the role of variables, and the loop invariants. The second experiment measures
the benefit of using _Eye_ in understanding a program on a high-level (such as
time complexity, loop invariants, and role of data structures) given the
problem statement and the program.
We now specify some important properties and argue that they are essential
for the widespread adoption of any visualization tool. All these properties are
satisfied by _Eye_.
1. P1
Completeness: It should support CS2-equivalent courses by visualizing data
structures classes (such as C++ STL). This is required for covering the CS
curriculum of beginners.
2. P2
Flexibility: It should provide flexibility to change the display (beyond CSS
based changes) so that the instructor can modify it without much trouble. This
is needed because instructors would want to focus on different aspects of the
program in different lectures and would require changes like zooming in on a
data structure, adding animations, or using some external display library of
their choice. We believe that the lack of this flexibility can drastically
decrease adoption among different universities. Allowing these changes but
with considerable modifications can also scare away instructors (Sorva et al.,
2013a).
3. P3
Awareness: It should allow the addition of ‘program-aware’ features, including
but not limited to explanatory text and variable scoping. This is needed
because these features can significantly improve the understanding of the
program. Tools that are dependent on execution trace cannot demonstrate such
features.
4. P4
Accessibility: It should be able to run and display the visual elements in a
web browser. This is for ensuring that every user can use it from anywhere
without any hassle of installation or compatibility.
5. P5
Modularity: It should support multiple languages or be modular enough to
support new languages with minimal back-end changes. This ensures that the
tool is customizable for different universities and instructors and requires
that the language-specific part be separate from the remaining implementation.
6. P6
Interactivity: It should be interactive, allowing the beginners to go back and
forth as per their convenience. This is required because interactive tools
make the students learn better than passive tools (Tanenbaum, 2007; Sorva et
al., 2013b).
We show how _Eye_ satisfies these properties in Sections 4 and 3. In Section
5, we present our results concerning the effect of _Eye_ on debugging and code
comprehension. We then conclude in Section 6.
## 2\. Related Work
### 2.1. Learning
The concept of using visualizations to avoid the drawbacks of on-the-board
teaching (see (Orsega et al., 2012)) and improve the understanding of
algorithms is not new. Studies have shown that the amount of time students
spend on interactive visualization tools correlates with their performance
(Guo, 2013; Levy et al., 2003). Therefore, _Eye_ , as a visual tool, has much
potential for improving learning.
In the past few decades, a large number of program visualization tools have
been created. Sorva (Sorva, 2012) and Sorva et al. (Sorva et al., 2013a) give
an overview of many such tools. However, when scrutinized closely, many of
these lack our required properties. We give a comparison with some of the
prominent program visualization tools in Table 1. We note that Jsvee (Table 1)
provides only limited flexibility, which is also discussed in Section 4.
Table 1. Comparison with Existing Visualization Tools
Tools | P1 | P2 | P3 | P4 | P5 | P6
---|---|---|---|---|---|---
Jeliot 3 (Ben-Ari et al., 2011) | ✗ | ✗ | ✗ | ✗ | ✗ | ✓
jGRASP (Hendrix et al., 2004) | ✓ | ✗ | ✗ | ✗ | ✗ | ✗
Jsvee (Sirkia, 2016) | ✗ | ✓ | ✓ | ✓ | ✓ | ✓
Python Tutor (Guo, 2013) | ✗ | ✗ | ✗ | ✓ | ✓ | ✓
UUhistle (Sorva and Sirkiä, 2010) | ✗ | ✗ | ✓ | ✗ | ✗ | ✓
ViLLE (Rajala et al., 2007) | ✗ | ✗ | ✓ | ✗ | ✓ | ✓
_Eye_ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓
### 2.2. Logical Debugging
Ahmadzadeh et al. (Ahmadzadeh et al., 2005) have shown that debugging requires
skills distinct from general programming skills. Yet, these skills are not
explicitly taught by the instructors and the students have to learn debugging
techniques on their own (Michaeli and Romeike, 2019). Industry debuggers
do not help either, as they are meant for professionals and tend to be too
difficult for beginners to use and understand. Furthermore, most do not show
the internals of a library. As a result, they might not catch an incorrect
update to a data structure until the program’s end. We seek to cover this gap
with our tool. In fact, we believe that _Eye_ can be a crucial stepping stone
for beginners aiming to use industry debuggers.
The general opinion in relevant literature is that the use of a tool for
debugging is indeed beneficial. Sorva et al. (Sorva et al., 2013c) comment on
how such tools should be used and integrated with the conventional teaching
methods for better results. Lewis and Gregg (Lewis and Gregg, 2016) discuss
that introducing such tools earlier than later is even more beneficial. One
criticism of debugging tools is that they can help find a bug but cannot help
correct it. However, Fitzgerald et al. (Fitzgerald et al., 2008) report that
beginners face the most difficulty finding the bug and that once found, fixing
it does not take much effort.
We do not expect _Eye_ to be a panacea, but these results are encouraging
enough to suggest that it can help in debugging. To our surprise, a literature
survey for papers on the effectiveness of such a tool for debugging
purposes yielded no results. Therefore, to validate _Eye_ ’s potential, we
devised and conducted our own experiment (see Section 5).
### 2.3. Code Comprehension
This problem has been studied under the domain of _algorithm visualization_
(AV), which is different from _program visualization_ (PV). The goal of AV is
to visually aid the learning of an algorithm and not visualize a general
program (Diehl, 2005). JSAV (Karavirta and Shaffer, 2015) is one such
prominent AV library for data structures. Our code comprehension problem
differs from this by focusing on _Eye_ , a PV tool.
This problem has also been studied indirectly in the debugging literature with
the motivation of analyzing difficulties in debugging someone else’s code and
Mccauley et al. (Mccauley et al., 2008) discuss this in their comprehensive
literature survey on debugging. Gould (Gould, 1975) argues that students first
spend time understanding the given code and only then start finding bugs. This
separation has been further corroborated by other studies (Gould and
Drongowski, 1974; Ahmadzadeh et al., 2005). Moreover, Katz and Anderson (Katz
and Anderson, 1987) provide strong evidence that the skills needed to
understand the system are not necessarily connected to the skills needed to
locate the error. We have discussed the latter in the previous subsection.
Regarding the former, we could not find a result showcasing the efficacy of a
program visualization tool for code comprehension, let alone with data
structure libraries. Therefore, we devise and conduct our own experiment (see
Section 5).
## 3\. Design Overview
(a) The currently executing line is highlighted and explained.
(b) The fourth element of the array is highlighted as it is being accessed.
(c) The execution stack with scope separation as seen for variable length. The
empty region shows a scope where no variable was declared.
(d) Currently active function stack in a different color.
Figure 2. Basic Design Features
(a) Stacks
(b) Hash Table for integers with closed addressing and separate chaining for
collision avoidance. The hash function used is modulo 6.
(c) Queue
Figure 3. Data Structures
In this section, we describe the functionality provided by our tool, including
the essential elements that are common with different tools and some
additional features which we believe are integral for our purpose. Figure 1
shows the window with some fundamental elements on the canvas. The tool
currently supports _C++_ with _STL_ , but thanks to the modular design (see
Section 4), it can be easily extended to support other languages like _Java_
and _Python_. We now enumerate some of the basic elements common among other
tools and detail how we supplement them to make them more descriptive.
* •
An execution stack that shows all the variables and their current values. For
clarity, data structures are shown outside the stack. We divide the stack into
different sections to represent different scopes, as shown in Figure 2(c).
Showing variables with different scopes in different sections of the stack
elucidates the concept of scoping and expedites the detection of bugs due to
_variable shadowing_. Surprisingly, this simple feature was missing from other
tools we studied.
* •
A new execution stack as soon as a new function starts executing. To make
understanding easier, we color the currently active frame with a different
color, as shown in Figure 2(d).
* •
Besides displaying the source code with the current line highlighted, we
provide an explanatory line summarizing the operation being executed (Figure
2(a)). It allows faster debugging: instead of poring over syntactically dense
code, one can read the explanation to check correctness.
Now we enumerate some advanced design features of our tool.
* •
Multiple data structures (STL constructs in C++) such as vector (array), map
(binary search tree), stack, queue, deque, and unordered_map (hash table)
are supported (Figure 3(c)). This lets beginners better grasp the working of
these data structures and verify their state while debugging.
* •
Every access to these data structures is highlighted (including arrays, as
shown in Figures 2(b) and 1). This speeds up the debugging process as students
can skip the code or explanation and directly verify if all accesses (and
assignments) are occurring as expected. For example, indexing errors, which
are quite common among beginners, become noticeable due to this feature. It
also helps in code comprehension where the student can quickly see which value
was read from or written onto a data structure.
* •
We carry this highlighting feature further to visually explain what happens
internally in each data structure on a function call. For example, when an
element is inserted or deleted in a binary search tree (Figure 4(d)). With
this feature, we expect appreciable improvement in understanding when learning
these data structures’ working for the first time.
(a) Root is being read.
(b) Light Blue color signifies that the node is being read.
(c) Blue signifies that the node is being modified. Modifications include a
change in child pointers or a change in value.
(d) Red signifies node deletion.
Figure 4. Operations on Binary Search Tree (BST). (a), (b) and (c) show the
insertion of value 6 into the binary search tree. (d) Deletion of node with
value 6.
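The coloring scheme for BST operations can be viewed as an event stream emitted during execution: every node inspected yields a "read" event (light blue) and every node created or changed yields a "modify" event (blue). A simplified sketch of that idea in Python (our own illustration, not _Eye_ 's actual implementation):

```python
class Node:
    def __init__(self, value):
        self.value, self.left, self.right = value, None, None

def insert(root, value, events):
    """Insert into a BST, recording visualization events along the way."""
    if root is None:
        events.append(("modify", value))  # new node created (blue)
        return Node(value)
    events.append(("read", root.value))   # node inspected (light blue)
    if value < root.value:
        root.left = insert(root.left, value, events)
    elif value > root.value:
        root.right = insert(root.right, value, events)
    return root

events = []
root = None
for v in (5, 3, 8, 6):
    root = insert(root, v, events)
```

Inserting 6 into the tree {5, 3, 8} walks past 5 and 8 (two "read" events) before attaching the new node (one "modify" event), mirroring the color sequence in Figures 4(a)-(c).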
## 4\. Implementation Overview
Source Code → Module #1 → Canonical Code Representation → Module #2 →
Canonical Graphics Representation → Module #3 → Display
Figure 5. Implementation Scheme
In this section, we give an overview of the implementation and show how _Eye_
satisfies all the requirements that we assert are necessary for receiving
wholesale traction.
The implementation is divided into three completely independent modules. At a
high-level, the role of these modules is summarized in Figure 5. Before
delving into these modules, we introduce the intermediate representations
shown in the figure.
Canonical Code Representation (CCR): It represents the source code in a format
that is language independent. It covers all the basic programming constructs
usually taught in CS2, including data structures. The primary benefit of
introducing this representation is that adding support for different languages
requires changes only in module 1, hence ensuring property P5 (Modularity).
The obvious choice for such a representation is an abstract syntax tree (AST).
Canonical Graphics Representation (CGR): It is a representation of the
information that module 3 needs to create the graphics. The reason for
creating this intermediate stage is to ensure property P2 (Flexibility). Tools
such as Jsvee (Sirkia, 2016) visualize the program in parallel with its
execution. This couples their visualization and execution semantics, making
it difficult to change the graphics. Although it is possible
to keep the coupling relaxed, it is natural to expect that an instructor would
not be willing to understand the library’s working to manipulate the
visualization. Another advantage is that an intermediate representation allows
peeking into future frames to decide the display. For instance, if a data
structure is not used in the next 50 frames, the instructor may reasonably
wish to hide it for some frames.
CGR is created in the standard JSON format and includes, among other things,
variables and the state of data structures. It may appear similar to an
execution trace, but our framework allows us to include significantly more
information like the scope of variables and array accesses such as in Figure
2(b).
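For illustration, a single CGR frame could look roughly like the following. The keys below are hypothetical; the paper does not specify _Eye_ 's exact schema:

```python
import json

# Hypothetical CGR frame: key names are illustrative, not Eye's actual schema.
frame = {
    "line": 12,
    "explanation": "Push value 7 onto stack s",
    "stack_frames": [
        {"function": "main", "scopes": [{"vars": {"i": 3, "x": 7}}]},
    ],
    "data_structures": {
        "s": {"type": "stack", "values": [2, 5, 7]},
    },
    "accesses": [{"structure": "s", "kind": "write", "index": 2}],
}
cgr = json.dumps(frame)  # serialized for the display module
```

Because each frame carries scope and access information in addition to raw values, the display module can render scoping sections and access highlights without re-executing the program.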
Module 1: It converts the source code to an abstract syntax tree and is
implemented in Python. We avoid using any external compiler as a black box
because they impose extraneous restrictions and are usually daunting to modify
for future developments. The lexical analysis and parsing of the code were
done using the _rply_ library (Gaynor, 2019). The AST is made up of pre-defined
Python classes for every programming construct. Support for various C++ STL
data structure libraries was added, ensuring property P1 (Completeness).
Module 2: It converts the AST into a JSON object and is also implemented in
Python. Every class in the AST implements an ‘exec’ function which emulates
its execution and generates the information required in CGR, including
program-aware features, hence ensuring property P3 (Awareness). To interpret
and display data structures, we define custom classes with member functions
which also create additional execution information. For example, ‘insert’ in a
binary search tree can display each step of the algorithm if required, as
shown in Figure 4(d). Enthusiastic instructors can modify these behaviors too,
gaining more flexibility.
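The ‘exec’ emulation can be sketched as follows: each AST node class executes itself against an environment and appends one CGR-style frame per step. The class names and frame fields here are our illustration, not the actual implementation:

```python
class Assign:
    """AST node for `name = value`; emulates execution and logs a frame."""

    def __init__(self, name, value):
        self.name, self.value = name, value

    def exec(self, env, frames):
        env[self.name] = self.value
        frames.append({"op": "assign", "var": self.name,
                       "value": self.value, "env": dict(env)})

class Block:
    """A sequence of statements executed in order."""

    def __init__(self, statements):
        self.statements = statements

    def exec(self, env, frames):
        for stmt in self.statements:
            stmt.exec(env, frames)

env, frames = {}, []
Block([Assign("x", 1), Assign("y", 2)]).exec(env, frames)
```

Since each node emits its own frames, a data-structure class can emit one frame per internal step of, say, a BST insert, which is how step-by-step visualizations like Figure 4(d) become possible.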
Currently, the whole CGR JSON object is returned at the end. We can optionally
pass it after every few line executions to reduce display latency in the case
of long execution times or infinite loops.
Module 3: It converts the CGR into an actual visual display. It is implemented
using HTML5, CSS, and JavaScript and can run on supported web browsers,
ensuring property P4 (Accessibility). Buttons are present to go to the next or
previous frame, ensuring property P6 (Interactivity). Visualization can also
be produced locally via _graphics.py_, a basic graphics library for Python
(Zelle, 2010). The current graphics can easily be further modified since our
modular implementation allows the users great flexibility for this purpose.
They can pick colors, add animations, and even use external libraries to help
them build appealing graphics.
## 5\. Experimental Results
We design two experiments to measure the efficacy of _Eye_ in debugging and
code comprehension. We try to answer the following research questions (RQ) via
our experiments:
RQ1(a): Does using _Eye_ for debugging data-structure-based programs
accelerate the debugging process?
RQ1(b): Does using _Eye_ for debugging data-structure-based programs reduce
the frustration usually seen in the debugging process?
RQ2(a): Does using _Eye_ for understanding new code improve code
comprehension in a fixed amount of time?
RQ2(b): Does using _Eye_ for understanding new code lead to better
productivity in terms of time?
We contacted around 60 senior computer science undergraduates from our
university, out of which 20 agreed to participate. Before proceeding with the
experiments, they were given a small demonstration and were asked to
familiarize themselves with the tool. The subjects ran the tool locally and
not on the browser. Each subject participated in two experiments for answering
RQ1(a) and RQ2(a), and an anonymous survey to answer RQ1(b) and RQ2(b).
Due to social distancing, the experiments were conducted online using video
conferencing software. On average, each subject took around one hour to
complete the experiment. All the experiment material, including videos of some
subjects taking the experiment, can be produced upon request.
### 5.1. Experiment 1
We conducted the experiment as follows:
1. (1)
Subjects were given two problem statements (_Prob1_ and _Prob2_) with buggy
implementations of their solutions. The problems were based on data structures
like _stack_ and _queue_ , and involved algorithms taught as part of CS2
curriculum (and hence were known to subjects). There was exactly one logical
bug in both the implementations, and the subjects were asked to fix them. The
time taken by the subjects to debug each program was recorded.
2. (2)
The experiment was counterbalanced with respect to tool usage. Half of the
subjects did _Prob1_ with _Eye_ and _Prob2_ without (Group 1), and the other
half did the opposite (Group 2). Subjects were assigned to these groups
randomly. The problems were always given in the same order. Subjects using
_Eye_ were not allowed to edit or even view the code in any other application,
to ensure that they used _Eye_ to debug.
3. (3)
In a few cases, subjects could not debug the problem and gave up. The time for
such subjects was then set to a default value larger than the time taken by
any successful subject.
4. (4)
Running the tool locally required a library installation, which three subjects
declined to do. Such subjects were allowed to debug both problems without
_Eye_. To partially offset the resulting increase in the number of
without-_Eye_ measurements, one subject was asked to solve both problems with _Eye_.
Group 1 had an average debug time of 1071.25 seconds for _Prob1_ and 1022.90
seconds for _Prob2_ while Group 2 had an average debug time of 778.75 seconds
for _Prob1_ and 518.6 seconds for _Prob2_. Due to the random allocation, Group
2 happened to have subjects with better debugging skills than Group 1 on
average, which the average times confirm: Group 1 took far more time than
Group 2 on each problem. To account for the bias introduced by this
difference in debugging skills, we calculate the percentage of total debug
time the subjects spent on the _Prob1_ (or equivalently _Prob2_). We consider
these percentages to be random variables and test against the null hypothesis
that the variables have the same mean for the two groups. We had to eliminate
the four subjects who did not have alternating tool usage for the two
problems. The average values for this measure is shown in Table 2. The
$p$-value for our data is $0.0578$. These results show that _Eye_ improves
debug time.
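The normalization used above, the percentage of a subject's total debug time spent on _Prob1_, can be reproduced in a few lines of Python. The per-subject times below are made-up placeholders, since the paper reports only group averages:

```python
def percent_on_prob1(times):
    """Fraction of a subject's total debug time spent on Prob1, in percent."""
    t1, t2 = times
    return 100.0 * t1 / (t1 + t2)

# Hypothetical per-subject (Prob1, Prob2) times in seconds.
group1 = [(900, 700), (1200, 1100)]   # used Eye on Prob1
group2 = [(800, 900), (700, 600)]     # used Eye on Prob2

avg1 = sum(percent_on_prob1(t) for t in group1) / len(group1)
avg2 = sum(percent_on_prob1(t) for t in group2) / len(group2)
```

Testing whether these per-group averages differ (the null hypothesis of equal means) is what yields the reported $p$-value; the percentages cancel out absolute skill differences between subjects because each subject serves as their own baseline.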
### 5.2. Experiment 2
We conducted the experiment as follows:
1. (1)
Subjects were divided into two equal-sized groups, randomly and independent of
the previous experiment. One group used _Eye_ for the experiment while the
other had no restrictions.
2. (2)
Both the groups were given a problem statement and a correct implementation of
its solution. The solution was based on the _deque_ data structure of C++ STL
and involved an algorithm new to the subjects.
3. (3)
The subjects were first given 6 minutes to see the visualization (or go
through the code) and try to understand how the algorithm is working. They
were then given a link to a Google form which contained various questions.
They were given 10 minutes to answer the quiz and were allowed to go back to
the visualizer or the code during the quiz.
We use the quiz score as a proxy for understanding. Our null hypothesis for
RQ2(a) was that there would be no considerable difference in the scores of the
two groups. We report the average percentage score for each group in Table 2.
Although the group with _Eye_ performed better, the difference was not
statistically significant ($p=0.446$). Nonetheless, given the biases against
_Eye_ (Section 5.4) and the subjects’ overwhelmingly positive opinion (Section
5.3), we can reasonably expect that consistent use of _Eye_ will show positive
results. On a hopeful note, Levy et al. (Levy et al., 2003) have shown that
performance improvements do manifest when the students become conversant with
a tool.
### 5.3. Anonymous Survey
After the subjects had completed both the experiments, we asked them to fill
an anonymous survey which contained two questions corresponding to RQ1(b) and
RQ2(b). The subjects were advised that this survey is for estimating the
benefits of the tool so they should not bias their answer based on their
particular experience in the experiment. The questions and their responses
were as follows:
1. Q1(b)
Question: Assuming same time spent on debugging with _Eye_ and without _Eye_ ,
how much do you think debugging with _Eye_ can help in reducing frustration?
Options: Ranging from 0% to 100%.
Response: _Eye_ reduces frustration by $61.43\%$ on average.
Conclusion: A visualizer for libraries makes debugging remarkably less
frustrating, thereby holding a student’s interest in programming for longer
and raising enthusiasm for self-learning.
2. Q2(b)
Question: Assuming a time bound on your practice sessions and assuming you
only want to understand someone else’s submission of every problem you
practice, how much can _Eye_ improve your productivity (number of problems
solved)?
Options: Ranging from same to double productivity.
Response: _Eye_ can increase the number of problems solved by a factor of
roughly $1.56$ on average.
Conclusion: It demonstrates that students consider _Eye_ useful for code
comprehension in that it allows faster understanding of someone else’s code,
greatly improving the utility of online programming websites.
Table 2. Summary of the results of both experiments: With _Eye_, there is a reduction in the average fraction of time taken to debug and an increase in the average score, showing improvements in both use cases.
| Exp1 (Avg % time) | Exp2 (Avg % score)
---|---|---
| _Prob1_ | _Prob2_ |
Without _Eye_ | 58.3 | 51.4 | 59.2
With _Eye_ | 48.6 | 41.7 | 60.6
$p$-value | 0.0578 | 0.446
### 5.4. Experimental Biases
* •
Subjects were not allowed to use regular debugging techniques alongside
_Eye_. This potentially hampered their ability and biased results against _Eye_.
* •
Informal discussion with subjects after the experiment confirmed our suspicion
of familiarity bias against _Eye_. Many students primarily focused on the code
(Figure 2(a)). Features like access highlighting (Figure 2(b)) were intended
to reduce dependence on code-reading but were largely ignored.
## 6\. Conclusion
Data structure libraries are widely used for programming in schools,
universities, and industry, and visualization for such libraries is the need
of the hour. In this paper, we presented _Eye_, a tool that offers a visual
display of the inner workings of such libraries, thereby helping in learning,
debugging, and code comprehension. The efficacy of the tool was also tested
through an assessment that showed encouraging results with the positive
responses to the survey reinforcing its utility in practice. Its design and
functionality satisfy the properties that are required for widespread
traction.
We believe that _Eye_ will be extremely useful in universities and online
courses for teaching purposes, and tweaks can be easily made to suit each
course’s requirements. We plan to do a formal study to evaluate its efficacy
in teaching when schools reopen and classes start. In addition, extensive
deployment on online programming websites can be done through integration with
their IDEs (Integrated Development Environment) which requires a change in the
display module. Visualizations for more data structures and libraries in
languages like Python and Java will lead to greater adoption and expand its
use cases to students learning other programming languages as well.
## References
* Ahmadzadeh et al. (2005) Marzieh Ahmadzadeh, Dave Elliman, and Colin Higgins. 2005\. An Analysis of Patterns of Debugging among Novice Computer Science Students. _SIGCSE Bull._ 37, 3 (June 2005), 84–88. https://doi.org/10.1145/1151954.1067472
* Ben-Ari et al. (2011) Mordechai Ben-Ari, Roman Bednarik, Ronit Levy, Gil Ebel, Andrés Moreno, Niko Myller, and Erkki Sutinen. 2011. A decade of research and development on program animation: The Jeliot experience. _J. Vis. Lang. Comput._ 22 (10 2011), 375–384. https://doi.org/10.1016/j.jvlc.2011.04.004
* Cosman et al. (2020) Benjamin Cosman, Madeline Endres, Georgios Sakkas, Leon Medvinsky, Yao-Yuan Yang, Ranjit Jhala, Kamalika Chaudhuri, and Westley Weimer. 2020. PABLO: Helping Novices Debug Python Code Through Data-Driven Fault Localization. In _Proceedings of the 51st ACM Technical Symposium on Computer Science Education_ (Portland, OR, USA) _(SIGCSE ’20)_. Association for Computing Machinery, New York, NY, USA, 1047–1053. https://doi.org/10.1145/3328778.3366860
* Diehl (2005) Stephan Diehl. 2005\. Software visualization. In _27th International Conference on Software Engineering (ICSE 2005), 15-21 May 2005, St. Louis, Missouri, USA_ , Gruia-Catalin Roman, William G. Griswold, and Bashar Nuseibeh (Eds.). Association for Computing Machinery, New York, NY, USA, 718–719. https://doi.org/10.1145/1062455.1062634
* Fitzgerald et al. (2008) Sue Fitzgerald, Gary Lewandowski, Renée McCauley, Laurie Murphy, Beth Simon, Lynda Thomas, and Carol Zander. 2008. Debugging: finding, fixing and flailing, a multi-institutional study of novice debuggers. _Computer Science Education_ 18, 2 (2008), 93–116. https://doi.org/10.1080/08993400802114508 arXiv:https://doi.org/10.1080/08993400802114508
* Gaynor (2019) Alex Gaynor. 2019\. RPLY. https://pypi.org/project/rply/
* Gould (1975) J. A. Gould. 1975\. Some Psychological Evidence on How People Debug Computer Programs. _Int. J. Man Mach. Stud._ 7 (1975), 151–170.
* Gould and Drongowski (1974) John D. Gould and Paul Drongowski. 1974. An Exploratory Study of Computer Program Debugging. _Human Factors_ 16, 3 (1974), 258–277. https://doi.org/10.1177/001872087401600308 arXiv:https://doi.org/10.1177/001872087401600308
# An Analysis Of Protected Health Information Leakage
In Deep-Learning Based De-Identification Algorithms
Salman Seyedi, 1 Li Xiong, 2 Shamim Nemati, 3 Gari D. Clifford 1,4
###### Abstract
The increasing complexity of algorithms for analyzing medical data, including
de-identification tasks, raises the possibility that complex algorithms are
learning not just the general representation of the problem, but specifics of
given individuals within the data. Modern legal frameworks specifically
prohibit the intentional or accidental distribution of patient data, but have
not addressed this potential avenue for leakage of such protected health
information. Modern deep learning algorithms have the highest potential for
such leakage due to the complexity of the models. Recent research in the field
has highlighted such issues in non-medical data, but any analysis is likely to
be data- and algorithm-specific. We therefore chose to analyze a state-of-the-
art free-text de-identification algorithm based on LSTM (Long Short-Term
Memory) networks and its potential to encode any individual in the training
set. Using the i2b2 Challenge Data, we trained and then analyzed the model to
assess whether the output of the LSTM, before the compression layer of the
classifier, could be used to estimate the membership of the training data.
Furthermore, we attacked the model with several techniques, including the
membership inference attack. The results indicate that the attacks could not
determine whether members of the training data were distinguishable from non-
members based on the model output. This indicates that the model does not
provide any strong evidence for the identification of individuals in the
training data set, and there is as yet no empirical evidence that it is unsafe
to distribute the model for general use.
An electronic health record, or electronic medical record (EMR), includes a
wealth of information in the form of both physiological data and structured or
free text. The latter is often replete with protected health information (PHI)
and personally identifiable information (PII). As a result, much attention has
been paid to the notion of secure processing and sharing. The integrity of
patients' personal information and the related privacy are governed in the US
by the Health Insurance Portability and Accountability Act of 1996 (HIPAA).
Although intended to make medical data portable, it has had a much larger
effect in ensuring privacy through de-identification of shared data. The HIPAA
privacy rules provide two avenues that one must follow to meet the de-
identification standards: 'expert determination' and 'safe harbor'. The safe
harbor de-identification method requires the removal of PHI, a list of 18
categories of identifiers. The de-identification of a particular document can
be performed by different means, but given the enormous and ever-growing
volume of EMRs, any pragmatic approach must rely heavily on automation.
Because of this, the automation of the de-identification process has been of
great interest, and different algorithms have been developed over the years to
facilitate de-identification by automating the finding and labeling of PHI
(Neamatullah et al. 2008; Yang and Garibaldi 2015; Dernoncourt et al. 2016).
In recent years, due to improvements in deep learning architectures and
recurrent neural networks (RNNs), not only have there been substantial gains
in metrics such as accuracy, recall, and F1-score for de-identification
systems, but the flexibility of these systems also allows entities to apply
the same algorithm to different corpora of EMRs. This means that an entity can
train the system on one corpus of records and then share the trained system to
de-identify another body of reports, or even share it with other entities.
This property, also known as transfer learning, while very useful and
promising, raises the concern of leakage of sensitive information through
these sharing processes (Melis et al. 2019).
One of the most widely used RNN architectures is the Long Short-Term Memory
(LSTM), which is applied in many deep-learning based de-identification systems
(Lample et al. 2016; Kim, Heider, and Meystre 2018; Dernoncourt et al. 2016;
Shickel et al. 2017).
One of these successful algorithms is NeuroNER (Dernoncourt et al. 2016). The
LSTM is very capable of processing the long-range dependencies that are an
important characteristic of text, making it a very powerful tool. However,
since it has a large number of parameters (for instance, more than 40,000 for
a single LSTM with 100 units), there is a concern about memorization of data
and leakage of sensitive information through the trained model due to its
complexity. All the extensive and expensive effort behind de-identification
systems is meant to protect the privacy of individuals; a leak of information
through the parameters of the de-identification model could jeopardize that
privacy and nullify the main goal. One recent work investigating data
memorization in neural networks is (Carlini et al. 2019) on generative
sequence models, a specific class of neural networks built on LSTMs. They
study these particular LSTMs and show how memorization or leakage can happen
even when the data is rare. Moreover, they suggest a quantitative metric for
measuring memorization of rare or unique sequences in the data. While they
provide a very elegant approach, their method is not directly applicable to
neural networks other than generative ones.
In this article, we focus on the unintended memorization or leakage of deep-
learning de-identifiers and, more specifically, the NeuroNER system. The
reason for selecting this system is that it has been used on a few publicly
accessible data sets with different structures (the i2b2 2014 challenge data
set (Stubbs and Uzuner 2015) and the MIMIC II de-identification data set
(Neamatullah et al. 2008)), and the same system, with essentially the same
structure and hyperparameters, has been used to achieve state-of-the-art
performance (Dernoncourt, Lee, and Szolovits 2017; Dernoncourt et al. 2016).
This approach provides a flexible framework and has the potential to be
leveraged in transfer learning paradigms. As such, we are concerned that this
type of approach can encode the identities of the individuals in the training
data into the weights of the neural network. In this work we explore this
concept. First we provide a statistical analysis and perform cut-off attacks
to determine whether this state-of-the-art algorithm has created unnecessary
exposure for the data subjects. Moreover, we performed the membership
inference attack (Shokri et al. 2017), which is the state-of-the-art re-
identification attack relevant to our approach.
## Materials and Methods
### NeuroNER
In this subsection, a brief description of the NeuroNER package as well as
specifics of how the system is trained are provided. More details can be found
in (Dernoncourt et al. 2016; Dernoncourt, Lee, and Szolovits 2017). The
statistical and inference attack methods are discussed afterward.
Briefly, the structure of the NeuroNER system, as can be seen in figure 1, is
composed of three layers:
* •
Token embedding
* •
LSTM based label prediction
* •
CRF (conditional random field)
Figure 1: In this chart the circles represent character (Ch) embedding (Emb)
and token level embedding (Embed), the FFNN indicates feed forward neural
network. The attaching points are at the output level of the second layer
after a feed-forward neural network and after the CRF in the third layer as
indicated.
In the first layer, in parallel with a tokenizer and token embedding, there is
an LSTM network for character-level embedding. To avoid any memorization or
leakage, the token embedding was fixed (no learning) and loaded from models
pre-trained on public data (as suggested in the original release code). The
output of the first layer, which is a concatenation of the word- and
character-level embeddings (illustrated as 'Vector' in figure 1), is sent to
the bidirectional LSTM of the second layer. The output of the LSTM is then
sent to a feed-forward neural network, which outputs the probability vectors
$P$. In the third layer, these probability vectors are the input of the CRF.
The CRF layer can be turned on or off as an option.
The first step in analyzing a text is to break it down into its components. In
a sense, one can think of tokenization as separating each word in a text and
calling it a token. There are different ready-to-use tokenizers. In this work,
we use 'Spacy', a free and open-source library for advanced natural language
processing (NLP).
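As a rough illustration of what a tokenizer does (spaCy's rules are far more
sophisticated, handling language-specific punctuation, contractions, and
special cases), a minimal regex-based sketch:

```python
import re

def simple_tokenize(text):
    # Split into runs of word characters and single punctuation marks.
    # A real tokenizer such as spaCy applies language-specific rules;
    # this pattern is only a toy approximation for illustration.
    return re.findall(r"\w+|[^\w\s]", text)

tokens = simple_tokenize("Mr. Smith was admitted on 2014-01-02.")
print(tokens)
# ['Mr', '.', 'Smith', 'was', 'admitted', 'on', '2014', '-', '01', '-', '02', '.']
```

Note that even this crude splitter keeps punctuation as separate tokens, which
matters downstream since capitalization and punctuation around names are
preserved in the perturbation experiments described later.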
There are many different methods available to convert tokens into numbers. One
simple approach is to represent each word with a one-hot encoding: given a
vector the size of the vocabulary, the component corresponding to the word has
value one and all other components are zero (hence the name one-hot). In this
approach, all words are perpendicular to one another and the size of the space
grows with the number of words. A more advanced approach is to use a lower-
dimensional (100, for instance) denser space in which words are represented by
vectors that are not all perpendicular, and related words have smaller angles
between them. Such a representation is built by an unsupervised learning
algorithm that leverages a large corpus of data, such as Wikipedia, and learns
the relevance of words in texts (mostly from their co-occurrence). In this
paper, for the token embedding, we used a pre-trained embedding of this latter
type known as Global Vectors for Word Representation (GloVe) (Pennington,
Socher, and Manning 2014).
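A toy contrast between the two representations (the vocabulary and the dense
vectors below are invented for illustration, not GloVe values):

```python
import math

vocab = ["fever", "cough", "aspirin"]

def one_hot(word):
    # One component per vocabulary word; exactly one component is set.
    return [1 if w == word else 0 for w in vocab]

def cosine(u, v):
    # Cosine of the angle between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# Distinct one-hot vectors are always mutually perpendicular (cosine 0).
print(cosine(one_hot("fever"), one_hot("cough")))   # 0.0

# In a dense embedding (values made up here), related words
# can have a small angle, i.e. a cosine close to 1.
dense = {"fever": (0.9, 0.1), "cough": (0.8, 0.3)}
print(cosine(dense["fever"], dense["cough"]))
```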
On the character level, an LSTM with the dimension of 25 is used to embed
tokens into a dense-space using character level training on each token. The
results of the GloVe embedding and character embedding are concatenated as an
input for the next layer.
In NeuroNER, a bi-directional LSTM is used for label prediction in the second
layer.
The LSTM is a recurrent neural network architecture that is very effective at
learning from sequence data such as text. In an LSTM network, in addition to
input and output, each unit has three gates (input, output, and forget),
although variations exist. With input dimension $m$ and recurrent dimension
$n$, the weight matrices of the input and recurrent connections have
dimensions $D_{W}=m*n$ and $D_{U}=n*n$, respectively. In addition to the
gates, each unit also has a state, and each of these has a bias ($b$), so for
the forward pass one has:
$i^{t}=\sigma(W_{i}x^{t}+U_{i}h^{t-1}+b_{i})$
$f^{t}=\sigma(W_{f}x^{t}+U_{f}h^{t-1}+b_{f})$
$o^{t}=\sigma(W_{o}x^{t}+U_{o}h^{t-1}+b_{o})$
$\tilde{c}^{t}=\tanh(W_{c}x^{t}+U_{c}h^{t-1}+b_{c})$
$c^{t}=f^{t}\circ c^{t-1}+i^{t}\circ\tilde{c}^{t}$
$h^{t}=o^{t}\circ\tanh(c^{t})$
where $x^{t}\in\mathbb{R}^{m}$ is the input vector of the unit; $i^{t}$,
$f^{t}$, and $o^{t}$ are the input, forget, and output activation vectors;
$\sigma(x)=\frac{1}{1+\exp(-x)}$; and ($\circ$) denotes the element-wise
product. The number of parameters can then be calculated as
$4(n*m+n^{2}+n)$. In a uni-directional LSTM, the sequence is fed in with
increasing $t$; one can make a similar pass with the sequence processed in
decreasing order of $t$. In the former case, the network learns from what
comes before an element, while in the latter it learns from what comes after
it. A bi-directional LSTM uses two LSTMs to learn from both sides of each
element. This is how the LSTM learns the sequential dependence between the
words.
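The parameter count $4(n*m+n^{2}+n)$ can be checked directly; with $n=100$
units it exceeds 40,000 for any input dimension, consistent with the figure
quoted earlier (the input dimension of 25 below is just a sample value):

```python
def lstm_param_count(m, n):
    # Four gate/state blocks (input, forget, output, candidate state),
    # each with an n-by-m input matrix, an n-by-n recurrent matrix,
    # and an n-dimensional bias: 4 * (n*m + n^2 + n).
    return 4 * (n * m + n * n + n)

print(lstm_param_count(m=25, n=100))   # 50400
```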
The conditional random field (CRF) is a powerful statistical method in machine
learning. The 'conditional' part refers to the fact that a CRF is a family of
structured conditional distributions. For sequence labeling, which is of
interest in this work, the linear-chain CRF, which has a linear structure, is
the relevant choice. In this paper the linear-chain CRF is called CRF for
short.
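Decoding a linear-chain CRF reduces to finding the highest-scoring label
sequence via the Viterbi algorithm; a minimal sketch over made-up emission and
transition scores (the scores and the two-label scheme here are illustrative,
not NeuroNER's):

```python
def viterbi(emissions, transitions):
    # emissions[t][y]: score of label y at position t.
    # transitions[p][y]: score of label p followed by label y.
    n = len(emissions[0])
    score = list(emissions[0])
    backptr = []
    for t in range(1, len(emissions)):
        new_score, ptrs = [], []
        for y in range(n):
            # Best previous label for ending in y at position t.
            p = max(range(n), key=lambda p: score[p] + transitions[p][y])
            ptrs.append(p)
            new_score.append(score[p] + transitions[p][y] + emissions[t][y])
        backptr.append(ptrs)
        score = new_score
    # Trace the best path back from the highest final score.
    best = max(range(n), key=lambda y: score[y])
    path = [best]
    for ptrs in reversed(backptr):
        best = ptrs[best]
        path.append(best)
    return list(reversed(path))

# Labels 0 = "O" (other), 1 = "NAME"; two tokens.
emissions = [[2.0, 0.5], [0.2, 1.5]]
transitions = [[0.1, 0.0], [0.0, 0.3]]
print(viterbi(emissions, transitions))   # [0, 1]
```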
### Data sources
The i2b2 challenge data set has been used in this work. The data contains a
set of over 1300 patient records, which is the largest publicly available data
set for de-identification. These reports contain (de-identified) protected
health information (PHI) and it is divided into three sets; namely, training,
validation, and test set. The training set contains more than five hundred
reports.
While the system is trained to label all different PHI types, the patient name
is arguably most sensitive part and there are more than seven hundred patient
surname as well as more than five hundred patient given-name occurrences in
the training data where well over hundreds of them are unique. Here, the focus
is on the patient names.
### Statistical analysis and attack models
To investigate the sharing/leaking of sensitive data, we assume the adversary
has access to almost complete information. Here the almost complete
information simply means that the trained model as well as all the reports are
available to the adversary except that the name which is altered. The goal
here is to investigate the differences in $P$ from data sets that differ in
only one part, names for instance. If the outputs are distinguishable, it
suggests there may be unintentional memorization or leakage of the last names
in the training data. More specifically, new sets of data were produced from
the original one. For the first data set type 1 or inside, names in the
original data set were replaced with other names from the corpus. The next
set, data set type 2 or outside, was constructed by the same procedure but the
original names were replaced with the names from a dictionary of names that
did not appear in the original set. Each of the inside and outside data sets
was divided to three subsets, the one that only surname was replaced (SN), the
one that only given name was replaced (GN) and the one that both surname and
given name were replaced (GN $\&$ SN). The surname dictionary contains above
80000 names, while male and female dictionaries contain about 3000 and 5000
names, respectively. In order to make the comparison with the original
training data, we also preserved punctuation and capitalization of the set and
the names. For statistical attack, the model was trained on original data set
and then used original, data set 1 (inside) and data set 2 (outside) for
testing and compared their relative parameters (the output of softmax, to be
precise).
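A minimal sketch of the replacement procedure (the report text, name list, and
case-preservation rule below are invented for illustration; the actual
pipeline also preserves punctuation context):

```python
import random

def replace_surname(text, old, candidates, rng):
    # Pick a replacement and mimic the capitalization of the original
    # so the perturbed report stays comparable with the unperturbed one.
    new = rng.choice(candidates)
    new = new.upper() if old.isupper() else new.capitalize()
    return text.replace(old, new)

rng = random.Random(0)
report = "Patient SMITH was seen today. SMITH reports no pain."
outside_dictionary = ["walker", "nguyen", "okafor"]  # stand-in name list
print(replace_surname(report, "SMITH", outside_dictionary, rng))
```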
The non-parametric Kolmogorov-Smirnov (KS) test (Razali, Wah et al. 2011) was
used to determine whether there is any difference between a data set and a
reference probability distribution, or between the distributions of two data
sets (the two-sample KS test). This test yields the statistic $D$, which can
then be used in numerical or counting algorithms to estimate or calculate the
p-value. $D$ is given by
$D_{m,n}=\underset{x}{\max}|S_{m}(x)-S_{n}(x)|,$
the maximum absolute difference between the two distributions built from $m$
and $n$ samples. The distributions are normalized and produced empirically
from the samples, and so are sometimes called empirical distribution
functions.
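The formula above can be implemented directly; a simple (quadratic-time)
sketch that evaluates the gap between the two empirical distribution functions
at every observed point:

```python
def ks_statistic(sample_a, sample_b):
    # Empirical CDF of a sorted sample evaluated at point x.
    def ecdf(sorted_sample, x):
        count = sum(1 for v in sorted_sample if v <= x)
        return count / len(sorted_sample)

    a, b = sorted(sample_a), sorted(sample_b)
    # The maximum gap between the two step functions is attained
    # at one of the observed points.
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in a + b)

# Identical samples give D = 0; fully separated ones give D = 1.
print(ks_statistic([0.1, 0.2, 0.3], [0.1, 0.2, 0.3]))   # 0.0
print(ks_statistic([0.0, 0.0], [1.0, 1.0]))             # 1.0
```

In practice one would use `scipy.stats.ks_2samp`, which also returns the
p-value; this sketch only reproduces the $D$ statistic.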
Different re-identification attacks were attempted to stress-test the
vulnerability of the model. They can be categorized into cut-off attacks and
the membership inference attack. For the cut-off attacks, the goal was to
identify a set of limits on the different probabilities that could be used to
re-identify the participants. In the naïve cut-off attack, the goal was to
find a limit that could differentiate between the original data, data set 1
(inside), and data set 2 (outside). In the brute-force cut-off attack, the
goal was to feed in all possible names and see whether the original name could
be re-identified by a limit. For this purpose, another data preparation step
was performed: three reports in which the patient name appears six times or
more in the body were selected at random, the sentences containing the name
(surname) were isolated, and $P$ was calculated for them with the surname
replaced by each of the more than eighty thousand names in the dictionary.
For the membership inference attack, 12 different samples of type data set 2
(outside) were produced (shadow data sets (Shokri et al. 2017)), 10 of which
were used to train 10 different shadow models. As figure 2 illustrates, shadow
model 1 is trained on shadow data 1. Then shadow data 1, along with a
randomized mix of reports from shadow data 2 through shadow data 10 (data set
-1), is fed as test input to trained model 1 (the mixed data has the same
number of reports as shadow data 1). The $P$s are then extracted, with label 1
for shadow data 1 and label -1 for the mix of shadow data 2 through shadow
data 10. The same process was applied to all 10 shadow models. These $P$s and
labels were then used to train a feed-forward network to label inside names
(1) and outside names (-1). The attack accuracy is used as the metric. Shadow
data 11 was used as the validation set and shadow model 0 was the target
model. The membership inference attack was conducted by using the trained
feed-forward network to differentiate and re-identify the names inside shadow
data 0, using the $P$s produced by testing shadow model 0 on all possible
names (brute force) for a few reports with the most repetitions of a name.
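The assembly of the attack classifier's training set can be sketched as
follows (the probability vectors here are random stand-ins, and the set sizes
are toy values; only the member/non-member pairing logic mirrors the
description above):

```python
import random

def build_attack_dataset(shadow_outputs, rng):
    # shadow_outputs[k] holds the P vectors that shadow model k
    # produced for the reports in its own training set.
    examples = []
    for k, member_ps in enumerate(shadow_outputs):
        # Members of shadow data k get label +1.
        examples += [(p, 1) for p in member_ps]
        # An equally sized random mix from the other shadow sets gets -1.
        others = [p for j, ps in enumerate(shadow_outputs)
                  if j != k for p in ps]
        examples += [(p, -1) for p in rng.sample(others, len(member_ps))]
    return examples

rng = random.Random(0)
shadow_outputs = [[[rng.random() for _ in range(3)] for _ in range(4)]
                  for _ in range(10)]
data = build_attack_dataset(shadow_outputs, rng)
print(len(data))   # 10 shadow models * (4 members + 4 non-members) = 80
```

By construction the attack training data is balanced, matching the balanced
inference-attack training data noted in the results.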
Figure 2: In this chart, the ellipse represents data and the rectangle
indicates a neural network model. The (-k) denotes the analogous data in which
the names are replaced with other names appearing in all sets but set k.
Labels indicate whether the report is in the training data or not, and the
parameters are obtained from each model.

Figure 3: The graph shows the histogram frequency of probabilities for the
case without CRF. Cyan represents the probabilities for the original/unchanged
data. Orange represents the probabilities of surnames being labeled as names
when altered randomly with other surnames from the corpus. Green gives the
probabilities of surnames being labeled as names when altered randomly with
surnames from the outside dictionary (excluding in-corpus names). The inset
zooms in on the higher probabilities; the color coding is the same as in the
main graph.
## Parameters and Results
The software was trained on the data set with the following hyper-parameter
values: character embedding dimension and LSTM: 25; token embedding dimension
and LSTM: 100; optimizer: SGD; learning rate: 0.01; dropout rate: 0.5;
tokenizer: Spacy.
The precision achieved after 95 epochs is 98.43$\%$ on the test set and
99.96$\%$ on the training set. Dropping the third layer (CRF), the precision
achieved after 88 epochs is 97.39$\%$ and 98.60$\%$ on the test and training
data, respectively. Note that the goal here is to investigate the risk of
leaked information and not necessarily to achieve the best performance on the
test set. Nonetheless, the high values of precision, recall, and F1 indicate
that the system has been trained adequately (close to the state of the art)
and reflects real-world use of the algorithm. Also, the suggested precautions,
such as freezing the token embedding, were implemented to maximize the
protection afforded by the model.
For the inference attack, a feed-forward network with one hidden layer of 64
ReLU units followed by two softmax units provided the highest accuracy
($\sim$0.75). Note that the inference attack training data are balanced.
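The attack network's forward pass is simple enough to sketch in full (the
weights below are random placeholders, not the trained attack model, and the
input dimension of 3 is a toy value):

```python
import math
import random

def relu(v):
    return [max(0.0, x) for x in v]

def softmax(v):
    # Subtract the max for numerical stability.
    exps = [math.exp(x - max(v)) for x in v]
    total = sum(exps)
    return [e / total for e in exps]

def dense(v, weights, bias):
    # One fully connected layer: each output is a weighted sum plus bias.
    return [sum(w * x for w, x in zip(row, v)) + b
            for row, b in zip(weights, bias)]

rng = random.Random(0)
n_in, n_hidden, n_out = 3, 64, 2   # P vector in; member/non-member out
W1 = [[rng.gauss(0, 0.1) for _ in range(n_in)] for _ in range(n_hidden)]
b1 = [0.0] * n_hidden
W2 = [[rng.gauss(0, 0.1) for _ in range(n_hidden)] for _ in range(n_out)]
b2 = [0.0] * n_out

p_vector = [0.7, 0.2, 0.1]         # a toy probability vector P
out = softmax(dense(relu(dense(p_vector, W1, b1)), W2, b2))
print(out)   # two class scores summing to 1
```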
The magnitudes of the test statistic $D$ and the $p$-values obtained from the
double-sided two-sample Kolmogorov-Smirnov tests were used to evaluate the
difference in distributions, as shown in table 1 for the networks with and
without CRF. As can be seen, the null hypothesis that the original names give
the same distribution as the altered ones does not hold, especially in the
cases where data set 2 (outside) was used (SN2 and SN2*). The question remains
whether these differences are meaningful and, more importantly, whether they
can be utilized for re-identification. Our cut-off attacks and membership
inference attack do not indicate any potential for re-identification. The
naïve cut-off attack did not yield any cut-off that could be used to re-
identify any of the reports. The brute-force cut-off attack also could not
narrow the list of potential original names containing the one used in
training down to fewer than hundreds. Even in the case of the membership
inference attack, the narrowest of the lists ranked the original name thirty-
eighth, while in all the rest it stayed well above one hundred. In other
words, these different attacks failed and no re-identification was possible.
|      | no CRF, D | no CRF, p-value | CRF, D | CRF, p-value |
|------|-----------|-----------------|--------|--------------|
| SN1  | 6.2e-2    | 9.5e-2          | 1.0e-1 | 4.3e-4       |
| SN2  | 1.6e-1    | 6.0e-9          | 2.3e-1 | <e-9         |
| SN1* | 6.8e-2    | 5.6e-2          | 9.1e-2 | 3.1e-3       |
| SN2* | 1.7e-1    | <e-9            | 2.3e-1 | <e-9         |
Table 1: Two-sample Kolmogorov-Smirnov test $D$ statistic and double-sided
p-value comparing the distribution of $P$s for surnames of the original,
unperturbed data set against the other sets: SN1, surnames replaced from the
corpus; SN2, surnames replaced from the outside dictionary; SN1*, surnames
where both given names and surnames were replaced from the corpus; and SN2*,
surnames where both given names and surnames were replaced from the external
dictionaries (exclusive of names in the corpus).
The histogram (figure 3) illustrates what the distribution of surname
probabilities looks like for random replacement of the original names with
ones from other reports in the corpus, as well as from outside. The main graph
shows the density of probabilities in different bins for the different surname
alterations. The inset illustrates the distribution for the highest
probabilities only. To the extent that there is a spread and widening in the
statistics, the distributions overlap. By using the cumulative distribution
function, some differences can be observed more easily (figures 4 and 5).
Figure 4 shows the cumulative density with Gaussian kernel density estimates
for the case when $P$ is extracted from a model without CRF; figure 5
represents the case for $P$ calculated on a network with CRF.
Figure 4: Kernel density estimates of the $P$ values for the network without
CRF. Cyan represents the unaltered original data; orange, surnames replaced
randomly with other names in the corpus; green, surnames replaced randomly
with names from the outside dictionary (exclusive); red, $P$ values for
surnames when both given names and surnames were replaced with other names in
the corpus; and magenta, $P$ values for surnames when both given names and
surnames were replaced randomly from the external dictionaries.

Figure 5: Kernel density estimates (curves) of the $P$ values for the network
with CRF. The color coding is the same as in figure 4.
## Discussion
In the result section, the difference between distributions for $P$s has been
discussed. These differences are more notable when comparing the original data
set with ones replaced with the external dictionaries. However, one can not
infer that any sensitive information has been compromised. More precisely,
while the p-values are indeed small, one has to note that they have been
gained by sampling over seven hundred surnames. Also, the differences are not
by any means drastic. The overlap of the distributions is overwhelming, and
that can protect sensitive data from adversaries’ inference attacks. For
instance, there is no cutoff that can be used to exclude original $P$s from
either of the other samples.
Moreover, the attempts to narrow down the list of candidate names, by
filtering names with high $P$s across the appearances of the same patient name
in the same report, were also unsuccessful. This failure is due to the
persistence of names from outside the corpus with high $P$s, and to the high
fluctuations in the rank of the original name across its different
appearances. Furthermore, it is notable that in this work almost absolute
knowledge is presumed, meaning that everything, including punctuation,
notation, and the capitalization of the names, was preserved in the
replacements and altered data. Even the membership inference attack did not
improve the case for leakage of sensitive data, as the attack was unsuccessful
at re-identifying any data; it could not even limit the candidates for the
original name to below several hundred, as was also the case for the cut-off
attacks. To check the robustness of the model, we implemented the membership
inference attack on a reduced data set of fifty reports, a tenth of the number
in the original set, to push the model to over-train and potentially increase
the chance of data leakage, but that did not lead to any leakage either. This
further indicates that the model does not leak data when used responsibly.
Note that we took the suggested precautions mentioned in the training section.
It is worth mentioning that $P$ can be interpreted as the probability of the
different labels for any input. The distributions of these probabilities for
the original names, as well as the replaced names, have very similar
properties such as mean, median, standard deviation, and maximum value. That
is the case both when the model is trained with the CRF layer and when the CRF
layer is disabled. When the CRF is used, interpreting the softmax of the
second layer's feed-forward output as probabilities is less accurate; but the
numerical values and statistics of the models with and without CRF follow each
other closely, and the discussion and conclusions provided stand in both
cases.
Differential privacy (Dwork, Roth et al. 2014) (Abadi et al. 2016) has been
gaining momentum in recent years. It is very reliable in the sense that, when
applicable, it provides mathematical guarantees that preserve plausible
deniability up to desired thresholds by introducing noise into the process in
a controlled and measured way. To achieve this mathematically safeguarded
protection, the parameters of the added noise must be set carefully, depending
on factors such as the number of epochs and the number of independent data
samples. While new tools such as the TensorFlow Privacy library help with the
implementation of differential privacy, there are caveats to the practical use
of the algorithm when the amount of available data is limited. The performance
of models trained in this manner suffers (Rahman et al. 2018), especially with
limited data and complex models. Moreover, as the number of dependent inputs
increases, the noise must increase, and the performance degrades further.
Thus, while theoretically ideal, differential privacy may fall short of
protecting sensitive data: by decreasing the accuracy of the model, it can
potentially increase the probability of releasing sensitive information. It
would be interesting to investigate these propositions with an actual
implementation of differential privacy and its effects, but that is beyond the
scope of this work.
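To make the noise-calibration point concrete, here is a minimal sketch of the core step underlying differentially private training (the Gaussian mechanism applied to a clipped gradient). This is an illustration with our own parameter names, not the TensorFlow Privacy API:

```python
import numpy as np

def noisy_clipped_gradient(grad, clip_norm, noise_multiplier, rng):
    """One illustrative DP-SGD-style step: clip the per-example gradient
    to norm at most `clip_norm`, then add Gaussian noise whose standard
    deviation is noise_multiplier * clip_norm (the Gaussian mechanism).
    A larger noise_multiplier means stronger privacy but noisier updates,
    which is the accuracy/privacy trade-off discussed above."""
    grad = np.asarray(grad, dtype=float)
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

rng = np.random.default_rng(1)
g = np.array([3.0, 4.0])  # gradient of norm 5; it will be clipped to norm 1
private_g = noisy_clipped_gradient(g, clip_norm=1.0, noise_multiplier=1.1, rng=rng)
```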
In this paper we have presented an analysis of the potential for a state-of-
the-art deep learning algorithm for de-identification to leak information
concerning subjects’ identities. We show that, although there is a
statistical difference between the network’s reaction to inside and outside
names, there is no evidence that a user could infer that any given subject
was present in the training data, or that the deep neural network encoded the
identities of the users in the training data in a way that puts them at risk.
Of course, this does not preclude that some future analysis might reveal the
identity of a user, but one can always argue that future technology can do
anything, and we feel this is not a sufficient argument against posting
networks trained on medical data, even when the source training data
explicitly contains the identities of individuals. The legal framework
governing PHI was developed to focus on portability (hence the ‘P’ in HIPAA)
and to judge the trade-off between the risk and the benefit of sharing data.
This reasoning should extend to a new generation of algorithms trained on
such data.
## Acknowledgements
GC, SN and SS are funded by the National Science Foundation, grant # 1822378
‘Leveraging Heterogeneous Data Across International Borders in a Privacy
Preserving Manner for Clinical Deep Learning’. GC and LX are partially
supported by the National Center for Advancing Translational Sciences of the
National Institutes of Health under Award Number UL1TR002378. The content is
solely the responsibility of the authors and does not necessarily represent
the official views of the National Institutes of Health. LX is partially
supported by NIH R01GM118609 Decentralized Differentially-Private Methods for
Dynamic Data Release and Analysis. SN is partially supported by the National
Institutes of Health, award # K01ES025445.
## References
* Abadi et al. (2016) Abadi, M.; Chu, A.; Goodfellow, I.; McMahan, H. B.; Mironov, I.; Talwar, K.; and Zhang, L. 2016. Deep learning with differential privacy. In _Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security_ , 308–318.
* Carlini et al. (2019) Carlini, N.; Liu, C.; Erlingsson, Ú.; Kos, J.; and Song, D. 2019. The Secret Sharer: Evaluating and testing unintended memorization in neural networks. In _28th Security Symposium (Security 19)_ , 267–284.
* Dernoncourt, Lee, and Szolovits (2017) Dernoncourt, F.; Lee, J. Y.; and Szolovits, P. 2017. NeuroNER: an easy-to-use program for named-entity recognition based on neural networks. _Conference on Empirical Methods on Natural Language Processing (EMNLP)_ .
* Dernoncourt et al. (2016) Dernoncourt, F.; Lee, J. Y.; Uzuner, O.; and Szolovits, P. 2016. De-identification of Patient Notes with Recurrent Neural Networks. _Journal of the American Medical Informatics Association (JAMIA)_ .
* Dwork, Roth et al. (2014) Dwork, C.; Roth, A.; et al. 2014. The algorithmic foundations of differential privacy. _Foundations and Trends® in Theoretical Computer Science_ 9(3–4): 211–407.
* Kim, Heider, and Meystre (2018) Kim, Y.; Heider, P.; and Meystre, S. 2018. Ensemble-based Methods to Improve De-identification of Electronic Health Record Narratives. In _AMIA Annual Symposium Proceedings_ , volume 2018, 663. American Medical Informatics Association.
* Lample et al. (2016) Lample, G.; Ballesteros, M.; Subramanian, S.; Kawakami, K.; and Dyer, C. 2016. Neural architectures for named entity recognition. _arXiv preprint arXiv:1603.01360_ .
* Melis et al. (2019) Melis, L.; Song, C.; Cristofaro, E. D.; and Shmatikov, V. 2019. Exploiting Unintended Feature Leakage in Collaborative Learning. In _2019 IEEE Symposium on Security and Privacy (SP)_ , 497–512. Los Alamitos, CA, USA: IEEE Computer Society. ISSN 2375-1207. doi:10.1109/SP.2019.00029.
* Neamatullah et al. (2008) Neamatullah, I.; Douglass, M. M.; Lehman, L.-w. H.; Reisner, A.; Villarroel, M.; Long, W. J.; Szolovits, P.; Moody, G. B.; Mark, R. G.; and Clifford, G. D. 2008. Automated de-identification of free-text medical records. _BMC Medical Informatics and Decision Making_ 8(1): 32. ISSN 1472-6947. doi:10.1186/1472-6947-8-32.
* Pennington, Socher, and Manning (2014) Pennington, J.; Socher, R.; and Manning, C. D. 2014. GloVe: Global Vectors for Word Representation. In _Empirical Methods in Natural Language Processing (EMNLP)_ , 1532–1543.
* Rahman et al. (2018) Rahman, M. A.; Rahman, T.; Laganière, R.; Mohammed, N.; and Wang, Y. 2018. Membership Inference Attack against Differentially Private Deep Learning Model. _Trans. Data Priv._ 11(1): 61–79.
* Razali, Wah et al. (2011) Razali, N. M.; Wah, Y. B.; et al. 2011. Power comparisons of shapiro-wilk, kolmogorov-smirnov, lilliefors and anderson-darling tests. _Journal of Statistical Modeling and Analytics_ 2(1): 21–33.
* Shickel et al. (2017) Shickel, B.; Tighe, P. J.; Bihorac, A.; and Rashidi, P. 2017. Deep EHR: a survey of recent advances in deep learning techniques for electronic health record (EHR) analysis. _IEEE Journal of Biomedical and Health Informatics_ 22(5): 1589–1604.
* Shokri et al. (2017) Shokri, R.; Stronati, M.; Song, C.; and Shmatikov, V. 2017. Membership inference attacks against machine learning models. In _2017 IEEE Symposium on Security and Privacy (SP)_ , 3–18. IEEE.
* Stubbs and Uzuner (2015) Stubbs, A.; and Uzuner, Ö. 2015. Annotating longitudinal clinical narratives for de-identification: The 2014 i2b2/UTHealth corpus. _Journal of Biomedical Informatics_ 58: S20–S29.
* Yang and Garibaldi (2015) Yang, H.; and Garibaldi, J. M. 2015. Automatic detection of protected health information from clinic narratives. _Journal of Biomedical Informatics_ 58: S30–S38.
# Potential Function-based Framework for Making the Gradients Small in Convex
and Min-Max Optimization††thanks: This research was partially supported by the
NSF grant CCF-2007757 and by the Office of the Vice Chancellor for Research
and Graduate Education at the University of Wisconsin–Madison with funding
from the Wisconsin Alumni Research Foundation. Part of this research was done
while PW was attending UW-Madison as part of the Visiting International
Student Program (VISP).
Jelena Diakonikolas
Department of Computer Sciences
University of Wisconsin-Madison
<EMAIL_ADDRESS>
Puqian Wang
School of Mathematics
Shandong University
<EMAIL_ADDRESS>
###### Abstract
Making the gradients small is a fundamental optimization problem that has
eluded unifying and simple convergence arguments in first-order optimization,
so far primarily reserved for other convergence criteria, such as reducing the
optimality gap. We introduce a novel potential function-based framework to
study the convergence of standard methods for making the gradients small in
smooth convex optimization and convex-concave min-max optimization. Our
framework is intuitive and it provides a lens for viewing algorithms that make
the gradients small as being driven by a trade-off between reducing either the
gradient norm or a certain notion of an optimality gap. On the lower bounds
side, we discuss tightness of the obtained convergence results for the convex
setup and provide a new lower bound for minimizing the norm of cocoercive
operators, which allows us to argue about the optimality of methods in the min-max
setup.
## 1 Introduction
One of the most basic facts in convex optimization is that a differentiable
convex function attains its minimum at a point where its gradient equals zero,
provided such a point exists. Thus, it is tempting to conclude that there is
no difference between minimizing the function value or its gradient (in any
suitable norm). This is only partially true, as we are almost never guaranteed
to find a point at which the function is minimized; instead, we opt for a more
modest goal of approximating such points. As it turns out, from an algorithmic
point of view, there are major differences between guarantees provided for the
function value (or optimality gap) and norm of its gradient.
Much of the standard optimization literature on smooth (gradient-Lipschitz)
convex first-order optimization has been concerned with providing guarantees
for the optimality gap. There is comparatively much less work on guarantees
for the norm of the gradient, most of it being initiated after the work of
Nesterov [41], which argued that such guarantees are natural and more
informative than those based on the function value for certain linearly
constrained optimization problems that frequently arise in applications.
Further, unlike the optimality gap, which would require knowledge of the
minimum function value to be usable as a stopping criterion, the norm of the
gradient is readily available to the algorithm as a stopping criterion, as
standard first-order methods define their iterates based on the gradient
information. This insight is particularly useful for the design of parameter-
free algorithms (i.e., algorithms that do not require knowledge of function
parameters such as smoothness, strong convexity, or sharpness/constants of
Łojasiewicz inequality; see, e.g., [36, 37, 11, 5]), and as such has been used
to design parameter-free algorithms that are near-optimal in terms of
iteration complexity (i.e., optimal up to poly-logarithmic factors) [42, 35,
25].
As for $L$-smooth functions the norm of the gradient can be bounded above as a
function of the optimality gap $f(\bm{x})-f(\bm{x}^{*}),$ where
$\bm{x}^{*}\in\operatorname*{argmin}_{\bm{x}}f(\bm{x})$, using
$\frac{1}{2L}\|\nabla f(\bm{x})\|^{2}\leq f(\bm{x})-f(\bm{x}^{*}),$ (1.1)
it is not surprising that convergence rates can be established for gradient
norm minimization. What _is_ surprising, however, is that those rates can be
faster than what is implied by Eq. (1.1) and existing results for convergence
in function value/optimality gap. In particular, methods that are optimal in
terms of iteration complexity for minimizing the optimality gap are not
necessarily optimal for gradient norm optimization, and vice-versa. More
specifically, the fast gradient method (FGM) of Nesterov [44] is iteration
complexity-optimal for minimizing the optimality gap, but it is suboptimal for
minimizing the gradient norm [27, 13].
More generally, the existing literature has not yet shed light on what is the
basic mechanism that drives algorithms for gradient norm minimization. The
only known iteration complexity-optimal algorithm for minimizing the norm of
the gradient of a smooth convex function is due to Kim and Fessler [28]. (The
optimality of the algorithm can be certified using the lower bound from [13].)
This algorithm was obtained by using the performance estimation framework of
Drori and Teboulle [20], originally developed for understanding the worst-case
performance of optimization algorithms. The algorithm [28] itself and its
convergence analysis are inferred from numerical solutions to a semidefinite
program (SDP). As such, the intuition behind what is driving the convergence
analysis of the algorithm and how the improved convergence rate is obtained is
lacking, which constitutes an impediment to possibly generalizing this
algorithm to other optimization settings.
Even less is known in the setting of smooth convex-concave min-max
optimization, where (near-)optimal convergence results have been established
only recently [15, 26, 33, 48] and the problem has been much less studied from
the aspect of oracle lower bounds [46, 15, 22]. In particular, as in
the case of convex optimization, classical methods for min-max optimization
that are optimal for reducing the primal-dual gap, such as, e.g., the
extragradient method [29], mirror-prox [39], and dual extrapolation [40], are
suboptimal in terms of iteration complexity for minimizing the gradient norm.
Interestingly, however, the methods that turn out to be (near-)optimal were
originally studied in the context of fixed point iterations [30, 38, 23].
In this paper, we introduce a novel potential function-based framework to
study the convergence in gradient norm for smooth convex and convex-concave
optimization problems. Our framework is intuitive, as it relies on
establishing convergence of standard methods by interpreting it as a trade-off
between reducing the gradient norm and reducing a notion of an optimality gap.
The same view can be adopted in a unifying manner for methods such as standard
gradient descent, Nesterov FGM [44], optimized method of Kim and Fessler [28],
gradient descent-ascent (which is equivalent to Krasnosel’skiı-Mann iteration
[30, 38]; see Section 3.1), and Halpern iteration [23]. We further complement
these results with a discussion of optimality of the considered methods for
convex optimization, and with a new lower bound for minimizing the norm of
cocoercive operators (see Section 1.2 for a precise definition and
relationship to min-max optimization), which allows us to discuss optimality
of gradient descent-ascent and Halpern iteration as methods for minimizing the
gradient norm in smooth convex-concave min-max optimization.
### 1.1 Further Related Work
Understanding the phenomenon of acceleration and providing a unifying theory
of first-order optimization algorithms has been an important topic in
optimization research, with a flurry of recent research activity in this area
[55, 1, 4, 7, 53, 56, 57, 59, 50, 31, 49, 12, 21, 34, 10, 18, 19, 8, 24, 32,
51, 17, 6]. However, the existing literature has almost exclusively focused on
the optimality gap guarantees, with only a small subset of results seeking to
provide guarantees for gradient norm and primarily addressing FGM-type
algorithms with suboptimal rates [17, 50, 6].
Complementary to the literature discussed above, whose focus has been on
deriving intuitive convergence analysis frameworks, another line of work has
focused on using the SDP-based performance estimation framework of Drori and
Teboulle [20] to investigate the worst-case performance of optimization
algorithms [27, 26, 28, 33, 14, 54]. Most relevant to our work among these
results are: [27], which investigated the worst-case performance of FGM-type
methods in terms of gradient norm minimization, [28], which obtained the first
(and so far, the only) iteration complexity-optimal algorithm for minimizing
the gradient norm of smooth convex functions, and [33], which obtained a tight
worst-case convergence bound for Halpern iteration. While the SDP-based
approach used in this line of work is useful for understanding the worst-case
performance of existing algorithms (and even obtaining new algorithms [28]),
its downside is that, because the convergence arguments are computer-assisted
(namely, they are inferred from numerical solutions to SDPs), they are
generally not suitable for developing intuition about what is driving the
methods and their analysis. Our work fills this gap by providing intuitive
convergence proofs based on potential function arguments.
### 1.2 Notation and Preliminaries
Throughout the paper, we consider the Euclidean space
$(\mathbb{R}^{d},\|\cdot\|),$ where
$\|\cdot\|=\sqrt{\left\langle\cdot,\cdot\right\rangle}$ is the Euclidean norm
and $\left\langle\cdot,\cdot\right\rangle$ denotes any inner product on
$\mathbb{R}^{d}$. We use $\\{A_{k}\\}_{k\geq 0}$ and $\\{B_{k}\\}_{k\geq 0}$ to
denote sequences of nondecreasing nonnegative numbers, and define
$a_{0}=A_{0},$ $a_{k}=A_{k}-A_{k-1}$ for $k\geq 1,$ and, similarly,
$b_{0}=B_{0},$ $b_{k}=B_{k}-B_{k-1}$ for $k\geq 1.$
We consider two main problem setups: (i) making the gradients small in convex
optimization, and (ii) making the gradients small in min-max optimization.
##### Convex optimization.
In the first setup, we assume we are given first-order oracle access to a
convex continuously differentiable function $f:\mathbb{R}^{d}\to\mathbb{R}.$
The first-order definition of convexity then applies, and we have:
$(\forall\bm{x},\bm{y}\in\mathbb{R}^{d}):\quad f(\bm{y})\geq
f(\bm{x})+\left\langle\nabla f(\bm{x}),\bm{y}-\bm{x}\right\rangle.$
We further assume that $f$ is $L$-smooth, i.e., that its gradients are
$L$-Lipschitz continuous:
$(\forall\bm{x},\bm{y}\in\mathbb{R}^{d}):\quad\|\nabla f(\bm{x})-\nabla
f(\bm{y})\|\leq L\|\bm{x}-\bm{y}\|.$
Recall that smoothness of $f$ implies:
$(\forall\bm{x},\bm{y}\in\mathbb{R}^{d}):\quad f(\bm{y})\leq
f(\bm{x})+\left\langle\nabla
f(\bm{x}),\bm{y}-\bm{x}\right\rangle+\frac{L}{2}\|\bm{y}-\bm{x}\|^{2}.$ (1.2)
The goal of the first setup is to, given $\epsilon>0,$ construct a point
$\bm{x}$ such that $\|\nabla f(\bm{x})\|\leq\epsilon$ in as few iterations
(oracle queries to the gradient of $f$) as possible. A useful fact that turns
out to be crucial for the analysis in the convex case is the following (see,
e.g., [58, Section 3.5]).
###### Fact 1.1.
A continuously differentiable function $f:\mathbb{R}^{d}\to\mathbb{R}$ is
$L$-smooth and convex if and only if
$(\forall\bm{x},\bm{y}\in\mathbb{R}^{d}):\quad\frac{1}{2L}\|\nabla
f(\bm{y})-\nabla f(\bm{x})\|^{2}\leq f(\bm{y})-f(\bm{x})-\left\langle\nabla
f(\bm{x}),\bm{y}-\bm{x}\right\rangle.$ (1.3)
Observe that Fact 1.1 fully characterizes the class of smooth convex
functions, and, as such, should be sufficient for analyzing any algorithm that
addresses problems from this class. An immediate consequence of Fact 1.1 is
that the gradient of a smooth convex function is cocoercive, i.e.,
$\left\langle\nabla f(\bm{x})-\nabla
f(\bm{y}),\bm{x}-\bm{y}\right\rangle\geq\frac{1}{L}\|\nabla f(\bm{x})-\nabla f(\bm{y})\|^{2}.$
(1.4)
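As a quick numerical sanity check (ours, not from the text), both Fact 1.1 and cocoercivity of the gradient, $\left\langle\nabla f(\bm{x})-\nabla f(\bm{y}),\bm{x}-\bm{y}\right\rangle\geq\frac{1}{L}\|\nabla f(\bm{x})-\nabla f(\bm{y})\|^{2}$, can be verified on a convex quadratic, whose smoothness constant is the largest eigenvalue of its Hessian:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4))
Q = A.T @ A                             # symmetric PSD, so f is smooth and convex
L = float(np.linalg.eigvalsh(Q).max())  # smoothness constant of f

f = lambda x: 0.5 * x @ Q @ x
grad = lambda x: Q @ x

def fact_1_1_gap(x, y):
    """RHS minus LHS of Eq. (1.3); nonnegative iff the inequality holds."""
    lhs = np.linalg.norm(grad(y) - grad(x)) ** 2 / (2 * L)
    rhs = f(y) - f(x) - grad(x) @ (y - x)
    return rhs - lhs

def cocoercivity_gap(x, y):
    """LHS minus RHS of <grad f(x) - grad f(y), x - y>
       >= (1/L) ||grad f(x) - grad f(y)||^2 (cocoercivity of the gradient)."""
    lhs = (grad(x) - grad(y)) @ (x - y)
    rhs = np.linalg.norm(grad(x) - grad(y)) ** 2 / L
    return lhs - rhs

# Both gaps should be nonnegative for every pair of points.
for _ in range(100):
    x, y = rng.normal(size=4), rng.normal(size=4)
    assert fact_1_1_gap(x, y) >= -1e-9
    assert cocoercivity_gap(x, y) >= -1e-9
```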
##### Min-max optimization.
In the second setup, we are given oracle access to gradients of a function
$\phi:\mathbb{R}^{d_{1}}\times\mathbb{R}^{d_{2}}\to\mathbb{R}$, where
$d_{1}+d_{2}=d.$ Function $\phi(\bm{x},\bm{y})$ is assumed to be convex-
concave: convex in the first argument ($\bm{x}$) when the second argument
($\bm{y}$) is fixed and concave in the second argument ($\bm{y}$) when the
first argument ($\bm{x}$) is fixed, for any values of $\bm{x},\bm{y}$. Similar
to the case of convex optimization, the goal in this case is, given
$\epsilon>0,$ to find a pair of points
$(\bm{x},\bm{y})\in\mathbb{R}^{d_{1}}\times\mathbb{R}^{d_{2}}$ such that
$\|\nabla\phi(\bm{x},\bm{y})\|\leq\epsilon$ in as few iterations (oracle
queries to the gradient of $\phi$) as possible.
We consider the problem of minimizing the norm of the gradient of $\phi$ as
the problem of minimizing the norm of the operator
$F(\bm{u})=\begin{bmatrix}\nabla_{\bm{x}}\phi(\bm{x},\bm{y})\\ -\nabla_{\bm{y}}\phi(\bm{x},\bm{y})\end{bmatrix},$ where
$\bm{u}=\begin{bmatrix}\bm{x}\\ \bm{y}\end{bmatrix}.$ When $\phi$ is convex-concave, $F$ is monotone, i.e.,
it holds
$(\forall\bm{u},\bm{v}\in\mathbb{R}^{d}):\quad\left\langle
F(\bm{u})-F(\bm{v}),\bm{u}-\bm{v}\right\rangle\geq 0.$ (1.5)
We will assume throughout that $F$ is $\frac{1}{L}$-cocoercive, i.e., that
$(\forall\bm{u},\bm{v}\in\mathbb{R}^{d}):\quad\left\langle
F(\bm{u})-F(\bm{v}),\bm{u}-\bm{v}\right\rangle\geq\frac{1}{L}\|F(\bm{u})-F(\bm{v})\|^{2}.$
(1.6)
Cocoercivity of $F$ implies that it is monotone and $L$-Lipschitz. The
opposite does not hold in general, unless $F$ is the gradient of a smooth
convex function (as we saw in the case of convex optimization described
earlier). Nevertheless, cocoercivity is sufficient to capture the main
algorithmic ideas of smooth min-max optimization, and the extensions to
general smooth min-max optimization are possible through the use of
approximate resolvent operators (see, e.g., [15]). Further, it suffices to
consider unconstrained problems, as extensions to constrained optimization
problems are possible in a straightforward manner using a notion of _operator
mapping_ (see, e.g., [15], where a similar idea was used).
We assume here that there exists a point $\bm{u}^{*}\in\mathbb{R}^{d}$ such
that $F(\bm{u}^{*})=0.$ Due to cocoercivity of $F$ (Eq. (1.6)), this
assumption implies that
$(\forall\bm{u}\in\mathbb{R}^{d}):\quad\left\langle
F(\bm{u}),\bm{u}-\bm{u}^{*}\right\rangle\geq\frac{1}{L}\|F(\bm{u})\|^{2}.$
(1.7)
It will be useful to think of $\left\langle
F(\bm{u}),\bm{u}-\bm{u}^{*}\right\rangle$ as a notion of “optimality gap” for
min-max optimization problems, as, using convexity-concavity of $\phi$, we
have
$(\forall(\bm{x},\bm{y})\in\mathbb{R}^{d_{1}}\times\mathbb{R}^{d_{2}}):\quad\phi(\bm{x},\bm{y}^{*})-\phi(\bm{x}^{*},\bm{y}^{*})+\phi(\bm{x}^{*},\bm{y}^{*})-\phi(\bm{x}^{*},\bm{y})\leq\left\langle
F(\bm{u}),\bm{u}-\bm{u}^{*}\right\rangle.$
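Not every convex-concave $\phi$ yields a cocoercive $F$ (a purely bilinear coupling does not), but for a strongly-convex-strongly-concave quadratic the cocoercivity constant can be computed and checked directly. The following sketch, with our own choice of $\phi$ and constants, verifies Eq. (1.6):

```python
import numpy as np

rng = np.random.default_rng(3)
d1 = d2 = 3
A = rng.normal(size=(d1, d2))
mu = 1.0

# phi(x, y) = (mu/2)||x||^2 + x^T A y - (mu/2)||y||^2 is convex-concave,
# and F(u) = [grad_x phi; -grad_y phi] = M u is linear:
M = np.block([[mu * np.eye(d1), A],
              [-A.T, mu * np.eye(d2)]])
F = lambda u: M @ u

# For this phi, F is (1/L)-cocoercive with L = (mu^2 + sigma_max(A)^2)/mu,
# since <Mu, u> = mu ||u||^2 and ||Mu||^2 <= (mu^2 + sigma_max^2) ||u||^2.
sigma_max = np.linalg.svd(A, compute_uv=False).max()
L = (mu**2 + sigma_max**2) / mu

for _ in range(100):
    u, v = rng.normal(size=d1 + d2), rng.normal(size=d1 + d2)
    lhs = (F(u) - F(v)) @ (u - v)
    rhs = np.linalg.norm(F(u) - F(v)) ** 2 / L
    assert lhs >= rhs - 1e-9       # Eq. (1.6) holds
```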
## 2 Small Gradients in Convex Optimization
In this section, we consider the problem of minimizing the norm of the gradient of
a smooth convex function. We show that all standard methods, including
standard gradient descent, fast gradient method of Nesterov [44], and the
optimized gradient method of Kim and Fessler [28], can be captured within an
intuitive potential function-based framework, where the progress of a method
is established through a trade-off between the norm of the gradient and the
optimality gap. Further, the complete convergence analysis of each of the
methods can be fully carried out using only the cocoercivity inequality from
Eq. (1.3), which fully characterizes the class of smooth convex functions.
### 2.1 Gradient Descent
As a warmup, we start by considering $L$-smooth but possibly nonconvex
objectives $f.$ In this case, all that can be said about $f$ is that its
gradients are $L$-Lipschitz, which implies Eq. (1.2). Further, for any method
that does not converge to local maxima (unless initialized at one), we cannot
hope to bound the norm of the last gradient – all that we can hope for is the
average or the minimum over all seen gradients. The simplest way to see this
is by considering the one dimensional case: if the function is locally concave
and the algorithm moves in the direction that reduces the function value, the
absolute value of the function derivative must increase.
Thus, assuming that the function is bounded below by some $f_{\star}>-\infty$,
it is natural to consider methods that in each iteration either reduce the
function value or the norm of the gradient. Such methods ensure that, $\forall
k\geq 0:$
$a_{k}\|\nabla f(\bm{x}_{k})\|^{2}+f(\bm{x}_{k+1})-f(\bm{x}_{k})\leq 0,$
or, equivalently, that the following potential function
$\mathcal{C}_{k}=\sum_{i=0}^{k}a_{i}\|\nabla
f(\bm{x}_{i})\|^{2}+f(\bm{x}_{k+1})$ (2.1)
is non-increasing, where $\\{a_{i}\\}_{i\geq 0}$ is some sequence of positive numbers.
As the only assumption we are making about $f$ is that it is $L$-smooth, the
most we can do to bound $f(\bm{x}_{k+1})-f(\bm{x}_{k})$ is use Eq. (1.2). The
tightest bound on $f(\bm{x}_{k+1})-f(\bm{x}_{k})$ that can be obtained from
Eq. (1.2) is attained when $\bm{x}_{k+1}=\bm{x}_{k}-\frac{1}{L}\nabla
f(\bm{x}_{k})$ (i.e., for the standard gradient descent step) and is given by
$f(\bm{x}_{k+1})-f(\bm{x}_{k})\leq-\frac{1}{2L}\|\nabla f(\bm{x}_{k})\|^{2}$,
in which case the largest $a_{k}$ we can choose is $a_{k}=\frac{1}{2L}.$ As
$\mathcal{C}_{k}$ is non-increasing, it follows that
$\mathcal{C}_{k}\leq\mathcal{C}_{0},$ and we recover the familiar convergence
bound of gradient descent:
$\frac{1}{k+1}\sum_{i=0}^{k}\|\nabla
f(\bm{x}_{i})\|^{2}\leq\frac{2L(f(\bm{x}_{0})-f(\bm{x}_{k+1}))}{k+1}\leq\frac{2L(f(\bm{x}_{0})-f_{\star})}{k+1}.$
(2.2)
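The bound in Eq. (2.2) can be checked numerically. The sketch below runs gradient descent with step size $1/L$ on a simple smooth nonconvex function of our own choosing and verifies that the average squared gradient norm stays below the bound:

```python
import numpy as np

# A smooth nonconvex test function: f(x) = x^2 + 3 sin^2(x).
# Since f''(x) = 2 + 6 cos(2x), f is L-smooth with L = 8, it is bounded
# below by f_star = 0, and the minimum is attained at x = 0.
f = lambda x: x**2 + 3 * np.sin(x)**2
df = lambda x: 2 * x + 3 * np.sin(2 * x)
L, f_star = 8.0, 0.0

x0 = 2.3                      # arbitrary initial point
x, grads_sq, K = x0, [], 200
for _ in range(K):
    g = df(x)
    grads_sq.append(g**2)
    x = x - g / L             # gradient descent with step size 1/L

# Eq. (2.2): the average squared gradient norm over K iterations is at
# most 2 L (f(x_0) - f_star) / K.
avg = sum(grads_sq) / K
bound = 2 * L * (f(x0) - f_star) / K
assert avg <= bound + 1e-12
```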
When considering the case of a convex objective function $f,$ the first
question to ask is how would convexity help to improve the bound from Eq.
(2.2). The first observation to make is that Fact 1.1 fully characterizes the
class of smooth convex functions, and, thus, Eq. (1.3) should be enough to
carry out the analysis of any algorithm for smooth convex functions.
Given that the function is convex, in this case it seems reasonable to hope
that we can obtain a bound on the gradient norm at the last iterate. Thus, we
could consider a potential function of the form
$\mathcal{C}_{k}=A_{k}\|\nabla f(\bm{x}_{k})\|^{2}+f(\bm{x}_{k})$
and try enforcing the condition that $\mathcal{C}_{k}\leq\mathcal{C}_{k-1}$
for $A_{k}$ that grows as fast as possible with the iteration count $k.$ This
approach precisely gives the bound $\|\nabla
f(\bm{x}_{k})\|^{2}\leq\frac{2L(f(\bm{x}_{0})-f(\bm{x}^{*}))}{2k+1},$ which is
tight (see, e.g., [28, Lemma 5.2]).
###### Lemma 2.1 (Convergence of Gradient Descent).
Let $f:\mathbb{R}^{d}\to\mathbb{R}$ be an $L$-smooth function that attains its
minimum on $\mathbb{R}^{d}$ and let
$\bm{x}^{*}\in\operatorname*{argmin}_{\bm{x}\in\mathbb{R}^{d}}f(\bm{x})$. Let
$\bm{x}_{0}\in\mathbb{R}^{d}$ be an arbitrary initial point and assume that
the sequence $\\{\bm{x}_{k}\\}_{k\geq 0}$ evolves according to the standard
gradient descent, i.e., $\bm{x}_{k+1}=\bm{x}_{k}-\frac{1}{L}\nabla
f(\bm{x}_{k}),$ $\forall k\geq 0$. Then
$\mathcal{C}_{k}=\frac{k}{L}\|\nabla f(\bm{x}_{k})\|^{2}+f(\bm{x}_{k})$
is non-increasing with $k,$ and we can conclude that, $\forall k\geq 0$
$\|\nabla
f(\bm{x}_{k})\|^{2}\leq\frac{2L(f(\bm{x}_{0})-f(\bm{x}^{*}))}{2k+1}.$
###### Proof.
We start by showing that $\mathcal{C}_{k+1}\leq\mathcal{C}_{k},$ $\forall
k\geq 0.$ By the definition of $\mathcal{C}_{k},$
$\mathcal{C}_{k+1}-\mathcal{C}_{k}=\frac{k+1}{L}\|\nabla
f(\bm{x}_{k+1})\|^{2}-\frac{k}{L}\|\nabla
f(\bm{x}_{k})\|^{2}+f(\bm{x}_{k+1})-f(\bm{x}_{k}).$
Applying Fact 1.1 with $\bm{x}=\bm{x}_{k+1}=\bm{x}_{k}-\frac{1}{L}\nabla
f(\bm{x}_{k})$ and $\bm{y}=\bm{x}_{k}$, it follows that
$f(\bm{x}_{k+1})-f(\bm{x}_{k})\leq-\frac{1}{2L}\|\nabla
f(\bm{x}_{k+1})\|^{2}-\frac{1}{2L}\|\nabla f(\bm{x}_{k})\|^{2},$ and, thus
$\mathcal{C}_{k+1}-\mathcal{C}_{k}\leq\frac{2k+1}{2L}\|\nabla
f(\bm{x}_{k+1})\|^{2}-\frac{2k+1}{2L}\|\nabla f(\bm{x}_{k})\|^{2}.$
To complete the proof that $\mathcal{C}_{k+1}\leq\mathcal{C}_{k},$ it remains
to argue that $\|\nabla f(\bm{x}_{k+1})\|\leq\|\nabla f(\bm{x}_{k})\|,$
$\forall k\geq 0.$ This is clearly true if $\|\nabla f(\bm{x}_{k+1})\|=0,$ so
assume $\|\nabla f(\bm{x}_{k+1})\|\neq 0$. Applying Eq. (1.6) with $F=\nabla f$,
$\bm{u}=\bm{x}_{k+1}=\bm{x}_{k}-\frac{1}{L}\nabla f(\bm{x}_{k})$, and
$\bm{v}=\bm{x}_{k}$, and simplifying, it follows that:
$\|\nabla f(\bm{x}_{k+1})\|^{2}\leq\left\langle\nabla f(\bm{x}_{k+1}),\nabla
f(\bm{x}_{k})\right\rangle\leq\|\nabla f(\bm{x}_{k+1})\|\|\nabla
f(\bm{x}_{k})\|,$
where the last inequality is by Cauchy-Schwarz. To conclude that $\|\nabla
f(\bm{x}_{k+1})\|\leq\|\nabla f(\bm{x}_{k})\|,$ it remains to divide both
sides of the last inequality by $\|\nabla f(\bm{x}_{k+1})\|.$
From the first part of the proof, it follows that
$\mathcal{C}_{k}\leq\mathcal{C}_{0},$ and, thus
$\frac{k}{L}\|\nabla f(\bm{x}_{k})\|^{2}\leq
f(\bm{x}_{0})-f(\bm{x}_{k})=f(\bm{x}_{0})-f(\bm{x}^{*})+f(\bm{x}^{*})-f(\bm{x}_{k}).$
It remains to observe that
$f(\bm{x}^{*})-f(\bm{x}_{k})\leq-\frac{1}{2L}\|\nabla f(\bm{x}_{k})\|^{2},$
which follows by applying Fact 1.1 with $\bm{x}=\bm{x}^{*},$
$\bm{y}=\bm{x}_{k}$, and rearranging (note that $\nabla f(\bm{x}^{*})=0$). ∎
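The potential $\mathcal{C}_{k}$ and the resulting last-iterate bound of Lemma 2.1 can be verified numerically; the sketch below uses a convex quadratic with minimizer $\bm{x}^{*}=0$ (our own test instance):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(5, 5))
Q = A.T @ A + np.eye(5)          # positive definite, so x* = 0 and f(x*) = 0
L = float(np.linalg.eigvalsh(Q).max())
f = lambda x: 0.5 * x @ Q @ x
grad = lambda x: Q @ x

x = rng.normal(size=5)
f0 = f(x)
C_prev = f0                      # C_0 = (0/L)||grad f(x_0)||^2 + f(x_0)
for k in range(1, 101):
    x = x - grad(x) / L          # standard gradient descent step
    g2 = float(np.linalg.norm(grad(x)) ** 2)
    C = (k / L) * g2 + f(x)      # the potential from Lemma 2.1
    assert C <= C_prev + 1e-9    # non-increasing, as the lemma asserts
    C_prev = C
    # last-iterate bound: ||grad f(x_k)||^2 <= 2 L (f(x_0) - f(x*)) / (2k + 1)
    assert g2 <= 2 * L * f0 / (2 * k + 1) + 1e-9
```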
### 2.2 Methods that are Faster than Gradient Descent
The potential functions we have seen so far (for gradient descent) trade off
the gradient norm (squared) with the function value. Equivalently, we can view
them as trading off the gradient norm with the optimality gap
$f(\bm{x}_{k})-f(\bm{x}^{*}),$ as $f(\bm{x}^{*})$ would cancel out in the
analysis and the same argument would go through.
It is reasonable to ask whether we can obtain faster algorithms by using a
different trade off, say, by considering potential functions of the form
$\mathcal{C}_{k}=A_{k}\|\nabla
f(\bm{x}_{k})\|^{2}+B_{k}(f(\bm{x}_{k})-f(\bm{x}^{*}))$ or
$\mathcal{C}_{k}=\sum_{i=0}^{k}a_{i}\|\nabla
f(\bm{x}_{i})\|^{2}+B_{k}(f(\bm{x}_{k})-f(\bm{x}^{*})),$ where $B_{k}$ is some
positive function of the iteration count $k$.
Observe that for non-constant $B_{k},$ one way or another, we would need to
account for $\bm{x}^{*},$ which is not known to the algorithm. However, there
are at least two ways around this issue. The first one is to utilize Eq. (1.3)
to bound below $f(\bm{x}^{*})$. This approach does not lead to the optimal
iteration complexity, but improves the overall bound compared to gradient
descent and recovers a variant of Nesterov FGM. The second approach is to
replace the optimality gap with a gap to some reference point. In particular,
as we show below, optimized gradient method [28] can be viewed as using the
final point of the algorithm $\bm{x}_{N}$ as the reference (or anchor) point.
#### 2.2.1 Fast Gradient Method
We start by considering a potential function that offers a different trade-off
between the norm of the gradient and the optimality gap, defined by
$\mathcal{C}_{k}=\sum_{i=0}^{k-1}a_{i}\|\nabla
f(\bm{x}_{i})\|^{2}+B_{k}(f(\bm{x}_{k})-f(\bm{x}^{*})),$ (2.3)
where $a_{i}>0,$ $\forall i\geq 0$ and the sequence of scalars $B_{k}>0$,
$\forall k\geq 0$, is strictly increasing. We also define
$b_{k}=B_{k}-B_{k-1}>0.$ By convention, the summation from $i$ to $j$ where
$j<i$ is taken to be zero. Observe that
$\mathcal{C}_{0}=B_{0}(f(\bm{x}_{0})-f(\bm{x}^{*})).$ (2.4)
While, in principle, one could also consider $\mathcal{C}_{k}=A_{k}\|\nabla
f(\bm{x}_{k})\|^{2}+B_{k}(f(\bm{x}_{k})-f(\bm{x}^{*}))$ hoping to obtain a
bound on the last gradient, it is not clear that such a bound is even possible
for non-constant $B_{k}$ (see Section 2.3).
We first show that there is a natural algorithm that ensures
$\mathcal{C}_{k+1}-\mathcal{C}_{k}\leq E_{k},$ $\forall k\geq 0$, where
$E_{k}$ contains only telescoping terms. As it turns out, this algorithm is
precisely Nesterov FGM.
###### Lemma 2.2.
Given an arbitrary initial point $\bm{x}_{0}\in\mathbb{R}^{d}$, assume that
for $k\geq 1,$ the sequence $\bm{x}_{k}$ is updated as
$\begin{gathered}\bm{x}_{k}=\frac{B_{k-1}}{B_{k}}\Big{(}\bm{x}_{k-1}-\frac{1}{L}\nabla
f(\bm{x}_{k-1})\Big{)}+\frac{b_{k}}{B_{k}}\bm{v}_{k},\end{gathered}$ (2.5)
where $\bm{v}_{k}$ is defined recursively via
$\bm{v}_{k}=\bm{v}_{k-1}-\frac{b_{k-1}}{L}\nabla f(\bm{x}_{k-1})$ with
$\bm{v}_{0}=\bm{x}_{0}$. If ${b_{k}}^{2}\leq B_{k}$ and
$a_{k-1}\leq\frac{B_{k-1}}{2L},$ then
$\mathcal{C}_{k}-\mathcal{C}_{k-1}\leq\frac{L}{2}\big{(}\|\bm{x}^{*}-\bm{v}_{k}\|^{2}-\|\bm{x}^{*}-\bm{v}_{k+1}\|^{2}\big{)},$
$\forall k\geq 1,$ where $\mathcal{C}_{k}$ is defined by Eq. (2.3).
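The update in Eq. (2.5) can be instantiated with a concrete schedule for $B_{k}$; the choice below (ours, not prescribed by the lemma) enforces ${b_{k}}^{2}=B_{k}$, so that $B_{k}$ grows like $k^{2}/4$, and the sketch checks the bound obtained by telescoping the lemma's inequality:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(10, 10))
Q = A.T @ A + 0.1 * np.eye(10)
L = float(np.linalg.eigvalsh(Q).max())
f = lambda x: 0.5 * x @ Q @ x        # minimized at x* = 0 with f(x*) = 0
grad = lambda x: Q @ x

x0 = rng.normal(size=10)
x, v = x0.copy(), x0.copy()
# Choose B_0 = b_0 = 1 and, for k >= 1, b_k solving b_k^2 = B_{k-1} + b_k,
# so b_k^2 <= B_k holds with equality and B_k grows like k^2 / 4.
B_prev, b_prev = 1.0, 1.0
for k in range(1, 201):
    b = (1.0 + np.sqrt(1.0 + 4.0 * B_prev)) / 2.0
    B = B_prev + b
    g = grad(x)
    v = v - (b_prev / L) * g                       # v_k = v_{k-1} - (b_{k-1}/L) grad f(x_{k-1})
    x = (B_prev / B) * (x - g / L) + (b / B) * v   # the update in Eq. (2.5)
    B_prev, b_prev = B, b

# Telescoping the lemma's inequality gives
# B_k f(x_k) <= B_0 f(x_0) + (L/2) ||x* - v_1||^2 (here x* = 0, f(x*) = 0),
# with v_1 = x_0 - (b_0/L) grad f(x_0).
v1 = x0 - grad(x0) / L
assert f(x) <= (f(x0) + (L / 2) * np.linalg.norm(v1) ** 2) / B_prev + 1e-9
```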
###### Proof.
Given $k\geq 1,$ by definition of $\mathcal{C}_{k},$ we have
$\mathcal{C}_{k}-\mathcal{C}_{k-1}=a_{k-1}\|\nabla
f(\bm{x}_{k-1})\|^{2}+B_{k}f(\bm{x}_{k})-B_{k-1}f(\bm{x}_{k-1})-b_{k}f(\bm{x}^{*}).$
(2.6)
Since $f(\bm{x}^{*})$ is not known to the algorithm and we are trying to bound
$\mathcal{C}_{k}-\mathcal{C}_{k-1}$ above, it appears natural to use Eq. (1.3)
to bound $f(\bm{x}^{*})$ below. In particular, we have:
$f(\bm{x}^{*})\geq f(\bm{x}_{k})+\left\langle\nabla
f(\bm{x}_{k}),\bm{x}^{*}-\bm{x}_{k}\right\rangle+\frac{1}{2L}\|\nabla
f(\bm{x}_{k})\|^{2}.$ (2.7)
On the other hand, the difference $f(\bm{x}_{k})-f(\bm{x}_{k-1})$ can be
bounded above using, again, Eq. (1.3), as follows.
$\displaystyle f(\bm{x}_{k})-f(\bm{x}_{k-1})\leq$
$\displaystyle\left\langle\nabla
f(\bm{x}_{k}),\bm{x}_{k}-\bm{x}_{k-1}+\frac{1}{L}\nabla
f(\bm{x}_{k-1})\right\rangle$ (2.8) $\displaystyle-\frac{1}{2L}\|\nabla
f(\bm{x}_{k})\|^{2}-\frac{1}{2L}\|\nabla f(\bm{x}_{k-1})\|^{2}.$
Combining Eq. (2.7) and Eq. (2.8) with Eq. (2.6), we have:
$\displaystyle\mathcal{C}_{k}-\mathcal{C}_{k-1}\leq$
$\displaystyle-\frac{B_{k}}{2L}\|\nabla
f(\bm{x}_{k})\|^{2}+\Big{(}a_{k-1}-\frac{B_{k-1}}{2L}\Big{)}\|\nabla
f(\bm{x}_{k-1})\|^{2}$ (2.9) $\displaystyle+B_{k-1}\left\langle\nabla
f(\bm{x}_{k}),\bm{x}_{k}-\bm{x}_{k-1}+\frac{1}{L}\nabla
f(\bm{x}_{k-1})\right\rangle+b_{k}\left\langle\nabla
f(\bm{x}_{k}),\bm{x}_{k}-\bm{x}^{*}\right\rangle.$
Now, if $b_{k}$ were zero (constant $B_{k}$), we could simply set
$\bm{x}_{k}=\bm{x}_{k-1}-\frac{1}{L}\nabla f(\bm{x}_{k-1}),$ and we would be
recovering gradient descent and its analysis from the previous subsection. Of
course, the goal here is to get a different trade-off, where $B_{k}$ is
strictly increasing.
To get a useful bound on $\mathcal{C}_{k}-\mathcal{C}_{k-1},$ we need to be
able to bound or otherwise control the term $b_{k}\left\langle\nabla
f(\bm{x}_{k}),\bm{x}_{k}-\bm{x}^{*}\right\rangle.$ Fortunately, such a term
frequently appears in the mirror-descent-type analysis, and it can be bounded
using standard arguments by defining
$\displaystyle\bm{v}_{k+1}$
$\displaystyle=\operatorname*{argmin}_{\bm{u}\in\mathbb{R}^{d}}\Big{\\{}b_{k}\left\langle\nabla
f(\bm{x}_{k}),\bm{u}-\bm{v}_{k}\right\rangle+\frac{L}{2}\|\bm{u}-\bm{v}_{k}\|^{2}\Big{\\}}$
$\displaystyle=\bm{v}_{k}-\frac{b_{k}}{L}\nabla f(\bm{x}_{k}).$
Then, we have:
$\displaystyle b_{k}\left\langle\nabla
f(\bm{x}_{k}),\bm{x}_{k}-\bm{x}^{*}\right\rangle=\;$ $\displaystyle
b_{k}\left\langle\nabla
f(\bm{x}_{k}),\bm{x}_{k}-\bm{v}_{k+1}\right\rangle+L\left\langle\bm{v}_{k}-\bm{v}_{k+1},\bm{v}_{k+1}-\bm{x}^{*}\right\rangle$
$\displaystyle=\;$ $\displaystyle b_{k}\left\langle\nabla
f(\bm{x}_{k}),\bm{x}_{k}-\bm{v}_{k}\right\rangle+\frac{{b_{k}}^{2}}{L}\|\nabla
f(\bm{x}_{k})\|^{2}$
$\displaystyle+\frac{L}{2}\|\bm{x}^{*}-\bm{v}_{k}\|^{2}-\frac{L}{2}\|\bm{x}^{*}-\bm{v}_{k+1}\|^{2}-\frac{L}{2}\|\bm{v}_{k+1}-\bm{v}_{k}\|^{2}$
$\displaystyle=\;$ $\displaystyle b_{k}\left\langle\nabla
f(\bm{x}_{k}),\bm{x}_{k}-\bm{v}_{k}\right\rangle+\frac{{b_{k}}^{2}}{2L}\|\nabla
f(\bm{x}_{k})\|^{2}$
$\displaystyle+\frac{L}{2}\|\bm{x}^{*}-\bm{v}_{k}\|^{2}-\frac{L}{2}\|\bm{x}^{*}-\bm{v}_{k+1}\|^{2},$
where we have repeatedly used $\bm{v}_{k+1}=\bm{v}_{k}-\frac{b_{k}}{L}\nabla
f(\bm{x}_{k}).$ Combining with Eq. (2.9), we have
$\displaystyle\mathcal{C}_{k}-\mathcal{C}_{k-1}\leq\;$
$\displaystyle\frac{{b_{k}}^{2}-B_{k}}{2L}\|\nabla
f(\bm{x}_{k})\|^{2}+\Big{(}a_{k-1}-\frac{B_{k-1}}{2L}\Big{)}\|\nabla
f(\bm{x}_{k-1})\|^{2}$
$\displaystyle+\frac{L}{2}\|\bm{x}^{*}-\bm{v}_{k}\|^{2}-\frac{L}{2}\|\bm{x}^{*}-\bm{v}_{k+1}\|^{2}$
$\displaystyle+\left\langle\nabla
f(\bm{x}_{k}),B_{k}\bm{x}_{k}-B_{k-1}\Big{(}\bm{x}_{k-1}-\frac{1}{L}\nabla
f(\bm{x}_{k-1})\Big{)}-b_{k}\bm{v}_{k}\right\rangle.$
To obtain
$\mathcal{C}_{k}-\mathcal{C}_{k-1}\leq\frac{L}{2}\|\bm{x}^{*}-\bm{v}_{k}\|^{2}-\frac{L}{2}\|\bm{x}^{*}-\bm{v}_{k+1}\|^{2},$
it remains to choose ${b_{k}}^{2}\leq B_{k},$ $a_{k-1}\leq\frac{B_{k-1}}{2L}$,
and $\bm{x}_{k}=\frac{B_{k-1}}{B_{k}}\big{(}\bm{x}_{k-1}-\frac{1}{L}\nabla
f(\bm{x}_{k-1})\big{)}+\frac{b_{k}}{B_{k}}\bm{v}_{k}.$ ∎
We can now use Lemma 2.2 to argue about convergence of Nesterov FGM from Eq.
(2.5). Interestingly, the result from Lemma 2.2 suffices to argue about both
convergence in function value and in norm of the gradient. The resulting
bounds are tight up to a small absolute constant, as evidenced by the
numerical results from [27].
###### Theorem 2.3 (Convergence of Fast Gradient Method).
Suppose that the assumptions of Lemma 2.2 hold, where $\bm{v}_{0}=\bm{x}_{0}.$
Then, $\forall k\geq 1$:
$f(\bm{x}_{k})-f(\bm{x}^{*})\leq\frac{2B_{0}(f(\bm{x}_{0})-f(\bm{x}^{*}))+L\|\bm{x}_{0}-\bm{x}^{*}\|^{2}}{2B_{k}}$
and
$\sum_{i=0}^{k}\frac{B_{i}}{2L}\|\nabla f(\bm{x}_{i})\|^{2}\leq
B_{0}(f(\bm{x}_{0})-f(\bm{x}^{*}))+\frac{L}{2}\|\bm{x}_{0}-\bm{x}^{*}\|^{2}.$
In particular, if $b_{0}=B_{0},$ ${b_{k}}^{2}=B_{k}$ for $k\geq 1$, and
$a_{k}=\frac{B_{k}}{2L}$, then
$f(\bm{x}_{k})-f(\bm{x}^{*})\leq\frac{4L\|\bm{x}_{0}-\bm{x}^{*}\|^{2}}{(k+1)(k+2)}$
and
$\min_{0\leq i\leq k}\|\nabla
f(\bm{x}_{i})\|^{2}\leq\frac{\sum_{i=0}^{k}{B_{i}}\|\nabla
f(\bm{x}_{i})\|^{2}}{\sum_{i=0}^{k}B_{i}}\leq\frac{18L^{2}\|\bm{x}_{0}-\bm{x}^{*}\|^{2}}{(k+1)(k+2)(k+3)}.$
###### Proof.
Applying Lemma 2.2 and the definition of $\mathcal{C}_{k},$ we have, $\forall
k\geq 1$:
$\displaystyle\mathcal{C}_{k}$
$\displaystyle\leq\mathcal{C}_{0}+\frac{L}{2}\|\bm{x}^{*}-\bm{v}_{0}\|^{2}-\frac{L}{2}\|\bm{v}_{k+1}-\bm{x}^{*}\|^{2}$
$\displaystyle\leq
B_{0}(f(\bm{x}_{0})-f(\bm{x}^{*}))+\frac{L}{2}\|\bm{x}^{*}-\bm{x}_{0}\|^{2}.$
Equivalently:
$\sum_{i=0}^{k-1}a_{i}\|\nabla
f(\bm{x}_{i})\|^{2}+B_{k}(f(\bm{x}_{k})-f(\bm{x}^{*}))\leq
B_{0}(f(\bm{x}_{0})-f(\bm{x}^{*}))+\frac{L}{2}\|\bm{x}^{*}-\bm{x}_{0}\|^{2}.$
The first part of the theorem is now immediate, as
$\sum_{i=0}^{k-1}a_{i}\|\nabla f(\bm{x}_{i})\|^{2}\geq 0$ and
$B_{k}(f(\bm{x}_{k})-f(\bm{x}^{*}))\geq\frac{B_{k}}{2L}\|\nabla
f(\bm{x}_{k})\|^{2}\geq a_{k}\|\nabla f(\bm{x}_{k})\|^{2}.$
For the second part, we only need to bound the growth of $B_{k}$ when
${b_{k}}^{2}=(B_{k}-B_{k-1})^{2}=B_{k}.$ It is a standard result that this
growth is quadratic and at least as fast as the growth resulting from choosing
$b_{k}=\frac{k+1}{2},$ $\forall k.$ Thus,
$B_{k}\geq\sum_{i=0}^{k}\frac{i+1}{2}=\frac{(k+1)(k+2)}{4}$ and
$\sum_{i=0}^{k}B_{i}\geq\frac{(k+1)(k+2)(k+3)}{12}.$ Using that
$f(\bm{x}_{0})-f(\bm{x}^{*})\leq\frac{L}{2}\|\bm{x}_{0}-\bm{x}^{*}\|^{2},$ it
now follows from the first part of the theorem that
$f(\bm{x}_{k})-f(\bm{x}^{*})\leq\frac{4L\|\bm{x}_{0}-\bm{x}^{*}\|^{2}}{(k+1)(k+2)}$
and
$\min_{0\leq i\leq k}\|\nabla
f(\bm{x}_{i})\|^{2}\leq\frac{\sum_{i=0}^{k}{B_{i}}\|\nabla
f(\bm{x}_{i})\|^{2}}{\sum_{i=0}^{k}B_{i}}\leq\frac{18L^{2}\|\bm{x}_{0}-\bm{x}^{*}\|^{2}}{(k+1)(k+2)(k+3)},$
as claimed. ∎
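As an illustration (ours, not part of the original analysis), the update (2.5) can be sanity-checked numerically. The sketch below implements Eq. (2.5) with the parameter choice $b_{0}=B_{0}=1/2$, $b_{k}=(k+1)/2$ from Theorem 2.3; the diagonal quadratic test function, dimension, and random seed are illustrative assumptions.

```python
import numpy as np

def fgm(grad, x0, L, K):
    """Nesterov FGM in the form of Eq. (2.5), with b_0 = B_0 = 1/2 and
    b_k = (k+1)/2, so that b_k^2 <= B_k = (k+1)(k+2)/4 holds."""
    x, v = x0.copy(), x0.copy()      # v_0 = x_0
    B_prev, b_prev = 0.5, 0.5        # B_0 = b_0 = 1/2
    for k in range(1, K + 1):
        b = (k + 1) / 2.0
        B = B_prev + b               # B_k = B_{k-1} + b_k
        g = grad(x)
        v = v - (b_prev / L) * g     # v_k = v_{k-1} - (b_{k-1}/L) grad f(x_{k-1})
        x = (B_prev / B) * (x - g / L) + (b / B) * v   # Eq. (2.5)
        B_prev, b_prev = B, b
    return x

# Illustrative test problem: f(x) = (1/2) x^T diag(d) x, minimized at x* = 0.
rng = np.random.default_rng(0)
diag = np.linspace(0.01, 1.0, 50)
L = float(diag.max())
x0 = rng.standard_normal(50)
K = 200
xK = fgm(lambda x: diag * x, x0, L, K)
fK = 0.5 * float(diag @ xK**2)
bound = 4.0 * L * float(x0 @ x0) / ((K + 1) * (K + 2))   # Theorem 2.3
```

On this instance the final optimality gap `fK` falls below the guaranteed bound $4L\|\bm{x}_{0}-\bm{x}^{*}\|^{2}/((K+1)(K+2))$.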
###### Remark 2.4.
It may not be immediately clear why the bound from Theorem 2.3 improves upon
the bound for gradient descent from Lemma 2.1, as in the former the gradient
is bounded as a function of $\|\bm{x}^{*}-\bm{x}_{0}\|^{2},$ while in the
latter it is bounded as a function of $f(\bm{x}_{0})-f(\bm{x}^{*}).$ Here, one
should note that, using the standard convergence result for the optimality gap
of gradient descent
$f(\bm{x}_{k})-f(\bm{x}^{*})=O\big{(}\frac{L\|\bm{x}_{0}-\bm{x}^{*}\|^{2}}{k}\big{)}$
and combining it with the bound from Lemma 2.1, we also have that $\|\nabla
f(\bm{x}_{k})\|^{2}=O\big{(}\frac{L(f(\bm{x}_{\lceil k/2\rceil})-f(\bm{x}^{*}))}{k}\big{)}=O\big{(}\frac{L^{2}\|\bm{x}_{0}-\bm{x}^{*}\|^{2}}{k^{2}}\big{)}.$
Furthermore, this bound is known to be tight [27, Theorem 2], and it also
applies to $\min_{0\leq i\leq k}\|\nabla f(\bm{x}_{i})\|^{2}$, as gradient
descent monotonically decreases the gradient. We also note that the improved
bound for FGM from Theorem 2.3 can only be established for the minimum
gradient norm up to iteration $k;$ as shown numerically in [27], the bound for
the gradient of the last iterate is no better than that of gradient descent,
i.e., $\|\nabla
f(\bm{x}_{k})\|^{2}=\Omega\big{(}\frac{L^{2}\|\bm{x}_{0}-\bm{x}^{*}\|^{2}}{k^{2}}\big{)}$.
#### 2.2.2 Optimized Method for the Gradients
The only known method that achieves the optimal convergence bound of the form
$\|\nabla
f(\bm{x}_{k})\|^{2}=O\big{(}\frac{L(f(\bm{x}_{0})-f(\bm{x}^{*}))}{k^{2}}\big{)}$
is the optimized method for the gradients (OGM-G), due to Kim and Fessler
[28]. This method was obtained using the performance estimation framework
(PEP) of Drori and Teboulle [20], which relies on numerical solutions to
semidefinite programs that model the worst case performance of methods on a
given class of problems (such as, e.g., unconstrained problems with smooth
convex objective functions considered here). While this is a very powerful
approach that generally produces tight convergence analysis and worst case
instances as a byproduct, as discussed before, the intuition behind the
methods and their analysis obtained using PEP is not always clear.
In this section, we show that OGM-G naturally arises from a potential function
that fits within the broader framework studied in this paper. In particular,
as mentioned earlier in this section, we can view OGM-G as trading off the
norm of the gradient for a gap w.r.t. an anchor point, which is the last point
constructed by the algorithm. As a consequence of anchoring to the last point,
the algorithm crucially requires fixing the number of iterations in advance to
achieve the optimal convergence bound stated above.
The potential function used for analyzing OGM-G is defined by
$\mathcal{C}_{k}=A_{k}\Big{(}\frac{1}{2L}\|\nabla
f(\bm{x}_{k})\|^{2}+\frac{1}{2L}\|\nabla
f(\bm{x}_{K})\|^{2}+f(\bm{x}_{k})-f(\bm{x}_{K})\Big{)},$ (2.10)
where $K$ is the total number of iterations for which OGM-G is invoked.
Unlike for other algorithms, we will not be able to argue that
$\mathcal{C}_{k}-\mathcal{C}_{k-1}\leq E_{k}$ for $E_{k}$ that is either zero
or only contains telescoping terms. Instead, we will settle for a more modest
goal of arguing that, under the appropriate choice of algorithm steps and
growth of the sequence $A_{k},$ we have $\mathcal{C}_{K}\leq\mathcal{C}_{0}.$
Observe that, by the definition of $\mathcal{C}_{k}$, if we can prove that
$A_{K}/A_{0}=\Omega(K^{2}),$ this condition immediately leads to the desired
bound
$\|\nabla
f(\bm{x}_{K})\|^{2}=O\Big{(}\frac{L(f(\bm{x}_{0})-f(\bm{x}_{K}))}{K^{2}}\Big{)}=O\Big{(}\frac{L(f(\bm{x}_{0})-f(\bm{x}^{*}))}{K^{2}}\Big{)}.$
As before, we define $a_{k}=A_{k}-A_{k-1}$ and assume it is strictly positive,
for all $k$ (i.e., $A_{k}$ is strictly increasing). To bound
$\mathcal{C}_{K},$ we start by bounding the change in the potential function
$\mathcal{C}_{k}-\mathcal{C}_{k-1},$ for $k\geq 1,$ in the following lemma.
Observe that the lemma itself is algorithm-independent.
###### Lemma 2.5.
Let $\mathcal{C}_{k}$ be defined by Eq. (2.10), for all
$k\in\\{0,1,\dots,K\\}.$ Define $\bm{y}_{k}=\bm{x}_{k}-\frac{1}{L}\nabla
f(\bm{x}_{k})$ for $k\geq 0,$ and set $\bm{y}_{-1}=\bm{x}_{0}.$ Then, $\forall
1\leq k\leq K:$
$\displaystyle\mathcal{C}_{k}-\mathcal{C}_{k-1}\leq\;$ $\displaystyle
A_{k}\left\langle\nabla f(\bm{x}_{k}),\bm{x}_{k}-\bm{y}_{k-1}\right\rangle-
A_{k-1}\left\langle\nabla
f(\bm{x}_{k-1}),\bm{x}_{k-1}-\bm{y}_{k-2}\right\rangle$
$\displaystyle+\left\langle\nabla
f(\bm{x}_{k-1}),A_{k}\bm{y}_{k-1}-A_{k-1}\bm{y}_{k-2}-a_{k}\bm{y}_{K}\right\rangle.$
###### Proof.
Let $\bm{x},\bm{\hat{x}}$ be any two vectors from $\mathbb{R}^{d},$ and let
$\bm{y}=\bm{x}-\frac{1}{L}\nabla f(\bm{x}).$ Then, Eq. (1.3) can be
equivalently written as:
$f(\bm{\hat{x}})-f(\bm{x})\leq\left\langle\nabla
f(\bm{\hat{x}}),\bm{\hat{x}}-\bm{y}\right\rangle-\frac{1}{2L}\|\nabla
f(\bm{\hat{x}})\|^{2}-\frac{1}{2L}\|\nabla f(\bm{x})\|^{2}.$ (2.11)
From the definition of $\mathcal{C}_{k}$ in Eq. (2.10), we have
$\displaystyle\mathcal{C}_{k}-\mathcal{C}_{k-1}=\;$
$\displaystyle\frac{A_{k}}{2L}\|\nabla
f(\bm{x}_{k})\|^{2}-\frac{A_{k-1}}{2L}\|\nabla
f(\bm{x}_{k-1})\|^{2}+\frac{a_{k}}{2L}\|\nabla f(\bm{x}_{K})\|^{2}$
$\displaystyle+A_{k}(f(\bm{x}_{k})-f(\bm{x}_{k-1}))+a_{k}(f(\bm{x}_{k-1})-f(\bm{x}_{K})).$
Applying Eq. (2.11) to $f(\bm{x}_{k})-f(\bm{x}_{k-1})$ and
$f(\bm{x}_{k-1})-f(\bm{x}_{K})$, we further have
$\displaystyle\mathcal{C}_{k}-\mathcal{C}_{k-1}\leq\;$
$\displaystyle-\frac{A_{k}}{L}\|\nabla
f(\bm{x}_{k-1})\|^{2}+A_{k}\left\langle\nabla
f(\bm{x}_{k}),\bm{x}_{k}-\bm{y}_{k-1}\right\rangle$
$\displaystyle+a_{k}\left\langle\nabla
f(\bm{x}_{k-1}),\bm{x}_{k-1}-\bm{y}_{K}\right\rangle$ $\displaystyle=\;$
$\displaystyle A_{k}\left\langle\nabla
f(\bm{x}_{k}),\bm{x}_{k}-\bm{y}_{k-1}\right\rangle+A_{k}\left\langle\nabla
f(\bm{x}_{k-1}),\bm{y}_{k-1}-\bm{y}_{K}\right\rangle$ $\displaystyle-
A_{k-1}\left\langle\nabla
f(\bm{x}_{k-1}),\bm{x}_{k-1}-\bm{y}_{K}\right\rangle$ $\displaystyle=\;$
$\displaystyle A_{k}\left\langle\nabla
f(\bm{x}_{k}),\bm{x}_{k}-\bm{y}_{k-1}\right\rangle-A_{k-1}\left\langle\nabla
f(\bm{x}_{k-1}),\bm{x}_{k-1}-\bm{y}_{k-2}\right\rangle$
$\displaystyle+\left\langle\nabla
f(\bm{x}_{k-1}),A_{k}\bm{y}_{k-1}-A_{k-1}\bm{y}_{k-2}-a_{k}\bm{y}_{K}\right\rangle,$
as claimed. ∎
The following lemma provides the restrictions on the step sizes of the
algorithm that are needed to ensure that $\mathcal{C}_{K}\leq\mathcal{C}_{0}.$
Here, we assume that each point $\bm{x}_{k}$ can be expressed as the sum of
the initial point $\bm{x}_{0}$ and some linear combination of the gradients
evaluated at points $\bm{x}_{i}$ for $0\leq i\leq k-1.$ Note that most of the
standard first-order algorithms can be expressed in this form.
###### Lemma 2.6.
Let $\mathcal{C}_{k}$ be defined by Eq. (2.10) for $k\in\\{0,\dots,K\\}$ and
assume that points $\bm{x}_{k}$ can be expressed as
$\bm{x}_{k}=\bm{x}_{0}-\frac{1}{L}\sum_{i=0}^{k-1}\beta_{i,k}\nabla
f(\bm{x}_{i}),$ where $\beta_{i,k}$ are some real scalars. Define
$\beta_{k,k}=1,$ so that $\bm{y}_{k}=\bm{x}_{k}-\frac{1}{L}\nabla
f(\bm{x}_{k})=\bm{x}_{0}-\frac{1}{L}\sum_{i=0}^{k}\beta_{i,k}\nabla
f(\bm{x}_{i})$ and set $\bm{y}_{-1}=\bm{x}_{0}.$ If the following two
conditions are satisfied for all $0\leq j<k\leq K-1$:
$\displaystyle\beta_{k,K-1}+\frac{a_{k+1}}{A_{K}}\leq\frac{A_{k+1}}{a_{k+1}},$
(2.12) $\displaystyle
A_{k+1}\beta_{j,k}=A_{k}\beta_{j,k-1}+a_{k+1}\Big{(}\beta_{j,K-1}+\frac{a_{j+1}}{A_{K}}\Big{)}+a_{j+1}\Big{(}\beta_{k,K-1}+\frac{a_{k+1}}{A_{K}}\Big{)}$
(2.13)
and if
$\bm{x}_{K}=\bm{y}_{K-1}-\frac{1}{LA_{K}}\sum_{k=0}^{K-1}a_{k+1}\nabla
f(\bm{x}_{k}),$ (2.14)
then $\mathcal{C}_{K}\leq\mathcal{C}_{0}.$ Further, the largest growth of
$\frac{A_{K}}{A_{0}}$ for which both of these conditions can be satisfied is
$O(K^{2}).$
###### Proof.
Telescoping the inequality from Lemma 2.5, we have:
$\displaystyle\mathcal{C}_{K}-\mathcal{C}_{0}\leq\;$ $\displaystyle
A_{K}\left\langle\nabla f(\bm{x}_{K}),\bm{x}_{K}-\bm{y}_{K-1}\right\rangle$
$\displaystyle+\sum_{k=0}^{K-1}\left\langle\nabla
f(\bm{x}_{k}),A_{k+1}\bm{y}_{k}-A_{k}\bm{y}_{k-1}-a_{k+1}\bm{y}_{K}\right\rangle.$
Observe that $\nabla f(\bm{x}_{K})$ only appears in the first term and as part
of $\bm{y}_{K}=\bm{x}_{K}-\frac{1}{L}\nabla f(\bm{x}_{K}).$ Thus, grouping the
terms that multiply $\nabla f(\bm{x}_{K}),$ we can equivalently write
$\displaystyle\mathcal{C}_{K}-\mathcal{C}_{0}\leq\;$
$\displaystyle\left\langle\nabla
f(\bm{x}_{K}),A_{K}(\bm{x}_{K}-\bm{y}_{K-1})+\frac{1}{L}\sum_{k=0}^{K-1}a_{k+1}\nabla
f(\bm{x}_{k})\right\rangle$ $\displaystyle+\sum_{k=0}^{K-1}\left\langle\nabla
f(\bm{x}_{k}),A_{k+1}\bm{y}_{k}-A_{k}\bm{y}_{k-1}-a_{k+1}\bm{x}_{K}\right\rangle.$
The choice of $\bm{x}_{K}$ from Eq. (2.14) ensures that the first term on the
right-hand side is zero (and this is how it was chosen). The rest of the terms
can be expressed as a function of gradients up to the $(K-1)^{\mathrm{th}}$
one. To simplify the notation, let us define
$\bm{g}_{K-1}=\frac{1}{L}\sum_{k=0}^{K-1}a_{k+1}\nabla f(\bm{x}_{k}).$ Then,
we have
$\displaystyle\mathcal{C}_{K}-\mathcal{C}_{0}\leq\sum_{k=0}^{K-1}\left\langle\nabla
f(\bm{x}_{k}),A_{k+1}\bm{y}_{k}-A_{k}\bm{y}_{k-1}-a_{k+1}\Big{(}\bm{y}_{K-1}-\frac{\bm{g}_{K-1}}{A_{K}}\Big{)}\right\rangle.$
(2.15)
Observe that, as
$\bm{y}_{k}=\bm{x}_{0}-\frac{1}{L}\sum_{i=0}^{k}\beta_{i,k}\nabla
f(\bm{x}_{i})$ by the lemma assumptions, the expression on the right-hand side
can be written as a linear combination of inner products between gradients, as
follows.
$\displaystyle\mathcal{C}_{K}-\mathcal{C}_{0}\leq\frac{1}{L}\sum_{j=0}^{K-1}\sum_{k=j}^{K-1}P_{j,k}\left\langle\nabla
f(\bm{x}_{j}),{\nabla f(\bm{x}_{k})}\right\rangle,$
where, by Eq. (2.15), we have that, for all $0\leq j<k\leq K-1:$
$\displaystyle P_{k,k}$
$\displaystyle=-A_{k+1}\beta_{k,k}+a_{k+1}\Big{(}\beta_{k,K-1}+\frac{a_{k+1}}{{A_{K}}}\Big{)},$
$\displaystyle P_{j,k}$
$\displaystyle=-A_{k+1}\beta_{j,k}+A_{k}\beta_{j,k-1}+a_{k+1}\Big{(}\beta_{j,K-1}+\frac{a_{j+1}}{A_{K}}\Big{)}+a_{j+1}\Big{(}\beta_{k,K-1}+\frac{a_{k+1}}{A_{K}}\Big{)}.$
As, by assumption, $\beta_{k,k}=1,$ conditions in Eqs. (2.12) and (2.13) are
equivalent to $P_{k,k}\leq 0$ and $P_{j,k}=0$, for all $0\leq j<k\leq K-1.$ By
construction, these conditions are sufficient for guaranteeing
$\mathcal{C}_{K}-\mathcal{C}_{0}\leq 0,$ completing the first part of the
proof.
Observe that, given a sequence of positive numbers $\\{a_{k}\\}_{k\geq 0}$ and
$A_{k}=\sum_{j=0}^{k}a_{j},$ all coefficients $\beta_{j,k}$ are uniquely
determined by Eq. (2.13) (as $\beta_{k,k}=1$ by assumption, and the remaining
coefficients can be computed by recursively applying Eq. (2.13)). Thus, the
role of the condition from Eq. (2.12) is to limit the growth of the sequence
$\\{A_{k}\\}_{k\geq 0}.$ Starting with $\beta_{k,k}=1,$ $\forall k$ (which
holds by assumption), it is possible to argue by induction that
$\beta_{j,k}\geq 0,$ $\forall j,k$ (the proof is omitted for brevity). Thus
the condition from Eq. (2.12) implies that
$\frac{a_{k+1}}{A_{K}}\leq\frac{A_{k+1}}{a_{k+1}}.$ Equivalently, $\forall
k\leq K-1$:
$\frac{{a_{k+1}}^{2}}{A_{k+1}}\leq A_{K}.$ (2.16)
For any fixed $A_{K},$ Eq. (2.16) implies that $\frac{A_{k}}{A_{0}}$ cannot
grow faster than quadratically with $k,$ for $k\leq K-1.$ It remains to argue
that the sequence does not make a big jump from $A_{K-1}$ to $A_{K}.$ This
follows by using again Eq. (2.12) for $k=K-1$ and recalling that
$\beta_{K-1,K-1}=1.$ We then have
$1+\frac{a_{K}}{A_{K}}\leq\frac{A_{K}}{a_{K}}.$
Solving for $\frac{a_{K}}{A_{K}},$ it follows that
$\frac{a_{K}}{A_{K}}\leq\frac{-1+\sqrt{5}}{2}<0.62,$ and, thus,
$\frac{A_{K}}{A_{K-1}}\leq\frac{1}{1-0.62}<3,$ completing the proof that
$\frac{A_{K}}{A_{0}}=O(K^{2}).$ ∎
That $\frac{A_{K}}{A_{0}}=O(K^{2})$ is not surprising: if it were not true,
by the discussion from the beginning of this subsection, we would be able to
obtain an algorithm that converges at rate faster than $1/K^{2},$ which is
impossible, due to the existing lower bounds [13]. This result was rather
included to highlight the role of the conditions from Eqs. (2.12) and (2.13)
in Lemma 2.6: the first condition limits the growth of $\\{A_{k}\\}_{k\geq
0},$ whereas the second determines the step sizes $\beta_{j,k}$ in the
algorithm, given the sequence $\\{A_{k}\\}_{k\geq 0}$.
What remains to be shown is that there is a choice of step sizes $\beta_{j,k}$
that guarantees $\frac{A_{K}}{A_{0}}=\Theta(K^{2}),$ and thus leads to an
algorithm with the optimal convergence rate. This choice is obtained when the
inequality from Eq. (2.12) is satisfied with equality. It is possible to argue
that this choice is also the one that leads to the fastest growth of
$\frac{A_{K}}{A_{0}};$ however, this direction is not pursued here as it
unnecessarily complicates the analysis. Further, when Eq. (2.12) is satisfied
with equality, Eq. (2.13) can be further simplified, and it leads to the
algorithm description that does not necessitate storing all of the gradients,
but only a constant number of $d$-dimensional vectors. However, similar to the
algorithm description in [28], the entire sequence $\\{A_{k}\\}_{k=0}^{K}$
needs to be pre-computed and stored, which appears to be unavoidable. The
algorithm and its convergence rate are summarized in the following theorem.
###### Theorem 2.7 (Convergence of Optimized Gradient Method).
Let $f:\mathbb{R}^{d}\to\mathbb{R}$ be an $L$-smooth convex function and let
$\bm{x}_{0}\in\mathbb{R}^{d}$ be an arbitrary initial point. Let $K\geq 1.$
Consider the following algorithm. Let
$\bm{v}_{0}=\bm{x}_{0}-\frac{A_{1}}{a_{1}L}\nabla f(\bm{x}_{0})$,
$\bm{g}_{0}=a_{1}\nabla f(\bm{x}_{0}).$ For $k=1$ to $K-1,$
$\begin{gathered}\bm{y}_{k-1}=\bm{x}_{k-1}-\frac{1}{L}\nabla
f(\bm{x}_{k-1}),\\\
\bm{x}_{k}=\frac{A_{k}}{A_{k+1}}\bm{y}_{k-1}+\frac{a_{k+1}}{A_{k+1}}\bm{v}_{k-1}-\frac{1}{La_{k+1}}\bm{g}_{k-1},\\\
\bm{v}_{k}=\bm{v}_{k-1}-\frac{1}{L}\frac{A_{k+1}}{a_{k+1}}\nabla
f(\bm{x}_{k}),\;\bm{g}_{k}=\bm{g}_{k-1}+a_{k+1}\nabla
f(\bm{x}_{k}),\end{gathered}$ (2.17)
where the sequence $\\{A_{k}\\}_{k=0}^{K}$ is recursively defined by the
following
$\begin{cases}A_{k}=1,&\text{ if }k=K,\\\
A_{k}=A_{k+1}\big{[}1+\frac{1}{2}A_{k+1}-\frac{1}{2}\sqrt{A_{k+1}(4+A_{k+1})}\big{]},&\text{
if }0\leq k\leq K-1,\end{cases}$ (2.18)
and $a_{k+1}=A_{k+1}-A_{k},$ for $0\leq k\leq K-1.$
If $\bm{x}_{K}$ is defined by
$\bm{x}_{K}=\bm{y}_{K-1}-\frac{1}{A_{K}L}\bm{g}_{K-1},$
then
$\|\nabla
f(\bm{x}_{K})\|^{2}\leq\frac{16L(f(\bm{x}_{0})-f(\bm{x}^{*}))}{(K+2)^{2}},$
where
$\bm{x}^{*}\in\operatorname*{argmin}_{\bm{x}\in\mathbb{R}^{d}}f(\bm{x}).$
###### Proof.
The proof strategy is as follows. We first argue that the algorithm from the
theorem statement satisfies $\mathcal{C}_{K}\leq\mathcal{C}_{0},$ where
$\mathcal{C}_{k}$ is defined by Eq. (2.10). This is done by showing that we
can apply Lemma 2.6. Then, by the definition of $\mathcal{C}_{k},$
$\mathcal{C}_{K}\leq\mathcal{C}_{0}$ is equivalent to
$\|\nabla f(\bm{x}_{K})\|^{2}\leq
2L\frac{A_{0}}{A_{K}}\Big{(}f(\bm{x}_{0})-f(\bm{x}_{K})+\frac{1}{2L}\|\nabla
f(\bm{x}_{0})\|^{2}\Big{)}.$
As $f(\bm{x}_{K})\geq f(\bm{x}^{*})$ and $\frac{1}{2L}\|\nabla
f(\bm{x}_{0})\|^{2}\leq f(\bm{x}_{0})-f(\bm{x}^{*}),$ what then remains to be
argued is that $\frac{A_{0}}{A_{K}}=O(\frac{1}{K^{2}}).$
To apply Lemma 2.6, observe first that the definition of $\bm{x}_{K}$ from the
theorem statement is the same as the definition of $\bm{x}_{K}$ in Lemma 2.6.
For $k\leq K-1,$ let us define
$\bm{x}_{k}=\bm{x}_{0}-\frac{1}{L}\sum_{j=0}^{k-1}\beta_{j,k}\nabla
f(\bm{x}_{j}),$ $\beta_{k,k}=1,$ and
$\bm{y}_{k}=\bm{x}_{k}-\frac{\beta_{k,k}}{L}\nabla f(\bm{x}_{k})$ as in Lemma
2.6 and show that when both conditions from Lemma 2.6 stated in Eqs. (2.12)
and Eq. (2.13) are satisfied with equality, we recover the algorithm from the
theorem statement, and thus the two sequences of points are equivalent, and so
we can conclude that $\mathcal{C}_{K}\leq\mathcal{C}_{0}.$
When Eq. (2.12) holds with equality, we have that
$\beta_{k,K-1}+\frac{a_{k+1}}{A_{K}}=\frac{A_{k+1}}{a_{k+1}}.$ (2.19)
Plugging it into Eq. (2.13), we have
$A_{k+1}\beta_{j,k}=A_{k}\beta_{j,k-1}+a_{k+1}\frac{A_{j+1}}{a_{j+1}}+a_{j+1}\frac{A_{k+1}}{a_{k+1}}.$
(2.20)
Thus, it follows that
$\displaystyle A_{k+1}\bm{x}_{k}-A_{k}\bm{y}_{k-1}$
$\displaystyle=a_{k+1}\bm{x}_{0}-\frac{a_{k+1}}{L}\sum_{j=0}^{k-1}\frac{A_{j+1}}{a_{j+1}}\nabla
f(\bm{x}_{j})-\frac{A_{k+1}}{a_{k+1}L}\sum_{j=0}^{k-1}a_{j+1}\nabla f(\bm{x}_{j})$
$\displaystyle=a_{k+1}\bm{v}_{k-1}-\frac{A_{k+1}}{a_{k+1}L}\bm{g}_{k-1},$
which is the same as the definition of $\bm{x}_{k}$ from Eq. (2.17).
It remains to show that the conditions from Lemma 2.6 imply the recursive
definition of the sequence $\\{A_{k}\\}_{k\geq 0}$ and that
$\frac{A_{K}}{A_{0}}\geq\frac{(K+2)^{2}}{4}.$ This is established by Lemma A.1
in the appendix. ∎
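As a complement (ours, not from the original text), Eqs. (2.17)-(2.18) can be sketched numerically. Note that the sequence $A_{k}$ is computed backward from $A_{K}=1$, so $K$ must be fixed in advance; the $\bm{g}$-term in the $\bm{x}_{k}$ update is scaled by $1/L$, consistently with Eq. (2.14). The quadratic test function, dimension, and seed are illustrative assumptions.

```python
import numpy as np

def ogm_g(grad, x0, L, K):
    """OGM-G per Eqs. (2.17)-(2.18). The sequence A_k is precomputed
    backward from A_K = 1, so K must be fixed in advance."""
    A = np.zeros(K + 1)
    A[K] = 1.0
    for k in range(K - 1, -1, -1):                    # backward recursion (2.18)
        t = A[k + 1]
        A[k] = t * (1.0 + t / 2.0 - 0.5 * np.sqrt(t * (4.0 + t)))
    a = np.diff(A)                                    # a[k] stores a_{k+1} = A_{k+1} - A_k
    x = x0.copy()
    gx = grad(x)                                      # grad f(x_0)
    v = x - (A[1] / (a[0] * L)) * gx                  # v_0
    g_acc = a[0] * gx                                 # g_0 = a_1 grad f(x_0)
    for k in range(1, K):
        y = x - gx / L                                # y_{k-1}
        x = (A[k] / A[k + 1]) * y + (a[k] / A[k + 1]) * v - g_acc / (L * a[k])
        gx = grad(x)                                  # grad f(x_k)
        v = v - (A[k + 1] / (a[k] * L)) * gx
        g_acc = g_acc + a[k] * gx
    return (x - gx / L) - g_acc / (A[K] * L)          # x_K = y_{K-1} - g_{K-1}/(A_K L)

# Illustrative quadratic f(x) = (1/2) x^T diag(d) x, with f(x*) = 0.
rng = np.random.default_rng(1)
diag = np.linspace(0.05, 1.0, 20)
L = float(diag.max())
x0 = rng.standard_normal(20)
K = 30
xK = ogm_g(lambda x: diag * x, x0, L, K)
grad_sq = float(np.sum((diag * xK) ** 2))
f0 = 0.5 * float(diag @ x0**2)
bound = 16.0 * L * f0 / (K + 2) ** 2                  # Theorem 2.7
```

The final gradient satisfies the guarantee $\|\nabla f(\bm{x}_{K})\|^{2}\leq 16L(f(\bm{x}_{0})-f(\bm{x}^{*}))/(K+2)^{2}$ on this instance.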
###### Remark 2.8.
While OGM-G provides the optimal convergence guarantee for the norm of the
gradient, its convergence rate for the optimality gap is not known. Thus, it
does not immediately imply a bound on the norm of the gradient in terms of
$\|\bm{x}^{*}-\bm{x}_{0}\|^{2}.$ However, as observed in [43], it is possible
to obtain a bound of $\|\nabla
f(\bm{x}_{K})\|^{2}=O\big{(}\frac{L^{2}\|\bm{x}^{*}-\bm{x}_{0}\|^{2}}{K^{4}}\big{)}$
from OGM-G, by running Nesterov FGM for $\lfloor K/2\rfloor$ iterations,
followed by $\lceil K/2\rceil$ iterations of OGM-G.
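The two-phase scheme of Remark 2.8 can be sketched as follows (ours, not from the original text). Both methods are re-implemented self-containedly; the composed bound below is obtained by chaining the explicit constants of Theorems 2.3 and 2.7 on an illustrative quadratic, not the (unstated) constant of [43].

```python
import numpy as np

def fgm(grad, x0, L, K):
    """Nesterov FGM, Eq. (2.5), with b_0 = B_0 = 1/2 and b_k = (k+1)/2."""
    x, v = x0.copy(), x0.copy()
    B_prev, b_prev = 0.5, 0.5
    for k in range(1, K + 1):
        b = (k + 1) / 2.0
        B = B_prev + b
        g = grad(x)
        v = v - (b_prev / L) * g
        x = (B_prev / B) * (x - g / L) + (b / B) * v
        B_prev, b_prev = B, b
    return x

def ogm_g(grad, x0, L, K):
    """OGM-G, Eqs. (2.17)-(2.18), with A_K = 1 computed backward."""
    A = np.zeros(K + 1)
    A[K] = 1.0
    for k in range(K - 1, -1, -1):
        t = A[k + 1]
        A[k] = t * (1.0 + t / 2.0 - 0.5 * np.sqrt(t * (4.0 + t)))
    a = np.diff(A)
    x = x0.copy()
    gx = grad(x)
    v = x - (A[1] / (a[0] * L)) * gx
    g_acc = a[0] * gx
    for k in range(1, K):
        x = (A[k] / A[k + 1]) * (x - gx / L) + (a[k] / A[k + 1]) * v \
            - g_acc / (L * a[k])
        gx = grad(x)
        v = v - (A[k + 1] / (a[k] * L)) * gx
        g_acc = g_acc + a[k] * gx
    return (x - gx / L) - g_acc / (A[K] * L)

rng = np.random.default_rng(2)
diag = np.linspace(0.05, 1.0, 20)
L = float(diag.max())
grad = lambda x: diag * x
x0 = rng.standard_normal(20)
K = 60
x_mid = fgm(grad, x0, L, K // 2)             # phase 1: FGM drives the gap down
x_end = ogm_g(grad, x_mid, L, K - K // 2)    # phase 2: OGM-G drives the gradient down
grad_sq = float(np.sum(grad(x_end) ** 2))
f_mid = 0.5 * float(diag @ x_mid**2)
f_mid_bound = 4.0 * L * float(x0 @ x0) / ((K // 2 + 1) * (K // 2 + 2))
end_bound = 16.0 * L * f_mid_bound / (K - K // 2 + 2) ** 2   # O(L^2 R^2 / K^4)
```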
### 2.3 Discussion
Gradient descent is perhaps the simplest method that can be used for
minimizing the gradient norm. We also conjecture that it is, in a certain
sense, optimal.
###### Conjecture 1.
For any $K>0$ and any method that constructs its iterates as
$\bm{x}_{k}=\bm{x}_{0}-\sum_{i=0}^{k-1}\beta_{i,k}\nabla f(\bm{x}_{i}),$ where
$\bm{x}_{0}\in\mathbb{R}^{d}$ is the initial point, $f$ is a convex function
accessed via a gradient oracle, and coefficients $\beta_{i,k}\in\mathbb{R}$
can depend on $L>0,i,k$ but are otherwise chosen independently of $K$ or the
input function $f,$ there exists an $L$-smooth convex input function $f$ and
an absolute constant $C>0$ such that
$\|\nabla f(\bm{x}_{K})\|^{2}\geq C\frac{L(f(\bm{x}_{0})-f(\bm{x}^{*}))}{K}.$
The basis for this conjecture is the numerical evidence from [27, 28], which
seems to suggest that fixing the total number of iterations $K$ and choosing
the coefficients $\beta_{i,k}$ as a function of $K$ is crucial to obtaining the
optimal bound $\|\nabla
f(\bm{x}_{K})\|^{2}=O\Big{(}\frac{L(f(\bm{x}_{0})-f(\bm{x}^{*}))}{K^{2}}\Big{)}$.
We note that the lower bound from Conjecture 1 can be proved under a stricter
condition on coefficients $\beta_{i,k}$ that essentially forces them to be
constant (independent of $i$ and $k$), using the techniques of Arjevani and
Shamir [3]. However, such a lower bound is weak as it not only excludes the
optimal algorithm from [28] (which is desired) but also all variants of
Nesterov FGM considered in [27].
## 3 Small Gradients in Min-Max Optimization
In this section, we consider the problem of making the gradients small in
convex-concave min-max optimization, under the assumption that the operator
$F$ corresponding to the gradient of the objective is cocoercive (see Section
1.2). Similarly as in the case of convex optimization, the potential functions
we consider trade off a notion of an optimality gap with the norm of $F.$
Further, the inequality corresponding to the cocoercivity assumption suffices
to carry out the analysis of standard methods considered here; namely, the
gradient descent-ascent method and Halpern iteration. We also show (in Section
3.3) that these two methods are the best we can hope for when considering
broad classes of methods that capture most of the standard optimization
methods.
### 3.1 Krasnosel’skiĭ-Mann/Gradient Descent-Ascent
Perhaps the simplest potential function that can be considered for min-max
optimization is
$\mathcal{C}_{k}=A_{k}\|F(\bm{u}_{k})\|^{2}+B_{k}\left\langle
F(\bm{u}_{k}),\bm{u}_{k}-\bm{u}^{*}\right\rangle,$ (3.1)
which can be seen as a counterpart to the potential function used for gradient
descent in the previous section. The method that is suitable for the analysis
with this potential function is also the counterpart of gradient descent for
min-max optimization—gradient descent-ascent (GDA), stated as
$\bm{u}_{k+1}=\bm{u}_{k}-\eta_{k}F(\bm{u}_{k}),$
where $\eta_{k}\in(0,\frac{2}{L}).$ This method is also equivalent to the
well-known Krasnosel’skiĭ-Mann iteration for finding fixed points of
nonexpansive (1-Lipschitz) operators. In particular, given a nonexpansive
operator $T:\mathbb{R}^{d}\to\mathbb{R}^{d},$ the Krasnosel’skiĭ-Mann
iteration updates the iterates as
$\bm{u}_{k+1}=(1-\alpha_{k})\bm{u}_{k}+\alpha_{k}T(\bm{u}_{k}),$
where $\alpha_{k}\in(0,1).$ It is a standard fact that $F$ is
$\frac{1}{L}$-cocoercive if and only if $T(\cdot)=\cdot-\frac{2}{L}F(\cdot)$
is nonexpansive (see, e.g., [9, Proposition 4.1]). Thus, if we apply the
Krasnosel’skiĭ-Mann iteration to $T(\cdot)=\cdot-\frac{2}{L}F(\cdot)$, we have
$\bm{u}_{k+1}=\bm{u}_{k}-\frac{2\alpha_{k}}{L}F(\bm{u}_{k}),$
which is precisely GDA with $\eta_{k}=\frac{2\alpha_{k}}{L}.$
For simplicity, in the following we analyze GDA with the step size
$\eta_{k}=\eta=\frac{1}{L},$ which is the optimal step size for this method.
The analysis, however, extends to any step sizes $\eta_{k}\in(0,\frac{2}{L})$ in
a straightforward manner. The convergence result is summarized in the
following lemma.
###### Lemma 3.1 (Convergence of Gradient Descent-Ascent).
Let $F:\mathbb{R}^{d}\to\mathbb{R}^{d}$ be a $\frac{1}{L}$-cocoercive
operator, $\bm{u}_{0}\in\mathbb{R}^{d}$ be an arbitrary initial point, and let
$\bm{u}_{k+1}=\bm{u}_{k}-\frac{1}{L}F(\bm{u}_{k})$ for $k\geq 0.$ Then,
$\forall k\geq 1:$
$\|F(\bm{u}_{k})\|\leq\frac{L\|\bm{u}_{0}-\bm{u}^{*}\|}{\sqrt{k/2+1}},$
where $\bm{u}^{*}$ is such that $F(\bm{u}^{*})=\textbf{0}.$
###### Proof.
The proof relies on showing that the potential function $\mathcal{C}_{k}$
satisfies $\mathcal{C}_{k}\leq\mathcal{C}_{k-1}+E_{k},$ where $E_{k}$ only
contains terms that telescope, for suitably chosen sequences of positive
numbers $\\{A_{k}\\}_{k\geq 0}$ and $\\{B_{k}\\}_{k\geq 0}.$
Let us start with bounding $\mathcal{C}_{0}.$ As
$\bm{u}_{1}=\bm{u}_{0}-\frac{1}{L}F(\bm{u}_{0}),$ we have
$\displaystyle\mathcal{C}_{0}$
$\displaystyle=A_{0}\|F(\bm{u}_{0})\|^{2}+B_{0}\left\langle
F(\bm{u}_{0}),\bm{u}_{0}-\bm{u}^{*}\right\rangle$
$\displaystyle=A_{0}\|F(\bm{u}_{0})\|^{2}+B_{0}L\left\langle\bm{u}_{0}-\bm{u}_{1},\bm{u}_{0}-\bm{u}^{*}\right\rangle$
$\displaystyle=A_{0}\|F(\bm{u}_{0})\|^{2}+\frac{B_{0}L}{2}\big{(}\|\bm{u}_{0}-\bm{u}^{*}\|^{2}-\|\bm{u}_{1}-\bm{u}^{*}\|^{2}+\|\bm{u}_{0}-\bm{u}_{1}\|^{2}\big{)}$
$\displaystyle=\Big{(}A_{0}+\frac{B_{0}}{2L}\Big{)}\|F(\bm{u}_{0})\|^{2}+\frac{B_{0}L}{2}\big{(}\|\bm{u}_{0}-\bm{u}^{*}\|^{2}-\|\bm{u}_{1}-\bm{u}^{*}\|^{2}\big{)}.$
Eq. (1.7) implies $\|F(\bm{u}_{0})\|^{2}\leq
L^{2}\|\bm{u}_{0}-\bm{u}^{*}\|^{2},$ and, thus, we have
$\mathcal{C}_{0}\leq\frac{2A_{0}L^{2}+2B_{0}L}{2}\|\bm{u}_{0}-\bm{u}^{*}\|^{2}-\frac{B_{0}L}{2}\|\bm{u}_{1}-\bm{u}^{*}\|^{2}.$
(3.2)
Now let us consider the change in the potential function
$\mathcal{C}_{k}-\mathcal{C}_{k-1}.$ Note first that, by Eq. (1.7),
$\left\langle
F(\bm{u}_{k-1}),\bm{u}_{k-1}-\bm{u}^{*}\right\rangle\geq\frac{1}{L}\|F(\bm{u}_{k-1})\|^{2}.$
Thus:
$\displaystyle\mathcal{C}_{k}-\mathcal{C}_{k-1}=\;$ $\displaystyle
A_{k}\|F(\bm{u}_{k})\|^{2}-A_{k-1}\|F(\bm{u}_{k-1})\|^{2}+B_{k}\left\langle
F(\bm{u}_{k}),\bm{u}_{k}-\bm{u}^{*}\right\rangle$ $\displaystyle-
B_{k-1}\left\langle F(\bm{u}_{k-1}),\bm{u}_{k-1}-\bm{u}^{*}\right\rangle$
$\displaystyle\leq\;$ $\displaystyle
A_{k}\|F(\bm{u}_{k})\|^{2}-\Big{(}A_{k-1}+\frac{B_{k-1}}{L}\Big{)}\|F(\bm{u}_{k-1})\|^{2}+B_{k}\left\langle
F(\bm{u}_{k}),\bm{u}_{k}-\bm{u}^{*}\right\rangle.$
Using that $F(\bm{u}_{k})=L(\bm{u}_{k}-\bm{u}_{k+1}),$ we have that
$\left\langle
F(\bm{u}_{k}),\bm{u}_{k}-\bm{u}^{*}\right\rangle=\frac{1}{2L}\|F(\bm{u}_{k})\|^{2}+\frac{L}{2}\|\bm{u}_{k}-\bm{u}^{*}\|^{2}-\frac{L}{2}\|\bm{u}_{k+1}-\bm{u}^{*}\|^{2},$
which leads to
$\displaystyle\mathcal{C}_{k}-\mathcal{C}_{k-1}\leq\;$
$\displaystyle\Big{(}A_{k}+\frac{B_{k}}{2L}\Big{)}\|F(\bm{u}_{k})\|^{2}-\Big{(}A_{k-1}+\frac{B_{k-1}}{L}\Big{)}\|F(\bm{u}_{k-1})\|^{2}$
$\displaystyle+\frac{B_{k}L}{2}\|\bm{u}_{k}-\bm{u}^{*}\|^{2}-\frac{B_{k}L}{2}\|\bm{u}_{k+1}-\bm{u}^{*}\|^{2}.$
On the other hand, by Eq. (1.6) and
$\bm{u}_{k}=\bm{u}_{k-1}-\frac{1}{L}F(\bm{u}_{k-1})$, we have that
$\|F(\bm{u}_{k})\|^{2}\leq\left\langle
F(\bm{u}_{k}),F(\bm{u}_{k-1})\right\rangle,$ and, consequently,
$\|F(\bm{u}_{k})\|\leq\|F(\bm{u}_{k-1})\|.$ Thus, for
$\mathcal{C}_{k}-\mathcal{C}_{k-1}$ to contain only telescoping terms, it
suffices that $A_{k}+\frac{B_{k}}{2L}-A_{k-1}-\frac{B_{k-1}}{L}\leq 0$ and
that $\\{B_{k}\\}_{k\geq 0}$ is non-increasing. In particular, taking
$B_{k}=1$ and $A_{k+1}=A_{k}+\frac{1}{2L}=A_{0}+\frac{k+1}{2L}$ for all $k\geq
0,$ we have
$\mathcal{C}_{k}-\mathcal{C}_{k-1}\leq\frac{L}{2}\|\bm{u}_{k}-\bm{u}^{*}\|^{2}-\frac{L}{2}\|\bm{u}_{k+1}-\bm{u}^{*}\|^{2}.$
(3.3)
Telescoping Eq. (3.3) and combining with Eq. (3.2), we then get
$\mathcal{C}_{k}\leq\frac{2A_{0}L^{2}+2L}{2}\|\bm{u}_{0}-\bm{u}^{*}\|^{2}-\frac{L}{2}\|\bm{u}_{k+1}-\bm{u}^{*}\|^{2}\leq\frac{2A_{0}L^{2}+2L}{2}\|\bm{u}_{0}-\bm{u}^{*}\|^{2}.$
Taking $A_{0}=0$ and observing that, by Eq. (1.7),
$\mathcal{C}_{k}\geq\big{(}A_{k}+\frac{B_{k}}{L}\big{)}\|F(\bm{u}_{k})\|^{2}=\frac{k+2}{2L}\|F(\bm{u}_{k})\|^{2},$
we finally get
$\|F(\bm{u}_{k})\|^{2}\leq\frac{2L^{2}\|\bm{u}_{0}-\bm{u}^{*}\|^{2}}{k+2}.$
It remains to take the square-root on both sides of the last inequality. ∎
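Lemma 3.1 can be sanity-checked on a simple cocoercive operator (our illustrative choice, not from the original text): for $f(x,y)=\frac{\alpha}{2}x^{2}+\beta xy-\frac{\alpha}{2}y^{2}$ with $\alpha>0$, the operator $F(x,y)=(\alpha x+\beta y,\,-\beta x+\alpha y)$ satisfies $\left\langle F(\bm{u})-F(\bm{v}),\bm{u}-\bm{v}\right\rangle=\alpha\|\bm{u}-\bm{v}\|^{2}$ and $\|F(\bm{u})-F(\bm{v})\|^{2}=(\alpha^{2}+\beta^{2})\|\bm{u}-\bm{v}\|^{2}$, so it is $\frac{1}{L}$-cocoercive for $L=(\alpha^{2}+\beta^{2})/\alpha$.

```python
import numpy as np

# F(u) = M u with M = [[alpha, beta], [-beta, alpha]] is (1/L)-cocoercive
# for L = (alpha^2 + beta^2) / alpha; its unique zero is u* = 0.
alpha, beta = 1.0, 3.0
L = (alpha**2 + beta**2) / alpha
M = np.array([[alpha, beta], [-beta, alpha]])
F = lambda u: M @ u

u = np.array([5.0, -3.0])        # u_0
u0_norm = float(np.linalg.norm(u))
K = 100
for _ in range(K):
    u = u - F(u) / L             # GDA with step size 1/L
res = float(np.linalg.norm(F(u)))
bound = L * u0_norm / np.sqrt(K / 2 + 1)   # Lemma 3.1
```

Note that for the purely bilinear case $\alpha=0$ the operator is no longer cocoercive and GDA diverges, which is why $\alpha>0$ is assumed here.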
### 3.2 Halpern Iteration
It seems reasonable now to ask whether it is possible to obtain faster rates
than for GDA by considering a different potential function that trades off the
gradient/operator norm for a notion of an optimality gap w.r.t. an anchor
point, similar to how we obtained faster rates for convex optimization. It
turns out that the answer is “yes,” using the initial point $\bm{u}_{0}$ as
the anchor. The resulting potential function is
$\mathcal{C}_{k}=A_{k}\|F(\bm{u}_{k})\|^{2}+B_{k}\left\langle
F(\bm{u}_{k}),\bm{u}_{k}-\bm{u}_{0}\right\rangle$
and it corresponds to the well-known Halpern iteration
$\bm{u}_{k+1}=\lambda_{k+1}\bm{u}_{0}+(1-\lambda_{k+1})T(\bm{u}_{k}),$ (3.4)
where, similarly as in the case of GDA, $T(\cdot)=\cdot-\frac{2}{L}F(\cdot)$
is a nonexpansive operator. We note that a similar potential function was used
in [15] to analyze the convergence of Halpern iteration.
Here we show that this potential function in fact leads to the Halpern
iteration as a natural algorithm that guarantees that $\mathcal{C}_{k}$ is
non-increasing. The main convergence result is summarized in the following
lemma, whose proof reveals how the Halpern iteration emerges from the chosen
potential function.
###### Lemma 3.2 (Convergence of Halpern Iteration).
Let $F:\mathbb{R}^{d}\to\mathbb{R}^{d}$ be a $\frac{1}{L}$-cocoercive
operator, $\bm{u}_{0}\in\mathbb{R}^{d}$ be an arbitrary initial point, and,
for $k\geq 0,$ let
$\bm{u}_{k+1}=\frac{1}{k+2}\bm{u}_{0}+\frac{k+1}{k+2}\Big(\bm{u}_{k}-\frac{2}{L}F(\bm{u}_{k})\Big).$
Then, $\forall k\geq 1,$ we have
$\|F(\bm{u}_{k})\|\leq\frac{L\|\bm{u}_{0}-\bm{u}^{*}\|}{k+1},$
where $\bm{u}^{*}$ satisfies $F(\bm{u}^{*})=\textbf{0}.$
###### Proof.
The claim trivially holds if $\|F(\bm{u}_{k})\|=0,$ so assume throughout that
$\|F(\bm{u}_{k})\|\neq 0.$
Consider bounding $\mathcal{C}_{k}-\mathcal{C}_{k-1}$ above by zero. To do so,
we can only rely on cocoercivity of $F$ from Eq. (1.6). Applying Eq. (1.6)
with $\bm{u}=\bm{u}_{k}$ and $\bm{v}=\bm{u}_{k-1}$ and rearranging the terms,
we have
$\displaystyle\frac{1}{L}\|F(\bm{u}_{k})\|^{2}\leq$ $\displaystyle\left\langle
F(\bm{u}_{k}),\bm{u}_{k}-\bm{u}_{k-1}+\frac{2}{L}F(\bm{u}_{k-1})\right\rangle$
(3.5) $\displaystyle-\left\langle
F(\bm{u}_{k-1}),\bm{u}_{k}-\bm{u}_{k-1}\right\rangle-\frac{1}{L}\|F(\bm{u}_{k-1})\|^{2}.$
Combining Eq. (3.5) with the definition of $\mathcal{C}_{k}$ and grouping
appropriate terms, we have
$\displaystyle\mathcal{C}_{k}-\mathcal{C}_{k-1}\leq$
$\displaystyle\left\langle
F(\bm{u}_{k}),A_{k}L\Big{(}\bm{u}_{k}-\bm{u}_{k-1}+\frac{2}{L}F(\bm{u}_{k-1})\Big{)}+B_{k}(\bm{u}_{k}-\bm{u}_{0})\right\rangle$
(3.6) $\displaystyle-\left\langle
F(\bm{u}_{k-1}),A_{k}L(\bm{u}_{k}-\bm{u}_{k-1})+B_{k-1}(\bm{u}_{k-1}-\bm{u}_{0})\right\rangle$
$\displaystyle-(A_{k}+A_{k-1})\|F(\bm{u}_{k-1})\|^{2}.$
For $\bm{u}_{k}$ to be explicitly defined, it cannot depend on
$F(\bm{u}_{k})$. Thus, the only direct way to make the first line of the
right-hand side of Eq. (3.6) non-positive is to set
$A_{k}L\Big(\bm{u}_{k}-\bm{u}_{k-1}+\frac{2}{L}F(\bm{u}_{k-1})\Big)+B_{k}(\bm{u}_{k}-\bm{u}_{0})=0.$
(3.7)
For the remaining terms, it suffices that
$-\left\langle
F(\bm{u}_{k-1}),A_{k}L(\bm{u}_{k}-\bm{u}_{k-1})+B_{k-1}(\bm{u}_{k-1}-\bm{u}_{0})\right\rangle-({A_{k}+A_{k-1}})\|F(\bm{u}_{k-1})\|^{2}\leq
0.$ (3.8)
Rearranging Eq. (3.7) gives the Halpern algorithm from Eq. (3.4) with
$\lambda_{k}=\frac{B_{k}}{A_{k}L+B_{k}},$ i.e.,
$\bm{u}_{k}=\frac{B_{k}}{A_{k}L+B_{k}}\bm{u}_{0}+\frac{A_{k}L}{A_{k}L+B_{k}}\Big{(}\bm{u}_{k-1}-\frac{2}{L}F(\bm{u}_{k-1})\Big{)}.$
(3.9)
The other condition (from Eq. (3.8)) effectively constrains the growth of
$A_{k}$ compared to $B_{k},$ which is expected: otherwise we could prove an
arbitrarily fast convergence rate for Halpern iteration, which is impossible
due to existing lower bounds (see, e.g., [15]).
Combining Eq. (3.7) and Eq. (3.8), we have
$\displaystyle-\left\langle
F(\bm{u}_{k-1}),A_{k}L\bm{u}_{k}-B_{k-1}\bm{u}_{0}-(A_{k}L-B_{k-1})\bm{u}_{k-1}\right\rangle\leq(A_{k}+A_{k-1})\|F(\bm{u}_{k-1})\|^{2}.$
Now, to be able to guarantee that the last inequality is satisfied and
consistent with Eq. (3.9), it is required that
$\frac{B_{k-1}}{A_{k}L}=\frac{B_{k}}{A_{k}L+B_{k}}\quad\text{ and
}\quad\frac{2A_{k}}{A_{k}L+B_{k}}\leq\frac{A_{k}+A_{k-1}}{A_{k}L}.$ (3.10)
In particular, when $B_{k}=k+1$ and $A_{k}=\frac{k(k+1)}{L},$ both conditions
from Eq. (3.10) are satisfied with equality.
Hence, for $B_{k}=k+1,$ $A_{k}=\frac{k(k+1)}{L},$ and
$\lambda_{k}=\frac{B_{k}}{A_{k}L+B_{k}}=\frac{1}{k+1},$ we have that
$\mathcal{C}_{k}\leq\mathcal{C}_{0}.$ By definition, and as $A_{0}=0,$ we have
that $\mathcal{C}_{0}=0$. Thus, $\mathcal{C}_{k}\leq 0,$ $\forall k\geq 1,$
and it follows that
$\displaystyle\|F(\bm{u}_{k})\|^{2}$
$\displaystyle\leq\frac{B_{k}}{A_{k}}\left\langle
F(\bm{u}_{k}),\bm{u}_{0}-\bm{u}_{k}\right\rangle$
$\displaystyle=\frac{L}{k}\big{(}\left\langle
F(\bm{u}_{k}),\bm{u}^{*}-\bm{u}_{k}\right\rangle+\left\langle
F(\bm{u}_{k}),\bm{u}_{0}-\bm{u}^{*}\right\rangle\big{)}$
$\displaystyle\leq\frac{L}{k}\Big{(}-\frac{1}{L}\|F(\bm{u}_{k})\|^{2}+\|F(\bm{u}_{k})\|\|\bm{u}_{0}-\bm{u}^{*}\|\Big{)},$
where the last inequality is by Eq. (1.7) and Cauchy-Schwarz. To complete the
proof, it remains to rearrange the last inequality and divide both sides by
$\|F(\bm{u}_{k})\|.$ ∎
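Lemma 3.2 can likewise be checked numerically. A minimal Python sketch on the same kind of illustrative cocoercive instance (all constants are assumptions for the demo); the update below is the Halpern iteration with $\lambda_{k}=\frac{1}{k+1}$, written as a recursion for $\bm{u}_{k}$ in terms of $\bm{u}_{k-1}$:

```python
import numpy as np

# Illustrative 1/L-cocoercive operator F(u) = A u + b (eta^2 + alpha^2 <= L*eta).
L, eta = 1.0, 0.5
alpha = np.sqrt(L * eta - eta**2)
A = np.array([[eta, alpha], [-alpha, eta]])
b = np.array([1.0, -2.0])
F = lambda u: A @ u + b

u0 = np.zeros(2)
u_star = -np.linalg.solve(A, b)
D = np.linalg.norm(u0 - u_star)

# Halpern iteration with lambda_k = 1/(k+1):
#   u_k = u0/(k+1) + (k/(k+1)) * (u_{k-1} - (2/L) F(u_{k-1}))
u = u0.copy()
for k in range(1, 201):
    u = u0 / (k + 1) + (k / (k + 1)) * (u - (2.0 / L) * F(u))
    assert np.linalg.norm(F(u)) <= L * D / (k + 1) + 1e-9
print("||F(u_k)|| <= L D / (k+1) held for k = 1, ..., 200")
```

The slack constant in the assertion absorbs floating-point roundoff; on some instances the bound is attained with equality at $k=1$.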
### 3.3 Lower Bounds for Cocoercive Operators
In this section, we provide a lower bound that applies to the class of
algorithms that construct their iterates as the sum of an initial point and a
linear combination of the cocoercive operator
$F:\mathbb{R}^{d}\to\mathbb{R}^{d}$ evaluated at any of the points seen up to
the current iteration. In particular, given a $\frac{1}{L}$-cocoercive
operator $F:\mathbb{R}^{d}\to\mathbb{R}^{d},$ an algorithm’s iterate
$\bm{u}_{k}$ at iteration $k$ can be expressed as
$\bm{u}_{k}=\bm{u}_{0}-\sum_{i=0}^{k-1}\beta_{i,k}F(\bm{u}_{i}),$ (3.11)
where $\beta_{i,k}$ are real coefficients that can depend on $L$ but are
otherwise independent of $F$. To state the lower bound, we use
$\mathcal{F}_{L,D}$ to denote the class of problems with
$\frac{1}{L}$-cocoercive operators $F$ that satisfy
$\|\bm{u}^{*}-\bm{u}_{0}\|\leq D,$ where $\bm{u}_{0}\in\mathbb{R}^{d}$ is an
arbitrary initial point and $\bm{u}^{*}$ is such that
$F(\bm{u}^{*})=\textbf{0}.$ We assume w.l.o.g. that $d$ is even.
To derive the lower bound, we use the framework developed in [2, 3]. To make
use of this framework, which relies on the use of Chebyshev polynomials, it is
necessary to construct hard instances corresponding to linear operators
$F(\bm{u})=\bm{A}\bm{u}+\bm{b},$ where $\bm{A}\in\mathbb{R}^{d\times d}$ and
$\bm{b}\in\mathbb{R}^{d}.$ We note that such an approach was also used in [22]
for the class of monotone Lipschitz operators. However, here we aim to provide
a lower bound for the more restricted class of cocoercive operators, which
necessitates a separate construction. In particular, the monotone operator
from the lower bound instance used in [22] is not cocoercive as it corresponds
to a bilinear function; in fact, it satisfies $\left\langle
F(\bm{u})-F(\bm{v}),\bm{u}-\bm{v}\right\rangle=0,$
$\forall\bm{u},\bm{v}\in\mathbb{R}^{d}.$
Before delving into the technical details of our lower bound, we first provide
definitions and supporting claims from [2] that are needed for stating and
proving it. A useful definition is that of 1-SCLI algorithms, which allows
abstracting algorithms of the form from Eq. (3.11) through the lens of
Chebyshev polynomials. Here, we adopt the terminology from [22], which
somewhat blurs the lines between various definitions (of stationary,
oblivious, $p$-SCLI) algorithm types from [3, 2], but provides perhaps the
simplest way of stating the results.
###### Definition 3.3 (1-SCLI Algorithms).
An optimization algorithm $\mathcal{A}$ acting on the class of linear
operators $F:\mathbb{R}^{d}\to\mathbb{R}^{d}$ of the form
$F(\bm{u})=\bm{A}\bm{u}+\bm{b}$, where $\bm{A}\in\mathbb{R}^{d\times d},$
$\bm{b}\in\mathbb{R}^{d},$ is said to be $1$-stationary canonical linear
iterative ($1$-SCLI) over $\mathbb{R}^{d}$ if, given an initial point
$\bm{u}_{0}\in\mathbb{R}^{d}$, there exist mappings
$C_{0}(\mathbf{A}),N(\mathbf{A}):\mathbb{R}^{d\times d}\to\mathbb{R}^{d\times
d}$ such that for all $k\geq 1$ the iterates of $\mathcal{A}$ can be expressed
as
$\bm{u}_{k}=C_{0}(\bm{A})\bm{u}_{k-1}+N(\bm{A})\bm{b}.$
Observe here that Definition 3.3 imposes no restrictions on what kind of
mappings $C_{0}$ and $N$ can be. In particular, they can be polynomials of an
arbitrary degree. This is important because choosing polynomials of degree $K$
would allow us to emulate arbitrary algorithms of the form from Eq. (3.11) run
over $K$ iterations, as $F$ is assumed to be linear (this observation is
typically used in the analysis of the classical conjugate gradient method;
see, e.g., [45, Chapter 5]). On the other hand, restricting the degree of the
polynomials would restrict the adaptivity of coefficients $\beta_{i,k}$, as
$C_{0},N$ remain fixed for all $k.$ In this context, both GDA and Halpern
iteration (when restricted to be run over a fixed number $K$ of iterations)
can be viewed as 1-SCLI algorithms, with the following crucial difference. For
GDA with a fixed step size $\eta$, we have
$\bm{u}_{k}=(\bm{I}-\eta\bm{A})\bm{u}_{k-1}-\eta\bm{b},$
i.e., $C_{0}$ is of degree one and $N$ is of degree zero. On the other hand,
for Halpern iteration,
$\displaystyle\bm{u}_{k}$
$\displaystyle=\lambda_{k}\bm{u}_{0}+(1-\lambda_{k})\Big(\bm{I}-\frac{2}{L}\bm{A}\Big)\bm{u}_{k-1}-\frac{2(1-\lambda_{k})}{L}\bm{b}.$
(3.12)
By recursively applying Eq. (3.12) and rolling it down to zero, we get that
$\bm{u}_{k}$ can be expressed as
$\bm{u}_{k}=C_{0}(\bm{A})\bm{u}_{0}+N(\bm{A})\bm{b}$ using $C_{0}$ that is a
polynomial of degree $k$ and $N$ that is a polynomial of degree $k-1.$ In
other words, we can view $k$ iterations of Halpern’s algorithm as one
iteration of a 1-SCLI algorithm, using polynomial maps $C_{0}$ and $N$ of
suitably large degrees. This is crucial for understanding the statement of the
lower bound, which will effectively tell us that GDA is iteration complexity-
optimal among all algorithms of the form from Eq. (3.11) that choose step
sizes $\beta_{i,k}$ independently of $k,$ while Halpern iteration is iteration
complexity-optimal over all algorithms that are allowed to adapt
$\beta_{i,k}$’s to $k.$
In the following, we further restrict our attention to operators $F$
corresponding to full-rank matrices $\bm{A}.$ This is convenient because the
optimal solution $\bm{u}^{*}$ for which $F(\bm{u}^{*})=\textbf{0}$ can be
expressed in closed form as $\bm{u}^{*}=-\bm{A}^{-1}\bm{b}.$ This allows us to
relate the polynomials $C_{0}$ and $N$ under a minimal (and standard [3, 2,
22]) assumption that the 1-SCLI algorithms we consider are _consistent_ (or
convergent). We note here that the consistency condition is not necessary; it
is rather the case that the proof relies on the relationship between $C_{0}$
and $N$ from Eq. (3.13), for which the natural consistency condition suffices.
###### Definition 3.4 (Consistency).
A 1-SCLI algorithm $\mathcal{A}$ is said to be consistent w.r.t. a full-rank
matrix $\bm{A}$ if for any $\bm{b}\in\mathbb{R}^{d}$ we have that $\bm{u}_{k}$
converges to $\bm{u}^{*}=-\bm{A}^{-1}\bm{b}$. A 1-SCLI algorithm is said to be
consistent if it is consistent w.r.t. any full-rank matrix $\bm{A}.$
The relationship between $C_{0}$ and $N$ for consistent algorithms is
characterized by the following lemma.
###### Lemma 3.5 (Consistency of 1-SCLI Algorithms [2]).
If a 1-SCLI algorithm is consistent w.r.t. $\bm{A}$, then
$C_{0}(\bm{A})=\bm{I}+N(\bm{A})\bm{A}.$ (3.13)
Finally, the following auxiliary lemma will be useful when proving our lower
bound.
###### Lemma 3.6 ([22, Lemma 13]).
Let $L>0,$ let $p$ and $k$ be arbitrary but fixed non-negative integers, and
let $r(y)$ be a polynomial with real-valued coefficients of degree at most
$p$, such that $r(0)=1$. Then:
$\sup_{y\in(0,L]}y|r(y)|^{k}\geq\sup_{y\in[L/(20p^{2}k),L]}y|r(y)|^{k}>\frac{L}{40p^{2}k}.$
(3.14)
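Lemma 3.6 can be sanity-checked numerically for a concrete polynomial; a minimal Python sketch, where the choice $r(y)=(1-y/L)^{p}$ (degree $p$, $r(0)=1$) and all parameter values are illustrative assumptions:

```python
import numpy as np

# Grid-based check of sup_{y in (0, L]} y |r(y)|^k > L / (40 p^2 k)
# for the illustrative polynomial r(y) = (1 - y/L)^p.
L, p, k = 1.0, 3, 5
y = np.linspace(1e-6, L, 100_001)
r = (1.0 - y / L) ** p
sup_val = float(np.max(y * np.abs(r) ** k))
lower = L / (40 * p**2 * k)
assert sup_val > lower
print(f"sup y|r(y)|^k ~ {sup_val:.4g} > L/(40 p^2 k) = {lower:.4g}")
```

A finite grid only lower-bounds the supremum, which is enough here since the lemma's inequality already holds on the grid.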
We are now ready to state and prove our lower bound.
###### Theorem 3.7.
Let $p,K$ be any two positive integer numbers, and let $L,D>0.$ Then, for any
consistent 1-SCLI algorithm $\mathcal{A}$ acting on instances from
$\mathcal{F}_{L,D}$, initialized at $\bm{u}_{0}=\textbf{0}$ and for which
$N(\bm{A})$ is a matrix polynomial of degree at most $p-1$,
$\sup_{F\in\mathcal{F}_{L,D}}\|F(\bm{u}_{K})\|\geq\frac{LD}{4p\sqrt{5K}}.$
###### Proof.
Similar to [22], we start by showing that
$\bm{u}_{k}=(C_{0}(\bm{A})^{k}-\bm{I})\bm{A}^{-1}\bm{b},$ (3.15)
for all $k\geq 0$. This claim follows by induction on $k$. The base case $k=0$
is immediate. For the inductive step, suppose that Eq. (3.15) holds for some
$k-1\geq 0.$ Then by the definition of 1-SCLI algorithms and the consistency
of $\mathcal{A}$ (Definitions 3.3 and 3.4):
$\displaystyle\bm{u}_{k}$
$\displaystyle=C_{0}(\bm{A})\bm{u}_{k-1}+N(\bm{A})\bm{b}$
$\displaystyle=C_{0}(\bm{A})(C_{0}(\bm{A})^{k-1}-\bm{I})\bm{A}^{-1}\bm{b}+(C_{0}(\bm{A})-\bm{I})\bm{A}^{-1}\bm{b}$
$\displaystyle=(C_{0}(\bm{A})^{k}-\bm{I})\bm{A}^{-1}\bm{b}.$
Therefore, $F(\bm{u}_{k})$ can be expressed as
$F(\bm{u}_{k})=\bm{A}\bm{u}_{k}+\bm{b}=C_{0}(\bm{A})^{k}\bm{b}.$ (3.16)
Let us now specify the “hard instance.” Consider
$F(\bm{u})=\bm{A}\bm{u}+\bm{b},$ where $\bm{A}$ can be expressed as
$\bm{A}=\begin{bmatrix}\eta\bm{I}&\alpha\bm{I}\\ -\alpha\bm{I}&\eta\bm{I}\end{bmatrix}$ for some
$\eta,\alpha\in\mathbb{R}_{+}.$ (Observe that such an $F$ can be obtained from
the convex-concave objective
$\phi(\bm{x},\bm{y})=\frac{1}{2}\eta\bm{x}^{T}\bm{x}-\frac{1}{2}\eta\bm{y}^{T}\bm{y}+\alpha\bm{x}^{T}\bm{y}+\bm{b}_{1}^{T}\bm{x}-\bm{b}_{2}^{T}\bm{y},$
where $\bm{x},\bm{y},\bm{b}_{1},\bm{b}_{2}\in\mathbb{R}^{{d}/{2}}$,
$\bm{b}=[{\bm{b}_{1}}^{T}{\bm{b}_{2}}^{T}]^{T}$.)
Let us now argue that for suitably chosen $\eta,\alpha,$ we have that $F$ is
$\frac{1}{L}$-cocoercive. Let $\bm{u}=[\bm{x}^{T}\,\bm{y}^{T}]^{T}$,
$\bar{\bm{u}}=[\bm{\bar{x}}^{T}\,\bm{\bar{y}}^{T}]^{T}$ be an arbitrary pair
of vectors from $\mathbb{R}^{d},$ where
$\bm{x},\bm{y},\bm{\bar{x}},\bm{\bar{y}}\in\mathbb{R}^{d/2}.$ Then
$\left\langle
F(\bm{u})-F(\bar{\bm{u}}),\bm{u}-\bar{\bm{u}}\right\rangle=\eta\|\bm{u}-\bar{\bm{u}}\|^{2}$
and
$\|F(\bm{u})-F(\bar{\bm{u}})\|^{2}=(\eta^{2}+\alpha^{2})\|\bm{u}-\bar{\bm{u}}\|^{2}.$
Hence, for $\eta^{2}+\alpha^{2}\leq L\eta,$ we have $\left\langle
F(\bm{u})-F(\bar{\bm{u}}),\bm{u}-\bar{\bm{u}}\right\rangle\geq\frac{1}{L}\|F(\bm{u})-F(\bar{\bm{u}})\|^{2},$
i.e., $F$ is $\frac{1}{L}$-cocoercive.
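The two displayed identities and the resulting cocoercivity condition can be verified numerically; a minimal Python sketch (the dimension and the values of $L$ and $\eta$ are illustrative assumptions):

```python
import numpy as np

# Hard instance F(u) = A u + b with the 2x2 block matrix
# A = [[eta I, alpha I], [-alpha I, eta I]] and alpha^2 = L*eta - eta^2.
rng = np.random.default_rng(0)
d, L, eta = 4, 2.0, 0.7
alpha = np.sqrt(L * eta - eta**2)
I = np.eye(d // 2)
A = np.block([[eta * I, alpha * I], [-alpha * I, eta * I]])
b = rng.standard_normal(d)
F = lambda u: A @ u + b

for _ in range(100):
    u, v = rng.standard_normal(d), rng.standard_normal(d)
    du, dF = u - v, F(u) - F(v)
    assert np.isclose(np.dot(dF, du), eta * np.dot(du, du))              # <F(u)-F(v), u-v>
    assert np.isclose(np.dot(dF, dF), (eta**2 + alpha**2) * np.dot(du, du))  # ||F(u)-F(v)||^2
    assert np.dot(dF, du) >= np.dot(dF, dF) / L - 1e-9                   # 1/L-cocoercivity
print("F is 1/L-cocoercive on 100 random pairs")
```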
To complete the proof, it remains to show that
$\sup_{F\in\mathcal{F}_{L,D}}{\|F(\bm{u}_{K})\|}\geq\frac{LD}{p\sqrt{80K}}.$
To do so, observe that by Eq. (3.16), $\bm{u}_{0}=\textbf{0},$ and
$\bm{u}^{*}=-\bm{A}^{-1}\bm{b},$ we have
$\sup_{F\in\mathcal{F}_{L,D}}\frac{\|F(\bm{u}_{K})\|^{2}}{\|\bm{u}^{*}-\bm{u}_{0}\|^{2}}\geq\sup_{\begin{subarray}{c}\eta\in[0,L],\\\
\alpha\in[0,\sqrt{L\eta-\eta^{2}}]\end{subarray}}\frac{\|C_{0}(\bm{A})^{K}\bm{b}\|^{2}}{\|\bm{A}^{-1}\bm{b}\|^{2}},$
where
$\bm{A}=\begin{bmatrix}\eta\bm{I}&\alpha\bm{I}\\ -\alpha\bm{I}&\eta\bm{I}\end{bmatrix}.$
Observe that the characteristic polynomial of $\bm{A}$ is:
$\mathrm{det}(\lambda\bm{I}-\mathbf{A})=((\lambda-\eta)^{2}+\alpha^{2})^{{d}/{2}}.$
Hence, $\bm{A}$ has eigenvalues $\lambda_{1}=\eta+\alpha i$ and
$\lambda_{2}=\eta-\alpha i$, each with multiplicity $d/2$. These conjugate
eigenvalues have the same magnitude $\sqrt{\eta^{2}+\alpha^{2}}$. Accordingly,
$\bm{A}^{-1}$ has eigenvalues
$\lambda_{1}^{\prime}=\frac{1}{\lambda_{1}}$ and
$\lambda_{2}^{\prime}=\frac{1}{\lambda_{2}}$, which are also conjugate and
equal in magnitude. On the other hand, since
$C_{0}(\bm{A})=\bm{I}+N(\bm{A})\bm{A}$, and, by assumption, $N(\bm{A})$ is a
matrix polynomial of degree at most $p-1$ for some $p\in\mathbb{N}$ with real
coefficients, $C_{0}(\bm{A})$ is a polynomial of $\bm{A}$ with
$C_{0}(\textbf{0}_{d\times d})=\bm{I}$. Therefore, it can be expressed as:
$C_{0}(\bm{A})=\bm{I}+r_{1}\bm{A}+r_{2}\mathbf{A}^{2}+\dots+r_{p}\bm{A}^{p},$
for some real-valued coefficients $r_{1},r_{2},\dots,r_{p}$. We denote by
$c_{0}(y)=1+r_{1}y+r_{2}y^{2}+\dots+r_{p}y^{p}$ the polynomial over the
complex field with the same real-valued coefficients. Then, by the spectral
mapping theorem, the eigenvalues of $C_{0}(\bm{A})$ are $c_{0}(\lambda_{1})$
and $c_{0}(\lambda_{2})$, which are again conjugate and have equal magnitudes.
Therefore, we have:
$\displaystyle\sup_{\begin{subarray}{c}\eta\in[0,L]\\\
\alpha\in[0,\sqrt{L\eta-\eta^{2}}]\end{subarray}}\frac{\left\lVert
C_{0}(\mathbf{A})^{K}\bm{b}\right\rVert^{2}}{\left\lVert\mathbf{A}^{-1}\bm{b}\right\rVert^{2}}$
$\displaystyle=\sup_{\begin{subarray}{c}\eta\in[0,L]\\\
\alpha\in[0,\sqrt{L\eta-\eta^{2}}]\end{subarray}}\frac{|c_{0}(\lambda_{1})|^{2K}\left\lVert\bm{b}\right\rVert^{2}}{\frac{1}{|\lambda_{1}|^{2}}\left\lVert\bm{b}\right\rVert^{2}}$
$\displaystyle=\sup_{\begin{subarray}{c}\eta\in[0,L]\\\
\alpha\in[0,\sqrt{L\eta-\eta^{2}}]\end{subarray}}(\eta^{2}+\alpha^{2})|c_{0}(\eta+\alpha
i)|^{2K}.$
To derive the stated lower bound by applying Lemma 3.6, we need to convert the
above expression into a similar form: $\sup_{y\in(0,L]}y|r(y)|^{k}$. Here, we
can observe the difference between the problem we are considering and the
problem discussed in [22]. In [22], the eigenvalues are purely imaginary: $\nu
i$ and $-\nu i$. As a result, the above expression can be written as:
$\sup_{\nu\in(0,L]}\nu^{2}|c_{0}(\nu i)|^{2K}$. Taking the real part of
$c_{0}(\nu i)$ yields the smaller quantity
$\sup_{\nu\in(0,L]}\nu^{2}|1-r_{2}\nu^{2}+r_{4}\nu^{4}-\dots+(-1)^{p^{\prime}}r_{2p^{\prime}}\nu^{2p^{\prime}}|^{2K}$,
where $p^{\prime}=\lfloor p/2\rfloor$. Thus, substituting $y$ for $\nu^{2}$, we
obtain an expression that fits the inequality from Lemma 3.6. However, the same
strategy cannot be applied directly here, since the real part of
$c_{0}(\eta+\alpha i)$ is entangled with both $\alpha$ and $\eta$, making it
impossible to obtain an expression of the form $y|r(y)|^{k}$ by simply taking
the real part.
Nevertheless, since we have the extra freedom of choosing $\alpha$, we can
select $\alpha$ carefully to make the real and imaginary parts of
$c_{0}(\eta+\alpha i)$ separable, while keeping the factor
$\eta^{2}+\alpha^{2}$ large enough. In particular, this can be achieved for:
$\alpha^{2}=L\eta-\eta^{2}.$
Observe that, as long as $\eta\leq L,$ we have
$\alpha\in[0,\sqrt{L\eta-\eta^{2}}],$ as required in the bound above. It
follows that:
$\displaystyle\sup_{\begin{subarray}{c}\eta\in[0,L]\\ \alpha\in[0,\sqrt{L\eta-\eta^{2}}]\end{subarray}}(\eta^{2}+\alpha^{2})|c_{0}(\eta+\alpha i)|^{2K}$
$\displaystyle\geq\sup_{\eta\in[0,L]}L\eta|c_{0}(\eta+\alpha i)|^{2K}$
$\displaystyle=\sup_{\eta\in[0,L]}L\eta|1+r_{1}(\eta+\alpha i)+\dots+r_{p}(\eta+\alpha i)^{p}|^{2K}.$
Observe that $\alpha$ enters the real terms of $(\eta+\alpha i)^{j}$ only with
even powers; therefore, $\mathrm{Re}(c_{0}(\eta+\alpha i))$ is a polynomial in
$\eta$ and $\alpha^{2}$. Since $\alpha^{2}=L\eta-\eta^{2}$, it is in fact a
polynomial in $\eta$ alone, with real-valued coefficients and degree at most
$p$, which we denote by
$c^{\prime}_{0}(\eta)=1+r^{\prime}_{1}\eta+r^{\prime}_{2}\eta^{2}+\dots+r^{\prime}_{p}\eta^{p}$.
Therefore, we get:
$\displaystyle\sup_{\eta\in[0,L]}L\eta|c_{0}(\eta+\alpha i)|^{2K}$
$\displaystyle\geq\sup_{\eta\in(0,L]}L\eta|\mathrm{Re}(c_{0}(\eta+\alpha
i))|^{2K}$
$\displaystyle=\sup_{\eta\in(0,L]}L\eta|c_{0}^{\prime}(\eta)|^{2K}.$
By Lemma 3.6 and $\|\bm{A}^{-1}\bm{b}\|=D$, we now have:
$\displaystyle\sup_{F\in\mathcal{F}_{L,D}}\frac{\|F(\bm{u}_{K})\|^{2}}{\|\bm{u}^{*}-\bm{u}_{0}\|^{2}}=\sup_{F\in\mathcal{F}_{L,D}}\frac{\|F(\bm{u}_{K})\|^{2}}{D^{2}}\geq\sup_{\eta\in(0,L]}L\eta|c_{0}^{\prime}(\eta)|^{2K}\geq\frac{L^{2}}{80p^{2}K},$
and the claimed lower bound follows after rearranging the last inequality. ∎
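The algebraic backbone of this proof, Eqs. (3.13), (3.15), and (3.16), can be verified numerically for GDA viewed as a consistent 1-SCLI algorithm, with $C_{0}(\bm{A})=\bm{I}-\frac{1}{L}\bm{A}$ and $N(\bm{A})=-\frac{1}{L}\bm{I}$; a minimal Python sketch on an illustrative linear instance:

```python
import numpy as np

# GDA on F(u) = A u + b with step 1/L is 1-SCLI: u_k = C0 u_{k-1} + N b.
L, eta, alpha = 1.0, 0.5, 0.5              # eta^2 + alpha^2 <= L*eta holds
A = np.array([[eta, alpha], [-alpha, eta]])
b = np.array([1.0, -2.0])
I = np.eye(2)
C0, N = I - A / L, -I / L
assert np.allclose(C0, I + N @ A)          # consistency relation, Eq. (3.13)

u = np.zeros(2)                            # u_0 = 0
for k in range(1, 11):
    u = C0 @ u + N @ b                     # one 1-SCLI update (= one GDA step)
    Ck = np.linalg.matrix_power(C0, k)
    assert np.allclose(u, (Ck - I) @ np.linalg.solve(A, b))   # Eq. (3.15)
    assert np.allclose(A @ u + b, Ck @ b)                     # Eq. (3.16)
print("Eqs. (3.13), (3.15), (3.16) hold for 10 GDA iterations")
```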
The implications of Theorem 3.7 are as follows. Among all algorithms that
update their iterates as in Eq. (3.11) and use constant (independent of the
iteration count) step sizes $\beta_{i,k},$ GDA is iteration complexity-optimal
for minimizing the norm of a cocoercive operator. This means that other
standard methods such as the extragradient/mirror-prox [29, 39] method, dual
extrapolation [40], or the method of Popov [47], which fall into the same
category, cannot attain a convergence rate for minimizing $\|F(\cdot)\|$ that
is faster than $1/\sqrt{k}.$ Thus, choosing step sizes $\beta_{i,k}$ that
depend on the iteration count is essential for achieving the faster $1/k$ rate
of Halpern’s algorithm. Furthermore, this rate is unimprovable for any of the
typical iterative methods that take the form from Eq. (3.11).
## 4 Conclusion and Future Work
We presented a general and unifying potential function-based framework for
analyzing the convergence of first-order algorithms under the gradient norm
criterion in the settings of convex and min-max optimization. The framework is
intuitive in that it provides an interpretation of the mechanism driving the
convergence as a trade-off between reducing the norm of the gradient and
reducing some notion of an optimality gap.
Many interesting questions for future work remain. In particular, our
framework is primarily applicable to Euclidean setups. Thus, it is an
intriguing question whether it is possible to generalize it to other normed
spaces. We note that beyond the Euclidean setups, the only results with near-
optimal convergence for $\ell_{p}$-normed spaces in the setting of convex
optimization are those for $\ell_{\infty}$ (where an $\ell_{\infty}$ variant
of gradient descent is optimal) and the very recent results for $p\in[1,2]$
that are based on a regularization trick [16]. In a different direction, as
conjectured in Section 2, it appears that fixing either the number of
iterations or the accuracy of the problem in advance is crucial for achieving
near optimal rates in the case of convex objectives, even in Euclidean setups.
Proving such a lower bound would be very interesting, as it would likely
require completely new mathematical techniques. Finally, very little is known
about the convergence in gradient norm in convex-concave min-max optimization
setups, both from the aspect of algorithms and the lower bounds. In
particular, we are not aware of any lower bounds outside of the Euclidean
setup considered here, while, similarly to the case of convex optimization,
the only near-optimal algorithm is based on a regularization trick and applies
only to $p\in[1,2]$ [52].
## References
* [1] Z. Allen-Zhu and L. Orecchia. Linear coupling: An ultimate unification of gradient and mirror descent. In Proc. ITCS’17, 2017.
* [2] Y. Arjevani, S. Shalev-Shwartz, and O. Shamir. On lower and upper bounds in smooth and strongly convex optimization. The Journal of Machine Learning Research, 17(1):4303–4353, 2016.
* [3] Y. Arjevani and O. Shamir. On the iteration complexity of oblivious first-order optimization algorithms. In Proc. ICML’16, pages 908–916, 2016.
* [4] H. Attouch and F. Alvarez. The heavy ball with friction dynamical system for convex constrained minimization problems. In Optimization, pages 25–35. Springer, 2000.
* [5] H. Attouch, J. Bolte, and B. F. Svaiter. Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward–backward splitting, and regularized gauss–seidel methods. Mathematical Programming, 137(1):91–129, 2013.
* [6] H. Attouch, Z. Chbani, J. Fadili, and H. Riahi. First-order optimization algorithms via inertial systems with Hessian driven damping. Mathematical Programming, pages 1–43, 2020.
* [7] H. Attouch, Z. Chbani, and H. Riahi. Rate of convergence of the Nesterov accelerated gradient method in the subcritical case $\alpha\leq 3$. ESAIM: Control, Optimisation and Calculus of Variations, 25:2, 2019.
* [8] H. Attouch, X. Goudou, and P. Redont. The heavy ball with friction method, I. the continuous dynamical system: global exploration of the local minima of a real-valued function by asymptotic analysis of a dissipative dynamical system. Communications in Contemporary Mathematics, 2(1):1–34, 2000.
* [9] H. H. Bauschke and P. L. Combettes. Convex analysis and monotone operator theory in Hilbert spaces, volume 408. Springer, 2011.
* [10] M. Betancourt, M. I. Jordan, and A. C. Wilson. On symplectic optimization. arXiv preprint arXiv:1802.03653, 2018.
* [11] J. Bolte, A. Daniilidis, O. Ley, and L. Mazet. Characterizations of Łojasiewicz inequalities: subgradient flows, talweg, convexity. Transactions of the American Mathematical Society, 362(6):3319–3363, 2010.
* [12] S. Bubeck, Y. T. Lee, and M. Singh. A geometric alternative to Nesterov’s accelerated gradient descent. arXiv preprint, arXiv:1506.08187, 2015.
* [13] Y. Carmon, J. C. Duchi, O. Hinder, and A. Sidford. Lower bounds for finding stationary points I. Mathematical Programming, pages 1–50, 2019.
* [14] E. De Klerk, F. Glineur, and A. B. Taylor. Worst-case convergence analysis of inexact gradient and newton methods through semidefinite programming performance estimation. SIAM Journal on Optimization, 30(3):2053–2082, 2020.
* [15] J. Diakonikolas. Halpern iteration for near-optimal and parameter-free monotone inclusion and strong solutions to variational inequalities. In Proc. COLT’2020, 2020.
* [16] J. Diakonikolas and C. Guzmán. Complementary composite minimization, small gradients in general norms, and applications to regression problems. arXiv preprint, arXiv:2101.11041, 2021.
* [17] J. Diakonikolas and M. I. Jordan. Generalized momentum-based methods: A Hamiltonian perspective. SIAM Journal on Optimization, 2021. To appear.
* [18] J. Diakonikolas and L. Orecchia. Accelerated extra-gradient descent: A novel, accelerated first-order method. In Proc. ITCS’18, 2018.
* [19] J. Diakonikolas and L. Orecchia. The approximate duality gap technique: A unified theory of first-order methods. SIAM Journal on Optimization, 29(1):660–689, 2019.
* [20] Y. Drori and M. Teboulle. Performance of first-order methods for smooth convex minimization: a novel approach. Mathematical Programming, 145(1-2):451–482, 2014.
* [21] D. Drusvyatskiy, M. Fazel, and S. Roy. An optimal first order method based on optimal quadratic averaging. SIAM J. Optimiz., 28(1):251–271, 2018.
* [22] N. Golowich, S. Pattathil, C. Daskalakis, and A. Ozdaglar. Last iterate is slower than averaged iterate in smooth convex-concave saddle point problems. In Proc. COLT’20, 2020.
* [23] B. Halpern. Fixed points of nonexpanding maps. Bulletin of the American Mathematical Society, 73(6):957–961, 1967.
* [24] B. Hu and L. Lessard. Control interpretations for first-order optimization methods. In Proc. IEEE ACC’17, 2017.
* [25] M. Ito and M. Fukuda. Nearly optimal first-order methods for convex optimization under gradient norm measure: An adaptive regularization approach. arXiv preprint arXiv:1912.12004, 2019.
* [26] D. Kim. Accelerated proximal point method and forward method for monotone inclusions. arXiv preprint arXiv:1905.05149, 2019.
* [27] D. Kim and J. A. Fessler. Generalizing the optimized gradient method for smooth convex minimization. SIAM Journal on Optimization, 28(2):1920–1950, 2018.
* [28] D. Kim and J. A. Fessler. Optimizing the efficiency of first-order methods for decreasing the gradient of smooth convex functions. Journal of Optimization Theory and Applications, pages 1–28, 2020.
* [29] G. Korpelevich. Extragradient method for finding saddle points and other problems. Matekon, 13(4):35–49, 1977.
* [30] M. Krasnosel’skii. Two remarks on the method of successive approximations. Uspekhi Mat. Nauk, 10:123–127, 1955.
* [31] W. Krichene, A. Bayen, and P. L. Bartlett. Accelerated mirror descent in continuous and discrete time. In Proc. NIPS’15, 2015.
* [32] L. Lessard, B. Recht, and A. Packard. Analysis and design of optimization algorithms via integral quadratic constraints. SIAM Journal on Optimization, 26(1):57–95, 2016.
* [33] F. Lieder. On the convergence rate of the Halpern iteration. Optimization Letters, pages 1–14, 2020.
* [34] H. Lin, J. Mairal, and Z. Harchaoui. A universal catalyst for first-order optimization. In Proc. NIPS’15, 2015.
* [35] Q. Lin and L. Xiao. An adaptive accelerated proximal gradient method and its homotopy continuation for sparse optimization. In Proc. ICML’14, 2014.
* [36] S. Łojasiewicz. Une propriété topologique des sous-ensembles analytiques réels. Les équations aux dérivées partielles, 117:87–89, 1963.
* [37] S. Łojasiewicz. Ensembles semi-analytiques. IHES notes, 1965.
* [38] W. R. Mann. Mean value methods in iteration. Proceedings of the American Mathematical Society, 4(3):506–510, 1953.
* [39] A. Nemirovski. Prox-method with rate of convergence $O(1/t)$ for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems. SIAM Journal on Optimization, 15(1):229–251, 2004.
* [40] Y. Nesterov. Dual extrapolation and its applications to solving variational inequalities and related problems. Mathematical Programming, 109(2):319–344, 2007.
* [41] Y. Nesterov. How to make the gradients small. Optima. Mathematical Optimization Society Newsletter, (88):10–11, 2012.
* [42] Y. Nesterov. Gradient methods for minimizing composite functions. Mathematical Programming, (140(1)):125–161, 2013.
* [43] Y. Nesterov, A. Gasnikov, S. Guminov, and P. Dvurechensky. Primal–dual accelerated gradient methods with small-dimensional relaxation oracle. Optimization Methods and Software, pages 1–38, 2020.
* [44] Y. E. Nesterov. A method for solving the convex programming problem with convergence rate ${O}(1/k^{2})$. Doklady Akademii Nauk, 269(3):543–547, 1983.
* [45] J. Nocedal and S. Wright. Numerical optimization. Springer Science & Business Media, 2006.
* [46] Y. Ouyang and Y. Xu. Lower complexity bounds of first-order methods for convex-concave bilinear saddle-point problems. Mathematical Programming, Aug 2019.
* [47] L. D. Popov. A modification of the Arrow-Hurwicz method for search of saddle points. Mathematical notes of the Academy of Sciences of the USSR, 28(5):845–848, Nov 1980.
* [48] S. Sabach and S. Shtern. A first order method for solving convex bilevel optimization problems. SIAM Journal on Optimization, 27(2):640–660, 2017.
* [49] D. Scieur, V. Roulet, F. Bach, and A. D’Aspremont. Integration methods and accelerated optimization algorithms. In Proc. NIPS’17, 2017.
* [50] B. Shi, S. S. Du, M. I. Jordan, and W. J. Su. Understanding the acceleration phenomenon via high-resolution differential equations. arXiv preprint, arXiv:1810.08907, 2018.
* [51] C. Song, Y. Jiang, and Y. Ma. Unified acceleration of high-order algorithms under general Hölder continuity. SIAM Journal on Optimization, 2021. To appear.
* [52] C. Song, Z. Zhou, Y. Zhou, Y. Jiang, and Y. Ma. Optimistic dual extrapolation for coherent non-monotone variational inequalities. Proc. NeurIPS’20, 2020.
* [53] W. Su, S. Boyd, and E. J. Candes. A differential equation for modeling Nesterov’s accelerated gradient method: Theory and insights. J. Mach. Learn. Res., 17(153):1–43, 2016.
* [54] A. B. Taylor, J. M. Hendrickx, and F. Glineur. Exact worst-case performance of first-order methods for composite convex optimization. SIAM Journal on Optimization, 27(3):1283–1313, 2017.
* [55] P. Tseng. On accelerated proximal gradient methods for convex-concave optimization, 2008.
* [56] A. Wibisono, A. C. Wilson, and M. I. Jordan. A variational perspective on accelerated methods in optimization. In Proceedings of the National Academy of Sciences, 2016.
* [57] A. C. Wilson, B. Recht, and M. I. Jordan. A Lyapunov analysis of momentum methods in optimization. arXiv preprint, arXiv:1611.02635, 2016.
* [58] C. Zalinescu. Convex analysis in general vector spaces. World scientific, 2002.
* [59] J. Zhang, A. Mokhtari, S. Sra, and A. Jadbabaie. Direct Runge-Kutta discretization achieves acceleration. In Proc. NeurIPS’18, 2018.
## Appendix A Sequence Growth for the Optimized Gradient Method
This section provides a technical lemma used in the proof of Theorem 2.7.
###### Lemma A.1.
Let $\\{\beta_{i,k}\\}_{i\leq k}$, $\\{a_{k}\\}_{k\geq 0},$ and $\\{A_{k}\\}_{k\geq 0}$ be sequences of real numbers that for $k\in\\{0,\dots,K\\}$ satisfy $\beta_{k,k}=1,$ $A_{k}=\sum_{i=0}^{k}a_{i},$
and
$\displaystyle\beta_{k,K-1}+\frac{a_{k+1}}{A_{K}}=\frac{A_{k+1}}{a_{k+1}},$
(A.1) $\displaystyle
A_{k+1}\beta_{j,k}=A_{k}\beta_{j,k-1}+a_{k+1}\Big{(}\beta_{j,K-1}+\frac{a_{j+1}}{A_{K}}\Big{)}+a_{j+1}\Big{(}\beta_{k,K-1}+\frac{a_{k+1}}{A_{K}}\Big{)}.$
(A.2)
Then the sequence $\\{A_{k}\\}_{k\geq 0}$ can be chosen as
$\begin{cases}A_{k}=1,&\text{ if }k=K;\\\
A_{k}=A_{k+1}\big{[}1+\frac{1}{2}A_{k+1}-\frac{1}{2}\sqrt{A_{k+1}(4+A_{k+1})}\big{]},&\text{
if }0\leq k\leq K-1\end{cases}$ (A.3)
and $\frac{A_{K}}{A_{0}}\geq\frac{(K+2)^{2}}{4}.$
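Before the formal proof, the recursion (A.3) and the growth bound can be sanity-checked numerically. The following Python sketch (illustrative, not part of the proof) builds the sequence backwards from $A_{K}=1$ and verifies both the equivalent form (A.4) and the bound $A_{K}/A_{0}\geq(K+2)^{2}/4$:

```python
import math

def build_A(K):
    # Run the recursion (A.3) backwards from A_K = 1.
    A = [0.0] * (K + 1)
    A[K] = 1.0
    for k in range(K - 1, -1, -1):
        x = A[k + 1]
        A[k] = x * (1.0 + 0.5 * x - 0.5 * math.sqrt(x * (4.0 + x)))
    return A

K = 50
A = build_A(K)
# Growth bound of the lemma: A_K / A_0 >= (K + 2)^2 / 4.
ratio = A[K] / A[0]
assert ratio >= (K + 2) ** 2 / 4
# Consistency with (A.4): 1/A_{k-1} = 1/A_k + A_k/a_k, where a_k = A_k - A_{k-1}.
for k in range(1, K + 1):
    a_k = A[k] - A[k - 1]
    assert abs(1.0 / A[k - 1] - (1.0 / A[k] + A[k] / a_k)) < 1e-9 / A[k - 1]
```

The sequence is strictly increasing in $k$, and the ratio exceeds the quadratic lower bound with a margin that grows with $K$.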
###### Proof.
First, we show that the sequence $\\{A_{k}\\}^{K}_{k=0}$ with $A_{K}=1$ satisfying Eq. (A.1) and Eq. (A.2) obeys the following recursive relationship between two successive terms:
$\frac{1}{A_{k-1}}=\frac{1}{A_{k}}+\frac{A_{k}}{a_{k}}$ (A.4)
which, using $a_{k}=A_{k}-A_{k-1}$, is equivalent to:
$\frac{A_{k-1}A_{k}}{a_{k}}=\frac{a_{k}}{A_{k}}.$ (A.5)
Solving for $A_{k-1},$ this relationship leads to Eq. (A.3). We prove the recursive relationship by induction on $k$. First, for the base case $k=K$, setting $k=K-1$ in Eq. (A.1), we have:
$\beta_{K-1,K-1}=\frac{A_{K}}{a_{K}}-\frac{a_{K}}{A_{K}}.$ Since we have set
$A_{K}=1$ and $\beta_{K-1,K-1}=1$, it follows that:
$\frac{a_{K}}{A_{K}}=\frac{A_{K}-a_{K}}{a_{K}}=\frac{A_{K-1}}{a_{K}}=\frac{A_{K}A_{K-1}}{a_{K}}$
which coincides with Eq. (A.5).
Now assume that Eq. (A.4) (equivalently, Eq. (A.5)) holds for
$k=K,K-1,\dots,n+1$, and consider $k=n$. Setting $k=n$, $j=n-1$ in Eq. (A.2),
we have:
$A_{n+1}\beta_{n-1,n}=A_{n}\beta_{n-1,n-1}+a_{n+1}\frac{A_{n}}{a_{n}}+a_{n}\frac{A_{n+1}}{a_{n+1}}$
Hence:
$A_{n+1}\beta_{n-1,n}=A_{n}+a_{n+1}\frac{A_{n}}{a_{n}}+a_{n}\frac{A_{n+1}}{a_{n+1}}$
(A.6)
It turns out that we can express $A_{n+1}\beta_{n-1,n}$ using $A_{k}$ for $k$
ranging from $n+1$ to $K$. Let $k=\ell$, $\ell=n+1,n+2,\cdots,K-1$ and $j=n-1$
in Eq. (A.2); then, we get:
$A_{\ell}\beta_{n-1,\ell-1}=A_{\ell+1}\beta_{n-1,\ell}-a_{\ell+1}\frac{A_{n}}{a_{n}}-a_{n}\frac{A_{\ell+1}}{a_{\ell+1}}.$
This is a recursive relation between $A_{\ell}\beta_{n-1,\ell-1}$ and
$A_{\ell+1}\beta_{n-1,\ell}$. Applying this relation recursively from
$\ell=n+1$ to $\ell=K-1$, we get:
$\displaystyle A_{n+1}\beta_{n-1,n}$
$\displaystyle=A_{K}\beta_{n-1,K-1}-\frac{A_{n}}{a_{n}}(a_{n+2}+\cdots+a_{K})-a_{n}\big{(}\frac{A_{n+2}}{a_{n+2}}+\cdots+\frac{A_{K}}{a_{K}}\big{)}$
$\displaystyle=A_{K}\big{(}\frac{A_{n}}{a_{n}}-\frac{a_{n}}{A_{K}}\big{)}-\frac{A_{n}}{a_{n}}\sum_{\ell=n+1}^{K-1}(A_{\ell+1}-A_{\ell})-a_{n}\sum_{\ell=n+1}^{K-1}\big{(}\frac{1}{A_{\ell}}-\frac{1}{A_{\ell+1}}\big{)}$
$\displaystyle=A_{K}\big{(}\frac{A_{n}}{a_{n}}-\frac{a_{n}}{A_{K}}\big{)}-\frac{A_{n}}{a_{n}}(A_{K}-A_{n+1})-a_{n}\big{(}\frac{1}{A_{n+1}}-\frac{1}{A_{K}}\big{)}$
$\displaystyle=\frac{A_{n}A_{n+1}}{a_{n}}-\frac{a_{n}}{A_{n+1}}.$
The second equality is valid due to our inductive hypothesis for $k=n+2,n+3,\dots,K$. To derive the last equality, we use that $A_{K}=1$. Plugging the above equation into Eq. (A.6), we get:
$\frac{A_{n}A_{n+1}}{a_{n}}=A_{n}\frac{a_{n}+a_{n+1}}{a_{n}}+a_{n}\big{(}\frac{A_{n+1}}{a_{n+1}}+\frac{1}{A_{n+1}}\big{)}.$
Using the assumption that
$\frac{1}{A_{n}}=\frac{1}{A_{n+1}}+\frac{A_{n+1}}{a_{n+1}}$, we obtain Eq.
(A.5) for $k=n$, completing the inductive argument.
The recursive relationship between $A_{k}$ and $A_{k+1}$ from Eqs. (A.4)–(A.5) can be equivalently written as
$\frac{1}{A_{k}}=\frac{1}{4}\Big{(}2+\frac{4}{A_{k+1}}+2\sqrt{1+\frac{4}{A_{k+1}}}\Big{)}$
Denote $D_{n}=\frac{1}{A_{K-n}}$ for $n\in\\{0,\dots,K\\}.$ Then
$D_{n}=\frac{1}{2}+D_{n-1}+\sqrt{D_{n-1}+\frac{1}{4}}.$ (A.7)
We prove by induction that:
$D_{n}\geq\frac{(n+2)^{2}}{4}.$ (A.8)
As $D_{0}=\frac{1}{A_{K}}=1=\frac{(0+2)^{2}}{4},$ the bound (A.8) holds for $n=0$. Now suppose it also holds for some $n=j$, $0\leq j\leq K-1$. Then:
$\displaystyle D_{j+1}$
$\displaystyle=\frac{1}{2}+D_{j}+\sqrt{D_{j}+\frac{1}{4}}$
$\displaystyle\geq\frac{1}{2}+\frac{1}{4}\big{(}(j+3)-1\big{)}^{2}+\frac{1}{2}(j+2)$
$\displaystyle=\frac{1}{2}+\frac{1}{4}(j+3)^{2}-\frac{1}{2}(j+3)+\frac{1}{4}+\frac{1}{2}(j+2)$
$\displaystyle>\frac{1}{4}(j+3)^{2}.$
Thus, $D_{K}=\frac{1}{A_{0}}=\frac{A_{K}}{A_{0}}\geq\frac{(K+2)^{2}}{4},$ as
claimed. ∎
# Bilinear control and growth of Sobolev norms for the nonlinear Schrödinger
equation
Alessandro Duca (Université Paris-Saclay, UVSQ, CNRS, Laboratoire de Mathématiques de Versailles, 78000, Versailles, France; e-mail: <EMAIL_ADDRESS>) and Vahagn Nersesyan (Université Paris-Saclay, UVSQ, CNRS, Laboratoire de Mathématiques de Versailles, 78000, Versailles, France; e-mail: <EMAIL_ADDRESS>)
###### Abstract
We consider the nonlinear Schrödinger equation (NLS) on a torus of arbitrary
dimension. The equation is studied in the presence of an external potential field
whose time-dependent amplitude is taken as control. Assuming that the
potential satisfies a saturation property, we show that the NLS equation is
approximately controllable between any pair of eigenstates in arbitrarily
small time. The proof is obtained by developing a multiplicative version of a
geometric control approach introduced by Agrachev and Sarychev. We give an
application of this result to the study of the large time behavior of the NLS
equation with random potential. More precisely, we assume that the amplitude
of the potential is a random process whose law is $1$-periodic in time and
non-degenerate. Combining the controllability with a stopping time argument
and the Markov property, we show that the trajectories of the random equation
are almost surely unbounded in regular Sobolev spaces.
AMS subject classifications: 35Q55, 35R60, 37L55, 81Q93, 93B05
Keywords: Nonlinear Schrödinger equation, approximate controllability,
geometric control theory, growth of Sobolev norms, random perturbation
###### Contents
1. 0 Introduction
2. 1 Preliminaries
3. 2 Approximate controllability
4. 3 Proof of Proposition 1.2
5. 4 Saturating subspaces
6. 5 Growth of Sobolev norms
## 0 Introduction
In this paper, we study the controllability and the growth of Sobolev norms
for the following nonlinear Schrödinger (NLS) equation on the torus
${\mathbb{T}}^{d}={\mathbb{R}}^{d}/2\pi{\mathbb{Z}}^{d}$:
$i\partial_{t}\psi=-\Delta\psi+V(x)\psi+\kappa|\psi|^{2p}\psi+\langle
u(t),Q(x)\rangle\psi.$ (0.1)
We assume that $V:{\mathbb{T}}^{d}\to{\mathbb{R}}$ is an arbitrary smooth
potential, $Q:{\mathbb{T}}^{d}\to{\mathbb{R}}^{q}$ is a given smooth external
field subject to some geometric condition, $d,p\geq 1$ are arbitrary integers,
and $\kappa$ is an arbitrary real number. The role of the control (or the
random perturbation) is played by ${\mathbb{R}}^{q}$-valued function (or
random process) $u$ which is assumed to depend only on time. Eq. (0.1) is
equipped with the initial condition
$\psi(0,x)=\psi_{0}(x)$ (0.2)
belonging to a Sobolev space $H^{s}=H^{s}({\mathbb{T}}^{d};{\mathbb{C}})$ of
order $s>d/2$, so that the problem is locally well-posed.
The purpose of this paper is to study the NLS equation (0.1) when the driving
force $u$ acts multiplicatively through only a few low Fourier modes. Referring
the reader to the subsequent sections for the general setting, let us
formulate in this Introduction particular cases of our main results. Let
${\cal K}\subset{\mathbb{Z}}^{d}_{*}$ be the set of $d$ vectors defined by
${\cal
K}=\\{(1,0,\ldots,0),\,(0,1,\ldots,0),\,\ldots,\,(0,0,\ldots,1,0),\,(1,\ldots,1)\\},$
(0.3)
and assume that the field $Q=(Q_{1},\ldots,Q_{q})$ is such that
$\left\\{{\bf 1},\,\sin\langle x,k\rangle,\,\cos\langle x,k\rangle:k\in{\cal
K}\right\\}\subset\mathop{\rm span}\nolimits\\{Q_{j}:j=1,\ldots,q\\}.$ (0.4)
Let $s_{d}$ be the least integer strictly greater than $d/2$.
###### Theorem A.
The problem (0.1), (0.2) is approximately controllable in the following sense:
for any $s\geq s_{d}$, $\varepsilon>0$, $\varkappa>0$, $\psi_{0}\in H^{s}$,
and $\theta\in C^{\infty}({\mathbb{T}}^{d};{\mathbb{R}})$, there is a time
$T\in(0,\varkappa)$, a control $u\in L^{2}([0,T];{\mathbb{R}}^{q})$, and a
unique solution $\psi\in C([0,T];H^{s})$ of (0.1), (0.2) such that
$\|\psi(T)-e^{i\theta}\psi_{0}\|_{H^{s}}<\varepsilon.$
A more general formulation of this result is given in Theorem 2.2, where the
controllability is proved under an abstract saturation condition for the field
$Q$ (see Condition (H1)). Note that the time $T$ may depend on the initial
condition $\psi_{0}$, the target $e^{i\theta}\psi_{0}$, and the parameters in
the equation. In the second result, we show that, when $V=0$ and $\psi_{0}$ is
an eigenstate $\phi_{l}(x)=(2\pi)^{-d/2}e^{i\langle x,l\rangle},$
$l\in{\mathbb{Z}}^{d}$ of the Laplacian, the system can be approximately
controlled in any fixed time $T>0$ to any target of the form
$e^{i\theta}\phi_{m}$ with $m\in{\mathbb{Z}}^{d}$.
###### Theorem B.
For any $s\geq s_{d},$ $\varepsilon>0$, $l,m\in{\mathbb{Z}}^{d}$, $\theta\in
C^{\infty}({\mathbb{T}}^{d};{\mathbb{R}})$, and $T>0$, there is a control
$u\in L^{2}([0,T];{\mathbb{R}}^{q})$ and a unique solution $\psi\in
C([0,T];H^{s})$ of (0.1), (0.2) with $V=0$ and $\psi_{0}=\phi_{l}$ such that
$\|\psi(T)-e^{i\theta}\phi_{m}\|_{L^{2}}<\varepsilon.$
The controllability of the Schrödinger equation with time-dependent bilinear
(multiplicative) control has attracted a lot of attention during the last
fifteen years. In the one-dimensional case, local exact controllability
results are established by Beauchard, Coron, and Laurent [Bea05, BC06, BL10].
There is a vast literature on the approximate controllability in the
multidimensional case. For the first achievements, we refer the reader to the
papers by Boscain et al. [CMSB09, BCCS12], Mirrahimi [Mir09], and the second
author [Ner10]. Except for the paper [BL10], all the other works consider the
linear Schrödinger equation, i.e., the one obtained by taking $\kappa=0$ in
Eq. (0.1); note that in that case the control problem is still nonlinear in
$u$.
Theorems A and B are the first to deal with the problem of bilinear
approximate controllability of the NLS equation. Let us emphasise that the
controllability between any pair of eigenstates in arbitrarily small time is
new even in the linear case $\kappa=0$. It is interesting to note that Theorem
B complements a result by Beauchard et al. [BCT18], which proves that, for
some choices of the field $Q$, there is a minimal time for the approximate
controllability to some particular states in the phase space.
The approach adopted in the proofs of Theorems A and B is quite different from
those used in the literature for bilinear control systems. We proceed by
developing Agrachev–Sarychev type arguments which were previously employed in
the case of additive controls. Let us recall that Agrachev and Sarychev [AS05,
AS06] considered the global approximate controllability of the 2D
Navier–Stokes and Euler systems. Their approach has been further extended by
many authors to different equations. Let us mention, for example, the papers
[Shi06, Shi07] by Shirikyan who considered the approximate controllability of
the 3D Navier–Stokes system and Sarychev [Sar12] who considered the case of
the 2D defocusing cubic Schrödinger equation. The configuration we use in the
present paper is closer to the one elaborated in the recent paper [Ner21],
where parabolic PDEs are studied with polynomial nonlinearities. We refer the
reader to the reviews [AS08, Shi18] and the paper [Ner21] for more references
and discussions.
The present paper is the first to deal with Agrachev–Sarychev type arguments
in a bilinear setting. To explain the scheme of the proof of Theorem A, let us
denote by ${\cal R}_{t}(\psi_{0},u)$ the solution of problem (0.1), (0.2)
defined up to some maximal time. A central role in the proof is played by the
limit
$e^{-i\delta^{-1/2}\varphi}{\cal
R}_{\delta}(e^{i\delta^{-1/2}\varphi}\psi_{0},\delta^{-1}u)\to
e^{-i\left({\mathbb{B}}(\varphi)+\langle
u,Q\rangle\right)}\psi_{0}\quad\text{in $H^{s}$ as $\delta\to 0^{+}$}$ (0.5)
which holds for any $\psi_{0}\in H^{s}$, $\varphi\in
C^{\infty}({\mathbb{T}}^{d};{\mathbb{R}})$, and constant
$u\in{\mathbb{R}}^{q}$. Here we denote
${\mathbb{B}}(\varphi)(x)=\sum_{j=1}^{d}\left(\partial_{x_{j}}\varphi(x)\right)^{2}$.
Applying this limit with $\varphi=0$ and using the assumption (0.4), we see
that the equation can be controlled in small time from initial point
$\psi_{0}$ arbitrarily close to $e^{i\theta}\psi_{0}$ for any $\theta$ in the
vector space
${\cal H}_{0}=\mathop{\rm span}\nolimits\left\\{{\bf 1},\,\sin\langle
x,k\rangle,\,\cos\langle x,k\rangle:k\in{\cal K}\right\\}.$
By applying again the limit (0.5) with functions $\varphi=\theta_{j}\in{\cal
H}_{0}$, $j=1,\ldots,n$, we add more directions in $\theta$. That is, we show
that the system can be steered from $\psi_{0}$ close to $e^{i\theta}\psi_{0}$,
where $\theta$ now belongs to a larger vector space ${\cal H}_{1}$ whose
elements are of the form
$\theta_{0}-\sum_{j=1}^{n}{\mathbb{B}}(\theta_{j}).$
We iterate this argument and construct an increasing sequence of subspaces
$\\{{\cal H}_{j}\\}$ such that the equation can be approximately controlled to
any target $e^{i\theta}\psi_{0}$ with any $\theta\in{\cal H}_{j}$ and $j\geq
1$. Using trigonometric computations, we show that the union $\cup_{j=1}^{+\infty}{\cal
H}_{j}$ is dense in $C^{k}({\mathbb{T}}^{d},{\mathbb{R}})$ for any $k\geq 1$
(in other words, ${\cal H}_{0}$ is a saturating space for the NLS equation,
see Definition 2.1). This completes the proof of Theorem A.
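The mechanism behind the saturation step can be checked numerically. In the sketch below (illustrative, 2D torus, grid evaluation), the polarization identity ${\mathbb{B}}(\theta_{k}+\theta_{l})-{\mathbb{B}}(\theta_{k})-{\mathbb{B}}(\theta_{l})=2\nabla\theta_{k}\cdot\nabla\theta_{l}$ combined with the product formula for cosines shows that the modes $\sin\langle x,k\rangle$ and $\sin\langle x,l\rangle$ generate the new frequencies $k\pm l$ precisely when $\langle k,l\rangle\neq 0$:

```python
import numpy as np

# 2D torus grid
n = 64
t = 2 * np.pi * np.arange(n) / n
x0, x1 = np.meshgrid(t, t, indexing="ij")

def B(grad):
    # B(phi) = sum_j (d_j phi)^2, evaluated from the gradient on the grid
    return grad[0] ** 2 + grad[1] ** 2

def grad_sin(k):
    # gradient of theta_k(x) = sin<x, k>
    c = np.cos(k[0] * x0 + k[1] * x1)
    return (k[0] * c, k[1] * c)

k, l = (1, 0), (1, 1)          # two modes of the set K in (0.3), with <k, l> = 1
gk, gl = grad_sin(k), grad_sin(l)
gsum = (gk[0] + gl[0], gk[1] + gl[1])
# Polarization: B(theta_k + theta_l) - B(theta_k) - B(theta_l) = 2 grad.grad
cross = B(gsum) - B(gk) - B(gl)
# Product formula: the new frequencies k - l and k + l appear iff <k, l> != 0
kl = k[0] * l[0] + k[1] * l[1]
rhs = kl * (np.cos((k[0] - l[0]) * x0 + (k[1] - l[1]) * x1)
            + np.cos((k[0] + l[0]) * x0 + (k[1] + l[1]) * x1))
err = np.max(np.abs(cross - rhs))
assert err < 1e-10
```

Here the frequency $k+l=(2,1)$ does not belong to ${\cal H}_{0}$, which is exactly how the subspaces ${\cal H}_{j}$ grow.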
Theorem B is derived from Theorem A by noticing that the eigenstate $\phi_{l}$ can be approximated in $L^{2}$ by functions of the form $e^{i\theta}\phi_{m}$ and that the eigenstates are constant solutions of Eq. (0.1) corresponding to some control (this follows immediately from the assumptions that $V=0$ and ${\mathbf{1}}\in{\cal H}_{0}$). This allows us to adjust the controllability time appropriately and to choose it the same for any initial condition and target.
As an application of Theorem A, we study the large time behavior of the
trajectories of the random NLS equation. We show that if a random process
perturbs the same Fourier modes as in the above controllability results, then
the energy is almost surely transferred to higher modes resulting in the
unboundedness of the trajectories in regular Sobolev spaces. More precisely,
we replace the control $u$ by an ${\mathbb{R}}^{q}$-valued random process
$\eta$ of the form
$\eta(t)=\sum_{k=1}^{+\infty}{\mathbb{I}}_{[k-1,k)}(t)\eta_{k}(t-k+1),$ (0.6)
where ${\mathbb{I}}_{[k-1,k)}$ is the indicator function of the interval
$[k-1,k)$ and $\\{\eta_{k}\\}$ are independent identically distributed random
variables in $L^{2}([0,1];{\mathbb{R}}^{q})$ with non-degenerate law (see
Condition (H2)). The solution $\psi$ of the problem (0.1), (0.2), (0.6) will
be itself a random process in $H^{s}$. We prove the following result.
###### Theorem C.
For any $s>s_{d}$ and any non-zero $\psi_{0}\in H^{s}$, the trajectory of
(0.1), (0.2), (0.6) is almost surely unbounded in $H^{s}$.
The idea of constructing unbounded solutions by using random perturbations is
not new. Such results have been obtained by Bourgain [Bou99] and Erdogan et
al. [EgKS03] for linear one-dimensional Schrödinger equations. They also
provided polynomial lower bounds for the growth. Unboundedness of trajectories
for multidimensional linear Schrödinger equations is obtained in [Ner09]. In
that paper, the assumptions on the law of the random perturbation are rather
general and no estimates for the growth are given; Theorem C is a
generalisation of that result to the case of the NLS equations. There are also
examples of linear Schrödinger equations with various deterministic time-
dependent potentials which admit unbounded trajectories: e.g., see the papers
by Bambusi et al. [BGMR18], Delort [Del14], Haus and Maspero [HM20, Mas19],
and the references therein.
There are only a few results in the case of unperturbed NLS equations. For cubic
defocusing Schrödinger equations on bounded domains or manifolds, the
existence of unbounded trajectories in regular Sobolev spaces is a challenging
open problem (see Bourgain [Bou00]). In different situations, the existence of
trajectories with arbitrarily large finite growth has been shown by Kuksin
[Kuk97], Colliander et al. [CKS+10], Guardia and Kaloshin [GK15], and others.
Hani et al. [HPTV15] show the existence of unbounded trajectories in the case
of the cubic defocusing Schrödinger equation on the infinite cylinder
${\mathbb{R}}\times{\mathbb{T}}^{d}$. In the case of the cubic Szegő equation
on the circle, Gérard and Grellier [GG17] show that the trajectories are
generically unbounded in Sobolev spaces. Moreover, they exhibit the existence
of a family of solutions with superpolynomial growth.
Let us give a brief (and not entirely accurate) description of the main ideas
of the proof of Theorem C. By starting from any initial point $\psi_{0}\in
H^{s}$, Theorem A allows us to increase the Sobolev norms by choosing the control appropriately. This, together with a compactness argument and the
assumption that the law of the process $\eta$ is non-degenerate, leads to a
uniform estimate of the form
$c_{M}=\sup_{\psi_{0}\in H^{s}}{\mathbb{P}}\left\\{\sup_{t\in[0,1]}\|\psi(t)\|_{H^{s}}\leq M\right\\}<1$
for any $M>0$. By combining the latter with the Markov property, we show that
${\mathbb{P}}\left\\{\sup_{t\in[0,n]}\|\psi(t)\|_{H^{s}}\leq M\right\\}\leq c_{M}^{n}$
for any $\psi_{0}\in H^{s}$. Then, the Borel–Cantelli lemma implies that the
norm of any trajectory becomes almost surely larger than $M$ in some random
time that is almost surely finite. As $M$ is arbitrary, this proves the
required result.
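The uniform-estimate-plus-Markov-property scheme can be caricatured by a toy Markov chain (an illustration of the probabilistic argument only, unrelated to the PDE): from any state, the chain below exceeds any level $M$ within a bounded number of steps with a uniformly positive probability, so the probability of remaining below $M$ for $n$ steps decays geometrically, and every long trajectory eventually exceeds $M$:

```python
import random

random.seed(0)

def sup_of_chain(n_steps, x0=0.0):
    # Toy Markov chain: from state x, jump to 10*x + 1 w.p. 1/2, reset to 0 w.p. 1/2.
    # From ANY state x >= 0, four consecutive "up" moves give x >= 1111 > M,
    # so P(exceed M within 4 steps) >= 2^{-4} uniformly in the starting point --
    # the analogue of the uniform estimate c_M < 1.
    x, sup = x0, x0
    for _ in range(n_steps):
        x = 10 * x + 1 if random.random() < 0.5 else 0.0
        sup = max(sup, x)
    return sup

M = 1000.0
# Empirical P(sup over n steps stays <= M) decays geometrically, mirroring c_M^n:
stay = [sum(sup_of_chain(n) <= M for _ in range(4000)) / 4000 for n in (20, 40, 80)]
assert stay[0] > stay[1] > stay[2]
# For large n, exceeding M is (numerically) certain, as Borel-Cantelli predicts:
assert all(sup_of_chain(2000) > M for _ in range(200))
```

The uniformity over the starting state is the crucial point; for a chain without it (e.g., a simple random walk started far below $M$), the geometric decay fails.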
The paper is organised as follows. In Section 1, we discuss the local well-
posedness and some stability properties of the NLS equation. In Section 2, we
formulate more general versions of Theorems A and B and give their proofs.
Section 3 is devoted to the derivation of limit (0.5). In Section 4, we
establish a general criterion for the validity of the saturation property.
Finally, in Section 5, we prove Theorem C.
### Acknowledgement
The authors thank Armen Shirikyan for his valuable comments. The research of
AD was supported by the ANR grant ISDEEC ANR-16-CE40-0013. The research of VN
was supported by the ANR grant NONSTOPS ANR-17-CE40-0006-02.
### Notation
In what follows, we use the following notation.
$\langle\cdot,\cdot\rangle$ is the Euclidean scalar product in
${\mathbb{R}}^{q}$ and $\|\cdot\|$ is the corresponding norm. We write $m\bot
l$ when the vectors $m,l\in{\mathbb{R}}^{q}$ are orthogonal and $m\not\perp l$
when they are not.
$H^{s}=H^{s}({\mathbb{T}}^{d};{\mathbb{C}}),~{}s\geq 0$ and
$L^{p}=L^{p}({\mathbb{T}}^{d};{\mathbb{C}}),~{}p\geq 1$ are the standard
Sobolev and Lebesgue spaces of functions $f:{\mathbb{T}}^{d}\to{\mathbb{C}}$
endowed with the norms $\|\cdot\|_{s}$ and $\|\cdot\|_{L^{p}}$. The space
$L^{2}$ is endowed with the scalar product
$\langle
f,g\rangle_{L^{2}}=\int_{{\mathbb{T}}^{d}}f(x)\overline{g(x)}{\textup{d}}x.$
$C^{s}=C^{s}({\mathbb{T}}^{d};{\mathbb{C}})$,
$s\in{\mathbb{N}}\cup\\{\infty\\}$ is the space of $s$-times continuously
differentiable functions $f:{\mathbb{T}}^{d}\to{\mathbb{C}}$.
Let $X$ be a Banach space. We denote by $B_{X}(a,r)$ the closed ball of radius
$r>0$ centred at $a\in X$.
We write $J_{T}$ instead of $[0,T]$ and $J$ instead of $[0,1]$.
$C(J_{T};X)$ is the space of continuous functions $f:J_{T}\to X$ with the norm
$\|f\|_{C(J_{T};X)}=\max_{t\in J_{T}}\|f(t)\|_{X}.$
$L^{p}(J_{T};X),1\leq p<\infty$ is the space of Borel-measurable functions
$f:J_{T}\to X$ with
$\|f\|_{L^{p}(J_{T};X)}=\left(\int_{0}^{T}\|f(t)\|_{X}^{p}{\textup{d}}t\right)^{1/p}<\infty.$
$\lceil x\rceil$ is the least integer greater than or equal to
$x\in{\mathbb{R}}$.
$s_{d}$ is the least integer strictly greater than $d/2$.
${\bf 1}$ is the function identically equal to $1$ on ${\mathbb{T}}^{d}$.
## 1 Preliminaries
In this section, we consider the NLS equation (0.1), where $u$ is a
deterministic ${\mathbb{R}}^{q}$-valued function and
$V:{\mathbb{T}}^{d}\to{\mathbb{R}}$ and
$Q:{\mathbb{T}}^{d}\to{\mathbb{R}}^{q}$ are arbitrary smooth functions. In
what follows, we shall always assume that the parameters $d\geq 1$, $p\geq 1$,
and $\kappa\in{\mathbb{R}}$ are arbitrary. Here we formulate two propositions
that will be used in the proofs of our main results. The first one gathers
some well-known facts about the local well-posedness and stability of the NLS
equation in regular Sobolev spaces.
###### Proposition 1.1.
For any $s>d/2$, $\hat{\psi}_{0}\in H^{s}$, and $\hat{u}\in
L^{2}_{loc}({\mathbb{R}}_{+};{\mathbb{R}}^{q})$, there is a maximal time
${\cal T}={\cal T}(\hat{\psi}_{0},\hat{u})>0$ and a unique solution
$\hat{\psi}$ of the problem (0.1), (0.2) with
$(\psi_{0},u)=(\hat{\psi}_{0},\hat{u})$ whose restriction to the interval
$J_{T}$ belongs to $C(J_{T};H^{s})$ for any $T<{\cal T}$. If ${\cal
T}<\infty$, then $\|\hat{\psi}(t)\|_{s}\to+\infty$ as $t\to{\cal T}^{-}$.
Furthermore, for any $T<{\cal T}$, there are constants
$\delta=\delta(T,\Lambda)>0$ and $C=C(T,\Lambda)>0$, where
$\Lambda=\|\hat{\psi}\|_{C(J_{T};H^{s})}+\|\hat{u}\|_{L^{2}(J_{T};{\mathbb{R}}^{q})},$
such that the following two properties hold.
1. (i)
For any $\psi_{0}\in H^{s}$ and $u\in L^{2}(J_{T};{\mathbb{R}}^{q})$
satisfying
$\|\psi_{0}-\hat{\psi}_{0}\|_{s}+\|u-\hat{u}\|_{L^{2}(J_{T};{\mathbb{R}}^{q})}<\delta,$
(1.1)
the problem (0.1), (0.2) has a unique solution $\psi\in C(J_{T};H^{s}).$
2. (ii)
Let ${\cal R}$ be the resolving operator for Eq. (0.1), i.e., the mapping
taking a couple $(\psi_{0},u)$ satisfying (1.1) to the solution $\psi$. Then
$\displaystyle\|{\cal R}(\psi_{0},u)-{\cal
R}(\hat{\psi}_{0},\hat{u})\|_{C(J_{T};H^{s})}$ $\displaystyle\leq
C\left(\|\psi_{0}-\hat{\psi}_{0}\|_{s}+\|u-\hat{u}\|_{L^{2}(J_{T};{\mathbb{R}}^{q})}\right).$
The proof of this proposition is rather standard, so we omit it (e.g., see
Section 3.3 in [Tao06] or Section 4.10 in [Caz03] for similar results). Let
${\cal S}$ be the unit sphere in $L^{2}$. As the functions $V,Q$, and $u$ are
real-valued, the solution $\psi$ belongs to ${\cal S}$ throughout its
lifespan, provided that $\psi_{0}\in{\cal S}\cap H^{s}$.
Before formulating the second proposition, let us introduce some notation. For
any $\psi_{0}\in H^{s}$ and $T>0$, let $\Theta(\psi_{0},T)$ be the set of
functions $u\in L^{2}(J_{T};{\mathbb{R}}^{q})$ such that the problem (0.1),
(0.2) has a solution in $C(J_{T};H^{s})$. By the previous proposition, the set
$\Theta(\psi_{0},T)$ is open in $L^{2}(J_{T};{\mathbb{R}}^{q})$. For any
$\varphi\in C^{1}({\mathbb{T}}^{d};{\mathbb{R}})$, let
${\mathbb{B}}(\varphi)(x)=\sum_{j=1}^{d}\left(\partial_{x_{j}}\varphi(x)\right)^{2}.$
(1.2)
We have the following asymptotic property in small time.
###### Proposition 1.2.
For any $s\geq s_{d}$, $\psi_{0}\in H^{s}$, $u\in{\mathbb{R}}^{q}$ (with a slight abuse of notation, we denote by the same letter the constant function equal to $u$), and $\varphi\in C^{r}({\mathbb{T}}^{d};{\mathbb{R}})$, where $r={\lceil s\rceil+2}$, there is a constant $\delta_{0}>0$ such that $\delta^{-1}u\in\Theta(e^{i\delta^{-1/2}\varphi}\psi_{0},\delta)$ for any $\delta\in(0,\delta_{0})$ and the following limit holds
$e^{-i\delta^{-1/2}\varphi}{\cal
R}_{\delta}(e^{i\delta^{-1/2}\varphi}\psi_{0},\delta^{-1}u)\to
e^{-i\left({\mathbb{B}}(\varphi)+\langle
u,Q\rangle\right)}\psi_{0}\quad\text{in $H^{s}$ as $\delta\to 0^{+}$},$ (1.3)
where ${\cal R}_{\delta}$ is the restriction of the solution at time
$t=\delta$.
The proof of this proposition is postponed to Section 3. Limit (1.3) is a
multiplicative version of a limit established in Proposition 2 in [Ner21] in
the case of parabolic PDEs with additive controls.
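Limit (1.3) can also be observed numerically in the simplest case $\varphi=0$: over a short window $[0,\delta]$, the large multiplicative term $\delta^{-1}\langle u,Q\rangle\psi$ dominates and imprints the phase $e^{-i\langle u,Q\rangle}$. Below is a 1D split-step Fourier sketch with illustrative choices ($Q=\cos x$, a constant control); it is not part of the proof:

```python
import numpy as np

# 1D torus grid and Fourier multipliers
N = 128
x = 2 * np.pi * np.arange(N) / N
freq = np.fft.fftfreq(N, d=1.0 / N)        # integer Fourier modes
kappa, p = 1.0, 1                          # cubic NLS (p = 1)
Q = np.cos(x)                              # illustrative scalar field (q = 1)
c = 0.7                                    # constant control u
psi0 = np.exp(np.sin(x)).astype(complex)
psi0 /= np.max(np.abs(psi0))

def R(psi, T, control, steps=2000):
    # Strang split-step scheme for
    # i psi_t = -psi_xx + kappa |psi|^{2p} psi + control * Q * psi.
    dt = T / steps
    lin = np.exp(-1j * dt * freq ** 2)
    for _ in range(steps):
        psi = psi * np.exp(-0.5j * dt * (kappa * np.abs(psi) ** (2 * p) + control * Q))
        psi = np.fft.ifft(lin * np.fft.fft(psi))
        psi = psi * np.exp(-0.5j * dt * (kappa * np.abs(psi) ** (2 * p) + control * Q))
    return psi

target = np.exp(-1j * c * Q) * psi0        # the limit e^{-i<u,Q>} psi_0 (phi = 0)
errs = [np.max(np.abs(R(psi0, d, c / d) - target)) for d in (1e-1, 1e-2, 1e-3)]
assert errs[0] > errs[1] > errs[2]         # the error shrinks as delta -> 0+
```

The residual error is dominated by the free evolution and the nonlinearity acting over the window, both of size $O(\delta)$, consistent with the limit.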
## 2 Approximate controllability
In what follows, we assume that $s\geq s_{d}$ and denote $r={\lceil
s\rceil+2}$ as in Proposition 1.2. We start this section with a definition of
a saturation property inspired by the papers [AS06, Shi06]. Let ${\cal H}$ be
a finite-dimensional subspace of $C^{r}({\mathbb{T}}^{d};{\mathbb{R}})$, and
let ${\cal F}({\cal H})$ be the largest subspace of
$C^{r}({\mathbb{T}}^{d};{\mathbb{R}})$ whose elements can be represented in
the form
$\theta_{0}-\sum_{j=1}^{n}{\mathbb{B}}(\theta_{j})$
for some integer $n\geq 1$ and functions $\theta_{j}\in{\cal H}$,
$j=0,\ldots,n$, where ${\mathbb{B}}$ is given by (1.2). As ${\mathbb{B}}$ is
quadratic, ${\cal F}({\cal H})$ is well-defined and finite-dimensional. Let us
define a non-decreasing sequence $\\{{\cal H}_{j}\\}$ of finite-dimensional
subspaces by ${\cal H}_{0}={\cal H}$ and ${\cal H}_{j}={\cal F}({\cal
H}_{j-1})$, $j\geq 1$, and denote
${\cal H}_{\infty}=\bigcup_{j=1}^{+\infty}{\cal H}_{j}.$ (2.1)
###### Definition 2.1.
A finite-dimensional subspace ${\cal H}\subset
C^{r}({\mathbb{T}}^{d};{\mathbb{R}})$ is said to be saturating if ${\cal
H}_{\infty}$ is dense in $C^{r}({\mathbb{T}}^{d};{\mathbb{R}})$.
We assume that the following condition is satisfied.
(H1) The field $Q=(Q_{1},\ldots,Q_{q})$ is saturating, i.e., the subspace
${\cal H}=\mathop{\rm span}\nolimits\\{Q_{j}:j=1,\ldots,q\\}$
is saturating in the sense of Definition 2.1.
In this section, we prove the following result. As we will see below, it
implies Theorems A and B formulated in the Introduction.
###### Theorem 2.2.
Assume that Condition (H1) is satisfied. Then for any $\varepsilon>0$,
$\varkappa>0$, $\psi_{0}\in H^{s}$, and $\theta\in
C^{r}({\mathbb{T}}^{d};{\mathbb{R}})$, there is a time $T\in(0,\varkappa)$ and
a control $u\in\Theta(\psi_{0},T)$ such that
$\|{\cal R}_{T}(\psi_{0},u)-e^{i\theta}\psi_{0}\|_{s}<\varepsilon.$
###### Proof.
By using an induction argument in $N$, we show that the approximate
controllability property in this theorem is true for any $\theta\in{\cal
H}_{N}$ and $N\geq 0$. Combined with the saturation hypothesis, this will lead
to approximate controllability with any $\theta\in
C^{r}({\mathbb{T}}^{d};{\mathbb{R}})$.
Step 1. Case $N=0$. Let us show that, for any $\varepsilon>0$, $\varkappa>0$,
$\psi_{0}\in H^{s}$, and $\theta\in{\cal H}$, there is a time
$T\in(0,\varkappa)$ and a control $u\in\Theta(\psi_{0},T)$ such that
$\|{\cal R}_{T}(\psi_{0},u)-e^{i\theta}\psi_{0}\|_{s}<\varepsilon.$ (2.2)
By applying Proposition 1.2 with $\varphi=0$ and $u\in{\mathbb{R}}^{q}$ such
that $\theta=-\langle u,Q\rangle$, we obtain
${\cal R}_{\delta}(\psi_{0},\delta^{-1}u)\to e^{i\theta}\psi_{0}\quad\text{in
$H^{s}$ as $\delta\to 0^{+}$}.$
This implies (2.2) with sufficiently small time $T=\delta$ and control
$\delta^{-1}u$.
Step 2. Case $N\geq 1$. We assume that the result is true for any
$\theta\in{\cal H}_{N-1}$. Let $\tilde{\theta}\in{\cal H}_{N}$ be of the form
$\tilde{\theta}=\theta_{0}-\sum_{j=1}^{n}{\mathbb{B}}(\theta_{j}),$
where $n\geq 1$ and $\theta_{j}\in{\cal H}_{N-1}$, $j=0,\ldots,n$. By applying
Proposition 1.2 with $\varphi=\theta_{1}$ and $u=0$, we get
$e^{-i\delta^{-1/2}\theta_{1}}{\cal
R}_{\delta}(e^{i\delta^{-1/2}\theta_{1}}\psi_{0},0)\to
e^{-i{\mathbb{B}}(\theta_{1})}\psi_{0}\quad\text{in $H^{s}$ as $\delta\to
0^{+}$}.$
The induction hypothesis, the assumption that $\theta_{1}\in{\cal H}_{N-1}$,
and Proposition 1.1 imply that, for any $\varepsilon>0$ and $\varkappa>0$,
there is a time $T_{1}\in(0,\varkappa)$ and a control
$u_{1}\in\Theta(\psi_{0},T_{1})$ such that
$\|{\cal
R}_{T_{1}}(\psi_{0},u_{1})-e^{-i{\mathbb{B}}(\theta_{1})}\psi_{0}\|_{s}<\varepsilon.$
By iterating this argument with $\theta_{j}\in{\cal H}_{N-1}$, $j=0,\ldots,n$,
we obtain that for any $\varepsilon>0$ and $\varkappa>0$, there is
$T_{n}\in(0,\varkappa)$ and $u_{n}\in\Theta(\psi_{0},T_{n})$ such that
$\|{\cal
R}_{T_{n}}(\psi_{0},u_{n})-e^{i\left(\theta_{0}-\sum_{j=1}^{n}{\mathbb{B}}(\theta_{j})\right)}\psi_{0}\|_{s}=\|{\cal
R}_{T_{n}}(\psi_{0},u_{n})-e^{i\tilde{\theta}}\psi_{0}\|_{s}<\varepsilon.$
As $\tilde{\theta}\in{\cal H}_{N}$ is arbitrary, this proves the required
property for $N$.
Step 3. Conclusion. Finally, let $\theta\in
C^{r}({\mathbb{T}}^{d};{\mathbb{R}})$ be arbitrary. By the saturation
hypothesis, ${\cal H}_{\infty}$ is dense in
$C^{r}({\mathbb{T}}^{d};{\mathbb{R}})$. Hence, we can find $N\geq 1$ and
$\tilde{\theta}\in{\cal H}_{N}$ such that
$\|e^{i\theta}\psi_{0}-e^{i\tilde{\theta}}\psi_{0}\|_{s}<\varepsilon.$
Applying the controllability property proved in the previous steps for
$\tilde{\theta}\in{\cal H}_{N}$, we complete the proof. ∎
As a consequence of this result, we have the following two theorems.
###### Theorem 2.3.
Under the conditions of Theorem 2.2, for any $M>0$, $\varkappa>0$, and non-
zero $\psi_{0}\in H^{s}$, there is a time $T\in(0,\varkappa)$ and a control
$u\in\Theta(\psi_{0},T)$ such that
$\|{\cal R}_{T}(\psi_{0},u)\|_{s}>M.$
###### Proof.
It suffices to apply Theorem 2.2 by choosing $\theta\in
C^{r}({\mathbb{T}}^{d};{\mathbb{R}})$ such that
$\|e^{i\theta}\psi_{0}\|_{s}>M.$
To find such $\theta$, we take any $\theta_{1}\in
C^{r}({\mathbb{T}}^{d};{\mathbb{R}})$ verifying
$\|e^{i\theta_{1}}\psi_{0}\|_{1}\neq 0$, put $\theta=\lambda\theta_{1}$ with
sufficiently large $\lambda>0$, and use the inequality
$\|\cdot\|_{1}\leq\|\cdot\|_{s}.$ ∎
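The norm-inflation trick in this proof is easy to visualise numerically: differentiating $e^{i\lambda\theta_{1}}\psi_{0}$ brings down a factor $i\lambda\nabla\theta_{1}$, so the $H^{1}$ norm grows without bound in $\lambda$. A 1D FFT sketch (illustrative; the $H^{1}$ norm is computed up to a constant factor):

```python
import numpy as np

N = 512
x = 2 * np.pi * np.arange(N) / N
freq = np.fft.fftfreq(N, d=1.0 / N)
psi0 = np.exp(np.sin(x)).astype(complex)
theta1 = np.cos(x)

def h1_norm(f):
    # H^1 norm on the torus via Fourier coefficients (up to a constant factor)
    c = np.fft.fft(f) / N
    return np.sqrt(np.sum((1.0 + freq ** 2) * np.abs(c) ** 2))

# ||e^{i lambda theta_1} psi0||_{H^1} grows without bound as lambda increases
norms = [h1_norm(np.exp(1j * lam * theta1) * psi0) for lam in (1.0, 10.0, 100.0)]
assert norms[0] < norms[1] < norms[2]
```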
###### Theorem 2.4.
Assume that the conditions of Theorem 2.2 are satisfied and
${\mathbf{1}}\in\mathop{\rm
span}\nolimits\\{Q_{j}:j=1,\ldots,q\\}\quad\quad\text{and}\quad\quad V=0.$
(2.3)
Then, for any $\varepsilon>0$, $l,m\in{\mathbb{Z}}^{d}$, $\theta\in
C^{r}({\mathbb{T}}^{d};{\mathbb{R}})$, and $T>0$, there is a control
$u\in\Theta(\phi_{l},T)$ such that
$\|{\cal R}_{T}(\phi_{l},u)-e^{i\theta}\phi_{m}\|_{L^{2}}<\varepsilon.$
###### Proof.
Let us take any $\theta_{1}\in C^{r}({\mathbb{T}}^{d};{\mathbb{R}})$. Applying
Theorem 2.2, we find a time $T_{1}\in(0,T)$ and a control
$u_{1}\in\Theta(\phi_{l},T_{1})$ such that
$\|{\cal
R}_{T_{1}}(\phi_{l},u_{1})-e^{i\theta_{1}}\phi_{l}\|_{s}<\frac{\varepsilon}{2}.$
Choosing $\theta_{1}\in C^{r}({\mathbb{T}}^{d};{\mathbb{R}})$ such that
$\|e^{i\theta_{1}}\phi_{l}-e^{i\theta}\phi_{m}\|_{L^{2}}<\frac{\varepsilon}{2},$
we arrive at
$\|{\cal R}_{T_{1}}(\phi_{l},u_{1})-e^{i\theta}\phi_{m}\|_{L^{2}}<\varepsilon.$
Now, notice that $\phi_{l}$ is a stationary solution of Eq. (0.1)
corresponding to a control $u_{0}\in
L^{2}_{loc}({\mathbb{R}}_{+};{\mathbb{R}}^{q})$ satisfying the relation
$\langle u_{0}(t),Q(x)\rangle=-|l|^{2}-\kappa(2\pi)^{-dp}\quad\text{for any
$t\geq 0$ and $x\in{\mathbb{T}}^{d}$}.$
Such a choice of $u_{0}$ is possible in view of assumption (2.3). Thus,
$u_{0}\in\Theta(\phi_{l},t)$ and $\phi_{l}={\cal R}_{t}(\phi_{l},u_{0})$ for
any $t\geq 0$. Setting
$u(t)=\begin{cases}u_{0}(t)&\text{for }t\in[0,T-T_{1}],\\\
u_{1}(t-T+T_{1})&\text{for }t\in(T-T_{1},T],\end{cases}$
we complete the proof of the theorem. ∎
Let us close this section with an example of a saturating subspace. Let ${\cal
I}\subset{\mathbb{Z}}^{d}_{*}$ be a finite set and let
${\cal H}={\cal H}({\cal I})=\text{span}\left\\{{\bf 1},\,\sin\langle
x,k\rangle,\,\cos\langle x,k\rangle:k\in{\cal I}\right\\}.$ (2.4)
Recall that ${\cal I}$ is a generator if any vector of ${\mathbb{Z}}^{d}$ is a
linear combination of vectors of ${\cal I}$ with integer coefficients. The
following proposition is proved in Section 4.
###### Proposition 2.5.
The subspace ${\cal H}({\cal I})$ is saturating in the sense of Definition 2.1
if and only if ${\cal I}$ is a generator and for any $l,m\in{\cal I}$, there
are vectors $\\{n_{j}\\}_{j=1}^{k}\subset{\cal I}$ such that $l\not\perp
n_{1}$, $n_{j}\not\perp n_{j+1}$, $j=1,\ldots,k-1,$ and $n_{k}\not\perp m$.
Clearly, the set ${\cal K}\subset{\mathbb{Z}}^{d}_{*}$ defined by (0.3)
satisfies the condition in this proposition. Therefore, the subspace ${\cal
H}({\cal K})$ is saturating, and Theorems A and B follow from Theorems 2.2 and
2.4, respectively.
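The two conditions of Proposition 2.5 are finite, algorithmic checks: ${\cal I}$ is a generator exactly when the gcd of all $d\times d$ minors of the matrix formed by its vectors equals $1$ (a standard consequence of the Smith normal form), and the chain condition says that the graph on ${\cal I}$ with an edge between every pair of non-perpendicular vectors is connected. A minimal sketch of both checks (the function names are ours, not from the paper):

```python
from itertools import combinations
from math import gcd

def _det(mat):
    # exact integer determinant by Laplace expansion (fine for small d)
    n = len(mat)
    if n == 1:
        return mat[0][0]
    return sum((-1) ** j * mat[0][j] * _det([row[:j] + row[j + 1:] for row in mat[1:]])
               for j in range(n))

def is_generator(vectors, d):
    """True iff the integer lattice spanned by `vectors` is all of Z^d,
    i.e. the gcd of all d x d minors equals 1 (Smith normal form criterion)."""
    g = 0
    for rows in combinations(vectors, d):
        g = gcd(g, abs(_det([list(r) for r in rows])))
        if g == 1:
            return True
    return False

def chain_condition(vectors):
    """True iff the non-orthogonality graph on `vectors` is connected,
    i.e. any l, m are joined by a chain of pairwise non-perpendicular
    vectors {n_j} as in Proposition 2.5."""
    n = len(vectors)
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    seen, stack = {0}, [0]
    while stack:
        i = stack.pop()
        for j in range(n):
            if j not in seen and dot(vectors[i], vectors[j]) != 0:
                seen.add(j)
                stack.append(j)
    return len(seen) == n

def is_saturating(vectors, d):
    return is_generator(vectors, d) and chain_condition(vectors)
```

For instance, $\{(1,0),(0,1)\}$ generates ${\mathbb{Z}}^{2}$ but fails the chain condition since its two vectors are orthogonal, whereas adding $(1,1)$ connects them and makes the subspace saturating.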
## 3 Proof of Proposition 1.2
We start by proving the result in the case when $s>d/2$ is an integer, so
$r=s+2$. Let us fix any $R>0$ and assume that $\psi_{0}\in H^{s}$, $\varphi\in
C^{r}({\mathbb{T}}^{d};{\mathbb{R}})$, and $u\in{\mathbb{R}}^{q}$ are such
that
$\|\psi_{0}\|_{s}+\|\varphi\|_{C^{r}}+\|u\|_{{\mathbb{R}}^{q}}\leq R.$ (3.1)
For any $\delta>0$, we denote $\phi(t)=e^{-i\delta^{-1/2}\varphi}{\cal
R}_{t}(e^{i\delta^{-1/2}\varphi}\psi_{0},\delta^{-1}u)$. According to
Proposition 1.1, $\phi(t)$ exists up to some maximal time ${\cal
T}^{\delta}={\cal T}(e^{i\delta^{-1/2}\varphi}\psi_{0},\delta^{-1}u)$, and
$\|e^{i\delta^{-1/2}\varphi}\phi(t)\|_{s}\to+\infty\quad\text{as~{}$t\to{\cal
T}^{\delta-}$, if ${\cal T}^{\delta}<\infty$.}$
We need to show that
* $(a)$
there is a constant $\delta_{0}>0$ such that ${\cal T}^{\delta}>\delta$ for
any $\delta<\delta_{0}$;
* $(b)$
the following limit holds
$\phi(\delta)\to e^{-i\left({\mathbb{B}}(\varphi)+\langle
u,Q\rangle\right)}\psi_{0}\quad\text{in $H^{s}$ as $\delta\to 0^{+}$}.$
To prove these properties, we introduce the functions
$\displaystyle w(t)=e^{-i\left({\mathbb{B}}(\varphi)+\langle u,Q\rangle\right)t}\psi_{0}^{\delta},$ (3.2)
$\displaystyle v(t)=\phi(\delta t)-w(t),$
where $\psi_{0}^{\delta}\in H^{r}$ is such that (in what follows, $C$
denotes positive constants which may change from line to line; these constants
depend on the parameters $R,V,Q,\kappa,p,d,s$, but not on $\delta$)
$\|\psi_{0}^{\delta}\|_{s}\leq C\quad\text{for }\delta\leq 1,$ (3.3)
$\|\psi_{0}^{\delta}\|_{r}\leq C\delta^{-1/4}\quad\text{for }\delta\leq 1,$ (3.4)
$\|\psi_{0}-\psi_{0}^{\delta}\|_{s}\to 0\quad\text{as $\delta\to 0^{+}$}.$
For example, we can define $\psi_{0}^{\delta}$ by using the heat semigroup:
$\psi_{0}^{\delta}=e^{\delta^{1/4}\Delta}\psi_{0}$. In view of (3.1)-(3.4), we
have
$\|w(t)\|_{s}\leq C,\quad t\geq 0,$ (3.5)
$\|w(t)\|_{r}\leq C\delta^{-1/4},\quad t\geq 0.$ (3.6)
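The heat-semigroup regularization can be checked numerically on the torus, where $e^{\delta^{1/4}\Delta}$ acts as the Fourier multiplier $e^{-\delta^{1/4}|k|^{2}}$. The sketch below is one-dimensional and uses the crude pointwise bound $(1+k^{2})\,e^{-\delta^{1/4}k^{2}}\leq 1+(e\,\delta^{1/4})^{-1}$, which yields (3.4) up to a constant; the discretization and all names are ours:

```python
import numpy as np

def sobolev_norm(psi_hat, k, s):
    """H^s norm computed from Fourier coefficients on the 1-torus."""
    return np.sqrt(np.sum((1.0 + k ** 2) ** s * np.abs(psi_hat) ** 2))

def heat_regularize(psi_hat, k, delta):
    """psi_0^delta = e^{delta^{1/4} Delta} psi_0, i.e. the Fourier
    multiplier e^{-delta^{1/4} k^2}."""
    return np.exp(-delta ** 0.25 * k ** 2) * psi_hat

rng = np.random.default_rng(0)
n, s = 256, 1.0
k = np.fft.fftfreq(n, d=1.0 / n)  # integer frequencies on the torus
# rough H^s data: coefficients decaying just fast enough for a finite H^1 norm
psi_hat = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / (1.0 + np.abs(k)) ** 2

ratios_r, diffs = [], []
for delta in [1e-1, 1e-2, 1e-3]:
    reg = heat_regularize(psi_hat, k, delta)
    # analogue of (3.4) with r = s + 2:
    # ||psi^delta||_{s+2} <= (1 + 1/(e delta^{1/4})) ||psi_0||_s
    ratios_r.append(sobolev_norm(reg, k, s + 2)
                    / ((1.0 + delta ** -0.25 / np.e) * sobolev_norm(psi_hat, k, s)))
    # ||psi_0 - psi_0^delta||_s -> 0 as delta -> 0
    diffs.append(sobolev_norm(psi_hat - reg, k, s))
```

The ratios stay below one (the $H^{r}$ blow-up is no worse than $\delta^{-1/4}$), and the $H^{s}$ distance to the data decreases monotonically as $\delta\to 0^{+}$.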
Furthermore, $v(t)$ is well-defined for $t<\delta^{-1}{\cal T}^{\delta}$ and
satisfies the equation
$\displaystyle i\partial_{t}v$ $\displaystyle=-\delta\Delta(v+w)+\delta
V(v+w)+\delta\kappa|v+w|^{2p}(v+w)$
$\displaystyle\quad-i\delta^{\frac{1}{2}}{\mathbb{D}}(v+w,\varphi)+{\mathbb{B}}(\varphi)v+\langle
u,Q\rangle v,$ (3.7)
and the initial condition
$v(0)=\psi_{0}-\psi_{0}^{\delta},$ (3.8)
where
${\mathbb{D}}(v+w,\varphi)=(v+w)\Delta\varphi+2\sum_{j=1}^{d}\partial_{x_{j}}(v+w)\,\partial_{x_{j}}\varphi.$
Let $\alpha=(\alpha_{1},\ldots,\alpha_{d})\in{\mathbb{N}}^{d}$ be such that
$|\alpha|=\alpha_{1}+\ldots+\alpha_{d}\leq s$. Taking the scalar product
of Eq. (3.7) with $\partial^{2\alpha}v$ in $L^{2}$ and integrating by parts,
we obtain
$\displaystyle\partial_{t}\|\partial^{\alpha}v\|_{L^{2}}^{2}$
$\displaystyle\leq C\Big{(}\delta|\langle\Delta
w,\partial^{2\alpha}v\rangle_{L^{2}}|+\delta|\langle
V(v+w),\partial^{2\alpha}v\rangle_{L^{2}}|$
$\displaystyle\quad+\delta|\langle|v+w|^{2p}(v+w),\partial^{2\alpha}v\rangle_{L^{2}}|+\delta^{1/2}|\langle{\mathbb{D}}(v+w,\varphi),\partial^{2\alpha}v\rangle_{L^{2}}|$
$\displaystyle\quad+|\langle{\mathbb{B}}(\varphi)v+\langle u,Q\rangle
v,\partial^{2\alpha}v\rangle_{L^{2}}|\Big{)}=\sum_{j=1}^{5}I_{j}.$ (3.9)
We estimate the terms $I_{1},I_{2},I_{3},$ and $I_{5}$ by integrating by parts
and by using (3.1), (3.5), and (3.6):
$\displaystyle|I_{1}|\leq C\delta\|w\|_{r}\|v\|_{s}\leq C\delta^{3/4}\|v\|_{s},$
$\displaystyle|I_{2}|\leq C\delta\|v+w\|_{s}\|v\|_{s}\leq C\delta\|v\|_{s}^{2}+C\delta\|v\|_{s},$
$\displaystyle|I_{3}|\leq C\delta\|v+w\|_{s}^{2p+1}\|v\|_{s}\leq C\delta\|v\|_{s}^{2(p+1)}+C\delta\|v\|_{s},$
$\displaystyle|I_{5}|\leq C\|v\|_{s}^{2}.$
We estimate $I_{4}$ as follows:
$|I_{4}|\leq C\delta^{1/2}\|v\|_{s}^{2}+C\delta^{1/2}\|w\|_{s+1}\|v\|_{s}\leq
C\delta^{1/2}\|v\|_{s}^{2}+C\delta^{1/4}\|v\|_{s}.$
In the last relation, we again integrated by parts and used the inequalities
(3.1), (3.5), and (3.6), together with the equality
$\displaystyle\langle\partial_{x_{j}}\varphi\,\partial_{x_{j}}\partial^{\alpha}v,\partial^{\alpha}v\rangle_{L^{2}}$
$\displaystyle=\frac{1}{2}\langle\partial_{x_{j}}\varphi,\partial_{x_{j}}|\partial^{\alpha}v|^{2}\rangle_{L^{2}}=-\frac{1}{2}\langle\partial^{2}_{x_{j}}\varphi,|\partial^{\alpha}v|^{2}\rangle_{L^{2}}.$
Summing up inequalities (3.9) for all $\alpha\in{\mathbb{N}}^{d}$,
$|\alpha|\leq s$, combining the resulting inequality with the estimates for
$I_{j}$ and the Young inequality, and recalling that $\delta\leq 1$, we obtain
$\partial_{t}\|v\|^{2}_{s}\leq
C\delta^{1/2}+C(1+\delta^{1/2})\|v\|_{s}^{2}+C\delta\|v\|_{s}^{2(p+1)},\quad
t\leq\delta^{-1}{\cal T}^{\delta}.$
This inequality, together with (3.8) and the Gronwall inequality, implies that
$\|v(t)\|_{s}^{2}\leq
e^{C(1+\delta^{1/2})t}\left(C\delta^{1/2}t+\|\psi_{0}-\psi_{0}^{\delta}\|_{s}^{2}+C\delta\int_{0}^{t}\|v(y)\|_{s}^{2(p+1)}{\textup{d}}y\right)$
(3.10)
for $t\leq\delta^{-1}{\cal T}^{\delta}$. Let us take $\delta_{0}\in(0,1)$ so
small that, for $\delta<\delta_{0}$,
$\|\psi_{0}-\psi_{0}^{\delta}\|_{s}^{2}<1,$ (3.11)
$e^{C(1+\delta^{1/2})}\left(C\delta^{1/2}+\|\psi_{0}-\psi_{0}^{\delta}\|_{s}^{2}\right)<\frac{1}{2},$ (3.12)
and denote
$\tau^{\delta}=\sup\left\{t<\delta^{-1}{\cal T}^{\delta}:\|v(t)\|_{s}<1\right\}.$
From (3.8) and (3.11) it follows that $\tau^{\delta}>0$ for
$\delta<\delta_{0}$. Let us show that $\tau^{\delta}>1$ provided that
$\delta_{0}<\left(2Ce^{2C}\right)^{-1}.$ (3.13)
Assume, by contradiction, that $\tau^{\delta}\leq 1$. Let $t=\tau^{\delta}$ in
(3.10). By using (3.12) and (3.13), we obtain
$1=\|v(\tau^{\delta})\|_{s}^{2}<\frac{1}{2}+\frac{1}{2}\int_{0}^{\tau^{\delta}}\|v(y)\|_{s}^{2(p+1)}{\textup{d}}y\leq
1.$
This contradiction shows that $\tau^{\delta}>1$ for $\delta<\delta_{0}$, hence
also $1<\delta^{-1}{\cal T}^{\delta}$. Thus, property (a) is proved. Taking
$t=1$ in (3.10), we arrive at
$\|v(1)\|_{s}^{2}\leq
e^{C(1+\delta^{1/2})}\left(C\delta^{1/2}+\|\psi_{0}-\psi_{0}^{\delta}\|_{s}^{2}+C\delta\right)\to
0\quad\text{as $\delta\to 0^{+}$}.$
This implies (b) and completes the proof in the case when $s>d/2$ is an
integer.
To derive properties (a) and (b) in the general case, i.e., when $s\geq s_{d}$
is an arbitrary number, we use inequality (3.10) for integer values of $s$ and
an interpolation argument.
## 4 Saturating subspaces
###### Proof of Proposition 2.5.
The proof is divided into four steps.
Step 1. First, let us assume that ${\cal I}\subset{\mathbb{Z}}^{d}_{*}$ is an
arbitrary finite set, ${\cal H}_{0}({\cal I})={\cal H}({\cal I})$ is the
subspace defined by (2.4), ${\cal H}_{j}({\cal I})={\cal F}({\cal
H}_{j-1}({\cal I}))$ for $j\geq 1$, and ${\cal H}_{\infty}({\cal I})$ is
defined by (2.1).
Step 1.1. Let us show that, if
$\cos\langle x,m\rangle,\,\sin\langle x,m\rangle\in{\cal H}_{\infty}({\cal
I})\quad\text{for some $m\in{\mathbb{Z}}^{d}_{*}$},$
then
${\mathbb{B}}(\cos\langle x,m\rangle),\,{\mathbb{B}}(\sin\langle
x,m\rangle)\in{\cal H}_{\infty}({\cal I}).$
Indeed, assume that
$\cos\langle x,m\rangle,\,\sin\langle x,m\rangle\in{\cal H}_{N}({\cal
I})\quad\text{ for some $N\geq 0$}.$ (4.1)
The equalities
$\cos\langle x,2m\rangle=1-\frac{2}{|m|^{2}}{\mathbb{B}}(\cos\langle
x,m\rangle)=\frac{2}{|m|^{2}}{\mathbb{B}}(\sin\langle x,m\rangle)-1,$ (4.2)
the assumptions ${\bf 1}\in{\cal H}({\cal I})$ and (4.1), and the definition
of ${\cal F}$ imply that
$\cos\langle x,2m\rangle\in{\cal H}_{N+1}({\cal I}).$ (4.3)
As a consequence of (4.2) and (4.3), we have
$\displaystyle{\mathbb{B}}(\cos\langle x,m\rangle)$
$\displaystyle=\frac{|m|^{2}}{2}(1-\cos\langle x,2m\rangle)\in{\cal
H}_{N+1}({\cal I}),$ $\displaystyle{\mathbb{B}}(\sin\langle x,m\rangle)$
$\displaystyle=\frac{|m|^{2}}{2}(1+\cos\langle x,2m\rangle)\in{\cal
H}_{N+1}({\cal I}),$
which imply the required result.
Step 1.2. Let us show that, if
$\cos\langle x,m\rangle,\,\sin\langle x,m\rangle,\,\cos\langle
x,l\rangle,\,\sin\langle x,l\rangle\in{\cal H}_{\infty}({\cal I})$
for some $m,l\in{\mathbb{Z}}^{d}_{*}$ such that $m\not\perp l$, then
$\cos\langle x,m+l\rangle,\,\sin\langle x,m+l\rangle\in{\cal H}_{\infty}({\cal
I}).$
Indeed, this follows immediately from the equalities
$\displaystyle\cos\langle x,m+l\rangle=\pm\frac{1}{2\langle m,l\rangle}\Big{(}{\mathbb{B}}(\sin\langle x,m\rangle\pm\sin\langle x,l\rangle)+{\mathbb{B}}(\cos\langle x,m\rangle\mp\cos\langle x,l\rangle)$
$\displaystyle\quad-{\mathbb{B}}(\sin\langle x,m\rangle)-{\mathbb{B}}(\sin\langle x,l\rangle)-{\mathbb{B}}(\cos\langle x,m\rangle)-{\mathbb{B}}(\cos\langle x,l\rangle)\Big{)},$
$\displaystyle\sin\langle x,m+l\rangle=\pm\frac{1}{2\langle m,l\rangle}\Big{(}{\mathbb{B}}(\sin\langle x,m\rangle\mp\cos\langle x,l\rangle)+{\mathbb{B}}(\cos\langle x,m\rangle\mp\sin\langle x,l\rangle)$
$\displaystyle\quad-{\mathbb{B}}(\sin\langle x,m\rangle)-{\mathbb{B}}(\sin\langle x,l\rangle)-{\mathbb{B}}(\cos\langle x,m\rangle)-{\mathbb{B}}(\cos\langle x,l\rangle)\Big{)}$
and the result of step 1.1.
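The trigonometric identities of Steps 1.1 and 1.2 can be verified pointwise, taking ${\mathbb{B}}(\varphi)=|\nabla\varphi|^{2}$, the normalization consistent with (4.2). With this normalization, the prefactor $1/(2\langle m,l\rangle)$ makes the combination of ${\mathbb{B}}$-terms equal to $\cos\langle x,m+l\rangle$ exactly; for membership in the linear subspace ${\cal H}_{\infty}({\cal I})$ any nonzero multiple would of course suffice. A numerical sketch (all names are ours):

```python
import numpy as np

# Pointwise check of the identities in Steps 1.1-1.2, assuming
# B(phi) = |grad phi|^2 (the form consistent with (4.2)).
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 2.0 * np.pi, size=(1000, 2))
m, l = np.array([2, 1]), np.array([1, 3])
am, al, ml = x @ m, x @ l, int(m @ l)   # <m,l> = 5, nonzero
m2 = int(m @ m)

def B(grad):
    """B(phi) = |grad phi|^2, evaluated pointwise from the gradient."""
    return sum(g ** 2 for g in grad)

grad_sin = lambda v, a: [vi * np.cos(a) for vi in v]   # grad sin<x,v>
grad_cos = lambda v, a: [-vi * np.sin(a) for vi in v]  # grad cos<x,v>

# identity (4.2): both right-hand sides equal cos<x,2m>
rhs1 = 1 - 2.0 / m2 * B(grad_cos(m, am))
rhs2 = 2.0 / m2 * B(grad_sin(m, am)) - 1

# Step 1.2, upper sign choice: the combination of B-terms reproduces
# cos<x,m+l> with the prefactor 1/(2<m,l>)
Bsum = (B([a + b for a, b in zip(grad_sin(m, am), grad_sin(l, al))])
        + B([a - b for a, b in zip(grad_cos(m, am), grad_cos(l, al))])
        - B(grad_sin(m, am)) - B(grad_sin(l, al))
        - B(grad_cos(m, am)) - B(grad_cos(l, al)))
cos_mpl = Bsum / (2.0 * ml)
```

The quadratic terms of each ${\mathbb{B}}$ cancel in the combination, leaving only the cross terms $2\langle m,l\rangle(\cos\cos-\sin\sin)=2\langle m,l\rangle\cos\langle x,m+l\rangle$, as the check confirms.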
Step 2. Now, let us suppose that ${\cal I}\subset{\mathbb{Z}}^{d}_{*}$ is a
finite set such that, for any $l,m\in{\cal I}$, there are vectors
$\\{n_{j}\\}_{j=1}^{k}\subset{\cal I}$ satisfying $l\not\perp n_{1}$,
$n_{j}\not\perp n_{j+1}$, $j=1,\ldots,k-1,$ and $n_{k}\not\perp m$. Let
$N=\text{card}({\cal I})$ and ${\cal I}=\\{m_{1},\ldots,m_{N}\\}$. Arguing by
induction on $N$, we show in this step that
$\cos\langle x,a_{1}m_{1}+\ldots+a_{N}m_{N}\rangle,\,\sin\langle
x,a_{1}m_{1}+\ldots+a_{N}m_{N}\rangle\in{\cal H}_{\infty}({\cal I})$ (4.4)
for any $a_{1},\ldots,a_{N}\in{\mathbb{Z}}$.
Step 2.1. Let ${\cal I}=\\{m_{1},m_{2}\\}\subset{\mathbb{Z}}^{d}_{*}$ with
$m_{1}\not\perp m_{2}$. By the result of step 1.2, we have
$\cos\langle x,a_{1}m_{1}\rangle,\,\sin\langle
x,a_{1}m_{1}\rangle,\,\cos\langle x,a_{2}m_{2}\rangle,\,\sin\langle
x,a_{2}m_{2}\rangle\in{\cal H}_{\infty}({\cal I})$
for any $a_{1},a_{2}\in{\mathbb{Z}}$. Again, in view of step 1.2, this implies
that
$\cos\langle x,a_{1}m_{1}+a_{2}m_{2}\rangle,\,\sin\langle
x,a_{1}m_{1}+a_{2}m_{2}\rangle\in{\cal H}_{\infty}({\cal I})$
for any $a_{1},a_{2}\in{\mathbb{Z}}$.
Step 2.2. Assume that the required property is true if the cardinality of the
set ${\cal I}$ is less or equal to $N-1$. Let ${\cal
I}\subset{\mathbb{Z}}^{d}_{*}$ be such that $N=\text{card}({\cal I})$ and
${\cal I}=\\{m_{1},\ldots,m_{N}\\}$. Without loss of generality, we can assume
$m_{N-1}\not\perp m_{N}$ and the set $\\{m_{1},\ldots,m_{N-1}\\}$ satisfies
the condition formulated in the beginning of step 2. Let us take any
$a_{1},\ldots,a_{N}\in{\mathbb{Z}}$ and $k\geq 1$ and write
$\displaystyle a_{1}m_{1}+\ldots+a_{N}m_{N}$
$\displaystyle=\left(a_{1}m_{1}+\ldots+a_{N-2}m_{N-2}+(a_{N-1}-k)m_{N-1}\right)$
$\displaystyle\quad+\left(km_{N-1}+a_{N}m_{N}\right).$ (4.5)
Then,
$\displaystyle\langle a_{1}m_{1}+\ldots+(a_{N-1}-k)m_{N-1},\,km_{N-1}+a_{N}m_{N}\rangle=(a_{N-1}-k)k|m_{N-1}|^{2}+O(k)\quad\text{as $k\to+\infty$.}$
As $m_{N-1}\neq 0$, for sufficiently large $k\geq 1$, we have
$\displaystyle a_{1}m_{1}+\ldots+a_{N-2}m_{N-2}+(a_{N-1}-k)m_{N-1}\not\perp
km_{N-1}+a_{N}m_{N}.$ (4.6)
Relation (4.4) is proved by combining (4.5) and (4.6), the induction
hypothesis, and the assumption that $m_{N-1}\not\perp m_{N}$.
Step 3. We conclude from step 2 that, if ${\cal I}\subset{\mathbb{Z}}^{d}_{*}$
is a set satisfying the conditions of Proposition 2.5, then
$\cos\langle x,m\rangle,\,\sin\langle x,m\rangle\in{\cal H}_{\infty}({\cal
I})\quad\text{for any $m\in{\mathbb{Z}}^{d}_{*}$.}$
This implies that ${\cal H}_{\infty}({\cal I})$ is dense in
$C^{r}({\mathbb{T}}^{d};{\mathbb{R}})$ for any $r\geq 0$, hence ${\cal
H}({\cal I})$ is saturating.
Step 4. Finally, let us assume that the conditions of the proposition are not
satisfied for ${\cal I}\subset{\mathbb{Z}}^{d}_{*}$. We distinguish between
two cases.
Step 4.1. If ${\cal I}$ is not a generator, we can find a vector
$n\in{\mathbb{Z}}^{d}_{*}$ which does not belong to the set $\tilde{\cal I}$
of linear combinations of vectors of ${\cal I}$ with integer coefficients. It
is easy to see that
${\cal H}_{\infty}({\cal I})\subset\text{span}\\{\sin\langle
x,m\rangle,\,\cos\langle x,m\rangle:\,\,m\in\tilde{\cal I}\\}.$
Thus, the functions $\sin\langle x,n\rangle$ and $\cos\langle x,n\rangle$ are
orthogonal to the vector space ${\cal H}_{\infty}({\cal I})$ in the Sobolev
spaces $H^{j}({\mathbb{T}}^{d};{\mathbb{R}})$ for any $j\geq 0$. We conclude
that ${\cal H}_{\infty}({\cal I})$ is not dense in
$C^{r}({\mathbb{T}}^{d};{\mathbb{R}})$, thus the subspace ${\cal H}({\cal I})$
is not saturating.
Step 4.2. If ${\cal I}$ does not satisfy the second condition of the
proposition, then it is of the form
${\cal I}=\cup_{j=1}^{k}\{m_{1}^{j},\ldots,m^{j}_{n_{j}}\},$
where $k\geq 2$ and $m_{i_{1}}^{j_{1}}\perp m_{i_{2}}^{j_{2}}$ for any integers
$1\leq j_{1}<j_{2}\leq k,$ $1\leq i_{1}\leq n_{j_{1}},$ and $1\leq i_{2}\leq
n_{j_{2}}$. By using the arguments of Steps 1 and 2, it is easy to verify
that the function $\cos\langle x,m_{1}^{j_{1}}+m_{2}^{j_{2}}\rangle$ is
orthogonal to ${\cal H}_{\infty}({\cal I})$ in
$H^{j}({\mathbb{T}}^{d};{\mathbb{R}})$ for any $j\geq 0$. Thus, the space
${\cal H}_{\infty}({\cal I})$ is not dense in
$C^{r}({\mathbb{T}}^{d};{\mathbb{R}})$. ∎
## 5 Growth of Sobolev norms
Let us consider the NLS equation
$\displaystyle i\partial_{t}\psi$
$\displaystyle=-\Delta\psi+V(x)\psi+\kappa|\psi|^{2p}\psi+\langle\eta(t),Q(x)\rangle\psi,$
(5.1) $\displaystyle\psi(0)$ $\displaystyle=\psi_{0}$ (5.2)
with potential $V$ and parameters $d,p,\kappa$ as in the previous sections. We
assume that the field $Q$ satisfies Condition (H1) and $\eta$ is a random
process of the form (0.6) with the following condition satisfied for the
random variables $\\{\eta_{k}\\}$. We denote $J=[0,1]$ and ${\cal
E}=L^{2}(J;{\mathbb{R}}^{q})$.
(H2) $\\{\eta_{k}\\}$ are independent random variables in ${\cal E}$ with
common law $\ell$ such that
$\int_{\cal E}\|y\|_{\cal
E}^{2}\,\ell({\textup{d}}y)<\infty\quad\quad\text{and}\quad\quad\mathop{\rm
supp}\nolimits\ell={\cal E}.$
For example, this condition is satisfied if the random variables
$\\{\eta_{k}\\}$ are of the form
$\eta_{k}(t)=\sum_{j=1}^{+\infty}b_{j}\xi_{jk}e_{j}(t),\quad t\in J,$
where $\\{b_{j}\\}$ are non-zero real numbers verifying
$\sum_{j=1}^{+\infty}b_{j}^{2}<\infty,$ $\\{e_{j}\\}$ is an orthonormal basis
in ${\cal E}$, and $\\{\xi_{jk}\\}$ are independent real-valued random
variables whose law has a continuous density $\rho_{j}$ with respect to the
Lebesgue measure such that
$\int_{-\infty}^{+\infty}x^{2}\rho_{j}(x)\,{\textup{d}}x=1,\quad\rho_{j}(x)>0\quad\text{for
all $x\in{\mathbb{R}}$ and $j\geq 1$.}$
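This construction can be sketched directly: with $b_{j}=1/j$ (square-summable), a trigonometric orthonormal basis of ${\cal E}=L^{2}(J;{\mathbb{R}})$ in the scalar case $q=1$, and standard Gaussian $\xi_{jk}$ (positive continuous density, unit variance), the resulting $\eta_{k}$ satisfies the moment condition of (H2), since ${\mathbb E}\,\|\eta_{k}\|_{\cal E}^{2}=\sum_{j}b_{j}^{2}<\infty$. A sketch with a truncated sum (the parameter choices and names are ours):

```python
import numpy as np

def sample_eta(rng, n_modes=50, n_t=200):
    """Draw one realization of eta_k(t) = sum_j b_j xi_{jk} e_j(t) on
    J = [0,1] (scalar case q = 1): b_j = 1/j is square-summable, {e_j} is a
    trigonometric orthonormal basis of L^2(J), and xi_{jk} ~ N(0,1) has a
    positive continuous density with unit variance, as required by (H2)."""
    t = np.linspace(0.0, 1.0, n_t, endpoint=False)
    b = 1.0 / np.arange(1, n_modes + 1)
    xi = rng.standard_normal(n_modes)
    # orthonormal basis: e_1 = 1, then sqrt(2) cos(2 pi j t), sqrt(2) sin(2 pi j t)
    basis, j = [np.ones_like(t)], 1
    while len(basis) < n_modes:
        basis.append(np.sqrt(2.0) * np.cos(2.0 * np.pi * j * t))
        if len(basis) < n_modes:
            basis.append(np.sqrt(2.0) * np.sin(2.0 * np.pi * j * t))
        j += 1
    return t, (b * xi) @ np.stack(basis)   # eta_k evaluated on the grid

rng = np.random.default_rng(42)
# Monte Carlo check of the moment condition in (H2):
# E ||eta_k||_E^2 = sum_j b_j^2, here the truncated sum is about 1.625
mean_sq = np.mean([np.mean(sample_eta(rng)[1] ** 2) for _ in range(500)])
```

By the orthonormality of the basis, the empirical second moment concentrates around $\sum_{j=1}^{50}1/j^{2}\approx 1.625$.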
By Proposition 1.1, the problem (5.1), (5.2) is locally well-posed in $H^{s}$
for any $s>d/2$ up to some (random) maximal time ${\cal T}={\cal
T}(\psi_{0},\eta)>0$. Let ${\mathbb{P}}_{\psi_{0}}$ be the probability measure
corresponding to the trajectories issued from $\psi_{0}$ (e.g., see Section
1.3.1 in [KS12]). Recall that ${\cal S}$ is the unit sphere in $L^{2}$.
###### Theorem 5.1.
Under the Conditions (H1) and (H2), for any $s>s_{d}$ and any $\psi_{0}\in
H^{s}\cap{\cal S}$, we have
${\mathbb{P}}_{\psi_{0}}\left\\{\limsup_{t\to{\cal
T}^{-}}\|\psi(t)\|_{s}=+\infty\right\\}=1.$ (5.3)
By the blow-up alternative, equality (5.3) gives new information in the case
${\cal T}(\psi_{0},\eta)=+\infty.$
###### Proof.
Step 1. Reduction. Together with Eq. (5.1), let us consider the following
truncated NLS equation:
$i\partial_{t}\psi=-\Delta\psi+V(x)\psi+\kappa\chi_{R}(\|\psi\|_{s})|\psi|^{2p}\psi+\langle\eta(t),Q(x)\rangle\psi,$
(5.4)
where $R>0$ and $\chi_{R}\in C^{\infty}_{0}({\mathbb{R}})$ is such that
$0\leq\chi_{R}(x)\leq 1$ for $x\in{\mathbb{R}}$ and $\chi_{R}(x)=1$ for
$|x|\leq R$. Let ${\cal F}_{k}$, $k\geq 1$, be the $\sigma$-algebra generated
by the family $\\{\eta_{j}\\}_{j=1}^{k}$. The problem (5.4), (5.2) is globally
well-posed. The following proposition is proved at the end of this section.
###### Proposition 5.2.
For any $\psi_{0}\in H^{s}$ and $R>0$, the problem (5.4), (5.2) has a unique
solution $\psi^{R}\in C({\mathbb{R}}_{+};H^{s})$. Moreover, the family
$\left\\{\psi^{R}(k+\cdot):J\to H^{s}\right\\}_{k\geq 0}$
defines a $C(J;H^{s})$-valued Markov process with respect to the filtration
${\cal F}_{k+1}$.
Let us fix any $0<M<R$ and consider the stopping time
$\tau_{M,R}=1+\min\left\{k\geq
0:\|\psi^{R}(k+\cdot)\|_{C(J;H^{s})}>M\right\},\quad\psi_{0}\in H^{s},$
where the minimum over an empty set is equal to $+\infty$. Assume we have
shown that
${\mathbb{P}}_{\psi_{0}}\\{\tau_{M,R}<\infty\\}=1,\quad\psi_{0}\in
H^{s}\cap{\cal S}.$ (5.5)
Since $R>M$, this implies that
${\mathbb{P}}_{\psi_{0}}\\{\tau_{M}<\infty\\}=1,\quad\psi_{0}\in
H^{s}\cap{\cal S},$ (5.6)
where
$\tau_{M}=\min\left\\{k\geq 0:\sup_{t\in J,\,k+t<{\cal
T}}\|\psi(k+t)\|_{s}>M\right\\}$
and again the minimum over an empty set is $+\infty$. As $M>0$ is arbitrary,
we conclude that (5.3) holds.
Step 2. Proof of (5.5). Assume that there is an integer $l\geq 1$ such that
$c=c(M,R)=\sup_{{\psi_{0}\in H^{s}\cap{\cal
S}}}{\mathbb{P}}_{\psi_{0}}\left\\{\tau_{M,R}>l\right\\}<1.$ (5.7)
Combining this with the Markov property, we obtain
$\displaystyle{\mathbb{P}}_{\psi_{0}}\left\\{\tau_{M,R}>nl\right\\}$
$\displaystyle={\mathbb{E}}_{\psi_{0}}\left({\mathbb{I}}_{\\{\tau_{M,R}>(n-1)l\\}}{\mathbb{P}}_{\phi}\\{\tau_{M,R}>l\\}|_{\phi={\psi^{R}((n-1)l)}}\right)$
$\displaystyle\leq
c\,{\mathbb{P}}_{\psi_{0}}\left\\{\tau_{M,R}>(n-1)l\right\\},$
where ${\mathbb{E}}_{\psi_{0}}$ is the expectation corresponding to
${\mathbb{P}}_{\psi_{0}}$. Iterating this inequality, we get
${\mathbb{P}}_{\psi_{0}}\left\\{\tau_{M,R}>nl\right\\}\leq c^{n}.$
This, together with the Borel–Cantelli lemma, implies (5.5).
Step 3. Proof of (5.7). By Theorem 2.3, for any $\psi_{0}\in
H^{s_{d}}\cap{\cal S}$, there is a control $u\in{\cal E}$ such that the
corresponding solution satisfies
$\sup_{t\in J,\,t<{\cal T}}\|\psi(t)\|_{s_{d}}>M.$ (5.8)
On the other hand, Condition (H2) implies that
${\mathbb{P}}\left\\{\|u-\eta\|_{{\cal E}}<\delta\right\\}>0$
for any $\delta>0$. Combining this with Proposition 1.1 and inequality (5.8),
we see that there is a number $\delta>0$ such that
$\inf_{\psi_{0}^{\prime}\in B_{H^{s_{d}}}(\psi_{0},\delta)\cap{\cal
S}}{\mathbb{P}}_{\psi_{0}^{\prime}}\left\\{\sup_{t\in J,\,t<{\cal
T}^{\prime}}\|\psi(t)\|_{s_{d}}>M\right\\}>0,$
where ${\cal T}^{\prime}={\cal T}(\psi_{0}^{\prime},\eta)$. As $R>M$, we also
have
$\inf_{\psi_{0}^{\prime}\in B_{H^{s_{d}}}(\psi_{0},\delta)\cap{\cal
S}}{\mathbb{P}}_{\psi_{0}^{\prime}}\left\\{\sup_{t\in
J}\|\psi^{R}(t)\|_{s_{d}}>M\right\\}>0.$
Since the ball $B_{H^{s}}(0,M)$ is compact in $H^{s_{d}}$ and
$\|\cdot\|_{s_{d}}\leq\|\cdot\|_{s}$, we derive that
$\inf_{\psi_{0}\in B_{H^{s}}(0,M)\cap{\cal
S}}{\mathbb{P}}_{\psi_{0}}\left\\{\sup_{t\in
J}\|\psi^{R}(t)\|_{s}>M\right\\}>0.$
The latter and the fact that
${\mathbb{P}}_{\psi_{0}}\left\\{\tau_{M,R}=1\right\\}=1\quad\text{if
$\|\psi_{0}\|_{s}>M$}$
imply (5.7) with $l=1$ and
$c=1-\inf_{\psi_{0}\in B_{H^{s}}(0,M)\cap{\cal
S}}{\mathbb{P}}_{\psi_{0}}\left\\{\sup_{t\in
J}\|\psi^{R}(t)\|_{s}>M\right\\}.$
This completes the proof of the theorem. ∎
###### Proof of Proposition 5.2.
The local well-posedness of (5.4), (5.2) is proved by standard arguments. As
the $H^{s}$-norm of the solution remains bounded on any bounded interval, it
can be extended to any $t>0$. For any $k\geq 1$, let us denote by
$\psi_{k}(\psi_{0},\eta_{1},\ldots,\eta_{k})$ the restriction of the solution
of (5.4), (5.2) to the interval $[k-1,k]$ (we skip the dependence on $R$).
Then $\\{\psi_{k}(\psi_{0},\eta_{1},\ldots,\eta_{k})\\}_{k\geq 1}$ is a Markov
process in $C(J,H^{s})$. Indeed, we have
$\psi_{k+n}(\psi_{0},\eta_{1},\ldots,\eta_{k+n})=\psi_{n}(\psi_{k}(\psi_{0},\eta_{1},\ldots,\eta_{k}),\eta_{k+1},\ldots,\eta_{k+n}).$
As $\\{\eta_{j}\\}_{j\geq k+1}$ is independent of ${\cal F}_{k}$ and
$\psi_{k}$ is ${\cal F}_{k}$-measurable, the following equality holds
${\mathbb{E}}\left(f(\psi_{k+n}(\psi_{0},\eta_{1},\ldots,\eta_{k+n}))|{\cal
F}_{k}\right)={\mathbb{E}}f(\psi_{n}(\psi,\eta_{k+1},\ldots,\eta_{k+n}))$
(5.9)
for any bounded measurable function $f:C(J,H^{s})\to{\mathbb{R}}$. Here,
$\psi$ is the value at time $t=1$ of
$\psi_{k}(\psi_{0},\eta_{1},\ldots,\eta_{k})$. The vectors
$(\eta_{1},\ldots,\eta_{n})$ and $(\eta_{k+1},\ldots,\eta_{k+n})$ have the
same law, so
${\mathbb{E}}f(\psi_{n}(\psi,\eta_{k+1},\ldots,\eta_{k+n}))={\mathbb{E}}f(\psi_{n}(\psi,\eta_{1},\ldots,\eta_{n})).$
Combining this and (5.9), we arrive at the required result. ∎
## References
* [AS05] A. A. Agrachev and A. V. Sarychev. Navier–Stokes equations: controllability by means of low modes forcing. J. Math. Fluid Mech., 7(1):108–152, 2005.
* [AS06] A. A. Agrachev and A. V. Sarychev. Controllability of 2D Euler and Navier–Stokes equations by degenerate forcing. Comm. Math. Phys., 265(3):673–697, 2006.
* [AS08] A. A. Agrachev and A. V. Sarychev. Solid controllability in fluid dynamics. In Instability in Models Connected with Fluid Flows. I, volume 6 of Int. Math. Ser. (N. Y.), pages 1–35. Springer, New York, 2008.
* [BC06] K. Beauchard and J.-M. Coron. Controllability of a quantum particle in a moving potential well. J. Funct. Anal., 232(2):328–389, 2006.
* [BCCS12] U. Boscain, M. Caponigro, T. Chambrion, and M. Sigalotti. A weak spectral condition for the controllability of the bilinear Schrödinger equation with application to the control of a rotating planar molecule. Comm. Math. Phys., 311(2):423–455, 2012.
* [BCT18] K. Beauchard, J.-M. Coron, and H. Teismann. Minimal time for the approximate bilinear control of Schrödinger equations. Math. Methods Appl. Sci., 41(5):1831–1844, 2018.
* [Bea05] K. Beauchard. Local controllability of a 1-D Schrödinger equation. J. Math. Pures Appl. (9), 84(7):851–956, 2005.
* [BGMR18] D. Bambusi, B. Grébert, A. Maspero, and D. Robert. Reducibility of the quantum harmonic oscillator in $d$-dimensions with polynomial time-dependent perturbation. Anal. PDE, 11(3):775–799, 2018.
* [BL10] K. Beauchard and C. Laurent. Local controllability of 1D linear and nonlinear Schrödinger equations with bilinear control. J. Math. Pures Appl. (9), 94(5):520–554, 2010.
* [Bou99] J. Bourgain. On growth of Sobolev norms in linear Schrödinger equations with smooth time dependent potential. J. Anal. Math., 77:315–348, 1999.
* [Bou00] J. Bourgain. Problems in Hamiltonian PDE’s. Geom. Funct. Anal., Special Volume, Part I, pages 32–56, 2000.
* [Caz03] T. Cazenave. Semilinear Schrödinger equations, volume 10. AMS, Providence, RI, 2003.
* [CKS+10] J. Colliander, M. Keel, G. Staffilani, H. Takaoka, and T. Tao. Transfer of energy to high frequencies in the cubic defocusing nonlinear Schrödinger equation. Invent. Math., 181(1):39–113, 2010.
* [CMSB09] T. Chambrion, P. Mason, M. Sigalotti, and U. Boscain. Controllability of the discrete-spectrum Schrödinger equation driven by an external field. Ann. Inst. H. Poincaré Anal. Non Linéaire, 26(1):329–349, 2009.
* [Del14] J.-M. Delort. Growth of Sobolev norms for solutions of time dependent Schrödinger operators with harmonic oscillator potential. Comm. Partial Differential Equations, 39(1):1–33, 2014.
* [EgKS03] M. B. Erdoğan, R. Killip, and W. Schlag. Energy growth in Schrödinger’s equation with Markovian forcing. Comm. Math. Phys., 240(1-2):1–29, 2003.
* [GG17] P. Gérard and S. Grellier. The cubic Szegö equation and Hankel operators. Astérisque, (389):vi+112, 2017.
* [GK15] M. Guardia and V. Kaloshin. Growth of Sobolev norms in the cubic defocusing nonlinear Schrödinger equation. J. Eur. Math. Soc., 17(1):71–149, 2015.
* [HM20] E. Haus and A. Maspero. Growth of Sobolev norms in time dependent semiclassical anharmonic oscillators. J. Funct. Anal., 278(2):108316, 25, 2020.
* [HPTV15] Z. Hani, B. Pausader, N. Tzvetkov, and N. Visciglia. Modified scattering for the cubic Schrödinger equation on product spaces and applications. Forum Math. Pi, 3, 2015.
* [KS12] S. Kuksin and A. Shirikyan. Mathematics of Two-Dimensional Turbulence. Cambridge University Press, Cambridge, 2012.
* [Kuk97] S. Kuksin. Oscillations in space-periodic nonlinear Schrödinger equations. Geom. Funct. Anal., 7(2):338–363, 1997.
* [Mas19] A. Maspero. Lower bounds on the growth of Sobolev norms in some linear time dependent Schrödinger equations. Math. Res. Lett., 26(4):1197–1215, 2019.
* [Mir09] M. Mirrahimi. Lyapunov control of a quantum particle in a decaying potential. Ann. Inst. H. Poincaré Anal. Non Linéaire, 26(5):1743–1765, 2009.
* [Ner09] V. Nersesyan. Growth of Sobolev norms and controllability of the Schrödinger equation. Comm. Math. Phys., 290(1):371–387, 2009.
* [Ner10] V. Nersesyan. Global approximate controllability for Schrödinger equation in higher Sobolev norms and applications. Ann. Inst. H. Poincaré Anal. Non Linéaire, 27(3), 2010.
* [Ner21] V. Nersesyan. Approximate controllability of nonlinear parabolic PDEs in arbitrary space dimension. To appear in Math. Control Relat. Fields, 2021.
* [Sar12] A. Sarychev. Controllability of the cubic Schrödinger equation via a low-dimensional source term. Math. Control Relat. Fields, 2(3):247–270, 2012.
* [Shi06] A. Shirikyan. Approximate controllability of three-dimensional Navier–Stokes equations. Comm. Math. Phys., 266(1):123–151, 2006.
* [Shi07] A. Shirikyan. Exact controllability in projections for three-dimensional Navier–Stokes equations. Ann. Inst. H. Poincaré Anal. Non Linéaire, 24(4):521–537, 2007.
* [Shi18] A. Shirikyan. Control theory for the Burgers equation: Agrachev-Sarychev approach. Pure Appl. Funct. Anal., 3(1):219–240, 2018.
* [Tao06] T. Tao. Nonlinear dispersive equations, volume 106. AMS, Providence, RI, 2006. Local and global analysis.
# Model-Based Policy Search Using Monte Carlo Gradient Estimation with Real
Systems Application
Fabio Amadio1, Alberto Dalla Libera1, Riccardo Antonello1, Daniel Nikovski2,
Ruggero Carli1, Diego Romeres2 1 Fabio Amadio, Alberto Dalla Libera, Riccardo
Antonello and Ruggero Carli are with the Department of Information
Engineering, University of Padova, Via Gradenigo 6/B, 35131 Padova, Italy
<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>Diego Romeres and Daniel Nikovski are with Mitsubishi
Electric Research Laboratories (MERL), Cambridge, MA 02139<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
In this paper, we present a Model-Based Reinforcement Learning (MBRL)
algorithm named _Monte Carlo Probabilistic Inference for Learning COntrol_
(MC-PILCO). The algorithm relies on Gaussian Processes (GPs) to model the
system dynamics and on a Monte Carlo approach to estimate the policy gradient.
This defines a framework in which we ablate the choice of the following
components: (i) the selection of the cost function, (ii) the optimization of
policies using dropout, (iii) an improved data efficiency through the use of
structured kernels in the GP models. The combination of the aforementioned
aspects dramatically affects the performance of MC-PILCO. Numerical
comparisons in a simulated cart-pole environment show that MC-PILCO exhibits
better data efficiency and control performance w.r.t. state-of-the-art GP-
based MBRL algorithms. Finally, we apply MC-PILCO to real systems, considering
in particular systems with partially measurable states. We discuss the
importance of modeling both the measurement system and the state estimators
during policy optimization. The effectiveness of the proposed solutions has
been tested in simulation and on two real systems, a Furuta pendulum and a
ball-and-plate rig. MC-PILCO code is publicly available at
https://www.merl.com/research/license/MC-PILCO.
###### Index Terms:
Model learning for Control, Dynamics, Learning and Adaptive Systems, Robot
Learning
## I Introduction
In recent years, reinforcement learning (RL) [1] has achieved outstanding
results in many different environments, and has shown the potential to provide
an automated framework for learning different controllers by self-
experimentation. However, model-free RL (MFRL) algorithms might require a
massive amount of interactions with the environment in order to solve the
assigned task. This data inefficiency limits RL’s potential in real-world
applications, due to the time and cost of such interactions. In
particular, when dealing with mechanical systems, it is critical to learn the
task with the least possible amount of interaction, to reduce wear and tear
and avoid any damage to the system. A promising way to overcome this
limitation is model-based reinforcement learning (MBRL). MBRL is based on the
use of data from interactions to build a predictive model of the environment
and to exploit it to plan control actions. MBRL increases data efficiency by
using the model to extract more valuable information from the available data
[2].
On the other hand, MBRL methods are effective only insofar as their models
accurately resemble the real systems. Hence, deterministic models might suffer
dramatically from model inaccuracy, and the use of stochastic models becomes
necessary in order to capture uncertainty. Gaussian Processes (GPs) [3] are a
class of Bayesian models commonly used in RL methods precisely for their
intrinsic capability to handle uncertainty and provide principled stochastic
predictions [4][5].
Related work. PILCO (Probabilistic Inference for Learning COntrol) [6] is a
successful MBRL algorithm that uses GP models and gradient-based policy search
to achieve substantial data efficiency in solving different control problems,
both in simulation and on real systems [7][8]. In PILCO, long-term
predictions are computed analytically, approximating the distribution of the
next state at each time instant with a Gaussian distribution by means of
moment matching. In this way, the policy gradient is computed in closed form.
However, the use of moment matching introduces two relevant limitations. (i)
Moment matching models only unimodal distributions. (ii) The computation of
the moments is shown to be tractable only when considering Squared Exponential
(SE) kernels and differentiable cost functions. The unimodal approximation is
too crude an assumption on the long-term dynamics of several
systems. Moreover, it introduces relevant limitations when the initial
conditions or the optimal solution are multimodal. For instance, when
the initial variance of the state distribution is high, the optimal solution
might be multimodal, due to dependencies on the initial conditions. Also the
limitation on the kernel choice might be very stringent, as the SE kernel
imposes smooth properties on the GPs posterior estimator and might show poor
generalization properties in data that have not been seen during training [9,
10, 11, 12].
PILCO has inspired several other MBRL algorithms that try to improve it in
different ways. Limitations due to the use of SE kernels have been addressed
in Deep-PILCO [13], where the system evolution is modeled using Bayesian
Neural Networks [14], and long-term predictions are computed combining
particle-based methods and moment matching. Results show that, compared to
PILCO, Deep-PILCO requires a larger number of interactions with the system in
order to learn the task. This fact suggests that using neural networks (NNs)
might not be advantageous in terms of data efficiency, due to the considerably
larger number of parameters needed to characterize the model. A more elaborate
approach has been proposed in PETS [15], where the authors use a probabilistic
ensemble of NNs to model the uncertainty of the system dynamics. Despite the
positive results in simulated high-dimensional systems, the numerical
results in PETS also show that GPs are more data-efficient than NNs when
considering low-dimensional systems, such as the cart-pole benchmark. An
alternative route has been proposed in [16], where the authors use a simulator
to learn a prior for the GP model before starting the reinforcement learning
procedure on the actual system to control. This simulated prior improves the
performance of PILCO in areas of the state space with no available data
points. However, the method requires an accurate simulator that may not always
be available to the user.
Limitations due to the gradient-based optimization were addressed in Black-
DROPS [17], which adopts a gradient-free policy optimization. In this way,
also non-differentiable cost functions can be used, and the computational time
can be improved with the parallelization of the black-box optimizer. With this
strategy, Black-DROPS achieves similar data efficiency to PILCO’s, but
significantly increases asymptotic performance.
Other approaches focused on improving the accuracy of long-term predictions,
overcoming approximations due to moment matching. A first attempt has been
proposed in [18], where long-term distributions are computed relying on
particle-based methods. Based on the current policy and the one-step-ahead GP
models, the authors simulate the evolution of a batch of particles sampled
from the initial state distribution. Then, the particle trajectories are used
to approximate the expected cumulative cost. The policy gradient is computed
using the strategy proposed in PEGASUS [19], where by fixing the initial
random seed, a probabilistic Markov decision process (MDP) is transformed into
an equivalent partially observable MDP with deterministic transitions.
Compared to PILCO, the results obtained were not satisfactory. The poor
performance was attributed to the policy optimization method, and in
particular, to its inability to escape from the numerous local minima
generated by the multimodal distribution.
Another particle-based approach is PIPPS [20], whose authors proposed the
_total propagation algorithm_ to compute the gradient instead of the PEGASUS
strategy. The _total propagation algorithm_ combines the gradient obtained
with the _reparameterization trick_ with the likelihood ratio gradient. The
_reparameterization trick_ has been introduced with successful results in
stochastic variational inference (SVI) [21, 22]. In contrast with the results
obtained in SVI, where just a few samples are needed to estimate the gradient,
the authors of [20] highlighted several issues related to the gradient
computed with the _reparameterization trick_ , due to its exploding magnitude
and random direction. They concluded that policy gradient computation via
particle-based methods and the _reparameterization trick_ was not a feasible
strategy. To overcome these issues, PIPPS relies on the likelihood ratio
gradient to regularize the gradient computed with the _reparameterization
trick_. The algorithm performs similarly to PILCO with some improvements in
the gradient computation, and in the overall performance in the presence of
additional noise.
Proposed approach. In this work, we propose an MBRL algorithm named Monte
Carlo Probabilistic Inference for Learning COntrol (MC-PILCO). Like PILCO, MC-
PILCO is a policy gradient algorithm, which uses GPs to describe the one-step-
ahead system dynamics and relies on a particle-based method to approximate the
long-term state distribution instead of using moment matching. The gradient of
the expected cumulative cost w.r.t. the policy parameters is obtained by
backpropagation [23] on the associated stochastic computational graph,
exploiting the _reparameterization trick_. Differently from PIPPS, which
focused on obtaining regularized estimates of the gradient, we have
interpreted the optimization problem as a stochastic gradient descent (SGD)
problem [24]. This problem has been studied in depth in the context of neural
networks, where overparameterized models are optimized using noisy estimates
of the gradient [25]. Analytical and experimental studies show that the shape
of the cost function and the nonlinear activation function adopted can
dramatically affect the performance of SGD algorithms [26, 27, 28]. Motivated by the
results obtained in this field, w.r.t. the previous particle-based approaches,
we considered: (i) the use of less peaked cost functions, i.e., less
penalizing costs, to avoid regions where the gradient is numerically almost
null; (ii) the application of dropout [29] to the policy parameters during
policy optimization, in order to improve the ability to escape from local
minima and obtain better-performing policies.
In addition, we propose a solution to deal with partially measurable systems
which are particularly relevant in real applications, introducing MC-
PILCO4PMS. Indeed, unlike simulated environments, where the state is typically
assumed to be fully measurable, the state of real systems might be only
partially measurable. For instance, only positions are often directly measured
in real robotic systems, whereas velocities are typically computed by means of
estimators, such as state observers, Kalman filters, and numerical
differentiation with low-pass filters. In this context, during policy
optimization, it is important to distinguish between the states generated by
the models, which aim at describing the evolution of the real system state,
and the states provided to the policy. Indeed, feeding the model predictions
directly to the control policy corresponds to assuming the ability to measure
the system state directly, which, as mentioned before, is not possible in the
real system.
To deal with this problem, we estimate the actual states observed in the real
system by applying to the predicted states the models of both the measurement
system and the online estimators, and passing these estimates to the policy
during training. In this way, we obtain robustness w.r.t. the delays and
distortions caused by online filtering. Thanks to the flexibility of our
particle-based approach, it is possible to easily reproduce a wide variety of
filters and state estimators, e.g., numerical differentiation, low-pass
filters, Kalman filters, etc.
Contributions. We present MC-PILCO, an MBRL algorithm based on particle-based
methods for long-term predictions that features cost shaping, use of dropout
during policy optimization, extension to any kernel functions, and the
introduction of the so-called speed-integration scheme. The effectiveness of
the proposed method has been ablated and shown both in simulation and on real
systems. We considered systems with up to 12-dimensional state space that are
typical dimensions for GP-based MBRL algorithms. First, the advantage of each
of these features has been shown on a cart-pole swing-up benchmark and
validated with statistical tests. Results show a significant increase in
performance, both in terms of convergence and data efficiency, as well as the
capability to handle multimodal distributions. Second, MC-PILCO outperforms
the state-of-the-art GP-based MBRL algorithms PILCO and Black-DROPS on the
same simulated cart-pole system. Third, we validated MC-PILCO on a higher-
dimensional system, by successfully learning a joint-space controller for a
trajectory tracking of a simulated UR5 robotic arm. These results support the
novel conclusion that, by properly shaping the cost function and using dropout
during policy optimization, the _reparameterization trick_ can be used
effectively in GP-based MBRL, and Monte Carlo methods do not suffer from gradient
estimation problems, contrary to what was asserted in the previous literature.
Furthermore, the property of using any kernel function was tested using a
combination of an SE and a polynomial kernel [30], as well as a semi-
parametrical kernel [10, 11, 12]. Results obtained both in simulation and on a
real Furuta pendulum show that structured kernels can significantly increase
data efficiency, limiting the interaction time required to learn the tasks.
Finally, we extended the algorithm to partially measurable systems, such as
most existing real systems, introducing MC-PILCO4PMS. We propose the idea of
having different state estimators during model learning and policy
optimization. In particular, when training the policy, it is essential to
incorporate in the state predicted by the models the distortions caused by the
online estimators and measurement devices in the real system. The
effectiveness of this approach is validated on a simulated cart-pole and on
two real systems, namely, a Furuta pendulum and a ball-and-plate system.
To recap, the main results of this paper are:
* •
Design of MC-PILCO, a GP-based policy-gradient MBRL algorithm that relies on
Monte Carlo simulation with the _reparameterization trick_ to update the
policy;
* •
We show that by properly shaping the cost function and using dropout during
policy optimization, the _reparameterization trick_ can be effective in
policy-gradient MBRL;
* •
We analyze behaviors occurring in real setups due to filtering and state
estimators, and we propose MC-PILCO4PMS, a modified version of MC-PILCO
capable of dealing with partially measurable systems.
The article is structured as follows. In Section II, some background notions
are provided: we state the general problem of model-based policy gradient
methods, and present modelling approaches of dynamical systems with GPs. In
Section III, we present MC-PILCO, our proposed algorithm, detailing the policy
optimization and model learning techniques adopted. In Section IV, we discuss
MC-PILCO4PMS, a variation of the proposed algorithm, specifically designed for
the application to systems with partially measurable state. In Section V, we
analyze several aspects affecting the performance of MC-PILCO, such as the
cost shape, dropout, and the kernel choice. In Section VI, we validate and
analyze MC-PILCO in different tests on simulated environments, while, in
Section VII, we evaluate MC-PILCO4PMS, providing a proof of concept and the
results obtained on a real Furuta pendulum and a ball-and-plate system.
Finally, we draw conclusions in Section VIII.
## II Background
In this section, we first introduce the standard framework considered in
model-based policy gradient RL methods, and then discuss how to use Gaussian
Process Regression (GPR) for model learning. In the latter topic, we focus on
three aspects: some background notions about GPR, the description of the model
for one-step-ahead predictions, and finally, we discuss long-term predictions,
focusing on two possible strategies, namely, moment matching and a particle-
based method.
### II-A Model-Based policy gradient
Consider the discrete-time system described by the unknown transition function
$f(\cdot,\cdot)$,
$\boldsymbol{x}_{t+1}=f(\boldsymbol{x}_{t},\boldsymbol{u}_{t})+\boldsymbol{w}_{t},$
where, at each time step $t$,
$\boldsymbol{x}_{t}\in\mathbb{R}^{d_{\boldsymbol{x}}}$ and
$\boldsymbol{u}_{t}\in\mathbb{R}^{d_{\boldsymbol{u}}}$ are, respectively, the
state and the inputs of the system, while
$\boldsymbol{w}_{t}\sim\mathcal{N}(0,\Sigma_{\boldsymbol{w}})$ is an
independent Gaussian random variable modeling additive noise. The cost
function $c(\boldsymbol{x}_{t})$ is defined to characterize the immediate
penalty for being in state $\boldsymbol{x}_{t}$.
Inputs are chosen according to a policy function
$\pi_{\boldsymbol{\theta}}:\boldsymbol{x}\mapsto\boldsymbol{u}$ that depends
on the parameter vector $\boldsymbol{\theta}$.
The objective is to find the policy that minimizes the expected cumulative
cost over a finite number of time steps $T$, i.e.,
$J(\boldsymbol{\theta})=\sum_{t=0}^{T}\mathbb{E}_{\boldsymbol{x}_{t}}\left[c(\boldsymbol{x}_{t})\right]\text{,}$
(1)
with an initial state distributed according to a given
$p(\boldsymbol{x}_{0})$.
A model-based approach for learning a policy generally consists of a
succession of trials, i.e., attempts to solve the desired task. Each
trial includes three main phases:
* •
Model Learning: the data collected from all the previous interactions are used
to build a model of the system dynamics (at the first iteration, data are
collected by applying possibly random exploratory controls);
* •
Policy Update: the policy is optimized in order to minimize the cumulative
cost $J(\boldsymbol{\theta})$. The optimization algorithm iteratively
approximates $J(\boldsymbol{\theta})$ by simulating the system according to
the current model and policy parameters $\boldsymbol{\theta}$, and updates
$\boldsymbol{\theta}$.
* •
Policy Execution: the current optimized policy is applied to the system and
the data are stored for model improvement.
Model-based policy gradient methods use the learned model to predict the state
evolution when the current policy is applied. These predictions are used to
estimate $J(\boldsymbol{\theta})$ and its gradient
$\nabla_{\boldsymbol{\theta}}J$ in order to update the policy parameters
$\boldsymbol{\theta}$ following a gradient-descent approach.
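The trial structure above can be sketched as a simple loop. This is an illustrative skeleton only; the helper names (`learn_model`, `optimize_policy`, `rollout`) are hypothetical, not the actual MC-PILCO implementation:

```python
# Illustrative skeleton of the model-based policy-gradient loop; the helper
# functions (learn_model, optimize_policy, rollout) are hypothetical names.
def mbrl_loop(system, n_trials, learn_model, optimize_policy, rollout):
    data = rollout(system, policy=None)      # initial exploratory interaction
    for _ in range(n_trials):
        model = learn_model(data)            # Model Learning
        policy = optimize_policy(model)      # Policy Update (gradient descent on J)
        data += rollout(system, policy)      # Policy Execution, data stored
    return policy
```

Each iteration re-learns the model from all data collected so far, so the model improves across trials even when a single trial fails to solve the task.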
### II-B GPR and one-step-ahead predictions
A common strategy with GPR-based approaches consists of modeling the evolution
of each state dimension with a distinct GP. Let us denote by
$\Delta^{(i)}_{t}=x^{(i)}_{t+1}-x^{(i)}_{t}$ the difference between the value
of the _i_ -th component at time $t+1$ and $t$, and by $y^{(i)}_{t}$ the noisy
measurement of $\Delta^{(i)}_{t}$ with
$i\in\\{1,\ldots,d_{\boldsymbol{x}}\\}$. Moreover, let
$\tilde{\boldsymbol{x}}_{t}=[\boldsymbol{x}_{t},\boldsymbol{u}_{t}]$ be the
vector that includes the state and the input at time $t$, also called the GP
input. Then, given the data
$\mathcal{D}=\left(\tilde{X},\boldsymbol{y}^{(i)}\right)$, where
$\boldsymbol{y}^{(i)}=[y^{(i)}_{t_{1}},\dots,y^{(i)}_{t_{n}}]^{T}$ is a vector
of $n$ output measurements, and
$\tilde{X}=\\{\tilde{\boldsymbol{x}}_{t_{1}},\dots,\tilde{\boldsymbol{x}}_{t_{n}}\\}$
is the set of GP inputs, GPR assumes the following probabilistic model, for
each state dimension,
$\boldsymbol{y}^{(i)}=\begin{bmatrix}h^{(i)}(\tilde{\boldsymbol{x}}_{t_{1}})\\\
\vdots\\\
h^{(i)}(\tilde{\boldsymbol{x}}_{t_{n}})\end{bmatrix}+\begin{bmatrix}e^{(i)}_{t_{1}}\\\
\vdots\\\
e^{(i)}_{t_{n}}\end{bmatrix}=\boldsymbol{h}^{(i)}(\tilde{X})+\boldsymbol{e}^{(i)}\text{,}$
where $\boldsymbol{e}^{(i)}$ is a zero-mean Gaussian i.i.d. noise with
standard deviation $\sigma_{i}$, $h^{(i)}(\cdot)$ is an unknown function
modeled a priori as a zero-mean Gaussian Process, and
$i\in\\{1,\ldots,d_{x}\\}$. In particular, we have
$\boldsymbol{h}^{(i)}\sim\mathcal{N}(0,K_{i}(\tilde{X},\tilde{X}))$, with the
a priori covariance matrix $K_{i}(\tilde{X},\tilde{X})\in\mathbb{R}^{n\times
n}$ defined element-wise through a kernel function $k_{i}(\cdot,\cdot)$,
namely, the element in _j_ -th row and _k_ -th column is given by
$k_{i}(\tilde{\boldsymbol{x}}_{t_{j}},\tilde{\boldsymbol{x}}_{t_{k}})$. A
crucial aspect in GPR is the kernel choice. The kernel function encodes prior
assumptions about the process. One of the most common choices for continuous
functions is the SE kernel, defined as
$k_{SE}(\tilde{\boldsymbol{x}}_{t_{j}},\tilde{\boldsymbol{x}}_{t_{k}}):=\lambda^{2}e^{-||\tilde{\boldsymbol{x}}_{t_{j}}-\tilde{\boldsymbol{x}}_{t_{k}}||^{2}_{\Lambda^{-1}}}\text{,}$
(2)
where the scaling factor $\lambda$ and the matrix $\Lambda$ are kernel
hyperparameters which can be estimated by marginal likelihood maximization.
Typically, $\Lambda$ is assumed to be diagonal, with the diagonal elements
named length-scales.
Remarkably, the posterior distribution of $h^{(i)}(\cdot)$ can be computed in
closed form. Let $\tilde{\boldsymbol{x}}_{t}$ be a general GP input at time
$t$. Then, the distribution of $\hat{\Delta}^{(i)}_{t}$, the estimate of
$\Delta^{(i)}_{t}$, is Gaussian with mean and variance given by
$\displaystyle\mathbb{E}[\hat{\Delta}^{(i)}_{t}]=k_{i}(\tilde{\boldsymbol{x}}_{t},\tilde{X})\Gamma^{-1}_{i}\boldsymbol{y}^{(i)}\text{,}$
(3) $\displaystyle
var[\hat{\Delta}^{(i)}_{t}]=k_{i}(\tilde{\boldsymbol{x}}_{t},\tilde{\boldsymbol{x}}_{t})-k_{i}(\tilde{\boldsymbol{x}}_{t},\tilde{X})\Gamma^{-1}_{i}k_{i}^{T}(\tilde{\boldsymbol{x}}_{t},\tilde{X})\text{,}$
(4)
with $\Gamma_{i}$ and $k_{i}(\tilde{\boldsymbol{x}}_{t},\tilde{X})$ defined as
$\displaystyle\Gamma_{i}=(K_{i}(\tilde{X},\tilde{X})+\sigma_{i}^{2}I)\text{,}$
$\displaystyle
k_{i}(\tilde{\boldsymbol{x}}_{t},\tilde{X})=[k_{i}(\tilde{\boldsymbol{x}}_{t},\tilde{\boldsymbol{x}}_{t_{1}}),\dots,k_{i}(\tilde{\boldsymbol{x}}_{t},\tilde{\boldsymbol{x}}_{t_{n}})]\text{.}$
Recalling that the evolution of each state dimension is modeled with a
distinct GP, and assuming that the GPs are conditionally independent given the
current GP input $\tilde{\boldsymbol{x}}_{t}$, the posterior distribution for
the estimated state at time $t+1$ is
$p(\hat{\boldsymbol{x}}_{t+1}|\tilde{\boldsymbol{x}}_{t},\mathcal{D})\sim\mathcal{N}(\boldsymbol{\mu}_{t+1},\Sigma_{t+1})\text{,}$
(5)
where
$\displaystyle\boldsymbol{\mu}_{t+1}=\boldsymbol{x}_{t}+\left[\mathbb{E}[\hat{\Delta}^{(1)}_{t}],\dots,\mathbb{E}[\hat{\Delta}^{(d_{\boldsymbol{x}})}_{t}]\right]^{T}\text{,}$
(6)
$\displaystyle\Sigma_{t+1}=\text{diag}\left(\left[var[\hat{\Delta}^{(1)}_{t}],\dots,var[\hat{\Delta}^{(d_{\boldsymbol{x}})}_{t}]\right]\right)\text{.}$
(7)
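As an illustration, the posterior mean (3) and variance (4) can be computed in a few lines of NumPy. This is a minimal sketch assuming an SE kernel with a scalar length-scale; function names and defaults are illustrative, not the actual implementation:

```python
import numpy as np

def se_kernel(X1, X2, lam=1.0, ell=1.0):
    # SE kernel of Eq. (2), with a scalar length-scale for simplicity
    D = X1[:, None, :] - X2[None, :, :]
    return lam**2 * np.exp(-np.sum((D / ell)**2, axis=2))

def gp_posterior(x_star, X_train, y_train, sigma_n=0.1):
    # Posterior mean (Eq. 3) and variance (Eq. 4) of the state difference
    Gamma = se_kernel(X_train, X_train) + sigma_n**2 * np.eye(len(X_train))
    k_star = se_kernel(x_star[None, :], X_train)          # k_i(x_t, X_tilde)
    mean = (k_star @ np.linalg.solve(Gamma, y_train)).item()
    var = (se_kernel(x_star[None, :], x_star[None, :])
           - k_star @ np.linalg.solve(Gamma, k_star.T)).item()
    return mean, var
```

The one-step-ahead prediction (6) is then obtained by adding the predicted difference to the current state.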
### II-C Long-term predictions with GP dynamical models
In MBRL, the policy $\pi_{\boldsymbol{\theta}}$ is evaluated and improved
based on long-term predictions of the state evolution:
$p(\hat{\boldsymbol{x}}_{1}),\dots,p(\hat{\boldsymbol{x}}_{T})$. The exact
computation of these quantities entails the application of the one-step-ahead
GP models in cascade, considering the propagation of the uncertainty. More
precisely, starting from a given initial distribution $p(\boldsymbol{x}_{0})$,
at each time step _t_ , the next state distribution is obtained by
marginalizing (5) over $p(\hat{\boldsymbol{x}}_{t})$, namely,
$p(\hat{\boldsymbol{x}}_{t+1})=\int
p(\hat{\boldsymbol{x}}_{t+1}|\hat{\boldsymbol{x}}_{t},\pi_{\boldsymbol{\theta}}(\hat{\boldsymbol{x}}_{t}),\mathcal{D})p(\hat{\boldsymbol{x}}_{t})d\hat{\boldsymbol{x}}_{t}.$
(8)
Unfortunately, computing the exact predicted distribution in (8) is not
tractable. There are different ways to solve it approximately, and here we
discuss two main approaches: moment matching, adopted by PILCO, and a
particle-based method, the strategy followed in this work.
#### II-C1 Moment matching
Assuming that the GP models use only the SE kernel as a prior covariance, and
considering a normal initial state distribution
$\boldsymbol{x}_{0}\sim\mathcal{N}(\boldsymbol{\mu}_{0},\Sigma_{0})$, the first and
the second moments of $p(\hat{\boldsymbol{x}}_{1})$ can be computed in closed
form [31]. Then, the distribution $p(\hat{\boldsymbol{x}}_{1})$ is
approximated to be a Gaussian distribution, whose mean and variance correspond
to the moments computed previously. Finally, the subsequent probability
distributions are computed iterating the procedure for each time step of the
prediction horizon. For details about the computation of the first and second
moments, we refer the reader to [31]. Moment matching offers the advantage of
providing a closed-form solution for handling uncertainty propagation through
the GP dynamics model. Thus, in this setting, it is possible to analytically
compute the policy gradient from long-term predictions. However, as already
mentioned in Section I, the Gaussian approximation performed in moment
matching is also the cause of two main weaknesses: (i) The computation of the
two moments has been performed assuming the use of SE kernels, which might
lead to poor generalization on data that have not been seen during
training [9, 10, 11, 12]. (ii) Moment matching allows modeling only unimodal
distributions, which might be a too restrictive approximation of the real
system behavior.
#### II-C2 Particle-based method
The integral in (8) can be approximated relying on Monte Carlo approaches, in
particular on particle-based methods, see, for instance, [17, 20].
Specifically, $M$ particles are sampled from the initial state distribution
$p(\boldsymbol{x}_{0})$. Each one of the $M$ particles is propagated using the
one-step-ahead GP models (5). Let $\boldsymbol{x}^{(m)}_{t}$ be the state of
the _m_ -th particle at time $t$, with $m=1,\dots,M$. At time step $t$, the
actual policy $\pi_{\boldsymbol{\theta}}$ is evaluated to compute the
associated control. The GP model provides the Gaussian distribution
$p\left(\boldsymbol{x}^{(m)}_{t+1}|\boldsymbol{x}^{(m)}_{t},\pi_{\boldsymbol{\theta}}(\boldsymbol{x}^{(m)}_{t}),\mathcal{D}\right)$
from which $\boldsymbol{x}^{(m)}_{t+1}$, the state of the particle at the next
time step, is sampled. This process is iterated until a trajectory of length
$T$ is generated for each particle. The overall process is illustrated in
Figure 1. The long-term distribution at each time step is approximated with
the distribution of the particles. Note that this approach does not impose any
constraint on the choice of the kernel function and the initial state
distribution. Moreover, there are no restrictions on the distribution of
$p(\hat{\boldsymbol{x}}_{t})$. Therefore, particle-based methods do not suffer
from the problems seen in moment matching, at the cost of being more
computationally heavy. Specifically, the computation of (5) entails the
computation of (3) and (4), which are, respectively, the mean and the variance
of the delta states. Regarding the computational complexity, it can be noted
that $\Gamma^{-1}_{i}\boldsymbol{y}^{(i)}$ is computed a single time offline
during the training of the GP model (same computation is needed in the moment
matching case), and the number of operations required to compute (3) is linear
w.r.t. the number of samples $n$. The computational bottleneck is the
computation of (4), which is $O(n^{2})$. Then, the cost of a single state
prediction is $O(d_{\boldsymbol{x}}n^{2})$, leading to a total computational
cost of $O(d_{\boldsymbol{x}}MTn^{2})$. Depending on the complexity of the
system dynamics, the number of particles necessary to obtain a good
approximation might be high, resulting in a considerable computational burden.
Nevertheless, the computational burden can be substantially mitigated via GPU
parallel computing, due to the possibility of computing the evolution of each
particle in parallel.
Figure 1: Example of three particles propagating through the stochastic model
(Gaussian distributions represented as ellipses).
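The particle-based approximation of (8) can be sketched as follows. Function names and the toy model used in testing are illustrative assumptions; `gp_step` stands for any routine returning the mean and covariance of the one-step-ahead distribution (5):

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate_particles(mu0, Sigma0, policy, gp_step, T, M):
    # Approximate the long-term distributions in (8) with M particles.
    # gp_step(x, u) returns the mean and covariance of the next state, Eq. (5).
    particles = rng.multivariate_normal(mu0, Sigma0, size=M)   # sample p(x_0)
    trajectory = [particles]
    for _ in range(T):
        nxt = np.empty_like(particles)
        for m in range(M):
            u = policy(particles[m])                # evaluate the current policy
            mu, Sigma = gp_step(particles[m], u)    # one-step-ahead GP prediction
            nxt[m] = rng.multivariate_normal(mu, Sigma)
        particles = nxt
        trajectory.append(particles)
    return trajectory
```

In practice, the inner loop over particles is vectorized (e.g., on a GPU), which is what makes the $O(d_{\boldsymbol{x}}MTn^{2})$ cost manageable.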
## III MC-PILCO
In this section, we present the proposed algorithm. MC-PILCO relies on GPR for
model learning and follows a Monte Carlo sampling method to estimate the
expected cumulative cost from particles trajectories propagated through the
learned model. We exploit the _reparameterization trick_ to obtain the policy
gradient from the sampled particles and optimize the policy. This way of
proceeding is very flexible, and allows using any kind of kernel for the GPs,
as well as providing more reliable approximations of the system's behavior.
MC-PILCO, in broad terms, consists of the iteration of three main steps,
namely, update the GP models, update the policy parameters, and execute the
policy on the system. In turn, the policy update is composed of the
following three steps, iterated for a maximum of $N_{opt}$ times:
* •
simulate the evolution of $M$ particles, based on the current
$\pi_{\boldsymbol{\theta}}$ and on the GP models learned from the previously
observed data;
* •
compute $\hat{J}(\boldsymbol{\theta})$, an approximation of the expected
cumulative cost, based on the evolution of the $M$ particles;
* •
update the policy parameters $\boldsymbol{\theta}$ based on
$\nabla_{\boldsymbol{\theta}}\hat{J}(\boldsymbol{\theta})$, the gradient of
$\hat{J}(\boldsymbol{\theta})$ w.r.t. $\boldsymbol{\theta}$, computed by
backpropagation.
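The key ingredient of the last step is the _reparameterization trick_: a Gaussian sample is written as a deterministic function of the distribution parameters and an independent noise term, so that derivatives can flow through the sampling operation. A minimal NumPy illustration follows (in MC-PILCO this is handled by automatic differentiation on the stochastic computational graph; the toy cost below is an assumption for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Reparameterization trick: x ~ N(mu, sigma^2) is rewritten as a deterministic
# function of (mu, sigma) and an independent noise eps ~ N(0, 1), so the
# gradient of a cost c(x) w.r.t. (mu, sigma) can flow through the sample.
def reparam_sample(mu, sigma, eps):
    return mu + sigma * eps

# Toy check with c(x) = x^2, for which dE[c(x)]/dmu = 2*mu exactly.
mu, sigma = 1.5, 0.5
eps = rng.standard_normal(100_000)
x = reparam_sample(mu, sigma, eps)
grad_mu = np.mean(2.0 * x)    # dc/dx * dx/dmu, averaged over particles
```

With enough particles, `grad_mu` concentrates around the exact value $2\mu$, which is what makes gradient estimation by backpropagation through sampled trajectories possible.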
In the remainder of this section, we discuss in greater depth the model
learning step and the policy optimization step.
### III-A Model Learning
Here, we describe the model learning framework considered in MC-PILCO. We
begin by showing the proposed one-step-ahead prediction model, and analyzing
the advantages w.r.t. the standard model described in Section II-B. Then, we
discuss the choice of the kernel functions. Finally, we briefly detail the
model’s hyperparameters optimization and the strategy adopted to reduce the
computational cost.
#### III-A1 Speed-integration model
Let the state be defined as
$\boldsymbol{x}_{t}=[\boldsymbol{q}_{t}^{T},\boldsymbol{\dot{q}}_{t}^{T}]^{T}$,
where $\boldsymbol{q}_{t}\in\mathbb{R}^{\nicefrac{{d_{\boldsymbol{x}}}}{{2}}}$
is the vector of the generalized coordinates of the system at time step $t$,
and $\boldsymbol{\dot{q}}_{t}$ represents the derivative of
$\boldsymbol{q}_{t}$ w.r.t. time. MC-PILCO adopts a one-step-ahead model,
hereafter denoted as speed-integration dynamical model, which exploits the
intrinsic correlation between the state components $\boldsymbol{q}$ and
$\boldsymbol{\dot{q}}$. Indeed, when considering a sufficiently small sampling
time $T_{s}$ (small w.r.t. the application), it is reasonable to assume
constant accelerations between two consecutive time-steps, obtaining the
following evolution of $\boldsymbol{q}_{t}$,
$\boldsymbol{q}_{t+1}=\boldsymbol{q}_{t}+T_{s}\boldsymbol{\dot{q}}_{t}+\frac{T_{s}}{2}(\boldsymbol{\dot{q}}_{t+1}-\boldsymbol{\dot{q}}_{t})\text{.}$
(9)
Let $\mathcal{I}_{\boldsymbol{{q}}}$ (respectively
$\mathcal{I}_{\boldsymbol{\dot{q}}}$) be the ordered set of the dimension
indices of the state $\boldsymbol{x}$ associated with $\boldsymbol{{q}}$
(respectively $\boldsymbol{\dot{q}}$ ). The proposed speed-integration model
learns only $d_{\boldsymbol{x}}/2$ GPs, each of which models the evolution of
a distinct velocity component $\Delta^{(i_{k})}_{t}$, with
$i_{k}\in\mathcal{I}_{\boldsymbol{\dot{q}}}$. Then, the evolution of the
position change, $\Delta^{(i_{k})}_{t}$, with
$i_{k}\in\mathcal{I}_{\boldsymbol{q}}$, is computed according to (9) and the
predicted change in velocity.
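Under the stated constant-acceleration assumption, a single speed-integration step can be sketched as follows (illustrative code; names are not from the actual implementation):

```python
import numpy as np

def speed_integration_step(q, dq, delta_dq, Ts):
    # delta_dq is the GP-predicted velocity change dq_{t+1} - dq_t;
    # positions are integrated with Eq. (9), assuming constant acceleration
    # between consecutive time steps.
    dq_next = dq + delta_dq
    q_next = q + Ts * dq + 0.5 * Ts * delta_dq
    return q_next, dq_next
```

For a truly constant acceleration the update is exact, which is why the approximation degrades only when the sampling time is large relative to the system dynamics.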
Many previous MBRL algorithms, see for instance [6, 17], adopted the standard
model described in Section II-B, and hereafter denoted as full-state dynamical
model. The full-state model predicts the change of each state component with a
distinct and independent GP. Doing so, the evolution of each state dimension
is assumed to be conditionally independent given the current GP input, and it
is necessary to learn a number of GPs equal to the state dimension
$d_{\boldsymbol{x}}$. Then, compared to the full-state model, the proposed
speed-integration model halves the number of GPs to be learned, decreasing the
cost of a state prediction to $O(\frac{d_{\boldsymbol{x}}}{2}MTn^{2})$.
Nevertheless, this approach is based on a constant acceleration assumption,
and works properly only when considering small enough sampling times. However,
MC-PILCO can also use the standard full-state model, which might be more
effective when the sampling time is longer.
#### III-A2 Kernel functions
Regardless of the GP dynamical model structure adopted, one of the advantages
of the particle-based policy optimization method is the possibility of
choosing any kernel functions without restrictions. Hence, we considered
different kernel functions as examples to model the evolution of physical
systems. However, readers can consider a custom kernel function appropriate
for their application.
Squared exponential (SE). The SE kernel described in (2) represents the
standard choice adopted in many different works.
SE + Polynomial (SE+$\text{P}^{(d)}$). Recalling that the sum of kernels is
still a kernel [3], we considered also a function given by the sum of a SE and
a polynomial kernel. In particular, we used the Multiplicative Polynomial (MP)
kernel, which is a refinement of the standard polynomial kernel, introduced in
[30]. The MP kernel of degree $d$ is defined as the product of $d$ linear
kernels, namely,
$k_{P}^{(d)}(\tilde{\boldsymbol{x}}_{t_{j}},\tilde{\boldsymbol{x}}_{t_{k}}):=\prod_{r=1}^{d}\left(\sigma^{2}_{P_{r}}+\tilde{\boldsymbol{x}}_{t_{j}}^{T}\Sigma_{P_{r}}\tilde{\boldsymbol{x}}_{t_{k}}\right)\text{,}$
where the $\Sigma_{P_{r}}>0$ matrices are distinct diagonal matrices. The
diagonal elements of the $\Sigma_{P_{r}}$, together with the
$\sigma^{2}_{P_{r}}$ elements are the kernel hyperparameters. The resulting
kernel is
$k_{SE+P^{(d)}}(\tilde{\boldsymbol{x}}_{t_{j}},\tilde{\boldsymbol{x}}_{t_{k}})=k_{SE}(\tilde{\boldsymbol{x}}_{t_{j}},\tilde{\boldsymbol{x}}_{t_{k}})+k_{P}^{(d)}(\tilde{\boldsymbol{x}}_{t_{j}},\tilde{\boldsymbol{x}}_{t_{k}})\text{.}$
(10)
The idea motivating this choice is the following: the MP kernel allows
capturing possible modes of the system that are polynomial functions in
$\tilde{\boldsymbol{x}}$, which are typical in mechanical systems [9], while
the SE kernel models more complex behaviors not captured by the polynomial
kernel.
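A minimal sketch of the SE+P$^{(d)}$ kernel in (10) is given below. Hyperparameter names are illustrative assumptions; the diagonal $\Sigma_{P_{r}}$ matrices are stored as vectors:

```python
import numpy as np

def mp_kernel(x1, x2, sigma2s, diag_Sigmas):
    # Multiplicative Polynomial kernel of degree d = len(sigma2s):
    # product of d linear kernels with diagonal Sigma_{P_r} (stored as vectors)
    k = 1.0
    for s2, S in zip(sigma2s, diag_Sigmas):
        k *= s2 + x1 @ (S * x2)
    return k

def se_plus_mp(x1, x2, lam, ell, sigma2s, diag_Sigmas):
    # Sum of kernels is still a kernel (Eq. 10): SE + MP
    k_se = lam**2 * np.exp(-np.sum(((x1 - x2) / ell)**2))
    return k_se + mp_kernel(x1, x2, sigma2s, diag_Sigmas)
```

Since both summands are valid kernels, the resulting Gram matrix remains symmetric and positive semi-definite, as required by GPR.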
Semi-Parametrical (SP). When prior knowledge about the system dynamics is
available, for example given by physics first principles, the so-called
physically inspired (PI) kernel can be derived. The PI kernel is a linear
kernel defined on suitable basis functions $\phi(\tilde{\boldsymbol{x}})$, see
for instance [10]. More precisely,
$\boldsymbol{\phi}(\tilde{\boldsymbol{x}})\in\mathbb{R}^{d_{\phi}}$ is a
(possibly nonlinear) transformation of the GP input $\tilde{\boldsymbol{x}}$
determined by the physical model. Then, we have
$k_{PI}(\tilde{\boldsymbol{x}}_{t_{j}},\tilde{\boldsymbol{x}}_{t_{k}})=\boldsymbol{\phi}^{T}(\tilde{\boldsymbol{x}}_{t_{j}})\Sigma_{PI}\boldsymbol{\phi}(\tilde{\boldsymbol{x}}_{t_{k}})\text{,}$
where $\Sigma_{PI}$ is a $d_{\phi}\times d_{\phi}$ positive-definite matrix,
whose elements are the $k_{PI}$ hyperparameters; to limit the number of
hyperparameters, a standard choice consists in considering $\Sigma_{PI}$ to be
diagonal. To compensate for possible inaccuracies of the physical model, it is
common to combine $k_{PI}$ with an SE kernel, obtaining the so-called semi-
parametric kernels [12, 10], expressed as
$k_{SP}(\tilde{\boldsymbol{x}}_{t_{j}},\tilde{\boldsymbol{x}}_{t_{k}})=k_{PI}(\tilde{\boldsymbol{x}}_{t_{j}},\tilde{\boldsymbol{x}}_{t_{k}})+k_{SE}(\tilde{\boldsymbol{x}}_{t_{j}},\tilde{\boldsymbol{x}}_{t_{k}})\text{.}$
The rationale behind this kernel is the following: $k_{PI}$ encodes the prior
information given by the physics, and $k_{SE}$ compensates for the dynamical
components unmodeled in $k_{PI}$.
#### III-A3 Model optimization and reduction techniques
In MC-PILCO, the GP hyperparameters are optimized by maximizing the marginal
likelihood (ML) of the training samples, see [3]. In Section II-C2, we saw
that the computational cost of a particle prediction scales with the square of
the number of samples $n$, leading to a considerable computational burden when
$n$ is high. In this context, it is essential to implement a strategy to limit
the computational complexity of a prediction. We implemented a Subset of Data
technique (refer to [32] for further details on this method and others) with
an input selection procedure inspired by [33], where the authors proposed an
online importance sampling strategy. After optimizing the GP hyperparameters
by ML maximization, the samples in $\mathcal{D}$ are downsampled to a subset
$\mathcal{D}_{r}=\left(\tilde{X}_{r},\boldsymbol{y}^{(i)}_{r}\right)$, which
is then used to compute the predictions. This procedure first initializes
$\mathcal{D}_{r}$ with the first sample in $\mathcal{D}$, then, it computes
iteratively the GP estimates of all the remaining samples in $\mathcal{D}$,
using $\mathcal{D}_{r}$ as training samples. Each sample in $\mathcal{D}$ is
either added to $\mathcal{D}_{r}$, if the uncertainty of the estimate is higher
than a threshold $\beta^{(i)}$, or discarded. The GP estimator is updated
every time a sample is added to $\mathcal{D}_{r}$. The trade-off between the
reduction of the computational burden and the severity of the approximation
introduced is regulated by tuning $\beta^{(i)}$. The higher the $\beta^{(i)}$,
the smaller the number of samples in $\mathcal{D}_{r}$. However, using values of $\beta^{(i)}$ that are too high might compromise the accuracy of the GP predictions.
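The selection loop above can be sketched as follows (a minimal illustration with hypothetical names: `kernel` returns the Gram matrix between two sets of inputs, and `sn2` is the noise variance):

```python
import numpy as np

def select_subset(X, kernel, sn2, beta):
    """Greedy subset-of-data selection: iterate over the samples and keep
    only those whose GP predictive std, given the current subset, exceeds
    the threshold beta. Returns the indices of the retained samples."""
    idx = [0]  # initialize the subset with the first sample
    for i in range(1, X.shape[0]):
        Xr = X[idx]
        K = kernel(Xr, Xr) + sn2 * np.eye(len(idx))
        k_star = kernel(X[i:i + 1], Xr)            # shape (1, |subset|)
        # predictive variance of the estimate at X[i] given the subset
        var = (kernel(X[i:i + 1], X[i:i + 1])[0, 0] + sn2
               - (k_star @ np.linalg.solve(K, k_star.T))[0, 0])
        if np.sqrt(max(var, 0.0)) > beta:
            idx.append(i)  # the estimator is updated by growing the subset
    return idx
```

Samples close to points already in the subset yield low predictive variance and are discarded, which is what limits the growth of $\mathcal{D}_{r}$.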
### III-B Policy optimization
(a) Original computational graph.
(b) Reparameterized computational graph.
Figure 2: (Left) Original computational graph of the GP model predictions for
two time steps. (Right) Computational graph modified by the
_reparameterization trick_. Squares and circles represent, respectively,
deterministic and stochastic operations.
Here, we present the policy optimization strategy adopted in MC-PILCO. We
start by describing the general-purpose policy structure considered. Later, we
show how to exploit backpropagation and the _reparameterization trick_ to
estimate the policy gradient from particle-based long-term predictions.
Finally, we explain how to implement dropout in this framework.
#### III-B1 Policy structure
In all the experiments presented in this work, we adopted an RBF network
policy with outputs limited by a hyperbolic tangent function, properly
scaled. We call this function squashed-RBF-network, and it is defined as
$\pi_{\boldsymbol{\theta}}(\boldsymbol{x})=u_{max}\;\text{tanh}\left(\frac{1}{u_{max}}\sum_{i=1}^{n_{b}}w_{i}e^{-||\boldsymbol{a}_{i}-\boldsymbol{x}||_{\Sigma_{\pi}}^{2}}\right)\text{.}$
(11)
The policy parameters are
$\boldsymbol{\theta}=\left\\{\boldsymbol{w},A,\Sigma_{\pi}\right\\}$, where
$\boldsymbol{w}=[w_{1}\dots w_{n_{b}}]$ and
$A=\left\\{\boldsymbol{a}_{1}\dots\boldsymbol{a}_{n_{b}}\right\\}$ are,
respectively, the weights and the centers of the Gaussian basis functions,
while ${\Sigma_{\pi}}$ determines the shape of the Gaussian basis functions;
in all experiments we assumed ${\Sigma_{\pi}}$ to be diagonal. The maximum
control action $u_{max}$ is constant and chosen depending on the system to
control. It is worth mentioning that MC-PILCO can deal with any differentiable
policy, so more complex functions, such as deep neural networks, could be
considered too.
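A minimal PyTorch sketch of the policy in (11), with a diagonal $\Sigma_{\pi}$ as assumed in the experiments (the parameter initializations here are placeholders, not those used in the paper):

```python
import torch

class SquashedRBFNetwork(torch.nn.Module):
    """Squashed-RBF-network policy: u_max * tanh(RBF(x) / u_max)."""
    def __init__(self, dim_x, n_b, u_max):
        super().__init__()
        self.u_max = u_max
        self.w = torch.nn.Parameter(torch.randn(n_b))         # weights w_i
        self.a = torch.nn.Parameter(torch.randn(n_b, dim_x))  # centers a_i
        # log of the diagonal of Sigma_pi (squared lengthscales)
        self.log_s = torch.nn.Parameter(torch.zeros(dim_x))

    def forward(self, x):
        # x: (batch, dim_x); squared weighted distance to each center
        d2 = ((x.unsqueeze(1) - self.a) ** 2 / torch.exp(self.log_s)).sum(-1)
        rbf = torch.exp(-d2) @ self.w                         # (batch,)
        return self.u_max * torch.tanh(rbf / self.u_max)
```

The tanh squashing guarantees $|\pi_{\boldsymbol{\theta}}(\boldsymbol{x})|\leq u_{max}$ for any parameter values, while the whole map remains differentiable, as required for gradient-based policy optimization.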
#### III-B2 Policy gradient estimation
MC-PILCO derives the policy gradient by applying the _reparameterization
trick_ to the computation of the estimated expected cumulative cost in (1),
obtained relying on Monte Carlo sampling [34]. Given a control policy
$\pi_{\boldsymbol{\theta}}$ and an initial state distribution
$p(\boldsymbol{x}_{0})$, the evolution of a sufficiently high number of
particles is predicted as described in Section II-C2. Thus, the sample mean of
the costs incurred by the particles at time step $t$ approximates each
$\mathbb{E}_{\boldsymbol{x}_{t}}[c(\boldsymbol{x}_{t})]$. Specifically, let
$\boldsymbol{x}^{(m)}_{t}$ be the state of the _m_ -th particle at time $t$,
with $m=1,\dots,M$ and $t=0,\dots,T$. The Monte Carlo estimate of the expected
cumulative cost is computed with the following expression:
$\hat{J}(\boldsymbol{\theta})=\sum_{t=0}^{T}\left(\frac{1}{M}\sum_{m=1}^{M}c\left(\boldsymbol{x}_{t}^{(m)}\right)\right)\;.$
(12)
The evolution of every particle $\boldsymbol{x}^{(m)}_{t}$ at the next time
step is sampled from the normal distribution
$p(\boldsymbol{x}^{(m)}_{t+1}|\boldsymbol{x}^{(m)}_{t},\pi_{\boldsymbol{\theta}}(\boldsymbol{x}^{(m)}_{t}),\mathcal{D})\sim\mathcal{N}(\boldsymbol{\mu}_{t+1},\Sigma_{t+1})$,
defined in (6)-(7). Hence, the computation of $\hat{J}(\boldsymbol{\theta})$
entails sampling from probability distributions that depend on the policy parameters $\boldsymbol{\theta}$. The presence of these stochastic operations makes it impossible to straightforwardly compute the gradient of (12) w.r.t. the policy parameters. The _reparameterization trick_ [21] nevertheless allows differentiating through the stochastic operations by redefining the probability distributions involved in the computation of $\nabla_{\boldsymbol{\theta}}\hat{J}$. In fact, instead of sampling directly
from $\mathcal{N}(\boldsymbol{\mu}_{t+1},\Sigma_{t+1})$, it is possible to
sample a point $\epsilon$ from a zero-mean and unit-variance normal
distribution with the same dimension of $\boldsymbol{\mu}_{t+1}$. Then,
$\epsilon$ can be mapped into the desired distribution as
$\boldsymbol{x}^{(m)}_{t+1}=\boldsymbol{\mu}_{t+1}+L_{t+1}\epsilon$, where
$L_{t+1}$ is the Cholesky decomposition of $\Sigma_{t+1}$, namely,
$\Sigma_{t+1}=L_{t+1}L^{T}_{t+1}$. In this way, the _reparameterization trick_
makes the dependency of $\boldsymbol{x}^{(m)}_{t+1}$ on $\boldsymbol{\theta}$ purely deterministic, allowing $\nabla_{\boldsymbol{\theta}}\hat{J}$ to be computed simply by backpropagation. Figure 2
illustrates how the _reparameterization trick_ works in the context of MC-
PILCO. Then, policy parameters $\boldsymbol{\theta}$ are updated using the
Adam solver [35]; we will denote the Adam step size with $\alpha_{lr}$.
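The reparameterized sampling step can be sketched as follows (a minimal illustration of the trick, not the full particle rollout):

```python
import torch

def reparameterized_sample(mu, Sigma):
    """Draw x = mu + L @ eps with eps ~ N(0, I) and Sigma = L L^T, so that
    x is a differentiable (deterministic) function of mu and Sigma."""
    L = torch.linalg.cholesky(Sigma)
    eps = torch.randn(mu.shape[0])
    return mu + L @ eps

# Gradients flow through the sampling operation:
mu = torch.zeros(2, requires_grad=True)
Sigma = 0.1 * torch.eye(2)
x = reparameterized_sample(mu, Sigma)
x.sum().backward()  # dx/dmu is the identity, so mu.grad is all ones
```

Had we sampled `x` directly from `torch.distributions.Normal(...).sample()`, `mu.grad` would not be populated; the reparameterization moves all randomness into `eps`, which does not depend on $\boldsymbol{\theta}$.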
#### III-B3 Dropout
To improve exploration in the parameter space and increase the ability of
escaping from local minima during policy optimization, we considered the use
of dropout [29]. The adopted procedure is described assuming that the policy
is the squashed-RBF-network in (11); similar considerations can be applied to
different policy functions. When dropout is applied to the policy in (11),
weights $\boldsymbol{w}$ are randomly dropped with probability $p_{d}$ at each
evaluation of the policy. This operation is performed by scaling each weight
$w_{i}$ with a random variable $r_{i}\sim Bernoulli(1-p_{d})$, where
$Bernoulli(1-p_{d})$ denotes a Bernoulli distribution, assuming value
$1/(1-p_{d})$ with probability $1-p_{d}$, and $0$ with probability $p_{d}$.
This operation is equivalent to defining a probability distribution for
$\boldsymbol{w}$, obtaining a parameterized stochastic policy. In particular,
as shown in [36], the distribution of each $w_{i}$ can be approximated with a
bimodal distribution, defined by the sum of two properly scaled Gaussian
distributions with infinitely small variance $\xi^{2}$, namely,
$p_{d}\mathcal{N}(0,\xi^{2})+(1-p_{d})\mathcal{N}\left(\frac{w_{i}}{1-p_{d}},\xi^{2}\right)\text{.}$
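Sampling such scaled Bernoulli variables for the policy weights can be sketched as follows (inverted-dropout convention, matching the scaling described above):

```python
import torch

def dropout_weights(w, p_d):
    """Scale each weight by r_i, which takes value 1/(1-p_d) with
    probability 1-p_d and 0 with probability p_d."""
    r = (torch.rand_like(w) >= p_d).float() / (1.0 - p_d)
    return w * r
```

With this scaling, the expected value of each masked weight equals $w_{i}$, so the stochastic policy is unbiased w.r.t. the deterministic one on average.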
The use of a stochastic policy during policy optimization increases the entropy of the particles’ distribution. This property increases the probability of visiting low-cost regions and escaping from local minima. In
addition, we also verified that dropout can mitigate issues related to
exploding gradients. This is probably due to the fact that the average of
several different values of $\boldsymbol{w}$ is used to compute the gradient
and not a single value of $\boldsymbol{w}$, i.e., different policy functions
are used, obtaining a regularization of the gradient estimates.
By contrast, the use of a stochastic policy might affect the precision of the
obtained solution due to the additional entropy. We also need to take into
consideration that the final objective is to obtain a deterministic policy.
For these reasons, we designed a heuristic scaling procedure to gradually
decrease the dropout rate, $p_{d}$, until it equals $0$. The scaling action is
triggered by a monitoring signal $s$, defined from the statistics of the past
history of $\hat{J}$. Define the cost change,
$\Delta\hat{J}_{j}=\hat{J}(\boldsymbol{\theta}_{j})-\hat{J}(\boldsymbol{\theta}_{j-1})$,
where $\boldsymbol{\theta}_{j}$ denotes the policy parameters at the _j_ -th
optimization step. Then, $s$ is computed as a filtered version of the ratio
between $\mathcal{E}[\Delta\hat{J}_{j}]$ and
$\sqrt{\mathcal{V}[\Delta\hat{J}_{j}]}$, that are, respectively, the mean and
the standard deviation of $\Delta\hat{J}_{j}$ computed with an Exponential
Moving Average (EMA) filter. The expression of $s$ at the _j_ -th optimization
step is the following:
$\displaystyle\mathcal{E}[\Delta\hat{J}_{j}]=\alpha_{s}\mathcal{E}[\Delta\hat{J}_{j-1}]+(1-\alpha_{s})\Delta\hat{J}_{j}\text{,}$
$\displaystyle\mathcal{V}[\Delta\hat{J}_{j}]=\alpha_{s}(\mathcal{V}[\Delta\hat{J}_{j-1}]+(1-\alpha_{s})(\Delta\hat{J}_{j}-\mathcal{E}[\Delta\hat{J}_{j-1}])^{2})\text{,}$
$\displaystyle
s_{j}=\alpha_{s}s_{j-1}+(1-\alpha_{s})\frac{\mathcal{E}[\Delta\hat{J}_{j}]}{\sqrt{\mathcal{V}[\Delta\hat{J}_{j}]}}\text{,}$
(13)
with $\alpha_{s}$ a coefficient of the exponential moving average filter,
which determines the memory of the filter. At each iteration of the
optimization procedure, the algorithm checks if the absolute value of the
monitoring signal $s$ in the last $n_{s}$ iterations is below the threshold
$\sigma_{s}$, namely,
$[|s_{j-n_{s}}|\dots|s_{j}|]<\sigma_{s}\text{,}$ (14)
where $<$ is an element-wise operator, and the condition in (14) is true if it
is verified for all the elements. If the condition is verified, $p_{d}$ is
decreased by the quantity $\Delta p_{d}$, and both the learning rate of the
optimizer, $\alpha_{lr}$, and $\sigma_{s}$, are scaled by an arbitrary factor
$\lambda_{s}$. Then, we have
$\displaystyle p_{d}=p_{d}-\Delta p_{d}\text{,}$ (15a)
$\displaystyle\alpha_{lr}=\lambda_{s}\alpha_{lr}\text{,}$ (15b)
$\displaystyle\sigma_{s}=\lambda_{s}\sigma_{s}\text{.}$ (15c)
The procedure is iterated as long as
$p_{d}\geq 0\text{ and }\alpha_{lr}\geq\alpha_{lr_{min}}\text{,}$ (16)
where $\alpha_{lr_{min}}$ is the minimum value of the learning rate.
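One step of the update in (13) translates directly into code:

```python
def update_monitor(s_prev, E_prev, V_prev, dJ, alpha_s):
    """One step of the EMA-based monitoring signal in (13).
    dJ is the latest cost change Delta J_j."""
    E = alpha_s * E_prev + (1.0 - alpha_s) * dJ
    V = alpha_s * (V_prev + (1.0 - alpha_s) * (dJ - E_prev) ** 2)
    s = alpha_s * s_prev + (1.0 - alpha_s) * E / V ** 0.5
    return s, E, V
```

Note that $s$ stays close to zero both when the mean cost change vanishes (a minimum is reached) and when its variance is large (predictions are uncertain), which is exactly the trigger condition exploited in (14).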
The rationale behind this heuristic scaling procedure is the following. The signal $s_{j}$ is small if $\mathcal{E}[\Delta\hat{J}_{j}]$ is close to zero, or if $\mathcal{V}[\Delta\hat{J}_{j}]$ is particularly high. The first case
happens when the optimization reaches a minimum, while the high variance
denotes that the particles’ trajectories cross regions of the workspace where
the uncertainty of the GPs predictions is high. In both cases, we are
interested in testing the policy on the real system, in the first case to
verify if the configuration reached solves the task, and in the second case to
collect data where predictions are uncertain, and so to improve model
accuracy. MC-PILCO is summarized in pseudo-code in Algorithm 1.
We conclude the discussion about policy optimization by reporting, in Table I,
the optimization parameters used in all the proposed experiments, unless
expressly stated otherwise. However, it is worth mentioning that some
adaptation could be needed in other setups, depending on the problem
considered.
Parameter | Description | Value
---|---|---
$p_{d}$ | dropout probability | 0.25
$\Delta p_{d}$ | $p_{d}$ reduction coeff. | 0.125
$\alpha_{lr}$ | Adam step size | 0.01
$\alpha_{lr_{min}}$ | minimum step size | 0.0025
$\alpha_{s}$ | EMA filter coeff. | 0.99
$\sigma_{s}$ | monitoring signal threshold | 0.08
$n_{s}$ | num. iterations monitoring | 200
$\lambda_{s}$ | $\sigma_{s}$ reduction coeff. | 0.5
$M$ | number of particles | 400
TABLE I: Standard values for the policy optimization parameters.
init policy $\pi_{\boldsymbol{\theta}}(\cdot)$, cost $c(\cdot)$, kernel
$k(\cdot,\cdot)$, maximum optimization steps $N_{opt}$, number of particles
$M$, learning rate $\alpha_{lr}$, min. learning rate $\alpha_{lr_{min}}$,
dropout probability $p_{d}$, dropout probability reduction $\Delta_{p_{d}}$
and other monitoring signal parameters: $\sigma_{s}$, $\lambda_{s}$, $n_{s}$.
Apply exploratory control to system and collect data
while _task not learned_ do
1) Model Learning:
Learn GP models from sampled data - Sec. III-A;
2) Policy Update:
Initialize monitoring signal $s_{0}=0$;
for _$j=1...N_{opt}$_ do
Simulate $M$ particles rollouts with GP models and current policy
$\pi_{\boldsymbol{\theta}_{j}}(\cdot)$;
Compute $\hat{J}(\boldsymbol{\theta}_{j})$ from particles (12);
Compute $\nabla_{\boldsymbol{\theta}}\hat{J}(\boldsymbol{\theta}_{j})$ through
backpropagation;
Gradient-based policy update
$\rightarrow\pi_{\boldsymbol{\theta}_{j+1}}(\cdot)$ ;
Update monitoring signal $s_{j}$ with (13);
if _( 14) is True_ then
Update $p_{d}$, $\alpha_{lr}$ and $\sigma_{s}$ with (15);
end if
if _( 16) is False_ then
break;
end if
end for
3) Policy Execution:
apply updated policy to system and collect data
end while
return trained policy, learned GP model;
Algorithm 1 MC-PILCO
## IV MC-PILCO for Partially Measurable Systems
In this section, we discuss the application of MC-PILCO to systems where the
state is partially measurable, i.e., systems whose state is observable, but
only some components of the state can be directly measured, while the rest
must be estimated from measurements. For simplicity, we introduce the problem
discussing the case of a mechanical system where only positions (and not
velocities) can be measured, but similar considerations can be done for any
partially measurable system with observable state. Then, we describe _MC-PILCO
for Partially Measurable Systems_ (MC-PILCO4PMS), a modified version of MC-
PILCO, proposed to deal with such setups.
Consider a mechanical system where only joint positions can be measured. This
can be described as a partially measurable system, where in the state
$\boldsymbol{x}_{t}=[\boldsymbol{q}_{t}^{T},\boldsymbol{\dot{q}}_{t}^{T}]^{T}$
only $\boldsymbol{q}_{t}$ is measured. Consequently, the
$\boldsymbol{\dot{q}}_{t}$ elements are estimated starting from the history of
$\boldsymbol{q}_{t}$ measurements through proper estimation procedures,
possibly also denoising $\boldsymbol{q}_{t}$ when
the measurement noise is high. In particular, it is worth distinguishing
between estimates computed online and estimates computed offline. The former
are provided to the control policy to determine the system control input, and
they need to respect real-time constraints, namely, velocity estimates are
causal and computations must be performed within a given interval. For the
latter, we do not have to deal with such constraints. As a consequence,
offline estimates can be more accurate, taking into account acausal
information and limiting delays and distortions.
In this context, we verified that, during policy optimization, it is relevant
to distinguish between the particle state predictions computed by the models
and the data provided to the policy. On the one hand, the GPs should simulate the real system dynamics independently of the additional noise introduced by the sensing instrumentation; they need to work with the most accurate estimates available, possibly obtained with acausal filters, since delays and distortions might compromise the accuracy of long-term predictions. On the other hand, feeding the policy directly with the particle states computed by the GPs during policy optimization corresponds to training the policy under the assumption of direct access to the system state, which is not possible in the considered setup. Indeed,
relevant discrepancies between the particle states and the state estimates
computed online, during the interaction with the real system, might compromise
the effectiveness of the policy. Most of the previous GP-based MBRL algorithms
do not focus on these aspects, and assume direct access to the state. In our
opinion, a correct understanding of the state estimation problem, for both
modeling and control purposes, is fundamental for a robust deployment of MBRL
solutions to real-world applications.
To deal with the above issues, we introduce MC-PILCO4PMS, an extension of MC-PILCO that carefully takes into account the presence of online state
estimators during policy training. With respect to the algorithm described in
Section III, we propose the two following additions:
Offline estimation of GPs training data. We compute the state estimates used
to train the GP models with offline estimation techniques. In particular, in
our real experiments, we considered two options,
* •
Computation of the velocities with the central difference formula, i.e.,
$\dot{\boldsymbol{q}}_{t}=(\boldsymbol{q}_{t+1}-\boldsymbol{q}_{t-1})/(2T_{s})$,
where $T_{s}$ is the sampling time. This technique can be used only when the
measurement noise is limited, otherwise the $\dot{\boldsymbol{q}}$ estimates
might be too noisy.
* •
Estimation of the state with a Kalman smoother [37], with state-space model
given by the general equations relating positions, velocities, and
accelerations. The advantage of this technique is that it exploits the
correlation between positions and velocities, increasing regularization.
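The central-difference option, for example, is a short map over the position history (the endpoint handling here is our own assumption; the text only defines the interior formula):

```python
import numpy as np

def central_difference(q, Ts):
    """Acausal central-difference velocity estimate:
    q_dot[t] = (q[t+1] - q[t-1]) / (2*Ts).
    Endpoints fall back to one-sided differences."""
    q = np.asarray(q, dtype=float)
    qd = np.empty_like(q)
    qd[1:-1] = (q[2:] - q[:-2]) / (2.0 * Ts)
    qd[0] = (q[1] - q[0]) / Ts
    qd[-1] = (q[-1] - q[-2]) / Ts
    return qd
```

Being acausal (it uses $\boldsymbol{q}_{t+1}$), this estimator is only available offline, which is precisely why it is reserved for preparing the GP training data.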
Simulation of the online estimators. During policy optimization, instead of
simulating only the evolution of the particles states, we simulate also the
measurement system and the online estimators. The state fed to the policy,
denoted by $\bar{\boldsymbol{x}}_{t}$, is computed to resemble the state that
will be estimated online. Given the _m_ -th particle, this is given by
$\displaystyle\bar{\boldsymbol{x}}^{(m)}_{t}=\varphi\left(\bar{\boldsymbol{q}}^{(m)}_{t}\dots\bar{\boldsymbol{q}}^{(m)}_{t-m_{q}},\bar{\boldsymbol{x}}^{(m)}_{t-1}\dots\bar{\boldsymbol{x}}^{(m)}_{t-1-m_{\varphi}}\right)\text{,}$
where $\varphi$ denotes the online state estimator, with memory $m_{q}$ and
$m_{\varphi}$, and $\bar{\boldsymbol{q}}^{(m)}_{t}$ is a fictitious noisy
measurement of the _m_ -th particle positions. More precisely, let $\boldsymbol{q}^{(m)}_{t}$ be the positions of the particle state $\boldsymbol{x}^{(m)}_{t}$; then, we have
$\bar{\boldsymbol{q}}^{(m)}_{t}=\boldsymbol{q}^{(m)}_{t}+\boldsymbol{e}^{(m)}_{t}\text{,}$
(17)
where $\boldsymbol{e}^{(m)}_{t}\in\mathbb{R}^{d_{x}/2}$ is Gaussian i.i.d.
noise with zero mean and covariance
$\text{diag}([\sigma^{(1)}_{\bar{x}}\dots\sigma^{(d_{x}/2)}_{\bar{x}}])$. The
$\sigma^{(i)}_{\bar{x}}$s values must be tuned in accordance with the
properties of the measurement system, e.g., the accuracy of the encoder. Then,
the control input of the _m_ -th particle is computed as
$\pi_{\boldsymbol{\theta}}(\bar{\boldsymbol{x}}^{(m)}_{t})$, instead of
$\pi_{\boldsymbol{\theta}}(\boldsymbol{x}^{(m)}_{t})$. The differences in particle generation between MC-PILCO and MC-PILCO4PMS are summarized in the block schemes reported in Figure 3.
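The fictitious measurement in (17) can be sketched as follows (we assume here that the $\sigma^{(i)}_{\bar{x}}$ values on the covariance diagonal are variances):

```python
import numpy as np

def fictitious_measurement(q_particle, sigma_x, rng=None):
    """q_bar = q + e, with e ~ N(0, diag(sigma_x)), as in eq. (17)."""
    rng = np.random.default_rng() if rng is None else rng
    e = rng.standard_normal(q_particle.shape) * np.sqrt(sigma_x)
    return q_particle + e
```

The online estimator $\varphi$ is then applied to these noisy positions, so that the policy is trained on inputs statistically similar to the ones it will receive on the real system.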
(a) MC-PILCO
(b) MC-PILCO4PMS
Figure 3: Block schemes illustrating particles generation in MC-PILCO (top)
and MC-PILCO4PMS (bottom).
## V MC-PILCO: Ablation Studies
In this section, we analyze several aspects affecting the performance of MC-
PILCO, such as the shape of the cost function, the use of dropout, the kernel
choice, and the probabilistic model adopted, namely, full-state or speed-
integration dynamical model. The purpose of the analysis is to validate the
choices made in the proposed algorithm, and show the effect that they have on
the control learning procedure. MC-PILCO has been implemented in Python,
exploiting the automatic differentiation functionalities of the PyTorch library [38] (code available at https://www.merl.com/research/license/MC-PILCO).
We considered the swing-up of a simulated cart-pole, a classical benchmark
problem, to perform the ablation studies. The system and the experiments are
described in the following. The physical properties of the system are the same
as the system used in PILCO [6]: the masses of both cart and pole are 0.5
[kg], the length of the pole is $L=0.5$ [m], and the coefficient of friction
between cart and ground is 0.1. The state at each time step $t$ is defined as
$\boldsymbol{x}_{t}=[p_{t},\dot{p}_{t},\theta_{t},\dot{\theta}_{t}]$, where
$p_{t}$ represents the position of the cart and $\theta_{t}$ the angle of the
pole. The target state corresponding to the swing-up of the pendulum is given
by $p^{des}=0$ [m] and $|\theta^{des}|=\pi$ [rad]. The downward stable
equilibrium point is defined at $\theta_{t}=0$ [rad]. As done in [6], in order
to avoid singularities due to the angles, $\boldsymbol{x}_{t}$ is replaced in
the algorithm with the state representation
$\boldsymbol{x}^{*}_{t}=[p_{t},\dot{p}_{t},\dot{\theta}_{t},sin(\theta_{t}),cos(\theta_{t})]$
(18)
The control action is the force that pushes the cart horizontally. In all
following experiments, we considered white measurement noise with standard
deviation of $10^{-2}$, and as initial state distribution
$\mathcal{N}([0,0,0,0],\text{diag}([10^{-4},10^{-4},10^{-4},10^{-4}]))$. The
sampling time is $0.05$ seconds. The policy is a squashed-RBF-network with
$n_{b}=200$ basis functions; it receives $\boldsymbol{x}^{*}_{t}$ as input, and the maximum control action is $u_{max}=10$ [N]. The exploration trajectory is obtained by applying at each
time step $t$ a random control action sampled from $\mathcal{U}(-10,10)$. GP
reduction techniques were not adopted.
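The state replacement in (18) is a simple map (the component order follows the text):

```python
import numpy as np

def to_extended_state(x):
    """Map x = [p, p_dot, theta, theta_dot] to the singularity-free
    representation x* = [p, p_dot, theta_dot, sin(theta), cos(theta)]."""
    p, p_dot, theta, theta_dot = x
    return np.array([p, p_dot, theta_dot, np.sin(theta), np.cos(theta)])
```

Encoding the angle through its sine and cosine makes the representation continuous across the $\pm\pi$ wrap-around, which is the singularity the text refers to.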
In this work, in all the experiments carried out with MC-PILCO, the cost
function is a saturating function with the same general structure. The
saturation is given by a negative exponential of the
$\boldsymbol{x}_{t}-\boldsymbol{x}^{des}$ squared norm, namely,
$c(\boldsymbol{x}_{t})=1-\text{exp}\left(-\left(\boldsymbol{x}_{t}-\boldsymbol{x}^{des}\right)^{T}L\left(\boldsymbol{x}_{t}-\boldsymbol{x}^{des}\right)\right),$
where $L$ is a diagonal matrix. The diagonal elements of $L$ are the inverse
of the squared cost length-scales, and they allow weighting the different
components of $\boldsymbol{x}_{t}-\boldsymbol{x}^{des}$, for instance based on
their range of variation. Notice that this general structure of the cost can
be applied to any system, and generalizes also to tasks with time-variant
target, such as trajectory tracking tasks. Then, the cost function considered for the cart-pole system is the following,
$c(\boldsymbol{x}_{t})=1-\text{exp}\left(-\left(\frac{|\theta_{t}|-\pi}{l_{\theta}}\right)^{2}-\left(\frac{p_{t}}{l_{p}}\right)^{2}\right),$
(19)
where the absolute value on $\theta_{t}$ is needed to allow different swing-up
solutions to both the equivalent target angles of the pole, $\pi$ and $-\pi$.
The length-scales $l_{\theta}$ and $l_{p}$ define the shape of the cost function: with small length-scales, $c(\cdot)$ approaches its maximum value more rapidly, so the same distance from the target state is associated with a higher cost when $l_{\theta}$ and $l_{p}$ are lower. The lower the length-scale, the more selective the cost function.
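With $L=\text{diag}(1/l^{2})$, the general saturating cost reads as follows (a direct sketch of the expression above):

```python
import numpy as np

def saturating_cost(x, x_des, lengthscales):
    """c(x) = 1 - exp(-(x - x_des)^T L (x - x_des)),
    with L = diag(1 / lengthscales^2)."""
    d = ((np.asarray(x, float) - np.asarray(x_des, float))
         / np.asarray(lengthscales, float))
    return 1.0 - np.exp(-np.dot(d, d))
```

The cost is zero at the target and saturates to one far from it, and shrinking the length-scales sharpens the well around the target.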
Other algorithms, like PILCO [6] and Black-DROPS [17], used an alternative
cost function for solving the cart-pole swing-up, with the saturation given by
the negative exponential of the squared Euclidean distance between
$\boldsymbol{x}_{t}$ and $\boldsymbol{x}^{des}$, namely,
$c^{\text{pilco}}(\boldsymbol{x}_{t})=1-\text{exp}\left(-\frac{1}{2}\left(\frac{d_{t}}{0.25}\right)^{2}\right),$
(20)
where $d_{t}^{2}=p_{t}^{2}+2p_{t}L\sin(\theta_{t})+2L^{2}(1+\cos(\theta_{t}))$ is the squared Euclidean distance between the tip of the pole and its position at the unstable equilibrium point with $p_{t}=0$ [m]. Since we compare MC-PILCO with PILCO and Black-DROPS in Section VI-A, the results for the cart-pole system are rendered w.r.t. (20) to allow direct comparisons with previous literature.
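A sketch of (20) with the tip-distance expression given above ($L=0.5$ [m] for this cart-pole):

```python
import numpy as np

def pilco_cost(p, theta, L=0.5):
    """c_pilco = 1 - exp(-0.5 * (d/0.25)^2), with d the distance between
    the pole tip and its position at the upright equilibrium with p = 0."""
    d2 = (p ** 2 + 2.0 * p * L * np.sin(theta)
          + 2.0 * L ** 2 * (1.0 + np.cos(theta)))
    return 1.0 - np.exp(-0.5 * d2 / 0.25 ** 2)
```

The cost vanishes only at the upright configuration with the cart centered, and with a 0.25 length-scale it saturates quickly away from it.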
All the comparisons consist of a Monte-Carlo study composed of 50 experiments.
Every experiment is composed of 5 trials, each of length 3 seconds. The random
seed varies at each experiment, corresponding to different explorations and
initialization of the policy, as well as different measurement noise
realizations. For each trial, we report the median value and confidence
interval defined by the 5-th and 95-th percentile of the cumulative cost
computed with $c^{\text{pilco}}(\cdot)$, as well as the success rates
observed. We mark two values of the cumulative cost indicatively associated
with a swing-up for which the pole oscillates once or twice before reaching
the upwards equilibrium. Trivially, the solution we aim for is the one that
entails only one oscillation. Finally, we label a trial as "success" if
$|p_{t}|<0.1$ [m] and $170\text{ [deg]}<|\theta_{t}|<190\text{ [deg]}$
$\forall t$ in the last second of the trial.
To evaluate the statistical significance of the reported results, we tested
the cumulative cost distributions with a Mann-Whitney U-test [39], and the
success rates with a Barnard’s exact test [40]. The significance level of both
tests is set to 0.05. For the sake of space, we point out statistically
significant results on the plots and tables and we explicitly report p-values
only when objective conclusions are drawn.
### V-A Cost shaping
The first test regards the performance obtained varying the length-scales of
the cost function in (19). Reward shaping is a known important aspect of RL
and here we analyze it for MC-PILCO. In Figure 4, we compare the evolution of
the cumulative costs obtained with $(l_{\theta}=3,l_{p}=1)$ and
$(l_{\theta}=0.75,l_{p}=0.25)$ and we report the observed success rates. The
latter set of length-scales defines a more selective cost as the function
shape becomes more skewed. In both cases, we adopted the speed-integration
model with SE kernel and no dropout was used during policy optimization.
Figure 4: Median and confidence intervals of the cumulative cost
$c^{\text{pilco}}(\cdot)$ per trial obtained using $(l_{\theta}=3,l_{p}=1)$ or
$(l_{\theta}=0.75,l_{p}=0.25)$. In both cases, we used GP speed-integration
models with SE kernels and no dropout was applied. In the cumulative cost
plot, we marked each trial with an *, to indicate the statistical significance
of the difference between the two options. Instead, the difference between
success rates is not statistically significant.
Success Rates
---
| Trial 1 | Trial 2 | Trial 3 | Trial 4 | Trial 5
$l$=(0.75,0.25) | 0% | 4% | 42% | 68% | 70%
$l$=(3,1) | 0% | 6% | 54% | 72% | 82%
The results show that with $(l_{\theta}=3,l_{p}=1)$ MC-PILCO performs better.
Indeed, the median and variance of $(l_{\theta}=0.75,l_{p}=0.25)$ are higher
w.r.t. the ones of $(l_{\theta}=3,l_{p}=1)$ (the difference is statistically
relevant at every trial, with p-value $2.7\cdot 10^{-4}$ at trial 1 and
smaller than $10^{-4}$ in all subsequent trials). Observing the cumulative
costs, it is possible to appreciate also a difference in the quality of the
policies learned in the two cases. When using $(l_{\theta}=3,l_{p}=1)$, MC-
PILCO learned to swing-up the cart-pole with only one oscillation in the
majority of the experiments, while it has never been obtained with
$(l_{\theta}=0.75,l_{p}=0.25)$. The success rates obtained with
$(l_{\theta}=3,l_{p}=1)$ are greater than the counterpart, but this difference
is not statistically significant, showing that the benefits of less selective
cost functions are not sufficient, alone, to guarantee a clear advantage in
terms of success rates.
These facts suggest that the use of too selective cost functions might
decrease significantly the probability of converging to a solution. The reason might be that, with small-valued length-scales, $c(\boldsymbol{x}_{t})$ is very peaked, resulting in an almost null gradient when the policy parameters are far from a good configuration, and increasing the probability of getting stuck in a local minimum. Instead, higher values of the length-scales promote the
presence of non-null gradients also far away from the objective, facilitating
the policy optimization procedure. These observations have already been made
in PILCO, but the authors did not encounter difficulties in using a small
length-scale such as 0.25 in (20). This may be due to the analytic computation
of the policy gradient made possible thanks to moment matching, as well as to
the different optimization algorithm used. On the other hand, the length-scales’ values seem to have no effect on the precision of the learned solution. To confirm this, Table III (rows 4 and 5) reports the
average distances from the target states obtained by successful policies at
trial 5 during the last second of interaction. No significant difference in
terms of precision in reaching the targets is observed.
### V-B Dropout
In this test, we compared the results obtained using, or not, the dropout
during policy optimization. In Figure 5, we compare the evolution of the
cumulative cost obtained in the two cases and we show the obtained success
rates.
Figure 5: Median and confidence intervals of the cumulative cost
$c^{\text{pilco}}(\cdot)$ per trial obtained using, or not, dropout. In both
cases, we adopted GP speed-integration model with SE kernels, $l_{\theta}=3$
and $l_{p}=1$. Success rates are reported below. In both cumulative cost plot
and success rate table, we marked each trial with an *, to indicate the
statistical significance of the difference between the two options.
Success Rates
---
| Trial 1 | Trial 2 | Trial 3 | Trial 4 | Trial 5
Dropout OFF | 0% | 6% | 54%* | 72%* | 82%*
Dropout ON | 0% | 14% | 76%* | 98%* | 100%*
In both scenarios, we adopted the speed-integration model with SE kernel and a
cost function with length-scales $(l_{\theta}=3,l_{p}=1)$. When using dropout,
MC-PILCO solved the task at trial 4 in 98% of the experiments, and it
managed to reach a 100% success rate by trial 5. Instead, without dropout, the
correct policy was not always found, even in the last trial. Notice that, when
dropout is not used, the upper bounds of the cumulative costs in the last two
trials are higher, meaning that the task cannot always be solved correctly.
The statistical tests show that the advantages of dropout are statistically
significant from trial 3 to trial 5 (cumulative cost
p-values$:[0.33,1.1,0.29]\cdot 10^{-3}$; success rate
p-values$:[11,0.13,0.90]\cdot 10^{-3}$). This fact suggests that dropout
increases the probability of escaping from local minima, promoting the
identification of a better policy. Additionally, Table III (rows 3 and 5),
shows that dropout also helps in decreasing the cart positioning error at the
end of the swing-up (in both mean and standard deviation). Thus, we found
empirically that dropout not only helps in stabilizing the learning process
and in finding better solutions more consistently, but it can also improve the
precision of the learned policies.
### V-C Kernel function
In this test, we compared the results obtained using as kernels the SE, the
SE+P(2) or the SP, see Section III-A. Our aim is to test if the use of
structured kernels can increase data efficiency. The kernels are listed from
the least to the most structured: SE+P(2) can capture polynomial contributions, which are typical of robotic systems, more efficiently than SE, and the SP kernel favours modes derived from the system equations, without assuming knowledge of the physical parameters. The SP basis functions are obtained by isolating, in each ODE defining the cart-pole laws of motion, all the state-dependent components that are linearly related: we have $\phi_{\dot{p}}(\boldsymbol{x},u)=[\dot{\theta}^{2}\sin(\theta),\,\sin(\theta)\cos(\theta),\,u,\,\dot{x}]$ for the cart velocity GP, and $\phi_{\dot{\theta}}(\boldsymbol{x},u)=[\dot{\theta}^{2}\sin(\theta)\cos(\theta),\,\sin(\theta),\,u\cos(\theta),\,\dot{x}\cos(\theta)]$ for the pole velocity GP. In all the cases, we adopted a speed-integration
model, the cost function was defined with length-scales
$(l_{\theta}=3,l_{p}=1)$, and dropout was used. In Figure 6, we present, for
each trial, the obtained cumulative costs and success rates. We can observe
that the use of structured kernels, such as SP and SE+P(2), can be beneficial
in terms of data efficiency, compared to adopting the standard SE kernel. In
fact, the fastest convergence is observed in the SP case, where a success rate
of 100% is obtained at trial 3, after only 9 seconds of experience. Also at
trial 2, the gap between the SP performance and the ones of SE and SE+P(2) is
considerable. The statistical tests show that the differences w.r.t the
SE+P(2) and SE kernel are statistically significant from trial 1 to trial 3,
confirming the augmented data efficiency (SP vs SE+P(2) cumulative cost
p-values: $<10^{-4}$ at trials 1 and 2, $3.9\cdot 10^{-3}$ at trial 3; SP vs
SE+P(2) success rate p-values$:[22,0.37,6.0]\cdot 10^{-3}$ ; SP vs SE
cumulative cost p-values: always $<10^{-4}$; SP vs SE success rate p-values:
$2.2\cdot 10^{-2}$ at trial 1 and $<10^{-4}$ later). Moreover, the cumulative
cost distributions obtained by SE+P(2) and SE differ statistically after trial
1 (p-values: $<10^{-4}$ at trial 2, $[0.42,3.6,6.2]\cdot 10^{-3}$ later),
observing a statistically significant success rate improvement at trial 2
(p-value$:6.0\cdot 10^{-3}$) when comparing the performance of SE+P(2) and SE
kernels. These differences can be explained by the capacity of a more
structured kernel to better generalize outside of the training set, i.e., to
learn dynamical properties of the system that hold also in areas of the state-
action space with scarce data points. In fact, some dynamics components of the
cart-pole system are polynomial functions of the GP input
$\tilde{\boldsymbol{x}}_{t}=(\boldsymbol{x}^{*}_{t},\boldsymbol{u}_{t})$, with
$\boldsymbol{x}^{*}_{t}$ defined in (18), leading SE+P(2) to achieve better
data efficiency during the first trials compared to SE. Going one step further, the SP kernel exploits features derived from direct knowledge of the physical model, and thus reaches an even higher level of data efficiency.
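As an illustration, the SP basis functions listed in footnote 2 can be assembled into a linear kernel on physics-derived features. The following minimal Python sketch assumes a state ordering $[p,\dot{p},\theta,\dot{\theta}]$ and a diagonal feature weighting; both are assumptions of this example, not details from the paper.

```python
import numpy as np

def sp_features_cart(x, u):
    """Hand-derived basis functions for the cart-velocity GP; the state
    ordering (p, p_dot, theta, theta_dot) is an assumption of this sketch."""
    _, p_dot, theta, theta_dot = x
    return np.array([theta_dot**2 * np.sin(theta),
                     np.sin(theta) * np.cos(theta),
                     u,
                     p_dot])

def sp_features_pole(x, u):
    """Basis functions for the pole-velocity GP."""
    _, p_dot, theta, theta_dot = x
    return np.array([theta_dot**2 * np.sin(theta) * np.cos(theta),
                     np.sin(theta),
                     u * np.cos(theta),
                     p_dot * np.cos(theta)])

def sp_kernel(x1, u1, x2, u2, weights, features):
    """Linear kernel on the physics-derived features:
    k = phi(x1,u1)^T diag(weights) phi(x2,u2)."""
    return features(x1, u1) @ (weights * features(x2, u2))
```

The GP hyperparameters reduce to the feature weights, which is why the SP model needs very little data to generalize.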
Figure 6: Median and confidence intervals of the cumulative cost
$c^{\text{pilco}}(\cdot)$ per trial obtained using GP speed-integration model
with kernel SE, SE+P(2) and SP. In all the cases, $l_{\theta}=3$, $l_{p}=1$,
and dropout was used. Success rates are reported below. In both cumulative
cost plot and success rate table, we marked each trial to indicate the
statistical significance of the difference between the three options. The
labels adopted are, *: SE+P(2) vs SE; ${\dagger}$: SP vs SE; $\diamond$: SP vs
SE+P(2).
Success Rates
---
| Trial 1 | Trial 2 | Trial 3 | Trial 4 | Trial 5
---|---|---|---|---|---
SE | 0%${\dagger}$ | 14%*${\dagger}$ | 76%${\dagger}$ | 98% | 100%
SE+P${}^{\text{(2)}}$ | 0%$\diamond$ | 36%*$\diamond$ | 88%$\diamond$ | 98% | 100%
SP | 8%${\dagger}\diamond$ | 70%${\dagger}\diamond$ | 100%${\dagger}\diamond$ | 100% | 100%
### V-D Speed-integration model
In this test, we compared the performance obtained by the proposed speed-
integration dynamical model and by the standard full-state model. In both
cases, SE kernels were adopted, the cost function was defined with length-
scales $(l_{\theta}=3,l_{p}=1)$, and dropout was used. The success rates
obtained at each trial are reported in Table II. We can observe that the
performances obtained by the two structures are quite similar; in fact, the differences between the success rates observed at trials 2 and 3 are not statistically significant. The precision in reaching the target state is also
comparable, as reported in Table III (rows 3 and 6). Hence, the proposed
speed-integration model performs similarly compared to the full-state
counterpart but offers the advantage of reducing the computational burden by
halving the number of GPs employed.
| Trial 1 | Trial 2 | Trial 3 | Trial 4 | Trial 5
---|---|---|---|---|---
Full-state | 0% | 12% | 70% | 98% | 100%
Speed-int. | 0% | 14% | 76% | 98% | 100%
TABLE II: Success rates per trial obtained using full-state or speed-
integration dynamical models. The difference between the two options is not
statistically significant.
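The idea behind the speed-integration model can be sketched in a few lines: the GPs predict only the velocity changes, and positions follow by numerical integration. The trapezoidal rule below is an assumption of this sketch (the paper defines its own integration scheme), but it shows why no position GPs are needed.

```python
def speed_integration_step(pos, vel, delta_vel, Ts):
    """One step of a speed-integration model: the GP predicts only the
    velocity change delta_vel; the position is obtained by integrating
    the velocities (trapezoidal rule, an assumption of this sketch)."""
    vel_next = vel + delta_vel                    # GP output
    pos_next = pos + Ts * (vel + vel_next) / 2.0  # no extra GP required
    return pos_next, vel_next
```

For a system with $d$ position/velocity pairs, this halves the number of GPs from $2d$ to $d$.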
## VI MC-PILCO Experiments
In this section, we describe different experiments conducted on simulated
scenarios to test the validity of the proposed MC-PILCO algorithm. First, we
compare MC-PILCO to other GP-based MBRL algorithms, namely PILCO and Black-
DROPS, on the cart-pole benchmark. Second, we analyse MC-PILCO and PILCO
computational time requirements. Moreover, we tested the capacity of our
algorithm to handle bimodal state distributions in the cart-pole benchmark.
Finally, we tested MC-PILCO in a higher DoF system, namely a UR5 robotic
manipulator, where we solved a trajectory tracking task.
### VI-A Comparison with other algorithms
We tested PILCO333PILCO code available at http://mlg.eng.cam.ac.uk/pilco/,
Black-DROPS444Black-DROPS code available at
https://github.com/resibots/blackdrops, and MC-PILCO on the cart-pole system,
previously described in Section V. In MC-PILCO, we considered the cost
function (19) with length-scales $(l_{\theta}=3,l_{p}=1)$, and adopted the SE
kernel, as it is the one employed by the other algorithms. PILCO and Black-
DROPS optimized their original cost/reward function (20). To be consistent
with the previous literature, we used the latter cost function as common
metric to compare the results. For fairness, we also verified whether PILCO and Black-DROPS benefit from higher length-scales in (20). Moreover, we tested Black-DROPS with cost function (19), increasing the length-scales from small values to $(l_{\theta}=3,l_{p}=1)$. The performance of both algorithms deteriorated as we increased the length-scales. For these reasons,
we report the results of both algorithms achieved with (20), which gave the
best performance. The observed cumulative costs and success rates are reported
in Figure 7. MC-PILCO achieved the best performance both in transitory and at
convergence. In fact, it obtained a statistically significant improvement in
terms of success rate w.r.t. the other algorithms from trial 2 to 5 (MC-PILCO
vs PILCO p-values: $4.7\cdot 10^{-2}$ at trial 2 and $<10^{-4}$ later; MC-
PILCO vs Black-DROPS p-values: $4.7\cdot 10^{-2}$ at trial 2, $<10^{-4}$ at
trials 3 and 4, and $3.3\cdot 10^{-3}$ at trial 5). Moreover, MC-PILCO
cumulative cost distributions show lower median and variance w.r.t.
counterparts, with differences always statistically significant up to trial 4
(MC-PILCO vs PILCO, p-values: $3.5\cdot 10^{-3}$ at trial 1 and $<10^{-4}$
later; MC-PILCO vs Black-DROPS p-values: $<10^{-4}$ at trial 1 and 2,
$4.6\cdot 10^{-4}$ at trial 3 and $1.1\cdot 10^{-2}$ at trial 4). On the
contrary, PILCO showed poor convergence properties; Black-DROPS outperforms PILCO but does not reach the MC-PILCO level of performance. Finally,
results in Table III (rows 1, 2, 3, 7 and 8), also show that MC-PILCO policies
are more precise in reaching the target.
Figure 7: Median and confidence intervals of the cumulative cost
$c^{\text{pilco}}(\cdot)$ per trial obtained with PILCO, Black-DROPS and MC-
PILCO (with speed-integration model, SE kernel, dropout activated,
$l_{\theta}=3$ and $l_{p}=1$). Success rates are reported below. In both
cumulative cost plot and success rate table, we marked each trial to indicate
the statistical significance of the difference between the three algorithms.
In the following, we report the list of labels adopted, *: MC-PILCO vs PILCO,
${\dagger}$: MC-PILCO vs Black-DROPS
Success Rates
---
| Trial 1 | Trial 2 | Trial 3 | Trial 4 | Trial 5
---|---|---|---|---|---
PILCO | 2% | 4%* | 20%* | 36%* | 42%*
Black-DROPS | 0% | 4%${\dagger}$ | 30%${\dagger}$ | 68%${\dagger}$ | 86%${\dagger}$
MC-PILCO | 0% | 14%*${\dagger}$ | 76%*${\dagger}$ | 98%*${\dagger}$ | 100%*${\dagger}$
| | $e_{p}$ [m] | $e_{\theta}$ [rad]
---|---|---|---
1 | S.I. SE+P(2) (3,1) drop. on | $0.008\pm 0.003$ | $0.011\pm 0.04$
2 | S.I. SP (3,1) drop. on | $0.008\pm 0.003$ | $0.011\pm 0.005$
3 | S.I. SE (3,1) drop. on | $0.010\pm 0.005$ | $0.011\pm 0.005$
4 | S.I. SE (0.75,0.25) drop. off | $0.016\pm 0.009$ | $0.012\pm 0.008$
5 | S.I. SE (3,1) drop. off | $0.019\pm 0.014$ | $0.015\pm 0.009$
6 | F.S. SE (3,1) drop. on | $0.011\pm 0.005$ | $0.011\pm 0.005$
7 | Black-DROPS | $0.025\pm 0.011$ | $0.033\pm 0.019$
8 | PILCO | $0.027\pm 0.012$ | $0.045\pm 0.019$
TABLE III: Average distances from the target states ($p_{t}=0$ and
$\theta_{t}=\pm\pi$) obtained during the last second of interaction with the
cart-pole by the successful policies learned by PILCO, Black-DROPS and the
various MC-PILCO configurations analyzed in Section V. Different
configurations are labeled reporting the adopted dynamical model structure
(speed-integration, S.I., or full-state, F.S.), kernel function, cost length-
scales, and if dropout was used or not. Values are reported as mean $\pm$
standard deviation, calculated over the total number of successful runs at
trial 5.
### VI-B Computational time analysis
We analyzed the time required by MC-PILCO and PILCO to compute the
approximation of the cumulative cost expectation and its gradient w.r.t. the
policy parameters. We left Black-DROPS out of this comparison, because of the
different nature of its optimization strategy, which is based on a black-box
gradient-free algorithm. We remark that the algorithms are implemented in
different languages, which significantly affects computational time (PILCO is
implemented in MATLAB, MC-PILCO in Python). MC-PILCO relies on the speed-
integration dynamical model, which halves the number of GPs employed. For
these reasons, we are more interested in the behavior of computational time as
a function of training samples and system dimension than in absolute values of
time reported. Figure 8 shows that both with MC-PILCO and PILCO the average
computational time scales with the square of the training samples $n$, as
expected from the analysis in Section II-C. As regards the dependencies w.r.t.
system dimensions, we considered three systems of increasing dimension: a
pendulum ($d_{x}=2$), a cart-pole ($d_{x}=4$), and a cart-double-pendulum
($d_{x}=6$). MC-PILCO scales linearly, while for PILCO the linear model is not
enough to fit the average computational time; PILCO scales at least
quadratically. This fact represents a great advantage of the particle-based
approximation used by MC-PILCO w.r.t. the moment matching approach followed by
PILCO. Figure 8 also reports MC-PILCO computational time as a function of the
particles number. In accordance with the results in Section II-C, MC-PILCO
complexity scales linearly with the number of particles. Finally, we tested
MC-PILCO on a GPU instead of a CPU: the average times collected are almost
constant w.r.t. the number of samples and particles. As expected, MC-PILCO is
highly parallelizable.
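The linear scaling in the number of particles can be seen directly in a batched GP mean computation: once $\alpha=K^{-1}\boldsymbol{y}$ is cached (an $O(n^{3})$ cost paid once per trial), evaluating $M$ particles is a single matrix operation. The sketch below is a hypothetical minimal SE-kernel example; signal amplitudes and variance terms are omitted.

```python
import numpy as np

def gp_mean_batch(Xp, Xtrain, alpha, lengthscales):
    """Batched SE-kernel GP posterior mean for M particles at once.
    With alpha = K^{-1} y precomputed, each call costs O(M * n * d),
    i.e., linear in the particle number M."""
    # pairwise scaled squared distances, shape (M, n)
    diff = (Xp[:, None, :] - Xtrain[None, :, :]) / lengthscales
    K = np.exp(-0.5 * np.sum(diff**2, axis=-1))
    return K @ alpha  # shape (M,)
```

Since all particles share the same matrix products, the computation maps naturally onto a GPU, which explains the near-constant GPU timings reported above.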
We conclude the computational time analysis reporting the average and the
standard deviation of the time required to run MC-PILCO and PILCO for 5
trials, computed over the 50 runs. On average, PILCO and MC-PILCO took,
respectively, $1692$ and $2060$[s], with standard deviations $94$ and
$157$[s]. The overall times are similar; PILCO is faster than MC-PILCO, even
though it requires more time to compute a single approximation of the
cumulative cost expectation and its gradient. This is due to the optimization
algorithm adopted, which performs fewer steps but converges to worse policies.
As previously highlighted, the performance gap between the two algorithms is
considerable. At the last trial, PILCO converges only in $42\%$ of the runs,
while MC-PILCO in $100\%$. For the sake of completeness, we tried to increase
the maximum number of function evaluations admitted by the PILCO optimization
algorithm. Computational time increased without improving success rate.
Figure 8: Average time required to compute the distribution of long-term
predictions and its gradient as a function of: GP training samples (top-left,
on the simulated cart-pole), system dimension (top-right, with 300 training
samples), number of particles (bottom-left, with 300 training samples on the
simulated cart-pole). For all the algorithms and systems, the policy was a RBF
network with 200 basis functions. Hardware adopted: CPU: Intel i7-6700K, GPU:
Nvidia RTX 2080 Ti.
### VI-C Handling bimodal distributions
One of the main advantages of particle-based policy optimization is the
capability to handle multimodal state evolutions. This is not possible when
applying methods based on moment matching, such as PILCO. We verified this
advantage by applying both PILCO and MC-PILCO to the simulated cart-pole
system, when considering a very high variance on the initial cart position,
$\sigma_{p}^{2}=0.5$, which corresponds to have unknown cart’s initial
position (but limited within a reasonable range). With this initial condition,
the optimal initial cart direction and the swing-up direction depend on
whether the initial position of the cart is positive or negative. The aim is
to be in a situation in which the policy has to solve the task regardless of
the initial conditions and needs to have a bimodal behaviour in order to do
so. Note that the situation described could be relevant in several real
applications. We kept the same setup used in previous cart-pole experiments,
changing the initial state distribution to a zero mean Gaussian with
covariance matrix $\text{diag}([0.5,10^{-4},10^{-4},10^{-4}])$. MC-PILCO
optimizes the cost in (19) with length-scales $(l_{\theta}=3,l_{p}=1)$. We
tested the policies learned by the two algorithms starting from nine different
cart initial positions (-2, -1.5, -1, -0.5, 0, 0.5, 1, 1.5, 2 [m]). In Section
VI-A, we observed that PILCO struggles to consistently converge to a solution
and the high variance in the initial conditions accentuates this issue.
Nevertheless, in order to make the comparison possible, we cherry-picked a
random seed for which PILCO converged to a solution in this particular
scenario. In Figure 9, we show the results of the experiment. MC-PILCO is able
to handle the initial high variance. It learned a bimodal policy that pushes
the cart in two opposite directions, depending on the cart’s initial position,
and stabilizes the system in all the experiments. On the contrary, PILCO’s
policy is not able to control the cart-pole for all the tested starting
conditions. Its strategy is always to push the cart in the same direction, and
it cannot stabilize the system when the cart starts far away from the zero
position. The state evolution under MC-PILCO’s policy is bimodal, while PILCO
cannot find this type of solutions because of the unimodal approximation
enforced by moment matching.
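The effect can be reproduced with a toy example: particles propagated through sign-dependent dynamics split into two persistent modes, which a single Gaussian (as enforced by moment matching) would average into one broad mode near zero. The dynamics below are purely illustrative, not the cart-pole model.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(x):
    # Illustrative dynamics: trajectories drift in opposite directions
    # depending on the sign of the state, as in the two swing-up directions.
    return np.sign(x) + 0.05 * rng.standard_normal(x.shape)

particles = rng.normal(0.0, np.sqrt(0.5), size=400)  # high-variance start
for _ in range(5):
    particles = step(particles)

# Both modes survive in the particle set; a moment-matched Gaussian would
# instead sit near zero with inflated variance, losing the bimodality.
left, right = particles[particles < 0], particles[particles > 0]
```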
Figure 9: (Left) MC-PILCO policy applied to the cart-pole system starting from
nine different sparse cart initial positions, namely -2, -1.5, -1, -0.5, 0, 0.5, 1, 1.5, and 2 [m] (see middle figures; the initial pole angle is the same in all cases). All nine trajectories are reported in the figures. The policy is able to complete the task in all
cases, pushing the cart in different directions depending on its initial
condition. The pole trajectories have a bimodal distribution. (Right) PILCO
policy applied starting from the same cart initial positions. This policy
struggles to adapt to different starting conditions, and it cannot swing up
the cart-pole when starting from the initial positions further away from zero.
In this example, we have seen that a multimodal state evolution could be the
correct solution, when starting from a unimodal state distribution with high
variance, due to dependencies on initial conditions. In other cases,
multimodality could be directly enforced by the presence of multiple possible
initial conditions that would be badly modeled with a single unimodal
distribution. MC-PILCO can handle all these situations thanks to its particle-
based method for long-term predictions. Similar results were obtained when
considering bimodal initial distributions.
### VI-D Trajectory tracking task on UR5 manipulator
The objective of this experiment is to test MC-PILCO in a more complex system
with higher DoF. We used MC-PILCO to learn a joint-space controller for a UR5
robotic arm (6 DoF) simulated in MuJoCo [41]. Let the state at time $t$ be
$\boldsymbol{x}_{t}=[\boldsymbol{q}_{t}^{T},\dot{\boldsymbol{q}}_{t}^{T}]^{T}$,
where $\boldsymbol{q}_{t},\dot{\boldsymbol{q}}_{t}\in\mathbb{R}^{6}$ are
joint angles and velocities, respectively. The objective for the policy
$\pi_{\boldsymbol{\theta}}$ is to control the torques $\boldsymbol{\tau}_{t}$
in order to follow a desired trajectory
$(\boldsymbol{q}^{r}_{t},\dot{\boldsymbol{q}}^{r}_{t})$ for $t=0,\dots,T$. Let
$\boldsymbol{e}_{t}=\boldsymbol{q}^{r}_{t}-\boldsymbol{q}_{t},\dot{\boldsymbol{e}}_{t}=\dot{\boldsymbol{q}}^{r}_{t}-\dot{\boldsymbol{q}}_{t}$
be position and velocity errors at time $t$, respectively. The policy is a
multi-output squashed-RBF-network with $n_{b}=400$ Gaussian basis functions
and $u_{max}=1$ [N$\cdot$m] for all the joints, that maps states and errors
into torques,
$\pi_{\boldsymbol{\theta}}:\boldsymbol{q}_{t},\dot{\boldsymbol{q}}_{t},\boldsymbol{e}_{t},\dot{\boldsymbol{e}}_{t}\mapsto\boldsymbol{\tau}_{t}$. The control scheme is represented in Figure 10.
Figure 10: Joint-space control scheme for UR5 robotic arm.
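A minimal sketch of such a multi-output squashed-RBF-network policy is given below. The tanh squashing and the isotropic length-scale are assumptions of this sketch; the exact parameterization is the one defined in (11).

```python
import numpy as np

def squashed_rbf_policy(z, centers, lengthscale, W, u_max):
    """Map the policy input z (states and errors stacked) to bounded
    torques via Gaussian basis functions and a smooth saturation to
    [-u_max, u_max] (tanh squashing is an assumption of this sketch)."""
    d2 = np.sum(((z[None, :] - centers) / lengthscale) ** 2, axis=1)
    phi = np.exp(-0.5 * d2)                  # (n_b,) basis activations
    return u_max * np.tanh(W @ phi / u_max)  # outputs never exceed u_max
```

The smooth saturation keeps the torques within actuator limits while remaining differentiable, which is required by the gradient-based policy optimization.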
In this experiment, we considered a control horizon of 4 seconds with a
sampling time of 0.02 seconds. The reference trajectory has been calculated to
make the end-effector draw a circle in the X-Y operational space. The initial
exploration, used to initialize the speed-integration dynamical model, is
provided by a poorly-tuned PD controller. We used SE+$\text{P}^{\text{(1)}}$
kernels in the GP dynamical model. The GP reduction thresholds were set to
$10^{-3}$. GP input was built using extended state
$\boldsymbol{x}^{*}_{t}=[\dot{\boldsymbol{q}}_{t},sin(\boldsymbol{q}_{t}),cos(\boldsymbol{q}_{t})]$.
$M=200$ is the number of particles used for gradient estimation. The cost
function considered is defined as,
$c(\boldsymbol{x}_{t})=1-\text{exp}\left(-\left(\frac{||\boldsymbol{q}^{r}_{t}-\boldsymbol{q}_{t}||}{0.5}\right)^{2}-\left(\frac{||\dot{\boldsymbol{q}}^{r}_{t}-\dot{\boldsymbol{q}}_{t}||}{1}\right)^{2}\right).$
We assumed full state observability with measurements perturbed by white noise
with standard deviation of $10^{-3}$. The initial state distribution is a
Gaussian centered on $(\boldsymbol{q}^{r}_{0},\dot{\boldsymbol{q}}^{r}_{0})$
with standard deviation of $10^{-3}$. Policy optimization parameters are the
same as those reported in Table I, with the exception of $n_{s}=400$ and $\sigma_{s}=0.05$, to enforce more restrictive exit conditions.
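The saturated-exponential tracking cost above can be transcribed directly; the length-scales 0.5 and 1 come from the displayed formula.

```python
import numpy as np

def tracking_cost(q, dq, q_ref, dq_ref, l_q=0.5, l_dq=1.0):
    """c(x_t) = 1 - exp(-(||q_ref - q|| / l_q)^2 - (||dq_ref - dq|| / l_dq)^2),
    which is close to 0 near the reference and saturates at 1 for large
    tracking errors."""
    e = np.linalg.norm(q_ref - q) / l_q
    de = np.linalg.norm(dq_ref - dq) / l_dq
    return 1.0 - np.exp(-e**2 - de**2)
```

The saturation bounds the cost of each particle, so a few badly tracking particles cannot dominate the Monte Carlo gradient estimate.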
In Figure 11, we report the trajectory followed by the end-effector at each
trial, together with the desired trajectory. MC-PILCO considerably improved
the high tracking error obtained with the PD controller after only 2 trials
(corresponding to 8 seconds of interaction with the system). The learned
control policy followed the reference trajectory for the end-effector with a
mean error of 0.65 [mm] (standard deviation of 0.23 [mm]), and a maximum error
of 1.08 [mm].
Figure 11: End-effector trajectories obtained in exploration and for each
trial of policy learning together with the desired circle. Let
$\boldsymbol{e}_{ee}$ be the error between the desired and the actual end-
effector trajectories. In the table below, we report, in millimeters, the
maximum and mean errors ($\pm$ 3$\times$standard deviation) at each trial.
| Exploration | Trial 1 | Trial 2
---|---|---|---
mean($\boldsymbol{e}_{ee}$) [mm] | 140.66$\pm$158.94 | 21.15$\pm$41.71 | 0.65$\pm$0.69
max($\boldsymbol{e}_{ee}$) [mm] | 196.70 | 40.79 | 1.08
## VII MC-PILCO4PMS Experiments
In this section, we provide the experimental results obtained by MC-PILCO4PMS.
First, we propose a proof of concept on the simulated cart-pole benchmark, to
better show the validity of the concepts introduced in Section IV. Later, we test MC-PILCO4PMS when applied to real systems. In particular, we
experimented on two benchmark systems555A video of the experiments is
available at https://youtu.be/--73hmZYaHA.: a Furuta pendulum, and a ball-and-
plate (Figure 12).
Figure 12: (Left) Furuta pendulum controlled in the upward equilibrium point
by the learned policy. (Right) Ball-and-plate system.
### VII-A MC-PILCO4PMS proof of concept
Figure 13: Comparison of 400 simulated particle rollouts (left) and the trajectories obtained by applying the policy 400 times to the simulated cart-pole system (right). Each trajectory is shown as a line. Results obtained without simulating online filtering
are on the top plots, while the ones obtained considering the low-pass filters
are on the bottom. The plots refer to the policy learned after 5 trials with
the system.
Here, we test the relevance of modeling the presence of online estimators
using the simulated cart-pole system, but adding assumptions that emulate a
real world experiment. We considered the same physical parameters and the same
initial conditions described in Section V, but assuming to measure only the
cart position and the pole angle. We modeled a possible measurement system
that we would have in the real world as additive i.i.d. Gaussian noise with
standard deviation $3\cdot 10^{-3}$. In order to obtain reliable estimates of
the velocities, samples were collected at 30 [Hz]. The online estimates of the
velocities were computed by means of causal numerical differentiation followed
by a first order low-pass filter, with cutoff frequency 7.5 [Hz]. The
velocities used to train the GPs were derived with the central difference
formula. To verify the effectiveness of MC-PILCO4PMS (described in Section IV)
two policy functions were trained. The first policy is obtained with MC-PILCO
by neglecting the presence of online filtering during policy optimization and
assuming direct access to the state predicted by the model. On the contrary,
the second policy is trained with MC-PILCO4PMS, which models the presence of
the online estimators. Exploration data were collected with a random policy.
To avoid dependencies on initial conditions, such as policy initialization and
exploration data, we fixed the same random seed in both experiments. In Figure
13, we report the results of a Monte Carlo study with 400 runs. On the left,
the final policy is applied to the learned models (ROLLOUT) and on the right
to the cart-pole system (TEST). Even though the two policies perform similarly when applied to the models, which is all that can be tested offline, the results obtained by testing the policies on the cart-pole system are significantly different. The policy optimized while modeling the presence of online filtering
solves the task in all 400 attempts. In contrast, in several attempts, the
first policy does not solve the task, due to delays and discrepancies
introduced by the online filter and not considered during policy optimization.
We believe that these considerations on how to manipulate the data during
model learning and policy optimization might be beneficial for algorithms other than MC-PILCO.
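The mismatch discussed above originates in the two velocity estimators. A sketch of both is given below; the discretization of the first-order low-pass filter via exponential smoothing is an assumption of this sketch.

```python
import numpy as np

def causal_lowpass_velocity(q, Ts, f_cut):
    """Online estimate: causal finite differences followed by a first-order
    low-pass filter (exponential-smoothing discretization, an assumption)."""
    gamma = 1.0 - np.exp(-2.0 * np.pi * f_cut * Ts)
    v_raw = np.diff(q, prepend=q[0]) / Ts
    v = np.zeros_like(v_raw)
    for t in range(1, len(v_raw)):
        v[t] = v[t - 1] + gamma * (v_raw[t] - v[t - 1])
    return v

def central_diff_velocity(q, Ts):
    """Offline (acausal) estimate used to train the GPs."""
    return np.gradient(q, Ts)
```

The causal filter introduces a lag that the acausal central differences do not have; MC-PILCO4PMS exposes the policy to exactly this lag during optimization.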
### VII-B Furuta pendulum
The Furuta pendulum (FP) [42] is a popular benchmark system used in nonlinear
control and RL. The system is composed of two revolute joints and three links.
The first link, called the base, is fixed and perpendicular to the ground. The
second link, called arm, rotates parallel to the ground, while the rotation
axis of the last link, the pendulum, is parallel to the principal axis of the
second link, see Figure 12. The FP is an under-actuated system as only the
first joint is actuated. In particular, in the FP considered the horizontal
joint is actuated by a DC servomotor, and the two angles are measured by
optical encoders with 4096 [ppr]. The control variable is the motor voltage.
Let the state at time step $t$ be
$\boldsymbol{x}_{t}=[\theta^{h}_{t},\dot{\theta}^{h}_{t},\theta^{v}_{t},\dot{\theta}^{v}_{t}]^{T}$,
where $\theta^{h}_{t}$ is the angle of the horizontal joint and
$\theta^{v}_{t}$ the angle of the vertical joint attached to the pendulum. The
objective is to learn a controller able to swing-up the pendulum and stabilize
it in the upwards equilibrium ($\theta_{t}^{v}=\pm\pi$ [rad]) with
$\theta_{t}^{h}=0$ [rad]. The trial length is 3 seconds with a sampling
frequency of 30 [Hz]. The cost function is defined as
$c(\boldsymbol{x}_{t})=1-\text{exp}\left(-\left(\frac{\theta_{t}^{h}}{2}\right)^{2}-\left(\frac{|\theta_{t}^{v}|-\pi}{2}\right)^{2}\right)+c_{b}(\boldsymbol{x}_{t}),$
(21)
with
$c_{b}(\boldsymbol{x}_{t})=\frac{1}{1+\text{exp}\left(-10\left(-\frac{3}{4}\pi-\theta^{h}_{t}\right)\right)}+\frac{1}{1+\text{exp}\left(-10\left(\theta^{h}_{t}-\frac{3}{4}\pi\right)\right)}.$
The first part of the function in (21) aims at driving the two angles towards
$\theta_{t}^{h}=0$ and $\theta_{t}^{v}=\pm\pi$, while
$c_{b}(\boldsymbol{x}_{t})$ penalizes solutions where
$\theta_{t}^{h}\leq-\frac{3}{4}\pi$ or $\theta_{t}^{h}\geq\frac{3}{4}\pi$. We
set those boundaries to avoid the risk of damaging the system if the
horizontal joint rotates too much. Offline estimates of velocities for the GP
model have been computed by means of central differences. For the online
estimation, we used causal numerical differentiation:
$\dot{\boldsymbol{q}}_{t}=(\boldsymbol{q}_{t}-\boldsymbol{q}_{t-1})/T_{s}$,
where $T_{s}$ is the sampling time. Instead of $\boldsymbol{x}_{t}$, we
considered the extended state
$\boldsymbol{x}^{*}_{t}=[\dot{\theta}^{h}_{t},\dot{\theta}^{v}_{t},sin(\theta^{h}_{t}),cos(\theta^{h}_{t}),sin(\theta^{v}_{t}),cos(\theta^{v}_{t})]^{T}$
in GP input. The policy is a squashed-RBF-network with $n_{b}=200$ basis
functions that receives as input
$[(\theta^{h}_{t}-\theta^{h}_{t-1})/{T_{s}},(\theta^{v}_{t}-\theta^{v}_{t-1})/T_{s},sin(\theta^{h}_{t}),cos(\theta^{h}_{t}),sin(\theta^{v}_{t}),cos(\theta^{v}_{t})]^{T}$.
The exploration trajectory has been obtained using as input a sum of ten sine
waves of random frequencies and same amplitudes. The initial state
distribution is assumed to be $\mathcal{N}([0,0,0,0]^{T},\text{diag}([5\cdot 10^{-3},5\cdot 10^{-3},5\cdot 10^{-3},5\cdot 10^{-3}]))$. The GP reduction
thresholds were set to $10^{-3}$. We solved the task using the three different
choices of kernel functions described in Section III-A2: squared exponential
(SE), squared exponential + polynomial of degree $d$ (SE+$\text{P}^{(d)}$) and
semi-parametrical (SP)666SP basis functions can be obtained by isolating, in
each ODE defining FP laws of motion, all the linearly related state-dependent
components. In particular, we have
$\phi_{\dot{\theta}^{h}}(\boldsymbol{x},u)=[(\dot{\theta}^{v})^{2}sin(\theta^{v}),\dot{\theta}^{h}\dot{\theta}^{v}sin(2\theta^{v}),\dot{\theta}^{h},u]$
for the arm velocity GP, and
$\phi_{\dot{\theta}^{v}}(\boldsymbol{x},u)=[(\dot{\theta}^{h})^{2}sin(2\theta^{v}),\dot{\theta}^{v},sin(\theta^{v}),u\;cos(\theta^{v})]$
for the pendulum velocity GP. In Figure 14, we show the resulting
trajectories for each trial.
Figure 14: (Left) Pendulum angle’s trajectories for each trial. (Right)
Horizontal joint angle’s trajectories for each trial. For all the kernels, the
angles are plotted up to the trial that solved the task.
MC-PILCO4PMS managed to learn how to swing up the Furuta pendulum in all
cases. It succeeded at trial 6 with kernel SE, at trial 4 with kernel
SE+$\text{P}^{(2)}$, and at trial 3 with SP kernel. These experimental results
confirm the higher data efficiency of more structured kernels and the
advantage of allowing any kernel function offered by our MBRL method.
Moreover, we can observe the effectiveness of the cost function (21) in
keeping $\theta_{t}^{h}$ always inside the desired boundaries in all the
trials and for any kernel tested. Considering penalties similar to
$c_{b}(\boldsymbol{x}_{t})$ inside the cost function could be enough to handle
soft constraints also in other scenarios.
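The penalty $c_b$ is just a pair of steep sigmoids and transcribes directly into code:

```python
import numpy as np

def boundary_penalty(theta_h, limit=0.75 * np.pi, steepness=10.0):
    """c_b from (21): approximately 0 inside (-limit, limit) and rising
    towards 1 beyond either boundary, acting as a soft constraint."""
    def _sig(z):
        return 1.0 / (1.0 + np.exp(-steepness * z))
    return _sig(-limit - theta_h) + _sig(theta_h - limit)
```

Because the penalty is smooth, it keeps the cost differentiable everywhere, unlike a hard indicator on the constraint violation.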
### VII-C Ball-and-plate
The ball-and-plate system is composed of a square plate that can be tilted in
two orthogonal directions by means of two motors. On top of it, there is a
camera to track the ball and measure its position on the plate. Let
$(b^{x}_{t},b^{y}_{t})$ be the position of the center of the ball along X-axis
and Y-axis, while $\theta^{(1)}_{t}$ and $\theta^{(2)}_{t}$ are the angles of
the two motors tilting the plate, at time $t$. So, the state of the system is
defined as
$\boldsymbol{x}_{t}=[b^{x}_{t},b^{y}_{t},\dot{b}^{x}_{t},\dot{b}^{y}_{t},\theta^{(1)}_{t},\theta^{(2)}_{t},\dot{\theta}^{(1)}_{t},\dot{\theta}^{(2)}_{t}]^{T}$.
The drivers of the motors allow only position control, and do not provide
feedback about the motors angles. To keep track of the motor angles, we
defined the control actions as the difference between two consecutive
reference values sent to the motor controllers, and we limited the maximum
input to a sufficiently small value, such that the motor controllers are able
to reach the target angle within the sampling time. Then, to a first
approximation, the reference angles and the motor angles coincide, and we have
$u_{t}^{(1)}=\theta^{(1)}_{t+1}-\theta^{(1)}_{t}$ and
$u_{t}^{(2)}=\theta^{(2)}_{t+1}-\theta^{(2)}_{t}$. The objective of the
experiment is to learn how to control the motor angles in order to stabilize
the ball around the center of the plate. Notice that the control task, with
the given definition of inputs, is particularly difficult because the policy
must learn to act in advance, and not only react to changes in the ball
position. The cost function is defined as
$c(\boldsymbol{x}_{t})=1-\text{exp}\left(-g_{t}(\boldsymbol{x}_{t})\right),\qquad\text{with}$
$g_{t}(\boldsymbol{x}_{t})=\left(\frac{b^{x}_{t}}{0.15}\right)^{2}+\left(\frac{b^{y}_{t}}{0.15}\right)^{2}+\left(\theta_{t}^{(1)}\right)^{2}+\left(\theta_{t}^{(2)}\right)^{2}.$
The trial length is 3 seconds, with a sampling frequency of 30 [Hz].
Measurements provided by the camera are very noisy, and cannot be used
directly to estimate velocities from positions. We used a Kalman smoother for
the offline filtering of ball positions ($b^{x}_{t},b^{y}_{t}$) and associated
velocities ($\dot{b}^{x}_{t},\dot{b}^{y}_{t}$). In the control loop, instead,
we used a Kalman filter [43] to estimate online the ball state from noisy
measures of positions. Concerning the model, we need to learn only two GPs
predicting the evolution of the ball velocity because we directly control
motor angles, hence, their evolution is assumed deterministic. GP inputs,
$\tilde{\boldsymbol{x}}_{t}=[\boldsymbol{x}^{*}_{t},u_{t}]$, include an
extended version of the state,
$\boldsymbol{x}^{*}_{t}=[b^{x}_{t},b^{y}_{t},\dot{b}^{x}_{t},\dot{b}^{y}_{t},sin(\theta^{(1)}_{t}),cos(\theta^{(1)}_{t}),sin(\theta^{(2)}_{t}),cos(\theta^{(2)}_{t}),(\theta^{(1)}_{t}-\theta^{(1)}_{t-1})/T_{s},(\theta^{(2)}_{t}-\theta^{(2)}_{t-1})/T_{s}]^{T}$
where angles have been replaced by their sines and cosines, and motor angular
velocities have been estimated with causal numerical differentiation ($T_{s}$
is the sampling time). The SE+$\text{P}^{(1)}$ kernel (10) is used, where the
linear kernel acts only on a subset of the model inputs,
$\tilde{\boldsymbol{x}}^{lin}_{t}=[sin(\theta^{(1)}_{t}),sin(\theta^{(2)}_{t}),cos(\theta^{(1)}_{t}),cos(\theta^{(2)}_{t}),u_{t}]$.
We lowered the GP reduction threshold to $10^{-4}$ w.r.t. the FP experiment because of the small distances the ball can cover in a single time step. The policy
is a multi-output RBF network (11), with $n_{b}=400$ basis functions, that
receives as inputs the estimates of
$(b^{x}_{t},b^{y}_{t},\dot{b}^{x}_{t},\dot{b}^{y}_{t},\theta^{(1)}_{t},\theta^{(1)}_{t-1},\theta^{(2)}_{t},\theta^{(2)}_{t-1})$
computed with the Kalman filter; maximum angle displacement is $u_{max}=4$
[deg] for both motors. The policy optimization parameters used were the same
described in Table I, with the difference that we set $\alpha_{lr}=0.006$ as
initial learning rate. The reduction of the learning rate is related to the
use of small length-scales in the cost function, which are necessary to cope with the small range of movement of the ball. For the same reason, we also set
$\alpha_{lr_{min}}=0.0015$ and $\sigma_{s}=0.05$. Initial exploration is given
by two different trials, in which the control signals are two triangular waves
perturbed by white noise. Especially during exploration and the initial trials, the
ball might touch the borders of the plate. In those cases, we kept data up to
the collision instant. A peculiarity of this experiment, in comparison with the previous ones, is the wide range of initial conditions. In fact, the ball
could be positioned anywhere on the plate’s surface, and the policy must
control it to the center. The initial distribution of $b^{x}_{0}$ and
$b^{y}_{0}$ is a uniform $\mathcal{U}(-0.15,0.15)$, which covers almost the
entire surface (the plate is a square with sides of about 0.20 [m]). For the
other state components, $\theta^{(1)}_{t}$ and $\theta^{(2)}_{t}$, we assumed
tighter initial distributions $\mathcal{U}(-10^{-6},10^{-6})$. MC-PILCO4PMS
managed to learn a policy able to control the ball around the center starting
from any initial position after the third trial, corresponding to 11.33 seconds of interaction with the system.
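The policy structure described above can be sketched with a minimal NumPy example. The squared-exponential basis functions and the tanh squashing below are illustrative assumptions, not necessarily the exact parameterization of the paper's Eq. (11); the dimensions match the ball-and-plate setup (8 state inputs, $n_b=400$ basis functions, 2 motor commands limited to 4 degrees).

```python
import numpy as np

def rbf_policy(x, centers, lengthscales, weights, u_max):
    """Multi-output RBF-network policy with saturated outputs.

    x            : (d,) state estimate fed to the policy
    centers      : (n_b, d) basis-function centers
    lengthscales : (d,) per-dimension length-scales
    weights      : (n_b, n_u) linear output weights
    u_max        : scalar saturation limit for each control output
    """
    # Squared-exponential basis functions evaluated at x
    diff = (x - centers) / lengthscales           # (n_b, d)
    phi = np.exp(-0.5 * np.sum(diff**2, axis=1))  # (n_b,)
    # Linear combination, then squash into [-u_max, u_max]
    u_raw = phi @ weights                         # (n_u,)
    return u_max * np.tanh(u_raw / u_max)

# Toy usage with the ball-and-plate dimensions
rng = np.random.default_rng(0)
u = rbf_policy(rng.normal(size=8),
               rng.normal(size=(400, 8)),
               np.ones(8),
               rng.normal(size=(400, 2)) * 0.01,
               u_max=4.0)
print(u.shape)
```

The tanh squashing guarantees that both motor commands stay within the $u_{max}=4$ [deg] limit regardless of the learned weights.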
Figure 15: Ten different ball trajectories obtained under the final policy
learned by MC-PILCO4PMS. Steady-state positions are marked with black crosses.
The dashed circle has the same diameter as the ball.
We tested the learned policy starting from ten different points, see Figure
15. The mean steady-state error, i.e., the average distance of the final ball
position from the center observed in the ten trials, was 0.0099 [m], while the
maximum measured error was 0.0149 [m], which is lower than the ball radius of
0.016 [m].
## VIII Conclusions
In this paper, we have presented the MBRL algorithm MC-PILCO. The proposed
framework uses GPs to derive a probabilistic model of the system dynamics, and
updates the policy parameters through a gradient-based optimization that
exploits the _reparameterization trick_ and approximates the expected
cumulative cost relying on a Monte Carlo approach. Compared to similar algorithms proposed in the past, our Monte Carlo approach succeeds by focusing on two aspects: (i) a proper selection of the cost function, and (ii) the introduction of dropout during policy optimization. Extensive experiments on
the simulated cart-pole benchmark confirm the effectiveness of the proposed
solution, and show the relevance of the two aforementioned aspects when
optimizing the policy by combining the _reparameterization trick_ with particle-based methods. The particle-based approximation offers two further advantages over the moment-matching approach of PILCO, namely, the possibility of using structured kernels, such as polynomial and semi-parametric kernels, and the ability to handle multimodal distributions. In particular,
experimental results show that the use of structured kernels can increase data
efficiency, reducing the interaction-time required to learn the task. MC-PILCO
was also used to learn from scratch a joint-space controller for a (simulated)
robotic manipulator, proving its ability to handle a relatively high-DoF task.
Moreover, we compared MC-PILCO with PILCO and Black-DROPS (two state-of-the-
art GP-based MBRL algorithms) on the cart-pole benchmark. MC-PILCO
outperformed both algorithms in this scenario, exhibiting better data
efficiency and asymptotic performance.
Furthermore, we analyzed common problems that arise when trying to apply MBRL
to real systems. In particular, we focused on systems with partially
measurable states (e.g., mechanical systems) which are particularly relevant
in real applications. In this context, we proposed a modified version of our
algorithm, called MC-PILCO4PMS, through which we verified the importance of
taking into account the state estimators used in the real system during policy
optimization. Results have been validated on two different real setups,
specifically, a Furuta pendulum and a ball-and-plate system.
In future works, we are interested in testing the proposed algorithms in more
challenging scenarios, e.g., manipulation tasks in real world environments.
The issues regarding the impossibility of directly measuring the velocity states, tackled in MC-PILCO4PMS, could be further analyzed by considering the recently introduced "Velocity-free" framework [44]. Finally, the application
to manipulation tasks will also require the introduction of safe exploration
techniques and guarantees from the state-of-the-art in safe RL [45].
## References
* [1] Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT press, 2018.
* [2] Christopher G Atkeson and Juan Carlos Santamaria. A comparison of direct and model-based reinforcement learning. In Proceedings of international conference on robotics and automation, volume 4, pages 3557–3564. IEEE, 1997.
* [3] Christopher KI Williams and Carl Edward Rasmussen. Gaussian processes for machine learning. MIT press Cambridge, MA, 2006.
* [4] Malte Kuss and Carl E Rasmussen. Gaussian processes in reinforcement learning. In Advances in neural information processing systems, pages 751–758, 2004.
* [5] Felix Berkenkamp, Matteo Turchetta, Angela Schoellig, and Andreas Krause. Safe model-based reinforcement learning with stability guarantees. In Advances in neural information processing systems, pages 908–918, 2017.
* [6] M. Deisenroth and Carl E. Rasmussen. Pilco: A model-based and data-efficient approach to policy search. In Proceedings of the 28th International Conference on machine learning (ICML), pages 465–472, 2011.
* [7] Marc Peter Deisenroth, Carl Edward Rasmussen, and Dieter Fox. Learning to control a low-cost manipulator using data-efficient reinforcement learning. Robotics: Science and Systems VII, pages 57–64, 2011.
* [8] Marc Peter Deisenroth, Roberto Calandra, André Seyfarth, and Jan Peters. Toward fast policy search for learning legged locomotion. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 1787–1792. IEEE, 2012.
* [9] A. D. Libera and R. Carli. A data-efficient geometrically inspired polynomial kernel for robot inverse dynamic. IEEE Robotics and Automation Letters, 5(1):24–31, 2020.
* [10] Diego Romeres, Devesh K Jha, Alberto DallaLibera, Bill Yerazunis, and Daniel Nikovski. Semiparametrical gaussian processes learning of forward dynamical models for navigating in a circular maze. In 2019 International Conference on Robotics and Automation (ICRA), pages 3195–3202. IEEE, 2019.
* [11] Diego Romeres, Mattia Zorzi, Raffaello Camoriano, and Alessandro Chiuso. Online semi-parametric learning for inverse dynamics modeling. In 2016 IEEE 55th Conference on Decision and Control (CDC), pages 2945–2950. IEEE, 2016.
* [12] D. Nguyen-Tuong and J. Peters. Using model knowledge for learning inverse dynamics. In 2010 IEEE International Conference on Robotics and Automation, pages 2677–2682, 2010.
* [13] Yarin Gal, Rowan McAllister, and Carl Edward Rasmussen. Improving pilco with bayesian neural network dynamics models. In Data-Efficient Machine Learning workshop, ICML, volume 4, page 34, 2016.
* [14] David JC MacKay. Bayesian methods for adaptive models. PhD thesis, California Institute of Technology, 1992.
* [15] Kurtland Chua, Roberto Calandra, Rowan McAllister, and Sergey Levine. Deep reinforcement learning in a handful of trials using probabilistic dynamics models. In Advances in Neural Information Processing Systems, pages 4754–4765, 2018.
* [16] M. Cutler and J. P How. Efficient reinforcement learning for robots using informative simulated priors. In 2015 IEEE International Conference on Robotics and Automation (ICRA), pages 2605–2612. IEEE, 2015.
* [17] K. Chatzilygeroudis, R. Rama, R. Kaushik, D. Goepp, V. Vassiliades, and J. Mouret. Black-box data-efficient policy search for robotics. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 51–58. IEEE, 2017.
* [18] Andrew James McHutchon et al. Nonlinear modelling and control using Gaussian processes. PhD thesis, Citeseer, 2015.
* [19] Andrew Y Ng and Michael I Jordan. Pegasus: A policy search method for large mdps and pomdps. arXiv preprint arXiv:1301.3878, 2013.
* [20] P. Parmas, Carl E. Rasmussen, J. Peters, and K. Doya. Pipps: Flexible model-based policy search robust to the curse of chaos. In International Conference on Machine Learning, pages 4065–4074, 2018.
* [21] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
* [22] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In International conference on machine learning, pages 1278–1286. PMLR, 2014.
* [23] David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning Representations by Back-Propagating Errors, page 696–699. MIT Press, Cambridge, MA, USA, 1988.
* [24] Léon Bottou. Large-scale machine learning with stochastic gradient descent. In in COMPSTAT, 2010.
* [25] Yann LeCun, Y. Bengio, and Geoffrey Hinton. Deep learning. Nature, 521:436–44, 05 2015.
* [26] Carlo Baldassi, Fabrizio Pittorino, and Riccardo Zecchina. Shaping the learning landscape in neural networks around wide flat minima. Proceedings of the National Academy of Sciences, 117(1):161–170, 2020.
* [27] C. Baldassi, E. M. Malatesta, and R. Zecchina. Properties of the geometry of solutions and capacity of multilayer neural networks with rectified linear unit activations. Phys. Rev. Lett., 123:170602, Oct 2019.
* [28] Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. In Geoffrey Gordon, David Dunson, and Miroslav Dudík, editors, Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, volume 15 of Proceedings of Machine Learning Research, pages 315–323, Fort Lauderdale, FL, USA, 11–13 Apr 2011. PMLR.
* [29] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(56):1929–1958, 2014.
* [30] Alberto Dalla Libera, Ruggero Carli, and Gianluigi Pillonetto. A novel multiplicative polynomial kernel for volterra series identification. IFAC-PapersOnLine, 53(2):316–321, 2020. 21st IFAC World Congress.
* [31] M. P. Deisenroth, D. Fox, and C. E. Rasmussen. Gaussian processes for data-efficient learning in robotics and control. IEEE transactions on pattern analysis and machine intelligence, 37(2):408–423, 2013.
* [32] J. Quinonero Candela and C. E. Rasmussen. A unifying view of sparse approximate gaussian process regression. Journal of Machine Learning Research, 6:1935–1959, December 2005.
* [33] Lehel Csató and Manfred Opper. Sparse on-line gaussian processes. Neural Comput., 14(3):641–668, March 2002.
* [34] Russel E Caflisch et al. Monte carlo and quasi-monte carlo methods. Acta numerica, 1998:1–49, 1998.
* [35] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
* [36] Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ICML’16, page 1050–1059. JMLR.org, 2016.
* [37] Garry A Einicke. Optimal and robust noncausal filter formulations. IEEE Transactions on Signal Processing, 54(3):1069–1077, 2006.
* [38] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. In Proceedings of Neural Information Processing Systems, 2017.
* [39] Markus Neuhäuser. Wilcoxon–Mann–Whitney Test, pages 1656–1658. Springer Berlin Heidelberg, Berlin, Heidelberg, 2011.
* [40] George A. Barnard. A new test for 2 × 2 tables. Nature, 156:177, 1945.
* [41] Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 5026–5033. IEEE, 2012.
* [42] Benjamin Seth Cazzolato and Zebb Prime. On the dynamics of the furuta pendulum. Journal of Control Science and Engineering, 2011, 2011.
* [43] R. E. Kalman. A New Approach to Linear Filtering and Prediction Problems. Journal of Basic Engineering, 82(1):35–45, 03 1960.
* [44] A. Dalla Libera, D. Romeres, D. K. Jha, B. Yerazunis, and D. Nikovski. Model-based reinforcement learning for physical systems without velocity and acceleration measurements. IEEE Robotics and Automation Letters, 5(2):3548–3555, 2020.
* [45] Javier Garcıa and Fernando Fernández. A comprehensive survey on safe reinforcement learning. Journal of Machine Learning Research, 16(1):1437–1480, 2015.
©2022 IEEE. Personal use of this material is permitted. Permission from IEEE
must be obtained for all other uses, in any current or future media, including
reprinting/republishing this material for advertising or promotional purposes,
creating new collective works, for resale or redistribution to servers or
lists, or reuse of any copyrighted component of this work in other works.
# Nuclear binding energy predictions using neural networks: Application of the
multilayer perceptron
Esra Yüksel<EMAIL_ADDRESS>Department of Physics, Faculty of Science
and Letters, Yildiz Technical University, Davutpasa Campus, 34220, Esenler,
Istanbul, Turkey Derya Soydaner<EMAIL_ADDRESS>Department of
Statistics, Mimar Sinan Fine Arts University, Bomonti 34380, Istanbul, Turkey
Hüseyin Bahtiyar<EMAIL_ADDRESS>Department of Physics, Mimar
Sinan Fine Arts University, Bomonti 34380, Istanbul, Turkey
###### Abstract
In recent years, artificial neural networks and their applications to large data sets have become a crucial part of scientific research. In this work, we
implement the Multilayer Perceptron (MLP), which is a class of feedforward
artificial neural network (ANN), to predict ground-state binding energies of
atomic nuclei. Two different MLP architectures with three and four hidden
layers are used to study their effects on the predictions. To train the MLP
architectures, two different sets of inputs are used along with the latest atomic mass table, and the changes in the binding energy predictions are analyzed in terms of the input channel. It is seen that, by using appropriate MLP architectures and including more physical information in the input channels, the MLP can make fast and reliable predictions for the binding energies of atomic nuclei, comparable to those of the microscopic energy density functionals.
## I INTRODUCTION
One of the major research areas in nuclear physics is the nuclear mass
(binding energy) predictions, especially for nuclei far from the stability
line with extreme proton-neutron ratio. As the most fundamental property of
nuclei, accurate nuclear mass measurements and theoretical predictions have
vital importance not only for nuclear physics [1] but also for nuclear
astrophysics [2, 3, 4]. Alongside other nuclear properties (e.g., charge radii, separation energies, decay properties), nuclear masses provide information about the nucleon-nucleon interaction and the shell and pairing properties of nuclei, and their precise determination is also crucial for our understanding of the formation of chemical elements heavier than iron in the universe [5].
In the last decades, nuclear mass measurements have gained acceleration with
the developments in experimental facilities. According to the latest atomic
mass table AME2016 [6], the ground-state masses of 3435 nuclei have been
measured. However, measurement of the ground-state properties for nuclei close
to the drip lines still stands as a challenge. Also, there exist large
deviations in theoretical model calculations for nuclei close to the drip
lines. Therefore, further studies are needed to make reliable predictions in
these regions. Up to now, microscopic-macroscopic (mic-mac) [7, 8, 9] and
microscopic models [10, 11, 12, 13, 14, 15] have been employed for the mapping
of the nuclear landscape. Although the microscopic models are more complete in
terms of the physics behind them, much better results are obtained using the
mic-mac models in nuclear mass predictions since their constants are
determined using the experimental ground-state masses of nuclei. While the
root-mean-square (rms) deviation of nuclear masses is generally high (several
MeV) using the microscopic models, the FRDM2012 [9] predicts an rms deviation of 0.57 MeV with respect to the AME2003 atomic mass table [16], and the WS3 model gives an even lower rms deviation (0.336 MeV) for nuclear masses [8].
Considering the microscopic models, the first complete nuclear mass table was
based on the well-known Skyrme Hartree–Fock (HF) method in the non-
relativistic framework [17], and the root-mean-square error was obtained as
0.738 MeV using the 1995 Audi–Wapstra compilation [18]. With further
improvements, the HFB-31 mass model also gave a model error of 0.561 MeV for
the measured mass of 2353 nuclei [19]. Using the Gogny HFB method, the rms
deviation was obtained as 0.798 MeV with respect to the experimental
predictions of the 2149 nuclei [11]. A systematic study was also conducted on
6969 nuclei to predict nuclear properties using the relativistic mean-field
model, and rms deviation for nuclear masses was obtained as 2.1 MeV [20].
According to the latest microscopic mass model based on the relativistic
continuum Hartree-Bogoliubov (RCHB) theory [21], the root-mean-square deviation of the binding energies with respect to the experimental data is of the order of several MeV. Although considerable progress has been
achieved with the mic-mac and microscopic models, the rms deviations are still
high and one needs more accurate results, especially for astrophysical
applications. Therefore, different approximations and models are required to
understand the discrepancies in the results as well as to make more precise
predictions.
In recent years, there has been an increasing amount of interest in artificial
neural networks [22, 23, 24, 25, 26, 27, 28], which is known as a
nonparametric estimator in Machine Learning (ML). Applications of neural
networks cover many areas in science as well as the different branches of
physics. Considering the variety and richness of the available experimental
data, nuclear physics is also a good candidate to study using neural networks.
Decades ago, several studies were performed to predict nuclear properties using various techniques [29, 30, 31]. Neural networks were used to predict nuclear mass excesses and neutron separation energies [30], and it was shown that neural networks can serve as a new tool to predict the properties of atomic nuclei.
Later on, nuclear mass defect predictions were made using the neural networks,
showing that the neural networks can be considered as powerful tools to
explore nuclear properties alongside the theoretical models [31]. The ground-
state energies [23] and charge radii [22] of nuclei were also investigated
using artificial neural networks, and the usefulness of the method in the
predictions was shown. Recently, various machine learning algorithms were used
with the latest AME2016 data set to estimate the binding energies of atomic
nuclei [32]. Besides, it was shown that deep neural networks can predict the ground-state and excited energies as accurately as the nuclear energy density functionals with less computational cost [33]. In recent years, neural
network approaches were also used to train the mass residues of the
theoretical models to improve the predictive power of the models and achieved
considerable success [25, 26, 27, 28]. To the best of our knowledge, there is no work related to the application of the multilayer perceptron (MLP), a class of feedforward artificial neural network (ANN), to nuclear physics data.
Therefore, it would be interesting to investigate the success of this model in
the predictions of nuclear properties.
In this work, we implement the multilayer perceptron to predict the total
binding energies (BE) of atomic nuclei. In our work, we first use the
experimental data [6] along with the proton (Z) and mass (A) numbers of the
selected nuclei as inputs. Then, we study the effects of the increasing number
of the hidden layers and inputs in the predictive power of the neural network.
Finally, we compare our results with other microscopic and microscopic-macroscopic models to evaluate the success of the MLP in binding energy predictions.
## II Multilayer Perceptron
In this study, we aim to create a model that makes ground-state binding energy
predictions for atomic nuclei by using input data. Inputs are the nuclear
properties that can affect the binding energies of nuclei. Our model takes
these properties as input and predicts the binding energies as the output.
Such problems, where the output is a numerical value, are known as regression problems. Regression is a supervised learning problem where there
is an input, X, an output, Y, and the task is to learn the mapping from the
input to the output [34]. To this end, in machine learning, we assume a model
as shown below:
$y=f(x|\theta)$ (1)
where $f(.)$ is the model and $\theta$ are its parameters. In our case, $y$
corresponds to the prediction for binding energy, and $f(.)$ is the regression
function. In the context of machine learning, the parameters, $\theta$, are
optimized by minimizing a loss function. Thus, the predictions are obtained as
close as possible to the correct values given in the input data.
We choose multilayer perceptron (MLP) as the model $f(.)$. MLP is a neural
network architecture that is mostly preferred to solve such regression
problems. In the training stage of an MLP, the backpropagation algorithm [35]
is used for computing the gradient. A second algorithm then uses this gradient to perform the learning [24]; this algorithm, responsible for the optimization, is usually called the optimizer. In recent years, adaptive gradient methods have become the preferred optimizers. In this study, we use the Adam algorithm [36] to train the MLP.
Basically, an MLP is a feedforward neural network with one or more than one
hidden layer between input and output layers. In the case of one hidden layer,
first, input $x$ is fed to the input layer. By using an activation function,
the activation propagates in the forward direction, and the values of the
hidden units z are computed. Each hidden unit usually applies a nonlinear
activation function to its weighted sum. After performing the forward pass, an
error is computed by using a loss function. By using this error, the weights
are updated in the backward pass [35].
However, an MLP with one hidden layer has limited capacity, and an MLP with multiple hidden layers can learn more complicated functions of the input.
That is the idea behind deep neural networks where each hidden layer combines
the values in its preceding layer and learns more complicated functions [34].
It is possible to have multiple hidden layers each with its own weights and
applying the activation function to its weighted sum. It should be noted that
different activation functions can be used in multilayer perceptrons, e.g.,
ReLU, tanh, sigmoid, etc. In this work, we implement both tanh and ReLU, which
are two commonly used activation functions in nuclear mass predictions [37,
32, 38, 26], and find that the ReLU function gives better predictions on the
test data. Therefore, we choose the ReLU function as the activation function
of the hidden layers in this work, which is also mostly preferred for the
hidden layers of deep neural networks [39]:
$\phi(x)=\max(0,x)$ (2)
An MLP with three hidden layers is demonstrated in Fig. 1 where
$\textbf{w}_{1h}$, $\textbf{w}_{2l}$ and $\textbf{w}_{3k}$ are the weights
belonging to the first, second and third hidden layers, respectively. The
units on the first, second and third hidden layers are represented as
$z_{1h}$, $z_{2l}$ and $z_{3k}$, and v are the output layer weights. Such an
architecture requires four stages to compute the output. Firstly, input x
is fed to the input layer, the weighted sum is computed, and the activation
propagates in the forward direction. When the ReLU function is chosen as the
activation function, $z_{1h}$ is computed as shown below:
$\displaystyle z_{1h}$ $\displaystyle=$ $\displaystyle
ReLU(\textbf{w}_{1h}^{T}\textbf{x})$ (3) $\displaystyle=$ $\displaystyle
ReLU\Bigg{(}\sum_{j=1}^{d}w_{1hj}x_{j}+w_{1h0}\Bigg{)},h=1,...,H_{1}$
Figure 1: The structure of a multilayer perceptron with three hidden layers.
The computations for the second hidden layer are practiced similarly. At this
stage, the second hidden layer activations are computed by taking the first
hidden layer activations as their inputs. Then, the third hidden layer
activations are computed by taking the second hidden layer activations as
their inputs. In a regression problem, there is no nonlinearity in the output layer; therefore, the output $y$ is computed by taking $\textbf{z}_{3}$ as input [34]. Thus, the forward propagation is completed:
$\displaystyle z_{2l}$ $\displaystyle=$ $\displaystyle
ReLU(\textbf{w}_{2l}^{T}\textbf{z}_{1})$ (4) $\displaystyle=$ $\displaystyle
ReLU\Bigg{(}\sum_{h=0}^{H_{1}}w_{2lh}z_{1h}+w_{2l0}\Bigg{)},l=1,...,H_{2}$
$\displaystyle z_{3k}$ $\displaystyle=$ $\displaystyle
ReLU(\textbf{w}_{3k}^{T}\textbf{z}_{2})$ (5) $\displaystyle=$ $\displaystyle
ReLU\Bigg{(}\sum_{l=0}^{H_{2}}w_{3kl}z_{2l}+w_{3k0}\Bigg{)},k=1,...,H_{3}$
$y=\textbf{v}^{T}\textbf{z}_{3}=\sum_{k=1}^{H_{3}}v_{k}z_{3k}+v_{0}$ (6)
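The forward pass of Eqs. (3)-(6) can be sketched in a few lines of NumPy. The He-style weight initialization below is an illustrative assumption (the paper actually uses the Glorot normal initializer), and the example input values are arbitrary.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def mlp_forward(x, params):
    """Forward pass of the three-hidden-layer MLP of Eqs. (3)-(6).

    params: list of (W, b) pairs for the hidden layers,
    followed by (v, v0) for the linear output layer.
    """
    z = x
    for W, b in params[:-1]:
        z = relu(W @ z + b)      # hidden layers use ReLU
    v, v0 = params[-1]
    return v @ z + v0            # linear output (regression)

# Example: the 32-16-8 architecture with 4 inputs (Z, A, delta, P)
rng = np.random.default_rng(1)
sizes = [4, 32, 16, 8]
params = [(rng.normal(size=(m, n)) * np.sqrt(2.0 / n), np.zeros(m))
          for n, m in zip(sizes[:-1], sizes[1:])]
params.append((rng.normal(size=8), 0.0))
y = mlp_forward(np.array([50.0, 120.0, 1.0, 5.0]), params)
print(float(y))
```

Going from three to four hidden layers simply appends one more (W, b) pair to `params`, mirroring the extra stage described in the text.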
When the MLP goes deeper, one more step is added to these computations for each additional hidden layer. In this study, we implement MLP
architectures of two different depths. Whereas the first one includes three
hidden layers, the other one includes four. Thus, we observe the effect of
depth on the binding energy predictions. As we predict the binding energy,
i.e. one single numerical value, only one unit exists in the output layer. In
order to determine the MLP architecture, we gradually increase the number of
hidden units according to the prediction performance on three data sets. Then,
we choose our final architectures. The MLP with three hidden layers includes 32, 16, and 8 hidden units, respectively, corresponding to 769 (833) parameters for the model with two (four) inputs, which is much smaller than the number of training data. The MLP with four hidden layers includes 32, 32, 16, and 8 hidden units, corresponding to 1825 (1889) parameters for the model with two (four) inputs. In addition to the small number of parameters, our architectures do not overfit, for two main reasons. Firstly, the
central challenge in machine learning is that we must perform well on new,
previously unseen inputs, not just those on which our model is trained [24]. As seen in our results below, our neural network makes good predictions on test data. Secondly, overfitting occurs when the gap between the training loss and the test loss is too large [24]. In our calculations, however, this gap is very small.
For instance, it is found to be 0.0023 (0.0029) for the training (test) data of the MLP architecture with four hidden layers using four inputs. Therefore, our neural network does not overfit: it generalizes well on the test data of each dataset, and the training and test losses remain close to each other.
Another important step of creating these architectures is to initialize the
layer weights. We initialize them with the Glorot normal initializer, also
known as Xavier normal initializer [40]. Besides, the input data is randomly
divided into two subsets as 70.0% for training and 30.0% for testing. We
prefer mean absolute error as the loss function on the training set X:
$E(\textbf{W},\textbf{v}|X)=\frac{\sum_{t=1}^{n}\left|r^{t}-f(x^{t})\right|}{n}$
(7)
where $r^{t}$ are the desired values and $f(x^{t})$ are predictions for the
binding energy.
We train our MLP architectures for 800 epochs by using the Adam optimization algorithm to minimize the mean absolute error. The name Adam is derived from adaptive moment
estimation. It is an adaptive gradient method that individually adapts the
learning rates of model parameters [36]. During training, this algorithm
computes the estimates of first and second moments of the gradients, and uses
decay constants to update them. Therefore, the Adam algorithm requires hyperparameters, called decay constants, in addition to the learning rate. In this study, the initial learning rate is 0.001, and the decay constants are 0.9 and 0.999, respectively.
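The Adam update described above can be sketched as follows, with the stated hyperparameters (learning rate 0.001, decay constants 0.9 and 0.999). The one-parameter toy problem minimizing a mean absolute error is ours, for illustration only.

```python
import numpy as np

def adam_step(theta, grad, state, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient
    (first moment) and the squared gradient (second moment), with bias
    correction, yield a per-parameter adaptive step."""
    m, v, t = state
    t += 1
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad**2
    m_hat = m / (1 - beta1**t)            # bias-corrected first moment
    v_hat = v / (1 - beta2**t)            # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, (m, v, t)

# Toy problem: minimize |theta - 3|, whose gradient is sign(theta - 3)
theta = np.array([0.0])
state = (np.zeros(1), np.zeros(1), 0)
for _ in range(5000):
    grad = np.sign(theta - 3.0)
    theta, state = adam_step(theta, grad, state)
print(float(theta[0]))
```

Because the second-moment estimate normalizes the gradient magnitude, the effective step size stays close to the learning rate, which is why Adam behaves robustly across parameters of different scales.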
## III Results
Figure 2: Comparison of the experimental and predicted binding energy
differences for the testing set of the three layers MLP architecture using (Z,
A) (upper figure) and (Z, A, $\delta$, P) (lower figure) as inputs.
Figure 3: The same as in Fig. 2, but for the four layers MLP architecture.
In this work, the multilayer perceptron is used to predict ground-state
binding energies of atomic nuclei. Two different MLP architectures and inputs
are used to test the effect of the number of the hidden layers and inputs on
the results. In the input channels, we first use proton (Z) and mass (A)
numbers of nuclei as inputs along with the experimental data from the latest
AME2016 mass table [6] to predict binding energies. Therefore, this part of
the present work does not include any physical identity or theory, except the
proton and mass numbers. In the second part, we also include additional
physical inputs to improve the predictive power of our models. These inputs
carry information about the shell structure of nuclei, which in turn related
to nuclear binding energies [41, 42]. One of them is the pairing term
$\delta(Z,N)$, which is defined as
$\delta(Z,N)=\left[(-1)^{Z}+(-1)^{N}\right]/2.$ (8)
The pairing term becomes $+1$ ($-1$) for even-even (odd-odd) nuclei, and 0 for
other nuclei. A positive value for the pairing term indicates that the nucleus
is more bound while the opposite behavior is valid for a negative value [42].
Another input is the promiscuity factor (P) of nuclei [41] and it is given by
$P=\nu_{p}\nu_{n}/(\nu_{p}+\nu_{n}),$ (9)
where $\nu_{p(n)}$ is the difference between the actual proton (neutron)
number and the nearest magic number. The promiscuity factor is defined as a
measure of the valance proton-neutron ($p-n$) interactions [41]. In this work,
the proton and neutron magic numbers are taken as Z=8, 20, 28, 50, 82, 126 and
N=8, 20, 28, 50, 82, 126, 184, respectively. The latest AME2016 mass table
provides data for 3413 nuclei with A$\geq$8. However, only 2479 of them are
obtained experimentally, and the others are calculated using the trend from
the mass surface (TMS) in the neighborhood. Although the properties of some
nuclei are not obtained directly from experiments, they are expected to
provide reasonable data and trends for unknown nuclei. As it is well-known,
having a large amount of data is crucial for training an artificial neural
network. Using a large collection of data, performance of an algorithm can be
improved significantly. In our model, we make predictions by using only proton
and mass numbers of nuclei alongside some shell effects in the input. Since we
do not provide much information to the model in the input channel, we need a
large amount of data to make reasonable predictions. Therefore, non-
experimental values from the AME2016 mass table are also used in the
calculations to increase the performance of the model. While training the model, light nuclei with N$<$8 and A$<$10 are not taken into account, and we randomly divide our data set (3388 nuclei in total) into training (70.0%) and testing (30.0%) sets, as usual.
In Figs. 2 and 3, we display the binding energy (BE) differences between the
experimental data and the results from the MLP model using two different
architectures and inputs. The first architecture has three hidden layers
(32-16-8), and we only use the proton and mass numbers of nuclei (Z, A) in the
input channel (see Fig. 2(a)). Then, we also add pairing and promiscuity
factors (Z, A, $\delta$, P) to see their effects on the results (see Fig.
2(b)).
The predictions of the MLP with the three hidden layers are given in Fig. 2
for 1017 nuclei in the testing set. Although the root-mean-square deviation
($\sigma_{rms}$) with respect to the experimental data is high and obtained as
$\sigma_{rms}=3.98$ MeV, the MLP can make reasonable predictions for the
nuclear binding energies using only (Z, A) as inputs, and the results are
comparable with the predictions of the nuclear energy density functionals.
The first thing to notice is the large deviation of the model results for light
and heavy nuclei, which can be related to the limited number of experimental
data in these regions. Adding more physical information to the input, we find
that the predictive power of the MLP increases considerably as can be seen
from Fig. 2(b). By adding the pairing and promiscuity factors to the input channel,
the root-mean-square deviation is improved by about 45.73% and obtained as
2.16 MeV for the testing set nuclei. It is also clear that the predictive
power of the MLP increases considerably for light and heavy nuclei. To see the
performance of the MLP in different regions of the nuclear landscape, we
divide the testing set into three parts as light nuclei (Z$<$20, 93 nuclei),
medium-heavy nuclei (20$\leq$Z$\leq$82, 696 nuclei), and super-heavy nuclei
(Z$>$82, 228 nuclei). Using (Z, A) as inputs, the $\sigma_{rms}$ values are
obtained as 8.60, 2.93, and 3.80 MeV for light, medium-heavy and super-heavy
nuclei, respectively. Increasing the number of the inputs in the MLP
architecture, namely using (Z, A, $\delta$, P) in the input channel, we obtain
important improvements in the $\sigma_{rms}$ values (see Table 1). For
instance, the $\sigma_{rms}$ value for light nuclei is decreased and found as
3.39 MeV, which corresponds to 60.58% improvement in the predictions. Besides,
the $\sigma_{rms}$ values for medium-heavy and super-heavy nuclei are decreased
and obtained as 2.10 and 1.61 MeV, respectively. Using the MLP with four inputs,
the model predictions are improved by about 28.32% and 57.63% for medium-heavy
and super-heavy nuclei, respectively.
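The two figures of merit used throughout this section are the root-mean-square deviation and the relative improvement between two $\sigma_{rms}$ values; a short sketch (function names are ours) reproduces the percentages quoted above:

```python
import numpy as np

def rms_deviation(be_exp, be_pred):
    """sigma_rms between experimental and predicted binding energies (MeV)."""
    diff = np.asarray(be_exp, dtype=float) - np.asarray(be_pred, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))

def improvement_percent(sigma_old, sigma_new):
    """Relative improvement quoted in the text, e.g. 3.98 MeV -> 2.16 MeV."""
    return 100.0 * (sigma_old - sigma_new) / sigma_old

print(round(improvement_percent(3.98, 2.16), 2))  # 45.73 (testing set)
print(round(improvement_percent(8.60, 3.39), 2))  # 60.58 (light nuclei)
```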
Table 1: The root-mean-square deviations ($\sigma_{rms}$) in units of MeV for different MLP architectures and inputs. The results of the best MLP architectures are shown in bold. Since the number of parameters is higher than the number of nuclei in the training data set, the results of the two-hidden-layer (64-64) MLP architecture are not presented.

| MLP | Input | Z$<$20 (93 nuclei) | $20\leq$Z$\leq 82$ (696 nuclei) | Z$>$82 (228 nuclei) | Testing set (1017 nuclei) |
|---|---|---|---|---|---|
| (32-32) | (A,Z) | 7.80 | 2.80 | 2.11 | 3.46 |
| (64-32) | (A,Z) | 2.93 | 2.35 | 4.50 | 3.01 |
| (32-32) | (A,Z,$\delta$,P) | 4.13 | 1.95 | 4.76 | 3.04 |
| (64-32) | (A,Z,$\delta$,P) | 5.00 | 1.88 | 3.07 | 2.61 |
| (32-16-8) | (A,Z) | 8.60 | 2.93 | 3.80 | 3.98 |
| (64-8-4) | (A,Z) | 6.42 | 4.07 | 4.83 | 4.51 |
| (64-16-8) | (A,Z) | 6.22 | 2.11 | 2.05 | 2.75 |
| (32-16-8) | (A,Z,$\delta$,P) | 3.39 | 2.10 | 1.61 | 2.16 |
| (64-8-4) | (A,Z,$\delta$,P) | 6.75 | 2.93 | 4.76 | 3.90 |
| (64-16-8) | (A,Z,$\delta$,P) | 4.60 | 1.89 | 2.47 | 2.40 |
| (32-16-8-4) | (A,Z) | 10.37 | 3.01 | 2.60 | 4.20 |
| (32-16-16-8) | (A,Z) | 1.98 | 2.73 | 2.37 | 2.60 |
| (32-32-16-8) | (A,Z) | 4.89 | 3.06 | 4.82 | 3.72 |
| (32-16-8-4) | (A,Z,$\delta$,P) | 5.03 | 3.22 | 3.24 | 3.43 |
| (32-16-16-8) | (A,Z,$\delta$,P) | 9.11 | 2.83 | 2.27 | 3.78 |
| (32-32-16-8) | (A,Z,$\delta$,P) | 3.03 | 1.58 | 1.94 | 1.84 |
| (32-16-8-8-8) | (A,Z) | 4.20 | 2.56 | 5.25 | 3.50 |
| (32-16-16-8-4) | (A,Z) | 4.21 | 2.83 | 4.88 | 3.52 |
| (64-32-16-16-8) | (A,Z) | 2.87 | 2.46 | 5.55 | 3.44 |
| (32-16-8-8-8) | (A,Z,$\delta$,P) | 5.56 | 2.17 | 1.35 | 2.54 |
| (32-16-16-8-4) | (A,Z,$\delta$,P) | 10.83 | 2.98 | 1.78 | 4.18 |
| (64-32-16-16-8) | (A,Z,$\delta$,P) | 3.83 | 7.47 | 2.47 | 4.92 |
Figure 4: Comparison of the experimental and predicted binding energy
differences for 956 nuclei between the MLP with three hidden layers and the
(a) UNEDF1 [43, 44, 13], (b) SkM* [45, 44, 13], (c) FRDM-2012 [9] and (d) WS3 [8]
models. The black dashed lines are given to guide the eye.
Figure 5: The same as in Fig. 4, but using the four-layer MLP architecture.
It is known that increasing or decreasing the number of hidden layers can also
affect the predictive power of neural networks. Therefore, we increase the
number of hidden layers to four in the MLP architecture and repeat the same
calculations to see the effect on the results. In Fig. 3, the binding energy
differences between the experimental data and the MLP predictions with four
hidden layers are displayed for 1017 nuclei in the testing set. By increasing
the number of hidden layers by one, the
predictive power of the MLP model increases, and the $\sigma_{rms}$ values are
obtained as 3.72 and 1.84 MeV with two (see Fig. 3(a)) and four inputs (see
Fig. 3(b)), respectively. Similar to the MLP with three hidden layers, the
largest deviations in the binding energies are obtained for light and super-
heavy nuclei, and inclusion of the additional inputs increases the success of
the model in these regions. Using MLP with four hidden layers and two inputs
(Z, A), the $\sigma_{rms}$ deviations are obtained as 4.89, 3.06, and 4.82 MeV
for light, medium-heavy, and super-heavy nuclei, respectively. Adding the
pairing and promiscuity factors to the input channels, the results are
improved by about 38.03%, 48.37%, and 59.75% and the $\sigma_{rms}$ deviation
values are obtained as 3.03, 1.58, and 1.94 MeV for light, medium-heavy and
super-heavy nuclei (see Table 1), respectively.
We should mention that MLP architectures with different numbers of hidden
layers and hidden units were also tested for the nuclear mass predictions in
order to obtain the best MLP model. The results of these tests are also given
in Table 1. We found that decreasing the number of hidden layers decreases the
predictive power of the model. On the other hand, increasing the number of
hidden layers beyond four does not give better results either. In this work,
the best results are obtained using the three-layer (32-16-8) and four-layer
(32-32-16-8) MLP architectures with four inputs. Our
results indicate that using the proper number of hidden layers and inputs in
the MLP architecture, we can make fast and reliable predictions for nuclear
properties. Since the predictions are better using the (Z, A, $\delta$, P) as
inputs, we always use the results with four inputs to compare with other mic-
mac and microscopic results in the rest of the paper.
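The caption of Table 1 notes that the (64-64) architecture has more parameters than training nuclei; a quick count of fully connected weights and biases (assuming four inputs and a single binding-energy output, as in the text) makes this explicit:

```python
def mlp_param_count(n_inputs, hidden, n_outputs=1):
    """Number of trainable weights and biases in a fully connected MLP."""
    total, fan_in = 0, n_inputs
    for width in list(hidden) + [n_outputs]:
        total += fan_in * width + width  # weight matrix + bias vector
        fan_in = width
    return total

# Four inputs (Z, A, delta, P), one output (binding energy).
print(mlp_param_count(4, (32, 32, 16, 8)))  # 1889 parameters
print(mlp_param_count(4, (64, 64)))         # 4545 > 2371 training nuclei
```

So the best four-layer model uses fewer parameters than training examples, while the excluded (64-64) model does not.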
In Figs. 4 and 5, we also display the binding energy differences between
experimental data and modeling results from the MLP architectures with three
and four hidden layers and using four (Z, A, $\delta$, P) inputs. Both mic-mac
(FRDM-2012 [9] and WS3 [8]) and microscopic (UNEDF1 [43, 44, 13] and SkM* [45,
44, 13]) results from theoretical database Explorer [44] are shown along with
the MLP predictions for comparison. It is known that the accuracy of the
self-consistent models is not high for nuclear mass predictions, with
root-mean-square deviations of about several MeV. On the other side, the
mic-mac models (FRDM-2012 and WS3) are not self-consistent, and their
parameters are fitted using the available experimental data, which in turn
provides better estimates for the nuclear binding energies, as can be
seen from Figs. 4 and 5. The first thing to notice is that the binding energy
differences are generally high for light and heavy nuclei in all model
predictions. Compared to the UNEDF1, the deviation of the SkM* results from
the experimental data is quite high, and increases with the increase in mass
number. For mic-mac models (FRDM-2012 and WS3), the highest deviations are
obtained for nuclei with A$\geq$250. Comparing the results of the MLP
architectures with three and four layers (see Figs. 4 and 5), it is clear that
the four-layer MLP results are better than those of the three-layer MLP.
Besides, both MLP architectures are as successful as the FRDM-2012 and WS3
models in the description of nuclei with A$\geq$250. Although we only use the
proton and mass numbers alongside the pairing and promiscuity factors of
nuclei as physical inputs in the training of the MLP, the model gives
promising results that are comparable to other microscopic and mic-mac models.
Table 2: The root-mean-square deviations ($\sigma_{rms}$) in units of MeV for the common 956 nuclei using various models. Here, MLP1 and MLP2 represent the three-layer (32-16-8) and four-layer (32-32-16-8) MLP architectures using (Z, A, $\delta$, P) in the input channel. The nuclear binding energy results are taken from the nuclear energy density functionals UNEDF1 [43, 44, 13] and SkM* [45, 44, 13]. The results of the FRDM-2012 and WS3 models are taken from Refs. [9, 8].

| | MLP1 | MLP2 | UNEDF1 | SkM* | FRDM-2012 | WS3 |
|---|---|---|---|---|---|---|
| $\sigma_{rms}$ | 1.97 | 1.72 | 2.13 | 7.81 | 0.99 | 0.55 |
In Table 2, the root-mean-square deviations ($\sigma_{rms}$) are also given
for each model to get a better insight into the success of the models. The
best results are obtained for FRDM-2012 and WS3 models, and rms deviations are
obtained below 1.0 MeV. While the rms deviation is rather high using the SkM*
functional and obtained as 7.81 MeV, the UNEDF1 functional makes better
predictions and rms deviation is obtained as 2.13 MeV. The root-mean-square
deviations are still high using the self-consistent microscopic models in the
calculations. Besides, calculations using nuclear energy density functionals
are still computationally demanding. It is seen that the MLP model can make
reliable predictions that are comparable to the well-known microscopic models,
and the $\sigma_{rms}$ values are obtained as 1.97 and 1.72 MeV with three and
four layers MLP architectures.
## IV Conclusions
We implement the multilayer perceptron (MLP) to make ground-state binding
energy predictions for atomic nuclei. Two different architectures and inputs
are used in the MLP model to study the performance of this neural network in
the binding energy predictions. In the first one, we only use the proton and
mass numbers of nuclei alongside the latest experimental data, and no
additional physical input is included. Then, we add two further inputs, the
pairing and promiscuity factors of nuclei, to give more physical information
to the model. We find that using proper hidden layers and units with relevant
information in the input channels, the nuclear binding energy predictions
using the MLP improve considerably, especially for light and medium-heavy
nuclei. For 1017 nuclei in the testing set, the best root-mean-square
deviations are obtained as 2.16 MeV and 1.84 MeV for three and four layers MLP
architectures using (Z, A, $\delta$, P) inputs, respectively.
Our findings show that the MLP model can make reasonable predictions for
binding energies of atomic nuclei and the results are also comparable to other
models. Although the MLP does not include any physical theory behind it and is
considered a statistical model, it can make fast and reliable predictions with
a proper architecture and relevant inputs. In this
respect, the artificial neural networks can be seen as an alternative tool to
other mic-mac and microscopic models.
As future work, we plan to extend our calculations by including more physical
quantities in the input to better estimate the nuclear properties. Improving
the extrapolation abilities of neural networks for very neutron-rich nuclei is
another challenging task that remains for future work. Besides,
the neural network approaches can be used to train the residues of nuclear
properties as it is done in Refs. [25, 26, 27, 28], which in turn can be
helpful to understand the missing physics behind the microscopic models.
## References
* Lunney, Pearson, and Thibault [2003] D. Lunney, J. M. Pearson, and C. Thibault, “Recent trends in the determination of nuclear masses,” Rev. Mod. Phys. 75, 1021–1082 (2003).
* Mumpower _et al._ [2015a] M. R. Mumpower, R. Surman, D.-L. Fang, M. Beard, P. Möller, T. Kawano, and A. Aprahamian, “Impact of individual nuclear masses on $r$-process abundances,” Phys. Rev. C 92, 035807 (2015a).
* Mumpower _et al._ [2015b] M. Mumpower, R. Surman, D. L. Fang, M. Beard, and A. Aprahamian, “The impact of uncertain nuclear masses near closed shells on ther-process abundance pattern,” Journal of Physics G: Nuclear and Particle Physics 42, 034027 (2015b).
* Schatz and Ong [2017] H. Schatz and W.-J. Ong, “Dependence of x-ray burst models on nuclear masses,” The Astrophysical Journal 844, 139 (2017).
* Martin _et al._ [2016] D. Martin, A. Arcones, W. Nazarewicz, and E. Olsen, “Impact of nuclear mass uncertainties on the $r$ process,” Phys. Rev. Lett. 116, 121101 (2016).
* Wang _et al._ [2017] M. Wang, G. Audi, F. G. Kondev, W. Huang, S. Naimi, and X. Xu, “The AME2016 atomic mass evaluation (II). tables, graphs and references,” Chinese Physics C 41, 030003 (2017).
* Wang _et al._ [2010] N. Wang, Z. Liang, M. Liu, and X. Wu, “Mirror nuclei constraint in nuclear mass formula,” Phys. Rev. C 82, 044304 (2010).
* Liu _et al._ [2011] M. Liu, N. Wang, Y. Deng, and X. Wu, “Further improvements on a global nuclear mass model,” Phys. Rev. C 84, 014333 (2011).
* Möller _et al._ [2012] P. Möller, W. D. Myers, H. Sagawa, and S. Yoshida, “New finite-range droplet mass model and equation-of-state parameters,” Phys. Rev. Lett. 108, 052501 (2012).
* Goriely, Chamel, and Pearson [2009] S. Goriely, N. Chamel, and J. M. Pearson, “Skyrme-hartree-fock-bogoliubov nuclear mass formulas: Crossing the 0.6 mev accuracy threshold with microscopically deduced pairing,” Phys. Rev. Lett. 102, 152503 (2009).
* Goriely _et al._ [2009] S. Goriely, S. Hilaire, M. Girod, and S. Péru, “First gogny-hartree-fock-bogoliubov nuclear mass model,” Phys. Rev. Lett. 102, 242501 (2009).
* Stoitsov _et al._ [2006] M. Stoitsov, J. Dobaczewski, W. Nazarewicz, and P. Borycki, “Large-scale self-consistent nuclear mass calculations,” International Journal of Mass Spectrometry 251, 243 – 251 (2006).
* Erler _et al._ [2012] J. Erler, N. Birge, M. Kortelainen, W. Nazarewicz, E. Olsen, A. Perhac, and M. Stoitsov, “The limits of the nuclear landscape,” Nature 486, 509–512 (2012).
* Afanasjev and Agbemava [2016] A. V. Afanasjev and S. E. Agbemava, “Covariant energy density functionals: Nuclear matter constraints and global ground state properties,” Phys. Rev. C 93, 054310 (2016).
* Agbemava _et al._ [2014] S. E. Agbemava, A. V. Afanasjev, D. Ray, and P. Ring, “Global performance of covariant energy density functionals: Ground state observables of even-even nuclei and the estimate of theoretical uncertainties,” Phys. Rev. C 89, 054320 (2014).
* Audi, Wapstra, and Thibault [2003] G. Audi, A. Wapstra, and C. Thibault, “The ame2003 atomic mass evaluation: (ii). tables, graphs and references,” Nuclear Physics A 729, 337 – 676 (2003), the 2003 NUBASE and Atomic Mass Evaluations.
* Goriely, Tondeur, and Pearson [2001] S. Goriely, F. Tondeur, and J. Pearson, “A hartree–fock nuclear mass table,” Atomic Data and Nuclear Data Tables 77, 311 – 381 (2001).
* Audi and Wapstra [1995] G. Audi and A. Wapstra, “The 1995 update to the atomic mass evaluation,” Nuclear Physics A 595, 409 – 480 (1995).
* Goriely, Chamel, and Pearson [2016] S. Goriely, N. Chamel, and J. M. Pearson, “Further explorations of skyrme-hartree-fock-bogoliubov mass formulas. xvi. inclusion of self-energy effects in pairing,” Phys. Rev. C 93, 034337 (2016).
* Geng, Toki, and Meng [2005] L. Geng, H. Toki, and J. Meng, “Masses, Deformations and Charge Radii—Nuclear Ground-State Properties in the Relativistic Mean Field Model,” Progress of Theoretical Physics 113, 785–800 (2005).
* Xia _et al._ [2018] X. Xia, Y. Lim, P. Zhao, H. Liang, X. Qu, Y. Chen, H. Liu, L. Zhang, S. Zhang, Y. Kim, and J. Meng, “The limits of the nuclear landscape explored by the relativistic continuum hartree–bogoliubov theory,” Atomic Data and Nuclear Data Tables 121-122, 1 – 215 (2018).
* Akkoyun _et al._ [2013] S. Akkoyun, T. Bayram, S. O. Kara, and A. Sinan, “An artificial neural network application on nuclear charge radii,” Journal of Physics G: Nuclear and Particle Physics 40, 055106 (2013).
* Bayram, Akkoyun, and Kara [2014] T. Bayram, S. Akkoyun, and S. O. Kara, “A study on ground-state energies of nuclei by using neural networks,” Annals of Nuclear Energy 63, 172 – 175 (2014).
* Goodfellow, Bengio, and Courville [2016] I. Goodfellow, Y. Bengio, and A. Courville, _Deep Learning_ (MIT Press, 2016).
* Utama and Piekarewicz [2017] R. Utama and J. Piekarewicz, “Refining mass formulas for astrophysical applications: A bayesian neural network approach,” Phys. Rev. C 96, 044308 (2017).
* Niu and Liang [2018] Z. Niu and H. Liang, “Nuclear mass predictions based on bayesian neural network approach with pairing and shell effects,” Physics Letters B 778, 48 – 53 (2018).
* Neufcourt _et al._ [2018] L. Neufcourt, Y. Cao, W. Nazarewicz, and F. Viens, “Bayesian approach to model-based extrapolation of nuclear observables,” Phys. Rev. C 98, 034318 (2018).
* Neufcourt _et al._ [2019] L. Neufcourt, Y. Cao, W. Nazarewicz, E. Olsen, and F. Viens, “Neutron drip line in the ca region from bayesian model averaging,” Phys. Rev. Lett. 122, 062502 (2019).
* Gernoth _et al._ [1993] K. Gernoth, J. Clark, J. Prater, and H. Bohr, “Neural network models of nuclear systematics,” Physics Letters B 300, 1 – 7 (1993).
* Gazula, Clark, and Bohr [1992] S. Gazula, J. Clark, and H. Bohr, “Learning and prediction of nuclear stability by neural networks,” Nuclear Physics A 540, 1 – 26 (1992).
* Athanassopoulos _et al._ [2004] S. Athanassopoulos, E. Mavrommatis, K. Gernoth, and J. Clark, “Nuclear mass systematics using neural networks,” Nuclear Physics A 743, 222 – 235 (2004).
* Anil, Malik, and Banerjee [2020] M. U. Anil, T. Malik, and K. Banerjee, “Nuclear binding energy predictions based on machine learning,” ArXiv:2004.14196 (2020).
* Lasseri _et al._ [2020] R.-D. Lasseri, D. Regnier, J.-P. Ebran, and A. Penon, “Taming nuclear complexity with a committee of multilayer neural networks,” Phys. Rev. Lett. 124, 162502 (2020).
* Alpaydın [2014] E. Alpaydın, _Introduction to Machine Learning_ (MIT Press, 2014).
* Rumelhart, Hinton, and Williams [1986] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature 323, 533–536 (1986).
* Kingma and Ba [2014] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” preprint ArXiv:1412.6980 (2014).
* Idinil [2019] A. Idinil, “Statistical learnability of nuclear masses,” arXiv:1904.00057 (2019).
* Zhang _et al._ [2017] H. F. Zhang, L. H. Wang, J. P. Yin, P. H. Chen, and H. F. Zhang, “Performance of the levenberg–marquardt neural network approach in nuclear mass prediction,” Journal of Physics G: Nuclear and Particle Physics 44, 045110 (2017).
* Glorot, Bordes, and Bengio [2011] X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier neural networks,” 14th International Conference on Artificial Intelligence and Statistics , 315–323 (2011).
* Glorot and Bengio [2010] X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward neural networks,” Proceedings of the thirteenth international conference on artificial intelligence and statistics , 249–256 (2010).
* Casten and Zamfir [1996] R. F. Casten and N. V. Zamfir, “The evolution of nuclear structure: the scheme and related correlations,” Journal of Physics G: Nuclear and Particle Physics 22, 1521–1552 (1996).
* Kirson [2008] M. W. Kirson, “Mutual influence of terms in a semi-empirical mass formula,” Nuclear Physics A 798, 29 – 60 (2008).
* Kortelainen _et al._ [2012] M. Kortelainen, J. McDonnell, W. Nazarewicz, P.-G. Reinhard, J. Sarich, N. Schunck, M. V. Stoitsov, and S. M. Wild, “Nuclear energy density optimization: Large deformations,” Phys. Rev. C 85, 024304 (2012).
* mas [2020] “Mass explorer, http://massexplorer.frib.msu.edu/content/dftmasstables.html,” (2020).
* Bartel _et al._ [1982] J. Bartel, P. Quentin, M. Brack, C. Guet, and H.-B. Håkansson, “Towards a better parametrisation of skyrme-like effective forces: A critical study of the skm force,” Nuclear Physics A 386, 79 – 100 (1982).
# Two-fluid hydrodynamics of cold atomic bosons under influence of the quantum
fluctuations at non-zero temperatures
Pavel A. Andreev<EMAIL_ADDRESS>Faculty of physics, Lomonosov
Moscow State University, Moscow, Russian Federation, 119991. Peoples
Friendship University of Russia (RUDN University), 6 Miklukho-Maklaya Street,
Moscow, 117198, Russian Federation
###### Abstract
Ultracold Bose atoms form a physical system in which quantum and nonlinear
phenomena play a crucial role. Ultracold bosons are considered here at small
finite temperatures as two different fluids: the Bose-Einstein condensate and
the normal fluid (the thermal component). An extended
hydrodynamic model is obtained for both fluids, where the pressure evolution
equations and the pressure flux third rank tensor evolution equations are
considered along with the continuity and Euler equations. It is found that the
pressure evolution equation contains no contribution from the short-range
interaction. The pressure flux evolution equation contains the interaction
which gives the quantum fluctuations in the zero temperature limit. Here, we
obtain its finite-temperature generalization. The interaction contribution to
the pressure flux evolution equation that vanishes in the zero-temperature
limit is also found. The model is obtained via a straightforward
derivation from the microscopic many-particle Schrodinger equation in the
coordinate representation.
BEC, pressure evolution equation, hydrodynamics, finite temperatures, quantum
fluctuations.
###### pacs:
03.75.Hh, 03.75.Kk, 67.85.Pq
## I Introduction
Bosons at small temperatures are studied in terms of a two-fluid hydrodynamics
consisting of the Bose-Einstein condensate (BEC) and the normal fluid Dalfovo
RMP 99 . Each fluid is considered in terms of two hydrodynamic equations: the
continuity and Euler equations. It is assumed that the BEC can be completely
described by the concentration and velocity field, or, in other terms, by the
Gross-Pitaevskii equation Dalfovo RMP 99 , since BEC is the collection of
particles in the single quantum state. However, the normal fluid model
requires a truncation of the set of hydrodynamic equations. The pressure of
the normal fluid entering its Euler equation is an independent function. The
equation for the pressure evolution expresses the pressure perturbations via
the perturbations of the other functions. Applying an equation of state for
the pressure makes the model simpler, but also less accurate.
Moreover, the kinetic pressure in the Euler equation for the BEC is usually
set equal to zero Andreev PRA08 , Andreev LP 19 , since the kinetic pressure
is related to the occupation of the excited states. However, there is a
nonzero part caused by the quantum fluctuations Andreev 2005 . The pressure
evolution equation of the weakly interacting bosons contains no interaction,
but the next equation in the chain of the quantum hydrodynamic equations (the
equation for the pressure flux third rank tensor) contains the interaction
causing the depletion of the BEC at zero temperature.
Therefore, it is necessary to consider the pressure flux evolution equation
both for the BEC and for the normal fluid at the analysis of the small
temperature influence.
The quantum depletion of the BEC is the appearance of bosons in the excited
states while the system is kept at zero temperature. Thus, some energy of the
collective motion is transferred to the individual motion of a portion of the
particles. It is caused by the quantum fluctuations related to the
interparticle interaction. The quantum fluctuations have been considered in
the literature for a long time. Mostly, their theoretical analysis is based on the
Bogoliubov-de Gennes approach Lee PR 57 , Pitaevskii PRL 98 , Braaten PRL 99 ,
Astrakharchik PRL 05 . The quantum fluctuations in BECs are studied
experimentally as well Xu PRL 06 , Altmeyer PRL 07 , Papp PRL 08 . This method
is generalized for the dipolar BECs, where the quantum fluctuations play a
crucial role in the description of the dipolar BECs of lanthanoids. The
dipolar lanthanoid BECs reveal a large-scale instability causing the splitting
of the cloud of atoms into a number of macroscopic droplets. This highly
nonlinear phenomenon is called the quantum droplet formation Kadau Pfau Nature 16 ;
Ferrier-Barbut PRL 16 ; Baillie PRA 16 ; Bisset PRA 16 ; Wachtler PRA 16 a1 ;
Wachtler PRA 16 a2 ; Blakie pra 16 ; Boudjemaa PRA 20 ; Heinonen PRA 19 ;
Malomed Phys D 19 ; Shamriz PRA 20 ; Li PRA 19 ; Aybar PRA 19 ; Examilioti JP
B 20 ; Miyakawa PRA 20 ; Bottcher arXiv 20 07 ; Bisset arXiv 20 07 ; Wang
arXiv 20 02 ; Edmonds arXiv 20 02 ; Baillie PRA 20 . Therefore, the reassembly
of bosons into smaller compact groups stabilizes the system. These studies
give the motivation for the study of quantum fluctuations. However, here we
restrict our analysis to the short-range interaction only, and no
dipole-dipole interaction is discussed. Moreover, the influence of the finite
temperature is a necessary part of a complete model of these phenomena.
A fundamental feature of the collective dynamics is the spectrum of sound
waves. Two distinct sound velocities exist in the finite temperature ultracold
Bose gas Dalfovo RMP 99 , Griffin PRB 96 . The two-fluid model shows that the
slower mode (second sound) is associated with the BEC component, while the
faster mode (first sound) is associated with the thermal component.
Generalized expressions for the speeds of sound are obtained within the
developed model.
The derivation of the two-fluid hydrodynamics for finite temperature bosons,
in the limit of small temperature where a large fraction of the bosons is
located in the BEC state, is given from the microscopic motion in accordance
with the quantum hydrodynamic method Andreev PRA08 , Andreev LP 19 ,
MaksimovTMP 2001 , Andreev PTEP 19 . The microscopic dynamics is described by
the Schrodinger equation in the coordinate representation. A collection of
macroscopic functions is presented to describe the collective effects in
ultracold bosons. The latter includes the concentration of particles, the
velocity field, and the pressure tensor. The derivation of the basic equations
is made for all bosons, distributed over the lowest energy level and the
excited levels, treated as a single fluid. The decomposition into two fluids
is made at the macroscopic scale, in terms of collective variables. After the
general structure of the equations is obtained for arbitrary temperature and
arbitrary interaction strength, an approximate calculation of the functions
representing the interaction is made in the regime of short-range interaction.
Hence, a small parameter related to the small range of the interaction
potential is used. This specifies the general model; in addition, the
first-order contribution in the small parameter is included. A further
truncation is made in the calculation of the interaction terms for weak
interaction and small temperatures.
This paper is organized as follows. In Sec. II major steps of derivation of
hydrodynamic equations from the Schrodinger equation are demonstrated, where
the pressure evolution equation (the quantum Bohm potential evolution
equation) and the third rank tensor evolution equation are obtained along with
the continuity and Euler equations. In Sec. III the calculation of the
interaction in the Euler equation, the pressure evolution equation, and the
pressure flux evolution equation is demonstrated. Sec. IV presents the
suggested version of the extended two-fluid quantum hydrodynamic model for
ultracold finite temperature bosons. In Sec. V the limiting regime of the
derived model for the BEC under the influence of the quantum fluctuations at
zero temperature is obtained. In Sec. VI a brief summary of the obtained
results is presented.
## II On derivation of hydrodynamic equations from microscopic quantum
dynamics
### II.1 Basic definitions of quantum hydrodynamics and the Euler equation
derivation
On the microscopic level there is no notion of temperature. Hence, we consider
a system of interacting bosons governed by the Schrodinger equation
$\imath\hbar\partial_{t}\Psi=\hat{H}\Psi$ with the following Hamiltonian
$\hat{H}=\sum_{i=1}^{N}\biggl{(}\frac{\hat{\textbf{p}}^{2}_{i}}{2m_{i}}+V_{ext}(\textbf{r}_{i},t)\biggr{)}+\frac{1}{2}\sum_{i,j\neq
i}U(\textbf{r}_{i}-\textbf{r}_{j}),$ (1)
where $m_{i}$ is the mass of the i-th particle and
$\hat{\textbf{p}}_{i}=-\imath\hbar\nabla_{i}$ is the momentum operator of the
i-th particle. The last term in the Hamiltonian (1) is the boson-boson
interaction $U_{ij}$. We do not specify the form of the interaction. However,
the derivation presented below assumes that the interaction has a finite value
at small distances between particles and decays rapidly as the interparticle
distance increases. No distinction between bosons in the BEC state and bosons
in other states is made at this stage; the separation of all bosons into two
subsystems is made later, in terms of collective variables. A hydrodynamic
model is usually constructed for each species of particles. If we consider a
single species, then all masses are equal to each other.
The distribution of particles in a trap, waves, solitons, and oscillations of
the form of the trapped cloud are described by the concentration of particles.
The concentration is an essential macroscopic function both for classical and
for quantum fluids. The modulus of the macroscopic wave function in the
Gross-Pitaevskii equation gives the square root of the concentration of bosons
in the BEC state. Therefore, we start the derivation of the quantum
hydrodynamic equations from the definition of the concentration. Quantum
mechanics is based on the notion of point-like objects in spite of the wave
nature of quantum objects. Thus, the eigenfunction of the coordinate operator
in the coordinate representation is the delta function:
$\hat{x}\psi_{x^{\prime}}(x)=x^{\prime}\psi_{x^{\prime}}(x)$, where the
normalized wave function is $\psi_{x^{\prime}}(x)=\delta(x-x^{\prime})$.
Hence, the operator of concentration in the coordinate representation of
quantum mechanics is the sum of delta functions
$\hat{n}=\sum_{i=1}^{N}\delta(\textbf{r}-\textbf{r}_{i})$. Moreover, this is
supported by the general quantization rule: we take the corresponding
classical function and replace the coordinates by operators.
Transition to description of the collective motion of bosons is made via
introduction of the concentration Andreev PRA08 , MaksimovTMP 2001 :
$n=\int
dR\sum_{i=1}^{N}\delta(\textbf{r}-\textbf{r}_{i})\Psi^{*}(R,t)\Psi(R,t),$ (2)
which is the first collective variable in our model. Other collective
variables appear during the derivation. Equation (2) uses the following
notation: $dR=\prod_{i=1}^{N}d\textbf{r}_{i}$ is the element of volume in the
$3N$-dimensional configuration space, and $N$ is the number of bosons. The
concentration (2) is the sum of partial concentrations, $n=n_{n}+n_{b}$,
describing the distribution of the BEC $n_{b}$ and the normal fluid $n_{n}$ in
the coordinate space.
The equation for the evolution of the concentration (2) can be obtained by
acting with the time derivative on the function (2). The time derivative acts
on the wave functions under the integral, while the time derivatives of the
wave function are taken from the Schrodinger equation. After straightforward
calculations one obtains the continuity equation for the concentration (2)
$\partial_{t}n+\nabla\cdot\textbf{j}=0,$ (3)
where the new collective function called the current appears as the following
integral of the wave function
$\textbf{j}(\textbf{r},t)=\int
dR\sum_{i=1}^{N}\delta(\textbf{r}-\textbf{r}_{i})\times$
$\times\frac{1}{2m_{i}}(\Psi^{*}(R,t)\hat{\textbf{p}}_{i}\Psi(R,t)+c.c.),$ (4)
with $c.c.$ is the complex conjugation.
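The "straightforward calculations" behind the continuity equation (3) amount to the following standard step (only the kinetic part of the Hamiltonian (1) contributes; the external potential and interaction terms cancel in the commutator-like combination):

```latex
\partial_t n
 = \int dR \sum_{i=1}^{N}\delta(\mathbf{r}-\mathbf{r}_i)
   \bigl(\partial_t\Psi^{*}\,\Psi + \Psi^{*}\,\partial_t\Psi\bigr)
 = \frac{\imath}{\hbar}\int dR \sum_{i=1}^{N}\delta(\mathbf{r}-\mathbf{r}_i)
   \bigl((\hat{H}\Psi)^{*}\Psi - \Psi^{*}\hat{H}\Psi\bigr)
 = -\nabla\cdot\mathbf{j},
```

where the last equality follows from integrating the kinetic term by parts, with $\mathbf{j}$ exactly as defined in Eq. (4).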
Both introduced collective functions $n(\textbf{r},t)$ and
$\textbf{j}(\textbf{r},t)$ are quadratic forms of the wave function. Each of
them can be split into two parts related to the BEC and the normal fluid.
Hence, we have $n=n_{n}+n_{b}$ and $\textbf{j}=\textbf{j}_{n}+\textbf{j}_{b}$.
No microscopic definitions are introduced for the partial functions $n_{n}$,
$n_{b}$, $\textbf{j}_{n}$, and $\textbf{j}_{b}$. Therefore, the continuity
equation (3) splits into two partial continuity equations
$\partial_{t}n_{a}+\nabla\cdot\textbf{j}_{a}=0,$ (5)
where subindex $a$ stands for $b$ and $n$.
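Introducing the partial velocity fields in the usual way (a standard definition in the quantum hydrodynamics literature, assumed here and consistent with the current (4)), each partial continuity equation (5) takes the familiar hydrodynamic form:

```latex
\mathbf{v}_a(\mathbf{r},t) = \frac{\mathbf{j}_a(\mathbf{r},t)}{n_a(\mathbf{r},t)},
\qquad
\partial_t n_a + \nabla\cdot\bigl(n_a\mathbf{v}_a\bigr) = 0,
\qquad a = b,\, n.
```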
We continue the derivation of the hydrodynamic equations and consider the time
evolution of the particle current (4). Acting with the time derivative on the
function j (4) and using the Schrodinger equation with the Hamiltonian (1)
leads to the general form of the current evolution equation
$\partial_{t}j^{\alpha}+\partial_{\beta}\Pi^{\alpha\beta}=-\frac{1}{m}n\partial_{\alpha}V_{ext}+\frac{1}{m}F^{\alpha}_{int},$
(6)
where
$\Pi^{\alpha\beta}=\int
dR\sum_{i=1}^{N}\delta(\textbf{r}-\textbf{r}_{i})\frac{1}{4m^{2}}[\Psi^{*}(R,t)\hat{p}_{i}^{\alpha}\hat{p}_{i}^{\beta}\Psi(R,t)$
$+\hat{p}_{i}^{\alpha*}\Psi^{*}(R,t)\hat{p}_{i}^{\beta}\Psi(R,t)+c.c.]$ (7)
is the momentum flux, and
$F^{\alpha}_{int}=-\int(\partial^{\alpha}U(\textbf{r}-\textbf{r}^{\prime}))n_{2}(\textbf{r},\textbf{r}^{\prime},t)d\textbf{r}^{\prime},$
(8)
with the two-particle concentration
$n_{2}(\textbf{r},\textbf{r}^{\prime},t)$ $=\int dR\sum_{i,j=1,j\neq
i}^{N}\delta(\textbf{r}-\textbf{r}_{i})\delta(\textbf{r}^{\prime}-\textbf{r}_{j})\Psi^{*}(R,t)\Psi(R,t).$
(9)
It is necessary to split equation (6) into two equations, one for each subsystem of
bosons. In its current form, equation (6) consists of a superposition of functions
which are quadratic forms of the wave function. Hence, each term can be
split into two parts, and we find two similar equations for the currents
$\partial_{t}j_{a}^{\alpha}+\partial_{\beta}\Pi_{a}^{\alpha\beta}=-\frac{1}{m}n_{a}\partial_{\alpha}V_{ext}+\frac{1}{m}F^{\alpha}_{a,int}.$
(10)
The first and third terms are proportional to the concentration and the
current; therefore, they require no comment. Nontrivial differences between the
two current evolution equations appear in the further analysis of the momentum
flux $\Pi^{\alpha\beta}$ and the interaction $F^{\alpha}_{int}$. Here we
point out some differences which appear for the momentum flux
$\Pi^{\alpha\beta}$. Its structure has been obtained in many papers (see, for
instance, Andreev PRA08 after equation (52), or equation (24) of Andreev 2001 )
$\Pi^{\alpha\beta}=nv^{\alpha}v^{\beta}+p^{\alpha\beta}+T^{\alpha\beta},$ (11)
where $p^{\alpha\beta}$ is the pressure tensor and $T^{\alpha\beta}$ is the
tensor giving the quantum Bohm potential; its approximate form can be written
as
$T^{\alpha\beta}_{0}=-\frac{\hbar^{2}}{4m^{2}}\biggl{[}\partial_{\alpha}\partial_{\beta}n-\frac{\partial_{\alpha}n\cdot\partial_{\beta}n}{n}\biggr{]}.$
(12)
Tensor $T^{\alpha\beta}_{0}$ (12) is obtained for noninteracting particles
occupying a single quantum state.
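The structure of (12) is easy to probe numerically in one dimension; the Gaussian profile $n=e^{-x^2}$ and units $\hbar=m=1$ below are assumptions for illustration. For this profile the bracket in (12) reduces to $-2n$, so $T_{0}=\hbar^{2}n/(2m^{2})$.

```python
import numpy as np

hbar = m = 1.0
x = np.linspace(-4.0, 4.0, 2001)
n = np.exp(-x**2)                  # assumed model concentration profile

dn = np.gradient(n, x)
d2n = np.gradient(dn, x)
T0 = -(hbar**2/(4.0*m**2))*(d2n - dn*dn/n)   # Eq. (12) in one dimension

# for n = exp(-x^2): n'' - n'^2/n = -2n, hence T0 = (hbar^2 / 2 m^2) n
assert np.max(np.abs(T0 - 0.5*n)) < 1e-2
```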
Basically, the pressure tensor $p^{\alpha\beta}$ is defined via the wave
function $\Psi(R,t)$. However, this definition requires some manipulations with
the wave function and the introduction of a number of intermediate functions.
Hence, we do not present its explicit form here. Nevertheless, the pressure
tensor is related to the distribution of bosons over quantum states with
energies above $E_{min}$. Therefore, for bosons in the BEC state we have
$p^{\alpha\beta}_{B}=0$ if no quantum fluctuations are considered, and
$\Pi_{B}^{\alpha\beta}=n_{B}v_{B}^{\alpha}v_{B}^{\beta}+T_{B}^{\alpha\beta}+p_{qf}^{\alpha\beta},$
(13)
where $T_{B}^{\alpha\beta}$ is function (12) of $n_{B}$ in the absence of
interaction. The distribution of particles over different quantum states does not
allow one to obtain the full expression (12), but only its first term. Nevertheless,
it can be used as an equation of state in the noninteracting limit. The normal
fluid bosons have nonzero pressure $p_{n}^{\alpha\beta}\neq 0$; hence, all terms
of representation (11) exist in this regime. Expression (12) is derived for
bosons in a single state in the absence of interaction; hence, it is an
approximate expression for weakly interacting bosons in the BEC state. It is
even less accurate for the normal fluid bosons, but we use it as an equation
of state for the quantum part of the momentum flux.
### II.2 The pressure evolution equation
Extending the set of hydrodynamic equations, we can derive the equation for the
momentum flux evolution. One might expect that this equation brings extra
information for the normal fluid bosons only. However, the quantum
fluctuations contribute to the evolution of the kinetic pressure of the
BEC in the limit of zero temperature via the divergence of the third-rank
tensor. If the temperature is nonzero, we have two partial kinetic pressures:
one for the BEC and one for the normal fluid. Consider the time evolution of the
momentum flux (7) using the Schrodinger equation with Hamiltonian (1).
It gives the following expression:
$\partial_{t}\Pi^{\alpha\beta}=\frac{\imath}{\hbar}\int
dR\sum_{i=1}^{N}\delta(\textbf{r}-\textbf{r}_{i})\frac{1}{4m^{2}}[\hat{H}^{*}\Psi^{*}(R,t)\hat{p}_{i}^{\alpha}\hat{p}_{i}^{\beta}\Psi(R,t)$
$-\Psi^{*}(R,t)\hat{p}_{i}^{\alpha}\hat{p}_{i}^{\beta}\hat{H}\Psi(R,t)+\hat{p}_{i}^{\alpha*}\hat{H}^{*}\Psi^{*}(R,t)\hat{p}_{i}^{\beta}\Psi(R,t)$
$-\hat{p}_{i}^{\alpha*}\Psi^{*}(R,t)\hat{p}_{i}^{\beta}\hat{H}\Psi(R,t)-c.c.]$
(14)
Some of the presented terms contain the Hamiltonian $\hat{H}$ under the
action of the momentum operators. We permute the Hamiltonian $\hat{H}$ with the
operators acting on it, so that after the permutation no operators act on the
Hamiltonian $\hat{H}$, while terms containing the corresponding commutators
appear. Therefore, all terms combine into two groups:
$\partial_{t}\Pi^{\alpha\beta}=\frac{\imath}{\hbar}\int
dR\sum_{i=1}^{N}\delta(\textbf{r}-\textbf{r}_{i})\frac{1}{4m^{2}}[\hat{H}^{*}\Psi^{*}\cdot\hat{p}_{i}^{\alpha}\hat{p}_{i}^{\beta}\Psi$
$-\Psi^{*}\hat{H}\hat{p}_{i}^{\alpha}\hat{p}_{i}^{\beta}\Psi+\hat{H}^{*}\hat{p}_{i}^{\alpha*}\Psi^{*}\cdot\hat{p}_{i}^{\beta}\Psi-\hat{p}_{i}^{\alpha*}\Psi^{*}\cdot\hat{H}\hat{p}_{i}^{\beta}\Psi-c.c.]$
$+\frac{\imath}{\hbar}\int
dR\sum_{i=1}^{N}\delta(\textbf{r}-\textbf{r}_{i})\frac{1}{4m^{2}}[-\Psi^{*}[\hat{p}_{i}^{\alpha}\hat{p}_{i}^{\beta},\hat{H}]\Psi$
$+[\hat{p}_{i}^{\alpha*},\hat{H}^{*}]\Psi^{*}\cdot\hat{p}_{i}^{\beta}\Psi-\hat{p}_{i}^{\alpha*}\Psi^{*}\cdot[\hat{p}_{i}^{\beta},\hat{H}]\Psi-c.c.]$
(15)
The first group of terms in expression (15) gives the divergence of the flux of
tensor $\Pi^{\alpha\beta}$. The second group, containing the commutators,
leads to the contribution of the interaction to the momentum flux evolution.
It gives the momentum flux evolution equation
$\partial_{t}\Pi^{\alpha\beta}+\partial_{\gamma}M^{\alpha\beta\gamma}=-\frac{1}{m}j^{\beta}\partial_{\alpha}V_{ext}$
$-\frac{1}{m}j^{\alpha}\partial_{\beta}V_{ext}+\frac{1}{m}(F^{\alpha\beta}+F^{\beta\alpha}),$
(16)
where the momentum flux is the full flux of all bosons,
$\Pi^{\alpha\beta}=\Pi_{n}^{\alpha\beta}+\Pi_{b}^{\alpha\beta}$; the splitting
into the two subspecies is made later,
$F^{\alpha\beta}=-\int[\partial^{\alpha}U(\textbf{r}-\textbf{r}^{\prime})]j_{2}^{\beta}(\textbf{r},\textbf{r}^{\prime},t)d\textbf{r}^{\prime}$
(17)
is the force tensor field,
$M^{\alpha\beta\gamma}=\int
dR\sum_{i=1}^{N}\delta(\textbf{r}-\textbf{r}_{i})\frac{1}{8m_{i}^{3}}\biggl{[}\Psi^{*}(R,t)\hat{p}_{i}^{\alpha}\hat{p}_{i}^{\beta}\hat{p}_{i}^{\gamma}\Psi(R,t)$
$+\hat{p}_{i}^{\alpha*}\Psi^{*}(R,t)\hat{p}_{i}^{\beta}\hat{p}_{i}^{\gamma}\Psi(R,t)+\hat{p}_{i}^{\alpha*}\hat{p}_{i}^{\gamma*}\Psi^{*}(R,t)\hat{p}_{i}^{\beta}\Psi(R,t)$
$+\hat{p}_{i}^{\gamma*}\Psi^{*}(R,t)\hat{p}_{i}^{\alpha}\hat{p}_{i}^{\beta}\Psi(R,t)+c.c.\biggr{]}$
(18)
is the current (flux) of the momentum flux, and
$\textbf{j}_{2}(\textbf{r},\textbf{r}^{\prime},t)=\int dR\sum_{i,j\neq
i}\delta(\textbf{r}-\textbf{r}_{i})\delta(\textbf{r}^{\prime}-\textbf{r}_{j})\times$
$\times\frac{1}{2m_{i}}[\Psi^{*}(R,t)\hat{\textbf{p}}_{i}\Psi(R,t)+c.c.]$ (19)
is the two-particle current-concentration function.
If quantum correlations are dropped, function
$j_{2}^{\alpha}(\textbf{r},\textbf{r}^{\prime},t)$ splits into the product of the
current $j^{\alpha}(\textbf{r},t)$ and the concentration
$n(\textbf{r}^{\prime},t)$. The interaction in the momentum flux evolution
equation (16) is represented by the symmetrized combination of the tensor
$F^{\alpha\beta}$, which is the flux (current) of force.
Partial momentum flux equations appear as
$\partial_{t}\Pi_{a}^{\alpha\beta}+\partial_{\gamma}M_{a}^{\alpha\beta\gamma}=-\frac{1}{m}j_{a}^{\beta}\partial_{\alpha}V_{ext}$
$-\frac{1}{m}j_{a}^{\alpha}\partial_{\beta}V_{ext}+\frac{1}{m}(F_{a}^{\alpha\beta}+F_{a}^{\beta\alpha}),$
(20)
where
$M^{\alpha\beta\gamma}=M_{B}^{\alpha\beta\gamma}+M_{n}^{\alpha\beta\gamma}$,
with
$M_{a}^{\alpha\beta\gamma}=n_{a}v_{a}^{\alpha}v_{a}^{\beta}v_{a}^{\gamma}+v_{a}^{\alpha}(p_{a}^{\beta\gamma}+T_{a}^{\beta\gamma})+v_{a}^{\beta}(p_{a}^{\alpha\gamma}+T_{a}^{\alpha\gamma})$
$+v_{a}^{\gamma}(p_{a}^{\alpha\beta}+T_{a}^{\alpha\beta})+Q_{a}^{\alpha\beta\gamma}+T_{a}^{\alpha\beta\gamma}+L_{a}^{\alpha\beta\gamma}.$
(21)
The pressure is the average of the square of the thermal velocity, while tensor
$Q_{a}^{\alpha\beta\gamma}$ is the average of the product of three projections
of the thermal velocity. Function $L_{a}^{\alpha\beta\gamma}$ represents
quantum-thermal terms. For the BEC we have $p_{B}^{\alpha\beta}=0$,
$Q_{B}^{\alpha\beta\gamma}=0$, $L_{B}^{\alpha\beta\gamma}=0$, since it has no
contribution from the excited states. For symmetric equilibrium distributions we
have $Q_{n}^{\alpha\beta\gamma}=0$, $L_{n}^{\alpha\beta\gamma}=0$. We
generalize this conclusion to nonequilibrium states as trivial equations
of state for these functions. Tensor $T_{a}^{\alpha\beta\gamma}$ is
$T_{a}^{\alpha\beta\gamma}=-\frac{\hbar^{2}}{12m^{2}}n_{a}(\partial^{\alpha}\partial^{\beta}v_{a}^{\gamma}+\partial^{\alpha}\partial^{\gamma}v_{a}^{\beta}+\partial^{\beta}\partial^{\gamma}v_{a}^{\alpha}).$
(22)
This definition of tensor $T^{\alpha\beta\gamma}$ differs from equation (27)
in Ref. Andreev 2001 by the extraction of the quantum Bohm potentials, which are written
together with the pressure tensors in equation (21). Equation (27) in Ref. Andreev
2001 contains the approximate form of the quantum Bohm potential
$T^{\alpha\beta}$, whereas equation (21) includes the quantum Bohm potential in its
general form. Moreover, expression (22) is an exact formula obtained with no
assumption about the structure of the many-particle wave function, unlike the first
term in equation (23) of Ref. Andreev 2001 .
Equations (3)-(20) are obtained in general form; the short-range nature of the
inter-particle interaction is not used. Moreover, the traditional hydrodynamic
equations are presented via the velocity field and the pressure tensor, while
equations (3)-(20) are written via the current and the momentum flux.
The method of introducing the velocity field into the equations of
quantum hydrodynamics of spinless particles is presented in Refs. Andreev
PRA08 , Andreev 2001 . The method of calculating the interaction terms in the
short-range interaction limit is also described in Refs.
Andreev PRA08 , Andreev 2001 . Let us present the results of applying these
methods to finite-temperature bosons. Moreover, we consider the short-range
interaction in the first order over the interaction radius.
### II.3 Appearance of the quantum fluctuations in the third rank tensor
evolution equation
The derivation of the quantum fluctuations requires calculating the time
evolution of the current of the momentum flux $M^{\alpha\beta\gamma}$ (18).
The method of derivation is similar to that for the equations obtained above. The time
derivative of tensor $M^{\alpha\beta\gamma}$ acts on the wave functions in its
definition. The time derivative of the wave function is replaced using the
many-particle microscopic Schrodinger equation
$\imath\hbar\partial_{t}\Psi=\hat{H}\Psi$ with Hamiltonian (1). It leads to the following
expression:
$\partial_{t}M^{\alpha\beta\gamma}=\frac{\imath}{\hbar}\int
dR\sum_{i=1}^{N}\delta(\textbf{r}-\textbf{r}_{i})\frac{1}{8m_{i}^{3}}\biggl{[}\hat{H}^{*}\Psi^{*}\cdot\hat{p}_{i}^{\alpha}\hat{p}_{i}^{\beta}\hat{p}_{i}^{\gamma}\Psi$
$-\Psi^{*}\hat{p}_{i}^{\alpha}\hat{p}_{i}^{\beta}\hat{p}_{i}^{\gamma}\hat{H}\Psi+\hat{p}_{i}^{\alpha*}\hat{H}^{*}\Psi^{*}\cdot\hat{p}_{i}^{\beta}\hat{p}_{i}^{\gamma}\Psi+\hat{p}_{i}^{\alpha*}\hat{p}_{i}^{\gamma*}\hat{H}^{*}\Psi^{*}\cdot\hat{p}_{i}^{\beta}\Psi$
$-\hat{p}_{i}^{\alpha*}\Psi^{*}\cdot\hat{p}_{i}^{\beta}\hat{p}_{i}^{\gamma}\hat{H}\Psi-\hat{p}_{i}^{\alpha*}\hat{p}_{i}^{\gamma*}\Psi^{*}\cdot\hat{p}_{i}^{\beta}\hat{H}\Psi$
$+\hat{p}_{i}^{\gamma*}\hat{H}^{*}\Psi^{*}\cdot\hat{p}_{i}^{\alpha}\hat{p}_{i}^{\beta}\Psi-\hat{p}_{i}^{\gamma*}\Psi^{*}\cdot\hat{p}_{i}^{\alpha}\hat{p}_{i}^{\beta}\hat{H}\Psi-c.c.\biggr{]}$
(23)
As before, some of the presented terms contain the Hamiltonian $\hat{H}$ under
the action of the momentum operators. Permuting the Hamiltonian $\hat{H}$ with the
operators acting on it, and keeping the terms with the corresponding
commutators, we again combine all terms into two groups:
$\partial_{t}M^{\alpha\beta\gamma}=\frac{\imath}{\hbar}\int
dR\sum_{i=1}^{N}\delta(\textbf{r}-\textbf{r}_{i})\frac{1}{8m_{i}^{3}}\biggl{[}\hat{H}^{*}\Psi^{*}\cdot\hat{p}_{i}^{\alpha}\hat{p}_{i}^{\beta}\hat{p}_{i}^{\gamma}\Psi$
$-\Psi^{*}\hat{H}\hat{p}_{i}^{\alpha}\hat{p}_{i}^{\beta}\hat{p}_{i}^{\gamma}\Psi+\hat{H}^{*}\hat{p}_{i}^{\alpha*}\Psi^{*}\cdot\hat{p}_{i}^{\beta}\hat{p}_{i}^{\gamma}\Psi+\hat{H}^{*}\hat{p}_{i}^{\alpha*}\hat{p}_{i}^{\gamma*}\Psi^{*}\cdot\hat{p}_{i}^{\beta}\Psi$
$-\hat{p}_{i}^{\alpha*}\Psi^{*}\cdot\hat{H}\hat{p}_{i}^{\beta}\hat{p}_{i}^{\gamma}\Psi-\hat{p}_{i}^{\alpha*}\hat{p}_{i}^{\gamma*}\Psi^{*}\cdot\hat{H}\hat{p}_{i}^{\beta}\Psi$
$+\hat{H}^{*}\hat{p}_{i}^{\gamma*}\Psi^{*}\cdot\hat{p}_{i}^{\alpha}\hat{p}_{i}^{\beta}\Psi-\hat{p}_{i}^{\gamma*}\Psi^{*}\cdot\hat{H}\hat{p}_{i}^{\alpha}\hat{p}_{i}^{\beta}\Psi-c.c.\biggr{]}$
$+\frac{\imath}{\hbar}\int
dR\sum_{i=1}^{N}\delta(\textbf{r}-\textbf{r}_{i})\frac{1}{8m_{i}^{3}}\biggl{[}-\Psi^{*}[\hat{p}_{i}^{\alpha}\hat{p}_{i}^{\beta}\hat{p}_{i}^{\gamma},\hat{H}]\Psi$
$+[\hat{p}_{i}^{\alpha*},\hat{H}^{*}]\Psi^{*}\cdot\hat{p}_{i}^{\beta}\hat{p}_{i}^{\gamma}\Psi+[\hat{p}_{i}^{\alpha*}\hat{p}_{i}^{\gamma*},\hat{H}^{*}]\Psi^{*}\cdot\hat{p}_{i}^{\beta}\Psi$
$-\hat{p}_{i}^{\alpha*}\Psi^{*}\cdot[\hat{p}_{i}^{\beta}\hat{p}_{i}^{\gamma},\hat{H}]\Psi-\hat{p}_{i}^{\alpha*}\hat{p}_{i}^{\gamma*}\Psi^{*}\cdot[\hat{p}_{i}^{\beta},\hat{H}]\Psi$
$+[\hat{p}_{i}^{\gamma*},\hat{H}^{*}]\Psi^{*}\cdot\hat{p}_{i}^{\alpha}\hat{p}_{i}^{\beta}\Psi-\hat{p}_{i}^{\gamma*}\Psi^{*}\cdot[\hat{p}_{i}^{\alpha}\hat{p}_{i}^{\beta},\hat{H}]\Psi-c.c.\biggr{]}.$
(24)
The first group of terms leads to the divergence of the flux of tensor
$M^{\alpha\beta\gamma}$. The second group of terms, containing the commutators,
represents the interaction.
The final form of the evolution equation for tensor $M^{\alpha\beta\gamma}$ can be
expressed as follows:
$\partial_{t}M^{\alpha\beta\gamma}+\partial_{\delta}R^{\alpha\beta\gamma\delta}=\frac{\hbar^{2}}{4m^{3}}n\partial_{\alpha}\partial_{\beta}\partial_{\gamma}V_{ext}$
$-\frac{1}{m}\Pi^{\beta\gamma}\partial_{\alpha}V_{ext}-\frac{1}{m}\Pi^{\alpha\gamma}\partial_{\beta}V_{ext}-\frac{1}{m}\Pi^{\alpha\beta}\partial_{\gamma}V_{ext}$
$+\frac{1}{m}F_{qf}^{\alpha\beta\gamma}+\frac{1}{m}(F^{\alpha\beta\gamma}+F^{\beta\gamma\alpha}+F^{\gamma\alpha\beta}),$
(25)
where
$F_{qf}^{\alpha\beta\gamma}=\frac{\hbar^{2}}{4m^{2}}\int[\partial^{\alpha}\partial^{\beta}\partial^{\gamma}U(\textbf{r}-\textbf{r}^{\prime})]n_{2}(\textbf{r},\textbf{r}^{\prime},t)d\textbf{r}^{\prime}$
(26)
is the quantum force contribution leading to the quantum fluctuations, and
$F^{\alpha\beta\gamma}=-\int[\partial^{\alpha}U(\textbf{r}-\textbf{r}^{\prime})]\Pi_{2}^{\beta\gamma}(\textbf{r},\textbf{r}^{\prime},t)d\textbf{r}^{\prime}$
(27)
is the interaction contribution, which has a nonzero limit in the classical
regime, with
$\Pi_{2}^{\alpha\beta}(\textbf{r},\textbf{r}^{\prime},t)=\int dR\sum_{i,j\neq
i}\frac{1}{4m_{i}^{2}}\delta(\textbf{r}-\textbf{r}_{i})\delta(\textbf{r}^{\prime}-\textbf{r}_{j})\times$
$\times[\Psi^{*}\hat{p}_{i}^{\alpha}\hat{p}_{i}^{\beta}\Psi+(\hat{p}_{i}^{\beta})^{*}\Psi^{*}\hat{p}_{i}^{\alpha}\Psi+c.c.].$
(28)
Tensor $\Pi_{2}^{\alpha\beta}(\textbf{r},\textbf{r}^{\prime},t)$ can be
simplified in the correlationless regime to the following form:
$\Pi_{2}^{\alpha\beta}(\textbf{r},\textbf{r}^{\prime},t)=\Pi^{\alpha\beta}(\textbf{r},t)\cdot
n(\textbf{r}^{\prime},t)$. However, the correlations caused by the
symmetrization of the bosonic many-particle wave function are taken into account below.
Terms $F^{\alpha\beta\gamma}$ and $F_{qf}^{\alpha\beta\gamma}$ are third-rank
force tensors describing the interparticle interaction. However, equation
(25) also contains the flux of tensor $M^{\alpha\beta\gamma}$, which is the fourth-rank
tensor appearing in the following form:
$R^{\alpha\beta\gamma\delta}=\int
dR\sum_{i=1}^{N}\delta(\textbf{r}-\textbf{r}_{i})\frac{1}{16m_{i}^{4}}\biggl{[}\Psi^{*}\hat{p}_{i}^{\alpha}\hat{p}_{i}^{\beta}\hat{p}_{i}^{\gamma}\hat{p}_{i}^{\delta}\Psi$
$+\hat{p}_{i}^{\alpha*}\Psi^{*}\hat{p}_{i}^{\beta}\hat{p}_{i}^{\gamma}\hat{p}_{i}^{\delta}\Psi+\hat{p}_{i}^{\beta*}\Psi^{*}\hat{p}_{i}^{\alpha}\hat{p}_{i}^{\gamma}\hat{p}_{i}^{\delta}\Psi+\hat{p}_{i}^{\gamma*}\Psi^{*}\hat{p}_{i}^{\alpha}\hat{p}_{i}^{\beta}\hat{p}_{i}^{\delta}\Psi$
$+\hat{p}_{i}^{\delta*}\Psi^{*}\hat{p}_{i}^{\alpha}\hat{p}_{i}^{\beta}\hat{p}_{i}^{\gamma}\Psi+\hat{p}_{i}^{\alpha*}\hat{p}_{i}^{\delta*}\Psi^{*}\hat{p}_{i}^{\beta}\hat{p}_{i}^{\gamma}\Psi$
$+\hat{p}_{i}^{\alpha*}\hat{p}_{i}^{\gamma*}\Psi^{*}\hat{p}_{i}^{\beta}\hat{p}_{i}^{\delta}\Psi+\hat{p}_{i}^{\gamma*}\hat{p}_{i}^{\delta*}\Psi^{*}\hat{p}_{i}^{\alpha}\hat{p}_{i}^{\beta}\Psi+c.c.\biggr{]}.$
(29)
Equation (25) is obtained for bosons at arbitrary temperature. It can be
separated into two equations for the two subsystems: the BEC and the
normal fluid. All terms in equation (25) are additive over the particles;
therefore, they are additive over the subsystems. Hence, the structure of the
partial equations is identical to the structure of equation (25):
$\partial_{t}M_{a}^{\alpha\beta\gamma}+\partial_{\delta}R_{a}^{\alpha\beta\gamma\delta}=-\frac{1}{m}n_{a}\partial_{\alpha}\partial_{\beta}\partial_{\gamma}V_{ext}$
$-\frac{1}{m}\Pi_{a}^{\beta\gamma}\partial_{\alpha}V_{ext}-\frac{1}{m}\Pi_{a}^{\alpha\gamma}\partial_{\beta}V_{ext}-\frac{1}{m}\Pi_{a}^{\alpha\beta}\partial_{\gamma}V_{ext}$
$+\frac{1}{m}F_{a,qf}^{\alpha\beta\gamma}+\frac{1}{m}(F_{a}^{\alpha\beta\gamma}+F_{a}^{\beta\gamma\alpha}+F_{a}^{\gamma\alpha\beta}),$
(30)
where the subindex $a$ equals $B$ for the BEC and $n$ for the normal fluid.
The fourth rank kinematic tensor $R_{a}^{\alpha\beta\gamma\delta}$ (29) has
the following form after the introduction of the velocity field via the
Madelung transformation of the many-particle wave function:
$R_{a}^{\alpha\beta\gamma\delta}=n_{a}v_{a}^{\alpha}v_{a}^{\beta}v_{a}^{\gamma}v_{a}^{\delta}$
$+v_{a}^{\alpha}v_{a}^{\delta}(p_{a}^{\beta\gamma}+T_{a}^{\beta\gamma})+v_{a}^{\beta}v_{a}^{\delta}(p_{a}^{\alpha\gamma}+T_{a}^{\alpha\gamma})+v_{a}^{\gamma}v_{a}^{\delta}(p_{a}^{\alpha\beta}+T_{a}^{\alpha\beta})$
$+v_{a}^{\alpha}v_{a}^{\gamma}(p_{a}^{\beta\delta}+T_{a}^{\beta\delta})+v_{a}^{\beta}v_{a}^{\gamma}(p_{a}^{\alpha\delta}+T_{a}^{\alpha\delta})+v_{a}^{\alpha}v_{a}^{\beta}(p_{a}^{\gamma\delta}+T_{a}^{\gamma\delta})$
$+v_{a}^{\alpha}Q_{a}^{\beta\gamma\delta}+v_{a}^{\beta}Q_{a}^{\alpha\gamma\delta}+v_{a}^{\gamma}Q_{a}^{\alpha\beta\delta}+v_{a}^{\delta}Q_{a}^{\alpha\beta\gamma}$
$+Q_{a}^{\alpha\beta\gamma\delta}+T_{a}^{\alpha\beta\gamma\delta}+L_{a}^{\alpha\beta\gamma\delta}.$
(31)
This structure shows some similarity to the representations of the second-rank
momentum flux tensor (11) and the third-rank tensor (21), where the
higher-rank tensors are partially expressed via the concentration, the velocity
field and, if possible, tensors of smaller rank. However, this
transformation is partial, since there remains a tensor of equal rank
defined in the comoving frame. Moreover, this final tensor splits into several
parts. There are two parts for the second-rank momentum flux tensor: the
kinetic pressure (the quasi-classical part of thermal nature) and the quantum
Bohm potential (the quantum part). There are three parts for the third-rank
tensor $M^{\alpha\beta\gamma}$: the quasi-classical part of thermal
nature, the quantum part, and the combined thermal-quantum part. For the
fourth-rank tensor we also have three parts: the quasi-classical part of
thermal nature $Q_{a}^{\alpha\beta\gamma\delta}$, the quantum part
$T_{a}^{\alpha\beta\gamma\delta}$, and the combined thermal-quantum part
$L_{a}^{\alpha\beta\gamma\delta}$.
The developed model shows that an arbitrary quantum system can be modeled via
hydrodynamic equations, which are traditionally associated with fluid
dynamics. In quantum systems each particle shows the properties
of a wave, and this wave-like behavior is incorporated in the quantum
hydrodynamic model. This conclusion follows from the fact that quantum
hydrodynamics is derived from the Schrodinger equation, which contains this
information. This general concept is illustrated here for ultracold bosons,
but the quantum hydrodynamic method can be applied to other physical systems.
The similarity between quantum behavior and the dynamics of fluids has recently
found an unusual realization: it has been experimentally found that classical fluid
objects demonstrate quantum-like behavior Bush Ch 18 , Couder Nat 05 ,
Bush ARFM 15 . It is observed as a millimetric droplet walking on the
surface of a vibrating fluid, where the motion of the droplet is affected by the
resonant interaction with its own wave field Couder Nat 05 , Bush ARFM 15 .
Systems of walking droplets demonstrate various quantum effects Cristea-Platon Ch
18 , Chowdury Ch 18 , Budanur Ch 19 .
## III Contribution of interaction in the quantum hydrodynamic equations
Equations (6), (16), and (25) contain terms describing the interaction.
Approximate forms of these force fields of different tensor ranks are
necessary to obtain a truncated set of equations. In our case, it is necessary to
include the short-range nature of the interparticle interaction potential.
Moreover, the weak-interaction limit is considered. These two
assumptions are used to obtain simplified forms of $F^{\alpha}$,
$F^{\alpha\beta}$, $F^{\alpha\beta\gamma}$ and $F_{qf}^{\alpha\beta\gamma}$ in
this section.
### III.1 Interaction terms in the Euler equation
The short-range interaction in the Euler equation for a single species of quantum
particles can be written as the divergence of the symmetric quantum stress
tensor, $F^{\alpha}=-\partial^{\beta}\sigma^{\alpha\beta}$.
The first-order approximation over the interaction radius gives the following
expression for the quantum stress tensor (see also Andreev PRA08 )
$\sigma^{\alpha\beta}(\textbf{r},t)=-\frac{1}{2}\int dR\sum_{i,j;i\neq
j}\delta(\textbf{r}-\textbf{R}_{ij})\times$
$\times\frac{r^{\alpha}_{ij}r^{\beta}_{ij}}{\mid\textbf{r}_{ij}\mid}\frac{\partial
U(\textbf{r}_{ij})}{\partial\mid\textbf{r}_{ij}\mid}\Psi^{*}(R^{\prime},t)\Psi(R^{\prime},t),$
(32)
where $R^{\prime}=\\{...,\textbf{R}_{ij},...,\textbf{R}_{ij},...\\}$ with
vector $\textbf{R}_{ij}$ located at $i$-th and $j$-th places.
Expression (32) can be rewritten in terms of the two-particle concentration
$\sigma^{\alpha\beta}(\textbf{r},t)=-\frac{1}{2}Tr(n_{2}(\textbf{r},\textbf{r}^{\prime},t))\int
d\textbf{r}\frac{r^{\alpha}r^{\beta}}{r}\frac{\partial U(r)}{\partial r},$
(33)
where the following notion of trace is used:
$Trf(\textbf{r},\textbf{r}^{\prime})=f(\textbf{r},\textbf{r}).$ (34)
Consideration of the short-range interaction leads to the separation of the
integral containing the interaction potential, so that this characteristic of the
interaction does not depend on the motion or positions of the particles. The
integral simplifies in the following way:
$\int\frac{r^{\alpha}r^{\beta}}{r}\frac{\partial U}{\partial
r}d\textbf{r}=\frac{1}{3}\delta^{\alpha\beta}\int
rU^{\prime}d\textbf{r}=-\delta^{\alpha\beta}\int Ud\textbf{r}.$ (35)
The last integral in this expression is denoted as $g=\int Ud\textbf{r}$.
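The angular average leading to identity (35), $\frac{1}{3}\int rU^{\prime}d\textbf{r}=-\int Ud\textbf{r}=-g$, can be checked numerically for any short-range model potential; the Gaussian $U=e^{-r^{2}}$ below is an assumption for illustration only.

```python
import numpy as np

def trap(f, x):
    # simple trapezoidal quadrature
    return float(np.sum(0.5*(f[1:] + f[:-1])*np.diff(x)))

# Gaussian model potential (illustrative assumption; any short-range U works)
r = np.linspace(0.0, 10.0, 100001)
U = np.exp(-r**2)
dU = -2.0*r*np.exp(-r**2)                 # U'(r)

g = trap(4.0*np.pi*r**2 * U, r)           # g = integral of U over d^3r
lhs = trap(4.0*np.pi*r**2 * r*dU, r)/3.0  # (1/3) integral of r U' over d^3r
assert abs(lhs + g) < 1e-6*g              # identity (35): lhs = -g
```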
The two-particle concentration can be calculated in the weakly interacting
limit (see Andreev PRA08 )
$n_{2}(\textbf{r},\textbf{r}^{\prime},t)=n(\textbf{r},t)n(\textbf{r}^{\prime},t)+|\rho(\textbf{r},\textbf{r}^{\prime},t)|^{2}+\wp(\textbf{r},\textbf{r}^{\prime},t),$
(36)
where
$n(\textbf{r},t)=\sum_{f}n_{f}\varphi_{f}^{*}(\textbf{r},t)\varphi_{f}(\textbf{r},t)$
(37)
is the expression of concentration (2) in terms of the single particle wave
functions $\varphi_{f}(\textbf{r},t)$,
$\rho(\textbf{r}^{\prime},\textbf{r},t)=\sum_{f}n_{f}\varphi_{f}^{*}(\textbf{r},t)\varphi_{f}(\textbf{r}^{\prime},t)$
(38)
is the density matrix, and
$\wp(\textbf{r},\textbf{r}^{\prime},t)=\sum_{f}n_{f}(n_{f}-1)|\varphi_{f}(\textbf{r},t)|^{2}|\varphi_{f}(\textbf{r}^{\prime},t)|^{2}.$
(39)
The last term in equation (36) describes the interaction of pairs of particles being in
the same quantum state. This can be seen from the presence of a single quantum
number $f$ in both single-particle wave functions.
Expression (36) can be substituted into the general expression (8) for the force
field. However, equation (8) does not contain information about the short-range
nature of the considered interaction. The first and second terms are related
to particles located in different quantum states. This cannot be seen from
equation (36), but it follows from intermediate expressions which can be found in
Ref. Andreev PRA08 .
The trace of the two-particle concentration entering the quantum stress tensor
has the following form
$Trn_{2}(\textbf{r},\textbf{r}^{\prime},t)\approx
2(n^{2})^{\prime}+n^{2}_{B},$ (40)
where the symbol $^{\prime}$ in the first term on the right-hand side means that the
product of concentrations is related to particles in different quantum states.
Therefore, the first term has no $n^{2}_{B}$ contribution from the self-action of the
BEC. The dropped terms are described in Ref. Andreev IJMP B 13 .
Let us present the explicit contributions of the BEC concentration $n_{B}$ and the
normal fluid concentration $n_{n}$ to the first term on the right-hand side
of equation (40):
$(n^{2})^{\prime}=((n_{B}+n_{n})(n_{B}+n_{n}))^{\prime}=(n_{n}^{2}+2n_{B}n_{n}).$
(41)
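The algebra behind (40)-(41), dropping the $n_{B}^{2}$ self-action term from the full square, can be verified symbolically; the sympy sketch below is for illustration.

```python
import sympy as sp

nB, nn = sp.symbols('n_B n_n')
full = sp.expand((nB + nn)**2)     # (n_B + n_n)^2 entering Eq. (41)
primed = full - nB**2              # drop the BEC self-action term n_B^2, Eq. (40)
assert sp.simplify(primed - (nn**2 + 2*nB*nn)) == 0
```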
The last term in equation (40) appears for particles in the BEC, while the
first term on the right-hand side of equation (40) is related to the interaction of
particles in different quantum states. Hence, it gives contributions from
the interaction between the BEC and the normal fluid and from the interaction between
bosons belonging to the normal fluid.
The full expression of the quantum stress tensor for bosons at finite
temperature can be written in terms of the BEC concentration and the
normal fluid concentration:
$\sigma^{\alpha\beta}=\frac{1}{2}g\delta^{\alpha\beta}(2n_{n}^{2}+4n_{B}n_{n}+n^{2}_{B}).$
(42)
If we consider the dynamics of the BEC or of the normal fluid separately, we cannot
use the notion of the quantum stress tensor $\sigma^{\alpha\beta}$ for the
interaction of the subspecies in the same way as for the interaction of different
species. The first (last) term in equation (42) contains the self-action of the
normal fluid (of the BEC). The second term in equation (42) represents the
interaction between the BEC and the normal fluid.
If we consider the dynamics of the BEC, we need to extract the force exerted on the
BEC by the BEC and the normal fluid. This force is the superposition of a part of
the second term and the last term in equation (42):
$F_{B}^{\alpha}=-gn_{B}\partial^{\alpha}(2n_{n}+n_{B}).$ (43)
The second term in equation (42) can be rewritten as
$F_{2}^{\alpha}=-2g(n_{B}\partial^{\alpha}n_{n}+n_{n}\partial^{\alpha}n_{B})$.
The first part of this expression is used in equation (43).
If we consider the dynamics of the normal fluid, then the source of the field in
the first term of $n_{2}$ can be both the normal fluid and the BEC, while the last
term in equation (42) gives no contribution in this case:
$F_{n}^{\alpha}=-2gn_{n}\partial^{\alpha}(n_{n}+n_{B}).$ (44)
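As a consistency check, the partial forces (43) and (44) must sum to the full force $F^{\alpha}=-\partial^{\beta}\sigma^{\alpha\beta}$ with $\sigma^{\alpha\beta}$ from (42). A one-dimensional sympy sketch (with $n_{B}$, $n_{n}$ as arbitrary functions of $x$) confirms this:

```python
import sympy as sp

x, g = sp.symbols('x g')
nB = sp.Function('n_B')(x)
nn = sp.Function('n_n')(x)

sigma = sp.Rational(1, 2)*g*(2*nn**2 + 4*nB*nn + nB**2)  # Eq. (42), diagonal part
FB = -g*nB*sp.diff(2*nn + nB, x)                          # Eq. (43)
Fn = -2*g*nn*sp.diff(nn + nB, x)                          # Eq. (44)

# the partial forces sum to the full force F = -d(sigma)/dx
assert sp.expand(FB + Fn + sp.diff(sigma, x)) == 0
```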
The nonsymmetric decomposition allows one to use the notion of the NLSE. It is
a necessary condition for obtaining the GP equation at finite temperatures. Moreover,
the nonsymmetric form is traditionally used in the literature Griffin PRB 96 .
The same choice is made in the analysis of $j_{2}^{\beta}$ below.
#### III.1.1 Nonlinear Schrodinger equations
Dropping the pressure of the normal fluid and using the quantum Bohm potential in
form (12), we find a closed set of hydrodynamic equations. Introducing the
macroscopic wave functions of the BEC and of the normal fluid for
potential velocity fields as $\Phi_{a}=\sqrt{n_{a}}e^{\imath
m\phi_{a}/\hbar}$, where $\phi_{a}$ is the potential of the velocity field
$\textbf{v}_{a}=-\nabla\phi_{a}$, we obtain
$\imath\hbar\partial_{t}\Phi_{B}=\Biggl{(}-\frac{\hbar^{2}\nabla^{2}}{2m}+V_{ext}+g(n_{B}+2n_{n})\Biggr{)}\Phi_{B},$
(45)
and
$\imath\hbar\partial_{t}\Phi_{n}=\Biggl{(}-\frac{\hbar^{2}\nabla^{2}}{2m}+V_{ext}+2g(n_{B}+n_{n})\Biggr{)}\Phi_{n}.$
(46)
The kinetic energy (the first term on the right-hand side of equations (45)
and (46)) corresponds to the application of the noninteracting limit of the
quantum Bohm potential for the BEC and for the normal fluid.
The pressure of the normal fluid is dropped in equation (46).
Equations (45), (46) correspond to equations (127)-(129) of Ref. Dalfovo RMP
99 , though there is a difference in the form of presentation.
Therefore, accounting for the pressure evolution together with the pressure
flux evolution gives a generalization of the model represented by the
nonlinear Schrodinger equations (45), (46). The necessity of additional equations,
if one wants to include the quantum fluctuations, is demonstrated in
Refs. Andreev 2005 , Andreev 2007 , Andreev 2009 .
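For completeness, the coupled NLSEs (45)-(46) can be integrated with a standard split-step Fourier scheme. The sketch below uses an assumed harmonic trap, illustrative parameter values, Gaussian initial states, and first-order Lie splitting; it only checks that the partial norm is conserved, in line with the partial continuity equations (5).

```python
import numpy as np

hbar = m = 1.0
g = 0.5                                   # illustrative coupling constant
N = 512
x = np.linspace(-10.0, 10.0, N, endpoint=False)
dx = x[1] - x[0]
k = 2.0*np.pi*np.fft.fftfreq(N, dx)
dt = 1e-3
Vext = 0.5*x**2                           # assumed harmonic trap

# assumed Gaussian initial states for the BEC and the normal fluid
PhiB = np.exp(-x**2/2).astype(complex)
Phin = 0.3*np.exp(-x**2/4).astype(complex)
normB0 = np.sum(np.abs(PhiB)**2)*dx

Kin = np.exp(-1j*hbar*k**2*dt/(2.0*m))    # kinetic factor, first-order Lie splitting
for _ in range(1000):
    nB, nn = np.abs(PhiB)**2, np.abs(Phin)**2
    PhiB = PhiB*np.exp(-1j*dt*(Vext + g*(nB + 2.0*nn))/hbar)   # nonlinearity of Eq. (45)
    Phin = Phin*np.exp(-1j*dt*(Vext + 2.0*g*(nB + nn))/hbar)   # nonlinearity of Eq. (46)
    PhiB = np.fft.ifft(Kin*np.fft.fft(PhiB))
    Phin = np.fft.ifft(Kin*np.fft.fft(Phin))

# each step is unitary, so the partial norm is conserved
assert abs(np.sum(np.abs(PhiB)**2)*dx - normB0) < 1e-8
```

Both split factors are unitary, so each partial norm is conserved to roundoff regardless of the coupling strength; this mirrors the fact that the nonlinear terms in (45)-(46) are real potentials.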
### III.2 Interaction terms in the pressure evolution equation
The general form of the pressure evolution equation contains the interaction via
the second-rank force tensor field. Its main contribution is obtained in the
first order over the interaction radius. The result appears in the following
form:
$F^{\alpha\beta}(\textbf{r},t)=\frac{1}{8m^{2}}\partial^{\gamma}\int
dR\sum_{i,j;i\neq j}\delta(\textbf{r}-\textbf{R}_{ij})\times$
$\times\frac{r^{\beta}_{ij}r^{\gamma}_{ij}}{\mid\textbf{r}_{ij}\mid}\frac{\partial
U(\textbf{r}_{ij})}{\partial\mid\textbf{r}_{ij}\mid}\biggl{[}\Psi^{*}(R^{\prime},t)(\hat{p}_{(1)}^{\alpha}+\hat{p}_{(2)}^{\alpha})\Psi(R^{\prime},t)+c.c.\biggr{]}$
$-\frac{1}{8m^{2}}\int dR\sum_{i,j;i\neq
j}\delta(\textbf{r}-\textbf{R}_{ij})\frac{r^{\alpha}_{ij}r^{\gamma}_{ij}}{\mid\textbf{r}_{ij}\mid}\frac{\partial
U(\textbf{r}_{ij})}{\partial\mid\textbf{r}_{ij}\mid}\times$
$\times\biggl{[}(\partial^{\gamma}_{(1)}-\partial^{\gamma}_{(2)})\Psi^{*}(R^{\prime},t)(\hat{p}_{(1)}^{\alpha}-\hat{p}_{(2)}^{\alpha})\Psi(R^{\prime},t)$
$+\Psi^{*}(R^{\prime},t)(\partial^{\gamma}_{(1)}-\partial^{\gamma}_{(2)})(\hat{p}_{(1)}^{\alpha}-\hat{p}_{(2)}^{\alpha})\Psi(R^{\prime},t)+c.c.\biggr{]}.$
(47)
Form (47) appears from the expansion of the force tensor field (17) using the
short-range nature of the interaction (see Andreev PRA08 for the method as described
for the force field, or Andreev 2001 for an application of this method to
fermions).
For the force tensor field $F^{\alpha\beta}$ we could present intermediate
expressions like equations (33) and (36) obtained for the force field
$F^{\alpha}=-\partial^{\beta}\sigma^{\alpha\beta}$. However, the corresponding
expressions for $F^{\alpha\beta}$ are rather large. Hence, we start
the presentation with an equation similar to equation (40), obtained after taking
the trace of the intermediate expressions.
Therefore, we obtain the following simplification of equation (47) for the
force tensor field $F^{\alpha\beta}$:
$F^{\alpha\beta}=-\frac{g}{4m^{2}}\partial^{\beta}[2(n\Lambda^{\alpha})^{\prime}+n_{B}\Lambda^{\alpha}_{B}]$
$+\frac{\imath}{\hbar}\frac{g}{4m^{2}}[2(nr^{\alpha\beta})^{\prime}-2(\Lambda^{\alpha}\Lambda^{\beta})^{\prime}+n_{B}r^{\alpha\beta}_{B}-\Lambda^{\alpha}_{B}\Lambda^{\beta}_{B}]+c.c.,$
(48)
where we use the intermediate functions $\Lambda^{\alpha}$ and
$r^{\alpha\beta}$ with the following definitions:
$\Lambda^{\alpha}=\sum_{f}n_{f}\varphi_{f}^{*}\hat{p}^{\alpha}\varphi_{f}=mj^{\alpha}-\imath\frac{\hbar}{2}\partial^{\alpha}n,$
(49)
and
$r^{\alpha\beta}=\sum_{f}n_{f}\varphi_{f}^{*}\hat{p}^{\alpha}\hat{p}^{\beta}\varphi_{f}$
$=m^{2}\biggl{(}nv^{\alpha}v^{\beta}+p^{\alpha\beta}-\frac{\hbar^{2}}{m^{2}}\sum_{f}n_{f}a_{f}\partial^{\alpha}\partial^{\beta}a_{f}\biggr{)}$
$-\imath\frac{m\hbar}{2}[\partial^{\alpha}(nv^{\beta})+\partial^{\beta}(nv^{\alpha})].$
(50)
The calculation of the functions $\Lambda^{\alpha}$ and $r^{\alpha\beta}$ involves the Madelung transformation of the single-particle wave functions $\varphi_{f}(\textbf{r},t)=a_{f}e^{\imath S_{f}}$, with real amplitude $a_{f}$ and phase $S_{f}$, so that the concentration is $n=\sum_{f}n_{f}a_{f}^{2}$. Next, we use the following definitions of the velocity field and the pressure tensor in terms of the single-particle wave functions: $nv^{\alpha}=\sum_{f}n_{f}a_{f}^{2}(\hbar\partial^{\alpha}S_{f}/m)$ and $p^{\alpha\beta}=\sum_{f}n_{f}a_{f}^{2}u_{f}^{\alpha}u_{f}^{\beta}$, where $u_{f}^{\alpha}=(\hbar\partial^{\alpha}S_{f}/m)-v^{\alpha}$.
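As a consistency check, the one-dimensional, single-mode version of definition (49) can be verified symbolically: with the Madelung form $\varphi=a(x)e^{\imath S(x)}$, the bilinear $\varphi^{*}\hat{p}\varphi$ reproduces $mj-\imath(\hbar/2)\partial_{x}n$. The following sympy sketch is purely illustrative and not part of the original derivation; the symbol names are ours.

```python
import sympy as sp

x = sp.symbols('x', real=True)
hbar, m = sp.symbols('hbar m', positive=True)
a = sp.Function('a', real=True)(x)   # Madelung amplitude
S = sp.Function('S', real=True)(x)   # Madelung phase

phi = a * sp.exp(sp.I * S)                 # single-mode wave function
p_phi = -sp.I * hbar * sp.diff(phi, x)     # momentum operator acting on phi

Lam = sp.conjugate(phi) * p_phi            # single-mode analog of Lambda

n = a**2                                   # single-mode density
j = n * hbar * sp.diff(S, x) / m           # single-mode particle current

# Lambda = m*j - i*(hbar/2)*dn/dx, cf. equation (49)
residual = sp.simplify(Lam - (m*j - sp.I*(hbar/2)*sp.diff(n, x)))
print(residual)  # -> 0
```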
Let us represent terms like $(n\Lambda^{\alpha})^{\prime}$ in the explicit
form:
$F^{\alpha\beta}=-\frac{g}{4m^{2}}\partial^{\beta}[2n_{n}\Lambda^{\alpha}_{n}$
$+2n_{n}\Lambda^{\alpha}_{B}+2n_{B}\Lambda^{\alpha}_{n}+n_{B}\Lambda^{\alpha}_{B}]$
$+\frac{\imath}{\hbar}\frac{g}{4m^{2}}[2n_{n}r^{\alpha\beta}_{n}-2\Lambda^{\alpha}_{n}\Lambda^{\beta}_{n}+2n_{n}r^{\alpha\beta}_{B}-2\Lambda^{\alpha}_{n}\Lambda^{\beta}_{B}$
$+2n_{B}r^{\alpha\beta}_{n}-2\Lambda^{\alpha}_{B}\Lambda^{\beta}_{n}+n_{B}r^{\alpha\beta}_{B}-\Lambda^{\alpha}_{B}\Lambda^{\beta}_{B}]+c.c..$
(51)
Further calculation gives the representation of tensor $F^{\alpha\beta}$ in terms of hydrodynamic functions:
$F^{\alpha\beta}=-\frac{g}{2m}\partial^{\beta}[2n_{n}j_{n}^{\alpha}+2n_{n}j_{B}^{\alpha}+2n_{B}j_{n}^{\alpha}+n_{B}j_{B}^{\alpha}]$
$+\frac{g}{4m}\biggl{[}2n_{n}(\partial^{\alpha}j_{n}^{\beta}+\partial^{\beta}j_{n}^{\alpha})-2j_{n}^{\alpha}\partial^{\beta}n_{n}-2j_{n}^{\beta}\partial^{\alpha}n_{n}$
$+2n_{n}(\partial^{\alpha}j_{B}^{\beta}+\partial^{\beta}j_{B}^{\alpha})-2j_{B}^{\alpha}\partial^{\beta}n_{n}-2j_{B}^{\beta}\partial^{\alpha}n_{n}$
$+2n_{B}(\partial^{\alpha}j_{n}^{\beta}+\partial^{\beta}j_{n}^{\alpha})-2j_{n}^{\alpha}\partial^{\beta}n_{B}-2j_{n}^{\beta}\partial^{\alpha}n_{B}$
$+n_{B}(\partial^{\alpha}j_{B}^{\beta}+\partial^{\beta}j_{B}^{\alpha})-j_{B}^{\alpha}\partial^{\beta}n_{B}-j_{B}^{\beta}\partial^{\alpha}n_{B}\biggr{]}.$
(52)
The momentum flux evolution equation contains the symmetric combination of the
force tensor fields $F^{\alpha\beta}$:
$F^{\alpha\beta}+F^{\beta\alpha}=-\frac{g}{m}\biggl{[}2(j_{n}^{\alpha}\partial^{\beta}n_{n}+j_{n}^{\beta}\partial^{\alpha}n_{n})$
$+2(j_{n}^{\alpha}\partial^{\beta}n_{B}+j_{n}^{\beta}\partial^{\alpha}n_{B})$
$+2(j_{B}^{\alpha}\partial^{\beta}n_{n}+j_{B}^{\beta}\partial^{\alpha}n_{n})+j_{B}^{\alpha}\partial^{\beta}n_{B}+j_{B}^{\beta}\partial^{\alpha}n_{B}\biggr{]}.$
(53)
The zero temperature analysis demonstrates that there is nonzero pressure for
the BECs, caused by the quantum fluctuations entering the set of hydrodynamic
equations via the evolution of the pressure flux Andreev 2005 , Andreev 2007 ,
Andreev 2009 . The pressure also exists for the normal fluid. Hence, we decompose the momentum flux evolution equation into two partial equations for $\Pi^{\alpha\beta}_{n}$ and $\Pi^{\alpha\beta}_{B}$. Formally, this decomposition is presented by equation (20). To complete this procedure we need to split the force tensor field $F^{\alpha\beta}+F^{\beta\alpha}$
$=F_{B}^{\alpha\beta}+F_{B}^{\beta\alpha}$
$+F_{n}^{\alpha\beta}+F_{n}^{\beta\alpha}$, where
$F_{B}^{\alpha\beta}+F_{B}^{\beta\alpha}=-\frac{g}{m}\biggl{[}2(j_{B}^{\alpha}\partial^{\beta}n_{n}+j_{B}^{\beta}\partial^{\alpha}n_{n})$
$+j_{B}^{\alpha}\partial^{\beta}n_{B}+j_{B}^{\beta}\partial^{\alpha}n_{B}\biggr{]},$
(54)
and
$F_{n}^{\alpha\beta}+F_{n}^{\beta\alpha}=-\frac{g}{m}\biggl{[}2(j_{n}^{\alpha}\partial^{\beta}n_{n}+j_{n}^{\beta}\partial^{\alpha}n_{n})$
$+2(j_{n}^{\alpha}\partial^{\beta}n_{B}+j_{n}^{\beta}\partial^{\alpha}n_{B})\biggr{]}.$
(55)
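The splitting (54), (55) must recover the total force (53) when the two parts are summed. This can be checked componentwise; below is an illustrative sympy sketch for one fixed pair of indices $(\alpha,\beta)$, with scalar symbols standing in for the tensor components (the symbol names are ours, not the paper's).

```python
import sympy as sp

# scalar stand-ins for one fixed (alpha, beta) component pair
g, m = sp.symbols('g m', positive=True)
jn_a, jn_b, jB_a, jB_b = sp.symbols('jn_a jn_b jB_a jB_b')           # currents
dnn_a, dnn_b, dnB_a, dnB_b = sp.symbols('dnn_a dnn_b dnB_a dnB_b')   # density gradients

# eq. (53): total symmetric force combination
total = -(g/m)*(2*(jn_a*dnn_b + jn_b*dnn_a)
                + 2*(jn_a*dnB_b + jn_b*dnB_a)
                + 2*(jB_a*dnn_b + jB_b*dnn_a)
                + jB_a*dnB_b + jB_b*dnB_a)

# eq. (54): part acting on the BEC
F_B = -(g/m)*(2*(jB_a*dnn_b + jB_b*dnn_a) + jB_a*dnB_b + jB_b*dnB_a)

# eq. (55): part acting on the normal fluid
F_n = -(g/m)*(2*(jn_a*dnn_b + jn_b*dnn_a) + 2*(jn_a*dnB_b + jn_b*dnB_a))

print(sp.simplify(total - (F_B + F_n)))  # -> 0
```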
After the extraction of the pressure tensor $p^{\alpha\beta}$ from the momentum flux $\Pi^{\alpha\beta}$, we have an extra contribution of the interaction in the pressure evolution equation compared with equations (20). It contains the following combination: $F^{\alpha\beta}-v^{\beta}F^{\alpha}$. Using equations (43), (44), (54), and (55), we find $F^{\alpha\beta}+F^{\beta\alpha}-v^{\alpha}F^{\beta}-v^{\beta}F^{\alpha}=0$ for the BECs and for the normal fluid.
A pressure evolution equation is used in Kavoulakis PRA 98 for bosons above the critical temperature. Equation 4 of Ref. Kavoulakis PRA 98 contains the force in the form $n\textbf{v}\cdot\textbf{F}$, which generally differs from the combination $F^{\alpha\beta}-v^{\beta}F^{\alpha}$ obtained above.
### III.3 The short-range interaction in the third rank tensor evolution
equation
The evolution equation for the third rank tensor $M^{\alpha\beta\gamma}$ (25) contains two kinds of third rank force tensors: $F^{\alpha\beta\gamma}$ (27) and $F_{qf}^{\alpha\beta\gamma}$ (26). We consider them separately.
#### III.3.1 Quasi-classical third rank force tensor
Tensor $F_{qf}^{\alpha\beta\gamma}$ (26) is proportional to the Planck constant, so it goes to zero in the classical limit. The third rank force tensor $F^{\alpha\beta\gamma}$ (27) is different: it has a nonzero limit in the quasi-classical regime. However, we are interested in the value of tensor $F^{\alpha\beta\gamma}$ (27) in one of the quantum regimes, for the degenerate bosons.
The third rank force tensor $F^{\alpha\beta\gamma}$ (27) calculated to the first order in the interaction radius appears in the following form:
$F^{\alpha\beta\gamma}(\textbf{r},t)=\frac{1}{8m^{3}}\partial^{\mu}\int
dR\sum_{i,j;i\neq
j}\delta(\textbf{r}-\textbf{R}_{ij})\frac{r^{\mu}_{ij}r^{\gamma}_{ij}}{\mid\textbf{r}_{ij}\mid}\frac{\partial
U(\textbf{r}_{ij})}{\partial\mid\textbf{r}_{ij}\mid}\biggl{[}\Psi^{*}(R^{\prime},t)\hat{p}_{(1)}^{\alpha}\hat{p}_{(1)}^{\beta}\Psi(R^{\prime},t)+\hat{p}_{(1)}^{\beta*}\Psi^{*}(R^{\prime},t)\hat{p}_{(1)}^{\alpha}\Psi(R^{\prime},t)+c.c.\biggr{]}$
$-\frac{1}{8m^{3}}\int dR\sum_{i,j;i\neq
j}\delta(\textbf{r}-\textbf{R}_{ij})\frac{r^{\mu}_{ij}r^{\gamma}_{ij}}{\mid\textbf{r}_{ij}\mid}\frac{\partial
U(\textbf{r}_{ij})}{\partial\mid\textbf{r}_{ij}\mid}\biggl{[}(\partial^{\mu}_{(1)}-\partial^{\mu}_{(2)})\Psi^{*}(R^{\prime},t)\hat{p}_{(1)}^{\alpha}\hat{p}_{(1)}^{\beta}\Psi(R^{\prime},t)$
$+\Psi^{*}(R^{\prime},t)(\partial^{\mu}_{(1)}-\partial^{\mu}_{(2)})\hat{p}_{(1)}^{\alpha}\hat{p}_{(1)}^{\beta}\Psi(R^{\prime},t)+\hat{p}_{(1)}^{\beta*}(\partial^{\mu}_{(1)}-\partial^{\mu}_{(2)})\Psi^{*}(R^{\prime},t)\hat{p}_{(1)}^{\alpha}\Psi(R^{\prime},t)+\hat{p}_{(1)}^{\beta*}\Psi^{*}(R^{\prime},t)(\partial^{\mu}_{(1)}-\partial^{\mu}_{(2)})\hat{p}_{(1)}^{\alpha}\Psi(R^{\prime},t)+c.c.\biggr{]}.$
(56)
Here, the part of the expression for $F^{\alpha\beta\gamma}$ containing the interaction potential appears as an independent multiplier. It has the same form as the integral in the Euler equation (35). Hence, tensor $F^{\alpha\beta\gamma}$ is proportional to the Gross-Pitaevskii interaction constant.
Further calculation in the weakly interacting limit, following the method
described in Ref. Andreev PRA08 , gives an intermediate representation of the
third rank force tensor:
$F^{\alpha\beta\gamma}(\textbf{r},t)=-\frac{g}{4m^{3}}\Biggl{[}\Pi_{B}^{\alpha\beta}\partial^{\gamma}n_{B}+\Pi^{\alpha\beta}\partial^{\gamma}n+\biggl{(}n\sum_{f}n_{f}\partial^{\gamma}\varphi_{f}^{*}\hat{p}^{\alpha}\hat{p}^{\beta}\varphi_{f}+\frac{\imath}{\hbar}\Lambda^{\gamma}r^{\alpha\beta}-\frac{\imath}{\hbar}\Lambda^{\alpha*}\kappa^{\gamma\beta}+\frac{\imath}{\hbar}\Lambda^{\beta}\kappa^{\alpha\gamma}+c.c.\biggr{)}\Biggr{]},$
(57)
where
$\kappa^{\alpha\beta}=\sum_{f}n_{f}\hat{p}^{\alpha*}\varphi_{f}^{*}\cdot\hat{p}^{\beta}\varphi_{f}.$
(58)
Function $\kappa^{\alpha\beta}$ is a nonsymmetric tensor. It has a symmetric real part and an antisymmetric imaginary part:
$\kappa^{\alpha\beta}=m^{2}\biggl{(}nv^{\alpha}v^{\beta}+p^{\alpha\beta}+\frac{\hbar^{2}}{m^{2}}\sum_{f}n_{f}\partial^{\alpha}\cdot
a_{f}\partial^{\beta}a_{f}\biggr{)}$ $-\frac{1}{2}\imath
m\hbar[v^{\alpha}\partial^{\beta}n-v^{\beta}\partial^{\alpha}n+\sum_{f}n_{f}a_{f}(u^{\alpha}\partial^{\beta}a_{f}-u^{\beta}\partial^{\alpha}a_{f})].$
(59)
No specific notation is introduced for the third rank tensor $\sum_{f}n_{f}\partial^{\gamma}\varphi_{f}^{*}\hat{p}^{\alpha}\hat{p}^{\beta}\varphi_{f}$. In our calculations we need its real part multiplied by 2:
$\sum_{f}n_{f}\partial^{\gamma}\varphi_{f}^{*}\hat{p}^{\alpha}\hat{p}^{\beta}\varphi_{f}+c.c.=2m^{2}\Biggl{[}\frac{1}{2}\partial^{\gamma}n\cdot
v^{\alpha}v^{\beta}-\partial^{\alpha}n\cdot
v^{\beta}v^{\gamma}-\partial^{\beta}n\cdot
v^{\alpha}v^{\gamma}-nv^{\gamma}(\partial^{\beta}v^{\alpha}+\partial^{\alpha}v^{\beta})$
$+\sum_{f}n_{f}a_{f}(\partial^{\gamma}a_{f})u_{f}^{\alpha}u_{f}^{\beta}-\frac{1}{2}\partial^{\beta}p^{\alpha\gamma}-\frac{1}{2}\partial^{\alpha}p^{\beta\gamma}+\frac{1}{2}\sum_{f}n_{f}a_{f}^{2}(u_{f}^{\beta}\partial^{\alpha}u_{f}^{\gamma}+u_{f}^{\alpha}\partial^{\beta}u_{f}^{\gamma})$
$+v^{\alpha}\sum_{f}n_{f}a_{f}(\partial^{\gamma}a_{f}\cdot
u_{f}^{\beta}-\partial^{\beta}a_{f}\cdot
u_{f}^{\gamma})+v^{\beta}\sum_{f}n_{f}a_{f}(\partial^{\gamma}a_{f}\cdot
u_{f}^{\alpha}-\partial^{\alpha}a_{f}\cdot
u_{f}^{\gamma})-\frac{\hbar^{2}}{m^{2}}\sum_{f}n_{f}\partial^{\gamma}a_{f}\cdot\partial^{\alpha}\partial^{\beta}a_{f}\biggr{)}\Biggr{]}.$
(60)
Equation (57) includes the term $\Pi_{B}^{\alpha\beta}\partial^{\gamma}n_{B}$, which describes the full contribution of the BEC to $F^{\alpha\beta\gamma}$.
The second term in equation (57) is an analog of the first term in equation (36). All following terms in equation (57) are analogs of the second term in equation (36); they can be interpreted as the exchange interaction.
Further calculation of equation (57) gives a partially truncated expression, presented mainly via the macroscopic hydrodynamic functions:
$F^{\alpha\beta\gamma}(\textbf{r},t)=-\frac{g}{4}\biggl{[}4\Pi_{B}^{\alpha\beta}\partial^{\gamma}n_{B}+4\Pi^{\alpha\beta}\partial^{\gamma}n+\partial^{\alpha}n\biggl{(}\Pi^{\beta\gamma}+\frac{\hbar^{2}}{4m^{2}}\partial^{\beta}\partial^{\gamma}n\biggr{)}+\partial^{\beta}n\biggl{(}\Pi^{\alpha\gamma}+\frac{\hbar^{2}}{4m^{2}}\partial^{\alpha}\partial^{\gamma}n\biggr{)}+\partial^{\gamma}n\biggl{(}\Pi^{\alpha\beta}-\frac{\hbar^{2}}{4m^{2}}\partial^{\alpha}\partial^{\beta}n\biggr{)}$
$+n\biggl{(}3\partial^{\gamma}n\cdot
v^{\alpha}v^{\beta}-2\partial^{\alpha}n\cdot
v^{\beta}v^{\gamma}-2\partial^{\beta}n\cdot
v^{\alpha}v^{\gamma}-nv^{\gamma}(\partial^{\alpha}v^{\beta}+\partial^{\beta}v^{\alpha})-\partial^{\beta}p^{\alpha\gamma}-\partial^{\alpha}p^{\beta\gamma}$
$+\sum_{f}n_{f}(\partial^{\gamma}a_{f}^{2})u_{f}^{\alpha}u_{f}^{\beta}+\sum_{f}n_{f}a_{f}^{2}(u_{f}^{\beta}\partial^{\alpha}u_{f}^{\gamma}+u_{f}^{\alpha}\partial^{\beta}u_{f}^{\gamma})+\frac{3}{2}v^{\alpha}\sum_{f}n_{f}(\partial^{\gamma}a_{f}^{2}\cdot
u_{f}^{\beta}-\partial^{\beta}a_{f}^{2}\cdot u_{f}^{\gamma})$
$+\frac{3}{2}v^{\beta}\sum_{f}n_{f}(\partial^{\gamma}a_{f}^{2}\cdot
u_{f}^{\alpha}-\partial^{\alpha}a_{f}^{2}\cdot
u_{f}^{\gamma})-2\frac{\hbar^{2}}{m^{2}}\sum_{f}n_{f}\partial^{\gamma}a_{f}\cdot\partial^{\alpha}\partial^{\beta}a_{f}\biggr{)}\biggr{]}.$
(61)
The evolution equation for the third rank tensor $M^{\alpha\beta\gamma}$ contains the symmetric combination of the third rank force tensors (61), which can be presented in the following form:
$F^{\alpha\beta\gamma}+F^{\beta\gamma\alpha}+F^{\gamma\alpha\beta}=-\frac{g}{4}\biggl{[}4(\Pi_{B}^{\beta\gamma}\partial^{\alpha}n_{B}+\Pi_{B}^{\alpha\gamma}\partial^{\beta}n_{B}+\Pi_{B}^{\alpha\beta}\partial^{\gamma}n_{B})+4(\Pi^{\beta\gamma}\partial^{\alpha}n+\Pi^{\alpha\gamma}\partial^{\beta}n+\Pi^{\alpha\beta}\partial^{\gamma}n)$
$+\partial^{\alpha}n\biggl{(}3\Pi^{\beta\gamma}+\frac{\hbar^{2}}{4m^{2}}\partial^{\beta}\partial^{\gamma}n\biggr{)}+\partial^{\beta}n\biggl{(}3\Pi^{\alpha\gamma}+\frac{\hbar^{2}}{4m^{2}}\partial^{\alpha}\partial^{\gamma}n\biggr{)}+\partial^{\gamma}n\biggl{(}3\Pi^{\alpha\beta}+\frac{\hbar^{2}}{4m^{2}}\partial^{\alpha}\partial^{\beta}n\biggr{)}$
$-n(\partial^{\alpha}\Pi^{\beta\gamma}+\partial^{\beta}\Pi^{\alpha\gamma}+\partial^{\gamma}\Pi^{\alpha\beta})-\frac{3\hbar^{2}}{4m^{2}}n\partial^{\alpha}\partial^{\beta}\partial^{\gamma}n\biggr{]}.$
(62)
The pressure flux evolution equation obtained as the reduction of the third
rank tensor $M^{\alpha\beta\gamma}$ evolution equation contains the following
combination of the force fields:
$F^{\alpha\beta\gamma}+F^{\beta\gamma\alpha}+F^{\gamma\alpha\beta}-\frac{1}{mn}(F^{\alpha}\Pi^{\beta\gamma}+F^{\beta}\Pi^{\alpha\gamma}+F^{\gamma}\Pi^{\alpha\beta})=\frac{g}{4}[\partial^{\alpha}(n\Pi^{\beta\gamma})+\partial^{\beta}(n\Pi^{\alpha\gamma})+\partial^{\gamma}(n\Pi^{\alpha\beta})]^{\prime}$
$+\frac{g\hbar^{2}}{16m^{2}}[3n\partial^{\alpha}\partial^{\beta}\partial^{\gamma}n-\partial^{\alpha}n\cdot\partial^{\beta}\partial^{\gamma}n-\partial^{\beta}n\cdot\partial^{\alpha}\partial^{\gamma}n-\partial^{\gamma}n\cdot\partial^{\alpha}\partial^{\beta}n]^{\prime},$
(63)
where the symbol $[\,]^{\prime}$ specifies that the product of functions describing the BEC is excluded, similarly to equations (40) and (41).
The quantum part can be represented as
$F^{\alpha\beta\gamma}+F^{\beta\gamma\alpha}+F^{\gamma\alpha\beta}-\frac{1}{mn}(F^{\alpha}\Pi^{\beta\gamma}+F^{\beta}\Pi^{\alpha\gamma}+F^{\gamma}\Pi^{\alpha\beta})=\frac{g}{4}[\partial^{\alpha}(n\Pi^{\beta\gamma})+\partial^{\beta}(n\Pi^{\alpha\gamma})+\partial^{\gamma}(n\Pi^{\alpha\beta})]^{\prime}$
$+\frac{g\hbar^{2}}{16m^{2}}[\partial^{\alpha}(n\partial^{\beta}\partial^{\gamma}n-\partial^{\beta}n\cdot\partial^{\gamma}n)+\partial^{\beta}(n\partial^{\alpha}\partial^{\gamma}n-\partial^{\alpha}n\cdot\partial^{\gamma}n)+\partial^{\gamma}(n\partial^{\alpha}\partial^{\beta}n-\partial^{\alpha}n\cdot\partial^{\beta}n)]^{\prime}.$
(64)
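The equivalence of the quantum parts of (63) and (64) is a differential identity. It can be illustrated in one dimension, taking all three indices along $x$; the following sympy sketch (our notation) checks that $3n n'''-3n'n''$ equals $3\,\partial_{x}(n n''-n'^{2})$.

```python
import sympy as sp

x = sp.symbols('x', real=True)
n = sp.Function('n', real=True)(x)

# quantum part of eq. (63), all indices along x
lhs = 3*n*sp.diff(n, x, 3) - 3*sp.diff(n, x)*sp.diff(n, x, 2)

# quantum part of eq. (64), all indices along x: three identical gradient terms
rhs = 3*sp.diff(n*sp.diff(n, x, 2) - sp.diff(n, x)**2, x)

print(sp.simplify(lhs - rhs))  # -> 0
```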
It can be considered as
$F^{\alpha\beta\gamma}+F^{\beta\gamma\alpha}+F^{\gamma\alpha\beta}-\frac{1}{mn}(F^{\alpha}\Pi^{\beta\gamma}+F^{\beta}\Pi^{\alpha\gamma}+F^{\gamma}\Pi^{\alpha\beta})$
$=\tilde{F}^{\alpha\beta\gamma}+\tilde{F}^{\beta\gamma\alpha}+\tilde{F}^{\gamma\alpha\beta}$,
where
$\tilde{F}^{\alpha\beta\gamma}=\frac{g}{4}[\partial^{\alpha}(n\Pi^{\beta\gamma})]^{\prime}+\frac{g\hbar^{2}}{16m^{2}}[\partial^{\alpha}(n\partial^{\beta}\partial^{\gamma}n-\partial^{\beta}n\cdot\partial^{\gamma}n)]^{\prime}.$
(65)
It is the derivative of a second rank tensor.
In the zero temperature limit we find $F^{\alpha\beta\gamma}=-(g/4m^{3})\Pi_{B}^{\alpha\beta}\partial^{\gamma}n_{B}$. The transition from the evolution equation for tensor $M^{\alpha\beta\gamma}$ to the evolution equation for tensor $Q^{\alpha\beta\gamma}$, which is the sibling of $M^{\alpha\beta\gamma}$ but with the pressure flux $Q^{\alpha\beta\gamma}$ defined in the comoving frame, leads to the cancellation of this term. Hence, the nonzero contribution comes from the quantum part of the third rank force tensor $F_{qf}^{\alpha\beta\gamma}$. At nonzero temperature, $F^{\alpha\beta\gamma}$ gives a nonzero contribution at the transition to the pressure flux evolution equation.
Equation (65) is obtained for all bosons. We need to separate it into the force acting on the BEC and the force acting on the normal fluid.
Finally, we obtain
$\tilde{F}_{n}^{\alpha\beta\gamma}+\tilde{F}_{n}^{\beta\gamma\alpha}+\tilde{F}_{n}^{\gamma\alpha\beta}=\frac{g}{4}[\partial^{\alpha}(n_{n}\Pi_{n}^{\beta\gamma})+\partial^{\beta}(n_{n}\Pi_{n}^{\alpha\gamma})+\partial^{\gamma}(n_{n}\Pi_{n}^{\alpha\beta})$
$+n_{n}(\partial^{\alpha}\Pi_{B}^{\beta\gamma}+\partial^{\beta}\Pi_{B}^{\alpha\gamma}+\partial^{\gamma}\Pi_{B}^{\alpha\beta})+\Pi_{n}^{\beta\gamma}\partial^{\alpha}n_{B}+\Pi_{n}^{\alpha\gamma}\partial^{\beta}n_{B}+\Pi_{n}^{\alpha\beta}\partial^{\gamma}n_{B}]$
$+\frac{g\hbar^{2}}{16m^{2}}\biggl{[}3n_{n}\partial^{\alpha}\partial^{\beta}\partial^{\gamma}n_{n}+3n_{n}\partial^{\alpha}\partial^{\beta}\partial^{\gamma}n_{B}-\partial^{\alpha}n_{n}\cdot\partial^{\beta}\partial^{\gamma}n_{n}-\partial^{\beta}n_{n}\cdot\partial^{\alpha}\partial^{\gamma}n_{n}-\partial^{\gamma}n_{n}\cdot\partial^{\alpha}\partial^{\beta}n_{n}$
$-\frac{1}{2}\partial^{\alpha}n_{n}\cdot\partial^{\beta}\partial^{\gamma}n_{B}-\frac{1}{2}\partial^{\beta}n_{n}\cdot\partial^{\alpha}\partial^{\gamma}n_{B}-\frac{1}{2}\partial^{\gamma}n_{n}\cdot\partial^{\alpha}\partial^{\beta}n_{B}-\frac{1}{2}\partial^{\alpha}n_{B}\cdot\partial^{\beta}\partial^{\gamma}n_{n}-\frac{1}{2}\partial^{\beta}n_{B}\cdot\partial^{\alpha}\partial^{\gamma}n_{n}-\frac{1}{2}\partial^{\gamma}n_{B}\cdot\partial^{\alpha}\partial^{\beta}n_{n}\biggr{]},$
(66)
and
$\tilde{F}_{B}^{\alpha\beta\gamma}+\tilde{F}_{B}^{\beta\gamma\alpha}+\tilde{F}_{B}^{\gamma\alpha\beta}=\frac{g}{4}[n_{B}(\partial^{\alpha}\Pi_{n}^{\beta\gamma}+\partial^{\beta}\Pi_{n}^{\alpha\gamma}+\partial^{\gamma}\Pi_{n}^{\alpha\beta})$
$+\Pi_{B}^{\beta\gamma}\partial^{\alpha}n_{n}+\Pi_{B}^{\alpha\gamma}\partial^{\beta}n_{n}+\Pi_{B}^{\alpha\beta}\partial^{\gamma}n_{n}]+\frac{g\hbar^{2}}{16m^{2}}\biggl{[}3n_{B}\partial^{\alpha}\partial^{\beta}\partial^{\gamma}n_{n}$
$-\frac{1}{2}\partial^{\alpha}n_{B}\cdot\partial^{\beta}\partial^{\gamma}n_{n}-\frac{1}{2}\partial^{\beta}n_{B}\cdot\partial^{\alpha}\partial^{\gamma}n_{n}-\frac{1}{2}\partial^{\gamma}n_{B}\cdot\partial^{\alpha}\partial^{\beta}n_{n}-\frac{1}{2}\partial^{\alpha}n_{n}\cdot\partial^{\beta}\partial^{\gamma}n_{B}-\frac{1}{2}\partial^{\beta}n_{n}\cdot\partial^{\alpha}\partial^{\gamma}n_{B}-\frac{1}{2}\partial^{\gamma}n_{n}\cdot\partial^{\alpha}\partial^{\beta}n_{B}\biggr{]}.$
(67)
Equations (66) and (67) give the final expressions for the quasi-classical force fields in the pressure flux evolution equations for the two-fluid model.
#### III.3.2 The third rank force tensor describing the quantum fluctuation
The quantum fluctuations in the zero temperature BEC are caused by tensor $F_{qf}^{\alpha\beta\gamma}$ (26). Its major contribution can be found in the first order by the interaction radius approximation. Here, we consider the small nonzero temperature regime of $F_{qf}^{\alpha\beta\gamma}$ for the bosons, and thus obtain its generalization for the two-fluid model. The quantum third rank force tensor $F_{qf}^{\alpha\beta\gamma}$ (26) calculated in the first order by the interaction radius is
$F_{qf}^{\alpha\beta\gamma}=-\frac{\hbar^{2}}{8m^{2}}\partial^{\delta}\int
dR\sum_{i,j;i\neq j}\delta(\textbf{r}-\textbf{R}_{ij})\times$ $\times
r^{\delta}_{ij}\partial^{\alpha}_{i}\partial^{\beta}_{i}\partial^{\gamma}_{i}U(\textbf{r}_{ij})\Psi^{*}(R^{\prime},t)\Psi(R^{\prime},t).$
(68)
In formula (68) for tensor $F_{qf}^{\alpha\beta\gamma}$, the integral containing the interaction potential separates, as in the calculation of the other force fields above. However, here we obtain a different integral, $\int r^{\alpha}\partial^{\beta}\partial^{\gamma}\partial^{\delta}U$. Calculation of this integral leads to the second interaction constant entering the simplified expression for the quantum third rank force tensor:
$F_{qf}^{\alpha\beta\gamma}=\frac{\hbar^{2}}{8m^{2}}g_{2}I_{0}^{\alpha\beta\gamma\delta}\partial^{\delta}Trn_{2}(\textbf{r},\textbf{r}^{\prime},t),$
(69)
where
$g_{2}=\frac{2}{3}\int d\textbf{r}U^{\prime\prime}(r),$ (70)
and
$I_{0}^{\alpha\beta\gamma\delta}=\delta^{\alpha\beta}\delta^{\gamma\delta}+\delta^{\alpha\gamma}\delta^{\beta\delta}+\delta^{\alpha\delta}\delta^{\beta\gamma}.$
(71)
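The tensor $I_{0}^{\alpha\beta\gamma\delta}$ (71) is totally symmetric under any permutation of its four indices, which is what allows it to multiply the symmetrized gradient terms below. A quick numerical check (illustrative only; the array names are ours):

```python
import itertools
import numpy as np

d = np.eye(3)  # Kronecker delta in 3D

# I0^{abcd} = d^{ab}d^{cd} + d^{ac}d^{bd} + d^{ad}d^{bc}, eq. (71)
I0 = (np.einsum('ab,cd->abcd', d, d)
      + np.einsum('ac,bd->abcd', d, d)
      + np.einsum('ad,bc->abcd', d, d))

# full symmetry under every permutation of the four indices
for perm in itertools.permutations(range(4)):
    assert np.array_equal(I0, np.transpose(I0, perm))
print("I0 is totally symmetric")
```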
Calculation of the two-particle concentration leads to the following
expression:
$F_{qf}^{\alpha\beta\gamma}=\frac{\hbar^{2}}{8m^{2}}g_{2}I_{0}^{\alpha\beta\gamma\delta}\partial^{\delta}(2n_{n}^{2}+4n_{B}n_{n}+n_{B}^{2}).$
(72)
Here we have $\partial^{\delta}(2n_{n}^{2}+4n_{B}n_{n}+n_{B}^{2})$. Opening the brackets, we find $4n_{n}\partial^{\delta}n_{n}+4n_{n}\partial^{\delta}n_{B}+4n_{B}\partial^{\delta}n_{n}+2n_{B}\partial^{\delta}n_{B}$. The source of the field is under the derivative, while the multiplier in front corresponds to the species under the action of the field. Hence, the first two terms correspond to the force acting on the normal fluid, while the third and fourth terms correspond to the force acting on the BEC.
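The bracket expansion used here is elementary but easy to mistype; it can be confirmed symbolically with a short sympy sketch (illustrative, our notation):

```python
import sympy as sp

x = sp.symbols('x', real=True)
n_n = sp.Function('n_n', real=True)(x)  # normal fluid concentration
n_B = sp.Function('n_B', real=True)(x)  # BEC concentration

# derivative of the two-particle concentration combination in eq. (72)
lhs = sp.diff(2*n_n**2 + 4*n_B*n_n + n_B**2, x)

# expanded form quoted in the text
rhs = (4*n_n*sp.diff(n_n, x) + 4*n_n*sp.diff(n_B, x)
       + 4*n_B*sp.diff(n_n, x) + 2*n_B*sp.diff(n_B, x))

print(sp.simplify(lhs - rhs))  # -> 0
```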
Therefore, we can separate expression (72) into two parts, corresponding to the BEC and to the normal fluid:
$F_{qf,B}^{\alpha\beta\gamma}=\frac{\hbar^{2}}{4m^{2}}g_{2}I_{0}^{\alpha\beta\gamma\delta}(2n_{B}\partial^{\delta}n_{n}+n_{B}\partial^{\delta}n_{B}),$
(73)
and
$F_{qf,n}^{\alpha\beta\gamma}=\frac{\hbar^{2}}{2m^{2}}g_{2}I_{0}^{\alpha\beta\gamma\delta}(n_{n}\partial^{\delta}n_{n}+n_{n}\partial^{\delta}n_{B}).$
(74)
## IV Hydrodynamic equations for two fluid model of bosons with nonzero
temperature
This section provides the final set of equations obtained in this paper. The method of introducing the velocity field and the corresponding representation of the hydrodynamic equations is not described here; it can be found in a number of other papers, and the majority of details are given in Refs. Andreev PRA08 , Andreev 2001 .
In this regime we have two continuity equations:
$\partial_{t}n_{B}+\nabla\cdot(n_{B}\textbf{v}_{B})=0,$ (75)
and
$\partial_{t}n_{n}+\nabla\cdot(n_{n}\textbf{v}_{n})=0.$ (76)
The Euler equation for bosons in the BEC state is
$mn_{B}(\partial_{t}+\textbf{v}_{B}\cdot\nabla)v^{\alpha}_{B}+\partial_{\beta}T_{B}^{\alpha\beta}+\partial_{\beta}p_{qf}^{\alpha\beta}$
$+gn_{B}\partial^{\alpha}n_{B}=-n_{B}\partial^{\alpha}V_{ext}-2gn_{B}\partial^{\alpha}n_{n},$
(77)
where the quantum Bohm potential is given by equation (12) in the
noninteracting limit.
The Euler equation for bosons in the excited states, corresponding to the nonzero temperature, is
$mn_{n}(\partial_{t}+\textbf{v}_{n}\cdot\nabla)v^{\alpha}_{n}+\partial_{\beta}p_{n,eff}^{\alpha\beta}$
$+2gn_{n}\partial^{\alpha}n_{n}=-n_{n}\partial^{\alpha}V_{ext}-2gn_{n}\partial^{\alpha}n_{B},$
(78)
where the effective pressure tensor is $p_{n,eff}^{\alpha\beta}=p_{n}^{\alpha\beta}+T_{n}^{\alpha\beta}$.
The effective pressure evolution equation for the normal boson fluid is also a part of the developed and applied hydrodynamic model:
$\partial_{t}p_{n,eff}^{\alpha\beta}+v_{n}^{\gamma}\partial_{\gamma}p_{n,eff}^{\alpha\beta}+p_{n,eff}^{\alpha\gamma}\partial_{\gamma}v_{n}^{\beta}+p_{n,eff}^{\beta\gamma}\partial_{\gamma}v_{n}^{\alpha}$
$+p_{n,eff}^{\alpha\beta}\partial_{\gamma}v_{n}^{\gamma}+\partial_{\gamma}T^{\alpha\beta\gamma}_{n}+\partial_{\gamma}Q^{\alpha\beta\gamma}_{n}=0.$
(79)
Moreover, we have the evolution equation for the effective BEC pressure $p_{B,eff}^{\alpha\beta}=T_{B}^{\alpha\beta}+p_{qf}^{\alpha\beta}$ (which includes the quantum Bohm potential tensor $T_{B}^{\alpha\beta}$):
$\partial_{t}p_{B,eff}^{\alpha\beta}+v_{B}^{\gamma}\partial_{\gamma}p_{B,eff}^{\alpha\beta}+p_{B,eff}^{\alpha\gamma}\partial_{\gamma}v_{B}^{\beta}+p_{B,eff}^{\beta\gamma}\partial_{\gamma}v_{B}^{\alpha}$
$+p_{B,eff}^{\alpha\beta}\partial_{\gamma}v_{B}^{\gamma}+\partial_{\gamma}T^{\alpha\beta\gamma}_{B}+\partial_{\gamma}Q^{\alpha\beta\gamma}_{qf}=0.$
(80)
Let us point out the following property of the quantum Bohm potential: it satisfies the following equation for an arbitrary species $a$:
$\partial_{t}T_{a}^{\alpha\beta}+v_{a}^{\gamma}\partial_{\gamma}T_{a}^{\alpha\beta}+T_{a}^{\alpha\gamma}\partial_{\gamma}v_{a}^{\beta}+T_{a}^{\beta\gamma}\partial_{\gamma}v_{a}^{\alpha}$
$+T_{a}^{\alpha\beta}\partial_{\gamma}v_{a}^{\gamma}+\partial_{\gamma}T^{\alpha\beta\gamma}_{a}=0.$
(81)
It is expected that the approximate form of the quantum Bohm potential (12) satisfies equation (81), which holds at zero interaction. Hence, we substitute (12) into equation (81) with the zero right-hand side:
$\partial^{\beta}\partial^{\gamma}n_{a}\cdot(\partial^{\gamma}v_{a}^{\alpha}-\partial^{\alpha}v_{a}^{\gamma})+\partial^{\alpha}\partial^{\gamma}n_{a}\cdot(\partial^{\gamma}v_{a}^{\beta}-\partial^{\beta}v_{a}^{\gamma})$
$+\frac{1}{3}\partial_{\gamma}n_{a}\cdot(\partial^{\beta}\partial^{\gamma}v_{a}^{\alpha}+\partial^{\alpha}\partial^{\gamma}v_{a}^{\beta}-\partial^{\alpha}\partial^{\beta}v_{a}^{\gamma})$
$+\frac{1}{3}n_{a}[\triangle(\partial^{\beta}v_{a}^{\alpha}+\partial^{\alpha}v_{a}^{\beta})-\partial^{\alpha}\partial^{\beta}(\nabla\cdot\textbf{v}_{a})]=0,$
(82)
where the continuity equation is used for the time derivatives of the concentration. We impose the condition of potentiality of the velocity field, $\textbf{v}_{a}=\nabla\phi_{a}$. Substituting it into equation (82), we find that this equation is satisfied.
Hence, we obtain the simplified form of the pressure evolution equations,
where the traditional quantum Bohm potential is extracted:
$\partial_{t}p_{qf,B}^{\alpha\beta}+v_{B}^{\gamma}\partial_{\gamma}p_{qf,B}^{\alpha\beta}+p_{qf,B}^{\alpha\gamma}\partial_{\gamma}v_{B}^{\beta}+p_{qf,B}^{\beta\gamma}\partial_{\gamma}v_{B}^{\alpha}$
$+p_{qf,B}^{\alpha\beta}\partial_{\gamma}v_{B}^{\gamma}+\partial_{\gamma}Q^{\alpha\beta\gamma}_{qf,B}=0,$
(83)
and
$\partial_{t}p_{n}^{\alpha\beta}+v_{n}^{\gamma}\partial_{\gamma}p_{n}^{\alpha\beta}+p_{n}^{\alpha\gamma}\partial_{\gamma}v_{n}^{\beta}+p_{n}^{\beta\gamma}\partial_{\gamma}v_{n}^{\alpha}$
$+p_{n}^{\alpha\beta}\partial_{\gamma}v_{n}^{\gamma}+\partial_{\gamma}Q^{\alpha\beta\gamma}_{n}=0.$
(84)
The equation for the evolution of the quantum-thermal part of the third rank tensor is Andreev 2005 , Andreev 2007 :
$\partial_{t}Q_{qf}^{\alpha\beta\gamma}+\partial_{\delta}(v_{B}^{\delta}Q_{qf}^{\alpha\beta\gamma})+Q_{qf}^{\alpha\gamma\delta}\partial_{\delta}v_{B}^{\beta}+Q_{qf}^{\beta\gamma\delta}\partial_{\delta}v_{B}^{\alpha}+Q_{qf}^{\alpha\beta\delta}\partial_{\delta}v_{B}^{\gamma}$
$=\frac{\hbar^{2}}{4m^{2}}g_{2}I_{0}^{\alpha\beta\gamma\delta}\biggl{(}n_{B}\partial^{\delta}n_{B}+2n_{B}\partial^{\delta}n_{n}\biggr{)}$
$-\frac{1}{m}n_{B}\partial_{\alpha}\partial_{\beta}\partial_{\gamma}V_{ext}+\tilde{F}_{B}^{\alpha\beta\gamma}+\tilde{F}_{B}^{\beta\gamma\alpha}+\tilde{F}_{B}^{\gamma\alpha\beta}$
$+\frac{1}{mn}(p_{qf,eff}^{\alpha\beta}\partial^{\delta}p_{qf,eff}^{\gamma\delta}+p_{qf,eff}^{\alpha\gamma}\partial^{\delta}p_{qf,eff}^{\beta\delta}+p_{qf,eff}^{\beta\gamma}\partial^{\delta}p_{qf,eff}^{\alpha\delta}),$
(85)
and
$\partial_{t}Q_{n}^{\alpha\beta\gamma}+\partial_{\delta}(v_{n}^{\delta}Q_{n}^{\alpha\beta\gamma})+Q_{n}^{\alpha\gamma\delta}\partial_{\delta}v_{n}^{\beta}+Q_{n}^{\beta\gamma\delta}\partial_{\delta}v_{n}^{\alpha}+Q_{n}^{\alpha\beta\delta}\partial_{\delta}v_{n}^{\gamma}$
$=\frac{\hbar^{2}}{2m^{2}}g_{2}I_{0}^{\alpha\beta\gamma\delta}\biggl{(}n_{n}\partial^{\delta}n_{n}+n_{n}\partial^{\delta}n_{B}\biggr{)}$
$-\frac{1}{m}n_{n}\partial_{\alpha}\partial_{\beta}\partial_{\gamma}V_{ext}+\tilde{F}_{n}^{\alpha\beta\gamma}+\tilde{F}_{n}^{\beta\gamma\alpha}+\tilde{F}_{n}^{\gamma\alpha\beta}$
$+\frac{1}{mn}(p_{n,eff}^{\alpha\beta}\partial^{\delta}p_{n,eff}^{\gamma\delta}+p_{n,eff}^{\alpha\gamma}\partial^{\delta}p_{n,eff}^{\beta\delta}+p_{n,eff}^{\beta\gamma}\partial^{\delta}p_{n,eff}^{\alpha\delta}),$
(86)
where $\tilde{F}_{a}^{\alpha\beta\gamma}$ is not presented explicitly, since equations (66) and (67) show that the required expressions are rather large (see also Refs. Andreev PRA08 , Andreev LP 19 ). A hydrodynamic model for fermions with pressure evolution is derived in Refs. Andreev 2001 , Andreev 1912 , Andreev LP 21 .
Terms proportional to $p_{a,eff}^{\alpha\beta}\partial^{\delta}p_{a,eff}^{\gamma\delta}$ appear in the pressure flux evolution equation, but they lead to contributions beyond the chosen approximation Tokatly PRB 99 , Tokatly PRB 00 .
The term containing the external potential, $-\frac{1}{m}n_{a}\partial_{\alpha}\partial_{\beta}\partial_{\gamma}V_{ext}$, vanishes for the parabolic trap. However, it can give a nontrivial contribution for other forms of the potential.
## V BEC dynamics under the influence of the quantum fluctuations
The developed model shows that there are nontrivial evolution equations for the pressure and the pressure flux of the BEC. Therefore, the well-known model of the BEC is extended, in spite of the fact that the kinetic pressure tensor is expected to be equal to zero at zero temperature. However, the quantum fluctuations lead to nonzero occupation numbers for the excited states.
To consider a pure BEC, we drop the contribution of the normal fluid in the model presented above. Therefore, let us summarize the BEC model in parabolic traps:
$\partial_{t}n_{B}+\nabla\cdot(n_{B}\textbf{v}_{B})=0,$ (87)
$mn_{B}(\partial_{t}+\textbf{v}_{B}\cdot\nabla)v^{\alpha}_{B}+\partial_{\beta}(p_{qf}^{\alpha\beta}+T_{B}^{\alpha\beta})$
$+gn_{B}\partial^{\alpha}n_{B}+n_{B}\partial^{\alpha}V_{ext}=0,$ (88)
$\partial_{t}p_{qf}^{\alpha\beta}+v_{B}^{\gamma}\partial_{\gamma}p_{qf}^{\alpha\beta}+p_{qf}^{\alpha\gamma}\partial_{\gamma}v_{B}^{\beta}+p_{qf}^{\beta\gamma}\partial_{\gamma}v_{B}^{\alpha}$
$+p_{qf}^{\alpha\beta}\partial_{\gamma}v_{B}^{\gamma}+\partial_{\gamma}Q^{\alpha\beta\gamma}_{qf}=0,$
(89)
and
$\partial_{t}Q_{qf}^{\alpha\beta\gamma}+\partial_{\delta}(v^{\delta}Q_{qf}^{\alpha\beta\gamma})+Q_{qf}^{\alpha\gamma\delta}\partial_{\delta}v^{\beta}$
$+Q_{qf}^{\beta\gamma\delta}\partial_{\delta}v^{\alpha}+Q_{qf}^{\alpha\beta\delta}\partial_{\delta}v^{\gamma}=\frac{\hbar^{2}}{4m^{2}}g_{2}I_{0}^{\alpha\beta\gamma\delta}n\partial^{\delta}n.$
(90)
This simplified model was reported earlier in Refs. Andreev 2005 , Andreev 2007 , Andreev 2009 , where the dipole-dipole interaction is also considered. It has been demonstrated that the quantum fluctuations give mechanisms for the instability of small amplitude perturbations Andreev 2005 . Moreover, the dipolar part of the quantum fluctuations creates conditions for the bright soliton in repulsive BECs Andreev 2009 . The model developed in the previous section provides a proper generalization of the earlier model, giving the small temperature contribution.
## VI Conclusion
A revision of the two-fluid model for finite temperature ultracold bosons has been presented through the derivation of the balance equations for the number of particles, the momentum, the momentum flux, and the third rank tensor. The derivation has been based on tracing the microscopic dynamics of quantum particles via the application of the many-particle microscopic Schrödinger equation. Hence, the microscopic Schrödinger equation determines the time evolution of the macroscopic functions describing the collective motion of bosons.
General equations have been derived for an arbitrary strength of interaction and arbitrary temperatures. The set of equations has been truncated at the third rank kinematic tensor (the flux of pressure). The truncation is made for low temperature, weakly interacting bosons after the derivation of the general structure of the hydrodynamic equations. Therefore, the thermal part of the fourth rank kinematic tensor has been set equal to zero. Next, the terms containing the interaction potential have been considered for the short-range interaction. The small radius of interaction provides the small parameter for the expansion. The expansion is made in the force field in the Euler equation, the force tensor field in the momentum flux equation, and the third rank force tensor in the pressure flux evolution equation. The first term of the expansion in the small interparticle distance has been kept in each case, which corresponds to the first order by the interaction radius.
This model allows one to capture the quantum fluctuations, which are an essential property of the BECs. Moreover, the interaction causing the quantum fluctuations has been considered at finite temperature.
The functions obtained in the first order by the interaction radius have been
expressed via the trace of two-particle functions. The two-particle functions
have been calculated for the weakly interacting bosons.
A single species of 0-spin bosons has been considered. Therefore, the single-fluid
hydrodynamics has been derived. Next, it has been taken into account that the
concentration of particles, the current of particles (the momentum density),
the momentum flux, and the current of the momentum flux are additive
functions. Consequently, they can be easily split into two parts: the BEC and
the normal fluid of bosons (the non-BEC part). Hence, the two-fluid model of a
single species of bosons is obtained. This separation into two fluids has been
made in the general form of the equations.
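Schematically (our notation, with subscripts $B$ for the condensate and $n$ for the normal fluid), the additivity underlying this separation reads:

$n=n_{B}+n_{n},\qquad j^{\alpha}=j_{B}^{\alpha}+j_{n}^{\alpha},\qquad\Pi^{\alpha\beta}=\Pi_{B}^{\alpha\beta}+\Pi_{n}^{\alpha\beta},\qquad M^{\alpha\beta\gamma}=M_{B}^{\alpha\beta\gamma}+M_{n}^{\alpha\beta\gamma},$

where $n$, $j^{\alpha}$, $\Pi^{\alpha\beta}$, and $M^{\alpha\beta\gamma}$ denote the concentration, the current of particles, the momentum flux, and the current of the momentum flux, respectively.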
## VII Acknowledgements
Work is supported by the Russian Foundation for Basic Research (grant no.
20-02-00476). This paper has been supported by the RUDN University Strategic
Academic Leadership Program.
# Safety Verification of Parameterized Systems under Release-Acquire
Adwait Godbole (University of California at Berkeley, <EMAIL_ADDRESS>; this work was done when Adwait Godbole was a final-year undergraduate student at IIT Bombay), Shankara Narayanan Krishna (IIT Bombay, India, <EMAIL_ADDRESS>), Roland Meyer (Institute of Theoretical Computer Science, Braunschweig, Germany, <EMAIL_ADDRESS>)
Concurrency, Verification
###### Abstract
We study the safety verification problem for parameterized systems under the
release-acquire (RA) semantics. It has been shown that the problem is
intractable for systems with unlimited access to atomic compare-and-swap (CAS)
instructions. We show that, from a verification perspective where approximate
results help, this is overly pessimistic. We study parameterized systems
consisting of an unbounded number of environment threads executing identical
but CAS-free programs and a fixed number of distinguished threads that are
unrestricted.
Our first contribution is a new semantics that considerably simplifies RA but
is still equivalent for the above systems as far as safety verification is
concerned. We apply this (general) result to two subclasses of our model. We
show that safety verification is only $\mathsf{PSPACE}$-complete for the
bounded model checking problem where the distinguished threads are loop-free.
Interestingly, we can still afford the unbounded environment. We show that the
complexity jumps to $\mathsf{NEXPTIME}$-complete for thread-modular
verification where an unrestricted distinguished ‘ego’ thread interacts with
an environment of CAS-free threads plus loop-free distinguished threads (as in
the earlier setting). Besides the usefulness for verification, the results are
strong in that they delineate the tractability border for an established
semantics.
###### keywords:
release acquire, parameterized systems
## 1 Introduction
Release-acquire (RA) is a popular fragment of C++11 [14] (in which reads are
annotated by acquire and writes by release) that strikes a good balance
between programmability and performance and has received considerable
attention (see e.g., [47, 49, 60, 51, 8, 53, 64, 63, 41]). The model is not
limited to concurrent programs, though. RA has tight links [52] with causal
consistency (CC) [7], a prominent consistency guarantee in distributed
databases [55]. Common to RA implementations and distributed databases is that
they tend to offer functionality to multi-threaded client programs, be it
means of synchronization or access to shared data.
We are interested in verifying such implementations on top of RA. For
verification, we can abstract the client program to invocations of the offered
functionality [20]. The result is a so-called instance of the implementation
in which concurrent threads execute the code of interest. There is a subtlety.
As the RA implementation should be correct for every client, we cannot fix the
instance to be verified. We have to prove correctness irrespective of the
number of threads executing the code. This is the classical formulation of a
parameterized system as it has been studied over the last 35 years [20].
We are interested in the decidability and complexity of safety verification
for parameterized programs under RA. The goal is to identify expressive
classes of programs for which the problem is tractable. There are good
arguments in favor of this agenda. From a pragmatic point of view, even if the
implementation at hand does not fall into one of the classes identified, we
may hope for a reasonably precise encoding. From a conceptual point of view,
tractability of verification is linked to programmability, and understanding
the complexity may lead to suggestions for better consistency notions [50] or
programming guidelines, e.g. in the form of type systems [56]. Safety
verification is a good fit for linearizability [43], the de-facto standard
correctness condition for concurrency libraries, and has to be settled before
going to more complicated notions.
To explain the challenges of parameterized verification under RA, it will be
helpful to have an understanding of how to program under RA. The slogan of RA
is _never read “overwritten” values_ [52]. Assume we have shared variables g
and d, initially $0$, and a thread first stores $1$ to d and then $1$ to g.
Assume a second thread reads the $1$ from g. Under RA, that thread can no
longer claim $\texttt{d}=0$. Formulated axiomatically [9], the reads-from,
modification order, program order, and from-read should be acyclic [52]. While
less concise, there are operational formulations of RA that make explicit
information about the computation, which will be useful for our development
[59, 48, 47]. The mechanism is as follows. Program and modification order are
encoded as natural numbers, called _timestamps_. Each thread stores locally a
_view_ object, a map from shared variables to timestamps. This map reflects
the thread’s progress in terms of seeing (or as above hearing from) stores to
a shared variable. The communication is organized in a way that achieves the
desired acyclicity. Store instructions generate _messages_ that decorate the
variable-value pair by a view. This view is the one held by the thread except
that the timestamp of the variable being written is raised to a strictly
higher value. The shared memory is implemented as a pool to which the
generated messages are added and in which they remain forever. When loading a
message from the pool, the timestamp of the variable given by the message must
be at least the timestamp in the thread. The views are then joined so that the
receiver cannot load values older than what the sender has seen.
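The timestamp mechanism described above admits a direct rendering in code. The following is our own minimal Python sketch of views, the message pool, and the load/store rules, not the paper's formal semantics (which appears in Section 2); it replays the message-passing example with g and d from the text:

```python
# Illustrative model of the RA timestamp mechanism: views map shared
# variables to timestamps, stores add messages to a pool from which
# they are never removed, and loads join views pointwise.

def join(v1, v2):
    """Pointwise maximum of two views (variable -> timestamp maps)."""
    return {x: max(v1[x], v2[x]) for x in v1}

def store(view, pool, x, value):
    """Add a message on x at a strictly higher timestamp than any in
    the pool (hence also than the thread's own view of x)."""
    ts = 1 + max(m["view"][x] for m in pool if m["var"] == x)
    new_view = {**view, x: ts}
    pool.append({"var": x, "val": value, "view": new_view})
    return new_view

def loadable(view, pool, x):
    """Messages on x whose timestamp is at least the thread's own."""
    return [m for m in pool if m["var"] == x and m["view"][x] >= view[x]]

def load(view, msg):
    """Join views so the reader sees at least what the writer saw."""
    return msg["val"], join(view, msg["view"])

# Message-passing example from the text: t1 stores 1 to d, then 1 to g.
init = {"d": 0, "g": 0}
pool = [{"var": x, "val": 0, "view": dict(init)} for x in ("d", "g")]
t1 = store(store(dict(init), pool, "d", 1), pool, "g", 1)

# t2 reads the 1 from g; the join raises its timestamp on d as well,
# so the overwritten d = 0 is no longer loadable for t2.
t2 = dict(init)
msg_g = [m for m in loadable(t2, pool, "g") if m["val"] == 1][0]
_, t2 = load(t2, msg_g)
d_values = {m["val"] for m in loadable(t2, pool, "d")}
```

After the load of g = 1, the only loadable message on d carries the value 1, matching the claim that t2 can no longer read d = 0.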
The timestamps render the RA semantics infinite-state, which makes algorithmic
verification difficult. Indeed, the problem of solving safety verification
under RA in a complete way has been studied very recently in the non-
parameterized setting and proven to be undecidable even for programs with
finite control flow and finite data domains [1]. With this insight, [1]
proposes to give up completeness and shows how to encode an under-approximation
of the safety verification problem into sequential consistency [54]. Lahav and
Boker [50] drew a different conclusion. They proposed strong release-acquire
(SRA) as a new consistency guarantee under which safety verification is
decidable for general non-parameterized programs. Unfortunately, the lower
bound is again non-primitive recursive. Also the related problem of checking
CC itself for a given implementation has been studied. It is undecidable in
general, but EXPSPACE-complete under the assumption of data independence [22].
To sum up, despite recent efforts [22, 1, 50] we are missing an expressive
class of programs for which the safety verification problem under RA is
tractable. The parameterized verification problem has not been studied.
Problem Statement. The parameterized systems of interest have the form
$\mathsf{env}\parallel\mathsf{dis}_{1}\parallel\dots\parallel\mathsf{dis}_{n}$.
We have a fixed number of distinguished threads, collectively referred to as
$\mathsf{dis}$ and executing programs
$\mathsf{c}_{\mathsf{dis}}^{1},\dots,\mathsf{c}_{\mathsf{dis}}^{n}$,
respectively. Moreover, we have an environment consisting of arbitrarily many
threads executing the same program $\mathsf{c}_{\mathsf{env}}$.
We obtain an _instance_ of the system by also fixing the number of
environment threads. The safety verification problem is as follows:
> _Safety Verification for Parameterized Systems_:
> Given a parameterized system
> $\mathsf{env}\parallel\mathsf{dis}_{1}\parallel\dots\parallel\mathsf{dis}_{n}$,
> is there an instance of the system and a computation in that instance that
> reaches an assertion violation?
The complexity of the problem depends on the system class under consideration.
We denote system classes by signatures of the form
$\mathsf{env}(\mathsf{type}_{\mathsf{env}})\parallel\mathsf{dis}_{1}(\mathsf{type}_{1})\parallel\dots\parallel\mathsf{dis}_{n}(\mathsf{type}_{n})$,
where the types constrain the programs executed by the threads. The parameters
are the structure of the control flow, which may be loop-free, denoted by
$\mathsf{acyc}$, and the instruction set, which may forbid the atomic
compare-and-swap (CAS) command, denoted by $\mathsf{nocas}$. We drop the type
if no restriction applies. If a thread is not present, we do not mention it in
the signature. With this,
$\mathsf{dis}_{1}(\mathsf{acyc})\parallel\mathsf{dis}_{2}(\mathsf{nocas})\parallel\mathsf{dis}_{3}$
is a non-parameterized system (without $\mathsf{env}$ threads) with three
$\mathsf{dis}$ threads executing: a loop-free $\mathsf{c}_{\mathsf{dis}}^{1}$,
a $\mathsf{c}_{\mathsf{dis}}^{2}$ without CAS instructions, and an
unrestricted $\mathsf{c}_{\mathsf{dis}}^{3}$, respectively.
Justifying the Parameters. In [1], the safety verification problem under RA
has been shown to be undecidable for non-parameterized ($\mathsf{env}$-free)
systems from
$\mathsf{dis}_{1}(\mathsf{nocas})\parallel\mathsf{dis}_{2}(\mathsf{nocas})\parallel\mathsf{dis}_{3}\parallel\mathsf{dis}_{4}$
and non-primitive-recursive for systems from
$\mathsf{dis}_{1}(\mathsf{nocas})\parallel\mathsf{dis}_{2}(\mathsf{nocas})$.
There are several conclusions to draw from this.
With distinguished threads, we cannot hope to arrive at a tractable
verification problem. We take the bounded model checking [28] approach and
consider loop-free code. Acyclic programs, however, are not very expressive.
Fortunately, RA implementations tend to be parameterized, and, as we will see,
this frees us from the acyclicity restriction. The fact that parameterization
simplifies verification has been observed in various works [46, 39, 62, 33, 5]
that we discuss below.
Restricting the use of CAS requires an explanation. The class $\mathsf{env}$
of unconstrained environment threads enables what we call leader isolation: an
$\mathsf{env}$ thread can distinguish itself from the others by acquiring a
CAS-based lock. Even just $t$ CAS operations allow for the isolation of $t$
distinguished threads, which takes us back to the results of [1] for $t=2$
resp. $t=4$. Acyclicity will not help in this case: in Section 3 we show that
safety verification for $\mathsf{env}(\mathsf{acyc})$ is undecidable.
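To illustrate leader isolation, here is our own sketch in the while-language of Section 2 (the paper does not spell out this program); a CAS on a fresh lock variable lets one $\mathsf{env}$ thread distinguish itself:

```
r0 := 0; r1 := 1;
cas(lock, r0, r1)   // atomically: load lock expecting 0, store 1
// Freshness of the written timestamp means at most one thread can
// perform this CAS on the initial 0-message, so the code that
// follows runs in a single, now-distinguished, thread.
```

Repeating the pattern with $t$ lock variables isolates $t$ threads, which is the reduction back to [1] mentioned above.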
Contributions. We state our main results and present the technical details in
the later parts.
A Simplified Semantics. We consider parameterized systems of the form
$\mathsf{env}(\mathsf{nocas})\parallel\mathsf{dis}_{1}\parallel\dots\parallel\mathsf{dis}_{n}$.
Our first contribution is a simplified semantics (Section 4) that is
equivalent to the standard RA semantics as far as safety verification is
concerned. The simplified semantics uses the notion of timestamp abstraction,
which allows us to be imprecise about the exact timestamps of the
$\mathsf{env}$ threads. Note that we do not make any assumptions on the form
of the distinguished threads but support cyclic control flow and CAS. So the
result in particular applies to the intractable classes from [1], even when
extended with a parameterized environment. Supporting CAS in the distinguished
threads is important: without it, there is no way to capture the optimistic
synchronization strategies used in performance-critical programming [42].
We continue by applying the simplified semantics to prove tight complexity
bounds for the safety verification problem in two particular classes of
$\mathsf{dis}$ programs.
Loop-Free Setting. In Section 5, we show a $\mathsf{PSPACE}$ upper bound for
the safety verification problem of parameterized programs from
$\mathsf{env}(\mathsf{nocas})\parallel\mathsf{dis}_{1}(\mathsf{acyc})\parallel\cdots\parallel\mathsf{dis}_{n}(\mathsf{acyc})$.
The class reflects the bounded model checking problem [28], which unrolls a
given program into a loop-free under-approximation. Interestingly, we can
squeeze into $\mathsf{PSPACE}$ the unbounded environment of cyclic threads.
Our decision procedure is not only optimal complexity-wise, it also has the
potential of being practical (we do not have experiments). We show how to
encode the safety verification problem into the query evaluation problem for
linear Datalog, the format supported by Horn-clause solvers [18, 17], a state-
of-the-art backend in verification.
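As an illustration (ours, not the paper's actual encoding), a Datalog program is linear when each rule body contains at most one occurrence of a recursively defined predicate; safety encodings typically take the shape of a reachability query:

```
reach(s0).                              % initial state
reach(S2) :- reach(S1), step(S1, S2).   % one recursive premise: linear
unsafe    :- reach(S), bad(S).          % assertion violation reachable
```

Query evaluation then asks whether `unsafe` is derivable.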
Leader Setting. We continue to show an $\mathsf{NEXPTIME}$ upper bound for
$\mathsf{env}(\mathsf{nocas})\parallel\mathsf{dis}_{1}(\mathsf{acyc})\parallel\cdots\parallel\mathsf{dis}_{n}(\mathsf{acyc})\parallel\mathsf{ldr}$
in Section 6. These systems add an unconstrained distinguished thread, called
the leader (denoted $\mathsf{ldr}$), to the system from Section 5. The class
is in the spirit of thread-modular verification techniques [57, 35], where the
safety of a single ‘ego’ thread is verified when interacting with an
environment.
We note that these results delineate the border of tractability: adding
another $\mathsf{dis}$ thread results in a non-primitive-recursive lower bound
[1], and adding CAS operations to $\mathsf{env}$ results in undecidability
(Section 3).
Lower Bounds. Our last contributions are matching lower bounds for the two
classes. Interestingly, they hold even in the absence of CAS. We show that the
safety verification problem is $\mathsf{PSPACE}$-hard already for
$\mathsf{env}(\mathsf{nocas},\mathsf{acyc})$, while it is
$\mathsf{NEXPTIME}$-hard for
$\mathsf{env}(\mathsf{nocas},\mathsf{acyc})\parallel\mathsf{ldr}(\mathsf{nocas})$.
Related Work. There is a vast body of work on algorithmic verification under
consistency models. Since our interest is in decidability and complexity, we
focus on complete methods. We have already discussed the related work on RA
and CC.
_Other Consistency models._ Atig et al. have shown that safety verification is
decidable for assembly programs running on TSO, the consistency model of x86
architectures [11]. The result has been generalized to consistency models with
non-speculative writes [12] and very recently to models with persistence [3].
It has also been generalized to parameterized programs executed by an
unbounded number of threads [4]. Behind the decision procedures are (often
drastic) reformulations of the semantics combined with well-structuredness
arguments [6]. A notable exception is [5], showing that safety verification
under TSO can be solved in $\mathsf{PSPACE}$ for CAS-free parameterized
programs, called $\mathsf{env}(\mathsf{nocas})$ here. On the widely used Power
architecture, safety verification is undecidable [2].
The decidability and complexity of verification problems have also been studied
for distributed databases and data structures. Enea et al. considered the
problem of checking eventual consistency (EC) [66] of replicated databases and
developed a surprising link to vector addition systems [23] that yields
decidability and complexity results for the safety and liveness aspects of EC.
For concurrent data structures, the default correctness criterion is
linearizability wrt. a specification [43]. While checking linearizability is
$\mathsf{EXPSPACE}$-complete in general [10, 40], important data structures
(for which the specification is then fixed) admit $\mathsf{PSPACE}$ algorithms
[21].
_Parameterized Systems with Asynchronous Communication_. We exploit a pleasant
interplay between the asynchronous communication in RA and the
parameterization of our systems in the number of threads. Kahlon [46] was the
first to observe that parameterization simplifies verification in the case of
concurrent pushdowns. Hague [39] showed that safety verification remains
decidable when adding a distinguished leader thread. Esparza, Ganty, Majumdar
studied the complexity of what is now called leader-contributor systems [33].
It is surprisingly low: $\mathsf{NP}$-complete for systems of finite-state
components and $\mathsf{PSPACE}$-complete for systems of pushdowns. At the
heart of their technique is the so-called copycat-lemma. The work has been
generalized [62] to all classes of models that are closed under regular
intersection and have a computable downward-closure. It has also been
generalized to liveness verification [31, 36]. Finally, the study has been
generalized to parameterized complexity, for safety [27] and liveness [25].
Our work is related in that the distinguished threads behave like a leader.
Moreover, our simplified semantics relies on an infinite-supply property the
proof of which gives a copycat variant for RA. Our Datalog encoding is
reminiscent of the notion of Strahler number [34].
Leader-contributor systems are closely related to broadcast networks [32, 61].
Also there, safety verification has been found to be surprisingly cheap, namely
$\mathsf{PTIME}$-complete [30]. For liveness verification, there was a gap
between $\mathsf{EXPSPACE}$ and $\mathsf{PTIME}$ that was settled recently
with a non-trivial polynomial-time algorithm [26]. What is new in broadcast
networks and neither occurs in leader-contributor systems nor in our setting
is the problem of reconfiguration [29, 13, 16].
## 2 The Release-Acquire Semantics
A parameterized system consists of an unknown and potentially large number of
threads, all running the same program. Threads compute locally over a set of
registers and interact with each other by writing to and reading from a shared
memory. The interaction with the shared memory is under the Release-Acquire
(RA) semantics [52, 59, 48].
$\begin{array}[]{c}\begin{array}[]{cccc}\inferrule{(\mathsf{c}_{1},\mathsf{rv},\mathsf{vw})\xrightharpoondown{\mathsf{msg}}(\mathsf{c}_{1}^{\prime},\mathsf{rv}^{\prime},\mathsf{vw}^{\prime})}{(\mathsf{c}_{1};\mathsf{c}_{2},\mathsf{rv},\mathsf{vw})\xrightharpoondown{\mathsf{msg}}(\mathsf{c}_{1}^{\prime};\mathsf{c}_{2},\mathsf{rv}^{\prime},\mathsf{vw}^{\prime})}&\inferrule{i=1,2}{(\mathsf{c}_{1}\oplus\mathsf{c}_{2},\mathsf{rv},\mathsf{vw})\xrightharpoondown{}(\mathsf{c}_{i},\mathsf{rv},\mathsf{vw})}&\inferrule{\leavevmode\nobreak\
}{(\mathsf{skip};\mathsf{c},\mathsf{rv},\mathsf{vw})\xrightharpoondown{}(\mathsf{c},\mathsf{rv},\mathsf{vw})}&\inferrule{\leavevmode\nobreak\
}{(\mathsf{assert}\;{\texttt{false}},\mathsf{rv},\mathsf{vw})\xrightharpoondown{}\bot}\end{array}\\\\[14.22636pt]
\begin{array}[]{ccc}\inferrule{\leavevmode\nobreak\
}{(\mathsf{c}^{*},\mathsf{rv},\mathsf{vw})\xrightharpoondown{}(\mathsf{skip}\oplus\mathsf{c};\mathsf{c}^{*},\mathsf{rv},\mathsf{vw})}&\inferrule{[\\![\mathsf{e}]\\!](\mathsf{rv}(\overline{\mathsf{r}}))=\mathsf{d}\quad\mathsf{rv}^{\prime}=\mathsf{rv}[\mathsf{r}\mapsto
\mathsf{d}]}{(\mathsf{r}
\coloneqq\mathsf{e}(\overline{\mathsf{r}}),\mathsf{rv},\mathsf{vw})\xrightharpoondown{}(\mathsf{skip},\mathsf{rv}^{\prime},\mathsf{vw})}&\inferrule{[\\![\mathsf{e}]\\!](\mathsf{rv}(\overline{\mathsf{r}}))\neq
0}{(\mathsf{assume}\;{\mathsf{e}(\overline{\mathsf{r}})},\mathsf{rv},\mathsf{vw})\xrightharpoondown{}(\mathsf{skip},\mathsf{rv},\mathsf{vw})}\end{array}\\\\[14.22636pt]
\begin{array}[]{cc}\textsc{(ST-
local)}\quad\inferrule{\mathsf{rv}(\mathsf{r})=\mathsf{d}\quad\mathsf{vw}<_{\mathsf{x}}\mathsf{vw}^{\prime}}{(\mathsf{x}
\coloneqq\mathsf{r},\mathsf{rv},\mathsf{vw})\xrightharpoondown{\mathsf{st},(\mathsf{x},\mathsf{d},\mathsf{vw}^{\prime})}(\mathsf{skip},\mathsf{rv},\mathsf{vw}^{\prime})}&\hskip
14.22636pt\textsc{(LD-
local)}\quad\inferrule{\mathsf{vw}(\mathsf{x})\leq\mathsf{vw}^{\prime}(\mathsf{x})\quad\mathsf{rv}^{\prime}=\mathsf{rv}[\mathsf{r}\mapsto
\mathsf{d}]}{(\mathsf{r}
\coloneqq\mathsf{x},\mathsf{rv},\mathsf{vw})\xrightharpoondown{\mathsf{ld},(\mathsf{x},\mathsf{d},\mathsf{vw}^{\prime})}(\mathsf{skip},\mathsf{rv}^{\prime},\mathsf{vw}\sqcup\mathsf{vw}^{\prime})}\end{array}\\\\[14.22636pt]
\small{\textsc{(CAS-
local)}\quad\inferrule{\mathsf{rv}(\mathsf{r}_{1})=\mathsf{d}_{1}\quad
\mathsf{rv}(\mathsf{r}_{2})=\mathsf{d}_{2}\quad\mathsf{vw}(\mathsf{x})\leq\mathsf{vw}^{\prime}(\mathsf{x})=\mathsf{ts}\\\
\widetilde{\mathsf{vw}}=\mathsf{vw}^{\prime}[\mathsf{x}\mapsto\mathsf{ts}+1]\quad\mathsf{vw}^{\prime\prime}=\mathsf{vw}\sqcup\widetilde{\mathsf{vw}}}{(\mathsf{cas}(\mathsf{x},\mathsf{r}_{1},\mathsf{r}_{2}),\mathsf{rv},\mathsf{vw})\xrightharpoondown{\mathsf{ld},(\mathsf{x},\mathsf{d}_{1},\mathsf{vw}^{\prime})}\xrightharpoondown{\mathsf{st},(\mathsf{x},\mathsf{d}_{2},\mathsf{vw}^{\prime\prime})}(\mathsf{skip},\mathsf{rv},\mathsf{vw}^{\prime\prime})}}\\\\[14.22636pt]
\hline\cr\begin{array}[]{cc}\textsc{(LD-global)}\leavevmode\nobreak\
\leavevmode\nobreak\
\inferrule{\mathsf{lcfm}(\mathsf{t})=\mathsf{lcf}\quad\mathsf{lcf}\xrightharpoondown{\mathsf{ld},\mathsf{msg}}\mathsf{lcf}^{\prime}\quad\mathsf{msg}\in\mathsf{m}}{(\mathsf{m},\mathsf{lcfm})\xrightarrow{(\mathsf{t},\mathsf{msg})}(\mathsf{m},\mathsf{lcfm}[\mathsf{t}\mapsto\mathsf{lcf}^{\prime}])}&\hskip
14.22636pt\textsc{(ST-global)}\leavevmode\nobreak\ \leavevmode\nobreak\
\inferrule{\mathsf{lcfm}(\mathsf{t})=\mathsf{lcf}\quad\mathsf{lcf}\xrightharpoondown{\mathsf{st},\mathsf{msg}}\mathsf{lcf}^{\prime}\quad\mathsf{msg}\;\\#\;\mathsf{m}}{(\mathsf{m},\mathsf{lcfm})\xrightarrow{(\mathsf{t},\mathsf{msg})}(\mathsf{m}\cup\\{\mathsf{msg}\\},\mathsf{lcfm}[\mathsf{t}\mapsto\mathsf{lcf}^{\prime}])}\end{array}\\\
\begin{array}[]{cc}\textsc{(CAS-global)}\leavevmode\nobreak\
\inferrule{\mathsf{lcfm}(\mathsf{t})=\mathsf{lcf}\quad\mathsf{lcf}\xrightharpoondown{\mathsf{ld},\mathsf{msg}_{l}}\xrightharpoondown{\mathsf{st},\mathsf{msg}_{s}}\mathsf{lcf}^{\prime}\quad\mathsf{msg}_{l}\in\mathsf{m}\quad\mathsf{msg}_{s}\;\\#\;\mathsf{m}}{(\mathsf{m},\mathsf{lcfm})\xrightarrow{(\mathsf{t},\mathsf{msg})}(\mathsf{m}\cup\\{\mathsf{msg}_{s}\\},\mathsf{lcfm}[\mathsf{t}\mapsto\mathsf{lcf}^{\prime}])}&\hskip
14.22636pt\textsc{(Unlabelled)}\leavevmode\nobreak\
\inferrule{\mathsf{lcfm}(\mathsf{t})=\mathsf{lcf}\quad\mathsf{lcf}\xrightharpoondown{}\mathsf{lcf}^{\prime}}{(\mathsf{m},\mathsf{lcfm})\xrightarrow{\mathsf{t}}(\mathsf{m},\mathsf{lcfm}[\mathsf{t}\mapsto\mathsf{lcf}^{\prime}])}\end{array}\end{array}$
Figure 1: Local transition relation: silent (thread-local) transitions (pink)
and shared-memory transitions (blue). Global transition relation (below, in green).
### 2.1 Program Syntax
We model the individual threads in our system as (non-deterministic)
sequential programs. Assume a standard while-language $\mathsf{Com}$ defined
by:
$\displaystyle\mathsf{c}\;::=$
$\displaystyle\quad\mathsf{skip}\;\mid\;\mathsf{assume}\;{\mathsf{e}(\overline{\mathsf{r}})}\;\mid\;\mathsf{assert}\;{\texttt{false}}\;\mid\;\mathsf{r}
\coloneqq\mathsf{e}(\overline{\mathsf{r}})\;\mid\;\mathsf{c};\mathsf{c}\;\mid\;\mathsf{c}\oplus\mathsf{c}\;\mid\;\mathsf{c}^{*}\;\mid\;$
$\displaystyle\quad\mathsf{r} \coloneqq\mathsf{x}\;\mid\;\mathsf{x}
\coloneqq\mathsf{r}\;\mid\;\mathsf{cas}(\mathsf{x},\mathsf{r}_{1},\mathsf{r}_{2})$
The programs compute on (thread-local) registers $\mathsf{r}$ from the finite
set $\mathsf{Reg}$ using assume, assert, assignments, sequential composition,
non-deterministic choice, and iteration. Conditionals $\mathsf{if}$ and
iteratives $\mathsf{while}$ can be derived from these operators, and we use
them where convenient. The shared memory variables $\mathsf{x}$ are accessed
only by means of load, store and compare-and-swap (CAS) operations as
$\mathsf{r} \coloneqq\mathsf{x}$, $\mathsf{x} \coloneqq\mathsf{r}$ and
$\mathsf{cas}(\mathsf{x},\mathsf{r}_{1},\mathsf{r}_{2})$, respectively. These
instructions are also referred to as _events_. We have a finite set
$\mathsf{Var}$ of shared variables, and work with the data domain
$\mathsf{Dom}=\mathbb{N}$. We do not insist on a particular shape of
expressions $\mathsf{e}$ but require an interpretation
$[\\![\mathsf{e}]\\!]:\mathsf{Dom}^{n}\rightarrow\mathsf{Dom}$ that respects
the arity $n$ of the expression.
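For instance (our encoding; the paper only notes derivability), writing $\neg\mathsf{e}$ for an expression that is nonzero exactly when $\mathsf{e}$ evaluates to zero, conditionals and loops unfold into assume, choice, and iteration:

$\mathsf{if}\;\mathsf{e}\;\mathsf{then}\;\mathsf{c}_{1}\;\mathsf{else}\;\mathsf{c}_{2}\;\equiv\;(\mathsf{assume}\;\mathsf{e};\mathsf{c}_{1})\oplus(\mathsf{assume}\;\neg\mathsf{e};\mathsf{c}_{2})$

$\mathsf{while}\;\mathsf{e}\;\mathsf{do}\;\mathsf{c}\;\equiv\;(\mathsf{assume}\;\mathsf{e};\mathsf{c})^{*};\mathsf{assume}\;\neg\mathsf{e}$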
### 2.2 Release-Acquire (RA) Semantics
We give the semantics of parameterized systems under release-acquire
consistency. We opted for an operational [59, 48] over an axiomatic [52]
definition, and follow [1]. What makes the operational definition attractive
is that it comes with a notion of configuration or state of the system that we
use to reason about computations. We first define thread-local configurations,
then add the shared memory, and give the global transition relation.
Local Configurations. The RA semantics enforces a total order on all stores to
the same variable that have been performed in the computation. We model these
total orders by $\mathsf{Time}=\mathbb{N}$ and refer to elements of
$\mathsf{Time}$ as timestamps. Using the total orders, each thread keeps track of its progress
in the computation. It maintains a _view_ from
$\mathsf{View}=\mathsf{Var}\rightarrow\mathsf{Time}$,
a function, that for a shared variable $\mathsf{x}$, returns the timestamp of
the most recent event the thread has observed on $\mathsf{x}$. Besides, the
thread keeps track of the command to be executed next (which can be
represented as program counter) and the register valuation from
$\mathsf{RVal}=\mathsf{Reg}\rightarrow\mathsf{Dom}$. The set of _thread-local
configurations_ is thus
$\displaystyle\mathsf{LCF}\;=\;\mathsf{Com}\times
\mathsf{RVal}\times\mathsf{View}.$
Unbounded Threads. The number of threads executing in the system is not known
a priori. As long as we restrict ourselves to safety properties, there are two
ways of modeling this. One way is to define _instance programs_ for a given
number of threads, and then requiring correctness of all instances, as has
been done in [19]. The alternative is to consider an infinite number of
threads right away. We take the latter approach and define
$\mathsf{TID}=\mathbb{N}$ to be the set of thread identifiers. The thread-local configuration
map then assigns a local configuration to each thread:
$\displaystyle\mathsf{LCFMap}\;=\;\mathsf{TID}\rightarrow\mathsf{LCF}.$
Views. The views maintained by the threads are used for synchronization. They
determine where in the (appropriate) total order a thread can place a store
and from which stores it can load a value. To achieve this, the shared memory
consists of _messages_ , which are variable value pairs enriched by a view,
with the form $(\mathsf{x},\mathsf{d},\mathsf{vw})$:
$\displaystyle\mathsf{Msgs}\;=\;\mathsf{Var}\times\mathsf{Dom}\times\mathsf{View}.$
Shared Memory. A _memory state_ is a set of such messages, and we use
$\mathsf{Mem}=2^{\mathsf{Msgs}}$ for the set of all memory states. With this,
the set of all _configurations_ of parametrized systems under release-acquire
is
$\displaystyle\mathsf{CF}=\mathsf{Mem}\times\mathsf{LCFMap}.$
Transitions. To define the transition relation among configurations, we first
give a _thread-local transition relation_ among thread-local configurations
$\xrightharpoondown{}\
\subseteq\mathsf{LCF}\times\mathsf{LAB}\times\mathsf{LCF}$ in Figure 1.
Thread-local transitions may be labeled or unlabeled, indicated by
$\mathsf{LAB}=\\{\varepsilon\\}\cup(\\{\mathsf{ld},\mathsf{st},\mathsf{cas}\\}\times\mathsf{Msgs})$.
The unlabeled transitions capture the control flow within a thread and
properly handle assignments and assumes. They are standard. The message-
labeled transitions capture the interaction of the thread with the shared
memory. We elaborate on the load, store, and CAS transitions by which a thread
with local view $\mathsf{vw}$ interacts with the shared memory.
Load. A load transition $\mathsf{r} \coloneqq\mathsf{x}$ picks a message
$(\mathsf{x},\mathsf{d},\mathsf{vw}^{\prime})$ from the shared memory where
$\mathsf{d}$ is the value stored in the message and updates its register
$\mathsf{r}$ with value $\mathsf{d}$. The message should not be outdated,
which means the timestamp of $\mathsf{x}$ in the message,
$\mathsf{vw}^{\prime}(\mathsf{x})$, should be at least the thread’s current
timestamp for $\mathsf{x}$, $\mathsf{vw}(\mathsf{x})$. The timestamps of other
variables do not influence the feasibility of the load transition. They are
taken into account, however, when the load is performed. The thread’s local
view is updated by joining the thread’s current view $\mathsf{vw}$ and
$\mathsf{vw}^{\prime}$ by taking the maximum timestamp per address;
$(\mathsf{vw}\sqcup\mathsf{vw}^{\prime})=\lambda\mathsf{x}.\max(\mathsf{vw}(\mathsf{x}),\mathsf{vw}^{\prime}(\mathsf{x}))$.
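To make the load rule concrete, the following Python sketch (our own names; views are modeled as dictionaries over the variable set) checks the not-outdated condition and performs the view join:

```python
# Minimal sketch of views and the RA load rule; names are assumptions.
def join(vw1, vw2):
    """Pointwise maximum of two views: (vw1 ⊔ vw2)(x) = max(vw1(x), vw2(x))."""
    return {x: max(vw1[x], vw2[x]) for x in vw1}

def can_load(vw, msg):
    """A message (x, d, vw') is not outdated for view vw iff vw'(x) >= vw(x)."""
    x, _d, vw_msg = msg
    return vw_msg[x] >= vw[x]

def do_load(regs, vw, reg, msg):
    """Load reg := x from msg: update the register, join the views."""
    x, d, vw_msg = msg
    assert can_load(vw, msg)
    return {**regs, reg: d}, join(vw, vw_msg)

vw_t = {"x": 10, "y": 0}                 # thread's current view
msg = ("y", 1, {"x": 0, "y": 7})         # message on y carrying timestamp 7
regs, vw_t = do_load({"r1": 0}, vw_t, "r1", msg)
# the register now holds the message value; the view absorbs timestamp 7 on y
```

Note that the timestamps of variables other than the loaded one do not affect feasibility but do flow into the joined view, exactly as in the rule above.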
Store. When a thread executes a store $\mathsf{x} \coloneqq\mathsf{r}$ it adds
a message $(\mathsf{x},\mathsf{d},\mathsf{vw}^{\prime})$ to the memory, where
$\mathsf{d}$ is the value held by the register $\mathsf{r}$. The new thread-
local view (and the message view), $\mathsf{vw}^{\prime}$, is obtained from
the current $\mathsf{vw}$ by increasing the time-stamp of $\mathsf{x}$. We use
$\mathsf{vw}<_{x}\mathsf{vw}^{\prime}$ to mean
$\mathsf{vw}(\mathsf{x})<\mathsf{vw}^{\prime}(\mathsf{x})$ and
$\mathsf{vw}(\mathsf{y})=\mathsf{vw}^{\prime}(\mathsf{y})$ for all variables
$\mathsf{y}\neq\mathsf{x}$.
CAS. A CAS transition is a load and store instruction executed atomically.
$\mathsf{cas}(\mathsf{x},\mathsf{r}_{1},\mathsf{r}_{2})$ has the intuitive
meaning $\mathsf{atomic}\{\mathsf{r}\coloneqq\mathsf{x};\;\mathsf{assume}\;{\mathsf{r}=\mathsf{r}_{1}};\;\mathsf{x}\coloneqq\mathsf{r}_{2}\}$. The instruction checks whether the
shared variable $\mathsf{x}$ holds the value of $\mathsf{r}_{1}$ and, in case
it does, sets it to the value of $\mathsf{r}_{2}$. The check and the
assignment happen atomically. Under RA, this means the timestamp $\mathsf{ts}$
of the load instruction and the timestamp $\mathsf{ts}^{\prime}$ of the store
instruction involved in the CAS should be adjacent,
$\mathsf{ts}^{\prime}=\mathsf{ts}+1$.
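The timestamp conditions for stores and CAS can be sketched as follows (assumed representation, not from the paper: a memory maps each variable to its timestamped messages):

```python
# Sketch of the store condition vw <_x vw' and the CAS adjacency condition.
def fresh_store(mem, vw, x, d):
    """x := r adds a message whose timestamp on x strictly exceeds the
    thread's view; all other view components stay unchanged (vw <_x vw')."""
    ts = max([vw[x]] + list(mem.get(x, {0: 0}))) + 1   # any unused larger stamp
    vw2 = {**vw, x: ts}
    mem2 = {**mem, x: {**mem.get(x, {}), ts: d}}
    return mem2, vw2

def cas_ok(ts_load, ts_store):
    """Under RA, the load and the store of a CAS must have adjacent
    timestamps: ts' = ts + 1."""
    return ts_store == ts_load + 1
```

Picking the maximal stamp plus one is only one admissible choice; the semantics allows any timestamp not yet taken on $\mathsf{x}$ that is larger than the thread's current one.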
The transition relation among configurations $\xrightarrow{}\
\subseteq\mathsf{CF}\times\mathsf{TID}\times(\mathsf{Msgs}\cup\\{\varepsilon\\})\times\mathsf{CF}$
is defined in Figure 1. It is labeled by a thread identifier and possibly a
message (if the transition interacts with the shared memory). The relation
expects a thread $\mathsf{t}$ which performs the transition. In the case of
local computations, there are no more requirements and the transition
propagates to the configuration. In the case of loads, we require the memory
to hold the message to be loaded. In the case of stores, the message to be
stored should not conflict with the memory. In the case of CAS, we require
both of the above, and that the two messages should have consecutive
timestamps. We defer the definition of non-conflicting messages until we can
give it in a broader perspective (Section 2.2.1).
Fix a parametrized system of interest $\mathsf{c}$. The initial thread-local
configuration is
$\mathsf{lcf}_{\mathsf{init}}=(\mathsf{c},\mathsf{rv}_{0},\mathsf{vw}_{0})$,
where the register valuation assigns $\mathsf{rv}_{0}(\mathsf{r})=0$ to all
registers and the view has $\mathsf{vw}_{0}(\mathsf{x})=0$ for all
$\mathsf{x}\in\mathsf{Var}$. The _initial configuration_ of the parametrized
system is
$\mathsf{cf}_{\mathsf{init}}=(\mathsf{Mem}_{\mathsf{init}},\mathsf{lcfm}_{\mathsf{init}})$
with an initial memory $\mathsf{Mem}_{\mathsf{init}}$ consisting of messages
where all shared variables store the value
$\mathsf{d}_{\mathsf{init}}\in\mathsf{Dom}$, along with the initial view, which
assigns timestamp 0 to all shared variables, and
$\mathsf{lcfm}_{\mathsf{init}}(\mathsf{t})=\mathsf{lcf}_{\mathsf{init}}$ for
all threads. A _computation_ (or a run) is a finite sequence of consecutive
transitions
$\displaystyle\rho\;=\;\mathsf{cf}_{0}\xrightarrow{(\mathsf{t}_{1},\mathsf{msg}_{1})}\mathsf{cf}_{1}\xrightarrow{(\mathsf{t}_{2},\mathsf{msg}_{2})}\ldots\xrightarrow{(\mathsf{t}_{n},\mathsf{msg}_{n})}\mathsf{cf}_{n}.$
The computation is initialized if
$\mathsf{cf}_{0}=\mathsf{cf}_{\mathsf{init}}$. We use $\mathsf{TS}(\rho)$ for
the set of all non-zero timestamps that occur in all configurations across all
variables. We use $\mathsf{TID}(\rho)$ to refer to the set of thread
identifiers labeling the transitions. For a set
$\mathsf{TID}^{\prime}\subseteq\mathsf{TID}$ of thread identifiers, we use
$\rho\\!\downarrow_{\mathsf{TID}^{\prime}}$ to project the computation to
transitions from the given threads. With
$\mathsf{first}(\rho)=\mathsf{cf}_{0}$, $\mathsf{last}(\rho)=\mathsf{cf}_{n}$
we access the first/last configurations in the computation.
###### Example 2.1.
Consider the program given in Figure 2, which implements a simplified version
of Dekker’s mutual exclusion protocol for two threads. There are two
shared variables $\mathsf{x}$ and $\mathsf{y}$. Both $\mathsf{x},\mathsf{y}$
are initialized to 0, and at instructions $\lambda_{0},\lambda^{\prime}_{0}$
the registers $\mathsf{r}_{1},\mathsf{r}^{\prime}_{1}$ are initialized to 1.
The first thread $\mathsf{t}_{1}$ signals that it wants to enter the critical
section by writing the value 1 to $\mathsf{x}$. It then checks if thread
$\mathsf{t}_{2}$ has asked to enter the critical section by reading the value
of $\mathsf{y}$ and storing it into the register $\mathsf{r}_{1}$. The thread
$\mathsf{t}_{1}$ is allowed to enter the critical section only if the value
stored in the register $\mathsf{r}_{1}$ is 0. The second thread
$\mathsf{t}_{2}$ behaves in a symmetric manner.
$\begin{array}[]{c}
\text{Variables }\mathsf{x}\text{ and }\mathsf{y}\text{ have been initialized to 0}\\
\begin{array}[]{l|l}
\hline
\text{Thread }\mathsf{t}_{1} & \text{Thread }\mathsf{t}_{2}\\
\hline
\lambda_{0}:\;\mathsf{r}_{1}\coloneqq 1 & \lambda^{\prime}_{0}:\;\mathsf{r}^{\prime}_{1}\coloneqq 1\\
\lambda_{1}:\;\mathsf{x}\coloneqq\mathsf{r}_{1} & \lambda^{\prime}_{1}:\;\mathsf{y}\coloneqq\mathsf{r}^{\prime}_{1}\\
\lambda_{2}:\;\mathsf{r}_{1}\coloneqq\mathsf{y} & \lambda^{\prime}_{2}:\;\mathsf{r}^{\prime}_{1}\coloneqq\mathsf{x}\\
\lambda_{3}:\;\texttt{if}(\mathsf{r}_{1}==0): & \lambda^{\prime}_{3}:\;\texttt{if}(\mathsf{r}^{\prime}_{1}==0):\\
\quad\qquad\texttt{critical section} & \quad\qquad\texttt{critical section}\\
\hline
\end{array}
\end{array}$
Figure 2: On top, a simplified version of Dekker’s mutual exclusion protocol.
Below, a partial execution sequence under RA. The rectangles show the contents
(messages) of the shared memory. Messages have three components: (1) the
variable, (2) the value of the message, and (3) the message view, a map from
$\{\mathsf{x},\mathsf{y}\}$ to the set of timestamps $\mathsf{Time}$
($\mathbb{N}$). The lines below show the thread-local state: instruction
pointer, register valuation, and thread-local view.
Under Sequential Consistency (SC) [54], which is a stronger notion of
consistency, the mutual exclusion property (i.e., at most one thread is in the
critical section at any time) is preserved. However, this is not the case
under the RA memory model. To see why, consider the execution sequence
presented in Figure 2. At each instant, the figure shows where the instruction
counter (i.e., the label of the next instruction to get executed) resides in
each of the threads, along with the values of the registers. The black arrows
with instruction labels $\lambda_{1},\lambda^{\prime}_{1}$ show the evolution
of the run on executing the instruction labeled
$\lambda_{1},\lambda^{\prime}_{1}$ respectively. Let $\mathsf{m}_{\lambda}$
represent the memory obtained after executing the instruction labeled
$\lambda$, and let $\mathsf{msg}_{\lambda}$ be the unique new message (if any)
that is part of $\mathsf{m}_{\lambda}$ after the execution of the instruction
labeled $\lambda$. The initial memory is $\mathsf{m}_{\mathsf{init}}$, where
$\mathsf{x},\mathsf{y}$ have value 0 and timestamp 0;
$\mathsf{msg}_{\mathsf{x}},\mathsf{msg}_{\mathsf{y}}$ represent the messages
in $\mathsf{m}_{\mathsf{init}}$ corresponding to $\mathsf{x},\mathsf{y}$. The
execution of the instruction labeled by $\lambda_{1}$ results in the addition
of a new message $\mathsf{msg}_{\lambda_{1}}$ to the memory whose timestamp
(10) is higher than 0 (which is the current timestamp of the variable
$\mathsf{x}$ for $\mathsf{t}_{1}$). The view of $\mathsf{t}_{1}$ is then
updated to $\mathsf{x}\mapsto 10,\mathsf{y}\mapsto 0$. Likewise, the execution
of the instruction labeled by $\lambda^{\prime}_{1}$ results in the addition
of a new message $\mathsf{msg}_{\lambda^{\prime}_{1}}$ to the memory with a
higher timestamp (7). This updates the view of $\mathsf{t}_{2}$ to
$\mathsf{x}\mapsto 0,\mathsf{y}\mapsto 7$. The read instruction labeled by $\lambda_{2}$ is then
allowed to use the message $\mathsf{msg}_{\mathsf{y}}$ to fetch the value of
$\mathsf{y}$, since the view of $\mathsf{t}_{1}$ wrt. $\mathsf{y}$ is 0.
Likewise, in the case of $\mathsf{t}_{2}$ and the execution of the
instruction labeled by $\lambda^{\prime}_{2}$, the message
$\mathsf{msg}_{\mathsf{x}}$ is used, since the view of $\mathsf{t}_{2}$ wrt.
$\mathsf{x}$ is 0. After these steps, both threads enter their respective critical
section.
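The violating run of Example 2.1 can be replayed mechanically. The sketch below (our own encoding; timestamps 10 and 7 chosen as in Figure 2, though any fresh stamps would do) shows that both loads may legitimately pick the initial messages, so both threads read 0 and enter the critical section:

```python
# Executable sketch of the RA run from Figure 2; helper names are ours.
mem = [("x", 0, {"x": 0, "y": 0}),      # initial messages, timestamp 0
       ("y", 0, {"x": 0, "y": 0})]
vw1 = {"x": 0, "y": 0}                  # view of t1
vw2 = {"x": 0, "y": 0}                  # view of t2

# λ1: t1 stores x := 1 at timestamp 10
vw1 = {**vw1, "x": 10}
mem.append(("x", 1, dict(vw1)))
# λ'1: t2 stores y := 1 at timestamp 7
vw2 = {**vw2, "y": 7}
mem.append(("y", 1, dict(vw2)))

def loadable(vw, var):
    """Messages on var that are not outdated for a thread with view vw."""
    return [m for m in mem if m[0] == var and m[2][var] >= vw[var]]

# λ2: t1 loads y; the initial message (timestamp 0 >= vw1["y"] = 0) is allowed
r1 = loadable(vw1, "y")[0][1]
# λ'2: t2 loads x; likewise the initial message on x is allowed
r1p = loadable(vw2, "x")[0][1]

both_in_cs = (r1 == 0 and r1p == 0)     # mutual exclusion is violated
```

Under SC the two stores would be globally visible before the loads; under RA the loads are merely constrained by each thread's own view, which is why the stale reads are permitted.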
#### 2.2.1 Conflict
We need a notion of conflict not only for messages, but also for memories,
configurations, and computations. Two messages are non-conflicting, denoted by
$(\mathsf{x}_{1},\mathsf{d}_{1},\mathsf{vw}_{1})\;\\#\;(\mathsf{x}_{2},\mathsf{d}_{2},\mathsf{vw}_{2})$,
if either their variables are different, $\mathsf{x}_{1}\neq\mathsf{x}_{2}$,
the timestamps are different,
$\mathsf{vw}_{1}(\mathsf{x}_{1})\neq\mathsf{vw}_{2}(\mathsf{x}_{2})$, or the
timestamps are zero,
$\mathsf{vw}_{1}(\mathsf{x}_{1}){=}0{=}\mathsf{vw}_{2}(\mathsf{x}_{2})$.
Observe that initial messages do not conflict with any other message.
Two memory states are non-conflicting, $\mathsf{m}_{1}\;\\#\;\mathsf{m}_{2}$,
if for all $\mathsf{msg}_{1}\in\mathsf{m}_{1}$ and all
$\mathsf{msg}_{2}\in\mathsf{m}_{2}$ we have
$\mathsf{msg}_{1}\;\\#\;\mathsf{msg}_{2}$. Two configurations are non-
conflicting, $\mathsf{cf}_{1}\;\\#\;\mathsf{cf}_{2}$, if their memory states
are non-conflicting. Two computations are non-conflicting, denoted
$\rho\;\\#\;\rho^{\prime}$, if they use different threads and non-conflicting
messages, $\mathsf{TID}(\rho)\cap\mathsf{TID}(\rho^{\prime})=\emptyset$ and
$\mathsf{last}(\rho)\;\\#\;\mathsf{last}(\rho^{\prime})$.
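A direct transcription of the conflict checks, as a Python sketch with our own names (messages as `(var, value, view)` triples, views as dictionaries):

```python
# Non-conflicting messages and memories, transcribed from the definitions.
def msg_compat(m1, m2):
    """Messages are non-conflicting iff their variables differ, their
    timestamps differ, or both timestamps are zero (initial messages)."""
    (x1, _d1, vw1), (x2, _d2, vw2) = m1, m2
    return (x1 != x2
            or vw1[x1] != vw2[x2]
            or vw1[x1] == 0 == vw2[x2])

def mem_compat(mem1, mem2):
    """Memories are non-conflicting iff all message pairs are."""
    return all(msg_compat(m1, m2) for m1 in mem1 for m2 in mem2)
```

As observed above, an initial message (timestamp 0) is compatible with every message, including another initial one.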
## 3 Undecidability of $\mathsf{env}(\mathsf{acyc})$
In this section, we establish the undecidability of the class
$\mathsf{env}(\mathsf{acyc})$, that is, the class with loop-free
$\mathsf{env}$ threads (which can execute arbitrarily many CAS operations)
and without any $\mathsf{dis}$ threads. This result shows that even with the
loop-free assumption, allowing $\mathsf{env}$ threads to perform CAS
operations is in itself intractable from a safety verification viewpoint.
Hence, the $\mathsf{nocas}$ restriction that we impose on $\mathsf{env}$
threads is a justified means of achieving tractability.
In fact, we will show a stronger result: we can transform a non-parameterized
system consisting of $n$ distinguished threads with the full instruction set
and loops under RA (the class
$\mathsf{dis}_{1}\parallel\cdots\parallel\mathsf{dis}_{n}$) into a
parameterized system of the class $\mathsf{env}(\mathsf{acyc})$ such that
control-state reachability is preserved. With this equivalence, the claim
follows from the undecidability result of [1].
### 3.1 Constructing the equivalent loop-free program
To show this result, we take as input $n$ programs
$\{\mathsf{c}_{1},\mathsf{c}_{2},\cdots,\mathsf{c}_{n}\}$ and a failure
state label $\lambda_{\mathsf{fail}}$ in some $\mathsf{c}_{i}$ (with possibly
the full instruction set and loops) and transform them into a single program
$\mathsf{c}_{\mathsf{env}}$ and failure state
$\lambda_{\mathsf{fail}}^{\prime}$, whose control flow is loop-free but which
uses the full instruction set including CAS operations. We claim that the
state label $\lambda_{\mathsf{fail}}$ is reachable in
$\mathsf{dis}_{1}\parallel\cdots\parallel\mathsf{dis}_{n}$ with
$\mathsf{dis}_{i}$ executing $\mathsf{c}_{i}$ if and only if the state label
$\lambda_{\mathsf{fail}}^{\prime}$ is reachable in the system
$\mathsf{env}(\mathsf{acyc})$ with the environment threads executing
$\mathsf{c}_{\mathsf{env}}$.
Let the variable set, data domain, and register set of the original system be
$\mathsf{Var}$, $\mathsf{Dom}$, and
$\mathsf{Reg}=\{\mathsf{r}_{1},\cdots,\mathsf{r}_{k}\}$ as usual. We assume
that the memory is initialized to 0 on all variables.
Converting a single program $\mathsf{c}_{i}$ to $\mathsf{c}_{i}^{\prime}$. We
show how to convert one thread program $\mathsf{c}_{i}$ into a loop-free
program $\mathsf{c}_{i}^{\prime}$, and then show how to combine all the
programs into a single loop-free program $\mathsf{c}_{\mathsf{env}}$.
Consider for an $i$, the program $\mathsf{c}_{i}$. For the purposes of this
construction, we will assume that the program $\mathsf{c}_{i}$ has been
specified as a transition system rather than in the while-language syntax. It
is clear that both representations are equivalent and can be interconverted
with only polynomial overhead. Hence we assume that
$\mathsf{c}_{i}=(Q,\Delta,\iota)$ where $Q$ is the set of control states,
$\Delta$ is the transition relation and $\iota$ maps each transition to its
corresponding instruction from
$\\{\mathsf{skip},\mathsf{assume}\;{\mathsf{e}(\overline{\mathsf{r}})},\mathsf{r}
\coloneqq\mathsf{e}(\overline{\mathsf{r}}),\mathsf{r}
\coloneqq\mathsf{x},\mathsf{x}
\coloneqq\mathsf{r},\mathsf{cas}(\mathsf{x},\mathsf{r}_{1},\mathsf{r}_{2})\\}$.
We transform $\mathsf{c}_{i}$ to a loop-free program as follows. Let
$Q=\{q_{0},\cdots,q_{n}\}$, with $q_{0}$ as the initial state.
In this conversion, we add extra variables and values such that
$\mathsf{Var}^{\prime}=\mathsf{Var}\uplus\{t_{i},\mathfrak{r}_{i}^{1},\cdots,\mathfrak{r}_{i}^{|\mathsf{Reg}|}\}$
and
$\mathsf{Dom}^{\prime}=\mathsf{Dom}\cup\{0,\cdots,|Q|-1,\Lambda^{\bot}\}$,
where $\uplus$ denotes disjoint union. Now we specify the new transition
system $\mathsf{c}_{i}^{\prime}$, which needs to be loop-free. For each
transition $\partial=(q_{a},q_{b})\in\Delta$, with source and end states
$q_{a}$ and $q_{b}$ respectively and instruction $\iota(\partial)$, we
transform it into the following transition sequence (a CAS followed by $k$
load operations; the transition corresponding to $\iota(\partial)$; then $k$
store operations ending with a CAS), denoted $\mbox{\Lightning}(\partial)$.
$\displaystyle\mbox{\Lightning}(\partial)\;=\;q_{\mathsf{start}}\xrightarrow{\mathsf{cas}(t_{i},a,\Lambda^{\bot})}q_{\partial}^{0}\xrightarrow{\mathsf{r}_{1}\coloneqq\mathfrak{r}_{i}^{1}}q_{\partial}^{1}\cdots q_{\partial}^{k-1}\xrightarrow{\mathsf{r}_{k}\coloneqq\mathfrak{r}_{i}^{k}}q_{\partial}^{k}\xrightarrow{\iota(\partial)}q_{\partial}^{k+1}\xrightarrow{\mathfrak{r}_{i}^{1}\coloneqq\mathsf{r}_{1}}q_{\partial}^{k+2}\cdots q_{\partial}^{2k}\xrightarrow{\mathfrak{r}_{i}^{k}\coloneqq\mathsf{r}_{k}}q_{\partial}^{2k+1}\xrightarrow{\mathsf{cas}(t_{i},\Lambda^{\bot},b)}q_{\mathsf{end}}$
We construct $\mbox{\Lightning}(\partial)$ for each transition $\partial$ in
$\Delta$ to get the complete transition system. The initial/final
(collectively called terminal) nodes of this transition system are
$q_{\mathsf{start}},q_{\mathsf{end}}$ (which are common to all
$\mbox{\Lightning}(\partial)$). The internal $q_{\partial}^{\\_}$ states are
all distinct across the $\mbox{\Lightning}(\partial)$ for different
$\partial$. The transition system that we obtain has size
$\mathcal{O}(|\mathsf{Reg}||\Delta|)$ (each original transition from $q_{a}$
to $q_{b}$ is transformed into a sequence of $2|\mathsf{Reg}|+3$ transitions
between $q_{\mathsf{start}}$ and $q_{\mathsf{end}}$). It is clearly loop-free.
See Figure 3 for an example starting from
$\mathsf{dis}_{i}\parallel\mathsf{dis}_{j}$.
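The construction of $\mbox{\Lightning}(\partial)$ is mechanical; the following Python sketch (the encoding and names are ours) builds the $2k+3$ edges for one transition:

```python
# Build the edge list of ⚡(∂) for a transition delta = (a, b) of program c_i.
# "BOT" plays the role of Λ⊥; internal states are (delta, index) pairs,
# so they are distinct across different transitions.
def lightning(i, delta, instr, k):
    a, b = delta
    edges = [("q_start", f"cas(t_{i},{a},BOT)", (delta, 0))]   # lock, read a
    prev = (delta, 0)
    for j in range(1, k + 1):                # k loads: r_j := 𝔯_i^j
        edges.append((prev, f"r_{j} := R_{i}_{j}", (delta, j)))
        prev = (delta, j)
    edges.append((prev, instr, (delta, k + 1)))   # the simulated instruction
    prev = (delta, k + 1)
    for j in range(1, k + 1):                # k stores: 𝔯_i^j := r_j
        edges.append((prev, f"R_{i}_{j} := r_{j}", (delta, k + 1 + j)))
        prev = (delta, k + 1 + j)
    edges.append((prev, f"cas(t_{i},BOT,{b})", "q_end"))       # unlock, write b
    return edges

edges = lightning(1, (0, 2), "skip", k=3)    # 2k+3 = 9 edges
```

All sequences share only the terminal states `q_start` and `q_end`, so the resulting transition system is loop-free and of size $\mathcal{O}(|\mathsf{Reg}||\Delta|)$.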
Combining the individual $\mathsf{c}_{i}^{\prime}$. We construct programs
$\mathsf{c}_{i}^{\prime}$ as described above for each thread
$\mathsf{dis}_{i}$, and combine these individual programs into a single
program $\mathsf{c}_{\mathsf{env}}$. We ensure that the newly added shared
variables ($t_{i}$, $\mathfrak{r}_{i}^{\_}$ for $\mathsf{dis}_{i}$) are
disjoint across threads. Hence the variable set is now
$\mathsf{Var}^{\prime}=\mathsf{Var}\uplus\{t_{1},\cdots,t_{n}\}\uplus\{\mathfrak{r}_{i}^{j}\}_{i\in[n],j\in[|\mathsf{Reg}|]}$
(where $\mathsf{Var}$ is the original variable set that
$\{\mathsf{c}_{1},\mathsf{c}_{2},\cdots,\mathsf{c}_{n}\}$ were operating
with). Finally, the combined data domain is simply the union of the
individual data domains (which may overlap). We combine the individual
programs as
$\mathsf{c}_{\mathsf{env}}=\mathsf{c}_{1}^{\prime}\oplus\mathsf{c}_{2}^{\prime}\oplus\cdots\oplus\mathsf{c}_{n}^{\prime}$,
where $\oplus$ denotes non-deterministic choice. It is clear that
$\mathsf{c}_{\mathsf{env}}$ is loop-free. Additionally,
$|\mathsf{c}_{\mathsf{env}}|$, as well as the new $|\mathsf{Dom}^{\prime}|$
and $|\mathsf{Var}^{\prime}|$, are polynomial in $\sum_{i}|\mathsf{c}_{i}|$
and the previous $|\mathsf{Var}|$ and $|\mathsf{Dom}|$.
[Figure 3 here: diagrams of the programs $\mathsf{c}_{i},\mathsf{c}_{j}$ and
the transformed programs $\mathsf{c}^{\prime}_{i},\mathsf{c}^{\prime}_{j}$,
showing the register loads/stores and the flanking CAS operations on
$t_{i},t_{j}$ between $q_{\mathsf{start}}$ and $q_{\mathsf{end}}$.]
Figure 3: Examples of two $\mathsf{dis}$ threads $\mathsf{dis}_{i}$ and
$\mathsf{dis}_{j}$ executing programs $\mathsf{c}_{i},\mathsf{c}_{j}$ and the
corresponding transformed programs
$\mathsf{c}^{\prime}_{i},\mathsf{c}^{\prime}_{j}$. The program
$\mathsf{c}_{i}$ has 2 transitions while $\mathsf{c}_{j}$ has 3. Note how the
read/write values of the CAS operations in the transformed programs match the
transitions in the original programs.
### 3.2 Proof of Equivalence
We now prove that the system $\mathsf{env}(\mathsf{acyc})$ with the
$\mathsf{env}$ threads executing the program $\mathsf{c}_{\mathsf{env}}$ as
defined above respects the original system. We defer the precise notion of
‘maintaining reachability’ until Section 3.2.2. We first examine the program
$\mathsf{c}_{\mathsf{env}}$ and make some observations.
#### 3.2.1 Simulation of individual threads
Locking/unlocking of $\mathsf{c}_{i}^{\prime}$. For any single transformed
program $\mathsf{c}_{i}^{\prime}$, we note that at any given point, at most
one thread can be in an internal (not initial/final) state of
$\mathsf{c}_{i}^{\prime}$. To see this, note the two atomic CAS operations
flanking each $\mbox{\Lightning}(\partial)$ path in
$\mathsf{c}_{i}^{\prime}$. All these CAS operations are on the same variable
$t_{i}$, and moreover there are no other operations on $t_{i}$. Hence at any
given point in time, there is only one message on $t_{i}$ (the most recent
write) that is available for a CAS operation. The value of this message
dictates whether the operation will succeed. When it succeeds, the most
recent write value changes to the value written by the CAS. Now note that
$t_{i}$ is initialized with value 0; hence initially one thread, say
$\mathsf{t}$, can perform a CAS and change the recent value to
$\Lambda^{\bot}$. There is no transition from $q_{\mathsf{start}}$ that
performs a CAS reading $\Lambda^{\bot}$, so all other threads are kept
waiting until the recent value on $t_{i}$ changes from $\Lambda^{\bot}$. This
is possible only when the initial thread $\mathsf{t}$ executes the final
transition and reaches $q_{\mathsf{end}}$, maintaining the claim. Hence these
CAS operations play the role of a mutual-exclusion lock. They also serve a
second purpose.
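The locking discipline can be illustrated by modeling $t_{i}$ as a cell holding its most recent write, on which a CAS succeeds only when the read value matches (an executable sketch with assumed names; under RA the real mechanism additionally involves the adjacent-timestamp condition):

```python
# Model t_i as its most recent write; "BOT" plays the role of Λ⊥.
BOT = "BOT"

class TVar:
    def __init__(self):
        self.recent = 0          # t_i initialized to 0, i.e. initial state q_0

    def cas(self, expect, new):
        """Atomically: succeed and write `new` iff the recent value is `expect`."""
        if self.recent == expect:
            self.recent = new
            return True
        return False

t = TVar()
assert t.cas(0, BOT)         # first thread locks: reads q_0, writes ⊥
assert not t.cas(0, BOT)     # every other thread is stuck while ⊥ is recent
assert t.cas(BOT, 2)         # the lock holder unlocks, publishing state q_2
assert t.cas(2, BOT)         # a successor may now lock, resuming from q_2
```

The published value on unlock is exactly the control-state transfer discussed next.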
State transference. We now know that for each $i$, only one thread may execute
$\mathsf{c}_{i}^{\prime}$ at any given time. However the locking/unlocking
operations using CAS also enable threads to transfer their state to their
successors. There are three components to the state, which we handle in turn:
* •
Control-state: Note that the recent value on variable $t_{i}$ is
$v\neq\Lambda^{\bot}$ only if the previous thread terminated after simulating
some transition ending at $q_{v}$. Additionally, a locking CAS operation for
$\mbox{\Lightning}(\partial)$ reads value $v$ only if $\partial$ is a
transition from $q_{v}$ to some other state. Hence, it is guaranteed that the
successive thread will execute some transition that emerges from a state where
the previous thread left off. Note how this is true for the first thread as
well since the initial value on all variables is 0 and the initial state of
the transition system is $q_{0}$.
* •
View: The second component that we consider is the view. This also is
transferred from a thread to its successor through the CAS operation. In
particular, when a thread $\mathsf{t}$ executes the final CAS operation to
reach $q_{\mathsf{end}}$, it generates a message on $t_{i}$ which is read by
its successor. This read implies that the successor will take the join on its
own (initial) view with that of the message and hence essentially accumulate
the exact view that the previous thread left with. So, the view is transferred
as well.
* •
Register valuations: The previous thread $\mathsf{t}$ stores its register
valuations in the shared variables $\mathfrak{r}_{i}^{j}$ in the final
sequence of store operations before terminating. These are then accessed by
the successor thread through the initial sequence of load operations.
In this way we see that not only is mutual exclusion ensured, but the thread
states are transferred from one thread to the next. Together, these sequences
of threads simulate the entire run of the original $\mathsf{dis}_{i}$ in
fragments. The above holds for all $i\in[n]$. Hence at any given point, there
are at most $n$ threads simulating the original ones.
#### 3.2.2 The Complete Simulation
Now we formalize the notion of equivalence in reachability. We say that an
original state of the threads
$\{\mathsf{dis}_{1},\cdots,\mathsf{dis}_{n}\}$ is equivalent to a new state
when we have the following.
* •
if the control states of threads
$\{\mathsf{dis}_{1},\cdots,\mathsf{dis}_{n}\}$ are
$(q_{i_{1}},q_{i_{2}},\cdots,q_{i_{n}})$ respectively, then in the new system
with $\mathsf{env}$, the recent value of shared variable $t_{j}$ is $i_{j}$,
* •
the register valuation of each original $\mathsf{dis}_{i}$ is reflected in
the most recent writes to the variables $\mathfrak{r}_{i}^{\_}$
($\mathsf{r}_{j}$ equals the most recent write to $\mathfrak{r}_{i}^{j}$) for
each thread $i$,
* •
the view of $\mathsf{dis}_{i}$ is the view stored in the most recent message
to $t_{i}$ (again projected on the original variable set), and
* •
the global memory (projected) is identical across the original and
$\mathsf{env}$ states.
We claim the following: a state in the original system can be reached if and
only if some equivalent state in the new system can be reached. We can prove
this by induction. The base case is that all threads are in their initial
states, with initial registers and views, and the memory contains only
initial messages (0 on each variable). This trivially satisfies the
requirement, both in the forward and reverse directions.
Now for the inductive case ($\Rightarrow$). Assume the claim held at some
instant, and let some $\mathsf{dis}_{i}$ execute an instruction for the
transition $\partial$. In the new system, we can simulate this by a fresh
$\mathsf{env}$ thread $\mathsf{t}$ taking the path corresponding to
$\mbox{\Lightning}(\partial)$ in $\mathsf{c}_{i}^{\prime}$. We note by the
observations above that the invariants for the thread-local state (control
state, register valuations, and view) are maintained. Additionally, if
$\mathsf{dis}_{i}$ wrote a message to the memory, then so can $\mathsf{t}$.
In particular, since the view of $\mathsf{t}$ is obtained from the CAS read,
it matches that of $\mathsf{dis}_{i}$. Hence the message added by
$\mathsf{t}$ can have the same timestamps as the one added by
$\mathsf{dis}_{i}$.
Inductive case ($\Leftarrow$). The same argument works in the reverse
direction. Assume that a pair of equivalent states have been reached. Now,
consider a
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
thread path $\mbox{\Lightning}(\partial)$ in $\mathsf{c}_{i}^{\prime}$ where
$\partial=(q_{a},q_{b})$. Then, by the induction hypothesis this means that
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}_{i}$
is in state $q_{a}$ in the original run. Given the equality of thread and
memory state initially, it too can take the transition $(q_{a},q_{b})$. Once
again, the invariant follows from the earlier observations.
This gives a sketch of the proof. In particular, note that even though we give
an equivalence between the control state in the original system and a variable
value in the new system, this can be easily converted to an equivalence
between control states themselves. This means that the reachability problem
for
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}_{1}\parallel\cdots\parallel{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}_{n}$
can be converted to a reachability problem for
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}(\mathsf{acyc})$.
This prompts us to restrict
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads to a reduced (cas-free) instruction set and motivates the idea of
modelling CAS instructions in a run via computations of the
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads.
## 4 A Simplified Semantics
In this section, we propose a simplified semantics for the class of systems
given by
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}(\mathsf{nocas})\parallel{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}_{1}\parallel\dots\parallel{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}_{n}$.
The core of this result relies on the _Infinite Supply Lemma_ which shows that
if some
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
thread could generate a message $(\mathsf{x},\mathsf{val},\mathsf{vw})$, then
a clone of that thread could generate the message
$(\mathsf{x},\mathsf{val},\mathsf{vw}^{\prime})$ with
$\mathsf{vw}^{\prime}=\mathsf{vw}[\mathsf{x}\mapsto t]$ for some
$t>\mathsf{vw}(\mathsf{x})$.
There are two assumptions that the infinite supply lemma and hence our
semantic simplification result rely on:
* •
arbitrarily many
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads executing identical programs.
* •
the
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads do not have atomic instructions (CAS).
The first assumption allows us to have clone
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads that duplicate the computation and hence the messages generated in it.
The second assumption is required for the duplicated computation to remain
valid under RA.
While performing the duplication, one must keep in mind the dependency between
stores and loads across threads. The fact that
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads are not replicatable (their messages cannot be duplicated) adds to the
challenge. To ensure that the clone threads can follow in the footsteps of the
original computation we require that
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
messages can be read by the
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
clones whenever they can be read by the original
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads. This necessitates that we respect relative order among timestamps
between
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
and
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads.
We develop some intermediate concepts that help us in developing a valid
duplicate run. In order to accommodate the clone threads, we must make space
(create unused timestamps) along
${\color[rgb]{.75,0,.25}\definecolor[named]{pgfstrokecolor}{rgb}{.75,0,.25}\mathsf{Time}}$
for clones to write messages. We do this via timestamp liftings. Having done
this, we need to define how we can combine the original computation with that
of the clones. We develop the concept of superposition of computations to do
this. Finally, the infinite supply (of messages) lemma shows how, using the
earlier two concepts, we can generate copies of messages, with higher
timestamps.
This ‘duplication-at-will’ of
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
messages means that we need not store the entire set of
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
messages produced. Those with the smallest timestamps act as good
representatives of the set. Additionally, when any thread reads from an
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
message, we need not be bothered about timestamp comparisons since we could
always generate a copy of that message with as high a timestamp as required. It
is this observation that gives us the timestamp abstraction and with it the
simplified semantics.
### 4.1 Infinite Supply
We now make these arguments precise. Our strategy is to split up the
timestamps (hence the computation) and separate the part originating from the
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads from the
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
part (which can be duplicated at will). We write
$\rho\\!\downarrow_{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}}$
and
$\rho\\!\downarrow_{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}}$
to denote the projections of $\rho$ to
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
and
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
respectively.
#### 4.1.1 Timestamp Lifting
In our development we will make use of _timestamp transformations_
$\mathsf{tf}:{\color[rgb]{.75,0,.25}\definecolor[named]{pgfstrokecolor}{rgb}{.75,0,.25}\mathsf{Time}}\rightarrow{\color[rgb]{.75,0,.25}\definecolor[named]{pgfstrokecolor}{rgb}{.75,0,.25}\mathsf{Time}}$.
We extend these to views $\mathsf{vw}$ with per variable timestamp
transformations
$\mathsf{tf}=\\{\mathsf{tf}^{\mathsf{x}}\\}_{\mathsf{x}\in\mathsf{Var}}$,
where $\mathsf{tf}^{\mathsf{x}}$ only transforms the timestamps for the
variable $\mathsf{x}$. The transformed view
$\mathsf{tf}(\mathsf{vw}):\mathsf{Var}\rightarrow{\color[rgb]{.75,0,.25}\definecolor[named]{pgfstrokecolor}{rgb}{.75,0,.25}\mathsf{Time}}$
is defined by
$(\mathsf{tf}(\mathsf{vw}))(\mathsf{x})=\mathsf{tf}^{\mathsf{x}}(\mathsf{vw}(\mathsf{x}))$
for every variable $\mathsf{x}$.
As an example consider shared variables $\mathsf{x},\mathsf{y}$ and views
$\mathsf{vw}_{1},\mathsf{vw}_{2}$ such that
$\mathsf{vw}_{1}=[\mathsf{x}\mapsto 2,\mathsf{y}\mapsto 5]$ and
$\mathsf{vw}_{2}=[\mathsf{x}\mapsto 10,\mathsf{y}\mapsto 0]$. Using the
timestamp transformation
$\mathsf{tf}=\\{\mathsf{tf}^{\mathsf{x}},\mathsf{tf}^{\mathsf{y}}\\}$ where
$\mathsf{tf}^{\mathsf{x}}(0)=\mathsf{tf}^{\mathsf{y}}(0)=0$,
$\mathsf{tf}^{\mathsf{x}}(t)=t+2$ and $\mathsf{tf}^{\mathsf{y}}(t)=t+7$ for
$t>0$, we obtain $\mathsf{tf}(\mathsf{vw}_{1})=[\mathsf{x}\mapsto
4,\mathsf{y}\mapsto 12]$ and $\mathsf{tf}(\mathsf{vw}_{2})=[\mathsf{x}\mapsto
12,\mathsf{y}\mapsto 0]$. We also apply the timestamp transformation to
messages, memories, configurations, and computations by transforming all view
components.
RA-valid timestamp lifting. An _RA-valid timestamp lifting_ for a run $\rho$
is a (per variable) timestamp transformation
$\mathcal{M}=\\{\mu^{\mathsf{x}}\\}_{\mathsf{x}\in\mathsf{Var}}$ satisfying
two properties for each $\mathsf{x}\in\mathsf{Var}$: (1) it is strictly
increasing and fixes zero: $\mu^{\mathsf{x}}(0)=0$, and for all
$t_{1},t_{2}\in\mathbb{N}$ with $t_{1}<t_{2}$ we have
$\mu^{\mathsf{x}}(t_{1})<\mu^{\mathsf{x}}(t_{2})$; and
(2) if there is a CAS operation on $\mathsf{x}$ with (load, store) timestamps
as $(t,t+1)$ then $\mu^{\mathsf{x}}(t+1)=\mu^{\mathsf{x}}(t)+1$, i.e.
consecution of CAS-timestamps is maintained. Note that
$\mu(\mathsf{cf}_{\mathsf{init}})=\mathsf{cf}_{\mathsf{init}}$. In the example
above, $\mathsf{tf}$ is an RA-valid timestamp lifting.
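As a small sketch (hypothetical Python; `is_ra_valid_lifting` is our own helper, checking the two conditions over a finite timestamp domain rather than all of $\mathbb{N}$):

```python
def is_ra_valid_lifting(mu, domain, cas_pairs):
    """Check RA-validity of a single variable's lifting mu:
    (1) mu(0) = 0 and mu is strictly increasing on `domain`;
    (2) every CAS (load, store) timestamp pair (t, t+1) stays consecutive."""
    ts = sorted(domain)
    if 0 in ts and mu(0) != 0:
        return False
    if any(mu(a) >= mu(b) for a, b in zip(ts, ts[1:])):
        return False
    return all(mu(t + 1) == mu(t) + 1 for (t, _store) in cas_pairs)

# tf^x from the example: fixes 0 and shifts t > 0 by +2.
mu_x = lambda t: 0 if t == 0 else t + 2

assert is_ra_valid_lifting(mu_x, {0, 1, 2, 3}, cas_pairs=[])        # valid
assert is_ra_valid_lifting(mu_x, {0, 1, 2, 3}, cas_pairs=[(1, 2)])  # 3, 4 stay consecutive
assert not is_ra_valid_lifting(mu_x, {0, 1, 2, 3}, cas_pairs=[(0, 1)])  # 0, 3 break consecution
```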
Lemma 4.1 says that the run $\mathcal{M}(\rho)$ obtained by modifying the
timestamps of a valid run $\rho$ with an RA-valid timestamp lifting
$\mathcal{M}$ is also valid under the RA semantics.
###### Lemma 4.1 (Timestamp Lifting Lemma).
Let $\mathcal{M}=\\{\mu^{\mathsf{x}}\\}_{\mathsf{x}\in\mathsf{Var}}$ be an RA-
valid timestamp lifting. If $\rho$ is a computation under RA, then so is
$\mathcal{M}(\rho)$. Hence if a configuration $\mathsf{cf}$ is reachable under
RA then so is $\mathcal{M}(\mathsf{cf})$.
###### Proof 4.2.
This result follows since timestamp lifting is just a relabelling of
timestamps for each shared variable. The lemma relies on the following
facts/observations:
* •
There are no timestamp comparisons across variables, $\mathsf{vw}(\mathsf{x})$
is never compared with $\mathsf{vw}(\mathsf{x}^{\prime})$ for
$\mathsf{x}\neq\mathsf{x}^{\prime}$.
* •
The relative order between timestamps on the same variable is preserved due to
the strictly increasing property. Additionally, $\mu(0)=0$, maintaining the
timestamps of the $\mathsf{init}$ messages.
* •
The load, store timestamps of (CAS-local) operations still remain consecutive.
In particular, the lemma can be proven formally by induction on the length of
the run. The base case is trivial, and the inductive case follows by showing
that each instruction (read, write, CAS) that can be executed in $\rho$ can
also be executed in the lifted run $\mathcal{M}(\rho)$.
The duplication of messages by the clone
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads requires us to copy computations and then merge them such that the RA
semantics are not violated. This requires (1) that the timestamps of the
merged computations do not conflict and (2) that the reads-from dependencies
between threads are respected. With this in mind, we introduce the idea of
superposition.
#### 4.1.2 Superposition
We define the _superposition_ $\rho\triangleright\rho^{\prime}$ of two
computations $\rho,\rho^{\prime}$ as the computation that first executes
$\rho$ and then $\rho^{\prime}$. This requires us to combine the memory in
$\mathsf{last}(\rho)$ with that of every configuration in $\rho^{\prime}$.
Moreover, the threads transitioning in $\rho,\rho^{\prime}$ must be disjoint.
Given these considerations, the operation requires the computations to be non-
conflicting, $\rho\;\\#\;\rho^{\prime}$ (see Section 2.2.1), and is defined as
follows:
$\displaystyle\rho\triangleright\rho^{\prime}\;=\;\rho;(\mathsf{last}(\rho)+\rho^{\prime}).$
The addition of a configuration $\mathsf{cf}$ to a computation
$\rho\;=\;\mathsf{cf}_{0}\raisebox{-0.85pt}{$\smash{\mathrel{\stackon[-4.5pt]{\xrightarrow{\makebox[64.57784pt]{}}}{\scriptstyle(\mathsf{t}_{1},\mathsf{msg}_{1})\,}}}$}\ldots\raisebox{-0.85pt}{$\smash{\mathrel{\stackon[-4.5pt]{\xrightarrow{\makebox[65.46675pt]{}}}{\scriptstyle(\mathsf{t}_{n},\mathsf{msg}_{n})\,}}}$}\mathsf{cf}_{n}$
yields the new computation
$\displaystyle\mathsf{cf}+\rho\;=\;(\mathsf{cf}+\mathsf{cf}_{0})\xrightarrow{(\mathsf{t}_{1},\mathsf{msg}_{1})}\ldots\xrightarrow{(\mathsf{t}_{n},\mathsf{msg}_{n})}(\mathsf{cf}+\mathsf{cf}_{n}).$
Addition of configurations
$\mathsf{cf}_{1}=(\mathsf{m}_{1},\mathsf{lcfm}_{1})$ and
$\mathsf{cf}_{2}=(\mathsf{m}_{2},\mathsf{lcfm}_{2})$ is the configuration
$\mathsf{cf}_{1}+\mathsf{cf}_{2}=(\mathsf{m}_{1}\cup\mathsf{m}_{2},\mathsf{lcfm})$,
where $\mathsf{lcfm}(\mathsf{t})=\mathsf{lcfm}_{1}(\mathsf{t})$ if
$\mathsf{lcfm}_{1}(\mathsf{t})\neq\mathsf{lcf}_{\mathsf{init}}$ and
$\mathsf{lcfm}(\mathsf{t})=\mathsf{lcfm}_{2}(\mathsf{t})$ otherwise.
When $\rho\;\\#\;\rho^{\prime}$ holds, we have: (1) for any thread
$\mathsf{t}$, if it has transitioned in $\rho$, then it cannot transition in
$\rho^{\prime}$; likewise, if it has not transitioned in $\rho$, then it can
transition in $\rho^{\prime}$.
(2) $\mathsf{last}(\rho)\;\\#\;\mathsf{last}(\rho^{\prime})$, and since the
memory in earlier configurations of $\rho$ is a subset of that in
$\mathsf{last}(\rho)$, the memory unions performed above involve
nonconflicting memories. An initial configuration is neutral for addition, in
particular
$\mathsf{last}(\rho^{\prime})+\mathsf{first}(\rho)=\mathsf{last}(\rho^{\prime})$.
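A minimal sketch of configuration addition (hypothetical Python: a configuration is a pair of a memory, modelled as a set of `(variable, value, timestamp)` messages, and an `lcfm` dict; `LCF_INIT` is our placeholder for the initial local configuration):

```python
LCF_INIT = "init"  # placeholder for the initial local configuration

def add_configs(cf1, cf2):
    """cf1 + cf2: union the memories; for each thread, keep the local
    configuration from cf1 unless it is still initial, else take cf2's."""
    (m1, lcfm1), (m2, lcfm2) = cf1, cf2
    lcfm = {
        t: lcfm1.get(t, LCF_INIT)
        if lcfm1.get(t, LCF_INIT) != LCF_INIT
        else lcfm2.get(t, LCF_INIT)
        for t in set(lcfm1) | set(lcfm2)
    }
    return (m1 | m2, lcfm)

# Thread t1 moved in cf1 and t2 moved in cf2; addition keeps both,
# and the (non-conflicting) memories are unioned.
cf1 = ({("x", 1, 1)}, {"t1": "q1", "t2": LCF_INIT})
cf2 = ({("y", 2, 1)}, {"t1": LCF_INIT, "t2": "q2"})
mem, lcfm = add_configs(cf1, cf2)
assert mem == {("x", 1, 1), ("y", 2, 1)}
assert lcfm == {"t1": "q1", "t2": "q2"}
```

On this model, the superposition $\rho\triangleright\rho^{\prime}$ would shift every configuration of $\rho^{\prime}$ by $\mathsf{last}(\rho)$ using this addition.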
The operation of concatenation $\rho_{1};\rho_{2}$ expects two computations
$\rho_{1}$ and $\rho_{2}$ that satisfy
$\mathsf{last}(\rho_{1})=\mathsf{first}(\rho_{2})$ and returns the sequence
consisting of the transitions in $\rho_{1}$ followed by the transitions in
$\rho_{2}$. This need not be a valid computation under RA, but under the
following conditions it is.
Let $\mathsf{Msgs}(\rho)$ be the memory in $\mathsf{last}(\rho)$. Likewise,
let
$\mathsf{Msgs}(\rho\\!\downarrow_{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}})\subseteq\mathsf{Msgs}(\rho)$
be the subset of messages in $\mathsf{last}(\rho)$ that were added by
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads during $\rho$.
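A small sketch of these projections (hypothetical Python; we tag each message along a run with the kind of thread that wrote it):

```python
def msgs(run):
    """All messages in the final memory of a run."""
    return {(x, v, ts) for (_kind, x, v, ts) in run}

def msgs_dis(run):
    """Msgs of the dis projection: only the messages written by dis threads."""
    return {(x, v, ts) for (kind, x, v, ts) in run if kind == "dis"}

run = [("dis", "x", 1, 1), ("env", "x", 2, 2), ("dis", "y", 3, 1)]
assert msgs_dis(run) == {("x", 1, 1), ("y", 3, 1)}
assert msgs_dis(run) <= msgs(run)  # the subset relation in the text
```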
###### Lemma 4.3 (Superposition).
Consider valid computations $\rho,\rho^{\prime}$ of a parametrized system
under RA such that
$\rho\\!\downarrow_{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}}\\#\rho^{\prime}\\!\downarrow_{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}}$
and that
$\mathsf{Msgs}(\rho\\!\downarrow_{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}})=\mathsf{Msgs}(\rho^{\prime}\\!\downarrow_{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}})$.
Then the superposition
$\rho\triangleright\rho^{\prime}\\!\downarrow_{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}}$
is a valid computation under RA.
###### Proof 4.4.
Since there are arbitrarily many
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads, we distinguish the
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads in $\rho^{\prime}$ from the
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads in $\rho$. By doing so we ensure that the threads operating (changing
state) in $\rho$ and
$\rho^{\prime}\\!\downarrow_{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}}$
are disjoint.
Now consider the global state obtained after executing $\rho$ (which is a
valid run under RA). By hypothesis, the memory state contains messages from
$\rho\\!\downarrow_{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}}$,
which are identical to those in
$\rho^{\prime}\\!\downarrow_{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}}$.
After execution of $\rho$ is complete, we claim that we can execute
$\rho^{\prime}\\!\downarrow_{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}}$
one step at a time.
* •
Whenever a
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
thread loads from a message generated by a
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
thread in $\rho$, the same can happen in
$\rho\triangleright\rho^{\prime}\\!\downarrow_{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}}$.
Likewise, the relative timestamps between the
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads and the
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads in $\rho^{\prime}$ are the same; so
$\rho^{\prime}\\!\downarrow_{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}}$
can be executed after $\rho$.
* •
Likewise, reads made by some
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
thread on $\rho,\rho^{\prime}$ either from another
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
thread or a
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
thread also continues exactly in the same way in
$\rho\triangleright\rho^{\prime}\\!\downarrow_{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}}$,
since the messages added by
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads are exactly the same in $\rho,\rho^{\prime}$, and the
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads are disjoint.
* •
The above two points show that we have exactly the same reads-from
dependencies
(${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}\leftrightarrow{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
in $\rho$,
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}\leftrightarrow{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
in $\rho^{\prime}$) in
$\rho\triangleright\rho^{\prime}\\!\downarrow_{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}}$.
The reason is that
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads are disjoint and the messages added by
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads are the same in $\rho,\rho^{\prime}$. Finally, all writes made by the
respective
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads of $\rho,\rho^{\prime}$ can be done in
$\rho\triangleright\rho^{\prime}\\!\downarrow_{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}}$;
likewise, all writes made by the
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads in $\rho$ can also be made in
$\rho\triangleright\rho^{\prime}\\!\downarrow_{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}}$.
The reason is that
$\rho\\!\downarrow_{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}}\\#\rho^{\prime}\\!\downarrow_{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}}$,
and trivially, we have
$\rho^{\prime}\\!\downarrow_{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}}\\#\rho^{\prime}\\!\downarrow_{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}}$.
This ensures no conflict of write-timestamps.
Formally this can be proven by induction on the length of $\rho^{\prime}$.
Now we develop the infinite supply lemma. Recall that our goal is to generate
arbitrarily many copies of
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
messages with the same variable and value but higher timestamp. Let us fix one
such message, $\mathsf{msg}=(\mathsf{x},\mathsf{d},\mathsf{vw})$, for our
discussion here and see how we can replicate it. Towards this end, consider a
computation $\rho$ in which it is generated. We ‘spread-apart’ the timestamps
of $\mathsf{Msgs}(\rho)$, using timestamp liftings so that we create ‘holes’
(unused timestamps) along
${\color[rgb]{.75,0,.25}\definecolor[named]{pgfstrokecolor}{rgb}{.75,0,.25}\mathsf{Time}}$.
Then we generate copies of
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads, denoted as
$\mathsf{copy}({\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}})$
(possible since
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads can be replicated).
The holes accommodate the timestamps of
$\mathsf{copy}({\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}})$
and the (higher) timestamp of the copy of $\mathsf{msg}$. Throughout this, we
preserve the order of timestamps of
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}},\mathsf{copy}({\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}})$
threads relative to those of
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads. This ensures that reads-from dependencies are maintained:
$\mathsf{copy}({\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}})$
can read a
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
message whenever
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
can do so.
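As an illustrative sketch of this spread-apart step (hypothetical Python; we assume the run contains no CAS pairs, as for $\mathsf{nocas}$ threads, so that the doubling below is an RA-valid lifting): the lifting $\mu(t)=2t$ fixes $0$, is strictly increasing, and leaves every odd timestamp unused as a hole for clone writes, while the order of $\mathsf{env}$ timestamps relative to $\mathsf{dis}$ timestamps is preserved.

```python
def lift_double(timestamps):
    """Spread timestamps apart with mu(t) = 2t."""
    return [2 * t for t in timestamps]

# Per-variable timestamps of one run: dis wrote at 1 and 3, env at 2 and 4.
dis_ts, env_ts = [1, 3], [2, 4]
lifted_dis, lifted_env = lift_double(dis_ts), lift_double(env_ts)
assert lifted_dis == [2, 6] and lifted_env == [4, 8]

# The odd timestamp just below each lifted env write is a hole where a
# clone env thread can place its duplicated message.
holes = [t - 1 for t in lifted_env]
assert holes == [3, 7]

# Relative order w.r.t. dis is preserved: the copy at hole 3 sits between
# the lifted dis writes 2 and 6, exactly where env's original write did.
assert lifted_dis[0] < holes[0] < lifted_dis[1]
```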
We define the computation $\widetilde{\rho}$ as a copy of
$\rho\\!\downarrow_{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}}$
executed by
$\mathsf{copy}({\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}})$
threads. The write timestamps used by
$\mathsf{copy}({\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}})$
threads are the unoccupied timestamps generated by the timestamp lifting
operation $\mathcal{M}(\rho)$. We show an example of this via a graphic. Let
$\mathsf{e}\mathsf{T}^{i}$ and $\mathsf{d}\mathsf{T}^{i}$ respectively denote
the timestamps chosen by
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
and
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
along $\rho$ (first row); bracketed entries $[\cdot]$ mark timestamps not used
by that computation.
$\displaystyle\rho\colon\ \mathsf{Time}\quad\mathsf{d}\mathsf{T}^{0}\quad\mathsf{e}\mathsf{T}^{0}\quad\mathsf{d}\mathsf{T}^{1}\quad\mathsf{e}\mathsf{T}^{1}\quad\mathsf{e}\mathsf{T}^{2}$
$\displaystyle\mathcal{M}(\rho)\colon\ \mathsf{Time}\quad\mathsf{d}\mathsf{T}^{0}\quad[\mathsf{e}\mathsf{T}]\quad\mathsf{e}\mathsf{T}^{0}_{\mathsf{a}}\quad\mathsf{d}\mathsf{T}^{1}\quad[\mathsf{e}\mathsf{T}]\quad\mathsf{e}\mathsf{T}^{1}_{\mathsf{a}}\quad[\mathsf{e}\mathsf{T}]\quad\mathsf{e}\mathsf{T}^{2}_{\mathsf{a}}$
$\displaystyle\widetilde{\rho}\colon\ \mathsf{Time}\quad[\mathsf{d}\mathsf{T}^{0}]\quad\mathsf{e}\mathsf{T}^{0}_{\mathsf{b}}\quad[\mathsf{e}\mathsf{T}]\quad[\mathsf{d}\mathsf{T}^{1}]\quad\mathsf{e}\mathsf{T}^{1}_{\mathsf{b}}\quad[\mathsf{e}\mathsf{T}]\quad\mathsf{e}\mathsf{T}^{2}_{\mathsf{b}}\quad[\mathsf{e}\mathsf{T}]$
The second row shows the lifted timestamps (with subscript $\mathsf{a}$) of
$\mathcal{M}(\rho)$ and the holes (bracketed). The third row shows the holes
being used by
$\mathsf{copy}({\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}})$
for $\widetilde{\rho}$ (these have subscript $\mathsf{b}$). The construction
guarantees $\mathcal{M}(\rho)\;\\#\;\widetilde{\rho}$ and superposition
$\mathcal{M}(\rho)\triangleright\widetilde{\rho}$ is allowed. In this
computation, $\widetilde{\rho}$ generates a copy of $\mathsf{msg}$,
$\mathsf{msg}^{\prime}=(\mathsf{x},\mathsf{d},\mathsf{vw}^{\prime})$ with
higher $\mathsf{vw}^{\prime}(\mathsf{x})$. Additionally, since
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}\mathsf{T}^{i}_{\mathsf{a}},{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}\mathsf{T}^{i}_{\mathsf{b}}$
have the same position relative to all
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}\mathsf{T}^{j}$
timestamps, so will $\mathsf{vw}(\mathsf{y}),\mathsf{vw}^{\prime}(\mathsf{y})$
for $\mathsf{y}\neq\mathsf{x}$.
Now we state the Infinite Supply Lemma. As helper notation, for a run $\rho$
and each variable $\mathsf{x}$, we denote the timestamps of stores of
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads on $\mathsf{x}$ as
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\text{ts}}^{\mathsf{x}}_{0}<{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\text{ts}}^{\mathsf{x}}_{1}<\cdots$.
###### Lemma 4.5 (Infinite Supply).
Let $\rho$ be a valid run under the RA semantics, in which the message
$(\mathsf{x},\mathsf{d},\mathsf{vw})$ has been generated by an
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
thread. Then for each timestamp
${\color[rgb]{.5,0,.5}\definecolor[named]{pgfstrokecolor}{rgb}{.5,0,.5}t^{*}}\in\mathbb{N}$,
there exist two timestamp lifting functions $\mathcal{M}_{1}$,
$\mathcal{M}_{2}$ and a run $\rho_{1}$ such that
$\mathcal{M}_{1}(\rho)\triangleright\mathcal{M}_{2}(\rho\\!\downarrow_{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}})\triangleright\rho_{1}$
is a valid run. This run contains a message
$(\mathsf{x},\mathsf{d},\mathsf{vw}^{\prime})$ satisfying the following (the
timestamps ${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\text{ts}}^{\mathsf{x}}_{i}$
come from $\rho$):
1. 1.
$\forall i$
(${\color[rgb]{.5,0,.5}\definecolor[named]{pgfstrokecolor}{rgb}{.5,0,.5}t^{*}}\leq{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\text{ts}}^{\mathsf{x}}_{i}\land\mathsf{vw}(\mathsf{x})\leq{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\text{ts}}^{\mathsf{x}}_{i})\implies\mathsf{vw}^{\prime}(\mathsf{x})\leq\mu_{1}^{\mathsf{x}}({\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\text{ts}}^{\mathsf{x}}_{i})$
2. 2.
$\mathsf{vw}^{\prime}(\mathsf{x})\geq\mu_{2}^{\mathsf{x}}({\color[rgb]{.5,0,.5}\definecolor[named]{pgfstrokecolor}{rgb}{.5,0,.5}t^{*}})$
3. 3.
$\forall\mathsf{x}^{\prime}\neq\mathsf{x}$, $\forall i$,
$\mathsf{vw}(\mathsf{x}^{\prime})\leq{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\text{ts}}^{\mathsf{x}^{\prime}}_{i}\implies\mathsf{vw}^{\prime}(\mathsf{x}^{\prime})\leq\mu_{1}^{\mathsf{x}^{\prime}}({{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\text{ts}}^{\mathsf{x}^{\prime}}_{i}})$
###### Proof 4.6.
Without loss of generality, we assume that in the run $\rho$, the timestamps
on each variable are consecutive. If that is not the case, we can always use a
timestamp lowering operation that ‘fills in the gaps’ between non-consecutive
timestamps, while maintaining consecution of the load, store timestamps of
(CAS-local) operations.
We will give a constructive proof. We specify
$\mathcal{M}_{1}(\rho)\triangleright\mathcal{M}_{2}(\rho\\!\downarrow_{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}})\triangleright\rho_{1}$
by defining $\mathcal{M}_{1}$, $\mathcal{M}_{2}$ and $\rho_{1}$, and showing
that the resulting run is valid under RA. Then we show how a copy of the
message $(\mathsf{x},\mathsf{d},\mathsf{vw})$ can be obtained as claimed.
First, we describe how to copy runs.
1. 1.
Copying a run. For a variable $\mathsf{x}$, we define the lifting functions as
follows. With the consecutiveness assumption, the messages on $\mathsf{x}$
have consecutive timestamps and are generated by either
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
or
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$,
which we will denote below as
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}\mathsf{T}$
and
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}$
respectively. To develop intuition quickly, consider the following sequence
of consecutive timestamps on some variable.
$
{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}\mathsf{T}^{0}\leavevmode\nobreak\
\leavevmode\nobreak\
{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}\mathsf{T}^{1}\leavevmode\nobreak\
\leavevmode\nobreak\
{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}^{0}\leavevmode\nobreak\
\leavevmode\nobreak\
{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}\mathsf{T}^{2}\leavevmode\nobreak\
\leavevmode\nobreak\
{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}^{1}\leavevmode\nobreak\
\leavevmode\nobreak\
{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}^{2}\leavevmode\nobreak\
\leavevmode\nobreak\
{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}\mathsf{T}^{3}$
Intuitively, the new interleaved run is obtained by triplicating each
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}$
timestamp into three adjacent timestamps,
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}_{a}$,
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}_{b}$
and
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}_{c}$.
The
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}_{a}$
timestamps belongs to the lifted run $\mathcal{M}_{1}(\rho)$, the
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}_{b}$
timestamp belongs to $\mathcal{M}_{2}(\rho)$ and the
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}_{c}$
timestamp belongs to $\rho_{1}$. The $a,b,c$ copies are ordered as $b<c<a$
giving us the following timestamp sequence from the one above.
$
{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}\mathsf{T}^{0}\leavevmode\nobreak\
\leavevmode\nobreak\
{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}\mathsf{T}^{1}\leavevmode\nobreak\
\leavevmode\nobreak\
{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}^{0}_{b}\leavevmode\nobreak\
\leavevmode\nobreak\
{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}^{0}_{c}\leavevmode\nobreak\
\leavevmode\nobreak\
{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}^{0}_{a}\leavevmode\nobreak\
\leavevmode\nobreak\
{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}\mathsf{T}^{2}\leavevmode\nobreak\
\leavevmode\nobreak\
{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}^{1}_{b}\leavevmode\nobreak\
\leavevmode\nobreak\
{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}^{1}_{c}\leavevmode\nobreak\
\leavevmode\nobreak\
{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}^{1}_{a}\leavevmode\nobreak\
\leavevmode\nobreak\
{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}^{2}_{b}\leavevmode\nobreak\
\leavevmode\nobreak\
{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}^{2}_{c}\leavevmode\nobreak\
\leavevmode\nobreak\
{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}^{2}_{a}\leavevmode\nobreak\
\leavevmode\nobreak\
{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}\mathsf{T}^{3}$
We can formalize $\mathcal{M}_{1}$ and $\mathcal{M}_{2}$ by counting the
number of
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}\mathsf{T}$
timestamps smaller than the
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}^{i}$
timestamp, but for ease of presentation, we will keep this implicit. The total
shift can be done, for instance, using the function that maps a timestamp
$p\in\mathbb{N}$ corresponding to a
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
thread to the (number of
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads appearing before $p$) + 3(number of
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads appearing before $p$)+3, while for a
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
timestamp $p\in\mathbb{N}$, one can map it to (number of
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads appearing before $p$) + 3(number of
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads appearing before $p$)+1. So, for instance, the
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}^{0}$
at timestamp 3 moves to the
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}^{0}_{a}$
timestamp 5, while the
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}\mathsf{T}^{3}$
at timestamp 7 moves to timestamp $3+3(3)+1=13$.
$\mathcal{M}_{1}$ maps the
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}^{i}$
timestamp to the
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}^{i}_{a}$
timestamp. Similarly, $\mathcal{M}_{2}$ maps the
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}^{i}$
timestamp to the timestamp
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}^{i}_{b}={\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}^{i}_{a}-2$.
Finally, we have
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}^{i}_{c}={\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}^{i}_{a}-1$.
We call these adjacent timestamps ‘triplets’. Additionally, $\mathcal{M}_{1}$
and $\mathcal{M}_{2}$ map the
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}\mathsf{T}^{i}$
timestamp to the corresponding
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}\mathsf{T}^{i}$
timestamp in the expanded run.
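The shift arithmetic above can be checked with a small sketch (the encoding, function, and variable names below are ours and purely illustrative; original timestamps are 1-based as in the example):

```python
# Sketch of the timestamp shift: an env timestamp maps to its a-copy
# (with the b- and c-copies immediately below it), while a dis timestamp
# keeps its relative position in the expanded run.
def lift(owners):
    """owners: list of 'dis'/'env'; owners[i] owns original timestamp i+1.
    Returns a dict: original timestamp -> lifted timestamp(s)."""
    out = {}
    dis_before = env_before = 0
    for p, owner in enumerate(owners, start=1):
        base = dis_before + 3 * env_before
        if owner == 'env':
            a = base + 3                               # M1 image (a-copy)
            out[p] = {'a': a, 'b': a - 2, 'c': a - 1}  # M2 and rho1 copies
            env_before += 1
        else:
            out[p] = base + 1                          # dis timestamps
            dis_before += 1
    return out

# The sequence from the text: dis dis env dis env env dis
shift = lift(['dis', 'dis', 'env', 'dis', 'env', 'env', 'dis'])
print(shift[3]['a'])  # env T^0 at timestamp 3 moves to 5
print(shift[7])       # dis T^3 at timestamp 7 moves to 3 + 3*3 + 1 = 13
```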
We first note that $\mathcal{M}_{1}$ satisfies the premise of the timestamp
lifting lemma, namely that for (CAS-local) operations, the consecutive load
and store timestamps remain consecutive. This follows since only
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
can perform (CAS-local) and under $\mathcal{M}_{1}$, consecution is maintained
both for
$({\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}\mathsf{T}^{i-1},{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}\mathsf{T}^{i})$
timestamps as well as
$({\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}_{a},{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}\mathsf{T}^{i})$
as depicted in the following timestamp sequence. Thus, by the timestamp
lifting lemma, $\mathcal{M}_{1}(\rho)$ is a valid run under RA.
$
{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}\mathsf{T}^{0}\leavevmode\nobreak\
\leavevmode\nobreak\
{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}\mathsf{T}^{1}\leavevmode\nobreak\
\leavevmode\nobreak\
{\color[rgb]{1,0.8,0.8}\mathsf{env}}{\color[rgb]{0.8,0.8,0.8}\mathsf{T}^{0}_{b}}\leavevmode\nobreak\
\leavevmode\nobreak\
{\color[rgb]{1,0.8,0.8}\mathsf{env}}{\color[rgb]{0.8,0.8,0.8}\mathsf{T}^{0}_{c}}\leavevmode\nobreak\
\leavevmode\nobreak\
{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}^{0}_{a}\leavevmode\nobreak\
\leavevmode\nobreak\
{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}\mathsf{T}^{2}\leavevmode\nobreak\
\leavevmode\nobreak\
{\color[rgb]{1,0.8,0.8}\mathsf{env}}{\color[rgb]{0.8,0.8,0.8}\mathsf{T}^{1}_{b}}\leavevmode\nobreak\
\leavevmode\nobreak\
{\color[rgb]{1,0.8,0.8}\mathsf{env}}{\color[rgb]{0.8,0.8,0.8}\mathsf{T}^{1}_{c}}\leavevmode\nobreak\
\leavevmode\nobreak\
{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}^{1}_{a}\leavevmode\nobreak\
\leavevmode\nobreak\
{\color[rgb]{1,0.8,0.8}\mathsf{env}}{\color[rgb]{0.8,0.8,0.8}\mathsf{T}^{2}_{b}}\leavevmode\nobreak\
\leavevmode\nobreak\
{\color[rgb]{1,0.8,0.8}\mathsf{env}}{\color[rgb]{0.8,0.8,0.8}\mathsf{T}^{2}_{c}}\leavevmode\nobreak\
\leavevmode\nobreak\
{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}^{2}_{a}\leavevmode\nobreak\
\leavevmode\nobreak\
{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}\mathsf{T}^{3}$
2. 2.
The first superposition gives a valid run. We now claim that the run
$\mathcal{M}_{1}(\rho)\triangleright\mathcal{M}_{2}(\rho\\!\downarrow_{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}})$
is a valid run under RA. For a
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
thread $\mathsf{t}$ in $\rho$, we denote by $\mathsf{copyB}(\mathsf{t})$ its
‘copy’ (with a distinct $\mathsf{TID}$) in
$\mathcal{M}_{2}(\rho\\!\downarrow_{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}})$
($\mathsf{B}$ since it occupies the $b$ timestamps). We will show that
$\mathsf{copyB}(\mathsf{t})$ copies the transitions that $\mathsf{t}$ took. We
use the following invariant relating the view $\mathsf{vw}_{1}$ of thread
$\mathsf{t}$ in $\rho$ with the view $\mathsf{vw}_{2}$ of
$\mathsf{copyB}(\mathsf{t})$ for showing this. Let $\mathsf{TS}(t)$ denote the
set of timestamps used by thread $t$ in a run $\rho$.
> For every shared variable $\mathsf{x}$, if
> $\mathsf{vw}_{1}(\mathsf{x})\in\mathsf{TS}({\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}})$,
> then $\mathsf{vw}_{2}(\mathsf{x})=\mathsf{vw}_{1}(\mathsf{x})-2$, else if
> $\mathsf{vw}_{1}(\mathsf{x})\in\mathsf{TS}({\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}})$
> then $\mathsf{vw}_{2}(\mathsf{x})=\mathsf{vw}_{1}(\mathsf{x})$
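The invariant can be read as a function from the view of $\mathsf{t}$ to the view of $\mathsf{copyB}(\mathsf{t})$; the following is a minimal illustration in our own notation, with $\mathsf{TS}({\mathsf{env}})$ given as a plain set:

```python
# Sketch of the view invariant relating t and copyB(t): view entries that
# point at env timestamps drop by 2 (a-copy down to b-copy), while entries
# pointing at dis timestamps are unchanged.
def copyB_view(vw1, env_ts):
    """vw1: dict var -> timestamp; env_ts: timestamps owned by env threads."""
    return {x: ts - 2 if ts in env_ts else ts for x, ts in vw1.items()}

# If env threads own timestamps {3, 5, 6}, an entry at 5 drops to 3 and a
# dis entry at 2 stays put.
print(copyB_view({'x': 5, 'y': 2}, env_ts={3, 5, 6}))
```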
Now we can reason by induction on the length of the run that whenever
$\mathsf{t}$ takes a transition in $\rho$, $\mathsf{copyB}(\mathsf{t})$ can
replicate it but with the view as given by the above invariant. More
precisely, whenever a
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
thread $\mathsf{t}$ makes a store with timestamp
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}$,
$\mathsf{copyB}({\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}})$
will make a store with timestamp
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}-2$.
Similarly, when a
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
thread $\mathsf{t}$ makes a load,
1. (a)
if the load is from a message by a
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
thread,
$\mathsf{copyB}({\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}})$
also loads from the same message
2. (b)
if the load is from a message by some
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
thread $\mathsf{t}^{\prime}$ in $\rho$,
$\mathsf{copyB}({\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}})$
loads from $\mathsf{copyB}(\mathsf{t}^{\prime})$
It is easy to check that the view invariant is maintained throughout this
simulation. Crucially, we have
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}^{i}_{a}<{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}\mathsf{T}^{j}\iff{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}^{i}_{b}<{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}\mathsf{T}^{j}$.
Thus $\mathsf{t}$ and $\mathsf{copyB}(\mathsf{t})$ can always read the same
set of
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
messages. Thus we have that
$\mathcal{M}_{1}(\rho)\triangleright\mathcal{M}_{2}(\rho\\!\downarrow_{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}})$
is valid under RA. Now we focus on message generation by $\rho_{1}$.
Intuitively, $\rho_{1}$ will also be a copy of $\rho$ but will occupy the
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\mathsf{T}_{c}$
timestamps.
3. 3.
Generating a copy of the message. Now we will describe how we can use
$\rho_{1}$ to generate the message
$(\mathsf{x},\mathsf{d},\mathsf{vw}^{\prime})$. Let $\rho_{p}$ be the prefix
of $\rho$ just (one transition) before the message
$(\mathsf{x},\mathsf{d},\mathsf{vw})$ is generated. We generate the run
$\rho^{\prime}$ by copying the
$\rho_{p}\\!\downarrow_{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}}$
using the $c$-timestamps. The run obtained is
$\mathcal{M}_{1}(\rho)\triangleright\mathcal{M}_{2}(\rho\\!\downarrow_{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}})\triangleright\rho^{\prime}$.
Using similar reasoning as earlier, we can show that this is a valid run under
the RA semantics.
Now let
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
thread $\mathsf{t}$ be the thread which generates the message
$(\mathsf{x},\mathsf{d},\mathsf{vw})$ in $\rho$. Then there is a copy of
$\mathsf{t}$, thread $\mathsf{copyC}(\mathsf{t})$ in $\rho^{\prime}$, that is
now in a control state enabling it to generate a message on variable
$\mathsf{x}$ with value $\mathsf{d}$ (since the transitions have been
replicated exactly across
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
and
$\mathsf{copyC}({\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}})$).
Now we need to reason about the view of the message generated by
$\mathsf{copyC}(\mathsf{t})$. If the view of $\mathsf{t}$ is $\mathsf{vw}_{a}$
and that of $\mathsf{copyC}(\mathsf{t})$ is $\mathsf{vw}_{c}$ we have the
following, which again follows from the invariant mentioned above.
> For each variable $\mathsf{x}^{\prime}$, if
> $\mathsf{vw}_{a}(\mathsf{x}^{\prime})\in\mathsf{TS}({\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}})$,
> then
> $\mathsf{vw}_{c}(\mathsf{x}^{\prime})=\mathsf{vw}_{a}(\mathsf{x}^{\prime})-1$,
> else if
> $\mathsf{vw}_{a}(\mathsf{x}^{\prime})\in\mathsf{TS}({\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}})$
> then
> $\mathsf{vw}_{c}(\mathsf{x}^{\prime})=\mathsf{vw}_{a}(\mathsf{x}^{\prime})$
Observe that this immediately satisfies condition (3) of the lemma since, in
both cases above, we have
$\mathsf{vw}_{c}(\mathsf{x}^{\prime})\leq\mathsf{vw}_{a}(\mathsf{x}^{\prime})$.
Now the thread $\mathsf{copyC}(\mathsf{t})$ will choose the timestamp for
variable $\mathsf{x}$, $\mathsf{vw}^{\prime}(\mathsf{x})$. Assume
$t^{*}\in\mathbb{N}$ has been given. We have two cases, (i)
$t^{*}\leq\mathsf{vw}(\mathsf{x})$ and (ii) $t^{*}>\mathsf{vw}(\mathsf{x})$.
1. (i)
In this case there is nothing to prove as the original message is lifted to
$(\mathsf{x},\mathsf{d},\mathsf{vw}^{\prime})$ where
$\mathsf{vw}^{\prime}(\mathsf{x})=\mu_{1}^{\mathsf{x}}(t^{*})$ which satisfies
both conditions (1) and (2).
2. (ii)
We choose $\mathsf{vw}^{\prime}(\mathsf{x})=\mu_{2}^{\mathsf{x}}(t^{*})+1$,
which satisfies (2). Note that this timestamp is higher than
$\mathsf{vw}_{c}(\mathsf{x})$ since
$\mu_{2}^{\mathsf{x}}(t^{*})\geq\mu_{1}^{\mathsf{x}}(\mathsf{vw}(\mathsf{x}))=\mathsf{vw}_{a}(\mathsf{x})>\mathsf{vw}_{c}(\mathsf{x})$.
We have
$\mathsf{vw}^{\prime}(\mathsf{x})=\mu_{2}^{\mathsf{x}}(t^{*})+1=\mu_{1}^{\mathsf{x}}(t^{*})-1$
due to the construction of $\mathcal{M}_{1},\mathcal{M}_{2},\rho_{1}$.
Additionally,
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\text{ts}}^{\mathsf{x}}_{i}\geq
t^{*}\implies\mu_{1}^{\mathsf{x}}({\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\text{ts}}^{\mathsf{x}}_{i})\geq\mu_{1}^{\mathsf{x}}(t^{*})>\mathsf{vw}^{\prime}(\mathsf{x})$
satisfying condition (1). In this case $\rho_{1}$ is defined as
$\rho^{\prime}$ extended by the store transition generating the message. Note
that since it is a $c$-type message, the timestamp is available.
Thus in both cases, we have a message
$(\mathsf{x},\mathsf{d},\mathsf{vw}^{\prime})$ with $\mathsf{vw}^{\prime}$
satisfying the required conditions. This proves the theorem.
To sum up, we interpret the infinite supply as follows: $\mathcal{M}_{1}(\rho)$
is the lifted run with holes.
$\mathcal{M}_{2}(\rho\\!\downarrow_{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}})$
is the
$\mathsf{copy}({\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}})$
run and $\rho_{1}$ is obtained by running another copy that generates the new
message. We note that run triplication is not strictly necessary for message
duplication, but it makes the proof easier. Note that points (1) and (3) above
refer to the relative ordering between
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
and
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
timestamps and (2) refers to the new message with an arbitrarily high
timestamp.
### 4.2 Abstracting the Timestamps
We introduce the _timestamp abstraction_ , which is a building block for the
simplified semantics. Let us call a message $\mathsf{msg}$ an
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
(${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$)
message if it is generated by a
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
(${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$)
thread. With the intuition that
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
messages can be replicated with arbitrarily high timestamps, while
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
or initial messages cannot be, we distinguish the write timestamps of the two
types of messages.
Timestamp Abstraction. If a
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
thread has read a message $(\mathsf{x},d,\mathsf{vw})$ from a
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
thread with a timestamp $\mathsf{ts}=\mathsf{vw}(\mathsf{x})$ and has
generated a message $\mathsf{msg}$ on $\mathsf{x}$, then copies of
$\mathsf{msg}$ are available with arbitrarily high timestamps at least as high
as $\mathsf{ts}$. To capture this in our abstraction, we assign the
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
message $\mathsf{msg}$ a timestamp $\mathsf{ts}^{\textbf{+}}$ that is, by
definition, larger than $\mathsf{ts}$.
We define the set of timestamps in the simplified semantics as
$\mathbb{N}\uplus\mathbb{N}^{\textbf{+}}$, where $\mathbb{N}^{\textbf{+}}$
contains for each $\mathsf{ts}\in\mathbb{N}$, a timestamp
$\mathsf{ts}^{\textbf{+}}$. The timestamps are equipped with the order
$\preceq$ in which $\mathsf{ts}^{\textbf{+}}$ is greater than $\mathsf{ts}$
and smaller than $\mathsf{ts}+1$:
$0\prec 0^{\textbf{+}}\prec 1\prec 1^{\textbf{+}}\prec\ldots$
Timestamps of form $\mathsf{ts}\in\mathbb{N}$ are used for the stores of
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads while those of form $\mathsf{ts}^{\textbf{+}}$ are used for stores of
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads. We allow multiple stores with the same timestamp of form
$\mathsf{ts}^{\textbf{+}}$, while allowing at most one store for timestamps of
form $\mathsf{ts}$. This abstracts timestamps of multiple
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
messages between two
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
messages by a single $\mathsf{ts}^{\textbf{+}}$ timestamp. Initial messages
have timestamp 0 as usual.
We utilize this timestamp abstraction by defining a simplified semantics; note
that ‘simplified’ does not mean the formulation itself is simpler, but rather
that it paves the way for efficient verification procedures (as detailed in
Section 5, Section 6). We then show that a run $\rho$ in the classical RA
semantics has an equivalent run in the simplified semantics where the
timestamps are transformed according to some timestamp transformation
$\mathcal{M}$ as defined above. Reachability is preserved across the two
semantics since both the order and the consecution between timestamps are
maintained.
RA semantics, simplified. As in the classical RA semantics, the transition
rules of the simplified semantics will require us to increase timestamps (upon
writing messages). We define the function $\mathsf{raise}(-)$ on
$\mathbb{N}\uplus\mathbb{N}^{\textbf{+}}$ by
$\displaystyle\mathsf{raise}(\mathsf{ts})=\mathsf{raise}(\mathsf{ts}^{\textbf{+}})=\mathsf{ts}^{\textbf{+}},\quad\mathsf{ts}\in\mathbb{N}.$
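A minimal sketch of the ordered domain $\mathbb{N}\uplus\mathbb{N}^{\textbf{+}}$ and of $\mathsf{raise}$; the pair encoding and names are our own choices:

```python
# A timestamp is (n, plus): (n, False) encodes n, (n, True) encodes n+.
# The order is 0 < 0+ < 1 < 1+ < ..., i.e. n+ sits strictly between n and n+1.
def key(ts):
    n, plus = ts
    return (n, 1 if plus else 0)

def precedes(t1, t2):
    """Strict version of the order on N ⊎ N+."""
    return key(t1) < key(t2)

def raise_ts(ts):
    """raise(ts) = raise(ts+) = ts+."""
    return (ts[0], True)

assert precedes((0, False), (0, True)) and precedes((0, True), (1, False))
```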
The definition of the simplified semantics replaces the domain
${\color[rgb]{.75,0,.25}\definecolor[named]{pgfstrokecolor}{rgb}{.75,0,.25}\mathsf{Time}}$
by $\mathbb{P}=\mathbb{N}\uplus\mathbb{N}^{\textbf{+}}$. We use the term
_abstract_ to refer to the resulting views, messages, memory, local
configurations, and configurations and use a superscript
${{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}$
(shortened
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}/{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$)
to indicate that an element is abstract. So an abstract view is a function,
$\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}$
that maps shared variables to $\mathbb{P}$. We now specify the transitions in
the abstract semantics. Owing to their different nature (one is replicable,
the other is not), the
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
and
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads will have different transition rules in the simplified semantics.
For storing a value, the
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads use a rule (ST-localenv) that coincides with rule (ST-local) from the
RA semantics (Figure 1) except that it replaces relation $<_{\mathsf{x}}$ by
$<^{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}}_{\mathsf{x}}$
defined as follows:
$\displaystyle\mathsf{vw}_{1}<^{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}}_{\mathsf{x}}\mathsf{vw}_{2}\quad\text{iff}\leavevmode\nobreak\
\begin{cases}\mathsf{raise}(\mathsf{vw}_{1}(\mathsf{x}))\preceq\mathsf{vw}_{2}(\mathsf{x})\in\mathbb{N}^{+}&\\\
\mathsf{vw}_{1}(\mathsf{y})=\mathsf{vw}_{2}(\mathsf{y})\qquad\text{ for
}\mathsf{y}\neq\mathsf{x}.\end{cases}$
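The relation $<^{\mathsf{env}}_{\mathsf{x}}$ can be sketched as follows (timestamps encoded as pairs $(n,\textit{plus})$ as above; the names are ours):

```python
# vw1 <^env_x vw2: raise(vw1(x)) precedes-or-equals vw2(x), vw2(x) is in N+,
# and the views agree on every variable other than x.
def key(ts): return (ts[0], 1 if ts[1] else 0)

def lt_env(vw1, vw2, x):
    raised = (vw1[x][0], True)                       # raise(vw1(x))
    ok_x = vw2[x][1] and key(raised) <= key(vw2[x])  # vw2(x) in N+, raised <= it
    return ok_x and all(vw1[y] == vw2[y] for y in vw1 if y != x)

# An env store on x may pick the raised timestamp itself or any later one,
# but never a plain (dis-style) timestamp:
assert lt_env({'x': (2, False), 'y': (0, False)},
              {'x': (2, True),  'y': (0, False)}, 'x')
assert not lt_env({'x': (2, False), 'y': (0, False)},
                  {'x': (3, False), 'y': (0, False)}, 'x')
```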
Additionally, for stores of
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads, we no longer require the timestamp of the message to be unused. So we
disregard the $\mathsf{msg}\\#\mathsf{m}$ check in the global (ST-global) rule
(note crucially this is for
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
only). The
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads use (ST-local) from the RA semantics without modifications, and hence
choose a timestamp in $\mathbb{N}$, not a raised value.
For load instructions, we distinguish between messages generated by
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
and
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads. This is a natural consequence of the different nature of timestamps,
$\mathsf{ts}$ for
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
and $\mathsf{ts}^{\textbf{+}}$ for
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
messages. For loading a
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
message, we use rule (LD-local) (Figure 1) from the RA semantics without
changes.
For loading from
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads, we introduce a new rule (LD-localenv). It is defined by replacing
$\sqcup$ in (LD-local) by
$\sqcup^{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}}_{x}$.
We drop the check on the order of timestamps (overwrite it by true); a
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
message may always be read, independent of the reading thread’s view. The join
is dependent on the variable being read from, $\mathsf{x}$. To define
$\mathsf{vw}_{1}\sqcup^{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}}_{\mathsf{x}}\mathsf{vw}_{2}$,
let $\mathsf{vw}_{1}$ be the view of the thread loading the message and
$\mathsf{vw}_{2}$ be the view in the message.
$\displaystyle\mathsf{vw}_{1}\sqcup^{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}}_{\mathsf{x}}\mathsf{vw}_{2}=(\mathsf{vw}_{1}[\mathsf{x}\mapsto\mathsf{raise}(\mathsf{vw}_{1}(\mathsf{x}))])\sqcup\mathsf{vw}_{2}$
Thus, if $\mathsf{vw}_{1}(\mathsf{x})=4$ and
$\mathsf{vw}_{2}(\mathsf{x})=2^{+}$, then
$(\mathsf{vw}_{1}\sqcup_{\mathsf{x}}^{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}}\mathsf{vw}_{2})(\mathsf{x})=4^{+}$.
The update to $\mathsf{raise}(\mathsf{vw}_{1}(\mathsf{x}))$ ensures that if
the thread’s timestamp on $\mathsf{x}$ was $\mathsf{ts}$, it is now at least
$\mathsf{ts}^{\textbf{+}}$, and hence the thread cannot read a
(${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$)
message with timestamp $\mathsf{ts}$ again. We note that the above join
operation is not commutative.
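A sketch of this join under the same pair encoding of $(n,\textit{plus})$ timestamps (names ours); raising the reading thread's entry for $\mathsf{x}$ before taking the pointwise maximum is what makes the operation non-commutative:

```python
# vw1 ⊔^env_x vw2: first raise the loading thread's entry for x, then take
# a pointwise maximum with the message view.
def key(ts): return (ts[0], 1 if ts[1] else 0)

def join_env(vw1, vw2, x):
    raised = dict(vw1)
    raised[x] = (vw1[x][0], True)                    # raise(vw1(x))
    return {y: max(raised[y], vw2[y], key=key) for y in vw1}

# The example from the text: vw1(x) = 4, vw2(x) = 2+ gives 4+.
joined = join_env({'x': (4, False)}, {'x': (2, True)}, 'x')
assert joined['x'] == (4, True)
```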
Now we consider the atomic operation, (CAS-local), which can only be
performed by
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$.
We have two cases depending on whether (CAS-local) loads from a
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
or
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
message. If it is the latter, then the transition is identical to ((LD-
localenv); (ST-local)) with the additional condition that the load and store
timestamps must be $\mathsf{ts}^{\textbf{+}}$ and $\mathsf{ts}+1$ for some
$\mathsf{ts}$.
If it is the former (load from
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$)
then the load and store timestamps must be $\mathsf{ts}$ and $\mathsf{ts}+1$.
Consequently, there cannot be any messages with timestamp
$\mathsf{ts}^{\textbf{+}}$. Conversely, if there is (at least one) message with
timestamp $\mathsf{ts}^{\textbf{+}}$, then the (CAS-local) operation with load
and store timestamps $\mathsf{ts}$ and $\mathsf{ts}+1$ is forbidden. We keep
track of such ‘blocked’ intervals $(\mathsf{ts},\mathsf{ts}+1)$ by adding a
set $\mathsf{B}$ to the global state in the simplified semantics. The global
and local transition relations of the full simplified semantics are given in
Figures 6 and 7.
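The blocked-interval bookkeeping can be sketched minimally as follows (the set name $\mathsf{B}$ comes from the text; the function names are ours):

```python
# A (CAS-local) loading a dis message at ts and storing at ts + 1 is
# forbidden whenever some env message already occupies ts+; the set B
# records such blocked intervals (ts, ts + 1).
def cas_on_dis_allowed(ts, blocked):
    """blocked: the set B of ts values whose interval (ts, ts + 1) is taken."""
    return ts not in blocked

blocked = {2}                          # some env message occupies 2+
assert not cas_on_dis_allowed(2, blocked)
assert cas_on_dis_allowed(3, blocked)
```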
We formally prove that the simplified semantics is equivalent to the original
RA semantics with respect to reachability. But before that, we give an example
to illustrate timestamp abstraction in the simplified semantics.
(Figure: control-flow diagrams of programs $\mathsf{c}_{i}$ and $\mathsf{c}_{j}$ and their transformed versions $\mathsf{c}^{\prime}_{i}$, $\mathsf{c}^{\prime}_{j}$ with $\mathsf{cas}$ instrumentation; graphics omitted.)
$\begin{array}[]{c}\text{Variables
}\mathsf{x}\text{ and }\mathsf{y}\text{ have been initialized to 0}\\\
\begin{array}[]{l|l}\hline\cr\mathsf{producer}&\mathsf{consumer}\\\
\hline\cr\lambda_{1}:\leavevmode\nobreak\ \mathsf{r}_{1}
\coloneqq\mathsf{y}&\lambda^{\prime}_{1}:\leavevmode\nobreak\ \mathsf{y}
\coloneqq 1\\\ \lambda_{2}:\leavevmode\nobreak\
\texttt{if}(\mathsf{r}_{1}==1):&\lambda^{\prime}_{2}:\leavevmode\nobreak\
\texttt{for i in {1..z}}:\\\ \lambda_{3}:\leavevmode\nobreak\ \qquad\mathsf{x}
\coloneqq 1\oplus\quad\quad\cdots\oplus\mathsf{x} \coloneqq
l&\lambda^{\prime}_{3}:\leavevmode\nobreak\ \quad\quad\mathsf{r}^{\prime}_{1}
\coloneqq\mathsf{x}\\\ &\lambda^{\prime}_{4}:\leavevmode\nobreak\
\quad\quad\mathsf{assume}\;{\mathsf{r}^{\prime}_{1}=\texttt{(i \%\% l)+1}}\\\
\hline\cr\end{array}\end{array}$
Figure 4: Above is a (non-parameterized) producer-consumer program, and below
is a sample execution snippet with threads $t_{1}$ and $t_{2}$ playing the
roles of producer and consumer respectively. Figure 5: Execution under the
simplified semantics.
$\mathsf{producer}$ transitions and messages in red, $\mathsf{consumer}$
transitions and messages in blue. The execution begins with the
$\mathsf{consumer}$ thread generating a message on $\mathsf{y}$ with value 1
and timestamp 1, leading to the memory $\mathsf{m}_{\lambda^{\prime}_{1}}$.
The $\mathsf{producer}$ threads executing $\lambda_{1}^{1\dots l}$ read from
this message and reach states $\lambda_{2}^{1\dots l}$. They generate messages
on $\mathsf{x}$ with values $\{1,\dots,l\}$, shown in memory
$\mathsf{m}_{\lambda_{3}}$. These are then read by the $\mathsf{consumer}$ as
it loops around $\lambda^{\prime}_{3}$, $\lambda^{\prime}_{4}$ for different
iterates $\texttt{i}$ ($\texttt{i}=1$, $\texttt{i}=2$, $\texttt{i}>2$), as
shown along the transition edge.
Simplified Semantics, on an Example. In Figure 5 we give an example of a
computation under the simplified semantics by parameterizing the program from
Figure 4. The parameterized program has a single distinguished
$\mathsf{dis}_{1}$ thread, which runs the program $\mathsf{consumer}$, and
arbitrarily many $\mathsf{env}$ threads, which run $\mathsf{producer}$. We
consider a computation in which $\mathsf{dis}_{1}$ and $l$ (out of the
unboundedly many) $\mathsf{env}$ threads participate. We label different
instances of the $\mathsf{env}$ threads (and their instruction labels) by
superscripts from $\{1,\dots,l\}$ (e.g., $\lambda_{1}^{1},\dots,\lambda_{1}^{l}$
for $\lambda_{1}$).
The $\mathsf{consumer}$ generates write timestamps of the form $\mathsf{ts}$
(1 in the example), while $\mathsf{producer}$ threads have write timestamps of
the form $\mathsf{ts}_{1}^{+},\dots,\mathsf{ts}_{l}^{+}$. While the timestamp
1 is now occupied, there can be several writes with timestamps of the form
$\mathsf{ts}^{+}$ (in particular, some $\mathsf{ts}_{i}^{+}$ may be equal).
Additionally, when reading from these $\mathsf{producer}$-generated messages,
the $\mathsf{consumer}$ does not perform any timestamp checks; it simply
updates its view by taking joins. Hence the load with value 2 during the
second loop iteration ($\texttt{i}=2$) is feasible even if
$\mathsf{ts}_{2}^{+}<\mathsf{ts}_{1}^{+}$, unlike in the classical RA
semantics.
In this example, the $\mathsf{dis}$ thread, after looping around $l$ times and
reading from $\mathsf{env}$ messages, has the view on $\mathsf{x}$ as
$\max\{\mathsf{ts}_{i}^{+}\}$. Due to the lack of timestamp comparisons, the
$\mathsf{consumer}$ can perform the loop arbitrarily many times
($\mathsf{z}>l$ times); moreover, the number of $\mathsf{env}$ threads needed
is independent of $\mathsf{z}$. This relieves the burden of timestamp
comparisons for $\mathsf{env}$ messages.
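The view-join discipline just described can be made concrete with a minimal Python sketch (our own illustration, not an artifact of the paper). We model a timestamp as a pair, with `(n, 1)` standing for $n^{+}$ so that $n<n^{+}<n+1$ under lexicographic tuple comparison; a load from an $\mathsf{env}$ message performs no timestamp check, only an element-wise join of views.

```python
def join(v1, v2):
    # element-wise max of two views (variable -> timestamp pair);
    # (n, 0) models the natural timestamp n, (n, 1) models n^+
    return {x: max(v1.get(x, (0, 0)), v2.get(x, (0, 0)))
            for x in v1.keys() | v2.keys()}

# consumer's view after writing y = 1 at timestamp 1
consumer_view = {"x": (0, 0), "y": (1, 0)}

# l = 3 producer (env) messages on x, all at abstract timestamp 1^+
env_msgs = [("x", i + 1, {"x": (1, 1), "y": (1, 0)}) for i in range(3)]

# the consumer may loop z > l times: no timestamp comparison blocks
# re-reading env messages, the view only grows by joins
z = 10
for i in range(z):
    _, _, msg_view = env_msgs[i % len(env_msgs)]
    consumer_view = join(consumer_view, msg_view)

print(consumer_view["x"])  # (1, 1), i.e. the view on x is max{ts_i^+}
```

Note that the loop bound `z` and the number of producers are independent, mirroring the observation above that the number of $\mathsf{env}$ threads needed does not depend on $\mathsf{z}$.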
$\begin{array}[]{c}\begin{array}[]{c}\textsc{(LD-
global)}\quad\inferrule{\mathsf{lcfm}(\mathsf{t})=\mathsf{lcf}\quad\mathsf{lcf}\xrightharpoondown{\mathsf{ld},\mathsf{msg}}\mathsf{lcf}^{\prime}\quad\mathsf{msg}\in\mathsf{m}}{(\mathsf{m},\mathsf{lcfm},\mathsf{B})\xrightarrow{(\mathsf{t},\mathsf{msg})}(\mathsf{m},\mathsf{lcfm}[\mathsf{t}\mapsto\mathsf{lcf}^{\prime}],\mathsf{B})}\\
\textsc{(Unlabelled)}\quad\inferrule{\mathsf{lcfm}(\mathsf{t})=\mathsf{lcf}\quad\mathsf{lcf}\xrightharpoondown{}\mathsf{lcf}^{\prime}}{(\mathsf{m},\mathsf{lcfm},\mathsf{B})\xrightarrow{\mathsf{t}}(\mathsf{m},\mathsf{lcfm}[\mathsf{t}\mapsto\mathsf{lcf}^{\prime}],\mathsf{B})}\end{array}\\
\textsc{(ST-global)}^{\mathsf{dis}}\quad\inferrule{\mathsf{t}\in\mathsf{dis}\quad\mathsf{lcfm}(\mathsf{t})=\mathsf{lcf}\quad\mathsf{lcf}\xrightharpoondown{\mathsf{st},\mathsf{msg},\mathsf{dis}}\mathsf{lcf}^{\prime}}{(\mathsf{m},\mathsf{lcfm},\mathsf{B})\xrightarrow{(\mathsf{t},\mathsf{msg})}(\mathsf{m}\cup\{\mathsf{msg}\},\mathsf{lcfm}[\mathsf{t}\mapsto\mathsf{lcf}^{\prime}],\mathsf{B})}\\
\textsc{(ST-global)}^{\mathsf{env}}\vspace{-1em}\\
\inferrule{\mathsf{t}\in\mathsf{env}\quad\mathsf{lcfm}(\mathsf{t})=\mathsf{lcf}\quad\mathsf{lcf}\xrightharpoondown{\mathsf{st},\mathsf{msg},\mathsf{env}}\mathsf{lcf}^{\prime}\quad\mathsf{msg}=(\mathsf{x},\mathsf{d},\mathsf{vw}^{\mathsf{de}})\quad\mathsf{vw}^{\mathsf{de}}(\mathsf{x})=\mathsf{ts}^{+}\quad\mathsf{ts}\not\in\mathsf{B}}{(\mathsf{m},\mathsf{lcfm},\mathsf{B})\xrightarrow{(\mathsf{t},\mathsf{msg})}(\mathsf{m}\cup\{\mathsf{msg}\},\mathsf{lcfm}[\mathsf{t}\mapsto\mathsf{lcf}^{\prime}],\mathsf{B}\cup\{\mathsf{ts}^{+}\})}\\
\textsc{(CAS-global)}\vspace{-0.5em}\\
\inferrule{\mathsf{lcfm}(\mathsf{t})=\mathsf{lcf}\quad\mathsf{lcf}\xrightharpoondown{\mathsf{cas},\mathsf{msg}_{r},\mathsf{msg}_{w}}\mathsf{lcf}^{\prime}\quad\mathsf{msg}_{r}\in\mathsf{m}\quad\mathsf{msg}_{w}=(\mathsf{x},\mathsf{d},\mathsf{vw}^{\mathsf{de}})\quad\mathsf{vw}^{\mathsf{de}}(\mathsf{x})=\mathsf{ts}+1\\
\text{ if }\mathsf{msg}_{r}(\mathsf{x})\in\mathbb{N}\text{ then }\mathsf{ts}^{+}\not\in\mathsf{B}}{(\mathsf{m},\mathsf{lcfm},\mathsf{B})\xrightarrow{(\mathsf{t},\mathsf{msg}_{w})}(\mathsf{m}\cup\{\mathsf{msg}_{w}\},\mathsf{lcfm}[\mathsf{t}\mapsto\mathsf{lcf}^{\prime}],\mathsf{B}\cup\{\mathsf{ts}\})}\end{array}$
Figure 6: Simplified semantics, global transition relation. $\mathsf{B}$ is a
set of blocked timestamps. For an $\mathsf{env}$ thread making a store
operation, the timestamp $\mathsf{ts}^{+}\in\mathbb{N}^{+}$ can be chosen only
when $\mathsf{ts}$ has not been blocked ($\mathsf{ts}\notin\mathsf{B}$).
$\mathsf{ts}^{+}$ is added to $\mathsf{B}$ whenever an $\mathsf{env}$ thread
makes a store operation adding a message $(\mathsf{x},d,\mathsf{ts}^{+})$.
Likewise, when a $\mathsf{dis}$ thread makes a CAS operation loading from a
message $(\mathsf{x},d,\mathsf{vw})$ with
$\mathsf{vw}(\mathsf{x})=\mathsf{ts}\in\mathbb{N}$, it must be checked that
$\mathsf{ts}^{+}\notin\mathsf{B}$, ensuring that there are no timestamps
between $\mathsf{ts}$ and $\mathsf{ts}+1$. $\mathsf{ts}\in\mathbb{N}$ is added
to $\mathsf{B}$ when a $\mathsf{dis}$ thread makes a CAS, loading from a
message $(\mathsf{x},d,\mathsf{vw})$ with
$\mathsf{vw}(\mathsf{x})=\mathsf{ts}\in\mathbb{N}$.
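The blocking discipline of Figure 6 can be sketched as follows. This is our own minimal Python illustration (the names `BlockedSet`, `env_store`, and `dis_cas` are ours), tracking only the set $\mathsf{B}$ for a single variable: an $\mathsf{env}$ store at $\mathsf{ts}^{+}$ and a $\mathsf{dis}$ CAS over $(\mathsf{ts},\mathsf{ts}+1)$ mutually exclude each other.

```python
class BlockedSet:
    """Sketch of the blocked-timestamp set B from the global rules."""

    def __init__(self):
        self.B = set()  # holds naturals ts and marked pairs (ts, '+')

    def env_store(self, ts):
        # (ST-global)^env: env may pick ts^+ only if ts is not blocked
        if ts in self.B:
            return False
        self.B.add((ts, '+'))  # blocks future dis CAS over (ts, ts+1)
        return True

    def dis_cas(self, ts):
        # (CAS-global): loading a dis message with timestamp ts requires
        # ts^+ not blocked, i.e. no env message sits between ts and ts+1
        if (ts, '+') in self.B:
            return False
        self.B.add(ts)  # env may no longer choose ts^+
        return True

b = BlockedSet()
assert b.dis_cas(1)        # CAS over (1, 2) succeeds and blocks 1
assert not b.env_store(1)  # env can no longer write at 1^+
assert b.env_store(2)      # env writes at 2^+, blocking CAS over (2, 3)
assert not b.dis_cas(2)
```

The two directions of blocking correspond exactly to the two side conditions in the caption: $\mathsf{ts}\notin\mathsf{B}$ for $\mathsf{env}$ stores, and $\mathsf{ts}^{+}\notin\mathsf{B}$ for $\mathsf{dis}$ CAS operations.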
$\begin{array}[]{cc}\hskip
28.45274pt\begin{subarray}{c}\textsc{(ST-local${}^{\mathsf{env}}$)}\\
\text{store operation for }\mathsf{env}\end{subarray}&\quad\inferrule{\mathsf{rv}(\mathsf{r})=\mathsf{d}\quad\mathsf{vw}^{\mathsf{de}}_{1}<^{\mathsf{env}}_{\mathsf{x}}\mathsf{vw}^{\mathsf{de}}_{2}}{(\mathsf{x}\coloneqq\mathsf{r},\mathsf{rv},\mathsf{vw}^{\mathsf{de}}_{1})\xrightharpoondown{\mathsf{st},(\mathsf{x},\mathsf{d},\mathsf{vw}^{\mathsf{de}}_{2}),\mathsf{env}}(\mathsf{skip},\mathsf{rv},\mathsf{vw}^{\mathsf{de}}_{2})}\\[14.22636pt]
\begin{subarray}{c}\textsc{(ST-local)}\\
\text{store operation for }\mathsf{dis}\end{subarray}&\quad\inferrule{\mathsf{rv}(\mathsf{r})=\mathsf{d}\quad\mathsf{vw}^{\mathsf{de}}_{1}<_{\mathsf{x}}\mathsf{vw}^{\mathsf{de}}_{2}\quad\mathsf{vw}^{\mathsf{de}}_{2}(\mathsf{x})\in\mathbb{N}}{(\mathsf{x}\coloneqq\mathsf{r},\mathsf{rv},\mathsf{vw}^{\mathsf{de}}_{1})\xrightharpoondown{\mathsf{st},(\mathsf{x},\mathsf{d},\mathsf{vw}^{\mathsf{de}}_{2}),\mathsf{dis}}(\mathsf{skip},\mathsf{rv},\mathsf{vw}^{\mathsf{de}}_{2})}\\[14.22636pt]
\begin{subarray}{c}\textsc{(LD-local${}^{\mathsf{env}}$)}\\
\text{load from }\mathsf{env}\text{ messages}\end{subarray}&\quad\inferrule{\mathsf{rv}^{\prime}=\mathsf{rv}[\mathsf{r}\mapsto\mathsf{d}]\quad\mathsf{vw}^{\mathsf{de}}_{2}(\mathsf{x})\in\mathbb{N}^{+}}{(\mathsf{r}\coloneqq\mathsf{x},\mathsf{rv},\mathsf{vw}^{\mathsf{de}}_{1})\xrightharpoondown{\mathsf{ld},(\mathsf{x},\mathsf{d},\mathsf{vw}^{\mathsf{de}}_{2})}(\mathsf{skip},\mathsf{rv}^{\prime},\mathsf{vw}^{\mathsf{de}}_{1}\sqcup^{\mathsf{env}}_{\mathsf{x}}\mathsf{vw}^{\mathsf{de}}_{2})}\\[14.22636pt]
\begin{subarray}{c}\textsc{(LD-local)}\\
\text{load from }\mathsf{dis}\text{ messages}\end{subarray}&\quad\inferrule{\mathsf{rv}^{\prime}=\mathsf{rv}[\mathsf{r}\mapsto\mathsf{d}]\quad\mathsf{vw}^{\mathsf{de}}_{1}(\mathsf{x})\preceq\mathsf{vw}^{\mathsf{de}}_{2}(\mathsf{x})\in\mathbb{N}}{(\mathsf{r}\coloneqq\mathsf{x},\mathsf{rv},\mathsf{vw}^{\mathsf{de}}_{1})\xrightharpoondown{\mathsf{ld},(\mathsf{x},\mathsf{d},\mathsf{vw}^{\mathsf{de}}_{2})}(\mathsf{skip},\mathsf{rv}^{\prime},\mathsf{vw}^{\mathsf{de}}_{1}\sqcup\mathsf{vw}^{\mathsf{de}}_{2})}\\[14.22636pt]
\begin{subarray}{c}\textsc{(CAS-local${}^{\mathsf{env}}$)}\\
\text{cas with load from }\mathsf{env}\text{ messages}\end{subarray}&\quad\inferrule{\mathsf{rv}(\mathsf{r}_{1})=\mathsf{d}_{1}\quad\mathsf{rv}(\mathsf{r}_{2})=\mathsf{d}_{2}\quad\mathsf{vw}^{\mathsf{de}}_{1}(\mathsf{x})=\mathsf{ts}^{+}\\
{\mathsf{vw}^{\mathsf{de}}}^{\prime}=\mathsf{vw}^{\mathsf{de}}\sqcup^{\mathsf{env}}_{\mathsf{x}}\mathsf{vw}^{\mathsf{de}}_{1}\quad{\mathsf{vw}^{\mathsf{de}}}^{\prime}(\mathsf{x})=\mathsf{ts}_{1}^{+}\quad\mathsf{vw}^{\mathsf{de}}_{2}={\mathsf{vw}^{\mathsf{de}}}^{\prime}[\mathsf{x}\mapsto\mathsf{ts}_{1}+1]}{(\mathsf{cas}(\mathsf{x},\mathsf{r}_{1},\mathsf{r}_{2}),\mathsf{rv},\mathsf{vw}^{\mathsf{de}})\xrightharpoondown{\mathsf{cas},(\mathsf{x},\mathsf{d}_{1},\mathsf{vw}^{\mathsf{de}}_{1}),(\mathsf{x},\mathsf{d}_{2},\mathsf{vw}^{\mathsf{de}}_{2})}(\mathsf{skip},\mathsf{rv},\mathsf{vw}^{\mathsf{de}}_{2})}\\[14.22636pt]
\begin{subarray}{c}\textsc{(CAS-local)}\\
\text{cas with load from }\mathsf{dis}\text{ messages}\end{subarray}&\quad\inferrule{\mathsf{rv}(\mathsf{r}_{1})=\mathsf{d}_{1}\quad\mathsf{rv}(\mathsf{r}_{2})=\mathsf{d}_{2}\quad\mathsf{vw}^{\mathsf{de}}_{1}(\mathsf{x})=\mathsf{ts}\geq\mathsf{vw}^{\mathsf{de}}(\mathsf{x})\\
{\mathsf{vw}^{\mathsf{de}}}^{\prime}=\mathsf{vw}^{\mathsf{de}}\sqcup\mathsf{vw}^{\mathsf{de}}_{1}\quad\mathsf{vw}^{\mathsf{de}}_{2}={\mathsf{vw}^{\mathsf{de}}}^{\prime}[\mathsf{x}\mapsto\mathsf{ts}+1]}{(\mathsf{cas}(\mathsf{x},\mathsf{r}_{1},\mathsf{r}_{2}),\mathsf{rv},\mathsf{vw}^{\mathsf{de}})\xrightharpoondown{\mathsf{cas},(\mathsf{x},\mathsf{d}_{1},\mathsf{vw}^{\mathsf{de}}_{1}),(\mathsf{x},\mathsf{d}_{2},\mathsf{vw}^{\mathsf{de}}_{2})}(\mathsf{skip},\mathsf{rv},\mathsf{vw}^{\mathsf{de}}_{2})}\end{array}$
Figure 7: Simplified semantics, thread-local transition relation. Margin
annotations provide descriptions. The store rules refer to the thread type
($\mathsf{dis}$/$\mathsf{env}$) executing the instruction; the load rules
refer to the thread type which generated the message that is being loaded
(similarly for the load part of CAS operations, which can only be executed by
$\mathsf{dis}$ threads). In Rule (ST-local$^{\mathsf{env}}$), we use
$\mathsf{vw}^{\mathsf{de}}_{1}<_{\mathsf{x}}^{\mathsf{env}}\mathsf{vw}^{\mathsf{de}}_{2}$
to mean
$\mathsf{raise}(\mathsf{vw}^{\mathsf{de}}_{1}(\mathsf{x}))\preceq\mathsf{vw}^{\mathsf{de}}_{2}(\mathsf{x})\in\mathbb{N}^{+}$
and
$\mathsf{vw}^{\mathsf{de}}_{1}(\mathsf{y})=\mathsf{vw}^{\mathsf{de}}_{2}(\mathsf{y})$
for all variables $\mathsf{y}\neq\mathsf{x}$. In Rule (LD-local$^{\mathsf{env}}$),
$\mathsf{vw}^{\mathsf{de}}_{1}\sqcup_{\mathsf{x}}^{\mathsf{env}}\mathsf{vw}^{\mathsf{de}}_{2}$
is defined as
$\mathsf{vw}^{\mathsf{de}}_{1}[\mathsf{x}\mapsto\mathsf{raise}(\mathsf{vw}^{\mathsf{de}}_{1}(\mathsf{x}))]\sqcup\mathsf{vw}^{\mathsf{de}}_{2}$.
The join $\sqcup$ always means an element-wise max over the relevant domain.
The simplified semantics exactly captures reachability of the original
semantics. Define $\alpha_{\mathsf{de}}$ to be a function which drops all
views from messages and local configurations, and define $=_{\mathsf{de}}$ as
equality of the local configurations modulo views.
###### Theorem 4.7 (Soundness and Completeness).
If a configuration $\mathsf{cf}$ is reachable under RA, then there is an
abstract configuration $\mathsf{cf}^{\mathsf{de}}$ reachable in the simplified
semantics so that
$\mathsf{cf}^{\mathsf{de}}=_{\mathsf{de}}\alpha_{\mathsf{de}}(\mathsf{cf})$.
Conversely, if a configuration $\mathsf{cf}^{\mathsf{de}}$ is reachable in the
simplified semantics, then there is a configuration $\mathsf{cf}$ reachable
under RA such that
$\alpha_{\mathsf{de}}(\mathsf{cf})=_{\mathsf{de}}\mathsf{cf}^{\mathsf{de}}$.
###### Proof 4.8.
At the outset, we note that the only component of the configuration that
differs between the classical and simplified semantics is the timestamps, and
hence the view map: $\mathsf{vw}$ in the concrete and
$\mathsf{vw}^{\mathsf{de}}$ in the abstract configuration. We now give a
relation between these timestamps. With this relation in place, the formal
equivalence between the semantics can be shown by a case analysis of the
transitions that the threads can take. Once again, for quick intuition, we
take the example of timestamps on a single shared variable $\mathsf{x}$, as
follows.
$\begin{array}[]{cccccccc}\mathsf{init}&\mathsf{dis}\mathsf{T}^{0}&\mathsf{dis}\mathsf{T}^{1}&\mathsf{env}\mathsf{T}^{0}&\mathsf{dis}\mathsf{T}^{2}&\mathsf{env}\mathsf{T}^{1}&\mathsf{env}\mathsf{T}^{2}&\mathsf{dis}\mathsf{T}^{3}\\
0&1&2&3&4&5&6&7\end{array}$
This sequence of timestamps corresponds to the following sequence in the
abstract semantics.
$\begin{array}[]{rcccccccc}&\mathsf{init}&\mathsf{dis}\mathsf{T}^{0}&\mathsf{dis}\mathsf{T}^{1}&\mathsf{env}\mathsf{T}^{0}&\mathsf{dis}\mathsf{T}^{2}&\mathsf{env}\mathsf{T}^{1}&\mathsf{env}\mathsf{T}^{2}&\mathsf{dis}\mathsf{T}^{3}\\
\text{concrete}&0&1&2&3&4&5&6&7\\
\text{abstract}&0&1&2&2^{+}&3&3^{+}&3^{+}&4\end{array}$
In this fashion, the $\mathsf{ts}^{+}$ are the abstracted $\mathsf{env}$
timestamps lying between any two $\mathsf{dis}$ timestamps. We define the
abstraction (similarly, concretization) function as the function that
transforms all timestamps in the run as shown above. With the above timestamp
abstraction/concretization in mind, we show that abstract and concrete
configurations are equivalent in terms of reachability.
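As a concrete rendering of this mapping, here is a small Python sketch (ours, purely illustrative) of the timestamp abstraction for a single variable: successive $\mathsf{dis}$ timestamps become successive naturals, and every $\mathsf{env}$ timestamp falling between the $n$-th and $(n+1)$-th $\mathsf{dis}$ timestamp becomes the shared symbol $n^{+}$.

```python
def abstract(run):
    """Map a timestamp-ordered run of writer tags ('dis' or 'env')
    to abstract timestamps: dis -> 1, 2, 3, ...; env -> current n^+."""
    out, n = [], 0
    for tag in run:
        if tag == 'dis':
            n += 1
            out.append(n)
        else:
            out.append(f"{n}+")  # all env writes in this gap share n^+
    return out

# the run from the table above (after init at timestamp 0):
run = ['dis', 'dis', 'env', 'dis', 'env', 'env', 'dis']
print(abstract(run))  # [1, 2, '2+', 3, '3+', '3+', 4]
```

Note that the mapping keeps 0 as a fixpoint (the initial timestamp is untouched) and preserves the relative order of timestamps, which is the property the induction below relies on.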
We prove this by induction on the length of a run. We show that a concrete
(similarly, abstract) configuration is reachable if and only if it has some
abstraction (similarly, concretization) that is reachable.
Base Case. In the base case, equivalence is maintained, as the initial
concrete configuration is equivalent to its simplified configuration in which
all timestamps are 0. Recall that all timestamp transformations maintain 0 as
a fixpoint. Hence the initial thread-local states and memory are equivalent
for the concrete and abstract semantics.
Inductive Case - Concrete to Abstract. For the inductive case, assume that we
have the result after $n\in\mathbb{N}$ steps in a computation. Now we induct
by considering cases over the type of the $(n+1)$-th instruction in the
computation.
* •
Silent: Silent (thread-local) instructions are handled trivially. They only
change the thread local state identically for the concrete and abstract
configurations.
* •
Load: A load transition can be either from a
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
or a
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$.
For both the cases, we note that the timestamp abstraction maintains
(including equality) the relative order of timestamps. Hence whenever a
concrete message is readable, so is the corresponding abstract message.
* •
Store: This follows since the corresponding thread in the abstract
configuration can simulate the store using the corresponding timestamp
($\mathsf{ts}^{+}\in{\rm Nature}^{+}$ in case of a
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
store, and $\mathsf{ts}\in{\rm Nature}$ in case of a
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
store). Note once again that, the abstraction preserves order on the
timestamps and consequently, a store is allowed in the abstract semantics if
it was allowed in the concrete computation.
* •
CAS: In this case, we note that the set $\mathsf{B}$ keeps track of which
timestamps are allowed for CAS operations. If the CAS operation reads from an
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
message, the semantics follows from (LD-localenv) and (ST-local). However, if
the CAS load is performed using the store of a
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$,
then it implies that there are no
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
timestamps between the load and store timestamps ($\mathsf{ts},\mathsf{ts}+1$) of
the CAS (similar to
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}\mathsf{T}^{0}$
and
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}\mathsf{T}^{1}$
in the figure above). Consequently, we see that the set $\mathsf{B}$ in the
abstract semantics does not contain the timestamp $\mathsf{ts}^{+}$
($\mathsf{ts}^{+}$ is added to $\mathsf{B}$ the moment an
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
thread makes a store with the timestamp $\mathsf{ts}^{+}$, to disallow a CAS
with load and store timestamps $\mathsf{ts}$ and $\mathsf{ts}+1$). Thus the
equivalent CAS operation is also allowed under the abstract semantics.
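The order-preservation property used throughout the case analysis above can be illustrated with a small sketch. The function below is a hypothetical stand-in for the timestamp abstraction (it merely ranks the concrete timestamps; the paper's abstraction additionally distinguishes $\mathsf{dis}$ from $\mathsf{env}$ timestamps):

```python
# Hypothetical stand-in for the timestamp abstraction: rank the concrete
# timestamps. Relative order (including equality) is preserved and 0 is a
# fixpoint, which is what the base case and the load case rely on.

def abstract(timestamps):
    ranks = {ts: i for i, ts in enumerate(sorted(set(timestamps) | {0}))}
    return {ts: ranks[ts] for ts in timestamps}

concrete = [0, 3, 7, 7, 12]
mu = abstract(concrete)

assert mu[0] == 0                          # 0 is a fixpoint
for a in concrete:
    for b in concrete:
        # a message readable concretely is readable abstractly, and vice versa
        assert (a <= b) == (mu[a] <= mu[b])
```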
Inductive Case - Abstract to Concrete
* •
Silent: Silent instructions are handled trivially: they only change the
thread-local state, identically for the concrete and abstract configurations.
* •
Load: We consider two cases depending on whether the load happens from a
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
thread or an
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
thread.
* –
In the case where we load from a
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
message, the semantics are equivalent between the abstract and concrete
transitions, since we compare the timestamps
$\mathsf{ts}^{+}\prec\mathsf{ts}+1$ and $\mathsf{ts}\prec\mathsf{ts}+1$ (see
the rule (LD-local) in Figure 7). Given that the concretization function (like
the abstraction) maintains relative order between
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
and
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
timestamps, the load is also feasible in the concrete semantics.
* –
In the second case, the load is from an
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
message. By the inductive hypothesis, we have the concrete computation up to
the load transition. In particular, the message
$(\mathsf{x},\mathsf{d},\mathsf{vw})$ we wish to load has already been
generated in the concrete computation. To this concrete computation $\rho$
obtained from the inductive hypothesis, we apply the infinite supply lemma with
$t^{*}$ as the reading thread’s local view on $\mathsf{x}$ to generate the
computation
$\mathcal{M}_{1}(\rho)\triangleright\mathcal{M}_{2}(\rho\\!\downarrow_{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}})\triangleright\rho_{1}$
with the fresh message $(\mathsf{x},\mathsf{d},\mathsf{vw}^{\prime})$. By
point (2) of the lemma, the message is loadable:
$\mathsf{vw}^{\prime}(\mathsf{x})\geq\mu_{2}^{\mathsf{x}}(t^{*})$. Note that we
apply the timestamp lifting function to $t^{*}$ since the reading thread’s
concrete timestamp has changed. Additionally, by points (1) and (3), the
relative order of the timestamps in $\mathsf{vw}^{\prime}$ on variables other
than $\mathsf{x}$ remains the same w.r.t. the
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
thread messages. This implies that after reading the message, the view of the
reading thread will only increase on $\mathsf{x}$; for all other variables it
will remain the same, thus maintaining the equivalence between the timestamps
in the concrete and abstract runs.
* •
Store: The store transition for
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
is identical to its concrete counterpart. For a
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
thread, we note that we generate copies of the abstract $\mathsf{ts}^{+}$
timestamp to get a sequence of concrete timestamps. Here we can generate an
arbitrary number of copies, and hence the thread will always find a vacant
timestamp for its store.
* •
CAS: When a
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
thread makes a CAS, it can either read from an
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
message or from the store of a
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
thread. In the latter case, let the timestamps of the load and store in the CAS
be $\mathsf{ts}$ and $\mathsf{ts}+1$. Then, in the abstract semantics, we
require that $\mathsf{ts}^{+}\not\in\mathsf{B}$. This implies that in the
concrete semantics too, there are no
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
timestamps between the load and store timestamps, and hence the CAS is possible
in the concrete semantics as well. In the former case, we again use the
infinite supply lemma, as we did in the case of loads, to generate a loadable
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
message.
## 5 Safety Verification with Loop-Free Threads
This section discusses the safety verification problem for the class
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}(\mathsf{nocas})\parallel{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}_{1}(\mathsf{acyc})\parallel\dots\parallel{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}_{n}(\mathsf{acyc})$
consisting of a set of $n$ distinguished
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads executing a loop-free program in the presence of an unbounded number
of
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads. We show that the safety verification problem for this class of
systems can be decided in $\mathsf{PSPACE}$ by leveraging the simplified
semantics from Section 4. We will assume that the domain $\mathsf{Dom}$ is
finite. In parallel, we demonstrate the ability to improve automatic
verification techniques by showing how to encode the safety verification
problem (of whether all assertions hold) into Datalog programs. The encoding
is interesting for two reasons: (1) it yields a complexity upper bound that,
given [1], came as a surprise; (2) it provides practical verification
opportunities, considering that Datalog-based Horn-clause solvers are state-
of-the-art in program verification [18, 17].
###### Theorem 5.1.
The safety verification problem for
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}(\mathsf{nocas})$
$\parallel{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}_{1}(\mathsf{acyc})\parallel\dots\parallel{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}_{n}(\mathsf{acyc})$,
$n\in\mathbb{N}$, is non-deterministic polynomial-time relative to the query
evaluation problem in linear Datalog ($\mathsf{NP}^{\mathsf{PSPACE}}$),
and hence is in $\mathsf{PSPACE}$.
We note that the theorem mentions non-deterministic polynomial time relative
to a linear Datalog oracle. We provide a non-deterministic poly-time
procedure $\mathcal{A}\mathsf{lgo}$ that, given a verification instance,
converts it to a Datalog problem $\mathsf{P}$ s.t. (1) for a ‘yes’
verification instance, at least one execution of $\mathcal{A}\mathsf{lgo}$
results in $\mathsf{P}$ having successful query evaluation and (2) for a ‘no’
verification instance, no execution of $\mathcal{A}\mathsf{lgo}$ yields a
$\mathsf{P}$ with successful query evaluation.
Linear Datalog is a syntactically restricted variant of Datalog for which
query evaluation is comparatively easy to solve ($\mathsf{PSPACE}$), at the
cost of being inconvenient as an encoding target. Given that we show a $\mathsf{PSPACE}$
upper bound on the parameterized safety verification for the class
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}(\mathsf{nocas})||{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}_{1}(\mathsf{acyc})||\dots||{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}_{n}(\mathsf{acyc})$,
in principle, we could have directly encoded the parameterized safety
verification problem instance as a linear Datalog program. For convenience of
encoding, however, we do not reduce safety verification directly to query
evaluation in linear Datalog, but use an intermediate notion of $\mathsf{Cache}$ Datalog.
To make the ideas behind our reduction clear, we proceed in three steps.
1. 1.
We introduce $\mathsf{Cache}$ Datalog, which is Datalog with an additional
parameter, called the $\mathsf{Cache}$, that turns out to be decisive in
controlling the complexity of encodings in the following sense: every
$\mathsf{Cache}$ Datalog program can be turned into a linear Datalog program
at a cost that is linear in the size of the program plus that of the
$\mathsf{Cache}$ (Lemma 5.2),
2. 2.
We then show that $\mathcal{A}\mathsf{lgo}$ generates $\mathsf{Cache}$ Datalog
problems that satisfy the description from the previous paragraph (Lemma 5.4),
and
3. 3.
We then argue that for all $\mathsf{Cache}$ Datalog instances generated by
$\mathcal{A}\mathsf{lgo}$, a $\mathsf{Cache}$ of polynomial size is sufficient
for query evaluation (Lemma 5.5).
This shows Theorem 5.1.
Linearizing Datalog. A Datalog program $\mathsf{Prog}$ [24] consists of a
predicate set $\mathsf{Preds}$, a data domain $\mathsf{Data}$, and a set
$\mathsf{Rules}$ of rules (also called clauses). Each predicate comes with a
fixed arity $>0$. A predicate $P$ of arity $j$ is a mapping from
$\mathsf{Data}^{j}$ to $\\{true,false\\}$. An _atom_
$P(t_{1},\dots,t_{j})$ consists of a predicate $P$ and a list
$t_{1},\dots,t_{j}$ of arguments, where each $t_{i}$ is a term. A term is
either a variable or a constant; a term is a ground term if it is a constant,
and an atom is a ground atom if all its terms are constants. A positive
literal is an atom $P(t_{1},\dots,t_{j})$, a negative literal is a negated
atom $\neg P(t_{1},\dots,t_{j})$, and a ground literal is a literal whose atom
is ground. A rule has the form
$\mathsf{head}\leavevmode\nobreak\ \mathop{:-}\leavevmode\nobreak\
\mathsf{body}_{1},\ldots,\mathsf{body}_{t}$
where $\mathsf{head}$ and the $\mathsf{body}_{i}$ are positive literals. A rule
with one literal in the body is a linear rule; one without a body is called a
_fact_. A linear Datalog program is one where all rules are linear rules or
facts. An instantiation of a rule is the result of replacing each occurrence
of a variable in the rule by a constant. For every instantiation of a rule,
if all ground atoms constituting the body are true, then the ground atom in the
head can be inferred to be true. All instantiations of facts are trivially
true. We write $\mathsf{Prog}\vdash\mathsf{g}$ to denote that the ground atom
$\mathsf{g}$ can be inferred from program $\mathsf{Prog}$.
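For illustration, the inference relation $\mathsf{Prog}\vdash\mathsf{g}$ can be computed for already-instantiated (ground) rules by a naive least-fixpoint loop. The encoding below is a hypothetical sketch, not the paper's construction; real Datalog engines additionally instantiate variables over the data domain:

```python
# Naive bottom-up evaluation of ground (already instantiated) Datalog rules.
# A rule is a (head, body) pair; a fact has an empty body.

def infer(rules):
    """Compute the least fixpoint of the immediate-consequence step."""
    derived, changed = set(), True
    while changed:
        changed = False
        for head, body in rules:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

prog = [
    (("edge", 1, 2), []),                                # facts
    (("edge", 2, 3), []),
    (("path", 1, 2), [("edge", 1, 2)]),                  # instantiated rules
    (("path", 2, 3), [("edge", 2, 3)]),
    (("path", 1, 3), [("path", 1, 2), ("edge", 2, 3)]),
]

assert ("path", 1, 3) in infer(prog)    # Prog |- path(1,3)
```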
Query Evaluation Problem. The _query evaluation problem_ for Datalog is, given
a _query instance_ $(\mathsf{Prog},\mathsf{g})$ consisting of a Datalog
program $\mathsf{Prog}$ and a ground atom $\mathsf{g}$, to determine whether
$\mathsf{Prog}\vdash\mathsf{g}$. When studying the _combined complexity_ ,
both $\mathsf{Prog}$ and $\mathsf{g}$ are given as input [65]. It is known
[38] that combined complexity of query evaluation for linear Datalog is in
$\mathsf{PSPACE}$, while allowing non-linear rules raises the complexity to
$\mathsf{NEXPTIME}$ ([65] and [44]). Motivated by verification, there has been
interest in linearizing Datalog [45].
Adding $\mathsf{Cache}$ to Datalog: $\mathsf{Cache}$ Datalog. We extend
Datalog with the concept of a $\mathsf{Cache}$: a set of ground atoms that is
used to control the inference process. The resulting program is called a
$\mathsf{Cache}$ Datalog program. In the presence of a $\mathsf{Cache}$, the
semantics of Datalog is adapted by the following two rules.
Add: For an instantiated rule, the ground atom in the head can be inferred and
added to $\mathsf{Cache}$ only when all the ground atoms in the body are in
$\mathsf{Cache}$.
Drop: Atoms in $\mathsf{Cache}$ can be dropped non-deterministically.
The standard semantics of Datalog can be recovered by monotonically adding all
inferred atoms (starting with the facts) to the $\mathsf{Cache}$ and never
dropping anything. To show the upper bound, we use a notion of inference that
takes the size of the $\mathsf{Cache}$ into account and minimizes it. For a
$\mathsf{Cache}$ Datalog program $\mathsf{Prog}$ and $k\in\mathbb{N}$, we
write $\mathsf{Prog}\vdash_{k}\mathsf{g}$ to mean that the ground atom
$\mathsf{g}$ can be inferred from $\mathsf{Prog}$ with a computation in which
$|\mathsf{Cache}|\leq k$ holds throughout, i.e., the number of atoms in
$\mathsf{Cache}$ is always at most $k$. The $\mathsf{Cache}$ size measures the
complexity of linearizing $\mathsf{Cache}$ Datalog as follows.
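The bounded inference relation $\mathsf{Prog}\vdash_{k}\mathsf{g}$ can be explored exhaustively for ground programs. The sketch below (hypothetical encoding, ground rules only) treats cache contents as search states, with Add and Drop as the transitions:

```python
# Sketch of bounded-cache inference Prog |-_k g (ground rules only).
# Cache contents are search states; Add fires a rule whose body is in the
# cache (if there is room), Drop forgets an atom non-deterministically.

def infer_bounded(rules, goal, k):
    seen, frontier = set(), [frozenset()]
    while frontier:
        cache = frontier.pop()
        if goal in cache:
            return True
        if cache in seen:
            continue
        seen.add(cache)
        for head, body in rules:            # Add transitions
            if head not in cache and len(cache) < k \
                    and all(b in cache for b in body):
                frontier.append(cache | {head})
        for atom in cache:                  # Drop transitions
            frontier.append(cache - {atom})
    return False

chain = [(("a",), []), (("b",), [("a",)]), (("c",), [("b",)])]

assert infer_bounded(chain, ("c",), 2)      # room to chain a -> b -> c
assert not infer_bounded(chain, ("c",), 1)  # body evicted before head fits
```

Note that with $k=1$ the chain cannot be completed: deriving $\mathsf{b}$ requires $\mathsf{a}$ to still be in the cache while there is room to add $\mathsf{b}$.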
###### Lemma 5.2.
Given a $\mathsf{Cache}$ Datalog program $\mathsf{Prog}$, a ground atom
$\mathsf{g}$, and a bound $k$, we can construct, in time quadratic in
$|\mathsf{Prog}|+|\mathsf{g}|+k$, a linear Datalog program
$\mathsf{Prog}^{\prime}$ such that $\mathsf{Prog}\vdash_{k}\mathsf{g}$ iff
$\mathsf{Prog}^{\prime}\vdash\mathsf{g}$.
###### Proof 5.3.
To go from $\mathsf{Cache}$ Datalog to linear Datalog, the idea is to simulate
the $\mathsf{Cache}$ using a new predicate $\mathsf{Cache}\mathsf{Pred}$ of
arity $k$ in the constructed linear Datalog program $\mathsf{Prog}^{\prime}$.
We know that a $\mathsf{Cache}$ of size $k$ suffices for the $\mathsf{Cache}$
Datalog program, so any rule $\mathsf{head}\leavevmode\nobreak\
\mathop{:-}\leavevmode\nobreak\ \mathsf{body}_{1},\ldots,\mathsf{body}_{p}$ in
the $\mathsf{Cache}$ Datalog program satisfies $p<k$.
1. 1.
Simulating Cache Intuitively, the predicate
$\mathsf{Cache}\mathsf{Pred}(t_{1},t_{2},\cdots,t_{k})$ represents that the
terms $t_{i}$ are members of the $\mathsf{Cache}$. We can simulate the set
$\mathsf{Cache}$ by reshuffling terms using rules that swap the $i^{th}$ and
$j^{th}$ elements with rules of the form,
$\mathsf{CachePred}(t_{1},\cdots,t_{j},\cdots,t_{i},\cdots,t_{k})\leavevmode\nobreak\
\mathop{:-}\leavevmode\nobreak\
\mathsf{CachePred}(t_{1},\cdots,t_{i},\cdots,t_{j},\cdots,t_{k})$
There are quadratically many such rules.
2. 2.
Rules Consider a rule $R$ with a body of size $p$ in $\mathsf{Cache}$ Datalog
as follows.
$\mathsf{head}\leavevmode\nobreak\ \mathop{:-}\leavevmode\nobreak\
\mathsf{body}_{1},\ldots,\mathsf{body}_{p}$
We convert this into a rule that matches the first $p$ terms of
$\mathsf{Cache}\mathsf{Pred}$ with the elements of the body. If there is such
a matching, the atom $\mathsf{head}$ can be inferred and added to
$\mathsf{Cache}$. This is simulated by replacing some term amongst the $t_{i}$
with the $\mathsf{head}$ atom while keeping the other terms the same.
$\mathsf{CachePred}(t_{1},\cdots,t_{i}=\mathsf{head},\cdots,t_{k})\mathop{:-}\mathsf{CachePred}(t_{1}=\mathsf{body}_{1},\cdots,t_{p}=\mathsf{body}_{p},t_{p+1},\cdots,t_{k})$
There are $k$ choices for the term to be replaced. Thus we have $k$ new rules
per rule in the original program.
3. 3.
Final Inference Finally, since we know that each element of $\mathsf{Cache}$
is true, we add the inference rules
$t_{i}\leavevmode\nobreak\ \mathop{:-}\leavevmode\nobreak\
\mathsf{Cache}\mathsf{Pred}(t_{1},t_{2},\cdots,t_{k})\quad\text{ for }1\leq
i\leq k$
Now, $\mathsf{g}$ can be inferred if $\mathsf{g}$ ever enters the
$\mathsf{Cache}$, i.e.
$\mathsf{Cache}\mathsf{Pred}(t_{1},t_{2},\dots,\mathsf{g},\dots,t_{k})$ holds
for some other terms $t_{i}$. Then we can use the above inference rule to
infer $\mathsf{g}$.
This shows that we need at most quadratically many rules, each with a single
body atom, which gives us a linear Datalog program.
### 5.1 Datalog Encoding
Theorem 4.7 tells us that safety verification under RA is equivalent to safety
verification in the simplified semantics. Safety verification in the
simplified semantics, in turn, can be reduced to the _Message Generation (MG)_
problem.
> Given a parametrized system $\mathsf{c}$ and a message
> $\mathsf{msg}^{\\#}=(\mathsf{x}^{*},\mathsf{d}^{*},\\_)$ called _goal
> message_ , does there exist a reachable configuration
> $\mathsf{cf}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}=(\mathsf{m}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}},\mathsf{lcfm}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}})$
> such that
> $\mathsf{msg}^{\\#}\in\mathsf{m}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}$
> (for some
> $\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}$)?
To see the connection between MG and safety verification, note that we can
replace each $\mathsf{assert}\;{\texttt{false}}$ statement in the program by
$\mathsf{x}^{*}\coloneqq\mathsf{d}^{*}$ for variable $\mathsf{x}^{*}$ and
value $\mathsf{d}^{*}$ unused elsewhere. The system is unsafe if and only if a
_goal message_
$\mathsf{msg}^{\\#}=(\mathsf{x}^{*},\mathsf{d}^{*},\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}})$
is generated for some
$\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}$.
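The reduction from safety to MG described above amounts to a one-line program transformation; the tuple-based instruction encoding below is hypothetical:

```python
# Sketch: reduce safety to message generation by replacing each
# `assert false` with a store x* := d* to a fresh variable/value pair.
# Instructions are encoded as tuples (hypothetical encoding).

def to_mg(program, goal_var="x_star", goal_val="d_star"):
    return [("store", goal_var, goal_val) if ins == ("assert", False) else ins
            for ins in program]

toy_prog = [("load", "r1", "x"), ("assume", "r1 == 0"), ("assert", False)]
assert to_mg(toy_prog)[-1] == ("store", "x_star", "d_star")
```

The transformed program generates a message on the fresh variable exactly when the original program would fail an assertion.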
While encoding into Datalog, we non-deterministically guess
$\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}$.
For this, we crucially show that there are only exponentially many choices of
$\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}$
which need to be enumerated. Henceforth we assume that the queried goal
message $\mathsf{msg}^{\\#}$ can have arbitrary
$\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}$.
Given $\mathsf{c}$ and $\mathsf{msg}^{\\#}$, our non-deterministic poly-time
procedure $\mathcal{A}\mathsf{lgo}$ satisfies the following; the proof is
given in Section 5.1.2.
###### Lemma 5.4.
Given a parametrized system $\mathsf{c}$ and a goal message
$\mathsf{msg}^{\\#}$, Message Generation (MG) holds iff there is some
execution of $\mathcal{A}\mathsf{lgo}$ that generates a query instance
($\mathsf{Prog},\mathsf{g}$) such that $\mathsf{Prog}\vdash\mathsf{g}$. The
construction of $\mathsf{Prog}$ and $\mathsf{g}$ is in (non-deterministic)
time polynomial in $|\mathsf{c}|$.
The procedure $\mathcal{A}\mathsf{lgo}$ generates one query instance
$(\mathsf{Prog},\mathsf{g})$ per execution. We postpone the full description
of $\mathcal{A}\mathsf{lgo}$ and first give some intuition. Since the
parameterized system consists of $n$ loop-free
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads, each can execute only linearly many instructions in its size. The
total number of instructions executed (and hence the total number of
timestamps used) by the
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads is polynomial in
$|\mathsf{c}_{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}|$,
the combined size of
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
programs (concretely the sum of sizes of individual
$\mathsf{c}_{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}}^{i}$
programs). $\mathcal{A}\mathsf{lgo}$ guesses the
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads part of the computation and generates a query instance
$(\mathsf{Prog},\mathsf{g})$.
$\mathsf{Prog}$ itself uses four main predicates. The environment message
predicate
$\mathsf{emp}(\mathsf{x},\mathsf{d},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}})$
represents the availability of an
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
message on variable $\mathsf{x}$ with value $\mathsf{d}$ and view
$\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}$.
The environment thread predicate
$\mathsf{etp}(\mathsf{lc},\mathsf{rv},\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}})$
encodes the
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
thread configuration, where $\mathsf{lc}$ is the control-state, $\mathsf{rv}$
is the register valuation and
$\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}$
is the thread view. We also have similar message and thread predicates for
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads. The distinguished message predicate
$\mathsf{dmp}(\mathsf{x},\mathsf{d},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}})$
represents the availability of a
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
message. Additionally, for each
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
thread $i\in[n]$, we have a distinguished thread predicate
$\mathsf{dtp}_{i}(\mathsf{lc},\mathsf{rv},\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}})$
that encodes the configurations of
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}[i]$.
In the set of rules, we have the fact
$\mathsf{dmp}(\mathsf{x},\mathsf{d}_{\mathsf{init}},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{\mathsf{init}})$
for each $\mathsf{x}\in\mathsf{Var}$ with $\mathsf{d}_{\mathsf{init}}$ the
initial value and
$\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{\mathsf{init}}$
the initial view. We also have (i) facts
$\mathsf{etp}(\lambda_{\mathsf{init}},\mathsf{rv}_{\mathsf{init}},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{\mathsf{init}})$
and
$\mathsf{dtp}[i](\lambda_{\mathsf{init}},\mathsf{rv}_{\mathsf{init}},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{\mathsf{init}})$
representing the initial states of both
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
and
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads, (ii) rules corresponding to the
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
transitions and the guessed
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
thread run fragments. Finally, the query atom $\mathsf{g}$ is a ground atom
over one of $\\{\mathsf{emp}$, $\mathsf{dmp}\\}$ and captures the goal message
$\mathsf{msg}^{\\#}$ being generated. The instances generated in the
non-deterministic branches of $\mathcal{A}\mathsf{lgo}$ differ only in the
guessed
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
run and the atom $\mathsf{g}$.
We now describe the full Datalog program, also proving Lemma 5.4.
#### 5.1.1 Procedure $\mathcal{A}\mathsf{lgo}$ for query instance generation
We discuss the details of the procedure $\mathcal{A}\mathsf{lgo}$ which
generates the query instance $(\mathsf{Prog},\mathsf{g})$ non-
deterministically. We use the following predicates in the constructed Datalog
program.
* •
$\mathsf{emp}(\mathsf{msg})$: the message generation predicate for
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads, where $\mathsf{msg}$ is a message;
* •
$\mathsf{etp}(\mathsf{lc},\mathsf{rv},\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}})$
: the thread state predicate for
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads
* •
$\mathsf{dmp}(\mathsf{msg})$: the message generation predicate for
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads, where $\mathsf{msg}$ is a message;
* •
$\mathsf{dtp}[i](\mathsf{lc},\mathsf{rv},\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}})$:
the thread state predicate, one for each
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
thread
* •
$\mathsf{avail}(\mathsf{x},\mathsf{ts}^{+})$: the timestamp availability
predicate, which indicates, per variable, that the timestamp $\mathsf{ts}^{+}$
is not blocked by a CAS operation.
The generated Datalog program has two parts: one does not depend on the
non-deterministic choices made by $\mathcal{A}\mathsf{lgo}$, while the other
does. We describe the former part first; these rules of the Datalog program
are in Figure 8. The second set of rules, which depends on the
non-deterministic choices of $\mathcal{A}\mathsf{lgo}$, is in Figure 9.
The first set of rules in the Datalog program (Figure 8). The facts, in green,
provide the ground terms for the $\mathsf{init}$ messages as well as initial
state of the
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
and
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads. The orange rules capture the thread local transitions of the
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads. We deviate a bit from the standard notation for programs here, and
instead view them as labelled transition systems. It is easy to see that the
two notions are equivalent. The initial state labels are
$\lambda^{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}}_{\mathsf{init}}$
for the
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads and $\lambda^{i}_{\mathsf{init}}$ for the
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads. For a pair of labels, we write
$\lambda_{1}\xrightarrow{\texttt{i}}\lambda_{2}$ to denote that $\lambda_{2}$
can be reached from $\lambda_{1}$ by executing i. In the Datalog program, we
have a rule for each such transition in the program. The thread-local
transitions are in orange. Loads are in violet (first corresponding to loads
from
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
messages, the second for loading from
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
messages). For loads, the rule requires a term with the message predicate
(from which the thread is reading) in the body of the rule. Stores are in
pink: the first rule corresponds to the new thread-local state after execution
of the store, and the second rule corresponds to the generation of a term for
the new message (in the head). Though we use some higher-order syntax in
rules, such as $\mathsf{assume}$, $\sqcup$, and
$<^{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}}_{\mathsf{x}}$,
we note that these can easily be translated to pure Datalog with small
overhead, given the polynomial size of the domain and the constant arity of
the predicates.
(Figure: programs $\mathsf{c}_{i}$ and $\mathsf{c}_{j}$, with states
$q_{\mathsf{start}},q_{0},q_{1},q_{2},q_{\mathsf{end}}$ and instructions
$\partial_{1},\dots,\partial_{5}$, and their transformed programs
$\mathsf{c}^{\prime}_{i}$ and $\mathsf{c}^{\prime}_{j}$, in which register
loads and stores are guarded by $\mathsf{cas}$ instructions on $t_{i}$ and
$t_{j}$.)
$\begin{array}[]{rl|l}\hline\cr&\text{rule}&\text{condition on program of
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads,
$\mathsf{c}_{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$}\\\
\hline\cr\hline\cr\mathsf{etp}(\lambda_{2},\mathsf{rv},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}})&\leavevmode\nobreak\
\mathop{:-}\leavevmode\nobreak\
\mathsf{etp}(\lambda_{1},\mathsf{rv},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}})&\text{if
}\lambda_{1}\xrightarrow{\mathsf{skip}}\lambda_{2}\\\
\hline\cr\mathsf{etp}(\lambda_{2},\mathsf{rv},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}})&\leavevmode\nobreak\
\mathop{:-}\leavevmode\nobreak\ \mathsf{etp}(\lambda_{1},\mathsf{rv}\text{
with }[\\![\mathsf{e}]\\!](\mathsf{rv}(\overline{\mathsf{r}}))\neq
0,\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}})&\text{if
}\lambda_{1}\xrightarrow{\mathsf{assume}\;{\mathsf{e}(\overline{\mathsf{r}})}}\lambda_{2}\\\
\hline\cr\mathsf{etp}(\lambda_{2},\mathsf{rv}[\mathsf{r}\mapsto\mathsf{e}(\overline{\mathsf{r}})],\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}})&\leavevmode\nobreak\
\mathop{:-}\leavevmode\nobreak\
\mathsf{etp}(\lambda_{1},\mathsf{rv},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}})&\text{if
}\lambda_{1}\xrightarrow{\mathsf{r}
\coloneqq\mathsf{e}(\overline{\mathsf{r}})}\lambda_{2}\\\
\hline\cr\mathsf{etp}(\lambda_{2},\mathsf{rv}[\mathsf{r}\leftarrow\mathsf{d}],\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{1}\sqcup^{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}}_{\mathsf{x}}\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{2})&\leavevmode\nobreak\
\mathop{:-}\leavevmode\nobreak\
\mathsf{etp}(\lambda_{1},\mathsf{rv},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{1}),\mathsf{emp}(\mathsf{x},\mathsf{d},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{2})&\text{if
}\lambda_{1}\xrightarrow{\mathsf{r} \coloneqq\mathsf{x}}\lambda_{2}\\\
\hline\cr\mathsf{etp}(\lambda_{2},\mathsf{rv}[\mathsf{r}\leftarrow\mathsf{d}],\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{1}\sqcup_{\mathsf{x}}\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{2})&\leavevmode\nobreak\
\mathop{:-}\leavevmode\nobreak\
\mathsf{etp}(\lambda_{1},\mathsf{rv},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{1}),\mathsf{dmp}(\mathsf{x},\mathsf{d},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{2}),\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{1}<_{\mathsf{x}}\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{2}&\text{if
}\lambda_{1}\xrightarrow{\mathsf{r} \coloneqq\mathsf{x}}\lambda_{2}\\\
\hline\cr\mathsf{etp}(\lambda_{2},\mathsf{rv},\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}_{2})&\leavevmode\nobreak\
\mathop{:-}\leavevmode\nobreak\
\mathsf{etp}(\lambda_{1},\mathsf{rv},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{1}),\mathsf{avail}(\mathsf{x},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{2}(\mathsf{x}))&\text{if
}\lambda_{1}\xrightarrow{\mathsf{x} \coloneqq\mathsf{r}}\lambda_{2},\text{
with
}\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}_{1}<^{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}}_{\mathsf{x}}\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}_{2}\\\
\hline\cr\mathsf{emp}(\mathsf{x},\mathsf{rv}(\mathsf{r}),\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}_{2})&\leavevmode\nobreak\
\mathop{:-}\leavevmode\nobreak\
\mathsf{etp}(\lambda_{1},\mathsf{rv},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{1}),\mathsf{avail}(\mathsf{x},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{2}(\mathsf{x}))&\text{if
}\exists\lambda_{2}.\leavevmode\nobreak\ \lambda_{1}\xrightarrow{\mathsf{x}
\coloneqq\mathsf{r}}\lambda_{2},\text{ with
}\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}_{1}<^{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}}_{\mathsf{x}}\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}_{2}\\\
\hline\cr\end{array}$
(a) The (fixed) set of rules in the Datalog program encoding the transition
system of the
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads. Silent transitions (in orange); memory accesses: loads (in violet)
and stores (in pink).
$\begin{array}[]{rl|l}\hline\cr\text{fact}&&\text{comment}\\\
\hline\cr\hline\cr\mathsf{dmp}(\mathsf{x},\mathsf{d}_{\mathsf{init}},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{\mathsf{init}})&\leavevmode\nobreak\
\mathop{:-}\nobreak\leavevmode&\text{ for all variables }\mathsf{x}\\\
\hline\cr\mathsf{etp}(\lambda^{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}_{\mathsf{init}},\mathsf{rv}_{\mathsf{init}},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{\mathsf{init}})&\leavevmode\nobreak\
\mathop{:-}&\lambda^{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}_{\mathsf{init}}\text{
is initial state of
}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}\text{
threads}\\\
\hline\cr\mathsf{dtp}[i](\lambda^{i}_{\mathsf{init}},\mathsf{rv}_{\mathsf{init}},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{\mathsf{init}})&\leavevmode\nobreak\
\mathop{:-}&\lambda^{i}_{\mathsf{init}}\text{ is initial state of
}{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}[i]\text{
thread}\\\ \hline\cr\end{array}$ (b) First set of facts in the Datalog
program; these do not depend on the non-deterministic guess made by
$\mathcal{A}\mathsf{lgo}$ for the computation of
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads. These facts encode the initial configurations of the threads and the
initial messages.
Figure 8: First set of rules for the Datalog program. This fixed rule set is
independent of the non-determinism of $\mathcal{A}\mathsf{lgo}$.
These rules completely capture the
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
thread component of the run. As mentioned earlier, the component of the
query instance that differs due to non-determinism of
$\mathcal{A}\mathsf{lgo}$ is the
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
part of the run. Essentially, $\mathcal{A}\mathsf{lgo}$ guesses in polynomial
time the executions of all the
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads. This is possible since they are loop-free and hence execution lengths
are linear in the size of their specifications. We now describe this second
part of the Datalog query instance.
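The loop-freedom argument can be made concrete with a small sketch (in Python; the control-flow-graph encoding and names are ours, purely for illustration, not part of the formal development): in an acyclic instruction graph, every execution is bounded by the longest path, which is at most the number of transitions.

```python
# Illustrative check that loop-free programs have runs of length linear in
# their size: the longest path in a DAG of instructions bounds every
# execution. The graph encoding here is ours, purely for illustration.
from functools import lru_cache

def longest_path(edges, start):
    @lru_cache(maxsize=None)
    def lp(node):
        # longest path from `node`: 1 + longest path from any successor
        return max((1 + lp(dst) for (src, dst) in edges if src == node),
                   default=0)
    return lp(start)

# a tiny loop-free (acyclic) control-flow graph with 4 transitions
cfg = (("q0", "q1"), ("q1", "q2"), ("q0", "q2"), ("q2", "q3"))
assert longest_path(cfg, "q0") <= len(cfg)  # run length <= number of edges
```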
Second set of rules in the Datalog program (Figure 9). We have a bound on the
number of write timestamps that can be used by the
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads: an easy bound is the combined number of instructions in
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads,
$|\mathsf{c}_{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}|$.
We will refer to this bound as $T$. By the simplified semantics, it suffices to
consider the timestamps $\\{0,0^{+},\cdots,T,T^{+}\\}$. This follows since the
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads perform at most $T$ writes. Hence we need only $T$ timestamps of the
form $\mathbb{N}$. Additionally, we have only one timestamp of the form
$\mathbb{N}^{+}$ between any two timestamps of the form $\mathbb{N}$. This
shows that the view terms in the predicates of the Datalog program can be
guessed in polynomial space (since $T$ is polynomial in the input).
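This counting argument can be illustrated with a small Python sketch (hypothetical; the `(ts, '+')` encoding of $\mathbb{N}^{+}$ timestamps is ours): the bounded timestamp domain has size $2(T+1)$, linear in $T$.

```python
# Hypothetical sketch of the bounded timestamp domain {0, 0+, ..., T, T+}.
# Integer timestamps are modelled as ints, and "ts+" timestamps as (ts, '+').

def timestamp_domain(T):
    """All timestamps {0, 0+, 1, 1+, ..., T, T+} for a bound T."""
    dom = []
    for ts in range(T + 1):
        dom.append(ts)          # timestamp of the form N
        dom.append((ts, '+'))   # the unique N+ timestamp between ts and ts+1
    return dom

dom = timestamp_domain(3)
assert len(dom) == 2 * (3 + 1)  # domain size is linear in T
assert dom[0] == 0 and dom[1] == (0, '+')
```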
Now for each
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
thread $i$, the procedure $\mathcal{A}\mathsf{lgo}$ non-deterministically
guesses the computation $\rho_{i}$ for
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}[i]$.
That is, $\mathcal{A}\mathsf{lgo}$ guesses the timestamps and the register
valuations of
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}_{i}$
at each configuration in this run, along with the messages
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}_{i}$
loaded from. After this, it converts $\rho_{i}$ into a set of rules, which are
then added to the earlier set from Figure 8.
Consider the computation
$\rho_{i}\equiv\lambda^{i}_{\mathsf{init}}\xrightarrow{\texttt{i}_{1}}\lambda_{1}\xrightarrow{\texttt{i}_{2}}\lambda_{2}\cdots\xrightarrow{\texttt{i}_{|\rho_{i}|}}\lambda_{|\rho_{i}|}$
of length $|\rho_{i}|$. Let the views of
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
thread $i$ at point $j$ in the run be given as
$\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{j}$.
Additionally, if $\texttt{i}_{j}$ is a load instruction,
$\mathcal{A}\mathsf{lgo}$ also guesses the message that was read by the
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
thread $i$. Each instruction $\texttt{i}_{j}$ in this computation is then
converted into one amongst the rules in Figure 9(a) depending on the
instruction $\texttt{i}_{j}$ executed, represented in the figure as
‘condition’. Additionally, to encode the $\mathbb{N}^{+}$ timestamps that have
not been occupied by CAS operations (and hence are free to use by the
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads), we have the rule in Figure 9(b).
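The conversion of a guessed run into rules can be sketched as follows (hypothetical Python; the instruction kinds and rule strings are illustrative stand-ins for the templates of Figure 9(a), not the paper's syntax):

```python
# Hypothetical sketch of how Algo turns a guessed run rho_i of a dis thread
# into ground Datalog rules: each instruction selects one rule template,
# mirroring the 'condition' column of Figure 9(a). All names here are
# illustrative placeholders.

def rule_for(i, j, instr):
    kind = instr[0]
    head = f"dtp[{i}](l{j}, rv', vw')"
    body = [f"dtp[{i}](l{j-1}, rv, vw1)"]
    if kind == "load":      # r := x, with a guessed message msg = (x, d, vw2)
        x = instr[1]
        body.append(f"dmp({x}, d, vw2)")   # e.g. a load from a dis message
        return [(head, body)]
    if kind == "store":     # x := r also generates a message-creation rule
        x = instr[1]
        return [(head, body), (f"dmp({x}, rv(r), vw2)", body)]
    # skip / assume / register assignment: thread-local rule only
    return [(head, body)]

run = [("skip",), ("load", "x"), ("store", "y")]
rules = [r for j, ins in enumerate(run, 1) for r in rule_for(0, j, ins)]
assert len(rules) == 4  # the store contributes a message-generation rule too
```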
$\begin{array}[]{rl|l}\hline\cr&\text{rule}&\text{condition
on thread transition $\texttt{i}_{j}$ of computation $\rho_{i}$ for thread
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}_{i}$}\\\
\hline\cr\hline\cr\mathsf{dtp}[i](\lambda_{j},\mathsf{rv},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}})&\leavevmode\nobreak\
\mathop{:-}\leavevmode\nobreak\
\mathsf{dtp}[i](\lambda_{j-1},\mathsf{rv},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}})&\texttt{i}_{j}=\mathsf{skip}\\\
\hline\cr\mathsf{dtp}[i](\lambda_{j},\mathsf{rv},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}})&\leavevmode\nobreak\
\mathop{:-}\leavevmode\nobreak\
\mathsf{dtp}[i](\lambda_{j-1},\mathsf{rv},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}})&\texttt{i}_{j}=\mathsf{assume}\;{\mathsf{e}(\overline{\mathsf{r}})}\land[\\![\mathsf{e}]\\!](\mathsf{rv}(\overline{\mathsf{r}}))\neq
0\\\
\hline\cr\mathsf{dtp}[i](\lambda_{j},\mathsf{rv}[\mathsf{r}\mapsto\mathsf{e}(\overline{\mathsf{r}})],\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}})&\leavevmode\nobreak\
\mathop{:-}\leavevmode\nobreak\
\mathsf{dtp}[i](\lambda_{j-1},\mathsf{rv},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}})&\texttt{i}_{j}=\mathsf{r}
\coloneqq\mathsf{e}(\overline{\mathsf{r}})\\\
\hline\cr\mathsf{dtp}[i](\lambda_{j},\mathsf{rv}[\mathsf{r}\leftarrow\mathsf{d}],\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{1}\sqcup_{\mathsf{x}}^{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}}\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{2})&\leavevmode\nobreak\
\mathop{:-}\leavevmode\nobreak\
\mathsf{dtp}[i](\lambda_{j-1},\mathsf{rv},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{1}),\mathsf{emp}(\mathsf{x},\mathsf{d},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{2})&\texttt{i}_{j}=\mathsf{r}
\coloneqq\mathsf{x}\land\text{ thread loads
}\mathsf{msg}=(\mathsf{x},\mathsf{d},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{2})\land\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{2}(\mathsf{x})\in\mathbb{N}^{+}\\\
\hline\cr\mathsf{dtp}[i](\lambda_{j},\mathsf{rv}[\mathsf{r}\leftarrow\mathsf{d}],\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{1}\sqcup_{\mathsf{x}}\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{2})&\leavevmode\nobreak\
\mathop{:-}\leavevmode\nobreak\
\mathsf{dtp}[i](\lambda_{j-1},\mathsf{rv},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{1}),\mathsf{dmp}(\mathsf{x},\mathsf{d},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{2})&\texttt{i}_{j}=\mathsf{r}
\coloneqq\mathsf{x}\land\text{ thread loads
}\mathsf{msg}=(\mathsf{x},\mathsf{d},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{2})\land\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{2}(\mathsf{x})\in\mathbb{N}\land\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}_{1}<_{\mathsf{x}}\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}_{2}\\\
\hline\cr\mathsf{dtp}[i](\lambda_{j},\mathsf{rv},\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}_{2})&\leavevmode\nobreak\
\mathop{:-}\leavevmode\nobreak\
\mathsf{dtp}[i](\lambda_{j-1},\mathsf{rv},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{1})&\\\
\mathsf{dmp}(\mathsf{x},\mathsf{rv}(\mathsf{r}),\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}_{2})&\leavevmode\nobreak\
\mathop{:-}\leavevmode\nobreak\
\mathsf{dtp}[i](\lambda_{j-1},\mathsf{rv},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{1})&\hbox{\multirowsetup$\texttt{i}_{j}=\mathsf{x}
\coloneqq\mathsf{r}\land\text{ thread stores
}\mathsf{msg}=(\mathsf{x},\mathsf{rv}(\mathsf{r}),\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}_{2})\land\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}_{1}<_{\mathsf{x}}\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}_{2}$}\\\
\hline\cr\pagecolor{black!5}\mathsf{dtp}[i](\lambda_{j},\mathsf{rv},\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}_{3})&\leavevmode\nobreak\
\pagecolor{black!5}\mathop{:-}\leavevmode\nobreak\
\mathsf{dtp}[i](\lambda_{j-1},\mathsf{rv},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{1}),\mathsf{emp}(\mathsf{x},\mathsf{d},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{2})&\pagecolor{black!5}\texttt{i}_{j}=\mathsf{cas}(\mathsf{x},\mathsf{d},\mathsf{r})\land\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}_{2}(\mathsf{x})\in\mathbb{N}^{+},\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}=\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}_{1}\sqcup_{\mathsf{x}}^{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}}\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}_{2},\\\
\pagecolor{black!5}\mathsf{dmp}(\mathsf{x},\mathsf{rv}(\mathsf{r}),\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}_{3})&\leavevmode\nobreak\
\pagecolor{black!5}\mathop{:-}\leavevmode\nobreak\
\mathsf{dtp}[i](\lambda_{j-1},\mathsf{rv},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{1}),\mathsf{emp}(\mathsf{x},\mathsf{d},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{2})&\pagecolor{black!5}\qquad\qquad\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}(\mathsf{x})=\mathsf{ts}^{+},\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}_{3}=\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}[\mathsf{x}\rightarrow\mathsf{ts}+1]\\\
\hline\cr\pagecolor{black!5}\mathsf{dtp}[i](\lambda_{j},\mathsf{rv},\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}_{3})&\leavevmode\nobreak\
\pagecolor{black!5}\mathop{:-}\leavevmode\nobreak\
\mathsf{dtp}[i](\lambda_{j-1},\mathsf{rv},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{1}),\mathsf{dmp}(\mathsf{x},\mathsf{d},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{2})&\pagecolor{black!5}\texttt{i}_{j}=\mathsf{cas}(\mathsf{x},\mathsf{d},\mathsf{r})\land\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}_{1}(\mathsf{x})\leq\mathsf{ts}=\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}_{2}(\mathsf{x})\\\
\pagecolor{black!5}\mathsf{dmp}(\mathsf{x},\mathsf{rv}(\mathsf{r}),\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}_{3})&\leavevmode\nobreak\
\pagecolor{black!5}\mathop{:-}\leavevmode\nobreak\
\mathsf{dtp}[i](\lambda_{j-1},\mathsf{rv},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{1}),\mathsf{dmp}(\mathsf{x},\mathsf{d},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{2})&\qquad\qquad\pagecolor{black!5}\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}=\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}_{1}\sqcup_{\mathsf{x}}\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}_{2},\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}_{3}=\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}[\mathsf{x}\rightarrow\mathsf{ts}+1]\\\
\hline\cr\end{array}$
(a) These rules are chosen depending upon the nondeterministic choice made by
$\mathcal{A}\mathsf{lgo}$ of the computation $\rho_{i}$ of thread $i$. Each
instruction $\texttt{i}_{j}$ executed in $\rho_{i}$ is then mapped to one of
the rules above depending upon which condition (right column) is satisfied.
Rules for silent $\texttt{i}_{j}$ (in orange); memory accesses: loads (in
violet), stores (in pink), and CAS (in gray). The second pink rule
corresponds to message generation by thread $i$ executing a store
instruction. The first two CAS rules correspond to the case where the load is
from an
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
message and the last two to a load from a
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
message. The first rule of each pair is the thread-local state-change rule,
while the second generates the ground atom corresponding to the message
produced by the CAS operation.
$\begin{array}[]{rl|l}\hline\cr\hskip
85.35826pt\text{fact}&&\hskip 85.35826pt\text{condition on availability of
timestamps}\\\ \hline\cr\hline\cr\hskip
85.35826pt\mathsf{avail}(\mathsf{x},\mathsf{ts}^{+})&\leavevmode\nobreak\
\mathop{:-}&\hskip 85.35826pt\mathsf{ts}\in\\{0,\cdots,T\\}\land\text{ no }{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}\text{ thread performs a CAS operation with timestamps }(\mathsf{ts},\mathsf{ts}+1)\\\
\hline\cr\end{array}$
(b) This fact corresponds to the availability of an $\mathbb{N}^{+}$ timestamp
for stores by
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads, which is known once all the
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
computations have been guessed. These rules are not generated on a
per-${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
thread basis but rather once the computations $\rho_{i}$ for all
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads have been non-deterministically guessed. Referring to the simplified
semantics, this rule captures $\mathsf{ts}\not\in\mathsf{B}$, the fact that
there is no CAS operation with timestamps $(\mathsf{ts},\mathsf{ts}+1)$. Note
that the $\mathsf{avail}$ predicate plays a role in inferring the
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
thread state and message predicates, as seen in the last two rows of Figure 8:
we can infer the
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
thread state and message predicates with a view $\mathsf{vw}(\mathsf{x})$ only
when the respective timestamp is not blocked. This in turn is used in the
first CAS operation (first gray row of Figure 9(a)) when loads happen from an
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
thread: the merged view
$\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}(\mathsf{x})=\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}_{1}(\mathsf{x})\sqcup_{\mathsf{x}}^{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}}\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}_{2}(\mathsf{x})$
is $\mathsf{ts}^{+}$, and the new timestamp after the CAS is $\mathsf{ts}+1$. Note
that this is possible since (i) if the timestamp of the
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
thread for $\mathsf{x}$ from where we load was $\mathsf{ts}^{+}$, then there
is no
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
thread with a timestamp $\mathsf{ts}$ for $\mathsf{x}$ and, (ii) if the
timestamp of the
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
thread from where we load was $\prec\mathsf{ts}^{+}$, then the timestamp of
the
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
thread performing the CAS was $\mathsf{ts}$ for $\mathsf{x}$. In both cases,
the timestamp after the CAS will be $\mathsf{ts}+1$.
Figure 9: Second set of rules for the Datalog program. This rule set depends
on the nondeterministic choice made by $\mathcal{A}\mathsf{lgo}$ for the
computations of the
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads.
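The timestamp arithmetic in the CAS discussion above can be modelled with a simple integer encoding (ours, for illustration only): mapping $\mathsf{ts}$ to $2\mathsf{ts}$ and $\mathsf{ts}^{+}$ to $2\mathsf{ts}+1$ makes the order $\mathsf{ts}<\mathsf{ts}^{+}<\mathsf{ts}+1$ the usual integer order, so view merges become pointwise maxima and a CAS reading at $\mathsf{ts}^{+}$ installs its message at $\mathsf{ts}+1$:

```python
# Hypothetical integer encoding of timestamps so that ts < ts+ < ts+1:
# an integer timestamp ts maps to 2*ts and ts+ maps to 2*ts + 1. Then a
# per-variable view merge is a pointwise max, and a successful CAS that
# reads at ts+ writes at ts + 1 (encoded 2*ts + 2), as in the gray rules.

def enc(ts, plus=False):
    return 2 * ts + (1 if plus else 0)

def merge(vw1, vw2):
    # pointwise max; assumes both views are defined on the same variables
    return {x: max(vw1[x], vw2[x]) for x in vw1}

vw1 = {"x": enc(1)}             # thread view: x at timestamp 1
vw2 = {"x": enc(1, plus=True)}  # loaded message: x at timestamp 1+
merged = merge(vw1, vw2)
assert merged["x"] == enc(1, plus=True)
cas_write = merged["x"] + 1     # the CAS installs its message at timestamp 2
assert cas_write == enc(2)
```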
Since we have the polynomial bound on $T$, it is easy to see that the rules
above for the run $\rho_{i}$ executed by each
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
thread $i$ can be generated in polynomial time after nondeterministically
guessing $\rho_{i}$. These (non-determinism dependent) rules along with the
rules from Figure 8 together form the complete Datalog program.
#### 5.1.2 Invariants for (Datalog Inference $\leftrightarrow$ Computations
in the Simplified Semantics) and proof of Lemma 5.4
Now we see how an inference process in the (complete) Datalog program
corresponds to a computation in the simplified semantics. To do this, we give
invariants which relate the inference of atoms in the Datalog program with the
existence of events in the computation. These invariants together imply the
equivalence between an inference sequence in the Datalog program and a
computation of $\mathsf{c}$ under the simplified semantics. Finally, if
the goal message $\mathsf{msg}^{\\#}$ is reachable at the end of a computation
$\rho$ of $\mathsf{c}$, then correspondingly, thanks to the invariants we
obtain, we can also infer the ground term $\mathsf{g}$ being
$\mathsf{dmp}(\mathsf{msg}^{\\#})$ or $\mathsf{emp}(\mathsf{msg}^{\\#})$ in
the Datalog program depending on whether the goal message was generated by a
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
thread or
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
thread in the computation.
1. 1.
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
thread-local state invariant
> The ground atom
> $\mathsf{etp}(\lambda,\mathsf{rv},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}})$
> can be inferred iff some
> ${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
> thread can reach the
> $\mathsf{lcf}=(\lambda,\mathsf{rv},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}})$.
This says that some
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
thread is able to reach the state from its transition system with label
$\lambda$ such that the thread-local view and the register valuation at that
time are
$\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}$
and $\mathsf{rv}$ respectively. We can prove that this holds by induction on
the length of the run and by noting from the Datalog rules (Figure 8) that
there is a transition $\lambda\xrightarrow{\texttt{i}}\lambda^{\prime}$
whenever there is a rule corresponding to that. Additionally, the load rules
(in blue) require that the corresponding message atoms
$(\mathsf{emp}/\mathsf{dmp})$ holds which as we will see below implies the
possibility of the generation of a message in the memory.
2. 2.
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
thread message invariant
> The ground atom
> $\mathsf{emp}(\mathsf{x},\mathsf{d},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}})$
> can be inferred if the corresponding message
> $\mathsf{msg}=(\mathsf{x},\mathsf{d},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}})$
> can be generated in the simplified semantics by some
> ${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
> thread.
Note that a ground atom of the form
$\mathsf{emp}(\mathsf{x},\mathsf{d},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}})$
can only be inferred using the last rule in Figure 8. The body of this rule
contains the term
$\mathsf{etp}(\lambda,\mathsf{rv},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}})$.
This, if true, implies that some
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
thread can reach the corresponding thread-local state by the first invariant
(above). An
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
thread in this thread-local state can generate the message
$(\mathsf{x},\mathsf{rv}(\mathsf{r}),{\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}}^{\prime})$
since there is an outgoing transition from $\lambda$ with instruction
$\mathsf{x}:=\mathsf{r}$. The check on the existence of this transition is
what licenses the last rule in the program, ensuring that the message can in
fact be generated.
3. 3.
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
thread-local state For each
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
thread $i$, we have the following invariant.
> The ground atom
> $\mathsf{dtp}[i](\lambda,\mathsf{rv},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}})$
> can be inferred iff the
> ${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
> thread $i$ can reach the
> $\mathsf{lcf}=(\lambda,\mathsf{rv},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}})$.
This is just the
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
analog of the first invariant for
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads. This can also be proved by induction on the length of the run (or the
inference sequence). Also analogous to the invariant for the $\mathsf{emp}$,
we have an invariant for
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
messages.
4. 4.
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
thread message invariant
> The ground atom
> $\mathsf{dmp}(\mathsf{x},\mathsf{d},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}})$
> can be inferred if the corresponding
> (${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$)
> message
> $\mathsf{msg}=(\mathsf{x},\mathsf{d},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}})$
> can be generated in the simplified semantics by some
> ${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
> thread.
5. 5.
$\mathsf{avail}$ timestamp availability invariant
> If the fact $\mathsf{avail}(\mathsf{x},\mathsf{ts}^{+})$ is in the Datalog
> program then we have $\mathsf{ts}\not\in\mathsf{B}$ throughout the
> computation $\rho$ of the simplified semantics.
The base case for message predicates holds since the _facts_
$\mathsf{dmp}(\mathsf{x},\mathsf{d}_{\mathsf{init}},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{\mathsf{init}})$
for all variables are given in the Datalog program. The base case for thread
state predicates holds due to the fact
$\mathsf{etp}(\lambda_{\mathsf{init}},\mathsf{rv}_{\mathsf{init}},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{\mathsf{init}})$
which captures the initial state of the thread. The inductive steps can be
formally proved by considering a computation $\rho$ under the simplified
semantics and mapping each transition in $\rho$ to an inference step in the
Datalog program. For the converse, we assume an inference sequence (a sequence
of invocations of the rules) and, for each rule invoked to infer a new ground
atom, we show that a corresponding transition can be taken by a thread in the
simplified semantics so that the invariants are maintained. This in turn, is
done by taking cases on the next instruction to be executed.
The equivalence between the transitions of a computation $\rho$ of the
simplified semantics and the applications of rules/facts in the Datalog
program implies that some message $\mathsf{msg}$ is reachable in $\rho$ iff
the corresponding ground term $\mathsf{dmp}(\mathsf{msg})$ or
$\mathsf{emp}(\mathsf{msg})$ is inferred in Datalog; this suffices to prove
Lemma 5.4. In particular, the generation of
$\mathsf{msg}^{{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}}$
in some computation $\rho$ of $\mathsf{c}$ gives a sub-computation
$\rho\\!\downarrow_{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}}$
performed by the
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads. We consider the Datalog query instance $(\mathsf{Prog},\mathsf{g})$
generated where $\mathcal{A}\mathsf{lgo}$ correctly guesses
$\rho\\!\downarrow_{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}}$.
By the message generation invariant, the ground atom
$\mathsf{dmp}(\mathsf{msg}^{{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}})$
or
$\mathsf{emp}(\mathsf{msg}^{{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}})$
corresponding to
$\mathsf{msg}^{{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}}$
can be inferred, $\mathsf{Prog}\vdash\mathsf{g}$, giving the forward direction
of the lemma.
For the reverse direction, we note that
$\mathsf{Prog}\vdash\mathsf{emp}(\mathsf{msg}^{{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}})$
or
$\mathsf{Prog}\vdash\mathsf{dmp}(\mathsf{msg}^{{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}})$
immediately implies that the message can be generated in some computation of
the system $\mathsf{c}$ (the
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
computation is already determined in the guessed program $\mathsf{Prog}$).
### 5.2 $\mathsf{Cache}$ Size
Having described the encoding(s), the challenge now is to provide a polynomial
bound on the cache size for the query instances generated by
$\mathcal{A}\mathsf{lgo}$. The $\mathsf{Cache}$ behaves like a memoized set of
atoms which are used for the inference process. The reason why a polynomial
sized $\mathsf{Cache}$ suffices is that we can “forget” (remove from
$\mathsf{Cache}$) previously inferred atoms when they are not being actively
used. We use this crucially in the context of
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
predicates, $\mathsf{emp},\mathsf{etp}$. Technically this is possible since
the arbitrary replication property of
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads allows us to “forget” the state of the previously simulated
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
thread and simulate a fresh copy instead.
Let
$\mathcal{Q}_{0}=|\mathsf{Dom}||\mathsf{Var}|+|\mathsf{c}_{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}|$.
We show that a $\mathsf{Cache}$ of size $\mathcal{O}(\mathcal{Q}_{0}^{2})$ is
sufficient to infer $\mathsf{g}$.
###### Lemma 5.5.
For each $(\mathsf{Prog},\mathsf{g})$ generated by $\mathcal{A}\mathsf{lgo}$,
$\mathsf{Prog}\vdash\mathsf{g}$ if and only if
$\mathsf{Prog}\vdash_{k}\mathsf{g}$ with
$k\in\mathcal{O}(\mathcal{Q}_{0}^{2})$.
An inference sequence performed on $\mathsf{Prog}$ corresponds to a
computation of the parameterized system $\mathsf{c}$ in the simplified
semantics (Section 5.1.2). Hence, to see that the above size of
$\mathsf{Cache}$ is sufficient we analyze the structure of computations in the
simplified semantics. The analysis will reveal a dependency relation among the
messages generated. We will see that this gives enough information to guide
the Datalog computation so as to use a small $\mathsf{Cache}$.
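The role of the bound can be pictured with a brute-force sketch of $\vdash_{k}$ (a toy formulation of our own: an Add step caches an atom whose rule body is already cached, a Drop step forgets a cached atom, and the $\mathsf{Cache}$ may never exceed $k$ atoms):

```python
# Toy model of cache-bounded inference: explore Cache contents of size at
# most k, where a step either Adds a derivable atom (facts have empty
# bodies) or Drops a cached atom.

def bounded_infer(facts, rules, goal, k):
    all_rules = list(rules) + [((), f) for f in facts]
    start = frozenset()
    seen, stack = {start}, [start]
    while stack:
        cache = stack.pop()
        if goal in cache:
            return True
        successors = []
        for body, head in all_rules:          # Add steps
            if head not in cache and all(b in cache for b in body):
                successors.append(cache | {head})
        for atom in cache:                    # Drop steps
            successors.append(cache - {atom})
        for c in successors:
            if len(c) <= k and c not in seen:
                seen.add(c)
                stack.append(c)
    return False

# Chain a -> b -> c: once b is cached, a can be forgotten, so the goal c is
# inferable with a Cache of 2 atoms but not with 1.
facts = {"a"}
rules = [(("a",), "b"), (("b",), "c")]
```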
Consider a computation
$\rho^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}$
ending in the configuration
$\mathsf{last}(\rho^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}})=(\mathsf{m}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}},\mathsf{lcfm}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}})$.
For every message
$\mathsf{msg}^{{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}}$
in
$\mathsf{m}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}$,
we define
$\mathsf{genthread}(\mathsf{msg}^{{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}})$
as the first thread which added
$\mathsf{msg}^{{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}}$
to the memory
$\mathsf{m}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}$.
(Recall that the simplified semantics admits repeated insertions for
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
messages due to reuse of timestamps from $\mathbb{N}^{+}$). We define
$\mathsf{depend}(\mathsf{msg}^{{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}})$
as the set of messages which
$\mathsf{genthread}(\mathsf{msg}^{{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}})$
reads from, before generating the first instance of
$\mathsf{msg}^{{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}}$.
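For concreteness, $\mathsf{genthread}(-)$ and $\mathsf{depend}(-)$ can be read off a computation by a single scan over its load/store events; the event-log representation below is our own, the paper defines these notions directly on computations:

```python
# Sketch: recovering genthread(-) and depend(-) from a computation log.
# Events are ("load", thread, msg) or ("store", thread, msg); genthread(m)
# is the first thread to store m, and depend(m) collects the messages that
# thread loaded before its first store of m.

def analyze(log):
    genthread, depend = {}, {}
    loaded = {}  # thread -> set of messages it has loaded so far
    for kind, t, m in log:
        loaded.setdefault(t, set())
        if kind == "load":
            loaded[t].add(m)
        elif kind == "store" and m not in genthread:
            # Only the *first* insertion of m counts (repeated insertions
            # of env messages are possible in the simplified semantics).
            genthread[m] = t
            depend[m] = set(loaded[t])
    return genthread, depend

log = [("load", "T1", "x0"), ("store", "T1", "y1"),
       ("load", "T2", "y1"), ("store", "T2", "y2"),
       ("store", "T2", "y1")]  # repeated insertion of y1: ignored
gen, dep = analyze(log)
```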
We define the notion of a dependency graph for a computation
$\rho^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}$.
###### Definition 5.6.
The dependency graph of a computation
$\rho^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}$
with
$\mathsf{last}(\rho^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}})=(\mathsf{m}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}},\mathsf{lcfm}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}})$
is the directed graph
$\mathit{G}_{\rho^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}=(V,E)$
whose vertices
$V=\mathsf{m}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}$
are the messages in the final configuration and whose edges reflect the
dependencies,
$(\mathsf{msg}^{{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}}_{1},\mathsf{msg}^{{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}}_{2})\in
E$ if
$\mathsf{msg}^{{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}}_{1}\in\mathsf{depend}(\mathsf{msg}^{{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}}_{2})$.
As $\mathsf{depend}(-)$ is based on the linear order of the computation, the
dependency graph is acyclic: if there were a cycle, all the threads involved
in the cycle would be dependent on each other for the first generation of the
respective messages, causing a deadlock.
We denote the sets of sink and source vertices of $\mathit{G}$ by
$\mathsf{sink}(\mathit{G})$ resp. $\mathsf{source}(\mathit{G})$. A path in $G$
is also called a _dependency sequence_. A path or dependency sequence
$m_{1}\rightarrow m_{2}\rightarrow m_{3}\rightarrow\dots m_{n-1}\rightarrow
m_{n}$ thus says that $m_{1}$ was read by some thread which generated $m_{2}$,
$m_{2}$ in turn was read by a thread which generated $m_{3}$ and so on till
the thread which generated $m_{n}$ read $m_{n-1}$. Given such a sequence, we
say $m_{i}$ is an ancestor of $m_{j}$ if $i<j$. The height of a vertex $v$ is
the length of a longest path from a source vertex to $v$. The maximal height
over all vertices is $\mathsf{height}(\mathit{G})$. See Figure 10 for an
example.
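Height can be computed in one topological pass, since dependency graphs are acyclic; the following sketch does so on a hypothetical five-message graph in the spirit of Figure 10 (the edge set is our own, not the figure's):

```python
# Sketch: computing the height of a dependency graph.  Vertices are message
# identifiers; an edge (m1, m2) means m1 is in depend(m2).  Height of v is
# the length of a longest source-to-v path, found by a Kahn-style pass.

from collections import defaultdict

def height(edges, vertices):
    preds, succs = defaultdict(set), defaultdict(set)
    for a, b in edges:
        preds[b].add(a)
        succs[a].add(b)
    h = {v: 0 for v in vertices}      # longest path from a source to v
    indeg = {v: len(preds[v]) for v in vertices}
    queue = [v for v in vertices if indeg[v] == 0]
    while queue:
        v = queue.pop()
        for w in succs[v]:
            h[w] = max(h[w], h[v] + 1)
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return max(h.values())

vs = ["y0", "x0", "y1", "x1", "y2"]
es = [("y0", "x1"), ("x0", "y1"), ("y1", "x1"), ("x1", "y2")]
graph_height = height(es, vs)  # longest dependency sequence: x0 -> y1 -> x1 -> y2
```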
[Figure 10 diagram: two dependency graphs over the messages
$(\mathsf{y},0,\overline{00})$, $(\mathsf{x},0,\overline{00})$,
$(\mathsf{y},1,\overline{00^{+}})$, $(\mathsf{x},1,\overline{0^{+}0^{+}})$ and
$(\mathsf{y},2,\overline{0^{+}0^{+}})$]
x = 0 = y
---
| $T_{1}$ | $T_{2}$
---|---
x.load(0) y.store(1) x.load(1) y.store(2)
Figure 10: Two possible dependency graphs for the code snippet. $T_{1},T_{2}$
are both
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads. The color of each message
$\mathsf{msg}^{{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}}$
signifies
$\mathsf{genthread}(\mathsf{msg}^{{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}})$
($T_{1}$ orange, $T_{2}$ violet, $\mathsf{init}$ gray). We denote the view as
a vector $\overline{t_{x}t_{y}}$. Since we only consider the thread adding a
message for the first time
$\mathsf{genthread}(\mathsf{y},2,\overline{0^{+}0^{+}})$ can be either $T_{1}$
(left graph) or $T_{2}$ (right graph).
Compact Computations. Unfortunately, dependency graphs may contain
exponentially many vertices (due to the views), and given the
$\mathsf{PSPACE}$-hardness in Section 7 there is no way to reduce this to
polynomial size. Yet, there are two parameters that we can reduce, the ‘fan-
in’ of each vertex $v$ (number of messages read by $\mathsf{genthread}(v)$
before generating $v$), and the ‘height’ of the dependency graph (longest
dependency sequence). A computation
$\rho^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}$
is _compact_ if its dependency graph
$\mathit{G}_{\rho^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}$
satisfies the following two bounds. (1) Every message $v$ depends on a small
number of other messages, $|\mathsf{depend}(v)|\leq\mathcal{Q}_{0}$. (2) The
dependency sequences are polynomially long, that is,
$\mathsf{height}(\mathit{G}_{\rho^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}})\leq\mathcal{Q}_{0}$.
The following lemma says that compact computations are sufficient:
###### Lemma 5.7.
Any message that can be generated in the simplified semantics, can be
generated by a compact computation.
Figure 11: Fan-in reduction: the dependency graph on the left can be converted
to the one on the right by eliminating the redundant dependency of thread
$\mathsf{genthread}(\mathsf{msg})$ on
$(\mathsf{x},\mathsf{d},[\dots,\mathsf{x}\rightarrow\mathsf{ts}_{2},\dots])$
when $\mathsf{ts}_{2}>\mathsf{ts}_{1}$.
###### Proof 5.8.
We prove both parts (fan-in and height) of this lemma by showing that if there
exists a computation whose dependency graph violates the bound for fan-in
(similarly height), then there must exist a computation whose dependency graph
has a lower fan-in (height) with the rest of the graph (fan-ins of other
vertices) unchanged. We first show this for fan-in. We will assume that the
programs
$\mathsf{c}_{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
executed by
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads have been specified as a transition system (note that we can
interconvert between the while-language and transition system representation
with only polynomial blowup). Then
$|\mathsf{c}_{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}|$
is an upper bound on the total number of transitions in all
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads together.
1. 1.
Fan-in. Suppose to the contrary, we had
$|\mathsf{depend}(v)|>2|\mathsf{Dom}||\mathsf{Var}|+|\mathsf{c}_{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}|$
for some message $v$. Consider the thread $p=\mathsf{genthread}(v)$ which
generated the message represented by vertex $v$ for the first time. There are
only $|\mathsf{Dom}||\mathsf{Var}|$ distinct (variable, value) pairs,
$|\mathsf{Dom}||\mathsf{Var}|$ many $\mathsf{init}$ messages and only
$|\mathsf{c}_{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}|$
many
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
messages
($|\mathsf{c}_{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}|$
is an upper bound on the number of transitions the
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
thread can take). Hence by a pigeonhole argument, $p$ must have read two
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
messages with the same (variable, value) pair but distinct abstract views. Let
these messages be
$m_{1}=(\mathsf{x},\mathsf{d},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{1})$
and
$m_{2}=(\mathsf{x},\mathsf{d},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{2})$
where the abstract views are unequal.
Without loss of generality assume that $p=\mathsf{genthread}(v)$ read $m_{1}$
first, and $m_{2}$ later (in order) before it generated $v$.
It can be seen that any time $p$ read $m_{2}$, it could have read $m_{1}$
instead. This follows since timestamp comparisons are irrelevant when reading
from
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
messages. The thread-local view obtained on replacing a read of $m_{2}$ with
that of $m_{1}$ will only decrease or remain the same. From the simplified
semantics, after reading $m_{1}$ once, the thread view
$\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}$
satisfies
$\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}\sqsupseteq\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{1}$
(per-variable). Hence reading from $m_{1}$ again leads to the thread view
being
${\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}}^{\prime}\sqsupseteq_{\mathsf{x}}^{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}}\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}$.
On the other hand, after reading $m_{2}$, the view will be
${\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}}^{\prime}\sqcup_{\mathsf{x}}^{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}}\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{2}$
which is clearly higher than
${\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}}^{\prime}$.
Indeed, instead of reading from $m_{2}$, the loading thread can read from
$m_{1}$, resulting in a lower view for $\mathsf{x}$ (compared to reading from
$m_{2}$).
Let $\rho^{\prime}$ denote the subcomputation starting from the position right
after reading from the
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
message $m_{2}$. We can see that if we replace this read operation by reading
from $m_{1}$, we can continue on $\rho^{\prime}$ as before.
* •
Indeed, all store operations on $\rho^{\prime}$ are independent of this load
from $m_{1}$ (or $m_{2}$).
* •
Consider a load operation along $\rho^{\prime}$. A load on a variable
$\mathsf{y}\neq\mathsf{x}$ is not affected clearly. Consider now a load on
$\mathsf{x}$ performed by loading some message $m_{3}$. Assume the load is
performed by $p$. The view of $\mathsf{x}$ along $\rho^{\prime}$ for thread
$p$ was coming from $m_{2}$, which was at least that given by $m_{1}$; indeed, if
loading from $m_{3}$ was possible in $\rho^{\prime}$ when the view on
$\mathsf{x}$ was at least $\mathsf{vw}_{2}(\mathsf{x})$, it definitely is
possible now with a lower view on $\mathsf{x}$.
* •
Lastly, consider a CAS operation on variable $\mathsf{x}$, along
$\rho^{\prime}$. Assume the load was made from $m_{2}$. If
$\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}(\mathsf{x})=\mathsf{ts}_{2}^{+}$
in $m_{2}$, then the CAS operation will add a new message $m_{3}$ on
$\mathsf{x}$ with $\mathsf{vw}_{3}(\mathsf{x})=\mathsf{ts}_{2}+1$. However,
note that the same thread can still perform CAS by reading from $m_{1}$, with
$\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}(\mathsf{x})=\mathsf{ts}_{1}^{+}$
in $m_{1}$ (with $\mathsf{ts}_{1}^{+}<\mathsf{ts}_{2}^{+}$) by adding a new
message $m_{3}$ on $\mathsf{x}$ with
$\mathsf{vw}_{3}(\mathsf{x})=\mathsf{ts}_{1}+1$.
Hence reading from $m_{1}$ instead of $m_{2}$ does not affect the
subcomputation $\rho^{\prime}$. We can therefore eliminate all reads of
$m_{2}$ to decrease $|\mathsf{depend}(v)|$. Thus, $|\mathsf{depend}(v)|\leq
2|\mathsf{Dom}||\mathsf{Var}|+|\mathsf{c}_{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}|$
for each vertex $v$.
Figure 12: Height compression: the dependency graph on the left can be
converted to the one on the right by allowing
$\mathsf{genthread}(\mathsf{msg})$ to directly read from
$(\mathsf{x},\mathsf{d},[\dots,\mathsf{x}\rightarrow\mathsf{ts}_{1},\dots])$.
We note that $\mathsf{ts}_{1}\leq\mathsf{ts}_{2}$, making the new dependency
graph a graph of some valid computation. This prospectively reduces the
height of the graph.
2. 2.
Height. Let there be a dependency sequence of length greater than
$2|\mathsf{Dom}||\mathsf{Var}|+|\mathsf{c}_{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}|$.
There are only $|\mathsf{Dom}||\mathsf{Var}|$ (variable, value) pairs,
$|\mathsf{Dom}||\mathsf{Var}|$ many $\mathsf{init}$ messages and at most
$|\mathsf{c}_{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}|$
many
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
messages. Hence by a pigeonhole argument, for a dependency sequence longer
than
$2|\mathsf{Dom}||\mathsf{Var}|+|\mathsf{c}_{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}|$
there exists a (variable, value) pair $(\mathsf{x},\mathsf{d})$ such that
there are two
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
messages
$m_{1}=(\mathsf{x},\mathsf{d},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{1})$
and
$n_{1}=(\mathsf{x},\mathsf{d},\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{2})$
along it. Without loss of generality, let $n_{1}$ be an ancestor of $m_{1}$.
So, $n_{1}$ has been read before generating $m_{1}$. Then we must have
$\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{1}\sqsupseteq\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{2}$
by the RA Semantics (since the thread generating $m_{1}$ indirectly
accumulates the view of $n_{1}$). Then the thread reading from (depending-
upon) $m_{1}$ could have directly read from $n_{1}$ instead (note that since
$m_{1}$ itself depends on $n_{1}$, by the time $m_{1}$ has been generated,
$n_{1}$ must have been as well). By reading from $n_{1}$ its view may only
decrease or remain the same thus not affecting the run (as justified above).
Thus we can eventually reduce the dependency sequences so that all have length
at most
$2|\mathsf{Dom}||\mathsf{Var}|+|\mathsf{c}_{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}|$.
This gives us the result.
In $\mathsf{Cache}$ Datalog, the inference of an atom $\mathsf{g}$ from the
program $\mathsf{Prog}$ involves a sequence of applications of the Add (to
$\mathsf{Cache}$) and Drop (from $\mathsf{Cache}$) rules that ends with
$\mathsf{g}$ being inferred. Such a sequence for
$\mathsf{Prog}\vdash\mathsf{g}$ corresponds to a run
$\rho^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}$
under the simplified RA semantics. We show that this follows from the structure
of the query instance $(\mathsf{Prog},\mathsf{g})$. The run
$\rho^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}$
can be compacted to
${\rho^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}^{\prime}$
by Lemma 5.7. From the dependency graph of
${\rho^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}^{\prime}$
we can read off an inference strategy that keeps the $\mathsf{Cache}$ size
polynomial in $|\mathsf{Var}|,|\mathsf{Dom}|$ and
$|\mathsf{c}_{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}|$.
The following lemma, Lemma 5.9, formalizes this argument; together with Lemma
5.7 it gives Proposition 5.5 and will lead to the coveted
$\mathsf{PSPACE}$ bound.
Since the term
$2|\mathsf{Dom}||\mathsf{Var}|+|\mathsf{c}_{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}|$
will occur repeatedly, we denote it by the quantity $\mathcal{Q}_{0}$.
From here on,
$\mathcal{Q}_{0}=2|\mathsf{Dom}||\mathsf{Var}|+|\mathsf{c}_{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}|$.
###### Lemma 5.9 (Datalog Inference Strategy).
Let $\mathcal{A}\mathsf{lgo}$ generate the query instance
$(\mathsf{Prog},\mathsf{g})$. The inference for
$\mathsf{Prog}\vdash\mathsf{g}$ implies the existence of an execution
$\rho^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}$
under the simplified semantics, which can be compacted to
${\rho^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}^{\prime}$.
The computation
${\rho^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}^{\prime}$
can be mapped back to a new inference sequence such that
$\mathsf{Prog}\vdash_{k}\mathsf{g}$ for
$k\in\mathcal{O}(\mathcal{Q}_{0}^{2})$.
###### Proof 5.10.
This lemma has two parts: (1) it states that computations in the simplified
semantics and inference sequences in the $\mathsf{Cache}$ Datalog program are
related and (2) it says that compact computations can be mapped to an
inference sequence with a small $\mathsf{Cache}$ size.
Let $(\mathsf{Prog},\mathsf{g})$ be generated by the procedure
$\mathcal{A}\mathsf{lgo}$ with $\mathsf{Prog}\vdash\mathsf{g}$. We need to
show that $\mathsf{g}$ can also be inferred from $\mathsf{Prog}$ with a small
$\mathsf{Cache}$. Recall that when generating the Datalog program
$\mathsf{Prog}$, the procedure $\mathcal{A}\mathsf{lgo}$ guesses the
computations of the
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
processes. Consider some inference sequence for
$\mathsf{Prog}\vdash\mathsf{g}$. For each application of an inference rule in
the sequence, we can find a corresponding transition of a thread in the
simplified semantics. This follows from the invariants in section 5.1.2. Hence
we can convert the sequence of inferences to a run
$\rho^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}$.
This run in turn can be compacted by the arguments in Lemma 5.7, to get a
smaller run
${\rho^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}^{\prime}$.
Now we need to see how this compact run implies the existence of an inference
sequence with a smaller $\mathsf{Cache}$. To do this we consider the
dependency graph of
${\rho^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}^{\prime}$.
We proceed by induction on the height of messages in the dependency graph. We
strengthen the statement and show that for every message
$\mathsf{msg}^{{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}}$
at a height given by
$\mathsf{height}(\mathsf{msg}^{{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}})=h$,
we have
$\mathsf{Prog}\vdash_{k}\mathsf{emp}(\mathsf{msg}^{{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}})$
($\mathsf{Prog}\vdash_{k}\mathsf{dmp}(\mathsf{msg}^{{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}})$)
for $k=h\times\mathcal{Q}_{0}$. The lemma follows by the definition of
compactness, which guarantees
$h\leq\mathsf{height}(\mathit{G}_{\rho^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}})\leq\mathcal{Q}_{0}$.
The base case is trivial, since all messages in
$\mathsf{sink}(\mathit{G}_{\rho^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}})$
are facts in the Datalog program $\mathsf{Prog}$. We now show the inductive
case for a message
$v\in\mathit{G}_{\rho^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}$
at height $h+1$. The messages $v^{\prime}$ in $\mathsf{depend}(v)$ have height
at most $h$. The inductive hypothesis thus yields
$\mathsf{Prog}\vdash_{h\mathcal{Q}_{0}}v^{\prime}$. We infer these messages
one at a time, store them in the $\mathsf{Cache}$, and discard all atoms in
the $\mathsf{Cache}$ used for the inference of the $v^{\prime}$. Hence at each
step in the inference sequence, the $\mathsf{Cache}$ contains a subset of
$\mathsf{depend}(v)$ which has already been inferred, and, additionally, some
atoms which are currently being used for the inference of the next member of
$\mathsf{depend}(v)$. The former is bounded by $\mathcal{Q}_{0}$ by the
compactness (Lemma 5.7, $|\mathsf{depend}(v)|<\mathcal{Q}_{0}$) while the
latter is bounded by $h\mathcal{Q}_{0}$ by the induction hypothesis. Together
the size of the $\mathsf{Cache}$ never exceeds $(h+1)\mathcal{Q}_{0}$. Thus by
reusing the space in the $\mathsf{Cache}$ to infer members of
$\mathsf{depend}(v)$, we only require an additional space of
$\mathcal{Q}_{0}$. At the end of this process, the size of the
$\mathsf{Cache}$ equals $|\mathsf{depend}(v)|$ and the space consumption of
the dependencies is at most
$\ \underbrace{(\mathcal{Q}_{0}-1)}_{\text{bound on }|\mathsf{depend}(v)|}+\underbrace{h\mathcal{Q}_{0}}_{\text{inductive hypothesis for next atom at height }h}=(h+1)\mathcal{Q}_{0}-1$
Now we want to infer the message corresponding to $v$, having inferred and
inserted into $\mathsf{Cache}$ atoms corresponding to messages from
$\mathsf{depend}(v)$. This inference of $v$ from the messages in
$\mathsf{depend}(v)$ requires us to simulate the run of
$\mathsf{genthread}(v)$ using the rules of the Datalog program (by mapping
each transition executed by $\mathsf{genthread}(v)$ to its corresponding rule
from the Datalog program). We note that at all points in the simulation it
suffices to store exactly one extra atom either of $\mathsf{etp}$ or of
$\mathsf{dtp}$ (depending upon the type of $\mathsf{genthread}(v)$)
corresponding to the local state of $\mathsf{genthread}(v)$. The additional
atom can be accommodated along with $\mathsf{depend}(v)$ since
$|\mathsf{depend}(v)|+1<(h+1)\mathcal{Q}_{0}$ (since
$|\mathsf{depend}(v)|<\mathcal{Q}_{0}$).
Hence a $\mathsf{Cache}$ of size at most
$2(h+1)(|\mathsf{Dom}||\mathsf{Var}|+|\mathsf{c}_{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}|)$
is sufficient, and by induction the lemma follows.
Lemma 5.7, together with the compact inference sequences of Lemma 5.9, shows
that for all the query instances generated by $\mathcal{A}\mathsf{lgo}$,
inference is possible if and only if it is possible with a small
$\mathsf{Cache}$. This shows Proposition 5.5, giving us
$\mathsf{PSPACE}$-membership.
## 6 Safety Verification with Leader
In this section our goal is to support compositional verification methods
prominent in program logics and thread-modular reasoning style algorithmic
verification. Such approaches focus on a single thread and study its
interaction with others.
We extend the system from section 5 by adding a single distinguished ‘ego’
thread, which we refer to as the leader, denoted by the symbol
${\color[rgb]{0.0,0.42,0.24}\definecolor[named]{pgfstrokecolor}{rgb}{0.0,0.42,0.24}\mathsf{ldr}}$.
Amongst the $n$
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads only the
${\color[rgb]{0.0,0.42,0.24}\definecolor[named]{pgfstrokecolor}{rgb}{0.0,0.42,0.24}\mathsf{ldr}}$
can execute loops, while the others, as in Section 5, are required to be
loop-free.
The environment once again consists of arbitrarily many identical
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads that are required to be cas-free. We can represent this as
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}(\mathsf{nocas})\parallel{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}_{1}\parallel{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}_{2}(\mathsf{acyc})\parallel\dots\parallel{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}_{n}(\mathsf{acyc})$
which we refer to as the leader setting.
Note that the simplified semantics presented in Section 4 applies here. This
allows us to leverage Theorem 4.7 by which we can operate on the simplified
semantics instead. The main challenge of this section then is to go from the
simplified semantics in the presence of a leader to an $\mathsf{NEXPTIME}$
verification technique, by means of a small model argument.
### 6.1 Dependency Analysis
As discussed before, the safety verification problem amounts to solving the
message generation problem (MG) (section 5.1). Let the goal message be denoted
$\mathsf{msg}^{\\#}$.
We demonstrate that the simplified semantics helps in solving the problem.
Our main finding is that message generation has short witness computations
(assuming the domain is finite). The proof of Theorem 6.1 is in Section 6.3.
###### Theorem 6.1.
In the leader setting, a message can be generated in the simplified semantics
if and only if it can be generated by a computation of length at most
exponential in the input specification,
$|\mathsf{c}_{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}|\cdot|\mathsf{c}_{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}|\cdot|\mathsf{Reg}|\cdot|\mathsf{Dom}|\cdot|\mathsf{Var}|$.
###### Corollary 6.2.
In the leader setting, the message generation problem for RA is in
$\mathsf{NEXPTIME}$.
We establish the result in two steps. First we show that every computation in
the simplified semantics has a “backbone”, which is made up solely by some
threads called _essential threads_ (Lemma 6.4). Then we show how to truncate
this backbone to obtain a short computation (Section 6.2).
Analyzing Dependencies in the Dependency Graph. The following study of
dependencies generalizes the one in Section 5.2. In a computation of the
simplified semantics, messages from the
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads have unique timestamps whereas messages from
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads may have identical timestamps. We recall
$\mathsf{genthread}(\mathsf{msg})$, the thread which first generated message
$\mathsf{msg}$, and the _dependency set_ of a message $\mathsf{msg}$, denoted
by $\mathsf{depend}(\mathsf{msg})$ as defined earlier in Section 5.2.
We define $\mathsf{depend}(\mathsf{msg})=\emptyset$ for initial messages. We
write $\mathsf{depend}^{*}(\mathsf{msg})$ for the reflexive and transitive
closure of $\mathsf{depend}$, the smallest set containing $\mathsf{msg}$ and
such that for all $\mathsf{msg}^{\prime}\in\mathsf{depend}^{*}(\mathsf{msg})$
we have
$\mathsf{depend}(\mathsf{msg}^{\prime})\subseteq\mathsf{depend}^{*}(\mathsf{msg})$.
Similar to Lemma 5.7, we now show that we can focus on computations where any
write event directly depends on a small number of other events, and where
dependency sequences are short. The main difference with Section 5.2 is that
since the leader has loops, we cannot a priori bound executions w.r.t.
$|\mathsf{c}_{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}|$.
Keeping this in mind, we provide an alternative notion for compact
computations.
Compact Computations. We call a computation $\rho$ _compact_ if for every
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
message $\mathsf{msg}\in\mathsf{depend}^{*}(\mathsf{msg}^{\\#})$ in the
computation (1)
$|\mathsf{depend}(\mathsf{msg})\cap\mathsf{Msgs}(\rho\\!\downarrow_{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}})|\leq|\mathsf{Dom}||\mathsf{Var}|$
and (2) for every $\mathsf{msg}^{\prime}\neq \mathsf{msg}$ from
$\mathsf{depend}^{*}(\mathsf{msg})\cap\mathsf{Msgs}(\rho\\!\downarrow_{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}})$
either the variable or the value differs from that of $\mathsf{msg}$. The first
point addresses the situation where an
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
thread reads two messages with the same variable and value but different
views: it says that the thread could have chosen to read one of the messages
twice. The second point says there is no need to generate two
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
messages with the same variable and value along a dependency sequence. A
thread reading the second message could equally well read the first message,
since the $\mathsf{ts}^{\textbf{+}}$ timestamp for
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
messages makes it available forever.
###### Lemma 6.3.
In the leader setting, if the message $\mathsf{msg}^{\\#}$ can be generated in
the simplified semantics, then it can be generated by a compact computation.
In a compact computation, both fan-in (size of $\mathsf{depend}$ set) and
depth (along a dependency sequence) of
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
messages is $\mathcal{O}(|\mathsf{Dom}||\mathsf{Var}|)$ since there are only
as many distinct (variable, value) pairs. Hence
$\mathcal{O}((|\mathsf{Dom}||\mathsf{Var}|)^{|\mathsf{Dom}||\mathsf{Var}|})$
many
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
messages are sufficient to generate $\mathsf{msg}^{\\#}$. Our goal is to
derive a similar bound on
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
messages. First, we consider the
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
messages read by
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads, i.e. the
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$-${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
reads-from dependencies. The
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$-${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
dependencies will be handled later.
Essential Messages and Threads. Given a computation $\rho$ in the simplified
semantics, the set of _essential messages_ for generating message $\mathsf{msg}$,
denoted by $\mathsf{edepend}(\mathsf{msg})$, is the smallest set that includes
$\mathsf{msg}$ and is closed as follows.
1. 1.
$\forall$ messages
$\mathsf{msg}^{\prime}\in\mathsf{edepend}(\mathsf{msg})\cap\mathsf{Msgs}(\rho\\!\downarrow_{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}})$
we have
$\mathsf{depend}(\mathsf{msg}^{\prime})\subseteq\mathsf{edepend}(\mathsf{msg})$.
2. 2.
$\forall$
$\mathsf{msg}^{\prime}\in\mathsf{edepend}(\mathsf{msg})\cap\mathsf{Msgs}(\rho\\!\downarrow_{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}})$
we have
$\mathsf{depend}(\mathsf{msg}^{\prime})\cap\mathsf{Msgs}(\rho\\!\downarrow_{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}})\subseteq\mathsf{edepend}(\mathsf{msg})$.
Note the asymmetry: for the
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads we track all dependencies, while for the
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads we only track the dependencies from
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$.
For a computation $\rho$, the _essential threads_ are the threads generating
essential messages of $\mathsf{msg}^{\\#}$ for the first time, together with
the set of
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads;
$\mathsf{ethread}(\rho)\\!=\\{\mathsf{genthread}(m){\mid}m{\in}{\mathsf{edepend}(\mathsf{msg}^{\\#})\\}}\cup{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$.
We claim that projecting $\rho$ to _essential threads_ yields a valid
computation in the simplified semantics. Essential messages thus form the
backbone of the computation mentioned above. We now give the proof of Lemma
6.4 and Corollary 6.6.
###### Lemma 6.4.
If $\rho$ is a computation in the simplified semantics, so is
$\rho\\!\downarrow_{\mathsf{ethread}(\rho)}$.
###### Proof 6.5.
To prove this lemma it suffices to show that there is no thread in
$\mathsf{ethread}(\rho)$ that reads from some thread
$\mathsf{t}^{\prime}\not\in\mathsf{ethread}(\rho)$. Then we can simply project
away the threads not in $\mathsf{ethread}(\rho)$ and all the reads-from
dependencies will still be respected.
This follows trivially from the definition of
$\mathsf{edepend}(\leavevmode\nobreak\ )$. Indeed we have that for an
essential
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
thread $\mathsf{t}$ the messages (and hence threads) that $\mathsf{t}$ reads
from are also essential. All
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads are essential by definition. Additionally, for any
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
thread, we add all its
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
dependencies to the essential set. The set $\mathsf{ethread}(\rho)$ is then
closed under reads-from dependencies and hence the computation
$\rho\\!\downarrow_{\mathsf{ethread}(\rho)}$ is valid under RA.
Now we bound the number of essential messages. Essential
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
messages (and essential
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads) are at most exponentially many, bounded by
$\mathcal{Q}_{1}=(|\mathsf{Dom}||\mathsf{Var}|)^{|\mathsf{Dom}||\mathsf{Var}|}$
using the earlier compactness argument. We show that the number of essential
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
messages is bounded as well. Firstly, each
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
thread has a state space (control-state, registers) bounded by
$\mathcal{Q}_{2}=|\mathsf{c}_{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}||\mathsf{Dom}|^{|\mathsf{Reg}|}$.
Given the earlier bound on total number of essential
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
messages (and hence those by a single thread), an
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
thread run of length greater than
$\mathcal{O}(\mathcal{Q}_{1}\mathcal{Q}_{2})$ implies that there will exist a
sub-run in which (1) no essential message was generated and (2) the thread
revisited the same local state twice. We can truncate this sub-sequence since
the absence of essential messages implies that external reads-from
dependencies are not affected. Hence the computation for a single
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
is $\mathcal{Q}_{1}\mathcal{Q}_{2}$-bounded. Given the $\mathcal{Q}_{1}$-bound
on
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads, the total number of
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
messages consumed by the
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads can be at most $\mathcal{Q}_{1}^{2}\mathcal{Q}_{2}$. Hence
exponentially many essential
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
messages suffice.
###### Corollary 6.6.
Let the goal message $\mathsf{msg}^{\\#}$ be generated in a computation of
system $\mathsf{c}$. Then for some compact computation,
$|\mathsf{edepend}(\mathsf{msg}^{\\#})|$ is at most exponential in
$|\mathsf{c}|$.
###### Proof 6.7.
Recall the notation $\mathsf{genthread}(\mathsf{msg})$ which refers to a
thread which generated the message $\mathsf{msg}$ for the first time. In the
following, if $t=\mathsf{genthread}(\mathsf{msg})$, we also refer to $t$ as
the “first writer” of $\mathsf{msg}$.
First we observe that
$\mathsf{edepend}(\mathsf{msg})\subseteq\mathsf{depend}^{*}(\mathsf{msg})$. Hence
in particular, we have
$\mathsf{edepend}(\mathsf{msg}^{\\#})\cap\mathsf{Msgs}(\rho\\!\downarrow_{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}})\subseteq\mathsf{depend}^{*}(\mathsf{msg}^{\\#})\cap\mathsf{Msgs}(\rho\\!\downarrow_{{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}})$.
This is shown to be at most exponential ($\mathcal{O}(\mathcal{Q}_{1})$) by
Lemma 6.3, since both the height and the
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
fan-in of the dependency graph restricted to
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
is polynomial. Given that each essential message is generated for the first
time by a unique essential thread, the number of essential
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads is also bounded by $\mathcal{O}(\mathcal{Q}_{1})$.
Now, consider the fragment $\rho^{\prime}$ of the computation between two
consecutive first-writes (first points of generation) of two essential
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
messages. Now if any
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
thread performs more than
$\mathcal{O}(\mathcal{Q}_{2})=\mathcal{O}(|\mathsf{c}_{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}||\mathsf{Dom}|^{|\mathsf{Reg}|})$
many transitions within $\rho^{\prime}$, it would imply that there are two
configurations $\mathsf{lcf}_{1},\mathsf{lcf}_{2}$ within $\rho^{\prime}$ at
which the local-states of the thread (modulo view) are identical. This
follows since
$|\mathsf{c}_{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}|$
is the program size and $|\mathsf{Dom}|^{|\mathsf{Reg}|}$ is the number of
distinct register valuations. Additionally note that the view at
$\mathsf{lcf}_{1}$ cannot be greater than that at $\mathsf{lcf}_{2}$
(monotonicity of views in RA). Hence we can simply truncate the sub-
computation between $\mathsf{lcf}_{1}$ and $\mathsf{lcf}_{2}$ while keeping
the computation still valid under RA (the thread with lower view can still
perform all its remaining transitions). In this truncation no essential
messages will be lost and hence the reads-from dependencies will be respected.
To explain further, suppose to the contrary that some thread $\mathsf{t}$
which is the first writer of an essential message executed more than
$\mathcal{Q}_{1}\mathcal{Q}_{2}$ many transitions
$\mathsf{lcf}_{0}\mathsf{lcf}_{1}\cdots\mathsf{lcf}_{l}$. Since the total
number of essential messages is only $\mathcal{O}(\mathcal{Q}_{1})$, there
must exist a subsequence $\sigma$ such that no essential
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
messages were generated (for the first time) in $\sigma$. Additionally, since
the state-space of each thread is $\mathcal{O}(\mathcal{Q}_{2})$, by a
pigeonhole argument, it follows that two local configurations
$\mathsf{lcf}_{i},\mathsf{lcf}_{j}$ of $\mathsf{t}$ in $\sigma$ are equal. We
can simply truncate the fragment of the run between these configurations since
no essential messages have been generated for the first time.
Then it suffices for each first writer
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
thread to take at most $\mathcal{O}(\mathcal{Q}_{1}\mathcal{Q}_{2})$ many
transitions and consequently read at most exponentially many
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
messages. Recall that the
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
messages that are read by first writers of essential
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
messages are essential themselves. Since the number of essential
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads which are first writers itself is bounded by
$\mathcal{O}(\mathcal{Q}_{1})$, the number of essential
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
messages is bounded by $\mathcal{O}(\mathcal{Q}_{1}^{2}\mathcal{Q}_{2})$,
which is exponential in the input. Since
$\mathsf{edepend}(\mathsf{msg}^{\\#})$ is a union of essential
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
and
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
messages we get the exponential bound on essential messages.
Combined with Lemma 6.4, the corollary says it is sufficient to focus on
computations with at most exponentially many essential threads and essential
messages. We now want to bound the computation of the
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads.
### 6.2 Short Witnesses
The computation truncation idea as applied to
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads earlier does not apply to the leader. Recall the asymmetry in the
definition of essential dependencies; we did not include the
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$-${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
load dependencies. The dependencies come in two forms: (1) those involving
(either as message writer or as reader) some non-leader
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
thread and (2)
${\color[rgb]{0.0,0.42,0.24}\definecolor[named]{pgfstrokecolor}{rgb}{0.0,0.42,0.24}\mathsf{ldr}}$-${\color[rgb]{0.0,0.42,0.24}\definecolor[named]{pgfstrokecolor}{rgb}{0.0,0.42,0.24}\mathsf{ldr}}$
dependencies. The former are poly-sized owing to the loop-free nature of the
non-leader
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads. Hence, we focus on
${\color[rgb]{0.0,0.42,0.24}\definecolor[named]{pgfstrokecolor}{rgb}{0.0,0.42,0.24}\mathsf{ldr}}$-${\color[rgb]{0.0,0.42,0.24}\definecolor[named]{pgfstrokecolor}{rgb}{0.0,0.42,0.24}\mathsf{ldr}}$
dependencies. For a memory
$\mathsf{m}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}$,
let
$\mathsf{m}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}\\!\downarrow_{{\color[rgb]{0.0,0.42,0.24}\definecolor[named]{pgfstrokecolor}{rgb}{0.0,0.42,0.24}\mathsf{ldr}}}$
be the set of
${\color[rgb]{0.0,0.42,0.24}\definecolor[named]{pgfstrokecolor}{rgb}{0.0,0.42,0.24}\mathsf{ldr}}$
messages in it. Assuming
$\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}$
is the view of the
${\color[rgb]{0.0,0.42,0.24}\definecolor[named]{pgfstrokecolor}{rgb}{0.0,0.42,0.24}\mathsf{ldr}}$,
let
$\mathsf{selfRead}(\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}},\mathsf{m}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}})$
denote the $(\mathsf{x},\mathsf{d})$ pairs in messages of
$\mathsf{m}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}\\!\downarrow_{{\color[rgb]{0.0,0.42,0.24}\definecolor[named]{pgfstrokecolor}{rgb}{0.0,0.42,0.24}\mathsf{ldr}}}$
which can be read by
${\color[rgb]{0.0,0.42,0.24}\definecolor[named]{pgfstrokecolor}{rgb}{0.0,0.42,0.24}\mathsf{ldr}}$.
###### Definition 6.8.
$\mathsf{selfRead}(\mathsf{vw}^{\mathsf{de}},\mathsf{m}^{\mathsf{de}})=\{(\mathsf{x},\mathsf{d})\;\mid\;(\mathsf{x},\mathsf{d},\mathsf{vw}_{1}^{\mathsf{de}})\in\mathsf{m}^{\mathsf{de}}\!\downarrow_{\mathsf{ldr}},\ \mathsf{vw}^{\mathsf{de}}(\mathsf{x})=\mathsf{vw}_{1}^{\mathsf{de}}(\mathsf{x})\}$.
We note that a pair $(\mathsf{x},\mathsf{d})$ is in $\mathsf{selfRead}$ precisely when this pair is the last store by the $\mathsf{ldr}$ on $\mathsf{x}$ after which $\mathsf{vw}^{\mathsf{de}}(\mathsf{x})$ has not changed. Observe that there can be at most $|\mathsf{Var}|^{|\mathsf{Dom}|}$ many distinct $\mathsf{selfRead}$ functions.
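For illustration, Definition 6.8 can be sketched directly in Python. All encodings here are hypothetical: views are dictionaries from variables to timestamps, and the leader's messages are (variable, datum, view) triples.

```python
def self_read(vw, mem_ldr):
    """Compute selfRead(vw, m): the (x, d) pairs among the leader's own
    messages whose recorded view agrees with the current view on x."""
    return {(x, d) for (x, d, vw1) in mem_ldr if vw[x] == dict(vw1)[x]}

vw = {"x": 2, "y": 0}
mem_ldr = {
    ("x", 1, (("x", 2), ("y", 0))),   # last ldr store on x: its view agrees on x
    ("x", 0, (("x", 1), ("y", 0))),   # older store: the timestamp on x has moved on
}
assert self_read(vw, mem_ldr) == {("x", 1)}
```

Only the most recent leader store per variable survives the view-agreement test, which matches the remark above that $\mathsf{selfRead}$ keeps the last store after which the view has not changed.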
Consider a sub-computation of the leader between two generations of essential
messages. We call configurations
$\mathsf{cf}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{1}$
and
$\mathsf{cf}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{2}$
_${\color[rgb]{0.0,0.42,0.24}\definecolor[named]{pgfstrokecolor}{rgb}{0.0,0.42,0.24}\mathsf{ldr}}$
-equivalent_ if (1) the local configurations of the leader coincide except for
the views
$\mathsf{vw}_{1}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}$
resp.
$\mathsf{vw}_{2}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}$
and (2) the memories
$\mathsf{m}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{1}$
and
$\mathsf{m}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{2}$
satisfy
$\displaystyle\mathsf{selfRead}(\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{1},\mathsf{m}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{1})=\mathsf{selfRead}(\mathsf{vw}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{2},\mathsf{m}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{2})$
Then the computation of the leader between
$\mathsf{cf}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{1}$
and
$\mathsf{cf}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}_{2}$
can be projected away while retaining a computation in the simplified
semantics. Since there are only
$\mathcal{O}(|\mathsf{c}_{\color[rgb]{0.0,0.42,0.24}\definecolor[named]{pgfstrokecolor}{rgb}{0.0,0.42,0.24}\mathsf{ldr}}|(|\mathsf{Reg}||\mathsf{Var}|)^{|\mathsf{Dom}|})$
many distinct configurations that are not
${\color[rgb]{0.0,0.42,0.24}\definecolor[named]{pgfstrokecolor}{rgb}{0.0,0.42,0.24}\mathsf{ldr}}$-equivalent,
after projecting away the redundant part, the leader will have an at most
exponentially long computation between generation of two consecutive essential
messages. Given the exponential bound on the number of essential messages, we see that, post projection, the leader computation is reduced to exponential size. Combined with the argument for the $\mathsf{env}$ and non-leader $\mathsf{dis}$ threads, this gives Theorem 6.1. Note that the resulting non-deterministic algorithm does not run in polynomial space, as there may be exponentially many essential $\mathsf{ldr}$ messages which need to be generated concurrently with the $\mathsf{env}$ threads.
### 6.3 Theorem 6.1: $\mathsf{NEXPTIME}$-membership of safety verification in the leader case
We now move on to Theorem 6.1. It suffices to show that we only need to
consider computations of exponential length in order to verify safety
properties of a parameterized system under the simplified semantics in the
leader case. For this, we show exponential bounds on the
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
and
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
components of the computation.
We have already seen that for the essential $\mathsf{env}$ threads, $\mathcal{O}(\mathcal{Q}_{1}^{2}\mathcal{Q}_{2})$ is an upper bound on the number of transitions they need to make. This bound also applies to the number of essential $\mathsf{dis}$ messages. Note that the non-leader $\mathsf{dis}$ threads are loop-free, and hence their number of transitions is polynomial in $|\mathsf{c}_{\mathsf{dis}}|$. Hence we now focus on computations of the leader. We denote by $\mathcal{Q}_{3}=|\mathsf{c}_{\mathsf{ldr}}|(|\mathsf{Reg}||\mathsf{Var}|)^{|\mathsf{Dom}|}$ a bound on the number of distinct (non-equivalent) leader configurations, and use it in the proof below.
For the $\mathsf{ldr}$, we need to maintain more state (compared to the $\mathsf{env}$ threads) to ensure that the truncated run is valid, since we also want to capture $\mathsf{ldr}$-$\mathsf{ldr}$ dependencies. The $\mathsf{selfRead}$ function does precisely this: at each point in the run it tracks the set of $\mathsf{ldr}$ messages that can be read by the $\mathsf{ldr}$ itself.
Assume once again that there is a (super-exponential) leader computation with
length greater than
$\mathcal{O}(\mathcal{Q}_{1}^{2}\mathcal{Q}_{2}\mathcal{Q}_{3})$. Then since
$\mathcal{O}(\mathcal{Q}_{1}^{2}\mathcal{Q}_{2})$ is a bound on the number of
total
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
essential messages (and in particular essential
${\color[rgb]{0.0,0.42,0.24}\definecolor[named]{pgfstrokecolor}{rgb}{0.0,0.42,0.24}\mathsf{ldr}}$
messages), there must exist a sub-computation of the
${\color[rgb]{0.0,0.42,0.24}\definecolor[named]{pgfstrokecolor}{rgb}{0.0,0.42,0.24}\mathsf{ldr}}$
of length greater than $\mathcal{O}(\mathcal{Q}_{3})$ that is free of
essential message generation. Let this sub-computation be
$\mathsf{lcf}_{1}\mathsf{lcf}_{2}\cdots\mathsf{lcf}_{l}$. Assume the memory
states along this sub-computation to be
$\mathsf{m}_{1}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}\mathsf{m}_{2}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}\dots\mathsf{m}_{l}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}$.
We augment each configuration $\mathsf{lcf}_{i}$ with the respective memory
state
$\mathsf{m}_{i}^{{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}}$
obtaining an _augmented configuration_ as explained below. We augment a configuration $\mathsf{lcf}=(\mathsf{c},\mathsf{rv},\mathsf{vw}^{\mathsf{de}})$ with the set $\mathsf{selfRead}(\mathsf{vw}^{\mathsf{de}},\mathsf{m}^{\mathsf{de}})$. That is, given $\mathsf{lcf}_{i}=(\mathsf{c}_{i},\mathsf{rv}_{i},\mathsf{vw}^{\mathsf{de}}_{i})$, augmentation with $\mathsf{selfRead}(\mathsf{vw}^{\mathsf{de}}_{i},\mathsf{m}_{i}^{\mathsf{de}})$ yields the augmented state $\langle\mathsf{c}_{i},\mathsf{rv}_{i},\mathsf{selfRead}(\mathsf{vw}^{\mathsf{de}}_{i},\mathsf{m}_{i}^{\mathsf{de}})\rangle$.
Now, $\mathsf{selfRead}$ can take at most $|\mathsf{Var}|^{|\mathsf{Dom}|}$ many values, while the leader local state (modulo view) can take only $|\mathsf{c}_{\mathsf{ldr}}||\mathsf{Reg}|^{|\mathsf{Dom}|}$ values. By a pigeonhole argument, this implies the existence of a pair $i,j$ such that $\langle\mathsf{lcf}_{i},\mathsf{selfRead}(\mathsf{vw}^{\mathsf{de}}_{i},\mathsf{m}_{i})\rangle$ and $\langle\mathsf{lcf}_{j},\mathsf{selfRead}(\mathsf{vw}^{\mathsf{de}}_{j},\mathsf{m}_{j})\rangle$ are equivalent.
Now, the view of the
${\color[rgb]{0.0,0.42,0.24}\definecolor[named]{pgfstrokecolor}{rgb}{0.0,0.42,0.24}\mathsf{ldr}}$
thread is monotonic. This implies that if for $i\neq j$ we have
$\langle\mathsf{c}_{i},\mathsf{rv}_{i},\mathsf{selfRead}(\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}_{i},\mathsf{m}_{i})\rangle=\langle\mathsf{c}_{j},\mathsf{rv}_{j},\mathsf{selfRead}(\mathsf{vw}^{{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{d}}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{e}}}_{j},\mathsf{m}_{j})\rangle$
then the sub-computation between $i$ and $j$ may be truncated, so that the run $\mathsf{lcf}_{1}\cdots\mathsf{lcf}_{i}\mathsf{lcf}_{j+1}\cdots\mathsf{lcf}_{l}$ is also a valid run of the thread. Moreover, the truncation does not affect other threads, since once again no essential messages are lost.
Hence, for any super-exponential (order greater than $\mathcal{O}(\mathcal{Q}_{1}^{2}\mathcal{Q}_{2}\mathcal{Q}_{3})$) leader computation, there exists a shorter computation which also preserves reachability. Thus, for safety verification it suffices to consider runs of at most exponential length, immediately giving an $\mathsf{NEXPTIME}$ upper bound.
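The truncation step above can be sketched abstractly: scan the run and, whenever an (equivalent) augmented state repeats, splice out the segment between its two occurrences. A minimal sketch, with augmented states abstracted to arbitrary hashable values:

```python
def truncate_run(states):
    """Splice out segments between repeated augmented states, keeping a
    run that visits each distinct state at most once."""
    seen = {}   # state -> index of its first occurrence in `out`
    out = []
    for s in states:
        if s in seen:
            # drop everything after the first occurrence of s
            del out[seen[s] + 1:]
            # forget states that belonged to the spliced-out segment
            seen = {t: i for t, i in seen.items() if i <= seen[s]}
        else:
            seen[s] = len(out)
            out.append(s)
    return out

run = ["a", "b", "c", "b", "d"]
assert truncate_run(run) == ["a", "b", "d"]
```

The output length is bounded by the number of distinct augmented states, mirroring the pigeonhole bound $\mathcal{O}(\mathcal{Q}_{3})$ used in the proof.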
## 7 Limits of Semantic Simplification I: $\mathsf{PSPACE}$-hardness of
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}(\mathsf{nocas},\mathsf{acyc})$
We show that the applications of semantic simplification to the loop-free and
leader settings are tight, and further simplification is not possible.
Having shown that safety verification of
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}(\mathsf{nocas})\parallel{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}_{1}(\mathsf{acyc})\parallel\cdots\parallel{\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}_{n}(\mathsf{acyc})$
is in $\mathsf{PSPACE}$, we give a matching lower bound. For the lower bound,
it suffices to consider the variant with no
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads and loop-free
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads,
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}(\mathsf{nocas},\mathsf{acyc})$.
In fact, this result captures the inherent complexity of Parameterized RA, termed $\mathsf{PureRA}$, i.e., RA in its simplest form. The simplicity of $\mathsf{PureRA}$ comes from (1) disallowing registers, and (2) allowing stores to write only the value 1, with the memory initialized to 0. We obtain $\mathsf{PSPACE}$-hardness even in this reduced form, which is surprising given that even the full form is in $\mathsf{PSPACE}$. Notice that $\mathsf{PSPACE}$-hardness with registers is trivial, since $\mathsf{PSPACE}$ computations can be encoded in valuations of registers.
### 7.1 Pure RA
In this section, we elaborate on the $\mathsf{PSPACE}$-hardness of checking
safety properties of parameterized systems under RA in the absence of
${\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}\mathsf{dis}}$
threads (and loop-free, cas-free
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}$
threads), which we can denote as
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}(\mathsf{nocas},\mathsf{acyc})$.
In fact, we investigate the inherent complexity of RA by removing all extra frills, namely registers and arbitrary data domains. What remains is $\mathsf{Pure}$ RA, that is, RA in its simplest form: threads use no registers, and the only writes allowed store the value 1 to a shared variable, with the memory initialized to 0, so that the data domain is $\{0,1\}$. Remarkably, we obtain $\mathsf{PSPACE}$-hardness even in this reduced form, matching the $\mathsf{PSPACE}$ upper bound for the full form from Section 5. Notice that $\mathsf{PSPACE}$-hardness with registers is trivial, since computations can be encoded in register operations themselves.
$\begin{array}[]{rl}\mathsf{c}&=\mathsf{c}_{\mathrm{AG}}\oplus\mathsf{c}_{\mathrm{SATC}}\oplus\mathsf{c}_{\mathrm{FE[0]}}\oplus\cdots\oplus\mathsf{c}_{\mathrm{FE[n-1]}}\oplus\mathsf{c}_{\mathrm{assert}}\\
&\quad\mathsf{choose}(u)=(t_{u}\coloneqq 1)\oplus(f_{u}\coloneqq 1)\\
\mathsf{c}_{\mathrm{AG}}&=\mathsf{choose}(u_{0});\mathsf{choose}(e_{1});\mathsf{choose}(u_{1});\cdots;\mathsf{choose}(u_{n});(s\coloneqq 1)\\
\mathsf{c}_{\mathrm{SATC}}&=\mathsf{assume}\;{(s=1)};\mathsf{check}(\Phi);\\
&\quad((\mathsf{assume}\;{(t_{u_{n}}=0)};a_{n,1}\coloneqq 1)\oplus(\mathsf{assume}\;{(f_{u_{n}}=0)};a_{n,0}\coloneqq 1))\\
\mathsf{c}_{\mathrm{FE[i]}}&=\mathsf{assume}\;{(a_{i+1,0}=1)};\mathsf{assume}\;{(a_{i+1,1}=1)};(\mathsf{assume}\;{(f_{e_{i+1}}=0)}\oplus\mathsf{assume}\;{(t_{e_{i+1}}=0)});\\
&\quad((\mathsf{assume}\;{(t_{u_{i}}=0)};a_{i,1}\coloneqq 1)\oplus(\mathsf{assume}\;{(f_{u_{i}}=0)};a_{i,0}\coloneqq 1))\\
\mathsf{c}_{\mathrm{assert}}&=\mathsf{assume}\;{(a_{0,0}=1)};\mathsf{assume}\;{(a_{0,1}=1)};\mathsf{assert}\;{\texttt{false}}\end{array}$

Figure 13: The parametrized system used in the reduction
#### 7.1.1 A QBF Encoding
To show the $\mathsf{PSPACE}$-hardness of checking safety properties of
parameterized systems of the class
${\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\mathsf{env}}(\mathsf{nocas},\mathsf{acyc})$,
we establish a reduction from the canonical $\mathsf{PSPACE}$-complete
problem, $\mathsf{QBF}$. The $\mathsf{QBF}$ problem is described as follows.
Given a quantified boolean formula $\Psi=\forall u_{0}\exists e_{1}\forall u_{1}\exists e_{2}\cdots\exists e_{n}\forall u_{n}\ \Phi(u_{0},e_{1},\dots,u_{n})$ over the variables $Vars(\Psi)=\{u_{0},\dots,u_{n},e_{1},\dots,e_{n}\}$, decide whether $\Psi$ is true. $\Psi$ has $n+1$ universally quantified variables and $n$ existentially
quantified variables. To establish the reduction, we construct an instance of
the parametrized reachability problem for RA (in fact $\mathsf{Pure}$ RA)
consisting of the parametrized system $\mathsf{c}$, such that $\mathsf{c}$ is
unsafe if and only if the $\mathsf{QBF}$ instance is true. We assume that the
$\mathsf{QBF}$ instance $\Psi$ is as given above and now detail the
construction.
The program $\mathsf{c}$ executed by the $\mathsf{env}$ threads (given in Figure 13) consists of several functions (sub-programs), of which each thread non-deterministically executes one:
$\mathsf{c}=\mathsf{c}_{\mathrm{AG}}\oplus\mathsf{c}_{\mathrm{SATC}}\oplus\mathsf{c}_{\mathrm{FE[0]}}\oplus\cdots\oplus\mathsf{c}_{\mathrm{FE[n-1]}}\oplus\mathsf{c}_{\mathrm{assert}}$
### 7.2 Infrastructure
Gadgets used. The task of checking the truth of $\Psi$ is distributed over the $\mathsf{env}$ threads executing these functions. Each function has a particular role; we term the functions gadgets and now describe them.
* $\mathsf{c}_{\mathrm{AG}}$
The _Assignment Guesser_ guesses a possible satisfying assignment for
$Vars(\Psi)$.
* $\mathsf{c}_{\mathrm{SATC}}$
The _SATisfiability Checker_ checks satisfiability of $\Phi$ w.r.t. an
assignment guessed by $\mathsf{c}_{\mathrm{AG}}$.
* $\mathsf{c}_{\mathrm{FE[i]}}$
The $\forall\exists$ (ForallExists) _Checker_ at level $i$, $0\leq i\leq n-1$
($\mathsf{c}_{\mathrm{FE[i]}}$) verifies that the $(i+1)$th quantifier
alternation $\forall u_{i}\exists e_{i+1}$ is respected by the guessed
assignments. This proceeds in levels, where the check function at level $i+1$,
$\mathsf{c}_{\mathrm{FE[i+1]}}$ ‘triggers’ the check function at level $i$,
$\mathsf{c}_{\mathrm{FE[i]}}$, till we have verified that all assignments
satisfying $\Phi$ constitute the truth of $\Psi$.
* $\mathsf{c}_{\mathrm{assert}}$
The Assertion Checker reaches the $\mathsf{assert}\;{\texttt{false}}$
instruction when all the previous functions act as intended, implying that the
formula was true.
Due to the parameterization, an arbitrary number of threads may execute the
different functions at the same time. However, there is no interference
between threads, and there is a natural order between the roles:
$\mathsf{c}_{\mathrm{SATC}}$ requires $\mathsf{c}_{\mathrm{AG}}$ to function
as intended, and $\mathsf{c}_{\mathrm{FE[i]}}$ requires the functions
$\mathsf{c}_{\mathrm{AG}}$, $\mathsf{c}_{\mathrm{SATC}}$ and
$\mathsf{c}_{\mathrm{FE[j]}}$, $n-1\geq j>i$.
Shared Variables. We use the following set of shared variables in
$\mathsf{c}$: For each $x\in Vars(\Psi)$, we have boolean shared variables
$t_{x}$ and $f_{x}$ in $\mathsf{c}$. These variables represent true and false
assignments to $x$ using the respective boolean variables in a way that is
explained below. All the shared variables used are boolean, and the initial
value of all variables is 0. We also have a special (boolean) variable $s$.
Encoding variable assignments of $\Psi$: the essence of the construction.
Recall that the messages in the memory are of the form
$(\mathsf{x},\mathsf{d},\mathsf{vw})$ where $\mathsf{x}$ is a shared variable,
$\mathsf{d}\in\\{0,1\\}$, and $\mathsf{vw}$ is a view. To begin, the views of
all variables are assigned time stamp 0. An assignment to the variables in
$\Psi$ can be read off from the $\mathsf{vw}$ of a message $(s,1,\mathsf{vw})$
in the memory state. For $v\in Vars(\Psi)$, if $\mathsf{vw}(t_{v})=0$, then
$v$ is considered to have been assigned true, while if $\mathsf{vw}(f_{v})=0$,
then $v$ is assigned false. Our construction, explained below, ensures that
exactly one of the shared variables $t_{v},f_{v}$ will have time stamp 0 in
the view of the message $(s,1,\mathsf{vw})$. The zero/non-zero timestamps of
variables $t_{x}$ and $f_{x}$ in the view of $(s,1,\mathsf{vw})$ can be used
to check satisfiability of $\Phi$ since only a thread with a zero timestamp
can read the initial message on the corresponding variable.
Checking a single clause. As an example, consider the $i^{th}$ clause
$e_{1}\lor\lnot u_{3}\lor u_{5}$. The satisfiability check is implemented in a
code fragment as follows.
$\mathsf{check}(i)=(\mathsf{assume}\;{t_{e_{1}}=0})\oplus(\mathsf{assume}\;{f_{u_{3}}=0})\oplus(\mathsf{assume}\;{t_{u_{5}}=0})$
and
$\mathsf{check}(\Phi)=\mathsf{check}(1);\mathsf{check}(2);\cdots;\mathsf{check}(l)$.
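The clause check can be sketched concretely. This is a hypothetical Python illustration in which the thread view is a dict from the $t_{x}/f_{x}$ flag variables to timestamps, and $\mathsf{assume}\;(v=0)$ succeeds exactly when the view on $v$ is still 0:

```python
def assume_zero(vw, var):
    # assume(var = 0): the initial message on var is readable only if the
    # thread still views timestamp 0 on var
    return vw[var] == 0

def check_clause(vw, literals):
    # literals: list of (variable name, polarity); the clause e1 ∨ ¬u3 ∨ u5
    # becomes [("e1", True), ("u3", False), ("u5", True)]
    return any(assume_zero(vw, ("t_" if pos else "f_") + v) for v, pos in literals)

# Encoding: t_v has timestamp 0 in the view iff v was assigned true.
# Here e1 = true, u3 = true, u5 = false.
vw = {"t_e1": 0, "f_e1": 3, "t_u3": 0, "f_u3": 2, "t_u5": 4, "f_u5": 0}
assert check_clause(vw, [("e1", True), ("u3", False), ("u5", True)])
```

$\mathsf{check}(\Phi)$ then corresponds to conjoining such clause checks, as in the sequential composition $\mathsf{check}(1);\cdots;\mathsf{check}(l)$ above.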
Finally, we have the boolean variables $a_{i,0}$ and $a_{i,1}$ for
$i\in\\{0,\cdots n\\}$: these are $2(n+1)$ ‘universality enforcing’ variables
that ensure that all possible assignments to the universal variables in
$Vars(\Psi)$ have been checked.
### 7.3 The Construction
First we describe the various gadgets.
#### 7.3.1 The Gadgets
We now detail the gadgets (functions) mentioned in Figure 13.
Assignment Guesser: $\mathsf{c}_{\mathrm{AG}}$: The job of the Assignment
Guesser is to guess a possible assignment for the variables. This is done by
writing 1 to exactly one of the variables $t_{x},f_{x}$ for all $x\in
Vars(\Psi)$. Each such write is required to have a timestamp greater than 0 by
the RA semantics, and the view $\mathsf{vw}$ of the writing thread is updated
similarly. After making the assignment to all variables in $Vars(\Psi)$ as
described, the writing thread adds the message $(s,1,\mathsf{vw})$ to the
memory.
Consequently, the view $\mathsf{vw}$ of the writing thread (and hence the
message) satisfies
$\forall x\in Vars(\Phi),\quad\mathsf{vw}(t_{x})=0\oplus\mathsf{vw}(f_{x})=0$
We interpret this as: the assignment chosen for $x\in Vars(\Phi)$ is true if
$\mathsf{vw}(t_{x})=0$ and is false if $\mathsf{vw}(f_{x})=0$. The chosen
assignment is thus encoded in $\mathsf{vw}$ and hence can be incorporated by
threads loading 1 from $s$ using the message $(s,1,\mathsf{vw})$, (see
$\mathsf{c}_{\mathrm{SATC}}$). This follows since load operations of the RA
semantics cause the thread-local view to be updated by the view in the message
loaded.
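The guesser's effect on its view can be sketched as follows (timestamps are abstracted to a global counter; the encoding and all names are hypothetical). Note the inversion: choosing $x=\mathrm{true}$ means writing to $f_{x}$, so that $t_{x}$ keeps timestamp 0 in the view.

```python
import itertools

_clock = itertools.count(1)   # fresh nonzero timestamps, abstracted

def guess_assignment(assignment):
    """Return the view vw carried by the (s, 1, vw) message for a chosen
    truth assignment: writing 1 to f_x leaves vw(t_x) = 0, encoding x = true."""
    vw = {}
    for x, val in assignment.items():
        written = ("f_" if val else "t_") + x   # write to the *other* flag
        vw[written] = next(_clock)              # write bumps its timestamp
        vw[("t_" if val else "f_") + x] = 0     # untouched flag stays at 0
    return vw

vw = guess_assignment({"u0": True, "e1": False})
assert vw["t_u0"] == 0 and vw["f_u0"] > 0   # u0 = true
assert vw["f_e1"] == 0 and vw["t_e1"] > 0   # e1 = false
```

This matches the invariant displayed above: for each variable, exactly one of $\mathsf{vw}(t_{x})$, $\mathsf{vw}(f_{x})$ is 0.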
SAT Checker: $\mathsf{c}_{\mathrm{SATC}}$: The SAT Checker reads from one of
the messages of the form $(s,1,\mathsf{vw})$ generated by
$\mathsf{c}_{\mathrm{AG}}$. Using the code explained in Figure 13, it must
check that the assignment obtained using the $\mathsf{vw}$ satisfies $\Phi$.
The crucial observation is that $\mathsf{assume}\;{(t_{x}=0)}$
($\mathsf{assume}\;{(f_{x}=0)}$) being successful is synonymous with the
timestamp of $t_{x}$ ($f_{x}$) in $\mathsf{vw}$ being 0. This holds since
$\mathsf{assume}\;{(v=0)}$ requires the ability to read the initial message on
$v$ which in turn requires the thread-local view on $v$ to be 0. Timestamp of
$t_{x}$ ($f_{x}$) in $\mathsf{vw}$ itself being 0 is equivalent to $x$ being
assigned the value true (false) by $\mathsf{c}_{\mathrm{AG}}$.
Finally, it checks that either $t_{u_{n}}$ or $f_{u_{n}}$ had timestamp 0 in $\mathsf{vw}$, and correspondingly writes 1 to $a_{n,1}$ or $a_{n,0}$ (see Figure 13). Looking ahead, we note that we will enforce both of these writes to $a_{n,1}$ and $a_{n,0}$ as a way of ensuring universality for the variable $u_{n}$. The main task is to verify the ‘goodness’ of the assignments satisfying $\Phi$; one of the things to verify is that we have satisfying assignments for both values true/false of the universal variables $u_{i}$.
If the $\mathsf{assume}\;{(t_{u_{n}}=0)}$ evaluates to true in
$\mathsf{c}_{\mathrm{SATC}}$ then in the view of the message
$(s,1,\mathsf{vw})$ obtained at the end of $\mathsf{c}_{\mathrm{AG}}$,
$\mathsf{vw}(t_{u_{n}})=0$. We now need a $\mathsf{c}_{\mathrm{AG}}$ function
(executed by some thread) to make an assignment such that in the view of the
message $(s,1,\mathsf{vw})$, we have $\mathsf{vw}(f_{u_{n}})=0$, and the
formula $\Phi$ is satisfiable again. The next step is to check that these assignments, which differ in $u_{n}$, are sound with respect to the $\forall u_{n-1}\exists e_{n}$ part of $\Psi$: that is, that the assignment to $e_{n}$ is independent of that of $u_{n}$. This procedure has to be iterated with respect to all of $u_{0},u_{1},\dots,u_{n-1}$ by (1) first ensuring that $\Phi$ is satisfiable for both assignments to $u_{i}$, $0\leq i\leq n-1$, and (2) verifying that such assignments are sound with respect to the quantifier alternation in $\Psi$ for $1\leq i\leq n-1$ (that is, the choice of assignment to $e_{i}$ is independent of all variables in $\{u_{i},e_{i+1},\cdots,u_{n}\}$).
ForallExists Checker: $\mathsf{c}_{\mathrm{FE[\_]}}$: The $n$ $\forall\exists$ _Checkers_ $\mathsf{c}_{\mathrm{FE[0]}},\dots,\mathsf{c}_{\mathrm{FE[n-1]}}$ take over at this point, consuming the writes made earlier. The checker $\mathsf{c}_{\mathrm{FE[i]}}$ operates at level $i$, for $0\leq i\leq n-1$, reading 1 from the $a_{i+1,0},a_{i+1,1}$ variables and writing 1 to the $a_{i,0},a_{i,1}$ variables.
Universality Check: $\mathsf{c}_{\mathrm{FE[i]}}$ first verifies that all possible valuations of the universally quantified variable $u_{i+1}$ made $\Phi$ satisfiable: the two statements $\mathsf{assume}\;{(a_{i+1,0}=1)};\mathsf{assume}\;{(a_{i+1,1}=1)}$ verify this by reading 1 from $a_{i+1,0}$ and $a_{i+1,1}$ (note how all higher-level functions $\mathsf{c}_{\mathrm{FE[j]}}$, $j>i$, enforce this by generating a dependency tree such as the one in Figure 14).
Existentiality Check: Next, $\mathsf{c}_{\mathrm{FE[i]}}$ checks that the satisfying assignments of $\Phi$ seen so far agree on the existentially quantified variable $e_{i+1}$: the statements
$(\mathsf{assume}\;{(f_{e_{i+1}}=0)}\oplus\mathsf{assume}\;{(t_{e_{i+1}}=0)})$
check this. Assume that we have satisfying assignments of $\Phi$ which do not
agree on $e_{i+1}$. Then we have messages $(a_{i+1,0},1,\mathsf{vw}_{1})$ and
$(a_{i+1,1},1,\mathsf{vw}_{2})$ such that $\mathsf{vw}_{1}(t_{e_{i+1}})>0$
($e_{i+1}$ assigned false) but $\mathsf{vw}_{2}(t_{e_{i+1}})=0$ ($e_{i+1}$
assigned true). Now when $\mathsf{c}_{\mathrm{FE[i]}}$ reads from these
messages, its view $\mathsf{vw}$, will have both, $\mathsf{vw}(t_{e_{i+1}})>0$
and $\mathsf{vw}(f_{e_{i+1}})>0$. This will disallow
$\mathsf{c}_{\mathrm{FE[i]}}$ from executing
$(\mathsf{assume}\;{(f_{e_{i+1}}=0)}\oplus\mathsf{assume}\;{(t_{e_{i+1}}=0)})$
since the messages in the memory where $t_{e_{i+1}}$ and $f_{e_{i+1}}$ have
value 0 (and time stamp 0) cannot be read. This enforces that the choice of
the existentially quantified variable $e_{i+1}$ is independent of the choice
of the assignments made to the variables in
$\\{u_{i+1},e_{i+2},\cdots,u_{n}\\}$, and hence the proper semantics of
quantifier alternation is maintained.
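The blocking mechanism rests on RA loads joining views pointwise (taking the maximum timestamp per variable). A minimal sketch of why disagreeing assignments block both assume statements (views as plain dicts; names hypothetical):

```python
def join(vw1, vw2):
    # RA load: the thread view becomes the pointwise max with the message view
    return {v: max(vw1.get(v, 0), vw2.get(v, 0)) for v in vw1.keys() | vw2.keys()}

vw_a = {"t_e1": 5, "f_e1": 0}   # message from a run where e1 was assigned false
vw_b = {"t_e1": 0, "f_e1": 7}   # message from a run where e1 was assigned true
vw = join(vw_a, vw_b)
# Both flags are now nonzero, so neither assume(t_e1 = 0) nor
# assume(f_e1 = 0) can fire: the existentiality check blocks.
assert vw["t_e1"] > 0 and vw["f_e1"] > 0
```

Had the two messages agreed on $e_{1}$, one of the two flags would remain at 0 after the joins, and exactly one branch of the $\oplus$ would be enabled.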
Propagation: Finally, the $\mathsf{c}_{\mathrm{FE[i]}}$ function ‘propagates’ assignments to the next level, that is, to $\mathsf{c}_{\mathrm{FE[i-1]}}$, after a last verification. Let $A_{i+1,j}$ contain all assignments satisfying $\Phi$ which agree on $e_{i+1}$ and where $u_{i}$ is assigned value $j\in\{0,1\}$. Such assignments are propagated to the next level by a $\mathsf{c}_{\mathrm{FE[i]}}$ function which writes 1 to $a_{i,j}$. $\mathsf{c}_{\mathrm{FE[i-1]}}$ becomes accessible only when both $A_{i+1,0}$ and $A_{i+1,1}$ have been propagated.
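The propagation discipline can be sketched as a check over a set of written bits (a hypothetical abstraction of the $a_{i,j}$ messages, ignoring views):

```python
def fe_step(memory_bits, i, j):
    """Level-i checker: fires only when both universality bits of level i+1
    are present, then propagates bit (i, j)."""
    if ("a", i + 1, 0) in memory_bits and ("a", i + 1, 1) in memory_bits:
        memory_bits.add(("a", i, j))
        return True
    return False

bits = {("a", 2, 0), ("a", 2, 1)}       # both level-2 bits already propagated
assert fe_step(bits, 1, 0) and ("a", 1, 0) in bits
assert not fe_step(bits, 0, 1)          # level-1 bits incomplete: only a_{1,0}
```

This captures only the gating structure; in the actual construction the views carried by the $a_{i,j}$ messages additionally enforce the existentiality check.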
Figure 14: The dependency tree for the case of $\forall u_{0}\exists
e_{1}\forall u_{1}\exists e_{2}\forall u_{2}\Phi$. The same color on sibling
$\mathsf{c}_{\mathrm{FE[i]}}$ nodes indicates that the value of $e_{i+1}$ is
the same at both.
Assert Checker $\mathsf{c}_{\mathrm{assert}}$: After the $n$ $\forall\exists$
Checkers finish, the Assert Checker reads 1 from the variables $a_{0,0}$
and $a_{0,1}$ and reaches the assertion $\mathsf{assert}\;{\texttt{false}}$.
This is possible only if all the earlier functions act as intended, which in
turn is possible only if the QBF evaluates to true.
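Since the whole construction certifies this truth condition, it may help to recall the semantics of quantifier alternation concretely. The following is a standalone Python sketch of a naive QBF evaluator; it is an illustration of the semantics only and plays no role in the construction itself.

```python
# Naive recursive QBF evaluator: `prefix` is a list of (quantifier, variable)
# pairs with quantifier 'A' (forall) or 'E' (exists); `phi` is the matrix,
# given as a predicate over a variable assignment (a dict).
def eval_qbf(prefix, phi, env=None):
    env = dict(env or {})
    if not prefix:
        return phi(env)
    (q, x), rest = prefix[0], prefix[1:]
    branches = [eval_qbf(rest, phi, {**env, x: v}) for v in (0, 1)]
    return all(branches) if q == 'A' else any(branches)

# forall u0 exists e1 . (u0 == e1) is true: e1 can track u0.
assert eval_qbf([('A', 'u0'), ('E', 'e1')], lambda a: a['u0'] == a['e1'])
# Adding forall u1 with a matrix forcing u1 == 0 makes the QBF false.
assert not eval_qbf([('A', 'u0'), ('E', 'e1'), ('A', 'u1')],
                    lambda a: a['u0'] == a['e1'] and a['u1'] == 0)
```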
#### 7.3.2 Roles played by the threads
The non-deterministic branching between the gadgets above means that each
$\mathsf{env}$ thread executes exactly one of the gadgets. Together, however,
they check $\Psi$ in a distributed fashion, each thread passing on a part of
its state to the next via the loads and stores of the $a_{\\_,0/1}$ variables
mentioned above. Hence a computation that reaches the assertion requires each
thread to play a part in this tableau. We now describe this.
First, a set of $2^{n}$ threads run the $\mathsf{c}_{\mathrm{AG}}$ gadgets,
each guessing one assignment, so that all possible assignments of the
universally quantified variables are covered and the existentially quantified
variables are chosen in a way that respects the semantics of quantifier
alternation. Essentially this means that the $2^{n}$ assignments guessed
constitute a sufficient witness to the truth of $\Psi$.
Now, $2^{n}$ threads execute $\mathsf{c}_{\mathrm{SATC}}$ and check that each
of the assignments guessed (one thread checks one assignment) satisfies
$\Phi$. They produce a ‘proof’ that this check is complete by writing to the
variables $a_{n,0/1}$; this also checks that the innermost universal
quantifier is respected. At level $n-1$, $2^{n-1}$ threads execute
$\mathsf{c}_{\mathrm{FE[n-1]}}$. Each $\mathsf{c}_{\mathrm{FE[n-1]}}$ reads 1
from both $a_{n,0}$ and $a_{n,1}$ and reads 0 from exactly one of $t_{e_{n}}$
or $f_{e_{n}}$. Depending on the view read from the level below, they write 1
either to $a_{n-1,0}$ or to $a_{n-1,1}$. (Looking ahead, this corresponds to
the assignments $A_{n,0}$ and $A_{n,1}$ in the proof below.) In essence these
threads check that the last quantifier alternation ($\forall u_{n-1}\exists
e_{n}$) is respected. Then $2^{n-2}$ threads execute
$\mathsf{c}_{\mathrm{FE[n-2]}}$ at level $n-2$, reading 1 from both
$a_{n-1,1}$ and $a_{n-1,0}$, and reading 0 from exactly one of $t_{e_{n-1}}$
or $f_{e_{n-1}}$. These threads then write 1 either to $a_{n-2,0}$ or to
$a_{n-2,1}$ (representing the assignments $A_{n-1,0}$ and $A_{n-1,1}$ in the
proof below); they check that the second-to-last quantifier alternation
($\forall u_{n-2}\exists e_{n-1}$) is respected. This continues until two
threads execute $\mathsf{c}_{\mathrm{FE[0]}}$, writing 1 to $a_{0,1}$ or
$a_{0,0}$.
These two writes are read by a thread executing
$\mathsf{c}_{\mathrm{assert}}$. The views of these threads are all stitched
together by the stores and loads they perform on the variables $s$ (for
guessing assignments) and $a_{\\_,0/1}$ (for checking proper alternation).
Figure 14 illustrates how the views (in which the assignments are embedded as
described earlier) propagate through these threads for the case of the QBF
$\forall u_{0}\exists e_{1}\forall u_{1}\exists e_{2}\forall u_{2}\Phi$. The
nodes represent individual threads executing the corresponding gadget, and
the edges are labeled with the variable that a child writes to pass on its
view to its parent.
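This stitching rests on how RA reads combine timestamp information: when a thread reads a message, its view becomes the pointwise maximum of its own view and the message's. The following is a minimal sketch of that merge, our own simplification of the semantics with illustrative variable names.

```python
# Views as maps from (program) variables to timestamps; merging two views
# takes the pointwise maximum, so a variable keeps timestamp 0 only if it
# has timestamp 0 in *both* views being merged.
def merge(view_a, view_b):
    keys = view_a.keys() | view_b.keys()
    return {x: max(view_a.get(x, 0), view_b.get(x, 0)) for x in keys}

# A parent merging two child views can still certify t_x = 0 only when
# both children agree; one child having overwritten t_x breaks the check.
assert merge({'t_x': 0, 'a_10': 1}, {'t_x': 0, 'a_11': 1})['t_x'] == 0
assert merge({'t_x': 0, 'a_10': 1}, {'t_x': 3, 'a_11': 1})['t_x'] == 3
```

This is why assignments embedded in a merged view must agree on the shared existential variable, as exploited in the proofs of this section.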
###### Lemma 7.1.
$\Psi$ is true iff the $\mathsf{assert}\;{\texttt{false}}$ statement is
reachable in $\mathsf{c}$.
This gives us the main theorem:
###### Theorem 7.2.
The verification of safety properties for parameterized systems of the class
$\mathsf{env}(\mathsf{nocas},\mathsf{acyc})$
under RA is $\mathsf{PSPACE}$-hard.
### 7.4 Correctness of the Construction: Proof of Lemma 7.1
We prove that reaching $\mathsf{assert}\;{\texttt{false}}$ is possible in the
parameterized system $\mathsf{c}$ iff the QBF $\Psi$ is true. First we fix
some notation. Given the QBF $\Psi=\forall u_{0}\exists e_{1}\dots\exists
e_{n}\forall u_{n}\Phi(u_{0},e_{1},\dots,u_{n})$, we define for $0\leq i\leq
n$ the level-$i$ QBF corresponding to $\Psi$ as follows.
1. For $0\leq i\leq n-1$, the level-$i$ QBF, denoted $\Psi_{i}$, is defined as
$\Psi_{i}\equiv\forall u_{i}\exists e_{i+1}\forall u_{i+1}\exists
e_{i+2}\dots\forall u_{n}\,\exists e_{1}\exists e_{2}\dots\exists
e_{i}\,\exists u_{0}\exists u_{1}\dots\exists
u_{i-1}\,\Phi(u_{0},e_{1},\dots,u_{n})$
2. For $i=n$, the level-$n$ QBF, denoted $\Psi_{n}$, is defined as
$\Psi_{n}\equiv\forall u_{n}\,\exists e_{1}\dots\exists e_{n}\,\exists
u_{0}\exists u_{1}\dots\exists u_{n-1}\,\Phi(u_{0},e_{1},\dots,u_{n})$
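The two cases above differ only in where the alternating suffix stops, so the quantifier prefix of $\Psi_{i}$ can be generated mechanically. A small sketch follows; the 'A'/'E' pair encoding is ours, not the paper's.

```python
# Build the quantifier prefix of the level-i QBF Psi_i: the alternating
# suffix  A u_i E e_{i+1} ... A u_n  followed by existentials for the
# variables freed from the dropped prefix (e_1..e_i, then u_0..u_{i-1}).
def level_prefix(i, n):
    pre = []
    for k in range(i, n):
        pre += [('A', f'u{k}'), ('E', f'e{k + 1}')]
    pre.append(('A', f'u{n}'))
    pre += [('E', f'e{k}') for k in range(1, i + 1)]
    pre += [('E', f'u{k}') for k in range(i)]
    return pre

# Level 0 is the full alternating prefix, with no appended existentials.
assert level_prefix(0, 2) == [('A', 'u0'), ('E', 'e1'), ('A', 'u1'),
                              ('E', 'e2'), ('A', 'u2')]
```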
Note that $\Psi_{0}$ is the same as $\Psi$. To prove Lemma 7.1, we prove the
following helper lemmas. For ease of argument, we add some labels to our
gadgets and reproduce them below.
$\mathsf{choose}(u)=(t_{u}\coloneqq 0)\oplus(f_{u}\coloneqq 0)$
$\mathsf{c}_{\mathrm{AG}}=\mathsf{choose}(u_{0});\mathsf{choose}(e_{1});\mathsf{choose}(u_{1});\cdots;\mathsf{choose}(u_{n});(s\coloneqq 1)$
Figure 15: Implementation of the Assignment Guesser
$\mathsf{c}_{\mathrm{AG}}$ gadget
###### Lemma 7.3.
$\Psi_{n}$ is true iff we reach the label $\lambda_{1}$ of the
$\mathsf{c}_{\mathrm{SATC}}$ gadget (ref. Figure 16) in some thread, and the
label $\lambda_{2}$ of the $\mathsf{c}_{\mathrm{SATC}}$ gadget in some thread.
###### Lemma 7.4.
For $0\leq i\leq n-1$, $\Psi_{i}$ is true iff we reach the label
$\lambda_{3}$ in the $\mathsf{c}_{\mathrm{FE[i]}}$ gadget (ref. Figure 17) in
some thread, and the label $\lambda_{4}$ in the $\mathsf{c}_{\mathrm{FE[i]}}$
gadget in some thread.
###### Lemma 7.5.
$\mathsf{assert}\;{\texttt{false}}$ is reachable iff we reach the label
$\lambda_{3}$ in the $\mathsf{c}_{\mathrm{FE[0]}}$ gadget in some thread, and
the label $\lambda_{4}$ in the $\mathsf{c}_{\mathrm{FE[0]}}$ gadget in some
thread.
In the following, we write $\Phi$ for $\Phi(u_{0},e_{1},\dots,e_{n},u_{n})$
since the free variables of $\Phi$ are clear.
$\mathsf{c}_{\mathrm{SATC}}=\mathsf{assume}\;{(s=1)};\mathsf{check}(\Phi);\lambda_{0}:\mathsf{skip};$
$[(\mathsf{assume}\;{(t_{u_{n}}=0)};a_{n,1}\coloneqq 1;\lambda_{1}:\mathsf{skip};)\oplus(\mathsf{assume}\;{(f_{u_{n}}=0)};a_{n,0}\coloneqq 1;\lambda_{2}:\mathsf{skip};)]$
Figure 16: Implementation of the SAT Checker
$\mathsf{c}_{\mathrm{SATC}}$ gadget with labels
$\lambda_{0},\lambda_{1},\lambda_{2}$
#### Proof of Lemma 7.3
Assume $\Psi_{n}$ is true. Then there are satisfying assignments
$\alpha_{1}$ and $\alpha_{2}$, with $\alpha_{1}(u_{n})=0$ and
$\alpha_{2}(u_{n})=1$, such that $\alpha_{1},\alpha_{2}\models\Phi$. These
assignments $\alpha_{1},\alpha_{2}$ can be guessed by the
$\mathsf{c}_{\mathrm{AG}}$ gadgets in two threads, resulting in the
messages $(s,1,\mathsf{view}_{1})$ and $(s,1,\mathsf{view}_{2})$ being added
to the memory, with $\mathsf{view}_{1}(f_{u_{n}})=0$ and
$\mathsf{view}_{2}(t_{u_{n}})=0$. Correspondingly, there are
$\mathsf{c}_{\mathrm{SATC}}$ gadgets which read from these views (they read 1
from $s$) and check that $\Phi$ is satisfied using the
$\mathsf{view}_{1},\mathsf{view}_{2}$ values of $t_{x},f_{x}$ for $x\in
Vars(\Psi)$. Since both are satisfying assignments, the label $\lambda_{0}$ is
reachable in both $\mathsf{c}_{\mathrm{SATC}}$ gadgets. One of them reaches
the label $\lambda_{1}$ reading $t_{u_{n}}=0$ (using $\mathsf{view}_{2}$) and
the other reaches the label $\lambda_{2}$ reading $f_{u_{n}}=0$ (using
$\mathsf{view}_{1}$).
Conversely, assume that the label $\lambda_{1}$ of
$\mathsf{c}_{\mathrm{SATC}}$ is reachable in one thread, while the label
$\lambda_{2}$ of $\mathsf{c}_{\mathrm{SATC}}$ is reachable in another thread.
Then we know that in one thread, we have read a message
$(s,1,\mathsf{view}_{1})$, checked that $\Phi$ is satisfied using
$\mathsf{view}_{1}$, and also verified that $\mathsf{view}_{1}(t_{u_{n}})=0$,
while in another thread, we have read a message $(s,1,\mathsf{view}_{2})$,
checked that $\Phi$ is satisfied using $\mathsf{view}_{2}$, and also
verified that $\mathsf{view}_{2}(f_{u_{n}})=0$. Thus, we have two satisfying
assignments of $\Phi$: one where $u_{n}$ is assigned 1, and the other where
$u_{n}$ is assigned 0. Hence $\Psi_{n}$ is true. ∎
###### Definition 7.6.
Let $\mathsf{view}$ be a view. We say that an assignment
$\alpha:Vars(\Psi)\rightarrow\\{0,1\\}$ is embedded in $\mathsf{view}$ iff for
all $x\in Vars(\Psi)$, $\mathsf{view}(t_{x})=0\Leftrightarrow\alpha(x)=1$ and
$\mathsf{view}(f_{x})=0\Leftrightarrow\alpha(x)=0$. The term “embedded” is
used since the view also has (program) variables outside of $t_{x}$ and
$f_{x}$.
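Definition 7.6 can be transcribed directly. The sketch below assumes views are plain dictionaries with keys `t_x`/`f_x`; the representation is ours, mirroring the construction's variables.

```python
# An assignment alpha is embedded in view iff, for every variable x,
# view(t_x) = 0 <=> alpha(x) = 1  and  view(f_x) = 0 <=> alpha(x) = 0.
def is_embedded(alpha, view):
    return all((view[f"t_{x}"] == 0) == (alpha[x] == 1) and
               (view[f"f_{x}"] == 0) == (alpha[x] == 0)
               for x in alpha)

# u0 = 1 (t_u0 untouched at 0), e1 = 0 (f_e1 untouched at 0):
view = {"t_u0": 0, "f_u0": 5, "t_e1": 7, "f_e1": 0}
assert is_embedded({"u0": 1, "e1": 0}, view)
assert not is_embedded({"u0": 0, "e1": 0}, view)
```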
For $0\leq i\leq n-1$, let $\mathcal{A}_{i}$ and $\mathcal{B}_{i}$
respectively denote the sets of assignments which are _embedded_ in the
views reaching the labels $\lambda_{3},\lambda_{4}$ of the
$\mathsf{c}_{\mathrm{FE[i]}}$ gadget; for $i=n$, let them denote the sets
embedded in the views reaching the labels $\lambda_{1},\lambda_{2}$ of the
$\mathsf{c}_{\mathrm{SATC}}$ gadget. Thus, we know that
$\mathcal{A}_{n}=\\{\alpha\models\Phi\mid\alpha(u_{n})=1\\}$ and
$\mathcal{B}_{n}=\\{\alpha\models\Phi\mid\alpha(u_{n})=0\\}$.
$\mathsf{c}_{\mathrm{FE[i]}}=[\mathsf{assume}\;{(a_{i+1,0}=1)};\mathsf{assume}\;{(a_{i+1,1}=1)}];\kappa_{1}:\mathsf{skip};$
$[\mathsf{assume}\;{(f_{e_{i+1}}=0)}\oplus\mathsf{assume}\;{(t_{e_{i+1}}=0)}];\kappa_{2}:\mathsf{skip};$
$[(\mathsf{assume}\;{(t_{u_{i}}=0)};a_{i,1}\coloneqq 1;\lambda_{3}:\mathsf{skip};)\oplus(\mathsf{assume}\;{(f_{u_{i}}=0)};a_{i,0}\coloneqq 1;\lambda_{4}:\mathsf{skip};)]$
Figure 17: $\forall\exists$ Checker at level $i$,
$\mathsf{c}_{\mathrm{FE[i]}}$, with labels
$\kappa_{1},\kappa_{2},\lambda_{3},\lambda_{4}$. We have $n$ such gadgets, one
for each level $0\leq i\leq n-1$.
###### Lemma 7.7.
For $0\leq i\leq n-1$, define sets of assignments
$\mathcal{A}_{i,0}=\\{\alpha\in\mathcal{A}_{i+1}\uplus\mathcal{B}_{i+1}\mid\alpha(u_{i})=1,\alpha(e_{i+1})=0\\}$
$\mathcal{A}_{i,1}=\\{\alpha\in\mathcal{A}_{i+1}\uplus\mathcal{B}_{i+1}\mid\alpha(u_{i})=1,\alpha(e_{i+1})=1\\}$
$\mathcal{B}_{i,0}=\\{\alpha\in\mathcal{A}_{i+1}\uplus\mathcal{B}_{i+1}\mid\alpha(u_{i})=0,\alpha(e_{i+1})=0\\}$
$\mathcal{B}_{i,1}=\\{\alpha\in\mathcal{A}_{i+1}\uplus\mathcal{B}_{i+1}\mid\alpha(u_{i})=0,\alpha(e_{i+1})=1\\}$
where $\uplus$ denotes disjoint union. Then $\mathcal{A}_{i}$ is equal to one
of the sets $\mathcal{A}_{i,1}$ or $\mathcal{A}_{i,0}$. Similarly,
$\mathcal{B}_{i}$ is equal to one of the sets $\mathcal{B}_{i,1}$ or
$\mathcal{B}_{i,0}$.
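The four sets in Lemma 7.7 simply partition $\mathcal{A}_{i+1}\uplus\mathcal{B}_{i+1}$ by the values of $u_{i}$ and $e_{i+1}$. A small sketch, with assignments as dictionaries and illustrative names:

```python
# Partition a pool of assignments by (value of u_i, value of e_{i+1}):
# the cell (1, j) plays the role of A_{i,j} and (0, j) that of B_{i,j}.
def split_level(assignments, u_i, e_next):
    parts = {(uv, ev): [] for uv in (0, 1) for ev in (0, 1)}
    for a in assignments:
        parts[(a[u_i], a[e_next])].append(a)
    return parts

pool = [{"u0": 0, "e1": 0}, {"u0": 1, "e1": 1}, {"u0": 1, "e1": 0}]
parts = split_level(pool, "u0", "e1")
assert parts[(1, 1)] == [{"u0": 1, "e1": 1}]   # A_{0,1}
assert parts[(0, 1)] == []                     # B_{0,1}
```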
#### Proof of Lemma 7.7
We already know the definitions of $\mathcal{A}_{n}$ and $\mathcal{B}_{n}$.
Consider the case of $\mathcal{A}_{n-1}$ and $\mathcal{B}_{n-1}$.
By construction, to reach the label $\lambda_{3}$ of
$\mathsf{c}_{\mathrm{FE[n-1]}}$,
* (a)
we need to have reached the labels $\lambda_{1},\lambda_{2}$ of
$\mathsf{c}_{\mathrm{SATC}}$. The view (say $\mathsf{view}_{A}$) on reaching
the label $\kappa_{1}$ in $\mathsf{c}_{\mathrm{FE[n-1]}}$ has embedded
assignments from $\mathcal{A}_{n}\uplus\mathcal{B}_{n}$.
* (b)
To reach the label $\kappa_{2}$ of $\mathsf{c}_{\mathrm{FE[n-1]}}$, we need
either $f_{e_{n}}$ or $t_{e_{n}}$ to have time stamp 0 in
$\mathsf{view}_{A}$. If we had $\mathsf{view}_{A}(t_{e_{n}})>0$ and
$\mathsf{view}_{A}(f_{e_{n}})>0$, then the label $\kappa_{2}$ would not be
reachable. That is, the assignments embedded in $\mathsf{view}_{A}$ agree on
the assignment of $e_{n}$.
* (c)
To reach the label $\lambda_{3}$ in $\mathsf{c}_{\mathrm{FE[n-1]}}$, the
assignments embedded in $\mathsf{view}_{A}$ must agree on the assignment of
$u_{n-1}$, with $u_{n-1}$ assigned 1. Thus, $\mathcal{A}_{n-1}$ is obtained
from $\mathcal{A}_{n}\uplus\mathcal{B}_{n}$ by keeping those assignments
which agree on $e_{n}$ and in which $u_{n-1}$ is true.
Similarly, to reach the label $\lambda_{4}$ in $\mathsf{c}_{\mathrm{FE[n-1]}}$,
* (a)
we need to have reached the labels $\lambda_{1},\lambda_{2}$ of
$\mathsf{c}_{\mathrm{SATC}}$. The view (say $\mathsf{view}_{B}$) on reaching
the label $\kappa_{1}$ in $\mathsf{c}_{\mathrm{FE[n-1]}}$ has embedded
assignments from $\mathcal{A}_{n}\uplus\mathcal{B}_{n}$.
* (b)
To reach the label $\kappa_{2}$, we need either $f_{e_{n}}$ or $t_{e_{n}}$ to
have time stamp 0 in $\mathsf{view}_{B}$. If we had
$\mathsf{view}_{B}(t_{e_{n}})>0$ and $\mathsf{view}_{B}(f_{e_{n}})>0$, then
the label $\kappa_{2}$ would not be reachable. That is, the assignments
embedded in $\mathsf{view}_{B}$ agree on the assignment of $e_{n}$.
* (c)
To reach the label $\lambda_{4}$ in $\mathsf{c}_{\mathrm{FE[n-1]}}$, the
assignments embedded in $\mathsf{view}_{B}$ must agree on the assignment of
$u_{n-1}$, with $u_{n-1}$ assigned 0. Thus, $\mathcal{B}_{n-1}$ is obtained
from $\mathcal{A}_{n}\uplus\mathcal{B}_{n}$ by keeping those assignments
which agree on $e_{n}$ and in which $u_{n-1}$ is false.
The proof follows analogously for any
$\mathcal{A}_{i},\mathcal{B}_{i}$, using the definitions of
$\mathcal{A}_{i+1},\mathcal{B}_{i+1}$ as above. ∎
#### Proof of Lemma 7.4
We give an inductive proof, using Lemma 7.3 as the base case. As the
inductive step, assume that $\Psi_{i+1}$ is true iff we reach the label
$\lambda_{3}$ of the $\mathsf{c}_{\mathrm{FE[i+1]}}$ gadget in some thread,
and the label $\lambda_{4}$ of the $\mathsf{c}_{\mathrm{FE[i+1]}}$ gadget in
some thread (for $i+1=n$, the labels $\lambda_{1},\lambda_{2}$ of
$\mathsf{c}_{\mathrm{SATC}}$ play this role, by Lemma 7.3).
Assume $\Psi_{i}$ is true. We can write $\Psi_{i}$ as $\forall
u_{i}\exists e_{i+1}\Psi_{i+1}$. We show that there is a thread which reaches
the label $\lambda_{3}$ of the $\mathsf{c}_{\mathrm{FE[i]}}$ gadget with a
view that has $\mathcal{A}_{i}$ _embedded_ in it, and there is a thread which
reaches the label $\lambda_{4}$ of the $\mathsf{c}_{\mathrm{FE[i]}}$ gadget
with a view that has $\mathcal{B}_{i}$ _embedded_ in it.
By the inductive hypothesis, since $\Psi_{i+1}$ is true, there is a thread
which reaches the label $\lambda_{3}$ of the $\mathsf{c}_{\mathrm{FE[i+1]}}$
gadget with a view $\mathsf{view}_{A}$ that has $\mathcal{A}_{i+1}$ embedded
in it, and there is a thread which reaches the label $\lambda_{4}$ of the
$\mathsf{c}_{\mathrm{FE[i+1]}}$ gadget with a view $\mathsf{view}_{B}$ that
has $\mathcal{B}_{i+1}$ embedded in it. Note that $a_{i+1,1}$ and $a_{i+1,0}$
have been written 1 by these threads respectively, so that
$\mathsf{view}_{A}(a_{i+1,1})>0$ and $\mathsf{view}_{B}(a_{i+1,0})>0$. Thanks
to this, a thread can now take on the role of the
$\mathsf{c}_{\mathrm{FE[i]}}$ gadget. This thread begins with a view
$\mathsf{view}_{C}$ which is the merge of $\mathsf{view}_{A}$ and
$\mathsf{view}_{B}$. The label $\kappa_{1}$ of this
$\mathsf{c}_{\mathrm{FE[i]}}$ gadget is reachable by reading 1 from both
$a_{i+1,1}$ and $a_{i+1,0}$, and we want $\mathsf{view}_{C}(t_{e_{i+1}})=0$ or
$\mathsf{view}_{C}(f_{e_{i+1}})=0$. As seen in item (b) of the proof of
Lemma 7.7, this is possible only if $\mathsf{view}_{A}(t_{e_{i+1}})=0$
and $\mathsf{view}_{B}(t_{e_{i+1}})=0$, or $\mathsf{view}_{A}(f_{e_{i+1}})=0$
and $\mathsf{view}_{B}(f_{e_{i+1}})=0$.
By assumption, since $\Psi_{i}$ is true, there exist assignments from
$\mathcal{A}_{i+1}$ and $\mathcal{B}_{i+1}$ which agree on $e_{i+1}$ and
$u_{i}$. In particular, the truth of $\Psi_{i}=\forall u_{i}\exists
e_{i+1}\Psi_{i+1}$ says that we have a set of assignments
$S\subseteq\mathcal{A}_{i+1}\uplus\mathcal{B}_{i+1}$ which satisfy
$\Psi_{i}$, such that for all $\alpha\in S$, $\alpha(u_{i})=1$ and
$\alpha(e_{i+1})$ is some fixed value. Similarly, the truth of $\Psi_{i}$
also gives us a set of assignments
$S^{\prime}\subseteq\mathcal{A}_{i+1}\uplus\mathcal{B}_{i+1}$ such that for
all $\alpha\in S^{\prime}$, $\alpha(u_{i})=0$ and $\alpha(e_{i+1})$ is some
fixed value. It is easy to see that $S=\mathcal{A}_{i}$, while
$S^{\prime}=\mathcal{B}_{i}$. Thus, the truth of $\Psi_{i}$ implies the
feasibility of the assignments $\mathcal{A}_{i}$ and $\mathcal{B}_{i}$,
which in turn gives us the following.
Thus, starting with a view $\mathsf{view}_{C}$ which has the assignments
$\mathcal{A}_{i+1}\uplus\mathcal{B}_{i+1}$ embedded in it, it is possible for
a thread to
1. read 1 from $a_{i+1,1}$ and $a_{i+1,0}$ (these are present in
$\mathsf{view}_{C}$),
2. check that either $t_{e_{i+1}}$ or $f_{e_{i+1}}$ has time stamp 0 in
$\mathsf{view}_{C}$ (this is possible since the embedded assignments agree on
$e_{i+1}$),
3. check that $t_{u_{i}}$ has time stamp 0 in $\mathsf{view}_{C}$ (this is
possible since the embedded assignments assign 1 to $u_{i}$).
This ensures that the thread reaches the label $\lambda_{3}$ of
$\mathsf{c}_{\mathrm{FE[i]}}$ with a view having $\mathcal{A}_{i}$ embedded
in it (notice that the last two checks filter out $\mathcal{A}_{i}$ from
$\mathcal{A}_{i+1}\uplus\mathcal{B}_{i+1}$).
In a similar manner, starting with a view $\mathsf{view}_{C}$ which has embedded assignments $\mathcal{A}_{i+1}\uplus\mathcal{B}_{i+1}$, it is possible for a thread to
1. read 1 from $a_{i+1,1}$ and $a_{i+1,0}$ (these are present in $\mathsf{view}_{C}$),
2. check that either $t_{e_{i+1}}$ has time stamp 0 in $\mathsf{view}_{C}$ or $f_{e_{i+1}}$ has time stamp 0 in $\mathsf{view}_{C}$ (this is possible since the embedded assignments agree on $e_{i+1}$),
3. check that $f_{u_{i}}$ has time stamp 0 in $\mathsf{view}_{C}$ (this is possible since the embedded assignments are such that $u_{i}$ is assigned 0).
This ensures that the thread reaches the label $\lambda_{4}$ of $\mathsf{c}_{\mathrm{FE[i]}}$ with a view having $\mathcal{B}_{i}$ embedded in it (notice that the last two checks filter out $\mathcal{B}_{i}$ from $\mathcal{A}_{i+1}\uplus\mathcal{B}_{i+1}$).
Conversely, assume that we have two threads which have reached, respectively, labels $\lambda_{3}$ and $\lambda_{4}$ of the $\mathsf{c}_{\mathrm{FE[i]}}$ gadget with views in which $\mathcal{A}_{i}$ and $\mathcal{B}_{i}$ are embedded. We show that $\Psi_{i}$ is satisfiable.
By the definition of $\mathcal{A}_{i}$, we know that we have assignments from $\mathcal{A}_{i+1}\uplus\mathcal{B}_{i+1}$ which agree on $e_{i+1}$ and which set $u_{i}$ to 1. The fact that we reached the label $\lambda_{3}$ of the $\mathsf{c}_{\mathrm{FE[i]}}$ gadget with a view having $\mathcal{A}_{i}$ embedded in it shows that these assignments are feasible. Similarly, reaching the label $\lambda_{4}$ of $\mathsf{c}_{\mathrm{FE[i]}}$ with a view having $\mathcal{B}_{i}$ embedded in it shows that we have assignments from $\mathcal{A}_{i+1}\uplus\mathcal{B}_{i+1}$ which agree on $e_{i+1}$ and which set $u_{i}$ to 0. The existence of these two assignments proves the satisfiability of $\Psi_{i}$.
#### Proof of Lemma 7.5
Assume that we reach $\mathsf{assert}\;{\texttt{false}}$. Then we have read 1 from $a_{0,0}$ and $a_{0,1}$. These are set to 1 only when the labels $\lambda_{4},\lambda_{3}$ of $\mathsf{c}_{\mathrm{FE[0]}}$ have been visited.
The converse is similar: indeed, if we reach the labels $\lambda_{4},\lambda_{3}$ of $\mathsf{c}_{\mathrm{FE[0]}}$, we have written 1 to $a_{0,0}$ and $a_{0,1}$. This enables the reads of 1 from $a_{0,0}$ and $a_{0,1}$, leading to $\mathsf{assert}\;{\texttt{false}}$.
###### Theorem 7.8.
Parameterized safety verification for $\mathsf{env}(\mathsf{nocas},\mathsf{acyc})$ is $\mathsf{PSPACE}$-hard.
## 8 Limits of Semantic Simplification II: $\mathsf{NEXPTIME}$-hardness of $\mathsf{env}(\mathsf{nocas},\mathsf{acyc})\parallel\mathsf{dis}_{1}(\mathsf{nocas})$
In this section we show an $\mathsf{NEXPTIME}$ lower bound on the safety verification problem in the presence of a single leader $\mathsf{dis}$ thread, that is, for $\mathsf{env}(\mathsf{nocas},\mathsf{acyc})\parallel\mathsf{dis}_{1}(\mathsf{nocas})$. The lower bound is obtained in a fragment of RA which does not use registers and in which, surprisingly, $\mathsf{dis}$ also performs no compare-and-swap operations. As in the $\mathsf{PSPACE}$-hardness proof, we work with a fixed set of shared memory locations $\mathcal{X}$ (also called shared variables) over a finite data domain $\mathcal{D}$. We show the hardness via a reduction from the succinct version of 3CNF-SAT, denoted $\mathsf{SuccinctSAT}$. Following the main part of the paper, we refer to the distinguished $\mathsf{dis}$ thread as the ‘leader’ and individual threads from $\mathsf{env}$ as ‘contributors’.
### 8.1 $\mathsf{SuccinctSAT}$: succinct satisfiability
The complexity of succinct representations was studied in the pioneering work [37] for graph problems. Typically, the complexity of a problem is measured as a function of some quantity $V$, with the assumption that the input size is polynomial in $V$. If the underlying problem concerns graphs, then $V$ is the number of vertices in the graph, while if the underlying problem concerns boolean formulae, then $V$ is the size of the formula. [37] investigated the complexity of graph problems when the input has an _exponentially succinct_ representation, that is, when the input size is polylogarithmic in $|V|$, where $V$ is the set of vertices of the graph, and showed that succinct representations rendered trivially solvable graph problems NP-complete, while [58] showed that graph properties which are NP-complete under the usual representation become $\mathsf{NEXPTIME}$-complete under succinct representations.
$\mathsf{SuccinctSAT}$ is essentially an exponentially succinct encoding of a 3CNF-SAT instance. Let $\phi(x_{0},\cdots,x_{2^{n}-1})$ be a 3CNF formula with $2^{n}$ variables and $2^{n}$ clauses. Assume an $n$-bit binary address for each clause.
A succinct encoding of $\phi$ is a circuit $D(y_{1},\cdots,y_{n})$ (with size polynomial in $n$) which, on an $n$-bit input $y_{1}\cdots y_{n}$ interpreted as the binary address of a clause $c$, generates $3n+3$ bits specifying the indices of the 3 variables from $x_{0},\cdots,x_{2^{n}-1}$ occurring in clause $c$ and their signs (1 bit each). Thus, the circuit $D$ provides a complete description of $\phi(x_{0},\cdots,x_{2^{n}-1})$ when evaluated on all $n$-bit inputs. Define $\mathsf{SuccinctSAT}$ as the following $\mathsf{NEXPTIME}$-complete [58] problem.
Given a succinct description $D$ of $\phi$, check whether $\phi$ is satisfiable.
Adopting the notation above, we assume that we have been given $n$, the formula $\phi$ with $2^{n}$ boolean variables $\mathsf{BVars}=\\{x_{0},\cdots,x_{2^{n}-1}\\}$, and the succinct representation $D$ with input variables $\\{y_{1},\cdots,y_{n}\\}$. Denote the variables in clause $c$ as $\texttt{var1}(c),\texttt{var2}(c),\texttt{var3}(c)$ and their signs as $\texttt{sig1}(c),\texttt{sig2}(c),\texttt{sig3}(c)$. We denote the $n$-bit address $\bar{c}$ of a clause $c$ as a (boolean) word $\bar{c}\in\\{0,1\\}^{n}$ and commonly use the variable $\alpha$ to refer to clause addresses. We denote variable addresses also as $n$-bit (boolean) words and commonly use the variable $\beta$ to represent them. We construct an instance of the parameterized reachability problem consisting of a $\mathsf{ldr}$ leader thread running program $\mathsf{c}_{\mathsf{ldr}}$ and $\mathsf{env}$ contributor threads running program $\mathsf{c}_{\mathsf{env}}$. We show that this system is ‘unsafe’ (an $\mathsf{assert}\;{\texttt{false}}$ is reachable) if and only if the $\mathsf{SuccinctSAT}$ instance is satisfiable.
$\begin{array}[]{rl}\mathsf{c}_{\mathsf{env}}&=\mathsf{c}_{\mathsf{CL-ENC}}\oplus\mathsf{c}_{\mathsf{SAT}}\oplus\mathsf{c}_{\mathsf{Forall[0]}}\oplus\cdots\oplus\mathsf{c}_{\mathsf{Forall[n-1]}}\oplus\mathsf{c}_{\mathrm{assert}}\\\
\mathsf{choose}(u)&=(t_{u}\coloneqq 1)\oplus(f_{u}\coloneqq 1)\\\
\mathsf{c}_{\mathsf{CL-ENC}}&=\mathsf{choose}(u_{0});\mathsf{choose}(u_{1});\cdots;\mathsf{choose}(u_{n-1});s\coloneqq 1\\\
\mathsf{c}_{\mathsf{SAT}}&=(\mathsf{assume}\;{s=1});\ \mathsf{c}_{\mathsf{CV}};\ \mathsf{c}_{\mathsf{Check}};\\\
&((\mathsf{assume}\;{(t_{u_{n-1}}=0)};\ a_{n-1,1}\coloneqq 1)\ \oplus\ (\mathsf{assume}\;{(f_{u_{n-1}}=0)};\ a_{n-1,0}\coloneqq 1))\\\
\mathsf{c}_{\mathsf{Forall[i]}}&=\mathsf{assume}\;{a_{i+1,0}=1};\ \mathsf{assume}\;{a_{i+1,1}=1};\\\
&((\mathsf{assume}\;{t_{u_{i}}=0};\ a_{i,1}\coloneqq 1)\ \oplus\ (\mathsf{assume}\;{f_{u_{i}}=0};\ a_{i,0}\coloneqq 1))\\\
\mathsf{c}_{\mathrm{assert}}&=\mathsf{assume}\;{(a_{0,0}=1)};\ \mathsf{assume}\;{(a_{0,1}=1)};\ \mathsf{assert}\;{\texttt{false}}\end{array}$
Figure 18: The contributor program $\mathsf{c}_{\mathsf{env}}$ used in the reduction. The sub-routines $\mathsf{c}_{\mathsf{CV}}$ and $\mathsf{c}_{\mathsf{Check}}$ are described later.
### 8.2 Key features
The leader running program $\mathsf{c}_{\mathsf{ldr}}$ guesses an assignment to the boolean variables in $\phi$. The contributors running the program $\mathsf{c}_{\mathsf{env}}$ are tasked with checking that the assignment guessed by the leader does in fact satisfy the formula $\phi$. They do this in a distributed fashion, where each clause of $\phi$ is verified by one contributor. Then, similar to the $\mathsf{PSPACE}$-hardness proof, the program $\mathsf{c}_{\mathsf{env}}$ forces the contributors to combine the checks for individual clauses into a dependency tree, so that the root of the tree can reach an assertion failure only if all threads successfully checked their clauses under the leader’s guessed assignment. However, since all contributors run the same program, the trick is to enforce that all clauses are checked.
Gadgets. $\mathsf{c}_{\mathsf{env}}$ consists of a set of gadgets (modelled as ‘functions’ in the program), exactly one of which is non-deterministically executed by each contributor, while $\mathsf{c}_{\mathsf{ldr}}$ is the program executed by the leader.
$\mathsf{c}_{\mathsf{env}}=\mathsf{c}_{\mathsf{CL-ENC}}\oplus\mathsf{c}_{\mathsf{SAT}}\oplus\mathsf{c}_{\mathsf{Forall[0]}}\oplus\cdots\oplus\mathsf{c}_{\mathsf{Forall[n-1]}}\oplus\mathsf{c}_{\mathrm{assert}}$
Recall that the $\mathsf{PSPACE}$-hardness proof also used similar gadgets executed by the $\mathsf{env}$ threads. The gadgets in $\mathsf{c}_{\mathsf{env}}$ perform the following tasks.
* $\mathsf{c}_{\mathsf{CL-ENC}}$
guesses an $n$-bit address $\bar{c}$ of a clause $c$ in $\phi$,
* $\mathsf{c}_{\mathsf{SAT}}$
(1) acquires a clause address $\bar{c}$ generated by $\mathsf{c}_{\mathsf{CL-ENC}}$, (2) uses the circuit $D$ to obtain the indices of the variables $\texttt{var1}(c)$, $\texttt{var2}(c)$, $\texttt{var3}(c)$ in clause $c$, along with their signs (this is done by the sub-routine $\mathsf{c}_{\mathsf{CV}}$), (3) accesses the assignment made to the variables by the leader (sub-routine $\mathsf{c}_{\mathsf{Check}}$), and (4) checks that the assignment satisfies $c$.
* $\mathsf{c}_{\mathsf{Forall[i]}}$
($0\leq i\leq n-2$) together ensure that the satisfiability of all the clauses in $\phi$ has been checked. This is done by combining instantiations of $\mathsf{c}_{\mathsf{SAT}}$ in levels (similar to the proof of $\mathsf{PSPACE}$-hardness). At the $i$th level, $\mathsf{c}_{\mathsf{Forall[i]}}$ checks universality over the $i$th address bit of the clause addresses.
* $\mathsf{c}_{\mathrm{assert}}$
finally reaches $\mathsf{assert}\;{\texttt{false}}$ if all the previous functions executed faithfully, implying that the $\mathsf{SuccinctSAT}$ instance is satisfiable.
The non-deterministic branching implies that each $\mathsf{env}$ thread will only be able to execute one of these gadgets. The check for satisfiability of $\phi$ is distributed among the $\mathsf{env}$ threads, much like in the $\mathsf{PSPACE}$-hardness construction. For this distributed check, threads are allocated roles depending upon the function (gadget) they execute. Additionally, the distinguished leader thread is tasked with guessing the assignment. We now describe this.
Role of the leader. We have one leader thread which guesses a satisfying assignment for the boolean variables $\mathsf{BVars}$ as a string of writes made to a special program variable $g$. The writes made to $g$ use $n+2$ values $\mathsf{d}_{\texttt{t}},\mathsf{d}_{\texttt{f}},1,\dots,n$ in a specific order. Let the initial value of all variables in the system be $\mathsf{init}\notin\\{\mathsf{d}_{\texttt{t}},\mathsf{d}_{\texttt{f}},1,\dots,n\\}$. To illustrate a concrete example, consider the case where $n=3$. Let the guessed assignment for $\mathsf{BVars}$ be $w=\texttt{ftftttff}\in\\{\texttt{t,f}\\}^{2^{3}}$, where t denotes true and f false. Then the writes made by the leader are as below, where $\mathsf{d}_{\texttt{t}}$ and $\mathsf{d}_{\texttt{f}}$ are macros for data domain values (other than $\\{\mathsf{init},1,\cdots,n\\}$) representing true and false respectively.
$\mathsf{d}_{\texttt{f}}\ 1\ \mathsf{d}_{\texttt{t}}\ 2\ \mathsf{d}_{\texttt{f}}\ 1\ \mathsf{d}_{\texttt{t}}\ 3\ \mathsf{d}_{\texttt{t}}\ 1\ \mathsf{d}_{\texttt{t}}\ 2\ \mathsf{d}_{\texttt{f}}\ 1\ \mathsf{d}_{\texttt{f}}$
The leader alternates writing a guessed assignment for $x_{0},\dots,x_{7}$ (the $\mathsf{d}$ values) with writing a value from $\\{1,\dots,n\\}$. The values from $\\{1,\dots,n\\}$ (here $\\{1,2,3\\}$) are written in the deterministic pattern $1\ 2\ 1\ 3\ 1\ 2\ 1$, which we call a ‘binary search pattern’ with 3 values, denoted $\mathsf{BSP}(3)$ for short. $\mathsf{BSP}(n)$ is the unique word of length $2^{n}-1$ over $\\{1,\dots,n\\}$, defined inductively as follows.
$\mathsf{BSP}(1)=1,\qquad\mathsf{BSP}(n)=\mathsf{BSP}(n-1)\cdot n\cdot\mathsf{BSP}(n-1)\quad\text{for }n\geq 2$
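The inductive definition translates directly into code. The following sketch (ours, for illustration; `bsp` and `shuffle` are hypothetical helper names) builds $\mathsf{BSP}(n)$ and the perfect shuffle $\mathcal{S}(n,w)$, and reproduces the $n=3$ example above with `'t'`/`'f'` standing in for $\mathsf{d}_{\texttt{t}}$/$\mathsf{d}_{\texttt{f}}$.

```python
def bsp(n):
    """Binary search pattern: BSP(1) = 1, BSP(n) = BSP(n-1) . n . BSP(n-1)."""
    if n == 1:
        return [1]
    inner = bsp(n - 1)
    return inner + [n] + inner

def shuffle(n, w):
    """Perfect shuffle S(n, w): alternate assignment symbols with BSP(n)."""
    assert len(w) == 2 ** n
    pattern = bsp(n)
    out = [w[0]]
    for k in range(1, 2 ** n):
        out += [pattern[k - 1], w[k]]
    return out

print(bsp(3))                    # [1, 2, 1, 3, 1, 2, 1]
print(shuffle(3, "ftftttff"))    # ['f', 1, 't', 2, 'f', 1, 't', 3, 't', 1, 't', 2, 'f', 1, 'f']
```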
The assignments for $x_{0},\dots,x_{2^{n}-1}$ are interspersed alternately with the symbols of $\mathsf{BSP}(n)$ by the leader while writing to $g$. Formally, let $\mathcal{S}(n,w)=\mathsf{BSP}(n)\shuffle w$ represent the perfect shuffle (alternation) of $\mathsf{BSP}(n)$ with the guessed assignment $w\in\\{\mathsf{d}_{\texttt{t}},\mathsf{d}_{\texttt{f}}\\}^{2^{n}}$. The leader writes the word $\mathcal{S}(n,w)$ to $g$. From the example above, $\mathcal{S}(3,\texttt{ftftttff})=\mathsf{d}_{\texttt{f}}\ 1\ \mathsf{d}_{\texttt{t}}\ 2\ \mathsf{d}_{\texttt{f}}\ 1\ \mathsf{d}_{\texttt{t}}\ 3\ \mathsf{d}_{\texttt{t}}\ 1\ \mathsf{d}_{\texttt{t}}\ 2\ \mathsf{d}_{\texttt{f}}\ 1\ \mathsf{d}_{\texttt{f}}$.
We show that this shuffle sequence can be generated by the leader with a polynomial-size program.
###### Lemma 8.1.
There exists a program _$\mathsf{c}_{\mathsf{ldr}}$_, which nondeterministically chooses $w\in\\{\mathsf{d}_{\texttt{t}},\mathsf{d}_{\texttt{f}}\\}^{2^{n}}$ and generates the write sequence $\mathcal{S}(n,w)$ on a shared memory location $g$, with the size of the program growing polynomially in $n$.
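One way to see why a polynomial-size program suffices (our sketch, not the paper’s construction of $\mathsf{c}_{\mathsf{ldr}}$): position $k$ of $\mathsf{BSP}(n)$, for $1\leq k\leq 2^{n}-1$, carries the value $\nu(k)+1$, where $\nu(k)$ is the number of trailing zeros of $k$. So the leader only needs an $n$-bit counter and $n$ write branches, not the unrolled sequence of length $2^{n+1}-1$.

```python
def bsp_iterative(n):
    """BSP(n) via the ruler sequence: element k is (trailing zeros of k) + 1."""
    out = []
    for k in range(1, 2 ** n):
        tz = (k & -k).bit_length() - 1   # index of the lowest set bit of k
        out.append(tz + 1)
    return out

def bsp_recursive(n):
    if n == 1:
        return [1]
    inner = bsp_recursive(n - 1)
    return inner + [n] + inner

# The two definitions coincide for every n.
for n in range(1, 8):
    assert bsp_iterative(n) == bsp_recursive(n)
print("ok")
```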
How contributors access variable assignments, intuitively.
Each contributor wants to check a single clause, for which it needs to access the 3 variables occurring in that clause along with their signs. Since it pertains to the $\mathsf{BSP}$, we first explain this task and discuss the others (selecting a clause, acquiring a variable address and sign, etc.) later. For now we assume that the contributor has a variable $x$ with address $\beta$ and sign $\sigma$ and wants to access the assignment made to $x$ by the leader.
For boolean variable $x$, the contributor uses the $\mathsf{BSP}(n)$ pattern to locate the assignment made to $x$ by reading a subword of $\mathcal{S}(n,w)$. From program variable $g$, the contributor reads $n+1$ values $\\{\mathsf{d}_{\texttt{f}},1,\dots,n\\}$ or $\\{\mathsf{d}_{\texttt{t}},1,\dots,n\\}$ without repetitions, depending upon the sign of $x$ in the clause ($\mathsf{d}_{\texttt{f}}$ if the sign is negative, $\mathsf{d}_{\texttt{t}}$ if positive). In the running example, if a contributor wants to access $x_{2}$ from $\mathsf{d}_{\texttt{f}}\ 1\ \mathsf{d}_{\texttt{t}}\ 2\ \mathsf{d}_{\texttt{f}}\ 1\ \mathsf{d}_{\texttt{t}}\ 3\ \mathsf{d}_{\texttt{t}}\ 1\ \mathsf{d}_{\texttt{t}}\ 2\ \mathsf{d}_{\texttt{f}}\ 1\ \mathsf{d}_{\texttt{f}}$, it reads the sequence $2\ \mathsf{d}_{\texttt{f}}\ 1\ 3$. Likewise, the value of $x_{6}$ is obtained by reading $3\ 2\ \mathsf{d}_{\texttt{f}}\ 1$, while for $x_{0}$ the contributor must read $\mathsf{d}_{\texttt{f}}\ 1\ 2\ 3$. We note that for each $x\in\mathsf{BVars}$ there is a unique ‘access pattern’, which forces the thread to acquire the assignment of exactly $x$ and not of any other variable. In this search it is guided by the $\mathsf{BSP}$, which acts as an indexing mechanism: it helps the contributor narrow down unambiguously to the part of the sequence which contains the value of $x$.
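The following sketch is our reconstruction of the access pattern from the examples above (an assumption, not spelled out in the text): writing the target’s address as bits $b_{n-1}\cdots b_{0}$, the values $k$ with $b_{k-1}=1$ are read before the $\mathsf{d}$ value in decreasing order, and those with $b_{k-1}=0$ after it in increasing order. A greedy subsequence match against $\mathcal{S}(n,w)$ then lands on exactly $x_{j}$.

```python
def bsp(n):
    if n == 1:
        return [1]
    inner = bsp(n - 1)
    return inner + [n] + inner

def shuffle(n, w):
    pattern, out = bsp(n), [w[0]]
    for k in range(1, 2 ** n):
        out += [pattern[k - 1], w[k]]
    return out

def access_pattern(j, n):
    """Reconstructed pattern: high-bit values descending, then 'd', then the rest ascending."""
    before = [k for k in range(n, 0, -1) if (j >> (k - 1)) & 1]
    after = [k for k in range(1, n + 1) if not (j >> (k - 1)) & 1]
    return before + ["d"] + after

def read(stream, j, n):
    """Greedily match the pattern as a subsequence of the stream; return the 'd' symbol read."""
    got, it = None, iter(stream)
    for want in access_pattern(j, n):
        for sym in it:
            if want == "d" and isinstance(sym, str):
                got = sym      # the assignment value the thread ends up reading
                break
            if sym == want:
                break
    return got

w = "ftftttff"
s = shuffle(3, w)
assert all(read(s, j, 3) == w[j] for j in range(8))  # every variable is located correctly
print("ok")
```

For instance, `access_pattern(2, 3)` is `[2, 'd', 1, 3]` and `access_pattern(6, 3)` is `[3, 2, 'd', 1]`, matching the reads for $x_{2}$ and $x_{6}$ in the running example.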
#### 8.2.1 Formal description of contributors
The contributors check in a distributed fashion that each clause in $\phi$ is satisfied. Each contributor executes one of the functions in $\mathsf{c}_{\mathsf{env}}$. They do this as follows.
Clause Encoding: $\mathsf{c}_{\mathsf{CL-ENC}}$: A thread executing $\mathsf{c}_{\mathsf{CL-ENC}}$ nondeterministically selects a clause address $\alpha\in\\{0,1\\}^{n}$. This is done by writing 1 to either $t_{u_{i}}$ or $f_{u_{i}}$ for all $0\leq i\leq n-1$. Finally, 1 is written into a special variable $s$. The function $\mathsf{c}_{\mathsf{CL-ENC}}$ in Figure 18 describes this. The view of the message $(s,1,\mathsf{vw})$ encodes the address $\alpha$ of a clause satisfying
$\displaystyle(\mathsf{vw}(t_{u_{i}})>0\iff\alpha[i]=0)\text{ and }(\mathsf{vw}(f_{u_{i}})>0\iff\alpha[i]=1)\text{ for }0\leq i\leq n-1$
Recall that this is the same encoding technique as used in the $\mathsf{PSPACE}$-hardness proof: each bit is encoded in the view of a message. Overall, $2^{n}$ threads execute $\mathsf{c}_{\mathsf{CL-ENC}}$ to cover all the clauses in the formula.
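The encoding condition above can be mirrored as a tiny round-trip check (our sketch; views are modelled as plain timestamp maps and the helper names are hypothetical): each address bit is read off from which member of the pair $t_{u_{i}},f_{u_{i}}$ carries a positive timestamp.

```python
def encode_address(alpha_bits):
    """Build the view timestamps produced by c_CL-ENC for address bits alpha."""
    vw = {}
    for i, bit in enumerate(alpha_bits):
        # bit 0: t_{u_i} was written (positive timestamp); bit 1: f_{u_i} was written
        vw[("t", i)] = 1 if bit == 0 else 0
        vw[("f", i)] = 1 if bit == 1 else 0
    return vw

def decode_address(vw, n):
    """Recover alpha from the view: vw(t_{u_i}) > 0 iff alpha[i] = 0."""
    return [0 if vw[("t", i)] > 0 else 1 for i in range(n)]

alpha = [1, 0, 1, 1]
assert decode_address(encode_address(alpha), 4) == alpha
print("ok")
```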
Satisfaction checking (for one clause): $\mathsf{c}_{\mathsf{SAT}}$: A thread executing $\mathsf{c}_{\mathsf{SAT}}$ acquires the address $\bar{c}$ of a clause $c$ through the view $\mathsf{vw}$ by reading the message $(s,1,\mathsf{vw})$ generated by $\mathsf{c}_{\mathsf{CL-ENC}}$. This thread has to check the satisfiability of the clause with address $\bar{c}$. For this, it needs to know the 3 boolean variables $\texttt{var1}(c)$, $\texttt{var2}(c)$, $\texttt{var3}(c)$ appearing in $c$. Recall that we have been given, as part of the problem, the circuit $D$ which takes the $n$-bit address $\alpha$ of some clause as input and outputs the $3n+3$ bits corresponding to the 3 variables appearing in the clause, along with their signs. We use $D$, and the encoding of the clause address $\bar{c}$ stored in $\mathsf{vw}$, to compute $D(\alpha)$. We have a polynomial-size sub-routine $\mathsf{c}_{\mathsf{CV}}$ (CV for circuit value) that can compute the circuit value of $D$.
Circuit Value: $\mathsf{c}_{\mathsf{CV}}$: The $\mathsf{c}_{\mathsf{CV}}$ sub-routine takes the address $\alpha$ (of a clause $c$) and converts it into the index of one of the variables in $c$. Thus, in essence, $\mathsf{c}_{\mathsf{CV}}$ evaluates the circuit-value problem $D(\alpha)$ by simulating the (polynomially many) gates in $D$. The idea is to keep two boolean program variables for each node in $D$ and propagate the evaluation of nodes in the obvious way (for instance, if we have a $\wedge$ gate with input gates $g_{1},g_{2}$ evaluating to 0 and 1 respectively, then $t_{\wedge}$ will be set to 1). We now briefly explain how the circuit value can be evaluated, by taking the example of a single gate.
For each node $p$ in $D$ we use two boolean program variables, $t_{p}$ and $f_{p}$. We say that a view $\mathsf{vw}$ encodes the value at node $p$ if the following holds. We write $\mathsf{encAddr}(\mathsf{vw})$ to denote the values of the boolean variables encoded in $\mathsf{vw}$.
$(\mathsf{vw}(t_{p})>0\iff p=0)\text{ and }(\mathsf{vw}(f_{p})>0\iff p=1)$ (1)
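Under this encoding, evaluating a NAND gate amounts to writing 1 to $t_{o}$ or $f_{o}$ depending on which input marks are still at their initial timestamp. The sketch below is our deterministic rendering of that step (the actual gadget chooses a branch nondeterministically via $\oplus$ guarded by $\mathsf{assume}$s; variable names here are hypothetical) and checks it against the NAND truth table.

```python
INIT = 0  # timestamp 0: the variable was never written

def nand_step(vw):
    """One NAND-gate update on a view: per encoding (1), t marks value 0, f marks value 1."""
    vw = dict(vw)
    if vw["f_i1"] == INIT or vw["f_i2"] == INIT:
        # some input is 0, so NAND outputs 1: mark f_o
        vw["f_o"] = 1
    elif vw["t_i1"] == INIT and vw["t_i2"] == INIT:
        # both inputs are 1, so NAND outputs 0: mark t_o
        vw["t_o"] = 1
    return vw

def encode_input(b1, b2):
    """Encode input-node values b1, b2 into view timestamps per encoding (1)."""
    return {"t_i1": 1 if b1 == 0 else INIT, "f_i1": 1 if b1 == 1 else INIT,
            "t_i2": 1 if b2 == 0 else INIT, "f_i2": 1 if b2 == 1 else INIT,
            "t_o": INIT, "f_o": INIT}

for b1 in (0, 1):
    for b2 in (0, 1):
        out = nand_step(encode_input(b1, b2))
        expected = 0 if (b1 and b2) else 1
        assert (out["f_o"] > 0) == (expected == 1)
        assert (out["t_o"] > 0) == (expected == 0)
print("ok")
```

Note that only $t_{o}$ and $f_{o}$ change, matching the requirement that $\mathsf{vw}_{2}$ differ from $\mathsf{vw}_{1}$ on those variables alone.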
(Figure: a gate $G$ with input nodes $i_{1},i_{2}$ and output node $o$, with associated program variables $t_{i_{1}},f_{i_{1}}$, $t_{i_{2}},f_{i_{2}}$ and $t_{o},f_{o}$.)
Now assume a thread has a view $\mathsf{vw}_{1}$ when it wants to evaluate a
logic (NAND) gate $G$, with output node $o$ and input nodes $i_{1}$ and
$i_{2}$. We require that $\mathsf{vw}_{1}$ encodes the values of $i_{1}$ and
$i_{2}$ (the thread has evaluated the inputs of $G$) and that the thread has
not evaluated $G$ before (that is,
$\mathsf{vw}_{1}(t_{o})=\mathsf{vw}_{1}(f_{o})=0$). Assuming these
conditions hold, the thread executes instructions such that the new view
$\mathsf{vw}_{2}$ of the thread (1) differs from $\mathsf{vw}_{1}$ only on
$t_{o}$ and $f_{o}$, and (2) $\mathsf{vw}_{2}$ correctly encodes the value of $o$.
The function in Figure 19 evaluates $G$.
$\displaystyle\mathsf{c}_{\mathsf{NAND}}=\ (((\mathsf{assume}\;{f_{i_{1}}=\mathsf{init}})\oplus(\mathsf{assume}\;{f_{i_{2}}=\mathsf{init}}));f_{o}\coloneqq 1)\ \oplus\ ((\mathsf{assume}\;{t_{i_{1}}=\mathsf{init}});(\mathsf{assume}\;{t_{i_{2}}=\mathsf{init}});(t_{o}\coloneqq 1))$

Figure 19: $\mathsf{c}_{\mathsf{NAND}}$ \- encoding the evaluation
of a NAND gate in the views of threads
The main observation is that a thread can read $\mathsf{init}$ from a variable
only if its view on that variable is 0 (since there is only one
$\mathsf{init}$ message, with timestamp 0). Claim (1) holds trivially, since
only the timestamps of $t_{o}$ or $f_{o}$ may be augmented (reading
$\mathsf{init}$ does not change a timestamp). Observe that the thread can
write to $f_{o}$ only if one of $f_{i_{1}},f_{i_{2}}$ has timestamp 0, which
by the assumption implies that one of the inputs to the gate is 0. The new
view on $f_{o}$ is then greater than 0, so claim (2) holds. The case for
$t_{o}$ may be checked similarly. Since $D$ has polynomially many gates, any
thread can evaluate them in topological order, and hence eventually ends up
with the evaluation of $D(\alpha)$. Also note that since each thread relies on
its internal view, the same set of program variables
$\\{t_{p},\leavevmode\nobreak\ f_{p}\mid p\in D\\}$ may be used by all threads
(hence, crucially, the number of variables does not grow with the thread count).
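The view-based gate evaluation above can be mimicked in a few lines. The model below, with a view as a map from program variables to timestamps and `can_read_init` true exactly when the view holds timestamp 0, is our own simplification of the RA semantics, not the formal one; the nondeterministic choice of $\mathsf{c}_{\mathsf{NAND}}$ is replaced by deterministic guard checks.

```python
# A view maps program variables t_p / f_p to timestamps.  A thread can
# read init from a variable only when its view there is 0, and a write
# bumps the timestamp.  Encoding (1): view(t_p) > 0 iff p = 0, and
# view(f_p) > 0 iff p = 1.

def write(view, var):
    view[var] = view.get(var, 0) + 1     # only this timestamp is augmented

def can_read_init(view, var):
    return view.get(var, 0) == 0

def encode_input(view, p, value):
    write(view, ("t", p) if value == 0 else ("f", p))

def eval_nand(view, i1, i2, o):
    """The two branches of c_NAND; assumes both inputs already encoded."""
    # Branch 1: some input is 0 (its f-variable still shows init) => output 1
    if can_read_init(view, ("f", i1)) or can_read_init(view, ("f", i2)):
        write(view, ("f", o))            # view(f_o) > 0 encodes o = 1
    # Branch 2: both inputs are 1 (both t-variables show init) => output 0
    elif can_read_init(view, ("t", i1)) and can_read_init(view, ("t", i2)):
        write(view, ("t", o))            # view(t_o) > 0 encodes o = 0

view = {}
encode_input(view, "a", 1)
encode_input(view, "b", 1)
eval_nand(view, "a", "b", "g")
print(view.get(("t", "g"), 0) > 0)       # NAND(1, 1) = 0, so t_g is bumped
```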
###### Lemma 8.2.
There exists a sub-routine $\mathsf{c}_{\mathsf{CV}}$ that starting with the
view $\mathsf{vw}$ from $(s,1,\mathsf{vw})$, evaluates the circuit value
$D(\alpha)$, where $\alpha$ is the clause address encoded in the variables
$t_{u_{i}}$ and $f_{u_{i}}$ in $\mathsf{encAddr}(\mathsf{vw})$. Also,
$\mathsf{c}_{\mathsf{CV}}$ is polysized in $n$.
Once $D(\alpha)$ has been computed, the thread can nondeterministically choose
one of the three variables appearing in clause $c$, say
$x\in\\{\texttt{var1}(c),\texttt{var2}(c),\texttt{var3}(c)\\}$. For simplicity
we include this as a part of the routine $\mathsf{c}_{\mathsf{CV}}$ itself.
Using the address $\beta$ of the variable $x$, the contributor accesses the
assignment made by the leader to $x$ and checks whether it satisfies clause $c$.
This is done by the routine $\mathsf{c}_{\mathsf{Check}}$.
Clause Check: $\mathsf{c}_{\mathsf{Check}}$: Having acquired the address
$\beta=\beta_{n-1},\dots,\beta_{0}$ and sign $\sigma$ of variable $x$ by
executing $\mathsf{c}_{\mathsf{CV}}$, the thread checks that variable $x$
satisfies clause $c$. To faithfully access the assignment to $x$ from the
variable $g$, the $\mathsf{BSP}$ guides the thread. The ‘access pattern’ for
$x$, denoted by $\mathsf{AP}(n)$ (with $\beta,\sigma$ implicit), is recursively
defined as
$\displaystyle\text{for }0<i\leq n:\quad\mathsf{AP}(i)$
$\displaystyle=\begin{cases}i\cdot\mathsf{AP}(i-1)&\mbox{ if }\beta_{i-1}=1\\\
\mathsf{AP}(i-1)\cdot i&\mbox{ if }\beta_{i-1}=0\end{cases}$
$\displaystyle\text{for checking satisfiability:}\quad\mathsf{AP}(0)$
$\displaystyle=\begin{cases}\mathsf{d}_{\texttt{f}}&\mbox{ if }\sigma=0\\\
\mathsf{d}_{\texttt{t}}&\mbox{ if }\sigma=1\end{cases}$
For example, if $x_{6}$ (address $\beta=110$) with negative sign ($\sigma=0$)
were to be accessed, then the access pattern would be
$\mathsf{AP}(3)=3\cdot\mathsf{AP}(2)=3\cdot 2\cdot\mathsf{AP}(1)=3\cdot 2\cdot\mathsf{AP}(0)\cdot 1=3\cdot 2\cdot\mathsf{d}_{\texttt{f}}\cdot 1$;
likewise, for $x_{4}$ (address $\beta=100$) with negative sign ($\sigma=0$),
the access pattern would be
$\mathsf{AP}(3)=3\cdot\mathsf{AP}(2)=3\cdot\mathsf{AP}(1)\cdot 2=3\cdot\mathsf{AP}(0)\cdot 1\cdot 2=3\cdot\mathsf{d}_{\texttt{f}}\cdot 1\cdot 2$.
Going back to our example, suppose the sequence
$\mathsf{d}_{\texttt{f}}\ 1\ \mathsf{d}_{\texttt{t}}\ 2\ \mathsf{d}_{\texttt{f}}\ 1\ \mathsf{d}_{\texttt{t}}\ 3\ \mathsf{d}_{\texttt{t}}\ 1\ \mathsf{d}_{\texttt{t}}\ 2\ \mathsf{d}_{\texttt{f}}\ 1\ \mathsf{d}_{\texttt{f}}$
was written to $g$ by the leader. It is easy to see that the reads with the
access pattern for $x_{6}$
($3\cdot 2\cdot\mathsf{d}_{\texttt{f}}\cdot 1$)
would be successful, since $x_{6}$ had been assigned false by the leader,
while those with the pattern for $x_{4}$
($3\cdot\mathsf{d}_{\texttt{f}}\cdot 1\cdot 2$)
would fail, since $x_{4}$ was assigned true while the contributor wished to read
$\mathsf{d}_{\texttt{f}}$.
$\mathsf{AP}(0)$ is defined to ensure satisfiability of the clause:
$\mathsf{AP}(0)=\mathsf{d}_{\texttt{f}}$
iff $f_{sign}=0$ (the sign of the variable in the clause is negative) and
$\mathsf{AP}(0)=\mathsf{d}_{\texttt{t}}$
iff $t_{sign}=0$ (the sign of the variable in the clause is positive).
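The recursion and the matching reads can be sketched as follows. Treating a successful run of reads as a subsequence check against the leader's write sequence on $g$ is our own simplification; it is justified because the leader's writes to $g$ carry strictly increasing timestamps, so a thread's monotone view can follow exactly the in-order subsequences.

```python
def access_pattern(beta, sigma):
    """beta: address bits with beta[0] the LSB; sigma: sign bit."""
    ap = ["d_f" if sigma == 0 else "d_t"]            # AP(0)
    for i in range(1, len(beta) + 1):                # AP(i) from AP(i-1)
        ap = [str(i)] + ap if beta[i - 1] == 1 else ap + [str(i)]
    return ap

def reads_succeed(leader_seq, ap):
    """ap must occur, in order, among the leader's writes to g."""
    it = iter(leader_seq)
    return all(sym in it for sym in ap)              # subsequence test

# The leader's example sequence on g (x6 assigned false, x4 assigned true):
leader = ["d_f", "1", "d_t", "2", "d_f", "1", "d_t", "3",
          "d_t", "1", "d_t", "2", "d_f", "1", "d_f"]

print(access_pattern([0, 1, 1], 0))                  # x6 = 110, negative sign
print(reads_succeed(leader, access_pattern([0, 1, 1], 0)))  # succeeds
print(reads_succeed(leader, access_pattern([0, 0, 1], 0)))  # fails
```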
The above recursive formulation gives us a poly-sized sub-routine which reads
values matching the $\mathsf{AP}$ sequence. We thus have the following lemma.
###### Lemma 8.3.
There exists a sub-routine $\mathsf{c}_{\mathsf{Check}}$, which, starting with
a view $\mathsf{vw}$ encoding (in
$t_{d_{0}},f_{d_{0}},\dots,t_{d_{n-1}},f_{d_{n-1}}$ and $t_{sign},f_{sign}$)
the address and sign of boolean variable $x$ in clause $c$, terminates only if
$c$ is satisfied under the assignment to $x$ made by the leader.
Until now, a thread which reads a clause from $\mathsf{c}_{\mathsf{CL-ENC}}$
has checked its satisfiability with respect to the assignment guessed by the
leader, using the $\mathsf{c}_{\mathsf{SAT}}$ module. However, to ensure
satisfiability of $\phi$, this check must be done for all $2^{n}$ clauses.
This is done in levels $0\leq i\leq n-2$ using
$\mathsf{c}_{\mathsf{Forall[i]}}$, exactly as in the
$\mathsf{PSPACE}$-hardness proof. Finally, we reach
$\mathsf{assert}\;{\texttt{false}}$ after reading 1 from both $a_{0,0}$ and $a_{0,1}$.
However, in this case we do not have to check for alternation, but only for
universality of the assignments.
Forall Checker: $\mathsf{c}_{\mathsf{Forall[i]}}$: The
$\mathsf{c}_{\mathsf{Forall[n-2]}}$ gadget checks the ‘universality’ with
respect to the second-last bit of the clause address,
$\mathsf{c}_{\mathsf{Forall[n-3]}}$ gadget does this check with respect to the
third-last bit, and so on, till $\mathsf{c}_{\mathsf{Forall[0]}}$ does this
check for the first bit, ensuring that all clauses have been covered.
$2^{n}$ threads execute $\mathsf{c}_{\mathsf{SAT}}$, and each writes 1
to $a_{n-1,1}$ or $a_{n-1,0}$, depending on the last address bit of the
clause it checks. Next, $2^{n-1}$ threads execute
$\mathsf{c}_{\mathsf{Forall[n-2]}}$. A thread executing
$\mathsf{c}_{\mathsf{Forall[n-2]}}$ reads 1 from both $a_{n-1,0}$ and
$a_{n-1,1}$ representing two clauses whose last bits differ; this thread
checks that the second last bits in these two clauses agree: it writes 1 to
$a_{n-2,0}$ (if the second last bit is 0) or to $a_{n-2,1}$ (if the second
last bit is 1). When $2^{n-1}$ threads finish executing
$\mathsf{c}_{\mathsf{Forall[n-2]}}$, we have covered the second last bits
across all clauses. This continues with $2^{n-2}$ threads executing
$\mathsf{c}_{\mathsf{Forall[n-3]}}$. A thread executing
$\mathsf{c}_{\mathsf{Forall[n-3]}}$ reads 1 from both $a_{n-2,0}$ and
$a_{n-2,1}$ representing two clauses whose second last bits differ and checks
that the third last bits in these two clauses agree. Finally, we have 2
threads executing $\mathsf{c}_{\mathsf{Forall[0]}}$, certifying the
universality of the first address bit, writing 1 to $a_{0,0}$ and $a_{0,1}$.
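The net effect of the cascade can be summarized by a predicate characterizing when $a_{i,b}$ is writable. The brute-force formulation below is our own reformulation for illustration; the construction itself never enumerates addresses explicitly.

```python
from itertools import product

def a_settable(i, b, n, is_sat):
    """a_{i,b} is writable iff for every value of the address bits above
    position i there is a value of the bits below i such that the clause
    (high, b, low) is satisfied.  is_sat takes an n-bit tuple, MSB first."""
    return all(any(is_sat(high + (b,) + low)
                   for low in product((0, 1), repeat=i))
               for high in product((0, 1), repeat=n - 1 - i))

def assertion_reachable(n, is_sat):
    # assert false is reached after reading 1 from both a_{0,0} and a_{0,1},
    # i.e. exactly when every one of the 2**n clauses is satisfied
    return a_settable(0, 0, n, is_sat) and a_settable(0, 1, n, is_sat)

print(assertion_reachable(3, lambda addr: True))               # all satisfied
print(assertion_reachable(3, lambda addr: addr != (1, 0, 1)))  # one clause fails
```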
Assertion Checker: $\mathsf{c}_{\mathrm{assert}}$: The assertion checker
gadget in Figure 18, reads 1 from $a_{0,0},a_{0,1}$. If this is successful,
then we reach the $\mathsf{assert}\;{\texttt{false}}$.
#### 8.2.2 Compare and contrast with $\mathsf{PSPACE}$-hardness proof
As is evident there are many things common between the two proofs. We now
recapitulate the similarities and differences.
* •
In the $\mathsf{PSPACE}$-hardness proof we wanted to check the truth of the
QBF, hence guessing an assignment was not necessary. Here the leader is tasked
with guessing an assignment to the boolean variables.
* •
In the $\mathsf{PSPACE}$-hardness proof we wanted to check for quantifier
alternation in the boolean variables of $\Psi$. Here we instead check for
universality of addresses, i.e., the fact that all clauses have been checked.
This makes the $\mathsf{c}_{\mathsf{Forall[\\_]}}$ gadget a bit simpler than its
$\mathsf{c}_{\mathrm{FE[\\_]}}$ counterpart.
* •
In the $\mathsf{PSPACE}$-hardness proof, the CNF formula $\phi$ was given in a
simple form, and hence all threads executing $\mathsf{c}_{\mathsf{SAT}}$
checked the formula directly; given the exponential size of the formula, the
task was distributed between (exponentially) many threads. Here the CNF
formula is given in an encoded form, hence we had to devise the circuit-value
machinery to extract it from the succinct representation $D$.
### 8.3 Correctness of the construction
The proof of correctness is very close to that of Lemma 7.1, and some of the
terminology we use is borrowed from there. As in Section 7.4, we add some
labels in the function descriptions for ease of argument in the proof. We
describe the notation and key sub-lemmas required for the proof.
##### Notation and Interpretation of Boolean Variables involved in the
construction
* •
We denote by $\alpha_{U}$ an assignment on the (boolean) variables
$\\{u_{0},u_{1},\cdots,u_{n-1}\\}$, interpreted as the ($n$-bit) address of a
clause. Here $u_{n-1}$ is the most significant bit (MSB) and $u_{0}$ the
least significant bit (LSB). We view the assignment so generated,
$\overline{\alpha_{U}}\in\\{0,1\\}^{n}$, as an $n$-bit vector.
$\alpha_{U}(u_{i})$ gives the assignment to $u_{i}$.
* •
We denote by $\alpha_{D}$ an assignment on the (boolean) variables
$\\{d_{0},d_{1},\cdots,d_{n-1},d_{sign}\\}$, interpreted as the ($n$-bit) index
of a variable in $Vars(\Psi)$ together with one sign bit. Here $d_{n-1}$ is
the MSB and $d_{0}$ the LSB. We view the assignment,
$\overline{\alpha_{D}}\in\\{0,1\\}^{n+1}$, as an $(n+1)$-bit vector.
* •
For an assignment $\overline{\alpha_{U}}\in\mathbb{B}^{n}$,
$D_{1}(\overline{\alpha_{U}})$ (similarly $D_{2}(\overline{\alpha_{U}})$ and
$D_{3}(\overline{\alpha_{U}})$) denotes the $n+1$ bits signifying
$\texttt{var1}(\overline{\alpha_{U}}),\texttt{sig1}(\overline{\alpha_{U}})$
(respectively
$\texttt{var2}(\overline{\alpha_{U}}),\texttt{sig2}(\overline{\alpha_{U}})$
and
$\texttt{var3}(\overline{\alpha_{U}}),\texttt{sig3}(\overline{\alpha_{U}})$).
$\displaystyle\mathsf{c}_{\mathsf{SAT}}=\ (\mathsf{assume}\;{s=1});\ \mathsf{c}_{\mathsf{CV}};\ \lambda_{1}:\mathsf{skip};\ \mathsf{c}_{\mathsf{Check}};\ ((\mathsf{assume}\;{(t_{u_{n-1}}=0)};\ a_{n-1,1}\coloneqq 1)\ \oplus\ (\mathsf{assume}\;{(f_{u_{n-1}}=0)};\ a_{n-1,0}\coloneqq 1))$

Figure 20: $\mathsf{c}_{\mathsf{SAT}}$ \- acquiring a clause $c_{i}$ and
checking satisfiability of that clause, with the label $\lambda_{1}$
$\displaystyle\mathsf{c}_{\mathsf{Forall[i]}}=\ \mathsf{assume}\;{a_{i+1,0}=1};\ \mathsf{assume}\;{a_{i+1,1}=1};\ ((\mathsf{assume}\;{t_{u_{i}}=0};\ a_{i,1}\coloneqq 1;\ \lambda_{3}:\mathsf{skip})\ \oplus\ (\mathsf{assume}\;{f_{u_{i}}=0};\ a_{i,0}\coloneqq 1;\ \lambda_{4}:\mathsf{skip}))$

Figure 21: $\mathsf{c}_{\mathsf{Forall[i]}}$ at
level $i$, with the labels $\lambda_{3},\lambda_{4}$. We have $n-1$ such
gadgets, one for each level $0\leq i\leq n-2$
#### 8.3.1 Acquiring Variable Index and Sign
We observe that each thread executing a $\mathsf{c}_{\mathsf{CL-ENC}}$
function makes a (single) write to $s$, with the message $(s,1,\mathsf{view})$
where $\mathsf{view}$ has embedded in it an assignment $\alpha_{U}$. We write
$\alpha\diamond\mathsf{view}$ to denote that the assignment $\alpha$ is
embedded in $\mathsf{view}$. Now a thread $p$ executing a
$\mathsf{c}_{\mathsf{SAT}}$ function acquires the assignment $\alpha_{U}$, and
computes (non-deterministically) one amongst $D_{1}(\overline{\alpha_{U}})$,
$D_{2}(\overline{\alpha_{U}})$, $D_{3}(\overline{\alpha_{U}})$ reaching the
label $\lambda_{1}$. The correctness invariant involved is formalized in the
following lemma.
###### Lemma 8.4.
Let a thread $p$ executing the $\mathsf{c}_{\mathsf{SAT}}$ function read a
message $(s,1,\mathsf{view})$ with $\alpha_{U}\diamond\mathsf{view}$. Suppose
$p$ reaches the label $\lambda_{1}$ having computed $D_{i}$ ($i\in\\{1,2,3\\}$)
with $\alpha_{D}=D_{i}(\overline{\alpha_{U}})$, and let the view of the thread
at that point be $\mathsf{view}^{\prime}$. Then we have
$\alpha_{D}\diamond\mathsf{view}^{\prime}$.
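A minimal sketch of how an assignment is embedded in (and recovered from) a view under encoding (1); the dictionaries and helper names are ours, not the paper's formalism.

```python
# Embedding: view(t_{u_i}) > 0 encodes u_i = 0, and view(f_{u_i}) > 0
# encodes u_i = 1.  A view is modelled as a dict var -> timestamp.

def embed(alpha):
    """alpha: dict i -> bit.  Returns a view embedding the assignment."""
    view = {}
    for i, bit in alpha.items():
        view[("t", i) if bit == 0 else ("f", i)] = 1
    return view

def decode(view, n):
    """Recover the assignment of u_0 .. u_{n-1} from the view."""
    return {i: (1 if view.get(("f", i), 0) > 0 else 0) for i in range(n)}

alpha = {0: 1, 1: 0, 2: 1}
print(decode(embed(alpha), 3) == alpha)   # the round trip is lossless
```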
#### 8.3.2 Checking Satisfiability of Clause
Continuing from above, let $p$ compute $D_{i}$ in $\mathsf{c}_{\mathsf{CV}}$,
reaching $\lambda_{1}$. Then by Lemma 8.4, we have
$\alpha_{D}=D_{i}(\overline{\alpha_{U}})$ embedded in the view of the thread. Now,
using $\mathsf{c}_{\mathsf{Check}}$, $p$ checks that the clause with index
$\overline{\alpha_{U}}$ is satisfied by the $n+1$ bits $\overline{\alpha_{D}}$
representing a variable and the sign of that variable in the clause. Finally
the thread writes to one of the program variables $a_{n-1,0}$ or
$a_{n-1,1}$. The following lemma shows the correctness of this operation.
###### Lemma 8.5.
A thread $p$ can make the write $(a_{n-1,0},1)$ (similarly $(a_{n-1,1},1)$) if
and only if clause $\overline{\alpha_{U}}$ is satisfied and
$\alpha_{U}(u_{n-1})=0$ (similarly $\alpha_{U}(u_{n-1})=1$).
#### 8.3.3 Checking all Clauses
In section 8.3.1 and section 8.3.2 we have discussed how the system can check
satisfiability of a single clause. Now, we need to check that each clause is
satisfied. We do this via modules added to the $\mathsf{PSPACE}$ construction.
Towards this goal, define a _level predicate_
$\mathsf{IsSAT}(u_{n-1},u_{n-2},\cdots,u_{0})$ denoting that the clause
$\overline{\alpha_{U}}=u_{n-1}\cdots u_{1}u_{0}$ is satisfiable. Very
similarly to section 7.4, we define the following formulae:
For $0\leq i\leq n-1$,
$\Upsilon_{i}\equiv\forall u_{i}\forall u_{i+1}\dots\forall
u_{n-1}\,\exists u_{0}\dots\exists u_{i-1}\,\mathsf{IsSAT}(u_{n-1},u_{n-2},\cdots,u_{0})$
We claim the following lemma.
###### Lemma 8.6.
For $0\leq i\leq n-2$, $\Upsilon_{i}$ is true $\iff$ the labels
$\lambda_{3},\lambda_{4}$ in the gadget $\mathsf{c}_{\mathsf{Forall[i]}}$ can
be reached.
The proof of Lemma 8.6 follows along the same lines as those of Lemma 7.3
and Lemma 7.4. Finally, note that $\Upsilon_{0}$ is equivalent to the
$\mathsf{SuccinctSAT}$ instance being satisfiable. We have the following
corollary showing the correctness of the entire construction.
###### Corollary 8.7.
We can reach the $\mathsf{assert}\;{\texttt{false}}$ assertion in the
$\mathsf{c}_{\mathrm{assert}}$ gadget $\iff\Upsilon_{0}$ is true.
This gives us the main theorem.
###### Theorem 8.8.
The verification of safety properties for parametrized systems of the class
$\mathsf{env}(\mathsf{nocas},\mathsf{acyc})\parallel\mathsf{dis}(\mathsf{nocas})$
under RA is $\mathsf{NEXPTIME}$-hard.
## 9 Conclusion
Atomic CAS operations are indispensable for most practical implementations of
distributed protocols, yet they hinder verification efforts. Undecidability
of safety verification in the non-parameterized setting [1], and even in the
loop-free parameterized setting $\mathsf{env}(\mathsf{acyc})$,
is a testament to this.
We tried to reconcile the two by studying the effects of allowing restricted
access to CAS operations in parameterized systems. Systems which prevent the
$\mathsf{env}$ threads from performing CAS operations and allow only either
(1) loop-free $\mathsf{dis}$ programs or (2) loop-free $\mathsf{dis}$
programs along with a single (‘ego’) program with loops lead to accessible
complexity bounds. The simplified semantics based on a timestamp abstraction
provides the infrastructure for these results. The $\mathsf{PSPACE}$-hardness
result gives an insight into the core complexity of RA ($\mathsf{PureRA}$)
that stems from the consistency mechanisms of view-joining and timestamp
comparisons.
We conclude with some interesting avenues for future work. A problem arising
from this work is the decidability of CAS-free parameterized systems,
$\mathsf{env}(\mathsf{nocas})\parallel\mathsf{dis}_{1}(\mathsf{nocas})\parallel\cdots\parallel\mathsf{dis}_{n}(\mathsf{nocas})$,
which seems to be as elusive as its non-parameterized twin
$\mathsf{dis}_{1}(\mathsf{nocas})\parallel\cdots\parallel\mathsf{dis}_{n}(\mathsf{nocas})$.
We believe that the ideas in this paper can be adapted to causally consistent
shared memory models [50] as well as transactional programs [15] in the
parameterized setting. On the practical side, the Datalog encoding suggests
the development of a tool, considering that Horn-clause solvers are
state-of-the-art in program verification.
## References
* [1] P. A. Abdulla, J. Arora, M. F. Atig, and S. N. Krishna. Verification of programs under the release-acquire semantics. In PLDI, pages 1117–1132. ACM, 2019.
* [2] Parosh Aziz Abdulla, Mohamed Faouzi Atig, Ahmed Bouajjani, Egor Derevenetc, Carl Leonardsson, and Roland Meyer. On the state reachability problem for concurrent programs under power. In Chryssis Georgiou and Rupak Majumdar, editors, Networked Systems - 8th International Conference, NETYS 2020, Marrakech, Morocco, June 3-5, 2020, Proceedings, volume 12129 of Lecture Notes in Computer Science, pages 47–59. Springer, 2020. doi:10.1007/978-3-030-67087-0\\_4.
* [3] Parosh Aziz Abdulla, Mohamed Faouzi Atig, Ahmed Bouajjani, K. Narayan Kumar, and Prakash Saivasan. Deciding reachability under persistent x86-tso. Proc. ACM Program. Lang., 5(POPL):1–32, 2021. doi:10.1145/3434337.
* [4] Parosh Aziz Abdulla, Mohamed Faouzi Atig, Ahmed Bouajjani, and Tuan Phong Ngo. A load-buffer semantics for total store ordering. Log. Methods Comput. Sci., 14(1), 2018. doi:10.23638/LMCS-14(1:9)2018.
* [5] Parosh Aziz Abdulla, Mohamed Faouzi Atig, and Rojin Rezvan. Parameterized verification under tso is pspace-complete. Proc. ACM Program. Lang., 4(POPL), December 2019. doi:10.1145/3371094.
* [6] Parosh Aziz Abdulla and Bengt Jonsson. Verifying programs with unreliable channels. In Proceedings of the Eighth Annual Symposium on Logic in Computer Science (LICS ’93), Montreal, Canada, June 19-23, 1993, pages 160–170. IEEE Computer Society, 1993. doi:10.1109/LICS.1993.287591.
* [7] Mustaque Ahamad, Gil Neiger, James E. Burns, Prince Kohli, and Phillip W. Hutto. Causal memory: Definitions, implementation, and programming. Distributed Comput., 9(1):37–49, 1995. doi:10.1007/BF01784241.
* [8] Jade Alglave, Luc Maranget, and Michael Tautschnig. Herding cats: Modelling, simulation, testing, and data mining for weak memory. ACM Trans. Program. Lang. Syst., 36(2), July 2014. doi:10.1145/2627752.
* [9] Jade Alglave, Luc Maranget, and Michael Tautschnig. Herding cats: Modelling, simulation, testing, and data mining for weak memory. ACM Trans. Program. Lang. Syst., 36(2):7:1–7:74, 2014.
* [10] Rajeev Alur, Kenneth L. McMillan, and Doron A. Peled. Model-checking of correctness conditions for concurrent objects. Inf. Comput., 160(1-2):167–188, 2000.
* [11] Mohamed Faouzi Atig, Ahmed Bouajjani, Sebastian Burckhardt, and Madanlal Musuvathi. On the verification problem for weak memory models. In Manuel V. Hermenegildo and Jens Palsberg, editors, Proceedings of the 37th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL 2010, Madrid, Spain, January 17-23, 2010, pages 7–18. ACM, 2010. doi:10.1145/1706299.1706303.
* [12] Mohamed Faouzi Atig, Ahmed Bouajjani, Sebastian Burckhardt, and Madanlal Musuvathi. What’s decidable about weak memory models? In Helmut Seidl, editor, Programming Languages and Systems - 21st European Symposium on Programming, ESOP 2012, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2012, Tallinn, Estonia, March 24 - April 1, 2012. Proceedings, volume 7211 of Lecture Notes in Computer Science, pages 26–46. Springer, 2012. doi:10.1007/978-3-642-28869-2\\_2.
* [13] A. R. Balasubramanian, Nathalie Bertrand, and Nicolas Markey. Parameterized verification of synchronization in constrained reconfigurable broadcast networks. In Dirk Beyer and Marieke Huisman, editors, Tools and Algorithms for the Construction and Analysis of Systems - 24th International Conference, TACAS 2018, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2018, Thessaloniki, Greece, April 14-20, 2018, Proceedings, Part II, volume 10806 of Lecture Notes in Computer Science, pages 38–54. Springer, 2018. doi:10.1007/978-3-319-89963-3\\_3.
* [14] Mark Batty, Scott Owens, Susmit Sarkar, Peter Sewell, and Tjark Weber. Mathematizing c++ concurrency. SIGPLAN Not., 46(1):55–66, January 2011. URL: http://doi.acm.org/10.1145/1925844.1926394, doi:10.1145/1925844.1926394.
* [15] Sidi Mohamed Beillahi, Ahmed Bouajjani, and Constantin Enea. Robustness against transactional causal consistency. In 30th International Conference on Concurrency Theory, CONCUR 2019, August 27-30, 2019, Amsterdam, the Netherlands, pages 30:1–30:18, 2019\. doi:10.4230/LIPIcs.CONCUR.2019.30.
* [16] Nathalie Bertrand, Patricia Bouyer, and Anirban Majumdar. Reconfiguration and Message Losses in Parameterized Broadcast Networks. In Wan Fokkink and Rob van Glabbeek, editors, 30th International Conference on Concurrency Theory (CONCUR 2019), volume 140 of Leibniz International Proceedings in Informatics (LIPIcs), pages 32:1–32:15, Dagstuhl, Germany, 2019. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik. URL: http://drops.dagstuhl.de/opus/volltexte/2019/10934, doi:10.4230/LIPIcs.CONCUR.2019.32.
* [17] Nikolaj Bjørner, Arie Gurfinkel, Ken McMillan, and Andrey Rybalchenko. Horn clause solvers for program verification. In Fields of Logic and Computation II, pages 24–51. Springer, 2015\.
* [18] Nikolaj Bjørner, Ken McMillan, and Andrey Rybalchenko. On solving universally quantified Horn clauses. In SAS, volume 7935 of LNCS, pages 105–125. Springer, Springer, 2013.
* [19] Roderick Bloem, Swen Jacobs, Ayrat Khalimov, Igor Konnov, Sasha Rubin, Helmut Veith, and Josef Widder. Decidability of Parameterized Verification. Synthesis Lectures on Distributed Computing Theory. Morgan & Claypool Publishers, 2015.
* [20] Roderick Bloem, Swen Jacobs, Ayrat Khalimov, Igor Konnov, Sasha Rubin, Helmut Veith, and Josef Widder. Decidability in parameterized verification. SIGACT News, 47(2):53–64, 2016. doi:10.1145/2951860.2951873.
* [21] Ahmed Bouajjani, Michael Emmi, Constantin Enea, and Jad Hamza. On reducing linearizability to state reachability. In Magnús M. Halldórsson, Kazuo Iwama, Naoki Kobayashi, and Bettina Speckmann, editors, Automata, Languages, and Programming - 42nd International Colloquium, ICALP 2015, Kyoto, Japan, July 6-10, 2015, Proceedings, Part II, volume 9135 of Lecture Notes in Computer Science, pages 95–107. Springer, 2015. doi:10.1007/978-3-662-47666-6\\_8.
* [22] Ahmed Bouajjani, Constantin Enea, Rachid Guerraoui, and Jad Hamza. On verifying causal consistency. In Giuseppe Castagna and Andrew D. Gordon, editors, Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages, POPL 2017, Paris, France, January 18-20, 2017, pages 626–638. ACM, 2017\. URL: http://dl.acm.org/citation.cfm?id=3009888.
* [23] Ahmed Bouajjani, Constantin Enea, and Jad Hamza. Verifying eventual consistency of optimistic replication systems. In Suresh Jagannathan and Peter Sewell, editors, The 41st Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL ’14, San Diego, CA, USA, January 20-21, 2014, pages 285–296. ACM, 2014\. doi:10.1145/2535838.2535877.
* [24] Stefano Ceri, Georg Gottlob, and Letizia Tanca. Syntax and semantics of datalog. In Logic Programming and Databases, pages 77–93. Springer, 1990\.
# Outperforming classical estimation of Post-Newtonian parameters of Earth’s
gravitational field using quantum metrology
M. Rivera-Tapia${}^{1,2^{\ast}}$, Marcel I. Yáñez-Reyes3,4,†, A. Delgado1,2,
and G. Rubilar2
1 _Instituto Milenio de Investigación en Óptica, Universidad de Concepción,
Concepción, Chile._
2 _Departamento de Física, Facultad Ciencias Físicas y Matemáticas,
Universidad de Concepción, Concepción, Chile_
3 _ITFA, University of Amsterdam, Science Park 904, 1018 XE, Amsterdam, The
Netherlands_
4 _Nikhef Theory Group, Science Park 105, 1098 XG Amsterdam, The Netherlands_
<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
The Hong-Ou-Mandel (HOM) effect is analyzed for photons in a modified Mach-
Zehnder setup in which the two photons experience different gravitational
potentials and are later recombined at a beam-splitter. It is found that
the HOM effect depends directly on the relativistic time dilation between the
arms of the setup. This temporal dilation can be used to estimate the $\gamma$
and $\beta$ parameters of the parameterized post-Newtonian formalism. The
uncertainty in the parameters $\gamma$ and $\beta$ are of the order
$10^{-8}-10^{-12}$, depending on the quantum state employed.
## 1 Introduction
General Relativity is a non-linear and metric theory of the gravitational
field. The non-linear character of General Relativity has the consequence that
very few analytical solutions are known, most of which require a high degree of
symmetry. In order to compare the predictions of the theory with observational
data, certain approximations have become invaluable tools. For instance,
gravitational waves are often described by solutions to linear approximations
of the Einstein equations and do not arise as solutions of the exact equations
[1]. Another very useful approximation of General Relativity is the so called
post-Newtonian approximation, a method for solving Einstein’s field equations
for sources that move slowly compared to the speed of light and that generate
weak gravitational fields [2, 3]. This approximation has been intensively
employed as a tool to interpret experimental tests of General Relativity and
to test alternative metric theories of gravity [4, 5, 6, 7]. Furthermore, this
approximation is considered to be sufficiently precise for explaining most
solar-system tests that will be carried out in the foreseeable future [3].
The Hong-Ou-Mandel (HOM) effect is a purely quantum effect with no classical
counterpart. This effect is observed in an optical arrangement consisting of
two photons arriving at a beam-splitter. Classically, the photons should exit
the system and be detected at two different ports. Nevertheless, the quantum
mechanical description allows for the possibility that the photons emerge at
the same output port or at different output ports, depending on whether there
are differences in the length of the arms of the array [8]. In the HOM effect
we are interested in the measurement of the coincidence probability, that is,
the event in which a single photon is detected at each output port. Several
applications of the HOM effect have been proposed, such as
measuring the difference of arrival times between photons, that is, a time
sensor [9], and applications to quantum information tasks, such as
implementation of Bell measurements [10] and the generation of high-
dimensional entangled states [11, 12]. Recently, a large improvement on the
measurement resolution of arrival times has been reported [9, 13].
Modern studies concerning quantum information techniques in relativistic
contexts have been performed by Fuentes et al. [14], where acceleration
effects on quantum entanglement in massless two-mode quantum fields were
studied. Refinements using massive particles and particles with spin have been
carried out [15]. Furthermore, these predictions have already been
experimentally confirmed [16]. On the other hand, recent studies focused
instead on effects induced by gravity in large-scale optical arrays [17, 18,
19], developing potential applications of quantum entanglement in general
relativity such as the estimation of the gravitational acceleration utilizing
photons [20], the usage of optical arrays in the post-Newtonian formalism [17,
18, 19], the implementation of Sagnac arrays to measure a rotation parameter
of Gödel metric [21, 22], the usage of optical arrays to generate a path-
entangled state induced by gravitational time delay in photons [23], and the
proposal of a HOM array to study the effect of the gravitational drag [18].
Furthermore, quantum metrology has been considered as a tool for the
estimation of gravitational parameters. Kohlrus et al. studied how photons
propagating between satellites in a Kerr spacetime can be used to estimate the
equatorial velocity and Schwarzschild radius of the gravitational source. In a
further study, the same group showed how the rotation in the photon’s
polarization induced by the Kerr metric can be utilized to estimate values of
Earth’s radius and the distance between satellites [24]. Finally, Kish et al.
studied the Kerr rotation parameter by using a Mach-Zehnder interferometer
[25].
Motivated by studies of quantum metrology applied in estimation of spacetime
parameters and taking into account the need for enhancing parameter estimation
with respect to classical estimations in General Relativity, we study the
quantum estimation of post-Newtonian parameters using a HOM array. We show how
the HOM effect is produced by a path difference between the arms of the
interferometer induced by the gravitational field. This effect depends on the
gravitational time dilation between the arrival times of the photons at the detectors,
and the wavepacket dispersion. We use this effect to estimate the arrival
times in order to calculate the bounds for the precision of the estimation of
the post-Newtonian parameters. In order to compare the quantum and classical
bounds for the precision we utilize a scheme which only measures arrival times
and we compare it with a scheme that relies on the HOM effect. To improve the
estimation using the HOM effect, we consider separable and two-mode squeezed-
vacuum states. Finally, we compare our results with current values and discuss
the implications and improvements of applying quantum metrology to this
problem for the first time.
This article is organized as follows: in Sec. 2 we compute the temporal delay
in a HOM array. In Sec. 3 we calculate the HOM effect induced by the
gravitational time dilation. In Sec. 4 we estimate the post-Newtonian
parameters using the HOM array. In Sec. 5 we calculate the uncertainties in
the measurement of the parameters. Finally, in Sec. 6 we summarize our results
and conclude.
## 2 Temporal delay in a Hong-Ou-Mandel array
In this section we describe the weak field approximation and the metric
utilized to calculate the temporal delays of photons that travel in a Hong-Ou-
Mandel array.
The post-Newtonian (PN) formalism is a framework that allows us to describe
the effects of the gravitational field using a perturbative expansion, which
in its simpler form, depends on the Newtonian gravitational potential. We
shall use the following spacetime metric in isotropic coordinates within the
PN formalism,
$\displaystyle
ds^{2}=\left(1+2\frac{\phi(x)}{c^{2}}+2\beta\frac{\phi^{2}(x)}{c^{4}}\right)c^{2}dt^{2}-\left(1-2\gamma\frac{\phi(x)}{c^{2}}\right)\delta_{ij}dx^{i}dx^{j},$
(1)
where $\gamma$ is a constant that parametrizes how much spatial curvature is
generated by the gravitating mass, and $\beta$ characterizes the non-linear
contribution to the metric field. Our main goal is to use the HOM effect to
determine deviations of both parameters from the values within Einstein’s
theory, $\gamma=\beta=1$.
Let us consider a HOM interferometer with light sources located at a height
$R$, detectors at $R+\Delta h$ and two paths/arms $(\gamma_{2},\gamma_{1})$,
as shown in Fig. 1. We use the defining property of photons, that is, the fact that they
move along light-like curves ($ds^{2}=0$), which implies that the change in
temporal coordinates is given by
$\displaystyle\Delta t$ $\displaystyle=$ $\displaystyle\frac{1}{c}\int
dx\left(1+2\frac{\phi}{c^{2}}+2\beta\frac{\phi^{2}}{c^{4}}\right)^{-1/2}\left(1-2\gamma\frac{\phi}{c^{2}}\right)^{1/2}$
(2) $\displaystyle\approx$ $\displaystyle\frac{1}{c}\int
dx\left(1+\frac{\phi}{c^{2}}+(\frac{3}{2}-\beta)\frac{\phi^{2}}{c^{4}}\right)\left(1-\gamma\frac{\phi}{c^{2}}-\frac{\gamma^{2}}{2}\frac{\phi^{2}}{c^{4}}\right).$
Furthermore, the proper length of an arbitrary interval in this metric is
given by
$\displaystyle L$ $\displaystyle=$ $\displaystyle\int
dx\left(1-2\gamma\frac{\phi}{c^{2}}\right)^{1/2}$ (3) $\displaystyle\approx$
$\displaystyle\int
dx\left(1-\gamma\frac{\phi}{c^{2}}-\frac{\gamma^{2}}{2}\frac{\phi^{2}}{c^{4}}\right).$
Hence, in the HOM interferometer, the proper length of each arm is
$\displaystyle L_{\gamma_{1}}$ $\displaystyle\approx$
$\displaystyle\left(1-\gamma\frac{\phi(R)}{c^{2}}-\frac{\gamma^{2}\phi^{2}(R)}{2c^{4}}\right)\Delta
x_{\gamma_{1}},$ (4) $\displaystyle L_{\gamma_{2}}$ $\displaystyle\approx$
$\displaystyle\left(1-\gamma\frac{\phi(R+\Delta
h)}{c^{2}}-\frac{\gamma^{2}\phi^{2}(R+\Delta h)}{2c^{4}}\right)\Delta
x_{\gamma_{2}},$ (5)
where $\Delta x_{\gamma_{1}}$ and $\Delta x_{\gamma_{2}}$ are the intervals of
spatial coordinates along the horizontal parts of the respective paths.
For each one of the paths $\gamma_{1}$ and $\gamma_{2}$ in the interferometer
we have that the interval of temporal coordinates (2) in terms of the proper
lengths (4) and (5) become
$\displaystyle\Delta t_{\gamma_{2}}$ $\displaystyle\approx$
$\displaystyle(1-\frac{\phi(R+\Delta
h)}{c^{2}}+(\frac{3}{2}-\beta)\frac{\phi^{2}(R+\Delta
h)}{c^{4}})\frac{L_{\gamma_{2}}}{c},$ (6)
and
$\displaystyle\Delta
t_{\gamma_{1}}\approx(1-\frac{\phi(R)}{c^{2}}+(\frac{3}{2}-\beta)\frac{\phi^{2}(R)}{c^{4}})\frac{L_{\gamma_{1}}}{c}.$
(7)
In order to observe interference, we demand the condition $\Delta
x_{\gamma_{1}}=\Delta x_{\gamma_{2}}$; that is, an array placed on an
equipotential surface is balanced and there is no difference of optical length
between the arms of the array. With this constraint, the proper length
$L_{\gamma_{1}}$, to second order in $\phi/c^{2}$, can be written in
terms of $L_{\gamma_{2}}$ as
$\displaystyle\frac{L_{\gamma_{1}}}{L_{\gamma_{2}}}$ $\displaystyle\approx$
$\displaystyle 1+\gamma\frac{g\Delta h}{c^{2}}+2\gamma^{2}\frac{g\Delta
h\phi(R)}{c^{4}}.$ (8)
Then, the proper length of $\gamma_{1}$ becomes
$\displaystyle L_{\gamma_{1}}\approx\left(1+\gamma\frac{g\Delta
h}{c^{2}}+2\gamma^{2}\frac{g\Delta h\phi(R)}{c^{4}}\right)L_{\gamma_{2}}.$ (9)
Moreover, the proper time measured by a clock located at the detectors, which are
at the gravitational potential $\phi(R+\Delta h)$, is given by
$\displaystyle\Delta\tau$ $\displaystyle=$
$\displaystyle\left(1+2\frac{\phi}{c^{2}}+2\beta\frac{\phi^{2}}{c^{4}}\right)^{1/2}\Delta
t,$ (10)
where $\Delta t=\Delta t_{\gamma_{1}}-\Delta t_{\gamma_{2}}$
is the difference of temporal coordinates between paths $\gamma_{1}$ and
$\gamma_{2}$. Thus, using Eqs. (6) and (7) we have
$\displaystyle\Delta\tau$ $\displaystyle=$
$\displaystyle\left((\gamma+1)\frac{g\Delta
h}{c^{2}}+2(\gamma^{2}-1+\beta)\frac{\phi(R)g\Delta
h}{c^{4}}\right)\frac{L_{\gamma_{2}}}{c}.$ (11)
This expression can be recast considering the effective area
$A=L_{\gamma_{2}}\times\Delta h$ of the interferometer. We obtain the
expression
$\displaystyle\Delta\tau$ $\displaystyle=$
$\displaystyle\left((\gamma+1)+2(\gamma^{2}-1+\beta)\frac{\phi(R)}{c^{2}}\right)\frac{Ag}{c^{3}}.$
(12)
As we can see from Eq. (12), the temporal delay depends on the PN parameters
$\gamma$ and $\beta$: the parameter $\gamma$ is already present at first order
in the gravitational potential, while both $\gamma$ and $\beta$ contribute at
second order. According to Eq. (12), a single measurement of the temporal delay
$\Delta\tau$ cannot determine the two PN parameters independently.
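As a rough sanity check of the orders of magnitude involved, Eq. (12) can be evaluated numerically. The sketch below is illustrative only: the choice of a $1\;{\rm m}^{2}$ interferometer at the Earth's surface, and the Earth parameters used, are assumptions, not values taken from the text.

```python
import math

# Illustrative values (assumptions): a 1 m^2 interferometer at Earth's surface.
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
M = 5.972e24         # Earth's mass [kg]
R = 6.371e6          # Earth's radius [m]
c = 2.998e8          # speed of light [m/s]
g = G * M / R**2     # local gravitational acceleration
phi = -G * M / R     # Newtonian potential at the sources
A = 1.0              # effective area A = L_gamma2 * dh [m^2]

def delta_tau(gamma, beta):
    """Temporal delay of Eq. (12): leading term plus the second-order PN correction."""
    return ((gamma + 1) + 2 * (gamma**2 - 1 + beta) * phi / c**2) * A * g / c**3

dt_gr = delta_tau(1.0, 1.0)   # General Relativity: gamma = beta = 1
print(f"GR delay for A = 1 m^2: {dt_gr:.3e} s")
```

For a table-top effective area the delay is of order $10^{-25}$ s, which makes it clear why an interferometric signature, rather than direct timing, is attractive for the estimation.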
Figure 1: HOM interferometer: two photons are injected into the array and
follow different paths, experiencing different gravitational potentials:
photons moving through the horizontal part of path $\gamma_{1}$ experience a
gravitational potential $\phi(R)$, while photons moving along the horizontal
part of path $\gamma_{2}$ experience a potential $\phi(R+\Delta h)$. Finally,
the photons are recombined at the beam-splitter located at the latter
potential. The temporal delay in the arrival time of the photons is measured
by a clock located beside the detectors.
## 3 HOM effect induced by gravitational time dilation
Gravitational fields shift the phase of quantum states [17]. This shift is
accumulated through different paths of the interferometer and is the source of
interference in our system. To calculate the probability of detection in the
interferometer we incorporate the effect of gravity by using a unitary
operator $U(\varphi)$, where $\varphi$ is the phase. As we know, if we are
working in a regime where quantum gravity effects are not relevant [26], the
propagation of an electromagnetic field is described by the classical wave
equations on a curved background. The usual approach consists in describing
the propagation of an electromagnetic wave by means of geometric optics [27].
In this approximation, the phase of the vector potential satisfies the eikonal
equation, which corresponds to the Hamilton-Jacobi equation for massless
particles. In the case of an interferometric experiment, and in a static
spacetime, the gravitationally induced phase shift is given by
$\Delta\Psi=\bar{\omega}\Delta t$, where $\bar{\omega}$ is the frequency
measured by an observer at infinity and $\Delta t$ is the difference of
temporal coordinates between the paths of the array. The phase shift
can be recast, considering the frequency $\omega$ and the proper time
$\Delta\tau$ measured by an observer experiencing a specific gravitational
potential, as $\Delta\Psi=\omega\Delta\tau$, with
$\omega=\bar{\omega}/\sqrt{g_{00}}$ and $\Delta\tau=\sqrt{g_{00}}\,\Delta t$.
### 3.1 Two-photon separable state
Let us consider an initial two-photon wave packet of the form
$\ket{\Psi}_{\rm in}=\int
d\omega_{1}\phi(\omega_{1})\hat{v}^{\dagger}(\omega_{1})\int
d\omega_{2}\phi(\omega_{2})\hat{u}^{\dagger}(\omega_{2})\ket{0}_{12},$ (13)
where $\hat{v}^{\dagger}$ creates a photon with frequency $\omega_{1}$, and
$\hat{u}^{\dagger}$ creates a photon with frequency $\omega_{2}$. The state
that describes the photons after the phase shift produced by the gravitational
field and the recombination produced by the beam-splitter is
$\displaystyle\ket{\Psi}_{\rm out}$ $\displaystyle=$
$\displaystyle\frac{1}{2}\iint
d\omega_{1}d\omega_{2}\,f(\omega_{1},\omega_{2})e^{-i(\omega_{1}\Delta\tau_{\gamma_{1}}+\omega_{2}\Delta\tau_{\gamma_{2}})}\left[ia^{\dagger}(\omega_{1})+b^{\dagger}(\omega_{1})\right]\left[a^{\dagger}(\omega_{2})+ib^{\dagger}(\omega_{2})\right]|0\rangle_{12},$ (14)
where $f(\omega_{1},\omega_{2})=\phi(\omega_{1})\phi(\omega_{2})$, and
$a,a^{\dagger},b$ and $b^{\dagger}$ are creation and annihilation
operators of modes at the output ports of the beam-splitter.
Hence, using Born’s rule, the probability of simultaneous detection is given
by
$\displaystyle p_{cd}$ $\displaystyle=$ $\displaystyle\frac{1}{4}\iint
d\omega_{1}d\omega_{2}|f(\omega_{1},\omega_{2})|^{2}\left[2-e^{i\left(\omega_{1}-\omega_{2}\right)\Delta\tau_{\gamma_{2}}}e^{-i\left(\omega_{1}-\omega_{2}\right)\Delta\tau_{\gamma_{1}}}-e^{-i\left(\omega_{1}-\omega_{2}\right)\Delta\tau_{\gamma_{2}}}e^{i\left(\omega_{1}-\omega_{2}\right)\Delta\tau_{\gamma_{1}}}\right].$ (15)
If the photons have Gaussian frequency distributions, with possibly different
spectral distributions $\sigma_{1}$ and $\sigma_{2}$ and mean frequencies
$\omega_{1}$ and $\omega_{2}$, the detection probability becomes
$\displaystyle p_{cd}$ $\displaystyle=$
$\displaystyle\frac{1}{2}\left(1-\cos((\omega_{1}-\omega_{2})\Delta\tau)e^{-\frac{\Delta\tau^{2}(\sigma^{2}_{1}+\sigma^{2}_{2})}{4}}\right),$
(16)
where $\Delta\tau$ is the difference of arrival time measured by a clock
placed at the detectors. In the case of photons with the same spectral
distributions $\sigma_{1}=\sigma_{2}=\sigma$ and equal mean frequencies
$\omega_{1}=\omega_{2}$, the probability of coincidence detection Eq. (16)
becomes
$\displaystyle p_{cd}$ $\displaystyle=$
$\displaystyle\left(1-e^{-\sigma^{2}\Delta\tau^{2}/2}\right)/2.$ (17)
According to Eq. (16), the coincidence detection probability decreases
exponentially as the temporal delay between photons increases. Moreover, the
coincidence detection probability also exhibits a harmonic oscillation given
by a cosine function, which depends on the temporal delay times the difference
between the mean frequency of the photons. This last feature vanishes if we
have twin photons with the same frequency, and only the exponential decrease
of the probability is present, according to Eq. (17). The HOM effect is
produced here by the gravitational time dilation, even if both arms have the
same proper lengths. If we consider the array placed on a gravitational
equipotential surface, there is no difference of proper time, both arms have
the same proper length and consequently the coincidence detection probability
given by Eq. (17) vanishes.
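The two limits discussed above can be checked with a few lines of code. The sketch below implements Eq. (17) directly; the numerical value chosen for the spectral width is an illustrative assumption.

```python
import math

def p_cd(dtau, sigma):
    """Coincidence probability of Eq. (17): twin photons with equal
    mean frequencies and spectral width sigma."""
    return (1.0 - math.exp(-(sigma * dtau)**2 / 2.0)) / 2.0

sigma = 2 * math.pi * 3e12          # illustrative spectral width [rad/s]
print(p_cd(0.0, sigma))             # balanced array: photons bunch, p = 0
print(p_cd(100.0 / sigma, sigma))   # delay >> 1/sigma: p -> 1/2
```

The probability vanishes for a balanced array ($\Delta\tau=0$), grows monotonically with $|\Delta\tau|$, and saturates at $1/2$ once the delay greatly exceeds the coherence time $1/\sigma$.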
Fig. 2 shows the behavior of the coincidence detection probability according to
Eq. (17) for different values of the parameters $\beta$ and $\gamma$. The
range of values for $\gamma$ and $\beta$ is selected between 0.8 and 1.2 in
order to visualize the sensitivity of the HOM array to small deviations of
these parameters with respect to the prediction of General Relativity. In Fig. 3
we plot different values of the parameters $\gamma$ and $\beta$ in terms of
the effective area of the HOM interferometer, considering small deviations
from the actual values of the PN parameters [3]. As is apparent from this
figure, the coincidence detection probability of Eq. (17) has nearly the same
variation for each value of $\gamma$ and $\beta$ when both parameters take the
same value.
Figure 2: Probability of detection in coincidence for the HOM interferometer
according to Eq. (17), for different values of $\gamma$ and $\beta$ in the
interval $[0.8,1.2]$ and different wavelength widths. For each color, the upper
line represents $\beta=\gamma=1.2$ with mean wavelength $\lambda=3.3\;[\rm\mu
m]$, and the lower line $\beta=\gamma=0.8$. Red lines: $\delta\lambda=0.370\;[\mu
m]$; blue lines: $\delta\lambda=0.296\;[\mu m]$; green lines:
$\delta\lambda=0.148\;[\mu m]$. Figure 3: Contour plot of the probability
of detection in coincidence for the HOM interferometer according to Eq. (17),
in terms of different values of $\gamma$ and $\beta$ in the interval $[0.8,1.2]$
and the area $A=L_{\gamma_{2}}\times\Delta h$. The mean wavelength is
$\lambda=3.3\;[\rm\mu m]$, and $\delta\lambda=0.370\;[\mu m]$.
To take advantage of the HOM effect, we measure the coincidence detection
probability, given by the expectation value of the operator
$\displaystyle\hat{S}$ $\displaystyle=$ $\displaystyle\int d\omega
a^{\dagger}(\omega)|0\rangle\langle 0|a(\omega)\otimes\int d\omega
b^{\dagger}(\omega)|0\rangle\langle 0|b(\omega).$ (18)
This operator satisfies
$\displaystyle\langle\hat{S}^{2}\rangle=\langle\hat{S}\rangle.$ (19)
Therefore,
$\displaystyle\langle\hat{S}\rangle$ $\displaystyle=$
$\displaystyle\frac{1}{2}\left(1-e^{-\sigma^{2}\Delta\tau^{2}/2}\right),$ (20)
and
$\displaystyle\langle(\Delta\hat{S})^{2}\rangle$ $\displaystyle=$
$\displaystyle\frac{1}{4}\left(1-e^{-\sigma^{2}\Delta\tau^{2}}\right).$ (21)
Considering Eqs. (20) and (21), we can calculate the dispersion of the
coincidence operator in Eq. (18): as the elapsed proper time increases, the
dispersion of the observable increases. It is interesting to note that, by
means of Eq. (20), we can obtain an estimate of the temporal delay
$\Delta\tau$ through the mean value of the coincidence operator $\hat{S}$.
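Concretely, Eq. (20) can be inverted algebraically: for $\Delta\tau\geq 0$ and $\langle\hat{S}\rangle<1/2$ one has $\Delta\tau=\sqrt{-2\ln(1-2\langle\hat{S}\rangle)}/\sigma$. The sketch below performs this inversion and verifies it against the forward formula; the numerical values of $\sigma$ and $\Delta\tau$ are illustrative assumptions.

```python
import math

def mean_S(dtau, sigma):
    # Eq. (20): mean value of the coincidence operator
    return (1.0 - math.exp(-(sigma * dtau)**2 / 2.0)) / 2.0

def dtau_from_mean(s, sigma):
    # algebraic inversion of Eq. (20), valid for 0 <= s < 1/2 and dtau >= 0
    return math.sqrt(-2.0 * math.log(1.0 - 2.0 * s)) / sigma

sigma = 2 * math.pi * 3e12    # illustrative spectral width [rad/s]
dtau = 1e-13                  # illustrative proper-time delay [s]
s = mean_S(dtau, sigma)
dtau_est = dtau_from_mean(s, sigma)
print(dtau_est)
```

The round trip recovers the input delay; in practice $\langle\hat{S}\rangle$ would be replaced by the measured coincidence frequency.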
### 3.2 Two-mode squeezed-vacuum state
Let us assume that the HOM array is fed a two-mode squeezed-vacuum state,
which in the Fock basis is given by
$\displaystyle|\zeta\rangle$ $\displaystyle=$
$\displaystyle\frac{1}{\cosh(r)}\sum_{n=0}^{\infty}(-1)^{n}e^{in\theta}\left(\tanh(r)\right)^{n}|n,n\rangle.$
(22)
Here, $\theta$ is a controllable phase for the squeezed vacuum, $r$ is the
squeezing parameter of the two-mode squeezer, and $n$ labels Fock states. The
probability of detecting the two photons in coincidence (for further details
see Appendix A) is given by $p_{\rm
cd}=\operatorname{Tr}\left(\rho_{\rm squeezing}\hat{S}\right)$, where the
operator $\hat{S}$ is given by Eq. (18) and $\rho_{\rm
squeezing}=|\zeta\rangle\langle\zeta|$. The probability of coincidence
detection in the case of different frequencies, that is,
$\omega_{1}\neq\omega_{2}$, becomes
$\displaystyle\langle S\rangle$ $\displaystyle=$
$\displaystyle\frac{1}{2}\left(\frac{\tanh(r)}{\cosh(r)}\right)^{2}\left(1-\cos((\omega_{1}-\omega_{2})\Delta\tau)e^{-\frac{\Delta\tau^{2}(\sigma^{2}_{1}+\sigma^{2}_{2})}{4}}\right).$
(23)
If the mean frequencies of the Gaussian wave packets are equal, then we obtain
$\displaystyle\langle S\rangle$ $\displaystyle=$
$\displaystyle\frac{1}{2}\left(\frac{\tanh(r)}{\cosh(r)}\right)^{2}\left(1-\exp\left(-\sigma^{2}\Delta\tau^{2}/2\right)\right).$
(24)
We can recast Eqs. (23) and (24) in terms of Eqs. (16) and (17), respectively.
In this case, the expectation value is given by
$\displaystyle\langle S\rangle$ $\displaystyle=$
$\displaystyle\left(\frac{\tanh(r)}{\cosh(r)}\right)^{2}p_{cd}.$ (25)
Thus, the coincidence detection probability of the two-mode squeezed-vacuum state
is equal to the coincidence detection probability for a two-photon state times
a function that depends on the squeezing parameter $r$. In the limit where
$r\rightarrow 0$, that is, the squeezing parameter vanishes, the coincidence
detection probabilities Eqs. (24) and (23) become Eqs. (17) and (16),
respectively.
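The prefactor $(\tanh r/\cosh r)^{2}$ in Eq. (25) vanishes both for $r\to 0$ and for $r\to\infty$, so there is an optimal amount of squeezing. A quick numerical scan (a sketch; the grid range and step are arbitrary choices) locates the maximum, which analytically occurs at $\sinh r=1$, i.e. $r={\rm arcsinh}(1)\approx 0.881$, where the prefactor equals $1/4$.

```python
import math

def prefactor(r):
    # squeezing-dependent factor multiplying p_cd in Eq. (25)
    return (math.tanh(r) / math.cosh(r))**2

# scan r on a fine grid and locate the maximum numerically
grid = [i * 1e-4 for i in range(1, 30001)]   # r in (0, 3]
r_best = max(grid, key=prefactor)
print(r_best, prefactor(r_best))   # ~0.8814, ~0.25
```

Thus moderate squeezing maximizes the coincidence signal, while very large $r$ suppresses it.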
## 4 Estimation of $\gamma$ and $\beta$ parameters
### 4.1 Parameter estimation by an external clock
Considering the temporal delay given by Eq. (11), we use a quantum protocol
obtain an estimate of the parameters $\gamma$ and $\beta$. We consider two
different arrival times, for each one we set the HOM interferometer with the
lower arm experiencing a gravitational potential $\phi(R_{i})$, with $i=1,2$,
and the upper arm experiencing the potential $\phi(R_{i}+\Delta h)$ such that
the difference of gravitational potential between each arm (for both
interferometers) is (at first order) $g\Delta h$ and each upper arm of the
different arrays has a proper length $L_{\gamma_{2}}$ and
$L_{\gamma^{\prime}_{2}}$, respectively. A classical approach consists in
measuring the time delay of a light wave-packet in the HOM interferometer, but
a quantum approach would be based on measuring the difference of proper time
through the probability of detection in the HOM array. Having an estimate of
the temporal delays in each array, we solve the following system of equations
$\displaystyle\Delta\tau_{1}$ $\displaystyle=$
$\displaystyle\left((\gamma+1)\frac{g\Delta
h}{c^{2}}+2(\gamma^{2}-1+\beta)\frac{\phi(R_{1})g\Delta
h}{c^{4}}\right)\frac{L_{\gamma_{2}}}{c},$ (26) $\displaystyle\Delta\tau_{2}$
$\displaystyle=$ $\displaystyle\left((\gamma+1)\frac{g\Delta
h}{c^{2}}+2(\gamma^{2}-1+\beta)\frac{\phi(R_{2})g\Delta
h}{c^{4}}\right)\frac{L_{\gamma^{\prime}_{2}}}{c}.$ (27)
where $\Delta\tau_{1}$ and $\Delta\tau_{2}$ are the differences of arrival
time for each configuration of the array, $\phi(R_{1})$ and
$\phi(R_{2})$ are the gravitational potentials experienced by the
lower path of each array, and $\Delta h$ is the difference of spatial
coordinate in the $z$ direction (see Fig. 4). Solving for $\gamma$ and
$\beta$, we obtain the following solutions
$\displaystyle\gamma$ $\displaystyle=$
$\displaystyle\frac{c^{3}\left(L_{\gamma_{2}}\Delta\tau_{2}\phi(R_{1})-L_{\gamma^{\prime}_{2}}\Delta\tau_{1}\phi(R_{2})\right)}{gL_{\gamma_{2}}L_{\gamma^{\prime}_{2}}\Delta
h\left(\phi(R_{1})-\phi(R_{2})\right)}-1,$ (28)
and
$\displaystyle\beta$ $\displaystyle=$
$\displaystyle\frac{2c^{3}\left(L_{\gamma_{2}}\Delta\tau_{2}\phi(R_{1})-L_{\gamma^{\prime}_{2}}\Delta\tau_{1}\phi(R_{2})\right)}{L_{\gamma_{2}}L_{\gamma^{\prime}_{2}}g\Delta
h\left(\phi(R_{1})-\phi(R_{2})\right)}+\frac{c^{5}\left(L_{\gamma^{\prime}_{2}}\Delta\tau_{1}-L_{\gamma_{2}}\Delta\tau_{2}\right)}{2L_{\gamma_{2}}L_{\gamma^{\prime}_{2}}g\Delta
h\left(\phi(R_{1})-\phi(R_{2})\right)}$ (29)
$\displaystyle-\frac{c^{6}\left(L_{\gamma_{2}}\Delta\tau_{2}\phi(R_{1})-L_{\gamma^{\prime}_{2}}\Delta\tau_{1}\phi(R_{2})\right)^{2}}{L^{2}_{\gamma_{2}}L^{2}_{\gamma^{\prime}_{2}}g^{2}\Delta h^{2}\left(\phi(R_{1})-\phi(R_{2})\right)^{2}}.$
In terms of the areas $A_{1}$ and $A_{2}$ of each array the parameters
$\gamma$ and $\beta$ become
$\displaystyle\gamma$ $\displaystyle=$
$\displaystyle\frac{c^{3}\left(A_{1}\Delta\tau_{2}\phi(R_{1})-A_{2}\Delta\tau_{1}\phi(R_{2})\right)}{gA_{1}A_{2}(\phi(R_{1})-\phi(R_{2}))}-1,$
(30) $\displaystyle\beta$ $\displaystyle=$
$\displaystyle\frac{2c^{3}\left(A_{1}\Delta\tau_{2}\phi(R_{1})-A_{2}\Delta\tau_{1}\phi(R_{2})\right)}{A_{1}A_{2}g(\phi(R_{1})-\phi(R_{2}))}+\frac{c^{5}\left(A_{2}\Delta\tau_{1}-A_{1}\Delta\tau_{2}\right)}{2A_{1}A_{2}g(\phi(R_{1})-\phi(R_{2}))}$
(31)
$\displaystyle-\frac{c^{6}\left(A_{1}\Delta\tau_{2}\phi(R_{1})-A_{2}\Delta\tau_{1}\phi(R_{2})\right)^{2}}{A^{2}_{1}A^{2}_{2}g^{2}(\phi(R_{1})-\phi(R_{2}))^{2}}.$
As we can see in the expressions for $\gamma$ and $\beta$, the $\beta$
parameter can be recast in terms of $\gamma$. Thereby, $\beta$ as given by Eq.
(31) becomes
$\displaystyle\beta$ $\displaystyle=$
$\displaystyle\frac{c^{5}\left(A_{2}\Delta\tau_{1}-A_{1}\Delta\tau_{2}\right)}{2A_{1}A_{2}g(\phi(R_{1})-\phi(R_{2}))}+2\left(\gamma+1\right)-\left(\gamma+1\right)^{2}.$
(32)
Consequently, our estimates of $\gamma$ and $\beta$ are dependent, because
they arise from an estimate of the same observable, which in our case is the
elapsed proper time $\Delta\tau$ of Eq. (12).
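The algebra leading from Eqs. (26)–(27) to Eq. (28) can be verified symbolically. A minimal sketch in Python with sympy (variable names are ours; `x` stands for $\gamma+1$ and `y` for $\gamma^{2}-1+\beta$):

```python
import sympy as sp

# Symbols: times, arm lengths, potentials; x = gamma + 1, y = gamma**2 - 1 + beta.
g, dh, c, L1, L2, p1, p2, t1, t2 = sp.symbols(
    'g dh c L1 L2 phi1 phi2 dtau1 dtau2', positive=True)
x, y = sp.symbols('x y')

# Eqs. (26)-(27): dtau_i = (x*g*dh/c**2 + 2*y*phi_i*g*dh/c**4) * L_i / c
eqs = [sp.Eq(t1, (x * g * dh / c**2 + 2 * y * p1 * g * dh / c**4) * L1 / c),
       sp.Eq(t2, (x * g * dh / c**2 + 2 * y * p2 * g * dh / c**4) * L2 / c)]
sol = sp.solve(eqs, [x, y], dict=True)[0]

gamma = sp.simplify(sol[x] - 1)
# Eq. (28), with L1 = L_{gamma_2} and L2 = L_{gamma'_2}:
gamma_paper = c**3 * (L1 * t2 * p1 - L2 * t1 * p2) / (g * L1 * L2 * dh * (p1 - p2)) - 1
print(sp.simplify(gamma - gamma_paper))  # 0
```

The same `sol[y]` reproduces the $\beta$ expression after substituting $\gamma^{2}-1+\beta=y$.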
### 4.2 Parameter estimation employing a two-photon separable state
A second way to obtain $\gamma$ and $\beta$ is a slight variation of the
previous one. Instead of measuring the time delays of each interferometer with
an external clock, we obtain the temporal delay from the probability of
detection in the HOM array. Consider the probability of detection in
coincidence of Eq. (17); from this probability, the time delay (for a Gaussian
wave-packet) is
$\displaystyle\Delta\tau_{\rm
sep}=\pm\frac{\sqrt{2}}{\sigma}\sqrt{\ln\left(\frac{1}{1-2\langle
S\rangle}\right)},$ (33)
where $\langle S\rangle$ is the expectation value of the coincidence
observable (see Eq. 20), and $\sigma$ is the dispersion of the photons.
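For intuition, Eq. (33) is just the inversion of a Gaussian HOM dip, $\langle S\rangle=\tfrac{1}{2}\left(1-e^{-\sigma^{2}\Delta\tau^{2}/2}\right)$. A small round-trip check in Python (the numerical values are illustrative, not the paper's):

```python
import math

def hom_delay_from_coincidence(S, sigma):
    """Eq. (33): |dtau| = (sqrt(2)/sigma) * sqrt(ln(1/(1 - 2*S)))."""
    if not 0.0 <= S < 0.5:
        raise ValueError("coincidence probability must lie in [0, 1/2)")
    return (math.sqrt(2.0) / sigma) * math.sqrt(math.log(1.0 / (1.0 - 2.0 * S)))

def coincidence_from_delay(dtau, sigma):
    """Gaussian forward model implied by Eq. (33): <S> = (1 - exp(-sigma**2 dtau**2 / 2)) / 2."""
    return 0.5 * (1.0 - math.exp(-(sigma * dtau) ** 2 / 2.0))

sigma = 2.0e13   # assumed spectral width [rad/s], illustrative
dtau = 5.0e-15   # assumed proper-time delay [s], illustrative
S = coincidence_from_delay(dtau, sigma)
print(abs(hom_delay_from_coincidence(S, sigma) - dtau) < 1e-20)  # True (round trip)
```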
Substituting Eq. (33) for each HOM interferometer into Eqs. (28) and (29), we obtain
$\displaystyle\gamma$ $\displaystyle=$ $\displaystyle-1+\frac{c^{3}\Delta
h}{gL_{\gamma_{2}}L_{\gamma^{\prime}_{2}}\Delta
h^{2}\left(\phi(R_{2})-\phi(R_{1})\right)}$
$\displaystyle\times\left(\frac{L_{\gamma^{\prime}_{2}}\phi(R_{1})}{\sigma_{1}}\sqrt{\ln\left(\frac{1}{1-2\langle
S_{1}\rangle}\right)}-\frac{L_{\gamma_{2}}\phi(R_{2})}{\sigma_{2}}\sqrt{\ln\left(\frac{1}{1-2\langle
S_{2}\rangle}\right)}\right),$
where $\sigma_{1}$ and $\sigma_{2}$ are the spectral widths of the photons
employed in the first and second configuration, and $\langle S_{1}\rangle$ and
$\langle S_{2}\rangle$ are the corresponding expectation values of the
coincidence observable.
This expression can be recast as
This expression can be recast as
$\displaystyle\gamma$ $\displaystyle=$ $\displaystyle-1+\frac{c^{3}\Delta
h}{gL_{\gamma_{2}}L_{\gamma^{\prime}_{2}}\Delta
h^{2}\left(\phi(R_{2})-\phi(R_{1})\right)}$
$\displaystyle\times\left(\frac{L_{\gamma^{\prime}_{2}}\phi(R_{1})}{\sigma_{1}}\sqrt{\ln\left(\frac{1}{1-2\langle S_{1}\rangle}\right)}-\frac{L_{\gamma_{2}}\phi(R_{2})}{\sigma_{2}}\sqrt{\ln\left(\frac{1}{1-2\langle S_{2}\rangle}\right)}\right).$
Analogously for $\beta$, we obtain
$\displaystyle\beta$ $\displaystyle=$
$\displaystyle\frac{c^{3}}{2g^{2}L_{\gamma_{2}}L_{\gamma^{\prime}_{2}}\Delta
h^{2}\sigma_{1}\sigma_{2}\left(\phi(R_{1})-\phi(R_{2})\right)^{2}}\left[c^{2}gL_{\gamma_{2}}L_{\gamma^{\prime}_{2}}\Delta
h\sigma_{1}\sigma_{2}\left(L_{\gamma^{\prime}_{2}}\sigma_{2}\sqrt{\ln\left(\frac{1}{1-2\langle
S_{1}\rangle}\right)}\right.\right.$ (36)
$\displaystyle\left.\left.-L_{\gamma_{2}}\sigma_{1}\sqrt{\ln\left(\frac{1}{1-2\langle
S_{2}\rangle}\right)}\right)\left(\phi(R_{1})-\phi(R_{2})\right)+4gL_{\gamma_{2}}L_{\gamma^{\prime}_{2}}\Delta
h\sigma_{1}\sigma_{2}[\phi(R_{1})-\phi(R_{2})]\right.$
$\displaystyle\left.\times\left(L_{\gamma_{2}}\sigma_{1}\sqrt{\ln\left(\frac{1}{1-2\langle
S_{1}\rangle}\right)}\phi(R_{1})-L_{\gamma^{\prime}_{2}}\sigma_{2}\sqrt{\ln\left(\frac{1}{1-2\langle
S_{2}\rangle}\right)}\phi(R_{2})\right)\right.$
$\displaystyle\left.-2c^{3}\left(L_{\gamma_{2}}\sigma_{1}\sqrt{\ln\left(\frac{1}{1-2\langle
S_{1}\rangle}\right)}\phi(R_{1})-L_{\gamma^{\prime}_{2}}\sigma_{2}\sqrt{\ln\left(\frac{1}{1-2\langle
S_{2}\rangle}\right)}\phi(R_{2})\right)\right].$
Figure 4: HOM interferometer configuration: in order to obtain the values of
the PN parameters $\gamma$ and $\beta$, we measure two different time delays
$\Delta\tau_{1}$ (26) and $\Delta\tau_{2}$ (27) (or the probability of
detection in coincidence (17) for each arrival time) for configurations
(a) and (b), respectively. In (a) two photons enter the HOM interferometer;
the bottom path is at a potential $\phi(R_{1})$ and the upper path at
$\phi(R_{1}+\Delta h)$. Both photons are recombined at this potential. In (b)
the whole array is displaced so that both photons enter the array at
$\phi(R_{2})$ and are finally recombined at $\phi(R_{2}+\Delta h)$.
### 4.3 Parameter estimation employing a two-mode squeezed-vacuum state
In this subsection we employ the probability of detection in coincidence when
the HOM array is injected with two single-mode squeezed-vacuum states.
Considering the probability of detection given by Eq. (24), the temporal delay becomes
$\displaystyle\Delta\tau_{\rm squeezing}$ $\displaystyle=$
$\displaystyle\left(\frac{\sqrt{2}}{\sigma}\right)\sqrt{\ln\left(\frac{\tanh^{2}(r)}{2\langle
S\rangle\cosh^{2}(r)-\tanh^{2}(r)}\right)},$ (37)
where $\langle S\rangle$ is the probability of detection in coincidence.
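As in the separable case, Eq. (37) inverts a forward model for the coincidence probability, here $\langle S\rangle=\tanh^{2}(r)\left(1+e^{-\sigma^{2}\Delta\tau^{2}/2}\right)/\left(2\cosh^{2}(r)\right)$, which follows from solving Eq. (37) for $\langle S\rangle$. A round-trip sketch (illustrative numbers, not the paper's):

```python
import math

def squeezed_delay_from_coincidence(S, r, sigma):
    """Eq. (37): |dtau| = (sqrt(2)/sigma) * sqrt(ln(tanh(r)**2 / (2*S*cosh(r)**2 - tanh(r)**2)))."""
    t2, c2 = math.tanh(r) ** 2, math.cosh(r) ** 2
    return (math.sqrt(2.0) / sigma) * math.sqrt(math.log(t2 / (2.0 * S * c2 - t2)))

def coincidence_from_delay(dtau, r, sigma):
    """Forward model obtained by solving Eq. (37) for <S>."""
    t2, c2 = math.tanh(r) ** 2, math.cosh(r) ** 2
    return t2 * (1.0 + math.exp(-(sigma * dtau) ** 2 / 2.0)) / (2.0 * c2)

r, sigma, dtau = 1.0, 2.0e13, 5.0e-15   # assumed, illustrative values
S = coincidence_from_delay(dtau, r, sigma)
print(abs(squeezed_delay_from_coincidence(S, r, sigma) - dtau) / dtau)  # ~ machine precision
```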
Substituting the temporal delay of Eq. (37), the parameter $\gamma$ becomes
$\displaystyle\gamma$ $\displaystyle=$
$\displaystyle\frac{A_{1}c^{3}\phi(R_{1})\ln\left(\frac{1}{1-2\langle
S_{2}\rangle\operatorname{sech}^{4}(r_{2})}\right)-A_{2}c^{3}\phi(R_{2})\ln\left(\frac{1}{1-2\langle
S_{1}\rangle\operatorname{sech}^{4}(r_{1})}\right)}{A_{1}A_{2}g\sigma\phi(R_{1})-A_{1}A_{2}g\sigma\phi(R_{2})}-1,$
where $\langle S_{1}\rangle$ and $\langle S_{2}\rangle$ are the probabilities
of detection in coincidence for the first and second configuration of the HOM
array, respectively, and $r_{1}$ and $r_{2}$ are the squeezing parameters in
the first and second configuration. For the parameter $\beta$ we obtain
$\displaystyle\beta$ $\displaystyle=$
$\displaystyle-\frac{c^{6}\left(A_{1}\phi(R_{1})\ln\left(\frac{1}{1-2\langle S_{2}\rangle\operatorname{sech}^{4}(r_{2})}\right)-A_{2}\phi(R_{2})\ln\left(\frac{1}{1-2\langle S_{1}\rangle\operatorname{sech}^{4}(r_{1})}\right)\right)^{2}}{A_{1}^{2}A_{2}^{2}g^{2}\sigma^{2}\Delta\phi_{12}^{2}}$
$\displaystyle+\frac{A_{2}c^{5}\ln\left(\frac{1}{1-2\langle S_{1}\rangle\operatorname{sech}^{4}(r_{1})}\right)-A_{1}c^{5}\ln\left(\frac{1}{1-2\langle S_{2}\rangle\operatorname{sech}^{4}(r_{2})}\right)}{2A_{1}A_{2}g\sigma\left(\phi(R_{1})-\phi(R_{2})\right)}$
$\displaystyle+\frac{2A_{1}c^{3}\phi(R_{1})\ln\left(\frac{1}{1-2\langle S_{2}\rangle\operatorname{sech}^{4}(r_{2})}\right)-2A_{2}c^{3}\phi(R_{2})\ln\left(\frac{1}{1-2\langle S_{1}\rangle\operatorname{sech}^{4}(r_{1})}\right)}{A_{1}A_{2}g\sigma\left(\phi(R_{1})-\phi(R_{2})\right)}.$
## 5 Uncertainties and relative errors of $\gamma$ and $\beta$
In this section, we calculate the uncertainties of the PN parameters $\gamma$
and $\beta$. In the first part, we consider errors in the measurement of the
arrival time of a classical light wave-packet. In the second part, we focus on
the arrival time obtained from the probability of detection in coincidence,
employing several schemes.
### 5.1 Errors in the arrival times
To calculate the uncertainties of $\gamma$ and $\beta$, we consider errors only
in the measurement of the proper time in each HOM interferometer. Moreover, the
uncertainties of the arrival-time measurements in the two configurations of the
HOM array are treated as independent; that is, we assume there is no
correlation between them. In this case, the uncertainty of the parameters
$\gamma$ and $\beta$ is given by
$\displaystyle\delta\epsilon(\Delta\tau_{1},\Delta\tau_{2})$ $\displaystyle=$
$\displaystyle\sqrt{\left(\frac{\partial\epsilon}{\partial\Delta\tau_{1}}\right)^{2}\left(\delta\Delta\tau_{1}\right)^{2}+\left(\frac{\partial\epsilon}{\partial\Delta\tau_{2}}\right)^{2}\left(\delta\Delta\tau_{2}\right)^{2}},$
(40)
where $\epsilon=(\gamma,\beta)$ corresponds to each parameter $\gamma$ and
$\beta$, $\delta\Delta\tau_{1}$ and $\delta\Delta\tau_{2}$ are the
uncertainties on the measurement of the temporal delays $\Delta\tau_{1}$ and
$\Delta\tau_{2}$, respectively. Therefore, given the uncertainties
$\delta\Delta\tau_{1}$ and $\delta\Delta\tau_{2}$ for the two configurations
of the HOM array, the uncertainty in the estimation of $\gamma$ becomes
$\displaystyle\delta\gamma$ $\displaystyle=$
$\displaystyle\frac{c^{3}}{gL_{\gamma_{2}}L_{\gamma^{\prime}_{2}}\Delta h\left[\phi(R_{1})-\phi(R_{2})\right]}\sqrt{L^{2}_{\gamma_{2}}\delta\Delta\tau^{2}_{2}\phi^{2}(R_{1})+L^{2}_{\gamma^{\prime}_{2}}\delta\Delta\tau^{2}_{1}\phi^{2}(R_{2})}.$
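Because $\gamma$ in Eq. (28) is linear in $\Delta\tau_{1}$ and $\Delta\tau_{2}$, the propagated uncertainty can be cross-checked numerically with finite differences. A sketch (all numerical inputs are illustrative, chosen only to exercise the algebra, not the paper's values):

```python
import math

# Illustrative inputs (not the paper's values).
g, dh, c = 9.8, 1.0, 2.998e8
L1, L2 = 1.0e3, 2.0e3                 # proper lengths of the upper arms [m]
phi1, phi2 = -6.25e7, -6.00e7         # gravitational potentials [m^2/s^2]
dtau1, dtau2 = 1.0e-12, 2.1e-12       # measured delays [s]
err1, err2 = 1.0e-18, 1.0e-18         # clock uncertainties [s]

def gamma(t1, t2):
    # Eq. (28)
    return c**3 * (L1 * t2 * phi1 - L2 * t1 * phi2) / (g * L1 * L2 * dh * (phi1 - phi2)) - 1.0

# Eq. (40): Gaussian propagation with derivatives from central finite differences.
h = 1.0e-18
d1 = (gamma(dtau1 + h, dtau2) - gamma(dtau1 - h, dtau2)) / (2.0 * h)
d2 = (gamma(dtau1, dtau2 + h) - gamma(dtau1, dtau2 - h)) / (2.0 * h)
dgamma_num = math.hypot(d1 * err1, d2 * err2)

# Closed form obtained by differentiating Eq. (28) by hand.
dgamma_an = c**3 * math.sqrt((L1 * err2 * phi1) ** 2 + (L2 * err1 * phi2) ** 2) \
    / (g * L1 * L2 * dh * abs(phi1 - phi2))
print(abs(dgamma_num - dgamma_an) / dgamma_an)  # agrees to rounding
```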
For the $\beta$ parameter we obtain
$\displaystyle\delta\beta$ $\displaystyle=$
$\displaystyle\frac{c^{3}}{2g^{2}L^{2}_{\gamma_{2}}L^{2}_{\gamma^{\prime}_{2}}\Delta
h^{2}\left[\phi(R_{1})-\phi(R_{2})\right]^{2}}$
$\displaystyle\left(L^{2}_{\gamma_{2}}\delta\Delta^{2}\tau_{2}\left[L_{\gamma_{2}}\phi(R_{1})\left(c^{2}gL_{\gamma^{\prime}_{2}}\Delta
h+4\phi(R_{1})(c^{3}\Delta\tau_{2}-gL_{\gamma^{\prime}_{2}}\Delta
h)\right)\right.\right.$
$\displaystyle\left.\left.-L_{\gamma^{\prime}_{2}}\left(c^{2}gL_{\gamma_{2}}\Delta
h+4(c^{3}\Delta\tau_{1}-gL_{\gamma_{2}}\Delta
h)\phi(R_{1})\right)\phi(R_{2})\right]^{2}\right.$
$\displaystyle\left.+L^{2}_{\gamma^{\prime}_{2}}\delta\Delta^{2}\tau_{1}\left[L_{\gamma^{\prime}_{2}}\phi(R_{2})\left((c^{2}gL_{\gamma_{2}}\Delta
h+4\phi(R_{2})(c^{3}\Delta\tau_{1}-gL_{\gamma_{2}}\Delta
h)\right)\right)\right.$
$\displaystyle\left.\left.-L_{\gamma_{2}}\left(c^{2}gL_{\gamma_{2}}\Delta
h+4(c^{3}\Delta\tau_{1}-gL_{\gamma_{2}}\Delta
h)\phi(R_{2})\right)\phi(R_{1})\right]^{2}\right)^{1/2}.$
The relative error for the parameter $\gamma$, in terms of the uncertainties
$\delta\Delta\tau_{1}$ and $\delta\Delta\tau_{2}$, reads
$\displaystyle\frac{\delta\gamma}{\gamma}$ $\displaystyle=$
$\displaystyle\frac{\sqrt{c^{6}\left(L^{2}_{\gamma_{2}}\delta^{2}\Delta\tau_{2}\phi^{2}(R_{1})+L^{\prime
2}_{\gamma_{2}}\delta^{2}\Delta\tau_{1}\phi^{2}(R_{2})\right)}}{L^{\prime}_{\gamma_{2}}\left(gL_{\gamma_{2}}\Delta
h-c^{3}\Delta\tau_{1}\right)\phi(R_{2})-L_{\gamma_{2}}\left(gL^{\prime}_{\gamma_{2}}\Delta
h-c^{3}\Delta\tau_{2}\right)\phi(R_{1})},$ (43)
or in terms of the proper area
$\displaystyle\frac{\delta\gamma}{\gamma}$ $\displaystyle=$
$\displaystyle\frac{\sqrt{c^{6}\left(A_{1}^{2}\delta^{2}\Delta\tau_{2}\phi^{2}(R_{1})+A_{2}^{2}\delta^{2}\Delta\tau_{1}\phi^{2}(R_{2})\right)}}{A_{2}\left(gA_{1}-c^{3}\Delta\tau_{1}\right)\phi(R_{2})-A_{1}\left(gA_{2}-c^{3}\Delta\tau_{2}\right)\phi(R_{1})}.$
(44)
The relative error for the parameter $\beta$ is given by
$\displaystyle\frac{\delta\beta}{\beta}$ $\displaystyle=$
$\displaystyle-\left(c^{3}\left(2\Delta\tau_{2}L_{\gamma_{2}}^{2}\phi(R_{1})^{2}\left(c^{3}\Delta\tau_{2}-2\Delta
hgL^{\prime}_{\gamma_{2}}\right)\right.\right.$ (45)
$\displaystyle\left.\left.+L_{\gamma_{2}}L^{\prime}_{\gamma_{2}}\phi(R_{1})\left(4\phi(R_{2})\left(c^{3}(-\Delta\tau_{1})\Delta\tau_{2}+\Delta
h\Delta\tau_{2}gL_{\gamma_{2}}+\Delta
h\Delta\tau_{1}gL^{\prime}_{\gamma_{2}}\right)\right.\right.\right.$
$\displaystyle\left.\left.\left.+c^{2}\Delta
hg(\Delta\tau_{2}L_{\gamma_{2}}-\Delta\tau_{1}L^{\prime}_{\gamma_{2}})\right)+L^{\prime}_{\gamma_{2}}\phi(R_{2})\left(2\Delta\tau_{1}L^{\prime}_{\gamma_{2}}\phi(R_{2})\left(c^{3}\Delta\tau_{1}-2\Delta
hgL_{\gamma_{2}}\right)\right.\right.\right.$
$\displaystyle\left.\left.\left.+c^{2}\Delta
hgL_{\gamma_{2}}(\Delta\tau_{1}L^{\prime}_{\gamma_{2}}-\Delta\tau_{2}L_{\gamma_{2}})\right)\right)\right)^{-1}$
$\displaystyle\times\left[c^{6}\left(\delta\Delta\tau_{2}^{2}L^{2}_{\gamma_{2}}\left(L_{\gamma_{2}}\phi(R_{1})\left(\phi(R_{1})\left(4c^{3}\Delta\tau_{2}-4\Delta
hgL^{\prime}_{\gamma_{2}}\right)+c^{2}\Delta
hgL^{\prime}_{\gamma_{2}}\right)\right.\right.\right.$
$\displaystyle\left.\left.\left.-L^{\prime}_{\gamma_{2}}\phi(R_{2})\left(\phi(R_{1})\left(4c^{3}\Delta\tau_{1}-4\Delta
hgL_{\gamma_{2}}\right)+c^{2}\Delta
hgL_{\gamma_{2}}\right)\right)^{2}\right.\right.$
$\displaystyle\left.\left.+\delta\Delta\tau_{1}^{2}L^{{}^{\prime}2}_{\gamma_{2}}\left(L^{\prime}_{\gamma_{2}}\phi(R_{2})\left(\phi(R_{2})\left(4c^{3}\Delta\tau_{1}-4\Delta
hgL_{\gamma_{2}}\right)+c^{2}\Delta
hgL_{\gamma_{2}}\right)\right.\right.\right.$
$\displaystyle\left.\left.\left.-L_{\gamma_{2}}\phi(R_{1})\left(\phi(R_{2})\left(4c^{3}\Delta\tau_{2}-4\Delta
hgL^{\prime}_{\gamma_{2}}\right)+c^{2}\Delta
hgL^{\prime}_{\gamma_{2}}\right)\right)^{2}\right)\right]^{1/2}.$
This expression can be written in terms of the proper area $A$ of each array
as
$\displaystyle\frac{\delta\beta}{\beta}$ $\displaystyle=$
$\displaystyle-\left(c^{3}\left(2\Delta\tau_{2}A_{1}^{2}\phi(R_{1})^{2}\left(c^{3}\Delta\tau_{2}-2gA_{2}\right)\right.\right.$
(46)
$\displaystyle\left.\left.+A_{1}A_{2}\phi(R_{1})\left(4\phi(R_{2})\left(c^{3}(-\Delta\tau_{1})\Delta\tau_{2}+\Delta\tau_{2}gA_{1}+\Delta\tau_{1}gA_{2}\right)+c^{2}g(\Delta\tau_{2}A_{1}-\Delta\tau_{1}A_{2})\right)\right.\right.$
$\displaystyle\left.\left.+A_{2}\phi(R_{2})\left(2\Delta\tau_{1}A_{2}\phi(R_{2})\left(c^{3}\Delta\tau_{1}-2gA_{1}\right)+c^{2}gA_{1}(\Delta\tau_{1}A_{2}-\Delta\tau_{2}A_{1})\right)\right)\right)^{-1}$
$\displaystyle\times\left[c^{6}\left(\delta\Delta\tau_{2}^{2}A_{1}^{2}\left(A_{1}\phi(R_{1})\left(\phi(R_{1})\left(4c^{3}\Delta\tau_{2}-4gA_{2}\right)+c^{2}gA_{2}\right)\right.\right.\right.$
$\displaystyle\left.\left.\left.-A_{2}\phi(R_{2})\left(\phi(R_{1})\left(4c^{3}\Delta\tau_{1}-4gA_{1}\right)+c^{2}gA_{2}\right)\right)^{2}\right.\right.$
$\displaystyle\left.\left.+\delta\Delta\tau_{1}^{2}A_{2}^{2}\left(A_{2}\phi(R_{2})\left(\phi(R_{2})\left(4c^{3}\Delta\tau_{1}-4gA_{1}\right)+c^{2}gA_{1}\right)\right.\right.\right.$
$\displaystyle\left.\left.\left.-A_{1}\phi(R_{1})\left(\phi(R_{2})\left(4c^{3}\Delta\tau_{2}-4gA_{2}\right)+c^{2}gA_{2}\right)\right)^{2}\right)\right]^{1/2}.$
Fig. 5 shows the relative error of the parameters $\gamma$ and $\beta$
according to Eqs. (44) and (46), respectively, in terms of the proper area
of the first configuration. In the simulation we consider an external clock
that measures the arrival times of the photons with an uncertainty
$\delta\tau_{\rm clock}=10^{-18}\;[s]$, the same for both configurations of
the HOM array. To simplify the analysis, we set the area of the second array
to $A_{2}=\eta\times A_{1}$, with $\eta=1,1/2,1/4,1/8$. As the figure shows,
the lowest relative error for the parameters $\gamma$ and $\beta$ is reached
with $A_{1}=A_{2}$. Fig. 6 shows the uncertainty $\delta\Delta\tau$ necessary
to obtain a relative error of $10^{-5}$ for the parameters $\gamma$ and
$\beta$. In this case, the configuration of the HOM array that tolerates the
largest $\delta\Delta\tau$ is the one in which both interferometers have the
same area. From the figure, the required uncertainty $\delta\Delta\tau$ for
$\gamma$ is $\sim 10^{-22}\;[s]$, while for $\beta$ it is
$\sim 10^{-31}\;[s]$; that is, reaching the current uncertainty on $\beta$
demands a much smaller error in the arrival times than for $\gamma$.
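The trend in Fig. 5 can be reproduced qualitatively from error propagation: with equal clock errors, Eq. (30) gives $\delta\gamma\propto\sqrt{\phi^{2}(R_{1})/A_{2}^{2}+\phi^{2}(R_{2})/A_{1}^{2}}$, so shrinking $\eta=A_{2}/A_{1}$ inflates the error. A sketch with assumed Earth-like numbers (illustrative only; not the paper's configuration):

```python
import math

# Assumed, illustrative constants (not the paper's configuration).
c, g = 2.998e8, 9.8
phi1, phi2 = -6.25e7, -6.00e7   # gravitational potentials [m^2/s^2]
dtau_err = 1.0e-18              # clock uncertainty [s], equal for both arrays
A1 = 8.0e9                      # proper area of the first array [m^2]

def dgamma(A1, A2):
    # Gaussian propagation of equal clock errors through Eq. (30).
    return c**3 * dtau_err * math.sqrt(phi1**2 / A2**2 + phi2**2 / A1**2) \
        / (g * abs(phi1 - phi2))

for eta in (1.0, 0.5, 0.25, 0.125):
    print(f"eta = {eta:<6} dgamma = {dgamma(A1, eta * A1):.3e}")
# The error grows as eta shrinks: equal areas (eta = 1) give the smallest dgamma.
```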
Figure 5: Relative error of the parameters $\gamma$ and $\beta$, as measured
by an external clock placed at the detectors, through Eqs. (26) and (27), in
terms of the area $A=\Delta h\times L_{\gamma_{2}}$ of the array. (a) Relative
error for the parameter $\gamma$; the clock employed has an uncertainty
$\delta\tau_{\rm clock}=10^{-18}\;[\rm s]$. (b) Relative error for the
parameter $\beta$, with $\delta\tau_{\rm clock}=10^{-31}\;[\rm s]$. We assume
that in both HOM array configurations the clocks have the same uncertainty.
Each array configuration is characterized by an area: $A_{1}$ is used in the
measurement of the first time delay and $A_{2}=\eta A_{1}$ is used in the
second array, where $\eta$ is a constant. The blue continuous line represents
$\eta=1/8$, the red continuous line $\eta=1/4$, the green continuous line
$\eta=1/2$, and the brown continuous line $\eta=1$.
Figure 6: Uncertainty on the measurement of the temporal delay necessary to
obtain the current value of the uncertainty of $\gamma$ and $\beta$, in terms
of the area $A=\Delta h\times L_{\gamma_{2}}$ of the array. (a) Uncertainty of
$\Delta\tau$ for the parameter $\gamma$. (b) Uncertainty of $\Delta\tau$ for
the parameter $\beta$. We assume that in both HOM array configurations the
uncertainty in the arrival time is the same. Each array configuration is
characterized by an area: $A_{1}$ is used for measuring the first time delay
and $A_{2}=\eta A_{1}$ is used in the second array, where $\eta$ is a
constant. The blue continuous line represents $\eta=1/8$, the red continuous
line $\eta=1/4$, the green continuous line $\eta=1/2$, and the brown
continuous line $\eta=1$.
### 5.2 Errors in the coincidence detection probability
Considering the expectation value of the coincidence detection observable for
two photons, Eq. (20), the uncertainty of the parameter $\gamma$ reads
$\displaystyle\delta\gamma$ $\displaystyle=$
$\displaystyle\sqrt{\frac{c^{6}\left(\frac{\delta
S_{1}^{2}\phi(R_{2})^{2}}{A_{1}^{2}(1-2\langle
S_{1}\rangle)^{2}\sigma_{1}^{2}\ln\left(\frac{1}{1-2\langle
S_{1}\rangle}\right)}+\frac{\delta
S_{2}^{2}\phi(R_{1})^{2}}{A_{2}^{2}(1-2\langle
S_{2}\rangle)^{2}\sigma_{2}^{2}\ln\left(\frac{1}{1-2\langle
S_{2}\rangle}\right)}\right)}{g^{2}(\Delta\phi_{12})^{2}}},$ (47)
and for the parameter $\beta$
$\displaystyle\delta\beta$ $\displaystyle=$
$\displaystyle\frac{1}{2A_{1}^{2}A_{2}^{2}g^{2}\sigma_{1}^{2}\sigma_{2}^{2}(\Delta\phi_{12})^{2}}$
(48) $\displaystyle\times\left(c^{6}\left(\frac{A_{1}^{2}\delta
S_{2}^{2}\sigma_{1}^{2}}{(1-2\langle
S_{2}\rangle)^{2}\ln\left(\frac{1}{1-2\langle
S_{2}\rangle}\right)}\left(A_{1}\sigma_{1}\phi(R_{1})\left(\phi(R_{1})\left(4c^{3}\sqrt{\ln\left(\frac{1}{1-2\langle
S_{2}\rangle}\right)}-4A_{2}g\sigma_{2}\right)\right.\right.\right.\right.$
$\displaystyle\left.\left.\left.\left.+A_{2}c^{2}g\sigma_{2}\right)-A_{2}\sigma_{2}\phi(R_{2})\left(\phi(R_{1})\left(4c^{3}\sqrt{\ln\left(\frac{1}{1-2\langle
S_{1}\rangle}\right)}-4A_{1}g\sigma_{1}\right)+A_{1}c^{2}g\sigma_{1}\right)\right)^{2}\right.\right.$
$\displaystyle\left.\left.+\frac{A_{2}^{2}\delta
S_{1}^{2}\sigma_{2}^{2}}{(1-2\langle
S_{1}\rangle)^{2}\ln\left(\frac{1}{1-2\langle
S_{1}\rangle}\right)}\left(A_{2}\sigma_{2}\phi(R_{2})\left(4\phi(R_{2})\left(A_{1}g\sigma_{1}-c^{3}\sqrt{\ln\left(\frac{1}{1-2\langle
S_{1}\rangle}\right)}\right)-A_{1}c^{2}g\sigma_{1}\right)\right.\right.\right.$
$\displaystyle\left.\left.\left.+A_{1}\sigma_{1}\phi(R_{1})\left(\phi(R_{2})\left(4c^{3}\sqrt{\ln\left(\frac{1}{1-2\langle
S_{2}\rangle}\right)}-4A_{2}g\sigma_{2}\right)+A_{2}c^{2}g\sigma_{2}\right)\right)^{2}\right)\right)^{1/2}.$
Considering the errors of Eqs. (47) and (48) together with Eqs.
(LABEL:ParamterGamma2) and (36), we can calculate the relative error of the
parameters $\gamma$ and $\beta$; we do not show the expression because it is
not enlightening. In Fig. 7 we show the relative error of the parameters
$\gamma$ and $\beta$. We choose the same configurations of the HOM arrays as
in the previous section, as indicated in the figure. The lowest relative error
for both parameters is reached when both areas are equal. In our simulations
we employ states with wavelength $\lambda=0.995\;[\mu m]$ and bandwidth
$\delta\lambda=0.034\;[\mu m]$, as in current SPDC sources [28], and an error
in the measurement of the coincidence operator $\delta S=10^{-18}$. The
resulting relative errors of $\gamma$ and $\beta$ are $\sim 10^{-14}$ and
$\sim 10^{-6}$, respectively. Fig. 8 shows the uncertainty $\delta S$ in the
measurement of the coincidence detection operator required to reach the errors
currently reported in the literature, in terms of the effective area of one of
the arrays. As in the previous section, taking both configurations of the
interferometer with the same proper area yields the largest tolerable
uncertainty $\delta S$: for $\gamma$, $\delta S\sim 10^{-7}$, and for $\beta$,
$\delta S\sim 10^{-16}$.
Figure 7: Relative error of the parameters $\gamma$ and $\beta$ measured
through the probability of detection, Eq. (17), in terms of the area $A=\Delta
h\times L_{\gamma_{2}}$ of the array. (a) Relative error for the parameter
$\gamma$. (b) Relative error for the parameter $\beta$. For both simulations
the uncertainty in the measurement of the probability of detection is $\delta
S=10^{-18}$, assumed to be the same in both HOM array configurations. $A_{1}$
is the area of the first array configuration, in which the first time delay is
detected; the second array has area $A_{2}=\eta A_{1}$, where $\eta$ is a
constant. The blue continuous line represents $\eta=1/8$, the red continuous
line $\eta=1/4$, the green continuous line $\eta=1/2$, and the brown
continuous line $\eta=1$.
Figure 8: Uncertainty on the measurement of the coincidence observable $S$,
Eq. (18), necessary to obtain the current value of the uncertainty of $\gamma$
and $\beta$, in terms of the area $A=\Delta h\times L_{\gamma_{2}}$ of the
array. (a) Uncertainty of $S$ for the parameter $\gamma$. (b) Uncertainty of
$S$ for the parameter $\beta$. We assume that in both HOM array configurations
the uncertainty in the probability of detection is the same. $A_{1}$ is the
area of the first array configuration, in which the first time delay is
detected; the second array has area $A_{2}=\eta A_{1}$, where $\eta$ is a
constant. The blue continuous line represents $\eta=1/8$, the red continuous
line $\eta=1/4$, the green continuous line $\eta=1/2$, and the brown
continuous line $\eta=1$.
### 5.3 Relative error employing a two-mode squeezed-vacuum state
Here, we calculate the relative error of the parameters $\gamma$ and $\beta$
given in Eqs. (LABEL:gammaSqueezing) and (LABEL:betaSqueezing), respectively.
The uncertainty of the parameter $\gamma$ is given by
$\displaystyle\delta\gamma$ $\displaystyle=$
$\displaystyle\frac{\sqrt{2}c^{3}}{A_{1}A_{2}g\sigma(\phi(R_{1})-\phi(R_{2}))}$
$\displaystyle\times\left(A_{1}\phi(R_{1})\ln\left(1-2\langle
S_{2}\rangle\left(\tanh^{2}(r_{2})+1\right)^{2}\right)-A_{2}\phi(R_{2})\ln\left(1-2\langle
S_{1}\rangle\left(\tanh^{2}(r_{1})+1\right)^{2}\right)\right)-1$
where $\delta S_{1}$ and $\delta S_{2}$ are the uncertainties on the
measurements of the probabilities $\langle S_{1}\rangle$ and $\langle
S_{2}\rangle$, respectively, and $r_{1}$ and $r_{2}$ are the squeezing
parameters employed in the generation of the single-mode squeezed-vacuum
states for the first and second configuration of the HOM array, respectively.
For the parameter $\beta$ we obtain
$\displaystyle\delta\beta$ $\displaystyle=$
$\displaystyle\frac{c^{3}}{2A_{1}^{2}A_{2}^{2}g^{2}\sigma^{2}(\phi(R_{1})-\phi(R_{2}))^{2}}$
(50) $\displaystyle\left(-4c^{3}\left(A_{1}\phi(R_{1})\ln\left(1-2\langle
S_{2}\rangle\left(\tanh^{2}(r_{2})+1\right)^{2}\right)-A_{2}\phi(R_{2})\ln\left(1-2\langle
S_{1}\rangle\left(\tanh^{2}(r_{1})+1\right)^{2}\right)\right)^{2}\right.$
$\displaystyle\left.+\sqrt{2}A_{1}A_{2}c^{2}g\sigma(\phi(R_{1})-\phi(R_{2}))\left(A_{2}\ln\left(1-2\langle
S_{1}\rangle\left(\tanh^{2}(r_{1})+1\right)^{2}\right)\right.\right.$
$\displaystyle\left.\left.-A_{1}\ln\left(1-2\langle
S_{2}\rangle\left(\tanh^{2}(r_{2})+1\right)^{2}\right)\right)\right.$
$\displaystyle\left.+4\sqrt{2}A_{1}A_{2}g\sigma(\phi(R_{1})-\phi(R_{2}))\left(A_{1}\phi(R_{1})\ln\left(1-2\langle
S_{2}\rangle\left(\tanh^{2}(r_{2})+1\right)^{2}\right)\right.\right.$
$\displaystyle\left.\left.-\phi(R_{2})\ln\left(1-2\langle
S_{1}\rangle\left(\tanh^{2}(r_{1})+1\right)^{2}\right)\right)\right).$
The relative error of the parameter $\gamma$ is obtained as the quotient of
Eqs. (LABEL:Errorgammasqz) and (LABEL:gammaSqueezing). Analogously, the
relative error of $\beta$ is given by the quotient of Eqs. (50) and
(LABEL:betaSqueezing). Fig. 9 shows the relative error for the parameters
$\gamma$ and $\beta$ in terms of the effective area of the HOM array, for the
squeezing parameter $r$ in the interval $\left[1,2\right]$ and an uncertainty
$\delta S=10^{-13}$. Moreover, we take the squeezing parameters of both
configurations of the array to be equal to $r$. We obtain a lower relative
error for both parameters when $r=1$; that is, increasing the squeezing
parameter does not reduce the relative errors. Here we have employed the same
analysis as before, i.e., the area of the second configuration of the array is
given by $A_{2}=\eta A_{1}$. Our simulation shows that the lowest relative
error is found for $\eta=1/4$, i.e., when the area of the second configuration
of the HOM array is a quarter of the first. As in the previous section, we
have employed a mean wavelength $\lambda=0.995\;[\mu m]$ and bandwidth
$\delta\lambda=0.034\;[\mu m]$. In Fig. 10 we show the relative error of the
parameters $\gamma$ and $\beta$ in terms of the squeezing parameter $r$. In
this case, we consider a fixed area $A_{\rm max}=8\times 10^{3}\;[km^{2}]$ and
the same wavelength $\lambda$ and bandwidth $\delta\lambda$ as before; the
optimal configuration then corresponds to $r\in\left[0.8,1.0\right]$, for
$\eta=1/4$ and $1/2$.
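The existence of an optimal squeezing near $r\approx 0.9$ can be traced to the sensitivity prefactor of the forward model implied by Eq. (37): the $\Delta\tau$-dependent part of $\langle S\rangle$ is multiplied by $\tanh^{2}(r)/(2\cosh^{2}(r))=\sinh^{2}(r)/(2\cosh^{4}(r))$, which peaks at $\tanh^{2}(r)=1/2$, i.e. $r\approx 0.88$. A quick numerical check (our own decomposition, not from the paper):

```python
import math

def sensitivity_prefactor(r):
    # tanh(r)**2 / (2*cosh(r)**2) = sinh(r)**2 / (2*cosh(r)**4):
    # the factor multiplying the dtau-dependent part of <S> implied by Eq. (37).
    return math.sinh(r) ** 2 / (2.0 * math.cosh(r) ** 4)

# Scan r and locate the maximum of the prefactor.
rs = [i / 1000.0 for i in range(1, 3001)]
r_opt = max(rs, key=sensitivity_prefactor)
print(round(r_opt, 3))  # ~0.881, i.e. tanh(r_opt)**2 ~ 1/2
```

This is consistent with the optimal interval $r\in[0.8,1.0]$ found in the simulations.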
We now consider two squeezing parameters, $r_{1}$ and $r_{2}$, belonging to
the first and second array configuration, respectively (see Fig. 11).
Furthermore, we assume a fixed area $A_{\rm max}$ and the same SPDC source as
before. In this case, the optimal values of $r_{1}$ and $r_{2}$ belong to the
interval $\left[0.5,1.0\right]$. With this configuration, $\gamma$ and $\beta$
achieve relative errors of $\sim 10^{-11}$.
To explore other array areas, in Fig. 12 we show a contour plot of the
relative error of the parameters $\gamma$ and $\beta$ in terms of the area of
the array and the squeezing parameter. For simplicity, we again consider the
squeezing parameter to be equal to $r$ in both configurations, and we take the
second area of the array to be a quarter of the first. As we can see, for a
larger area the relative error of both parameters is constant as the squeezing
parameter varies; in this regime, $\delta\gamma/\gamma$ and
$\delta\beta/\beta$ are $\sim 10^{-10}$. In this analysis we employed an
uncertainty of the measurement of $S$ of $\delta S=10^{-13}$. To find the
uncertainty necessary to reach the current relative errors of $\gamma$ and
$\beta$, we simulate the uncertainty $\delta S$ in terms of the area and the
squeezing parameter. For the case in which $r$ is fixed and the area is
optimized (see Fig. 13), the required uncertainty $\delta S$ is $\sim 10^{-8}$
for both $\gamma$ and $\beta$. On the other hand, when the area is fixed but
$r$ varies (see Fig. 14), we need an uncertainty $\delta S\sim 10^{-7}$ with a
squeezing parameter $r=1$ and an area $A_{\rm max}=8\times 10^{3}\;[km^{2}]$.
Figure 9: Relative error of the parameters $\gamma$ and $\beta$ measured
through the probability of detection, Eq. (24), in terms of the area $A=\Delta
h\times L_{\gamma_{2}}$ of the array. (a) Relative error for the parameter
$\gamma$. (b) Relative error for the parameter $\beta$. For both simulations
the uncertainty in the measurement of the probability of detection is $\delta
S=10^{-13}$, assumed to be the same in both HOM array configurations. Given an
area $A_{1}$ for the detection of the first time delay, the second area is
$A_{2}=\eta A_{1}$ with $\eta=1/4$. The blue continuous line represents the
squeezing parameter $r=1$, the red continuous line $r=2$. The mean wavelength
of both photons is $\lambda=0.995\;[\mu m]$ with a bandwidth
$\delta\lambda=0.034\;[\mu m]$.
Figure 10: Relative error of the parameters $\gamma$ and $\beta$ measured
through the probability of detection, Eq. (24), in terms of the squeezing
parameter $r$. (a) Relative error for the parameter $\gamma$. (b) Relative
error for the parameter $\beta$. For both simulations the uncertainty in the
measurement of the probability of detection is $\delta S=10^{-13}$, assumed to
be the same in both HOM array configurations. Given an area $A_{1}$ for the
detection of the first time delay, the second area is $A_{2}=\eta A_{1}$,
where $\eta$ is a constant. The continuous blue line represents $\eta=1/8$,
the continuous red line $\eta=1/4$, and the continuous green line $\eta=1/2$;
in (a), the continuous brown line represents $\eta=1$. The mean wavelength of
both photons is $\lambda=0.995\;[\mu m]$ with a bandwidth
$\delta\lambda=0.034\;[\mu m]$.
Figure 11: Contour plot of the relative error of the parameters $\gamma$ and
$\beta$ measured through the probability of detection, Eq. (24), in terms of
the squeezing parameters $r_{1}$ and $r_{2}$ employed in the first and second
configuration of the HOM array, respectively. (a) Relative error for the
parameter $\gamma$. (b) Relative error for the parameter $\beta$. The
uncertainty in the measurement of the probability of detection is $\delta
S=10^{-13}$, assumed to be the same in both HOM array configurations. $A_{1}$
is the area of the first array configuration, in which the first time delay is
detected; the second array has area $A_{2}=\eta A_{1}$, with $\eta=1/4$. The
mean wavelength of both photons is $\lambda=0.995\;[\mu m]$ with a bandwidth
$\delta\lambda=0.034\;[\mu m]$.
Figure 12: Contour plot of the relative error of the parameters $\gamma$ and
$\beta$ measured through the probability of detection, Eq. (24), in terms of
the squeezing parameter $r$ (assumed to be equal in both configurations of the
arrays) and the area $A=\Delta h\times L_{\gamma_{2}}$. The uncertainty in the
measurement of the probability of detection is $\delta S=10^{-13}$, assumed to
be the same in both HOM array configurations. $A_{1}$ is the area of the first
array configuration, in which the first time delay is detected; the second
array has area $A_{2}=\eta A_{1}$, with $\eta=1/4$. The mean wavelength of
both photons is $\lambda=0.995\;[\mu m]$ with a bandwidth
$\delta\lambda=0.034\;[\mu m]$.
Figure 13: Contour plot of the relative error of the parameter $\gamma$ and
$\beta$ measured through the probability of detection Eq. (24), for a
squeezing parameter $r=1$ (assumed to be equal in both configurations of the
arrays), in terms of the area $A=\Delta h\times L_{\gamma_{2}}$ and the
uncertainty of the measurement of the coincidence operator $\delta S$ Eq.
(18). We assume that in both HOM array configurations, the uncertainty in the
probability of detection is the same. In the array configuration, $A_{1}$ is
the area for the first array configuration, in which the detection of the
first time delay is done. The second array has area $A_{2}=\eta A_{1}$, with
$\eta=1/4$. The mean wavelength of both photons is $\lambda=0.995\;[\mu m]$
and a bandwidth $\delta\lambda=0.034\;[\mu m]$.
---
(a)
(b)
Figure 14: Contour plot of the relative error of the parameters $\gamma$ and
$\beta$ measured through the probability of detection Eq. (24), in terms of
the squeezing parameter $r$ and the uncertainty of the measurement of the
coincidence operator $S$ Eq. (18). We assume that in both HOM array
configurations, the uncertainty in the probability of detection is the same.
In the array configuration, $A_{1}$ is the area for the first array
configuration, in which the detection of the first time delay is done. The
second array has area $A_{2}=\eta A_{1}$, with $\eta=1/4$ and $A_{1}=8\times
10^{3}\,[\rm km^{2}]$.
## 6 Summary and conclusions
We have studied the problem of estimating values of the parameters $\gamma$
and $\beta$ of the post-Newtonian expansion. For this purpose, we have
considered a basic setup consisting of a HOM interferometer whose arms are at
different gravitational potentials. We consider the measurement of two
different arrival times in order to obtain an estimation of the PN parameters
$\gamma$ and $\beta$. Moreover, we study the case in which one measures the
coincident detection probability. We show that the latter method leads to an
improvement in relative errors when compared with classical approaches [6]. We
consider two-photon separable states and two-mode squeezed-vacuum states to
estimate the values of $\gamma$ and $\beta$. Table 1 shows that relative
errors obtained for $\gamma$ and $\beta$ are approximately of the same order
of magnitude if we consider two-mode squeezed-vacuum states. Moreover,
employing this state we obtain a lower uncertainty in the estimation of the PN
parameters even with a higher uncertainty in the measurement of the
coincidence operator (see table 2). We emphasize that these schemes rely on
the use of specific quantum states of light, which impose conditions on the
sources employed to generate the quantum states. Fortunately, these sources
are already available and are known as ultra-broadband SPDC sources [28].
One of the advantages of the protocol discussed here is its versatility in
terms of the initial states that enter the interferometer. To improve the
detection and sensitivity we can employ NOON states or cat states, among many
others. An obvious modification of this scheme would be the use of a Mach-Zehnder interferometer together with the measurement of a different observable, instead of the coincidence-detection observable, for example the relative number of photons arriving at the output ports [29]. A more realistic implementation
should consider loss in the detection [30] or the probability of no detection
in the HOM array [13, 9], and include the relative velocity of the satellites
[31, 19].
Method | $\delta\gamma/\gamma$ | $\delta\beta/\beta$ | $\delta S$
---|---|---|---
Classical tests | $\sim 10^{-5}$ | $\sim 10^{-5}$ | experiment dependent
Measurement of $\Delta\tau_{i}$ ($i=1,2$) | $\sim 10^{-2}$ | $\sim 10^{-11}$ | $\sim 10^{-18}$
Separable bipartite state | $\sim 10^{-16}$ | $\sim 10^{-7}$ | $\sim 10^{-18}$
Two-mode squeezing ($r=1$) | $\sim 10^{-10}$ | $\sim 10^{-11}$ | $\sim 10^{-13}$
Table 1: Relative errors for $\gamma$ and $\beta$ given an area of the first array $A_{\rm max}=8\times 10^{3}\,[\rm km^{2}]$
Method | $\delta S$ for $\gamma$ | $\delta S$ for $\beta$
---|---|---
Measurement of $\Delta\tau_{i}$ ($i=1,2$) | $\sim 10^{-20}$ | $\sim 10^{-15}$
Separable bipartite state | $\sim 10^{-10}$ | $\sim 10^{-16}$
Two-mode squeezing ($r=1$) | $\sim 10^{-8}$ | $\sim 10^{-8}$
Table 2: Uncertainty in the measurement of the coincidence operator necessary to achieve the current uncertainties of $\gamma$ and $\beta$
## Appendix A Two-mode squeezed-vacuum states
A two-mode squeezed-vacuum state is given by Eq. (22). The output state is given by
$\displaystyle|\zeta_{\rm out}\rangle$ $\displaystyle=$
$\displaystyle\frac{1}{\cosh(r)}\iint
d\omega_{1}d\omega_{2}f(\omega_{1},\omega_{2})\sum_{n=0}^{\infty}(-1)^{n}e^{in\theta}\left(\tanh(r)\right)^{n}\left[\frac{c^{\dagger}_{\omega_{1}}}{\sqrt{2}}+\frac{d^{\dagger}_{\omega_{1}}}{\sqrt{2}}\right]^{n}e^{-i\omega_{1}\Delta\tau_{\gamma_{1}}}$
$\displaystyle\left[\frac{c^{\dagger}_{\omega_{2}}}{\sqrt{2}}-\frac{d^{\dagger}_{\omega_{2}}}{\sqrt{2}}\right]^{n}e^{-i\omega_{2}\Delta\tau_{\gamma_{2}}}|0\rangle_{12},$
where $\Delta\tau_{\gamma_{1}}$ and $\Delta\tau_{\gamma_{2}}$ are the time
delays for the paths $\gamma_{1}$ and $\gamma_{2}$, respectively. In order to
simplify the probability of detection, we restrict ourselves to the $n=1$ term
of the squeezed state; under this restriction, the probability of coincident
detection is given by
$\displaystyle P_{\rm cd}$ $\displaystyle=$ $\displaystyle\langle\zeta_{\rm
out}|\hat{M}_{c}\otimes\hat{M}_{d}|\zeta_{\rm out}\rangle,$ (52)
where $\hat{M}_{c}=\int d\omega c^{\dagger}_{\omega}|0\rangle\langle
0|c_{\omega}$ and $\hat{M}_{d}=\int
d\omega^{\prime}d^{\dagger}_{\omega^{\prime}}|0\rangle\langle
0|d_{\omega^{\prime}}$. The amplitude of probability for $n=1$ reads
$\displaystyle\langle 1|\langle 1|\zeta_{\rm out}\rangle$ $\displaystyle=$
$\displaystyle\frac{1}{\cosh(r)}\iint
d\omega_{1}d\omega_{2}f(\omega_{1},\omega_{2})^{\ast}$ $\displaystyle\langle
0|c_{\omega}d_{\omega^{\prime}}\left[e^{-i\omega_{1}\Delta\tau_{\gamma_{1}}}e^{-i\omega_{2}\Delta\tau_{\gamma_{2}}}|0\rangle|0\rangle-e^{i\theta}\frac{\tanh(r)}{2}\left(c^{\dagger}_{\omega_{1}}+d^{\dagger}_{\omega_{1}}\right)\
\left(c^{\dagger}_{\omega_{2}}-d^{\dagger}_{\omega_{2}}\right)|0\rangle\right]$
$\displaystyle=$ $\displaystyle-\iint
d\omega_{1}d\omega_{2}f(\omega_{1},\omega_{2})^{\ast}\frac{e^{i\theta}\tanh(r)}{2\cosh(r)}\langle
0|c_{\omega}d_{\omega^{\prime}}\left(c^{\dagger}_{\omega_{1}}+d^{\dagger}_{\omega_{1}}\right)\left(c^{\dagger}_{\omega_{2}}-d^{\dagger}_{\omega_{2}}\right)|0\rangle.$
The non-vanishing terms in the operator are the following
$\displaystyle\langle
0|-c_{\omega}c^{\dagger}_{\omega_{1}}d_{\omega^{\prime}}d^{\dagger}_{\omega^{\prime}}d^{\dagger}_{\omega_{2}}+c_{\omega}c^{\dagger}_{\omega_{2}}d_{\omega^{\prime}}d^{\dagger}_{\omega^{\prime}}d^{\dagger}_{\omega_{1}}|0\rangle$
$\displaystyle=$
$\displaystyle-\delta(\omega-\omega_{1})\delta(\omega^{\prime}-\omega_{2})+\delta(\omega-\omega_{2})\delta(\omega^{\prime}-\omega_{1}).$
Therefore, the probability of detection, in the case
$f(\omega_{1},\omega_{2})=f(\omega_{1})f(\omega_{2})$, with $f(\omega_{i})$ a
Gaussian distribution centred at $\omega_{i}$, reads
$\displaystyle P_{\rm cd}$ $\displaystyle=$
$\displaystyle\frac{1}{2}\left(\frac{\tanh(r)}{\cosh(r)}\right)^{2}\left[1-\exp\left(-\sigma^{2}\Delta\tau^{2}/2\right)\right],$
(55)
where $\sigma$ is the spectral width of the Gaussian distribution, assumed to
be equal for both photons.
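As a quick numerical check of Eq. (55), the sketch below (function and parameter names are ours, not from the text) evaluates the coincidence probability and confirms the two limits relevant to the analysis: it vanishes at zero relative delay (perfect HOM interference) and saturates at $\frac{1}{2}\left(\tanh(r)/\cosh(r)\right)^{2}$ for delays long compared to $1/\sigma$.

```python
import math

def coincidence_probability(delta_tau, r, sigma):
    """Coincidence-detection probability of Eq. (55) for the n=1 term of a
    two-mode squeezed-vacuum state:
    P_cd = (1/2) (tanh r / cosh r)^2 * [1 - exp(-sigma^2 delta_tau^2 / 2)]."""
    prefactor = 0.5 * (math.tanh(r) / math.cosh(r)) ** 2
    return prefactor * (1.0 - math.exp(-(sigma * delta_tau) ** 2 / 2.0))
```

For $\Delta\tau=0$ the photons bunch and the coincidence rate is zero; for delays much longer than $1/\sigma$ the photons become distinguishable and the rate saturates at the prefactor.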
## Acknowledgements
This work was supported by the Millennium Institute for Research in Optics
(MIRO) and FONDECYT Grant 1180558. M.R.-T. acknowledges support by CONICYT
scholarship 21150323.
## References
* [1] Gerard Auger and Eric Plagnol “An Overview of Gravitational Waves” WORLD SCIENTIFIC, 2017 DOI: 10.1142/10082
* [2] Clifford M. Will “On the unreasonable effectiveness of the post-Newtonian approximation in gravitational physics” In _Proceedings of the National Academy of Sciences_ 108.15 National Academy of Sciences, 2011, pp. 5938–5945 DOI: 10.1073/pnas.1103127108
* [3] C.M Will “The Confrontation between General Relativity and Experiment” In _Living Rev. Relativ._ 17.4, 2014 DOI: 10.12942/lrr-2014-4
* [4] Stephen M. Merkowitz “Tests of Gravity Using Lunar Laser Ranging” In _Living Reviews in Relativity_ 13.7, 2010 DOI: 10.12942/lrr-2010-7
* [5] Magdalena Zych et al. “General relativistic effects in quantum interference of photons” In _Classical and Quantum Gravity_ 29.22 IOP Publishing, 2012, pp. 224010 DOI: 10.1088/0264-9381/29/22/224010
* [6] Clifford M. Will “The Confrontation between General Relativity and Experiment” In _Living Reviews in Relativity_ 17.4, 2014 DOI: 10.12942/lrr-2014-4
* [7] Timothy Clifton “Parametrized post-Newtonian limit of fourth-order theories of gravity” In _Phys. Rev. D_ 77 American Physical Society, 2008, pp. 024041 DOI: 10.1103/PhysRevD.77.024041
* [8] C. K. Hong, Z. Y. Ou and L. Mandel “Measurement of subpicosecond time intervals between two photons by interference” In _Phys. Rev. Lett._ 59 American Physical Society, 1987, pp. 2044–2046 DOI: 10.1103/PhysRevLett.59.2044
* [9] Ashley Lyons et al. “Attosecond-resolution Hong-Ou-Mandel interferometry” In _Science Advances_ 4.5 American Association for the Advancement of Science, 2018 DOI: 10.1126/sciadv.aap9416
* [10] Jian-Wei Pan et al. “Multiphoton entanglement and interferometry” In _Rev. Mod. Phys._ 84 American Physical Society, 2012, pp. 777–838 DOI: 10.1103/RevModPhys.84.777
* [11] Yingwen Zhang et al. “Engineering two-photon high-dimensional states through quantum interference” In _Science Advances_ 2.2 American Association for the Advancement of Science, 2016 DOI: 10.1126/sciadv.1501165
* [12] B. Ndagano and A. Forbes “Entanglement distillation by Hong-Ou-Mandel interference with orbital angular momentum states” In _APL Photonics_ 4.1, 2019, pp. 016103 DOI: 10.1063/1.5079970
* [13] Y. Chen, M. Fink, F. Steinlechner et al. In _npj Quantum Information_ 5, 2019 DOI: 10.1038/s41534-019-0161-z
* [14] I. Fuentes-Schuller and R. B. Mann “Alice Falls into a Black Hole: Entanglement in Noninertial Frames” In _Phys. Rev. Lett._ 95 American Physical Society, 2005, pp. 120404 DOI: 10.1103/PhysRevLett.95.120404
* [15] Miguel Montero and Eduardo Martín-Martínez “Entanglement of arbitrary spin fields in noninertial frames” In _Physical Review A_ 84.1 American Physical Society (APS), 2011 DOI: 10.1103/physreva.84.012337
* [16] Matthias Fink et al. “Experimental test of photonic entanglement in accelerated reference frames” In _Nature Communications_ 8.1 Springer Science+Business Media LLC, 2017 DOI: 10.1038/ncomms15304
* [17] Aharon Brodutch et al. “Post-Newtonian gravitational effects in optical interferometry” In _Phys. Rev. D_ 91 American Physical Society, 2015, pp. 064041 DOI: 10.1103/PhysRevD.91.064041
* [18] Anthony Brady and Stav Haldar “Relativistic frame-dragging and the Hong-Ou-Mandel dip $-$ a primitive to gravitational effects in multi-photon quantum-interference”, 2020
* [19] Daniel R. Terno, Giuseppe Vallone, Francesco Vedovato and Paolo Villoresi “Large-scale optical interferometry in general spacetimes” In _Phys. Rev. D_ 101 American Physical Society, 2020, pp. 104052 DOI: 10.1103/PhysRevD.101.104052
* [20] S. Y. Chen and T. C. Ralph “Estimation of gravitational acceleration with quantum optical interferometers” In _Phys. Rev. A_ 99 American Physical Society, 2019, pp. 023803 DOI: 10.1103/PhysRevA.99.023803
* [21] A Delgado, W P Schleich and G Süssmann “Quantum gyroscopes and Gödel's universe: entanglement opens a new testing ground for cosmology” In _New Journal of Physics_ 4 IOP Publishing, 2002, pp. 37–37 DOI: 10.1088/1367-2630/4/1/337
* [22] Endre Kajari, Reinhold Walser, Wolfgang P. Schleich and Aldo Delgado “Sagnac Effect in Gödel’s Universe” In _Frontiers in Optics_ Optical Society of America, 2006, pp. JWD48 DOI: 10.1364/FIO.2006.JWD48
* [23] M Rivera-Tapia, A Delgado and G Rubilar “Weak gravitational field effects on large-scale optical interferometric Bell tests” In _Classical and Quantum Gravity_ 37.19 IOP Publishing, 2020, pp. 195001 DOI: 10.1088/1361-6382/ab8a60
* [24] Jan Kohlrus, David Edward Bruschi and Ivette Fuentes “Quantum-metrology estimation of spacetime parameters of the Earth outperforming classical precision” In _Phys. Rev. A_ 99 American Physical Society, 2019, pp. 032350 DOI: 10.1103/PhysRevA.99.032350
* [25] S. P. Kish and T. C. Ralph “Quantum metrology in the Kerr metric” In _Phys. Rev. D_ 99 American Physical Society, 2019, pp. 124015 DOI: 10.1103/PhysRevD.99.124015
* [26] Graham M Shore “Quantum gravitational optics” In _Contemporary Physics_ 44.6 Taylor & Francis, 2003, pp. 503–521 DOI: 10.1080/00107510310001617106
* [27] Clifford M Will “Theory and experiment in gravitational physics” Cambridge university press, 2018
* [28] Aron Vanselow, Paul Kaufmann, Helen M. Chrzanowski and Sven Ramelow “Ultra-broadband SPDC for spectrally far separated photon pairs” In _Opt. Lett._ 44.19 OSA, 2019, pp. 4638–4641 DOI: 10.1364/OL.44.004638
* [29] Luca Pezzé and Augusto Smerzi “Mach-Zehnder Interferometry at the Heisenberg Limit with Coherent and Squeezed-Vacuum Light” In _Physical Review Letters_ 100.7 American Physical Society (APS), 2008 DOI: 10.1103/physrevlett.100.073601
* [30] S. P. Kish and T. C. Ralph “Estimating spacetime parameters with a quantum probe in a lossy environment” In _Phys. Rev. D_ 93 American Physical Society, 2016, pp. 105013 DOI: 10.1103/PhysRevD.93.105013
* [31] David Rideout et al. “Fundamental quantum optics experiments conceivable with satellites—reaching relativistic distances and velocities” In _Classical and Quantum Gravity_ 29.22 IOP Publishing, 2012, pp. 224011 DOI: 10.1088/0264-9381/29/22/224011
# tf.data: A Machine Learning Data Processing Framework
Derek G. Murray (Microsoft), Jiří Šimša (Google), Ana Klimovic (ETH Zurich), and Ihor Indyk (Google)
###### Abstract.
Training machine learning models requires feeding input data for models to
ingest. Input pipelines for machine learning jobs are often challenging to
implement efficiently as they require reading large volumes of data, applying
complex transformations, and transferring data to hardware accelerators while
overlapping computation and communication to achieve optimal performance. We
present tf.data, a framework for building and executing efficient input
pipelines for machine learning jobs. The tf.data API provides operators which
can be parameterized with user-defined computation, composed, and reused
across different machine learning domains. These abstractions allow users to
focus on the application logic of data processing, while tf.data’s runtime
ensures that pipelines run efficiently.
We demonstrate that input pipeline performance is critical to the end-to-end
training time of state-of-the-art machine learning models. tf.data delivers
the high performance required, while avoiding the need for manual tuning of
performance knobs. We show that tf.data features, such as parallelism,
caching, static optimizations, and non-deterministic execution are essential
for high performance. Finally, we characterize machine learning input
pipelines for millions of jobs that ran in Google’s fleet, showing that input
data processing is highly diverse and consumes a significant fraction of job
resources. Our analysis motivates future research directions, such as sharing
computation across jobs and pushing data projection to the storage layer.
*Work done while at Google
## 1\. Introduction
Data is the lifeblood of machine learning (ML). Training ML models requires
steadily pumping examples for models to ingest and learn from. While prior
work has focused on optimizing the accuracy and speed of model training and
serving, how we store and preprocess data for machine learning jobs has
received significantly less attention. Across the millions of ML jobs we run
in Google’s datacenters every month, we observe that the input data pipeline
accounts for significant resource usage and can greatly impact end-to-end
performance. Figure 1 shows how the fraction of compute time that jobs spend
in the input pipeline varies, where we define compute time as the time spent
on a hardware resource – such as a CPU or an accelerator core – scaled by the
compute capability of that resource. The marked point shows that $20\%$ of
jobs spend more than a third of their compute time in the input pipeline. When
taking into account the total compute time from all jobs in our analysis (§
5), we find that $30\%$ of the total compute time is spent ingesting data. A
complementary study of ML model training with public datasets found that
preprocessing data accounts for up to 65% of epoch time (coordl, 42). This
shows that input data pipelines consume a significant fraction of ML job
resources and are important to optimize.
Figure 1. CDF showing the fraction of compute time that millions of ML
training jobs executed in our fleet over one month spend in the input
pipeline. 20% of jobs spend more than a third of their compute time ingesting
data.
Input pipelines of machine learning jobs are often challenging to implement
efficiently as they typically need to ingest large volumes of data, apply
complex transformations, overlap communication and computation, and shuffle
and batch data with various data ordering guarantees. For example, some jobs
require that each example is visited exactly once before any example is
visited a second time during training. Moreover, to achieve good performance
and avoid input pipeline stalls, the data preprocessing should leverage
parallelism and pipelining to overlap preprocessing with model training
computations. Determining the optimal degree of parallelism and amount of data
to prefetch is often challenging as it depends on the nature of the workload
and the hardware resources available.
Hardware accelerators used for ML training further increase the need for
efficient input pipelines. Today’s accelerators, such as GPUs and TPUs, are
tailored towards executing the linear algebra operations that are common in ML
computations, but have limited support for common data preprocessing
operations. Hence, input data is commonly processed on the CPU and feeding an
accelerator with data at a sufficient rate to saturate its compute
capabilities is becoming increasingly challenging. The high cost of
accelerators compared to their CPU hosts makes it particularly important to
ensure that accelerators operate at high utilization (google-cloud-pricing,
19, 4).
We present tf.data, an API and a runtime for building and executing efficient
input data pipelines for machine learning jobs. The tf.data API provides
generic operators that can be parameterized by user-defined functions,
composed, and reused across ML domains. Inspired by the programming models of
relational databases (volcano, 20, 23), declarative collection libraries
(linq, 40, 28), and data-parallel big-data systems (dryadlinq, 64, 65), the
tf.data API consists of stateless datasets, which are an abstraction for users
to define their input pipeline, and stateful iterators, which produce a
sequence of elements and maintain the current position within a dataset. These
abstractions allow users to focus on the application logic of their input
pipeline and leave the task of executing the pipeline efficiently to the
tf.data runtime. In particular, tf.data internally represents an input
pipeline dataset as a graph and applies static optimizations using graph
rewrites. Furthermore, tf.data can automatically tune parameters such as the
degree of parallelism and data prefetch buffer sizes, which are critical for
performance yet often challenging for an average ML user to tune by hand.
Our evaluation demonstrates that 1) input pipeline performance is critical to
end-to-end training time of state-of-the-art ML benchmarks, 2) tf.data is
capable of improving input pipeline latency through a combination of software
pipelining, parallelization, and static optimizations, and 3) tf.data dynamic
optimizations avoid the need to manually tune performance knobs. For example,
we show that introducing parallelism and software pipelining to the input
pipeline of a Resnet50 model training on the ImageNet dataset results in a
$10.4\times$ decrease in time to convergence. Applying further optimizations
with tf.data, such as caching and static optimizations, improves training time
by an additional $2\times$. We also demonstrate that tf.data’s auto-tuning
matches the performance of expert hand-tuned input pipelines.
The tf.data API and runtime is open source and integrated in TensorFlow
(tensorflow, 1). We have been using tf.data in production since 2017 for a
variety of ML training jobs, such as supervised learning, federated learning,
and reinforcement learning; with different data modalities, including text,
image, and video data. The system is currently used daily by hundreds of
thousands of ML jobs in our fleet.
We conduct a fleet-wide analysis of tf.data jobs to characterize the input
pipelines of millions of real machine learning jobs and identify opportunities
for future work in data preprocessing systems. We find that the set of
transformations applied in input pipelines varies greatly across jobs. For 75%
of jobs, the materialized dataset is smaller in size compared to the raw input
data read from storage, which implies that preprocessing commonly decreases
the volume of data. Most notably, we observe that identical input pipelines
are re-executed within and across jobs, suggesting that caching materialized
datasets is a promising future direction to explore to improve the performance
and efficiency of input data processing for ML. We motivate several other
directions for future research based on our findings, such as processing data
closer to storage and disaggregating input data processing from model training
to avoid host resource bottlenecks.
## 2\. Input Pipeline Requirements
Figure 2. CDF of input data size across ML training jobs. 13% of jobs read
more than 1 TB of data. These jobs consume over 96% of total compute
resources.
Raw input data, such as images, audio, and text files, undergo both offline
and online preprocessing before being ingested for model training. Offline
data preprocessing involves extracting features from raw data, validating data
(data-validation-mlsys, 12), and converting data to binary formats, such as
Avro (avro, 8), Parquet (parquet, 9), or TFRecord (tfrecord, 56), to enable
higher throughput data ingestion. Batch computing frameworks such as Apache
Spark (spark, 65), Beam (beam, 6), and Flume (flume, 7) are commonly used for
offline preprocessing. While some data transformations, such as normalization,
are applied during offline preprocessing, ML training also requires applying
transformations online as examples are fed to the model. For instance, image
models commonly rely on data augmentation, e.g. randomly distorting images, to
improve accuracy (imagenet, 17, 53). Such transformations multiply the size of
the original dataset, making it prohibitive to store outputs in intermediate
files. Our work focuses on online data preprocessing, which executes as part
of the input pipeline of ML training jobs.
The input pipeline of ML training can be characterized as a three-stage
extract, transform, load (ETL) process. The first stage reads input data from
a storage system. Machine learning jobs commonly train on large data sets.
Figure 2 shows that $13$% of jobs, out of the millions of jobs we analyzed,
read at least $1$ TB of input data. This means that for a non-trivial fraction
of training jobs, the input data cannot fit in memory. Furthermore, over $96$%
of total compute resources across jobs are spent in jobs that read over 1 TB
of data.
The second stage transforms the data to a format amenable to ML training
computation. It applies transformations to the input data, such as sampling,
permuting, and filtering data to extract the subset of most relevant features.
When training image models, it is common practice to apply data augmentation
such as clipping, resizing, flipping, and blurring images. For text pipelines,
training examples commonly need to be grouped and batched based on sequence
length. Finally, the third stage loads the data onto the accelerator device
that executes the training computation.
ML training imposes unique requirements for input data pipelines. We describe
these requirements below and summarize why they are not adequately addressed
by other systems.
##### Data ordering
Unlike many data-parallel data processing platforms (mapreduce, 16, 64, 65),
ML training is sensitive to the order in which records are delivered. The most
common training algorithms are derived from _stochastic_ gradient descent
(sgd, 49), which accesses the input examples pseudo-randomly. Empirically,
convergence is more rapid when the algorithm makes multiple passes over input
examples (called _epochs_), and uses a different random permutation of the
input data on each pass (or equivalently, samples examples without replacement
within each epoch) (bottou-curiously-fast-convergence, 10). Furthermore, to
improve system efficiency via vectorization and reduced communication, the
input pipeline typically concatenates consecutive examples into a batch that
is processed in a single training step.
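The per-epoch sampling-without-replacement and batching scheme described above can be sketched in a few lines of plain Python (an illustration of the ordering requirement, not the tf.data API):

```python
import random

def epoch_batches(examples, batch_size, seed):
    """Visit every example exactly once per epoch under a seeded random
    permutation, then concatenate consecutive examples into batches."""
    rng = random.Random(seed)
    order = list(range(len(examples)))
    rng.shuffle(order)  # fresh permutation per epoch (pass a new seed each epoch)
    for start in range(0, len(order), batch_size):
        yield [examples[i] for i in order[start:start + batch_size]]
```

Reusing the same seed reproduces the exact delivery order, which is the determinism property the paper discusses for debugging.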
The final parameters of a trained model can be sensitive to the exact order in
which the input examples were consumed. To aid in debugging, especially when
porting models between different hardware architectures, tf.data must be able
to produce random results in a deterministic order, according to a given seed.
While such a feature is useful for debugging, it is in tension with high
performance, since any variability in the element processing time could lead
to head-of-line blocking. Therefore, while tf.data defaults to deterministic
execution, a user can disable it to mitigate the effect stragglers have on
end-to-end performance.
Finally, both the end-to-end training computation and the individual epochs
can take a long time to complete. To provide ordering guarantees in the
presence of preemptions – commonplace in our data centers – the data
processing computation for ML training jobs must be checkpointable.
##### Performance
A single training step consumes a batch of input elements and updates the
current weights of the model. Often, the step computation runs on an
accelerator device – such as a GPU or TPU (tpuv2v3-cacm, 29) – that can
compute vector floating point operations efficiently, although the computation
may also run on a (multi-core) CPU. Ideally, the data processing computation
is pipelined with the training computation, minimizing the likelihood that the
training computation is blocked waiting for the next batch of elements and
hence maximizing the utilization of valuable accelerator resources.
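The pipelining idea, overlapping data preprocessing with the training computation through a bounded prefetch buffer, can be illustrated with a minimal sketch (a background thread feeding a queue; tf.data's actual runtime is implemented in C++, so this is only a conceptual model):

```python
import queue
import threading

_END = object()  # sentinel marking the end of the input sequence

def prefetch(iterable, buffer_size):
    """Yield items from `iterable` while a background thread keeps up to
    `buffer_size` items ready, so the consumer rarely blocks on the producer."""
    q = queue.Queue(maxsize=buffer_size)

    def producer():
        for item in iterable:
            q.put(item)
        q.put(_END)

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = q.get()
        if item is _END:
            return
        yield item
```

Choosing `buffer_size` well is exactly the kind of performance knob that, per the paper, tf.data tunes automatically rather than leaving to the user.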
The input pipeline is responsible for fetching the raw input data from storage
and transforming it into input features for the model. For example, the raw
input for an image classification model might be a protocol buffer (protobuf,
47) containing a JPEG-encoded image, and the input pipeline must convert the
raw input into a dense three-dimensional array of floating point values
corresponding to the RGB values of each pixel. Along the way, the input
pipeline must extract and decode the JPEG and apply additional transformations
such as affine transformations and colorspace changes to augment the training
data (best-practices-cnns, 53). These activities are CPU-intensive, and must
make efficient use of available CPU resources to maximize input pipeline
throughput.
##### Ease of use
Machine learning workloads in a typical large organization span different
domains, storage systems, data formats, and accelerator hardware. Therefore,
it must be possible to combine pipeline stages in unanticipated ways, and
extend the system with new data sources and transformations. To emphasize the
importance of flexibility, in our fleet-wide analysis of ML jobs, we
classified transformations into categories – such as reading input data from
storage, caching, batching, or shuffling – and recorded the combination of
transformation categories used by each job. While the $10$ most common
combinations of transformations account for over $75$% of jobs, there is a
heavy tail with over $1000$ combinations of transformations in total. In
addition to supporting diverse input pipelines, we also require the input
pipeline framework to address the tension between performance and ease-of-use.
Optimizing an input pipeline can require expertise in how to structure
operations and tune performance-related parameters, such as degrees of
parallelism and pipeline buffer sizes. Hence, we require that tf.data can
optimize an input pipeline automatically.
Before designing tf.data, we evaluated several existing input pipeline
implementations, and found that they did not meet our requirements in one or
more of the above areas: 1) PyTorch’s DataLoader API (pytorch-dataloader, 14)
is easy to use (it provides a simple Python interface), but its reliance on
Python on the critical path – despite the use of multiprocessing to work
around the interpreter lock bottleneck – and assumption of uniform random
access to all input data, do not satisfy our performance requirement,
especially for multi-terabyte datasets. 2) MXNet’s DataIter API (mxnet_dataio,
45) uses a native C++ implementation for greater performance than PyTorch, but
it requires users to add native extensions in order to handle new
preprocessing schemes. Therefore it does not help our users with diverse data
processing needs, who tend to prefer programming in Python, and who are often
restricted to memory-safe programming languages for security reasons. 3)
NVIDIA’s Data Loading Library (DALI) API (nvidia-dali, 21) enables some
preprocessing operations, such as image decoding, to be offloaded to a GPU.
This offloading partially fulfills our performance requirement, but it lacks
the flexibility to support heterogeneous preprocessing workloads and different
types of accelerators.
In the next section, we present the tf.data programming model, which is based
on chaining higher-order functional transformations, and inspired by LINQ
(linq, 40). Several data processing systems offer a similar programming model,
including DryadLINQ (dryadlinq, 64), Spark (spark, 65), and Naiad
(murray2013naiad, 44). We discuss them in more detail in § 6. For pragmatic
reasons, we did not consider using any of these systems, because the impedance
mismatch with TensorFlow’s C++ codebase would severely limit performance.
Furthermore, these systems are designed to optimize data parallel
computations, with a large number of independent values in each batch. This
makes it difficult or inefficient for them to produce values sequentially, to
fulfill the sequential ordering requirement. While one could use a system like
Spark Streaming (spark-streaming, 66) for online preprocessing and pass data
to the ML framework through an in-memory buffer, the additional copies would
have significant overhead due to the short step times in ML training
workloads. In the training workloads we have analyzed, step times of less than
1 ms are not uncommon, and most workloads have step times of less than 10 ms. The
extra copy overhead would be especially significant in the common case where
memory bandwidth is the bottleneck.
## 3\. Design and Implementation
In § 3.1, we present tf.data’s API which enables users to compose and
parameterize operators. In § 3.2 and § 3.3 we discuss key aspects of tf.data’s
runtime.
Method | Description
---|---
make_iterator | Creates a new iterator over the dataset.
serialize | Converts the dataset to a serialized expression.
element_spec | Returns the type signature of dataset elements.
Table 1. Dataset interface
### 3.1. Datasets and Iterators
The tf.data Dataset represents the stateless definition of an input pipeline
as a (potentially infinite) sequence of elements. A dataset can either be a
source dataset that is created from primitive values (e.g. a matrix of
floating-point numbers representing input examples, or a vector of strings
representing filenames), or a transformed dataset that transforms one or more
input datasets into a new sequence of elements. The elements of a dataset are
statically typed, and valid element types include tensors (with a specific
element type and optional shape) and composite types (such as tuples,
optionals, and nested datasets). Together, source and transformed datasets
form an expression tree that represents the entire input pipeline. Table 1
shows the Dataset interface.
tf.data includes source datasets that support common file formats and various
transformed datasets which implement functional transformations and may be
parameterized by user-defined functions (UDFs). The UDFs can be written in
Python, and tf.data uses TensorFlow’s Autograph library to convert them into
dataflow graphs (autograph, 43). Table 2 summarizes the most common tf.data
transformations.
The tf.data Iterator represents the current state of traversing a Dataset. An
iterator provides sequential access to the elements of a dataset via the
get_next operation that either returns a typed element, or an error status
such as “out-of-range” (EOF). In tf.data, implementations of the Iterator
interface are thread-safe, so multiple threads can call get_next concurrently
to improve throughput, at the expense of determinism. The interface also
includes save and restore methods to support checkpointing.
The iterator interface (Table 3) abstracts all details of how the elements are
produced, including internal buffering and parallelism. Before applying
optimizations, there is a one-to-one correspondence between dataset and
iterator objects, but the optimizations in § 3.3 exploit the iterator
abstraction to change the underlying dataset graph, and optimize how elements
are produced, while presenting the same interface.
The example in Figure 3 illustrates a training loop that uses a tf.data input
pipeline to read elements from files, apply user-defined processing logic on
each element and combine the processed elements into a mini-batch.
Dataset | Description
---|---
batch | Concatenates multiple elements into a single element.
cache | Stores the input data in memory.
concatenate | Concatenates two datasets.
from_file | Reads elements from a file.
from_memory | Creates a singleton dataset from data in memory.
filter | Returns elements matching a predicate.
flat_map | Maps elements to datasets and flattens the result.
interleave | Like flat_map, but mixes outputs from input elements.
map | Transforms individual elements.
prefetch | Adds a buffer to pipeline input production.
reduce | Reduces a dataset to a single element.
repeat | Produces the input dataset multiple times.
shard | Selects a subset of elements from the dataset.
shuffle | Randomizes the order of elements.
unbatch | Splits input elements on the 0th dimension.
zip | Combines elements of multiple datasets into tuples.
Table 2. Common tf.data source and transformed datasets.
Method | Description
---|---
get_next | Returns the next element, or raises EOF.
save | Writes the iterator state to a file.
restore | Reads the iterator state from a file.
Table 3. Iterator interface
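To make the Dataset/Iterator split concrete, the following pure-Python sketch models a stateless source dataset, a map transformation, and stateful iterators whose get_next raises an out-of-range error at the end of the sequence. All class and method names here are illustrative, not the tf.data API.

```python
# Illustrative sketch: stateless Dataset expression nodes vs. stateful
# Iterators exposing get_next(), which raises EOFError ("out-of-range")
# once the sequence is exhausted. Not the tf.data implementation.

class Dataset:
    def map(self, fn):
        return MapDataset(self, fn)

    def make_iterator(self):
        raise NotImplementedError


class FromMemory(Dataset):
    """Source dataset over an in-memory sequence."""
    def __init__(self, values):
        self.values = values

    def make_iterator(self):
        it = iter(self.values)

        class Iterator:
            def get_next(self):
                try:
                    return next(it)
                except StopIteration:
                    raise EOFError  # "out-of-range" status
        return Iterator()


class MapDataset(Dataset):
    """Transformed dataset applying fn to each input element."""
    def __init__(self, source, fn):
        self.source, self.fn = source, fn

    def make_iterator(self):
        inner, fn = self.source.make_iterator(), self.fn

        class Iterator:
            def get_next(self):
                return fn(inner.get_next())  # EOFError propagates
        return Iterator()
```

Because every transformation only sees its input through the iterator interface, an optimizer can substitute a different iterator implementation without the consumer noticing, which is the property the optimizations in § 3.3 rely on.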
### 3.2. Parallel and Distributed Execution
To efficiently utilize available host resources, tf.data provides
transformations that enable software pipelining, and parallel execution of
computation and I/O. The prefetch transformation decouples the producer and
consumer of data using an internal buffer, making it possible to overlap their
computation. Input pipelines can use this transformation to overlap host
computation, host-to-device transfer, and device computation. The map
transformation takes an optional argument that specifies the degree of
parallelism to use for applying the user-defined computation to input elements
concurrently. The interleave transformation provides a similar optional
argument that specifies the degree of parallelism to use for fetching data
from input elements concurrently. In particular, the interleave transformation
can parallelize I/O by interleaving data read from multiple files. By default,
tf.data transformations produce elements in a deterministic order. However, as
deterministic ordering can lead to head-of-line blocking, the parallel map and
interleave transformations provide a mechanism for enabling non-deterministic
ordering, which can result in better performance at the expense of
reproducibility.
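The decoupling that prefetch provides can be sketched in plain Python with a background producer thread and a bounded buffer. This is illustrative code, not the tf.data implementation; the bounded queue is what lets producer and consumer computation overlap.

```python
# Sketch of prefetch: a background thread fills a bounded buffer so the
# producer's work overlaps with the consumer's. Illustrative only.
import queue
import threading

_EOF = object()  # sentinel marking end of sequence

def prefetch(iterable, buffer_size):
    buf = queue.Queue(maxsize=buffer_size)

    def producer():
        for item in iterable:
            buf.put(item)   # blocks when the buffer is full
        buf.put(_EOF)       # signal end of sequence

    threading.Thread(target=producer, daemon=True).start()

    while True:
        item = buf.get()
        if item is _EOF:
            return
        yield item
```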
To illustrate the benefits of the above transformations, we revisit the
example presented in Figure 3. Let us assume that it takes $5$ms to read an
element from the file, $2$ms to apply the user-defined logic to an element,
and $1$ms to batch 10 elements. The accelerator would be idle for
$(5+2)*10+1=71$ms at the start of each iteration before data for the training
computation becomes available.
ds = tf.data.from_file(["foo", ...])
ds = ds.map(parse).batch(batch_size=10)
for elem in ds:
  train_step(elem)
Figure 3. Example of a training loop using a tf.data input pipeline. parse is
a user-defined function for data processing.
ds = tf.data.from_memory(["foo", ...])
ds = ds.interleave(
    tf.data.from_file, cycle_length=2,
    num_parallel_calls=2)
ds = ds.map(parse, num_parallel_calls=10)
ds = ds.batch(batch_size=10)
ds = ds.prefetch(buffer_size=1)
for elem in ds:
  train_step(elem)
Figure 4. Example of a training loop with a tf.data input pipeline that
employs parallelism and software pipelining.
The tf.data input pipeline in Figure 4 is semantically equivalent to that of
Figure 3. However, it uses 1) the optional num_parallel_calls argument of
interleave and map to parallelize I/O and computation respectively, and 2)
prefetch to overlap the input pipeline computation with the training
computation. As a result, the input pipeline in Figure 4 will take
$max(10*5/2,10*2/10,1)=25$ms to produce a batch (assuming a sufficiently slow
consumer) and the input pipeline computation (of the next batch) will be
overlapped with the training computation on the accelerator (for the current
batch). If the training computation takes more than $25$ms, the data for each
iteration of the training loop will be ready by the time the iteration starts.
In § 3.3.2 we describe a mechanism for auto-tuning parallelism and buffer
sizes so that users do not have to tune them manually.
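The latency arithmetic above can be checked directly, using the assumed per-stage costs from the text (5 ms per file read, 2 ms per map call, 1 ms to batch 10 elements):

```python
# Numerically checking the batch-latency estimates from the text.
read_ms, map_ms, batch_ms, batch_size = 5, 2, 1, 10

# Sequential pipeline (Figure 3): every element is read, then mapped,
# then the batch is assembled.
sequential = (read_ms + map_ms) * batch_size + batch_ms  # 71 ms

# Parallel pipeline (Figure 4): reads run 2-wide, maps run 10-wide,
# and the slowest stage dominates for a sufficiently slow consumer.
parallel = max(read_ms * batch_size / 2,
               map_ms * batch_size / 10,
               batch_ms)
```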
While interleave is typically used to parallelize I/O, it can also be used for
parallel execution of multiple copies of an arbitrary input pipeline
(operating over different shards of the input data). We have found this
mechanism useful to speed up input pipelines bottlenecked by inherently
sequential transformations, such as filter or unbatch.
In addition to supporting efficient single-host execution, we also designed
tf.data for distributed ML training computation use-cases, such as data
parallel synchronous training, across multiple hosts (and accelerators per
host). In this setup, each host has a tf.data input pipeline providing data
for the accelerators attached to the host. To provide for clean separation of
epochs, the input data can be sharded across multiple files and the shard
transformation ensures that different hosts operate over different shards of
the data. The sharded input pipelines do not communicate with each other.
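The shard transformation's selection rule can be sketched as follows: host i out of n keeps every n-th element starting at offset i, so hosts see disjoint shards that together cover the data. The function name and signature are illustrative.

```python
# Sketch of shard(num_shards, index): modular selection over the input
# sequence, giving each host a disjoint shard. Illustrative only.
def shard(elements, num_shards, index):
    for position, element in enumerate(elements):
        if position % num_shards == index:
            yield element
```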
### 3.3. Automatic Optimization
tf.data’s functional programming model enables it to provide multiple
different implementations for a single input pipeline. Automatic _static_
(§3.3.1) and _dynamic_ (§3.3.2) optimizations improve tf.data’s performance
and usability.
#### 3.3.1. Static Optimizations
At run-time, tf.data can reflect on the expression tree of any dataset and
replace it with a more efficient version. We implemented static optimizations
as a virtual dataset transformation that converts the input dataset to an
expression tree, applies a suite of rewriting rules, and then evaluates the
rewritten expression tree to produce an output dataset. The current
implementation uses TensorFlow’s GraphDef protocol buffer as the
representation and the Grappler optimization framework (grappler, 55) to
manipulate these expression trees. We are investigating the use of MLIR (mlir,
35) as a richer representation that will enable us to reuse optimizations from
other domains.
As we gained experience with tf.data, we created several custom transformations
that fuse commonly adjacent transformations for performance reasons: map +
batch fusion, shuffle + repeat fusion, map + map fusion, map + filter
fusion, and filter + filter fusion. For example, the map + batch fusion
transforms $\mathrm{d}.\mathrm{map}(f).\mathrm{batch}(b)$ into
$\mathrm{map\_and\_batch}(f,b)$, which is functionally equivalent, but the
implementation of the fused operator parallelizes and pipelines the copying of
each element into the output batch with the processing of other batch
elements. Many of the fusion optimizations in tf.data are inspired by
deforestation in functional languages (deforestation, 58). As the simplest
example, the map + map fusion transforms the
$\mathrm{d}.\mathrm{map}(f).\mathrm{map}(g)$ expression into
$\mathrm{d}.\mathrm{map}(g\circ f)$. This eliminates the per-element overhead
of an iterator (a virtual call to get_next and one or two function
dispatches), and the composition $g\circ f$ may be optimized further by
Grappler’s standard optimization passes, such as arithmetic optimization and
dead code elimination.
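The map + map rewrite can be sketched as a rule over a toy expression representation. Nested tuples stand in for the expression tree here purely for illustration; tf.data operates on TensorFlow GraphDef expression trees.

```python
# Toy rewrite rule: d.map(f).map(g) -> d.map(g o f).
# expr ::= ("source", values) | ("map", inner_expr, fn)
def fuse_map_map(expr):
    if expr[0] == "map":
        inner = fuse_map_map(expr[1])
        if inner[0] == "map":
            # Fuse the two adjacent maps into a single composed map.
            f, g = inner[2], expr[2]
            return ("map", inner[1], lambda x, f=f, g=g: g(f(x)))
        return ("map", inner, expr[2])
    return expr

def evaluate(expr):
    if expr[0] == "source":
        return list(expr[1])
    return [expr[2](x) for x in evaluate(expr[1])]
```

The rewrite is semantics-preserving: the fused pipeline yields the same elements while traversing one fewer iterator per element.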
tf.data static optimizations are not limited to fusions. The map vectorization
is a more advanced optimization that transforms
$\mathrm{d}.\mathrm{map}(f).\mathrm{batch}(b)$ into
$\mathrm{d}.\mathrm{batch}(b).\mathrm{map}(\mathrm{pfor}(f))$. In the
transformed expression, $\mathrm{pfor}(f)$ applies $f$ to every slice of the
batch in parallel (pfor, 2). This increases the efficiency of the resulting
code by converting multiple invocations of a per-element operation (e.g.
tf.matmul()) into a single invocation of a batched operation (e.g.
tf.batch_matmul()) that itself has an efficient vectorized implementation. It
also reduces the framework-induced overhead by replacing $b$ function
invocations with a single invocation.
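The equivalence that map vectorization relies on can be illustrated with plain lists, modeling $\mathrm{pfor}(f)$ as a loop over the batch; in TensorFlow it instead lowers to a vectorized batched op. All names below are illustrative.

```python
# Sketch: map(f).batch(b) and batch(b).map(pfor(f)) produce the same
# sequence of batches. Illustrative only.
def batch(elements, b):
    for i in range(0, len(elements), b):
        yield elements[i:i + b]

def pfor(f):
    # Modeled as a per-slice loop; a real implementation applies f to
    # every slice of the batch via a single vectorized op.
    return lambda batch_: [f(x) for x in batch_]

def map_then_batch(elements, f, b):
    return list(batch([f(x) for x in elements], b))

def batch_then_map(elements, f, b):
    return [pfor(f)(chunk) for chunk in batch(elements, b)]
```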
#### 3.3.2. Dynamic Optimizations
In many cases, the optimal configuration for a tf.data pipeline depends on
properties of the input data (e.g. raw image sizes) and the available
resources (e.g. number of CPU cores, RAM, and network bandwidth). Hence,
tf.data provides configuration parameters such as the degree of parallelism
for map transformations and the size of the buffer for the prefetch
transformation.
To avoid the need for users to manually tune performance-related knobs, the
tf.data runtime contains an auto-tuning mechanism that allocates CPU and RAM
resources across various parts of the input pipeline in a way that minimizes
the (expected) latency of the input pipeline producing an element. In the rest
of this section, we refer to the time it takes for an iterator to produce an
element as its output latency and the output latency of an input pipeline is
the output latency of the iterator for its final transformation.
To perform auto-tuning, tf.data executes the input pipeline in a light-weight
harness, which maintains a tree representation of the iterators currently
executing as part of the input pipeline and measures the processing time spent
in each of the iterators. The root of the tree is the iterator producing data
for training computation, the leaves of the tree correspond to source dataset
iterators, and edges are implied by the input-output relationship between
transformed datasets’ iterators and their inputs. The tree structure can
change over time as transformations such as interleave or repeat create
multiple iterators during their lifetime.
The auto-tuning implementation uses the processing time and the input pipeline
structure to build an analytical model that is used to estimate how input
pipeline parameters affect end-to-end latency. The estimating function is a
composition of the output latencies of individual iterators as functions of
tunable parameters, iterator’s processing time and inputs’ output latency. The
outermost function of the composition is the one for the final iterator. For
synchronous transformations (i.e. transformations that do not decouple
producer and consumer), the output latency of an iterator is a linear function
of the output latencies of its inputs and the processing time spent in the
iterator. For asynchronous transformations, such as prefetch and the parallel
map and interleave, the output latency of an iterator is no longer linear and
additionally depends on the parallelism, buffer size, and the rate of the
consumer. In particular, the expected output latency of the iterator is
computed as the output latency of its input(s) multiplied by the probability
that the buffer is empty, which we model using an M/M/1/k queue (mm1k, 52) and
estimate as:
(1) $p_{empty}=\begin{cases}\frac{1}{n+1}&\text{if }x=y\\ \frac{1-\frac{x}{y}}{1-\left(\frac{x}{y}\right)^{n+1}}&\text{otherwise}\end{cases}$
where $n$ is the buffer size, $x$ is the producer rate, computed from the
output latency of the iterator input(s), and $y$ is the consumer rate,
computed from the frequency of get_next calls. Note that the producer rate,
$x$, in general depends on upstream computation, while the consumer rate, $y$,
in general depends on downstream computation. We traverse the iterator tree
depth first to estimate both $x$ and $y$ in a single traversal.
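Equation (1) translates directly to code, with n the buffer size, x the producer rate, and y the consumer rate:

```python
# Probability that the bounded buffer of an M/M/1/k queue is empty,
# per equation (1).
def p_empty(n, x, y):
    if x == y:
        return 1.0 / (n + 1)
    r = x / y
    return (1.0 - r) / (1.0 - r ** (n + 1))
```

As expected, a producer that is fast relative to the consumer drives the empty-buffer probability toward zero, while a slow producer drives it toward one.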
To illustrate how the estimation works, let’s revisit the example from Figure
4, additionally assuming that 1) the num_parallel_calls and buffer_size
arguments are set to the special AUTOTUNE value to enable auto-tuning, 2) the
training computation requests data every $10$ms on average, and 3) the auto-
tuning harness is estimating the following combination: interleave parallelism
$1$ and buffer size $1$, map parallelism $5$ and buffer size $5$, and prefetch
buffer size $2$. Figure 5 gives an example of how tf.data computes the output
latency for such a pipeline.
[Figure 5 diagram: the iterator tree for the Figure 4 pipeline (from_file → interleave → map → batch → prefetch → data consumer), annotated with the candidate parameters (interleave: parallelism 1, buffer size 1, cycle length 2; map: parallelism 5, buffer size 5; prefetch: buffer size 2) and, at each node, the consumer rate in get_next calls per second and the estimated output latency in ms.]
Figure 5. Output latency estimation: the downward traversal computes the
consumer rate starting with the root adjusting it by the number of concurrent
get_next calls from the consumer and the number of iterators. The upward
traversal can compute the output latency of each iterator in the tree since by
the time the traversal returns to an iterator, the output latency of its
inputs is known. Asynchronous transformations prefetch, parallel map and
interleave use (1) to estimate the output latency, whereas a synchronous batch
produces an estimate with a linear function of its own processing time and
output latency of its input.
tf.data creates a background thread that periodically uses the estimation
process above to evaluate different combinations of parallelism and buffer
sizes for tunable transformations. Parameters are chosen to minimize the
expected output latency of the input pipeline subject to CPU and RAM budget
constraints. The optimization uses a gradient descent algorithm and is
depicted in Figure 6. The optimization period ranges from milliseconds to
seconds and is determined automatically based on changes to the input pipeline
structure and execution time.
while True:
  model = pipeline.get_analytical_model()
  params = model.get_tunable_parameters()
  best_latency = INFINITY
  latency = model.latency()
  while (best_latency - latency >= EPS and
         model.resource_usage() <= BUDGET):
    best_latency = latency
    params -= DELTA * model.latency_grad()
    latency = model.latency()
  pipeline.set_tunable_parameters(params)
  sleep(OPTIMIZATION_PERIOD)
Figure 6. Periodic optimization of tunable parameters.
An important aspect of the optimization is its ability to minimize output
latency of the end-to-end input pipeline as opposed to minimizing the output
latency of individual transformations. As different transformations share the
same CPU and RAM resources, locally optimal decisions may lead to excessive
parallelism and buffering, which in turn lead to inefficient thread scheduling
and poor cache locality, negatively affecting end-to-end performance.
The ability to perform the optimization analytically is essential; it allows
tf.data to quickly find a good configuration without affecting the performance
of the real input pipeline while evaluating sub-optimal configurations. Once
the background thread identifies a configuration to use, it updates the
parallelism and buffer sizes of the actual input pipeline accordingly. For
most input pipelines the optimization takes microseconds to milliseconds to
complete.
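The tuning loop of Figure 6 can be illustrated for a single knob. The sketch below uses an illustrative latency model, work/p + switch_cost·p (work amortized over p parallel workers plus a per-worker scheduling cost), and tunes the parallelism p by finite-difference gradient descent under a CPU budget; the model, constants, and function names are assumptions, not tf.data's actual analytical model.

```python
# Single-knob sketch of the Figure 6 loop: minimize a modeled latency
# over parallelism p, clamped to [1, budget]. Illustrative model only.
def tune_parallelism(work=100.0, switch_cost=1.0, budget=64, steps=500):
    latency = lambda p: work / p + switch_cost * p
    p, delta, lr = 1.0, 1e-3, 0.5
    for _ in range(steps):
        # Finite-difference estimate of d(latency)/dp.
        grad = (latency(p + delta) - latency(p)) / delta
        # Gradient step, clamped to the feasible range.
        p = min(max(p - lr * grad, 1.0), float(budget))
    return p
```

For this model the analytical optimum is p = sqrt(work / switch_cost); when that exceeds the budget, the search settles on the budget boundary, mirroring how the real optimizer is constrained by CPU and RAM budgets.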
## 4. Evaluation
To evaluate tf.data we seek to answer the following questions: 1) how do
tf.data’s performance-related features affect input pipeline throughput, 2)
how do input pipeline optimizations impact the end-to-end time to reach a
target accuracy when training state-of-the-art ML models, and 3) how does
tf.data performance compare to other systems.
For our evaluation, we used the open-source MLPerf (mlperf, 39) benchmark
suite, which is the de facto standard for evaluating ML software and hardware
systems by measuring how fast a system can train models to a target quality
metric. We use tf.data to express and execute input pipeline computation in
MLPerf benchmarks. Our evaluation considers the following combinations of
model architectures and input data: 1) Resnet50 (resnet, 25) with ImageNet
(imagenet, 17), 2) SSD (ssd, 38) with COCO (coco, 37), 3) Mask-RCNN (maskrcnn,
24) with COCO (coco, 37), 4) GNMT (gnmt, 62) with WMT16 (wmt16, 60), and 5)
Transformer (transformer, 57) with WMT17 (wmt17, 61).
Table 4 summarizes the attributes of the MLPerf datasets, which range from 135
MB to 140 GB in size. Though these public datasets fit in memory before
decompression and/or data augmentations, in Section 5.1 we discuss our
experience with production workloads which commonly preprocess larger-than-
memory datasets (Figure 2). When dealing with such datasets, tf.data’s
prefetching and software pipelining optimizations become even more critical
for end-to-end performance.
Dataset | Domain | Artifacts | Size
---|---|---|---
ImageNet | image classification | 1.3M images | 140GB
COCO | object detection | 330K images | 19GB
WMT16 | translation | 4M pairs | 1.3GB
WMT17 | translation | 4.5M pairs | 720MB
Table 4. MLPerf input data overview.
| Parallel computation | Parallel I/O | Software pipelining | Non-deterministic | Caching | Static Optimization | No intra-op parallelism
---|---|---|---|---|---|---|---
Resnet50 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓
SSD | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓
Mask-RCNN | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓
GNMT | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
Transformer | ✓ | ✓ | ✓ | ✓ | | |
Table 5. tf.data features used by different MLPerf benchmarks.
Table 5 shows the various performance-related features of tf.data used in the
input pipeline portion of our MLPerf benchmark implementations. All input
pipelines used the map, interleave, and prefetch transformations for parallel
computation, parallel I/O, and software pipelining, respectively. Non-
deterministic ordering was also used by all pipelines to mitigate the effect
of stragglers. With the exception of Transformer, the input pipelines used
static tf.data optimizations to benefit from transformation fusion and the
cache transformation to materialize intermediate preprocessing artifacts in
memory to avoid their recomputation across epochs. Note that intermediate
artifacts cannot always be materialized as they may be a result of a
randomized transformation which produces a different result each epoch.
Finally, the image-based input pipelines (Resnet50, SSD, and Mask-RCNN) also
disabled intra-op parallelism for tf.data computation. Intra-op parallelism
makes it possible to parallelize execution of individual TensorFlow ops, such
as tf.matmul, but this comes at the expense of increased CPU usage. For
tf.data input pipelines, intra-op parallelism generally provides little
benefit (as there is plenty of inter-op parallelism) and can actually hurt
performance of input pipelines that fully utilize CPU resources.
### 4.1. Input Pipeline Experiments
Methodology: To evaluate the effect of tf.data performance-related features
on input pipeline throughput, we executed the input pipeline portion of our
MLPerf benchmark implementations in a tight loop (with no model training
computation) and measured the time it takes to process an epoch’s worth of
data. We used a single machine with 56 Intel Xeon 2.60 GHz CPU cores, 128 GB
of RAM, and the input data stored on a 1 TB Samsung SM961 SSD. We limited the
Resnet50 experiment to only use $60\%$ of the ImageNet data to make sure that
an epoch’s worth of data can be cached in memory. For each of the input
pipelines we ran the following experiments: 1) a baseline which does not use
any tf.data performance features (i.e. sequential reading and processing), 2)
a version that uses expert-tuned parallelism for I/O and compute (expert
tuning sets map parallelism to the number of CPU cores available on the
machine, interleave parallelism to a constant between 10 and 64 chosen based
on available I/O bandwidth, and the prefetch buffer size to an empirically
tuned multiple of the batch size), 3) a version that uses all tf.data
performance features in Table 5 with expert-tuned parallelism, and 4)
a version that uses all tf.data performance features with auto-tuned
parallelism. Note that even though the baseline does not use input pipeline
parallelism, TensorFlow may still parallelize the user-defined computation in
map.
Results: Figure 7 shows the mean duration of a single epoch, normalized to the
epoch duration of the baseline, which does not use any tf.data performance-
related features. On the 56 core machine used for the experiment, the speedups
ranged from $2.7\times$ (Mask-RCNN) to $63.1\times$ (SSD). Since we are
parallelizing both compute and I/O it is possible to achieve speedup greater
than $56\times$.
The performance of Resnet50 and SSD input pipelines benefits significantly
from the application of tf.data performance-related features, and the input
pipeline can fully utilize the available CPU. In particular, map + batch
fusion yields the most significant speedup among static optimizations for
these two benchmarks, as it enables computing multiple batches in parallel. In
contrast, the performance of Mask-RCNN, GNMT, and Transformer input pipelines
benefits from the application of tf.data performance-related features to a
lesser extent. For Mask-RCNN, the reason for the limited speedup is two-fold:
1) the baseline employs parallelism as the user-defined computation applied to
each element can be parallelized by TensorFlow and 2) the input pipeline is
bottlenecked by batching, which is performed sequentially because of an
intermediate step between map and batch in the pipeline that prevents map +
batch fusion. Similarly, the text pipelines (GNMT and Transformer) did not
benefit from map + batch fusion, as elements need to be grouped based on size
after the map operation before they are batched, but the tf.data runtime does
not currently support map + group_by + batch fusion. Most benchmarks saw less
than 4% improvement in training time with non-deterministic vs. deterministic
data ordering, however Resnet50 benefited more (approx. 40% throughput
improvement) as its dataset (ImageNet) has a wide distribution of image sizes,
and non-deterministic ordering avoids head-of-line blocking.
For all of the input pipelines, using auto-tuned parallelism instead of expert
hand-tuned parallelism results in comparable performance. This demonstrates
that the algorithm described in § 3.3 is able to automatically configure
performance knobs similar to a human expert.
Figure 7. Speedup of input pipeline processing time with different
configurations, relative to a sequential input pipeline.
### 4.2. End-to-End Experiments
Methodology: To evaluate how input pipeline performance optimizations with
tf.data translate to end-to-end performance benefits when training state-of-
the-art ML models, we measured the time it takes to reach target accuracy with
our tf.data-based implementation of the MLPerf benchmarks. We executed each
benchmark using 8 hosts with 112 CPU cores, 100 GB of RAM, and 8 TPUv3
accelerators each (tpuv2v3-cacm, 29). For the Mask-RCNN benchmark, we used 400
GB RAM per host to ensure that intermediate artifacts can be fully cached in
memory. We ran the following experiments for each benchmark: 1) a baseline
that trains the MLPerf model with sequential reading and processing of input
data, 2) a version that uses expert-tuned parallelism for I/O and compute in
the input pipeline, 3) a version that uses all tf.data performance features
with expert-tuned parallelism, and 4) a version that uses all tf.data
performance features with auto-tuning.
Results: Figure 8 shows the end-to-end training time speedup (relative to the
model training time with a sequential input pipeline) for each MLPerf
benchmark. We draw several insights from these results. First and foremost,
the performance of the input pipeline significantly affects the end-to-end
training performance. Second, computation and I/O parallelism is necessary but
not sufficient to match the rate at which accelerators perform training
computation. Compared to using a sequential input pipeline as the baseline,
adding software pipelining and parallelism in the input pipeline improved end-
to-end training time by $7\times$ on average across the five MLPerf
benchmarks. For image-based input pipelines (Resnet50, SSD, and Mask-RCNN),
the end-to-end performance benefited further from the application of tf.data
performance-oriented features, providing an additional $2\times$, $1.2\times$,
$1.4\times$, speedup respectively. For text-based input pipelines (GNMT and
Transformer), parallelism and software pipelining alone were sufficient to
match the rate at which data was consumed by the training computation.
Figure 8. Speedup of the time to convergence for MLPerf workloads with tf.data
optimizations, relative to execution with a sequential input pipeline.
Figure 8 also compares the training time with expert-tuned tf.data
configuration to training time with auto-tuned configuration. Similarly to the
input pipeline experiments, we find that using tf.data’s dynamic optimizations
to select parameters such as the degree of parallelism and prefetch buffer
sizes leads to similar performance compared to the expert tuned pipelines. The
end-to-end time to convergence with dynamic tuning is within $1\%$ of the time
to convergence with expert tuned input pipelines for Resnet50, SSD, Mask-RCNN,
and Transformer and within $4\%$ for GNMT. This demonstrates that tf.data can
effectively relieve users from the burden of hand-tuning input pipeline
configurations.
Finally, we also verified that tf.data optimizations enable input pipelines to
match the rate at which accelerators perform training computations for state-
of-the-art models. For each MLPerf benchmark, we measured the time it takes to
ingest a batch of data and perform the model computation when using 1) an
optimized tf.data input pipeline versus 2) an artificial input pipeline that
produces data as fast as possible (by caching a single batch and repeating it
infinitely). The artificial pipeline does not perform any data processing and
hence serves as an upper bound on input pipeline performance. Step times with
optimized tf.data pipelines match the upper-bound performance, hence the
MLPerf benchmarks are no longer input bound after tf.data optimizations.
### 4.3. Comparison to Other Systems
Input data framework | Hardware | Epoch duration (s)
---|---|---
PyTorch DataLoader | CPU-only | 213
NVIDIA DALI | CPU-only | 777
NVIDIA DALI | CPU + 1 GPU | 172
NVIDIA DALI | CPU + 2 GPUs | 107
tf.data | CPU-only | 110
Table 6. ImageNet-Resnet50 input data processing time with tf.data vs. NVIDIA
DALI and PyTorch DataLoader.
To evaluate how tf.data compares to other ML input data processing systems, we
implement a standard ImageNet pipeline using tf.data, PyTorch DataLoader
(pytorch-dataloader, 14), and NVIDIA DALI (nvidia-dali, 21). Table 6 shows the
average time to process an epoch’s worth of data with each framework running
on a 64 core server (n2-standard-64 on Google Cloud) with 256 GB of RAM, 500
GB local SSD, and NVIDIA Tesla T4 GPUs. The tf.data pipeline is 1.9$\times$
faster than DataLoader, thanks to tf.data’s static and dynamic optimizations.
For example, if we disable map + batch fusion in tf.data, performance drops
to 448 seconds per epoch. Table 6 shows that tf.data outperforms DALI on CPU
or even with one GPU. When offloading computation to multiple GPUs, DALI
achieves higher throughput, however using GPUs adds to the cost of input data
processing and consumes GPU cores and memory that could otherwise be dedicated
to model training.
In addition to comparing input pipeline throughput, it is useful to compare
end-to-end model training time with different input data frameworks across
heterogeneous platforms. The MLPerf Training competition provides the fairest
comparison across ML systems as each submission is optimized by experts
familiar with their performance knobs. For each benchmark, a cluster ranging
from 8 accelerators to over 1000 accelerators was used to train the model to a
target accuracy. Table 7 summarizes the top MLPerf v0.7 training times
achieved, categorized by the input pipeline framework used (mlperf-results-v7,
41). The end-to-end training times in Table 7 do not provide an apples-to-
apples performance comparison of input data frameworks, since the competition
entries used different software frameworks (TensorFlow, PyTorch, MXNet) and
hardware (TPUs, GPUs) to run model training computations. However, we can
still draw two important takeaways from the end-to-end training times in Table
7. First, tf.data is the only input processing framework that was used across
all MLPerf benchmarks, including image and text workloads. This attests to
tf.data’s flexibility. Other frameworks only achieved competitive results for
a subset of benchmarks (e.g., DALI for image workloads and DataLoader for text
workloads). Second, tf.data is fast enough to avoid input bottlenecks across
state-of-the-art models and hardware configurations, enabling training
Resnet50, SSD, Transformer, and BERT in under 30 seconds. As shown in § 4.2,
the MLPerf workloads are not input-bound after applying tf.data optimizations.
In particular, the higher end-to-end training time with GNMT is due to the
TensorFlow model computation being slower than the PyTorch implementation; the
tf.data part of the computation is not on the critical path.
| Resnet50 | SSD | Mask-RCNN | GNMT | Trans-former | BERT
---|---|---|---|---|---|---
tf.data | 28.8 | 27.6 | 487.8 | 77.4 | 15.6 | 23.4
DataLoader | - | - | 627.6 | 42.6 | 37.2 | 48.6
DALI | 49.8 | 49.2 | - | - | - | -
Table 7. Best MLPerf v0.7 competition training times (in seconds), categorized
by the input data framework used. Entries with tf.data, DataLoader, and DALI
input pipelines use TensorFlow, PyTorch, and MXNet, resp., for model training.
## 5. Experience
At Google, we have been using tf.data in training research and production ML
models since 2017. As of today, the system implementation consists of over 15K
lines of Python and over 40k lines of C++ (excluding test code). The tf.data
framework is used for data processing by the majority of TensorFlow training
jobs in Google’s fleet. These jobs run in production clusters, spanning a
variety of application domains (e.g., image classification, translation, and
video content recommendation) and using various types of ML training
algorithms (e.g., supervised learning, reinforcement learning, and federated
learning). tf.data’s generality has also facilitated novel research. For
example, a creative approach to working around limited I/O bandwidth when
training models was implemented using three standard tf.data
transformations (data-echoing, 13). tf.data was also used to automatically
generate a data augmentation policy that achieved state-of-the-art results on
image classification tasks (cubuk2019randaugment, 15).
To understand the characteristics of machine learning input data pipelines at
scale, we studied millions of tf.data jobs in Google’s fleet over a one month
period in 2020. We show that input pipelines are highly diverse and frequently
re-executed. We also identify several future research directions motivated by
our findings, such as the opportunity to re-use input pipeline computation
across jobs.
### 5.1. Fleet-wide Input Pipeline Analysis
Methodology: We instrument tf.data to collect metrics such as the set of
transformations applied in each job’s input pipeline. For each job, we also
record the bytes consumed and produced by each transformation in its input
pipeline. $71\%$ of jobs define their input pipeline as a single tf.data
dataset, while the remaining jobs define their input processing logic across
two or more tf.data datasets. When an iterator is created for a tf.data
dataset, we fingerprint the dataset by computing a hash of its dataflow graph.
We include the list of input file names in the hash calculation and exclude
random seed values. We track the number of iterator creations for each unique
hash over time. We also measure the total compute time for jobs and the
compute time that jobs spend in tf.data. The compute time is measured in
normalized compute units and is the product of the time spent on a hardware
resource – such as a CPU or an accelerator core – scaled by the compute
capability of that resource. Our compute time metric is analogous to AWS’s
Elastic Compute Units (ECUs) (aws-faq-ecu, 3). We collect the metrics
described above with full coverage for all tf.data jobs, with one exception.
Measuring the fraction of compute time spent in tf.data requires a
configuration flag to be set when jobs are launched. Due to configuration
differences across jobs, we measured the fraction of compute time spent in
tf.data for 66% of jobs, accounting for 75% of total compute time across
tf.data jobs. For the remaining jobs, we assume that each job spends $10$% of
its total compute time in tf.data, as this is the median time that jobs spend
in the input pipeline (see Figure 1).
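The dataset-fingerprinting step described above can be sketched in a few lines. The graph representation (a list of `(op, params)` pairs) and the `"seed"` parameter name are illustrative simplifications, not the actual tf.data internals:

```python
import hashlib

def fingerprint(dataflow_graph, input_files):
    """Hash a dataset's dataflow graph, excluding random seed values
    but including the list of input file names, as in the methodology."""
    h = hashlib.sha256()
    for op_name, params in dataflow_graph:
        h.update(op_name.encode())
        for key in sorted(params):
            if key == "seed":  # exclude random seeds from the hash
                continue
            h.update(f"{key}={params[key]}".encode())
    for name in sorted(input_files):  # include input file names
        h.update(name.encode())
    return h.hexdigest()

# Two pipelines that differ only in their shuffle seed share a fingerprint,
# so their executions are counted as re-executions of the same dataset.
g1 = [("interleave", {"cycle_length": 4}), ("shuffle", {"seed": 1})]
g2 = [("interleave", {"cycle_length": 4}), ("shuffle", {"seed": 2})]
files = ["part-00000.tfrecord", "part-00001.tfrecord"]
assert fingerprint(g1, files) == fingerprint(g2, files)
```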
Our analysis focuses on three key questions: 1) how frequently are various
transformations used in an input pipeline, 2) how does the “shape” of data
change as data flows through an input pipeline, and 3) how much computation is
shared across input pipeline executions in our fleet?
Figure 9. Types of input data pipeline operations and their prevalence, based
on the bytes produced by each type of op.
##### Which datasets are most common?
Figure 9 plots the relative frequency of tf.data transformations across jobs,
based on the number of bytes each transformation is applied on. The map,
batch, prefetch, repeat, and zip transformations are the five most commonly
applied types of transformations, followed by reading input data from local
memory and storage. We also study how many input pipelines rely on various
tf.data optimizations. On average, 77% of input pipelines rely on parallel I/O
optimizations by using the interleave transformation, 87% of input pipelines
rely on pipeline parallelism with the prefetch transformation, and 40% of
pipelines rely on parallelizing compute with the map transformation (and its
fusion with batch). Only 19% of jobs use the cache transformation to cache
input data in memory, though we later show that many more jobs could benefit
from caching since many input pipelines are re-executed.
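The pipeline parallelism that prefetch provides can be illustrated with a generic sketch using plain Python threads and a bounded queue (this is a conceptual analogue, not the tf.data runtime): a background thread produces the next element while the consumer works on the current one.

```python
import queue
import threading

def prefetch(generator, buffer_size=1):
    """Run `generator` in a background thread, keeping up to
    `buffer_size` elements ready, analogous to tf.data's prefetch."""
    buf = queue.Queue(maxsize=buffer_size)
    done = object()  # sentinel marking end of the stream

    def producer():
        for item in generator:
            buf.put(item)  # blocks when the buffer is full
        buf.put(done)

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = buf.get()
        if item is done:
            return
        yield item

# The consumer sees the same elements, but production overlaps consumption.
assert list(prefetch(iter(range(5)), buffer_size=2)) == [0, 1, 2, 3, 4]
```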
##### How does preprocessing affect data volume?
While some transformations, such as filtering, decrease the size of input
data, machine learning jobs also commonly apply transformations that increase
the size of data, such as decompressing and augmenting images to train image
understanding models. To understand how the volume of data flowing through ML
input pipelines varies with different transformations, we measure each input
pipeline’s ratio of bytes produced versus the bytes read from inputs sources.
We compute this ratio for the end-to-end input pipeline of each job, as well
as for each type of transformation applied in the job’s input pipeline. When
the bytes produced over bytes consumed ratio is less than one, it means that
the input pipeline or transformation in this job decreases the data volume,
whereas a ratio greater than one implies that the volume of data increases.
Figure 10 plots the CDF of the bytes produced over bytes consumed ratio across
jobs for their end-to-end input pipeline, map transformations, and filter
transformations. For approximately 75% of jobs, the volume of data produced by
the input pipeline and fed to the model training stage is less than the volume
of input data read. In other words, for most jobs, the materialized dataset
used for training is smaller than the raw input data. For some jobs,
decompressing and augmenting data results in high expansion of source data.
Figure 10 shows that user-defined map transformations, while preserving
dataset cardinality, can decrease or expand data by over an order of magnitude
for $13\%$ of jobs. filter transformations, which can modify dataset
cardinality, discard more than $10\%$ of input data volume for approximately
$23\%$ of jobs. For $8\%$ of jobs, more than half of the bytes fed into filter
transformations are discarded. filter is also used to sanitize data, which
explains why in $70\%$ of jobs the transformation reduces the data volume by
less than $1\%$.
Figure 10. CDF showing how the ratio of bytes produced vs. bytes read varies
for end-to-end input pipelines, map, and filter transformations. For 75% of
jobs, preprocessing reduces the volume of data end-to-end.
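The per-job ratio underlying Figure 10 reduces to simple bookkeeping; the job records below are made-up numbers purely for illustration:

```python
def expansion_ratio(bytes_produced, bytes_consumed):
    """Ratio < 1: the pipeline shrinks the data; > 1: it expands it."""
    return bytes_produced / bytes_consumed

# Hypothetical jobs: (bytes produced, bytes consumed).
jobs = {
    "filter-heavy": (2_000, 10_000),    # discards most input rows
    "decode-images": (50_000, 5_000),   # decompression expands data
    "pass-through": (9_990, 10_000),    # nearly unchanged
}

ratios = {name: expansion_ratio(p, c) for name, (p, c) in jobs.items()}
shrinking = sum(1 for r in ratios.values() if r < 1)
assert ratios["decode-images"] == 10.0
assert shrinking == 2  # two of the three jobs reduce data volume
```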
##### How often are input pipelines re-executed?
We observe that input pipelines are commonly re-executed and there is a
sizeable opportunity to reuse input pipeline computation both within and
across jobs. Some jobs rely on different permutations of the same dataset
across iterators to improve convergence. To conservatively estimate the
opportunity for computation reuse across input pipeline executions, we have
excluded datasets that use the shuffle transformation (57% of tf.data jobs) in
this part of our analysis.
An input pipeline iteration begins by creating an iterator for a dataset
definition. We record the number of iterator creations at the granularity of
one hour time intervals for each dataset fingerprint (computed by hashing its
dataflow graph). Figure 11 plots the fraction of input pipelines that are
executed more than $x$ times in the same hour, over time. Approximately 75% of
input pipelines are executed more than once within the same hour and 5% of
input pipelines are executed more than 100 times within an hour. Re-execution
of input pipelines can occur across epochs of a training job and also across
jobs. For example, neural architecture search (nas, 67) and hyper-parameter
tuning both require training multiple models using the same input pipeline.
Having found that many input pipelines are re-executed, we next quantify the
opportunity for reusing input pipeline computation by caching materialized
datasets. Figure 12 plots the cumulative distribution of input pipeline
executions over the one month time span of our study, with input pipelines
ordered from most to least frequently executed. We also show the CDF of the
compute resources spent executing these pipelines. Figure 12 shows that by
caching the top $10\%$ of materialized datasets, we can capture 72% of CPU
resources used for computing tf.data datasets across all jobs that executed in
the one month period. The steepness of the CDF curves indicates that some
datasets are executed particularly frequently and consume significant
resources. Only 10% of input pipelines are re-executed across multiple jobs.
1% of input pipelines are executed by more than 25 different jobs and the
largest cross-job sharing we observed was approximately 50,000 jobs executing
the same input pipeline. However, our analysis conservatively estimates the
opportunity for reuse since it only counts re-executions of pipelines with
identical end-to-end transformation graphs. We anticipate further
opportunities to reuse computation across jobs by considering input pipeline
sub-graphs.
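Results of the form "the top 10% of pipelines capture 72% of compute" come from a cumulative-share computation over per-pipeline costs; the skewed cost distribution below is synthetic:

```python
def top_fraction_share(costs, fraction):
    """Fraction of total compute captured by the most expensive
    `fraction` of pipelines (costs in arbitrary compute units)."""
    ordered = sorted(costs, reverse=True)
    k = max(1, int(len(ordered) * fraction))
    return sum(ordered[:k]) / sum(ordered)

# A heavily skewed distribution: one pipeline dominates the fleet.
costs = [100, 5, 5, 5, 5, 5, 5, 5, 5, 5]
share = top_fraction_share(costs, 0.10)
assert share == 100 / 145  # the top 10% captures ~69% of compute
```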
Figure 11. Fraction of input pipelines executed more than $x$ times per hour,
over time. Approx. 75% of input pipelines are executed more than once in the
same hour.
### 5.2. Implications for Future Research
##### Datasets as a service
We showed that input pipelines are frequently re-executed, yet only 19% of
jobs in our analysis used the cache transformation. It is often challenging
for users to decide if and where to apply caching as there are several factors
to consider: the cost-benefit of caching the data – spending RAM to save CPU
and possibly improve throughput – and the impact of caching on training
quality – in general, results of randomized transformations (such as shuffle)
should not be cached. Navigating the compute-storage trade-off and estimating
the benefit of caching on end-to-end performance and downstream accelerator
utilization for ML training jobs is a complex task for users (nectar, 22).
Hence, automating cache insertion in input pipelines is important (helix, 63).
Furthermore, since input pipelines can be shared across jobs, designing a
dataset caching service to re-use input pipeline computations across jobs is a
promising future direction. Quiver (quiver, 34) and CoorDL (coordl, 42)
already optimize source dataset caching for ML training. Several systems have
shown that caching data across jobs greatly improves performance for big data
analytics (nectar, 22, 5, 48, 18, 36).
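A first-order version of the caching cost-benefit reasoning above can be written down directly. The cost model and its parameters are placeholders for illustration, not a real pricing scheme or the tf.data autotuner:

```python
def should_cache(cpu_cost_per_exec, expected_reexecutions,
                 dataset_bytes, ram_cost_per_byte):
    """Cache a materialized dataset when the CPU saved across expected
    re-executions outweighs the RAM spent holding it in memory.
    A first-order heuristic only; it ignores throughput effects and
    the correctness constraint on randomized transformations."""
    cpu_saved = cpu_cost_per_exec * (expected_reexecutions - 1)
    ram_spent = dataset_bytes * ram_cost_per_byte
    return cpu_saved > ram_spent

# A pipeline re-executed 100 times is worth caching; a one-off is not.
assert should_cache(10.0, 100, dataset_bytes=1e6, ram_cost_per_byte=1e-5)
assert not should_cache(10.0, 1, dataset_bytes=1e6, ram_cost_per_byte=1e-5)
```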
Figure 12. CDF of input pipeline executions over a one month period. 10% of
pipelines account for 77% of total input pipeline executions and 72% of
compute resources.
##### Processing data closer to storage
Figure 10 showed that data preprocessing reduces the volume of data for 75% of
jobs. For $14\%$ of jobs, the volume of data fed into the model for training
is less than 10% of bytes read from storage. As input data for ML jobs
commonly resides in remote storage, such as a distributed file system or cloud
object store, this means that more data than necessary is sent over the
network during ML training. Designing ML data processing systems that apply
projections closer to storage is a promising way to reduce data transfers.
Using columnar data formats is another well-known approach to enable reading
only the relevant fields in a record (parquet, 9). We are exploring this
approach to improve data ingestion efficiency for ML jobs.
##### Addressing host bottlenecks
Some input pipelines require significant CPU and memory resources to produce
their data. When a host machine is not powerful enough to generate input data
at the rate the attached accelerator(s) consume the data, the accelerator(s)
idle and slow down model training. To solve this problem, we are currently
exploring the disaggregation of data processing from model training, by
enabling users to feed accelerators from input workers distributed across
multiple hosts. The number of input workers can scale up or down as needed to
keep up with the accelerators, independent of the number of accelerators
attached to one host. Another approach to address host resource bottlenecks
for input data processing is to offload data preprocessing computations to
accelerators (nvidia-dali, 21).
##### Data processing for online inference
tf.data targets the input processing needs of ML training jobs. However, input
pipeline efficiency is also critical for ML inference. Online inference
pipelines perform fewer model computations per input element since only a
forward pass of the model is required, whereas training also requires
backpropagation. Hence, although not all input pipeline transformations
applied during training – such as data augmentations – are applied when
serving a model, the input pipeline for inference still represents a significant
fraction of total work. Inference jobs need a different input pipeline design
as the primary performance objective is to optimize the latency of individual
requests rather than overall throughput. This implies less buffering, no
shuffling, and a different approach to batching to balance request latency
with accelerator efficiency.
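The latency-versus-efficiency batching trade-off for inference can be sketched as a pure function over timestamped requests: close a batch when it is full or when its oldest request has waited too long. The request format and deadline parameter are illustrative assumptions, not an actual serving API:

```python
def form_batches(requests, max_batch, max_wait):
    """Group (arrival_time, payload) requests into batches, closing a
    batch when it reaches max_batch elements or when its oldest
    request has waited at least max_wait seconds."""
    batches, current = [], []
    for t, payload in requests:
        if current and t - current[0][0] >= max_wait:
            batches.append([p for _, p in current])  # deadline reached
            current = []
        current.append((t, payload))
        if len(current) == max_batch:
            batches.append([p for _, p in current])  # batch full
            current = []
    if current:
        batches.append([p for _, p in current])
    return batches

reqs = [(0.00, "a"), (0.01, "b"), (0.02, "c"), (0.50, "d")]
# The first three requests fill a batch; "d" arrives too late to join.
assert form_batches(reqs, max_batch=3, max_wait=0.1) == [["a", "b", "c"], ["d"]]
```

A larger `max_batch` improves accelerator utilization, while a smaller `max_wait` bounds the latency added to any individual request.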
## 6\. Related Work
Kakarapathy et al. made the case for building a single, unified system for
data loading that could be shared between multiple machine learning jobs, and
potentially between different frameworks as well (unifying-data-loading-
hotcloud19, 30). Their observation that much I/O and preprocessing work can be
shared between jobs agrees with our findings in § 5.2. By contrast, our work
on tf.data has focused on a more general programming model, to enable users to
build different preprocessing schemes.
Our inspiration for tf.data’s programming model drew from the successful
application of LINQ (linq, 40) to parallel processing with PLINQ (plinq, 54),
big-data cluster processing with DryadLINQ (dryadlinq, 64), and stream
processing with Naiad (murray2013naiad, 44). Many transformations in tf.data
have direct equivalents in LINQ, though we added order-sensitive
transformations (e.g., batch, shuffle, and interleave) to support ML training
algorithms. Optimus (optimus, 33), which added dynamic graph rewriting support
to DryadLINQ, is similar to the automatic optimization framework that we
described in § 3.3. Optimus focused on reducing network I/O in distributed
big-data queries, whereas the bottleneck in tf.data applications tends to be
the host CPU, and our optimizations aim to reduce the wall-clock execution
time of code within a single machine. Dandelion extended LINQ with the ability
to run on accelerator devices such as GPUs and FPGAs (dandelion, 51), using
the PTask abstraction to manage the accelerators (ptask, 50). Dandelion and
PTask, respectively, provide a simple programming model and an optimized
implementation that hides data movement between the host and accelerator
devices, similar to how tf.data uses prefetch to mask copies. Dandelion goes
further than tf.data in using functional transformations to represent all
computation – not just the input pipeline – while tf.data interoperates with
existing ML frameworks such as TensorFlow (tensorflow, 1), Pytorch (pytorch,
46), and JAX (jax, 11) by using their existing programming models for the
training loop.
The design, implementation, and optimization of tf.data all bear similarities
to how SQL is used in a relational database management system (RDBMS). A
related strand of work has investigated how to push machine learning
computations into SQL, and optimize across the boundary between relational
data and linear algebra. The MADlib analytics library pushes various learning
algorithms into an existing RDBMS (madlib, 26). MADlib uses existing SQL
constructs for orchestration – i.e. defining both the input pipeline and the
“driver program” (or training loop) – and provides a C++ abstraction layer for
plugging in user-defined functions that call high-performance numerical
libraries. By building tf.data into TensorFlow and using its Tensor type to
represent values, we achieved efficient interoperability for free. More
recently, Karanasos et al. introduced Raven, which integrates the ONNX Runtime
for machine learning into Microsoft SQL Server (karanasos2019extending, 32).
Raven focuses on ML inference for SQL-based analytic pipelines, achieving
better performance by pushing linear algebra operators into earlier stages of
the query plan and using ONNX Runtime to offload computation to accelerators.
The model-related optimizations in tf.data are more conservative than Raven’s,
because the model is mutable at training time, but the ideas in Raven would be
useful for applications like knowledge distillation (distillation, 27), where
inference on one model generates features for training another model.
Several related projects have investigated the problem of automatically tuning
dataflow workloads. SEDA addresses the problem of dynamic resource allocation
to stages, using a simple scheme that adds threads to a stage when its queue
length exceeds a threshold, and removes them when they idle for a period
(seda, 59). By contrast, tf.data tunes the performance of each stage based on
the predicted effect on end-to-end performance. The DS2 scaling controller for
dataflow-based stream processing attempts to find the minimum parallelism for
each stage in a dataflow graph that will enable it to consume data at the
rates of all the sources (ds2, 31). Like DS2, tf.data uses lightweight
instrumentation of “useful” processing time in each transformation to make
scaling decisions, but we additionally model memory consumption as a possible
bottleneck resource to avoid excessive buffering.
## 7\. Conclusion
We presented tf.data, a framework for building and executing efficient input
data processing pipelines for machine learning jobs at scale. tf.data’s
programming model enables users to build diverse input pipelines by composing
and customizing operators. tf.data executes input pipelines as dataflow graphs
and applies static optimizations that improve end-to-end training time for
state-of-the-art models. For example, input pipeline parallelism and software
pipelining improve Resnet50 training time by over $10\times$ and other tf.data
optimizations such as operator fusion provide an additional $2\times$
improvement. We developed an analytical approach to automatically tune
internal buffer sizes and the degree of parallelism in input pipelines. These
dynamic optimizations achieve comparable performance to expert-tuned input
pipelines while relieving users from the burden of manually tuning parameters.
Our fleet-wide analysis of tf.data usage across millions of real jobs at
Google quantified several aspects of ML data processing at scale, namely its
resource footprint, diversity, and extent of redundant computation. Our
findings motivate future work on sharing computation across jobs and pushing
data projection to the storage layer.
## Acknowledgements
We thank Paul Barham, Chandu Thekkath, Vijay Vasudevan, Martin Abadi, Sudip
Roy, Dehao Chen, and our anonymous reviewers for their helpful feedback on
this work. We gratefully acknowledge Andrew Audibert, Brennan Saeta, Fei Hu,
Piotr Padlewski, Rachel Lim, Rohan Jain, Saurabh Saxena, and Shivani Agrawal
for their engineering contributions to tf.data.
## References
* (1) Martin Abadi et al. “TensorFlow: A system for large-scale machine learning” In _Proceedings of OSDI_ , 2016, pp. 265–283
* (2) Ashish Agarwal “Static Automatic Batching In TensorFlow” In _Proceedings of ICML_ , 2019, pp. 92–101
* (3) Amazon “Amazon EC2 FAQs”, https://aws.amazon.com/ec2/faqs, 2020
* (4) Amazon “Amazon EC2 Pricing”, https://aws.amazon.com/ec2/pricing/, 2020
* (5) Ganesh Ananthanarayanan et al. “PACMan: Coordinated Memory Caching for Parallel Jobs” In _Proceedings of NSDI_ , 2012, pp. 20
* (6) “Apache Beam: An advanced unified programming model”, https://beam.apache.org/, 2020
* (7) “Apache Flume”, https://flume.apache.org/, 2020
* (8) Apache Software Foundation “Avro”, https://avro.apache.org/docs/1.2.0, 2012
* (9) Apache Software Foundation “Parquet”, https://parquet.apache.org/, 2018
* (10) Leon Bottou “Curiously Fast Convergence of some Stochastic Gradient Descent Algorithms” In _Proceedings of the Symposium on Learning and Data Science_ , 2009
* (11) James Bradbury et al. “JAX: composable transformations of Python+NumPy programs”, 2018 URL: http://github.com/google/jax
* (12) Eric Breck, Neoklis Polyzotis, Sudip Roy, Steven Whang and Martin Zinkevich “Data Validation for Machine Learning” In _Proceedings of Machine Learning and Systems (MLSys) 2019_ , 2019
* (13) Dami Choi, Alexandre Passos, Christopher J. Shallue and George E. Dahl “Faster Neural Network Training with Data Echoing”, 2019 arXiv:1907.05550 [cs.LG]
* (14) Torch Contributors “PyTorch Docs: torch.utils.data”, https://pytorch.org/docs/stable/data.html, 2019
* (15) Ekin D. Cubuk, Barret Zoph, Jonathon Shlens and Quoc V. Le “RandAugment: Practical automated data augmentation with a reduced search space”, 2019 arXiv:1909.13719 [cs.CV]
* (16) Jeffrey Dean and Sanjay Ghemawat “MapReduce: Simplified Data Processing on Large Clusters” In _Proceedings of OSDI_ , 2004, pp. 137–150
* (17) Jia Deng, W. Dong, R. Socher, L.-J. Li, K. Li and L. Fei-Fei “ImageNet: A Large-Scale Hierarchical Image Database” In _Proceedings of CVPR_ , 2009
* (18) Francis Deslauriers, Peter McCormick, George Amvrosiadis, Ashvin Goel and Angela Demke Brown “Quartet: Harmonizing Task Scheduling and Caching for Cluster Computing” In _Proceedings of HotStorage_ , 2016
* (19) Google “Google Cloud: All Pricing”, https://cloud.google.com/compute/all-pricing, 2020
* (20) Goetz Graefe “Volcano: An Extensible and Parallel Query Evaluation System” In _IEEE Trans. on Knowledge and Data Engineering_ 6.1, 1994, pp. 120–135
* (21) Joaquin Anton Guirao et al. “Fast AI Data Preprocessing with NVIDIA DALI”, https://devblogs.nvidia.com/fast-ai-data-preprocessing-with-nvidia-dali, 2019
* (22) Pradeep Kumar Gunda, Lenin Ravindranath, Chandu Thekkath, Yuan Yu and Li Zhuang “Nectar: Automatic Management of Data and Computation in Datacenters” In _Proceedings of OSDI_ , 2010
* (23) Donald J. Haderle and Robert D. Jackson “IBM Database 2 overview” In _IBM Systems Journal_ 23.2, 1984, pp. 112–125
* (24) Kaiming He, Georgia Gkioxari, Piotr Dollár and Ross B. Girshick “Mask R-CNN” In _CoRR_ , 2017 arXiv: http://arxiv.org/abs/1703.06870
* (25) Kaiming He, Xiangyu Zhang, Shaoqing Ren and Jian Sun “Deep Residual Learning for Image Recognition” In _Proceedings of CVPR_ IEEE Computer Society, 2016, pp. 770–778
* (26) Joseph M. Hellerstein et al. “The MADlib Analytics Library: Or MAD Skills, the SQL” In _Proc. VLDB Endow._ 5.12 VLDB Endowment, 2012, pp. 1700–1711
* (27) Geoffrey Hinton, Oriol Vinyals and Jeff Dean “Distilling the Knowledge in a Neural Network”, 2015 arXiv:1503.02531 [stat.ML]
* (28) Java “Stream API”, https://docs.oracle.com/javase/8/docs/api/java/util/stream/package-summary.html, 2020
* (29) Norman P. Jouppi et al. “A Domain-Specific Supercomputer for Training Deep Neural Networks” In _Commun. ACM_ 63.7 Association for Computing Machinery, 2020, pp. 67–78
* (30) Aarati Kakaraparthy, Abhay Venkatesh, Amar Phanishayee and Shivaram Venkataraman “The Case for Unifying Data Loading in Machine Learning Clusters” In _Proceedings of HotCloud_ , 2019
* (31) Vasiliki Kalavri, John Liagouris, Moritz Hoffmann, Desislava Dimitrova, Matthew Forshaw and Timothy Roscoe “Three Steps is All You Need: Fast, Accurate, Automatic Scaling Decisions for Distributed Streaming Dataflows” In _Proceedings of OSDI_ , 2018, pp. 783–798
* (32) Konstantinos Karanasos et al. “Extending Relational Query Processing with ML Inference” In _Proceedings of CIDR_ , 2020
* (33) Qifa Ke, Michael Isard and Yuan Yu “Optimus: a dynamic rewriting framework for data-parallel execution plans” In _Proceedings of EuroSys_ , 2013, pp. 15–28
* (34) Abhishek Vijaya Kumar and Muthian Sivathanu “Quiver: An Informed Storage Cache for Deep Learning” In _Proceedings of FAST_ , 2020, pp. 283–296
* (35) Chris Lattner et al. “MLIR: A Compiler Infrastructure for the End of Moore’s Law” In _CoRR_ , 2020 arXiv: https://arxiv.org/abs/2002.11054
* (36) Haoyuan Li, Ali Ghodsi, Matei Zaharia, Scott Shenker and Ion Stoica “Tachyon: Reliable, Memory Speed Storage for Cluster Computing Frameworks” In _Proceedings of SoCC_ , 2014, pp. 1–15
* (37) Tsung-Yi Lin et al. “Microsoft COCO: Common Objects in Context” In _Proceedings of ECCV_ , 2014
* (38) Wei Liu et al. “SSD: Single shot multibox detector” In _Proceedings of ECCV_ , 2016, pp. 21–37 Springer
* (39) Peter Mattson et al. “MLPerf training benchmark” In _arXiv preprint arXiv:1910.01500_ , 2019
* (40) Erik Meijer, Brian Beckman and Gavin Bierman “LINQ: Reconciling Object, Relations and XML in the .NET Framework” In _Proceedings of SIGMOD_ , 2006, pp. 706
* (41) MLPerf Training v0.7 Results “Designing Efficient Data Loaders for Deep Learning”, https://mlperf.org/training-results-0-7/, 2020
* (42) Jayashree Mohan, Amar Phanishayee, Ashish Raniwala and Vijay Chidambaram “Analyzing and Mitigating Data Stalls in DNN Training”, 2021 arXiv:2007.06775 [cs.DC]
* (43) Dan Moldovan et al. “AutoGraph: Imperative-style Coding with Graph-based Performance” In _SysML_ , 2019
* (44) Derek G. Murray, Frank McSherry, Rebecca Isaacs, Michael Isard, Paul Barham and Martín Abadi “Naiad: A Timely Dataflow System” In _Proceedings of the 24th ACM Symposium on Operating Systems Principles (SOSP)_ ACM, 2013
* (45) MXNET “Designing Efficient Data Loaders for Deep Learning”, https://mxnet.apache.org/api/architecture/note_data_loading, 2018
* (46) Adam Paszke et al. “PyTorch: An Imperative Style, High-Performance Deep Learning Library” In _Advances in Neural Information Processing Systems 32_ Curran Associates, Inc., 2019, pp. 8024–8035
* (47) “Protocol Buffers”, https://developers.google.com/protocol-buffers
* (48) K. V. Rashmi, Mosharaf Chowdhury, Jack Kosaian, Ion Stoica and Kannan Ramchandran “EC-Cache: Load-Balanced, Low-Latency Cluster Caching with Online Erasure Coding” In _Proceedings of OSDI_ , 2016, pp. 401–417
* (49) Herbert Robbins and Sutton Monro “A Stochastic Approximation Method” In _Ann. Math. Statist._ 22.3 The Institute of Mathematical Statistics, 1951, pp. 400–407
* (50) Christopher J. Rossbach, Jon Currey, Mark Silberstein, Baishakhi Ray and Emmett Witchel “PTask: Operating System Abstractions to Manage GPUs as Compute Devices” In _Proceedings of SOSP_ , 2011, pp. 233–248
* (51) Christopher J. Rossbach, Yuan Yu, Jon Currey, Jean-Philippe Martin and Dennis Fetterly “Dandelion: A Compiler and Runtime for Heterogeneous Systems” In _Proceedings of SOSP_ , 2013, pp. 49–68
* (52) John E. Shore “The lazy repairman and other models: Performance collapse due to overhead in simple, single-server queuing systems” In _ACM SIGMETRICS Performance Evaluation Review_ 9.2, 1980, pp. 217–224
* (53) Patrice Y. Simard, Dave Steinkraus and John C. Platt “Best Practices for Convolutional Neural Networks Applied to Visual Document Analysis” In _Proceedings of ICDAR_ IEEE Computer Society, 2003, pp. 958
* (54) Roy Patrick Tan, Pooja Nagpal and Shaun Miller “Automated Black Box Testing Tool for a Parallel Programming Library” In _Proceedings of ICST_ IEEE Computer Society, 2009, pp. 307–316
* (55) TensorFlow “TensorFlow Graph Optimizations”, https://research.google/pubs/pub48051.pdf, 2019
* (56) TensorFlow “TFRecord and tf.Example”, https://www.tensorflow.org/tutorials/load_data/tfrecord, 2020
* (57) Ashish Vaswani et al. “Attention is All you Need” In _Advances in Neural Information Processing Systems 30_ , 2017, pp. 5998–6008
* (58) Philip Wadler “Deforestation: Transforming Programs to Eliminate Trees” In _Proceedings of the Second European Symposium on Programming_ NLD: North-Holland Publishing Co., 1988, pp. 231–248
* (59) Matt Welsh, David Culler and Eric Brewer “SEDA: an architecture for well-conditioned, scalable internet services” In _ACM SIGOPS Operating Systems Review_ 35.5, 2001, pp. 230–243
* (60) WMT “1st Conference on Machine Translation”, http://statmt.org/wmt16, 2016
* (61) WMT “2nd Conference on Machine Translation”, http://statmt.org/wmt17, 2017
* (62) Yonghui Wu et al. “Google’s neural machine translation system: Bridging the gap between human and machine translation” In _arXiv preprint arXiv:1609.08144_ , 2016
* (63) Doris Xin, Litian Ma, Jialin Liu, Stephen Macke, Shuchen Song and Aditya Parameswaran “Helix: Accelerating Human-in-the-Loop Machine Learning” In _Proc. VLDB Endow._ 11.12 VLDB Endowment, 2018, pp. 1958–1961
* (64) Yuan Yu et al. “DryadLINQ: A System for General-Purpose Distributed Data-Parallel Computing Using a High-Level Language” In _Proceedings of OSDI_ , 2008, pp. 1–14
* (65) Matei Zaharia, Mosharaf Chowdhury, Michael J. Franklin, Scott Shenker and Ion Stoica “Spark: Cluster Computing with Working Sets” In _Proceedings of HotCloud_ , 2010
* (66) Matei Zaharia, Tathagata Das, Haoyuan Li, Timothy Hunter, Scott Shenker and Ion Stoica “Discretized Streams: Fault-Tolerant Streaming Computation at Scale” In _Proceedings of SOSP_ , 2013, pp. 423–438
* (67) Barret Zoph and Quoc V. Le “Neural Architecture Search with Reinforcement Learning” In _Proceedings of ICLR_ , 2017
# Mapping the Supernovae Driven Winds of the Large Magellanic Cloud in
H$\alpha$ Emission I
Drew A. Ciampa Department of Physics & Astronomy, Texas Christian University,
Fort Worth, TX 76129, USA Kathleen A. Barger Department of Physics &
Astronomy, Texas Christian University, Fort Worth, TX 76129, USA Nicolas
Lehner Department of Physics, University of Notre Dame, Notre Dame, IN 46556,
USA Madeline Horn Department of Physics & Astronomy, Texas Christian
University, Fort Worth, TX 76129, USA Department of Astronomy, Smith College,
Northampton, MA 01063, USA Michael Hernandez Department of Physics &
Astronomy, Texas Christian University, Fort Worth, TX 76129, USA L. Matthew
Haffner Embry-Riddle Aeronautical University, Daytona Beach, FL 32114, USA
Space Science Institute, Boulder, CO 80301, USA Department of Astronomy,
University of Wisconsin-Madison, Madison, WI 53706, USA Brianna Smart
Department of Physics, Astronomy, and Mathematics, University of
Hertfordshire, Hatfield AL10 9AB, UK Department of Astronomy, University of
Wisconsin-Madison, Madison, WI 53706, USA Chad Bustard Department of Physics,
University of Wisconsin-Madison, Madison, WI 53706, USA Sam Barber
Department of Physics & Astronomy, Texas Christian University, Fort Worth, TX
76129, USA Trinity Valley High School, Fort Worth, TX 76132, USA Henry Boot
Department of Physics & Astronomy, Texas Christian University, Fort Worth, TX
76129, USA Burleson High School, Burleson, TX 76028, USA
###### Abstract
We present the first spectroscopically resolved H$\alpha$ emission map of the
Large Magellanic Cloud’s (LMC) galactic wind. By combining new Wisconsin
H-alpha Mapper (WHAM) observations ($I_{\rm H\alpha}\gtrsim 10~{}{\rm mR}$)
with existing H i 21-cm emission observations, we have (1) mapped the LMC’s
near-side galactic wind over a local standard of rest (LSR) velocity range of
$+50\leq\rm v_{LSR}\leq+250~{}{\rm km}~{}{\rm s}^{-1}$, (2) determined its
morphology and extent, and (3) estimated its mass, outflow rate, and mass-
loading factor. We observe H$\alpha$ emission from this wind to typically
1-degree off the LMC’s H i disk. Kinematically, we find that the diffuse gas
in the warm-ionized phase of this wind persists at both low ($\lesssim
100~{}{\rm km}~{}{\rm s}^{-1}$) and high ($\gtrsim 100~{}{\rm km}~{}{\rm
s}^{-1}$) velocities, relative to the LMC’s H i disk. Furthermore, we find
that the high-velocity component spatially aligns with the most intense star-
forming region, 30 Doradus. We, therefore, conclude that this high-velocity
material traces an active outflow. We estimate the mass of the warm
($T_{e}\approx 10^{4}~{}\rm K$) ionized phase of the near-side LMC outflow to
be $\log{\left(M_{\rm ionized}/M_{\odot}\right)=7.51\pm 0.15}$ for the
combined low and high velocity components. Assuming an ionization fraction of
75% and that the wind is symmetrical about the LMC disk, we estimate that its
total (neutral and ionized) mass is $\log{\left(M_{\rm
total}/M_{\odot}\right)=7.93}$, its mass-flow rate is $\dot{M}_{\rm
outflow}\approx 1.43~{}M_{\odot}~{}\rm yr^{-1}$, and its mass-loading factor
is $\eta\approx 4.54$. Our average mass-loading factor results are roughly a
factor of 2.5 larger than previous H$\alpha$ imaging and UV absorption line
studies, suggesting that those studies are missing nearly half the gas in the
outflows.
ISM: kinematics and dynamics — ISM: outflows — galaxies: evolution — galaxies:
individual: Large Magellanic Cloud
journal: ApJ
## 1 Introduction
Galactic feedback processes, such as stellar winds, supernovae, and active
galactic nuclei, inject both energy and momentum into the interstellar medium
(ISM) of the host galaxy. These processes can further drive gas out of galaxies in
galactic winds and fountains. As these processes cycle gaseous material
through the galaxy and into its surroundings, they transport enriched gas to
the outskirts of the galaxy and into the circumgalactic medium (CGM; Heckman
2003, Veilleux et al. 2005, Tumlinson et al. 2011). Furthermore, if the gas
that is ejected into the CGM is lost from the galaxy or if it stagnates in
the galaxy’s halo (e.g., Ford et al. 2014; Peeples et al. 2014), the star-
formation rate of the galaxy will likely decline unless it is able to procure
additional gaseous material from different sources. In most cases, this baryon
cycle is difficult to resolve because the gaseous material in the ISM and CGM
is faint. However, the nearby Large Magellanic Cloud (LMC) provides an
unparalleled view of gaseous material both within and surrounding its disk. By
investigating the gaseous material in the CGM of the LMC, we can better
understand its behavior, enabling us to decipher how the baryon cycle is
connected to galaxy evolution.
At a distance of $d_{\odot}\approx 50~{}\,\mathrm{kpc}$ (Pietrzyński et
al., 2013; de Grijs et al., 2014; Walker, 1999), the LMC is close enough to
resolve spatial features within its ISM. The stellar and total mass,
$M_{\star}=3\times 10^{9}~{}M_{\odot}$ (van der Marel et al., 2009) and
$M_{\rm total}=1.7\times 10^{10}~{}M_{\odot}$ ($\rm r_{\rm
enclosed}=8.7~{}\,\mathrm{kpc}$; van der Marel & Kallivayalil 2014), make the
LMC a low-mass galaxy, allowing gas to interact more freely with its
environment and exchange material with the CGM efficiently (Heckman et al.,
2000). The gaseous disk of this galaxy is projected nearly face-on with an
inclination angle of $22\arcdeg\lesssim i\lesssim 26\arcdeg$ (Kim et al.,
1998; Staveley-Smith et al., 2003; Choi et al., 2018), providing an
unobstructed view of its interstellar medium and activity within the disk.
Observations have revealed numerous neutral hydrogen super shells and holes
throughout the LMC's disk (Meaburn, 1980; Kim et al., 1999), which could be a
result of interactions with the Small Magellanic Cloud (SMC) and possibly the
Milky Way (MW; e.g., Besla et al. 2010, 2012), as well as recent periods of
intense star formation due to its interaction with the SMC (Harris & Zaritsky,
2009). Each of these studies suggests an active history and stellar lifecycle
that could lead to outflows with large amounts of energy and material blown
into their surroundings during times of increased stellar activity (Erb,
2015). The addition of energy and momentum from numerous supernovae throughout
the galaxy could be a way for a large-scale outflow to originate in a galaxy
like the LMC. Observations capturing a complete picture of any galaxy-wide
outflowing material have proved difficult due to the LMC's large size on the sky.
Observations of the H$\alpha$ emission from the ISM of the LMC in prior
studies focused primarily inside the LMC’s disk (Rosado et al. 1990, Laval et
al. 1992, and Reid & Parker 2012). While these studies revealed activity
within the ISM of the LMC (Pellegrini et al. 2012 and Winkler et al. 2015),
they only observed the brighter inner region of the LMC rather than the faint
emission from its extended diffuse disk and the galaxy’s circumgalactic
medium.
Although the LMC is our nearest gas-rich neighboring galaxy, it was not until
recently that many studies began to directly detect signatures of a large-
scale galactic outflow. Recent ultraviolet (UV) absorption-line spectroscopy
studies, using the Hubble Space Telescope (HST) and Far Ultraviolet
Spectroscopic Explorer (FUSE), investigated gas flows of the LMC (Howk et al.,
2002; Lehner & Howk, 2007; Lehner et al., 2009; Pathak et al., 2011). Howk et
al. (2002) used 12 stars embedded within the LMC as background targets to
explore the gas on the near-side of the LMC; they found that the absorption
along every sightline had kinematic signatures consistent with gaseous
material flowing out of the LMC. A study performed by Staveley-Smith et al.
(2003) using an H i 21-cm emission-line survey found high-velocity clouds
in the direction of the LMC and kinematic and morphological evidence that
these clouds could be associated with an LMC galactic outflow. A subsequent
absorption-line investigation conducted by Lehner & Howk (2007) toward four
more LMC stars confirmed the presence of outflowing gas and further found that
the LMC’s gas outflows correlated with H ii regions and super shells, possibly
signaling they are a result of supernovae in the disk. The blueshifted
material, relative to the LMC, was also detected along sightlines that
projected onto relatively quiescent regions. Lehner et al. (2009) found
further evidence that the high-velocity cloud (HVC) lying in the direction of
the LMC may have originated from an earlier LMC outflow event. They observed
that this cloud has a velocity gradient with R.A. in H i emission and low-
ionization species (see their Figure 5), an oxygen abundance similar to that
of the LMC, and depletion patterns that indicate the presence of dust, a
strong indicator that this material is of LMC origin.
However, Werner & Rauch (2015) were able to determine an upper distance limit
of an HVC at a similar velocity that is positioned only a few degrees offset
from the LMC’s disk using absorption-line spectroscopy toward a halo star at a
known distance. They found the cloud along $(l,~{}b)=(279\fdg 9,~{}-37\fdg 1)$
only lies $d_{\odot}\lesssim 13.3~{}\,\mathrm{kpc}$ away. While their distance
limit provides compelling evidence that some of the HVC material at a local
standard of rest (LSR) velocity of $\rm{v_{LSR}}=150~{}{\rm km}~{}{\rm
s}^{-1}$ is likely of MW origin (Richter et al., 2015), this does not
eliminate the possibility of two separate HVC complexes. It remains that
toward the LMC, an HVC is observed where the H i position-velocity map shows a
physical association with the LMC (Staveley-Smith et al., 2003), not the MW.
It is one of the few HVCs in which dust depletion is observed (Lehner et al.,
2009). The occurrence of both a MW and LMC HVC around the same projected area
is plausible given the prior work and the large angular region being explored.
Each of the previously mentioned absorption-line studies that detected
blueshifted material in the direction of the LMC uses background targets
embedded within the disk of the LMC, which only traces material between the MW
and LMC. Moreover, the majority of previously explored sightlines were
preferentially selected toward active star-formation regions where outflows
are more likely to occur. However, four sightlines in the Lehner & Howk
(2007) study probe relatively quiescent regions and still show blueshifted
material relative to the LMC, leaving open the possibility of a
galactic wind across the whole of the near-side of the LMC disk. This comes in
addition to the prospect of a hot corona coexisting with the wind (de Boer &
Savage, 1980; Wakker et al., 1998; Lucchini et al., 2020). These two
scenarios are mutually consistent, as the outflow may feed the corona and
supply it with gas.
In order to probe gas flows on the far-side of the LMC, Barger et al. (2016)
used a “down-the-barrel” (star) and transverse (QSO) experiment to isolate
material that is located on the far-side of the LMC disk. They found that both
the low and high ions are symmetrically flowing out of the LMC disk at speeds
representing an intermediate-velocity cloud (IVC). Those results, when
combined with the previous studies of Howk et al. (2002) (absorption from 12
LMC stars showing outflowing material), Lehner & Howk (2007) (additional 4 LMC
stars correlating with H ii bubbles and super shells), Lehner et al. (2009)
(gradient found in the HVC toward the LMC), and Pathak et al. (2011) (strong
absorption in O vi across the entire face of the LMC), provide convincing
evidence that the LMC drives a global, large-scale outflow across its entire
disk.
Our study supplements the previous UV work by supplying the first
spectroscopically resolved H$\alpha$ map of the LMC and its surrounding
environment. This approach removes limitations of previous works that had
small numbers of pointings that were dependent on the location of background
targets, which severely reduced the spatial coverage of the material they
observed. The work we present is unique in that (1) our observations are
roughly an order of magnitude more sensitive than previous studies—enabling us
to map the diffuse optical emission from these clouds—and that (2) we were
able to spatially resolve the entirety of the LMC and its surroundings at an
angular resolution of $\theta_{\rm resolution}=1\arcdeg$. Neither of these is
achievable for the vast majority of other, more distant gas-rich galaxies.
With emission-line observations of the ionized component of the
LMC’s IVCs and HVCs, we explore their global morphology and kinematic
distribution in Section 5. In this section, we further assess whether the IVC
material could be associated with an LMC wind origin. In Section 6, we discuss
how a portion of the emission is from gas currently being driven from the
galaxy as an IVC. This is followed by an estimate of this material's mass as
well as its role in the LMC's neighborhood (Sections 6.1 and 6.2). Section 7
discusses an HVC that is moving at speeds upward of $\Delta{\rm v}_{\rm
LMCSR}\approx-150~{}{\rm km}~{}{\rm s}^{-1}$. The significance and possible
explanation for this material's origin are considered in Section 7.1.
## 2 Data
This study utilizes archival radio and newly acquired optical emission-line
observations to trace the neutral and ionized hydrogen gas in and around the
LMC.
### 2.1 Wisconsin H$\alpha$ Mapper
We surveyed the faint H$\alpha$ ($\lambda_{\rm H\alpha}=6562.8~{}\mbox{\AA}$)
emission across the face of the LMC’s disk and in the region that surrounds it
using the Wisconsin H$\alpha$ Mapper (WHAM) telescope over the $+50\leq\rm
v_{LSR}\leq+250~{}{\rm km}~{}{\rm s}^{-1}$ velocity range. (We use the
kinematic definition of the LSR, in which the Sun moves $20~{}{\rm km}~{}{\rm
s}^{-1}$ in the direction of $(\rm
R.A.,~{}DEC.)=(18^{h}3^{m}50.29^{s},~{}30\arcdeg 00\arcmin 16\farcs 8)$ for
the Julian 2000 epoch, J2000.) Equipped with a Fabry-Pérot spectrometer, WHAM
is roughly an order of magnitude more sensitive than other currently available
instruments, enabling us to detect the faint emission at a sensitivity limit
of $I_{\rm H\alpha}\approx 10~{}{\rm mR}$ per 30-second exposure. (A Rayleigh
(R) is a unit of measure for the surface brightness of emission lines that is
equal to $1~{}{\rm R}=10^{6}/{4\pi}~{}\rm
photons\,cm^{-2}\,sr^{-1}\,s^{-1}$, which is $5.6\times 10^{-18}~{}\rm
erg\,s^{-1}\,cm^{-2}\,arcsec^{-2}$ for H$\alpha$.) WHAM achieves this
sensitivity through the high throughput afforded by its 1-degree beam. By
adjusting the gas pressures between the instrument's etalons, we can tune our
observations to center on the H$\alpha$ emission line associated with the LMC. More
information about the WHAM telescope and its capabilities is outlined in
Haffner et al. (2003).
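As a quick check of the Rayleigh-to-cgs conversion quoted above, the sketch below (not part of the WHAM pipeline; function and constant names are ours) derives the H$\alpha$ value from the photon-flux definition of the Rayleigh:

```python
# Sketch: converting the Rayleigh surface-brightness unit into cgs intensity
# for H-alpha, using 1 R = 1e6 / 4pi photons cm^-2 sr^-1 s^-1 as quoted above.
import math

H = 6.62607e-27            # Planck constant [erg s]
C = 2.99792458e10          # speed of light [cm/s]
LAMBDA_HALPHA = 6562.8e-8  # H-alpha wavelength [cm]
SR_PER_ARCSEC2 = (math.pi / (180.0 * 3600.0))**2  # steradians per arcsec^2

def rayleigh_to_cgs(intensity_R, wavelength_cm=LAMBDA_HALPHA):
    """Convert an emission-line intensity in Rayleighs to
    erg s^-1 cm^-2 arcsec^-2."""
    photon_energy = H * C / wavelength_cm           # erg per photon
    photons = intensity_R * 1e6 / (4.0 * math.pi)   # photons cm^-2 sr^-1 s^-1
    return photons * photon_energy * SR_PER_ARCSEC2

# One Rayleigh of H-alpha:
print(f"{rayleigh_to_cgs(1.0):.2e}")  # → 5.66e-18, cf. the quoted 5.6e-18
```

The small difference from the quoted $5.6\times 10^{-18}$ is rounding in the text.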
Our WHAM observations have a $\theta_{\rm resolution}=1\arcdeg$ angular
resolution, which corresponds to a $\Delta d_{\rm resolution}\approx
1~{}\,\mathrm{kpc}$ spatial resolution at the distance of the LMC. The
observations were Nyquist sampled across the face of the galaxy, effectively
increasing the integration time per location on the sky and removing gaps
between observations with overlapping pointings. Our H$\alpha$ survey contains
more than 6,600 individual observations that are positioned both on and off
the LMC’s H i disk, covering an area that spans from $(l,b)=(246\fdg 5,-17\fdg
8)$ to $(315\fdg 0,-46\fdg 7)$.
### 2.2 Radio Data
We use archival H i 21-cm emission-line data from the Parkes Galactic All-Sky
Survey (GASS; McClure-Griffiths et al. 2009), which is publicly available
through an online database retrieval site
(https://www.atnf.csiro.au/research/GASS/Data.html), to
probe the neutral hydrogen gas phase in the LMC at an angular resolution of
$\theta_{\rm resolution}=16\arcmin$, corresponding to an angular area that is
roughly $14\times$ smaller than that of the WHAM beam. This survey spans a
velocity range of $-400\leq\rm v_{LSR}\leq+500~{}{\rm km}~{}{\rm s}^{-1}$ and
has a spectral resolution of $\Delta\rm v_{\rm resolution}=0.82~{}{\rm
km}~{}{\rm s}^{-1}$. The RMS brightness temperature noise is $T_{\rm
B,~{}RMS}=57~{}\rm mK$, corresponding to a $3\sigma$ sensitivity of
$\log{\left(N_{\rm H\textsc{~{}i}}/\,\mathrm{cm}^{-2}\right)}\approx 18.2$ for
a typical high-velocity cloud line width of $30~{}{\rm km}~{}{\rm s}^{-1}$
(McClure-Griffiths et al., 2009). We convert H i brightness temperatures
($T_{\rm B}$) to column densities using the relationship $N_{\rm
H\textsc{~{}i}}=1.823\times 10^{18}\int({T_{\rm B}}/\rm K)(d{\rm v}/{\rm
km}~{}{\rm s}^{-1})~{}\,\mathrm{cm}^{-2}$, which assumes that these clouds
emit optically thin $21~{}\,\mathrm{cm}$ radiation. In this paper, the
survey's $3\sigma$ limit is far lower than the practical limit used for our
maps and mass calculation.
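The column-density relation, together with the per-channel rms noise, fixes the quoted sensitivity limits. The sketch below (our own reconstruction, not the surveys' pipeline; it assumes independent noise in each channel) recovers the numbers cited in this section:

```python
# Sketch: the optically thin 21-cm column-density relation and one plausible
# way to recover the quoted 3-sigma sensitivity limits from the rms noise.
import math

def hi_column_density(t_b_kelvin, dv_kms):
    """N_HI [cm^-2] from a list of brightness temperatures T_B [K] sampled at
    a uniform channel spacing dv [km/s], assuming optically thin emission."""
    return 1.823e18 * sum(t_b_kelvin) * dv_kms

def n_hi_3sigma_limit(t_rms_K, line_width_kms, channel_kms):
    """3-sigma detection limit for a line of the given width, treating the
    noise in each spectral channel as independent."""
    n_channels = line_width_kms / channel_kms
    return 3.0 * 1.823e18 * t_rms_K * math.sqrt(n_channels) * channel_kms

# GASS: 57 mK rms, 0.82 km/s channels, 30 km/s line width.
print(f"{math.log10(n_hi_3sigma_limit(0.057, 30.0, 0.82)):.2f}")  # → 18.19
# LAB: 0.07 K rms, 1.3 km/s channels, 30 km/s line width.
print(f"{math.log10(n_hi_3sigma_limit(0.07, 30.0, 1.3)):.2f}")    # → 18.38
```

These reproduce the quoted $\log N_{\rm H\textsc{~{}i}}\approx 18.2$ (GASS) and $18.38$ (LAB) limits for a $30~{\rm km}~{\rm s}^{-1}$ line.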
In addition to the GASS data, we use a combined H i 21-cm emission-line map
consisting of combined ATCA and Parkes telescope observations (Kim et al.,
2003). The velocity range of this LMC H i survey spans $+190\leq\rm
v_{LSR}\leq+375~{}{\rm km}~{}{\rm s}^{-1}$ and has a $\Delta\rm v_{\rm
resolution}=1.6~{}{\rm km}~{}{\rm s}^{-1}$ spectral resolution. Compared to
the GASS survey, this combined map has a column density sensitivity of
$\log{\left(N_{\rm H\textsc{~{}i}}/\,\mathrm{cm}^{-2}\right)}\approx 18.86$
for a $\Delta\rm v=30~{}{\rm km}~{}{\rm s}^{-1}$ wide line, but a much higher
spatial resolution at $\theta_{\rm resolution}=1\arcmin$. These data resolve
smaller physical structures than GASS, improving our ability to discern
smaller-scale morphological features in the disk.
We also use data from the Leiden/Argentine/Bonn (LAB) Survey to measure
extinction along our sightlines due to its similar beam size as WHAM. LAB data
has an effective FWHM beam of $\theta_{\rm resolution}=35\arcmin$ for
declination $\leq-27\arcdeg$. The survey covers a velocity range of
$-450\leq\rm v_{LSR}\leq+400~{}{\rm km}~{}{\rm s}^{-1}$ with a spectral
resolution of $\Delta\rm v_{\rm resolution}=1.3~{}{\rm km}~{}{\rm s}^{-1}$.
The rms noise is $T_{\rm B,\,rms}=0.07~{}{\rm K}$ resulting in a $3\sigma$
sensitivity of $\log{\left(N_{\rm
H\textsc{~{}i}}/\,\mathrm{cm}^{-2}\right)}\approx 18.38$.
## 3 Data Reduction
The H$\alpha$ reduction process we performed was carried out in two stages. In
the first stage, we used the standard WHAM reduction pipeline, which performs
the bias subtraction, flat-fielding, ring-summing, cosmic ray contamination
removal, and air mass corrections. In the second stage, we velocity calibrated
our spectra, removed atmospheric signatures from observations, masked out
observations affected by foreground stars, and corrected for dust extinction.
### 3.1 WHAM Pipeline
We utilized the WHAM pipeline that is described in detail in Haffner et al.
(2003). During this data processing, pixels affected by cosmic rays were first
removed. The circular interference patterns that result from our Fabry-Pérot
spectrometer observations were summed in annuli to produce a linear spectrum
that is a function of velocity. These linear spectra span a $\Delta\rm
v=200~{}{\rm km}~{}{\rm s}^{-1}$ velocity range and are uniformly binned to
$\Delta\rm v_{\rm bin}=2~{}{\rm km}~{}{\rm s}^{-1}$ intervals. The pipeline
normalizes the spectra by exposure time, scales them for the air mass of
observations, and applies an intensity correction factor to account for
sensitivity degradation of the WHAM instrumentation that occurs over time.
### 3.2 Velocity Calibration
Our observations span $+50\leq\rm v_{GEO}\leq+250~{}{\rm km}~{}{\rm s}^{-1}$
in the geocentric (GEO) velocity frame. Over this range, these observations do
not overlap with the bright geocoronal H$\alpha$ line at $\rm
v_{GEO}=-2.3~{}{\rm km}~{}{\rm s}^{-1}$ and only overlap with the blue wing
of a bright OH line at $\rm v_{GEO}=+272.44~{}{\rm km}~{}{\rm s}^{-1}$.
Therefore, we were unable to use either of these lines to calibrate our
velocities using the method described in Hausen et al. (2002) and Barger et
al. (2013). Instead, we used the velocity calibration technique that is
described by Barger et al. (2017) and Antwi-Danso et al. (2020) for WHAM
observations that do not overlap with bright atmospheric lines at well
established transitions. Using this technique, we calibrated our velocity by
monitoring the pressure of the $\rm SF_{6}$ gas in the WHAM Fabry-Pérot
etalons and by further refining the calibration by comparing our observations
with an atmospheric template.
Using the linear relationship between the pressure of the Fabry-Pérot etalons
and $\Delta\lambda$ measured by Tufte (1997), we calculated the velocity
offset between the raw and geocentric velocity frames. This is essentially the
reverse of our tuning process, which enabled us to calibrate the velocity
frame to an accuracy of $\Delta\rm v_{GEO}\lesssim 5~{}{\rm km}~{}{\rm
s}^{-1}$. Because all of our observations were taken at the same tune (i.e.,
at the same interference order), the relative velocities of the calibrated
observations agree within $0.1~{}{\rm km}~{}{\rm s}^{-1}$ of each other as
described in Barger et al. (2017). We improved our calibration further by
aligning our observations with the faint atmospheric lines in the atmospheric
template presented by Barger et al. (2013) (see their Figure 3). This enabled
our velocity solution to be calibrated to an accuracy of $\Delta\rm
v_{GEO}\lesssim 1~{}{\rm km}~{}{\rm s}^{-1}$. We then converted from GEO to
LSR using a constant offset that accounted for the date, time, and location of
each observation.
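The kinematic-LSR projection underlying the final GEO-to-LSR step can be sketched as follows. This is a minimal illustration using only the solar-motion definition given earlier (20 km/s toward R.A. 18h03m50.29s, Dec. +30d00'16.8", J2000); the actual constant offset also removes Earth's orbital and rotational motion, which depends on the date, time, and site and is omitted here. The helper name is ours:

```python
# Sketch: projecting the 20 km/s kinematic-LSR solar motion onto a sightline.
# Earth's orbital/rotational contribution to GEO->LSR is omitted.
import math

V_SUN_LSR = 20.0  # km/s, kinematic-LSR definition used in the text
APEX_RA = math.radians((18 + 3 / 60 + 50.29 / 3600) * 15.0)  # hours -> deg
APEX_DEC = math.radians(30 + 0 / 60 + 16.8 / 3600)

def helio_to_lsr(v_helio_kms, ra_rad, dec_rad):
    """Shift a heliocentric radial velocity by the projection of the solar
    motion toward the apex onto the (ra, dec) sightline."""
    cos_angle = (math.sin(dec_rad) * math.sin(APEX_DEC)
                 + math.cos(dec_rad) * math.cos(APEX_DEC)
                 * math.cos(ra_rad - APEX_RA))
    return v_helio_kms + V_SUN_LSR * cos_angle

# Toward the apex itself, the full +20 km/s is added:
print(helio_to_lsr(0.0, APEX_RA, APEX_DEC))  # → 20.0
```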
### 3.3 Atmospheric Subtraction
#### 3.3.1 Atmospheric Template
There are significant faint atmospheric emission features which populate the
entire velocity range of our observations. While these lines are abundant,
they behave predictably and vary primarily with air mass. Because of this, we
modelled these lines using a template created by Barger et al. (2013) (see
their Figure 3), which characterizes the atmospheric emission present in our
observations. The template was scaled to account for differences in air mass
between observations and night-to-night variations due to humidity and
temperature.
Brighter lines are more variable and need to be fit individually. This
includes a bright OH molecular line at $\rm v_{GEO}=272.44~{}{\rm km}~{}{\rm
s}^{-1}$ whose line strength depends on the angle between the Sun and Earth’s
upper atmosphere. In the direction of the LMC, this bright OH line is
overwhelmed by emission from the LMC’s disk. To subtract this emission feature
in the direction of the LMC, we kept the area and width of the line fit
constant so that it matched with off-disk observations that were taken during
the same night and within roughly 15 minutes of the on-disk observations.
Because this narrow OH line is kinematically unresolved by the WHAM telescope,
we fit the line assuming that it has a width of $\Delta\rm v_{\rm
OH~{}width}=1~{}{\rm km}~{}{\rm s}^{-1}$ before we convolved it with WHAM’s
$\Delta\rm v_{\rm WHAM~{}IP}\approx 12~{}{\rm km}~{}{\rm s}^{-1}$ instrument
profile.
We fit the background emission with a constant term added to the atmospheric
template. The total fit, which includes the bright OH line at ${\rm
v_{GEO}}=272.44~{}{\rm km}~{}{\rm s}^{-1}$, faint atmospheric lines, a flat
background, and Gaussian-modelled astronomical emission, utilized a chi-square
minimization with conservative criteria to avoid over-fitting emission
features that are not physically realistic. These criteria account for the
line width and lower limits for the strength of the astronomical emission,
which are dependent on the gas temperature and instrument sensitivity,
respectively. An example of a spectrum before and after the atmospheric
correction is shown in Figure 1 for a sightline piercing through the LMC. A detailed
description of these bright lines and how they were handled, including their
origin and the nature of their variability, is outlined in Barger et al.
(2013).
Figure 1: (Top panel) A pre-atmospheric subtracted WHAM spectrum toward
$(l,~{}b)=(279\fdg 0,~{}-31\fdg 0)$ drawn as the black line. The atmospheric
template we used to reduce our H$\alpha$ observations is indicated with a red
overlaid line. The emission contributions associated with the Milky Way at
$\rm v_{GEO}=+66.21~{}{\rm km}~{}{\rm s}^{-1}$ and a bright OH line at a
velocity of $\rm v_{GEO}=+272.44~{}{\rm km}~{}{\rm s}^{-1}$ are traced with
dashed dot gray lines. The solid, thick gray line traces all of the
atmospheric emission (atmospheric template and OH line) that we subtracted
from the WHAM spectrum during our reduction procedure. (Bottom panel) The
final reduced spectrum, where emission from the LMC ($\rm
v_{GEO}\gtrsim+260~{}{\rm km}~{}{\rm s}^{-1}$) is shaded light gray, LMC IVC
material ($+210\lesssim\rm v_{GEO}\lesssim+260~{}{\rm km}~{}{\rm s}^{-1}$) is
shaded light blue, LMC HVC material ($+150\lesssim\rm
v_{GEO}\lesssim+210~{}{\rm km}~{}{\rm s}^{-1}$) is shaded dark blue, and the
Milky Way ($\rm v_{GEO}\lesssim+90~{}{\rm km}~{}{\rm s}^{-1}$) is marked.
#### 3.3.2 Removal of Systematic Signatures
Following the removal of the atmospheric contamination with the atmospheric
template described above, we discovered systematic spectral signatures in the
reduced spectra. The cause of these structured residuals is likely due to very
faint, unresolved atmospheric lines that are not described in the atmospheric
template or from a slight velocity misalignment between the observed spectra
and the atmospheric template causing the signatures during subtraction. These
signatures appear at the same geocentric rest frame (GEO) velocity over
narrow, $5\lesssim\Delta\rm v\lesssim 10~{}{\rm km}~{}{\rm s}^{-1}$, velocity
widths. These signatures are visible when the spectra in our survey are
stacked across the same velocity range in the geocentric frame, as shown in
the top panel of Figure 2. At several velocities in this frame, there are
multiple relatively coherent vertical signatures across the spectra. However,
the two bright astronomical horizontal structures are associated with the
Magellanic Bridge ($+150\lesssim\rm v_{GEO}\lesssim+210~{}{\rm km}~{}{\rm
s}^{-1}$ and $0\leq\textrm{spectrum channel}\leq 35$) and the LMC and its wind
($+150\lesssim\rm v_{GEO}\lesssim+300~{}{\rm km}~{}{\rm s}^{-1}$ and
$100\leq\textrm{spectrum channel}\leq 200$).
These faint residual atmospheric signatures exist across all observed spectra.
To characterize these spectral artifacts, we averaged the spectra together at
locations within our map that contain little to no astronomical H$\alpha$
emission above our sensitivity limit. We then subtracted this average off-
target spectrum from our observed spectra to effectively remove these
signatures. To ensure that the off-target spectra accurately characterized
the sky for each region of the map, this procedure was repeated separately
for four Galactic latitudinal sub-regions. We display a stacked spectral
image with before (top panel) and after (bottom panel) samples of this
reduction process for a subset of our observations in Figure 2.
Overall, the vertical atmospheric features at $\rm v_{GEO}\approx+150$,
$+180$, and $+230~{}{\rm km}~{}{\rm s}^{-1}$ have been greatly reduced.
However, some of this residual atmospheric emission remains, especially at
$\rm v_{GEO}\approx+180~{}{\rm km}~{}{\rm s}^{-1}$ for $\textrm{spectrum
channels}>350$ that correspond to a region on the sky between
$(l,\,b)=(265\arcdeg,\,-20\arcdeg)$ and $(285\arcdeg,\,-25\arcdeg)$. These
atmospheric residuals persist in these spectra because this Galactic latitude
region had few good H$\alpha$-faint off-target locations. Although the presence of
this lingering atmospheric emission in our final reduced H$\alpha$ emission
map is not ideal, it does not impact the final results of this paper as we do
not use the data in that region of the sky in our mass calculation.
Figure 2: (Top) A latitude sub-region between $-45\leq b\leq-36$ containing
466 WHAM spectra before the removal of residual signatures. At various
locations of $\rm v_{GEO}=+155$, $+195$, and $+230~{}{\rm km}~{}{\rm s}^{-1}$,
vertical structures are visible within the blue dashed rectangles, labeled 1,
2, and 3. (Bottom) After the correction there is a large improvement in the
removal of the vertical signatures. At those same velocities as the top panel,
the signatures are reduced. In both panels, emission from the LMC is visible
as horizontal stripes over channels 100–200 while the Magellanic Bridge (MB)
is visible from channels 0–35 at lower velocities. Figure 3: Four example
H$\alpha$ spectra that probe different positions in our survey. This includes
emission toward: (Panel A) 30 Doradus at $(l,~{}b)=(279\fdg 5,~{}-31\fdg 7)$,
(Panel B) the LMC’s disk at $(278\fdg 3,~{}-31\fdg 0)$, (Panel C) the MW star
used in the Richter et al. (2015) study $(279\fdg 9,~{}-37\fdg 1)$, and (Panel
D) off the LMC’s disk at $(271\fdg 7,~{}-29\fdg 7)$. In these panels, we shade
material that coincides with the H i disk of the LMC in light gray (only Panel
C over the displayed LSR velocities) and LMC IVC material out to a line-of-
sight velocity of $|\rm v_{LOS}|=100~{}{\rm km}~{}{\rm s}^{-1}$ of its disk in
light blue.
### 3.4 Observational Cutoffs and Degradation Correction
We removed sightlines that are within a $0\fdg 55$ angular radius of stars
that are $m_{\rm V}\leq 6.0~{}{\rm mag}$ as their stellar continuum
contributes non-linear emission to the H$\alpha$ spectra. This cutoff removes
689 observations from our sample, corresponding to roughly 12% of our
observations. Because WHAM is a remote observatory configured for queue
observing and does not require constant monitoring during good weather
conditions, we double checked that all of the observations were taken at the
optimal CCD temperature for the WHAM camera of $T_{\rm CCD}=-101.2\arcdeg{\rm
C}$, and not during an automated liquid nitrogen fill, to reduce noise. We
confirmed that the etalon pressures were stable and that the monitored values
matched the input values during setup for the observations at night. We also
ensured that all of the used observations were taken when the outside humidity
was less than 85%, at a Zenith Distance less than $75\arcdeg$, and during dark
time observations. Throughout this process, a total of 291 additional
observations (around 4% of the remaining sample) were removed. In total,
there were 980 observations (or 14%) removed from our sample leaving 5,931
observations. For sightlines with repeat observations, we averaged the spectra
together. Our survey samples a total of 1,712 unique sightlines, where each
sightline was observed an average of 3.5 times with the locations toward the
LMC’s H i disk sampled the most.
As a result of WHAM instrument degradation over time, observations suffered up
to a 20% decrease in observed H$\alpha$ intensity (Smart et al., 2019). The
procedure used for determining the WHAM instrument degradation trend with time
is outlined in Haffner et al. (2003). However, our intensity correction does
not include a night-to-night intensity calibration associated with airmass
variations that are due to variations in atmospheric conditions. This is
because there were insufficient calibration observations taken each night
during this survey, in part because there are few WHAM calibration targets in
the southern sky.
### 3.5 Extinction Correction
Previous absorption-line spectroscopic studies toward LMC stars (e.g., Howk et
al. 2002; Lehner & Howk 2007; Lehner et al. 2009; Barger et al. 2017) indicate
that the gas clouds surveyed in this study exist between us and the LMC.
Additionally, Barger et al. (2017) found compelling evidence that the gas
within $\Delta\rm v_{LOS}\approx 100~{}{\rm km}~{}{\rm s}^{-1}$ from the H i
disk of the LMC along the line of sight is associated with large-scale
galactic outflow. Similarly, the depletion patterns observed by Lehner et al.
(2009) for the HVC material that is in the same projected region on the sky
also indicates that this cloud contains dust. We, therefore, applied two
extinction corrections: one for attenuation due to the Milky Way and another
for self-extinction of the gas clouds assuming that they have a chemical
composition similar to that of the LMC.
To correct for reddening, we used the following relationship from Diplas &
Savage (1994) that relates the color excess with the average $N_{\rm
H\textsc{~{}i}}$ foreground emission:
$E(B-V)=\frac{\left<N_{\rm H\textsc{~{}i}}\right>}{4.93\times 10^{21}~{}\rm
atoms/(cm^{2}~{}mag)}$ (1)
We used the Leiden/Argentine/Bonn Galactic H i survey (LAB; Kalberla et al.,
2005; Hartmann & Burton, 1997) to calculate the average foreground H i
emission. For the MW, we integrated the H i emission over the $-100\leq\rm
v_{LSR}\leq+100~{}{\rm km}~{}{\rm s}^{-1}$ velocity range and assumed an
extinction parameter of $R_{\rm v}=3.1$. Similar to the Barger et al. (2013)
study, we also adopt the Cardelli et al. (1989) optical extinction curve.
Combined, the total extinction for the MW dust is then given by:
$A(\rm H\alpha)=5.14\times 10^{-22}\,\left<N_{\rm H\textsc{~{}i}}\right>\,\rm
cm^{-2}\,atoms^{-1}~{}mag,$ (2)
where the extinction corrected intensity is then $I_{\rm
H\alpha,\,corr}=I_{\rm H\alpha,\,obs}e^{A(\rm H\alpha)}$. After correcting for
extinction associated with foreground MW material, our observed H$\alpha$
intensities increase by roughly 10%.
The self extinction by the circumgalactic medium is small as this gas is
diffuse. Following a similar process to the MW extinction, we can account for
the self extinction of the winds by adopting the extinction parameter for the
LMC measured by Gordon et al. (2003) of $R_{\rm v}=3.41$. We used an $R_{\rm
v}$ that is measured for the LMC’s disk as the IVC and HVC material likely
originated from this galaxy via stellar feedback events. When we integrated
the H i emission across the velocity range of our wind, $+100\leq\rm
v_{LSR}\leq+225~{}{\rm km}~{}{\rm s}^{-1}$, we found that the associated self
extinction correction is much less than 1% and, therefore, we neglected this
correction in our mass calculations below.
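The two-step Milky Way correction of Equations 1 and 2 can be sketched numerically as below. This follows the $e^{A({\rm H}\alpha)}$ de-extinction form exactly as written in the text; the foreground column used in the example is an illustrative value chosen to reproduce the quoted ~10% boost, not a measured one, and the function names are ours:

```python
# Sketch of the MW extinction correction (Equations 1-2 in the text):
# E(B-V) = <N_HI> / 4.93e21, A(Halpha) = 5.14e-22 * <N_HI>,
# I_corr = I_obs * exp(A(Halpha)).
import math

def ebv_from_nhi(n_hi):
    """Diplas & Savage (1994) color excess from foreground <N_HI> [cm^-2]."""
    return n_hi / 4.93e21

def halpha_extinction_mag(n_hi):
    """A(Halpha) [mag] for MW dust (Cardelli et al. 1989 curve, R_v = 3.1)."""
    return 5.14e-22 * n_hi

def correct_intensity(i_obs, n_hi):
    """De-extinct an observed Halpha intensity using the text's e^A form."""
    return i_obs * math.exp(halpha_extinction_mag(n_hi))

# An illustrative foreground column of ~1.9e20 cm^-2 yields the ~10% boost
# quoted in the text:
n_hi = 1.9e20
print(f"A = {halpha_extinction_mag(n_hi):.3f} mag, "
      f"boost = {correct_intensity(1.0, n_hi) - 1:.1%}")
# → A = 0.098 mag, boost = 10.3%
```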
## 4 LMCSR Velocity Frame
When exploring the circumgalactic material of the LMC, it is useful to use a
reference frame centered around the LMC disk rather than the LSR frame. We
refer to this reference frame as the Large Magellanic Cloud Standard of Rest
(LMCSR) frame. Because the LMC is actively forming stars across its H i disk,
the kinematic width of its disk varies rapidly at small scales. Across a slice
through the LMC’s disk at Galactic latitude of $b=-31\fdg 67$ and centered on
the 30 Doradus starburst region, the width varies from $25\lesssim\Delta{\rm
v}_{\rm H\textsc{~{}i}}\lesssim 50~{}{\rm km}~{}{\rm s}^{-1}$ (see Figure 4).
This multiple component structure complicates the process of determining where
the disk kinematically ends and where a wind begins.
Across the roughly 10 degree Galactic longitude slice shown in Figure 4, the
motion of the H i gas has a velocity gradient that spans from
$+225\lesssim{\rm v_{LSR}}\lesssim+275~{}{\rm km}~{}{\rm s}^{-1}$ at higher
Galactic longitudes of $l\approx 282\arcdeg$ to $+275\lesssim{\rm
v_{LSR}}\lesssim+325~{}{\rm km}~{}{\rm s}^{-1}$ at $l\approx 276\arcdeg$.
Along this velocity slice, there are several locations containing holes where
little to no H i exists. This is not surprising as these holes can be created
by energetic stellar feedback activity occurring inside the disk,
which heats and ionizes the surrounding gas. This feedback can further drive
circumstellar and interstellar material outward, possibly contributing to a
galactic outflow as noted by Staveley-Smith et al. (2003). In the LMCSR
velocity frame, the spatially varying velocity gradient in the LSR frame is
removed, which helped us disentangle the disk and the wind material.
Figure 4: (Bottom) An H i intensity-weighted position-position map of the LMC.
The circle marks the location of 30 Doradus at $(l,~{}b)=(279\fdg
46,~{}-31\fdg 67)$ and the dashed line indicates the extent of the Galactic
Longitude range considered in the top-left panel. (Top-left) A position-velocity
map of H i emission running through the location of 30 Doradus. (Top-right) H
i spectra toward the 30 Doradus sightline. The dashed line represents a single
ATCA H i spectrum toward 30 Doradus while the solid blue curve depicts an
average spectrum for all emission within a circular area of 1 degree in diameter
centered on 30 Doradus.
To convert our spectroscopic observations from the LSR to LMCSR velocity
reference frame, we initially used the relationship provided by Lehner et al.
(2009), which described the motion of the LMC’s disk stars. However, galaxy
interactions have disrupted the LMC’s disk so that the gaseous and stellar
components do not align. While the Lehner et al. (2009) LSR to LMCSR
relationship works reasonably well for the motion of the gas toward the disk,
our observations extend $\Delta\theta\gtrsim 5\arcdeg$ off the LMC’s gaseous
disk on all sides and are no longer centered in that velocity reference frame.
Instead, we modeled the H i emission of the LMC and its surroundings to convert
our H$\alpha$ observations into the LMCSR velocity reference frame. This is
especially beneficial because the gaseous H i emission extends much further
than the stellar disk and because H$\alpha$ emission tends to kinematically
follow the H i emission in HVCs (e.g., Haffner et al. 2001; Putman et al.
2003; Hill et al. 2009; Barger et al. 2012, 2013, 2017; Antwi-Danso et al.
2020).
We determined the motion of the LMC’s H i disk by performing a Gaussian
decomposition of H i GASS spectra across our surveyed region. We enforced that
the Gaussian fits meet the following criteria: each fit must have a column
density above $\log{\left(N_{\rm
H\textsc{~{}i}}/\,\mathrm{cm}^{-2}\right)}\approx 19$, a kinematic width of
approximately $30~{}{\rm km}~{}{\rm s}^{-1}$, and a velocity centroid
between $+175\leq{\rm v_{LSR}}\leq+325~{}{\rm km}~{}{\rm s}^{-1}$ to be
considered part of the LMC disk. We modelled the H i disk as a simple 2D plane
using a least-squares fit. To improve the accuracy of this plane, we weighted
our fit by the H i column density. Our resultant relationship between the
line-of-sight Galactic longitude ($l$) and latitude ($b$) and the central
velocity offset is:
$\frac{\Delta\rm v_{LMCSR}}{{\rm km}~{}{\rm
s}^{-1}}=262.55-3.25\left(l-280\right)+3.66\left(b-33\right)$ (3)
This offset corresponds to an LMCSR velocity as follows: ${\rm
v_{LMCSR}}$=${\rm v_{LSR}}$+${\Delta\rm v_{LMCSR}}$. We used the width of the
H i lines out to $3$ standard deviations to describe the thickness of the
LMC’s H i disk and to distinguish between disk and wind material. The near-
side and far-side disk boundaries are described by the difference between the
$3$ standard deviation fitted plane and the central velocity (Equation 3).
This results in adopting an LMC disk width of roughly $80~{}{\rm km}~{}{\rm
s}^{-1}$ across the face of the LMC, or $40~{}{\rm km}~{}{\rm s}^{-1}$ from
the kinematic center of the disk to its edge. The newly constructed velocity
frame improves our ability to separate the wind material from the LMC’s disk
that lies in front of the galaxy and to identify the wind material that
extends past its H i disk.
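The disk/wind separation described above can be sketched as below. The plane-fit offset $\Delta{\rm v_{LMCSR}}$ is taken as an input (it would come from Equation 3 evaluated at each sightline's $(l,\,b)$), and the $\pm 40~{\rm km~s^{-1}}$ half-width is the adopted $3\sigma$ kinematic extent of the disk; the numeric offset in the example is an assumed illustrative value.

```python
DISK_HALF_WIDTH = 40.0  # km/s; adopted 3-sigma half-width of the LMC H I disk

def classify_sightline(v_lsr, dv_lmcsr):
    """Classify gas as disk or wind in the LMCSR frame.

    v_lsr    : observed LSR velocity [km/s]
    dv_lmcsr : offset from the fitted disk plane (Equation 3) [km/s]
    """
    v_lmcsr = v_lsr + dv_lmcsr  # v_LMCSR = v_LSR + delta-v_LMCSR
    if abs(v_lmcsr) <= DISK_HALF_WIDTH:
        return "disk"
    return "wind (blueshifted)" if v_lmcsr < 0 else "wind (redshifted)"

# Illustrative offset (assumed value for a sightline near the disk center):
dv = -260.0  # km/s
print(classify_sightline(260.0, dv))  # gas at the kinematic disk center
print(classify_sightline(150.0, dv))  # blueshifted wind at v_LMCSR = -110 km/s
```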
## 5 Kinematic Morphology of H$\alpha$ Emission
We observe blueshifted material at the intermediate- and high-velocities
relative to the LMC in both H i and H$\alpha$ emission. This emission is
consistent with the results of previous UV absorption-line spectroscopy
studies that suggest a large-scale galactic wind emanating from the LMC,
driven by the stellar activity within its disk (Howk et al., 2002; Lehner &
Howk, 2007; Barger et al., 2016). The H i IVC emission is strong toward the
LMC’s disk and rapidly decreases radially. The H$\alpha$ emission similarly
decreases radially away from the LMC yet extends off the boundary of the LMC
stellar ($r_{\star}\sim 2.15~{}\,\mathrm{kpc}$) and H i disk at
$\log{\left(\rm H\textsc{~{}i}/\,\mathrm{cm}^{-2}\right)}\approx 19$ (see the
left-hand panel of Figure 5). This H$\alpha$ emission is asymmetric, relative
to the H i, such that it extends farther along the edge of the LMC disk near
the 30 Doradus starburst region.
Gaseous debris has littered the surrounding area of the LMC due to its
interactions with the SMC. Therefore, in addition to the wind that is likely
associated with the LMC, there is Magellanic tidal material and MW HVCs that
pollute this region of the sky. At higher Galactic latitudes than the LMC
($b\geq-27\arcdeg$), there are a few sparse H i clouds that are likely
associated with the Leading Arm (LA) complex LA I near
$(l,~{}b)\approx(283\arcdeg,\,-24\arcdeg)$; for more details on the H$\alpha$
distribution of these offset clouds, see Smart et al. (2021), in preparation.
Likewise, because the Magellanic Bridge connects the LMC and SMC, its emission
is present at $l\geq 285\arcdeg$ with similar velocities as the high Galactic
longitude edge of the LMC. Toward the southern rim of the LMC disk, there are
fragmented clouds centered on $(l,~{}b)=(273\arcdeg,~{}-41\arcdeg)$,
$(278\arcdeg,~{}-38\arcdeg)$ and $(281\arcdeg,~{}-42\arcdeg)$ (see the right-
hand panel of Figure 5). The background halo star that Richter et al. (2015)
used to establish the distance of an HVC ($d_{\odot}\leq 13.3\,\mathrm{kpc}$)
at the location $(l,\,b)=(279\fdg 9,\,-37\fdg 1)$ lies within the region of
these low-latitude fragmented clouds. These clouds overlap in velocity with
the HVC absorption (${\rm v}_{\rm LMCSR}\approx-150~{}{\rm km}~{}{\rm
s}^{-1}$) along this stellar sightline, suggesting that this gas could be
associated with the MW. For this reason, we conservatively do not consider gas that is
more than a few degrees off of the LMC’s H i disk to be part of the LMC
outflow.
We find that the gaseous material toward the LMC that is the least blueshifted
(i.e., kinematically closest to the LMC’s motion) morphologically follows the
H i disk of the LMC. In Figure 6, we separate the H$\alpha$ into four separate
emission maps, each with small integration ranges that allow us to study the
bulk properties of the gas cloud in discrete slices of velocity. The emission
at ${\rm v}_{\rm LMCSR}\approx-100~{}{\rm km}~{}{\rm s}^{-1}$ maintains an
intensity well over $I_{\rm H\alpha}\approx 0.3~{}\rm R$ across the face of
the LMC (see the left two panels of Figure 6). This widespread H$\alpha$ emission
of the IVC is consistent with a large-scale LMC outflow.
At higher velocities, approaching the more blueshifted material, our data
show that the emission has a strong spatial alignment with the most active
star-forming region within the LMC, 30 Doradus (see right two panels of Figure 6).
This emission spans over $\Delta{\rm v}_{\rm LOS}\gtrsim 150~{}{\rm km}~{}{\rm
s}^{-1}$ and remains stable across each channel map toward the star-forming
region. Detecting strong H$\alpha$ emission out to velocities of nearly
$\Delta{\rm v}_{\rm LMCSR}\approx-150~{}{\rm km}~{}{\rm s}^{-1}$ while also
observing a connection to the lower velocities in the same region indicates an
association with the LMC.
Figure 5: (Top) H$\alpha$ emission maps of the LMC and its surroundings. This
map traces the material that is blueshifted relative to the LMC. From left to
right, the total wind integrated over the $-175\leq\rm v_{LMCSR}\leq-55~{}{\rm
km}~{}{\rm s}^{-1}$ velocity range, the IVC portion integrated over the
$-100\leq\rm v_{LMCSR}\leq-55~{}{\rm km}~{}{\rm s}^{-1}$ velocity range, and
the HVC portion integrated over the $-175\leq\rm v_{LMCSR}\leq-100~{}{\rm
km}~{}{\rm s}^{-1}$ velocity range. The overlaid black contours trace the H i
emission across the same integration range at $\log(N_{\rm
H\textsc{~{}i}}/{\rm cm}^{-2})=19$. (Bottom) H i column density map covering
the same region and velocity ranges above using GASS data.
Figure 6: H$\alpha$ emission channel maps centered at $\rm v_{LMCSR}=-70$,
$-100$, $-130$, $-160~{}{\rm km}~{}{\rm s}^{-1}$ (left to right). Widths of
these velocity slices are all $\Delta{\rm v}=30~{}{\rm km}~{}{\rm s}^{-1}$.
The bright emission in the right-hand panel at ${\rm log}(I_{\rm H\alpha}/{\rm
mR})\approx 2.5$ spatially aligns with the 30 Doradus starburst at
$(l,~{}b)=(279\fdg 5,~{}-31\fdg 7)$. H i column density contours are drawn at
$\log(N_{\rm H\textsc{~{}i}}/{\rm cm}^{-2})=19.0$ and $20.0$ in gray and
black, respectively.
## 6 Intermediate Velocity Gas
We find numerous clouds that are bright in H$\alpha$ emission and blueshifted
by roughly $50-100~{}{\rm km}~{}{\rm s}^{-1}$ relative to the LMC’s H
i disk (Figures 5 and 6). Most of this intermediate velocity material
spatially aligns with the disk of the LMC, resembling the galactic outflow
suggested previously (Howk et al., 2002; Barger et al., 2016). Using emission
across 1,712 H$\alpha$ sightlines toward the LMC IVC, we determine its mass,
its mass-flow rate, and mass-loading factor. In the following LMC IVC mass
calculations, we only include the material that is within $\Delta\theta\approx
1\arcdeg$ of the LMC H i disk with $\log{\left(\rm
H\textsc{~{}i}/\,\mathrm{cm}^{-2}\right)}\gtrsim 19$. This is because some of
the H$\alpha$ emission that is projected multiple degrees off its disk could
be associated with Magellanic tidal debris (i.e., Magellanic Bridge, Leading
Arm) or MW HVCs (see Figure 5 and Section 5).
### 6.1 Mass Estimate of IVC
Barger et al. (2016) estimated the mass of the intermediate velocity
outflowing LMC winds to be $\log{\left(M_{\rm
ionized}/M_{\odot}\right)}\gtrsim 7.16$ for the low-ionization species.
Because they found that this wind is roughly symmetrical on either side of the
LMC’s disk, this would correspond to a mass of $\log{\left(M_{\rm
ionized}/M_{\odot}\right)}\gtrsim 6.86$ for only the near-side ionized
outflow. However, as that study only sampled the wind along two neighboring
sightlines via absorption-line spectroscopy, they had to make assumptions
about the spatial extent and morphology of the wind. With our kinematically
resolved, near-side H$\alpha$ emission map of the wind of the LMC, we are able
to measure both of these directly.
In contrast with the absorption-line work, our H$\alpha$ survey allows us to
obtain a mass estimate more naturally from the wind’s density times its
volume, $M=\rho{\rm V}$. We calculate its mass density using the electron
number density as a proxy for the density of protons as they are roughly equal
(i.e., $n_{p}\approx n_{e}$) and use a reduced mass of $\mu\approx 1.4m_{\rm
H}$ to account for the contribution from helium and metals. Calculating the
volume of the wind requires knowing its solid angle $\Omega$, line-of-sight
depth $L$, three dimensional geometry (see Figure 8), and distance $D$. We
also include a $\cos{\left(i\right)}$ factor to account for the inclination of
the cross-sectional area of the wind relative to our line of sight. The mass
is then given as: $M=\mu n_{e}\Omega D^{2}L\cos{\left(i\right)}$. For gas at
the distance of the LMC, the mass enclosed within one WHAM beam of $1\arcdeg$
diameter is then:
$\frac{M_{\rm
ionized}}{{M_{\odot}}}=2.1~{}\times~{}10^{4}\cos{\left(i\right)}\Bigg{(}\frac{D}{50~{}{\rm
kpc}}\Bigg{)}^{2}\Bigg{(}\frac{L_{{\rm H}^{+}}}{\rm
pc}\Bigg{)}\Bigg{(}\frac{n_{e}}{\rm cm^{-3}}\Bigg{)}$ (4)
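The numerical coefficient in Equation 4 can be recovered from $M=\mu n_{e}\Omega D^{2}L\cos{(i)}$ and the unit conversions alone. A minimal sketch, assuming a single circular WHAM beam of $1\arcdeg$ diameter and $\mu=1.4m_{\rm H}$:

```python
import math

M_H = 1.6735e-24   # hydrogen atom mass [g]
MSUN = 1.989e33    # solar mass [g]
PC = 3.0857e18     # parsec [cm]
MU = 1.4 * M_H     # mean mass per electron, including He and metals

# Solid angle of one 1-degree-diameter circular beam [sr]
omega = math.pi * math.radians(0.5) ** 2

# M = mu * n_e * Omega * D^2 * L, evaluated at the fiducial values of
# Equation 4: n_e = 1 cm^-3, D = 50 kpc, L = 1 pc (cos i = 1)
D = 50e3 * PC  # cm
L = 1.0 * PC   # cm
mass_per_beam = MU * 1.0 * omega * D**2 * L / MSUN
print(f"{mass_per_beam:.2e} Msun")  # ~2.1e4, matching Equation 4
```

Scaling this single-beam value by $\cos{(i)}$, $(D/50~{\rm kpc})^{2}$, $L$ in pc, and $n_{e}$ in cm$^{-3}$ reproduces Equation 4.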
We estimate the total mass of the wind by summing the single beam mass across
the projected outflow area. For the 1,712 H$\alpha$ spectra that fill our map
of the LMC (Figure 5), we define the morphological extent of the LMC’s
galactic wind to include regions that are within $1\arcdeg$ of its H i disk
with neutral column densities larger than $\log(N_{\rm
H\textsc{~{}i}}/\,\mathrm{cm}^{-2})\geq 19.0$. This region contains 215 WHAM
sightlines within roughly $50~{}{\rm deg}^{2}$ that are used for our
mass estimate. For each of these sightlines, we integrated across the
$\rm-100\leq v_{LMCSR}\leq-55~{}{\rm km}~{}{\rm s}^{-1}$ velocity range to
measure the H$\alpha$ intensity of this wind and to explore its spatial
distribution (see Equation 3). The strength of the H$\alpha$ recombination
line is directly proportional to the electron density squared along the line-
of-sight depth and the electron temperature ($T_{e}$) of the gas as:
$\frac{I_{\rm H\alpha}}{\rm R}=0.364\,T_{4}^{-0.924}\left(\frac{EM}{{\rm
pc}\,\,\mathrm{cm}^{-6}}\right),$ (5)
where EM is the emission measure ($EM\equiv\int n_{e}(s)^{2}ds$) and $T_{4}$
is given in units of ten thousand Kelvin $(\rm i.e.,T_{4}=T_{e}/{10^{4}\,K})$.
Measurements of the average EM and associated velocity ranges are given in
Table 1.
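Equation 5 and its inverse (recovering EM from a measured H$\alpha$ intensity) can be sketched as follows; the $0.390~{\rm pc\,cm^{-6}}$ value is the mean IVC emission measure quoted in Table 1.

```python
def emission_measure(i_halpha, t4=1.0):
    """Invert Equation 5: EM [pc cm^-6] from I_Halpha [Rayleighs].

    t4 : electron temperature in units of 1e4 K
    """
    return i_halpha / (0.364 * t4 ** -0.924)

def halpha_intensity(em, t4=1.0):
    """Equation 5: I_Halpha [R] from EM [pc cm^-6]."""
    return 0.364 * t4 ** -0.924 * em

# Mean IVC emission measure from Table 1, <EM> = 0.390 pc cm^-6:
print(halpha_intensity(0.390))  # ~0.14 R for T_e = 1e4 K
```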
Since we cannot measure the line-of-sight depth or the electron density as a
function of depth directly, we adopt a few necessary assumptions to estimate
the mass of the wind that closely follow the procedures used in prior WHAM
H$\alpha$ studies for evaluating the mass of HVCs (e.g., Hill et al. 2009;
Barger et al. 2012, 2017; Smart et al. 2019).
Table 1: Observed Velocities and Emission Measures
Outflow Component | $v_{\rm LMCSR}$ | $\left<{\rm EM}\right>$a
---|---|---
| $({\rm km}~{}{\rm s}^{-1})$ | $(\rm 10^{-3}\,pc\,cm^{-6})$
IVC | $-100$ to $-55$ | $390$
HVC | $-175$ to $-100$ | $205$
aAn average emission measure across the corresponding velocity range, used to
calculate the ionized mass across all sightlines considered to be part of the wind.
#### 6.1.1 Line-of-Sight Depth
The most difficult of these assumptions pertains to the depth and line-of-
sight distribution of the wind. Past studies of the HVC component of the LMC
wind observed similar kinematics for the neutral and low-ionization species
(e.g., Lehner et al. 2009). Moreover, observations of both H$\alpha$ and H i
in outflows of other galaxies (e.g. M82; Lehnert et al. 1999 and Schwartz &
Martin 2004) support a multi-phase wind that is well mixed at large scales. In
our study, due to our coarse angular resolution of 1 degree, we spatially
resolve the wind at the kiloparsec scale and are unable to resolve small-scale
structure in the wind. We therefore assume that the neutral gas and
ionized gas are well mixed at the scales we are probing in our survey such
that the ionized hydrogen depth is roughly equal to the neutral depth, i.e.,
$L_{\rm H^{+}}\approx L_{\rm H\textsc{~{}i}}$.
Figure 7: Simulated H$\alpha$ emission maps from Bustard et al. (2020) of the
LMC’s galactic wind from an edge-on perspective, using the Trident package
(Hummels et al., 2017). This wind is assumed to be in photoionization
equilibrium with the UV background (no local ionizing sources from the LMC or
Milky Way are included). This model uses the orbital history of the LMC from
the models of Besla et al. (2012), the present-day infall velocity of the LMC
is $258~{}{\rm km}~{}{\rm s}^{-1}$ directed edge-on and $194~{}{\rm km}~{}{\rm
s}^{-1}$ directed face-on with respect to the LMC; the ambient medium is
assumed to be smooth with a total gas number density of $\sim 10^{-4}~{}{\rm
cm}^{-3}$ at the present-day LMC distance of $d_{\odot}\approx 50~{}{\rm kpc}$
(Salem et al., 2015). (Top) H$\alpha$ emission at a lookback time of
$60~{}{\rm Myr}$. (Bottom) Current day edge-on view of the LMC and its
H$\alpha$ emission. The arrow in the lower-left corner of each panel
represents the motions of the head wind caused by the LMC’s path through the
MW halo. On each colorbar, the sensitivity of our observations (10 mR) is
marked with a horizontal line and an arrow pointed upward.
To estimate this depth, we analyze the fiducial simulations of Bustard et al.
(2020) for LMC-specific outflows (see Figure 7). These magnetohydrodynamic
simulations used the observed LMC star-formation history from Harris &
Zaritsky (2009) to seed the star cluster particles that would subsequently
deposit the thermal, kinetic, and cosmic ray energy into surrounding grid
cells. In these simulations, gas that emerged from the LMC’s disk due to
stellar driven outflows experienced an external pressure by surrounding
coronal gas. Bustard et al. (2020) found that the ambient coronal gas wind
pushes against the leading edge of the LMC via ram pressure and that its
effects are strong enough to suppress the near-side outflows and alter the
shape of the LMC’s halo. While this simulation neglects the gravitational
influence of the SMC and Milky Way, we expect the depth of the outflow to be
primarily influenced by the LMC’s gravitational potential and ram-pressure
effects.
On a global scale, the galactic winds produced in the Bustard et al. (2020)
simulations match well kinematically and spatially with the observed LMC
outflow. We use the Bustard et al. (2020) results as a guide for constraining
the depth of this wind. They find that the $10^{4}~{}{\rm K}$ gas in the near-
side outflow penetrated to a height of $z_{\rm wind}\approx 3~{}{\rm kpc}$
below the midplane of the LMC ($z_{\rm midplane}=0~{}{\rm kpc}$) at a lookback
time of $60~{}{\rm Myrs}$. At present-day, they find that this wind stalls at
a height of $z_{\rm wind}\approx 4~{}{\rm kpc}$ due to ram pressure. The
outflows on the far-side of the LMC, however, are able to travel much further
off the disk as ram-pressure effects are weaker on the trailing side of the
galaxy. Accounting for the height of the LMC’s H i disk ($z_{\rm
H\textsc{~{}i}\,disk}\approx 1.75~{}{\rm kpc}$), this corresponds to a
present-day wind depth of roughly $1\leq L_{\rm wind}\leq
2~{}\,\mathrm{kpc}$ off the galaxy.
We used this depth to calculate an average electron density for the LMC’s
galactic wind using the measured EM as $\langle n_{e}\rangle=\langle
EM\rangle^{1/2}L_{\rm H^{+}}^{-1/2}$. We then used this density to calculate
the outflow’s mass with Equation 4. Although our main uncertainty involves the
depth of the wind, it is important to note that the mass scales only as $M_{\rm
ionized}\propto L_{\rm H^{+}}^{1/2}$, resulting in only a modest variance in
mass when we consider a range of reasonable depths. Moreover, we assume an
electron temperature in the range $0.8\lesssim T_{4}\lesssim 1.2$, which is
where the H$\alpha$ emission peaks. We further assume that the temperature of
the neutral and ionized hydrogen gas are roughly equal allowing us to relate
the neutral and ionized hydrogen number densities for a given pressure
scenario as $P_{\rm ionized}/n_{\rm ionized}=P_{\rm neutral}/n_{\rm neutral}$
under ideal gas conditions.
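The density step above, $\langle n_{e}\rangle=\langle EM\rangle^{1/2}L_{\rm H^{+}}^{-1/2}$, can be sketched as follows for the adopted $1$--$2~{\rm kpc}$ depth range; note that, combined with Equation 4, the per-beam mass then scales as $\sqrt{\langle EM\rangle\,L_{\rm H^{+}}}$, which is why the mass is only weakly sensitive to the assumed depth.

```python
import math

def mean_electron_density(em, depth_pc):
    """<n_e> = <EM>^(1/2) * L^(-1/2), with EM in pc cm^-6 and L in pc."""
    return math.sqrt(em / depth_pc)

# IVC values from the text: <EM> = 0.390 pc cm^-6, depth 1-2 kpc
for depth_kpc in (1.0, 1.5, 2.0):
    n_e = mean_electron_density(0.390, depth_kpc * 1e3)
    print(f"L = {depth_kpc} kpc -> <n_e> ~ {n_e:.4f} cm^-3")
```

Doubling the depth lowers $\langle n_{e}\rangle$ by $\sqrt{2}$ but raises the emitting volume by 2, so the inferred mass changes by only $\sqrt{2}$.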
Figure 8: Explored near-side volume geometries of the LMC outflow, including
(a) a cylinder with a uniform outflow that spans the face of the galaxy out to
some height, (b) a partial cone with its narrow end embedded and centered on a
region of intense star-formation, (c) an inverted partial cone with its narrow
side pointing away from the LMC’s disk to match the morphology of the
simulated galactic outflow under influence of ram-pressure and headwinds.
Because the LMC is nearly face-on, we cannot constrain how the morphology of
the wind varies with depth as we only see its 2 dimensional projection on the
sky. Therefore, we acknowledge three separate volume scenarios: cylinder,
partial outward flaring cone, and partial tapered cone (see Figure 8). For the
cylindrical wind in scenario (a), we simply assume that the radius of the wind
is constant and matches the extent of the H$\alpha$ emission. We include the
outward flaring partial cone geometry in scenario (b) as it has been observed
for other galaxies (e.g., M82); in this scenario, we set the inner cone radius
to the stellar radius (2.15 kpc) and the outer radius to match the radius of
the H$\alpha$ emission. The tapered, inverted cone in scenario (c) is the geometry
that resulted for the near-side LMC wind from Bustard et al. (2020) simulation
when they accounted for ram-pressure effects; in this scenario, we set the
radii to match the simulation and the H$\alpha$ observations. For these three
geometries, the near-side wind masses would correspond to $\log{\left(M_{\rm
ion}/M_{\odot}\right)}=7.36\pm 0.14$ for volume (a), $\log{\left(M_{\rm
ion}/M_{\odot}\right)}=7.10\pm 0.14$ for volume (b), and
$\log{\left(M_{\rm ion}/M_{\odot}\right)}\leq 7.09\pm 0.14$ for volume (c). As
we cannot observationally determine which of these wind geometries better
matches with the LMC’s near-side galactic wind, we will report the values of
the cylindrical scenario in the text as it is the simplest volume and requires
the fewest assumptions. We report the values and ranges for the other
two volume scenarios in Table 2. To estimate the total mass of the neutral and
ionized gas of this wind, we assume the outflow is symmetric on both the near-
side and far-side of the LMC’s disk and that it has an ionization fraction of
${n_{\rm H^{+}}/n_{\rm H}\approx 0.75}$ (Barger et al., 2016). We find the
total IVC mass of the wind to be in the range $7.70\leq\log{\left(M_{\rm
total}/M_{\odot}\right)}\leq 7.85$. This is compared to the previous Barger et
al. (2016) estimate of $\log{\left(M_{\rm ionized}/M_{\odot}\right)}\gtrsim
7.16$.
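The step from the near-side ionized mass to the total wind mass (doubling for a symmetric far side, then dividing by the ionized fraction of 0.75) can be sketched as below; the input values are the cylindrical-geometry ionized-mass range from Table 2.

```python
import math

def total_wind_mass(log_m_near_ion, ion_fraction=0.75):
    """Total (near- plus far-side, neutral plus ionized) wind mass.

    Assumes a symmetric outflow about the disk and an ionization
    fraction n_H+/n_H = 0.75 (Barger et al. 2016).
    """
    m_total = 2.0 * 10 ** log_m_near_ion / ion_fraction
    return math.log10(m_total)

# Cylindrical IVC ionized-mass range, log(M_ion/Msun) = 7.28-7.43:
for log_m_ion in (7.28, 7.43):
    print(f"log(M_total/Msun) = {total_wind_mass(log_m_ion):.2f}")
# -> roughly 7.71 and 7.86, consistent with the quoted 7.70-7.85 total range
```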
We calculate the mass-flow rate for this wind by assuming an outflow time
($t_{\rm outflow}\approx 60~{}\rm Myr$). This is calculated using information
regarding the last period of star-formation ($\sim 100~{}{\rm Myrs}$) as well
as the time necessary for the wind to penetrate through the surrounding medium
and travel approximately 2 kpc off the H i disk. This results in a total IVC
mass-flow rate of $0.83\leq\dot{M}_{\rm outflow}\leq 1.18~{}M_{\odot}~{}\rm
yr^{-1}$. The mass-loading factor is also calculated to study the ratio of the
mass-flow rate to the star-formation rate, $(\eta\equiv\dot{M}_{\rm
outflow}/\dot{M}_{\star})$. We adopt a star-formation rate in the range
$0.3\lesssim\dot{M}_{\star}\lesssim 0.34~{}M_{\odot}~{}\rm yr^{-1}$ to agree
with the star-formation history of the LMC (see Figure 11 of Harris & Zaritsky
2009). This results in a mass-loading factor between $2.44\leq\eta\leq 3.93$.
Because the mass-loading factor is much greater than unity, this indicates that
the current star-formation state of the LMC is unsustainable such that the
galaxy could become quenched if this state is prolonged and if the ejected gas
is able to escape.
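The mass-flow-rate and mass-loading arithmetic above can be sketched as follows, using the quoted total IVC mass range, the 60 Myr outflow time, and the adopted star-formation-rate range:

```python
def mass_flow_rate(log_m_total, t_outflow_yr=60e6):
    """Mass-flow rate [Msun/yr] for a total mass ejected over t_outflow."""
    return 10 ** log_m_total / t_outflow_yr

def mass_loading(mdot_outflow, sfr):
    """Mass-loading factor eta = Mdot_outflow / Mdot_star."""
    return mdot_outflow / sfr

# Total IVC mass range from the text: log(M_total/Msun) = 7.70-7.85
mdot_lo = mass_flow_rate(7.70)  # ~0.84 Msun/yr
mdot_hi = mass_flow_rate(7.85)  # ~1.18 Msun/yr

# Star-formation rate range 0.3-0.34 Msun/yr (Harris & Zaritsky 2009)
print(mass_loading(mdot_lo, 0.34), mass_loading(mdot_hi, 0.30))  # ~2.5 to ~3.9
```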
### 6.2 IVC Material - Discussion
Barger et al. (2016) characterized the IVC material with respect to the LMC
using UV absorption-line spectroscopy toward a LMC disk star and a background
QSO. They found the near-side material to have an estimated mass of
$\log{\left(M_{\rm low~{}ions}/M_{\odot}\right)}\gtrsim 7.16$ for low-
ionization species on both the near-side and far-side of the galaxy. This
corresponds to an ionized mass of $\log{\left(M_{\rm
ionized}/M_{\odot}\right)}\gtrsim 6.9$ for gas traveling up to $\Delta{\rm
v}_{\rm LMCSR}\approx-100~{}{\rm km}~{}{\rm s}^{-1}$ on the near-side of the
LMC. Over the same velocity range, we calculated the ionized hydrogen mass to
be $\log{\left(M_{\rm ionized}/M_{\odot}\right)}\approx 7.36$ (see Section 6).
While our estimate is larger than the Barger et al. (2016) near-side mass, it
is important to note the discrepancies between our estimates can be attributed
to how each study determined the masses.
In the Barger et al. (2016) study, they assumed (1) the wind has an angular
extent similar to the LMC’s H i disk ($R_{\rm H\textsc{~{}i}}\approx
3.7~{}\,\mathrm{kpc}$), (2) a covering fraction of $f_{\Omega}=0.7$ for low-
ionization and $f_{\Omega}=0.9$ for high-ionization species, and (3) the
average global strength for this wind could be represented by the absorption
they observed along their two sightlines. We, however, find that the H$\alpha$
emission of the wind extends around 1 degree on average off the H i disk,
which means that the outflow radius is roughly $1~{}\,\mathrm{kpc}$ larger
(i.e., $R_{\rm H\alpha}\approx R_{\rm H\textsc{~{}i}}+1~{}\,\mathrm{kpc}$).
Their third assumption may result in a significantly underestimated outflow
mass as the two sightlines used in the Barger et al. (2016) study probed a
relatively quiescent region of the LMC. We illustrate the effect of their
assumptions by taking the average emission measure from the same region as the
Barger et al. (2016) sightlines—in the opposite quadrant from 30 Doradus—and
calculating a corresponding mass for an area similar to theirs. Using $\langle
EM\rangle\approx 0.2~{}{\rm pc}\,{\,\mathrm{cm}^{-6}}$, the outflow mass would
be $\log{\left(M_{\rm ionized}/M_{\odot}\right)}\approx 7.1$, which is nearly
half as large as our IVC cylindrical mass estimate and nearly in agreement
with the Barger et al. (2016) estimate for the low-ionization species.
Ultimately, the fate of this ejected material remains uncertain. The escape
velocity of the LMC is roughly $110~{}{\rm km}~{}{\rm s}^{-1}$ (Besla, 2015),
which corresponds to roughly $100~{}{\rm km}~{}{\rm s}^{-1}$ along the line of
sight for an inclination of around 25 degrees. Therefore, the majority of the
IVC material is not expected to escape. However, the Magellanic System is a
crowded environment and tidal interactions between the SMC, and possibly the
MW, can assist in the removal of otherwise bound gas (see D’Onghia & Fox 2016
for a review). Some of the material from previous outflow events may have been
displaced into the trailing Magellanic Stream. In a kinematic investigation of
the H i-21cm emission of the Magellanic Stream, Nidever et al. (2008) found
that one of its two filaments traces back to the 30 Doradus region of the LMC.
Richter et al. (2013) measured the chemical composition of this “30-Doradus”
filament and found that it has a metallicity that is consistent with an LMC
origin. Fox et al. (2013) explored the chemical composition of the other
Magellanic Stream filament and found that it has a lower metallicity that is
more consistent with an SMC origin, which kinematically traces back to the
Magellanic Bridge (Nidever et al., 2008).
Ram-pressure stripping may also play an important role removing gas from the
Magellanic Clouds. The impact that ram pressure has on the CGM strongly
depends on the density of the medium that it is traveling through and on the
motion of the gas within its surrounding medium. Based on the work of Salem
et al. (2015), Bustard et al. (2020), and others (Heckman et al. 2000,
Mastropietro et al. 2005, and Fujita et al. 2009), the effects of ram pressure
on gas are multi-faceted and can either promote or suppress the removal of gas
from a galaxy depending on the circumstance. This is because ram pressure can work in
direct opposition of galactic winds positioned on the leading side of a
galaxy, where the surrounding coronal gas will act as a head wind that pushes
the outflowing gas back toward the galaxy. Therefore, the outflow on the
near-side of the LMC galaxy, which is the side leading the LMC’s orbit through
the MW’s halo, will be suppressed. Meanwhile, the far-side outflow could be
enhanced, and therefore more massive, as ram pressure will push the gas away
from the galaxy. With ram-pressure stripping, the simulated
wind in the Bustard et al. (2020) study was able to reach a height of more
than $1.7~{}\,\mathrm{kpc}$ off the LMC’s H i disk and had a total ejected
mass in the range $6.7\leq\log{\left(M_{\rm ejected}/M_{\odot}\right)}\leq
8.4$. While our IVC mass estimate is within this mass range, Bustard et al.
predicts that a significant portion of this material flows off the far-side of
the LMC’s disk (see their Figure 11), which is the trailing side as the LMC
traverses the MW’s halo. Conversely, the Barger et al. (2016) UV
absorption-line study found that the near-side and far-side outflows were
kinematically symmetric along their explored sightlines, though this might not
be representative of the outflow’s large-scale structure. A study like this
one of the far-side—in which the winds are mapped—is needed to determine its
physical extent and the impact of ram-pressure stripping.
## 7 HVC Material
We also detected a high-velocity component to the LMC’s galactic wind in
H$\alpha$ emission over the $-175\leq\rm v_{LMCSR}\leq-100~{}{\rm km}~{}{\rm
s}^{-1}$ velocity range (see top-right panel of Figure 5). Much of this
material is traveling away from the LMC at speeds that exceed its escape
velocity and could therefore be permanently lost from the galaxy. Furthermore,
tidal forces could assist in carrying this material away. At these high
velocities, the H$\alpha$ emission is especially concentrated in the direction
of the 30 Doradus starburst region (refer to the right panel of Figure 6). We
calculate the ionized mass of this HVC using the procedures described in
Section 6.1.
Using the timescale of the IVC material (60 Myrs) as well as the observed
velocities, we estimate the HVC reaches a height up to 9 kpc off the H i disk.
It is possible the material reaches even farther, as Bustard et al. (2020)
show ejected material reaching heights in excess of 13 kpc. With these
estimates and equation (4), we calculate an ionized hydrogen mass of
$\log{\left(M_{\rm ionized}/M_{\odot}\right)}=6.99\pm 0.13$ for the HVC. This
mass estimate includes 124 WHAM beams within roughly $20~{}{\rm deg}^{2}$.
Following the procedure in Section 6.1, we assume the wind to be symmetrical
about the LMC disk and to contain both neutral and ionized material. This
corresponds to a total mass of $\log{\left(M_{\rm
total}/M_{\odot}\right)}\approx 7.41$, but we emphasize that this could be a
lower limit on its total mass as ram-pressure effects are likely enhancing the
far-side wind (see Figure 7 and Bustard et al. 2020). Given that this material
is the HVC component of the current outflow explored in Section 6.1, the mass-
flow rate is $\dot{M}_{\rm outflow}\approx 0.43~{}M_{\odot}~{}\rm yr^{-1}$ and
the mass-loading factor is $\eta\approx 1.36$. The full ranges of these
results are provided in Table 2.
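The HVC height and flow-rate estimates above can be sketched as follows, assuming ballistic motion at a representative HVC speed of $\sim 150~{\rm km~s^{-1}}$ over the 60 Myr outflow timescale:

```python
KM_PER_KPC = 3.0857e16   # km in one kiloparsec
SEC_PER_MYR = 3.156e13   # seconds in one megayear

def wind_height_kpc(v_km_s, t_myr):
    """Height reached off the disk for a constant outflow speed."""
    return v_km_s * t_myr * SEC_PER_MYR / KM_PER_KPC

# HVC gas at ~150 km/s over the ~60 Myr outflow timescale:
print(wind_height_kpc(150.0, 60.0))  # ~9 kpc, as quoted in the text

# HVC mass-flow rate from log(M_total/Msun) ~ 7.41 over 60 Myr:
print(10 ** 7.41 / 60e6)  # ~0.43 Msun/yr
```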
When comparing our mass to prior work, we find general agreement. Lehner et
al. (2009) estimated its neutral hydrogen gas mass to be
$5.7\leq\log{\left(M_{\rm H\textsc{~{}i}}/M_{\odot}\right)}\leq 6.0$ with a
total mass of at least $\log{\left(M_{\rm total}/M_{\odot}\right)}>6.7$ with
the assumptions that it has an LMC origin and that it lies at a distance of
$40\leq d_{\odot}\leq 50~{}\,\mathrm{kpc}$ away. Using the ionization fraction
of $0.5\lesssim n_{\rm H\textsc{~{}II}}/n_{\rm H}\lesssim 0.8$ that Lehner et
al. (2009) measured, we calculate a total hydrogen mass (neutral and ionized)
in the range $6.9\leq\log{\left(M_{\rm H}/M_{\odot}\right)}\leq 7.1$, which is
in agreement with their total mass lower limit. Moreover, simulations from
Bustard et al. (2020) estimated $6.56\leq\log{\left(M_{\rm
ejected}/M_{\odot}\right)}\leq 7.78$ worth of material reaching over
$13\,\mathrm{kpc}$ away from the LMC disk. If we consider that the bulk of the
HVC material lies at a height of roughly $13~{}\,\mathrm{kpc}$ off the disk,
then our observed HVC mass falls within the range of this simulated ejected
mass.
Table 2: Mass Estimates for IVC & HVC Material of the LMC outflow
Outflow Geometry | $M_{\rm ion}$ | $M_{\rm total}$ | $\dot{M}_{\rm outflow}$ | $\eta$
---|---|---|---|---
| $(M_{\odot})$ | $(M_{\odot})$ | $(M_{\odot}~{}\rm yr^{-1})$ |
IVC | | | |
Cylinder | $7.28$ – $7.43$ | $7.70$ – $7.85$ | $0.83$ – $1.18$ | $2.44$ – $3.93$
Outward Cone | $7.02$ – $7.17$ | $7.44$ – $7.59$ | $0.46$ – $0.65$ | $1.35$ – $2.16$
Inward Cone | $7.01$ – $7.16$ | $7.43$ – $7.58$ | $0.45$ – $0.63$ | $1.32$ – $2.11$
HVC | | | |
Cylinder | $6.93$ – $7.05$ | $7.35$ – $7.47$ | $0.37$ – $0.49$ | $1.09$ – $1.63$
Note. — All values for $M_{\rm total}$ assume a symmetrical wind on the near-
side and far-side of the LMC with an ionization fraction of $n_{\rm
H\textsc{~{}ii}}/n_{\rm H}=0.75$ that includes neutral and ionized gas
assuming a reduced mass of $\mu\approx 1.4m_{\rm H}$ to account for helium and
metals. We used the values listed in Table 1 to calculate these outflow masses
and rates.
### 7.1 Origins of the HVC
We observe high-velocity material toward the LMC at velocities greater than
$100~{}{\rm km}~{}{\rm s}^{-1}$ off the LMC H i disk ($\rm+90\leq
v_{LSR}\leq+175~{}{\rm km}~{}{\rm s}^{-1}$), detailed in Section 7. This
emission is persistent at intermediate- to low-velocities relative to the LMC
and spatially aligns well with the LMC’s H i disk (see Figure 6). These
observed properties are consistent with an LMC origin in which the gas is
expelled from the galaxy by its stellar activity. This is a conclusion that
has also been reached by Staveley-Smith et al. (2003) using H i emission-line
and by Lehner et al. (2009) using UV absorption lines (also see Lehner & Howk
2007 and Barger et al. 2016). Staveley-Smith et al. (2003) found that the H i
column densities of the HVC peaked at locations that align with H i voids
within the LMC disk (such as supergiant shells, e.g., LMC 3); they further
identified spatial and kinematic H i bridges that linked back to the LMC’s
disk. Lehner et al. (2009) used 139 stars embedded in the LMC as background
targets in a UV absorption-line study to explore the properties of this HVC;
they found that the HVC has (i) an LSR velocity gradient in right ascension
that follows the LMC’s velocity gradient, (ii) dust based on depletion
patterns—signifying a galactic origin (see also Smoker et al. 2015), (iii) an
oxygen abundance similar to the LMC of $[\rm
O\textsc{~{}i}/H\textsc{~{}i}]=-0.51^{+0.12}_{-0.16}$, and (iv) a high
covering fraction in the direction of the LMC.
However, since the works mentioned above, the origin of this HVC has been
strongly debated. This is because Werner & Rauch (2015) found C ii, Si ii, and
Si iii absorption consistent with this HVC in the spectra of a background star
at a distance of $d_{\odot}=9.2^{+4.1}_{-7.2}~{}\,\mathrm{kpc}$. Richter et
al. (2015) confirmed the presence of the HVC in the direction of this star,
which places the HVC at a distance of $d_{\odot}<13.3~{}\,\mathrm{kpc}$, and
asserted that the HVC lies too far from the LMC to be consistent with an LMC
origin. They reason that it is unlikely that material ejected from the LMC
would be able to reach a distance of roughly $40~{}\,\mathrm{kpc}$ intact
during the few hundred million years it would take to travel at speeds of
$\sim 150~{}{\rm km}~{}{\rm s}^{-1}$. This is because, during the cloud’s
journey, it would sustain numerous collisions with not only the CGM of the LMC,
but also the halo of the MW. Consequently, Richter et al. (2015) argue that
this would strip a large amount of the cloud’s material and drag forces would
slow the cloud. Moreover, the time required to make this journey
($250-400~\rm Myr$; Barger et al. 2016) would lead to a
transverse displacement, and the cloud would no longer lie toward the LMC.
In light of the results presented in this paper, and by previous studies, we
offer a unified interpretation of the distribution and association of material
observed between the MW and LMC. We postulate that there are two HVCs with
different origins near the LMC on the sky: (1) an HVC that is associated with
the galactic winds of the LMC and (2) an HVC that is associated with the MW. Strong
evidence for this latter HVC was presented by Werner & Rauch (2015) and
Richter et al. (2015), who confirmed that there is high-velocity material at
$(l,\,b)=(279\fdg 9,\,-37\fdg 1)$, $3-5$ degrees away from 30 Doradus, that
resides at a distance of $d_{\odot}<13.3~{}\,\mathrm{kpc}$. This sightline is
on the periphery of the H$\alpha$ emission that spans across the LMC (see
Figure 5). Meanwhile, similarities in kinematics, dust depletion, and oxygen
abundances strongly indicate that most of the high-velocity material in the
direction of the LMC has an LMC origin. Unless the previous evidence gathered
from Staveley-Smith et al. (2003), Lehner et al. (2009), and our study is
entirely coincidental, there is likely to be more than one complex toward the
LMC. Therefore, we argue that the LMC HVC spans a much larger area of the sky
in the direction of the LMC and that there is also a smaller HVC positioned
just offset from the LMC on the sky, which is associated with the MW.
### 7.2 Feedback of Low-Mass Galaxies
Figure 9: Mass-loading factors from various studies. The mass-loading factor
we calculated for the total LMC wind (IVC + HVC) is marked with a cyan bar
while solely the IVC component is the darker hashed blue bar. Our results for
the LMC wind are found in Table 2. The Barger et al. (2016) LMC loading factor
(IVC only) is indicated with a gray diamond after we adjust their outflow mass
to match the assumptions used in our study. Green squares mark mass-loading
factors from McQuinn et al. (2019), which only include the ionized gas mass in
outflows. The brown and purple envelopes are the limits of uncertainty for
mass-loading relations from the Chisholm et al. (2017) and Leethochawalit et
al. (2019) studies (refer to their equations 16 and 8, respectively).
Galactic winds and the ability of a galaxy to retain ejected gas are directly
related to the galaxy’s capacity to form future stars. These winds tend to
intensify as stellar activity increases. Moreover, because lower mass galaxies
have smaller gravitational potential wells, their gas is more easily ejected
out of them. For massive galaxies, the halo and extraplanar gas that surrounds
their disks suppresses outflowing material. This results in a general trend in
which the mass-loading factor is diminished for massive galaxies and enhanced
in low-mass galaxies. In the case of the LMC, with a stellar mass of
$M_{\star}=3\times 10^{9}~{}M_{\odot}$ (van der Marel et al., 2009) and total
mass $M_{\rm total}=1.7\times 10^{10}~{}M_{\odot}$ (van der Marel &
Kallivayalil, 2014), as well as an active star-formation history (Harris &
Zaritsky, 2009), it is expected that this galaxy will have a relatively
elevated mass-loading factor.
We estimate that the LMC’s mass-loading factor ranges from $3.53\leq\eta\leq
5.56$ when including both the IVC and HVC gas in its winds and when assuming
cylindrical geometry. Our values are relatively large when compared to other
studies for galaxies with similar stellar mass (see Figure 9). However, the
Barger et al. (2016), Chisholm et al. (2017), and Leethochawalit et al. (2019)
studies all used absorption-line spectroscopy, which spatially probes less of
the wind; this results in a more uncertain mass estimate as the physical
extent of winds are poorly constrained. In the case of the McQuinn et al.
(2019) H$\alpha$ imaging study, although they were able to measure the extent
of the wind, their observations were more than an order of magnitude less sensitive than
ours and were unable to detect the diffuse gas in the wind.
The impact of geometry assumptions is most telling when comparing our results
with those of the Barger et al. (2016) absorption-line study of the LMC.
Although our study shares many of the same assumptions as their study, we were
able to map the extent and morphology of the wind, whereas they were forced to
make simplistic assumptions for its solid angle. The mass-flow rates from the
Barger et al. (2016) and the star-formation rates from Harris & Zaritsky
(2009) correspond to a mass-loading factor for LMC that is in the range
$0.7\leq\eta\leq 0.8$. When we increase their solid angle to match what we
observe in H$\alpha$ emission, their mass-loading factor becomes
$1.3\leq\eta\leq 1.5$. This revised range is in better agreement with our
mass-loading factor for conical geometries (outward and inward cone; see Table
2).
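The solid-angle adjustment applied to the Barger et al. (2016) result can be sketched as follows (the solid-angle ratio of $\approx 1.875$ is inferred here from the quoted $\eta$ ranges, since the angles themselves are not listed in this paper):

```python
def rescale_eta(eta, omega_old, omega_new):
    """For a fixed column density and distance, the wind mass (and hence
    the mass-loading factor eta) scales linearly with the assumed solid
    angle of the outflow."""
    return eta * (omega_new / omega_old)

# Scaling the absorption-line result up to the H-alpha emission footprint
# (illustrative solid-angle ratio of ~1.875 inferred from the eta ranges)
eta_revised = rescale_eta(0.8, 1.0, 1.875)   # ~1.5
```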
In studies that explore a wide range of galaxy masses, the trend that the
mass-loading factor decreases with galaxy mass is clear (see Figure 9).
Chisholm et al. (2017) find mass-loading factors ranging from $0.2\,-\,19.0$
for their sample of 8 dwarf star-forming galaxies
($6.9\leq\log{\left(M_{\star}/M_{\odot}\right)}\leq 10.7$) using UV
absorption-line spectroscopy. At the LMC mass, this study predicts a mass-
loading factor around $1.1$. Across similar masses, McQuinn et al. (2019)
observed their galaxies via H$\alpha$ imaging and found mass-loading factors
ranging from $0.2\leq\eta_{\rm ionized}\leq 7.1$ for the ionized gas in the
wind only. For more massive galaxies
($9.5\leq\log{\left(M_{\star}/M_{\odot}\right)}\leq 11.5$), Leethochawalit et
al. (2019) find mass-loading factors that range from $0.3\,-\,1.0$ for their
UV absorption-line study. While our largest LMC mass-loading factor estimate
is at least twice as large as the Chisholm et al. (2017) and Leethochawalit et
al. (2019) trendlines predict, we acknowledge that is in part because of
differences in the assumed geometries of the winds. When assuming a conical
geometry, a commonly assumed geometry for galactic winds, our mass-loading
factor is reduced by nearly 50% and is more consistent with their results (see
Table 2).
Still, there is a significant spread in mass-loading factors between studies
that requires additional work to reduce. As of right now, no uniform sample
exists across low- and high-mass galaxies. Such a consistent sample would go a
long way toward improving our understanding of the relationship between mass-loading
factor and galaxy mass. Further, more imaging or emission-line studies are
needed to better constrain the geometry of these winds, though they also need
to have a high enough sensitivity to detect the diffuse gas.
## 8 Summary
We completed the first kinematically resolved survey of the LMC’s near-side
galactic wind in H$\alpha$ $\lambda 6563$ using the Wisconsin H$\alpha$
mapper. These mapped observations span $20\times 20$ degrees across the sky and are
comprised of 1,712 sightlines. By combining these observations with existing H
i observations, we are able to determine the extent and morphology of the
neutral and warm ionized ($T_{e}\approx 10^{4}~{}\rm K$) phases of this wind.
Here we summarize the main conclusions of this study:
1. 1.
Morphology and Extent: We find that diffuse gas in the galactic wind spans
across the entire face of the LMC. We additionally find numerous faint $I_{\rm
H\alpha}\approx 100~{}\rm mR$ clouds offset from the main wind structure, but
we are unable to confidently determine whether or not they are physically
associated with the LMC’s galactic wind as tidally displaced Magellanic Cloud
material and MW HVCs also pollute this region of the sky.
2. 2.
Kinematic Distribution: We find the bulk of the LMC’s galactic wind is moving
with velocities of $\rm v_{LMCSR}\lesssim-110~{}{\rm km}~{}{\rm s}^{-1}$
relative to the H i disk, which is less than the escape velocity. However,
roughly $\log{\left(M_{\rm ion,\,hvc}/M_{\odot}\right)}\approx 7.0$, or about
44%, of this wind is moving away from the LMC at speeds greater than
the escape velocity. Specifically, we find the gas that is spatially aligned
with 30 Doradus is moving at the greatest speeds relative to the LMC at $\rm
v_{LMCSR}\lesssim-175~{}{\rm km}~{}{\rm s}^{-1}$.
3. 3.
Two HVC Complexes toward the LMC: We find H$\alpha$ emission at high
velocities relative to the LMC that is strongly spatially correlated with
30 Doradus. This emission similarly persists at lower velocities. Our
results—in addition to the results from the Staveley-Smith et al. (2003),
Lehner & Howk (2007), Lehner et al. (2009), and Barger et al. (2016)
studies—lead us to conclude that this starburst region is responsible for
generating this HVC (see Figures 3 and 6). The HVC discussed in Richter et al.
(2015), which lies a few degrees off the $\log{\left(\rm
N_{H\textsc{~{}i}}/\,\mathrm{cm}^{-2}\right)}=19$ contour of the LMC’s H i
disk and at a distance of $d_{\odot}\lesssim 13.3~{}\,\mathrm{kpc}$, is a
different complex. That HVC likely has a MW origin based on their results.
4. 4.
Outflow Mass, Flow Rate, & Loading Factor: We measure an ionized gas mass in
the range $7.28\leq\log{\left(M_{\rm ionized}/M_{\odot}\right)}\leq 7.43$ for
the outflowing material on the near-side of the LMC that is moving at
intermediate-velocities, i.e., speeds that are within $\sim 100~{}{\rm
km}~{}{\rm s}^{-1}$ of the LMC’s H i disk. The high-velocity component of this
wind has an ionized mass of $6.93\leq\log{\left(M_{\rm
ionized}/M_{\odot}\right)}\leq 7.05$. Combined, we estimate that the total
ionized gas mass in this near-side wind is in the range
$7.44\leq\log{\left(M_{\rm ionized}/M_{\odot}\right)}\leq 7.58$. This
corresponds to a total neutral and ionized mass of the entire wind that ranges
between $7.87\leq\log{\left(M_{\rm total}/M_{\odot}\right)}\leq 8.0$, assuming
that it is symmetrical on the near-side and far-side of the LMC and that it is
75% ionized (see Lehner & Howk 2007 and Barger et al. 2016). We further
calculate a total mass-flow rate and mass-loading factor of
$1.20\leq\dot{M}_{\rm outflow}\leq 1.67~{}M_{\odot}~{}\rm yr^{-1}$ and
$3.53\leq\eta\leq 5.56$. Table 2 summarizes these results.
5. 5.
Undetected Diffuse Material: We compared our results with existing mass-
loading factor trends that vary with stellar mass. We find that our
mass-loading factors are on average roughly 2.5 times larger than both optical
H$\alpha$ imaging and UV absorption-line studies at the stellar mass of the
LMC. This indicates that either the observational sensitivity (optical
imaging: McQuinn et al. 2019) may be insufficient to detect diffuse gas in
these outflows or that the geometric assumptions are too conservative (UV
absorption-line spectroscopy: Barger et al. 2016, Chisholm et al. 2017, and
Leethochawalit et al. 2019).
## Acknowledgments
We thank Lister Staveley-Smith and Sungeun Kim for providing us with the ATCA
and Parkes telescopes LMC H i survey datacube. This paper includes archived
LAB and GASS H i data obtained through the AIfA H i Surveys Data Server
(https://www.astro.uni-bonn.de/hisurvey/index.php). WHAM operations for these
observations were supported by National Science Foundation (NSF) awards
AST-1108911 and AST-1714472/1715623. Madeline Horn received additional support
through NSF grant PHY-1358770.
## References
* Antwi-Danso et al. (2020) Antwi-Danso, J., Barger, K. A., & Haffner, L. M. 2020, ApJ, 891, 176, doi: 10.3847/1538-4357/ab6ef9
* Barger et al. (2013) Barger, K. A., Haffner, L. M., & Bland-Hawthorn, J. 2013, ApJ, 771, 132, doi: 10.1088/0004-637X/771/2/132
* Barger et al. (2012) Barger, K. A., Haffner, L. M., Wakker, B. P., et al. 2012, ApJ, 761, 145, doi: 10.1088/0004-637X/761/2/145
* Barger et al. (2016) Barger, K. A., Lehner, N., & Howk, J. C. 2016, ApJ, 817, 91, doi: 10.3847/0004-637X/817/2/91
* Barger et al. (2017) Barger, K. A., Madsen, G. J., Fox, A. J., et al. 2017, ApJ, 851, 110, doi: 10.3847/1538-4357/aa992a
* Besla (2015) Besla, G. 2015, The Orbits of the Magellanic Clouds, 311, doi: 10.1007/978-3-319-10614-4_26
* Besla et al. (2010) Besla, G., Kallivayalil, N., Hernquist, L., et al. 2010, ApJ, 721, L97, doi: 10.1088/2041-8205/721/2/L97
* Besla et al. (2012) —. 2012, MNRAS, 421, 2109, doi: 10.1111/j.1365-2966.2012.20466.x
* Bustard et al. (2020) Bustard, C., Zweibel, E. G., D’Onghia, E., Gallagher, J. S., I., & Farber, R. 2020, ApJ, 893, 29, doi: 10.3847/1538-4357/ab7fa3
* Cardelli et al. (1989) Cardelli, J. A., Clayton, G. C., & Mathis, J. S. 1989, ApJ, 345, 245, doi: 10.1086/167900
* Chisholm et al. (2017) Chisholm, J., Tremonti, C. A., Leitherer, C., & Chen, Y. 2017, MNRAS, 469, 4831, doi: 10.1093/mnras/stx1164
* Choi et al. (2018) Choi, Y., Nidever, D. L., Olsen, K., et al. 2018, ApJ, 866, 90, doi: 10.3847/1538-4357/aae083
* de Boer & Savage (1980) de Boer, K. S., & Savage, B. D. 1980, ApJ, 238, 86, doi: 10.1086/157960
* de Grijs et al. (2014) de Grijs, R., Wicker, J. E., & Bono, G. 2014, AJ, 147, 122, doi: 10.1088/0004-6256/147/5/122
* Diplas & Savage (1994) Diplas, A., & Savage, B. D. 1994, ApJ, 427, 274, doi: 10.1086/174139
* D’Onghia & Fox (2016) D’Onghia, E., & Fox, A. J. 2016, ARA&A, 54, 363, doi: 10.1146/annurev-astro-081915-023251
* Erb (2015) Erb, D. K. 2015, Nature, 523, 169, doi: 10.1038/nature14454
* Ford et al. (2014) Ford, A. B., Davé, R., Oppenheimer, B. D., et al. 2014, MNRAS, 444, 1260, doi: 10.1093/mnras/stu1418
* Fox et al. (2013) Fox, A. J., Richter, P., Wakker, B. P., et al. 2013, ApJ, 772, 110, doi: 10.1088/0004-637X/772/2/110
* Fujita et al. (2009) Fujita, A., Martin, C. L., Mac Low, M.-M., New, K. C. B., & Weaver, R. 2009, ApJ, 698, 693, doi: 10.1088/0004-637X/698/1/693
* Gordon et al. (2003) Gordon, K. D., Clayton, G. C., Misselt, K. A., Landolt, A. U., & Wolff, M. J. 2003, ApJ, 594, 279, doi: 10.1086/376774
* Haffner et al. (2001) Haffner, L. M., Reynolds, R. J., & Tufte, S. L. 2001, ApJ, 556, L33, doi: 10.1086/322867
* Haffner et al. (2003) Haffner, L. M., Reynolds, R. J., Tufte, S. L., et al. 2003, ApJS, 149, 405, doi: 10.1086/378850
* Harris & Zaritsky (2009) Harris, J., & Zaritsky, D. 2009, The Astronomical Journal, 138, 1243, doi: 10.1088/0004-6256/138/5/1243
* Hartmann & Burton (1997) Hartmann, D., & Burton, W. B. 1997, Atlas of Galactic Neutral Hydrogen (”Cambridge University Press”)
* Hausen et al. (2002) Hausen, N. R., Reynolds, R. J., Haffner, L. M., & Tufte, S. L. 2002, ApJ, 565, 1060, doi: 10.1086/324692
* Heckman (2003) Heckman, T. M. 2003, in Revista Mexicana de Astronomia y Astrofisica Conference Series, Vol. 17, Revista Mexicana de Astronomia y Astrofisica Conference Series, ed. V. Avila-Reese, C. Firmani, C. S. Frenk, & C. Allen, 47–55
* Heckman et al. (2000) Heckman, T. M., Lehnert, M. D., Strickland , D. K., & Armus, L. 2000, ApJS, 129, 493, doi: 10.1086/313421
* Hill et al. (2009) Hill, A. S., Haffner, L. M., & Reynolds, R. J. 2009, ApJ, 703, 1832, doi: 10.1088/0004-637X/703/2/1832
* Howk et al. (2002) Howk, J. C., Sembach, K. R., Savage, B. D., et al. 2002, ApJ, 569, 214, doi: 10.1086/339322
* Hummels et al. (2017) Hummels, C. B., Smith, B. D., & Silvia, D. W. 2017, ApJ, 847, 59, doi: 10.3847/1538-4357/aa7e2d
* Kalberla et al. (2005) Kalberla, P. M. W., Burton, W. B., Hartmann, D., et al. 2005, A&A, 440, 775, doi: 10.1051/0004-6361:20041864
* Kim et al. (1999) Kim, S., Dopita, M. A., Staveley-Smith, L., & Bessell, M. S. 1999, AJ, 118, 2797, doi: 10.1086/301116
* Kim et al. (1998) Kim, S., Staveley-Smith, L., Dopita, M. A., et al. 1998, ApJ, 503, 674, doi: 10.1086/306030
* Kim et al. (2003) —. 2003, ApJS, 148, 473, doi: 10.1086/376980
* Laval et al. (1992) Laval, A., Rosado, M., Boulesteix, J., et al. 1992, A&A, 253, 213
* Leethochawalit et al. (2019) Leethochawalit, N., Kirby, E. N., Ellis, R. S., Moran, S. M., & Treu, T. 2019, ApJ, 885, 100, doi: 10.3847/1538-4357/ab4809
* Lehner & Howk (2007) Lehner, N., & Howk, J. C. 2007, MNRAS, 377, 687, doi: 10.1111/j.1365-2966.2007.11631.x
* Lehner et al. (2009) Lehner, N., Staveley-Smith, L., & Howk, J. C. 2009, ApJ, 702, 940, doi: 10.1088/0004-637X/702/2/940
* Lehnert et al. (1999) Lehnert, M. D., Heckman, T. M., & Weaver, K. A. 1999, ApJ, 523, 575, doi: 10.1086/307762
* Lucchini et al. (2020) Lucchini, S., D’Onghia, E., Fox, A. J., et al. 2020, arXiv e-prints, arXiv:2009.04368. https://arxiv.org/abs/2009.04368
* Mastropietro et al. (2005) Mastropietro, C., Moore, B., Mayer, L., Wadsley, J., & Stadel, J. 2005, MNRAS, 363, 509, doi: 10.1111/j.1365-2966.2005.09435.x
* McClure-Griffiths et al. (2009) McClure-Griffiths, N. M., Pisano, D. J., Calabretta, M. R., et al. 2009, The Astrophysical Journal Supplement Series, 181, 398. http://stacks.iop.org/0067-0049/181/i=2/a=398
* McQuinn et al. (2019) McQuinn, K. B. W., van Zee, L., & Skillman, E. D. 2019, in IAU Symposium, Vol. 344, Dwarf Galaxies: From the Deep Universe to the Present, ed. K. B. W. McQuinn & S. Stierwalt, 301–304, doi: 10.1017/S1743921319000085
* Meaburn (1980) Meaburn, J. 1980, MNRAS, 192, 365, doi: 10.1093/mnras/192.3.365
* Nidever et al. (2008) Nidever, D. L., Majewski, S. R., & Butler Burton, W. 2008, ApJ, 679, 432, doi: 10.1086/587042
* Pathak et al. (2011) Pathak, A., Pradhan, A. C., Sujatha, N. V., & Murthy, J. 2011, MNRAS, 412, 1105, doi: 10.1111/j.1365-2966.2010.17964.x
* Peeples et al. (2014) Peeples, M. S., Werk, J. K., Tumlinson, J., et al. 2014, ApJ, 786, 54, doi: 10.1088/0004-637X/786/1/54
* Pellegrini et al. (2012) Pellegrini, E. W., Oey, M. S., Winkler, P. F., et al. 2012, ApJ, 755, 40, doi: 10.1088/0004-637X/755/1/40
* Pietrzyński et al. (2013) Pietrzyński, G., Graczyk, D., Gieren, W., et al. 2013, Nature, 495, 76, doi: 10.1038/nature11878
* Putman et al. (2003) Putman, M. E., Bland-Hawthorn, J., Veilleux, S., et al. 2003, ApJ, 597, 948, doi: 10.1086/378555
* Reid & Parker (2012) Reid, W. A., & Parker, Q. A. 2012, MNRAS, 425, 355, doi: 10.1111/j.1365-2966.2012.21471.x
* Richter et al. (2015) Richter, P., de Boer, K. S., Werner, K., & Rauch, T. 2015, A&A, 584, L6, doi: 10.1051/0004-6361/201527451
* Richter et al. (2013) Richter, P., Fox, A. J., Wakker, B. P., et al. 2013, ApJ, 772, 111, doi: 10.1088/0004-637X/772/2/111
* Rosado et al. (1990) Rosado, M., Laval, A., Boulesteix, J., et al. 1990, A&A, 238, 315
* Salem et al. (2015) Salem, M., Besla, G., Bryan, G., et al. 2015, ApJ, 815, 77, doi: 10.1088/0004-637X/815/1/77
* Schwartz & Martin (2004) Schwartz, C. M., & Martin, C. L. 2004, ApJ, 610, 201, doi: 10.1086/421546
* Smart et al. (2021) Smart, B. M., Haffner, L. M., Barger, K. A., et al. 2021
* Smart et al. (2019) Smart, B. M., Haffner, L. M., Barger, K. A., Hill, A., & Madsen, G. 2019, ApJ, 887, 16, doi: 10.3847/1538-4357/ab4d58
* Smoker et al. (2015) Smoker, J. V., Fox, A. J., & Keenan, F. P. 2015, MNRAS, 451, 4346, doi: 10.1093/mnras/stv1189
* Staveley-Smith et al. (2003) Staveley-Smith, L., Kim, S., Calabretta, M. R., Haynes, R. F., & Kesteven, M. J. 2003, MNRAS, 339, 87, doi: 10.1046/j.1365-8711.2003.06146.x
* Tufte (1997) Tufte, S. L. 1997, PhD thesis, THE UNIVERSITY OF WISCONSIN - MADISON
* Tumlinson et al. (2011) Tumlinson, J., Thom, C., Werk, J. K., et al. 2011, Science, 334, 948, doi: 10.1126/science.1209840
* van der Marel & Kallivayalil (2014) van der Marel, R. P., & Kallivayalil, N. 2014, ApJ, 781, 121, doi: 10.1088/0004-637X/781/2/121
* van der Marel et al. (2009) van der Marel, R. P., Kallivayalil, N., & Besla, G. 2009, in IAU Symposium, Vol. 256, The Magellanic System: Stars, Gas, and Galaxies, ed. J. T. Van Loon & J. M. Oliveira, 81–92, doi: 10.1017/S1743921308028299
* Veilleux et al. (2005) Veilleux, S., Cecil, G., & Bland-Hawthorn, J. 2005, ARA&A, 43, 769, doi: 10.1146/annurev.astro.43.072103.150610
* Wakker et al. (1998) Wakker, B., Howk, J. C., Chu, Y.-H., Bomans, D., & Points, S. D. 1998, ApJ, 499, L87, doi: 10.1086/311334
* Walker (1999) Walker, A. 1999, in Astrophysics and Space Science Library, Vol. 237, Post-Hipparcos Cosmic Candles, ed. A. Heck & F. Caputo, 125, doi: 10.1007/978-94-011-4734-7_8
* Werner & Rauch (2015) Werner, K., & Rauch, T. 2015, A&A, 584, A19, doi: 10.1051/0004-6361/201527261
* Winkler et al. (2015) Winkler, P. F., Smith, R. C., Points, S. D., & MCELS Team. 2015, in Astronomical Society of the Pacific Conference Series, Vol. 491, Fifty Years of Wide Field Studies in the Southern Hemisphere: Resolved Stellar Populations of the Galactic Bulge and Magellanic Clouds, ed. S. Points & A. Kunder, 343
11institutetext: Instituto de Ciencias Nucleares, Universidad Nacional
Autónoma de México, Circuito Exterior, C.U., A.P. 70-543, 04510 México D.F.,
Mexico 22institutetext: Frankfurt Institute for Advanced Studies, Johann
Wolfgang Goethe Universität, Ruth-Moufang-Str. 1, 60438 Frankfurt am Main,
Germany
# A Semimicroscopic Algebraic Cluster Model for Heavy Nuclei
I: One heavy and one light cluster
Peter O. Hess1,2, Leonardo J. Chávez-Nuñez1
(Received: date / Revised version: date)
###### Abstract
An extension of the Semimicroscopic Algebraic Cluster Model (SACM) is
proposed, based on the pseudo-$SU(3)$ model ($\widetilde{SU}(3)$). The
Hamiltonian and the spectroscopic factor operator of the model are presented
together with a procedure for constructing the model space. Because a huge number of
$SU(3)$ irreducible representations (irrep) appear, one has to be careful in
designing a practical, consistent path to reduce the Hilbert space. The
concept of forbiddenness, taking into account excitations of the clusters, is
introduced and applied. The applications are to two systems with a low
forbiddenness, namely to 236U $\rightarrow$ 210Pb + 26Ne and 224Ra
$\rightarrow$ 210Pb + 14C, and to 236U $\rightarrow$ 146Xe + 90Sr, which
appears in the fission of ${}^{236}U$, which requires a large forbiddenness.
Energies, electromagnetic transitions and spectroscopic factors are
calculated.
###### pacs:
21.00 nuclear structure and 21.60.Cs shell model and 21.60.Gx cluster model
## 1 Introduction
The Semimicroscopic Algebraic Cluster Model (SACM) was proposed in cseh-letter
; cseh-levai-anph for light nuclei (see, for example, sacm-appl1 ), with the
intend as an alternative to also describe nuclear molecules, i.e., an alegraic
version of their geometrical description scheid1995 ; hess-1984 . A first
attempt to extend it to heavy nuclei is published in cseh-algora , using the
$\widetilde{SU}(3)$ (pseudo-$SU(3)$) model hecht ; arima . While the heavy
cluster is treated within the $\widetilde{SU}(3)$ the light cluster is
described by the $SU(3)$ standard model. In cseh-algora ; cseh-scheid , the
separation of nucleons into the unique and normal orbitals in the united
nucleus, compared to the ones in each cluster, is not well defined:
Distinction is made when a cluster is light, which is then treated within the
standard shell model, or both are heavy. In different words: Each cluster and
the united nucleus are not treated in the same mean field.
Since then, many applications of the SACM have been studied and further
attempts to extend it to heavy nuclei: For example, a construction of the
effective $SU(3)$ irreducible representations (irreps) for heavy nuclei
hunyadi , using the Nilsson model ring , and a study of preferences in
radioactive decays and/or fission sacm-fission1 ; sacm-fission2 ; sacm-
fission3 . In hess-86 , the spectra of $\alpha$ cluster nuclei, of significant
interest in astrophysics related to the production of heavy elements, and
their spectroscopic factors were calculated. In phase-I ; phase-II the first
steps in investigating phase transitions were taken, and more recently, in
david , a complete description of phase transitions using catastrophe theory gilmore
was published. In renorm the renormalization of the coherent state
parameters, used for the geometric mapping, is investigated.
More recently, the $\widetilde{SU}(3)$ model was applied to the shell like
quarteting in heavy nuclei cseh2020 , another method to restrict effectively
the shell model space on physical grounds. The quarteting model was first
proposed in arima-quart ; danos-quart and applied in cseh-quart within the
SACM for light nuclei. However, the proton and the neutron part are coupled
directly to a total $SU(3)$ irreducible representation (irrep) without taking
into account the preference for certain couplings, leading as a result to too
many irreps at low energy. This is also a problem of the model we present in
this contribution and a path on how to tackle it is proposed. The quarteting
model was compared to another procedure, called the proxy-$SU(3)$ bonatsos2017
, showing comparable results. All methods have in common to exploit a
symmetry, showing up in the single particle spectrum, and using it to
effectively cut the model space to a manageable size. Here we will propose an
equivalent manner to describe heavy nuclei, using the $\widetilde{SU}(3)$
model and a proposal on how to restrict the model space to the most important
contributions.
A further proof of the success of the SACM is the introduction of the multi-
channel symmetry multi , where different clusterizations are connected via the
same Hamiltonian, thus reducing the complexity of the model and delivering
more insight into the structure of cluster systems.
Although a quite powerful method for treating heavy nuclei was presented in
hunyadi , it only delivers the ground state or the first super- and
hyper-deformed states cseh2006 . Therefore, it is of interest to look for
alternative procedures in order to deal also with excited states.
Unfortunately, we cannot list all contributions of the SACM, but the ones
cited clearly demonstrate the success of the SACM.
The $\widetilde{SU}(3)$ was proposed in hecht ; arima for its application to
heavy nuclei, where it is observed that when the intruder states, called
unique orbitals, with $j=\eta+\frac{1}{2}$ in each shell ($\eta$ is the shell
number), are excluded, the remaining orbitals, called normal orbitals, are
grouped, just by counting, into shells of what is denominated as the
$\widetilde{SU}(3)$ symmetry. The normal orbitals in the $\eta$-shell are
renamed (see next section) such that they correspond to a pseudo-shell of
${\tilde{\eta}}=\eta-1$. Within the Nilsson model, asymptotic states also show
a degeneracy of the orbitals denoted by the asymptotic quantum numbers of the
Nilsson model $\Omega\left[\eta\eta_{z}\Lambda\right]$ ring ; plb1994 .
That the nuclear force exhibits such an unexpected symmetry is today well
understood, starting from microscopic field-theoretic models of the nuclear
interaction fieldsu3 and mapping to the effective nuclear interaction. The
complete Hilbert space of the shell model is a direct product of a state
described within a $\widetilde{SU}(3)$, in the same manner as the $SU(3)$
model of Elliott for light nuclei, and a state describing the nucleons in the
intruder orbitals. The nucleons in the intruder orbitals are assumed to play
only the role of an observer: Nucleons in the unique orbitals play a passive
role in the dynamics due to their opposite parity and their coupling to spin-
zero pairs, contributing only to the binding energy. The pairing energy of
nucleons in the unique orbitals is substantial, due to the large $j$ ring , compared
to the nucleons in the normal orbitals. Thus, nucleons in the unique orbitals
only contribute at high energies, e.g., in the back-bending effect and related
phenomena draayer-book . Inter-shell excitations are considered only within
the normal states, for the same reasons. The contribution of the nucleons in
the unique orbitals are treated via well defined effective charges for
electromagnetic transitions NPA1994 . The effective charges do not depend on
parameters, but only on the total number of nucleons $A$, ${\tilde{A}}$ and
protons $Z$, ${\tilde{Z}}$, where the symbols with the tilde refer to numbers
in the normal orbitals. In conclusion, the restriction to the
$\widetilde{SU}(3)$ is well justified for states at low energy. Though, the
$\widetilde{SU}(3)$ has its limits, as not including independent dynamical
effects of nucleons in the unique orbitals, it is still useful for
investigating an extension of the SACM to heavy nuclei.
Already within the SACM for light nuclei, a new problem arises: When the
lighter cluster is too large and both clusters are assumed to be in their
ground state, the corresponding $SU(3)$ irrep of the united nucleus cannot be
reached. The main reason is that for an increasing number of nucleons of the
light cluster, the number of quanta in the relative motion becomes too large,
and the coupling of the relative motion irrep to the cluster irreps leads to
large final $SU(3)$ irreps, not coinciding with the ground state irrep of the
united nucleus. This problem was recognized in smirnov and the authors
introduced the concept of forbiddenness. The main idea is as follows: When the
two clusters are in their ground state, all missing quanta are put into the
relative motion (the Wildermuth condition wildermuth ), and the ground state of
the united nucleus can be reached, no further steps are needed. No excitations
are then allowed within a cluster, because these are included in the excitation
of the relative motion. However, when the ground state cannot be reached, one
or both clusters are allowed to be excited, subtracting quanta from the
relative motion, up to the point where the ground state is reached for the
first time.
The excited state of the cluster ($C_{k}^{*}$) is then fixed and the rest of
the procedure is identical as in the SACM for light nuclei. The minimal number
required for the excitation of the clusters is called the forbiddenness.
In smirnov the importance of this concept and its consequences were
demonstrated and applied quite successfully in the prediction of preferences of
fission fragments and in the fusion of light and heavy systems. The larger the forbiddenness,
the more suppressed is the reaction channel. To summarize: For low-lying
states and a large light cluster, the cluster system has to be excited!
Reconsidering the definition of forbiddenness from an alternative angle, we
were able to obtain a simple formula for the minimal number of excitation
quanta needed huitz-2015 , which will be summarized further below.
Both models, the $\widetilde{SU}(3)$ and the SACM extended to heavy nuclei,
will be explained in Section 2. The formalism includes a restricted model
Hamiltonian, the calculation of spectra and electromagnetic transition rates,
and the determination of spectroscopic factors. In Section 3 the model is
applied to several heavy cluster systems; in particular, we will demonstrate
the importance of the concept of forbiddenness. Two systems require only a
small forbiddenness, while the third exhibits a large forbiddenness and
demonstrates its importance. In Section 4 conclusions are drawn.
## 2 The SACM and its extension to heavy nuclei
The approach to $\widetilde{SU}(3)$ is as follows: In each harmonic oscillator
shell $\eta$ the orbital belonging to the largest spin $j=\eta+\frac{1}{2}$ is
removed from consideration as an active orbital; it is considered rather as a
spectator. To the remaining orbitals the redefinition
$\displaystyle j~{}=~{}l\pm\frac{1}{2}~{}=~{}{\tilde{l}}\mp\frac{1}{2}$ $\displaystyle\rightarrow$
$\displaystyle{\tilde{l}}~{}=~{}l\pm 1~{},~{}\eta~{}\rightarrow~{}{\tilde{\eta}}~{}=~{}\eta-1~{}~{}~{},$
(1)
is applied, where ${\tilde{l}}$ denotes the pseudo-orbital angular momentum
and ${\tilde{\eta}}$ the pseudo-shell number. With this redefinition alone it
is easily verified that each shell $\widetilde{\eta}$ has the same content as
the corresponding shell in the standard $SU(3)$ model.
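As a minimal illustration (not part of the paper's formalism), this equivalence of shell contents can be checked by brute force: removing the intruder orbital $j=\eta+\frac{1}{2}$ from shell $\eta$ and relabeling the remaining orbitals reproduces exactly the orbital content of the normal shell $\eta-1$. The sketch below uses the equivalent form $\tilde{l}=2j-l$ of the pseudo-spin relabeling and stores $2j$ so that all entries stay integer.

```python
# Sketch (an assumption-laden illustration, not the paper's code): verify
# that removing the intruder orbital j = eta + 1/2 from shell eta and
# relabeling the rest reproduces the orbital content of shell eta - 1.
# We use l~ = 2j - l, i.e. l~ = l +/- 1 so that j = l~ -/+ 1/2.

def shell_orbitals(eta):
    """All (l, 2j) pairs of the harmonic-oscillator shell eta."""
    orbitals = []
    for l in range(eta, -1, -2):          # l = eta, eta-2, ..., 1 or 0
        for twoj in (2 * l + 1, 2 * l - 1):
            if twoj > 0:
                orbitals.append((l, twoj))
    return sorted(orbitals)

def pseudo_shell(eta):
    """Normal orbitals of shell eta after removing the intruder and relabeling."""
    intruder = 2 * eta + 1                # 2j of the unique orbital
    relabeled = []
    for l, twoj in shell_orbitals(eta):
        if twoj == intruder:
            continue                      # spectator: treated via effective charges
        l_tilde = twoj - l                # l~ = 2j - l, i.e. l +/- 1
        relabeled.append((l_tilde, twoj))
    return sorted(relabeled)

for eta in range(2, 7):
    assert pseudo_shell(eta) == shell_orbitals(eta - 1)
```

For instance, the $\eta=3$ shell without its $j=7/2$ intruder maps exactly onto the $sd$-shell content of $\eta=2$.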
For large deformations, the Nilsson states for axial symmetric nuclei are
classified by their asymptotic quantum numbers ring
$\displaystyle\Omega\left[\eta\eta_{z}\Lambda\right]~{}~{}~{},$ (2)
where $\eta_{z}$ is the oscillation number in the $z$-direction, $\Lambda$ is
the projection of the orbital angular momentum onto the same axis and $\Omega$
= $\Lambda\pm\frac{1}{2}$.
Excluding the intruder levels, the reassignment of the orbitals is
$\displaystyle\Omega~{}=~{}\Lambda\pm\frac{1}{2}~{}=~{}\widetilde{\Lambda}\mp\frac{1}{2}$ $\displaystyle\rightarrow$
$\displaystyle\widetilde{\Lambda}~{}=~{}\Lambda\pm 1~{}~{}~{}.$ (3)
Inspecting the Nilsson diagrams ring , those orbitals with the same quantum
numbers $\left[\widetilde{\eta}\widetilde{\eta_{z}}\widetilde{\Lambda}\right]$
are degenerate, which implies a very small pseudo-spin-orbit interaction and
as a consequence an approximate symmetry. In addition, the content of the
$\widetilde{\eta}$ shell corresponds to the same one in the standard shell
model. Thus, the shell model for light nuclei can be directly extended to
heavy nuclei, using the $\widetilde{SU}(3)$ model instead of the $SU(3)$
model.
The basis in the Hilbert space is a direct product of the $\widetilde{SU}(3)$
states and the ones describing the unique (intruder) orbitals. As mentioned in
the introduction, their contribution to the nuclear dynamics is taken into
account by a well defined scaling factor (effective charge).
For example, the quadrupole effective charge is given by NPA1994 :
$\displaystyle e_{eff}(E2)$ $\displaystyle=$
$\displaystyle\left(\frac{Z}{\widetilde{A}}\right)^{2}\left(\frac{A}{{\widetilde{A}}}\right)^{\frac{4}{3}}~{}~{}~{},$
(4)
where $A$ and ${\widetilde{A}}$ are the total number of nucleons and the
number of nucleons in the normal orbitals, respectively. $Z$ is the total
number of protons.
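Since Eq. (4) is parameter free, it can be evaluated directly from the nucleon bookkeeping. The sketch below is illustrative only; the sample numbers (a $^{208}$Pb-like nucleus with an assumed $\widetilde{A}$) are not taken from the paper.

```python
# Hedged transcription of Eq. (4): quadrupole effective charge from the
# nucleon counts alone.  Z, A: total protons/nucleons of the real nucleus;
# A_tilde: nucleons in the normal (pseudo-SU(3)) orbitals.

def e_eff_E2(Z, A, A_tilde):
    return (Z / A_tilde) ** 2 * (A / A_tilde) ** (4.0 / 3.0)

# Illustrative (assumed) numbers: Z = 82, A = 208, A_tilde = 160.
example = e_eff_E2(82, 208, 160)
assert example > e_eff_E2(82, 208, 208)   # fewer normal nucleons -> larger charge
```

Note that the effective charge grows as more nucleons are relegated to the unique orbitals ($\widetilde{A}$ decreases), since the active nucleons must then account for the full quadrupole strength.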
Restricting the dynamics only to the nucleons in the normal orbitals was
justified in the introduction, recognizing that at large excitation energies,
where backbending effects play a role, the model has to be modified by adding
the contributions of the dynamics in the unique orbitals. The effectiveness of
$\widetilde{SU}(3)$ was demonstrated in NPA1994 for the case of the
pseudo-symplectic model of the nucleus pseudo-sympl and in many other
applications of the $\widetilde{SU}(3)$ model (see, for example, cseh-quart ).
### 2.1 Construction of the model space in light nuclei
In the SACM for light nuclei, the $SU(3)$ irreps are determined along the
following path: Each cluster is represented by an irrep
$\left(\lambda_{k},\mu_{k}\right)$ ($k=1,2$) in its ground state. Adding the
the number of oscillation quanta of the united nucleus results in a mismatch:
The number of oscillation quanta of the united nucleus is larger than the sum
of both clusters. Wildermuth wildermuth showed that the necessary condition
to satisfy the Pauli exclusion principle is to add the missing quanta into the
relative motion, introducing a minimal number of relative oscillation quanta
$n_{0}$. This is known as the Wildermuth condition. However, there are still
irreps which are not allowed by the Pauli-exclusion principle.
An elegant solution to it, avoiding cumbersome explicit antisymmetrization of
the wave function, was proposed in cseh-letter ; cseh-levai-anph , the
original publication of the SACM: The coupling of the cluster irreps with the
one of the relative motion generates a list of $SU(3)$ irreps, i.e.,
$\displaystyle\left(\lambda_{1},\mu_{1}\right)\otimes\left(\lambda_{2},\mu_{2}\right)\otimes\left(n_{\pi},0\right)$
$\displaystyle=$
$\displaystyle\sum_{m_{\lambda,\mu}}m_{\lambda,\mu}\left(\lambda,\mu\right)~{}~{}~{},$
(5)
where $n_{\pi}$ is the number of relative oscillation quanta (the label $\pi$
refers to the relative $\pi$-bosons of spin 1), limited from below by
$n_{0}$, and $m_{\lambda,\mu}$ is the multiplicity of
$\left(\lambda,\mu\right)$. This list of irreps is compared to the one of the
shell model. Only those which have a counterpart in the shell model are
included in the SACM model space. In this manner, the Pauli exclusion
principle is observed and the model space can be called microscopic.
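The coupling with the symmetric relative-motion irrep $(n_{\pi},0)$ appearing in Eq. (5) is multiplicity-free and can be evaluated with the horizontal-strip (Pieri) rule of the Littlewood-Richardson calculus. The sketch below (not the paper's code) adds $n$ boxes to the Young diagram $[\lambda+\mu,\mu,0]$ with no two boxes in the same column.

```python
# SU(3) product (lam, mu) x (n, 0): add n boxes as a horizontal strip to
# the 3-row Young diagram [lam+mu, mu, 0]; every resulting irrep has
# multiplicity one.  A sketch for illustration, not a full LR code.

def couple_symmetric(lam, mu, n):
    old = (lam + mu, mu, 0)
    result = []
    for a1 in range(n + 1):
        for a2 in range(n - a1 + 1):
            a3 = n - a1 - a2
            new = (old[0] + a1, old[1] + a2, old[2] + a3)
            if not (new[0] >= new[1] >= new[2]):
                continue                  # rows must stay weakly decreasing
            if new[1] > old[0] or new[2] > old[1]:
                continue                  # no two added boxes in one column
            result.append((new[0] - new[1], new[1] - new[2]))
    return sorted(result)

def dim(lam, mu):
    """Dimension of the SU(3) irrep (lam, mu)."""
    return (lam + 1) * (mu + 1) * (lam + mu + 2) // 2

# Dimension bookkeeping: 6 x 6 = 15 + 15 + 6
irreps = couple_symmetric(2, 0, 2)
assert irreps == [(0, 2), (2, 1), (4, 0)]
assert sum(dim(l, m) for l, m in irreps) == dim(2, 0) * dim(2, 0)
```

The general cluster-cluster product $(\lambda_{1},\mu_{1})\otimes(\lambda_{2},\mu_{2})$ requires the full Littlewood-Richardson rule and is usually delegated to dedicated codes (e.g., those of bahri ).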
In such a manner, the basis states are described by the ket state
$\displaystyle\mid(\lambda_{1},\mu_{1})(\lambda_{2},\mu_{2});\rho_{C}(\lambda_{C},\mu_{C})(n_{\pi},0);\rho(\lambda,\mu)\kappa
LM\rangle~{}~{}~{},$ (6)
where $\rho_{C}$, $\rho$ and $\kappa$ are multiplicity labels. The cluster
irreps are coupled first to $(\lambda_{C},\mu_{C})$, then with $(n_{\pi}0)$ to
the final $SU(3)$ irrep $(\lambda,\mu)$. The advantage of using the ket
formalism is that no coordinate-space description is needed; it suffices to
obtain the quantum numbers describing the Pauli-allowed cluster states. Of
course, the disadvantage is that no explicit spatial distribution of the
clusters is depicted.
This representation of the model space is advantageous because it involves
only $SU(3)$ groups, which refer to the shell model spaces of the clusters and
of the complete nucleus.
The word Semi in the name of the SACM refers to the phenomenological
character of the Hamiltonian, which is a sum of terms associated with the
single-particle energy, quadrupole-quadrupole interactions, angular momentum
operators, etc. Note that this is an additional ingredient; the construction
of the cluster space is independent of it and can be used in any microscopic
model.
Finally, we expose the path on how to deduce the $\widetilde{SU}(3)$ shell
model space: Each shell $\eta$ has $\frac{1}{2}(\eta+1)(\eta+2)$ orbital
degrees of freedom. Taking into account the two spin degrees of freedom, the
group-chain classifying the states within the shell $\eta$ is given by
$\displaystyle U((\eta+1)(\eta+2))$
$\displaystyle\supset~{}~{}~{}U(\frac{1}{2}(\eta+1)(\eta+2))\otimes U_{S}(2)$
$\displaystyle\left[1^{N}\right]$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}\widetilde{\left[h\right]}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}\left[h\right]$
$\displaystyle U(\frac{1}{2}(\eta+1)(\eta+2))$
$\displaystyle\supset~{}~{}~{}SU(3)$ $\displaystyle\left[h\right]$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}(\lambda,\mu)~{}~{}~{},$ (7)
where $[h]$ is a short hand notation for $\left[h_{1},h_{2}\right]$ and the
tilde denotes the conjugate Young diagram where rows and columns are
interchanged. Eq. (7) gives the relation of the spin part (denoted by the index
$S$) and the orbital part, such that the complete state is antisymmetric
($\left[1^{N}\right]$, with $N$ the number of nucleons in the shell $\eta$).
In the case of light nuclei, instead of the spin group $SU_{S}(2)$ the
spin-isospin group $SU_{ST}(4)$ appears. In contrast, in heavy nuclei the
protons and neutrons have to be considered within different shells, which is
the reason why (7) is used. For the reduction to $SU(3)$, programs are
available bahri .
The reduction scheme (7) is applied to each shell which contains nucleons.
Each $\Delta n_{\pi}\hbar\omega$ excitation ($\Delta n_{\pi}=0,1,...$)
corresponds to a particular distribution of the nucleons within the shells.
The $SU(3)$ content of each shell is multiplied with all others, resulting in
a preliminary list. Finally, the center-of-mass motion is removed from the
$\Delta n_{\pi}\hbar\omega$ excitation by multiplying the result for
$0\hbar\omega$ with $(\Delta n_{\pi},0)$, the $1\hbar\omega$ result by
$(\Delta n_{\pi}-1,0)$, etc., and subtracting all these products from the
previously obtained list. This is a standard procedure, which for heavy nuclei
is applied separately to the proton and neutron parts.
### 2.2 Concepts for the extension to heavy nuclei
When dealing with heavy nuclei one encounters an exploding number of states
($SU(3)$ irreps). Thus, one not only has to find a consistent path on how to
combine the cluster irreps with the relative motion to the final irreps, but
also a way to restrict further the irreps according to their importance to
contribute at low energy. This has to be done twice, namely for protons and
neutrons, and combining them leads to even more irreps. Thus, a method to
restrict the Hilbert space is essential.
The extension of the SACM to heavy nuclei requires the explanation of some
facts, concepts, and assumptions:
* •
All nucleons move in the same mean field of the parent nucleus. This is
obvious, because even when clusters are defined as consisting of a subset of
$A_{k}$ ($k=1,2$) nucleons, these nucleons are still part of the parent
nucleus and not free clusters. All nucleons are part of the same shell model,
characterized by the same oscillator energy $\hbar\omega$ of the united
nucleus. As a consequence, a light cluster within the parent nucleus cannot be
the same as a free nucleus; for example, for a free and independent cluster
the $\hbar\omega$ value is already different. The identification of the
clusters is only via their content in the number of protons and neutrons.
In addition, in $\widetilde{SU}(3)$ each cluster has nucleons in the normal
and unique orbitals and only the ones in the normal orbitals are counted.
Thus, a cluster changes to a pseudo-cluster with $\widetilde{A}_{k}$ nucleons
and $\widetilde{Z}_{k}$ protons in the normal orbitals ($k=1,2$).
* •
Because protons and neutrons are in different shells, the construction of the
model space is applied separately to them, as done in any shell model
application for heavy nuclei ring . The proton and neutron part move in the
same mean field, i.e., with the same $\hbar\omega$.
* •
The nucleons are filled into the Nilsson orbitals at the deformation of the
united nucleus. The deformation values for a nucleus are retrieved from the
tables nix-tables . This does not suggest that we are working in different
shell models; it is just a fact that protons and neutrons fill different kinds
of orbitals, which is obvious by inspecting the Nilsson diagrams for protons
and neutrons ring . For both, however, the deformation value is the same.
* •
The nucleons of the heavy cluster are filled in first and then the ones of the
light cluster. This involves the assumption that the light cluster is small
compared to the heavier one and is preformed within the united nucleus, a
phenomenological picture used in radioactive decays.
This step provides us with a heavy and a light cluster, each with a certain
number of nucleons in the normal orbitals, denoted by $\widetilde{A}_{k}$
($k=1,2$).
* •
In a final step, the proton and neutron part are combined, proposing a
particular selection, whose origin is the demand that the proton and neutron
fluid are aligned. This is discussed in draayer1 and in more detail in NPA576
: In the nuclear shell model the aligned irrep, i.e.,
$(\lambda_{p},\mu_{p})\otimes(\lambda_{n},\mu_{n})$ $\rightarrow$
$(\lambda_{p}+\lambda_{n},\mu_{p}+\mu_{n})$ ($p$ for protons and $n$ for
neutrons), corresponds to aligned principal axes of the two (proton-neutron)
rotors. All other irreps are non-aligned proton-neutron rotors, which lie at
higher energy, corresponding to scissors modes or isovector resonances
eisenberg . Thus, the restriction to aligned proton-neutron irreps is a good
approximation, taking into account effectively the interaction, which tends to
align the proton and neutron fluids.
### 2.3 Construction of the model space and further restrictions
The construction of the $\widetilde{SU}(3)$ model space for heavy nuclei is
in line with the one for light nuclei. First, some notational definitions: An
$\alpha=p,n$ denotes the type of the nucleons (protons or neutrons),
$(\widetilde{\lambda}_{k\alpha},\widetilde{\mu}_{k\alpha})$ ($k=1,2$) refers
to cluster number $k$ and for each type of nucleon. The
$(\widetilde{\lambda}_{\alpha C},\widetilde{\mu}_{\alpha_{C}})$ is called the
cluster irrep for the $\alpha$-type nucleons and it is the result of the
coupling
$\displaystyle(\widetilde{\lambda}_{1\alpha},\widetilde{\mu}_{1\alpha})\otimes(\widetilde{\lambda}_{2\alpha},\widetilde{\mu}_{2\alpha})$
$\displaystyle=$ $\displaystyle\sum_{\widetilde{\lambda}_{\alpha
C},\widetilde{\mu}_{\alpha_{C}}}m_{\widetilde{\lambda}_{\alpha
C},\widetilde{\mu}_{\alpha_{C}}}(\widetilde{\lambda}_{\alpha
C},\widetilde{\mu}_{\alpha_{C}})~{}~{}~{},$ (8)
where $m_{\widetilde{\lambda}_{\alpha C},\widetilde{\mu}_{\alpha_{C}}}$ is the
multiplicity of $(\widetilde{\lambda}_{\alpha
C},\widetilde{\mu}_{\alpha_{C}})$.
Afterward, each $(\widetilde{\lambda}_{\alpha
C},\widetilde{\mu}_{\alpha_{C}})$ is coupled with the relative motion irrep
$(\widetilde{n}_{\alpha\pi},0)$, i.e.,
$\displaystyle(\widetilde{\lambda}_{\alpha
C},\widetilde{\mu}_{\alpha_{C}})\otimes(\widetilde{n}_{\alpha\pi},0)$
$\displaystyle=$
$\displaystyle\sum_{\widetilde{\lambda}_{\alpha},\widetilde{\mu}_{\alpha}}m_{\widetilde{\lambda}_{\alpha},\widetilde{\mu}_{\alpha}}(\widetilde{\lambda}_{\alpha},\widetilde{\mu}_{\alpha})~{}~{}~{}.$
(9)
The minimal number of ${\widetilde{n}}_{\alpha\pi}$ is determined in the same
manner as in the SACM for light nuclei: It is the difference of the number of
$\pi$-oscillation quanta in the united nucleus (only counting those in the
normal orbitals) and the sum of oscillation quanta of the two clusters, for
protons and neutrons separately.
This leads to a large list of irreps
$(\widetilde{\lambda}_{\alpha},\widetilde{\mu}_{\alpha})$ for each sector of
nucleons. This list is compared to the one obtained by the shell model. A
program which does this automatically can be sent on request.
For a two cluster system, the path explained is resumed in the following group
chains:
$\displaystyle\widetilde{SU}_{1\alpha}(3)\otimes\widetilde{SU}_{2\alpha}(3)\otimes\widetilde{SU}_{R\alpha}(3)\supset$
$\displaystyle(\lambda_{1\alpha},\mu_{1\alpha})~{}~{}(\lambda_{2\alpha},\mu_{2\alpha})~{}~{}(\widetilde{n}_{\alpha\pi},0)$
$\displaystyle\widetilde{SU}_{\alpha
C}(3)\otimes\widetilde{SU}_{R\alpha}(3)\supset\widetilde{SU}(3)$
$\displaystyle~{}~{}(\lambda_{\alpha C},\mu_{\alpha
C})~{}~{}~{}(\widetilde{n}_{\alpha\pi},0)~{}~{}~{}~{}~{}~{}(\widetilde{\lambda},\widetilde{\mu})~{}~{}~{},$
(10)
where $R$ refers to the relative motion part. Below each group the
corresponding quantum numbers are listed. The $\widetilde{SU}(3)$ can be
further reduced to the angular momentum group.
Up to here, the procedure is in accordance with the SACM for light nuclei
cseh-letter , with the difference that, due to the breaking of isospin
symmetry, the protons and neutrons are treated separately. In order to obtain the final list
of Pauli allowed total irreps, the one of protons is multiplied with the one
of neutrons, i.e.,
$\displaystyle(\widetilde{\lambda}_{p},\widetilde{\mu}_{p})\otimes(\widetilde{\lambda}_{n},\widetilde{\mu}_{n})$
$\displaystyle=$
$\displaystyle\sum_{\widetilde{\lambda},\widetilde{\mu}}m_{\widetilde{\lambda},\widetilde{\mu}}(\widetilde{\lambda},\widetilde{\mu})~{}~{}~{},$
(11)
which is represented by the following group chain
$\displaystyle\widetilde{SU}_{p}(3)\otimes\widetilde{SU}_{n}(3)$
$\displaystyle\supset$ $\displaystyle\widetilde{SU}(3)~{}~{}~{}.$ (12)
To reduce further the large list, obtained in (11), we select only the irreps
corresponding to a linear coupling, namely
$\displaystyle(\lambda_{p},\mu_{p})\otimes(\lambda_{n},\mu_{n})$
$\displaystyle\rightarrow$
$\displaystyle(\lambda_{p}+\lambda_{n},\mu_{p}+\mu_{n})~{}~{}~{}.$ (13)
The justification is the same as mentioned further above, i.e., all other
irreps correspond to scissors modes at high energy draayer1 .
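The selection rule (13) is trivial to apply in practice; the snippet below (an illustration with made-up irreps, not taken from the paper) keeps only the aligned coupling.

```python
# Sketch of Eq. (13): from the full proton-neutron product only the
# "aligned" (stretched) irrep is kept; all other couplings correspond to
# scissors-like modes at higher energy and are dropped.

def aligned(lam_p, mu_p, lam_n, mu_n):
    return (lam_p + lam_n, mu_p + mu_n)

# e.g. an assumed (8,0) proton irrep combined with a (6,2) neutron irrep
assert aligned(8, 0, 6, 2) == (14, 2)
```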
### 2.4 The concept of forbiddenness
As for large clusters in light nuclei, for heavy clusters in heavy nuclei an
additional problem arises:
Usually, the clusters are in their ground state and all shell excitations are
handled via the radial excitation, because an excitation of the clusters is
equivalent to an excitation of the radial motion.
However, as mentioned above, when the light cluster is sufficiently large the
clusters have to be excited in order to connect to the ground state irrep of
the parent nucleus. The required number of excitation quanta is subtracted
from the relative motion. The cluster system has to be excited by a minimal
number of shell excitations $n_{C}$ = $n_{pC}+n_{nC}$, such that the ground
state can be reached for the first time in the coupling of the cluster irreps
and the relative motion, which now contains fewer quanta. Further excitations
have to be added to the relative motion, with the same argument as given in
the last paragraph. In smirnov this concept was denoted forbiddenness, with
important consequences for the explanation of the preferences of fission. The
concept is barely known but has been applied with success within the SACM
sacm-fission1 ; sacm-fission2 ; sacm-fission3 . In the examples presented in
Section 3 we will demonstrate the importance of the forbiddenness.
An easy to apply formula to determine the forbiddenness $n_{\alpha C}$ is
given in huitz-2015 (again for protons and neutrons separately, with
$\alpha=p$ or $n$, and all variables are denoted by a tilde in order to stress
that we work within the $\widetilde{SU}(3)$ language):
$\displaystyle{\widetilde{n}}_{\alpha C}=$ $\displaystyle{\rm
max}\left[0,\frac{1}{3}\left\\{{\widetilde{n}}_{\alpha
0}-({\widetilde{\lambda}}_{\alpha}-{\widetilde{\mu}}_{\alpha})-(2{\widetilde{\lambda}}_{\alpha
C}+{\widetilde{\mu}}_{\alpha C})\right\\}\right]$ $\displaystyle+{\rm
max}\left[0,\frac{1}{3}\left\\{{\widetilde{n}}_{\alpha
0}-({\widetilde{\lambda}}_{\alpha}+2{\widetilde{\mu}}_{\alpha})+({\widetilde{\lambda}}_{\alpha
C}-{\widetilde{\mu}}_{\alpha C})\right\\}\right]~{}~{}~{},$ (14)
where $\widetilde{n}_{\alpha 0}$ is the minimal number of relative oscillation
quanta for $\alpha$ (p or n) as required by the Wildermuth condition. The
forbiddenness can be zero, as is the case for most cluster systems of light
nuclei. The $({\widetilde{\lambda}}_{\alpha C},{\widetilde{\mu}}_{\alpha C})$
denotes the cluster irrep (to which the two clusters are coupled and there may
appear several) and
$({\widetilde{\lambda}}_{\alpha},{\widetilde{\mu}}_{\alpha})$ is the final
$SU(3)$ irrep of the united nucleus. For later use, we define
$\left({\widetilde{\lambda}}_{\alpha 0},{\widetilde{\mu}}_{\alpha 0}\right)$
as the difference of the excited cluster irrep
$\left({\widetilde{\lambda}}^{e}_{\alpha C},{\widetilde{\mu}}^{e}_{\alpha
C}\right)$ to the non-excited one $\left({\widetilde{\lambda}}_{\alpha
C},{\widetilde{\mu}}_{\alpha C}\right)$ (the letter $e$ stands for excited),
via
$\displaystyle\left({\widetilde{\lambda}}^{e}_{\alpha
C},{\widetilde{\mu}}^{e}_{\alpha C}\right)$ $\displaystyle=$
$\displaystyle\left({\widetilde{\lambda}}_{\alpha
C}+{\widetilde{\lambda}}_{\alpha 0},{\widetilde{\mu}}_{\alpha
C}+{\widetilde{\mu}}_{\alpha 0}\right)~{}~{}~{}.$ (15)
Eq. (14) can be interpreted as follows: The first term in (14) indicates that,
in order to minimize ${\widetilde{n}}_{\alpha C}$, we have to maximize
$(2{\widetilde{\lambda}}_{\alpha C}+{\widetilde{\mu}}_{\alpha C})$. The second
term suggests to minimize the difference $\left({\widetilde{\lambda}}_{\alpha
C}-{\widetilde{\mu}}_{\alpha C}\right)$. The condition of a maximal
$(2{\widetilde{\lambda}}_{\alpha C}+{\widetilde{\mu}}_{\alpha C})$ and a
minimal $\left({\widetilde{\lambda}}_{\alpha C}-{\widetilde{\mu}}_{\alpha
C}\right)$ implies a large, compact, and oblate configuration of the two-cluster
system. Information on the relative orientation of the clusters is obtained by
comparing the distribution of oscillation quanta of each cluster in the
different spatial directions NPA576 ; orient .
These conditions are achieved, determining the whole product of
$({\widetilde{\lambda}}_{1\alpha},{\widetilde{\mu}}_{1\alpha})\otimes({\widetilde{\lambda}}_{2\alpha},{\widetilde{\mu}}_{2\alpha})$,
using (8) and searching for the irrep that corresponds to a large compact
structure. The ${\widetilde{n}}_{0}={\widetilde{n}}_{p0}+{\widetilde{n}}_{n0}$
is the total minimal number of relative excitation quanta and
${\widetilde{n}}_{C}={\widetilde{n}}_{pC}+{\widetilde{n}}_{nC}$ is the total
forbiddenness.
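Eq. (14) is a direct arithmetic prescription and can be transcribed as follows; the quantum numbers in the example calls are made up for illustration, not taken from an actual cluster system.

```python
# Hedged transcription of Eq. (14): forbiddenness n_C for one nucleon type
# (protons or neutrons); all quantum numbers are in the pseudo-SU(3)
# (tilde) scheme.  n0: Wildermuth minimum of relative quanta; (lam, mu):
# target irrep of the united nucleus; (lamC, muC): coupled cluster irrep.

def forbiddenness(n0, lam, mu, lamC, muC):
    t1 = max(0.0, (n0 - (lam - mu) - (2 * lamC + muC)) / 3.0)
    t2 = max(0.0, (n0 - (lam + 2 * mu) + (lamC - muC)) / 3.0)
    return t1 + t2

# Stretched coupling (lam, mu) = (lamC + n0, muC) is always allowed:
assert forbiddenness(12, 8 + 12, 4, 8, 4) == 0.0
# A mismatched target irrep requires cluster excitation:
assert forbiddenness(12, 4, 2, 2, 0) == 4.0
```

The first (Wildermuth-type) case shows that when all missing quanta can go into the relative motion, both max-arguments are negative and the forbiddenness vanishes.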
The $\widetilde{n}_{\alpha C}$ quanta are transferred to the clusters; the
result does not depend on how they are distributed. For example, when only
one cluster is excited, first the irrep with one valence nucleon less is
determined and then coupled with the nucleon in the higher shell, with
$n_{\alpha C}$ quanta above the valence shell. The important criterion is
that at the end one finds a combination of irreps which couple to the final
irrep of the united nucleus. This is a direct but rather cumbersome procedure,
and how to do it will be illustrated in the applications.
More restrictions have to be implemented in order to avoid an unmanageably
large list of irreps:
* •
When coupling the cluster irreps, only those $\left(\lambda_{\alpha
C},\mu_{\alpha C}\right)$ irreps in the product are considered which couple to
the ground state of the united nucleus, for protons and neutrons separately.
* •
Of those, only the ones with the largest eigenvalues of the second-order
Casimir operator of $\widetilde{SU}(3)$ are considered; often the irrep with
the largest eigenvalue suffices. The main argument is that in deformed nuclei,
with a significant quadrupole-quadrupole interaction, only these selected
irreps contribute significantly at low energy.
* •
In coupling protons and neutrons together, only the linear coupling is
considered, for reasons explained before.
* •
The final list is, in general, still too large, and one has to restrict it to
the first few irreps with the largest eigenvalues of the second-order Casimir
operator, an argument valid for a dominant quadrupole-quadrupole interaction.
* •
Convergence criterion: Convergence under these restrictions has to be
verified by adding and/or removing some irreps. If no sensible change is
observed, the cut-off procedure is set.
This is a consistent procedure and provides a basis for the extension to heavy
nuclei: The sum of nucleons in the normal orbitals of the two clusters will
always equal the one of the united nucleus, which is, for example, ignored in
cseh-algora . All procedures known from the SACM can now be applied in the
same manner.
For completeness, we mention a different definition for the forbiddenness: In
order to be able to deal with heavy systems, the alternative definition was
proposed in cseh-algora ; cseh-scheid , where the relation of an irrep to its
deformation was exploited rowe1 . The criterion applied is as follows: If an
irrep of the list has a deformation similar to that of the shell-model-allowed
irrep, it implies a less forbidden state than irreps with a larger difference.
The reason why this works is that the deformation can be related to $SU(3)$
irreps rowe1 ; TE , and comparing deformations is equivalent to comparing the
dimensions of these irreps.
Therefore, the definition of forbiddenness used in cseh-scheid ; cseh-algora
is
$\displaystyle F$ $\displaystyle=$
$\displaystyle\frac{1}{1+min\left[\sqrt{\Delta n_{1}^{2}+\Delta
n_{2}^{2}+\Delta n_{3}^{2}}\right]}~{}~{}~{},$ (16)
where $\Delta n_{i}=\mid n_{i}-n_{i,\xi}\mid$ and in contrast to $S$, as
defined in cseh-scheid ; cseh-algora , we use $F$ because the letter $S$ will
be used later exclusively for the spectroscopic factor. The index $i$ refers
to the spatial direction of the oscillation and $\xi$ to the several cluster
irreps allowed by the Pauli-exclusion principle. The $n_{i}$ is the number of
oscillation quanta in direction $i$. When all $\Delta n_{i}$ are zero, then
$F=1$ and the irrep is allowed. If at least one of those numbers is different
from zero, the irrep is partially forbidden, and in the limit $F\rightarrow 0$
it is completely forbidden.
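Eq. (16) is likewise simple to evaluate; the sketch below assumes the Euclidean distance over the three cartesian quanta, as written, and the sample irreps are invented for illustration.

```python
# Sketch of Eq. (16): F compares the cartesian oscillation quanta
# (n1, n2, n3) of a candidate irrep with those of each Pauli-allowed irrep
# xi and keeps the closest match.  F = 1 means allowed; F decreases toward
# 0 as the mismatch grows (F reaches 0 only for an infinite mismatch).

import math

def F(candidate, allowed_list):
    best = min(math.dist(candidate, allowed) for allowed in allowed_list)
    return 1.0 / (1.0 + best)

assert F((4, 2, 0), [(4, 2, 0), (6, 0, 0)]) == 1.0   # exact match: allowed
assert 0.0 < F((5, 2, 0), [(4, 2, 0)]) < 1.0         # partially forbidden
```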
### 2.5 The structure of the Hamiltonian and the quadrupole transition
operator
The most general algebraic Hamiltonian has the same structure as for light
nuclei, save that the operators (number operator, quadrupole operator, etc.)
are substituted by their pseudo counterparts. How this mapping is achieved is
explained in detail in draayer1 , where an operator ${\mbox{\boldmath$O$}}$ in
$SU(3)$ is mapped to its counterpart ${\mbox{\boldmath$O$}}$ $\approx$
$\kappa{\mbox{\boldmath$\widetilde{O}$}}$, and $\kappa$ has a simple
approximation, namely $\kappa=\frac{\eta+\frac{3}{2}}{\eta+\frac{1}{2}}$,
close to 1. This factor can be absorbed into the parameters of the SACM
Hamiltonian.
We restrict ourselves to the $SU(3)$ limit, which turns out to be sufficient
due to the large deformation of the systems considered. This, however, will
result in vanishing $B(E2)$ transition rates between states belonging to
different $\widetilde{SU}(3)$ irreps.
A simplified model Hamiltonian is selected, which has the following structure:
$\displaystyle H$ $\displaystyle=$
$\displaystyle\hbar\omega\mbox{\boldmath$\widetilde{n}$}_{\pi}+(a_{2}-a_{5}\Delta\mbox{\boldmath$\widetilde{n}$}_{\pi})\mathit{\widetilde{{\mbox{\boldmath$C$}}}}_{2}\left(\widetilde{\lambda},\widetilde{\mu}\right)+t_{3}\left[\mathit{\widetilde{{\mbox{\boldmath$C$}}}}_{2}\left(\widetilde{\lambda},\widetilde{\mu}\right)\right]^{2}+t_{1}\mathit{\widetilde{{\mbox{\boldmath$C$}}}}_{3}\left(\widetilde{\lambda},\widetilde{\mu}\right)+\left(a_{3}+a_{Lnp}\Delta\widetilde{{\mbox{\boldmath$n$}}}_{\pi}\right){\mbox{\boldmath$\widetilde{L}$}}^{2}+t_{2}{\mbox{\boldmath$\widetilde{K}$}}^{2}~{}~{}~{},$
(17)
where
$\Delta\widetilde{{\mbox{\boldmath$n$}}}_{\pi}=\widetilde{{\mbox{\boldmath$n$}}}_{\pi}-({\widetilde{n}}_{0}-{\widetilde{n}}_{C})$,
with $({\widetilde{n}}_{0}-{\widetilde{n}}_{C})$ the total minimal number of
quanta required by the Pauli principle; the possible effects of the
forbiddenness are taken into account by ${\widetilde{n}}_{C}$. The moment of
inertia, contained in the factor of the total angular momentum operator
$\widetilde{{\mbox{\boldmath$L$}}}^{2}$, may depend on the excitation in
$\tilde{n}_{\pi}$ (excited states change their deformation, corresponding to a
variable moment of inertia). The ${\mbox{\boldmath$\widetilde{K}$}}^{2}$ term
serves to split the degeneracy in ${\widetilde{\mbox{\boldmath$L$}}}$ within the same
$\widetilde{SU}(3)$ irrep. The first term of the $\widetilde{SU}(3)$
Hamiltonian, $\hbar\omega\mbox{\boldmath$\widetilde{n}$}_{\pi}$, contains the
linear invariant operator of the $\widetilde{U}_{R}(3)$ subgroup defining the
mean field, and the $\hbar\omega$ is fixed via $(45A^{-1/3}-25A^{-2/3})$ MeV
for light nuclei hw , which can also be used for heavy nuclei. For heavy
nuclei $\hbar\omega=41A^{-\frac{1}{3}}$ MeV is more common. The $A$ is the
mass number of the real nucleus and not the number
${\tilde{A}}=\widetilde{A}_{1}+\widetilde{A}_{2}$ of nucleons in the normal
orbitals for the united nucleus.
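The two mean-field estimates can be compared numerically; the script below is an illustration only (both expressions are taken verbatim from the text, in MeV, with $A$ the mass number of the real nucleus).

```python
# The two hbar*omega estimates quoted in the text (hw reference), in MeV.

def hbar_omega_light(A):
    """45 A^(-1/3) - 25 A^(-2/3), the light-nucleus formula."""
    return 45.0 * A ** (-1.0 / 3.0) - 25.0 * A ** (-2.0 / 3.0)

def hbar_omega_heavy(A):
    """41 A^(-1/3), the more common heavy-nucleus formula."""
    return 41.0 * A ** (-1.0 / 3.0)

for A in (16, 100, 236):
    print(A, round(hbar_omega_light(A), 2), round(hbar_omega_heavy(A), 2))
```

For heavy systems the two formulas nearly coincide (for $A\approx 236$ they differ by less than 0.01 MeV), which is why either choice is acceptable there.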
The
$\widetilde{{\mbox{\boldmath$C$}}}_{2}\left({\widetilde{\lambda}},{\widetilde{\mu}}\right)$
is the second order Casimir-invariant of the coupled $\widetilde{SU}(3)$
group, having contributions both from the internal cluster part and from the
relative motion. The
$\widetilde{{\mbox{\boldmath$C$}}}_{2}\left({\widetilde{\lambda}},{\widetilde{\mu}}\right)$
is given by:
$\displaystyle\mbox{\boldmath$\widetilde{C}$}_{2}(\widetilde{\lambda},\widetilde{\mu})$
$\displaystyle=$ $\displaystyle
2\mbox{\boldmath$\widetilde{Q}$}^{2}+\frac{3}{4}\mbox{\boldmath$\widetilde{L}$}^{2},$
$\displaystyle\rightarrow$
$\displaystyle\left(\widetilde{\lambda}^{2}+\widetilde{\lambda}\widetilde{\mu}+\widetilde{\mu}^{2}+3\widetilde{\lambda}+3\widetilde{\mu}\right),$
$\widetilde{Q}$ $\displaystyle=$
$\displaystyle\mbox{\boldmath$\widetilde{Q}$}_{C}+\mbox{\boldmath$\widetilde{Q}$}_{R},$
$\widetilde{L}$ $\displaystyle=$
$\displaystyle\mbox{\boldmath$\widetilde{L}$}_{C}+\mbox{\boldmath$\widetilde{L}$}_{R},$
(18)
where $\widetilde{Q}$ and $\widetilde{L}$ are the total quadrupole and angular
momentum operators, respectively, and $R$ refers to the relative motion. The
eigenvalue of
$\mbox{\boldmath$\widetilde{C}$}_{2}(\widetilde{\lambda},\widetilde{\mu})$ is
also indicated. The relations of the quadrupole and angular momentum operators
to the $\widetilde{C}^{(1,1)}_{2m}$ generators of the $\widetilde{SU}(3)$
group, expressed in terms of $\widetilde{SU}(3)$-coupled $\pi$-boson creation
and annihilation operators, are escher :
$$\begin{aligned}
\mbox{\boldmath$\widetilde{Q}$}_{k,2m}&=\frac{1}{\sqrt{3}}\widetilde{C}^{(1,1)}_{k2m},\\
\mbox{\boldmath$\widetilde{L}$}_{k,1m}&=\widetilde{C}^{(1,1)}_{k1m},\\
\mbox{\boldmath$\widetilde{C}$}^{(1,1)}_{lm}&=\sqrt{2}\left[\mbox{\boldmath$\pi$}^{\dagger}\otimes\mbox{\boldmath$\pi$}\right]^{(1,1)}_{lm}.
\end{aligned}\tag{19}$$
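For a quick numerical check, the eigenvalue of the second-order Casimir operator in (18) can be evaluated directly from the irrep labels; this is a minimal sketch and the helper name is ours, not part of any model code:

```python
def casimir2_su3(lam, mu):
    """Eigenvalue of the second-order SU(3) Casimir operator in the
    normalization of Eq. (18): lam^2 + lam*mu + mu^2 + 3*lam + 3*mu."""
    return lam * lam + lam * mu + mu * mu + 3 * lam + 3 * mu

# e.g. the ground-state irrep (54,0) deduced later for 236U:
print(casimir2_su3(54, 0))  # 3078
```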
The quadrupole electromagnetic transition operator is defined as
$$\mbox{\boldmath$T$}_{m}^{(E2)}=\sum_{\gamma}e_{\gamma}^{(2)}\mbox{\boldmath$Q$}^{(2)}_{\gamma,m},\tag{20}$$
where $e_{\gamma}^{(2)}$ is the effective charge of the contribution to the
quadrupole operator, coming from the cluster $\gamma$ = $C_{1}$, $C_{2}$ and
from the relative motion $R$. The effective charges are determined as
explained in great detail in fraser-2012 .
### 2.6 Spectroscopic factors
A successful parametrization of the spectroscopic factor, within the SACM for
light nuclei, is given in specfac-draayer :
$$\begin{aligned}
S=&\;e^{{\cal A}+Bn_{\pi}+C{\cal C}_{2}(\lambda_{1},\mu_{1})+D{\cal C}_{2}(\lambda_{2},\mu_{2})+E{\cal C}_{2}(\lambda_{c},\mu_{c})+F{\cal C}_{2}(\lambda,\mu)+G{\cal C}_{3}(\lambda,\mu)+H\Delta n_{\pi}}\\
&\times\mid\langle(\lambda_{1},\mu_{1})\kappa_{1}L_{1},(\lambda_{2},\mu_{2})\kappa_{2}L_{2}\mid\mid(\lambda_{C},\mu_{C})\kappa_{C}L_{C}\rangle_{\varrho_{C}}\\
&\;\cdot\langle(\lambda_{C},\mu_{C})\kappa_{C}L_{C},(n_{\pi},0)1l\mid\mid(\lambda,\mu)\kappa L\rangle_{1}\mid^{2},
\end{aligned}\tag{21}$$
where the $\varrho$-numbers refer to multiplicities in the coupling to $SU(3)$
irreps and the $\kappa$'s to the multiplicities in the reduction to $SO(3)$.
The parameters were adjusted to spectroscopic factors calculated exactly
within the p- and sd-shell, using the $SU(3)$ shell model draayer2 , with
excellent agreement. The factor depending on the $SU(3)$ isoscalar factors
turns out to be crucial for this good agreement.
For heavy nuclei, spectroscopic factors are poorly known experimentally, or
not known at all. Therefore, we have to propose a simplified, manageable
ansatz compared to (21), which includes the forbiddenness and is, as far as
possible, parameter free.
In what follows, we will propose an expression for the spectroscopic factor
which is motivated by the one used for nuclei in the p- and sd-shell. As in
(21), the expression is divided into two factors: the first one is an
exponential factor and the second one, of pure geometrical origin
specfac-draayer , is an expression in terms of coupling coefficients. The
second factor is maintained, because it refers to the coupling of $SU(3)$
irreps only.
The first exponential factor deserves more explanation: As argued in
specfac-draayer , this term is the result of the relative part of the wave
function, which for zero angular momentum is proportional to $e^{-aR^{2}}\sim
e^{-a\frac{\hbar}{\mu\omega}n_{\pi}}$, where $R$ is the relative distance of
the two clusters (an $e^{-aR}$ ansatz would be more appropriate, but the
clusters are defined in the harmonic oscillator picture and we stay with it
for consistency), $a$ has units of $\rm{fm}^{-2}$, and $\mu$ is the reduced
mass. Let us restrict to the minimum value $n_{0}$ of $n_{\pi}$. Using the
relation $r_{0}=\sqrt{\frac{\hbar}{\mu\omega}n_{0}}$ geom , where $r_{0}$
is the average, minimal distance between the clusters, and taking into account
that in this case $R=r_{0}$, we obtain $e^{-\mid B\mid n_{0}}$, with $\mid
B\mid=a\frac{\hbar}{\mu\omega}$ and $B<0$. Requiring the wave function to
acquire the value $e^{-1}$ then yields the relation $\mid
B\mid=\frac{1}{n_{0}}$. For the nuclei in the sd-shell, the adjustment of the
parameters was done for cases with $n_{0}=8$, which according to this
estimation corresponds to $B\approx-0.13$. This has to be compared to the
value $-0.36$ obtained in the fit in specfac-draayer , i.e., it is only an
approximation, but it gives at least the correct order of magnitude. The most
important part of (21) is the factor depending on the $SU(3)$ isoscalar
factors, which is responsible for the relative changes, while the influence of
the exponential factor is not dominant for the relative numerical values of
the spectroscopic factors.
Furthermore, when ratios of spectroscopic factors are used, the exponential
contribution cancels for states in the $0\hbar\omega$ shell. Also, when
$(n_{0}-n_{C})$ is large, the corrections for $\Delta n_{\pi}$ of the order of
one will be negligible. We do not see any possibility to estimate the
parameter $\cal{A}$ in the exponential factor, which represents a
normalization. The other terms in the exponential factor represent corrections
to the inter-cluster distance, because they correspond to deformation effects;
the parameters in front turned out to be consistently small, so for the moment
these terms can be neglected.
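The numerical estimate quoted above is easy to reproduce; this is our own arithmetic check, with variable names chosen for illustration:

```python
# |B| = 1/n0 estimate from the wave-function argument above (sd-shell, n0 = 8)
n0 = 8
B_est = -1.0 / n0
print(B_est)  # -0.125, consistent with B ~ -0.13, vs. the fitted -0.36
```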
In light of the above estimation and discussion, for heavy nuclei we propose a
similar expression as in (21), but due to the non-availability of a sufficient
number of spectroscopic factor values (or none at all) for heavy nuclei, we
propose the following simplified expression:
$$\begin{aligned}
S=&\;e^{{\tilde{\cal A}}+{\tilde{B}}({\tilde{n}}_{0}-{\tilde{n}}_{C}+\Delta{\tilde{n}}_{\pi})}\\
&\times\mid\langle({\tilde{\lambda}}_{1},{\tilde{\mu}}_{1}){\tilde{\kappa}}_{1}{\tilde{L}}_{1},({\tilde{\lambda}}_{2},{\tilde{\mu}}_{2}){\tilde{\kappa}}_{2}{\tilde{L}}_{2}\mid\mid({\tilde{\lambda}}_{C}+{\widetilde{\lambda}}_{0},{\tilde{\mu}}_{C}+{\widetilde{\mu}}_{0}){\tilde{\kappa}}_{C}{\tilde{L}}_{C}\rangle_{\varrho_{C}}\\
&\;\cdot\langle({\tilde{\lambda}}_{C}+{\widetilde{\lambda}}_{0},{\tilde{\mu}}_{C}+{\widetilde{\mu}}_{0}){\tilde{\kappa}}_{C}{\tilde{L}}_{C},({\tilde{n}}_{\pi},0)1{\tilde{l}}\mid\mid({\tilde{\lambda}},{\tilde{\mu}}){\tilde{\kappa}}{\tilde{L}}\rangle_{1}\mid^{2}.
\end{aligned}\tag{22}$$
The $n_{\pi}$ in the exponential factor was substituted by
$[({\tilde{n}}_{0}-{\tilde{n}}_{C})+\Delta{\tilde{n}}_{\pi}]$. The
$({\tilde{n}}_{0}-{\tilde{n}}_{C})$ is the number of relative oscillation
quanta in $0\hbar\omega$ ($\tilde{n}_{C}$ was added to the excitation of the
clusters).
The parameter is estimated as
${\tilde{B}}=-\frac{1}{{\tilde{n}}_{0}-{\tilde{n}}_{C}}$. Because we cannot
determine the parameter ${\tilde{\cal A}}$, only spectroscopic factors divided
by the exponential factor $e^{{\widetilde{A}}}$ are listed. An additional
dependence on $\tilde{n}_{C}$ is contained in the product of reduced coupling
coefficients, through the appearance of ${\widetilde{\lambda}}_{0}$ and
${\widetilde{\mu}}_{0}$ (see the definition in (15)). ${\widetilde{n}}_{0}$
and ${\widetilde{n}}_{C}$ refer to the values within the parent nucleus. In
general, further parameters can be added in (22), as for light nuclei, but
they would have to be adjusted.
With this choice, the spectroscopic factor is parameter free, except for an
overall normalization, which does not play a role when only ratios are of
interest.
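The exponential factor of (22), with the estimate $\tilde{B}=-1/(\tilde{n}_{0}-\tilde{n}_{C})$, can be sketched as follows; the unknown normalization $\tilde{\cal A}$ is set to zero, which only affects the overall scale, and the function name is ours:

```python
import math

def exp_factor(n0, nC, delta_n_pi, A_tilde=0.0):
    """Exponential factor of Eq. (22), using the estimate
    B_tilde = -1/(n0 - nC); A_tilde is the unknown normalization."""
    B = -1.0 / (n0 - nC)
    return math.exp(A_tilde + B * (n0 - nC + delta_n_pi))

# For delta_n_pi = 0 the factor is exactly e^{-1}, by construction:
print(round(exp_factor(60, 2, 0), 4))  # 0.3679
```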
## 3 Applications
In this section we apply the pseudo-SACM to three sample systems. The first is
${}^{236}_{92}$U144 $\rightarrow$ ${}^{210}_{82}$Pb128+${}^{26}_{10}$Ne16, the
second one is ${}^{224}_{88}$Ra136 $\rightarrow$
${}^{210}_{82}$Pb128+${}^{14}_{6}$C8 and the third one is ${}^{236}_{92}$U144
$\rightarrow$ ${}^{146}_{54}$Xe92+${}^{90}_{38}$Sr52, which appears in the
fission channel of 236U. In the first two cases, the forbiddenness is small.
They serve to illustrate how to add the quanta to the clusters and how to
determine the final cluster irrep. The forbiddenness is significant in the
third case, and there it will be much harder to determine the cluster irreps.
This sequence also shows that the larger the lighter cluster is, the larger
the forbiddenness becomes.
For illustrative reasons, only the $\widetilde{SU}(3)$ dynamical symmetry
limit is considered, i.e., the united nucleus must be well deformed. The
mixing of $SU(3)$ irreps will be investigated in a future publication. The
examples also serve to illustrate the applicability of the model and how to
deduce the quantum numbers. However, some transitions will be forbidden due to
$SU(3)$ selection rules.
Table 1 is very useful: it relates the number of nucleons in a given shell
$\eta$ to that of the pseudo-shell number $\widetilde{\eta}$. In the last
column the accumulated number of oscillation quanta within the
$\widetilde{SU}(3)$ shell model is listed.
$\eta$ | No. of nucleons | $\widetilde{\eta}$ | No. of nucleons | acum. No. of quanta
---|---|---|---|---
0 | 2 | - | - | -
1 | 6 | 0 | 2 | 0
2 | 12 | 1 | 6 | 6
3 | 20 | 2 | 12 | 30
4 | 30 | 3 | 20 | 90
5 | 42 | 4 | 30 | 210
6 | 56 | 5 | 42 | 420
Table 1: Number of particles in the shell number $\eta$ and in the
pseudo-shell number $\widetilde{\eta}$, including the spin degree of freedom.
The last column lists the accumulated number of quanta of the
$\widetilde{SU}(3)$ scheme when each shell is full up to the valence one. The
final total number of quanta is obtained by adding, to the number of quanta
reached up to the last closed shell, the number of quanta of the nucleons in
the valence shell.
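The entries of Table 1 follow directly from the oscillator shell degeneracies; the small helper below (ours, written for illustration) reproduces both the nucleon numbers and the accumulated quanta:

```python
def shell_capacity(eta):
    """Nucleons fitting in oscillator shell eta, spin included:
    (eta+1)(eta+2)/2 spatial states times 2 spin projections."""
    return (eta + 1) * (eta + 2)

def accumulated_quanta(eta_max):
    """Total oscillation quanta when every shell up to eta_max is full:
    each of the (k+1)(k+2) nucleons in shell k carries k quanta."""
    return sum(k * shell_capacity(k) for k in range(eta_max + 1))

print([shell_capacity(e) for e in range(7)])      # [2, 6, 12, 20, 30, 42, 56]
print([accumulated_quanta(e) for e in range(6)])  # [0, 6, 30, 90, 210, 420]
```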
In this section, we explain through the examples, in great detail, the
calculations which lead us to the cluster irreps and, with the relative
motion, to the ground state irreps in the proton and neutron part. This
requires a lot of cumbersome counting and determination of irreps, especially
when relative oscillation quanta from the forbiddenness are added to the
clusters. We apologize for that and ask the reader to be patient. A reader not
interested in the details can skip this part and jump to the final numbers of
the cluster irreps.
### 3.1 ${}^{236}_{92}$U144 $\rightarrow$
${}^{210}_{82}$Pb128+${}^{26}_{10}$Ne16
$\widetilde{\eta}$ | 0 | 1 | 2 | 3 | 4 | | $N$
---|---|---|---|---|---|---|---
${}^{128}_{46}Pd_{82}$ | 2 | 6 | 12 | 20 | 6 | 2(0)+6+2(12)+3(20)+4(6) | 114
${}^{112}_{40}Zr_{72}$ | 2 | 6 | 12 | 20 | 0 | 2(0)+6+2(12)+3(20) | 90
${}^{16}_{6}C_{10}$ | 2 | 4 | 0 | 0 | 0 | 2(0)+4 | 4
Table 2: An example of how to determine the number of oscillation quanta ($N$) in the system derived from 236U $\rightarrow$ 210Pb+26Ne (the proton part is shown), with the help of Table 1. The same is applied for the two other sample cases, where Table 1 is very helpful for counting the number of relative excitation quanta. When one cluster is excited by $\widetilde{n}_{C}$ quanta, this number has still to be added.
Parameter | system a | system b | system c
---|---|---|---
$\hbar\omega$ | $6.63$ | $6.73$ | $6.63$
$a_{2}$ | $-0.020841$ | $-0.0082194$ | $-0.023251$
$a_{3}$ | $0.0087042$ | $0.0099524$ | $0.0067124$
$t_{1}$ | $6.698\times 10^{-5}$ | $-0.00061005$ | $-2.74\times 10^{-5}$
$t_{2}$ | $0.40$ | $0.2205$ | $0.0364275$
$a_{5}$ | $-0.0081127$ | $0.1$ | $0.0098151$
$a_{L}$ | $-0.0012576$ | $0.0025050$ | $0.00073054$
$a_{L_{n_{p}}}$ | $6.55\times 10^{-3}$ | $0.00025837$ | $-0.00018069$
$t_{3}$ | $-1.420\times 10^{-7}$ | $1.10\times 10^{-5}$ | $5.10\times 10^{-7}$
Table 3: Non-zero parameter values for system a: 236U $\rightarrow$ 210Pb +
26Ne; system b: 224Ra $\rightarrow$ 210Pb + 14C and system c: 236U
$\rightarrow$ 148Xe + 90Sr.
Figure 1: Spectrum of 236U, described by the clusterization 210Pb+26Ne. Only
states up to angular momentum 6 are depicted. The theoretical spectrum (left
panel) is compared to experiment (right panel). Below each rotational band the
content of the number of $\pi$ bosons ($n_{\pi}$) and the
$\widetilde{SU}(3)$ irrep is indicated.
As explained above, the protons and neutrons are treated separately and the
nucleons in each sector are filled into the Nilsson diagram from below, at the
deformation value $\epsilon_{2}=0.200$ ($\beta_{2}=0.215$) nix-tables . The
oscillator parameter is $\hbar\omega=6.63$ MeV.
For 236U, the united nucleus, we obtain 46 protons in the normal orbitals and
the valence shell is $\widetilde{\eta}_{p}=4$ with 6 valence protons. The
ground state $\widetilde{SU}(3)$ irrep for the proton part is
$(\tilde{\lambda},\tilde{\mu})_{p}=(18,0)_{p}$, while for the neutrons there
are 82 particles in normal orbitals with 12 in the $\widetilde{\eta}_{n}=5$
valence shell, giving the ground state irrep
$(\tilde{\lambda},\tilde{\mu})_{n}=(36,0)_{n}$. These two irreps can be
coupled to the total one for 236U, namely
$(\tilde{\lambda},\tilde{\mu})=(54,0)$. The determination of the ground state
irrep is necessary for the evaluation of the forbiddenness (see (14)).
These considerations have to be repeated for the two clusters involved, with
210Pb being the heavier cluster and 26Ne the lighter one. For 210Pb, filling
the protons into the Nilsson diagram, at the same deformation as for the
united nucleus, we obtain 40 protons in normal orbitals, where the valence
shell is ${\widetilde{\eta}}=3$ and closed, thus the corresponding irrep is
$(0,0)^{\rm Pb}_{p}$. For the neutrons one has 72 in normal orbitals with 2
neutrons in the ${\widetilde{\eta}}_{n}=5$ pseudo-shell. The corresponding
irrep is $(10,0)^{\rm Pb}_{n}$.
$J_{k}^{P}$ | 236U $E_{{\rm exp}}$ [MeV] | 224Ra $E_{{\rm exp}}$ [MeV]
---|---|---
$0_{2}^{+}$ | 0.919 | 0.916
$2_{1}^{+}$ | 0.045 | 0.084
$2_{2}^{+}$ | 0.958 | -
$2_{3}^{+}$ | 0.960 | 0.993
$3_{1}^{+}$ | 1.002 | -
$4_{1}^{+}$ | 0.150 | 0.251
$6_{1}^{+}$ | 0.210 | 0.479
$1_{1}^{-}$ | 0.688 | 0.216
$1_{2}^{-}$ | 0.967 | -
$3_{1}^{-}$ | 0.744 | 0.290
$5_{1}^{-}$ | 0.848 | 0.433
$J_{i}^{P}\rightarrow J_{f}^{P}$ | 236U: $B(E2)$ [WU] | 224Ra: $B(E2)$ [WU]
$2_{1}\rightarrow 0_{1}$ | 250. | 99.
$4_{1}\rightarrow 2_{1}$ | 357. | 144
Table 4: Experimental data used in the fit of the parameters of the model Hamiltonian. The second column lists the data used for 236U and the third column for 224Ra. If no data are mentioned (dashed sign), the experimental value is not used in the fit.
$J_{k}^{P}$ | 236U-1 (th) | 224Ra (th) | 236U-2
---|---|---|---
$0_{1}^{+}$ | 0.0 | 0.0 | $0.00403$
$0_{2}^{+}$ | 0.0 | 0.0 | $0.0000356$
$2_{1}^{+}$ | 0.0 | 0.0 | $0.00384$
$2_{2}^{+}$ | 0.0 | 0.0 | $4.79\times 10^{-7}$
$4_{1}^{+}$ | 0.0 | 0.0 | $0.00346$
$4_{2}^{+}$ | 0.0 | 0.0 | $7.00\times 10^{-6}$
$1_{1}^{-}$ | 0.0 | 0.0 | $1.04\times 10^{-5}$
$2_{1}^{-}$ | 0.0 | 0.0 | $1.04\times 10^{-5}$
$3_{1}^{-}$ | 0.0 | 0.0 | $6.03\times 10^{-5}$
Table 5: Some spectroscopic factors of low lying states, divided by
$e^{\widetilde{A}}$. In the first column the state quantum numbers are
tabulated. The values of the spectroscopic factor for 236U containing the
210Pb cluster, 224Ra and 236U containing the 146Xe cluster are in the second,
third and fourth column, respectively. Only values which are greater than
$10^{-8}$ are listed. As seen, only the last system shows significant
deviations from zero.
Using the numbers just deduced, the system ${}^{236}_{92}$U $\rightarrow$
${}^{210}_{82}$Pb+${}^{26}_{10}$Ne can be viewed within the
$\widetilde{SU}(3)$ description as a ${}^{128}_{46}\widetilde{\rm{Pd}}$
$\rightarrow$ ${}^{112}_{40}\widetilde{\rm{Zr}}$ +
${}^{16}_{6}\widetilde{\rm{C}}$ cluster system, obtained by counting the
protons and neutrons in the normal orbitals. Of course, these so-called
pseudo-nuclei are only schematic in nature.
Tables 1 and 2 serve to illustrate how to determine the total number of quanta
in each cluster and in the united nucleus, for the particular case considered.
For the other cases treated in this manuscript, it is similar.
The minimal number of quanta which have to be added in the proton part is 20,
corresponding to a $(20,0)_{pR}$ irrep in the relative part. For the neutron
part, this number is 40, i.e., an irrep $(40,0)_{nR}$.
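The counting behind Table 2 can be reproduced by summing $\eta\,N_{\eta}$ over the occupied pseudo-shells; the dictionaries below simply encode the proton-part occupations read off the table:

```python
# pseudo-shell occupations (proton part) from Table 2
occ_united = {0: 2, 1: 6, 2: 12, 3: 20, 4: 6}   # pseudo-Pd (from 236U)
occ_heavy  = {0: 2, 1: 6, 2: 12, 3: 20}          # pseudo-Zr (from 210Pb)
occ_light  = {0: 2, 1: 4}                        # pseudo-C  (from 26Ne)

def quanta(occupation):
    # total oscillation quanta: shell number times occupancy
    return sum(eta * n for eta, n in occupation.items())

# minimal number of relative quanta in the proton part:
n_rel = quanta(occ_united) - quanta(occ_heavy) - quanta(occ_light)
print(quanta(occ_united), quanta(occ_heavy), quanta(occ_light), n_rel)  # 114 90 4 20
```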
In the next step, the proton part of the clusters is coupled with the relative
part of the proton section of the united nucleus. The same is done
for the neutrons. For the proton part, the product
$(0,0)_{p}\otimes(0,2)_{p}\otimes(20,0)_{pR}$ (the index $pR$ refers to the
relative motion of the protons) contains the proton irrep $(18,0)_{p}$ of the
united nucleus, thus, the forbiddenness for the proton part is zero. The
situation is different for the neutron part: The product
$(10,0)_{n}\otimes(4,0)_{n}\otimes(40,0)_{nR}$ does not contain (36,0), which
is the irrep in the united nucleus. This indicates that the clusters have to
be excited, thus, the forbiddenness is different from zero. Using the formula
(14) we obtain a forbiddenness of $\widetilde{n}_{C}=2$. The excitation of the
clusters is achieved, changing as one possibility the irrep of 26Ne from
$(4,0)_{n}$ to $(6,0)_{n}$. The relative part is now reduced by two quanta,
leaving $(38,0)_{nR}$ (the index $nR$ refers to the relative motion of the
neutrons). With this change, the product
$(10,0)_{n}\otimes(6,0)_{n}\otimes(38,0)_{nR}$ now contains the dominant irrep
for neutrons in 236U. This is verified by Eq. (14). Distributing the
excitation quanta in a different manner leads to the same final result.
Using the Hamiltonian in the $SU(3)$-dynamical limit, the coefficients are
adjusted to the experimental data, listed in Table 4 in the second column. The
optimal parameters obtained are listed in Table 3, second column. With these
parameters, the spectrum calculated is depicted in Figure 1. The calculated
B(E2)-transition values are listed in Table 6, second (theory) and third
(experiment) column.
As can be noted, the agreement with experiment is good and shows the
effectiveness of the pseudo-SACM in describing the collective structure of
heavy nuclei.
Next, we calculated some spectroscopic factors, listed in Table 5, second
column. Equation (22) was used with the approximation of the parameter
$\widetilde{B}$ as $\left(-\frac{1}{{\tilde{n}}_{0}-{\tilde{n}}_{C}}\right)$. The total
number of relative oscillation quanta for the system under study is
${\tilde{n}}_{0}=60$ and ${\widetilde{n}}_{C}=2$, thus,
$\widetilde{B}\approx-0.0172$ and the exponential factor in (22) acquires the
form
$e^{\widetilde{A}-0.0172({\tilde{n}}_{0}-{\tilde{n}}_{C}+\Delta{\tilde{n}}_{\pi})}$
$\approx$
$(0.983)^{({\tilde{n}}_{0}-{\tilde{n}}_{C}+\Delta{\tilde{n}}_{\pi})}e^{\widetilde{A}}$.
The factor $e^{\widetilde{A}}$ is unknown and, as explained before, the
spectroscopic factor values in Table 5 are divided by $e^{{\widetilde{A}}}$.
As observed, the spectroscopic factors for $\Delta{\tilde{n}}_{\pi}=1$ are
suppressed, with values smaller than $10^{-8}$ considered to be zero.
### 3.2 ${}^{224}_{88}$Ra136 $\rightarrow$ ${}^{210}_{82}$Pb128 + ${}^{14}_{6}$C8
$J_{i}^{P_{i}}\rightarrow J_{f}^{P_{f}}$ | a | b | c | $U$-exp | $Ra$-exp
---|---|---|---|---|---
$2^{+}_{1}\rightarrow 0^{+}_{1}$ | $251$ | $99$ | $250$ | $250\pm 10$ | $99\pm 3$
$2^{+}_{2}\rightarrow 0^{+}_{1}$ | $0$ | $0.321$ | $0$ | - |
$4^{+}_{1}\rightarrow 2^{+}_{1}$ | $357$ | $141$ | $357$ | $357\pm 23$ | $141\pm 7$
$4^{+}_{2}\rightarrow 2^{+}_{1}$ | $0$ | $0.0312$ | $0$ | - | -
$4^{-}_{1}\rightarrow 2^{-}_{1}$ | $309$ | $120$ | $3.554$ | - | -
$4^{-}_{2}\rightarrow 2^{-}_{1}$ | $0.$ | $0.00169$ | $337$ | - | -
$6^{+}_{1}\rightarrow 4^{+}_{1}$ | $391$ | $155$ | $390$ | $385\pm 22$ | $157\pm 13$
$2^{+}_{1}\rightarrow 0^{+}_{2}$ | $0$ | $0.115$ | $0$ | - | -
$2^{+}_{2}\rightarrow 0^{+}_{2}$ | $269$ | $142$ | $16.13$ | - | -
$4^{+}_{1}\rightarrow 2^{+}_{2}$ | $0$ | $0.00472$ | $0$ | - | -
$4^{+}_{2}\rightarrow 2^{+}_{2}$ | $383$ | $58.94$ | $21$ | - | -
$4^{-}_{1}\rightarrow 2^{-}_{2}$ | $0$ | $0.974$ | $0$ | - | -
$4^{-}_{2}\rightarrow 2^{-}_{2}$ | $368$ | $6.25\times 10^{-6}$ | $0$ | - |
Table 6: Theoretical $B(E2)$-transition values for system a: 236U
$\rightarrow$ 210Pb + 26Ne; system b: 224Ra $\rightarrow$ 210Pb + 14C and
system c: 236U $\rightarrow$ 146Xe + 90Sr. The last two columns list the
experimental values for 236U and 224Ra, respectively. The units are WU and
the theoretical values are compared to available experimental data brook .
Note that in 236U most of the inter-band transitions are zero. This is due to
the fact that the ground state irrep is (54,0) and that we are working in the
$SU(3)$ limit, thus, there are no transitions between states in distinct
$SU(3)$ irrep. This changes for 224Ra, whose ground state irrep in the $SU(3)$
limit is (48,4), which contains several $K$ bands ($K=0,2,4$) and allows now a
transition between states from different bands at low energy.
Figure 2: Spectrum of 224Ra, described by the clusterization 210Pb+14C. Only
states up to angular momentum 6 are depicted. The theoretical spectrum (left
panel) is compared to experiment (right panel).
As in the former section, the protons and neutrons are treated separately,
where the nucleons are filled into the Nilsson diagram from below, at the
deformation value $\epsilon_{2}=0.150$ ($\beta_{2}=0.164$) nix-tables . The
oscillator parameter is $\hbar\omega=6.73$ MeV.
For 224Ra, the united nucleus, we obtain 46 protons in the normal orbitals and
the valence shell is ${\widetilde{\eta}}_{p}=4$ with 6 valence protons. The
$\widetilde{SU}(3)$ irrep is $(\tilde{\lambda},\tilde{\mu})_{p}=(18,0)^{\rm
Ra}_{p}$, while for the neutrons we have 80 normal particles with 10 in the
${\widetilde{\eta}}_{n}=5$ valence shell, giving
$(\tilde{\lambda},\tilde{\mu})^{\rm Ra}_{n}=(30,4)^{\rm Ra}_{n}$. These two
irreps can be coupled to the ground state irrep for 224Ra, namely
$(\tilde{\lambda},\tilde{\mu})=(48,4)$.
The compilation of the valence shells, their nucleon content and the
corresponding irreps for 236U can be found in the former subsection.
The light cluster 14C is added on top of the heavy cluster. We have then 6
protons and 8 neutrons in normal orbitals, which gives $(0,2)^{\rm C}_{p}$
(two holes in the $\widetilde{p}$ shell) and $(0,0)^{\rm C}_{n}$ (closed
$\widetilde{p}$ shell).
Using the numbers just deduced, the system ${}^{224}_{88}$Ra136 $\rightarrow$
${}^{210}_{82}$Pb128 + ${}^{14}_{6}$C8 can be viewed within the
$\widetilde{SU}(3)$ description as a ${}^{126}_{46}\widetilde{\rm{Pd}}_{80}$
$\rightarrow$ ${}^{112}_{40}\widetilde{\rm{Zr}}_{72}$ +
${}^{14}_{6}\widetilde{\rm{C}}_{8}$ cluster system. Of course, these so-called
pseudo-nuclei only serve for illustration. In what follows, we will continue
to use the notation for the real nuclei.
The minimal number of quanta which have to be added in the proton part of the
united nucleus is 20, which is the result of counting the difference of the
oscillation in the united pseudo-nucleus to the sum of the oscillation quanta
of the two pseudo-clusters. This corresponds to a $(20,0)_{Rp}$ irrep in the
relative part. For the neutron part the difference in the oscillation quanta
is 34 and, thus, the irrep of the relative motion is $(34,0)_{Rn}$.
For the proton part, the product $\left[(0,0)^{\rm
Pb}_{p}\otimes(0,2)^{C}_{p}\right]$ $\otimes$ $(20,0)_{Rp}$ does lead to the
final ground state irrep $(18,0)^{\rm Ra}_{p}$ and, therefore, the
forbiddenness is zero. For the neutron part, however, this is no longer the
case: The product $\left[(0,0)^{\rm Pb}_{n}\otimes(0,0)^{\rm C}_{n}\right]$
$\otimes$ $(34,0)_{Rn}$ does not contain the ground state irrep $(30,4)^{\rm
Ra}_{n}$. Applying the formula for the forbiddenness gives $n_{C}^{n}=2$.
These two quanta are subtracted from the relative motion, giving
$(32,0)_{Rn}$, and are added to the Pb-cluster, as one possibility (any other
leads to the same final result). The irrep used is obtained by subtracting one
neutron from the $\widetilde{\eta}_{n}=5$ shell and exciting it to
$\widetilde{\eta}_{n}=5+2=7$, i.e., to the irrep (7,0). The neutrons in the valence shell
provide the irrep (5,0) (only one valence neutron left) and the product with
(7,0) contains the $(8,2)^{\rm Pb}_{n}$ irrep. The product with the relative
motion is sufficient, because the $C$ pseudo-nucleus has the scalar neutron
irrep $(0,0)^{\rm C}_{n}$. The product $(8,2)^{\rm Pb}_{n}\otimes(32,0)_{Rn}$
contains the ground state irrep $(30,4)^{\rm Ra}_{n}$.
Using the Hamiltonian in the $SU(3)$-dynamical limit, the coefficients are
adjusted to the experimental data, listed in Table 4, third column. The
optimal parameters obtained are listed in Table 3, third column. With these
parameters, the spectrum calculated is depicted in Figure 2. The calculated
B(E2)-transition values are listed in Table 6, third (theory) and sixth
(experiment) column.
As can be noted, the agreement with experiment is good and shows also in this
example the effectiveness of the pseudo-SACM in describing the collective
structure of heavy nuclei.
Next, we calculated some spectroscopic factors for 224Ra, listed in Table 5,
third column. The total number of relative oscillation quanta for the system
under study is ${\tilde{n}}_{0}=40$ (${\tilde{n}}_{C}=2$), thus
$\widetilde{B}\approx-0.026$ and the exponential factor in (22) acquires the
form
$$e^{\widetilde{A}-0.026({\tilde{n}}_{0}-{\tilde{n}}_{C}+\Delta{\tilde{n}}_{\pi})}\approx(0.974)^{({\tilde{n}}_{0}-{\tilde{n}}_{C}+\Delta{\tilde{n}}_{\pi})}e^{\widetilde{A}}.\tag{23}$$
The factor $e^{\widetilde{A}}$ is unknown and thus in Table 5, the
spectroscopic factors were divided by $e^{{\widetilde{A}}}$.
### 3.3 ${}^{236}_{92}$U144 $\rightarrow$
${}^{146}_{54}$Xe92+${}^{90}_{38}$Sr52
Figure 3: Spectrum of 236U, described by the clusterization 146Xe+90Sr. Only
states up to angular momentum 6 are depicted. The theoretical spectrum (left
panel) is compared to experiment (right panel).
Consulting the table of nix-tables , the deformation of 236U is
$\epsilon_{2}=0.200$ ($\beta_{2}=0.215$), with $\hbar\omega=6.63$ MeV. The
number of normal and unique protons and neutrons is given in subsection 3.1;
therefore, we summarize the numbers for 146Xe and 90Sr only.
Counting for 146Xe the number of protons in the normal orbitals, we obtain 4
in the pseudo-shell $\widetilde{\eta}_{p}$=3. Taking into account the lower
shells, we obtain for the total number of protons in normal orbitals
$\widetilde{Z}$=24. As the leading irrep, obtained in the reduction of
$U(\frac{1}{2}\left(\widetilde{\eta}_{p}+1\right)\left(\widetilde{\eta_{p}}+2\right))$
$\supset$ $\widetilde{SU}(3)$ we have $(8,2)_{p}$. For the neutron part, there
are 8 neutrons in the $\widetilde{\eta}_{n}$ = 4 shell, corresponding to
$(18,4)_{n}$ as the leading irrep. The total number of normal neutrons is 48.
The lighter cluster, 90Sr, is put on top of the 146Xe cluster. Counting the
difference of the normal protons and neutrons between the 236U nucleus and the
146Xe nucleus, we obtain 22 normal protons and 34 normal neutrons for 90Sr.
Distributing them in the $\widetilde{SU}(3)$ shell model, we have 2 valence
protons in the $\widetilde{\eta}_{\pi}$=3 shell and 14 neutrons in the same
pseudo-shell, which correspond to 6 holes. The leading irreps are respectively
$(6,0)_{p}$ and $(0,12)_{n}$.
Counting the difference in the oscillation quanta in the proton sector between
the 236U nucleus and the sum of the two clusters, we obtain 36 quanta.
However, when we couple the two cluster irreps and then with the relative
motion irrep $(36,0)_{R}$, the $(18,0)^{U}_{p}$ irrep of 236U cannot be
reached, which is a clear indication that the proton part requires the use of
the forbiddenness. Using the formula for the forbiddenness we obtain
$n^{p}_{C}=10$, demonstrating the importance of the concept of forbiddenness.
The 10 oscillation quanta are to be added to the clusters, choosing a path
which is easier to follow: As one possibility (all others lead to the same
result) we distribute 6 to the large and 4 to the light cluster (such that in
each an even number is added). For the proton part, in 146Xe the irrep of one
proton less in the valence shell is (7,1) (instead of the former (8,2)). One
proton is excited to $\widetilde{\eta}_{p}=3+6=9$, i.e., with the irrep (9,0). The product of
$(7,1)\otimes(9,0)$ contains the irrep $(14,2)^{\rm Xe}$. In Sr the irrep with
one nucleon less in the valence shell $\widetilde{\eta}_{p}=3$ is (3,0). One
nucleon is excited to $\widetilde{\eta}_{p}$ = 3+4 = 7, i.e., with the irrep
(7,0). In the product of $(3,0)\otimes(7,0)$ the irrep $(10,0)^{\rm Sr}$
appears. In the final coupling with the radial irrep $(36-10,0)_{pR}$ =
$(26,0)_{pR}$ we obtain $(14,2)^{\rm Xe}_{p}\otimes(10,0)^{\rm Sr}_{p}$, which
contains (4,12) and, coupled with $(26,0)_{pR}$, leads to the final proton
irrep $(18,0)^{U}_{p}$; thus, we have reached our objective for the proton
part.
The same procedure has to be applied for the neutron part. Counting the
difference in the oscillation quanta in the neutron sector between the 236U
nucleus and the sum of the two clusters, we obtain 76 quanta. However, when we
couple the two cluster irreps and then with the relative motion irrep
$(76,0)_{nR}$, we cannot reach the $(36,0)^{U}_{n}$ irrep of 236U, which is a
clear indication that the neutron part requires the use of the forbiddenness.
Using the formula for the forbiddenness we obtain $n^{n}_{C}=48$, again
showing that the concept of forbiddenness is very important. The 48
oscillation quanta have to be added to the clusters. This time, all 48 quanta
are added to the light cluster, because it does not matter how we distribute
the $n_{C}^{n}=48$ between the clusters; this is simply the easiest path to
follow. Before, in Sr there were 6 holes; now there will be 7, because one
neutron is excited to $\widetilde{\eta}_{n}=3+48=51$ and the irrep which
carries the single neutron is (51,0). The valence shell, with 7 holes,
provides the irrep (2,11). Coupling both, $(2,11)\otimes(51,0)$, a large list
is obtained, from which we choose the most compact irrep, namely (38,2). The
reason to choose the most compact irrep is that this alone leads, in the
product with the relative motion, to the smallest irreps possible. Therefore,
we consider the product $\left[(38,2)\otimes(18,4)^{\rm Xe}_{n}\right]$
$\otimes$ $(28,0)_{nR}$, which leads to the possible combination
$(22,14)\otimes(28,0)_{nR}$, where (22,14) appears in the product
$(38,2)\otimes(18,4)$. The (22,14) was chosen because, taking the product with
the relative irrep $(28,0)_{nR}$, it contains the final irrep $(36,0)^{\rm
U}_{n}$.
The cluster irreps for Xe and Sr are obtained by coupling, for each cluster,
the proton and neutron irreps. Using the above results, we get $(48,2)^{\rm
Sr}$ and $(32,6)^{\rm Xe}$, which, once coupled, contain the irrep (30,30);
this is coupled again with the relative motion (54,0), where 54 is the sum of
the relative motion quanta in the proton (26) and the neutron (28) parts. The
final result contains the ground state irrep of 236U, namely (54,0).
Counting only the normal nucleons, the system ${}^{236}_{92}$U144
$\rightarrow$ ${}^{146}_{54}$Xe92 + ${}^{90}_{38}$Sr52 can be viewed as the
system ${}^{128}_{46}$$\widetilde{{\rm Pd}}$82 $\rightarrow$
${}^{72}_{24}$$\widetilde{{\rm Cr}}$48 + ${}^{56}_{22}$$\widetilde{\rm
Ti}$34. The agreement with experiment is satisfactory, as can be seen by
consulting Figure 3 for the spectrum and Table 6 for the transition values.
Compared to 236U $\rightarrow$ 210Pb + 26Ne, the obtained spectrum has a
similar agreement with experiment, with some shifts in the band heads and
changes in the moment of inertia.
For the calculation of the spectroscopic factor, the parameter is
$\widetilde{B}$ = $-\frac{1}{\widetilde{n}_{0}-n_{C}}$ = $-0.0185$, where
$(\widetilde{n}_{0}-n_{C})=54$ is the number of remaining total relative
oscillation quanta after subtracting the forbiddenness. The spectroscopic
factors are listed in the last column of Table 5.
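As a consistency check, the value of $\widetilde{B}$ follows from the sector-by-sector numbers quoted in this subsection (our own arithmetic, for illustration only):

```python
# 236U -> 146Xe + 90Sr: relative quanta and forbiddenness per sector
n0 = 36 + 76   # proton + neutron relative oscillation quanta
nC = 10 + 48   # proton + neutron forbiddenness
B = -1.0 / (n0 - nC)
print(n0 - nC, round(B, 4))  # 54 -0.0185
```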
## 4 Conclusions
We have presented an extension of the Semimicroscopic Algebraic Cluster Model
(SACM), valid for light nuclei, to the pseudo-SACM for heavy nuclei,
restricting ourselves to the $\widetilde{SU}(3)$ dynamical symmetry limit.
Although there exist earlier attempts to extend the SACM to heavy nuclei, we
found it necessary to construct a model which enables us to determine the
complete spectrum, circumvents some conceptual and practical problems of the
former approaches, and delivers a consistent procedure, such as working in the
same mean field as the parent nucleus while conserving the simplicity of the
SACM for light nuclei.
Protons and neutrons have to be treated separately, because they occupy
different shells; only at the end are they coupled together. The protons and
neutrons are distributed within the normal and unique orbitals in such a
manner that the sum of normal nucleons of the clusters is the same as in the
parent nucleus. The construction of the model space is in complete analogy to
the SACM.
As examples, we considered 236U $\rightarrow$ 210Pb+26Ne, 224Ra $\rightarrow$
210Pb+14C and 236U $\rightarrow$ 146Xe+90Sr. We demonstrated that the model is
able to describe the spectrum and electromagnetic transition probabilities.
Spectroscopic factors were also calculated without further fitting; they can
be considered a prediction of the model. With this, we demonstrated the
usefulness of the pseudo-SACM for treating heavy nuclei. A more systematic
study of several nuclei is planned in the future.
The restriction to the $\widetilde{SU}(3)$ dynamical symmetry limit has to be
relaxed in future applications, including other dynamical symmetries such as
$SO(4)$. The extension of $\widetilde{SU}(3)$ itself also has to be studied,
including the active participation of nucleons in the unique orbitals. Also the study of
phase transitions is of interest, requiring the use of the geometrical mapping
geom of the SACM.
## Acknowledgments
We acknowledge financial support from DGAPA-PAPIIT (IN100421).
## References
* (1) J. Cseh, Phys. Lett. B 281 (1992), 173.
* (2) J. Cseh and G. Lévai, Ann. Phys. (N.Y.) 230, 165 (1994).
* (3) W. Greiner, J. Y. Park and W. Scheid, Nuclear Molecules (World Scientific, Singapore, 1995).
* (4) P. O. Hess and W. Greiner, Il Nuovo Cimento 83, 76 (1984).
* (5) H. Yépez-Martínez, M. J. Ermamatov, P. R. Fraser and P. O. Hess, Phys. Rev. C 86 (2012), 034309.
* (6) A. Algora and J. Cseh, J. Phys. G 22, L39 (1996)
* (7) K.T. Hecht and A. Adler, Nucl. Phys. A 137, 129 (1969)
* (8) A. Arima, M. Harvey and K. Shimizu, Phys. Lett. B 30, 517 (1969)
* (9) J. Cseh, R. K. Gupta and W. Scheid, Phys. Lett. B 299, 205 (1993)
* (10) P. O. Hess, A. Algora, M. Hunyadi, J. Cseh, Eur. Phys. Jour. A15, 449 (2002)
* (11) P. Ring and P. Schuck, The Nuclear Many-Body Problem, (Springer, Heidelberg,1980).
* (12) A. Algora, J. Cseh and P. O. Hess, J. Phys. G 24, 2111 (1998)
* (13) A. Algora, J. Cseh and P. O. Hess, J. Phys. G 25, 775 (1999)
* (14) A. Algora, J. Cseh, J. Darai and P. O. Hess, Phys. Lett. B 639, 451 (2006)
* (15) H. Yépez-Martínez, M. J. Ermamatov, P. R. Fraser and P. O. Hess, Phys. Rev. C 86, 034309 (2012)
* (16) H. Yépez-Martínez, P. R. Fraser, P. O. Hess and G. Lévai, Phys. Rev. C 85, 014316 (2012)
* (17) P. R. Fraser, H. Yépez-Martínez, P. O. Hess and G. Lévai, Phys. Rev. C 85, 014317 (2012)
* (18) D. Lohr-Robles, E. López-Moreno and P. O. Hess, Nucl. Phys. A 992 (2019), 121629.
* (19) R. Gilmore, Catastrophe Theory for Scientists and Engineers, (Wiley, New York, 1981)
* (20) H. Yépez-Martínez, G. E. Morales-Hernández, P. O. Hess, G. Lévai and P. R. Fraser, Int. J. Mod. Phys. E 22, 1350022 (2013)
* (21) J. Cseh, Phys. Rev. C 101 (2020), 054306.
* (22) A. Arima, V. Gillet, and J. Ginocchio, Phys. Rev. Lett. 25 (1970), 1043.
* (23) M. Danos and V. Gillet, Phys. Rev. 161 (1967), 1034.
* (24) J. Cseh, Phys. Lett. B 743 (2015), 213.
* (25) D. Bonatsos, I. E. Assimakis, N. Minkov, et al., Phys. Rev. C 95 (2017), 064325.
* (26) J. Cseh, Phys. Rev. C 50 (1994), 2240.
* (27) A. Algora, J. Cseh, J. Darai and P.O. Hess, Phys. Lett. B 639, 451 (2006)
* (28) P. Ring and P. Schuck, The Nuclear Many-Body Problem, (Springer, Heidelberg, 1980).
* (29) O. Castaños, V. Velázquez, P. O. Hess and J. G. Hirsch, Phys. Lett. B 321 (1994), 303.
* (30) W. Greiner and J. A. Maruhn, Nuclear Models, (Springer, Berlin-Heidelberg, 1996).
* (31) J. Draayer, Fermion models, in Algebraic Approaches to Nuclear Structure, ed. R. Casten et al. (Harwood Academic Publisher, Pennsylvania, 1993) p. 423.
* (32) D. Troltenier, J. P. Draayer, P. O. Hess and O. Castaños, Nucl. Phys. A 576 (1994), 351.
* (33) Yu. F. Smirnov and Yu. M. Tchuvil’sky, Phys. Lett. B 134, 25 (1984)
* (34) K. Wildermuth and Y. C. Tang, _A Unified Theory of the Nucleus_ (Friedr. Vieweg & Sohn Verlagsgesselschaft mbH, Braunschweig, 1977).
* (35) H. Yépez-Martínez, P. O. Hess, J. Phys. G 42, 095109 (2015)
* (36) O. Castaños, P. O. Hess, P. Rocheford and J. P. Draayer, Nucl. Phys. A 524, 469 (1991)
* (37) C. Bahri, D. J. Rowe and J. P. Draayer, Comput. Phys. Commun. 159 (2004), 121.
* (38) P. Möller, J.R. Nix, W.D. Myers, W.J. Swiatecki, At. Data Nucl. Data Tables 59, 185 (1995)
* (39) O. Castaños, J.P. Draayer and Y. Leschber, Ann. of Phys. 180, 290 (1987)
* (40) D. Troltenier, J. P. Draayer, O. Castaños and P. O. Hess, Nucl. Phys. A 576 (1994), 351.
* (41) J. M. Eisenberg and W. Greiner, Nuclear Theory I: Nuclear Models, 3rd edn (Amsterdam, North-Holland, 1987)
* (42) J. Cseh, J. Darai, A. Algora, H. Yépez-Martínez, P. O. Hess, Rev. Mex. Fís. 54 (S3), 30 (2008)
* (43) D. J. Rowe, Rep. Progr. Phys. 48, 1419 (1985)
* (44) Castaños O., Draayer J. P., Leschber Y.; Zeitschr. f. Physik A 329 (1988), 33.
* (45) J. Blomqvist and A. Molinari, Nucl. Phys. A 106, 545 (1968)
* (46) J. Escher and J.P. Draayer, J. Math. Phys. 39, 5123 (1998)
* (47) H. Yépez-Martínez, M. J. Ermamatov, P. R. Fraser and P. O. Hess, Phys. Rev C 86 (2012), 034309.
* (48) P. O. Hess, A. Algora, J. Cseh and J. P. Draayer, Phys. Rev. C70, 051303(R) (2004)
* (49) J. P. Draayer, Nucl. Phys. A 237, 157 (1975)
* (50) P. O. Hess, G. Lévai and J. Cseh, Phys. Rev. C 54, 2345 (1996)
* (51) National Nuclear Data Center, http://www.nndc.bnl.gov
a Instituto Galego de Física de Altas Enerxías IGFAE, Universidade de Santiago
de Compostela, E-15782 Galicia, Spain
b CPHT, CNRS, École Polytechnique, Institut Polytechnique de Paris, 91128
Palaiseau, France
# A modified in-medium evolution equation with color coherence
João Barata,a Fabio Domínguez,a Carlos A. Salgado,b Víctor Vila
<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
QCD jets produced in heavy-ion collisions at LHC or RHIC energies partially
evolve inside the produced hot and dense quark gluon plasma, offering unique
opportunities to study QCD splitting processes in different backgrounds.
Induced (modified) splittings are expected to be the most relevant mechanism
driving the modifications of in-medium jets compared to vacuum jets for a wide
set of observables. Although color coherence among different emitters has
been identified as an essential mechanism in studies of the QCD antenna
radiation, it is usually neglected in the multi-gluon medium-induced cascade.
This independent gluon emission approximation can be analytically proved to be
valid in the limit of very large media, but corrections or modifications to it
have not been computed before in the context of the evolution (or rate)
equation describing the gluon cascade. We propose a modified evolution
equation that includes corrections due to the interference of subsequent
emitters. In order to do so, we first compute a modified splitting kernel
following the usual procedure of factorizing it from the subsequent Brownian
motion. The calculation is performed in the two-gluon configuration with no
overlapping formation times, which is expected to provide the first correction
to the completely independent picture.
## 1 Introduction
One of the strongest pieces of evidence for the creation of the Quark Gluon
Plasma (QGP) at RHIC RHIC1 ; RHIC2 and LHC LHC1 ; LHC2 ; LHC4 ; LHC5 is jet
quenching: the modification of jets due to the interaction with the dense QCD
medium created in high-energy collisions of heavy atomic nuclei. The most
direct observable consequence of this effect is the suppression of the yields
of particles and jets at large transverse momentum — the quenching. However,
jet quenching is nowadays a generic name that embraces the modern technology
of jet studies, originally developed for jets in vacuum (i.e. in proton-proton
or simpler colliding systems), including a plethora of global or sub-jet
observables with different degrees of sophistication. These new observables
pose a challenge to present theoretical descriptions of in-medium jet cascades
that are stimulating advances towards a more precise implementation of the
underlying physics.
Jets in heavy-ion collisions develop partly inside the surrounding QCD matter
and partly outside of it, with quantum interference between the two
possibilities. Moreover, the total shower contains both medium-induced
radiation as well as (angular-ordered, infrared and soft divergent) vacuum
contributions. One of the main theoretical difficulties to write a consistent
cascade is to understand how to order the subsequent splittings of the two
kinds. Under some circumstances, for which color coherence between the
different emitters in the cascade plays a central role, the vacuum and medium
contributions to the cascade can be factorized Antenna4 ; Caucal . Before a
more complete description is available, a usual approximation is to evolve
both cascades independently. For soft gluons, which in the medium have small
formation time $t_{f}$, it can be shown that interference between subsequent
emitters can be neglected for large enough media, $t_{f}/L\ll 1$ BDIM1 ;
Liliana2 . This independence ensures a probabilistic picture in which
evolution equations (known as rate equations) can be easily computed for
different jet properties BDIM2 ; Jeon:2003gi . The goal of the present paper
is to go beyond this approximation, taking into account the first correction
to the completely independent subsequent gluon emission and to propose a
modification of the rate equations that takes into account color coherence.
The single gluon production, the building block for the in-medium cascade, has
been extensively studied along the past few decades BDMPS1 ; BDMPS2 ; BDMPS3 ;
BDMPS4 ; GLV ; Wiedemann ; Wang:2001ifa ; Arnold:2002ja ; BDIM1 ; Liliana2 ;
Sievert:2019cwq and although full numerical solutions are well established
numerical1 ; numerical2 ; numerical3 , a fully analytic formulation has not
been achieved (see CarlotaFabioLiliana ; CarlotaFabioMarcos ; IOE1 ; IOE2 ;
IOE3 for recent efforts). Nonetheless, for sufficiently large and dense
media, the spectrum is well described by the Baier-Dokshitzer-Mueller-Peigné-
Schiff-Zakharov (BDMPS-Z) framework BDMPS1 ; BDMPS2 ; BDMPS3 ; BDMPS4 , which
encapsulates the propagation of an energetic parton that exchanges multiple
soft gluons with the medium. In this regime, the main mechanism for energy
loss consists in the emission of induced soft radiation with frequency
$\omega$ above the Bethe-Heitler bound, $\omega\gg\omega_{\rm
BH}\sim\frac{\mu^{4}}{\hat{q}}$, but below the critical frequency,
$\omega\ll\omega_{c}\sim\hat{q}L^{2}$, with a typical formation time
$t_{f}(\omega)=2\omega/\mathbf{k}^{2}$, where $\mathbf{k}$ is the transverse
momentum of the gluon, $L$ the medium length, $\mu$ the Debye screening mass
and $\hat{q}$ the averaged square transverse momentum acquired by a particle
propagating in the medium during a time $t$, i.e.
$\langle\mathbf{k}^{2}\rangle=\hat{q}t$. Using the previous estimates, one has
that $t_{f}\sim\omega/(\hat{q}t_{f})\sim\sqrt{\omega/\hat{q}}$, with the gluon
acquiring a transverse momentum
$\mathbf{k}^{2}\sim\hat{q}t_{f}\sim\sqrt{\hat{q}\omega}$ during the branching
process. In the limit where multiple gluon emissions are observed (i.e.
$\omega\ll\omega_{c}$; see the discussion on the multiple soft emission
region in BDIM1 ; BDIM2 ), we have that gluon radiation is formed
almost instantly since $t_{f}(\omega)\ll t_{f}(\omega_{c})=L$, while the final
transverse momentum of the gluon
$\mathbf{k}^{2}\sim\hat{q}L\gg\sqrt{\hat{q}\omega}\gg\mu^{2}$, as the gluon
still has to propagate until the end of the medium after its formation. Thus,
to leading order in inverse powers of the medium length, soft gluons are
produced decoherently and almost instantaneously, and the probability to emit
a gluon is proportional to $L-t_{f}(\omega)\approx L$.
This discussion can also be formulated in terms of the angular structure of
the emission spectrum. Defining the emission angle
$\theta^{2}=\frac{\mathbf{k}^{2}}{\omega^{2}}$, we have that
$\theta^{2}\sim\frac{1}{\hat{q}t_{f}^{3}(\omega)}\gg\frac{1}{\hat{q}L^{3}}\equiv\theta_{c}^{2}$;
in contrast, the measured angle gets its main contribution from final state
broadening, as can be verified by noticing that accumulated transverse
momentum is proportional to the traversed length $L\gg t_{f}$, while the
energy is conserved. This justifies a picture of time-localized splittings (as
$t_{f}\ll L$) producing decoherent partons, with the overall transverse
structure being determined by individual momentum broadening of the final
states BDIM1 ; Antenna2 .
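For orientation, the parametric hierarchy above can be checked numerically. The values of $\hat{q}$, $L$, $\omega$ and $\mu$ below are representative choices for illustration only, not numbers taken from the text:

```python
# Rough numerical check of the scale hierarchies discussed above, in natural
# units (hbar*c = 0.1973 GeV*fm).  The parameter values are representative
# choices for illustration, not numbers quoted in the text.
HBARC = 0.1973            # GeV*fm

qhat = 1.0 * HBARC        # GeV^3   (from qhat = 1 GeV^2/fm)
L = 4.0 / HBARC           # GeV^-1  (from L = 4 fm)
omega = 5.0               # GeV, soft gluon frequency
mu = 0.5                  # GeV, Debye screening mass

t_f = (omega / qhat) ** 0.5       # formation time ~ sqrt(omega/qhat)
k2_form = (qhat * omega) ** 0.5   # k^2 acquired during formation
k2_final = qhat * L               # k^2 from final-state broadening
omega_c = qhat * L ** 2           # critical frequency ~ qhat L^2

theta2 = 1.0 / (qhat * t_f ** 3)  # emission angle^2 at formation
theta2_c = 1.0 / (qhat * L ** 3)  # coherence angle theta_c^2

# Expected orderings: t_f << L, omega << omega_c,
# qhat*L >> sqrt(qhat*omega) >> mu^2, and theta^2 >> theta_c^2.
```

With these inputs $t_{f}\approx 1$ fm against $L=4$ fm, and all four orderings hold, though only parametrically (some ratios are modest).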
More recently, a lot of effort has been put into studying multiparticle
interference effects absent from the BDMPS-Z picture. One such effect is
color coherence between emitters, which has been considered in studies
exploring the physics of the QCD antenna with an extra in-medium gluon
emission Antenna0 ; Antenna1 ; Antenna2 ; Antenna3 ; Antenna4 . The main
conclusions of such studies were that for short time scales and emission
angles, partons remain color coherent and splittings are not immediately
resolved by the medium, while for long time intervals or large emission
angles, the medium randomizes the color fields of each parton such that the
system evolves decoherently. Generalizing such a picture for a full in-medium
jet Antenna4 , it was argued that the color coherence between emitters within
the in-medium shower might lead to a significant modification of the expected
gluon spectrum for relevant experimental conditions. In addition, it is also
well-known that in vacuum, color coherence between emitters needs to be taken
into account in order to properly describe experimental data Azimov:1985by .
In this paper we include, for the first time, color coherence effects in the
resummation of multiple gluon emissions. In particular, color coherence is
included by allowing partons to take a finite time to be resolved by the
medium after the splitting, followed by decoherent final state broadening
(see Hulcher:2017cpt for a qualitatively similar idea introduced within the
context of the Hybrid model Hybrid ). Since splittings are still
sharply localized when compared to the scale $L$, we can take the single gluon
branching process as the building block for a probabilistic gluon shower,
similarly to the totally decoherent case BDIM2 ; bottom_up_therm . The
introduction of color coherence, however, makes the final evolution equation
for the gluonic shower non-local in time, since it takes a
finite amount of time for partons to color decohere. The details of the color
coherence dynamics are obtained by studying the emission of a soft gluon (the
same setup as BDMPS-Z), followed by the emission of a vacuum soft gluon which
serves as a probe of the color coherence of the two outgoing states. An
effective coherence factor is then extracted and applied as a correction to
the emission kernel used in the totally decoherent case. Although this ansatz
does not strictly follow from a first-principles calculation, it allows us to
gauge the effect of including color coherence at the level of each splitting.
The present paper is divided as follows. Section 2 gives an overview of
previous works on the resummation of multiple decoherent in-medium gluon
emissions BDIM1 ; BDIM2 ; BDIM3 ; Section 3 presents the computation of the
interference due to including a soft vacuum emission, while Section 4 provides
the derivation of the new evolution equation for the gluon shower. The
conclusions are presented in Section 5. Further details are provided in three
appendices.
## 2 Theoretical Set Up
In this section we briefly review the main results from BDIM1 ; BDIM2 , where
the double-differential medium-induced gluon emission spectrum was computed
and simplified in order to provide a probabilistic picture for the production
of medium-induced radiation (see Liliana2 for related work).
### 2.1 Gluon emission spectrum off a parton
When an energetic parton propagates in a dense QCD medium, it exchanges
multiple soft gluons with the medium. The leading order effect of such
interactions is to transversely kick the hard parton. The probability
$\mathcal{P}_{1}(\mathbf{k};t,t_{0})$ that the parton acquires a transverse
momentum $|\mathbf{k}|\ll p_{0}^{+}$, with $p_{0}^{+}$ the parton energy, due
to such interactions during a time $L-t_{0}$ is given by (see BDIM1 ; BDIM2
and references therein)
$\mathcal{P}_{1}(\mathbf{k};L,t_{0})=\int_{\mathbf{r}}e^{-i\mathbf{r}\cdot\mathbf{k}}\mathcal{P}(\mathbf{r};L,t_{0})=\int_{\mathbf{r}}e^{-i\mathbf{r}\cdot\mathbf{k}}\,e^{-\frac{C_{A}}{2}\int_{t_{0}}^{L}dt\,n(t)\sigma(\mathbf{r})}\,,$
(1)
where $\mathcal{P}(\mathbf{r})$ is the dipole operator in the adjoint
representation, $n=n(t)$ the density of scattering centers, and $\sigma$ the
in-medium elastic cross-section (see Appendix B). Taking the derivative with
respect to $L$ one obtains the following evolution equation for this
probability distribution,
$\partial_{L}\mathcal{P}_{1}(\mathbf{k};L,t_{0})=\int_{\mathbf{l}}\mathcal{C}(\mathbf{l},L)\mathcal{P}_{1}(\mathbf{k}-\mathbf{l};L,t_{0})\,,$
(2)
where for the momentum space integrals we use the shorthand
$\int_{\mathbf{q}}=\int(2\pi)^{-2}d^{2}\mathbf{q}$, and for the position space
integrals we use $\int_{\mathbf{r}}=\int d^{2}\mathbf{r}$. Here the broadening
kernel is given by
$\mathcal{C}(\mathbf{l},t)=-\frac{C_{A}}{2}n(t)\sigma(\mathbf{l})\,.$ (3)
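As an illustration of how eq. (1) encodes momentum broadening, the sketch below uses the standard harmonic (multiple-soft-scattering) approximation, $\frac{C_{A}}{2}n\sigma(\mathbf{r})\approx\frac{1}{4}\hat{q}\mathbf{r}^{2}$, under which $\mathcal{P}_{1}(\mathbf{k})$ becomes Gaussian with $\langle\mathbf{k}^{2}\rangle=\hat{q}(L-t_{0})$. The approximation and the numerical values are our illustrative assumptions, not a step taken in the text:

```python
import math

# Sketch: in the harmonic (multiple-soft-scattering) approximation, the dipole
# exponent in eq. (1) becomes qhat r^2 (L - t0)/4, and the Fourier transform
# gives a Gaussian broadening distribution
#     P1(k) ~ exp(-k^2 / (qhat * dT)),   dT = L - t0,
# with second moment <k^2> = qhat * dT.  We verify the moment by numerical
# integration; units and parameter values are arbitrary illustrations.
qhat, dT = 1.0, 4.0
s = qhat * dT  # expected <k^2>

def p1(k2):
    return math.exp(-k2 / s)

# <k^2> = int d^2k k^2 P1 / int d^2k P1, reduced to 1D integrals over k^2
n, k2max = 100000, 50.0 * s
dk2 = k2max / n
num = sum((i + 0.5) * dk2 * p1((i + 0.5) * dk2) for i in range(n)) * dk2
den = sum(p1((i + 0.5) * dk2) for i in range(n)) * dk2
mean_k2 = num / den  # should reproduce qhat * dT
```

The midpoint-rule result reproduces $\langle\mathbf{k}^{2}\rangle=\hat{q}\,\Delta T$ to high accuracy, consistent with the random-walk estimate quoted in the Introduction.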
In addition to momentum broadening, the multiple interactions with the medium
also induce the production of soft radiation. In a similar fashion, one can
construct the probability ${\cal P}_{2}(\mathbf{k},\mathbf{q};L,t_{0})$ of
observing two outgoing partons, with transverse momentum $\mathbf{k}$ and
$\mathbf{q}$ respectively, from an initial state with momentum
$\overrightarrow{p_{0}}=(p_{0}^{+},\mathbf{p}_{0})$, for a process happening
between times $t_{0}$ and $L$. Taking into account that the soft modes have
typical formation times inside the medium much smaller than the medium length
$t_{f}\ll L$, one can effectively ignore the formation time of radiation when
compared to any other time scale (see BDIM1 ; BDIM2 for a detailed
discussion of this approximation, in a notation close to the one implemented
in this manuscript; more details can be found in the references therein,
following the qualitative discussion in the previous section). In this
approximation, the two outgoing states evolve decoherently at late times and
one can write BDIM1 ; BDIM2
$\begin{split}{\cal
P}_{2}(\mathbf{k},\mathbf{q},z;L,t_{0})&=2g^{2}z(1-z)\int_{t_{0}}^{L}dt\,\int_{\mathbf{m},\mathbf{Q},\mathbf{l}}{\cal
K}(\mathbf{Q},\mathbf{l},z,p_{0}^{+};t)\\\ &\times{\cal
P}_{1}(\mathbf{m}-\mathbf{p}_{0};t,t_{0}){\cal
P}_{1}(\mathbf{k}-\mathbf{p};L,t){\cal
P}_{1}(\mathbf{q}-(\mathbf{m}+\mathbf{l}-\mathbf{p});L,t)\,.\end{split}$ (4)
Here $\mathbf{l}$ is the transverse momentum acquired during the branching
process, $\mathbf{m}$ the momentum of the initial parton before splitting,
$\mathbf{p}$ the transverse momentum of the outgoing parton with energy
$zp_{0}^{+}$ just after the splitting, and
$\mathbf{Q}=\mathbf{p}-z(\mathbf{m}+\mathbf{l})$ the relative transverse
momentum of the system after branching – see Figure 1.
Figure 1: Diagrammatic representation of eq. (4). The labels used follow the
notation in the main text and the gray blob denotes the underlying QCD medium.
Noticing that $\mathbf{Q}$ and $\mathbf{l}$ are the only momentum scales
directly entering the branching process, they can be neglected in a regime
that makes it possible to build a probabilistic picture for the emission
process. That such a region exists can be argued as follows.
The scale $\mathbf{l}$ is generated by transverse momentum broadening during
the branching process and thus $\mathbf{l}^{2}\sim\hat{q}t_{f}\ll\hat{q}L$,
which can be neglected with respect to
$\mathbf{k}^{2}\sim\mathbf{q}^{2}\sim\hat{q}L$. Then, disregarding such a
scale in the single particle broadening contributions and integrating ${\cal
K}$ over $\mathbf{l}$ we get
$\begin{split}{\cal
P}_{2}(\mathbf{k},\mathbf{q},z;L,t_{0})&=2g^{2}z(1-z)\int_{t_{0}}^{L}dt\,\int_{\mathbf{m},\mathbf{Q}}{\cal
K}(\mathbf{Q},z,p_{0}^{+};t)\\\ &\times{\cal
P}_{1}(\mathbf{m}-\mathbf{p}_{0};t,t_{0}){\cal
P}_{1}(\mathbf{k}-\mathbf{p};L,t){\cal
P}_{1}(\mathbf{q}-(\mathbf{m}-\mathbf{p});L,t)\,,\end{split}$ (5)
with $\mathbf{Q}=\mathbf{p}-z\mathbf{m}$.
The relative momentum $\mathbf{Q}$ is a purely kinematical scale, i.e. it
captures how non-collinear the outgoing partons' momenta are after the
branching. Therefore, in this sense, there is a priori no constraint on the
values it can take. However, the magnitude of $\mathbf{Q}$ is determined by
the BDMPS-Z splitting kernel ${\cal K}$, which, as we will show below, is
peaked around $\mathbf{Q}^{2}\sim\sqrt{\hat{q}zp_{0}^{+}}\ll\hat{q}L$ (for
small $z$), with smaller momentum scales blocked by the LPM coherence effect
and larger values being exponentially harder to obtain via multiple soft
scattering BDIM1 ; BDIM2 . As a consequence, one can further simplify the
above relation to
$\begin{split}{\cal
P}_{2}(\mathbf{k},\mathbf{q},z;L,t_{0})&=2g^{2}z(1-z)\int_{t_{0}}^{L}dt\,\int_{\mathbf{m},\mathbf{Q}}{\cal
K}(\mathbf{Q},z,p_{0}^{+};t)\\\ &\times{\cal
P}_{1}(\mathbf{m}-\mathbf{p}_{0};t,t_{0}){\cal
P}_{1}(\mathbf{k}-z\mathbf{m};L,t){\cal
P}_{1}(\mathbf{q}-(1-z)\mathbf{m};L,t)\\\ &\equiv
2g^{2}z(1-z)\int_{t_{0}}^{L}dt\,\int_{\mathbf{m}}{\cal K}(z,p_{0}^{+};t)\\\
&\times{\cal P}_{1}(\mathbf{m}-\mathbf{p}_{0};t,t_{0}){\cal
P}_{1}(\mathbf{k}-z\mathbf{m};L,t){\cal
P}_{1}(\mathbf{q}-(1-z)\mathbf{m};L,t)\,,\end{split}$ (6)
where we used the fact that neglecting the momentum exchanges during branching
leads to a collinear splitting. The splitting kernel is given by BDIM2
$\mathcal{K}(z,p_{0}^{+};t)=\frac{P_{gg}(z)}{2\pi}\sqrt{\frac{\hat{q}(t)(1-z+z^{2})}{p_{0}^{+}z(1-z)}}\,,$
(7)
where purely gluonic degrees of freedom are assumed. Here $P_{gg}$ is the
Altarelli-Parisi vacuum kernel multiplied by a factor of $C_{A}$ and the time
dependence can be dropped as long as one assumes that the medium is static and
homogeneous (plasma brick model).
### 2.2 The Generating Functional and the shower building blocks
Using the results from the previous section, we now derive an evolution
equation for the single gluon inclusive distribution resumming multiple
medium-induced gluon emissions. We make use of the generating functional
method Book1 ; Book2 ; Pedrag , although this is not crucial.
We first consider the functional $\mathcal{Z}_{p_{0}}(u;t,t_{0})$ within the
time interval $t_{0}\leq t\leq L$,
$\mathcal{Z}_{p_{0}}(u;t,t_{0})=\sum_{n=1}^{\infty}\frac{1}{n!}\int_{\Omega_{n}}P_{n}(\overrightarrow{k_{1}},\ldots,\overrightarrow{k_{n}};t,t_{0})u(\overrightarrow{k_{1}})\ldots
u(\overrightarrow{k_{n}})\,,$ (8)
where the integration is performed over all the individual phase-spaces
$\Omega_{n}$ on each term of the sum, and $u(\overrightarrow{k})$ is a test
function that will eventually drop out via functional differentiation (we
define the functional derivative as $\frac{\delta
u\left(\overrightarrow{p}\right)}{\delta
u(\overrightarrow{q})}=\delta(p^{+}-q^{+})\delta(\mathbf{p}-\mathbf{q})$;
the inclusive distributions we are interested in can be obtained from the
functional $\mathcal{Z}$ by taking the functional derivative with respect
to $u$ at $u=1$ Pedrag ). All possible physical processes are stored in
$\mathcal{Z}$ via the elementary probabilities
$P_{n}(\overrightarrow{k_{1}},\ldots,\overrightarrow{k_{n}})$, which
correspond to the probability of measuring $n$ final-state gluons with the
momentum assigned to each one at time $t$. For the case at hand, the evolution
is defined by the single particle broadening probability $P_{1}$ and the
branching probability $P_{2}$. These are related to the probabilities
$\mathcal{P}_{n}$ introduced in the previous section by
$P_{n}(\overrightarrow{k_{1}},\ldots,\overrightarrow{k_{n}};t,t_{0})=2p_{0}^{+}(2\pi)\delta\left(\sum_{i=1}^{n}k^{+}_{i}-p_{0}^{+}\right)\mathcal{P}_{n}(\mathbf{k}_{1},\ldots,\mathbf{k_{n}};t,t_{0})\,,$
(9)
where the energy fractions $z_{i}$ have been omitted for the sake of clarity.
This relation makes it explicit that the dynamics are constrained to the
transverse plane, since $p_{0}^{+}=\sum_{i=1}^{n}k_{i}^{+}$ and the remaining
freedom in the $k_{i}^{+}$ is fixed by the splitting energy fractions $z_{i}$.
In order to obtain an evolution equation for the functional, an evolution law
for $\mathcal{P}_{2}$ is needed. However, as this probability already includes
broadening contributions associated with the incoming and outgoing legs (see
Figure 1), we truncate such terms by introducing the associated branching
probability
$\begin{split}\widetilde{\mathcal{P}}_{2}(\mathbf{k},\mathbf{q},z;L,t_{0})&=2g^{2}z(1-z)\int_{t_{0}}^{L}\
dt\
\mathcal{K}(z,p_{0}^{+};t)(2\pi)^{4}\delta^{(2)}(\mathbf{k}-z\mathbf{p}_{0})\delta^{(2)}(\mathbf{q}-(1-z)\mathbf{p}_{0})\end{split}\,,$
(10)
where we have already performed the integration over the initial delta
function. Notice that by doing this, double counting of contributions already
included in $P_{1}$ is avoided. The time-evolution equation is then given by
$\begin{split}\partial_{L}\widetilde{\mathcal{P}}_{2}(\mathbf{k},\mathbf{q},z;L,t_{0})=2g^{2}z(1-z)\mathcal{K}(z,p_{0}^{+};L)(2\pi)^{4}\delta^{(2)}(\mathbf{k}-z\mathbf{p}_{0})\delta^{(2)}(\mathbf{q}-(1-z)\mathbf{p}_{0})\,.\end{split}$
(11)
Now, combining this result with the time-evolution equation for
$\mathcal{P}_{1}$ and taking into account that for an infinitesimal time step
$dt$
$\begin{split}\mathcal{Z}_{p_{0}}(t_{0}+dt,t_{0})&=\int
d\Omega_{k}P_{1}(\overrightarrow{k},t_{0}+dt,t_{0})u(\overrightarrow{k})\\\
&+\frac{1}{2}\int
d\Omega_{k_{1}}d\Omega_{k_{2}}P_{2}(\overrightarrow{k}_{1},\overrightarrow{k}_{2};t_{0}+dt,t_{0})u(\overrightarrow{k_{1}})u(\overrightarrow{k_{2}})\,,\end{split}$
(12)
one can eventually write the evolution law for the functional
$\begin{split}\partial_{t}\mathcal{Z}_{p_{0}}(t,t_{0}|u)\bigg{|}_{t=t_{0}}&=\int_{\mathbf{l}}C(\mathbf{l},t)u(p_{0}^{+},\mathbf{p}_{0}+\mathbf{l})\\\
&+\alpha_{s}\int_{z}\mathcal{K}(z,p_{0}^{+};t)\left[u(z\overrightarrow{p_{0}})u((1-z)\overrightarrow{p_{0}})-u(\overrightarrow{p_{0}})\right]\,,\end{split}$
(13)
where the last term in the $\mathcal{O}(\alpha_{s})$ bracket comes from
probability conservation (so that when $u=1$ the right-hand side vanishes).
This relation can be extended to the full shower
$\begin{split}\partial_{t}\mathcal{Z}_{p_{0}}(t,t_{0}|u)&=\int_{q^{+}\mathbf{q}\mathbf{l}}C(\mathbf{l},t)u(q^{+},\mathbf{q}+\mathbf{l})\frac{\delta\mathcal{Z}_{p_{0}}(t,t_{0}|u)}{\delta
u(\vec{q})}\\\
&+\alpha_{s}\int_{z}\int_{q^{+}\mathbf{q}}\mathcal{K}(z,q^{+};t)\left[u(z\overrightarrow{q})u((1-z)\overrightarrow{q})-u(\overrightarrow{q})\right]\frac{\delta\mathcal{Z}_{p_{0}}(t,t_{0}|u)}{\delta
u(\vec{q})}\,.\end{split}$ (14)
Finally, the inclusive one-gluon distribution $D(x,\mathbf{k},t)$, which
represents the probability of observing a gluon at time $t$ with momentum
fraction $x$ and transverse momentum $\mathbf{k}$, can be obtained from the
functional BDIM2
$D(x,\mathbf{k},t)\equiv
k^{+}\left(\frac{\delta\mathcal{Z}_{p_{0}}(t,t_{0}|u)}{\delta
u(\overrightarrow{k})}\right)_{u=1}\,.$ (15)
Then, using eq. (14) and noticing that ${\cal K}(z,p_{0}^{+};t)={\cal
K}(1-z,p_{0}^{+};t)$, we obtain
$\begin{split}\partial_{t}D(x,\mathbf{k},t)&=\int_{\mathbf{l}}C(\mathbf{l},t)D(x,\mathbf{k}-\mathbf{l},t)\\\
&+\alpha_{s}\int_{z}\left[\frac{2}{z^{2}}\mathcal{K}\left(z,\frac{x}{z}p_{0}^{+};t\right)D\left(\frac{x}{z},\frac{\mathbf{k}}{z};t\right)\Theta(z-x)-\mathcal{K}\left(z,xp_{0}^{+};t\right)D(x,\mathbf{k},t)\right]\,.\end{split}$
(16)
This is the well-known rate equation taking into account multiple soft medium-
induced gluon production derived in bottom_up_therm ; BDIM2 . It has a very
simple interpretation: the $\mathcal{O}(\alpha_{s}^{0})$ term corresponds to
the broadening in momentum space occurring between in-medium splittings; the
first term at $\alpha_{s}$ order corresponds to the production of a gluon with
energy fraction $x$ and momentum $\mathbf{k}$ from a parton of the same
kinematics enhanced by a $\frac{1}{z}$ factor; and the last term corresponds
to a gluon with momentum fraction $x$ and transverse momentum $\mathbf{k}$
being displaced to another energy and momentum mode via a splitting inside the
medium, such that the creation and annihilation rates are balanced – and thus
probability is conserved.
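The balance expressed by the last two terms can be illustrated with a toy cascade in which each splitting moves energy from $x$ to $zx$ and $(1-z)x$ exactly. This is a schematic bookkeeping exercise with an arbitrary constant splitting probability and a flat $z$ distribution, not a solver for eq. (16):

```python
import random

# Toy cascade illustrating the bookkeeping behind eq. (16): each splitting
# replaces a gluon of fraction x by gluons of fractions z*x and (1-z)*x, so
# the total energy fraction carried by the shower is conserved by
# construction.  The constant splitting probability and flat z distribution
# are arbitrary illustrative choices, unrelated to the physical rate (7).
random.seed(7)

def evolve(fractions, n_steps, p_split=0.2):
    for _ in range(n_steps):
        nxt = []
        for x in fractions:
            if random.random() < p_split:
                z = random.uniform(0.0, 1.0)
                nxt.append(z * x)
                nxt.append((1.0 - z) * x)
            else:
                nxt.append(x)
        fractions = nxt
    return fractions

shower = evolve([1.0], 40)
total = sum(shower)  # stays at 1 up to floating-point rounding
```

In the full equation (16) the same balance is enforced analytically: the loss term removes exactly the probability that the gain term redistributes to softer modes.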
## 3 Computing the interference term in the soft regime
In order to gauge the role of color coherence, we study the vacuum emission of
a soft gluon after the single in-medium gluon emission considered in the
previous sections. The process under consideration is depicted in Figure 2:
the diagram in the right-hand panel corresponds to the direct term, while the
diagram in the left-hand panel corresponds to the interference contribution. Although
both pieces have to be taken into account, the interference term is the only
one that carries new physical information as it resolves the in-medium quark-
gluon antenna. The equivalent BDMPS-Z contribution is obtained by removing the
extra vacuum gluon from both diagrams.
Figure 2: Left: The interference diagram
$\mathcal{M}_{q}\mathcal{M}_{g}^{\dagger}$ computed in this paper with the
soft gluon highlighted in blue. The remaining gluon leg is referred to in the
text as the BDMPS-Z gluon. Right: The direct diagram
$\mathcal{M}_{g}\mathcal{M}_{g}^{\dagger}$ that is not directly computed since
it is proportional to the BDMPS-Z result. On the left hand side we have
explicitly indicated all the time intervals present in the problem, with the
emission times in the amplitude ($x^{+}$) and in the complex conjugate
($x_{c}^{+}$) allowed to differ. The medium is assumed to be static and homogeneous of
length $L$, and the initiating quark is produced in a hard process (black
blob) at time $t=t_{0}=0$ in both amplitude and complex-conjugate amplitude.
Note that the initial hard process can be factorized from the rest of the
diagram and is henceforth disregarded.
With that in mind, we start by computing the eikonal emission amplitude of a
gluon from either a quark or a gluon in vacuum,
$\mathcal{M}_{q}^{pure\,vac}=2gt^{a}\frac{\mathbf{k}^{\prime}\cdot\varepsilon_{\perp}^{\prime}}{\mathbf{k}^{\prime\
2}}\,,$ (17)
$\mathcal{M}_{g}^{pure\,vac}=2igf^{abc}\frac{\mathbf{k}^{\prime}\cdot\varepsilon_{\perp}^{\prime}}{\mathbf{k}^{\prime\
2}}\,.$ (18)
Using these results and the rules introduced in Appendix A, one can write the
amplitudes for the case in which the two gluons are emitted either from the
eikonal quark line ($\mathcal{M}_{q}$) or when the vacuum gluon comes from the
gluon line ($\mathcal{M}_{g}$),
$\mathcal{M}_{q}=-\frac{2g^{2}}{\omega}\int_{\mathbf{x}_{g}x^{+}}e^{-i\mathbf{k}\cdot\mathbf{x}_{g}}\partial_{\mathbf{x}}|_{0}G^{ab}(L,x^{+}|\omega)\cdot\varepsilon_{\perp}t^{c}_{mi}U_{ij}(L,x^{+})t^{b}_{jk}U_{kl}(x^{+},0)\frac{\mathbf{k}^{\prime}\cdot\varepsilon_{\perp}^{\prime}}{\mathbf{k}^{\prime^{2}}}\,,$
(19)
$\mathcal{M}_{g}=-\frac{2ig^{2}}{\omega}\int_{\mathbf{x}_{g}x^{+}}e^{-i\mathbf{k}\cdot\mathbf{x}_{g}}\partial_{\mathbf{x}}|_{0}f^{cda}G^{ab}(L,x^{+}|\omega)\cdot\varepsilon_{\perp}U_{ij}(L,x^{+})t^{b}_{jk}U_{kl}(x^{+},0)\frac{\mathbf{k}^{\prime}\cdot\varepsilon_{\perp}^{\prime}}{\mathbf{k}^{\prime^{2}}}\,,$
(20)
where $\mathbf{k}$ ($\omega$) and $\mathbf{k}^{\prime}$ ($\omega^{\prime}$)
correspond to the transverse momentum (energy) of the in-medium BDMPS-Z gluon
and the vacuum gluon, respectively. Their transverse polarization vectors are
given by $\varepsilon_{\perp}$ and $\varepsilon_{\perp}^{\prime}$. We denote
$\int_{x^{+}}\equiv\int_{0}^{L}dx^{+}$ and we have suppressed the dependence
on transverse positions for clarity. We also work in the approximation where
the frequency of the vacuum gluon is the same in the amplitude and the
complex-conjugate amplitude.
The interference amplitude $\mathcal{M}_{q}\mathcal{M}^{\dagger}_{g}$ is then
given by
$\mathcal{M}_{q}\mathcal{M}^{\dagger}_{g}=-\frac{4g^{4}i}{\omega^{2}(\mathbf{k}^{\prime})^{2}}\int_{\mathbf{x}_{g}\mathbf{x}^{\prime}_{g}x^{+}x_{c}^{+}}e^{i\mathbf{k}\cdot(\mathbf{x}_{g}^{\prime}-\mathbf{x}_{g})}\partial_{\mathbf{x}}\cdot\partial_{\mathbf{x}^{\prime}}\langle
t^{c}_{mi}U_{ij}t^{a}_{jk}G^{ab}U_{kl}f^{dcb}U^{\dagger}_{l\overline{k}}G^{\dagger
d\overline{a}}t^{\overline{a}}_{\overline{k}\overline{j}}U^{\dagger}_{\overline{j}m}\rangle_{\mathbf{x}=\mathbf{x}^{\prime}=0}\,,$
(21)
where $\langle\ \rangle$ indicates the medium average over all possible
configurations of the background field – see Appendix B. The color structure
with the space-time arguments given explicitly and $U\equiv U(\mathbf{0})$
reads
$\begin{split}&t^{c}_{ij}U_{jk}(L,x^{+})t^{a}_{kl}G^{ab}(L,\mathbf{x}_{g};x^{+},\mathbf{x}|\omega)U_{lm}(x^{+},0)\\\
\times&f^{dcb}U^{\dagger}_{m\overline{l}}(x^{+}_{c},0)G^{\dagger
d\overline{a}}(L,\mathbf{x}^{\prime}_{g};x^{+}_{c},\mathbf{x}^{\prime}|\omega)t^{\overline{a}}_{\overline{l}\overline{k}}U_{\overline{k}i}^{\dagger}(L,x_{c}^{+})\,,\end{split}$
(22)
where we have used that the vacuum emission is soft, so that the in-medium
gluon has the same energy in amplitude and conjugate amplitude. Combining the
two middle Wilson lines and using the results shown in Appendix A, we obtain
$\begin{split}&\operatorname{Tr}({t^{c}U_{L,x^{+}}t^{a}U^{\dagger}_{x_{c}^{+},x^{+}}t^{\overline{a}}U^{\dagger}_{L,x_{c}^{+}}})G^{ab}(L,\mathbf{x}_{g};x^{+},\mathbf{x})f^{dcb}G^{\dagger
d\overline{a}}(L,\mathbf{x}^{\prime}_{g};x_{c}^{+},\mathbf{x}^{\prime})\\\
=&\operatorname{Tr}(U^{\dagger}_{L,x_{c}^{+}}t^{c}U_{L,x_{c}^{+}}t^{e}t^{\overline{a}})W^{\dagger
ea}_{x_{c}^{+},x^{+}}G^{ab}(L,\mathbf{x}_{g};x^{+},\mathbf{x})f^{dcb}G^{\dagger
d\overline{a}}(L,\mathbf{x}^{\prime}_{g};x_{c}^{+},\mathbf{x}^{\prime})\,.\end{split}$
(23)
Now using the well-known identity
$t^{a}_{ij}t^{b}_{jk}=\frac{1}{2N_{c}}\delta^{ab}\delta_{ik}+\frac{1}{2}d^{abc}t^{c}_{ik}+\frac{i}{2}f^{abc}t^{c}_{ik}\,,$
(24)
and noticing that the only non-vanishing term has to be proportional to the
$f$ symbol (the singlet term vanishes since it is proportional to
$\operatorname{Tr}(t)=0$, while the terms containing the $d$ symbol drop out
because the contraction of the symmetric $d$ symbol with the antisymmetric $f$
symbol vanishes), the color structure simplifies to
$\begin{split}&\frac{i}{4}f^{e\overline{a}h}f^{dcb}W^{hc}_{L,x^{+}_{c}}W^{\dagger
ea}_{x_{c}^{+},x^{+}}G^{ab}(\mathbf{x}_{g},\mathbf{x})_{L,x^{+}}G^{\dagger
d\overline{a}}(\mathbf{x}^{\prime}_{g},\mathbf{x}^{\prime})_{L,x_{c}^{+}}\\\
=&\frac{i}{4}\int_{\mathbf{z}}f^{ijh}f^{dcb}W^{hc}_{L,x_{c}^{+}}W^{\dagger
ia}_{x_{c}^{+},x^{+}}G^{al}(\mathbf{z},\mathbf{x})_{x_{c}^{+},x^{+}}G^{lb}(\mathbf{x}_{g},\mathbf{z})_{L,x_{c}^{+}}G^{\dagger{dj}}(\mathbf{x}^{\prime}_{g},\mathbf{x}^{\prime})_{L,x^{+}_{c}}\,,\end{split}$
(25)
where we have used the composition rule for Wilson lines – see Appendix A.
Finally, using the locality in light-cone time of the medium averages (see
Appendix B or BDIM1 ; Liliana1 ; Liliana2 ; Carlos_lectures ), we conclude
that the emission spectrum can be written as
$\begin{split}\omega\omega^{\prime}\frac{dI}{d^{2}\mathbf{k}d^{2}\mathbf{k}^{\prime}d\omega
d\omega^{\prime}}&=-\frac{2C_{F}C_{A}\alpha_{s}^{2}}{(2\pi)^{3}\pi\omega^{2}N_{c}(N_{c}^{2}-1)^{2}\mathbf{k}^{\prime
2}}\text{Re}\Bigg{[}\int_{\mathbf{x}_{g}\mathbf{x}_{g}^{\prime}x^{+}x^{+}_{c}\mathbf{z}}e^{i\mathbf{k}(\mathbf{x}^{\prime}_{g}-\mathbf{x}_{g})}\partial_{\mathbf{x}}|_{0}\cdot\partial_{\mathbf{x}^{\prime}}|_{0}\\\
&\times\operatorname{Tr}\langle
G(\mathbf{z},\mathbf{x}|\omega)W^{\dagger}(\mathbf{0})\rangle_{x^{+}_{c},x^{+}}\\\
&\times f^{ijl}f^{abc}\langle
W^{ia}(\mathbf{0})G^{jb}(\mathbf{x}_{g},\mathbf{z}|\omega)G^{\dagger
cl}(\mathbf{x}^{\prime}_{g},\mathbf{x}^{\prime}|\omega)\rangle_{L,x_{c}^{+}}\Bigg{]}\,,\end{split}$
(26)
where we averaged over initial spin and color states, summed over the quantum
numbers of the final states and used the shorthand
$\int_{x^{+}x_{c}^{+}}\equiv\int_{0}^{L}dx^{+}\int_{x^{+}}^{L}dx^{+}_{c}$.
Note that the analogous BDMPS-Z spectrum is given by Qw2
$\begin{split}\left(\omega\frac{dI}{d^{2}\mathbf{k}d\omega}\right)^{\mathbf{In-
In}}&=\frac{2C_{F}\alpha_{s}}{(2\pi)^{2}\omega^{2}(N_{c}^{2}-1)^{2}}\text{Re}\Bigg{[}\int_{\mathbf{x}_{g}\mathbf{x}_{g}^{\prime}x^{+}x^{+}_{c}\mathbf{z}}e^{i\mathbf{k}(\mathbf{x}^{\prime}_{g}-\mathbf{x}_{g})}\partial_{\mathbf{x}}|_{0}\cdot\partial_{\mathbf{x}^{\prime}}|_{0}\\\
&\times\operatorname{Tr}\langle
G(\mathbf{z},\mathbf{x}|\omega)W^{\dagger}(\mathbf{0})\rangle_{x^{+}_{c},x^{+}}\operatorname{Tr}\langle
G(\mathbf{x}_{g},\mathbf{z}|\omega)G^{\dagger}(\mathbf{x}^{\prime}_{g},\mathbf{x}^{\prime}|\omega)\rangle_{L,x_{c}^{+}}\Bigg{]}\,.\end{split}$
(27)
Additionally, we identify the vacuum gluon emission spectrum contribution
$\left(\omega\frac{dI}{d\omega
d^{2}\mathbf{k}}\right)^{\mathbf{g}}=\frac{\alpha_{s}C_{A}}{(2\pi^{2})\mathbf{k}^{2}}\,,$
(28)
which can be easily obtained from eq. (18). Combining it with eq. (26) yields
$\begin{split}\omega\omega^{\prime}\frac{dI}{d^{2}\mathbf{k}d^{2}\mathbf{k}^{\prime}d\omega
d\omega^{\prime}}&=-\left(\omega^{\prime}\frac{dI}{d\omega^{\prime}d^{2}\mathbf{k}^{\prime}}\right)^{\mathbf{g}}\frac{2C_{F}\alpha_{s}}{(2\pi)^{2}\omega^{2}N_{c}(N_{c}^{2}-1)^{2}}\text{Re}\Bigg{[}\int_{\mathbf{x}_{g}\mathbf{x}_{g}^{\prime}x^{+}x^{+}_{c}\mathbf{z}}e^{i\mathbf{k}(\mathbf{x}^{\prime}_{g}-\mathbf{x}_{g})}\\\
&\times\partial_{\mathbf{x}}|_{0}\cdot\partial_{\mathbf{x}^{\prime}}|_{0}\operatorname{Tr}\langle
G(\mathbf{z},\mathbf{x}|\omega)W^{\dagger}(\mathbf{0})\rangle_{x^{+}_{c},x^{+}}\\\
&\times f^{ijl}f^{abc}\langle
W^{ia}(\mathbf{0})G^{jb}(\mathbf{x}_{g},\mathbf{z}|\omega)G^{\dagger
cl}(\mathbf{x}^{\prime}_{g},\mathbf{x}^{\prime}|\omega)\rangle_{L,x_{c}^{+}}\Bigg{]}\,.\end{split}$
(29)
For completeness, the remaining necessary steps for the full calculation of
the spectrum are provided in Appendix B.
Comparing eqs. (27) and (29), we notice that, apart from the
$\left(\omega^{\prime}\frac{dI}{d\omega^{\prime}d^{2}\mathbf{k}^{\prime}}\right)^{\mathbf{g}}$
factor, both results only differ in the color structure of the late-time
medium average. Therefore, this piece must encapsulate the information related
to the color coherence of the system. This observation leads, in effect, to an
ansatz for the emission kernel entering the rate equation discussed in the
previous section.
Within the harmonic oscillator approximation, the three-point function in eq.
(29) reads
$\begin{split}&\int_{\mathbf{z}}^{\mathbf{x}_{g}}\mathcal{D}\mathbf{r}_{1}\int_{\mathbf{x}^{\prime}}^{\mathbf{x}_{g}^{\prime}}\mathcal{D}\mathbf{r}_{2}\exp\left(\int_{t}\frac{i\omega}{2}(\dot{\mathbf{r}}_{1}^{2}-\dot{\mathbf{r}}_{2}^{2})-\frac{\hat{q}}{8}\int_{t}\left(\mathbf{r}_{2}^{2}+\mathbf{r}_{1}^{2}+(\mathbf{r}_{2}-\mathbf{r}_{1})^{2}\right)\right)\\\
=&\int_{\mathbf{z}}^{\mathbf{x}_{g}}\mathcal{D}\mathbf{r}_{1}\int_{\mathbf{x}^{\prime}}^{\mathbf{x}_{g}^{\prime}}\mathcal{D}\mathbf{r}_{2}\exp\left(\int_{t}\frac{i\omega}{2}(\dot{\mathbf{r}}_{1}^{2}-\dot{\mathbf{r}}_{2}^{2})-\frac{\hat{q}}{4}\int_{t}\left(\mathbf{r}_{1}\cdot\mathbf{r}_{2}+(\mathbf{r}_{2}-\mathbf{r}_{1})^{2}\right)\right)\,,\end{split}$
(30)
where $\mathbf{r}_{1}$ and $\mathbf{r}_{2}$ represent the transverse
trajectories of the gluon in amplitude and its complex-conjugate,
respectively. The term proportional to $(\mathbf{r}_{1}-\mathbf{r}_{2})^{2}$
is the same one would obtain in the BDMPS-Z context.
As we wish to capture coherence effects between emitters at the level of the
splitting without including final-state broadening effects, we follow the
expansion of the in-medium propagator around the classical path introduced in
Altinoluk:2014oxa ; Altinoluk:2015gia , and write the full gluon propagator
$G$ as a Wilson line $W$ evaluated along the classical path, i.e.
$G(y^{+},\textbf{y};x^{+},\textbf{x}|\omega)\to
G_{0}(y^{+},\textbf{y};x^{+},\textbf{x}|\omega)W(\textbf{x}_{\rm cl})$, with
$\textbf{x}_{\rm cl}$ the classical path and $G_{0}$ the free propagator – see
Appendix A. The gluon propagators are effectively demoted to tilted Wilson
lines in transverse space, with the trajectory fixed by the kinematics.
The trajectories $\mathbf{r}_{1,2}$ are expected to be straight lines
diverging from the parent parton at an emission angle $\theta$ which will be
set by the final kinematics. At large enough times, one can then expect that
the $\mathbf{r}_{1}\cdot\mathbf{r}_{2}$ term in the exponent of eq. (30) is
dominated by the contribution quadratic in $t$:
$\mathbf{r}_{1}(t)\cdot\mathbf{r}_{2}(t)\approx\theta^{2}t^{2}\,.$ (31)
The time integral for that term can then be performed exactly, thus yielding
the usual simplified color coherence factor
$1-\Delta_{med}\equiv\exp\left(-\frac{\hat{q}}{12}\theta^{2}(L-x_{c}^{+})^{3}\right)\,,$
(32)
in addition to the usual broadening experienced by the emitted gluon and
encoded in the usual term proportional to
$(\mathbf{r}_{1}-\mathbf{r}_{2})^{2}$. Given that our focus is on including
the proper factors which can capture the effects of color decoherence, the
angle entering the formulas above is a function of the relative momentum of
the outgoing antenna only, which is the momentum $\mathbf{Q}$ introduced in
section 2.1.
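As a rough numerical illustration of eq. (32), the sketch below evaluates the coherence factor $1-\Delta_{med}$ for a static brick; the numerical values of $\hat{q}$ and $L$ are illustrative choices made here, not taken from the text.

```python
import math

def one_minus_delta_med(qhat, theta, dt):
    """Coherence factor of eq. (32): exp(-qhat/12 * theta^2 * (L - x_c^+)^3).

    qhat  : jet quenching parameter [GeV^3]
    theta : opening angle of the antenna
    dt    : in-medium propagation time after splitting, L - x_c^+ [GeV^-1]
    """
    return math.exp(-qhat / 12.0 * theta**2 * dt**3)

# Illustrative values (assumed for this sketch): qhat ~ 0.3 GeV^3, L ~ 20 GeV^-1 (~4 fm)
qhat, L = 0.3, 20.0
narrow = one_minus_delta_med(qhat, 0.01, L)  # small angle: antenna stays coherent
wide   = one_minus_delta_med(qhat, 0.5,  L)  # large angle: color fields randomized
```

For the narrow antenna the factor stays close to one (interference survives), while for the wide antenna it is exponentially small, matching the two regimes discussed below.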
The coherence factor in eq. (32) controls the contribution from the
interference diagrams (see left panel of figure 2) and it leads to a simple
physical interpretation in two regimes. The first corresponds to the case when
either the emission angle or the in-medium propagation time after splitting,
$L-x_{c}^{+}$, is sufficiently large, i.e. $\hat{q}L^{3}\theta^{2}\gg 1$. In
such cases $1-\Delta_{med}\approx 0$, and the contribution to the spectrum
given by eq. (29) is vanishing. Heuristically, this limit corresponds to the
case when the outgoing in-medium quark and gluon color fields become
sufficiently randomized by the multiple interactions with the underlying
medium, such that they effectively lose their color connection. As a
consequence, there is no possibility of them exchanging color and thus the
future interference for a soft vacuum gluon radiation is prohibited. On the
other hand, a simple way to understand the case where the in-medium evolution
is short or the antenna opening angle is small, $1-\Delta_{med}\approx
1+\mathcal{O}(\hat{q}\theta^{2}L^{3})$, is to picture the transverse structure
of the medium as an ensemble of color domains of size proportional to (the
inverse) of the saturation scale $Q_{s}$, with different domains being color
disconnected. Since the quark-gluon antenna transverse size is small, i.e.
$Q_{s}^{-2}\sim(\hat{q}L)^{-1}\gg\theta^{2}(L-x_{c}^{+})^{2}$, the system
evolves in the medium with both outgoing quark and gluon states probing the
same color domains, unable to break color coherence. (The same picture holds
in the regime $1-\Delta_{med}\approx 0$: in that case the antenna is large, so
the outgoing states always probe color-uncorrelated domains in the medium,
leading to the randomization of their color fields.)
On top of the interference term we computed above explicitly, one has to
consider the diagrams where the vacuum emission comes from the same parton in
both the amplitude and conjugate amplitude (see right panel of figure 2). Due
to the simple color structure, it is easy to see that this contribution to the
spectrum leads to a term related to the vacuum emission of the soft gluon
multiplying the in-medium contribution. Thus, in the two previous limiting
regimes, the net result should be proportional to
$1-(1-\Delta_{med})=\Delta_{med}$. In the regime where the outgoing states
lose color coherence, $\Delta_{med}=1$, one sees that the net result
corresponds to the BDMPS-Z spectrum (up to a trivial vacuum term), as expected
since this result assumes that states lose color coherence instantaneously
after in-medium emission. On the other hand, when coherence is not lost,
$\Delta_{med}\approx\mathcal{O}(\hat{q}\theta^{2}L^{3})$, there is destructive
interference between the two types of diagrams in figure 2, and the emission
spectrum is suppressed. This picture is analogous to the one found for the in-
medium QCD antenna Antenna2 .
Beyond these limiting regimes and after considering both direct and
interference diagrams, we see that the effect of considering color coherence
can be implemented by including the coherence factor alongside the broadening
after emission:
$\mathcal{P}_{1}(\mathbf{k};L,x_{c}^{+})\to\mathcal{P}_{1}(\mathbf{k};L,x_{c}^{+})\>\times\>\Delta_{med}\,.$
(33)
Even though this correction appears in the same region as the broadening post-
emission, it makes sense instead to consider it as an additional factor to the
emission kernel. A way to justify this observation is to note that the
emission angle $\theta$ is controlled by the momentum and energy transfer
during the almost instantaneous branching, as we will detail in the next
section. In fact, this ansatz for the splitting kernel is qualitatively in
agreement with the branching picture discussed in the above sections:
branchings are still local in time and partons broaden independently, with the
only difference that now there is a time delay before the medium resolves
them. In addition, although we used a vacuum gluon to measure the coherence of
the system, in Appendix C we argue that a similar structure to the last
three-point function in eq. (29) is still observed if one considers an in-medium
gluon emission.
This simple picture offers an opportunity to gauge the role of color coherence
effects in the multiple soft emission regime, which is precisely the final
goal of this paper.
## 4 Introducing color coherence effects into the rate equation
Our starting point is eq. (5), which we repeat here in a slightly different
form,
$\begin{split}{\cal
P}_{2}(\mathbf{k},\mathbf{q},z;L,t_{0})&=2g^{2}z(1-z)\int_{t_{0}}^{L}dt\,\int_{\mathbf{m},\mathbf{Q}}{\cal
K}(\mathbf{Q},z,p_{0}^{+};t)\\\ &\times{\cal
P}_{1}(\mathbf{m}-\mathbf{p}_{0};t,t_{0}){\cal
P}_{1}(\mathbf{k}-\mathbf{Q}-z\mathbf{m};L,t){\cal
P}_{1}(\mathbf{q}+\mathbf{Q}-(1-z)\mathbf{m};L,t)\,.\end{split}$ (34)
In the BDMPS-Z framework, under the harmonic approximation BDIM2 , the
splitting kernel reads
$\begin{split}\mathcal{K}(\mathbf{Q},z,p_{0}^{+},t)=\frac{2}{p_{0}^{+}}\frac{P_{gg}(z)}{z(1-z)}\sin\left(\frac{\mathbf{Q}^{2}}{2k_{f}^{2}(t)}\right)\exp\left(-\frac{\mathbf{Q}^{2}}{2k_{f}^{2}(t)}\right)\,,\end{split}$
(35)
where we have introduced the typical transverse momentum (squared) acquired
due to in-medium scattering during the branching process,
$k_{f}^{2}(t)=\sqrt{\hat{q}(t)(1-z(1-z))p_{0}^{+}(z(1-z))}$, with the time
dependence vanishing in the plasma brick model. When the energy fraction $z$
is small, this gives the usual estimate
$k_{f}^{2}\sim\sqrt{\hat{q}zp_{0}^{+}}$ implemented before. Integrating the
previous equation over $\mathbf{Q}$ restores eq. (7) and justifies why
$\mathbf{Q}^{2}\sim\sqrt{\hat{q}zp_{0}^{+}}\ll\hat{q}L$.
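A direct transcription of the kernel in eq. (35) for the brick model might look as follows; the splitting function $P_{gg}$ is passed in by the caller, since its normalization conventions are not fixed in this section.

```python
import math

def kf2(qhat, z, p0):
    """Typical transverse momentum squared accumulated during branching,
    k_f^2 = sqrt(qhat * (1 - z(1-z)) * p0^+ * z(1-z)), as given below eq. (35)."""
    return math.sqrt(qhat * (1.0 - z * (1.0 - z)) * p0 * z * (1.0 - z))

def kernel(Q2, z, p0, qhat, Pgg):
    """BDMPS-Z splitting kernel of eq. (35) for a static brick (constant qhat)."""
    x = Q2 / (2.0 * kf2(qhat, z, p0))
    return (2.0 / p0) * Pgg(z) / (z * (1.0 - z)) * math.sin(x) * math.exp(-x)
```

The $\sin(x)\,e^{-x}$ profile peaks for $\mathbf{Q}^{2}\sim k_{f}^{2}$ and is exponentially suppressed for $\mathbf{Q}^{2}\gg k_{f}^{2}$, consistent with the estimate $\mathbf{Q}^{2}\sim\sqrt{\hat{q}zp_{0}^{+}}\ll\hat{q}L$ quoted above.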
Following the discussion in the previous section, color coherence effects can
be introduced in an effective way by the new splitting kernel
$\begin{split}&{\cal
K}^{\prime}(\mathbf{Q},z,p_{0}^{+};L-t,t)\equiv\mathcal{K}(\mathbf{Q},z,p_{0}^{+};t)\times\Delta_{med}(\mathbf{Q},z,p_{0}^{+};L-t)\,,\end{split}$
(36)
where $\Delta_{med}$ is explicitly given by
$\Delta_{med}(\mathbf{Q},z,p_{0}^{+};L-t)=1-\exp\left(-\frac{\hat{q}}{12}\theta^{2}(\mathbf{Q},z,p_{0}^{+})(L-t)^{3}\right)\,.$
(37)
Here the emission angle depends on the transverse momentum variables only
through the relative transverse momentum between the two outgoing states
$\mathbf{Q}$ and is fixed by the kinematics of the emission process, as
considered in the previous section. It is explicitly given by
mapping_collinear
$\theta(\mathbf{Q},z,p_{0}^{+})=\left|\frac{\mathbf{Q}}{z(1-z)p_{0}^{+}}\right|\,.$
(38)
In this approximation it is still true that
$\mathbf{Q}^{2}\sim\sqrt{\hat{q}zp_{0}^{+}}$, and therefore
$\mathbf{Q}^{2}\ll\mathbf{k}^{2}\sim\mathbf{q}^{2}\sim\hat{q}L$ still holds.
Consequently, one can still neglect $\mathbf{Q}$ in the single particle
broadening probabilities and integrate the new kernel ${\cal K}^{\prime}$ over
$\mathbf{Q}$, as presented in Section 2. In this way we recover the
probabilistic picture of BDIM2 with the main difference that color coherence
effects enter through the factor $\Delta_{med}$ in the kernel, which
effectively delays the effect of the gluon emission thus accounting for the
time it takes for the daughter parton to become a possible independent source
of radiation. This modification incorporates into the evolution the fact that,
for medium-induced emissions, the decoherence time might be larger than the
formation time, as defined in mapping_collinear .
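To make the kinematic mapping concrete, a minimal sketch of eqs. (37)–(38) could read as follows (the kernel factor $\mathcal{K}$ of eq. (36) itself is left abstract):

```python
import math

def theta(Q, z, p0):
    """Emission angle fixed by the branching kinematics, eq. (38):
    theta = |Q / (z(1-z) p0^+)|."""
    return abs(Q / (z * (1.0 - z) * p0))

def delta_med(Q, z, p0, qhat, dt):
    """Decoherence weight of eq. (37), with dt = L - t:
    1 - exp(-qhat/12 * theta^2 * dt^3)."""
    return 1.0 - math.exp(-qhat / 12.0 * theta(Q, z, p0)**2 * dt**3)
```

Note that $\Delta_{med}$ vanishes for $dt=0$, which is what makes the first line of eq. (39) drop out, and it grows monotonically toward one for long-lived or wide-angle antennas.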
The inclusion of the decoherence factor also implies that the emission kernel
is no longer local in time, thus changing slightly the form of the evolution
equations. The new evolution equation for the (truncated) branching
probability now reads
$\begin{split}&\partial_{L}\widetilde{\mathcal{P}}_{2}(\mathbf{k},\mathbf{q},z;L,t_{0})=2g^{2}z(1-z)\bigg{\\{}\int_{\mathbf{Q}}\mathcal{K}^{\prime}(\mathbf{Q},z,p_{0}^{+};0,L)\delta(\mathbf{k}-\mathbf{Q}-z\mathbf{p}_{0})\delta(\mathbf{q}+\mathbf{Q}-(1-z)\mathbf{p}_{0})\\\
&+\int_{t_{0}}^{L}dt\,\int_{\mathbf{Q}}\mathcal{K}(\mathbf{Q},z,p_{0}^{+};t)\partial_{L}\Delta_{med}(\mathbf{Q},z,p_{0}^{+};L-t)\delta(\mathbf{k}-\mathbf{Q}-z\mathbf{p}_{0})\delta(\mathbf{q}+\mathbf{Q}-(1-z)\mathbf{p}_{0})\bigg{\\}}\,.\end{split}$
(39)
The term in the first line vanishes since ${\cal
K}^{\prime}(\mathbf{Q},z,p_{0}^{+};0,L)=0$ – see eq. (37). This result can
then be simplified to the form
$\begin{split}\partial_{L}\widetilde{\mathcal{P}}_{2}(\mathbf{k},\mathbf{q},z;L,t_{0})=2g^{2}z(1-z)\int_{t_{0}}^{L}dt\,\bar{\mathcal{K}}(z,p_{0}^{+};L-t,t)(2\pi)^{4}\delta^{(2)}(\mathbf{k}-z\mathbf{p}_{0})\delta^{(2)}(\mathbf{q}-(1-z)\mathbf{p}_{0})\,,\end{split}$
(40)
with
$\bar{\mathcal{K}}(z,p_{0}^{+};L-t,t)=\int_{\mathbf{Q}}\mathcal{K}(\mathbf{Q},z,p_{0}^{+};t)\partial_{L}\Delta_{med}(\mathbf{Q},z,p_{0}^{+};L-t)\,.$
(41)
Since eq. (40) has the same form as eq. (11), the generating functional
evolution equation is similar to the one in eq. (14). Therefore, one finds
that the gluon distribution $D(x,\mathbf{k},t)$ obeys
$\begin{split}&\partial_{t}D(x,\mathbf{k},t)=\int_{\mathbf{l}}C(\mathbf{l},t)D(x,\mathbf{k}-\mathbf{l},t)\\\
&+\alpha_{s}\int_{0}^{t}ds\,\int_{z}\left[\frac{2}{z^{2}}\bar{\mathcal{K}}\left(z,\frac{x}{z}p_{0}^{+};t-s,s\right)D\left(\frac{x}{z},\frac{\mathbf{k}}{z};s\right)\Theta(z-x)-\bar{\mathcal{K}}\left(z,xp_{0}^{+};t-s,s\right)D(x,\mathbf{k},s)\right]\,.\end{split}$
(42)
The new modified kernel $\bar{\mathcal{K}}$ spreads the full effect of an
emission over the decoherence time. At each step of the evolution in $t$, the
impact of an emission at a previous time $s$ grows until the emitted gluon has
completely decohered from its parent parton. This delay is particularly
important for suppressing further emissions, since the new gluon is not
counted as a possible source of independent emissions until it is fully
decohered from its parent.
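This accumulation can be checked on the time structure of eq. (41): for fixed $\mathbf{Q}$, the delay weight $\partial_{L}\Delta_{med}=3c\,(L-s)^{2}e^{-c(L-s)^{3}}$, with $c=\hat{q}\theta^{2}/12$, integrates to one over the delay, so at late times the full instantaneous kernel is recovered. A quick numerical check, with an illustrative value of $c$ chosen here:

```python
import math

def ddt_delta_med(c, dt):
    """d/dL of Delta_med = 1 - exp(-c*(L-s)^3), as a function of the delay dt = L - s."""
    return 3.0 * c * dt**2 * math.exp(-c * dt**3)

# Trapezoidal integral of the delay weight over dt in [0, T]; should approach 1
c, T, n = 0.02, 10.0, 100000
h = T / n
total = sum(ddt_delta_med(c, i * h) for i in range(1, n)) * h \
        + 0.5 * h * (ddt_delta_med(c, 0.0) + ddt_delta_med(c, T))
```

Analytically the integral is $1-e^{-cT^{3}}$, so it saturates to unity once $cT^{3}\gg 1$, i.e. once the gluon has fully decohered.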
## 5 Conclusion and Outlook
In this paper we have implemented color coherence effects in a rate equation
describing a full medium induced gluon shower. The obtained results are based
on the calculation of the two gluon emission process depicted in Figure 2.
In this work we have always considered the regime where multiple gluon
emissions become important, i.e. when the gluon frequency is much smaller than
the critical frequency, $\omega\ll\omega_{c}$, and consequently the gluon has
a short formation time $t_{f}\ll L$. This leads to a Markovian/factorized
picture, where in-medium branchings are well localized in time. In the fully
decoherent picture previously considered in BDIM2 , instantaneous branching is
followed by final state broadening of the emitted gluons over a scale $\sim
L$.
Going beyond this scenario, coherence effects are studied by introducing an
extra soft vacuum gluon whose main role in the calculation is to allow
measuring the degree of coherence of the outgoing quark-gluon antenna (see
Figure 2). In the soft limit that we consider here, the propagation of the
quark-gluon system after the formation time encodes both the broadening and
the decoherence parameter entangled into a three-point function. By making use
of the tilted Wilson line approximation, we were able to factorize both
contributions into the usual final state broadening and a coherence factor
$\Delta_{med}$ which dictates when the outgoing states are resolved
individually by the medium. This coherence factor depends on an emission angle
$\theta$, related to the momentum transfer $\textbf{Q}^{2}\ll\hat{q}L$ during
the branching process and the associated energy fraction $z$. Since the
momentum $\textbf{Q}^{2}$ acquired during branching is small compared to the
momentum acquired due to broadening $\sim\hat{q}L$, at the level of the rate
equations it can be ignored everywhere, as assumed in the original derivation
of the rate equations in BDIM2 , except in the branching kernel ${\cal
K}(\textbf{Q},z)$. As such, when integrating over this momentum (or
equivalently over the emission angle) to obtain the energy kernel ${\cal
K}(z)$, $\Delta_{med}$ acts as an integration weight which suppresses the
radiative contribution from highly coherent systems. In comparison, the
incoherent case explored in BDIM2 , assumes that the outgoing radiation
decoheres instantaneously and hence, the integration weight is always unity –
see eq. (35) compared to eq. (36).
Since the emission angle and the energy fraction $z$, which dictate the
coherence between the outgoing states, are determined locally in the branching
process, one can associate $\Delta_{med}$ with the splitting kernel ${\cal
K}$, with the length scale set by the broadening time. This split between the
coherence factor and the broadening is of course a simplification we make so
that the resummation of multiple emissions is possible. Nonetheless, in the
future it would be relevant to incorporate broadening effects into the
coherence factor, leading to a more realistic treatment of color coherence
physics.
The new evolution equation derived for the single gluon distribution, eq.
(42), becomes non-local in time, unlike previous totally decoherent results
BDIM2 . The non-locality of this equation reflects the fact that it takes a
finite amount of time for a system to become resolved by the medium. In
addition, the qualitative physical interpretation of the results, detailed in
the main text, is in line with previous studies of color coherence effects in
jet quenching Antenna1 ; Antenna0 ; Antenna2 ; Antenna3 ; Antenna4 .
###### Acknowledgements.
The authors are grateful to Néstor Armesto for helpful discussions. This work
has received financial support from European Union’s Horizon 2020 research and
innovation program under the grant agreement No. 82409; from Xunta de Galicia
(Centro singular de investigación de Galicia accreditation 2019-2022); from
the European Union ERDF; from the Spanish Research State Agency by “María de
Maeztu” Units of Excellence program MDM-2016-0692 and project FPA2017-83814-P
and from the European Research Council project ERC-2018-ADG-835105 YoctoLHC.
The work of V.V. is supported by the Agence Nationale de la Recherche under
the project ANR-16-CE31-0019-02. J.B. is supported by a fellowship from “la
Caixa" Foundation (ID 100010434) – fellowship code LCF/BQ/ DI18/11660057, and
by funding from the European Union’s Horizon 2020 research and innovation
program under the Marie Sklodowska-Curie grant agreement No. 713673.
## Appendix A Propagation of an energetic parton on a classical field
When traversing a dense QCD medium, a parton with transverse momentum
$\mathbf{p}$ and energy $p_{0}^{+}\gg|\mathbf{p}|$ keeps a straight trajectory
in transverse space and only its color field gets rotated. In this high-energy
regime, the in-medium propagator reduces to a Wilson line Carlos_lectures ;
GYK
$W(\mathbf{x};L,t_{0})=\mathcal{P}\exp\left(ig\int_{t_{0}}^{L}dx^{+}\,A_{-}(\mathbf{x};x^{+})\right)\,,$
(43)
so that the propagation is confined to a medium of length $L-t_{0}$. Here
$A_{-}$ is the $-$ light-cone component of the classical background field
describing the medium, while $\mathbf{x}$ is the transverse position at which
the parton is located along the future light cone. (We use two interchangeable
notations for the light-cone time dependence: either it is given as an
argument or it appears as a lower index.) For simplicity, the gauge field’s
color indices are implicitly contracted with the generators of the color
algebra. In the main text, we reserve the $W$ symbol for adjoint Wilson lines
and $U$ for the fundamental representation case. In addition, we omit the
transverse position as an argument when $\mathbf{x}=\mathbf{0}$. A useful
relation between fundamental and adjoint Wilson lines is given by Kovner ;
Liliana1 ; Liliana2 ; GYK ; Carlos_lectures
$W^{\dagger
ab}(\mathbf{x})=W^{ba}(\mathbf{x})=2\operatorname{Tr}[t^{b}U^{\dagger}(\mathbf{x})t^{a}U(\mathbf{x})]\,,$
(44)
where we have made the (adjoint) color indices explicit and traced in the
fundamental representation. Two other useful identities are
$\begin{split}W^{ba}(\mathbf{x})t^{a}&=U(\mathbf{x})t^{b}U^{\dagger}(\mathbf{x})\,,\\\
t^{b}W^{ba}(\mathbf{x})&=U^{\dagger}(\mathbf{x})t^{a}U(\mathbf{x})\,.\end{split}$
(45)
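Identities (44) and (45) are straightforward to verify numerically; the sketch below does so for SU(2), with $t^{a}=\sigma^{a}/2$, using the closed-form group exponential. The choice of group and of the sample rotation are assumptions of this illustration only.

```python
import numpy as np

# Pauli matrices and fundamental SU(2) generators t^a = sigma^a / 2
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
t = [s / 2 for s in sigma]

def su2(angle, axis):
    """U = exp(i*angle*n.t) via the closed-form SU(2) exponential."""
    n = np.asarray(axis, dtype=float)
    n = n / np.linalg.norm(n)
    ndots = sum(ni * si for ni, si in zip(n, sigma))
    return np.cos(angle / 2) * np.eye(2) + 1j * np.sin(angle / 2) * ndots

U = su2(1.3, [0.2, 0.5, 0.8])

# Adjoint Wilson line from eq. (44): W^{ba} = 2 Tr[t^b U^dagger t^a U]
W = np.array([[2 * np.trace(t[b] @ U.conj().T @ t[a] @ U)
               for a in range(3)] for b in range(3)])
```

One can check that $W$ comes out real and orthogonal (an SO(3) rotation), and that $\sum_{a}W^{ba}t^{a}=U\,t^{b}\,U^{\dagger}$, the first line of eq. (45).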
Relaxing the eikonal restriction and including sub-eikonal corrections
$\mathcal{O}(\mathbf{p}/p_{0}^{+})$ yields the more general in-medium
propagator Carlos_lectures ; Kovner ; GYK
$G(\mathbf{x},L;\mathbf{y},t_{0})=\int_{\mathbf{y}}^{\mathbf{x}}\mathcal{D}\mathbf{r}\,\exp\left(\frac{ip_{0}^{+}}{2}\int_{t_{0}}^{L}d\xi\,\dot{\mathbf{r}}^{2}(\xi)\right)\times
W(L,t_{0};\mathbf{r}(\xi))\,,$ (46)
where now the trajectory in transverse space is not fixed, with the eikonal
propagator recovered in the limit $p_{0}^{+}\to\infty$. This propagator obeys
the simple composition law
$G(\mathbf{x},L;\mathbf{y},t_{0})=\int_{\mathbf{z}}G(\mathbf{x},L;\mathbf{z},t)G(\mathbf{z},t;\mathbf{y},t_{0})\,,$
(47)
used in the main text in order to exploit the $x^{+}$ locality of the medium
averages.
Finally, we also make use of the results presented in Altinoluk:2014oxa ;
Altinoluk:2015gia , where a gradient expansion around the classical trajectory
of $G(\mathbf{x},L;\mathbf{y},t_{0})$ is performed. The leading result of such
an expansion was referred to, in the main text, as the tilted Wilson line, and
it is given by
$G(\mathbf{x},L;\mathbf{y},t_{0})=G_{0}(\mathbf{x},L;\mathbf{y},t_{0})W(\mathbf{x}_{\rm
classical})_{L,t_{0}}+\mathcal{O}\left(\frac{L-t_{0}}{p_{0}^{+}}\partial^{2}_{\mathbf{x}_{\rm
classical}}\right)\,.$ (48)
Here $G_{0}$ is the vacuum propagator (i.e. eq. (46) with the gauge field term
removed), and $\mathbf{x}_{\rm classical}$ is the classical trajectory in
transverse space between positions $\mathbf{y}$ and $\mathbf{x}$ over the time
interval $L-t_{0}$,
$\mathbf{x}_{\rm
classical}(s)=\mathbf{x}+\frac{\mathbf{x}-\mathbf{y}}{L-t_{0}}(s-L)\,,\quad
t_{0}\leq s\leq L\,.$ (49)
Note that eq. (48) is derived under the assumption that
$\frac{\mathbf{p}}{p_{0}^{+}}$ is finite.
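The straight-line trajectory in eq. (49) interpolates between the endpoints; a one-line transcription (treating each transverse component independently) makes the boundary conditions explicit:

```python
def x_classical(s, x, y, L, t0):
    """Classical straight-line path of eq. (49): x_cl(L) = x and x_cl(t0) = y,
    for one transverse component, with t0 <= s <= L."""
    return x + (x - y) / (L - t0) * (s - L)
```

Evaluating at $s=L$ returns $x$, and at $s=t_{0}$ returns $y$, as required of the classical trajectory entering eq. (48).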
## Appendix B The medium averages
This appendix outlines how to perform the medium averages present in our
calculation. For a more detailed discussion of this topic, we refer the reader
to, e.g., GYK .
The basic object we wish to compute is the following two-point function in the
adjoint representation (an analogous result can be derived in the fundamental
representation) within a time interval $L-t_{0}$,
$\frac{\operatorname{Tr}\langle
W(\mathbf{x})W^{\dagger}(\mathbf{y})\rangle}{N_{c}^{2}-1}=\frac{1}{N_{c}^{2}-1}\operatorname{Tr}\left\langle\exp\left(ig\int_{x^{+}}A_{-}(\mathbf{x};x^{+})\right)\exp\left(-ig\int_{y^{+}}A_{-}^{\dagger}(\mathbf{y};y^{+})\right)\right\rangle\,,$
(50)
where we have used the explicit form of the Wilson line from Appendix A. In
order to proceed, one expands in the coupling $g$ up to the first non-trivial
order and models the field correlator as
$\langle
A_{-}^{a}(\mathbf{x};x^{+})A_{-}^{b}(\mathbf{y};y^{+})\rangle=\delta^{ab}\delta(x^{+}-y^{+})B_{\gamma}(\mathbf{x},\mathbf{y};x^{+})\,.$
(51)
Here $B_{\gamma}$ corresponds to the Fourier transform of the elastic in-
medium scattering potential Kovner:2001vi ,
$B_{\gamma}(\mathbf{x},\mathbf{y};x^{+})=g^{2}n(x^{+})\int_{\mathbf{k}}e^{i\mathbf{k}\cdot(\mathbf{x}-\mathbf{y})}\gamma(\mathbf{k})=B_{\gamma}(\mathbf{x}-\mathbf{y};x^{+})\,,$
(52)
where $n(x^{+})$ is the longitudinal density of scattering centers inside the
medium and $\gamma$ the bare in-medium scattering potential, which in the UV
behaves as $\gamma(\mathbf{k})\sim 1/\mathbf{k}^{4}$. This connects to the
dipole cross-section as follows,
$\sigma(\mathbf{x},t)=\sigma(\mathbf{x})n(t)=2g^{2}\int_{\mathbf{k}}(1-e^{i\mathbf{k}\cdot\mathbf{x}})B_{\gamma}(\mathbf{x},t)\approx\frac{\hat{q}(t)}{2C_{A}}\mathbf{x}^{2}\log\frac{1}{\mu^{2}\mathbf{x}^{2}}\approx\frac{\hat{q}(t)}{2C_{A}}\mathbf{x}^{2}\log\frac{Q_{c}^{2}}{\mu^{2}}\,,$
(53)
where in the next to last expression we have used the UV behavior of $\gamma$
and introduced $\hat{q}(t)=4\pi\alpha_{s}^{2}C_{A}n(t)$. In the last
expression we have also used the harmonic oscillator approximation and
regulated the logarithm by introducing a large momentum scale $Q_{c}^{2}\sim
1/\mathbf{x}^{2}\gg\mu^{2}$, with $\mu$ the Debye mass. Expanding the Wilson
line to first non-trivial order produces
$W^{ab}(\mathbf{x})=\delta^{ab}+iA^{c}(\mathbf{x})f^{cab}-\frac{C_{A}}{2}\delta^{ab}B_{\gamma}(\mathbf{0})\,.$
(54)
Under the above assumptions, the medium average at linear order, i.e. the
leading contribution for $N=1$ in the opacity expansion, becomes
$\frac{\operatorname{Tr}\langle
W(\mathbf{x})W^{\dagger}(\mathbf{y})\rangle^{N=1}}{N_{c}^{2}-1}=1-\frac{C_{A}}{2}\int_{x^{+}}n(x^{+})\sigma(\mathbf{x}-\mathbf{y})=1-\int_{x^{+}}\frac{\hat{q}(x^{+})}{4}(\mathbf{x}-\mathbf{y})^{2},$
(55)
with logarithmic contributions absorbed in $\hat{q}$. Re-exponentiating
produces
$\frac{\operatorname{Tr}\langle
W(\mathbf{x})W^{\dagger}(\mathbf{y})\rangle}{N_{c}^{2}-1}=\exp\left(-\frac{C_{A}}{2}\int_{x^{+}}n(x^{+})\sigma(\mathbf{x}-\mathbf{y})\right)\equiv\mathcal{P}(\mathbf{x}-\mathbf{y};L,t_{0})\,,$
(56)
where we have introduced the dipole operator $\mathcal{P}(\mathbf{r})$, whose
Fourier transform gives the single particle broadening probability, as already
stated in Section 2.
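In the harmonic approximation for a static brick, the dipole of eqs. (55)–(56) reduces to a Gaussian in the transverse separation, and its two-dimensional Fourier transform is the Gaussian broadening distribution with typical accumulated momentum $\langle\mathbf{k}^{2}\rangle=\hat{q}\,(L-t_{0})$. A sketch of both objects:

```python
import math

def dipole(r2, qhat, dt):
    """Dipole of eqs. (55)-(56) for a static brick: P(r) = exp(-qhat/4 * r^2 * dt),
    with r2 the transverse separation squared and dt = L - t0."""
    return math.exp(-qhat / 4.0 * r2 * dt)

def broadening(k2, qhat, dt):
    """2D Fourier transform of the dipole: (4*pi/(qhat*dt)) * exp(-k^2/(qhat*dt)),
    normalized to unity when integrated with d^2k/(2*pi)^2."""
    return (4.0 * math.pi / (qhat * dt)) * math.exp(-k2 / (qhat * dt))
```

The normalization can be checked numerically by performing the radial $\int k\,dk/(2\pi)$ integral of the broadening distribution.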
More complex averages can be computed following the same procedure. A relevant
example, possible when sub-eikonal corrections are included, is the following
two-point function, which under the harmonic oscillator approximation reads
GYK ; Carlos_lectures
$\frac{\operatorname{Tr}\langle
G(\mathbf{x},\mathbf{y})W^{\dagger}(\mathbf{0})\rangle_{x_{c}^{+},x^{+}}}{N_{c}^{2}-1}=\int_{\mathbf{y}}^{\mathbf{x}}\mathcal{D}\mathbf{r}\exp\left[\frac{i\omega}{2}\int_{x^{+}}^{x_{c}^{+}}dt\,\dot{\mathbf{r}}^{2}-\int_{x^{+}}^{x_{c}^{+}}dt\,\frac{\hat{q}(t)}{4}\mathbf{r}^{2}\right]\,,$
(57)
where on the left-hand side we have suppressed the dependence on the gluon
frequency $\omega$. Here $G(\mathbf{x},\mathbf{y})$ is the full gluon
propagator, and we have absorbed $\log Q_{c}^{2}/\mu^{2}$ in the definition of
$\hat{q}$, as done in the previous example. Within the harmonic approximation
implemented above, one can give an explicit solution Liliana1 ; Liliana2
$\begin{split}&\frac{1}{N_{c}^{2}-1}\operatorname{Tr}\langle
G(\mathbf{x},\mathbf{y})W^{\dagger}(\mathbf{0})\rangle_{x_{c}^{+},x^{+}}\equiv{\cal
K}(\mathbf{x},\mathbf{y})_{x_{c}^{+},x^{+}}\\\
&=\frac{A(x_{c}^{+},x^{+})}{i\pi}\exp\bigg{[}iA(x_{c}^{+},x^{+})(B(x^{+},x_{c}^{+})\mathbf{x}^{2}+B(x_{c}^{+},x^{+})\mathbf{y}^{2}-2\mathbf{x}\cdot\mathbf{y})\bigg{]}\,.\end{split}$
(58)
In particular, for a static and homogeneous medium the parametric functions
$A$ and $B$ are given by
$A(x_{c}^{+},x^{+})=\frac{\omega\Omega}{2\sin(\Omega(x_{c}^{+}-x^{+}))}\,,\quad
B(x_{c}^{+},x^{+})=2\cos(\Omega(x_{c}^{+}-x^{+}))\,,$ (59)
while for other medium profiles we refer the reader to Arnold_simple_formula .
Here $\Omega$ is the harmonic oscillator frequency,
$\Omega=\frac{1-i}{2}\sqrt{\frac{\hat{q}}{\omega}}\,.$ (60)
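As an illustrative check (ours, not from the text), eqs. (59)-(60) can be evaluated numerically for a static medium; in the vacuum limit $\hat{q}\to 0$ one should recover the free-propagator values $A\to\omega/(2\Delta x^{+})$ and $B\to 2$. The parameter values below are arbitrary.

```python
import numpy as np

def Omega(qhat, omega):
    # Complex harmonic oscillator frequency, eq. (60)
    return (1 - 1j) / 2 * np.sqrt(qhat / omega)

def A(dt, qhat, omega):
    # A(x_c^+, x^+) for a static, homogeneous medium, eq. (59)
    Om = Omega(qhat, omega)
    return omega * Om / (2 * np.sin(Om * dt))

def B(dt, qhat, omega):
    return 2 * np.cos(Omega(qhat, omega) * dt)

omega, dt = 10.0, 3.0        # arbitrary gluon energy and light-cone separation
A_free = omega / (2 * dt)    # vacuum (qhat -> 0) limit of A
A_small = A(dt, 1e-12, omega)
B_small = B(dt, 1e-12, omega)
```

Since $\sin(\Omega\Delta t)\to\Omega\Delta t$ and $\cos(\Omega\Delta t)\to 1$ as $\Omega\to 0$, both quantities converge smoothly to their free values.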
Another example is the three-point function appearing in eq. (26). This
correlator has already been explicitly computed in BDIM1 using the same
techniques outlined above (see next appendix for some details). Explicitly, it
can be written within the harmonic oscillator approximation as follows,
$\begin{split}&\int_{\mathbf{z}}^{\mathbf{x}_{g}}\mathcal{D}{\mathbf{r}_{1}}\int_{\mathbf{x}^{\prime}}^{\mathbf{x}_{g}^{\prime}}\mathcal{D}{\mathbf{r}_{2}}\exp\left(\frac{i\omega}{2}\int_{x_{c}^{+}}^{L}dt\
\dot{\mathbf{r}}_{1}^{2}-\dot{\mathbf{r}}_{2}^{2}-\frac{\hat{q}}{8}\int_{x_{c}^{+}}^{L}dt\
\mathbf{r}_{1}^{2}+\mathbf{r}_{2}^{2}+(\mathbf{r}_{1}-\mathbf{r}_{2})^{2}\right)\\\
=&\int_{\mathbf{z}}^{\mathbf{x}_{g}}\mathcal{D}{\mathbf{r}_{1}}\int_{\mathbf{x}^{\prime}}^{\mathbf{x}_{g}^{\prime}}\mathcal{D}{\mathbf{r}_{2}}\exp\left(\frac{i\omega}{2}\int_{x_{c}^{+}}^{L}dt\
\dot{\mathbf{r}}_{1}^{2}-\dot{\mathbf{r}}_{2}^{2}-\frac{\hat{q}}{4}\int_{x_{c}^{+}}^{L}dt\
\mathbf{r}_{1}^{2}+\mathbf{r}_{2}^{2}-\mathbf{r}_{1}\cdot\mathbf{r}_{2}\right)\,.\end{split}$
(61)
When writing the dynamical terms we assumed that the gluon energies match in
the amplitude and its complex conjugate (otherwise the calculation is
slightly more involved, but still feasible). At this point, the unitary
transformation
$\begin{split}\mathbf{r}_{1}=\Gamma(\mathbf{r}_{1}^{\prime}+\beta\mathbf{r}_{2}^{\prime})\,,\quad\mathbf{r}_{2}=\Gamma(\mathbf{r}_{2}^{\prime}+\beta\mathbf{r}_{1}^{\prime})\,,\end{split}$
(62)
is boost invariant and leaves the kinetic part of the Lagrangian unchanged,
while the potential gives a diagonal term (with
$\Gamma^{-1}=\sqrt{1-\beta^{2}}$)
${\Gamma^{2}}\left[\mathbf{r}_{1}^{2}(1-\beta+\beta^{2})+\mathbf{r}_{2}^{2}(1-\beta+\beta^{2})\right]=\Gamma^{2}(1-\beta+\beta^{2})(\mathbf{r}_{1}^{2}+\mathbf{r}_{2}^{2})\,.$
(63)
Imposing that the cross-terms vanish results in
$1-4\beta+\beta^{2}=0\implies\beta=2\pm\sqrt{3}\,,$ (64)
so that choosing the convergent solution $\beta=2-\sqrt{3}$, one gets
$\begin{split}&\int_{\Gamma(\mathbf{z}-\beta\mathbf{x}^{\prime})}^{\Gamma(\mathbf{x}_{g}-\beta\mathbf{x}_{g}^{\prime})}D\mathbf{r}_{1}\int_{\Gamma(\mathbf{x}^{\prime}-\beta\mathbf{z})}^{\Gamma(\mathbf{x}_{g}^{\prime}-\beta\mathbf{x}_{g})}D\mathbf{r}_{2}\exp\left(\int_{x_{c}^{+}}^{L}dt\frac{i\omega}{2}\dot{\mathbf{r}}^{2}_{1}-\frac{\hat{q}\sqrt{3}}{8}\mathbf{r}_{1}^{2}\right)\\\
\times&\exp\left(\int_{x_{c}^{+}}^{L}dt-\frac{i\omega}{2}\dot{\mathbf{r}}^{2}_{2}-\frac{\hat{q}\sqrt{3}}{8}\mathbf{r}_{2}^{2}\right)\equiv{\cal
J}(\mathbf{b}_{g},\mathbf{b})_{L,x^{+}_{c}}{\cal
J}^{\dagger}(\mathbf{c}_{g},\mathbf{c})_{L,x_{c}^{+}}\,.\end{split}$ (65)
In this way, the present medium average is factorized into a product of the
previous two-point correlator, with an effective diffusion coefficient
$\hat{q}_{eff}=\hat{q}\frac{\sqrt{3}}{2}$. Note that we use ${\cal J}$ to
denote ${\cal K}$ with $\hat{q}\to\hat{q}_{eff}$, while the $\dagger$ denotes
that one should use the conjugate harmonic oscillator frequency. Here we have
also introduced
$\mathbf{b}_{g}=\Gamma(\mathbf{x}_{g}-\beta\mathbf{x}_{g}^{\prime})\,,\quad\mathbf{b}=\Gamma(\mathbf{z}-\beta\mathbf{x}^{\prime})\,,\quad\mathbf{c}_{g}=\Gamma(\mathbf{x}_{g}^{\prime}-\beta\mathbf{x}_{g})\,,\quad\mathbf{c}=\Gamma(\mathbf{x}^{\prime}-\beta\mathbf{z})\,.$
(66)
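The diagonalization can be verified numerically (an illustrative check, not from the text): with $\beta=2-\sqrt{3}$ the cross term in the potential of eq. (61) disappears and the potential becomes $\frac{\sqrt{3}}{2}(\mathbf{r}_{1}^{\prime 2}+\mathbf{r}_{2}^{\prime 2})$, which is precisely the origin of $\hat{q}_{eff}=\frac{\sqrt{3}}{2}\hat{q}$.

```python
import numpy as np

beta = 2 - np.sqrt(3)              # convergent root of beta^2 - 4*beta + 1 = 0
Gamma = 1 / np.sqrt(1 - beta**2)

rng = np.random.default_rng(0)
r1p = rng.normal(size=2)           # random transverse vectors r_1', r_2'
r2p = rng.normal(size=2)

# Transformation of eq. (62)
r1 = Gamma * (r1p + beta * r2p)
r2 = Gamma * (r2p + beta * r1p)

V_old = r1 @ r1 + r2 @ r2 - r1 @ r2                 # potential of eq. (61)
V_new = (np.sqrt(3) / 2) * (r1p @ r1p + r2p @ r2p)  # diagonal form
```

The two expressions agree to machine precision for arbitrary transverse vectors, confirming that $\Gamma^{2}(1-\beta+\beta^{2})=\sqrt{3}/2$.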
Finally, these results can be used to simplify the spectrum in eq. (26) to the
form
$\begin{split}\omega\omega^{\prime}\frac{dI}{d^{2}\mathbf{k}d^{2}\mathbf{k}^{\prime}d\omega
d\omega^{\prime}}&=-\left(\omega^{\prime}\frac{dI}{d\omega^{\prime}d^{2}\mathbf{k}^{\prime}}\right)^{\mathbf{g}}\frac{2C_{F}\alpha_{s}}{(2\pi)^{2}\omega^{2}}\text{Re}\Bigg{[}\int_{\mathbf{x}_{g}\mathbf{x}_{g}^{\prime}x^{+}x^{+}_{c}\mathbf{z}}e^{i\mathbf{k}(\mathbf{x}^{\prime}_{g}-\mathbf{x}_{g})}\\\
&\times\partial_{\mathbf{x}}\cdot\partial_{\mathbf{x}^{\prime}}{\cal
K}(\mathbf{z},\mathbf{x})_{x_{c}^{+},x^{+}}{\cal
J}(\mathbf{b}_{g},\mathbf{b})_{L,x_{c}^{+}}{\cal
J}^{\dagger}(\mathbf{c}_{g},\mathbf{c})_{L,x_{c}^{+}}\Bigg{]}_{\mathbf{x}=\mathbf{x}^{\prime}=\mathbf{0}}\,.\end{split}$
(67)
## Appendix C Outline of double in-medium gluon emission computation
In this appendix, we study the interference diagram in the particular case
where the second gluon emission happens inside the medium. We focus solely on
the color structure of the squared amplitude, which can be directly read off
from the diagram. The analysis of the full double in-medium gluon emission
constitutes an extremely challenging problem in the literature (see Iqbal1 ;
Casalderrey_doublegg for more in-depth discussions for both dense and dilute
systems). In what follows, we will still assume that any formation time is
much smaller than the medium length and we will not discuss the problem of
overlapping formation times.
Figure 3: Leading color interference diagram for the case of double in-medium
gluon emission. We identify the different time scales (above) and the
transverse position of each splitting (below). In addition, we indicate the
position in transverse space for the outgoing states.
In Figure 3, we depict the leading color interference diagram associated with
this process. As pointed out before, there are many ways one can order the
time scales $x^{+}$, $x_{c}^{+}$, $y^{+}$ and $y_{c}^{+}$, but for the
purposes of this appendix we will take $y^{+}_{c}>y^{+}>x_{c}^{+}>x^{+}$. We
also use that the hard quark loses energy $\omega$ due to the gluon emission,
with the first emission (in amplitude) having energy $\xi\omega$ and the
second $(1-\xi)\omega$. In particular, the color structure of the depicted
process reads
$\begin{split}&\langle
U_{ni}(L,y^{+})t^{c}_{ij}G^{cd}(L,\bar{\mathbf{x}}_{g};y^{+},\mathbf{0}|(1-\xi)\omega)U_{jk}(y^{+},x^{+})t^{a}_{kl}U_{lm}(x^{+},0)G^{ab}(L,\mathbf{x}_{g};x^{+},\mathbf{0}|\xi\omega)\\\
&\times G^{\dagger
b\bar{b}}(L,\mathbf{x}_{g}^{\prime};y_{c}^{+},\mathbf{y}^{\prime}|\xi\omega)G^{\dagger
d\bar{d}}(L,\bar{\mathbf{x}}_{g}^{\prime};y_{c}^{+},\mathbf{y}^{\prime}|(1-\xi)\omega)f^{\bar{c}\bar{d}\bar{b}}G^{\dagger\bar{c}\bar{a}}(y_{c}^{+},\mathbf{y}^{\prime};x_{c}^{+},\mathbf{0}|\omega)\\\
&\times
U^{\dagger}_{m\bar{l}}(x_{c}^{+},0)t^{\bar{a}}_{\bar{l}\bar{k}}U^{\dagger}_{\bar{k}n}(L,x_{c}^{+})\rangle\,,\end{split}$
(68)
where eikonality is assumed for the leading parton, such that
$\mathbf{x}=\mathbf{y}\equiv\mathbf{0}$111111In the triple gluon vertex at
$\mathbf{y}^{\prime}$ one would need to introduce dummy variables in order to
evaluate the derivative operators appearing in the full squared amplitude, and
only then set all positions to $\mathbf{y}^{\prime}$. However, for the current
discussion this subtlety does not become relevant since we are only interested
in the color structure.. After some algebraic manipulations, and assuming
$\xi\to 1$ as done in the main text such that
$(1-\xi)\omega\to\omega^{\prime}$ and $\xi\omega\to\omega$, we obtain
$\begin{split}&\int_{\mathbf{z}_{1}\mathbf{z}_{2}\mathbf{z}_{3}\mathbf{z}_{4}\mathbf{z}_{5}}\operatorname{Tr}\langle
G(\mathbf{z}_{1};\mathbf{0}|\omega)W^{\dagger}(\mathbf{0})\rangle_{x_{c}^{+},x^{+}}\langle
f^{l\bar{a}h}W^{hc}(y^{+},x_{c}^{+})G^{c\alpha}(y_{c}^{+},\mathbf{z}_{2};y^{+},\mathbf{0}|\omega^{\prime})G^{\alpha
d}(L,\bar{\mathbf{x}}_{g};y_{c}^{+},\mathbf{z}_{2}|\omega^{\prime})\\\ &\times
G^{\dagger
d\bar{d}}(L,\bar{\mathbf{x}}_{g}^{\prime};y_{c}^{+},\mathbf{y}^{\prime}|\omega^{\prime})G^{\dagger
b\bar{b}}(L,\mathbf{x}_{g}^{\prime};y^{+}_{c},\mathbf{y}^{\prime}|\omega)G^{l\beta}(y^{+},\mathbf{z}_{3};x_{c}^{+},\mathbf{z}_{1}|\omega)G^{\beta\sigma}(y_{c}^{+},\mathbf{z}_{4};y^{+},\mathbf{z}_{3}|\omega)\\\
&\times G^{\sigma
b}(L,\mathbf{x}_{g};y_{c}^{+},\mathbf{z}_{4}|\omega)f^{\bar{c}\bar{d}\bar{b}}G^{\dagger\bar{c}\gamma}(y^{+}_{c},\mathbf{y}^{\prime};y^{+},\mathbf{z}_{5}|\omega)G^{\dagger\gamma\bar{a}}(y^{+},\mathbf{z}_{5};x_{c}^{+},\mathbf{0}|\omega)\rangle\,.\end{split}$
(69)
It is now possible to break down the medium averages for each time interval as
follows,
$\begin{cases}f^{l\bar{a}h}\langle
W^{hc}G^{l\beta}G^{\dagger\gamma\bar{a}}\rangle\qquad&{\rm
in}\,(x_{c}^{+},y^{+}),\\\ f^{\bar{c}\bar{d}\bar{b}}\langle
G^{c\alpha}G^{\beta\sigma}G^{\dagger\bar{c}\gamma}\rangle\qquad&{\rm
in}\,(y^{+},y_{c}^{+}),\\\ \langle G^{\alpha d}G^{\dagger d\bar{d}}G^{\sigma
b}G^{\dagger b\bar{b}}\rangle\qquad&{\rm in}\,(y_{c}^{+},L)\,.\end{cases}$
(70)
The color structure of this system is quite involved and cannot be written in
a closed form BDIM1 ; Liliana2 . In order to proceed, we take again the late
time evolution of the system to be that of two independent color dipoles, so
that we obtain
$\begin{cases}f^{lah}\langle W^{hc}G^{l\beta}G^{\dagger\gamma
a}\rangle\qquad&{\rm in}\,(x_{c}^{+},y^{+}),\\\ f^{idb}\langle G^{cd}G^{\beta
b}G^{\dagger i\gamma}\rangle\qquad&{\rm in}\,(y^{+},y_{c}^{+}),\\\
\operatorname{Tr}\langle
G(\bar{\mathbf{x}}_{g};\mathbf{z}_{2}|\omega^{\prime})G^{\dagger}(\bar{\mathbf{x}}_{g}^{\prime};\mathbf{y}^{\prime}|\omega^{\prime})\rangle\operatorname{Tr}\langle
G(\mathbf{x}_{g};\mathbf{z}_{4}|\omega)G^{\dagger}(\mathbf{x}_{g}^{\prime};\mathbf{y}^{\prime}|\omega)\rangle\qquad&{\rm
in}\,(y_{c}^{+},L)\,.\end{cases}$ (71)
For the intermediate medium average, we extract the overall color factor by
making use of the techniques introduced in the previous appendix. Taking eqs.
(51) and (54), we consider the contribution of an extra scattering center to
the medium average at time $\tau$, such that the time interval is split into
two sub-intervals: the former from $(y^{+},y_{c}^{+}-\tau)$, while the latter
(infinitesimal one) is $(y_{c}^{+}-\tau,y_{c}^{+})$. Thus, we can write any
Wilson line as
$W^{ij}(\mathbf{x})_{y^{+},y_{c}^{+}}=W^{ik}(\mathbf{x})_{y_{c}^{+}-\tau,y^{+}}\left(\delta^{kj}\left(1-\frac{C_{A}}{2}B_{\gamma}(\mathbf{0})\right)-if^{ksj}A^{s}(\mathbf{x})\right)_{y_{c}^{+},y_{c}^{+}-\tau}\,.$
(72)
Finally, by using eq. (72) in the second line of eq. (71) and after some
color algebra we obtain, in accordance with the results shown in the main
text,
$\begin{split}&f^{idb}\langle W^{cd}(\mathbf{x}_{1})W^{\beta
b}(\mathbf{x}_{2})W^{\dagger
i\gamma}(\mathbf{x}_{3})\rangle_{y^{+},y_{c}^{+}}\sim f^{idb}\langle
W^{cd}(\mathbf{x}_{1})W^{\beta b}(\mathbf{x}_{2})W^{\dagger
i\gamma}(\mathbf{x}_{3})\rangle_{y_{c}^{+}-\tau,y^{+}}\\\
&\times\left(1-\frac{3C_{A}}{2}B_{\gamma}(\mathbf{0})+\frac{C_{A}}{2}[B_{\gamma}(\mathbf{x}_{1}-\mathbf{x}_{2})+B_{\gamma}(\mathbf{x}_{3}-\mathbf{x}_{2})+B_{\gamma}(\mathbf{x}_{1}-\mathbf{x}_{3})]\right)_{y_{c}^{+},y_{c}^{+}-\tau}\,.\end{split}$
(73)
The time locality of the medium averages, combined with this result, leads to the
conclusion that the medium average within the time interval
$(x_{c}^{+},y^{+})$ must take the form
$\begin{split}&f^{hla}f^{ijk}\langle W^{hi}G^{lj}G^{\dagger
ka}\rangle_{x_{c}^{+},y^{+}}\,,\end{split}$ (74)
matching the result in the main text. For other orderings of the branching
times in amplitude and conjugate amplitude, we find the same result for the
earliest time interval.
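As a self-contained check of the color factors used throughout (our illustration, not part of the original computation), one can construct the SU(3) structure constants from the Gell-Mann matrices and verify the adjoint Casimir identity $f^{acd}f^{bcd}=C_{A}\delta^{ab}$ with $C_{A}=3$, the factor appearing in eqs. (54) and (73).

```python
import numpy as np

# Gell-Mann matrices lambda^a, a = 1..8 (stored with 0-based index)
lam = np.zeros((8, 3, 3), dtype=complex)
lam[0][0, 1] = lam[0][1, 0] = 1
lam[1][0, 1] = -1j; lam[1][1, 0] = 1j
lam[2][0, 0] = 1;   lam[2][1, 1] = -1
lam[3][0, 2] = lam[3][2, 0] = 1
lam[4][0, 2] = -1j; lam[4][2, 0] = 1j
lam[5][1, 2] = lam[5][2, 1] = 1
lam[6][1, 2] = -1j; lam[6][2, 1] = 1j
lam[7][0, 0] = lam[7][1, 1] = 1 / np.sqrt(3); lam[7][2, 2] = -2 / np.sqrt(3)
t = lam / 2                 # fundamental generators, Tr(t^a t^b) = delta^{ab}/2

# Structure constants: f^{abc} = -2i Tr([t^a, t^b] t^c)
f = np.zeros((8, 8, 8))
for a in range(8):
    for b in range(8):
        for c in range(8):
            comm = t[a] @ t[b] - t[b] @ t[a]
            f[a, b, c] = (-2j * np.trace(comm @ t[c])).real

CA_matrix = np.einsum('acd,bcd->ab', f, f)  # should equal C_A * identity
```

The contraction returns $3\,\delta^{ab}$, and $f^{abc}$ comes out fully antisymmetric, as required.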
## References
* (1) STAR Collaboration, J. Adams et al., Experimental and theoretical challenges in the search for the quark gluon plasma: The STAR Collaboration’s critical assessment of the evidence from RHIC collisions, Nucl. Phys. A757 (2005) 102–183, [nucl-ex/0501009].
* (2) PHENIX Collaboration, K. Adcox et al., Formation of dense partonic matter in relativistic nucleus-nucleus collisions at RHIC: Experimental evaluation by the PHENIX collaboration, Nucl. Phys. A757 (2005) 184–283, [nucl-ex/0410003].
* (3) ALICE Collaboration, K. Aamodt et al., Suppression of Charged Particle Production at Large Transverse Momentum in Central Pb-Pb Collisions at $\sqrt{s_{NN}}=$ 2.76 TeV, Phys. Lett. B696 (2011) 30–39, [arXiv:1012.1004].
* (4) CMS Collaboration, V. Khachatryan et al., Charged-particle nuclear modification factors in PbPb and pPb collisions at $\sqrt{s_{\mathrm{N}\;\mathrm{N}}}=5.02$ TeV, JHEP 04 (2017) 039, [arXiv:1611.01664].
* (5) CMS Collaboration, S. Chatrchyan et al., Observation and studies of jet quenching in PbPb collisions at nucleon-nucleon center-of-mass energy = 2.76 TeV, Phys. Rev. C84 (2011) 024906, [arXiv:1102.1957].
* (6) ALICE Collaboration, J. Adam et al., Measurement of jet suppression in central Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV, Phys. Lett. B746 (2015) 1–14, [arXiv:1502.01689].
* (7) J. Casalderrey-Solana, Y. Mehtar-Tani, C. A. Salgado, and K. Tywoniuk, New picture of jet quenching dictated by color coherence, Phys. Lett. B725 (2013) 357–360, [arXiv:1210.7765].
* (8) P. Caucal, E. Iancu, A. H. Mueller, and G. Soyez, Vacuum-like jet fragmentation in a dense QCD medium, Phys. Rev. Lett. 120 (2018) 232001, [arXiv:1801.09703].
* (9) J.-P. Blaizot, F. Dominguez, E. Iancu, and Y. Mehtar-Tani, Medium-induced gluon branching, JHEP 01 (2013) 143, [arXiv:1209.4585].
* (10) L. Apolinário, N. Armesto, G. Milhano, and C. A. Salgado, Medium-induced gluon radiation beyond the eikonal approximation, Nucl. Phys. A932 (2014) 152–157, [arXiv:1404.7079].
* (11) J.-P. Blaizot, F. Dominguez, E. Iancu, and Y. Mehtar-Tani, Probabilistic picture for medium-induced jet evolution, JHEP 06 (2014) 075, [arXiv:1311.5823].
* (12) S. Jeon and G. D. Moore, Energy loss of leading partons in a thermal QCD medium, Phys. Rev. C 71 (2005) 034901, [hep-ph/0309332].
* (13) R. Baier, Y. L. Dokshitzer, A. H. Mueller, S. Peigne, and D. Schiff, Radiative energy loss and p(T) broadening of high-energy partons in nuclei, Nucl. Phys. B484 (1997) 265–282, [hep-ph/9608322].
* (14) R. Baier, Y. L. Dokshitzer, A. H. Mueller, S. Peigne, and D. Schiff, Radiative energy loss of high-energy quarks and gluons in a finite volume quark - gluon plasma, Nucl. Phys. B483 (1997) 291–320, [hep-ph/9607355].
* (15) B. G. Zakharov, Fully quantum treatment of the Landau-Pomeranchuk-Migdal effect in QED and QCD, JETP Lett. 63 (1996) 952–957, [hep-ph/9607440].
* (16) B. G. Zakharov, Radiative energy loss of high-energy quarks in finite size nuclear matter and quark - gluon plasma, JETP Lett. 65 (1997) 615–620, [hep-ph/9704255].
* (17) M. Gyulassy, P. Levai, and I. Vitev, Reaction operator approach to nonAbelian energy loss, Nucl. Phys. B594 (2001) 371–419, [nucl-th/0006010].
* (18) U. A. Wiedemann, Gluon radiation off hard quarks in a nuclear environment: Opacity expansion, Nucl. Phys. B588 (2000) 303–344, [hep-ph/0005129].
* (19) X.-N. Wang and X.-f. Guo, Multiple parton scattering in nuclei: Parton energy loss, Nucl. Phys. A 696 (2001) 788–832, [hep-ph/0102230].
* (20) P. B. Arnold, G. D. Moore, and L. G. Yaffe, Photon and gluon emission in relativistic plasmas, JHEP 06 (2002) 030, [hep-ph/0204343].
* (21) M. D. Sievert, I. Vitev, and B. Yoon, A complete set of in-medium splitting functions to any order in opacity, Phys. Lett. B 795 (2019) 502–510, [arXiv:1903.06170].
* (22) S. Caron-Huot and C. Gale, Finite-size effects on the radiative energy loss of a fast parton in hot and dense strongly interacting matter, Phys. Rev. C82 (2010) 064902, [arXiv:1006.2379].
* (23) X. Feal and R. Vazquez, Intensity of gluon bremsstrahlung in a finite plasma, Phys. Rev. D98 (2018), no. 7 074029, [arXiv:1811.01591].
* (24) W. Ke, Y. Xu, and S. A. Bass, A modified-Boltzmann approach for modeling the hot QCD medium-induce splitting vertices in the deep LPM region, Phys. Rev. C100 (2019), no. 6 064911, [arXiv:1810.08177].
* (25) C. Andres, L. Apolinário, and F. Dominguez, Medium-induced gluon radiation with full resummation of multiple scatterings for realistic parton-medium interactions, JHEP 07 (2020) 114, [arXiv:2002.01517].
* (26) C. Andres, F. Dominguez, and M. G. Martinez, From soft to hard radiation: the role of multiple scatterings in medium-induced gluon emissions, arXiv:2011.06522.
* (27) Y. Mehtar-Tani, Gluon bremsstrahlung in finite media beyond multiple soft scattering approximation, JHEP 07 (2019) 057, [arXiv:1903.00506].
* (28) Y. Mehtar-Tani and K. Tywoniuk, Improved opacity expansion for medium-induced parton splitting, JHEP 06 (2020) 187, [arXiv:1910.02032].
* (29) J. Barata and Y. Mehtar-Tani, Improved opacity expansion at NNLO for medium induced gluon radiation, arXiv:2004.02323.
* (30) Y. Mehtar-Tani, C. A. Salgado, and K. Tywoniuk, The Radiation pattern of a QCD antenna in a dense medium, JHEP 10 (2012) 197, [arXiv:1205.5739].
* (31) Y. Mehtar-Tani and K. Tywoniuk, Jet coherence in QCD media: the antenna radiation spectrum, JHEP 01 (2013) 031, [arXiv:1105.1346].
* (32) Y. Mehtar-Tani, C. A. Salgado, and K. Tywoniuk, The radiation pattern of a QCD antenna in a dilute medium, JHEP 04 (2012) 064, [arXiv:1112.5031].
* (33) J. Casalderrey-Solana and E. Iancu, Interference effects in medium-induced gluon radiation, JHEP 08 (2011) 015, [arXiv:1105.1760].
* (34) Y. I. Azimov, Y. L. Dokshitzer, V. A. Khoze, and S. Troyan, Humpbacked QCD Plateau in Hadron Spectra, Z. Phys. C 31 (1986) 213.
* (35) Z. Hulcher, D. Pablos, and K. Rajagopal, Resolution Effects in the Hybrid Strong/Weak Coupling Model, JHEP 03 (2018) 010, [arXiv:1707.05245].
* (36) J. Casalderrey-Solana, D. C. Gulhan, J. G. Milhano, D. Pablos, and K. Rajagopal, A Hybrid Strong/Weak Coupling Approach to Jet Quenching, JHEP 10 (2014) 019, [arXiv:1405.3864]. [Erratum: JHEP 09, 175 (2015)].
* (37) R. Baier, A. H. Mueller, D. Schiff, and D. Son, ’Bottom up’ thermalization in heavy ion collisions, Phys. Lett. B 502 (2001) 51–58, [hep-ph/0009237].
* (38) J.-P. Blaizot, E. Iancu, and Y. Mehtar-Tani, Medium-induced QCD cascade: democratic branching and wave turbulence, Phys. Rev. Lett. 111 (2013) 052001, [arXiv:1301.6102].
* (39) R. K. Ellis, W. J. Stirling, and B. R. Webber, QCD and collider physics, Camb. Monogr. Part. Phys. Nucl. Phys. Cosmol. 8 (1996) 1–435.
* (40) Y. L. Dokshitzer, V. A. Khoze, A. H. Mueller, and S. I. Troian, Basics of perturbative QCD. 1991.
* (41) P. Cvitanovic, P. Hoyer, and K. Zalewski, Parton evolution as a branching process, Nucl. Phys. B176 (1980) 429–448.
* (42) L. Apolinario, N. Armesto, and C. A. Salgado, Medium-induced emissions of hard gluons, Phys. Lett. B718 (2012) 160–168, [arXiv:1204.2929].
* (43) J. Casalderrey-Solana and C. A. Salgado, Introductory lectures on jet quenching in heavy ion collisions, Acta Phys. Polon. B38 (2007) 3731–3794, [arXiv:0712.3443].
* (44) C. A. Salgado and U. A. Wiedemann, Calculating quenching weights, Phys. Rev. D 68 (2003) 014008, [hep-ph/0302184].
* (45) T. Altinoluk, N. Armesto, G. Beuf, M. Martínez, and C. A. Salgado, Next-to-eikonal corrections in the CGC: gluon production and spin asymmetries in pA collisions, JHEP 07 (2014) 068, [arXiv:1404.2219].
* (46) T. Altinoluk, N. Armesto, G. Beuf, and A. Moscoso, Next-to-next-to-eikonal corrections in the CGC, JHEP 01 (2016) 114, [arXiv:1505.01400].
* (47) F. Domínguez, J. G. Milhano, C. A. Salgado, K. Tywoniuk, and V. Vila, Mapping collinear in-medium parton splittings, Eur. Phys. J. C80 (2020), no. 1 11, [arXiv:1907.03653].
* (48) Y. Mehtar-Tani, J. G. Milhano, and K. Tywoniuk, Jet physics in heavy-ion collisions, Int. J. Mod. Phys. A28 (2013) 1340013, [arXiv:1302.2579].
* (49) A. Kovner and U. A. Wiedemann, Gluon radiation and parton energy loss, hep-ph/0304151.
* (50) A. Kovner and U. A. Wiedemann, Eikonal evolution and gluon radiation, Phys. Rev. D 64 (2001) 114002, [hep-ph/0106240].
* (51) P. B. Arnold, Simple Formula for High-Energy Gluon Bremsstrahlung in a Finite, Expanding Medium, Phys. Rev. D 79 (2009) 065025, [arXiv:0808.2767].
* (52) P. Arnold and S. Iqbal, The LPM effect in sequential bremsstrahlung, JHEP 04 (2015) 070, [arXiv:1501.04964]. [Erratum: JHEP 09, 072 (2016)].
* (53) J. Casalderrey-Solana, D. Pablos, and K. Tywoniuk, Two-gluon emission and interference in a thin QCD medium: insights into jet formation, JHEP 11 (2016) 174, [arXiv:1512.07561].
# VALIDATION OF HD 183579b USING ARCHIVAL RADIAL VELOCITIES:
A WARM-NEPTUNE ORBITING A BRIGHT SOLAR ANALOG
Skyler Palatnick School of Engineering and Applied Sciences, University of
Pennsylvania, Philadelphia, PA 19104, USA David Kipping Department of
Astronomy, Columbia University, 550 W 120th Street, New York, NY 10027, USA
Daniel Yahalomi Department of Astronomy, Columbia University, 550 W 120th
Street, New York, NY 10027, USA
###### Abstract
As exoplanetary science matures into its third decade, we are increasingly
offered the possibility of pre-existing, archival observations for newly
detected candidates. This is particularly pertinent for the TESS mission, whose
survey spans bright, nearby dwarf stars in both hemispheres - precisely the
types of sources targeted by previous radial velocity (RV) surveys. On this
basis, we investigated whether any of the TESS Objects of Interest (TOIs)
coincided with such observations, from which we find 18 single-planet
candidate systems. Of these, one exhibits an RV signature with the correct
period and phase matching the transiting planetary candidate, with a false-
alarm probability of less than 1%. After further checks, we exploit this fact
to validate HD 183579b (TOI-1055b). This planet has a radius below $4$ $R_{\oplus}$ and
a mass measured to better than 33% precision, thus advancing TESS’ primary
objective of finding 50 such worlds. We find that this planet is amongst the
most accessible small transiting planets for atmospheric characterization. Our
work highlights that the efforts to confirm and even precisely measure the
masses of new transiting planet candidates need not always depend on acquiring
new observations - that in some instances these tasks can be completed with
existing data.
techniques: spectroscopic — planets and satellites: detection — planets and
satellites: individual: HD 183579b
## 1 Introduction
Large scale photometric surveys have revolutionized our understanding of
extrasolar planets through the discovery of thousands of transiting planets
(Batalha, 2014). Despite the many advantages of the transit method, it suffers
from one primary weakness - a planet-like transit can be caused by numerous
false-positives (Bryson et al., 2013; Santerne et al., 2013; Leuquire et al.,
2018).
Determining the true nature of a planet-like transit is complicated by the
fact that the false-positive rate (FPR) varies between different photometric
surveys; for example, for Kepler it is ${\sim}10\%$ (Borucki et al., 2011) but
is far higher for ground-based surveys like KELT (Collins et al., 2018). Of
particular interest for this work is that TESS is predicted to have a FPR of
${\sim}40$% for the 2-minute cadence targets (Sullivan et al., 2015). However,
even when one focusses on a specific survey, the FPR varies dramatically with
the planetary radius; for example, Kepler’s FPR is 17.7% for giant planets but
6.7% for Neptunes (Fressin et al. 2013, but also see Santerne et al. 2012).
Moreover, the FPR appears to further depend on the evolutionary state of the parent
star (Sliski & Kipping 2014, but also see Gaidos & Mann 2013) and even
position within the detector’s pixel array (Christiansen et al., 2020).
The “gold standard” solution for distinguishing between genuine transiting
planets and false-positives is to obtain radial velocity (RV) measurements
that detect the presence of a Doppler signal of the same period and phase as
the transit signal (e.g. Hébrard et al. 2019). In the era of thousands of
planetary candidates dawned by Kepler, this approach remains powerful but
ultimately limited in impact due to the challenge of securing the very large
number of observing nights required to cover the entire sample. High
significance RV detections are typically described as planet “confirmations”
in the associated paper (e.g. Almenara et al. 2018). In some cases, the lack
of a detectable RV signal in phase with a known transiter has been used to
place mass upper limits, which is then used as a basis to describe said
transiter as “confirmed” (e.g. Timmermann et al. 2020).
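The core idea of a phase-matched RV test can be sketched schematically (a toy illustration, not the pipeline of any particular paper): once the period $P$ and transit epoch $t_{0}$ are fixed by the photometric ephemeris, the only linear free parameter of a circular-orbit Doppler model is the semi-amplitude $K$, which follows from a weighted least-squares fit. All numerical values below are invented for the demonstration.

```python
import numpy as np

def fit_K(t, rv, sigma, P, t0):
    """Weighted least-squares Doppler semi-amplitude at a fixed ephemeris.

    Toy model: rv(t) = K * sin(2*pi*(t - t0)/P), i.e. a circular orbit with
    zero systemic velocity; the sign convention is purely illustrative."""
    s = np.sin(2 * np.pi * (t - t0) / P)
    w = 1.0 / sigma**2
    K = np.sum(w * rv * s) / np.sum(w * s**2)
    K_err = 1.0 / np.sqrt(np.sum(w * s**2))
    return K, K_err

# Synthetic demonstration: inject a signal and recover its amplitude.
rng = np.random.default_rng(1)
P, t0, K_true = 17.5, 2.0, 4.7        # invented ephemeris and amplitude
t = rng.uniform(0.0, 2000.0, 80)      # 80 archival epochs over ~5.5 years
sigma = np.full(80, 1.5)              # 1.5 m/s per-point uncertainties
rv = K_true * np.sin(2 * np.pi * (t - t0) / P) + rng.normal(0.0, sigma)
K_hat, K_err = fit_K(t, rv, sigma, P, t0)
```

Because the ephemeris removes the period and phase as free parameters, even a signal buried in the noise of the original blind search can be recovered at a well-defined significance.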
Due to the imbalance of RV observations versus transiting planet candidates,
alternative strategies have been developed in recent years to make further
progress. Specifically, the community has become familiar with the concept of
“statistical validation” of planetary candidates. Unlike confirmations, which
are implied to be essentially certain planets, validations frame signals as
being planets to some probability threshold. These validations generally
consider information such as the transit morphology, centroid positions and
high resolution imaging constraints in the context of both planet and non-
planet scenarios (e.g. hierarchical triples), in order to quantify the odds of
planethood.
Such validation efforts find their early footing in the work of Torres et al.
(2004), who showed that the transiting planet candidate associated with OGLE-
TR-33 was likely a false-positive. This technique was applied the following
year to OGLE-TR-56, which finally led to the first statistically validated
planet (Torres et al., 2005).
Statistical validation of transiting planet candidates remained somewhat of a
niche exercise in the exoplanet community during these years. This was largely
because most transiting planet candidates were coming from ground-based
surveys, such as HATNet (Bakos et al., 2004) and WASP (Pollacco et al., 2006),
whose targets were bright enough, and few enough in number, that RV confirmation
was almost ubiquitous in the resulting papers. This situation shifted in the Kepler era,
when exoplanet astronomers began drinking from the fire hose and could no
longer keep up with the large catalogs of planetary candidates being released
(e.g. Borucki et al. 2011). The BLENDER software (Torres et al., 2011) was the
first attempt to generalize the validation framework for en-masse work and led
to the validation of dozens of new Kepler planets (Torres et al., 2015, 2017).
In tandem, more computationally efficient validation methods were developed,
such as vespa (Morton, 2012) and Gaussian Process Classification (Armstrong,
Gamper, & Damoulas, 2020), enabling the validation of many hundreds of Kepler
candidates. The inclusion of planet multiplicity information into the
validation methodology by Lissauer et al. (2012) further empowered the
approach to the point where the majority of Kepler planetary candidates have
now been validated (Rowe et al., 2014; Morton et al., 2016).
Despite the apparent dwindling need for RVs during these years, the value of
RV measurements is arguably entering somewhat of a renaissance. Precise RV
data sets have now been accumulating for two decades (Butler et al., 2017)
revealing the long-period population of exoplanets (e.g. Wittenmyer et al.
2016). Moreover, the paucity of measured planet masses amongst the transiting
population has highlighted the need for RVs, for purposes such as constraining the
planetary mass-radius relation (Chen & Kipping, 2017), predicting the scale-
height in transmission spectroscopy work (Anglada-Escudé et al., 2013), and
removing degeneracies in exomoon work (Teachey & Kipping, 2018). The
impact of RVs is further buoyed by the fact that TESS is focussed on brighter stars
(Ricker et al., 2015), which are far more amenable to RV characterization than
those of Kepler. Indeed, it is telling that a primary goal of the TESS mission
is to measure the masses of 50 small ($\lesssim 4$ $R_{\oplus}$) exoplanets
(despite the fact TESS itself generally cannot measure masses).
In the TESS era, then, RVs will clearly play a more impactful role than they
did for Kepler, and, in this work, we consider to what extent they can be used to
validate known transiting planet candidates. In particular, we turn to
publicly available archival RV surveys over the last two decades of the
northern and southern skies which include many targets not just observed by
TESS but found to have TOIs (TESS Objects of Interest). Although no planets
may have been detectable in the original time series, the inclusion of the
TESS ephemerides (which are typically very precisely measured) adds new
constraints to these data sets that may elevate signals previously lying
beneath the noise floor. We note that Huang et al. (2018) utilized similar
methodology to validate HD 39091c, but their approach differs in that it does
not generalize to be applicable to the entire TOI database, and in our work we
push into lower signal-to-noise ratios.
We describe our new approach in Section 2, along with the identification of
one newly validated planet. In Section 3, we explore the physical properties
of this planet by including the constraints from the transit light curve and
stellar isochrones. The importance of this individual planet is discussed in
Section 4, along with a broader brush discussion of this new approach to
validating exoplanets.
## 2 Radial Velocity Analysis
### 2.1 Cross Referencing TOIs
We begin by curating a list of sources which have RV measurements available,
looking in particular for sources which have had no previous planet
detections. Although numerous surveys have been published over the years, we
limit ourselves to just the largest surveys in order to provide a degree of
catalog homogeneity. Specifically, we seek one large survey in each celestial
hemisphere to provide the necessary data. To this end, we identify the Lick-
Carnegie Exoplanet Survey (LCES) using the HIRES instrument on Keck-I, and the
High Accuracy Radial Velocity Planet Searcher (HARPS) mounted on the 3.6m ESO
telescope at La Silla as most suitable.
From LCES, we obtained 60,949 publicly available radial velocities for 1624
unique sources processed and published by Butler et al. (2017). These
observations span 20 years with an instrument upgrade occurring in August
2004. Despite this, Butler et al. (2017) explicitly report no significant
velocity offset after the upgrade in their published RVs, and thus we treat
the entire data set as originating from a single instrument.
From HARPS, we obtained over 212,000 RVs for 2912 sources (Trifonov et al.,
2020). HARPS has been in operation since 2003, but an instrument upgrade in May 2015
introduces an RV offset between the two eras that must be accounted for, one
which differs for each star (Trifonov et al., 2020).
In rare cases, sources were caught by both surveys. Of the 6 unique sources
for which this was the case, 4 were already known planet detections, and thus
were not impactful to our overall procedure. The remaining 2 were treated in
the same manner as the HARPS upgrade, with an RV offset that needs to be
accounted for between the two instruments which is different for each star.
We then proceeded to filter the list down to only sources which were also
listed as a TOI via the TESS Alert system, yielding 100 TOIs from 97 sources.
Of these, 70 were already known planet detections at the time of writing and
so these were excluded from our analysis. We also excluded 6 TOIs that had 5
or fewer total RV observations, as well as 3 sources hosting two TOIs each,
because our validation methodology cannot adequately disentangle the signals
of the two planet candidates in a multi-planet system. This provides us with a final list
of 18 TOIs which have not been confirmed as planets as of the time of writing,
and have archival precise radial velocities available from either HIRES or
HARPS. These 18 TOIs are listed in Table LABEL:tab:FAPtable.
Table 1: Summary of several tests applied to our TOIs to identify statistically significant and physically sound radial velocity solutions. $\dagger$ = An outlier point was removed during the analysis. TOI | Main Identifier | FAP | $K_{\mathrm{{\tt RadVel}}}$ [m/s] | $K_{\mathrm{{\tt forecaster}}}$ [m/s] | $K_{\mathrm{circ}}$ [m/s] | LS test for $K_{\mathrm{{\tt RadVel}}}$ | Physicality $p$-value
---|---|---|---|---|---|---|---
1055.01 | HD 183579 | $0.32\%$ | $4.7_{-1.2}^{+1.1}$ | [1.2, 8.1] | 3.8 | 0.49% | 0.23
260.01 | GJ 1008 $\dagger$ | $1.2\%$ | $3.44_{-0.80}^{+0.78}$ | [0.0, 5.1] | 3.1 | $\cdots$ | $\cdots$
560.01 | GJ 313 | $1.48\%$ | $14.2_{-2.5}^{+3.0}$ | [2.0,12.3] | 19 | $\cdots$ | $\cdots$
1611.01 | HD 207897 | $2.6\%$ | $7.3_{-3.5}^{+4.7}$ | [0.6, 4.3] | 2.8 | $\cdots$ | $\cdots$
1827.01 | Wolf 437 $\dagger$ | $4.2\%$ | $4.6_{-2.8}^{+2.3}$ | [0.9, 9.7] | 2.5 | $\cdots$ | $\cdots$
1011.01 | HD 61051 | $9.1\%$ | $3.04_{-0.74}^{+0.79}$ | [0.6, 3.5] | -1.2 | $\cdots$ | $\cdots$
179.01 | HD 18599 | $9.4\%$ | $36.4_{-7.4}^{+7.5}$ | [1.5, 8.5] | 18 | $\cdots$ | $\cdots$
440.01 | HD 36152 | $13.0\%$ | $1.7_{-1.7}^{+1.2}$ | [1.5, 6.8] | -0.55 | $\cdots$ | $\cdots$
461.01 | HD 15906 | $21.5\%$ | $-9.3_{-7.8}^{+7.7}$ | [1.3, 5.2] | 2.6 | $\cdots$ | $\cdots$
1860.01 | HD 134319 | $34.7\%$ | $132_{-62}^{+50}$ | [0.0, 10.3] | 59 | $\cdots$ | $\cdots$
486.01 | GJ 238 | $50.6\%$ | $0.57_{-0.73}^{+0.72}$ | [0.06,0.18] | 0.61 | $\cdots$ | $\cdots$
909.01 | HD 150139 | $57.8\%$ | $2.3_{-1.5}^{+1.0}$ | [0.8, 8.3] | -0.52 | $\cdots$ | $\cdots$
198.01 | GJ 7 | $63.6\%$ | $39.5_{-12.0}^{+4.3}$ | [0.5, 3.4] | 27.8 | $\cdots$ | $\cdots$
1970.01 | TYC 8647-2057-1 | $78.7\%$ | $6600_{-201}^{+220}$ | [42, 18000] | -71 | $\cdots$ | $\cdots$
253.01 | HIP 4468 | $87.1\%$ | $-32_{-32}^{+39}$ | [0.0, 8.0] | 0.77 | $\cdots$ | $\cdots$
731.01 | GJ 367 | $89.5\%$ | $0.28_{-0.84}^{+0.63}$ | [0.0, 4.6] | 0.11 | $\cdots$ | $\cdots$
139.01 | HIP 110692 | $91.2\%$ | $5.2_{-2.6}^{+2.5}$ | [1.4, 6.7] | 0.45 | $\cdots$ | $\cdots$
741.01 | GJ 341 | $92.6\%$ | $0.22_{-0.49}^{+0.45}$ | [0.0,2.0] | 0.054 | $\cdots$ | $\cdots$
### 2.2 A Check for Long-term Trends
Before we can look for the short-period RV signals expected from the TOIs,
it is necessary to check for evidence of long-term trends in the data. If
such trends exist, a failure to account for them would degrade our
sensitivity to low-amplitude signals. To accomplish this, we performed
a linear least squares fit of the RV time series using the inverse square of
the reported uncertainties as the weights (no jitter is included for this
test). Flat, linear, and quadratic trend models are regressed to each time
series, and for each we compute a $\chi^{2}$ and a BIC (Schwarz, 1978) score. The
model with the lowest BIC is saved as the appropriate trend model for each
TOI. To account for the effect of the 2015 upgrade to HARPS on RV datasets
with observations that span the eras before and after 2015, we implemented a
Nelder-Mead minimization routine to solve piecewise equations accounting for
the offset between the RVs from the pre- and post-upgrade time periods
corresponding to flat, linear, and quadratic trends. We then follow the stated
procedure of choosing the model with the lowest BIC as the correct trend for
the given TOI. The same workflow was applied to the TOIs with data from both
HARPS and LCES. We note that while the trend models we adopt are favored by
the BIC score for a given TOI, these trends are not necessarily statistically
significant. Of the TOIs we determined to have RV data with a linear or
quadratic favored trend model, TOIs 486.01, 560.01, 741.01, 198.01, 1860.01,
909.01, 1055.01, 179.01, 1611.01, 440.01, 1011.01, 253.01, and 1970.01 had
$\Delta\mathrm{BIC}>10$, indicating a strong likelihood of a trend in their
respective RV data.
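The trend-selection step above can be sketched as follows; the function and
variable names are illustrative rather than taken from our actual pipeline,
and the BIC is computed from the weighted $\chi^{2}$ up to a
model-independent constant:

```python
import numpy as np

def select_trend_model(t, rv, rv_err):
    """Fit flat, linear, and quadratic trends by weighted least squares and
    return (name, BIC, coefficients) for the model with the lowest BIC."""
    w = 1.0 / rv_err**2                  # inverse-square uncertainty weights
    tc = t - np.mean(t)                  # pivot near the baseline mid-point
    n = len(t)
    best = None
    for name, degree in [("flat", 0), ("linear", 1), ("quadratic", 2)]:
        # Design matrix with columns 1, t, ..., t^degree
        A = np.vander(tc, degree + 1, increasing=True)
        coeffs, *_ = np.linalg.lstsq(A * np.sqrt(w)[:, None],
                                     rv * np.sqrt(w), rcond=None)
        chi2 = np.sum(w * (rv - A @ coeffs) ** 2)
        # BIC = chi^2 + k ln(n), up to a model-independent constant
        bic = chi2 + (degree + 1) * np.log(n)
        if best is None or bic < best[1]:
            best = (name, bic, coeffs)
    return best
```

Because the BIC penalty grows with the parameter count, the quadratic model is
only retained when its $\chi^{2}$ improvement outweighs the extra term.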
### 2.3 Calculating False-Alarm Probabilities (FAPs)
We considered several tests to evaluate whether there is a genuine RV signal
in the archival data associated with each TOI, but the first of these is a
bootstrapping false-alarm probability (FAP) test. This test consists of three
steps, described as follows. First, for each TOI, we fit a trend + circular
orbit model (which can be expressed as a purely linear model for a given
period) to the TOI’s RV data, weighting by the inverse square uncertainties,
and evaluate the $\chi^{2}$ goodness-of-fit. The trend component of the model
corresponds to the BIC best fit trend; if the BIC for a given RV data set
favors a flat trend, only a constant offset term is included, while if the BIC
for a given RV data set favors a linear or quadratic trend, linear or
quadratic terms are included in addition to the constant offset term. We allow
for negative $K$ values during this process, which can be used as a diagnostic
for “bogus” detections later. For planets on near-circular orbits, as is
broadly expected given the short-period nature of the TOIs, one expects the
phase folded RVs to follow an inverted sinusoid of amplitude
$K_{\mathrm{circ}}$ (Kipping, 2013a). What this means is that at the time of
inferior conjunction (i.e. mid-transit time), the RV signal should be zero
since the star is moving tangentially, but the acceleration should be
blueshifting maximally (i.e. the RV gradient is maximally negative). Thus, our
linear equation was of the form:
$\displaystyle\mathrm{RV}(t)=a_{0}+a_{1}(t-t_{0})+a_{2}(t-t_{0})^{2}-K_{\mathrm{circ}}\sin\left(\frac{2\pi(t-\tau)}{P}\right)$
(1)
where $t_{0}$ is a pivot point selected near the mid-point of the
observational baseline, and $a_{2}$, $a_{1}$, and $a_{0}$ are constants
corresponding to quadratic, linear, and constant trends in the data,
respectively. We utilized the linalg.lstsq function from the NumPy Python
package to regress Equation (1) to the available RVs for each TOI. Once again,
for TOIs with HARPS RV data that spans the pre and post upgrade eras as well
as the TOIs with data from both HARPS and LCES, we used the nonlinear
Nelder-Mead minimization routine to solve the piecewise equation accounting for the
constant offset between the two observation periods or the two different
instruments.
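For a fixed period and transit time, Equation (1) is linear in
$(a_{0},a_{1},a_{2},K_{\mathrm{circ}})$, so a single weighted least-squares
solve suffices. A minimal sketch (illustrative names; the piecewise offset
handling for HARPS is omitted):

```python
import numpy as np

def fit_trend_plus_circular(t, rv, rv_err, P, tau, degree=2):
    """Weighted linear least-squares fit of Equation (1) at fixed P and tau.
    Columns: (t - t0)^0 .. (t - t0)^degree and -sin(2*pi*(t - tau)/P), so the
    last coefficient is K_circ (allowed to go negative by construction)."""
    t0 = 0.5 * (t.min() + t.max())       # pivot near the baseline mid-point
    w = 1.0 / rv_err                     # sqrt of inverse-square weights
    cols = [(t - t0) ** d for d in range(degree + 1)]
    cols.append(-np.sin(2.0 * np.pi * (t - tau) / P))
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A * w[:, None], rv * w, rcond=None)
    chi2 = np.sum((w * (rv - A @ coeffs)) ** 2)
    return coeffs, chi2
```

The sign convention on the sinusoid column means a genuine planet at the
transit ephemeris yields a positive fitted $K_{\mathrm{circ}}$.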
Since this is a fit with some parameter flexibility, the resulting
$\chi^{2}$ will always be better than that obtained without the sinusoid
present. Thus, an improvement in $\chi^{2}$ is not sufficient to claim a
detection. Further, the noise properties cannot be assumed to behave as
strictly Gaussian and thus we avoid making detection claims based on the
degree to which $\chi^{2}$ improves either. In light of these points, how can
one go about evaluating a probability for the reality of these signals?
We approach this through bootstrapping. Specifically, we repeat the same
procedure described above but with a different (and ultimately false)
ephemeris. The orbital period is drawn from a probability distribution which
approximates the observed TOI period distribution, but we exclude any periods
which are within 20$\%$ of the true answer. This approximate distribution was
found by first inspecting the distribution of the log-periods of the 2330
available TOIs, which exhibit an approximately triangular distribution mixed
with a background uniform distribution. We performed likelihood maximization
of a uniform+triangular mixture model, with support defined over the range of
the available log-periods, yielding a mixture model which is 0.777 triangular,
whose shape has a mode at $\log P=0.34$, a minimum at $-0.026$ and a maximum
at $3.804$. After a period is selected from this distribution, the phase is
simply randomized uniformly. For each random ephemeris, a linear equation with
an inverted sinusoid and trend is fit (or non-linear piecewise equation for
the TOIs where this is necessary), and the $\chi^{2}$ improvement is recorded.
Since the real fit allows negative $K$ values, the exact same procedure and
rule set is used for the bootstrap to keep everything like-for-like.
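The bootstrap described above can be sketched as follows. The mixture
parameters are those quoted in the text; draw_fake_period, bootstrap_fap, and
the delta_chi2_of callback are illustrative names, and the fitting function
itself (e.g. the Equation (1) solve) is supplied by the caller:

```python
import numpy as np

rng = np.random.default_rng(42)

# Mixture approximating the TOI log10-period distribution quoted in the text:
# 77.7% triangular (min -0.026, mode 0.34, max 3.804) plus a uniform background.
LOGP_MIN, LOGP_MODE, LOGP_MAX, W_TRI = -0.026, 0.34, 3.804, 0.777

def draw_fake_period(P_true):
    """Draw a bootstrap period, rejecting anything within 20% of P_true."""
    while True:
        if rng.random() < W_TRI:
            logP = rng.triangular(LOGP_MIN, LOGP_MODE, LOGP_MAX)
        else:
            logP = rng.uniform(LOGP_MIN, LOGP_MAX)
        P = 10.0 ** logP
        if abs(P - P_true) / P_true > 0.20:
            return P

def bootstrap_fap(delta_chi2_true, delta_chi2_of, P_true, n_boot=1000):
    """One-tailed FAP: the fraction of fake ephemerides whose chi^2
    improvement beats that of the true ephemeris.  delta_chi2_of(P, tau)
    is a caller-supplied function wrapping the trend + circular fit."""
    hits = 0
    for _ in range(n_boot):
        P = draw_fake_period(P_true)
        tau = rng.uniform(0.0, P)        # uniformly randomized phase
        if delta_chi2_of(P, tau) > delta_chi2_true:
            hits += 1
    return hits / n_boot
```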
A false-alarm probability (FAP) score can then be computed by asking how
often the fake ephemerides lead to a $\chi^{2}$ improvement that is greater
than the improvement obtained with the true ephemeris. This can be
quantified using a one-tailed $p$-value, similar to the typical
bootstrapping applied to periodogram analyses in RV surveys. RV signals driven
by stellar activity can occur across a broad range of frequency space and, in
general, have no reason to coincide with a series of periodic and
statistically significant box-shaped dips that represent a TOI. In this way
then, by evaluating the power at different random but representative
ephemerides, our FAP scores inflate in the presence of such behavior. A
consequence of this is that our approach may obtain false-negatives, genuine
RV planets that we reject because there is an activity signal present.
Nevertheless, we prefer to err on the side of being conservative in this sense
when validating planets in this work.
Following the validation work of Morton et al. (2016) on Kepler transiting
planets, we consider any FAP score lower than 1% grounds for potential validation
(subject to some further checks and tests). The FAP scores are listed in Table
LABEL:tab:FAPtable, where 1 of the 18 TOIs exhibits a FAP below 1%:
TOI-1055.01 (HD 183579.01). Figure 1 includes a histogram with the results of
the FAP test for this planet.
Four other TOIs exhibit FAP scores below 5%. In this work, these TOIs will not
be considered further as candidates for validation, but we note that they are
likely planets. These are TOI-260.01 (GJ 1008.01), TOI-560.01 (GJ 313.01),
TOI-1611.01 (HD 207897.01), and TOI-1827.01 (Wolf 437.01).
Figure 1: Results of the FAP test for HD 183579.01. The FAP percentage is
reported in the upper left corner, and the $\chi^{2}$ value of the true
linear fit (using the real $P$ and $\tau$) is denoted by a dashed black line.
### 2.4 RadVel Modeling and Testing for Non-Zero Semi-Amplitudes
To confirm the validity of this TOI as a planet, we conducted a more thorough
analysis of its RV data and the light curve of its host star and then ran two
additional tests. In total, 54 RV measurements of HD 183579 were taken by
HARPS over the course of 5.5 years. To analyze these RVs, we utilize the
RadVel package (Fulton et al., 2018).
RadVel uses MCMC regression to fit for various physical parameters including
$P$, $\tau$, $e$ (eccentricity), $\omega$ (argument of periastron), $K$, and
RV jitter. Since the object is transiting, it will be subject to the
eccentricity bias affecting transiting bodies due to geometric probability
(Barnes, 2007; Burke, 2008). This can be formally accounted for by using the
$e$-$\omega$ joint prior of Kipping (2014b), specifically their Equation (23).
However, the prior is unstable for an intrinsically uniform prior in $e$,
described by $\alpha=1$ and $\beta=1$ in that expression. Instead, we use
a Beta distribution as the intrinsic prior for $e$, with $\alpha=1$ and $\beta=2$.
Formally, the RV population of short period planets is better described by
$\alpha=1$ and $\beta=3$, which places more emphasis on low-eccentricity
orbits. We elect to use $\beta=2$ to create a softer, flatter, and more
uninformative prior, yet one which is stable and gently favors more circular
orbits. The intrinsic prior on $\omega$ is uniform over the $2\pi$
interval.
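For reference, the Beta($\alpha=1$, $\beta=2$) intrinsic prior has the simple
closed form $2(1-e)$, finite at $e=0$ (where the $\alpha=\beta=1$ case
destabilizes the joint prior) and vanishing at $e=1$. A small sketch with an
inverse-CDF sampler; the function names are illustrative:

```python
import math

def beta_pdf(e, a=1.0, b=2.0):
    """Density of the Beta(a, b) eccentricity prior on [0, 1].
    For a=1, b=2 this reduces to 2*(1 - e)."""
    norm = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return norm * e ** (a - 1.0) * (1.0 - e) ** (b - 1.0)

def sample_beta12(u):
    """Inverse-CDF draw from Beta(1, 2) given u ~ Uniform(0, 1):
    CDF(e) = 1 - (1 - e)^2, so e = 1 - sqrt(1 - u)."""
    return 1.0 - math.sqrt(1.0 - u)
```

The $\alpha=1$, $\beta=3$ alternative mentioned in the text corresponds to
the steeper density $3(1-e)^{2}$.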
Since $P$ and $\tau$ are strongly constrained from the transit light curve, we
employ a Gaussian prior on these terms at their respective TESS ephemeris
values. For the RV jitter parameter, $\sigma_{\mathrm{jitter}}$, we employ a
broad log-uniform prior from 10$\%$ of the median RV error up to twice the
range of the RV data. For $\gamma$ (RV offset), $\dot{\gamma}$ (RV drift), and
$\ddot{\gamma}$ (RV curvature), we employ the default RadVel settings of a
uniform prior with initial guesses of the median RV value, 0 m s-1, and 0 m s-1,
respectively. The bounds on these terms are set by the range of the RV data in
hand. For $K$, we wanted to ensure zero-valued and negative solutions were
free to be explored and so we adopt a uniform prior from zero minus twice the
range of the RV data to zero plus twice the range of the RV data. The upper
and lower limits on this prior are chosen to simply allow any detectable
signal with a period shorter than the baseline to be modeled by RadVel.
Our RadVel fits use the default mode of running 8 independent ensembles in
parallel with 50 walkers per ensemble for up to a maximum of 10000 steps per
walker, or until convergence is reached (see Fulton et al. 2018 for further
details). We also inspected the posteriors to check for convergence and mixing
and then use them in our calculations of physical properties, along with the
transit posteriors. We make these posteriors publicly available at this URL.
Once the fits were finished, we conducted two basic checks on the marginalized
posterior distribution for $K$. First, if the median of the distribution is
negative, the TOI was discarded, which occurred for TOI-461.01 and TOI-253.01.
However, we note that neither of these had low FAPs and thus would have been
rejected regardless. Second, for our TOIs with a FAP score below 1% (which is
just TOI-1055.01), we wanted to test if the $K$ posterior was significantly
pulled away from zero, implying a positive detection. Using Bayesian evidences
is somewhat unsatisfactory here because those values would strongly depend
upon the width of our prior. In particular, for $K$, there is no obvious upper
limit, so the prior range can be increased arbitrarily, diluting the Bayesian
evidences. Instead, we argue that a better test is the classic Lucy-Sweeney
test to evaluate if a parameter is offset from zero (Lucy & Sweeney, 1971).
The Lucy-Sweeney test returns a FAP that the parameter in question is
consistent with zero, which we report in the penultimate column of Table
LABEL:tab:FAPtable. The 1% FAP planet validation threshold of Morton et al.
(2016) is again used as a minimum threshold (in addition to the previous
tests) for a candidate to be considered validated.
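As a sketch of this check, a common closed form in the spirit of Lucy &
Sweeney (1971) treats the FAP that a positive-definite amplitude measured as
$K\pm\sigma_{K}$ is consistent with zero as a Gaussian tail term; this is
illustrative only, since the test as applied in our analysis operates on the
full $K$ posterior:

```python
import math

def lucy_sweeney_fap(value, sigma):
    """False-alarm probability, in the spirit of Lucy & Sweeney (1971), that
    a positive-definite amplitude measured as value +/- sigma is consistent
    with zero: p = exp(-value^2 / (2 sigma^2))."""
    return math.exp(-0.5 * (value / sigma) ** 2)
```

For the $K\approx 4.7\pm 1.2$ m/s solution of TOI-1055.01, this approximation
already falls well below the 1% threshold.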
### 2.5 Statistical Validation of HD 183579b
The TOI found earlier (see Section 2.3) to exhibit $<1$% FAPs with the Monte
Carlo test also exhibits a $<1$% FAP with the Lucy-Sweeney test (see Table
LABEL:tab:FAPtable), as well as a positive median $K$ value. Thus the RadVel
solution indicates positive statistical evidence for a signal at the TESS
ephemerides for TOI-1055.01. The corresponding RV curves are shown in Figure
2.
As an additional check, we evaluated the one-sided $p$-value of the
a-posteriori median $K$ value against the predictions from forecaster, just
to ensure the fits are physically plausible. Since forecaster is an empirical
mass-radius relation, this essentially asks whether the implied planetary
densities are in the range one would expect for a planet of its size. Once
again, this TOI did not have a suspicious $p$-value (less than 0.05) and thus
appears physically sound.
We also re-visited the FAP calculation with consideration of the trend model
used. The existence of either a linear or quadratic trend appears
statistically secure with $\mathrm{BIC}_{\mathrm{quad}}=636.7$ and
$\mathrm{BIC}_{\mathrm{lin}}=641.7$, but $\mathrm{BIC}_{\mathrm{flat}}=672.8$,
indicating that $\Delta\mathrm{BIC}>30$. Although the quadratic model is
favored over the linear model, we repeated the FAP calculation with a linear
model only, and obtained an even better FAP score of 0.22%. The low FAP score
thus appears robust between these two competitive trend models.
Finally, we examined the data validation (DV) reports for this TOI in order to
confirm that there were no indicators of false positive signals. DV reports
are generated by the NASA Science Processing Operations Center (SPOC) pipeline
(Jenkins et al., 2016) for threshold crossing events (TCEs) in short cadence
observations and by the MIT Quick Look Pipeline (QLP, Huang et al. 2020) for
TCEs in long cadence observations (full frame images). TOI-1055.01 has public
DV reports from both the SPOC pipeline and the QLP pipeline. We checked these
DV reports to look for red flags such as centroid offsets, differences in odd
and even transits, or correlation between the flux depth and the aperture
size. We again find no reason to suspect the transit signals to be spurious.
We also note that the transit signal was independently inspected by Giacalone
et al. (2020) who find a 2% FAP from the light curve, not enough to validate
but again indicating a likely real planet.
From the passing of these checks in combination with our FAP scoring, we
conclude that TOI-1055.01 (HD 183579.01) is most likely a real planet to
$>99$% confidence and refer to it as statistically validated in what follows,
thus updating its moniker to HD 183579b.
Figure 2: The RV data fit by RadVel for HD 183579b (TOI-1055b). The first and
second subplots from the top show RadVel’s fit to the raw data and the
residuals of those fits. The bottom subplot shows RadVel’s fit to the
phase-folded RV data. The black points in the phase-folded plot represent
binned data.
### 2.6 A Note on Outliers
We note that for several TOIs, the omission of outlier RVs has a noticeable
impact on their corresponding FAP score and Lucy-Sweeney results. RVs
considered as outliers are at least $6\sigma$ from both the favored long-term
trend model and the trend + circular fit, and also have large error bars
compared to the other RVs. TOI-253.01, while quite far from being validated,
saw its FAP score improve by 20% with the omission of an outlier RV.
TOI-260.01 and TOI-1827.01 were just on the threshold of validation with the
exclusion of one outlier RV each, but not quite past the $>99\%$ benchmark.
Nonetheless, these two objects remain highly interesting and with further
observation, may prove to be planets.
## 3 Transit and Isochrones Analysis
### 3.1 Stellar Isochrones
To complete our picture of the HD 183579 system, we require fundamental
stellar parameters. To this end, we performed a stellar isochrone analysis
using the isochrones package by Morton (2015). The isochrones package takes
the observable stellar properties as inputs and uses these to derive
fundamental properties by matching to stellar evolution models - in our case
we employed the Dartmouth models (Dotter et al., 2008).
As inputs, we start with the apparent magnitude in $V$-band reported by Koen
et al. (2010) and Høg et al. (2000), and in the 2MASS $J$, $H$, $K$ bands by
Cutri et al. (2003). Next, we searched the literature for stellar atmosphere
parameters and elected to use the precise atmosphere parameters reported in
Luck (2018), which leverage the public HARPS spectra.
We also used the Gaia DR2 parallax from Luri et al. (2018) as a luminosity
indicator. This was included as an extra constraint on the stellar luminosity
in the isochrone fits (Bakos et al., 2010). Although our target is bright by
exoplanet standards, the $G$-band magnitude of HD 183579 is 8.5, and it is thus
significantly fainter than the $G\lesssim 5$ range highlighted by Drimmel,
Bucciarelli, & Inno (2019) as exhibiting strong biases. However, we do account
for the much smaller systematic parallax error reported by Stassun & Torres
(2018). All of these input parameters are listed in Table
LABEL:tab:table_stellar. We ran isochrones (Morton, 2015) until 100,000
posterior samples had been generated.
We note that for this star, we obtain good agreement between the light curve
derived stellar density and that from our isochrone analysis, with a ratio of
$1.06\pm 0.22$. Unaccounted for blend sources would cause this ratio of these
two to deviate from unity (Kipping, 2014a) and if the transits were associated
with a completely different star (e.g. in the background) then the difference
could be very large. The fact that this case has a log density ratio,
$\log\Psi$, within one sigma of zero (see the $\log\Psi$ row in Table
LABEL:tab:table_stellar) thus
provides additional support that this is indeed a genuine planet transiting
the target star.
Table 2: Summary of the stellar parameters calculated from the isochrone analysis for the host star of HD 183579b. The parameters below the horizontal line are the physical dimensions of the star. Parameter | Units | HD 183579 (TOI-1055)
---|---|---
$V$ | $V$-band Magnitude | $8.68\pm 0.01$
$J$ | $J$-band Magnitude | $7.518\pm 0.023$
$H$ | $H$-band Magnitude | $7.231\pm 0.047$
$K$ | $K$-band Magnitude | $7.150\pm 0.027$
$T_{\mathrm{eff}}$ | Effective Temperature (K) | $5788\pm 44$
Fe/H | Iron-to-Hydrogen Ratio | $-0.023\pm 0.050$
$\log(g)$ | Surface Gravity | $4.50\pm 0.03$
$\pi$ | Parallax (mas) | $17.516\pm 0.066$
$d$ | Distance (pc) | $57.06_{-0.24}^{+0.25}$
$M_{\star}$ | Stellar Mass $(M_{\odot})$ | $1.031_{-0.026}^{+0.025}$
$R_{\star}$ | Stellar Radius $(R_{\odot})$ | $0.985_{-0.026}^{+0.037}$
$\log_{10}(L)$ | Log Luminosity | $0.012_{-0.032}^{+0.043}$
Age | Age (Gyr) | $2.6_{-1.2}^{+1.4}$
$\rho_{\star}$ | Stellar Density (g cm-3) | $1.52_{-0.16}^{+0.13}$
### 3.2 Transit Analysis
We further improve our understanding of the validated planet by including an
analysis of its transit light curve. We downloaded the 2-minute Pre-search
Data Conditioning (PDC) light curve for this source from the Mikulski Archive
for Space Telescopes (MAST). At the time of writing, TOI-1055b had been observed
in Sectors 13 and 27, exhibiting 2 and 1 transits respectively.
The light curve was cleaned of time stamps indicating error codes and of
outliers using a moving median filter. We then detrended the light curve of long-term
trends following the method marginalization approach described in Kipping et
al. (2019). As in that paper, the scatter between different model detrendings
is propagated into the updated formal uncertainties on our method marginalized
light curve.
The transit light curve was then fit using a 9-parameter Mandel & Agol (2002)
forward model coupled to a multimodal nested sampling algorithm, MultiNest
(Feroz, Hobson, & Bridges, 2009). Limb darkening was modeled using a quadratic
law but re-parameterized to the $q_{1}$-$q_{2}$ formulation of Kipping
(2013b), to enable efficient exploration of the parameter volume. We
parameterize the rest of the transit model with seven other terms: the time of
transit minimum, $\tau$, the orbital period, $P$, the impact parameter, $b$,
the ratio-of-radii, $p$, the mean stellar density, $\rho_{\star}$, the orbital
eccentricity, $e$, and the argument of periastron, $\omega$. For many of
these, we adopt a simple uniform prior ($q_{1}\in[0,1]$, $q_{2}\in[0,1]$,
$\tau\in[\hat{\tau}-0.1,\hat{\tau}+0.1]$, $P\in[\hat{P}-0.1,\hat{P}+0.1]$,
$b\in[0,2]$, $p\in[0,1]$). Note that $\hat{P}$ and $\hat{\tau}$ are the TESS
reported best-fitting ephemeris parameters.
Eccentricity and mean stellar density are degenerate in a light curve fit
(Kipping, 2014a), and so we use the stellar mean density derived from our
isochrone analysis (see Section 3.1) as an informative prior. After trying
several different parameteric distributions to describe the isochrone derived
stellar density distribution, we found the following provided a good
approximation: $\rho_{\star}\sim\mathcal{W}[1563,11]$ kg m-3 (where
$\mathcal{W}$ is a Weibull distribution). For eccentricity and argument of
periastron, we use the same joint prior as described earlier in Section 2.4.
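The density prior can be evaluated directly; a minimal sketch assuming the
notation $\mathcal{W}[\mathrm{scale},\mathrm{shape}]$ (i.e. scale 1563
kg m$^{-3}$, shape 11), which places the mode of the distribution near the
isochrone median density:

```python
import math

def weibull_density_prior(rho, scale=1563.0, shape=11.0):
    """Weibull density for the stellar-density prior rho_star ~ W[1563, 11]
    in kg m^-3, assuming W[scale, shape].  The mode,
    scale * ((shape - 1)/shape)^(1/shape) ~ 1550 kg m^-3, sits close to the
    isochrone median density."""
    z = rho / scale
    return (shape / scale) * z ** (shape - 1.0) * math.exp(-(z ** shape))
```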
We make the posterior samples publicly available for this regression at the
aforementioned GitHub repo. The maximum a-posteriori solution is plotted in
Figure 3 for HD 183579b. The physical parameters implied by this fit are
discussed later in Section 4.1.
Figure 3: Phase-folded transit light curve of HD 183579b (TOI-1055b) as
observed by TESS. The black points represent the method marginalized detrended
2-minute TESS photometry and the red line shows the maximum a-posteriori fit
from our regressions. The lower panel shows the residuals between the two.
### 3.3 Refined RadVel Fits
Although we have obtained an orbital fit for the radial velocities earlier in
Section 2.4, that analysis did not include any eccentricity constraints from
the transit fit, since the TOIs remained unvalidated at that time. Having now
validated HD 183579b we re-run the RadVel fit for this planet including the
eccentricity constraints from the transit to improve the overall precision in
our final system parameters.
This is accomplished by introducing a modification to the RadVel likelihood
function that accounts for this constraint on the orbital eccentricity. The
ratio of light curve derived stellar density (see Section 3.2) to that from an
independent measure - in our case from isochrones (see Section 3.1) - directly
yields $\Psi\equiv(1+e\sin\omega)^{3}(1-e^{2})^{-3/2}$, as shown in Kipping
(2010) (see their Equation 39).
To implement this constraint, in the logprob function of the RVLikelihood
class of the module’s likelihood.py file, we added a custom log-likelihood
function describing the agreement between each trial’s predicted $\log\Psi$
versus observed $\log\Psi$ (log of the density ratio) value. This was achieved
by using kernel density estimation on the transit $\log\Psi$ posteriors that
was then used to tabulate a grid of log-like versus $\log\Psi$, which was then
approximated with a piecewise fourth-order polynomial with a break at
$\log\Psi=0$. We sampled this function in a test MCMC to ensure it reproduces
the eccentricity distribution from the transits, as expected. This typically
helps reduce the amount of time RadVel spends exploring highly eccentric
solutions and keeps the radial velocity solution in line with that found from
the transit analysis (see Section 3.2).
Note that since we use the posteriors from the transit fit to construct the
revised RadVel prior, it is not necessary (or indeed allowed) to include the
intrinsic Beta prior and transit bias priors from before, since these are
already baked into the $\log\Psi$ posterior.
## 4 Discussion
### 4.1 Properties of HD 183579b
In this work, we report the validation of one planet orbiting HD 183579, which
represents a new exoplanet. A summary table of the physical properties is
shown in Table 3.
Table 3: Median and $\pm 34.1$% quantiles of the joint posteriors for HD 183579b’s fitted parameters (top) and derived parameters (bottom). $\dagger$: TESS BJD is equivalent to BJD - 2457000. Parameter | Value
---|---
$P$ [days] | $17.471278_{-0.000060}^{+0.000058}$
$\tau$ [TESS BJD] $\dagger$ | $1661.06315_{-0.00077}^{+0.00078}$
$p\equiv R_{P}/R_{\star}$ | $0.03300_{-0.00059}^{+0.00063}$
$b$ | $0.32_{-0.20}^{+0.17}$
$\rho_{\star}$ [g cm-3] | $1.53_{-0.17}^{+0.13}$
$q_{1}$ | $0.38_{-0.17}^{+0.26}$
$q_{2}$ | $0.24_{-0.16}^{+0.30}$
$e$ | $<0.28$ [2 $\sigma$]
$K$ [m s-1] | $4.9_{-1.0}^{+0.9}$
$\gamma$ [m s-1] | $5.2_{-1.6}^{+1.5}$; $-3.3_{-1.0}^{+1.1}$
$\dot{\gamma}$ [m s-1 yr-1] | $-3.2_{-0.7}^{+0.8}$
$\ddot{\gamma}$ [m s-1 yr-2] | $0.063_{-0.048}^{+0.043}$
$\sigma_{\mathrm{jitter}}$ [m s-1] | $2.64_{-0.57}^{+0.81}$; $3.61_{-0.55}^{+0.67}$
$R_{P}$ [$R_{\oplus}$] | $3.55_{-0.12}^{+0.15}$
$M_{P}$ [$M_{\oplus}$] | $19.7_{-3.9}^{+4.0}$
$\rho_{P}$ [g cm-3] | $2.39_{-0.54}^{+0.57}$
$i$ [∘] | $89.33_{-0.40}^{+0.41}$
$a/R_{\star}$ | $29.1_{-1.1}^{+0.8}$
$a$ [AU] | $0.1334_{-0.0061}^{+0.0062}$
$T_{14}$ [hours] | $4.36_{-0.51}^{+0.23}$
$T_{23}$ [hours] | $4.04_{-0.50}^{+0.25}$
$\tilde{T}$ [hours] | $4.20_{-0.50}^{+0.24}$
$u_{1}$ | $0.59_{-0.21}^{+0.17}$
$u_{2}$ | $0.01_{-0.24}^{+0.33}$
$\log\Psi$ | $0.06_{-0.22}^{+0.17}$
$S$ [$S_{\oplus}$] | $58.1_{-3.9}^{+5.3}$
$T_{\mathrm{blackbody}}$ [K] | $769_{-13}^{+17}$
TSM | $72_{-13}^{+19}$
HD 183579b orbits the G2V host star HD 183579 located $d=(57.37\pm 0.19)$ pc
away in the Telescopium constellation. This star is notably bright in both the
optical and infrared at $V=8.67$ and $K=7.15$, and therefore offers favorable
conditions for follow up observations. From our isochrone analysis, we
determine that HD 183579 is $(1.03\pm 0.051)$ $M_{\odot}$ and $(1.022\pm
0.071)$ $R_{\odot}$, which imply a slightly earlier type than that reported in
TIC-8 ($1.04\pm 0.14$ $M_{\odot}$ and $0.975\pm 0.055$ $R_{\odot}$; Stassun et
al. 2019). The mass, radius, and spectral type of HD 183579 are remarkably
similar to that of the Sun. In fact, HD 183579 has been the subject of
various analyses of Sun-like stars, including those for chemical abundances
(Bedell et al., 2018), infrared excess (Da Costa et al., 2017), and stellar
age compared to chemical composition (Tucci Maia et al., 2016). Each of these studies
indicates that HD 183579 exhibits the typical properties of a Solar twin,
including having a spectrum very similar to the Sun.
The RV measurements for this star come from HARPS, with 53 measurements
spanning the dates October 13, 2011 to October 21, 2017. We determine that the
quadrature jitter term is approximately 3 m/s, close to the median formal
uncertainties for the data set of 1.2 m/s and indicating that the star is
relatively quiet. The target is flagged as having an “unambiguous” rotational
modulation by Canto Martins et al. (2020) with a clear periodicity present in
the light curve at 8.8 days. Regressing a sinusoid to the Sector 13 light
curve, we obtain an amplitude of 260 ppm, against which there is residual
scatter of 420 ppm - consistent with the median formal uncertainty of 409 ppm.
In Sector 27, we find almost the same periodicity (8.9 days) of amplitude 240
ppm, against which there is residual scatter of 417 ppm - consistent with the
median formal uncertainty of 374 ppm. We thus conclude that the star likely
exhibits rotational modulations due to spots, but this activity is small, at
${\sim}250$ ppm, and thus generally consistent with a quiet star.
For HD 183579b, we report a radius of $(3.55\pm 0.13)$ $R_{\oplus}$, thus
placing it firmly in the Neptunian category of Chen & Kipping (2017). We
determine a mass for HD 183579b of $M_{P}=19.7_{-3.9}^{+4.0}$ $M_{\oplus}$,
indicating a bulk density of $\rho_{P}=2.39_{-0.54}^{+0.57}$ g cm-3. The
planetary mass and radius indicate that HD 183579b resembles Neptune/Uranus in
bulk density, and perhaps has thus migrated inwards from beyond the ice-line.
Figure 4 is a standard mass-radius diagram demonstrating where HD 183579b
falls in a distribution of known planets.
Figure 4: A mass-radius diagram demonstrating where the newly validated
planet lies amongst the population of known planets. HD 183579b is represented
by a red square. Contour lines indicate levels of constant density. HD 183579b
has dimensions consistent with a Neptunian exoplanet.
HD 183579b orbits its host star once every $17.5$ days at a semi-major axis of
$(0.1334\pm 0.0062)$ AU. From the transit morphology, we find that the orbital
eccentricity is consistent with a circular path with a median of
$0.14_{-0.10}^{+0.26}$. The FAP of this being eccentric using the Lucy &
Sweeney (1971) test is 37%, thus favoring a circular orbit. Further, using the
Savage-Dickey ratio (Dickey, 1971), we compute the eccentric-to-circular
Bayes factor to be $0.39$, again indicating a preference for the circular
solution. Using just the transits, we conclude $e<0.66$ to
95.45% confidence.
The transit posteriors imply a constraint on $\log\Psi=0.06_{-0.22}^{+0.17}$
(median and standard deviation) which is, recall, propagated as a prior
constraint on eccentricity in our RadVel fits. From RadVel, the eccentricity
constraints are slightly improved by the inclusion of the RV information,
yielding $e=0.14_{-0.08}^{+0.07}$, which may suggest a small amount of
eccentricity offering clues to this planet’s past. However, we caution that
neither the Savage-Dickey ratio nor the Lucy-Sweeney test formally favor an
elliptical orbit at this point. We conclude that $e<0.27$ to 95.45%
confidence, and once again remark that an RV trend appears to indicate an
outer body with a significance of $\Delta\mathrm{BIC}>30$.
From our measured mass and radius, we calculate a transmission spectroscopy
metric (TSM) for HD 183579b using Equations (1) & (2) of Kempton et al. (2018)
to indicate the expected signal-to-noise ratio for future James Webb Space
Telescope (JWST) measurements. Our calculated TSM$=72_{-13}^{+19}$ indicates
that HD 183579b is a promising object for future JWST observations.
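For reference, the TSM computation can be sketched as below; the radius-bin scale factors and the equilibrium-temperature expression follow our reading of Kempton et al. (2018), and the input values are round-number placeholders rather than our fitted parameters for HD 183579b.

```python
import math

def equilibrium_temp(t_star, r_star_rsun, a_au):
    """Zero-albedo, full-redistribution T_eq (Kempton et al. 2018, Eq. 2)."""
    r_star_au = r_star_rsun * 0.00465047  # solar radius in AU
    return t_star * math.sqrt(r_star_au / a_au) * 0.25 ** 0.25

def tsm(r_p, m_p, r_star_rsun, t_star, a_au, j_mag):
    """Transmission spectroscopy metric (Kempton et al. 2018, Eq. 1).

    r_p and m_p in Earth units; the scale factor is set by the
    planet-radius bin of the reference paper.
    """
    if r_p < 1.5:
        scale = 0.190
    elif r_p < 2.75:
        scale = 1.26
    elif r_p < 4.0:
        scale = 1.28
    else:
        scale = 1.15
    t_eq = equilibrium_temp(t_star, r_star_rsun, a_au)
    return scale * r_p ** 3 * t_eq / (m_p * r_star_rsun ** 2) * 10 ** (-j_mag / 5)

# Placeholder inputs for a Neptune-sized planet around a Sun-like star
value = tsm(r_p=3.5, m_p=11.0, r_star_rsun=1.0, t_star=5800.0,
            a_au=0.133, j_mag=7.5)
```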
### 4.2 The Use of Archival RVs
Using exclusively publicly available resources, we were able to validate and
characterize the physical properties of one Neptune sized exoplanet, HD
183579b. Thus, existing data advance TESS’s primary objective of measuring the
masses and radii of 50 small ($<4$ $R_{\oplus}$) exoplanets (Ricker et al.
2015).
The planet itself is a fascinating world that will likely be amongst the rare
planets observed by JWST, thanks to its small size and excellent
observability. But, we would also like to highlight that the technique used to
validate this object could be extended and utilized in the future. For
example, in this work we did not consider multiple planet systems as the FAP
scoring system would require some modification to handle the multiple
periodicities. Nevertheless, multiples are intrinsically more likely to be
genuine planets (Lissauer et al., 2012) and thus would need less of a nudge in
a probability-sense to become validated planets. Our work highlights the great
power of legacy RV surveys in synergy with active missions, such as TESS. And
it demonstrates that the knee-jerk reaction of going to the telescope to get
new data is not always necessary; in some cases, existing archives may in fact
already serve the desired goal.
## Acknowledgements
DK acknowledges support from Columbia’s Data Science Institute. The Cool
Worlds group thanks Tom Widdowson, Mark Sloan, Douglas Daughaday, Andrew
Jones, Jason Allen, Marc Lijoi, Elena West, Tristan Zajonc, Chuck Wolfred,
Lasse Skov, Geoff Suter, Max Wallstab, Methven Forbes, Stephen Lee, Zachary
Danielson & Vasilen Alexandrov.
This research has made use of the NASA Exoplanet Archive, which is operated by
the California Institute of Technology, under contract with the National
Aeronautics and Space Administration under the Exoplanet Exploration Program.
This research has made use of the SIMBAD database, operated at CDS,
Strasbourg, France.
This work has made use of data from the European Space Agency (ESA) mission
Gaia, processed by the Gaia Data Processing and Analysis Consortium (DPAC).
Funding for the DPAC has been provided by national institutions, in particular
the institutions participating in the Gaia Multilateral Agreement.
This paper includes data collected with the TESS mission, obtained from the
MAST data archive at the Space Telescope Science Institute (STScI). Funding
for the TESS mission is provided by the NASA Explorer Program. STScI is
operated by the Association of Universities for Research in Astronomy, Inc.,
under NASA contract NAS 5-26555.
We acknowledge the use of public TESS Alert data from pipelines at the TESS
Science Office and at the TESS Science Processing Operations Center.
We are deeply grateful to the HARPS team at Observatoire de Genève,
Observatoire de Haute-Provence, Laboratoire d’Astrophysique de Marseille,
Service d’Aéronomie du CNRS, Physikalisches Institut de Universität Bern, ESO
La Silla, and ESO Garching, who built and maintained the HARPS instrument, and
were generous enough to make the data public.
Research at the Lick Observatory is partially supported by a generous gift
from Google. Some of the data presented herein were obtained at the W. M. Keck
Observatory, which is operated as a scientific partnership among the
California Institute of Technology, the University of California, and NASA.
The Observatory was made possible by the generous financial support of the
W.M. Keck Foundation.
We thank all of the observers who spent countless nights using both the HARPS
and LCES facilities to collect the data presented here and all of the PIs who
submitted telescope proposals year after year to allow the acquisition of
these data.
Finally, the authors wish to recognize and acknowledge the very significant
cultural role and reverence that the summit of Maunakea has always had within
the indigenous Hawaiian community. We are most fortunate to have benefited
from the observations obtained from this mountain.
Facilities: Keck:I (HIRES), TESS, HARPS
Software: emcee (Foreman-Mackey et al., 2013), MultiNest (Feroz, Hobson, &
Bridges, 2009), RadVel (Fulton et al., 2018), forecaster (Chen & Kipping,
2017), isochrones (Morton, 2015)
## References
* Almenara et al. (2018) Almenara J. M., Díaz R. F., Hébrard G., Mardling R., Damiani C., Santerne A., Bouchy F., et al., 2018, A&A, 615, A90. doi:10.1051/0004-6361/201732500
* Anglada-Escudé et al. (2013) Anglada-Escudé G., Rojas-Ayala B., Boss A. P., Weinberger A. J., Lloyd J. P., 2013, A&A, 551, A48. doi:10.1051/0004-6361/201219250
* Armstrong, Gamper, & Damoulas (2020) Armstrong D. J., Gamper J., Damoulas T., 2020, arXiv, arXiv:2008.10516
* Bakos et al. (2004) Bakos G., Noyes R. W., Kovács G., Stanek K. Z., Sasselov D. D., Domsa I., 2004, PASP, 116, 266. doi:10.1086/382735
* Bakos et al. (2010) Bakos, G. Á., Torres, G., Pál, A., et al. 2010, ApJ, 710, 1724. doi:10.1088/0004-637X/710/2/1724
* Barnes (2007) Barnes J. W., 2007, PASP, 119, 986. doi:10.1086/522039
* Burke (2008) Burke C. J., 2008, ApJ, 679, 1566. doi:10.1086/587798
* Batalha (2014) Batalha N. M., 2014, PNAS, 111, 12647. doi:10.1073/pnas.1304196111
* Bedell et al. (2018) Bedell M., Bean J. L., Meléndez J., Spina L., Ramírez I., Asplund M., Alves-Brito A., et al., 2018, ApJ, 865, 68. doi:10.3847/1538-4357/aad908
* Borucki et al. (2011) Borucki W. J., Koch D. G., Basri G., Batalha N., Brown T. M., Bryson S. T., Caldwell D., et al., 2011, ApJ, 736, 19. doi:10.1088/0004-637X/736/1/19
* Bryson et al. (2013) Bryson S. T., Jenkins J. M., Gilliland R. L., Twicken J. D., Clarke B., Rowe J., Caldwell D., et al., 2013, PASP, 125, 889. doi:10.1086/671767
* Butler et al. (2017) Butler R. P., Vogt S. S., Laughlin G., Burt J. A., Rivera E. J., Tuomi M., Teske J., et al., 2017, AJ, 153, 208. doi:10.3847/1538-3881/aa66ca
* Canto Martins et al. (2020) Canto Martins B. L., Gomes R. L., Messias Y. S., de Lira S. R., Leão I. C., Almeida L. A., Teixeira M. A., et al., 2020, ApJS, 250, 20. doi:10.3847/1538-4365/aba73f
* Chen & Kipping (2017) Chen J., Kipping D., 2017, ApJ, 834, 17. doi:10.3847/1538-4357/834/1/17
* Christiansen et al. (2020) Christiansen J. L., Clarke B. D., Burke C. J., Jenkins J. M., Bryson S. T., Coughlin J. L., Mullally S. E., et al., 2020, AJ, 160, 159. doi:10.3847/1538-3881/abab0b
* Collins et al. (2018) Collins K. A., Collins K. I., Pepper J., Labadie-Bartz J., Stassun K. G., Gaudi B. S., Bayliss D., et al., 2018, AJ, 156, 234. doi:10.3847/1538-3881/aae582
* Cutri et al. (2003) Cutri R. M., Skrutskie M. F., van Dyk S., Beichman C. A., Carpenter J. M., Chester T., Cambresy L., et al., 2003, yCat, II/246
* Da Costa et al. (2017) Da Costa A. D., Canto Martins B. L., Leão I. C., Lima J. E., Freire da Silva D., de Freitas D. B., De Medeiros J. R., 2017, ApJ, 837, 15. doi:10.3847/1538-4357/837/1/15
* Dickey (1971) Dickey, J., 1971, Ann. Math. Statist., 42, 204. doi:10.1214/aoms/1177693507
* Dotter et al. (2008) Dotter A., Chaboyer B., Jevremović D., Kostov V., Baron E., Ferguson J. W., 2008, ApJS, 178, 89. doi:10.1086/589654
* Drimmel, Bucciarelli, & Inno (2019) Drimmel R., Bucciarelli B., Inno L., 2019, RNAAS, 3, 79. doi:10.3847/2515-5172/ab2632
* Feroz, Hobson, & Bridges (2009) Feroz F., Hobson M. P., Bridges M., 2009, MNRAS, 398, 1601. doi:10.1111/j.1365-2966.2009.14548.x
* Foreman-Mackey et al. (2013) Foreman-Mackey D., Hogg D. W., Lang D., Goodman J., 2013, PASP, 125, 306. doi:10.1086/670067
* Fressin et al. (2013) Fressin F., Torres G., Charbonneau D., Bryson S. T., Christiansen J., Dressing C. D., Jenkins J. M., et al., 2013, ApJ, 766, 81. doi:10.1088/0004-637X/766/2/81
* Fulton et al. (2018) Fulton B. J., Petigura E. A., Blunt S., Sinukoff E., 2018, PASP, 130, 044504. doi:10.1088/1538-3873/aaaaa8
* Gaidos & Mann (2013) Gaidos E., Mann A. W., 2013, ApJ, 762, 41. doi:10.1088/0004-637X/762/1/41
* Giacalone et al. (2020) Giacalone S., Dressing C. D., Jensen E. L. N., Collins K. A., Ricker G. R., Vanderspek R., Seager S., et al., 2020, arXiv, arXiv:2002.00691
* Hébrard et al. (2019) Hébrard G., Bonomo A. S., Díaz R. F., Santerne A., Santos N. C., Almenara J.-M., Barros S. C. C., et al., 2019, A&A, 623, A104. doi:10.1051/0004-6361/201834333
* Høg et al. (2000) Høg E., Fabricius C., Makarov V. V., Urban S., Corbin T., Wycoff G., Bastian U., et al., 2000, A&A, 355, L27
* Huang et al. (2018) Huang, C. X., Burt, J., Vanderburg, A., et al. 2018, ApJL, 868, L39. doi:10.3847/2041-8213/aaef91
* Huang et al. (2020) Huang C. X., Vanderburg A., Pál A., Sha L., Yu L., Fong W., Fausnaugh M., et al., 2020, arXiv, arXiv:2011.06459
* Jenkins et al. (2016) Jenkins J. M., Twicken J. D., McCauliff S., Campbell J., Sanderfer D., Lung D., Mansouri-Samani M., et al., 2016, SPIE, 9913, 99133E. doi:10.1117/12.2233418
* Kempton et al. (2018) Kempton E. M.-R., Bean J. L., Louie D. R., Deming D., Koll D. D. B., Mansfield M., Christiansen J. L., et al., 2018, PASP, 130, 114401. doi:10.1088/1538-3873/aadf6f
* Kipping (2010) Kipping D. M., 2010, MNRAS, 407, 301. doi:10.1111/j.1365-2966.2010.16894.x
* Kipping (2013a) Kipping D. M., 2013a, MNRAS, 434, L51. doi:10.1093/mnrasl/slt075
* Kipping (2013b) Kipping D. M., 2013b, MNRAS, 435, 2152. doi:10.1093/mnras/stt1435
* Kipping (2014a) Kipping D. M., 2014a, MNRAS, 440, 2164. doi:10.1093/mnras/stu318
* Kipping (2014b) Kipping D. M., 2014b, MNRAS, 444, 2263. doi:10.1093/mnras/stu1561
* Kipping et al. (2019) Kipping D., Nesvorný D., Hartman J., Torres G., Bakos G., Jansen T., Teachey A., 2019, MNRAS, 486, 4980. doi:10.1093/mnras/stz1141
* Koen et al. (2010) Koen C., Kilkenny D., van Wyk F., Marang F., 2010, MNRAS, 403, 1949. doi:10.1111/j.1365-2966.2009.16182.x
* Leuquire et al. (2018) Leuquire J., Kasper D., Jang-Condell H., Kar A., Sorber R., Suhaimi A., KELT (Kilodegree Extremely Little Telescope), 2018, AAS
* Lissauer et al. (2012) Lissauer J. J., Marcy G. W., Rowe J. F., Bryson S. T., Adams E., Buchhave L. A., Ciardi D. R., et al., 2012, ApJ, 750, 112. doi:10.1088/0004-637X/750/2/112
* Luck (2018) Luck R. E., 2018, AJ, 155, 111. doi:10.3847/1538-3881/aaa9b5
* Lucy & Sweeney (1971) Lucy L. B., Sweeney M. A., 1971, AJ, 76, 544. doi:10.1086/111159
* Luri et al. (2018) Luri X., Brown A. G. A., Sarro L. M., Arenou F., Bailer-Jones C. A. L., Castro-Ginard A., de Bruijne J., et al., 2018, A&A, 616, A9. doi:10.1051/0004-6361/201832964
* Mandel & Agol (2002) Mandel K., Agol E., 2002, ApJL, 580, L171. doi:10.1086/345520
* Morton (2012) Morton T. D., 2012, ApJ, 761, 6. doi:10.1088/0004-637X/761/1/6
* Morton (2015) Morton T. D., 2015, ascl.soft. ascl:1503.010
* Morton et al. (2016) Morton T. D., Bryson S. T., Coughlin J. L., Rowe J. F., Ravichandran G., Petigura E. A., Haas M. R., et al., 2016, ApJ, 822, 86. doi:10.3847/0004-637X/822/2/86
* Pollacco et al. (2006) Pollacco D. L., Skillen I., Collier Cameron A., Christian D. J., Hellier C., Irwin J., Lister T. A., et al., 2006, PASP, 118, 1407. doi:10.1086/508556
* Ricker et al. (2015) Ricker G. R., Winn J. N., Vanderspek R., Latham D. W., Bakos G. Á., Bean J. L., Berta-Thompson Z. K., et al., 2015, JATIS, 1, 014003. doi:10.1117/1.JATIS.1.1.014003
* Rowe et al. (2014) Rowe J. F., Bryson S. T., Marcy G. W., Lissauer J. J., Jontof-Hutter D., Mullally F., Gilliland R. L., et al., 2014, ApJ, 784, 45. doi:10.1088/0004-637X/784/1/45
* Santerne et al. (2012) Santerne A., Díaz R. F., Moutou C., Bouchy F., Hébrard G., Almenara J.-M., Bonomo A. S., et al., 2012, A&A, 545, A76. doi:10.1051/0004-6361/201219608
* Santerne et al. (2013) Santerne A., Díaz R. F., Almenara J.-M., Lethuillier A., Deleuil M., Moutou C., 2013, sf2a.conf, 555
* Schwarz (1978) Schwarz, G. E., 1978, Annals of Statistics, 6, 461. doi:10.1214/aos/1176344136
* Sliski & Kipping (2014) Sliski D. H., Kipping D. M., 2014, ApJ, 788, 148. doi:10.1088/0004-637X/788/2/148
* Stassun & Torres (2018) Stassun K. G., Torres G., 2018, ApJ, 862, 61. doi:10.3847/1538-4357/aacafc
* Stassun et al. (2019) Stassun K. G., Oelkers R. J., Paegert M., Torres G., Pepper J., De Lee N., Collins K., et al., 2019, AJ, 158, 138. doi:10.3847/1538-3881/ab3467
* Sullivan et al. (2015) Sullivan P. W., Winn J. N., Berta-Thompson Z. K., Charbonneau D., Deming D., Dressing C. D., Latham D. W., et al., 2015, ApJ, 809, 77. doi:10.1088/0004-637X/809/1/77
* Teachey & Kipping (2018) Teachey A., Kipping D. M., 2018, SciA, 4, eaav1784. doi:10.1126/sciadv.aav1784
* Timmermann et al. (2020) Timmermann A., Heller R., Reiners A., Zechmeister M., 2020, A&A, 635, A59. doi:10.1051/0004-6361/201937325
* Torres et al. (2004) Torres G., Konacki M., Sasselov D. D., Jha S., 2004, ApJ, 614, 979. doi:10.1086/423734
* Torres et al. (2005) Torres G., Konacki M., Sasselov D. D., Jha S., 2005, ApJ, 619, 558. doi:10.1086/426496
* Torres et al. (2011) Torres G., Fressin F., Batalha N. M., Borucki W. J., Brown T. M., Bryson S. T., Buchhave L. A., et al., 2011, ApJ, 727, 24. doi:10.1088/0004-637X/727/1/24
* Torres et al. (2015) Torres G., Kipping D. M., Fressin F., Caldwell D. A., Twicken J. D., Ballard S., Batalha N. M., et al., 2015, ApJ, 800, 99. doi:10.1088/0004-637X/800/2/99
* Torres et al. (2017) Torres G., Kane S. R., Rowe J. F., Batalha N. M., Henze C. E., Ciardi D. R., Barclay T., et al., 2017, AJ, 154, 264. doi:10.3847/1538-3881/aa984b
* Trifonov et al. (2020) Trifonov T., Tal-Or L., Zechmeister M., Kaminski A., Zucker S., Mazeh T., 2020, A&A, 636, A74. doi:10.1051/0004-6361/201936686
* Tucci Maia et al. (2016) Tucci Maia M., Ramírez I., Meléndez J., Bedell M., Bean J. L., Asplund M., 2016, A&A, 590, A32. doi:10.1051/0004-6361/201527848
* Wittenmyer et al. (2016) Wittenmyer R. A., Butler R. P., Tinney C. G., Horner J., Carter B. D., Wright D. J., Jones H. R. A., et al., 2016, ApJ, 819, 28. doi:10.3847/0004-637X/819/1/28
11institutetext: Department of Physics, Imperial College London, London, SW7
2AZ, UK 22institutetext: Physics and Astronomy, The Johns Hopkins University,
3400 N. Charles Street, Baltimore, MD 21218, USA 33institutetext: Department
of Physics, Umeå University, 901 87 Umeå, Sweden 44institutetext:
Physikalisches Institut, University of Bern, Sidlerstrasse 5, 3012 Bern,
Switzerland 55institutetext: LESIA, Observatoire de Paris, Université PSL,
CNRS, Sorbonne Université, Université de Paris, 5 place Jules Janssen, 92195
Meudon, France 66institutetext: Southwest Research Institute, Department of
Space Studies, Suite 300, 1050 Walnut Street, Boulder, CO 80302, USA
77institutetext: SouthWest Research Institute, P.O. Drawer 28510, San Antonio,
TX 78228-0510, USA 88institutetext: Swedish Institute of Space Physics,
Ångström Laboratory, Lägerhyddsvägen 1, 75237 Uppsala, Sweden
# Multi-instrument analysis of far-ultraviolet aurora in the southern
hemisphere of comet 67P/Churyumov-Gerasimenko
P. Stephenson 11 M. Galand 11 P. D. Feldman 22 A. Beth 33 M. Rubin 44 D.
Bockelée-Morvan 55 N. Biver 55 Y.-C Cheng 55 J. Parker 66 J. Burch 77 F. L.
Johansson 88 A. Eriksson 88
(Received XXXX / Accepted XXXX)
###### Abstract
Aims. We aim to determine whether dissociative excitation of cometary neutrals
by electron impact is the major source of far-ultraviolet (FUV) emissions at
comet 67P/Churyumov-Gerasimenko in the southern hemisphere at large
heliocentric distances, both during quiet conditions and impacts of corotating
interaction regions observed in the summer of 2016.
Methods. We combined multiple datasets from the Rosetta mission through a
multi-instrument analysis to complete the first forward modelling of FUV
emissions in the southern hemisphere of comet 67P and compared modelled
brightnesses to observations with the Alice FUV imaging spectrograph. We
modelled the brightness of OI1356, OI1304, Lyman-$\beta$, CI1657, and CII1335
emissions, which are associated with the dissociation products of the four
major neutral species in the coma: CO2, H2O, CO, and O2. The suprathermal
electron population was probed by the Ion and Electron Sensor of the Rosetta
Plasma Consortium (RPC/IES) and the neutral column density was constrained by
several instruments: the Rosetta Orbiter Spectrometer for Ion and Neutral
Analysis (ROSINA), the Microwave Instrument for the Rosetta Orbiter (MIRO) and
the Visual InfraRed Thermal Imaging Spectrometer (VIRTIS).
Results. The modelled and observed brightnesses of the FUV emission lines
agree closely when viewing nadir, and dissociative excitation by electron
impact is shown to be the dominant source of emissions away from perihelion.
The CII1335 emissions are shown to be consistent with the volume mixing ratio
of CO derived from ROSINA. When viewing the limb during the impacts of
corotating interaction regions, the model reproduces brightnesses of OI1356
and CI1657 well, but resonance scattering in the extended coma may contribute
significantly to the observed Lyman-$\beta$ and OI1304 emissions. The
correlation between variations in the suprathermal electron flux and the
observed FUV line brightnesses when viewing the comet’s limb suggests
electrons are accelerated on large scales and that they originate in the solar
wind. This means that the FUV emissions are auroral in nature.
###### Key Words.:
Comets: individual: 67P/CG - Ultraviolet: planetary systems - Planets and
satellites: aurorae
## 1 Introduction
Auroras, most familiarly observed at high latitudes over the northern and
southern regions of Earth, have been detected at several bodies in the Solar
System. Auroral emissions are generated by (usually charged) extra-atmospheric
particles colliding with an atmosphere, causing excitation (Galand &
Chakrabarti 2002). At Earth, other magnetised planets, and the Jovian moon
Ganymede, the magnetospheric structure restricts entry of these extra-
atmospheric particles into the atmosphere, confining auroras to regions with
open field lines. However, comets are unmagnetised (Heinisch et al. 2019), so
they exhibit more similarities to regions of Mars with no crustal
magnetisation, where diffuse auroras have been seen (Schneider et al. 2015).
The Rosetta mission (Glassmeier et al. 2007) observed comet 67P/Churyumov-
Gerasimenko (hereafter 67P) from within the coma throughout the two-year
escort phase, allowing measurement of cometary emissions from a new, close
perspective. Earth-based observations of comets in the far-ultraviolet (FUV)
with the Hubble Space Telescope (HST; Lupu et al. 2007; Weaver et al. 2011)
and the Far Ultraviolet Spectroscopic Explorer (FUSE; Feldman et al. 2002;
Weaver et al. 2002) have not seen evidence of aurora at comets. The analysis
of FUV emission spectra from HST and FUSE indicated that photodissociation and
resonance scattering were the dominant sources of emissions at these
wavelengths. However, these Earth-based observations are limited to active
comets, with an outgassing rate of $Q>10^{28}$ ${\mathrm{s}}^{-1}$, which are
close to the Sun ($<2$ $\mathrm{au}$). Rosetta provided an opportunity to
observe a comet further from the Sun ($>3$ $\mathrm{au}$) and at much lower
levels of activity ($Q<10^{26}$ ${\mathrm{s}}^{-1}$) than was previously
possible (Läuter et al. 2018).
The Alice FUV imaging spectrograph (Stern et al. 2007) onboard Rosetta was
used to measure emission spectra from within the coma of 67P, which contrasted
with the Earth-based measurements of cometary emissions. An analysis of the
emission line ratios in FUV spectra has suggested that the dissociative
excitation of cometary neutrals by electron impact (e + X, for a cometary
molecule X) was a significant source of FUV emissions (Feldman et al. 2015).
Dissociative excitation is driven by suprathermal electrons which have
energies greater than the high threshold energies ($>14$ $\mathrm{eV}$) for
these processes (McConkey et al. 2008; Ajello 1971; Mumma et al. 1972).
Suprathermal electrons have been observed in the coma of 67P throughout the
escort phase with the electron spectrometer (Burch et al. 2007) and do not
follow a Maxwellian distribution (Clark et al. 2015; Broiles et al. 2016).
There is also a thermal population of cold electrons ($<1$ $\mathrm{eV}$)
which have been observed throughout the escort phase (Eriksson et al. 2017;
Gilet, N. et al. 2020) but their energy is too low to be able to contribute to
the FUV emissions.
Early FUV spectra taken in the northern hemisphere summer were consistent with
the impact of electrons on water (Feldman et al. 2018). The outgassing of H2O
is closely linked to the illumination conditions of the nucleus and exhibits
seasonal trends in its production rate (Fink et al. 2016; Läuter et al. 2018).
Pre-perihelion, in the northern hemisphere summer, H2O is the dominant
outgassed neutral species (Hässig et al. 2015; Läuter et al. 2018; Bockelée-
Morvan et al. 2016), whilst there is also a significant presence of O2 (Bieler
et al. 2015a).
Unlike over the northern hemisphere, the post-perihelion spectra analysed over
the southern hemisphere summer exhibit strong carbon lines and hence they are
driven mostly by electron impact on CO2. This reflects the hemispherical
asymmetry in the composition of 67P’s coma that has been observed at large
heliocentric distances with the mass spectrometer (Balsiger et al. 2007) as
well as with the infrared (Coradini et al. 2007) and sub-mm (Gulkis et al.
2007) spectrometers onboard Rosetta. Throughout the mission, the outgassing of
CO2 and, to a lesser extent, CO were larger in the southern hemisphere than in
the northern hemisphere (Läuter et al. 2018), which reflects an inhomogeneity
in the surface of the nucleus. The significant increase in the outgassing of
both CO2 and CO post-perihelion (Gasc et al. 2017a; Biver et al. 2019) may
also result from the exposure of a more pristine surface layer of the nucleus,
due to erosion around perihelion (Fink et al. 2016; Filacchione et al. 2016).
Chaufray et al. (2017) showed that HI-Ly-$\beta$ emissions observed by the
Alice FUV spectrograph exhibit some correlation with remote measurements of
the water column density from the IR spectrometer on Rosetta. This
demonstrated the dependence of FUV emission brightness on the column density
of water along the line of sight. They also calculated the Ly-$\beta$
brightness, assuming a Maxwellian distribution of suprathermal electrons with
a constant temperature (17 $\mathrm{eV}$) and density (20
${\mathrm{cm}}^{-3}$). However, this does not account for large variations (by
a factor of 100) that have been observed in the suprathermal electron flux or
the non-thermal distribution of electrons. Chaufray et al. (2017) also found
that away from perihelion the suprathermal electron flux does not seem to vary
with cometocentric distance, in contrast to the total electron density which
varies approximately with $1/r$ (Galand et al. 2016; Heritier et al. 2017b).
Raghuram & Bhardwaj (2020) analysed the brightness of FUV emissions using
a photochemical model, with application to a high-outgassing regime
($Q>10^{27}$ s-1). However, they model significant emissions from several
photodissociation processes that require spin-forbidden transitions and
therefore do not occur. When their analysis is applied at a larger heliocentric
distance (1.99 $\mathrm{au}$), they cannot explain the observed FUV emission
brightnesses.
Galand et al. (2020) employed a multi-instrument analysis to combine FUV
brightnesses with in situ or remote neutral gas observations and in situ
measurements of the suprathermal electron flux. This work focused on the
northern, summer hemisphere of comet 67P at large heliocentric distances, where
H2O is the major neutral species, demonstrating that electron impact on H2O
and O2 are the dominant sources of emissions. Galand et al. (2020) concluded
that emissions are driven by electrons which have been accelerated on large
scales rather than locally heated. Acceleration of solar wind electrons by an
ambipolar field is the most likely candidate (Deca et al. 2017, 2019), meaning
these emissions are auroras, a phenomenon which had not previously been
observed at a comet in the FUV.
In the non-illuminated southern hemisphere, the FUV emission spectra are very
different to those in the northern hemisphere, with much stronger emissions of
atomic carbon lines and molecular bands of CO (Feldman et al. 2018). There has
been no forward modelling to determine whether the FUV emissions in the
southern hemisphere are also driven by dissociative excitation of cometary
neutrals or to understand which neutral species are key to each emission line.
We propose to apply an extension of the multi-instrument analysis of Galand et
al. (2020) to model the FUV emissions in the southern hemisphere of comet 67P
at large heliocentric distances.
The brightest emission from the coma is Ly-$\alpha$ at 1216 Å, but owing to
the high flux of Ly-$\alpha$ photons, the detector degraded significantly at
this wavelength over the course of the mission. We model the brightness of
emissions in the
transitions of oxygen (OI1356, OI1304), hydrogen (Ly-$\beta$), and carbon
(CI1657 and CII1335), which are the dissociation products of the four major
neutral species in the coma (CO2, H2O, CO, O2; Gasc et al. 2017a; Läuter et
al. 2018).
Throughout the duration of Rosetta’s escort phase, many solar events, such as
Coronal Mass Ejections (CMEs) and Corotating Interaction Regions (CIRs)
reached 67P, generating enhancements in the suprathermal electron population
in the coma (Edberg et al. 2016; Witasse et al. 2017; Hajra et al. 2018; Goetz
et al. 2019). The variation of emissions during these events has been observed
by Feldman et al. (2015) and Noonan et al. (2018), although the solar event
which Feldman et al. (2015) analysed was not identified until after
publication (Witasse et al. 2017). Both of these events were observed in the
northern hemisphere when H2O was the dominant outgassing species, with the CIR
of Feldman et al. (2015) occurring early in the mission and the CME of Noonan
et al. (2018) arriving when 67P was close to perihelion. Noonan et al. (2018)
qualitatively compared the 'warm' (5-100 $\mathrm{eV}$) electron density to
the brightness of several atomic FUV lines (OI1356, OI1304, Ly-$\beta$, and
CI1657) during the arrival of a CME near perihelion.
In the present study, we focus on CIRs observed at 67P throughout the summer
of 2016 (Hajra et al. 2018). These are formed when a fast solar wind stream
interacts with the slow solar wind that precedes it, generating a region of
compression. The compression causes an increase in the electron number
density, while shock structures can lead to further heating of electrons at
large heliocentric distances (Smith & Wolfe 1976). The CIR is seen
periodically ($\sim\\!25$$\mathrm{d}$) from June to September 2016, because of
its solar corotation. Electron impact is the dominant ionisation process
during these CIRs and contributes significantly to the total plasma density
(Heritier et al. 2018b), but there has not been a quantitative assessment of
their impact on FUV emissions so far.
In this paper, we utilise a multi-instrument analysis to model the brightness
of FUV emission lines in the southern hemisphere of comet 67P at large
heliocentric distances, with direct comparison to observed brightnesses from
the FUV spectrograph onboard Rosetta. In Section 2, we introduce the
methodology used in the study to calculate the brightness of each emission
line, using measurements of neutral gas composition and density as well as
measurements of the suprathermal electron flux. In Section 3, we apply this
analysis in the southern hemisphere during quiet periods, outside of solar
events, both pre- and post-perihelion. We then model FUV emissions from the
coma during the August and July occurrences of the CIR in the summer of 2016
in Section 4. In Section 5, we compare our results with those obtained in the
northern hemisphere and discuss the implications of our findings on our
understanding of the cometary environment and on future analysis of cometary
FUV emissions.
## 2 Methods
### 2.1 Multi-instrument analysis
In this study, we employed an extension of a multi-instrument analysis
developed by Galand et al. (2020) to model the emissions driven by
dissociative excitation of cometary neutrals by electron impact. The analysis
brings together distinct datasets from several instruments onboard Rosetta.
The process of dissociative excitation by electron impact is outlined in Fig.
1, along with a qualitative description of how each instrument contributes to
the analysis.
Figure 1: Schematic of the
multi-instrument analysis used in this study to model FUV emissions driven by
electron impact on cometary neutrals. From left to right: (a) Suprathermal
electrons present within the coma were measured using RPC/IES (see Section
2.4.2). (b) Neutral gas molecules in the coma. There were four major neutral
species seen at 67P: CO2, H2O, CO, and O2 (see Section 2.3). (c) A collision
between a suprathermal electron and a cometary molecule causes the molecule to
dissociate. A neutral fragment and an excited atom are produced. (d) The
excited atom de-excites, releasing a photon in the FUV. These photons were
observed with the Alice FUV imaging spectrograph (see Section 2.2).
The equation underlying the methodology allows direct comparison of the
brightness derived from the observations, by the Alice FUV imaging
spectrograph (see Section 2.2), with the brightness, $B^{X}$ [R, 1 rayleigh
$=10^{6}/4\pi$ photons cm-2s-1sr-1], of the atomic emission line, X (OI, CI,
CII, HI), calculated as follows:
$B^{X}=10^{-6}\sum\limits_{l}N_{l}\int\limits_{E_{Th,l}}^{E_{Max}}\sigma^{X}_{l}(E)J(E)\,\mathrm{d}E=10^{-6}\sum\limits_{l}N_{l}\nu_{l}^{X}$
(1)
where $N_{l}$ [cm-2] is the column density of each neutral species, $l$ (CO2,
H2O, CO and O2), along the line of sight of the FUV spectrograph and the
summation is over each of the major species found in the coma of 67P (see
Section 2.3). Equation 1 is predicated on the assumption that the suprathermal
electron particle flux, $J(E)$ [cm-2s-1eV-1], is constant throughout the
column in question (see Section 2.4.2). The emission frequency, $\nu_{l}^{X}$
[s-1], is derived from the emission cross-section, $\sigma^{X}_{l}(E)$ [cm2],
for each emission line, $X$, and neutral species, $l$, and from the
suprathermal electron particle flux (see Section 2.4).
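As an illustration of Equation 1, the emission frequency and brightness can be evaluated numerically for a single species; the cross-section and electron-flux profiles below are synthetic placeholders, not RPC/IES or laboratory data.

```python
import numpy as np

# Energy grid [eV] from a nominal threshold to an upper cutoff
energy = np.linspace(15.0, 200.0, 500)

# Placeholder emission cross-section [cm^2]: rises from threshold toward ~1e-18
sigma = 1e-18 * (1.0 - np.exp(-(energy - 15.0) / 30.0))

# Placeholder suprathermal electron flux J(E) [cm^-2 s^-1 eV^-1]
flux = 1e8 * np.exp(-energy / 50.0)

# Emission frequency nu = integral of sigma(E) J(E) dE  [s^-1],
# computed with the trapezoidal rule
integrand = sigma * flux
nu = np.sum(0.5 * (integrand[:-1] + integrand[1:]) * np.diff(energy))

# Brightness for a single species of column density N [cm^-2],
# converted to rayleighs via the 1e-6 factor of Eq. (1)
N = 1e15
brightness_R = 1e-6 * N * nu
```

For realistic columns and fluxes the same bookkeeping applies, summed over the four major species.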
### 2.2 Observed brightness of FUV emission lines
The Alice FUV imaging spectrograph (Stern et al. 2007) observed emissions in
the range 700 Å - 2050 Å, with a spectral resolution of 8 Å at the slit
centre. The slit comprised 32 rows (0 to 31), with Row 15 at the centre, each
row subtending $0.3^{\circ}$ along the slit axis. The slit had a dogbone-like
shape: the central rows (13 to 17) had a width of $0.05^{\circ}$, whereas
the outer rows ($\geq 19$ and $\leq 12$) had a width of $0.10^{\circ}$
(Feldman et al. 2015). Rows 12 and 18 were transitional between these two
widths, hence they have been avoided in the present analysis. Typically, the
spectrograph scans lasted either approximately five or ten minutes, but to
improve the signal-to-noise ratio (S/N), several consecutive spectra can be
co-added.
The emission spectra exhibit an odd-even oscillation between rows (Chaufray et
al. 2017), so even numbers of rows were co-added to minimise the impact of
this aberration. When using a nadir viewing, during quiet periods, 67P was
fairly stationary in the Alice field-of-view over several scans. As such, the
co-added scans, ranging from 20 to 100 $\mathrm{min}$, probed a small region
of the coma within each viewing period.
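The co-adding and smoothing described above can be sketched as below (a five-point moving average of the kind applied to the spectra in Fig. 2). The flat test spectra are purely illustrative; for random noise, co-adding $n$ scans improves the S/N by roughly $\sqrt{n}$.

```python
import numpy as np

def coadd_and_smooth(spectra, window=5):
    """Co-add consecutive spectra (signal grows as n, random noise as
    sqrt(n), improving S/N) and apply a moving-average smoothing."""
    coadded = np.sum(np.asarray(spectra, dtype=float), axis=0)
    kernel = np.ones(window) / window
    return np.convolve(coadded, kernel, mode="same")

# Four identical flat "spectra": interior points of the result equal 4.
out = coadd_and_smooth([np.ones(20)] * 4)
```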
In the cases of solar events, the temporal variation in the brightness of each
emission line is highly relevant. Therefore, each spectrum retrieved from
Alice has been considered individually, whilst several adjacent rows were
combined. To capture some of the spatial variability in the emissions, the
brightness was evaluated in three different regions of the Alice viewing slit.
In the present study, we considered emissions from five multiplets (as
illustrated in Fig. 2): Lyman-$\beta$, OI1304, OI1356, CI1657 and CII1335. We
have not considered emissions of Lyman-$\alpha$ as the contribution to this
line from the interplanetary medium (IPM) is very strong, and the detector of
the FUV spectrograph degraded significantly at this wavelength throughout the
mission. The contribution of the IPM to the Lyman-$\beta$ brightness is of the
order of 2 R when looking off-limb, whereas the strength of the IPM
Ly-$\alpha$ is 300 times larger (Feldman et al. 2015).
There are also other emission features that overlap with the lines of interest
and which, therefore, must be considered. The oxygen line at 1027 Å is not
well resolved from the emissions of Ly-$\beta$. The CO Fourth Positive Group
(4PG) emits in the range $1400-1800$ Å
and has several bands which can contribute to the emissions observed at 1657Å.
Those bands which emit significantly in this range are (3,4) at 1648Å, (0,2)
at 1653Å and (1,3) at 1670Å (Beegle et al. 1999).
Figure 2: FUV spectrum measured by Alice
during two nadir scans during quiet periods. The spectra have been co-added
over four rows in the Alice slit and smoothed with a five-point moving average
to minimise the noise in the spectra. The key emissions selected in this study
have been highlighted.
### 2.3 Neutral column density
The column density of the neutral gas along the line-of-sight of the FUV
spectrograph can be calculated with several different methods depending on the
viewing geometry of Rosetta. When observing nadir, we can extrapolate from in-
situ total neutral density measurements by the Rosetta Orbiter Spectrometer
for Ion and Neutral Analysis (ROSINA, Balsiger et al. 2007). We derived the
neutral composition from measurements by the Double Focusing Mass Spectrometer
(ROSINA/DFMS) but this was not available at all times. Therefore, FUV scans
were only selected if DFMS was measuring at similar times. The total neutral
density at Rosetta was probed by the COmet Pressure Sensor (ROSINA/COPS) and
has been corrected for the composition of the gas in line with Gasc et al.
(2017b).
The neutral gas moves approximately radially away from the comet, which is
along the line-of-sight when Alice is looking nadir. As a result, the whole
column should originate from a similar region of the nucleus and have a
constant composition throughout.
The local neutral density measurements, $n(r_{Rosetta})$, at the cometocentric
distance, $r_{Rosetta}$, of Rosetta can be converted to a radial column
density from the nucleus surface to Rosetta as the neutral density has been
observed to follow a $r^{-2}$-dependence (Hässig et al. 2015; Bieler et al.
2015a):
$N_{Tot}=n(r_{Rosetta})\,r_{Rosetta}\bigg{(}\frac{r_{Rosetta}}{r_{\text{67P}}}-1\bigg{)}.$
(2)
However, under this model, the total column density, $N_{Tot}$, is highly
dependent on the radius of the comet, $r_{\text{67P}}$, at the foot-print of
the column, which varies significantly across the surface (Jorda et al. 2016).
We have taken $r_{\text{67P}}=1.7$ $\mathrm{km}$
and assumed that the expansion velocity of the neutral gas is constant, which
both introduce uncertainty to the neutral column density. The standard
deviation of the cometoradius is approximately $0.26\,r_{\text{67P}}$ (Gaskell
et al. 2017), which translates to a 30% uncertainty in the column density,
increasing to 35% at low cometocentric distances ($\sim 10$ km). The assumption of a
constant expansion velocity results in an underestimate of the column density
as the gas undergoes acceleration near the nucleus surface (Heritier et al.
2017b; Bykov & Zakharov 2020). When looking off-limb, the neutral gas column
probed by the FUV spectrograph may have very different properties to the gas
measured locally by the pressure gauge, so the in situ neutral density was
only used to derive the column density when looking close to nadir.
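Equation 2 amounts to a one-line conversion from the local density to a radial column. A sketch, assuming distances in km and densities in cm$^{-3}$ (the example density is invented, chosen only to be of the right order):

```python
def radial_column_density(n_local_cm3, r_rosetta_km, r_comet_km=1.7):
    """Radial column density [cm^-2] from the nucleus surface to Rosetta
    (Eq. 2), assuming the density falls off as r^-2 (constant outflow
    speed). The leading r_rosetta factor is converted from km to cm."""
    return n_local_cm3 * (r_rosetta_km * 1e5) * (r_rosetta_km / r_comet_km - 1.0)

# Illustrative local density of 1e7 cm^-3 measured at 28 km:
N_tot = radial_column_density(1e7, 28.0)   # ~4.3e14 cm^-2
```

The $(r_{Rosetta}/r_{\text{67P}}-1)$ factor is why the result is so sensitive to the adopted cometoradius, especially at small cometocentric distances.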
The column density of several of the major neutral species could be measured
remotely using other instruments onboard Rosetta. The Visual InfraRed Thermal
Imaging Spectrometer (VIRTIS, Coradini et al. 2007) observed emissions from
the $\nu_{3}$ vibrational bands of H2O and CO2 at 2.67 µm and 4.27 µm,
respectively (Bockelée-Morvan et al. 2015). The calculations used to derive
the H2O and CO2 column densities are the same as those outlined in Bockelée-
Morvan et al. (2015). Emission from the $\nu$(1-0) band of CO was also
observed by the IR spectrometer, but the emission in this band is much weaker
than that of the $\nu_{3}$ bands of H2O and CO2, so the S/N was too low to be
usable at the large heliocentric distances under focus here. VIRTIS comprised
two channels, M and H, operating at similar wavelengths in the infrared, but
VIRTIS-M stopped working in May 2015 (Bockelée-Morvan et al. 2016), so in the
present study we used only VIRTIS-H measurements.
The Microwave Instrument for Rosetta Orbiter (MIRO; Gulkis et al. 2007)
observed emissions from H2O and CO lines in the sub-mm wavelengths. The
(110-101) rotational lines of H2O (H2^18O) at 557 $\mathrm{GHz}$ (548
$\mathrm{GHz}$) and the CO (5-4) rotational line at 576 $\mathrm{GHz}$ were
seen in emission spectra when observing the limb and in absorption spectra
when viewing nadir (Biver et al. 2015, 2019). The CO line was more difficult
to observe due to its intrinsic low strength and the small abundance of CO.
The IR and sub-mm spectrometers were aligned with the FUV spectrograph line-
of-sight and their fields of view were located close to Row 15 of the viewing
slit. Thus, they probed approximately the same neutral column as the FUV
spectrograph (near the centre of the slit). This is particularly useful when
observing off-limb, as the composition may vary significantly along the column
and the source of gas is far more dispersed. Column densities derived from
MIRO data in nadir pointing are less reliable when viewing the nucleus due to
low contrast between the near surface warm gas emission and background
radiation emitted from the surface. The IR spectrometer could be used when
looking nadir if the surface is in shadow but it acquired little data in this
configuration throughout the mission. Each of these instruments has been used
to constrain the column density when the data are available.
### 2.4 Calculation of the emission frequency from dissociative excitation
The emission frequency, $\nu_{l}^{X}$, of a given neutral species, $l$, and
emission line, $X$, is driven by only two physical quantities (Eq. 1): the
emission cross-section (see Section 2.4.1) and the suprathermal electron
particle flux (see Section 2.4.2).
#### 2.4.1 Dissociative excitation cross-sections
The emission cross-sections for each spectral line, $X$, and neutral species,
$l$, are outlined in Table 1. The current set of laboratory measurements of
the emission cross-sections due to electron impact are somewhat incomplete.
Many of the cross-sections have data points at only one or two energies, so the
energy dependence of the cross-sections is not known. As such, several
assumptions about the energy dependence of the cross-sections have been made,
and these are summarised in Table 1.
Several cross-sections were recently updated by Ajello et al. (2019), but we
do not use these in this study. The emission cross-sections of OI1356 from e +
CO2 in Ajello et al. (2019) are an order of magnitude smaller than those from
previous literature (Wells et al. 1972; Wells & Zipf 1974). The inferred
OI1304/OI1356 line ratio of $\sim 3.2$ is not consistent with emission spectra
from the Rosetta mission, when electron impact on CO2 was prevalent (Section
3.1 & Feldman et al. 2018). The 5S upper state of the OI1356 transition has a
long radiative lifetime (180 $\mu$s; Wells & Zipf 1974) and may have been
quenched through collisions. Wells et al. (1972) and Wells & Zipf (1974)
measured the production and radiative lifetime of the 5S state independently,
so the inferred emission cross section was not susceptible to collisions
experienced by the intermediate state. The experiments in Ajello et al. (2019)
were based on the emission spectra of CO2, which are more strongly impacted by
any quenching of the 5S state by the relatively dense neutral gas used ($\sim
3\times 10^{11}$ molecules cm-3). This is 3-4 orders of magnitude denser than
the neutral gas seen at 67P at the large heliocentric distances considered in
this study.
Table 1: Cross-sections for the dissociative excitation of cometary molecules by electron impact. Emission Line | Species | Threshold Energy [$\mathrm{eV}$] | Reference and Assumptions
---|---|---|---
Ly-$\beta$ & OI1027 | H2O | 17.21 | Makarov et al. (2004). Assume the same shape as Ly-$\alpha$ with ratio $7/55$ at 200 $\mathrm{eV}$. Includes coincident OI line.
CO2 | $21\pm 2$ (Mumma et al. 1972) | Kanik et al. (1993). Assume the same shape as OI1304. Use ratio at 200 $\mathrm{eV}$.
CO | 23.17 | James et al. (1992).
O2 | 20.6 | Wilhelmi & Schartner (2000). Absolute value given at 200 $\mathrm{eV}$. Assume same shape as OI1304.
OI1304 | H2O | 15.2 | Makarov et al. (2004).
CO2 | $21\pm 2$ | Mumma et al. (1972). Reduced by a factor of 0.59 due to updated measurements of H2 Ly-$\alpha$ (McConkey et al. 2008).
CO | 20.6 | Ajello (1971). Some uncertainty at energies $>100$ $\mathrm{eV}$.
O2 | 14.6 (McConkey et al. 2008) | Kanik et al. (2003). Scaled up by factor $2.93/2.90$ in line with the recommendation by McConkey et al. (2008).
OI1356 | H2O | 15.2 | Makarov et al. (2004). Assume same threshold and shape as OI1304.
CO2 | $20.6$ | Feldman et al. (2015). Excitation rate of OI1356 33 times larger for CO2 than H2O (Wells et al. 1972), after revision of the lifetime of the $^{5}$S$^{\circ}$ state (Wells & Zipf 1974). Assume the same shape as OI1304.
CO | 20.6 | Ajello (1971). Ratio given at 100 $\mathrm{eV}$ (Wells & Zipf 1974; Wells et al. 1972). Assume the same shape as OI1304.
O2 | 14.6 | Kanik et al. (2003). Scaled up by factor $6.47/6.40$ in line with recommendation by McConkey et al. (2008).
CI1657 | CO2 | $25\pm 2$ | Mumma et al. (1972).
CO | | Difficult to measure cross-section due to the strong overlapping CO4PG band. No contribution from dissociative excitation of CO considered for CI1657.
CO4PG | CO2 | $25\pm 2$ | Contribution of (0,2) bands included in CI1657 cross-section of Mumma et al. (1972).
CO | 8 | Beegle et al. (1999). Absolute values for bands given at 100 $\mathrm{eV}$. Shape of (0,1) band given. Scaled down by factor 0.925 due to remeasurement of calibrating NI line (Ajello et al. 2019).
CII1335 | CO2 | 44 | Mumma et al. (1972).
CO | 33 | Ajello (1971).
#### 2.4.2 Suprathermal electron flux
The suprathermal electron flux during each scan of the FUV spectrograph has
been derived from measurements by the Rosetta Plasma Consortium (RPC; Carr et
al. 2007). The count rate measurements from the Ion and Electron Sensor
(RPC/IES; Burch et al. 2007) were converted to an electron particle flux using
the method outlined in Appendix A. Several of the RPC/IES anodes degraded
throughout the mission, resulting in limited angular coverage of the electron
flux (Broiles et al. 2016). In the correction for the field of view, we
assumed that the electron flux was isotropic.
We also assumed that the suprathermal electron flux was constant along the
line of sight of the Alice FUV spectrograph, which is consistent with the
findings of Chaufray et al. (2017). We consider the electron depth (Heritier
et al. 2018b)
$\tau^{e-}=\sum\limits_{l}\sigma_{l}^{e-,inel}N_{l}(r),$ (3)
where $\sigma_{l}^{e-,inel}$ is the total inelastic collision cross-section
for 30 $\mathrm{eV}$ electrons with the neutral species H2O ($2.32\times
10^{-16}$ ${\mathrm{cm}}^{2}$, Itikawa & Mason 2005) and CO2 ($1.6\times
10^{-16}$ ${\mathrm{cm}}^{2}$, Itikawa 2002). The electron depth is analogous
to an optical depth, so for $\tau^{e-}<1$ we expect little degradation of
suprathermal electrons along the line of sight. As expected at large
heliocentric distances, the electron depth was small ($\tau^{e-}<0.35$) for all
cases in the present study, so suprathermal electrons were unlikely to undergo
collisions in the coma.
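A minimal sketch of the electron-depth check in Eq. 3, using the two 30 eV inelastic cross-sections quoted above; the example column densities are invented, chosen only to give a plausibly small $\tau^{e-}$:

```python
# Total inelastic cross-sections for 30 eV electrons [cm^2]
SIGMA_INELASTIC = {"H2O": 2.32e-16,   # Itikawa & Mason (2005)
                   "CO2": 1.6e-16}    # Itikawa (2002)

def electron_depth(column_densities_cm2):
    """Electron depth (Eq. 3). tau << 1 means suprathermal electrons
    traverse the neutral column essentially collisionlessly."""
    return sum(SIGMA_INELASTIC[sp] * N for sp, N in column_densities_cm2.items())

# Hypothetical columns: 1e15 cm^-2 of H2O and 5e14 cm^-2 of CO2.
tau = electron_depth({"H2O": 1e15, "CO2": 5e14})   # ~0.31, i.e. < 1
```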
RPC/IES measured electrons that have energies between 4.32 $\mathrm{eV}$ and
17.67 $\mathrm{keV}$ at the detector. However, throughout the mission, the
spacecraft was typically at a voltage of $-10$ $\mathrm{V}$, as measured by
the Rosetta Dual Langmuir Probes (RPC/LAP, Odelstad et al. 2015). The negative
spacecraft potential ($V_{S/C}$) repelled electrons, allowing only those with
higher energies to reach the sensor. The minimum energy, $E_{min}$, in
$\mathrm{eV}$ of electrons observed by RPC/IES is
$E_{min}\,\text{[$\mathrm{eV}$]}=4.32-V_{S/C}\,\text{[$\mathrm{V}$]}$.
The electron flux was corrected for this using Liouville's theorem, under
the assumption that the phase space density is conserved within the potential
of the spacecraft:
$\frac{J(E)}{E}=\frac{J_{IES}}{E_{IES}}\quad\text{where}\quad
E\,\text{[$\mathrm{eV}$]}=E_{IES}\,\text{[$\mathrm{eV}$]}-qV_{S/C}\,\text{[$\mathrm{V}$].}$
(4)
Within the duration of each FUV scan, there were several measurements of the
electron flux by RPC/IES, each of which was individually corrected for the
spacecraft potential at that time. The time-average and the standard deviation
of the electron flux were calculated at each energy as shown in Figure 3.
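The spacecraft-potential correction of Eq. 4 can be sketched as follows. The energies, fluxes, and the $-10$ V potential below are illustrative placeholders, not RPC/IES or RPC/LAP data:

```python
import numpy as np

def correct_for_spacecraft_potential(E_ies_eV, J_ies, V_sc_volts):
    """Map detector-frame energies and fluxes back to the ambient plasma.
    A negative spacecraft potential shifts the ambient energy up by |V|,
    and conservation of phase-space density (Liouville's theorem, Eq. 4)
    gives J(E)/E = J_IES/E_IES."""
    E_ies_eV = np.asarray(E_ies_eV, dtype=float)
    J_ies = np.asarray(J_ies, dtype=float)
    E = E_ies_eV - V_sc_volts        # e.g. V_sc = -10 V shifts energies up 10 eV
    J = J_ies * (E / E_ies_eV)       # rescale so J(E)/E is conserved
    return E, J

E, J = correct_for_spacecraft_potential([20.0, 40.0], [1e7, 2e7], -10.0)
```

Each RPC/IES measurement within a scan would be corrected this way with the potential measured at that time, before time-averaging.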
Although RPC/IES could not measure the electron population below the detection
threshold, there may still have been a large electron flux at low energies. As
such, we extended the mean particle flux to low energies assuming a constant
particle flux at low energies. We also considered extrapolating
logarithmically but the method of extrapolation had little impact on the
resulting emission frequency as the cross-sections decrease significantly near
the threshold energies (given in Table $2$). The emission cross-sections
decrease sharply near the threshold energies of each line (see Table $2$) and
electrons below the threshold are unable to generate FUV emissisons. Despite
large particle fluxes the low energy ($<20$ $\mathrm{eV}$) electrons
contributed little to the model brightness.
Figure 3: Suprathermal electron particle flux during two nadir Alice scans
during quiet periods. The solid line is the average particle flux during each
of the scans, while the shaded region corresponds to the standard deviation of
the electron flux in the same period. The extrapolation of the electron flux
to energies that RPC/IES could not measure, owing to the spacecraft
potential, is given by the dashed line.
### 2.5 Other sources of emission
Alongside dissociative excitation by electron impact, there are several other
sources of emission that are observed at comets. We have already referred to
the contribution to the Lyman series by the IPM (see Section 2.2), but this
should not have been visible when viewing the surface of the nucleus from
Rosetta.
We considered two main emission sources outside of electron impact: prompt-
photodissociation of H2O to produce Lyman-$\beta$ and fluorescence of CO to
emit in the Fourth Positive Group.
These emission features are driven by the solar flux incident on the column of
gas along the line of sight. The brightness of the emissions from these
sources is given by
$B^{X,h\nu}_{l}=10^{-6}N_{l}\int\limits_{\lambda_{min}}^{\lambda_{Th}}\\!\sigma_{l}^{X,h\nu}(\lambda)I(\lambda)\,\mathrm{d}\lambda,$
(5)
where $l$ is H2O for $X=\text{Ly-$\beta$}$ and CO for $X=\text{CO4PG}$
(contributing to the CI1657 emissions).
The photon flux, $I(\lambda)$, at comet 67P was derived from TIMED-SEE (Woods
et al. 2005) measurements at 1 au, taken at the same Carrington
longitude as comet 67P for each time interval. The cross-section for resonance
fluorescence of CO was based on the model of Lupu et al. (2007). For the
prompt-photodissociation of H2O, we use the emission cross-section from Hans
et al. (2015).
The modelled brightness of these photon-driven emissions are upper bounds as
we assumed that the entire column of neutral gas along the line of sight was
illuminated. In reality, the neutral column may have been partially shadowed
near the nucleus, where the neutral number density was highest, and the
emissions from fluorescence of CO and prompt-photodissociation of H2O will be
overestimated.
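Equation 5 mirrors Eq. 1 with wavelength replacing energy and the solar flux replacing the electron flux. A sketch with invented flat inputs (none of the numbers are measured values), which by construction gives an upper bound since the whole column is assumed illuminated:

```python
import numpy as np

def photon_driven_brightness(N_l_cm2, wavelength_A, sigma_cm2, solar_flux):
    """Upper-bound brightness [R] of a photon-driven emission (Eq. 5):
    integrate sigma(lambda)*I(lambda) over wavelength, multiply by the
    column density, and convert to Rayleighs."""
    return 1e-6 * N_l_cm2 * np.trapz(sigma_cm2 * solar_flux, wavelength_A)

# Illustrative flat cross-section and solar flux over a 100 A band:
lam = np.linspace(900.0, 1000.0, 101)
sigma = np.full_like(lam, 1e-18)     # cm^2
I_sun = np.full_like(lam, 1e9)       # photons cm^-2 s^-1 A^-1
B = photon_driven_brightness(1e15, lam, sigma, I_sun)
```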
## 3 Nadir analysis during quiet periods
### 3.1 Selected cases
Table 2: Cases selected in the southern hemisphere, with nadir viewing at large heliocentric distances. The horizontal lines separate nadir cases with high (Cases 1-8) and low (Cases 9-13) electron fluxes. Case | Date | Start Time [UTC] | Integration Time [$\mathrm{s}$] | Cometocentric Distance [$\mathrm{km}$] | Heliocentric Distance [au] | Spacecraft Latitude [$^{\circ}$] | Spacecraft Longitude [$^{\circ}$] | Total Column Density [$10^{15}$ ${\mathrm{cm}}^{-2}$]
---|---|---|---|---|---|---|---|---
1 | 14/01/2015 | 11:38:40 | 3629 | 28.4 | 2.55 | -23.4 | -82.9 | 0.58
2 | 29/01/2015 | 18:40:49 | 6048 | 27.8 | 2.44 | -63.8 | 5.4 | 0.27
3 | 30/01/2015 | 04:25:44 | 2419 | 27.8 | 2.43 | -64.6 | 56.6 | 0.22
4 | 30/01/2015 | 06:34:42 | 2419 | 27.8 | 2.43 | -64.2 | -11.4 | 0.51
5 | 30/01/2015 | 07:17:41 | 1763 | 27.8 | 2.43 | -64.0 | -34.0 | 0.48
6 | 30/01/2015 | 11:32:18 | 2419 | 27.9 | 2.43 | -62.5 | -167.4 | 0.18
7 | 30/01/2015 | 15:25:50 | 2419 | 27.9 | 2.43 | -60.5 | 71.35 | 0.28
8 | 26/04/2016 | 06:06:12 | 4452 | 21.2 | 2.9 | -33.1 | -107 | 1.88
9 | 29/01/2015 | 07:51:27 | 3629 | 27.8 | 2.44 | -58.4 | -17.1 | 0.75
10 | 21/04/2016 | 23:01:00 | 3398 | 30.9 | 2.85 | -24.6 | -57.7 | 1.24
11 | 21/03/2016 | 00:05:00 | 2603 | 12.2 | 2.62 | -24.2 | -36.8 | 1.43
12 | 14/05/2016 | 14:39:34 | 4613 | 9.8 | 3.0 | -53.1 | -111.7 | 1.13
13 | 27/05/2016 | 08:43:00 | 5592 | 7.0 | 3.09 | -56.2 | -42.3 | 1.32
In order to determine whether the FUV emissions over the southern hemisphere
are driven primarily by electron impact, cases with a nadir viewing geometry
over the shadowed nucleus are considered. By observing the shadowed nucleus,
there is no contribution from the IPM to the observed Lyman-$\beta$ brightness
and any contamination from solar photons reflected off the nucleus is
minimised. This geometry also provides the best constraint on the neutral
composition of the coma from the mass spectrometer (see Section 2.3).
We have considered 13 cases in the southern hemisphere and at large
heliocentric distances. The properties of each of these scans are outlined in
Table 2. The cases have been split into two sets, which have been approached
separately: nadir cases with a high suprathermal electron flux, as illustrated
by the blue spectrum in Fig. 3 (Cases 1-8; Section 3.2); and nadir cases with
a low suprathermal electron flux, as illustrated by the red spectrum in Fig. 3
(Cases 9-13; Section 3.3). The emission frequency for all the selected lines
and the column density of each neutral species are shown in Figs. 4 and 5,
respectively, for all cases in order to interpret the modelled brightnesses
presented in Fig. 6, along with the FUV spectrograph observations. The average
electron particle fluxes in two energy brackets are given in Table 3. The
distinction between the high and low suprathermal electron flux cases (see
Fig. 3) can be seen in both of the energy ranges, but to a greater extent from
$60-120$ $\mathrm{eV}$.
Table 3: Average electron flux in two energy ranges for each of the nadir cases outlined in Table 2. The distinction between cases with high and low suprathermal electron fluxes is clearer in the 60-120 $\mathrm{eV}$ range. Case | Ave. Electron Flux for $20-60$ $\mathrm{eV}$ [$10^{7}$ ${\mathrm{cm}}^{-2}\text{\,}{\mathrm{s}}^{-1}\text{\,}{\mathrm{eV}}^{-1}$] | Ave. Electron Flux for $60-120$ $\mathrm{eV}$ [$10^{7}$ ${\mathrm{cm}}^{-2}\text{\,}{\mathrm{s}}^{-1}\text{\,}{\mathrm{eV}}^{-1}$]
---|---|---
1 | $23.2\pm 1.2$ | $6.33\pm 0.85$
2 | $30.4\pm 1.9$ | $5.68\pm 1.34$
3 | $27.7\pm 3.6$ | $5.10\pm 2.62$
4 | $20.5\pm 1.3$ | $4.82\pm 0.95$
5 | $16.9\pm 0.9$ | $4.36\pm 0.65$
6 | $22.2\pm 0.8$ | $5.88\pm 0.56$
7 | $10.5\pm 1.4$ | $2.47\pm 1.01$
8 | $\phantom{0}6.42\pm 0.51$ | $2.60\pm 0.37$
9 | $3.51\pm 0.84$ | $0.05\pm 0.61$
10 | $1.18\pm 0.05$ | $0.22\pm 0.04$
11 | $1.40\pm 0.12$ | $0.06\pm 0.09$
12 | $0.51\pm 0.09$ | $0.01\pm 0.07$
13 | $1.03\pm 0.11$ | $0.03\pm 0.08$
Figure 4: Emission frequency, $\nu_{l}^{X}$, of each neutral
species and emission line: (a) OI1356, (b) OI1304, (c) Ly-$\beta$ and OI1027,
(d) CI1657 and CO4PG, and (e) CII1335. Emissions due to electron impact (e +
X, Section 2.4) on CO2 (red), H2O (dark blue), CO (orange), and O2 (purple)
and other processes (hν + X, Section 2.5) have been included. The uncertainty
in the electron impact emission frequency is derived from the variability in
the electron flux during each scan of the FUV spectrograph (see Section
2.4.2). hν + CO refers to emissions from the fluorescence of CO (green),
while hν + H2O refers to the prompt-photodissociation of water (light blue)
to produce Lyman-$\beta$ (see Section 2.5). Figure 5: Column density of the
four major neutral species in the coma during each of the spectrograph scans.
The volume mixing ratios are derived from ROSINA/DFMS measurements (see
Section 2.3).
### 3.2 Nadir cases 1-8
As mentioned in Section 2.3, we can derive the total neutral density in nadir
viewing from in situ measurements. However, the strong dependence of the
column density on the cometoradius means this method is quite uncertain.
Alternatively, we can constrain the total column density by setting the
modelled brightness of the OI1356 line such that it equals the observed
brightness of this line. OI1356 emissions are associated with a forbidden
transition ($^{5}$S - $^{3}$P), so there are few sources of this emission line apart
from electron impact. The contributions of both resonance scattering and
fluorescence to this line are negligible.
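Because Eq. 1 is linear in the column densities, the OI1356-based constraint described above reduces to a single rescaling of the composition-weighted columns. A sketch, using the e + H2O and e + CO2 OI1356 emission frequencies quoted for cases 1-8 below, with hypothetical column densities and observed brightness:

```python
def fit_total_column(N_guess, nu_OI1356, B_obs_R):
    """Rescale a column-density guess so the modelled OI1356 brightness
    (Eq. 1) matches the observed one. A single scale factor suffices and
    preserves the volume mixing ratios."""
    B_model = 1e-6 * sum(N * nu for N, nu in zip(N_guess, nu_OI1356))
    scale = B_obs_R / B_model
    return [scale * N for N in N_guess]

# Hypothetical (H2O, CO2) columns; emission frequencies from cases 1-8;
# an assumed observed OI1356 brightness of 2.2 R.
N_fit = fit_total_column([1e15, 1e14], [4.0e-10, 1.08e-8], 2.2)
```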
There is a close agreement between the modelled and observed brightnesses for
all five FUV emission lines selected for cases 1-8 (see Fig. 6). This suggests
that our model represents the sources of FUV emissions in the coma well.
The emission of OI1356 is dominated by e + CO2 (red, Fig. 6a) across the high
suprathermal electron flux cases. This is a result of the high emission
frequency of e + CO2 ($1.08\times 10^{-8}$ ${\mathrm{s}}^{-1}$) compared to e
+ H2O ($4.0\times 10^{-10}$ ${\mathrm{s}}^{-1}$, see Fig. 4) in this line. The
small emission frequency from e + H2O means the brightness of the OI1356
emissions is not sensitive to the column density of water. e + O2 has an
OI1356 emission frequency of $5.98\times 10^{-8}$ ${\mathrm{s}}^{-1}$ in cases
1-8, larger than that from e + CO2, but the column density of O2 is
insufficient for this process to contribute significantly to the total
brightness (purple, Fig. 5). In case 1, e + O2 drives 0.69 R of OI1356
emission (purple, Fig. 6a), which is considerably more than the $0.1-0.2$ R of
emission from this source in cases 2-8. This results from the larger column
density of O2 ($8.9\times 10^{12}$ ${\mathrm{cm}}^{-2}$, Fig. 5) in case 1
than in cases 2-7 ($3.1\pm 0.9\times 10^{12}$ ${\mathrm{cm}}^{-2}$, Fig. 5),
due to the more equatorial latitude compared to the southerly cases 2-7 (see
Table 2). Case 8 occurred post-perihelion, when the outgassing rate of CO2
increased, resulting in a low volume mixing ratio of O2 (purple, Fig. 5),
despite having a similar latitude to case 1 (see Table 2).
The OI1304 emissions are mostly driven by electron impact on CO2 and H2O (see
Fig. 6b). The emission frequency of OI1304 from e + CO2 is only 1.5 times more
efficient than e + H2O, in contrast to the factor of 20 in the emission
frequencies of OI1356. Electron impact on CO and O2 are 1.5 and 10 times more
efficient at emitting OI1304 than e + CO2 (see Fig. 4b), respectively, but the
small volume mixing ratios of these molecules throughout cases 1-8
($\ce{O2}/\ce{CO2}=0.04$; $\ce{CO}/\ce{CO2}=0.11$; Fig. 5) limits their
contribution to the total OI1304 brightness. However, these processes can be a
significant source of OI1304, when the volume mixing ratio of each species
increases (see. Fig. 5) as seen in case 1 for e + O2 and case 8 for e + CO
(see Fig. 6b).
The emission feature near 1026 Å is dominated
by electron impact on water throughout cases 1-8 (dark blue, Fig. 6c),
generating emissions of Ly-$\beta$. As seen with OI1356 and OI1304 emissions,
the largest emission frequency at this wavelength is from e + O2 at
$1.17\times 10^{-8}$ ${\mathrm{s}}^{-1}$ (purple, Fig. 4c), which produces
OI1027, but this is only twice that of e + H2O, so the small column density of
O2 with respect to that of water means it contributes negligibly to the
modelled brightness (purple, Fig. 6). Throughout these cases, we see a
sizeable contribution from e + CO2 ($<0.9$ R) to the OI1027 brightness that is
not seen in the northern hemisphere (Galand et al. 2020). This is especially
prominent in case 8, with a post-perihelion enhanced CO2 column density of
$8.66\times 10^{14}$ ${\mathrm{cm}}^{-2}$ (see Fig. 5). In cases 1-8, prompt-
photodissociation of H2O (light blue, Fig. 4c) has an emission frequency 5
times smaller than from e + H2O (dark blue, Fig. 4c), so photodissociation is
a minor source of emissions when the suprathermal electron flux is large (Fig.
6c). We show that e + H2O is the major source of the emissions near 1026
Å in the southern hemisphere, but the
contribution from e + CO2 and prompt-photodissociation of H2O to this line are
not negligible.
Throughout cases 1-8, the emissions near 1657 Å
are dominated by CI1657 emission from electron impact on CO2 (red, Fig. 6d).
There is also a small contribution from e + CO (orange, Fig. 6d) to the
overlapping bands of the Fourth Positive Group, which has an emission
frequency ($4.58\times 10^{-8}$ ${\mathrm{s}}^{-1}$) twice that of e + CO2
(orange and red, respectively, Fig 4d). As the suprathermal electron flux is
high for these cases, fluorescence of CO (green, Fig. 4d) has a lower or
similar emission frequency to e + CO2. However, the low column density of CO
compared to that of CO2 (see Fig. 5) means the emissions from both electron
impact on and fluorescence of CO are only minor contributions to the total
brightness near 1657 Å (see Fig. 6d), except in
case 8, when the column density of CO ($1.19\times 10^{14}$
${\mathrm{cm}}^{-2}$) is larger than seen in cases 1-7 (see Fig. 5).
The CII1335 emissions have significant contributions from electron impact on
both CO and CO2. The emission frequency of e + CO ($2.04\times 10^{-8}$
${\mathrm{s}}^{-1}$) is 8 times larger than from e + CO2 for this line, which is
balanced by a ratio of 0.11 of CO to CO2 in column density. The two competing
effects result in the two species contributing approximately equally to the
brightness of the CII1335 line (see Fig. 6e). This emission line is primarily
driven by $60-120$ $\mathrm{eV}$ electrons due to the high threshold energies
of e + CO (33 $\mathrm{eV}$) and e + CO2 (44 $\mathrm{eV}$, Table 1) for this process.
The electrons in this energy range vary little across cases 1-8 (see Table 3),
which is reflected in the roughly constant emission frequency (see Fig. 4e).
Therefore, the variations in the CII1335 brightness in cases 1-8 are driven by
changes in the column densities of CO and CO2 (see Fig. 5). The large emission
frequency of e + CO relative to e + CO2 (see Fig. 4e) in this line means the
observed brightness is highly sensitive to the column density of CO. The
volume mixing ratio of CO, derived from measurements by the mass spectrometer,
includes a significant correction for fragmentation of CO2 within the
instrument, which leads to a contribution to the CO signal (Dhooghe et al.
2014). The close agreement between the modelled and observed brightness of
this line across cases 1-8 (Fig. 6e) suggests that the corrected
$\ce{CO}/\ce{CO2}$ volume mixing ratio derived from the mass spectrometer
measurements is accurate.
Figure 6: Comparison of the total modelled (black) and
observed brightness (magenta), $B^{X}$, of each emission line. The stacked
bars show the contribution from each neutral species and emission process. The
same colour code as Fig. 4 is used. The error on the observed brightness is
derived from the integration of the FUV spectra. The error on the modelled
brightness is from the temporal variation of the electron flux and column
density, as well as a 20% uncertainty in the neutral composition. In cases
1-8, where the observed OI1356 brightness is used to fit the total neutral
column density (see Section 3.2), the error on the modelled OI1356 brightness
is included in the error of the other modelled emission brightnesses.
### 3.3 Nadir cases 9-13
In cases 9-13, the suprathermal electron flux was very low (see Table 3) in
both energy ranges. The electron flux was on average 11.8 times greater for
cases 1-8 than for cases 9-13 between 20 and 60 $\mathrm{eV}$, whereas in the
range $60-120$ $\mathrm{eV}$ the average flux ratio was $\sim 60$. The FUV
emission lines considered, except CII1335, are driven primarily by electrons
from $20-60$ $\mathrm{eV}$ so the emission frequencies are a factor of
$\sim 20$ smaller in cases 9-13 than in cases 1-8 (e.g. 26.6 for Ly-$\beta$
from e + H2O; dark blue, Fig. 4c). The threshold energies for emission of
CII1335 from electron impact on CO (33 $\mathrm{eV}$, see Table 1) and CO2
(44 $\mathrm{eV}$) are much higher than for the other wavelengths considered,
and hence the emissions are more dependent on the electron flux between 60 and
120 $\mathrm{eV}$. This results in a ratio of $\sim 50$ in the emission
frequency of the process e + CO2 (red, Fig. 4e) between the high and low flux
cases.
Throughout the low flux cases, we observe no substantial emissions of OI1356
due to the low electron impact emission frequencies (see Fig. 6a). As a
result, we use the radial column density, derived from the in situ pressure
gauge measurements (see Section 2.3), to calculate the modelled emission
brightnesses. The radial column density is calculated assuming a constant
neutral gas velocity, which may result in an underestimate of the column
density.
We also observe negligible emissions of OI1304 throughout this time (see Fig.
6b). The column densities in cases 9-13 are similar to cases 1-8 (see Fig. 5),
so the lack of oxygen line emissions is a result of the lower electron flux
(Table 3). In addition, there are negligible emissions of CII1335 (see Fig.
6e), as there are few $60-120$ $\mathrm{eV}$ electrons in the coma (Table 3).
The lack of observed OI1356, OI1304, and CII1335 emissions (see Figs. 6a, 6b
and 6e) in these lines is replicated in the modelled brightnesses for the
cases associated with a low suprathermal electron flux. This demonstrates that
the only source of these emission lines in nadir viewing is dissociative
excitation by electron impact.
As shown in Fig. 6d, there are significant emissions near 1657
Å for these cases, despite the small
populations of energetic electrons in the coma (Table 3) and the low emission
frequency of the processes e + CO ($3.96\times 10^{-9}$ ${\mathrm{s}}^{-1}$)
and e + CO2 ($1.07\times 10^{-9}$ ${\mathrm{s}}^{-1}$, see Fig. 4d). The
emission frequency of fluorescence of CO is $1.86\times 10^{-8}$
${\mathrm{s}}^{-1}$ through cases 9-13, and hence is the dominant source of
emissions at this wavelength. The dominance of CO fluorescence, compared to
case 8, results only from the decrease in the electron impact emission
frequency, as both the volume mixing ratio $\ce{CO}/\ce{CO2}=0.14\pm 0.05$ and
the emission frequency from CO fluorescence (green, Fig. 6d) are similar to
those in case 8. In cases 1-7, the total column density and the CO column
density are on average a factor of 3 smaller than in cases 9-13, which results
in a small absolute contribution from CO fluorescence (0.27 R). There is some
uncertainty in the source of the CO4PG emissions in the low flux cases, as the
Cameron bands, seen at long wavelengths, indicate that there may be emissions
from CO in the 4PG driven by a large flux of low-energy ($\sim$10 eV)
electrons. However, we have not seen evidence of this in the measured electron
flux, as the spacecraft potential ($-23$ to $-17$ V, as
measured by RPC/LAP) prevented measurements at such low electron energies.
Other emission lines are unaffected by these low-energy electrons, as they
have much higher threshold energies ($>$15 eV) than the 4PG emissions
from CO (8 eV, see Table 2). Despite the uncertainty in the column density,
the modelled CI1657 and CO4PG emissions represent the observed emissions well
in these cases, and dissociative excitation by electron impact is not the
major source of emissions when the suprathermal electron flux is small.
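For an optically thin coma, the fluorescence contribution scales linearly with the CO column. As a rough consistency check, this sketch inverts the linear brightness relation using only the numbers quoted above (the optically thin relation $B\,[\mathrm{R}] \approx 10^{-6}\,g\,N$ is an assumption of the sketch):

```python
# Rough consistency check of the CO fluorescence contribution, assuming the
# optically thin relation B [R] = 1e-6 * g * N, where g is the fluorescence
# emission frequency (s^-1), N is the CO column density (cm^-2), and
# 1 R = 1e6 photons cm^-2 s^-1 emitted into 4*pi sr along the column.
g_CO = 1.86e-8  # s^-1, CO fluorescence emission frequency quoted in the text

def brightness_R(g: float, column_cm2: float) -> float:
    """Brightness in rayleighs for emission frequency g and column density N."""
    return 1e-6 * g * column_cm2

def column_for_brightness(g: float, b_rayleigh: float) -> float:
    """Column density (cm^-2) needed to produce a brightness of b_rayleigh."""
    return b_rayleigh * 1e6 / g

# CO column implied by the 0.27 R fluorescence contribution quoted for cases 1-7
print(f"{column_for_brightness(g_CO, 0.27):.2e} cm^-2")  # -> 1.45e+13 cm^-2
```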
In cases 9-13, the emission frequency of Ly-$\beta$ and OI1027 from electron
impact on all species drops below that of prompt-photodissociation of H2O
($8.6\times 10^{-10}$ ${\mathrm{s}}^{-1}$, Fig. 4c). In these cases,
prompt-photodissociation of H2O is the dominant source of emissions, as the
process e + H2O has an emission frequency a factor of four smaller
($1.97\times 10^{-10}$ ${\mathrm{s}}^{-1}$, Fig. 4c).
Photodissociation of H2O is comparable in efficiency to electron impact on O2
near 1026 Å, but throughout these cases the number density ratio of
$\ce{O2}/\ce{H2O}$ is less than $0.02$, so the contribution from electron
impacts on O2 is small.
The observed and modelled brightnesses of Ly-$\beta$ and OI1027 agree closely
in cases 9 and 10 but for cases 11-13 the modelled brightness is smaller than
that observed (see Fig. 6c). The unexplained intense Lyman-$\beta$ emissions
are unlikely to be from electron impact, which would produce strong signals in
the other atomic lines (e.g. OI1304 from e + H2O would be only a factor
$\sim 2.9$ fainter than the Ly-$\beta$ brightness).
The Lyman-$\alpha$/Lyman-$\beta$ ratio can be used as an indicator of key
emission processes, with a ratio of 8 expected from pure e + H2O (Makarov et
al. 2004). On 29 Nov 2014, a spectrum of pure e + H2O (Galand et al. 2020) was
observed with a ratio Lyman-$\alpha$/Lyman-$\beta=4.4$. The disparity between
the observed ratio and the theoretical ratio may be a result of the difficult
calibration of Alice around Lyman-$\alpha$, variation in the shape of the
Lyman series cross-sections or differences in the cascade emissions. Case 8,
also in early 2016, has a ratio of 2.56 with a large electron flux, but the
1027 Å line brightness has a large contribution of OI emissions from e + CO2.
The modelled Ly-$\beta$ emissions from e + H2O in case 8 are a factor 4.4
smaller than the Ly-$\alpha$ brightness, consistent with the line ratio in Nov
2014.
Cases 11-13 have Ly-$\alpha$/Ly-$\beta=$ 2.6, 3.2 and 3.3 respectively,
suggesting that a process with a small line ratio has contributed
significantly to the Ly-$\beta$ brightness. Electron impact on other neutral
species cannot be a strong source of OI1027 in these cases, as they would show
stronger emissions in other lines (e.g. OI1356 for O2 and CO2) which were not
observed. Resonant scattering of solar flux on atomic hydrogen would generate
Ly-$\alpha$ emissions 300 times brighter than Ly-$\beta$, greatly increasing
the line ratio. It is also very unlikely at the low cometocentric distances
($\sim 10$ km) as neutral molecules are advected away from the nucleus
(timescale of the order of seconds; Galand et al. 2016) before they can
undergo photodissociation (timescale $>10^{5}$ s; Huebner & Mukherjee 2015).
Prompt-photodissociation, which gives a Ly-$\alpha$/Ly-$\beta$ ratio of 1.4
(Hans et al. 2015), could generate the lower line ratio. The emission of
OI1304 from this process is also very weak, as a spin forbidden transition
would be required (Wu & Judge 1988). However, if we had significantly
underestimated the efficiency of this process in the model, a discrepancy
would be seen in the other cases. A large increase in the column of water in
cases 11-13 (by a factor of 8, 15 and 5, respectively) would bring the
modelled Ly-$\beta$ into agreement with the observed brightnesses, whilst
generating few emissions in the other atomic lines due to the low emission
frequencies (dark blue, Figs. 4a & b). However, this would result in a neutral
column comprising 80% water in cases 11 and 12 (50% in case 13), which is
inconsistent with the concurrent mass spectrometer measurements (see Fig. 5).
The required mixing ratio would also be incongruous with wider outgassing
trends, as the outgassing rate of CO2 in the southern hemisphere was larger
than that of H2O in Mar 2016 (Case 11) and was dominant over the outgassing of
H2O near the end of mission (Cases 12 and 13; Gasc et al. 2017a; Läuter et al.
2018). The source of the Lyman-$\beta$ emissions in cases 11-13 remains
unclear.
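The line-ratio argument above can be made quantitative with a two-component estimate: if a fraction $f$ of the Ly-$\beta$ brightness comes from e + H2O (Ly-$\alpha$/Ly-$\beta$ ratio $\approx 8$) and the remainder from prompt-photodissociation ($\approx 1.4$), the observed ratio fixes $f$. A sketch of this estimate (an illustration of the reasoning, not a calculation from the paper):

```python
# Two-component line-ratio estimate: if a fraction f of the Ly-beta brightness
# comes from process 1 (Ly-alpha/Ly-beta ratio R1) and (1 - f) from process 2
# (ratio R2), the combined ratio is R_obs = f*R1 + (1 - f)*R2, so
# f = (R_obs - R2) / (R1 - R2).
R_E_H2O = 8.0    # e + H2O (Makarov et al. 2004)
R_PROMPT = 1.4   # prompt-photodissociation of H2O (Hans et al. 2015)

def lybeta_fraction(r_obs: float, r1: float = R_E_H2O, r2: float = R_PROMPT) -> float:
    """Fraction of Ly-beta brightness attributable to the high-ratio process."""
    return (r_obs - r2) / (r1 - r2)

for r_obs in (2.6, 3.2, 3.3):  # observed ratios in cases 11-13
    f = lybeta_fraction(r_obs)
    print(f"R_obs = {r_obs}: {100 * (1 - f):.0f}% of Ly-beta from the low-ratio process")
```

For the observed ratios of 2.6-3.3, roughly 70-80% of the Ly-$\beta$ brightness would have to come from the low-ratio process, which is why the required increases in the water column quoted above are so large.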
## 4 Corotating interaction regions in summer 2016
### 4.1 Selection of cases
Having confirmed that the impact of suprathermal electrons on cometary
neutrals is a major source of emissions during quiet periods in the southern
hemisphere in Section 3, we apply the multi-instrument analysis to the CIRs
observed in the summer of 2016. The CIR was observed over four solar rotations
throughout the summer of 2016 (Hajra et al. 2018), from the start of June
until the beginning of September. In the present study, we consider two
occurrences: one over 9 and 10 July, and one on 4 August. Enhanced FUV
emissions were observed on two additional solar rotations in early and late
September. However, observations in the FUV occurred infrequently during these
events, so we could not study the temporal variability of the emissions.
During these events, the suprathermal electron flux can vary by a factor of
100 within several hours, as seen during the first periods on 4 Aug and 9 July
(see Tables 4 and 5 and Fig. 8). The brightnesses of the FUV emission lines
are also seen to vary by a factor of ten within the same time periods (see
Figs. 9 and 11). This scale of variation in the emission brightness has also
been observed by Galand et al. (2020) during a solar event in October 2014 and
Noonan et al. (2018) during a CME in summer 2015. To capture the variability,
we use the highest temporal resolution available for both the measurements of
the electron flux and the observed FUV brightness (10 $\mathrm{min}$). The
emission frequencies are calculated from individual electron flux
measurements, once the flux has been corrected for the spacecraft potential
(see Section 2.4.2). The modelled brightness is plotted in Figures 9 and 11 at
the time resolution of the electron spectrometer. We do not co-add consecutive
spectra from the FUV spectrograph, as in Section 2.2, but instead evaluate the
brightness at three distinct regions of the viewing slit for each spectrum.
In Figures 9 and 11, the three regions, associated with rows 8-11, 13-16, and
18-21, are given by the red, black, and blue crosses, respectively.
Measurements taken simultaneously in the three regions of the FUV spectrograph
slit are linked by vertical black lines, which are therefore indicative of the
spatial variability of the emissions along the Alice slit.
Neither the case in July nor the one in August 2016 had a nadir viewing
geometry as was used in Section 3. On 4 August 2016, the FUV spectrograph was
viewing off the limb of the nucleus, whereas on the 9-10 July 2016 the
spectrograph was pointed at the nucleus but off-nadir. In both situations it
is not possible to derive the column density along the line of sight from the
ROSINA in-situ measurements as was done in cases 1-8 in Section 3.2. Variation
across the nucleus’ surface in both the density and composition of the
outgassing neutrals means we cannot extrapolate from in situ measurements to
calculate the column density along the line of sight. For the August case, the
column densities of CO2 and H2O are derived from remote observations by
VIRTIS-H and MIRO spectrometers, respectively, (see Section 2.3) over each of
the intervals of FUV observations.
In the July event, the surface of the nucleus was illuminated, which prevented
the estimation of reliable column densities using either of the spectrometers.
We therefore set the column density of CO2 during each interval such that the
scales of the modelled and observed OI1356 brightnesses within that interval
agree, consistent with the approach in cases 1-8 in Section 3.2.
The column densities, taken as constant within each interval during the CIR
events considered, are given in Tables 4 and 5. The different periods, with distinct
neutral column densities, are indicated by different coloured lines in Figures
9 and 11. During each interval the nucleus moved slowly within the field of
view of the spectrograph, justifying the constant column density during each
interval. However, between these periods the spacecraft underwent manoeuvres
in which the region of the coma observed changed rapidly, which justifies the
variation of the column density from one interval to the next. As the column
density is taken as constant in each interval, the variation of the modelled
brightness during the time periods is only due to the fluctuations of the
measured suprathermal electron flux. The average electron fluxes in each
period of the FUV spectra are shown in Figures 8 and 10 for August and July,
respectively. The electron flux in period 1 of the August case is split into
two parts, comprising the peaks (dark blue) and troughs (light blue) of the
electron flux, as seen in the modelled brightness in Fig. 9 (dark blue).
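With the column densities held fixed within an interval, the modelled brightness of a line reduces to a sum over species of emission frequency times column density, so its time variability tracks the measured electron flux alone. A minimal sketch of this step (illustrative numbers, not values from the tables):

```python
# Sketch of the forward-modelled brightness for one emission line: with the
# column density N_s of each species held fixed within an interval, the time
# variability comes only from the emission frequencies nu_s(t) derived from the
# measured electron flux: B(t) [R] = 1e-6 * sum_s nu_s(t) * N_s.
# Numbers are illustrative, not values from the paper.

def modelled_brightness(nu_by_species: dict, column_by_species: dict) -> float:
    """Brightness in rayleighs; nu in s^-1, column densities in cm^-2."""
    return 1e-6 * sum(nu * column_by_species[s] for s, nu in nu_by_species.items())

columns = {"CO2": 6.2e14, "H2O": 0.5e14}   # cm^-2, fixed over the interval
nu_t = {"CO2": 2.1e-8, "H2O": 3.0e-9}      # s^-1, from one flux measurement
print(f"{modelled_brightness(nu_t, columns):.2f} R")
```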
### 4.2 4 August 2016
#### 4.2.1 Overview
Figure 7: OSIRIS WAC image (Keller et al. 2007) from 4 Aug 2016 02:00 UT
during period 1 (see Section 4.2.3) with the Alice viewing slit shown in
white. The line of sight passes within $\sim 500$ m of the comet nucleus.
On this day the viewing geometry was limb (Fig. 7), so we have used column
density measurements from the VIRTIS-H and MIRO spectrometers (see Section
2.3). The column density measurements taken in the four time periods are
outlined in Table 4. This solar event occurred near the end of mission at 3.5
au, when CO2 was dominant over H2O in the southern hemisphere, as
seen in Table 4, which is consistent with a volume mixing ratio of
$\ce{H2O}/\ce{CO2}\sim 0.05$ from Läuter et al. (2018). During period 2, the
line of sight of the spectrograph passed over the neck region of the comet,
which had an increased abundance of water (Migliorini et al. 2016), resulting
in the enhanced water column density. CO was not detected during any of the
four periods, and the 3$\sigma$ upper limits on the column density derived from
the sub-mm spectrometer are given in Table 4. The FUV emission spectra
display weak CO band emissions, which indicate that there is some CO in the
coma, although insufficient to be detected by MIRO. In Figure 9, we have assumed
that there is no CO present in the coma as we do not have a good constraint on
the CO mixing ratio throughout the column. The spectrometers could not measure
the column density of O2, so we have assumed there is no O2 present in the
column along the line of sight. Towards the end of mission, the mixing ratio
$\ce{O2}/\ce{CO2}$ decreased ($<1$%, Läuter et al. 2018) and O2 was less
abundant in the southern hemisphere (Hässig et al. 2015), so we would not
expect significant emissions from e + O2. Including a few percent of O2
relative to the H2O column density has negligible impact on the emission
brightness, as emissions of the atomic oxygen lines are dominated by electron
impact on CO2.
Table 4: Properties of the four intervals on 4 Aug 2016 at 3.5 au. The first period has been split into the peaks and troughs in the electron flux, which can be seen in Fig. 9. These are the same periods as shown in Fig. 8 in dark blue (peaks) and light blue (troughs). The CO2 column densities are from VIRTIS-H, the H2O column densities from MIRO, and the CO values are 3$\sigma$ upper limits from MIRO.

Period | Start Time [UTC] | End Time [UTC] | CO2 Column Density [$10^{14}$ ${\mathrm{cm}}^{-2}$] | H2O Column Density [$10^{14}$ ${\mathrm{cm}}^{-2}$] | CO Column Density [$10^{14}$ ${\mathrm{cm}}^{-2}$] | Ave. Electron Flux for 20 - 60 $\mathrm{eV}$ [$10^{7}$ ${\mathrm{cm}}^{-2}\text{\,}{\mathrm{s}}^{-1}\text{\,}{\mathrm{eV}}^{-1}$]
---|---|---|---|---|---|---
1 - Peaks | 01:38:59 | 05:42:30 | $6.21\pm 0.16$ | $0.5\pm 0.03$ | $<1.1$ | 15.5 - 44.9
1 - Troughs | | | | | | 0.56 - 10.1
2 | 07:25:24 | 10:59:15 | $2.65\pm 0.17$ | $2.24\pm 0.09$ | – | 0.77 - 23.3
3 | 11:51:51 | 15:56:18 | $8.43\pm 0.14$ | $0.36\pm 0.02$ | $<1.3$ | 3.16 - 20.4
4 | 17:38:16 | 21:12:03 | $7.93\pm 0.23$ | $0.31\pm 0.02$ | – | 3.48 - 7.95
Figure 8: Average electron particle flux during the four periods listed in
Table 4, with the standard deviation of the flux shown by the shaded
regions. The colour of each period is the same as in Fig. 9. The first period
has been split into the peaks (dark blue) and troughs (light blue) in the
electron flux to highlight the variability in this period. The dotted lines
indicate energies where some of the RPC/IES measurements during the period
could not probe, due to a high spacecraft potential (see Section 2.4.2). The
dashed lines denote energies that could not be probed by any of the RPC/IES
scans during the period. The electron flux from Case 1 in the nadir study (see
Section 3 & Fig. 3) is plotted for comparison (black).
During the CIR, the variation in the suprathermal electron flux is mirrored in
the brightness of the emissions in all four of the selected lines: OI1356
(Fig. 9a), CI1657 and CO4PG (Fig. 9b), Ly-$\beta$ and OI1027 (Fig. 9c), and
OI1304 (Fig. 9d). This suggests that all of these emissions are strongly
driven by electron impact on cometary neutrals. For limb viewing, the
emissions originate from a distant region of the coma, which cannot be probed
in situ. The observed correlation between the in situ measurements of the
electron flux and the remote measurements of the FUV spectrograph suggest that
the variations in the electron flux occur on large scales. The electron fluxes
during each period (see Fig. 8) peak at 40-50 eV, which is not seen in the
electron spectra during the nadir cases in quiet periods (see Fig. 3). This
may result from the higher density and more energetic solar wind electrons
entering the coma that are associated with the CIR.
Figure 9: Comparison of the observed (crosses) and modelled
(coloured lines) brightness of four emission features, OI1356 (a), CI1657 and
CO4PG (b), Lyman-$\beta$ and OI1027 (c), and OI1304 (d), during the four time
periods in which a CIR was observed at comet 67P on 4 Aug 2016 (see
Table 4). The observed brightnesses are from three regions of the FUV
spectrograph slit (rows 8-11 [red], 13-16 [black] and 18-21 [blue]). Vertical
black lines link simultaneous measurements of the brightness.
#### 4.2.2 Periods 2-4
The modelled brightness of the OI1356 emissions closely replicates both the
variations and the magnitude of the observed emissions in periods 2-4 (see
Fig. 9a). The emissions are dominated by e + CO2, which accounts for 98% of
the modelled emissions. This is consistent with our findings from the nadir
cases over the southern hemisphere (see Fig. 6a) for which the neutral
composition was derived from the mass spectrometer (see Section 3.2). The
close agreement in periods 2-4 suggests that there are no other significant
sources of OI1356 in these cases. The lower OI1356 brightness during the
second time period is a result of the lower column of CO2 compared to the
following period (see Table 4). In period 3, the OI1356 emission frequency
from e + CO2 increases from $7.1\times 10^{-10}$ ${\mathrm{s}}^{-1}$ at 12:35
UT to $2.1\times 10^{-8}$ ${\mathrm{s}}^{-1}$ at 15:42 UT, which causes the
concurrent rise in the OI1356 brightness.
Period 4 has weaker emissions due to the small electron flux (green, Fig. 8 &
Table 4), which results in a lower emission frequency. In period 3, the
emission frequency of OI1356 from e + CO2 reached $2.1\times 10^{-8}$
${\mathrm{s}}^{-1}$, five times the peak frequency in period 4
($3.9\times 10^{-9}$ ${\mathrm{s}}^{-1}$).
When looking nadir in Section 3.1, the CI1657 emissions were dominated by
electron impact on CO2, with only a small contribution from fluorescence of CO
(dark green, Fig. 6d) when the suprathermal electron flux was high. For limb
viewing, the modelled brightness of CI1657, driven only by electron impact on
CO2 (without any contribution from fluorescence of CO), well reproduces the
observed brightness throughout periods 2 and 3 (see Fig. 9b), which is
consistent with there being no significant column of CO present along the line
of sight ($\ce{CO}/\ce{CO2}<0.05$). Furthermore, we would not expect any
significant contribution from fluorescence in this case, as e + CO is a more
efficient source of emissions when the suprathermal electron flux is large
(see cases 1-8, Fig. 4). In period 4, a slight underestimation of CI1657 by
the model may result from the lack of CO column included in the model. A
$\ce{CO}/\ce{CO2}$ ratio of 0.2 would explain the disparity, which is only
slightly larger than the ratio in production rates ($\ce{CO}/\ce{CO2}=0.1$) in
the southern hemisphere at this time (Läuter et al. 2018).
For limb viewing, emissions from the IPM contribute to the observed Ly-$\beta$
brightness, unlike in the nadir case (see Section 2.2). As such we include a
2-rayleigh background contribution from the IPM in this line. The Ly-$\beta$
brightness exhibits the same variations as measured in the electron flux
throughout the CIR (see Fig. 9c), suggesting that dissociative excitation by
electron impact is a significant source of this line.
The brightness of Ly-$\beta$ emissions is well captured throughout periods
2-4, which suggests there are no other major sources of emissions. The
contribution from prompt-photodissociation of H2O to the total brightness is
negligible throughout the CIR. In the limb viewing geometry, the extended coma
is also observed, in which photodissociation is a significant source of
neutral atoms. There could be some contribution from resonant scattering off
atomic hydrogen in the extended coma, but the agreement between the modelled
and observed brightnesses in Fig. 9c suggests that there are no significant
emissions from this source. However, it is difficult to estimate it
independently as there is no constraint on the column of atomic hydrogen from
remote instrumentation on Rosetta.
Again, the fluctuations of the OI1304 emission brightness mirror the changes
in the suprathermal electron flux (see Fig. 9d), indicating that e + X is a
major source of emission. In the nadir cases, these emissions were driven by
electron impact on both CO2 and H2O (see Fig. 6b) and when the suprathermal
electron flux was low there were no emissions in this line (see cases 10-14,
Fig. 6). When looking at the limb, we observe substantial emissions
($\sim 3$ R) of OI1304 when the average electron flux from $20-60$
$\mathrm{eV}$ is less than $10^{7}$
${\mathrm{cm}}^{-2}\text{\,}{\mathrm{s}}^{-1}\text{\,}{\mathrm{eV}}^{-1}$. The
disparity between the observed and modelled emissions in Fig. 9d is
approximately constant throughout periods 2-4, suggesting that the residual
emissions are not driven by electrons, whose flux showed strong fluctuations
over the same time period. Similar to Ly-$\beta$, there may be a contribution from
resonant scattering from atomic oxygen along the line of sight, but with a
lack of constraints we cannot determine whether this is sufficient to explain
the discrepancy.
#### 4.2.3 Period 1
During the first period on 4 Aug 2016, the variations in the electron flux
with time are mirrored by the changes in the line brightnesses for all four
atomic lines. There are two strong peaks in the electron flux (dark blue, Fig.
8) from 01:45-02:20 UT and 02:40-03:40 UT during which all four of the atomic
lines considered also exhibit peaks in the brightnesses. From 03:40 UT until
05:20 UT, the electron flux is small (light blue, Fig. 8) and no significant
FUV emissions are observed in the atomic lines. The 2 rayleighs of Ly-$\beta$
emissions observed during this trough are attributed wholly to interplanetary
emissions that are seen with a limb viewing geometry. An increase in the
electron flux at the end of the first period (after 05:20 UT) also coincides
with a rise in the brightnesses of the atomic lines.
However, the scale of the fluctuations related to the two strong peaks in the
modelled emissions greatly exceeds those observed with the FUV spectrograph.
This disparity in scale could be attributed to a poor estimate of either the
column density or the electron flux along the line of sight. A reduction of
the CO2 column, which was the dominant emission source in period 1, by a
factor of 0.4 gives a close agreement between the modelled and observed
brightnesses for all four of the atomic lines. With this reduction, there is a
slight underestimation of the OI1304 brightness, which is consistent with the
results in the later periods. Variation of the CO2 column density within the
first period could drive the difference in scale, so we have considered higher
time resolution VIRTIS data, which split the first period into four parts. A
slightly lower CO2 column is found during the first peak ($3.37\times 10^{14}$
${\mathrm{cm}}^{-2}$, 01:40-02:40 UT), but during the second peak the CO2
column density is found to be larger ($7.4\times 10^{14}$
${\mathrm{cm}}^{-2}$, 02:40-03:40 UT). The result of using these column
densities is plotted with a dashed blue line in Figure 9.
Alternatively, the assumption that the suprathermal electron flux is constant
along the line of sight may break down when viewing the extended coma. The
electron flux is measured at a cometocentric distance of 12 km, but the line
of sight passes within roughly 500 m of the nucleus surface at the closest
point (see Fig. 7) so there could be some variation along the line of sight.
Using the higher time resolution CO2 columns from VIRTIS, the electron flux
would have to be reduced by a factor 0.6 during the first peak and 0.3 during
the second peak to reach a close agreement between the observed and modelled
brightnesses. However, if this assumption were invalid, a similar disparity to
that seen in period 1 would be expected in periods 2-4, which is not the case.
The viewing geometry has been compared between period 1 and periods 2-4 and
there is no obvious distinction (all four periods have a line of sight passing
within 1 km of the nucleus) that would cause a difference in the electron
behaviour.
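Because the modelled brightness is linear in both the column density and the suprathermal electron flux, the correction factors discussed above follow directly from the observed-to-modelled brightness ratio, and such a constant ratio can be absorbed into either quantity. A trivial sketch (illustrative numbers):

```python
# Since the modelled brightness is linear in both the column density and the
# suprathermal electron flux, a constant observed/modelled brightness ratio can
# be absorbed into either quantity: the scaling factor is simply
# f = B_observed / B_modelled. Illustrative numbers only.

def required_scaling(b_observed: float, b_modelled: float) -> float:
    """Factor by which the column (or flux) must be scaled to match the data."""
    return b_observed / b_modelled

# e.g. a modelled peak of 10 R against an observed 4 R implies scaling the
# dominant column (or the flux) by 0.4, as discussed for period 1
print(required_scaling(4.0, 10.0))  # -> 0.4
```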
As the temporal variations of the electron flux and the FUV emissions are well
correlated for all four atomic lines, these emissions are driven by electron
impact, with the variability generated by the CIR. However, it is not clear
why the brightnesses of the emission lines during the two large peaks in period
1 are overestimated in the model.
### 4.3 9 - 10 July 2016
During the CIR on 9-10 July 2016, the FUV spectrograph is viewing the
illuminated nucleus, which pollutes the FUV emission spectra at long
wavelengths ($>1500$ Å) and prevents reliable measurements of the column
density by either spectrometer. There are only a few periods when the
emissions driven by a CIR were observed by Alice, so despite the difficulties
that the reflected solar photons present for the analysis, this period is
still of great interest.
In situ measurements do not provide a good constraint on the density and
composition of the neutral gas column due to the off-nadir view. Therefore, we
adjust the column density during each of the periods to match the observed
OI1356 emission brightness, which is consistent with our findings of CO2
driving this line over the southern hemisphere (Sections 3.2 and 4.2). The
resulting column densities are given in Table 5, under the assumption of a
pure CO2 coma. The adjusted column densities of CO2 are between 0.88 and 1.6
times the radial column density derived from in situ measurements of the
neutral density (see Sec. 5). This is a good agreement given the variation in
the outgassing across the surface and the longer path through the coma due to
the viewing geometry. The off-nadir angle during periods 1-3 was roughly $10^{\circ}$,
so the spatially variable outgassing is likely the major driver of the
difference in column density. The in situ measurements used for this
comparison have been corrected assuming the neutral gas is only CO2 (Gasc et
al. 2017a).
In Figure 11, we plot only the modelled and observed brightness of the OI1356
(a) and CI1657 (b) lines, as we have no constraint on the H2O column density.
The Ly-$\beta$ and OI1304 emissions both have significant contributions from e
+ H2O, which we cannot constrain with the OI1356 emissions (see Section 3.2).
Reflection of solar flux off the illuminated nucleus is seen in the Alice
spectra at long wavelengths, strongly contributing to the atomic line at 1657
Å. In order to distinguish the CI1657 emissions from the coma from those
reflected off the nucleus, we subtract the solar spectrum from the Alice
observations during this event as outlined in Appendix B. OI1356 is a weak
line in the solar spectrum due to the associated forbidden transition, so is
unaffected by this correction. Reflected solar flux would also contribute to
the Ly-$\beta$ and OI1304 brightnesses, but these lines are not considered in
this section.
Table 5: Properties of the three intervals on 9-10 July.

Period | Start Time [After 9 July 2016 00:00 UTC] | End Time [After 9 July 2016 00:00 UTC] | CO2 Column Density [$10^{14}$ ${\mathrm{cm}}^{-2}$] | Total Radial Column Density [$10^{14}$ ${\mathrm{cm}}^{-2}$] | Ave. Electron Flux for 20 - 60 $\mathrm{eV}$ [$10^{7}$ ${\mathrm{cm}}^{-2}\text{\,}{\mathrm{s}}^{-1}\text{\,}{\mathrm{eV}}^{-1}$]
---|---|---|---|---|---
1 | 15:00:00 | 23:30:00 | 6.0 | 3.83 | 0.16 - 15.3
2 | 29:52:00 | 34:30:00 | 2.0 | 2.25 | 2.46 - 25.0
3 | 35:10:00 | 48:00:00 | 1.2 | 0.76 | 1.97 - 36.2
Figure 10: Average electron particle flux during the three periods listed in
Table 5, with the standard deviation of the flux shown by the shaded regions.
The colour of each period is the same as in Fig. 11. The format of the plot is
the same as outlined in Figure 8.
The OI1356 brightness closely matches the variation of the electron flux
during this event. Between 16:00 and 18:00 UT on 9 July, the emission
frequency of OI1356 from CO2 increases from $10^{-9}$ ${\mathrm{s}}^{-1}$ at
15:45 UT, plateaus at $9\times 10^{-9}$ ${\mathrm{s}}^{-1}$ until 17:30 UT,
and then drops to $4\times 10^{-10}$ ${\mathrm{s}}^{-1}$ at 17:45 UT. The
close agreement in the fine structure can be seen between 07:30 and 09:30 UT on
10 July. The emission frequency of OI1356 from e + CO2 increases from
$5.7\times 10^{-9}$ ${\mathrm{s}}^{-1}$ to a maximum $1.3\times 10^{-8}$
${\mathrm{s}}^{-1}$ at 08:00 UT, followed by a decrease to $1.1\times 10^{-9}$
${\mathrm{s}}^{-1}$ at 08:40 UT. At 08:50 UT, the emission frequency and
brightness of the OI1356 line both increase rapidly to $2.0\times 10^{-8}$
${\mathrm{s}}^{-1}$ and 3.2 R, respectively, before plateauing.
Figure 11: Comparison of the observed and modelled
brightness of the OI1356 (a) and CI1657 (b) emission lines from 9 July 15:00 UT
to 11 July 00:00 UT 2016. The plots follow the same format as outlined in
Figure 9. The modelled brightnesses shown here are driven only by electron
impact on CO2.
The observed CI1657 emissions display the same structures seen in the OI1356
brightness (e.g. between 16:00 and 18:00 UT on 9 July; Fig. 11b), capturing
both the magnitude and variability of the fluctuations. On 10 July, a peak in
the observed CI1657 brightness at 18:00 UT is mirrored by an increase in the
CI1657 emission frequency to $5.59\times 10^{-8}$ ${\mathrm{s}}^{-1}$, before
a decrease in both the observed emissions from the coma and the electron flux
until 21:00 UT. When there are few suprathermal electrons, around 21:00 UT on
10 July, there
are almost no observed emissions from the coma, demonstrating that there are
no other major sources of emissions from the coma during this event. The good
agreement between the in situ electron measurement and remote FUV observations
strengthens the case that the electron variations occur over large scales.
## 5 Conclusion
The Rosetta spacecraft’s proximity to the nucleus of 67P, throughout the two-
year escort phase, provided us with the opportunity to observe FUV emissions
from within the coma. We have performed the first forward modelling of FUV
emissions in the southern hemisphere of comet 67P using an extension of the
multi-instrument analysis of Galand et al. (2020) and introducing carbon lines
for the first time. When observing the shadowed nucleus at large heliocentric
distances, the analysis we have applied well reproduces the observed
brightnesses of the selected FUV emission lines (OI1356, OI1304, Ly-$\beta$
and OI1027, CI1657 and CO4PG, and CII1335) for periods with large suprathermal
electron fluxes (see Section 3.2). Therefore, dissociative excitation by
electron impact is a key source of the FUV emissions in the southern
hemisphere away from perihelion. When the suprathermal electron flux was small
(average electron flux $<2\times 10^{7}$
${\mathrm{cm}}^{-2}\text{\,}{\mathrm{s}}^{-1}\text{\,}{\mathrm{eV}}^{-1}$
between 20 and 60 $\mathrm{eV}$), no emissions of either of the OI lines or the
CII line ($B^{\text{OI1304}}<0.55$R, $B^{\text{OI1356}}<0.79$R,
$B^{\text{CII1335}}<0.22$R) were observed, indicating that energetic electrons
in the coma are the dominant driver of these emissions (see Section 3.3). In
the low flux cases, fluorescence of CO was a larger source of emissions at
1657 Å compared to many of the cases with a high electron flux (see Fig. 6d),
due to the larger total column density (see Table 2). The relative
contribution of CO fluorescence in the low flux cases was also enhanced by the
small emission frequency from electron impact processes (see Fig. 4d), which
had dominated in the high flux cases (cases 1-8, Fig 6d).
Unlike the other low suprathermal electron flux cases, in cases 11 & 12 there
were significant emissions at 1027 Å (5 R & 3 R), the source of which remains
unclear. Electron impact on neutrals cannot be the source of the emissions as
the brightness of other atomic lines, such as OI1304 and OI1356 would be
significant as well. The low Lyman-$\alpha$/Lyman-$\beta$ ratios in these
cases (2.6 - 3.3) rule out resonant scattering as a source (expected ratio of
300) and suggest photodissociation could be a driver of these emissions
(expected ratio of 1.4). However, prompt-photodissociation would require
significantly more H2O along the line of sight (by a factor 5-15) to generate
emissions in agreement with those observed. Late in the Rosetta mission, when
these cases occurred, water outgassing was weaker than that of CO2 in the
southern hemisphere (Gasc et al. 2017a; Läuter et al. 2018), so the required
mixing ratios (up to 80% water) are unlikely to have occurred and are
inconsistent with the in situ measurements. Therefore, the source of these unexplained
Ly-$\beta$ emissions remains an open question.
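The ratio diagnostic applied above can be sketched as a comparison of the observed Lyman-$\alpha$/Lyman-$\beta$ ratio against the expected values quoted in the text (300 for resonant scattering, 1.4 for photo-dissociation). The function name and the log-space distance metric are illustrative choices, not part of the analysis pipeline:

```python
import math

# Expected Lyman-alpha / Lyman-beta ratios quoted in the text
EXPECTED = {"resonant scattering": 300.0, "photodissociation": 1.4}

def likely_source(ratio, expected=EXPECTED):
    """Return the candidate source whose expected ratio is closest to the
    observed one, comparing in log space since the ratios span decades."""
    return min(expected, key=lambda name: abs(math.log10(ratio / expected[name])))

# Observed ratios of 2.6-3.3 sit far closer to 1.4 than to 300
source = likely_source(2.6)
```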
In the low electron flux cases (cases 9-13), the neutral column density is
derived from a very simple model (Eq. 2), where we have assumed a fixed comet
radius and constant neutral gas velocity. Using a constant expansion velocity
may underestimate the gas density near the nucleus by up to 50% (Heritier et
al. 2017a), but this has little impact on the conclusions in these low flux
cases. We have also neglected any lateral motion of the neutral gas in the
coma, which could impact the composition and density of the neutrals along the
line of sight. When deriving column densities from in situ measurements, a
more complete 3D structure of the neutral coma (e.g. Bieler et al. 2015b),
including expansion of the gas near the nucleus, could be incorporated into
the multi-instrument analysis.
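As a rough sketch of the kind of constant-velocity outflow model described above (Eq. 2 itself is not reproduced in this section, so the standard radial point-source form $n(r)=Q/(4\pi v r^{2})$ is assumed here), with purely illustrative values for the production rate, gas speed, and comet radius:

```python
import math

def local_density(Q, v, r):
    """Number density (cm^-3) at cometocentric distance r (cm) for a point
    source releasing Q molecules/s radially at constant speed v (cm/s)."""
    return Q / (4.0 * math.pi * v * r**2)

def radial_column_density(Q, v, r0):
    """Column density (cm^-2) along a radial line of sight from r0 outward:
    the integral of n(r) dr from r0 to infinity, i.e. Q / (4 pi v r0)."""
    return Q / (4.0 * math.pi * v * r0)

# Illustrative values only: Q ~ 1e26 s^-1, v ~ 0.5 km/s, r0 ~ 2 km
N_col = radial_column_density(1e26, 5.0e4, 2.0e5)
```

A constant expansion velocity, as noted in the text, underestimates the density near the nucleus where the gas is still accelerating.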
At large heliocentric distances, the FUV spectra obtained in the southern
hemisphere are very different to those from the northern, summer hemisphere
due to the prominence of CO2 in the southern hemisphere (Hässig et al. 2015),
especially post perihelion (Gasc et al. 2017a). Consequently, the FUV spectra
contain much stronger atomic carbon lines (CI and CII) as well as molecular
bands of CO, such as the Fourth Positive Group (Feldman et al. 2018) compared
to the northern hemisphere where they are barely detected. The hemispherical
asymmetry in composition results in significantly different sources for
several emission lines between the northern and southern hemispheres.
OI1356 is produced primarily by e + CO2 in the southern hemisphere (see Fig.
6a), whereas, in the northern hemisphere, e + O2 plays a more significant role
(Galand et al. 2020). The OI1304 brightness had significant contributions from
electron impact on all four of the major neutral species in the coma (CO2,
H2O, CO and O2) across the selected cases over the shadowed nucleus in the
southern hemisphere (see Fig. 6b). The emissions near 1026 Å
correspond only to emissions of Lyman-$\beta$
from e + H2O during the northern hemisphere summer (Galand et al. 2020),
whereas for post-perihelion cases in the southern, winter hemisphere, there
can be a significant contribution of OI1027 from e + CO2 (see case 8, Fig.
2c). Therefore, it is important to account for all four of the major neutral
species when analysing FUV emission spectra over the southern hemisphere to
derive column densities or mixing ratios.
The CI1657 emissions are dominated by e + CO2, when there is a large
suprathermal electron flux (see Section 3.2). However when the population of
suprathermal electrons in the coma is small, fluorescence of CO in the
overlapping 4PG bands becomes a more significant driver of emissions near
1657 Å.
The brightness of the CII1335 emissions is highly sensitive to the column
density of CO, due to the large emission frequency of CO compared to CO2 in
this line (see Fig. 4e). The close agreement between the observed and modelled
brightness of the CII1335 line confirms that the volume mixing ratio of CO
derived from the ROSINA mass spectrometer is accurate, once the contribution
to the CO signal due to fragmentation of CO2 in the ion source is subtracted
from the CO signal on the detector (Dhooghe et al. 2014).
During the CIRs over the 9 and 10 July and on 4 August 2016, the OI1356 and
CI1657 emissions were dominated by e + CO2, as attested by the agreement
between the observed and modelled brightnesses (see Section 4). This is not
surprising as CO2 was the dominant species outgassing from the southern
hemisphere near the end of mission (Luspay-Kuti et al. 2019). For limb
viewing, the Lyman-$\beta$ (Fig. 9c) and OI1304 (Fig. 9d) emissions both have
significant contributions from electron impact, but the model underestimates
the brightness of these lines. As the extended coma is also observed in limb
viewing, resonant scattering of solar photons from atomic hydrogen and oxygen
present along the line of sight may account for the discrepancy between the
model and observations for these lines (Combi et al. 2004; Feldman et al.
2018). When viewing the illuminated nucleus, reflected solar flux from the
nucleus introduced significant noise to the brightness of the CI1657
emissions, but the OI1356 line was generated only by dissociative excitation.
Throughout the CIRs, the brightnesses of all the selected emission lines
exhibit the same temporal variation as measured in the suprathermal electron
flux (Section 4). For all the time periods except the first on 4 Aug 2016, the
modelled brightnesses agreed very closely with the observed line brightnesses,
although there is a slight underestimation of OI1304 throughout the limb
observations (see Figure 9d), which may be attributed to resonant scattering
from atomic oxygen in the extended coma. Resonant scattering off atomic
hydrogen may contribute to Ly-$\beta$ emissions, but this is less apparent as
the emissions are much weaker than those from the IPM, which have been
accounted for. It is difficult to constrain resonant scattering off atomic
oxygen or hydrogen as we do not have measurements of the density of these
neutrals throughout the column. The discrepancy in magnitude in the first
period in August may originate from some change in the electron flux
throughout the column. There should be no significant degradation of electrons
in the coma at the large heliocentric distances considered (see Section
2.4.2), but a large scale potential well (Deca et al. 2017) could cause a
variation of the electron flux over the extended column in limb viewing.
However, if the assumption of a constant electron flux were invalid, the
discrepancy should also arise in the other periods on 4 August, which have
similar viewing geometries.
In a limb or off-nadir viewing geometry, the emissions originate from a
distant region of the coma, which cannot be probed in situ. The observed
correlation between the in situ measurements of the electron flux and the
remote measurements of the FUV spectrograph suggests that any acceleration of
the electron flux occurs on large scales as suggested by Deca et al. (2017).
At times with low electron fluxes, such as period 4 and the troughs in period
1 on 4 August 2016, there were no strong emissions of the FUV lines, which
excludes any strong local heating of electrons in a distant region along the
column. These results support the findings of Galand et al. (2020) that these
are solar wind electrons, which undergo acceleration by several tens of
$\mathrm{eV}$ in the coma. Therefore, the FUV emissions are auroral in nature.
The close correlation observed between the observed FUV auroral brightness and
the electron flux allows FUV spectroscopy to be used as another measure of
structures in the solar wind.
The aurora observed in the southern hemisphere of 67P is similar to the
diffuse aurora at Mars in several ways. Both auroras are driven by solar wind
electrons on open draped field lines (Brain et al. 2007; Volwerk et al. 2019)
as shown in this study at comet 67P and by Schneider et al. (2015) at Mars.
However, at comet 67P the solar wind electrons are accelerated by an ambipolar
field (Deca et al. 2017, 2019), whereas at Mars the electrons driving diffuse
auroras are accelerated at the Sun rather than within the Martian system
(Schneider et al. 2015). Consequently, these auroras are persistent at comet
67P whereas at Mars the diffuse aurora is only observed during strong solar
events.
## Acknowledgements
Rosetta is a European Space Agency (ESA) mission with contributions from its
member states and the National Aeronautics and Space Administration (NASA). We
acknowledge the continuous support of the Rosetta teams at the European Space
Operations Centre in Darmstadt and at the European Space Astronomy Centre. We
acknowledge the staff of CDDP and Imperial College for the use of AMDA and the
RPC Quicklook database. Work at Imperial College London was supported by STFC
of UK under grant ST/N000692/1 and ST/S505432/1. The Alice team acknowledges
support from NASA’s Jet Propulsion Laboratory through contract 1336850 to the
Southwest Research Institute. AB was supported by the Swedish National Space
Agency (grant 108/18). MR acknowledges the support of the State of Bern and
the Swiss National Science Foundation (200021 165869, 200020 182418). VIRTIS
was built by a consortium, which includes Italy, France, and Germany, under
the scientific responsibility of the Istituto di Astrofisica e Planetologia
Spaziali of INAF, Italy, which also guides the scientific operations. The
VIRTIS instrument development, led by the prime contractor Leonardo-
Finmeccanica (Florence, Italy), has been funded and managed by ASI, with
contributions from Observatoire de Meudon financed by CNES, and from DLR. We
thank the Rosetta Science Ground Segment and the Rosetta Mission Operations
Centre for their support throughout all the phases of the mission. The VIRTIS
calibrated data will be available through the ESAs Planetary Science Archive
(PSA) Website (www.rssd.esa.int) and is available upon request until posted to
the archive. We thank the following institutions and agencies for support of
this work: Italian Space Agency (ASI, Italy) contract number I/024/12/1,
Centre National d’Etudes Spatiales (CNES, France), DLR (Germany), NASA (USA)
Rosetta Program, and Science and Technology Facilities Council (UK). All
ROSINA data are the work of the international ROSINA team (scientists,
engineers and technicians from Switzerland, France, Germany, Belgium and the
US) over the past 25 years, which we herewith gratefully acknowledge.
## Supporting data
All of the primary data from Rosetta used in this study will be available
through ESA’s Planetary Science Archive (PSA) at www.rssd.esa.int or is
available on request until posted to the archive. The processed emission line
brightnesses, emission frequencies, column densities, and electron particle
fluxes are made available through the Centre de Donnés astronomiques de
Strasbourg (CDS) archives.
## References
* Ajello (1971) Ajello, J. M. 1971, The Journal of Chemical Physics, 55, 3158
* Ajello et al. (2019) Ajello, J. M., Malone, C. P., Evans, J. S., et al. 2019, Journal of Geophysical Research (Space Physics), 124, 2954
* Balsiger et al. (2007) Balsiger, H., Altwegg, K., Bochsler, P., et al. 2007, Space Science Reviews, 128, 745
* Beegle et al. (1999) Beegle, L. W., Ajello, J. M., James, G. K., Dziczek, D., & Alvarez, M. 1999, Astronomy & Astrophysics, 347, 375
* Bieler et al. (2015a) Bieler, A., Altwegg, K., Balsiger, H., et al. 2015a, Nature, 526, 678
* Bieler et al. (2015b) Bieler, A., Altwegg, K., Balsiger, H., et al. 2015b, Astronomy & Astrophysics, 583, A7
* Biver et al. (2019) Biver, N., Bockelée-Morvan, D., Hofstadter, M., et al. 2019, A&A, 630, A19
* Biver et al. (2015) Biver, N., Hofstadter, M., Gulkis, S., et al. 2015, Astronomy & Astrophysics, 583, A3
* Bockelée-Morvan et al. (2016) Bockelée-Morvan, D., Crovisier, J., Erard, S., et al. 2016, Monthly Notices of the Royal Astronomical Society, Volume 462, Issue Suppl_1, p.S170-S183, 462, S170
* Bockelée-Morvan et al. (2015) Bockelée-Morvan, D., Debout, V., Erard, S., et al. 2015, Astronomy & Astrophysics, 583, A6
* Brain et al. (2007) Brain, D. A., Lillis, R. J., Mitchell, D. L., Halekas, J. S., & Lin, R. P. 2007, Journal of Geophysical Research (Space Physics), 112, A09201
* Broiles et al. (2016) Broiles, T. W., Livadiotis, G., Burch, J. L., et al. 2016, Journal of Geophysical Research: Space Physics, 121, 7407
* Burch et al. (2007) Burch, J. L., Goldstein, R., Cravens, T. E., et al. 2007, Space Science Reviews, 128, 697
* Bykov & Zakharov (2020) Bykov, N. Y. & Zakharov, V. V. 2020, Physics of Fluids, 32, 067109
* Carr et al. (2007) Carr, C., Cupido, E., Lee, C. G. Y., et al. 2007, Space Science Reviews, 128, 629
* Chaufray et al. (2017) Chaufray, J.-Y., Bockelée-Morvan, D., Bertaux, J.-L., et al. 2017, Monthly Notices of the Royal Astronomical Society, 469, S416
* Clark et al. (2015) Clark, G., Broiles, T. W., Burch, J. L., et al. 2015, Astronomy & Astrophysics, 583, A24
* Combi et al. (2004) Combi, M. R., Harris, W. M., & Smyth, W. H. 2004, Gas dynamics and kinetics in the cometary coma: theory and observations, ed. M. C. Festou, H. U. Keller, & H. A. Weaver, 523
* Coradini et al. (2007) Coradini, A., Capaccioni, F., Drossart, P., et al. 2007, Space Science Reviews, 128, 529
* Deca et al. (2017) Deca, J., Divin, A., Henri, P., et al. 2017, Physical Review Letters, 118, 205101
* Deca et al. (2019) Deca, J., Henri, P., Divin, A., et al. 2019, Phys. Rev. Lett., 123, 055101
* Dhooghe et al. (2014) Dhooghe, F., De Keyser, J., Altwegg, K., et al. 2014, in EGU General Assembly Conference Abstracts, Vol. 16, EGU General Assembly Conference Abstracts, 6265
* Edberg et al. (2016) Edberg, N. J. T., Eriksson, A. I., Odelstad, E., et al. 2016, Journal of Geophysical Research A: Space Physics, 121, 949
* Eriksson et al. (2017) Eriksson, A. I., Engelhardt, I. A. D., André, M., et al. 2017, Astronomy & Astrophysics, 605
* Feldman et al. (2015) Feldman, P. D., A’Hearn, M., Bertaux, J.-L., et al. 2015, Astronomy & Astrophysics, 583, A8
* Feldman et al. (2018) Feldman, P. D., Weaver, H. A., A’hearn, M. F., Combi, M. R., & Russo, N. D. 2018, The Astronomical Journal, 155, 193
* Feldman et al. (2002) Feldman, P. D., Weaver, H. A., & Burgh, E. B. 2002, The Astrophysical Journal, 576, L91
* Filacchione et al. (2016) Filacchione, G., Raponi, A., Capaccioni, F., et al. 2016, Science, 354, 1563
* Fink et al. (2016) Fink, U., Doose, L., Rinaldi, G., et al. 2016, Icarus, 277, 78
* Galand & Chakrabarti (2002) Galand, M. & Chakrabarti, S. 2002, Washington DC American Geophysical Union Geophysical Monograph Series, 130, 55
* Galand et al. (2020) Galand, M., Feldman, P. D., Bockelée-Morvan, D., et al. 2020, Nature Astronomy
* Galand et al. (2016) Galand, M., Héritier, K. L., Odelstad, E., et al. 2016, Monthly Notices of the Royal Astronomical Society, 462, S331
* Gasc et al. (2017a) Gasc, S., Altwegg, K., Balsiger, H., et al. 2017a, Monthly Notices of the Royal Astronomical Society, 469, S108
* Gasc et al. (2017b) Gasc, S., Altwegg, K., Fiethe, B., et al. 2017b, Planetary and Space Science, 135, 64
* Gaskell et al. (2017) Gaskell, R., Jorda, L., Capanna, C., Hviid, S., & Gutierrez, P. 2017, SPC SHAP5 CARTESIAN PLATE MODEL FOR COMET 67P/C-G 12K PLATES, RO-C-MULTI-5-67P-SHAPE-V2.0:CG_SPC_SHAP5_012K_CART, NASA Planetary Data System and ESA Planetary Science Archive, ftp://psa.esac.esa.int
* Gilet, N. et al. (2020) Gilet, N., Henri, P., Wattieaux, G., et al. 2020, A&A, 640, A110
* Glassmeier et al. (2007) Glassmeier, K.-H., Boehnhardt, H., Koschny, D., Kührt, E., & Richter, I. 2007, Space Science Reviews, 128, 1
* Goetz et al. (2019) Goetz, C., Tsurutani, B. T., Henri, P., et al. 2019, A&A, 630, A38
* Gulkis et al. (2007) Gulkis, S., Frerking, M., Crovisier, J., et al. 2007, Space Science Reviews, 128, 561
* Hajra et al. (2018) Hajra, R., Henri, P., Myllys, M., et al. 2018, Monthly Notices of the Royal Astronomical Society, 480, 4544
* Hans et al. (2015) Hans, A., Knie, A., Schmidt, P., et al. 2015, Phys. Rev. A, 92, 032511
* Hässig et al. (2015) Hässig, M., Altwegg, K., Balsiger, H., et al. 2015, Science, 347, aaa0276
* Heinisch et al. (2019) Heinisch, P., Auster, H. U., Richter, I., & Glassmeier, K. H. 2019, A&A, 630, A46
* Heritier et al. (2017a) Heritier, K. L., Altwegg, K., Balsiger, H., et al. 2017a, Monthly Notices of the Royal Astronomical Society, 469, S427
* Heritier et al. (2018a) Heritier, K. L., Altwegg, K., Berthelier, J.-J., et al. 2018a, Nature Communications, 9, 2580
* Heritier et al. (2018b) Heritier, K. L., Galand, M., Henri, P., et al. 2018b, Astronomy & Astrophysics, 618, A77
* Heritier et al. (2017b) Heritier, K. L., Henri, P., Vallières, X., et al. 2017b, Monthly Notices of the Royal Astronomical Society, 469, S118
* Huebner & Mukherjee (2015) Huebner, W. & Mukherjee, J. 2015, Planetary and Space Science, 106, 11
* Itikawa (2002) Itikawa, Y. 2002, Journal of Physical and Chemical Reference Data, 31, 749
* Itikawa & Mason (2005) Itikawa, Y. & Mason, N. 2005, Journal of Physical and Chemical Reference Data, 34, 1
* James et al. (1992) James, G. K., Ajello, J. M., Kanik, I., Franklin, B., & Shemansky, D. E. 1992, Journal of Physics B Atomic Molecular Physics, 25, 1481
* Jorda et al. (2016) Jorda, L., Gaskell, R., Capanna, C., et al. 2016, Icarus, 277, 257
* Kanik et al. (1993) Kanik, I., Ajello, J. M., & James, G. K. 1993, Chemical Physics Letters, 211, 523
* Kanik et al. (2003) Kanik, I., Noren, C., Makarov, O. P., et al. 2003, Journal of Geophysical Research, 108, 5126
* Keller et al. (2007) Keller, H. U., Barbieri, C., Lamy, P., et al. 2007, Space Science Reviews, 128, 433
* Kurz (1979) Kurz, E. A. 1979, American Laboratory, 11, 67
* Lupu et al. (2007) Lupu, R. E., Feldman, P. D., Weaver, H. A., & Tozzi, G. G.-P. 2007, The Astrophysical Journal, 670, 1473
* Luspay-Kuti et al. (2019) Luspay-Kuti, A., Altwegg, K., Berthelier, J. J., et al. 2019, Astronomy & Astrophysics
* Läuter et al. (2018) Läuter, M., Kramer, T., Rubin, M., & Altwegg, K. 2018, Monthly Notices of the Royal Astronomical Society, 483, 852
* Makarov et al. (2004) Makarov, O. P., Ajello, J. M., Vattipalle, P., et al. 2004, Journal of Geophysical Research: Space Physics, 109, 1
* McConkey et al. (2008) McConkey, J. W., Malone, C. P., Johnson, P. V., et al. 2008, Physics Reports, 466, 1
* Migliorini et al. (2016) Migliorini, A., Piccioni, G., Capaccioni, F., et al. 2016, Astronomy & Astrophysics, 589, A45
* Mumma et al. (1972) Mumma, M. J., Stone, E. J., Borst, W. L., & Zipf, E. C. 1972, The Journal of Chemical Physics, 57, 68
* Noonan et al. (2018) Noonan, J. W. J., Stern, S. A., Feldman, P. D. P., et al. 2018, The Astronomical Journal, 156, 16
* Odelstad et al. (2015) Odelstad, E., Eriksson, A. I., Edberg, N. J. T., et al. 2015, Geophysical Research Letters, 42, 10,126
* Raghuram & Bhardwaj (2020) Raghuram, S. & Bhardwaj, A. 2020, Icarus, 347, 113790
* Schneider et al. (2015) Schneider, N. M., Deighan, J. I., Jain, S. K., et al. 2015, Science, 350, 0313
* Smith & Wolfe (1976) Smith, E. J. & Wolfe, J. H. 1976, Geochim. Res. Lett., 3, 137
* Stern et al. (2007) Stern, S. A., Slater, D. C., Scherrer, J., et al. 2007, Space Science Reviews, 128, 507
* Volwerk et al. (2019) Volwerk, M., Goetz, Charlotte, Behar, Etienne, et al. 2019, A&A, 630, A44
* Weaver et al. (2011) Weaver, H. A., Feldman, P. D., A’Hearn, M. F., Russo, N. D., & Stern, S. A. 2011, The Astrophysical Journal, 734, L5
* Weaver et al. (2002) Weaver, H. A., Feldman, P. D., Combi, M. R., et al. 2002, The Astrophysical Journal, 576, L95
* Wells et al. (1972) Wells, W. C., Borst, W. L., & Zipf, E. C. 1972, Journal of Geophysical Research, 77, 69
* Wells & Zipf (1974) Wells, W. C. & Zipf, E. C. 1974, Physical Review A, 9, 568
* Wilhelmi & Schartner (2000) Wilhelmi, O. & Schartner, K. H. 2000, European Physical Journal D, 11, 79
* Witasse et al. (2017) Witasse, O., Sánchez-Cano, B., Mays, M. L., et al. 2017, Journal of Geophysical Research: Space Physics, 122, 7865
* Woods et al. (2005) Woods, T. N., Eparvier, F. G., Bailey, S. M., et al. 2005, Journal of Geophysical Research, 110, A01312
* Wu & Judge (1988) Wu, C. Y. R. & Judge, D. L. 1988, The Journal of Chemical Physics, 89, 6275
## Appendix A Calculating the suprathermal electron flux from the RPC/IES
counts
The Rosetta Plasma Consortium Ion and Electron Sensor (RPC/IES) was a top hat
electrostatic analyser which measured the number of electron counts in bins of
azimuthal angle, elevation angle and energy (Burch et al. 2007). The sensor
comprised $16\times 16\times 128$ bins in these dimensions with an angular
resolution of $\ldots\times\ldots$. The energy bins were distributed approximately
logarithmically with a bin width $\Delta E/E=8\%$. RPC/IES measured the
electron count rate in all 16 azimuthal bins simultaneously, whilst the
electron and energy bins were cycled through. During the mission, anodes from
the 8${}^{\text{th}}$ to 15${}^{\text{th}}$ azimuthal bins degraded (Broiles
et al. 2016), hence we only used anodes 0 to 7 in our analysis.
The electron particle flux was derived from RPC/IES measurements by Heritier
et al. (2018a), but here we provide an outline of each step in this
calculation. Within the Level 3 files on the PSA, the background count of
electrons, $C_{L3BG}$, is given for each scan of the electron spectrometer,
which is independent of the instrument bin. The background count must be
corrected as below before being subtracted from the counts in the measured
bins.
$C_{BG}=C_{L3BG}+\sqrt{5}C_{L3BG}$ (6)
When installed on Rosetta, the field of view of the instrument was obscured by
the main body of the spacecraft and other instruments (Clark et al. 2015).
This restricted field of view has been corrected with geometric factors,
$G(\theta_{0},\phi_{0},E_{0})$ (Broiles et al. 2016), where the subscript $0$
indicates that this has been evaluated on the underlying instrument grid of
$16\times 16\times 128$. However, in many of the operation modes, neighbouring
bins in all three dimensions were collapsed into one datapoint to reduce the
demand on telemetry. Combined azimuthal bins have already been separated in
the Level 2 datafiles, available on the PSA, but the combined bins in elevation
and energy have not been separated. We define $N_{Bins}$ as the number of
combined energy bins multiplied by the number of combined elevation bins.
The counts, with background removed on the binned grid, were then un-binned
onto the instrumental $16\times 16\times 128$ grid, assuming the electrons
were distributed evenly between the combined bins.
$C^{\prime}(\theta_{0},\phi_{0},E_{0},t)=\frac{C(\theta_{b},\phi_{b},E_{b},t)-C_{BG}(t)}{N_{Bins}},$
(7)
where the subscript $b$ refers to the binned grid. The un-binned counts were
then divided by the geometric factor for each bin and the integration time
associated with the operation mode to convert to a differential energy flux,
$g^{\prime}(\theta_{0},\phi_{0},E_{0},t)$. The geometric factor of the
instrument used here is based on observations on 1 January 2015, as derived by
Broiles et al. (2016).
$g^{\prime}(\theta_{0},\phi_{0},E_{0},t)=\frac{C^{\prime}(\theta_{0},\phi_{0},E_{0},t)}{\Delta
t\times G(\theta_{0},\phi_{0},E_{0})}$ (8)
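The chain of corrections in Eqs. 6-8 can be sketched as follows; the scalar inputs, integration time, and geometric factor used in the example are placeholders, not calibrated instrument values (in practice these would be arrays on the $16\times 16\times 128$ grid):

```python
import numpy as np

def background_corrected(C_L3BG):
    # Eq. 6: inflate the reported background before subtracting it
    return C_L3BG + np.sqrt(5.0) * C_L3BG

def unbin_counts(C_binned, C_bg, n_bins):
    # Eq. 7: subtract the background, then spread the counts evenly
    # over the N_Bins combined elevation/energy bins
    return (C_binned - C_bg) / n_bins

def differential_energy_flux(C_unbinned, dt, G):
    # Eq. 8: divide by the integration time and the per-bin geometric factor
    return C_unbinned / (dt * G)

# Illustrative pipeline on scalar placeholder values
C_bg = background_corrected(4.0)
C_prime = unbin_counts(100.0, C_bg, 4)
g_prime = differential_energy_flux(C_prime, 0.5, 1e-3)
```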
The differential flux above has not accounted for the efficiency,
$\epsilon(E)$, of the MicroChannel Plates (MCPs) in the instrument, which is
dependent on the impacting electron energy, E (in $\mathrm{eV}$; Kurz 1979).
$\begin{split}\epsilon(E)=0.52&\exp\Big{(}-[\log(E+37)-2.3]^{2}/0.82\Big{)}+0.071\\\
&+0.16\log(E+37)-0.028\,[\log(E+37)]^{2}\end{split}$ (9)
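A direct transcription of Eq. 9, assuming (as is conventional for such instrument fits) base-10 logarithms and $E$ in eV; treat it as a sketch rather than the calibrated efficiency curve:

```python
import math

def mcp_efficiency(E):
    """MCP detection efficiency of Eq. 9 for an impacting electron of
    energy E (eV). Base-10 logarithms are assumed here."""
    x = math.log10(E + 37.0)
    return (0.52 * math.exp(-(x - 2.3)**2 / 0.82)
            + 0.071 + 0.16 * x - 0.028 * x**2)
```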
We returned to the binned energy grid, $E_{b}$, by summing over combined
energy bins, before dividing by the efficiency to give the differential
electron particle flux (DEF), $j(\theta_{b},\phi_{b},E_{b})$.
$j(\theta_{0},\phi_{0},E_{b},t)=\frac{\sum\limits_{\text{Comb.
Energies}}g^{\prime}(\theta_{0},\phi_{0},E_{0},t)}{N_{\text{Comb.
Energies}}\times\epsilon(\bar{E}_{b})\times\bar{E}_{b}},$ (10)
where the summation is over the energy bins which were combined in the binned
grid. $\bar{E_{b}}$ is the average energy of the combined energy bins. The DEF
was integrated over the field of view of RPC/IES to give the electron particle
flux, $J(E)$ (in
${\mathrm{cm}}^{-2}\text{\,}{\mathrm{s}}^{-1}\text{\,}{\mathrm{eV}}^{-1}$;
Heritier et al. 2018a):
$J(E_{b})=\frac{4\pi}{2\pi\sin\theta_{\max}}\int_{\theta_{0}=\theta_{\min}}^{\theta_{\max}}\int_{\phi_{0}=\phi_{\min}}^{\phi_{\max}}\\!\cos\theta_{0}\times
j(\theta_{0},\phi_{0},E_{b})\,\mathrm{d}\phi_{0}\,\mathrm{d}\theta_{0},$ (11)
where the angular limits span the field of view of RPC/IES and $\theta_{\max}$ is its maximum elevation angle.
The term representing a geometric factor at the start of Eq. 11 is a
correction for the limited field of view of RPC/IES, under the assumption of
an isotropic electron particle flux at the detector.
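The field-of-view integration of Eq. 11 can be sketched numerically as below; since the exact angular limits of RPC/IES are not reproduced here, the grid is illustrative and the leading geometric normalisation of Eq. 11 is omitted:

```python
import numpy as np

def integrate_flux(j, theta_edges, phi_edges):
    """Integrate a differential flux j(theta, phi) for one energy bin over
    the field of view, weighting by cos(theta) as in Eq. 11.
    Edges are in radians; j is sampled at bin centres, with shape
    (len(theta_edges)-1, len(phi_edges)-1)."""
    th = 0.5 * (theta_edges[:-1] + theta_edges[1:])   # bin centres
    dth = np.diff(theta_edges)
    dph = np.diff(phi_edges)
    # Midpoint-rule double integral of cos(theta) * j dphi dtheta
    return np.sum(j * (np.cos(th) * dth)[:, None] * dph[None, :])
```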
We note that the electron particle flux at the detector was not representative
of the electron flux in the coma, as the Rosetta spacecraft was generally at a
spacecraft potential lower than -10 $\mathrm{V}$ (Odelstad et al. 2015). This
was corrected for using Liouville’s theorem (see Eq. 4) but this prevented
measurement of any electrons at low energies (see Section 2.4.2).
## Appendix B Removing solar contamination from FUV emission spectra
Figure 12: Comparison of the photon flux observed by Alice at 15:58 UT 9 July
2016 to the solar photon flux, measured by TIMED-SEE at long wavelengths
(1500Å - 1800 Å). Three distinct regions of the Alice slit (Rows 8-11 [red],
Rows 13-16 [black] and Rows 18-21 [blue]) have been plotted which are the same
as those plotted in Figure 11. The linear fits are used to scale the solar
flux, so it can be subtracted from the observed Alice spectrum.
The emission spectra gathered on 9-10 July 2016 were strongly polluted by
reflected solar flux from the illuminated surface of the nucleus. As we are
only interested in the emissions from the coma, it was necessary to remove any
contribution of reflected solar photons. The shape of the solar spectrum was
taken from TIMED-SEE measurements on 16 July 2016 (green, Figure 13), which
was at the same Carrington longitude as the Alice observations. As this study
considers only cases at large heliocentric distances, the coma was optically
thin and there should not have been significant absorption of the solar flux
by cometary neutrals (Heritier et al. 2018b).
(a)(b)(c) Figure 13: Breakdown of the fitting procedure to retrieve the CI1657
and CO4PG line brightness for (a) Rows 8-11, (b) Rows 13-16 and (c) Rows
18-21. The solar spectrum from TIMED-SEE (green) is smoothed and then scaled
to the Alice photon flux (dark blue) at long wavelengths. The residual
observed flux, once the solar spectrum is subtracted, is shown in light blue
and with the CI1657 fit plotted in purple. The combination of the solar
background with the CI1657 line fit is plotted in orange.
As the solar flux was more intense at the long wavelength end of the Alice
spectral range (Feldman et al. 2018), the photon flux at 1au was compared to
the FUV emission spectra from Alice between 1500 Å and 1800 Å for each set of
co-added rows (see Figure. 12), with bright emission features around 1561 Å
and 1657 Å excluded. A clear linear correlation was seen for all three regions
of the Alice slit illustrating the strong presence of the solar spectrum. The
reflected solar flux was derived from the linear fit (green, Fig. 13) and was
subtracted from the observed Alice spectra. The residual FUV emission spectra
(light blue, Fig. 13) were then fitted with a linear combination of two
gaussian distributions (purple).
The CI1657 emission, measured by Alice (dark blue, Fig. 13), was more diffuse
than the atomic line measured at 1au by TIMED-SEE as a result of the lower
spectral resolution (8 -12 Å). The emission feature in the Alice spectra may
also contain several bands of the CO Fourth Positive Group (see Section 2.2)
leading to further broadening. We smoothed the TIMED-SEE flux over 10 Å to
broaden the peak at 1657 Å, which better captured the width of the peak in the
Alice dataset. If the adjusted goodness of fit exceeded 0.65, the line
brightness is given by
$B^{X}=(A_{1}+A_{2})\times
4\arcsin\Big{[}\sin\Big{(}\frac{\alpha}{2}\Big{)}\sin\Big{(}\frac{\beta}{2}\Big{)}\Big{]}\times\frac{10^{6}}{4\pi}$
(12)
where the line brightness $B^{X}$ is in rayleighs and the amplitudes of the
Gaussian distributions, $A_{1}$ and $A_{2}$, have units of photons cm$^{-2}$ s$^{-1}$.
$\alpha$ and $\beta$ are the angles subtended by the coadded rows in the Alice
slit, and the numerical factor originates from the definition of 1 rayleigh
(see Equation 1).
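Eq. 12 can be transcribed directly; the term $4\arcsin[\sin(\alpha/2)\sin(\beta/2)]$ is the solid angle of the rectangular patch of co-added rows, which reduces to $\alpha\beta$ for small angles. The function names here are illustrative:

```python
import math

def slit_solid_angle(alpha, beta):
    """Solid angle (sr) subtended by a rectangular field of angular
    extent alpha x beta (radians)."""
    return 4.0 * math.asin(math.sin(alpha / 2.0) * math.sin(beta / 2.0))

def line_brightness_rayleighs(A1, A2, alpha, beta):
    """Eq. 12: line brightness in rayleighs from the two Gaussian
    amplitudes A1, A2 (photons cm^-2 s^-1) and the angles subtended
    by the co-added rows of the slit."""
    return (A1 + A2) * slit_solid_angle(alpha, beta) * 1e6 / (4.0 * math.pi)
```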
Security, Fault Tolerance, and Communication Complexity
in Distributed Systems
A thesis presented
by
Donald Rozinak Beaver
to
The Division of Applied Sciences
in partial fulfillment of the requirements
for the degree of
Doctor of Philosophy
in the subject of
Computer Science
Harvard University
Cambridge, Massachusetts
May 1990
$\copyright$ 1990 by Donald Rozinak Beaver
All rights reserved.
We present efficient and practical algorithms for a large, distributed
system of processors to achieve reliable computations in a secure manner.
Specifically, we address the problem of computing a general function of
several private inputs distributed among the processors of a network,
ensuring the correctness of the results and the privacy of the inputs,
despite accidental or malicious faults in the system.
Communication is often the most significant bottleneck in distributed
systems. Our algorithms
maintain a low cost in local processing time, are the first to
achieve optimal levels of fault-tolerance, and most importantly, have low
communication complexity. In contrast to the best known previous methods,
which require large numbers of rounds even for fairly simple computations,
we devise protocols that use small messages and a constant number of rounds
regardless of the complexity of the function to be computed. Through
direct algebraic approaches, we separate the communication complexity
of secure computing from the computational complexity of the
function to be computed.
We examine security under both the modern approach of computational
complexity-based cryptography and the classical approach of unconditional,
information-theoretic security. We develop a clear and concise set of
definitions that support formal proofs of claims to security, addressing an
important deficiency in the literature. Our protocols are provably secure.
In the realm of information-theoretic security, we characterize those
functions which two parties can compute jointly with absolute privacy. We
also characterize those functions which a weak processor can compute using
the aid of powerful processors without having to reveal the instances of
the problem it would like to solve. Our methods include a promising new
technique called a locally random reduction, which has given rise not
only to efficient solutions for many of the problems considered in this
work but to several powerful new results in complexity theory.
I would very much like to acknowledge the guidance and advice of
my advisor, Michael Rabin, and of my unofficial advisor, Joan
Feigenbaum, whose help and efforts on my behalf have been instrumental.
I would also like to thank
Roger Brockett,
Shafi Goldwasser,
Mei Hsu,
Les Valiant.
Several institutions besides Harvard have provided formal and informal
support and a productive working environment, including
Williams College,
Cal Tech,
the Hebrew University of Jerusalem,
the Deutsches Forschungszentrum für Künstliche Intelligenz,
and the New York Public Library.
This work was supported in part by NSF grant CCR-870-4513.
The great many colleagues and graduate students who have
made the past four years enjoyable, intellectually stimulating, or just
bearable are too many to mention, but in particular I would like to thank
Judit Bar-Ilan,
Stuart Haber,
Christos Kaklamanis,
Philip Klein,
Danny Krizanc,
Lisa Neal,
Rafail Ostrovsky,
Nir Shavit,
Thanasis Tsantilas.
I especially want to thank
Susan Greenbaum,
Amanda Lathroum,
Beralda Conceiçao de Lima,
Gerry Waggett
for their unending support and inspiration.
And, of course, without two very important Professors,
Don and Ollie Beaver, this effort would not have been possible.
I dedicate this work to them, to all of my family,
and especially to my brother, James.
Some of the protocols appearing here represent joint work:
Chapter <ref> with Dr. Judit Bar-Ilan and Prof. Michael Rabin,
Chapter <ref> with Prof. Shafi Goldwasser,
Chapter <ref> with Prof. Silvio Micali and Mr. Phillip Rogaway,
Chapters <ref> and <ref> with Dr. Joan Feigenbaum,
Chapters <ref> and <ref> with
Dr. Joan Feigenbaum, Dr. Joe Kilian, and Mr. Phillip Rogaway.
CHAPTER: INTRODUCTION
It is true that you can fool all the people some of the time;
you can even fool some of the people all the time;
but you can't fool all of the people all the time.
Abraham Lincoln
In an ideal world, a single, reliable, and trusted computer would take care
of every computational need without delay. In practice, however, no single
computer could be powerful enough, reliable enough, or even sufficiently
accessible to satisfy such an ideal. For reasons of efficiency, reliability,
security, and for the immense benefits of interaction, distributed
computing is the only solution, and indeed it is rapidly being realized.
The tremendous increase in dependence on large, interconnected computer
systems makes a thorough analysis of their reliability of paramount
importance. The issues of fault-tolerance and security are essential to
capitalizing on the advantages of scale and interaction.
At the same time, the need for practical methods to ensure reliability and
security can be satisfied only by efficient techniques. Unwieldy and
inefficient methods are difficult to implement correctly and, even worse,
are likely to be ignored altogether.
The protection of communications among various parties has a long and
rich history, by no means restricted to the age of computers.
Cryptographers have long concerned themselves with developing codes to
encipher and decipher messages. Similarly, limiting and authorizing access
to important resources (whether they are the headquarters of a military
command or the files in a centralized computer) has also been the focus of
centuries of analysis. Often, the methods developed for one problem are
applicable to the other; for instance, the Unix operating system
implements password schemes to authorize logins by using encryption
functions originally designed to protect communications.
Security for distributed computations, however, is a more recent concern.
The direct approach is to use classical methods for security that treat a
collection of interacting computers as a group of individuals, protecting
each computer individually and then protecting the communications between
each pair of computers. Often, methods for security and reliability depend
on the invulnerability of some central host which takes care of
authorization, exclusive access to files, and so on.
Research into distributed system reliability (without regard to security)
addresses the problems of failures in individual processors, and develops
methods such as transaction processing and process migration to recover
from individual failures. Together, the collection of processors can carry
on computations despite local problems. Like the methods developed to
protect communications, though, these methods still rely on one or a few
central hosts, such as name-servers, for essential operations.
The very presence of a large number of somewhat independent processors
affords a much greater degree of reliability and security than that
provided by a single, central host. A large, distributed network of
computers is in some ways very much like a community of people. Some
elements are reliable, some are not; some are prone to accidental mishaps,
others behave maliciously. On the whole, most are reliable most of the
time, but there are few whom one would trust or depend upon completely.
Yet with the proper “laws,” individuals function together despite an
imperfect world, and societies prosper despite individual flaws.
In this work, we take advantage of the synergy provided by the interaction
among many processors in order to ensure that useful computations continue
despite failures in sizable and arbitrary portions of a network.
Interestingly and importantly, our methods apply to ensure both reliability
and security. In retrospect this is not surprising: reliability and
security are two sides of the same coin. A reliable system should be
resilient against the worst-case failure, which is best regarded as a
malicious attack. On the other hand, ensuring that an adversary cannot
obtain information means that the nature of its attack is independent of
that information. This has the advantage that, in an intuitive sense, the
attack can be treated more like a random or accidental failure.
In addition to these intuitive connections between reliability and
security, it turns out that the two have interesting and deeper connections
when defined formally. Essentially, the two goals are unified by the
single purpose of simulating a trusted and reliable central host, even
though none is available and it would be imprudent even with assurances to
depend on a single, supposedly reliable party.
A standard formal method to
ensure that unspecified information is not leaked during an interaction is
to demonstrate that the course of the interaction can be simulated, in a
certain formal sense, based only on the information that is supposed to be
leaked. This information is measured with respect to an ideal setting in
which a trusted host is available; the trusted host leaks only specified
facts, by definition.
We examine interaction in a deeper sense, investigating not simply the information flow from one processor to another but the influence
that one processor may have over the computation. We develop broad
definitions that compare the information and the influence that a
participant or adversary has in one protocol to that which it would have in
another protocol. The meaning of correctness in distributed
computation is closely tied to the measure of influence over outputs,
in the same way that privacy is tied to the measure of information. This observation forms the basis of a fundamentally new
and unified approach to distributed security.
The focus of this dissertation is a collection of efficient
methods whereby
a group of individuals can achieve reliable computations in a
secure manner, despite the failure or corruption of some minority of the
participants.
Consider the following example. The board of trustees of a company would
like to take a vote by secret ballot on a pressing issue, yet none has
time to meet in a particular location. At the same time, because of the
sensitivity of the issue, each trustee would prefer to keep his vote
private. The everyday solution of writing a vote on a piece of paper and
shuffling the sheets before counting the votes is physically impossible in
this setting; the trustees can only communicate over telephone lines,
perhaps using their workstations as well. An electronic analog of the
paper-solution will not work: there may be no trusted party to count the
votes; electronic messages may be easily traced or duplicated. We shall
see how to take a secret ballot, and how to conduct more general
distributed computations, in a simple, efficient, reliable, and secure
manner.
The unusual and key aspect of our methods is that we do not assume the
reliability of any particular processor or the availability of a central,
trusted host. A cynical and paradoxical synopsis of our
approach might be, “Trust everyone, but trust no-one.” In less ambiguous
terms, the community as a whole is reliable, but there is no particular
individual whose reliability is guaranteed. Through “democratic” means,
we ensure that the majority rules, and as long as the majority is reliable,
the particular individuals who fail do not matter. In fact, reliable
operations are ensured even without the need to identify faulty elements in
advance, or to restart entire computations when failures occur (malicious
or otherwise) and the offending components are cast out.
We treat a distributed computation in a general manner as follows. Say
that each of $n$ processors in a network holds a private input value $x_i.$
Together, they must compute some function
$F(x_1,\dots,x_n),$ without revealing anything about the inputs other than
what could be deduced from learning $F(x_1,\dots,x_n)$ alone. In the
secret ballot example, a unanimous vote clearly reveals all the individual
votes, but this is allowable since it is leaked by the result, not as
a side effect of a protocol used to compute the result. In other words,
the network must simulate a trusted external party who receives
$x_1,\dots,x_n,$ computes $F(x_1,\dots,x_n),$ and returns only the result.
Since some arbitrary collection of $t$ of the processors may be
faulty, a centralized solution will not suffice.
Our goal is to develop multiparty protocols to compute the result even
though some of the processors or communication lines may be unreliable in a
random or malicious manner.
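The "trusted external party" abstraction can be made concrete in a few lines. The following sketch (illustrative Python of our own, not a protocol from this work; the function names are hypothetical) shows the ideal functionality that the network of processors must simulate, using the secret-ballot example:

```python
# An illustrative sketch (not a protocol from this work) of the ideal
# model the network must simulate: a trusted host sees every private
# input, computes F, and leaks nothing but the result.

def trusted_host(F, inputs):
    """The idealized functionality: receives x_1, ..., x_n and returns
    only F(x_1, ..., x_n)."""
    return F(*inputs)

# Secret-ballot example: each trustee's vote stays private; the only
# thing anyone learns is whether the measure passes by majority.
def majority_passes(*votes):
    return 2 * sum(votes) > len(votes)

votes = [1, 0, 1, 1, 0]  # private inputs x_1, ..., x_n
result = trusted_host(majority_passes, votes)
print(result)  # prints True: the measure passes; individual votes stay hidden
```

A multiparty protocol is then judged by how faithfully it reproduces this behavior when no such trusted host exists.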
Research into secure multiparty computations can be divided roughly along
the same lines as the rest of cryptographic research: classical,
information-theoretic security vs. the modern approach of complexity-based
cryptography. The classical approaches
to security examine secure communications under conditions where an
eavesdropper has unlimited computational resources at his disposal. Few
methods withstand such strong requirements for security. Fortunately,
requiring perfect, information-theoretic security may be unnecessary in
practical situations, if one is assured of some limitation on an
eavesdropper's or adversary's resources. Diffie and Hellman
pioneered the modern approach of computational
complexity-based cryptography, in which the processors, reliable or
otherwise, are assumed to have bounded amounts of time and memory to
achieve their purposes, honest or otherwise.
Each approach has its own appeal and its own disadvantages.
Information-theoretic cryptography guarantees protection of the
information against any adversary (random or malicious in the worst
possible way), but it may require unreasonable amounts of
communication or computation. Complexity-based cryptography broadens
the range of secure information processing since it permits only weaker
adversaries, but most complexity-based results are based on unproven
assumptions about the intractability of problems like factoring large
numbers.
Based on unproven complexity-theoretic assumptions, inefficient methods for
performing a variety of multiparty computations have been developed
[119, 121, 70, 71, 65, 76, 17]. Yao introduced the
idea of private function computation by presenting a method for two
cooperating individuals to compute a function $F(x,y)$ without having to
reveal $x$ and $y,$ given that the individuals are restricted to
polynomial-time computations [121]. Goldreich, Micali, and
Wigderson showed that any function $F$ described by a Boolean circuit $C_F$
can be computed reliably and securely as long as a majority of the
processors are reliable
[71]. Galil, Haber, and Yung improved the efficiency and
implemented other desirable aspects
[65, 76].
Figure <ref> describes some of the cryptographic
results, and some of the “non-cryptographic” ones, about multiparty
computation.

  Reference   Fault-Tol.     Crypto   Rounds       Message Size   Local Time
  [121]       $t=1,n=2$      yes      const        poly           poly
  [71]        $t<n/2$        yes      $Dn^2$       poly           poly
  [65]        $t<n/2$        yes      $D$          poly           poly
  [28, 39]    $t<n/3$        no       $D$          poly           poly
  [5]         $t<n/3$        no       $D/\log n$   poly           poly
  [5]         $t<n/3$        no       const        poss exp       poss exp
  [107, 8]    $t<n/2$        no       $D$          poly           poly
  [15]        $t<\log n$     no       const        poly           poss exp
  [20]        $t<n/2$        yes      const        poly           poly
Various protocols for 2-party and $n$-party secure computations.
“Crypto” indicates unproven complexity assumptions;
$D$ is the depth of a circuit $C_F$ for the function $F$ to be
computed; $n$ represents the number of players and the sizes of the inputs
(for clarity of presentation); “poly” means polynomially-bounded;
“poss exp” means possibly exponential (depending on the complexity
of $F$). The cost measures include
number of rounds of interaction,
message size,
and local computation time.
Some of these results appear here.
The disadvantages to the early, cryptographic solutions are twofold.
First, because these methods are based on evaluating a circuit $C_F$
representing the function $F$ gate by gate, the amount of interaction may
become prohibitive. We shall say more about this issue momentarily.
The second disadvantage lies in the set of assumptionsassumptions
on which these solutions are founded. Many assume that factoring large
numbers is difficult, or determining the discrete logarithm of a number
modulo some prime is hard; others rely on the general assumption that
one-way trapdoor functions exist, i.e. functions which are easy to
compute but hard to invert (one-way), but which are easy to invert given
additional, trapdoor information. Though these protocols are apparently
secure given the current lack of efficient methods for factoring or
computing discrete logarithms and the like, an advance in the techniques
for solving these apparently intractable problems could destroy the
foundations on which the protocols are based.
Ben-Or, Goldwasser, and Wigderson, and Chaum, Crépeau, and Damgård have
recently introduced a new, “non-cryptographic” approach which circumvents
the use of unproven assumptions, replacing them with certain reasonable
assumptions about the network [28, 39]. Namely, they assume that at most a
third of the processors are faulty, and that secure
pairwise communication lines are available. In other words, let $t$ be a
bound on the number of faulty processors (for simplicity, regard a
processor as faulty if its communication lines are faulty). As long as
$3t<n,$ any function $F(x_1,\dots,x_n)$ described by a Boolean circuit
$C_F$ can be computed reliably and securely in a complete network of $n$
processors with private communication lines. Because the protocols are
based on circuit simulation using threshold schemes, the number of rounds
of interaction and the sizes of the messages involved are related to the
depth and size of $C_F.$
Since threshold schemes form the basis for many of
these protocols, let us digress for a moment to describe them. Say that
one processor holds a private item of information, $s.$ It distributes this
secret among the network, sending values (known as “pieces”) to each
processor $i,$ so that provided that at most $t$ processors fail, the
secret can be put together later. The secret, however, must remain a
secret until then; no group of $t$ or fewer processors, even colluding and
malicious ones, can glean any information about $s$ from their pieces.
Shamir gave an elegant solution, known as secret sharing, based on
properties of random polynomials.
The recent methods for secret computation are based on combining secretly
shared values to create new secretly shared values. Say $x$ and $y$ have
been secretly shared among the system. Using a protocol for addition or
multiplication, new secrets $u$ and $v$ can be constructed such that their
hidden values are $u=x+y$ and $v=xy.$ The new secrets have the same
reliability and secrecy properties as secrets which have been distributed
by some dealer, but there no longer need be a dealer who knows the hidden
values of $u$ and $v.$
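To make the digression concrete, here is a minimal sketch of Shamir-style polynomial secret sharing over a prime field, together with the share-wise addition just described (illustrative Python of our own; the prime, parameters, and helper names are not the dissertation's notation):

```python
import random

P = 2**31 - 1  # a public prime; secrets and pieces live modulo P

def share(secret, n, t):
    """Deal `secret` to n processors so that any t+1 pieces reconstruct
    it while any t pieces reveal nothing: choose a random degree-t
    polynomial f with f(0) = secret, and give processor i the point f(i)."""
    coeffs = [secret % P] + [random.randrange(P) for _ in range(t)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(pieces):
    """Lagrange-interpolate the pieces at x = 0 to recover the secret."""
    secret = 0
    for xi, yi in pieces:
        num, den = 1, 1
        for xj, _ in pieces:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse, since P is prime
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

# Shares are additively homomorphic: adding pieces pointwise yields a
# valid sharing of the sum, with no dealer who knows the hidden value.
# (Pointwise products double the polynomial's degree, so multiplication
# requires extra interaction.)
a = share(10, n=5, t=2)
b = share(32, n=5, t=2)
c = [(x, (ya + yb) % P) for (x, ya), (_, yb) in zip(a, b)]
print(reconstruct(c[:3]))  # any t+1 = 3 pieces recover 10 + 32 = 42
```

Note that the addition requires no communication at all: each processor simply adds the two pieces it already holds.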
By evaluating the Boolean circuit $C_F,$ a new secret $w$ whose hidden
value is $F(x_1,\dots,x_n)$ is constructed. The use of secret sharing is
essential to hiding the intermediate values, which could reflect
information about the inputs which should not be revealed. Most multiparty
protocols, cryptographic or not, revolve around interactively simulating a
circuit gate by gate, following a procedure introduced in [71].
Thus, the communication complexity of most protocols is directly related to
the computational (circuit) complexity of $F.$ In concrete programming
terms, the number of rounds of interaction is proportional to the time that
a centrally-run program would take to compute $F.$
Even for useful functions having reasonable circuit depth, the number of
rounds of interaction becomes prohibitive in a practical sense. Because
communication is a bottleneck in distributed computations, protocols with
high communication complexity are at a great disadvantage. The overhead of
ensuring security by using these methods counteracts the advantages of
computing distributively, even to the point of making simple computations
too expensive to perform.
§ THE COMMUNICATION COMPLEXITY OF SECURE COMPUTATION
We examine a new measure of communication complexity: the communication
complexity of secure computation, which
measures the number of rounds of interaction and the number of bits sent
during a protocol which computes a function reliably and securely. Developing secure and practical protocols with small
communication complexity is the primary focus of this dissertation.
We may contrast the communication complexity of secure computation with the
standard measure of communication complexity as follows. If we relax the
demand for security, any function has a “small” communication complexity:
each processor simply broadcasts its input $x_i,$ and then individually
computes $F(x_1,\dots,x_n)$ from the messages it received. (Technically,
we have ignored an important issue, namely how to achieve broadcast
reliably using private communications, but this problem, known as Byzantine
Agreement, has efficient solutions.) Thus, few rounds and small messages
would suffice.
It is reasonable to expect that more and longer messages may be required to
hide information otherwise leaked by a direct approach. This expectation
holds for the particular protocols mentioned earlier, and even led some
authors to conjecture that the circuit depth of a function is a lower
bound on the number of rounds needed for secret computation.
We disprove that conjecture with our first result [5], which is the
first to address the issue of communication complexity in secure
computation: any circuit of depth $D$ can be evaluated in $O(D/ \log n)$
rounds using small messages. For all functions in $NC^1$ (or more
generally for any function admitting a circuit of polynomial size and
logarithmic depth), this result gives protocols which operate in a small,
constant number of rounds.
For example, consider a secret ballot over a legal measure which will pass
only if there is a two-thirds majority. There is a simple and efficient
protocol which operates in constant rounds regardless of the number of
voters, to determine if the law is passed, without revealing the individual
votes or even the exact tally. A variety of useful functions becomes
efficiently computable using our techniques.
An even stronger and more surprising statement can be made if one is
willing to permit fewer than $O(\log n)$ faults in the network, as opposed
to the most general bound of $n/2$ or $n/3:$
Any function, regardless of its circuit depth or size, can
be computed in a constant number of rounds of interaction and using
small messages.
This is a very surprising result: the communication complexity of secure
computation need not be related to the computational complexity of the
function being computed. It lends great hope to the practical
implementation of methods for distributed security, since communication is
an expensive resource.
At the risk of basing protocols on unproven complexity-theoretic
assumptions, we show also that the goal of achieving constant-round
protocols to compute functions of arbitrary circuit depth can be achieved
at the higher levels of fault-tolerance ($2t < n$).
The common idea underlying many of our protocols is the following: to
compute a function in a distributed manner, convert it to several problems
that can be computed locally and then combined to give the solution,
despite errors in some of the local computations. The key, however, is to
ensure that each locally computed problem is independent of the original to
a certain degree, which hides information about the original problem and
its answer and simultaneously provides a degree of resilience against local
errors. The brunt of the computation is placed on local computations
rather than communications.
These are important tools for practical methods for distributed security
and reliability. With ample levels of fault-tolerance, and using small
messages, the number of rounds of interaction is reduced greatly.
§ FAULT TOLERANCE
The non-cryptographic protocols of [28, 39] tolerate faults in a
third of the network, namely $3t<n.$ It is not difficult to show that a
majority of faults is intolerable (e.g. the AND function cannot be
computed by a half-faulty network [28]). This left a gap between
the achievable $3t<n$ bound and the $2t \geq n$ impossibility result.
We close the gap, showing that as long as the number of faults satisfies
$2t<n,$ any circuit can be evaluated securely and reliably. The crucial
discovery that supports the tight bound on fault-tolerance relies on a
simple problem which we call the ABC Problem: one processor must share
three secrets, $a,$ $b,$ and $c,$ and then prove that $ab=c$ without
revealing any other information.
Our solution is simple and efficient, and supports secret computations
based on arithmetic over exponentially large ($n$-bit) fields (such as the
set of integers modulo $m,$ where $m$ is an $n$-bit prime). A similar but
less efficient result was obtained independently by Ben-Or [107]; in
contrast, that solution uses bitwise operations which would require $O(n
\log n)$ times as many rounds for the same arithmetic. Both methods rely
on a recent technique by Rabin [106] for verifiable secret sharing
without using cryptographic assumptions. (Verifiable secret sharing
has the added property that all nonfaulty processors are assured that the
secret is well-defined and reconstructable even when the dealer of the
secret may be faulty.)
Zero-Knowledge Proof Systems
Our solution to the ABC problem also provides a fast and efficient way for
one processor to prove that a general property holds on a string of bits,
without having to reveal the proof.
Goldwasser, Micali, and Rackoff [74] and Babai [4] pioneered
the concept of zero-knowledge proof systems, in which one party, the
prover, attempts to convince another party, the verifier, that a string
$x$ is in a particular language $L.$ The verifier learns whether $x \in L,$
but as in the multiparty protocols discussed above, he learns nothing more
than that result. Presumably the prover has far greater power to
prove language membership, while the verifier is limited to
polynomial time computations.
A similar but distinct kind of proof system involves committed strings,
and is similar to the ABC problem described above.
Say that the prover can place bits in envelopes and seal them, so that the
verifier cannot read the bits but the prover cannot change them before
opening the envelopes at some future time. In this case, the prover
commits to a string of bits $x,$ and then proves to the verifier that $x
\in L,$ without revealing the bits in $x.$
We give a formulation of both types of zero-knowledge proofs for a network
of processors. Assuming that a majority of individuals are reliable, we
show that a prover can prove membership in any language in
IP
(the class of languages with polynomial-time 2-player proof
systems) without having to know which processors are reliable. Our result
applies to both flavors of proof systems, proving statements about known
strings and proving statements about hidden but committed strings.
Thus, if the prover, the verifier, and a third party are present, the
prover is assured that his proof leaks no additional information, as long
as either the verifier or the third party is reliable.
Multiparty zero-knowledge proof systems are distinct from the related idea
of Goldwasser et al [27], who examine two-prover proof
systems in which provers are kept physically separate. Here, we allow
communication between all processors, where just one of them is a prover.
§ LOCALLY RANDOM REDUCTIONS
We develop a powerful new tool called a locally random reduction
to achieve many of the results mentioned above, along with other results we
shall describe in a moment. A random reduction from the problem of
computing some function $f(x)$ to the problem of computing another function
$g(y)$ is a pair of probabilistic, polynomial-time algorithms $P$ and $Q.$
By computing $P(x),$ one obtains a list of values $(y_1,\dots,y_m)$ at
which to evaluate $g.$ The results are interpolated to determine $f(x)$ by
evaluating $Q(g(y_1),\dots,g(y_m)).$ The reduction is $(k,m)$-locally
random if for any subset $S$ of the $y$'s such that $\abs{S} \leq k,$ the
distribution on $\set{y_i \mid i \in S}$ induced by $P$ is the same,
regardless of $x.$
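As a toy illustration of our own (far simpler than the reductions used later in this work), the parity function admits an $(m-1,m)$-locally random reduction to itself: $P$ scatters $x$ into queries that XOR to $x$, so any $m-1$ of them are uniformly distributed and independent of $x$, and $Q$ recombines the answers.

```python
import random
from functools import reduce
from operator import xor

def parity(x):
    """The target function f: parity of an integer's bits."""
    return bin(x).count("1") % 2

def P(x, m, nbits=16):
    """Scatter x into m oracle queries y_1, ..., y_m whose XOR is x.
    Any m-1 of the queries are uniformly random and independent of x,
    so no coalition of m-1 oracles learns anything about x."""
    ys = [random.randrange(2**nbits) for _ in range(m - 1)]
    ys.append(reduce(xor, ys, x))  # force y_1 ^ ... ^ y_m = x
    return ys

def Q(answers):
    """Recombine the oracle answers g(y_i) = parity(y_i) into f(x),
    using parity(a ^ b) = parity(a) XOR parity(b)."""
    return sum(answers) % 2

x = 0b1011_0010
queries = P(x, m=4)
print(Q([parity(y) for y in queries]) == parity(x))  # prints True
```

Here the oracles do all the evaluation of $g$; the querier's own work, and the information any single oracle sees, are both independent of $x$.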
This new tool has widespread applications. It forms the basis of a theory
for program testingprogram testing developed by Lipton
[89], who also observed that, using our general technique for
LRR's, the Permanent function has a locally random reduction. Blum,
Levin, and Luby [33] used it to show that if the Permanent
function is computable on average in polynomial time with a uniform
distribution, then ${\#}P \subseteq \mbox{ZPP}.$ Nisan [98]
showed that any language in the polynomial time hierarchy admits a
two-prover interactive proof system; subsequently, Fortnow, Lund, Karloff,
and Nisan [91] showed that
$\mbox{PH} \subseteq \mbox{IP}.$
Previously, it was not known whether even coSAT admitted interactive
proofs. Finally, Shamir [115] solved a fundamental open question,
showing $\mbox{IP} = \mbox{PSPACE}.$
The algebraic approach of locally random reductions has sparked a number
of interesting results in cryptography and complexity theory.
We apply locally random reductions to several problems in cryptography
including the secure multiparty computations and zero-knowledge proof
systems described above. As mentioned, the communication complexity of
secure computation can be made independent of the computational complexity
of the function. Locally random reductions serve this purpose by allowing
us to move the computational effort needed to compute the function from the
communication lines to the local processors, thus removing a serious burden
of secure protocols.
Locally random reductions were inspired by the problem of
instance-hiding schemes [1, 14]: a
weak processor, $A,$ wishes to utilize the computational resources of
powerful oracles $B_1,\dots,B_m,$ in order to compute an intractable
function $f(x).$ The weak processor does not, however, want to reveal
anything about $x.$ We show that, using a $(1,m)$-LRR (defined in
Chapter <ref>), any function of $m$ bits has an instance-hiding scheme.
§ EXTREME FAULT-TOLERANCE
Privacy Without Cryptographic Assumptions
We have mentioned that general computations are impossible when fault
levels constitute a majority of the network. To understand better the
nature of privacy in distributed computations, we examine protocols
tolerating greater numbers but strictly weaker types of faults. In fact,
let us assume that no processor fails at all, but that some number $t$ of
them may pool their information to attempt to glean information that should
remain private. What sorts of functions can be computed privately
when $2t \geq n,$ without making unproven assumptions?
We take a first step in this direction by completely characterizing the
functions $F$ which can be computed privately by $n=2$ parties, any $t=1$
of which may “collude.” That is, for general functions of two arguments,
we give a straightforward method to determine if $F$ is privately
computable. This result was obtained independently (with an incorrect
proof) by Kushilevitz [88]. For the case of $n \geq 2,$ Chor and
Kushilevitz [43] showed that a very limited subset of the class of
Boolean valued functions are privately computable
for $2t \geq n.$ The generalization to general-valued functions of several
arguments remains open.
Privacy and Reliability With Cryptographic Assumptions
Not only are most functions impossible to compute with perfect security
when $2t \geq n,$ but an issue of fairness arises. In general, if a
majority of participants halt, the whole computation must come to a halt.
A faulty majority can therefore halt just after learning the result of the
computation, and just before the nonfaulty participants learn the result.
By restricting the processors and the adversarial processors to
polynomial-time computations, we give new protocols which solve both of
these problems, making the weakest possible unproven cryptographic
assumption in this case, that a protocol for two-player Oblivious Transfer
exists. A processor with unbounded resources
could break encryption schemes; thus we do not violate the impossibility of
perfect secrecy.
The important aspect of computations with a faulty majority is the issue of
fairness. Yao [121] and Galil, Haber, and Yung [65] proposed a
definition which did not allow the adversary to have access to the programs
of the nonfaulty processors, and gave cryptographic solutions under these
circumstances. We formulate a stronger definition, allowing the adversary
access to the programs of the reliable processors, and which we believe
best captures the notion of fairness in a quantitative and intuitive sense.
Furthermore, we are able to achieve this stronger property of fairness.
In addition to developing extremal protocols based on cryptographic
assumptions, we consider instead a network equipped with “noisy”
channels, namely channels which allow messages to pass with a 50-50 chance.
We give alternative protocols which utilize these channels instead of
depending on unproven assumptions, showing that fairness, privacy, and
correctness are achievable even when faults abound.
§ DEFINITIONS AND FORMAL PROOFS
In order to prove that protocols are correct and private, a formal model
for distributed protocols and the adversaries that represent the faulty
behavior of the participants is needed.
While the current literature contains many new techniques, it often omits
any mention of an underlying formal model for security, often giving only
intuitive arguments for the claimed results. While intuitive arguments are
satisfactory to the cursory reader, and ideal for the fast dissemination of
results, the lack of formal proofs undermines any assurance that the
methods do provide security and fault-tolerance. Furthermore, with the
variety of assumptions and network characteristics used by the various
solutions, it is difficult to compare the advantages of one over another.
The formulation of a rigorous and coherent model is a primary contribution
of this dissertation. The computational power of the participants and the
adversaries, the available communication channels, the assumptions made to
ensure secrecy, the nature of the adversaries, the definitions of
correctness, privacy, and fairness, all require a formal and explicit
treatment. We give a framework to define and prove various properties of
the networks and the protocols, enabling one to compare what is achievable
across the range of network characteristics and assumptions made.
In keeping with the historical trend of cryptography, we provide a model
which supports an analysis of perfect, information-theoretic security, yet
is adaptable to the modern approach of computational complexity-based
cryptography.
But our work goes far beyond the mere clarification of mechanical
specifications of interactions — which, though necessary for formal
proofs of resilience, are less than stimulating. We address the difficult
and often subtle problem of giving formal definitions for properties such
as correctness and privacy, issues that seem so intuitively easy that
common sense often leads down the wrong path. The research literature is
filled with ad hoc properties and with attempts to formalize each of
them. Few definitions are satisfactory, and because of their ad hoc
nature, few attempts to actually prove claims of security and reliability
have been made.
We give a surprisingly simple and coherent definition of resilience
(see <ref> in <ref>), a combination of security
and reliability that unifies the various properties that one desires in a
multiparty computation. The subtle and key aspect of our approach is to
avoid analyzing each desired property of a protocol separately, instead
giving a single definition from which all properties arise. In a nutshell:
a real multiparty protocol is intended to achieve the results that an ideal protocol having a trusted host would achieve, and the information
and influence of an adversary must be no greater than that in the ideal
case. We define rigorously what it means to “achieve the same results,”
giving a general and powerful tool for measuring the accomplishments of one
protocol against those of another.
§ APPLICATIONS
The protocols we provide achieve secure and reliable distributed
computations of arbitrary functions $F(x_1,\dots,x_n).$ We demonstrate how
this abstraction can fit in with practical goals.
We develop some efficient applications to practically-motivated problems.
For example, we give a fast method for a processor to authenticate itself
and to authorize its transactions. We provide a quick protocol to deliver
mail anonymously, protecting the identity of the sender and even of the
receiver. We describe a reliability mechanism which resists traffic
analysis, by hiding not just the results of its computations but the nature
of the computations themselves (e.g. whether it is idle or processing
a request for a file, etc.).
§ SUMMARY
We develop a host of communication-efficient protocols to achieve optimally
fault-tolerant and secure distributed computations.
Figure <ref> describes the various combinations and
trade-offs in rounds, message sizes, and the computational complexity of
the desired computations.
Faults            Rounds                       Bits   Local Time     Function Class
$t < n/2$         $\mbox{depth}(C_F)/\log n$   poly   poly           P (or poly-size circuits)
$t < n/2$         constant                     poly   poly           $NC^1$ (or log-depth circuits)
$t < n/2$         constant                     exp    size($C_F$)    any $C_F$
$t = O(\log n)$   constant                     poly   exp            any $C_F$
Various tradeoffs in communication complexity, fault-tolerance,
and function-complexity for the results developed here.
We give the first protocol to achieve a secure multiparty computation using
a number of rounds less than the circuit depth of the function to be
computed. In fact, we give the first protocols that use only a constant
number of rounds.
Our protocols are proven secure and reliable using a concise and precise
set of formal definitions. The formulation of these definitions is a major
contribution of this dissertation and provides a clear and general basis
for proving the resilience of other protocols in the literature.
We investigate fault tolerance when faults constitute a majority of the players.
We characterize the class of functions which are privately computable
against a passive adversary.
We give a strong formal definition for fairness when the faults are
Byzantine, and show how to achieve it using additional assumptions.
We develop a powerful new tool, the locally random reduction, which
has broad applications. It reduces the communication complexity of many
secure protocols drastically, regardless of the computational complexity of
the computation the protocol attempts to achieve. It provides a solution
to the problem of utilizing powerful public resources without compromising
privacy. It has inspired applications to program testing and interactive
proof systems and led to several fundamental new results.
CHAPTER: PRELIMINARY DEFINITIONS
The rest of the pages he picked up from the floor, bunched together,
and threw down between his legs into the bowl. Then he pulled the
Down went the helping names, the influential numbers, the
addresses that could mean so much, into the round, roaring sea and
onto the rails.
Already they were lost a mile behind, blowing over
the track now, over the glimpses of hedges into the lightning-passing
Home and help were over. He had eight pounds ten and Lucille Harris'
“Many people have begun worse,” he said aloud.
Dylan Thomas, Adventures in the Skin Trade
We shall require a modest repertoire of definitions before the results of
this dissertation can be stated and proved. Many of these definitions are
standard and we sketch them here for completeness; others have not appeared
before or are unsatisfactory, and the precise and concise
formalization of their intuitive meanings is a novel, necessary, and
unifying contribution of this dissertation. In this chapter we present
general definitions and describe the mechanical nature of a protocol –
i.e. the nature of the participants and adversaries, the network, and
the sequence of computations and communications involved. In
Chapter <ref> we examine the precise nature of security and
reliability for multiparty protocols, a more subtle and significant issue.
Let us emphasize once again the fundamental abstraction of a multiparty computation.
Each of $n$ parties holds a private input value $x_i,$ and together the $n$
parties wish to compute a (finite and multi-valued “probabilistic”)
function $F(x_1,\dots,x_n)$ without revealing anything but the result.
That is, they wish to simulate the following situation: a trusted and
reliable party receives each of their inputs privately, computes the
function $F(x_1,\dots,x_n),$ and returns the results to the players.
In order to appreciate and to compare the variety of protocols under the
broad and varied set of assumptions and network models, a unifying model is
needed. The important mechanical parameters of a model and solution can be
listed in a few dimensions. The participants may be resource-unbounded
automata, having a finite or even an infinite number of states (the
“information-theoretic” model), or Turing machines with limited resources
(the “complexity-based” model). The network may connect the participants
completely, and it may also support a range of communication channels,
broadcast and private channels being the most important but by no means
exclusive. Unproven complexity theoretic assumptions or cryptographic
assumptions may be made;
often, for example, protocols are based on the unproven existence
of one-way or trapdoor functions.
Normally, one allows an adversary to observe and perhaps to change the
communications to and from various participants. The adversary may be
unbounded or bounded, static or dynamic in its choice of participants to
corrupt, and passive or malicious in its actions.
§ NOTATION
We use a vector notation to represent a labelled set
of items: $\vec{x} = \set{(1,x_1),\dots,(n,x_n)};$ we shall normally omit
the labels, writing $\vec{x} = \set{x_1,\dots,x_n}.$ A subscript denotes a
subset of those values: $\vec{x}_T = \set{x_i \mid i \in T}.$ Bars indicate
complements: $\overline{T} = [n]-T,$ where $[n] = \set{1,\dots,n}.$
The operation $a \Div b$ is defined as $b \lfloor \frac{a}{b} \rfloor.$
We adopt the standard alphabet $\Sigma=\set{0,1}.$
We may extend it through a trivial encoding to include other characters,
especially the delimiter, #, and a symbol $\Lambda$ indicating a null message.
Let $\Sigma^{\leq c} = \bigcup_{i=0}^c \Sigma^i.$ If $\sigma$ is a string then
$\sigma[a..b]$ denotes the substring of $\sigma$ from the $a^{th}$
character to the $b^{th},$ inclusive. The symbol $\oplus$ denotes
exclusive or of bits, or the bitwise exclusive-or when applied to a pair of
strings. The exclusive-or of the bits $b_i$ as $i$ ranges over a specified
set is denoted $\oplus_i b_i.$ The symbol $\circ,$ when used with strings,
denotes concatenation. For notational convenience, we define $H_m =
(\sigstar)^m,$ the set of $m$-tuples of strings, and we let $H =
\bigcup_{i=1}^{\infty} H_i.$ We assume a uniquely decodable encoding of
$m$-tuples of strings into single strings.
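For concreteness, the notational conventions above can be mirrored in a short Python sketch; the names `subvector`, `Div`, and `xor_strings` are ours, introduced only for illustration:

```python
# Labelled set \vec{x} = {(1,x_1),...,(n,x_n)} as a dict;
# \vec{x}_T keeps only the entries whose labels lie in T.
def subvector(x, T):
    return {i: x[i] for i in x if i in T}

# a Div b = b * floor(a/b): a rounded down to a multiple of b.
def Div(a, b):
    return b * (a // b)

# Bitwise exclusive-or of two equal-length strings over Sigma = {0,1}.
def xor_strings(s, t):
    assert len(s) == len(t)
    return "".join(str(int(a) ^ int(b)) for a, b in zip(s, t))

x = {1: "00", 2: "01", 3: "11"}
print(subvector(x, {1, 3}))         # {1: '00', 3: '11'}
print(Div(17, 5))                   # 15
print(xor_strings("0110", "1100"))  # 1010
```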
We assume a natural encoding of sets of strings when written using
$\Sigma:$ if $S=\set{s_1,\dots,s_n}$ is a set of strings (where, without
loss of generality, $s_1,\ldots,s_n$ are in lexicographic order), then $S$
is written as $s_1\#s_2\#\cdots\#s_n.$ If $S'$ is a set of objects each of
which has a natural encoding as a string, then the encoding of $S'$ is the
same as the encoding of the set $S$ of encodings of each of its members.
When we speak of a set of messages or a state being written on a tape, we
implicitly assume a natural and uniquely decodable (e.g. prefix-free)
way to tag various items
such as messages (including the sender, receiver, communication channel,
contents, and time of sending) or Turing machine states.
The notation “$i:$” in a protocol description indicates
local computations and variable assignments of player $i.$
The notation “$i\rightarrow j:m$”
indicates that $i$ sends $m$ to $j,$ and $i \rightarrow [n]:m$
means that $i$ broadcasts $m.$ The notation
$(1\leq i \leq n)$ indicates that the succeeding text is performed
in parallel for $i$ in the range from $1$ to $n.$
Let $\dist(X)$ denote the set of all distributions
on a set $X.$ The probability of event $Y$ with respect to
distribution $P$ is denoted $\probb{P}{Y}.$
A probabilistic function
$F$ is a function mapping some domain $X$ to a set of distributions
on a set $Y,$ that is, $F : X \rightarrow \dist(Y).$ The
output of a probabilistic function is a distribution, but sometimes we may
abuse notation and refer to the output as a sample taken
according to that distribution.
The difference of two distributions $P$ and $Q$ on a set $X$ is defined by
$\abs{P-Q} = \max_{W \subseteq X} \abs{\probb{P}{W}-\probb{Q}{W}}.$ We
denote a sample $x$ taken according to $P$ by $x \leftarrow P;$ a distribution
may also be written as $P=\set{x \leftarrow P}.$ Let
$\set{x \leftarrow P \mid Y(x)}$ denote a sample $x$ taken according to the
distribution $P$ subject to the condition that $Y(x)$ holds. Let $f$ be a
function from some number $j$ of arguments to a range $X;$ we denote by
\[
\set{x_1 \leftarrow P_1; x_2 \leftarrow P_2(x_1);
x_3 \leftarrow P_3(x_1,x_2);
\dots; x_j \leftarrow
P_j(x_1,\ldots,x_{j-1}): f(x_1,x_2,\dots,x_j)}
\]
the distribution on $X$ induced by running the $j$ experiments in order and
applying $f$ to the results. Where clear from context, we use the same
notation to describe a sample drawn from that distribution.
The composition of two probabilistic functions $F:X \rightarrow \dist(Y)$
and $G:Y \rightarrow \dist(Z)$ is the probabilistic function
$H:X \rightarrow \dist(Z)$ given by
\[
H(x)=\{y \leftarrow F(x); z \leftarrow G(y) : z \}.
\]
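A minimal Python sketch of these notions, under the assumption (ours) that a finite distribution is represented as a dict mapping outcomes to probabilities; `sample`, `compose`, and `distance` are illustrative names:

```python
import random

def sample(P):
    """x <- P : draw one sample from the finite distribution P."""
    outcomes = list(P)
    return random.choices(outcomes, weights=[P[o] for o in outcomes])[0]

def compose(F, G):
    """H(x) = { y <- F(x); z <- G(y) : z }, computed exactly."""
    def H(x):
        Hx = {}
        for y, py in F(x).items():
            for z, pz in G(y).items():
                Hx[z] = Hx.get(z, 0.0) + py * pz
        return Hx
    return H

def distance(P, Q):
    """|P - Q| = max_W |Pr_P[W] - Pr_Q[W]|; for finite distributions
    this equals half the L1 distance between the probability vectors."""
    support = set(P) | set(Q)
    return 0.5 * sum(abs(P.get(o, 0.0) - Q.get(o, 0.0)) for o in support)

F = lambda x: {x: 0.5, x + 1: 0.5}   # F(x): uniform on {x, x+1}
G = lambda y: {2 * y: 1.0}           # G: deterministic doubling
print(compose(F, G)(3))              # {6: 0.5, 8: 0.5}
print(distance({0: 0.5, 1: 0.5}, {0: 0.75, 1: 0.25}))  # 0.25
```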
Some often used distributions are: the uniform distribution on a set $X,$
denoted $\uniform(X);$
the uniform distribution on the set $\polyn(t,s)$ of polynomials $f(u)$
of degree $t$ satisfying $f(0)=s;$
and the set of $n$ values obtained by selecting such a polynomial at
random and evaluating it at $n$ points, defined by
\begin{eqnarray*}
\allpieces(n,t,s) & = &
\{
\vec{y} \in E^n \mid
(\exists f \in \polyn(t,s)) (\forall i) y_i = f(\alpha_i)
\} \\
\unifpieces(n,t,s) & = &
\uniform(\allpieces(n,t,s))
\end{eqnarray*}
Here, $E$ is some finite field and the $\alpha_i$ values are implicitly fixed,
normally either $\{1,\ldots,n\}$ or $\{1,\omega,\ldots,\omega^{n-1}\}$
where $\omega$ is a primitive $n^{th}$ root of unity.
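These are the distributions underlying polynomial secret sharing in the style of Shamir. A sketch over an illustrative prime field follows; the prime, the evaluation points $\alpha_i = i$, and the Lagrange check are our own choices, added only to show that any $t+1$ pieces determine $s$:

```python
import random

P = 2_147_483_647  # a prime; E = GF(P), chosen here purely for illustration

def unifpieces(n, t, s, alphas=None):
    """Sample from unifpieces(n,t,s): pick f in poly(t,s) uniformly
    (random coefficients with f(0) = s) and evaluate it at the alpha_i."""
    alphas = alphas or list(range(1, n + 1))
    coeffs = [s % P] + [random.randrange(P) for _ in range(t)]
    def f(u):
        return sum(c * pow(u, j, P) for j, c in enumerate(coeffs)) % P
    return [f(a) for a in alphas]

def interpolate_at_zero(points):
    """Lagrange interpolation at u = 0 over GF(P): recovers f(0) = s
    from any t+1 pieces (alpha_i, y_i)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

y = unifpieces(n=5, t=2, s=42)
print(interpolate_at_zero(list(zip(range(1, 6), y))[:3]))  # 42
```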
A coin biased to $0$ by $b$ is the distribution
$\bias(b)\in \dist(\set{0,1})$ given by
$\prob{0}=\half+b.$
An ensemble is a family ${\cal P} = \set{P(z,k)}$ of
distributions on $\Sigma^{\leq p(\abs{z},k)},$ parametrized by
$z \in \sigstar$
and $k \in {\bf N},$
where $p(\cdot,\cdot)$ is polynomially bounded.
Ensembles $\scp$ and $\scq$ are $\delta(k)$-indistinguishable if
\[
(\forall k)
(\forall z)
\hspace{0.3in}
\abs{\scp(z,k)-\scq(z,k)} \leq \delta(k).
\]
We write this as $\scp \indistEn^{\delta(k)} \scq.$
Under the standard $O$-notation, $f(k)=O(g(k))$ means $(\exists c,k_0)(k
\geq k_0 \Rightarrow f(k) \leq c \cdot g(k)).$ We call $k_0$ the
convergence parameter.
Ensembles $\scp$ and $\scq$ are
$O(\delta(k))$-indistinguishable if
\[
(\exists \Delta: \nat \rightarrow \nat)
\mbox{\hspace{0.2in}}
\scp \indistEn^{\Delta(k)} \scq
\mbox{\hspace{0.3in} \rm and }
\Delta(k) = O(\delta(k)).
\]
We use the following adjectives to delineate various abilities to
discriminate between ensembles:
\[
\begin{tabular}{lccccrcl}
% PERFECT
{\defstyle perfect} &
(written $\scp \indistEn \scq$) &
if &
$\scp \indistEn^0 \scq.$ \\
% EXPONENTIAL
{\defstyle exponential} &
(written $\scp \indistEnE \scq$) &
if &
$(\exists c>1)$ &
$\scp \indistEn^{O(c^{-k})} \scq.$ \\
% STATISTICAL
{\defstyle statistical} &
(written $\scp \indistEnS \scq$) &
if &
$(\forall c)$ &
$\scp \indistEn^{O(k^{-c})} \scq.$ \\
% COMPUTATIONAL
{\defstyle computational} &
(written $\scp \indistEnC \scq$) &
if &
(see below)
\end{tabular}%
\]
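As a toy illustration of these gradations (our own example): the ensembles given by a fair coin and by a coin biased to $0$ by $2^{-k}$ differ by exactly $2^{-k}$, so they are exponentially indistinguishable but not perfectly so:

```python
def coin(b):
    """bias(b): Pr[0] = 1/2 + b, Pr[1] = 1/2 - b."""
    return {0: 0.5 + b, 1: 0.5 - b}

def distance(P, Q):
    """|P - Q| = max_W |Pr_P[W] - Pr_Q[W]| = half the L1 distance."""
    support = set(P) | set(Q)
    return 0.5 * sum(abs(P.get(o, 0) - Q.get(o, 0)) for o in support)

# P(z,k) = bias(0) and Q(z,k) = bias(2^-k) give |P(z,k) - Q(z,k)| = 2^-k,
# i.e. P and Q are O(c^-k)-indistinguishable with c = 2.
for k in (1, 4, 10):
    print(k, distance(coin(0), coin(2 ** -k)))
```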
Computational distinguishability is based on the notion of a
resource-bounded machine that tries to distinguish strings sampled from one
distribution or the other. A distinguisher $A$ is a probabilistic
polynomial size circuit which outputs either 0 or 1. Let $A_{\scp(z,k)}$
denote the probability that $A$ outputs a 0 on an input selected according
to $\scp(z,k).$ Let PPC denote the class of probabilistic
polynomial size circuit families.
Ensembles $\scp$ and $\scq$ are $\delta(k)$-computationally
indistinguishable if
\[
(\forall A \in \mbox{\sc PPC})
(\exists k_0)
(\forall k \geq k_0)
(\forall z)
\mbox{\hspace{0.2in}}
\abs{A_{\scp(z,k)}-A_{\scq(z,k)}} \leq \delta(k).
\]
We say they are simply computationally indistinguishable if
$\delta(k)=O(k^{-c})$ for some fixed $c,$ and write $\scp \indistEnC \scq.$
We shall soon consider ensembles induced by running a protocol with
particular parameters $n$ (number of players) and $m$ (size of inputs).
Two families of ensembles $\set{\scp(i,j)}$ and $\set{\scq(i,j)}$ are
$\delta(k)$-indistinguishable if for all $i$ and $j,$ $\scp(i,j)
\indistEn^{\delta(k)} \scq(i,j).$ We write $\set{\scp(i,j)}
\indistFa^{\delta(k)}
\set{\scq(i,j)}.$
An ensemble $\scp$ is polynomially generable
if there exists a polynomial size
circuit family with random inputs that, on input $z$
and $k,$ produces a distribution identical to $\scp(z,k).$
§ TWO-PARTY PROTOCOLS
Two-party protocols are programs for a pair of machines that must jointly
compute a result. In its most general form, the definition of a two-party
protocol specifies only that the machines take turns computing and
communicating, eventually placing an output on each of their output tapes.
In fact, the two participants need not be machines but can be arbitrary,
even non-recursive functions mapping inputs to a random variable describing
the output. We shall postpone discussion in full generality, however, and
for the moment consider a pair of machines.
Security for two-party protocols addresses what a given protocol accomplishes
when one of the participants is replaced by a different machine,
namely a corrupted machine. Protocols are designed to withstand
or at least to detect such changes and to achieve the same results as
when neither party is replaced.
The two common types of machines used to model interactive computations are
Turing machines and circuits. Depending on the degree and the nature of
the faults, one model has advantages over the other. Non-uniform circuits
have greater computational power than Turing machines, so a protocol which
is secure against faulty circuits is stronger than one secure against
faulty Turing machines. On the other hand, Turing machines present a more
natural model for real-world computations, in some senses, and thus the
power of nonfaulty members of a system is modelled more accurately. The
protocols presented in this work generally require the power of a
polynomial time Turing machine at most, but they are stated with respect to
polynomial size circuits or more powerful players, to provide maximal
security. A slightly different way to state this is that any uncorrupted
player is in general a Turing machine, whereas an adversary that corrupts
it may replace it by a more powerful, non-uniform circuit.
§.§ Interactive Turing Machines
An interactive Turing machine [74] is a probabilistic Turing machine
having a read-only input tape $x$ (which may be public or private),
a private work tape $W,$ a private random tape $R$
with a unidirectional read-head,
a read-only communication tape $C_{in}$,
and a writable communication tape $C_{out}$.
A random tape is a convenient intuitive tool to describe the generalized
computation of a probabilistic Turing machine: in particular, a probabilistic
Turing machine is a probabilistic function that maps input $x$ to the
distribution induced by choosing uniformly at random a mapping from
tape positions to $\{0,1\},$ regarding this as the random tape of a
standard, deterministic, multitape Turing machine, and operating the
machine in the usual way.
A machine makes a random decision, flips a coin, by reading
its random tape.
The interactive machine has a special inactive state, which
it enters after having performed a computation and writing on its
communication tape. It is awoken from the inactive state when the
other machine enters its inactive state. Local computation time is
the maximal number of steps used between entering inactive states;
total local computation is the total number of steps executed during
the interaction.
An interactive protocol is a pair of interactive Turing machines
[A,B] sharing the same public input tape and sharing
communication tapes in the standard sense. They take turns being active.
Normally, one or both of the machines uses polynomial local computation
time, and often one of the machines is more powerful.
For instance, in the zero-knowledge proof
model, a machine which proves a complicated theorem may be permitted a
polynomial space computation to assist it in convincing a weak,
polynomial-time verifier of the truth of the theorem.
The communication complexity of a protocol is measured by two
factors: first, the number of rounds, i.e. the number of times
each Turing machine enters its inactive state, and second, the message
complexity, i.e. the total number of bits sent during the protocol.
Each measure is a function of the size $m$ of the inputs.
§.§ Non-Uniform Interaction
For many reasons, a non-uniform model is superior to the uniform Turing
machine approach. For instance, the security of an encryption scheme is
often stated with respect to circuits rather than Turing machines. A
canonical example is given in [73]. Consider a public-key
cryptosystem in which a user A publishes his encryption key $E_A.$ In order
for the system to be secure, there should be no way for an adversary to
choose a polynomial-time Turing machine which inverts the encryption, even
if the adversary can choose the machine after the encryption key is
published. The solution to this problem as presented in [73] is to
allow non-uniform adversaries, such as circuits, which can have such
knowledge “wired in.”
There are other, more important reasons to allow non-uniformity. In order
to combine protocols in a modular way, a way to model information obtained
in earlier interactions is needed [117, 100, 74]. Furthermore,
because external information may be available, even a protocol composed of
many subprotocols may not start with a tabula rasa.
Rather than using circuits as the model, though, it is more natural to
employ “non-uniform” Turing machines; the extensions to the circuit model
to allow interaction are somewhat inelegant. Since the power of polynomial
size circuits is achieved by polynomial time Turing machines with
polynomial size advice strings (the class $\ppoly$), there is no loss of
generality in defining security and correctness with respect to Turing
machines that take advice.
A non-uniform interactive Turing machine is an interactive
Turing machine having a private read-only auxiliary input tape $\sigma.$ A
two-player non-uniform interactive protocol is a pair of non-uniform
interactive Turing machines. Normally, each machine is taken to be a
polynomial-time machine, though for some purposes the machines may be
allowed unbounded computation time and unbounded advice.
In general, we shall use either the non-uniform Turing machine model
just described, or a more general model in which each player's
local computation is described by
some arbitrary, perhaps non-recursive function.
The former corresponds to the modern
trend of “complexity-based” cryptography, whereas the latter corresponds
to the classical approach of “information-theoretic” security, where
issues of computability and complexity are irrelevant.
§.§ Two-Party Interactive Proof Systems
Goldwasser, Micali, and Rackoff [74] and Babai [4] pioneered
similar definitions of what it means for one machine to convince another
that a string is in a language, even when the doubting party does not have
the resources to decide the answer directly. As defined in [74],
an interactive proof system [P,V] for a language $L$ is a
two-party protocol for machines called the prover (P) and the verifier (V). The verifier must use polynomial time, while the prover has
unbounded computation time and space. Both machines employ probabilistic
computation. At the end of the protocol, the verifier outputs
accept or reject. In order to be a proof system, the protocol
must have the following properties:
* Completeness:
For $x \in L,$ if the protocol is run with correct P
and V, then for every $c>0$ and sufficiently large $n=\abs{x},$
V accepts $x$ with probability at least $1-n^{-c}.$
* Soundness:
For $x \not\in L,$ if the protocol is run with any prover and correct V, then for every $c>0$ and sufficiently large $n=\abs{x},$
V rejects $x$ with probability at least $1-n^{-c}.$
This definition generalizes in a straightforward way to cover proofs that
$f(x)=y$ for a function $f$ and values $x$ and $y.$
§.§ Zero-Knowledge
It is natural to examine properties of the protocol under situations in
which one machine or the other is replaced by a different, “faulty”
machine, which tries to obtain more information than it truly deserves.
If the transcript of messages sent during a protocol can be generated by a
machine given only the output of the protocol, then intuitively that
machine learns nothing apart from the output by participating in the
protocol.
In an ideal case, a trusted prover would state whether or not $x \in L,$
and the verifier would accept the statement, learning only that fact
regardless of whether it were itself corrupt. We wish to address the situation
in which the prover might be corrupt, while ensuring that no information
is leaked beyond that of the ideal case.
Let $L \subseteq \sigstar$ be a language. An ensemble $\scp(x,a)$ is perfectly (exponentially, statistically, computationally)
approximable on $L$ if, for $x \in L,$ there is a
probabilistic expected polynomial time Turing machine $M$ whose output
$M(x,a)$ is perfectly (exponentially, statistically, computationally)
indistinguishable from $\scp(x,a).$ The machine $M$ is called a simulator.
Notice the “one-sidedness” of the previous definitions. If $x$ is not in
the language, there is no restriction that the distribution be generable by
some machine.
Let $\vstar$ be a non-uniform interactive Turing machine with input $x$ and
auxiliary (advice) input $a.$ A protocol [P,V] is perfectly
(statistically, exponentially, computationally)
zero-knowledge on
$L$ for $\vstar$ if the ensemble induced by the message history of the
interaction between $P$ and $\vstar$ is perfectly (statistically,
exponentially, computationally) approximable on the language
\[
\starr{L} = \set{ (x,a) \mid x \in L}.
\]
The protocol [P,V] is zero-knowledge on $L$ if it is
zero-knowledge on $L$ for any such $\vstar.$
A more powerful version of zero-knowledge considers a single, universal
simulator that simulates the conversation between $P$ and any $\vstar,$
using $\vstar$ as a “black box”black box whose contents cannot be
examined or reset but whose input/output behavior is accessible. A good
discussion of this and other flavors of zero-knowledge appears in the literature.
§.§ Zero-Knowledge Proof Systems
With the definitions given above, it is easy to describe a zero-knowledge
proof system [74]:
A protocol [P,V] is a perfect (statistical, computational)
zero-knowledge proof system for a
language $L$ if it is an interactive proof system for $L$ and it is
perfectly (statistically, computationally) zero-knowledge on $L.$
§ MULTIPARTY PROTOCOLS: THE PARTICIPANTS
§.§ Player ex Machina
A player is an interactive automaton, namely an
automaton with an input tape, an auxiliary input tape, an output tape, and
a random tape. Let us first consider the familiar case of interactive
Turing machines.
The definitions for the multiparty scenario are straightforward extensions
of those given for two-party protocols.
An interactive Turing machine with an input and an output communication
tape is easily regarded as one having $n-1$ input tapes and $n-1$ output
tapes; technically, the bits of virtual tape $j$ are encoded in positions
$j+k(n-1)$ on the single tape, where $k$ ranges over the integers and
specifies the position on the virtual tape.
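The interleaving just described can be sketched directly; `physical` and `virtual` are hypothetical helper names of our own (for $n-1$ virtual tapes, cell $k$ of tape $j$ occupies physical position $j + k(n-1)$):

```python
def physical(j, k, n):
    """Physical position of cell k on virtual tape j (1 <= j <= n-1)."""
    return j + k * (n - 1)

def virtual(p, n):
    """Inverse mapping: the (tape, cell) pair owning physical position p."""
    k, j = divmod(p, n - 1)
    if j == 0:           # multiples of n-1 belong to tape n-1
        j, k = n - 1, k - 1
    return j, k

n = 4  # three virtual tapes packed into one physical tape
assert all(physical(*virtual(p, n), n) == p for p in range(1, 20))
print([virtual(p, n) for p in range(1, 7)])
# [(1, 0), (2, 0), (3, 0), (1, 1), (2, 1), (3, 1)]
```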
As in the two-party case, each machine has a
read-only input tape, a private auxiliary tape, a private work tape,
and a private output tape.
A multiparty Turing-machine protocol is a set of $n$ interactive
Turing machines $\{M_1,\dots,M_n\}$ each having $n-1$ pairs of
communication tapes: the read-only communication tapes of machine $M_i$ are
labelled $\set{R_{ij}}_{j=1,..,n;j \not= i},$ and its exclusive-write tapes
are labelled $\set{W_{ij}}_{j=1,..,n;j \not= i}.$
A round in a synchronous protocol operating on a complete network
consists of the following events. Each machine $M_i$
enters its active state, computes, and returns to its inactive state.
Each tape $W_{ij}$ is then copied to $R_{ji}.$ We shall soon
describe these mechanics in more detail, including how adversaries affect
the execution.
The computation time of machine $M_i$ is the sum of the steps it
takes while it is in an active state. The message complexity of
the protocol is the total length of all messages sent.
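A toy simulation of one such round (our own illustration): each machine $M_i$ is modelled as a function from its incoming tapes $\set{R_{ij}}$ to its outgoing tapes $\set{W_{ij}}$, after which each $W_{ij}$ is copied to $R_{ji}$:

```python
def run_round(machines, R):
    """One synchronous round; players are indexed 0..n-1 for convenience.
    R[i] maps j to the message that player i last received from j."""
    n = len(machines)
    # Each machine computes its outgoing messages from its incoming ones.
    W = {i: machines[i](R[i]) for i in range(n)}
    # Tape W_ij is then copied to R_ji.
    return {i: {j: W[j][i] for j in range(n) if j != i} for i in range(n)}

# Hypothetical machines: each simply acknowledges whatever it received.
echo = lambda incoming: {j: "ack:" + m for j, m in incoming.items()}
machines = [echo, echo, echo]
R0 = {i: {j: f"hello {j}->{i}" for j in range(3) if j != i} for i in range(3)}
R1 = run_round(machines, R0)
print(R1[0][1])  # ack:hello 0->1
```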
§.§ General Players
Before specifying the sequence of events and computations occurring during
the execution of a protocol in more detail, let us first describe a player
in the most general terms. We allow a broader model than Turing machines
for a few reasons. Turing machines are restricted to computing recursive
functions. If they compute a probabilistic function using a random tape,
then the resulting distribution is a recursive function of the input and
random tape. Non-uniform functions and general distributions would not be
achievable. In fact, even simple distributions, such as generating random
numbers modulo 3, are not computable in bounded time.
Even though the protocols we design will not require non-recursive
functions or distributions, we would like them to be secure against the
most powerful adversary. By using a broad computational model, we cover
all the participants — players and adversaries — in the protocol, and
do not need to pay attention to how a new set of messages is computed
(e.g. whether by Turing machine or non-recursive function) unless we
wish to do so under particular circumstances. This corresponds to the
“information-theoretic” model, in which one is concerned with “perfect”
secrecy; any distinguishable difference or dependence among distributions
on messages sent from player to player is considered to give away
information, regardless of the resources needed to detect it. The
information-theoretic model does not restrict the power of the players and
malicious adversaries to compute recursive or even polynomial-time
functions on the messages they see; in fact, each player is defined more
generally as a probabilistic function from input messages to output messages.
Even though a function from strings to strings can be specified in a
discrete manner, a distribution on strings need not have some
discrete specification. Often a probabilistic machine is regarded as
having a discrete and uniformly distributed input (e.g. a random tape
of bits). Using functions rather than discrete specifications of
functions to describe the allowable transitions and distributions associated
with an “automaton” is more general and more powerful.
Thus, for full generality, we use a very general format: we describe
players as automata that may have a finite or infinite state set, and we
set bounds on available resources only under specific circumstances.
In the “cryptographic” model, for example,
we may assume that players and adversaries are
polynomial-time Turing machines. It boils down to the comment made earlier:
one can either regard a corrupted participant as a player gone awry, in which
case it is better to regard each player as powerful, or one can describe
several different computational models, one for uncorrupted players, one
for an adversary, one for corrupted players, and so on.
Our informal use of the term “automaton” extends beyond the
common usage describing a finite-state machine, since we allow for infinite
state sets. A Turing machine with infinite work and random tapes is an
automaton having an infinite general-state set, where each state describes
the finite control and the contents of the tapes.
Each player has a state set $Q,$ a standard input set $X \subseteq
\sigstar,$ and an auxiliary input set $\auxin.$
For Turing machines, each input in $X$ includes a string $x$ and a security
parameter $k.$ The initial state of a player is selected according to the
input, auxiliary input, and security parameter $k;$ that is, each player has
a function $\initfn:X \times \auxin \times \nat \rightarrow Q.$
As in the case of non-uniform interactive protocols
(<ref>), the auxiliary inputs serve a few purposes. They
introduce and encapsulate non-uniformity in a uniform model; that is, when
giving the conceptually simple specification that each player is a Turing
machine, they are a convenient means to introduce non-uniformity. They may
also represent information held jointly by various players, such as shared
private encryption keys. Their most common use, however, is not as part of
the protocol per se but as a record of information obtained in other
or previous computations and interactions. For example, auxiliary input
$a_i$ may represent the history of conversations in which player $i$
participated during the previous protocol. As in the composition of
zero-knowledge proof systems, examining resilience for arbitrary auxiliary
inputs allows one to demonstrate that secure protocols remain secure when
composed sequentially (cf.
[100, 76]).
During each round of a protocol, each player performs a probabilistic
computation and sends messages distributed according to the results. The
sets of incoming and outgoing messages depend on the types of communication
channels and the number of other players in the network. In particular,
each player is described by a probabilistic transition function
$\delta$ that produces a distribution on new states and new messages
based on the current state and incoming messages:
\[
\delta : Q \times H \rightarrow
\dist(Q \times H).
\]
In other words, given a state and the messages of the previous round,
$\delta$ produces a distribution on new states and messages, from which
distribution the subsequent state and outgoing messages of the player are
drawn.
Note that the transition function produces a distribution on outputs,
in analogy to the computation of a probabilistic Turing machine, which need
not compute a function of its inputs, but rather produces a
distribution on outputs based on its input and induced by the uniformly
random bits on its random tape. A Turing machine can also be regarded as
producing a sample from that distribution (in particular, a Turing
machine is a function of its input and random input). We consider a
general player in a similar, dual fashion: as producing a distribution, or
as producing a sample from that distribution. This will allow us to
describe protocol executions in terms of the distributions on
messages as well as in terms of particular message histories
generated according to those distributions.
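The dual view of a player, as a map to distributions or as a sampler from them, can be sketched in a few lines of Python. This is an illustrative toy, not the formal model: the state set, message alphabet, and the particular transition rule below are all assumptions made for the example.

```python
import random

# A player's transition function delta : Q x H -> Dist(Q x H), represented
# explicitly as a list of ((new_state, outgoing_message), probability) pairs.
def delta(state, incoming):
    """Toy rule: advance the state; echo the incoming message on a coin flip."""
    return [
        ((state + 1, incoming), 0.5),   # heads: forward the message
        ((state + 1, "<null>"), 0.5),   # tails: send the null message
    ]

def sample(dist, rng=random):
    """The same player viewed as a sampler from delta's distribution."""
    outcomes, weights = zip(*dist)
    return rng.choices(outcomes, weights=weights)[0]

new_state, outgoing = sample(delta(0, "hello"))
assert new_state == 1 and outgoing in ("hello", "<null>")
```

Describing executions in terms of `delta`'s explicit distributions, or in terms of particular samples drawn by `sample`, corresponds to the two descriptions of protocol executions used below.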
A player is a tuple $(Q,X,\auxin,\initfn,\delta)$
whose components are as described above.
A polynomial-time Turing machine player is a player whose
transition function $\delta$ and initial-state function $\initfn$ are
computable in time polynomial in
the size of its input, auxiliary input, incoming messages, and an
additional parameter, $k.$
A polynomial-size circuit player
is a player whose
transition and initial-state functions are computable by a circuit family
of size polynomial in the above parameters.
§ PROTOCOLS
§.§ Networks and Communication Channels
A channel is a probabilistic function $C : \sigstar
\rightarrow {\bf N} \times {\bf N} \times \sigstar$ that takes a message
$m$ as input and produces a distribution on sets of deliverable messages.
A deliverable message is a triple of the form $(s,r,m),$ where $1 \leq s,r
\leq n$ and $m \in \sigstar.$ After all channel functions are applied and
the resulting distributions are sampled, the collection of resulting
triples forms the set ${\cal M}$ of messages to be delivered. The subset
${\cal M}_r$ defined as $\set{(s,r,m) \mid (s,r,m) \in {\cal M}, 1 \leq s
\leq n}$ is placed on the input tape of player $r.$ If player $s$ writes
$(m,C_{sr})$ on its communication tape (meaning “send $m$ on channel
$C_{sr}$”) then channel $C_{sr}$ is applied to $m,$ producing $(s,r,m),$
which is then placed on the communication tape of player $r.$ As mentioned
in <ref>, the messages appear in lexicographic order
on the tape.
For example, a private channel $C_{sr}$ from player
$s$ to player $r$ is the function that on input $m$ produces
$\set{(s,r,m)}.$ A broadcast channel $C_s$ on
input $m$ produces $\set{(s,1,m),(s,2,m),\ldots,(s,n,m)}.$ An oblivious
transfer channel
$C_{sr}^{\rm OT}$ gives
weight $1/2$ to messages of the form $\set{(s,r,m)}$ and $1/2$ to messages
of the form $\set{(s,r,\Lambda)}.$
An anonymous channel is modelled by supplying
the recipient simply with the message $m,$ without the identity of its
sender.
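The channel functions just described admit a direct Python sketch. The encodings below (triples as tuples, the null message $\Lambda$ as `None`) are assumptions made for illustration only.

```python
import random

# A channel maps a message m to a set of deliverable triples (s, r, m).
def private_channel(s, r):
    return lambda m: {(s, r, m)}                      # deterministic singleton

def broadcast_channel(s, n):
    return lambda m: {(s, j, m) for j in range(1, n + 1)}

def ot_channel(s, r, rng=random):
    # Oblivious transfer: deliver m or the null message, each w.p. 1/2.
    return lambda m: {(s, r, m if rng.random() < 0.5 else None)}

assert private_channel(1, 2)("x") == {(1, 2, "x")}
assert broadcast_channel(1, 3)("x") == {(1, 1, "x"), (1, 2, "x"), (1, 3, "x")}
assert ot_channel(1, 2)("x") in ({(1, 2, "x")}, {(1, 2, None)})
```

Only the oblivious transfer channel is genuinely probabilistic here; the private and broadcast channels place all their weight on a single set of triples.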
§.§ Protocols
An $n$-player protocolprotocol is essentially a set of players,
channels, and output functions:
\[
\protoPi =
\set{(P_1,\scc_1,\outfn_1,Y_1),\dots,(P_n,\scc_n,\outfn_n,Y_n)},
\]
where $\outfn_i : Q_i \rightarrow Y_i$ is an output function mapping the
(final) state of player $i$ to an output value in $Y_i.$ This can be taken
to be the contents of a work tape or output tape at the end of the
protocol. The set of outputs depends on the purpose of the protocol but we
shall take it to be a set of $m$-bit strings. Player $P_i$ is described by
$P_i=(Q_i,X_i,\auxin_i,\initfni,\delta_i).$ Each $\scc_i$ denotes a set
$\set{C^j}_{j \in \natsmall}$ of channels on which player $P_i$ sends its
messages. We sometimes omit the square brackets and write $\Pi(n,m)$
instead; when other parties such as adversaries or trusted hosts are
discussed, the square brackets are intended to delimit the list of
participants.
A network is a set of channels. A protocol $\Pi(n,m)$ is
implementable on a given network only if its channels are a subset of those
in the network. A protocol family $\Pi=
\set{\Pi(n,m,k)}_{n,m\in \natsmall}$
specifies a protocol for each $n$ and $m.$ For each number $n$ of players
and $m$ of bits in the inputs, $\Pi(n,m)$ denotes a series of protocols
$\Pi(n,m,k)$ indexed by the security parameter $k.$ Finally, $\Pi(n)$
denotes a sequence of sets of $n$ players,
each with $m$ built in; for purposes of asymptotic complexity analysis,
each of these players can be regarded as a Turing machine that reads $n$
and $m$ from its input tape.
§.§ Protocol Compilers for Turing Machines
At times we may wish to regard the participants as Turing machines even
though the function $F(x_1,\dots,x_n)$ itself may not be recursive.
[How can recursive machines compute a non-recursive function?
As we shall shortly discuss, it is often convenient to consider a universal
machine that receives a written specification of $F$ as an input. The
specification, which may be a circuit or formula, need not be generable
by a Turing machine.]
The reason for regarding the players as circuits or non-uniform Turing machines
with respect to their power to “break” the protocol is that the
specification of a non-uniform function $F$ might itself lend power to an
otherwise uniform or polynomial-time machine. We therefore turn our
attention to protocol “compilers,” which produce the specifications of
the players for the various possible network sizes, input sizes, and
security parameters.
A protocol compiler $\scc$ is a circuit family
$\set{C_{n,m,k}}$ where $n$ specifies network size, $m$ specifies the
number of bits in the arguments, and $k$ is a security parameter for
protocols which may ensure security and reliability with high probability,
although not certainty. The compiler $\scc$ produces a description of $n$
Turing machines, each of which has an input $x_i$ and $n$ communication
tapes. The computational resources of the compiler (its size and depth)
may or may not be of importance to the protocol designer, though usually
they are related to the complexity of $F.$ As with circuit families in
general, the compiler itself may be uniform (generated by a Turing
machine) or non-uniform, which would be necessary to specify protocols
computing general, perhaps non-recursive functions $F.$
A universal protocol specifies a single
Turing machine $M$ which takes a special input $(1^n,1^m,1^k,i,C_F),$
where $i$
indicates the identity of the machine in the network and $C_F$ is a
circuit description of the function $F$ for the given size and number
of inputs.
In this case, the protocol for $n$ players consists of $n$ copies of $M,$
each of which is supplied with the appropriate parameters and a unique
identification. Our protocols are generally of this type.
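The instantiation of a universal protocol can be sketched as follows: a single machine description, copied $n$ times, each copy supplied with the public parameters, its identity $i,$ and a circuit description of $F.$ Every concrete name and representation in this sketch is an illustrative assumption, not part of the formal model.

```python
# Instantiate a universal protocol: n copies of one machine M, each with
# the parameters (n, m, k), its unique identity i, and the circuit C_F.
def instantiate(M, n, m, k, C_F):
    """Return the n players of the protocol as callables on local inputs."""
    return [lambda x, i=i: M(n, m, k, i, C_F, x) for i in range(1, n + 1)]

def M(n, m, k, i, C_F, x):
    # Toy universal machine: evaluate the supplied circuit on the local input.
    return (i, C_F(x))

players = instantiate(M, n=3, m=8, k=40, C_F=lambda x: x ^ 1)
assert [p(0b1010) for p in players] == [(1, 0b1011), (2, 0b1011), (3, 0b1011)]
```

The point of the construction is visible even in the toy: `M` is fixed once, and only the parameters and the description `C_F` vary, so the protocol need not be re-specified for each function or network size.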
We measure the
message complexity
(maximal number of bits in any execution),
the round complexity $\rounds$ (number of rounds),
the local computational complexity
(resources required by a Turing machine to compute the player's
transitions between rounds),
and the computational complexity of $F$
(Turing machine
complexity of computing the outputs of all the players)
as functions of $n,$
$m,$ and $k,$ unless otherwise specified. Stating that a protocol uses
polynomial size messages means, for example, that it communicates at most
$p(n,m,k)$ bits in any execution, where $p(n,m,k) = O((nmk)^c)$
for some fixed $c.$
Usually, it is not difficult to design universal protocols in which the
communication complexity is polynomial in $n,m,k$ and the description of
$C_F.$ The trick is to design a compiler whose protocols have communication
complexity polynomial in $n,$ $m,$ and $k$ only, regardless of $F$ and its
complexity.
§.§ Reliable Protocol Execution
Let us formally describe the sequence of steps that occur during a protocol
when all the participants are reliable.
Since our purpose is to specify precisely the nature of a protocol
and its execution rather than to become immersed in formalism, let us use
some shorthand to relieve the burden of the notation. As mentioned, during
each round of a protocol, a player performs a computation and then sends
messages over its various channels. Let $\mess^{\out}(U,V,r)$ denote the
set of messages sent from players in set $U$ to players in set $V$ during
round $r.$ Let $\mess(U,V,r)$
denote the set of messages actually delivered
from players in set $U$ to players in set $V,$ that is, those messages
drawn according to the various channel distributions acting on
$\mess^{\out}(U,V,r).$ Denote the messages received by $V$ from $U$ and $U$
from $V$ by $\mesg(U,V,r) = \mess(U,V,r) \cup \mess(V,U,r).$
(As stated in <ref>, we assume a natural and uniquely
decodable encoding of these sets of messages as strings.) We sometimes
add a label to describe a substring of a message, so that $\mess(i,j,r;x)$
indicates the message $i$ sends to $j$ about variable $x.$
Let $\send(\set{m_1,\dots,m_k},\set{C_1,\dots,C_k})$ denote the
set of messages generated by applying each message $m_i$ to channel
$C_i.$
An execution of a synchronous $R(n,m,k)$-round protocol $\Pi(n,m,k)$ on inputs
$\vec{x}$ and auxiliary inputs $\vec{a}$ is the sequence of states and
messages obtained from the following experiment. Initialize all states
according to $\vec{x}$ and $\vec{a}.$ At each round, select the new states
and outgoing messages using the transition functions of each player, apply
the messages to the channels, and generate the messages actually sent.
Finally, after round $R=R(n,m,k),$ allow one final computation. The output of
player $i$ is a function of its final state; in the case of a Turing
machine, the output is written on the output tape.
More formally, but still using a somewhat loose notation, consider the case
of a network with private lines and broadcast channels. We denote the
messages sent over private lines by $\mess_{\priv}(U,V,r)$ and those sent
over broadcast lines by $\mess_{\broad}(U,[n],r).$ Let $\scc_{\priv}$ and
$\scc_{\broad}$ denote the private and broadcast channels. An execution of
the protocol is generated by the experiment shown in
Figure <ref>.
Reliable Execution
(1) For $i = 1..n$: initialize states
$q_i \leftarrow \initfni(x_i,a_i,k).$
(2) For $r = 1..R$: For $i = 1..n$: compute and send messages
$(q_i,\mess_{\priv}^{\out}(i,[n],r),\mess_{\broad}^{\out}(i,[n],r))
\leftarrow \delta_i(q_i,\mess([n],i,r-1))$
$\mess(i,[n],r) \leftarrow$
$\send(\mess_{\priv}^{\out}(i,[n],r) \cup \mess_{\broad}^{\out}(i,[n],r),
\scc_{\priv} \cup \scc_{\broad})$
(3) For $i = 1..n$: final computation after protocol
$q_i^{f} \leftarrow
\delta_i(q_i^R,\mess([n],i,R))$
Execution of protocol $[\Pi]$ with reliable players in a synchronous
network with private and broadcast channels.
§.§ Views
A global state vector is a sequence of $n$
states, one for each player. In general, we may consider state vectors for
arbitrary subsets of the $n$ players. Let $\playerstates$ be the set of
all partial state vectors:
\[
\playerstates = \set{
\vec{q}_B \mid \vec{q} \in Q_1 \times \cdots \times Q_n,
B \subseteq [n] }.
\]
Similarly, in order to consider sets of inputs and auxiliary inputs for
arbitrary subsets of the players, we define
\[
\playerinputs = \set{
(\vec{x}_B,\vec{a}_B) \mid \vec{x} \in X_1 \times \cdots \times X_n,
\vec{a} \in \auxin_1 \times \cdots \times \auxin_n,
B \subseteq [n] },
\]
and we use this notation to describe the set $\playeroutputs$ of outputs
for arbitrary subsets of the players.
We call the state vector and messages seen by
a subset $B$ of the players at a given round $r$ a view:
\[
\view_B^r = ( \vqr_B, \mesg(B,[n],r) ).
\]
The set $\playervs$ of all possible views is
\[
\playervs = \playerstates \times H.
\]
A history or execution of the protocol is thus
a member of the set $\histsp$ of inputs, auxiliary inputs, initial states,
sequences of views, and final states:
\[
\histsp =
\playerinputs \times \playeraux \times \playerstates \times
(\playervs)^R \times \playerstates.
\]
A protocol induces a distribution on executions according to the experiment
described formally in <ref>. We may regard a protocol
as a probabilistic function from inputs to executions; the function
is the composition of several probabilistic functions as described.
§.§ Memoryless Protocols and Players
As described thus far, a protocol is memoryless;
the current states and messages are all that matter to the execution of the
protocol. The final states of the players do not necessarily reflect the
history of the protocol.
In practical situations it is often impossible to assure that
no record is kept of previous operations. Backups are kept;
transactions must be rolled back to consistent states when
failures occur; and a wide variety of reasons make it extremely
risky to assume that a history of previous behavior is erased
before an adversary gains control of an installation.
In order to assure maximal security, we consider a stronger model in which
each player is replaced by a modified player that records its view of the
history of the protocol in its current state. In other words, the state
$q_i^r$ of player $i$ at round $r$ determines the sequence
$((q_i^1,\mesg(i,[n],1)),\ldots,(q_i^r,\mesg(i,[n],r))).$ The transition
function $\delta_i$ is easily
modified to accommodate this record, giving rise to a player whose behavior
is identical but whose state records the sequence of states and messages
seen by that player. In the case of Turing machines, this means that each
Turing machine is equipped with an extra history tape on which it records
all its computations (communications, random bits, contents of work tapes,
and so on).
We adopt this as the standard model, and assume that all
players are transformed in this manner.
In the standard model, the set of states of players in some coalition $T$
at round $r$ defines not only their current view, $\view_T^r,$
but the history of the protocol up to that point.
We distinguish the weaker model, in which players may forget
previous state information, by specifically noting it as
the memoryless model.
The memoryless model has advantages in proving privacy
against dynamic adversaries in cryptographic models [19]
because it permits easier simulations of the knowledge of corrupted
players.
§ ADVERSARIES AND FAULT MODELS
This dissertation examines the security and reliability of multiparty
protocols under various fault models. The types of allowable faults are
described below. The definitions for the multiparty case also apply to the
two-party scenario as a special case.
Failures are modelled by an adversary which “chooses” to substitute new
messages for the messages otherwise computed according to the protocol.
This “choice” is simply an interpretation of the string output by
the adversary, which is simply a player that computes an output string
based on some transition function and its current state; the description
of a protocol execution states how that output string is interpreted as a
request to corrupt individual participants.
A static adversary must choose the subset it will corrupt before the
protocol is executed. A dynamic adversary may choose at the end of a
round the set of new processors it will corrupt for the next round. In
either case, the adversary may be allowed to rush messages, that is,
to wait for nonfaulty processors to send their messages, see the messages
to which it has access, and then specify the messages of the corrupted
processors for that round. Finally, a strongly dynamic adversary can
rush all messages, examine them, and decide to corrupt machines of its
choice before the messages are sent; the set of corrupted machines may
encompass every player in the network over the course of the execution,
though at any particular time the current coalition must be an allowable
one. We shall consider dynamic but not strongly dynamic adversaries.
We measure security with respect to an adversary
class,adversary class namely a set of adversaries. A typical
example is the class of all polynomial-time Turing machines. A more
powerful class allows unlimited computing time, though still requiring
messages to be recursive functions of the inputs. Another powerful,
non-uniform adversary class is the set of all polynomial-size circuit
families (equivalently, polynomial-time Turing machines that take advice).
The most powerful class, corresponding to “information-theoretic”
security, allows the adversary to be a general player, namely to have an
arbitrary transition function with an arbitrary set of states.
An important parameter of the adversary is the number $t$ of machines it is
allowed to corrupt. The three ranges of primary importance are $t < n/3,$
$n/3 \leq t < n/2,$ and $n/2 \leq t \leq n-1.$ Different levels of
fault-tolerance and security are achievable for each of these ranges.
In addition to specifying the manner in which the adversary can choose
processors to corrupt, there are various sets of restrictions on the types
of messages it is allowed to substitute. In order of increasing power, the
types of faults are as follows.
* Passive:
The adversary cannot substitute different messages for any of the original,
uncorrupted machines. It does, however, have access to the tapes and
states of the machines it “corrupts.” This model is sometimes called the
gossip model.
* Fail-Stop:
The corrupted processors cannot write improper messages (i.e., messages
different from the protocol specification), but may halt at some round,
sending only null ($\Lambda$) messages thereafter.
* Omission:
The corrupted processors cannot write improper messages but may omit some
of them periodically (replacing them by $\Lambda$).
This model is useful in examining faulty
communication lines, where occasional messages are lost.
* Byzantine:
The adversary may compute any message of its choice (with restrictions,
of course, if the adversary is computationally bounded), whether proper or
improper according to the protocol, and it may omit messages as well.
A protocol attacked by an adversary is formally nothing more than a
protocol with $n+1$ players, each evaluating a probabilistic transition
function and receiving and producing messages. When a specific adversary
is concerned, the protocol is denoted $[A,\Pi];$ when an adversary may be
chosen from some adversary class $\advclass,$ the set of resulting
protocols is denoted $\anglebrack{A,\Pi}.$
§.§ Passive Adversaries
A fault class $\sct$ is a collection of allowable
coalitions, namely a collection of subsets of $[n].$ We require that if
$T\in \sct$ and $U \subseteq T,$ then $U \in \sct.$ The standard fault
class is the $t$-fault class, $\sct = \set{T \subseteq [n] \mid
\abs{T} \leq t},$ allowing any set $T$ of size $t$ or less to be corrupted.
Formally, a passive adversary is a tuple
$(Q_A,\auxin_A,\initfnA,\delta_A,T),$ where $Q_A$ is a set of states,
$\initfnA$ is a function mapping auxiliary inputs in $\auxin_A$ to initial
states in $Q_A,$ $\delta_A$ is a transition function to be described
shortly, and $T$ is a function mapping $Q_A$ to subsets of $[n],$ describing the
current coalition that the adversary chooses to corrupt. The passive
adversary does not generate messages but it does choose a new state and new
coalition based on its current state and the view of the coalition it
currently corrupts:
\[
\delta_A : Q_A \times \playervs \rightarrow \dist(Q_A).
\]
A passive adversary is static if $T$ is constant, and thus fixed
before the protocol. A passive adversary is dynamic if $T$ varies
with the state of the adversary. Note that if the number $n$ of players is
fixed, then there is no essential difference between static and dynamic
adversaries. There are a finite number, ${n \choose t}$ of fixed
coalitions; the chance of selecting any one of them at random is a constant
$c_n = 1/{n \choose t}.$ Any dynamic adversary chooses some coalition $T$
with probability at least $c_n.$ Thus there is some static adversary that
starts with coalition $T$ and therefore has a constant fraction ($c_n$) of
the probability of the adversary to corrupt the protocol successfully. We
have not yet presented definitions of security, but it turns out that
constant factors do not matter; essentially, there is a static adversary
having the same power as the dynamic one. If $n$ grows, then the
probability of a particular coalition $T$ vanishes, and there is a
difference between static and dynamic.
A passive $\sct$-adversary for a fault class $\sct$
is a passive adversary for which
(1) for all $q_A \in Q_A,$ $T(q_A) \in \sct,$ and
(2) it always chooses a new coalition that is a superset of
the current one (i.e., for all $q_A \in Q_A,$ for all partial views
$\view_{T(q_A)}^r,$ for all states $q_A'$ having nonzero probability weight
in $\delta_A(q_A,\view_{T(q_A)}^r),$ $T(q_A) \subseteq T(q_A')$).
Since we shall be concerned with fault classes containing all coalitions of
size $t$ or less, we refer simply to a passive $t$-adversary.
Figure <ref> shows how an $R$-round protocol is executed
synchronously in the presence of a passive adversary. The reliable
execution of a protocol defines a view seen by a subset of players; here,
it is straightforward to include the state of the adversary and the
messages it sees in the definitions of views and histories. For clarity,
we abuse notation for the final computation,
omitting mention of the last messages
generated by the transition function $\delta$ because they are ignored.
Passive Execution
(1) For $i = 1..n$: initialize states
$q_i \leftarrow \initfni(x_i,a_i,k).$
$q_A \leftarrow \initfnA(a_A,n,m,k).$
(2) For $r = 1..R$: For $i = 1..n$: each computes and sends messages
$(q_i,\mess^{\out}(i,[n],r)) \leftarrow
\delta_i(q_i,\mess([n],i,r-1))$
$\mess(i,[n],r) \leftarrow
\send(\mess^{\out}(i,[n],r),
\scc_1 \cup \cdots \cup \scc_n)$
adversary computes
$q_A \leftarrow
\delta_A(q_A,\view_{T(q_A)}^{r-1}).$
(3) For $i = 1..n$: final computation
$q_i^{f} \leftarrow
\delta_i(q_i,\mess([n],i,R))$
$q_A^f \leftarrow \delta_A(q_A,\view_{T(q_A)}^R).$
Execution of protocol $\protoAPi$ with a passive adversary $A.$
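The defining property of the passive adversary, that it observes the coalition's views but never alters any message, can be shown in a small Python sketch. The representation of views and the accumulation rule are illustrative assumptions.

```python
# One adversary step of a passive execution: read the coalition's view
# (states and incoming messages of the corrupted players) and fold it into
# the adversary's state. No message is ever modified or replaced.
def passive_adversary_step(q_A, coalition, states, inboxes):
    view = [(i, states[i], tuple(inboxes[i])) for i in sorted(coalition)]
    return q_A + [view]          # adversary state = transcript of views seen

q_A = []
q_A = passive_adversary_step(q_A, {0, 2}, states=["a", "b", "c"],
                             inboxes=[["m1"], ["m2"], ["m3"]])
assert q_A == [[(0, "a", ("m1",)), (2, "c", ("m3",))]]
```

Because the step only appends to `q_A`, the players' computation is unaffected; this is exactly why the passive ("gossip") model threatens privacy but not correctness.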
§.§ Byzantine Adversaries
A Byzantine or malicious adversary
$(Q_A,\auxin_A,\initfnA,\delta_A,T)$ has the added capability of
overwriting the messages sent by players in the coalition it has corrupted.
We thus extend its transition function to allow it to generate malicious
messages:
\[
\delta_A : Q_A \times \playervs
\rightarrow \dist(Q_A \times H).
\]
As before, a Byzantine adversary is static if $T$ is a constant function
and dynamic if $T$ depends on the state. A Byzantine $t$-adversary is
allowed the fault class $\sct = \set{T \mid \abs{T} \leq t}.$
The most severe sort of Byzantine adversary can rush messages,
meaning it receives the messages sent from reliable players before other
messages are delivered, allowing it to choose a larger coalition to corrupt
and to choose new messages depending on the good messages it has seen.
Rather than consider an assortment of transition functions, one of which is
used to generate new adversarial messages and others which determine new
choices of coalitions after seeing rushed messages, we simply apply the
same transition function $\delta_A$ after each set of rushed messages is
received, allowing the adversary to change its state and select new players
to corrupt. Formally, the messages generated by the transition function
are ignored, until the point at which the adversary stops enlarging the
coalition it has chosen to corrupt. At that point, whatever messages the
adversary generated are delivered to the appropriate recipients via
the channels from the corrupted players, and the still-unsent messages
between reliable players are also delivered.
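The rushing loop admits a compact sketch: repeatedly show the adversary the messages bound for newly corrupted players, let it enlarge its coalition, and stop once no new corruptions occur (at most $t$ enlargements). The corruption rule used in the example is an illustrative assumption, not a statement about any particular protocol.

```python
# Rushing phase of a Byzantine execution, the Repeat written as a bounded For.
def rushing_phase(T, choose_T, honest_msgs, t):
    seen, T_old = {}, set()
    for _ in range(t + 1):                   # at most t enlargements
        T_new = T - T_old
        if not T_new:
            break                            # until no new corruptions
        T_old = set(T)
        seen.update({r: honest_msgs[r] for r in T_new})  # rushed messages
        T = choose_T(T, seen)                # adversary enlarges coalition
    return T, seen

# Toy rule: additionally corrupt whoever has received the message "secret".
choose = lambda T, seen: T | {r for r, m in seen.items() if m == "secret"}
T, seen = rushing_phase({1}, choose, {1: "hi", 2: "secret"}, t=2)
assert T == {1} and seen == {1: "hi"}
```

Note that the adversary's own outgoing messages play no role inside the loop; as in the formal description, they are delivered only after the coalition stops growing.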
The steps involved in an execution of the protocol with a rushing,
Byzantine adversary on inputs $\vec{x},$ auxiliary inputs $\vec{a},$ and
adversary auxiliary input $a_A$ are shown in Figure <ref>.
For clarity of proofs, we sometimes regard the Repeat step
(2.1.2) as a For loop that is repeated $t$ times; the adversary can
increase the coalition no more than $t$ times, of course. The state of
player $i$ or the adversary at round $r$ is referred to as $q_i^r$ or as
$q_A^R,$ respectively. If we wish to dissect the execution even further,
examining the states within the Repeat loop, we refer to the
$\rho^{th}$ repetition of the repeat loop using the notation $(r,\rho).$
Hence we may speak of $q_A^{(r,\rho)}$ or of $q_A^r = q_A^{(r,0)}.$
Byzantine Execution
(B1.1) For $i=1..n$:
$q_i \leftarrow \initfni(x_i,a_i,k)$
(B1.2) Set
$q_A \leftarrow \initfnA(a_A,n,m,k)$
$T \leftarrow T(q_A)$
$T_{old} \leftarrow \emptyset$
$\mess([n],[n],0) \leftarrow \emptyset$
(B2) For $r = 1..R$:
(B2.1) For $i \in \tbar$:
nonfaulty messages
$(q_i,\mess^{\out}(i,[n],r)) \leftarrow
\delta_i(q_i,\mess([n],i,r-1))$
(B2.1.2) Repeat:
rush messages; new corruption
$T_{new} \leftarrow T - T_{old}$
$T_{old} \leftarrow T$
$\mess(\tbar,T_{new},r) \leftarrow
\send( \mess^{\out}(\tbar,T_{new},r),
\scc_1 \cup \cdots \cup \scc_n)$
adversary computes, but messages aren't
actually sent until rushing is done
$(q_A,\mess^{\out}(T,[n],r))
\leftarrow \delta_A(q_A, \mess(\tbar,T_{new},r) )$
$T \leftarrow T(q_A)$
until no new corruptions ($T_{new} = \emptyset$)
(B2.2) send all pending messages
$\mess(T,\tbar,r) \leftarrow
\send( \mess^{\out}(T,\tbar,r),
\scc_1 \cup \cdots \cup \scc_n)$
$\mess(\tbar,\tbar,r) \leftarrow
\send( \mess^{\out}(\tbar,\tbar,r),
\scc_1 \cup \cdots \cup \scc_n)$
(B3.1) For $i \in \tbar$:
final computation
$q_i^f \leftarrow \delta_i(q_i, \mess([n],i,R) )$
(B3.2) $q_A^f \leftarrow \delta_A(q_A, \mess([n],T,R) )$
Execution of protocol $\protoAPi$
with a message-rushing Byzantine adversary.
We shall have reason to speak not simply of the specific states and messages
occurring during the protocol but of the distributions on those states.
Define the set of names
\begin{eqnarray*}
\rvnames & = & \{ x_1,\ldots,x_n,a_1,\ldots,a_n,a_A,q_1,\ldots,q_n,q_A,\\
& &
\mess^{\out}(1,1),\ldots,\mess^{\out}(n,n),
\mess(1,1),\ldots,\mess(n,n), \\
& &
y_1,\ldots,y_n,y_A \} .
\end{eqnarray*}
(Semantically, this is a set of “labels” for random variables.)
The distribution on, say, the state $q_i^r$ at round $r$ will then be
described by $\rv(q_i,r).$ In general, the distribution on the state or
message $v \in \rvnames$ at round $r$ is described by $\rv(v,r).$ Each
distribution $\rv(v,r)$ is a probabilistic function of the results of
earlier samples as specified by Figure <ref>. For example,
$\rv(q_i,r)$ is a probabilistic function (namely, the transition function
$\delta_i$) of $\rv(q_i,r-1),\rv(\mess(1,i),r-1), \rv(\mess(2,i),r-1),
\ldots,\rv(\mess(n,i),r-1).$ Or, for example, the variable $\rv(\mess(1,1),r)$
is in fact a probabilistic function (namely, the channel function) of
$\rv(\mess^{\out}(1,1),r).$ The “results” or “outputs” of a distributed
computation are described by the variables
$\rv(y_1,R),\ldots,\rv(y_n,R),$ and $\rv(y_A,R).$
Without loss of generality, the random variable $\rv(y_A,R)$ is
“completely dependent”
on the variables $\rv(q_A,r)$ and $\rv(\mess(i,j),r)$ for $i,j \in
T_{\rv(q_A,R)}$ and $1 \leq r \leq R,$
in the sense that the output of the adversary specifies
every state and message it has seen.
Thus, the execution of a protocol is simply a sample taken from a set of
interdependent random variables $\rv(v,r).$ The text of this chapter shows
how to generate this joint distribution by sampling probabilistic functions
in a specified order. Our proofs will
fundamentally show that two probabilistic
functions are the same, where each is defined by a different sequence
of application of probabilistic functions.
§ OUTPUTS AND OUTPUT DISTRIBUTIONS
At the end of a general protocol the adversary and each player writes an
output string $Y_A=\outfn_A(q_A^f)$ or $Y_i=\outfn_i(q_i^f)$ (respectively)
on an output tape. A corrupted player's tape contains $\Lambda.$
Let adversary $A$ have auxiliary input $a.$ An execution of a protocol
$\Pi(n,m)$ on inputs $\vec{x}=(x_1,\ldots,x_n),$ auxiliary inputs
$\vec{a}=(a_1,\ldots,a_n),$ and security parameter $k$ induces a distribution on $(\sigstar)^{2n+2},$ that is, on the outputs and views
of $A$ and of the players. The distribution is denoted:
\[
\realhist(n,m)\protoIn.
\]
For fixed $n$ and $m,$ parametrizing over $z=\vec{x} \circ \vec{a} \circ a$
and security parameter $k$ represents an ensemble.
A universal protocol that computes a particular function
$F$ may include a circuit description $C_F$ in the $z$ parameter.
For readability, we occasionally write $\realpf\protoIn$ when
$n$ and $m$ are understood in the context.
The family of ensembles obtained as $n$ and $m$ vary is induced by the
family of protocols.
When analyzed separately, the issues of privacy and correctness examine the
views apart from the outputs. We denote the output (which, without loss of
generality, includes the view) of the adversary generated in a protocol by:
\[
\realya\protoIn
= Y_A.
\]
We use a similar notation for the outputs of the players:
\[
\realyp\protoIn
= (Y_1,\ldots,Y_n).
\]
Privacy concerns $\realya,$ whereas correctness concerns $\realyp.$ The
ensemble specifying the view of the adversary and the outputs of the
players (not their views) is:
\[
\realy\protoIn
= (Y_A,Y_1,\ldots,Y_n).
\]
We shall see it is advantageous to consider $\realy$ as a whole, not to
break it down into components describing privacy and correctness.
§ THE FUNCTION TO COMPUTE
For full generality, we wish to consider families of probabilistic
functions mapping strings to strings (equivalently, integers to integers,
encoded in a natural fashion).
Let $F=\set{F^{n,m}}_{n,m\in\natsmall}$ be a family of probabilistic functions
with $n$ inputs of length $m$ and $n$ outputs of length $m:$
\[
F^{n,m} : X_1 \times X_2 \times \cdots \times X_n \rightarrow
Y_1 \times Y_2 \times \cdots \times Y_n
\]
where each $X_i,Y_i \subseteq \set{0,1}^m.$
Computing probabilistic functions allows us to accomplish
a wide range of tasks. It is often the case that
a given set of inputs is not mapped in a 1-1 manner to a set of outputs
and thus is not described by a function per se. A coin toss, useful
for tasks such as leader election and symmetry breaking, is a primary
example.
Lest our analysis become too complicated, however, we consider distributions
described by probabilistic circuits, namely distributions induced by
setting certain bits of a Boolean circuit uniformly at random. This
covers all probabilistic Turing machine computations and is a natural
case to consider.
Let $\scf^{det,poly}$ be the set of all deterministic function families
$\hat{F}=\set{\hat{F}^{n,m}}_{n,m \in \natsmall}$ satisfying
\[
\hat{F}^{n,m} : \set{0,1}^{nm+\rho(n,m)} \rightarrow \set{0,1}^{nm}
\]
where $\rho(n,m)=O((nm)^c)$ for some $c.$ Each $\hat{F}^{n,m}$
can be described as a circuit with $nm$ inputs and
a polynomial number of supplementary inputs, and $nm$ outputs
(or, if one prefers, as $nm$ such circuits each having a one-bit
output).
Let $\scf$
be the set of all probabilistic function families $F=\set{F^{n,m}}$
such that there exists an $\hat{F} \in \scf^{det,poly}$ with
\[
F^{n,m}(\vec{x}) =
\set{ \vec{r} \leftarrow \uniform(\set{0,1}^{\rho(n,m)}) :
\hat{F}^{n,m}(\vec{x},\vec{r})}
\]
Denote the restriction of $F$ to its $i^{th}$ output bit
by $F_i(x_1,\dots,x_n).$
A protocol for $F \in \scf$ should compute a vector $(y_1,y_2,\dots,y_n)$
selected according to the distribution $F(x_1,x_2,\dots,x_n),$ and should
provide player $i$ with the correct value $y_i.$
We make two simplifying observations based on the ability of each player to
generate uniformly random bits. First of all, we may restrict our attention
to deterministic functions by arranging that each player $i$ supply,
as part of its input, a uniformly random sequence of $\rho(n,m)$ bits,
$\vec{r}_i=(r_{i1},\ldots,r_{i,\rho(n,m)}).$ It is easy to see that
as long as one player supplies uniformly random bits, it suffices
to compute the deterministic function
$\check{F}((x_1,\vec{r}_1),\ldots,(x_n,\vec{r}_n))$ defined as
$\hat{F}(x_1,\ldots,x_n,R_1,\ldots,R_{\rho(n,m)})$ where
$R_j = r_{1j} \oplus r_{2j} \oplus \cdots \oplus r_{nj}.$
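The first observation can be sketched directly: the deterministic $\check{F}$ applies $\hat{F}$ to the inputs together with the bitwise XOR of the per-player random strings. The Python names and the toy $\hat{F}$ below are illustrative assumptions.

```python
from functools import reduce
from operator import xor

# check_F(pairs) computes hat_F(xs, R) where R_j = XOR over i of r_ij.
def F_check(pairs, F_hat):
    """pairs = [(x_i, r_i)] with each r_i a list of random bits."""
    xs, rs = zip(*pairs)
    R = [reduce(xor, bits) for bits in zip(*rs)]   # combine randomness
    return F_hat(xs, R)

# As long as one r_i is uniform, R is uniform even if all others collude.
F_hat = lambda xs, R: sum(xs) + R[0]               # toy randomized function
out = F_check([(1, [0, 1]), (2, [1, 1]), (3, [1, 0])], F_hat)
assert out == 6 + (0 ^ 1 ^ 1)
```

The security of the reduction rests on the XOR: the combined bit $R_j$ is uniform whenever at least one contribution $r_{ij}$ is, which is why a single honest player suffices.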
Secondly, it suffices to consider
functions whose single result is revealed in its entirety to all players.
Observe that if each player provides a mask, namely a sequence of random
bits $r_i,$ it suffices to compute $(y_1 \oplus r_1,\dots,y_n
\oplus r_n)$ and reveal the entire vector to all players. Unless the
adversary specifically corrupts a given player $i,$ it gains no
information about the result $y_i$ from the publicized value
$y_i \oplus r_i$.
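The masking step in the second observation is an ordinary one-time pad; a sketch (names ours):

```python
import secrets

def mask(bits, pad):
    """XOR a bit string with a one-time pad of the same length."""
    return [b ^ p for b, p in zip(bits, pad)]

y_i = [1, 0, 1]                                  # player i's private result
r_i = [secrets.randbelow(2) for _ in range(3)]   # player i's secret mask
published = mask(y_i, r_i)        # safe to reveal to every player
recovered = mask(published, r_i)  # only player i, holding r_i, can unmask
assert recovered == y_i
```

Since $y_i \oplus r_i$ is uniformly distributed when $r_i$ is uniform and secret, the published vector carries no information about $y_i$ to anyone who has not corrupted player $i$.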
Thus, without loss of generality
our protocol descriptions consider each $F$ to be
a family of deterministic functions, with one $m$-bit output
or $n$ different $m$-bit outputs as convenient. A general protocol
compiler for families in $\scf^{det,poly}$ is also one for $\scf.$
CHAPTER: FORMAL DEFINITIONS FOR RELIABILITY AND SECURITY
To think is to forget differences, generalize, make abstractions. In
the teeming world of Funes, there were only details, almost immediate
in their presence.
Jorge Luis Borges, Funes the Memorious
The most striking problem with current definitions of security and
reliability, apart from their scarcity, is the ad hoc and
incremental nature in which they have developed. Correctness and
privacy are naturally the most important goals of a reliable, secure
computation. Their formal specifications, however, are more subtle than
their intuitive simplicity suggests. Furthermore, even though protocols
for distributed system security have been designed with such intuitively
simple properties in mind, other equally desirable security properties
have since arisen, requiring each protocol to be reconsidered — if not
redesigned — in light of these new properties. Definitions of
security have thus been clouded not only by unforeseen subtleties in the
precise formulation of very simple properties but by the confusing and
disharmonious set of overly detailed formalizations tailored differently
to each property.
We propose a new and concise definition of a property we call resilience that captures at a single blow every desirable property of a
distributed protocol. Not only is this new definition simple and broad
enough to avoid the pitfalls of previous approaches, it appears to
capture a priori all the natural properties one might imagine.
With such a unifying definition, we need only design protocols and prove
their resilience once; new and unforeseen properties are simply
new, previously unnoticed aspects of the single property of resilience.
In this chapter, we take a brief look at previous, ad hoc
approaches to formal definitions and discuss their difficulties and
insufficiencies. We then introduce the most important tool of this
work: a means to compare the resilience — namely the security and
reliability — of two arbitrary protocols.
By defining a specific, ideal protocol, we provide a
standard by which to measure the resilience of any protocol. In
an ideal protocol, a trusted and incorruptible host receives all the
inputs and returns the correct outputs. Though such a situation cannot
be guaranteed in reality, its robustness is the absolute standard we
seek. Our techniques and standards provide not only a concise way to
define and to understand security and reliability but also, as we shall
see in Chapter <ref>, a concise and formal means to consider
protocol composition and modularity.
Our point is this: a protocol is not meant to compute $F$ while
satisfying a list of desirable properties; rather, it must achieve
the same results, in a rigorous sense (see <ref>), as an
ideal protocol in which a trusted host performs the computation.
The ability to compare the results of two protocols is essential
to showing that this is the case. This chapter addresses these two
new and crucial observations.
§ WHERE AD HOC DEFINITIONS FAIL
Privacy and correctness are the most obvious properties required by
secure and reliable computations. They are intuitively easy to
understand: a protocol reveals no unnecessary information if the view of
an adversary can be generated from the bounded information to which it
is entitled (inputs and auxiliary inputs of corrupted players). A
protocol is correct if the output of each player is in fact the function
$F(x_1,\ldots,x_n)$ applied to the inputs.
At closer inspection, subtle problems appear. Definitions of privacy
often concentrate too closely on the privacy of the inputs and make
little reference to other information that may be leaked. ([76]
is a notable exception; [1] and [14] introduce a
specific measure of information that is hidden or leaked for the related
problem of instance-hiding schemes.) It may not be the case that,
running one protocol after another, the second protocol preserves the
privacy of the information in the first. Each protocol might hide its
own inputs but leak all the secret information of another protocol if
the definitions are not made carefully. Since the protocols in the
literature usually happen to satisfy the stronger requirement that no
information except $F(x_1,\ldots,x_n)$ is leaked, this issue is
overlooked. In order to compose protocols, however, a careful measure
of the information leaked by each subprotocol is necessary.
Even the definition of as obvious an idea as an input to a
protocol requires great care. Defining inputs and outputs is a far more
sensitive task than the established study of single-processor
computations would suggest. If a player is supplied with an
input, what happens if it behaves absolutely properly but as though it
received a different input? Does one define correctness with respect to
behavior or with respect to some “actual” input? What does “actual”
mean in this case?
Often, rather than delve into the subtleties, researchers make particular
assumptions (e.g., encryptions of every input are supplied to each
party, or every input is “secretly shared”) in order to finesse the
issue. The notion of committal to an input plays an important
part: after an initial stage (either all processors are supplied with
immutable encryptions of other processors' inputs, or the inputs are
validly shared), correctness can be defined with respect to the
committed inputs. While such approaches are interesting as particular
methods to accomplish security goals, they are too specific to merit
approval as general definitions of security.
Definitions for the related problem of Byzantine Agreement do not
suffice, since the goal of that problem is simply to agree on a common
value, a process that is fairly insensitive to individual variations in
a fraction of the input values, as opposed to determining a result that
may be extremely sensitive to individual inputs. We must find a broader
means to understand and define correctness without resorting to such
specific aspects of the process as committal, which, although it turns
out to be a necessary implication of protocol resilience, is not the
primary focus.
Hindering us in our search for clean definitions is the problem of when
to stop looking for new properties. Privacy and correctness were once
thought sufficient. Other important properties have since become
evident. For example, what if, in a secret ballot, one voter
were able to cast a vote opposite to that of another, through some
clever and malicious manipulation of messages in an intricate protocol,
even if that corrupt player were not able to learn the vote that it
cast? A two-thirds majority could always be prevented by malicious
coalitions. Or, for example, what if the system must generate a random
bit upon which some decision must be based? If malicious players can
choose their inputs depending on those of reliable players, then using
the parity of random bits supplied by each player will fail. The
property of independence of input selection is immediately seen as
desirable, even though it is orthogonal to the properties of correctness
and privacy. Beaver ([8], see technical report)
gave a broad definition of
independence with respect to a broad (i.e. nonspecific and
general) committal function defined on transcripts of protocol
executions, but this early approach is unsatisfying for the reasons
noted above. Notice that when players are computationally unbounded,
independence of inputs can be neatly formulated via independence of
random variables describing inputs. When players and adversaries are
computationally bounded, however, subtle and difficult points arise:
for example, a faulty player may use the encrypted value it sees
of another player's input as its own input, in which case the two inputs
are information-theoretically quite dependent, but formulating that they
are independent in some computational task is intricate and unclean.
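The failure of the naive parity scheme can be made concrete in a toy sketch (our own hypothetical setup: the adversary gets to see the honest players' bits before committing its own):

```python
import secrets

def naive_coin_flip(honest_bits, adversary):
    """Each player contributes one bit; the shared coin is their parity.
    If the adversary may choose its bit after seeing the honest bits,
    it fixes the outcome."""
    bad_bit = adversary(honest_bits)
    coin = 0
    for b in honest_bits + [bad_bit]:
        coin ^= b
    return coin

# An adversary that always forces the coin to 0: it echoes the honest parity,
# cancelling it out.
force_zero = lambda seen: sum(seen) % 2

honest = [secrets.randbelow(2) for _ in range(5)]
assert naive_coin_flip(honest, force_zero) == 0   # the "random" coin is rigged
```

This is exactly a failure of independence: the corrupt input is information-theoretically determined by the honest ones, even though each bit viewed in isolation looks uniform.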
Another example, occurring primarily in the domain of protocols
tolerating extremely high fault rates, is that of fairness.
Reliable players must not be prevented from learning their outputs of
the protocol if corrupted players are able to learn theirs. That is,
even if an adversary is able to bring the whole computation to a halt
— which, fortunately, is not an issue in protocols with a faulty
minority — then it should not enjoy the fruits of the computation
while denying them to reliable participants. In order to maintain
parity in the knowledge gained by adversary and by reliable players,
methods for gradual disclosure of the results are employed
[90, 45, 121, 65].
The two primary definitions for fairness are related to the algorithmic
solutions provided by their proponents. Yao, and later Galil, Haber,
and Yung, allow the reliable players to run a recovery algorithm based
on the program used by the adversary, if a majority of players have
become faulty [121, 65]. Beaver and Goldwasser propose a
stronger definition in which the reliable players do not have access to
the program of the adversary, thereby allowing the adversary itself to
depend on the programs of the reliable players. Further discussion can
be found in Chapter <ref>, where protocols tolerating a faulty
majority are presented, and in <ref>, where we demonstrate
how our unified definition implies the current set of desired properties.
Our definitions capture all of these properties in a concise and simple
manner by avoiding the ad hoc, divide-and-conquer approach.
The properties we have discussed are related to each other: they are
each aspects of an ideal protocol. The ideal protocol and the notion
of relative resilience bind them together.
§ RELATIVE RESILIENCE: SIMULATORS AND INTERFACES
The key idea behind our approach is a means to compare one
protocol to another in terms of their security and reliability.
We call the combination of security and reliability, resilience.
We measure not only the information gained by an
adversary during an execution of a protocol, but the influence it
has on the outputs.
This essential idea, unnoticed, was the source of many of the
inherent difficulties in previous approaches: even in an ideal
protocol, with a trusted host who computes results accurately, an
adversary has some influence over the outputs. An adversary who
is not able to attack the trusted host is nevertheless able to choose
inputs that corrupted players will send to the host, thereby having some
limited but unavoidable effect on the computation. Thus, we must
examine an attack not only with respect to what the adversary learns
(ideally, only the inputs and outputs of corrupted players), but with
respect to how the adversary affects the computation (ideally, only
through its choice of inputs for corrupted players).
Before delving into the nature of the ideal protocol that will
become our standard, we first investigate how to compare the resilience
of two arbitrary protocols. The notion of privacy introduced by
Goldwasser, Micali, and Rackoff [74] in the realm of
zero-knowledge proof systems and later applied to multiparty protocols
([71, 76]) provides a springboard for the ultimate
definition of resilience.
As described in <ref>, zero-knowledge proof systems make
use of a simulator to demonstrate that the conversation seen by a
corrupted verifier is in fact generable solely from the theorem, “$x
\in L$.” We phrase it in a more suggestive manner: the simulator
demonstrates that the conversation is generable solely from the
information ( “$x \in L$”) that it would obtain in an ideal
situation (given a trusted oracle, prover).
In any general interactive protocol
execution, we would like to ensure that the adversary can simulate its
portion of the history of a protocol using only the information that it
is entitled to learn. Galil, Haber, and Yung [66] utilize such a
generalization and make use of a fault oracle that supplies the
bounded information (inputs and outputs of corrupted players) to which
the adversary is entitled. A simulator must, with that information,
generate an accurate view of the real protocol execution. In
computationally bounded models, where the verifier has limited
resources, the simulator must also use such bounded resources, in order
to demonstrate not just that no extra information is leaked but
that no results that are computationally infeasible to generate
are leaked.
A related idea that avoids the computational orientation of simulation
says that the distribution on views seen by the adversary must be
independent of the reliable players' information, or more generally
that the distribution depends only on the final output and the
inputs of the corrupted players. This sort of approach forms the basis
for the instance-hiding schemes of [1, 14]. When the players
are not computationally bounded, defining privacy according to
independence of variables is often a cleaner approach (see Chapters
<ref> and <ref>).
In general, if a distribution is independent of certain
variables, then it can be simulated without the “knowledge” of the
values of those variables (though, of course, there is no a priori
guarantee that simulating the distribution is efficient). With this in
mind, we shall adopt the simulation approach, since it also has the
advantage of covering computationally-bounded models.
But such definitions for privacy are not sufficient to analyze
the security and reliability of interactive
multiparty protocols.
Simulation is essentially a passive approach: give the simulator
information, and let it create a view for the adversary.
The influence
of an adversary on others is not taken into account.
Let us consider two interactive multiparty protocols, $\protoa$ and
$\protob.$ Each has an associated class of allowable adversaries,
$\advclass_{\protoa}$ and $\advclass_{\protob}.$ To compare the
resilience of protocol $\protoa$ against an adversary $A \in
\advclass_{\protoa}$ to the resilience of protocol $\protob,$ we should
like to allow $A$ to wreak havoc on protocol $\protob.$ Unfortunately,
$\protoa$ and $\protob$ may be radically different protocols. One might
have many more players than the other, one might disallow certain
players from being corrupted, one might be written in C
while the other
is written in FORTRAN, etc. We cannot simply run protocol
$\protob$ with adversary $A.$
Instead, we surround $A$ by an interface, $\interface.$
The interface $\interface$
creates an environment for $A$ so that $A$ will believe it is
participating in protocol $\protoa.$ On the other hand, $\interface$
itself is
allowed to wreak havoc on protocol $\protob.$ In other words, the
combination $\interface(A,\cdot)$ with $A$ built into $\interface$ is
used itself as the
adversary to an execution of protocol $\protob.$ (The notation indicates
that $\interface$ has two communication lines, one of which is used to
communicate with $A,$ and the other of which is used as its adversarial
input/output line in a protocol execution.) The combination
must be a permissible adversary in $\advclass_{\protob}.$
[Technically, there must be a machine in $\advclass_{\protob}$
with input/output behavior identical to that of the single-tape machine
$\interface(A,\cdot).$]
Consider, for example, the special case where protocol $\protob$ is the
particular and simple protocol in which an incorruptible host sends one
message (containing “$x \in L$”) to the corruptible player. An
adversary to protocol $\protob$ can obtain only this information (and
has no influence on the results, in this particular simple case). An
interface $\interface$ receives this message and must present a proper
environment for $A.$ In this manner, the interaction corresponds to the
special case of zero-knowledge proof systems (see <ref>),
and the interface is used only as a simulator.
In actuality, simulation is only half of its job. We shall later
discuss how the interface covers all aspects of the definition of
zero-knowledge; first, let us examine how it captures all aspects
of multiparty security, including privacy and correctness.
(Interface) An interface
$\interface(\cdot,\cdot)$ is a machine (interactive Turing machine
or general player)
with two communication lines; the first is
called an environment simulation line, and the second is called an
adversarial line.
If $\protoa$ admits adversary class $\advclass_{\protoa}$ and
$\protob$ admits adversary class $\advclass_{\protob},$ we say
$\interface$ is an interface from $\protoa$
to $\protob$
if for every $A \in \advclass_{\protoa},$
$\interface(A,\cdot) \in \advclass_{\protob}.$
We denote an interface by $\interface(\cdot,\cdot).$ When hooked up to an
adversary $A$ (with auxiliary input $a$) which itself is simply an
interactive machine having a single communication line, we consider the
combination $\interface(A(a),\cdot)$ as an adversary in its own right.
As a unit, it has a single communication tape, namely the adversarial tape.
We treat $\interface(A(a),\cdot)$ as an adversary to $\beta$
simply by using the
corruption requests and corrupt messages
$\interface$ writes on its adversarial line and by supplying $\interface$
with the requested messages and information held by corrupted players
in $\beta,$ as would normally happen in the execution of protocol $\beta$
with an adversary.
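The mechanics of wrapping an adversary in an interface can be sketched structurally (a deliberately trivial Python toy, names entirely ours; in a genuine reduction the interface must also fabricate the protocol-$\protoa$ environment the adversary expects from whatever protocol $\protob$ actually leaks, which this toy elides):

```python
class Adversary:
    """A toy adversary: corrupts a fixed set of players, records what it sees."""
    def __init__(self, targets):
        self.targets = list(targets)
        self.view = []
    def requests(self):
        return self.targets
    def receive(self, state):
        self.view.append(state)
    def final_state(self):
        return self.view

class BetaExecution:
    """A toy run of protocol beta: corrupting player i reveals its input."""
    def __init__(self, inputs):
        self.inputs = inputs
    def corrupt(self, i):
        return self.inputs[i]

class Interface:
    """I(A, .): the environment-simulation line talks to A, while the
    adversarial line attacks beta; the combination I(A, .) is itself a
    single adversary against beta."""
    def __init__(self, adversary):
        self.A = adversary
    def attack(self, beta):
        for i in self.A.requests():          # translate A's corruption requests
            self.A.receive(beta.corrupt(i))  # feed beta's answers back to A
        return self.A.final_state()

view = Interface(Adversary([1, 3])).attack(
    BetaExecution({1: "x1", 2: "x2", 3: "x3"}))
```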
We should like to formalize the intuitive statement that adversary $A$
can wreak no more havoc on protocol $\protoa$ than it could on protocol
$\protob$ — given an interface to translate its requests for
corruptions and specifications of corrupted messages. If this is the
case, we shall say that $\protoa$ is as resilient as $\protob.$
In each protocol, every reliable player ends up in a state according to
some distribution. We should like that the adversary's influence on the
state of reliable players is the same, whether it attacks $\protoa$ or
with help from an interface it attacks $\protob.$ The adversary itself
ends up in some state when it attacks protocol $\protoa;$ the
distribution on its final state should be the same whether it attacks
$\protoa$ or is fed the environment simulated by $\interface.$ (Recall
that an attack of an adversary is just a sequence of strings
including requests for states of newly corrupted players, messages to
replace those of corrupted players, the states of those players, and
messages sent by reliable players to corrupted players. The states and
reliable messages are supplied either by the formal execution of
protocol $\protoa$ or by the computations of interface $\interface.$)
This says that the adversary gains no more information in $\protoa$ than
it would in protocol $\protob.$
Formally, let us consider the distributions $\ensAAlpha(n,m)\protoIn$
and $\ensASBeta(n,m)\protoIn.$
The former is induced by running protocol $\protoa$ with
adversary $A(a)$ and the given inputs and auxiliary inputs. The latter
is induced by running protocol $\protob$ with adversary
$\interface(A(a),\cdot)$ and the same inputs and auxiliary inputs.
When $n,$ $m,$ $\vec{x},$ $\vec{a}$, and $a$ are allowed to vary,
we have two protocols that each induce an ensemble,
$\ensAAlpha$ and $\ensASBeta,$ respectively.
These two ensembles describe the final states of the players, reflecting
the influence of the adversary, and the final state of the
adversary, reflecting the information gained by the adversary.
(Black-Box Auxiliary-Input Relative Resilience)
A protocol $\protoa$ is as
$(\advclass_{\protoa},\advclass_{\protob})$-resilient as protocol
$\protob,$ written
\[
\protoa
\resilasFa_{(\advclass_{\protoa},\advclass_{\protob})}
\protob,
\]
if there exists an interface $\interface$ from $\protoa$ to $\protob$
such that for all adversaries $A \in \advclass_{\protoa},$
\[
\ensAAlpha
\indistFa
\ensASBeta
\]
The subscript $(\advclass_{\protoa},\advclass_{\protob})$ is omitted
where clear from context. We say the protocols are
exponentially ($\resilasFaE$), statistically ($\resilasFaS$), or
computationally ($\resilasFaC$) relatively resilient according to how
indistinguishable the induced ensembles are.
To avoid potential confusion, we remark
that the definition of indistinguishability already takes
the inputs and auxiliary inputs into account:
for every $z,$ namely for every value of $n,$ $m,$ and $\protoInZ,$
the two sequences of
distributions induced by $\protoIn$ must approach each other
as $k$ gets large. A trivial modification of the definitions
removes auxiliary inputs from consideration, which may be useful
if one wishes to consider strictly uniform computations with uniform
adversaries. Auxiliary inputs, as mentioned, are useful for other
reasons, and we include them in our definition.
When $n$ and $m$ are fixed, we may compare two protocols in the same
manner: $\protoa(n,m)$ is as resilient as $\protob(n,m),$
written $\protoa(n,m) \resilas
\protob(n,m),$
if there exists an interface $\interface$ such that
$\ensAAlpha(n,m) \indistEn \ensASBeta(n,m).$ The technical distinction
is that here we compare ensembles, whereas above we compare
families of ensembles.
The study of zero-knowledge often considers a weaker form of simulation
in which there need not be a universal simulator $\interface$ but rather
there need only be a specific simulator $\interface_A$ for each
adversary $A.$ The simulator can depend on the internal structure of the
adversary. For completeness, we give the analogous definition for
resilience, considering an interface $\interface_A$ that acts as an
adversary to protocol $\protob$ in the same way $\interface(A,\cdot)$ did
previously. We shall not consider this definition further.
(Weak Auxiliary-Input Relative Resilience)
A protocol $\protoa$ is
weakly as
$(\advclass_{\protoa},\advclass_{\protob})$-resilient as protocol
$\protob,$ if for any adversary $A \in \advclass_{\protoa},$ there
exists an interface $\interface_A$ from $\protoa$ to $\protob$
such that
\[
\ensAAlpha
\indistFa
\ensASBeta
\]
§ IDEAL INTERACTION WITH A TRUSTED HOST
Given a means to compare the resilience of protocols, we come now to
the other important half of the approach we propound:
a secure and reliable protocol must achieve the same results as
an ideal protocol
in which a trusted host performs the computation.
Chapter <ref> describes the mechanics of a general protocol,
whether generic or attacked by an adversary. A real protocol is one
having a fault class consisting of arbitrary subsets of up to $t$ players,
and having some arbitrary set of communication channels. For example, a real
protocol may provide broadcast channels and private channels between each
pair of players. The essential point is that no player need be trusted.
An ideal protocol, on the other hand, provides an absolutely
reliable and secure trusted host,
even though in the “real” world, it is neither prudent nor efficient to
rely upon such an immensely vulnerable, centralized situation.
Computation with a trusted host must nevertheless
take into account the presence of an adversary.
In the ideal case, however, the limited powers of the adversary are
more clearly delineated than among the intricacies of a complex protocol,
giving a convincing justification for claims of absolute security and
reliability.
Figure <ref> describes the ideal protocol for $F.$ The
participants are the usual $n$ players along with a central, trusted host,
player $(n+1).$ Each player has a private communication line to
player $(n+1),$ so we need not consider information leaked implicitly
through interaction among the players.
Ideal-Protocol $\protoId$
Each player $i$ ($1 \leq i \leq n$) starts with input and auxiliary input
$(x_i,a_i).$ Player $(n+1),$ the trusted host, has no input.
Each player $i$ sends $x_i$ to player $(n+1),$ for $1 \leq i \leq n.$
Player $(n+1)$ sets $x_i^{\star}=\Lambda$ for any messages not falling in the
domain $X_i$ and sets $x_i^{\star}$ to the message from player $i$ otherwise.
It computes the sample
$(y_1,\dots,y_n) \leftarrow F(x_1^{\star},\dots,x_n^{\star}).$
(Without loss of generality assume that $F$ is defined on input $\Lambda.$)
Player $(n+1)$ sends $y_i$ to player $i$ for $1 \leq i \leq n.$
Ideal computation with a trusted host and reliable parties.
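The honest run of the ideal protocol can be sketched directly from the figure (Python toy; `None` plays the role of the default input $\Lambda$, and the example function is ours, not the text's; $F$ in general is probabilistic, while this toy is deterministic):

```python
def ideal_protocol(inputs, F, domains):
    """The trusted host, player (n+1): replace any out-of-range message by
    the default input Lambda (None here), apply F once, and hand y_i back
    to player i over a private line."""
    x_star = [x if x in dom else None for x, dom in zip(inputs, domains)]
    return F(x_star)

# Toy F: every player learns the sum of the inputs; Lambda counts as 0.
F_sum = lambda xs: tuple(sum(x or 0 for x in xs) for _ in xs)
outputs = ideal_protocol([2, 1, 7], F_sum, [{0, 1, 2}] * 3)  # 7 -> Lambda
```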
When an adversary is allowed to attack the ideal protocol, its powers are
strictly limited to learning and influencing the inputs of players in the
range $\set{1,\dots,n};$ player $(n+1)$ is immune to corruption. That is,
the ideal-$t$-fault-class $\idealfclass$ consists of all subsets of
$\set{1,\dots,n}$ of size $t$ or less. The ideal-$t$-adversary class
$\idealaclass$ consists of all adversaries that only request coalitions in
the ideal $t$-fault class. For the purposes of discussing
complexity-based security, in which adversaries and players are assumed to
perform probabilistic polynomial-time computations, we may restrict this
class to computationally-bounded adversaries. In that case, the adversary
must not only request coalitions strictly in the specified fault class, but
it must be a polynomial time Turing machine as well.
The execution of the ideal protocol in the presence of a dynamic, Byzantine
adversary is described informally
in Figure <ref>.
The formal specifications of the distributions and histories obtained in this
interaction should be clear from the definitions given in
Chapter <ref>.
The interactions in the presence of a weaker adversary,
such as a passive or static one, should also be clear.
Notice that a dynamic adversary may base its
choice of players to corrupt on information gained from previously
corrupted players in an adaptive fashion.
It may also choose to corrupt more players after $F$ has been computed.
The idea of committal to inputs corresponds to the end of round 1: each
player has sent its input to the host, and the computation of $F$ has not
yet begun. A special protocol that is useful in formally capturing ideas
of commitment and privacy is the
ideal vacuous protocol, $\vacuous,$
which is the ideal protocol in which each player must supply a 0 input; the
trusted host returns a string of $n$ bits, each of which is 0 if the
corresponding player supplied a 0 and is 1 otherwise. Essentially, no
information besides the identities of the cheating parties is returned.
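As a sketch, the vacuous protocol's trusted host is a one-liner:

```python
def vacuous_protocol(messages):
    """The ideal vacuous protocol: each player must send 0; the host returns
    one bit per player, 0 if that player complied and 1 otherwise, so only
    the identities of the deviating players are revealed."""
    return tuple(0 if m == 0 else 1 for m in messages)

assert vacuous_protocol([0, 0, 5, 0]) == (0, 0, 1, 0)  # player 3 deviated
```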
* Each player $i$ ($1 \leq i \leq n$) starts with input and auxiliary input
$(x_i,a_i).$ Player $(n+1),$ the trusted host, has no input.
* The adversary computes and chooses an allowable subset $T \subseteq
\set{1,\dots,n}$ of players to “corrupt.” (The trusted host
is not included in the class of allowable faults.) It obtains the
inputs $(\vec{x}_T,\vec{a}_T)$ of those players, and nothing else. A
dynamic adversary may repeat this step as often as it pleases (modulo
resource bounds, and modulo the requirement that new coalitions contain
old ones).
* The adversary chooses an alternate set of effective inputs $x_i'$
for the members of coalition $T,$ and sends them to the trusted host.
* Every uncorrupted player sends its input $x_i$ to the trusted host.
* The trusted host sets $x_i^{\star}$ to be $\Lambda$ for each $i$ from
which it received a message out of range, and otherwise sets $x_i^{\star}$
to be the message it received.
The trusted host samples
$(y_1,\dots,y_n) \leftarrow F(x_1^{\star},\dots,x_n^{\star}),$
where $x_i^{\star}$ is either $x_i,$ $\Lambda,$ or a corrupted but possible
input choice.
* The trusted host sends $y_i$ to player $i$ for each $i.$ The adversary
thereby receives outputs $\vec{y}_T.$
* The adversary may choose more players to corrupt, obtaining their inputs
$(x_i,a_i)$ and their output values $y_i,$ with the same restrictions on
coalitions as before.
Ideal computation with
a trusted host, dynamic malicious adversary, and reliable parties.
This protocol requires two rounds.
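One attacked run of the ideal protocol can be sketched as follows (Python toy, names ours; for simplicity this models a static corruption set, whereas a dynamic adversary would interleave further corruptions before and after the host computes):

```python
def ideal_with_adversary(inputs, F, corrupt, substitute):
    """One static-corruption run of the ideal protocol: the adversary
    corrupts the set `corrupt`, learns those players' inputs, substitutes
    effective inputs for them, and afterwards sees the corrupted players'
    outputs.  The trusted host itself is incorruptible."""
    leaked_inputs = {i: inputs[i] for i in corrupt}
    x_star = [substitute.get(i, inputs[i]) if i in corrupt else inputs[i]
              for i in range(len(inputs))]
    y = F(x_star)
    leaked_outputs = {i: y[i] for i in corrupt}
    return y, leaked_inputs, leaked_outputs

# Toy F: every player learns the maximum input.
F_max = lambda xs: tuple(max(xs) for _ in xs)
y, seen_x, seen_y = ideal_with_adversary([3, 1, 4], F_max, {1}, {1: 9})
```

The point of the sketch is the delineation the text insists on: everything the adversary learns is in `leaked_inputs` and `leaked_outputs`, and its entire influence on `y` passes through `substitute`.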
§ RESILIENCE: THE UNIFIED THEORY
The groundwork is laid for the concise and precise definition
of security and fault-tolerance for multiparty computations.
Using the concept of relative resilience and using the ideal
protocol as a standard:
A protocol $\protoname$ with fault class $\faultclass$ is
$\faultclass$-resilient leaking $F$ if
\[
\protoname
\hspace{0.1in}
\resilasFa_{(\faultclass,\idealaclass)}
\hspace{0.1in}
\idealname(\computef).
\]
We say exponentially, statistically, or computationally
if $\resilasFaE,$ $\resilasFaS,$ or $\resilasFaC$ holds, respectively.
§.§ Three Scenarios: A Summary
The diagram in Figure <ref> illustrates the three scenarios
of key importance. The first represents the ideal world, in which a
trusted and reliable host is available and ensures that the adversary is
truly restricted to gaining only the inputs and outputs of players of
its choice. The third represents the real world, in which no player can
be trusted but a protocol must be designed to perform the same
computations as in the ideal world, correctly and privately. The second
and intermediate scenario joins the two, modelling the interaction
in an ideal protocol attacked by a real adversary that is assisted by an
interface. Allowed to attack the ideal protocol as best
it may, the information and influence of the adversary are clearly
delineated in this central case. It connects the clear measurements of
security and reliability in the ideal case to the less easily understood
powers of the adversary in the real world.
Three scenarios: first, an ideal protocol with a trusted host, clearly
delineating the limited information and influence of an adversary;
second, an adversary attacking a trusted host by way of an interface
that creates a simulated environment for it; third, an adversary attacking
a real protocol with no trusted parties.
Squares and circles indicate players; squares within
the interface are simulated players.
The X marks indicate corrupted players.
§.§ Zero-Knowledge at One Blow
Resilience captures all aspects of zero-knowledge proof systems,
not just privacy, in a concise way. Consider a zero-knowledge proof
system as a two-party protocol in which any number of players
may be corrupted. The ideal protocol provides a trusted host.
The prover tells the trusted host, “$x \in L,$” for some $x \in L,$
or declines to participate by sending some other message. The host
simply sends “$x\in L$” or “\cheating” to the verifier, accordingly.
It is clear that the most an adversary can do is to cause the prover
detectably to cheat without convincing the verifier, or to gain
at most the information that the verifier learns, which in this case
amounts only to “$x\in L$.” Separately, these possibilities
correspond to soundness and to privacy. Resilience unifies them.
The ideal protocol, denoted
$\idealname(F_L)=\anglebrack{P_L,V_L},$ computes the function
$F_L$ defined by
\[
F_L(p, 0) =
\left\{
\begin{tabular}{cl}
``$x \in L$'' &
if $p = \mbox{ ``$x \in L$''}$ and $x \in L$ \\
``\cheating'' &
otherwise
\end{tabular}
\right.
\]
$\anglebrack{P,V}$ is a (statistical, computational)
zero-knowledge proof system for $L$
if and only if it is a (statistically, computationally)
$2$-resilient protocol for $F_L.$
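As a concrete reading of this ideal protocol, the trusted host's computation of $F_L$ might be sketched as follows for a toy finite language; the message format, the set `L`, and the function name are our illustrative assumptions, not part of the formal definition.

```python
# Hypothetical trusted-host computation of F_L for a toy finite language.
# The claim format "x in L: <x>" is an invented encoding of the message
# "x \in L"; any other message marks the prover as cheating.

L = {"10", "1010", "111000"}  # toy language of binary strings

def F_L(p, _verifier_input=0):
    """Forward a valid membership claim to the verifier, flag anything else."""
    prefix = "x in L: "
    if p.startswith(prefix) and p[len(prefix):] in L:
        return p              # verifier receives the claim "x in L: <x>"
    return "cheating"         # the prover declined or lied

print(F_L("x in L: 1010"))    # -> "x in L: 1010"
print(F_L("x in L: 01"))      # -> "cheating"
```

The host leaks nothing beyond the single bit of the claim, which is exactly the privacy guarantee of the ideal case.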
If $\anglebrack{P,V}$ is a zero-knowledge proof system for L, then
there exists a black-box simulator $S_{PV}.$
Construct an interface $\interface$
that does the following. Do nothing until adversary $A$ requests a
corruption. If $A$ corrupts $V,$ request corruption of $V_L$ in
protocol $\idealname(F_L),$ and obtain $V_L$'s output, “$x \in L.$”
Adversary $A$ may have waited until some later round $r$ to request
a corruption, so, by internally simulating $V$ and running the
simulator $S_{PV}(x)$ to generate a transcript, create a history of
the protocol through round $r.$ Note that the output of the simulator
is accurate for a proof system with no cheating (i.e. only passive
corruption) and with $x\in L.$ Therefore the prefix is accurate.
Now, supply $A$ with the current state of the internally simulated
$V,$ and run $S_{PV}(x)$ to the end, with access to $A$ as the corrupt verifier.
[Weak zero-knowledge, in which the simulator is not
restricted to unresettable, black-box use of $V,$ corresponds to
weak resilience. The interface $\interface$ simply awaits the
output of such a simulator and then uses it as its own output.]
If $A$ corrupts $P,$ then $\interface$
corrupts $P_L.$ The interface $\interface$ operates $V$ internally
[$V$ is polynomial time and its tapes are public.]
carrying on a correspondence with $A$ as the prover. At the
end of the process, the internal $V$ either accepts some $x\in L$
or rejects. If $V$ accepts, $\interface$ requires the corrupt
$P_L$ to send “$x \in L$” in the ideal protocol;
the host will approve and $V_L$ will accept.
If $V$ rejects, $\interface$ requires the corrupt $P_L$ to
send $\Lambda$ in the ideal protocol; the host and $V_L$ reject.
In either case the final outputs of the uncorrupted players,
$V$ and $V_L,$ are identical.
The adversary $A$ receives a view of an interaction with an
honest verifier.
Finally, if at some point $A$ corrupts both $P$ and $V,$
$\interface$ stops interacting with $A,$
since there are no nonfaulty outputs and no more nonfaulty players to
corrupt or to receive messages from.
Now, if $\anglebrack{P,V}$ is statistically
2-resilient, then for some interface
\[
[P,V] \indistFaS [\interface(A),P_L,V_L].
\]
Hence for any adversary that corrupts $V,$
\[
[P,V]^{Y_A} \indistFaS [\interface(A),P_L,V_L]^{Y_A}
\]
so $\anglebrack{P,V}$ is statistically $2$-private,
and for any adversary that corrupts $P,$
\[
[P,V]^{Y_V} \indistFaS [\interface(A),P_L,V_L]^{Y_V}
\]
so with asymptotically high probability, an honest $V$'s output
is 1 when $x \in L$ and 0 otherwise.
§ FAULT ORACLES
An intermediate approach pursued by Beaver [10] and
independently by Kilian, Micali, and Rogaway [87]
uses the idea of a fault-oracle that provides limited information
from which to construct a simulated view for an adversary.
Fault-oracles are a natural extension of zero-knowledge. Galil,
Haber, and Yung [66] proposed a function-oracle for the
case of two-party minimum knowledge decision proof systems,
in which the prover proves either $x \in L$ or $x \not\in L.$
Because the verifier should receive the result of a decision
about membership in $L$ in addition to a proof, the simulator
that produces its view ought to receive exactly that much information.
Thus, the simulator is allowed a single query to an oracle that computes
the characteristic function for $L.$
To define multiparty privacy, Beaver [10] and Rogaway
[111] extended the function-oracle to a fault-oracle.
Some additional requirements are needed, since the powers of
an adversary in a protocol are more diverse.
Essentially, the simulator having access to $\foracle$ is allowed
to request inputs of up to $t$ players and to substitute its own.
Then it receives from $\foracle$ a single computation of $F$ using
the unrevealed inputs of uncorrupted players. In Beaver's model,
it is also allowed to request more corruptions (up to a total of
$t,$ and obtaining inputs and outputs) after the computation,
since a dynamic adversary does have the option of doing so
after a normal protocol.
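The allowances granted to the simulator can be made concrete with a toy oracle for a sum function. The interface below (corrupt up to $t$ players to learn their inputs, optionally substitute inputs, and obtain a single evaluation on the unrevealed honest inputs) is our illustration, not the exact formalism of [10] or [87].

```python
# Illustrative fault-oracle for a function F over n inputs. All names and
# the closure-based interface are our invention for this sketch.

def make_fault_oracle(F, inputs, t):
    state = {"corrupted": {}, "evaluated": False}

    def corrupt(i, substitute=None):
        """Corrupt player i: learn its input, optionally substitute another."""
        assert len(state["corrupted"]) < t or i in state["corrupted"]
        state["corrupted"][i] = substitute
        return inputs[i]                  # the oracle reveals player i's input

    def evaluate():
        """A single computation of F using the unrevealed honest inputs."""
        assert not state["evaluated"]     # only one query is allowed
        state["evaluated"] = True
        xs = list(inputs)
        for i, sub in state["corrupted"].items():
            if sub is not None:
                xs[i] = sub
        return F(xs)

    return corrupt, evaluate

corrupt, evaluate = make_fault_oracle(sum, [3, 5, 7, 11], t=1)
leaked = corrupt(2, substitute=0)         # learn x_2 = 7, replace it with 0
print(leaked, evaluate())                 # -> 7 19
```

The induced outputs of uncorrupted players are exactly the values `evaluate()` would distribute, which is the observation exploited to define correctness.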
As with zero-knowledge, leaked information is bounded by the amount
of information a simulator needs to construct an accurate view of
a real protocol execution.
A key step, noted independently by Beaver and by Kilian, Micali,
and Rogaway, is that the computation of $F$
induces output values $F_i(x_1,\ldots,x_n)$ for nonfaulty
players, even though these values are not returned. Correctness can
thus be defined to hold when these induced outputs match those in
the real protocol, for then, since the induced outputs are correct,
so must the real ones be.
The definition of security becomes one of matching simulated to real
adversarial views, and matching induced to real nonfaulty outputs.
It is remarkably more concise and comprehensible than earlier approaches.
Unfortunately, the
definition is not especially flexible. For example, the concatenation of
two secure protocols may correspond to two fault-oracle queries,
but to prove the concatenation secure, one either allows only one
query or must redefine security and the fault oracle.
The essential step that was not previously observed was that the oracle
is far better regarded as a trusted host actually participating in
a protocol. This observation paves the way for developing a general
concept of protocol comparison. It serves as the basis to our approach.
The definition of relative resilience is at least as
concise as the definition of a fault oracle
and of security with respect to a fault oracle.
The fault oracle, even though a key stage in the development of
good definitions, is a very limited and inflexible
application of a very broad and powerful concept.
§ SATISFYING THE AD HOC PROPERTIES
The important properties of the trusted host are several. First, the
trusted host ensures that the inputs are not revealed, apart from
information leaked solely from the output of the function itself. Second,
it returns a correct set of values based on the inputs it has received.
Third, because it waits until all inputs are received (privately) before
evaluating the function, it ensures that faulty or maliciously chosen
inputs do not depend on the choices made by reliable players. Finally,
every player is assured of receiving its respective output, and fairness is
guaranteed.
We claim that the ideal case captures a priori
all desirable properties that are as yet unconsidered; that is,
it declares precisely what needs to be accomplished by a secure protocol
and it specifies how malicious influences must be limited. New properties
that one might like to consider reflect a deeper understanding of the ideal
protocol as opposed to a change in the model and in the idea of security.
We can analyze the ad hoc list of individual properties as aspects
of our definition of resilience. Correctness addresses the
accuracy of the results computed by reliable players; privacy
addresses what results the adversary can compute based on what he sees.
(Correctness and Privacy)
* A protocol $\protoa$ is $t$-correct leaking $F$ if
there exists an interface $\interface$ such that
\[
\ensAPi^{\vec{Y}}
\indistFa
\ensASId^{\vec{Y}}.
\]
* Protocol $\realpf$ is $t$-private leaking $F$ if
there exists an interface $\interface$ such that
\[
\ensAPi^{Y_A}
\indistFa
\ensASId^{Y_A}.
\]
Independence of inputs, a trickier property to define formally,
is also captured by the ideal case. In the ideal case there is
absolutely no interaction among players; the choice of input by a faulty
player is completely independent of messages sent by reliable players to
the trusted host, since those messages are carried on private channels.
Since the adversary cannot capture a reliable player's input or be
influenced by it in the ideal case without deciding to corrupt that
player, its choice of inputs is the same regardless of the information
held by reliable players. Resilience implies that the adversary's
behavior and hence its choice of inputs is the same in real and ideal
cases, so the faulty inputs are not influenced by the reliable ones.
[One could certainly make a weaker statement that if the outputs
of nonfaulty players have the same distribution in the real and ideal
cases, then even if the adversary's choice of inputs depends on
nonfaulty player's inputs, the final effect amounts to nothing. But
because the adversary's view, containing the choice of inputs,
must be identical in both cases, we need not weaken the claim.]
Our definition implies fairness, and in fact an even stronger
property: not only do all reliable players obtain their results whenever
the faulty players do, they obtain them regardless of the faulty players' behavior.
Yao [121] and Beaver and Goldwasser [17] (presented in
Chapter <ref>) examine protocols where $t \geq n/2.$ In this
scenario, it is impossible for reliable players to force a computation
to finish. It is impossible to achieve resilience. But it may be
possible to ensure correctness and privacy if a computation is not
halted. The issue of fairness has more meaning here. What if
some players should learn the results while others do not?
Whereas perfect fairness is impossible, [17] use a weaker
property to attain an approximation to fairness. The weaker property
states that, if any reliable player fails to obtain an output, then all
reliable players detect cheating. With cryptographic assumptions or
given a network supporting oblivious transfer, [17] showed that
misbehaving players can be identified. While this doesn't prevent
other faulty players from changing their inputs if the protocol were to
be run a second time after detecting some faults, it is a useful outcome
(especially in a litigious society). Hence we may extend the output to
be a vector of values or a value $(T,\cheating)$ which states that
coalition $T$ was agreed by all reliable players to be cheating.
An approximation to fairness based on the ability to detect cheating is
attainable [17]. If cheating does not occur, then all players
can progress slowly but equally toward knowing their output.
Nevertheless, defining this approximation is still tricky, since it is
not clear what an ideal protocol would achieve. A rough sketch is
as follows. A trusted party would accept inputs and compute the
function. It would then take several rounds to reveal the results
gradually. Intuitively, the odds of each player to know its result
$y_i$ after each round must advance in lock-step. A more precise
treatment of this concept appears in Chapter <ref>.
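One naive way to realize the lock-step intuition, purely for illustration, is to have the trusted party release each player's output one bit per round, so that everyone's knowledge advances equally. The bit-by-bit schedule below is our sketch, not the precise notion of [17].

```python
# Illustrative gradual release: the trusted party reveals every player's
# result bit by bit, one bit per round, keeping all players in lock-step.

def gradual_release(outputs_bits):
    """outputs_bits: one equal-length bit string per player.
    Returns a snapshot of what each player knows after every round."""
    rounds = len(outputs_bits[0])
    revealed = ["" for _ in outputs_bits]
    schedule = []
    for r in range(rounds):
        for i, y in enumerate(outputs_bits):
            revealed[i] += y[r]          # every player gains exactly one bit
        schedule.append(list(revealed))  # snapshot after round r
    return schedule

sched = gradual_release(["101", "110"])
print(sched[0])   # -> ['1', '1'] : after one round each player holds one bit
print(sched[-1])  # -> ['101', '110']
```

If any player aborts mid-release, the remaining players are all equally far from knowing their results, which approximates fairness without achieving it perfectly.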
§ STRONG SECURITY: SOLVING A SUBTLE ERROR
A simple yet subtle bug exists in all known multiparty protocols.
If the $t$ corrupted players send all their information to a single
reliable player $i,$ then that player now knows every input.
For example, protocols based on secret sharing (cf.
[114, 71, 28, 39]) have the property that the information held
by any $t+1$ players determines all the inputs. Therefore if $t$ players
give up their information to player $i,$ player $i$ knows everything. One
way to circumvent this is to say that player $i$ is “reliable” and hence
“ignores” improper messages. What reliable person would believe what the
adversary says, anyway? An honest-but-curious player (one who sends proper
messages but may try to determine additional information) may be
tempted to gamble that the adversary is telling the truth in leaking
information, and may benefit from improper extra knowledge if the adversary
is indeed leaking information.
But this sort of philosophizing is far from formal. If the information
arrives at a node, the bottom line is that it should not be there.
More formally, the set of messages given to the honest-but-curious player $i$
ought to be simulatable from the knowledge held by player $i$ and the
adversary; the inputs of other uncorrupted players should not be
compromised, and hence should not be needed to simulate the conversations
seen by player $i.$
We may consider this as an example of the following situation: in addition
to the malicious adversary $A,$ there are one or more passive adversaries
$B_1,B_2,\ldots$ A passive adversary $B,$ like the Byzantine adversary $A,$
is an automaton which requests states of players, but which does not
replace messages from those players. Each reliable player is a fixed
passive adversary with access only to its own communications. We would
like the protocol to be secure simultaneously against $A$ and at least
against such passive adversaries.
We address this formally in the following manner. Denote
the distribution on all output strings and views by
\begin{eqnarray*}
[A,B,\protoname]^{hist}(\vec{x} \circ \vec{a} \circ a \circ a_B,k)
& = & (Y_1,\ldots,Y_n, \\
& & \view_A,\view_B,\view_1,\ldots,\view_n).
\end{eqnarray*}
Here, $a_B$ is the auxiliary input of the passive adversary $B.$
We extend the interface $\interface$ to have a
third communication tape, to
interact with a (black-box) passive adversary $B.$ As before, consider
an execution of the ideal protocol with $\interface$ acting as the
Byzantine adversary. We also allow $\interface$ to corrupt
a second set of players passively,
just as B would. The number of additional players that $\interface$
can passively compromise is bounded by $t_B,$ the bound on the number of
players that $B$ could corrupt in a real protocol. (We would at least
like to ensure security for $t_B=1,$ corresponding to a single
honest-but-curious player.)
Let $[A,B,\interface,\idealname]^{hist}(\vec{x} \circ \vec{a} \circ a \circ a_B,k)$
denote the distribution on outputs and views during an ideal
protocol execution with $\interface$ playing the part of the adversary,
allowing $\interface$ to request $(t+t_B)$ inputs.
We define the ensemble families
$[A,B,\protoname]$ and
$[A,B,\interface,\idealname]$ as subsets of
the random variables, namely just the $Y$ variables in
$[A,B,\protoname]^{hist}$ and
$[A,B,\interface,\idealname]^{hist},$ as earlier.
Then the more careful notion of security is the following:
Protocol $\realpf$ is strongly $t$-resilient leaking $F$
if there exists an interface $\interface$
such that for any Byzantine $t$-adversary $A,$
for any passive $1$-adversary $B,$
\[
[A,B,\protoname]
\indistFa
[A,B,\interface,\idealname].
\]
We remark that similar modifications apply to the case of relative
resilience, though we shall not occupy additional space with them.
Auxiliary inputs can, as before, be excluded if so desired.
CHAPTER: MODULAR PROTOCOL DESIGN
Secure multiparty protocols are often based on a paradigm introduced by
Goldreich et al [70, 71]: “Share Inputs; Compute New
Secrets; Reconstruct Results.” The value of reducing a function
computation to a circuit evaluation is clear: by constructing
subprotocols to add and multiply secretly shared values, one provides the
modules for a simple and modular protocol to compute the overall function.
Though the direct application of this approach to circuit evaluation
results in inefficient protocols, we shall follow it and modify it to
provide a more general and far more efficient modular approach. Rather
than decomposing a function into subprotocols centered around each gate of
a circuit, we focus on constructing a sequence of intermediate functions
whose results can be revealed, unlike the outputs of intermediate gates
in a circuit,
without compromising security. We propose a more general modular approach.
To support a modular approach, however, the concatenation of
protocols and the composition of functions require particular attention in
terms of security and reliability. In this chapter we address the
particular intuitive and formal issues arising from the pursuit of a
modular approach. We prove some intuitive lemmas regarding composition and
concatenation, among them results — such as the validity
of composing polynomially many protocols — whose statements as
folk theorems are sufficiently imprecise that they do not hold.
We give the formal requirements under which they do hold.
In the next chapter we present particular protocols; this chapter
focuses on technical lemmas and methodology.
§ ROBUST AND PRIVATE REPRESENTATIONS
A threshold scheme allows one player, the dealer, to distribute a
secret value $s$ among the network in such a way that only certain
coalitions of players have sufficient information to determine $s.$
When a bound $t$ on the number of faulty or curious participants
is given, the power to reconstruct $s$ is given to groups of size $t+1$ or
more, but not to any group of size $t$ or less.
There are two properties we should like to satisfy. The first states that
there is a means to reconstruct a shared value despite errors. Given a
vector $\vec{y}$ of $n$ values, a $t$-modification
of $\vec{y}$ is a
vector differing in at most $t$ places from $\vec{y},$ namely a vector of
Hamming distance $t$ or less from $\vec{y}.$
A function $\sha:S \rightarrow (\sigstar)^n$ is a $t$-robust
representation if there exists
a function $\recons$ such that,
for all $s \in S,$
for all $t$-modifications $\vec{y}'$
of $\vec{y}=\sha(s),$ we have $\recons(\vec{y}')=s.$
Broadcast channels are useful to maintain robustness,
in which case the value is not kept private.
Secret sharing, which does achieve privacy at the same time,
is also useful.
The second property states that the information distributed to the players
preserves the privacy of $s.$ Recall that the vacuous
protocol reveals no information.
A function $\sha$ is a $t$-private function
if there exists a protocol $\share$ to compute $\sha$ that is as
$t$-resilient as the ideal vacuous protocol.
Any function that produces the same results regardless of its inputs is
certainly private, though not necessarily robust. Though such a function
is apparently useless at first glance, we shall find important uses.
Robustness and privacy together define secret sharing:
A threshold scheme with threshold $t$
is a pair of protocols $\gensha$ and $\genrec,$ where $\gensha$ computes a
$t$-robust and $t$-private representation $\sha,$ and $\genrec$ computes
the function $\recons.$
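As one concrete instance of these definitions, Shamir's polynomial scheme realizes $\sha$ and $\recons$ over a prime field. The prime and parameters below are ours for illustration, and robust decoding against corrupted shares (error correction) is omitted: this `recons` assumes it is given $t+1$ correct shares.

```python
# A minimal Shamir threshold scheme over a prime field: a t-private
# representation whose reconstruction here assumes correct shares.

import random

P = 2_147_483_647  # a Mersenne prime, large enough for toy secrets

def sha(secret, n, t):
    """Deal n shares of `secret`: any t+1 recover it, any t learn nothing."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    def f(x):
        return sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def recons(shares):
    """Lagrange interpolation at 0 from t+1 (assumed correct) shares."""
    s = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        s = (s + yi * num * pow(den, P - 2, P)) % P  # pow(.,P-2,P): inverse
    return s

shares = sha(12345, n=5, t=2)
print(recons(shares[:3]))  # any 3 of the 5 shares suffice -> 12345
```

With Reed–Solomon style decoding in place of plain interpolation, the same representation becomes $t$-robust as well, which is what the threshold-scheme definition demands.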
§ COMPOSING FUNCTIONS
In order to develop modular protocols, let us consider protocols intended
to compute some sequence of intermediate functions. For clarity of
exposition, first consider only two functions, $F(x_1(1),\dots,x_n(1))$ and
$G(x_1(2),\dots,x_n(2)),$ in sequence. To be precise, the inputs to $G$
may include or be influenced by the inputs and outputs of $F.$ We
consider $x_1(2)$ to be a function of $y_1(1)$ (which
itself specifies $x_1(1),$ without loss of generality)
and of $x_1^{new}(2),$ a new portion
of the input. The desired application may certainly specify that
$x_1^{new}(2)$ is ignored; our protocols do not actually take advantage of
new inputs, and we shall soon omit any mention of them for
the sake of readability.
There are a few different ways to combine functions that are useful
(cf. <ref>):
* $(F,G):$ the computation of the function
$H(x_1,\ldots,x_n)$ that concatenates the outputs of $F$ and $G:$
namely $H_i(x_1,\ldots,x_n)=(F_i(x_1,\ldots,x_n),G_i(x_1,\ldots,x_n)).$
The inputs to $G$ do not depend on the outputs of $F.$
* $(F;G):$ the sequential computation of
$G$ with $F.$
The inputs to $G$ do not necessarily depend on the outputs of $F.$
* $\opencomp \set{F,G}$ or $(F; G\circ F):$
the open composition of $F$ then $G,$
using the outputs of $F$ in the computation of $G.$
* $\closedcomp \set{F,G}$ or $(G \closedcomp F):$
the hidden composition of $G$ with $F,$
revealing the results of function $G \circ F$
without revealing the results of $F.$
The two of greatest interest are (<ref>)
open composition
and (<ref>) hidden composition.
The idea of computing on hidden values, using the results of $F$ to compute
$G$ without revealing them in the interim, is extremely useful.
A collection of function families is denoted
$\scf=\set{F^1,F^2,\ldots}$ where each $F^i$ is a family $F^i =
\set{F^{i,n,m}}.$ Let $f(n,m,k)$ be some function from integers to integers.
The closed composition
of $f(n,m,k)$ functions from $\scf$, written $\closedcomp \scf$ or informally
$F^{f(n,m,k)}\closedcomp \cdots \closedcomp F^1,$ is defined as the function
family $F=\set{F^{n,m}},$ where each $F^{n,m}$ is:
\[
F^{n,m} = F^{f(n,m,k),n,m} \circ F^{f(n,m,k)-1,n,m} \circ \cdots \circ F^{1,n,m}.
\]
The open composition
of $f(n,m,k)$ functions from $\scf$, written
$\opencomp \scf,$ includes the progressive results in the composition of
the $f(n,m,k)$ functions:
\[
F^{n,m\cdot f(n,m,k)} = (F^{1,n,m},F^{2,n,m}\circ F^{1,n,m},\ldots,
F^{f(n,m,k),n,m} \circ \cdots \circ F^{1,n,m}).
\]
Note the use of $m\cdot f(n,m,k)$ in the indexing; the function has an
output that is $f(n,m,k)$ times as large. (We define the functions
$F^{n,m\cdot f(n,m,k) + a}$ for $0<a<f(n,m,k)$ to be identically 0.)
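The two compositions of greatest interest can be illustrated on ordinary single-argument functions, suppressing the per-player indexing; the names `closed_comp` and `open_comp` are ours. Closed composition yields only the final value, while open composition also produces every progressive result.

```python
# Toy illustration of closed vs. open composition of a function sequence.

from functools import reduce

def closed_comp(fs):
    """F^p o ... o F^1 : only the last value is produced."""
    return lambda x: reduce(lambda v, f: f(v), fs, x)

def open_comp(fs):
    """(F^1, F^2 o F^1, ...) : all progressive results are produced."""
    def run(x):
        out, v = [], x
        for f in fs:
            v = f(v)
            out.append(v)        # each intermediate result is revealed
        return tuple(out)
    return run

fs = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
print(closed_comp(fs)(5))  # -> 9
print(open_comp(fs)(5))    # -> (6, 12, 9)
```

The factor-of-$f(n,m,k)$ growth in output size noted above is visible here: the open composition's output tuple has one entry per composed function.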
§ CONCATENATING PROTOCOLS
We shall speak of protocols, protocol families, and collections
of protocol families; to make the presentation more clear,
Figure <ref> outlines the various levels on which
we group together and discuss protocols and the events they induce.
Recall that a protocol execution on a particular set of inputs
$\vec{x},$ $\vec{a},$ and $a,$ with particular values of $n,$
$m,$ and $k,$ induces a distribution on histories. By
varying these parameters, we consider larger and larger
conglomerations of protocols (e.g. protocols, families
of protocols, etc.) which induce larger and larger
probabilistic conglomerations (e.g., distributions,
ensembles, families of ensembles, etc.).
§.§ Concatenating General Protocols
Let $\protosa=\set{\protoa(1),\protoa(2),\dots}$ be a collection of
protocol families, and let $f(n,m,k)$ be some function from $\nat^3$ to $\nat.$
Let $\set{x_i^{new}(r)}_{i,r \in \natsmall}$ be a collection of
inputs that are to be supplied as the execution of the protocol progresses.
Let $\set{a_i^{new}(r)}_{i,r \in \natsmall}$ be a collection of
auxiliary inputs that are to be supplied as the execution of the protocol
progresses. Let $x_i(1)=x_i^{new}(1)$ and $a_i(1)=a_i^{new}(1)$ for all
$i.$ Let $\set{\chi^{n,i,r}}_{n,i,r \in \natsmall}$ be a
collection of probabilistic functions mapping an output and auxiliary input
to a new input.
The sequential concatenation of protocols in
$\protosa$ is the protocol described in Figure <ref> and is written
$\circ \protosa,$ or $\protoconc_{i=1}^{f(n,m,k)} \protoa(i),$ or
$(\protoa(1);\protoa(2);\ldots;\protoa(f(n,m,k))).$ For each $n,$
$f(n,m,k)$ protocols are executed in turn. The input of player $i$ to the
$r^{th}$ protocol is denoted $x_i(r).$ Before the $r^{th}$ protocol is
executed, each player $i$ obtains an external auxiliary input,
$a_i^{new}(r),$ which is combined with its view of previous executions to
provide the auxiliary input $a_i(r)$ for the $r^{th}$ protocol. Player $i$
chooses an input $x_i(r)$ for the $r^{th}$ protocol based on the output of
the previous protocol $y_i(r-1)$ and its new input $x_i^{new}(r),$
according to the function $\chi^{n,i,r}.$ This allows the player to base its
next input on the results of previous protocols as well as other
information it may have. The functions $\chi^{n,i,r}$ are part of the
protocol and players are required to use them; the protocol designer is
free to specify that players ignore previous outputs or that they use them
in some particular way. Note that players corrupted by a Byzantine
adversary are certainly allowed to specify inputs chosen differently.
We must include one restriction on the adversary: the set of players it
corrupts throughout the entire concatenated protocol must be a subset of
some allowable coalition in the fault class. Formally, if the adversary
classes are $\sca_1,\sca_2,\ldots,$ then the adversary class for the
concatenation of protocols is $\sca= \cap \sca_r.$ In each successive
execution, the adversary may be treated as a new adversary receiving the
view of the previous execution as its auxiliary input.
(CP1) Run protocol $\protoa(1):$
$\view(1) \leftarrow \ensAAlphao(n,m)\protoIn$
$y_i(1) \leftarrow Y_i(\view_i(1))$
$(1 \leq i \leq n)$
(CP(r)) $r=2..f(n,m,k)$
$a_i(r) \leftarrow (\view_i(r-1),a_i^{new}(r))$
$(1 \leq i \leq n)$
$a(r) \leftarrow (\view_A(r-1),a^{new}(r))$
$x_i(r) \leftarrow \chi^{n,i,r}(y_i(r-1),x_i^{new}(r))$
$(1 \leq i \leq n)$
$\view(r) \leftarrow \ensAAlphar(n,m)\protoInr$
$y_i(r) \leftarrow Y_i(\view_i(r))$
$(1 \leq i \leq n)$
Concatenation of $f(n,m,k)$ protocols from the collection of protocol families $\protosa.$
§.§ Concatenating Ideal Protocols
Though the definitions of <ref> demonstrate how ideal
protocols are concatenated, the direct concatenation of ideal protocols
will not provide quite the level of security and reliability we desire.
Operating two protocols $\idealname(F)$ and $\idealname(G)$ in sequence
does not ensure that the inputs to the second protocol are in any way
related to the inputs or outputs of the first. That is, a corrupt player
might obtain $y_i(1)$ as its output of $F$ but instead supply
$(y_i(1)+1,x_i^{new}(2))$ as its input $x_i(2)$ to $G.$
In many circumstances this is quite undesirable. For example, the
employees of a company might wish to calculate their average overall salary
and the average salary of management. First, they compute the overall
average. For the second computation, however, the management may report
lower salaries to obtain an advantage in salary negotiations.
As defined in <ref>, let $\scf=\set{F^i}_{i \in
\natsmall}$ be a possibly infinite collection of families
$F^i=\set{F^{i,n,m}}$ of finite functions, and let $f : \nat^3 \rightarrow
\nat.$ (We normally take $f(n,m,k)$ to be polynomial.)
The open concatenation of $f$ ideal protocols
using $\set{\chi^{n,i,r}}_{n,i,r \in \natsmall}$
is the following protocol, denoted by
$\ocip= \circ_i \idealname(F^i).$ Consider $n+f(n,m,k)$ players.
A $t$-fault class is, as usual, the collection of subsets of $[n];$ players
$i > n$ are incorruptible. The protocol requires $2f(n,m,k)$ rounds;
let $r$ range from $1$ to $f(n,m,k):$
(Round $1$)
Each player $i \in [n]$ sends $x_i(1)=x_i$ to trusted player $(n+1).$
(Round $2$)
Player $(n+1)$ computes the value $F^1(x_1,\ldots,x_n)$ and returns the
value $y_i(1)=F^1_i(x_1,\ldots,x_n)$ to each player $i\in [n].$
(Round $2r-1$)
Each player $i\in [n]$ chooses an input $x_i(r) \leftarrow
\chi^{n,i,r}(y_i(r-1),x_i^{new}(r))$ and sends it to trusted player $(n+r).$
(Round $2r$)
Player $(n+r)$ computes the value $F^r(x_1,\ldots,x_n)$ and returns
$y_i(r)$ to each player $i\in [n].$
Technically, the objection to simply concatenating protocols directly is
the following. Operating ideal protocols in sequence invokes different trusted parties, one to compute $F^1,$ one to compute $F^2,$ and
so on. None of the trusted parties share any information or communicate at
all.
The ideal composite protocol
should have one trusted party. The protocol is slightly longer:
first the trusted host requests $x_1(1),\ldots,x_n(1),$ and returns
$F^1(x_1(1),\ldots,x_n(1)).$ The host retains the values of the outputs.
Since the inputs to the next computation are each some function of the
previous output and a new input, the players themselves need only supply
the new input, and the host can compute the next round's inputs using
$x_i(r) \leftarrow \chi^{n,i,r}(y_i(r-1),x_i^{new}(r)).$ This prevents
changing the output of previous protocols, while permitting a protocol
designer to include new information during the protocol if desired. More
formally:
The ideal open composite protocol
for $f(n,m,k)$ functions from $\scf,$
using $\set{\chi^{n,i,r}}_{n,i,r \in \natsmall},$ is the following
protocol, denoted $\idoc.$ For each $n,$ consider $n+1$ players. Player
$(n+1)$ is incorruptible; the $t$-fault class is the collection of subsets
of $[n].$ The protocol requires $2f(n,m,k)$ rounds:
(Round $1$)
Each player $i\in [n]$ sends $x_i(1)$ to trusted player $(n+1).$
(Round $2$)
Player $(n+1)$ computes the value $F^1(x_1,\ldots,x_n)$ and returns
$y_i(1)=F^1_i(x_1,\ldots,x_n)$ to each player $i \in [n].$
(Round $2r-1$)
Each player $i \in [n]$ sends $x_i^{new}(r)$ to the trusted host.
(Round $2r$)
In round $2r,$ player $(n+1)$ (not player $i$) computes $x_i(r)
\leftarrow \chi^{n,i,r}(y_i(r-1),x_i^{new}(r)),$ the next “input” for player
$i.$ Player $(n+1)$ then computes the value $F^r(x_1(r),\ldots,x_n(r))$ and
returns $y_i(r)$ to each player $i \in [n].$
The ideal hidden composite protocol,
$\idhc,$ is the same as $\idoc$ except that in intermediate rounds the
trusted host always returns the same results as the vacuous protocol (i.e. a vector describing honest and cheating players) instead of $y_i(r).$
In the final round, the trusted host does return the final result.
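The single-host bookkeeping of $\idoc$ and $\idhc$ can be sketched as follows. The particular $\chi$ (add the new input to the previous output) and the "ok" vacuous result are our illustrative assumptions.

```python
# Sketch of a single trusted host running the ideal open/hidden composite
# protocol: it retains previous outputs and derives round-r inputs itself
# via chi, so corrupted players cannot silently rewrite earlier results.

def ideal_composite(funcs, x1, new_inputs, hidden=False):
    n = len(x1)
    chi = lambda y_prev, x_new: y_prev + x_new   # illustrative chi^{n,i,r}
    x = list(x1)
    transcript = []
    for r, F in enumerate(funcs):
        y = F(x)                                 # host evaluates F^r
        if hidden and r < len(funcs) - 1:
            transcript.append(["ok"] * n)        # vacuous interim result
        else:
            transcript.append(list(y))           # open (or final) result
        if r < len(funcs) - 1:
            # host, not the players, forms the next round's inputs
            x = [chi(y[i], new_inputs[r][i]) for i in range(n)]
    return transcript

avg = lambda xs: [sum(xs) / len(xs)] * len(xs)
t = ideal_composite([avg, avg], [2.0, 4.0], [[0.0, 0.0]], hidden=True)
print(t)  # interim round hidden ("ok"), final averages revealed
```

In the salary example above, the management cannot under-report in the second computation because the host, not the players, supplies $x_i(2)$ from $y_i(1)$.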
Our ultimate goal is to achieve the two protocols $\idoc$ and $\idhc,$
since they ensure that information is consistent from one function
evaluation to the next. Section <ref> presents a method
to ensure that the concatenation of ideal protocols does in fact achieve
the same results as an ideal composite protocol with a single trusted
host. Verifiable secret sharing is essential, both for maintaining
consistency and for hiding progressive results. For clarity, we shall omit
mention of the new inputs $x_i^{new}$ and of the functions $\chi^{n,i,r}.$
§ THE MODULAR APPROACH
The key to achieving efficiency is to avoid evaluating a circuit for $F$
directly, and instead to break the computation of $F$ into the computation
of several functions $F^1,\ldots,F^{f(n,m,k)},$ each of which reveals nothing
about the inputs. Each of these computations is itself broken up into a
composition of fundamental operations $G,$ such as addition and
multiplication, according to the established paradigm. Each input is
shared as a secret using a robust representation. Instead of using the
fundamental computations, which would reveal information, robust and
private representations $\robsec(G)$ are used to compute pieces of
new secrets from the old ones. For example, secrets are added or
multiplied, but the inputs and outputs are maintained in shared form. The
use of a robust and private representation (secret sharing) allows us to
compute open compositions of functions, which allows us simply to
concatenate subprotocols. It also provides the glue to ensure that the
computations are insensitive to faulty players who fail to supply the
output of one subprotocol as the input to the next.
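For the fundamental operation of addition, computing on hidden values is especially simple: Shamir-style shares are additively homomorphic, so each player adds its shares locally to obtain a share of the sum. The sketch below is ours, with a small field for readability; multiplication, which raises the polynomial degree, requires the full machinery of [28, 39].

```python
# Computing on hidden values: adding shares locally yields a share of the
# sum, so a + b can be opened without either input ever being revealed.

import random

P = 101  # small prime field, for illustration only

def deal(s, n, t):
    """Shares of s at evaluation points x = 1..n (degree-t polynomial)."""
    coeffs = [s] + [random.randrange(P) for _ in range(t)]
    return [sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
            for x in range(1, n + 1)]

def open_shares(shares):
    """Lagrange interpolation at 0; shares must be from points 1..len."""
    s = 0
    for i, yi in enumerate(shares, start=1):
        num = den = 1
        for j in range(1, len(shares) + 1):
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        s = (s + yi * num * pow(den, P - 2, P)) % P
    return s

a, b = deal(20, n=3, t=1), deal(30, n=3, t=1)
c = [(ai + bi) % P for ai, bi in zip(a, b)]   # local addition of shares
print(open_shares(c[:2]))  # -> 50, with 20 and 30 never reconstructed
```

This is exactly the sense in which $\robsec(G)$ operates on secret pieces of the inputs to $G$ and produces secret pieces of its results.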
The supporting machinery [28, 39] establishes how to compute each
intermediate function $F^i$ by reducing it to simple operations on secrets.
We shall use the protocols of Ben-Or et al (see
<ref>) for these simple operations. We achieve vast
improvements in efficiency through careful choice of the intermediate
functions (see <ref>). In the remainder of this chapter we
give formal arguments for the validity of our methods.
Using more formal notation, the general technique is as follows. Function
family $F$ is first represented as the open composition $\opencomp \scf$ of
some collection of function families, each of which is private. In other
words, $F$ is decomposed into a sequence of intermediate functions whose
results can be revealed openly without compromising essential information.
Then, each family $F^i$ in the collection is itself written as a hidden composition of a collection $\scg^i$ containing more fundamental
functions (such as addition and multiplication). In other words, we should
like that each $F^i$ is computed as a composition of fundamental functions
$\set{G^{ij}},$ even though the intermediate results of these computations
cannot be revealed. Finally, to facilitate the hidden composition of
these fundamental functions, each fundamental function $G^{ij}$ must have a
private and robust representation $\robsec(G^{ij}).$ In other words,
$\robsec(G^{ij})$ is essentially a function that operates on secret pieces
of the inputs to $G^{ij}$ and produces secret pieces of the results of
$G^{ij},$ rather than producing the results themselves. Diagrammatically:
\begin{eqnarray*}
F & \rightarrow & \opencomp_i \set{F^i} \\
F^i & \rightarrow & \closedcomp_j \set{G^{ij}} \\
G^{ij} & \rightarrow & \robsec(G^{ij})
\end{eqnarray*}
§ INDISTINGUISHABILITY
In <ref> we defined how two ensembles are
indistinguishable, and we defined how two families of ensembles
(parametrized by $n$ and $m$) are indistinguishable. In order to examine
concatenations of many protocols and in order to facilitate proofs of
resilience, we define indistinguishability for collections of ensemble
families and collections of protocol families.
§.§ Collections of Families of Ensembles
Let $\enssa=\set{\ensa(i)}_{i \in \natsmall}$ be a collection of families
of ensembles. For concreteness, note that $\ensa(i)$ is a family of
ensembles, $\ensa(i)(n,m)$ is an ensemble, and $\ensa(i)(n,m)(z,k)$ is a
distribution on strings. Let $\enssb=\set{\ensb(i)}_{i \in \natsmall}$
also be a collection of families of ensembles. In the spirit of
mathematical analysis we define uniform indistinguishability, where
“uniform” refers not to Turing machines but to the flavor of uniform
vs. pointwise convergence. We require that for any $n$ and $m,$ there is
a bound $\delta(i,k)$ on the closeness of corresponding ensembles
$\ensa(i)(n,m)$ and $\ensb(i)(n,m),$ and that the bound is approached
uniformly for all $i:$
Two collections of ensemble families $\enssa=\set{\ensa(i)}$ and
$\enssb=\set{\ensb(i)}$ are
* pairwise $O(\delta(i,k))$-indistinguishable
if for all $i,$ $\ensa(i) \indistFa^{O(\delta(i,k))} \ensb(i).$
* uniformly pairwise $O(\delta(i,k))$-indistinguishable
if
\[
(\exists \Delta: \nat^2 \rightarrow \nat)
\mbox{\hspace{0.2in}}
(\forall i)
\mbox{\hspace{0.1in}}
\ensa(i) \indistFa^{\Delta(i,k)} \ensb(i)
\mbox{\hspace{0.3in} \rm and }
\Delta(i,k) = O(\delta(i,k)).
\]
* pairwise $O(\delta(i,k))$-computationally indistinguishable
if for all $i,$ $\ensa(i) \indistFaC^{O(\delta(i,k))} \ensb(i).$
* uniformly pairwise $O(\delta(i,k))$-computationally indistinguishable
if
\[
(\forall n,m)
(\forall A \in \mbox{\sc PPC})
(\exists c, k_0)
(\forall k \geq k_0)
(\forall z)
(\forall i)
\]
\[
\abs{A_{\ensa(i)(n,m)(z,k)}-A_{\ensb(i)(n,m)(z,k)}} \leq
c \cdot \delta(i,k).
\]
We write
$\enssa \indistCo^{O(\delta(k))} \enssb$
for the uniform case and
$\enssa \indistCoC^{O(\delta(k))} \enssb$
for the uniform computational case.
The essential point is that for uniform indistinguishability, the
same convergence parameter $k_0$ applies to
all the families simultaneously. That is, the closeness of the
corresponding pairs converges for sufficiently large $k,$ but the rate of
convergence (i.e. the point at which the families are thereafter
close) is the same for all pairs at once. Clearly, any two equally-sized
finite collections of ensemble families are uniformly pairwise
$O(\delta(i,k))$-indistinguishable if they are
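A minimal numeric sketch of the uniformity requirement, assuming statistical (total-variation) distance as the closeness measure. The ensembles `ens` and `ens_b` are invented for illustration: every pair in the collection is $2^{-k}$-close, and a single bound $\Delta(k)=2^{-k}$ covers all indices $i$ at once.

```python
def stat_distance(p, q):
    """Total variation distance between two finite distributions (dicts)."""
    keys = set(p) | set(q)
    return sum(abs(p.get(s, 0.0) - q.get(s, 0.0)) for s in keys) / 2

def ens(i, k):
    """A(i) at security parameter k: a coin biased by 2**-k / (i + 1)."""
    eps = 2.0**-k / (i + 1)
    return {"H": 0.5 + eps, "T": 0.5 - eps}

def ens_b(i, k):
    """B(i): the fair coin, for every i."""
    return {"H": 0.5, "T": 0.5}

# One bound Delta(k) = 2**-k works for all i simultaneously:
# this is uniform pairwise indistinguishability.
for i in range(20):
    assert stat_distance(ens(i, 10), ens_b(i, 10)) <= 2.0**-10 + 1e-12
```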
We also examine a single collection of ensemble families with the
property that each ensemble is indistinguishable from its successor.
Before, there need be no relationship among families in the same
collection, whereas now we require one.
A collection $\enssa=\set{\ensa(i)}$ of families of ensembles is
* sequentially $O(\delta(k))$-indistinguishable
if for all $i,$ $\ensa(i) \indistFa^{O(\delta(k))} \ensa(i+1).$
* uniformly sequentially $O(\delta(k))$-indistinguishable
if
\[
(\exists \Delta: \nat \rightarrow \nat)
\mbox{\hspace{0.2in}}
(\forall i)
\mbox{\hspace{0.1in}}
\ensa(i) \indistFa^{\Delta(k)} \ensa(i+1)
\mbox{\hspace{0.3in} \rm and }
\Delta(k) = O(\delta(k)).
\]
* sequentially $O(\delta(k))$-computationally indistinguishable
if for all $i,$ $\ensa(i) \indistFaC^{O(\delta(k))} \ensa(i+1).$
* uniformly sequentially $O(\delta(k))$-computationally indistinguishable
if
\[
(\forall n,m)
(\forall A \in \mbox{\sc PPC})
(\exists c,k_0)
(\forall k \geq k_0)
(\forall z)
(\forall i)
\]
\[
\abs{A_{\ensa(i)(n,m)(z,k)}-A_{\ensa(i+1)(n,m)(z,k)}} \leq c \cdot \delta(k).
\]
Clearly, any finite collection of ensemble families is uniformly
sequentially $O(\delta(k))$-indistinguishable if each successive pair of
families is $O(\delta(k))$-indistinguishable.
§.§ Collections of Families of Protocols
The relative resilience of two collections of protocol families is
defined in a similar fashion as ensemble families: according to the
relative resilience of each corresponding pair of protocols.
Let $\protosa=\set{\protoa(i)}$ and $\protosb=\set{\protob(i)}$ be
collections of protocol families. Then $\protosa$ is
* pairwise $O(\delta(i,k))$-resilient
as $\protosb$ if for all $i,$
$\protoa(i) \resilasFa^{O(\delta(i,k))} \protob(i).$
* uniformly pairwise $O(\delta(i,k))$-resilient
as $\protosb$ if
\[
(\exists \Delta: \nat^2 \rightarrow \nat)
\mbox{\hspace{0.2in}}
(\forall i)
\mbox{\hspace{0.1in}}
\protoa(i) \resilasFa^{\Delta(i,k)} \protob(i)
\mbox{\hspace{0.3in} \rm and }
\Delta(i,k) = O(\delta(i,k)).
\]
We write $\protosa \resilasCo^{O(\delta(i,k))} \protosb$ for the uniform case.
A collection $\protosa=\set{\protoa(i)}$ of protocol families is
* sequentially $O(\delta(k))$-resilient
if for all $i,$ $\protoa(i) \resilasFa^{O(\delta(k))} \protoa(i+1).$
* uniformly sequentially $O(\delta(k))$-resilient
if
\[
(\exists \Delta: \nat \rightarrow \nat)
\mbox{\hspace{0.2in}}
(\forall i)
\mbox{\hspace{0.1in}}
\protoa(i) \resilasFa^{\Delta(k)} \protoa(i+1)
\mbox{\hspace{0.3in} \rm and }
\Delta(k) = O(\delta(k)).
\]
The adjectives perfect, exponential, statistical, and computational apply respectively, as in <ref> and
<ref>, mutatis mutandis.
§ INDISTINGUISHABLE ENSEMBLES: TECHNICAL LEMMAS
Two indistinguishable ensembles remain indistinguishable
when restricted to simple subranges of the samples.
(Indistinguishable Substrings)
Let $\scg=\set{G(z,k)}$ and $\sch=\set{H(z,k)}$ be
ensembles where each $G(z,k)$ and $H(z,k)$ is a distribution on $\Sigma^{f(\abs{z},k)}.$
Let $a(z,k)$ and $b(z,k)$ be
polynomial-time computable indices in the range $1,\ldots,f(\abs{z},k),$ with
$a(z,k) \leq b(z,k).$
Let $G[a,b](z,k)$ be the distribution
\[
\set{ \sigma \leftarrow G(z,k); \tau \leftarrow \sigma[a(z,k)..b(z,k)]:
\tau}
\]
and define $H[a,b](z,k)$ similarly.
Then subranges are indistinguishable:
\begin{eqnarray}
\scg \indistEn \sch & \Rightarrow & \scg[a,b] \indistEn \sch[a,b]
\label{eqn-subone} \\
\scg \indistEnE \sch & \Rightarrow & \scg[a,b] \indistEnE \sch[a,b]
\label{eqn-subtwo} \\
\scg \indistEnS \sch & \Rightarrow & \scg[a,b] \indistEnS \sch[a,b]
\label{eqn-subthree} \\
\scg \indistEnC\sch & \Rightarrow & \scg[a,b] \indistEnC \sch[a,b]
\label{eqn-subfour}
\end{eqnarray}
Statements (<ref>)-(<ref>) follow by observing that
$\scg \indistEn^{\delta(k)} \sch \Rightarrow \scg[a,b] \indistEn^{\delta(k)}
\sch[a,b],$ as shown by:
\begin{eqnarray*}
\abs{\probb{\scg[a,b]}{\tau}-\probb{\sch[a,b]}{\tau}}
& = &
\abs{\sum \probb{\scg}{\sigma \mid \tau}-\probb{\sch}{\sigma \mid \tau}}
\\
& = &
\abs{\sum_{\sigma \mid \sigma[a..b]=\tau} \probb{\scg}{\sigma} -
\sum_{\sigma \mid \sigma[a..b]=\tau} \probb{\sch}{\sigma}}
\\
& \leq & \delta(k),
\end{eqnarray*}
which holds for sufficiently large $k,$ by the indistinguishability of
$\scg$ and $\sch.$
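The statistical case can be checked concretely: marginalizing to a substring never increases total-variation distance, since the marginal differences are sums of the full differences. The distributions `G` and `H` below are invented for illustration, not taken from the text.

```python
from itertools import product

def stat_distance(p, q):
    """Total variation distance between two finite distributions (dicts)."""
    keys = set(p) | set(q)
    return sum(abs(p.get(s, 0.0) - q.get(s, 0.0)) for s in keys) / 2

def restrict(dist, a, b):
    """Marginal of a distribution on strings, restricted to
    positions a..b (1-indexed, inclusive) -- the map G -> G[a,b]."""
    out = {}
    for sigma, pr in dist.items():
        tau = sigma[a - 1:b]
        out[tau] = out.get(tau, 0.0) + pr
    return out

bits = list(product("01", repeat=3))
G = {"".join(s): 1 / 8 for s in bits}            # uniform on {0,1}^3
H = dict(G)
H["000"] += 0.01                                 # a slightly perturbed copy
H["111"] -= 0.01

full = stat_distance(G, H)
sub = stat_distance(restrict(G, 1, 2), restrict(H, 1, 2))
assert sub <= full + 1e-12                       # restriction cannot increase distance
```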
To verify (<ref>), which describes computational
indistinguishability, let us assume it fails. Then there exists a machine
$M$ that distinguishes $G[a,b]$ from $H[a,b].$ Construct machine $M'$ which
does the following: on input $\sigma$ of length $n^c,$ set
$\tau=\sigma[a(z,k)..b(z,k)],$ run $M$ on $\tau,$ and use its output. Now,
for some $c,$ the outputs of $M$ differ noticeably:
$\abs{M_{G[a,b]}[\tau] - M_{H[a,b]}[\tau]} \geq n^{-c}$
infinitely often. Clearly,
$M'_{G}[\sigma] = M_{G[a,b]}[\tau]$ and
$M'_{H}[\sigma] = M_{H[a,b]}[\tau].$
Therefore for that same $c,$
$\abs{M'_{G}[\sigma] - M'_{H}[\sigma]} \geq n^{-c}$
infinitely often, contradicting the indistinguishability of $\scg$ and
$\sch.$
Observe that replacing one ensemble in a series by an indistinguishable
ensemble leaves the entire series indistinguishable:
Let $\scp_1, \scp_2^{\alpha}, \scp_2^{\beta},$ and $\scp_3$ be ensembles.
Define the ensembles
\begin{eqnarray*}
\scq^{\alpha}(z,k) & = &
\{
z_1 \leftarrow \scp_1(z,k);
z_2^{\alpha} \leftarrow \scp_2^{\alpha}(z_1,k);
z_3^{\alpha} \leftarrow \scp_3(z_2^{\alpha},k):
z_3^{\alpha} \} \\
\scq^{\beta}(z,k) & = &
\{
z_1 \leftarrow \scp_1(z,k);
z_2^{\beta} \leftarrow \scp_2^{\beta}(z_1,k);
z_3^{\beta} \leftarrow \scp_3(z_2^{\beta},k):
z_3^{\beta} \}
\end{eqnarray*}
If $\scp_2^{\alpha} \indistEn^{O(\delta(k))} \scp_2^{\beta},$
then $\scq^{\alpha} \indistEn^{O(\delta(k))} \scq^{\beta}.$
The convergence parameters are identical.
Let $k_0$ be the convergence parameter for $\scp_2^{\alpha}$ and
$\scp_2^{\beta},$ with associated constant $c_0.$
Assume by way of contradiction that $\scq^{\alpha}
{\notindistEn}^{O(\delta(k))} \scq^{\beta}.$ Then there is a $k \geq k_0$ and a $z$ such that
\begin{eqnarray*}
\sum_{z_1,z_2,z_3} \mid
\probb{\scp_3(z_2,k)}{z_3}
\probb{\scp_2^{\alpha}(z_1,k)}{z_2}
\probb{\scp_1(z,k)}{z_1} & & \\
{} -
\probb{\scp_3(z_2,k)}{z_3}
\probb{\scp_2^{\beta}(z_1,k)}{z_2}
\probb{\scp_1(z,k)}{z_1} \mid
& > &
c_0 \cdot \delta(k)
\end{eqnarray*}
Thus there exists a $z_1$ such that
\begin{eqnarray*}
\sum_{z_2,z_3}
\mid (
\probb{\scp_3(z_2,k)}{z_3}
\probb{\scp_2^{\alpha}(z_1,k)}{z_2} & & \\
{} -
\probb{\scp_3(z_2,k)}{z_3}
\probb{\scp_2^{\beta}(z_1,k)}{z_2}
) \mid
& > &
c_0 \cdot \delta(k)
\end{eqnarray*}
It follows that
\begin{eqnarray*}
\sum_{z_2}
\abs{
\probb{\scp_2^{\alpha}(z_1,k)}{z_2}
- \probb{\scp_2^{\beta}(z_1,k)}{z_2}
}
& = & \\
\sum_{z_2}
\left(
\sum_{z_3} \probb{\scp_3(z_2,k)}{z_3} \cdot
\abs{
\probb{\scp_2^{\alpha}(z_1,k)}{z_2}
- \probb{\scp_2^{\beta}(z_1,k)}{z_2}
}
\right)
& > &
c_0 \cdot \delta(k).
\end{eqnarray*}
Since $k \geq k_0,$ this contradicts $\scp_2^{\alpha}
\indistEn^{O(\delta(k))} \scp_2^{\beta}.$
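The lemma is at bottom a data-processing statement: sampling $z_1,$ substituting a close distribution for the middle step, and post-processing with $\scp_3$ cannot amplify the distance. The following exact computation, with invented distributions standing in for $\scp_1,$ $\scp_2^{\alpha},$ $\scp_2^{\beta},$ and $\scp_3,$ illustrates this.

```python
def stat_distance(p, q):
    """Total variation distance between two finite distributions (dicts)."""
    keys = set(p) | set(q)
    return sum(abs(p.get(s, 0.0) - q.get(s, 0.0)) for s in keys) / 2

def compose(p1, p2, p3):
    """Exact output distribution of: z1 <- p1; z2 <- p2(z1); z3 <- p3(z2): z3."""
    out = {}
    for z1, w1 in p1.items():
        for z2, w2 in p2[z1].items():
            for z3, w3 in p3[z2].items():
                out[z3] = out.get(z3, 0.0) + w1 * w2 * w3
    return out

# Illustrative stand-ins for the experiments of the lemma.
p1 = {"a": 0.5, "b": 0.5}
p2_alpha = {"a": {0: 0.7, 1: 0.3}, "b": {0: 0.2, 1: 0.8}}
p2_beta  = {"a": {0: 0.7, 1: 0.3}, "b": {0: 0.25, 1: 0.75}}
p3 = {0: {"x": 1.0}, 1: {"x": 0.5, "y": 0.5}}

delta = max(stat_distance(p2_alpha[z], p2_beta[z]) for z in p1)
q_alpha, q_beta = compose(p1, p2_alpha, p3), compose(p1, p2_beta, p3)
assert stat_distance(q_alpha, q_beta) <= delta + 1e-12
```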
Let $\scp_1, \scp_2^{\alpha}, \scp_2^{\beta},$ and $\scp_3$ be polynomially
generable and let $\scq^{\alpha}$ and $\scq^{\beta}$ be as in
Lemma <ref>.
If $\scp_2^{\alpha} \indistEnC^{\delta(k)} \scp_2^{\beta}$ then
$\scq^{\alpha} \indistEnC^{\delta(k)} \scq^{\beta}.$
Let $k_0$ be the convergence parameter for $\scp_2^{\alpha}$ and
$\scp_2^{\beta}.$
{\notindistEnC}^{\delta(k)} \scq^{\beta}.$
Then there exists a probabilistic polynomial size distinguisher $A$ such
that for some $z$ and infinitely many $k,$
\[
\abs{A_{\scq^{\alpha}(z,k)} - A_{\scq^{\beta}(z,k)}} > \delta(k).
\]
Now, $A_{\scq^{\alpha}(z,k)}$ is just
\[
\sum_{z_1,z_2,z_3}
\probb{A(z_3)}{1}
\probb{\scp_3(z_2,k)}{z_3}
\probb{\scp_2^{\alpha}(z_1,k)}{z_2}
\probb{\scp_1(z,k)}{z_1}
\]
so there exists a $z_1$ such that
\begin{eqnarray*}
\mid
\sum_{z_2,z_3} (
\probb{A(z_3)}{1}
\probb{\scp_3(z_2,k)}{z_3}
\probb{\scp_2^{\alpha}(z_1,k)}{z_2} & & \\
{} -
\probb{A(z_3)}{1}
\probb{\scp_3(z_2,k)}{z_3}
\probb{\scp_2^{\beta}(z_1,k)}{z_2}
) \mid & & \\
& > & \delta(k).
\end{eqnarray*}
Let $A'$ be the distinguisher that does the following. On input $\sigma,$
sample $z_3 \leftarrow \scp_3(\sigma,k),$ and run $A(z_3).$ Return the
result of $A.$ It follows that
\begin{eqnarray*}
& &
\mid
A'_{\scp_2^{\alpha}(z_1,k)} - A'_{\scp_2^{\beta}(z_1,k)}
\mid
\\
& = &
\mid
\sum_{z_2,z_3} (
\probb{A(z_3)}{1}
\probb{\scp_3(z_2,k)}{z_3}
\probb{\scp_2^{\alpha}(z_1,k)}{z_2}
\\ & & {} -
\probb{A(z_3)}{1}
\probb{\scp_3(z_2,k)}{z_3}
\probb{\scp_2^{\beta}(z_1,k)}{z_2}
) \mid
\\
& > & \delta(k).
\end{eqnarray*}
Since this holds for some $z$ and $z_1,$ and for
infinitely many $k,$ $A'$ actually distinguishes
$\scp_2^{\alpha}$ from $\scp_2^{\beta},$ so $\scp_2^{\alpha}
{\notindistEnC}^{\delta(k)} \scp_2^{\beta}.$
Given a collection of ensemble families and a function $f(n,m,k),$ we may
define a particular ensemble family using the $k^{th}$ distribution from
the $f(n,m,k)^{th}$ family.
The diagonal ensemble family
$\ensa^f$ of a collection of ensemble families
$\enssa=\set{\ensa(0),\ensa(1),\ldots}$ is, for a given function $f: \nat^3
\rightarrow \nat,$ defined by the following:
\[
\ensa^f(n,m)(z,k) = \ensa(f(n,m,k))(n,m)(z,k).
\]
The diagonal protocol family
$\protoa^f$ for a collection of protocol families
$\protosa=\set{\protoa(1),\protoa(2),\ldots}$ is the protocol defined by:
\[
\protoa^f(n,m)\protoIn =
\protoa(f(n,m))(n,m)\protoIn.
\]
The first ensemble family in a sequentially indistinguishable collection is
indistinguishable from the diagonal family to a certain degree:
Let $\enssa=\{\ensa(0),\ensa(1),\ldots \}$ be a uniformly sequentially
$O(\delta(k))$-indistinguishable collection of families of ensembles, and let
$f : \nat^3 \rightarrow \nat.$
Then $\ensa(0) \indistFa^{O(\delta(k) \cdot f(n,m,k))} \ensa^f.$
Let $k_0$ be the convergence parameter for $\enssa.$ Assume by way of
contradiction that $\ensa(0) {\notindistFa}^{\delta(k) \cdot f(n,m,k)}
\ensa^f.$ Then there are $n$ and $m$ such that $\ensa(0)(n,m)
{\notindistEn}^{\delta(k) \cdot f(n,m,k)} \ensa^f(n,m).$ Let $c$ be
arbitrary; there exists a $k \geq k_0$ and a $z$ such that
\[
\abs{\ensa(0)(n,m)(z,k) - \ensa(f(n,m,k))(n,m)(z,k)} >
c \cdot \delta(k)f(n,m,k).
\]
That is,
\[
\abs{
\sum_{i=0}^{f(n,m,k)-1}
\ensa(i)(n,m)(z,k) - \ensa(i+1)(n,m)(z,k)
} > c \cdot \delta(k)f(n,m,k).
\]
So for some $i_0,$ we have $k \geq k_0$ and
\[
\abs{
\ensa(i_0)(n,m)(z,k) - \ensa(i_0+1)(n,m)(z,k)
} > \frac{c \cdot \delta(k)f(n,m,k)}{f(n,m,k)} = c \cdot \delta(k),
\]
contradicting the uniform $\delta(k)$-indistinguishability of $\enssa.$
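The telescoping bound in this proof — the first family lies within $f \cdot \delta$ of the diagonal whenever successive families are $\delta$-close — can be illustrated numerically. The drifting coin-flip family below is an invented example, with statistical distance standing in for the abstract closeness measure.

```python
def stat_distance(p, q):
    """Total variation distance between two finite distributions (dicts)."""
    keys = set(p) | set(q)
    return sum(abs(p.get(s, 0.0) - q.get(s, 0.0)) for s in keys) / 2

delta = 0.01

def ens(i):
    """A(i): heads-probability drifts by exactly delta per step."""
    return {"H": 0.5 + i * delta, "T": 0.5 - i * delta}

f = 10  # number of hybrid steps (the role of f(n,m,k))

# Adjacent families are delta-close ...
assert all(stat_distance(ens(i), ens(i + 1)) <= delta + 1e-12
           for i in range(f))

# ... so the first family is within f * delta of the diagonal A(f).
assert stat_distance(ens(0), ens(f)) <= f * delta + 1e-12
```

The example also shows the bound is tight: the distances add up exactly when the drift is always in the same direction.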
Adapting this proof to computational indistinguishability in a fashion
similar to the proofs of Lemmas <ref> and
<ref>, a similar result holds for computational indistinguishability:
Let $\enssa$ be a uniformly sequentially $O(\delta(k))$-computationally
indistinguishable collection of families of ensembles. Let each ensemble
be polynomially generable. Let $f(n,m,k)$ be a polynomial.
Then $\ensa(0) \indistFaC^{O(\delta(k) \cdot f(n,m,k))} \ensa^f.$
The following lemma summarizes direct consequences of Lemmas
<ref> and <ref>:
Let $\enssa$ be a uniformly sequentially $O(\delta(k))$-indistinguishable
collection of families of ensembles. Let $f(n,m,k)$ be a polynomial. Then the
following hold according to the indistinguishability of $\enssa:$
\[
\begin{tabular}{lcrllcrcl}
perfect & $[(\forall i)$ &
$\ensa(i)$ & $\indistFa$ & $\ensa(i+1)]$ &
$\Rightarrow$ & $\ensa(0)$ & $\indistFa$ & $\ensa^f$ \\
exponential & $[(\forall i)$ &
$\ensa(i)$ & $\indistFa^{O(c^{-k})}$ & $\ensa(i+1)]$ &
$\Rightarrow$ & $\ensa(0)$ & $\indistFaE$ & $\ensa^f$ \\
statistical & $[(\forall i)$ &
$\ensa(i)$ & $\indistFa^{O(k^{-c})}$ & $\ensa(i+1)]$ &
$\Rightarrow$ & $\ensa(0)$ & $\indistFaS$ & $\ensa^f$ \\
computational & $[(\forall i)$ &
$\ensa(i)$ & $\indistFaC^{O(k^{-c})}$ & $\ensa(i+1)]$ &
$\Rightarrow$ & $\ensa(0)$ & $\indistFaC$ & $\ensa^f$
\end{tabular}
\]
We also consider the indistinguishability of two diagonal ensemble families
taken from two pairwise uniformly indistinguishable collections.
Let $\enssa=\{\ensa(i)\}$ and $\enssb=\{\ensb(i)\}$ be uniformly
$O(\delta(i,k))$ pairwise indistinguishable collections of families of
ensembles, and let $f : \nat^3 \rightarrow \nat.$ Then $\hat{\ensa}^f
\indistFa^{O(\delta(f(n,m,k),k))} \hat{\ensb}^f.$
Let $k_0$ be the convergence parameter for $\enssa$ and $\enssb.$ Assume by
way of contradiction that $\hat{\ensa}^f
{\notindistFa}^{O(\delta(f(n,m,k),k))}
\hat{\ensb}^f.$ Then there are $n$ and $m$ such that $\ensa(f(n,m,k))(n,m)
{\notindistEn}^{O(\delta(f(n,m,k),k))} \ensb(f(n,m,k))(n,m).$
Hence for an arbitrary $c,$ there is a $k \geq k_0$ and a $z$ such that
\[
\abs{\ensa(f(n,m,k))(n,m)(z,k) -
\ensb(f(n,m,k))(n,m)(z,k)} > c \cdot \delta(f(n,m,k),k)
\]
contradicting the uniform pairwise $O(\delta(i,k))$ indistinguishability
of $\enssa$ and $\enssb.$
As before, the proof is adaptable to computational
indistinguishability, giving:
Let $\enssa=\{\ensa(i)\}$ and $\enssb=\{\ensb(i)\}$ be uniformly
$O(\delta(i,k))$ pairwise computationally indistinguishable collections of
families of ensembles, and let $f$ be a polynomial. Then $\hat{\ensa}^f
\indistFaC^{O(\delta(f(n,m,k),k))} \hat{\ensb}^f.$
§ FORMAL PROOFS
Generally, an interface creates a simulated environment for the
adversary. Consider the collection of random variables describing the
views of all the players generated during a run of a given protocol:
\[
\set{\view_1^1,\ldots,\view_n^1,\view_A^1;
\view_1^2,\ldots,\view_n^2,\view_A^2;\ldots;
\view_1^R,\ldots,\view_n^R,\view_A^R}.
\]
With static adversaries, the interface need only keep track of a fixed
subset of these views. Interfacing becomes more difficult when dynamic
adversaries are considered: even though the variables are dependent on one
another, the interface will not be able to specify in advance all of the
necessary views. The interface must be able to answer two types of queries
from an adversary: requests for rushed messages, and requests for the view
of a newly corrupted player. It computes these incrementally; as the
interaction between $\interface$ and $A$ progresses, $\interface$
fills in some of the views
(corresponding to newly corrupted players) as necessary. For instance, if
player $i$ is corrupted at round $r,$ the interface must sample the values
of the variables $\view_i^1,\view_i^2,\ldots,\view_i^r,$ even though it had
not previously “filled in” these values. To do
this, it must use appropriately chosen conditional distributions.
When a protocol is a composition of several subprotocols, the interface
actually runs sub-simulations $\interface_j$ for each of the intermediate
executions. The simulation is not quite as simple as running
sub-simulations, however: the view an adversary gains when corrupting a new
player $i$ in the third subprotocol, say, includes not only the original
auxiliary input $a_i$ — which is all that $\interface$
would obtain if it requests
the corruption of player $i$ in the ideal protocol — but also the
view $\view_i(1) \circ \view_i(2)$ of player $i$ in the earlier two
subprotocols. In particular, in order to be able to run interface
$\interface_{j+1},$ $\interface$ must be able to generate auxiliary input
$a_i(j),$ the
auxiliary input of player $i$ in the $j^{th}$ subprotocol, when
$\interface_{j+1}$ requests it.
Thus, before we are able to discuss the resilience of concatenating
resilient protocols, we must first consider (1) how to obtain an accurate
final set of views by sampling a subset of the random variables in some
order, and (2) how to ensure that a sub-simulation is not just resilient
but resilient enough to provide views even after the sub-simulation has completed.
§.§ Random Variables and Tableaus
The description of a general protocol execution in the presence of an
adversary (Chapter <ref>, <ref>) and the
particular specification of the protocol describe how to sample the
collection of random variables parametrized by $\rvnames \times [R]$ by
sampling some of them and later applying probabilistic functions to the
results in a specified order. We are concerned on the one hand with the
distributions $\rv(y_1,R),\rv(y_2,R),\ldots,\rv(y_n,R),\rv(y_A,R),$ which
make up the ensemble $\realy(\vec{x} \circ \vec{a} \circ a,k)$ generated by
running protocol $\realpf(n,m,k)(\vec{x},\vec{a},A(a)).$ On the other hand
are the distributions
$\rv_{ideal}(y_1,R),\rv_{ideal}(y_2,R),\ldots,\rv_{ideal}(y_n,R),
\rv_{ideal}(y_A,R),$ which make up the ensemble $\idealy(\vec{x} \circ
\vec{a} \circ a,k)$ generated by running protocol
$\idealpf(n,m,k)(\vec{x},\vec{a},S(A(a),\cdot)).$ We must eventually show
that the interaction of interface with black-box adversary and ideal
protocol produces the same ensembles, even though the process of sampling
various distributions ($\set{\rv_{ideal}(v,r)}$) leading up to these
final ones is different than in the real protocol (i.e., local
variables of reliable players are never assigned, and distributions
are sampled in a different order).
Let $\set{X_v}_{v \in V}$ be a collection of (joint) distributions on some
set $X$ parametrized by $V \times {\bf N}.$ An assignment tableau
$\psi$ on $V$ is a function
\[
\psi : V \times {\bf N} \rightarrow \set{\bot} \cup X
\]
such that
\[
(\forall v \in V)(\forall r) \psi(v,r) \not= \bot \Rightarrow
\psi(v,r+1) = \psi(v,r).
\]
The meaning of $\psi(v,r)$ is the following. If $\psi(v,r)=x,$ then
distribution $X_v$ has been sampled by round $r,$ with result $x.$ If
$\psi(v,r) = \bot$ then distribution $X_v$ has not yet been sampled.
Then a real protocol $\realpf$ induces a tableau $\tableau$ on $\rvnames$
such that $(\forall v \in \rvnames) \tableau(v,R) \not=
\bot.$
That is, all distributions are sampled. The specification of protocol
execution in the presence of an adversary (Chapter <ref>,
<ref>) describes exactly the particular sequence of
experiments and random variables that are analogous
to the distribution $P$ described above. On the other hand, the tableau
$\psi_{ideal}$ on $\rvnames$ computed by an interface is not completely
instantiated, but it should be the case that the output distributions are
sampled: $(\forall i) \psi_{ideal}(y_i,R) \not= \bot$ and
$\psi_{ideal}(y_A,R) \not= \bot.$
To illustrate the principle behind our arguments, let us consider two
experiments. Let $X_0$ be a distribution on some set $S,$ and let $\chi_1$
and $\chi_2$ be functions on $S$ such that $x_0 = (\chi_1(x_0),\chi_2(x_0))$
for all $x_0 \in S.$ Let $f_3$ and $f_4$ be probabilistic functions.
The first experiment generates the following distribution:
\[
P = \set{ x_0 \leftarrow X_0; x_1 \leftarrow \chi_1(x_0);
x_2 \leftarrow \chi_2(x_0); x_3 \leftarrow f_3(x_2);
x_4 \leftarrow f_4(x_1,x_3) : x_4}.
\]
At the point at which $x_3$ is computed, the value of $x_1$ does not
matter; in a sense, though it has been specified, it is a hidden variable.
It comes into play later, when $x_4$ is computed. Intuitively, this
experiment is analogous to the following sequence in a real protocol: a
reliable player computes its transition function, specifying hidden
information (new state and messages to reliable players) and compromised
information (messages known to the adversary). Later, the adversary
decides to corrupt the player, and obtains the values of the hidden
variables, which it uses in later computations.
Now define the following distributions:
\[
X_1 = \set{ x_0 \leftarrow X_0: \chi_1(x_0) }
\]
\[
X_2 = \set{ x_0 \leftarrow X_0: \chi_2(x_0) }
\]
\[
(X_1 \mid x_2) = \set{ x_0 \leftarrow X_0: \chi_1(x_0) \mid \chi_2(x_0)=x_2 }
\]
The second experiment is the following:
\[
Q = \set{ x_2 \leftarrow X_2; x_3 \leftarrow f_3(x_2);
x_1 \leftarrow (X_1 \mid x_2); x_4 \leftarrow f_4(x_1,x_3) : x_4 }.
\]
In a sense, $x_1$ is not computed until after $x_3$ has been computed.
This is analogous to the operation of an interface: first the interface
computes the variable $X_2$ that is initially “known” to the adversary,
and it does not attempt to specify the variable $X_1.$ Later, the adversary
corrupts a new player, and requests the value of $x_0$ (that is, of $x_1$
and $x_2$). The interface must now sample $X_1$ given the information it
has already committed to the adversary, namely given $x_2.$
It is not hard to see that distributions $P$ and $Q$ are identical.
The essential principle to note is that the order of sampling the
distributions is irrelevant as long as the appropriate conditional
distributions are used.
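The identity of $P$ and $Q$ can be verified by exact enumeration. The concrete $X_0,$ $f_3,$ and $f_4$ below are illustrative choices, not from the text; the point is that sampling $x_1$ lazily from the conditional distribution $(X_1 \mid x_2)$ yields exactly the same output distribution as sampling it eagerly.

```python
from collections import defaultdict

# X0: an illustrative joint distribution on pairs (x1, x2),
# so chi_1 and chi_2 are the two projections.
X0 = {(0, 0): 0.1, (0, 1): 0.4, (1, 0): 0.3, (1, 1): 0.2}
f3 = lambda x2: {x2: 0.5, x2 + 1: 0.5}   # probabilistic function
f4 = lambda x1, x3: {x1 + x3: 1.0}       # deterministic here, for brevity

def experiment_P():
    """Eager sampling: x0 <- X0; x3 <- f3(x2); x4 <- f4(x1, x3)."""
    out = defaultdict(float)
    for (x1, x2), w0 in X0.items():
        for x3, w3 in f3(x2).items():
            for x4, w4 in f4(x1, x3).items():
                out[x4] += w0 * w3 * w4
    return dict(out)

def experiment_Q():
    """Lazy sampling: x2 <- X2; x3 <- f3(x2); x1 <- (X1 | x2); x4 <- f4(x1, x3)."""
    X2 = defaultdict(float)
    for (x1, x2), w in X0.items():
        X2[x2] += w
    cond = lambda x2: {x1: w / X2[x2]
                       for (x1, z2), w in X0.items() if z2 == x2}
    out = defaultdict(float)
    for x2, w2 in X2.items():
        for x3, w3 in f3(x2).items():
            for x1, w1 in cond(x2).items():
                for x4, w4 in f4(x1, x3).items():
                    out[x4] += w2 * w3 * w1 * w4
    return dict(out)

P, Q = experiment_P(), experiment_Q()
assert set(P) == set(Q)
assert all(abs(P[x] - Q[x]) < 1e-12 for x in P)
```

`experiment_Q` is the interface's strategy in miniature: commit to the compromised value $x_2$ first, and fill in the hidden value $x_1$ only when requested, from the conditional distribution.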
The task of proving privacy reduces to showing that the interface can
gradually fill in the necessary portions of a simulated tableau by sampling
the appropriate conditional distributions $(X_1 \mid x_2)$ given the
random variables ($x_2$) which it has already sampled and committed to $A,$
by using the information it has at hand (inputs obtained by
corruptions in the ideal protocol).
§.§ Post-Protocol Corruption
For proofs involving dynamic adversaries, simulating the view of a newly
corrupted player is essential. Certainly, during the run of a protocol, an
interface must be able to compute such a view accurately. In the case of
the sequential concatenation of protocols, however, the view of a reliable
player includes its view during previous protocols. The task becomes more
difficult; despite the fact that the simulation of the earlier subprotocol
has already occurred, the earlier view must be generated so that it can be
included in the overall view of the newly corrupted player. We therefore
consider an additional requirement on the interface, to facilitate proofs of
resilience. This requirement is necessarily satisfied in the case of perfect and exponential resilience, in the case of static
adversaries, or trivially in memoryless protocols.
Consider an execution of a protocol $\protob$ with $\interface(A,\cdot),$
giving distribution $\idealy\protoIn.$ At the
conclusion of the protocol, $\interface(A(a),\cdot)$ has corrupted
some coalition
$T.$ It may be the case that $\abs{T} < t,$ in which case $A(a)$
would have been permitted $(t-\abs{T})$ more corruptions. The interface
$\interface$ can continue to run, since it simply receives requests for new
corruptions. There is, in general, no guarantee that if it is supplied
with additional requests after $A(a)$ has halted, the views it
returns are accurate in any fashion. Post-protocol corruptibility
ensures that
the interface does have this power, namely that even after $A(a)$ has
halted, producing its output $Y_A,$ the interface can accurately output
$\view_i$ for other players.
Let $T'$ be a set of players. When $\interface$ continues to run after
$A(a)$ has
halted and is supplied in turn with each $i \in T'$ written to its input
tape as a request for a new corruption, it returns the views
$\set{\view_i \mid i \in T'},$ inducing the ensemble of distributions
\[
\protob^{\view_{T'}}\protoIn.
\]
An interface $\interface(\cdot,\cdot)$ is
post-protocol compatible
with respect to protocols $\protoa$ and $\protob$
if for any adversary $A,$ for any execution of
$\interface(A(a),\cdot),$ and for any $(t-\abs{T})$-coalition
$T' \subseteq [n]-T$
where $T$ is the final coalition selected by adversary
$A(a),$ the following ensembles are indistinguishable:
\begin{eqnarray*}
\protoa\protoIn
& \indistEn &
\protob\protoIn
\\
\protoa^{\view_{T'}}\protoIn
& \indistEn &
\protob^{\view_{T'}}\protoIn
\end{eqnarray*}
A protocol $\protoa$ is post-protocol corruptible
with respect to $\protob$
if it admits a post-protocol compatible interface $\interface.$
Memoryless protocols are trivially post-protocol corruptible, since the views
are erased. Protocols secure against static adversaries are also
post-protocol corruptible, because the set of corrupted players is
initially maximal so
that $T'=\emptyset$ always.
A protocol $\protoa$ that is perfectly as resilient as protocol $\protob$
(e.g. a real protocol $\realpf$ that is perfectly resilient) against
dynamic adversaries is of necessity post-protocol corruptible by the following
argument. For any adversary $A,$ let $A'$ be the adversary that operates
$A$ until the final round and then does the following. If the coalition
$T$ has size less than $t,$ then $A'$ chooses
$T'=\set{\sigma_1,\dots,\sigma_{\tau}}$ uniformly at random from all
subsets of $[n]-T$ of size $(t-\abs{T})$ or less. Then $A'$ requests the
corruption of the players in $T'.$ It follows that for all $T',$
$\view_{\sigma_1},\dots,\view_{\sigma_{\tau}},$ and $\view_A,$ the
resilience of $\protoa$ against $A'$ implies that
\begin{eqnarray*}
\probover{\protoa}{T',\view_{T'},\view_A}
& = &
\probover{\protob}{T',\view_{T'},\view_A}
\\
\probover{\protoa}{T'}
\probover{\protoa}{\view_{T'}\mid\view_A}
\probover{\protoa}{\view_A}
& = &
\probover{\protob}{T'}
\probover{\protob}{\view_{T'}\mid\view_A}
\probover{\protob}{\view_A}
\end{eqnarray*}
Summing over all possible views of $A,$
\begin{eqnarray*}
\sum_{\view_A}
\probover{\protoa}{\view_{T'}\mid\view_A}
\probover{\protoa}{\view_A}
& = &
\sum_{\view_A}
\probover{\protob}{\view_{T'}\mid\view_A}
\probover{\protob}{\view_A}
\\
\probover{\protoa}{\view_{T'}}
& = &
\probover{\protob}{\view_{T'}}
\end{eqnarray*}
Similar arguments show that any protocol that is exponentially or
statistically resilient against a dynamic adversary also must be
post-protocol corruptible. If we require that the
convergence parameter is bounded by a
polynomial in $n$ and $m,$ i.e. that security is achieved when $k$ is
of size polynomial in $n$ and $m,$ then any protocol that is exponentially
resilient must be post-protocol corruptible.
§ RELATIVELY RESILIENT PROTOCOLS AND CONCATENATION
We observe that in a sequence of protocols, each as resilient as its
successor (to within $\delta(k)$), the first protocol is as resilient (to
within $\delta(k)\cdot f(n,m,k)$) as the particular protocol $\protoa^f$
constructed by using the $f(n,m,k)^{th}$ protocol from each family.
Let $\protosa=\set{\protoa(0),\protoa(1),\ldots}$ be a uniformly
sequentially $O(\delta(k))$-relatively resilient collection of protocol
families. Let $f:\nat^3 \rightarrow \nat.$ Then $\protoa(0)
\resilasFa^{O(\delta(k)f(n,m,k))} \protoa^f.$
Let $A$ be an adversary for protocol $\protoa(0)$ and let
$\interface^1,\interface^2,\ldots$ be the interfaces postulated by the
sequential resilience of the collections, with $\interface^j(A,a)$
denoting the nested interface. Let
$\ensa(j)(n,m)(\vec{x} \circ \vec{a} \circ a,k)$ be the ensemble generated
by running $\protoa(j)(n,m,k)(\vec{x},\vec{a},A(a)).$ Because $\protosa$ is
uniformly sequentially $O(\delta(k))$ resilient, the collection
$\enssa=\set{\ensa(0),\ensa(1),\ldots}$ is uniformly sequentially
$O(\delta(k))$ indistinguishable. Construct an interface
$\interface$ that runs
$\interface^{f(n,m,k)}$ when given security parameter $k,$ and note that
$\ensa^f(n,m)(\vec{x} \circ \vec{a} \circ a,k)$ is the ensemble generated
by running protocol $\protoa^f(n,m,k)(\vec{x},\vec{a},A(a)).$
Lemma <ref> implies:
\[
\ensa(0) \indistFa^{\delta(k)\cdot f(n,m,k)} \ensa^f.
\]
The corresponding result holds for computational resilience:
Let $\protosa=\set{\protoa(0),\protoa(1),\ldots}$ be a uniformly
sequentially $O(\delta(k))$ computationally relatively resilient collection of
protocol families. Let $f(n,m,k)$ be a polynomial. Then, computationally,
$\protoa(0) \resilasFa^{O(\delta(k)f(n,m,k))} \protoa^f.$
The transitivity
of $\resilasFa$ falls out as an immediate corollary,
recalling that a finite sequentially resilient collection of
protocol families is uniformly sequentially resilient.
Let $P_1, P_2,$ and $P_3$ be protocol families.
If $P_1 \resilasFa P_2$ and $P_2 \resilasFa P_3,$
then $P_1 \resilasFa P_3.$
Similarly, if $P_1 \resilasFa^{O(\delta(k))} P_2$ and $P_2 \resilasFa^{O(\delta(k))} P_3,$
then $P_1 \resilasFa^{O(\delta(k))} P_3.$
Let $\protosa=\set{\protoa(i)}$ and $\protosb=\set{\protob(i)}$ be two
collections of protocol families. Consider the $j$-fold concatenation of
protocols from each: $\hat{\protoa}(j) = \protoconc_{i=1}^j \protoa(i)$
and $\hat{\protob}(j) = \protoconc_{i=1}^j \protob(i).$
(Protocol Concatenation) Fix $j.$ If $\protosa \resilasCo^{O(\delta(k))}
\protosb$ with post-protocol compatible interfaces and with convergence
parameter $k_0,$ then $\hat{\protoa}(j) \resilasFa^{O(j \cdot \delta(k))}
\hat{\protob}(j)$ with convergence parameter $k_0.$
We gradually replace $\protoa$'s by $\protob$'s. Define the hybrid
protocol family $\hybrids=\set{\hybrid(0),\hybrid(1),\ldots}$ by:
\begin{eqnarray*}
\hybunit(i,l)(n,m)
& = &
\left\{
\begin{tabular}{ll}
$\protob(i)(n,m)$ & $i<l$ \\
$\protoa(i)(n,m)$ & $i \geq l$
\end{tabular}
\right.
\\
\hybrid(l)(n,m)
& = &
\protoconc_{i=1}^j
\hybunit(i,l)(n,m) \\
& = &
\hybunit(j,l)(n,m) \protoconc
\hybunit(j-1,l)(n,m) \protoconc
\cdots \protoconc
\hybunit(1,l)(n,m)
\end{eqnarray*}
It suffices to show that $\hybrids$ is uniformly sequentially
$O(\delta(k))$ relatively resilient with convergence parameter $k_0,$ since
Lemma <ref> with $f(n,m,k)=j$ implies that
\[
\hat{\protoa}(j) = \hybrid(0) \resilasFa^{O(\delta(k)\cdot f(n,m,k))}
\hybrid^f = \hat{\protob}(j).
\]
Assume otherwise; then for some $n,m,$ and $l,$
\[
\hybrid(l)(n,m) {\notresilas}^{O(\delta(k))}
\hybrid(l+1)(n,m).
\]
In particular, for all $c$ there are $\vec{x},\vec{a},A,a,$ and $k \geq
k_0$ such that
\begin{eqnarray} \label{eqn-big-diff}
\abs{
\hybrid(l)(n,m)\protoIn -
\hybrid(l+1)(n,m)\protoIn}
> c \cdot \delta(k)
\end{eqnarray}
Let $\interface_1,\ldots,\interface_j$ be the
post-protocol compatible
interfaces for the $j$
subprotocols. Because the interfaces are post-protocol corruptible, we may
treat the sequential execution of the subprotocols in $\hybrid(l)$ or in
$\hybrid(l+1)$ as a sequence of three probabilistic experiments,
$(\scp_1,\scp_2^{\alpha},\scp_3)$ or $(\scp_1,\scp_2^{\beta},\scp_3),$ where
\begin{eqnarray*}
\scp_1
& = &
\protoa(l-1)(n,m)(\vec{x}(l-1) \circ \vec{a}(l-1) \circ a(l-1),k)
\circ \cdots
\\
& &
\circ
\protoa(1)(n,m)(\vec{x} \circ \vec{a} \circ a,k)
\\
\scp_2^{\alpha}
& = &
\protoa(l)(n,m)(\vec{x}(l) \circ \vec{a}(l) \circ a(l),k)
\\
\scp_2^{\beta}
& = &
\protob(l)(n,m)(\vec{x}(l) \circ \vec{a}(l) \circ a(l),k)
\\
\scp_3
& = &
\protoa(j)(n,m)(\vec{x}(j) \circ \vec{a}(j) \circ a(j),k)
\circ \cdots
\\
& &
\circ
\protoa(l+1)(n,m)(\vec{x}(l+1) \circ \vec{a}(l+1) \circ a(l+1),k)
\end{eqnarray*}
By assumption, $\scp_2^{\alpha} \indistEn^{O(\delta(k))} \scp_2^{\beta}$
with convergence parameter $k_0.$ By Lemma <ref>,
$\scq^{\alpha} \indistEn^{O(\delta(k))} \scq^{\beta}$ with parameter $k_0.$ But
$\scq^{\alpha} = \hybrid(l)(n,m)(\vec{x}\circ \vec{a} \circ a,k)$ and
$\scq^{\beta} = \hybrid(l+1)(n,m)(\vec{x}\circ \vec{a} \circ a,k).$ This
contradicts (<ref>).
We are now ready to measure the resilience achieved by concatenating
a growing number of subprotocols.
(Protocol Concatenation)
Let $f: \nat^3 \rightarrow \nat.$
If $\protosa \resilasCo^{\delta(i,k)}
\protosb$ with post-protocol compatible interfaces and with convergence
parameter $k_0,$ then concatenating $f(n,m,k)$ protocols from each
\[
\protoconc_{i=1}^f \protoa(i)
\resilasFa^{O(\delta(f(n,m,k),k) \cdot f(n,m,k))}
\protoconc_{i=1}^f \protob(i).
\]
In particular,
\[
\begin{tabular}{rllcrlllr}
$\protosa$ & $\resilasCo$ & $\protosb$ &
$\Rightarrow$ &
$\protoconc \protosa$ & $\resilasFa$ & $\protoconc \protosb$ &
\hspace{0.2in}(1)
\\
$\protosa$ & $\resilasCoE$ & $\protosb$ &
$\Rightarrow$ &
$\protoconc \protosa$ & $\resilasFaE$ & $\protoconc \protosb$ &
(if $f(n,m,k)=O(k^c)$) &
\hspace{0.2in}(2)
\\
$\protosa$ & $\resilasCoS$ & $\protosb$ &
$\Rightarrow$ &
$\protoconc \protosa$ & $\resilasFaS$ & $\protoconc \protosb$ &
(if $f(n,m,k)=O(k^c)$) &
\hspace{0.2in}(3)
\\
$\protosa$ & $\resilasCoC$ & $\protosb$ &
$\Rightarrow$ &
$\protoconc \protosa$ & $\resilasFaC$ & $\protoconc \protosb$ &
(if $f(n,m,k)=O(k^c)$) &
\hspace{0.2in}(4)
\end{tabular}
\]
We define two collections of protocol families as follows. The first is
$\linconcsa=\set{\linconca(1),\linconca(2),\ldots},$ the concatenation of
linearly many protocols from $\protosa:$
\[
\linconca(j)(n,m,k) = \protoconc_{i=1}^j \protoa(i)(n,m).
\]
The second is defined similarly:
\[
\linconcb(j)(n,m,k) = \protoconc_{i=1}^j \protob(i)(n,m).
\]
Assume $\protosa \resilasCo^{O(\delta(i,k))} \protosb$ pairwise uniformly with
convergence parameter $k_0.$ By Lemma <ref>, $\protosa
\resilasCo^{\delta(i,k)} \protosb$ implies that, for each $i,$ $\linconca(i)
\resilasFa^{i \delta(i,k)} \linconcb(i)$ with convergence parameter $k_0.$
It follows that $\linconcsa \resilasCo^{i\delta(i,k)} \linconcsb,$ pairwise
uniformly with convergence parameter $k_0.$ By
Lemma <ref>, the diagonal protocol families $\hat{\linconca}^f$
and $\hat{\linconcb}^f$ satisfy
\[
\hat{\linconca}^f
\resilasFa^{O(f(n,m,k) \cdot \delta(f(n,m,k),k))}
\hat{\linconcb}^f.
\]
where
\begin{eqnarray*}
\hat{\linconca}^f & = & \protoconc_{i=1}^{f(n,m,k)} \protoa(i) \\
\hat{\linconcb}^f & = & \protoconc_{i=1}^{f(n,m,k)} \protob(i)
\end{eqnarray*}
It follows that
\[
\protoconc_{i=1}^{f(n,m,k)} \protoa(i)
\resilasFa^{O(f(n,m,k)\cdot\delta(f(n,m,k),k))}
\protoconc_{i=1}^{f(n,m,k)} \protob(i).
\]
Implication (1) follows directly; implications (2), (3), and (4) are
straightforward. For the proof of (4), we invoke the computational
versions of the corresponding supporting lemmas.
If intermediate functions are private, then computing their open
composition is as secure as computing their closed composition:
(Open Function Composition)
Let $f:\nat^3\rightarrow\nat.$
Let $\scf = \set{F^i}$ be a collection of function families
where $F^{1,n,m},\ldots,F^{f(n,m,k)-1,n,m}$ are private.
Let $F=\circ \scf.$
Then the ideal open composite protocol $\idoc$ for $\scf$
is as resilient as the ideal protocol $\idealy$ for $F:$
\[
\idoc \resilasFa \idhc \resilasFa \idealy.
\]
If $f(n,m,k)$ is polynomial, then this holds for exponential, statistical,
and computational relative resilience.
Since each $F^{1,n,m},\ldots,F^{f(n,m,k)-1,n,m}$ is private, each is
as resilient as the vacuous protocol. So for each intermediate function
there exists a protocol $\idealpf(F^j)$ and an interface
$\interface_j$ showing
$\idealpf(F^j)$ is as resilient as the ideal vacuous protocol. Recall that
the ideal open composite protocol runs for $f(n,m,k)$ rounds, accepting an
initial set of inputs $x_1^1,\ldots,x_n^1,$ returning the value of
$F^r\circ \cdots \circ F^1(x_1^1,\ldots,x_n^1)$ at each round $r.$ The
ideal hidden composite protocol returns only the vectors of identities of
“cheating” players (players not supplying a valid input) and returns the
final result.
The interface $\interface$ uses each $\interface_j$ to create the needed
function values $F_j\circ\cdots\circ F_1$ that are supplied to corrupted
players. That is, at the first round, $\interface$ collects requests for
corruptions from $A,$ supplies them to $\interface_1,$ collects requests for
corruptions from $\interface_1,$ requests those corruptions itself,
and returns the
results along the same paths. When $A$ and $\interface_1$ are done
requesting corruptions, $\interface_1$ supplies messages from corrupted
players (either 0 or 1, depending on whether the player decides to cheat
detectably), and $\interface$ sends them in
the hidden protocol, and $\interface$ obtains the vacuous output from the
host in round 1 of the hidden protocol. Interface $\interface$ then
supplies the
vacuous output to $\interface_1,$ who may request more corruptions but
computes the values of $F_1$ to be sent to corrupted players. When
$\interface_1$ is finished, $\interface$ runs $\interface_2$ as before; if $\interface_2$
corrupts a player, $\interface$ must make a post-protocol corruption
request to $\interface_1$ to produce the view of that player
during round 1.
When $\interface_2$ is finished corrupting players, $\interface$ requests
the vacuous output
of round $2$ of the hidden protocol and supplies it to $\interface_2.$
This continues for $f(n,m,k)-1$ steps. At the last step, $\interface$
accepts requests for corruptions directly from $A$ and supplies responses as before.
When $A$ requests the $f(n,m,k)^{th}$ output from the open protocol, namely
the value of $F^{f(n,m,k)} \circ\cdots\circ F^1,$ $\interface$ requests it
in the
hidden protocol and returns the result.
The second statement, $\idhc \resilasFa \idealy,$ is easy to see by noting
that an interface itself can compute the vacuous messages returned to $A$
for each round up to the penultimate one, based on the messages sent to it
by $A;$ its only interaction in protocol $\idealy$ is to request original
inputs of corrupted players. In the final round, $\interface$
obtains the results
for corrupted players in $\idealy$ and returns them to $A.$
Concatenating ideal protocols for robust functions is as resilient as a single
ideal composite protocol that computes their open composition — this is
the motivation for robustness:
(Ideal Protocol Concatenation)
Let $f(n,m,k)$ be a polynomial.
Let $\scf = \set{F^i}$ be a collection of function families
where $F^{1,n,m},\ldots,F^{f(n,m,k)-1,n,m}$ are robust.
Then the concatenation of ideal protocols for $\scf$
is as resilient as the ideal open composite protocol for $\scf:$
\[
\ocip \resilasFa \idoc
\]
Protocol $\idealpf(\opencomp \scf)$ lasts $f(n,m,k)$ rounds. For the first
$f(n,m,k)-1$ rounds, whenever adversary $A$ requests the corruption of
player $i,$ interface $\interface$ corrupts player $i$ in protocol
$\idealpf(\closedcomp \scf)$ and returns the original input, along with the
list of outputs generated thus far. The output of each round is, as in the
vacuous protocol, a list of $n$ bits which are 0 for every uncorrupted
player and are 0 or 1 for corrupted players depending on whether the
adversary forced them to send a 0 message or not. (Note that the vacuous
output is the same for all players; $\interface$ simply records what it has
generated at each round of the simulation in order to include it in the
information of later corruptions.)
At round $f(n,m,k),$ however, when the adversary generates its last set of
messages for players in the $\idealpf(\opencomp \scf)$ protocol, the
interface $\interface$ sends the vector of messages it received for corrupted
players in the first round of the $\idealpf(\opencomp \scf)$ protocol:
these are the inputs upon which the computation of $F^{f(n,m,k)}$ is based.
Interface $\interface$ receives a message from the trusted host in protocol
$\idealpf(\closedcomp \scf)$ that contains the outputs for corrupted
players, and it relays this message to $A.$ That the final messages are
identically distributed in $\idealpf(\opencomp \scf)$ and
$\idealpf(\closedcomp \scf)$ is ensured by the robustness of each
intermediate result.
Let $\gensha$ be a protocol that implements a robust and private
representation $\sha,$ and let $\genrec$ be a protocol that reconstructs the
represented value according to the $\rec$ function. The robust and secret
version of an arbitrary function $H$ is the function:
\[
\robsec(H) = \sha \opencomp H \opencomp \rec
\]
We have,
\[
\rec \opencomp \robsec(H) \opencomp \sha = H
\]
or more generally, for a collection of families $\set{H^j},$
\begin{eqnarray}
\label{eqn-compose-hide}
\rec \circ ( \circ_j \robsec(H^j) ) \circ \sha = \circ_j H^j
\end{eqnarray}
The protocols in [71, 28, 39], for example, are of the form
\[
\genrec \circ (\circ_j \Pi(\robsec(G^j))) \circ \gensha,
\]
where each $G^j$ is an arithmetic or Boolean function (a set of gates on
one level of a circuit). In particular, the players share the inputs,
evaluate functions $G^1,\ldots,G^{f(n,m,k)}$ on them, and reveal only the final
result. Our protocols will be of a somewhat more general form, computing
several intermediate functions whose values are revealed on the
path to computing $F:$
\[
\circ_i (\genrec \circ (\circ_j \Pi(\robsec(G^{ij}))) \circ \gensha).
\]
The following theorem demonstrates the resilience of our modular approach:
(Modular Protocol Construction) Let $p(n,m,k)=O((nmk)^c)$ and
$q(n,m,k)=O((nmk)^c)$ for some $c.$
Let $\set{\scg^i}$ be a set of collections of function
families, where $\scg^i=\set{G^{i,j}},$ and $G^{i,j}=\set{G^{i,j,n,m}}.$
Let $F^i = \circ \scg^i$ be the composition of $p(n,m,k)$ functions from
$\scg^i.$ Let $\scf = \set{F^i},$ and let $F=\circ \scf$ be the composition
of $q(n,m,k)$ functions from $\scf.$
If each $F^{1,n,m},\ldots,F^{q(n,m,k)-1,n,m}$ is private and robust, and if
there exist $t$-resilient protocols $\Pi(\robsec(G^{ij}))$ for each $i,j,$
\[
\protoconc_{i=1}^q (\genrec \protoconc
(\protoconc_{j=1}^p \Pi(\robsec(G^{ij}))) \protoconc \gensha)
\resilasFa
\idealpf(F)
\]
In other words, the concatenation of the $t$-resilient protocols as
described above is a $t$-resilient protocol for $F.$ This is also true of
exponential, statistical, and computational resilience.
First let us examine the resilience of computing each $F^i$ by
concatenating the ideal protocols for the $G^{ij}$ functions:
\begin{eqnarray*}
\genrec \circ [ \circ_j \idealname(\robsec(G^{ij})) ] \circ \gensha
& \resilasFa &
\genrec \circ \idealname( \opencomp_j \robsec(G^{ij})) \circ \gensha \\
& \resilasFa &
\genrec \circ \idealname( \closedcomp_j \robsec(G^{ij})) \circ \gensha \\
& \resilasFa &
\idealname( \recons \closedcomp [ \closedcomp_j \robsec(G^{ij}) ]
\closedcomp \sha) \\
& \resilasFa &
\idealname( \closedcomp_j G^{ij} ) \\
& = &
\idealname( F^i )
\end{eqnarray*}
At this stage, we have demonstrated enough to support the methods of
[28, 39]. We wish to concatenate protocols that do reveal
(restricted) intermediate values:
\begin{eqnarray*}
\circ_i (\genrec \circ [ \circ_j \idealname(\robsec(G^{ij})) ] \circ \gensha)
& \resilasFa &
\circ_i \idealname( F^i ) \\
& \resilasFa &
\idealname( \closedcomp_i F^i )
\end{eqnarray*}
The theorem stipulates the existence of resilient protocols
$\Pi(\robsec(G^{ij}))$ for each $i,j.$ That is,
\[
\Pi(\robsec(G^{ij}))
\resilasFa
\idealname(\robsec(G^{ij}))
\]
By Protocol Concatenation,
\begin{eqnarray*}
\circ_i (\genrec \circ [ \circ_j \Pi(\robsec(G^{ij})) ] \circ \gensha)
& \resilasFa &
\circ_i (\genrec \circ [ \circ_j \idealname(\robsec(G^{ij})) ] \circ \gensha),
\end{eqnarray*}
and the theorem follows by transitivity of $\resilasFa.$
PART: MULTIPARTY PROTOCOLS
CHAPTER: EFFICIENT, UNCONDITIONALLY SECURE MULTIPARTY PROTOCOLS
They developed a tendency to shirk every movement that didn't seem
absolutely necessary or called for efforts that seemed too
great to be worthwhile.
Thus these men were led to break, oftener and oftener, the rules of hygiene
they themselves had instituted, to omit some of the numerous disinfections they
should have practiced, and sometimes to visit the homes of people
suffering from pneumonic plague without taking steps to safeguard themselves
against infection, because they had been notified only at the last
moment and could not be bothered with returning to a sanitary service station,
sometimes a considerable distance away, to have the necessary injections.
There lay the real danger; for the energy
they devoted to fighting the disease made them all the more liable to it.
Albert Camus, The Plague
Ben-Or, Goldwasser, and Wigderson, and Chaum, Crépeau, and Damgård
[28, 39] show how, given private communication channels and a
broadcast channel,
[The requirement of a broadcast channel can be removed at the cost
of a constant factor increase in the number of rounds, using a Byzantine
Agreement protocol of Feldman and Micali [60]; the communications
during the protocol are independent of all values except the broadcast
value, so that the protocol maintains privacy and resilience.]
a network can evaluate a circuit $C_F$ for a function
$F(x_1,\dots,x_n)$ in the presence of $t$ Byzantine faults, as long as $t <
\frac{n}{3}.$ No complexity-theoretic assumptions are necessary.
Their construction requires a number of rounds of interaction proportional
to the depth of $C_F$ and messages which grow with its size. In fact,
since most early protocols for securely computing a function $F$ were based
on evaluating a circuit for $F$ gate by gate, this led many researchers to
conjecture that the circuit depth of $F$ was a lower bound on the number of
rounds of interaction, and the circuit size a lower bound on the message
complexity.
We prove to the contrary that the communication complexity of securely
computing $F$ need not be related to the computational (circuit) complexity
of $F.$ First, we show that the number of rounds of interaction can be
reduced by a logarithmic factor while maintaining small message sizes. The
number of rounds required for any function in the broad class $NC^1$ is
constant. In fact, the number of rounds for any function is
reducible to a constant, though at the price of larger messages (see
<ref>) or lower fault tolerance (see
Chapter <ref>).
Our protocols are simple to implement: they employ matrix multiplication,
polynomial evaluation, and polynomial interpolation as their most
complicated subroutines. All of the local computations are fast.
The main result of this chapter is a protocol to evaluate a
function in a constant number of rounds using a technique
that avoids gate-by-gate simulation.
(see <ref>–<ref>)
Before presenting our results,
we must first describe the threshold schemes that underlie most of the
protocols in this dissertation. By recombining values that have been
distributed according to the threshold scheme, new, robustly shared results
are constructed. We give a method, similar to yet more efficient than that
of [28, 39], for evaluating the AND, OR, and NOT gates of a
circuit, representing the results using the threshold scheme. Because
proofs and formal specifications of the protocols in [28, 39]
have not appeared, we must provide proofs of the underlying methods as
well. We present a general paradigm for modular protocol design and prove
it robust.
Assumptions made in this chapter. The network is complete,
with private lines, $n$ processors,
and at most $t < \frac{n}{3}$ Byzantine faults, chosen dynamically.
The protocols are information-theoretically secure,
or in other words, the results are perfectly resilient.
§ THRESHOLD SCHEMES
A particularly efficient and simple threshold scheme is presented by Shamir
[114]; we refer to it as secret sharing.
Fix a finite field
$E$ of size greater than $n,$ the number of players. For example, the set
${\bf Z}_p$ of integers taken modulo $p$ will do, where $p$ is a prime
number in the range $n < p < 2n.$ Arithmetic over ${\bf Z}_p$ is extremely
easy to compute. Another convenient field to use, though slightly less
intuitive to implement, is $\gf(2^n),$ which has the often useful property
that $x+x=0$ for any $x.$
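Concretely, such a modulus can be found by brute force. The sketch below is ours (the names `is_prime` and `field_modulus` do not appear in the protocols); Bertrand's postulate guarantees the search succeeds:

```python
def is_prime(m: int) -> bool:
    """Trial division; adequate for the small moduli considered here."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def field_modulus(n: int) -> int:
    """Smallest prime p with n < p < 2n; one exists by Bertrand's postulate."""
    for p in range(n + 1, 2 * n):
        if is_prime(p):
            return p
    raise ValueError("n must be at least 2")
```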
* Fix field $E,$ $\abs{E} > n,$ and elements $\alpha_1,\dots,\alpha_n \not= 0.$
* Dealer selects $a_t,\dots,a_1 \in E$ at random.
* Dealer sets $p(u) = a_t u^t + \cdots + a_1 u + s.$
* Dealer computes $\piece_i(s) \leftarrow p(\alpha_i).$
* Dealer sends $\piece_i(s)$ to player $i.$
Protocol for dealer to secretly share value $s \in E.$
In any case, the dealer shares $s \in E$ by the method displayed in
Figure <ref>. The dealer chooses $t$ random coefficients
$a_t,\dots,a_1$ in $E$ and creates a polynomial $p(u)$ of degree $t$ using
these coefficients, with $p(0) = s.$ Then, for fixed, public,
nonzero evaluation
points $\alpha_1,\dots,\alpha_n,$ he constructs the “piece” $\piece_i(s)$
by evaluating $p(\alpha_i).$ Finally, he sends $\piece_i(s)$ to player $i,$
privately. Distributions $\unifpolyn(t,s)$ and $\unifpieces(n,t,s),$
discussed in $\S\ref{sec-notation},$ describe the distributions
on polynomials and vectors of pieces so generated.
Any $t+1$ players can determine $s$ easily, since $t+1$ points determine a
polynomial of degree $t.$ By interpolating the polynomial of degree $t$
passing through their points, a sufficiently large collection of players
can recover $p(0)=s.$ As long as the number of omitted pieces, which is
bounded by $t,$ leaves $t+1$ valid pieces remaining, the reconstruction is
possible; hence $n-t \geq t+1$ must be satisfied, or equivalently $2t<n.$
Thus at the appropriate time specified by the protocol, it is easy to
reconstruct the secret from the information held by reliable players.
* Each player $i$ sends $\piece_i(s)$ to player $j.$
* Player $j$ selects $t+1$ of the pieces he collects.
* Player $j$ interpolates the polynomial $p(u)$ passing through them.
* Player $j$ computes $s=p(0).$
Protocol to reveal secret $s$ to player $j.$
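The dealing and revealing steps can be sketched in a few lines. This is an illustrative toy, not the dissertation's specification: it fixes $E={\bf Z}_p$ with evaluation points $\alpha_i=i,$ and the names `share` and `reveal` are ours:

```python
import random

P = 2**31 - 1  # a prime modulus; any prime larger than n will do

def share(s, n, t):
    """Dealer: pick a random degree-t polynomial p with p(0) = s and
    give player i the piece p(i)."""
    coeffs = [s] + [random.randrange(P) for _ in range(t)]
    return [sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P
            for i in range(1, n + 1)]

def reveal(points, t):
    """Any t+1 (alpha_i, piece_i) pairs determine p; Lagrange-interpolate
    the free term s = p(0)."""
    pts = points[: t + 1]
    s = 0
    for xi, yi in pts:
        num, den = 1, 1
        for xj, _ in pts:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        s = (s + yi * num * pow(den, P - 2, P)) % P  # den^{-1} by Fermat
    return s
```

Note that `reveal` never needs all $n$ pieces: any $t+1$ of them suffice.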
Intuitively speaking, any $t$ or fewer evaluation points of a
$t^{th}$-degree polynomial do not determine that polynomial. Thus, an
adversary who gains the information held by a coalition of $t$ or fewer
players learns nothing; it obtains a uniformly distributed vector of
values, regardless of the value of $s.$
(Shamir 1979)
Let $E$ be a field and let $1 \leq t < \abs{E}.$
Then for any $s \in E,$ for any $t' \leq t,$
and for any set
$\set{i_1,\ldots,i_{t'}} \subset E - \set{0}$ of size ${t'},$
\[
\set{ \vec{a} \leftarrow \uniform(E^t): (p_{s,\vec{a}}(i_1),
\ldots,p_{s,\vec{a}}(i_{t'})) }
\equiv \uniform(E^{t'})
\]
where $p_{s,\vec{a}}(u) = s + \sum_{j=1}^t a_j u^j.$
In particular, for any $n < \abs{E},$
$\alpha_1,\ldots,\alpha_n,s \in E,$
and $T \subseteq [n]$ with $\abs{T} \leq t,$
\[
\set{ \vec{p} \leftarrow \unifpieces(n,t,s) : \vec{p}_T }
\equiv \uniform(E^{\abs{T}})
\]
Given some pieces of a secret,
new pieces remain uniformly random, up to a total of $t:$
Let $E$ be a field and let $1 \leq t < n < \abs{E}.$
Then for any $s \in E,$ for any disjoint $T,T' \subseteq [n]$
satisfying $\abs{T \cup T'} \leq t,$ and for any
$\vec{q} \in E^n,$
\[
\set{ \vec{p} \leftarrow \unifpieces(n,t,s) : \vec{p}_{T'} \mid
\vec{p}_{T} = \vec{q}_{T} }
\equiv \uniform(E^{\abs{T'}})
\]
Elementary linear algebra.
For $3t < n,$ in a completely connected network of $n$ processors with
private channels, the method for secret sharing presented above is
$t$-resilient against dynamic, passive adversaries.
In the ideal protocol, the dealer sends $n$ pieces to the
trusted host, who then distributes them.
Before the adversary corrupts the dealer, the interface need
only generate random field elements in response to corruption
requests, which it does according to Lemma <ref>
by generating uniformly random field elements. If the adversary
corrupts the dealer, then the interface first chooses
$t-\abs{T}$ remaining pieces uniformly at random as directed
by Lemma <ref>, and corrupts
the dealer in the ideal protocol, obtaining $s.$ Given $s$ and
$t$ pieces, the polynomial is uniquely determined and $\interface$
simply solves for the remaining pieces. It constructs a view of
the dealer consisting of $s,$ the coefficients, and the entire
list of pieces, and provides the list to the adversary. Subsequent
requests for corruptions are computed by evaluating the polynomial.
Polynomial interpolation is extremely fast and efficient, especially at
fixed interpolation points. One way to perform the interpolation of $p(x)$
from values $p(1),\dots,p(n)$ is through the Lagrange interpolation formula
\begin{eqnarray} \label{eqn-lagrange-interp}
p(x) & = & \sum_{i=1}^n L_i(x) p(i)
\end{eqnarray}
where the Lagrange polynomial $L_i(x)$ is defined by
\begin{eqnarray*}
L_i(x) & = & \prod_{j=1,\ldots,n;\ j\not=i} \frac{x-j}{i-j}.
\end{eqnarray*}
Each $p(i)$ is a constant in Equation <ref>, so that
$p(x)$ is the weighted sum of polynomials $L_i(x).$ This sum is easily and
efficiently computable. (Note that even if the degree of $p(x)$ is
$t<n-1,$ the polynomial thus interpolated will be identical to $p(x);$ the
terms will cancel properly to ensure the final polynomial is of degree $t.$)
The coefficients of the polynomials $L_i(x)$ are easy to compute as well,
and need only be computed once. An alternative means to compute them is to
compute the inverse of the Vandermonde matrix $M$ defined as $M_{ij} =
\alpha_i^{j-1}.$
A particularly suitable set of evaluation points is given by $\alpha_i =
\omega^{i-1},$ so that $M_{ij} = \omega^{(i-1)(j-1)},$ where $\omega$ is an
$n^{th}$ root of unity over the field $E$ used for secret sharing. The
evaluation of the polynomial $p(u)$ becomes a fast Fourier transform:
\begin{eqnarray*}
M \vec{a} =
\left[
\begin{tabular}{ccccc}
1 & 1 & 1 & $\cdots$ & 1 \\
1 & $\omega$ & $\omega^2$ & $\cdots$ & $\omega^{n-1}$ \\
1 & $\omega^2$ & $\omega^4$ & $\cdots$ & $\omega^{2(n-1)}$ \\
\vdots & & & & \vdots \\
1 & $\omega^{n-1}$ & $\omega^{(n-1)2}$ & $\cdots$ & $\omega^{(n-1)(n-1)}$
\end{tabular}
\right]
\cdot
\left[
\begin{tabular}{c}
$s$ \\
$a_1$ \\
$a_2$ \\
\vdots \\
$a_t$ \\
0 \\
\vdots \\
\end{tabular}
\right]
& = &
\left[
\begin{tabular}{c}
$p(\omega^0)$ \\
$p(\omega^1)$ \\
$p(\omega^2)$ \\
\vdots \\
$p(\omega^{t-1})$ \\
$p(\omega^t)$ \\
\vdots \\
\end{tabular}
\right]
= \vec{p}
\end{eqnarray*}
so that the inverse of $M$ gives the desired weights:
\[
\vec{a} = M^{-1} \vec{p}.
\]
As observed crucially by Ben-Or et al. [28], the use of these
particular evaluation points supports the use of error-correcting codes,
which will be essential to defend against malicious errors.
Regardless of the choice of evaluation points, the weights can be
precomputed at the cost of a matrix inversion. Using roots of unity as
evaluation points, evaluating $p$ and interpolating $p$ is even faster
($O(n \log n)$ time vs. $O(n^3)$) since the computations become
fast Fourier transforms. Using the precomputed weights in the online
interpolation of $p(u)$ costs a matrix multiplication.
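A toy instance of this observation, with parameters chosen by us ($E={\bf Z}_{13},$ $n=4,$ $\omega=5$): evaluation at the powers of $\omega$ is a (naive) discrete Fourier transform, and since $M^{-1}_{ij}=n^{-1}\omega^{-(i-1)(j-1)},$ interpolation is again a transform rather than an explicit matrix inversion:

```python
P, n, omega = 13, 4, 5  # omega = 5 is a primitive 4th root of unity mod 13

def dft(coeffs):
    """Evaluate the polynomial with coefficient vector `coeffs` at
    omega^0, ..., omega^{n-1}: multiplication by the Vandermonde matrix M."""
    return [sum(c * pow(omega, i * j, P) for j, c in enumerate(coeffs)) % P
            for i in range(n)]

def idft(values):
    """Recover the coefficients: M^{-1} is n^{-1} times the transform
    built from omega^{-1}, so no matrix inversion is performed."""
    n_inv = pow(n, P - 2, P)
    w_inv = pow(omega, P - 2, P)
    return [n_inv * sum(v * pow(w_inv, i * j, P) for i, v in enumerate(values)) % P
            for j in range(n)]
```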
§.§ Verifiability
The astute reader will note that a single error in any of
the values will throw off the interpolation and produce an incorrect value
for $s.$ Certainly, $t+1$ reliable players will have sufficient
information to reconstruct $s$ at the appropriate time (the “appropriate”
time is specified by their programs, which, since they are reliable, is
specified by the protocol designer), but they must be able to distinguish
correct pieces from incorrect ones. Furthermore, they ought to be able to
tell if the dealer distributed a valid set of pieces in the first place.
This problem is known as verifiable secret sharing
[42, 28, 107].
secret sharing!verifiable
Cryptographic solutions often involve ideas such as
digitally signing the pieces to ensure that they aren't changed and
zero-knowledge proofs to ensure that they do interpolate to a proper
polynomial of degree $t.$ In this chapter, however, we are concerned with
unconditionally secure protocols: we make no complexity-theoretic
assumptions or restrictions, and cannot rely on such cryptographic
tools.
For the range $t < \frac{n}{3},$ Ben-Or et al. present a technique to
ensure verifiability with unconditional security. In
Chapter <ref> we give a method for verifiable secret sharing for
$t < \frac{n}{2},$ which is unconditionally secure with high probability.
Ben-Or et al. base their methods on BCH codes, using particular values
$\alpha_1 = \omega^0, \alpha_2 = \omega^1, \dots,
\alpha_n = \omega^{n-1}$ as the evaluation points for $p(u),$ where
$\omega$ is a primitive $n^{th}$ root of unity in the field $E$
used for secret
sharing. We shall not list the details of error correction
here, instead referring the
interested reader to [28, 101]. The essential property to note is
that there is an error-correcting method to interpolate polynomials
evaluated at roots of unity that tolerates omissions and changes in up to a
third of the values.
For completeness and for the purposes of proving resilience (a proof has
not appeared), we describe their method for verifiability using BCH codes.
The dealer chooses a bivariate polynomial $p(u,v)$ of degree $t$ in $u$ and
$v,$ subject to $p(0,0)=s,$ and sends the polynomials
$p_i(u)=p(u,\omega^{i-1})$ and $q_i(v)=p(\omega^{i-1},v)$ to player $i.$
The original piece $\piece_i(s)$ corresponds simply to $p_i(0);$ the other
information is for verification. Then each player $i$ sends
$p_i(\omega^{j-1})$ to player $j.$ Player $j$ can check for himself that
the values $p_1(\omega^{j-1}),\dots,p_n(\omega^{j-1})$ match his values for
$q_j(\omega^{0}),\dots,q_j(\omega^{n-1});$ if not, he requests that the
dealer broadcast the true values of $p(i,j)$ at the discrepancies.
Now, if any player detects more than $t$ errors or had to correct one of
his values, then he impeaches the dealer. An impeachment is a
broadcast request by player $i$ to reveal $p_i(u)$ and $q_i(v).$ If a
player finds a contradiction with the polynomials broadcast by the dealer,
he too impeaches the dealer. Finally, any player seeing $t+1$ or more
impeachments decides that the dealer is faulty, and otherwise accepts the
secret, since at least $n-t \geq t+1$ reliable players have accepted their
values and therefore determine a secret uniquely. Figure <ref>
lists the steps involved. The following theorem is proved in
[after Ben-Or, Goldwasser, Wigderson 1988]
For $3t < n,$ in a completely
connected network of $n$ processors with private channels, protocol is
$t$-private and correct against Byzantine adversaries.
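The pairwise consistency check underlying the verification step can be exercised directly. In this sketch the parameters ($E={\bf Z}_{13},$ $n=4,$ $t=1,$ $\omega=5$) and the function names are ours:

```python
import random

P, n, t, omega = 13, 4, 1, 5                   # omega: primitive n-th root mod P
alphas = [pow(omega, i, P) for i in range(n)]  # evaluation points omega^(i-1)

def biv_eval(c, u, v):
    """c[a][b] is the coefficient of u^a v^b; degree t in each variable."""
    return sum(c[a][b] * pow(u, a, P) * pow(v, b, P)
               for a in range(t + 1) for b in range(t + 1)) % P

def deal(s):
    """Dealer: random bivariate p with p(0,0) = s.  Player i receives the
    value tables of p_i(u) = p(u, alpha_i) and q_i(v) = p(alpha_i, v)."""
    c = [[random.randrange(P) for _ in range(t + 1)] for _ in range(t + 1)]
    c[0][0] = s
    rows = [[biv_eval(c, u, alphas[i]) for u in alphas] for i in range(n)]
    cols = [[biv_eval(c, alphas[i], v) for v in alphas] for i in range(n)]
    return rows, cols
```

Every honest pair of players agrees, because $p_i(\alpha_j)$ and $q_j(\alpha_i)$ are evaluations of the same bivariate polynomial at the same point $(\alpha_j,\alpha_i).$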
§ FUNDAMENTALS: COMPUTING WITH SECRETS
Let $C_F$ be a boolean circuit for $F$ over the gates
$\set{\logand,\logor,\lognot},$ having inputs $x_1,\dots,x_n.$ Such a
circuit is represented as an arithmetic circuit over field $E$ using
the standard mapping $\phi$ taking $x_i^{\phi} \mapsto x_i, (x \logand
y)^{\phi} \mapsto xy, (x \logor y)^{\phi} \mapsto x+y-xy, (\lognot x)^{\phi}
\mapsto 1-x.$
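The mapping $\phi$ can be checked mechanically on the Boolean values $\set{0,1};$ a minimal sketch over an arbitrarily chosen prime field:

```python
P = 101  # any field works; we compute mod an arbitrary prime

def AND(x, y): return (x * y) % P           # (x AND y)^phi = xy
def OR(x, y):  return (x + y - x * y) % P   # (x OR y)^phi  = x + y - xy
def NOT(x):    return (1 - x) % P           # (NOT x)^phi   = 1 - x
```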
The three-stage paradigm, “Share Inputs; Compute New Secrets; Reconstruct
Results,” forms the basis for constructing multiparty protocols to compute
a function $F(x_1,\dots,x_n).$ The intermediate stage is the backbone of
secure multiparty protocols: a collection of subprotocols to add and to
multiply secrets, creating new secrets from old, and thus to evaluate $C_F$
gate by gate, secretly. In this section we review and improve techniques
of [28] for evaluating $F$ via $C_F.$ Formally speaking, for now we
consider computing $F$ directly without dividing it into several
intermediate functions $F^i.$ Our more general techniques will be
introduced in <ref> and <ref>.
§.§ Linear Combinations
Given secretly shared values $X_1$ and $X_2,$ a protocol to create a new
secretly shared $Y$ whose value is their sum is easy to construct (see
Figure <ref>). Each player computes his new piece as
$\piece_i(Y) = \piece_i(X_1+X_2) \leftarrow \piece_i(X_1) + \piece_i(X_2).$
Behind the scenes, if $X_1$ and $X_2$ are shared using polynomials
$p_{X_1}(u)$ and $p_{X_2}(u),$ then the polynomial defined by
$\piece_1(Y),\dots,\piece_n(Y)$ is $p_Y(u) = p_{X_1}(u)+p_{X_2}(u),$ a
polynomial of degree $t$ and with uniformly random coefficients subject to
$p_Y(0) = X_1 + X_2.$
This protocol requires no communication. Noting that $(c \cdot p)(u) = c
\cdot p(u)$ is uniformly distributed and of degree $t$ with free term
$c\cdot p(0) = cX,$ it easily generalizes to multiplying secrets by fixed
constants and hence to linear combinations of secrets. To effect a
constant multiplication, each player multiplies his piece by that constant,
since $(c \cdot p)(\alpha_i) = c \cdot p(\alpha_i).$
Player $i$ sets
$ \piece_i(Y)
\leftarrow
c_0 + c_1 \cdot \piece_i(X_1) + \cdots + c_N \cdot \piece_i(X_N).$
Protocol to create a new secret $Y= c_0+c_1 X_1+\cdots+c_N X_N,$ where
$c_0,\dots,c_N$ are fixed constants and $X_1,\dots,X_N$ are secret.
The protocol to compute several linear combinations simultaneously is
simple: repeat the single linear combination protocol in parallel. No
interaction is needed. Figure <ref> describes the protocol.
Player $i$ sets
$ \piece_i(Y_j)
\leftarrow
c_{j0} + c_{j1} \cdot \piece_i(X_{j1}) + \cdots +
c_{jN} \cdot \piece_i(X_{jN}). $
Protocol to create new secrets $Y_1,\ldots,Y_M$ whose values are
$Y_j= c_{j0}+c_{j1} X_{j1}+\cdots+c_{jN} X_{jN}.$ The $\set{c_{jl}}$
values are fixed (“public”) constants and the $\set{X_{jl}}$ are secrets.
Protocol is a $t$-resilient protocol to compute a robust and private
representation $\robsec(G)$ where
\[
G(\set{c_{jl}},\set{X_{jl}}) = (c_{10}+\sum_{l=1}^N c_{1l} X_{1l},\ldots,
c_{M0}+\sum_{l=1}^N c_{Ml} X_{Ml} ).
\]
An interface for the protocol is trivial, since no messages
need be generated. The interface need only request the inputs and
auxiliary inputs of the players that its black-box adversary $A$ requests
to corrupt, and supply the values to $A.$
Post-protocol corruption is accomplished
by the same process.
It is easy to see that the results of the protocol are the correct secretly
shared values.
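The correctness claim can be tested end to end with a small simulation (the field ${\bf Z}_{101},$ evaluation points $1,\dots,n,$ and the function names below are our choices): players combine their pieces purely locally, with no communication, and the new pieces interpolate to the intended linear combination.

```python
import random

P = 101

def shares(s, n, t):
    """Shamir-share s with a random degree-t polynomial, pieces at 1..n."""
    coeffs = [s] + [random.randrange(P) for _ in range(t)]
    return [sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
            for x in range(1, n + 1)]

def free_term(pieces):
    """Lagrange-interpolate the polynomial through (1, pieces[0]), ...,
    (n, pieces[n-1]) and return its value at 0."""
    n = len(pieces)
    total = 0
    for i in range(1, n + 1):
        num, den = 1, 1
        for j in range(1, n + 1):
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        total = (total + pieces[i - 1] * num * pow(den, P - 2, P)) % P
    return total
```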
§.§ Multiplying Secrets
The protocol for adding secrets does not immediately generalize to
multiplication, since the product $p_Y(u) = p_{X_1}(u) p_{X_2}(u)$ has
degree $2t.$ If the degree of the representation of the secrets is allowed
to grow, there will quickly be an insufficient number of pieces to
reconstruct the result.
Ben-Or et al. and Chaum et al. present the first methods to
reduce the degree of the polynomial that results when each player
computes $\piece_i(X_1)\cdot\piece_i(X_2).$ They reduce the problem of
degree reduction to a linear combination of secrets using matrix
operations. Before the degree reduction occurs, a random polynomial of
degree $2t$ (and zero free term) is added in order to ensure a uniform
distribution on the resulting coefficients.
We present an alternative and independently discovered approach to degree
reduction and a faster method for ensuring uniform randomness of the
resulting coefficients. These minor optimizations are not the main thrust
of this chapter but are included because they also have certain conceptual
advantages.
We have seen in Equation <ref> how a polynomial can be
interpolated using a weighted sum of Lagrange polynomials. Let
$\overline{p}(u)$ denote the result of truncating all terms of polynomial
$p(u)$ having degree higher than $t;$ that is, $\overline{p}(u) = p(u) \bmod
u^{t+1}.$ Then we have
\begin{eqnarray} \label{eqn-interp-trunc}
\overline{p}(x) & = & \sum_{i=1}^n \overline{L}_i(x) p(i).
\end{eqnarray}
Clearly, then, we can express the value of $\overline{p}(u)$ at the point
$\alpha_j$ as a weighted sum of the values $p(1),\dots,p(n):$
\begin{eqnarray} \label{eqn-interp-trunc-i}
\overline{p}(\alpha_j) & = & \sum_{i=1}^n \overline{L}_i(\alpha_j) p(i).
\end{eqnarray}
The weights, $\overline{L}_1(\alpha_j), \dots, \overline{L}_n(\alpha_j),$
are easily precomputed. Furthermore, $\overline{p}(0) = p(0).$
If the values $p(1),\dots,p(n)$ are themselves shared as secrets,
then Equation <ref> indicates how to express each value
$\overline{p}(\alpha_j)$ as a linear combination of secrets.
Thus, $\overline{p}(u)$ could be used as a secret-sharing polynomial to
represent the secret $p(u),$ since it has degree $t.$ Unfortunately, the
coefficients of $\overline{p}(u)$ are not uniformly random unless the
original coefficients of $p(u)$ are.
To randomize the coefficients, it suffices to add a random polynomial of
degree $t+1$ and free term 0 to $p(u)$ before truncation. This observation
differs from [28], who use $t$ different polynomials to
construct a polynomial of degree $2t$ in order to randomize all
high-order coefficients. However, it is necessary only to randomize
coefficients of degree $t$ or less, since the others will be truncated.
Their protocol hence requires a $t$-fold increase in the expense of
this subprotocol.
If $r(u)$ is
completely random and of degree $t,$ then $u \cdot r(u)$ is of degree $t+1$
with free term 0 as desired. To create a random polynomial $r(u)$ (whose
free term is random), each player $i$ shares a secret $R_i$ using a
polynomial $r_i(u).$ Their sum, $r(u)=\sum_i r_i(u),$ is what we need;
each player $j$ holds the value $r(j) = \sum_i r_i(j) = \sum_i
\piece_j(R_i).$ The truncation protocol is given in
Figure <ref>.
* Choose $R_i$ at random and share it.
* Receive $\piece_i(R_1),\dots,\piece_i(R_n).$
* Set $\piece_i(R) \leftarrow \piece_i(R_1) + \cdots + \piece_i(R_n).$
* Set $\gamma_i \leftarrow p(i) + i \cdot \piece_i(R).$
* Share $\gamma_i.$
* Receive $\piece_i(\gamma_1),\dots,\piece_i(\gamma_n).$
* Set $\piece_i(p(0)) \leftarrow \overline{L}_1(i) \piece_i(\gamma_1)
+ \cdots + \overline{L}_n(i) \piece_i(\gamma_n).$
Protocol to secretly share $p(0)$ when each player $i$ holds $p(i),$
and $p(u)$ is of degree $2t.$ (Code for player $i.$)
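The truncation identity behind this protocol is easy to check numerically. The sketch below (helper names and the small prime field are mine, not the thesis's) verifies that weighting the values $p(i)$ by the truncated Lagrange polynomials $\overline{L}_i$ reproduces $\overline{p}(\alpha_j)$ at every point, and that $\overline{p}(0)=p(0).$

```python
Q = 97  # small prime field for illustration


def poly_mul(a, b):
    """Multiply coefficient lists over GF(Q)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % Q
    return out


def poly_eval(p, x):
    """Horner evaluation over GF(Q)."""
    acc = 0
    for c in reversed(p):
        acc = (acc * x + c) % Q
    return acc


def lagrange_basis(pts, i):
    """Coefficients of L_i(x): 1 at pts[i], 0 at the other points."""
    num, den = [1], 1
    for j, xj in enumerate(pts):
        if j == i:
            continue
        num = poly_mul(num, [-xj % Q, 1])
        den = den * (pts[i] - xj) % Q
    inv = pow(den, Q - 2, Q)  # Fermat inverse
    return [c * inv % Q for c in num]
```

Truncating each $L_i$ to degree $t$ and truncating $p$ commute because truncation is linear in the coefficients; that is exactly why the weights $\overline{L}_i(\alpha_j)$ can be precomputed once and for all.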
Given a protocol to truncate a high-degree polynomial secretly, the
protocol to compute the product of two secrets can be stated concisely
(see Figure <ref>). Simply: each player computes the product
of his pieces and participates in a truncation protocol, which involves
secretly sharing $\piece_i(X_1) \piece_i(X_2)$ itself.
* Set $p(i) \leftarrow \piece_i(X_1) \piece_i(X_2).$
* Run the Truncate protocol on $p(u)$ to obtain
$\piece_i(p(0)) = \piece_i(X_1 X_2).$
Protocol to multiply secrets $X_1$ and $X_2.$
(Code for player $i.$)
Execute protocol to compute $Y_j = X_{j1} \cdot X_{j2}.$
Protocol to create new secrets $Y_1,\ldots,Y_M$ whose values are
$Y_j = X_{j1} \cdot X_{j2}.$ The $\set{X_{j1},X_{j2}}_{j\in [M]}$ are
secrets. (Code for player $i.$)
§.§.§ Byzantine Faults
As before, we must be careful to check for misbehavior. The protocols as
listed are secure against a passive adversary but are not reliable in the
presence of Byzantine faults.
Since the error-correcting codes (polynomials evaluated at roots of unity)
are additive, the addition of verified secrets need not be checked.
Multiplication, however, requires the resharing of pieces; it must be
checked that player $i$ correctly shares $p(i) = \piece_i(X_1)
\piece_i(X_2).$ In this section we review the method of [28];
in Chapter <ref>, we present a new method to achieve this goal
that tolerates a much larger number of faults.
The ABC problem.
Alice knows the values of secrets $a$ and $b.$
Alice must share a new secret $c$ and prove to the other
players that the secret value of $c$ is indeed $ab.$
If Alice is honest, no information about $a,$ $b,$ or $c$
should be revealed.
A solution to this problem is detailed in Figure <ref>.
Notice that if $f(0)g(0)$ differs from $h(0),$ then it is impossible
for Alice to select polynomials of degree $t$ to make $h(x)$ of
degree $t.$ The use of random coefficients ensures that if Alice
is nonfaulty, then an interface can easily generate messages from
Alice to faulty players, because those messages will contain uniformly
random field elements, according to Lemma <ref>.
The protocol specifies twice that Alice participate in a protocol using
particular polynomials or pieces, in order that the system may verify
that Alice uses polynomials of degree $t.$
In other words, the subprotocols correspond to executing
but requiring that Alice send out $(p_i(u),q_i(v))$ values
in the first round that match the given polynomials. The recipients
of these messages check in the first round whether Alice has sent
them consistent values, and if not, they consider Alice to have
sent them nothing, and continue the protocol exactly as specified
from there.
* For $i=1..t,$ $j=0..t-1$: set $r[i,j] \leftarrow \uniform(E).$
* Set $h_i(x) = \sum_{j=0}^{t-1} r[i,j]\, x^j +
x^t \cdot ( c_{t+i} - \sum_{j=1}^{t-1} r[t+1-j,j]).$
* Alice shares $c_{t+1},\ldots,c_{2t}$ using these polynomials.
* For $1 \leq i \leq n$: set $h(\alpha_i)=f(\alpha_i)g(\alpha_i) -
\sum_{k=1}^t (\alpha_i)^k h_k(\alpha_i).$
* Alice secretly shares $ab$ using these as $p_i(0)$ values.
Protocol for Alice to verifiably share secret $c=ab.$
Protocol is a $t$-resilient protocol to compute a robust and private
representation $\robsec(G)$ of the products of $M$ pairs of secrets, where
\[
G(\set{X_{j1},X_{j2}}) =
(X_{11} \cdot X_{12},\ldots,X_{M1} \cdot X_{M2}).
\]
Step (MO1) is noninteractive and hence trivially $t$-resilient.
The protocol is a concatenation of two $t$-resilient subprotocols,
namely step (MO1) and the truncation protocol,
so by Theorem <ref> it is itself $t$-resilient.
§.§ Disqualification and Fault Recovery
Rather than burden the descriptions of the protocols to come, we implicitly
require that, following an impeachment, namely a broadcast message from one
player $i$ stating that some player $j$ is faulty, each player checks to
see if the impeached player should be disqualified. We consider a
processor disqualified if $t+1$ or more players have broadcast
impeachments of it. The meaning of “checks to see” is determined by the
specifications of the particular protocol. If the player is disqualified,
a recovery procedure may be necessary. In this chapter, when the number of
faults is less than a third, no recovery procedure is necessary (though of
course, misbehavior is eliminated by eliminating the messages of faulty
players identified by error-correction) except in the Input stage.
If a fault occurs during the Input stage, namely when a player ought to
share its input, the faulty player's secret is replaced by a default value
selected (for full generality) according to some samplable distribution: in
addition to sharing the input $x_i,$ each player $i$ shares a set of random
bits which are used to construct a default input for player $j$ should it
fail. Let $T$ be the set of players disqualified when sharing the inputs.
Then it is easy to describe a set of circuits $\set{C_j}_{j \in T}$ which
take as inputs the random bits shared by players $i \not\in T,$ compute
their exclusive or, and output default values for each $x_j.$ Rather than
evaluating $C_F$ directly, the protocol calls for an evaluation of the
circuit $C_F'$ which is the circuit $C_F$ with its $i^{th}$ input set to
either the output of $C_i$ or to the value $x_i$ shared by player $i;$
the circuit $C_F'$ selects one or the other by considering the sum of extra
inputs from each player specifying whether the player impeaches player $i.$
If the sum is large enough, the circuit selects the default input, and
otherwise selects $x_i.$
We shall normally consider these conditions implicitly in our descriptions
of the protocols, for otherwise they would be tremendously difficult to read.
§.§ Putting It Together
Given subprotocols to add and to multiply secrets, the specification of a
protocol to evaluate an arbitrary arithmetic circuit $C_F$ is not hard to
describe. Let $E$ be a fixed finite field. A basis for a circuit is a set
of functions which the gates may compute. An arithmetic gate is an element
of the set $\set{\times} \cup \set{(+,a_0,a_1,\dots,a_m)}_{a_0,\ldots,a_m
\in E}$ where the latter sort of gate is a linear combination,
producing $a_0 + a_1x_1 + \cdots + a_mx_m$ on inputs $x_1,\dots,x_m.$
For ease of description, we shall consider circuits as arrays of gates;
this allows us to specify circuits with multiple outputs or to describe
straight-line programs (circuits of width 1) as we desire. It also allows
us to specify in a natural way that the outputs of particular gates at the
final level are given to distinct players. Specifically, a circuit is a
$(d+1) \times w$ array of elements of the form
$(g_{ij},\inputgates(g_{ij})),$ where $\inputgates(g_{ij})$ is a set of
pairs $(a,b)$ describing the indices of gates whose output leads to
$g_{ij}.$ We define $\inputgates_k(g_{ij})$ to be $(a,b)$ if $g_{ab}$ is
the $k^{th}$ earliest gate (in row-major order) leading into $g_{ij}.$ Each
input $g_{ab}$ to gate $g_{ij}$ must satisfy $a<i.$ The $0^{th}$ layer
represents the inputs $x_1,\dots,x_n$ to the circuit. Let $\outgates(i)$
denote the set of gates $g_{dj}$ whose output represents the value $F_i$
that player $i$ should learn.
Without loss of generality we assume that even-numbered levels contain only
linear combination gates and odd-numbered levels contain only
multiplication gates. The depth of the circuit is doubled at most.
Denote the coefficients of the additive gate $g_{ij}$ by
$a_{i,j,0},a_{i,j,1},\ldots,a_{i,j,\abs{\inputgates(g_{ij})}}.$
A circuit with bounded fan-in satisfies $\abs{\inputgates(g_{ij})} <
c$ for some constant $c$ and for all $i,j.$ A circuit with bounded
multiplicative fan-in satisfies $\abs{\inputgates(g_{ij})} < c$ for some
constant $c$ and for all $i,j$ such that $g_{ij}=\times.$ A
$\times$-fanin-2 circuit has multiplicative fan-in of 2.
The class $NC^1$ consists of functions with bounded fan-in
polynomial-size logarithmic-depth circuit families, over the basis
$\set{{\rm AND, OR, NOT}}.$ The class $ANC^1$ is described by
polynomial-size logarithmic-depth circuit families over the arithmetic
basis $\set{(+,a_0,a_1,\dots,a_m),\times}.$ We consider a circuit
efficient if it is of polynomial size and polynomial depth.
(After [28])
Let $\set{F^{n,m}}$ be a family of functions described by a
$\times$-fanin-2 circuit family $C_{F^{n,m}}.$ Then for $3t<n,$ there
exists a $t$-resilient protocol leaking $F^{n,m}.$ This protocol uses
a polynomial number of bits (in the size of $C_{F^{n,m}}$) and $O({\tt
depth}(C_{F^{n,m}}))$ rounds of interaction.
The protocol (see Figure <ref>) consists of secretly sharing all
the inputs, then evaluating $C_{F^{n,m}}$ layer by layer, maintaining the
gate outputs as secrets. At each layer, all gates are evaluated in
parallel. Finally, for $1 \leq i \leq n,$ each output secret in
$\outgates(i)$ is reconstructed for player $i.$
* (E1) Each player $i$ runs ($x_i$).
Denote these secrets at level 0 of the circuit by $z_{0,1},\ldots,z_{0,n}.$
* (E2) Evaluate addition and multiplication layers:
* Run with coefficients $\set{c_{2l,v,j}},$ secrets
$\set{z_{\inputgates_j(g_{2l,v})}}$ to compute:
$z_{2l,v}= c_{2l,v,0} + \sum_j c_{2l,v,j}\, z_{\inputgates_j(g_{2l,v})}$
($v=1..w,$ $j=1..\abs{\inputgates(g_{2l,v})}$)
* Run with secrets
$\set{(z_{\inputgates_1(g_{2l+1,v})}, z_{\inputgates_2(g_{2l+1,v})})}$
to compute:
$z_{2l+1,v} =
z_{\inputgates_1(g_{2l+1,v})} \cdot z_{\inputgates_2(g_{2l+1,v})}$
* (E3) For each $j\in \outgates(i),$
reveal $z_{dj}$ to player $i.$
Protocol to evaluate a circuit $C_F$ for $F(x_1,\dots,x_n).$
By lemmas <ref> and <ref> and Theorem
<ref>, the overall protocol is $t$-resilient, since it is a
concatenation of perfectly $t$-resilient protocols for robust and private
intermediate functions.
§.§ Some Applications
Protocols for problems such as taking a secret ballot are now relatively
easy to describe. Chapter <ref> also describes some simple
protocols for useful problems, though those protocols have been optimized
for efficiency using various tricks not evident in the preceding analysis.
Let us examine how, at this stage, secure multiparty protocols may be
constructed from simple gate operations.
Secret Ballot.
A secret ballot is a private and reliable distributed computation of
the sum of 0/1 inputs. Let us give a protocol for taking a secret ballot
which is private, correct, ensures that the voters cast votes
independently, and reveals the result to everyone, even when up to a third
of the voters may be untrustworthy.
The protocol is straightforward, given the secret linear-combination and
multiplication protocols. Each voter casts his vote $X_i$ by secretly
sharing it. To ensure that each voter cast at most one vote, they secretly
compute $Z_i=X_i(X_i-1)$ for each $i,$ and reconstruct the result for
everyone. Any voter whose value is nonzero is disqualified. Let $J$ be
the set of voters who are not disqualified. Then, together, the voters run
the protocol to compute $Y=\sum_{i \in J} X_i.$ Finally, they
reconstruct the result $Y$ for everyone.
It is easy to see that revealing $Z_i$ gives no information about the vote
cast by a reliable voter $i.$ That is, the result for reliable players is
always 0, and hence $Z_i$ is a private and robust function. Furthermore, this
protocol is fast and efficient, requiring one multiplication and two
additions, which take a small constant number of rounds.
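The ballot-validity test is worth making concrete: revealing $Z_i = X_i(X_i-1)$ exposes a cheater while saying nothing about an honest vote, since $Z_i$ is identically 0 on $\set{0,1}.$ A plain (non-secret) sketch over an illustrative field:

```python
P = 101  # illustrative prime field size


def vote_check(x):
    """The value Z = X(X-1) revealed to validate a ballot over GF(P).

    Z == 0 for every legal vote (0 or 1), so revealing it leaks nothing
    about an honest voter; any other field element makes Z nonzero.
    """
    return x * (x - 1) % P
```

In the protocol this product would itself be computed by the secret-multiplication subprotocol before being reconstructed for everyone.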
Unanimous Vote.
A slightly more complicated goal is to decide if an entire committee votes
unanimously or not, without revealing the tally if the committee is not in
agreement. As in Example <ref>, the voters secretly compute
$Y,$ the tally of the valid 0/1 votes. Next, they compute the function
$f(y) = 1-(n-y)^{p-1},$ where $p$ is the size of the field being used for
sharing. Notice that $f(n)=1$ whereas $f(y)=0$ for $y \not= n.$
Finally they reveal $f(y)$ to everyone.
An algebraic circuit of depth $\log p$ suffices to compute $f(y),$ noting
that the multiplicative gates are restricted to fan-in 2. If we take $p$
to be on the order of $n,$ then the depth is roughly $O(\log n).$ The
number of rounds required to perform the unanimous ballot is $O(\log n),$
and the number of bits is polynomial in $n.$ This is a prime example of a
protocol whose running time is made efficient through our methods for
constant-rounds protocols.
* Each voter $i$ shares $X_i.$
* Compute and reveal $Z_i=X_i(X_i-1)$ for $1\leq i \leq n.$ Let $J$ be the set
of $i$ for which the result is 0.
* Compute $Y = \sum_{i \in J} X_i.$
* Compute $V_1 = Y-n$ and $W_1 = Y-n.$
* For $j = 1 .. \lfloor \log p \rfloor$:
* Compute $W_{j+1} = W_{j}^2.$
* If the $j^{th}$ bit of the binary expansion of $p-1$ is 1,
compute $V_{j+1} = V_j W_{j+1},$ else let $V_{j+1} = V_j.$
* Reconstruct $V_{\log p}.$ If 0, output “unanimous,” else output
“not unanimous.”
Protocol to decide if secret ballot is unanimous without revealing the
tally. (Network protocol.)
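The arithmetic of the unanimity test is a plain square-and-multiply exponentiation, which the figure unrolls into secret multiplications. A non-secret sketch (field size is illustrative) of $f(y) = 1-(n-y)^{p-1}$:

```python
P = 101  # illustrative prime field size


def unanimous(votes):
    """f(y) = 1 - (n - y)^(P-1) over GF(P): 1 iff all n votes are 1.

    By Fermat's little theorem, (n - y)^(P-1) is 0 when y == n and 1
    otherwise, so the tally itself is never revealed.
    """
    n, y = len(votes), sum(votes) % P
    w = (n - y) % P
    # square-and-multiply for w^(P-1), mirroring the loop in the figure
    v, e = 1, P - 1
    while e:
        if e & 1:
            v = v * w % P
        w = w * w % P
        e >>= 1
    return (1 - v) % P
```

Each squaring and each conditional multiplication corresponds to one secret-multiplication round, giving the $O(\log p)$ round count quoted above.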
§ TOOLS FOR EFFICIENT PROTOCOLS
In this section we present a collection of useful and important
subprotocols. The technique for unbounded fan-in multiplication presented
in <ref> is the critical tool for developing protocols that
require a constant number of rounds as opposed to a number of rounds
proportional to the depth of a circuit for $F.$
§.§ Random Secret Values
We often have need of secretly shared random field elements whose value no
player knows. That is, we desire a secret whose value is uniformly random
and independent of the information of any $t$ or fewer players.
* Each player $i$ chooses $R_i$ uniformly at random from the field $E$ and
runs to share it. The default value is 0 for misbehaving players.
* Run to compute $R = R_1 + \dots + R_n.$
Protocol to create random secret field element $R.$
Random secret bits are also useful. They are generated simply by computing
the parity of 0/1-valued secrets shared by each player. Over a field of
characteristic 2, one computes the parity by computing simply the sum,
requiring no interaction beyond the secret sharing. Over other fields, a
logarithmic-depth circuit is needed (though <ref> shows how to
perform this in constant rounds). Before computing the parity, each shared
bit is verified to be 0 or 1 by computing $b_i(b_i-1),$ which is 0 iff
$b_i$ is 0 or 1, and which is a private function since its value is
independent of $b_i.$
* Each player $i$ sets $b_i \leftarrow \uniform(\set{0,1})$ and shares it.
* Run and to compute $c_i=b_i(b_i-1).$
* Run to compute $b = \sum_{c_i = 0} b_i.$
Protocol to create random secret field element $b$ in a field of
characteristic 2.
§.§ Groups and Inverses
The best known algebraic circuit to compute multiplicative inverses in a
field is fairly deep. It turns out, surprisingly, that the number of
rounds required to secretly compute the multiplicative inverse of a secret
is the same as that needed to multiply two elements, despite the large
number of rounds that would be required by simulating an arithmetic circuit
directly. The power of this result will be evident in the next section,
where we shall use it to construct a protocol to invert matrices, which in
turn will support a constant-round protocol for any $NC^1$ circuit.
(Constant Rounds for Group Inverses)
Let $G$ be a group, and let $X \in G$ be secretly shared. Let $\Pi_M$ be a
protocol to multiply two elements in $G$ that runs in $T_M$ rounds
and uses $C_M$ bits; let $\Pi_R$ be a protocol to generate a random element
in $G$ that runs in $T_R$ rounds and uses $C_R$ bits. Let
$T=\max(T_M,T_R)$ and $C=\max(C_M,C_R,C_{\mbox{\scriptsize share}}),$
where $C_{\mbox{\scriptsize share}}$ is the number of bits used to
secretly share a group element.
Then there is a protocol to secretly compute $X^{-1}$ using $O(T)$ rounds
and $O(C)$ bits. Specifically, if multiplication of two elements requires
constant rounds then inversion requires only constant rounds.
The protocol is exhibited in Figure <ref>. Its security
rests on the elementary observation that, for any $X,$ the distribution
induced by multiplying by a uniformly random group element is uniform over
the group. In other words, the intermediate function $F^1(X)=(V,U)$ is
private and robust, since the public results ($V$) have the same
distribution regardless of $X,$ and the other results are secretly shared;
so the computation of $F^1$ is as easy to simulate as the ideal vacuous
protocol. The function $F^2$ is such that $F^2(V,U)=V^{-1}U=X^{-1}$
and $F^2$ admits a resilient protocol $\Pi_M.$ Thus $F=F^2 \closedcomp
F^1$ and Theorem <ref> applies.
* Run $\Pi_R$ to secretly generate a random secret element $U \in G.$
* Run $\Pi_M$ to secretly compute $V \leftarrow UX.$
* Reconstruct $V$ for every player.
* Each player $i$ computes $V^{-1}$ individually. It can now be treated
as a fixed public constant.
* Secretly compute $Y \leftarrow V^{-1}U.$
Protocol to compute secret inverse of a secret group element.
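The blinding identity at the heart of the protocol can be sketched in a few lines; here the group $G$ is taken to be the nonzero residues of a small prime field, and the step comments are my own annotations, not the thesis's labels.

```python
import random

Q = 101  # the multiplicative group of GF(101) plays the role of G


def secret_inverse(x, rng):
    """Blind-and-reveal inversion: V = U*X is safe to publish because U is
    uniform over the group, and V^{-1} * U = X^{-1}."""
    u = rng.randrange(1, Q)       # generate a random group element U
    v = u * x % Q                 # compute V = UX; V may be revealed
    v_inv = pow(v, Q - 2, Q)      # every player inverts the public V locally
    return v_inv * u % Q          # Y = V^{-1} U = X^{-1}
```

In the real protocol $u$ stays secretly shared throughout and only $v$ is reconstructed; the local computation of $V^{-1}$ is free, which is why inversion costs no more rounds than a single multiplication.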
§.§ Secret Matrices and Matrix Inversion
A secret $n\times n$ matrix is a collection of $n^2$ secrets, each
representing an element of the matrix. Thus each player holds $n^2$
pieces, one of each entry. Figure <ref> shows how to
multiply matrices secretly.
* For $i=1..\alpha,$ $j=1..\gamma:$
run to compute
\[
C_{ij} = \sum_{k=1}^{\beta} A_{ik} B_{kj}.
\]
Protocol to secretly multiply secret matrices $A$ and $B$ of dimensions
$\alpha \times \beta$ and $\beta \times \gamma,$ respectively.
Each entry of the inverse of a $3 \times 3$ matrix $M$ is easily expressed
as a small, constant-depth circuit applied to the nine entries of $M.$
By Theorem <ref>, we have the following:
There exists a protocol to invert a full-rank secret $3 \times 3$ matrix
using a polynomial number of bits and a fixed constant number of rounds.
Given a constant-rounds protocol $\Pi_R$ to generate a random $n \times n$
secret matrix of full rank, Theorem <ref> implies the
following result:
There exists a protocol to invert a full-rank secret $n \times n$ matrix
using a polynomial number of bits and a constant expected number of rounds.
Consider the group $G$ of full rank matrices under multiplication. To
generate random $n \times n$ secret matrices having full rank, it suffices
to generate a pair $(R,S)$ of uniformly random secret matrices such that
$RS$ has full rank. See Figure <ref>. Because $S$ is
therefore uniformly random of full rank, the distribution on $U=RS$ is
uniformly random over full-rank matrices regardless of the value of $R.$
Hence the desired function (generate a full rank random matrix) can be
written as the composition of two private and robust functions, the first
computing $U$ and the second computing the full rank matrix. Because the
probability of generating a uniformly random matrix of full rank is
bounded below by a constant, the expected number of rounds is constant. Clearly, a single repetition
suffices with high probability
if several ($k$) of these pairs are generated and tested in parallel, and
the first qualifying pair is used. Because the subprotocol to generate
random group elements uses expected constant rounds, the overall protocol
is expected constant rounds. This differs from the fixed-size
matrix inverse problem, whose solution requires a fixed constant number of
rounds.
* Generate uniformly random secret matrices $R,S,$
using the protocol to create each entry.
* Secretly compute $U=RS$ and reveal the result. If $U$ has full rank, use
$R.$ Otherwise go to step 1.
Protocol to generate random secret matrix of full rank.
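The rejection loop is simple enough to sketch in the clear; field size, dimension, and helper names are illustrative. Note that if $RS$ is invertible then so is $R,$ so the accepted matrix is guaranteed to have full rank.

```python
import random

Q, N = 7, 3  # illustrative field size and matrix dimension


def rand_matrix(rng):
    return [[rng.randrange(Q) for _ in range(N)] for _ in range(N)]


def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(N)) % Q
             for j in range(N)] for i in range(N)]


def rank(mat):
    """Rank over GF(Q) by Gaussian elimination."""
    m = [row[:] for row in mat]
    r = 0
    for c in range(N):
        piv = next((i for i in range(r, N) if m[i][c]), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        inv = pow(m[r][c], Q - 2, Q)
        m[r] = [x * inv % Q for x in m[r]]
        for i in range(N):
            if i != r and m[i][c]:
                f = m[i][c]
                m[i] = [(m[i][j] - f * m[r][j]) % Q for j in range(N)]
        r += 1
    return r


def random_full_rank(rng):
    while True:                        # expected O(1) iterations
        R, S = rand_matrix(rng), rand_matrix(rng)
        if rank(mat_mul(R, S)) == N:   # only U = RS would be revealed
            return R
```

Only $U=RS$ is ever made public, and $U$ is uniform over full-rank matrices regardless of $R,$ which is what makes the loop private.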
§.§ Random Inverse Pairs and Large Fields
The following theorem describes another useful tool for a computation for
which no shallow circuit is known: inverting secret field elements.
In order to avoid revealing a 0-valued secret because of a failed attempt
to invert it, we extend the multiplicative inverse so that $0^{-1}=0.$
(Constant Rounds for Multiplicative Field Inverses)
Let $E$ be a field, and let $X \in E$ be secretly shared. Let
$\Pi_M$ be a protocol to multiply or add two elements in $E$ that
runs in $T_M$ rounds and uses $C_M$ bits; let $\Pi_R$ be a protocol to
generate a random element in $E$ that runs in $T_R$ rounds and uses $C_R$
bits. Let $T=\max(T_M,T_R)$ and $C=\max(C_M,C_R,C_{\mbox{\scriptsize
share}},\abs{E}^4),$ where $C_{\mbox{\scriptsize share}}$ is the number of
bits used to secretly share a field element.
Then there is a protocol to secretly compute $X^{-1}$ using $O(T)$ rounds
and $O(C)$ bits. Specifically, if multiplication of two elements requires
constant rounds then inversion requires only constant rounds.
The protocol is exhibited in Figure <ref>. The distribution
on $U_i$ and $V_i$ is clearly independent of $X.$ Notice that the
distribution on $R_i,$ given $U_i=V_i=0,$ is uniform over all field
elements, including 0. Hence $X-R_i$ is uniform over all field
elements regardless of $X,$ and the intermediate results have the same,
uniform distribution for any $X.$ It is not hard to see that pairs of
extended inverses are the only pairs to give zeros in both tests in step
(FI2). Thus, the inverse function $F$ is written as the composition of
private and robust intermediate functions, and Theorem <ref>
applies. Choosing $\abs{E}^4$ random pairs ensures that with probability
at least $1-2^{-\abs{E}}$ all such pairs will appear and that step (FI5)
will succeed.
Invert a secret field element.
* Generate $\abs{E}^4$ random secret pairs $(R_i,S_i).$
* For each $1 \leq i \leq \abs{E}^4,$ secretly compute
\begin{eqnarray*}
U_i & = & R_i( 1 - R_i S_i ),\\
V_i & = & S_i( 1 - R_i S_i ).
\end{eqnarray*}
* Reveal all the $U_i$ and $V_i.$
* For each $i$ such that $U_i=V_i=0,$
compute and reveal $X-R_i.$
* Call $Y$ the first secret $S_i$ such that
$X-R_i=0.$ If none exists, go to step 1.
Protocol to compute secret multiplicative inverse of a secret in a field $E.$
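The claim that step (FI2) accepts exactly the extended-inverse pairs is easy to verify exhaustively over a small field; the sketch below (field size and names are mine) checks every pair.

```python
Q = 11  # small field, small enough to test every pair


def pair_test(r, s):
    """Step (FI2): U = R(1 - RS) and V = S(1 - RS) both vanish over GF(Q)
    exactly when (R, S) is an extended-inverse pair, i.e. S = R^{-1}
    with the convention 0^{-1} = 0."""
    u = r * (1 - r * s) % Q
    v = s * (1 - r * s) % Q
    return u == 0 and v == 0
```

If $R\neq 0$ and $S\neq 0,$ both products vanish only when $RS=1;$ if $R=0$ then $U=0$ automatically and $V=S$ forces $S=0,$ which is the extended pair $(0,0).$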
§.§ Iterated Multiplication
The final set of tools provide the crucial support for reducing the number
of rounds of interaction. We show the surprising result that large fan-in
multiplication of group elements can be performed in constant rounds. A
naive approach would require $\log N$ rounds, where $N$ is the number of
elements to multiply. Specifically, we show:
(Constant Rounds for Iterated Multiplication)
Let $G$ be a group, and let $X_1,\dots,X_N \in G$ be secretly shared. Let
$\Pi_M$ be a protocol to multiply two elements in $G$ that runs in
$T_M$ rounds and uses $C_M$ bits; let $\Pi_R$ be a protocol to generate a
random element in $G$ that runs in $T_R$ rounds and uses $C_R$ bits. Let
$T=$max$(T_M,T_R)$ and $C=N \cdot {\rm max}(C_M,C_R,C_{\mbox{\scriptsize
share}}),$ where $C_{\mbox{\scriptsize share}}$ is the number of bits used
to secretly share a group element.
Then there is a protocol to secretly compute $Y=X_1 \cdots X_N$ using
$O(T)$ rounds and $O(C)$ bits. Specifically, if multiplication of two
elements requires constant rounds then multiplication of $N$ elements
requires only constant rounds.
Figure <ref> describes the protocol.
The following demonstrates that $Y$ is the desired product:
\begin{eqnarray*}
Y & = & R_0 S R_N^{-1} \\
  & = & R_0 R_0^{-1} X_1 R_1 R_1^{-1} X_2 \cdots X_N R_N R_N^{-1} \\
  & = & X_1 \cdots X_N
\end{eqnarray*}
* Generate $N+1$ secret uniformly random
group elements $R_0,\dots,R_N \in G.$
* Secretly compute the inverses $R_0^{-1},\dots,R_N^{-1}.$
* For $j=1,\dots,N,$ simultaneously compute the following new secrets:
\[
S_j = R_{j-1}^{-1} X_j R_j.
\]
* Reveal all the secret elements $S_j.$
* Each player privately computes
\[
S = S_1 \cdots S_N.
\]
* Secretly compute $Y = R_0 S R_N^{-1}.$
Protocol to compute the product of several elements in a group $G.$
Since each $R_j$ is generated uniformly at random,
and since each $R_j$ and each $X_j$ is invertible,
revealing $S_1,\dots,S_N$ gives no information about $X_1,\dots,X_N.$
That is, the list of elements $(S_1,\dots,S_N)$ is distributed uniformly
at random, given by the following easy lemma:
Let $G$ be a group. For any $X_1,\dots,X_N \in G,$
\[
\uniform( G^N ) \;=\;
\{\,(R_0,\ldots,R_N) \leftarrow \uniform(G^{N+1});\;
S_1 \leftarrow R_0^{-1} X_1 R_1;\;
\ldots;\;
S_N \leftarrow R_{N-1}^{-1} X_N R_N:\;
(S_1,S_2,\ldots,S_N)\,\}.
\]
Formally speaking, the protocol computes four robust and private
intermediate functions. The first function $F^1$ supplies each player with
pieces of random group elements; the second, $F^2,$ provides pieces of
their inverses; the third, $F^3,$ generates a public (hence robust)
uniformly distributed (hence private) vector of values $(S_1,\ldots,S_N);$
and the fourth provides pieces of $Y.$ Theorem <ref>
implies the resilience of .
Applying Theorem <ref>, we need only $O(T)$ rounds to invert
each secret $R$ in (IM2). Steps (IM3) and (IM6) use constant depth
circuits and are easily performed with a constant number of rounds of
interaction. Step (IM4) requires a round of interaction and step (IM5)
requires none.
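The telescoping derivation above can be sketched directly; here the group is taken to be the nonzero residues of a small prime field, and the step comments refer to (IM1)-(IM6) of the figure.

```python
import random

Q = 101  # nonzero residues mod 101 serve as the group G in this sketch


def inv(a):
    return pow(a, Q - 2, Q)


def iterated_product(xs, rng):
    """Blinding chain: the S_j are uniformly random (hence safe to reveal),
    yet R_0 * (S_1 ... S_N) * R_N^{-1} telescopes back to X_1 ... X_N."""
    n = len(xs)
    r = [rng.randrange(1, Q) for _ in range(n + 1)]            # (IM1)
    s = [inv(r[j]) * xs[j] * r[j + 1] % Q for j in range(n)]   # (IM2)-(IM4)
    prod_s = 1
    for sj in s:                                               # (IM5), local
        prod_s = prod_s * sj % Q
    return r[0] * prod_s * inv(r[n]) % Q                       # (IM6)
```

Every $S_j$ is masked on both sides by fresh random group elements, so publishing the entire list costs no privacy, and the whole product collapses into a constant number of interactive steps.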
§ THE POWER OF ITERATED MULTIPLICATION
In this section we prove the crucial and surprising result that is the
focus of this chapter. The result depends critically on the ability to
perform iterated multiplication of secret $3 \times 3$ matrices;
Theorem <ref> of the previous section is key.
Let $ANC^1$ denote the class of functions which can be written
as polynomial size algebraic formulas, or logarithmic depth circuits,
over a fixed finite field $E.$
(The abbreviation derives from “Algebraic NC.”)
The standard approach of simulating a circuit suggests that the number
of rounds required to evaluate $F \in ANC^1$ grows unboundedly, and
it was conjectured that a logarithmic lower bound on the number of
rounds would hold. We prove to the contrary that a fixed number
of rounds suffice.
For $3t<n$ and any function $F \in ANC^1$ (or $F \in NC^1$),
there exists a $t$-resilient protocol to compute $F$
in a constant number of rounds, with polynomial message sizes.
In [7] Barrington showed that $NC^1$ is equivalent
to multiplying polynomially-many permutations of 5 elements.
Ben-Or and Cleve [24] generalized this to show that computing
polynomial-size algebraic formulas (complete for $ANC^1$)
over a field $E$ is equivalent
to multiplying polynomially-many $3 \times 3$ matrices over that field.
Our method for computing $ANC^1$ was facilitated by a table-chaining technique
suggested by M. Rabin [104].
Let $F$ be representable by an algebraic formula $\scf$ of depth $d$ having
variables $X_1,\dots,X_n.$ By [24], there is a sequence of $3 \times
3$ matrices $M_1,\dots,M_{p(d)}$ whose product $M[\scf]$ contains
$F(X_1,\dots,X_n)$ in the upper right entry $(1,3).$ Here, $p(d)=O(4^d).$
For completeness we describe the sequence of matrices corresponding to
$\scf.$ Let
\[
J_1 = \left[\begin{array}{rrr}
0 & 1 & 0 \\
-1 & 0 & 0 \\
0 & 0 & 1
\end{array}\right]
\qquad
J_2 = \left[\begin{array}{rrr}
0 & 0 & -1 \\
1 & 0 & 0 \\
0 & 1 & 0
\end{array}\right]
\]
\[
J_3 = \left[\begin{array}{rrr}
0 & 1 & 0 \\
0 & 0 & 1 \\
1 & 0 & 0
\end{array}\right]
\qquad
J_4 = \left[\begin{array}{rrr}
0 & 0 & 1 \\
-1 & 0 & 0 \\
0 & 1 & 0
\end{array}\right]
\]
\[
J_5 = \left[\begin{array}{rrr}
-1 & 0 & 0 \\
0 & 0 & 1 \\
0 & 1 & 0
\end{array}\right]
\]
Furthermore, let
\[
M[f] = \left(\begin{array}{rrr}
1 & 0 & f \\
0 & 1 & 0 \\
0 & 0 & 1
\end{array}\right).
\]
Constants and variables are represented by $M[c]$ and $M[x_i].$ The product
$M[\scf]$ is derived from the following observations:
\begin{eqnarray*}
M[f+g] & = & M[f] \cdot M[g] \\
M[f \cdot g] & = & J_1 \cdot M[g] \cdot J_2 \cdot M[f]
\cdot J_3 \cdot M[g] \cdot J_4 \cdot M[f] \cdot J_5.
\end{eqnarray*}
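Both identities can be checked numerically; the sketch below (over an illustrative prime field, with my own helper names) encodes the $J$ matrices and $M[f]$ exactly as displayed above.

```python
Q = 97  # illustrative prime field

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]


def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) % Q
             for j in range(3)] for i in range(3)]


def chain(*ms):
    """Left-to-right product of 3x3 matrices over GF(Q)."""
    out = I3
    for m in ms:
        out = mat_mul(out, m)
    return out


def M(f):
    """The matrix M[f] carrying f in its upper right (1,3) entry."""
    return [[1, 0, f % Q], [0, 1, 0], [0, 0, 1]]


J1 = [[0, 1, 0], [-1, 0, 0], [0, 0, 1]]
J2 = [[0, 0, -1], [1, 0, 0], [0, 1, 0]]
J3 = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
J4 = [[0, 0, 1], [-1, 0, 0], [0, 1, 0]]
J5 = [[-1, 0, 0], [0, 0, 1], [0, 1, 0]]
```

Addition simply adds the $(1,3)$ entries, while the five fixed conjugating matrices route two copies each of $M[f]$ and $M[g]$ so that everything cancels except the product $fg$ in the corner.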
The protocol is simple: each $M[x_i]$ is shared by player $i,$ and the
network multiplies all the matrices together. Notice that the matrices are
all full-rank and therefore form a group under multiplication. According
to Theorem <ref> and
Corollary <ref>, the secret product $M[\scf]$ takes a
constant number of rounds to compute. The result is the secret $M[\scf](1,3)$
and is revealed or left secret for use in further protocols.
Figure <ref> gives more details. For the sake of efficiency,
the constant matrices are collapsed at the start:
let $H=\set{M_i \mid M_i
\mbox{\hspace{0.1in} contains a variable}},$
let $q = \abs{H},$ let $h(i)$ be the index of
the $i^{th}$ member of $H,$ let $G(i)=\set{M_j \mid h(i)<j<h(i+1)}$ be the
set of constant matrices to the right of $M_{h(i)},$ and let $G(0)$ be the
set of matrices to the left of $M_{h(1)}.$ Define $N_1 = [\prod G(0)] M_{h(1)}
[\prod G(1)]$ and $N_j = M_{h(j)}[\prod G(j)].$
* Each player $i$ shares $x_i.$
* Secretly compute each $N_1,\ldots,N_q$ using , noninteractively.
* Run ($N_1,\ldots,N_q$) to obtain secret matrix $M[\scf].$
* The result is the top right secret of $M[\scf],$ i.e. $M[\scf](1,3).$
Protocol to evaluate $NC^1$ or $ANC^1$ circuit in constant rounds and
polynomial message sizes.
§.§ Reducing Rounds for Polynomial Size Circuits
The results of the previous section easily show how to reduce the number of
rounds for circuit-based protocols.
For $3t<n$ and any function family $F$ described by a polynomial-size
circuit family $C_F,$ there exists a $t$-resilient protocol to compute $F$
using $O({\tt depth}(C_F)/(\log nm))$ rounds, with messages of size
polynomial in $n$ and $m.$
Define the slice function $F^i$ to be the set of outputs at the $(i\cdot
\log nm)^{th}$ level of circuit $C_F.$ Clearly, $F$ can be written as the
composition of ${\tt depth}(C_F)/(\log nm)$ of these functions, and each function
is in $NC^1.$ By Theorem <ref>, there is a set of protocols to
evaluate each slice function; by eliminating the final step
from each protocol, we obtain a protocol that produces secretly
shared values rather than revealing the outputs of that level. By
Theorem <ref>, the concatenation is $t$-resilient. The number
of rounds required by the concatenated protocol is clearly $\log nm$ times
the number of rounds to compute a slice function, which is constant.
§.§ Determinants in Constant Rounds
In fact, Theorem <ref> gives a stronger result using iterated
multiplication of $n \times n$ matrices. By
Theorem <ref> and Corollary <ref>, there
is an expected constant-rounds protocol to compute the product of a
polynomial number of matrices. Cook [50] and Berkowitz
[30] show that the iterated product of integer $n \times n$
matrices is complete for DET$^*$, the class of all problems that are
$NC^1$ reducible to DET, namely those that are $NC^1$ reducible to
computing the determinant of an $n \times n$ matrix.
For $3t<n$ and any function $F \in \mbox{DET$^*$},$ there exists a
$t$-resilient protocol to compute $F$ in a constant expected number
of rounds, with polynomial message sizes.
§ ANY FUNCTION IN CONSTANT ROUNDS
In fact, at the expense of a possible exponential blowup in message size,
it is certainly possible to achieve secure protocols in constant rounds for
$3t<n.$ The idea is based on representing a function $F(x_1,\ldots,x_n)$ as
a weighted sum whose addenda are computable in constant rounds. The
message size depends on the number of addenda, which itself depends on $F.$
Ignoring the number of addenda, each of which can be computed in parallel
and then added non-interactively, the protocol requires constant rounds.
Any function $F : E^n \rightarrow E$ has a canonical representation
as a function $c_F$ such that on $\set{0,1}^n,$ $c_F(x_1,\ldots,x_n) =
F(x_1,\ldots,x_n),$ in the following manner:
\[
c_F(x_1,\ldots,x_n) =
\sum_{(\epsilon_1,\ldots,\epsilon_n) \in \set{0,1}^n}
F(\epsilon_1,\ldots,\epsilon_n) \cdot
\delta((\epsilon_1,\ldots,\epsilon_n),(x_1,\ldots,x_n))
\]
where $\delta((\epsilon_1,\ldots,\epsilon_n),(x_1,\ldots,x_n))=1$
iff each $\epsilon_i=x_i,$ and otherwise is 0.
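This representation can be checked directly in the clear; the sketch below sums $F(\epsilon)\cdot\delta(\epsilon,x)$ over all $2^n$ points, with majority-of-three standing in as an illustrative $F$:

```python
from itertools import product

def c_F(F, xs):
    """Weighted sum of Kronecker deltas; agrees with F on {0,1}^n."""
    n = len(xs)
    total = 0
    for eps in product((0, 1), repeat=n):
        delta = int(all(e == x for e, x in zip(eps, xs)))
        total += F(*eps) * delta
    return total

def maj3(a, b, c):
    """Illustrative F: majority of three bits."""
    return int(a + b + c >= 2)
```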
For $3t<n$ and any function $F,$
there exists a $t$-resilient protocol to compute $F$
in a constant number of rounds. The message sizes may grow
exponentially, depending on the nature of $F.$
The protocol is trivial, given Lemma <ref> and a means to
compute $\delta$ in constant rounds: in parallel,
secretly compute $\delta$
for all $(\epsilon_1,\ldots,\epsilon_n)$ values such that
$F(\epsilon_1,\ldots,\epsilon_n)\not=0,$ and then compute the secret linear
combination of the results using the publicly-known weights
$F(\epsilon_1,\ldots,\epsilon_n)$ as specified.
Define the normalization of $x$ to be $\norm{x}=1$ iff $x \not= 0,$ and
$\norm{0}=0.$ Then we have
\[
\delta((\epsilon_1,\ldots,\epsilon_n),(x_1,\ldots,x_n)) =
1- \norm{ \sum_{i=1}^n \norm{\epsilon_i-x_i} }
\]
But Theorem <ref> states that there is a constant-rounds
protocol for computing the extended multiplicative inverse of a secret.
Clearly, $\norm{x}=x \cdot x^{-1},$ for this extended inverse. Then the
protocol to compute $\delta$ is simple: normalize each difference by
calling the protocol on $\epsilon_i-x_i;$ sum the results
non-interactively; normalize the sum; and subtract the result from 1.
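In the clear over a small prime field, the normalization and the resulting $\delta$ computation look as follows. This is only a sketch: the actual protocol applies these steps to secret-shared values, and the field characteristic must exceed $n$ so that the inner sum of at most $n$ ones cannot wrap around to zero.

```python
P = 101  # illustrative prime; must exceed n

def ext_inv(x, p=P):
    """Extended multiplicative inverse: 0 maps to 0 (x^(p-2) mod p)."""
    return pow(x % p, p - 2, p)

def norm(x, p=P):
    """norm(x) = x * x^{-1}: 1 if x != 0 mod p, else 0."""
    return (x % p) * ext_inv(x, p) % p

def delta(eps, xs, p=P):
    """1 iff eps == xs coordinate-wise, else 0."""
    s = sum(norm(e - x, p) for e, x in zip(eps, xs))
    return (1 - norm(s, p)) % p
```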
An alternative but equivalent formulation arises from the following lemma.
Any function $F : E^n \rightarrow E$ has a canonical representation as a
polynomial $c_F$ of degree $n$ such that on $\set{0,1}^n,$
$c_F(x_1,\ldots,x_n) = F(x_1,\ldots,x_n).$ In particular, any function
$F : \set{0,1}^n \rightarrow \set{0,1}$ has such a representation as a
polynomial of degree $n$ over $E.$
Follows from simple algebra and the following definition:
\[
c_F(x_1,\dots,x_n) =
\sum_{(\epsilon_1,\ldots,\epsilon_n) \in \set{0,1}^n}
F(\epsilon_1,\ldots,\epsilon_n) \cdot
\prod_{i=1}^n (1 - (x_i-\epsilon_i)(-1)^{\epsilon_i} )
\]
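The algebra of this polynomial can be verified numerically: each factor evaluates to $1$ when $x_i=\epsilon_i$ and to $0$ when $x_i=1-\epsilon_i.$ A sketch with XOR-of-three as an illustrative $F$:

```python
from itertools import product

def c_F_poly(F, xs, p=101):
    """Degree-n polynomial over GF(p) agreeing with F on {0,1}^n."""
    n = len(xs)
    total = 0
    for eps in product((0, 1), repeat=n):
        term = F(*eps)
        for x, e in zip(xs, eps):
            # factor is 1 when x == e and 0 when x == 1 - e
            term = term * (1 - (x - e) * (-1) ** e) % p
        total = (total + term) % p
    return total

def xor3(a, b, c):
    """Illustrative F: XOR of three bits."""
    return a ^ b ^ c
```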
§ FORMAL PROOFS
Describing an interface is usually far more tedious and difficult than
giving a convincing argument that the information held by an adversary is
independent of the inputs of reliable players. It should be remarked that
the protocol designer need not concern himself with the details of the
interface; given a protocol compiler and a list of subprotocols that can be
concatenated, the actual implementation of a protocol has nothing to do with
the specification of an interface for the protocol. Thus the only purpose
of the following specifications is to prove the resilience of the protocol;
they are irrelevant to the implementation.
§.§ Step By Step Simulation
In constructing messages from reliable players or reconstructing the state
of a newly corrupted player, it is essential to distinguish which portion
of the player's information is implicitly or explicitly determined by the
current view of the adversary, and which portion is completely unknown to
the adversary. In our protocols there are two extremes: either the
adversary already “knows” the value of a variable held by a reliable
player (as in the case of a piece distributed by a corrupted player), or it
“knows” nothing, in the sense that the variable is uniformly distributed
(e.g. over the set of field elements). Thus the task of the
interface is easy, once it has determined which local variables have
already been fixed; it must simply choose the other local
variables according to the appropriate conditional distributions (fixed and
usually uniform).
There are many dependencies among the random variables describing the views
of the players and the adversary; therefore, we introduce a notation to
clarify the conditional distributions which the interface samples from when
interacting with the adversary. The important point to note about each
variable is whether it is known (i.e. determined by a current view) or
hidden (i.e. unknown but distributed according to some known distribution).
(Compromised Information)
Let $T \subseteq [n] \cup \set{A},$ and let $\rho \leq r.$
A random variable $\rv(v_i,\rho)$
of player $i$ at round $\rho$ is
compromised by $T$ at round $r$
if there exists a function $f$ such that $f(\view_T^r) = \psi(v_i,\rho).$
We write this as $\view_T^r \vdash \rv(v_i,\rho).$
§.§ Data Structures and Local Variables
In order to examine the progressive states of each player and in order to
implement any of the protocols, we need to list explicitly the local
variables that make up each player's state. The value of each variable is
a string representing a bit, a field element, or a message; if unassigned,
the value is $\Lambda.$ The local variables listed below are certainly
redundant to some degree:
* $x_i,$ its input.
* $a_i,$ its auxiliary input.
* $y_i,$ its auxiliary input.
* $\randfield_i = (\rho_{i1},\rho_{i2},\ldots),$
an array of generic values, some of which may represent secretly shared
values (e.g. wire values), other public information (e.g.
publicly known constants and other information), and other information
necessary to the particular protocol (e.g. information used to verify pieces).
* $\randbits_i = (b_{i1},b_{i2},\ldots),$
an array of random bits.
* $\vec{s} = (s_1,s_2,\ldots),$
an array of known values, some of which may represent secrets and others of
which may represent information necessary to the protocol (such as
additional information used to verify pieces).
* $\pieces_i = (\piece_1(s_1),\ldots,\piece_n(s_1);
\piece_1(s_2),\ldots,\piece_n(s_2); \ldots),$
an array of the pieces of all the secrets. Player $i$ will know either one
of the pieces or all of the pieces of each given secret.
* $\disqual_i = (d_1,d_2,\ldots,d_n),$
an array of disqualifications. $\disqual_i[j]=1$ if player $i$ believes
player $j$ to be corrupt; otherwise it is 0.
* $\globdisqual_i = (g_1,g_2,\ldots,g_n),$
an array of global disqualifications. $\globdisqual_i[j]=1$ if player $i$
believes all reliable players have disqualified player $j.$ (This should
have the same value among all reliable players.)
* $\mess([n],i,1..R),$ messages from other players.
* $\mess(i,[n],1..R),$ messages to other players.
A state $q_i$ is specified by the vector
\begin{eqnarray*}
& (x_i,a_i,\randfield_i,\randbits_i,\vec{s}_i,\pieces_i,\disqual_i,
\globdisqual_i, & \\
& \mess([n],i,1..R),\mess(i,[n],1..R)). &
\end{eqnarray*}
As in Chapter <ref>, <ref>, we use these as
labels to parametrize distributions in more detail. That is, instead of
considering simply the distribution $\rv(q_i,r)$ on the state of player $i$
at round $r,$ we consider the distribution $\rv(\piece_i(s_1),r)$ on the
value of player $i$'s local variable $\piece_i(s_1)$ at round $r,$ the
distribution $\rv(\piece_i(s_2),r)$ on the value of player $i$'s local
variable $\piece_i(s_2)$ at round $r,$ and so on. The collection of random
variables is parametrized by $r$ and the set $\rvnames'$ of variable names
obtained from $\rvnames$ by replacing each $q_i$ in $\rvnames$ by the
individual labels listed above.
Our goal is to specify the distributions $\rvAA$ describing variables for
$\VSS$ and the distributions $\rvASB$ describing variables for $\VSS$
induced by $\interface$ in the $\idealname(\share)$ protocol and to show
that they are equal. Some are not sampled by $\interface,$ and we denote
unsampled variable values by $\rvASB(v,r)=\notsamp.$
Implicit in the operation of an interface is the specification that
after each round of interaction with $A,$ $\interface$ sets
$\rvASB(v,r+1)=\rvASB(v,r)$ for each $v$ such that
$\rvASB(v,r) \not= \lambda.$
In each round we
describe corruptions and then rushed messages from $\tbar.$ After each
round of $\VSS,$ $\interface$ records messages from $A$ in
$\rvASB(\mess(T,\tbar,r),r)$ accordingly.
We give arguments along the way that $\rvAA(v,r)=\rvASB(v,r)$ for
sampled variables.
§.§ Proofs of Resilience
§.§.§ Verifiable Secret Sharing
Theorem <ref>
We would like to show that $\VSS \resilasFa \idealname(\share) \resilasFa
\vacuous.$ First we construct an interface $\interface$ from $\VSS$ to
$\idealname(\share).$ For clarity we refer to player $i$ in $\VSS$ and
to player $i_{id}$ in $\idealname(\share).$
The $\idealname(\share)$ protocol allows the dealer, $D,$ to supply pieces
to the host, who distributes them if they are properly interpolatable, but
otherwise sends $\Lambda$ to all players. The interface runs most of
$\VSS$ with $A$ before finishing the first round of $\idealname(\share).$
Remark. The intuition that $A$ gains no information by
complaining and forcing $D$ to reveal pieces because $A$ knows them already
is formalized by $\interface$ having at a given stage in $\VSS$ set the
random variable $\rvASB(x,r)$ to some value that either has come from $A$
or has been generated by $S$ for $A$ in response to a corruption request.
If $A$ corrupts $D$ before it supplies secret $s,$ $\interface$ requests
$D_{id}$ be corrupted and supplies $A$ with $s.$ If $A$ requests $i$ be
corrupted, $\interface$ corrupts $i_{id}$ and returns the auxiliary input.
(V1) If $D$ is corrupt, $\interface$ does nothing but record the
outgoing message of $A$ as $\rvASB(\mess(D,[n],1),1).$
$\interface$ chooses $\rvASB(\piece_i(s),1) \leftarrow \uniform(E)$
for all $i \in T,$ constructs
\[
\{ p_i(u) \leftarrow \unifpolyn(n,t,\piece_i(s)) \}
\]
\[
\{ q_i(v) \leftarrow \unifpolyn(n,t,0) \mid
(\forall j \in T) q_i(\omega^{j-1}) = p_j(\omega^{i-1}) \},
\]
and delivers these messages to $A.$ By Lemma <ref>,
the conditional distributions satisfy
\[\rvASB(\mess(D,i,1;(p_i(u),q_i(v))),1) =
\rvAA(\mess(D_{id},n+1,1;(p_i(u),q_i(v))),1).
\]
(V2) If $A$ newly corrupts $D,$ $\interface$ corrupts $D_{id},$ sets
\[
\{ g(u) \leftarrow \unifpolyn(n,t,s) \mid
(\forall i \in T) g(\omega^{i-1}) = \piece_i(s) \},
\]
sets for all $i \not\in T$
\[
\{ p_i(u) \leftarrow \unifpolyn(n,t,\piece_i(s)) \mid
(\forall j \in T) p_i(\omega^{j-1}) = q_j(\omega^{i-1}) \},
\]
\[
\{ h(v) \leftarrow \unifpolyn(n,t,s) \mid
(\forall i \in T) h(\omega^{i-1}) = q_i(0) \},
\]
sets for all $i \not\in T$
\[
\{ q_i(v) \leftarrow \unifpolyn(n,t,h(\omega^{i-1})) \mid
(\forall j \in T) p_i(\omega^{j-1}) = q_j(\omega^{i-1}) \},
\]
and sets $\rvASB(\mess(D,i,1),1) = (p_i(u),q_i(v))$ for all $i \in T.$
Interface $\interface$ sends $A$ the view of $D$ using $x_D,$ $a_D,$ and
the values generated above.
By Lemma <ref>
each of these distributions satisfies $\rvASB(x,1) = \rvAA(x,1)$
and is polynomial-time computable, being uniform over an easily computed set
of solutions determined by the conditions listed above. This computation
in effect induces the variable $\rvASB(p(u,v),1) = \rvAA(p(u,v),1).$
If $A$ newly corrupts player $i$ we must consider whether $D$ is yet
corrupted. If $D$ is corrupt, then all information held by $i$ is
known, in particular, $\rvASB(\mess(D,i,1),1).$
The interface corrupts $i_{id}$ to obtain $a_{id}$
(note that $x_{id}$ is nil)
and constructs $\view_i^1$ from these values.
It then provides $A$ with the view.
If $D$ is not corrupt, then $\interface$ must construct the
$(p_i(u),q_i(v))$ message that $D$ sent to $i$ in round (V1).
To generate rushed messages from nonfaulty players, if $D$ is nonfaulty
then $\interface$ sets $\rvASB(\mess^{\broad}(i,[n],2),2) = 0$
for all $i \not\in T,$ since a nonfaulty player will accept the
proper (but private) message from a nonfaulty dealer.
If $D$ is faulty, then its messages to nonfaulty players are
determined by $\rvASB(\mess(D,[n],1),1),$ and $\interface$
sets $\rvASB(\mess^{\broad}(i,[n],2),2)$ to be $0$ if the
message to $i$ describes polynomials of degree $t,$ and $\interface$
sets the variable to $1$ otherwise. Clearly this is the same
probabilistic computation $\delta_i$ that each nonfaulty player
applies in (V2).
Now, if $A$ newly corrupts $i$ while $D$ is nonfaulty, $\interface$
generates $p_i(u)$ and $q_i(v)$ as in (V1), and returns them. If $D$ is
already corrupt, $\interface$ uses $\rvASB(\mess(D,T,1),1)$ to determine
$p_i(u)$ and $q_i(v).$
To generate messages from nonfaulty players, $\interface$ does the
following. For each $i\not\in T$ and $j \in T,$ the $p_i(\omega^{j-1})$
values sent by nonfaulty players are determined already by the values
sent to corrupt players, so $\interface$
sets $\rvASB(\mess(i,j,2;p_i(\omega^{j-1})),2) =
\rvASB(\mess(D,j,1;p_i(\omega^{j-1})),1).$
(V4) If $A$ newly corrupts $D,$ corrupt $D_{id}$ and construct
$\rvASB(\mess(D,T,1..2),1..2)$ as before. If $A$ newly corrupts $i,$
$\interface$ creates an earlier view as in (V3) and must then construct
incoming messages about $p_k(\omega^{i-1})$ in round (V3).
For each $k \in T,$ check if $k$ sends $i$ a proper
$p_k(\omega^{i-1})$ according to whether
$\rvASB(q_i(\omega^{k-1}),1) =
\rvASB(\mess(i,k,2;q_i(\omega^{k-1})),2).$
The former is fixed by previous computations of $\interface.$
Set $L(i,k)$ accordingly.
If $D$ is nonfaulty then set $L(i,k)=0$
for all $k \not\in T,$
because all nonfaulty players send what $D$ sent them.
If $D$ is faulty but $k \not\in T,$ use $\rvASB(\mess(D,k,1),1)$ and
$\rvASB(\mess(D,i,1),1)$ to determine whether $i$ complains.
This determines $\rvASB(L(i,k),2)$ for all $k,$ and $\interface$ supplies
it along with $a_i$ (obtained by corrupting $i_{id}$) to $A.$
To generate the broadcast messages from nonfaulty players, $\interface$
performs the same computation as the new corruption of $i$ just described.
It uses the vector $L(i,\cdot)$ as the broadcast value from $i\not\in T.$
The view of a newly corrupt dealer includes that generated according to
(V4) along with the broadcast messages $\rvASB(\mess^{\broad}(T,[n],4),4)$
and $\rvASB(\mess^{\broad}(\tbar,[n],4),4)$ generated by $A$ and
$\interface$ in round (V4). The view of a newly corrupted $i$ is generated
as in (V4), also attaching the list of broadcast messages.
If $D$ is nonfaulty, all disputes involve at least one faulty player $i,$
so $\interface$ has already specified in
$\rvASB(\mu(D,i,1;p_i(\omega^{i-1})),1)$ the correct
value. So $\interface$ sets $\rvASB(\mess(D,[n],4;p(i,j)),4)=
\rvASB(\mu(D,i,1;p_i(\omega^{i-1})),1).$ If $D$ is faulty, no messages
from nonfaulty players need be generated.
New corruptions of $D$ or player $i$ are treated as in (V5), and
$\interface$ adds on the messages broadcast by $D$ in (V5), whether
generated by $\interface$ or by $A.$
If $D$ is nonfaulty, no player $i\not\in T$ impeaches $D,$ so
$\rvASB(\mess^{\broad}(i,[n],6;M(i)),6) = 0.$ Otherwise it is easy for
$\interface$ to compute whether player $i\not\in T$ impeaches $D$
from the values of $\rvASB(\mu(D,i,1),1)$ and
$\rvASB(\mu(D,[n],4),4),$ since these previous messages from $D$ to $i$
have already been recorded (if not generated) by $\interface,$ and since
these previous values equal $\rvAA(\mu(D,i,1),1)$ and
$\rvAA(\mu(D,[n],4),4).$
New corruptions of $D$ or $i$ are treated as in (V6), adding the broadcast
impeachments from (V6) on the end of the views.
If $D$ is nonfaulty, impeachments come from faulty players only,
so $D$ will broadcast $(p_i(u),q_i(v))$ values that $\interface$
has already generated:
\begin{eqnarray*}
\rvASB(\mess^{\broad}(D,[n],7;M'(i)),7) & = &
\rvASB(\mess(D,i,1),7) \\
& = & \rvAA(\mess(D,i,1),7) \\
& = & \rvAA(\mess^{\broad}(D,[n],7;M'(i)),7)
\end{eqnarray*}
For $i\not\in T,$ $D$ broadcasts $M'(i)=0,$ of course. If $D$ is faulty,
$\interface$ simply records $\rvASB(\mess^{\broad}(D,[n],7),7)$
as generated by $A.$
New corruptions are as in (V7), with the broadcast values
$\rvASB(\mess^{\broad}(D,[n],7),7)$ concatenated at the end.
If player $i$ is nonfaulty, $\interface$ checks whether $i$ broadcast
$M(i)=0$ in (V6). If so, it sets $\rvASB(\mess^{\broad}(i,[n],8),8)=0.$
Otherwise, it checks whether, for all $j$ for which $D$
broadcasts $(p_j(u),q_j(v)),$ the values agree with $i$'s
value $p_j(\omega^{i-1})$ as derived accordingly from
$\rvASB(\mess(D,i,1),8)$ or
$\rvASB(\mess^{\broad}(D,[n],7),8),$
the choice depending on whether $i$ has complained before.
If so, $\interface$ sets $\rvASB(\mess^{\broad}(i,[n],8),8)=0,$
and otherwise sets it to $1.$
Finally, if $\rvASB(\globdisqual_i(D),8)=1$ for any $i\not\in T$
($\interface$ calculates this easily from
$\rvASB(\mess^{\broad}(\tbar,[n],8),8)$), then $\interface$
requests that the corrupted dealer $D_{id}$ send
$\Lambda$ in the ideal protocol.
The trusted host will supply each player with the output,
$(0,\rej).$ Otherwise, if the dealer $D$ is corrupted,
$\interface$ requests that $D_{id}$ send the pieces
that were finally accepted by nonfaulty players, and fills out
the list with values for corrupted players $i_{id}$ that provide
a polynomial of degree $t.$ These corrupted players will receive
those values but their outputs are considered to be $\Lambda,$
as described in Chapter <ref>. The nonfaulty players
in $\idealname(\share)$ will output their accepted pieces, marked $\acc.$
This concludes the description of $\interface.$ We claim that for
every $r$ and every $v$ such that
$\rvASB(v,r) \not= \lambda,$
$\rvASB(v,r) = \rvAA(v,r).$
At each step, $\interface$ samples new variables $\rvASB(v,r)$
as a probabilistic function (sometimes even deterministic,
as when reporting values already broadcast) $f$ of earlier samples.
Each time, $\interface$ uses either:
* a previous output of $A;$
* uniform distributions based on Lemma <ref>;
* direct computation of a nonfaulty player on values already broadcast
or known by virtue of correct behavior of other nonfaulty players
(e.g. nonfaulty players do not impeach other nonfaulty players).
With the aid of Lemma <ref> and broadcast channel properties,
it holds that
\[
\rvASB(v,r+1) = f(\rvASB(v_1,r),\ldots,\rvASB(v_K,r)) =
f(\rvAA(v_1,r),\ldots,\rvAA(v_K,r)) = \rvAA(v,r+1)
\]
for some $K.$ In particular,
$\rvAA(\outfn(q_i),8) = \rvASB(\outfn(q_i),8)$ for all
$i$ and $\rvAA(q_A,8) = \rvASB(q_A,8),$ hence
$\ensAAlpha\protoIn =
\ensASBeta\protoIn.$
§.§.§ Multiplication
Fix $E=E(m,n)$ and $\omega.$
(V1) $D$ computes
$p(u,v) \leftarrow \uniform( \set{p \in E[u,v] \mid p(0,0) = s,\
p(u,v) = \sum_{i=0}^{t} \sum_{j=0}^{t} p_{ij} u^i v^j} ).$
$D \rightarrow i$ $(1 \leq i \leq n):$
$(p_i(u),q_i(v)) = (p(u,\omega^{i-1}), p(\omega^{i-1},v)).$
(V2) $i \rightarrow [n]$ $(1 \leq i \leq n):$ broadcast $0$ if
$p_i(u)$ and $q_i(v)$ have degree $t,$ else $1.$
(V3) $i \rightarrow j$ $(1 \leq i,j \leq n):$ $p_i(\omega^{j-1}).$
(V4) Each $i$ computes $(1 \leq i,j \leq n)$
$L(i,j) = \left\{
\begin{array}{ll}
0 & \mess(j,i,3;p_j(\omega^{i-1})) = q_i(\omega^{j-1}) \\
1 & \mbox{ otherwise }
\end{array}
\right.$
and broadcasts $i \rightarrow [n]:$ $L(i,\cdot).$
(V5) $D \rightarrow [n]$ $(1 \leq i \leq n):$
$L'(i) = \left\{
\begin{array}{ll}
0 & \mess^{\broad}(i,[n],4;L(i,j))=0 \mbox{ for all } j \\
p_i(\omega^{j-1}) & \mbox{ otherwise }
\end{array}
\right.$
(V6) Each $i$ $(1 \leq i \leq n):$ if $(\exists j)$
$\mess^{\broad}(i,[n],4;L(i,j))=1$
or $\mess^{\broad}(j,[n],4;L(j,i))=1,$
and $\mess^{\broad}(D,[n],5;p_i(\omega^{j-1})) \not=
\mess(D,i,1;q_i(\omega^{j-1})),$
set $M(i)=1;$ else set $M(i)=0.$
$i \rightarrow [n]:$ $M(i).$
(V7) $D \rightarrow [n]$ $(1 \leq i \leq n):$
$M'(i) = \left\{
\begin{array}{ll}
0 & \mess(i,[n],6;M(i))=0 \\
(p_i(u),q_i(v)) & \mbox{otherwise}
\end{array}
\right.$
(V8) $i \rightarrow [n]$ $(1 \leq i \leq n):$ if
$\mess^{\broad}(i,[n],6;M(i)) = 0$ or
$\mess^{\broad}(D,[n],7;(p_i(u),q_i(v)))$ is consistent,
set $\disqual_i(D)=0;$ else $\disqual_i(D)=1.$
Each $i$ $(1 \leq i \leq n):$ if $t+1$ players disqualified $D,$
set $\globdisqual_i(D)=1$ and output $Y_i=(0,\rej);$
else output $Y_i=(p_i(0),\acc).$
Protocol for dealer to verifiably share secret $s.$
See text for more details.
CHAPTER: TOLERATING A MINORITY OF FAULTY PROCESSORS
Democracy is the recurrent suspicion that more than half of the people
are right more than half of the time.
E. B. White, The Wild Flag
The methods of [28, 39] and even the far more efficient methods
we have presented in Chapter <ref> allow faults in at most a third
of the network ($3t<n$). Perfect privacy is not achievable with higher fault
tolerance for even some simple functions like AND [28] (but see
Chapter <ref> for a characterization of functions computable with
perfect privacy at high, passive fault-tolerance levels).
The natural question to ask, though, is whether higher fault tolerance is
possible if the assurances of security and reliability need not be
absolute. We show that, allowing for a negligible chance of error, $2t<n$
can be achieved. That is, as long as only a minority of the processors are
faulty, we can construct protocols to compute any function reliably and
securely. For larger numbers of faults, it becomes impossible for the
players even to share a secret, which does not immediately imply a negative
result but gives a strong intuition as to why higher fault-tolerance is
generally impossible without making other assumptions about the network.
Notice that the major problems with extending the methods of
[28, 39] to $2t<n$ are that verifiable secret sharing fails
and that the specific techniques for the ABC Problem fail.
Rabin [106] made initial and significant
progress in the direction of improving
the fault bounds from $3t<n$ to $2t<n$ by demonstrating a method for
verifiable secret sharing for $2t<n,$ using a broadcast network.
Earlier methods for verifiable secret sharing with a faulty minority
required cryptographic assumptions [42].
The extension to performing computations for $2t<n,$ however,
remained open until this work.
We give a new and efficient method to solve the ABC problem when
secret addition is possible, for $2t<n.$
Our techniques utilize verifiable secret sharing for $2t<n,$ and
allow the field used for secret sharing to be of exponential size.
Ben-Or [107] and Kilian [85] have
independently developed methods to
tolerate a faulty minority, based on standard ideas of boolean circuit
simulation. Their methods require that the field used for secret sharing
be of
polynomial size, which restricts the efficiency of their solutions.
There exists a protocol that is $t$-resilient
against dynamic Byzantine adversaries for $2t < n.$
Because we can simulate large-field arithmetic operations quickly and
directly while alternative methods use bit-simulations or small-field
arithmetic, our protocols have the practical advantage of using far fewer
rounds of communication for many natural functions. Joined with the
techniques of Chapter <ref>, our techniques use tremendously fewer
rounds of interaction. The bit-simulation techniques of
[107, 85], on the other hand, are incompatible with reducing
rounds through the arithmetic circuit reductions of Chapter <ref>
since they require the added cost of simulating arithmetic operations by
bitwise Boolean operations.
As before, our methods are secure against adversaries with unbounded
resources, while at the same time requiring only polynomial time to execute.
We shall first describe a modification of the solution in [106]
for verifiable secret sharing, show how to add secrets, and then give
our solution to the ABC problem when $2t<n.$ The framework of
[28, 39] can then be applied, using these more resilient
subprotocols to support the share-secrets-create-secrets paradigm.
Assumptions made in this chapter. The network is complete,
with private lines, broadcast lines, $n$ processors,
and at most $t < \frac{n}{2}$ Byzantine faults, chosen dynamically.
The protocols are information-theoretically secure with high probability,
or in other words, the results are statistically resilient. No unproven
complexity-theoretic assumptions are made.
§ VERIFIABLE TIME-RELEASE MESSAGES
There is a long history of solutions for VSS under various
unproven assumptions. We
wish to avoid unproven cryptographic assumptions and to tolerate $2t<n.$ We
shall utilize a method for VSS for $2t<n$ very similar to that given by
[106], but somewhat more convenient and efficient. Briefly, it uses
Shamir's method for sharing a secret, and requires that each piece be
reshared using a weak form of sharing. The weaker form of sharing includes
information called “check vectors,” which allow verification of the
pieces. Our modification of the protocol is a new method for constructing
check vectors. For completeness, because formal proofs of
[106, 107] have not appeared, and
because we use different subroutines,
we shall present the essential details of [106, 107] but
we refer the reader to that work for a deeper discussion.
The verification property of Rabin's scheme relies on a subprotocol for
what we call Verifiable Time-Release Messages.
We consider three players, a sender S, a receiver R, and an intermediary I.
The sender would like to give a secret bit to the intermediary, who will
pass it on at a later time to the receiver. The receiver must be able to
detect any tampering on the part of the intermediary. At the same time,
the intermediary should know whether the information given him will in fact
satisfy the receiver at the appropriate time. He must therefore be able to
check the behavior of the sender during the initial part of the protocol.
Rabin [106] provides an elegant way to satisfy these properties
in a manner similar to secret sharing with threshold 1. Her
idea involves generating nonzero random numbers $u$ and $v$ $\mod p$ and
setting $w=b+uv.$ The receiver gets $(v,w)$ while $I$ gets $(b,u).$ When
$I$ passes the value on to R later, he sends $(b,u),$ and R checks that
$w=b+uv.$ In order to convince R of an incorrect value, the intermediary
must change $u$ correctly, which he can do only with negligible
probability. Checking that the sender is correct uses a cut-and-choose
idea, which we shall use below.
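A minimal in-the-clear sketch of the check-vector idea (the modulus and the names `deal` and `release` are illustrative):

```python
import secrets

P = 2**31 - 1  # illustrative prime modulus

def deal(b, p=P):
    """Sender: give (b, u) to the intermediary and (v, w) to the
    receiver, where w = b + u*v with u and v nonzero and random."""
    u = 1 + secrets.randbelow(p - 1)
    v = 1 + secrets.randbelow(p - 1)
    w = (b + u * v) % p
    return (b, u), (v, w)

def release(i_share, r_check, p=P):
    """Receiver: accept the intermediary's claimed (b, u) iff
    w == b + u*v; tampering with b alone always fails the check."""
    b, u = i_share
    v, w = r_check
    return b if (b + u * v) % p == w else None
```

To convince R of a different bit, I would have to solve $b'+u'v=w$ for $u'$ without knowing $v,$ which succeeds with probability $1/(p-1).$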
We present a simple alternative with the nice property that
no multiplications are necessary. In terms of polynomial-time
algorithms, this makes no difference, but in terms of actual
implementations, multiplications over a finite field are
considerably more expensive than additions.
Our approach is derived from work on a
different problem with Feigenbaum and Shoup [16].
If S and R were to share a one-time pad (a private sequence of random
bits), then how may S send a secret bit to R via another player I? Player
I might change some of the bits. Our simple solution uses $3k$ bits of the
one-time pad and ensures that tampering is detected with probability
$1-2^{-k}.$ See Figure <ref>. Intuitively, for I to
change the bit $b$ without detection, it must guess the exact sequence of
bits $\alpha(1),\ldots,\alpha(k),$ which it can do with probability at most
$2^{-k}.$
Phase I.
S reads $\set{(\alpha(i),\beta_0(i),\beta_1(i))}_{i=1..k}$
from the one-time pad.
$(1\leq i \leq k)$ S:
$\gamma_0(i) \leftarrow \beta_0(i) + b \cdot \overline{\alpha}(i)$
$\gamma_1(i) \leftarrow \beta_1(i) + b \cdot {\alpha}(i)$
$M_b \leftarrow \set{(\gamma_0(i),\gamma_1(i))}_{i=1..k}.$
$S \rightarrow I:$ $M_b.$
Phase II.
$I \rightarrow R:$ $M_b.$
Protocol for S to send a bit $b$ to R via player I.
Note that addition may be either modulo two, if the $\alpha$ and
$\beta$ values on the pad are bits, or it may be addition over
a finite field, if the $\beta$ values are field elements.
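Over GF(2), where the additions in the figure become XOR, the scheme can be sketched as follows (the names `encode` and `decode` are ours):

```python
def encode(b, pad):
    """Sender: per triple (alpha, beta0, beta1), mask b into slot 0
    when alpha == 0 and into slot 1 when alpha == 1."""
    return [(b0 ^ (b & (1 ^ a)), b1 ^ (b & a)) for a, b0, b1 in pad]

def decode(msg, pad):
    """Receiver: recover b; return None on any detected tampering.
    Each inactive slot must come back unchanged, and every active
    slot must decode to the same bit."""
    bits = set()
    for (g0, g1), (a, b0, b1) in zip(msg, pad):
        if a == 0:
            if g1 != b1:
                return None
            bits.add(g0 ^ b0)
        else:
            if g0 != b0:
                return None
            bits.add(g1 ^ b1)
    return bits.pop() if len(bits) == 1 else None
```

To flip $b$ undetected, I must flip exactly the active slot of every triple, which requires guessing all $k$ bits $\alpha(1),\ldots,\alpha(k).$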
We use this verifiable one-time pad to construct a verifiable
time-release scheme
as follows. Roughly speaking, the sender sends the
one-time pad privately to R, and sends the message $M_b$ to I, who holds
onto it until the time it should be released. The one-time pad serves
as a check vector, a sequence
of bits used to check the accuracy of the secret held by I.
The property that the
intermediary is convinced that R will accept the message later on must also
be satisfied, and we now turn our attention to that problem.
As in [106], we use a cut-and-choose method. Instead of generating
$k$ $(\alpha,\beta_0,\beta_1)$ triples, generate $2k$ of them. Half of
these will be revealed to I by R so that I can check for misbehavior. In
order for S to cheat undetected, it must misbehave on exactly that half of
the triples that R does not immediately send to I. Notice that $I$
learns $b$ upon receiving a single $(\alpha,\beta_0,\beta_1)$ triple. To
keep $b$ secret from $I,$ the sender simply splits $b$ into two random bits
as $b=b_I \oplus b_R,$ and gives $b_I$ to $I$ and $b_R$ to $R.$
The protocol is detailed in Figure <ref>. If $M$ is a message
that one party is supposed to send to another, we denote by $M'$ the
message it actually sends.
Intuitively, if S attempts to cheat, it must guess in advance the exact
set of indices $i(1),\ldots,i(k)$ that R selects at random, and it must
behave properly on that list but must cheat on every other row of the
table. The chances of this are certainly at most $2^{-k}.$
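The bound can be stated exactly: to cheat undetected, S must misbehave on precisely the $k$ indices outside R's random selection, i.e. it must guess R's random $k$-subset of the $2k$ indices. A one-line sketch:

```python
from math import comb

def sender_cheat_prob(k):
    """Probability that S guesses R's random k-subset of 2k indices."""
    return 1 / comb(2 * k, k)
```

Since $\binom{2k}{k} \geq 2^k$ for $k \geq 1,$ this is at most $2^{-k}.$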
Phase I.
$S$ computes:
$b_I \leftarrow \uniform(\set{0,1});$ $b_R \leftarrow b \oplus b_I.$
$S$ computes $(i=1..2k):$
$\{(\alpha(i),\beta_0(i),\beta_1(i))\} \leftarrow \set{0,1}^{3}$
$\gamma_0(i) = \beta_0(i) \oplus b_I \cdot \overline{\alpha}(i)$
$\gamma_1(i) = \beta_1(i) \oplus b_I \cdot {\alpha}(i)$
$S \rightarrow R:$
$b_R; \set{(\alpha(i),\beta_0(i),\beta_1(i))}_{i=1..2k}.$
$S \rightarrow I:$
$b_I; \set{(\gamma_0(i),\gamma_1(i))}_{i=1..2k}.$
$I$ generates:
$i(1)..i(k) \in_R \set{1,\dots,2k}$ such that
$i(j) \not= i(j')$ for $j \not= j'.$
$I \rightarrow R:$ $i(1),\ldots,i(k).$
$R \rightarrow I:$ $\set{(\alpha(i(j)),\beta_0(i(j)),\beta_1(i(j)))}_{j=1..k}.$
If there exists $j$ such that
$\gamma'_0(i(j)) \not= \beta''_0(i(j)) \oplus b_I
\cdot \overline{\alpha''}(i(j))$ or
$\gamma'_1(i(j)) \not= \beta''_1(i(j)) \oplus b_I
\cdot {\alpha''}(i(j)),$
then let $I'=\rej;$ else let $I'=\acc.$
$I \rightarrow S,R:$ $I'.$
Phase II.
$I \rightarrow R:$
$b_I, \set{(\gamma'_0(i),\gamma'_1(i))}_{i=1..2k}.$
If there exists $i$ such that
$\gamma'_0(i) \not= \beta''_0(i) + b_I \cdot \overline{\alpha''}(i)$ or
$\gamma'_1(i) \not= \beta''_1(i) + b_I \cdot {\alpha''}(i),$
then let $R=\rej;$ else let $R=\acc.$
Protocol for S to send a bit $b$ to R via player I, who holds on to the bit
before passing it on. If $M$ is a message that the protocol specifies to
send, then $M'$ denotes the message actually sent.
Note that $\oplus$ may denote field addition instead of exclusive or,
in which case the $\beta$ values
are instead selected uniformly at random from the field.
Using field addition is conducive to adding the unreleased
values together without revealing them.
§.§ Simulating the Time-Release Scheme
No formal proof of the techniques used in [106] has appeared to
date, so we must include a proof for completeness.
Protocol $\vertimerel$ is exponentially
$2$-resilient against Byzantine adversaries.
The ideal protocol $\idVTR$ is as follows. In Phase I,
the sender $S$ sends two bits, $b_I$ and $b_R,$ to the host.
The host sends $b_I$ to $I$ and $b_R$ to $R$ if it indeed received
two bits; otherwise it sends $\Lambda.$ Then, $I$ and $R$ send
a 0 to the host to indicate they wish to participate, and the host
passes on either $0$ or $\Lambda$ to $I$ and $R$ to indicate whether
they chose to participate. In Phase II, $I$ again sends 0 to indicate
it wishes to reveal the value, and the host sends $b_I$ to $R,$ or
it sends $\Lambda$ if $I$ supplied a nonzero value.
The simulator does the following. For corruptions before round
(VTR1), it corrupts the corresponding player in $\idealname(VTR)$
and returns the inputs to $A.$ To generate the message from honest
$S$ to corrupt $R,$ the interface generates $6k$ random field elements
and sets $b_R$ by corrupting $R_{id}$ in $\idealname(VTR)$ after it receives
$b_R.$ To generate the message from honest $S$ to corrupt
$I,$ the interface generates $4k$ random field elements
and sets $b_I$ by corrupting $I_{id}$ in $\idealname(VTR)$ after it receives $b_I.$
If both $R$ and $I$ are corrupt, then the interface corrupts them
in $\idealname(VTR)$ to obtain $b_I$ and $b_R,$ calculates
$b=b_I\oplus b_R,$ and performs the computation of $S$ to generate
the $\alpha, \beta, \gamma$ tables. If $S$ is corrupt, the interface
obtains its two messages, checks to see if they are possible
(by solving for $\gamma_0,\gamma_1$), and if not, causes $S_{id}$ to
output $\Lambda.$ Notice that whereas
$\rvAA(\mess(S,\{I,R\},1)) = \rvASB(\mess(S,\{I,R\},1)),$
the resulting distribution on final player outputs in $\idealname(VTR)$
will be only exponentially indistinguishable from those in
$\vertimerel,$ because with probability at most
$2^{-k},$ $S$ will cheat undetected in $\anglebrack{A,\vertimerel}.$
In that case, because the players in $\idealproto(VTR)$ reject $S,$
their outputs are different. But by Lemma <ref>, we
may use a probabilistic function that differs exponentially little
from the “proper” one, and the net result is exponentially close.
New corruptions are easy to specify by solving the equations
$\gamma_0(i)=\beta_0(i) + b \cdot \overline{\alpha}(i)$ and
$\gamma_1(i)=\beta_1(i) + b \cdot {\alpha}(i)$ using information
from corrupting a player in the ideal protocol or from the variables
sampled already in (VTR1).
If $I$ is nonfaulty whereas $R$ is faulty, then the interface simply generates
$i(1),\ldots,i(k)$ randomly as specified and supplies them to $A.$
If $I$ is faulty and sends an improper list, then the interface specifies
that $I_{id}$ send $\Lambda$ to the trusted host to indicate
lack of participation. The distributions on outputs of
$R$ and $R_{id}$ will be identical, namely both will indicate $\Lambda.$
If $R$ is nonfaulty whereas $I$ is faulty, then the interface does one
of two things. If $S$ is already corrupted, it uses the values
of $\alpha,\beta_0,\beta_1$ already sent by $A.$ If $S$ is not corrupt,
the interface nevertheless knows $b_I$ and
the values of $\gamma_0$ and $\gamma_1$
held by $I,$ so it generates the $k$ values of $(\alpha,\beta_0,\beta_1)$
by selecting $\alpha$ uniformly at random and solving for $\beta_0$
and $\beta_1.$ It is not hard to see that this is the correct
conditional distribution given the values of $\gamma_0$ and $\gamma_1,$
if $S$ and $R$ are not corrupt.
If $I$ is faulty, the interface does nothing. Otherwise,
if $I$ is nonfaulty, and $S$ is faulty,
the interface can calculate from the current history
whether $I$ detects if $S$ has cheated, and it sets
$\rvASB(\mu(I,\{S,R\},4),4)$ accordingly. If $I$ does detect
cheating then the interface requests that $S_{id}$ send $\Lambda$
to the trusted host, in which case the output of nonfaulty players
in $\idealname(VTR)$
will have a distribution exponentially indistinguishable from those
in $\rvAA.$
If $I$ is nonfaulty and $R$ is faulty, then the interface has
a record of what $S$ sent to $R.$
In order to demonstrate the resilience of the time-release scheme, we must
demonstrate a simulator. The task is not difficult but requires
case-by-case analysis. We consider 8 possibilities corresponding to
whether or not S, I, and R are corrupted (bad or good) respectively.
First let us assume the adversary is static, and later we examine how to
extend our simulator to handle a dynamic adversary.
todo: fill in cases – easy
Case GGG.
Nothing to simulate.
Case GGB.
Case GBG.
Case GBB.
Case BGG.
Case BGB.
Case BBG.
Case BBB.
In the VSS
method of [106], all players act both as intermediaries
and recipients for pieces of the secret $u.$ Each piece $\piece_i(u)$ of
$u$ is given to an intermediary $i,$ and check vectors $s_j(\piece_i(u))$
are sent out to every other processor $j.$ At reconstruction time, the
intermediaries $i$ broadcast their check vectors $\vec{s}(\piece_i(u)),$
which include $\piece_i(u).$ Each player, acting as a recipient,
reconstructs $u$ using the pieces he concludes are accurate.
(after T. Rabin [106])
There exists a VSS protocol for $2t<n$ that is $t$-resilient.
expand, list, prove
§ ADDITION OF SECRETS
It is possible though not trivial to extend the VSS method given above
to a method for linearly combining secrets.
Say that $u$ and $v$ were shared using polynomials $f(x)$ and
$g(x),$ respectively.
Let $h(x)=af(x)+bg(x)+c;$ then $h(0)=af(0)+bg(0)+c =w.$
Sums of the pieces are pieces of the sum;
Each player $i$ uses
\[
\piece_i(w) = a \cdot \piece_i(u) + b \cdot \piece_i(v) + c
\]
as his piece of $w.$
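Over a toy prime field, the local combination step can be sketched as follows; the helper names (`share`, `reconstruct`), the modulus, and the parameter choices are illustrative assumptions, not the thesis's notation.

```python
# A hedged sketch of "sums of the pieces are pieces of the sum":
# Shamir shares of u and v combine pointwise into shares of
# w = a*u + b*v + c, with no interaction at all.
import random

P = 2**31 - 1  # a prime modulus; any field E works
random.seed(0)

def share(secret, t, xs):
    """Degree-t polynomial with free term `secret`, evaluated at xs."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return {x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            for x in xs}

def reconstruct(pieces):
    """Lagrange interpolation at 0 from a dict {x: p(x)}."""
    total = 0
    for xi, yi in pieces.items():
        num, den = 1, 1
        for xj in pieces:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

t, xs = 2, [1, 2, 3, 4, 5]
u, v = 12345, 67890
a, b, c = 3, 5, 7
pu, pv = share(u, t, xs), share(v, t, xs)
# Each player i locally sets piece_i(w) = a*piece_i(u) + b*piece_i(v) + c.
pw = {x: (a * pu[x] + b * pv[x] + c) % P for x in xs}
assert reconstruct(pw) == (a * u + b * v + c) % P
```

Because the combination is purely local, it leaks nothing; the interactive work in the protocol below is only the resharing and check-vector machinery needed to make $w$ verifiable.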
In order that $w$ be a verifiable secret according to the VSS protocol,
each $\piece_j(w)$ must be reshared in a weak fashion. This involves
the use of check vectors for the pieces of $h(j).$
Each player $i$ holds pieces of every other
player's pieces, $\piece_i(\piece_j(u))$ and $\piece_i(\piece_j(v)).$
He sets
\[
\piece_i(\piece_j(w)) =
a \cdot \piece_i(\piece_j(u)) + b \cdot \piece_i(\piece_j(v)) + c.
\]
The only remaining part of the VSS structure is the set of
check vectors for the pieces $\piece_j(\piece_i(w)).$
Since player $i$ knows $\piece_i(w),$ he creates and distributes new
check vectors for the pieces of this value. The other players check the
correctness of his vectors using the protocol specified in
the VSS scheme [106].
Compute $w = au+bv+c.$
* Set
\[ \piece_i(w) = a \cdot \piece_i(u) + b \cdot \piece_i(v) + c. \]
* For each $j,$ compute pieces of the pieces:
\[
\piece_i(\piece_j(w)) =
a \cdot \piece_i(\piece_j(u)) + b \cdot \piece_i(\piece_j(v)) + c.
\]
* Choose new check vectors
for $\piece_j(\piece_i(w))$ for all players $j\not = i,$
and participate in their verification,
as per the VSS protocol.
* Receive check vectors for other pieces $\piece_i(\piece_j(w))$
and participate in their verification, as per the VSS protocol.
Protocol for linear combinations of secrets.
(Code for processor $i.$)
The protocol is resilient
because each of its steps is either a non-interactive computation
of a robust and private representation of some secret value,
or an application of a subprotocol for VSS that is itself resilient.
It is clear that players who deviate from the protocol cannot create
convincingly false check vectors with probability
exceeding some $\frac{1}{2^{k_1}},$ as determined by
the security parameter $k_1$ of the VSS protocol.
In this case, an interface would fail to be able to provide a
correct argument, since the adversary would learn information
to which it is not entitled, and the interface would need to
corrupt additional players to obtain that information.
The net weight of these events is exponentially small, however.
In fact, the only undetectable misbehavior is the possible
choice of acceptable check vectors
according to an inappropriate distribution.
This is, however, easily dealt with by an interface, which
needs merely record what the adversary specifies;
the interface need not itself generate these distributions.
Note that nonfaulty players' behavior is independent
of how the faulty players generate check vectors, even
acceptable ones.
§ MULTIPLICATION OF SECRETS
The protocol for multiplication of secrets is more complicated;
the key new idea is a method for giving a proof that the
product of two secrets is a third. In fact, the ability to
prove products of secrets is the basis for achieving high
fault-tolerance in all of the results of this paper.
The protocol we shall present relies on a protocol for addition
of secrets.
Our solution follows a few brief steps (cf. [28]).
As before,
let $u$ and $v$ be shared using $f(x)$ and $g(x).$
Each player $i$ secretly shares the value $f(i)g(i)$ and
“proves” that he has in fact shared this value
(see <ref>).
If his proof fails, he is disqualified (see <ref>).
From the collection of secret products, the network determines
the polynomial $h(x)=f(x)g(x),$
of degree $2t$ and free term $uv.$
Using a protocol to add a random polynomial
of degree $t+1$ and free term 0
and then to truncate the polynomial to degree $t$
(see <ref>),
each player $i$ is supplied with
the value $h(i)$ for the resulting polynomial
$h(x)$ of degree $t$ and free term $uv.$
Figure <ref> describes the protocol.
* Secretly share $h(i)=\piece_i(u) \piece_i(v).$
* Run the protocol to prove that player $i$ shared this value.
* Receive pieces $\piece_i(h(j))$ from each player $j \not= i.$
* Participate in the protocol for $2t<n$ to verify that
player $j$ shared the correct value.
* Run the protocol using $h(i)$ to obtain a
piece $\piece_i(w)$ of a random, degree $t$ polynomial
whose free term is $w.$
Protocol to multiply two secrets.
(Code for processor $i.$)
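The degree argument underlying this protocol can be checked numerically. The sketch below (with illustrative helper names and parameters) verifies that the pointwise products $f(i)g(i)$ lie on $h(x)=f(x)g(x)$, a degree-$2t$ polynomial with free term $uv$, which is why $2t<n$ players suffice to determine it and why degree reduction back to $t$ is then needed.

```python
# Toy check: n = 2t+1 local products of Shamir pieces interpolate
# to the product of the secrets.
import random

P = 2**31 - 1
random.seed(1)

def poly(secret, t):
    return [secret] + [random.randrange(P) for _ in range(t)]

def ev(coeffs, x):
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

def interp0(points):
    """Lagrange interpolation at 0 from {x: h(x)}."""
    total = 0
    for xi, yi in points.items():
        num = den = 1
        for xj in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

n, t = 5, 2          # 2t < n
u, v = 111, 222
f, g = poly(u, t), poly(v, t)
# Player i's local product of its two pieces:
prod_pieces = {i: ev(f, i) * ev(g, i) % P for i in range(1, n + 1)}
# All n = 2t+1 points determine the degree-2t polynomial h = f*g:
assert interp0(prod_pieces) == u * v % P
```

Note that $h$ is not a uniformly random degree-$2t$ polynomial, which is exactly why the protocol adds a random polynomial with free term 0 before truncating.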
§.§ Verifiable Multiplication
In order to accomplish the resilient verifiable sharing of $f(i)g(i),$
we must provide a solution to the ABC problem when $2t<n.$
In fact, we describe a new method that shows that, if secret addition
is possible, then multiplication is possible, for $2t<n.$
Given a protocol for the ABC problem, Alice will be able to
prove to the network that the new secret which she shares
is indeed $f(i)g(i).$
(ABC Lemma.)
If there exists a $t$-resilient protocol for linear combinations
of secrets, then
there exists a $t$-resilient protocol to solve the ABC problem.
The protocol is exhibited in Figure <ref>.
First, an overview:
In the first phase,
Alice shares several triples of secrets $({\cal R},{\cal S},{\cal D})$
satisfying a simple equation
(of the form ${\cal D}=(a+{\cal R})(b+{\cal S})$),
which will be used to ensure that Alice does not misbehave.
In the second phase,
the players select and reveal combinations of some of these triples in order
to confirm that every triple satisfies the simple equation.
Finally, each unrevealed triple of secrets gives rise to
a simple linear combination
of secrets that should equal the desired product $ab.$
The third phase checks that the linear combinations are consistent.
Let ${k_0}$ denote a security parameter; the chance of incorrectness
will be bounded by $\frac{1}{2^{k_0}}.$
For simplicity we take ${k_0}>n$ and ${k_0}$ a power of two.
* Let $a$ and $b$ be verifiably shared.
* Alice verifiably shares $c = ab.$
* Alice shares secrets $r_1,\dots,r_{2{k_0}}$ and $s_1,\dots,s_{2{k_0}}$
chosen uniformly at random over the field $E.$
* For $j = 1,\dots,2{k_0},$ Alice computes $d_j = (a+r_j)(b+s_j),$
and shares $d_j.$
* The network confirms with high probability that each
$d_j = (a+r_j)(b+s_j):$
* For $i = 1,\dots,{k_0},$ the network selects a random index
$j_i$ by having each player select a random secret $\mod (2{k_0}+1-i),$
sharing it (over fields of characteristic $p$ such that
$p \mid 2{k_0}+1-i,$ so that the distribution is uniform), and
then computing the sum.
* Let $Y = \set{j_1,\dots,j_{k_0}}.$
* For all $j \in Y,$ run the addition protocol on $(a,r_j)$
to obtain the sum $a+r_j.$
* For all $j \in Y,$ run the addition protocol on $(b,s_j)$
to obtain the sum $b+s_j.$
* For all $j \in Y,$ reconstruct $a+r_j,$ $b+s_j,$ and $d_j.$
Disqualify Alice if any of the $d_j$ are not equal to $(a+r_j)(b+s_j).$
* The network confirms that $c$ matches the product $ab.$
* For all $j \not \in Y,$ reconstruct the values $r_j, s_j.$
* For all $j \not \in Y,$ compute
\[ c_j = c - d_j + as_j + br_j + r_j s_j \]
* For all $j \not \in Y,$ reconstruct $c_j.$
* If any $c_j \not= 0,$ disqualify Alice.
Protocol to prove that the product of two secrets is a third secret.
(Code for Alice and network.)
Now let us consider the protocol in more depth.
If Alice is honest,
all of the sums $(a+r_j)$ and $(b+s_j)$ are independent of
$a$ and $b,$ since $r_j$ and $s_j$ are uniformly
random field elements. The products $d_j$ are also independent
of $a$ and $b.$ The choice of ${k_0}$ indices $j_1,\dots,j_{k_0}$ to check
is independent of $a$ and $b,$ given that any one player in the
system is honest and shares uniformly random numbers.
The partial views obtained by faulty players during the addition protocols
are independent of the good players' inputs.
Finally, since Alice is honest, $d_j = (a+r_j)(b+s_j) = c + as_j+br_j+r_j s_j,$
so $c_j=0$ always.
Therefore, the set of messages are $t$-wise
independent of $a$ and $b,$
and messages from nonfaulty to faulty players are easily generated
accurately by an interface.
We would like to show that the chance that $c \not = ab$ without
Alice's being detected is smaller than $\frac{1}{2^{k_0}}.$
We shall show that she must behave properly on exactly the indices
in $Y$ and she must misbehave on all the others.
Let $X$ be the set of indices $j$ for which Alice shares $d_j$ correctly,
that is, for which Alice shares $d_j$ having the value $(a+r_j)(b+s_j).$
The set $Y$ of indices chosen by the system must be
a subset of $X,$ or else Alice's misbehavior is detected
for some $j \in Y \backslash X.$
The remaining indices $j \not \in Y$ must all satisfy
$c_j = c - d_j + as_j + br_j + r_j s_j = 0$
or else Alice is caught.
If $c \not= ab$ (Alice is cheating),
none of the indices $j \not \in Y$ is in $X.$
Hence $X=Y,$ and
since $Y$ is chosen randomly after Alice has shared all her secrets,
the probability that Alice can cheat without being detected
is no more than $\frac{1}{2^{k_0}}.$ An interface fails to
produce a correct output
an exponentially small amount of the time, hence the overall
distributions are exponentially indistinguishable.
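The algebraic identity behind the check is worth seeing in isolation: with $d_j = (a+r_j)(b+s_j)$, the opened value $c_j = c - d_j + as_j + br_j + r_js_j$ equals $c - ab$, so $c_j = 0$ for an unopened index exactly when $c = ab$. The field size and values below are illustrative.

```python
# Sanity check of the ABC identity over a toy prime field.
P = 101
a, b = 13, 29
r, s = 57, 88            # one of Alice's random pairs (r_j, s_j)
d = (a + r) * (b + s) % P   # d_j = (a+r_j)(b+s_j)

c_honest = a * b % P        # Alice shared the true product
c_cheat = (a * b + 1) % P   # Alice shared something else

def c_j(c):
    # c - d_j + a*s_j + b*r_j + r_j*s_j  ==  c - ab  (all terms cancel)
    return (c - d + a * s + b * r + r * s) % P

assert c_j(c_honest) == 0
assert c_j(c_cheat) != 0
```

Since $r_j$ and $s_j$ are uniform, opening $a+r_j$, $b+s_j$, or (for unopened indices) $r_j$ and $s_j$ reveals nothing about $a$ and $b$ themselves.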
§ THE PROTOCOL COMPILER FOR FAULTY MINORITY
At this point we have described all the tools necessary to
create a multiparty protocol to evaluate any circuit $C_F$ for
$F(x_1,\dots,x_n)$ privately, tolerating $t< \frac{n}{2}$ faults.
The protocol is the same as that of protocol (Figure <ref>), with the new addition and multiplication protocols of this
chapter substituted for the ones called by that specification. The
disqualification procedure is different, as well: after each cycle of
evaluating gates, a recovery procedure is initiated if faults have been detected.
Let $\tau$ be the number of faults. The $(n-\tau,t-\tau)$ recovery
protocol uses a stripped-down version of the protocol to reduce
the degrees of each polynomial being used by $\tau.$ The remaining players
in fact reconstruct each piece of the various secrets held by the
$\tau$ faulty players, using straightforward linear combinations of
existing pieces, as seen by the following argument. Without loss of
generality take the set of reliable players to be $1,\ldots,t+1.$ Then the
Lagrange interpolation gives
\[
p(j) = \sum_{i=1}^{t+1} L_i(j) p(i),
\]
so that the piece $\piece_j(s) = p(j)$ is easily computed as a weighted sum
of secrets. That is, each player verifiably reshares his pieces and then
the system computes the given linear combinations. If more faults occur
then the process is restarted. Notice that the players do not
reconstruct all the information held by the faulty players; instead, they
reconstruct pieces of secrets held by the faulty processor. This
bears no relation to the value of that faulty player's input; such
information is not revealed.
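The recomputation step can be sketched directly: with a degree-$t$ sharing, any $t+1$ reliable pieces determine the piece $p(j)$ of a disqualified player $j$ via the Lagrange weights $L_i(j)$. The parameters below are illustrative.

```python
# Recover the piece p(j) held by a disqualified player from t+1
# reliable pieces, using Lagrange weights L_i(j).
import random

P = 2**31 - 1
random.seed(2)

t = 3
coeffs = [random.randrange(P) for _ in range(t + 1)]  # p(u), degree t

def p(x):
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

reliable = [1, 2, 3, 4]   # t+1 reliable players; player j was disqualified
j = 7
recovered = 0
for xi in reliable:
    num = den = 1
    for xk in reliable:
        if xk != xi:
            num = num * (j - xk) % P      # numerator of L_i(j)
            den = den * (xi - xk) % P     # denominator of L_i(j)
    recovered = (recovered + p(xi) * num * pow(den, P - 2, P)) % P
assert recovered == p(j)
```

In the actual protocol the players do not learn $p(x_i)$ in the clear, of course; each weighted term is computed on verifiably reshared pieces, so only $p(j)$ itself becomes public.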
Now, each reliable player has learned the pieces held by disqualified
players. It now adjusts the value of its own pieces through a simple
computation in order to obtain pieces of the original secrets, but shared
with polynomials of degree $t-\tau.$ For concreteness, assume only one
player has been disqualified. Say that secret $s$ is shared via polynomial
$f(u)=\sum_{i=0}^t a_i u^i.$ Player $i$ has $\piece_i(s)=f(\alpha_i).$ The
newly publicized piece of the disqualified player is $f(\alpha_j)=\beta.$
Let $m_1(u)=\prod_{i\not=j} (u-\alpha_i),$ $m_2(u)=(u-\alpha_j),$ and
$M(u)=m_1(u) m_2(u).$ Define $g(u) = f(u) \mod m_1(u);$ then by the Chinese
Remainder Theorem,
\begin{eqnarray*}
M_1(u) & = & [m_2(u)^{-1} (\mod m_1(u))] \cdot m_2(u) \\
M_2(u) & = & [m_1(u)^{-1} (\mod m_2(u))] \cdot m_1(u) \\
f(u) & = & M_1(u) g(u) + M_2(u) \beta \hspace{0.3in} (\mod M(u))
\end{eqnarray*}
Then player $i$ computes its new piece of $s$ as:
\[
g(\alpha_i) = [f(\alpha_i) - M_2(\alpha_i) \cdot \beta ] / M_1(\alpha_i).
\]
When using or revealing this value, the verification information is easily
adjusted since it verifies the $f(\alpha_i)$ value, which is itself easy
to recover from the $g(\alpha_i)$ value since $M_1,$ $M_2,$ $\alpha_i,$ and
$\beta$ are publicly known.
§ PROOF OF RESILIENCE
CHAPTER: MULTIPARTY ZERO-KNOWLEDGE PROOF SYSTEMS
In this chapter, we generalize the concept of a zero-knowledge proof
system in two ways. First, we consider
proofs that some condition holds on secretly shared values. One player,
the prover , knows the values of a collection of secrets,
and through an
interaction with the rest of the network, it proves that a given predicate
$\predicate$ holds on those secrets. Second, we consider efficient interactive
proofs of general theorems, where secrecy is not the goal. Rather, the
prover must prove a given theorem (i.e. prove membership in a
language) to one or more verifiers. Instead of making unproven
cryptographic assumptions to guarantee zero-knowledge, we utilize the
presence of a majority of trusted players. No single player need be trusted.
Assumptions made in this chapter. The network is complete, with
private lines, $n$ processors, and at most $t < \frac{n}{2}$ Byzantine
faults, chosen dynamically. The protocols are unconditionally secure with
high probability; the results are statistical zero-knowledge (with high
probability no extra information is revealed). There are two
distinguished parties, a prover, which is either polynomial-time or
polynomial-space bounded, and a verifier, which is either unbounded
or polynomial-time bounded.
§ ZERO-KNOWLEDGE PROOFS ABOUT SECRETS
In Chapter <ref> we considered the ABC Problem: Alice, the prover,
must prove that $c=ab,$ where $a,b,$ and $c$ are secrets. Here, we extend
Lemma <ref> to demonstrate that a player may prove that a given
predicate $\predicate$ holds on $m$ shared secrets $v_1,\dots,v_m,$
revealing no additional information. In a sense, this is analogous to the
two-player problem of giving zero-knowledge proofs that a predicate holds
on committed bits [84]. Here, we consider values shared among a
network instead of committed bits, which have slightly different
properties. For example, though committed bits are secret and
unchangeable, they cannot be added together directly. Since most secure
multiparty protocols are based on threshold schemes like secret sharing,
proofs about shared values are an important tool.
Let $\predicate$ be a function on $m$ variables over the field $E$ and let
$C_{\predicate}$ be an algebraic circuit over $E$ which computes
$\predicate;$ note that any boolean predicate on boolean inputs is easily
modelled using arithmetic over $E.$
Let $\prover$ and $\verifier$ be distinguished parties in the network. We
consider $\prover$ or $\verifier$ to be “cheating” if they are corrupted
by an adversary $A.$
Let the prover's input be $x_{\prover}=(v_1,\dots,v_m,w)$
and let all other $x_i=0.$
We define a function whose output is known only to the verifier:
$F_i(x_1,\dots,x_n) = 0$ if $i \not= \verifier,$ and
\[
F_{\verifier}(x_1,\dots,x_n) =
\left\{
\begin{array}{rl}
1 & \mbox{ if } x_{\prover}=(v_1,\dots,v_m,w) \mbox{ and }
w = \predicate(v_1,\dots,v_m) \\
0 & \mbox{ otherwise. } \\
\end{array}
\right.
\]
Without regard to zero-knowledge, a simple proof system is trivial:
simply reconstruct all the secrets and check if $w = \predicate(v_1,\dots,v_m).$
A protocol $\Pi$ is an $(n,t)$ multiparty zero-knowledge
interactive proof system on secrets for $\predicate$ if it
is a $t$-resilient $n$-player protocol for the function $F_{\predicate}$
defined above.
The terms statistical and computational apply according
to whether the protocol is statistically or computationally resilient.
We call such a protocol a , for short.
Theorem <ref>
tells us that there exists a for any function $\predicate:$
simply compute $F_{\predicate}$ using the protocol.
The purpose of this section, aside from defining the concept of proof
systems with many parties, is to introduce a protocol which allows a direct
proof system which is far more efficient than simulating $C_{\predicate}.$
Simulating a circuit level by level (or even $(\log n)$-slice by slice) is
an expensive undertaking. We can take advantage here, however, of the fact
that the prover already knows the results of all of the gates. The prover
shares the results of each and every gate, reducing the task of the
verifier to checking that each shared result is correct. In other words,
the system does not need to evaluate the circuit layer by layer, but rather
only to verify that the collection of secrets representing the output
values of each gate is correct. This verification requires only a local
examination of the secret inputs and secret output of each gate, an
operation which can be performed simultaneously for all the gates.
Figure <ref> details the protocol.
For $0 \leq l \leq {\tt depth}(C_{\predicate}),$
$1 \leq j \leq {\tt width}(C_{\predicate})$ :
$\prover$ shares the output of each gate $g_{lj}$ as a secret, $w_{lj}.$
For $0 \leq l \leq {\tt depth}(C_{\predicate}),$
$1 \leq j \leq {\tt width}(C_{\predicate})$ :
the network secretly computes $u_{lj}$ as follows:
if $g_{lj}=(\times),$ run the multiplication protocol to compute
$u_{lj} = w_{\inputgates_1(g_{lj})} \cdot w_{\inputgates_2(g_{lj})} - w_{lj}.$
if $g_{lj}=(+,a_0,\ldots,a_m),$ run the addition protocol to compute
$u_{lj} = a_0 + \sum_{r=1}^m a_r w_{\inputgates_r(g_{lj})} - w_{lj}.$
For $0 \leq l \leq {\tt depth}(C_{\predicate}),$
$1 \leq j \leq {\tt width}(C_{\predicate})$ :
reveal $u_{lj}$ to $\verifier.$
If all $u_{lj}=0,$ $\verifier$ outputs accept, else it outputs reject.
Protocol to prove $w = \predicate(v_1,\dots,v_m).$
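The gatewise check can be sketched in the clear, with the secret sharing elided; the toy circuit and helper names below are illustrative assumptions, not the thesis's protocol code.

```python
# Plain sketch of the gatewise verification: the prover supplies every
# wire value w, and for each gate the network opens only the difference
# u between the claimed output and the gate applied to the claimed
# inputs -- all zeros iff the wire values are a consistent evaluation.
P = 97

# Toy circuit for predicate(v1, v2) = v1*v2 + 3 over GF(97):
# gate 0: (*, inputs v1, v2); gate 1: (+, const 3, input gate 0)
def eval_wires(v1, v2):
    w0 = v1 * v2 % P
    w1 = (w0 + 3) % P
    return [v1, v2, w0, w1]

def gate_checks(wires):
    v1, v2, w0, w1 = wires
    u0 = (w0 - v1 * v2) % P          # multiplication gate
    u1 = (w1 - (w0 + 3)) % P         # linear gate
    return [u0, u1]

honest = eval_wires(5, 6)
assert gate_checks(honest) == [0, 0]

cheating = honest[:]
cheating[2] = (cheating[2] + 1) % P  # prover lies about one gate output
assert any(u != 0 for u in gate_checks(cheating))
```

Since an honest prover's differences are identically zero, opening them reveals nothing; and the checks for all gates can run in parallel, which is the source of the constant round complexity.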
The results of Chapter <ref> give the following:
Let $\predicate$ be a family of predicates and let $F_{\predicate}$ and
$C_{\predicate}$ be as described above. For $2t<n,$ protocol is
a statistical $(n,t)$- for $\predicate.$ It runs in constant rounds
and has message complexity polynomial in $(n,m,k,C_{\predicate}).$
When $3t<n,$ the results of $[28]$ ensure that the
gatewise verification of $C_{\predicate}$ occurs without error,
so that the simulation is perfect:
Let $\predicate$ be a family of predicates and let $F_{\predicate}$ and
$C_{\predicate}$ be as described above. For $3t<n,$ protocol is
a perfect $(n,t)$- for $\predicate.$ It runs in constant rounds and
has message complexity polynomial in $(n,m,k,C_{\predicate}).$
Protocols whose message complexity is polynomial in $n,$ independently of the circuit complexity of $F_{\predicate},$ are presented
in Chapter <ref>. They require the machinery of locally
random reductions, and may require the players to compute time-consuming
functions locally. Though the results of this section require polynomial
local time if the circuit is of polynomial size, verifying huge circuits
requires very little local time per gate but great cumulative local time.
When the goal is to prove predicates regardless of their computational
complexity, then the cost must be paid somewhere; better it be on the local
computation than on the communication lines. The next section, however,
describes a particular class of intractable functions for which only the
prover itself need perform more than polynomial time computations.
§ ZERO-KNOWLEDGE PROOFS FOR IP
Any language $L \in \ip$
certainly has a circuit $C_L$ which
describes its characteristic function $\chi_L;$ Theorems
<ref> and <ref> imply there is a
for $L$ in the presence of a partly trusted network. The size of
$C_L,$ however, may be inordinately large, especially because
$\ip=\pspace$
[115]. Protocol would
therefore require a great deal of time and very large messages to compute,
and is unsuitable for proving intractable predicates in a network of
polynomial time machines.
We present a protocol which achieves a zero-knowledge proof system for any
language $L \in \ip,$
yet uses
only a polynomial number of rounds and polynomial message complexity. In
particular, zero-knowledge proofs for any language in $\ip$ can be made
without complexity theoretic assumptions, if a partly trusted network of
probabilistic polynomial time Turing machines is available. In contrast,
when there are only a prover and a verifier, perfect zero-knowledge proofs
for NP-complete languages (which are in IP) are impossible unless the
polynomial hierarchy collapses [62].
The task at hand is distinguished from in a few ways. First of
all, there are no secretly shared values a priori. Second, the
string $x$ and the language $L$ that must be proven to contain it are both
public knowledge, or at least known to both prover and verifier.
Third, the purpose of the network is not to act as a repository for secrets
and committed values but to minimize the information leaked by the proof
system while ensuring that the prover behaves.
We distinguish two members of the network as before, the prover $\prover$
and the verifier $\verifier.$ In a two-party interactive proof system
([74, 4], and <ref>), the prover is a $\pspace$
machine and the verifier is a polynomial-time machine. The protocol is
such that, if $x \in L$ and the prover is not corrupted, then an
uncorrupted verifier will output accept with high probability.
Otherwise, an uncorrupted verifier will output reject. Any language
in $\pspace$ admits an interactive proof [115]; the presence of a
network of processors is irrelevant.
For the purpose of restricting information, however, a network of
processors is invaluable. Zero-knowledge proof systems are often based on
unproven complexity assumptions [74]; perfect zero-knowledge proofs
even for languages in $\np$ cannot be achieved, unless the polynomial
hierarchy collapses [62]. Perfect zero-knowledge proof systems can
be obtained given two provers that are not allowed to communicate
[27, 25]. Here, we allow only one prover, and that prover is
allowed to communicate with all of the participants and to collaborate with
a constant fraction.
The idea of zero-knowledge proofs again corresponds to secure multiparty
protocols. The protocol must provide the verifier with one of two outputs,
accept and reject. If the prover and verifier are not
corrupted, then the verifier accepts with high probability if $x \in L,$
and otherwise rejects. If the verifier is not corrupted and $x \not \in
L,$ then the verifier rejects with high probability. As in the definition
of two-party zero-knowledge, a cheating verifier must be able to simulate
the conversation. The definition of multiparty protocol resilience
suffices to provide privacy not only with respect to the verifier but with
respect to the prover as well.
We define a particular function $F_L$ that states whether a string $x$
agreed upon by two players is in a given language $L:$ $F_i(x_1,\dots,x_n)
= 0$ if $i \not= \verifier,$ and
\[
F_{\verifier}(x_1,\dots,x_n) =
\left\{
\begin{array}{rl}
1 & \mbox{ if } x_{\prover}=x_{\verifier}= x \not= \Lambda \mbox{ and }
x \in L \\
0 & \mbox{ otherwise. } \\
\end{array}
\right.
\]
Note that either player can refuse to participate by failing to supply a
proper input ($x=\Lambda$). The only way for an honest verifier to be
convinced that $x \in L$ is if the prover decides to participate and $x\in L.$
Let $L$ be a language.
A $(t,n)$ zero-knowledge network proof
system (ZKNPS) for $L$ is a $t$-resilient $n$-player protocol for $F_L.$
As usual, the adjectives perfect, statistical, computational apply.
For $2t<n$ and any $L \in \pspace,$ is a statistical ZKNPS
for $L.$
For $3t<n,$ is a perfect ZKNPS for $L.$
By a result of [75], $L$ admits an interactive proof if and only if
there is an Arthur-Merlin protocol [4] for
$L,$ in which the verifier takes the position of Arthur, and simply sends
random coins to Merlin. Arthur accepts the proof based on a polynomial
time computation using the set of messages sent by Merlin.
Without loss of generality, assume that for some polynomial $p_1(m,k),$ the
AM protocol runs in $p_1(m,k)$ steps, and that at each step, the messages
are of length $p_2(m,k).$ The program of Arthur is simple to specify: at
round $r,$ generate $p_2(m,k)$ random bits, send them to Merlin, receive a
message $M_r$ from Merlin, and repeat $p_1(m,k)$ times; afterwards, compute
a polynomial-time function $V$ on the messages $M_1,\dots,M_{p_1(m,k)}$
from Merlin, and output the result. Merlin, however, must compute some
presumably complicated function $P_r$ to generate the correct response
$M_r$ at each round.
Thus we may take the interactive proof system to be the hidden composition
of the functions
\[
V \closedcomp P_{p_1(m,k)} \closedcomp V_{p_1(m,k)}
\closedcomp \cdots \closedcomp P_1 \closedcomp V_1.
\]
The result, and only the result, is revealed to $\verifier.$
Because each function $V_r$ is simply a set of random bits, protocol
suffices. The key is to construct a protocol that computes
each $P_r$ in a $t$-resilient fashion. In general, the functions $P_r$ may
be too complex to admit small circuits, and hence would be expensive in
terms of interaction. In fact, however, there is no need to compute $P_r$
through circuit simulation. Instead, the network reveals $V_r$ to $\prover,$
who computes $P_r$ on its own, and shares the result. Notice that the
functions $V_r$ are private, since they are simply random bit sequences
independent of all input values; hence revealing them to $\prover$ is perfectly safe.
The computation of function $V$ could be performed through circuit
simulation, but at an expense of polynomially many rounds, since there is
no guarantee that $V,$ a polynomial-time calculation, admits sub-polynomial
depth circuits. Instead, $\prover$ computes $V$ and shares the result, $v.$
Player $\prover$ then provides a zero-knowledge proof on secrets, as per
<ref>, to ensure that $v$ is correct. By verifying each step
in the computation of $V$ locally yet in parallel, the number of rounds is
kept down to a constant.
Run to generate random secret $b_{uv}.$
Reconstruct $V_r = (b_{r,1},\ldots,b_{r,p_1(m,k)})$ for $\prover.$
$\prover$ computes $M_r=P_r$ and shares it.
$\prover$ computes $v=V(M_{p_1(m,k)},V_{p_1(m,k)},\ldots,M_1,V_1)$
and shares it.
Run the proof-on-secrets protocol for $V.$
If its output is reject or $v \not=\acc,$
$\verifier$ outputs reject.
Otherwise $\verifier$ outputs accept.
Protocol for $\prover$ to prove $x \in L$ to $\verifier.$
If $\prover$ is corrupted and fails to share the correct value of $P_r$ for
some $r$ or fails to supply a correct proof at the end, this corresponds to
its choosing, in the ideal protocol, an input $x=\Lambda.$ The prover has
the inevitable right not to participate; but the protocol we present
ensures that lack of participation is reflected properly in the final
results. Notice that by the definition of two-party interactive proof
systems, function $V$ is robust against incorrect computations of
$P_1,\ldots,P_r,$ so that a corrupt $\prover$ who shares incorrect values
corresponds to a prover in the ideal case who chooses not to participate.
Formally speaking, we perform the composition of the following ideal
protocols. In the first protocol, each player supplies a sequence of
$p_1(m,k)p_2(m,k)$ random bits, and the trusted host computes a string of
$p_1(m,k)p_2(m,k)$ uniformly random bits by computing the parities of each
group of $n$ bits. It divides these into $p_1(m,k)$ strings $V_r$ of
length $p_2(m,k)$ and returns a robust and secret representation of them to
the players. In each protocol $\idealname^{2r},$ the trusted party gives
$V_r$ to $\prover.$ In each protocol $\idealname^{2r+1},$ $\prover$
computes $P_r$ and gives it to the trusted host, who computes a robust and
secret representation of what $\prover$ supplied, and returns pieces to the
players. In the final protocol $(2p_1(m,k)+2),$ each player provides its
pieces of the robust and private representation of all of the previous
inputs, and the trusted host computes $V$ on the reconstructed values. It
returns the result to .
The composition of these protocols provides the players with the closed composition
\[
V \closedcomp P_{p_1(m,k)} \closedcomp V_{p_1(m,k)}
\closedcomp \cdots \closedcomp P_1 \closedcomp V_1
\]
which reveals only the final result $V,$ as desired.
By Theorems <ref>,
<ref>, <ref>, <ref>, and
<ref>, the results are perfectly (statistically) $t$-resilient
if $3t<n$ ($2t<n$).
CHAPTER: PRIVACY FOR PASSIVE ADVERSARIES
“Notice all the computations, theoretical scribblings, and lab
equipment, Norm.... Yes, curiosity killed these cats.”
Gary Larson
Multiparty computations are generally impossible when the number of
faulty players exceeds half the network, unless additional assumptions
are made. Chapter <ref> describes how noisy channels allow
computations to proceed correctly and fairly, and
demonstrates how to achieve the same results using a cryptographic
protocol for oblivious transfer.
In order to elucidate the nature of privacy in distributed computations, we
examine protocols which tolerate very large numbers of passive faults. We
call this the “honest-but-curious” model: each player is honest and never
sends a faulty message, but players may form coalitions and pool their
knowledge to try to obtain extra information to which they are not
entitled. We may more or less ignore the issues of correctness and
fault-tolerance, focusing only on privacy.
When the bound $t$ on the number of curious parties satisfies
$t <\lceil \frac{n}{2} \rceil,$ any function can be computed privately
in the “honest-but-curious” model (cf. [28, 39] and
Chapters <ref> and <ref>).
When $t \geq\lceil \frac{n}{2} \rceil,$ the class of privately computable
functions is restricted. Chor and Kushilevitz [43] characterized
the set of boolean functions that can be computed privately for $t
\geq\lceil \frac{n}{2} \rceil$ as those of the form
\[
f(x_1,x_2,\dots,x_n) = f_1(x_1) \oplus f_2(x_2) \oplus \cdots \oplus f_n(x_n)
\]
In other words, for the boolean case, the only functions privately
computable when a majority of the parties are curious are exclusive-or's of
$n$ functions, each depending on one input.
The results of [121, 71, 65, 17] demonstrate that when the
participants are computationally bounded but still only curious, any
function can be privately computed for $t \leq n-1.$ Furthermore, even
functions that have non-boolean outputs are privately computable. In an
information-theoretic sense, however, coalitions of curious parties do hold
more information than that to which they are entitled.
The complete characterization of privately computable functions when the
participants are not bounded remains an open question:
What general functions (say from $n$ inputs to $n$-bit outputs)
can be computed privately by $n$ parties, allowing
$t \geq\lceil \frac{n}{2} \rceil,$ and maintaining privacy in an
information-theoretic sense?
In this chapter, we take the penultimate step toward a complete
characterization, by solving the following problem:
What general functions
can be computed privately by two parties?
We give a characterization of functions that are privately computable by
two parties, based on a simple property of the table for the functions.
Specifically, any function whose table can be partitioned in a
certain manner can be privately computed by two parties; and any function
that can be privately computed by two parties has a table which can be
partitioned. (See <ref> for a statement of the main theorem.)
In preliminary papers, Kushilevitz [88] and Beaver [9]
independently obtained this characterization, but the proofs appearing in
those papers contain a subtle flaw. We present here a different proof,
which uses a different approach (see <ref>).
Our characterization bears on the $n$-party problem in that any $n$-party
protocol that allows $t\geq\lceil \frac{n}{2} \rceil$ can be adapted to a
two-party protocol. Thus, functions that are privately computable in the
$n$-party case must satisfy certain properties satisfied by functions that
are private in the two-party case.
We also address a different version of the $n$-party problem, in which the
output of the function is not revealed. Instead, it is maintained in a
distributed form, using an arbitrary threshold scheme. In view of the
recent development of multiparty computation protocols based on threshold
schemes (cf. [28, 39]), as well as the protocols described in the
rest of this dissertation, all of which allow the output to be maintained
in a shared form, a characterization of the functions that can be computed
secretly is needed. We show a strongly negative result: the only functions
that can be computed secretly, maintaining the result as a secret (for use
in further protocols), are additive functions. This result fits neatly
with that of [43] showing that privately computable boolean
functions are additive functions (modulo $2$).
Assumptions made in this chapter. The network is complete,
with private lines, connecting $2$ or $n$ computationally
unbounded processors.
Protocols are perfectly resilient against passive $t$-adversaries
where $t$ may be greater than $\frac{n}{2}.$
§ DEFINITIONS
For the two-party case, we assume that the two parties communicate
using finite strings and for a finite number of exchanges. Neither
party is computationally bounded; the privacy of the protocols is
based on information-theoretic bounds.
Because we consider only two parties, without an active adversary,
we can make some simplifying observations.
A party can be described simply by a family of distributions on strings,
parametrized by the input and the current transcript of the protocol.
These distributions are induced by the formal transition functions,
to which we shall not refer again.
Thus, party 1 is a family of distributions
$\set{D^1_{x,t} \mid x\in X, t \in \Sigma^{\star}},$
where $\Sigma=\set{0,1},$ and each $D^1_{x,t}$ is a distribution
on $\Sigma^{\star}.$ The message that party 1 sends after
round $r$ is a finite string selected at random according to distribution
$D^1_{x,t_r},$ where $x$ is its input and $t_r$ is the transcript
of messages through round $r.$ Party 2 is defined similarly.
We assume that the transcripts are delimited so that the messages
are uniquely decodable from the transcripts.
Let the domain of $f$ be $X \times Y =
\set{1,\dots,\alpha} \times \set{1,\dots,\beta}.$
Let $T$ be a random variable which describes the transcript of
message exchanges between parties 1 and 2.
We say that $f$ is private if there is a protocol
such that, when party 1 holds $x \in X$ and party 2 holds $y \in Y:$
* After any run of the protocol, each party knows $f(x,y).$
* (Privacy for party 1)
For any $x'$ such that $f(x',y)=f(x,y)$ and for any transcript $t:$
\[
\prob{ t \mid x', y} = \prob{ t \mid x, y }.
\]
* (Privacy for party 2)
For any $y'$ such that $f(x,y')=f(x,y)$ and for any transcript $t:$
\[
\prob{ t \mid x, y'} = \prob{ t \mid x, y }.
\]
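For deterministic protocols the transcript probabilities are all $0$ or $1$, so both privacy conditions collapse to equalities of transcripts. A small Python sketch (our own illustration; `f` and `transcripts` are hypothetical dictionaries keyed by input pairs) checks them exhaustively:

```python
def is_private_run(f, transcripts, X, Y):
    """Check both privacy conditions for a deterministic protocol.
    f and transcripts are dicts keyed by (x, y).  Privacy for party 1
    demands equal transcripts whenever f(x', y) == f(x, y); privacy
    for party 2 is the symmetric condition on y."""
    for x in X:
        for y in Y:
            for xp in X:   # privacy for party 1
                if f[xp, y] == f[x, y] and transcripts[xp, y] != transcripts[x, y]:
                    return False
            for yp in Y:   # privacy for party 2
                if f[x, yp] == f[x, y] and transcripts[x, yp] != transcripts[x, y]:
                    return False
    return True
```

For example, a constant $f$ passes only if the transcript is the same on every input pair; a protocol in which party 1 announces $x$ verbatim then fails privacy for party 1.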
We shall also denote the prefix of a transcript $t$ after the $i^{th}$
round by $t_i,$ and the corresponding random variable by $T_i.$
§ TWO PARTY PRIVACY FOR PASSIVE ADVERSARIES
§.§ Partitions
For any $x \in X$ and any subset $P \subseteq Y,$
we define the row set for row $x$ and columns in $P$ to be
the range of values that $f$ takes
over that row of its table:
\[
R(x,P) = \set{ f(x,y) \mid y \in P}.
\]
Similarly, we define the column set for
$y \in Y$ and a subset of values $P \subseteq X:$
\[
C(P,y) = \set{ f(x,y) \mid x \in P}.
\]
We say that $f$ is column-partitionable into $f_P$ and $f_Q$
if there exists a nontrivial partition of $Y$ into blocks $P$ and $Q$
such that
\[
(\forall x \in X) \hspace{0.1in} R(x,P) \cap R(x,Q) = \emptyset,
\]
and $f_P$ and $f_Q$ are the restrictions of $f$ to the sets $P$ and $Q.$
In other words, if we look at a particular row in the table for $f$
and consider the range of values which $f$ takes on
for $y \in P,$ then no such value will appear
as the output of $f$ for $y \in Q.$
Similarly, $f$ is row-partitionable into $f_P$ and $f_Q$
if there exists a nontrivial partition of $X$ into blocks $P$ and $Q$
such that
\[
(\forall y \in Y) \hspace{0.1in} C(P,y) \cap C(Q,y) = \emptyset,
\]
and $f_P$ and $f_Q$ are the restrictions of $f$ to the sets $P$ and $Q.$
Partitionability is recursively defined as follows.
We say that $f$ is partitionable if:
* $f$ is constant; or
* $f$ is column-partitionable or row-partitionable
into $f_P$ and $f_Q,$ each of which are themselves partitionable.
For example, the AND function is not partitionable,
while the function $f(x,y) = x + y \mod 7$ is partitionable.
Functions that are not partitionable need not be based on AND
or OR, as demonstrated by the following table, which cannot
be partitioned by rows or by columns:
\[
\begin{array}{c|ccc}
f(x,y) & 0 & 1 & 2 \\ \hline
0 & 0 & 0 & 1 \\
1 & 3 & 4 & 1 \\
2 & 3 & 2 & 2
\end{array}
\]
It is easy to see that any function that is insensitive to
$x$ or to $y$ is partitionable.
It is also straightforward to determine from an $n \times n$ table
for $f$ whether $f$ is partitionable, by using a transitive-closure algorithm,
which runs in time polynomial in $n$, to determine the columns
or rows that must fall in the same blocks of any allowable partition.
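A Python rendering of this test (an illustrative sketch of our own, not the dissertation's code) merges any two rows, or columns, that share a value, then recurses on the resulting blocks:

```python
def _row_blocks(table):
    """Finest partition of the rows such that rows in different blocks take
    disjoint value sets in every column: a union-find merge of any two rows
    sharing a value in some column (the transitive closure of that relation)."""
    parent = list(range(len(table)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for col in zip(*table):
        seen = {}
        for i, v in enumerate(col):
            if v in seen:
                parent[find(i)] = find(seen[v])
            else:
                seen[v] = i
    groups = {}
    for i in range(len(table)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

def partitionable(table):
    """Recursive definition: f is partitionable iff it is constant, or it is
    row- or column-partitionable into blocks whose restrictions are
    themselves partitionable."""
    if len({v for row in table for v in row}) <= 1:
        return True                                    # constant
    for t in (table, [list(c) for c in zip(*table)]):  # rows, then columns
        groups = _row_blocks(t)
        if len(groups) >= 2 and all(
                partitionable([t[i] for i in g]) for g in groups):
            return True
    return False
```

On the examples above, `partitionable` rejects AND and the $3 \times 3$ table, and accepts $x + y \bmod 7$.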
§.§ Two Parties Cannot Compute Unpartitionable Functions
We show the following result:
If $f$ is not partitionable,
then $f$ cannot be computed privately
by two parties.
Before proceeding to the proof, let us expose the flaw in the earlier proofs
of Kushilevitz [88] and Beaver [9].
Without loss of generality,
let party 1 send messages $m_r$ to party 2 during odd-numbered rounds $r;$
let party 2 send messages during even-numbered rounds.
This does not restrict who “speaks” first, since null
or completely random messages are allowed.
We expand the probability of a transcript $t$ given $x$ and $y$
according to its prefixes $t_1,t_2,\dots,$
where $t_i = m_1 \circ m_2 \circ \cdots \circ m_i:$
\begin{eqnarray*}
\prob{t \mid x,y} & = &
\prod_r \prob{t_r \mid x,y,t_{r-1}} \\
& = &
\left( \prod_j \prob{t_{2j+1} \mid x,t_{2j}} \right)
\left( \prod_j \prob{t_{2j} \mid y,t_{2j-1}} \right) \\
& = & P_1(t,x) P_2(t,y)
\end{eqnarray*}
where $P_1$ and $P_2$ are defined as the parenthesized expressions.
(This notation appears literally as
$P_1(s~\mid~x), P_2(s~\mid~y)$ in [88],
and as $\gamma_t(x),\delta_t(y)$ in [9].)
The faulty proof appearing in [88, 9] runs along the
following lines. Using the privacy of $f,$ it is shown
by induction that $\prob{t \mid x,y}$ is constant over all $x.$
The induction step fails, however, since it is based on the following
statement, which intends to show that $P_1$ is insensitive to $x:$
If $\prob{t \mid x_1,y} = \prob{t \mid x_2,y}$ then
$P_1(t,x_1) = P_1(t,x_2).$ Unfortunately, if $P_2(t,y) = 0,$
this conclusion does not necessarily hold.
For example, consider the function described by
$f(x,y)$ $y_1$ $y_2$
and the 4-round deterministic protocol where party 1 starts, and each party
sends 1 bit, described by the table of transcripts:
$y_1$ $y_2$
Then $\prob{0000 \mid x_1,y_1} = P_1(0000,x_1) P_2(0000,y_1)
= \prob{0000 \mid x_2,y_1} = P_1(0000,x_2) P_2(0000,y_1),$
and we have $P_2(0000,y_1) = 1$ and $P_1(0000,x_1)=P_1(0000,x_2) = 1.$
The claim that $P_1$ is insensitive to $x$
is satisfied for this particular transcript.
On the other hand,
$\prob{0100 \mid x_1,y_1} = \prob{0100 \mid x_2,y_1} = 0,$
but $P_1(0100,x_1)=1$ while $P_1(0100,x_2) = 0.$
Informally, transcript $0100$ is impossible given $y_1,$
but it may or may not be possible for different $x,$
a fact that cannot be determined simply by looking at the
column corresponding to $y_1.$ The claim that $P_1$ is insensitive
to $x$ fails.
It turns out that $P_1$ is indeed insensitive to $x$ when $f$ cannot be
partitioned into columns, but this is not a trivial observation, and the
proof must use the unpartitionability of $f$ by columns. The earlier
proofs did not use this property, and thereby fail on a counterexample like the
one presented above (note that $f$ is not partitionable by rows,
but it is partitionable by columns). Introducing the property of
unpartitionability by columns into the earlier proof techniques
seems to be an unwieldy approach, without an easy fix;
instead, we present an alternative proof.
Proof of Lemma <ref>:
Since $t_{2j+1} = t_{2j} \circ m_{2j+1},$ and $m_{2j+1}$
is selected by party 1 according to distribution
$D^1_{x,t_{2j}},$ we may write:
\[
\prob{t_{2j+1} \mid x,y} =
\prob{ t_{2j+1} \mid x,t_{2j} }
\prob{ t_{2j} \mid x,y }.
\]
Similarly,
\[
\prob{t_{2j} \mid x,y} =
\prob{ t_{2j} \mid y,t_{2j-1} }
\prob{ t_{2j-1} \mid x,y }.
\]
First we shall show that if $f$ is privately computable but
not partitionable, then party 1 “never speaks first.” In other
words, all conversations are independent of $x$ until party 2 gives
away some information depending on $y.$ Then we shall show that party
2 never speaks first, leading to a contradiction.
In order to formalize the notion of not speaking first, consider the
following notations.
Let $D \subseteq X \times Y,$
and let $\tau(x,y)=\set{t \mid \prob{t \mid x,y} \not= 0}.$
Given a transcript $t$ and two pairs of inputs $(x,y)$ and $(u,v),$
let $\theta(t,x,y,u,v)$ be the value $r$ such that
\[
\prob{t_r \mid x,y} \not= \prob{t_r \mid u,v} \mbox{ and }
(\forall \rho < r)
\prob{t_{\rho} \mid x,y} = \prob{t_{\rho} \mid u,v}.
\]
Clearly, such an $r$ is unique if it exists. If no such $r$ exists,
let $\theta(t,x,y,u,v) = \infty.$
The earliest round at which the probabilities of conversations on
$(x,y)$ differ from those for some other input $(u,v)$ is denoted
\[
\phi(t,D,x,y) =
\min_{(u,v) \in D} \theta(t,x,y,u,v).
\]
Now let
\[
\psi(D) = \set{ \phi(t,D,x,y) \mid (x,y) \in D, t \in \tau(x,y)}.
\]
We say that party 1 does not speak first on $D$ if
$\psi(D) \subset \set{2j \mid j \in \mbox{\bf N}} \cup \set{\infty},$
that is, if the earliest times at which conversations differ are always
even-numbered rounds, corresponding to party 2 sending a message to
party 1.
First, let us show an easy lemma:
(Row Lemma)
For any $x\in X$ and $y_1,y_2 \in Y,$ and for any possible transcript
$t \in \tau(x,y_1),$ if
\[
\prob{t_r \mid x,y_1} \not= \prob{t_r \mid x,y_2} \mbox{ and }
(\forall \rho < r)
\prob{t_{\rho} \mid x,y_1} = \prob{t_{\rho} \mid x,y_2},
\]
then $r$ is not odd.
If $r$ is odd, party 1 sends the message at round $r:$
\begin{eqnarray*}
\prob{t_r \mid x,y_1}
& = & \prob{t_r \mid x, t_{r-1} } \prob{t_{r-1} \mid x,y_1} \\
& = & \prob{t_r \mid x, t_{r-1} } \prob{t_{r-1} \mid x,y_2} \\
& = &
\prob{t_r \mid x,y_2}
\end{eqnarray*}
which is a contradiction.
The crucial lemma we need is the following:
If $f$ is privately computable but not partitionable, then
party 1 does not speak first on $X \times Y.$
We show by induction that there exist
$P_1 \subset P_2 \subset \cdots \subset P_{\abs{X}} = X$
such that party 1 does not speak first on each $P_i \times Y.$
Using lemma <ref>, we see that party 1 does not speak first on
$P_1 \times Y,$ where $P_1 = \set{a_1}$ and $a_1 \in X$ is arbitrary.
Since party 1 only has one argument in $P_1,$ this is intuitively clear.
Assume by way of induction that party 1 does not speak first on $P_i \times Y.$
We must demonstrate a $P_{i+1}$ such that $P_i \subset P_{i+1}$
and party 1 does not speak first on $P_{i+1}.$
Since $f$ is not partitionable, there exist values
$x_1 \in P_i, x_2 \in \overline{P}_i,$ and $y_1 \in Y$ so that
$f(x_1,y_1) = f(x_2,y_1).$ Let $P_{i+1} = P_i \cup \set{x_2}.$
To show that party 1 does not speak first on $P_{i+1} \times Y,$
it suffices to show that for each $y \in Y,$
$\phi(t,\set{x_2} \times Y, x_2, y)$ is never odd and
$\phi(t,P_i \times Y, x_2, y)$ is never odd. In other words,
we consider the earliest times that a meaningful message is sent for
the new inputs in $\set{x_2} \times Y$ relative to the original set
$P_i \times Y$ and relative to the new input set itself.
Lemma <ref> shows that
$\phi(t,\set{x_2} \times Y, x_2, y)$ is never odd.
To obtain the claim that $\phi(t,P_i\times Y,x_2,y)$ is never odd,
let us divide the row $\set{x_2} \times Y$ into three sets,
$A = \set{(x_2,y_1)},$
$B = \set{(x_2,y) \mid (\exists a \in P_i) f(a,y) = f(x_2,y)}\backslash A,$
$C = \set{(x_2,y) \mid (\forall a \in P_i) f(a,y) \not= f(x_2,y)}.$
First, we show that $\phi(t,P_i \times Y, x_2,y_1)$ is even for
any possible transcript $t \in \tau(x_2,y_1).$ If $t \in
\tau(x_2,y_1)$ then by the privacy of $f,$
\[
(\forall r)
\prob{t_r \mid x_1,y_1} = \prob{t_r \mid x_2,y_1}
\]
Then it follows easily that
$\phi(t,P_i \times Y, x_2,y_1) = \phi(t,P_i \times Y, x_1,y_1),$
and our first claim follows.
By a similar argument, $\phi(t, P_i \times Y, x_2, y)$ is never odd
for any $y \in B$.
Thirdly, for any $(x_2,y) \in C,$ we show $\phi(t,P_i \times Y, x_2, y)$
is never odd. We have two cases to consider: $\theta(t,x_2,y,x_1,y_1)$
is minimal or it is not. Say it is minimal. Again, by the privacy of
$f$ we have that
\[
(\forall r)
\prob{t_r \mid x_1,y_1} = \prob{t_r \mid x_2,y_1}
\]
and the smallest $r$ at which
$\prob{t_r \mid x_1,y_1} \not= \prob{t_r \mid x_2,y}$
is identical to that for which
$\prob{t_r \mid x_2,y_1} \not= \prob{t_r \mid x_2,y}.$
Since $(x_2,y_1)$ and $(x_2,y)$ are in the same row,
$r$ must be even.
If $\theta(t,x_2,y,x_1,y_1)$ is not minimal, let
$(u,v)$ be such that
$\theta(t,x_2,y,u,v)$ is minimal.
For a given transcript $t,$ consider the probabilities
$\prob{t_r \mid u,v},$
$\prob{t_r \mid x_1,y_1},$
and $\prob{t_r \mid x_2,y}.$
At some earliest round $r,$ two or three of these differ.
Since $(u,v)$ gave the minimal value with respect to $(x_2,y)$
we see that $\prob{t_r \mid x_2,y} \not= \prob{t_r \mid u,v}.$
Since $(x_1,y_1)$ did not give the minimal value,
$\prob{t_r \mid x_2,y} = \prob{t_r \mid x_1,y_1}.$
Thus $\prob{t_r \mid x_1,y_1} \not= \prob{t_r \mid u,v}.$
By the induction hypothesis, since $(x_1,y_1)$ and $(u,v)$
are in $P_i \times Y,$ we deduce $r$ is even. $\Box$
A symmetric argument gives the corresponding lemma:
If $f$ is privately computable but not partitionable, then
party 2 does not speak first on $X \times Y.$
Combining lemmas <ref> and <ref>,
we see that neither party speaks first on $X \times Y,$ implying that
$\psi(X \times Y) = \set{\infty}.$
Let $a \in X$ and $b \in Y$
be arbitrary, and let $t \in \tau(a,b).$ Then for every $x \in X$ and
$y \in Y,$ we have
$\theta(t,a,b,x,y) = \infty,$
implying $\prob{t \mid a,b} = \prob{t \mid x,y}.$
Since $f(x,y)$ is determined by the transcript $t,$ $f$ must
be constant and hence partitionable, giving a contradiction.
The proof of lemma <ref>
does not assume that the protocol is deterministic.
In the next section we observe that a deterministic protocol suffices
to compute any partitionable function. Furthermore, even if the
protocol is only correct with probability exceeding $\half,$
the argument shows that the distributions on transcripts
are identical regardless of the inputs, so that $f$ must be constant,
giving a contradiction.
§.§ Two Parties Can Compute Partitionable Functions
We now show a converse result:
If $f$ is partitionable, then there is a
(deterministic) protocol whereby $f$ can
be computed privately by two parties.
By induction on $\alpha = \abs{X}.$
(Base Case.)
If $X=\set{x_1},$ then party 2 simply announces $f(x_1,y).$
This protocol is trivially private for party 1, and
since for all $y,y' \in Y$ such that $f(x_1,y)=f(x_1,y')$
the transcript consists exactly of $f(x_1,y),$ the protocol
is private for party 2.
(Inductive Hypothesis.)
If $\abs{X} \leq \alpha,$ then
if $f$ is partitionable then there is a deterministic
protocol $\Pi_f$ to compute $f.$
(Inductive Step.)
Let $\abs{X} = \alpha+1.$ Say that $f$ is partitionable.
If $f$ is constant there is a trivial private protocol.
Otherwise, $f$ is row-partitionable or column-partitionable.
If $f$ is row-partitionable into $f_P$ and $f_Q,$ then by the
induction hypothesis there exist private
protocols $\Pi_{f_P}$ and $\Pi_{f_Q}$ for $f_P$ and $f_Q.$
Define the protocol $\Pi_f$ as follows. If $x \in P,$ party 1
sends a 0; then the parties execute the protocol for $\Pi_{f_P}.$
Otherwise if $x \in Q,$ party 1 sends a 1,
and the parties execute the protocol for $\Pi_{f_Q}.$
Since the protocols are deterministic, let $\Pi(x,y)$ denote
the transcript of protocol $\Pi$ on inputs $x$ and $y.$
Protocol $\Pi_f$ is private with respect to party 1.
If $f(x,y)=f(x',y),$ then either $x,x' \in P$ or
$x,x' \in Q;$ otherwise $f$ would not be row-partitionable.
If $x,x' \in P,$ then
\[
\Pi_f(x,y)= 0 \circ \Pi_{f_P}(x,y)
= 0 \circ \Pi_{f_P}(x',y) = \Pi_f(x',y).
\]
Similarly, if
$x,x' \in Q,$ then
$\Pi_f(x,y)= \Pi_f(x',y).$
Protocol $\Pi_f$ is also private with respect to party 2.
If $f(x,y)=f(x,y'),$ then either $x \in P,$ in which case
\[
\Pi_f(x,y)= 0 \circ \Pi_{f_P}(x,y)
= 0 \circ \Pi_{f_P}(x,y') = \Pi_f(x,y');
\]
or $x \in Q,$ which likewise gives
$\Pi_f(x,y)= \Pi_f(x,y').$
Now, if $f$ is not row-partitionable then it must be
column-partitionable into $P$ and $Q.$ We now prove by
induction on $\beta=\abs{Y}$ that $f$ is private.
Say $\beta=1;$ then $f$ is trivially private. Now,
assume that the hypothesis holds for $\abs{Y} \leq \beta.$
Consider $\abs{Y} = \beta+1.$ Then $f$ is partitionable
into $f_P$ and $f_Q,$ where $\abs{P},\abs{Q} \leq \beta.$
Hence there exist private protocols $\Pi_{f_P}$ and
$\Pi_{f_Q}$ for $f_P$ and $f_Q.$ The protocol for $f$ is as follows.
If $y \in P,$ party 2
sends a 0; then the parties execute the protocol for $\Pi_{f_P}.$
Otherwise if $y \in Q,$ party 2 sends a 1,
and the parties execute the protocol for $\Pi_{f_Q}.$
The proof that this
$\Pi_f$ is private follows the same lines as above.
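The inductive construction yields concrete transcripts. A minimal Python sketch (illustrative only, not the dissertation's code; tables are lists of lists, and the helper mirrors the transitive-closure merge used to find blocks):

```python
def _row_blocks(table):
    """Finest grouping of rows such that different groups take disjoint
    value sets in every column (union-find merge of rows sharing a value)."""
    parent = list(range(len(table)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for col in zip(*table):
        seen = {}
        for i, v in enumerate(col):
            if v in seen:
                parent[find(i)] = find(seen[v])
            else:
                seen[v] = i
    groups = {}
    for i in range(len(table)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

def transcript(table, x, y):
    """Transcript of the deterministic protocol Pi_f on inputs (x, y):
    announce f when it is constant; otherwise the partitioning party sends
    one bit naming the block holding its input, and both recurse."""
    if len({v for row in table for v in row}) == 1:
        return str(table[0][0])             # constant: announce the value
    rows = _row_blocks(table)
    if len(rows) >= 2:                      # row-partitionable: party 1 speaks
        P, Q = rows[0], [i for g in rows[1:] for i in g]
        if x in P:
            return '0' + transcript([table[i] for i in P], P.index(x), y)
        return '1' + transcript([table[i] for i in Q], Q.index(x), y)
    cols = _row_blocks([list(c) for c in zip(*table)])
    if len(cols) >= 2:                      # column-partitionable: party 2 speaks
        P, Q = cols[0], [j for g in cols[1:] for j in g]
        if y in P:
            return '0' + transcript([[r[j] for j in P] for r in table], x, P.index(y))
        return '1' + transcript([[r[j] for j in Q] for r in table], x, Q.index(y))
    raise ValueError("f is not partitionable")
```

The final character of each transcript is the announced value of $f$, and inputs that the privacy conditions require to be indistinguishable produce identical transcripts.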
§.§ Main Result
We can summarize the main result of this chapter in the following theorem:
A finite function $f(x,y)$ is 1-private if and only if
it is partitionable.
Follows from lemmas <ref> and <ref>.
§ MULTIPARTY PRIVACY FOR MODULAR PROTOCOLS
As described in Chapters <ref>, <ref> and <ref>,
many of the methods for performing arbitrary secret computations follow the
method of constructing a modular library of protocols that can be selected
and combined to construct new protocols.
A general library based on threshold (secret sharing) methods allows the
output of one protocol to be maintained in a secret format suitable for
input to another protocol, so that intermediate computations are not revealed.
In this section we consider what functions can be computed given an
arbitrary threshold scheme that allows addition of secrets over the field
used for sharing, when we also require that the output of the function be
retained as a secretly shared value for potential use in further protocols.
Let $E$ be a field and let $L_0$ be a set of function families mapping
$E^n$ to $E.$ Let $L$ be the closure of functions that can be written as a
finite composition of functions in $L_0.$ For example, with
$g,f_1,\dots,f_n \in L,$ the following function is in $L.$
\[
g(f_1(x_1,\dots,x_n),\dots,f_n(x_1,\dots,x_n))
\]
A $t$-private library ${\cal L}$ for $L$
is a set of protocols satisfying:
* Each protocol $\Pi \in {\cal L}$ computes a function $f \in L,$ $t$-privately.
* For every $f \in L$ there is a protocol $\Pi \in {\cal L}$
that computes it.
The protocols for multiplication and addition given in [28, 39]
satisfy these properties, and in fact are complete for any finite function
when $t< \lceil \frac{n}{2} \rceil.$ In other words, they form a $\lceil
\frac{n}{2} \rceil$-private library for the class of all finite functions.
For $t \geq\lceil \frac{n}{2} \rceil,$ however, the situation is less
optimistic. We show that any library sufficiently powerful to compute
affine functions over $E$ can only compute affine functions over the
field $E.$
If ${\cal L}$ is a $t$-private library of protocols that includes all
affine functions over the field $E,$ and if $t \geq\lceil \frac{n}{2}
\rceil,$ then any function $f(x_1,\dots,x_n) \in L$ can be written
\[
f(x_1,\dots,x_n) = f_1(x_1) + f_2(x_2) + \dots + f_n(x_n).
\]
Proof: Let $f(x_1,\dots,x_n)$ be $t$-private for some $t \geq \lceil
\frac{n}{2} \rceil,$ and let $f$ depend on $k$ variables, $1 \leq k \leq
n.$ That is, for some $j_1,\dots,j_k$ and $\hat{f},$
\[
f(x_1,\dots,x_n) = \hat{f}(x_{j_1},\dots,x_{j_k}).
\]
Then we show by induction on $k$ that
\[
(\exists f_1,\dots,f_n) \hspace{0.1in}
f(x_1,\dots,x_n) = \sum_{i=1}^n f_{i}(x_{i}).
\]
The statement is trivially true for $k=1.$
Assume that it holds for $k.$
Let $f$ be $t$-private and let $f$ depend on $k+1$ variables:
\[
(\exists j_1,\dots,j_{k+1}) \hspace{0.1in}
f(x_1,\dots,x_n) = \hat{f}(x_{j_1},\dots,x_{j_{k+1}}).
\]
Without loss of generality let $j_1=1,j_2=2,\dots,j_{k+1}=k+1.$
Fix arbitrary arguments $a_1,\dots,a_{k+1},$
and let $l = \lceil \frac{k}{2} \rceil.$
Suppose there exist $b_1,\dots,b_{k+1}$ such that
\begin{eqnarray}
\label{eqn-suppose}
\hat{f}(b_1,\dots,b_l,b_{l+1},\dots,b_{k+1})
+ \hat{f}(a_1,\dots,a_l,a_{l+1},\dots,a_{k+1})
& \not= & \\
\nonumber
\hat{f}(a_1,\dots,a_l,b_{l+1},\dots,b_{k+1})
+ \hat{f}(b_1,\dots,b_l,a_{l+1},\dots,a_{k+1}).
& &
\end{eqnarray}
It is not hard to see that there exist $r \leq l$ and $s > l,$
such that $a_r \not= b_r$ and $a_s \not= b_s.$
Define the linear functions $p_r$ and $p_s$ as
\begin{eqnarray*}
p_r(x_1,\dots,x_{k+1}) & = &
(x_r - a_r) (b_r - a_r)^{-1}
( \hat{f}(b_1,\dots,b_l,a_{l+1},\dots,a_{k+1})
\\ & &
- \hat{f}(a_1,\dots,a_l,a_{l+1},\dots,a_{k+1}) )
\\
& &
+ \hat{f}(a_1,\dots,a_l,a_{l+1},\dots,a_{k+1}) \\
p_s(x_1,\dots,x_{k+1}) & = &
(x_s - a_s) (b_s - a_s)^{-1}
( \hat{f}(a_1,\dots,a_l,b_{l+1},\dots,b_{k+1})
\\ & &
- \hat{f}(a_1,\dots,a_l,a_{l+1},\dots,a_{k+1}) )
\\
& &
+ \hat{f}(a_1,\dots,a_l,a_{l+1},\dots,a_{k+1})
\end{eqnarray*}
Using the assumption that the library contains all affine functions,
the function $G$ defined as follows is $t$-private:
\[
G(x_1,\dots,x_n) =
\hat{f}(x_1,\dots,x_{k+1})
- p_r(x_1,\dots,x_{k+1})
- p_s(x_1,\dots,x_{k+1})
+ \hat{f}(a_1,\dots,a_{k+1}).
\]
Then
\begin{eqnarray*}
G(a_1,\dots,a_l,a_{l+1},\dots,a_{k+1},0,0,\dots,0) & = & 0, \\
G(a_1,\dots,a_l,b_{l+1},\dots,b_{k+1},0,0,\dots,0) & = & 0, \\
G(b_1,\dots,b_l,a_{l+1},\dots,a_{k+1},0,0,\dots,0) & = & 0, \\
G(b_1,\dots,b_l,b_{l+1},\dots,b_{k+1},0,0,\dots,0) & \not= & 0.
\end{eqnarray*}
Let $\Pi_G$ be a $t$-private protocol for $G.$ We use $\Pi_G$
to construct a $1$-private protocol for two parties to compute
AND(x,y), by having party 1 simulate half the parties and
party 2 simulate the other half.
Party 1 holds an input $x \in \set{0,1},$ and party 2 holds an
input $y \in \set{0,1},$ though when they simulate $\Pi_G$ they
will “pretend” to hold general arguments instead.
Let $m = \lfloor \frac{n-k-1}{2} \rfloor.$
Party 1 follows $\Pi_G,$ simulating the parties holding inputs
$x_1,\dots,x_l,$ along with $m$ of the remaining dummy inputs.
Party 2 follows $\Pi_G,$ simulating the parties holding inputs
$x_{l+1},\dots,x_{k+1},$ along with the other dummy inputs.
Notice that neither party 1 nor party 2
simulates more than $t$ parties.
If party 1 holds input $x=0,$ it selects $x_1=a_1,\dots,x_l=a_l.$
If party 1 holds input $x=1,$ it selects $x_1=b_1,\dots,x_l=b_l.$
It sets all other variables to 0.
If party 2 holds input $y=0,$ it selects
$x_{l+1}=a_{l+1},\dots,x_{k+1}=a_{k+1}.$
If party 2 holds input $y=1,$ it selects
$x_{l+1}=b_{l+1},\dots,x_{k+1}=b_{k+1}.$
It sets all other variables to 0.
Together, party 1 and party 2 simulate $\Pi_G,$ sending messages to one
another when $\Pi_G$ specifies that a player from the group simulated by
party 1 interact with a player from the group simulated by party 2.
Both party 1 and party 2 learn the value of $G$ from the simulated
protocol. If $G = 0,$ each outputs AND(x,y)$=0$;
if $G \not= 0,$ each outputs AND(x,y)$=1$.
It is easy to see that their outputs are correct.
Since $\Pi_G$ is $t$-private, where
$t \geq \lceil \frac{n}{2} \rceil,$
this two-party protocol is also $1$-private.
But Theorem <ref> implies that
there is no two-party $1$-private protocol for AND;
hence supposition (<ref>) is false. Thus,
\begin{eqnarray*}
f(x_1,\dots,x_n) & = &
f(a_1,\dots,a_l,x_{l+1},\dots,x_{k+1},0,\dots,0)
\\ & &
+ f(x_1,\dots,x_l,a_{l+1},\dots,a_{k+1},0,\dots,0)
\\
& &
- f(a_1,\dots,a_l,a_{l+1},\dots,a_{k+1},0,\dots,0).
\end{eqnarray*}
Define
\begin{eqnarray*}
g(x_1,\dots,x_n) & = &
f(a_1,\dots,a_l,x_{l+1},\dots,x_{k+1},0,\dots,0), \\
h(x_1,\dots,x_n) & = &
f(x_1,\dots,x_l,a_{l+1},\dots,a_{k+1},0,\dots,0).
\end{eqnarray*}
Since $f$ is $t$-private, so are $g$ and $h.$
But $g$ and $h$ each depend on at most $k$ variables.
By the induction
hypothesis, there are $g_1,\dots,g_n,h_1,\dots,h_n$ such that
\begin{eqnarray*}
g(x_1,\dots,x_n) & = &
g_1(x_1)+ \cdots + g_n(x_n), \\
h(x_1,\dots,x_n) & = &
h_1(x_1)+ \cdots + h_n(x_n).
\end{eqnarray*}
Let $d = f(a_1,\dots,a_l,a_{l+1},\dots,a_{k+1},0,\dots,0).$
Then we have,
\begin{eqnarray*}
f(x_1,\dots,x_n) & = &
g(x_1,\dots,x_n) + h(x_1,\dots,x_n) - d
\\
& = &
g_1(x_1)+ \cdots + g_n(x_n) + h_1(x_1)+ \cdots + h_n(x_n) - d \\
& = &
f_1(x_1) + \cdots + f_n(x_n),
\end{eqnarray*}
where we define $f_1(x_1)=g_1(x_1)+h_1(x_1) - d$
and $f_i(x_i) = g_i(x_i)+h_i(x_i)$ for $i \geq 2.$
This completes the induction step.
Applying the result for $k=n,$ we conclude that
for an arbitrary $t$-private function $f$
there exist $f_1,\dots,f_n$ such that
\[
f(x_1,\dots,x_n) = f_1(x_1)+ \cdots + f_n(x_n),
\]
completing the proof of Theorem <ref>.
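For the two-argument case over the integers, the additive form is easy to test directly (an illustrative Python sketch of our own): fixing base points $x_0=y_0=0$, $f$ is additive exactly when $f(x,y) = [f(x,y_0)-f(x_0,y_0)] + f(x_0,y)$ for all $x,y.$

```python
def additive_decomposition(table):
    """If f(x, y) = f1(x) + f2(y) for some f1, f2, return such a pair of
    value lists (using x0 = y0 = 0 as base points); otherwise return None."""
    f00 = table[0][0]
    f1 = [row[0] - f00 for row in table]   # f1(x) = f(x, 0) - f(0, 0)
    f2 = list(table[0])                    # f2(y) = f(0, y)
    for x, row in enumerate(table):
        for y, v in enumerate(row):
            if v != f1[x] + f2[y]:
                return None                # rectangle condition fails
    return f1, f2
```

For example, $f(x,y) = x + 2y$ decomposes, while AND does not.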
CHAPTER: CRYPTOGRAPHIC METHODS TOLERATING A FAULTY MAJORITY
In fact the incontinent person is like a city that votes for all the
right decrees and has good laws, but does not apply them, as in
Anaxandrides' taunt, “The city willed it, that cares nothing for
laws.”
Aristotle, Nicomachean Ethics
The public-key, complexity-based approach to security
introduced by Diffie
and Hellman [54] affords a greater range of resilience than the
unconditional, information-theoretic approach discussed thus far. An
adversary that has bounded resources will not be able to break encryptions
or generate effective malicious messages if the encryptions and the
protocols require tremendous resources to corrupt, even though they might
leak information to an unbounded adversary in the pure, Shannon sense.
We shall consider protocols in which the players and the adversary are
polynomial-time Turing machines. A tool common to complexity-based
cryptography is the one-way function [120, 37]. A one-way
function is, informally, a function that is easy to compute but hard to
invert. A wide variety of candidates exist, but none have been proven to
be one-way; this is not surprising in view of the fact that the existence
of a one-way function would imply P$\not=\np.$ For example, exponentiation
modulo a prime $p$ is easy to perform, but no efficient solution is known
for its inverse, the discrete logarithm problem
(Definition <ref>). One-way trapdoor functions are
functions that are difficult to compute or invert, but that become easy
given a small amount of advice. Computing quadratic residuosity modulo a product of two
primes $n=pq$ is a common example; without the factorization of $n,$
computing the residuosity is presumably difficult, whereas with the factors
of $n$ there is a simple polynomial-time algorithm to compute residuosity.
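A toy illustration (our own sketch, with deliberately tiny primes): given the trapdoor, residuosity reduces to Euler's criterion modulo each factor.

```python
def is_quadratic_residue(x, p, q):
    """Decide whether x is a quadratic residue mod n = p*q, given the
    'trapdoor' factorization: x is a square mod n iff it is a square
    modulo both odd primes p and q (Euler's criterion)."""
    if x % p == 0 or x % q == 0:
        raise ValueError("x must be coprime to n")
    return pow(x, (p - 1) // 2, p) == 1 and pow(x, (q - 1) // 2, q) == 1

# With p = 7, q = 11 (toy sizes): 4 = 2^2 is a residue mod 77, while 3 is not.
```

Without the factors of $n$, no comparably efficient test is known; that gap is precisely what the trapdoor provides.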
The drawback to complexity-based cryptography is the problem that unproven
assumptions are made. Security is not unconditional, but conditioned on
cryptographic assumptions.
[It may be observed that assuming the presence of a private
channel is a cryptographic assumption, but the use of the
term cryptographic assumption often
refers to unproven complexity-theoretic conjectures.]
It is therefore essential to keep the assumptions to a minimum. Rather
than make a particular assumption like the intractability of factoring
large integers, it is desirable to design protocols and encryption
techniques based on the existence of an arbitrary one-way trapdoor
function. Assuming a one-way function is preferable to assuming a one-way
trapdoor function or a one-way permutation, for example.
Without assuming new primitives such as unproven complexity-theoretic
conjectures or measurably noisy communication channels, it is impossible
to achieve general multiparty computation when a majority of players
may be faulty. Consider the intuition first. If a minority of players is
able to determine the input of a given player, then certainly a faulty
majority could do so. Therefore, a protocol cannot allow any minority
the power to determine inputs. But if the group of nonfaulty players is
a minority, the information it holds will be insufficient to determine
the input of any faulty player (even in some oblivious, shared or
distributed manner that preserves the privacy of the faulty player), and
the joint computation cannot hope to depend on that player's input.
[One cannot in general “penalize” a faulty player
at any stage simply by ignoring its input, since an unfair bias may
result (consider a faulty player that can withdraw its input of “1”
when a parity computation isn't proceeding to its liking).] The old rule
of thumb stating, “The majority rules,” applies even when the majority
is bad.
Even worse, faulty players may withdraw precisely after learning
the output, gaining full benefit
at the expense of honest players. Notice that when the faulty players hold
only a minority, the majority of nonfaulty players always holds enough
information to determine the result and cannot be denied the answer.
Fairness has been treated in the two-party, cryptographic
setting by Yao [121], and
in the $n$-party case by Galil, Haber, and Yung [65],
primarily through techniques based on exchanging secret keys.
Luby, Micali, and Rackoff [90] use the Quadratic Residuosity Assumption
to design a biased coin that two parties compute jointly;
using this biased coin, the two parties exchange secret keys.
All of these results rely on strong and specific complexity-theoretic
assumptions.
Cleve [45] examines impossibility results independent of
complexity theoretic assumptions.
This chapter presents techniques to attain fair, secure, and reliable
computations even though the majority of processors may be malicious.
Based on certain cryptographic assumptions, however,
a minority of honest processors
can restrict the power of a faulty majority to the ability only to
withdraw; a faulty majority cannot cause the honest processors to adopt
incorrect values.
We discuss different and stronger definitions for fairness
in the $n$-party scenario,
and present a learning-based approach for which fairness and secret
computation are in fact achievable.
Our solution uses a technique called
gradual disclosure,
in which each party learns the result slowly, so
that if the procedure halts, each player has gained roughly equal knowledge.
This technique has similarities to the methods of [90] for secret
exchange, but does not use strong assumptions.
For clarity, our exposition starts with a weakened adversary, who can
corrupt $t > n/2$ players but who is allowed only fail-stop corruptions.
(Chapter <ref> discusses an even weaker, completely passive adversary.)
No incorrect messages may be sent. Our solution
assumes two-player oblivious circuit evaluation, which we shall later
replace by assuming either the presence of noisy (“oblivious transfer”)
channels or by assuming that a cryptographic protocol for
two-party oblivious transfer exists.
A result of Impagliazzo and Rudich [82] implies that
weaker complexity theoretic
assumptions are difficult to make without proving P$\not=\np.$
Finally, to investigate a fully Byzantine adversary that can corrupt a
majority, we present methods that restrict the powers of the Byzantine
adversary to those of a fail-stop adversary. Through zero-knowledge proofs
using either noisy channels or one-way functions, we essentially compile
our fail-stop protocol (in the sense of [71]) into one resilient
(with certain limitations)
against Byzantine adversaries. That is, faced with a
possibly cheating Byzantine adversary, nonfaulty players can detect
cheating and disqualify faulty players, thus limiting the power of the
Byzantine adversary. The use of noisy channels for this purpose is novel.
Assumptions made in this chapter. The network has private channels, a
broadcast channel, $n$ processors, and any number $t$ of Byzantine
faults, chosen statically.
We assume that a protocol for oblivious transfer
exists, and consider the goal of computing Boolean functions.
We ultimately remove the assumptions of private channels.
The protocols are computationally resilient.
§ FAIRNESS
For $2t \geq n,$ full resilience cannot be achieved, but we can measure
how far each player progresses toward an answer in order to show that
parity is maintained. As earlier, we would like a standard: a fair,
ideal protocol, and we would like a means to compare an arbitrary
protocol to it. For the latter we shall continue to use relative
resilience.
The fair, ideal protocol should allow a stronger adversary, restricted
to the ideal $t$-fault class,
but given the ability to corrupt player $(n+1)$ (the trusted host)
in a fail-stop manner.
We call this an extended $t$-adversary class.
Thus, an adversary can see rushed messages in a
given round and prevent the host from completing the round.
To measure how much each player learns in a protocol, consider the chance
that a player guesses a particular output, for any value $y$ and state $q:$
\begin{eqnarray*}
p_i(y,q_i) & = & \prob{\outfn_i(q_i) = y_i} \\
p_A(y,q_A) & = & \prob{\outfn_A(q_A) = y_A}.
\end{eqnarray*}
The initial probability of player $i$ is $p_i(y,x_i \circ a_i).$
Motivated by definitions of likelihood and weight of evidence
frequently used in learning theory,
we define
\[
\odds(p) = \frac{p}{1-p}.
\]
The increase in odds that a player or adversary outputs the correct
result serves as a measure of how much that party learns from a protocol.
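As a small illustration of this measure (a Python sketch; the function name is mine):

```python
def odds(p):
    # Odds of an event with probability p: p / (1 - p).
    return p / (1.0 - p)

# A party whose chance of guessing the output rises from 1/2 to 3/4
# has tripled its odds:
gain = odds(0.75) / odds(0.5)
assert abs(gain - 3.0) < 1e-12
```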
A fair, ideal protocol computes some function $f$ while maintaining
parity in knowledge as it reveals the result. In measuring the increase
in knowledge of adversaries, we make two assumptions. The first is that
the adversary is static. Defining fairness with respect to dynamic
adversaries is more involved and fraught with subtleties. For example,
it is hard to define an adversary's initial information,
since a dynamic adversary may (intentionally) start off ignorant
but later gain huge amounts of information, completely disparate
with the minor gains of nonfaulty players, by obtaining the inputs
of newly corrupted players. These difficulties suggest that
a proper formulation of fairness for dynamic adversaries is
an attractive open problem. Because our protocols are designed to withstand
static adversaries, we need not treat it in this chapter.
The second assumption we make is that an adversary maximizes its
initial chances to guess the result. As mentioned, an adversary might
intentionally compute wrongly when given only the initial information,
in order that a quantitative measure of fairness be broken. Here,
it is not a question of ignorance, but of programming, since the adversary
knows all the corrupt inputs at the start. Given an adversary class
$\advclass,$ we define the
maximal initial probability of coalition $T$:
\[
p_T(y,\vec{x}_T \circ \vec{a}_T \circ a) =
\mbox{max } \{
\prob{\outfn_A(\vec{x}_T \circ \vec{a}_T \circ a) = y_A} \mid
A\in \advclass, T=T(A)
\}
\]
A $(\delta,t)$-fair, ideal protocol for $\computef$
is a protocol with the following properties, for any adversary $A$
from an extended, ideal, static $t$-adversary class:
(1) The host must collect all inputs in the first round and compute
$F(\vec{x}'),$ where $\vec{x}'$ is the vector of inputs it collects;
(2) For any $\protoIn,$
for any execution,
for all $r \leq R,$
for all $i \not \in T=T(q_A),$
and for all $\vec{x}'$ such that $\vec{x}_T = \vec{x}_T',$
either
\[
\frac{\odds(p_A(\computef(\vec{x}'),q_A^r))}%
{\odds(p_i(\computef(\vec{x}'),q_i^r))}
\leq
(1+\delta) \cdot
\frac{\odds(p_T(\computef(\vec{x}'),
\vec{x}_T \circ\vec{a}_T \circ a))}%
{\odds(p_i(\computef(\vec{x}'),x_i \circ a_i))},
\]
or
\[
\odds(p_A(\computef(\vec{x}'),q_A^r)) = \infty
\mbox{\hspace{0.3in} and \hspace{0.3in}}
\odds(p_i(\computef(\vec{x}'),q_i^r)) \geq \frac{1}{\delta};
\]
(3) The host must return $\computef(\vec{x}')$ in round $R.$
We say $\delta$-fair, ideal when $t$ is clear
from context.
Condition (3) seems pointless because the adversary ought to stop
the host before this step, always learning $\computef(\vec{x}')$ while
preventing nonfaulty players from learning $\computef(\vec{x}')$ with
certainty. Condition (2), however, ensures that the nonfaulty
players nevertheless have a very good idea of the output.
Our protocols will satisfy an additional constraint, that
in order to stop the trusted host,
the adversary must identifiably corrupt
at least $n-t$ players.
In a litigious society, this is a high price to pay. At the very
least, it prevents further infractions.
The ideal standard for fairness allows us to define fairness for
a general protocol:
A protocol $\protoname$ computes $F$ $t$-fairly if,
for some $c>0,$
there exists a $O(k^{-c})$-fair ideal protocol $\idfairname(\computef)$
for $F$ such that
\[
\protoname \resilasFaS \idfairname(\computef).
\]
We shall make use of protocols that would be resilient if the
adversary could not halt the protocol entirely. Specifically,
a semi-ideal protocol
$\seminame(F)$ for $F$
is one that accommodates an
extended adversary, but if the adversary halts the host at some round $r,$
then all nonfaulty players output $\outfn(q_i^r)=(\cheating,y_i),$
where $y_i$ indicates a “best guess” for the output.
Otherwise, as in the
ideal protocol, each nonfaulty player receives the value of
$\computef(\vec{x}').$
A protocol $\protoname$ is $t$-semi-resilient for $F$ if
there exists a semi-ideal protocol $\seminame(F)$ such that,
with respect to an extended ideal $t$-adversary class,
\[
\protoname \resilasFa \seminame(F)
\]
§.§ Comparison to Previous Work
Galil, Haber, and Yung extended Yao's two-party protocol
to an $n$-party protocol that is fair under a different, weaker
definition [65, 121]. Their definition allows nonfaulty
players to run a recovery protocol based on the programs
of faulty players. This is unrealistic in two ways. First,
it requires that adversaries have less computational power.
Second, the adversaries' programs cannot depend in any way on
the nonfaulty players' programs.
Their solution involves encrypting the result using a trapdoor function
and revealing the trapdoor bit by bit. At any point, it would seem
that no player is more than one bit ahead of another. Unfortunately,
if an adversary is a powerful investment firm with a supercomputer
that is a thousand times faster than the personal computer of a
small investor, the firm can always quit once it has enough of the
trapdoor, leaving the small investor in the dust.
We make no such restrictions and allow a full range of adversaries
that know the programs of nonfaulty players (though not, of course,
their inputs).
Our protocols use a near optimal number of rounds relative to
a two-party lower bound of Cleve [46] that generalizes
to the $n$-party, $2t \geq n$ case. If $p$ and $q$ are the
a priori probabilities of each player
to guess the result, there is a quitting strategy for one player allowing
him to predict $\computef$ with probability
$\frac{\mbox{\scriptsize min}(1-p,1-q)}{2k}$
better than the other, where $k$ is the number of rounds. A certain
number of rounds are required to attain fairness.
§ GRADUAL DISCLOSURE: FAIL-STOP FROM PASSIVE
We begin our analysis with a method to achieve fairness against
fail-stop adversaries, given protocols to compute functions
resiliently against passive adversaries. The fundamental idea
is that the result should be revealed slowly; the particular method
we employ uses a series of coin flips biased slightly toward the
answer ( [90]), each computed using a
semi-resilient multiparty protocol.
The ideal coin-flip protocol
is described in Figure <ref>.
We show:
The ideal coin protocol is a $(4k^{-1},t)$-fair, ideal protocol.
(IC1)
$(1 \leq i \leq n)$
$i \rightarrow n+1:$
$x_i$
$F \leftarrow F(x_1,\ldots,x_n)$
$c_i \leftarrow \bias(\frac{1}{k})$
$n+1 \rightarrow [n]:$
$F \oplus c_i$
Ideal coin protocol to evaluate $F(x_1,\dots,x_n),$
against an extended fail-stop adversary.
For $O(k^{-c})$-fairness, replace $k$ by $k^{c}.$
If
$\odds(p_T(\computef(\vec{x}'),\vec{x}_T \circ\vec{a}_T \circ a))
= \infty,$
there is nothing to prove.
Otherwise, the adversary's odds become infinite only in
the final round, when $F(\vec{x}')$ is revealed,
but at that point the odds of nonfaulty players exceed $k.$
This can be seen from bounds on the tail of the binomial distribution
proven by Chernoff [41] and by Angluin and Valiant [2].
A binomial random variable $X_{N,P}$ measures the number of
successes in $N$ independent Bernoulli trials, where the probability
of success is $P.$
(Chernoff, Angluin, Valiant)
If $X_{N,P}$ is a binomial random variable, then
\[
\prob{X \leq (1-\epsilon) NP} \leq
\exp({-\half\epsilon^2 NP}).
\]
Let $X$ represent the number of coins of value $F(\vec{x}'),$
$N=k^3,$ and $P=(\half + \frac{1}{k}).$ Then
\[
\prob{X \leq \half k^3}
\leq
\exp({-\half \left(\frac{2}{k+2}\right)^2 (\half k^3 + k^2)})
=
\exp({-\frac{k^2}{k+2}})
\leq
\exp({-\frac{k}{2}}),
for $k \geq 2.$
\[
\odds(p_i(\computef(\vec{x}'),q_i^R))
\geq
\frac{1-\exp({-\frac{k}{2}})}{\exp({-\frac{k}{2}})}
= \exp({\half k})-1 > k,
\]
for $k \geq 3.$ If the adversary allows a nonfaulty player to see
the $(k^3+1)^{st}$ coin flip, the odds are clearly better.
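The tail bound can be checked against the exact binomial sum for small $k$ (a brute-force Python sketch; the parameter values are mine and feasible only at toy scale):

```python
import math

def tail_at_most_half(k):
    """Exact P[X <= N/2] for X ~ Binomial(N = k^3, P = 1/2 + 1/k):
    the chance that at most half of the biased coins show the true value."""
    N, P = k**3, 0.5 + 1.0 / k
    return sum(math.comb(N, x) * P**x * (1 - P)**(N - x)
               for x in range(N // 2 + 1))

# The majority guess errs with probability at most exp(-k/2), as claimed.
for k in (4, 6, 8):
    assert tail_at_most_half(k) <= math.exp(-k / 2)
```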
Thus it suffices to show that the ratio of odds
increases by a factor of no more than $(1+4k^{-1})$ in each round.
First let us show that the best adversarial strategy for guessing the output
is simple (choose the majority, the best Bayesian guess) and achievable.
Let $A$ be an adversary that, for some sequence $\rho$ of coin flips
with $\prob{F=0 \mid \rho} > \half,$ outputs $1$
with probability $\alpha > 0.$ Let $B$ be $A$ modified to output
$0$ always in this case. Then $B$ is more successful in guessing
the output when the sequence $\rho$ comes up:
\begin{eqnarray*}
\lefteqn{(\prob{B=0,F=0 \mid \rho} + \prob{B=1,F=1 \mid \rho})
- (\prob{A=0,F=0 \mid \rho} + \prob{A=1,F=1 \mid \rho})} \\
& = &
\prob{B=0\mid\rho} \prob{F=0\mid\rho} + \prob{B=1\mid\rho} \prob{F=1\mid\rho} \\
& & {} - \prob{A=0\mid\rho} \prob{F=0\mid\rho} - \prob{A=1\mid\rho} \prob{F=1\mid\rho} \\
& = &
(1-(1-\alpha))\prob{F=0\mid\rho} - \alpha\prob{F=1\mid\rho} \\
& = &
\alpha ( \prob{F=0\mid\rho} - \prob{F=1\mid\rho} ) \\
& > & 0.
\end{eqnarray*}
Given the best possible adversary, we now show that the amount it
learns from a round of the ideal coin protocol increases its odds
by no more than a factor of $(1+4k^{-1}).$
Let $b=\frac{1}{k},$ $\gamma_0=(\half+b),$
and $\gamma_1=(\half-b).$
Clearly, if $p$ is the number of $0$'s in $\rho,$ and $q=\abs{\rho}-p,$
\[
\prob{\rho \mid F = d} = (\gamma_d)^p (\gamma_{\overline{d}})^q.
\]
\begin{eqnarray*}
\prob{F=d \mid \rho} & = &
\prob{\rho \mid F=d}\prob{F=d} \\
& &
\cdot {\left(\prob{\rho \mid F=0} \prob{F=0} +
\prob{\rho \mid F=1} \prob{F=1} \right)}^{-1}.
\end{eqnarray*}
Consider what happens when the next coin toss is a $0.$ We must consider
the odds of a correct guess given the sequence ${\rho}{0}.$ It suffices to
show that the ratio $\Delta$ of these odds to those of the adversary when
given only $\rho$ is no more than $(1+4k^{-1}).$
\begin{eqnarray*}
\Delta & = &
\frac{\odds(\prob{F=0\mid {\rho}0})}%
{\odds(\prob{F=0\mid \rho})} \\
& = &
\frac{\prob{F=0\mid {\rho}0}}%
{1 - \prob{F=0\mid {\rho}0}}
\cdot
\left(
\frac{\prob{F=0\mid \rho}}%
{1 - \prob{F=0\mid \rho}}
\right)^{-1}
\end{eqnarray*}
With a bit of manipulation, letting $L=\gamma_0 \gamma_1^{-1}$
and $G=\prob{F=0}(\prob{F=1})^{-1},$
\[
\Delta =
\frac%
{1+ L^{q-p} G^{-1}}
{1+ L^{q-p-1} G^{-1}}
\cdot
\frac%
{1+ L^{p+1-q} G}
{1+ L^{p-q} G}
\]
The extremal cases occur when $p \approx q$ and $G \approx 1;$
\begin{eqnarray*}
\Delta & < & \frac{1+L}{1+L^{-1}} = L \\
& = &
(\frac{1}{2} + \frac{1}{k})
(\frac{1}{2} - \frac{1}{k})^{-1}
\leq 1+4k^{-1}.
\end{eqnarray*}
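The ratio $\Delta$ can also be evaluated numerically over a grid of histories and priors (a Python sketch; `delta` simply transcribes the expression above, with $G$ the prior odds on $F=0$):

```python
def delta(p, q, G, k):
    """The ratio Delta above: the factor by which one more 0-coin changes
    the odds of guessing F = 0 correctly, given p zeros and q ones seen
    so far and prior odds G = P[F=0]/P[F=1]."""
    L = (0.5 + 1.0 / k) / (0.5 - 1.0 / k)
    return ((1 + L**(q - p) / G) / (1 + L**(q - p - 1) / G)
            * (1 + L**(p + 1 - q) * G) / (1 + L**(p - q) * G))

k = 10
L = (0.5 + 1.0 / k) / (0.5 - 1.0 / k)
# The per-round gain never exceeds L = (1/2 + 1/k)/(1/2 - 1/k):
for p in range(10):
    for q in range(10):
        for G in (0.25, 0.5, 1.0, 2.0, 4.0):
            assert delta(p, q, G, k) <= L + 1e-9
```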
Let $\idfairname(\computef)$ be a $\delta$-fair, ideal protocol for
$\computef$ having $R(n,m)$ steps. If $\protoname=\set{\protoname(i)}$ is
a family of protocols such that $\protoname(i)$ statistically
$t$-semi-resiliently computes the $i^{th}$ step of
$\idfairname(\computef),$ then the protocol $\hat{\protoname}$ defined as
$\protoname(R(n,m)) \circ \cdots \circ \protoname(1)$ is a
$t$-semi-resilient and $\delta$-fair protocol for $\computef.$
We write $\computef$ as a composition of functions $\hat{\computef}=
\hat{\computef}^{R(n,m)} \circ \cdots \circ \hat{\computef}^1,$ where each
component can also take an input of the form $x_i=(\cheating,g),$ in which
case it outputs $y_i=(\cheating,g).$ By Theorem <ref>,
outputs $y_i=(\cheating,g).$ By Theorem <ref>,
$\hat{\protoname} \resilasFaS \protoconc_1^{R(n,m)}
\seminame(\hat{\computef}^i) \resilasFa \idfairname(\computef).$
For any protocol $\protoname,$ define protocol
$\fs(\protoname)$ by modifying each player
to broadcast $\cheating$ if it fails to
receive an expected message, receives an improper message (according to
some local computation it can make), or receives a broadcast $\cheating$
message. If any of these events occurs, it halts in the next round,
producing an output of the form $(\cheating,y).$
If $\protoname$ is $t$-resilient leaking $\computef$ against passive
adversaries, then $\fs(\protoname)$ is $t$-semi-resilient leaking
$\computef$ against fail-stop adversaries.
Let $\interface'$ be an interface for passive adversaries, from
$\protoname$ to $\idealname(\computef).$ Define interface $\interface$ for
fail-stop adversaries from $\fs(\protoname)$ to $\seminame(\computef)$ as
follows. Interface $\interface$ runs $\interface'$ until adversary $A$
requests the failure of some subset of processors, by specifying they send
no messages to some set $U$ of nonfaulty players. Interface $\interface$
halts the host in $\seminame(\computef),$ causing all players to halt and
output $(\cheating,y_i),$ just as in $\fs(\protoname).$ The interface also
supplies $A$ with messages of $\cheating$ from players in $U,$ uses
$\interface'$ to generate the outgoing messages from the remaining
nonfaulty players in the current round of $\fs(\protoname),$ and in the
next round reports $\cheating$ from all nonfaulty players that did not
already broadcast $\cheating.$
If $A$ never causes a failure, then $\interface$ runs $\interface'$ to the
end, never halting the trusted host. The distribution (conditioned on no
halting) on outputs in $\anglebrack{A,\fs(\protoname)}$ is the same as in
$\anglebrack{A,\protoname},$ which in turn is the same as in
$\anglebrack{A,\interface',\idealname(\computef)},$ which (conditioned on
no halting) matches that in $\anglebrack{A,\interface,\seminame(\computef)}.$
The semi-resilient general protocol, even though unfair,
is easily utilized to accomplish what the ideal, fair protocol
accomplishes. Figure <ref> describes a
$O(k^{-c})$-fair, semi-resilient protocol for any function $F.$
The protocol is a concatenation of protocols that
compute the result of each round of the ideal coin-flip protocol,
namely a random coin biased toward the result, $F(x_1,\ldots,x_n),$
with probability $\half+\frac{1}{k}.$ Clearly, $k$ can
be replaced by $k^c$ for any $c>0,$ providing greater
fairness if desired.
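The gradual disclosure of step (EF2) can be simulated directly (a Python sketch with a fixed seed; here the ideal host is simulated centrally, whereas the protocol computes each masked coin by a semi-resilient subprotocol):

```python
import random

def gradual_disclosure(F, k, rng):
    """One masked, biased coin per round: each revealed bit equals F with
    probability 1/2 + 1/k, so the result leaks slowly."""
    for _ in range(k**3):
        c = 0 if rng.random() < 0.5 + 1.0 / k else 1
        yield F ^ c            # c = 0, the likely case, leaves F unmasked

rng = random.Random(0)         # fixed seed for reproducibility
F, k = 1, 10
views = list(gradual_disclosure(F, k, rng))
# An honest player's best guess is the majority of the coins seen so far;
# by the Chernoff bound above, after all k^3 rounds it is almost surely F.
guess = int(sum(views) > len(views) / 2)
assert guess == F
```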
Let $t \leq n.$ If for any $G$ there exists a protocol for
$G$ that is $t$-resilient against passive adversaries, then for any
$\computef$ and $c$ there exists a $(k^{-c},t)$-fair protocol for $\computef$ that is resilient against fail-stop adversaries. If
$\computef$ is described by a circuit family $C_{\computef^{n,m}},$ the
protocol requires $O(\size(C_{\computef^{n,m}})^{c_0})$ bits and
$O(\depth(C_{\computef^{n,m}})^{c_0})$ rounds of interaction
for some fixed $c_0.$
If any player halts, broadcast $\cheating.$
If any player broadcasts $\cheating,$ halt and output $(\cheating,y_i).$
Run to supply each player $i$
with $\piece_i(F(x_1,\ldots,x_n)).$
Run to compute $\piece_i(c_j)$ for player $i,$
where $c_j \leftarrow \bias(\frac{1}{k}).$
Run on $\{\piece_i(F),\piece_i(c_j)\}_{i\in [n]}$
to compute and reveal $F \oplus c_j.$
Protocol to evaluate $F(x_1,\dots,x_n),$
against a fail-stop adversary.
For $O(k^{-c})$-fairness, replace $k$ by $k^{c}.$
Biased coins $c_1,\ldots,c_{k^3+1}$ may also be computed beforehand
and masked with $F$ all at once, so that the bulk of the protocol (Step
(EF2)) becomes simply a gradual disclosure of $F,$ secret by secret.
Let $\computef$ and $c$ be arbitrary. Lemma <ref> states
that there exists a $(k^{-c},t)$-fair protocol $\idcoin$ for $\computef.$
The condition of the theorem states that there exist protocols
$\protoname(1),\ldots,\protoname(R(n,m))$ for each step of $\idcoin$ that
are $t$-resilient against passive adversaries. By Lemma <ref>,
protocols $\fs(\protoname(1)),\ldots,\fs(\protoname(R(n,m)))$ are
$t$-semi-resilient against fail-stop adversaries, and
Lemma <ref> concludes that their composition is
$(k^{-c},t)$-fair and resilient against fail-stop adversaries.
§ DEGREE REDUCTION: PASSIVE FROM TWO-PARTY PROTOCOLS
Let us assume either that a black box for two-party function evaluation
exists (i.e., in the formal model for protocol execution, two players
write values on a special tape, and in the next round the result of a
particular function $f(x_1,x_2)$ is written on another tape for them to
read) or that some such protocol exists (i.e., a 2-resilient protocol
admitting an appropriate interface). Based on two-party
function evaluation, we show how $n$ parties can achieve the same result:
general function computation.
We consider secret sharing with a threshold of $n-1.$ In this case, $n$
players are needed to determine a secret, and sharing reduces essentially to
a form we call sum-sharing,
which can be regarded as a
generalization of parity-sharing (sum modulo 2) originally developed by
Haber [78], and applied to improve message complexity
and to protect information of faulty players in [65].
In particular, a secret is represented as a sum of all
pieces, where the pieces arise by selecting $n$ random field elements
subject to their sum being the secret.
[We leave it to the
interested reader to identify the simple correspondence between
polynomial secret sharing having threshold $n-1$ and sum sharing.]
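Sum-sharing and its local addition can be sketched as follows (a Python sketch; the field size and the names `sum_share` and `reconstruct` are illustrative choices of my own):

```python
import random

P = 2**31 - 1                  # a prime field E (illustrative size)

def sum_share(s, n, rng):
    """Split s into n pieces, uniform subject to summing to s mod P;
    any n - 1 pieces are jointly uniform and reveal nothing about s."""
    pieces = [rng.randrange(P) for _ in range(n - 1)]
    pieces.append((s - sum(pieces)) % P)
    return pieces

def reconstruct(pieces):
    return sum(pieces) % P

rng = random.Random(1)
a, b, n = 1234, 5678, 5
pa, pb = sum_share(a, n, rng), sum_share(b, n, rng)
assert reconstruct(pa) == a
# Addition is local: each player adds its own two pieces.
pc = [(x + y) % P for x, y in zip(pa, pb)]
assert reconstruct(pc) == (a + b) % P
```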
Addition remains easy (add pieces individually), but multiplication is again
somewhat more difficult. Our solution, independently derived, generalizes
the parity-based method of Haber and Micali.
Protocols for sum sharing and secret addition are listed in Figures
<ref> and <ref>.
For multiplication of secrets $a$ and $b,$ each
player must receive a piece $\piece_i(ab).$ Noting that
\[
ab =
(\sum_{i=1}^n \piece_i(a)) (\sum_{j=1}^n \piece_j(b))
\]
we see that for every combination of $i$ and $j,$ player $i$'s piece of $a$
must be multiplied by player $j$'s piece of $b;$ the sum of these products
is $ab.$ It would not do for player $i$ to know
$\piece_i(a)\cdot\piece_j(b),$ for then it could easily calculate
$\piece_j(b).$ In fact, we use a two-party protocol for player $i$ and
player $j$ to calculate the values $(\piece_i(a)\piece_j(b) + r_{ij})$ and
$(-r_{ij}),$ where $r_{ij}$ is a uniformly random field element.
Each receives
one of these values. No information is revealed, however, since each value
by itself is uniformly random. On the other hand, the sum of the two values
is the desired contribution to the final result, namely the pairwise product
$\piece_i(a)\piece_j(b).$ Summing over all results of all two-party
combinations gives the desired result, $ab;$ thus the sum of each player's
individual results represents a sum share of $ab.$ That is, the newly
constructed pieces $\piece_i(ab)$ are such that any $n-1$ of them are
distributed uniformly at random, while all $n$ pieces sum to $ab.$ The
protocol is given in Figure <ref>.
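The bookkeeping of the pairwise masking can be checked in a central simulation (a Python sketch; in the protocol each $\alpha_{ij}$, $\beta_{ij}$ pair is produced by a two-party oblivious evaluation, which this stand-in replaces):

```python
import random

P = 2**31 - 1                  # prime field (illustrative size)
rng = random.Random(2)

def sum_share(s, n):
    pieces = [rng.randrange(P) for _ in range(n - 1)]
    pieces.append((s - sum(pieces)) % P)
    return pieces

def multiply_shares(pa, pb, n):
    """piece_i(ab) = piece_i(a)piece_i(b) + sum_j (alpha_ij + beta_ji);
    the masks r cancel in the total, so the new pieces sum to ab."""
    new = [(pa[i] * pb[i]) % P for i in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                r = rng.randrange(P)
                alpha = (pa[i] * pb[j] + r) % P   # learned by player i
                beta = (-r) % P                   # learned by player j
                new[i] = (new[i] + alpha) % P
                new[j] = (new[j] + beta) % P
    return new

a, b, n = 321, 654, 4
pab = multiply_shares(sum_share(a, n), sum_share(b, n), n)
assert sum(pab) % P == (a * b) % P
```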
Phase I (Share):
$(\piece_1(s),\ldots,\piece_n(s)) \leftarrow $
$\uniform(\{\vec{p} \in E^n \mid \sum p_i = s \})$
$(1 \leq i \leq n)$
$D \rightarrow i:$
$\piece_i(s)$
Phase II (Reconstruct for $R$):
$(1 \leq i \leq n)$
$\piece_i(s) \rightarrow R.$
Protocol to sum-share a secret, $s.$
Resilient against passive adversaries for any $t.$
$(1 \leq i \leq n)$
$\piece_i(a+b) \leftarrow \piece_i(a) + \piece_i(b).$
Protocol to add two secrets shared using the sum representation.
Resilient against passive adversaries for any $t.$ Linear combinations
are a simple modification.
$(1 \leq i,j \leq n), i \not= j:$
$r_{ij}^i \leftarrow \uniform(E)$
$(1 \leq i,j \leq n; i \not= j):$
Run between players $i$ and $j$
on inputs $((\piece_i(a),r_{ij}^i),(\piece_j(b),r_{ij}^j)),$
to compute
$\alpha_{ij}=(\piece_i(a)\piece_j(b) + r_{ij}^i + r_{ij}^j),$
$\beta_{ij}=(- r_{ij}^i - r_{ij}^j).$
$(1 \leq i,j \leq n)$
$\piece_i(ab) = \piece_i(a)\piece_i(b) +
\sum_{j=1}^n (\alpha_{ij} + \beta_{ji}).$
Protocol to multiply two secrets shared using the sum representation.
Resilient against passive adversaries for any $t.$
Given a $2$-semi-resilient protocol for two-party
circuit evaluation, the multiplication protocol achieves $n$-semi-resilient
secret multiplication against fail-stop adversaries.
Clearly, the functions $\alpha$ and $\beta$
described in Figure <ref> are $n$-private;
if one of the pair of participants is corrupted, the messages from
the nonfaulty processors are obtained from the assumed sub-interface for
run as a subroutine, supplied with a uniformly random output
for the faulty player.
For each $i$ and $j,$ the protocol
computes an $n$-private representation of $\piece_i(a)\piece_j(b),$
and then computes an $n$-private representation of their sum,
which is the desired product.
Let $\set{F^{n,m}}$ be a family of functions described by
circuit family $C_{F^{n,m}}.$
Given a $2$-semi-resilient protocol for two-party circuit
evaluation, for any $t$ there
exists a $t$-semi-resilient protocol leaking $F$ against
fail-stop adversaries. The protocol requires $O(\size(C_{F^{n,m}})^{c_0})$
bits and
$O(\depth(C_{F^{n,m}})^{c_0})$ rounds of interaction for some fixed $c_0.$
The protocol is almost identical to protocol $-C_F,$
described in Chapter <ref>, using the sum-sharing, sum-addition,
and sum-multiplication protocols above instead of the secret sharing,
secret addition, and secret multiplication subprotocols.
If any player detects cheating, i.e., if any nonfaulty player
observes that another player halts, then it broadcasts $\cheating$
and each nonfaulty player outputs $(\cheating,y_i).$
The proof of relative resilience applies here, mutatis mutandis.
Note in particular that,
when faced with a request from the adversary
to halt a player in ,
the interface simply requests that the trusted
host in $\seminame(F)$ be halted. We conclude for a passive adversary
class $\advclass$ that $\evalsemipass \resilasFaC \idf,$ hence
for fail-stop adversaries,
$\evalsemipass \resilasFaC \seminame(\computef).$
If any player halts, broadcast $\cheating.$
If any player broadcasts $\cheating,$ halt and output $(\cheating,y_i).$
Each player $i$ runs ($x_i$).
Denote these secrets at level 0 of the circuit by
Evaluate addition and multiplication layers
Run with coefficients $\set{c_{lv,j}},$ secrets
$\set{z_{\inputgates_j{g_{lv}}}}$ to compute:
$z_{2l,v}= c_{2l,v,0} + \sum c_{2l,v,j} z_{\inputgates_j(g_{2l,v})} $
($v=1..w,$ $j=1..\abs{\inputgates(g_{lv})}$)
Run with secrets
$\set{(z_{\inputgates_1(g_{lv})}, z_{\inputgates_2(g_{lv})} )}$
to compute:
$z_{2l+1,v} = z_{\inputgates_1(g_{lv})} \cdot z_{\inputgates_2(g_{lv})}$
$j\in \outgates(i)$
Reveal $z_{lj}$ to player $i.$
Protocol to evaluate a circuit $C_F$ for $F(x_1,\dots,x_n),$
with a fail-stop adversary.
§.§ Polynomial Sharing
The protocols just described are prone to a single error of omission.
Polynomial sharing is not, and forces an adversary to corrupt more
players in order to disrupt the protocol. The motivating factor for
developing a scheme to multiply values shared via sum-sharing arose
from designing a multiplication protocol for secrets shared using
polynomials. We briefly discuss that scheme.
Degree reduction is again the problem. We reduce it to the problem
of generating publicly known evaluation points of $h(x)=f(x)g(x)+xr(x),$
where $f(x)$ and $g(x)$ are the original polynomials of degree $t$
used to share secrets $u$ and $v,$
and, as in <ref>, $r(x)$ is a random polynomial
of degree $t.$
Returning to Lagrange interpolation,
we express $h(m)$ as the product of two sums, where
the addenda in each sum are weighted pieces of $u$ and $v,$
and may be regarded as sum-shares:
\begin{eqnarray*}
h(m) & = & f(m)g(m) \\
& = & ( \sum_i L_i(m) f(i) ) ( \sum_j L_j(m) g(j) ) \\
& = & ( \sum_i L_i(m) \cdot \piece_i(u) )
( \sum_j L_j(m) \cdot \piece_j(v) ) \\
\end{eqnarray*}
Define $\overline{h}(x)= h(x) \mod x^{t+1}.$
It is a matter of straightforward algebra for each player $i$
to calculate the value
$\overline{h}(i)$ from $h(i)$ and the public values $h(-1),\dots,h(-n).$
Figure <ref> describes the protocol to synthesize values
and reduce the degree of $h(x).$
Run the protocol to provide each $i$ with $r_i,$
a piece of a random polynomial $r(x)$ of degree $t.$
For $m = -1, \dots, -n:$
Run the protocol with player $i$'s pieces being:
$L_i(m) \cdot \piece_i(u), L_i(m) \cdot \piece_i(v).$
Each $i$ receives a piece $c_i^m$ of the result.
Each $i$ shares $c_i^m$ as a secret,
and receives $\piece_i(c_j^m)$ for $1 \leq j \leq n.$
Run to compute $\piece_i(\sum_{j=1}^n c_j^m).$
Broadcast $i\, r_i L_i(m) + \piece_i(\sum_{j=1}^n c_j^m),$
and interpolate the value, $h(m).$
Player $i$ computes $\overline{h}(i)$ from $h(-1),\dots,h(-n)$ and
$h(i)=\piece_i(u) \cdot \piece_i(v) + r_i.$
Protocol to compute and broadcast $h(-1),\dots,h(-n),$
and to compute $\overline{h}(i)$ for player $i.$
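The Lagrange weights $L_i(m)$ used throughout are straightforward to compute over a prime field (a Python sketch with a toy modulus of my own choosing; division is via Fermat inversion):

```python
P = 101                        # toy prime modulus

def lagrange_coeff(i, m, points):
    """L_i(m) over GF(P): the weight of the value at point i when the
    interpolated polynomial is evaluated at m."""
    num = den = 1
    for j in points:
        if j != i:
            num = num * (m - j) % P
            den = den * (i - j) % P
    return num * pow(den, P - 2, P) % P     # divide via Fermat inversion

# Recover f(m) for f(x) = 3x^2 + 7x + 5 from its values at x = 1..4.
f = lambda x: (3 * x * x + 7 * x + 5) % P
points = [1, 2, 3, 4]
for m in (-1, -2, -3):
    val = sum(lagrange_coeff(i, m, points) * f(i) for i in points) % P
    assert val == f(m % P)
```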
§ TWO-PARTY PROTOCOLS FROM OBLIVIOUS TRANSFER
§.§ Cryptographic Primitives
A one-way function
$f$ is a family
$\set{f^n}$ mapping $\Sigma^n$ to
$\Sigma^{n^c}$ for some $c$ such that
* $f$ is polynomial-time computable;
* for all $c',$
there exists $n_0$ such that for all $n>n_0,$ for all polynomial-time
Turing machines $M,$
\[
\prob{x \leftarrow \uniform(\Sigma^n): f(x) = f(M(f(x)))}
< n^{-c'}.
\]
\]
A pseudorandom number generator $G$ is a family
$\set{G^n}$ mapping $\Sigma^n$ to $\Sigma^{dn^c}$ for some $c,d$ such that
$G$ is polynomial-time computable and
\[
\set{x \leftarrow \uniform(\Sigma^n): G(x)}
\indistEnC
\uniform(\Sigma^{dn^c}).
\]
Impagliazzo, Levin, and Luby [81] show how to produce a
pseudorandom number generator (PRG) from any one-way function:
(Impagliazzo et al. (1989))
If there exists a one-way function in the non-uniform model of security,
then there exists a pseudorandom generator in the non-uniform model of
security.
Based on PRG's, we may construct all sorts of primitives, including simple
private-key cryptosystems (if two parties share a seed, they generate a
one-time pad of pseudorandom bits which they use to mask messages) and
special encrypted forms of gates to evaluate circuits obliviously. It will
be useful to note that, by Lemma <ref>, any simple
subrange of the output of a PRG is itself indistinguishable from uniformly
random strings.
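The private-key cryptosystem just described is a one-time pad of PRG output (a Python sketch; SHA-256 in counter mode is a stand-in for any generator meeting the definition, not a construction from the text, and the function names are mine):

```python
import hashlib
from itertools import count

def prg(seed: bytes, nbytes: int) -> bytes:
    """Stand-in PRG: SHA-256 in counter mode, stretching a seed into a
    pseudorandom pad of the requested length."""
    out = b""
    for i in count():
        if len(out) >= nbytes:
            return out[:nbytes]
        out += hashlib.sha256(seed + i.to_bytes(8, "big")).digest()

def mask(shared_seed: bytes, message: bytes) -> bytes:
    # Both parties stretch the shared seed into a pad and XOR it in.
    pad = prg(shared_seed, len(message))
    return bytes(m ^ p for m, p in zip(message, pad))

seed, msg = b"shared-seed", b"attack at dawn"
ct = mask(seed, msg)
assert mask(seed, ct) == msg   # XOR masking is its own inverse
assert ct != msg
```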
§.§ Yao Gates
An essential tool to the construction of cryptographic
multiparty protocols is that of two-party oblivious circuit
evaluation, introduced by Yao [121].
Two players wish to evaluate a circuit $C$ at private inputs.
The output may go to one or both of the players, and may be different
for each.
A Yao gate is a circuit gate that is encrypted in a special way
to allow its evaluation without revealing the values on its wires. Yao
[121] introduced a particular implementation and Goldreich et
al improved upon it [71]. Both approaches used particular
cryptographic assumptions, such as the intractability of factoring or
quadratic residuosity, or even
the existence of any one-way trapdoor function.
We present an implementation that requires only an
arbitrary one-way function, not any particular function [17].
Our solution is
based only on the assumption of a protocol for oblivious transfer.
By [11], the existence of a protocol for oblivious transfer
implies the existence of a one-way function.
The existence of a trapdoor function need not be assumed.
The Yao gate
has two inputs, $X$ and $Y,$ and one output $Z.$ The gate
computes some function $g(X,Y)=Z.$ Rather than allowing the wire values
$X,$ $Y,$ and $Z$ to be known, however, there are random keys, $X_0,$
$X_1,$ $Y_0,$ $Y_1,$ $Z_0,$ and $Z_1,$ that represent the wire values
0 and 1. The correspondence between $X_0/X_1$ and wire values 0/1 is
random, however; the same holds true for $Y$ and $Z.$ In other words, there
are random bits $\omega_X,$ $\omega_Y,$ and $\omega_Z$ that encode the
correspondence between the keys and the wire values:
\begin{eqnarray*}
X_0 & \leftrightarrow & \omega_X \\
X_1 & \leftrightarrow & \overline{\omega}_X
\end{eqnarray*}
Thus, if $\omega_X=1,$ then key $X_0$ represents the wire value $1$ and key
$X_1$ represents the wire value $0.$ The key translations
$\omega_X,$ $\omega_Y,$ and $\omega_Z,$ are kept hidden.
The construction of the gate is such that knowing $X_{\alpha}$ and
$Y_{\beta}$ allows one to compute $Z_{g({\alpha} \oplus \omega_X, {\beta}
\oplus \omega_Y) \oplus \omega_Z}.$ That is, without knowing any of the key
translations, one can compute the $Z$ key that corresponds to the output
wire value. Furthermore, the only information one learns is the
appropriate $Z$ key itself, not the wire value it represents, nor the other
$Z$ key. One need only know two input keys $X_{\alpha}$ and
$Y_{\beta}.$ In fact, none of the other input keys are ever revealed;
knowing both $X_0$ and $X_1,$ for example, would allow one to perform two
computations — and the results [$Z_{g(0, y \oplus \omega_Y) \oplus
\omega_Z}, Z_{g(1, y \oplus \omega_Y) \oplus \omega_Z}$] would be either
two different $Z$ keys or the same $Z$ key, potentially revealing
information about $Y.$
Thus, the encryption of the gate is public, and exactly one of each pair of
input keys may be revealed, but all other values must remain secret. In a
two-player protocol based on Yao gates [121, 71], one of the
players creates the encrypted circuit by encrypting each gate in a
coordinated fashion, gives the circuit to the second, and obliviously
allows the second player to learn
exactly one input key for every input gate. The
second player then evaluates the circuit by computing the output keys at
each level, until it reaches the final output. The first player reveals
the correspondence between the final output keys and the wire values. Note
that this method requires little interaction; the computation of the new
keys from the old ones is performed locally.
Yao's original method was based on assuming the intractability
of factoring large numbers. We assume only that a protocol for
oblivious transfer exists. Oblivious transfer,
introduced by Rabin [79], is an important
two-player protocol to compute the probabilistic function on
$\emptyset \times \{0,1\}^2$ defined by
$F(x_1,x_2)=\{ b \leftarrow \uniform(\{0,1\}):
(\Lambda,(b,b\cdot x_1))\}.$
The first player, Alice, has a bit, $x_1,$ which she transfers
to the second player, Bob, with probability $\half.$ Alice
never learns, however, whether Bob received the bit; her output
is always the same, $0.$ Bob knows whether he received the
bit by checking whether $b=1$ or not.
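The functionality $F(x_1,x_2)=\{ b \leftarrow \uniform(\{0,1\}): (\Lambda,(b,b\cdot x_1))\}$ is easiest to grasp as a trusted-party computation. The sketch below simulates only the ideal functionality (a hypothetical trusted host, not a real protocol): Alice's output is always the empty value, while Bob receives $(b, b\cdot x_1)$ for a fresh random bit $b$.

```python
import secrets

def rabin_ot(x1: int):
    # Ideal functionality F(x1, x2) = { b <- U({0,1}) : (Lambda, (b, b*x1)) }.
    # Alice learns nothing; Bob receives x1 exactly when b = 1, and he knows
    # whether he received it by checking b.
    b = secrets.randbelow(2)
    alice_out = None          # Lambda: Alice's output is always the same
    bob_out = (b, b * x1)
    return alice_out, bob_out

runs = [rabin_ot(1) for _ in range(200)]
assert all(a is None for a, _ in runs)        # Alice never learns b
assert all(y == b for _, (b, y) in runs)      # with x1 = 1, Bob gets x1 iff b = 1
assert {b for _, (b, _) in runs} == {0, 1}    # both outcomes occur
```

A real protocol must realize this distribution without the trusted host; the point of the simulation is only to pin down what each player is entitled to learn.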
Bellare, Cowen, and Goldwasser [22] show that the existence
of an oblivious transfer (OT) protocol implies
that one-way functions exist. On the other hand, even though
it would be safer to assume only that one-way functions exist,
Impagliazzo and Rudich [82] show that proving the existence
of an OT protocol based on one-way functions is likely to be as
hard as proving P$\not=$NP.
Fix a function family $F=\{F^m\}$ mapping
$\set{0,1}^m \times \set{0,1}^m \rightarrow \set{0,1}^m.$
Let $\gen(X)$ be a PRG that outputs $3\abs{X}$ bits.
Denote the first $\abs{X}$ bits by $\gentag(X),$ the
next $\abs{X}$ bits by $\genmask(0,X),$ and the final $\abs{X}$ bits by
$\genmask(1,X).$
These functions will be used to create identifying tags
and masks for the keys.
In the sequel, all keys have the same length $k.$
Our gate consists of four $3k$-bit entries as follows:
for $a=0,1$ and for $b=0,1,$
$\encode(a,b,X_0,X_1,\omega_X,Y_0,Y_1,\omega_Y,Z_0,Z_1,\omega_Z) = $
$\gentag(X_{a}) \circ \gentag(Y_{b}) \circ $
$[\genmask(b,X_{a}) \oplus
\genmask(a,Y_{b}) \oplus
Z_{g(a \oplus \omega_X, b \oplus \omega_Y) \oplus \omega_Z}].$
Given a $12k$-bit string, we consider it as four $3k$-bit strings,
and within each of those $3k$-bit strings we refer to
segments $\Yaogate_1(a,b),$ $\Yaogate_2(a,b),$ and $\Yaogate_3(a,b).$
The entire table is thus:
$\Yaogate(X_0,X_1,\omega_X,Y_0,Y_1,\omega_Y,Z_0,Z_1,\omega_Z) = $
$\encode(0,0,X_0,X_1,\omega_X,Y_0,Y_1,\omega_Y,Z_0,Z_1,\omega_Z) \circ$
$\encode(0,1,X_0,X_1,\omega_X,Y_0,Y_1,\omega_Y,Z_0,Z_1,\omega_Z) \circ$
$\encode(1,0,X_0,X_1,\omega_X,Y_0,Y_1,\omega_Y,Z_0,Z_1,\omega_Z) \circ$
$\encode(1,1,X_0,X_1,\omega_X,Y_0,Y_1,\omega_Y,Z_0,Z_1,\omega_Z)$
Figure <ref> describes the decoding of
a gate given input keys $X$ and $Y.$
Compute $\gentag(X),\gentag(Y).$
For $a = 0..1$:
Compute $\genmask(a,X),\genmask(a,Y).$
Determine the least $\alpha,\beta$ such that
$\gentag(X_\alpha) = \Yaogate_1(\alpha,\beta)$ and
$\gentag(Y_\beta) = \Yaogate_2(\alpha,\beta)$
Set $Z\leftarrow \Yaogate_3(\alpha,\beta) \oplus
\genmask(\beta,X) \oplus \genmask(\alpha,Y).$
Obtaining the output key $Z$ from a generalized Yao gate, given the
encrypted table and two input keys $X$ and $Y.$
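The encoding and decoding above can be exercised end to end. The sketch below follows the $\encode$ formula and the decoding figure literally; SHA-256 in counter mode stands in for the PRG $\gen$ (an assumption for illustration), and the function names are ours, not the text's.

```python
import hashlib, secrets

K = 16  # key length k, in bytes

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def gen(key: bytes) -> bytes:
    # Stand-in PRG (assumption): SHA-256 in counter mode, stretched to 3K bytes.
    out = b"".join(hashlib.sha256(key + bytes([i])).digest() for i in range(2))
    return out[:3 * K]

def tag(key: bytes) -> bytes:
    return gen(key)[:K]                            # gentag

def mask(c: int, key: bytes) -> bytes:
    return gen(key)[K + c * K : K + (c + 1) * K]   # genmask(c, .)

def encode_gate(g, X, Y, Z, wx, wy, wz):
    # Four 3K-byte entries, for a = 0,1 and b = 0,1:
    #   gentag(X_a) || gentag(Y_b) ||
    #   [genmask(b, X_a) xor genmask(a, Y_b) xor Z_{g(a^wx, b^wy)^wz}]
    table = []
    for a in (0, 1):
        for b in (0, 1):
            z = Z[g(a ^ wx, b ^ wy) ^ wz]
            table.append(tag(X[a]) + tag(Y[b])
                         + xor(xor(mask(b, X[a]), mask(a, Y[b])), z))
    return table

def decode_gate(table, x_key, y_key):
    # Find the least (alpha, beta) whose tags match, then unmask the Z key.
    tx, ty = tag(x_key), tag(y_key)
    for i, entry in enumerate(table):
        a, b = divmod(i, 2)
        if entry[:K] == tx and entry[K:2 * K] == ty:
            return xor(entry[2 * K:], xor(mask(b, x_key), mask(a, y_key)))
    raise ValueError("no matching entry")

# Demo: an AND gate with random keys and hidden key translations.
g = lambda u, v: u & v
X, Y, Z = ([secrets.token_bytes(K) for _ in range(2)] for _ in range(3))
wx, wy, wz = 1, 0, 1
table = encode_gate(g, X, Y, Z, wx, wy, wz)
for u in (0, 1):
    for v in (0, 1):
        # Holding X_{u^wx} and Y_{v^wy}, one recovers exactly Z_{g(u,v)^wz}.
        assert decode_gate(table, X[u ^ wx], Y[v ^ wy]) == Z[g(u, v) ^ wz]
```

The evaluator never learns the wire values, only which $Z$ key to carry forward, matching the claim that the gate reveals the appropriate output key and nothing more.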
§.§ Two-Party Yao Circuit Construction
Given the means to construct gates, constructing a circuit is
straightforward.
Let us fix a notation for the wires of a circuit. Given a circuit of depth
$d$ and width $w,$ assume without loss of generality that there is a gate
$g_{i,j}$ at every location $(i,j)$ ($i \in [d], j \in [w]$), and assume
that no wires skip from one level to the next. Dummy gates computing the
identity function can be introduced for this purpose; without loss of
generality, all gates have exactly two inputs (which could be the same wire).
We consider an array $\wires[i,j,l]$ of wire keys for $i \in
\set{0,\ldots,d},j\in \set{1,\ldots,w},$ $l\in \set{0,1}.$ The
keys $\wires[i,j,0]$ and $\wires[i,j,1]$ represent wire $(i,j),$ which
carries the output of gate $g_{i,j}.$ Keys $\wires[0,j,0]$ and
$\wires[0,j,1]$ represent inputs to the circuit. Associated with each wire
$(i,j)$ is a key translation bit $\keytrans[i,j].$
Given $S,$ a set of components to void, and $\wires,$ the construction of
the circuit is as follows. First, construct a table $\gentab$ of
$6dwnk$ pseudorandom bits, where
\[
\gentab = \gen(\wires)
\]
and where the function $\gen(\wires)$ gives
a table of $(2dwn)$ $3k$-bit strings defined by
\[
\gen(\wires)[i,j,l] = \gen(\wires[i,j,l]).
\]
Second, convert this table to a circuit in the same fashion as the
gates are constructed. That is, with respect to table $\gentab,$ define
the functions $\tabletag(\gentab,i,j,l)$ and $\tablemask(\gentab,c,i,j,l)$
($c\in\set{0,1}$) as the three substrings of length $k$ of $\gentab[i,j,l].$
The Yao gate $\yaogate(\wires,\gentab,\keytrans,i,j)$ is then
constructed as in <ref>, using $\tabletag$ and $\tablemask$
as the tag and mask functions. If $\inp_0(i,j)$ and $\inp_1(i,j)$ are the
indices of the left and right inputs to $g_{i,j},$ then the wire keys are as
follows:
\begin{eqnarray*}
X_l & = & \wires[\inp_0(i,j),l] \\
Y_l & = & \wires[\inp_1(i,j),l] \\
Z_l & = & \wires[i,j,l]
\end{eqnarray*}
The key translations used to construct the gate are
$\keytrans_X=\keytrans[\inp_0(i,j)],$ $\keytrans_Y=\keytrans[\inp_1(i,j)],$
and $\keytrans_Z=\keytrans[i,j].$
The circuit is denoted
$\yaocircuit(\wires,\gentab,\keytrans).$ Notice that the construction
takes $\gentab$ into account independently of $\wires;$ it is well-defined
whether or not $\gentab$ arises from $\wires.$
$\yaocircuit(\wires,\gentab,\keytrans)[i,j] \leftarrow
\yaogate(\wires,\gentab,\keytrans,i,j);$
more precisely, $\yaocircuit(\wires,\gentab,\keytrans)$ is the string
$\yaogate(\wires,\gentab,\keytrans,1,1) \circ
\yaogate(\wires,\gentab,\keytrans,1,2) \circ \cdots$
$\circ \yaogate(\wires,\gentab,\keytrans,d,w-1)
\circ \yaogate(\wires,\gentab,\keytrans,d,w)$
How to construct a Yao-type circuit from a table $\wires$ of keys,
a table
$\gentab$ of tags and masks, and a set $\keytrans$ of key translations.
The inputs $x_1$ and $x_2$ are written as a set of input bits.
The value $\wireval[i,j]$ of wire $(i,j)$ is determined by the
input bits and the gates $g_{i,j}$ (in the natural fashion in which
the values are percolated through a circuit).
We use a bijection $L_0(J)=(J \Div m, J
\mod m)$ to map wire $(0,J)$ to input $x[L_0(J)].$
The set of keys necessary to evaluate the encrypted circuit is
given by
\[
\inkeys(\wires,\keytrans,x_1,x_2) =
\{\wires[0,J,\keytrans[L_0(J)] \oplus x[L_0(J)]]\}_{J \in [2m]}
\]
Finally, the output key translations that must be revealed in order to
interpret the keys of the final level are
\[
\outkeys(\keytrans) = \set{\keytrans[d,j]}_{j \in [w]}.
\]
The entire encrypted circuit is:
\[
\hidecirc(\wires,\gentab,\keytrans,x_1,x_2)
=
\yaocircuit(\wires,\gentab,\keytrans)
\circ
\outkeys(\keytrans)
\circ
\inkeys(\wires,\keytrans,x_1,x_2)
\]
The initial portion corresponding to the circuit, the output keys,
and the input keys for inputs from player $1$ is denoted
$\hidecirc^{1}(\wires,\gentab,\keytrans,x_1,x_2),$
and does not depend on $x_2.$ The suffix containing
the input keys for player $2$ is called
$\hidecirc^2(\wires,\gentab,\keytrans,x_1,x_2),$ and does not depend
on $x_1.$
§.§ Two-Party Protocols for Passive Adversaries
The encrypted circuits we have just described allow us to specify
a protocol for two players to evaluate some function $F(x_1,x_2).$
We examine the case where player $2$ learns the output, and player
$1$ learns only whether player $2$ cheats or not; the case where both
players learn the output is covered by two applications of the
unidirectional protocol.
Figure <ref> describes the protocol. Player $1$
constructs an encrypted circuit and gives it to player $2.$ Player
$1$ and player $2$ also engage in a 1-out-of-2 oblivious transfer protocol
in order for player $2$ to obtain the needed wire keys corresponding
to its inputs. Player $1$ should not learn which keys player $2$
obtained, and player $2$ should obtain only one of each pair of input keys.
One out of two oblivious transfer is like oblivious transfer except
that instead of receiving a given secret with 50-50 probability,
Bob is allowed to choose exactly one of two secrets to obtain.
Alice, instead of not knowing whether Bob received anything, now
fails to know which secret Bob chose. Crépeau and Kilian show how
to implement 1-out-of-2 OT using standard OT. We denote their protocol,
with the assumed OT protocol built in, as $(b_1,b_2;c);$
Bob receives bit $b_c$ from Alice. The protocol is not limited
to transferring bits; strings may be transferred as well.
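The notation $(b_1,b_2;c)$ is easiest to read as the ideal functionality it realizes (a hypothetical trusted-party sketch, not an implementation): the chooser Bob learns exactly one of the two secrets, and Alice learns nothing about his choice.

```python
def one_out_of_two_ot(b1, b2, c: int):
    # Ideal 1-out-of-2 OT: Bob receives secret b_c; Alice's output reveals
    # nothing about c. Secrets may be bits or strings, as noted in the text.
    alice_out = None
    bob_out = (b1, b2)[c]
    return alice_out, bob_out

# Transferring wire keys as in step (TE2): Bob's input bit selects the key.
assert one_out_of_two_ot("key0", "key1", 0) == (None, "key0")
assert one_out_of_two_ot("key0", "key1", 1) == (None, "key1")
```

This is exactly the shape of the transfers in step (TE2) below: the two secrets are the pair of input keys for a wire, and Bob's choice bit is his input bit for that wire.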
$\wires \leftarrow \uniform(\{0,1\}^{2dwk})$
$\gentab \leftarrow \gen(\wires)$
$\keytrans \leftarrow \uniform(\{0,1\}^{2dw})$
$\EC^{1} \leftarrow \hidecirc^{1}(\wires,\gentab,\keytrans,x_1,x_2)$
$1 \rightarrow 2:$
Transfer input keys,
Run the 1-out-of-2 OT protocol $(\wires[0,m+i,0],\wires[0,m+i,1];x[2,i]).$
Evaluate locally to decode $\EC^{1}\circ\EC^{2}.$
Protocol to transfer an encrypted circuit and the necessary input
keys to decode it from player $1$ to player $2.$
To show this protocol secure, we need to show that the
encrypted circuit $\EC,$ when generated properly, reveals nothing
more than $F(x_1,x_2).$ We show that the protocol is as resilient
as an ideal protocol that provides player $2$ with $\EC,$
and we show the latter is as resilient as an ideal protocol
that provides player $2$ with $F(x_1,x_2).$
The protocol is $2$-resilient against static, passive adversaries.
Let $\idealtwoec$ be an ideal protocol in which player $1$ creates
$\wires,$ $\gentab,$ and $\keytrans$ as above and sends them along
with $x_1$ to the trusted host, and player $2$ sends $x_2$ to the trusted
host. The host computes $\hidecirc(\wires,\gentab,\keytrans,x_1,x_2)$
and returns the value to player $2.$ It suffices to show
$\unitwoeval \resilasFa \idealtwoec$ and
$\idealtwoec \resilasFaC \idf.$
First let us show $\unitwoeval \resilasFa \idealtwoec.$
If both players are corrupt, the interface $\interface$ simply corrupts
them both in the ideal protocol, obtains the inputs, operates
the players as a subroutine, and returns the results to the adversary.
If player $1$ is corrupt, then $\interface$ corrupts player $1$ in
the ideal protocol, obtains $x_1,$ $\wires,$ $\gentab,$ and
$\keytrans,$ computes $\EC^{1}$ and delivers it to $A$ as the view
of player $1$ in step (TE1). We have assumed the oblivious transfer
protocol is resilient,
so there is an interface that $\interface$ can run as a subroutine
for step (TE2) (note that $\interface$ has all keys
that player $1$ puts up in the protocol).
If player $2$ is corrupt, then
$\interface$ corrupts player $2$ in the ideal protocol, obtains
$x_2$ and $\EC=\hidecirc(\wires,\gentab,\keytrans,x_1,x_2),$ and
gives to $A$ the message $\EC^{2}$ from player $1$ in step (TE1).
For step (TE2), $\interface$ runs a subinterface for the oblivious transfer
subprotocol,
providing it with $x_2$ when it requests the corruption of player $2$
in the subprotocol. It should be clear that for this interface
\[
[A,\unitwoeval] \indistFaC [A,\interface,\idealtwoec].
\]
The proof that $\idealtwoec \resilasFa \idf$ is slightly more involved.
If both player $1$ and player $2$ are corrupted, the interface
$\interface'$ has a trivial job. If player $1$ is corrupted, $\interface'$
need only corrupt player $1$ in $\idealtwoec$ to obtain $x_1$ and supply
it to $A.$ The difficulty arises when only player $2$ is corrupt, for
then the interface needs to generate an encrypted circuit without knowing
all the keys. Intuitively, the solution is to construct the part of the
encrypted circuit that would be decrypted during the evaluation,
given the known subset of keys and the value of $F(x_1,x_2).$
In other words, each gate contains four entries, only one of
which is used in the percolation. This entry must be generated
accurately, but the known subset of keys suffice to generate it.
The remaining entries are normally produced using the remaining
keys, but $\interface'$ does not have them. Instead, $\interface'$
places uniformly random strings in their place. Because these
entries are normally produced by a PRG, the resulting message that
$\interface'$ produces for $A$ is computationally indistinguishable from one
generated in $\idealtwoec,$ so the behavior of $A,$ by Lemma <ref>,
is essentially the same, and the corresponding final outputs in the
two protocols are indistinguishable.
The proof is an easy corollary of the proof of Lemma <ref>.
The proof of the lemma defines a set of ensembles
which move progressively from one generated according to the real protocol
to one generated by the interface. It is shown that, because each successive
ensemble is indistinguishable, the first and last ensembles are
indistinguishable. The encryption technique is more general, applying
to $n$ parties, but the encryption allows a subset $S$ of the $n$ parties
to be ignored (in case of misbehavior). If we take $S=[n]-\{1\},$
as though only player $1$ behaves,
the distributions are identical to the ones considered here, and
the result follows directly.
Let $t \leq n.$
If there exists a two-party protocol for oblivious transfer
or oblivious transfer channels are available, then for any
$\computef$ and $c$ there exists a $(k^{-c},t)$-fair protocol
for $\computef$ that is resilient against fail-stop adversaries.
If $\computef$ is described by a circuit family $C_{\computef^{n,m}},$ the
protocol requires $O(\size(C_{\computef^{n,m}})^{c_0})$ bits and
$O(\depth(C_{\computef^{n,m}})^{c_0})$ rounds of interaction for some
fixed $c_0.$
The theorem follows directly from Theorems
<ref> and <ref>.
§ BYZANTINE ADVERSARIES
§.§ Private and Public Channels
Before attacking the issue of Byzantine adversaries, we must consider
how to eliminate private channels. In a cryptographic setting,
when players are resource-bounded, we may employ encryption schemes
between each pair of players. Before the protocol begins, each pair
is given secret encryption and decryption keys $(E_{ij},D_{ij}),$
or each pair runs a protocol to generate and exchange them, such that
each encryption scheme is independent of all the others whenever at
least one player in the pair is nonfaulty. We convert a private
channel protocol, $\protoname,$ to a public channel protocol,
$\pub(\protoname),$ by specifying that in $\pub(\protoname),$
every message $\mess(i,j,r)$ that would otherwise be privately sent
is encrypted and broadcast by $i.$ That is, in $\pub(\protoname),$
$\mess^{\broad}(i,[n],r;\mess(i,j,r)) = E_{ij}(\mess(i,j,r)).$
An unpublished claim of Feldman states that private channels may be
replaced by public, encrypted channels without loss of security
[59]. Indeed, a probability walk like the ones used to prove
Theorems <ref> and <ref>
( <ref>)
shows that the encryptions broadcast between nonfaulty players are
indistinguishable to the adversary from uniformly random strings.
Hence an interface need merely supply the adversary with uniformly
random strings in their stead. It is important to note that this
argument works only for static adversaries. We have:
(After Feldman) For any protocol $\protoname,$
$\pub(\protoname) \resilasFaC \protoname.$
Let $t \leq n.$
If there exists a two-party protocol for oblivious transfer
or oblivious transfer channels are available, then for any
$\computef$ and $c$ there exists a $(k^{-c},t)$-fair protocol
for $\computef$ that is resilient against fail-stop adversaries
and uses only broadcast channels. If
$\computef$ is described by a circuit family $C_{\computef^{n,m}},$ the
protocol requires $O(\size(C_{\computef^{n,m}})^{c_0})$ bits and
$O(\depth(C_{\computef^{n,m}})^{c_0})$ rounds of interaction
for some fixed ${c_0}.$
§.§ Byzantine Adversaries
For technical reasons we consider only static adversaries. Recent
work of Haber, Yung, and Beaver [19] shows that, with
additional techniques, dynamic adversaries can be withstood in the
memoryless, cryptographic model, but we shall not delve into those
issues here. The definition of fairness itself requires deeper examination
with respect to dynamic adversaries.
We compile a public-channel cryptographic Turing machine protocol
$\protoname,$ resilient against passive or fail-stop adversaries,
to a protocol $\byz(\protoname),$
resilient against Byzantine adversaries, in the sense of
[70, 71]. Each player broadcasts an encryption of the
machine $M_i$ it runs, its state, and the contents of all of its tapes.
Later, each player gives zero-knowledge proofs that the new state and
outgoing messages are computed correctly according to the encryptions
and incoming messages. Let us consider this in more detail.
At each round of protocol $\protoname,$ machine $M_i$ is described
precisely by its superstate $s_i$ (Turing machine program,
current state and position of tape heads, all tape contents).
An encrypted superstate $e_i$ is the result of applying
encryption $E_i$ to the string $s_i.$ At each round of $\byz(\protoname),$
every player
broadcasts $e_i^r,$ the encrypted superstate of $M_i$ at round $r.$ The
contents of $M_i$'s communication tapes, $c_i^r,$ are all
broadcast messages, hence public knowledge.
To prevent a Byzantine adversary from causing some player to simulate
$M_i$ incorrectly, each player must prove publicly to every other
player that the new, encrypted superstate and set of outgoing messages
it broadcasts are correct with respect to the previous encryption
and incoming messages. Define the predicate $\transgood$ by:
\begin{eqnarray*}
\transgood(e_i^r,c_i^r,e_i^{r+1},c_i^{r+1}) = 1
& \Leftrightarrow &
(\exists D_i)\; (D_i(e_i^r)=s_i^r) \wedge (D_i(e_i^{r+1})=s_i^{r+1})
\mbox{\hspace{0.2in} and } \\
& &
s_i^r(c_i^r) = (s_i^{r+1},c_i^{r+1})
\end{eqnarray*}
where $s_i^r(c_i^r)$ indicates the local computation of a machine
$M_i$ as specified by superstate $s_i^r$ with incoming messages
$c_i^r.$ The predicate claims that player $i$ can decode the encrypted
superstate to demonstrate that the local computation is correct, something it
cannot do with nonnegligible probability unless it actually encrypts
a valid transition.
Each player $i$ conducts a zero-knowledge proof of
$\transgood(e_i^r,c_i^r,e_i^{r+1},c_i^{r+1})$ over broadcast channels
with each other player as the verifier. As long as one player
honestly plays the part of the verifier, with exponentially high
probability no false statement will go undetected. Furthermore,
all nonfaulty players concur as to the validity of the proof, having
witnessed it over broadcast lines. Of course, proofs by nonfaulty
players are zero-knowledge, and an interface easily creates their
transcripts for the adversary without having to corrupt nonfaulty
players to obtain their knowledge.
Another important predicate must be proved at the start of the protocol.
Each player must prove it broadcast a proper description of the
Turing machine $M_i$ in its initial state with zeros written over
unused portions of the tapes (i.e., everything but the random tape
(a sequence of some bounded number $p(m,n)$ of uniformly random bits),
the $m$ bits of input, and the auxiliary tape). Call this predicate
$\initgood.$
Should any proof fail, all players halt and abort.
In a certain sense, nothing better is achievable, since a faulty
majority always has the power to halt the protocol. A slightly
more robust treatment, in the spirit of using polynomial secret
sharing instead of sum sharing, directs each player initially to
polynomially share its initial superstate, $s_i^0.$ If player $i$
fails, then $s_i^0$ is reconstructed and the current state
computed using the public knowledge of broadcast messages.
Better yet, if states are maintained as polynomially secretly
shared secrets as the protocol progresses, then the reconstruction
is immediate and no history need be recorded. Yet more involved
treatments are possible: the shared portions of $s_i^r$ can be
incorporated into the computation without revealing them.
The protocol starts over with a new goal in mind, that of
computing the appropriate result based on current states
and the private pieces of $s_i^r.$
The protocol is described in Figure <ref>.
(After Goldreich et al)
Let $\advclass_1$ be a static, Byzantine, polynomial-time adversary class and
let $\advclass_2$ be a static, fail-stop, polynomial-time adversary class.
Then for any protocol $\protoname,$
\[
\byz(\protoname) \resilasFaC_{(\advclass_1,\advclass_2)}
\hspace{0.1in} \protoname.
\]
If any player fails to give a proper proof, halt and abort.
$(1\leq i,j \leq n)$
Generate secret encryption and decryption keys $E_{ij},D_{ij}.$
$(1 \leq i \leq n)$
Generate secret encryption and decryption key $E_i,D_i.$
$i\rightarrow [n]:$
$e_i^0 = E_i(s_i^0)$
${\tt ZKP}(i,j,\initgood(e_i^0))$
$(s_i^{r+1},\mess(i,[n],r+1)) \leftarrow s_i^r(\mess([n],i,r)),$
$e_i^{r+1} \leftarrow E_i(s_i^{r+1})$
$i \rightarrow [n]:$
$e_i^{r+1}, \mess(i,[n],r+1)$
${\tt ZKP}(i,j,
\transgood(e_i^r,\mess([n],i,r),e_i^{r+1},\mess(i,[n],r+1)))$
Protocol to simulate protocol $\protoname$ in order to ensure
resilience against Byzantine adversaries.
Here, ${\tt ZKP}(i,j,P)$ is a zero-knowledge proof protocol
with prover $i,$ verifier $j,$ predicate $P,$ and all messages sent over
broadcast channels.
As discussed, the job of the interface $\interface$ is relatively
simple. For each step in $\protoname,$ it participates with the
adversary $A$ in $\byz(\protoname)$ to supply it with zero knowledge
proofs from honest players or to supply it uniformly random strings
substituted for encrypted messages between nonfaulty players. All
encrypted messages known to the adversary are obtained from protocol
$\protoname.$ Should any proof from a corrupt player fail, $\interface$
causes the corresponding player in $\protoname$ to halt.
Protocol $\protoname$ deems what occurs thereafter (e.g., all
nonfaulty players may halt).
It is not hard to see that all messages obtained by $A$ are
indistinguishable from those it sees in $\anglebrack{A,\byz(\protoname)}.$
Stripping away the zero-knowledge proofs, the messages processed
by nonfaulty players are the same in $\anglebrack{A,\byz(\protoname)}$
and $\anglebrack{A,\interface,\protoname}$ — that is,
the behavior of each nonfaulty
internal $M_i$ in $\byz(\protoname)$ matches
that of each $M_i$ in $\protoname.$
Hence all final outputs are distributed indistinguishably, by
Lemma <ref>.
If there exists a protocol for two-party oblivious transfer,
then for any $F$ and $c>0$ there exists a protocol
for $F$ that is $(k^{-c})$-fair against Byzantine adversaries.
Define $\evalfair = \byz(\evalfairfs),$ the compiled version of
the fair, fail-stop protocol for $F.$ By Proposition <ref>,
$\evalfair$ satisfies the claim.
§.§ Restarting the Protocol
For most functions it is impossible to restart the protocol without
allowing the adversary some degree of bias on the final result.
For example, after learning the result of a parity computation
with $80\%$ likelihood, the adversary may halt the protocol
in order to bias the result in the other direction. A certain amount
of information about inputs is leaked from learning the result
$F(x_1,\ldots,x_n);$ a second sample of $F$ after faulty players
have changed their inputs may easily give away tremendous information.
For example, $F(x_1,\ldots,x_n)= x_1x_3 + x_2(1-x_3)$ allows corrupt
player $3$ to learn $x_1$ with a reasonable degree of certainty
by first using $x_3=1,$ and in a restarted protocol to learn $x_2$
by setting $x_3=0.$ Though inputs may be forced to be the same
from one execution to another by using the encryptions as commitments,
an astute adversary can glean information by a particular choice
of inputs to omit, without having to change inputs.
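The multiplexer example above is easy to check mechanically; the sketch below simply evaluates $F$ the way a corrupt player $3$ would across a restart.

```python
def F(x1: int, x2: int, x3: int) -> int:
    # F(x1, x2, x3) = x1*x3 + x2*(1 - x3): selects x1 when x3 = 1, x2 when x3 = 0.
    return x1 * x3 + x2 * (1 - x3)

for x1 in (0, 1):
    for x2 in (0, 1):
        # First execution: corrupt player 3 sets x3 = 1 and learns x1 ...
        assert F(x1, x2, 1) == x1
        # ... then forces a restart with x3 = 0 and learns x2 as well.
        assert F(x1, x2, 0) == x2
```

Even if commitments force player $3$ to keep $x_3$ fixed across restarts, choosing which executions to abort still leaks information, as noted above.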
The polynomial secret sharing method is advantageous in a setting
where a correct answer must be had, even at the cost of allowing
bias or loss of privacy. At least $n-t$ players must identifiably
cheat in order for the protocol to halt. The maximal
number of restarts is reduced by a factor of $(n-t)$ over the
simple sum sharing methods. Eventually the number of nonfaulty
players, $n-t,$ becomes greater than half the number of remaining
players, and a protocol for faulty minority (see Chapter <ref>)
may be employed.
§ SIMULTANEOUS, VERIFIABLE SECRET SHARING
Fairness is precisely the most significant problem
with secret sharing when the majority
is faulty. An adversary
can easily collect the pieces of nonfaulty players at reconstruction
time but then refuse to reveal pieces held by faulty players.
A method for one player to share a secret that can later be
revealed in a fair manner would be a useful tool. This is the problem of
Simultaneous Verifiable Secret Sharing (SVSS):
the secret must be verifiable when shared, but it also must later
be revealed simultaneously to all players.
The ideal protocol utilizes a fair, ideal host who accepts
the secret and in the next stage reveals it fairly. This is equivalent
to a fair, ideal host who computes a trivial identity function.
A protocol is a $(\delta,t)$-ideal SVSS protocol
with dealer $D$ if
it is a $(\delta,t)$-fair, ideal protocol for the function
$F_D(x_1,\ldots,x_n)=x_D.$
The SVSS problem is to find a protocol that satisfies
$\protoname \resilasFaC \idfairname(F_D).$
Define the protocol $\svss(D)$ to be the $\evalfair$ protocol
as run on $F_D(x_1,\ldots,x_n)=x_D.$ Then by
Theorem <ref>, $\svss(D) \resilasFaC
\idfairname(F_D).$
The solution to SVSS is clear, once we have the machinery of
this chapter to construct
fair protocols for arbitrary functions. The revelation of a secret
boils down to a sequence of unfair revelations of coin tosses biased
toward the secret value.
CHAPTER: CRYPTOGRAPHIC METHODS FOR CONSTANT ROUNDS
We have seen that cryptographic assumptions of one sort or another
facilitate solutions where none are possible, according to
information theory. The level of
fault-tolerance is vastly improved by assuming the existence of an
oblivious-transfer protocol; can other parameters be improved by making
cryptographic assumptions?
As it turns out, the number of rounds for any protocol can be reduced to a
constant, without any concomitant explosion in message size, if one is
willing to make the weakest sort of cryptographic assumption, namely that
there exists a one-way function. No details about the structure of such a
function are needed for our protocols. Thus, in turning to the
vulnerability of making unproven assumptions, we take the smallest possible
gamble. Until such time as there is a proof that P$\not=\np$ and that
one-way functions exist, our techniques provide all the advantages of
complexity-based cryptography but at the least risk.
The solution is inspired by Yao's method for oblivious circuit evaluation,
in which one player supplies the other with an encrypted circuit and a
partial set of keys to decrypt it (cf. Chapter <ref>). The
encrypted circuit is evaluated locally, without any interaction. Certainly
in the unbounded-resource models of Chapters <ref> and
<ref>, such an encryption is not possible with perfect,
information-theoretic security. On the other hand, if we limit the
adversary to being polynomial time, an encrypted circuit of the type
introduced by Yao can be computed itself as a result of a multiparty
protocol. If the construction of the encrypted circuit requires constant
rounds, then the overall protocol will require only constant rounds, since
the encrypted circuits can be evaluated locally, without interaction.
There are crucial differences between the two-party
construction and the multiparty construction — most notably the fact that
no single party actually knows how the circuit was constructed — but the
important property is that, if the generalized Yao circuit can be
constructed, then it requires no interaction to evaluate it.
The encryption we describe here is more general and more involved than
that in Chapter <ref> but is quite similar in other ways.
The complication arises from the need to combine pseudorandom
sequences generated by all the players in order to ensure that
no single player can fathom any part of the encrypted circuit not
corresponding to the restricted path of percolated keys. Interestingly,
this approach would not work if general secret computation were needed
to construct the encrypted circuit, because the construction itself
must use constant rounds, and generating pseudorandom bits is a polynomial
time computation, not known to be in $NC^1,$ and hence not known to be
computable in constant rounds. The second technique that allows the
solution to go through arises from the observation that each player
can compute pseudorandom sequences locally and prove to the network,
using methods of Chapter <ref>, that it shares these results
properly. Proceeding from the pseudorandom outputs to the encrypted
circuit is an extremely fast and simple computation.
§ GENERALIZED YAO GATES
Fix $n,$ $m,$ and $F.$
Let $\gen(X)$ be a PRG that
outputs $(1+2n)\abs{X}$ bits (or to be pedantic, let $\gen(X)$ be the first
$(1+2n)\abs{X}$ bits produced by a PRG that outputs $3\abs{X}^2$ bits, with
$\abs{X} \geq n$). Denote the first $\abs{X}$ bits by $\gentag(X),$ the
next $n\abs{X}$ bits by $\genmask(0,X),$ and the final $n\abs{X}$ bits by
$\genmask(1,X).$ In the sequel, all keys have the same length $k.$
Let $S \subseteq [n],$ and define the string $S(i)=0^{nk}$ if $i \not\in S$
and $S(i)=1^{nk}$ if $i \in S.$ The set $S$ will ultimately be used to zero
out components corresponding to faulty players. If $\logand$ is the
bitwise logical AND, define the following strings, for $a \in \set{0,1}:$
\begin{eqnarray*}
\gentag(S,\vec{X}_{a}) & = & [\gentag(X_{a}^1)\logand S(1)] \circ
[\gentag(X_{a}^2)\logand S(2)] \circ \cdots \circ
[\gentag(X_{a}^n)\logand S(n)] \\
\gentag(S,\vec{Y}_{a}) & = & [\gentag(Y_{a}^1)\logand S(1)] \circ
[\gentag(Y_{a}^2)\logand S(2)] \circ \cdots \circ
[\gentag(Y_{a}^n)\logand S(n)]
\end{eqnarray*}
(The vector notation gives $X^i$ as the $i^{th}$ component of $\vec{X}.$)
Define the following strings for $a,b \in \set{0,1}:$
\begin{eqnarray*}
\genmask(S,a,\vec{X}_{b}) & = &
[\genmask(a,X_{b}^1)\logand S(1)] \oplus \cdots
\oplus
[\genmask(a,X_{b}^n)\logand S(n)] \\
\genmask(S,a,\vec{Y}_{b}) & = &
[\genmask(a,Y_{b}^1)\logand S(1)] \oplus \cdots
\oplus
[\genmask(a,Y_{b}^n)\logand S(n)]
\end{eqnarray*}
Knowing any one of the keys will allow a player to match it to a tag (by
evaluating the PRG), but computing the masks requires knowledge of all
the keys.
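As an illustration of the tag/mask split, here is a small sketch in Python (ours, not the thesis's notation), using SHAKE-128 as a stand-in for the PRG $\gen$; the names `gen`, `gentag`, `genmask`, and the sizes `K`, `N` are illustrative:

```python
import hashlib

K = 16  # key length k in bytes (illustrative)
N = 3   # number of components n

def gen(key: bytes) -> bytes:
    # Stand-in PRG: stretch a k-byte key to (1 + 2n)k pseudorandom bytes.
    return hashlib.shake_128(key).digest((1 + 2 * N) * K)

def gentag(key: bytes) -> bytes:
    # First k bytes: the tag.
    return gen(key)[:K]

def genmask(a: int, key: bytes) -> bytes:
    # Next two nk-byte blocks: the masks genmask(0, X) and genmask(1, X).
    start = K + a * N * K
    return gen(key)[start:start + N * K]

def xor(x: bytes, y: bytes) -> bytes:
    return bytes(u ^ v for u, v in zip(x, y))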
The generalized Yao gate is the table consisting of four $3nk$-bit
entries as follows: for $a=0,1$ and for $b=0,1,$
$\Yaogate(a,b)(S,\vec{X}_0,\vec{X}_1,\omega_X,
\vec{Y}_0,\vec{Y}_1,\omega_Y,
\vec{Z}_0,\vec{Z}_1,\omega_Z)
= \gentag(S,\vec{X}_{a}) \circ \gentag(S,\vec{Y}_{b}) \circ
[\genmask(S,b,\vec{X}_{a}) \oplus
\genmask(S,a,\vec{Y}_{b}) \oplus
\vec{Z}_{g(a \oplus \omega_X, b \oplus \omega_Y) \oplus \omega_Z}]$
We refer to the three $nk$-bit strings making up an entry of the table as
segments $\Yaogate_1(a,b),$ $\Yaogate_2(a,b),$ and $\Yaogate_3(a,b).$ The
entire table is thus:
$\Yaogate(S,\vec{X}_0,\vec{X}_1,\omega_X,
\vec{Y}_0,\vec{Y}_1,\omega_Y,
\vec{Z}_0,\vec{Z}_1,\omega_Z)
= \Yaogate(0,0)(S,\vec{X}_0,\vec{X}_1,\omega_X,\vec{Y}_0,\vec{Y}_1,\omega_Y,\vec{Z}_0,\vec{Z}_1,\omega_Z) \circ$
$\Yaogate(0,1)(S,\vec{X}_0,\vec{X}_1,\omega_X,\vec{Y}_0,\vec{Y}_1,\omega_Y,\vec{Z}_0,\vec{Z}_1,\omega_Z) \circ$
$\Yaogate(1,0)(S,\vec{X}_0,\vec{X}_1,\omega_X,\vec{Y}_0,\vec{Y}_1,\omega_Y,\vec{Z}_0,\vec{Z}_1,\omega_Z) \circ$
$\Yaogate(1,1)(S,\vec{X}_0,\vec{X}_1,\omega_X,\vec{Y}_0,\vec{Y}_1,\omega_Y,\vec{Z}_0,\vec{Z}_1,\omega_Z)$
Figure <ref> describes the decoding of
a gate given input keys $\vec{X}$ and $\vec{Y}.$
Compute $\gentag(S,\vec{X}),\gentag(S,\vec{Y}).$
For $a = 0,1:$ compute $\genmask(S,a,\vec{X}),\genmask(S,a,\vec{Y}).$
Determine the least $\alpha,\beta$ such that
$\gentag(S,\vec{X}_\alpha) = \Yaogate_1(\alpha,\beta)$ and
$\gentag(S,\vec{Y}_\beta) = \Yaogate_2(\alpha,\beta).$
Set $\vec{Z}\leftarrow \Yaogate_3(\alpha,\beta) \oplus
\genmask(S,\beta,\vec{X}) \oplus \genmask(S,\alpha,\vec{Y}).$
Obtaining the output key $\vec{Z}$ from a generalized Yao gate, given the
encrypted table and two input keys $\vec{X}$ and $\vec{Y}.$ The set $S$
describes the indices of strings to consider as 0 when computing $\gentag$
and $\genmask.$
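The encode/decode cycle of a single gate is short enough to run. The following is our executable sketch (not the thesis's construction verbatim): one component ($n=1$), empty $S,$ SHAKE-128 standing in for the PRG, and all identifiers illustrative:

```python
import hashlib

K = 16  # key length in bytes

def gen(key):
    return hashlib.shake_128(key).digest(3 * K)

def tag(key):
    return gen(key)[:K]                       # segment used for matching

def mask(a, key):
    return gen(key)[K + a * K: 2 * K + a * K]  # one of the two masks

def xor(x, y):
    return bytes(u ^ v for u, v in zip(x, y))

def make_gate(g, X, Y, Z, wX, wY, wZ):
    """Four-entry table: entry (a, b) hides the Z-key selected by the
    translated gate output g(a ^ wX, b ^ wY) ^ wZ."""
    table = {}
    for a in (0, 1):
        for b in (0, 1):
            z = Z[g(a ^ wX, b ^ wY) ^ wZ]
            table[a, b] = (tag(X[a]), tag(Y[b]),
                           xor(xor(mask(b, X[a]), mask(a, Y[b])), z))
    return table

def decode_gate(table, Xkey, Ykey):
    """Match tags to find the least (alpha, beta), then strip both masks."""
    for (a, b), (t1, t2, t3) in sorted(table.items()):
        if tag(Xkey) == t1 and tag(Ykey) == t2:
            return xor(xor(t3, mask(b, Xkey)), mask(a, Ykey))
```

A quick check of the translation semantics: an evaluator holding $X_{\omega_X \oplus u}$ and $Y_{\omega_Y \oplus v}$ decodes exactly $Z_{\omega_Z \oplus g(u,v)},$ so the percolated index is always $\keytrans \oplus \wireval.$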
§.§ Yao Circuit Construction
Given the means to construct gates, constructing a circuit is
straightforward. We use the notation for gates $g_{i,j}$
and input wires $\inp_0(i,j)$ and $\inp_1(i,j)$ as
described in <ref>.
We consider an array $\wires[i,j,l,I]$ of wire keys for $i \in
\set{0,\ldots,d},j\in \set{1,\ldots,w},$ $l\in \set{0,1},$ $I \in [n].$ The
keys $\wires[i,j,0,I]$ and $\wires[i,j,1,I]$ represent wire $(i,j),$ which
carries the output of gate $g_{i,j}.$ Keys $\wires[0,j,0,I]$ and
$\wires[0,j,1,I]$ represent inputs to the circuit. Associated with each wire
$(i,j)$ is a key translation bit $\keytrans[i,j].$
Given $S,$ a set of components to void, and $\wires,$ the construction of
the circuit is as follows. First, construct a table $\gentab$ of
$2dwn(1+2n)k$ pseudorandom bits, where
\[
\gentab = \gen(S,\wires)
\]
and where the function $\gen(S,\wires)$ gives
a table of $(2dwn)$ $((1+2n)k)$-bit strings defined by
\[
\gen(S,\wires)[i,j,l,I] = \gen(\wires[i,j,l,I]) \logand S(I).
\]
Second, convert this table to a circuit in the same fashion as the
gates are constructed. That is, with respect to table $\gentab,$ define
the functions $\tabletag(\gentab,i,j,l,I)$ and $\tablemask(\gentab,c,i,j,l,I)$
($c\in\set{0,1}$) as the substrings of length $k$, $nk,$ and $nk$ of
$\gentab[i,j,l,I].$
The generalized Yao gate $\yaogate(\wires,\gentab,\keytrans,S,i,j)$ is then
constructed as in <ref>, using $\tabletag$ and $\tablemask$
as the tag and mask functions. The wire keys are as follows:
\begin{eqnarray*}
\vec{X}_l & = &
(\wires[\inp_0(i,j),l,1], \wires[\inp_0(i,j),l,2], \ldots, \wires[\inp_0(i,j),l,n]) \\
\vec{Y}_l & = &
(\wires[\inp_1(i,j),l,1], \wires[\inp_1(i,j),l,2], \ldots, \wires[\inp_1(i,j),l,n]) \\
\vec{Z}_l & = &
(\wires[i,j,l,1], \wires[i,j,l,2], \ldots, \wires[i,j,l,n])
\end{eqnarray*}
The key translations used to construct the gate are
$\keytrans_X=\keytrans[\inp_0(i,j)],$ $\keytrans_Y=\keytrans[\inp_1(i,j)],$
and $\keytrans_Z=\keytrans[i,j].$ As mentioned, a set $S$ of components to
be voided is used to place 0's over appropriate components.
The circuit thus constructed is denoted
$\yaocircuit(\wires,\gentab,\keytrans,S).$ Notice that the construction
takes $\gentab$ into account independently of $\wires;$ it is well-defined
whether or not $\gentab$ arises from $\wires.$
$\yaocircuit(\wires,\gentab,\keytrans,S)[i,j] \leftarrow
\yaogate(\wires,\gentab,\keytrans,S,i,j);$
more precisely, $\yaocircuit(\wires,\gentab,\keytrans,S)$ is the string
$\yaogate(\wires,\gentab,\keytrans,S,1,1) \circ
\yaogate(\wires,\gentab,\keytrans,S,1,2) \circ \cdots$
$\circ \yaogate(\wires,\gentab,\keytrans,S,d,w-1)
\circ \yaogate(\wires,\gentab,\keytrans,S,d,w)$
How to construct a generalized Yao circuit from a table $\wires$ of keys,
a table
$\gentab$ of tags and masks, a set $\keytrans$ of key translations, and a
set $S \subseteq [n]$ of components to be voided.
A set of inputs $\vec{x}=(x_1,\ldots,x_n)$ is written as a set of input
bits, $(x[1,1],\ldots,x[1,m],x[2,1],\ldots,x[2,m],
\ldots, x[n,1],\ldots,x[n,m]).$ The
value $\wireval[i,j]$ of wire $(i,j)$ is determined by the
input bits and the gates $g_{i,j}$ (in the natural fashion in which
a circuit is evaluated).
We use a bijection $L_0(J)=(J \Div m, J
\mod m)$ to map wire $(0,J)$ to input $x[L_0(J)].$
The input keys $\inkeys$ that must be revealed for the circuit to be
evaluable through key percolation are given by
\begin{eqnarray*}
\inkeys_I(\wires,\keytrans,\vec{x}) & = &
\{\wires[0,J,\keytrans[L_0(J)] \oplus x[L_0(J)],I]\}_{J \in [nm]} \\
\inkeys(\wires,\keytrans,\vec{x}) & = &
\{\inkeys_I\}_{I \in [n]}
\end{eqnarray*}
Finally, the output key translations that must be revealed in order to
interpret the keys of the final level are
\[
B(\keytrans).
\]
The encrypted circuit is then
\[
\hidecirc(\wires,\gentab,\keytrans,S,\vec{x})
=
\yaocircuit(\wires,\gentab,\keytrans,S)
\circ
B(\keytrans)
\circ
\inkeys(\wires,\keytrans,\vec{x})
\]
§ PROTOCOLS IN CONSTANT ROUNDS
Assume there exists a one-way function. Let $F$ be a polynomial-time
function family. Consider a complete broadcast network with private
channels, where each player is a probabilistic polynomial-time Turing
machine. The adversary class consists of polynomial-size circuit families
and a fault class allowing $2t<n.$ Then there is a $t$-resilient protocol
for $F$ that runs in constant rounds and has polynomial message complexity.
Player $i$ shares input $x_{i}$ as secret bits
Run to generate random bits
$\set{\rho[i,j]}_{i\in [n],j \in [dwnk]}.$
Reveal $\rho[i,1],\ldots,\rho[i,dwnk]$ to player $i.$
$i=1..n$ $j=1..dwn$
Player $i$ locally computes $G_{i,j}=\gen(\rho[i,jk+1]\cdots\rho[i,(j+1)k]).$
$i=1..n$ $j=1..dwn$
Player $i$ shares $G_{i,j}.$
$i=1..n$ $j=1..dwn$
Player $i$ proves
using .
Let $S$ be the set of players whose proof failed.
Define $G_{i,j}=0^{(1+2n)k}$ for $i \in S$ and $j \in [dwn].$
Compute $\yaocircuit(S,W),$ where $W$ is defined by the $G_{i,j}$ values.
Compute $\chi$ from $W,\vec{x}.$
Reveal $\yaocircuit(S,W),\chi,B$ to all players.
(Note that secrets in $B$ are contained in $W.$)
Each player $i$ uses $\decodemygate$ to compute $F(\vec{x}).$
Protocol to create a generalized Yao circuit $\yaocircuit$ and the
decrypting keys needed to evaluate it.
The protocol is given in Figure <ref>. Constructing the
encrypted circuit requires constant rounds. Revealing the encrypted
circuit reveals nothing more than the final outputs. The resilience
of the protocol follows from Lemmas
<ref> (see <ref>),
<ref> (see <ref>),
and <ref>.
§.§ Generalized Yao Circuits Are Private
Consider the following ideal protocol $\idealname(\hidecirc)$ to generate
encrypted circuits. Each player supplies an input $(x_i,p_i)$ where $p_i
\in \set{\mbox{\em participate},\mbox{\em quit}}.$ The trusted host sets
$S=\set{i \mid p_i=\mbox{\em quit}}$ and selects keys $\wires$ and key
translations $\keytrans$ uniformly at random for all the indices $i \not\in
S.$ It computes $\gentab$ using $\gen$ and $\wires.$ It then constructs the
circuit and returns $\hidecirc(\wires,\gentab,\keytrans,S,\vec{x}).$ (By
<ref> we can assume that each player learns every output
bit, so there is no need to return different subsets of output key
translations.)
Finally, each player computes its output by percolating the keys from $B$
through the gates, using .
Let the adversary class $\advclass$ include all nonuniform
polynomial-time Turing machines, consider only static adversaries, and let
$t \leq n-1.$ Then
\[
\idealname(\hidecirc)
\resilasFaC_{\advclass}
\idf
\]
The bijection $L(m)=(m\Div w,m\mod w)$ between $[dw]$ and $[d]
\times [w]$ defines a natural ordering on pairs, namely $L(m) < L(m+1).$
This row-major ordering extends directly to longer tuples. In particular,
we consider the bijection $L(m)$ between $[2dwn]$ and
$[d] \times [w] \times \set{0,1} \times [n].$
The tuple $(i,j,l,I)$ corresponds to a particular wire key and to one level
of the tag and mask table, $\gentab.$
We construct a progressive obliteration of a generalized Yao circuit by
stepping through the pseudorandom table row by row, replacing each string
in the table by a uniformly random string. With respect to a particular
input assignment,
a set $S$ of voided components, and an
additional parameter $T \subseteq [n],$ the strings that are actually used
in the percolation and evaluation of the circuit, or that are voided or
otherwise given special exception, are not replaced. This generates a
sequence of hidden circuits that are indistinguishable.
For a given set of inputs and a given circuit for $F,$ the
values $\wireval[i,j]$ on each wire are determined.
The indices $\perc[i,j]$
of the keys that are percolated during evaluation are determined
by the key translation bits $\keytrans$ and the wire values:
\[
\perc[i,j] = \keytrans[i,j] \oplus \wireval[i,j].
\]
For a given $T,$ $S \subseteq T,$ and $M$ ($0 \leq M \leq 2dwn$),
the obliterated table is as follows.
Let $\randtab$ be a string of $2dwn(1+2n)k$ bits, let $L(m)=(i,j,l,I),$
and define the obliteration of entry $m:$
\begin{eqnarray*}
\oblitgentabrow(\wires,\randtab,S,T,m) & = & \left\{
\begin{tabular}{ll}
$0^{(1+2n)k}$ &
if $I \in S$
\\
$\randtab[L(m)]$ &
if $I \not\in T,$ $l \not= \perc[i,j]$
\\
$\gen(S,\wires)[L(m)]$ &
otherwise
\end{tabular}
\right.
\end{eqnarray*}
Now, row $m$ of the $M^{th}$ obliterated table is:
\begin{eqnarray*}
\oblitgentab(\wires,\randtab,S,T,M)[m] & = & \left\{
\begin{tabular}{ll}
$\oblitgentabrow(\wires,\randtab,S,T,m)$ &
if $m \leq M$
\\
$\gen(S,\wires)[L(m)]$ &
otherwise
\end{tabular}
\right.
\end{eqnarray*}
The obliterated circuit is obtained by using the partly obliterated
pseudorandom table:
\[
\pyc(\wires,\randtab,\keytrans,S,T,M) =
\yaocircuit(\wires,\oblitgentab(\wires,\randtab,S,T,M),\keytrans,S)
\]
Finally, the progressively obliterated encrypted circuits are defined by:
\begin{eqnarray*}
& \oblhidecirc(\wires,\randtab,\keytrans,S,T,\vec{x},M) = & \\
& (\pyc(\wires,\randtab,\keytrans,S,T,M),
\inkeys(\wires,\keytrans,\vec{x}),B(\keytrans)) &
\end{eqnarray*}
Define ensembles $D(M)(z,k)$ for all $M$ as follows.
If $z$ is not of the form
$n \circ m \circ d \circ w \circ S \circ T \circ \vec{x} \circ \vec{a}
\circ \keytrans$
or if $k < 2ndw,$
then all probability
weight is placed on the string $0.$
Otherwise, using the security parameter $k$ as the key length,
\begin{eqnarray*}
D(M)(z,k) & = &
\{ \wires \leftarrow \{0,1\}^{2dwnk}; \\
& &
\randtab \leftarrow \{0,1\}^{2dwn(1+2n)k}: \\
& &
\oblhidecirc(\wires,\randtab,\keytrans,S,T,\vec{x},M)
\}
\end{eqnarray*}
Successive pairs $D(M-1)(z,k)$ and $D(M)(z,k)$ are indistinguishable
because they
differ at most in the generation of a single pseudorandom sequence.
Let us argue formally.
Suppose $D(M-1)(z,k)$ and $D(M)(z,k)$ are distinguishable.
In particular this means that
$z=n \circ m \circ d \circ w \circ S \circ T \circ \vec{x}
\circ \vec{a} \circ \keytrans$
and that
\[
\oblitgentabrow(\wires,\randtab,S,T,M) =
\gen(\wires[L(M)]),
\]
for otherwise $D(M-1)(z,k)=D(M)(z,k).$
Then for some $c$ there exists a distinguisher $\scm$ such that
\[
\abs{ \scm_{D(M)(z,k)} - \scm_{D(M-1)(z,k)}} > k^{-c}
\]
\]
for infinitely many $k.$
We overwrite the $(1+2n)k$-bit string in location $L(M)$ of $\randtab$
using the function
\begin{eqnarray*}
\overwrckt(\randtab,\sigma)[L(m')] & = & \left\{
\begin{tabular}{ll}
$\randtab[L(m')]$ &
if $m' \not= M$
\\
$\sigma$ &
if $m' = M$
\end{tabular}
\right.
\end{eqnarray*}
Consider the machine $\scm'$ that
on input $\sigma,$ sets $k=\abs{\sigma}$ and selects $\wires$ and
$\randtab$ uniformly at random. It then sets
$\randtab' = \overwrckt(\randtab,\sigma),$
computes $\oblitgentab(\wires,\randtab',S,T,M),$ and constructs
hidden circuit $\hidecirc.$ Finally, it runs $\scm$ on $\hidecirc$ and returns
the output of $\scm.$
Define the ensembles
\begin{eqnarray*}
\scg(k) & = & \{X\leftarrow \uniform(\{0,1\}^{k}): \gen(X)\} \\
\sch(k) & = & \uniform(\{0,1\}^{(1+2n)k}).
\end{eqnarray*}
Consider the ensembles
$\{\randtab \leftarrow \{0,1\}^{2dwn(1+2n)k};$
$\sigma \leftarrow \scg(k);$
$\randtab' = \overwrckt(\randtab,\sigma):$
$\oblhidecirc(\wires,\randtab',\keytrans,S,T,\vec{x},M) \}$
and
$\{\randtab \leftarrow \{0,1\}^{2dwn(1+2n)k};$
$\sigma \leftarrow \sch(k);$
$\randtab' = \overwrckt(\randtab,\sigma):$
$\oblhidecirc(\wires,\randtab',\keytrans,S,T,\vec{x},M) \},$
so that by the construction of $\scm',$
\[
\abs{ \scm'_{\scg(k)} - \scm'_{\sch(k)}} =
\abs{ \scm_{D(M)(z,k)} - \scm_{D(M-1)(z,k)}} > k^{-c}
\]
This holds for infinitely many $k,$ and certainly for infinitely many
$k$ larger than $2dwn,$ so
$\scm'$ distinguishes $\scg$ from $\sch,$
contradicting the assumption that $\gen$ is a PRG.
Therefore, for all $M,$ $D(M-1) \indistEnC^{k^{-c-1}} D(M),$
so $D(0) \indistEnC^{k^{-c}} D(2dwn)$ (recall $k \geq 2dwn$).
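The step from adjacent hybrids to the endpoints is the usual triangle inequality over the $2dwn$ hybrids:

```latex
\[
\abs{\scm_{D(0)(z,k)} - \scm_{D(2dwn)(z,k)}}
\;\leq\; \sum_{M=1}^{2dwn}
\abs{\scm_{D(M-1)(z,k)} - \scm_{D(M)(z,k)}}
\;\leq\; 2dwn \cdot k^{-c-1}
\;\leq\; k^{-c},
\]
```

using $k \geq 2dwn$ in the final step.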
With this in mind, we argue that the task of an interface is to
obtain and return $x_i$ and $a_i$ for all players $i$ that
$A$ wishes to corrupt, by corrupting $i_{id}$ in $\idealname(F);$
use the input values shared by corrupted players $i$
to replace those of corrupted players $i_{id}$ in $\idealname(F),$
or have $i_{id}$ send $\Lambda$ if $i$ does not share a value;
determine the set $S$ of non-participating faulty players
from the interaction with $A$ ($S$ is the list
of players who are disqualified);
and return an obliterated circuit that is constructed
according to distribution $D(2dwn),$ namely according to the circuit for
$F$ and the output returned by the trusted host.
It is clear that distribution $D(2dwn)$ is easy to sample
given the list of gates in the circuit for $F,$ the value
returned by the trusted host, and the list
$S$ of voided components. Now, in protocol $\idealname(\hidecirc),$
\[
\view_A^f = a_A \circ \vec{a}_T \circ \vec{x}_T \circ \EC
\]
where $T=T(q_A^f)$ and $\EC$ is returned by the host.
Note that the host samples
\[
\EC \leftarrow
D(0)(n \circ m \circ d \circ w \circ S \circ T
\circ \vec{x} \circ \vec{a},k).
\]
On the other hand, in protocol $\idf,$
\[
\view_A^f = a_A \circ \vec{a}_T \circ \vec{x}_T \circ \EC
\]
where $\EC$ is returned by $\interface.$
Note that $\interface$ samples
\[
\EC \leftarrow
D(2dwn)(n \circ m \circ d \circ w \circ S \circ T
\circ \vec{x} \circ \vec{a},k).
\]
We have already shown that the ensembles $D(0)$ and $D(2dwn)$ are
computationally indistinguishable, so the families of ensembles
\[
[A,\idealname(\hidecirc)]^{Y_A} \indistFaC [\interface(A),\idf]^{Y_A}.
\]
Now, by the construction of $D(0)$ and $D(2dwn),$ the value of
$F$ that is percolated through an encrypted circuit is the same
regardless of which method is used to generate it, so
\[
[A,\idealname(\hidecirc)]^{\vec{Y}} \indistFaC [\interface(A),\idf]^{\vec{Y}}.
\]
Hence $\idealname(\hidecirc) \resilasFaC \idf.$
§.§ Protocol Resiliently Computes $F$
Let the adversary class $\advclass$ be static and restricted
to nonuniform polynomial-time Turing machines, and let $t<n.$
\[
\ccrproto \resilasFaC_{\advclass} \idealname(\hidecirc)
\]
The protocol is a composition of resilient computations.
The computation of random bits is a private function and
$t$-resilient. The computation of each secret PRG output is $t$-resilient,
using Theorem <ref>. Revealing $S$ is private.
The protocol evaluates
$\yaocircuit(\wires,G,\keytrans,S)$ in a constant number of rounds
since $\yaocircuit$ is of constant depth.
In fact, the circuit construction requires an exclusive-or
of bit streams, some of which are voided according to $S.$
Because $S$ is public, the indices of the appropriate pseudorandom
sequences to include are public and no secret
AND computations need be performed. Therefore,
constructing the encrypted circuit secretly requires no interaction.
Secretly computing the input keys to reveal requires an execution of a
multiplication protocol (an AND must be secretly computed).
The encrypted circuit itself is a robust (all players learn it,
so a minority of alterations are not effective)
and private (cf. Lemma <ref>) representation of
the result, $F(x_1,\ldots,x_n).$
The reconstruction stage is $t$-resilient.
By Theorem <ref>, the composition is as
resilient as the composition of the corresponding ideal
vacuous protocols with $\idealname(\hidecirc),$ which composition
is in turn as resilient as $\idealname(\hidecirc).$
The adversary must be static in order that the subprotocols
be post-protocol corruptible and hence composable.
Each subprotocol requires only a constant number
of rounds, so the protocol as a whole requires only a constant number of rounds.
§ WITHOUT PRIVATE CHANNELS
By an unpublished claim of Feldman [59]
(see <ref>), private channels in a
protocol can be replaced by public channels over which encrypted messages
are broadcast. Lemma <ref> along with a standard
probability walk as in <ref>
would provide a proof of this claim.
Assume there exists a one-way function. Let $F$ be a polynomial-time
function family. Consider a complete broadcast network of public channels,
where each player is a probabilistic polynomial-time Turing machine. The
adversary class consists of polynomial-size circuit families and a fault
class allowing $2t<n.$ Then there is a $t$-resilient protocol for $F$ that
runs in constant rounds and has polynomial message complexity.
PART: LOCALLY RANDOM REDUCTIONS
CHAPTER: INSTANCE HIDING SCHEMES
An instance-hiding scheme is a method by which a weak
(polynomial-time) processor can obtain the value $f(x)$ of a function it
cannot compute on its own, by querying more powerful processors, without
having to reveal the instance $x$ at which it would like to compute $f.$
Our research into instance-hiding schemes motivated the development of a
new tool, called a locally random reduction, which inspired a broad
range of results in cryptography and complexity theory, including a theory
of program testing [89] and a recent line of research leading to
the proof that IP = PSPACE
[98, 91, 115]. These applications are discussed in
Chapter <ref>, which investigates locally random reductions in
depth. In this chapter we provide some of the motivations for our
development of locally random reductions and investigate the first of many
applications which locally random reductions solve, namely instance-hiding
schemes.
Abadi, Feigenbaum, and Kilian were the first to investigate the problem of
using a powerful, public resource to solve a problem without revealing the
instance of the problem [1]. They developed a formal model to
measure the information which is hidden or leaked during the interaction
between querier and oracle. They were motivated by the practical question
of whether a weak, private computing device, such as a smart card or
terminal, can take advantage of a powerful, shared computing device while
keeping private some important aspects of its user's data. They were also
motivated by the theoretical question of whether several well-studied yet
intractable number-theoretic functions, such as discrete logarithm and
quadratic residuosity, whose instances can be hidden significantly
when querying an oracle for the solution, are examples of a more general
phenomenon; that is, do other seemingly intractable problems, such as SAT,
also have instance-hiding schemes?
The main result of Abadi, Feigenbaum, and Kilian is negative: if $f$ is an
NP-hard function, a weak querier $A$ cannot query a single oracle $B$ while
hiding all but the size of the instance, assuming that the polynomial
hierarchy does not collapse. Their proof draws on a connection between
single-oracle instance-hiding schemes and the nonuniform complexity classes
NP/poly and CoNP/poly, and it is related to other complexity-theoretic
notions, such as random-self-reducibility. This negative result holds even
if $B$ is modelled as an oracle Turing Machine and is given access to an
arbitrary (non-r.e.) oracle set. We refer the interested reader to
[1] for full details.
[Abadi, Feigenbaum, Kilian 1989]
If language $L$ admits an instance-hiding scheme hiding all but $\abs{x},$
then $L \in \mbox{NP}/poly \cap \mbox{coNP}/poly.$
Following a question originally posed by Rivest [109], we generalize
instance-hiding schemes to handle several, physically separate oracles,
$B_1,\dots,B_m$ and show that, given sufficiently many oracles, any
function admits a multioracle instance-hiding scheme. This contrasts with
the single oracle case, where not only is one oracle insufficient for some
functions [1] but, as we shall see, one oracle is insufficient for
most functions.
We consider two models for $m$-oracle instance-hiding schemes:
* Oracles $B_1$ through $B_m$ may collude before the start of the protocol,
but they are kept physically separate during the protocol. In this model,
$m = |x|$ oracles suffice for any function $f$.
* Oracles $B_1$ through $B_m$ may not collude at all, either before or during
the protocol. In this model, $m = 2$ oracles suffice for any function $f$.
Conversely, for most boolean functions $f,$ two oracles are necessary.
In the first model, our proof of sufficiency demonstrates an unintuitive
connection between instance-hiding schemes and secure multiparty protocols
[28, 39]. The connection is unintuitive for many reasons, the most
obvious of which is that, in the first problem, we require explicitly that
the oracles not communicate at all and, in the second problem, we require
that they communicate extensively. More fundamentally, the two problems
seem at first to exemplify two incompatible views of distributed
computations with secret data. The instance-hiding problem first defined
in [1] and generalized in Section <ref> below
formalizes the following view. A weak processor A requires interaction
with powerful processors $B_1$ through $B_m$, because A does not have
enough computational resources to compute $f;$ A does not want to reveal
more than necessary about its private input $x$ because the $B_i$'s are
public resources. Secure multiparty protocols address an alternative view:
mutually untrusting, equally powerful processors $B_1$ through $B_m$
must interact in a computation because each of them has a private input
$x_i$ without which the computation cannot proceed.
§ PRELIMINARIES
Following Abadi, Feigenbaum, and Kilian, we take A to be a probabilistic
polynomial-time Turing Machine transducer [1]. Let $f$ be a
boolean function on $\set{0,1}^*$ for which no probabilistic
polynomial-time algorithm is known. In order to compute $f(x),$ A
consults players $B_1$, $\ldots$, $B_m$, where $m$ is (necessarily)
bounded by a polynomial in $\sizex.$ Each $B_i$ is an interactive
oracle Turing machine that can use an unbounded amount of time and
space. It is convenient to think of the oracle tape of $B_i$ as a
random variable $O_i.$ Player $B_i$ is completely specified by its finite
control and the value of $O_i$.
An $m$-oracle instance-hiding scheme for $f$ is an $R$-round,
synchronous protocol executed by players $A,$ $B_1$, $\ldots$, $B_m.$
Player $A$ draws input $x$ according to a distribution $X.$ The
round-complexity $R$ is bounded by a polynomial in $\sizex.$ Let
$\view_A^{1..r}$ denote the sequence of messages sent and received by $A$
in rounds 1 through $r$, let $\delta_A$ denote its finite control, and let
$R_A$ denote its random tape.
At round $r+1$ of the protocol, $A$ performs a probabilistic
polynomial-time computation producing $m$ messages $y_{r+1,1},\ldots,y_{r+1,m}$
for $B_1,\ldots,B_m,$ based on $x, \view_A^{1..r}, \delta_A,$ and
$R_{A}.$ Each $B_i$ computes a response $z_{r+1,i}$ based on
$y_{1,i},\ldots,y_{r+1,i}$ and sends it to player $A.$
In the simple case, which corresponds to the motivating idea of a
collection of public servers supplying answers to queries, this response is
a function $g$ applied to $y_{r+1,i};$ player $B_i$ can perform an
unbounded amount of local computation, and replies with $z_{r+1,i} =
g(y_{r+1,i}).$ After round $R$ of the protocol, $A$ computes $f(x)$ based
on $x,$ $\view_A^{1..R},$ $\delta_A,$ and $R_A.$ Note that we do not allow
the output of $A$ to be incorrect.
Let $Y_i$ be the induced distribution on the sequence
$\anglebrack{y_{1,i},\ldots,y_{R,i}}$ of messages $A$ sends
to player $B_i.$
We wish to make precise the statements that an instance-hiding scheme
“leaks at most” some function $L(x)$ to player $B_i$ or that it
“hides at least” some function $H(x)$ from player $B_i$. Note that
$L(X)$ and $H(X)$ are also induced random variables.
We consider two generalizations of the definitions in [1]. In both
models, all players “know” the plaintext distribution $X$ and the
contents of the finite controls
$\delta_A,\delta_{B_1},\ldots,\delta_{B_m}.$ Furthermore, in both models,
each player $B_i$ “does not know” the content of the random tape $R_A$,
and, for $i\neq j$, player $B_i$ “does not see” the messages $y_{r,j}$
that A sends to $B_j$ or the responses that $B_j$ sends back. The
difference between the two models lies in whether or not $B_i$ “knows”
the content of oracle tape $O_j$ for $j\not=i.$
Model 1:
* An instance-hiding scheme leaks at most $L$ to oracle $B_i$ if,
for all plaintext distributions $X,$
for all $u \in \mbox{ \em Range}(L),$
the random variables $X$ and
$\anglebrack{Y_i,O_1,\ldots,O_m}$
are independent given $L(X)=u.$
* An instance-hiding scheme hides at least $H$ from oracle $B_i$ if,
for all plaintext distributions $X,$
the random variables $H(X)$ and
$\anglebrack{Y_i,O_1,\ldots,O_m}$ are independent.
Intuitively, a player “knows” nothing about the outcome of a random
variable $X$ if the observations of random variables to which it has
access are independent of $X.$ Thus, in Model 1, we allow $B_i$ to
“know” the contents of every oracle tape by virtue of measuring its
knowledge with respect not only to the query sequence $Y_i$ and oracle
tape $O_i$ but to the entire set of oracle tapes.
Informally, this corresponds to the case in which the powerful players
can collude before, but not during, the execution of the protocol.
In Model 2, however, we allow no collusion at all, before or during the
execution of the protocol:
Model 2:
* An instance-hiding scheme leaks at most $L$ to oracle $B_i$ if,
for all plaintext distributions $X,$
for all $u \in \mbox{ \em Range}(L),$
the random variables $X$ and
$\anglebrack{Y_i,O_i}$
are independent given $L(X)=u.$
* An instance-hiding scheme hides at least $H$ from oracle $B_i$ if,
for all plaintext distributions $X,$
the random variables $H(X)$ and
$\anglebrack{Y_i,O_i}$ are independent.
Specifically, in Model 2, player $B_i$ has access only to $Y_i$ and
$O_i;$ it is possible that the additional knowledge of other oracle
tapes might lead to a dependence among random variables, revealing some
additional information.
In either model, we say that the scheme “leaks $L$” if it leaks at
most $L$ to each $B_i,$ and we say that it “hides $H$” if it hides at
least $H$ from each oracle $B_i.$ Throughout this chapter, we are
primarily concerned with schemes that leak $\sizex$. In either model,
we say that “$f$ has an $m$-oracle instance-hiding scheme” or that
“$m$ oracles suffice for $f$” to mean that $f$ has an $m$-oracle
instance-hiding scheme that leaks $\sizex.$ As in [1], we
define leaking and hiding in terms of independence of random variables.
Complexity-based cryptography is irrelevant, because A is time-bounded
and the $B_i$'s are time-unbounded.
Models 1 and 2 are equivalent if $m=1$.
We end this section with the proposition that if the querier A is
limited in space, but not in time or in number of rounds of interaction
with the $B_i$'s, then the instance-hiding problem is trivial.
Every function has a 1-oracle instance-hiding scheme that leaks at most
$\sizex$ in which the querier A is limited to (deterministic) constant space.
The oracle simply sends every instance of size $\sizex$ along with its
answer, and the querier checks the instance against $x,$ copying the answer
to the output when the instance matches $x.$ We refer the interested reader
to [48, 46, 55, 86], for example, for a discussion of the related
topic of (zero-knowledge) interactive proof systems with space-bounded
verifiers.
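A sketch of this trivial scheme (our illustration; the oracle streams every pair $(y, f(y))$ of the right length, and the querier only ever holds a cursor into $x$ and the current streamed bit):

```python
def oracle_stream(f, n):
    """Unbounded-power oracle: enumerate every instance of size n
    together with its answer."""
    for i in range(2 ** n):
        y = format(i, "0{}b".format(n))
        yield y, f(y)

def query(x, stream):
    """Constant-space querier: compare each streamed instance to x
    one bit at a time, copying the answer when the instance matches."""
    for y, fy in stream:
        if all(a == b for a, b in zip(y, x)):
            return fy
```

The query reveals only $\sizex$ (the oracle never sees $x$ at all); the cost is the exponential length of the stream.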
§ MODEL 1: $\sizex$ ORACLES SUFFICE
Our main result on model-1 instance-hiding schemes uses Shamir's method for
secret sharing.
For notational convenience, we shall refer to a value $p(\alpha_i)$ of a
polynomial of degree $t$ with free term $s$ as a $t$-point of $s.$
The difference between a $t$-point and a piece of $s$ is that
the latter indicates that the polynomial has been selected uniformly at
random (subject to the constraints on degree and free term), whereas the
former concerns an arbitrary polynomial. The randomness property will be
essential only for the initial queries of $A.$
The following lemma is an easy consequence of Lemma <ref>.
Any boolean function $f(x)$ on inputs $x$ of length $n$ can be represented
as a polynomial $c_f(x_1,\ldots,x_n)$ over an arbitrary field $E,$
such that when the values $x_1,\ldots,x_n$ match the bits of $x,$
$c_f(x_1,\ldots,x_n)=f(x).$
With a few straightforward observations, we shall be able to construct
efficient instance-hiding schemes for arbitrary boolean functions. First,
note that if elements $\gamma_1, \ldots, \gamma_k$ of $E$ are $t$-points of
$s_1, \ldots, s_k$, respectively, then $\gamma_1 + \cdots + \gamma_k$ is a
$t$-point of $s_1 + \cdots + s_k.$ In fact, this holds for any linear
combination: if $\beta_1, \ldots, \beta_k$ are fixed constants in $E,$ then
$\beta_1 \gamma_1 + \cdots + \beta_k \gamma_k$ is a $t$-point of $\beta_1
s_1 + \cdots + \beta_k s_k.$ Furthermore, $\gamma_1 \times \cdots \times
\gamma_k$ is a $kt$-point of $s_1 \times \cdots \times s_k$.
(By comparison, recall that in the protocol
(Chapter <ref>), the product of two $t$-shares is a $2t$-point,
which later is converted to a $t$-share.)
Specifically, if $c_f(x_1,\ldots,x_n)$ has degree $n,$ and
$\gamma_1,\ldots,\gamma_n$ are $t$-points of $x_1,\ldots,x_n,$
then $c_f(\gamma_1,\ldots,\gamma_n)$ is an $nt$-point of $c_f(x_1,\ldots,x_n).$
A total of $(nt+1)$ $nt$-points suffice to determine $c_f(x_1,\ldots,x_n).$
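These closure properties are easy to check numerically. The following is our sketch over a small prime field; `share` produces $t$-points of $s$ from a random degree-$t$ polynomial, and `interp0` recovers the free term by Lagrange interpolation at $0$:

```python
import random

P = 101  # small prime field (illustrative)

def share(s, t, xs):
    """t-points of s: evaluate a random degree-t polynomial with
    free term s at the points xs."""
    coeffs = [s] + [random.randrange(P) for _ in range(t)]
    return [sum(c * pow(x, e, P) for e, c in enumerate(coeffs)) % P
            for x in xs]

def interp0(points):
    """Lagrange interpolation at 0 from (x, y) pairs over Z_P."""
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total
```

Adding two collections of $t$-points pointwise gives $t$-points of the sum (so $t+1$ of them suffice), while the pointwise product has degree up to $2t$ and needs $2t+1$ points.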
Let $f(x)$ be any function whose output $\abs{f(x)}$ is polynomially
bounded. Then for any positive constant $c,$ $f(x)$ has a model-1 $(\sizex
- c \log \sizex)$-oracle instance-hiding scheme that leaks at most $\sizex.$
Assume WLOG that $f$ is boolean. The result for general functions then
follows by regarding each output bit as a boolean function of the input.
We first show that $\sizex+1$ oracles suffice and then improve upon this
bound.
For concreteness, let $E$ be ${\bf Z}_p$ for the smallest prime exceeding
$n+2.$ By Lemma <ref>, there is a polynomial
$c_f(x_1,\dots,x_n)$ over $E$ which is equal to $f(x)$ at 0/1-valued
inputs. Let each $B_i$ have access to an oracle for $c_f.$
The protocol begins with the querier secretly sharing $x_1,\ldots,x_n$
among $n+1$ oracles using $t=1$ as the bound on coalition sizes. Each
oracle evaluates $c_f$ on the collection of $1$-points it receives, giving
an $n$-point of the final value. Finally, $A$ interpolates the $(n+1)$
$n$-points to determine the degree-$n$ polynomial whose free term is $f(x).$
Figure <ref> lists the steps exactly. The interpolate
function outputs the coefficients of the minimal-degree polynomial running
through its arguments.
$A:$ $n\leftarrow \sizex,$ where $x=x_1x_2 \cdots x_n.$
$A:$ select $p_1(u),\dots,p_n(u)$ randomly of degree 1 with $p_i(0)=x_i.$
$A:$ $\piece_i(x_j) \leftarrow p_j(i).$
(for $1 \leq i \leq n+1,$ $1 \leq j \leq n.$)
$A:$ $y_i \leftarrow
\anglebrack{n, \piece_i(x_1),\ldots,\piece_i(x_n)}$
(for $1 \leq i \leq n+1.$)
$A \rightarrow B_i:$ $y_i$
(for $1 \leq i \leq n+1.$)
$B_i:$ $z_i \leftarrow c_f(\piece_i(x_1),\ldots,\piece_i(x_n))$
(for $1 \leq i \leq n+1.$)
$B_i \rightarrow A:$ $z_i$
(for $1 \leq i \leq n+1.$)
$A:$ $p(u) \leftarrow \mbox{\tt interpolate}(z_1,z_2,\dots,z_{n+1})$
$A:$ $f(x) \leftarrow p(0).$
Instance-hiding scheme for $f(x)$ with $n+1$ oracles.
Each $B_i$ has an oracle $O_i$ for $c_f.$
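The protocol of the figure can be sketched in Python; the prime, the boolean AND example, and the helper names below are illustrative choices of ours, not part of the chapter's notation:

```python
import random

def lagrange_zero(pts, p):
    # interpolate the points (i, z_i) over Z_p and evaluate at u = 0
    total = 0
    for i, zi in pts:
        num = den = 1
        for j, _ in pts:
            if j != i:
                num = num * (-j) % p
                den = den * (i - j) % p
        total = (total + zi * num * pow(den, p - 2, p)) % p
    return total

def instance_hide(x_bits, c_f, p):
    n = len(x_bits)
    a = [random.randrange(p) for _ in range(n)]   # A: p_i(u) = a_i*u + x_i
    answers = []
    for i in range(1, n + 2):                     # oracles B_1 .. B_{n+1}
        shares = [(a[j] * i + x_bits[j]) % p for j in range(n)]
        answers.append((i, c_f(shares) % p))      # B_i applies c_f to its 1-shares
    return lagrange_zero(answers, p)              # A reads off p(0) = f(x)

# c_f for boolean AND on 3 bits is the degree-3 monomial x1*x2*x3
c_and = lambda s: s[0] * s[1] * s[2]
p = 7                                             # smallest prime exceeding n + 2
assert instance_hide([1, 1, 1], c_and, p) == 1
assert instance_hide([1, 0, 1], c_and, p) == 0
```

Here $q(u) = c_f(p_1(u),\ldots,p_n(u))$ has degree $n = 3$, so the $n+1 = 4$ oracle answers determine it, and its constant term is $f(x)$.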
Intuitively, basing the protocol on ideas from secret-sharing ensures that
each $B_i$ learns nothing about $x.$ Even though $B_i$ can use its
unbounded resources to attempt to recover $x,$ it cannot communicate with
any of the other $B_j,$ so the only physically possible coalitions are
trivial ones of size 1. Because player $B_i$ sees only the $1$-shares of $x_1,\ldots,x_n$ and does not see the later collection of
$n$-points, he receives no information about $x.$
To prove that each $B_i$ learns nothing but $n,$ we use a simple lemma,
akin to the statement that secret sharing is $1$-private:
For any $x_1, \ldots, x_n \in E$ and for any $i \not = 0,$ the
following distribution is the same as $\uniform(E^n):$
\[
\set{ (a_1,\ldots,a_n) \leftarrow \uniform(E^n):
(a_1 i + x_1, \ldots, a_n i + x_n)}
\]
Clearly, for fixed $x_1,\dots,x_n$ and $i\not= 0,$ the mapping
$(a_1,\ldots,a_n) \mapsto (a_1 i + x_1, \ldots, a_n i + x_n)$ is
a bijection, so if $(a_1,\ldots,a_n)$ is uniform over $E^n,$
so is $ (a_1 i + x_1, \ldots, a_n i + x_n).$
Thus, the distribution on messages $y_i$ seen by $B_i$ is the same for any
$x:$ uniform over $\set{n} \times E^n.$ Therefore the random variables $X$
and $\anglebrack{Y_i,O_i}$ are independent. In fact, since each $B_j$ ($j
\not= i$) computes the same polynomial $c_f(x_1,\ldots,x_n),$ each oracle
tape is the same, and hence the random variables $X$ and
$\anglebrack{Y_i,O_1,\dots,O_{n+1}}$ are independent.
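The privacy lemma can be verified exhaustively over a small field; the parameters below are arbitrary illustrative choices:

```python
from collections import Counter

p = 11  # a small field Z_p, so we can enumerate every case
i = 3   # any fixed nonzero evaluation point

for x in range(p):
    # the map a |-> a*i + x over all a in Z_p
    image = Counter((a * i + x) % p for a in range(p))
    # every field element appears exactly once: the share at point i
    # is uniform, independent of the secret x
    assert all(image[e] == 1 for e in range(p))
```

The map $a \mapsto ai + x$ is a bijection on ${\bf Z}_p$ for $i \neq 0$, so a uniform coefficient yields a uniform share regardless of $x$.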
To reduce the number of oracles queried in the protocol from $n+1$ to $n,$
we can “instantiate” the input bit $x_1$ and treat $f$ as though it were
a function of $n-1$ inputs. The querier $A$ first executes the basic
protocol on input $0x_2\cdots x_n$, then executes it on input $1x_2\cdots
x_n$, and uses the result that corresponds to the original input $x_1 x_2
\cdots x_n$. More generally, by instantiating $O(\log n)$ input bits (thus
generating $n^{O(1)}$ queries) the number of $B_i$'s queried can be reduced
to $n - O(\log n)$.
§ MODEL 2: TWO ORACLES SUFFICE
In this section, it is convenient to regard a boolean function $f$ as the
characteristic function $\chi_S$ of a set $S\subseteq \{0,1\}^*$ or as a
language $L_f = \set{x \mid f(x) = 1}.$ By a random set $S$, we mean
one in which $\chi_S(x)$ is the outcome of a fair coin toss, for each $x\in
\{0,1\}^*$. The expression $S_1 \bigtriangleup S_2$ denotes the symmetric
difference of sets $S_1$ and $S_2$; that is,
\[
\chi_{S_1 \bigtriangleup S_2}(x) \equiv \chi_{S_1}(x) \oplus \chi_{S_2}(x).
\]
If $s_1$ and $s_2$ are both $n$-bit strings, then $s_1 \oplus s_2$ is the
$n$-bit string whose $i^{\rm th}$ bit is the exclusive-or of the $i^{\rm th}$
bits of $s_1$ and $s_2$.
We also use the following nonstandard terminology and notation. A singleton sequence is a subset of $\{0,1\}^*$ that contains exactly one
string of each length. A random singleton sequence is one in which
the length-$n$ string is chosen u.a.r. from $\{0,1\}^n$. If $S$ is an
arbitrary set and $V = \{v_1, v_2, \ldots\}$ is an arbitrary singleton
sequence, then $S\circ V$ denotes the set with characteristic function
$$\chi_{S\circ V}(x) \equiv \chi_S(x\oplus v_{\sizex}).$$
Note that each
$v_n$ in $V$ effects a permutation of the bits in the characteristic vector
of $S \cap \set{0,1}^n.$
Every function
[As before, we assume the size of the output, $\abs{f(x)},$
is polynomially bounded.]
has a model-2 2-oracle instance-hiding scheme that leaks at most $\sizex$.
Conversely, two oracles are necessary for most boolean functions.
Necessity follows from Observation <ref> and
Lemma <ref>, which is proved in the next section. We show
sufficiency by demonstrating the existence of a model-2 2-oracle
instance-hiding scheme for an arbitrary function $\chi_S$ that leaks at
most $\sizex$. As in the proof of Theorem <ref>, we assume
WLOG that $f$ is boolean.
Let us illustrate the basic idea of the proof through the following simpler
argument. Assume first that the conditional plaintext distribution $P(X
\mid \abs{X})$ is uniform. Under this simplifying assumption, we can see that
the characteristic function of a random set $S$ has a model-2 2-oracle
instance-hiding scheme that, with probability 1, leaks at most $\sizex$ to
$B_1$ and at most $\chi_S(x)$ to $B_2$: Let $B_1$ have an oracle for a
random singleton sequence $V= \{v_n\}_{n=1}^\infty,$ and let $B_2$ have an
oracle for $S \circ V.$ On cleartext input $x,$ A first sends $\sizex$ to
$B_1$, who sends back $v_{\sizex}$. A then sends $x \oplus v_{\sizex}$ to
$B_2$, who sends back $\chi_{S\circ V}(x \oplus v_{\sizex}) = \chi_S(x)$.
Clearly, $B_1$ learns only $\sizex$. Intuitively, $B_2$ learns only
$\chi_S(x)$ because, for random $S$ and $V,$ $S\circ V$ is also random, and
the encrypted input $x \oplus v_{\sizex}$ is a uniformly distributed
$n$-bit string. To make this idea work for the theorem as stated, we must
make an arbitrary $S$ “look random,” and we must show how to avoid
leaking $\chi_S(x)$ to $B_2$. We accomplish this by using the symmetric
difference of $S$ with a random set and by “splitting” the singleton
sequence $V$ into two halves, one of which is given to each of $B_1$ and $B_2.$
Suppose that $\chi_S$ is the (arbitrary) boolean function for which we seek
an instance-hiding scheme. Let $R$ be a random set,
$V=\{v_n\}_{n=1}^\infty$ and $U = \{b_{1,n}\}_{n=1}^\infty$ be random
singleton sequences, and let $T = \{b_{2,n}\}_{n=1}^\infty$ be such that
$b_{2,n} = v_n \oplus b_{1,n}$. Let the oracle $O_1$ for $B_1$ encode both
$R \circ V$ and $U$ in a standard way, and let the oracle $O_2$ for $B_2$
encode both $(S \bigtriangleup R)\circ V$ and $T.$ The instance-hiding
scheme is described in Figure <ref>.
A: $n\leftarrow \sizex$.
A $\rightarrow$ $B_1$, $B_2$: $n$.
$B_1$ $\rightarrow$ A: $b_{1,n}$.
$B_2$ $\rightarrow$ A: $b_{2,n}$.
A: $y\leftarrow x\oplus b_{1,n} \oplus b_{2,n}$.
A $\rightarrow$ $B_1$, $B_2$: $y$.
$B_1$ $\rightarrow$ A: $\chi_{R\circ V}(y)$.
$B_2$ $\rightarrow$ A: $\chi_{(S\bigtriangleup R)\circ V} (y)$.
A: $\chi_S(x)\leftarrow \chi_{R\circ V}(y) \oplus
\chi_{(S \bigtriangleup R)\circ V}(y)$.
Instance-hiding scheme for $f(x)$ with two oracles.
Oracle $O_1$ encodes $(R \circ V,U);$ $O_2$ encodes
$((S \bigtriangleup R)\circ V,T).$
The querier obtains the result:
\begin{eqnarray*}
\chi_{R\circ V}(y) \oplus \chi_{(S \bigtriangleup R)\circ V}(y)
& = &
\chi_R(x \oplus b_{1,n} \oplus b_{2,n} \oplus v_n) \oplus
\chi_{S \bigtriangleup R}
(x \oplus b_{1,n} \oplus b_{2,n} \oplus v_n)
\\ & = & \chi_R(x) \oplus \chi_{S \bigtriangleup R}(x)
\\ & = & \chi_S (x).
\end{eqnarray*}
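The two-oracle protocol can be sketched as follows. We fix a single input length $n$ and represent each set by its characteristic table; the helper names are ours:

```python
import secrets

n = 8
N = 1 << n               # we fix one input length n for this sketch

def rand_set():
    # a random subset of {0,1}^n, stored as its characteristic table
    return [secrets.randbelow(2) for _ in range(N)]

S = rand_set()           # the (arbitrary) set whose membership is computed
R = rand_set()           # the random mask set
v = secrets.randbelow(N)     # v_n, from the singleton sequence V
b1 = secrets.randbelow(N)    # b_{1,n}, known to B_1
b2 = v ^ b1                  # b_{2,n} = v_n xor b_{1,n}, known to B_2

def oracle_1(y):   # O_1 encodes R o V
    return R[y ^ v]

def oracle_2(y):   # O_2 encodes (S symmetric-difference R) o V
    return S[y ^ v] ^ R[y ^ v]

def query(x):
    y = x ^ b1 ^ b2                 # A's blinded query (equals x xor v_n)
    return oracle_1(y) ^ oracle_2(y)

x = 0b10110001
assert query(x) == S[x]
```

Since $b_{1,n} \oplus b_{2,n} = v_n$, the blinded query is $x \oplus v_n$, and the two answers XOR to $\chi_R(x) \oplus \chi_{S \bigtriangleup R}(x) = \chi_S(x)$, mirroring the derivation above.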
The messages seen by $B_1$ are $y_{1,1} = n$ and $y_{2,1} = x \oplus
b_{1,n} \oplus b_{2,n}.$ First let us show that for every $x$ of size $n,$
the distribution on $((y_{1,1},y_{2,1}),(R \circ V, U))$ is the same.
The distribution on $(R,V,U)$ given $n$ is uniform over the $2^{2^n} 2^n
2^n$ choices for $(R,V,U).$ For any particular $x$ of size $n,$ the
protocol induces a bijection with $(y_{2,1},R \circ V,U).$ Thus, for any
particular $x$ of size $n,$ the distribution on $(y_{2,1},R \circ V,U)$ is
uniform over all possibilities. Since $y_{1,1}$ is always $n,$ the
distribution on $((y_{1,1},y_{2,1}),(R\circ V, U))$ is uniform over all
possible values, for each $x$ of size $n.$ Hence the random variable
$\anglebrack{Y_1,O_1}$ is independent of $x,$ given $n.$
Now consider $B_2.$ Any fixed $S$ induces a bijection on the set of
possible $R.$ Hence there is a bijection between $(R,V,U)$ and $(S
\bigtriangleup R,V,U).$ By definition, there is a bijection between the
latter set and $(S \bigtriangleup R,V,T).$ Using the argument of the
previous paragraph, there is a bijection between $(S \bigtriangleup R,V,T)$
and $(n,y_{2,2}, (S \bigtriangleup R) \circ V, T).$ Hence the distribution
on $((y_{1,2},y_{2,2}), ((S \bigtriangleup R) \circ V, T))$ is uniform, for
any $x$ of size $n.$ Thus $\anglebrack{Y_2,O_2}$ is independent of $x,$
given $n.$
§ OTHER RESULTS
§.§ Unconditional Negative Results
The main theorem of [1] is that NP-hard functions have no 1-oracle
instance-hiding schemes that leak $\sizex$, unless the polynomial hierarchy
collapses at the third level. This is a conditional negative result. No
unconditional negative results about instance-hiding schemes that leak
$\sizex$ are provided in [1]. We give one here.
A measure-one class of boolean functions does not have 1-oracle
instance-hiding schemes that leak at most $\sizex.$
Let $\vec{c}_n(f)$ denote the characteristic vector of $f$ for inputs
of $n$ bits
(e.g., $\vec{c}_2(f)$ is the concatenation of $f(00)$,
$f(01)$, $f(10)$, and $f(11)$).
Let $C(f)$ denote the set of characteristic vectors for $f,$
$\set{\vec{c}_n(f) \mid n \in {\bf N}}.$ We refer to a set
containing exactly one string of length $2^n$ for each $n$
as a characteristic set.
If $L_f,$ the set of $x$ such that $f(x)=1,$ is a language in NP/$poly,$
then it is not hard to see that for some constant $d,$ the Kolmogorov
complexity of $\vec{c}_n(f)$ is at most $n^d$ for all $n.$
We shall show that for any $d,$ the class of characteristic sets
consisting of strings with Kolmogorov complexity $n^d$ is contained
in a class having measure zero, using a natural measure.
The measure we use is the natural one defined on the class of boolean
functions: any boolean function $f$ is equivalent to a real number $R_f$
between 0 and 1, where the $i^{th}$ bit of the number is $f(i).$ The
measure of a set of boolean functions is the Lebesgue measure on the
corresponding set of reals. Since the representations of $f$ as a boolean
function, real number, language, or characteristic set are equivalent, we
loosely apply “measure” to classes of all of them, without loss of generality.
For any $n,$ a straightforward counting argument shows that at least
$2^{2^n}/2$ strings of length $2^n$ have Kolmogorov complexity at least
$2^n-1.$ Let $K_n$ be the set of the lexicographically first $2^{2^n}/2$ of
these strings, and let $K = \cup_n K_n.$ If ${\cal C}$ is the class of
characteristic sets, then let $J = \set{ C \in {\cal C} \mid
\abs{K \cap C} < \infty};$ each characteristic set in $J$ contains at
most a finite number of vectors appearing in $K.$ The class $J$ is composed
of a countable number of subclasses ($J_l^m,$ the class of sets having $l$
vectors in common with $K$ but each of length at most $2^{2^m}$), each
similar to the Cantor set and having measure 0 by the standard argument.
The measure of $J$ is thus 0.
Let us argue that the characteristic set for any language in NP/$poly$ is a
member of $J.$ Let $L_f \in \mbox{NP}/poly,$ and let $L_f$ be recognized by
a nondeterministic Turing machine with a program of length $l$ and with
advice of length $n^e.$ For some $d,$ $l + n^e \leq n^d,$ and the
Kolmogorov complexity of any characteristic vector $\vec{c}_n(f)$ is
therefore at most $n^d.$ Now, for some constant $n_d,$ $n \geq n_d
\Rightarrow n^d < 2^n-1,$ and therefore $n \geq n_d \Rightarrow \vec{c}_n(f)
\not\in K_n.$ Hence a finite number of characteristic vectors are in $K,$
and $C(f) \in J.$
The class of characteristic sets for languages in NP/$poly$ is thus contained
in $J$ and has measure 0. For every function $f$ having a 1-oracle
instance-hiding scheme, $L_f \in \mbox{NP}/poly,$ so the measure of
functions having 1-oracle instance-hiding schemes is 0.
§.§ Random-Self-Reducing Circuits
Intuitively, a random-self-reduction of a set $S$ is an
expected-polynomial-time, randomized algorithm that maps $S$ to $S$, maps
${\overline S}$ to ${\overline S}$, and, on each input $x$ in $S$, outputs,
with equal probability, each $y$ in $S \cap \{0,1\}^{\sizex}$. Abadi,
Feigenbaum, and Kilian defined random-self-reducing algorithms rigorously
and related them to 1-oracle instance-hiding schemes [1]. In this
section, we consider random-self-reducing circuits in the same spirit.
A randomized psize circuit family is a set
$\{C_n\}_{n=1}^\infty$ of circuits in which the number of gates in $C_n$ is
at most $p(n)$ for some polynomial $p$, and $C_n$ has $n$ inputs
$x_1,\ldots,x_n,$ random inputs $r_1,\ldots,r_{q(n)},$ and outputs
$y_1,\ldots,y_{m(n)},$ for polynomials $q(n)$ and $m(n).$ We say that
$\set{C_n}$ generates a collection of distributions
$\set{D_n(x)}_{n\in\natsmall, x \in \{0,1\}^{*}}$ if for all $x,$
$C_{\abs{x}}$ outputs a sample point from $D_{\abs{x}}(x)$ with probability at
least $1/2,$ and otherwise outputs a distinguished string $\Lambda.$
A set $S$ has random-self-reducing circuits if there is a randomized
psize circuit family $\set{C_n}$ that generates the distribution
$D_n(x)$ defined as the uniform distribution on $S \cap
\set{0,1}^n$ for $x \in S,$ or an arbitrary distribution on
$\overline{S} \cap \set{0,1}^n$ for $x \in \overline{S}.$ The family
$\set{C_n}$ is called a random-self-reducing circuit family for $S$.
A set $S$ has two-sided random-self-reducing circuits if there is a
randomized psize circuit family $\set{C_n}$ that generates the
distribution $D_n(x)$ defined as the uniform distribution on $S \cap
\set{0,1}^n$ for $x \in S,$ or the uniform distribution on
$\overline{S} \cap \set{0,1}^n$ for $x \in \overline{S}.$ The family
$\set{C_n}$ is called a two-sided random-self-reducing circuit family
for $S$.
If $S$ has two-sided random-self-reducing circuits, then $\chi_S$ has a
1-oracle instance-hiding scheme that leaks at most $\sizex$ and $\chi_S(x).$
Lemma <ref> provides a way of showing that certain boolean
functions have model-1 instance-hiding schemes that use many fewer oracles
than the schemes given by Theorem <ref>. Unfortunately, as
the following theorem indicates, neither SAT nor ${\overline {\rm SAT}}$ is
likely to have random-self-reducing circuits. (It is even less likely that
SAT has two-sided random-self-reducing circuits, as it would need for Lemma
<ref> to apply.)
If ${\overline {\rm SAT}}$ has random-self-reducing circuits, then the
polynomial hierarchy collapses at the third level.
If SAT has random-self-reducing circuits, then the
polynomial hierarchy collapses at the third level.
We use a proof of Nisan [97] showing that, if SAT had a random-self-reducing
algorithm, then ${\overline {\rm SAT}}$ would be in IP$[2]$. Then, we use
the facts that ${\rm IP}[2] \subseteq {\rm AM}[4] \subseteq {\rm AM}[2]
\subseteq {\rm NP}/poly$ (see [4, 74, 75]) and that ${\overline {\rm
SAT}}\subseteq {\rm NP}/poly \Rightarrow {\rm PH} \subseteq \Sigma_3^p$
(see [122]). These results extend mutatis mutandis to the
case of random-self-reducing circuits for SAT and proof systems with advice.
§.§ Multiple Queries
The instance-hiding schemes we give are easily extended to allow the
querier to ask an unlimited number of questions. The scheme for Model 1
given in the proof of Theorem <ref> can be repeated
directly. The scheme for Model 2 can be used only once, but by generating
an exponential number of (non-reusable) schemes for each $n$ and encoding
them in a direct way into the two oracles, the scheme can be modified to
support many queries.
§.§ Nontrivial One-Oracle Schemes
Abadi, Feigenbaum, and Kilian show that certain well-known number-theoretic
functions, such as discrete logarithm and quadratic residuosity, have
one-oracle instance-hiding schemes that hide some significant information
about the input [1]. However, those schemes do leak more than
the size of the input $x$, and [1] provides no nontrivial examples
of functions with instance-hiding schemes that leak at most
$\sizex$.[Examples of trivial instance-hiding schemes that leak at
most $\sizex$ are the obvious schemes for $\chi_S$, where $S$ is a set in
P/$poly$; the oracle simply sends the polynomial advice string, and the
querier computes on his own.] Let us present a nontrivial example of a
presumably intractable function with a one-oracle instance-hiding scheme.
The discrete logarithm to the base $g$ modulo $p$ of a number $x,$
$\mbox{DLOG}_{g,p}(x),$ is the exponent $e$ such that $g^e \equiv x \mod
p.$ Here, $p$ is prime, and $g$ is a generator of the nonzero integers
$\mod p.$ No efficient method for computing DLOG is known.
The instance-hiding scheme for DLOG given in [1] is as follows.
On input $g\#p\#x,$ choose $r\in \set{1,\dots,p-1}$ uniformly at random,
and let $y=x g^r \mod p$ (if $x=0,$ pretend $x=1$ and note that the answer
is $\Lambda$). Clearly, $y$ is distributed uniformly over ${\bf
Z}_{p}^{*}$ for any $g\#p\#x$ of size $n,$ given $g$ and $p.$ On receiving
the answer $z =
\mbox{DLOG}_{g,p}(y),$ return $z-r.$ The reduction is correct:
\[
\mbox{DLOG}_{g,p}(y)- r =
\mbox{DLOG}_{g,p}(x g^r)- r =
\mbox{DLOG}_{g,p}(x) +r- r =
\mbox{DLOG}_{g,p}(x).
\]
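The blinding step can be sketched in Python; the modulus, the generator, and the brute-force stand-in for the DLOG oracle below are illustrative choices of ours:

```python
import random

p, g = 23, 5     # illustrative: 5 is a primitive root mod 23

def dlog(y):
    # brute-force stand-in for the DLOG_{g,p} oracle
    return next(e for e in range(p - 1) if pow(g, e, p) == y % p)

def hidden_dlog(x):
    r = random.randrange(1, p)            # blinding exponent
    y = (x * pow(g, r, p)) % p            # y is uniform over Z_p^*
    z = dlog(y)                           # query the oracle about y, not x
    return (z - r) % (p - 1)              # unblind: DLOG(x) = z - r

x = 18
assert pow(g, hidden_dlog(x), p) == x
```

The oracle sees only the uniformly distributed $y = xg^r$, so the query reveals nothing about $x$ beyond $g$, $p$, and the input size.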
Thus this scheme leaks $\abs{g\#p\#x}, g,$ and $p.$ We now define a
function that has a scheme which leaks only $\abs{g\#p\#x}.$
Let $p_n$ denote the smallest prime exceeding $2^n,$ and $g_n$ the smallest
generator of ${\bf Z}_{p_n}^{*}.$ Define the function $f(x) =
\mbox{DLOG}_{g_{|x|},p_{|x|}}(x).$ (If $x\equiv 0 \mod p_{|x|},$ then say
$f(x)=\Lambda.$) As in the general case of the discrete logarithm, no
efficient method for computing $f(x)$ is known.
On the other hand, given $g_n$ and $p_n,$ the reduction described above is
easy to perform. Thus, an instance-hiding scheme for $f(x)$ can be
constructed as follows. First, the querier sends $n=\abs{x},$ and receives
$g_n$ and $p_n$ in return. Then the querier performs the
random-self-reduction described above, and obtains the result as described
above. The scheme hides everything but $\sizex.$
CHAPTER: LOCALLY RANDOM REDUCTIONS
However, this bottle was not marked “poison,” so Alice
ventured to taste it, and finding it very nice (it had, in fact, a
sort of mixed flavour, of cherry-tart, custard, pine-apple, roast
turkey, toffee, and hot buttered toast), she very soon finished it off.
“What a curious feeling!” said Alice. “I must be shutting up like
a telescope.”
And so it was indeed: she was now only ten inches high, and her face
brightened up at the thought that she was now the right size for going
through the little door into that lovely garden.
Lewis Carroll, Alice's Adventures in Wonderland
Locally random reductions are a new class of reductions from one
computational problem to another. Inspired by the problem of instance-hiding schemes (presented in Chapter <ref>), locally
random reductions have found a broad variety of applications since their
introduction by Beaver and Feigenbaum. These results include methods for
program testing [89, 34] and the surprising result that
$\ip=\pspace$
[115]. We describe some of these
important subsequent developments in <ref>, and show
how LRR's drastically reduce the communication complexity of zero-knowledge
proof systems and secure multiparty protocols in Chapters
<ref> and <ref>.
Reductions from one language to another are fundamental to complexity
theory. One can reduce the problem “is $x \in L_1?$” to solving the problem
“is $y \in L_2?$” if there is an efficiently computable function $P$ such
that $x \in L_1 \Leftrightarrow P(x) \in L_2.$ A reduction from a function $f(x)$ to another function $g(y)$ similarly maps an instance $x$
in the domain of $f$ to an instance $y=P(x)$ in the domain of $g.$ It also
requires that the value of $g(y)$ be interpolated to obtain the
answer: $f(x) = Q(g(y)).$
In general, a reduction from $f$ to $g$ can produce several
instances $y_1,\dots,y_m$ at which to evaluate $g,$ and need not
generate these instances deterministically. Let $X$ be the domain of
$f$ and $Y$ be the domain of $g.$
A random reduction from $f(x)$ to $g(y)$ is a pair of
probabilistic, polynomial time algorithms $(P,Q)$ satisfying the
following properties:
$P : X \rightarrow \dist(Y^m \times \sigstar),$ that is, the querying
algorithm $P$ produces a sample $(y_1,\dots,y_m,\sigma)$ according to
distribution $P(x),$ where each $y_i$ is in the domain of $g,$ and $\sigma$
is a string.
If $(y_1,\dots,y_m,\sigma)$ has nonzero weight in the distribution $P(x),$
then the interpolation algorithm gives $Q(g(y_1),\dots,g(y_m),\sigma) = f(x).$
The term locally random refers to the distributions on subsets of the
queries $y_1,\dots,y_m$ produced by $P(x).$ In a certain sense, it
generalizes the idea of secret-sharing, in that secret-sharing ensures a
uniform distribution on sufficiently small subsets of pieces, regardless of
the secret. Here, we require that the distribution on subsets of the
queries be the same for any $x$ of a given size $n,$ even though it
need not be uniform, as it is in the case of secret sharing.
Let $B \subseteq \set{1,\dots,m}.$ Then $D_x^B$ denotes the distribution on
$\set{y_i \mid i \in B}$ induced by $P(x).$
A $(k(n),m(n))$-locally random reduction (LRR) from $f(x)$ to $g(y)$
is a random reduction $(P,Q)$ satisfying:
* $P(x)$ produces $m(n)$ queries on inputs of size $n;$
* For all $n,$ for all subsets $B\subseteq \set{1,\dots,m(n)}$ of size
$\leq k(n),$ the distribution $D_x^B$ is the same for all $x$ of size $n.$
The discrete logarithm function, $\mbox{DLOG}_{g,p}(x),$ was defined
in Definition <ref>. No efficient algorithm for computing DLOG is
known. The random-self-reduction given for $DLOG$ is in fact a
$(1,1)$-locally random reduction: it produces one query $y=xg^r,$ and the
distribution on query sets of size $\leq 1$ is the same for every $x.$
For illustration, let us demonstrate a $(10,21)$-locally random reduction
from the multiplication function to itself. Let $f(x_1,x_2) = x_1 x_2,$
over ${\bf Z}_{p_n}$ for concreteness. Let $g(x_1,x_2) = x_1 x_2.$
We reduce $f$ to $g$ via $(P,Q)$ as follows.
The querying algorithm $P(x_1,x_2)$ chooses
$a_1,\dots,a_{10},b_1,\dots,b_{10} \in {\bf Z}_{p_n}$ uniformly at random,
and sets $p_1(u)=a_{10}u^{10}+\cdots+a_1 u+x_1$ and
$p_2(u)=b_{10}u^{10}+\cdots+b_1 u+x_2.$ It computes $y_i = (p_1(i),p_2(i))$
for each $1 \leq i \leq 21.$
The interpolation algorithm $Q(z_1,\dots,z_{21},\sigma)$ does just that: it
interpolates the polynomial $q(u)$ of maximal degree 20 passing through the
points $(i,z_i)$ for $1\leq i \leq 21.$ It returns the value $q(0).$
To see that the reduction is correct, observe that $g(p_1(i),p_2(i)) =
p_1(i)p_2(i) = q(i)$ for $1 \leq i \leq 21,$ and both $g(p_1(u),p_2(u))$
and $q(u)$ are polynomials of maximal degree $20.$ By elementary algebra,
$g(p_1(u),p_2(u))=q(u),$ hence $g(p_1(0),p_2(0))=q(0).$
To see that the reduction is (10,21)-locally random, note that because the
coefficients of $p_1(u)$ and $p_2(u)$ are chosen uniformly at random, the
distribution on any $10$-set of values of $p_1(u)$ or $p_2(u)$ is uniform
over $({\bf Z}_{p_n})^{10}$ regardless of $x_1$ and $x_2.$ Thus for any
subset $B$ of $10$ or fewer queries, $D_{x_1,x_2}^B$ is the same for any $x_1,x_2.$
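The reduction can be sketched in Python with the degree $d$ as a parameter; the field size and helper names are our own choices:

```python
import random

p = 97  # illustrative prime field

def rand_poly(const, d):
    # a random degree-d polynomial with constant term `const`
    return [const] + [random.randrange(p) for _ in range(d)]

def ev(poly, u):
    return sum(c * pow(u, k, p) for k, c in enumerate(poly)) % p

def lagrange_zero(pts):
    # interpolate the points (i, z_i) over Z_p and evaluate at u = 0
    total = 0
    for i, zi in pts:
        num = den = 1
        for j, _ in pts:
            if j != i:
                num = num * (-j) % p
                den = den * (i - j) % p
        total = (total + zi * num * pow(den, p - 2, p)) % p
    return total

def lrr_multiply(x1, x2, d=10):
    # P: hide each input in a random degree-d polynomial
    p1, p2 = rand_poly(x1, d), rand_poly(x2, d)
    queries = [(i, (ev(p1, i), ev(p2, i))) for i in range(1, 2 * d + 2)]
    # the target g(y1, y2) = y1 * y2, applied to each query
    answers = [(i, a * b % p) for i, (a, b) in queries]
    # Q: interpolate the degree-2d product polynomial and read q(0)
    return lagrange_zero(answers)

assert lrr_multiply(20, 33) == (20 * 33) % p
```

With $d = 10$ this is exactly the $(10,21)$-reduction above: the product polynomial has degree $2d = 20$, so $21$ queries determine it, while any $10$ shares of either input are uniform.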
The main results of this chapter are twofold: the first shows the existence
of locally random reductions for polynomials $f(x_1,\dots,x_m)$ and the
second for arbitrary boolean functions. These reductions inspire a variety
of applications from program testing to interactive proofs to secure
distributed protocols.
For any polynomial $f(x_1,\dots,x_n)$ of degree at most $n$ over some
field $E,$ there is a $(1,n+1)$ locally random reduction from $f$ to itself.
For any boolean function $f(x),$ there exists a $(1,n+1)$ locally
random reduction from $f(x)$ to some function $g(y).$ In particular, for
any field of size exceeding $n+1,$ $g(y)$ can be taken to be a polynomial
$c_f(x_1,\ldots,x_n)$ of degree $n$ over that field.
See Lemmas <ref> and <ref>,
proved below in <ref> and <ref>, respectively.
In fact, a stronger statement is achievable:
For any polynomial $f(x_1,\dots,x_n)$ of degree at most $n$ over some
field $E,$ for any $d(n),$ and for any constant $c>0,$ there is a $(d,dn/(c
\log n))$ locally random reduction from $f$ to a polynomial
$h_f(w_1,\ldots,w_{n^{c+1}/(c \log n)})$ of degree $n/(c\log~n)$
over that field.
For any boolean function $f(x),$ for any $d(n),$ and for any constant
$c>0,$ there exists a $(d,dn/(c \log n))$ locally random reduction from
$f(x)$ to some function $g(y).$ In particular, for any field of size
exceeding $(dn/(c \log n)),$ $g(y)$ can be taken to be a polynomial
$h_f(w_1,\ldots,w_{n^{c+1}/(c \log n)})$ of degree $n/(c\log~n)$ over that
field, where each $w_i$ itself is a product of at most $O(\log~n)$ variables.
See Lemmas <ref> and <ref>.
§ LRR'S FOR MULTIVARIATE POLYNOMIALS
For any polynomial $f(x_1,\dots,x_n)$ of degree at most $n$ over some
field $E,$ there is a $(1,n+1)$ locally random reduction from $f$ to itself.
We assume without loss of generality that $\abs{E} > n+1$ (otherwise, we
may use an extension field). The query function $P(x_1,\dots,x_n)$ is
described in Figure <ref>, and the interpolation function
$Q(z_1,\dots,z_{n+1})$ is described in Figure <ref>.
By Lemma <ref>, the distribution $D_{x_1,\dots,x_n}^{\set{i}}$ is
uniform on $E^n$ for all assignments to $x_1,\ldots,x_n.$
* Choose $a_1,\dots,a_n \in E$ uniformly at random, and set
$p_1(u)=a_1 u +x_1,
p_2(u)=a_2 u +x_2,
\dots, p_n(u)=a_n u +x_n.$
* Set $y_i \leftarrow (p_1(i),\dots,p_n(i))$ for $1 \leq i \leq n+1.$
Query function $P(x_1,\dots,x_n)$ for self-reducing the multivariate
polynomial $f(x_1,\dots,x_n).$
The interpolation function $Q$ is easy to specify: given the $n+1$ values
$z_1,\dots,z_{n+1},$ interpolate the polynomial $q(u)$ of degree $n$ running
through the $n+1$ points. Return $q(0).$ That this reduction is correct
can be verified by simple algebra: $f(p_1(u),\ldots,p_n(u))$ and $q(u)$ are
both polynomials in $u$ of degree $n.$ They agree at $n+1$ points
($q(i)=f(p_1(i),\ldots,p_n(i)),$ $1\leq i \leq n+1$). They must therefore
be identical, so $q(0)= f(p_1(0),\ldots,p_n(0)) = f(x_1,\ldots,x_n).$
* Interpolate the polynomial $q(u)$ passing through $(i,z_i)$ for $1\leq i
\leq n+1.$
* Return $q(0).$
Interpolation function $Q(z_1,\dots,z_{n+1})$
for self-reducing the multivariate polynomial $f(x_1,\dots,x_n).$
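The pair $(P,Q)$ of the figures can be sketched in Python for a small illustrative polynomial; the field and the example $f$ are our own choices:

```python
import random

p = 101  # an illustrative field with more than n+1 elements

def lagrange_zero(pts):
    # interpolate the points (i, z_i) over Z_p and evaluate at u = 0
    total = 0
    for i, zi in pts:
        num = den = 1
        for j, _ in pts:
            if j != i:
                num = num * (-j) % p
                den = den * (i - j) % p
        total = (total + zi * num * pow(den, p - 2, p)) % p
    return total

def P(xs):
    # query algorithm: random lines p_j(u) = a_j*u + x_j, sampled at u = 1..n+1
    n = len(xs)
    a = [random.randrange(p) for _ in range(n)]
    return [(i, [(a[j] * i + xs[j]) % p for j in range(n)])
            for i in range(1, n + 2)]

def Q(answers):
    # interpolation algorithm: read off q(0) = f(x_1, ..., x_n)
    return lagrange_zero(answers)

f = lambda xs: (xs[0] * xs[1] + xs[2]) % p   # a degree-2 polynomial, n = 3
answers = [(i, f(y)) for i, y in P([5, 7, 9])]
assert Q(answers) == (5 * 7 + 9) % p
```

The composition $f(p_1(u),\ldots,p_n(u))$ has degree at most $n$, so the $n+1$ answers pin it down and its value at $0$ is $f(x_1,\ldots,x_n)$.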
§ LRR'S FOR BOOLEAN FUNCTIONS
For any boolean function $f(x),$ there exists a $(1,n+1)$ locally random
reduction on inputs of size $n$ from $f(x)$ to some function $g(y).$ In
particular, for any field of size exceeding $n+1,$ $g(y)$ can be taken to
be a polynomial $c_f(x_1,\ldots,x_n)$ of degree $n$ over that field.
By Lemma <ref>,
every function $f$ on inputs of length $n$
can be expressed as a polynomial $c_f$ of degree $n$ over an arbitrary
field $E,$ such that when the variables $x_1,\ldots,x_n$ are assigned
values in $\set{0,1}$ that match $x,$ $c_f(x_1,\ldots,x_n) = f(x).$
By Lemma <ref>, then, $c_f$ is $(1,n+1)$-locally-random
reducible to itself, which implies that $f$ is $(1,n+1)$-locally-random
reducible to $c_f.$
§ REDUCING THE NUMBER OF QUERIES
By an appropriate change of variables, we can decrease the number of
queries needed to reduce a polynomial to itself by a logarithmic factor
(and, correspondingly, to reduce a boolean function to a polynomial).
Every polynomial $f(x_1,\dots,x_n)$ of degree $n$ is expressible as a
polynomial $h_f(w_1,\dots,w_N)$ of degree $\frac{n}{\log n}$ over new
variables $w_1,\dots,w_N,$ where $N = n^2/\log~n,$ and each variable $w_i$
is a product of at most $\log n$ of the $x$-variables.
Express $f$ as a weighted sum of its monomials:
\[
f(x_1,\dots,x_n) =
\sum_{\epsilon} c_{\epsilon} \prod_{i=1}^n x_i^{\epsilon(i)}
\]
where $\epsilon : \set{1,\dots,n} \rightarrow \set{0,1}$ indicates whether
variable $x_i$ is in a given monomial of $f,$ and $c_{\epsilon}$ is 0 or 1
according to whether the monomial represented by $\epsilon$ appears in $f.$
The sum is taken over all possible $\epsilon.$ We group the $x$'s into
blocks of size $\log n$ and assign a variable $w$ to every product of a
combination of $x$'s taken from a single block. For example, if
$n=16,$ then we take blocks $\set{x_1,x_2,x_3,x_4},
\set{x_5,x_6,x_7,x_8},
\set{x_9,x_{10},x_{11},x_{12}},
\set{x_{13},x_{14},x_{15},x_{16}},$
and we take $w_1=1,\ w_2=x_1,\ w_3=x_2,\ w_4=x_1x_2,\ w_5=x_3,$ and so on within each block. In general, defining the index
\[
\psi(b,\epsilon) =
(b-1)n + 1 + \sum_{i=1}^{\log n} 2^{i-1} \epsilon((b-1) \log n + i),
\]
for block number $b$ in the range $\set{1,\dots,\frac{n}{\log n}},$
we have the correspondence
\[
w_{\psi(b,\epsilon)} =
\prod_{i=(b-1) \log n+1}^{b\log n} x_i^{\epsilon(i)}.
\]
Then $f$ is expressed in terms of the new variables as
\[
f(x_1,\dots,x_n) =
h_f(w_1,\dots,w_N) \equiv
\sum_{\epsilon} c_{\epsilon}
\prod_{b=1}^{n/\log~n} w_{\psi(b,\epsilon)}
\]
It is not hard to see that $h_f$ has degree $\frac{n}{\log n}$ over
$N=n^2/\log~n$ variables.
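The index bookkeeping can be sanity-checked in Python, using a 1-indexed convention chosen so that the empty product of block 1 is $w_1$, matching the worked example above; the helper names are ours:

```python
from math import log2

n = 16
block = int(log2(n))       # block size log n = 4

def psi(b, eps):
    # index of the w-variable for block b (1-indexed) and selector eps,
    # where eps[i-1] says whether x_i appears in the monomial
    return (b - 1) * n + 1 + sum(
        2 ** (i - 1) * eps[(b - 1) * block + i - 1]
        for i in range(1, block + 1))

empty   = [0] * n                  # the empty product within a block
just_x1 = [1] + [0] * (n - 1)      # the monomial x_1
x1_x2   = [1, 1] + [0] * (n - 2)   # the monomial x_1 * x_2
assert psi(1, empty)   == 1        # w_1 = 1
assert psi(1, just_x1) == 2        # w_2 = x_1
assert psi(1, x1_x2)   == 4        # w_4 = x_1 x_2
```

Each block of $\log n$ variables contributes $2^{\log n} = n$ products, giving $N = n \cdot n/\log n = n^2/\log n$ variables in all.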
For any polynomial $f(x_1,\dots,x_n)$ of degree at most $n$ over some
field $E,$ for any $d(n),$ and for any constant $c>0,$ there is a $(d,dn/(c
\log n))$ locally random reduction from $f$ to a polynomial
$h_f(w_1,\ldots,w_{n^{c+1}/(c \log n)})$ of degree $n/(c\log~n)$
over that field.
In particular, a polynomial $h_f(w_1,\dots,w_N)$
of degree $n/(c\log~n)$ suffices, where $N=\frac{n^{c+1}}{c\log~n}.$
The idea behind the reduction is to decrease the degree of $f$ to
$n/(c\log~n),$ and then to use random polynomials of degree $d$ to hide the
variables. Assume WLOG that $c\log~n$ divides $n.$
By an easy extension of Lemma <ref>, there exists a
polynomial $h_f(w_1,\dots,w_N)$ of degree $n/(c\log~n)$ in
$N=n^{c+1}/(c\log~n)$ variables. Given $x_1,\dots,x_n,$ computing
$w_1,\dots,w_N$ is an easy $NC^1$ computation: simply multiply together all
the $x_j$ appearing in the monomial corresponding to $w_i,$ for each $w_i.$
Instead of choosing linear polynomials as in the proof of
Lemma <ref>, the query algorithm selects
uniformly random polynomials $p_1(u),\dots,p_N(u)$ of degree $d$ subject to
$p_1(0)=w_1,\dots,p_N(0)=w_N.$ It then generates
$y_i=(p_1(i),\dots,p_N(i))$ for $1 \leq i \leq dn/(c\log~n).$
On input $z_1,\dots,z_{1+dn/(c\log~n)},$ the interpolation algorithm interpolates
the points $(i,z_i)$ to a polynomial $q(u)$ of degree $dn/(c\log~n)$ and returns $q(0).$
Correctness is satisfied through straightforward algebra: $q(u)$ and
$h_f(p_1(u),\dots,p_N(u))$ have degree $dn/(c\log~n)$ and agree at
$1+dn/(c\log~n)$ points, so they are identical;
\[
q(0) = h_f(p_1(0),\dots,p_N(0))
= h_f(w_1,\dots,w_N) = f(x_1,\dots,x_n).
\]
Any $d$-subset of the $y_i$'s contains $d$ values on each polynomial
$p_1(u),\dots,p_N(u).$ By Lemma <ref> these values are distributed
uniformly over $E^d$ for any $w_1,\dots,w_N;$ that is, for any $x$ of size $n.$
The extra “+1” in the number of queries is removed by
instantiating $w_1$ to 0 and then to 1 (see the proof of
Theorem <ref> in Chapter <ref>), generating twice as many
queries.
For any boolean function $f(x),$ for any $d(n),$ and for any constant
$c>0,$ there exists a $(d,dn/(c \log n))$ locally random reduction from
$f(x)$ to some function $g(y).$ In particular, for any field of size
exceeding $(dn/(c \log n)),$ $g(y)$ can be taken to be a polynomial
$h_f(w_1,\ldots,w_{n^{c+1}/(c \log n)})$ of degree $n/(c\log~n)$ over that
field, where each $w_i$ itself is a product of at most $O(\log~n)$ variables.
The proof of Lemma <ref> is virtually the same as that
of Lemma <ref>: $f(x)$ is equivalent to a polynomial
$g(x_1,\dots,x_n)$ of degree $n$ over an arbitrary field $E.$ The only new
observation we must make is that $g$ is $(d,dn/(c\log~n))$ locally random
reducible to some polynomial $h_f,$ by virtue of Lemma <ref>.
§ INSTANCE HIDING SCHEMES
The construction of the scheme given in the proof of
Theorem <ref> used secret-sharing with maximal coalition
size $t=1$ to generate a $(1,n+1)$-locally-random reduction, which gave
rise to an $(n+1)$-oracle instance-hiding scheme.
In fact, it is not hard to show that any function $f(x)$ admitting a
$(1,m)$-locally random reduction to some function $g(y)$ also has an
$m$-oracle instance hiding scheme. This gives a direct way to generate
instance hiding schemes for arbitrary functions.
If there exists a $(1,m(n))$-locally random reduction from $f(x)$ to $g(y),$
then there exists an $m(n)$-oracle instance hiding scheme for $f(x).$
The protocol is similar to the one given in the proof of
Theorem <ref>, except that instead of computing $y_i$
as a list of $1$-shares of $x_1,\ldots,x_n,$ we let $y_i$ be the result of
computing the reduction $P(x).$ In other words, the querier computes
$(y_1,\dots,y_m) \leftarrow P(x),$ and sends $y_i$ to $B_i,$ who computes
$g(y_i).$ Then $A$ determines $f(x)$ by computing
$Q(g(y_1),\ldots,g(y_m)).$ Since $(P,Q)$ is a $(1,m)$ locally-random
reduction, the distribution $D_n^{\set{y_i}}$ on query $y_i$ is the same
for any $x$ of size $n,$ so the random variables
$\anglebrack{Y_i,O_1,\ldots,O_m}$ and $X$ are independent given
$\abs{x}=n.$ (Note that the oracles are identical, as before, and each
computes $g(y)$.)
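As a concrete illustration, here is a minimal sketch of this construction for a function expressible as a low-degree polynomial $g$; the prime, the example polynomial, and all helper names are illustrative assumptions, not notation from the text:

```python
# Sketch of an m-oracle instance-hiding scheme from a (1, m)-locally random
# reduction. PRIME and the example g are assumptions for illustration only.
import random

PRIME = 101  # a small prime; all arithmetic is in the field Z_PRIME

def reduce_queries(x, m):
    """The reduction P: place x on a random curve with curve(0) = x and
    query the curve at m distinct nonzero points alpha_1..alpha_m."""
    r = [random.randrange(PRIME) for _ in x]               # random directions
    curve = lambda u: [(xi + ri * u) % PRIME for xi, ri in zip(x, r)]
    alphas = list(range(1, m + 1))
    return alphas, [curve(a) for a in alphas]

def interpolate_at_zero(alphas, values):
    """The interpolation Q: Lagrange-interpolate the degree <= m-1
    polynomial through (alpha_j, value_j) and evaluate it at u = 0."""
    total = 0
    for j, (aj, vj) in enumerate(zip(alphas, values)):
        num = den = 1
        for k, ak in enumerate(alphas):
            if k != j:
                num = num * (-ak) % PRIME
                den = den * (aj - ak) % PRIME
        total = (total + vj * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

# Example: g has degree 2, so g restricted to the curve has degree <= 2 in u,
# and m = 3 oracle answers suffice to recover g(x) = g(curve(0)).
g = lambda y: (y[0] * y[1] + y[2]) % PRIME
x = [3, 7, 2]
alphas, queries = reduce_queries(x, m=3)   # each query alone is uniform
answers = [g(q) for q in queries]          # one oracle per query
assert interpolate_at_zero(alphas, answers) == g(x)
```

Each individual query is uniformly distributed (the directions are uniform and every $\alpha_j$ is nonzero), so a single oracle learns nothing about $x$; this is exactly the privacy condition used in the proof above.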
§ IP$=$PSPACE AND OTHER SUBSEQUENT WORK
§.§ Program Testing and the Permanent
Lipton [89] has designed a theory of program testing based on
locally-random self reductions. Given a program which is presumed to work
with reasonably high probability on most inputs, he investigates means to
check the answer of the program on a particular input by calling the
program on several random but related inputs. Based on
Theorem <ref>, which gives a $(1,n+1)$ locally-random self
reduction for any polynomial of degree $n$ in $n$ variables, his main
theorem states that any program for a function expressible as a polynomial
of degree $n$ is testable using $n+1$ calls to the program. (See
[32, 34] for a related, earlier approach to program checking and
correcting.)
In particular, Lipton observes the following:
[Lipton 1989] The Permanent function is
testable of order $n+1,$ i.e., it has a $(1,n+1)$-locally random
self-reduction.
The Permanent of an $n \times n$ matrix $A$ is given by the following
expression, similar to the determinant:
\[
\mbox{\sc Perm\ } A \equiv \sum_{\sigma} \prod_{i=1}^{n} A_{i,\sigma(i)},
\]
\]
where the sum is over all permutations of $\set{1,\ldots,n}.$
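As a sketch only, the defining sum can be evaluated directly (in exponential time), which makes the contrast with the polynomial-time determinant concrete:

```python
from itertools import permutations
from math import prod  # Python 3.8+

def permanent(A):
    """Permanent by its definition: the determinant's sum over all
    permutations, but without the alternating signs."""
    n = len(A)
    return sum(prod(A[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

# For [[1, 2], [3, 4]]: 1*4 + 2*3 = 10, while the determinant is -2.
```

The loop visits all $n!$ permutations, reflecting the fact that no efficient algorithm for the permanent is known.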
The interesting and important property to note is that the Permanent
is not known to be efficiently computable. In fact, it is complete for the
class $\#P$ of counting problems [118], presumed to be
extremely intractable. The polynomial hierarchy (PH), which contains NP
and coNP, is contained in $P^{\#P}$ by a result of Toda [116];
the Permanent function is certainly as hard and presumably harder
than any problem in NP or coNP.
§.§ Interactive Proofs: IP$=$PSPACE
Beaver, Feigenbaum, Kilian, and Rogaway [15] applied locally random
reductions to show that zero-knowledge proofs of properties about bits
which are committed but remain secret require only a constant number of
rounds of interaction and a small polynomial message size. In other words,
one processor places several bits in envelopes and seals them, and then
claims and proves to another processor that a certain property holds on the
bits in the envelopes. The bits are not revealed. Previous methods
required a message complexity proportional to the size and depth of a
circuit describing the property to be shown. Chapter <ref>
describes and proves this result in more detail.
Nisan [98] used Observation <ref> to show that there
is a multiprover interactive proof system for $\#P.$ That is, two
physically separate provers can convince a polynomial-time verifier of the
value of the permanent of some matrix, despite the difficulty of computing
the permanent. Based on Nisan's solution, Lund, Fortnow, Karloff, and
Nisan [91] showed that there is a single-prover proof system
for ${\#P},$ namely that one prover can convince a verifier of the value of
a permanent. Since the polynomial hierarchy is contained in $P^{\#P}$ by
[116], this implies interactive proof systems for languages in
coNP, which was a large open question.
Shamir showed that in fact
$\ip=\pspace,$ namely that the
class of languages that are interactively provable is the same as the class
of languages computable by a Turing machine using a polynomial amount of
memory [115]. His proof, based on the line of research inspired by
locally random reductions, uses simple algebraic techniques.
Interestingly, the proof that $\ip=\pspace$ connects a class based on
interaction and communication with a class based on computational
complexity. The line of research sparked by the algebraic methods of
locally random reductions joins communication complexity with computational
complexity. On the other hand, the results of [15], described in
Chapters <ref> and <ref>, show how to make
communication complexity and computational complexity independent in
protocols for security and reliability.
Thus, locally random reductions elucidate the connection between
communication complexity and computational complexity. Their
straightforward, algebraic nature and the concise and simple solutions they
provide and inspire suggest that more applications are waiting to be found.
CHAPTER: ZERO KNOWLEDGE PROOFS ON COMMITTED BITS
Notarized envelope schemes are a natural
extension of the zero-knowledge proof systems introduced by Goldwasser,
Micali, and Rackoff [74] (see <ref>). An
envelope scheme is a means to commit to a string $x$ (the
contents of the envelope) in such a way that $x$ cannot later be changed.
Until the envelope is opened, the string remains secret. It is similar to
secret-sharing in these respects: a secretly shared value remains hidden
until it is explicitly reconstructed, and its value cannot be changed in
the meantime. In fact, threshold schemes provide a natural means to
implement envelopes in the presence of a network, an idea that will be
especially useful in Chapter <ref>.
A notarized envelope provides some
additional information about the contents of the envelope. Namely, it is
an envelope associated with a predicate $\scp$ that holds true when applied
to the contents. The truth of the predicate may reveal information about
the contents; but nothing more is compromised. Like zero-knowledge
proofs, the bearer of the
envelope sees the simple fact that $x \in L$ — or here, the fact that
$\scp(x)=1$ — and whatever else it can compute given the notarized
envelope, it can compute given simply that $\scp(x)=1.$ (We shall consider
an even stronger statement: for some function $F,$ the committer
commits to $x$ and also provides a value $y$ such that $F(x)=y.$ The
example of a predicate is one-sided in that it does not apply when
$\scp(x)=0.$)
Using cryptographic assumptions, notarized envelope schemes have been
developed by, among others, Goldreich, Micali, Wigderson [70],
Brassard, Chaum [38], and Impagliazzo, Yung [83].
In this model, at least one of the two players (prover and verifier) is
limited in computational power, and bit commitment is implemented using
cryptographic tools such as one-way functions.
We shall consider the ideal envelope model, in
which both prover and verifier have unlimited computational power, no
cryptographic assumptions are made, and bit commitment is assumed as a
primitive. Any primitive that implements bit commitment in this model is
called an envelope scheme. A natural question to address is whether
notarized envelope schemes exist in this model; that is, can notarized
envelopes be constructed from generic envelopes? This question was
answered in the affirmative by Bennett, Brassard, Crépeau [21],
Ben-Or et al. [25], and Rudich [113].
The resource measures for notarized envelope schemes are similar to those
of multiparty protocols, or for that of any interaction, for that matter.
Local computing time and space, number of rounds of interaction,
message sizes, and number of generic envelopes (“envelope complexity”)
are of primary interest.
The notarized envelope schemes of [21, 38, 25, 70, 83, 113] are
not directly comparable, because they consider a wide variety of models,
but they have one feature in common: All have bit complexity proportional
to the circuit complexity of $\scp$. Here, we achieve a more
communication-efficient general construction of notarized envelope schemes.
We consider the execution of an ideal generic envelope scheme to require
$O(1)$ rounds.
(Constant Rounds Notarization)
In the ideal envelope model, every function family $F$ (and thus every
predicate family $\scp$) has an exponentially-secure notarized envelope
scheme that uses a constant number of rounds and a small polynomial
number (in $(n,M,k)$) of envelopes and bits, regardless of the
computational complexity of $F.$
By comparison, zero-knowledge proof systems are limited to languages in
$\ip=\pspace.$ Though it is unlikely that there are small circuits
describing $\pspace$ languages, the class of provable statements is limited,
and there are no proof systems requiring only a bounded number of rounds.
Our construction for notarized envelopes makes no restriction on the
predicates, and uses a bounded number of rounds.
§ DEFINITIONS
An ideal (generic) envelope scheme is the
following two-stage ideal protocol for three players, one of whom is a
trusted host. We refer to the players as the Sender S, the Recipient R,
and the trusted Intermediary I. In the first stage, S sends either the
message refuse or it sends a bit $b$ to I. If it sends a bit $b,$
then I sends the message accept to R, to indicate having accepted a
bit; otherwise I sends reject to indicate refusal or misbehavior. In
the second stage, S sends one of two messages, open or retain,
to I. If S sends open, then I sends $b$ to R. If S sends retain, then I sends $\Lambda$ to R. This protocol is similar to the
Verifiable Time-Release Schemes of Chapter <ref>, though here we
are not concerned whether the intermediary knows the bit.
Committing to a string $x$ simply means performing $\abs{x}$ repetitions of
the ideal envelope scheme, one for each bit of $x.$ We assume a different
trusted intermediary for each committed bit. We say that S “opens an
envelope” to mean that S sends open in stage 2.
An ideal notarized envelope scheme for (deterministic) function
family $F$ requires a few simple modifications. The sender sends strings
$x$ and $y$ to I, who computes $F(x).$ If $F(x)=y,$ then I returns (accept,$y$) to R; otherwise, I sends (reject,$y$). In the second
stage, I sends either $x$ or $\Lambda$ to R, according to the request of S.
Where clear from context, we in fact refer to a notarized envelope for $F$
as one of two cases: as above, the sender S has provided a value $y$ for
$F(x);$ or secondly, the sender has also committed $y$ and is “really”
using a notarized envelope for the function $\chi_F(x,y)$ defined as 1 iff
$F(x)=y,$ and otherwise 0. In the latter case we implicitly assume the
sender is claiming $\chi_F(x,y)=1.$
A notarized envelope for a predicate $\scp$ has classically been defined as
the special modification in which $y=1$ always. This weak notion does not
allow the sender to claim that the predicate does not hold. It
corresponds to zero-knowledge in the sense that zero-knowledge proof
systems are one-sided; they show that a string $x$ is in a language, but do
not need to show $x$ is not in a language. Thus, notarized envelopes for
predicates in NP are distinguished from notarized envelopes for coNP, or
even from notarized envelopes for NP$\cap$coNP — which corresponds to the
more powerful definition in terms of functions. That is, given an NP
predicate $\scp,$ if the sender is allowed to choose $y=0$ or $y=1$
then the function $F$ is an NP$\cap$coNP decision question; if the sender
were allowed only to claim $y=1,$ then the function $F$ would be an NP
decision problem, corresponding to the case of proof systems for NP.
No efficient notarized envelope schemes built on generic envelope schemes
have previously been discovered for predicates outside of NP, even with the
weaker one-sided definition. We observe that the techniques employed to
show that $\ip=\pspace$ provide an unbounded-round method for notarized
envelopes, even with the stronger two-sided definition; and they provide an
implementation for a sender and receiver of different computational power.
In this chapter, however, we consider equally powerful parties and present
results that are not only more general but more efficient than past methods
and derivations of those methods.
§ NOTARIZED ENVELOPES FROM GENERIC ENVELOPES
We now prove Theorem <ref>.
Our goal is to construct a protocol that uses generic envelope schemes to
implement a notarized envelope scheme. In order to construct notarized envelopes for an arbitrary function family $F,$ we use a $(1,m)$
locally random reduction from $F$ to a family $G$ of polynomials.
We shall make use of the following important result for functions that can
be computed by logarithmic depth, polynomial size circuits. It certainly
applies to the reduction (P) and interpolation (Q) functions of the locally
random reductions of Chapters <ref> and <ref>, which are
randomized $NC^1$ computations.
(Bennett et al. [21], Ben-Or et al. [25], Rudich [113])
If $F^{n,M}$ is computable by a boolean or arithmetic circuit of depth
$O(\log nM)$ and size polynomial in $nM,$ then it has a notarized envelope
scheme with constant round complexity, polynomial bit
complexity, and polynomial envelope complexity.
The protocol (F,x,y) for notarized envelopes for arbitrary function
$F$ is described in Figure <ref>. Essentially, the sender
repeats the following protocol $kn^2$ times in parallel. It first commits
to two sequences of random bits $r_1$ and $r_2$ that the reduction and
interpolation algorithms, P and Q, will use. (Assume that the two
randomized algorithms require $p(n)$ bits on inputs of length $n;$ with
these bits, the algorithms are essentially deterministic.) The sender uses
a notarized envelope to commit to a string containing $m$ outputs
$y_1',y_2',\ldots,y_m'$ that it claims to be the outputs of $P(r_1,x).$ The
sender also uses a notarized envelope to commit values
$z_1',z_2',\ldots,z_m'$ that it claims satisfy
$Q(r_2,z_1',z_2',\ldots,z_m')=y.$ These two claims are satisfied using the
notarization schemes for polynomial size circuits [21, 25, 113].
The crucial claim is the following. The sender claims that for each $i \in
[m],$ $z_i' = G(y_i').$ Of course, the computation of $G(y_i')$ might
require a large circuit, and previous techniques will not apply without
incurring tremendous expense in terms of bits, rounds, and commitments.
The sender “proves” his claim by allowing the receiver to open one
of the pairs $(y_i',z_i').$ If S has behaved then indeed $z_i'=G(y_i'),$
and R will accept it. If S has not behaved, namely if $F(x) \not= y,$ then
given that $P(r_1,x)=(y_1',y_2',\ldots,y_m')$ and
$Q(r_2,z_1',z_2',\ldots,z_m')=y,$ there must be some pair $(y_i',z_i')$ for
which $z_i' \not= G(y_i').$ With probability at least $1/m,$ then, R will
detect cheating and reject. The probability is at least $1/n$ if a $(1,n)$
LRR is used, and through amplification by repetition, the probability is at
least $1-(1-1/n)^{kn^2}$ or asymptotically $1-e^{-kn}$ that R detects
cheating if it exists. (Locally random reductions with sharper parameters
will decrease the number of repetitions needed to amplify the detection
probability.)
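A quick numeric check of this amplification bound, with arbitrary illustrative parameters:

```python
import math

n, k = 4, 1                             # illustrative choices
p_miss = (1 - 1 / n) ** (k * n * n)     # all kn^2 repetitions miss the cheat
p_detect = 1 - p_miss

# (1 - 1/n)^(kn^2) = ((1 - 1/n)^n)^(kn) <= (1/e)^(kn), so the detection
# probability is at least the asymptotic bound 1 - e^(-kn).
assert p_detect >= 1 - math.exp(-k * n)
```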
Stage I (Commit):
S computes $y=F(x)$ and sends $y$ to R.
$K=1..kn^2$:
S commits two uniformly random strings $\rhat_1^K,\rhat_2^K$ of
length $p(n).$
R sends to S two uniformly random strings $\shat_1^K,\shat_2^K$ of
length $p(n).$
S sets $r_1^K=\rhat_1^K \oplus \shat_1^K,
r_2^K=\rhat_2^K \oplus \shat_2^K.$
S computes $P(r_1^K,x) = (y_1^K,y_2^K,\ldots,y_m^K).$
$i=1..m:$ S computes $z_i^K=G(y_i^K).$
S runs ($P$;$x,r_1^K$;$y_1^K,y_2^K,\ldots,y_m^K$).
R assigns $P_{K}$ to be accept or reject accordingly.
S runs ($Q$;$r_2^K,z_1^K,z_2^K,\ldots,z_m^K$;$y$).
R assigns $Q_{K}$ to be accept or reject accordingly.
R randomly chooses $(i_1,\ldots,i_{kn^2}) \in_R [m]^{kn^2}$ and sends it to S.
S opens the generic envelopes containing $y_{i_K}^K$ and $z_{i_K}^K$ for each $K.$
R receives the opened values $y_{i_K}'$ and $z_{i_K}'.$
R computes $G(y_{i_K}').$
If, for all $K \in [kn^2],$ $z_{i_K}'=G(y_{i_K}')$ and $P_K=Q_K=$accept, then
R outputs (accept,$y$);
otherwise R outputs (reject,$y$).
Stage II (Open):
S opens the generic envelopes containing $x.$
Protocol for sender S to place $x$ in a notarized envelope claiming $F(x)=y.$
$(P,Q)$ is a $(1,m)$ locally random reduction from $F$ to $G,$ and $P$
and $Q$ are $NC^1$ computations.
More formally, the overall protocol computes the composition of three
functions, $F^1,$ $F^2,$ and $F^3.$ The first computes a robust and private
representation of $x$ and $\set{r_1^K,y_1^K,y_2^K,\ldots,y_m^K}_{K\in
[kn^2]},$ as well as the appropriate output accept or reject
for receiver R. The output is clearly private: a reliable sender will
always provide inputs producing accept for R, and the output induced
by a corrupted sender depends only on the sender's information. The second
computes a robust and private representation of $y$ and
$\set{r_2^K,z_1^K,z_2^K,\ldots,z_m^K}_{K\in [kn^2]}$ as well as the
appropriate output accept or reject for receiver R.
The third computes the function that computes reject if R has
rejected anything so far, and computes reject if any $i_j$
selects a pair $(y_i,z_i)$ that was computed incorrectly, and otherwise
computes accept for R. Function $F^3$ is private, since if S is
reliable then the output is always accept. It is a robust
representation of the statement that $r_1,r_2,i_1,i_2,\ldots,i_{kn^2}$
select a random reduction and a set of pairs that are consistent.
The overall protocol runs three subprotocols in sequence, each of which
computes one of these robust and private representations of intermediate
values. It uses a security parameter of $k'=4k$ in each subprotocol to ensure that the
concatenation is sufficiently resilient.
The ideal notarized envelope protocol allows no chance to cause R to output
accept improperly. The composite interface for the three functions
$F^1,$ $F^2,$ and $F^3$ can simulate all outcomes accurately given that
cheating does not go undetected, namely that the executions of the
first two phases do not run into an undetected inconsistency, and
that $r_1,r_2,i_1,i_2,\ldots,i_{kn^2}$ do not lead to an undetected
inconsistency. The probability of such runs is at most $2^{-k'} 2^{-k'}
2^{-k'} < 2^{-k}.$ Hence the output distributions induced by the simulator
are exponentially close to that of the ideal notarized envelope protocol.
§ ZERO-KNOWLEDGE PROOFS ABOUT SECRETS
In <ref> we saw that any function $F_{\predicate}$ for a
predicate $\predicate$ admits a multiparty zero-knowledge interactive proof
system having constant round complexity and having message complexity
polynomial in the size of $C_{\predicate},$ a circuit for $\predicate.$
Using locally random reductions, we can show that the message complexity is
indeed independent of the circuit complexity, namely that the message
complexity is polynomial in $n,$ $m,$ and $k,$ for any $\predicate.$
Secret-sharing is indeed a method for committal, given one technical
restriction: since the committer has the option of refusing to
decommit, the reconstruction protocol must be adapted so that reliable
players give up their pieces only if the dealer requests.
A multiparty zero-knowledge proof system on secrets
(definition <ref>) is then a special case of a notarized
envelope scheme where envelopes are replaced by secret-sharing.
As a direct application of Theorem <ref>, we have the following.
(Constant Rounds ZKMIPSS) In the ideal envelope model, every function
family $F$ (and thus every predicate family $\scp$) has an
exponentially-resilient notarized envelope scheme that uses a constant
number of rounds and a small polynomial number of envelopes and bits.
CHAPTER: EFFICIENT MULTIPARTY PROTOCOLS FOR ANY FUNCTION
She was pale when she talked with the chaplain.
She said that the war instead of ennobling soldiers made beasts of them.
Downstairs the patients had stuck their tongues out at her and told
her she was a scarecrow and a frightful skinny old frump.
“Oh, how dreadful, chaplain,” she said in German.
“The people have been corrupted.”
Jaroslav Hašek, The Good Soldier Švejk
Chapter <ref> gives secure and reliable protocols for any function
in $NC^1$ using a constant number of rounds and tolerating faults in a
third of the network. The construction also gives constant round protocols
for any function but at a potential exponential blowup in message sizes.
Here we present secure and reliable protocols to compute arbitrary
functions in constant
rounds, using small, polynomial-size messages, and tolerating $t=O(\log n)$
faults. Thus, instead of incurring a price in message size in order to be
able to compute any function, we tolerate a smaller yet still reasonable
number of faults.
Locally random reductions appear twice in
our protocol. First, they serve to reduce a highly interactive network
computation of $F$ to an $n$-fold local computation of a polynomial
for $F.$ Second, they ensure that Byzantine faults are detected and
corrected.
The power of locally random reductions is evident in the results: any
function admits an unconditionally $t$-resilient protocol having constant round complexity and small message complexity, regardless of its computational complexity. Through simple algebraic
approaches, we separate the communication complexity of secure
computing from the computational complexity of the function to be
computed.
Let us now consider the inputs as a single sequence of bits instead of a
set of $n$ different inputs each having length $m.$ That is, for $a=bm+c,$
define $\xhat_a$ to be the $c^{th}$ bit of input $x_b.$ Each player shares
each bit of its input separately. Since we have a bound on the number
of input bits in terms of the number of players, but no such bound on the
output, we shall consider the number of output bits, $N(n),$ as a separate
parameter.
Let $t(n)$ and $m(n)$ satisfy $t(n)m(n)=O(\log n).$
Let $F$ be a family of functions $\set{F^n}$ mapping $(\Sigma^{m(n)})^n
\rightarrow \Sigma^{N(n)},$ where $N(n)$ is polynomial in $n.$
Then there exists a $t$-resilient protocol for
$F$ that runs in a constant number of rounds and uses a small
polynomial number of message bits.
Let $\alpha_1,\dots,\alpha_n$ be fixed nonzero field elements over a field
$E$ of size exceeding $n.$ Let $q=\lceil \log n \rceil$ and let $M$ be the
least multiple of $q$ exceeding $nm.$ Let $r=(2^q) \frac{M}{q},$ which is
clearly polynomially bounded in $n.$ The function $F^n$ maps
$\xhat_1,\ldots,\xhat_{nm}$ to an $N$-bit string; let
$F^{nb}(\xhat_1,\ldots,\xhat_{nm})$ be the $b^{th}$ bit of
$F^{n}(\xhat_1,\ldots,\xhat_{nm}).$ Without loss of generality, consider
$F^{nb}$ to be an $M$-variate function that is insensitive to the extra
$M-nm$ variables. By Theorem <ref>, there is a polynomial
$h_F^{nb}(w_1,\dots,w_r)$ of degree $M/q$ for $F^{nb},$ such that each
$w_i$ is a product of at most $q$ variables $\xhat_j.$
Figure <ref> describes the protocol. We have expanded on the
locally-random reduction so that the steps of this protocol are more
concrete; note that any locally-random reduction computable in $NC^1,$ not
simply the one based on polynomial evaluation that sufficed for the proof
of Theorem <ref>, will do. The particular locally-random
reduction of Theorem <ref> is easy to implement and very
efficient, so we use it directly.
Because the locally random reduction is in $NC^1$ (each $w_i$ is the
product of at most $q$ variables), it can be secretly evaluated in
constant rounds using
Theorem <ref>. The system generates $N(n)$ sets of $n$ queries,
one set for each output bit. The queries from each set are given to the
players individually. The crux of the solution is that, though computing
the reduced functions $h^{nb}$ may be complicated, it is a local
computation requiring no interaction.
(Share $\hat{x}_1,\dots,\hat{x}_{M}.$)
Each player $i$ shares each bit of its input separately.
Variables $\hat{x}_{nm+1},\ldots,\hat{x}_{M}$ are taken to be 0.
(Expand variables.)
Compute new secrets $w_1,\dots,w_{r}$ by running protocol
to evaluate $w_i(\hat{x}_1,\dots,\hat{x}_M).$
(Select LRR.)
Select $rt$ random secrets $p_1^1,\dots,p_1^t;\ldots;p_r^1,\dots,p_r^t,$
using protocol . These denote the higher coefficients of
polynomials $p_i(u)=w_i + \sum_j p_i^j u^j.$
(Apply LRR.)
Using protocol , compute new secrets
$w^i_j = p_j(\alpha_i)$ for $i=1..n,$ $j=1..r,$ in parallel.
Reveal $(w^i_1,\dots,w^i_r)$ to player $i.$
(Compute reduced problem locally.)
Each player $i$ computes
\[
v_i^b=h_F^{nb}(w^i_1,\dots,w^i_r) \hspace{0.3in} 1 \leq b \leq N(n)
\]
and secretly shares $(v_i^1,\ldots,v_i^{N(n)})$ among the network.
(Protect against Byzantine failures.)
Run Protocol ($i,j$) for each $1\leq i,j \leq n;$
thus each player $i$ proves to the network in zero-knowledge that
\[
(v_i^1,\ldots,v_i^{N(n)}) = (h_F^{n1}(w^i_1,\dots,w^i_r),\ldots, h_F^{n,N(n)}(w^i_1,\dots,w^i_r)).
\]
* Each player $j$ broadcasts a vote as to whether each player $i$ gave him a
satisfactory proof in step <ref>. Disqualify each player $i$ who
is impeached by a majority; reveal its query $(w^i_1,\dots,w^i_r)$
and use
\[
v_i^b=h_F^{nb}(w^i_1,\dots,w^i_r) \hspace{0.3in} 1 \leq b \leq N(n)
\]
as publicly known constants for the remainder of the protocol.
(Reconstruct $f.$) Run protocol to secretly interpolate
each set $(v_1^b,v_2^b,\ldots,v_n^b),$ for $b=1..N(n)$ in parallel.
For each set, obtain the free term $y^b$ of the polynomial of degree $tM/q$
passing through the points.
Protocol to evaluate any $F$ in constant rounds and low fault-tolerance.
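The algebraic heart of the figure's reduction steps (select LRR, apply LRR, compute the reduced problem locally, reconstruct) can be sketched in the clear, i.e., without the secret sharing, zero-knowledge proofs, or voting; the field, the toy reduced polynomial $h$, and all parameter values below are illustrative assumptions:

```python
# In-the-clear sketch: mask the secrets w_j on random degree-t curves,
# let each of n "players" evaluate the reduced polynomial h locally on its
# point of the curve, then interpolate the free term h(w).
import random

PRIME = 97
n, r, t = 5, 2, 1
alphas = list(range(1, n + 1))          # fixed nonzero field elements

def eval_poly(coeffs, u):               # coeffs[0] is the free term
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * u + c) % PRIME
    return acc

def free_term(xs, ys):
    """Lagrange interpolation at u = 0 over Z_PRIME."""
    s = 0
    for j in range(len(xs)):
        num = den = 1
        for k in range(len(xs)):
            if k != j:
                num = num * (-xs[k]) % PRIME
                den = den * (xs[j] - xs[k]) % PRIME
        s = (s + ys[j] * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return s

h = lambda w: (w[0] * w[1] + 3 * w[0]) % PRIME   # toy reduced polynomial, degree 2

w = [random.randrange(PRIME) for _ in range(r)]
# Select LRR: random higher coefficients for p_j(u) = w_j + sum_i p_j^i u^i.
polys = [[wj] + [random.randrange(PRIME) for _ in range(t)] for wj in w]
# Apply LRR: player i receives (p_1(alpha_i), ..., p_r(alpha_i)).
queries = [[eval_poly(pj, a) for pj in polys] for a in alphas]
# Local computation: v_i = h(query_i), a degree t*deg(h) = 2 polynomial in u.
v = [h(q) for q in queries]
# Reconstruct: any deg+1 = 3 of the v_i determine the free term h(w).
assert free_term(alphas[:3], v[:3]) == h(w)
```

In the real protocol these evaluations happen on secret shares, so no player ever sees $w$ or another player's query; only the evaluation of $h$ is performed in the open, by each player on its own query.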
Locally random reductions, however, are not necessarily robust against
errors in the reduced computations. Reducing the overall computation to
locally independent computations suffices for privacy but not for
robustness. Thus we need to verify that the results of
step <ref> are accurate before using them to interpolate the
final answer.
Let us expand upon step <ref>. Each player $i$ gives a ZKMIPS
proof (<ref>) to each player $j$ that the secret value of
$v_i^b$ it shared satisfies $v_i^b=h_F^{nb}(w^i_1,\dots,w^i_r),$ using the
methods of Chapter <ref>. This is not quite sufficient per
se, since a faulty player may cause some good players to believe the proof
while others find fault. Thus, after the proving phase, every player $j$
broadcasts a vote $V_{ji}$ as to whether it believes the proof of player
$i.$ If a majority vote to accept $i,$ then player $i$ is accepted. A
majority vote in favor of player $i$'s honesty implies that at least one
reliable player found player $i$ correct, which means that player $i$ was
able to prove its correctness to someone, and its results are therefore
correct. If player $i$ is rejected by the vote, it failed to satisfy any
reliable player of its behavior, and the query $(w^i_1,\dots,w^i_r)$ with
which it is supposed to compute is revealed so that the reliable players
may incorporate it properly.
Thus, we have specified eight function families corresponding to the eight
modules of the protocol. Each of these computes a robust and private
representation of its results (small lists of questions from the reductions
are independent of the inputs and hence private; the $v$ values are robust
given that the zero-knowledge proofs succeed, or else they are broadcast
implicitly through the revelation of $w^i_1,\ldots,w^i_r$; pieces of the
various secrets are robust and private representations). By
Theorem <ref>, the composition is $t$-resilient.
The number of rounds is constant, by the following observations.
Step <ref> is an evaluation of several polynomials $w_i$ of
degree at most $q,$ which is an $NC^1$ computation. By Theorem <ref>,
this takes constant rounds and polynomial message sizes.
Step <ref>, the selection of $r$ random polynomials of degree $t$
with free terms $w_1,\dots,w_r,$ requires the secret choice of $rt$
coefficients $p_1^1,\dots,p_1^t;\ldots;p_r^1,\dots,p_r^t.$ The selection of
these random secret
values is performed in parallel and requires constant rounds.
Applying polynomials $p_1,\dots,p_r$ to the known, fixed values
$\alpha_1,\dots,\alpha_n$ is a linear combination and hence requires no
interaction.
Each player $i$ computes $h_F^{nb}$ locally and non-interactively,
and its resharing requires polynomial bits and constant rounds.
The zero-knowledge proofs of step <ref> are slightly
more complicated yet still require constant rounds and polynomial
message sizes.
The interpolation of $y^b$ from $v_1^b,\dots,v_{n}^b,$
step <ref>, uses an $NC^1$ circuit.
PART: APPLICATIONS AND CONCLUSIONS
CHAPTER: APPLICATIONS
The artist possesses the ability to breathe soul into the lifeless
product of the machine.
Walter Gropius
The bulk of this dissertation has addressed formal specifications of
protocols for networks of players to compute arbitrary functions in a
distributed manner. In this chapter we present efficient and
practical solutions for useful and important computations.
It would certainly be impractical to require multiparty protocols for
each distributed computation a collection of processors must make. Some
computations, especially administrative and system-security oriented
computations, are of greater importance than others, but at the same time
need to be performed less often. For example, authenticating a user or a
remote process must be correct and secure but is performed with lower
frequency than other computations. Authorizing access to a file is another
example.
We propose very fast protocols for authentication, authorization, and
anonymous mail. These protocols are designed with specific goals in mind and
capitalize on properties that a general-purpose protocol-compiler (based
only on circuit specifications) ignores.
For the purposes of ensuring distributed system reliability, we envision a
security kernel, in terms of processors and of
processes. That is, not all the processors in a network need be involved
in ensuring the resilience of every computation. A specific committee of,
say, seven processors would suffice, with the assurance that no more than
three are compromised. These processors are tightly coupled with
high-speed secure communication lines.
With respect to the operations themselves, we propose that authentication
and authorization are infrequent yet essential computations, and that it is
reasonable to take special care to ensure the correct and secure operation
of a small body of desired processes. Once a process has been
authenticated and authorized to read a file, the authorization need not be
repeated and the authentication can be omitted during repeated file
accesses.
The Secure UNIX of Reeds and McElroy [93, 94, 95, 108] adopts
this sort of approach, allowing two remote processes to communicate over a
single pipe that excludes other processes and that fails as soon as one
process detaches. While this does allow certain attacks to succeed — a
corrupt operating system might capture an authorized line to a remote
system by running an authenticated process and substituting its own process
after the authentication with the remote host has occurred — the attacks
are more difficult and complex. Furthermore, a repeated authentication of
the host operating system, not each individual process at each
individual request, would serve as a more efficient safety check. If the
host is continually authenticated then presumably its operations are valid
and the processes it runs cannot be substituted after their initial
authentication. This is a reasonable trade-off of absolute security for
efficiency.
Our efficient algorithms for specific tasks support the approach of
infrequent yet highly efficient and secure kernel operations. We focus not
simply on achieving a constant number of rounds of interaction but on
achieving a small constant number of rounds, on the order of 2 or 3.
§ ON STRONG ASSUMPTIONS
In general, we assume a completely connected, synchronous network with
private lines. The robustness of
polynomial interpolation suggests the techniques developed here should be
adaptable to asynchronous models. The fault-tolerance levels would be
lower, and many technical issues must be considered (such as the
availability of broadcast channels vs. Byzantine Agreement
subprotocols). Because our protocols are robust against fail-stop faults,
their extension to problems of asynchronicity is natural.
A network without private lines can still use the
protocols developed here if suitable encryption of the lines is available.
Care must be taken to ensure that this approach of treating sub-issues in a
modular fashion does in fact treat each module independently — for
example, to avoid unintended interaction between the design of the overall
protocol and the design of a communication line protection scheme, the same
encryption methods should not be used at all levels. That is, if a
protocol assumes a private channel but also requires an encryption for
other reasons, the same encryption should not be used to replace the
private channel with an encrypted channel.
Proofs of security remain necessary.
We have also assumed a complete network. This
assumption is highly unlikely to hold except on a local-area network. In
practical terms, however, it is not hard to achieve for a processor
security kernel with a small number of tightly-coupled processors. In
theoretical terms, the issue of simulating a complete network using a
$(t+1)$-connected network is an interesting separate subproblem.
The modular treatment of the various subproblems provides optimism for the
design of very general protocols that are robust under very harsh
circumstances. A word of caution must be interjected: as always, intuitive
claims to security require formal treatment. But the modular approach
makes the analysis easier.
§ CHOICE OF FIELD
For the purpose of implementation, we suggest a few natural finite fields.
The first is $\gf(257),$ the set of integers modulo 257. This field easily
encodes bytes using integers in the range 0–255. Modular arithmetic is
fast and standard, though individual bit operations on the eight bits are
not quite so facile. Two-byte words are supported by a modulus of 65537.
The other fields we suggest have characteristic 2: $\gf(256)=\gf(2^8),$
$\gf(65536)=\gf(2^{16}),$ $\gf(2^{32}),$ and $\gf(2^{64}).$
Field elements are represented as polynomials over $\gf(2)$ modulo an
irreducible polynomial (for example, $x^8+x^4+x^3+x^2+1$ for
$\gf(2^8),$ $x^{16}+x^{12}+x^3+x+1$ for $\gf(2^{16}),$ and
$x^{32}+x^{22}+x^2+x+1$ for $\gf(2^{32})$).
Each coefficient is a single bit, so that the field elements themselves
correspond naturally to common word-sizes. Addition over these fields is
quite simple and normally requires a single machine instruction: take the
bitwise exclusive-or. Multiplication is somewhat more complicated,
requiring one to reduce the product modulo the appropriate irreducible
polynomial, but fast algorithms exist.
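As a concrete sketch (ours, not part of the dissertation), the shift-and-reduce multiplication over $\gf(2^8)$ with the irreducible polynomial $x^8+x^4+x^3+x^2+1$ given above might look like this:

```python
IRRED = 0x11D  # x^8 + x^4 + x^3 + x^2 + 1 as a bitmask

def gf_add(a, b):
    # Addition in characteristic 2 is bitwise exclusive-or.
    return a ^ b

def gf_mul(a, b):
    # Russian-peasant multiplication: accumulate shifted copies of a,
    # reducing modulo the irreducible polynomial whenever the degree-8
    # term appears.
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:   # degree-8 term present: reduce
            a ^= IRRED
    return result
```

For example, $(x+1)^2 = x^2+1$ in characteristic 2, so `gf_mul(3, 3)` yields 5; table-driven versions are faster in practice, but the reduction step is the same.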
Generating random bits as the exclusive-or of random bits supplied by
many players is much easier using a field of characteristic 2 than one of
characteristic 257, since addition of random secret 0/1 values will
suffice. In general, bitwise operations are facilitated by using
arithmetic over a field of characteristic 2.
§ A FAST AND PRACTICAL PASSWORD SCHEME
In the UNIX$^{tm}$ password scheme for user authentication,
the passwords are not stored in a file on the system. Rather, each
password is encrypted (using a DES-like encryption function) and the result
is stored in a password file. Even if the password file is compromised, an
attacker cannot learn the passwords directly from the file. When a user
would like to log in to the system, his attempted password is encrypted and
compared against the stored encryption. If they match, the user is accepted.
One can imagine a generalization to the distributed environment in which
each host authenticates the user and the system as a whole considers the
user acceptable if a majority of the hosts have authenticated him. An
immediate problem surfaces with password schemes: if the same password is
used for all the hosts, then a single corrupt host will learn the user's
global password when the user requests authentication. This can be
circumvented by having a different password for each host (requiring some
nontrivial and probably security-prone bookkeeping by the user).
Nevertheless, there are many disadvantages, including the fundamental
problem that the encryption function itself is only assumed to be secure.
We propose a simple and efficient password scheme discovered independently
by Michael Rabin [105] (this work was done while the author was on leave of
absence from Harvard University in the spring and summer of 1988) that has some
similar properties but does not depend on the security of an encryption
function like DES. That is, passwords cannot be obtained by capturing the
files on some fraction of the systems, and are certainly not compromised
during an execution in which one of the hosts is corrupt. In order
to break the password directly, an attacker must corrupt a majority of the
hosts (we ignore dictionary searches and repeated attempts, which are
certainly valid issues for any password scheme but are orthogonal to the
problem of breaking the system directly).
The basic idea is as follows: a user secretly shares his password
$\password$ among the system at start-up. Later, when he requests
authentication, he demonstrates his knowledge of the password by sharing it
again. Let us call the second secret his attempt, $\attempt.$ The system
must compare $\password$ to $\attempt$ without revealing $\password$.
Let $E=\gf(2^{64}).$
(P1) User shares $\password.$
(P2) Each host $i$ selects two uniformly random 64-bit elements $r_i,s_i,$
and shares them.
(P3) Run the protocol (non-interactively) to secretly compute
$r=\sum r_j, s=\sum s_j$: more precisely, each host $i$ sets
$\piece_i(r) = \sum_j \piece_i(r_j)$ and
$\piece_i(s) = \sum_j \piece_i(s_j).$
Protocol to initialize a user's password in a distributed system of
$n$ hosts.
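The non-interactive summing of shares in the initialization protocol relies on the linearity of polynomial secret sharing: the pointwise sum of shares is itself a sharing of the sum. A minimal single-machine sketch (ours; the protocol uses $\gf(2^{64})$, while we use a prime field here purely to keep the arithmetic simple) with $n=7$ hosts and $t=3$ tolerated faults:

```python
import secrets

P = 2**61 - 1  # a Mersenne prime standing in for GF(2^64)

def share(secret, n, t):
    # Shamir sharing: a random degree-t polynomial f with f(0) = secret;
    # host i (1-indexed) receives the piece f(i).
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t)]
    return [sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P
            for i in range(1, n + 1)]

def reconstruct(points):
    # Lagrange interpolation at x = 0 from (host, piece) pairs.
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret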
It does not suffice to compute $(\password - \attempt)$ and reveal the
result, since knowing $\attempt$ would then reveal $\password.$ This
remains true even if the user is not supplied with the result; if the
user cooperates with just one corrupted host involved in the
authentication, then that host can reveal the result when it learns it.
One solution is to use Theorem <ref> to secretly compute
$\norm{\password - \attempt}$ in a constant number of rounds. The protocol
most certainly is correct and uses only a few rounds, but we propose an
even simpler and more efficient protocol that is designed with the
particular password problem in mind.
The essential information that the system must compute is whether or not
$\password = \attempt.$ The result of the computation should specify
whether or not they are equal, but it need not be a deterministic result;
that is, the result need not be either 0 or 1. What matters is that if the
values are not equal, the result of the computation should have the same
distribution regardless of what attempt is made and of what the password
is. The problem of revealing information about the password through a
failed attempt is then solved.
The trick is the following: compute and then reveal $(\password -
\attempt)\cdot r,$ where $r$ is secret and uniformly random over $E -
\set{0}.$ If the attempt is correct, then the result is always 0. But if
the attempt is invalid, then the result will be a uniformly distributed
nonzero random field element, for any $\password$ and $\attempt.$ Thus, no
information about $\password$ or $\attempt$ is revealed by the result, apart
from whether they are identical or not. The system accepts the user if and
only if the result is 0;
each host outputs accept if the result is 0 and otherwise outputs reject.
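In centralized form (ignoring the secret sharing, with a prime field standing in for $\gf(2^{64})$), the blinding trick is just the following; the sketch and its names are ours:

```python
import secrets

P = 2**61 - 1  # prime field standing in for GF(2^64)

def blinded_equality_check(password, attempt):
    # Reveal only (password - attempt) * r for a fresh random nonzero r.
    # The result is 0 iff the values are equal; otherwise it is a
    # uniformly random nonzero field element, leaking nothing further.
    r = 0
    while r == 0:
        r = secrets.randbelow(P)
    return ((password - attempt) * r) % P
```

In the protocol this product is computed on shares, so no host ever sees $\password$, $\attempt$, or $r$ in the clear.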
The only minor inefficiency is to generate a nonzero random field element.
The protocol will generate a uniformly random secret $r$ in
$E,$ but we should then have to normalize it in order to test if $r=0,$
returning us to the original problem. One simple and acceptable approach
is to finesse the problem by using a large field $E$ so that the
chance that $r=0$ is negligible. This would allow some very low
probability that a failed attempt will succeed. In fact, the chance of
such an event is the same as guessing a valid password. See
Figure <ref>.
(A1) User shares $\attempt.$ Run the protocol (non-interactively) to
secretly compute $v=\password - \attempt$: more precisely, each host $i$
sets $\piece_i(v) = \piece_i(\password) - \piece_i(\attempt).$
(A2) To be ready for the next authentication, each host $i$ selects two
uniformly random 64-bit elements $r_i^{new},s_i^{new},$ and shares them.
Each host $i$ sets $\piece_i(r^{new}) = \sum_j \piece_i(r_j^{new})$ and
$\piece_i(s^{new}) = \sum_j \piece_i(s_j^{new}).$
(A3) Run the protocol to secretly compute $w=v \cdot r.$
(A4) If $w=0,$ accept the user; otherwise reject the user.
Protocol to authenticate a user who has already shared password $\password.$
The chance that the scheme is broken through this error is less than that
of randomly selecting a password, and it is neither repeatable (the user
has no control over $r$) nor assisted by ulterior means (such as dictionary
lookups). It therefore doubles at most the inherent chance of failure.
Figure <ref> describes the stripped-down scheme. One round
suffices to share the attempt, and one to check it (with only fail-stop faults).
A completely accurate solution is the following: generate a second random
secret, $s,$ and multiply it by $r.$ If the result is nonzero, then the
system has a proof that $r \not=0$ without revealing $r.$
The chance that randomly choosing $r$ and $s$ gives $r=0$ or that $r\not=
0$ and $s=0$ is small: $\frac{2\abs{E}-1}{\abs{E}^2}.$ Otherwise,
with high probability the system has generated a uniformly random nonzero
secret $r,$ along with a (“zero-knowledge”) certificate that $r
\not= 0.$
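This certificate idea is easy to sketch in centralized form (ours, with a prime field standing in for the protocol's field):

```python
import secrets

P = 2**61 - 1  # prime field standing in for GF(2^64)

def certified_nonzero_secret():
    # Draw random r and s; revealing only the product r*s certifies that
    # r != 0 without revealing r itself. A zero product is inconclusive
    # and forces a redraw, which happens with small probability.
    while True:
        r = secrets.randbelow(P)
        s = secrets.randbelow(P)
        cert = (r * s) % P
        if cert != 0:
            return r, cert
```

In the distributed setting, $r$ and $s$ remain secretly shared and only the product $u = r\cdot s$ is reconstructed.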
For concreteness, let us use the field $\gf(2^{64})$ to handle passwords of
8 bytes. The password set-up protocol is described in
Figure <ref>. The authentication protocol is described in
Figure <ref>. Note that in the authentication protocol, a failure
to obtain a nonzero secret $r$ requires that the network generate one; this
occurs with probability $2^{-64}$ and can essentially be ignored.
(A1) User shares $\attempt.$ Run the protocol (non-interactively) to
secretly compute $v=\password - \attempt$: more precisely, each host $i$
sets $\piece_i(v) = \piece_i(\password) - \piece_i(\attempt).$
(A2) To be ready for the next authentication, each host $i$ selects two
uniformly random 64-bit elements $r_i^{new},s_i^{new},$ and shares them.
Each host $i$ sets $\piece_i(r^{new}) = \sum_j \piece_i(r_j^{new})$ and
$\piece_i(s^{new}) = \sum_j \piece_i(s_j^{new}).$
(A3) Run the protocol to secretly compute $w=v \cdot r$ and
$u=r \cdot s.$ (Refer to $r^{new}$ as $r$ and $s^{new}$ as $s.$)
(A4) Reconstruct $w$ and $u.$
(A5) If $u \not= 0$: if $w=0,$ accept the user; otherwise reject the user.
If $u=0,$ repeat step (A2).
Protocol to authenticate a user who has already shared password $\password.$
For $2t<n,$ the protocol is statistically $t$-resilient and requires
a constant expected number of rounds.
Clearly, host $i$ outputs accept iff $r \not= 0$ and
$(\password-\attempt)\cdot r = 0,$ i.e. iff $\password = \attempt.$
If $\password=\attempt,$ the distribution on $(w,u)$ is
\[
\set{r \leftarrow \uniform(E); s \leftarrow \uniform(E): (0,rs) },
\]
regardless of the values of $\password$ and $\attempt.$
If $\password \not= \attempt,$ the distribution on $(w,u)$ is
\[
\set{r \leftarrow \uniform(E); s \leftarrow \uniform(E):
((\password-\attempt) \cdot r,rs)},
\]
which for any (unequal) values of $\password$ and $\attempt,$
is fixed and identical to
\[
\set{r \leftarrow \uniform(E); s \leftarrow \uniform(E);
w \leftarrow \uniform(E-\set{0}): (w,rs)}.
\]
For $2t<n,$ the protocol is $t$-resilient against passive (or even
fail-stop) adversaries and requires only $(3+\epsilon)$ expected rounds,
where $\epsilon=4/\abs{E}$ is small.
Follows from Theorem <ref> and the observation that, when messages
are always correct, the number of rounds required for sharing and for
multiplying secrets is 1. Step (A2) is repeated with probability
$\frac{2\abs{E}-1}{\abs{E}^2},$ which gives an expected overhead of
fewer than $4/\abs{E}$ rounds when $\abs{E}\geq 7.$
Remark: The secure authentication schemes we have
presented take the same amount of time as the simple method of having a
user authenticate itself to each processor separately and then taking a
vote. They provide, on the other hand, a good deal more robustness. For
example, once an intruder has recorded the message from a valid user to one
host in the voting scheme, that password can be re-used; the invader need
only accumulate enough attacks over time. In our scheme, however, at least
half of the communications must be tapped simultaneously to obtain
any information at all.
§.§ Unanimous Secret Ballots
Secret ballots are fast and simple (<ref>), and by
Theorem <ref>, checking whether a secret vote is unanimous or
not, without revealing the tally, also admits a protocol in constant
rounds. The authorization techniques listed above suggest a faster and
more direct method: first compute $v=\sum_i x_i,$ where each $x_i$ is a
verified 0/1 secret vote, generate a random (nonzero) secret $r,$ and
secretly compute $vr.$ Reveal the result. If the ballot is unanimous, the
result will be 0; otherwise it will be a uniformly distributed nonzero
field element.
§ ANONYMOUS MAIL
A mail system ought to provide privacy on two levels. First, it should
hide the contents of each message from the general public, and certainly
from those involved in its delivery. Second, it should not need to
identify the sender of the message, either to those involved in the
delivery or to the recipient. It does not suffice simply to deliver a
message without attaching a sender's name in the header, since the name of
the originating host or workstation may be enough to reveal the sender. In
fact, the delivery of anonymous messages in a network is powerful enough to
support general function computation [6], though we shall not go
into the details here. Delivering mail anonymously even when hosts are
identified is thus a powerful and a useful tool.
To be practical, however, a mail scheme must be efficient. If the message
is an undivided unit, then it traverses some path from the sender S to the
receiver R. If every host on that path is corrupt, then the receiver can
identify the origin. Thus the path must be longer than the number $t$ of
corruptible hosts. Already this is a very inefficient solution.
Furthermore, it requires that the message be encrypted in some form so that
the intermediate hosts cannot read the contents.
We adopt a different approach: divide the message in an appropriate manner
and send each portion via a different path. If sufficiently few pieces are
captured, then nothing is learned about the contents. Secretly sharing the
message and reconstructing it for the recipient suffices. This is trivial
in a complete network with private channels, but in an incomplete network,
when two players may be forced to communicate via intermediate players,
secret sharing solves the problem of privacy robustly.
Preserving the privacy of the sender, however, is a slightly more difficult
goal. We must ensure that the paths to R do not identify S. Thus, the
direct approach of secretly sharing the message and reconstructing it
for R is suggestive, but not sufficient. Every host knows who shared
the secret.
Consider the following situation: there is some secret permutation $\sigma$
on the elements $\set{1,\ldots,n},$ and each player $i$ knows $\sigma(i).$
Each player $i$ would like to send a message $M(i)$ to player $\sigma(i)$
without identifying itself. We would like to compute a function $F$ such
that $F_{\sigma(i)}(M(1),\ldots,M(n),\sigma(1),\ldots,\sigma(n)) =M(i).$
One method to perform this computation would be to compute
\[
F_j(M(1),\ldots,M(n),\sigma(1),\ldots,\sigma(n)) =
\sum_{i=1}^n M(i) \delta(\sigma(i),j)
\]
Because the delta function is in $NC^1,$ Theorem <ref> implies
there is a constant-rounds protocol to compute $F.$ Note that we have
omitted a verification that the secrets $\sigma(1),\ldots,\sigma(n)$ do
indeed define a permutation; malicious players could upset the protocol.
An additional function that compares each pair of destinations and makes
sure no collisions occur is a necessary prerequisite.
A more efficient way is the following. Each player, instead of sharing
$\sigma(i),$ shares a set of $n$ secrets $d(i,1),\ldots,d(i,n),$ such that
$d(i,\sigma(i))=1$ and the remaining secrets are 0. It is trivial to check
privately that every $d(i,j)$ is 0 or 1 by computing and revealing
$d(i,j)(d(i,j)-1).$ It is also trivial to ensure that each player shares a
valid vector, by computing $\sum_j d(i,j)$ for each $i$ and revealing the
sums. Thirdly, it is trivial to check for collisions by taking $\sum_i
d(i,j)$ for each $j$ and revealing the sums. (Collisions give a sum
greater than 1, which reveals no information to the faulty processors that
changed their destinations.) Thus there are three simple verifications to
perform in parallel, requiring the time of one multiplication and one revelation.
Given verified vectors, the final outputs are simply
\[
F_j(M(1),\ldots,M(n),d(1,1),\ldots,d(n,n)) =
\sum_{i=1}^n M(i) d(i,j).
\]
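In plaintext form (ignoring the secret sharing; names are ours), the three verifications and the delivery sum amount to the following sketch:

```python
def route(messages, sigma):
    # d[i][j] = 1 iff sigma[i] == j; player j's output is
    # sum_i messages[i] * d[i][j], the unique message destined for j
    # when sigma is a permutation.
    n = len(messages)
    d = [[1 if sigma[i] == j else 0 for j in range(n)] for i in range(n)]
    # The three public checks from the text (here as plain assertions;
    # in the protocol each is computed on shares and only sums or
    # products like d*(d-1) are revealed):
    assert all(x * (x - 1) == 0 for row in d for x in row)  # 0/1 entries
    assert all(sum(row) == 1 for row in d)                  # valid vectors
    assert all(sum(d[i][j] for i in range(n)) <= 1
               for j in range(n))                           # no collisions
    return [sum(messages[i] * d[i][j] for i in range(n)) for j in range(n)]
```

For instance, with $\sigma = (3,1,2)$ on three players, `route([10, 20, 30], [2, 0, 1])` delivers `[20, 30, 10]`.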
This construction gives an easy solution to the converse problem, limited
espionage. (The terms espionage and anonymous mail were suggested
by M. Rabin.)
Like 1-out-of-2 Oblivious Transfer, in which the recipient chooses one of
two values to learn but the holder does not find out which, limited
espionage allows each player in the network to learn one secret from a list
of $N$ secrets without revealing which one it chose. The protocol requires
each player to share a 0/1 vector specifying a 1 in the component
corresponding to the secret it desires, and the verifications are performed
as above before computing the simple weighted sum. Note that in this case,
collisions need not be avoided; two players may look at the same secret.
The general problem of supporting arbitrary maps from sources to
destinations and multiple messages from a single player is somewhat more
involved. Allow each player to send some number $N$ of messages to any
other player. The set $\set{M(i,j,k) \mid i,j\in [n], k \in [N]}$
describes the list of all messages. Player $i$ secretly shares each
$M(i,j,k)$ as a secret $s(j,(i-1)N+k).$ The goal is to construct a random
secret permutation $\sigma \in {\cal S}_{nN}$ for each $j,$ and apply it to
the secrets $s(j,1),\ldots,s(j,nN)$ destined for player $j.$ The permuted
list of results hides the origins and is revealed directly to player $j.$
To generate a random secret permutation $X(j)$ on $nN$ indices, protocol
of <ref> suffices, using 0/1
matrices to represent permutations. The permuted list of results for
player $j$ is computed by secretly multiplying the matrix $X(j)$ by the
vector of secrets $(s(j,1),\ldots,s(j,nN)).$
While mathematically correct, the size of the matrices makes this solution
prohibitive. In a more practical vein, a different sort of approach is
preferable. The approach used in delivering mail whose destinations are a
permutation of the sources is useful. We provide each player $j$ with a
set of $n$ “mailboxes,” namely a fixed set of secrets $b(j,l).$ Each
mailbox has a corresponding set of flags $c(j,l)$ that indicate whether one
or more players have attempted to place a message in the mailbox. The
protocol is simple: player $i$ sends a message $M(i,j,k)$ to player $j$ by
choosing a mailbox $(j,l)$ at random and sharing $M(i,j,k)$ as a secret
value $B(j,l,i)$ and sharing $C(j,l,i)=1.$ Other $C(j,l,i)$ values are
shared as 0. The 0/1 verification is easy, and it is easy to bound the
number of messages sent by each player.
Finally, the players secretly compute $b(j,l)= \sum_i B(j,l,i)$ and
$c(j,l)= \sum_i C(j,l,i).$ Each $b(j,l)$ and $c(j,l)$ is revealed to player
$j,$ who detects if there is a collision by virtue of $c(j,l)>1.$ Each
player announces the boxes containing collisions and the protocol is
repeated so that the senders can try again. By choosing the number of
mailboxes sufficiently large for the expected traffic — a practical issue
— the expected number of repetitions is small.
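A plaintext sketch of one mailbox round for a single recipient (ours; integer addition stands in for field addition of shares, and function names are illustrative):

```python
import random

def mailbox_round(arrivals, num_boxes, rng=random):
    # arrivals: messages destined for one recipient this round.
    # Each sender drops its message into a random box; b accumulates the
    # contents b(j,l) = sum_i B(j,l,i), while c counts the flags
    # c(j,l) = sum_i C(j,l,i), exposing collisions without exposing senders.
    b = [0] * num_boxes
    c = [0] * num_boxes
    for msg in arrivals:
        l = rng.randrange(num_boxes)
        b[l] += msg
        c[l] += 1
    delivered = [b[l] for l in range(num_boxes) if c[l] == 1]
    collisions = [l for l in range(num_boxes) if c[l] > 1]
    return delivered, collisions
```

Boxes with $c(j,l)=1$ yield their message directly; boxes with $c(j,l)>1$ are announced and their senders retry in the next round.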
CHAPTER: CONCLUSIONS
Forsan et haec olim meminisse iuvabit.
[Perhaps this will be a pleasure to look back on one day.]
Virgil, the Aeneid
The issues of this dissertation are three-fold. First, reliability and
security are best ensured by avoiding assumptions that particular
processors are reliable. Second, proving the security and reliability of
distributed protocols requires clear and concise definitions. Our
definition of resilience captures security and reliability simultaneously
and a priori captures all the intuitive properties one expects of a
robust system. Third, theoretical protocols are interesting, but to be
practical a higher standard of efficiency must be met. Protocols requiring
more than a dozen rounds of interaction are likely to be too complex or too
inefficient to be implementable; low communication complexity is of utmost
importance.
In general, we have made some strong simplifying assumptions, such as the
presence of a complete, synchronous network with private channels. These
issues are of deep practical and theoretical significance, but we place
them aside in the hope that a modular approach provides a deeper
understanding of the fundamental nature of security and reliability in
interactive protocols. Chapter <ref> (<ref>)
discusses the incorporation of solutions for underlying network problems.
The robustness of our protocols against message omissions and fail-stop
faults suggests a cautious optimism that, with appropriate formal proofs,
they can be provided with modular subroutines to withstand a weakening of
the simplifying assumptions.
The understanding of security and reliability has been hampered by a lack
of formal proofs and definitions. Much current research develops
techniques that are intuitively secure but which have not been proven
secure. We provide a definition for resilience that solidifies the
foundations of research into security.
Precise definitions of standard properties such as correctness and privacy
fall out as an easy consequence. We believe that all the desirable
properties of security and reliability are captured a priori by
considering the ideal protocol as our standard of computational
resilience. Privacy, correctness, and other properties are aspects of the
same definition, tied together by the idea of an ideal, trusted party,
and measurable by the idea of relative resilience.
Our definition for resilience provides not just a means to rate the
security of a protocol with respect to an ideal situation but a means
to compare arbitrary protocols. The ability to measure relative resilience
provides a greater flexibility and modularity in proof techniques and
protocol design. Rather than directly prove a protocol is secure, one can
compare it to an intermediate protocol that is itself proven secure.
Relative resilience also provides a conceptually easier means to show when
protocol concatenation preserves the security of the protocols, a very
important issue that is often difficult to analyze.
It remains to be seen if unconditional security, high fault-tolerance,
highly complex function computation, and low communication complexity can
be achieved simultaneously. For functions of arbitrarily high complexity,
the locally random reductions of Chapter <ref> provide low
communication complexity, but at a cost of lower ($O(\log n)$) fault
tolerance. Chapter <ref> achieves low communication complexity
and high fault-tolerance, but the results are conditioned on the existence
of a one-way function. If the computational complexity of the function is
restricted, Chapter <ref> demonstrates that
low communication complexity and low local computational
complexity are simultaneously achievable.
Locally random reductions with better parameters — i.e. a higher
ratio of the independence-set size $k$ to the number $m$ of queries — are
one path to achieving higher fault-tolerance for arbitrarily complex
functions while using a constant number of rounds. A characterization of
the achievable parameters for locally random reductions would be of general
interest beyond serving simply to provide more secure protocols.
We have separated the communication complexity of secure computation from
the computational complexity of the desired result. This bodes well for
practical implementations, but is more striking from a theoretical
standpoint. The crucial tool of locally random reductions vastly
reduces upper bounds on the number of rounds of communication for a
variety of cryptographic protocols. In a deeper sense, it has triggered
other research leading to a result, $\ip=\pspace,$ which relates a
complexity class associated with interaction to a complexity class
associated with computational complexity. The algebraic nature of the
locally random reductions we develop suggests not only wide
applications but a deeper understanding of the nature of interaction,
computational complexity, and reliable computation.
[1]
M. Abadi, J. Feigenbaum, J. Kilian.
“On Hiding Information from an Oracle.”
J. Comput. Systems Sci. 39 (1989), 21–50.
[2]
D. Angluin, L. Valiant.
“Fast Probabilistic Algorithms for Hamiltonian Paths and Matchings.”
J. Comput. Systems Sci. 18 (1979), 155–193.
[3] L. Babai, L. Fortnow, C. Lund.
“Non-Deterministic Exponential Time has Two-Prover Interactive Proofs,”
Submitted to FOCS, IEEE, 1990.
[4]
L. Babai, S. Moran.
“Arthur-Merlin Games: A Randomized Proof System,
and a Hierarchy of Complexity Classes.”
J. Comput. System Sci. 36 (1988), 254–276.
[5]
J. Bar-Ilan, D. Beaver.
“Non-Cryptographic Fault-Tolerant Computing
in a Constant Expected Number of Rounds of Interaction.”
Proceedings of PODC, ACM, 1989, 201–209.
[6]
J. Bar-Ilan, D. Beaver, M. Rabin.
Personal communication, 1988.
[7]
D. Barrington,
“Bounded Width Polynomial Size Branching Programs
Recognize Exactly those Languages in $NC^1.$ ”
Proceedings of the $18^{th}$ STOC, ACM, 1986, 1–5.
[8]
D. Beaver.
“Distributed Computations Tolerating a Faulty Minority,
and Multiparty Zero-Knowledge Proof Systems.”
J. Cryptology, 1990.
A preliminary version appeared as,
“Secure Multiparty Protocols Tolerating Half Faulty Processors,”
in Proceedings of Crypto, ACM, 1989, and in Technical Report
TR-19-88, Harvard University, September, 1988.
[9]
D. Beaver.
“Perfect Privacy for Two-Party Protocols.”
Proceedings of the DIMACS Workshop on
Distributed Computing and Cryptography, Princeton, NJ, October, 1989,
J. Feigenbaum, M. Merritt (eds.).
Preliminary version in TR-11-89, Harvard University.
[10]
D. Beaver.
“Formal Definitions for Secure Distributed Protocols.”
Proceedings of the DIMACS Workshop on
Distributed Computing and Cryptography, Princeton, NJ, October, 1989,
J. Feigenbaum, M. Merritt (eds.).
[11]
D. Beaver, R. Cleve, S. Goldwasser.
In preparation, 1989.
[12]
D. Beaver, J. Feigenbaum.
“Hiding Information from Several Oracles.”
Harvard University Technical Report TR-10-89, May 1, 1989.
[13]
D. Beaver, J. Feigenbaum.
“Encrypted Queries to Multiple Oracles.”
AT&T Bell Laboratories Technical Memorandum, August 14, 1989.
[14]
D. Beaver, J. Feigenbaum.
“Hiding Instances in Multioracle Queries.”
Proceedings of the $7^{th}$ STACS,
Springer–Verlag LNCS 415, 1990,
[15]
D. Beaver, J. Feigenbaum, J. Kilian, P. Rogaway.
“Cryptographic Applications of Locally Random Reductions.”
Proceedings of Crypto 1990.
Also appeared as
AT&T Bell Laboratories Technical Memorandum, November 15, 1989.
[16]
D. Beaver, J. Feigenbaum, V. Shoup.
“Hiding Instances in Zero-Knowledge Proof Systems.”
Proceedings of Crypto 1990.
[17]
D. Beaver, S. Goldwasser.
“Multiparty Computation with Faulty Majority.”
Proceedings of the $30^{th}$ FOCS, IEEE, 1989, 468–473.
[18]
D. Beaver, S. Goldwasser, Y. Mansour.
Personal communication, 1989.
[19]
D. Beaver, S. Haber, M. Yung.
In preparation, 1989.
[20]
D. Beaver, S. Micali, P. Rogaway.
“The Round Complexity of Secure Protocols.”
Proceedings of the $22^{nd}$ STOC, ACM, 1990, 503–513.
[21] C. Bennett, G. Brassard, C. Crépeau.
Personal communication, 1989.
[22]
M. Bellare, L. Cowen, S. Goldwasser.
“On the Power of Secret Key Exchange.”
In preparation, 1989.
[23]
J. Benaloh.
“Verifiable Secret Ballot Elections”
PhD Thesis, Yale University, 1987.
[24]
M. Ben-Or, R. Cleve, “Computing Algebraic Formulas
Using a Constant Number of Registers.”
Proceedings of the $20^{th}$ STOC, ACM, 1988, 254–257.
[25]
M. Ben-Or, O. Goldreich, S. Goldwasser, J. Hastad,
J. Kilian, S. Micali, P. Rogaway.
“Everything Provable is Provable in Zero-Knowledge.”
Proceedings of Crypto 1988, Springer–Verlag, 1990.
[26]
M. Ben-Or, O. Goldreich, S. Micali, R. Rivest.
“Fair Contract Signing.”
Proceedings of the ICALP (1985).
[27]
M. Ben-Or, S. Goldwasser, J. Kilian, A. Wigderson.
“Multi-Prover Interactive Proofs: How to Remove Intractability.”
Proceedings of the $20^{th}$ STOC, ACM, 1988, 113–131.
[28]
M. Ben-Or, S. Goldwasser, A. Wigderson.
“Completeness Theorems for Non-Cryptographic Fault-Tolerant
Distributed Computation.”
Proceedings of the $20^{th}$ STOC, ACM, 1988, 1–10.
[29]
M. Ben-Or, R. El-Yaniv,
“Interactive Consistency in Constant Expected Time.”
Unpublished Manuscript, 1988.
[30]
S. Berkowitz.
“On Computing Determinant in Small Parallel Time Using a Small
Number of Processors.”
Info. Proc. Letters 18:3 (1984), 147–150.
[31]
“Security Proofs for Information Protection Systems.”
Proceedings of the 1980 Symposium on Security and Privacy,
IEEE Computer Society Press, NY (1981), 79–88.
[32]
M. Blum, S. Kannan.
“Designing Programs that Check Their Work.”
Proceedings of the $21^{st}$ STOC, ACM, 1989, 86–97.
[33]
M. Blum, L. Levin, M. Luby.
“If Permanent with the Uniform Distribution
is in Average Polynomial Time then $\#$P $\subseteq$ ZPP.”
Unpublished Manuscript, October 31, 1989.
[34]
M. Blum, M. Luby, R. Rubinfeld.
“Self-Testing/Correcting with Applications to Numerical Problems.”
Preprint, November, 1989.
[35]
M. Blum, M. Luby, R. Rubinfeld.
“Program Result Checking Against Adaptive Programs
and in Cryptographic Settings.”
Proceedings of the DIMACS Workshop on
Distributed Computing and Cryptography, Princeton, October, 1989.
[36]
M. Blum, M. Luby, R. Rubinfeld.
“Stronger Checkers and General Techniques for Numerical Problems.”
Proceedings of the $22^{nd}$ STOC, ACM, 1990, 73–83.
[37]
M. Blum, S. Micali.
“How to Generate Cryptographically Strong
Sequences of Pseudo-Random Bits.”
SIAM J. Comput. 13 (1984), 850–864.
[38]
G. Brassard, D. Chaum, C. Crépeau.
“Minimum Disclosure Proofs of Knowledge.”
J. Comput. System Sci. 37 (1988), 156–189.
[39]
D. Chaum, C. Crépeau, I. Damgaard.
“Multiparty Unconditionally Secure Protocols.”
Proceedings of the $20^{th}$ STOC, ACM, 1988, 11–19.
[40]
D. Chaum, I. Damgaard, J. van de Graaf.
“Multiparty Computations Ensuring Secrecy of Each Party's Input
and Correctness of the Output.”
Proceedings of Crypto 1987, Springer–Verlag, 1988.
[41]
H. Chernoff.
“A Measure of Asymptotic Efficiency for Tests of a Hypothesis
Based on the Sum of Observations.”
Annals of Math. Statistics 23 (1952), 493–507.
[42]
B. Chor, S. Goldwasser, S. Micali, B. Awerbuch.
“Verifiable Secret Sharing and
Achieving Simultaneity in the Presence of Faults.”
Proceedings of the $17^{th}$ STOC, ACM, 1985, 383–395.
[43]
B. Chor, E. Kushilevitz.
“A Zero-One Law for Boolean Privacy,”
Proceedings of the $21^{st}$ STOC, ACM, 1989, 62–72.
[44]
B. Chor, M. Rabin.
“Achieving Independence in a Logarithmic Number of Rounds.”
Proceedings of the $6^{th}$ PODC, ACM, 1987.
[45]
R. Cleve.
“Limits on the Security of Coin Flips when Half the Processors are Faulty.”
Proceedings of the $18^{th}$ STOC, ACM, 1986, 364–370.
[46]
R. Cleve.
Personal communication, 1989.
[47]
J. Cohen, M. Fischer.
“A Robust and Verifiable Cryptographically Secure Election.”
[48]
A. Condon.
“Space Bounded Probabilistic Game Automata.”
Proceedings of the $3^{rd}$ Structure in Complexity Theory Conf.,
IEEE, 1988, 162–174.
[49]
A. Condon, R. J. Lipton.
“On the Complexity of Space-Bounded Interactive Proofs.”
Proceedings of the $30^{th}$ FOCS, IEEE, 1989, 462–467.
[50]
S. Cook.
“A Taxonomy of Problems with Fast Parallel Algorithms.”
Info. and Control 64 (1985), 2–22.
[51]
C. , J. Kilian.
“Achieving Oblivious Transfer Using Weakened Security Assumptions.”
Proceedings of the $29^{th}$ FOCS, IEEE, 1988, 42–52.
[52]
D. Denning,
Cryptography and Data Security.
Addison-Wesley, Reading, MA (1982).
[53]
Department of Defense Trusted Computer System Evaluation
US Department of Defense, Fort Meade, MD (August 15, 1983).
[54]
W. Diffie, M. Hellman.
“New Directions in Cryptography.”
IEEE Transactions of Information Theory IT-22 (November 1976),
[55] C. Dwork, L. Stockmeyer.
“Interactive Proof Systems with Finite State Verifiers.”
IBM Research Report RJ 6262 (61659), May 26, 1988.
Extended Abstract in
Proceedings of Crypto 1988, Springer–Verlag, 1990, 71–75.
[56]
T. Duff.
“Experience with Viruses on UNIX Systems.”
Computing Systems 2:2 (1989), 155–171.
[57]
S. Even, O. Goldreich, A. Lempel.
“A Randomized Protocol for Signing Contracts.”
Proceedings of Crypto 1982, Springer–Verlag, 1983, 205–210.
[58]
P. Feldman.
“A practical scheme for Noninteractive Verifiable Secret Sharing.”
Proceedings of the $28^{th}$ FOCS, IEEE, 1987, 427–437.
[59]
P. Feldman.
“One Can Always Assume Private Channels.”
Unpublished Manuscript, 1988.
[60]
P. Feldman, S. Micali.
“Optimal Algorithms for Byzantine Agreement.”
Proceedings of the $20^{th}$ STOC, ACM, 1988, 148–161.
[61]
J. Feigenbaum, S. Kannan, N. Nisan.
“Lower Bounds on Random-Self-Reducibility.”
AT&T Bell Laboratories Technical Memorandum, December 4, 1989.
[62]
L. Fortnow.
“The Complexity of Perfect Zero-Knowledge.”
Proceedings of the $19^{th}$ STOC, ACM, 1987, 204–209.
[63]
L. Fortnow, J. Rompel, M. Sipser.
“On the Power of Multi-Prover Interactive Protocols.”
Proceedings of the $3^{rd}$ Structure in Complexity Theory Conf.,
IEEE, 1988, 156–161.
[65]
Z. Galil, S. Haber, M. Yung.
“Cryptographic Computation: Secure Fault-Tolerant Protocols
and the Public-Key Model.”
Proceedings of Crypto 1987, Springer–Verlag, 1988, 135–155.
[66]
Z. Galil, S. Haber, and M. Yung.
“Minimum-Knowledge Interactive
Proofs for Decision Problems.”
SIAM J. Comput. 18 (1989), 711–739.
[67]
O. Goldreich, S. Goldwasser, S. Micali.
“How to Construct Random Functions.”
JACM 33:4 (1986), 792–807.
[68]
O. Goldreich and H. Krawczyk.
“On the Composition of Zero-Knowledge Proofs.”
Technical Report 570, Technion, Israel, June, 1989.
[69]
O. Goldreich, L. Levin.
“A Hard-Core Predicate for All One-Way Functions.”
Proceedings of the $21^{st}$ STOC, ACM, 1989, 25–32.
[70] O. Goldreich, S. Micali, A. Wigderson.
“Proofs that Yield Nothing but Their Validity and a
Methodology of Cryptographic Protocol Design.”
Proceedings of the $27^{th}$ FOCS, IEEE, 1986, 174–187.
[71]
O. Goldreich, S. Micali, A. Wigderson.
“How to Play Any Mental Game, or
A Completeness Theorem for Protocols with Honest Majority.”
Proceedings of the $19^{th}$ STOC, ACM, 1987, 218–229.
[72]
O. Goldreich, R. Vainish.
“How to Solve any Protocol Problem – An Efficiency Improvement.”
Proceedings of Crypto 1987, Springer–Verlag, 1988, 73–86.
[73]
S. Goldwasser, S. Micali.
“Probabilistic Encryption.”
J. Comput. System Sci. 28 (1984), 270–299.
[74]
S. Goldwasser, S. Micali, C. Rackoff.
“The Knowledge Complexity of Interactive Proof Systems.”
SIAM J. Comput. 18:1 (1989), 186–208.
[75]
S. Goldwasser, M. Sipser.
“Private Coins vs. Public Coins in Interactive Proof Systems.”
Proceedings of the $18^{th}$ STOC, ACM, 1986, 59–68.
[76]
S. Haber.
Multi-Party Cryptographic Computation:
Techniques and Applications,
PhD Thesis, Columbia University, 1988.
[78]
S. Haber, S. Micali.
Personal communication, 1987.
[79]
J. Halpern, M. Rabin.
“A Logic to Reason about Likelihood.”
Proceedings of the $15^{th}$ STOC, ACM, 1983, 310–319.
[80]
N. Immerman, S. Landau.
“The Complexity of Iterated Multiplication.”
Proceedings of the $4^{th}$ Structure in Complexity Theory Conf.,
IEEE, 1989, 104–111.
[81]
R. Impagliazzo, L. Levin, M. Luby.
“Pseudo-Random Generation from One-Way Functions.”
Proceedings of the $21^{st}$ STOC, ACM, 1989, 12–24.
[82]
R. Impagliazzo, S. Rudich.
“Limits on The Provable Consequences of One-Way Permutations.”
Proceedings of the $21^{st}$ STOC, ACM, 1989, 44–62.
[83]
R. Impagliazzo, M. Yung.
“Direct Minimum-Knowledge Computation.”
Proceedings of Crypto 1987, Springer–Verlag, 1988, 40–51.
[84]
J. Kilian.
“Founding Cryptography on Oblivious Transfer.”
Proceedings of the $20^{th}$ STOC, ACM, 1988, 20–29.
[85]
J. Kilian. Personal communication, 1988.
[86]
J. Kilian.
“Zero-Knowledge with Log-Space Verifiers.”
Proceedings of the $29^{th}$ FOCS, IEEE, 1988, 25–35.
[87]
J. Kilian, S. Micali, P. Rogaway.
“The Notion of Secure Computation.”
Unpublished Manuscript, 1990.
[88]
E. Kushilevitz.
“Privacy and Communication Complexity.”
Proceedings of the $30^{th}$ FOCS, IEEE, 1989, 416–421.
[89]
R. Lipton.
“New Directions in Testing.”
Preprint, October, 1989.
[90]
M. Luby, S. Micali, C. Rackoff.
“How to Simultaneously Exchange a Secret Bit
by Flipping a Symmetrically Biased Coin.”
Proceedings of the $24^{th}$ FOCS, IEEE, 1983, 11–21.
[91]
K. Lund, L. Fortnow, H. Karloff, N. Nisan.
“The Polynomial Time Hierarchy has Interactive Proofs.”
Electronic mail announcement, December 13, 1989.
[93]
D. McIlroy, J. Reeds.
“A Security Model for Files and Processes in the UNIX System.”
AT&T Bell Laboratories Technical Report, April 14, 1987.
[94]
D. McIlroy, J. Reeds.
“Multilevel Security with Fewer Fetters.”
Proceedings of the the Spring 1988 EUUG Conference,
European UNIX Users' Group, London.
[95]
D. McIlroy, J. Reeds.
“Multilevel Windows on a Single-Level Terminal.”
Proceedings of the UNIX Security Workshop (1988),
USENIX, Portland, Oregon.
[96]
D. McIlroy.
“Virology 101.”
Computing Systems 2:2 (1989), 173–181.
[97]
N. Nisan.
Personal communication, 1988.
[98]
N. Nisan.
“Co-SAT has Multiprover Interactive Proofs.”
Preliminary draft, November, 1989.
[99]
I. Niven and H. Zuckerman.
An Introduction to the Theory of Numbers.
Wiley, New York, 1972.
[100]
Y. Oren.
“On the Cunning Power of Cheating Verifiers:
Some Observations about Zero Knowledge Proofs.”
Proceedings of the $28^{th}$ FOCS, IEEE, 1987, 462–471.
[101]
W. Peterson and E. Weldon.
Error Correcting Codes.
Second Ed., MIT Press (1972).
[102]
M. Rabin.
“Digitalized Signatures.”
Foundations of Secure Computations, R. Demillo et al, Ed.
Academic Press (1978), 155–165.
[103]
M. Rabin.
“Digitalized Signatures and Public-Key Functions as Intractable as
Technical Report LCS/TR-212, MIT, January, 1979.
[104]
M. Rabin.
Personal communication, 1988.
[105]
M. Rabin.
Personal communication, 1989.
[106]
T. Rabin.
“Robust Sharing of Secrets When the Dealer is Honest or Cheating.”
Masters Thesis, Hebrew University, 1988.
[107]
T. Rabin, M. Ben-Or.
“Verifiable Secret Sharing and
Multiparty Protocols with Honest Majority.”
Proceedings of the $21^{st}$ STOC, ACM, 1989, 73–85.
[108]
J. Reeds.
“Secure IX Network.”
Proceedings of the DIMACS Workshop on
Distributed Computing and Cryptography, Princeton, NJ, October, 1989,
J. Feigenbaum, M. Merritt (eds.).
[109]
R. Rivest.
Workshop on Communication and Computing, MIT, October, 1986.
[110]
R. Rivest, A. Shamir, L. Adleman.
“A Method for Obtaining Digital Signatures and Public Key
Communications of the ACM 21:2 (1978), 120–126.
[111]
P. Rogaway, Personal Communication, 1989.
[112]
P. Rogaway.
“The Round Complexity of Secure Protocols.”
PhD Thesis, Massachusetts Institute of Technology, 1990.
[113] S. Rudich.
Personal communication, 1989.
[114]
A. Shamir.
“How to Share a Secret.”
Communications of the ACM, 22 (1979), 612–613.
[115]
A. Shamir.
“IP = PSPACE.”
Electronic mail announcement, December, 1989.
[116]
S. Toda.
“On the Computational Power of PP and $\oplus P$.”
Proceedings of the $30^{th}$ FOCS, IEEE, 1989, 514–519.
[117] M. Tompa and H. Woll.
“Random Self-Reducibility and Zero-Knowledge
Proofs of Possession of Information.”
Proceedings of the $28^{th}$ FOCS, IEEE, 1987, 472–482.
[118]
L. Valiant.
“The Complexity of Computing the Permanent.”
Theor. Comput. Sci. 8 (1979), 189–201.
[119] A. C. Yao.
“Protocols for Secure Computations.”
Proceedings of the $23^{rd}$ FOCS, IEEE, 1982, 160–164.
[120] A. Yao,
“Theory and Applications of Trapdoor Functions,”
Proceedings of the $23^{rd}$ FOCS, IEEE, 1982, 80–91.
[121]
A. Yao.
“How to Generate and Exchange Secrets.”
Proceedings of the $27^{th}$ FOCS, IEEE, 1986, 162–167.
[122] C. Yap.
“Some Consequences of Nonuniform Conditions on Uniform Classes.”
Theor. Comput. Sci. 26 (1983), 287–300.
|
# Low-complexity Rank-Efficient Tensor Completion For Prediction And Online
Wireless Edge Caching††thanks: Authors are with the Institute of Digital
Communications, School of Engineering, The University of Edinburgh, Edinburgh,
UK, EH9 3FG. Emails: {ngarg<EMAIL_ADDRESS>
Navneet Garg and Tharmalingam Ratnarajah
###### Abstract
Wireless edge caching is a popular strategy to avoid backhaul congestion in
next-generation networks, where content is cached in advance at base stations
to serve redundant requests during peak congestion periods. In edge caching
data, missing observations are inevitable due to dynamic selective popularity.
Among completion methods, tensor-based models have been shown to be the most
advantageous for missing-data imputation. Moreover, since the observations are
correlated across time, files, and base stations, in this paper we formulate
cooperative caching with recommendations as a fourth-order tensor completion
and prediction problem. Since the content library can be large, leading to a
large-dimensional tensor, we modify the latent-norm-based Frank-Wolfe (FW)
algorithm towards a much lower time complexity using multi-rank updates,
rather than the rank-1 updates in the literature. This significantly lower
computational overhead enables the development of an online caching algorithm.
Simulations with the MovieLens dataset show lower reconstruction errors for
the proposed algorithm compared to the recent FW algorithm, achieved with
lower computational overhead. It is also demonstrated that the completed
tensor improves normalized cache hit rates for linear prediction schemes.
## I Introduction
With the continuous development of intelligent devices and innovative
application services of various sizes, such as high-quality video feeds,
software updates, and news updates, wireless mobile communications have been
experiencing an unprecedented traffic surge with a lot of redundant and
repeated information, which limits the capacity of the fronthaul and backhaul
links [1]. To lower the redundant traffic, caching has emerged as an effective
solution for reducing the peak data rates by pre-fetching the most popular
contents in the local cache storage of the base stations (BS). In recent
years, caching at the BS has become feasible due to the reduced cost and size
of memory [2]. In [2], cache-enabled networks are classified into macro-cell,
heterogeneous, and D2D networks; given a content library and the respective
content popularity profile, content placement and delivery have been
investigated to optimize the backhaul latency in [3], the server load in [4],
the cache miss rate in [5, 6, 7], etc. With a known popularity profile,
reinforcement learning approaches [8, 9] have been studied for learning the
content placement. However, in practice, this profile is time-varying and not
known in advance; therefore, it needs to be estimated from past observations
of the content requests. To estimate future popularities, deep learning based
prediction is employed with huge training data in [10, 11]. In [12], an
auto-regressive (AR) prediction cache is used to predict the number of
requests in the time series. A linear prediction approach is investigated for
video segments in [13]. To learn popularities independently across contents,
online policies are presented for cache-awareness in [14], low-complexity
video caching in [1, 15], user preference learning in [16], etc. These works
on prediction focus on individual BSs to estimate the future content
popularities. However, demands across files are correlated [17], since other
similar contents with the same features as the requested content can be served
from the cache in order to maximize the cache hit, which is also known as a
soft cache hit [18]. Accordingly, recommendation-based caching has been
carried out in the literature [17, 19, 20, 21, 22, 23, 24] based on low-rank
decomposition, deep reinforcement learning, etc. Moreover, the content is also
correlated across base stations, as studied in recent works on in-network
caching solutions [25, 26], where base stations are jointly allocated cache
contents. Furthermore, regarding the correlation of popularities across time
slots, in the edge caching literature, to avoid prediction, cache placement is
performed to maximize different objectives such as the cache hit rate [6], the
average success probability [27, 28, 29, 30], etc., given the past information
up to the present time slot. The maximization of these objectives can be
reduced to a prediction problem. Therefore, in this work, we focus on the
prediction of such correlated demands to improve the caching performance using
a tensor approach. Due to these correlations and the missing data, it is
difficult to store and predict the content popularities for a large content
library. Thus, after modeling the tensor completion problem, we modify the
Frank-Wolfe approach for its solution, followed by linear prediction methods.
A brief review of tensor completion approaches is given as follows.
### I-A Tensor completion methods
In the past decades, tensor completion has been intensively researched due to
its wide applications in a variety of fields, such as computer vision [31, 32,
33], multi-relational link prediction [34, 35, 36], and recommendation systems
[37, 38]. The goal of tensor completion is to recover an incomplete tensor
from partially observed entries. To the best of our knowledge, tensor
completion methods can be categorized into decomposition-based and
rank-minimization-based methods.
Decomposition-based methods aim to factorize the incomplete tensor into a
sequence of low-rank factors and then predict the missing entries via the
latent factors. In recent years, the CANDECOMP/PARAFAC (CP) decomposition [39]
and the Tucker decomposition [40] have been the two most studied and popular
models applied to tensor completion [41], [42], [43]. Although the CP and
Tucker methods obtain good performance for low-order tensors, their
performance rapidly degrades for higher-order tensors. Moreover, computing the
CP-rank is NP-hard, and the number of parameters of the Tucker decomposition
grows exponentially with the order of the given tensor. Recently, a tensor
decomposition model called Tensor-Ring (TR) decomposition [44, 45, 46, 47] was
proposed to process high-order tensors. TR decomposition expresses a
higher-order tensor by a multi-linear product over a sequence of lower-order
latent cores. A notable advantage of TR decomposition is that its total number
of parameters increases linearly with the order of the given tensor, reducing
the curse of dimensionality as compared to the Tucker decomposition. Attracted
by these features, TR decomposition has drawn a lot of attention, with
approaches such as TR-based weighted optimization [48], alternating least
squares [49], and TR low-rank factors [58].
Rank-minimization-based methods exploit the low-rank structure to complete
the tensor. Since rank minimization is a non-convex and NP-hard problem, the
overlapped nuclear norm [50, 51, 52] and the latent nuclear norm [53, 54] have
been defined as convex surrogates of the tensor rank. The former assumes low
rank across all modes and thus performs poorly when the target is low-rank
only in certain modes [50]. The latter assumes few modes to be low-rank and
often performs better than the former [53]. However, both norms are based on
the unbalanced mode-$k$ unfolding and thus fail to capture global information
for higher-order tensors.
### I-B Contributions
In this paper, inspired by the TR decomposition, we employ the convex TR-based
latent nuclear norm [55], which is based on cyclic unfolding and thus
overcomes the drawback of unbalanced unfolding. The tensor completion task is
cast as a convex optimization problem minimizing this latent norm, solved via
a modified Frank-Wolfe (FW) algorithm [55, 56] with cyclic unfolding to
provide an efficient solution with lower time complexity; in this work, the FW
method is modified to obtain a low time-complexity solution via
gradient-descent updates. Furthermore, simulations on the MovieLens dataset
are carried out to verify the algorithm and the effect of completion on the
cache hit rate in caching. The contributions of this paper can be summarized
as follows.
#### I-B1 Problem Formulation
The missing entry problem in edge caching is cast as a tensor completion
problem, where the entries are correlated across base stations, files and
time.
#### I-B2 Online solution
To solve the tensor completion problem, we modify the FW algorithm from [55]
towards a lower time complexity solution. The modified approach is
significantly faster than that in [55]. Using the proposed tensor completion,
an online content caching algorithm is presented to improve the cache hit
rates.
#### I-B3 Simulations and comparison
Simulations are performed for MovieLens dataset towards the convergence and
observing the effect of latent factors. The cache hit rate performance of
caching is plotted for both mean based and linear prediction methods, which
also shows normalized cache hit rate improvements compared to conventional CP
decomposition (CPD) [57] and the FW method in [55]. For higher decomposition
rank, the proposed method provides better hit rates than that with [55].
### Related work on TR based completion
Related works include latent-norm-based methods [53, 54, 55] and
Tensor-Ring-based methods [49, 52]. The work [53] employed the latent nuclear
norm, while [54] defined a new latent nuclear norm via the Tensor Train;
however, both are based on the unbalanced mode-$k$ unfolding. [49] applied TR
decomposition with alternating least squares for completion, while [58]
proposed TR low-rank factors. To reduce the computational complexity per
iteration, [52] utilizes an overlapped TR nuclear norm. Further, to reduce the
number of parameters to be selected, [55] proposed a new latent TR-nuclear
norm.
### Organization
The rest of this paper is organized as follows. Prediction in the edge caching
framework is presented in Section II. In Section III, the tensor completion
algorithm is provided. Section IV investigates simulation results for a
real-world dataset. Finally, the paper is concluded in Section V.
### Notations
Scalars, vectors, and matrices are respectively denoted by lowercase, boldface
lowercase, and bold capital letters. A tensor of order $N>3$ is denoted by
calligraphic letter $\mathcal{X}$. The notation
$\mathcal{X}(i_{1},i_{2},\ldots,i_{N})$ represents an element in
$\mathcal{X}$, while $\mathcal{X}(:,i_{2},\ldots,i_{N})$ and
$\mathcal{X}(:,:,i_{3},\ldots,i_{N})$ denote a fiber along mode $1$ and a
slice along modes 1 and 2, respectively. The inner product of two tensors
$\mathcal{X}$ and $\mathcal{Y}$ of
the same size is given as
$\left\langle\mathcal{X},\mathcal{Y}\right\rangle=\sum_{i_{1},i_{2},\ldots,i_{N}}\mathcal{X}\left(i_{1},i_{2},\ldots,i_{N}\right)\mathcal{Y}\left(i_{1},i_{2},\ldots,i_{N}\right)$
and the Frobenius norm can be obtained as
$\left\|\mathcal{X}\right\|_{F}^{2}=\left\langle\mathcal{X},\mathcal{X}\right\rangle$.
The notations $\text{tr}(\mathbf{A})$ and $\|\mathbf{A}\|_{*}$ denote the
trace and the nuclear norm of a matrix $\mathbf{A}$, respectively.
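For concreteness, the tensor inner product and Frobenius norm defined above read directly in numpy (a minimal illustration with toy data, not from the paper):

```python
import numpy as np

# Inner product <X, Y> and Frobenius norm ||X||_F of same-sized tensors
X = np.arange(24.0).reshape(2, 3, 4)
Y = np.ones((2, 3, 4))
inner = np.sum(X * Y)   # <X, Y> = sum of elementwise products
fro2 = np.sum(X * X)    # ||X||_F^2 = <X, X>
print(inner, np.sqrt(fro2))
```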
Symbol | Description
---|---
$N_{BS}$ | Number of base stations
$\mathcal{F}$, $F$ | Content library and its size
$L_{BS}=\left|\mathcal{C}_{bt}\right|$ | Cache size at a BS
$\mathbf{c}_{bt},\mathbf{c}_{bt}(f)$ | Cache status and $f^{th}$ file status
$b$ | BS index
$t$ | Time slot index
$f$ | Content library index
$\mathcal{D}_{t}$,$D_{fibj}$ | Observed tensor of #requests, and its entries
$\tau$ | Number of time slots for prediction
$\mathcal{H}_{b,t}$ | Cache hit rate
$\bar{D}_{fbt}$,$\hat{\bar{D}}_{fbt}$ | Normalized and estimated #requests
$M$,$c_{bi}$ | Order and coefficients of linear prediction
$d$ | For cyclic unfolding/folding
$N$ | Number of tensor dimensions
$\mathcal{I}$ | Binary (0 or 1) valued tensor
$\mathcal{T}$ | Observed tensor for completion
$\mathcal{X}$ | Tensor to be determined
$\mathcal{X}_{k}$ | $k^{th}$ component s.t. $\mathcal{X}=\sum_{k=1}^{N}\mathcal{X}_{k}$
$\mathcal{X}_{k,(k,d)}$ | Cyclic unfolding of $\mathcal{X}_{k}$ of size $\bar{I}_{k}\times\bar{J}_{k}$
$\mathcal{S}$ | Gradient representation
$\gamma$,$\beta$ | Step size, norm constraint
$R_{k}$, $R$ | $k^{th}$ mode rank and rank constraint
$r_{k}$ | Rank of current SVD
$\mathbf{U}_{k},\mathbf{V}_{k},\Sigma_{k}$ | Decomposition of $\mathcal{X}_{k}$
Table I: List of variables.
## II System model
We consider a multi-cell network with one macro base station (MBS) and
$N_{BS}$ small base stations (SBS), where each SBS serves multiple users. An
example of this system is illustrated in Figure 1.
Figure 1: System model of the caching framework.
Each user requests contents from a fixed library. Let the content library be
indexed by the set $\mathcal{F}=\left\\{1,\ldots,F\right\\}$, where each
content is assumed to be of equal size.
In the time slot $t\in\left\\{1,\ldots,T\right\\}$, the $b^{th}$ SBS has a
cache of size $L_{BS}$, and the $F\times 1$ vector $\mathbf{c}_{bt}$ describes
the status of cache, that is, if the $f^{th}$ content is cached,
$\mathbf{c}_{bt}(f)=1$ (else $0$ for not cached) with the cache size
constraint $\mathbf{c}_{bt}^{T}\mathbf{1}_{F}=L_{BS}$. For fractional caching,
where a portion of content is cached rather than the whole file, we have
$\mathbf{c}_{bt}(f)\in\left[0,1\right]$ denoting the fraction of the $f^{th}$
content being cached, for each $f\in\mathcal{F}$. Before delivering the
contents requested by users, a subset of popular contents is cached at the
$b^{th}$ base station. Users are provided with recommendations of the contents
from the library in decreasing order of popularity at the local SBS. Based on
the requested contents from the library, we define a direct hit as a cache hit
for which the requested content is present in the cache, whereas an indirect
hit occurs when the requested content is unavailable in the cache and the user
chooses a cached alternative based on the recommendations. Each SBS collects
the data about the direct and indirect hits for each time slot.
Let $\mathcal{D}_{t}=\big{\\{}D_{fibj}\in\mathbb{R}_{+},\forall
f,i\in\mathcal{F},b=1,\ldots,N_{BS},j=t-\tau+1,\ldots,t\big{\\}}$ be a fourth-
order tensor representing the aggregated data of number of direct and indirect
content requests across all SBSs and for previous $\tau$ time slots
($t-\tau+1,\ldots,t$). The four dimensions of the tensor $\mathcal{D}_{t}$
respectively represent the requested content’s index, indices of requested
contents based on recommendations, base station index, and time slot. In other
words, in the $t^{th}$ time slot at $b^{th}$ BS, the number of recommended
requests for the $i^{th}$ file, when the $f^{th}$ content is primarily
requested, is denoted as $D_{fibt}$. Thus, the size of tensor is $I_{1}\times
I_{2}\times I_{3}\times I_{4}$, where $I_{1}=I_{2}=F,\,I_{3}=N_{BS}$ and
$I_{4}=\tau$. Note that only a few entries of the tensor $\mathcal{D}_{t}$ can
be observed, since the library is large and only a few files are popular in a given
time slot. Therefore, before processing this sparse tensor to derive the cache
placement scheme, a tensor completion approach is essential. The following
subsection describes the dynamics of the caching system.
### II-A Edge caching procedures
The structure of the time slot is shown in Figure 2.
Figure 2: Time slot structure. CPL: content placement, CD: content delivery,
IE: information exchange.
The first phase is the content placement (CPL) phase, where contents are
placed in each BS’s cache based on the present information at base stations.
The subsequent phase is the content delivery phase, where the content is
delivered from the cache as per the requests from users. The next phase is
dedicated for information exchange, where multiple base stations exchange the
information about the number of hits and requests for better cache placements
in the next time slot. For the CPL phase, the content placement strategy is
chosen as a combination of two methods, namely, linear prediction of contents'
demands followed by the most-popular caching (MPC) scheme. Since accurate
prediction requires data to be available for the different contents, the
tensor completion problem is considered prior to performing prediction.
Moreover, in practice, the demands (numbers of requests for contents) are
correlated across different contents and base stations, reducing the rank of
the tensor $\mathcal{D}_{t}$. Thus, independent prediction for each base
station and each file can cause performance degradation. To deal with these
correlations, the literature considers in-network caching [59], joint
reinforcement learning [8], etc. However, in this work, we focus on improving
the caching performance using tensor completion methods.
### II-B Normalized cache hit rate and content placement
In the literature [29, 16], several measures such as the cache hit (miss)
rate, the average success probability (ASP), etc., have been considered to
assess caching performance. Improvements in these measures also rely on the
prediction of demands. The cache hit rate at the $b^{th}$ base station in the
$t^{th}$ time slot can be written as
$\mathcal{H}_{bt}=\frac{\sum_{f,i\in\mathcal{F}}D_{fibt}\mathbf{c}_{bt}(f)}{\sum_{f,i\in\mathcal{F}}D_{fibt}},$
(1)
which evaluates the cache placement policy.
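As a minimal illustration of (1), the hit rate at one BS in one slot can be computed from the demand slice and the cache vector (a sketch with toy data, not from the paper):

```python
import numpy as np

def cache_hit_rate(D, c):
    """Hit rate (1) at one BS in one slot.
    D : (F, F) array, D[f, i] = requests logged for the pair (f, i).
    c : (F,) array, cached fraction of each file."""
    # numerator sum_{f,i} D[f,i] * c[f], denominator sum_{f,i} D[f,i]
    return (D.sum(axis=1) * c).sum() / D.sum()

# toy data: 3 files, file 0 fully cached
D = np.array([[5., 1., 0.],
              [2., 3., 0.],
              [0., 0., 1.]])
c = np.array([1., 0., 0.])
print(cache_hit_rate(D, c))  # (5+1)/12 = 0.5
```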
Note that the number of requests at the SBSs is random with unknown
distribution and not available in advance; that is, the decision for the
$f^{th}$ file is made based on the previous numbers of requests. The objective
of caching is to find the content placement strategy that maximizes the hit
rate above. If, in the $(t+1)^{th}$ time slot, the number of requests
$D_{fib,t+1}$ were known in advance, we could choose the $L_{BS}$ files with
the largest numbers of requests. On the other hand, when $D_{fib,t+1}$ is not
known, it is natural to maximize the expected value given the previous
information as
$\displaystyle\arg\max_{\mathbf{c}_{b,t+1},\forall
b}\mathbb{E}_{D}\left[\sum_{b}\mathcal{H}_{b,t+1}|\mathcal{D}_{t}\right]$ (2a)
$\displaystyle=\arg\max_{\mathbf{c}_{b,t+1},\forall
b}\sum_{b}\sum_{f\in\mathcal{F}}\mathbf{c}_{b,t+1}(f)\mathbb{E}_{D}\left[\frac{\sum_{i\in\mathcal{F}}D_{ifb,t+1}}{\sum_{f,i\in\mathcal{F}}D_{ifb,t+1}}|\mathcal{D}_{t}\right],$
(2b)
in which
$\mathbb{E}_{D}\left[\frac{\sum_{i\in\mathcal{F}}D_{ifb,t+1}}{\sum_{f,i\in\mathcal{F}}D_{ifb,t+1}}|\mathcal{D}_{t}\right]$
denotes the conditional mean estimate of the normalized number of requests.
This estimate is the prediction of demands at $t+1$, given the past $\tau$
observations of demands until time slot $t$. Note that, given the prediction
estimates, the caching strategy caches the $L_{BS}$ contents with the largest
values of the estimate. To compute this estimate, the probability distribution
of the normalized requests must be known. However, the numbers of users'
requests are random and non-stationary with unknown distribution. Therefore,
in the following, we obtain a linear prediction estimate using least squares.
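Given any estimate of the normalized demands, the resulting placement simply caches the $L_{BS}$ files with the largest estimated values. A minimal numpy sketch (names and data are illustrative):

```python
import numpy as np

def mpc_placement(d_hat, L_BS):
    """Most-popular caching: a 0/1 cache vector selecting the L_BS files
    with the largest predicted demand d_hat (shape (F,))."""
    c = np.zeros_like(d_hat)
    c[np.argsort(d_hat)[-L_BS:]] = 1.0   # indices of the L_BS largest
    return c

d_hat = np.array([0.1, 0.4, 0.05, 0.3, 0.15])
print(mpc_placement(d_hat, 2))  # caches files 1 and 3
```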
### II-C Linear prediction
Let
$\bar{D}_{fbt}=\frac{\sum_{i\in\mathcal{F}}D_{ifbt}}{\sum_{f,i\in\mathcal{F}}D_{ifbt}}$
denote the normalized number of requests, such that
$\sum_{f\in\mathcal{F}}\bar{D}_{fbt}=1$. To obtain the linear prediction, the
normalized demands are assumed to evolve as a linear combination of past
demands in the temporal dimension as
$\bar{D}_{fb,t+1}\approx\sum_{m=1}^{M}c_{bm}\bar{D}_{fb,t+1-m},\forall
f\in\mathcal{F},$ (3)
where $M$ is the order of prediction and $c_{bm}$ are the prediction
coefficients. These coefficients are obtained by solving least squares fit
problem, given the previous $\tau$ observations
$\bar{D}_{fbj},j=t-\tau+1,\ldots,t$ for each $b$ as
$\displaystyle\min_{c_{bi}\forall i}$
$\displaystyle\sum_{f\in\mathcal{F}}\sum_{j=0}^{\tau-M-1}\left|\bar{D}_{fb,t-j}-\sum_{m=1}^{M}\bar{D}_{fb,t-j-m}c_{bm}\right|^{2}$
(4a) subject to $\displaystyle\sum_{m=1}^{M}c_{bm}\bar{D}_{fb,t+1-m}\geq 0,\forall
f\in\mathcal{F}$ (4b)
$\displaystyle\sum_{f\in\mathcal{F}}\sum_{m=1}^{M}c_{bm}\bar{D}_{fb,t+1-m}=1,$ (4c)
where the constraints ensure non-negative estimates that sum to one. Let the
prediction estimate be denoted as
$\hat{\bar{D}}_{fb,t+1}=\sum_{m=1}^{M}c_{bm}\bar{D}_{fb,t+1-m}$. The
mean-based approach can be considered a special case of the linear prediction
approach, with $c_{bm}=\nicefrac{{1}}{{M}},\forall m=1,\ldots,M$. Since the
data is sparse and correlated across files and base stations, it is difficult
to directly obtain accurate predictions via these methods. Therefore, after
filling the entries via the tensor completion method, the above linear
prediction can be used to find the future popularity estimates. For brevity,
in later sections, we denote this linear prediction method by the notation
$\left\\{\hat{\bar{D}}_{fb,t+1},\forall f,b\right\\}\leftarrow
LP\left\\{\mathcal{D}_{t}\right\\}$.
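A minimal sketch of this least-squares fit for one BS, omitting the constraints (4b)-(4c) and instead clipping and renormalizing the estimate afterwards (a simplifying assumption, not the paper's exact method):

```python
import numpy as np

def linear_predict(Dbar, M):
    """Order-M linear prediction of the next slot's normalized demands.
    Dbar : (F, tau) array, columns = time slots."""
    F, tau = Dbar.shape
    # one row block per predictable slot t: Dbar[:, t] ~ sum_m c[m-1] * Dbar[:, t-m]
    A = np.vstack([Dbar[:, t - M:t][:, ::-1] for t in range(M, tau)])
    b = np.concatenate([Dbar[:, t] for t in range(M, tau)])
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    d_hat = Dbar[:, -1:-M - 1:-1] @ c    # predict slot tau from last M slots
    d_hat = np.clip(d_hat, 0.0, None)    # heuristic stand-in for (4b)-(4c)
    return d_hat / d_hat.sum()

rng = np.random.default_rng(0)
Dbar = rng.random((4, 6))
Dbar /= Dbar.sum(axis=0)                 # normalized demands per slot
pred = linear_predict(Dbar, M=2)
print(pred)
```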
### II-D Tensor preliminaries: Circular unfolding and folding
To efficiently represent the information in higher-order tensors, the authors
in [52, 60] defined a balanced unfolding scheme based on circular unfolding.
For an $N$-order tensor $\mathcal{X}$, the tensor circular unfolding matrix,
denoted by $\mathcal{X}_{(k,d)}$ of size
$\bar{I}_{k}\times\bar{J}_{k}=I_{a}I_{a+1}\ldots I_{k}\times I_{k+1}\ldots
I_{a-1}$, can be written as
$\mathcal{X}_{\left(k,d\right)}\left(i_{a}i_{a+1}\ldots i_{k},\,i_{k+1}\ldots
i_{a-1}\right)=\mathcal{X}\left(i_{1},i_{2},\ldots,i_{N}\right),$ (5)
where $d$ is a positive integer and
$a=\begin{cases}k-d+1,&d\leq k;\\\ k-d+1+N,&d>k.\end{cases}$ (6)
The above unfolding, for a given mode $k$ and shift $d$, is balanced, with the
balance depending on the choice of $d$. Similarly, folding a matrix
$\mathbf{X}$ of size $I_{a}I_{a+1}\ldots I_{k}\times I_{k+1}\ldots I_{a-1}$
along the mode $k$ and shift $d$ yields the tensor
$\left\llbracket\mathbf{X}\right\rrbracket_{\left(k,d\right)}$ of size
$I_{1}\times\ldots\times I_{N}$.
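The unfolding in (5)-(6) can be sketched with `transpose` and `reshape`; the ordering of indices within each multi-index follows numpy's row-major convention (an implementation choice, not fixed by the text):

```python
import numpy as np

def circ_unfold(X, k, d):
    """Tensor circular unfolding (5): modes a,...,k (with a from (6),
    counted cyclically) index the rows, the remaining modes the columns.
    k and d are 1-based as in the text."""
    N = X.ndim
    a = k - d + 1 if d <= k else k - d + 1 + N
    perm = [(a - 1 + i) % N for i in range(N)]   # modes a..k, k+1..a-1 (0-based)
    rows = int(np.prod([X.shape[p] for p in perm[:d]]))
    return X.transpose(perm).reshape(rows, -1)

def circ_fold(M, shape, k, d):
    """Inverse operation: fold a matrix back into a tensor of `shape`."""
    N = len(shape)
    a = k - d + 1 if d <= k else k - d + 1 + N
    perm = [(a - 1 + i) % N for i in range(N)]
    return M.reshape([shape[p] for p in perm]).transpose(np.argsort(perm))

# fourth-order example: k = 3, d = 2 gives a = 2, rows indexed by (i2, i3)
X = np.arange(2 * 3 * 4 * 5).reshape(2, 3, 4, 5)
U = circ_unfold(X, k=3, d=2)
print(U.shape)  # (12, 10), i.e. I2*I3 x I4*I1
```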
### II-E Latent nuclear norm with cyclic unfolding
The cyclic unfolding based latent nuclear norm is defined for an $N$-order
tensor $\mathcal{X}$ as follows [36, 55]
$\left\|\mathcal{X}\right\|_{TR}=\min_{\mathcal{X}_{1}+\ldots+\mathcal{X}_{N}=\mathcal{X}}\sum_{k=1}^{N}\left\|\mathcal{X}_{k,(k,d)}\right\|_{*},$
(7)
where the minimum is over $N$ tensors
$\left\\{\mathcal{X}_{k}\right\\}_{k=1}^{N}$, and $\mathcal{X}_{k,(k,d)}$
denotes the cyclic unfolding of $\mathcal{X}_{k}$ along the mode $k$ with a
given shift $d$.
### II-F Tensor completion problem
Let $\mathcal{T}\in\mathbb{R}^{I_{1}\times\cdots\times I_{N}}$ be an $N$-order
sparse tensor with partially observed entries. The locations of the observed
entries in $\mathcal{T}$ are encoded by an indicator tensor $\mathcal{I}$,
where $\mathcal{I}\left(i_{1},i_{2},\ldots,i_{N}\right)$ is 1 when
$\mathcal{T}\left(i_{1},i_{2},\ldots,i_{N}\right)\neq 0$, and 0 otherwise.
The notations $\left|\mathcal{I}\right|$ and $\mathcal{T}(\mathcal{I})$ denote
the number of non-zeros in $\mathcal{I}$ and the entries of $\mathcal{T}$
at the corresponding non-zero indices of $\mathcal{I}$, respectively.
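For concreteness, with a boolean mask these notations can be realized as follows (a toy NumPy sketch):

```python
import numpy as np

T = np.array([[1.0, 0.0],
              [0.0, 2.0]])   # toy sparse observation tensor (N = 2)
I = (T != 0)                 # indicator tensor: 1 where observed, 0 otherwise
num_obs = int(I.sum())       # |I|, the number of non-zeros: here 2
T_obs = T[I]                 # T(I), the observed entries: here [1.0, 2.0]
```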
The optimization problem for low-rank tensor completion using the TR-latent
nuclear norm is cast as
$\displaystyle\min_{\mathcal{X}_{k},\forall k}$
$\displaystyle\left\|\mathcal{X}\right\|_{TR}$ (8a) subject to
$\displaystyle\mathcal{X}=\sum_{k=1}^{N}\mathcal{X}_{k},\mathcal{X}(\mathcal{I})=\mathcal{T}(\mathcal{I}),$
(8b)
where the $\mathcal{X}_{k}$ are component tensors, unfolded cyclically. To
solve the above optimization, the Frank-Wolfe (FW) algorithm was recently used
in [55, 36] to obtain a parameter-independent iterative procedure for tensor
completion. However, its computational complexity is large due to the optimum
mode selection $k$, which requires an SVD of $\mathcal{X}_{k}$ along each of
the $N$ modes (since $k=1,\ldots,N$), and due to the iterative compression of
basis matrices, where an SVD, a QR factorization, and a quadratic optimization
are performed. In this work, we propose a procedure with a much lower
computational overhead and a significantly better reconstruction error. The
details are described in the following.
## III Tensor completion algorithm
### III-A Rank efficient modified FW algorithm
Under the Frank-Wolfe framework, the optimization problem in (8) can be
rewritten as
$\displaystyle\min_{\mathcal{X}}$ $\displaystyle F(\mathcal{X})$ (9a) subject
to $\displaystyle\left\|\mathcal{X}\right\|_{TR}\leq\beta,$ (9b)
where
$F(\mathcal{X})=\frac{1}{2}\left\|\mathcal{X}(\mathcal{I})-\mathcal{T}(\mathcal{I})\right\|_{F}^{2}$
and $\beta>0$. The constraint $\left\|\mathcal{X}\right\|_{TR}\leq\beta$ also
acts as an implicit rank constraint: if no rank constraint is specified, the
rank of the solution can be high. However, for the proposed procedure, we will
show that once a rank constraint is specified, the above norm constraint can be
relaxed; in other words, low-rank solutions can be obtained within the
specified $\beta$ limit.
The above optimization is solved via the gradient descent steps in the FW
algorithm. It can be observed that the constraint
$\mathcal{X}=\sum_{k=1}^{N}\mathcal{X}_{k}$ is also included in the TR-norm
constraint. For the cyclic unfolding, we can write
$\mathcal{X}=\sum_{k=1}^{N}\left\llbracket\mathcal{X}_{k,\left(k,d\right)}\right\rrbracket_{\left(k,d\right)},$
(10)
where the matrices $\mathcal{X}_{k,\left(k,d\right)}$ are of low-rank, say
$R_{k}$. Given the rank constraint of overall TC problem (say $R$), all
$\mathcal{X}_{k}$ must satisfy $\sum_{k=1}^{N}R_{k}\leq R$. Note that for
notational simplicity, we omit the iteration index. The update of the gradient
descent can be given as
$\mathcal{X}\leftarrow\mathcal{X}-\gamma\mathcal{S},$ (11)
where $\gamma>0$ and the tensor $\mathcal{S}$ represents the update direction
derived from the gradient of $F(\mathcal{X})$, $\nabla
F=\mathcal{X}(\mathcal{I})-\mathcal{T}(\mathcal{I})$. An efficient
representation of this direction with a low-rank decomposition satisfying the
TR-norm constraint can be obtained as follows.
#### III-A1 Constraint linear optimization
The problem of finding the direction tensor $\mathcal{S}$ that follows the
gradient of $F(\mathcal{X})$ while satisfying the constraint (9b) can be
expressed as
$\displaystyle\mathcal{S}$
$\displaystyle=\arg\max_{\left\|\mathcal{S}\right\|_{TR}\leq\beta}\left\langle\mathcal{S},\nabla
F\right\rangle,$
where the objective is to maximize the correlation between the above two
tensors. Note that the objective function is linear in $\mathcal{S}$. One
possible solution is to choose $\mathcal{S}=\beta\frac{\nabla F}{\left\|\nabla
F\right\|_{TR}}$. However, we also need a low-rank decomposition of
$\mathcal{S}$ for the tensor completion problem. Therefore, we first find the
optimum mode for cyclic unfolding and then leverage the SVD for the
decomposition components. To find the optimum mode, we compare the dominant
singular value of the unfolded tensor $\nabla F$ along the different modes,
that is,
$k^{*}=\arg\max_{k}\sigma_{max}\left[\left(\nabla
F\right)_{\left(k,d\right)}\right],$
where the notation $\sigma_{max}(\mathbf{A})$ denotes the maximum singular
value of the matrix $\mathbf{A}$. Let the unfolded gradient have the SVD
$\left(\nabla
F\right)_{\left(k,d\right)}=\tilde{\mathbf{U}}_{k}\tilde{\Sigma}_{k}\tilde{\mathbf{V}}_{k}^{T}$,
where the singular values are assumed to be in descending order. For low-rank
tensor completion, the rank is constrained, say to $r_{k}$; we will present how
to obtain $r_{k}$ later. Therefore, we have the gradient solution as
$\mathcal{S}=\beta\frac{\left\llbracket\tilde{\mathbf{U}}_{k^{*}}(1:r_{k^{*}})\tilde{\Sigma}_{k^{*}}(1:r_{k^{*}},1:r_{k^{*}})\tilde{\mathbf{V}}_{k^{*}}^{T}(1:r_{k^{*}})\right\rrbracket_{\left(k^{*},d\right)}}{\mathbf{1}_{r_{k^{*}}}^{T}\tilde{\Sigma}_{k^{*}}(1:r_{k^{*}},1:r_{k^{*}})\mathbf{1}_{r_{k^{*}}}},$
(12)
where $\mathbf{A}(1:m,1:n)$ denotes the submatrix with entries belonging to
the first $m$ rows and first $n$ columns of a matrix $\mathbf{A}$;
$\mathbf{A}(1:n)$ is the submatrix with the first $n$ columns of $\mathbf{A}$;
and $\mathbf{1}_{n}$ denotes the $n\times 1$ vector of ones.
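In unfolded form, (12) keeps the $r_{k^{*}}$ leading singular triplets of the unfolded gradient and scales them by $\beta$ over the sum of the retained singular values. A minimal sketch (the helper name `gradient_atom` is ours; folding back via the circular folding is omitted):

```python
import numpy as np

def gradient_atom(G_unf, r, beta):
    """Rank-r representation of an unfolded gradient, as in Eq. (12):
    truncated SVD normalized by the sum of the kept singular values."""
    U, s, Vt = np.linalg.svd(G_unf, full_matrices=False)
    U, s, Vt = U[:, :r], s[:r], Vt[:r, :]
    return beta * (U * s) @ Vt / s.sum()
```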
_Remark (rank-1 updates)_ : For the gradient representation $\mathcal{S}$,
instead of choosing rank $r_{k}$, rank-$1$ updates can be considered (a
suboptimal choice with a larger overhead) as
$\mathcal{S}=\beta\left\llbracket\tilde{\mathbf{U}}_{k^{*}}(1)\tilde{\mathbf{V}}_{k^{*}}^{T}(1)\right\rrbracket_{\left(k^{*},d\right)},$
(13)
which is less efficient than (12) due to the fact that the structure of the
singular values $\tilde{\Sigma}_{k^{*}}$ is absent. The algorithm in [55, 36]
gathers many such singular vectors (and values) and recovers the structure of
the singular values via a compression step.
_Remark (max-mode selection)_ : In the above step, the mode $k^{*}$ is
selected based on the maximum singular value. For an $I_{1}\times\cdots\times
I_{N}$ tensor, the dominant singular value of an unfolding tends to be largest
when the smaller of its two dimensions is minimal. If the $k^{th}$ unfolding
has dimensions $\bar{I}_{k}\times\bar{J}_{k}$, then the value $k^{*}$ can be
selected as
$k^{*}=\arg\min_{k}\min\left\\{\bar{I}_{k},\bar{J}_{k}\right\\},$ (14)
which significantly reduces the computational complexity of mode selection by
avoiding the computation of $N$ dominant singular values.
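Since the unfolding dimensions follow directly from the tensor shape and the shift $d$, the shortcut (14) can be sketched as follows (1-indexed modes; the name `select_mode` is ours):

```python
import numpy as np

def select_mode(shape, d):
    """Pick k* = argmin_k min(I_bar_k, J_bar_k), Eq. (14), without SVDs."""
    N = len(shape)
    best_k, best_val = 1, None
    for k in range(1, N + 1):
        a = k - d + 1 if d <= k else k - d + 1 + N    # Eq. (6)
        order = [(a - 1 + i) % N for i in range(N)]
        rows = int(np.prod([shape[i] for i in order[:d]]))
        cols = int(np.prod([shape[i] for i in order[d:]]))
        if best_val is None or min(rows, cols) < best_val:
            best_k, best_val = k, min(rows, cols)
    return best_k
```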
#### III-A2 Line search
The line search problem is to find the step size, which can be expressed as
$\displaystyle\gamma$ $\displaystyle=\arg\min_{\gamma\geq
0}\left\|\left\\{\mathcal{X}\left(\mathcal{I}\right)-\gamma\mathcal{S}\right\\}-\mathcal{T}\left(\mathcal{I}\right)\right\|_{F}^{2}$
(15a) $\displaystyle=\arg\min_{\gamma\geq 0}\bar{a}\gamma^{2}-2\bar{b}\gamma$
(15b) $\displaystyle=\max\left\\{\frac{\bar{b}}{\bar{a}},0\right\\},$ (15c)
where $\bar{a}=\left\|\mathcal{S}\left(\mathcal{I}\right)\right\|_{F}^{2}$,
$\bar{b}=\left\langle\mathcal{X}\left(\mathcal{I}\right)-\mathcal{T}\left(\mathcal{I}\right),\mathcal{S}\left(\mathcal{I}\right)\right\rangle$,
and the solution is obtained by differentiation. The max operator arises from
the $\gamma\geq 0$ constraint. With $\gamma$ and $\mathcal{S}$ obtained, the
tensor $\mathcal{X}$ can be updated using equation (11).
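The closed form (15c) requires only the observed entries; a minimal sketch (here `X_obs`, `S_obs`, `T_obs` are assumed to hold $\mathcal{X}(\mathcal{I})$, $\mathcal{S}(\mathcal{I})$, $\mathcal{T}(\mathcal{I})$ as flat arrays):

```python
import numpy as np

def line_search(X_obs, S_obs, T_obs):
    """Step size of Eq. (15c): gamma = max(b/a, 0) with
    a = ||S(I)||_F^2 and b = <X(I) - T(I), S(I)>."""
    a = float(np.sum(S_obs ** 2))
    b = float(np.sum((X_obs - T_obs) * S_obs))
    return max(b / a, 0.0) if a > 0 else 0.0
```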
#### III-A3 Decomposition update
Thanks to the decomposition, one does not need to store the whole tensor
$\mathcal{S}$ or $\mathcal{X}$. We can store the SVD components and the step
size as a representation of the update of the unfolded component tensors. In
other words, only the factors along the selected optimum mode $k^{*}$ are
updated, and the update equation (11) can be simplified as
$\displaystyle\mathcal{X}\leftarrow\mathcal{X}-\gamma\mathcal{S}$ (16a)
$\displaystyle=\sum_{k=1}^{N}\left\llbracket\mathcal{X}_{k,\left(k,d\right)}\right\rrbracket_{\left(k,d\right)}-\gamma\mathcal{S}$
(16b) $\displaystyle=\sum_{k\neq
k^{*}}\left\llbracket\mathcal{X}_{k,\left(k,d\right)}\right\rrbracket_{\left(k,d\right)}+\Bigg{\llbracket}\mathcal{X}_{k^{*},\left(k^{*},d\right)}-$
(16c)
$\displaystyle\gamma\beta\frac{\tilde{\mathbf{U}}_{k^{*}}(1:r_{k^{*}})\tilde{\Sigma}_{k^{*}}(1:r_{k^{*}},1:r_{k^{*}})\tilde{\mathbf{V}}_{k^{*}}^{T}(1:r_{k^{*}})}{\mathbf{1}_{r_{k^{*}}}^{T}\tilde{\Sigma}_{k^{*}}(1:r_{k^{*}},1:r_{k^{*}})\mathbf{1}_{r_{k^{*}}}}\Bigg{\rrbracket}_{\left(k^{*},d\right)}.$
Since for each $k$ we have
$\mathcal{X}_{k,\left(k,d\right)}=\mathbf{U}_{k}\boldsymbol{\Sigma}_{k}\mathbf{V}_{k}^{T}$,
the above equation leads to the update of the singular vectors along only the
$k^{*}$-th mode as
$\displaystyle\mathbf{U}_{k^{*}}$
$\displaystyle\leftarrow\left[\mathbf{U}_{k^{*}},-\tilde{\mathbf{U}}_{k^{*}}(1:r_{k^{*}})\right],$
(17a) $\displaystyle\mathbf{V}_{k^{*}}$
$\displaystyle\leftarrow\left[\mathbf{V}_{k^{*}},\tilde{\mathbf{V}}_{k^{*}}(1:r_{k^{*}})\right],$
(17b) $\displaystyle\boldsymbol{\Sigma}_{k^{*}}$
$\displaystyle\leftarrow\left[\begin{array}[]{cc}\boldsymbol{\Sigma}_{k^{*}}&\mathbf{0}\\\
\mathbf{0}&\frac{\gamma\beta\tilde{\Sigma}_{k^{*}}(1:r_{k^{*}},1:r_{k^{*}})}{\mathbf{1}_{r_{k^{*}}}^{T}\tilde{\Sigma}_{k^{*}}(1:r_{k^{*}},1:r_{k^{*}})\mathbf{1}_{r_{k^{*}}}}\end{array}\right],$
(17e) $\displaystyle R_{k^{*}}$ $\displaystyle\leftarrow R_{k^{*}}+r_{k^{*}},$
(17f)
where the components
$\mathbf{U}_{k},\boldsymbol{\Sigma}_{k},\mathbf{V}_{k},R_{k}$, $k\neq k^{*}$,
remain unchanged during this iteration. Regarding the update of $r_{k}$,
there are two constraints: the rank of the unfolded matrix,
$r_{k}\leq\min\left\\{\bar{I}_{k},\bar{J}_{k}\right\\}-R_{k}$, and the overall
rank constraint $r_{k}\leq R-\sum_{i=1}^{N}R_{i}$. Thus, for the next
iteration, the update for $r_{k}$ can be written as
$r_{k}\leftarrow\min\left\\{\bar{I}_{k}-R_{k},\bar{J}_{k}-R_{k},R-\sum_{i=1}^{N}R_{i}\right\\},$
(18)
which also defines the stopping criterion: the iterative procedure stops when
$r_{k}$ reaches $0$.
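The budget update (18) is a pointwise minimum of the two constraints; a one-line sketch (`next_rank` is our name, and `R_list` holds the current values $R_{1},\ldots,R_{N}$):

```python
def next_rank(Ibar_k, Jbar_k, R_k, R_list, R):
    """Eq. (18): rank budget along mode k, limited by the unfolding
    dimensions and the remaining global rank budget R - sum_i R_i."""
    return min(Ibar_k - R_k, Jbar_k - R_k, R - sum(R_list))
```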
_Remark (Compression step)_ : It can be seen that, due to the rank constraint
$r_{k}$, the above representation does not explode into many basis matrices.
Thus, no compression is required here, which significantly reduces the
computational overhead compared to [55, 36]. Further, if the compression step
were incorporated into the proposed algorithm, the reconstruction errors could
be reduced further. However, for an online algorithm, this step is omitted.
#### III-A4 Algorithm
Algorithm 1 presents the proposed TC procedure, which combines the steps
obtained in the above subsections. In addition to the input observed tensor
$\mathcal{T}$, the method requires specifying the target rank $R$ and the
low-rank reconstruction error limit $\beta$. After the initialization of the
variables $\mathcal{X}=0$,
$R_{k}=r_{k}=0,\mathbf{U}_{k}=\Sigma_{k}=\mathbf{V}_{k}=\emptyset,\forall k$,
first the optimum mode $k^{*}$ is selected and the SVD of the corresponding
unfolding of $\nabla F$ is computed. Based on the value of $r_{k}$, the
gradient representation $\mathcal{S}$ and the step size $\gamma$ are
calculated via a rank-$r_{k}$ truncated SVD. Subsequently, the rest of the
variables are updated, including $\mathcal{X}$,
$\mathbf{U}_{k^{*}},\boldsymbol{\Sigma}_{k^{*}},\mathbf{V}_{k^{*}}$, and
$R_{k^{*}}$. For later usage, the algorithmic procedure is denoted as
$\mathcal{X}\leftarrow TCA(\mathcal{T},R)$.
The value of $r_{k}$ denotes the number of available dimensions in the
$k^{th}$ mode. The value $r_{k}=0$ means that either the overall rank
constraint $R=\sum_{k}R_{k}$ is satisfied, or the mode-$k$ rank is reached,
i.e., $R_{k}=\min\left\\{\bar{I}_{k},\bar{J}_{k}\right\\}$. The former leads
to the stopping criterion, while the latter adds a pruning step to the search
space of the optimum mode selection, that is,
$\mathcal{N}\leftarrow\mathcal{N}\setminus\left\\{k\right\\}$.
###### Proposition 1.
Given the rank constraint $R$, the Algorithm 1 is independent of $\beta$.
###### Proof:
In the algorithm, the update equation (11) depends on the product
$\gamma\beta$. By the definition of $\gamma$, we write
$\gamma=\frac{\left\langle\mathcal{X}\left(\mathcal{I}\right)-\mathcal{T}\left(\mathcal{I}\right),\mathcal{S}\left(\mathcal{I}\right)\right\rangle}{\left\|\mathcal{S}\left(\mathcal{I}\right)\right\|_{F}^{2}}.$
Substituting the value of $\mathcal{S}$ from (12) yields
$\beta\gamma=\text{constant}$: since $\mathcal{S}$ scales linearly with
$\beta$, $\gamma$ scales as $1/\beta$, and the constant is determined by the
singular values of the $k^{*}$-mode unfolding of $\nabla F$. In other words,
$\gamma$ adjusts itself according to $\beta$ in each iteration, leading to a
$\beta$-independent algorithm. This is also verified via simulations. ∎
1:$\mathcal{T}$, $\beta$, $R,d$.
2:$\mathbf{U}_{k},\boldsymbol{\Sigma}_{k},\mathbf{V}_{k},\forall
k$:$\left\|\sum_{k=1}^{N}\left\llbracket\mathbf{U}_{k}\boldsymbol{\Sigma}_{k}\mathbf{V}_{k}^{T}\right\rrbracket_{\left(k,d\right)}\right\|_{TR}\leq\beta$.
3:Initialize $\mathcal{X}=0$,
$R_{k}=r_{k}=0,\mathbf{U}_{k}=\Sigma_{k}=\mathbf{V}_{k}=\emptyset,\forall k$.
4:Initialize $\mathcal{N}=\left\\{1,\dots,N\right\\}.$
5:for $n=1,2,\ldots,n_{\max}$ do
6: Set the tensor $\nabla
F=\mathcal{X}(\mathcal{I})-\mathcal{T}(\mathcal{I})$.
7: Obtain $k^{*}=\arg\max_{k\in\mathcal{N}}\sigma_{max}\left[\left(\nabla
F\right)_{\left(k,d\right)}\right]$.
8: Get the SVD $\left(\nabla
F\right)_{\left(k^{*},d\right)}=\tilde{\mathbf{U}}_{k^{*}}\tilde{\Sigma}_{k^{*}}\tilde{\mathbf{V}}_{k^{*}}^{T}\in\mathbb{R}^{\bar{I}_{k^{*}}\times\bar{J}_{k^{*}}}$
9: Update
$r_{k}\leftarrow\min\left\\{\bar{I}_{k}-R_{k},\bar{J}_{k}-R_{k},R-\sum_{k=1}^{N}R_{k}\right\\}$.
10: if $r_{k}\leq 0$ then
11: Break the loop.
12: end if
13: Compute $\mathcal{S}$ from (12).
14: Get step size $\gamma$ from (15c).
15: Update $\mathcal{X}\leftarrow\mathcal{X}-\gamma\mathcal{S}$.
16: Update
$\mathbf{U}_{k^{*}},\boldsymbol{\Sigma}_{k^{*}},\mathbf{V}_{k^{*}}$.
17: Update $R_{k^{*}}=R_{k^{*}}+r_{k^{*}}$.
18: if $R_{k^{*}}=\min\left\\{\bar{I}_{k^{*}},\bar{J}_{k^{*}}\right\\}$ then
19: $\mathcal{N}\leftarrow\mathcal{N}\setminus\left\\{k^{*}\right\\}$.
20: end if
21:end for
22:Return $\mathbf{U}_{k},\boldsymbol{\Sigma}_{k},\mathbf{V}_{k},\forall k$.
Algorithm 1 Rank efficient modified FW algorithm, $TCA(\mathcal{T},R)$.
#### III-A5 Time and space complexity
Let $\mathcal{X}$ be an $N$-order tensor of dimension $I\times\ldots\times I$.
In Algorithm 1, a rank-$R$ SVD is computed for the cyclically unfolded
matrix of size $I^{d}\times I^{N-d}$, which incurs the complexity
$\mathcal{O}\left(I^{2d}I^{N-d}\right)=\mathcal{O}\left(I^{N+d}\right)$.
Regarding the space complexity of the rank-$R$ decomposition,
$I^{d}R+I^{N-d}R+R$ real-valued entries are required for
$\mathbf{U}_{k},\mathbf{V}_{k},\Sigma_{k},\forall k$, plus
$\left|\mathcal{I}\right|$ entries for the observed tensor.
### III-B Online prediction and caching algorithm
Since the above tensor completion is fast, it can be used in an online manner
for edge caching, as shown in Algorithm 2. In this algorithm, the tensor
completion and linear prediction methods are employed in the information
exchange (IE) phase, whereas the cache placement (CPL) phase places the
content according to the predicted number of normalized requests.
1:$\mathcal{D}_{t},\forall t$, $R$.
2:$\mathbf{c}_{bt},\forall b,t$
3:for $t=\tau,\tau+1,\ldots$ do
4: _IE phase_ : Observe the tensor $\mathcal{D}_{t}$.
5: Apply $\mathcal{X}\leftarrow TCA(\mathcal{D}_{t},R)$.
6: Employ $\left\\{\hat{\bar{D}}_{fb,t+1},\forall f,b\right\\}\leftarrow
LP(\mathcal{X})$.
7: _CPL phase_ : Based on $\hat{\bar{D}}_{fb,t+1},\forall f$, obtain MPC
placement $\mathbf{c}_{bt}$, for each $b$.
8: _CD phase_ : Deliver contents as per users’ requests.
9:end for
Algorithm 2 Online prediction and caching algorithm.
## IV Simulation Results
Simulations are performed on the MovieLens dataset [61], where fourth order
tensors ($N=4$) are constructed from the movie ratings with dimensions
$F\times F\times N_{BS}\times\tau$, where $F=128$, $N_{BS}=3$, and $\tau=10$.
Each time-slot entry is set by aggregating the ratings over 30 days based on
the given timestamps. An $M=6$-th order prediction is performed. For tensor
completion, values $d=1$, $\beta=10^{5}$, $R=nN$, $n=2,4,6,8,10,12$ are
chosen. For edge caching, $L_{BS}=32$ is chosen. The algorithm is compared for
normalized reconstruction errors, defined as
$RSE=\frac{\left\|\mathcal{X}(\mathcal{I})-\mathcal{T}(\mathcal{I})\right\|_{F}}{\left\|\mathcal{T}(\mathcal{I})\right\|_{F}},$
(19)
which equals $1$ at the start of the first iteration, since $\mathcal{X}=0$.
Two prediction methods are chosen, that is, linear predictions with optimum
coefficients (LP) and with equal coefficients (MP). The performance of edge
caching is measured in terms of cache hit rate. Algorithms are run on a
windows PC with Intel Xeon CPU E3-1230 v5 (3.40GHz, 32GB RAM).
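The metric (19), restricted to the observed entries, can be computed as, for example (a sketch; `mask` is the boolean form of the indicator tensor $\mathcal{I}$):

```python
import numpy as np

def rse(X, T, mask):
    """Normalized reconstruction error of Eq. (19) over observed entries."""
    return np.linalg.norm((X - T)[mask]) / np.linalg.norm(T[mask])
```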
### IV-A Convergence
Figure 3: RSE versus the execution time (in MATLAB) for the proposed algorithm
and the algorithm in [55], for $\beta=10^{5}$ and $R=nN$, $\forall
n=2,4,6,8,10,12$.
Figure 3 plots the convergence in terms of RSE for the proposed algorithm and
the FW algorithm in [55], for $\beta=10^{5}$ and $R=nN$, $\forall
n=2,4,6,8,10,12$. It can be observed that the proposed algorithm significantly
outperforms the one in [55]: it achieves a lower RSE in fewer iterations
(e.g., at $R=2N=32$, $RSE\approx 10^{-1}$ for Yu _et al._ versus
$RSE<10^{-6}$ for the proposed one), and each iteration also executes in less
time (approximately 0.2 seconds in total versus 200 seconds).
### IV-B Effect of latent factors $R$ and $\beta$
To observe the effect of the factor $\beta$, Figure 4 plots the RSE with
respect to $\beta$ at $R=6N$ for all three methods, while Figure 5 shows the
RSE with respect to the rank $R$ at fixed $\beta=10^{5}$. It can be seen that,
for all values of $\beta$, the proposed algorithm provides a much lower RSE
than CPD and the method in [55]. Moreover, the proposed algorithm is invariant
to changes in $\beta$, as shown in Proposition 1. Regarding the plots versus
the rank $R$, the proposed method performs significantly better at lower
ranks. As the rank increases, the performance difference between the proposed
and the FW algorithm decreases. For sufficiently high rank, $R>45$, FW
provides marginally better performance, with an RSE difference on the order
of $10^{-6}$.
Figure 4: Relative and reconstruction errors with respect to $\beta$ at
$R=6N$. Figure 5: Relative and reconstruction errors versus different ranks
with fixed $\beta$.
### IV-C Cache hit rate
Figure 6: Average cache hit rate for two linear prediction methods with three
different completion methods for MovieLens dataset with $R=2N,4N,6N$ ranks and
$\beta=10^{5}$.
To evaluate the performance of the proposed method in the edge caching
scenario, Figure 6 shows the normalized cache hit rates averaged across time
slots for three different methods with a cache size of $32$. The results are
averaged over 220 time slots, where in each time slot a tensor completion
problem is solved and the future popularity is predicted using the linear
prediction (LP) and mean prediction (MP) methods. It can be observed that,
compared to CPD, the CHR improvement is around 75.8%. Mean prediction and
linear prediction perform approximately similarly in terms of CHR. As the rank
of the completion is increased, the improvement in CHR remains similar, which
is due to the rank-efficient tensor completion. In other words, the proposed
low-rank completion is efficient at low ranks; hence, completion at higher
ranks yields similar CHR performance, which can also be concluded from
Figure 5.
## V Conclusion
In this paper, we have proposed a modified FW algorithm based on gradient
descent with improved time complexity, which has been shown to significantly
outperform existing methods in terms of both computational complexity and
reconstruction performance. The algorithm needs only a few iterations, on the
order of the specified rank $R$. Further, for wireless edge caching, we have
formulated the content recommendation and prediction problem in a tensor
completion framework. Using the completed tensor, we have employed linear
prediction methods to obtain the future popularities. For this application, it
is shown via simulations that, after completion, the algorithm provides a
significantly better cache hit rate (around 75%) than the CP decomposition.
For edge caching, the library size is typically large, on the order of
$10^{4}$. Performing tensor completion on such a large tensor is both time and
space intensive. Therefore, in future work, we shall explore finding
independent blocks in the observed tensor. This will significantly reduce the
time and space requirements, and better tensor decompositions can be found for
the resulting block tensors compared to the full tensor.
## References
* [1] H. S. Goian, O. Y. Al-Jarrah, S. Muhaidat, Y. Al-Hammadi, P. Yoo, and M. Dianati, “Popularity-based video caching techniques for cache-enabled networks: A survey,” _IEEE Access_ , vol. 7, pp. 27 699–27 719, 2019.
* [2] L. Li, G. Zhao, and R. S. Blum, “A survey of caching techniques in cellular networks: Research issues and challenges in content placement and delivery strategies,” _IEEE Communications Surveys & Tutorials_, vol. 20, no. 3, pp. 1710–1732, 2018.
* [3] K. Shanmugam, N. Golrezaei, A. G. Dimakis, A. F. Molisch, and G. Caire, “Femtocaching: Wireless content delivery through distributed caching helpers,” _IEEE Transactions on Information Theory_ , vol. 59, no. 12, pp. 8402–8413, 2013.
* [4] K. Poularakis and L. Tassiulas, “On the complexity of optimal content placement in hierarchical caching networks,” _IEEE Transactions on Communications_ , vol. 64, no. 5, pp. 2092–2103, 2016.
* [5] B. Blaszczyszyn and A. Giovanidis, “Optimal geographic caching in cellular networks,” in _IEEE International Conference on Communications (ICC)_ , 2015, pp. 3358–3363.
* [6] B. Serbetci and J. Goseling, “Optimal geographical caching in heterogeneous cellular networks with nonhomogeneous helpers,” _arXiv preprint arXiv:1710.09626_ , 2017.
* [7] A. Papazafeiropoulos and T. Ratnarajah, “Modeling and performance of uplink cache-enabled massive mimo heterogeneous networks,” _IEEE Transactions on Wireless Communications_ , vol. 17, no. 12, pp. 8136–8149, 2018\.
* [8] A. Sadeghi, F. Sheikholeslami, and G. B. Giannakis, “Optimal and scalable caching for 5G using reinforcement learning of space-time popularities,” _IEEE Journal of Selected Topics in Signal Processing_ , vol. 12, no. 1, pp. 180–190, 2018.
* [9] N. Garg, M. Sellathurai, V. Bhatia, and T. Ratnarajah, “Function approximation based reinforcement learning for edge caching in massive mimo networks,” _IEEE Transactions on Communications_ , 2020.
* [10] J. Yin, L. Li, H. Zhang, X. Li, A. Gao, and Z. Han, “A prediction-based coordination caching scheme for content centric networking,” in _WOCC_ , 2018, pp. 1–5.
* [11] W.-X. Liu, J. Zhang, Z.-W. Liang, L.-X. Peng, and J. Cai, “Content popularity prediction and caching for ICN: A deep learning approach with SDN,” _IEEE Access_ , vol. 6, pp. 5075–5089, 2018.
* [12] H. Nakayama, S. Ata, and I. Oka, “Caching algorithm for content-oriented networks using prediction of popularity of contents,” in _IFIP/IEEE International Symposium on Integrated Network Management (IM)_ , 2015, pp. 1171–1176.
* [13] Y. Zhang, X. Tan, and W. Li, “Ppc: Popularity prediction caching in icn,” _IEEE Communications Letters_ , vol. 22, no. 1, pp. 5–8, Jan 2018\.
* [14] R. Haw, S. M. A. Kazmi, K. Thar, M. G. R. Alam, and C. S. Hong, “Cache aware user association for wireless heterogeneous networks,” _IEEE Access_ , vol. 7, pp. 3472–3485, 2019.
* [15] J. Wu, Y. Zhou, D. M. Chiu, and Z. Zhu, “Modeling dynamics of online video popularity,” _IEEE Transactions on Multimedia_ , vol. 18, no. 9, pp. 1882–1895, Sep. 2016.
* [16] Y. Jiang, M. Ma, M. Bennis, F. Zheng, and X. You, “User preference learning-based edge caching for fog radio access network,” _IEEE Transactions on Communications_ , vol. 67, no. 2, pp. 1268–1283, Feb 2019.
* [17] Y. Wang, M. Ding, Z. Chen, and L. Luo, “Caching placement with recommendation systems for cache-enabled mobile social networks,” _IEEE Communications Letters_ , vol. 21, no. 10, pp. 2266–2269, 2017.
* [18] P. Sermpezis, T. Giannakas, T. Spyropoulos, and L. Vigneri, “Soft cache hits: Improving performance through recommendation and delivery of related content,” _IEEE Journal on Selected Areas in Communications_ , vol. 36, no. 6, pp. 1300–1313, 2018.
* [19] L. E. Chatzieleftheriou, M. Karaliopoulos, and I. Koutsopoulos, “Jointly optimizing content caching and recommendations in small cell networks,” _IEEE Transactions on Mobile Computing_ , vol. 18, no. 1, pp. 125–138, 2019\.
* [20] P. Cheng, C. Ma, M. Ding, Y. Hu, Z. Lin, Y. Li, and B. Vucetic, “Localized small cell caching: A machine learning approach based on rating data,” _IEEE Transactions on Communications_ , vol. 67, no. 2, pp. 1663–1676, 2019.
* [21] K. Guo and C. Yang, “Temporal-spatial recommendation for caching at base stations via deep reinforcement learning,” _IEEE Access_ , vol. 7, pp. 58 519–58 532, 2019.
* [22] W. He, Y. Su, X. Xu, Z. Luo, L. Huang, and X. Du, “Cooperative content caching for mobile edge computing with network coding,” _IEEE Access_ , vol. 7, pp. 67 695–67 707, 2019.
* [23] S. Mehrizi, T. X. Vu, S. Chatzinotas, and B. Ottersten, “Trend-aware proactive caching via tensor train decomposition: A bayesian viewpoint,” _IEEE Open Journal of the Communications Society_ , vol. 2, pp. 975–989, 2021.
* [24] N. Garg and T. Ratnarajah, “Tensor completion based prediction in wireless edge caching,” in _2020 54th Asilomar Conference on Signals, Systems, and Computers_ , 2020, pp. 1579–1582.
* [25] L. Cai, X. Wang, J. Wang, M. Huang, and T. Yang, “Multidimensional data learning-based caching strategy in information-centric networks,” in _2017 IEEE International Conference on Communications (ICC)_ , May 2017, pp. 1–6.
* [26] J. Yang, Z. Yao, B. Yang, X. Tan, Z. Wang, and Q. Zheng, “Software-defined multimedia streaming system aided by variable-length interval in-network caching,” _IEEE Transactions on Multimedia_ , vol. 21, no. 2, pp. 494–509, 2019.
* [27] Y. Zhu, G. Zheng, L. Wang, K. Wong, and L. Zhao, “Performance analysis and optimization of cache-enabled small cell networks,” in _GLOBECOM 2017 - 2017 IEEE Global Communications Conference_ , Dec 2017, pp. 1–6.
* [28] N. Garg, M. Sellathurai, and T. Ratnarajah, “Content placement learning for success probability maximization in wireless edge caching networks,” in _IEEE ICASSP_ , May 2019, pp. 3092–3096.
* [29] N. Garg, M. Sellathurai, V. Bhatia, B. N. Bharath, and T. Ratnarajah, “Online content popularity prediction and learning in wireless edge caching,” _IEEE Transactions on Communications_ , vol. 68, no. 2, pp. 1087–1100, 2020.
* [30] N. Garg, V. Bhatia, B. Bettagere, M. Sellathurai, and T. Ratnarajah, “Online learning models for content popularity prediction in wireless edge caching,” _arXiv preprints_ , 2019.
* [31] Y. Liu, Z. Long, H. Huang, and C. Zhu, “Low cp rank and tucker rank tensor completion for estimating missing components in image data,” _IEEE Transactions on Circuits and Systems for Video Technology_ , vol. 30, no. 4, pp. 944–954, 2020.
* [32] T. Yokota and H. Hontani, “Simultaneous tensor completion and denoising by noise inequality constrained convex optimization,” _IEEE Access_ , vol. 7, pp. 15 669–15 682, 2019.
* [33] B. Romera-Paredes and M. Pontil, “A new convex relaxation for tensor completion,” in _Advances in Neural Information Processing Systems_ , 2013, pp. 2967–2975.
* [34] Y. Liu, F. Shang, L. Jiao, J. Cheng, and H. Cheng, “Trace norm regularized candecomp/parafac decomposition with missing data,” _IEEE Transactions on Cybernetics_ , vol. 45, no. 11, pp. 2437–2448, 2015.
* [35] R. Jenatton, N. L. Roux, A. Bordes, and G. R. Obozinski, “A latent factor model for highly multi-relational data,” in _Advances in neural information processing systems_ , 2012, pp. 3167–3175.
* [36] X. Guo, Q. Yao, and J. T.-Y. Kwok, “Efficient sparse low-rank tensor completion using the frank-wolfe algorithm,” in _Thirty-First AAAI Conference on Artificial Intelligence_ , 2017.
* [37] E. Frolov and I. Oseledets, “Tensor methods and recommender systems,” _WIREs Data Mining and Knowledge Discovery_ , vol. 7, no. 3, p. e1201, 2017\. [Online]. Available: https://onlinelibrary.wiley.com/doi/abs/10.1002/widm.1201
* [38] V. N. Ioannidis, A. S. Zamzam, G. B. Giannakis, and N. D. Sidiropoulos, “Coupled graph and tensor factorization for recommender systems and community detection,” _IEEE Transactions on Knowledge and Data Engineering_ , 2019.
* [39] R. Bro, “Parafac. tutorial and applications,” _Chemometrics and Intelligent Laboratory Systems_ , vol. 38, no. 2, pp. 149–171, 1997. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0169743997000324
* [40] L. R. Tucker, “Some mathematical notes on three-mode factor analysis,” _Psychometrika_ , vol. 31, no. 3, pp. 279–311, 1966.
* [41] E. Acar, D. M. Dunlavy, T. G. Kolda, and M. Morup, “Scalable tensor factorizations for incomplete data,” _Chemometrics and Intelligent Laboratory Systems_ , vol. 106, no. 1, pp. 41–56, Mar 2011. [Online]. Available: http://dx.doi.org/10.1016/j.chemolab.2010.08.004
* [42] Q. Zhao, L. Zhang, and A. Cichocki, “Bayesian cp factorization of incomplete tensors with automatic rank determination,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 37, no. 9, pp. 1751–1763, 2015.
* [43] Y. Chen, C. Hsu, and H. M. Liao, “Simultaneous tensor decomposition and completion using factor priors,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 36, no. 3, pp. 577–591, 2014.
* [44] Q. Zhao, G. Zhou, S. Xie, L. Zhang, and A. Cichocki, “Tensor ring decomposition,” _arXiv preprint arXiv:1606.05535_ , 2016.
* [45] Q. Zhao, M. Sugiyama, L. Yuan, and A. Cichocki, “Learning efficient tensor representations with ring-structured networks,” in _ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_. IEEE, 2019, pp. 8608–8612.
* [46] Y. Chen, T. Huang, W. He, N. Yokoya, and X. Zhao, “Hyperspectral image compressive sensing reconstruction using subspace-based nonlocal tensor ring decomposition,” _IEEE Transactions on Image Processing_ , vol. 29, pp. 6813–6828, 2020.
* [47] Y. Chen, W. He, N. Yokoya, T. Huang, and X. Zhao, “Nonlocal tensor-ring decomposition for hyperspectral image denoising,” _IEEE Transactions on Geoscience and Remote Sensing_ , vol. 58, no. 2, pp. 1348–1362, 2020.
* [48] L. Yuan, J. Cao, X. Zhao, Q. Wu, and Q. Zhao, “Higher-dimension tensor completion via low-rank tensor ring decomposition,” in _2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)_ , 2018, pp. 1071–1076.
* [49] W. Wang, V. Aggarwal, and S. Aeron, “Efficient low rank tensor ring completion,” in _2017 IEEE International Conference on Computer Vision (ICCV)_ , 2017, pp. 5698–5706.
* [50] J. Liu, P. Musialski, P. Wonka, and J. Ye, “Tensor completion for estimating missing values in visual data,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 35, no. 1, pp. 208–220, 2013\.
* [51] J. A. Bengua, H. N. Phien, H. D. Tuan, and M. N. Do, “Efficient tensor completion for color image and video recovery: Low-rank tensor train,” _IEEE Transactions on Image Processing_ , vol. 26, no. 5, pp. 2466–2479, 2017\.
* [52] J. Yu, C. Li, Q. Zhao, and G. Zhou, “Tensor-ring nuclear norm minimization and application for visual data completion,” _ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , pp. 3142–3146, 2019.
* [53] R. Tomioka and T. Suzuki, “Convex tensor decomposition via structured schatten norm regularization,” in _Advances in Neural Information Processing Systems 26_ , C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, Eds. Curran Associates, Inc., 2013, pp. 1331–1339. [Online]. Available: http://papers.nips.cc/paper/4985-convex-tensor-decomposition-via-structured-schatten-norm-regularization.pdf
* [54] A. Wang, X. Song, X. Wu, Z. Lai, and Z. Jin, “Latent schatten tt norm for tensor completion,” in _ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , 2019, pp. 2922–2926.
* [55] J. Yu, W. Sun, Y. Qiu, and Y. Huang, “An efficient tensor completion method via new latent nuclear norm,” _IEEE Access_ , vol. 8, pp. 126 284–126 296, 2020.
* [56] M. Jaggi, “Revisiting Frank-Wolfe: Projection-free sparse convex optimization,” in _Proceedings of the 30th International Conference on Machine Learning_ , ser. Proceedings of Machine Learning Research, S. Dasgupta and D. McAllester, Eds., vol. 28, no. 1. Atlanta, Georgia, USA: PMLR, 17–19 Jun 2013, pp. 427–435. [Online]. Available: http://proceedings.mlr.press/v28/jaggi13.html
* [57] N. Vervliet, O. Debals, L. Sorber, M. Van Barel, and L. De Lathauwer. (2016, Mar.) Tensorlab 3.0. Available online. [Online]. Available: https://www.tensorlab.net
* [58] L. Yuan, C. Li, D. Mandic, J. Cao, and Q. Zhao, “Tensor ring decomposition with rank minimization on latent space: An efficient approach for tensor completion,” 2018.
* [59] N. Garg, M. Sellathurai, and T. Ratnarajah, “In-network caching for hybrid satellite-terrestrial networks using deep reinforcement learning,” in _ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , 2020, pp. 8797–8801.
* [60] J. Yu, G. Zhou, Q. Zhao, and K. Xie, “An effective tensor completion method based on multi-linear tensor ring decomposition,” in _2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)_ , 2018, pp. 1344–1349.
* [61] F. M. Harper and J. A. Konstan, “The movielens datasets: History and context,” _ACM Trans. Interact. Intell. Syst._ , vol. 5, no. 4, pp. 19:1–19:19, Dec. 2015. [Online]. Available: http://doi.acm.org/10.1145/2827872
|
# Two-color differential dynamic microscopy for capturing fast dynamics
R. You and R. McGorty
Department of Physics and Biophysics, University of San Diego, San Diego, CA 92110, USA
###### Abstract
Differential dynamic microscopy (DDM) is increasingly used in the fields of
soft matter physics and biophysics to extract the dynamics of microscopic
objects across a range of wavevectors using optical microscopy. Standard DDM
is limited to detecting dynamics no faster than the camera frame rate. We
report on an extension to DDM where we sequentially illuminate the sample with
spectrally-distinct light and image with a color camera. By pulsing blue and
then red light separated by a lag time much smaller than the camera’s exposure
time we are able to use this two-color DDM method to measure dynamics
occurring much faster than the camera frame rate. The following article has
been accepted by Review of Scientific Instruments. After it is published, it
will be found at https://aip.scitation.org/journal/rsi.
A number of optical techniques allow users to quantify the dynamics of small
particles, molecules, intracellular bodies or whole cells. These include
single-particle trackingCrocker and Grier (1996), image correlation
spectroscopyPetersen _et al._ (1993), fluorescence correlation
spectroscopyMagde, Elson, and Webb (1972), and dynamic light scatteringBerne
and Pecora (2000). Researchers were given another option to add to this list
in 2008: differential dynamic microscopy (DDM)Cerbino and Trappe (2008). Its
advantages include being able to extract particle dynamics from time series of
images even when the contrast is too low or the particle concentration is too
high to allow for accurate particle localizations and being compatible with a
number of optical microscopy modalities including bright-field, dark-
fieldBayles, Squires, and Helgeson (2016), wide-field fluorescence, confocalLu
_et al._ (2012), and light-sheet microscopiesWulstein _et al._ (2016).
As DDM can be used with nearly any optical microscope with a digital camera
and yields data analogous to what one would obtain with dynamic light
scattering, it has been applied to numerous systems. To provide just a partial
list, DDM has been used to measure the dynamics of: colloidal particles or
nanoparticlesHe _et al._ (2012); anisotropic particlesReufer _et al._
(2012); swimming bacteriaWilson _et al._ (2011); biomacromolecules or
particles in biomimetic environmentsRegan _et al._ (2019); Burla _et al._
(2020); colloidal gelsCho, Cerbino, and Bischofberger (2020); probe particles
for microrheological determinations of viscoelasticityBayles, Squires, and
Helgeson (2017); particles in crowded environmentsSentjabrskaja _et al._
(2016); and foamsGiavazzi, Trappe, and Cerbino (2020).
In this paper, we present a modification to DDM that allows dynamics faster
than the camera frame rate to be measured. With standard DDM, one is unable to
quantify dynamics occurring over times shorter than the time interval between
camera frames. This makes studying fast dynamics problematic without access to
high-speed cameras. We show that by using a color camera and illuminating the
sample with spectrally-separated pulses of light one can uncover dynamics
occurring faster than the camera frame rate. Inspiration for this method came
from two-color particle velocimetry which has been used to measure fast
dynamicsAdrian (1986); Goss _et al._ (1991).
To use standard DDM, one acquires a time series of images taken using any
spatially invariant microscopy method and analyzes the difference between
images separated by a given lag time in Fourier space. One can then find the
time it takes for intensity fluctuations to decay as a function of the spatial
frequency or wavevector, $\bm{q}$. In practice, one first calculates the
difference between images separated by a lag time $\Delta t$,
$d(\bm{x},t,\Delta t)=I(\bm{x},t+\Delta t)-I(\bm{x},t)$ where $\bm{x}$ is the
pixel position. One next obtains the image structure function by taking two
dimensional Fourier transforms of these differences and averaging over all
times $t$: $D(\bm{q},\Delta t)=\langle|\hat{d}(\bm{q},t,\Delta
t)|^{2}\rangle_{t}$. Isotropic samples will result in a radially symmetric
image structure function which can then be azimuthally averaged to yield
$D(q,\Delta t)$ where $q=\sqrt{q_{x}^{2}+q_{y}^{2}}$. This function can then
be fit to
$\displaystyle D(q,\Delta t)=A(q)\big{(}1-f(q,\Delta t)\big{)}+B(q),$ (1)
where the amplitude, $A(q)$, depends on the scattering properties of the
sample and the optical properties of the microscope, the background, $B(q)$,
depends on the noise in the image, and $f(q,\Delta t)$ is the intermediate
scattering function which accounts for the sample dynamics. For diffusive
dynamics $f(q,\Delta t)=\exp(-\Delta t/\tau(q))$ where $\tau=(Dq^{2})^{-1}$
and $D$ is the diffusion coefficient.
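The procedure above — frame differences, Fourier transforms, time averaging, and azimuthal averaging — can be sketched in a few lines of numpy. This is a minimal illustration assuming a square grayscale image stack of shape (T, N, N); the function name and binning choices are ours, not the authors'.

```python
import numpy as np

def image_structure_function(stack, lags):
    """Azimuthally averaged DDM image structure function D(q, Delta t).

    `stack` is assumed to be a (T, N, N) array of grayscale frames and
    `lags` a list of lag times in frames; a sketch of the procedure
    described in the text, not the authors' code.
    """
    T, n, _ = stack.shape
    # radial wavevector-bin index for every Fourier-space pixel
    fy = np.fft.fftfreq(n)[:, None]
    fx = np.fft.fftfreq(n)[None, :]
    nbins = n // 2
    qbin = np.minimum((np.hypot(fx, fy) * n).astype(int), nbins - 1)
    counts = np.bincount(qbin.ravel(), minlength=nbins)

    result = np.zeros((len(lags), nbins))
    for i, lag in enumerate(lags):
        diffs = stack[lag:] - stack[:-lag]          # d(x, t, Delta t)
        power = np.abs(np.fft.fft2(diffs)) ** 2     # |FT of difference|^2
        Dq = power.mean(axis=0)                     # average over all t
        # azimuthal average: mean over pixels sharing the same |q| bin
        result[i] = np.bincount(qbin.ravel(), Dq.ravel(), nbins) / counts
    return result   # shape (len(lags), nbins)
```

Each row of the result can then be fit to Equation 1 (for instance with `scipy.optimize.curve_fit`) to extract $A(q)$, $B(q)$, and $\tau(q)$.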
Given that DDM analyzes images acquired with a digital camera on an optical
microscope, the range of accessible spatial scales spans from the diffraction
limit or pixel size (whichever is greater) to the size of the field of view.
In typical studies of thermally-diffusing colloids or moving bacteria, this
range often extends from the submicron scale to 100 $\mu$m. Over what time scales will
dynamics over this spatial range occur? For diffusive dynamics, the time for a
density fluctuation to decay is proportional to the length scale squared. That
is $\tau\propto q^{-2}$, where $\tau$ is the decay time and $q$ is the
wavevector. Therefore, to probe diffusive dynamics across two orders of
magnitude in space requires data over four orders of magnitude in time.
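A quick back-of-envelope check makes the point concrete. Assuming an illustrative diffusion coefficient of $D=2.5~\mu\mathrm{m}^2/\mathrm{s}$ (close to the value measured later for 180-nm particles in water), the decay times at the two ends of a hundred-fold length range differ by a factor of $10^4$:

```python
# Span of diffusive decay times, tau(q) = 1/(D q^2), assuming
# D = 2.5 um^2/s (an illustrative value, not a measured constant).
import math

D = 2.5                        # diffusion coefficient, um^2/s
lengths_um = (0.5, 50.0)       # two decades in length scale
taus = []
for L in lengths_um:
    q = 2 * math.pi / L        # wavevector magnitude, rad/um
    taus.append(1.0 / (D * q * q))   # decay time, seconds

# tau ~ L^2: covering 100x in length requires 10^4x in time
span = taus[1] / taus[0]
```

For these numbers, $\tau$ runs from a few milliseconds at the half-micron scale to tens of seconds at 50 $\mu$m.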
How can one acquire data over such a span of timescales? In several DDM
studies, image sequences are acquired at multiple frame rates. As one example,
Germain et al. investigate the dynamics of micron-sized colloids and bacteria
and acquire images at both 400 Hz and 4 HzGermain, Leocmach, and Gibaud
(2016). This allows the authors to cover time scales from 2.5 ms to 1000 s.
More recently, Arko and Petelin developed a dual-camera method to acquire DDM
data over about 6 orders of magnitude in timeArko and Petelin (2019). Using a
beam splitter they imaged their sample onto two separate cameras, each
recording frames at 200 Hz. By triggering the cameras at offset times and
comparing frames between the two cameras, they could measure time lags much
smaller than with a single camera.
In this paper, we describe a new method which likewise provides access to
dynamics faster than the frame rate of our camera. Using a single color camera
and illuminating the sample with pulses of blue and red light, we can compare
the blue and red channels of a single image to observe changes within the
sample over times much smaller than the exposure time of a single frame.
We perform two-color DDM measurements with the setup shown in Figure 1a. The
light from red and blue LEDs (M625L2 and M455L3, Thorlabs) is combined with a
longpass dichroic mirror (DMLP550R, Thorlabs) and illuminate the sample. A 40×
objective (0.65 NA, Olympus) and $f$ = 200 mm tube lens (AC254-200-ML-A,
Thorlabs) image the sample onto a color CMOS camera (DFK 37BUX287, The Imaging
Source). Triggering the two LEDs and the camera is a Digilent Analog Discovery
2 (National Instruments).
Figure 1: (a) Schematic of our optical microscope shows two LED light sources
(one with a peak wavelength of 625 nm, the other 455 nm) used to illuminate a
sample. A 40× objective and tube lens image the sample onto a color CMOS
camera. (b) Within a single exposure time of the camera, we pulse each LED.
The time interval between pulses is varied and, for the data shown, ranges
from 3 to 91 ms.
The trigger sequencing is indicated in Figure 1b. We trigger the camera to
start an exposure every 100 ms, resulting in a frame rate of 10 Hz. Within a
single exposure time, both the blue (455 nm) and red (625 nm) LEDs each turn
on for 1 ms. The spacing between the blue and red pulses is staggered. We use
a sequence of 10 pulse delays in our experiments: 3, 5, 7, 10, 14, 20, 29, 43,
62 and 91 ms. We repeat this sequence of different pulse delays 800 times.
Therefore, we acquire 8000 color images at 10 Hz. However, the short
delay times between the blue and red illumination pulses allow us to measure
dynamics occurring much faster than 10 Hz.
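The timing bookkeeping above can be sketched as a short generator. This is illustrative only: the blue pulse is assumed to fire at the start of each exposure, and the names are ours, not taken from the instrument software.

```python
# Sketch of the trigger sequence: one camera exposure every 100 ms, a
# 1 ms blue pulse assumed at the start of each exposure, and a 1 ms red
# pulse delayed by one of ten staggered lag times.
FRAME_PERIOD_MS = 100
PULSE_DELAYS_MS = [3, 5, 7, 10, 14, 20, 29, 43, 62, 91]
N_REPEATS = 800

def trigger_schedule():
    """Yield (frame_start, blue_on, red_on) times in milliseconds."""
    frame = 0
    for _ in range(N_REPEATS):
        for delay in PULSE_DELAYS_MS:
            t0 = frame * FRAME_PERIOD_MS
            yield t0, t0, t0 + delay   # red pulse lags blue by `delay`
            frame += 1

events = list(trigger_schedule())
```

Since the longest delay (91 ms) is shorter than the 100 ms frame period, every red pulse lands within the same exposure as its paired blue pulse.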
The analysis of these data closely follows the procedure described above for
standard DDM. However, we now first separate the recorded time series of
images into blue and red channels. We then calculate the image structure
function using the differences between data in the blue and red channels:
$d_{two-color}(\bm{x},t,\Delta t)=I_{red}(\bm{x},t+\Delta
t)-I_{blue}(\bm{x},t)$. Here, $\Delta t$ need not be an integer
multiple of the time between adjacent frames (as in standard DDM) but can be
as small as the time separating the blue and red illumination pulses.
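The channel-splitting step can be sketched as follows, assuming the recorded frames form an RGB array of shape (T, Ny, Nx, 3); the function names and channel ordering are our assumptions.

```python
import numpy as np

def two_color_differences(rgb_stack):
    """Red-minus-blue difference for each frame of an RGB image stack.

    `rgb_stack` is assumed to have shape (T, Ny, Nx, 3) in RGB order.
    Because the blue and red pulses fall within the SAME exposure, this
    single-frame difference probes the pulse delay rather than the
    frame interval.
    """
    red = rgb_stack[..., 0].astype(float)
    blue = rgb_stack[..., 2].astype(float)
    return red - blue

def two_color_structure(rgb_stack):
    # D_two-color(q) for one pulse delay: average |FFT|^2 of the
    # red-blue differences over all frames sharing that delay.
    d = two_color_differences(rgb_stack)
    return (np.abs(np.fft.fft2(d)) ** 2).mean(axis=0)
```

In the actual acquisition each pulse delay recurs every tenth frame, so in practice the frames would first be grouped by delay before averaging.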
We evaluate this method using a sample of 180-nm-diameter polystyrene
particles (FluoSpheres F8811, Molecular Probes) suspended in water. These
particles are diluted by a factor of 100 and a few microliters are sealed
between a glass slide and coverslip. This sample was first investigated with
standard DDM using a camera running at 80 Hz and constant bright-field
illumination. We found a diffusion coefficient of $2.48\pm 0.03\mathrm{\mu
m^{2}/s}$. We then acquired data using the two-color DDM method as described
where we acquired 8000 frames of 256$\times$256 pixels with the camera running
at 10 Hz and the blue and red illumination being pulsed with the times between
pulses ranging from 3 to 91 ms.
Figure 2: (a) For five different values of the wavevector $q$ we plot
$f(q,\Delta t)$. In blue and red split circles, the values of $f(q,\Delta t)$
are shown for 10 lag times corresponding to the time intervals between the
blue and red illumination pulses that occur within a single camera frame.
These lag times range from 3 to 91 ms. In violet diamonds, the values of
$f(q,\Delta t)$ are shown using standard DDM methods on just the blue channel
of the acquired image sequence. Since the camera was running at 10 Hz, for
standard DDM methods the minimum time lag corresponds to 100 ms. The solid
lines represent the theoretical intermediate scattering function for diffusive
dynamics, $f(q,\Delta t)=\exp(-Dq^{2}\Delta t)$. (b) The determined decay
rate, $\tau^{-1}$, is plotted versus $q^{2}$. For diffusive dynamics,
$\tau^{-1}=Dq^{2}$ and the solid line here shows
$D=2.48~\mu\mathrm{m}^{2}/\mathrm{s}$. Data from the
standard DDM method (violet diamonds) starts deviating from this straight line
just after the expected decay rate exceeds 10 s$^{-1}$, as anticipated given the
images were acquired at 10 frames per second. Data from the two-color DDM
method follows the expected linear relationship up to around 70 s$^{-1}$. The inset
focuses on the low-$q$ region. The two-color method does not accurately detect
fluctuations with decay rates slower than about 10 s$^{-1}$.
After splitting this time series of images into blue and red channels, we find
the two-color image structure function, $D_{two-color}(q,\Delta t)$. This two-
color image structure function is found for the 10 time lags, $\Delta t$, that
correspond to the delay times between the blue and red illumination pulses
that occur within a single camera frame. With the same sequence of 8000 images
we also find $D_{standard}(q,\Delta t)$ using only data from the blue channel
of each frame. For this, the values of $\Delta t$ range from 100 ms (the time
between frames) to 10 s.
For both $D_{two-color}(q,\Delta t)$ and $D_{standard}(q,\Delta t)$, we
proceed with typical DDM analysis where for each $q$ we fit the image
structure function to Equation 1 and find $A$, $B$ and $\tau$. In Fig. 2a we
show $f(q,\Delta t)$, and the relationship between $\tau$ and $q$ is shown in
Fig. 2b. We observe that our data follows the expected exponential decay of
$f(q,\Delta t)$ whether using the two-color or standard method. However, for
the two-color method, we are able to observe the dynamics at larger
wavevectors. Our data is consistently fit for wavevectors with decay rates of
up to about 70 s$^{-1}$. Such fast dynamics are inaccessible using standard DDM on
data acquired at 10 Hz. In Fig. 2b, we observe, as expected, that $\tau^{-1}$
depends linearly on $q^{2}$. Decay rates determined through
$D_{standard}(q,\Delta t)$ start to deviate from this linear relationship
beyond about 10 s$^{-1}$. However, with $D_{two-color}(q,\Delta t)$ we find decay
rates that match the expected linear relationship up to several times faster
than the rate at which images were acquired.
The decay rates, $\tau^{-1}$, determined from the two-color DDM method appear
noisier than those from the standard method. This is likely the result of less
data going into the two-color DDM method. When fitting our data to Equation 1,
for each value of $q$ we have ten data points (corresponding to the 10 time
intervals between blue and red illumination pulses) going into $D_{two-
color}(q,\Delta t)$ and 30 data points contributing to $D_{standard}(q,\Delta
t)$ (corresponding to lag times logarithmically spaced from 100 ms to 10 s).
Furthermore, for each lag time, $\Delta t$, we average together 800 Fourier
transformed image differences for $D_{two-color}(q,\Delta t)$. Whereas with
the standard DDM method applied to the same data set, we have 7999 Fourier
transformed image differences to average for $\Delta t$ of one frame (100 ms).
We note that modifications to this two-color method could be made. For
example, an image splitter system could be used to separate the two colors
onto distinct regions of the camera sensor which would allow for the use of
monochromatic cameras. Furthermore, while we applied this method to bright-
field imaging, one could use similar principles using fluorescence with
spectrally distinct fluorophores.
In summary, we have devised a method to allow differential dynamic microscopy
to be used to study dynamics faster than the camera frame rate. This obviates
the need for high speed cameras to measure fast dynamics if one can instead
illuminate the sample with spectrally-separated pulses offset in time and then
split the different color signals on the image sensor. We show that this two-
color DDM method allows us to extract the dynamics of colloidal particles that
occur up to several times faster than the camera frame rate.
R.M. acknowledges support from the Research Corporation for Science
Advancement through the Cottrell Scholars program.
The data that support the findings of this study are available from the
corresponding author upon reasonable request.
## References
* Crocker and Grier (1996) J. C. Crocker and D. G. Grier, Journal of Colloid and Interface Science 179, 298 (1996).
* Petersen _et al._ (1993) N. O. Petersen, P. L. Höddelius, P. W. Wiseman, O. Seger, and K. E. Magnusson, Biophysical Journal 65, 1135 (1993).
* Magde, Elson, and Webb (1972) D. Magde, E. Elson, and W. W. Webb, Physical Review Letters 29, 705 (1972).
* Berne and Pecora (2000) B. J. Berne and R. Pecora, _Dynamic Light Scattering: With Applications to Chemistry, Biology, and Physics_ (Courier Corporation, 2000) p. 388.
* Cerbino and Trappe (2008) R. Cerbino and V. Trappe, Physical Review Letters 100, 188102 (2008).
* Bayles, Squires, and Helgeson (2016) A. V. Bayles, T. Squires, and M. E. Helgeson, Soft Matter 12, 2440 (2016).
* Lu _et al._ (2012) P. J. Lu, F. Giavazzi, T. E. Angelini, E. Zaccarelli, F. Jargstorff, A. B. Schofield, J. N. Wilking, M. B. Romanowsky, D. A. Weitz, and R. Cerbino, Physical Review Letters 108, 218103 (2012).
* Wulstein _et al._ (2016) D. M. Wulstein, K. E. Regan, R. M. Robertson-Anderson, and R. McGorty, Optics Express 24, 20881 (2016).
* He _et al._ (2012) K. He, M. Spannuth, J. C. Conrad, and R. Krishnamoorti, Soft Matter 8, 11933 (2012).
* Reufer _et al._ (2012) M. Reufer, V. A. Martinez, P. Schurtenberger, and W. C. K. Poon, Langmuir 28, 4618 (2012).
* Wilson _et al._ (2011) L. G. Wilson, V. A. Martinez, J. Schwarz-Linek, J. Tailleur, G. Bryant, P. N. Pusey, and W. C. K. Poon, Physical Review Letters 106, 018101 (2011).
* Regan _et al._ (2019) K. Regan, D. Wulstein, H. Rasmussen, R. McGorty, and R. M. Robertson-Anderson, Soft Matter 15, 1200 (2019).
* Burla _et al._ (2020) F. Burla, T. Sentjabrskaja, G. Pletikapic, J. v. Beugen, and G. H. Koenderink, Soft Matter 16, 1366 (2020).
* Cho, Cerbino, and Bischofberger (2020) J. H. Cho, R. Cerbino, and I. Bischofberger, Physical Review Letters 124, 088005 (2020).
* Bayles, Squires, and Helgeson (2017) A. V. Bayles, T. M. Squires, and M. E. Helgeson, Rheologica Acta , 1 (2017).
* Sentjabrskaja _et al._ (2016) T. Sentjabrskaja, E. Zaccarelli, C. De Michele, F. Sciortino, P. Tartaglia, T. Voigtmann, S. U. Egelhaaf, and M. Laurati, Nature Communications 7, 11133 (2016).
* Giavazzi, Trappe, and Cerbino (2020) F. Giavazzi, V. Trappe, and R. Cerbino, Journal of Physics: Condensed Matter 33, 024002 (2020).
* Adrian (1986) R. J. Adrian, Applied Optics 25, 3855 (1986).
* Goss _et al._ (1991) L. P. Goss, M. E. Post, D. D. Trump, and B. Sarka, Journal of Laser Applications 3, 36 (1991).
* Germain, Leocmach, and Gibaud (2016) D. Germain, M. Leocmach, and T. Gibaud, American Journal of Physics 84, 202 (2016).
* Arko and Petelin (2019) M. Arko and A. Petelin, Soft Matter (2019), 10.1039/C9SM00121B.
# Self-organization of oscillation in an epidemic model for COVID-19
Takashi Odagaki∗
Kyushu University
Nishiku, Fukuoka 819-0395, Japan
and
Research Institute for Science Education, Inc.
Kitaku, Kyoto 603-8346, Japan
∗Correspondence to: Research Institute for Science Education, Inc.,
Kitaku, Kyoto 603-8346, Japan
Email address<EMAIL_ADDRESS>
###### Abstract
On the basis of a compartment model, the epidemic curve is investigated when
the net rate $\lambda$ of change of the number of infected individuals $I$ is
given by an ellipse in the $\lambda$-$I$ plane which is supported in
$[I_{\ell},I_{h}]$. With $a\equiv(I_{h}-I_{\ell})/(I_{h}+I_{\ell})$, it is
shown that (1) when $a<1$ or $I_{\ell}>0$, oscillation of the infection curve
is self-organized and the period of the oscillation is in proportion to the
ratio of the difference $(I_{h}-I_{\ell})$ and the geometric mean
$\sqrt{I_{h}I_{\ell}}$ of $I_{h}$ and $I_{\ell}$, (2) when $a=1$, the
infection curve shows a critical behavior where it decays obeying a power law
function with exponent $-2$ in the long time limit after a peak, and (3) when
$a>1$, the infection curve decays exponentially in the long time limit after a
peak. The present result indicates that the pandemic can be controlled by a
measure which makes $I_{\ell}<0$.
## 1 Introduction
Since the first outbreak in China in November 2019, COVID-19 has been
spreading in all continents including Antarctica. According to a recent
analysis of infection status of 186 countries [1, 2], the time dependence of
the daily confirmed new cases in more than 80 countries show oscillations
whose periods range from one to five months depending on the country. The
period of the oscillation is much shorter than that of Spanish flu in
1918$\sim$1919 which is the result of the mutation of virus, and it is an open
question why the infection curve of COVID-19 shows oscillation in some
countries.
There have been several compartmental models which explain epidemic
oscillations [3, 4, 5]. The simplest idea to explain the oscillation is to
introduce a sinusoidal time dependence of parameters of the model. Recently,
Greer et al [6] introduced a dynamical model with time-varying births and
deaths which shows oscillations of epidemics.
Since the infection curve of COVID-19 shows different features depending on
the country, the infection curve must have a strong relation to the government
policy, and the conventional approach may not be appropriate to COVID-19. In
fact, different measures have been employed in each country by its government
and citizens have been restricting the social contact among them, both of
which depend on the infection status. Therefore, parameters including
transmission coefficient of the virus can be considered to be a function of
the infection status, and the non-linear effects due to this dependence must
be clarified.
In this paper, I introduce a compartment model in which the net rate $\lambda$
of change of the number of infected individuals $I$ is a function of $I$ and
the function is given by an ellipse in the $\lambda$-$I$ plane which is
supported in $[I_{\ell},I_{h}]$. Here, $I_{h}$ is the upper limit of the
number of infected individuals that the government does not allow to be exceeded, and
$I_{\ell}$ is the lowest value below which the government will lift measures.
I show that an oscillatory infection curve can be self-organized when
$I_{\ell}>0$ and that the period is determined by the ratio of the difference
$I_{h}-I_{\ell}$ and the geometric mean $\sqrt{I_{h}I_{\ell}}$ of $I_{h}$ and
$I_{\ell}$. I also show that when $I_{\ell}=0$ the infection curve in the long
time limit after a single peak decays following a power law function with
exponent -2 and when $I_{\ell}<0$ it decays exponentially in the long time
limit.
## 2 Model country
In most of compartmental models for epidemics, the number of infected
individuals $I(t)$ is assumed to obey
$\frac{dI(t)}{dt}=\lambda I(t).$ (1)
The net rate of change $\lambda$ of the number of infected individuals is
written generally as
$\lambda=\beta\frac{S}{N}-\gamma-\alpha.$ (2)
Here, $\beta$ and $\gamma$ are the transmission rate of virus from an infected
individual to a susceptible individual and a per capta rate for becoming a
recovered non-infectious (including dead) individual (R), respectively, and
$S$ and $N$ are the number of susceptible individuals and the total
population. In Eq. (2), $\alpha$ is a model-dependent parameter representing
effects specific to each model. In the SIR model [8], no
effects other than transmission and recovery are considered, and thus
$\alpha=0$. The SEIR model [9] introduces a compartment of exposed
individuals (E), and if one sets $\alpha=(dE/dt)/I$, the basic equation of the
SEIR model reduces to Eq. (1).
The SIQR model [10, 11] separates quarantined patients (Q) as a compartment in
the population and $\alpha$ in Eq. (1) is given by the quarantine rate
$q\equiv\Delta Q(t)/I(t)$ where $\Delta Q(t)$ is the daily confirmed new cases
[7]. In the application of the SIQR model to COVID-19, it has been shown that
$\Delta Q(t)\propto I(t-\tau),$ (3)
where $\tau$ is a typical value of the waiting time between the infection and
quarantine of an infected individual. Therefore, the number of the daily
confirmed new cases can be assumed to obey Eq. (1) with the redefined time
$t-\tau$. Since $\Delta Q(t)$ is treated as an explicit variable instead of
$I(t)$, the SIQR model is relevant to COVID-19.
In this paper, I focus on the time evolution of $I(t)$ governed by Eq. (1) for
COVID-19. The transmission coefficient is determined by characteristics of the
virus, by government policies such as lockdown measures and vaccination, and by
people’s attitude toward social distancing. Medical treatment of infected
individuals affects $\gamma$ and the government policy on PCR test changes the
quarantine rate. The government policies are determined according to the
infection status and therefore the rate of change is considered to be a
function of $I(t)$ in Eq. (1).
Here, I consider a model country in which $\lambda$ depends on $I$ through
$\left(\frac{\lambda}{\lambda_{0}}\right)^{2}+\left(\frac{I-I_{0}}{\Delta}\right)^{2}=1.$
(4)
This implies that when $I$ becomes large, some policies are employed to reduce
$\lambda$ to negative values so that $I(t)$ begins to decline, and when $I$
becomes small enough, then some measures are lifted and $\lambda$ becomes
positive again. In fact, the plots of $\lambda(t)$ against $I(t)$ in many
countries show similar loops [2]. Note that $\lambda=0$ corresponds to either
a maximum or a minimum of the number of infected individuals.
Figure 1 shows this dependence, namely $I_{h}\equiv I_{0}+\Delta$ and
$I_{\ell}\equiv I_{0}-\Delta$ are the maximum and minimum of the number of
infected individuals set by the policy in the country. When $I_{\ell}<0$,
$\lambda$ in the region $I<0$ is not relevant since no infected individuals
exist in this region.
Figure 1: The dependence of the net rate $\lambda$ on the number of infected
individuals $I$ in a model country.
## 3 Infection curve and self-organization of oscillation
In order to solve Eq. (1) with Eq. (4), I introduce a variable $x$ through
$\displaystyle\lambda$ $\displaystyle=$ $\displaystyle\lambda_{0}\cos x,$ (5)
$\displaystyle I-I_{0}$ $\displaystyle=$ $\displaystyle\Delta\sin x$ (6)
and rewrite Eq. (1) as
$a\frac{dx}{d\tilde{t}}=1+a\sin x,$ (7)
where $\tilde{t}\equiv\lambda_{0}t$ is the time scaled by $\lambda_{0}^{-1}$
and $a\equiv\Delta/I_{0}\geq 0$ is a parameter of the model. Equation (7) can
be solved readily under the initial condition $I(t=0)=I_{0}$:
$\tilde{t}=\left\\{\begin{array}[]{ll}\frac{\displaystyle
2a}{\displaystyle\sqrt{1-a^{2}}}\left[\arctan\frac{\displaystyle\tan(x/2)+a}{\displaystyle\sqrt{1-a^{2}}}-\arctan\frac{\displaystyle
a}{\displaystyle\sqrt{1-a^{2}}}\right]&\quad\mbox{when $a<1$},\\\
\frac{\displaystyle 2\tan(x/2)}{\displaystyle 1+\tan(x/2)}&\quad\mbox{when
$a=1$},\\\ \frac{\displaystyle
a}{\displaystyle\sqrt{a^{2}-1}}\left[\ln\frac{\displaystyle\tan(x/2)+a-\sqrt{a^{2}-1}}{\displaystyle\tan(x/2)+a+\sqrt{a^{2}-1}}-\ln\frac{\displaystyle
a-\sqrt{a^{2}-1}}{\displaystyle a+\sqrt{a^{2}-1}}\right]&\quad\mbox{when
$a>1$}.\end{array}\right.$
The infection curve is given in terms of $\tan(x/2)$ by
$\frac{I(t)}{I_{0}}=1+\frac{2a\tan(x/2)}{1+\tan^{2}(x/2)}.$ (8)
The infection curves are shown for $a=0.4,0.6,0.8,1$ in Fig. 2(a) and for
$a=1,2,4,6$ in Fig. 2(b). Therefore, the infection curve is a periodic
function when $a<1$ and a decaying function with a single peak when $a\geq 1$.
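A minimal numerical check of this behavior is to integrate the scaled equation (7) directly. The sketch below uses forward Euler (time in units of $1/\lambda_{0}$; the step size and function names are our choices) and reproduces the sustained oscillation for $a<1$ and the single-peaked decay for $a\geq 1$.

```python
import math

def infection_curve(a, t_max, dt=2e-4):
    """Forward-Euler integration of a*dx/dt = 1 + a*sin(x), Eq. (7),
    with time measured in units of 1/lambda_0.  Returns (t, I/I_0)
    samples, where I/I_0 = 1 + a*sin(x)."""
    x, t, samples = 0.0, 0.0, []
    while t < t_max:
        samples.append((t, 1.0 + a * math.sin(x)))
        x += dt * (1.0 + a * math.sin(x)) / a
        t += dt
    return samples

def peak_times(samples):
    """Times of the local maxima of I(t)/I_0."""
    return [samples[i][0] for i in range(1, len(samples) - 1)
            if samples[i - 1][1] < samples[i][1] > samples[i + 1][1]]
```

For $a=0.5$ the spacing of successive maxima matches the period $T\lambda_{0}=2\pi a/\sqrt{1-a^{2}}$, while for $a=2$ the curve shows a single peak near $I/I_{0}=1+a$ and then decays toward zero.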
(a) (b)
Figure 2: The infection curve for the model country. (a) When $a<1$, a wavy
infection curve is self-organized. (b) When $a>1$ the infection curve is a
decaying function with a single peak. The infection curve for $a=1$ shown in
both panels obeys a power-law decay in the long time limit after a peak.
Characteristics of the infection curve are in order:
(1) When $a<1$, the infection curve shows a self-organized oscillation which
can be characterized as follows:
1. 1.
The location of the peak $I_{\rm max}/I_{0}=1+a=I_{h}/I_{0}$ and the bottom
$I_{\rm min}/I_{0}=1-a=I_{\ell}/I_{0}$ are given by
$\displaystyle\tilde{t}_{\rm max}(n)$ $\displaystyle=$
$\displaystyle\frac{2a}{\sqrt{1-a^{2}}}\left[\arctan\sqrt{\frac{1-a}{1+a}}+(n-1)\pi\right],$
(9) $\displaystyle\tilde{t}_{\rm min}(n)$ $\displaystyle=$
$\displaystyle\frac{2a}{\sqrt{1-a^{2}}}\left[\pi-\arctan\sqrt{\frac{1+a}{1-a}}+(n-1)\pi\right],$
(10)
respectively, where $n=1,2,\dots$.
2. 2.
Therefore, the period $T$ is given by
$T\lambda_{0}=\frac{2\pi
a}{\sqrt{1-a^{2}}}=\frac{\pi(I_{h}-I_{\ell})}{\sqrt{I_{h}I_{\ell}}}.$ (11)
Namely, the period is $2\pi$ times the ratio of half the difference,
$\Delta=\frac{I_{h}-I_{\ell}}{2}$, to the geometric mean
$\sqrt{I_{h}I_{\ell}}$ of $I_{h}$ and $I_{\ell}$.
(2) When $a=1$, the infection curve shows a peak, after which it decays to
zero. It can be characterized as follows:
1. 1.
The infection curve reaches its maximum $I_{\rm max}/I_{0}=2$ at
$t\lambda_{0}=1$.
2. 2.
In the long time limit, it decays as $t^{-2}$.
(3) When $a>1$, the infection curve shows a peak, after which it decays to
zero. It can be characterized as follows:
1. 1.
The infection curve reaches its maximum $I_{\rm max}/I_{0}=1+a$ at
$t\lambda_{0}=\frac{a}{\sqrt{a^{2}-1}}\ln(a+\sqrt{a^{2}-1})$.
2. 2.
The infection curve returns to the initial state $I(t)=I_{0}$ at
$t\lambda_{0}=\frac{a}{\sqrt{a^{2}-1}}\ln\frac{a+\sqrt{a^{2}-1}}{a-\sqrt{a^{2}-1}}$.
3. 3.
In the long time limit, the effective relaxation time defined by
$\tau\equiv-\left(\frac{d\ln I}{dt}\right)^{-1}$ is given by
$\tau\lambda_{0}=\frac{a}{\sqrt{a^{2}-1}}$.
Figure 3 shows the period for $a<1$ and the relaxation time for $a>1$ as
functions of $a$.
Figure 3: The period when $a<1$ and the relaxation time in the long time
behavior when $a>1$ are shown as functions of $a$.
## 4 Discussion
I have shown that oscillation of the infection curve can be self-organized in
the epidemic model described by an ordinary differential equation in which
the net rate of change, Eq. (4), depends on the number of infected
individuals. All countries employ their own policy which depends on the
infection status of the country, and the relation Eq. (4) represents the general
trend of such policies. Namely, when the number of infected individuals
approaches the maximum number acceptable in a country, a strong measure is
introduced to make the net rate of change $\lambda$ negative, and the measure
will be lifted when the number of infected individuals is considered to be
small enough, which makes $\lambda>0$. Therefore, the policy with $I_{\ell}>0$
itself is considered to be the origin of the oscillation of the infection
curve and the policy with $I_{\ell}<0$ seems to have succeeded in controlling
the pandemic [2]. As an example, I show in Fig. 4(a) the time dependence of
daily confirmed new cases in Japan from April 5, 2020 to February 11, 2021
which consists of three waves. Using $\lambda$ determined by fitting the data
by piece-wise quadratic functions as shown by the solid curve [2], I show the
correlation between $\lambda$ and $\Delta Q$ in Fig. 4(b).
(a) (b)
Figure 4: (a) The time dependence of daily confirmed new cases in Japan from
April 5, 2020 to February 11, 2021. The dependence is fitted by piece-wise
quadratic functions. (b) The net rate is shown as a function of the number of
new cases. The spiral nature of this plot indicates an enhancing wavy behavior
of the infection curve. The jumps seen in the plot are due to the procedure
which does not impose the continuity of the curvature.
Some oscillations in biological systems, such as the prey-predator system, have been explained by the Lotka-Volterra model [12, 13], which is essentially a pair of coupled logistic equations reducible to a second-order non-linear differential equation for one variable. Since the present model is based on a first-order non-linear differential equation, the origin of its oscillatory solutions differs from that of the Lotka-Volterra model.
Several important implications of the present results are:
(1) In order to control the outbreak, a policy is needed to make $a>1$ or $I_{\ell}<0$ and $\lambda<0$. Since $\lambda$ is determined by $\beta$, $S$, $\gamma$ and $\alpha$ (or $q$), this can be achieved by a lockdown measure to reduce $\beta$, by vaccination to reduce $S$, and by a quarantine measure to increase $q$.
(2) The worst policy is $I_{\ell}>0$. In this case, oscillation continues until $\lambda$ becomes negative owing to herd immunity from vaccination and/or infection of a significant fraction of the population.
(3) In order to make $\lambda$ negative, it has been rigorously shown that increasing the quarantine rate $q$ is more efficient than reducing the transmission coefficient $\beta$ by a lockdown measure [14]. This result indicates that the pandemic can be controlled only by maintaining measures that keep $\lambda<0$ until $I=0$.
(4) It should be remarked that changes in the infectivity of the virus due to mutation can be included in $\lambda(I)$ in the present model. Namely, effects due to new variants of SARS-CoV-2, such as those found in the UK, South Africa, or Brazil, can be included by moving the state to a new $\lambda$ vs $I$ relation.
In this study, I assumed that $I_{0}$ is fixed and that the dependence of $\lambda$ on $I$ is symmetric. It is straightforward to generalize the present formalism to a non-symmetric dependence of $\lambda$ on $I$.
## Acknowledgments
This work was supported in part by JSPS KAKENHI Grant Number 18K03573.
## References
* [1] Coronavirus Resource Center, Johns Hopkins University
https://coronavirus.jhu.edu/
* [2] T. Odagaki and R. Suda, https://doi.org/10.1101/2020.12.17.20248445
* [3] H. W. Hethcote, SIAM Rev. 42, 599-653 (2000).
https://doi.org/10.1137/S0036144500371907
* [4] D. J. D. Earn, Mathematical epidemiology. Lecture Notes in Mathematics, vol. 1945 (eds. F. Brauer, P. van den Driessche, and J. Wu) 3-17 (Springer, Berlin, Germany, 2008).
https://doi.org/10.1007/978-3-540-78911-6
* [5] X. Zhang, C. Shan, Z. Jin and H. Zhu, J. Differ. Equ. 266, 803-832 (2019).
https://doi.org/10.1016/j.jde.2018.07.054
* [6] M. Greer, R. Saha, A. Gogliettino, C. Yu and K. Zollo-Venecek, R. Soc. open sci. 7, 191187 (2020).
https://dx.doi.org/10.1098/rsos.191187
* [7] T. Odagaki, Sci. Rep. 11, 1936 (2021).
https://doi.org/10.1038/s41598-021-81521-z
* [8] W. O. Kermack and A. G. McKendrick, Proc. Roy. Soc. A 115, 700-721 (1927).
https://doi.org/10.1098/rspa.1927.0118
* [9] R. M. Anderson and R. M. May, Science 215, 1053-1060 (1982).
https://doi.org/10.1126/science.7063839
* [10] H. Hethcote, M. Zhien and L. Shengbing, Math. Biosciences 180, 141-160 (2002).
https://doi.org/10.1016/S0025-5564(02)00111-6
* [11] T. Odagaki, Infect. Dis. Model. 5, 691-698 (2020).
https://doi.org/10.1016/j.idm.2020.08.013
* [12] A. J. Lotka, Proc. Natl. Acad. Sci. 6, 410-415 (1920).
https://doi.org/10.1073/pnas.6.7.410
* [13] V. Volterra, Proc. Edin. Math. Soc. 6, 4-10 (1939).
https://doi.org/10.1017/S0013091500008476
* [14] T. Odagaki, Physica A 564, 125564 (2021).
https://doi.org/10.1016/j.physa.2020.125564
# Optic Nerve Microcirculation: Fluid Flow and Electrodiffusion
Yi Zhu, Department of Mathematics and Statistics, York University, Toronto, Ontario, Canada. Shixin Xu (corresponding author,<EMAIL_ADDRESS>), Duke Kunshan University, 8 Duke Ave, Kunshan, Jiangsu, China. Robert S. Eisenberg, Department of Applied Mathematics, Illinois Institute of Technology, Chicago, IL 60616, USA. Huaxiong Huang, Joint Mathematical Research Centre of Beijing Normal University and BNU-HKBU United International College, Zhuhai, China; Department of Mathematics and Statistics, York University, Toronto, Ontario, Canada; Division of Science and Technology, BNU-HKBU United International College, Zhuhai, 519087, China
## Abstract
Complex fluids flow in complex ways in complex structures. Transport of water
and various organic and inorganic molecules in the central nervous system is important in a wide range of biological and medical processes [C. Nicholson,
and S. Hrabětová, Biophysical Journal, 113(10), 2133(2017)]. However, the
exact driving mechanisms are often not known. In this paper, we investigate
flows induced by action potentials in an optic nerve as a prototype of the
central nervous system (CNS). Unlike traditional fluid dynamics problems, flows in biological tissues such as the CNS are coupled with ion transport. They are driven by osmosis created by the concentration gradients of ionic solutions, which in turn influence the transport of ions. Our mathematical model is based on the known structural and biophysical properties of the experimental system used by Orkand et al. of the Harvard group [R.K. Orkand, J.G. Nicholls, S.W. Kuffler, Journal of Neurophysiology, 29(4), 788 (1966)].
Asymptotic analysis and numerical computation show the significant role of
water in convective ion transport. The full model (including water) and the
electrodiffusion model (excluding water) are compared in detail to reveal an
interesting interplay between water and ion transport. In the full model,
convection due to water flow dominates inside the glial domain. This water
flow in the glia contributes significantly to the spatial buffering of
potassium in the extracellular space. Convection in the extracellular domain
does not contribute significantly to spatial buffering. Electrodiffusion is
the dominant mechanism for flows confined to the extracellular domain.
## 1 Introduction
The theory of complex fluids deals with complex fluids in complex structures [23, 34, 62, 19]. Here we deal with the complex fluid of an ionic solution [14] in a complex structure typical of biological systems, in particular the central nervous system. These structures are known in some detail, in both structure and function, because of the work of generations of neuroanatomists, histologists and neurobiologists [29, 45]. The biophysical properties of membranes are also well known [8]. We can therefore formulate a biologically significant problem in the language of the theory of complex fluids and use the methods of computational fluid mechanics to analyze the system, here the optic nerve of an amphibian. The results are of biological interest because of the importance of the central nervous system: the optic nerve of the amphibian is an experimentally accessible part of the central nervous system.
The analysis used here may also serve as a bridge, and an archetype, for how the theory of complex fluids can address the seemingly formidable challenges posed by other structured biological systems, e.g., the kidney, the blood-brain barrier, and epithelia in general.
The rest of the paper is organized as follows. In Section 2, we present the
biological background about the optic nerve and the tridomain mathematical
model in detail. The three domains, axon, glial and extracellular ones, are
coupled via transmembrane fluxes for three major ions, namely sodium,
potassium and chloride, treated as reaction terms. Model calibration is
discussed in Section 3 by matching extracellular potassium concentration
accumulation after the optic nerve is stimulated by a train of electric
current pulses. In Section 4, we present order-of-magnitude estimates of ionic and water fluxes across membranes. These estimates provide useful insight into the mechanisms of potassium clearance. Then in Section 5,
numerical simulations are carried out. We investigate the role of water flow
(convection) in ionic transport during and after stimulus of the optic nerve.
Our analysis shows that convection is very important within the glia. Water
flow in glia has an indirect but significant effect in clearing potassium from
the narrow extracellular space. This may be an important role for glia
wherever they are found in the central nervous system, and even in structures
of the peripheral nervous system. A discussion on the parameters in the
compartment models and field models are presented in Section 6. In Section 7,
we provide concluding remarks on the limitation of our study and directions
for future research.
## 2 Biological Background and Model
### 2.1 Biological Background
Recent experimental studies [44] suggest that transport in the central nervous
system during sleep plays a critical role in maintaining the health of brain
tissue. Since the nervous system is densely packed with neurons communicating with each other, a question arises: how is the state of steady internal conditions, known as "homeostasis" in the biological literature, maintained? A few action potentials are known to significantly alter ion concentrations in the immediate vicinity of peripheral and optic nerve cells [48, 18], and that change in concentration acts on more than one axon, producing "cross talk". The question is then: how does the central nervous system deal with changes in ion concentration produced by hundreds or thousands of action potentials and maintain a healthy environment? How does the central nervous system maintain concentrations in its narrow extracellular space? What roles are played by glial cells and the extracellular space?
Complex flows in complex structures cannot be understood unless the structure
is understood. The central nervous system contains nerve fibers and glia,
separated by a narrow extracellular space. We use three domains to describe
the flow and diffusion of ions and water in the optic nerve bundle of the
central nervous system, hoping to glimpse general properties by which the
central nervous system controls the concentration of ions in such narrow
confines. The optic nerve is a paired cranial nerve whose fibers have their cell bodies in the retina. It reaches from the eye through the optic chiasma
to the cortex and transfers visual information from the retina to the vision
centers of the brain using digital (actually binary) electrical signals
(action potentials). The optic nerve is customarily separated into four main
regions [56, 58]: (1) intraocular nerve head, (2) intraorbital region, (3)
intracanalicular and (4) intracranial [56, 26]. In this paper, we mainly focus
on the intraorbital region, which occupies more than half of the optic nerve.
There are about one million optic nerve fibers in the optic nerve bundle. The
ganglion cells that are the cell bodies of the axons are scattered on the
retina and form into a bundle at the optic disc. The bundle passes through the
mesh-like lamina cribrosa region into the intraorbital region. Like almost all nerve cells, optic nerve fibers are functionally isolated, nearly insulated from one another, without connexins between them, so neither ions nor electrolytes can flow directly from the interior of one nerve cell to another. Current flowing down one axon cannot flow into an adjacent axon or glial cell [4, 35]. The 'ephaptic communication' of concern to pioneers in electrophysiology rarely occurs.
Glial cells wrap the nerve fiber bundles producing a narrow cleft of
extracellular space between nerve fiber and glia. Glial cells are connected to
each other through connexin proteins, called ‘gap junctions’, and form an
electrical syncytium (as do so many other cells, e.g., epithelia, cardiac
muscle, lens of the eye, liver, etc.) in which current flow in one cell
spreads into another with little extra resistance. In syncytia like this,
inorganic ions, and many organic molecules (typically less than 2 nanometer
diameter) can diffuse from cell to cell with hardly any restriction and thus
with mobility and ionic conductance similar to that in cytoplasm. Thus, glial
cells are thought to play an important role in accelerating $\mathrm{K^{+}}$
clearance from the extracellular space [6, 69]. Sometimes, central retinal
blood vessels (CRV, arterioles in fact) are found in the center of the optic
nerve bundle in the intraorbital region. Here we consider the case where the
blood vessel is not present, as in the optic nerve of the mud puppy, the
amphibian salamander Necturus used in the experiments of Orkand et al. [48,
35].
Figure 1: Optic nerve structure. (a) Longitudinal section of the optic nerve;
(b) Cross section of the optic nerve.
The optic nerve bundles are surrounded by the meningeal sheath, which consists of the dura mater, arachnoid mater and pia mater, with cerebrospinal fluid (CSF) in the subarachnoid space (SAS) [26, 25]; see also Fig. 1a. The pia mater and dura mater are thin deformable shells, with mechanical properties important in glaucoma [25, 32, 50, 28]. Andrew et al. [3] and Killer et al. [32, 31] show that the dura mater contains lymphatic vessels that drain CSF out of the SAS [28, 41]. The pia mater forms a macroscopic semipermeable membrane made of many cells, not just one lipid bilayer [16]. Many layered epithelia have been characterized as "semipermeable membranes" in low-resolution studies for more than a century. Filipidis et al. [68] have written a most helpful review identifying analogous leptomeningeal structures important in physiology, such as the pleura [24, 51, 57, 76, 77, 75], peritoneum [36, 59, 63, 74, 78, 79], pericardium [68], fetal membranes [66, 1], and leptomeninges [15]. We imagine that a general tridomain model may help understand many of these tissues.
### 2.2 Mathematical Model
The model was first proposed in Ref. [81]; to make this paper self-contained, we summarize it here. The model deals with two types of flow: the circulation of water (hydrodynamics) and the circulation of ions (electrodynamics) in the glial compartment $\Omega_{gl}$, the axon compartment $\Omega_{ax}$ and the extracellular space $\Omega_{ex}$.
Figure 2: Domain of the axially symmetric model. The optic nerve $\Omega_{OP}$ consists of the axon compartment $\Omega_{ax}$, the glial compartment $\Omega_{gl}$ and the extracellular space $\Omega_{ex}^{OP}$. The subarachnoid space $\Omega_{SAS}$ contains only extracellular space.
The glial and axon compartments are confined to the optic nerve bundle, while extracellular space exists both in the optic nerve bundle $\Omega_{ex}^{OP}$ and in the subarachnoid space $\Omega_{ex}^{SAS}$ (see Fig. 2):
$\Omega_{OP}=\Omega_{ax}\cup\Omega_{gl}\cup\Omega_{ex}^{OP},\quad\Omega_{SAS}=\Omega_{ex}^{SAS}.$
The model is mainly based on the law of mass conservation [46], in
$\Omega_{l},~{}l=ax,~{}gl,~{}ex$
$\frac{\partial}{\partial
t}(\eta_{l}f_{l})+\nabla\cdot(\eta_{l}\mathbf{J}_{l})+S=0,$ (1)
where $\eta_{l}$ is the volume fraction of compartment $l$, $f_{l}$ is the concentration of a given substance, $\mathbf{J}_{l}$ is the flux within the compartment, and $S$ is the source term induced by pumps and channels on the membranes.
We first introduce the notation used in the paper: $i=\mathrm{Na^{+},K^{+},Cl^{-}}$ labels the ion species, $l=ex,gl,ax$ labels the extracellular space, glial compartment and axon compartment, and $k=gl,ax$ labels the glial or axonal membrane in the optic nerve. A summary of the notation is listed in Appendix A1.
In each domain, we assume electroneutrality:
$\displaystyle\eta_{gl}\sum_{i}z^{i}c_{gl}^{i}+z^{gl}\eta_{gl}^{re}A_{gl}$
$\displaystyle=0,$ (2a)
$\displaystyle\eta_{ax}\sum_{i}z^{i}c_{ax}^{i}+z^{ax}\eta_{ax}^{re}A_{ax}$
$\displaystyle=0,$ (2b) $\displaystyle\sum_{i}z^{i}c_{ex}^{i}$
$\displaystyle=0,$ (2c)
where $A_{l}>0$ ($l=ax,gl$) is the density of proteins, with valence $z^{l}$, in the axons or glial cells. Here $\eta_{ax}$ and $\eta_{gl}$ are the volume fractions of the axon and glial compartments in the optic nerve, and $\eta_{ax}^{re}$ and $\eta_{gl}^{re}$ are the resting-state volume fractions.
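As a consistency check, conditions (2a)-(2b) fix the protein density once the ion concentrations are known; a minimal sketch in Python (all names and numerical values are illustrative, not calibrated values from the paper):

```python
def protein_density(eta, eta_re, z_prot, z_ions, c_ions):
    """Solve the electroneutrality condition of Eqs. (2a)-(2b),
    eta * sum_i(z_i * c_i) + z_prot * eta_re * A = 0, for A."""
    net_ion_charge = sum(z * c for z, c in zip(z_ions, c_ions))
    return -eta * net_ion_charge / (z_prot * eta_re)

# a compartment with net positive ion charge balanced by negative protein
A = protein_density(eta=0.4, eta_re=0.4, z_prot=-1,
                    z_ions=[1, 1, -1], c_ions=[10.0, 100.0, 90.0])
```

With a negative protein valence, a net positive ion charge yields $A>0$, as required below Eq. (2).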
#### 2.2.1 Water Circulation
The conservation of mass in each domain yields
$\displaystyle\frac{\partial\eta_{gl}}{\partial
t}+\mathcal{M}_{gl}U^{m}_{gl}+\nabla\cdot\left(\eta_{gl}\mathbf{u}_{gl}\right)=0,~{}\text{in}~{}\Omega_{OP},$
(3a) $\displaystyle\frac{\partial\eta_{ax}}{\partial
t}+\mathcal{M}_{ax}U^{m}_{ax}+\frac{\partial}{\partial
z}\left(\eta_{ax}u_{ax}^{z}\right)=0,~{}\text{in}~{}\Omega_{OP},$ (3b)
$\displaystyle\nabla\cdot\left(\eta_{gl}\mathbf{u}_{gl}\right)+\nabla\cdot\left(\eta_{ex}\mathbf{u}_{ex}\right)+\frac{\partial}{\partial
z}\left(\eta_{ax}u_{ax}^{z}\right)=0,~{}\text{in}~{}\Omega_{OP},$ (3c)
$\displaystyle\eta_{gl}+\eta_{ax}+\eta_{ex}=1,~{}\text{in}~{}\Omega,$ (3d)
where the transmembrane water flux is proportional to the
intracellular/extracellular hydrostatic pressure and osmotic pressure
differences, i.e., Starling’s law on the membrane,
$\displaystyle U^{m}_{gl}$
$\displaystyle=L_{gl}^{m}\left(p_{gl}-p_{ex}-\gamma_{gl}k_{B}T\left(O_{gl}-O_{ex}\right)\right),$
$\displaystyle U^{m}_{ax}$
$\displaystyle=L_{ax}^{m}\left(p_{ax}-p_{ex}-\gamma_{ax}k_{B}T\left(O_{ax}-O_{ex}\right)\right).$
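Starling's law can be sketched directly; a minimal illustration in Python (parameter names and values are placeholders, not calibrated values from the paper):

```python
def starling_flux(L_m, p_in, p_ex, gamma, O_in, O_ex, kBT):
    """Transmembrane water flux U^m = L^m*(p_in - p_ex - gamma*kBT*(O_in - O_ex)).
    By Eq. (3a), a positive flux shrinks the intracellular volume fraction."""
    return L_m * ((p_in - p_ex) - gamma * kBT * (O_in - O_ex))

# at osmotic equilibrium the flux is purely hydrostatic
u_hydro = starling_flux(L_m=1e-12, p_in=120.0, p_ex=100.0,
                        gamma=1.0, O_in=300.0, O_ex=300.0, kBT=4.28e-21)
```

The two driving terms compete: a hydrostatic excess pushes water out of the compartment, while an osmotic excess pulls water in.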
The glial cells are connected to each other by connexins and form a syncytium, while the axons are separate, more or less parallel cylindrical cells that do not form a syncytium (see Fig. 1). We therefore assume that the glial compartment is isotropic and the axon compartment is anisotropic. Here $\mathbf{u}_{l}$ and $p_{l}$ with $l=gl,ax,ex$ are the velocity and pressure in the glial cells, axons and extracellular space, respectively, and $k_{B}TO_{l}$ is the osmotic pressure [72, 80], defined by
$O_{ex}=\sum_{i}c_{ex}^{i},\quad
O_{l}=\sum_{i}c_{l}^{i}+A_{l}\frac{\eta_{l}^{re}}{\eta_{l}},\quad l=gl,ax,$
where $A_{l}\frac{\eta_{l}^{re}}{\eta_{l}}>0\ (l=gl,ax)$ is the density of the permanently negatively charged protein in the glial cells and axons, which varies with the volume fraction of the region.
The hydrostatic pressure $p_{l}$ and the volume fraction $\eta_{l}$ $(l=ex,gl,ax)$ are related by the force balance on membrane $k$ $(k=gl,ax)$ [42, 72]:
$\displaystyle K_{gl}\left(\eta_{gl}-\eta_{gl}^{re}\right)$
$\displaystyle=p_{gl}-p_{ex}-\left(p_{gl}^{re}-p_{ex}^{re}\right),\text{ in
}\Omega_{OP},$ (4a) $\displaystyle
K_{ax}\left(\eta_{ax}-\eta_{ax}^{re}\right)$
$\displaystyle=p_{ax}-p_{ex}-\left(p_{ax}^{re}-p_{ex}^{re}\right),\text{ in
}\Omega_{OP},$ (4b)
where $K_{k}\ (k=gl,ax)$ is a stiffness constant related to Young's modulus and Poisson's ratio, and $p_{l}^{re}\ (l=gl,ax,ex)$ is the resting-state hydrostatic pressure.
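The linear pressure-volume relation (4) can be sketched as follows (the stiffness and volume-fraction values are illustrative):

```python
def pressure_difference(K, eta, eta_re, dp_re):
    """Force balance Eq. (4): p_in - p_ex = K*(eta - eta_re) + (p_in^re - p_ex^re),
    with K the stiffness constant and dp_re the resting pressure difference."""
    return K * (eta - eta_re) + dp_re

# a swollen compartment (eta > eta_re) carries an elevated pressure difference
dp = pressure_difference(K=10.0, eta=0.45, eta_re=0.40, dp_re=1.0)
```

At the resting volume fraction the pressure difference reduces to its resting value, so Eq. (4) is consistent with the resting state by construction.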
###### Remark 2.1.
If we introduce the characteristic velocities $u^{*}_{l}$ in compartment $l$, the characteristic transmembrane velocity $U^{*}_{l}$, the characteristic time $t^{*}$, and the characteristic lengths $r^{*}$ in the radial direction and $z^{*}$ in the longitudinal direction, then Eqs. (3a), (3b) and (3c) can be written as
$\displaystyle\frac{\partial\eta_{gl}}{\partial\tilde{t}}+\delta_{1}\tilde{U}^{m}_{gl}+\delta_{2}\tilde{\nabla}\cdot\left(\eta_{gl}\tilde{\mathbf{u}}_{gl}\right)=0,$
(5a)
$\displaystyle\frac{\partial\eta_{ax}}{\partial\tilde{t}}+\delta_{3}\tilde{U}^{m}_{ax}+\delta_{4}\frac{\partial\left(\eta_{ax}\tilde{u}^{z}_{ax}\right)}{\partial\tilde{z}}=0,$
(5b)
$\displaystyle\tilde{\nabla}\cdot\left(\eta_{ex}\tilde{\mathbf{u}}_{ex}\right)+\delta_{5}\tilde{\nabla}\cdot\left(\eta_{gl}\tilde{\mathbf{u}}_{gl}\right)+\delta_{6}\delta_{0}\frac{\partial\left(\eta_{ax}\tilde{u}^{z}_{ax}\right)}{\partial\tilde{z}}=0,$
(5c)
where
$\displaystyle\tilde{\nabla}\cdot\left(\eta_{l}\tilde{\mathbf{u}}_{l}\right)=\frac{1}{\tilde{r}}\frac{\partial\left(\tilde{r}\eta_{l}\tilde{u}^{r}_{l}\right)}{\partial\tilde{r}}+\delta_{0}\frac{\partial\left(\eta_{l}\tilde{u}^{z}_{l}\right)}{\partial\tilde{z}},\
\ l=gl,ex,$
and
$\displaystyle\delta_{0}=\frac{r^{*}}{z^{*}},\delta_{1}=\mathcal{M}_{gl}U^{*}_{gl}t^{*},\
\ \delta_{2}=\frac{u^{*}_{gl}t^{*}}{r^{*}},\ \
\delta_{3}=\mathcal{M}_{ax}U^{*}_{ax}t^{*},$
$\displaystyle\delta_{4}=\frac{u^{*}_{ax}t^{*}}{z^{*}},\ \
\delta_{5}=\frac{u^{*}_{gl}}{u^{*}_{ex}},\ \
\delta_{6}=\frac{u^{*}_{ax}}{u^{*}_{ex}}.$
Further scaling could be applied to the velocity components in the $r$ and $z$ directions if the transmembrane flux were absent, by incompressibility. No such scaling is considered here because the transmembrane flux is significant.
The water flows in the glial and axon compartments and in the extracellular space are low-Reynolds-number flows, with characteristic velocities of about $1\sim 10\ \mathrm{nm/s}$ owing to the connexins and the high tortuosity. The stationary Stokes equation is therefore used,
$-\nabla\cdot(\mu\nabla\bm{u}_{l})+\nabla p_{l}=f_{l},$
where $f_{l}$ is the body force density in the respective compartment, for example the Lorentz force in the extracellular space [73]. Since the tissues behave like porous media, rigorous homogenization theories [2, 54] and control-volume averaging methods [38, 7] show that Darcy's law is a good macro-scale approximation for Stokes flow in porous media. For simplicity, we model the flows below as porous-media flows using Darcy's law [42, 80].
Fluid Velocity in the Glial Compartment. As we mentioned before, the glial
space is a connected space, where water can flow from cell to cell through
connexin proteins joining membranes of neighboring cells.
The velocity of fluid in glial syncytium $\mathbf{u}_{gl}$ depends on the
gradients of hydrostatic pressure and osmotic pressure:
$\displaystyle
u_{gl}^{r}=-\frac{\kappa_{gl}\tau_{gl}}{\mu}\left(\frac{\partial
p_{gl}}{\partial r}-\gamma_{gl}k_{B}T\frac{\partial O_{gl}}{\partial
r}\right),$ (6a) $\displaystyle
u_{gl}^{z}=-\frac{\kappa_{gl}\tau_{gl}}{\mu}\left(\frac{\partial
p_{gl}}{\partial z}-\gamma_{gl}k_{B}T\frac{\partial O_{gl}}{\partial
z}\right).$ (6b)
The boundary conditions of fluid in the glial syncytium are as follows
$\left\\{\begin{aligned} &\mathbf{u}_{gl}\cdot\hat{\mathbf{r}}=0,&\text{ on
}\Gamma_{1},\\\ &\nabla p_{gl}\cdot\hat{\mathbf{z}}=0,&\text{ on
}\Gamma_{2},\\\ &\nabla p_{gl}\cdot\hat{\mathbf{z}}=0,&\text{ on
}\Gamma_{6},\\\ &\mathbf{u}_{gl}\cdot\hat{\mathbf{r}}=0,&\text{ on
}\Gamma_{7}.\end{aligned}\right.$ (7)
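A Darcy-type velocity component of the form (6) can be sketched with explicit gradient inputs; all magnitudes below are assumptions for illustration:

```python
def darcy_velocity(kappa, tau, mu, dp_dx, dO_dx=0.0, gamma=0.0, kBT=1.0):
    """One component of the Darcy-type velocity in Eq. (6):
    u = -(kappa*tau/mu) * (dp/dx - gamma*kBT*dO/dx)."""
    return -(kappa * tau / mu) * (dp_dx - gamma * kBT * dO_dx)

# water flows down the hydrostatic-pressure gradient...
u_p = darcy_velocity(kappa=1e-14, tau=0.5, mu=1e-3, dp_dx=100.0)
# ...and up the osmotic-pressure gradient
u_o = darcy_velocity(kappa=1e-14, tau=0.5, mu=1e-3, dp_dx=0.0,
                     dO_dx=100.0, gamma=1.0, kBT=1.0)
```

The opposite signs of the two contributions mirror Starling's law on the membrane: hydrostatic and osmotic gradients drive water in opposite directions.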
Fluid Velocity in the Axon Compartment. Since the axons are connected only in the longitudinal direction, the fluid velocity in the axon compartment is defined along the $z$ direction:
$\displaystyle u_{ax}^{r}=0,$ (8a) $\displaystyle
u_{ax}^{z}=-\frac{\kappa_{ax}}{\mu}\frac{\partial p_{ax}}{\partial z}.$ (8b)
Dirichlet boundary conditions are imposed on the fluid velocity in the axons:
$\nabla p_{ax}\cdot\hat{\mathbf{z}}=0,\quad\text{ on
}\Gamma_{2}\cup\Gamma_{6}.$ (9)
Fluid Velocity in the Extracellular Space. The extracellular space is narrow, and the extracellular velocity is determined by the gradients of hydrostatic pressure and electric potential:
$\displaystyle u_{ex}^{r}=-\frac{\kappa_{ex}\tau_{ex}}{\mu}\frac{\partial
p_{ex}}{\partial r}-k_{e}\tau_{ex}\frac{\partial\phi_{ex}}{\partial r},$ (10a)
$\displaystyle u_{ex}^{z}=-\frac{\kappa_{ex}\tau_{ex}}{\mu}\frac{\partial
p_{ex}}{\partial z}-k_{e}\tau_{ex}\frac{\partial\phi_{ex}}{\partial z},$ (10b)
where $\phi_{ex}$ is the electric potential in the extracellular space, $\tau_{ex}$ is the tortuosity of the extracellular region [46, 52], $\mu$ is the viscosity of water, $k_{e}$ describes the effect of electro-osmotic flow [40, 65, 70], and $\kappa_{ex}$ is the permeability of the extracellular space. The hydraulic permeability $\kappa_{ex}$, tortuosity $\tau_{ex}$ and electro-osmotic parameter $k_{e}$ take distinct values in the regions $\Omega_{ex}^{OP}$ and $\Omega_{ex}^{SAS}$,
$\displaystyle\kappa_{ex}$
$\displaystyle=\left\\{\begin{array}[]{l}\kappa_{ex}^{OP},\text{ in
}\Omega_{OP},\\\ \kappa_{ex}^{SAS},\text{ in
}\Omega_{SAS},\end{array}\right.\tau_{ex}=\left\\{\begin{array}[]{l}\tau_{ex}^{OP},\text{
in }\Omega_{OP},\\\ \tau_{ex}^{SAS},\text{ in
}\Omega_{SAS},\end{array}\right.$ $\displaystyle k_{e}$
$\displaystyle=\left\\{\begin{array}[]{l}k_{e}^{OP},\text{ in }\Omega_{OP},\\\
k_{e}^{SAS},\text{ in }\Omega_{SAS},\end{array}\right..$
Since $\Gamma_{2}\cup\Gamma_{3}$ is the far end of the optic nerve, away from the eyeball and next to the optic canal, we assume that the extracellular hydrostatic pressure there equals the cerebrospinal fluid (CSF) pressure. On the other hand, the intraocular pressure (IOP) is imposed at $\Gamma_{6}$, where the extracellular space is connected to the retina. At boundary $\Gamma_{5}$, we assume a non-permeable boundary. We are aware of the significance of the
pressures and flows at these boundaries for clinical phenomena including
glaucoma [5, 47, 22] and will return to that subject in later publications.
The water flow across the semi-permeable membrane $\Gamma_{4}$ is produced by
the lymphatic drainage on the dura membrane, which depends on the difference
between the extracellular pressure and the orbital pressure (OBP). We assume the velocity across the pia membrane $\Gamma_{7}$ is continuous and determined by the combination of hydrostatic and osmotic pressures. To summarize, the
boundary conditions of the extracellular fluid are
$\left\\{\begin{aligned} &\mathbf{u}_{ex}\cdot\hat{\mathbf{r}}=0,&&\mbox{on
$\Gamma_{1}$},\\\ &p_{ex}=p_{CSF},&&\mbox{on $\Gamma_{2}\cup\Gamma_{3}$},\\\
&\mathbf{u}^{SAS}_{ex}\cdot\
\hat{\mathbf{r}}=L^{m}_{dr}\left(p^{SAS}_{ex}-p_{OBP}\right),&&\mbox{on
$\Gamma_{4}$},\\\ &\mathbf{u}_{ex}\cdot\hat{\mathbf{r}}=0,&&\mbox{on
$\Gamma_{5}$},\\\ &p_{ex}=p_{ICP},&&\mbox{on $\Gamma_{6}$},\\\
&\mathbf{u}^{OP}_{ex}\cdot\hat{\mathbf{r}}=\mathbf{u}^{SAS}_{ex}\cdot\hat{\mathbf{r}}\\\
&=L^{m}_{pia}\left(p^{OP}_{ex}-p^{SAS}_{ex}-\gamma_{pia}k_{B}T\left(O^{OP}_{ex}-O^{SAS}_{ex}\right)\right),&&\mbox{on
$\Gamma_{7}$},\end{aligned}\right.$ (11)
where $p_{CSF}$ is the cerebrospinal fluid pressure [5], $p_{ICP}$ is the pressure in the eye, and $p_{OBP}$ is the orbital pressure on the dura mater.
###### Remark 2.2.
Substituting the velocities (6), (8) and (10) into the conservation laws (3) yields Poisson equations for the hydrostatic pressures in the different compartments. Eqs. (6), (8) and (10) show that the velocities vary in both the $r$ and $z$ directions, depending on the gradients of the hydrostatic pressure, osmotic pressure, or electric potential. The distribution of velocity in the radial direction during and after a train of stimuli is shown in Appendix Fig. 17.
#### 2.2.2 Ion Transport
The conservation of chemical species implies the following system of partial
differential equations to describe the dynamics of ions in each region, for
$i=\mathrm{Na^{+}},\mathrm{K^{+}},\mathrm{Cl^{-}}$
$\displaystyle\frac{\partial\left(\eta_{gl}c_{gl}^{i}\right)}{\partial
t}+\mathcal{M}_{gl}J^{m,i}_{gl}+\nabla\cdot\left(\eta_{gl}\mathbf{j}_{gl}^{i}\right)=0,\text{
in }\Omega_{OP},$ (12)
$\displaystyle\frac{\partial\left(\eta_{ax}c_{ax}^{i}\right)}{\partial
t}+\mathcal{M}_{ax}J^{m,i}_{ax}+\frac{\partial}{\partial
z}\left(\eta_{ax}j_{ax,z}^{i}\right)=0,\text{ in }\Omega_{OP},$ (13)
$\displaystyle\frac{\partial\left(\eta_{ex}c_{ex}^{i}\right)}{\partial t}-\mathcal{M}_{ax}J^{m,i}_{ax}-\mathcal{M}_{gl}J^{m,i}_{gl}+\nabla\cdot\left(\eta_{ex}\mathbf{j}_{ex}^{i}\right)=0,\text{ in }\Omega_{OP},$ (14)
where the last equation reduces to the following in the $\Omega_{SAS}$ region,
$\frac{\partial c_{ex}^{i,SAS}}{\partial
t}+\nabla\cdot\mathbf{j}_{ex}^{i,SAS}=0.$ (15)
The transmembrane ion flux $J_{k}^{m,i}\ (k=gl,ax)$ consists of active ion
pump source $J_{p,k}^{i}$ and passive ion channel source $J_{c,k}^{i}$, on the
$k$ membrane,
$J_{k}^{m,i}=J_{p,k}^{i}+J_{c,k}^{i},\quad k=gl,ax,\quad
i=\mathrm{Na}^{+},\mathrm{K}^{+},\mathrm{Cl}^{-}.$
On the glial cell membranes, $J_{c,gl}^{i}$ is defined as
$J_{c,gl}^{i}=\frac{g_{gl}^{i}}{z^{i}e}\left(\phi_{gl}-\phi_{ex}-E_{gl}^{i}\right),\
\ i=\mathrm{Na^{+},K^{+},Cl^{-}},$ (16)
where the Nernst potential is used to describe the gradient of chemical
potential
$E_{gl}^{i}=\frac{k_{B}T}{ez^{i}}\log\left(\frac{c_{ex}^{i}}{c_{gl}^{i}}\right)$
and the conductance $g_{gl}^{i}$ of the $i$th ion species on the glial membrane is a fixed constant, independent of voltage and time. On the axon membrane,
$J_{c,ax}^{i}$ is defined as
$J_{c,ax}^{i}=\frac{g_{ax}^{i}}{z^{i}e}\left(\phi_{ax}-\phi_{ex}-E_{ax}^{i}\right),\
\ i=\mathrm{Na^{+},K^{+},Cl^{-}},$
where
$g_{ax}^{Na}=\bar{g}^{Na}m^{3}h+g_{leak}^{Na},\ \
g_{ax}^{K}=\bar{g}^{K}n^{4}+g_{leak}^{K},\ \ g_{ax}^{Cl}=g_{leak}^{Cl}.$
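The Nernst potential and the passive channel flux above can be sketched as follows; the thermal voltage $k_{B}T/e\approx 0.0267\ \mathrm{V}$ near $310\ \mathrm{K}$ is an illustrative assumption:

```python
import math

E_CHARGE = 1.602176634e-19  # elementary charge, C
KT_OVER_E = 0.0267          # k_B*T/e in volts near 310 K (assumed)

def nernst(z, c_ex, c_in, kT_over_e=KT_OVER_E):
    """Nernst potential E^i = (k_B*T / (z*e)) * log(c_ex / c_in)."""
    return (kT_over_e / z) * math.log(c_ex / c_in)

def channel_flux(g, z, phi_in, phi_ex, E, e=E_CHARGE):
    """Passive channel flux J = (g / (z*e)) * (phi_in - phi_ex - E^i), Eq. (16)."""
    return g / (z * e) * (phi_in - phi_ex - E)

# the flux vanishes when the membrane potential equals the Nernst potential
E_K = nernst(z=1, c_ex=3.0, c_in=100.0)   # negative: K+ rich inside
```

The channel flux thus acts to pull the transmembrane potential toward the Nernst potential of each ion species.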
The time-dependent dynamics of the open probabilities, often loosely called 'gating', are governed by the Hodgkin-Huxley model [17, 20]
$\displaystyle\frac{dn}{dt}$ $\displaystyle=\alpha_{n}(1-n)-\beta_{n}n,$ (17)
$\displaystyle\frac{dm}{dt}$ $\displaystyle=\alpha_{m}(1-m)-\beta_{m}m,$
$\displaystyle\frac{dh}{dt}$ $\displaystyle=\alpha_{h}(1-h)-\beta_{h}h,$
where $n$ is the open probability of $\mathrm{K}^{+}$ channel, $m$ is the open
probability of the $\mathrm{Na}^{+}$ activation gate, and $h$ is the open
probability of the $\mathrm{Na}^{+}$ inactivation gate.
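Each gating equation in (17) relaxes its variable toward the steady state $\alpha/(\alpha+\beta)$ at fixed voltage. A minimal forward-Euler sketch; the specific rate expressions for the $\mathrm{K^{+}}$ gate below are the classic Hodgkin-Huxley forms and are an assumption here (the paper takes its rates from [17, 20]):

```python
import math

def alpha_n(V):
    """Classic HH opening rate for the K+ gate (V in mV; assumed form)."""
    return 0.01 * (10.0 - V) / (math.exp((10.0 - V) / 10.0) - 1.0)

def beta_n(V):
    """Classic HH closing rate for the K+ gate (assumed form)."""
    return 0.125 * math.exp(-V / 80.0)

def step_gate(x, alpha, beta, dt):
    """One forward-Euler step of dx/dt = alpha*(1 - x) - beta*x, Eq. (17)."""
    return x + dt * (alpha * (1.0 - x) - beta * x)

# at a clamped voltage the gate relaxes to n_inf = alpha / (alpha + beta)
V, dt, n = 20.0, 0.01, 0.0
for _ in range(100_000):
    n = step_gate(n, alpha_n(V), beta_n(V), dt)
```

The same update applies to $m$ and $h$ with their own rate functions; during an action potential $V$ changes in time and the gates lag behind their instantaneous steady states.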
We assume that the only pump is the Na/K active transporter. We are more than
aware that other active transport systems can and likely do move ions and thus
water in this system. They will be included as experimental information
becomes available.
For the Na/K pump flux $J_{p,k}^{i}$ $(k=ax,gl)$, the pump strength $I_{k}$ depends on the concentrations in the intracellular and extracellular spaces [21, 17], i.e.
$J_{p,k}^{Na}=\frac{3I_{k}}{e},\quad J_{p,k}^{K}=-\frac{2I_{k}}{e},\quad
J_{p,k}^{Cl}=0,\quad k=gl,ax,$ (18)
where
$\displaystyle I_{k}$
$\displaystyle=I_{k,1}\left(\frac{c_{k}^{Na}}{c_{k}^{Na}+K_{Na1}}\right)^{3}\left(\frac{c_{ex}^{K}}{c_{ex}^{K}+K_{K1}}\right)^{2}$
(19)
$\displaystyle+I_{k,2}\left(\frac{c_{k}^{Na}}{c_{k}^{Na}+K_{Na2}}\right)^{3}\left(\frac{c_{ex}^{K}}{c_{ex}^{K}+K_{K2}}\right)^{2},\quad
k=ax,gl.$
$I_{k,1}$ and $I_{k,2}$ are related to the maximum currents of the $\alpha_{1}$- and $\alpha_{2}$-isoforms of the $\mathrm{Na/K}$ pump on the glial membrane ($k=gl$) or the axon membrane ($k=ax$).
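The pump model of Eqs. (18)-(19) can be sketched as follows (parameter names and test values are illustrative, not the paper's calibrated constants):

```python
E_CHARGE = 1.602176634e-19  # elementary charge, C

def pump_strength(c_Na_in, c_K_ex, I1, K_Na1, K_K1, I2, K_Na2, K_K2):
    """Pump strength I_k of Eq. (19): two Na/K pump isoforms, each with
    Hill-type saturation in intracellular Na+ and extracellular K+."""
    def isoform(I_max, K_Na, K_K):
        return (I_max * (c_Na_in / (c_Na_in + K_Na)) ** 3
                      * (c_K_ex / (c_K_ex + K_K)) ** 2)
    return isoform(I1, K_Na1, K_K1) + isoform(I2, K_Na2, K_K2)

def pump_fluxes(I_k, e=E_CHARGE):
    """Eq. (18): 3 Na+ exported and 2 K+ imported per unit current; Cl- untouched."""
    return {"Na": 3.0 * I_k / e, "K": -2.0 * I_k / e, "Cl": 0.0}
```

The cubed and squared saturation factors encode the 3 Na$^{+}$ : 2 K$^{+}$ stoichiometry of the pump's binding sites.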
The definitions of ion flux in each domain are as follows, for
$i=\mathrm{Na^{+},K^{+},Cl^{-}}$,
$\displaystyle\mathbf{j}_{l}^{i}=c_{l}^{i}\mathbf{u}_{l}-D_{l}^{i}\tau_{l}\left(\nabla
c_{l}^{i}+\frac{z^{i}e}{k_{B}T}c_{l}^{i}\nabla\phi_{l}\right),\quad l=gl,ex,$
$\displaystyle
j_{ax,z}^{i}=c_{ax}^{i}u_{ax}^{z}-D_{ax}^{i}\left(\frac{\partial
c_{ax}^{i}}{\partial
z}+\frac{z^{i}e}{k_{B}T}c_{ax}^{i}\frac{\partial\phi_{ax}}{\partial
z}\right).$
For the axon and glial compartment boundary conditions, we have
$c_{ax}^{i}=c_{ax}^{i,re},\quad\text{ on }\ \Gamma_{2}\cup\Gamma_{6},$ (20)
and
$\left\\{\begin{array}[]{ll}\mathbf{j}_{gl}^{i}\cdot\hat{\mathbf{r}}=0,&\text{
on }\Gamma_{1},\\\ c_{gl}^{i}=c_{gl}^{i,re},&\text{ on
}\Gamma_{2}\cup\Gamma_{6},\\\
\mathbf{j}_{gl}^{i}\cdot\hat{\mathbf{r}}=0,&\text{ on
}\Gamma_{7},\end{array}\right.$ (21)
where Dirichlet boundary conditions are used at $\Gamma_{2}\cup\Gamma_{6}$ for the axons and glial cells, and a no-flux boundary condition is used for the glial ion flux on the pia mater $\Gamma_{7}$.
For the extracellular space boundary condition, similar boundary conditions
are imposed except on the pia mater $\Gamma_{7}$. The flux across the pia
mater is assumed continuous and Ohm’s law is used [80]. Additionally, a non-
permeable boundary condition is used at location $\Gamma_{5}$ and a
homogeneous Neumann boundary condition is applied at the location of the dura
mater $\Gamma_{4}$,
$\left\\{\begin{array}[]{ll}\mathbf{j}_{ex}^{i}\cdot\hat{\mathbf{r}}=0,&\text{
on }\Gamma_{1},\\\ c_{ex}^{i}=c_{csf}^{i},&\text{ on
}\Gamma_{2}\cup\Gamma_{3},\\\ \nabla c_{ex}^{i}\cdot\hat{\mathbf{r}}=0,&\text{
on }\Gamma_{4},\\\ \mathbf{j}_{ex}^{i}\cdot\hat{\mathbf{z}}=0,&\text{ on
}\Gamma_{5},\\\ c_{ex}^{i}=c_{eye}^{i},&\text{ on }\Gamma_{6},\\\
\mathbf{j}_{ex}^{i,OP}\cdot\hat{\mathbf{r}}=\mathbf{j}_{ex}^{i,SAS}\cdot\hat{\mathbf{r}}=\frac{G_{pia}^{i}}{z^{i}e}\left(\phi_{ex}^{OP}-\phi_{ex}^{SAS}-E_{pia}^{i}\right),&\text{
on }\Gamma_{7}.\end{array}\right.$ (22)
###### Remark 2.3.
Suppose $c^{i,*}_{l}$ is the concentration scale of the $i$th ion species in the $l$ compartment and $\Delta c^{i,*}_{l}$ is the scale of its concentration variation in the $r$ and $z$ directions. If we define
$\delta^{i}_{7,l}=\frac{\Delta c^{i,*}_{l}}{c^{i,*}_{l}},\ \ i=\mathrm{Na}^{+},\mathrm{K}^{+},\mathrm{Cl}^{-},\ \ l=ax,gl,ex,$
the ion fluxes can be written as
$\displaystyle\tilde{\mathbf{j}}^{i}_{l}$
$\displaystyle=Pe^{i}_{l}\delta^{i}_{7,l}\tilde{c}^{i}_{l}\tilde{\mathbf{u}}_{l}-\left(\delta^{i}_{7,l}\tilde{\nabla}\tilde{c}^{i}_{l}+z^{i}\tilde{c}^{i}_{l}\tilde{\nabla}\tilde{\phi}_{l}\right),\
\ \ \ l=gl,ex,$ $\displaystyle\tilde{j}^{i}_{ax,z}$
$\displaystyle=Pe^{i}_{ax}\delta^{i}_{7,ax}\tilde{c}^{i}_{ax}\tilde{u}^{z}_{ax}-\left(\delta^{i}_{7,ax}\frac{\partial\tilde{c}^{i}_{ax}}{\partial\tilde{z}}+z^{i}\tilde{c}^{i}_{ax}\frac{\partial\tilde{\phi}_{ax}}{\partial\tilde{z}}\right),$
with Peclet numbers
$\displaystyle Pe^{i}_{ax}=\frac{u^{*}_{ax}z^{*}c^{i,*}_{ax}}{D^{i}_{ax}\Delta
c^{i,*}_{ax}},\ \
Pe^{i}_{l}=\frac{u^{*}_{l}r^{*}c^{i,*}_{l}}{D^{i}_{l}\tau_{l}\Delta
c^{i,*}_{l}},\ l=gl,ex.$ (23)
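As a sanity check on Eq. (23), the Peclet numbers can be evaluated for an illustrative set of scales. All numerical values below (velocity, length, diffusivity, tortuosity, and concentration scales) are assumptions chosen only to demonstrate the formulas; they are not parameter values calibrated in this paper.

```python
# Illustrative evaluation of the Peclet numbers defined in Eq. (23).
# All numerical values are assumed for demonstration only.

def peclet_axial(u_star, z_star, c_star, D, dc_star):
    """Pe^i_ax = u*_ax z* c^{i,*}_ax / (D^i_ax * Delta c^{i,*}_ax)."""
    return u_star * z_star * c_star / (D * dc_star)

def peclet_radial(u_star, r_star, c_star, D, tau, dc_star):
    """Pe^i_l = u*_l r* c^{i,*}_l / (D^i_l tau_l * Delta c^{i,*}_l), l = gl, ex."""
    return u_star * r_star * c_star / (D * tau * dc_star)

# Assumed scales: slow bulk flow, cm-scale axial length, ~50 um radius,
# aqueous ion diffusivity, tortuosity below one, ~0.1 mM variation scale.
pe_ax = peclet_axial(u_star=1e-8, z_star=1.5e-2, c_star=100.0,
                     D=1.3e-9, dc_star=0.1)
pe_ex = peclet_radial(u_star=1e-9, r_star=48e-6, c_star=100.0,
                      D=1.3e-9, tau=0.5, dc_star=0.1)
print(f"Pe_ax ~ {pe_ax:.1f}, Pe_ex ~ {pe_ex:.3f}")
```

Note that the full concentration scale $c^{i,*}$ enters the numerator while the much smaller variation scale $\Delta c^{i,*}$ enters the denominator, so these Peclet numbers can be much larger than the naive $u^{*}L/D$ estimate.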
If we let $g_{l}^{*},~{}l=ax,gl$, be the characteristic membrane conductance and $\frac{k_{B}T}{e}$ be the characteristic electric potential, the dimensionless form of the transmembrane flux is
$\tilde{J}^{m,i}_{l}=\tilde{J}^{i}_{c,l}+\tilde{J}^{i}_{p,l},$
where for $i=\mathrm{Na^{+},K^{+},Cl^{-}},\ l=gl,ax,$
$\displaystyle\tilde{J}_{c,l}^{i}=\frac{\tilde{g}_{l}^{i}}{z^{i}}\left(\tilde{\phi}_{l}-\tilde{\phi}_{ex}-\tilde{E}_{l}^{i}\right),\ \ \ \tilde{J}_{p,l}^{i}=\frac{J_{p,l}^{i}e^{2}}{k_{B}Tg^{*}_{l}}.$
The governing equations for ions become
$\displaystyle\frac{\partial\left(\eta_{gl}\tilde{c}_{gl}^{i}\right)}{\partial\tilde{t}}+\delta^{i}_{8}\tilde{J}^{m,i}_{gl}+\delta^{i}_{9}\tilde{\nabla}\cdot\left(\eta_{gl}\tilde{\mathbf{j}}^{i}_{gl}\right)=0,$
(24)
$\displaystyle\frac{\partial\left(\eta_{ax}\tilde{c}_{ax}^{i}\right)}{\partial\tilde{t}}+\delta^{i}_{10}\tilde{J}^{m,i}_{ax}+\delta^{i}_{11}\frac{\partial}{\partial\tilde{z}}\left(\eta_{ax}\tilde{j}_{ax,z}^{i}\right)=0,$
(25)
$\displaystyle\frac{\partial\left(\eta_{ex}\tilde{c}_{ex}^{i}\right)}{\partial\tilde{t}}-\delta^{i}_{12}\delta^{i}_{10}\tilde{J}^{m,i}_{ax}-\delta^{i}_{13}\delta^{i}_{8}\tilde{J}^{m,i}_{gl}+\delta^{i}_{14}\tilde{\nabla}\cdot\left(\eta_{ex}\tilde{\mathbf{j}}^{i}_{ex}\right)=0,\quad\quad\quad$
(26)
where
$\displaystyle\tilde{\nabla}\cdot$
$\displaystyle\left(\eta_{l}\tilde{\mathbf{j}}^{i}_{l}\right)=\frac{1}{\tilde{r}}\frac{\partial\left(\tilde{r}\eta_{l}\tilde{j}_{l}^{r,i}\right)}{\partial\tilde{r}}+(\delta_{0})^{2}\frac{\partial\left(\eta_{l}\tilde{j}_{l}^{z,i}\right)}{\partial\tilde{z}},\
\ l=gl,ex,$ $\displaystyle\delta^{i}_{8}$
$\displaystyle=\frac{t^{*}\mathcal{M}_{gl}g^{*}_{gl}k_{B}T}{c_{gl}^{i,*}e^{2}},\
\ \delta^{i}_{9}=\frac{D^{i}_{gl}\tau_{gl}t^{*}}{(r^{*})^{2}},$
$\displaystyle\delta^{i}_{10}$
$\displaystyle=\frac{t^{*}\mathcal{M}_{ax}g^{*}_{ax}k_{B}T}{c_{ax}^{i,*}e^{2}},\
\ \delta^{i}_{11}=\frac{D^{i}_{ax}t^{*}}{(z^{*})^{2}},$
$\displaystyle\delta^{i}_{12}$
$\displaystyle=\frac{c_{ax}^{i,*}}{c_{ex}^{i,*}},\ \
\delta^{i}_{13}=\frac{c_{gl}^{i,*}}{c_{ex}^{i,*}},\ \
\delta^{i}_{14}=\frac{D^{i}_{ex}\tau_{ex}t^{*}}{(r^{*})^{2}}.$
###### Remark 2.4.
In the rest of this paper, the symbol $\Delta f$ is used to denote the
variation of the variable $f$ from its resting state value.
Multiplying the equations in (12)-(14) by $z^{i}e$, summing over $i$, and using the charge neutrality condition, we obtain the following system for the electric fields in the $ax$, $gl$, and $ex$ compartments:
$\displaystyle\sum_{i}z^{i}e\mathcal{M}_{gl}J^{m,i}_{gl}+\sum_{i}\nabla\cdot\left(z^{i}e\eta_{g}\mathbf{j}_{gl}^{i}\right)=0,$
(27)
$\displaystyle\sum_{i}z^{i}e\mathcal{M}_{ax}J^{m,i}_{ax}+\sum_{i}\frac{\partial}{\partial
z}\left(z^{i}e\eta_{ax}j_{ax,z}^{i}\right)=0,$ (28)
$\displaystyle\sum_{i}z^{i}e\nabla\cdot\left(\eta_{gl}\mathbf{j}_{gl}^{i}\right)+\sum_{i}\frac{\partial}{\partial z}\left(z^{i}e\eta_{ax}j_{ax,z}^{i}\right)+\sum_{i}\nabla\cdot\left(z^{i}e\eta_{ex}\mathbf{j}_{ex}^{i}\right)=0.$ (29)
In the subarachnoid space $\Omega_{SAS}$, the extracellular equations reduce
to
$\sum_{i}\nabla\cdot\left(z^{i}e\,\mathbf{j}_{ex}^{i,SAS}\right)=0.$
(30)
The boundary conditions for electric fields $\phi_{ax}$, $\phi_{gl}$ and
$\phi_{ex}$ are given below.
In the axon compartment:
$\left\\{\begin{aligned} \nabla\phi_{ax}\cdot\hat{\mathbf{z}}=0,&\text{ on
}\Gamma_{2},\\\ \nabla\phi_{ax}\cdot\hat{\mathbf{z}}=0,&\text{ on
}\Gamma_{6},\end{aligned}\right.$ (31)
In the glial compartment:
$\left\\{\begin{aligned} &\nabla\phi_{gl}\cdot\hat{\mathbf{r}}=0,&\text{ on
}\Gamma_{1},\\\ &\nabla\phi_{gl}\cdot\hat{\mathbf{z}}=0,&\text{ on
}\Gamma_{2},\\\ &\nabla\phi_{gl}\cdot\hat{\mathbf{z}}=0,&\text{ on
}\Gamma_{6},\\\ &\nabla\phi_{gl}\cdot\hat{\mathbf{r}}=0,&\text{ on
}\Gamma_{7},\end{aligned}\right.$ (32)
and in the extracellular space:
$\left\\{\begin{aligned} &\nabla\phi_{ex}\cdot\hat{\mathbf{r}}=0,&&\text{on
}\Gamma_{1},\\\ &\nabla\phi_{ex}\cdot\hat{\mathbf{z}}=0,&&\text{on
}\Gamma_{2}\cup\Gamma_{3},\\\
&\nabla\phi_{ex}\cdot\hat{\mathbf{r}}=0,&&\text{on }\Gamma_{4},\\\
&\nabla\phi_{ex}\cdot\hat{\mathbf{z}}=0,&&\text{on }\Gamma_{5},\\\
&\nabla\phi_{ex}\cdot\hat{\mathbf{z}}=0,&&\text{on }\Gamma_{6},\\\
&\sum_{i}z^{i}e\mathbf{j}_{ex}^{i,OP}\cdot\hat{\mathbf{r}}=\sum_{i}z^{i}e\mathbf{j}_{ex}^{i,SAS}\cdot\hat{\mathbf{r}}&&\\\
&=\sum_{i}G_{pia}^{i}\left(\phi_{ex}^{OP}-\phi_{ex}^{SAS}-E_{pia}^{i}\right),&&\text{on
}\Gamma_{7}.\end{aligned}\right.$ (33)
In the rest of this paper, the full electric-diffusion-convection model is defined by Eqs. (3a) through (33). The electrodiffusion model, defined by Eqs. (12)-(33), is a reduced version of the full model in which water flow is neglected.
## 3 Model Calibration and Validation
In this section, we use the physiological and anatomical data in Orkand et al.
[48] to calibrate the value of parameters, like membrane conductance,
capacitance, and structural parameters. We then validate the model by computing results with these parameters and comparing them with the experiment, which was designed to measure the change in potential across the glial membrane produced by a train of action potentials.
In the Orkand experiment, the optic nerve was placed in bathing solutions with three different $\mathrm{K^{+}}$ concentrations $(1.5\ \mathrm{mM},3\ \mathrm{mM},4.5\ \mathrm{mM})$ and the resting potential across the glial membrane was measured. The axon was then stimulated simultaneously at both ends (see lines 5-6 of the Methods section of the Orkand paper) to give a train of action potentials. The action potentials increased $\mathrm{K^{+}}$ in the extracellular space (ECS), and the accumulated $\mathrm{K^{+}}$ made the glial membrane potential more positive.
In the simulation, we applied a train of stimuli with frequency $17/\mathrm{s}$ for $1\ \mathrm{s}$ to the axon membrane at $z=2.25\ \mathrm{mm}$ and $13.5\ \mathrm{mm}$, $0<r<R_{a}=48\ \mathrm{\mu m}$. Each individual stimulus in the train lasted $3\ \mathrm{ms}$ (as Orkand's paper indicated) and had strength $3\ \mathrm{mA/m^{2}}$. The stimulus was large enough to exceed threshold and generate action potentials.
We set the ECS $\mathrm{K^{+}}$ to $1.5\ \mathrm{mM}$, $3\ \mathrm{mM}$, or $4.5\ \mathrm{mM}$ and record the largest absolute value of the change in glial membrane potential in each case, as shown in Fig. 4; this number is loosely called ‘the depolarization’ in most laboratories. The blue symbols show the experimental data, the red ones the simulation results of the electrodiffusion model, and the green ones those of the full model. Fig. 4 shows that both the full model and the electrodiffusion model match the experimental resting potentials (solid symbols) and depolarizations (open symbols) very well for the different ECS $\mathrm{K^{+}}$ concentrations.
Figure 3: (a) Axon membrane potential profile when the eye end of the axon is stimulated; the inset shows the stimulus current profile. (b) Axon membrane potential profile when both ends of the axon are stimulated.
Fig. 3 shows the propagation of the axon action potential. The membrane potential from axons at the center of the optic nerve bundle is shown when different locations of the axon are stimulated. In both the eye-end and two-end cases, the stimulus current was applied from $t=1\ \mathrm{ms}$ to $t=4\ \mathrm{ms}$. In Fig. 3a, the stimulus was applied to the optic nerve near the eye end $(z=2.25\ \mathrm{mm})$. At $t=1\ \mathrm{ms}$, the discontinuity of the stimulus current induces jumps of the axon membrane potential in Fig. 3. By $t=10\ \mathrm{ms}$, the action potential has completely propagated past the far-from-eye location $(13.5\ \mathrm{mm})$. The axon in the optic nerve of the mud puppy is unmyelinated, and the propagation speed in the model lies in the range typical of unmyelinated axons, i.e., between $0.5\ \mathrm{m/s}$ and $2.0\ \mathrm{m/s}$ [64]. In Fig. 3b, when both ends of the axon are stimulated, the axon membrane potential is more spatially uniform at each time point compared to the single-side stimulus case. Orkand et al. used the dual stimulation to more closely approximate a ‘space clamp’.
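A quick back-of-the-envelope check of the quoted propagation speed, using only the distances and times stated above (stimulus at $z\approx 2.25\ \mathrm{mm}$ starting at $t=1\ \mathrm{ms}$, wave past $z=13.5\ \mathrm{mm}$ by $t=10\ \mathrm{ms}$), gives a lower bound on the conduction velocity:

```python
# Rough conduction-velocity estimate from the simulation times quoted in the text.
z_stim_mm = 2.25      # stimulus location (mm)
z_far_mm = 13.5       # far-from-eye location (mm)
t_start_ms = 1.0      # stimulus onset (ms)
t_passed_ms = 10.0    # time by which the wave has passed z_far (ms)

# Lower bound: the wave covered at least this distance in at most this time.
v_m_per_s = (z_far_mm - z_stim_mm) * 1e-3 / ((t_passed_ms - t_start_ms) * 1e-3)
print(f"estimated speed >= {v_m_per_s:.2f} m/s")  # ~1.25 m/s
```

The resulting $\gtrsim 1.25\ \mathrm{m/s}$ indeed lies within the $0.5$-$2.0\ \mathrm{m/s}$ range typical of unmyelinated axons cited above.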
Figure 4: The comparison between the experiment [48] and simulation on the
effect of nerve impulses on the membrane potential of glial cells. The solid
symbols are resting potentials and the open symbols are depolarization
potentials with different ECS $\mathrm{K^{+}}$ concentrations.
## 4 Effects of Water Flow
In this section, we estimate, when part of the nerve is stimulated, the transmembrane fluxes and the resulting accumulation of ions in the extracellular space and glial cells. Our main conclusion is that the variation of osmotic pressure between the extracellular space and glial cells is the dominant mechanism that drives water flow. The water flows are significant, and many important flows occur in the glial region. It is important to note that these flows can occur in the glia because it is a syncytium of irregular but finite cells (i.e., not long cylinders) that allows easy flow from cell to cell. The circulation pattern and strength of the water flow in the optic nerve are also presented.
To simplify our discussions, we focus our analyses on an idealized setting
where the stimulus is applied at an inner part of the axon compartment. As
shown in Fig. 5, the stimulus was applied at $0<r<r_{sti}$ at a given location
$z=z_{0}$. This stimulus is within the optic nerve, so $r_{sti}<R_{a}=r^{*}$
shown in Fig. 5. We distinguish the stimulated region from the non-stimulated region in the optic nerve $\Omega_{OP}$, since the electrical signal propagates in the $z$ direction in the axon compartment. We do not apply the stimulus everywhere in this region; rather, we apply it only at the location $z_{0}$ within the radial extent $0<r<r_{sti}$.
Figure 5: Stimulated region and non-stimulated region in the optic nerve
$(\Omega_{OP})$. The stimulus is applied in the axon compartment where
$0<r<r_{sti}$ at a given location $z=z_{0}$.
To understand the mechanism inducing the water circulation, we first estimate the variations of ion concentrations from the axon to the extracellular space during a single action potential. Then we analyze the different transmembrane currents on the glial cells and identify the dominant $\mathrm{K^{+}}$ current. Finally, we study the osmotic pressure change after a train of action potentials on the axon.
### 4.1 Single action potential estimation
We first estimate the amount of ion exchange between axon and extracellular
space during a single action potential. We assume that during the single action potential the volume fractions $\eta_{l},\ l=ax,gl,ex$, do not differ from their resting-state values. We then find that the variations of $\mathrm{Na^{+}}$ and $\mathrm{K^{+}}$ in the stimulated extracellular region are equal in magnitude to leading order, which agrees with experimental observations [49, 30, 12].
Although our estimation is based on the classic Hodgkin-Huxley model, the
methods are general and can be applied to systems with other channels and
transporters.
When an action potential occurs in the nerve, the equilibrium (or steady-state) balance between the ions and electric fields is lost and the resting state changes. We introduce notation to separate the resting-state variables (with superscript ‘$re$’) before the action potentials from the variables during the action potentials (with superscript ‘$dy$’).
We introduce the current of $i$th ionic species through axon and glial
membrane as
$\displaystyle
I_{k}^{i,j}=z^{i}eJ_{k}^{m,i,j}=z^{i}eJ_{p,k}^{i,j}+z^{i}eJ_{c,k}^{i,j},\
i=\mathrm{Na}^{+},\mathrm{K}^{+},\mathrm{Cl}^{-},$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}\ j=re,dy,\ k=gl,ax,$
where $J_{k}^{m,i,j}$ consists of the active $\mathrm{Na/K}$ pump source
$(J_{p,k}^{i,j})$ and passive ion channel source $(J_{c,k}^{i,j})$ for $i$th
ionic species on the axons $(k=ax)$ or glial cells membranes $(k=gl)$ at
resting state $(j=re)$ before the action potentials or during the action
potentials $(j=dy)$.
At the resting state, $\mathrm{Na/K}$ pump source $J_{p,k}^{i,re}$ and ion
channels source $J_{c,k}^{i,re}$ on the axon membrane $(k=ax)$ and glial
membrane $(k=gl)$ satisfy
$\displaystyle J_{p,k}^{Na,re}=\frac{3I_{k}^{re}}{e},\ \ \
J_{p,k}^{K,re}=-\frac{2I_{k}^{re}}{e},\ \ \ J_{p,k}^{Cl,re}=0,$ $\displaystyle
J_{c,k}^{i,re}=\frac{g_{k}^{i,re}}{z^{i}e}\left(V_{k}^{re}-E_{k}^{i,re}\right),i=\mathrm{Na^{+},K^{+},Cl^{-}},k=gl,ax$
where the membrane potential $V^{re}_{k}$ at the resting state is
$V_{k}^{re}=\phi_{k}^{re}-\phi_{ex}^{re},\ \ k=gl,ax.$
The ion channel conductance on the glial membrane is a fixed constant,
$g_{gl}^{i,re}=g_{gl}^{i},\quad i=\mathrm{Na^{+},K^{+},Cl^{-}},$
and the ion channel conductance on the axon membrane is given by the classical Hodgkin-Huxley model:
$\displaystyle
g_{ax}^{Na,re}=\bar{g}^{Na}\left(m^{re}\right)^{3}h^{re}+g_{leak}^{Na},\ \
g_{ax}^{K,re}=\bar{g}^{K}\left(n^{re}\right)^{4}+g_{leak}^{K},$ $\displaystyle
g_{ax}^{Cl,re}=g_{leak}^{Cl},$
The kinetic variables $m^{re}$, $h^{re}$ and $n^{re}$ are measures of the
resting state open probability for the voltage-gated $\mathrm{Na^{+}}$ and
$\mathrm{K^{+}}$ channel on the axon membrane. In addition, in the resting
state, the ion fluxes through the active Na/K pump $J_{p,k}^{i,re}$ and ion
channel $J_{c,k}^{i,re}$ in the glial membrane ($k=gl$) or axon membrane
($k=ax$) are balanced in magnitude
$O\left(|J_{p,k}^{i,re}|\right)=O\left(|J_{c,k}^{i,re}|\right),\
i=\mathrm{Na^{+},K^{+},Cl^{-}},\ k=gl,ax.$
During action potentials, the ion fluxes through active $\mathrm{Na/K}$ pump
are
$J_{p,k}^{Na,dy}=\frac{3\left(I_{k}^{re}+\Delta I_{k}\right)}{e},\ \ \
J_{p,k}^{K,dy}=-\frac{2\left(I_{k}^{re}+\Delta I_{k}\right)}{e},\ \ \
k=gl,ax,$
where $\Delta I_{k}$ is the variation of current through Na/K pump in the
membrane due to the ion concentration changes. The ion fluxes through ion
channels can be written as
$\displaystyle J_{c,k}^{i,dy}=$
$\displaystyle\frac{g_{k}^{i,dy}}{z^{i}e}\left(V_{k}^{re}-E_{k}^{i,re}\right)+\frac{g_{k}^{i,dy}}{z^{i}e}\left(\Delta
V_{k}-\Delta E_{k}^{i}\right),\ k=gl,ax,$
where $\Delta\mathrm{X}_{k}=\mathrm{X}_{k}^{dy}-\mathrm{X}_{k}^{re}$ is the
deviation of $\mathrm{X}$ away from the resting state value with
$\mathrm{X}=V,E,I$ on the membrane $k$. For the conductance on membranes, we
have
$\displaystyle
g_{ax}^{Na,dy}=\bar{g}^{Na}\left(m^{dy}\right)^{3}h^{dy}+g_{leak}^{Na},\ \
g_{ax}^{K,dy}=\bar{g}^{K}\left(n^{dy}\right)^{4}+g_{leak}^{K},$ $\displaystyle
g_{ax}^{Cl,dy}=g_{ax}^{Cl,re},\ \ g_{gl}^{i,dy}=g_{gl}^{i,re},\ \
i=\mathrm{Na^{+},K^{+},Cl^{-}},$
where $m^{dy}$, $h^{dy}$ and $n^{dy}$ are governed by system (17). During a
single action potential, we claim that the variation of ion’s Nernst potential
is much smaller than changes in the axon membrane potential (see Appendix B),
$\Delta E_{ax}^{i}=o\left(\Delta V^{*}_{ax}\right),\ \ i=\mathrm{Na^{+},K^{+},Cl^{-}}.$
At the same time, we estimate that
$J_{p,ax}^{i,dy}=o\left(\frac{g_{ax}^{i,dy}}{z^{i}e}\left(V_{ax}^{re}-E_{ax}^{i,re}\right)\right),\
\ i=\mathrm{Na^{+},K^{+}}.$
This is because the voltage-gated $\mathrm{Na^{+}}$ and $\mathrm{K^{+}}$
channels are open during the action potential and satisfy
$g_{ax}^{i,re}=o\left(g_{ax}^{i,dy}\right),\quad i=\mathrm{Na^{+},K^{+}}.$
In addition, the increase of the $\mathrm{Na/K}$ pump strength is limited, since the ion flux through the $\mathrm{Na/K}$ pump is bounded by its maximum currents $I_{ax,1}$ and $I_{ax,2}$ in Eq. (18).
In sum, during action potentials, we can approximate the axon transmembrane
current for each ionic species as
$I_{ax}^{i,dy}\approx
g_{ax}^{i,dy}\left(V_{ax}^{re}-E_{ax}^{i,re}\right)+g_{ax}^{i,dy}\Delta
V_{ax},\quad i=\mathrm{Na^{+},K^{+},Cl^{-}}.$ (34)
In the next paragraphs, by using Eq. (34), we estimate the cumulative $\mathrm{Na^{+}}$ and $\mathrm{K^{+}}$ fluxes through the axon membrane during a single action potential. This estimation helps us estimate the concentration changes in the stimulated extracellular region.
The governing equation of the open probability for $\mathrm{Na}^{+}$ channel
$m$-gates in the Hodgkin-Huxley model is
$\frac{dm^{dy}}{dt}=\alpha_{m}\left(1-m^{dy}\right)-\beta_{m}m^{dy},$ (35)
where
$\alpha_{m}=\frac{1}{10}\frac{25-\Delta V_{ax}}{\exp\left(\frac{25-\Delta
V_{ax}}{10}\right)-1},\ \ \ \beta_{m}=4\exp\left(-\frac{\Delta
V_{ax}}{18}\right),$ (36)
and $\Delta V_{ax}=V_{ax}^{dy}-V_{ax}^{re}$. The solution for Eq. (35) is
$\displaystyle m^{dy}(t)$
$\displaystyle=m_{0}\exp\left(-\int_{0}^{t}\alpha_{m}(s)+\beta_{m}(s)\,ds\right)$
(37)
$\displaystyle+\int_{0}^{t}\alpha_{m}(s)\exp\left(-\int_{s}^{t}\alpha_{m}(u)+\beta_{m}(u)\,du\right)ds,$
with initial value $m_{0}$.
During a single action potential period $[0,T_{ax}^{*}]$, we define two distinct time intervals based on the rapidly responding $m$-gate open probability $m^{dy}$, as shown in Fig. 6.
Figure 6: Two distinguished time intervals used in the estimation during a
single action potential. The blue line is the axon membrane potential
variation $\Delta V_{ax}\ (=V^{dy}_{ax}-V_{ax}^{re})$ during a single action potential. The dark dashed line is the linear approximation of $\Delta V_{ax}$. $t_{m1}$ and $t_{m2}$ are the time parameters in Eqs. (101) and (102).
The first period $[0,t_{m1}]$ is when the $\mathrm{Na^{+}}$ channel becomes fully open and the axon membrane potential moves from its resting value to its most positive value. The second period $[t_{m1},T_{ax}^{*}=t_{m1}+t_{m2}]$ occurs when the $\mathrm{Na^{+}}$ channel closes and the action potential recovers from its peak value to the hyperpolarization value.
In the first time interval $[0,t_{m1}]$, we estimate that $\Delta V_{ax}$
increases monotonically from $0$ to $E_{ax}^{Na,re}-V_{ax}^{re}$, where we
approximate the peak value of action potential by the Nernst potential of
$\mathrm{Na^{+}}$ in the resting state such that
$\Delta V_{ax}(t)=\frac{E_{ax}^{Na,re}-V_{ax}^{re}}{t_{m1}}t,\ \ \ t\in[0,t_{m1}],$ (38)
where $E_{ax}^{Na,re}-V_{ax}^{re}\approx 1.4\times 10^{2}$ $\mathrm{mV}$. In Eq. (38), $t_{m1}$ is an unknown variable. The initial value of Eq. (37) is chosen when $\Delta V_{ax}=0\ \mathrm{mV}$ as
$\displaystyle m_{0}=m^{re}=m^{eq}(0),$
where $m^{eq}$ is the equilibrium state of Eq. (35) depending on $\Delta
V_{ax}$,
$m^{eq}(\Delta V_{ax})=\frac{\alpha_{m}(\Delta V_{ax})}{\alpha_{m}(\Delta
V_{ax})+\beta_{m}(\Delta V_{ax})}.$ (39)
By using Eqs. (36), (37) and (38), we can obtain one equation for $t_{m1}$ as
shown in Eq. (101) (see Appendix C). Without loss of generality, we assume the
voltage-gated $\mathrm{Na^{+}}$ channel is almost fully open when $t=t_{m1}$
and $m^{dy}(t_{m1})=0.95$. The estimation from Eq. (101) gives $t_{m1}\approx
0.67\ \mathrm{ms}$.
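The self-consistent estimate of $t_{m1}$ can be reproduced numerically: for a candidate ramp duration, integrate Eq. (35) with the linear $\Delta V_{ax}$ of Eq. (38), and bisect on the duration until $m^{dy}(t_{m1})=0.95$. The sketch below uses the standard Hodgkin-Huxley rates of Eq. (36) (times in $\mathrm{ms}$, voltages in $\mathrm{mV}$); it is a minimal illustration of the procedure with a simple explicit Euler integrator, not the authors' solver, and its result should only be read as order-of-magnitude agreement with the quoted $t_{m1}\approx 0.67\ \mathrm{ms}$.

```python
import math

def alpha_m(dv):
    """HH opening rate, Eq. (36); dv in mV, rate in 1/ms."""
    x = 25.0 - dv
    if abs(x) < 1e-9:          # removable singularity at dV = 25 mV
        return 1.0
    return 0.1 * x / (math.exp(x / 10.0) - 1.0)

def beta_m(dv):
    """HH closing rate, Eq. (36)."""
    return 4.0 * math.exp(-dv / 18.0)

DV_PEAK = 140.0                # E_ax^{Na,re} - V_ax^{re} ~ 1.4e2 mV (from text)

def m_at_end(t_ramp, n=2000):
    """Integrate dm/dt = alpha(1-m) - beta*m over a linear ramp 0 -> DV_PEAK."""
    m = alpha_m(0.0) / (alpha_m(0.0) + beta_m(0.0))   # resting m, Eq. (39)
    dt = t_ramp / n
    for k in range(n):
        dv = DV_PEAK * (k + 0.5) / n                  # ramp value at midpoint
        a, b = alpha_m(dv), beta_m(dv)
        m += dt * (a * (1.0 - m) - b * m)             # explicit Euler step
    return m

# Bisect for the ramp duration at which m reaches 0.95 at the end of the ramp.
lo, hi = 0.01, 10.0            # bracketing durations in ms
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if m_at_end(mid) < 0.95:
        lo = mid
    else:
        hi = mid
t_m1 = 0.5 * (lo + hi)
print(f"t_m1 ~ {t_m1:.2f} ms")
```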
In the second time interval, we use the time-homogeneity of Eq. (35) and shift the interval $[t_{m1},T_{ax}^{*}=t_{m1}+t_{m2}]$ to $[0,t_{m2}]$ to simplify the notation. We assume that $\Delta V_{ax}$ decreases monotonically from $E^{Na,re}_{ax}-V_{ax}^{re}$ to $E^{K,re}_{ax}-V_{ax}^{re}$ in the second time period, such that
$\Delta
V_{ax}(t)=E_{ax}^{Na,re}-V_{ax}^{re}-\frac{E_{ax}^{Na,re}-E_{ax}^{K,re}}{t_{m2}}t,\
\ t\in[0,t_{m2}],$ (40)
where $E_{ax}^{Na,re}-E_{ax}^{K,re}\approx 1.5\times 10^{2}\ \mathrm{mV}$. We take the initial value $m_{0}$ of Eq. (37) in the second time period to be
$m_{0}=m^{dy}(t_{m1}).$
The $\mathrm{Na^{+}}$ channel is in a nearly closed state when $\Delta V_{ax}$ approaches $E_{ax}^{K,re}-V_{ax}^{re}$, and we estimate $m^{dy}(t_{m2})=0.1$. In a similar way, by using Eqs. (36), (37) and (40), we obtain another equation for $t_{m2}$, shown in Eq. (102) (see Appendix C). Based on Eq. (102), we get $t_{m2}\approx 3\ \mathrm{ms}$.
In sum, based on the estimated $t_{m1}$ and $t_{m2}$ above, we obtain the approximations for $\Delta V_{ax}$ and $h$ during a single action potential period $(t\in[0,T_{ax}^{*}=t_{m1}+t_{m2}])$ as
$\displaystyle\Delta V_{ax}=\left\\{\begin{aligned}
&\frac{E_{ax}^{Na,re}-V_{ax}^{re}}{t_{m1}}t,&&t\in[0,t_{m1}],\\\
&E_{ax}^{Na,re}-V_{ax}^{re}-\frac{E_{ax}^{Na,re}-E_{ax}^{K,re}}{t_{m2}}(t-t_{m1}),&&t\in[t_{m1},T_{ax}^{*}].\end{aligned}\right.$
and
$\displaystyle h^{dy}(t)$
$\displaystyle=h_{0}\exp\left(-\int_{0}^{t}\alpha_{h}(s)+\beta_{h}(s)ds\right)$
$\displaystyle+\int_{0}^{t}\alpha_{h}(s)\exp\left(-\int_{s}^{t}\alpha_{h}(u)+\beta_{h}(u)du\right)ds,$
where
$\displaystyle\alpha_{h}=\frac{7}{100}\exp\left(-\frac{\Delta
V_{ax}}{20}\right),\ \ \beta_{h}=\frac{1}{\exp\left(\frac{30-\Delta
V_{ax}}{10}\right)+1},$
with the initial value $h_{0}$
$h_{0}=h^{re}(0)=\frac{\alpha_{h}(0)}{\alpha_{h}(0)+\beta_{h}(0)}.$
By using Eq. (34), we estimate the cumulative $\mathrm{Na^{+}}$ flux through the axon membrane during a single action potential $[0,T_{ax}^{*}]$ by
$\displaystyle\int_{0}^{T_{ax}^{*}}J_{ax}^{m,Na,dy}dt$
$\displaystyle\approx\int_{0}^{T_{ax}^{*}}\frac{\bar{g}^{Na}h^{dy}(m^{dy})^{3}}{z^{Na}e}\left(V_{ax}^{re}-E_{ax}^{Na,re}\right)+\frac{\bar{g}^{Na}h^{dy}(m^{dy})^{3}}{z^{Na}e}\Delta
V_{ax}dt$ $\displaystyle\approx-2\times 10^{-9}\ \mathrm{mol/m^{2}}.$ (41)
In the next step, we estimate the cumulative $\mathrm{Cl^{-}}$ flux through
the axon membrane during a single action potential $[0,T_{ax}^{*}]$ by
$\int_{0}^{T_{ax}^{*}}J_{ax}^{m,Cl,dy}dt\approx\int_{0}^{T_{ax}^{*}}\frac{g_{ax}^{Cl}\Delta
V_{ax}}{z^{Cl}e}dt\approx-3.7\times 10^{-10}\ \mathrm{mol/m^{2}}.$ (42)
In Eq. (42), we use
$I_{ax}^{Cl,dy}=g_{ax}^{Cl}\left(V_{ax}^{re}-E_{ax}^{Cl,re}\right)+g_{ax}^{Cl}\left(\Delta V_{ax}-\Delta E_{ax}^{Cl}\right)\approx g_{ax}^{Cl}\Delta V_{ax},$
since both $V_{ax}^{re}-E_{ax}^{Cl,re}$ and $\Delta E_{ax}^{Cl}$ are $o\left(\Delta V_{ax}\right)$. Next, we estimate the cumulative $\mathrm{K}^{+}$ flux through the axon membrane during a single action potential.
The governing equation of $\phi_{ax}$ yields
$\sum_{i}z^{i}e\frac{\partial}{\partial
z}\left(\eta_{ax}j_{ax}^{i}\right)=-\mathcal{M}_{ax}\left(I_{ax}^{Na,dy}+I_{ax}^{K,dy}+I_{ax}^{Cl,dy}\right).$
(43)
At every location of the stimulated region, the duration of a single action potential is $T_{ax}^{*}$. We introduce $T_{all}^{*}$ for the electrical signal propagation time, during which the signal propagates from one end of the axon (near the optic nerve head) to the other end (the far-from-eye side of the optic nerve), as shown in Fig. 3. By integrating the right-hand side of Eq. (43) over space $[0,L]$ and time $[0,T_{all}^{*}]$, we have
$\displaystyle-\mathcal{M}_{ax}$
$\displaystyle\int_{0}^{T_{all}^{*}}\int_{0}^{L}I_{ax}^{Na,dy}+I_{ax}^{K,dy}+I_{ax}^{Cl,dy}dzdt$
(44)
$\displaystyle\approx-\mathcal{M}_{ax}L\int_{0}^{T_{ax}^{*}}I_{ax}^{Na,dy}+I_{ax}^{K,dy}+I_{ax}^{Cl,dy}dt,$
where we use the propagation property of the action potential along the $z$ direction, and only the axon firing period is taken into consideration. By integrating the left-hand side of Eq. (43), we have
$\int_{0}^{T_{all}^{*}}\int_{0}^{L}\sum_{i}z^{i}e\frac{\partial}{\partial
z}\left(\eta_{ax}j_{ax}^{i}\right)dzdt=O\left(T_{all}^{*}e\eta_{ax}j_{ax}^{bd}\right).$
(45)
We assume that the characteristic time scale of $T_{all}^{*}$ is $O(10^{-3})\ \mathrm{s}$. The scale of the ion flux $j_{ax}^{bd}$ at the left and right boundaries $(z=0,L)$ is dominated by the diffusion term
$j_{ax}^{bd}=O\left(D_{ax}^{*}\frac{\Delta c^{*}_{ax}}{z^{*}}\right),$
since the boundary conditions are $\frac{\partial\phi_{ax}}{\partial z}\big{|}_{z=0,L}=0$ and $u_{ax}(0)=u_{ax}(L)=0$. Here $\Delta c^{*}_{ax}$ is the characteristic difference between the ion concentration at the boundary and the ion concentration inside the axon after a single action potential. Based on the $\mathrm{Na^{+}}$ flux estimation in Eq. (41), we estimate $\Delta c^{*}_{ax}=O(10^{-1})$. From Eqs. (41) and (42), we get the following ordering of cumulative fluxes through the axon membrane during a single action potential time interval:
$\displaystyle O\left(T_{all}^{*}\eta_{ax}j_{ax}^{bd}\right)$
$\displaystyle\ll
O\left(\mathcal{M}_{ax}L\bigg{|}\int_{0}^{T_{ax}^{*}}J_{ax}^{m,Cl,dy}dt\bigg{|}\right)$
(46) $\displaystyle\ll
O\left(\mathcal{M}_{ax}L\bigg{|}\int_{0}^{T_{ax}^{*}}J_{ax}^{m,Na,dy}dt\bigg{|}\right).$
In other words, Eqs. (44), (45) and (46) yield
$O\left(\bigg{|}\int_{0}^{T_{ax}^{*}}J_{ax}^{m,K,dy}dt\bigg{|}\right)=O\left(\bigg{|}\int_{0}^{T_{ax}^{*}}J_{ax}^{m,Na,dy}dt\bigg{|}\right).$
(47)
Based on Eq. (41), the cumulative axon transmembrane $\mathrm{K^{+}}$ flux during a single action potential should be
$\int_{0}^{T_{ax}^{*}}J_{ax}^{m,K,dy}dt\approx 2\times 10^{-9}\ \mathrm{mol/m^{2}},$ (48)
where $[0,T_{ax}^{*}]$ is the time interval enclosing a single action potential.
###### Remark 4.1.
Eq. (47) shows that for a single action potential, the leading order of the
cumulative $\mathrm{K}^{+}$ flux out of the axon to the extracellular space
equals the leading order of the cumulative $\mathrm{Na}^{+}$ flux into the
axon from the extracellular space. This estimation is consistent with
observations in the literature [49, 30, 12].
Next, we estimate the concentration variation in the stimulated extracellular region due to a single action potential. The time scale $t^{*}$ of a single action potential is in milliseconds, and during the action potential the scale of $g^{*}_{ax}$ is $\bar{g}^{Na}$. In Appendix B, the scale of the axon membrane potential $\Delta V^{*}_{ax}$ satisfies
$\frac{k_{B}T}{\Delta V^{*}_{ax}e}=o(1).$
Therefore, in Eq. (26) by taking
$\delta^{i}_{10}=\frac{t^{*}\mathcal{M}_{ax}\bar{g}^{Na}\Delta
V^{*}_{ax}}{c_{ax}^{i,*}e}$ , we have
$\left\\{\frac{\delta^{i}_{13}\delta^{i}_{8}}{\delta^{i}_{12}\delta^{i}_{10}},\frac{\delta^{i}_{14}}{\delta^{i}_{12}\delta^{i}_{10}}\right\\}\subset
o(1).$
Hence, the cumulative ion fluxes through the axon membrane are the main source of the ion concentration change in the stimulated extracellular region,
$\eta_{ex}\Delta
c_{ex}^{i}=\mathcal{M}_{ax}\int_{0}^{T_{ax}^{*}}J_{ax}^{m,i,dy}dt,\ \
i=\mathrm{Na^{+},K^{+}},$ (49)
where $\Delta c_{ex}^{i}$ is the $i$th ion's concentration variation from its resting state, and $\eta_{ex}$ is unchanged by Eqs. (5a) and (5b) on the time scale $t^{*}=10^{-3}\ \mathrm{s}$. Based on Eqs. (47) and (49), the absolute variation of the $\mathrm{Na^{+}}$ and $\mathrm{K^{+}}$ concentrations in the stimulated extracellular region due to action potentials can be written as
$\Delta
c_{sti}=O\left(\frac{\mathcal{M}_{ax}}{\eta_{ex}}\bigg{|}\int_{0}^{T_{ax}^{*}}J_{ax}^{m,i,dy}dt\bigg{|}\right),\
i=\mathrm{Na^{+},K^{+}}.$ (50)
In the following discussion, we use $\Delta c_{sti}$ to denote the concentration change in the stimulated extracellular space after a single action potential,
$\Delta c_{sti}=0.12\ \mathrm{mM}.$ (51)
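Eq. (50) turns the cumulative flux estimate of Eq. (48) into a concentration change once the membrane-area density $\mathcal{M}_{ax}$ and the volume fraction $\eta_{ex}$ are specified. In the sketch below, the values of $\mathcal{M}_{ax}$ and $\eta_{ex}$ are assumptions chosen for illustration (so as to reproduce the $0.12\ \mathrm{mM}$ figure of Eq. (51)); they are not taken from the paper's parameter table.

```python
# Concentration change in the stimulated ECS, Eq. (50):
#   Delta c_sti = (M_ax / eta_ex) * |int_0^{T*} J_ax^{m,i,dy} dt|
cum_flux = 2e-9        # mol/m^2, cumulative K+ flux per action potential, Eq. (48)
M_ax = 3.6e6           # 1/m, axon membrane area per unit tissue volume (assumed)
eta_ex = 0.06          # extracellular volume fraction (assumed)

dc_sti_mol_per_m3 = (M_ax / eta_ex) * cum_flux
dc_sti_mM = dc_sti_mol_per_m3      # 1 mol/m^3 == 1 mM
print(f"Delta c_sti ~ {dc_sti_mM:.2f} mM")  # ~0.12 mM, matching Eq. (51)
```

The estimate is sensitive to the ratio $\mathcal{M}_{ax}/\eta_{ex}$: a tiny per-spike flux of order $10^{-9}\ \mathrm{mol/m^{2}}$ becomes a measurable concentration change because the extracellular volume fraction is small.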
### 4.2 Estimation of glial transmembrane potassium flux
In this section, we estimate the glial transmembrane current when the
$\mathrm{K}^{+}$ and the $\mathrm{Na}^{+}$ concentration vary by $\Delta
c_{sti}$ in the stimulated extracellular region. We also find that the
electric field $\phi_{gl}$ responds immediately to the glial $\mathrm{K}^{+}$
Nernst potential changes. In the stimulated region, the variation of the extracellular electric potential $\Delta\phi_{ex}$ is small compared to the variation of the glial electric potential $\Delta\phi_{gl}$.
The dominant current through the glial membrane in the stimulated region is
through the passive $\mathrm{K}^{+}$ channel, rather than the
$\mathrm{Na}^{+}$ channel or the $\mathrm{Na/K}$ pump. At the same time, in
the non-stimulated extracellular region, almost the same amount of
$\mathrm{K}^{+}$ moves from the glial compartment to extracellular space. In
other words, both the glial cells and extracellular space in the non-
stimulated region participate in the spatial buffering process to help
potassium clearance [60, 13].
In the stimulated region, the Nernst potential for $\mathrm{K^{+}}$ across the
glial membrane changes because of the additional potassium $\Delta c_{ex}^{K}$
in the extracellular space,
$\Delta E_{gl}^{K}=\frac{k_{B}T}{z^{K}e}\left(\log\left(1+\frac{\Delta
c_{ex}^{K}}{c_{ex}^{K,re}}\right)-\log\left(1+\frac{\Delta
c_{gl}^{K}}{c_{gl}^{K,re}}\right)\right),$ (52)
where $\Delta c_{l}^{K},l=gl,ex$ are the variations of concentrations in the
$l$ compartment. The variation of $\mathrm{K}^{+}$ concentration in the glial
compartment $\Delta c_{gl}^{K}$ is a result of the $\Delta c_{ex}^{K}$
produced by the glial transmembrane $\mathrm{K}^{+}$ flux. Recall that the volume fraction $(\eta_{gl})$ of the glial compartment is much larger than that of the extracellular space $(\eta_{ex})$. At the same time, based on Eq. (50) and the $\mathrm{K}^{+}$ concentration at resting state, we get
$\Delta c_{ex}^{K}=o\left(c_{ex}^{K,re}\right),\quad\frac{\Delta
c_{gl}^{K}}{c_{gl}^{K,re}}=o\left(\frac{\Delta
c_{ex}^{K}}{c_{ex}^{K,re}}\right).$
Therefore, $\Delta E_{gl}^{K}$ in Eq. (52) can be approximated by its Taylor
expansion,
$\Delta E_{gl}^{K}\approx\frac{k_{B}T}{z^{K}e}\frac{\Delta
c_{ex}^{K}}{c_{ex}^{K,re}}.$ (53)
The variation of $\mathrm{K}^{+}$ Nernst potential in the stimulated region
produces the changes of glial membrane potential $\Delta V_{gl}$ and glial
compartment electric potential $\Delta\phi_{gl}$. We move on now to estimate
the variations of electric potentials in the stimulated extracellular and
glial regions.
From the governing equation for $\phi_{ex}$,
$\displaystyle\sum_{i}z^{i}e\nabla\cdot\left(\eta_{ex}\mathbf{j}_{ex}^{i}\right)$
$\displaystyle=\sum_{i}z^{i}e\mathcal{M}_{gl}\left(J_{p,gl}^{i}+J_{c,gl}^{i}\right)$
(54)
$\displaystyle+\sum_{i}z^{i}e\mathcal{M}_{ax}\left(J_{p,ax}^{i}+J_{c,ax}^{i}\right),$
where
$\mathbf{j}_{ex}^{i}=c_{ex}^{i}\mathbf{u}_{ex}-D_{ex}^{i}\tau_{ex}\left(\nabla
c_{ex}^{i}+\frac{z^{i}e}{k_{B}T}c_{ex}^{i}\nabla\phi_{ex}\right).$
We claim that after the axon stops firing, the major current is through glial
membrane $\mathrm{K^{+}}$ channels (see Appendix D). Therefore, the right-hand
side of Eq. (54) can be approximated as
$\displaystyle\sum_{i}z^{i}e\mathcal{M}_{gl}\left(J_{p,gl}^{i}+J_{c,gl}^{i}\right)$
$\displaystyle+\sum_{i}z^{i}e\mathcal{M}_{ax}\left(J_{p,ax}^{i}+J_{c,ax}^{i}\right)$
$\displaystyle\approx$ $\displaystyle\mathcal{M}_{gl}g_{gl}^{K}\left(\Delta
V_{gl}-\Delta E_{gl}^{K}\right).$ (55)
Next, we integrate Eq. (54) over the stimulated region
$V_{S}=\\{(r,z,\theta)|r\in[0,r_{sti}],\ z\in[0,L],\ \theta\in[0,2\pi]\\}$,
through which the action potential propagates as shown in Fig. 5. By Eq.
(4.2), we have the approximation of the total current
$\int_{V_{S}}\mathcal{M}_{gl}g_{gl}^{K}\left(\Delta V_{gl}-\Delta
E_{gl}^{K}\right)dv\approx\pi
r_{sti}^{2}L\mathcal{M}_{gl}g_{gl}^{K}\left(\Delta V_{gl}-\Delta
E_{gl}^{K}\right).$ (56)
In the left-hand side of Eq. (54), by the charge neutrality assumption in Eq. (2), we naturally have
$\sum_{i}z^{i}ec_{ex}^{i}\mathbf{u}_{ex}=0.$
Based on Eqs. (42), (47) and (50), we know that after a single action
potential the leading-order ion concentration variations in the stimulated
extracellular region are as follows
$\Delta c_{ex}^{Na}=-\Delta c_{sti},\ \ \Delta c_{ex}^{K}=\Delta c_{sti},\ \
\Delta c_{ex}^{Cl}=o\left(\Delta c_{sti}\right).$ (57)
Using Eqs. (57) and (33), the diffusion term in the left-hand side of Eq. (54) can
be approximated as
$-\int_{V_{S}}\sum_{i}z^{i}e\nabla\cdot\left(\eta_{ex}D_{ex}^{i}\tau_{ex}\nabla
c_{ex}^{i}\right)dv\approx 2\pi
r_{sti}Le\eta_{ex}D_{ex}^{\mathrm{diff}}\tau_{ex}\frac{\Delta
c_{sti}}{r^{*}},$ (58)
where $D_{ex}^{\mathrm{diff}}=D_{ex}^{K}-D_{ex}^{Na}$. In Eq. (58), we claim
that the currents through the left $(z=0)$ and right $(z=L)$ boundaries of the
stimulated region $V_{S}$ are much smaller than those through the radial
transition region $S_{T}$. This is because (1) the ion concentration
variations are in the radial direction (between the stimulated and non-stimulated
regions) and (2) the length scales in the $z$ and $r$ directions are
different. Therefore, the radial transition region
$S_{T}=\\{(r,z,\theta)|r=r_{sti},z\in[0,L],\theta\in[0,2\pi]\\}$ has a much
larger area than the left and right boundaries of $V_{S}$.
Similarly, integrating the electric drift term in the left-hand side of Eq.
(54) yields the approximation
$\displaystyle-\int_{V_{S}}\sum_{i}z^{i}e\nabla\cdot\left(\eta_{ex}D_{ex}^{i}\tau_{ex}\frac{z^{i}e}{k_{B}T}c_{ex}^{i}\nabla\phi_{ex}\right)dv$
$\displaystyle\approx 2\pi
r_{sti}L\eta_{ex}\sigma_{ex}\frac{\Delta\phi_{ex}}{r^{*}},$ (59)
where
$\sigma_{ex}=\frac{\tau_{ex}e^{2}}{k_{B}T}\sum_{i}(z^{i})^{2}D_{ex}^{i}c_{ex}^{i}$.
From Eqs. (56), (58) and (4.2), we get
$\frac{2}{r_{sti}}\left(\frac{\eta_{ex}\tau_{ex}eD_{ex}^{\mathrm{diff}}}{\mathcal{M}_{gl}}\frac{\Delta
c_{sti}}{r^{*}}+\frac{\eta_{ex}\sigma_{ex}}{\mathcal{M}_{gl}}\frac{\Delta\phi_{ex}}{r^{*}}\right)\approx
g^{K}_{gl}\left(\Delta V_{gl}-\Delta E_{gl}^{K}\right).$ (60)
At the same time, from the governing equation of $\phi_{gl}$
$\sum_{i}z^{i}e\nabla\cdot\left(\eta_{gl}\mathbf{j}_{gl}^{i}\right)=-\sum_{i}z^{i}e\mathcal{M}_{gl}\left(J_{p,gl}^{i}+J_{c,gl}^{i}\right),$
(61)
where
$\mathbf{j}_{gl}^{i}=c_{gl}^{i}\mathbf{u}_{gl}-D_{gl}^{i}\tau_{gl}\left(\nabla
c_{gl}^{i}+\frac{z^{i}e}{k_{B}T}c_{gl}^{i}\nabla\phi_{gl}\right),$
we obtain the following estimation in a similar way
$-\frac{2}{r_{sti}}\frac{\eta_{gl}\sigma_{gl}}{\mathcal{M}_{gl}}\frac{\Delta\phi_{gl}}{r^{*}}\approx
g_{gl}^{K}\left(\Delta V_{gl}-\Delta E_{gl}^{K}\right),$ (62)
where
$\sigma_{gl}=\frac{\tau_{gl}e^{2}}{k_{B}T}\sum_{i}(z^{i})^{2}D_{gl}^{i}c_{gl}^{i}$.
We neglect the diffusion and convection terms in Eq. (61) because these terms
require a much longer time to respond to the extracellular concentration change.
Based on Eq. (60) and Eq. (62), we have
$\Delta\phi_{ex}=-\frac{\eta_{gl}\sigma_{gl}}{\eta_{ex}\sigma_{ex}}\Delta\phi_{gl}-\frac{\tau_{ex}eD_{ex}^{\mathrm{diff}}}{\sigma_{ex}}\Delta
c_{sti}.$ (63)
In Appendix E, by matching the orders on both sides of Eq. (62), we claim that
$\Delta\phi_{ex}=o\left(\Delta\phi_{gl}\right)$ in the stimulated region and
therefore,
$\Delta V_{gl}=\Delta\phi_{gl}-\Delta\phi_{ex}=O(\Delta\phi_{gl}).$ (64)
In the next step, we approximate the $\mathrm{K}^{+}$ current through the
leaking $\mathrm{K}^{+}$ channel on the glial membrane. Based on Eqs. (62) and
(64), we get
$g_{gl}^{K}\left(\Delta\phi_{gl}-\Delta E_{gl}^{K}\right)\approx
g_{gl}^{K}\left(\Delta V_{gl}-\Delta
E_{gl}^{K}\right)\approx-\frac{2\eta_{gl}\sigma_{gl}}{r_{sti}\mathcal{M}_{gl}}\frac{\Delta\phi_{gl}}{r^{*}}.$
(65)
Hence, by Eq. (65), we obtain the relation between $\Delta E_{gl}^{K}$ and
$\Delta\phi_{gl}$ as
$\Delta E_{gl}^{K}\approx\left(1+h_{\epsilon}\right)\Delta\phi_{gl},$ (66)
where
$h_{\epsilon}=\frac{2\eta_{gl}\sigma_{gl}}{r_{sti}\mathcal{M}_{gl}r^{*}g_{gl}^{K}}.$
Based on Eq. (65), it gives us the following approximation
$g_{gl}^{K}\left(\Delta V_{gl}-\Delta
E_{gl}^{K}\right)\approx-\frac{g_{gl}^{K}h_{\epsilon}}{1+h_{\epsilon}}\Delta
E_{gl}^{K}.$ (67)
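The chain of approximations in Eqs. (65)-(67) reduces to a one-line algebraic identity: if $\Delta E_{gl}^{K}=(1+h_{\epsilon})\Delta\phi_{gl}$, then $g_{gl}^{K}(\Delta\phi_{gl}-\Delta E_{gl}^{K})=-g_{gl}^{K}h_{\epsilon}\Delta\phi_{gl}=-\frac{g_{gl}^{K}h_{\epsilon}}{1+h_{\epsilon}}\Delta E_{gl}^{K}$. A minimal numeric sketch (the values of $g_{gl}^{K}$, $h_{\epsilon}$, and $\Delta\phi_{gl}$ below are arbitrary; only the identity matters):

```python
# Arbitrary illustrative values; only the algebraic identity is being checked.
g, h_eps, dphi_gl = 2.0, 0.3, 1.7

dE = (1.0 + h_eps) * dphi_gl              # Eq. (66)
lhs = g * (dphi_gl - dE)                  # current form used in Eq. (65)
rhs = -g * h_eps / (1.0 + h_eps) * dE     # approximation in Eq. (67)
assert abs(lhs - rhs) < 1e-12             # identical up to round-off
```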
Furthermore, from Eqs. (63), (66) and (53), we get the approximation
$\displaystyle\Delta\phi_{ex}\approx-\frac{\eta_{gl}\sigma_{gl}k_{B}T}{\eta_{ex}\sigma_{ex}\left(1+h_{\epsilon}\right)z^{K}e}\frac{\Delta
c_{ex}^{K}}{c_{ex}^{K,re}}.$ (68)
The variations of the electric potential $\Delta\phi_{gl}$ in both the stimulated
and non-stimulated regions are produced without delay by $\Delta E_{gl}^{K}$ in the
stimulated region, as described by the governing equation of $\phi_{gl}$ in
Eq. (27). The $\mathrm{K}^{+}$ leaking current is the major current through
the glial membrane in the non-stimulated region, as it is in the stimulated
region, because the current through an ion channel depends on the voltage
$\phi_{gl}$ and the $\mathrm{K}^{+}$ conductance is the dominant ion conductance in
the glial membrane:
$g_{gl}^{i}=o\left(g_{gl}^{K}\right),\quad i=\mathrm{Na^{+},Cl^{-}}.$
In the next steps, we use the superscript ‘$S$’ for variables in the
stimulated region and the superscript ‘$NS$’ for those in the non-stimulated
region. For the glial transmembrane currents, we have the following
approximations
$\displaystyle\sum_{i}z^{i}e\mathcal{M}_{gl}\left(J_{p,gl}^{S,i}+J_{c,gl}^{S,i}\right)$
$\displaystyle\approx\mathcal{M}_{gl}g_{gl}^{K}\left(\Delta V_{gl}^{S}-\Delta
E_{gl}^{S,K}\right),$
$\displaystyle\sum_{i}z^{i}e\mathcal{M}_{gl}\left(J_{p,gl}^{NS,i}+J_{c,gl}^{NS,i}\right)$
$\displaystyle\approx\mathcal{M}_{gl}g_{gl}^{K}\left(\Delta V_{gl}^{NS}-\Delta
E_{gl}^{NS,K}\right).$
Integrating the $\phi_{gl}$ equation (27) over the stimulated region $V_{S}$
and the non-stimulated region $V_{NS}$, respectively, yields
$\left\\{\begin{aligned}
&\int_{V_{S}}\sum_{i}z^{i}\mathrm{e}\nabla\cdot\left(\eta_{gl}^{S}\mathbf{j}_{gl}^{S,i}\right)dv\approx\int_{V_{S}}\mathcal{M}_{gl}g_{gl}^{K}\left(\Delta
V_{gl}^{S}-\Delta E_{gl}^{S,K}\right),\\\
&\int_{V_{NS}}\sum_{i}z^{i}\mathrm{e}\nabla\cdot\left(\eta_{gl}^{NS}\mathbf{j}_{gl}^{NS,i}\right)dv\approx\int_{V_{NS}}\mathcal{M}_{gl}g_{gl}^{K}\left(\Delta
V_{gl}^{NS}-\Delta E_{gl}^{NS,K}\right).\end{aligned}\right.$ (69)
Most of the current between region $V_{S}$ and region $V_{NS}$ goes through
the radial transition region $S_{T}$. By Eq. (69) and the boundary conditions for
$\phi_{gl}$, we obtain
$\displaystyle\int_{V_{S}}\mathcal{M}_{gl}$ $\displaystyle
g_{gl}^{K}\left(\Delta V_{gl}^{S}-\Delta E_{gl}^{S,K}\right)dv$ (70)
$\displaystyle\approx-\int_{V_{NS}}\mathcal{M}_{gl}g_{gl}^{K}\left(\Delta
V_{gl}^{NS}-\Delta E_{gl}^{NS,K}\right)dv.$
Based on Eq. (70), the average $\mathrm{K^{+}}$ flux through the glial
membrane in the non-stimulated region leaks out to the extracellular space with the
approximate strength
$\frac{g_{gl}^{K}}{z^{K}e}\left(\Delta V_{gl}^{NS}-\Delta
E_{gl}^{NS,K}\right)=-\frac{r_{sti}^{2}}{r^{*2}-r_{sti}^{2}}\frac{g_{gl}^{K}}{z^{K}e}\left(\Delta
V_{gl}^{S}-\Delta E_{gl}^{S,K}\right).$ (71)
In summary, Eqs. (70) and (71) show how the glial compartment in the non-stimulated
region serves as a spatial buffer and helps clear potassium from the
extracellular space outside the stimulated axons [10].
###### Remark 4.2.
The glial compartment serves as an important and quick potassium transport
device to remove accumulated potassium during the axon firing as shown in Fig.
7.
In the stimulated region, the change in the potassium Nernst potential
makes the glial membrane potential more positive and moves potassium through
ion channels into the glial compartment. In the non-stimulated region, since
the glia form an electrical syncytium, the glial membrane potential increases
simultaneously, as it does in the stimulated region. However, the glial potassium
Nernst potential in the non-stimulated region is not very different from that
in the resting state. These potentials produce an outward potassium flux from
the glial compartment in the non-stimulated region.
Interacting regions of this sort depend on the spatial variables and on the
properties of the glia as a syncytium. It is difficult to capture these
effects in models that do not include space as an independent variable. Even
if such compartment models capture these effects correctly in one set of
conditions (because the parameters are chosen to make the description correct),
they are unlikely to describe the effects of changes in conditions,
including the membrane potential, consistently.
### 4.3 The water flow: circulation and estimation
In this section, we discuss water circulation between the stimulated and the
non-stimulated regions. As extra $\mathrm{K^{+}}$ is gradually cleared, it
produces an osmotic pressure difference between the intra- and inter- domain,
i.e., between the inside the glial compartment and the extracellular space.
This osmotic pressure variation drives transmembrane water flow and water
circulation in the optic nerve.
Now we consider a train of stimulus stimulated with the frequency $f_{m}$ in
the axon region $(r<r_{sti},z=z_{0})$ during time $[0,T_{sti}]$. The
estimation depends on the $\mathrm{K}^{+}$ and $\mathrm{Na}^{+}$ concentration
variations in the extracellular space and charge neutrality condition. The
clearance of extra amount of $\mathrm{K^{+}}$ $(\Delta c_{ex}^{K})$ in the
stimulated extracellular space mostly goes through glial membrane and
extracellular pathway (see Appendix F),
$\frac{d\left(\eta_{ex}\Delta
c_{ex}^{K}\right)}{dt}=-\left(\lambda_{gl}^{m,K}+\lambda_{ex}^{K}\right)\Delta
c_{ex}^{K},$ (72)
where
$\lambda_{gl}^{m,K}=\frac{\mathcal{M}_{gl}g_{gl}^{K}h_{\epsilon}k_{B}T}{z^{K}\left(1+h_{\epsilon}\right)e^{2}c_{ex}^{K,re}},\
\ \ \lambda_{ex}^{K}=\frac{2\eta_{ex}D_{ex}^{K}\tau_{ex}}{r_{sti}r^{*}}.$
The coefficient $\lambda_{gl}^{m,K}$ represents the effect of the glial transmembrane
$\mathrm{K}^{+}$ flux, and $\lambda_{ex}^{K}$ describes the spatial effect
of the extracellular $\mathrm{K}^{+}$ transport between the stimulated
and non-stimulated regions. This spatial communication is not negligible since
$\lambda_{ex}^{K}$ is of comparable magnitude to $\lambda_{gl}^{m,K}$. The
initial value of Eq. (72) starts with the first stimulus on the axon as
$\Delta c_{ex}^{K}(0)=\Delta c_{sti},$
and at the beginning of each period $T$, there is an additional $\Delta
c_{sti}$ amount of $\mathrm{K^{+}}$ accumulated in the extracellular space due
to the axon firing
$\Delta c_{ex}^{K}(iT)=\Delta c_{ex}^{K}(iT)+\Delta c_{sti},\ \ i=1\dots n-1,$
where $n\left(=T_{sti}f_{m}\right)$ is the total number of periods.
In the above, we view the extracellular $\mathrm{K^{+}}$ concentration change
due to axon firing as a source term $\Delta c_{sti}$.
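The clearance dynamics of Eq. (72) with its periodic source can be sketched with a forward-Euler integration. This is a sketch only; $\eta_{ex}$, the lumped rate constant, and $\Delta c_{sti}$ below are placeholders, not the fitted parameters of the model.

```python
def k_clearance(dc_sti, lam_total, eta_ex, T, n, t_end, dt=1e-4):
    """Forward-Euler integration of Eq. (72),
    eta_ex * d(dc)/dt = -lam_total * dc,
    with an extra dc_sti added at the start of each of the n periods."""
    rate = lam_total / eta_ex
    t, dc = 0.0, dc_sti              # first stimulus at t = 0
    pulses_left, next_pulse = n - 1, T
    history = [(t, dc)]
    while t < t_end:
        dc -= rate * dc * dt
        t += dt
        if pulses_left > 0 and t >= next_pulse:
            dc += dc_sti             # K+ added by the next action potential
            next_pulse += T
            pulses_left -= 1
        history.append((t, dc))
    return history

# Illustrative run: 10 stimuli at 50 Hz (T = 0.02 s), then free decay.
traj = k_clearance(dc_sti=0.5, lam_total=2.0, eta_ex=0.1, T=0.02,
                   n=10, t_end=1.0)
```

Between pulses the decay is a simple exponential with rate $(\lambda_{gl}^{m,K}+\lambda_{ex}^{K})/\eta_{ex}$; the pulse train builds the concentration up toward a quasi-steady sawtooth before the final decay.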
###### Remark 4.3.
The concentration in the stimulated extracellular region changes rapidly
because of the transmembrane action potentials, as does the extracellular
electric potential $\phi_{ex}$. The fluid circulation is the
cumulative result of the above $\Delta O_{ex}$. The fluid flow from the non-stimulated
region to the stimulated region is dominated by the trans-glial-membrane
flow. Thus, the convection in the extracellular space reduces (i.e.,
flattens) the variation of the osmotic pressure.
###### Remark 4.4.
These effects make our spatially inhomogeneous model quite different from
existing ODE models [49, 43], since those ODE models either take the
extracellular ion concentration as constant or do not consider the ion
exchange between the extracellular space and other compartments at all. In a
recent work, Marte J. et al. [55] introduced a compartment model similar to Eq.
(72) by considering the ion fluxes between the neuron, glia and extracellular regions in
both the dendrite and soma regions. It is always possible to take a field
theory and approximate its $x$ dependence with compartments. But it is quite
difficult to know how to describe the parameter dependence and compartment
inter-dependence in such models consistently, and it is probably impossible to
describe them uniquely.
These issues are also considered in the Discussion Section.
Field theories show the interdependence as outputs of the analysis. Because
field models are consistent and their solutions are unique, the parameter
dependence and compartmental interdependence are unique.
In compartment models, different assumptions are possible and difficult to
compare. Analysis with different sets of assumed compartments is then likely
to give different results in the hands of different investigators, creating
unproductive controversies and slowing progress. Field models have many fewer
assumptions and are more productive. However, they involve considerably more
mathematical analysis [72, 80] and numerical difficulties. Field models still
contain many known parameters (e.g., most structural parameters, the capacitance
of membranes, the conductivity of extra- and intracellular solutions) and a number
of not well known parameters, like the properties and distributions of
membrane channels (and their ensemble properties) and active transport
systems. Direct experimentation is the best way to determine these parameters,
and modern optical methods in particular allow many such measurements on
scales much smaller than a cell diameter. But curve fitting to available data
is often all that is possible, as in some cases in this paper, with its
unavoidable ambiguities.
The time course of $\mathrm{Na}^{+}$ variation $(\Delta c_{ex}^{Na})$ in the
stimulated extracellular space is (see Appendix F)
$\frac{d\left(\eta_{ex}\Delta
c_{ex}^{Na}\right)}{dt}=-\lambda_{ex}^{Na,1}\Delta
c_{ex}^{Na}+\lambda_{ex}^{Na,2}\Delta c_{ex}^{K},$ (73)
with the initial condition
$\Delta c_{ex}^{Na}(0)=-\Delta c_{sti}.$
There is a $\Delta c_{sti}$ amount of $\mathrm{Na^{+}}$ flux into the axon
compartment from the extracellular space at the beginning of each period
$\Delta c_{ex}^{Na}(iT)=\Delta c_{ex}^{Na}(iT)-\Delta c_{sti},\ \ i=1\dots
n-1.$
In Eq. (73), $\lambda_{ex}^{Na,1}$ describes the effect of extracellular
diffusion and $\lambda_{ex}^{Na,2}$ represents the extracellular electric drift
between the stimulated and non-stimulated regions, where
$\lambda_{ex}^{Na,1}=\frac{2\eta_{ex}D_{ex}^{Na}\tau_{ex}}{r_{sti}r^{*}},\quad\lambda_{ex}^{Na,2}=\frac{2\eta_{gl}\sigma_{gl}D_{ex}^{Na}\tau_{ex}c_{ex}^{Na,re}}{r_{sti}\sigma_{ex}\left(1+h_{\epsilon}\right)r^{*}c_{ex}^{K,re}}.$
In Appendix F, we present the solution of the coupled linear system of (72)
and (73). By the charge neutrality condition Eq. (2), the variation of
extracellular osmotic concentration is
$\Delta O_{ex}=2\left(\Delta c_{ex}^{K}+\Delta c_{ex}^{Na}\right),$ (74)
where $\Delta c_{ex}^{K}$ and $\Delta c_{ex}^{Na}$ are written in Eqs. (123)
and (124).
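Between pulses, the pair (72)-(73) forms a lower-triangular linear system that can be advanced in closed form. A minimal sketch, where the rates $a_{K}$, $a_{Na}$ and coupling $b$ are illustrative placeholders for $(\lambda_{gl}^{m,K}+\lambda_{ex}^{K})/\eta_{ex}$, $\lambda_{ex}^{Na,1}/\eta_{ex}$ and $\lambda_{ex}^{Na,2}/\eta_{ex}$, treating $\eta_{ex}$ as constant:

```python
import math

def coupled_step(dcK0, dcNa0, aK, aNa, b, t):
    """Closed-form solution over a pulse-free interval of length t of
    d(dcK)/dt = -aK*dcK,  d(dcNa)/dt = -aNa*dcNa + b*dcK
    (cf. Eqs. (72)-(73) with constant coefficients, aK != aNa)."""
    dcK = dcK0 * math.exp(-aK * t)
    dcNa = (dcNa0 * math.exp(-aNa * t)
            + b * dcK0 * (math.exp(-aK * t) - math.exp(-aNa * t)) / (aNa - aK))
    return dcK, dcNa

# Illustrative rates (placeholders, not fitted values).
aK, aNa, b = 20.0, 15.0, 4.0
dcK, dcNa = coupled_step(0.5, -0.5, aK, aNa, b, 0.02)
dO_ex = 2.0 * (dcK + dcNa)    # Eq. (74)
```

The particular-solution term proportional to $b$ is how the potassium perturbation drags the sodium variation along; the full piecewise solution with pulses, Eqs. (123)-(124), is given in Appendix F.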
Notice that sodium and potassium behave differently in the extracellular
space. There, the electric drift $\mathrm{K^{+}}$ flux
has a much smaller magnitude than the diffusive $\mathrm{K^{+}}$ flux,
since the ratio $R_{ex}^{K}$ between the electric drift term and the
diffusion term for $\mathrm{K^{+}}$ is (see Appendix F)
$R_{ex}^{K}=\frac{\eta_{gl}\sigma_{gl}}{\eta_{ex}\sigma_{ex}(1+h_{\epsilon})}=o(1).$
(75)
However, for $\mathrm{Na^{+}}$ in the extracellular space, the magnitude of the
electric drift flux is comparable to that of the diffusive flux since (see Appendix F)
$R_{ex}^{Na}=\frac{\eta_{gl}\sigma_{gl}}{\eta_{ex}\sigma_{ex}\left(1+h_{\epsilon}\right)}\frac{c_{ex}^{Na}}{c_{ex}^{K}}=O(1).$
(76)
In the next discussion, we estimate the scales of the glial transmembrane
velocity, glial radial velocity, and extracellular radial velocity. The
variation in osmotic pressure in the stimulated region is the driving force
for the water flow and circulation. Our estimation is based on the equations
governing fluid flow and the spatial variation of osmotic pressure.
From the conservation of mass in glial compartment, we have
$\frac{\partial\eta_{gl}}{\partial
t}+\mathcal{M}_{gl}U^{m}_{gl}+\nabla\cdot\left(\eta_{gl}\mathbf{u}_{gl}\right)=0.$
(77)
Based on Eq. (74), at $t=T_{sti}$ there is a cumulative osmotic
variation $\Delta O_{ex}(T_{sti})$ in the stimulated extracellular region.
Since the glial compartment volume fraction ($\eta_{gl}$) is larger than the
extracellular volume fraction ($\eta_{ex}$), we have
$|\Delta O_{gl}|<|\Delta O_{ex}|.$
Therefore, we view $\Delta O_{ex}$ as the driving force for the hydrostatic
pressure variation. At the resting state, Eq. (77) yields
$\mathcal{M}_{gl}L_{gl}^{m}\left(p_{gl}^{re}-p_{ex}^{re}-\gamma_{gl}k_{B}T\left(O_{gl}^{re}-O_{ex}^{re}\right)\right)+\nabla\cdot\left(\eta_{gl}^{re}\mathbf{u}_{gl}^{re}\right)=0,$
and by Eq. (77), we get
$\displaystyle\frac{\partial\Delta\eta_{gl}}{\partial
t}+\mathcal{M}_{gl}L_{gl}^{m}\left(\Delta p_{gl}-\Delta
p_{ex}-\gamma_{gl}k_{B}T\left(\Delta O_{gl}-\Delta O_{ex}\right)\right)$
$\displaystyle+\nabla\cdot\left(\Delta\left(\eta_{gl}\mathbf{u}_{gl}\right)\right)=0.$
(78)
Based on Eq. (5a), the scale of the second term in Eq. (4.3) is much larger
than the third term, since
$\frac{\delta_{2}}{\delta_{1}}=\frac{\kappa_{gl}\tau_{gl}}{\mu(r^{*})^{2}\mathcal{M}_{gl}L_{gl}^{m}}=o\left(1\right),$
where we choose
$U^{*}_{gl}=k_{B}TO^{*},\ \
u^{*}_{gl}=\frac{\kappa_{gl}\tau_{gl}k_{B}TO^{*}}{\mu r^{*}}.$
Therefore, Eq. (4.3) in the stimulated glial region can be approximated as
$\displaystyle\frac{\partial\left(\Delta p_{gl}-\Delta
p_{ex}\right)}{K_{gl}\partial t}$
$\displaystyle+\mathcal{M}_{gl}L_{gl}^{m}\left(\Delta p_{gl}-\Delta
p_{ex}\right)$ (79)
$\displaystyle+\mathcal{M}_{gl}L_{gl}^{m}\gamma_{gl}k_{B}T\Delta O_{ex}=0,$
with the initial condition
$\Delta\eta_{gl}(0)=\frac{\Delta p_{gl}(0)-\Delta p_{ex}(0)}{K_{gl}}=0.$
In Eq. (79), we have used the relationship between hydraulic pressures
$p_{l},\ l=gl,ex$ and glial compartment volume fraction $\eta_{gl}$ in Eq.
(4a)
$K_{gl}\Delta\eta_{gl}=\Delta p_{gl}-\Delta p_{ex}.$ (80)
By using a linear approximation of extracellular osmotic concentration
variation $\Delta O_{ex}$
$\Delta O_{ex}(t)=\frac{\Delta O_{ex}(T_{sti})}{T_{sti}}t,\ \
t\in[0,T_{sti}],$
the solution of $\Delta\left(p_{gl}-p_{ex}\right)$ in Eq. (79) can be written
as
$\Delta p_{gl}(t)-\Delta p_{ex}(t)=\left(\frac{Bt}{A}\exp(At)-\frac{B}{A^{2}}\left(\exp(At)-1\right)\right)\exp(-At),$ (81)
where
$A=\mathcal{M}_{gl}L_{gl}^{m}K_{gl},\quad
B=-K_{gl}\mathcal{M}_{gl}L_{gl}^{m}\gamma_{gl}k_{B}T\frac{\Delta
O_{ex}\left(T_{sti}\right)}{T_{sti}}.$
Hence, we estimate the average glial transmembrane water velocity in the
stimulated region as
$U_{gl}^{m}(t)=L_{gl}^{m}\left(\Delta p_{gl}(t)-\Delta
p_{ex}(t)+\gamma_{gl}k_{B}T\Delta O_{ex}(t)\right),$ (82)
and the scale of glial transmembrane velocity in the stimulated region as
$U_{gl}^{*}=\left|U_{gl}^{m}(T_{sti})\right|.$ (83)
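One can check directly that Eq. (81) solves the underlying linear ODE, which for $x=\Delta p_{gl}-\Delta p_{ex}$ reads $x^{\prime}+Ax=Bt$ with $x(0)=0$. A minimal sketch with arbitrary illustrative values of the lumped constants $A$ and $B$:

```python
import math

A, B = 3.0, -2.0   # illustrative lumped constants (see Eq. (81))

def dp(t):
    """Eq. (81): solution of x' + A*x = B*t with x(0) = 0."""
    return ((B * t / A) * math.exp(A * t)
            - (B / A**2) * (math.exp(A * t) - 1.0)) * math.exp(-A * t)

# Finite-difference check that x' + A*x == B*t at a few times.
h = 1e-6
for t in (0.1, 0.5, 2.0):
    xprime = (dp(t + h) - dp(t - h)) / (2.0 * h)
    assert abs(xprime + A * dp(t) - B * t) < 1e-6
assert dp(0.0) == 0.0
```

For $At\gg 1$ the exponential transient dies out and the pressure difference tracks the ramp forcing as $x\approx Bt/A-B/A^{2}$.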
In Eq. (82), the hydrostatic pressure variations $\Delta p_{l},l=gl,ex$
passively react to the osmotic pressure variation $k_{B}T\cdot\Delta O_{ex}$
in the stimulated region. Therefore, the direction of this glial transmembrane
water flow is determined by osmotic pressure variation $k_{B}T\cdot\Delta
O_{ex}$.
In the next step, we estimate the glial radial velocity scale $u_{gl}^{r*}$
and extracellular radial velocity scale $u_{ex}^{r*}$. By the
incompressibility condition, we have
$\nabla\cdot\left(\eta_{gl}\mathbf{u}_{gl}\right)+\nabla\cdot\left(\eta_{ex}\mathbf{u}_{ex}\right)+\frac{\partial\left(\eta_{ax}u_{ax}^{z}\right)}{\partial
z}=0.$ (84)
In Eq. (84), the dominant terms are the gradients in the radial direction, because
of the length-scale difference between $r^{*}$ and $z^{*}$ and because the osmotic
pressure variation is in the radial direction. Therefore, Eq. (84) can
be approximated by
$\frac{\partial\left(\eta_{gl}u_{gl}^{r}\right)}{\partial
r}+\frac{\partial\left(\eta_{ex}u_{ex}^{r}\right)}{\partial r}=0.$ (85)
The velocity boundary conditions at $r=0$,
$u^{r}_{gl}=u^{r}_{ex}=0,$
and Eq. (85) yield
$\eta_{gl}u_{gl}^{r}+\eta_{ex}u_{ex}^{r}=0.$ (86)
With the help of Eq. (86), we can rewrite $u_{gl}^{r}$ in the form
$u_{gl}^{r}=(1-\chi)u_{gl}^{r}-\chi\frac{\eta_{ex}}{\eta_{gl}}u_{ex}^{r},$
(87)
where the $\chi$ is defined as
$\chi=\frac{\kappa_{gl}\tau_{gl}}{\frac{\eta_{ex}}{\eta_{gl}}\kappa_{ex}\tau_{ex}+\kappa_{gl}\tau_{gl}}.$
By substituting Eqs. (6), (10) into Eq. (87), we estimate the radial velocity
scale in the glial compartment as
$\displaystyle u_{gl}^{r*}=$
$\displaystyle\left|(1-\chi)\frac{\kappa_{gl}\tau_{gl}}{\mu}\frac{\Delta
p_{gl}-\Delta
p_{ex}}{r^{*}}-(1-\chi)\frac{\kappa_{gl}\tau_{gl}}{\mu}\gamma_{gl}k_{B}T\frac{\Delta
O_{gl}}{r^{*}}\right.$ (88)
$\displaystyle\left.-\chi\frac{\eta_{ex}}{\eta_{gl}}k_{e}\tau_{ex}\frac{\Delta\phi_{ex}}{r^{*}}\right|_{t=T_{sti}}$
In Eq. (88), $\Delta O_{gl}$, which is due to the change of the glial compartment
volume fraction $\Delta\eta_{gl}$ (see Remark 4.5), can be estimated
as
$\Delta
O_{gl}\approx\frac{\eta_{gl}^{re}}{\eta_{gl}^{re}+\Delta\eta_{gl}}O_{gl}^{re}-O_{gl}^{re}=-\frac{\Delta\eta_{gl}}{\eta_{gl}^{re}+\Delta\eta_{gl}}O_{gl}^{re},$
where $\Delta\eta_{gl}$ can be written by using the $\Delta p_{l}$ as in Eq.
(80)
$\Delta\eta_{gl}=\frac{\Delta p_{gl}-\Delta p_{ex}}{K_{gl}}.$
Furthermore, by Eq. (86), the scale of the radial velocity in the extracellular
region $(u_{ex}^{*})$ is given by
$u_{ex}^{*}=\frac{\eta_{gl}}{\eta_{ex}}u_{gl}^{*}.$ (89)
Fig. 7b shows that the water flow exhibits circulation patterns between the
extracellular space and the glial compartment. The water flow in the glial
compartment goes from the stimulated region to the non-stimulated region in the
radial direction. In the extracellular space, the radial water flow goes from
the non-stimulated region to the stimulated region.
###### Remark 4.5.
We assume that the average total number of molecules (not the concentration) in the
stimulated glial region does not change, since the major glial transmembrane
ion flux in the stimulated region is the $\mathrm{K^{+}}$ flux, and this
$\mathrm{K^{+}}$ flux from the stimulated extracellular space moves through
the glial transition region $S_{T}$ to the non-stimulated extracellular space as in Eq.
(70).
Figure 7: (a) Schematic graph of the potassium flux when the inner part of the axon is
stimulated. In the stimulated region, the potassium leaves through the
extracellular pathway and through the glial compartment via the glial membrane. In
the non-stimulated region, the potassium leaks out to the extracellular space
through the glial membrane. (b) Schematic graph of the water circulation when the
inner part of the axon is stimulated. In the stimulated region, the glial transmembrane
water flow goes from the extracellular space into the glial compartment as a result
of the osmotic difference. In the extracellular space, water goes from the non-stimulated
region to the stimulated region in the radial direction; in the glial
compartment it goes in the opposite direction. This compartment drawing is given
only to aid qualitative understanding.
### 4.4 The relative importance of ion flux components
In this section, we discuss the relative importance of ion flux components,
due to diffusion, convection, and electric drift in the glial and
extracellular regions, respectively. Our discussion focuses on the radial
direction since these are the dominant fluxes.
In the extracellular space, we characterize the relative importance of
electric drift and diffusion (of potassium and sodium) in the extracellular
space by the ratios $R_{ex}^{K}$ and $R_{ex}^{Na}$ analyzed in Eq. (75) and
Eq. (76)
$R_{ex}^{K}=\left|\frac{\eta_{gl}\sigma_{gl}}{\eta_{ex}\sigma_{ex}\left(1+h_{\epsilon}\right)}\right|,\quad
R_{ex}^{Na}=\left|\frac{\eta_{gl}\sigma_{gl}}{\eta_{ex}\sigma_{ex}\left(1+h_{\epsilon}\right)}\frac{c_{ex}^{Na}}{c_{ex}^{K}}\right|.$
For the radial flux, the ratio between convection and diffusion in the
extracellular space is estimated by the Peclet number shown in Eq. (23)
$Pe_{ex}^{i}=\left|\frac{c_{ex}^{i}u_{ex}^{*}r^{*}}{D_{ex}^{i}\tau_{ex}\Delta
c_{ex}^{i}}\right|,\quad i=\mathrm{Na}^{+},\mathrm{K}^{+},$ (90)
where we approximate the scale of the radial diffusion flux in the extracellular space as
$\left|D_{ex}^{*}\tau_{ex}\frac{\Delta c_{ex}^{i}}{r^{*}}\right|,\quad
i=\mathrm{Na}^{+},\mathrm{K}^{+}.$
In a similar way, we estimate the Peclet numbers shown in Eq. (23) in the
glial compartment as
$Pe_{gl}^{i}=\left|\frac{c_{gl}^{i}u_{gl}^{*}r^{*}}{D_{gl}^{*}\tau_{gl}\Delta
c_{gl}^{i}}\right|,\quad i=\mathrm{Na}^{+},\mathrm{K}^{+}.$ (91)
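The ratios in Eqs. (90)-(91) are straightforward to tabulate once the scales are fixed. A small helper function (the numeric values below are illustrative placeholders, not the model's parameters):

```python
def peclet(c, u, r_star, D, tau, dc):
    """Peclet number |c * u * r* / (D * tau * dc)| as in Eqs. (90)-(91)."""
    return abs(c * u * r_star / (D * tau * dc))

# Sanity check on the scaling: Pe grows linearly with the velocity scale u
# (all other quantities held fixed).
pe1 = peclet(c=100.0, u=1e-9, r_star=48e-6, D=1e-9, tau=0.5, dc=1.0)
pe2 = peclet(c=100.0, u=2e-9, r_star=48e-6, D=1e-9, tau=0.5, dc=1.0)
assert abs(pe2 - 2.0 * pe1) < 1e-12
```

The same helper applies per ion and per compartment; only the concentration, velocity, diffusivity and tortuosity scales change.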
Note that the Peclet numbers for $\mathrm{Na^{+}}$ and $\mathrm{K^{+}}$ are
significantly different due to their different concentrations as shown in Eqs.
(90) and (91). In the glial compartment, the ratio between electric drift and
diffusion is
$R_{gl}^{K}=\left|\frac{1}{1+h_{\epsilon}}\frac{c_{gl}^{K}\Delta
c_{ex}^{K}}{c_{ex}^{K}\Delta c_{gl}^{K}}\right|,\quad
R_{gl}^{Na}=\left|\frac{1}{1+h_{\epsilon}}\frac{c_{gl}^{Na}\Delta
c_{ex}^{K}}{c_{ex}^{K}\Delta c_{gl}^{Na}}\right|,$ (92)
where we have used Eqs. (53) and (66). In Eq. (92), we estimate the
$\mathrm{K}^{+}$ concentration change $(\Delta c_{gl}^{K})$ in the stimulated
glial compartment as
$\Delta c_{gl}^{K}\approx\left(nc_{sti}-\Delta
c_{ex}^{K}\right)\frac{\lambda_{gl}^{m,K}}{\lambda_{gl}^{m,K}+\lambda_{ex}^{K}}\frac{\eta_{ex}}{\eta_{gl}},$
(93)
where $\lambda_{gl}^{m,K}$ and $\lambda_{ex}^{K}$ are defined in Eq. (72), and
$n$ is the number of stimuli.
We estimate the $\Delta c_{gl}^{Na}$ in the stimulated glial compartment as
$\Delta c_{gl}^{Na}\approx-\frac{3\Delta I_{gl}}{g_{gl}^{K}\left(\Delta
V_{gl}-\Delta E_{gl}^{K}\right)}\Delta c_{gl}^{K},$ (94)
where $\Delta I_{gl}$ are approximated by Taylor expansion as
$\Delta I_{gl}\approx
2\left(\frac{\mathrm{K}_{\mathrm{K}1}I_{gl}^{re,1}}{c_{ex}^{K,re}\left(c_{ex}^{K,re}+K_{K1}\right)}+\frac{\mathrm{K}_{\mathrm{K}2}I_{gl}^{re,2}}{c_{ex}^{K,re}\left(c_{ex}^{K,re}+K_{K2}\right)}\right)\Delta
c_{ex}^{K}.$
In the next section, we carry out numerical simulations as mentioned
previously. Furthermore, we compare the results of the electrodiffusion
model with those of the convection-electrodiffusion (full) model.
## 5 Numerical simulation
In this section, numerical simulations are used to confirm our asymptotic
estimations. A comparison between the electrodiffusion model and the full
convection-electrodiffusion model is conducted to understand how the nervous
(neuron-glia) system interacts with the extracellular space to create
microcirculation.
A train of stimuli is applied to stimulate the axon membrane near the left
boundary $(\\{(z_{0},r)|z_{0}=1.875\ \mathrm{mm}\
\mathrm{and}\ r<r_{sti}=\frac{1}{2}r^{*}=24\ \mathrm{\mu m}\\})$. Each single
stimulus has current strength $I_{sti}=3\times 10^{-3}\ \mathrm{A/m^{2}}$ and
duration $3\ \mathrm{ms}$. The frequency of the stimuli is $50\ \mathrm{Hz}\
(T=0.02\ \mathrm{s})$ and the duration of the train is $T_{sti}=0.2\ \mathrm{s}$. The
full model is solved using the Finite Volume Method with mesh size
$h=1/20$ and time step $\Delta t=1/10$ in dimensionless units. The code is written in
the Matlab environment.
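The stimulus protocol just described can be encoded as a simple square-pulse train; the sketch below reproduces only this driving term (the solver itself is the Matlab finite-volume code described above):

```python
def I_sti(t, I0=3e-3, dur=3e-3, T=0.02, T_sti=0.2):
    """Stimulus current (A/m^2) at time t (s): amplitude I0 during the
    first `dur` seconds of each period T, for t in [0, T_sti)."""
    if not (0.0 <= t < T_sti):
        return 0.0
    return I0 if (t % T) < dur else 0.0

# Ten 3 ms pulses at 50 Hz over 0.2 s:
assert I_sti(0.001) == 3e-3    # inside the first pulse
assert I_sti(0.005) == 0.0     # between pulses
assert I_sti(0.25) == 0.0      # after the train ends
```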
### 5.1 Estimation of velocity scales
We first estimate how large the fluid velocities generated by a train of stimuli
are in the extracellular space and the glial compartment. From Eqs. (124) and
(123), the estimated concentration variations in the stimulated extracellular
region at $t=T_{sti}$ are
$\Delta c_{ex}^{Na}\approx-1.06\ \mathrm{mM},\quad\Delta c_{ex}^{K}\approx
0.89\ \mathrm{mM},\quad\Delta O_{ex}\approx-0.34\ \mathrm{mM}.$
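The osmotic variation quoted here is just Eq. (74) applied to the two concentration estimates:

```python
dc_Na, dc_K = -1.06, 0.89        # mM, from Eqs. (123)-(124) at t = T_sti
dO_ex = 2.0 * (dc_K + dc_Na)     # Eq. (74)
print(round(dO_ex, 2))           # -0.34 mM, matching the estimate above
```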
The estimated glial transmembrane velocity by Eq. (88) is
$U_{gl}^{*}\approx 9.78\times 10^{-2}\ \mathrm{nm}/\mathrm{s}.$
From Eqs. (88) and (89), the estimated scales of the radial water velocities inside
the glial compartment and the extracellular space are
$u_{ex}^{*}\approx 1.56\times 10^{1}\ \mathrm{nm}/\mathrm{s},\ \ \
u_{gl}^{*}\approx 3.90\ \mathrm{nm}/\mathrm{s}.$
Figure 8: Numerical Results. (a-c) Average concentration variations in the
stimulated extracellular region; (d-e) Average radial velocity in the
intradomain; (f) Average glial transmembrane velocity in the stimulated region
(with normal direction points to ECS).
In Fig. 8a-c, we plot the computed average variation of the concentrations in the
stimulated extracellular region. These computed concentration changes are
consistent with the estimates presented previously. The change of
concentration reaches its peak at the end of the train of stimuli
$(t=T_{sti})$ and quickly returns to its previous equilibrium value.
In Fig. 8f, we plot the computed average transmembrane water flow through the
glial membrane in the stimulated region. We see, as in Fig. 7b, that water flows into
the glial compartment from the extracellular space in the stimulated region.
This transmembrane water flow generates the water circulation between the
stimulated and non-stimulated regions in the radial direction. As in
Fig. 7b, in the extracellular compartment the water flow goes from the non-stimulated
region to the stimulated region, and in the glial compartment water
flows in the opposite (radial) direction. In Fig. 8d-e, we plot the
computed average water velocity in the radial direction in the glial
compartment and in the extracellular space. The computations are consistent
with our estimations above.
In Fig. 9a, we show the transmembrane water flow through the glial
membrane in the non-stimulated region, as in the schematic Fig. 7b. This water
flow to the extracellular space produces a widening of the extracellular space
volume in the non-stimulated region, as shown in Fig. 9b. At the same time,
the extracellular space volume shrinks in the stimulated region, as shown in
Fig. 9c. The shrinkage is produced by the inward water flow through the glial
membrane in the stimulated region, as in Fig. 8f. In Fig. 10 and Fig. 11, the
variations of the volume fractions of the extracellular space and the glial
compartment in the whole domain are plotted at times $t=0.1\mathrm{s}$ (during
the stimulus), $t=0.5\mathrm{s}$ (maximum variations) and $t=2\mathrm{s}$
(back to the resting state). Our simulation is consistent with the experiments in
references [27, 33], where the extracellular space becomes smaller in the
middle cortical layers (where the stimulus is applied) but widens in the most
superficial and deep cortical layers (where no stimulus is applied).
###### Remark 5.1.
In Figs. 10-11, the apparent jumps in the contours of the volume fractions of the
extracellular space and glial compartment are an illusion. By checking a
line plot at a fixed radius $r=1.5\mathrm{\mu m}$, Fig. 16 in the Appendix
shows that there are no jumps but rather local extreme values at
$z_{0}=1.875\mathrm{mm}$, where the stimuli are applied. These stimuli result
in a local potassium accumulation which decreases the osmotic variation in
the extracellular space near $z_{0}$ (see Appendix Fig. 18). Therefore, the
extracellular volume fraction shrinks less near $z_{0}$, as shown in Figs. 10-11.
Figure 9: (a) Average glial transmembrane velocity in the non-stimulated
region (the normal direction points from the glial compartment to the extracellular
space); (b-c) Average variation of the extracellular volume fraction in the non-stimulated
and stimulated regions. Figure 10: (a)-(c): Extracellular
space volume fraction $(\eta_{ex})$ variation at times
$t=0.1\mathrm{s},~{}0.5\mathrm{s},~{}2\mathrm{s}$. Blue marks the enlarged
region of the extracellular space and red the shrunken region of the
extracellular space, which is qualitatively consistent with the results in Refs.
[27, 33]. The stimulus current has been applied at $z_{0}=1.875\mathrm{mm}$, as
shown in Fig. 5, which induces differences in the ion concentration and osmotic variations.
The volume fraction changes depend on the hydrostatic pressure difference,
which involves the osmotic pressure (see Fig. 18 in the Appendix). Figure 11:
(a)-(c): Glial compartment volume fraction $(\eta_{gl})$ variation at times
$t=0.1\mathrm{s},~{}0.5\mathrm{s},~{}2\mathrm{s}$.
### 5.2 Importance of convection
In this section, we explore the importance of fluid convection during
potassium clearance in each region. We first examine the estimated Peclet
numbers for $\mathrm{Na^{+}}$ and $\mathrm{K^{+}}$ in the extracellular and
glial compartments. By Eq. (90), the Peclet numbers (for the radial ion flux)
in the extracellular space are
$\displaystyle Pe_{ex}^{K}$
$\displaystyle=\left|\frac{c_{ex}^{K}u_{ex}^{*}r^{*}}{D_{ex}^{K}\tau_{ex}\Delta
c_{ex}^{K}}\right|\approx 1.0\times 10^{-2},$ $\displaystyle\quad
Pe_{ex}^{Na}$
$\displaystyle=\left|\frac{c_{ex}^{Na}u_{ex}^{*}r^{*}}{D_{ex}^{Na}\tau_{ex}\Delta
c_{ex}^{Na}}\right|\approx 3.5\times 10^{-1}.$
By Eqs. (75) and (76), the ratios between electric drift and diffusion (of the
radial ion flux) in the extracellular space are
$\displaystyle
R_{ex}^{K}=\left|\frac{\eta_{gl}\sigma_{gl}}{\eta_{ex}\sigma_{ex}\left(1+h_{\epsilon}\right)}\right|\approx
6.2\times 10^{-2},$ $\displaystyle
R_{ex}^{Na}=\left|\frac{\eta_{gl}\sigma_{gl}}{\eta_{ex}\sigma_{ex}\left(1+h_{\epsilon}\right)}\frac{c_{ex}^{Na}}{c_{ex}^{K}}\right|\approx
2.3.$
In the glial compartment, based on Eqs. (91), (93) and (94), the Peclet
numbers (for the radial ion flux) are
$\displaystyle
Pe_{gl}^{K}=\left|\frac{c_{gl}^{K}u_{gl}^{*}r^{*}}{D_{gl}^{K}\tau_{gl}\Delta
c_{gl}^{K}}\right|\approx 2.9\times 10^{1},$ $\displaystyle
Pe_{gl}^{Na}=\left|\frac{c_{gl}^{Na}u_{gl}^{*}r^{*}}{D_{gl}^{Na}\tau_{gl}\Delta
c_{gl}^{Na}}\right|\approx 1.7\times 10^{1}.$
By Eq. (92), the ratios between electric drift and diffusion (of the radial
ion flux) in the glial compartment are
$\displaystyle R_{gl}^{K}=\left|\frac{1}{1+h_{\epsilon}}\frac{c_{gl}^{K}\Delta
c_{ex}^{K}}{c_{ex}^{K}\Delta c_{gl}^{K}}\right|\approx 4.3\times 10^{2},$
$\displaystyle
R_{gl}^{Na}=\left|\frac{1}{1+h_{\epsilon}}\frac{c_{gl}^{Na}\Delta
c_{ex}^{K}}{c_{ex}^{K}\Delta c_{gl}^{Na}}\right|\approx 1.7\times 10^{2}.$
In Fig. 12, we plot the computed potassium and sodium fluxes (in the radial
direction) in the extracellular space and glial compartments.
Figure 12: (a) Average radial direction fluxes components in the extracellular
space; (b) Average radial direction fluxes components in the glial compartment
(radial direction as normal direction).
In the extracellular space, the relative importance of the different fluxes is
complicated because it depends on the ion species concentration, as shown in
Eq. (90). For potassium, the diffusion flux is dominant, as shown in the upper
panel of Fig. 12a. For sodium (Fig. 12a, lower panel), the three fluxes
(diffusion, convection, and electric drift) are comparable, with the electric
drift flux being somewhat larger. These simulation results agree with our
estimates above: the potassium Peclet number $Pe_{ex}^{K}$ and ratio
$R_{ex}^{K}$ are of order $O(10^{-2})$, while the sodium Peclet number
$Pe_{ex}^{Na}$ is of order $O(10^{-1})$ and the ratio $R_{ex}^{Na}$ is of
order $O(1)$.
In the glial compartment (Fig. 12b), the situation differs from the
extracellular space: the electric drift is dominant, and the convection flux
is second in importance for both sodium and potassium. Water flow has a
stronger effect on potassium in the glial compartment than in the
extracellular space. The convection flux peaks after the stimuli, since it
takes that long for the osmotic pressure to accumulate, and it persists after
the effect of electric drift has diminished.
Figure 13: (a) Potassium and sodium flux variation through the Na/K pump and
ion channels on the glial membrane in the stimulated region; (b) the same in
the non-stimulated region; (c) the total potassium flux through potassium
channels on the glial membrane.
Figs. 13a and 13b present the potassium and sodium fluxes through the glial
membrane; the results are consistent with our estimates. The major current
through the glial membrane passes through the potassium channel in both the
stimulated and non-stimulated regions. Fig. 13c compares the stimulated and
non-stimulated regions by showing the total potassium flux through potassium
channels (integrated over the entire glial membrane). The total potassium flux
has the same magnitude but opposite directions in the stimulated and
non-stimulated regions, as shown in our estimate in Eq. (70).
Figure 14: (a) Cumulative $\mathrm{K^{+}}$ flux on the extracellular
transition region; (b) cumulative $\mathrm{K^{+}}$ flux on the glial
transition region (radial direction as normal direction).
Fig. 14 compares the potassium flux in the electrodiffusion (ED) model and the
convection-electrodiffusion (full) model. In the full model, the water
circulation between the stimulated and non-stimulated regions, in both the
extracellular and glial compartments, plays an important role in buffering
potassium in the optic nerve bundle: it increases the potassium flow through
the glial compartment.
Fig. 14b shows how water flow increases the potassium flux through the glia in
the transition region between the stimulated and non-stimulated regions. The
potassium flux returns to the stimulated extracellular region from the
non-stimulated extracellular region through the extracellular pathway, as
shown in Fig. 14a. The time rate of change of the cumulative $\mathrm{K^{+}}$
flux through the extracellular transition region decreases after the stimulus.
Figure 15: Multiple trains of action potentials. (a) Cumulative
$\mathrm{K^{+}}$ flux on the extracellular transition region; (b) cumulative
$\mathrm{K^{+}}$ flux on the glial transition region (radial direction as
normal direction).
Multiple trains of action potentials strengthen the effect of water flow on
transport through the glial compartment. In Fig. 15, three trains of action
potentials occur with a $0.2\ \mathrm{s}$ resting period between them. Fig.
15b shows that water flow increases the cumulative potassium flux through the
transition region in the glial compartment by $25\%$ beyond the potassium flow
in the electrodiffusion model. Consequently, the cumulative potassium flux
through the transition region in the extracellular space is around $15\%$ less
than in the electrodiffusion model (see Fig. 15a).
## 6 Discussion
Biological systems, like engineering systems, are complex, involving many
components connected in specific structures and using a range of forces to
perform specific functions, often ones that can be defined by quantitative
measurements and relations. These systems are described in textbooks of
physiology, and some in more mathematical detail elsewhere.
Many parameters are involved that must be known if function is to be
understood and predicted. What is not so well known is how these parameters
are determined. At one extreme, in the circuits of electronic devices, all
parameters, every one, are known by independent measurements. Curve fitting is
not involved at all. Indeed, it is hard to imagine how a computer of some
$10^{13}$ devices that interact with each other some $10^{9}$ times a second
could function if parameters were not definite and known to the designer of
the circuit. Thus, complexity in itself does not prevent definite
understanding.
A crucial help in dealing with electronic circuits is the universal and exact
nature of the Maxwell equations that govern electronic current flow in these
structures. The same equations are true for biological systems for ions, but
the mechanical response of the system to the charges and their movement when
electric fields change (loosely called ‘polarization’) is not so well known.
Measurements of the physical and electrical structure of tissues are, however,
sometimes possible, giving some fortunate biological systems the certainty
that the Maxwell-Kirchhoff equations bring to electronic systems. It is
natural to try to simplify the electrical, and then the electrodiffusional and
osmotic, properties of biological tissues with compartment models, in which
spatial variables and differential equations in space and time are replaced by
compartments and ordinary differential equations in time. These compartments
can be derived in some cases by well defined perturbation procedures (some of
which we use here), but the accuracy of the perturbation scheme and reduced
models is difficult to determine, to put it mildly, given the large number of
parameters that affect that accuracy, particularly as conditions change. The
compartments introduce a level of uncertainty that is hard to resolve and is
likely to impede agreement among investigators and thus the progress of
knowledge. In some fortunate cases, biological systems are known well. Then
field equations can be written and solved that are general and quite
independent of the choice of compartments, as we have tried to do here. The
system of long cylindrical nerve fibers, ionic channels and membranes,
particularly their capacitance, that conducts the signals (action potentials)
of the nervous system is known quite well. Independent measurements of every
component are available. Parameters of almost all components can be measured
in several independent ways that give indistinguishable results. Thus action
potential propagation can be computed with little ambiguity.
Some syncytial tissues are known almost this well. The lens of the eye has
been studied by impedance spectroscopy and morphometry, so the structure and
structural parameters are well known. Flows have been directly measured, and
also pressure, sometimes with spatial dependence, in the Mathias group more
than anywhere else. In the case of the lens, the biological system is nearly
as well determined as the electronic system. The optic nerve is not so well
known. Here we have good structural information but limited knowledge of
parameters. Membrane capacitance and extracellular and intracellular
resistivities are known. The conductance of voltage-activated channels and
connexins is known, but the spatial distribution of connexins and channels is
not, and even the identity of the channels is not known. Thus the calibration
of our optic nerve model is incomplete, as we have tried to explain in detail
in the text. And so
validation is limited as well. What is needed for calibration in the optic
nerve more than anything else is experimental measurements of the type and
spatial distribution of pumps and channels. What is needed for validation is
experimental measurements of the spatial distribution of potentials,
concentrations and pressures. The theory can easily be extended to compute
those quantities not already included. Indeed, this process of calibration and
validation is what is needed, in our view, to understand the role of water
flow, ion migration and diffusion in other systems in the central nervous
system. Understanding the glymphatic flows in the central nervous system
requires a field theory in the spirit of that presented here. It requires
calibration with the spatial distribution of pumps and channels. It requires
validation by measurement of the spatial distribution of concentration,
electrical potential and pressure. A validated and calibrated theory can then
predict and understand the glymphatic flows so important in biological
processes like sleep and pathological situations like migraine and epilepsy.
## 7 Conclusion
This work provides a comprehensive set of estimates and computations, showing
the water circulation in the optic nerve. The water flow is generated by the
osmotic difference between the glial compartment and extracellular space.
Through the estimates, we show that in the stimulated region the extracellular
osmotic changes are not induced by ion fluxes from the axon compartment when
the axon is firing. Indeed, based on the analysis, we found that the
leading-order potassium flux out of and sodium flux into the axon are the same
during the action potential, which is consistent with the literature [49,
30]. The osmotic difference is instead generated by the difference between the
sodium and potassium conductances of the glial membrane: more potassium leaks
into the glial compartment than sodium leaks out. The resulting glial
transmembrane water flow in the stimulated region forms a water circulation in
the radial direction between the stimulated region and the non-stimulated
region.
Our estimates of the velocity scales in the glial compartment and
extracellular space show that this water flow has a considerable effect on the
potassium flux in the glial compartment. By comparing the full model
(including water) with the electrodiffusion model (excluding water), we
validate that water circulation through the glial pathway helps clear
potassium from the extracellular space and enhances the glial buffering
effect. With additional numerical simulations, we show that repetitive
activity of the nerve fibers further increases the importance of water flow
and its contribution to glial buffering, which is likely to dominate in
pathological situations of repetitive activity.
Besides, through our analysis, we show that the electrical syncytium property
of the glial cells is critical for clearing potassium (from the extracellular
space) when the neurons fire. Based on the governing equation for the glial
electric potential, we explain why the inward glial transmembrane potassium
flux in the stimulated region is almost the same as the outward potassium flux
into the extracellular space in the non-stimulated region during axon firing.
This is because the electric potential spreads through the connected cells in
the glial compartment: the glial electric potential in the non-stimulated
region becomes more positive in response to the depolarization of the glial
potential in the stimulated region. This electrical property of the glial
compartment exists as long as there are two distinct regions, stimulated and
non-stimulated. The glia wrap the axons like a fast potassium transporter,
quickly removing the extra potassium (in the extracellular space) from the
stimulated region to the non-stimulated region.
Finally, we would like to point out that the coupling of ionic and water flows
is not unique to the optic nerve. It is ubiquitous in many parts of the
mammalian body and other biological tissues. Our analysis of the model for the
optic nerve is just a first small step towards understanding the mechanisms
of various transport processes and the consequences of a disrupted process
under pathological conditions.
## Author Contributions
Y.Z., S.X., and H.H. did the model derivations and carried out the numerical
simulations. R.S.E. and H.H. designed the study, coordinated the study, and
commented on the manuscript. All authors gave final approval for publication.
## Acknowledgments
This research is supported in part by the National Natural Science Foundation
of China grant 12071190 (S.X.), the Fields Institute for Research in
Mathematical Sciences (S.X., R.S.E., and H.H.), and the Natural Sciences and
Engineering Research Council of Canada (H.H.). The authors would also like to
thank the anonymous reviewers for their valuable suggestions on model
calibration and validation.
## Appendix A Notations
$c_{l}^{i}$: Ion $i$ concentration in region $l$,
$\phi_{l}$: Electric potential in region $l$,
$p_{l}$: Hydrostatic pressure in region $l$,
$\mathbf{u}_{l}$: Fluid velocity inside region $l$,
$\eta_{l}$: Volume fraction of region $l$,
$O_{l}$: Osmotic concentration in region $l$,
$\mathcal{M}_{k}$: Area of membrane $k$ per unit control volume,
$\kappa_{l}$: Water permeability of region $l$,
$L_{k}^{m}$: Hydrostatic permeability of membrane $k$,
$\mu$: Fluid viscosity,
$K_{k}$: Stiffness constant of membrane $k$,
$\tau_{l}$: Tortuosity of region $l$,
$z^{i}$: Valence of ion $i$,
$A_{l}$: Negatively charged protein density in region $l$,
$J_{p,k}^{i}$: Active ATP-based ion $i$ pump flux on membrane $k$,
$J_{c,k}^{i}$: Passive transmembrane source on membrane $k$,
$g_{k}^{i}$: Conductance of membrane $k$ for ion $i$,
$\bar{g}^{i}$: Maximum conductance of the axon membrane for ion $i$,
$g_{leak}^{i}$: Leak conductance of the axon membrane for ion $i$.
## Appendix B Comparison between membrane potential and Nernst potential on
axon membrane
The classical Hodgkin Huxley analysis of a single action potential [11]
assumes that changes in concentration of ions are much less important than
current flow in determining the shape of the action potential. In other words,
the change in the Nernst (i.e., equilibrium) potential is much less than the
change in the membrane potential. In this section, we show that the variation
of the Nernst potential for $\mathrm{Na^{+}}$, $\mathrm{K^{+}}$ and
$\mathrm{Cl^{-}}$ on the axon membrane is much smaller than the axon membrane
potential changes during action potentials,
$\Delta E_{ax}^{i}=o\left(\Delta V^{*}_{ax}\right),\quad
i=\mathrm{Na^{+}},\mathrm{K^{+}},\mathrm{Cl^{-}}.$
During action potentials, the scale of $\Delta V_{\mathrm{ax}}$ can be
approximated by the difference between the $\mathrm{Na}^{+}$ and
$\mathrm{K}^{+}$ Nernst potentials at the resting state,
$\Delta V^{*}_{\mathrm{ax}}=O\left(E^{Na,re}_{ax}-E^{K,re}_{ax}\right).$ (95)
We take the $\mathrm{Cl^{-}}$ Nernst potential as an example. By the charge
neutrality condition in Eq. (2), we have
$\Delta c_{ax}^{Cl}\approx-\frac{\eta_{ex}}{\eta_{ax}}\Delta c_{ex}^{Cl}.$
(96)
Therefore, the variation of $\mathrm{Cl^{-}}$ Nernst potential on axon
membrane yields
$\displaystyle\Delta E_{ax}^{Cl}$
$\displaystyle=V^{*}\left(\log\left(\frac{c_{ex}^{Cl,re}+\Delta
c_{ex}^{Cl}}{c_{ax}^{Cl,re}+\Delta
c_{ax}^{Cl}}\right)-\log\left(\frac{c_{ex}^{Cl,re}}{c_{ax}^{Cl,re}}\right)\right)$
(97) $\displaystyle\approx V^{*}\left(\log\left(1+\frac{\Delta
c_{ex}^{Cl}}{c_{ex}^{Cl,re}}\right)-\log\left(1-\frac{\eta_{ex}\Delta
c_{ex}^{Cl}}{\eta_{ax}c_{ax}^{Cl,re}}\right)\right),$
where
$V^{*}=\frac{k_{B}T}{e},\quad\frac{1}{c_{ex}^{Cl,re}}=O\left(10^{-2}\right),\quad\frac{\eta_{ex}}{\eta_{ax}c_{ax}^{Cl,re}}=O\left(10^{-2}\right).$
In addition, the characteristic time for a single action potential
$T_{ax}^{*}$ is at the millisecond level $\left(O\left(10^{-3}\right)\
\mathrm{s}\right)$, so the scale of $\Delta c_{ex}^{Cl}$ in the stimulated
region is
$\Delta c_{ex}^{Cl,*}=\Delta c_{ex}^{Na,*}+\Delta
c_{ex}^{K,*}<O\left(\frac{T_{ax}^{*}\mathcal{M}_{ax}\bar{g}^{Na}\Delta
V^{*}_{ax}}{e\eta_{ex}}\right)=O(1),$ (98)
where we use charge neutrality condition and maximum conductance of the
voltage-gated $\mathrm{Na}^{+}$ channel. Therefore, Eq. (97) yields
$\Delta E_{ax}^{Cl}\approx
V^{*}\left(\frac{1}{c_{ex}^{Cl,re}}+\frac{\eta_{ex}}{\eta_{ax}c_{ax}^{Cl,re}}\right)\Delta
c_{ex}^{Cl}.$ (99)
Based on Eqs. (95), (99) and (98), and the fact that $\frac{V^{*}}{\Delta
V^{*}_{ax}}=o(1)$, we have $\Delta E_{ax}^{Cl}=o\left(\Delta
V^{*}_{ax}\right)$. In a similar way, we can get
$\Delta E_{ax}^{i}=o\left(\Delta V^{*}_{ax}\right),\quad
i=\mathrm{Na^{+},K^{+}}.$ (100)
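A quick numerical sketch can confirm that the linearization leading to Eq. (99) is accurate when the relative concentration change is small. The concentrations, volume fractions, and $V^{*}$ below are illustrative assumptions, not the model's calibrated values.

```python
import math

# Sketch: compare the exact Nernst-potential variation (97) with its
# linearization (99) for a small extracellular Cl- change.
# All values below are illustrative assumptions.
V_star = 25.0e-3                     # k_B T / e in volts, ~296 K
c_ex, c_ax = 120.0, 5.0              # mM, assumed Cl- resting concentrations
eta_ex, eta_ax = 0.1, 0.5            # assumed volume fractions
dc = 0.5                             # mM, small extracellular change

exact = V_star * (math.log(1 + dc / c_ex)
                  - math.log(1 - eta_ex * dc / (eta_ax * c_ax)))
linear = V_star * (1 / c_ex + eta_ex / (eta_ax * c_ax)) * dc

rel_err = abs(exact - linear) / abs(exact)
print(f"exact={exact:.3e} V, linear={linear:.3e} V, rel. error={rel_err:.1e}")
```

For these values the linearization agrees with the exact expression to better than one percent, which is why Eq. (99) suffices for the order-of-magnitude argument above.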
## Appendix C Estimations of $t_{m1}$ and $t_{m2}$
In this section, we provide estimates of $t_{m1}$ and $t_{m2}$. For the first
time interval parameter $t_{m1}$, substituting Eqs. (36) and (38) into Eq.
(37), we obtain
$\displaystyle m^{dy}(t_{m1})=$ $\displaystyle
m_{0}\exp\left(\frac{18t_{m1}}{35}\left(\exp\left(\frac{-70}{9}\right)-1\right)+\frac{t_{m1}}{14}\left[\mathrm{Li}_{2}\left(\exp(x)\right)+x\ln\left(1-\exp(x)\right)-\frac{1}{2}x^{2}\right]\bigg{|}_{2.5}^{-11.5}\right)-\frac{t_{m1}}{14}\int_{2.5}^{-11.5}\frac{s}{\exp(s)-1}$
(101)
$\displaystyle\exp\left(\frac{18t_{m1}}{35}\left(\exp\left(-\frac{70}{9}\right)-\exp\left(-\frac{25-10s}{18}\right)\right)+\frac{t_{m1}}{14}\left[\mathrm{Li}_{2}(\exp(x))+x\ln(1-\exp(x))-\frac{1}{2}x^{2}\right]\bigg{|}_{s}^{-11.5}\right)ds,$
Based on Eq. (101), we present estimates of $t_{m1}$ obtained by choosing
different open-probability values for $m^{dy}(t_{m1})$ in Table 1 below.
Table 1: Estimation of $t_{m1}$
$m^{dy}\left(t_{m1}\right)$ | 0.93 | 0.95 | 0.97
---|---|---|---
$t_{m1}$ | $0.57\mathrm{~{}ms}$ | $0.67\mathrm{~{}ms}$ | $0.92\mathrm{~{}ms}$
Table 1 shows that the estimates of $t_{m1}$ obtained through Eq. (101) are
consistent. In a similar way, for the second time interval parameter $t_{m2}$,
substituting Eqs. (36) and (40) into Eq. (37), we obtain
$\displaystyle
m^{dy}(t_{m2})=m_{0}\exp\left(\frac{36t_{m2}}{75}\left(\exp\left(\frac{-70}{9}\right)-\exp\left(\frac{5}{9}\right)\right)+\frac{t_{m2}}{15}\left[\mathrm{Li}_{2}(\exp(x))+x\ln(1-\exp(x))-\frac{1}{2}x^{2}\right]\bigg{|}_{3.5}^{-11.5}\right)+\frac{t_{m2}}{15}$
(102)
$\displaystyle\int_{-11.5}^{3.5}\frac{s}{\exp(s)-1}\exp\left(\frac{36t_{m2}}{75}\left(\exp\left(\frac{-(35-10s)}{18}\right)-\exp\left(\frac{5}{9}\right)\right)+\frac{t_{m2}}{15}\left[\mathrm{Li}_{2}(\exp(x))+x\ln(1-\exp(x))-\frac{1}{2}x^{2}\right]\bigg{|}_{3.5}^{s}\right)ds.$
In the second time interval, we choose $m^{dy}(t_{m1})=0.95$ as the initial
value $m_{0}$ in Eq. (102). Table 2 shows consistent estimates of $t_{m2}$
when different values of $m^{dy}(t_{m2})$ are chosen.
Table 2: Estimation of $t_{m2}$
$m^{dy}\left(t_{m2}\right)$ | 0.15 | 0.1 | 0.05
---|---|---|---
$t_{m2}$ | $2.44\mathrm{~{}ms}$ | $3.00\mathrm{~{}ms}$ | $4.01\mathrm{~{}ms}$
In sum, based on the results in Tables 1-2, we confirm that using Eqs. (101)
and (102) to estimate the time parameters $t_{m1}$ and $t_{m2}$ for $\Delta
V_{ax}$ gives robust results.
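As an independent sanity check on the millisecond time scales in Tables 1-2, one can integrate the m-gate kinetics directly at a fixed depolarized voltage. This sketch uses the classical Hodgkin-Huxley rate functions and an assumed constant voltage step, unlike Eqs. (101)-(102), which account for the time-dependent $\Delta V_{ax}$, so only the order of magnitude is comparable.

```python
import math

# Sketch: integrate dm/dt = alpha_m(V)(1 - m) - beta_m(V) m with the
# classical Hodgkin-Huxley rate functions at a fixed depolarization
# (an assumption; V is in mV relative to rest, rates in 1/ms).
def alpha_m(V):
    return 0.1 * (25.0 - V) / (math.exp((25.0 - V) / 10.0) - 1.0)

def beta_m(V):
    return 4.0 * math.exp(-V / 18.0)

V, m, dt, t = 60.0, 0.05, 1e-4, 0.0   # assumed step, resting m, Euler step (ms)
while m < 0.95:                        # time to reach open probability 0.95
    m += dt * (alpha_m(V) * (1.0 - m) - beta_m(V) * m)
    t += dt
print(f"t(m=0.95) ~ {t:.2f} ms")       # order of a millisecond, as in Table 1
```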
## Appendix D Estimation of transmembrane currents
After the axons stop firing, we assume that the voltage-gated
$\mathrm{Na}^{+}$ and $\mathrm{K}^{+}$ channel conductances on the axon
membrane have returned to their resting state in the stimulated region,
$g_{ax}^{i,dy}\approx g_{ax}^{i,re},\ i=\mathrm{Na^{+},K^{+}}.$
At this stage, the ion channel conductances on the glial and axon membranes
satisfy
$\\{g_{ax}^{Na,re},\ g_{ax}^{K,re},\ g_{ax}^{Cl},\ g_{gl}^{Cl},\
g_{gl}^{Na}\\}\subset o\left(g_{gl}^{K}\right).$ (103)
Similar to Eq. (53), we claim in the stimulated region
$\Delta E_{k}^{i}=o\left(\Delta E_{gl}^{K}\right),\ i=\mathrm{Na^{+},Cl^{-}},\
k=gl,ax,$ (104)
since Eq. (57) holds and
$c_{ex}^{K,re}=o\left(c_{ex}^{i,re}\right),\quad
i=\mathrm{Na}^{+},\mathrm{Cl}^{-}.$
In addition, for the increased current through the $\mathrm{Na/K}$ pump in Eq.
(54), we have
$z^{Na}e\Delta J_{p,k}^{Na}+z^{K}e\Delta J_{p,k}^{K}=\Delta I_{k},\ k=gl,ax.$
By Taylor expansion, we approximate the increased current through the
$\mathrm{Na/K}$ pump due to the extracellular $\mathrm{K^{+}}$ concentration
change as
$\Delta I_{k}\approx
2\left(\frac{K_{K1}I_{k}^{re,1}}{c_{ex}^{K,re}(c_{ex}^{K,re}+K_{K1})}+\frac{K_{K2}I_{k}^{re,2}}{c_{ex}^{K,re}(c_{ex}^{K,re}+K_{K2})}\right)\Delta
c_{ex}^{K},$ (105)
where $I_{k}^{re,1}$ and $I_{k}^{re,2}$ are the resting-state currents through
the $\alpha_{1}$- and $\alpha_{2}$-isoforms of the Na/K pump on the glial
membrane $(k=gl)$ or axon membrane $(k=ax)$.
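The Taylor coefficient in Eq. (105) is what one obtains for a pump current of Hill form $I(c)=I_{\max}\left(c/(c+K_{K})\right)^{2}$. The sketch below checks that linearization numerically; the functional form and parameter values here are assumptions for illustration only, not the model's calibrated pump.

```python
# Sketch: verify the Taylor coefficient of Eq. (105) for an assumed pump
# current I(c) = I_max * (c / (c + K_K))^2 (single isoform).
# Parameter values are illustrative assumptions.
I_max, K_K = 1.0, 3.5                # assumed pump scale and half-saturation (mM)
c_re = 3.0                           # mM, assumed resting extracellular K+
dc = 1e-4                            # mM, small perturbation

def I(c):
    return I_max * (c / (c + K_K)) ** 2

I_re = I(c_re)
# Linearized increase, matching the per-isoform coefficient in (105):
dI_linear = 2 * K_K * I_re / (c_re * (c_re + K_K)) * dc
dI_exact = I(c_re + dc) - I(c_re)

assert abs(dI_exact - dI_linear) / abs(dI_exact) < 1e-3
```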
By comparison between Eq. (53) and Eq. (105), we have
$\Delta I_{k}=o\left(g_{gl}^{K}\Delta E_{gl}^{K}\right),\quad\ k=gl,ax.$ (106)
In all, based on the estimates in Eqs. (103), (104) and (106), the dominant
term on the right-hand side of Eq. (54) is
$\displaystyle\sum_{i}z^{i}e\mathcal{M}_{gl}\left(J_{p,gl}^{i}+J_{c,gl}^{i}\right)+\sum_{i}z^{i}e\mathcal{M}_{ax}\left(J_{p,ax}^{i}+J_{c,ax}^{i}\right)$
$\displaystyle\approx\mathcal{M}_{gl}g_{gl}^{K}\left(\Delta V_{gl}-\Delta
E_{gl}^{K}\right),$
where we use the fact that at the resting state, the transmembrane currents
through both the axon membrane and the glial membrane are negligible compared
to the source term $g_{gl}^{K}\Delta E_{gl}^{K}$.
## Appendix E Comparison between $\Delta\phi_{gl}$ and $\Delta\phi_{ex}$
In this section, we show that the scale of the glial electric potential
variation $\Delta\phi_{gl}$ is much larger than that of the extracellular
electric potential variation $\Delta\phi_{ex}$ in the stimulated region. Based
on Eq. (63), we know
$O\left(\frac{\eta_{gl}\sigma_{gl}}{\eta_{ex}\sigma_{ex}}\right)=10^{-2},\quad\
O\left(\frac{\tau_{ex}eD_{ex}^{\mathrm{diff}}}{\sigma_{ex}}\Delta
c_{sti}\right)=10^{-6}.$ (107)
If $\Delta\phi_{ex}\neq o(\Delta\phi_{gl})$, then, based on Eqs. (63) and
(107), we would have
$O\left(\Delta\phi_{gl}\right)<10^{-5}.$
Therefore, the right-hand side of Eq. (62) becomes
$\left|\frac{g_{gl}^{K}}{e}\left(\Delta V_{gl}-\Delta
E_{gl}^{K}\right)\right|\approx\left|\frac{g_{gl}^{K}}{e}\Delta
E_{gl}^{K}\right|=O\left(10^{-8}\right),$ (108)
where we use the estimate $\Delta
E_{gl}^{K}=O\left(10^{-3}\right)$ from Eqs. (53) and (50), and
$O\left(\Delta
V_{gl}\right)=O\left(\Delta\phi_{gl}-\Delta\phi_{ex}\right)<10^{-5}.$
At the same time, the left-hand side of Eq. (62) gives
$\left|\frac{2}{r_{sti}}\frac{\eta_{gl}\sigma_{gl}}{\mathcal{M}_{gl}}\frac{\Delta\phi_{gl}}{r^{*}}\right|<O\left(10^{-11}\right).$
(109)
In Eq. (62), based on Eqs. (109) and (108), the order of the right-hand side
does not match that of the left-hand side. Therefore, we conclude that
## Appendix F Estimation of extracellular $\mathrm{Na^{+}}$ and
$\mathrm{K^{+}}$ transport
For the $\mathrm{K^{+}}$ clearance in the stimulated extracellular region in
Eq. (72), based on Eqs. (53) and (67), the effect of average glial
transmembrane $\mathrm{K}^{+}$ flux in the stimulated region is
$\lambda_{gl}^{m,K}=\frac{\mathcal{M}_{gl}g_{gl}^{K}h_{\epsilon}k_{B}T}{z^{K}\left(1+h_{\epsilon}\right)e^{2}c_{ex}^{K,re}}.$
(110)
For the $\mathrm{K^{+}}$ flux through the extracellular pathway, we consider
only the effects of the diffusion and electric drift terms in the radial
$\mathrm{K^{+}}$ flux. The fluid flows in the extracellular space from the
non-stimulated region to the stimulated region, so the convection flux in the
extracellular space is a consequence of osmosis and flattens the variation of
osmotic pressure in the stimulated region.
The scale of the radial diffusive $\mathrm{K^{+}}$ flux in the extracellular
space can be approximated as
$O\left(-D_{ex}^{K}\tau_{ex}\frac{dc_{ex}^{K}}{dr}\right)=\frac{D_{ex}^{K}\tau_{ex}}{r^{*}}\Delta
c_{ex}^{K}.$ (111)
The scale of the radial electric drift $\mathrm{K^{+}}$ flux in the
extracellular space is
$\displaystyle
O\left(-\frac{D_{ex}^{K}\tau_{ex}e}{k_{B}T}c_{ex}^{K}\frac{d\phi_{ex}}{dr}\right)$
$\displaystyle=\frac{D_{ex}^{K}\tau_{ex}e}{k_{B}T}c_{ex}^{K}\frac{\Delta\phi_{ex}}{r^{*}}$
(112)
$\displaystyle\approx-\frac{\eta_{gl}\sigma_{gl}D_{ex}^{K}\tau_{ex}}{\eta_{ex}\sigma_{ex}\left(1+h_{\epsilon}\right)r^{*}}\Delta
c_{ex}^{K},$
where $\Delta\phi_{ex}$ used the estimation from Eq. (68).
Based on Eqs. (111) and (112), we note that the electric drift
$\mathrm{K^{+}}$ flux is in the opposite radial direction to the diffusive
$\mathrm{K^{+}}$ flux in the extracellular space. At the same time, the
electric drift $\mathrm{K^{+}}$ flux has a much smaller magnitude than the
diffusive $\mathrm{K^{+}}$ flux because the ratio $R_{ex}^{K}$ between the
electric drift and diffusion terms is
$R_{ex}^{K}=\frac{\eta_{gl}\sigma_{gl}}{\eta_{ex}\sigma_{ex}(1+h_{\epsilon})}=o(1).$
(113)
Therefore, in Eq. (72), the average effect of the $\mathrm{K^{+}}$ transport
through the extracellular pathway can be approximated as
$\lambda_{ex}^{K}=\frac{2\eta_{ex}D_{ex}^{K}\tau_{ex}}{r_{sti}r^{*}},$ (114)
where we used the ratio between volume $V_{S}$ and the effective radial
surface.
In Eq. (73), we first consider the effect of the $\mathrm{Na}^{+}$ fluxes
through the extracellular pathway. Similar to Eq. (111), the scale of the
radial diffusive $\mathrm{Na^{+}}$ flux in the extracellular space is
$O\left(-D_{ex}^{Na}\tau_{ex}\frac{dc_{ex}^{Na}}{dr}\right)=\frac{D_{ex}^{Na}\tau_{ex}}{r^{*}}\Delta
c_{ex}^{Na}.$ (115)
The scale of the radial electric drift flux for $\mathrm{Na^{+}}$ in the
extracellular space is
$\displaystyle
O\left(-\frac{D_{ex}^{Na}\tau_{ex}e}{k_{B}T}c_{ex}^{Na}\frac{d\phi_{ex}}{dr}\right)$
$\displaystyle=\frac{D_{ex}^{Na}\tau_{ex}e}{k_{B}T}c_{ex}^{Na}\frac{\Delta\phi_{ex}}{r^{*}}$
(116)
$\displaystyle\approx-\frac{\eta_{gl}\sigma_{gl}D_{ex}^{Na}\tau_{ex}}{\eta_{ex}\sigma_{ex}\left(1+h_{\epsilon}\right)r^{*}}\frac{c_{ex}^{Na}}{c_{ex}^{K}}\Delta
c_{ex}^{K}$
For $\mathrm{Na^{+}}$ in the extracellular space, the radial electric drift
$\mathrm{Na^{+}}$ flux is in the same direction as the radial diffusive
$\mathrm{K^{+}}$ flux since $\Delta c_{ex}^{Na}$ is negative in the stimulated
region.
The scale of the radial diffusive $\mathrm{Na^{+}}$ flux is at the same level
as that of the radial electric drift $\mathrm{Na^{+}}$ flux in the
extracellular space. From Eqs. (115) and (116), the ratio $R_{ex}^{Na}$ is
$R_{ex}^{Na}=\frac{\eta_{gl}\sigma_{gl}}{\eta_{ex}\sigma_{ex}\left(1+h_{\epsilon}\right)}\frac{c_{ex}^{Na}}{c_{ex}^{K}}=O(1),$
(117)
since $\Delta c_{ex}^{Na}$ and $\Delta c_{ex}^{K}$ are of the same leading
order. The $\mathrm{Na^{+}}$ flux through the glial membrane is much smaller
than the $\mathrm{K^{+}}$ flux, so that
$\lambda_{gl}^{m,Na}=o\left(\lambda_{gl}^{m,K}\right).$ (118)
This is because the glial membrane conductances satisfy
$g_{gl}^{Na}=o\left(g_{gl}^{K}\right)$. The effect of the $\mathrm{Na^{+}}$
flux through the glial membrane can be neglected in Eq. (73), because of Eq.
(118) and because the diffusive fluxes in Eqs. (115) and (111) are of the same
magnitude. In sum, for Eq. (73), we get
$\lambda_{ex}^{Na,1}=\frac{2\eta_{ex}D_{ex}^{Na}\tau_{ex}}{r_{sti}r^{*}},\quad\lambda_{ex}^{Na,2}=\frac{2\eta_{gl}\sigma_{gl}D_{ex}^{Na}\tau_{ex}c_{ex}^{Na,re}}{r_{sti}\sigma_{ex}\left(1+h_{\epsilon}\right)r^{*}c_{ex}^{K,re}}.$
where we used the ratio between volume $V_{S}$ and the effective radial
surface.
At the end of this section, we consider the solution of the coupled dynamical
system of Eqs. (72) and (73),
$\frac{d}{dt}\left(\begin{aligned} &\Delta c_{ex}^{K}\\\ &\Delta
c_{ex}^{Na}\end{aligned}\right)=A\left(\begin{aligned} &\Delta c_{ex}^{K}\\\
&\Delta c_{ex}^{Na}\end{aligned}\right),$ (119)
where
$A=\left[\begin{array}[]{cc}A_{11}&0\\\
A_{21}&A_{22}\end{array}\right]=\left[\begin{array}[]{cc}-\left(\lambda_{gl}^{m,K}+\lambda_{ex}^{K}\right)/\eta_{ex}^{re}&0\\\
\lambda_{ex}^{Na,2}/\eta_{ex}^{re}&-\lambda_{ex}^{Na,1}/\eta_{ex}^{re}\end{array}\right].$
(120)
In the system (119), we assume that $\eta_{ex}$ remains at its resting state
$(\eta_{ex}^{re})$ and the initial condition is
$\left(\begin{aligned} \Delta c_{ex}^{K,0}\\\ \Delta
c_{ex}^{Na,0}\end{aligned}\right)=\left(\begin{aligned} \Delta c_{sti}\\\
-\Delta c_{sti}\end{aligned}\right).$ (121)
The solution for System (119) in the time interval $t\in[0,T]$ is
$\left\\{\begin{aligned} \Delta c_{ex}^{K}(t)=&\Delta
c_{sti}\exp\left(A_{11}t\right),\\\ \Delta c_{ex}^{Na}(t)=&\frac{A_{21}\Delta
c_{sti}}{A_{11}-A_{22}}\left(\exp\left(A_{11}t\right)-\exp\left(A_{22}t\right)\right)\\\
&-\Delta c_{sti}\exp\left(A_{22}t\right),\end{aligned}\right.$ (122)
where $T$ is the time interval between successive action potentials in the
axon compartment. There are $n\ (=\frac{T_{sti}}{f_{m}})$ stimuli in the time
interval $[0,T_{sti}=nT]$, for which we have
$\Delta c_{ex}^{K}(iT^{+})=\Delta c_{ex}^{K}(iT^{-})+\Delta c_{sti},\quad
\Delta c_{ex}^{Na}(iT^{+})=\Delta c_{ex}^{Na}(iT^{-})-\Delta
c_{sti},\quad i=1,\dots,n-1.$
In the above, we treat the extracellular $\mathrm{K^{+}}$ and
$\mathrm{Na^{+}}$ concentrations as changing immediately due to axon firing.
Using Eq. (122), we have
$\Delta c_{ex}^{K}(nT)=\Delta
c_{sti}\frac{\exp\left(A_{11}T\right)-\exp\left((n+1)A_{11}T\right)}{1-\exp\left(A_{11}T\right)},$
(123)
and
$\displaystyle\Delta c_{ex}^{Na}(nT)=$
$\displaystyle\sum_{i=1}^{n}\frac{A_{21}\Delta
c_{ex}^{K}((i-1)T)}{A_{11}-A_{22}}\left(\exp\left(A_{11}T\right)-\exp\left(A_{22}T\right)\right)$
(124) $\displaystyle\exp\left((n-i)A_{22}T\right)-\Delta
c_{sti}\sum_{i=1}^{n}\exp\left(iA_{22}T\right),$
where
$\Delta c_{ex}^{K}(jT)=\Delta
c_{sti}\frac{1-\exp\left((j+1)A_{11}T\right)}{1-\exp\left(A_{11}T\right)},\ \
j=0,1,\ldots n-1.$
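The closed-form single-interval solution (122) and the accumulated response (123) can be verified numerically. In the sketch below, the rate constants $A_{11}$, $A_{21}$, $A_{22}$ are illustrative values with the signs and lower-triangular structure of (120), not the physiological rates.

```python
import math

# Sketch: check the closed-form solution (122) of the linear system
# (119)-(120), and the geometric accumulation (123) after n stimuli.
# Rate constants below are illustrative assumptions.
A11, A21, A22 = -2.0, 0.5, -1.0      # 1/s, lower-triangular structure of (120)
dc_sti, T, n = 1.0, 0.1, 5           # perturbation scale, period (s), stimuli

def closed_form(t, cK0, cNa0):
    """Exact solution of x' = A x for lower-triangular A, cf. Eq. (122)."""
    cK = cK0 * math.exp(A11 * t)
    cNa = (A21 * cK0 / (A11 - A22)) * (math.exp(A11 * t) - math.exp(A22 * t)) \
          + cNa0 * math.exp(A22 * t)
    return cK, cNa

# Check (122) against a fine forward-Euler integration from (121).
cK, cNa, dt = dc_sti, -dc_sti, 1e-5
for _ in range(round(T / dt)):
    cK, cNa = cK + dt * A11 * cK, cNa + dt * (A21 * cK + A22 * cNa)
cK_cf, cNa_cf = closed_form(T, dc_sti, -dc_sti)
assert abs(cK - cK_cf) < 1e-3 and abs(cNa - cNa_cf) < 1e-3

# Check (123): decay over each period, add dc_sti at each stimulus.
x = dc_sti
for _ in range(n - 1):
    x = x * math.exp(A11 * T) + dc_sti
x *= math.exp(A11 * T)                # decay over the final period to t = nT
r = math.exp(A11 * T)
geometric = dc_sti * (r - r ** (n + 1)) / (1 - r)
assert math.isclose(x, geometric)
```

The second assertion confirms that iterating the jump conditions reproduces the geometric sum in Eq. (123) exactly.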
## Appendix G Spatial Distribution of Velocity and Osmotic Pressure
Figure 16: Changes of $\eta_{ex}$ and $\eta_{gl}$ along the longitudinal direction at $r=1.5\mathrm{\mu m}$ at $t=0.1\mathrm{s},0.5\mathrm{s},2\mathrm{s}$.
Figure 17: Spatial distribution of velocity in the radial direction during and after a train of stimuli.
Figure 18: Spatial distribution of osmotic pressure changes from the resting state during and after a train of stimuli.
Table 3: Parameters in Optic Nerve Model
Parameters | Value | Parameters | Value
---|---|---|---
$R_{a}$ | $4.8\times 10^{-5}\mathrm{~{}m}$ (Ref.[35, 9]) | $\mu$ | $7\times 10^{-4}\mathrm{~{}Pa}\cdot\mathrm{s}$ (Ref.[39])
$R_{b}$ | $6\times 10^{-5}\mathrm{m}$ (Ref.[71]) | $c_{csf,eye}^{Na}$ | $111\ \mathrm{mM}$ (Ref.[35])
$L$ | $1.5\times 10^{-2}\mathrm{~{}m}$ (Ref.[35]) | $c_{\text{csf,eye }}^{\text{K }}$ | $3\ \mathrm{mM}$ (Ref.[35])
$e$ | $1.602\times 10^{-19}\mathrm{~{}A}\cdot\mathrm{s}$ | $c_{gl}^{\text{Na,re }}$ | $7.57\ \mathrm{mM}$ (*)
$k_{B}$ | $1.38\times 10^{-23}\mathrm{~{}J}/\mathrm{K}$ | $c_{gl}^{K,re}$ | $100.84\ \mathrm{mM}$ (*,Ref.[35])
$T$ | $296.15\mathrm{~{}K}$ (Ref.[35]) | $c_{ax}^{\text{Na,re}}$ | $10.17\ \mathrm{mM}$ (*)
$\eta_{ax}^{re}$ | $5\times 10^{-1}$ (Ref.[35]) | $c_{ax}^{K,re}$ | $100.04\ \mathrm{mM}$ (*)
$\eta_{gl}^{re}$ | $4\times 10^{-1}$ (Ref.[35]) | $A^{re}_{ax,gl}$ | $105\ \mathrm{mM}$ (*)
$\eta_{ex}^{re}$ | $1\times 10^{-1}$ (Ref.[35]) | $\tau_{ex}^{OP}$ | $0.16$ (Ref.[39, 38])
$\mathcal{M}_{ax}$ | $5.9\times 10^{6}\mathrm{~{}m}^{-1}$ (Ref.[53]) | $\tau_{ex}^{SAS}$ | $1$ (*)
$\mathcal{M}_{gl}$ | $1.25\times 10^{7}\mathrm{~{}m}^{-1}$ (Ref.[53]) | $\tau_{gl}$ | $0.5$ (*)
$z^{Na,K}$ | $1$ | $p_{CSF}$ | $1.3\times 10^{3}\mathrm{~{}Pa}$ (Ref.[5])
$z^{Cl}$ | $-1$ | $p_{ICP}$ | $4\times 10^{3}\mathrm{~{}Pa}$ (Ref.[5])
$z^{ax,gl}$ | $-1$ (*) | $p_{OBP}$ | $0\mathrm{~{}Pa}$ (Ref.[5])
$\gamma_{\text{ax,gl}}$ | $1$ (Ref.[39, 38]) | $D_{ex,ax}^{Na}$ | $1.39\times 10^{-9}\mathrm{~{}m}^{2}/\mathrm{s}$ (Ref.[39])
$\gamma_{pia}$ | $1$ (Ref.[39, 38]) | $D_{ex,ax}^{K}$ | $2.04\times 10^{-9}\mathrm{~{}m}^{2}/\mathrm{s}$ (Ref.[39])
$K_{\text{Na1,Na2}}$ | $2.3393\mathrm{mM}$ (Ref.[80]) | $D_{ex,ax}^{Cl}$ | $2.12\times 10^{-9}\mathrm{~{}m}^{2}/\mathrm{s}$ (Ref.[39])
$K_{K1}$ | $1.6154\mathrm{mM}$ (Ref.[80]) | $D_{gl}^{Na}$ | $1.39\times 10^{-11}\mathrm{~{}m}^{2}/\mathrm{s}$ (Ref.[39])
$K_{K2}$ | $0.1657\mathrm{mM}$ (Ref.[80]) | $D_{gl}^{K}$ | $2.04\times 10^{-11}\mathrm{~{}m}^{2}/\mathrm{s}$ (Ref.[39])
$I_{gl,1}$ | $4.78\times 10^{-4}\mathrm{~{}A}/\mathrm{m}^{2}$ (**,Ref.[80]) | $D_{gl}^{Cl}$ | $2.12\times 10^{-11}\mathrm{~{}m}^{2}/\mathrm{s}$ (Ref.[39])
$I_{gl,2}$ | $6.5\times 10^{-5}\mathrm{~{}A}/\mathrm{m}^{2}$ (**,Ref.[80]) | $k_{ex}^{OP}$ | $1.3729\times 10^{-8}\mathrm{~{}m}^{2}/\mathrm{V}\cdot\mathrm{s}$ (Ref.[38])
$I_{ax,1}$ | $9.56\times 10^{-4}\mathrm{~{}A}/\mathrm{m}^{2}$ (**,Ref.[80]) | $k_{ex}^{SAS}$ | $0\mathrm{~{}m}^{2}/\mathrm{V}\cdot\mathrm{s}$ (*)
$I_{ax,2}$ | $1.3\times 10^{-4}\mathrm{~{}A}/\mathrm{m}^{2}$ (**,Ref.[80]) | $K_{ax}$ | $1.67\times 10^{6}\mathrm{~{}Pa}$ (Ref.[28, 37])
$g_{gl}^{Na}$ | $2.2\times 10^{-3}\mathrm{~{}S}/\mathrm{m}^{2}$ (Ref.[39]) | $K_{gl}$ | $8.33\times 10^{5}\mathrm{~{}Pa}$ (Ref.[28, 37])
$g_{gl}^{K}$ | $2.1\mathrm{~{}S}/\mathrm{m}^{2}$ (Ref.[39]) | $L_{dr}^{m}$ | $8.89\times 10^{-13}\mathrm{~{}m}/\mathrm{Pa}\cdot\mathrm{s}$ (Ref.[38, 80])
$g_{gl}^{Cl}$ | $2.2\times 10^{-3}\mathrm{~{}S}/\mathrm{m}^{2}$ (Ref.[39]) | $L_{pia}^{m}$ | $8.89\times 10^{-13}\mathrm{~{}m}/\mathrm{Pa}\cdot\mathrm{s}$ (Ref.[38, 80])
$g_{leak}^{Na}$ | $4.8\times 10^{-3}\mathrm{~{}S}/\mathrm{m}^{2}$ (**,Ref.[61]) | $L_{gl}^{m}$ | $1.34\times 10^{-13}\mathrm{~{}m}/\mathrm{Pa}\cdot\mathrm{s}$ (Ref.[38, 80])
$g_{leak}^{K}$ | $2.2\times 10^{-2}\mathrm{~{}S}/\mathrm{m}^{2}$ (**,Ref.[61]) | $L_{ax}^{m}$ | $7.954\times 10^{-14}\mathrm{~{}m}/\mathrm{Pa}\cdot\mathrm{s}$ (Ref.[67])
$\bar{g}^{Na}$ | $1.357\times 10^{1}\mathrm{~{}S}/\mathrm{m}^{2}$ (**,Ref.[61]) | $\kappa_{gl}$ | $9.366\times 10^{-19}\mathrm{~{}m}^{2}$ (Ref.[38, 80])
$\bar{g}^{K}$ | $2.945\mathrm{~{}S}/\mathrm{m}^{2}$ (**,Ref.[61]) | $\kappa_{ax}$ | $1.33\times 10^{-16}\mathrm{~{}m}^{2}$ (Ref.[38, 80])
$g_{ax}^{Cl}$ | $1.5\times 10^{-1}\mathrm{~{}S}/\mathrm{m}^{2}$ (*) | $\kappa_{ex}^{OP}$ | $3.99\times 10^{-16}\mathrm{~{}m}^{2}$ (**,Ref.[38, 80])
$G_{pia}^{Na,K,Cl}$ | $3\mathrm{~{}S}/\mathrm{m}^{2}$ (*) | $\kappa_{ex}^{SAS}$ | $1.33\times 10^{-14}\mathrm{~{}m}^{2}$ (**,Ref.[38, 80])
* a
Note: '*' indicates values estimated or inferred from the concentration balance.
* b
Note: '**' indicates values deduced proportionally from the cited reference.
## References
* [1] Elizabeth A Adams, Hyung Min Choi, Cecilia Y Cheung, and Robert A Brace. Comparison of amniotic and intramembranous unidirectional permeabilities in late-gestation sheep. American journal of obstetrics and gynecology, 193(1):247–255, 2005.
* [2] Gregoire Allaire, Andro Mikelić, and Andrey Piatnitski. Homogenization of the linearized ionic transport equations in rigid periodic porous media. Journal of Mathematical Physics, 51(12):123103, 2010.
* [3] KH Andres, M Von Düring, K Muszynski, and RF Schmidt. Nerve fibres and their terminals of the dura mater encephali of the rat. Anatomy and embryology, 175(3):289–301, 1987.
* [4] David A Atchison, George Smith, and George Smith. Optics of the human eye, volume 2. Butterworth-Heinemann Oxford, 2000.
* [5] Leah R Band, Cameron L Hall, Giles Richardson, Oliver E Jensen, Jennifer H Siggers, and Alexander JE Foss. Intracellular flow in optic nerve axons: a mechanism for cell death in glaucoma. Investigative ophthalmology & visual science, 50(8):3750–3758, 2009.
* [6] Alba Bellot-Saez, Orsolya Kekesi, John W Morley, and Yossi Buskila. Astrocytic modulation of neuronal excitability through k+ spatial buffering. Neuroscience & Biobehavioral Reviews, 77:87–97, 2017.
* [7] George B Benedek and Felix MH Villars. Physics with illustrative examples from medicine and biology: mechanics. Springer Science & Business Media, 2000.
* [8] Walter F Boron and Emile L Boulpaep. Medical physiology E-book. Elsevier Health Sciences, 2016.
* [9] H Bracho, PM Orkand, and RK Orkand. A further study of the fine structure and membrane properties of neuroglia in the optic nerve of necturus. Journal of neurobiology, 6(4):395–410, 1975.
* [10] Kevin C Chen and Charles Nicholson. Spatial buffering of potassium ions in brain extracellular space. Biophysical journal, 78(6):2776–2797, 2000.
* [11] SY Chiu, JM Ritchie, RB Rogart, and D Stagg. A quantitative description of membrane currents in rabbit myelinated nerve. The Journal of physiology, 292(1):149–166, 1979.
* [12] I Dietzel, U Heinemann, G Hofmeier, and HD Lux. Stimulus-induced changes in extracellular na+ and cl- concentration in relation to changes in the size of the extracellular space. Experimental brain research, 46(1):73–84, 1982.
* [13] Jens P Dreier and Clemens Reiffurth. The stroke-migraine depolarization continuum. Neuron, 86(4):902–922, 2015.
* [14] Bob Eisenberg, Yunkyong Hyon, and Chun Liu. Energy variational analysis of ions in water and channels: Field theory for primitive models of complex ionic fluids. The Journal of Chemical Physics, 133(10):104104, 2010.
* [15] Aristotelis Filippidis, Sotirios Zarogiannis, Maria Ioannou, Konstantinos Gourgoulianis, Paschalis-Adam Molyvdas, and Chrissi Hatzoglou. Transmembrane resistance and histology of isolated sheep leptomeninges. Neurological research, 32(2):205–208, 2010.
* [16] Aristotelis S Filippidis, Sotirios G Zarogiannis, Maria Ioannou, Konstantinos Gourgoulianis, Paschalis-Adam Molyvdas, and Chrissi Hatzoglou. Permeability of the arachnoid and pia mater. the role of ion channels in the leptomeningeal physiology. Child’s Nervous System, 28(4):533–540, 2012.
* [17] Richard Fitzhugh. Thresholds and plateaus in the hodgkin-huxley nerve equations. The Journal of general physiology, 43(5):867–896, 1960.
* [18] B Frankenhaeuser and AL Hodgkin. The after-effects of impulses in the giant nerve fibres of loligo. The Journal of physiology, 131(2):341–376, 1956.
* [19] Gerald G Fuller and Jan Vermant. Complex fluid-fluid interfaces: rheology and structure. Annual review of chemical and biomolecular engineering, 3:519–543, 2012.
* [20] Fabrizio Gabbiani and Steven James Cox. Mathematics for neuroscientists. Academic Press, 2017.
* [21] Junyuan Gao, X Sun, V Yatsula, RS Wymore, and RT Mathias. Isoform-specific function and distribution of na/k pumps in the frog lens epithelium. The Journal of membrane biology, 178(2):89–101, 2000.
* [22] Bruce S Gardiner, David W Smith, Michael Coote, and Jonathan G Crowston. Computational modeling of fluid flow and intra-ocular pressure following glaucoma surgery. PLoS One, 5(10):e13178, 2010.
* [23] William M Gelbart and Avinoam Ben-Shaul. The “new” science of “complex fluids”. The Journal of Physical Chemistry, 100(31):13169–13189, 1996.
* [24] CH Hatzoglou, KI Gourgoulianis, and PA Molyvdas. Effects of snp, ouabain, and amiloride on electrical potential profile of isolated sheep pleura. Journal of Applied Physiology, 90(4):1565–1569, 2001.
* [25] Sohan Singh Hayreh. The sheath of the optic nerve. Ophthalmologica, 189(1-2):54–63, 1984.
* [26] Sohan Singh Hayreh. Ischemic optic neuropathy. Progress in retinal and eye research, 28(1):34–62, 2009.
* [27] Knut Holthoff and Otto W Witte. Directed spatial potassium redistribution in rat neocortex. Glia, 29(3):288–292, 2000.
* [28] Yi Hua, Andrew P Voorhees, and Ian A Sigal. Cerebrospinal fluid pressure: revisiting factors influencing optic nerve head biomechanics. Investigative ophthalmology & visual science, 59(1):154–165, 2018.
* [29] Eric R Kandel, James H Schwartz, Thomas M Jessell, Steven Siegelbaum, A James Hudspeth, and Sarah Mack. Principles of neural science, volume 4. McGraw-hill New York, 2000.
* [30] RD Keynes. The ionic movements during nervous activity. The Journal of physiology, 114(1-2):119, 1951.
* [31] H Esriel Killer, Hubert R Laeng, and Peter Groscurth. Lymphatic capillaries in the meninges of the human optic nerve. Journal of neuro-ophthalmology: the official journal of the North American Neuro-Ophthalmology Society, 19(4):222–228, 1999.
* [32] HE Killer, HR Laeng, J Flammer, and P Groscurth. Architecture of arachnoid trabeculae, pillars, and septa in the subarachnoid space of the human optic nerve: anatomy and clinical considerations. British Journal of Ophthalmology, 87(6):777–781, 2003.
* [33] Paulo Kofuji and Eric A Newman. Potassium buffering in the central nervous system. Neuroscience, 129(4):1043–1054, 2004.
* [34] J Murali Krishnan, Abhijit P Deshpande, and PB Sunil Kumar. Rheology of complex fluids. Springer, 2010.
* [35] SW Kuffler, JG Nicholls, and RK Orkand. Physiological properties of glial cells in the central nervous system of amphibia. Journal of Neurophysiology, 29(4):768–787, 1966.
* [36] Fu Keung Li, Chi Ho To, JK Leung, Tak Mao Chan, and Ka Neng Lai. Electrophysiology and glucose transport of human peritoneal mesothelial cells: implications for peritoneal dialysis. Peritoneal dialysis international, 21(2):115–121, 2001.
* [37] Yun-Bi Lu, Kristian Franze, Gerald Seifert, Christian Steinhäuser, Frank Kirchhoff, Hartwig Wolburg, Jochen Guck, Paul Janmey, Er-Qing Wei, Josef Käs, et al. Viscoelastic properties of individual glial cells and neurons in the cns. Proceedings of the National Academy of Sciences, 103(47):17759–17764, 2006.
* [38] Duane Tearaitoa Kingwell Malcolm. A computational model of the ocular lens. PhD thesis, ResearchSpace@ Auckland, 2006.
* [39] RICHARD T Mathias. Steady-state voltages, ion fluxes, and volume regulation in syncytial tissues. Biophysical journal, 48(3):435, 1985.
* [40] STUART McLAUGHLIN and RICHARD T Mathias. Electro-osmosis and the reabsorption of fluid in renal proximal tubules. The Journal of general physiology, 85(5):699–728, 1985.
* [41] William H Morgan, Chandrakumar Balaratnasingam, Christopher RP Lind, Steve Colley, Min H Kang, Philip H House, and Dao-Yi Yu. Cerebrospinal fluid pressure and the eye. British Journal of Ophthalmology, 100(1):71–77, 2016.
* [42] Yoichiro Mori. A multidomain model for ionic electrodiffusion and osmosis with an application to cortical spreading depression. Physica D: Nonlinear Phenomena, 308:94–108, 2015.
* [43] Shingo Murakami and Yoshihisa Kurachi. Mechanisms of astrocytic k+ clearance and swelling under high extracellular k+ concentrations. The Journal of Physiological Sciences, 66(2):127–142, 2016.
* [44] Maiken Nedergaard and Steven A Goldman. Glymphatic failure as a final common pathway to dementia. Science, 370(6512):50–56, 2020.
* [45] John G Nicholls, A Robert Martin, Bruce G Wallace, and Paul A Fuchs. From neuron to brain, volume 271. Sinauer Associates Sunderland, MA, 2001.
* [46] Charles Nicholson. Diffusion and related transport mechanisms in brain tissue. Reports on progress in Physics, 64(7):815, 2001.
* [47] Richard E Norman, John G Flanagan, Ian A Sigal, Sophie MK Rausch, Inka Tertinegg, and C Ross Ethier. Finite element modeling of the human sclera: influence on optic nerve head biomechanics and connections with glaucoma. Experimental eye research, 93(1):4–12, 2011.
* [48] RK Orkand, JG Nicholls, and SW Kuffler. Effect of nerve impulses on the membrane potential of glial cells in the central nervous system of amphibia. Journal of neurophysiology, 29(4):788–806, 1966.
* [49] Ivar Østby, Leiv Øyehaug, Gaute T Einevoll, Erlend A Nagelhus, Erik Plahte, Thomas Zeuthen, Catherine M Lloyd, Ole P Ottersen, and Stig W Omholt. Astrocytic mechanisms explaining neural-activity-induced shrinkage of extraneuronal space. PLoS computational biology, 5(1):e1000272, 2009.
* [50] Mona Pache and Peter Meyer. Morphological changes of the retrobulbar optic nerve and its meningeal sheaths in glaucoma. Ophthalmologica, 220(6):393–396, 2006.
* [51] D KEITH Payne, GARY T Kinasewitz, and ENRIQUE Gonzalez. Comparative permeability of canine visceral and parietal pleura. Journal of Applied Physiology, 65(6):2558–2564, 1988.
* [52] MA Pérez-Pinzón, LIAN Tao, and CHARLES Nicholson. Extracellular potassium, volume fraction, and tortuosity in rat hippocampal ca1, ca3, and cortical slices during ischemia. Journal of Neurophysiology, 74(2):565–573, 1995.
* [53] CH Pilgrim, I Reisert, and D Grab. Volume densities and specific surfaces of neuronal and glial tissue elements in the rat supraoptic nucleus. Journal of Comparative Neurology, 211(4):427–431, 1982.
* [54] Nadja Ray, Tycho van Noorden, Florian Frank, and Peter Knabner. Multiscale modeling of colloid and fluid dynamics in porous media including an evolving microstructure. Transport in porous media, 95(3):669–696, 2012.
* [55] Marte Julie Sætra, Geir Halnes, and Gaute T Einevoll. An electrodiffusive neuron-extracellular-glia model with somatodendritic interactions. bioRxiv, 2020.
* [56] Juan J Salazar, Ana I Ramírez, Rosa De Hoz, Elena Salobrar-Garcia, Pilar Rojas, José A Fernández-Albarral, Inés López-Cuenca, Blanca Rojas, Alberto Triviño, and José M Ramírez. Anatomy of the human optic nerve: Structure and function. In Optic Nerve. IntechOpen, 2018.
* [57] S Sarkos, CH Hatzoglou, J Dahabre, KI Gourgoulianis, and PA Molyvdas. Effect of amiloride in human and sheep parietal pleura. Respiratory physiology & neurobiology, 132(2):233–237, 2002.
* [58] John B Selhorst and Yanjun Chen. The optic nerve. In Seminars in neurology, volume 29, pages 029–035. © Thieme Medical Publishers, 2009.
* [59] M Simon. Peritoneal mesothelium in vitro: an electrophysiologic study. Peritoneal dialysis international, 16(4):393–397, 1996.
* [60] Justin M Smith, Daniel P Bradley, Michael F James, and Christopher L-H Huang. Physiological studies of cortical spreading depression. Biological Reviews, 81(4):457–481, 2006.
* [61] Zilong Song, Xiulei Cao, and Huaxiong Huang. Electroneutral models for dynamic poisson-nernst-planck systems. Physical Review E, 97(1):012411, 2018.
* [62] Saverio E Spagnolie. Complex fluids in biological systems. Biological and Medical Physics, Biomedical Engineering, 2015.
* [63] Ioannis Stefanidis, Vassilios Liakopoulos, Panagiota Kourti, Sotirios Zarogiannis, Antigoni Poultsidi, Peter R Mertems, Marios Salmas, Chrissi Hatzoglou, Konstantinos Gourgoulianis, and Paschalis-Adam Molyvdas. Amiloride-sensitive sodium channels on the parietal human peritoneum: evidence by ussing-type chamber experiments. Asaio Journal, 53(3):335–338, 2007.
* [64] Keiichiro Susuki. Myelin: a specialized membrane for cell communication. Nature Education, 3(9):59, 2010.
* [65] Ehsan Vaghefi, Duane TK Malcolm, Marc D Jacobs, and Paul J Donaldson. Development of a 3d finite element model of lens microcirculation. Biomedical engineering online, 11(1):69, 2012.
* [66] CH Verikouki, CH Hatzoglou, KI Gourgoulianis, PA Molyvdas, A Kallitsaris, and IE Messinis. Rapid effect of progesterone on transepithelial resistance of human fetal membranes: evidence for non-genomic action. Clinical and Experimental Pharmacology and Physiology, 35(2):174–179, 2008.
* [67] Raimundo Villegas and Gloria M Villegas. Characterization of the membranes in the giant nerve fiber of the squid. The Journal of general physiology, 43(5):73, 1960.
* [68] Konstantinos Vogiatzidis, Chrissi Hatzoglou, Sotirios Zarogiannis, Galatia Matafia, Konstantinos Gourgoulianis, and Paschalis-Adam Molyvdas. $\mu$-opioid influence on transmesothelial resistance of isolated sheep pleura and parietal pericardium. European journal of pharmacology, 530(3):276–280, 2006.
* [69] Anke Wallraff, Rüdiger Köhling, Uwe Heinemann, Martin Theis, Klaus Willecke, and Christian Steinhäuser. The impact of astrocytic gap junctional coupling on potassium buffering in the hippocampus. Journal of Neuroscience, 26(20):5438–5447, 2006.
* [70] Li Wan, Shixin Xu, Maijia Liao, Chun Liu, and Ping Sheng. Self-consistent approach to global charge neutrality in electrokinetics: A surface potential trap model. Physical Review X, 4(1):011042, 2014.
* [71] Ningli Wang. Intraocular and Intracranial Pressure Gradient in Glaucoma, volume 1. Springer, 2019.
* [72] Shixin Xu, Bob Eisenberg, Zilong Song, and Huaxiong Huang. Osmosis through a semi-permeable membrane: a consistent approach to interactions. arXiv preprint arXiv:1806.00646, 2018.
* [73] Shixin Xu, Ping Sheng, and Chun Liu. An energetic variational approach for ion transport. Communications in Mathematical Sciences, 12(4):779–789, 2014.
* [74] S Zarogiannis, P Kourti, C Hatzoglou, V Liakopoulos, A Poultsidi, K Gourgoulianis, PA Molyvdas, and I Stefanidis. Influence of the sodium transport inhibition by amiloride on the transmesothelial resistance of isolated visceral sheep peritoneum. In Advances in peritoneal dialysis. Conference on Peritoneal Dialysis, volume 21, pages 5–8, 2005.
* [75] Sotirios Zarogiannis, Triantafyllia Deligiorgi, Ioannis Stefanidis, Vassilios Liakopoulos, Konstantinos Gourgoulianis, Paschalis Adam Molyvdas, and Chrissi Hatzoglou. Dexamethasone decreases the transmesothelial electrical resistance of the parietal and visceral pleura. The Journal of Physiological Sciences, 59(4):335–339, 2009.
* [76] Sotirios Zarogiannis, Chrissi Hatzoglou, Ioannis Stefanidis, Maria Ioannou, Efrosini Paraskeva, Konstantinos Gourgoulianis, and Paschalis-Adam Molyvdas. Comparison of the electrophysiological properties of the sheep isolated costal and diaphragmatic parietal pleura. Clinical and experimental pharmacology & physiology, 34(1-2):129–131, 2007.
* [77] Sotirios Zarogiannis, Chrissi Hatzoglou, Ioannis Stefanidis, Vassilios Liakopoulos, Konstantinos Gourgoulianis, and Paschalis-Adam Molyvdas. Adrenergic influence on the permeability of sheep diaphragmatic parietal pleura. Respiration, 74(1):118–120, 2007.
* [78] Sotirios Zarogiannis, Vassilios Liakopoulos, Chryssi Hatzoglou, Panagiota Kourti, Konstantinos Vogiatzidis, S Potamianos, T Eleftheriadis, K Gourgoulianis, PA Molyvdas, and I Stefanidis. Effect of sodium-potassium pump inhibition by ouabain on the permeability of isolated visceral sheep peritoneum. Adv Perit Dial, 23:43–47, 2007.
* [79] Sotirios Zarogiannis, Konstantinos Vogiatzidis, Chryssi Hatzoglou, Vassilios Liakopoulos, Spyros Potamianos, T Eleftheriadis, et al. $\mu$-opioid stimulation of isolated parietal sheep peritoneum decreases peritoneal permeability in vitro. Adv. Perit. Dial, 23:34–37, 2007.
* [80] Yi Zhu, Shixin Xu, Robert S Eisenberg, and Huaxiong Huang. A bidomain model for lens microcirculation. Biophysical journal, 116(6):1171–1184, 2019.
* [81] Yi Zhu, Shixin Xu, Robert S Eisenberg, and Huaxiong Huang. A tridomain model for potassium clearance in optic nerve. arXiv preprint arXiv:2012.03303, 2020.
# Reinforcement Learning based Per-antenna Discrete Power Control for Massive
MIMO Systems††thanks: The work was supported by the U.K. Engineering and
Physical Sciences Research Council (EPSRC) under Grants EP/P009549/1 and
EP/P009670/1.
Navneet Garg, Mathini Sellathurai†, Tharmalingam Ratnarajah
The University of Edinburgh, UK, †Heriot-Watt university, Edinburgh, UK
###### Abstract
Power consumption is one of the major issues in massive MIMO (multiple-input multiple-output) systems, causing increased long-term operational cost and overheating issues. In this paper, we consider per-antenna power allocation with a given finite set of power levels towards maximizing the long-term energy efficiency of multi-user systems, while satisfying the QoS (quality of service) constraints at the end users in terms of required SINRs (signal-to-interference-plus-noise ratios), which depend on the channel state. Assuming the channel states vary as a Markov process, the constrained problem is reformulated as an unconstrained one, and the power allocation is obtained via a $Q$-learning algorithm. Simulation results demonstrate the successful minimization of power consumption while achieving the SINR thresholds at the users.
## I Introduction
Massive MIMO systems are a central part of 5G and next-generation wireless networks. Due to the large number of antennas in the array, increased power consumption, i.e., reduced energy efficiency (EE), causes increased operational cost and overheating problems, which lead to a reduced lifespan of the array. The power allocation problem has been widely investigated in the literature via different schemes, such as antenna selection [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], machine/deep learning (ML/DL) [12, 13, 14], convex approximation [15, 16], etc. In massive MIMO systems, transmit correlation with mutual coupling is studied in [17], while power consumption cost is minimized with hybrid precoding in [18]. Antenna selection methods require an NP-hard non-convex problem to be solved, and a separate power allocation step is still needed, which limits their use in practice. The drawback of ML/DL approaches is that they require huge amounts of training data, and the optimal solution is not guaranteed. Convex-approximation-based approaches approximate the non-convex EE expressions by convex ones and obtain sub-optimal power allocations. Therefore, a unified power allocation and antenna selection approach is essential for improving the energy efficiency.
In this paper, we present a discrete power allocation scheme using reinforcement $Q$-learning for the downlink multi-user massive MIMO system towards the maximization of the long-term energy efficiency subject to the total power constraint, per-antenna power constraints, and the quality of service (QoS) constraints at the end users in terms of SINR. Discrete power allocation can also be considered a generalization of antenna selection schemes, which use only two power levels. Assuming the channel changes as a Markov process in the time-slotted model with unknown transition probabilities, the long-term energy efficiency maximization problem is formulated subject to the total and per-antenna power constraints. This constrained problem is recast as an unconstrained one, and $Q$-learning is used to obtain the solution. Simulation results demonstrate that the $Q$-learning algorithm converges and minimizes the power consumption while satisfying the QoS constraints at the users.
## II System Model
Consider a downlink multi-user system, where a base station (BS) is equipped
with a large number of antennas ($M$).
Figure 1: BS with discrete power control $\mathbf{P}=\mathcal{D}\left\{p_{1},\ldots,p_{M}\right\}$, with $\mathbb{E}\left\{\mathbf{s}\mathbf{s}^{\dagger}\right\}=\frac{1}{K}\mathbf{I}_{K}$ and $\mathbb{E}\left\{\mathbf{s}\right\}=\mathbf{0}$.
The BS serves simultaneously a set of $K$ users indexed by
$\mathcal{K}=\left\\{1,\ldots,K\right\\}$. The transmitted signal from the BS
can be expressed as
$\mathbf{x}=\mathbf{P}^{1/2}\sum_{k\in\mathcal{K}}\mathbf{v}_{k}s_{k}=\mathbf{P}^{1/2}\mathbf{V}\mathbf{s},$
(1)
where $\mathbf{s}=\left[s_{1},\ldots,s_{K}\right]^{T}$ is the $K\times 1$ symbol vector to be transmitted such that for each $k^{th}$ user,
$\mathbb{E}\left\\{s_{k}\right\\}=0$,
$\mathbb{E}\left\\{s_{k}s_{j}^{*}\right\\}=\frac{1}{K}\delta_{kj}$ and
$\mathbb{E}\left\\{\mathbf{s}\mathbf{s}^{\dagger}\right\\}=\frac{1}{K}\mathbf{I}$
with $\delta_{kj}$ being the Kronecker delta having value 1 when $k=j$ and $0$
otherwise; the matrix
$\mathbf{V}=\left[\mathbf{v}_{1},\ldots,\mathbf{v}_{K}\right]$ is an $M\times
K$ orthonormal precoder such that
$\mathbf{V}^{\dagger}\mathbf{V}=\mathbf{I}_{K}$; the quantity
$\mathbf{P}=\mathcal{D}\left(p_{1},\ldots,p_{M}\right)$ is an $M\times M$
diagonal power allocation matrix with non-negative entries. Using the above,
the per-antenna and the total power constraints at the BS can be obtained as
$\displaystyle T_{per}(p_{m})$
$\displaystyle=\left[\mathbb{E}\left\\{\mathbf{x}\mathbf{x}^{H}\right\\}\right]_{m,m}$
(2)
$\displaystyle=\frac{p_{m}}{K}\left[\mathbf{V}\mathbf{V}^{H}\right]_{m,m}\leq\bar{P}_{m},\forall
m$ (3) $\displaystyle T_{tot}(\mathbf{P})$
$\displaystyle=\mathbb{E}\left\|\mathbf{x}\right\|^{2}=\frac{1}{K}\text{tr}(\mathbf{P}\mathbf{V}\mathbf{V}^{H})\leq\bar{P}_{T},$
(4)
where $\bar{P}_{m}$ and $\bar{P}_{T}$ are the $m^{th}$ antenna and the total
power constraints. For simplicity, we assume equal power constraint per
antenna i.e. $\bar{P}_{m}=\bar{P}_{per},\forall m=1,\ldots,M$. Towards
discrete power control, let the set
$\mathcal{P}=\left\\{p^{(1)},\ldots,p^{(\left|\mathcal{P}\right|)}\right\\}$
denote all the power levels for each antenna i.e. $p_{m}\in\mathcal{P},\forall
m$ and $\mathbf{P}\in\mathcal{P}^{M}$ such that $0=p^{(1)}\leq\cdots\leq
p^{(\left|\mathcal{P}\right|)}=\bar{P}_{per}$, where $\bar{P}_{per}$ also
denotes the maximum power transmitted by a single antenna. Let
$\mathbf{h}_{k}$ denote the channel state information (CSI) from BS at origin
to the $k^{th}$ user. Through this channel, the received signal at the
$k^{th}$ user can be written as
$\displaystyle y_{k}$ $\displaystyle=\mathbf{h}_{k}^{H}\mathbf{x}+n_{k},$ (5)
$\displaystyle=\mathbf{h}_{k}^{H}\mathbf{P}^{1/2}\mathbf{V}\mathbf{s}+n_{k},$
(6)
$\displaystyle=\mathbf{h}_{k}^{H}\mathbf{P}^{1/2}\mathbf{v}_{k}s_{k}+\mathbf{h}_{k}^{H}\mathbf{P}^{1/2}\mathbf{V}_{-k}\mathbf{s}_{-k}+n_{k},$
(7)
where $n_{k}\sim\mathcal{CN}(0,\sigma^{2})$ is the circularly symmetric
complex Gaussian noise;
$\mathbf{V}_{-k}=\left[\mathbf{v}_{1},\ldots,\mathbf{v}_{k-1},\mathbf{v}_{k+1},\ldots,\mathbf{v}_{K}\right]$
and
$\mathbf{s}_{-k}=\left[s_{1},\ldots,s_{k-1},s_{k+1},\ldots,s_{K}\right]^{T}$.
At the $k^{th}$ user, the resultant SINR can be given as
$\xi_{k}(\mathbf{P}|\mathbf{H})=\frac{\left|\mathbf{h}_{k}^{H}\mathbf{P}^{1/2}\mathbf{v}_{k}\right|^{2}\frac{1}{K}}{\text{tr}\left(\mathbf{h}_{k}^{H}\mathbf{P}^{1/2}\mathbf{V}_{-k}\mathbf{V}_{-k}^{H}\mathbf{P}^{1/2}\mathbf{h}_{k}\right)\frac{1}{K}+\sigma^{2}},$
(8)
which depends on CSI
$\mathbf{H}=\left[\mathbf{h}_{1},\ldots,\mathbf{h}_{K}\right]$. Thus, stacking
all the received signals gives
$\mathbf{y}=\mathbf{H}^{H}\mathbf{P}^{1/2}\mathbf{V}\mathbf{s}+\mathbf{n}$. If
the CSI variations follow a Markov process, the resultant SINR process will
also be Markov. In other words, the power in the elements of $\mathbf{P}$
needs to be adjusted according to CSI to satisfy QoS constraints at the
$k^{th}$ user. Further, the achievable sum-rate is given as
$R(\mathbf{P}|\mathbf{H})=\sum_{k\in\mathcal{K}}\log_{2}\left(1+\xi_{k}(\mathbf{P}|\mathbf{H})\right).$
(9)
The resultant energy efficiency can be defined as the ratio of the sum rate
over the total power incurred in the transmission as
$\eta(\mathbf{P}|\mathbf{H})=\frac{R(\mathbf{P}|\mathbf{H})}{T_{tot}(\mathbf{P})},$
(10)
where the circuit power is ignored as it is a constant. In the following, we
simplify the sum rate for two popular precoding schemes based on ZF (zero-
forcing) and MRT (maximal ratio transmission).
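As a concrete illustration of Eqs. (8)–(10), the following sketch evaluates the SINR, sum rate, and energy efficiency for a random channel and an arbitrary orthonormal precoder; the dimensions, noise power, and discrete power levels are assumed values for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
M, K, sigma2 = 8, 3, 1e-2                      # assumed sizes and noise power

# Random Rayleigh channel H (M x K) and an orthonormal precoder V (V^H V = I_K)
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
V, _ = np.linalg.qr(rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K)))

# Discrete per-antenna power levels (strictly positive here, so the total power is nonzero)
p = rng.choice([0.25, 0.5, 1.0], size=M)
P, Ps = np.diag(p), np.diag(np.sqrt(p))        # P and its square root (diagonal, non-negative)

def sinr(k):                                   # Eq. (8)
    sig = np.abs(H[:, k].conj() @ Ps @ V[:, k]) ** 2 / K
    V_mk = np.delete(V, k, axis=1)
    interf = np.linalg.norm(V_mk.conj().T @ Ps @ H[:, k]) ** 2 / K
    return sig / (interf + sigma2)

rate = sum(np.log2(1 + sinr(k)) for k in range(K))        # Eq. (9)
P_tot = np.real(np.trace(P @ V @ V.conj().T)) / K         # Eq. (4)
eta = rate / P_tot                                        # Eq. (10)
```

The energy efficiency `eta` is the quantity that the reinforcement-learning agent introduced later seeks to maximize over the discrete choices of `p`.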
### II-A Zero-forcing
For zero-forcing transmission, to obtain a precoder satisfying $\mathbf{h}_{k}^{H}\mathbf{P}^{1/2}\mathbf{v}_{j}=0,\forall k\neq j$, we normalize the columns of $\mathbf{V}^{\prime}=\mathbf{P}^{1/2}\mathbf{H}\left(\mathbf{H}^{H}\mathbf{P}\mathbf{H}\right)^{-1}$ to unit norm. The above precoder results in the received signal
$y_{k}=\mathbf{h}_{k}^{H}\mathbf{P}^{1/2}\mathbf{v}_{k}s_{k}+n_{k}$, resulting
into the sum rate
$R_{ZF}(\mathbf{P}|\mathbf{H})=\sum_{k\in\mathcal{K}}\log_{2}\left(1+\frac{\left|\mathbf{h}_{k}^{H}\mathbf{P}^{1/2}\mathbf{v}_{k}\right|^{2}}{\sigma^{2}K}\right).$
(11)
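A quick numerical check of this construction (with an assumed random channel and strictly positive power levels so that $\mathbf{H}^{H}\mathbf{P}\mathbf{H}$ is invertible) confirms that the effective gains $\mathbf{h}_{k}^{H}\mathbf{P}^{1/2}\mathbf{v}_{j}$ vanish for $k\neq j$:

```python
import numpy as np

rng = np.random.default_rng(1)
M, K = 8, 3
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
p = rng.choice([0.5, 1.0], size=M)            # assumed strictly positive power levels
P, Ps = np.diag(p), np.diag(np.sqrt(p))

# V' = P^{1/2} H (H^H P H)^{-1}, then normalize columns to unit norm
Vp = Ps @ H @ np.linalg.inv(H.conj().T @ P @ H)
V = Vp / np.linalg.norm(Vp, axis=0)

G = H.conj().T @ Ps @ V                       # G[k, j] = h_k^H P^{1/2} v_j
off_diag = G - np.diag(np.diag(G))
print(np.max(np.abs(off_diag)))               # numerically ~0: interference is nulled
```

Since $\mathbf{h}_{k}^{H}\mathbf{P}^{1/2}\mathbf{V}^{\prime}$ equals the $k$-th row of the identity matrix before normalization, only the diagonal entries of `G` survive.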
### II-B Maximal ratio transmission
For MRT based precoding, the precoder is set as
$\mathbf{v}_{k}=\frac{\mathbf{P}^{1/2}\mathbf{h}_{k}}{\sqrt{\mathbf{h}_{k}^{H}\mathbf{P}\mathbf{h}_{k}}}$.
Note that MRT precoding is used for low-complexity operation; thus, the precoding vectors are not orthonormalized. The sum rate can be simplified as
$\displaystyle
R_{MRT}(\mathbf{P}|\mathbf{H})=\sum_{k\in\mathcal{K}}\log_{2}\left(1+\frac{\mathbf{h}_{k}^{H}\mathbf{P}\mathbf{h}_{k}}{\sum_{j\neq
k}\frac{\mathbf{h}_{k}^{H}\mathbf{P}\mathbf{h}_{j}\mathbf{h}_{j}^{H}\mathbf{P}\mathbf{h}_{k}}{\mathbf{h}_{j}^{H}\mathbf{P}\mathbf{h}_{j}}+K\sigma^{2}}\right).$
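Substituting $\mathbf{v}_{k}$ into Eq. (8) reproduces the closed form above; this equivalence can be verified numerically (channel, power levels, and noise power are assumed values for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
M, K, sigma2 = 8, 3, 1e-2
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
p = rng.choice([0.25, 0.5, 1.0], size=M)
P, Ps = np.diag(p), np.diag(np.sqrt(p))

# MRT precoders v_k = P^{1/2} h_k / sqrt(h_k^H P h_k)
norms = np.sqrt(np.einsum('mk,m,mk->k', H.conj(), p, H).real)
V = Ps @ H / norms

def sinr_generic(k):                           # Eq. (8) with these precoders
    g = H[:, k].conj() @ Ps @ V                # g[j] = h_k^H P^{1/2} v_j
    return np.abs(g[k]) ** 2 / (np.sum(np.abs(np.delete(g, k)) ** 2) + K * sigma2)

def sinr_mrt(k):                               # SINR inside the closed-form sum rate
    num = (H[:, k].conj() @ P @ H[:, k]).real
    den = sum(np.abs(H[:, k].conj() @ P @ H[:, j]) ** 2
              / (H[:, j].conj() @ P @ H[:, j]).real
              for j in range(K) if j != k)
    return num / (den + K * sigma2)
```

For every user $k$, `sinr_generic(k)` and `sinr_mrt(k)` agree to floating-point precision.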
### II-C Problem formulation
Our goal is to maximize the energy efficiency of transmissions via discrete power allocations. However, note that in each time slot, finding discrete power levels for each antenna in a massive MIMO system is an NP-hard, non-convex search problem. Moreover, the estimation of CSI in massive MIMO consumes resources. Therefore, for faster operation, reinforcement learning is used to obtain these power levels by exploiting the CSI correlation via a Markov process. Thus, assuming the channel information varies as a finite-state Markov chain, our objective of finding the discrete power allocation that maximizes the long-term efficiency subject to the QoS constraints of each user can be expressed as
$\displaystyle\max_{\mathbf{P}(t)}$
$\displaystyle\sum_{\tau=t}^{\infty}\gamma^{\tau-t}\eta(\mathbf{P}(\tau)|\mathbf{H}(\tau))$
(12) subject to $\displaystyle
T_{tot}(\mathbf{P}(t),\mathbf{H}(t))\leq\bar{P}_{T},\mathbf{P}(t)\in\mathcal{P}^{M},$
$\displaystyle\xi_{k}(\mathbf{P}(t)|\mathbf{H}(t))\geq\bar{\xi}_{k},\forall
k\in\mathcal{K},$ (13)
where $\bar{\xi}_{k}$ represents the SINR requirement for QoS at the $k^{th}$ user, and $(t)$ denotes the time dependence. Note that the total power is also considered a function of $\mathbf{H}$ here, since the precoders are computed using the channel information $\mathbf{H}$. This makes the problem non-convex and difficult to solve.
## III Reinforcement Learning
### III-A Dynamics of EEPA
We consider a time-varying channel across time slots. Within a time slot, the channel remains constant. The CSI in a cellular network varies when the user is walking, running, or traveling in a vehicle. In the literature [19, 20], the time-varying channel is modeled using a finite-state Markov chain, where the ergodic channel in each time slot takes a value in one of the Markov states. Let $\mathcal{H}=\left\{\mathbf{H}^{(1)},\ldots,\mathbf{H}^{(\left|\mathcal{H}\right|)}\right\}$ denote the states of the Markov chain. The transition probabilities between channel states are fixed and unknown111In some literature, a first-order auto-regressive process is used to model the channel variations due to mobility, where the resulting channel model provides a continuous-state Markov process rather than a finite-state chain..
### III-B States, Actions and Rewards
For the above system dynamics, let $\underline{s}(t)$ be the state at time
$t$, which is given as the CSI of the same slot as
$\underline{s}(t)=\mathbf{H}(t)\in\mathcal{H}$. An action in the system
corresponds to discrete power control i.e.
$\underline{a}(t)=\mathbf{P}(t)\in\mathcal{P}^{M}$. The chosen action is
evaluated using the reward, which is defined in terms of the energy efficiency as
$\underline{r}(\underline{s}(t),\underline{a}(t))=\frac{1}{\left(\sum_{k\in\mathcal{K}}\left|\xi_{k}(\mathbf{P}(t)|\mathbf{H}(t))-\bar{\xi}_{k}\right|\right)T_{tot}(\mathbf{P}(t),\mathbf{H}(t))},$
(14)
where $\left|\cdot\right|$ penalizes SINR values that deviate from the target
$\bar{\xi}_{k}$.
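As a small sketch (the SINR values, targets, and total power below are hypothetical, chosen only for illustration), the reward in (14) can be computed as:

```python
import numpy as np

def reward(sinr, sinr_target, total_power):
    """Reward of Eq. (14): reciprocal of the total SINR deviation from the
    targets times the total consumed power. In practice a small floor on the
    deviation would avoid division by zero when all targets are met exactly."""
    deviation = np.sum(np.abs(sinr - sinr_target))
    return 1.0 / (deviation * total_power)

# Illustrative numbers: 4 users with a common linear-scale SINR target of 100.
sinr = np.array([110.0, 95.0, 102.0, 98.0])
target = np.full(4, 100.0)
r = reward(sinr, target, total_power=50.0)
```

A larger deviation from the targets or a larger consumed power both decrease the reward, which is what steers the learner toward energy-efficient, QoS-respecting allocations.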
Here, the learner seeks the optimum action $\underline{a}(t)$ based on the
previous observation $\mathbf{H}(t-1)=\underline{s}(t-1)$ by interactively
making sequential decisions and observing the corresponding rewards. In this
way, the agent learns the best action policy against the random Markov chain
transitions. Let the policy function be
$\pi:\mathcal{H}\rightarrow\mathcal{P}^{M}$, which maps a state to an action.
Under policy $\pi(\cdot)$, the power allocation is carried out via the action
$\underline{a}(t+1)=\pi(\underline{s}(t))$, dictating the allocation at
time $t+1$. For the reward
$\underline{r}_{\pi}\left(\underline{s}(t)\right)=\underline{r}\left(\underline{s}(t),\pi\left(\underline{s}(t)\right)\right)$,
performance is measured through the state value function
$V_{\pi}(\underline{s}(t))=\sum_{\tau=t}^{\infty}\gamma^{\tau-t}\underline{r}_{\pi}(\underline{s}(\tau)),$
(15)
which is the total discounted reward accrued over an infinite time horizon. The
objective of this paper is to find the optimal policy $\pi^{*}$ such that the
value of every state is maximized
$\pi^{*}=\arg\max_{\pi}V_{\pi}(\underline{s}),\quad\forall\underline{s}.$
### III-C Action set reduction
For a BS equipped with $M$ antennas, there is a huge number,
$\left|\mathcal{P}\right|^{M}$, of possible actions. However, not all of them
are valid: valid actions are those that satisfy the power
constraint in (4). The total power $T_{tot}(\mathbf{P},\mathbf{H})$ depends on
the normalized precoder $\mathbf{V}$. To simplify this constraint and thereby
reduce the valid action set, we approximate the total power constraint as
$\frac{1}{K}\text{tr}(\mathbf{P}\mathbf{V}\mathbf{V}^{H})\approx\frac{1}{K}\text{tr}\left(\mathbf{P}\mathbb{E}\left\\{\mathbf{V}_{R}\mathbf{V}_{R}^{H}\right\\}\right)=\frac{1}{M}\text{tr}\left(\mathbf{P}\right)\leq\bar{P}_{T},$
(16)
where $\mathbf{V}_{R}$ is any random orthonormal precoder; the equality on the
right follows from [21, Lem. 1]. Further, at least $K$ entries of $\mathbf{P}$
must be non-zero, i.e. $p_{i_{k}}>0,\forall k\in\mathcal{K}$, which excludes
$\sum_{k=1}^{K-1}{M\choose k}$ actions in $\mathcal{P}^{M}$. To obtain a
minimum transmission power constraint that further prunes this huge number of
possibilities, we approximate the QoS constraint as
$\displaystyle\xi_{k}(\mathbf{P}|\mathbf{H})$
$\displaystyle=\frac{\left|\mathbf{h}_{k}^{H}\mathbf{P}^{1/2}\mathbf{v}_{k}\right|^{2}}{\text{tr}\left(\mathbf{h}_{k}^{H}\mathbf{P}^{1/2}\mathbf{V}_{-k}\mathbf{V}_{-k}^{H}\mathbf{P}^{1/2}\mathbf{h}_{k}\right)+K\sigma^{2}},$
$\displaystyle\stackrel{{\scriptstyle(a)}}{{\approx}}\frac{\text{tr}\left(\mathbf{P}\mathbf{v}_{k}\mathbf{v}_{k}^{H}\right)}{\text{tr}\left(\mathbf{P}\mathbf{V}_{-k}\mathbf{V}_{-k}^{H}\right)+K\sigma^{2}},$
$\displaystyle\stackrel{{\scriptstyle(b)}}{{\approx}}\frac{\text{tr}\left(\mathbf{P}\right)\frac{1}{M}}{\text{tr}\left(\mathbf{P}\right)\frac{K-1}{M}+K\sigma^{2}}=\frac{1}{(K-1)+KM\frac{\sigma^{2}}{\text{tr}\left(\mathbf{P}\right)}},$
where (a) follows from the massive MIMO channel hardening effect
$\mathbf{h}_{k}\mathbf{h}_{k}^{H}\rightarrow\mathbf{I}_{M}$; and (b) follows
similarly from (16). For ZF precoding, we have
$\frac{1}{KM\frac{\sigma^{2}}{\text{tr}\left(\mathbf{P}\right)}}\geq\bar{\xi}_{k}$
$\implies$ $\text{tr}\left(\mathbf{P}\right)\geq KM\sigma^{2}\bar{\xi}_{k}$.
Let $\bar{P}_{\min}$ denote this lower bound on the transmission power. The
new action space can now be expressed as
$\bar{\mathcal{P}}_{M}=\left\\{\left(\begin{array}[]{c}p_{1}\\\ \vdots\\\
p_{M}\end{array}\right):\begin{array}[]{c}\bar{P}_{\min}\leq\text{tr}(\mathbf{P}(t))\leq
M\bar{P}_{T},\\\ p_{i_{k}}>0,\forall k\in\mathcal{K}\end{array}\right\\},$
(17)
where $\bar{\mathcal{P}}_{M}\subset\mathcal{P}^{M}$. Note that the above
approximations serve only to prune the set of candidate actions; they do not
affect the optimal power allocation.
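As a sketch of this pruning on a toy instance (all numbers below are hypothetical; `levels` stands in for the discrete set $\mathcal{P}$, `P_T` and `P_min` for the bounds in (17), and the non-zero-count filter implements the $p_{i_k}>0$ requirement), the reduced action set $\bar{\mathcal{P}}_{M}$ can be enumerated directly:

```python
import itertools

# Hypothetical small instance: M antennas, K users, |P| = 3 power levels.
M, K = 4, 2
levels = [0.0, 0.5, 1.0]   # the discrete set P (linear scale)
P_T = 0.6                  # average power budget, so tr(P) <= M * P_T = 2.4
P_min = 0.8                # QoS lower bound K*M*sigma^2*xi_bar on tr(P)

# Keep only actions inside the power window of (17) with >= K non-zero entries.
valid = [p for p in itertools.product(levels, repeat=M)
         if P_min <= sum(p) <= M * P_T
         and sum(x > 0 for x in p) >= K]

print(len(valid), "valid actions out of", len(levels) ** M)
```

Even on this tiny instance the filter cuts roughly half the candidates; for the simulation sizes used later ($M=16$, $\left|\mathcal{P}\right|=3$), the same idea reduces $3^{16}$ candidates to a tractable table.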
### III-D Bellman’s Equations and $Q$-learning
Let $\Pr(\underline{s},\underline{s}^{\prime}|\underline{a})$ be the
probability of transition from the current state $\underline{s}$ to the next
state $\underline{s}^{\prime}$ under action $\underline{a}$. Bellman equations
express the state value functions in a recursive fashion as
$\displaystyle V_{\pi}(\underline{s})=$
$\displaystyle\underline{r}_{\pi}(\underline{s})+\gamma\sum_{\underline{s}^{\prime}\in\Xi^{K}}\Pr(\underline{s},\underline{s}^{\prime}|\pi(\underline{s}))V_{\pi}(\underline{s}^{\prime}),\forall\underline{s}$
(18) $\displaystyle Q_{\pi}(\underline{s},\underline{a})=$
$\displaystyle\underline{r}_{\pi}(\underline{s})+\gamma\sum_{\underline{s}^{\prime}\in\Xi^{K}}\Pr(\underline{s},\underline{s}^{\prime}|\underline{a})V_{\pi}(\underline{s}^{\prime}),\forall\underline{s},\underline{a}.$
(19)
The above equations can be used to obtain the optimal policy by maximizing the
$Q$-function as
$\pi^{*}=\arg\max_{\underline{a}}Q_{\pi}(\underline{s},\underline{a}),\forall\underline{s},$
(20)
where under $\pi^{*}$,
$V_{\pi^{*}}(\underline{s})=\max_{\underline{a}}Q_{\pi}(\underline{s},\underline{a})$
and it gives the solution
$\displaystyle Q_{\pi}(\underline{s},\underline{a})$
$\displaystyle=\underline{r}_{\pi}(\underline{s})+\gamma\sum_{\underline{s}^{\prime}\in\Xi^{K}}\Pr(\underline{s},\underline{s}^{\prime}|\underline{a})\max_{\underline{a}}Q_{\pi}(\underline{s},\underline{a}),$
$\displaystyle=\sum_{\underline{s}^{\prime}\in\Xi^{K}}\Pr(\underline{s},\underline{s}^{\prime}|\underline{a})\left[\underline{r}(\underline{s},\underline{a})+\gamma\max_{\underline{a}}Q_{\pi}(\underline{s},\underline{a})\right].$
This fixed-point characterization is solved iteratively for the $Q$-function,
as given in Algorithm 1.
1: Initialize the state $\underline{s}(0)$ randomly and set
$Q_{0}(\underline{s},\underline{a})=0,\ \forall\underline{s},\underline{a}$
2:for $t=1,2,\ldots$, do
3: For given profile $\underline{s}(t-1)$, take action $\underline{a}(t)$ as
$\underline{a}(t)=\begin{cases}\arg\max_{\underline{a}}Q_{t-1}(\underline{s}(t),\underline{a})&\text{w.p.}\>1-\epsilon\\\
\text{random
}\underline{a}\in\bar{\mathcal{P}}&\text{w.p.}\>\epsilon\end{cases}$
4: Observe $\underline{s}(t)$ and compute
$\underline{r}(\underline{s}(t),\underline{a}(t))$
5: Update $\displaystyle
Q_{t}(\underline{s}(t),\underline{a}(t))=(1-\beta_{t})Q_{t-1}(\underline{s}(t),\underline{a}(t))$
(21)
$\displaystyle+\beta_{t}\left[\underline{r}(\underline{s}(t),\underline{a}(t))+\gamma\max_{\underline{a}}Q_{t-1}(\underline{s}(t),\underline{a})\right].$
6:end for
Algorithm 1 $Q$-learning algorithm.
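A minimal tabular sketch of Algorithm 1 on a synthetic chain may look as follows. The transition matrix `P_trans` and reward table `R` are random stand-ins for the unknown channel dynamics and the reward (14), the state/action counts are illustrative, and, following standard $Q$-learning, the maximum in the update is taken at the observed next state:

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions = 4, 5          # |H| channel states, |P_bar| reduced actions
gamma, eps = 0.9, 0.1               # discount factor and exploration rate

# Hypothetical stand-ins for the unknown Markov dynamics and the reward (14).
P_trans = rng.dirichlet(np.ones(n_states), size=n_states)  # rows sum to 1
R = rng.random((n_states, n_actions))                      # reward table

Q = np.zeros((n_states, n_actions))
s = 0
for t in range(1, 20001):
    beta = 1.0 / t                  # step size: sum(beta)=inf, sum(beta^2)<inf
    # epsilon-greedy action selection (step 3 of Algorithm 1)
    a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
    r = R[s, a]
    s_next = rng.choice(n_states, p=P_trans[s])
    # update (21), with the max taken at the next state as in standard Q-learning
    Q[s, a] = (1 - beta) * Q[s, a] + beta * (r + gamma * Q[s_next].max())
    s = s_next

policy = Q.argmax(axis=1)           # learned pi*(s) = argmax_a Q(s, a)
```

The harmonic step size $\beta_{t}=1/t$ satisfies the convergence conditions discussed below; in the paper's setting the reward lookup would be replaced by evaluating (14) on the observed channel state.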
In a time slot $t$, after observing the state $\underline{s}(t)$, the
$\epsilon$-greedy action $\underline{a}(t)$ is taken and the target value
$\underline{r}(\underline{s}(t),\underline{a}(t))+\gamma\max_{\underline{a}}Q(\underline{s}(t),\underline{a})$
is formed. Under the mean squared error (MSE) criterion, the MSE for
the estimated $Q$-function values can be written as
$\displaystyle\epsilon(\underline{s}(t),\underline{a}(t))$
$\displaystyle=\Bigg{[}\underline{r}(\underline{s}(t),\underline{a}(t))+$
$\displaystyle\hfill\gamma\max_{\underline{a}}Q(\underline{s}(t),\underline{a})-Q(\underline{s}(t),\underline{a}(t))\Bigg{]}^{2}.$
Minimizing the above error expression for $Q$-values using gradient descent
method yields the following
$\displaystyle
Q_{t}(\underline{s}(t),\underline{a}(t))=(1-\beta_{t})Q_{t-1}(\underline{s}(t),\underline{a}(t))$
(22)
$\displaystyle+\beta_{t}\left[\underline{r}(\underline{s}(t),\underline{a}(t))+\gamma\max_{\underline{a}}Q_{t-1}(\underline{s}(t),\underline{a})\right],$
where $Q_{t}$ denotes the estimated $Q$-values at time $t$. Note that the
convergence of the algorithm depends on the step sizes $\beta_{t}$: choosing
$\beta_{t}$ such that $\sum_{t}\beta_{t}=\infty$ and
$\sum_{t}\beta_{t}^{2}<\infty$ guarantees convergence. These convergence
conditions and several other related algorithms have been thoroughly studied
in [22].
Note that the cardinality of the action space grows exponentially with the
number of antennas and the number of power levels. Therefore, to make the
approach scalable, deep reinforcement learning based methods will be
investigated as part of future work.
## IV Simulation Results
The following values are assumed for the $Q$-learning parameters: $M=8,16$
antennas; $K=4$ downlink users; $\left|\mathcal{P}\right|=3,5$ power levels;
$1000$ episodes of $Q$-learning, each with $2000$ iterations;
exploration decay factor of $0.1$ per episode; transmit power constraint of $28$ dB;
per-antenna maximum power constraint of $30$ dB; QoS constraint on the SINR of $20$
dB; $\left|\mathcal{H}\right|=128$ channel states. Zero-forcing
precoding is assumed since $M$ is not very large and the present
$Q$-learning algorithm is computationally time consuming.
Figure 2: Progress of average rewards, average SINR at users, and average
transmit power at BS for different iterations for $M=16$ and
$\left|\mathcal{P}\right|=3$ levels with per-antenna constraint, total
transmit power constraint and user-SINR constraint of $30$ dB, $28$ dB and
$20$ dB (green lines) respectively. Figure 3: Progress of average rewards,
average SINR at users, and average transmit power at BS for different
iterations for $M=8$ and $\left|\mathcal{P}\right|=5$ levels with per-antenna
constraint, total transmit power constraint and user-SINR constraint of $30$
dB, $28$ dB and $20$ dB respectively.
Figure 2 shows the progress of the average reward over iterations, the average
SINR across users, and the average transmit power over iterations, for $M=16$
antennas at the BS and $\left|\mathcal{P}\right|=3$ power levels per antenna.
The action set is reduced from $3^{16}$ to around $12000$ entries. It can be
seen that $Q$-learning learns the optimum power allocation in terms of reward,
and the learned actions provide an SINR greater than the QoS constraint for
each user while keeping the transmit power within its constraint. Due to the
larger size of the $Q$-matrix, it takes around 750 iterations to learn the
optimum converging action. Similar trends can be seen when five power levels
are assumed, as shown in Figure 3. This demonstrates the successful
application of $Q$-learning in quickly finding the optimum power allocation
among such large sets of possibilities (up to $3^{16}\approx 4\times 10^{7}$).
## V Conclusion
In this paper, we have presented a reinforcement learning solution for
discrete power allocation, a combinatorial optimization problem that is
NP-hard. By leveraging the channel correlation in slowly moving scenarios in
wireless cellular networks, we model the channel variations as a finite-state
Markov chain and present an RL formulation whose constraints are the transmit
power budget and a quality-of-service guarantee on the received SINR at each
user, with the objective of maximizing the energy efficiency at the
transmitter. Typically, primal-dual approaches are used to handle constraints
in $Q$-learning; instead, we model the reward function to incorporate these
constraints without needing any additional dual variables. Simulations show
the successful application of the power allocation while satisfying these
constraints.
Future work will make the algorithm scalable to larger numbers of power
levels and antennas.
## References
* [1] X. Gao, O. Edfors, F. Tufvesson, and E. G. Larsson, “Massive MIMO in real propagation environments: Do all antennas contribute equally?” _IEEE Transactions on Communications_ , vol. 63, no. 11, pp. 3917–3928, 2015.
* [2] Z. Liu, W. Du, and D. Sun, “Energy and spectral efficiency tradeoff for massive MIMO systems with transmit antenna selection,” _IEEE Transactions on Vehicular Technology_ , vol. 66, no. 5, pp. 4453–4457, May 2017.
* [3] A. Garcia-Rodriguez, C. Masouros, and P. Rulikowski, “Reduced switching connectivity for large scale antenna selection,” _IEEE Transactions on Communications_ , vol. 65, no. 5, pp. 2250–2263, May 2017.
* [4] H. Tang and Z. Nie, “RMV antenna selection algorithm for massive MIMO,” _IEEE Signal Processing Letters_ , vol. 25, no. 2, pp. 239–242, Feb 2018.
* [5] M. Olyaee, M. Eslami, and J. Haghighat, “An energy-efficient joint antenna and user selection algorithm for multi-user massive MIMO downlink,” _IET Communications_ , vol. 12, no. 3, pp. 255–260, 2018.
* [6] M. Hanif, H. Yang, G. Boudreau, E. Sich, and H. Seyedmehdi, “Antenna subset selection for massive MIMO systems: A trace-based sequential approach for sum rate maximization,” _Journal of Communications and Networks_ , vol. 20, no. 2, pp. 144–155, April 2018.
* [7] A. Konar and N. D. Sidiropoulos, “A simple and effective approach for transmit antenna selection in multiuser massive MIMO leveraging submodularity,” _IEEE Transactions on Signal Processing_ , vol. 66, no. 18, pp. 4869–4883, Sep. 2018.
* [8] H. Li, J. Cheng, Z. Wang, and H. Wang, “Joint antenna selection and power allocation for an energy-efficient massive MIMO system,” _IEEE Wireless Communications Letters_ , vol. 8, no. 1, pp. 257–260, Feb 2019.
* [9] D. Park, “Sum rate maximisation with transmit antenna selection in massive MIMO broadcast channels,” _Electronics Letters_ , vol. 54, no. 21, pp. 1245–1247, 2018.
* [10] S. Asaad, A. M. Rabiei, and R. R. Müller, “Massive MIMO with antenna selection: Fundamental limits and applications,” _IEEE Transactions on Wireless Communications_ , vol. 17, no. 12, pp. 8502–8516, Dec 2018.
* [11] W. A. Al-Hussaibi and F. H. Ali, “Efficient user clustering, receive antenna selection, and power allocation algorithms for massive MIMO-NOMA systems,” _IEEE Access_ , vol. 7, pp. 31 865–31 882, 2019.
* [12] H. Huang, Y. Song, J. Yang, G. Gui, and F. Adachi, “Deep-learning-based millimeter-wave massive MIMO for hybrid precoding,” _IEEE Transactions on Vehicular Technology_ , vol. 68, no. 3, pp. 3027–3032, March 2019.
* [13] S. Zhang, C. Xiang, S. Cao, S. Xu, and J. Zhu, “Dynamic carrier to MCPA allocation for energy efficient communication: Convex relaxation versus deep learning,” _IEEE Transactions on Green Communications and Networking_ , vol. 3, no. 3, pp. 628–640, Sep. 2019.
* [14] C. He, Y. Hu, Y. Chen, and B. Zeng, “Joint power allocation and channel assignment for NOMA with deep reinforcement learning,” _IEEE Journal on Selected Areas in Communications_ , vol. 37, no. 10, pp. 2200–2210, Oct 2019.
* [15] K. Singh, K. Wang, S. Biswas, Z. Ding, F. A. Khan, and T. Ratnarajah, “Resource optimization in full duplex non-orthogonal multiple access systems,” _IEEE Transactions on Wireless Communications_ , vol. 18, no. 9, pp. 4312–4325, 2019.
* [16] L. Li, F. Khan, M. Pesavento, T. Ratnarajah, and S. Prakriya, “Sequential search based power allocation and beamforming design in overlay cognitive radio networks,” _Elsevier Signal Process._ , vol. 97, no. C, pp. 221–231, Apr. 2014.
* [17] C. Masouros, M. Sellathurai, and T. Ratnarajah, “Large-scale mimo transmitters in fixed physical spaces: The effect of transmit correlation and mutual coupling,” _IEEE Transactions on Communications_ , vol. 61, no. 7, pp. 2794–2804, 2013.
* [18] S. Payami, M. Ghoraishi, M. Dianati, and M. Sellathurai, “Hybrid beamforming with a reduced number of phase shifters for massive mimo systems,” _IEEE Transactions on Vehicular Technology_ , vol. 67, no. 6, pp. 4843–4851, 2018.
* [19] X. Liu, Z. Qin, Y. Gao, and J. A. McCann, “Resource allocation in wireless powered iot networks,” _IEEE Internet of Things Journal_ , vol. 6, no. 3, pp. 4935–4945, June 2019.
* [20] F. Sangare, D. H. N. Nguyen, and Z. Han, “Learning frameworks for dynamic joint RF energy harvesting and channel access,” _IEEE Access_ , vol. 7, pp. 84 524–84 535, 2019.
* [21] N. Garg and G. Sharma, “Analog precoder feedback schemes with interference alignment,” _IEEE Transactions on Wireless Communications_ , vol. 17, no. 8, pp. 5382–5396, 2018.
* [22] J. N. Tsitsiklis and B. Van Roy, “An analysis of temporal-difference learning with function approximation,” _IEEE Transactions on Automatic Control_ , vol. 42, no. 5, pp. 674–690, 1997.
# Poncelet–Darboux, Kippenhahn, and Szegő: interactions between projective
geometry, matrices and orthogonal polynomials
Markus Hunziker Department of Mathematics, Baylor University, Waco TX, USA
<EMAIL_ADDRESS>, Andrei Martínez-Finkelshtein Department of
Mathematics, Baylor University, Waco TX, USA, and Department of Mathematics,
University of Almería, Almería, Spain<EMAIL_ADDRESS>,
Taylor Poe Department of Mathematics, Baylor University, Waco TX, USA
<EMAIL_ADDRESS>and Brian Simanek Department of Mathematics,
Baylor University, Waco TX, USA<EMAIL_ADDRESS>
###### Abstract.
We study algebraic curves that are envelopes of families of polygons supported
on the unit circle $\mathbb{T}$. We address, in particular, a characterization
of such curves of minimal class and show that all realizations of these curves
are essentially equivalent and can be described in terms of orthogonal
polynomials on the unit circle (OPUC), also known as Szegő polynomials. Our
results have connections to classical results from algebraic and projective
geometry, such as theorems of Poncelet, Darboux, and Kippenhahn; numerical
ranges of a class of matrices; and Blaschke products and disk functions.
This paper contains new results, some old results presented from a different
perspective or with a different proof, and a formal foundation for our
analysis. We give a rigorous definition of the Poncelet property, of curves
tangent to a family of polygons, and of polygons associated with Poncelet
curves. As a result, we are able to clarify some misconceptions that appear in
the literature and present counterexamples to some existing assertions along
with necessary modifications to their hypotheses to validate them. For
instance, we show that curves inscribed in some families of polygons supported
on $\mathbb{T}$ are not necessarily convex, can have cusps, and can even
intersect the unit circle.
Two ideas play a unifying role in this work. The first is the utility of OPUC
and the second is the advantage of working with tangent coordinates. This
latter idea has been previously exploited in the works of B. Mirman, whose
contribution we have tried to put in perspective.
###### Key words and phrases:
Poncelet porism, algebraic curves, foci, Blaschke product, numerical range,
unitary dilation, OPUC
###### 2010 Mathematics Subject Classification:
Primary: 14N15; Secondary: 14H50, 14H70, 30J10, 42C05, 47A12
## 1\. Introduction
The main theme of this paper is the study of real plane algebraic curves in
the unit disk $\mathbb{D}:=\\{z\in\mathbb{C}:\,|z|<1\\}$ that can be inscribed
in families of polygons with vertices on the unit circle
$\mathbb{T}:=\partial\mathbb{D}=\\{z\in\mathbb{C}:\,|z|=1\\}$. In our
discussion below we use terminology that is carefully explained in the paper,
especially in Section 2. The purpose of this introduction is to give a brief
historical overview and a summary of recent results and our contributions.
In 1746, the English surveyor W. Chapple discovered that a circle $C$ in
$\mathbb{D}$ with center at $a\in\mathbb{D}$ is inscribed in infinitely many
triangles with vertices on $\mathbb{T}$ if and only if its radius is
$r=\frac{1-|a|^{2}}{2}\ .$ (1.1)
Even though Chapple’s proof was flawed (see e.g. [13]), it is clear that he
understood that the existence of one triangle inscribed in $\mathbb{T}$ and
circumscribed about $C$ implies the existence of an infinite family of such
triangles. Chapple’s formula (1.1) is then easily obtained by looking at an
isosceles triangle in the family.
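In fact, (1.1) is precisely the case $R=1$ of the classical Chapple–Euler relation between the circumradius $R$, the inradius $r$, and the distance $d$ between the circumcenter and the incenter of a triangle:

```latex
d^{2}=R(R-2r),
\qquad\text{so with } R=1,\ d=|a|:\qquad
|a|^{2}=1-2r
\;\Longleftrightarrow\;
r=\frac{1-|a|^{2}}{2}.
```

Conversely, for a pair of nested circles this relation is exactly the closure condition for triangles, in agreement with the porism discussed next.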
This brings us to _Poncelet’s closure theorem_ , also known as _Poncelet’s
porism_ (see e.g. [13, 14]), one of the most surprising and beautiful results
of planar projective geometry. The theorem was discovered by J.-V. Poncelet in
1813 while he was a prisoner of war in Russia and published in 1822 in his
_Traité sur les propriétés projectives des figures_ [54]. In slightly
simplified form, adjusted to our context, the result can be stated as follows:
###### Theorem A (Poncelet, 1813).
Let $C$ be an ellipse in $\mathbb{D}$, and suppose there is an $n$-sided
polygon $\mathscr{P}_{0}$ inscribed in $\mathbb{T}$ and circumscribed about
$C$. Then for any point $z\in\mathbb{T}$, there exists an $n$-sided polygon
$\mathscr{P}(z)$ inscribed in $\mathbb{T}$ and circumscribed about $C$ such
that $z$ is a vertex of $\mathscr{P}(z)$.
Figure 1. An ellipse with a Poncelet property.
Here by an $n$-sided polygon we mean a simple closed polygonal chain of length
$n$. However, Poncelet’s closure theorem remains true if we replace $n$-sided
polygons inscribed in $\mathbb{T}$ by (possibly self-intersecting) closed
polygonal chains of length $n$ with vertices on $\mathbb{T}$. In either case,
we say that an ellipse as in the theorem has the $n$-_Poncelet_ property.
After Poncelet published his _Traité_ , following a suggestion of J. Steiner,
C. G. J. Jacobi used the theory of elliptic integrals to give a proof of
Poncelet’s closure theorem in the special case when $C$ is a circle. Inspired
by Jacobi’s work, N. Trudi and later A. Cayley used elliptic integrals to
prove Poncelet’s theorem in the general case (see e.g. [13, Chs. 3–5]).
Furthermore, Cayley was able to find an explicit criterion for ellipses to
have the $n$-Poncelet property in terms of power series expansions of the
square-root of certain determinants. Cayley’s criterion makes it possible to
determine, for example, for which semi-minor axes an ellipse with prescribed
foci has the $n$-Poncelet property. A modern interpretation of Cayley’s
criterion was given by Griffiths and Harris [32].
Are there any algebraic curves $C$ in $\mathbb{D}$ of higher degree that have
the $n$-Poncelet property? In the late 1860’s, G. Darboux started to
investigate this question, and he realized that it is best approached by
considering the dual problem of finding curves passing through the
intersection points of $n$ tangent lines to $\mathbb{T}$. By introducing a
convenient system of plane coordinates, now called Darboux coordinates, he was
able to give a new proof of Poncelet’s closure theorem and generalizations
(see e.g. [13, Ch. 9]). Adjusted to our context, Darboux’s results can be
summarized as follows:
###### Theorem B (Darboux [12], 1917).
Let $C$ be a closed convex curve in $\mathbb{D}$ and suppose that there is an
$n$-sided polygon $\mathscr{P}_{0}$ inscribed in $\mathbb{T}$ and
circumscribed about $C$. If the curve $C$ is a connected component of a real
algebraic curve $\Gamma$ in $\mathbb{D}$ of class111The class of a plane
algebraic curve is the degree of its dual curve, see Section 2.1 for details.
$n-1$ such that each diagonal of $\mathscr{P}_{0}$ is tangent to $\Gamma$,
then for every point $z\in\mathbb{T}$, there exists an $n$-sided polygon
$\mathscr{P}(z)$ inscribed in $\mathbb{T}$ and circumscribed about $C$ such
that $z$ is a vertex of $\mathscr{P}(z)$ and each diagonal of $\mathscr{P}(z)$
is tangent to $\Gamma$. In the special case when $C$ is an ellipse, there
always exists such an algebraic curve $\Gamma$ and this curve decomposes into
$(n-1)/2$ ellipses if $n$ is odd, and $(n-2)/2$ ellipses and an isolated point
if $n$ is even.
If $C$ and $\Gamma$ are as in the theorem, then $\Gamma$ can be recovered from
$C$ as the envelope of all the diagonals of the family of $n$-sided polygons
inscribed in $\mathbb{T}$ and circumscribed about $C$ (see Figure 2). This
will be made precise in Section 3 in terms of Darboux’s curve of degree $n-1$
that is dual to $\Gamma$. We note that even though the curve $\Gamma$ is
singular in general, it has exactly $n-1$ tangent lines in each arbitrarily
given direction (just as in the special case when $\Gamma$ decomposes into
ellipses).
Figure 2. Envelopes of convex hulls (left) and of all closed polygons with
vertices at a family of points on the circle.
In the study of conic sections, the idea of foci plays a central role. In
1832, J. Plücker defined foci of higher degree algebraic curves (see the
definition in Section 2.1). It turns out that a curve $\Gamma$ as in the
theorem above has exactly $n-1$ real foci (counted with multiplicity) and
$(n-1)(n-2)$ nonreal foci. Furthermore, all the real foci are in $\mathbb{D}$.
We will later see that the $n-1$ real foci determine $\Gamma$ completely. A
priori, this fact is by no means obvious and was not discovered until twenty
years ago.
One way to find an algebraic curve with given foci is using the notion of the
numerical range of a matrix (see the definition in Section 2.3).
###### Theorem C (Kippenhahn [38, 39]).
For an $n\times n$ complex matrix $\bm{A}$ there exists a real algebraic curve
$\Gamma$ of class $n$ whose real foci are the eigenvalues of $\bm{A}$ and such
that the numerical range of $\bm{A}$ is the convex hull of the real points of
$\Gamma$.
Kippenhahn’s proof is constructive and elegant, and we will explain his
arguments briefly in Section 2.3.
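As a numerical sketch of this construction (not from the paper; the matrix below is a standard textbook example), the boundary of the numerical range can be traced by taking, for each direction $\theta$, a top eigenvector $v$ of the Hermitian part of $e^{-i\theta}\bm{A}$, which yields the boundary point $v^{*}\bm{A}v$:

```python
import numpy as np

def numerical_range_boundary(A, n_dirs=360):
    """Boundary points of W(A): for each theta, v^H A v with v a top
    eigenvector of Re(e^{-i theta} A) lies on the boundary of W(A)."""
    pts = []
    for theta in np.linspace(0.0, 2.0 * np.pi, n_dirs, endpoint=False):
        H = (np.exp(-1j * theta) * A + np.exp(1j * theta) * A.conj().T) / 2
        _, V = np.linalg.eigh(H)       # ascending eigenvalues
        v = V[:, -1]                   # eigenvector of the largest eigenvalue
        pts.append(v.conj() @ A @ v)
    return np.array(pts)

A = np.array([[0.0, 1.0], [0.0, 0.0]])  # W(A) is the closed disk |z| <= 1/2
pts = numerical_range_boundary(A)
```

For this Jordan block the numerical range is the disk of radius $1/2$, so every computed boundary point has modulus $1/2$; its single (double) eigenvalue $0$ is the real focus of the boundary curve, illustrating Theorem C.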
Starting in 1998, H.-L. Gau and P. Y. Wu and, independently, B. Mirman studied
when the boundary of the numerical range of an $n\times n$ matrix $\bm{A}$
exhibits the $(n+1)$-Poncelet property with respect to the unit circle
$\mathbb{T}$. In a series of papers, Gau and Wu [27, 24, 25, 65] showed that a
necessary and sufficient condition is that $\bm{A}$ belongs to the class
$\mathcal{S}_{n}$ of completely non-unitary contractions with defect index 1.
This class can also be identified with the compressed multiplication operators
on $\mathbb{T}$. The corresponding Poncelet polygons will join the eigenvalues
of all rank 1 unitary extensions of $\bm{A}\in\mathcal{S}_{n}$. Mirman, using
slightly different terminology and more geometric techniques, gave beautiful
new proofs of many of Darboux’s results, apparently without being aware of
Darboux’s work until 2005 (see the comments in the introduction to [50]).
A seemingly alternative approach to the problem above, using rational
functions, can be traced back to the following result of J. Siebeck,
popularized in Marden’s 1948 book [45, Ch. 1, §4], _Geometry of Polynomials_ :
###### Theorem D (Siebeck [59]).
Let $\\{w_{1},\ldots,w_{n}\\}$ be the vertices of a convex polygon in
$\mathbb{C}$ ordered counter-clockwise. Let
$M(z)=\sum_{j=1}^{n}\frac{m_{j}}{z-w_{j}},$ (1.2)
where $m_{1},\ldots,m_{n}$ are real numbers. Then the zeros of $M(z)$ are the
foci of a real algebraic curve of class $n-1$ which intersects each of the
line segments $[w_{j},w_{k}]$, $j\not=k$, at the point dividing the line in
ratio $m_{j}/m_{k}$.
Daepp, Gorkin and collaborators [7, 10, 11, 31] published a series of papers
concerning finite Blaschke products $b(z)$. These are the Schur functions
(analytic maps of $\mathbb{D}$ to itself) which are analytic in a neighborhood
of $\overline{\mathbb{D}}$, of magnitude $1$ on $\partial\mathbb{D}$, with $n$
zeros in $\mathbb{D}$. A connection with Siebeck’s theorem is through the
formula (see [10])
$\sum_{j=1}^{n}\frac{m_{j}(\lambda)}{z-w_{j}}=\frac{b(z)}{zb(z)-\bar{\lambda}},$
which shows that solutions of equations of the form
$zb(z)=\bar{\lambda}\in\mathbb{T}$ (1.3)
generate configurations of points on $\mathbb{T}$ such that the envelope of
the convex hulls of these points is (a component of) an algebraic curve whose
foci coincide with the zeros of $b$. Furthermore, Fujimura [19, 20, 21]
extended their analysis to the whole interior curve and its dual; see details
in Section 4.
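As a numerical illustration (the zeros of $b$ below are an arbitrary hypothetical choice), after clearing denominators, equation (1.3) becomes the polynomial equation $z\prod_{j}(z-a_{j})=\bar{\lambda}\prod_{j}(1-\bar{a}_{j}z)$, and all of its $n+1$ roots lie on $\mathbb{T}$:

```python
import numpy as np

zeros = np.array([0.3 + 0.2j, -0.4j, 0.1 - 0.5j])  # zeros of b in D (hypothetical)
lam = np.exp(0.7j)                                  # lambda on the unit circle T

# z*b(z) = conj(lam)  <=>  z*prod(z - a_j) - conj(lam)*prod(1 - conj(a_j) z) = 0
num = np.polymul(np.poly(zeros), [1.0, 0.0])        # z * prod(z - a_j)
den = np.array([1.0 + 0.0j])
for a in zeros:
    den = np.polymul(den, [-np.conj(a), 1.0])       # prod(1 - conj(a_j) z)
den = np.concatenate([np.zeros(len(num) - len(den), dtype=complex), den])

roots = np.roots(num - np.conj(lam) * den)          # n + 1 points on T
assert np.allclose(np.abs(roots), 1.0)
```

Sweeping $\lambda$ over $\mathbb{T}$ produces the family of vertex configurations whose polygons envelope the curve with foci at the zeros of $b$.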
It turns out that both approaches (via the numerical range or using Blaschke
products) are equivalent, and in this sense, provide the only possible
construction of algebraic curves in $\mathbb{D}$ with prescribed foci having a
Poncelet property. In a recent work [46], some of the authors of this paper
showed that both points of view are naturally connected via orthogonal
polynomials on the unit circle (OPUC), initially studied in a systematic way
by Szegő [63] and Geronimus [28]; for a modern account on the theory, see the
treatise by Simon [60]. In particular, it was shown that the class
$\mathcal{S}_{n}$ is exactly the class of the truncated CMV matrices that are
the natural family of matrices to be studied in the theory of OPUC and that
equations (1.3) define the zeros of the so-called _paraorthogonal_ polynomials
on the unit circle.
Poncelet’s construction can be approached also from a point of view of the
theory of elliptic billiards [16, 17, 40, 53] and discrete dynamical systems
such as the pentagram map [56, 57, 52, 64].
The literature on Poncelet is vast (so the bibliography cited here is far from
complete), with a clear increase in attention to the subject over the last two
decades. However, trying to navigate these alternative and complementary
perspectives can also be frustrating, due to the diversity of terminology and,
frankly, an occasional lack of rigor. Notions like a curve inscribed in a
family of lines, an envelope of polygons, or even a Poncelet curve are often
left to intuition and can lead (and have led) to wrong conclusions. It also
became clear to us that the phenomenon discovered by Poncelet is a “shell”
(convex hull) of a much richer structure analyzed by Darboux (and,
incidentally, rediscovered by Mirman) and should be studied from that
perspective. Thus, our initial motivation was to unify several results
scattered in the literature, in part using the tool of OPUC, as well as to
define precisely the objects we are working with. This paper is the first
result of this endeavor; it contains several results, some old (either stated
without proof or proved from a different perspective), some new. In
particular,
* •
We give a rigorous definition of the Poncelet property, of curves tangent to a
family of polygons (or envelopes of families of polygons), and polygons
associated with Poncelet curves.
* •
We point out some misconceptions that appear in the literature and present
counterexamples, and we give sufficient conditions under which these
situations cannot occur. For instance, we show that curves inscribed in some
families of polygons supported on $\mathbb{T}$ are not necessarily convex, can
have cusps, and can even intersect the unit circle.
* •
We prove a characterization of all “complete Poncelet curves” of minimal class
and show that all realizations of these curves, mentioned in this
Introduction, are equivalent.
* •
Regarding the tools, OPUC, brought up in [46], play a unifying role in our
analysis. We also put forward the advantage of working with tangent
coordinates. These have been exploited in the works of B. Mirman (whose
contribution, in our opinion, is not sufficiently known and which we have
tried to put in perspective), so we call their application in our context
“Mirman’s parametrization”. Another very convenient tool from projective
geometry is reciprocation with respect to the unit circle, which turned out to
be very useful; see e.g. [21].
The structure of the paper is as follows. In order to make this paper self-
contained and facilitate its reading, in Section 2 we introduce three main
components (“toolboxes”) of the necessary background. Section 3 contains,
among other things, a rigorous definition of a Poncelet curve, an analysis of
when such a curve belongs to the unit disk, as well as Mirman’s
parametrization of a package of Poncelet curves, along with a number of
interesting examples. Section 4 contains a characterization of complete
Poncelet curves of minimal class and the equivalence of their constructions,
unifying in a certain sense the previous work of Daepp, Gorkin, Gau, Wu,
Fujimura and collaborators. Finally, in Section 5 we explore in more detail
the setting related to Darboux’s result (Theorem B), namely when one of the
components of the complete Poncelet curve is an ellipse.
## 2\. Our toolbox
### 2.1. Tool set 1: Projective algebraic geometry
We start with a few elementary notions from projective algebraic geometry,
which we will need throughout this paper. All these results are standard and
can be found in practically any text on classical algebraic geometry, see e.g.
[2, 3, 29, 37, 55, 58].
#### 2.1.1. The projective plane
For any field $\mathbb{K}$ such as $\mathbb{R}$ or $\mathbb{C}$, the
_projective plane_ $\mathbb{P}^{2}(\mathbb{K})$ over $\mathbb{K}$ is the set
of all equivalence classes of triples
$(x_{1},x_{2},x_{3})\in\mathbb{K}^{3}\setminus\\{(0,0,0)\\}$, where
$(x_{1},x_{2},x_{3})$ and $(x_{1}^{\prime},x_{2}^{\prime},x_{3}^{\prime})$ are
equivalent if and only if
$(x_{1}^{\prime},x_{2}^{\prime},x_{3}^{\prime})=(\lambda x_{1},\lambda
x_{2},\lambda x_{3})$ for some $\lambda\in\mathbb{K}\setminus\\{0\\}$. The
equivalence class of a triple
$(x_{1},x_{2},x_{3})\in\mathbb{K}^{3}\setminus\\{(0,0,0)\\}$ is denoted
$(x_{1}:x_{2}:x_{3})$ and is called the point of $\mathbb{P}^{2}(\mathbb{K})$
with _homogeneous coordinates_ $(x_{1},x_{2},x_{3})$. As usual, we embed the
affine plane $\mathbb{K}^{2}$ in $\mathbb{P}^{2}(\mathbb{K})$ by identifying
the point $(x_{1},x_{2})\in\mathbb{K}^{2}$ with the point
$(x_{1}:x_{2}:1)\in\mathbb{P}^{2}(\mathbb{K})$ and conversely, any point
$(x_{1}:x_{2}:x_{3})\in\mathbb{P}^{2}(\mathbb{K})$ such that $x_{3}\neq 0$
with the point $(x_{1}/x_{3},x_{2}/x_{3})\in\mathbb{K}^{2}$. The complement of
$\mathbb{K}^{2}$ in $\mathbb{P}^{2}(\mathbb{K})$, that is, the set
$\\{(x_{1}:x_{2}:x_{3})\in\mathbb{P}^{2}(\mathbb{K}):x_{3}=0\\}$, is called
the _line at infinity_.
The _real projective plane_ $\mathbb{P}^{2}(\mathbb{R})$ is canonically
embedded in the _complex projective plane_ $\mathbb{P}^{2}(\mathbb{C})$.
Furthermore, for much of this paper, we will identify $\mathbb{R}^{2}$ with
$\mathbb{C}$ and hence $\mathbb{C}$ is embedded in
$\mathbb{P}^{2}(\mathbb{R})$ as above:
$\mathbb{C}=\mathbb{R}^{2}\subset\mathbb{P}^{2}(\mathbb{R}),\quad
x+iy=(x,y)\mapsto(x:y:1).$ (2.1)
We view $\mathbb{P}^{2}(\mathbb{R})$ as a real two-dimensional compact
manifold in the usual way so that (2.1) is an open embedding. The image of
this embedding is dense in $\mathbb{P}^{2}(\mathbb{R})$, and hence
$\mathbb{P}^{2}(\mathbb{R})$ is a compactification of $\mathbb{C}$. This
compactification is not to be confused with the one-point compactification of
$\mathbb{C}$ which is the Riemann sphere.
#### 2.1.2. Real algebraic curves
A _plane real algebraic curve_ of degree $d$ is an algebraic curve
$\Gamma\subset\mathbb{P}^{2}(\mathbb{C})$ defined by an equation
$F(x_{1},x_{2},x_{3})=0$, where $F(x_{1},x_{2},x_{3})$ is a nonzero
homogeneous polynomial of degree $d$ with _real_ coefficients. Notice that
$F(x_{1},x_{2},x_{3})$ being zero only depends on the equivalence class of the
triplet $(x_{1},x_{2},x_{3})$ since $F(\lambda x,\lambda y,\lambda
z)=\lambda^{d}F(x,y,z)$, and that although we speak
about a real curve $\Gamma$, we have
$\Gamma\subset\mathbb{P}^{2}(\mathbb{C})$. We say that the curve $\Gamma$ is
_irreducible_ if the polynomial $F(x_{1},x_{2},x_{3})$ is irreducible over
$\mathbb{C}$.
The _set of real points_ of a real algebraic curve
$\Gamma\subset\mathbb{P}^{2}(\mathbb{C})$ is defined as the set
$\Gamma(\mathbb{R}):=\Gamma\cap\mathbb{P}^{2}(\mathbb{R}).$
In this paper, we will be very careful to always distinguish between $\Gamma$
and $\Gamma(\mathbb{R})$.
If $f(x,y)$ is any (not necessarily homogeneous) polynomial of degree $d$ with
real coefficients, then
$F(x_{1},x_{2},x_{3}):=x_{3}^{d}f(x_{1}/x_{3},x_{2}/x_{3})$ is a homogeneous
polynomial of degree $d$ with real coefficients. Thus, a nonzero polynomial
$f(x,y)$ with real coefficients defines a real algebraic curve $\Gamma$ via
this homogenization. Geometrically, $\Gamma$ is the projective closure of the
curve given by the equation $f(x,y)=0$, that is,
$\Gamma=\text{clo}{\\{(x,y)\in\mathbb{C}^{2}:f(x,y)=0\\}}$. Conversely,
$F(x_{1},x_{2},x_{3})$ can be dehomogenized by setting $f(x,y):=F(x,y,1)$ and
we have $\Gamma\cap\mathbb{C}^{2}=\\{(x,y)\in\mathbb{C}^{2}:f(x,y)=0\\}$.
###### Remark 2.1.
We should emphasize that a real plane algebraic curve $\Gamma$ is more than
its set of real points $\Gamma(\mathbb{R})$ and even the set of complex points
of $\Gamma$ does not determine its defining polynomial $F(x_{1},x_{2},x_{3})$.
However, if we assume that $F(x_{1},x_{2},x_{3})$ is square-free, that is,
without any repeated irreducible factors, then $\Gamma$ determines
$F(x_{1},x_{2},x_{3})$ uniquely up to a nonzero multiplicative constant. We
will always make this assumption.
#### 2.1.3. Nonsingular and singular points
Suppose $\Gamma$ is defined by the equation $F(x_{1},x_{2},x_{3})=0$. Then
$(a_{1}:a_{2}:a_{3})\in\Gamma$ is called a _nonsingular point_ of $\Gamma$ if
$\left(\frac{\partial F}{\partial x_{1}}(a_{1},a_{2},a_{3}),\frac{\partial
F}{\partial x_{2}}(a_{1},a_{2},a_{3}),\frac{\partial F}{\partial
x_{3}}(a_{1},a_{2},a_{3})\right)\not=(0,0,0);$
otherwise $(a_{1}:a_{2}:a_{3})$ is a _singular point_. We say that $\Gamma$
(resp., $\Gamma(\mathbb{R})$) is _nonsingular_ , if all points of $\Gamma$
(resp., $\Gamma(\mathbb{R})$) are nonsingular. Note that if
$(a_{1}:a_{2}:a_{3})$ is a nonsingular point, then the equation
$\frac{\partial F}{\partial x_{1}}(a_{1},a_{2},a_{3})\,x_{1}+\frac{\partial
F}{\partial x_{2}}(a_{1},a_{2},a_{3})\,x_{2}+\frac{\partial F}{\partial
x_{3}}(a_{1},a_{2},a_{3})\,x_{3}=0$ (2.2)
defines a line in $\mathbb{P}^{2}(\mathbb{R})$ (resp.,
$\mathbb{P}^{2}(\mathbb{C})$) which is called the _tangent line_ of
$\Gamma(\mathbb{R})$ (resp., $\Gamma$) at the point $(a_{1}:a_{2}:a_{3})$. If
$(a_{1}:a_{2}:a_{3})$ is a singular point, then the equation (2.2) is
meaningless. However, since an algebraic curve has only finitely many singular
points (and hence every singular point is an isolated point), tangent lines at
a singular point $(a_{1}:a_{2}:a_{3})$ can be defined by continuity. For
example, Figure 3 shows the tangent lines at a _node_ (or _ordinary double
point_) and at a _cusp_ (or _return point_).
Figure 3. Tangent lines at a node (left) and at a cusp.
#### 2.1.4. Duality
The _dual projective plane_ $\mathbb{P}^{2}(\mathbb{K})^{*}$ is the set of
lines in $\mathbb{P}^{2}(\mathbb{K})$. We will identify
$\mathbb{P}^{2}(\mathbb{K})$ with $\mathbb{P}^{2}(\mathbb{K})^{*}$ by letting
the point $(u_{1}:u_{2}:u_{3})\in\mathbb{P}^{2}(\mathbb{K})$ correspond to the
line in $\mathbb{P}^{2}(\mathbb{K})$ given by the equation
$u_{1}x_{1}+u_{2}x_{2}-u_{3}x_{3}=0.$ (2.3)
Note that via this identification, we can view
$\mathbb{P}^{2}(\mathbb{R})^{*}$ as a subset of
$\mathbb{P}^{2}(\mathbb{C})^{*}$. The motivation for the negative sign in
(2.3) will be explained in the next subsection.
The _dual_ of a real algebraic curve $\Gamma\subset\mathbb{P}^{2}(\mathbb{C})$
is the real algebraic curve $\Gamma^{*}\subset\mathbb{P}^{2}(\mathbb{C})$
whose points correspond to the tangent lines of $\Gamma$, that is,
$\Gamma^{*}:=\left\\{(u_{1}:u_{2}:u_{3})\in\mathbb{P}^{2}(\mathbb{C}):\,u_{1}x_{1}+u_{2}x_{2}-u_{3}x_{3}=0\text{
is a tangent line of $\Gamma$}\right\\}.$ (2.4)
In this context, triples $(u_{1}:u_{2}:u_{3})$ are known as the _tangent
coordinates_ of the dual $\Gamma^{*}$.
Given an equation $F(x_{1},x_{2},x_{3})=0$ that defines $\Gamma$, we can find
an equation $G(u_{1},u_{2},u_{3})=0$ that defines $\Gamma^{*}$ by eliminating
the variables $x_{1},x_{2},x_{3}$ from the system of equations
$F(x_{1},x_{2},x_{3})=0,\ \frac{\partial F}{\partial
x_{1}}(x_{1},x_{2},x_{3})=u_{1},\ \frac{\partial F}{\partial
x_{2}}(x_{1},x_{2},x_{3})=u_{2},\ \frac{\partial F}{\partial
x_{3}}(x_{1},x_{2},x_{3})=-u_{3}.$
In practice, this can be achieved by computing a suitable Gröbner basis. For
example, to obtain $G(u_{1},u_{2},u_{3})$ from $F(x_{1},x_{2},x_{3})$, we can
use the Mathematica code
GroebnerBasis[{F, D[F, x1] - u1, D[F, x2] - u2, D[F, x3] + u3},
  {u1, u2, u3}, {x1, x2, x3}, MonomialOrder -> EliminationOrder]
where F stands for the polynomial $F(x_{1},x_{2},x_{3})$. The output will be a
Gröbner basis with a single element, namely the desired polynomial
$G(u_{1},u_{2},u_{3})$. All plots of dual curves in this paper were produced
by first obtaining the equation of the dual in this manner.
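For readers who prefer an open-source alternative, the same elimination can be sketched in Python with SymPy. This is only an illustration: the ellipse $x^{2}/4+y^{2}=1$ (homogenized and cleared of denominators below) is an arbitrary sample curve, whose dual conic is classically known to be $4u_{1}^{2}+u_{2}^{2}-u_{3}^{2}=0$.

```python
from sympy import symbols, groebner, Poly, div

x1, x2, x3, u1, u2, u3 = symbols('x1 x2 x3 u1 u2 u3')

# Ellipse x^2/4 + y^2 = 1, homogenized and cleared of denominators.
F = x1**2 + 4*x2**2 - 4*x3**2

# Eliminate x1, x2, x3 from F = 0, dF/dx1 = u1, dF/dx2 = u2, dF/dx3 = -u3
# via a lexicographic Groebner basis with the x's ordered first.
system = [F, F.diff(x1) - u1, F.diff(x2) - u2, F.diff(x3) + u3]
gb = groebner(system, x1, x2, x3, u1, u2, u3, order='lex')

# The basis elements free of x1, x2, x3 generate the elimination ideal,
# i.e. they give the equation of the dual curve.
dual = [g for g in gb.exprs if not g.has(x1, x2, x3)]
```

The single $x$-free basis element is, up to a constant factor, the dual conic $4u_{1}^{2}+u_{2}^{2}-u_{3}^{2}$.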
The terminology “dual” is justified by the fact that
$(\Gamma^{*})^{*}=\Gamma$. The degree of $\Gamma^{*}$ is called the _class_ of
$\Gamma$. The relationship between the degree $d$ and the class of $\Gamma$ is
rather subtle and is described by Plücker’s formula. In the special case when
$\Gamma$ is nonsingular, Plücker’s formula says that the class of $\Gamma$ is
equal to $d(d-1)$. Since $(\Gamma^{*})^{*}=\Gamma$, this means that if
$\Gamma$ is nonsingular and $d>2$, then $\Gamma^{*}$ must have singular
points. The dual of a nonsingular conic ($d=2$) is again a nonsingular conic;
the dual of a line ($d=1$) is a point.
We will be mostly interested in the set of real points of the duals of real
algebraic curves. Suppose $C\subseteq\Gamma(\mathbb{R})$ is a union of path-
components of the set of real points of a real algebraic curve $\Gamma$. Then
we define the dual of $C$ as the set
$C^{*}:=\left\\{(u_{1}:u_{2}:u_{3})\in\mathbb{P}^{2}(\mathbb{R}):\,u_{1}x_{1}+u_{2}x_{2}-u_{3}x_{3}=0\text{
is a tangent line of $C$}\right\\}.$ (2.5)
In particular, $C^{*}\subseteq\Gamma^{*}(\mathbb{R})$. Furthermore, if $C$ is
a path-component of $\Gamma(\mathbb{R})$, then $C^{*}$ is a path-component of
$\Gamma^{*}(\mathbb{R})$ and $(C^{*})^{*}=C$.
#### 2.1.5. Reciprocation about $\mathbb{T}$
A nice geometric interpretation of duality in the real projective plane
$\mathbb{P}^{2}(\mathbb{R})$ can be given in terms of so-called reciprocation
about the unit circle $\mathbb{T}$. In the following, we will view
$\mathbb{C}\subset\mathbb{P}^{2}(\mathbb{R})$ as in (2.1). Then $\mathbb{T}$
is the set of real points of the algebraic curve given by
$x_{1}^{2}+x_{2}^{2}-x_{3}^{2}=0$.
The _reciprocal_ or _polar_ of a point $\zeta=u+iv\not=0$ in $\mathbb{C}$
about $\mathbb{T}$ is the line $l$ that contains the point $\zeta/|\zeta|^{2}$
and is perpendicular to the ray from $0$ through $\zeta$ (see Figure 4). We
also say that $\zeta$ is the _reciprocal_ or the _pole_ of $l$. The reciprocal
of the origin $0$ is the line at infinity $l_{\infty}$. We can extend
reciprocation about $\mathbb{T}$ to give a bijection between points and lines
in $\mathbb{P}^{2}(\mathbb{R})$. It is an easy exercise to verify that the
reciprocal of the point $(u_{1}:u_{2}:u_{3})\in\mathbb{P}^{2}(\mathbb{R})$
about $\mathbb{T}$ is precisely the line given by the equation
$u_{1}x_{1}+u_{2}x_{2}-u_{3}x_{3}=0$ (which motivates the definition
(2.3)–(2.4)).
Suppose $l$ is a line in $\mathbb{C}$ that intersects the unit circle
$\mathbb{T}$ in two distinct points $z_{1}$ and $z_{2}$. Then $l$ is given by
the equation
$z+z_{1}z_{2}\overline{z}-(z_{1}+z_{2})=0$ (2.6)
in the complex variable $z$. If we write $z=x+iy$ and compare (2.6) to
$ux+vy-1=0$, we obtain
$u=\frac{1+z_{1}z_{2}}{z_{1}+z_{2}}\ ,\quad
v=i\frac{1-z_{1}z_{2}}{z_{1}+z_{2}},\quad\zeta=\frac{2z_{1}z_{2}}{z_{1}+z_{2}}.$
(2.7)
Using (2.7), it is now straightforward to express the elementary symmetric
functions $z_{1}+z_{2}$ and $z_{1}z_{2}$ in terms of $\zeta=u+iv$ as follows:
$z_{1}+z_{2}=\frac{2}{\overline{\zeta}}\ ,\quad
z_{1}z_{2}=\frac{\zeta}{\overline{\zeta}}\ .$ (2.8)
This in turn allows us to write
$z_{1}=\frac{1+i\sqrt{\zeta\overline{\zeta}-1}}{\overline{\zeta}},\quad
z_{2}=\frac{1-i\sqrt{\zeta\overline{\zeta}-1}}{\overline{\zeta}}\ .$ (2.9)
Geometrically, the points $z_{1},z_{2}\in\mathbb{T}$ are the points where the
two tangent lines to $\mathbb{T}$ containing the point $\zeta$ touch
$\mathbb{T}$ (see Figure 4).
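The formulas (2.7)–(2.9) are easy to check numerically. In the following plain-Python sketch, the two points on $\mathbb{T}$ are arbitrary sample values; the pole $\zeta$ then lies in the exterior of $\mathbb{T}$, and the tangency points are recovered from it.

```python
import cmath

# Two distinct points on the unit circle T (arbitrary sample values).
z1 = cmath.exp(0.4j)
z2 = cmath.exp(2.1j)

# Pole (reciprocal) of the chord through z1, z2, as in (2.7): zeta = u + iv.
zeta = 2 * z1 * z2 / (z1 + z2)

# Elementary symmetric functions recovered from zeta, as in (2.8).
s = 2 / zeta.conjugate()            # should equal z1 + z2
p = zeta / zeta.conjugate()         # should equal z1 * z2

# Tangency points recovered from zeta, as in (2.9); here |zeta| > 1.
root = cmath.sqrt(abs(zeta) ** 2 - 1)
w1 = (1 + 1j * root) / zeta.conjugate()
w2 = (1 - 1j * root) / zeta.conjugate()
```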
#### 2.1.6. Mirman’s parametrization
Let $\Gamma$ be a real algebraic curve such that
$\Gamma(\mathbb{R})\subset\mathbb{D}$. In this case, we can use reciprocation
about $\mathbb{T}$ to give an alternative parametrization of $\Gamma^{*}$ that
has been systematically exploited in the work of Mirman and collaborators (see
e.g., [47, 50]). Suppose $\Gamma$ has class $m$ so that $\Gamma^{*}$ is given
by an equation $G(u_{1},u_{2},u_{3})=0$, where $G(u_{1},u_{2},u_{3})$ is a
homogeneous polynomial of degree $m$. Let $g(u,v):=G(u,v,1)$ be the
dehomogenization. Using the substitution (2.7), define a polynomial
$P(z_{1},z_{2}):=(z_{1}+z_{2})^{m}g\left(\frac{1+z_{1}z_{2}}{z_{1}+z_{2}},i\frac{1-z_{1}z_{2}}{z_{1}+z_{2}}\right).$
(2.10)
Then $P(z_{1},z_{2})$ is a symmetric polynomial of (total) degree at most
$2m$ with complex coefficients. By our discussion above, a point
$\zeta=u+iv\in\mathbb{C}$ in the exterior of $\mathbb{T}$ lies on
$\Gamma^{*}(\mathbb{R})$ if and only if the points $z_{1},z_{2}\in\mathbb{T}$
given by (2.7) or Figure 4 satisfy the equation
$P(z_{1},z_{2})=0.$ (2.11)
We say that in (2.11), $\Gamma^{*}$ (or equivalently, $\Gamma$) is written in
tangent coordinates, and refer to it as Mirman’s parametrization of the
algebraic curve.
Figure 4. Geometric construction of the dual curve via reciprocation.
#### 2.1.7. Real foci
In the study of real conics (real algebraic curves of degree $2$), the points
called foci have played a central role since antiquity. Plücker generalized
the concept to curves of higher degree as follows. A point
$(a_{1}:a_{2}:a_{3})\in\mathbb{P}^{2}(\mathbb{C})$ is called a _focus_ of a
real algebraic curve $\Gamma\subset\mathbb{P}^{2}(\mathbb{C})$ if the two
lines through $(a_{1}:a_{2}:a_{3})$ and the so-called circular points at
infinity $(1:\pm i:0)$ are tangent to $\Gamma$. A focus $(a_{1}:a_{2}:a_{3})$
of $\Gamma$ is called a _real focus_ if
$(a_{1}:a_{2}:a_{3})\in\mathbb{P}^{2}(\mathbb{R})$.
It is easy to verify that the lines in $\mathbb{P}^{2}(\mathbb{C})$ through
$(a_{1}:a_{2}:a_{3})$ and $(1:\pm i:0)$ are given by the equations
$a_{3}x_{1}\pm ia_{3}x_{2}-(a_{1}\pm ia_{2})x_{3}=0$, respectively. Thus, if
$G(u_{1},u_{2},u_{3})=0$ is the equation defining the dual curve $\Gamma^{*}$,
then $(a_{1}:a_{2}:a_{3})\in\mathbb{P}^{2}(\mathbb{C})$ is a focus of $\Gamma$
if and only if $G(a_{3},\pm ia_{3},a_{1}\pm ia_{2})=0$. If
$(a_{1}:a_{2}:a_{3})\in\mathbb{P}^{2}(\mathbb{R})$, then
$G(a_{3},-ia_{3},a_{1}-ia_{2})=\overline{G(a_{3},ia_{3},a_{1}+ia_{2})}$ since
the coefficients of $G(u_{1},u_{2},u_{3})$ are real. So,
$(a_{1}:a_{2}:a_{3})\in\mathbb{P}^{2}(\mathbb{R})$ is a real focus of $\Gamma$
if and only if $G(a_{3},ia_{3},a_{1}+ia_{2})=0$. Viewing
$\mathbb{C}\subset\mathbb{P}^{2}(\mathbb{R})$ as in (2.1), we see that
$z\in\mathbb{C}$ is a real focus of $\Gamma$ if and only if
$G(1,i,z)=0.$ (2.12)
### 2.2. Tool set 2: Orthogonal polynomials on the unit circle
#### 2.2.1. OPUC and Szegő recursion
Throughout the rest of the paper we reserve the notation $\Phi_{n}(z)$ for
monic polynomials of degree exactly $n$; when we need to make the dependence
on the zeros explicit, we will write
$\Phi_{n}(z;f_{1},\dots,f_{n}):=\prod_{j=1}^{n}\left(z-f_{j}\right).$ (2.13)
Moreover, if
$\Phi_{n}(z)=\sum_{j=0}^{n}c_{j}z^{j},\quad c_{n}=1,$
then its reversed polynomial is
$\Phi^{*}_{n}(z)=\sum_{j=0}^{n}\overline{c_{j}}z^{n-j}=z^{n}\,\overline{\Phi_{n}\left(1/\overline{z}\right)}.$
(2.14)
Observe that $\Phi^{*}_{n}(z)$ can be of degree strictly less than $n$.
If $\Phi_{n}(z)=\Phi_{n}(z;f_{1},\dots,f_{n})$ is a monic polynomial of degree
$n$ with all its zeros $f_{j}\in\mathbb{D}$, then there exists a measure $\mu$
on the unit circle $\mathbb{T}$ such that $\Phi_{n}(z)$ is orthogonal to
$\\{z^{j}\\}_{j=0}^{n-1}$ in $L^{2}(\mathbb{T},\mu)$. There are actually many
such measures $\mu$ and one example is
$c\cdot|\Phi_{n}(e^{i\theta};f_{1},\ldots,f_{n})|^{-2}d\theta,$
where $c$ is a normalization constant. Here we identify measures on
$\mathbb{T}$ and measures on $[0,2\pi)$ in the usual way. This means that the
polynomial $\Phi_{n}(z)$ has all the properties of an orthogonal polynomial on
the unit circle (OPUC).
The most important property of OPUC for our investigation is the _Szegő
recursion_ , which states that if $\Phi_{k}(z)$ and $\Phi_{k+1}(z)$ are
consecutive orthogonal polynomials for a measure $\mu$ on $\mathbb{T}$, then
there exists some $\alpha_{k}\in\mathbb{D}$ so that
$\begin{pmatrix}\Phi_{k+1}(z)\\\
\Phi_{k+1}^{*}(z)\end{pmatrix}=\begin{pmatrix}z&-\overline{\alpha_{k}}\\\
-\alpha_{k}z&1\end{pmatrix}\,\begin{pmatrix}\Phi_{k}(z)\\\
\Phi_{k}^{*}(z)\end{pmatrix}.$ (2.15)
It is easy to see that $\alpha_{k}=-\overline{\Phi_{k+1}(0)}$; these are known
as the _Verblunsky coefficients_ for the OPUC. Notice that the Szegő recursion
can be inverted, allowing one to recover the orthogonal polynomials of degree
smaller than $n$ from the degree $n$ orthogonal polynomial. This tells us that
even though there are many choices of orthogonality measure for the polynomial
$\Phi_{n}$, they all have the same first $n$ orthogonal polynomials
$\\{\Phi_{j}(z)\\}_{j=0}^{n-1}$ and hence they all have the same first $n$
Verblunsky coefficients (see [60, Theorem 1.7.5]). Hence, any monic
$\Phi_{n}(z)$ with all of its zeros in $\mathbb{D}$ can be alternatively
parametrized by its Verblunsky coefficients $\alpha_{0},\dots,\alpha_{n-1}$
(instead of its zeros). When we need to make this dependence explicit, we will
use the notation
$\Phi_{n}(z)=\Phi_{n}^{(\alpha_{0},\dots,\alpha_{n-1})}(z)$ (2.16)
in contrast to (2.13).
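The recursion (2.15) and the formula $\alpha_{k}=-\overline{\Phi_{k+1}(0)}$ can be illustrated in a few lines of Python (a minimal sketch; the Verblunsky coefficients below are arbitrary values in $\mathbb{D}$), representing a polynomial by its coefficient list $[c_{0},\dots,c_{n}]$.

```python
# Monic polynomials as coefficient lists [c_0, ..., c_n] with c_n = 1.

def reversed_poly(c):
    """Phi*_n as in (2.14): coefficient of z^k is conj(c_{n-k})."""
    return [x.conjugate() for x in reversed(c)]

def szego_step(phi, alpha):
    """One Szego step (2.15): Phi_{k+1}(z) = z Phi_k(z) - conj(alpha) Phi_k*(z)."""
    phi_star = reversed_poly(phi)
    out = [0] * (len(phi) + 1)
    for j, c in enumerate(phi):          # z * Phi_k
        out[j + 1] += c
    for j, c in enumerate(phi_star):     # - conj(alpha) * Phi_k*
        out[j] -= alpha.conjugate() * c
    return out

alphas = [0.3 + 0.1j, -0.2j, 0.5]        # sample Verblunsky coefficients in D
phi = [1]                                # Phi_0 = 1
polys = [phi]
for a in alphas:
    phi = szego_step(phi, a)
    polys.append(phi)
```

Reading off the constant coefficients confirms $\alpha_{k}=-\overline{\Phi_{k+1}(0)}$, and the polynomials stay monic at every step.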
#### 2.2.2. CMV matrices
If the orthogonality measure $\mu$ on $\mathbb{T}$ has infinitely many points
in its support, then the sequence of its Verblunsky coefficients is also
infinite. In this case, one can define a sequence of $2\times 2$ matrices
$\\{\bm{\Theta}_{j}\\}_{j=0}^{\infty}$ by
$\bm{\Theta}_{j}=\begin{pmatrix}\bar{\alpha}_{j}&\sqrt{1-|\alpha_{j}|^{2}}\\\
\sqrt{1-|\alpha_{j}|^{2}}&-\alpha_{j}\end{pmatrix}$
and the operators $\bm{{\mathcal{L}}}$ and $\bm{{\mathcal{M}}}$ by
$\bm{{\mathcal{L}}}=\bm{\Theta}_{0}\oplus\bm{\Theta}_{2}\oplus\bm{\Theta}_{4}\oplus\cdots,\qquad\bm{{\mathcal{M}}}=\bm{1}\oplus\bm{\Theta}_{1}\oplus\bm{\Theta}_{3}\oplus\cdots$
where the initial $\bm{1}$ in the definition of $\bm{{\mathcal{M}}}$ is a
$1\times 1$ identity matrix. The _CMV matrix_ corresponding to $\mu$ is then
given by
$\bm{\mathcal{G}}=\bm{\mathcal{G}}(\\{\alpha_{j}\\}):=\bm{{\mathcal{L}}}\bm{{\mathcal{M}}}$,
or explicitly,
$\bm{\mathcal{G}}:=\begin{pmatrix}\overline{\alpha_{0}}&\overline{\alpha_{1}}\rho_{0}&\rho_{1}\rho_{0}&0&0&\dots\\\
\rho_{0}&-\overline{\alpha_{1}}\alpha_{0}&-\rho_{1}\alpha_{0}&0&0&\dots\\\
0&\overline{\alpha_{2}}\rho_{1}&-\overline{\alpha_{2}}\alpha_{1}&\overline{\alpha_{3}}\rho_{2}&\rho_{3}\rho_{2}&\dots\\\
0&\rho_{2}\rho_{1}&-\rho_{2}\alpha_{1}&-\overline{\alpha_{3}}\alpha_{2}&-\rho_{3}\alpha_{2}&\dots\\\
0&0&0&\overline{\alpha_{4}}\rho_{3}&-\overline{\alpha_{4}}\alpha_{3}&\dots\\\
\dots&\dots&\dots&\dots&\dots&\dots\end{pmatrix},\quad\rho_{n}=\sqrt{1-|\alpha_{n}|^{2}}$
(2.17)
(see [60, Section 4.2]). Since each of $\bm{{\mathcal{L}}}$ and
$\bm{{\mathcal{M}}}$ is a direct sum of unitary matrices, each of
$\bm{{\mathcal{L}}}$ and $\bm{{\mathcal{M}}}$ is unitary and hence
$\bm{{\mathcal{G}}}$ is unitary as an operator on $l^{2}(\mathbb{N})$. The
principal $n\times n$ submatrix of $\bm{{\mathcal{G}}}$, which depends only on
the Verblunsky coefficients $\alpha_{0},\dots,\alpha_{n-1}$, will also be
called the $n\times n$ _cut-off CMV matrix_ , which we will denote by
$\bm{{\mathcal{G}}}^{(n)}=\bm{{\mathcal{G}}}^{(n)}(\alpha_{0},\dots,\alpha_{n-1})$.
The cut-off CMV matrices are the canonical representation of the compressed
multiplication operator (see [46]) and satisfy
$\Phi_{n}^{(\alpha_{0},\dots,\alpha_{n-1})}(z)=\det(z\bm{I}_{n}-\bm{\mathcal{G}}^{(n)}).$
(2.18)
Thus all eigenvalues of $\bm{\mathcal{G}}^{(n)}$ lie in
the unit disk $\mathbb{D}$. Furthermore, we see from the construction that the
operator norm $\|\bm{\mathcal{G}}^{(n)}\|=1$ and
$\operatorname{\mbox{rank}}(\bm{I}_{n}-\bm{{\mathcal{G}}}^{(n)}\bm{{\mathcal{G}}}^{(n)*})=\operatorname{\mbox{rank}}(\bm{I}_{n}-\bm{{\mathcal{G}}}^{(n)*}\bm{{\mathcal{G}}}^{(n)})=1.$
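The identity (2.18) and the defect condition can be checked numerically. The sketch below (NumPy, with arbitrary sample Verblunsky coefficients) assembles the cut-off CMV matrix as the principal corner of $\bm{{\mathcal{L}}}\bm{{\mathcal{M}}}$ and compares its characteristic polynomial against the Szegő polynomial at a few test points.

```python
import numpy as np

def theta(a):
    """2 x 2 block Theta_j built from a Verblunsky coefficient a in D."""
    rho = np.sqrt(1 - abs(a) ** 2)
    return np.array([[np.conj(a), rho], [rho, -a]], dtype=complex)

def cutoff_cmv(alphas):
    """n x n principal submatrix of G = L M; it depends only on alphas[0..n-1]."""
    n = len(alphas)
    pad = list(alphas) + [0.0, 0.0]      # padding past index n-1 never reaches the corner
    N = len(pad) + 1
    L = np.eye(N, dtype=complex)
    M = np.eye(N, dtype=complex)         # M starts with the 1 x 1 identity block
    for j, a in enumerate(pad):
        if j % 2 == 0:
            L[j:j + 2, j:j + 2] = theta(a)
        else:
            M[j:j + 2, j:j + 2] = theta(a)
    return (L @ M)[:n, :n]

def phi(z, alphas):
    """Evaluate Phi_n(z) by running the Szego recursion (2.15)."""
    p, ps = 1.0 + 0j, 1.0 + 0j
    for a in alphas:
        p, ps = z * p - np.conj(a) * ps, -a * z * p + ps
    return p

alphas = [0.3 + 0.1j, -0.4, 0.2j]        # arbitrary sample values in D
G = cutoff_cmv(alphas)
```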
#### 2.2.3. Paraorthogonal polynomials
A _paraorthogonal polynomial_ on the unit circle (POPUC) can be generated by
the Szegő recursion (2.15) if we replace the last Verblunsky coefficient
$\alpha_{n-1}$ by a value $\lambda\in\mathbb{T}$:
$\Phi_{n}^{(\alpha_{0},\dots,\alpha_{n-2},\lambda)}(z)=z\,\Phi_{n-1}(z)-\overline{\lambda}\,\Phi_{n-1}^{*}(z),\quad\Phi_{n-1}(z)=\Phi_{n-1}^{(\alpha_{0},\dots,\alpha_{n-2})}(z).$
(2.19)
The $n$ zeros $z_{j}=z_{n,j}^{\lambda}$, $j=1,\dots,n$, of
$\Phi_{n}^{(\alpha_{0},\dots,\alpha_{n-2},\lambda)}(z)$ are distinct and
belong to $\mathbb{T}$. This can be seen by noting that the zeros of
$\Phi_{n}^{(\alpha_{0},\dots,\alpha_{n-2},\lambda)}(z)$ are the eigenvalues of
an $n\times n$ unitary dilation of $\bm{\mathcal{G}}^{(n-1)}$. By this we mean
that one can take the cutoff CMV matrix corresponding to the Verblunsky
coefficients $\\{\alpha_{j}\\}_{j=0}^{n-2}$ and add one row and one column to
get a unitary $n\times n$ matrix whose characteristic polynomial is
$\Phi_{n}^{(\alpha_{0},\dots,\alpha_{n-2},\lambda)}(z)$:
$\Phi_{n}^{(\alpha_{0},\dots,\alpha_{n-2},\lambda)}(z)=\det(z\bm{I}_{n}-\bm{\mathcal{G}}^{(n)}),\quad\bm{\mathcal{G}}^{(n)}=\bm{\mathcal{G}}^{(n)}(\alpha_{0},\dots,\alpha_{n-2},\lambda),\quad\lambda\in\mathbb{T}.$
(2.20)
In fact, all $n\times n$ unitary dilations of
$\bm{\mathcal{G}}^{(n-1)}$ are obtained this way for an appropriate choice of
$\lambda\in\mathbb{T}$.
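A quick numerical illustration (a minimal sketch; the zeros of $\Phi_{2}$ and the value of $\lambda$ below are arbitrary sample choices): building (2.19) from a $\Phi_{2}$ with zeros in $\mathbb{D}$ indeed produces three distinct zeros, all on $\mathbb{T}$.

```python
import numpy as np

# Phi_2 with prescribed zeros in D (arbitrary sample values).
f = np.array([0.5, -0.3 + 0.2j])
c = np.poly(f)                       # coefficients of Phi_2, highest power first
c_star = np.conj(c[::-1])            # coefficients of Phi_2*, as in (2.14)

# POPUC (2.19): Phi_3(z) = z Phi_2(z) - conj(lambda) Phi_2*(z), lambda on T.
lam = np.exp(1.3j)
popuc = np.concatenate([c, [0]]) - np.conj(lam) * np.concatenate([[0], c_star])
zeros = np.roots(popuc)
```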
###### Definition 2.2.
If in (2.19), $\Phi_{n-1}(z)=\Phi_{n-1}(z;f_{1},\dots,f_{n-1})$, that is, if
$f_{1},\dots,f_{n-1}$ are the zeros of $\Phi_{n-1}(z)$, then the
$1$-parametric family of points
$\mathcal{Z}_{n}^{\lambda}=\\{z_{n,1}^{\lambda},\dots,z_{n,n}^{\lambda}\\}$ on
$\mathbb{T}$ is called the _paraorthogonal extension_ of the zeros
$f_{1},\dots,f_{n-1}$ of $\Phi_{n-1}(z)$.
It is known that two sets, $\mathcal{Z}_{n}^{\lambda_{1}}$ and
$\mathcal{Z}_{n}^{\lambda_{2}}$ from this extension, with
$\lambda_{1},\lambda_{2}\in\mathbb{T}$, $\lambda_{1}\neq\lambda_{2}$,
determine the original points $f_{1},\dots,f_{n-1}$, and hence
$\Phi_{n-1}(z)$, completely (Wendroff’s Theorem for OPUC, see e.g. [46]).
For further details on orthogonal polynomials on the unit circle see e.g. the
modern treatise [60], or the more classical texts [28] and [63].
#### 2.2.4. Blaschke products
Closely related to OPUC is the notion of Blaschke products. A normalized
_Blaschke product_ is a rational function of the form
$b_{n}(z;f_{1},\dots,f_{n})=\frac{\Phi_{n}(z)}{\Phi_{n}^{*}(z)},$ (2.21)
where the points $f_{1},f_{2},\dots,f_{n}$ lie inside the unit disk
$\mathbb{D}$,
$\Phi_{n}(z)=\Phi_{n}(z;f_{1},\dots,f_{n}),\qquad\qquad\Phi_{n}^{*}(z)=\Phi_{n}^{*}(z;f_{1},\dots,f_{n}),$
see notation (2.13)–(2.14). We will say that the Blaschke product $b_{n}(z)$
has _degree $n$_.
For any $\lambda\in\mathbb{T}$, the equation
$b_{n}(z)=\overline{\lambda}$ (2.22)
has exactly $n$ solutions, $z_{1}^{\lambda},\dots,z_{n}^{\lambda}$, all
distinct and on $\mathbb{T}$.
###### Definition 2.3.
If $z_{1}^{\lambda},\dots,z_{n}^{\lambda}$ are the solutions of (2.22), then
we say that Blaschke product $b_{n}$ _identifies_ the set of points
$\mathcal{Z}^{\lambda}=\\{z_{1}^{\lambda},\dots,z_{n}^{\lambda}\\}$.
### 2.3. Tool set 3: Matrices and numerical range
#### 2.3.1. Class $\mathcal{S}_{n}$.
A square complex matrix $\bm{A}\in\mathbb{C}^{n\times n}$ is a _completely
non-unitary contraction_ if $\|\bm{A}\|\leq 1$ and all eigenvalues of $\bm{A}$
are strictly inside the unit disk $\mathbb{D}$. The space $\mathcal{S}_{n}$ is
the set of completely non-unitary contractions in $\mathbb{C}^{n\times n}$
with defect index
$\operatorname{\mbox{rank}}(\bm{I}_{n}-\bm{A}\bm{A}^{*})=\operatorname{\mbox{rank}}(\bm{I}_{n}-\bm{A}^{*}\bm{A})=1.$
The spaces $\mathcal{S}_{n}$ and their infinite-dimensional analogues have
been studied extensively, initially in the work of Livshitz [44], and in the
1960s by Sz.-Nagy and collaborators [62]. A canonical example of a matrix in
$\mathcal{S}_{n}$ is a truncated shift operator, the $n\times n$ nilpotent Jordan block
$\bm{J}_{n}=\begin{pmatrix}0&&&&&\\\ 1&0&&&&\\\ &1&0&&\\\ &&\ddots&\ddots&\\\
&&&1&0\end{pmatrix}_{n\times n}.$ (2.23)
A remarkable property of the class $\mathcal{S}_{n}$ is that the spectrum
$\sigma(\bm{A})$ of a matrix $\bm{A}\in\mathcal{S}_{n}$ determines the unitary
equivalence class of $\bm{A}$. It turns out that the equivalence class of
$\bm{A}\in\mathcal{S}_{n}$ is determined even by its numerical range
$W(\bm{A})$ (see the definition in Section 2.3.2 below).
There are several “canonical” representations for matrices (or rather, of
their unitary equivalence classes) in $\mathcal{S}_{n}$, each having its own
merit. For instance, we can think of elements of $\mathcal{S}_{n}$ as the
“compressions of the shift” [62] or “compressed multiplication operators” [46]
in a Hardy space setting. We can characterize $\bm{A}\in\mathcal{S}_{n}$ via
their singular value decomposition (SVD), $\bm{A}=\bm{U}\bm{D}\bm{V}^{*}$,
where $\bm{U}$ and $\bm{V}$ are unitary and $\bm{D}$ is the diagonal matrix
$\operatorname{diag}(1,...,1,a)$ with $0\leq a<1$ (see [65]). Another
representation, also from [65], states that a matrix
$\bm{A}=\left(a_{ij}\right)_{i,j=1}^{n}$ belongs to $\mathcal{S}_{n}$ if and
only if it is unitarily similar to an upper triangular matrix with entries
satisfying $\left|a_{ii}\right|<1$ for all $i$, while for $i<j$,
$a_{ij}=b_{ij}\left(1-\left|a_{ii}\right|^{2}\right)^{\frac{1}{2}}\left(1-\left|a_{jj}\right|^{2}\right)^{\frac{1}{2}},\quad
b_{ij}=\begin{cases}\displaystyle\prod_{k=i+1}^{j-1}\left(-\bar{a}_{kk}\right)&\text{
if }i<j-1\\\\[2.84526pt] 1&\text{ if }i=j-1.\end{cases}$
From our observations above it follows that the cut-off CMV matrix
$\bm{\mathcal{G}}^{(n)}$ is in the class $\mathcal{S}_{n}$. In fact, from [46,
Theorem 2] we know that every matrix from the class $\mathcal{S}_{n}$ is
unitarily equivalent to a cutoff CMV matrix. In short, CMV matrices are
another canonical representation of elements in $\mathcal{S}_{n}$, which has
several advantages. For instance, it gives us an effective construction of the
equivalence class of $\bm{A}\in\mathcal{S}_{n}$ from its eigenvalues
$f_{1},\dots,f_{n}$: from the monic polynomial $\Phi_{n}(z;f_{1},\dots,f_{n})$
use inverse Szegő recursion to obtain the Verblunsky coefficients
$\alpha_{0},\dots,\alpha_{n-1}$ and take
$\bm{A}=\bm{\mathcal{G}}^{(n)}(\alpha_{0},\dots,\alpha_{n-1})$.
#### 2.3.2. Numerical range.
The _numerical range_ or _field of values_ of a matrix
$\bm{A}\in\mathbb{C}^{n\times n}$ is the subset of the complex plane
$\mathbb{C}$ given by
$W(\bm{A})=\\{x^{*}\bm{A}x:\,x\in\mathbb{C}^{n},\;x^{*}x=1\\}.$
In other words, the numerical range is the image of the Euclidean unit sphere
under the continuous map $x\mapsto x^{*}\bm{A}x$.
For a matrix $\bm{A}$, the numerical range $W(\bm{A})$ is a compact and convex
subset of $\mathbb{C}$ (Toeplitz–Hausdorff Theorem) that contains the spectrum
$\sigma(\bm{A})$ of $\bm{A}$, and it is invariant under unitary conjugation of
$\bm{A}$. This allows us, for normal matrices, to reduce the analysis of
$W(\bm{A})$ to the case of diagonal $\bm{A}$; a straightforward consequence is
that for a normal $\bm{A}$, $W(\bm{A})$ is the convex hull of
$\sigma(\bm{A})$. In particular, for a unitary matrix $\bm{A}$, its numerical
range is a convex polygon with vertices at its eigenvalues, inscribed in the
unit circle $\mathbb{T}$.
The so-called Elliptical Range Theorem (see e.g. [10, Chapter 6], [35, §1.3]
or [43]) says that if $n=2$, then $W(\bm{A})$ is an ellipse with the
eigenvalues $f_{1}$ and $f_{2}$ as foci and minor axis of length
$\sqrt{\operatorname{tr}(\bm{A}^{*}\bm{A})-|f_{1}|^{2}-|f_{2}|^{2}}.$
The numerical range of the Jordan block (2.23) is the circular disk with
center at $0$ and radius $r=\cos(\pi/(n+1))$ [33, Proposition 1] (notice that
for $n=2$ it satisfies Chapple’s formula (1.1)).
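This radius is easy to confirm numerically, using the standard fact (recalled below in connection with Kippenhahn's theorem) that the largest eigenvalue of $\mathop{\rm Re}(e^{-i\varphi}\bm{A})$ is the distance from the origin to the supporting line of $W(\bm{A})$ orthogonal to the direction $e^{i\varphi}$. For the Jordan block this quantity does not depend on $\varphi$, in agreement with $W(\bm{J}_{n})$ being a disk; the sketch below uses $n=5$, an arbitrary sample size.

```python
import numpy as np

n = 5                                    # size of the Jordan block (arbitrary sample value)
J = np.diag(np.ones(n - 1), -1)          # nilpotent Jordan block J_n of (2.23)

def lam(A, phi):
    """Largest eigenvalue of Re(e^{-i phi} A): the support function of W(A)."""
    B = np.exp(-1j * phi) * A
    return np.linalg.eigvalsh((B + B.conj().T) / 2).max()

# For J_n this value is constant in phi, so W(J_n) is a disk centered at 0.
radii = [lam(J, phi) for phi in np.linspace(0.0, 2 * np.pi, 37)]
```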
The most general statement in this direction is Kippenhahn’s theorem, see
Theorem C in the Introduction, which states that _for
$\bm{A}\in\mathbb{C}^{n\times n}$ there exists a real algebraic curve $\Gamma$
of class $n$ whose foci are the eigenvalues of $\bm{A}$, such that $W(\bm{A})$
is the convex hull of $\Gamma(\mathbb{R})$._
In fact, the proof of Kippenhahn’s theorem is constructive and contains the
derivation of an equation for the dual $\Gamma^{*}$ of the algebraic curve
$\Gamma$. Indeed, for the given matrix $\bm{A}\in\mathbb{C}^{n\times n}$,
consider the homogeneous polynomial
$G_{\bm{A}}(u_{1},u_{2},u_{3}):=\det(u_{1}\mathop{\rm
Re}\bm{A}+u_{2}\mathop{\rm Im}\bm{A}-u_{3}\bm{I}),$ (2.24)
where $\mathop{\rm Re}\bm{A}:=\left(\bm{A}+\bm{A}^{*}\right)/2$ and
$\mathop{\rm Im}\bm{A}:=\left(\bm{A}-\bm{A}^{*}\right)/(2i)$ are the real and
imaginary parts of $\bm{A}$, respectively, and $\bm{I}$ in this case denotes
the $n\times n$ identity matrix. It is easy to see that
$G_{\bm{A}}(u_{1},u_{2},u_{3})$ is a homogeneous polynomial of degree $n$ with
real coefficients. Thus $G_{\bm{A}}(u_{1},u_{2},u_{3})=0$ defines a real
algebraic curve $\Gamma_{\bm{A}}^{*}\subset\mathbb{P}^{2}(\mathbb{C})$ of
degree $n$ that is the dual of an algebraic curve
$\Gamma_{\bm{A}}\subset\mathbb{P}^{2}(\mathbb{C})$ of class $n$. We will call
$\Gamma_{\bm{A}}$ the _Kippenhahn curve_ of $\bm{A}$. As follows from [38,
39] (see also [41, Theorem 6.1]), if we denote by
$\lambda_{\varphi}\in\mathbb{R}$, $\varphi\in[0,2\pi]$, the maximal eigenvalue
of the matrix $\mathop{\rm Re}(e^{-i\varphi}\bm{A})$ then
$(\cos\varphi:\sin\varphi:\lambda_{\varphi})\in\mathbb{P}^{2}(\mathbb{R})$
belongs to $\Gamma_{\bm{A}}^{*}(\mathbb{R})$ and the equation
$u_{1}\cos\varphi+u_{2}\sin\varphi-u_{3}\lambda_{\varphi}=0$
defines a supporting line to $W(\bm{A})$. In consequence, the numerical range
$W(\bm{A})$ is the convex hull of $\Gamma_{\bm{A}}(\mathbb{R})$. Finally, as
seen in (2.12), the real foci of $\Gamma_{\bm{A}}$ are the solutions of
$G_{\bm{A}}(1,i,z)=\det(\bm{A}-\left.z\bm{I}\right)=0,$
that is, the eigenvalues of $\bm{A}$.
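As a numerical sanity check of the last statement, one can verify directly that $G_{\bm{A}}(1,i,z)=\det(\bm{A}-z\bm{I})$ vanishes at the eigenvalues of $\bm{A}$; the matrix below is random (the seed is arbitrary, kept only for reproducibility).

```python
import numpy as np

rng = np.random.default_rng(7)           # arbitrary seed, for reproducibility
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
ReA = (A + A.conj().T) / 2               # Re A = (A + A*) / 2
ImA = (A - A.conj().T) / (2j)            # Im A = (A - A*) / (2i)

def G_A(u1, u2, u3):
    """The Kippenhahn polynomial (2.24) of A, evaluated at (u1, u2, u3)."""
    return np.linalg.det(u1 * ReA + u2 * ImA - u3 * np.eye(n))

eigs = np.linalg.eigvals(A)
```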
#### 2.3.3. Dilations.
We say that an $m\times m$ matrix $\bm{A}$ _dilates_ to the $n\times n$ matrix
$\bm{B}$ ($m<n$) if there is an isometry $\bm{V}$ from $\mathbb{C}^{m}$ to
$\mathbb{C}^{n}$ such that $\bm{A}=\bm{V}^{*}\bm{B}\bm{V}$. This is equivalent
to saying that $\bm{B}$ is unitarily similar to an $n\times n$ matrix of the
form
$\begin{pmatrix}\bm{A}&*\\\ *&*\end{pmatrix},$
in which $\bm{A}$ appears in the upper left corner. It is a well-known fact
that the numerical range is monotone by dilation: if $\bm{A}$ dilates to
$\bm{B}$ then $W(\bm{A})\subset W(\bm{B})$.
An important dilation, which is also closely related to the results we
discuss, is the _unitary dilation of completely non-unitary contractions_. The
notion of completely non-unitary contraction was already introduced in Section
2.3. A classical result of Halmos [34, Problem 222(a)] says that every
completely non-unitary contraction $\bm{A}$ has unitary dilations.
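As a concrete illustration of Halmos' result, the sketch below builds the classical one-step $2m\times 2m$ unitary dilation with defect operators $D_{\bm{A}}=(\bm{I}-\bm{A}^{*}\bm{A})^{1/2}$ and $D_{\bm{A}^{*}}=(\bm{I}-\bm{A}\bm{A}^{*})^{1/2}$, and verifies numerically that it is unitary and compresses back to $\bm{A}$. This one-step dilation is not the $1$-parametric OPUC family discussed below; it merely illustrates the dilation property.

```python
import numpy as np

def psd_sqrt(M):
    # square root of a positive semidefinite Hermitian matrix via eigh
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T

def halmos_dilation(A):
    """Classical 2m x 2m unitary dilation U = [[A, D_{A*}], [D_A, -A*]]
    of a contraction A (Halmos)."""
    m = A.shape[0]
    I = np.eye(m)
    DA = psd_sqrt(I - A.conj().T @ A)
    DAs = psd_sqrt(I - A @ A.conj().T)
    return np.block([[A, DAs], [DA, -A.conj().T]])

rng = np.random.default_rng(1)
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A = B / (1.1 * np.linalg.norm(B, 2))   # a strict contraction
U = halmos_dilation(A)

# U is unitary, and A sits in its upper-left corner
assert np.allclose(U.conj().T @ U, np.eye(6), atol=1e-8)
assert np.allclose(U[:3, :3], A)
```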
As we have seen in Section 2.3, OPUC provides not only an effective
construction of such a matrix $\bm{A}$ from any equivalence class of
$\mathcal{S}_{n-1}$, but also of its $1$-parametric family of unitary
dilations. Indeed, for
$\bm{A}=\bm{\mathcal{G}}^{(n-1)}(\alpha_{0},\dots,\alpha_{n-2})\in\mathcal{S}_{n-1}$,
its unitary dilations are given by
$\bm{\mathcal{G}}^{(n)}(\alpha_{0},\dots,\alpha_{n-2},\lambda)$, with
$\lambda\in\mathbb{T}$.
## 3\. Curves inscribed in polygons
### 3.1. Definitions
Poncelet’s closure theorem (see Theorem A in the Introduction) sets up a
leitmotiv of this paper: curves in $\mathbb{D}$ that are tangent to polygons
with vertices on $\mathbb{T}$. More precisely, we will be interested in real
algebraic curves that satisfy the Poncelet porism: if it is inscribed in a
certain polygon, it is so for the whole continuum of such polygons (in other
words, curves that are envelopes of such a family of polygons). We need to make
all these notions rigorous.
#### 3.1.1. Polygons
We denote by $[a,b]$ the straight segment joining points $a,b\in\mathbb{C}$.
For $n\geq 3$, a _polygon_ with $n$ vertices in $\mathbb{C}$ is a union of $n$
distinct segments
$\mathscr{P}=[z_{1},z_{2}]\cup[z_{2},z_{3}]\cup\cdots\cup[z_{n-1},z_{n}]\cup[z_{n},z_{1}],$
(3.1)
where $z_{1},\ldots,z_{n}\in\mathbb{C}$ are $n$ distinct points, such that any
two segments intersect in at most one point and any two segments sharing a
common endpoint are noncollinear. The $n$ segments are called the _sides_ or
the _edges_ of the polygon. If the sides of $\mathscr{P}$ only intersect when
they share a common endpoint, then $\mathscr{P}$ is called _simple_ , and if
all its vertices belong to $\mathbb{T}$ we say that $\mathscr{P}$ is inscribed
in $\mathbb{T}$. It is well-known that any simple polygon inscribed in
$\mathbb{T}$ is convex.
We will often view the segments in (3.1) as directed line segments and hence
$\mathscr{P}$ as an oriented piecewise linear closed curve; in consequence,
the notation $-\mathscr{P}$ stands for the polygon $\mathscr{P}$ traversed in
the opposite direction. Note that every simple polygon is a Jordan curve and
hence divides the complex plane $\mathbb{C}$ into an interior region and an
exterior region. As usual, we say that a simple polygon $\mathscr{P}$ is
positively oriented if the interior is to the left. Equivalently, a simple
polygon is positively oriented if its winding number with respect to any point
in its interior is equal to $1$. For a general (not necessarily simple)
oriented polygon $\mathscr{P}$, if we traverse $\mathscr{P}$ in the direction
of the orientation, then at each vertex we turn by a nonzero angle between
$-\pi$ and $\pi$. The sum of these turning angles divided by $2\pi$ is called
the _turning number_ of $\mathscr{P}$. If $\mathscr{P}$ is simple, then the
turning number is equal to the winding number with respect to a point in the
interior. We say that $\mathscr{P}$ is positively oriented if its turning
number is positive.
It will be convenient to also consider “degenerate” polygons with two
vertices. These would be chains of the form
$\mathscr{P}=[z_{1},z_{2}]\cup[z_{2},z_{1}]$, where we view $[z_{1},z_{2}]$
and $[z_{2},z_{1}]$ as distinct directed segments. By convention, the turning
number of such a polygon is equal to $\pm 1$.
#### 3.1.2. Poncelet curves and Poncelet correspondence
###### Definition 3.1.
By a closed curve we mean a subset $C\subset\mathbb{P}^{2}(\mathbb{R})$ such
that there exists a continuous map
$\gamma:\mathbb{T}\rightarrow\mathbb{P}^{2}(\mathbb{R})$ with
$C=\gamma(\mathbb{T})$ and a real algebraic curve $\Gamma$ such that
$C\subset\Gamma(\mathbb{R})$ with the additional condition that either all
points of $C$ are nonsingular or the parametrization $\gamma$ can be chosen
such that if $\gamma(t_{0})$ is a singular point of $\Gamma$, then for some
sufficiently small interval $I\subset\mathbb{T}$ containing $t_{0}$, the curve
$\gamma(I)$ lies in a single local branch of $\Gamma$. If $C$ is nonsingular,
then $C$ is a simple closed curve in $\mathbb{C}$. We also allow for the
degenerate case when $C$ consists of just a single point.
Notice that algebraicity is built into our definition of a closed curve; in
the following, all closed curves that we consider will be duals of smooth
closed curves in $\mathbb{P}^{2}(\mathbb{R})$.
###### Definition 3.2.
For $n\geq 2$, we say that a set of (not necessarily simple) $n$-sided
polygons $\\{\mathscr{P}(z):\,z\in\mathbb{T}\\}$ inscribed in $\mathbb{T}$ is
a _family of $n$-Poncelet polygons_ if for each $z\in\mathbb{T}$, $z$ is one
of the vertices of $\mathscr{P}(z)$, and the following condition holds:
$\text{ $w\in\mathbb{T}$ is a vertex of
$\mathscr{P}(z)$}\quad\Rightarrow\quad\mathscr{P}(z)=\mathscr{P}(w).$
A closed curve $C\subset\mathbb{D}$ is a _Poncelet curve of rank $n$_ or an
_$n$ -Poncelet curve_ with respect to $\mathbb{T}$ if there is a family of
$n$-Poncelet polygons $\\{\mathscr{P}(z):\,z\in\mathbb{T}\\}$ such that
1. i)
for every $z\in\mathbb{T}$, each of the $n$ sides of $\mathscr{P}(z)$ is
tangent to $C$, and each tangent line of $C$ passing through $z$ contains a
side of $\mathscr{P}(z)$;
2. ii)
for every $z\in\mathbb{T}$, the two sides of $\mathscr{P}(z)$ with the common
vertex at $z$ are the only tangents to $C$ emanating from $z$;
3. iii)
for every $\zeta\in C$ there exists $z\in\mathbb{T}$ such that one of the
sides of $\mathscr{P}(z)$ is tangent to $C$ at the point $\zeta$.
Notice that an $n$-Poncelet curve $C$ determines the family
$\\{\mathscr{P}(z):\,z\in\mathbb{T}\\}$ uniquely. Observe also that we set a
convention that $C\subset\mathbb{D}$ is a necessary condition for $C$ being
called a Poncelet curve.
###### Remark 3.3.
1. i)
If $\mathcal{Z}^{\lambda}=\\{z_{1}^{\lambda},\dots,z_{n}^{\lambda}\\}$ is a
one-parametric family of pair-wise distinct points on $\mathbb{T}$ such that
for every $z\in\mathbb{T}$ there exists a unique value of the parameter
$\lambda$ for which $z\in\mathcal{Z}^{\lambda}$, then convex hulls of
$\mathcal{Z}^{\lambda}$ constitute a family of convex $n$-Poncelet polygons.
An example of such a construction is given by the sets of points identified by
a Blaschke product, see Definition 2.3.
2. ii)
Clearly, convexity of a Poncelet curve does not imply convexity of its
Poncelet polygons, see for example curve $C_{2}$ in Figure 5, left.
Surprisingly, the converse does not hold either (despite an assertion in [47,
Remark 1]): there are non-convex Poncelet curves with the corresponding family
of convex Poncelet polygons; see Figure 5, right, for a non-convex
$3$-Poncelet curve. Its construction is explained in Example 3.15 below.
Figure 5. Left: two convex $5$-Poncelet curves: for $C_{1}$ all its Poncelet
polygons $\mathscr{P}(z)$ are convex, while for $C_{2}$ they are not. Right: a
non-convex $3$-Poncelet curve.
Given a family of positively oriented $n$-Poncelet polygons
$\\{\mathscr{P}(z):\,z\in\mathbb{T}\\}$ in the sense of Definition 3.2, we
define the map $\tau:\mathbb{T}\rightarrow\mathbb{T}$ as follows: if $[z,w]$
is the edge of the polygon $\mathscr{P}(z)$ emanating from $z\in\mathbb{T}$
when traversed in the positive direction, then $\tau(z)=w\in\mathbb{T}$. The
sequence $\\{\tau^{j}(z)\\}_{j=1}^{\infty}$ is periodic with period $n$ (that
is, $n$ is the smallest positive integer such that $\tau^{n}=\text{Id}$, where
Id is the identity operator), and the positively oriented $n$-Poncelet
polygons $\\{\mathscr{P}(z):\,z\in\mathbb{T}\\}$ can be written as
$\mathscr{P}(z)=[z,\tau(z)]\cup[\tau(z),\tau^{2}(z)]\cup\cdots\cup[\tau^{n-1}(z),z].$
(3.2)
The map $\tau:\mathbb{T}\rightarrow\mathbb{T}$, also known as the Poncelet
correspondence, see [18] (and in some particular cases is related to the John
mapping, see [5]), provides a connection between Poncelet curves and discrete
dynamical systems and ellipsoidal billiards, see e.g. the work of Dragović and
collaborators [17, 1] and Schwarz [52, 57]. Notice that in the construction of
$\tau$ we could start alternatively from an $n$-Poncelet curve $C$ and use its
family of $n$-Poncelet polygons $\\{\mathscr{P}(z):\,z\in\mathbb{T}\\}$ to
define $\tau$. We develop this idea further in the next section, with the
notion of associated Poncelet curves.
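Before formalizing the associated curves, a concrete special case may help. For two concentric circles the Poncelet correspondence $\tau$ is a rigid rotation: chords of $\mathbb{T}$ tangent to $|w|=r$ advance the argument by $2\arccos r$, and the closure condition for $n$-gons is $r=\cos(\pi/n)$. The sketch below checks this numerically (the concentric model is an illustrative assumption; a general Poncelet correspondence is not a rotation).

```python
import numpy as np

# Concentric-circle model: chords of T tangent to |w| = r are obtained by
# rotating the argument by 2*arccos(r), so tau(e^{i t}) = e^{i(t + 2 arccos r)}.
n = 7
r = np.cos(np.pi / n)          # closure condition for n-gons
step = 2 * np.arccos(r)        # equals 2*pi/n

def tau(z):
    return z * np.exp(1j * step)

z0 = np.exp(0.3j)              # arbitrary starting vertex on T
orbit = [z0]
for _ in range(n):
    orbit.append(tau(orbit[-1]))

# the polygon closes after n steps: tau^n = Id (Poncelet porism, this family)
assert abs(orbit[n] - orbit[0]) < 1e-12

# each chord [z, tau(z)] is tangent to |w| = r: its midpoint (the foot of the
# perpendicular from the origin) has modulus exactly r
for a, b in zip(orbit, orbit[1:]):
    assert abs(abs((a + b) / 2) - r) < 1e-12
```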
#### 3.1.3. Associated Poncelet curves
Let $\\{\mathscr{P}(z):\,z\in\mathbb{T}\\}$ be a family of $n$-Poncelet
polygons and $\tau:\mathbb{T}\rightarrow\mathbb{T}$ the Poncelet
correspondence defined above. We will also assume that $\tau$ is smooth and
$\frac{d}{d\theta}\operatorname{arg}\tau(e^{i\theta})>0\ \mbox{for all
$\theta\in\mathbb{R}$},$ (3.3)
where the argument of $\tau(e^{i\theta})$ is viewed as a smooth
$\mathbb{R}$-valued function of $\theta\in\mathbb{R}$. For instance, the
condition (3.3) automatically holds if the family
$\\{\mathscr{P}(z):\,z\in\mathbb{T}\\}$ corresponds to a convex and
nonsingular $n$-Poncelet curve $C\subset\mathbb{D}$.
For $k\in\mathbb{N}$, we define a curve $C_{k}\subset\mathbb{D}$ as the
envelope of all chords $[z,\tau^{k}(z)]$, where $z\in\mathbb{T}$. A priori, it
is not evident that this definition makes sense. For example, we really should
consider the envelope of all lines determined by the chords $[z,\tau^{k}(z)]$
since it is not clear that the points of tangency must lie on these chords.
This is rather subtle and we will see later (see Theorem 3.6) that it is
precisely the monotonicity condition (3.3) which implies this.
We can make this more precise (and constructive) as follows. Define a map
$\zeta_{k}:\mathbb{T}\rightarrow\mathbb{P}^{2}(\mathbb{R})$ by
$\zeta_{k}(z):=\frac{2z\,\tau^{k}(z)}{z+\tau^{k}(z)}=\mbox{pole of the line
containing $[z,\tau^{k}(z)]$,}$ (3.4)
see (2.7). By construction, if $z\neq\tau^{k}(z)$ then $\zeta_{k}(z)$ lies
outside of the unit disk. Since $\tau^{n}=\text{Id}$, we only need to consider
$1\leq k\leq n-1$. Notice that $\zeta_{2}$ is related to the pentagram map,
see e.g. [56, 57].
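Formula (3.4) can be verified directly: the polar line of $p=2zw/(z+w)$ with respect to $\mathbb{T}$ is $\{x:\mathop{\rm Re}(x\bar{p})=1\}$, and one checks that it passes through both $z$ and $w$, while $|p|=2/|z+w|\geq 1$. A quick numerical check over random chords (Python/NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
for _ in range(100):
    t1, t2 = rng.uniform(0, 2 * np.pi, 2)
    z, w = np.exp(1j * t1), np.exp(1j * t2)
    if abs(z + w) < 1e-6:          # skip (nearly) antipodal pairs: pole at infinity
        continue
    p = 2 * z * w / (z + w)        # formula (3.4): pole of the chord [z, w]
    # the polar line of p w.r.t. T is {x : Re(x * conj(p)) = 1};
    # it must contain both endpoints of the chord
    assert abs((z * p.conjugate()).real - 1) < 1e-9
    assert abs((w * p.conjugate()).real - 1) < 1e-9
    # the pole lies outside the open unit disk
    assert abs(p) >= 1 - 1e-12
```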
###### Lemma 3.4.
For each $1\leq k\leq n-1$, the map
$\zeta_{k}:\mathbb{T}\rightarrow\mathbb{P}^{2}(\mathbb{R})$ is a smooth
immersion, and the image $\zeta_{k}(\mathbb{T})$ is nonsingular.
###### Proof.
By our assumptions, the map $\tau:\mathbb{T}\rightarrow\mathbb{T}$ is smooth
so that each of the maps
$\zeta_{k}:\mathbb{T}\rightarrow\mathbb{P}^{2}(\mathbb{R})$ is also smooth,
and the argument of $\tau(e^{i\theta})$ is a strictly increasing smooth
function when viewed as a continuous $\mathbb{R}$-valued function of
$\theta\in\mathbb{R}$. It then also follows that the argument of
$\tau^{k}(e^{i\theta})$ is a strictly increasing function of
$\theta\in\mathbb{R}$. As a consequence, the differential of the map
$\zeta_{k}:\mathbb{T}\rightarrow\mathbb{P}^{2}(\mathbb{R})$ is nonzero
everywhere and hence $\zeta_{k}$ is a smooth immersion. Since an immersion is
a local embedding, it follows that $\zeta_{k}(\mathbb{T})$ is nonsingular. ∎
###### Definition 3.5.
For each $1\leq k\leq n-1$, the curve $C_{k}$ is the dual of the nonsingular
curve $\zeta_{k}(\mathbb{T})$.
Notice that $C_{k}$ may have singularities. As usual, cusps of $C_{k}$
correspond to tangent lines at inflection points of $\zeta_{k}(\mathbb{T})$
and double points of $C_{k}$ correspond to lines that are tangent at two
distinct points of $\zeta_{k}(\mathbb{T})$, see e.g. Figure 7.
###### Theorem 3.6.
For each $1\leq k\leq n-1$, assumption (3.3) implies that
$C_{k}\subset\mathbb{D}$. If
$\frac{d}{d\theta}\operatorname{arg}\tau(e^{i\theta})\geq 0\ \mbox{for all
$\theta\in\mathbb{R}$}$
then $C_{k}\subset\overline{\mathbb{D}}$ and can have points on $\mathbb{T}$
(see Figure 6).
Finally, if at some $\theta\in\mathbb{R}$,
$\frac{d}{d\theta}\operatorname{arg}\tau(e^{i\theta})<0$
then $C_{k}$ has points outside $\overline{\mathbb{D}}$.
###### Proof.
Obviously, it is sufficient to prove the statement for $k=1$.
The curve $C_{1}$ is obtained from the tangent lines to curve
$\zeta_{1}(\mathbb{T})$ via reciprocation in $\mathbb{T}$. In particular,
$C_{1}$ contains a point outside of $\overline{\mathbb{D}}$ if and only if
there is a tangent line to $\zeta_{1}(\mathbb{T})$ that intersects
$\mathbb{T}$ in two distinct points (or alternatively, whose distance to the
origin is $<1$).
In order to simplify notation, denote
$w=\tau(z),\quad
w^{\prime}=\frac{d}{dz}\tau(z),\quad\dot{w}=\frac{d}{d\theta}\operatorname{arg}\tau(e^{i\theta}).$
Since $w\in\mathbb{T}$, we have $\log w=i\arg w$, and differentiation yields
$\dot{w}=\frac{zw^{\prime}}{w}.$
Thus, differentiating $\zeta_{1}(z)=2zw/(z+w)$ we get that
$\frac{d}{d\theta}\zeta_{1}=iz\,\zeta_{1}^{\prime}(z)=i\zeta_{1}\,\frac{w+\dot{w}z}{w+z},\quad
z=e^{i\theta},\quad\zeta_{1}=\zeta_{1}\left(z\right).$
Hence, the parametric equation of the straight line tangent to
$\zeta_{1}(\mathbb{T})$ at the point $\zeta_{1}(e^{i\theta})$ is
$L(t)=\zeta_{1}+it\,\zeta_{1}\,\frac{w+\dot{w}z}{w+z}=\frac{2zw}{z+w}\left(1+it\,\frac{w+\dot{w}z}{w+z}\right),\quad
t\in\mathbb{R}.$
Notice that $\dot{w}\in\mathbb{R}$, $z,w\in\mathbb{T}$, so that
$\overline{L(t)}=\frac{2}{z+w}\left(1-it\,\frac{z+\dot{w}w}{w+z}\right),$
and hence,
$\displaystyle|L(t)|^{2}$
$\displaystyle=\frac{4zw}{(z+w)^{2}}\left(1+it\,\frac{w+\dot{w}z}{w+z}\right)\left(1-it\,\frac{z+\dot{w}w}{w+z}\right)=\frac{4wz}{(w+z)^{4}}\,(\alpha
t^{2}+\beta t+\gamma),$
with
$\alpha=(w+\dot{w}z)(z+\dot{w}w),\quad\beta=i(w^{2}-z^{2})(1-\dot{w}),\quad\gamma=(z+w)^{2}.$
This is a quadratic function in $t$ whose minimum is attained at
$t=-\beta/(2\alpha)$. Replacing it in the expression for $|L(t)|^{2}$ we get
that the square of the distance of the tangent line to the origin is
$\frac{zw(1+\dot{w})^{2}}{(w+\dot{w}z)(z+\dot{w}w)}=\frac{(1+\dot{w})^{2}}{|z+\dot{w}w|^{2}}.$
If $\dot{w}\geq 0$ then by the triangle inequality,
$|z+\dot{w}w|\leq|z|+|\dot{w}w|=1+\dot{w},$
which shows that the distance is $\geq 1$. Moreover, equality holds only when
either $z=w$ or when $\dot{w}=0$. In the same vein, the reversed triangle
inequality and the assumption $\dot{w}<0$ yields that the distance is $<1$, so
that the tangent line intersects the unit circle in two points. ∎
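The closed-form distance obtained in the proof can be checked against a direct minimization of $|L(t)|^{2}$. Writing $L(t)=c(1+it\mu)$ with $c=\zeta_{1}$ and $\mu=(w+\dot{w}z)/(w+z)$, the quadratic $|L(t)|^{2}=|c|^{2}(1-2t\mathop{\rm Im}\mu+t^{2}|\mu|^{2})$ has minimum $|c|^{2}(1-(\mathop{\rm Im}\mu)^{2}/|\mu|^{2})$. In the sketch below $z$, $w$, and $\dot{w}$ are sampled independently (the triple need not come from an actual Poncelet correspondence: the identity is purely algebraic):

```python
import numpy as np

rng = np.random.default_rng(3)
for _ in range(200):
    t1, t2 = rng.uniform(0, 2 * np.pi, 2)
    z, w = np.exp(1j * t1), np.exp(1j * t2)
    wdot = rng.uniform(0, 3)                    # nonnegative speed d(arg tau)/d theta
    if abs(z + w) < 1e-3 or abs(w + wdot * z) < 1e-3:
        continue                                # avoid degenerate configurations
    c = 2 * z * w / (z + w)                     # the point zeta_1(z)
    mu = (w + wdot * z) / (w + z)
    # minimum of the real quadratic |L(t)|^2 = |c|^2 |1 + i t mu|^2 over t
    d2 = abs(c) ** 2 * (1 - mu.imag ** 2 / abs(mu) ** 2)
    # closed form from the proof: (1 + wdot)^2 / |z + wdot * w|^2
    d2_formula = (1 + wdot) ** 2 / abs(z + wdot * w) ** 2
    assert abs(d2 - d2_formula) <= 1e-8 * max(1.0, d2_formula)
    assert d2_formula >= 1 - 1e-12              # distance >= 1 whenever wdot >= 0
```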
Figure 6. A family of $3$-Poncelet polygons such that the curve $C_{1}$ has a
cusp on $\mathbb{T}$. The highlighted point on $\mathbb{T}$ corresponds to
$e^{i\theta}$ for the values $\theta=-\pi/25$, $0$, and $\pi/25$,
respectively. The function $\operatorname{arg}\tau(e^{i\theta})$ is strictly
increasing but has a stationary point at $\theta=0$; $C_{1}$ intersects
$\mathbb{T}$.
###### Lemma 3.7.
For each $1\leq k\leq n-1$, the map
$\zeta_{k}:\mathbb{T}\rightarrow\mathbb{P}^{2}(\mathbb{R})$ is one-to-one,
unless $n=2k$, in which case $\zeta_{k}$ is two-to-one. Additionally,
$\zeta_{k}(\mathbb{T})=\zeta_{n-k}(\mathbb{T})$.
###### Proof.
The case $n=2$ is trivial, so assume $n\geq 3$. Let $z,w\in\mathbb{T}$ such
that $\zeta_{k}(z)=\zeta_{k}(w)$. Then $[z,\tau^{k}(z)]=[w,\tau^{k}(w)]$ and
hence either $z=w$ or $\tau^{k}(z)=w$ and $z=\tau^{k}(w)$. The second
condition is equivalent to $\tau^{2k}(z)=z$ which is only possible if $n=2k$.
It follows that the map $\zeta_{k}$ is one-to-one if $n\not=2k$ and two-to-one
if $n=2k$.
The last assertion follows from the fact that if $z\in\mathbb{T}$ and
$w=\tau^{k}(z)$, then by definition $\zeta_{k}(z)$ is the pole of the line
containing $[z,w]$. At the same time, $\tau^{n-k}(w)=\tau^{n}(z)=z$, so that
$\zeta_{n-k}(w)$ is the pole of the line containing $[w,z]$, which shows that
$\zeta_{n-k}(w)=\zeta_{k}(z)$, with $w=\tau^{k}(z)$. ∎
Figure 7. The curves $C_{1}^{*}$ and $C_{2}^{*}$ associated to a convex
Poncelet curve of rank $5$.
This lemma shows that $C_{n-k}=-C_{k}$, and we only need to consider curves
$C_{k}$ for $1\leq k\leq[n/2]$. It turns out that each such component
$C_{k}$ exhibits the Poncelet property:
###### Theorem 3.8.
Assume that $n\geq 3$ and all $C_{k}$, $1\leq k\leq[n/2]$, are closed curves
in the sense of Definition 3.1. Then, for each $1\leq k\leq[n/2]$, the curve
$C_{k}\subset\mathbb{D}$ is a Poncelet curve of rank $n/\gcd(k,n)$. Moreover,
if all Poncelet polygons $\mathscr{P}(z)$ are convex then the positively
oriented Poncelet polygons for $C_{k}$ have turning number $k/\gcd(k,n)$.
Furthermore, if $d$ is a divisor of $n$ and $d\geq 3$, then the number of
curves $C_{k}$, $1\leq k\leq[n/2]$, that have the $d$-Poncelet property is
$\phi(d)/2$, where $\phi$ denotes Euler’s totient function (i.e., $\phi(d)$
counts the positive integers up to $d$ that are relatively prime to $d$).
Recall that here we assume that condition (3.3) holds.
###### Proof.
For the following, set $\mathbb{Z}_{n}:=\\{0,1,2,\ldots,n-1\\}$ and view
$\mathbb{Z}_{n}$ as an additive group, where addition is the usual addition of
integers modulo $n$. It is an elementary result in basic group theory that the
order of $k\in\mathbb{Z}_{n}$ is equal to $n_{k}:=n/\gcd(k,n)$. Here the order
of $k\in\mathbb{Z}_{n}$ is defined as the smallest positive integer $m$ such
that
$mk:=\underbrace{k+\cdots+k}_{m}=0\ \mbox{in $\mathbb{Z}_{n}$}.$
For $k\in\mathbb{N}$ and $z\in\mathbb{T}$, define a polygon with $n_{k}$ sides
by
$\mathscr{P}_{k}(z):=[z,\tau^{k}(z)]\cup[\tau^{k}(z),\tau^{2k}(z)]\cup\cdots\cup[\tau^{(n_{k}-1)k}(z),z].$
(3.5)
Since $\tau^{n}(z)=z$ we can view the exponents in the sequence $z$,
$\tau^{k}(z)$, $\tau^{2k}(z)$, etc., as elements of $\mathbb{Z}$ or as
elements of $\mathbb{Z}_{n}$.
Note that each one of the segments is of the form $[w,\tau^{k}(w)]$ for some
$w\in\mathbb{T}$. In fact, for any $0\leq m\leq n_{k}-1$, we have
$[\tau^{mk}(z),\tau^{(m+1)k}(z)]=[w,\tau^{k}(w)],\ \text{where
$w=\tau^{mk}(z)$}.$
Thus, $C_{k}$ is an $n_{k}$-Poncelet curve with Poncelet polygons
$\mathscr{P}_{k}(z)$.
Moreover, if all Poncelet polygons $\mathscr{P}(z)$ are convex, then for each
$z\in\mathbb{T}$, the vertices $z,\tau(z),\ldots,\tau^{n-1}(z)$ of
$\mathscr{P}(z)$ are points on $\mathbb{T}$ in counterclockwise order. Since
$n_{k}=n/\gcd(k,n)=\min\\{m\in\mathbb{N}:\mbox{$mk$ is a positive multiple of
$n$}\\}$ and
$\frac{n}{\gcd(k,n)}\,k=\frac{k}{\gcd(k,n)}\,n,$
it follows that the turning number of $\mathscr{P}_{k}(z)$ is $k/\gcd(k,n)$.
A consequence of the just established fact is that a polygon
$\mathscr{P}_{k}(z)$ in (3.5) has exactly $n$ segments if and only if
$\gcd(n,k)=1$. Thus, the total number of such polygons is precisely the number
of integers $1\leq k\leq[n/2]$ with $\gcd(n,k)=1$, which coincides with
$\phi(n)/2$. More generally, suppose $d$ is a divisor of $n$, $d\geq 2$. A
well-known fact is that the number of integers $1\leq k\leq[n/2]$ with
$\gcd(n,k)=n/d$ is equal to $\phi(d)/2$, which proves the theorem. ∎
#### 3.1.4. Complete Poncelet curves
Let $\\{\mathscr{P}(z):\,z\in\mathbb{T}\\}$ be a family of convex $n$-Poncelet
polygons such that the Poncelet correspondence
$\tau:\mathbb{T}\rightarrow\mathbb{T}$ is smooth and condition (3.3) holds.
Consistent with the hypotheses of Theorem 3.8, we assume that each of the
associated Poncelet curves $C_{k}$, $1\leq k\leq[n/2]$, as constructed in
Section 3.1.3, is closed (and thus, algebraic) in the sense of Definition 3.1.
Moreover, by Theorem 3.6, all $C_{k}\subset\mathbb{D}$.
###### Definition 3.9.
Under the assumptions above, the union
$\mathcal{K}_{n}:=\bigcup_{k=1}^{[n/2]}C_{k}\subset\mathbb{D}$ (3.6)
is called a package of Poncelet curves generated by the family
$\\{\mathscr{P}(z):\,z\in\mathbb{T}\\}$. If
$\\{\mathscr{P}(z):\,z\in\mathbb{T}\\}$ are the convex Poncelet polygons for a
closed curve $C\subset\mathbb{D}$, we alternatively say that the package of
Poncelet curves is generated by $C$; in this case, $C_{1}=C$.
This terminology was apparently introduced by Mirman, see [47].
Recall that by assumption, every package of Poncelet curves is algebraic, so
that there exists a real algebraic curve $\Gamma$ such that
$\mathcal{K}_{n}\subset\Gamma(\mathbb{R})$.
###### Lemma 3.10.
Let $\Gamma$ be any real algebraic curve such that
$\mathcal{K}_{n}=\bigcup_{k=1}^{[n/2]}C_{k}\subseteq\Gamma(\mathbb{R}).$ (3.7)
Then the class of $\Gamma$ is at least $n-1$.
If the class of $\Gamma$ is exactly $n-1$, then (3.7) becomes
$\mathcal{K}_{n}=\bigcup_{k=1}^{[n/2]}C_{k}=\Gamma(\mathbb{R}).$ (3.8)
###### Proof.
First note that every tangent line of one of the curves $C_{k}$ is also a
tangent line of $\Gamma(\mathbb{R})$. Since $C_{k}=-C_{n-k}$ it then follows
that $C_{k}^{*}\subset\Gamma^{*}(\mathbb{R})$ for all $1\leq k\leq n-1$.
For every $z\in\mathbb{T}$, the line in $\mathbb{P}^{2}(\mathbb{R})$ that is
tangent to $\mathbb{T}$ at $z$ intersects
$\mathcal{K}_{n}^{*}:=\bigcup_{k=1}^{[n/2]}C_{k}^{*}$
in exactly $n-1$ distinct points, namely the polars of the lines containing
the diagonals $[z,\tau^{k}(z)]$, $k=1,\ldots,n-1$. Thus, since
$\mathcal{K}_{n}^{*}\subseteq\Gamma^{*}(\mathbb{R})$, it follows by Bézout’s
theorem that the degree of $\Gamma^{*}$, that is, the class of $\Gamma$, must
be at least $n-1$.
Now suppose that the class of $\Gamma$ is exactly $n-1$. By the argument
above, if $l$ is any line in $\mathbb{P}^{2}(\mathbb{R})$ that is tangent to
$\mathbb{T}$, then $\mathcal{K}_{n}^{*}\cap l=\Gamma^{*}(\mathbb{R})\cap l$.
Suppose that $\mathcal{K}_{n}\not=\Gamma(\mathbb{R})$ or, equivalently,
$\mathcal{K}_{n}^{*}\not=\Gamma^{*}(\mathbb{R})$. Let
$p\in\Gamma^{*}(\mathbb{R})\setminus\mathcal{K}_{n}^{*}$ and let $l$ be a line
containing $p$ that is tangent to $\mathbb{T}$. Then for this line $l$ we
would have $\mathcal{K}_{n}^{*}\cap l\not=\Gamma^{*}(\mathbb{R})\cap l$ which
is a contradiction. Thus, if the class of $\Gamma$ is exactly $n-1$, then
$\mathcal{K}_{n}=\Gamma(\mathbb{R})$. ∎
If (3.8) holds, then $\Gamma(\mathbb{R})$ is a complete Poncelet curve in the
sense of the following definition.
###### Definition 3.11.
If there exists a real algebraic curve $\Gamma$ such that (3.8) holds then the
set of real points $\mathcal{K}_{n}=\Gamma(\mathbb{R})\subset\mathbb{D}$ of a
plane algebraic curve $\Gamma$ is called a _complete Poncelet curve_ (also, a
complete Poncelet–Darboux curve) _of rank $n$_ or a _complete $n$-Poncelet
curve_ with respect to $\mathbb{T}$, generated by the family
$\\{\mathscr{P}(z):\,z\in\mathbb{T}\\}$.
The dual $\Gamma^{*}$ is called the _Darboux curve_ for $\Gamma$.
If $\\{\mathscr{P}(z):\,z\in\mathbb{T}\\}$ are the convex Poncelet polygons
for a closed curve $C\subset\mathbb{D}$, we alternatively say that the
complete $n$-Poncelet curve is generated by $C$; in this case, $C_{1}=C$.
Notice that if $\Gamma(\mathbb{R})$ is a complete Poncelet curve of rank $n$
then for every $z\in\mathbb{T}$ there exists an $n$-sided polygon
$\mathscr{P}(z)$ inscribed in $\mathbb{T}$ such that $z$ is one of its
vertices, each of the $n(n-1)/2$ lines connecting the $n$ vertices of
$\mathscr{P}(z)$ is tangent to $\Gamma(\mathbb{R})$, and each tangent line of
$\Gamma(\mathbb{R})$ containing one of the vertices of the polygon must
contain a side of $\mathscr{P}(z)$. Since in this construction we can always
replace $\mathscr{P}(z)$ by the boundary of its convex hull, the convexity
assumption on the family $\\{\mathscr{P}(z):\,z\in\mathbb{T}\\}$ of
$n$-Poncelet polygons is actually not a restriction and is made for
convenience. Any other choice of polygons would simply imply a different
enumeration of the components $C_{k}$.
###### Example 3.12.
For $n=24$, if $\Gamma(\mathbb{R})$ is a complete Poncelet curve, then
$\Gamma(\mathbb{R})=\bigcup_{k=1}^{12}C_{k},$
where $C_{1}$, $C_{5}$, $C_{7}$, and $C_{11}$ are $24$-Poncelet curves,
$C_{2}$ and $C_{10}$ are $12$-Poncelet curves, $C_{3}$ and $C_{9}$ are
$8$-Poncelet curves, $C_{4}$ is a $6$-Poncelet curve, $C_{6}$ is a
$4$-Poncelet curve, $C_{8}$ is a $3$-Poncelet curve, and $C_{12}$ is a
$2$-Poncelet curve. Note that $C_{12}$ may consist of a single point, namely
in the case when connecting opposite vertices of the inscribed dodecagons all
meet in a single point.
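The bookkeeping in Example 3.12 is a direct computation with $n_{k}=n/\gcd(k,n)$ and Euler's totient function; a short script confirming the counts of Theorem 3.8 for $n=24$:

```python
from math import gcd

def phi(d):
    """Euler's totient function (naive count, fine for small d)."""
    return sum(1 for j in range(1, d + 1) if gcd(j, d) == 1)

n = 24
# rank of each associated curve C_k is n / gcd(k, n)
ranks = {k: n // gcd(k, n) for k in range(1, n // 2 + 1)}
print(ranks)
# e.g. C_1, C_5, C_7, C_11 have rank 24; C_8 has rank 3; C_12 has rank 2

# for every divisor d >= 3 of n, exactly phi(d)/2 of the curves C_k,
# 1 <= k <= n/2, have the d-Poncelet property (Theorem 3.8)
for d in (3, 4, 6, 8, 12, 24):
    count = sum(1 for r in ranks.values() if r == d)
    assert 2 * count == phi(d)
```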
### 3.2. Mirman’s parametrization of a package of Poncelet curves
#### 3.2.1. From tangent coordinates to the Bezoutian form
Given a package of Poncelet curves $\mathcal{K}_{n}$ generated by a family of
Poncelet polygons $\\{\mathscr{P}(z):\,z\in\mathbb{T}\\}$ (or by a closed
curve $C$), we can use Mirman’s parametrization explained in Section 2.1.6 to
derive an equation for the algebraic curve $\Gamma$ in the right hand side of
(3.7)–(3.8). Recall that if $z=z_{1}\in\mathbb{T}$, $w=z_{2}\in\mathbb{T}$ are
endpoints of a line tangent to $\Gamma(\mathbb{R})$, then (2.10) yields an
equation of the form
$P(z,w)=0,$ (3.9)
where $P$ is a polynomial, symmetric in $z$ and $w$.
A crucial consequence of Definition 3.2 is that for every $w_{0}\in\mathbb{T}$
there exist exactly $n-1$ solutions $w_{1},\dots,w_{n-1}$ of the equation
$P(w_{0},w)=0,$ (3.10)
namely the other vertices of the Poncelet polygon $\mathscr{P}(w_{0})$, all of
them on $\mathbb{T}$, and that they satisfy that
$P(w_{i},w_{j})=0,\quad 0\leq i<j\leq n-1.$ (3.11)
This shows that $P(z,w)$ is of degree at least $n-1$ in $w$, $z$ (recall that
its degree matches the class of $\Gamma$, which is consistent with the
statement of Lemma 3.10). However, $P(w_{0},w)=0$ could have other solutions
$w\in\mathbb{C}\setminus\mathbb{T}$, and in consequence, the actual degree of
$P$ is $N-1$, with $N\geq n$.
Denote by $f_{1},\dots,f_{N-1}$ the solutions (with account of multiplicity)
of
$P(0,w)=0.$ (3.12)
###### Proposition 3.13.
The points $f_{1},\dots,f_{N-1}$, which are solutions (with account of
multiplicity) of (3.12), are the real foci of the curve $\Gamma$.
###### Proof.
As it was pointed out, the equation
$g(u,v)=0$
of the dual curve (in the affine coordinates) can be obtained from (3.9) using
the substitution (2.9), with
$u=\frac{\zeta+\overline{\zeta}}{2},\quad
v=\frac{\zeta-\overline{\zeta}}{2i},$
according to which $z=z(\zeta,\overline{\zeta})$,
$w=w(\zeta,\overline{\zeta})$. The resulting equation should be homogenized to
a curve in $\mathbb{P}^{2}(\mathbb{C})$ with its equation obtained by taking
$\zeta\to\zeta/t$, $t\in\mathbb{C}$. As it follows from (2.12), foci of
$\Gamma$ are solutions of the equation
$g\left(\frac{1}{t},\frac{i}{t}\right)=0.$
We see that
$u=\frac{\zeta+\overline{\zeta}}{2}=\frac{1}{t},\quad
v=\frac{\zeta-\overline{\zeta}}{2i}=\frac{i}{t}$
implies that $\zeta=0$ and $\overline{\zeta}=2/t$. (Notice that here $u$ and
$v$ are _complex_ numbers and hence $\overline{\zeta}=u-iv$ need not be the
complex conjugate of $\zeta=u+iv$.) If we fix the branch of the square root in
(2.9) by $\sqrt{-1}=i$, then in this case $z=0$, $w=t$ so that the foci of
$\Gamma$ are precisely the solutions of the equation
$P(0,t)=0.$
∎
An important consequence of the discussion in [50, Section 5] (see in
particular Lemma 5.1.2) is the following result:
###### Theorem 3.14.
Let $n\in\mathbb{N}$, $n\geq 3$. With the notation above (see also
(2.13)–(2.14)) and up to a multiplicative constant,
$P(z,w)=\frac{w\,\Phi_{N-1}(w)\Phi_{N-1}^{*}(z)-z\,\Phi_{N-1}(z)\Phi_{N-1}^{*}(w)}{w-z},$
(3.13)
where
$\Phi_{N-1}(z)=\Phi_{N-1}(z;f_{1},\dots,f_{N-1}).$ (3.14)
Thus, the real foci $f_{j}$ of $\Gamma$ are precisely the zeros of
$\Phi_{N-1}(z)$.
Furthermore, we have the relation
$N=n+2m+d,$ (3.15)
where $m$ is the number of real foci $f_{j}$ (accounted with multiplicity)
that lie in the exterior of $\mathbb{T}$ and $d$ is the number of real foci
$f_{j}$ (accounted with multiplicity) that lie on $\mathbb{T}$.
In particular, if all real foci $f_{j}$ are in $\mathbb{D}$, then $N=n$.
Since the right hand side in (3.13) is a Bezoutian of the polynomials
$\Phi_{N-1}$ and $\Phi_{N-1}^{*}$, we say that $P(z,w)$ is written in a
Bezoutian form.
###### Proof.
Formula (3.13) has been proved in [50], see Theorem 1.1 and equation (20)
therein. Relation (3.15) for $d=0$ also follows from [50, Lemma 5.1.2]. Thus,
we only need to consider $d>0$.
Suppose $f_{j}$ is a real focus such that $|f_{j}|=1$. Then
$\overline{f_{j}}=1/f_{j}$ and we easily find that
$P(z,w)=\operatorname{const}\,(z-f_{j})(w-\overline{f_{j}})\widetilde{P}(z,w),$
where $\widetilde{P}(z,w)$ is the polynomial as given by (3.13) with
$\Phi_{N-1}(z)$ replaced by
$\widetilde{\Phi}_{N-2}(z):=\Phi_{N-2}(z;f_{1},\ldots,f_{j-1},f_{j+1},\ldots,f_{N-1})$.
Repeating this for all $d$ foci on the unit circle leaves a polynomial
$\widetilde{P}(z,w)$ of degree $N-1-d$, and applying [50, Lemma 5.1.2] (now
with no foci on $\mathbb{T}$) to the number of solutions of the equation
$\widetilde{P}(w_{0},w)=0$ on the unit circle yields $N-d=n+2m$. Thus, (3.15)
holds. ∎
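The Bezoutian form (3.13) and Proposition 3.13 are easy to verify symbolically for sample data. In the SymPy sketch below the foci are arbitrary points of $\mathbb{D}$ chosen for illustration (so $m=d=0$ and $N=n$): the quotient in (3.13) is indeed a polynomial, symmetric in $z$ and $w$, and $P(0,w)=\Phi^{*}_{N-1}(0)\,\Phi_{N-1}(w)=\Phi_{N-1}(w)$ vanishes exactly at the chosen foci.

```python
import sympy as sp

z, w = sp.symbols('z w')
foci = [sp.Rational(1, 3), sp.I / 4, -sp.Rational(1, 2)]  # hypothetical foci in D
N = len(foci) + 1                                         # all foci in D, so N = n

Phi = sp.expand(sp.Mul(*[z - f for f in foci]))           # Phi_{N-1}(z)
# reversed polynomial Phi*(z) = z^{N-1} * conj(Phi)(1/z) = prod(1 - conj(f) z)
PhiS = sp.expand(sp.Mul(*[1 - sp.conjugate(f) * z for f in foci]))

num = w * Phi.subs(z, w) * PhiS - z * Phi * PhiS.subs(z, w)
P = sp.cancel(num / (w - z))                              # Bezoutian form (3.13)

# P is symmetric in z and w
assert sp.simplify(P - P.subs({z: w, w: z}, simultaneous=True)) == 0
# P(0, w) = Phi(w): the zeros of P(0, .) are the foci (Proposition 3.13)
assert sp.simplify(P.subs(z, 0) - Phi.subs(z, w)) == 0
```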
#### 3.2.2. From the Bezoutian form to Poncelet curves
As we have just seen, the Poncelet property of a curve implies that its
Mirman’s parametrization can be written in a Bezoutian form. This motivates us
to analyze the converse: is a Poncelet curve associated to any symmetric
polynomial in a Bezoutian form? This approach is tempting since the only data
necessary to generate such polynomials is the collections of foci $f_{j}$’s.
Let $P(z,w)$ be a symmetric polynomial in $z,w$, given by (3.13)–(3.14).
Clearly, representation (3.13) is sufficient for property (3.11) to hold. This
does not mean however that $P(z,w)=0$ automatically parametrizes a Poncelet
curve. A necessary condition for that is, for instance, that for every
$z\in\mathbb{T}$, the equation $P(z,w)=0$ has the same number of solutions on
$\mathbb{T}$. Mirman showed in [47, Thm. 1] that this holds if $d=0$ and
$1+\sum_{j=1}^{N-1}\frac{1-|f_{j}|^{2}}{|z-f_{j}|^{2}}>0\quad\text{for all
$z\in\mathbb{T}$}.$ (3.16)
More precisely, (3.16) implies that for each $z\in\mathbb{T}$, the equation
$P(z,w)=0$ has exactly $n-1=N-2m-1$ distinct solutions $w\in\mathbb{T}$
(notice that it is automatically satisfied if all $f_{j}\in\mathbb{D}$). For
such polynomials $P(z,w)$ we can then define a family of $n$-Poncelet polygons
by
$\mathscr{P}(z):=\partial\left(\operatorname{conv}\\{w\in\mathbb{T}:P(z,w)=0\\}\right)$
that generate the package of Poncelet curves $C_{1},\dots,C_{[n/2]}$; here and
in what follows we denote the convex hull of the set $S$ by
$\operatorname{conv}(S)$.
Since $P(z,w)$ is a symmetric polynomial in $z$ and $w$, it can be written in
the form $P(z,w)=h(z+w,zw)$. If we set
$g(u,v):={\bar{\zeta}}^{N-1}h\left(\frac{2}{\bar{\zeta}},\frac{\zeta}{\bar{\zeta}}\right)\
\text{with }\zeta=u+iv,\;\overline{\zeta}=u-iv,$
then $g(u,v)$ is a polynomial with real coefficients of degree $N-1$. Fujimura
(see [21]) expressed $g(u,v)$ as a polynomial in $\zeta$ and
$\overline{\zeta}$, where the coefficients are given by explicit formulas in
the elementary symmetric functions (and their complex conjugates) of the
$f_{j}$’s. She only stated these formulas in the case when $m=d=0$, but her
arguments work in general. Let $G(u_{1},u_{2},u_{3})$ denote the
homogenization of $g(u,v)$ as usual. Then the real algebraic curve given by
$G(u_{1},u_{2},u_{3})=0$ is the dual of a real algebraic curve $\Gamma$ of
class $N-1$ whose real foci are the $f_{j}$’s. By construction, we have
$\bigcup_{k=1}^{[n/2]}C_{k}\subseteq\Gamma(\mathbb{R}).$
We cannot assure in general that this inclusion is not strict. Neither can we
assure that the $C_{k}$’s will be Poncelet curves, in the sense of Definition
3.2. Moreover, Mirman [47, Remark 1] stated (without proof) that $C_{1}$ is
always convex. In the following example we show that this is false in general,
if $m>0$.
###### Example 3.15.
Consider a family of examples with $P(z,w)$ given by (3.13)–(3.14), where
$N=5$ and
$\Phi_{4}(z)=\Phi_{4}(z;0,0,0,a),\quad a\in\mathbb{C}\setminus\mathbb{T}.$
In this case, the corresponding polynomial $g(u,v)$ is of the form
$g(u,v)=(a^{2}-1)v^{4}+2\left((a^{2}-1)u^{2}-2au-2a^{2}+6\right)v^{2}+\left((a^{2}-1)u^{4}-8au^{3}+(12-4a^{2})u^{2}+16au-16\right).$
It is easy to show that $g(u,v)$ is an irreducible polynomial (over
$\mathbb{C}$) by looking at its Newton polygon, which is the convex hull of
the lattice points $(0,0)$, $(4,0)$, and $(0,4)$. The method of proving
(absolute) irreducibility of polynomials in several variables via Newton
polytopes is nicely explained in [22]. Notice that $g(u,v)$ can be viewed as a
quadratic polynomial in $v^{2}$, so it is relatively easy to carry out
explicit calculations to study $\Gamma^{*}(\mathbb{R})$.
In particular,
* •
$\Gamma^{*}(\mathbb{R})$ is nonsingular and a disjoint union of two nested
“ovals” in $\overline{\mathbb{C}}$.
* •
If $|a|<1$, then both components of $\Gamma^{*}(\mathbb{R})$ lie in the
exterior of $\mathbb{T}$ (which means that
$\Gamma(\mathbb{R})\subset\mathbb{D}$). Furthermore, if
$|a|\leq\sqrt{3-\sqrt{6}}\approx 0.741964$, then neither of the two components
of $\Gamma^{*}(\mathbb{R})$ has inflection points (so, $\Gamma(\mathbb{R})$
has no cusps). If $\sqrt{3-\sqrt{6}}<|a|<1$, then the larger component of
$\Gamma^{*}(\mathbb{R})$ has inflection points, see Figure 8, left.
* •
If $1\leq|a|\leq 5/3$, then one component of $\Gamma^{*}(\mathbb{R})$
intersects $\mathbb{T}$, see Figure 8, right.
* •
If $|a|>5/3$, then one component of $\Gamma^{*}(\mathbb{R})$ lies in the
exterior and the other lies in the interior of $\mathbb{T}$ (so, the same is
true for $\Gamma(\mathbb{R})$). Furthermore, if
$|a|\geq\sqrt{3+\sqrt{6}}\approx 2.33441$, then neither of the two components
of $\Gamma^{*}(\mathbb{R})$ has inflection points, see Figure 9. If
$5/3<|a|<\sqrt{3+\sqrt{6}}$, then the component of $\Gamma^{*}(\mathbb{R})$
that lies in the exterior of $\mathbb{T}$ has inflection points.
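The first bullet above is easy to probe numerically. The following Python/NumPy sketch (with $a=0.5$, an illustrative real value; the sampling grid is also an arbitrary choice) solves $g(u,v)=0$ as a quadratic in $v^{2}$ along a grid of $u$ and checks that every real point found satisfies $u^{2}+v^{2}>1$, i.e., lies in the exterior of $\mathbb{T}$.

```python
import numpy as np

# Sanity check for Example 3.15, first bullet: for real a with |a| < 1
# (here a = 0.5, an illustrative choice), every sampled real point of the
# dual curve Gamma*(R) lies in the exterior of the unit circle.
a = 0.5

def v2_roots(u):
    """Roots in t = v^2 of g(u, v), viewed as a quadratic in t."""
    A = a**2 - 1
    B = 2 * ((a**2 - 1) * u**2 - 2 * a * u - 2 * a**2 + 6)
    C = (a**2 - 1) * u**4 - 8 * a * u**3 + (12 - 4 * a**2) * u**2 + 16 * a * u - 16
    return np.roots([A, B, C])

norms = []  # values of u^2 + v^2 over sampled real points of Gamma*(R)
for u in np.linspace(-4, 4, 201):
    for t in v2_roots(u):
        if abs(t.imag) < 1e-10 and t.real >= 0:   # (u, v) is a real point
            norms.append(u**2 + t.real)

assert norms and min(norms) > 1.0                 # all points outside T
```

The same loop with $a=1.5$ would produce points inside $\mathbb{T}$ as well, in agreement with the third bullet.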
Figure 8. Illustration for Example 3.15: curve $\Gamma(\mathbb{R})$ (solid lines) and its dual $\Gamma^{*}(\mathbb{R})$ (dotted lines) for $a=0.9$ (left) and $a=1.5$, in which case Mirman’s condition (3.16) is not satisfied (right). Notice that in this situation $\Gamma(\mathbb{R})$ is tangent both to triangles and pentagons.
Figure 9. Illustration for Example 3.15: a nonsingular $3$-Poncelet curve
$C_{1}$ (left) and the corresponding algebraic curve $\Gamma(\mathbb{R})$
(solid lines) and its dual $\Gamma^{*}(\mathbb{R})$, dotted lines, for
$a=2.4$. Curve $C_{1}$ for $a=2$ appears in Figure 5, right.
Notice that Mirman’s condition (3.16) to generate Poncelet curves is satisfied
whenever $|a|<1$ or $|a|>5/3$. For $|a|<1$, we obtain a package of two
$5$-Poncelet curves $C_{1}$ and $C_{2}$ with $C_{1}\cup
C_{2}=\Gamma(\mathbb{R})$. In this case, $C_{1}$ is nonsingular and equal to
the boundary of the convex hull of $\Gamma(\mathbb{R})$. Furthermore, if
$|a|\leq\sqrt{3-\sqrt{6}}$, then $C_{2}$ is also nonsingular and convex since
its dual $C_{2}^{*}$ has no inflection points.
For $|a|>5/3$, the situation is more surprising and we obtain a package
consisting of a single $3$-Poncelet curve $C_{1}$ with
$C_{1}\not=\Gamma(\mathbb{R})$, see Figure 9. If $|a|\geq\sqrt{3+\sqrt{6}}$,
then $C_{1}$ is nonsingular and convex since its dual $C_{1}^{*}$ has no
inflection points. However, if $5/3<|a|<\sqrt{3+\sqrt{6}}$, then $C_{1}$ is
singular since $C_{1}^{*}$ has inflection points which correspond to cusp
singularities of $C_{1}$.
###### Example 3.16.
Another common misconception is that, in the package of Poncelet curves,
$C_{1}$ encloses the rest of the curves $C_{k}$. This is definitely the case if the
starting point is a family of convex Poncelet polygons
$\\{\mathscr{P}(z):\,z\in\mathbb{T}\\}$, as in Definition 3.9. However, if we
construct our curves from the representation (3.13)–(3.14), satisfying (3.16),
this is not always true. The situation with $N=7$,
$\Phi_{6}(z)=\Phi_{6}(z;0,0,0,0,0,a)$
and $a=1.41$ is illustrated in Figure 10.
Figure 10. Illustration for Example 3.16: curves $C_{1}$ and $C_{2}$ have a
non-empty intersection.
## 4\. Complete Poncelet curves with given foci
In this section, we gather all the knowledge accumulated so far to analyze
minimal class complete $n$-Poncelet curves $\Gamma$ in the sense of the
Definition 3.11. In particular, we show that they are characterized by either
one of these properties:
* •
$\Gamma$ is of class $n-1$;
* •
all real foci of $\Gamma$ are inside the unit disk $\mathbb{D}$.
Moreover, in this case the set of foci $f_{1},\dots,f_{n-1}$ determines
$\Gamma$ completely, so that there is a bijection between points in
$\mathbb{D}^{n-1}$ and complete $n$-Poncelet curves; in other words, the curve
$\Gamma$ can be reconstructed from its real foci. We describe three approaches
to this problem; all three can be formulated in terms of the paraorthogonal
extension of the set $f_{1},\dots,f_{n-1}$.
Recall that the paraorthogonal extension (see Definition 2.2) consists in
applying the Szegő recursion as in (2.19) with
$\Phi_{n-1}(z)=\Phi_{n-1}(z;f_{1},\dots,f_{n-1})$ and obtaining the
$1$-parametric family of points
$\mathcal{Z}_{n}^{\lambda}=\\{z_{n,1}^{\lambda},\dots,z_{n,n}^{\lambda}\\}$ on
$\mathbb{T}$ as the zeros of the resulting paraorthogonal polynomials of
degree $n$. It turns out that the sets $\mathcal{Z}_{n}^{\lambda}$ are identified
by the same Blaschke product with zeros at $f_{j}$’s and, at the same time,
are eigenvalues of a $1$-parametric family of unitary dilations (see Section
2.3.3) of the CMV matrix $\bm{\mathcal{G}}^{(n-1)}\in\mathcal{S}_{n-1}$
associated with $\Phi_{n-1}$. In short, all these approaches are completely
equivalent.
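Before stating the theorem, here is a small Python/NumPy sketch of the paraorthogonal extension (the foci $f_{j}$ and the value of $\lambda$ are arbitrary illustrative choices): the set $\mathcal{Z}_{n}^{\lambda}$ is computed as the set of roots of $z\,\Phi_{n-1}(z)-\overline{\lambda}\,\Phi_{n-1}^{*}(z)$, and all of them land on the unit circle.

```python
import numpy as np

# Paraorthogonal extension, numerically: with Phi_{n-1}(z) = prod (z - f_j),
# the set Z_n^lambda consists of the n roots of
#     z * Phi_{n-1}(z) - conj(lambda) * Phi*_{n-1}(z) = 0,
# all of which lie on the unit circle.  The foci f_j and lambda below are
# arbitrary illustrative values (f_j in D, lambda on T).
f = [0.3 + 0.4j, -0.5j, 0.2]                 # f_1, ..., f_{n-1} with n = 4
lam = np.exp(0.9j)

phi = np.poly(f)                              # coefficients of Phi_{n-1}, monic
phistar = np.conj(phi[::-1])                  # reversed conjugates give Phi*_{n-1}
poly = np.polymul([1, 0], phi) - np.conj(lam) * np.concatenate(([0], phistar))
zn = np.roots(poly)                           # the set Z_n^lambda

assert len(zn) == 4
assert np.allclose(np.abs(zn), 1.0)           # all points of Z_n^lambda are on T
```

Varying $\lambda$ over $\mathbb{T}$ sweeps out the whole $1$-parametric family of point sets whose polygons envelope the Poncelet curve.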
###### Theorem 4.1.
Let $\Gamma$ be a plane real algebraic curve such that $\Gamma(\mathbb{R})$ is
a complete $n$-Poncelet curve ($n\geq 3$) with respect to $\mathbb{T}$,
generated by a family of convex Poncelet polygons $\mathscr{P}(z)$ (see
Definition 3.11), so that
$\Gamma(\mathbb{R})=\bigcup_{k=1}^{[n/2]}C_{k}.$
Then $\Gamma$ is of minimal class (that is, of class $n-1$) if and only if all
real foci are contained in the unit disk $\mathbb{D}$. In this case, $C_{1}$ is a
convex curve, the real foci $f_{1},\dots,f_{n-1}$ of $\Gamma$ (counted with
multiplicity) determine $\Gamma$, and for every set of points
$f_{1},\dots,f_{n-1}$ in $\mathbb{D}$, not necessarily all distinct, there
exists a (unique) algebraic $n$-Poncelet curve $\Gamma$ of class $n-1$ with
real foci precisely at $f_{1},\dots,f_{n-1}$. Such a curve $\Gamma$ is
completely determined by its set of real points $\Gamma(\mathbb{R})$.
There are three equivalent realizations of the complete $n$-Poncelet curve
$\Gamma$ of class $n-1$:
1. (i)
$\Gamma(\mathbb{R})$ is the envelope of the closed polygons supported on the
paraorthogonal extension (see Definition 2.2) of the set
$f_{1},\dots,f_{n-1}$.
2. (ii)
The Blaschke product
$B_{n}(z)=z\,\frac{\Phi_{n-1}(z)}{\Phi_{n-1}^{*}(z)},\quad\Phi_{n-1}(z)=\Phi_{n-1}(z;f_{1},\dots,f_{n-1}),$
(4.1)
identifies the set of points
$\mathcal{Z}^{\lambda}=\\{z_{1}^{\lambda},\dots,z_{n}^{\lambda}\\}$ on
$\mathbb{T}$ (see Definition 2.3) in such a way that $\Gamma(\mathbb{R})$ is
the envelope of the closed polygons supported on $\mathcal{Z}^{\lambda}$.
3. (iii)
There is a matrix $\bm{A}\in\mathcal{S}_{n-1}$ with its spectrum equal to the
set $f_{1},\dots,f_{n-1}$ (and hence, $\bm{A}$ is unique up to unitary
equivalence) such that
$\operatorname{conv}(\Gamma(\mathbb{R}))=\partial W(\bm{A}).$
The equation of the dual curve $\Gamma^{*}$ in $\mathbb{P}^{2}(\mathbb{C})$ is
given by
$G_{\bm{A}}(u_{1},u_{2},u_{3})=0,$
where $G_{\bm{A}}(u_{1},u_{2},u_{3})$ was defined in (2.24). As a matrix
$\bm{A}$ we can take the cut-off CMV matrix $\bm{\mathcal{G}}^{(n-1)}$
corresponding to the polynomial $\Phi_{n-1}(z;f_{1},\dots,f_{n-1})$.
Furthermore, each set
$\mathcal{Z}^{\lambda}=\\{z_{1}^{\lambda},\dots,z_{n}^{\lambda}\\}$ of
eigenvalues of a $1$-parametric family of unitary dilations of matrix $\bm{A}$
in (i) is identified by the Blaschke product (4.1).
###### Proof.
We first show that such a curve $\Gamma$ of minimal class $n-1$ is completely
determined by its set of real points $\Gamma(\mathbb{R})$. Let
$\widetilde{\Gamma}$ be the real algebraic curve such that
$\widetilde{\Gamma}^{*}$ is the intersection of all real algebraic curves
containing $\Gamma(\mathbb{R})^{*}$. Note that
$\widetilde{\Gamma}^{*}\subseteq\Gamma^{*}$ and
$\widetilde{\Gamma}^{*}(\mathbb{R})=\Gamma^{*}(\mathbb{R})$. If
$\widetilde{G}(u_{1},u_{2},u_{3})$ and ${G}(u_{1},u_{2},u_{3})$ are the
homogeneous polynomials of minimal degree defining $\widetilde{\Gamma}^{*}$ and
$\Gamma^{*}$, respectively, then $\widetilde{G}(u_{1},u_{2},u_{3})$ divides
$G(u_{1},u_{2},u_{3})$. By Lemma 3.10, the class of the curve
$\widetilde{\Gamma}$ is $n-1$, which means that
$\widetilde{G}(u_{1},u_{2},u_{3})$ has degree $n-1$. Since
$\widetilde{G}(u_{1},u_{2},u_{3})$ divides $G(u_{1},u_{2},u_{3})$ and since
$G(u_{1},u_{2},u_{3})$ also has degree $n-1$, we have
$G(u_{1},u_{2},u_{3})=c\,\widetilde{G}(u_{1},u_{2},u_{3})$, where $c$ is a
nonzero constant. Thus, $\Gamma^{*}=\widetilde{\Gamma}^{*}$ and hence
$\Gamma=\widetilde{\Gamma}$. Since $\widetilde{\Gamma}$ is completely
determined by $\Gamma(\mathbb{R})$, so is $\Gamma$.
Let $P(z,w)=0$ be Mirman’s parametrization of the complete $n$-Poncelet curve
$\Gamma$, as explained in Section 3.2.1. By Theorem 3.14, $\Gamma$ is of class
$n-1$ if and only if $N=n$, that is, $m=d=0$ (in other words, when all real
foci are in $\mathbb{D}$).
Let $f_{1},\dots,f_{n-1}$ be the real foci of a complete $n$-Poncelet curve
$\Gamma$ of class $n-1$. Again by Theorem 3.14, its Mirman’s parametrization
is, up to a multiplicative constant,
$P(z,w)=\frac{w\,\Phi_{n-1}(w)\Phi_{n-1}^{*}(z)-z\,\Phi_{n-1}(z)\Phi_{n-1}^{*}(w)}{w-z},$
(4.2)
with $\Phi_{n-1}$ defined in (4.1). In particular, it means that for any pairs
of points $z,w\in\mathbb{T}$ satisfying $P(z,w)=0$ it holds that
$B_{n}(z)=B_{n}(w),\quad\text{with}\quad
B_{n}(z)=z\,\frac{\Phi_{n-1}(z)}{\Phi_{n-1}^{*}(z)},$
i.e., $z,w$ are identified by $B_{n}$. As we have seen in Section 2.2.1, this
is equivalent to $z$ and $w$ belonging to a common set of the paraorthogonal
extension of $\Phi_{n-1}$. Since
$|B_{n}(z)|=1$ for $z\in\mathbb{T}$, there exists $\lambda\in\mathbb{T}$ such
that
$z,w\in\mathcal{Z}^{\lambda}=\\{z_{1}^{\lambda},\dots,z_{n}^{\lambda}\\},$
the set of solutions of $B_{n}(z)=\overline{\lambda}$. Hence, by our
discussion in Section 2.2.2, for each $\lambda\in\mathbb{T}$, points from
$\mathcal{Z}^{\lambda}$ are also the eigenvalues of a rank one unitary
dilation of the $(n-1)\times(n-1)$ cutoff CMV matrix $\bm{A}$ whose
eigenvalues are $\\{f_{j}\\}_{j=1}^{n-1}$. From [46] (see also Section 2.2.1)
it follows that the unitary equivalence class of $\bm{A}$ is also uniquely
determined by $f_{1},\dots,f_{n-1}$ and as a representative of this class we
can take $\bm{A}=\bm{\mathcal{G}}^{(n-1)}(\alpha_{0},\dots,\alpha_{n-1})$,
where the Verblunsky coefficients are determined by $\Phi_{n-1}$ using the
inverse Szegő recursion: with notation (2.13) and (2.16),
$\Phi_{n-1}(z)=\Phi_{n-1}(z;f_{1},\dots,f_{n-1})=\Phi_{n}^{(\alpha_{0},\dots,\alpha_{n-1})}(z).$
Let
$\mathscr{P}(z):=\partial\left(\operatorname{conv}\\{w\in\mathbb{T}:P(z,w)=0\\}\right);$
by the previous observation, this family coincides with
$\left\\{\partial\left(\text{conv}(\mathcal{Z}^{\lambda})\right):\lambda\in\mathbb{T}\right\\}.$
Direct calculations (see also [61, formula (6.10)]) show that, with $z=e^{i\theta}$,
$\frac{d}{d\theta}\arg B_{n}\left(e^{i\theta}\right)=\frac{d}{d\theta}\arg\overline{\lambda}=1+\sum_{j=1}^{n-1}\frac{1-|f_{j}|^{2}}{|z-f_{j}|^{2}}>0$
(compare with the necessary condition in (3.16); see also an alternative
expression in terms of orthonormal OPUC in [42, formula (10.8)]). The Inverse
Function Theorem shows that the Poncelet correspondence $\tau$ for
$\\{\mathscr{P}(z):\,z\in\mathbb{T}\\}$ is smooth, strictly increasing, and
satisfies (3.3). In consequence, the associated Poncelet curves
$\\{C_{1},\ldots,C_{[n/2]}\\}$ constructed in Section 3.1.3 make up the
package of Poncelet curves generated by $\bm{A}$ in the sense described in
[46]. Therefore, we may apply what is stated in [46, page 130], namely that
(4.2) is the Mirman parametrization of a complete $n$-Poncelet curve $\Gamma$.
It follows also that $\Gamma$ is an algebraic curve of class $n-1$ with real
foci $\\{f_{j}\\}_{j=1}^{n-1}$ (see Proposition 3.13). Our construction also
shows the equivalence of (i) and (ii).
It was proved in [25] that
$W(\bm{A})=\bigcap_{\lambda\in\mathbb{T}}\text{conv}(\mathcal{Z}^{\lambda}).$
(4.3)
This shows, in particular, that $\partial W(\bm{A})$ coincides with the
component $C_{1}$ in the package of Poncelet curves (3.8); our earlier
considerations imply that $C_{1}$ is also the boundary of the convex hull of
$\Gamma(\mathbb{R})$ (and thus, is a convex curve). If $\Gamma^{\prime}$ is
the curve whose existence is guaranteed by Kippenhahn’s Theorem (see Theorem
C), then since the convex hull of $\Gamma^{\prime}(\mathbb{R})$ is $\partial
W(\bm{A})$, it follows from Bezout’s Theorem that $\Gamma=\Gamma^{\prime}$.
The content of Section 2.3.2 implies that the equation for the dual curve of
$\Gamma$ is
$G_{\bm{A}}(u_{1},u_{2},u_{3})=0,$ (4.4)
where $G_{\bm{A}}$ was defined in (2.24), as claimed. We have thus
demonstrated that $\Gamma$ can be realized by the description in (iii). The
fact that each set of eigenvalues of a $1$-parametric family of unitary
dilations of $\bm{A}$ in (iii) is identified by the Blaschke product $B_{n}$
is the direct consequence of (2.20).
To show that $\Gamma$ is the unique curve with the desired properties, we
suppose that $\widetilde{\Gamma}$ is another complete $n$-Poncelet curve of
class $n-1$ with real foci $\\{f_{j}\\}_{j=1}^{n-1}$. Let
$\widetilde{G}(u_{1},u_{2},u_{3})=0$ be the equation of
$\widetilde{\Gamma}^{*}$. Then
$G_{\bm{A}}(1,i,f_{j})=0=\widetilde{G}(1,i,f_{j}),\qquad\quad j=1,2\dots,n-1.$
Since $G_{\bm{A}}(1,i,z)$ and $\widetilde{G}(1,i,z)$ are both polynomials of
degree (at most) $n-1$ and have $n-1$ zeros in common, it follows that they
must be scalar multiples of one another and hence they define the same curve.
This means $\Gamma^{*}=\widetilde{\Gamma}^{*}$ and so
$\Gamma=\widetilde{\Gamma}$. ∎
###### Remark 4.2.
1. a)
In Theorem 4.1 we assume that $\Gamma$ is a complete $n$-Poncelet curve and is
of class $n-1$. We conjecture that this assumption is superfluous, and the
results in Theorem 4.1 can be established for any complete Poncelet curve
$\Gamma$. In particular, it would imply that all real foci of such a curve are
in $\mathbb{D}$. Notice that in general for a real algebraic curve $\Gamma$,
the fact that $\Gamma(\mathbb{R})\subset\mathbb{D}$ does not imply that its
real foci are in $\mathbb{D}$. A simple example [51] is the curve given by the
equation
$\left(x_{1}^{2}+x_{2}^{2}-x_{3}^{2}/2\right)\left((x_{1}-2)^{2}+x_{2}^{2}+x_{3}^{2}\right)=0$
with real foci at $(0:0:1)$ and $(2:0:1)$.
2. b)
It was established in [50] that the $n$-Poncelet property of a convex curve of
class $n-1$ characterizes this curve as being a boundary of $W(\bm{A})$ for
some $\bm{A}\in\mathcal{S}_{n-1}$. The assumption on its class is essential as
a counterexample in [47, Example 1 on p. 131] to a conjecture of Gau and Wu
[25, Conjecture 5.1] shows; see also the discussion in [27] on p. 184.
Curiously, the counterexample that appears in [50], constructed by violating
the assumption that all $f_{j}$’s are in $\mathbb{D}$, is wrong. Namely, the
authors consider the case of $\Phi_{4}(z;0,0,0,a)=z^{3}(z-a)$, $a>1$, and the
corresponding curve with Mirman’s parametrization (3.13). In [50], they take
$a=2$, but the resulting Poncelet curve is not convex! In fact, it is depicted
in Figure 5, right. However, for a correct counterexample it is sufficient to
use $a>\sqrt{3+\sqrt{6}}$, see our detailed discussion in Example 3.15.
3. c)
Construction (ii) was extensively explored in the works [10, 11]. Its
equivalence with (iii) was proved in [46].
4. d)
Property (4.3) characterizes the class $\mathcal{S}_{n-1}$: as was shown in
[25, Theorem 4.4], a contraction $\bm{A}\in\mathbb{C}^{(n-1)\times(n-1)}$
(i.e. $\|\bm{A}\|\leq 1$) is in $\mathcal{S}_{n-1}$ if and only if $W(\bm{A})$
in $\mathbb{D}$ has the Poncelet property.
Although Theorem 4.1 is stated for $n\geq 3$, its “toy version” for $n=2$ also
holds:
###### Proposition 4.3.
Let $f\in\mathbb{D}$, and for $\lambda\in\mathbb{T}$ define
$z^{\lambda}_{\pm}=\frac{1}{2}\left(f-\overline{f\lambda}\pm\sqrt{\left(f-\overline{f\lambda}\right)^{2}+4\overline{\lambda}}\right)\in\mathbb{T},$
zeros of
$\Phi_{2}^{(\overline{f},\lambda)}(z)=z(z-f)-\overline{\lambda}(1-\overline{f}z)$,
or equivalently, eigenvalues of the $2\times 2$ cut-off CMV matrix
$\bm{\mathcal{G}}^{(2)}=\bm{\mathcal{G}}^{(2)}(\overline{f},\lambda)=\begin{pmatrix}f&\overline{\lambda}\sqrt{1-|f|^{2}}\\\
\sqrt{1-|f|^{2}}&-\overline{f\lambda}\end{pmatrix}.$
Then for every $\lambda\in\mathbb{T}$, the straight segment joining
$z^{\lambda}_{-}$ and $z^{\lambda}_{+}$ passes through $f$.
See the proof for instance in [10, Theorem 4.1].
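Proposition 4.3 is easy to verify numerically. In the Python/NumPy sketch below (the values of $f$ and $\lambda$ are arbitrary illustrative choices), the two eigenvalues of $\bm{\mathcal{G}}^{(2)}$ lie on $\mathbb{T}$, and the chord joining them passes through $f$.

```python
import numpy as np

# Numerical check of Proposition 4.3: for f in D and lambda on T, the two
# eigenvalues of the 2x2 cut-off CMV matrix are on T, and the segment
# joining them passes through f.  The values of f and lambda below are
# arbitrary illustrative choices.
f = 0.3 + 0.2j
lam = np.exp(0.7j)
r = np.sqrt(1 - abs(f)**2)
G2 = np.array([[f, np.conj(lam) * r],
               [r, -np.conj(f * lam)]])
zm, zp = np.linalg.eigvals(G2)

assert np.allclose(np.abs([zm, zp]), 1.0)      # both eigenvalues lie on T
t = (f - zm) / (zp - zm)                       # position of f on the chord
assert abs(t.imag) < 1e-9 and 0 < t.real < 1   # f lies on the open segment
```

The eigenvalues agree with the explicit formula for $z^{\lambda}_{\pm}$ above, since the characteristic polynomial of $\bm{\mathcal{G}}^{(2)}$ is $z^{2}-(f-\overline{f\lambda})z-\overline{\lambda}$.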
## 5\. Elliptic case
Recall the fundamental result proved by Darboux (see [12], also [15, Theorem
3]), mentioned as Theorem B in the Introduction: if a component $C_{j}$ of a
complete $n$-Poncelet curve (3.8) is an ellipse and has the $n$-Poncelet
property (notice that here both $n$’s are the same), then the whole curve is a
union of $[n/2]$ disjoint ellipses (also known as a package of ellipses). Using the terminology
introduced in Section 3.1, this is equivalent to assuming that the curve
$C_{j}$ is the envelope of polygons $\mathscr{P}_{j}(z)$ as defined in (3.5),
with $\gcd(j,n)=1$.
In this section we want to explore these ideas further and analyze the case of
a component $C_{j}$ being an ellipse. This situation, especially when the
convex component $C_{1}$ is assumed to be an ellipse, has attracted interest
before, see e.g. [4, 6, 7, 8, 9, 10, 23, 26, 30, 43, 49]. As it could be
expected, for an ellipse, Mirman’s parametrization $P(z,w)=0$, described in
Section 3, has a more explicit form.
It was derived by Mirman [47, 48] that if $f_{1},f_{2}\in\mathbb{D}$ are the
foci of $C_{j}$ and $s$ is the length of its minor semiaxis, then the straight
line $l$ joining two points, $z,w\in\mathbb{T}$, is tangent to $C_{j}$ if and
only if $z,w$ satisfy the equation $q(z,w)=0$, where
$q(z,w)=q(z,w;f_{1},f_{2},s):=\left(w+b_{1}(z;f_{1})\right)\left(w+b_{1}(z;f_{2})\right)-\frac{4s^{2}zw}{\Phi_{2}^{*}(z;f_{1},f_{2})},$
(5.1)
see the notation in (2.13), (2.14), (2.21). If $C_{j}$ degenerates to a
point ($f_{1}=f_{2}$, $s=0$), then we need to replace the expression above by
$q(z,w)=q(z,w;f_{1},f_{1},0)=q(z,w;f_{1}):=w+b_{1}(z;f_{1}).$ (5.2)
In the non-degenerate case ($C_{j}$ is not a point) the Poncelet
correspondence $\tau$ is well defined. Algebraically, it means that for
$z\in\mathbb{T}$, the equation $q(z,w;f_{1},f_{2},s)=0$ is quadratic in $w$
and has two solutions, the endpoints of the tangents to $C_{j}$ starting at
$z$ and ending on $\mathbb{T}$ (namely, $\tau(z)$ and $\tau^{-1}(z)$). This
allows us to define an iterative process (that we call the _circular Mirman’s
iteration_) as follows:
> Start from $w_{0}\in\mathbb{T}$ and define $w_{1}\in\mathbb{T}$ as one of
> the two solutions of $q(w_{0},w;f_{1},f_{2},s)=0$.
>
> For $i=1,2,\dots$, choose as $w_{i+1}$ the solution of
> $q(w_{i},w;f_{1},f_{2},s)=0$ such that $w_{i+1}\neq w_{i-1}$.
Clearly, $C_{j}$ has an $n$-Poncelet property if and only if $w_{n}=w_{0}$,
and $n$ is the smallest natural number for which equality holds (in other
words, if the orbit of the circular Mirman’s iteration has length $n$).
The advantage of the algebraic interpretation of the iteration $\tau^{k}(z)$
is that we no longer need to assume $z\in\mathbb{T}$ (in which case the
geometric interpretation is less obvious).
Reasoning as in the case of Mirman’s parametrization (see Section 3.2), we see
that substituting $w_{0}=0$ into $q(w_{0},w;f_{1},f_{2},s)=0$ yields the foci of
$C_{j}$; in other words, starting Mirman’s iterations with $w_{0}=0$
necessarily yields either $w_{1}=f_{1}$ or $w_{1}=f_{2}$. So, we can define
the iterative process (let us call it the _inner Mirman’s iteration_) as
follows:
> Start from $w_{0}=0$, define $w_{1}\in\\{f_{1},f_{2}\\}$ and for
> $i=1,2,\dots$, choose as $w_{i+1}$ the solution of
> $q(w_{i},w;f_{1},f_{2},s)=0$ such that $w_{i+1}\neq w_{i-1}$.
Assume that $C_{j}$ has the $n$-Poncelet property and let
$f_{1},\dots,f_{n-1}$ be the set of points in $\mathbb{D}$ generated by this
iterative process. According to Theorem 4.1, there exists a unique algebraic
$n$-Poncelet curve $\Gamma$ of class $n-1$ with real foci precisely at
$f_{1},\dots,f_{n-1}$, and let $P(z,w)=0$ be its Mirman’s parametrization. The
following result was proved in [47] and is a simple consequence of Bezout’s
theorem:
###### Theorem 5.1.
Under the assumptions above, $q(z,w;f_{1},f_{2},s)$ divides $P(z,w)$.
Moreover, in the decomposition (3.8), each component $C_{k}$ is an ellipse,
and they are all disjoint. The ellipse $C_{[n/2]}$ is degenerate (a point) if
$n$ is even. If we denote by $f_{2k-1}$ and $f_{2k}$ the foci of $C_{k}$, and
by $s_{k}$ its minor semiaxis, then
$P(z,w;f_{1},\dots,f_{n-1})=\prod_{k=1}^{[n/2]}q(z,w;f_{2k-1},f_{2k},s_{k});$
(5.3)
if $n$ is even, we take $f_{n-1}=f_{n}$, $s_{[n/2]}=0$.
Comparing this statement with Theorem B in the Introduction, we see that
Mirman in fact reproved Darboux’s result while being unaware of his work!
###### Remark 5.2.
An essential assumption of Theorem 5.1 is that the curve $\Gamma$ is of class
exactly $n-1$, as the following construction shows. With $f_{1},\dots,f_{n-1}$ being
the points in $\mathbb{D}$ generated by inner Mirman’s iterations for $C_{j}$,
let
$b_{n}(z)=b_{n}(z;0,f_{1},\dots,f_{n-1})=\frac{z\Phi_{n-1}(z)}{\Phi_{n-1}^{*}(z)},\quad\Phi_{n-1}(z)=\Phi_{n-1}(z;f_{1},\dots,f_{n-1}).$
Pick arbitrary points $g_{1},\dots,g_{m-1}\in\mathbb{D}$, define
$B_{m}(z)=b_{m}(z;0,g_{1},\dots,g_{m-1})=\frac{z\Phi_{m-1}(z)}{\Phi_{m-1}^{*}(z)},\quad\Phi_{m-1}(z)=\Phi_{m-1}(z;g_{1},\dots,g_{m-1}),$
and consider the composition
$D_{mn}(z)=(B_{m}\circ b_{n})(z)=B_{m}(b_{n}(z)),$
which is again a Blaschke product with $D_{mn}(0)=0$. Then the envelope of
polygons supported on solutions of $D_{mn}(z)=\overline{\lambda}$,
$\lambda\in\mathbb{T}$, is a complete $mn$-Poncelet curve $\widetilde{\Gamma}$
of class $mn-1$. One of its components is the $n$-Poncelet ellipse $C_{j}$.
However, as we have seen, the inner Mirman’s iteration for $C_{j}$ generates
only $n-1$ values, $f_{1},\dots,f_{n-1}$, among them, the two foci of this
ellipse. All these values are a subset of the real foci of
$\widetilde{\Gamma}$, and by considerations above and by Theorem 5.1, these
will be foci of a package of $[n/2]$ ellipses, all components of
$\widetilde{\Gamma}$. However, this does not mean that _the whole_ curve
$\widetilde{\Gamma}$ is a package of Poncelet ellipses.
###### Example 5.3.
Let $n=3$, $f_{1},f_{2}\in\mathbb{D}$, $g_{1},\dots,g_{m-1}\in\mathbb{D}$
($m\geq 2$), and as above,
$D_{3m}(z)=(B_{m}\circ b_{3})(z),\quad b_{3}(z)=b_{3}(z;0,f_{1},f_{2}),\quad
B_{m}(z)=b_{m}(z;0,g_{1},\dots,g_{m-1}).$
Then the envelope of polygons supported on solutions of
$D_{3m}(z)=\overline{\lambda}$, $\lambda\in\mathbb{T}$, contains a
$3$-Poncelet ellipse with foci $f_{1}$ and $f_{2}$, whose minor semiaxis is
$s^{2}=\frac{1-|f_{1}|^{2}-|f_{2}|^{2}+|f_{1}f_{2}|^{2}}{4}$
(use (5.3) with $n=3$, $m=1$). Notice that the inner Mirman iteration for that
ellipse yields only $f_{1}$ and $f_{2}$.
Figure 11. Illustration for Example 5.3, with $f_{1}=-f_{2}=1/2$ and $g_{1}=i/2$ ($m=2$).
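The structure of Example 5.3 can be explored numerically. The Python/NumPy sketch below uses the data of Figure 11 ($f_{1}=-f_{2}=1/2$, $g_{1}=i/2$, $m=2$) and an arbitrary illustrative $\lambda\in\mathbb{T}$: the six solutions of $D_{3m}(z)=\overline{\lambda}$ split into two $b_{3}$-level triples, and each triple forms a triangle whose three sides satisfy $q(z,w;f_{1},f_{2},s)=0$, i.e., are tangent to the $3$-Poncelet ellipse with the semiaxis given above.

```python
import numpy as np

# Example 5.3 with the data of Figure 11: f1 = -f2 = 1/2, g1 = i/2 (m = 2).
# Solutions of D_{3m}(z) = conj(lambda) are computed in two stages, following
# the composition D_{3m} = B_m o b_3; the choice of lambda is illustrative.
f1, f2, g1 = 0.5, -0.5, 0.5j
s2 = (1 - abs(f1)**2 - abs(f2)**2 + abs(f1 * f2)**2) / 4   # s^2 from Example 5.3

def level_set(zeros, val):
    """Roots of z * prod(z - f_j) / prod(1 - conj(f_j) z) = val; all on T."""
    phi = np.poly(zeros)
    phistar = np.conj(phi[::-1])
    return np.roots(np.polymul([1, 0], phi) - val * np.concatenate(([0], phistar)))

def q(z, w):
    """Tangency polynomial (5.1) for the ellipse with foci f1, f2, semiaxis s."""
    b = lambda x, g: (x - g) / (1 - np.conj(g) * x)
    phi2s = (1 - np.conj(f1) * z) * (1 - np.conj(f2) * z)
    return (w + b(z, f1)) * (w + b(z, f2)) - 4 * s2 * z * w / phi2s

lam = np.exp(0.4j)
mus = level_set([g1], np.conj(lam))        # B_2(mu) = conj(lambda): two values on T
for mu in mus:
    tri = level_set([f1, f2], mu)          # b_3(z) = mu: a triangle inscribed in T
    assert np.allclose(np.abs(tri), 1.0)
    for i in range(3):                     # every side is tangent to the ellipse
        assert abs(q(tri[i], tri[(i + 1) % 3])) < 1e-8
```

This makes visible the phenomenon of Remark 5.2: the ellipse "sees" only its own triple of level sets, even though the composition produces a $6$-Poncelet package.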
A simple consequence of our considerations is the following
###### Theorem 5.4.
Let $\Gamma$ be a complete $n$-Poncelet curve of class $n-1$, let $C_{j}$ be an
elliptic component of the decomposition (3.8), and assume that the maximal set
generated by inner Mirman’s iteration corresponding to $C_{j}$ contains only
$m-1$ distinct values $w_{1},\dots,w_{m-1}$, $m\leq n$. Then
* •
$n$ is divisible by $m$;
* •
$\\{w_{1},\dots,w_{m-1}\\}\subset\\{f_{1},\dots,f_{n-1}\\}$, and $w_{j}$ are
foci of a package of $[m/2]$ ellipses, all of them components of $\Gamma$.
For instance, if $n$ is prime, then the existence of any elliptic component in
$\Gamma(\mathbb{R})$ forces all $C_{k}$’s to be ellipses (it is a package of
ellipses).
We can reformulate Mirman’s iteration by observing the constant term in
(5.1) and applying Vieta’s formulas: given a value $z$, the two solutions
$w^{\prime}$ and $w^{\prime\prime}$ of $q(z,w;f_{1},f_{2},s)=0$ satisfy
$w^{\prime}w^{\prime\prime}=b_{1}(z;f_{1})b_{1}(z;f_{2})=b_{2}(z;f_{1},f_{2}).$
In particular, in the circular Mirman iteration,
$w_{i+1}w_{i-1}=b_{2}(w_{i};f_{1},f_{2}),\quad i=1,2,\dots$ (5.4)
This is a three-term recurrence relation that has the advantage of not
requiring knowledge of the value $s$ of the semiaxis. It needs two initial
values, for which we could use $w_{0}$ and $w_{1}$. Notice, however, that it
is necessary to know $s$ in order to calculate $w_{1}$. Obviously, since all
$w_{i}$’s are on $\mathbb{T}$, formulas (5.4) generate $w_{i+1}$ from
$(w_{i-1},w_{i})$ for all $i\in\mathbb{N}$.
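A minimal Python/NumPy sketch of the circular Mirman iteration via (5.4) (the foci $f_{1}=-f_{2}=1/2$ are an illustrative choice, with $s$ taken from the formula of Example 5.3 so that the resulting ellipse is $3$-Poncelet): the orbit closes after exactly three steps.

```python
import numpy as np

# Circular Mirman iteration via the three-term recurrence (5.4).  The foci
# f1 = -f2 = 1/2 are illustrative; the minor semiaxis is chosen by the
# formula of Example 5.3, so the ellipse is 3-Poncelet and w_3 = w_0.
f1, f2 = 0.5, -0.5
s2 = (1 - abs(f1)**2 - abs(f2)**2 + abs(f1 * f2)**2) / 4   # s^2

b1 = lambda z, f: (z - f) / (1 - np.conj(f) * z)           # Blaschke factor
b2 = lambda z: b1(z, f1) * b1(z, f2)

# w_1 from w_0: a root of the quadratic q(w_0, w; f1, f2, s) = 0 in (5.1)
w0 = np.exp(0.3j)
phi2s = (1 - np.conj(f1) * w0) * (1 - np.conj(f2) * w0)    # Phi_2^*(w_0)
w1 = np.roots([1, b1(w0, f1) + b1(w0, f2) - 4 * s2 * w0 / phi2s, b2(w0)])[0]

# the recurrence (5.4): w_{i+1} = b_2(w_i) / w_{i-1}
ws = [w0, w1]
for i in range(1, 3):
    ws.append(b2(ws[i]) / ws[i - 1])

assert np.allclose(np.abs(ws), 1.0)    # the whole orbit stays on T
assert abs(ws[3] - ws[0]) < 1e-9       # 3-Poncelet closure: w_3 = w_0
```

Note that $s$ enters only through the computation of $w_{1}$; the recurrence itself uses just the foci, as observed above.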
As before, relaxing the assumptions we can still use the iterations (5.4);
however, in the present situation there are two additional issues to address:
* •
The iteration breaks down when $w_{i-1}=0$;
* •
for that reason, we cannot use $w_{0}=0$ and $w_{1}=f_{1}$ (or $w_{1}=f_{2}$)
to initialize the process; we need to calculate a third value, a solution of
$q(w_{1},w;f_{1},f_{2},s)=0$, $w_{2}\neq 0$. Notice that
$q(f_{i},w;f_{1},f_{2},s)=w\left(w+b_{1}(f_{i};f_{j})-\frac{4s^{2}f_{i}}{\Phi_{2}^{*}(f_{i};f_{1},f_{2})}\right),\quad
i\neq j,\quad i,j\in\\{1,2\\},$
so we can initialize the inner Mirman iterations explicitly by
$w_{1}=f_{i}\in\\{f_{1},f_{2}\\},\quad
w_{2}=-b_{1}(f_{i};f_{j})+\frac{4s^{2}f_{i}}{\Phi_{2}^{*}(f_{i};f_{1},f_{2})},\quad
i\neq j,\quad i,j\in\\{1,2\\}.$ (5.5)
As observed above, the value of $s$ is necessary to calculate $w_{2}$.
We further explore the case of ellipses with Poncelet properties in a
forthcoming paper [36].
## Acknowledgments
The second author was partially supported by Simons Foundation Collaboration
Grants for Mathematicians (grant 710499) and by the European Regional
Development Fund along with Spanish Government (grant MTM2017-89941-P) and
Junta de Andalucía (grant UAL18-FQM-B025-A, as well as research group FQM-229
and Instituto Interuniversitario Carlos I de Física Teórica y Computacional),
and by the University of Almería (Campus de Excelencia Internacional del Mar
CEIMAR).
The fourth author graciously acknowledges support from Simons Foundation
Collaboration Grant for Mathematicians 707882.
The authors are indebted to V. Dragović, J. Langer and D. Singer for helpful
discussions.
## References
* [1] G. E. Andrews, V. Dragović, and M. Radnovic. Combinatorics of periodic ellipsoidal billiards. Preprint ArXiv:1908.01026, Aug. 2019.
* [2] M. C. Beltrametti, E. Carletti, D. Gallarati, and G. Monti Bragadin. Lectures on curves, surfaces and projective varieties. EMS Textbooks in Mathematics. European Mathematical Society (EMS), Zürich, 2009. A classical view of algebraic geometry, Translated from the 2003 Italian original by Francis Sullivan.
* [3] M. Berger. Geometry I. Universitext. Springer-Verlag, Berlin, 2009. Translated from the 1977 French original by M. Cole and S. Levy, Fourth printing of the 1987 English translation [MR0882541].
* [4] E. S. Brown and I. M. Spitkovsky. On Matrices with Elliptical Numerical Ranges. Linear and Multilinear Algebra, 52(3-4):177–193, May 2004.
* [5] V. P. Burskii and A. S. Zhedanov. On Dirichlet, Poncelet and Abel problems. Communications on Pure and Applied Analysis, 12(4):1587–1633, July 2013.
* [6] M.-T. Chien and K.-C. Hung. Elliptic numerical ranges of bordered matrices. Taiwanese Journal of Mathematics, 16(3):1007–1016, May 2012.
* [7] U. Daepp, P. Gorkin, and R. Mortini. Ellipses and finite Blaschke products. The American Mathematical Monthly, 109(9):785–795, Nov. 2002.
* [8] U. Daepp, P. Gorkin, A. Shaffer, B. Sokolowsky, and K. Voss. Decomposing finite Blaschke products. Journal of Mathematical Analysis and Applications, 426(2):1201–1216, 2015.
* [9] U. Daepp, P. Gorkin, A. Shaffer, and K. Voss. Möbius transformations and Blaschke products: The geometric connection. Linear Algebra and its Applications, 516:186–211, Mar. 2017.
* [10] U. Daepp, P. Gorkin, A. Shaffer, and K. Voss. Finding ellipses, volume 34 of Carus Mathematical Monographs. MAA Press, Providence, RI, 2018. What Blaschke products, Poncelet’s theorem, and the numerical range know about each other.
* [11] U. Daepp, P. Gorkin, and K. Voss. Poncelet’s theorem, Sendov’s conjecture, and Blaschke products. Journal of Mathematical Analysis and Applications, 365(1):93–102, May 2010.
* [12] G. Darboux. Principes de géométrie analytique. Gauthier-Villars, Paris, 1917.
* [13] A. Del Centina. Poncelet’s porism: a long story of renewed discoveries, I. Archive for History of Exact Sciences, 70(1):1–122, 2016.
* [14] A. Del Centina. Poncelet’s porism: a long story of renewed discoveries, II. Archive for History of Exact Sciences, 70(2):123–173, 2016.
* [15] V. Dragović. Poncelet–Darboux curves, their complete decomposition and Marden theorem. International Mathematics Research Notices, 2011(15):3502–3523, Oct. 2011.
* [16] V. Dragović and M. Radnovic. Bicentennial of the Great Poncelet Theorem (1813–2013): Current advances. Bulletin of the American Mathematical Society, 51(3):373–445, July 2014.
* [17] V. Dragović and M. Radnovic. Caustics of Poncelet Polygons and Classical Extremal Polynomials. Regular and Chaotic Dynamics, 24(1):1–35, Feb. 2019.
* [18] L. Flatto. Poncelet’s theorem. American Mathematical Society, Providence, RI, 2009. Chapter 15 by S. Tabachnikov.
* [19] M. Fujimura. Inscribed Ellipses and Blaschke Products. Computational Methods and Function Theory, 13(4):557–573, Sept. 2013.
* [20] M. Fujimura. Blaschke Products and Circumscribed Conics. Computational Methods and Function Theory, 17(4):635–652, May 2017.
* [21] M. Fujimura. Interior and exterior curves of finite Blaschke products. Journal of Mathematical Analysis and Applications, 467(1):711–722, Nov. 2018.
* [22] S. Gao. Absolute irreducibility of polynomials via Newton polytopes. Journal of Algebra, 237(2):501–520, 2001.
* [23] H.-L. Gau. Elliptic Numerical Ranges of $4\times 4$ Matrices. Taiwanese Journal of Mathematics, 1:117–128, Jan. 2006.
* [24] H.-L. Gau and P. Y. Wu. Dilation to inflations of $S(\phi)$. Linear and Multilinear Algebra, 45(2-3):109–123, Dec. 1998.
* [25] H.-L. Gau and P. Y. Wu. Numerical range of $S(\phi)$. Linear and Multilinear Algebra, 45(1):49–73, Nov. 1998.
* [26] H.-L. Gau and P. Y. Wu. Condition for the numerical range to contain an elliptic disc. Linear Algebra and its Applications, 364:213–222, 2003.
* [27] H.-L. Gau and P. Y. Wu. Numerical range and Poncelet property. Taiwanese Journal of Mathematics, 7(2):173–193, 2003.
* [28] J. L. Geronimus. Polynomials orthogonal on a circle and interval. Translated from the Russian by D. E. Brown; edited by Ian N. Sneddon. International Series of Monographs on Pure and Applied Mathematics, Vol. 18. Pergamon Press, New York-Oxford-London-Paris, 1960.
* [29] D. M. Goldschmidt. Algebraic functions and projective curves, volume 215 of Graduate Texts in Mathematics. Springer-Verlag, New York, 2003.
* [30] P. Gorkin. Four Theorems with their Foci on ellipses. American Mathematical Monthly, 126(2):99–111, Feb. 2019.
* [31] P. Gorkin and N. Wagner. Ellipses and compositions of finite Blaschke products. Journal of Mathematical Analysis and Applications, 445(2):1354–1366, Jan. 2017.
* [32] P. Griffiths and J. Harris. On Cayley’s explicit solution to Poncelet’s porism. L’Enseignement Mathématique, 24:31–40, 1978.
* [33] U. Haagerup and P. de la Harpe. The numerical radius of a nilpotent operator on a Hilbert space. Proc. Amer. Math. Soc., 115(2):371–379, 1992.
* [34] P. R. Halmos. A Hilbert space problem book. Springer-Verlag, New York, 2nd. edition, 1982.
* [35] R. A. Horn and C. R. Johnson. Matrix analysis. Cambridge University Press, Cambridge, second edition, 2013.
* [36] M. Hunziker, A. Martinez-Finkelshtein, T. Poe, and B. Simanek. On foci of ellipses inscribed in cyclic polygons. Preprint, 2021.
* [37] K. Kendig. A guide to plane algebraic curves, volume 46 of The Dolciani Mathematical Expositions. Mathematical Association of America, Washington, DC, 2011. MAA Guides, 7.
* [38] R. Kippenhahn. Über den Wertevorrat einer Matrix. Math. Nachr., 6:193–228, 1951.
* [39] R. Kippenhahn. On the numerical range of a matrix. Linear Multilinear Algebra, 56(1-2):185–225, 2008. Translated from the German by Paul F. Zachlin and Michiel E. Hochstenbach [MR0059242].
* [40] E. A. Kudryavtseva. Liouville integrable generalized billiard flows and theorems of Poncelet type. Fundam. Prikl. Mat., 20(3):113–152, 2015.
* [41] J. C. Langer and D. A. Singer. Foci and Foliations of Real Algebraic Curves. Milan Journal of Mathematics, 75(1):225–271, Nov. 2007.
* [42] Y. Last and B. Simon. Fine structure of the zeros of orthogonal polynomials. IV. A priori bounds and clock behavior. Comm. Pure Appl. Math., 61(4):486–538, 2008.
* [43] C.-K. Li. A simple proof of the elliptical range theorem. Proc. Amer. Math. Soc., 124(7):1985–1986, 1996.
* [44] M. S. Livshitz. On a certain class of linear operators in Hilbert space. Rec. Math. [Mat. Sbornik] N.S., 19(61):239–262, 1946.
* [45] M. Marden. Geometry of Polynomials, volume 3 of Math. Surveys. Amer. Math. Soc., Providence, R. I., 2nd. edition, 1966.
* [46] A. Martínez-Finkelshtein, B. Simanek, and B. Simon. Poncelet’s theorem, paraorthogonal polynomials and the numerical range of compressed multiplication operators. Adv. Math., 349:992–1035, 2019.
* [47] B. Mirman. UB-matrices and conditions for Poncelet polygon to be closed. Linear Algebra and its Applications, 360:123–150, 2003.
* [48] B. Mirman. Sufficient conditions for Poncelet polygons not to close. The American Mathematical Monthly, 112(4):351–356, Apr. 2005.
* [49] B. Mirman. Explicit solutions to Poncelet’s porism. Linear Algebra and its Applications, 436(9):3531–3552, May 2012\.
* [50] B. Mirman and P. Shukla. A characterization of complex plane Poncelet curves. Linear Algebra and its Applications, 408:86–119, Oct. 2005.
* [51] L. Moret-Bailly. Private communication, 2021.
* [52] V. Ovsienko, R. Schwartz, and S. Tabachnikov. The Pentagram Map: A Discrete Integrable System. Communications in Mathematical Physics, 299(2):409–446, June 2010\.
* [53] D. Pecker. Poncelet’s theorem and billiard knots. Geom. Dedicata, 161:323–333, 2012.
* [54] J.-V. Poncelet. Traité sur les propriétés projectives des figures. ProQuest LLC, Ann Arbor, MI, 1822.
* [55] J. Richter-Gebert. Perspectives on projective geometry. Springer, Heidelberg, 2011. A guided tour through real and complex geometry.
* [56] R. E. Schwartz. The pentagram map. Experimental Mathematics, 1:71–81, Jan. 1992.
* [57] R. E. Schwartz. The pentagram integrals for Poncelet families. Journal of Geometry and Physics, 87:432–449, Jan. 2015.
* [58] I. R. Shafarevich. Basic algebraic geometry. 1. Springer, Heidelberg, Russian edition, 2013. Varieties in projective space.
* [59] J. Siebeck. Ueber eine neue analytische Behandlungweise der Brennpunkte. J. Reine Angew. Math., 64:175–182, 1864.
* [60] B. Simon. Orthogonal Polynomials on the Unit Circle I and II, volume 54 of AMS Colloquium Publications. American Mathematical Society, Providence, RI, 2005.
* [61] M. Stoiciu. The statistical distribution of the zeros of random paraorthogonal polynomials on the unit circle. J. Approx. Theory, 139(1-2):29–64, 2006.
* [62] B. Sz.-Nagy, C. Foias, H. Bercovici, and L. Kérchy. Harmonic analysis of operators on Hilbert space. Universitext. Springer, New York, enlarged edition, 2010.
* [63] G. Szegő. Orthogonal Polynomials, volume 23 of Amer. Math. Soc. Colloq. Publ. Amer. Math. Soc., Providence, RI, fourth edition, 1975.
* [64] S. Tabachnikov. Kasner meets Poncelet. Math. Intelligencer, 41(4):56–59, 2019.
* [65] P. Y. Wu. Polygons and numerical ranges. American Mathematical Monthly, 107(6):528–540, 2000.
|
# Fluidisation of yield stress fluids under vibration
Ashish Garg1,2, Nico Bergemann2,3, Beccy Smith4, Matthias Heil2,3, Anne Juel∗1,2
1Department of Physics & Astronomy, The University of Manchester, Oxford Road,
Manchester M13 9PL, UK; 2Manchester Centre for Nonlinear Dynamics, The
University of Manchester, Oxford Road, Manchester M13 9PL, UK; 3Department of
Mathematics, The University of Manchester, Oxford Road, Manchester M13 9PL, UK;
4Mondelez International, Bournville Place, Birmingham B30 2LU, UK.
###### Abstract
Motivated by the industrial processing of chocolate, we study experimentally
the fluidisation of a sessile drop of yield-stress fluid on a pre-existing
layer of the same fluid under vertical sinusoidal oscillations. We compare the
behaviours of molten chocolate and Carbopol which are both shear-thinning with
a similar yield stress but exhibit very different elastic properties. We find
that these materials spread when the forcing acceleration exceeds a threshold
which is determined by the initial deposition process. However, they exhibit
very different spreading behaviours: whereas chocolate exhibits slow long-term
spreading, the Carbopol drop rapidly relaxes its stress by spreading to a new
equilibrium shape with an enlarged footprint. This spreading is insensitive to
the history of the forcing. In addition, the Carbopol drop performs large-
amplitude oscillations with the forcing frequency, both above and below the
threshold. We investigate these viscoelastic oscillations and provide evidence
of complex nonlinear viscoelastic behaviour in the vicinity of the spreading
threshold. In fact, for forcing accelerations greater than the spreading
threshold, our drop automatically adjusts its shape to remain at the yield
stress. We discuss how our vibrated-drop experiment offers a new and powerful
approach to probing the yield transition in elastoviscoplastic fluids.
## I Introduction
Yield-stress fluids are amongst the most common and versatile materials in
everyday life, ranging from food and healthcare products to concrete and crude
oil. Their relevance extends from the geophysical scale, e.g., mudslides and
lava flow Balmforth et al. (2014) to the microscale where, e.g., self-
assembled peptide gels are used as a vehicle for targeted drug delivery Gao et
al. (2017). These materials all exhibit a transition from solid to fluid-like
behaviour when subjected to stresses exceeding the material’s yield stress but
they may have widely different mesostructures, e.g. microgels such as
Carbopol, suspensions, emulsions Bonn et al. (2017). Industrial processing of
yield-stress materials often involves their temporary fluidisation under
mechanical stress in order to facilitate moulding, e.g., of chocolate or
concrete, and bottling and packaging, e.g. cosmetics and food products Coussot
(2005). In chocolate manufacture, vibration is routinely imposed at key stages
along the production line in order to shape or ensure uniform distribution of
the material Chevalley (1975); Bergemann (2015). However, vibrational
parameters such as amplitude, frequency and direction of forcing are typically
selected empirically. Thus, there is scope to optimise this process by gaining
a fundamental understanding of the mechanics of fluidisation under vibration.
In this paper, we study a configuration relevant to chocolate production
lines, namely the fluidisation under vertical oscillations of a drop of yield-
stress fluid initially at rest on a thin layer of the same material.
Thus far, the study of yield-stress fluid flows has focussed mostly on uniform
flows in rheometric and conduit geometries, flows in non-uniform vessels of
varying cross-section, transient flows such as gravity currents and spin-
coating processes and some elongational flows involving droplet formation
Coussot (2014). Thus, literature on the fluidisation of yield-stress fluids
under vibration is scarce. Flow enhancement and liquefaction of pastes and
complex fluids in vibrated pipes have been explored Deysarkar and Turner
(1981); Deshpande and Barigou (2001) and the effect of vertical oscillations
on a layer of complex fluid has generated growing interest in recent years.
For example, a signature of elasticity was identified in the viscoelastic
Faraday problem using a wormlike micelle solution Ballesta and Manneville
(2005). Convection-driven heap formation and cracking was observed in dense
slurries of bronze microspheres Schleier-Smith and Stone (2001) while
persistent holes were found to form in vibrated layers of cornstarch or glass
sphere suspensions Merkt et al. (2004). In yield-stress materials, Shiba et
al. (2007) observed convective motion in drops of shear-thinning polymer gels
when driven by vertical oscillations above a critical acceleration, which
resulted in organised patterns of periodic deformation of the drop. The onset
of this convective motion is governed by a constant ratio, marginally greater
than unity, of the vibrational stress due to inertia to the yield stress of
the material. Wolf et al. (2015) generated regular patterns of holes in a more
extended layer of Carbopol microgel subject to vertical vibration, which
persisted when the forcing was discontinued, returning the layer to a solid
state. These spatial features suggest that different modes of deformation of
the free-surface may be accessed depending on the vibrational parameters. The
mechanics of yielding under vibration, however, remain poorly explored.
In the last decade or so, intense rheological research coupled to flow
visualisation has provided new insight into yielding from a microscopic point
of view Bonn et al. (2017), and shown that the increase of stresses in the
material towards the yield threshold leads to its restructure, indicated by a
plethora of intriguing phenomena such as unsteady flows, creep, transient
shear-banding and slip at solid boundaries Balmforth et al. (2014); Divoux et
al. (2011); Coussot (2014). Understanding the detailed mechanics of yielding
is key to the development of accurate constitutive models, which can in turn
be applied to predict flow in a variety of practically-relevant
configurations. Such models may be derived from the microscopic dynamics
Fielding (2020) or the relationship between stress, strain and rate of strain,
formulated at a macroscopic scale to capture the mechanics observed
experimentally. Arguably the most successful constitutive model to date is the
Saramito model Saramito (2009), which treats the material as a linear
viscoelastic solid (with a characteristic relaxation time) for stresses below
yield and an elastoviscoplastic fluid with a Herschel-Bulkley viscosity above
yield. Direct experimental evidence of the role of viscoelasticity around the
yield point has only emerged recently Balmforth et al. (2014); Dinkgreve et
al. (2017); Ewoldt et al. (2008); Luu and Forterre (2009); Piau (2007) and
views diverge on whether a linear-to-nonlinear viscoelastic transition occurs
prior to reaching the yield threshold Fernandes et al. (2017).
In this paper, we investigate fluidisation of yield-stress materials under
vibration by comparing the behaviours of molten chocolate and Carbopol
microgel, which are both shear-thinning yield-stress fluids with the same
yield stress but exhibit widely different elastic properties. We perform
experiments on drops initially at equilibrium under gravity on a layer of the
same fluid, by oscillating the substrate vertically. Below the onset of
spreading, the chocolate drop remains at rest in the frame of reference of the
oscillating substrate whereas the Carbopol drop exhibits periodic viscoelastic
deformations at the frequency of the forcing. Although both chocolate and
Carbopol drops fluidise for similar values of the forcing acceleration, we
observe qualitatively different spreading behaviours. Whereas in the chocolate
drop an initial phase of rapid spreading is followed by slow, long-term
dynamics, the Carbopol drop swiftly spreads to a new equilibrium shape about
which it continues to oscillate viscoelastically at the forcing frequency,
featuring large-amplitude stretching and compression of the drop. This results
in much reduced spread of the Carbopol drop compared to the chocolate drop.
We also find that the threshold for the onset of spreading is dependent on the
process by which the initial drop shape is obtained. However, for each
subsequent increase in forcing acceleration, the vertically oscillated drop
automatically adjusts its shape by spreading in order to relieve its stress so
that it remains at the yield threshold. This implies that compared to the
standard procedures used for oscillatory rheometry Hyun et al. (2011) – the
most widely used methodology for the characterisation of non-Newtonian fluids
– our experimental setup provides unique access to the effects of nonlinear
viscoelasticity near the yield threshold. This is difficult in oscillatory
shear rheometry because for yield-stress fluids that are capable of undergoing
large viscoelastic deformations, the periodic shear strain imposed on the
sample can be absorbed in viscoelastic or plastic deformations. Hence the
oscillatory nature of the forcing ensures that even plastic deformations are
effectively reversed over each cycle of the forcing. Our measurements suggest
a reduction in the storage modulus of Carbopol at the yield point with
increasing forcing, which concurs with rheometric measurements Fernandes et
al. (2017); Varges et al. (2019); Giuseppe et al. (2015). We demonstrate that
our vibrated-drop setup offers a new and powerful experimental approach to
probing the yield transition in elastoviscoplastic fluids.
The outline of the paper is as follows. The experimental methods used to
create drops, oscillate them and image them are described in §II. Results are
presented in §III where we compare the behaviours of chocolate and Carbopol
drops under vibration in §III.1. We show that both materials fluidise at a
critical forcing acceleration in §III.2. In §III.3 we demonstrate that the
viscoplastic spreading of Carbopol is insensitive to its forcing history and
in §III.4, we investigate the viscoelastic oscillations of the Carbopol drop.
We summarise our findings in §IV and discuss the potential for our vibrated-
drop setup to act as a rheometer providing elastoviscoplastic material
properties near the yield point.
## II Experimental methods
A schematic diagram of the experimental system is shown in figure 1. A drop of
yield-stress fluid is at rest on a thin layer of the same material contained
within a circular trough on a rigid, horizontal substrate. We oscillate the
substrate sinusoidally so that the vertical displacement of the platform is
$z(t)=A\sin(2\pi ft)$, where $A$ and $f$ are the amplitude and frequency of
oscillation, respectively, and we monitor the evolution of the drop with side
and top-view cameras.
Figure 1: Schematic diagram of the experimental setup. The vertical
displacement of the oscillating platform from its equilibrium position is
given by $z(t)=A\sin(2\pi ft)$.
### II.1 Materials, drop deposition and substrate preparation
We studied two yield-stress fluids: (i) a commercially available, tempered
milk chocolate with a fat content of 30% (in molten form), and (ii) Carbopol
Ultrez-21 microgels (Lubrizol) with different concentrations.
Materials | $\rho$ (kg m$^{-3}$) | $\tau_{y}$ (Pa) | $K$ (Pa s$^{n}$) | $n$ | $G$ (Pa) |
---|---|---|---|---|---
Chocolate | 1,247 | $36.1\pm 0.6$ | $8.7\pm 0.02$ | $1$ | $\sim 44,000$ |
Carbopol (2.2 g/L) | 1,020 | $35\pm 5$ | $8.2\pm 2.0$ | $0.49\pm 0.02$ | 350 |
Carbopol (3 g/L) | 1,030 | $60\pm 6$ | $12\pm 3$ | $0.50\pm 0.02$ | 450 |
Carbopol (6 g/L) | 1,050 | $120\pm 10$ | $17\pm 4$ | $0.51\pm 0.02$ | 700 |
Table 1: Density and rheological properties of the commercially available,
tempered milk chocolate with 30% fat content, and Carbopol microgels of
different concentrations. The yield stress $\tau_{y}$, the power-law index $n$ and
the consistency $K$ were obtained by fitting a one-dimensional Herschel-Bulkley
model of the form $\tau=\tau_{y}+K|\dot{\epsilon}|^{n}$ ($\tau\geq\tau_{y}$)
Herschel and Bulkley (1926) to flow curves measured experimentally. In the
case of chocolate, the coefficients were obtained from a two-parameter fit to
the Bingham model ($n=1$) Bergemann et al. (2018). $G$ is the elastic shear
modulus of the material obtained with constant-shear rheometry for
$\tau<\tau_{y}$ in chocolate by Bergemann et al. (2018). For Carbopol we list
the values measured by Giuseppe et al. (2015); Garg et al. (2020) with
oscillatory shear rheometry in the limit of an approximately elastic response
($\tau\ll\tau_{y}$).
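As an illustration of how such a fit can be carried out, the sketch below recovers Herschel-Bulkley parameters from a synthetic, noise-free flow curve built with the 2.2 g/L Carbopol values in table 1. This is not the authors' analysis code: the function names and the synthetic data are ours, and a real fit would use measured $(\dot{\epsilon},\tau)$ pairs with experimental noise.

```python
import numpy as np
from scipy.optimize import curve_fit

def herschel_bulkley(rate, tau_y, K, n):
    """Stress as a function of strain rate for tau >= tau_y."""
    return tau_y + K * np.abs(rate) ** n

# Synthetic, noise-free flow curve using the 2.2 g/L Carbopol values of table 1.
rate = np.linspace(0.1, 100.0, 50)
stress = herschel_bulkley(rate, 35.0, 8.2, 0.49)

# Fit (tau_y, K, n); the bounds keep all three parameters non-negative.
popt, _ = curve_fit(herschel_bulkley, rate, stress,
                    p0=[10.0, 1.0, 0.5], bounds=(0.0, np.inf))
tau_y_fit, K_fit, n_fit = popt
```

On noiseless data the fit recovers the input parameters; with real flow curves the quoted uncertainties in table 1 reflect scatter between repeated measurements.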
The chocolate was tempered using a Model Prima tempering machine (FBM, Italy)
which delivered chocolate at its outlet at a temperature of $(28.2\pm
0.3)^{\circ}$C. The temper level was determined prior to deposition of each
drop onto the substrate, using an EXOTHERM 7400 temper meter (Systech
Analytics SA, Switzerland). The tempered chocolate has been extensively
characterised by Bergemann et al. (2018), who performed rheological
measurements using a Brookfield R/S-Plus (SST) rheometer with a four-bladed
vane spindle rotating in a large-diameter cylindrical glass beaker. In that
study, shear was applied cyclically by successively increasing and decreasing
the angular velocity or the torque with constant rate. Values of angular
velocity and torque were converted into shear rate $\dot{\epsilon}$ and stress
$\tau$, respectively. The dynamic yield stress $\tau_{y}$ was determined as
the value of decreasing stress at which the fluid came to rest. For
$\tau\geq\tau_{y}$, the resulting flow curve of stress as a function of strain
rate was found to be best fitted by a two-parameter Bingham model (see table 1
for parameter values). For $\tau<\tau_{y}$, the material deformed elastically,
and the measured elastic modulus was found to be moderate ($G\sim 700$ Pa)
prior to initial fluidisation, but increased to $G\sim 44,000$ Pa in
subsequent cycles, indicating the mesoscopic reorganisation of the material
following fluidisation.
We prepared three Carbopol solutions with different concentrations by adding
Carbopol Ultrez-21 powder in concentrations of 2.2 g/L, 3 g/L and 6 g/L to a
0.021 Molar NaOH solution ($pH=12.3$) at the laboratory temperature of 20 $^{\circ}$C.
The powder was initially dissolved by vigorous stirring with an electric whisk
for a few minutes, followed by intermittent manual stirring over a period of
$2-3$ hours in order to achieve homogeneous solutions. The resulting Carbopol
solutions had $pH\simeq 8.0\pm 0.5$. They were each placed in an airtight
container and rested for at least 3 days before they were used in the
experiments. Prior to experiments, we performed rheometric tests analogous to
those reported by Bergemann et al. (2018) for tempered chocolate, which we
present in Appendix A. They were carried out repeatedly over a period of
several months and did not reveal measurable aging of the material, consistent
with previous studies Varges et al. (2019); Giuseppe et al. (2015). The flow
curve was accurately captured by a Herschel-Bulkley constitutive relation and
the values of the fitting parameters are listed in Table 1.
The tempered chocolate and the 2.2 g/L Carbopol solution have a similar yield
stress of $\tau_{y}\simeq 35$ Pa. Table 1 also gives the values of the elastic
shear modulus below yield. For Carbopol, we list values from Giuseppe et al.
(2015); Garg et al. (2020) obtained with oscillatory shear rheometry in the
limit of an approximately elastic response for different Carbopol grades,
which range from $G\sim 350$ Pa to 700 Pa. This means that the elastic shear
modulus of chocolate is approximately two orders of magnitude larger than that
of the 2.2 g/L Carbopol solution.
Figure 2: Schematic diagram of the substrate assembly and illustration of the
procedure used to deposit the drop.
A schematic diagram of the experimental apparatus used to prepare the drop is
shown in figure 2(a). A square Perspex substrate plate of width 105 mm was
secured to a Perspex base plate with three finger-tight nylon screws. It
featured a circular trough with a diameter of $60.5$ mm and a depth of $1$ mm
where a uniform film of yield-stress material of depth $0.9\pm 0.1$ mm was
deposited prior to experimentation by initially overfilling the trough and
then slowly scraping off the excess fluid with a square-edged ruler at an
angle of around $30^{\circ}$, taking care to minimise the washboard
instability at the free surface Hewitt et al. (2012). The drop of yield-stress
fluid was deposited onto this thin layer by extrusion from a standard 10 mL
plastic syringe with a nozzle enlarged to an inner diameter of $8\pm 0.05$ mm.
To ensure reproducible deposition, we placed the syringe inside a vertical,
tightly-fitting, removable holder which was rigidly mounted on three vertical,
threaded poles above the substrate plate. We ensured the axisymmetry of the
deposited drop by levelling the entire assembly to less than $0.5^{\circ}$
from the horizontal using a digital inclinometer.
The syringe was partially filled with $4.5\pm 0.3$ mL of yield-stress fluid,
taking care to avoid trapping any air bubbles, and placed in the holder
(figure 2(a)) so that the end of the nozzle was 12 mm above the substrate. The
plunger of the syringe was then slowly pushed down manually for approximately
3 seconds to empty the syringe. The material spread onto the substrate
yielding a drop at equilibrium under gravity following deposition (figure
2(b)); see §II.4. Finally, the deposition assembly holding the syringe was
removed from the apparatus, which was then ready for the vibration experiments
(figure 2(c)).
### II.2 Oscillatory forcing
We used two experimental rigs to vibrate our sessile drop, as shown
schematically in figure 3. In Rig 1 the rotation of a DC brushless servo motor
(McLennan M644CI500L) is converted to sinusoidal translational motion via the
Scotch yoke mechanism (figure 3(a)), while in Rig 2 oscillations are applied
with an electromagnetic shaker (LDS V455) (figure 3(b)). Both pieces of
apparatus were bolted onto heavy steel tables with adjustable feet positioned
onto sand contained in four steel cylinders, in order to minimise the transfer
of vibration through the floor. Rig 1 was limited to moderate frequencies and
accelerations ($f\leq 10$ Hz, $A(2\pi f)^{2}\leq 5g$, where $g$ is the
acceleration of gravity), which were sufficient to perform the experiments on
tempered chocolate. In contrast, frequencies and accelerations of up to $f=50$
Hz and $A(2\pi f)^{2}=12.5g$ were applied in experiments with Carbopol using
Rig 2.
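The forcing strength quoted throughout is the peak acceleration $A(2\pi f)^{2}$ of the sinusoidal displacement $z(t)=A\sin(2\pi ft)$, usually expressed in units of $g$. A minimal helper (our own, purely illustrative) makes the conversion explicit and reproduces values quoted later, e.g. $a=1.58g$ for $A=8$ mm and $f=7$ Hz:

```python
import math

G = 9.81  # acceleration of gravity, m/s^2

def peak_acceleration_in_g(amplitude_mm, frequency_hz):
    """Peak acceleration A*(2*pi*f)^2 of z(t) = A*sin(2*pi*f*t), in units of g."""
    A = amplitude_mm / 1000.0            # convert mm to m
    return A * (2.0 * math.pi * frequency_hz) ** 2 / G

# Example: A = 8 mm, f = 7 Hz gives a peak acceleration of about 1.58 g.
a = peak_acceleration_in_g(8.0, 7.0)
```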
Figure 3: (a, b) Schematic diagrams of the two experimental rigs used to
impose the vertical, oscillatory motion of the substrate. (a) Rig 1: brushless
DC motor attached to a Scotch yoke mechanism. (b) Rig 2: Electromagnetic
shaker. (c, d) Time-series of the vertical position of the platform started
from rest showing the different ramp-up procedures available in the two
systems, which are applied over a time-interval $t_{\rm ramp}$ for (c) Rig 1,
(d) Rig 2.
#### II.2.1 Rig 1: brushless DC motor with Scotch yoke mechanism
Rig 1 (figure 3(a)) was originally designed for the tempered chocolate
experiments. Thus, the apparatus was enclosed in a temperature-controlled
cabinet which was held at $29\pm 0.5^{\circ}$C during experiments with tempered
chocolate and at the laboratory temperature of $20.2\pm 0.5^{\circ}$C during
experiments with Carbopol.
The substrate assembly described in §II.1 was adjustably mounted on three
threaded poles 20 mm above a square aluminium plate (thickness 8 mm and width
250 mm) which was coupled to the vertical shaft of a Scotch yoke mechanism.
The Scotch yoke was driven by a computer-controlled brushless DC motor via a
5:1 planetary gearbox (HPE70-S-5:1). For a constant rotation rate $2\pi f$ of
the motor shaft, the Scotch yoke produced a sinusoidal displacement of the
platform in the vertical direction. The frequency $f$ could be incremented
during an experimental run by varying the rotation rate of the motor but the
amplitude was set manually in the range $6\;\mathrm{mm}\leq A\leq
12.5\;\mathrm{mm}$ (to within $\pm 0.1$ mm) prior to switching the motor
on. The oscillations of the platform were calibrated to the input signal by
measuring the displacement of the platform with a Linear Voltage Differential
Transformer (LVDT, Solartron, Mach 1) mounted onto the platform, see figure
3(a). Accelerometer data (PCB Piezotronics) recorded at the centre of the
substrate plate indicated negligible harmonic content over the entire range of
frequency investigated ($f\leq 10$ Hz). A camera (Pulnix TM-6740CL) was
mounted on the laboratory wall to capture side view images of the drop which
was backlit by a halogen lamp, see §II.3.
#### II.2.2 Rig 2: electromagnetic shaker
To reach the larger platform accelerations required for the Carbopol
experiments, we used a permanent magnet shaker (LDS, V455) as shown in figure
3(b). The substrate assembly described in §II.1 was adjustably mounted on
three threaded poles 20 mm above a circular aluminium plate (thickness 8 mm
and diameter 250 mm) which was coupled to the shaker. The drop was lit
uniformly by two LED panels, a horizontal panel below the substrate assembly
and a vertical panel mounted on the laboratory wall. It was imaged using
cameras in side-view (Pulnix TM-6740CL) and top-view (DALSA HM1400).
The shaker was driven by a sinusoidal waveform generated in LabVIEW which was
converted to an analog output with a data acquisition board (NI PCI-6221) and
amplified to the power required by the shaker with a linear amplifier (LDS, PA
1000L). As in Rig 1, the oscillations of the platform were calibrated to the
input signal with an LVDT (Solartron, Mach 1) mounted onto the platform, see
figure 3(b). The maximum displacement amplitude imposed by the shaker in our
experiments was $A=9.5$ mm and the maximum frequency was $f=50$ Hz.
The waveform of the platform oscillation was sampled with an accelerometer
(PCB Piezotronics) mounted at the centre of the platform. Fast Fourier
Transform (FFT) of the accelerometer signal gave a percentage power of the
first harmonic relative to the fundamental mode of less than 0.2 % for all
amplitudes if $f\geq 30$ Hz, increasing to approximately 1.5 % for $A=2$ mm
and 3.2 % for $A=8$ mm at $f=10$ Hz. Thus for $f\leq 10$ Hz, we only used this
rig to perform experiments at small amplitudes $0.15\;\mathrm{mm}\leq A\leq
5.5\;\mathrm{mm}$ which were not accessible in Rig 1 and for which the
harmonic power ratio was less than 2%.
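The harmonic-content check described above amounts to comparing the spectral power at $2f$ with that at $f$. A possible sketch of such a check on a synthetic accelerometer trace (the sampling rate, duration and harmonic amplitude are ours, chosen so that both frequencies fall on exact FFT bins):

```python
import numpy as np

def harmonic_power_ratio(signal, sample_rate, f):
    """Percentage power of the first harmonic (2f) relative to the fundamental (f)."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    fundamental = power[np.argmin(np.abs(freqs - f))]
    harmonic = power[np.argmin(np.abs(freqs - 2.0 * f))]
    return 100.0 * harmonic / fundamental

# Synthetic trace: fundamental at f = 30 Hz plus a weak first harmonic.
fs, f = 1000.0, 30.0
t = np.arange(0.0, 2.0, 1.0 / fs)        # 2 s of data -> 0.5 Hz resolution
trace = np.sin(2 * np.pi * f * t) + 0.04 * np.sin(2 * np.pi * 2 * f * t)
ratio = harmonic_power_ratio(trace, fs, f)   # (0.04)^2 * 100 = 0.16 %
```

With windowing and averaging this is essentially what an FFT analysis of the accelerometer signal provides.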
#### II.2.3 Start-up and shut-down of the forcing
In each rig, start-up and shut-down protocols were required to allow the
system to reach its set oscillation parameters from rest or when the peak
acceleration was increased or decreased from a previous setting. Typical time
series of the vertical displacement of the platform starting from rest are
shown in figures 3(c, d) for the brushless DC motor (Rig 1) and the shaker
(Rig 2), respectively. The shut-down procedure is not shown but was the same
as the start-up procedure in reverse. In each graph, the duration of the
start-up phase is indicated by $t_{\rm ramp}$. In Rig 1, the amplitude is
constant and the rotation rate increases with an imposed angular acceleration of
either $20\pi$ rad s$^{-2}$ (chocolate experiments) or $40\pi$ rad s$^{-2}$ (Carbopol
experiments) towards its set frequency $f$, so that the ramping time is either
$t_{\text{ramp}}=0.1f$ s or $t_{\text{ramp}}=0.05f$ s. In figure 3(c), the
imposed angular acceleration is $40\pi$ rad s$^{-2}$ and $f=10$ Hz so that
$t_{\text{ramp}}=0.5$ s. In Rig 2, a fixed frequency was imposed by the input
signal but it was necessary to ramp up the platform oscillation amplitude in
order to avoid damaging the shaker. We followed the manufacturer’s
specifications and linearly increased the voltage to the amplifier until the
value associated with the chosen parameter values ($f$, $A$) was reached,
which led to ramp-up times $t_{\text{ramp}}\leq 0.8$ s. In figure 3(d), the
oscillating platform reaches $A=9$ mm in $t_{\text{ramp}}=0.3$ s.
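The Rig 1 ramp times follow from elementary kinematics: at constant angular acceleration $\alpha$, the motor reaches the rotation rate $2\pi f$ after $t_{\text{ramp}}=2\pi f/\alpha$. A one-line check (illustrative only) reproduces the value in figure 3(c):

```python
import math

def ramp_time(f_target_hz, angular_accel):
    """Time to reach rotation rate 2*pi*f at constant angular acceleration (rad/s^2)."""
    return 2.0 * math.pi * f_target_hz / angular_accel

# Figure 3(c): alpha = 40*pi rad/s^2 and f = 10 Hz give t_ramp = 0.5 s.
t_ramp = ramp_time(10.0, 40.0 * math.pi)
```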
### II.3 Image analysis
We used the top-view camera to check the axisymmetry of the drops with a
resolution of 13.4 pixels/mm. The mean ratio of minimum to maximum diameter of
the drops presented in this paper was
$D_{\mathrm{min}}/D_{\mathrm{max}}=0.986\pm 0.004$. In some experiments, we
inclined the camera by $30^{\circ}$ to capture the free-surface deformation of
the drop at rates between 20 and 40 frames per second (fps); see figure 9 in
§III.2. The side-view camera was levelled and aligned with the substrate in
its reference position, which was the maximum and mean height in Rigs 1 and 2,
respectively. Vertical cross-sections of the drop were imaged with a
resolution of 11.2 pixels/mm. The maximum difference in viewing angle when the
platform was at its maximum and minimum heights was less than $1^{\circ}$,
which led to differences in the measured height of a sessile drop of $<1$ %.
However, we measured the height of the drop relative to the height of the
substrate in the same vertical plane of view to mitigate the effects of
perspective arising due to the variation in the viewing angle. The side-view
camera recorded images at between 100 and 140 fps and we synchronised it to
the drive to record images stroboscopically at a specific phase of the
platform oscillation cycle.
A typical side-view image of a Carbopol drop is shown in figure 4(a). A MATLAB
routine based on the Canny algorithm was used to determine the contour of the
drop and the position of the horizontal top surface of the platform, see
figure 4(b). We measured the diameter $D$ of the drop $5$ pixels above the top
surface of the platform and defined the mean drop height $\bar{H}$ as the
local drop height $h(x)$ integrated over the central half of the drop,
$\bar{H}=\frac{2}{D}\int_{\frac{-D}{4}}^{\frac{D}{4}}h(x)dx.$
This metric was chosen in order to reduce the influence on the drop height of
the central material thread resulting from the deposition process, and allow
comparisons between chocolate and Carbopol, see figure 5. By analysing series
of successive images, we obtained time series of the vertical substrate
displacement $z(t)$, the drop height $\bar{H}(t)$ and the drop diameter
$D(t)$. The error from image analysis was on the order of a pixel, and thus in
the range from $1.0\%$ to $2.5\%$ for the drop height and from $0.5\%$ to
$1.5\%$ for the substrate displacement and drop diameter.
Figure 4: Processing of side-view images: (a) raw image; (b) contour extracted
using MATLAB. $h(x)$ is the local height of the drop, $D$ its diameter and the
mean height $\bar{H}$ is the integral of the drop height for $-D/4\leq x\leq
D/4$.
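The mean-height metric defined above is straightforward to evaluate numerically. The sketch below (our own illustration, not the authors' MATLAB routine) applies the trapezoidal rule to a parabolic test profile, for which the exact mean over the central half of the drop is $11H_{0}/12$:

```python
import numpy as np

def mean_height(x, h, D):
    """Mean drop height (2/D) * integral of h(x) over -D/4 <= x <= D/4,
    evaluated with the trapezoidal rule (a small tolerance on the mask
    ensures the floating-point interval endpoints are included)."""
    mask = np.abs(x) <= D / 4.0 + 1e-9
    xs, hs = x[mask], h[mask]
    integral = np.sum(0.5 * (hs[1:] + hs[:-1]) * np.diff(xs))
    return 2.0 * integral / D

# Parabolic test profile h(x) = H0*(1 - (2x/D)^2); the exact mean height
# over the central half of the drop is H0 * 11/12.
D, H0 = 30.0, 8.0                        # mm, comparable to figure 5(a)
x = np.linspace(-D / 2.0, D / 2.0, 2001)
h = H0 * (1.0 - (2.0 * x / D) ** 2)
Hbar = mean_height(x, h, D)
```

In the experiments $h(x)$ comes from the extracted contour, so $x$ and $h$ are pixel coordinates converted to mm.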
### II.4 Initial drop shape
Figure 5: Initial shapes of the drops: (a) chocolate, (b) Carbopol 2.2 g/L,
(c) Carbopol 3.0 g/L, (d) Carbopol 6.0 g/L. The processed images show the drop
interface and the platform edge (red line). The pink line is the mean height
of the drop averaged over its central half. The mean height and diameter of
each drop are: (a) $\bar{H}_{0}=8.1\pm 0.2$ mm, $D_{0}=30\pm 0.6$ mm;
(b) $\bar{H}_{0}=9.8\pm 0.3$ mm, $D_{0}=23.8\pm 0.6$ mm; (c) $\bar{H}_{0}=10.3\pm 0.3$ mm,
$D_{0}=22.4\pm 0.6$ mm; (d) $\bar{H}_{0}=11.1\pm 0.3$ mm, $D_{0}=19.6\pm 0.6$ mm.
Figure 5 shows the drop shapes associated with each material listed in table
1. We denote the mean height and diameter of the drops measured prior to
imposing the oscillatory forcing by $\bar{H}_{0}$ and $D_{0}$, respectively.
The chocolate drop (a) exhibits a more pronounced slump than the 2.2 g/L
Carbopol drop (b), which has a similar yield stress but an approximately 20%
smaller density (see table 1). The reduced spread of Carbopol drops at higher
concentrations is due primarily to the associated increase in yield stress.
The shape of the chocolate drop profile differs significantly from the
Carbopol drops (b–d) in that the thread of material that is pinched off upon
removal of the syringe collapses and merges into the drop, whereas for
Carbopol it remains erect.
## III Results
### III.1 Drop spreading under vibration
Figure 6: Evolution of the drop height of (a) chocolate, and (b) Carbopol (2.2
g/L) for different values of acceleration, where the forcing was imposed at
$t=0$. Data for chocolate is shown for $a=0.87g$ ($A=6$ mm and $f=6$ Hz),
$a=1.16g$ ($A=8$ mm and $f=6$ Hz), $a=1.18g$ ($A=6$ mm and $f=7$ Hz),
$a=1.58g$ ($A=8$ mm and $f=7$ Hz), $a=2.06g$ ($A=8$ mm and $f=8$ Hz),
$a=2.58g$ ($A=8$ mm and $f=9$ Hz), $a=2.61g$ ($A=10$ mm and $f=8$ Hz) and
$a=3.26g$ ($A=10$ mm and $f=9$ Hz). Data for Carbopol is for $a=0.35g$ (blue:
$A=5.5$ mm and $f=4$ Hz) and $a=6.40g$ (red: $A=5.5$ mm and $f=17$ Hz). In
(b), the maximum extension and compression are traced with solid lines of the
same colour (envelopes). The mean of the envelopes is shown with a solid black
line. The equilibrium heights $\bar{H}_{\mathrm{eq}}/\bar{H}_{0}$ measured
after interruption of the forcing in both experiments are denoted by dashed
black horizontal lines.
Figure 6 shows the time evolution of the drop height for (a) chocolate and (b)
Carbopol (2.2 g/L) from the instant substrate oscillations were imposed
($t=0$) for different values of the forcing acceleration. For low forcing
acceleration ($a\leq 1.16g$), the chocolate drop remains essentially
undeformed, with $\bar{H}/\bar{H}_{0}$ varying by less than $\pm 0.5\%$. This
is in contrast with the Carbopol drop which deforms periodically with the
substrate oscillations, with amplitudes on the order of 10% of the initial
drop height for $a=0.35g$. Once the forcing was interrupted, the Carbopol drop
returned to its initial height (dashed horizontal line) to within the
experimental resolution, thus indicating the absence of permanent deformation
(spreading). The lack of spreading of the drops under weak forcing suggests
that in this regime both chocolate and Carbopol behave as viscoelastic solids.
The absence of measurable oscillations in the chocolate drop is consistent
with its larger elastic modulus; see table 1.
In chocolate, the decrease of $\bar{H}/\bar{H}_{0}$ with time for $a\geq
1.18g$ indicates the spreading of the chocolate drop. For $a=1.18g$ this
reduction becomes apparent only for $t>6$ s which is much larger than the
start-up time $t_{\mathrm{ramp}}=0.7$ s. Conversely for the largest
acceleration ($a=3.26g$), spreading occurs much sooner (from $t\simeq 1$ s,
which is on the order of the start-up time, $t_{\mathrm{ramp}}=0.9$ s). For
all values of $a$, the spreading is sufficiently slow that the drop does not
reach a constant height by the end of the experimental recording of up to $12$
s. However, for $a\geq 2.06g$ this slow long-term spreading is preceded by a
rapid decrease in drop height during which the drop oscillates with the
frequency of the drive. The amplitude of these viscoelastic oscillations
diminishes as the drop spreads and falls below experimental resolution between
$t=4$ and $5$ s. Their transient nature is likely associated with the
mesoscopic reorganisation of the chocolate following fluidisation. This was
already observed by Bergemann et al. (2018) who found a considerable increase
(by a factor of 6) of the pre-yield elastic modulus of the material following
the first cycle of rheometric yield measurements. In the vibrated drop
experiment a larger number of the faster substrate oscillation cycles during
which the drop is momentarily fluidised may be required to achieve a similar
effect. This would be consistent with the observed progressive reduction in
oscillation amplitude. For the largest value of $a=3.26g$, an apparent initial
increase in mean height is visible because the extension of the drop is larger
than its compression during each cycle of the oscillation. Stroboscopic side-
view images of this experiment are shown in figure 7(a) to highlight the
spreading of the fluidised drop. The drop height is reduced by $50\%$ at $t=3$
s, and more than $70\%$ at $t=8$ s. Moreover, the shape of the initial drop
rapidly evolves to a smooth profile of uniformly positive curvature.
Figure 7: Snapshots of chocolate (a) and Carbopol (2.2 g/L) (b) drops
spreading under vertical oscillations, taken with the substrate in its mean
position. The processed images are similar to those shown in figure 5. (a)
Chocolate: $a=3.26g$, $A=10$ mm, and $f=9$ Hz. (b) Carbopol 2.2 g/L:
$a=3.25g$, $A=5.6$ mm, $f=12$ Hz.
A typical example of the spreading of a Carbopol drop is shown in figure 6(b)
for $a=6.40g$. The drop develops large-amplitude shape oscillations during the
ramp-up of the forcing, before rapidly spreading, while continuing to
oscillate, to reach a time-periodic state of constant mean value (indicated by
the solid black line) for $t\gtrapprox 3$ s. This time-periodic state is
similar to that observed at low forcing ($a=0.35g$), albeit with significantly
larger oscillation amplitude. It indicates that the stress in the drop has
decreased below $\tau_{y}$ so that the drop behaves as a viscoelastic solid.
Upon stopping the forcing, the drop relaxes to a new equilibrium height
indicated by the horizontal dashed line. Figure 6(b) shows that, as for the
chocolate drop, the mean height of the drop is larger than its equilibrium
height because the amplitude of the extensional deformation is larger than its
compressional deformation. This feature is also visible in the ramp-up
oscillations. We performed experiments for 13 values of acceleration up to
$a=8g$, corresponding to frequencies in the range $8\;\mathrm{Hz}\leq f\leq
23$ Hz, and found that although the extent of the spreading increased with
increasing forcing acceleration, the duration of the spreading remained
approximately constant with $t_{\rm spreading}=3.2\pm 0.8$ s.
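The envelope curves of figure 6(b) (per-cycle maxima and minima of the height signal, together with their mean) can be extracted with a simple per-period search. A minimal Python sketch, assuming a uniformly sampled height time series (function name ours):

```python
import numpy as np

def oscillation_envelopes(t, H, f):
    """Per-cycle maximum/minimum envelopes of a drop-height signal.

    Splits the signal into consecutive forcing periods 1/f and records
    the extrema of H within each period, mimicking the envelope curves
    of figure 6(b). Returns cycle mid-times, the upper and lower
    envelopes and the mean of the envelopes.
    """
    T = 1.0 / f
    n_cycles = int((t[-1] - t[0]) / T)
    tm, hi, lo = [], [], []
    for k in range(n_cycles):
        m = (t >= t[0] + k * T) & (t < t[0] + (k + 1) * T)
        if m.any():
            tm.append(t[0] + (k + 0.5) * T)
            hi.append(H[m].max())
            lo.append(H[m].min())
    hi, lo = np.array(hi), np.array(lo)
    return np.array(tm), hi, lo, 0.5 * (hi + lo)
```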
Figure 7 shows that spreading is considerably reduced in Carbopol compared
with chocolate. For $a=3.25g$, the final height reduction of the Carbopol drop
(after interruption of the forcing) is $12\%$, while in chocolate it exceeds
$70\%$ by $t=8$ s. A further increase of the substrate acceleration to
$a=6.40g$ (not shown) results only in a final height reduction of the Carbopol
drop of $36\%$. Moreover, the Carbopol drop retains its characteristic pointed
shape which suggests only partial fluidisation in contrast with chocolate.
Figure 8: Height of the chocolate drop relative to its initial height
(measured at $t=9.5$ s after imposition of the forcing) as a function of
frequency (a) and acceleration (b). Each data point corresponds to the mean of
at least 3 experiments and the error bar is the standard deviation.
Experiments with different amplitudes of forcing are denoted by different
coloured symbols: $A=6.0$ mm (red square), $A=8.0$ mm (black circle), $A=10.0$
mm (blue triangle). In (b), the vertical dashed black line indicates the
threshold acceleration beyond which axisymmetric spreading occurs. This
threshold was estimated by linear extrapolation of the data for $1.4\leq
a/g\leq 2.5$ to a unit non-dimensional height.
### III.2 Yield threshold
Figure 9: Equilibrium height of Carbopol (2.2 g/L) drops scaled by the initial
equilibrium drop height as a function of: (a) forcing frequency, (b) amplitude
and (c) acceleration. Measurements were taken after interruption of the
forcing imposed for $t>t_{\rm spreading}$. Each data point corresponds to the
mean of at least 3 experiments and the error bars are the standard deviations.
In (a), the forcing frequency is varied for fixed amplitudes (blue data):
$A=0.15$ mm (hexagon), $A=1.0$ mm (left-pointing triangle), $A=1.5$ mm
(circle), $A=2.5$ mm (right-pointing triangle), $A=4$ mm (diamond), $A=5.5$
mm (square), $A=12.5$ mm (downward-pointing triangle). In (b), the forcing
amplitude is varied for fixed frequencies (red data): $f=1-4$ Hz (downward-
pointing triangle), $f=10$ Hz (square), $f=13$ Hz (left-pointing triangle),
$f=17$ Hz (circle), $f=27$ Hz (upward-pointing triangle), $f=45$ Hz (diamond).
In (c), different regimes are illustrated by side-view snapshots of the
Carbopol drop. The onset of spreading $a_{\rm c(Carb)}/g=1.59\pm 0.2$ is
calculated by extrapolating the data in the axisymmetric spreading region to a
scaled drop height of unity. A transition to higher-wavenumber oscillations
occurs for $a/g=8.0\pm 0.2$, and non-axisymmetric spreading and rupture of the
drop sets in for $a/g=10.9\pm 0.3$.
In order to determine the forcing threshold beyond which chocolate and
Carbopol (2.2 g/L) drops spread, we performed experiments for a range of
frequencies and amplitudes of forcing. For chocolate we measured the height of
the drop at $t=9.5$ s after imposition of the forcing, by which time
oscillations had decayed and most of the spreading had occurred. For Carbopol
we measured the final height of the drop by stopping the forcing for $t>t_{\rm
spreading}$, i.e., once the drop had spread to reach a new oscillatory state
of constant mean height; see §III.1.
Results for chocolate are shown as a function of frequency in figure 8(a).
Different coloured symbols are used to indicate the three datasets with
different amplitudes of forcing, which are displaced from each other along the
frequency axis. Figure 8(b) shows that the data collapses onto a master curve
when plotted as a function of the forcing acceleration. For sufficiently small
accelerations, the drop does not exhibit measurable spreading. Once the
forcing acceleration exceeds a threshold value, $\bar{H}(t=9.5\;{\rm
s})/\bar{H}_{0}$ decreases steeply with the acceleration. The threshold value
for the onset of spreading, indicated with a vertical dashed line, was
determined by linearly extrapolating the data for $1.4\leq a/g\leq 2.5$ to
yield a value of $a_{\rm c(choc)}/g=1.28\pm 0.1$.
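The extrapolation used to determine the onset amounts to a linear fit of the scaled height against acceleration, solved for the acceleration at which the fitted line crosses unity. A minimal sketch (function name ours):

```python
import numpy as np

def yield_threshold(a_over_g, h_ratio):
    """Critical acceleration from linear extrapolation to unit height.

    Fits h_ratio = c0 + c1 * (a/g) over the supplied post-onset data
    and returns the acceleration at which the fitted line crosses
    h_ratio = 1, as done for figure 8(b) with data in 1.4 <= a/g <= 2.5.
    """
    c1, c0 = np.polyfit(a_over_g, h_ratio, 1)   # slope, intercept
    return (1.0 - c0) / c1
```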
Similar data is reported for Carbopol in figure 9. The equilibrium height
$\bar{H}_{\mathrm{eq}}/\bar{H}_{0}$ is shown as a function of frequency (a)
and amplitude (b) of forcing, corresponding to series of experiments performed
at constant amplitude by varying the frequency, and at constant frequency by
varying the amplitude, respectively. As for the chocolate experiments,
individual data sets are displaced with respect to each other along the
horizontal axis. Figure 9(c) shows that the data approximately collapses again
onto a master curve when plotted as a function of the forcing acceleration. As
before, the threshold forcing acceleration for the onset of spreading was
determined by linearly extrapolating the data for accelerations in the range
$1.6\leq a/g\leq 8$, yielding $a_{\rm c(Carb)}/g=1.59\pm 0.2$.
Beyond this acceleration, the drop rapidly spread to a new equilibrium shape
about which it continued to oscillate. In fact, the experiments were performed
by incrementally increasing either frequency or amplitude of forcing and
recording the new equilibrium at each step by temporarily interrupting the
forcing. For accelerations in the range $1.6\leq a/g\leq 8.0$, the data points
suggest a continuum of decreasing equilibrium heights corresponding to
axisymmetric drops without spatial features. For $8.0\pm 0.2\leq a/g\leq
10.9\pm 0.3$, the drop oscillations remained axisymmetric but gained
additional spatial features by transitioning to higher spatial modes. Finally,
for $a/g\geq 10.9\pm 0.3$, the oscillating drop lost its axisymmetry and
exhibited localised rupture; see images in figure 9(c). We did not investigate
these regimes in further detail.
A summary of all chocolate and Carbopol experiments is provided in figure 10
where the critical forcing acceleration is plotted as a function of yield
stress (see table 1). We note that Carbopol (2.2 g/L) and chocolate, which
have very similar values of the yield stress, also exhibit similar values of
the critical acceleration. We find that the critical acceleration increases
approximately proportionally to the yield stress as shown by the solid blue
line, which is a one-parameter linear fit to the data. This confirms that the
yield stress uniquely determines the onset of spreading in our experiments.
However, the existence of a critical acceleration for the onset of spreading
is in itself significant. If prior to the imposition of oscillatory forcing,
the drop had reached its equilibrium by slumping due to gravity forces alone,
we would expect the maximum stress in the drop to have relaxed to $\tau_{y}$
at equilibrium. Hence, any additional acceleration imposed during the
oscillatory forcing cycle would be sufficient to fluidise the drop.
Observation of a critical acceleration suggests that the initial stress
distribution within the drop must be significantly below yield because an
additional acceleration greater than that of gravity has to be imposed in
order for the drop to spread. This implies that the constant of
proportionality between critical acceleration and yield stress shown in figure
10 is set by the process by which the drop is generated. The vibrated-drop
experiment can therefore be used to determine the yield stress of other
materials based on measured critical acceleration values, provided that the
same drop deposition process is retained and that the initial drop shapes are
similar.
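The one-parameter fit of figure 10 minimises the squared residuals of $a_{\rm c}/g=k\,\sigma_{\rm y}$; for a fit through the origin the least-squares slope has the closed form $k=\sum x_{i}y_{i}/\sum x_{i}^{2}$. A minimal sketch (function name ours):

```python
import numpy as np

def proportional_fit(tau_y, a_c_over_g):
    """One-parameter least-squares fit a_c/g = k * tau_y through the origin.

    Minimising sum_i (a_c/g - k*tau_y)^2 gives k = sum(x*y) / sum(x^2),
    the form of the solid line in figure 10.
    """
    x = np.asarray(tau_y, dtype=float)
    y = np.asarray(a_c_over_g, dtype=float)
    return np.dot(x, y) / np.dot(x, x)
```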
Figure 10: Critical forcing acceleration required to spread drops of different
yield-stress fluids (see table 1) as a function of their yield stress. The
solid line indicates a least-square linear fit to the data of the form
$a_{\mathrm{c}}/g\propto\sigma_{\mathrm{y}}$.
### III.3 Dependence of Carbopol spreading on the history of forcing
Figure 11: Influence of the forcing history on the spreading of a Carbopol
drop (2.2 g/L), starting from the sessile configuration shown in figure 5. The
final forcing is $a=8.00g$ in each panel. This forcing is applied directly in
(a), and following 1, 3 and 6 increments in (b–d). After each period of
spreading, the forcing was interrupted to measure the new equilibrium shape of
the drop (inset images). The envelopes of the drop oscillations are shown with
solid lines of the same colour as the data, while the mean of the envelopes is
shown with a yellow solid line. The successive equilibrium heights of the drop
are indicated with dashed lines. The red arrows highlight that all forcing
protocols result in the same final equilibrium drop heights (and profiles).
We showed in §III.1 that Carbopol drops spread when the value of the forcing
acceleration exceeds a critical threshold. We now turn to investigating the
effect of forcing history upon the final equilibrium shape adopted by the drop
post-spreading.
Figure 11 shows four spreading experiments on a Carbopol drop (2.2 g/L)
starting from the sessile shape of height $\bar{H}_{0}$ shown in §II.4. In (a)
the drop was subjected to a forcing acceleration $a=8.0g$ for the entire
duration of the experiment. In (b–d), this acceleration was reached via 1, 3
and 6 acceleration increments, respectively. The forcing was temporarily
interrupted after each increment in order to record the new equilibrium height
$\bar{H}_{\mathrm{eq}}$ of the drop. Despite these different experimental
procedures, the value of $\bar{H}_{\mathrm{eq}}/\bar{H}_{0}$ following
interruption of the forcing at $a=8.0g$ is the same in all experiments, as
indicated by the dashed horizontal lines on the pink experimental data
(highlighted by red arrows). This indicates that spreading of the drop is not
sensitive to the forcing history. It suggests that upon exceeding the critical
acceleration, the Carbopol drop rapidly reaches a new equilibrium shape about
which it oscillates periodically while the stress remains below the yield
stress. Because this new drop equilibrium is at its yield threshold, any
subsequent increase in forcing acceleration causes the drop to spread further
to another flatter equilibrium shape. Further experiments with forcing
accelerations in the range $1.70g\leq a\leq 9.68g$ and between 1 and 14
acceleration increments (not shown) confirmed that the final equilibrium
height of the drop is indeed independent of the time-history of forcing to
within experimental resolution provided that all experiments start from the
same initial sessile drop shape.
Figure 12: Spreading of a Carbopol drop (2.2 g/L) for $a=4.83g$ ($f=10$ Hz and
$A=12$ mm) when the forcing was applied continuously (a) and intermittently
(b). In (b), the duration of the intermittent bursts of oscillatory forcing
are less than $t_{\mathrm{spreading}}\simeq 3.2$ s required for the droplet to
reach an oscillatory state about a new equilibrium. Each time the forcing is
interrupted, the drop settles into a new equilibrium state. The total duration
of the intermittent oscillations during which spreading occurs in (b) is
approximately equal to the continuous spreading time in (a), with small
effects of start-up and shut-down of the forcing visible in (b).
We also examined the effect of applying forcing in short, successive bursts of
5 periods of oscillation of the substrate. This meant that the forcing was
interrupted before the drop had spread to a new equilibrium shape. The
comparison in figure 12 between continuous forcing (a) and short bursts (b) at
$a=4.83g$ shows that spreading occurs either continuously or intermittently,
but that the drop reaches the same long-term equilibrium (highlighted by the
black arrows) about which the drop can oscillate. Remarkably, the cumulative
time required to spread to the new equilibrium while the forcing is applied is
approximately the same ($\simeq 3$ s) in both cases, although in figure 12(b)
the ramp-up and ramp-down of the forcing result in small alterations of the
forcing signal.
In a final set of experiments, we investigated the influence of the position
at which the substrate came to rest on the equilibrium shape of the drop
reached in figure 12. We imposed approximately two periods of oscillation of
the substrate (with additional ramp-up and ramp-down) at $a=4.83g$, starting with
the substrate either in its neutral position, its bottom position or its top
position. We then ensured that the substrate came to rest in each of these
three positions. Figure 13 shows that the ultimate equilibrium shape of the
drop reached after interruption of the forcing was the same irrespective of
the forcing protocol.
Figure 13: Comparison between the drop shape upon interruption of the forcing
and its final equilibrium shape after relaxation. The height of the drop
normalised by its equilibrium height is plotted as a function of the
normalised radius. The blue and red lines show two instantaneous drop shapes
at the end of the substrate motion. They both relax to the same final
equilibrium shape (black).
### III.4 Viscoelastic oscillations of the Carbopol drop
Figure 14: Time-periodic deformation of a Carbopol drop (2.2 g/L) under
vertical vibration during one period of oscillation $1/f$ for (a)
$a=0.86g<a_{c}$ ($f=5$ Hz and $A=8.5$ mm) and (b) $a=7.11g>a_{c}$ ($f=14$ Hz,
and $A=9$ mm), after the drop has reached a state of constant mean height
following spreading. The processed images show the drop interface and the
platform edge (red line). The dashed blue lines, which indicate the height and
diameter of the initial drop at $t=0$, highlight the compression and extension
of the drop over the period of oscillation.
We have established that upon exceeding a critical forcing acceleration,
Carbopol drops fluidise and rapidly spread until they reach a new equilibrium
drop shape about which they continue to oscillate. These observations suggest
that the change in equilibrium drop shape due to spreading results in a
reduction of the stress within the drop to below the yield stress, so that
Carbopol recovers the behaviour of a viscoelastic solid. In this section, we
investigate the viscoelastic oscillations of the drop about the different
equilibrium states that are reached as the oscillatory forcing is increased.
Figure 14 shows typical Carbopol (2.2 g/L) drop shapes over one period of the
oscillation for $a<a_{c}$ (a) and $a>a_{c}$ following the initial spreading
phase (b). The maximum variation in the height of the drop over the period of
the oscillation increases significantly with the imposed acceleration. Whereas
the height oscillation is barely visible for $a=0.86g<a_{c}$ (figure 14(a)),
when $a=7.11g>a_{c}$ (figure 14(b)) the drop extends significantly above its
equilibrium height (dashed blue horizontal line) and reduces its diameter
below its equilibrium value (e.g., at $t=0.38f^{-1}$). Conversely, during
compression the height change is smaller but an increase in the diameter is
still clearly visible ($t=0.90f^{-1}$). In addition, the phase of the drop’s
response differs for the two values of oscillatory forcing in that the drop
extends vertically with the rising platform for $a>a_{c}$ while it compresses
with the rising platform for $a<a_{c}$.
Figure 15: Time series of the height of a Carbopol drop (2.2 g/L) relative to
its equilibrium height (blue) for increasing values of forcing acceleration.
Each time series represents multiple cycles of oscillation over one period
$f^{-1}$ of the substrate oscillation which is shown in red in the top panel
(in terms of the normalised substrate displacement as a function of
dimensionless time $tf$). The forcing amplitude is $A=5.5$ mm. The frequencies
are (a) $f=3$ Hz, (b) $f=4$ Hz, (c) $f=5$ Hz, (d) $f=6$ Hz, (e) $f=7$ Hz, (f)
$f=8$ Hz, (g) $f=9$ Hz, (h) $f=11$ Hz, (i) $f=12$ Hz, (j) $f=13$ Hz, (k)
$f=14$ Hz, (l) $f=15$ Hz, (m) $f=17$ Hz, (n) $f=19$ Hz and (o) $f=22$ Hz. Note
the increase in vertical axis range in successive rows.
Figure 16: Comparison between the normalised Fast Fourier Transform of the
time series of drop height shown in figure 15 (blue) and the vertical
displacement of the platform (red). Forcing parameters are the same as in
figure 15.
Figure 17: Lissajous curves of the data presented in figure 15, obtained by
plotting the height of the drop relative to its equilibrium height as a
function of the vertical displacement of the platform. Forcing parameters are
the same as in figure 15.
Figure 15 shows a sequence of time series of the drop height (blue) for
increasing values of the forcing acceleration. Multiple cycles are plotted
over a single period of the sinusoidal forcing, which is shown in red in the
top panel. For the lowest accelerations (a, b), the oscillations of the
viscoelastic solid drop are approximately sinusoidal and lag approximately
$180^{\circ}$ behind the position of the substrate. For $a<a_{c}$, the peak-
to-peak amplitude of the drop oscillations appears to vary only weakly with
forcing acceleration. However, figures 15(c–f) indicate a weakening of the
fundamental mode of frequency $f$, and the appearance of higher frequency
modes in the signal as the forcing acceleration is increased towards the yield
threshold. This change in the drop response is quantified in figure 16: the
normalised Fast Fourier Transforms (FFT) of the time series shown in figure 15
reveal harmonics up to $5f$. Figure 16 indicates that higher harmonics first
appear in (c) and their contribution increases considerably in (d–f) so that
the amplitude of the first harmonic of frequency $2f$ increases and approaches
that of the fundamental mode of frequency $f$ as the forcing acceleration
reaches $a=1.42g<a_{c}$. Although the harmonic content of the signal
progressively decreases as $a$ increases above $a_{c}$, the first harmonic
persists up to large forcing accelerations (g–n) and becomes indistinguishable
from the signal noise only for $a=10.7g$ (o). The rich harmonic content of the
drop response near the yield threshold suggests significant nonlinear
viscoelastic effects in this region.
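The harmonic content shown in figure 16 amounts to reading the FFT amplitude of the height signal at integer multiples of the forcing frequency. A minimal sketch, assuming uniform sampling (function name ours):

```python
import numpy as np

def harmonic_amplitudes(t, H, f, n_harm=5):
    """Spectral amplitudes of the drop-height signal at harmonics k*f,
    normalised by the fundamental (k = 1), as displayed in figure 16."""
    dt = t[1] - t[0]
    spec = np.abs(np.fft.rfft(H - H.mean()))
    freqs = np.fft.rfftfreq(len(H), dt)
    amps = np.array([spec[np.argmin(np.abs(freqs - k * f))]
                     for k in range(1, n_harm + 1)])
    return amps / amps[0]
```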
Returning to the time-series of figure 15, the gradual reduction in the
amplitude of the fundamental mode up to $a=1.42g$ (f) is reversed in (g) (note
the change in the vertical axis range) where it begins to strengthen again but
with a phase-shift of approximately $100^{\circ}$ compared with (a–f). A
pictorial illustration of this evolution is provided by the Lissajous curves
in figure 17, where the drop height relative to its equilibrium value is
plotted as a function of the vertical displacement of the substrate. In (a–c),
the maximum and minimum drop heights occur for the minimum and maximum
vertical substrate positions, respectively. The limit cycle (which appears as
a line in (a) due to the small amplitude of the drop oscillations) then
rotates anti-clockwise with increasing forcing acceleration towards an
approximately horizontal structure (d–g). This rotation continues in (h–l) so
that the maximum drop extension and compression occur near the maximum and
minimum substrate heights, respectively. The strong second harmonic component
of the signal means that the cycle is a distorted figure of eight (g–j), which
evolves into a simple cycle as the second and higher harmonics subside. In
(m,n), the maximum compression of the drop still occurs at the minimum
vertical substrate position but the maximum extension occurs closer to the
neutral substrate position. In (o) the cycle has reshaped so that the maximum
compression of the drop occurs increasingly close to the neutral position of
the substrate and the response of the drop to the forcing is now close to
sinusoidal again.
Figure 18: Dependence on forcing acceleration of (a) the phase difference
between drop oscillations and vertical substrate acceleration, (b) the maximum
extension of the drop. Each symbol corresponds to a fixed amplitude of
oscillation, so that increasing values of forcing acceleration are achieved by
increasing the frequency of forcing. The error bars in (b) indicate the
standard deviation of at least three experiments.
We summarise the dependence of the viscoelastic oscillations of Carbopol drops
(2.2 g/L) on the forcing acceleration in figure 18. Figure 18(a) shows the
phase lag of the fundamental harmonic of the oscillatory drop response
relative to the substrate acceleration, which was determined from the FFT
data. The phase angle increases steeply from values near $0^{\circ}$ to nearly
$180^{\circ}$ on the approach to the yield threshold as previously observed in
figures 15 and 17.
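The phase lag of figure 18(a) can be read from the FFT coefficients of the height and displacement signals at the forcing frequency. A sketch, assuming sinusoidal forcing so that the substrate acceleration is exactly $180^{\circ}$ out of phase with its displacement (function name ours):

```python
import numpy as np

def phase_lag_deg(t, H, z, f):
    """Phase of the drop's fundamental response relative to the substrate
    acceleration, read from FFT coefficients at the forcing frequency f.

    For sinusoidal forcing the substrate acceleration is proportional to
    -z(t), i.e. 180 degrees out of phase with the displacement z.
    """
    dt = t[1] - t[0]
    freqs = np.fft.rfftfreq(len(t), dt)
    k = np.argmin(np.abs(freqs - f))        # spectral bin of the fundamental
    Hk = np.fft.rfft(H - H.mean())[k]
    ak = -np.fft.rfft(z - z.mean())[k]      # acceleration has the phase of -z
    lag = np.angle(Hk) - np.angle(ak)
    return np.degrees((lag + np.pi) % (2 * np.pi) - np.pi)  # wrap to [-180, 180)
```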
Figure 18(b) shows that the maximum extension of the drop increases
monotonically with forcing acceleration. The maximum compression of the drop
follows a similar relation (not shown). The maximum extension (and the smaller
compression) provide measures of the maximum strain in the drop, by accounting
for the decrease in equilibrium height of the drop which occurs through
spreading as the forcing acceleration is increased.
## IV Discussion and conclusion
We have studied experimentally the response to sinusoidal vertical
displacement of drops of molten chocolate and Carbopol initially at rest on a
thin layer of the same fluid. These shear-thinning yield-stress fluids have
very different pre-yield elastic moduli, up to a factor 100 larger for
chocolate than for Carbopol, because of their different mesostructures.
Carbopol is also more strongly shear-thinning with a steeper decrease of its
viscosity to lower values than chocolate. We adjusted the Carbopol
concentration so that its yield stress matched that of chocolate and found
that drops of both materials first spread for approximately the same value of
forcing acceleration $a_{c}$. For $a<a_{c}$, the chocolate drop is at rest in
the frame of reference of the oscillating substrate while the Carbopol drop
undergoes viscoelastic stretching and compression periodically with the
forcing. For $a\geq a_{c}$, rapid initial spreading of the chocolate drop
gives way to long-term slower motion, whereas in Carbopol, the drop rapidly
relaxes its stress by spreading to a new equilibrium shape of larger
footprint, which continues to undergo large-amplitude viscoelastic stretching
and compression. Similar viscoelastic oscillations are also observed
transiently in the chocolate drop for sufficiently large forcing but they
become weaker as the drop spreads. This reduction in overall strain is
consistent with an increase of the elastic modulus, which is known to occur in
chocolate over successive fluidisation cycles through mesoscopic
reorganisation of the material (Bergemann et al. 2018). The strong
viscoelastic effects observed in Carbopol result in a striking reduction in
spreading: for $a=3.25g$, the total height reduction of the Carbopol drop is
12% compared with 70% for the chocolate drop after 8 s, which then only
spreads weakly at larger times; see figure 6.
The existence of a forcing threshold for the onset of spreading in our
experiment indicates that the initial drop shape at equilibrium under gravity
(reached after deposition from the syringe) has a stress distribution below
the yield stress. We found that the threshold for the onset of spreading is
uniquely determined by a critical forcing acceleration $A(2\pi f)^{2}$ and is
proportional to the yield stress. In practice, this means that our
experimental setup can be used to measure the yield stress of materials
provided that prior calibration of the rate of increase of forcing
acceleration with yield stress is performed with a fluid of known yield
stress. Our threshold criterion differs from that of experiments at much
larger forcing (Shiba et al. 2007), where yield phenomena occur above a
critical velocity of forcing, i.e. when the stress $\rho(2\pi fA)^{2}$ exerted
by the substrate due to inertia of the material exceeds the yield stress.
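The two criteria can be contrasted numerically; the material parameters used below are purely illustrative, not measured values from table 1:

```python
import numpy as np

def yields_by_acceleration(A, f, a_c):
    """Criterion of the present experiments: spreading occurs when the
    forcing acceleration A*(2*pi*f)^2 exceeds the critical value a_c."""
    return A * (2 * np.pi * f) ** 2 > a_c

def yields_by_inertial_stress(A, f, rho, tau_y):
    """Velocity-based criterion of Shiba et al. (2007): yielding occurs
    when the inertial stress rho*(2*pi*f*A)^2 exceeds tau_y."""
    return rho * (2 * np.pi * f * A) ** 2 > tau_y
```

With illustrative values $\rho=1000$ kg m$^{-3}$, $\tau_{y}=100$ Pa and $a_{c}=15.6$ m s$^{-2}$ (all assumed here), a small-amplitude, high-frequency forcing ($A=1$ mm, $f=25$ Hz) exceeds the acceleration threshold while remaining far below the inertial-stress one, showing that the two criteria select different regions of the $(A,f)$ plane.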
Figure 19: Estimate of the variation of the absolute value of the effective
complex modulus $|G_{\mathrm{eff}}|$ of Carbopol as a function of forcing
acceleration. Figure 20: Estimate of the variation as a function of forcing
acceleration of the effective storage (a) and loss moduli (b), which are the
real and imaginary parts of the effective complex modulus of Carbopol and are
calculated using the data in figure 18.
Remarkably, we found that the spreading of Carbopol drops did not depend on
the history of forcing. We were able to interrupt and resume forcing as well
as increase the forcing acceleration with multiple increments without
affecting the final equilibrium drop shape. If the forcing was discontinued
before a new equilibrium drop shape had been reached, spreading towards this
equilibrium state continued when reapplying the forcing. This confirms that
spreading continues until the stress within the drop has reduced to the yield
stress, which is indicated by a new equilibrium drop shape undergoing large
amplitude viscoelastic oscillations.
Hence, the Carbopol drop (and presumably also the chocolate drop)
automatically adjusts its shape to remain at the yield stress for increasing
values of the forcing acceleration $a\geq a_{c}$. Our experiment is therefore
uniquely tailored to investigate the rheology of the material at its yield
stress. Although our experimental evidence suggests that after each increase
in forcing acceleration the material returns to a viscoelastic solid state
following short transients, we cannot exclude the existence of recirculating
viscoplastic flows within the drop. Shiba et al. (2007) observed free-standing
convection rolls in vibrated drops of a different elastoviscoplastic gel in
addition to viscoelastic oscillations, but the convective flow was associated
with time scales much longer than the period of the forcing. We also find a
complex nonlinear viscoelastic response of the drop to the oscillatory forcing
near $a_{c}$: a steep increase in the phase angle of the fundamental
oscillation mode and oscillations at higher harmonic frequencies.
Increasing the forcing acceleration increases the amplitude of the drop
oscillations about the equilibrium state and thus the strain in the drop,
without exceeding the yield stress. We use the data of figure 18(b) to
estimate the absolute value of the effective complex modulus of the Carbopol
microgel as
$|G_{\mathrm{eff}}|=\frac{\tau_{y}}{(\bar{H}_{\mathrm{max}}-\bar{H}_{\mathrm{eq}})/\bar{H}_{\mathrm{eq}}},$
which is plotted as a function of forcing acceleration in figure 19. We also
decompose the effective complex modulus into the storage and loss moduli
$G^{\prime}_{\mathrm{eff}}$ and $G^{\prime\prime}_{\mathrm{eff}}$ in figure 20
using the phase angle shown in figure 18(a). The most striking feature of
these plots is the monotonic decrease within our parameter range of
$|G_{\mathrm{eff}}|$ and $G^{\prime}_{\mathrm{eff}}$. Remarkably, the values
of $G^{\prime}_{\mathrm{eff}}$ and $G^{\prime\prime}_{\mathrm{eff}}$ follow
similar qualitative trends to the storage and loss moduli of Carbopol
microgels as functions of either imposed strain or stress, measured by
oscillatory-shear rheometry Fernandes et al. (2017); Varges et al. (2019);
Giuseppe et al. (2015). Although the values obtained for $a<a_{c}$
overestimate $|G_{\mathrm{eff}}|$ because the stress within the drop is less
than the yield stress, $|G_{\mathrm{eff}}|$ is of the correct order of
magnitude of hundreds of Pa (see table 1). Hence, our vibrated drop offers a
route to simple measurements of the nonlinear viscoelastic properties of the
material at the yield stress. We also find that the strain in the drop
eventually reaches a maximum with increasing forcing acceleration for $a\geq
10.9g$ where the drop ruptures. We therefore conclude that our vibrated-drop
setup offers unique features to help deepen insight into the rheology of
elastoviscoplastic fluids at their yield threshold and could act as a simple
rheometer for fluids undergoing large viscoelastic deformations. In order to
deepen understanding of the drop behaviour, it would be particularly
interesting to explore whether the experimental dynamics reported in this
paper can be reproduced numerically using an appropriate constitutive model
such as the Saramito model Saramito (2009) extended to include nonlinear
viscoelasticity.
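As a minimal numerical sketch of the effective-modulus estimate introduced above, the following Python snippet evaluates $|G_{\mathrm{eff}}|$ and its decomposition into storage and loss moduli. The drop heights and phase angle are hypothetical placeholder values, not the measured data of figures 18–20:

```python
import numpy as np

# Hypothetical inputs (illustrative only, not the measured data):
tau_y = 35.0                # dynamic yield stress [Pa], table 1 value (2.2 g/L)
H_eq = 1.0                  # equilibrium scaled drop height
H_max = 1.12                # maximum scaled drop height during oscillation
phase = np.deg2rad(25.0)    # assumed phase angle of the fundamental mode

# |G_eff| = tau_y / strain, with strain = (H_max - H_eq) / H_eq.
strain = (H_max - H_eq) / H_eq
G_eff = tau_y / strain

# Storage (real) and loss (imaginary) parts of the effective complex modulus.
G_storage = G_eff * np.cos(phase)
G_loss = G_eff * np.sin(phase)
```

For these placeholder numbers the estimate lands in the hundreds of Pa, consistent with the order of magnitude quoted in the text.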
## Appendix A Quasi-steady shear rheometry
We performed quasi-steady shear-rheometry measurements of the yield stress and
viscosity curves of the Carbopol microgels used in our experiments by applying
the methodology of Bergemann et al. (2018). We used a Brookfield R/S-Plus
(SST) rheometer with a four-bladed vane spindle (blade height of $20\pm 0.01$
mm) rotating in a stationary cylindrical glass beaker of inner radius $42.5\pm
0.25$ mm, filled with Carbopol microgel to a height of $50\pm 2$ mm. We refer
the interested reader to Bergemann et al. (2018) for details of the
experimental protocol and associated parameter values.
In order to measure the yield stress, we imposed cycles of linearly increasing and
decreasing torque and measured the angular velocity. The shear stress $\tau$
was readily calculated from the imposed torque and the geometry of the
spindle, with a correction for spindle end-effects applied by adding a virtual
length to the actual spindle length Bergemann et al. (2018). Figure 21(a) shows
the measured angular velocity of the spindle as a function of the applied
shear stress for the 2.2 g/L Carbopol microgel. The period of the forcing
cycle was $240$ s and the maximum stress applied in each cycle was 160 Pa. The
error bars correspond to standard deviations over 5 consecutive cycles of 4
repetitions of the experiments.
Figure 21: Measured angular velocity as a function of shear stress in Carbopol
microgel (2.2 g/L). Shear stress was imposed periodically through linear ramp-
up and down phases indicated in (a) by red and blue symbols and in (b) by
circles and squares, respectively. (a) Cycle period of $\tilde{T}=240$ s and
ramping rate $\alpha=1.33$ Pa/s. (b) Ramping rates and cycle periods of
$\alpha=0.83$ Pa/s, $\tilde{T}=240$ s (blue), $\alpha=1.00$ Pa/s,
$\tilde{T}=240$ s (green), $\alpha=1.16$ Pa/s, $\tilde{T}=240$ s (magenta),
$\alpha=2.67$ Pa/s, $\tilde{T}=120$ s (cyan). The data shown corresponds to
mean values over 4 repetitions of individual experiments of 5 cycles,
and the error bars are the standard deviations from the mean. The vertical
black line in (b) indicates the value of the dynamic yield-stress $\tau_{y}$
of the material.
During the ramp-up phase (red symbols), a modest increase of the angular
velocity $\omega$ for $\tau\lesssim 30$ Pa is followed by a steep and smooth
increase beyond this threshold. Large fluctuations in $\omega$ occur for
$\tau\lesssim 30$ Pa, which were absent in chocolate Bergemann et al. (2018).
For $\tau\lesssim 30$ Pa, the Carbopol microgel deforms as a solid, and the
large variability in the rotation rate measurements can be attributed to
viscoelasticity and stick-slip of the Carbopol microgel on the spindle walls
Ovarlez et al. (2013); Balmforth et al. (2014); Dinkgreve et al. (2017);
Birren and Reber (2019). The ramp-down phase (blue symbols) shows a smooth
decrease of $\omega$ which retraces the red curve to within 5%. The blue
dataset continually steepens with decreasing stress and is thus associated
with a minimum stress value required to maintain flow, known as the dynamic
yield stress, $\tau_{y}=35\pm 3$ Pa, which we obtained by extrapolation of the
blue dataset.
Figure 21(b) shows that our results are insensitive to the details of the
ramping cycle. Experiments performed with different rates of change of the
applied stress, $0.83\leq\alpha\leq 2.67$ Pa/s, and cycle periods of either
$120$ or $240$ s all collapse approximately onto a master curve above the yield
stress. The hysteresis between the ramp-up and ramp-down parts of the forcing
cycle is on the order of the measurement error, which suggests that thixotropy
is negligible within our parameter range. The dynamic yield stress estimated
from all the experiments performed with the 2.2 g/L Carbopol microgel is
$\tau_{y}=35\pm 5$ Pa and the values of yield stress obtained with the same
method for the other Carbopol microgels used in our experiments are listed in
table 1 in the main text.
In order to measure the viscosity curve of the Carbopol microgel post-yield,
we performed cyclic rheometric measurements by imposing either torque or
angular velocity. We converted angular velocity into shear rate by
approximating our rheometric setup by a circular Couette flow geometry. We
solved the associated inverse problem for the unknown shear rate function
$\dot{\epsilon}(\tau)$ numerically, where the yield stress $\tau_{y}$ was
unknown and had to be determined as part of the solution Yeow et al. (2000);
Bergemann et al. (2018). The resulting values of $\tau_{y}$ were within $5\%$
of the values measured directly from figure 21. The benefit of this method is
that it does not require any prior assumption about a specific rheological
model.
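To illustrate the relation being inverted, the sketch below evaluates the forward wide-gap Couette map from a Herschel-Bulkley shear-rate function to the spindle angular velocity. The inner/outer radii are assumed for illustration; the table 1 parameters for the 2.2 g/L microgel are reused, and only the forward map (not the regularized inversion of Yeow et al.) is shown:

```python
import numpy as np

# Table 1 parameters (2.2 g/L) and an assumed vane/beaker geometry.
tau_y, K, n = 35.0, 8.2, 0.49    # yield stress [Pa], consistency, index
Ri, Ro = 0.010, 0.0425           # hypothetical inner/outer radii [m]

def gamma_dot(tau):
    """Herschel-Bulkley shear-rate function: zero below the yield stress."""
    return np.where(tau > tau_y, (np.maximum(tau - tau_y, 0.0) / K)**(1.0 / n), 0.0)

def omega(tau_i, npts=2000):
    """Forward wide-gap Couette map: omega = 0.5 * int_{tau_o}^{tau_i} gd(t)/t dt,
    with tau_o = tau_i * (Ri/Ro)**2 the stress at the outer wall."""
    tau = np.linspace(tau_i * (Ri / Ro)**2, tau_i, npts)
    f = gamma_dot(tau) / tau
    return 0.5 * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(tau))  # trapezoid rule

omegas = [omega(t) for t in (30.0, 60.0, 120.0)]  # below and above tau_y
```

Below the yield stress the predicted rotation rate is exactly zero; the inverse problem solved in the appendix recovers $\dot{\epsilon}(\tau)$ from the measured $\omega$ instead.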
Figure 22: Constitutive behaviour of Carbopol microgel (2.2 g/L) during
constant-rate ramp-down of either torque or angular velocity. The data is
averaged over 4 experiments of 5 cycles and the error bars indicate the
standard deviation from the mean. Ramping rates were applied in the range
$0.83\leq\alpha\leq 2.67$ Pa/s. Variation with shear rate of (a) the shear
stress, and (b) the viscosity. The blue lines indicate a two-parameter fit to
the data of the Herschel-Bulkley model $\tau=\tau_{y}+K|\dot{\epsilon}|^{n}$,
using the value of $\tau_{y}$ previously determined. Inset: Log–log plot
of the same graph.
Figure 22(a) shows that shear stress increases monotonically as a function of
shear rate. Each symbol corresponds to the mean of 4 repetitions of ramp-down
experiments (including 5 consecutive cycles) and the error bars indicate the
standard deviation from the mean. As in figure 21(b), the data is insensitive
to different ramping rates applied in the range $0.83\leq\alpha\leq 2.67$
Pa/s. The Carbopol microgel is fluid during ramp-down and its flow curve of
shear stress as a function of shear rate is accurately captured by a Herschel-
Bulkley model, consistent with the literature (Putz and Burghelea, 2009;
Varges et al., 2019). Carbopol is shear-thinning as indicated by the monotonic
decrease of its viscosity $\displaystyle\mu=\tau/\dot{\epsilon}$ with
increasing shear rate shown in figure 22(b). The straight line on the log-log
plot in the inset indicates the power-law behaviour of the Herschel-Bulkley
model. A two-parameter fit to the Herschel-Bulkley model gave consistency and
shear incidices of $K=8.2$ Pa sn and $n=0.49\pm 0.02$ for 2.2 g/L Carbopol
microgel. Results for different concentrations of the Carbopol microgels are
listed in table 1 in the main text.
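A minimal sketch of such a two-parameter fit on synthetic data generated from the table 1 values (not the actual rheometry measurements). With $\tau_{y}$ fixed at the independently measured dynamic yield stress, the fit reduces to linear least squares in log space:

```python
import numpy as np

# Synthetic Herschel-Bulkley data, tau = tau_y + K*rate**n, using the
# table 1 values (K = 8.2 Pa s^n, n = 0.49) plus small multiplicative noise.
tau_y, K_true, n_true = 35.0, 8.2, 0.49

rng = np.random.default_rng(0)
rate = np.logspace(-2, 2, 50)      # shear rates [1/s]
tau = tau_y + K_true * rate**n_true * np.exp(0.01 * rng.standard_normal(rate.size))

# With tau_y known: log(tau - tau_y) = log K + n*log(rate),
# so the slope gives n and the intercept gives log K.
n_fit, logK_fit = np.polyfit(np.log(rate), np.log(tau - tau_y), 1)
K_fit = np.exp(logK_fit)

viscosity = tau / rate             # shear-thinning: decreases with shear rate
```

The recovered indices match the generating values, and the monotonically decreasing viscosity reproduces the shear-thinning trend of figure 22(b).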
## References
* Ballesta and Manneville [2005] P. Ballesta and S. Manneville. Signature of elasticity in the Faraday instability. _Phys. Rev. E_ , 71:026308, 2005.
* Balmforth et al. [2014] N. J. Balmforth, I. A. Frigaard, and G. Ovarlez. Yielding to stress: recent developments in viscoplastic fluid mechanics. _Annu. Rev. Fluid Mech._ , 46:121–146, 2014.
* Bergemann [2015] N. Bergemann. Fluidisation of chocolate under vibration. _PhD thesis_ , 784:1–299, 2015.
* Bergemann et al. [2018] N. Bergemann, M. Heil, B. Smith, and A. Juel. From elastic deformation to flow in tempered chocolate. _J. Rheol._ , 62(5):1187–1195, 2018.
* Birren and Reber [2019] T. Birren and J. E. Reber. The impact of rheology on the transition from stick-slip to creep in a semibrittle analog. _J. Geophys. Res.: Solid Earth_ , 124(3):3144–3154, 2019.
* Bonn et al. [2017] D. Bonn, M. M. Denn, L. Berthier, T. Divoux, and S. Manneville. Yield stress materials in soft condensed matter. _Rev. Mod. Phys._ , 89:035005, 2017.
* Chevalley [1975] J. Chevalley. Rheology of chocolate. _J. Texture Stud._ , 6(2):177–196, 1975.
* Coussot [2005] P. Coussot. _Rheometry of pastes, suspensions, and granular materials: applications in industry and environment_. John Wiley & Sons, 2005.
* Coussot [2014] P. Coussot. Yield stress fluid flows: A review of experimental data. _J. Non-Newton. Fluid_ , 211:31–49, 2014.
* Deshpande and Barigou [2001] N. S. Deshpande and M. Barigou. Vibrational flow of non-newtonian fluids. _Chem. Eng. Sci._ , 56(12):3845–3853, 2001.
* Deysarkar and Turner [1981] A. K. Deysarkar and G. A. Turner. Flow of paste in a vibrated tube. _J. Rheol._ , 25(1):41–54, 1981.
* Dinkgreve et al. [2017] M. Dinkgreve, M. M. Denn, and D. Bonn. “Everything flows?”: elastic effects on startup flows of yield-stress fluids. _Rheol. Acta_ , 56(3):189–194, 2017.
* Divoux et al. [2011] T. Divoux, C. Barentin, and S. Manneville. Stress overshoot in a simple yield stress fluid: An extensive study combining rheology and velocimetry. _Soft Matter_ , 7:8409–8418, 2011.
* Ewoldt et al. [2008] R. H. Ewoldt, A. E. Hosoi, and G. H. McKinley. New measures for characterizing nonlinear viscoelasticity in large amplitude oscillatory shear. _J. Rheol._ , 52:1427–1458, 2008.
* Fernandes et al. [2017] R. R. Fernandes, D. E. Andrade, A. T. Franco, and C. O. R. Negrão. The yielding and the linear-to-nonlinear viscoelastic transition of an elastoviscoplastic material. _J. Rheol._ , 61:893–903, 2017.
* Fielding [2020] S. M. Fielding. Elastoviscoplastic rheology and aging in a simplified soft glassy constitutive model. _J. Rheol._ , 64:723–738, 2020.
* Gao et al. [2017] J. Gao, C. Tang, M. A. Elsawy, A. M. Smith, A. F. Miller, and A. Saiani. Controlling self-assembling peptide hydrogel properties through network topology. _Biomacromolecules_ , 18:826–834, 2017.
* Garg et al. [2020] A. Garg, A. Juel, and M. Heil. The spreading of an elastoviscoplastic drop under vibration. In preparation, 2020.
* Giuseppe et al. [2015] E. D. Giuseppe, F. Corbi, F. Funiciello, A. Massmeyer, T. Santimano, M. Rosenau, and A. Davaille. Characterization of carbopol® hydrogel rheology for experimental tectonics and geodynamics. _Tectonophysics_ , 642:29 – 45, 2015.
* Herschel and Bulkley [1926] W. Herschel and R. Bulkley. Measurement of consistency as applied to rubber-benzene solutions. _Proc ASTM Part II_ , 26(82):621–633, 1926.
* Hewitt et al. [2012] I. J. Hewitt, N. J. Balmforth, and J. N. McElwaine. Granular and fluid washboards. _J. Fluid Mech._ , 692:446–463, 2012.
* Hyun et al. [2011] K. Hyun, M. Wilhelm, C. O. Klein, K. S. Cho, J. G. Nam, K. H. Ahn, S. J. Lee, R. H. Ewoldt, and G. H. McKinley. A review of nonlinear oscillatory shear tests: Analysis and application of large amplitude oscillatory shear (LAOS). _Prog. Poly. Sci._ , 36:1697–1753, 2011.
* Luu and Forterre [2009] L.-H. Luu and Y. Forterre. Drop impact of yield-stress fluids. _J. Fluid Mech._ , 632:301–327, 2009.
* Merkt et al. [2004] F. S. Merkt, R. D. Deegan, D. I. Goldman, E. C. Rericha, and H. L. Swinney. Persistent holes in a fluid. _Phys. Rev. Lett._ , 92:184501, 2004.
* Ovarlez et al. [2013] G. Ovarlez, S. Cohen-Addad, K. Krishan, J. Goyon, and P. Coussot. On the existence of a simple yield stress fluid behavior. _J. Non-Newton. Fluid_ , 193:68–79, 2013.
* Piau [2007] J. M. Piau. Carbopol gels: Elastoviscoplastic and slippery glasses made of individual swollen sponges meso- and macroscopic properties, constitutive equations and scaling laws. _J. Non-Newton. Fluid_ , 144:1–29, 2007.
* Putz and Burghelea [2009] A. M. V. Putz and T. I. Burghelea. The solid–fluid transition in a yield stress shear thinning physical gel. _Rheol. Acta_ , 48(6):673–689, 2009.
* Saramito [2009] P. Saramito. A new elastoviscoplastic model based on the Herschel–Bulkley viscoplastic model. _J. Non-Newton. Fluid_ , 158:154–161, 2009.
* Schleier-Smith and Stone [2001] J. M. Schleier-Smith and H. A. Stone. Convection, heaping, and cracking in vertically vibrated granular slurries. _Phys. Rev. Lett._ , 86:3016–3019, 2001.
* Shiba et al. [2007] H. Shiba, J. E. Ruppert-Felsot, Y. Takahashi, Y. Murayama, Q. Ouyang, and M. Sano. Elastic convection in vibrated viscoplastic fluids. _Phys. Rev. Lett._ , 98:044501, 2007.
* Varges et al. [2019] P. Varges, C. M Costa, B. S Fonseca, M. F Naccache, and P. de Souza Mendes. Rheological characterization of Carbopol® dispersions in water and in water/glycerol solutions. _Fluids_ , 4(1):3, 2019.
* Wolf et al. [2015] J. Wolf, S. Dungan, M. McCarthy, V. Lim, and R. Phillips. Vibration-induced geometric patterns of persistent holes in Carbopol gels. _J. Non-Newton. Fluid_ , 220:99–107, 2015.
* Yeow et al. [2000] Y. L. Yeow, W. C. Ko, and P. P. P. Tang. Solving the inverse problem of Couette viscometry by Tikhonov regularization. _J. Rheol._ , 44:1335, 2000.
# Rate-Energy Balanced Precoding Design for SWIPT based Two-Way Relay Systems
Navneet Garg, Junkai Zhang, and Tharmalingam Ratnarajah Authors are with
Institute for Digital Communications, The University of Edinburgh, Edinburgh,
EH9 3FG, UK (e-mails: {ngarg, jzhang15, t.ratnarajah}@ed.ac.uk).
This work was supported by the UK Engineering and Physical Sciences Research
Council (EPSRC) under grant number EP/P009549/1.
###### Abstract
The simultaneous wireless information and power transfer (SWIPT) technique is a
popular strategy to convey both information and RF energy for harvesting at
receivers. In this regard, we consider a two-way relay system with multiple
users and a multi-antenna relay employing SWIPT strategy, where splitting the
received signal leads to a rate-energy trade-off. In the literature,
transceiver designs have been obtained using computationally intensive and
suboptimal convex-relaxation-based schemes. In this paper, we study the
balanced precoder design using chordal distance (CD) decomposition, which
incurs much lower complexity, and is flexible to dynamic energy requirements.
We show that, for a given non-negative value of the CD, the achieved harvested
energy for the proposed balanced precoder is higher than that for the perfect
interference alignment (IA) precoder. The corresponding loss in sum rates is
also analyzed via an upper bound. Simulation results further show that the IA
schemes based on mean-squared error are better suited for the SWIPT
maximization than
the subspace alignment-based methods.
###### Index Terms:
Simultaneous wireless information and power transfer (SWIPT); two-way relay;
rate-energy balanced precoding design; interference alignment; chordal
distance.
## I Introduction
With the increasing demands from a large number of devices in the 5G and
beyond wireless networks, severe interference and unnecessary power
consumption are inevitable [1]. Due to this fact, their key performance
metrics (e.g., sum rate and bit error rate) are restricted, especially for
battery operated devices [2]. A satisfactory solution is to harvest energy
from the received RF signal to provide a stable and long-term power supplement
[3]. The experimental results in [4] show that a few microwatts of RF power
can be harvested from broadcasting signals of TV stations located several
kilometers away. Thus, wireless energy harvesting (EH) system has been
employed for energy-constrained devices, such as implantable sensors and smart
wearables [5]. Further, since RF signals also carry information in wireless
networks, simultaneous wireless information and power transfer (SWIPT) technology has
attracted great attention in many different scenarios [6].
One such scenario of interest is a two-way relay (TWR) system for relaying
information between users located at different sides of the relay. Under the
full-duplex (FD) operation at the relay node, studies [7, 8, 9, 10, 11] focus
only on the sum rate maximization, since self-interference cancellation
requires active circuits, causing more power consumption. Thus, the SWIPT-
based relay systems have been widely studied under the half-duplex (HD)
operation in different scenarios such as non-orthogonal multiple access (NOMA)
[12], mmWave with hybrid precoding [13], massive MIMO [14], wireless edge
caching [15], Internet-of-things (IoT) [16], cognitive-radio networks [17],
secrecy systems [18], unmanned aerial vehicle (UAV) [13] and more [19, 20]. In
these systems, two-way relay design is presented with variations such as
single/multi-relay systems [21], single-hop/multi-hop systems [22], and with
amplify-and-forward (AF)/decode-and-forward (DF) relaying [23].
Among these works, a brief review of single-hop AF relaying approaches is
given as follows. In [2], for the multiple-input multiple-output (MIMO) SWIPT-
based AF-TWR systems, hybridized power-time splitting ratios and precoders are
obtained via the maximization of convexified bounds on the sum rate. In [24,
25], transceivers and splitters are designed via semi-definite relaxation
(SDR) based convex problems for finite constellation symbols. For DF relaying
in [23], power allocation and splitting ratios are computed via the formulated
convex problem. In [26], both source and relay are designed using successive
convex approximation. In [27, 28, 29], energy efficiency is optimized with
respect to joint source and relay power allocation via relaxed and convexified
objective. In [30], asymptotic bit error rate is analyzed for space-shift-
keying modulation, while [31] analyzes outage probability to verify the
similar diversity order for SWIPT as in the non-SWIPT case. In [32, 33],
dynamic and asymmetric splitting ratios are computed via iterative Dinkelbach-
based algorithm to solve non-convex problem. In these works, first,
transceivers are designed in a suboptimal manner. Second, they focus only on
sum rate, although SWIPT model is adopted for energy harvesting. Thus, in the
SWIPT model, an effective rate-energy trade-off needs more investigation.
Further, since two-way relay causes interference at receivers, interference
management approaches are also studied. In [34], for DF relaying, a
non-cooperative game is formulated to utilize the relay resource, where each user
maximizes its own rate in an interference channel. Interference among the
SWIPT-relay-assisted channels is mitigated via NOMA in [35] and a closed-form
transmitter design is provided for a fixed splitting ratio. In [36], beamforming
is obtained for different relaying protocols (AF/DF) via suboptimal convex
relaxations. In [37], joint source and relay transceivers are designed to
maximize the energy efficiency via convex approaches. Thus, for these works
also, the suboptimal transceiver design and the focus on sum rates somewhat
defeats the purpose of SWIPT operation. Regarding interference management,
interference alignment (IA) has been a popular method for a decade [38, 39,
40, 41]. In the typical IA for MIMO interference channels (ICs), precoders for
two interfering transmitters are designed to align the interfering signals at
the receiver subspace, and the receiver can use a decoder (a linear combiner)
to null the aligned interference [42, 43]. The following works use IA in the
SWIPT-relay system to mitigate the interference. In [44], IA is used to
improve the harvested energy, while keeping transmissions secure by
introducing artificial noise. In [45], a two-stage splitting scheme is derived
with IA to maximize the sum rate. For MIMO broadcast channels [46], a
block-diagonalization-based method is used to improve the sum rates. For massive MIMO
system, asymptotic SINR is analyzed for a signal-space-alignment method in
[47]. Therefore, the study of IA with a general two-way relay system is
lacking towards a low-complexity design with better rate-energy trade-offs.
### I-A Contributions
In this paper, we consider the system with a SWIPT based two-way relay serving
multiple user nodes, who wish to communicate to other users via the relay
under the HD operation. First, to avoid processing at the relay to reduce
power consumption, the AF relaying is adopted, using which precoders and
decoders are computed by modifying an IA algorithm based on minimum mean-
squared error (MMSE) [43]. With the perfect IA precoders obtained, we provide
a systematic process to get the balanced precoder using chordal distance (CD)
decomposition to improve the harvested energy. Via rate-energy trade-off, it
is observed that improvement in harvested energy leads to reduction in sum
rates, which can be decided using the CD value. Maximum harvested energy-based
precoders are also obtained and the corresponding rate loss is analyzed. In
simulations, rate-energy regions are plotted for different precoding methods
for different CD values. These results show that a better rate-energy trade-
off can be obtained, as compared to the other transceiver designs. The
contributions can be summarized as follows:
* •
TWR-IA algorithm: Since the effective end-to-end channel includes the relay
processing matrix, it is challenging to find an optimum precoding scheme, as
in an iterative IA algorithm the effective channel varies with each iteration.
From our experiments, it turns out that AF provides the best end-to-end sum
rate. Further, since an IA method cannot be directly applied here, the
required modifications lead to the different precoders and decoders
expressions, which in turn are formulated into the TWR-IA algorithm.
* •
Balanced precoding: In order to improve the harvested energy at the relay, the
CD decomposition is used to compute the balanced precoder for rate-energy
trade-off, which can provide higher energy while keeping the expected rate-
loss constant proportional to the CD value. In other words, the desired sum
rate reduction can be specified via the CD values and the splitting ratio to
obtain higher energies.
* •
Analysis and simulations: Maximum achievable energy, rate loss, and harvested
energy bounds are obtained to verify and analyze the proposed balanced
precoding. Further, simulations with two different IA methods show the better
sum rates for the TWR-IA algorithm, and the better trade-offs for the balanced
precoding with respect to different CD values.
The rest of the paper is organized as follows: the symmetric two-way relay IC
system model is given in Section II, followed by an energy optimized precoding
method and a rate-energy balanced precoding design algorithm in Section III
and IV, respectively. Simulation results are shown in Section V. A brief
conclusion of this work is presented in Section VI.
#### Notations
$\mathcal{B}$, $\mathbf{B},\mathbf{b}$, $b$ represent a set, a matrix, a
vector, and a scalar, respectively. The notations $\mathbf{B}^{H}$,
$\mathbf{B}^{-1}$, $\mathbf{B}(m,n)$, $\lVert\mathbf{B}\rVert_{F}$,
$\|\mathbf{B}\|$, $|\mathbf{B}|$, $\Re\text{tr}(\mathbf{B})$ and
$\nu_{1:b}[\mathbf{B}]$ are the Hermitian transpose, the inverse of
$\mathbf{B}$, the $(m,n)^{th}$ value of the matrix $\mathbf{B}$ (also denoted
by $\left[\mathbf{B}\right]_{m,n}$), the Frobenius norm, spectral norm, the
determinant, the real part of trace, and $\nu_{1:b}[\mathbf{B}]$ denotes the
first $b$ dominant eigenvectors of $\mathbf{B}$, respectively.
$\lVert\mathbf{b}\rVert_{2}$ denotes the $l_{2}$-norm of $\mathbf{b}$.
$\mathrm{Cov}(\mathbf{b})=\mathbb{E}\left\\{\mathbf{b}\mathbf{b}^{H}\right\\}$
is the covariance matrix of zero mean vector $\mathbf{b}$, where
$\mathbb{E}\\{\cdot\\}$ is the expectation operator.
$\mathcal{D}(\mathbf{B}_{1},\mathbf{B}_{2})$ denotes a block diagonal matrix
with $\mathbf{B}_{1}$ and $\mathbf{B}_{2}$ as its block diagonal components.
$\mathcal{CN}(b,\mathbf{B})$ represents a circularly symmetric complex
Gaussian random vector with mean $b$ and covariance matrix $\mathbf{B}$.
$\mathbb{O}[\cdot]$ denotes the orthonormal operator, which can be obtained
from QR decomposition, and $\mathbf{I}_{K}$ is a $K\times K$ identity matrix.
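As a small illustration of the operator $\mathbb{O}[\cdot]$ from the notation list, the sketch below realizes it via NumPy's reduced QR decomposition (function and variable names are ours, for illustration only):

```python
import numpy as np

# O[B]: orthonormal basis whose columns span the column space of B,
# obtained from the (reduced) QR decomposition, as noted in the text.
def orthonormalize(B):
    Q, _ = np.linalg.qr(B)        # reduced QR: Q has B.shape[1] columns
    return Q

rng = np.random.default_rng(1)
B = rng.standard_normal((6, 3)) + 1j * rng.standard_normal((6, 3))
Q = orthonormalize(B)             # satisfies Q^H Q = I_3
```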
## II System Model
Consider a symmetric TWR IC [48] with $2K$ user nodes, as shown in Figure 1.
Figure 1: Illustration of $K$-user pairs in the TWR system.
Each of the $2K$ nodes equipped with $M$ antennas wants to transmit $d$ data
streams to its paired user node via an $R$-antenna TWR. The destination (or
source) of the $k^{th}$ source (or destination) is indexed by
$k^{\prime}=\text{mod}\left(k+K-1,2K\right)+1$, for $k=1,\ldots,2K$. We assume
direct links between sources and destinations are unavailable, which usually
occurs when the direct link is blocked due to long-distance path loss or
obstacles [28, 49]. We use $(M,R,d)^{2K}$ to denote the setting of this
symmetric TWR IC. With the HD relaying, the communication period is divided
into two phases, namely, multiple access (MAC) and broadcast (BC) phases. For
simplicity, the energy-constrained relay node is operated under the AF mode
[50]. As in [28], we assume that the channel varies slowly enough so that it
can be perfectly estimated by training sequences or feedback.
### II-A MAC phase
In the MAC phase, each of $2K$ users transmits its $d$ data streams to the
relay, leading to the received signal equation at the relay as
$\mathbf{y}_{r}=\sum_{j=1}^{2K}\mathbf{H}_{rj}\mathbf{V}_{j}\mathbf{s}_{j}+\mathbf{z}_{r},$
(1)
where $\mathbf{H}_{rj}\in\mathbb{C}^{R\times M}$ denotes the wireless MIMO
channel matrix from the $j^{th}$ user to the relay node; the matrix
$\mathbf{V}_{j}\in\mathcal{G}_{M,d}$ is the orthonormal precoder at the
$j^{th}$ user such that
$\mathbf{V}_{j}^{H}\mathbf{V}_{j}=\mathbf{I}_{d},\forall j$; the vector
$\mathbf{s}_{k}\in\mathbb{C}^{d\times 1}$ represents the uncorrelated transmit
data i.e.,
$\mathbb{E}\left\\{\mathbf{s}_{k}\mathbf{s}_{k}^{H}\right\\}=\frac{P_{k}}{d}\mathbf{I}_{d}$
with $P_{k}$ being the total transmit power at the $k^{th}$ user node; and
$\mathbf{z}_{r}\sim\mathcal{CN}\left(0,\sigma_{R}^{2}\mathbf{I}_{R}\right)$ is
the Gaussian noise. Entries of the channel matrix $\mathbf{H}_{rj}$ are
assumed to have zero mean and variance
$\mathbb{E}\left|\mathbf{H}_{rj}(m,n)\right|^{2}=\beta_{rj},\forall m,n$.
Thus, the relay forwards $2Kd$ data streams. Conventionally, the condition
$R\geq 2Kd$ is required. However, with IA, $Kd$ antennas at the relay are
sufficient [48].
### II-B Harvesting Energy
After the MAC phase, the relay splits the received signal into two flows: one
part goes for EH, while the remainder will be forwarded in the BC phase for
information decoding (ID) at users. The received signal $\mathbf{y}_{r}$ is
fed into a power splitter with a power-splitting (PS) ratio $\rho\in\left[0,1\right]$, denoting
the portion for harvesting energy,
$\displaystyle\mathbf{y}_{r}^{EH}$
$\displaystyle\approx\sqrt{\rho}\mathbf{y}_{r}=\sqrt{\rho}\left(\sum_{j=1}^{2K}\mathbf{H}_{rj}\mathbf{V}_{j}\mathbf{s}_{j}+\mathbf{z}_{r}\right),$
(2)
where in the above approximation, the noise introduced by the splitter is
negligible compared to the received signal strength, and hence ignored. The
corresponding average harvested energy at the relay can be expressed as
$\displaystyle Q_{r}$
$\displaystyle=\zeta\mathbb{E}\left\\{\left\|\mathbf{y}_{r}^{EH}\right\|_{2}^{2}\right\\},$
(3a)
$\displaystyle\approx\zeta\rho\sum_{j=1}^{2K}\frac{P_{j}}{d}\left\|\mathbf{H}_{rj}\mathbf{V}_{j}\right\|_{F}^{2},$
(3b)
where $\zeta\in\left[0,1\right]$ represents the RF-to-electrical conversion
efficiency. Note that the noise power $\zeta\rho\sigma_{r}^{2}R$ is negligible
and constant, and hence is omitted in the above equation.
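The step from (3a) to (3b) can be checked numerically. The sketch below uses random synthetic channels and hypothetical power/efficiency values, comparing the Monte-Carlo average of $\zeta\|\mathbf{y}_{r}^{EH}\|_{2}^{2}$ (with the noise term omitted, as above) against the closed form:

```python
import numpy as np

# Hypothetical system parameters: 2K = 4 users, M user antennas, R relay antennas.
rng = np.random.default_rng(2)
K2, M, R, d = 4, 4, 8, 2
P, rho, zeta, trials = 1.0, 0.6, 0.8, 2000

# Random Rayleigh channels H_rj and orthonormal precoders V_j (via reduced QR).
H = [(rng.standard_normal((R, M)) + 1j * rng.standard_normal((R, M))) / np.sqrt(2)
     for _ in range(K2)]
V = [np.linalg.qr(rng.standard_normal((M, d)) +
                  1j * rng.standard_normal((M, d)))[0] for _ in range(K2)]

# (3b): analytic average harvested energy (noise term omitted).
Q_analytic = zeta * rho * sum((P / d) * np.linalg.norm(H[j] @ V[j], 'fro')**2
                              for j in range(K2))

# (3a): empirical average of zeta * ||sqrt(rho) * y_r||^2 over random symbols
# with E{s s^H} = (P/d) I_d.
Q_emp = 0.0
for _ in range(trials):
    y = sum(H[j] @ V[j] @ (np.sqrt(P / (2 * d)) *
            (rng.standard_normal(d) + 1j * rng.standard_normal(d)))
            for j in range(K2))
    Q_emp += zeta * rho * np.linalg.norm(y)**2
Q_emp /= trials
```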
### II-C BC Phase
The received signal at the power splitter for ID experiences an additional
circuit noise due to non-ideal splitters, non-ideal RF-baseband conversion and
thermal noise [51]. Thus, the signal for ID at the relay can be expressed as
$\displaystyle\mathbf{y}_{r}^{ID}$
$\displaystyle=\sqrt{\bar{\rho}}\mathbf{y}_{r}+\mathbf{w}_{r}$ (4a)
$\displaystyle=\sqrt{\bar{\rho}}\left(\sum_{j=1}^{2K}\mathbf{H}_{rj}\mathbf{V}_{j}\mathbf{s}_{j}+\mathbf{z}_{r}\right)+\mathbf{w}_{r},$
(4b)
where $\bar{\rho}=1-\rho$ and
$\mathbf{w}_{r}\sim\mathcal{CN}\left(\mathbf{0},\delta^{2}\mathbf{I}_{R}\right)$.
It can be noted that the above equation results in an effective noise
$\tilde{\mathbf{w}}_{r}=\mathbf{z}_{r}+\frac{\mathbf{w}_{r}}{\sqrt{\bar{\rho}}}\sim\mathcal{CN}\left(\mathbf{0},\sigma_{ID}^{2}\mathbf{I}_{R}\right)$,
where
$\sigma_{ID}^{2}=\sigma_{R}^{2}\left(1+\frac{\delta^{2}}{\bar{\rho}\sigma_{R}^{2}}\right)$.
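A quick Monte-Carlo check of this effective noise variance, using synthetic noise samples and hypothetical variance values:

```python
import numpy as np

# Verify that z_r + w_r/sqrt(1-rho) has per-antenna variance
# sigma_ID^2 = sigma_R^2 * (1 + delta^2 / ((1-rho) * sigma_R^2)).
rng = np.random.default_rng(4)
sigma_R2, delta2, rho, N = 1.0, 0.5, 0.6, 200_000   # hypothetical values
rho_bar = 1.0 - rho

# Circularly symmetric complex Gaussian samples for z_r and w_r.
z = np.sqrt(sigma_R2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
w = np.sqrt(delta2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

w_eff = z + w / np.sqrt(rho_bar)
var_emp = np.mean(np.abs(w_eff)**2)
var_theory = sigma_R2 * (1 + delta2 / (rho_bar * sigma_R2))
```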
Next, in the BC phase, the relay first precodes the signal using the matrix
$\mathbf{G}\in\mathbb{C}^{R\times R}$ satisfying the transmit power constraint
at the relay. Then, the relay broadcasts the noisy-precoded signal to user
destinations. At the $k^{th}$ user, the received signal is written as
$\displaystyle\mathbf{y}_{k}$
$\displaystyle=\mathbf{H}_{kr}\mathbf{G}\mathbf{y}_{r}^{ID}+\mathbf{n}_{k}$
(5a)
$\displaystyle=\sqrt{\bar{\rho}}\mathbf{H}_{kr}\mathbf{G}\sum_{j=1}^{2K}\mathbf{H}_{rj}\mathbf{V}_{j}\mathbf{s}_{j}+\sqrt{\bar{\rho}}\mathbf{H}_{kr}\mathbf{G}\tilde{\mathbf{w}}_{r}+\mathbf{n}_{k}$
(5b)
$\displaystyle=\sqrt{\bar{\rho}}\mathbf{H}_{kr}\mathbf{G}\mathbf{H}_{rk^{\prime}}\mathbf{V}_{k^{\prime}}\mathbf{s}_{k^{\prime}}+\sqrt{\bar{\rho}}\mathbf{H}_{kr}\mathbf{G}\mathbf{H}_{rk}\mathbf{V}_{k}\mathbf{s}_{k}$
(5c) $\displaystyle\quad+\sqrt{\bar{\rho}}\mathbf{H}_{kr}\mathbf{G}\sum_{j\neq
k,k^{\prime}}\mathbf{H}_{rj}\mathbf{V}_{j}\mathbf{s}_{j}+\sqrt{\bar{\rho}}\mathbf{H}_{kr}\mathbf{G}\tilde{\mathbf{w}}_{r}+\mathbf{n}_{k},$
(5d)
where $\mathbf{H}_{kr}\in\mathbb{C}^{M\times R}$ denotes the wireless MIMO
channel matrix from the relay node to the $k^{th}$ user with zero mean and
variance $\mathbb{E}\left|\mathbf{H}_{kr}(m,n)\right|^{2}=\beta_{kr},\forall
m,n$; the vector
$\mathbf{n}_{k}\sim\mathcal{CN}\left(\mathbf{0},\sigma^{2}\mathbf{I}_{M}\right)$
is the Gaussian noise at the receiver. The above equation respectively
consists of the desired signal term, the self-interference component, the co-
channel interference, the relayed noise, and the received noise. For the
information retrieval and to mitigate the interference, the $k^{th}$ node
subtracts the self-interference term and then employs a combiner matrix
$\mathbf{U}_{k}$ as
$\hat{\mathbf{y}}_{k}=\mathbf{U}_{k}^{H}\left(\mathbf{y}_{k}-\sqrt{\bar{\rho}}\mathbf{H}_{kr}\mathbf{G}\mathbf{H}_{rk}\mathbf{V}_{k}\mathbf{s}_{k}\right),$
(6)
where for simplicity, $\mathbf{U}_{k}$ is assumed to be an orthonormal matrix,
i.e., $\mathbf{U}_{k}^{H}\mathbf{U}_{k}=\mathbf{I}_{d}$.
#### II-C1 IA feasibility
Let $\mathbf{H}_{kj}=\mathbf{H}_{kr}\mathbf{G}\mathbf{H}_{rj}$. For IA, the
set of precoders and combiners needs to satisfy the following equations
$\displaystyle\mathbf{U}_{k}^{H}\mathbf{H}_{kj}\mathbf{V}_{j}=\mathbf{0},$
$\displaystyle\forall j\neq k,k^{\prime},$ (7a)
$\displaystyle\text{rank}\left(\mathbf{U}_{k}^{H}\left[\mathbf{H}_{kk}\mathbf{V}_{k},\mathbf{H}_{kk^{\prime}}\mathbf{V}_{k^{\prime}}\right]\right)$
$\displaystyle\geq d,\forall k.$ (7b)
The first equation ensures that the interference terms are zero at all receivers,
while the second ensures the availability of at least $d$ dimensions for
decoding the desired signal. The desired signal can occupy the same
subspace as the self-interference signal, since the known self-signal term can
be removed by subtraction as in (6). Note that each relay receives $2Kd$ data
streams; thus, without IA, the relay requires at least $2Kd$ antennas ($R\geq
2Kd$). However, for fewer antennas at the relay, i.e., if $Kd\leq R<2Kd$,
each node's precoder should be aligned with the desired signal subspace, that
is,
$\text{span}\left(\mathbf{H}_{jk}\mathbf{V}_{k}\right)=\text{span}\left(\mathbf{H}_{jk^{\prime}}\mathbf{V}_{k^{\prime}}\right),\forall
j,k.$ (8)
It can be seen from (7a)-(7b) that for a fixed relay matrix
$\mathbf{G}$, the above model is analogous to an interference channel
$\left(M\times M,d\right)^{2K}$ with the equivalent channel matrices
$\mathbf{H}_{kj}$, $\forall k,j$. For this analogous IC, one needs to satisfy
the necessary proper-system condition in [52], or the guaranteed IA-
feasibility condition in [43]. From the above equations (for all $k,j$), by
setting the number of equations $\left(d^{2}2K(2K-2)\right)$ less than or
equal to the number of variables $\left(4K(M-d)d\right)$, we arrive at the
necessary condition
$M\geq Kd.$ (9)
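The counting argument behind (9) can be sanity-checked numerically; a minimal sketch (the helper `is_proper` is illustrative, not from the paper) compares the number of IA equations with the number of free variables:

```python
# Proper-system counting check for the analogous (M x M, d)^{2K} IC:
# equations: d^2 * 2K * (2K - 2); variables: 4K * (M - d) * d.
# The system is proper iff #equations <= #variables, which reduces to M >= K*d.

def is_proper(K: int, M: int, d: int) -> bool:
    n_eq = d * d * 2 * K * (2 * K - 2)
    n_var = 4 * K * (M - d) * d
    return n_eq <= n_var

# The counting condition coincides with M >= K*d for every (K, M, d):
for K in range(1, 6):
    for d in range(1, 5):
        for M in range(d, 4 * K * d):
            assert is_proper(K, M, d) == (M >= K * d)
```

For the $\left(6,6,2\right)^{6}$ system used later ($K=3$, $d=2$), the condition requires $M\geq 6$.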
#### II-C2 Sum rate
At the $k^{th}$ destination node, the resulting rate can be written as
$R_{k}=\frac{1}{2}\log_{2}\Bigg{|}\mathbf{I}_{d}+\frac{\bar{\rho}P_{k^{\prime}}}{d}\bar{\mathbf{H}}_{kk^{\prime}}\bar{\mathbf{H}}_{kk^{\prime}}^{H}\left(\mathbf{N}_{k}+\sum_{j\neq k,k^{\prime}}\frac{\bar{\rho}P_{j}}{d}\bar{\mathbf{H}}_{kj}\bar{\mathbf{H}}_{kj}^{H}\right)^{-1}\Bigg{|},$
where $\bar{\mathbf{H}}_{kj}=\mathbf{U}_{k}^{H}\mathbf{H}_{kj}\mathbf{V}_{j}$;
$\mathbf{N}_{k}=\mathbf{U}_{k}^{H}\mathbf{C}_{k}\mathbf{U}_{k}$ with
$\mathbf{C}_{k}$ being the effective noise covariance matrix given as
$\displaystyle\mathbf{C}_{k}$
$\displaystyle=\text{Cov}\left(\sqrt{\bar{\rho}}\mathbf{H}_{kr}\mathbf{G}\tilde{\mathbf{w}}_{r}+\mathbf{n}_{k}\right)$
$\displaystyle=\bar{\rho}\sigma_{ID}^{2}\mathbf{H}_{kr}\mathbf{G}\mathbf{G}^{H}\mathbf{H}_{kr}^{H}+\sigma^{2}\mathbf{I}_{M}.$
If the interference is perfectly canceled, i.e.,
$\bar{\mathbf{H}}_{kj}=\mathbf{0}$, the rate is constrained by the noise
component forwarded by the relay as
$R_{k,per}=\frac{1}{2}\log_{2}\left|\mathbf{I}_{d}+\bar{\rho}\frac{P_{k^{\prime}}}{d}\bar{\mathbf{H}}_{kk^{\prime}}\bar{\mathbf{H}}_{kk^{\prime}}^{H}\mathbf{N}_{k}^{-1}\right|.$
To obtain the limits on harvested energy, we first present both the rate and
the energy optimized precoding with its analysis in the following.
## III Precoding for Rate or energy limits
In this section, we first provide a brief overview of the modified IA
algorithm for the TWR system. Since the focus of the paper is SWIPT schemes,
the details of the TWRIA algorithm are relegated to Appendix-A. After the
rate-optimized precoding, the precoders achieving the maximum harvested energy
are derived, and the expected rate-loss upper bound is analyzed. Subsequently,
the definition of CD and its properties are explained to introduce the balanced
precoding.
### III-A Rate-optimized precoding: TWRIA algorithm
Conventional IA methods for interference mitigation are suited to interference
channels. In TWR system, due to the presence of relay, the channel matrix
depends on the choice of relay processing matrix $\mathbf{G}$ with relay
transmit power constraint. Owing to this dependence, the effective channel
matrix $\mathbf{H}_{kj}$ varies in each iteration for a conventional iterative
IA algorithm, leading to a much higher computational overhead and slower
convergence speed. Therefore, the TWR variant of the MSE-based IA method [43]
is derived in the Appendix-A. Similar to the IA method in [43], the TWRIA is
an iterative algorithm, where precoders and combiners are alternately updated
using the corresponding set of expressions derived in (36) and (35),
respectively.
In the TWRIA algorithm, the relay matrix $\mathbf{G}$ is chosen proportional to
an identity matrix, $\mathbf{G}=\alpha\mathbf{I}_{R}$, for the following
reasons.
* •
IA conditions: The first interference alignment condition
$\mathbf{U}_{k}^{H}\mathbf{H}_{kr}\mathbf{G}\mathbf{H}_{rj}\mathbf{V}_{j}=\mathbf{0},\forall
j\neq k,k^{\prime},$ is unaffected by the choice of $\mathbf{G}$. At the end
of IA algorithm, this product is close to zero via the matrices
$\mathbf{U}_{k}$ and $\mathbf{V}_{k}$, irrespective of the value of
$\mathbf{G}$. Moreover, due to the presence of $\mathbf{U}_{k}$ and
$\mathbf{V}_{k}$ at both sides of the effective channel
$\frac{\mathbf{H}_{kr}\mathbf{G}\mathbf{H}_{rj}}{\|\mathbf{G}\|_{F}}$ for all
$k,j$, depending on $\frac{\mathbf{G}}{\|\mathbf{G}\|_{F}}$, the variables
$\mathbf{U}_{k}$ and $\mathbf{V}_{k}$ will be optimized accordingly via the
TWRIA algorithm. In other words, $\mathbf{U}_{k}$ and $\mathbf{V}_{k}$ are the
main matrices deciding the structure of the product
$\mathbf{U}_{k}^{H}\mathbf{H}_{kr}\frac{\mathbf{G}}{\|\mathbf{G}\|_{F}}\mathbf{H}_{rj}\mathbf{V}_{j}$
for any $k,j$, and the final MSE-values, rather than the structure of
$\mathbf{G}$. The only part of $\mathbf{G}$, that affects the MSE or sum rate
performance is its weight factor $\alpha$, which is chosen according to the
relay transmit power constraint.
* •
Diagonalization: Since the matrix $\mathbf{G}$ acts as a trio of a receiver,
an amplifier and a transmitter, we can write via SVD as
$\mathbf{G}=\mathbf{G}_{T}\Lambda_{G}\mathbf{G}_{R}$, where $\mathbf{G}_{T}$
and $\mathbf{G}_{R}$ are $R\times R$ orthonormal matrices acting as relay
transmit precoder and receive decoder; and $\Lambda_{G}$ is an $R\times R$
diagonal matrix with non-negative entries for relay amplification. Note that
a square orthonormal matrix is a unitary matrix, i.e.,
$\mathbf{G}_{T}^{H}\mathbf{G}_{T}=\mathbf{G}_{T}\mathbf{G}_{T}^{H}=\mathbf{G}_{R}^{H}\mathbf{G}_{R}=\mathbf{G}_{R}\mathbf{G}_{R}^{H}=\mathbf{I}$;
hence, it contributes neither to the interference alignment nor to the transmit
power changes. Therefore, without losing optimality, it is sufficient to
consider $\mathbf{G}=\Lambda_{G}$. Further, for the received signal at the relay,
i.e., $\sum_{j=1}^{2K}\mathbf{H}_{rj}\mathbf{V}_{j}\mathbf{s}_{j}$, each entry
of this vector contains all channels and data streams. Note that at the relay,
each of the $2Kd$ data streams is equally important to forward. Thus, unequal
power allocation $\left(\Lambda_{G}\right)$ to minimize the system objective,
such as MSE or sum rate, will provide diagonal values in $\Lambda_{G}$
proportional to the small-scale fading variations, summed across all users. As
the number of users or antennas increases, these small-scale variations
reduce. Therefore, for simplicity and without losing much optimality, the
relay matrix $\Lambda_{G}$ is relaxed to $\alpha\mathbf{I}_{R}$.
Regarding the convergence of the TWRIA algorithm, it can be seen that the total
MSE is jointly convex with respect to the precoders and combiners (see [43]).
Thus, the algorithm converges globally, as demonstrated via simulation
results. The computational complexity is the same as that of a conventional IA
algorithm, where the number of iterations for convergence depends on the SNR.
Further, it can be noted that the TWRIA algorithm can be designed independently
of the splitting operation for SWIPT. This important feature, along with the
low-complexity balanced precoding, allows the use of dynamic splitting ratios
in order to dynamically satisfy rate and energy constraints. To provide the
SWIPT functionality, the CD decomposition is discussed in the next section. To
analyze the rate-energy trade-off, it is important to quantify the maximum
achievable harvested energy, which is given as follows.
### III-B Energy-optimized precoding
The problem of maximizing the harvested energy at the relay with respect to
precoders, subject to orthogonality constraint on the precoders, can be
written as
$\displaystyle\left\\{\mathbf{V}_{j}^{EH},\forall
j\right\\}=\arg\max_{\mathbf{V}_{j},\forall j}$
$\displaystyle\zeta\rho\sum_{j}\frac{P_{j}}{d}\|\mathbf{H}_{rj}\mathbf{V}_{j}\|_{F}^{2}$
(11a) subject to $\displaystyle\|\mathbf{V}_{j}\|_{F}^{2}\leq d,\forall j.$
(11b)
The above problem can be decoupled, and the solution for the $j^{th}$ precoder
$\mathbf{V}_{j}$ is given by the eigenvectors of
$\mathbf{H}_{rj}^{H}\mathbf{H}_{rj}$ corresponding to its $d$ largest
eigenvalues, i.e.,
$\displaystyle\mathbf{V}_{j}^{EH}$
$\displaystyle=\arg\max_{\|\mathbf{V}_{j}\|_{F}^{2}\leq
d}\text{tr}\left(\mathbf{V}_{j}^{H}\mathbf{H}_{rj}^{H}\mathbf{H}_{rj}\mathbf{V}_{j}\right)$
(12a)
$\displaystyle=\nu_{1:d}\left[\mathbf{H}_{rj}^{H}\mathbf{H}_{rj}\right]=\mathbf{W}_{j}^{[1]},$
(12b)
where $\mathbf{W}_{j}^{[1]}$ is computed via the eigenvalue decomposition
(EVD), i.e.,
$\mathbf{H}_{rj}^{H}\mathbf{H}_{rj}=\mathbf{W}_{j}\mathbf{\Lambda}_{j}\mathbf{W}_{j}^{H},$
(13)
with $\mathbf{W}_{j}=\left[\mathbf{W}_{j}^{[1]},\mathbf{W}_{j}^{[2]}\right]$
and $\mathbf{\Lambda}_{j}=\mathcal{D}\left(\lambda_{ji},i=1,\ldots,M\right)$,
where the eigenvalues are in descending order,
$\lambda_{j1}\geq\cdots\geq\lambda_{jM}$. Note that $\mathbf{W}_{j}^{[1]}$ and
$\mathbf{W}_{j}^{[2]}$ are orthonormal matrices of size $M\times d$ and
$M\times(M-d)$, respectively. To
analyze the effect of the precoding scheme on sum rates, we utilize CD and its
decomposition, which are defined in the following.
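As a numerical sketch of (12a)-(12b) (illustrative NumPy code with random channels, not the paper's simulation setup), the energy-optimized precoder is simply the matrix of $d$ dominant eigenvectors:

```python
import numpy as np

rng = np.random.default_rng(0)
R, M, d = 6, 6, 2   # illustrative sizes

# Random complex Gaussian channel H_rj from user j to the relay.
H_rj = (rng.standard_normal((R, M)) + 1j * rng.standard_normal((R, M))) / np.sqrt(2)

# EVD of the Hermitian Gram matrix; np.linalg.eigh returns ascending eigenvalues.
eigvals, W = np.linalg.eigh(H_rj.conj().T @ H_rj)
V_EH = W[:, ::-1][:, :d]   # W_j^[1]: the d dominant eigenvectors

# V_EH is orthonormal, and ||H_rj V_EH||_F^2 equals the sum of the d largest eigenvalues.
assert np.allclose(V_EH.conj().T @ V_EH, np.eye(d))
assert np.isclose(np.linalg.norm(H_rj @ V_EH, 'fro')**2, np.sort(eigvals)[-d:].sum())
```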
### III-C Chordal Distance
###### Definition 1.
Let $\mathbf{V},\hat{\mathbf{V}}\in\mathbb{C}^{M\times d}$ be two orthonormal
matrices such that
$\hat{\mathbf{V}}^{H}\hat{\mathbf{V}}=\mathbf{V}^{H}\mathbf{V}=\mathbf{I}_{d}$.
The CD between these matrices can be defined as
$d_{c}^{2}(\mathbf{V},\,\hat{\mathbf{V}})=\frac{1}{2}\|\mathbf{V}\mathbf{V}^{H}-\hat{\mathbf{V}}\hat{\mathbf{V}}^{H}\|_{F}^{2}=d-\|\mathbf{V}^{H}\hat{\mathbf{V}}\|_{F}^{2}.$
(14)
Note that the matrices $\mathbf{V}$ and $\hat{\mathbf{V}}$ represent $d$
dimensional subspaces of $M$ dimensional space, i.e., $\mathbf{V}$ and
$\hat{\mathbf{V}}$ lie on a Grassmannian manifold $\mathcal{G}_{M,d}$, which
is a collection of all such $d$ dimensional subspaces. The CD represents the
distance between the subspaces spanned by these matrices. Thus, two
orthonormal matrices that represent the same column space will have zero CD
value. The CD between two unit-norm vectors (say
$\mathbf{v}_{1},\mathbf{v}_{2}\in\mathcal{G}_{M,1}$) reduces to one minus
their squared inner product, i.e.,
$1-\left|\mathbf{v}_{1}^{H}\mathbf{v}_{2}\right|^{2}$. Further, given two
matrices in $\mathcal{G}_{M,d}$, one matrix can be expressed in terms of the
other using the CD decomposition lemma from [53, Lemma 1]. The following lemma
states the modified CD decomposition, where the modification comes from
splitting the null-space component of dimension $M-d$ into a product of two matrices.
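The equivalence of the two expressions in (14) is easy to verify numerically; a minimal sketch with random orthonormal matrices (the helper `orth` is an illustrative QR-based orthonormalization):

```python
import numpy as np

rng = np.random.default_rng(1)
M, d = 6, 2

def orth(A):
    # Orthonormal basis of the column space via QR.
    return np.linalg.qr(A)[0]

V = orth(rng.standard_normal((M, d)) + 1j * rng.standard_normal((M, d)))
V_hat = orth(rng.standard_normal((M, d)) + 1j * rng.standard_normal((M, d)))

# Two equivalent forms of the squared chordal distance in (14).
cd_proj = 0.5 * np.linalg.norm(V @ V.conj().T - V_hat @ V_hat.conj().T, 'fro')**2
cd_trace = d - np.linalg.norm(V.conj().T @ V_hat, 'fro')**2
assert np.isclose(cd_proj, cd_trace)

# Same column space (V rotated by a unitary Q) -> zero chordal distance.
Q = orth(rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)))
assert np.isclose(d - np.linalg.norm(V.conj().T @ (V @ Q), 'fro')**2, 0)
```

The last check illustrates Corollary 3 below: matrices spanning the same subspace are equivalent under the CD.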
###### Lemma 2.
The two matrices $\hat{\mathbf{V}}$ and $\mathbf{V}$ (such that
$\hat{\mathbf{V}}^{H}\hat{\mathbf{V}}=\mathbf{V}^{H}\mathbf{V}=\mathbf{I}_{d}$)
admit the following decomposition [53, Lem 1]
$\mathbf{V}=\hat{\mathbf{V}}\mathbf{X}\mathbf{Y}+\hat{\mathbf{V}}^{\text{null}}\mathbf{S}\mathbf{Z},$
(15)
where $\mathbf{V},\,\hat{\mathbf{V}}\in\mathbb{C}^{M\times d}$,
$\hat{\mathbf{V}}^{\text{null}}=\text{null}(\hat{\mathbf{V}})\in\mathbb{C}^{M\times(M-d)}$,
$\mathbf{X}\in\mathbb{C}^{d\times d}$ and
$\mathbf{S}\in\mathbb{C}^{(M-d)\times d}$ are orthonormal matrices, and
$\mathbf{Y},\,\mathbf{Z}\in\mathbb{C}^{d\times d}$ are upper triangular
matrices with positive diagonal elements satisfying
$\displaystyle\mathrm{tr}(\mathbf{Z}^{H}\mathbf{Z})$ $\displaystyle=$
$\displaystyle d_{c}^{2}(\mathbf{V},\hat{\mathbf{V}}),$ (16a)
$\displaystyle\mathbf{Y}^{H}\mathbf{Y}$ $\displaystyle=$
$\displaystyle\mathbf{I}_{d}-\mathbf{Z}^{H}\mathbf{Z}.$ (16b)
Moreover, $\mathbf{X}$ and $\mathbf{Y}$ are distributed independently of each
other, as is the pair $\mathbf{S}$ and $\mathbf{Z}$.
###### Proof:
A short proof is included in Appendix-B from [53], including the proofs of the
following corollaries. ∎
Note that this decomposition requires $M\geq 2d$, which is the case in IA,
i.e., at least $d$ dimensions for the desired signal and the remaining for the
interference.
###### Corollary 3.
If two sets of precoders have zero chordal distances, the resulting rate and
the harvested energy are the same.
Note that two different orthonormal matrices with zero chordal distance
will be termed equivalent matrices; however, they are not necessarily the
same matrix.
###### Corollary 4.
Given a CD value $z$ and an orthonormal matrix $\mathbf{V}$, the matrices
$\mathbf{Y}$ and $\mathbf{Z}$ in the CD decomposition of the displacement
precoder (with respect to $\mathbf{V}$) can be relaxed to
diagonal matrices as
$\mathbf{V}_{D}=\mathbf{V}\mathbf{X}\Sigma_{Y}+\mathbf{V}^{\text{null}}\mathbf{S}\Sigma_{Z},$
(17)
where $\Sigma_{Y}$ and $\Sigma_{Z}$ are diagonal matrices such that
$\Sigma_{Y}^{2}=\mathbf{I}_{d}-\Sigma_{Z}^{2}$.
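A minimal sketch of the relaxed decomposition (17): given a target CD value $z$, construct a displacement precoder and check both its orthonormality and its chordal distance to $\mathbf{V}$ (the CD budget is spread uniformly across $\Sigma_{Z}$ purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
M, d, z = 6, 2, 0.5   # target squared chordal distance z <= d

def orth(A):
    return np.linalg.qr(A)[0]

V = orth(rng.standard_normal((M, d)) + 1j * rng.standard_normal((M, d)))
V_null = np.linalg.qr(V, mode='complete')[0][:, d:]   # orthonormal complement, M x (M-d)

X = orth(rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)))       # unitary
S = orth(rng.standard_normal((M - d, d)) + 1j * rng.standard_normal((M - d, d)))

Sz = np.sqrt(z / d) * np.eye(d)        # spread the CD budget uniformly
Sy = np.sqrt(np.eye(d) - Sz**2)        # Sigma_Y^2 = I - Sigma_Z^2

V_D = V @ X @ Sy + V_null @ S @ Sz     # displacement precoder (17)

assert np.allclose(V_D.conj().T @ V_D, np.eye(d))          # still orthonormal
cd = d - np.linalg.norm(V.conj().T @ V_D, 'fro')**2
assert np.isclose(cd, z)                                    # achieves CD exactly z
```

Orthonormality follows because $\mathbf{V}^{H}\mathbf{V}^{\text{null}}=\mathbf{0}$ and $\Sigma_{Y}^{2}+\Sigma_{Z}^{2}=\mathbf{I}_{d}$.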
### III-D Rate loss upper bound
With the maximum EH-based precoding in (12a), the resultant maximum harvested
energy can be written as the sum of the first $d$ dominant eigenvalues of
$\mathbf{H}_{rj}^{H}\mathbf{H}_{rj},\forall j$. Note that the precoding in
(12a) is an independent precoding scheme, which does not mitigate the effect
of interference terms for ID. However, the obtained precoders may partially
align the interference. This partial alignment can be measured using the CD
between the ideal IA precoders and the EH precoders, say
$z_{j}^{EH}=d_{c}^{2}(\mathbf{V}_{j},\mathbf{V}_{j}^{EH}),\forall j,$ (18)
where $\mathbf{V}_{j},\forall j$ stands for the IA precoders. It can be noted that
the above CD represents the displacement of $\mathbf{V}_{j}^{EH}$ with respect
to $\mathbf{V}_{j}$, and it does not depend on the SNR values. The larger the
distance, the greater the interference. Therefore, it is essential to specify
the allowable interference in the system, which can be characterized by the
following result [54].
###### Lemma 5.
(Rate Loss Upper Bound) In the TWR system $(M,R,d)^{2K}$, the usage of an
imperfect precoder instead of the TWRIA precoder at the sources incurs a rate
loss $\Delta R_{k}$, whose expected value can be upper bounded for the
$k^{th}$ receiver as $\mathbb{E}\left\\{\Delta R_{k}\right\\}<$
$\displaystyle\frac{d}{2}\log_{2}\left[1+\frac{M_{d}\bar{\rho}\sum_{j\neq
k,k^{\prime}}P_{j}z_{j}\beta_{rj}}{\bar{\rho}\left(\sum_{j=1}^{2K}P_{j}\frac{\beta_{rj}\sigma^{2}}{\beta_{kr}P_{r}}+\bar{\sigma}_{kr}^{2}\right)+\bar{\delta}_{kr}^{2}}\right],$
(19)
with $M_{d}=\frac{M}{d(M-d)}$,
$\bar{\sigma}_{kr}^{2}=\sigma_{R}^{2}\left(1+\frac{\sigma^{2}}{\beta_{kr}P_{r}}\right)$,
$\bar{\delta}_{kr}^{2}=\delta^{2}\left(1+\frac{\sigma^{2}}{\beta_{kr}P_{r}}\right)$,
and $z_{k}=\mathbb{E}\,d_{c}^{2}(\mathbf{V}_{k},\hat{\mathbf{V}}_{k})$ the
average CD between the IA precoder and the imperfect one.
###### Proof:
Proof is given in Appendix-C. ∎
The above bound shows that $z_{j}$ should be set inversely proportional to
$P_{j}$ to keep the rate loss constant. The splitting ratio $\bar{\rho}$ can
be set to keep this constant loss within the specified limit. In the following,
the SWIPT maximization problem is simplified.
### III-E Rate-energy maximization problem
For SWIPT precoding in the literature [5, 55], authors have formulated an
optimization problem in which a linear combination of the sum rate and the sum
harvested energy is maximized, subject to the quality-of-service (QoS)
constraints and the precoder constraints, as
$\displaystyle\max_{\mathbf{V}_{j},\forall
j}\sum_{k}R_{k}\left(\mathbf{V}_{j},\forall j\big{|}\mathbf{H}_{j}\right)+\nu
Q_{r}\left(\mathbf{V}_{j},\forall j\big{|}\mathbf{H}_{j}\right)$ (20a)
$\displaystyle\text{subject to
}\left|R_{j}-\bar{R}_{j}\right|\leq\frac{d}{2}\log_{2}c,\|\mathbf{V}_{j}\|_{F}^{2}\leq
d,\forall j,$ (20b)
where $\nu$ is the weight controlling the preferred objective, and
$\bar{R}_{j}$ and $\frac{d}{2}\log_{2}c$ are the QoS rate constraint and the
specified rate-loss upper bound for the $j^{th}$ user. Note that the above two
are opposing objectives, i.e., if the sum rate is maximized, the harvested
energy is reduced, and if the sum harvested energy is maximized, the sum rate
degrades. To provide a balanced precoder, we start with the sum rate optimal
precoder, i.e., the TWRIA precoder $\mathbf{V}_{j}$, and degrade this precoder
in such a way that the degraded precoder satisfies the required QoS
constraint. In general, if we degrade the TWRIA precoder directly, it will
result in severe rate loss, causing an unexpected loss of degrees of freedom.
Thus, to avoid such losses, we employ the CD decomposition, in which the value of
the CD decides the degradation in the precoder, i.e., the loss in degrees of
freedom. It can be seen from Lemma 5 that if the CD value is chosen inversely
proportional to the SNR, there is no loss of DoFs, that is, only a constant rate
loss is present. This constant rate loss can be reduced via the splitting
ratio.
For example, from Lemma 5, to keep the rate-loss upper bound constant
(say $\frac{d}{2}\log_{2}c$), the required values of the CD and the splitting
ratio can be computed from the following inequality
$\displaystyle\frac{M_{d}\bar{\rho}P_{j}z_{j}\beta_{rj}}{\bar{\rho}\left(\sum_{j=1}^{2K}P_{j}\frac{\beta_{rj}\sigma^{2}}{\beta_{kr}P_{r}}+\bar{\sigma}_{kr}^{2}\right)+\bar{\delta}_{kr}^{2}}\leq\frac{c-1}{2(K-1)}$
which can be rearranged into
$z_{j}\leq\min\left(\bar{z}_{j}(\bar{\rho},c),z_{j}^{EH},\forall j\right)$
with
$\displaystyle\bar{z}_{j}(\bar{\rho},c)=\frac{(c-1)}{2P_{j}\beta_{rj}M_{d}(K-1)}\left[\sum_{j=1}^{2K}P_{j}\frac{\beta_{rj}\sigma^{2}}{\beta_{kr}P_{r}}+\bar{\sigma}_{R}^{2}+\frac{\bar{\delta}^{2}}{\bar{\rho}}\right].$
In the above, one needs to have $\bar{z}_{j}(\bar{\rho},c)\leq z_{j}^{EH}$;
otherwise, the sum rates will be much worse due to the lack of interference
alignment, and in that scenario, the EH-maximized precoder would be the better
choice. In the high-SNR regime, these conditions can easily be met, since
$z_{j}$ is inversely proportional to $P_{j}$. In the low- and mid-SNR
range, the value of $\bar{\rho}$ can be finely tuned to keep the $z_{j}$ value
under the limit. Therefore, given the CD values and the IA precoders for the
specified constant rate-loss upper bound, the corresponding balanced precoders
are obtained in the following section.
## IV Proposed balanced precoding method
### IV-A Optimization Problem
Given the TWRIA precoders $\left\\{\mathbf{V}_{j},\forall j\right\\}$ and the
value of CD $\left\\{z_{j},\forall j\right\\}$, we can now focus on maximizing
the harvested energy, since the expected sum rate losses obtained with a given
CD have a fixed and known upper bound. Thus, the $j^{th}$ balanced precoder
can be expressed using CD decomposition from the Corollary 4 as
$\mathbf{V}_{j}^{BAL}=\mathbf{V}_{j}\mathbf{X}_{j}\mathbf{Y}_{j}+\mathbf{V}_{j}^{\text{null}}\mathbf{S}_{j}\mathbf{Z}_{j},$
(21)
where $\mathbf{Y}_{j}$ and $\mathbf{Z}_{j}$ are diagonal matrices; the
matrices $\mathbf{S}_{j}$, $\mathbf{X}_{j}$, and $\mathbf{Z}_{j}$ are obtained
in the following to maximize the energy; and $\mathbf{V}_{j}^{\text{null}}$ represents the left
null space of $\mathbf{V}_{j}$, i.e.,
$\mathbf{V}_{j}^{\text{null}}=\text{null}(\mathbf{V}_{j})\in\mathcal{G}_{M,M-d}$
such that $\mathbf{V}_{j}^{H}\mathbf{V}_{j}^{\text{null}}=\mathbf{0}$.
The optimization problem to find the balanced precoding to maximize the total
harvested energy can be cast for each $j^{th}$ precoder as
$\displaystyle\max_{\mathbf{S}_{j},\mathbf{Z}_{j},\mathbf{X}_{j},\mathbf{Y}_{j}}\left\|\mathbf{H}_{rj}\mathbf{V}_{j}^{BAL}\right\|_{F}^{2}$
(22a) $\displaystyle\text{subject to
}\mathbf{V}_{j}^{BAL}=\mathbf{V}_{j}\mathbf{X}_{j}\mathbf{Y}_{j}+\mathbf{V}_{j}^{\text{null}}\mathbf{S}_{j}\mathbf{Z}_{j},$
(22b)
$\displaystyle\text{tr}\left(\mathbf{Z}_{j}\mathbf{Z}_{j}^{H}\right)=\text{tr}\left(\mathbf{I}-\mathbf{Y}_{j}\mathbf{Y}_{j}^{H}\right)\leq
z_{j},$ (22c) $\displaystyle\mathbf{Z}_{j},\mathbf{Y}_{j}\text{ are diagonal
matrices},$ (22d)
$\displaystyle\mathbf{X}_{j}^{H}\mathbf{X}_{j}=\mathbf{X}_{j}\mathbf{X}_{j}^{H}=\mathbf{I}_{d},$
(22e) $\displaystyle\mathbf{S}_{j}^{H}\mathbf{S}_{j}=\mathbf{I}_{d}.$ (22f)
The solution to the above problem is obtained as follows. First,
$\mathbf{S}_{j}$ is computed, followed by the computation of $\mathbf{Z}_{j}$
and $\mathbf{X}_{j}$.
### IV-B Getting $\mathbf{S}_{j}$
Using the triangle inequality, the objective function in (22a) can be upper
bounded as
$\displaystyle\left\|\mathbf{H}_{rj}\left(\mathbf{V}_{j}\mathbf{X}_{j}\mathbf{Y}_{j}+\mathbf{V}_{j}^{\text{null}}\mathbf{S}_{j}\mathbf{Z}_{j}\right)\right\|_{F}$
$\displaystyle\leq\left\|\mathbf{H}_{rj}\mathbf{V}_{j}\mathbf{X}_{j}\mathbf{Y}_{j}\right\|_{F}+\left\|\mathbf{H}_{rj}\mathbf{V}_{j}^{\text{null}}\mathbf{S}_{j}\mathbf{Z}_{j}\right\|_{F},$
(23)
where the equality occurs when both
$\mathbf{H}_{rj}\mathbf{V}_{j}\mathbf{X}_{j}\mathbf{Y}_{j}$ and
$\mathbf{H}_{rj}\mathbf{V}_{j}^{\text{null}}\mathbf{S}_{j}\mathbf{Z}_{j}$ are
in the same direction or proportional to each other. Since both the precoder
$\mathbf{V}_{j}$ and its null space $\mathbf{V}_{j}^{\text{null}}$ are present
in the above norm expression, the equality cannot be achieved when $z_{j}>0$
or $\mathbf{Z}_{j}\neq\mathbf{0}$. A best-effort alignment of these
matrices can be obtained via the following optimization problem:
$\displaystyle\arg\min_{\mathbf{S}_{j},\mathbf{Z}_{j},\mathbf{X}_{j},\mathbf{Y}_{j}}d_{c}^{2}\left(\mathbb{O}\left[\mathbf{H}_{rj}\mathbf{V}_{j}\mathbf{X}_{j}\mathbf{Y}_{j}\right],\mathbb{O}\left[\mathbf{H}_{rj}\mathbf{V}_{j}^{\text{null}}\mathbf{S}_{j}\mathbf{Z}_{j}\right]\right),$
$\displaystyle\stackrel{{\scriptstyle(a)}}{{=}}\arg\min_{\mathbf{S}_{j}}d_{c}^{2}\left(\mathbb{O}\left[\mathbf{H}_{rj}\mathbf{V}_{j}\right],\mathbb{O}\left[\mathbf{H}_{rj}\mathbf{V}_{j}^{\text{null}}\mathbf{S}_{j}\right]\right),$
$\displaystyle\stackrel{{\scriptstyle(b)}}{{=}}\arg\max_{\mathbf{S}_{j}^{H}\mathbf{S}_{j}=\mathbf{I}}\text{tr}\left(\mathbf{D}_{Vj}\mathbf{V}_{j}^{H}\mathbf{H}_{rj}^{H}\mathbf{H}_{rj}\mathbf{V}_{j}^{\text{null}}\mathbf{S}_{j}\mathbf{D}_{Vnj}\right),$
(24)
where in $(a)$, the orthogonalization property is used, since each matrix
inside $\mathbb{O}[\cdot]$ represents the same column space before and after
dropping the invertible factors; in $(b)$, the definition of CD,
$\mathbb{O}\left[\mathbf{A}\right]=\mathbf{A}\left(\mathbf{A}^{H}\mathbf{A}\right)^{-1/2}$,
$\mathbf{D}_{Vj}=\left(\mathbf{V}_{j}^{H}\mathbf{H}_{rj}^{H}\mathbf{H}_{rj}\mathbf{V}_{j}\right)^{-1/2}$,
and
$\mathbf{D}_{Vnj}=\left(\mathbf{S}_{j}^{H}\mathbf{V}_{j}^{\text{null}H}\mathbf{H}_{rj}^{H}\mathbf{H}_{rj}\mathbf{V}_{j}^{\text{null}}\mathbf{S}_{j}\right)^{-1/2}$
are used. From $(b)$, the solution is obtained by choosing the columns in the
same directions as
$\mathbf{V}_{j}^{\text{null}H}\mathbf{H}_{rj}^{H}\mathbf{H}_{rj}\mathbf{V}_{j}$
to maximize the trace-value as
$\displaystyle\mathbf{S}_{j}$
$\displaystyle=\mathbb{O}\left[\mathbf{V}_{j}^{\text{null}H}\mathbf{H}_{rj}^{H}\mathbf{H}_{rj}\mathbf{V}_{j}\mathbf{D}_{Vj}\mathbf{D}_{Vnj}\right]$
$\displaystyle\equiv\mathbb{O}\left[\mathbf{V}_{j}^{\text{null}H}\mathbf{H}_{rj}^{H}\mathbf{H}_{rj}\mathbf{V}_{j}\right],$
(25)
where the equivalence holds because $\mathbf{X}_{j}$,
$\mathbf{Y}_{j}$ and $\mathbf{Z}_{j}$ are unknown at this stage, and thus, $\mathbf{S}_{j}$
can be independently and equivalently computed first. Further, letting
$\mathbf{A}_{j}=\mathbf{V}_{j}^{\text{null}H}\mathbf{H}_{rj}^{H}\mathbf{H}_{rj}\mathbf{V}_{j}$,
the cross-term below can be simplified as
$\text{tr}\left(\mathbf{Y}_{j}^{H}\mathbf{X}_{j}^{H}\mathbf{V}_{j}^{H}\mathbf{H}_{rj}^{H}\mathbf{H}_{rj}\mathbf{V}_{j}^{\text{null}}\mathbf{S}_{j}\mathbf{Z}_{j}\right)=\text{tr}\left(\mathbf{Z}_{j}\mathbf{Y}_{j}^{H}\mathbf{X}_{j}^{H}\left(\mathbf{A}_{j}^{H}\mathbf{A}_{j}\right)^{1/2}\right).$
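The orthogonalization operator $\mathbb{O}[\mathbf{A}]=\mathbf{A}(\mathbf{A}^{H}\mathbf{A})^{-1/2}$ and the resulting $\mathbf{S}_{j}$ in (25) can be sketched as follows (random channel for illustration; the key property checked is that $\mathbb{O}[\cdot]$ returns an orthonormal matrix):

```python
import numpy as np

rng = np.random.default_rng(3)
R, M, d = 6, 6, 2

def O(A):
    # Orthogonalization O[A] = A (A^H A)^(-1/2), via EVD of the d x d Gram matrix.
    w, U = np.linalg.eigh(A.conj().T @ A)
    return A @ (U @ np.diag(w**-0.5) @ U.conj().T)

H_rj = (rng.standard_normal((R, M)) + 1j * rng.standard_normal((R, M))) / np.sqrt(2)
V_j = np.linalg.qr(rng.standard_normal((M, d)) + 1j * rng.standard_normal((M, d)))[0]
V_null = np.linalg.qr(V_j, mode='complete')[0][:, d:]   # orthonormal complement, M x (M-d)

# S_j from (25): orthonormalized coupling between null-space and precoder directions.
A_j = V_null.conj().T @ H_rj.conj().T @ H_rj @ V_j
S_j = O(A_j)

assert np.allclose(S_j.conj().T @ S_j, np.eye(d))   # S_j^H S_j = I_d
```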
### IV-C Getting $\mathbf{Z}_{j}$ and $\mathbf{X}_{j}$: an iterative approach
Further, from (23), squaring both sides and expanding yields the
Cauchy–Schwarz inequality
$\displaystyle\Re\left[\text{tr}\left(\mathbf{Y}_{j}^{H}\mathbf{X}_{j}^{H}\mathbf{V}_{j}^{H}\mathbf{H}_{rj}^{H}\mathbf{H}_{rj}\mathbf{V}_{j}^{\text{null}}\mathbf{S}_{j}\mathbf{Z}_{j}\right)\right]$
$\displaystyle\leq\left\|\mathbf{H}_{rj}\mathbf{V}_{j}\mathbf{X}_{j}\mathbf{Y}_{j}\right\|_{F}\left\|\mathbf{H}_{rj}\mathbf{V}_{j}^{\text{null}}\mathbf{S}_{j}\mathbf{Z}_{j}\right\|_{F},$
(26)
which suggests that equivalently, the above cross-term can be maximized to get
the maximum harvested energy.
Since the matrices $\mathbf{Y}_{j}$ and $\mathbf{Z}_{j}$ are diagonal, the
matrix $\mathbf{Y}_{j}=\mathcal{D}\left(y_{j1},\ldots,y_{jd}\right)$ can be
obtained from $\mathbf{Z}_{j}=\mathcal{D}\left(z_{j1},\ldots,z_{jd}\right)$
using (16b) as
$y_{ji}=+\sqrt{1-z_{ji}^{2}},\forall i=1,\ldots,d,$ (27)
which satisfies the constraint in (22c). The remaining components of the CD
decomposition are computed as the solution to the following optimization
problem:
$\displaystyle\max_{\mathbf{Z}_{j},\mathbf{X}_{j}}\Re\left[\text{tr}\left(\mathbf{Y}_{j}^{H}\mathbf{X}_{j}^{H}\mathbf{V}_{j}^{H}\mathbf{H}_{rj}^{H}\mathbf{H}_{rj}\mathbf{V}_{j}^{\text{null}}\mathbf{S}_{j}\mathbf{Z}_{j}\right)\right],$
which is a non-convex problem due to the product of $\mathbf{Z}_{j}$ and
$\mathbf{X}_{j}$. An efficient way to solve the problem is via an iterative
method, where $\mathbf{X}_{j}$ and $\mathbf{Z}_{j}$ are solved for alternately.
Given $\mathbf{Z}_{j}$ and $\mathbf{Y}_{j}$, the optimization problem above
reduces to a convex problem for $\mathbf{X}_{j}$ as
$\displaystyle\max_{\mathbf{X}_{j}}\Re\left[\text{tr}\left(\mathbf{Y}_{j}^{H}\mathbf{X}_{j}^{H}\mathbf{V}_{j}^{H}\mathbf{H}_{rj}^{H}\mathbf{H}_{rj}\mathbf{V}_{j}^{\text{null}}\mathbf{S}_{j}\mathbf{Z}_{j}\right)\right]$
(28a) $\displaystyle\text{subject to }\|\mathbf{X}_{j}\|\leq 1,$ (28b)
where the spectral norm constraint above leads to the same constraint in
(22e). The solution for $\mathbf{X}_{j}$ can be obtained by choosing the same
column directions as of
$\mathbf{V}_{j}^{H}\mathbf{H}_{rj}^{H}\mathbf{H}_{rj}\mathbf{V}_{j}^{\text{null}}\mathbf{S}_{j}\mathbf{Z}_{j}\mathbf{Y}_{j}^{H}$,
i.e.,
$\displaystyle\mathbf{X}_{j}$
$\displaystyle=\mathbb{O}\left[\mathbf{V}_{j}^{H}\mathbf{H}_{rj}^{H}\mathbf{H}_{rj}\mathbf{V}_{j}^{\text{null}}\mathbf{S}_{j}\mathbf{Z}_{j}\mathbf{Y}_{j}^{H}\right].$
(29)
Note that the above $\mathbf{X}_{j}$ cannot be equivalently set to
$\mathbb{O}\left[\mathbf{V}_{j}^{H}\mathbf{H}_{rj}^{H}\mathbf{H}_{rj}\mathbf{V}_{j}^{\text{null}}\mathbf{S}_{j}\right]$,
since the weighting by $\mathbf{Z}_{j}\mathbf{Y}_{j}^{H}$ determines the
particular column directions. Further, substituting
$\mathbf{X}_{j}$ in the trace yields the following result.
###### Proposition 6.
With the above selection of $\mathbf{X}_{j}$, the trace-value is non-negative
$\displaystyle\text{tr}\left(\mathbf{Y}_{j}^{H}\mathbf{X}_{j}^{H}\mathbf{V}_{j}^{H}\mathbf{H}_{rj}^{H}\mathbf{H}_{rj}\mathbf{V}_{j}^{\text{null}}\mathbf{S}_{j}\mathbf{Z}_{j}\right)=\text{tr}\left(\left(\mathbf{B}_{j}^{H}\mathbf{B}_{j}\right)^{1/2}\right)\geq
0,$
where
$\mathbf{B}_{j}=\mathbf{V}_{j}^{H}\mathbf{H}_{rj}^{H}\mathbf{H}_{rj}\mathbf{V}_{j}^{\text{null}}\mathbf{S}_{j}\mathbf{Z}_{j}\mathbf{Y}_{j}^{H}=\left(\mathbf{A}_{j}^{H}\mathbf{A}_{j}\right)^{1/2}\mathbf{Z}_{j}\mathbf{Y}_{j}^{H}$,
and the equality occurs when $z_{j}=0$.
Next, given $\mathbf{Y}_{j}$, $\mathbf{X}_{j}$ and $z_{j}<z_{j}^{EH}$, the
diagonal matrix $\mathbf{Z}_{j}$ can be obtained from the following convex
problem as
$\displaystyle\max_{\mathbf{Z}_{j}}\Re\left[\text{tr}\left(\mathbf{Y}_{j}^{H}\mathbf{X}_{j}^{H}\mathbf{V}_{j}^{H}\mathbf{H}_{rj}^{H}\mathbf{H}_{rj}\mathbf{V}_{j}^{\text{null}}\mathbf{S}_{j}\mathbf{Z}_{j}\right)\right]$
(30a) $\displaystyle\text{subject to }\|\mathbf{Z}_{j}\|_{F}\leq\sqrt{z_{j}},$
(30b) $\displaystyle\mathbf{Z}_{j}\text{ is a diagonal matrix},$ (30c)
$\displaystyle\mathbf{0}\preceq\mathbf{Z}_{j}\preceq\mathbf{I}_{d}.$ (30d)
We can equivalently recast the above problem as
$\displaystyle\max_{z_{ji},\forall i}\sum_{i=1}^{d}c_{ji}z_{ji}$ (31a)
$\displaystyle\text{subject to }\sum_{i=1}^{d}z_{ji}^{2}\leq z_{j},$
(31b) $\displaystyle 0\leq z_{ji}\leq 1,\forall i=1,\ldots,d,$ (31c)
where
$c_{ji}=\left[\mathbf{Y}_{j}^{H}\mathbf{X}_{j}^{H}\mathbf{V}_{j}^{H}\mathbf{H}_{rj}^{H}\mathbf{H}_{rj}\mathbf{V}_{j}^{\text{null}}\mathbf{S}_{j}\right]_{i,i},\forall
i=1,\ldots,d$. The values $c_{ji},\forall i$ are real and non-negative from
Proposition 6. The solution to the above problem is given by choosing
$\mathbf{z}_{j}$ proportional to $\mathbf{c}_{j}$ and scaling it to satisfy the norm
constraint. Thus, we write
$z_{ji}=\min\left(\sqrt{z_{j}}\frac{c_{ji}}{\|\mathbf{c}_{j}\|},1\right)$, and
then renormalize the unclipped entries in
$\mathcal{I}=\left\\{i:z_{ji}<1\right\\}$ to satisfy
$\sum_{i\in\mathcal{I}}z_{ji}^{2}=z_{j}-\left(d-\left|\mathcal{I}\right|\right)$, i.e.,
$z_{ji}\leftarrow\frac{z_{ji}}{\sqrt{\sum_{i\in\mathcal{I}}z_{ji}^{2}}}\sqrt{z_{j}-\left(d-\left|\mathcal{I}\right|\right)},\forall
i\in\mathcal{I}$.
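The closed-form solution of (31) — align $\mathbf{z}_{j}$ with $\mathbf{c}_{j}$, clip at one, and renormalize the unclipped entries — can be sketched as follows (`solve_Z` is an illustrative helper, not from the paper):

```python
import numpy as np

def solve_Z(c, z_budget):
    """Maximize sum(c_i * z_i) s.t. sum(z_i^2) <= z_budget, 0 <= z_i <= 1.

    c: non-negative coefficients c_ji; z_budget: squared-CD budget z_j (<= len(c))."""
    c = np.asarray(c, dtype=float)
    z = np.minimum(np.sqrt(z_budget) * c / np.linalg.norm(c), 1.0)
    # Renormalize the unclipped entries so the norm constraint is tight.
    I = z < 1.0
    residual = z_budget - np.sum(~I)    # budget left after the clipped (=1) entries
    if np.any(I) and residual > 0:
        z[I] *= np.sqrt(residual) / np.linalg.norm(z[I])
    return z

z = solve_Z([3.0, 1.0], z_budget=0.5)
assert np.isclose(np.sum(z**2), 0.5) and z[0] > z[1]   # proportional to c, tight budget
```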
### IV-D Iterative CD algorithm
1:$\mathbf{H}_{rj}$, $\mathbf{V}_{j}$ and $z_{j}$.
2:$\mathbf{V}_{j}^{BAL}$.
3:if $z_{j}>z_{j}^{EH}$ then
4: Return $\mathbf{V}_{j}^{BAL}=\mathbf{V}_{j}^{EH}$.
5:else
6: Compute
$\mathbf{S}_{j}=\mathbb{O}\left[\mathbf{V}_{j}^{\text{null}H}\mathbf{H}_{rj}^{H}\mathbf{H}_{rj}\mathbf{V}_{j}\right]$.
7: Initialize $\mathbf{Z}_{j}=\sqrt{\frac{z_{j}}{d}}\mathbf{I}_{d}$ and
$\mathbf{Y}_{j}$ by (27).
8: Solve (28a) to get $\mathbf{X}_{j}$.
9: Solve (30a) to get $\mathbf{Z}_{j}$.
10: Get $\mathbf{Y}_{j}$ by (27).
11: Go to step 8 until convergence.
12: Return $\mathbf{V}_{j}^{BAL}$ via (22b).
13:end if
Algorithm 1 Iterative CD decomposition procedure.
Now, with all components obtained, the resulting balanced precoder can be
computed via (22b). A summary of this procedure is given in Algorithm 1. If
$z_{j}>z_{j}^{EH}$, we choose the energy-optimized precoder as the balanced
precoder, $\mathbf{V}_{j}^{BAL}=\mathbf{V}_{j}^{EH}$. Regarding the
convergence, since both $\mathbf{Z}_{j}$ and
$\mathbf{X}_{j}$ maximize the same objective, convergence is
guaranteed with a global optimum value. Regarding the number of iterations, we
observe via simulations that it takes only a few ($4$ to $8$) iterations to
converge.
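Putting the pieces together, Algorithm 1 can be sketched in NumPy (a minimal single-precoder sketch under the same illustrative assumptions as above; `orth` and `O` are hypothetical helpers, and a small guard enforces the non-negativity of $c_{ji}$ from Proposition 6):

```python
import numpy as np

rng = np.random.default_rng(4)
R, M, d, z_j = 6, 6, 2, 0.4   # illustrative sizes and CD budget

def orth(A):
    return np.linalg.qr(A)[0]

def O(A):   # orthogonalization O[A] = A (A^H A)^(-1/2)
    w, U = np.linalg.eigh(A.conj().T @ A)
    return A @ (U @ np.diag(w**-0.5) @ U.conj().T)

H = (rng.standard_normal((R, M)) + 1j * rng.standard_normal((R, M))) / np.sqrt(2)
V = orth(rng.standard_normal((M, d)) + 1j * rng.standard_normal((M, d)))  # stand-in for the TWRIA precoder
V_null = np.linalg.qr(V, mode='complete')[0][:, d:]
G = H.conj().T @ H                                   # Gram matrix H_rj^H H_rj

S = O(V_null.conj().T @ G @ V)                       # step 6: S_j via (25)
z = np.full(d, np.sqrt(z_j / d))                     # step 7: Z_j = sqrt(z_j/d) I

for _ in range(8):                                   # steps 8-11: alternate X_j and Z_j
    Y = np.diag(np.sqrt(1 - z**2))                   # (27)
    X = O(V.conj().T @ G @ V_null @ S @ np.diag(z) @ Y)              # (29)
    c = np.real(np.diag(Y @ X.conj().T @ V.conj().T @ G @ V_null @ S))
    c = np.maximum(c, 1e-12)                         # non-negative by Proposition 6 (guard)
    z = np.minimum(np.sqrt(z_j) * c / np.linalg.norm(c), 1.0)
    I = z < 1.0
    if np.any(I):                                    # renormalize the unclipped entries
        z[I] *= np.sqrt(max(z_j - np.sum(~I), 0.0)) / np.linalg.norm(z[I])

Y = np.diag(np.sqrt(1 - z**2))
V_bal = V @ X @ Y + V_null @ S @ np.diag(z)          # step 12: (22b)

assert np.allclose(V_bal.conj().T @ V_bal, np.eye(d))                  # still orthonormal
assert d - np.linalg.norm(V.conj().T @ V_bal, 'fro')**2 <= z_j + 1e-9  # CD within budget
```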
### IV-E Computational complexity
The product $\mathbf{H}_{j}^{H}\mathbf{H}_{j}$ and its EVD need
$\mathcal{O}\left(M^{2}R\right)$ and $\mathcal{O}\left(M^{3}\right)$
operations, respectively. For $\mathbf{S}_{j}$, the product and $\mathbb{O}[\cdot]$ need
$\mathcal{O}\left(M^{2}R\right)$ and
$\mathcal{O}\left(d^{2}\cdot(M-d)+d^{3}\right)=\mathcal{O}\left(Md^{2}\right)$
operations, respectively. The remaining operations cost less than $\mathcal{O}\left(M^{3}\right)$,
since $R$ is of the order of $M$. Thus, Algorithm 1 has
$\mathcal{O}\left(M^{3}+Md^{2}N_{I}\right)\approx\mathcal{O}\left(M^{3}\right)$
computational complexity, where the number of iterations $N_{I}$ for
convergence is small ($4$ to $8$), i.e., $N_{I}\ll\frac{M^{2}}{d^{2}}$.
### IV-F Bounds
Note that an arbitrary balanced precoding cannot guarantee improved harvested energy. For the proposed balanced precoding, however, the following bounds can be obtained.
###### Lemma 7.
Given the balanced precoding $\left\\{\mathbf{V}_{k}^{BAL},\forall k\right\\}$
for the channel $\left\\{\mathbf{H}_{kj},\forall k,j\right\\}$ with the TWRIA
precoders $\left\\{\mathbf{V}_{k},\forall k\right\\}$, the total harvested
energy can be bounded as
$\displaystyle\zeta\rho\sum_{j=1}^{2K}\frac{P_{j}}{d}$
$\displaystyle\left[\left\|\mathbf{H}_{rj}\mathbf{V}_{j}\right\|_{F}^{2}\left(1-\frac{z_{j}}{d}\right)+\left\|\mathbf{H}_{rj}\mathbf{V}_{j}^{\text{n}}\right\|_{F}^{2}\left(\frac{z_{j}}{d}\right)\right]$
$\displaystyle\leq
Q_{r}(\rho,\mathbf{V}_{j}^{BAL})\leq\zeta\rho\sum_{j=1}^{2K}P_{j}\lambda_{j1}.$
(32)
###### Proof:
Proof is given in Appendix-D. ∎
The above result shows an improvement over (3b): the balanced precoding yields higher harvested energy than that achieved with the perfect IA precoders alone whenever $z_{j}>0$ for some $j$ and $\rho>0$. The corresponding rate loss can be obtained from the upper bound in Lemma 5.
With $\mathcal{CN}(0,1)$ entries for the matrices $\mathbf{H}_{rj}$ and $P_{j}=P,\forall j$, taking the expectation of both sides of the above bound gives
$2KP\zeta\rho R\approx\mathbb{E}\left\\{Q_{r}(\rho)\right\\}\leq 2KP\zeta\rho
Rd\left(\frac{R+d}{Rd+1}\right)^{2/3},$ (33)
where the left approximation is obtained assuming $\mathbb{E}\left\\{\left\|\mathbf{H}_{rj}\mathbf{V}_{j}^{BAL}\right\|_{F}^{2}\right\\}\approx Rd$, and the right inequality follows from $\mathbb{E}\left\\{\lambda_{j1}\right\\}=Rd\left(\frac{R+d}{Rd+1}\right)^{2/3}$ [56].
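The left-hand approximation is easy to check numerically: for an $R\times M$ matrix with i.i.d. $\mathcal{CN}(0,1)$ entries and an independent $M\times d$ orthonormal matrix, the expected squared Frobenius norm of their product is exactly $Rd$. A quick Monte Carlo sketch (dimensions chosen to match the simulated system; all numeric values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
R, M, d, trials = 6, 6, 2, 4000

def cn(rows, cols):
    # i.i.d. CN(0,1) entries: unit variance per complex entry
    return (rng.standard_normal((rows, cols))
            + 1j * rng.standard_normal((rows, cols))) / np.sqrt(2)

acc = 0.0
for _ in range(trials):
    H = cn(R, M)
    # random orthonormal M x d precoder via reduced QR
    Q, _ = np.linalg.qr(cn(M, d))
    acc += np.linalg.norm(H @ Q, 'fro') ** 2
estimate = acc / trials   # concentrates around R * d
```

With $R=6$ and $d=2$ the estimate concentrates around $Rd=12$, matching the approximation used on the left-hand side of (33).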
## V Simulation Results
We consider $2K=6$ nodes, each transmitting $d=2$ data streams via a relay with $R=Kd=M=6$ antennas, i.e., a $\left(6,6,2\right)^{6}$ system, which is analogous to an IC system $\left(6\times 6,2\right)^{6}$. The noise variances at the receivers are assumed to be unity, $\sigma^{2}=\sigma_{R}^{2}=1$, while the noise variance at the splitters is assumed to be $\delta^{2}=0.1$ when $\rho>0$; the EH conversion efficiency is set to $\zeta=0.5$; and the transmit power at the relay is taken to be the same as the transmit power of the users, $P_{j}=P_{r},\forall j$. For the balanced precoding, the iterative CD algorithm is run for $6$ iterations. The QPSK symbol error rate performance is averaged over $20,000$ symbols. In the following figures, we compare the three precoding strategies given below.
* •
(MAX-EH) Harvested energy maximizing precoder;
* •
(Span-IA) Balanced precoders from subspace alignment method with $z=0,0.1$
[57, 48];
* •
(TWRIA) Balanced precoder from MMSE based IA algorithm [43] with $z=0,0.1$.
### V-A Convergence of TWRIA Algorithm
Figure 2: Convergence of TWRIA algorithm with $32$ different initializations
for $\left(6\times 6,2\right)^{6}$ TWR system for $8.5$, $17$ and $25$ dB SNR
values, respectively.
Figure 2 illustrates the sum rate of a $(6\times 6,2)^{6}$ system versus the
number of iterations for $32$ different precoder initializations for TWRIA
Algorithm 2 with respect to different values of SNR. As the iteration number
increases, the sum rate improves and converges globally after enough
iterations. The rate of convergence, i.e., the number of required iterations
for convergence, depends on the operating SNR. An average of $10^{3}$, $10^{4}$, and $10^{5}$ iterations is required for convergence at $8.5$, $17$, and $25$ dB SNR, respectively.
### V-B Sum rate versus SNR
Figure 3: Sum rate versus SNR plot for TWR system for $\left(6\times
6,2\right)^{6}$ system.
Figure 3 plots the sum rate versus SNR for the three types of precoders. First, it can be seen that the TWRIA algorithm provides better sum rates, which scale linearly with SNR, than the subspace alignment method (span-IA). Also, when the precoder is balanced for better energy harvesting, the sum rates decrease. When only a single user employs balanced precoding, the sum rate is not reduced by a large amount, i.e., there is much less degree-of-freedom loss, compared to the case when all users use balanced precoding with the same CD value. Based on the required energy at the relay, the precoding can be balanced at the users via different CD values. For a fixed CD value, the corresponding rate loss increases with SNR, as analyzed earlier in Lemma 5. Further, the decrease of the sum rate with respect to the CD values can also be seen in the following rate-energy plots.
### V-C Rate-energy plots
Given the precoders $\left\\{\mathbf{V}_{k},\forall k\right\\}$, the rate-
energy region can be written as
$\mathcal{C}=\left\\{\left(R,Q\right):R\leq\sum_{k=1}^{K}R_{k}\left(\bar{\rho},\mathbf{V}_{k}\right),Q\leq
Q_{r}\left(\rho,\mathbf{V}_{k}\right)\right\\}.$ (34)
Figure 4: Rate-energy plots for TWR $\left(6\times 6,2\right)^{6}$ system for
different methods at $25$ dB SNR.
For a splitting noise variance $\delta^{2}=0.1$, parametric plots are drawn to illustrate the rate-energy regions [58, 59]. Figure 4 compares the rate-energy regions of the different precoding schemes. The trend of the different methods with various CD values is similar to that in Figure 3. These rate-energy plots show that the harvested energy can be improved at the expense of sum-rate optimality.
Figure 5: Rate-energy plots for TWRIA algorithm for TWR $\left(6\times
6,2\right)^{6}$ system at $25$ dB SNR.
Figure 5 shows the sum rate versus the harvested energy for the aforementioned precoders with and without balanced precoding. It can be noted that the TWRIA region provides higher sum rates and lower energies, while the region for the MAX-EH precoders has lower sum rates and higher energies. These plots represent the two extreme ends of rate and energy achievability. Next, for the balanced precoding, it can be observed that as $z$ increases, the rate decreases and the energy increases, as long as $z<\min_{j}z_{j}^{EH}$, where $\min_{j}z_{j}^{EH}$ represents the threshold for the CD. When $z>\min_{j}z_{j}^{EH}$, both the achieved rate and energy are lower. Therefore, the value of the CD ($z$) must be selected below this threshold to balance both the rates and the energy at each user.
Figure 6: Rate-energy versus the squared CD plot for TWR system for
$\left(6\times 6,2\right)^{6}$ system at $25$ dB SNR.
Figure 6 plots the sum rate (right-axis in red) and the harvested energy
(left-axis in blue) versus the squared CD
$z=d_{c}^{2}\left(\mathbf{V}_{j},\mathbf{V}_{j}^{BAL}\right),\forall j$
required for the balanced precoding with the TWRIA and span-IA methods. It can be seen that the sum rate decreases logarithmically as $z$ increases; this behavior was analyzed via the rate-loss upper bound in Lemma 5. On the other hand, the harvested energy increases with $z$.
### V-D QPSK symbol error rate versus SNR
Figure 7: QPSK symbol error rate versus SNR plot for TWR system for
$\left(6\times 6,2\right)^{6}$ system.
Figure 7 depicts the average symbol error rate (SER) with uncoded QPSK modulation for the $(6\times 6,2)^{6}$ system. It can be seen that the perfect IA precoders ($z=0$) achieve the minimum SER, while with $z=0.1$ the SER saturates. It can be noted that the trend of the error-rate curves is the same as in Figure 3, and the MAX-EH and span-IA precoders yield worse SERs than TWRIA.
## VI Conclusion
In this paper, a modified IA algorithm is presented for the TWR system with AF relaying. To improve SWIPT, i.e., to obtain better rate-energy trade-offs, CD decomposition-based balanced precoding has been investigated, which has lower complexity than the SDR-based convex-relaxation suboptimal methods in the literature. Further, maximum-energy precoders, a rate-loss upper bound, and harvested-energy bounds are derived to quantify and compare the rate-energy trade-offs. Simulations against other IA methods confirm the effectiveness of the proposed IA and CD decomposition-based balanced precoding schemes.
Future work will investigate the effect of quantized or analog feedback, including the specifics of 5G New Radio scenarios with imperfect CSI.
## Appendices
### -A IA Algorithm for TWR system
In the following, precoder and combiner expressions are derived to minimize
the mean squared error (MSE), followed by an iterative procedure.
#### -A1 Receiver design
Given the precoders $\mathbf{V}_{j}$ for $\forall j\neq k$, the receive
combiner $\mathbf{U}_{k}$ at the $k^{th}$ receiver can be obtained to minimize
the MSE [43] as
$\displaystyle\min_{\mathbf{U}_{k}^{H}\mathbf{U}_{k}=\mathbf{I}_{d}}\mathbb{E}\left\|\hat{\mathbf{y}}_{k}-\mathbf{s}_{k^{\prime}}\right\|_{2}^{2}.$
The MSE at the $k^{th}$ user can be simplified as
$\displaystyle\mathbb{E}\left\|\hat{\mathbf{y}}_{k}-\mathbf{s}_{k^{\prime}}\right\|_{2}^{2}$
$\displaystyle=\mathbb{E}\|\left(\sqrt{\bar{\rho}}\mathbf{U}_{k}^{H}\mathbf{H}_{kk^{\prime}}\mathbf{V}_{k^{\prime}}-\mathbf{I}\right)\mathbf{s}_{k^{\prime}}\|_{2}^{2}+\mathbb{E}\|\mathbf{U}_{k}^{H}\mathbf{n}_{k}\|_{2}^{2}$
$\displaystyle+\bar{\rho}\sum_{j\neq
k,k^{\prime}}\mathbb{E}\|\mathbf{U}_{k}^{H}\mathbf{H}_{kj}\mathbf{V}_{j}\mathbf{s}_{j}\|_{2}^{2}+\bar{\rho}\mathbb{E}\|\mathbf{U}_{k}^{H}\mathbf{H}_{kr}\mathbf{G}\tilde{\mathbf{w}}_{r}\|_{2}^{2}$
$\displaystyle=\|\sqrt{\bar{\rho}}\mathbf{U}_{k}^{H}\mathbf{H}_{kk^{\prime}}\mathbf{V}_{k^{\prime}}-\mathbf{I}\|_{F}^{2}\frac{P_{k^{\prime}}}{d}+\sigma^{2}\|\mathbf{U}_{k}\|_{F}^{2}$
$\displaystyle+\bar{\rho}\sum_{j\neq
k,k^{\prime}}\frac{P_{j}}{d}\|\mathbf{U}_{k}^{H}\mathbf{H}_{kj}\mathbf{V}_{j}\|_{F}^{2}+\bar{\rho}\sigma_{ID}^{2}\|\mathbf{U}_{k}^{H}\mathbf{H}_{kr}\mathbf{G}\|_{F}^{2}$
$\displaystyle=P_{k^{\prime}}-\frac{2\bar{\rho}P_{k^{\prime}}}{d}tr\Re\left(\mathbf{U}_{k}^{H}\mathbf{H}_{kk^{\prime}}\mathbf{V}_{k^{\prime}}\right)$
$\displaystyle+\bar{\rho}\sum_{j\neq
k}\frac{P_{j}}{d}\|\mathbf{U}_{k}^{H}\mathbf{H}_{kj}\mathbf{V}_{j}\|_{F}^{2}+tr\left(\mathbf{U}_{k}^{H}\mathbf{C}_{k}\mathbf{U}_{k}\right).$
Differentiating with respect to $\mathbf{U}_{k}$ gives
$\displaystyle\frac{\bar{\rho}P_{k^{\prime}}}{d}\mathbf{H}_{kk^{\prime}}\mathbf{V}_{k^{\prime}}=\sum_{j\neq
k}\frac{\bar{\rho}P_{j}}{d}\mathbf{H}_{kj}\mathbf{V}_{j}\mathbf{V}_{j}^{H}\mathbf{H}_{kj}^{H}\mathbf{U}_{k}+\mathbf{C}_{k}\mathbf{U}_{k},$
yielding $\mathbf{U}_{k}=$
$\mathbb{O}\left[\left(\sum_{j\neq
k}\frac{\bar{\rho}P_{j}}{d}\mathbf{H}_{kj}\mathbf{V}_{j}\mathbf{V}_{j}^{H}\mathbf{H}_{kj}^{H}+\mathbf{C}_{k}\right)^{-1}\mathbf{H}_{kk^{\prime}}\mathbf{V}_{k^{\prime}}\frac{\bar{\rho}P_{k^{\prime}}}{d}\right],$
(35)
where $\mathbb{O}\left(\cdot\right)$ denotes the orthonormality operation as
defined in the notations section.
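For concreteness, a common realization of $\mathbb{O}[\mathbf{A}]$ is sketched below, assuming the operator returns the orthonormal matrix closest to $\mathbf{A}$ in Frobenius norm, i.e., the polar factor obtained from the SVD (the function name is illustrative):

```python
import numpy as np

def orthonormalize(A):
    """Polar factor of A: the orthonormal matrix minimizing ||Q - A||_F."""
    U, _, Vh = np.linalg.svd(A, full_matrices=False)
    return U @ Vh

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 2)) + 1j * rng.standard_normal((6, 2))
Q = orthonormalize(A)   # 6 x 2 matrix with Q^H Q = I_2
```

Note that positive scalar factors, such as $\frac{\bar{\rho}P_{k^{\prime}}}{d}$ in (35), only rescale the singular values and hence do not change the output of this operation.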
#### -A2 Precoder design
Similarly, given the combiners $\mathbf{U}_{k}$, the transmit precoder can be
optimized to minimize the total MSE as
$\displaystyle\min_{\mathbf{V}_{j}^{H}\mathbf{V}_{j}=\mathbf{I}_{d}}\sum_{k}\mathbb{E}\left\|\hat{\mathbf{y}}_{k}^{H}-\mathbf{s}_{k^{\prime}}^{H}\right\|_{2}^{2}.$
The total MSE across all receivers can be rearranged as
$\displaystyle\sum_{k}\mathbb{E}\left\|\hat{\mathbf{y}}_{k}^{H}-\mathbf{s}_{k^{\prime}}^{H}\right\|_{2}^{2}$
$\displaystyle=\sum_{k^{\prime}}P_{k^{\prime}}-\sum_{k}\frac{2\bar{\rho}P_{k^{\prime}}}{d}tr\Re\left(\mathbf{U}_{k}^{H}\mathbf{H}_{kk^{\prime}}\mathbf{V}_{k^{\prime}}\right)$
$\displaystyle+\bar{\rho}\sum_{k}\sum_{j\neq
k}\frac{P_{j}}{d}\|\mathbf{U}_{k}^{H}\mathbf{H}_{kj}\mathbf{V}_{j}\|_{F}^{2}+\sum_{k}tr\left(\mathbf{U}_{k}^{H}\mathbf{C}_{k}\mathbf{U}_{k}\right)$
$\displaystyle=\bar{\rho}\sum_{j}\frac{P_{j}}{d}\sum_{k\neq
j}\|\mathbf{V}_{j}^{H}\mathbf{H}_{kj}^{H}\mathbf{U}_{k}\|_{F}^{2}-\sum_{j}\frac{2\bar{\rho}P_{j}}{d}tr\Re\left(\mathbf{V}_{j}^{H}\mathbf{H}_{j^{\prime}j}^{H}\mathbf{U}_{j^{\prime}}\right)$
$\displaystyle+\sum_{k^{\prime}}P_{k^{\prime}}+\sum_{k}tr\left(\mathbf{U}_{k}^{H}\mathbf{C}_{k}\mathbf{U}_{k}\right).$
Differentiating the above MSE with respect to $\mathbf{V}_{j}$ provides
$\displaystyle\sum_{k\neq
j}\mathbf{H}_{kj}^{H}\mathbf{U}_{k}\mathbf{U}_{k}^{H}\mathbf{H}_{kj}\mathbf{V}_{j}=\mathbf{H}_{j^{\prime}j}^{H}\mathbf{U}_{j^{\prime}},$
and simplifying to
$\mathbf{V}_{j}=\mathbb{O}\left[\left(\sum_{k\neq
j}\mathbf{H}_{kj}^{H}\mathbf{U}_{k}\mathbf{U}_{k}^{H}\mathbf{H}_{kj}+\epsilon\mathbf{I}\right)^{-1}\mathbf{H}_{j^{\prime}j}^{H}\mathbf{U}_{j^{\prime}}\right],$
(36)
where $\epsilon\mathbf{I}_{M}$ is added as a regularization term to avoid inverting a singular matrix. The regularization parameter is chosen as $\frac{d}{P_{j}}\cdot\frac{tr(\mathbf{C}_{k})}{M}$, which is close to the normalized noise power at receiver $j$.
#### -A3 Relay design (amplify-and-forward)
For an amplify-and-forward relay with $\mathbf{G}=\alpha\mathbf{I}$, the factor $\alpha$ can be obtained by solving the power constraint
$tr\mathbb{E}\mathbf{G}\mathbf{y}_{r}^{ID}\mathbf{y}_{r}^{IDH}\mathbf{G}^{H}=\text{tr}\left(\mathbf{G}\left(\sum_{j=1}^{2K}\mathbf{H}_{rj}\mathbf{V}_{j}\mathbf{V}_{j}^{H}\mathbf{H}_{rj}^{H}\bar{\rho}\frac{P_{j}}{d}+\bar{\rho}\sigma_{ID}^{2}\mathbf{I}_{R}\right)\mathbf{G}^{H}\right)=P_{r}$.
Thus, we have
$\alpha^{2}=\frac{P_{r}}{\text{tr}\left(\sum_{j=1}^{2K}\bar{\rho}\frac{P_{j}}{d}\mathbf{H}_{rj}\mathbf{V}_{j}\mathbf{V}_{j}^{H}\mathbf{H}_{rj}^{H}+\bar{\rho}\sigma_{ID}^{2}\mathbf{I}_{R}\right)}.$
(37)
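A direct numerical check of (37) can be sketched as follows (randomly drawn channels and orthonormal precoders; all parameter values are illustrative): scaling the relay input covariance by $\alpha^{2}$ must meet the relay power budget exactly.

```python
import numpy as np

rng = np.random.default_rng(3)
R, M, d, K2 = 6, 6, 2, 6                   # relay antennas, user antennas, streams, 2K users
Pr, Pj, rho_bar, sigma_ID2 = 1.0, 1.0, 0.9, 0.1

def cn(r, c):
    # i.i.d. CN(0,1) entries
    return (rng.standard_normal((r, c)) + 1j * rng.standard_normal((r, c))) / np.sqrt(2)

H = [cn(R, M) for _ in range(K2)]          # H_rj
V = [np.linalg.qr(cn(M, d))[0] for _ in range(K2)]

# relay input covariance of the information-decoding branch
C = sum(rho_bar * Pj / d * Hj @ Vj @ Vj.conj().T @ Hj.conj().T
        for Hj, Vj in zip(H, V)) + rho_bar * sigma_ID2 * np.eye(R)
alpha = np.sqrt(Pr / np.real(np.trace(C)))
G = alpha * np.eye(R)
relay_power = np.real(np.trace(G @ C @ G.conj().T))   # meets the budget Pr
```

By construction, `relay_power` equals $P_{r}$ up to floating-point precision, confirming that (37) saturates the relay power constraint.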
#### -A4 TWR-IA Algorithm
Combining the above procedure, an iterative procedure is given in Algorithm 2.
1: Initialize precoders $\mathbf{V}_{j},\forall j$, and
$\mathbf{G}=\mathbf{I}_{R}$.
2:for $t=1,2,\ldots,\text{max\\_iter}$ do
3: Get $\mathbf{G}=\alpha\mathbf{I}_{R}$ by (37).
4: Obtain the combiner $\mathbf{U}_{k},\forall k$ via (35).
5: Compute the precoder $\mathbf{V}_{k},\forall k$ via (36).
6:end for
Algorithm 2 TWR-IA algorithm.
After initializing the precoders, the scalar $\alpha$, the decoders, and the precoders are iteratively updated until convergence. Since the scalar $\alpha$ changes every iteration, the effective channel is also updated per iteration. It can be seen via simulations that the number of iterations required for convergence depends on the SNR: the higher the SNR, the larger the number of iterations. Regarding convergence, the above problem can be shown to be jointly convex with respect to $\left(\mathbf{U}_{k},\mathbf{V}_{k},\forall k\right)$ as in [43]. Both steps minimize the same MSE, and the MSE is bounded below, which establishes the global convergence of the algorithm.
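A few sweeps of Algorithm 2 can be sketched as follows. This is a minimal illustration only, assuming $M$-antenna users, adjacent-index pairing $k\leftrightarrow k^{\prime}$, and $\mathbf{C}_{k}=\bar{\rho}\sigma_{ID}^{2}\mathbf{H}_{kr}\mathbf{G}\mathbf{G}^{H}\mathbf{H}_{kr}^{H}+\sigma^{2}\mathbf{I}_{M}$; the regularizer value and all numeric parameters are illustrative, and the precoder target uses the partner's combiner as in the MMSE derivation.

```python
import numpy as np

rng = np.random.default_rng(4)
R, M, d, K2 = 6, 6, 2, 6                 # relay/user antennas, streams, 2K users
Pr = Pj = 1.0
rho_bar, sigma2, sigma_ID2 = 0.9, 1.0, 0.1
eps = 1e-3                               # regularizer (hypothetical value)

def partner(k):                          # pair adjacent nodes: (0,1), (2,3), (4,5)
    return k + 1 if k % 2 == 0 else k - 1

def cn(r, c):                            # i.i.d. CN(0,1) entries
    return (rng.standard_normal((r, c)) + 1j * rng.standard_normal((r, c))) / np.sqrt(2)

def orth(A):                             # orthonormality operation O[.] (polar factor)
    U_, _, Vh = np.linalg.svd(A, full_matrices=False)
    return U_ @ Vh

Hr = [cn(R, M) for _ in range(K2)]       # user -> relay channels H_rj
Hk = [cn(M, R) for _ in range(K2)]       # relay -> user channels H_kr
V = [np.linalg.qr(cn(M, d))[0] for _ in range(K2)]

for _ in range(5):
    # step 3: relay gain G = alpha * I via the power constraint (37)
    C_in = sum(rho_bar * Pj / d * H @ Vj @ Vj.conj().T @ H.conj().T
               for H, Vj in zip(Hr, V)) + rho_bar * sigma_ID2 * np.eye(R)
    G = np.sqrt(Pr / np.real(np.trace(C_in))) * np.eye(R)
    Heff = [[Hk[k] @ G @ Hr[j] for j in range(K2)] for k in range(K2)]
    # step 4: MMSE combiners (35); positive scalar factors do not affect O[.]
    U = []
    for k in range(K2):
        Ck = (rho_bar * sigma_ID2 * Hk[k] @ G @ G.conj().T @ Hk[k].conj().T
              + sigma2 * np.eye(M))
        A = sum(rho_bar * Pj / d * Heff[k][j] @ V[j] @ V[j].conj().T
                @ Heff[k][j].conj().T for j in range(K2) if j != k) + Ck
        U.append(orth(np.linalg.solve(A, Heff[k][partner(k)] @ V[partner(k)])))
    # step 5: precoders (36), using the partner receiver's combiner
    for j in range(K2):
        B = sum(Heff[k][j].conj().T @ U[k] @ U[k].conj().T @ Heff[k][j]
                for k in range(K2) if k != j) + eps * np.eye(M)
        V[j] = orth(np.linalg.solve(B, Heff[partner(j)][j].conj().T @ U[partner(j)]))
```

Every update keeps the combiners and precoders orthonormal, so the unit-power stream constraints are maintained throughout the iterations.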
### -B Proof of CD decomposition
#### -B1 Lemma 2
Consider two $M\times d$ orthonormal matrices $\mathbf{V},\hat{\mathbf{V}}$
such that
$\mathbf{V}^{H}\mathbf{V}=\hat{\mathbf{V}}^{H}\hat{\mathbf{V}}=\mathbf{I}_{d}$.
The left null space of $\hat{\mathbf{V}}$, of size $M\times\left(M-d\right)$, can be represented as $\hat{\mathbf{V}}^{\text{null}}=\text{null}(\hat{\mathbf{V}})$. Then, we can write
$\displaystyle\mathbf{V}$
$\displaystyle=\hat{\mathbf{V}}\hat{\mathbf{V}}^{H}\mathbf{V}+\left(\mathbf{I}_{M}-\hat{\mathbf{V}}\hat{\mathbf{V}}^{H}\right)\mathbf{V}$
$\displaystyle=\hat{\mathbf{V}}\underbrace{\hat{\mathbf{V}}^{H}\mathbf{V}}_{=\mathbf{X}\mathbf{Y}}+\hat{\mathbf{V}}^{\text{null}}\underbrace{\hat{\mathbf{V}}^{\text{null}H}\mathbf{V}}_{=\mathbf{S}\mathbf{Z}}$
(38)
where the last equation is obtained by the QR decomposition, such that $\mathbf{X}$ and $\mathbf{S}$ are $d\times d$ and $\left(M-d\right)\times d$ orthonormal matrices, respectively. It follows that
$d_{c}^{2}(\mathbf{V},\hat{\mathbf{V}})=d-\|\hat{\mathbf{V}}^{H}\mathbf{V}\|_{F}^{2}=d-\text{tr}(\mathbf{Y}^{H}\mathbf{Y})=\text{tr}(\mathbf{Z}^{H}\mathbf{Z})$.
Note that $\mathbf{X}\mathbf{Y}\in\mathbb{C}^{d\times d}$ is independent of $\hat{\mathbf{V}}\in\mathbb{C}^{M\times d}$, since $\mathbf{XY}$ is a projection onto a lower-dimensional space. Also, the factors $\mathbf{X}$ and $\mathbf{Y}$ are independent, since $\mathbf{X}$ represents a basis of $\hat{\mathbf{V}}^{H}\mathbf{V}$ and this basis is not unique. By similar arguments, the matrices $\mathbf{S}$ and $\mathbf{Z}$ are also independent; for more details, see [53]. The other two properties can be seen as follows.
Consider the product simplifications for
$\hat{\mathbf{V}}^{\text{null}H}\hat{\mathbf{V}}^{\text{null}}=\mathbf{I}$ and
$\hat{\mathbf{V}}^{\text{null}H}\hat{\mathbf{V}}=\mathbf{0}$ as
$\displaystyle\mathbf{I}$ $\displaystyle=\mathbf{V}^{H}\mathbf{V}$
$\displaystyle=\mathbf{Y}^{H}\mathbf{X}^{H}\hat{\mathbf{V}}^{H}\hat{\mathbf{V}}\mathbf{X}\mathbf{Y}+\mathbf{Z}^{H}\mathbf{S}^{H}\hat{\mathbf{V}}^{\text{null}H}\hat{\mathbf{V}}^{\text{null}}\mathbf{S}\mathbf{Z}$
$\displaystyle\qquad+2\Re\left(\mathbf{Z}^{H}\mathbf{S}^{H}\hat{\mathbf{V}}^{\text{null}H}\hat{\mathbf{V}}\mathbf{X}\mathbf{Y}\right)$
$\displaystyle=\mathbf{Y}^{H}\mathbf{X}^{H}\mathbf{X}\mathbf{Y}+\mathbf{Z}^{H}\mathbf{S}^{H}\mathbf{S}\mathbf{Z}=\mathbf{Y}^{H}\mathbf{Y}+\mathbf{Z}^{H}\mathbf{Z},$
and
$\displaystyle d_{c}^{2}(\mathbf{V},\hat{\mathbf{V}})$
$\displaystyle=d-\|\hat{\mathbf{V}}^{H}\mathbf{V}\|_{F}^{2}$
$\displaystyle=d-\|\hat{\mathbf{V}}^{H}\hat{\mathbf{V}}\mathbf{X}\mathbf{Y}+\hat{\mathbf{V}}^{H}\hat{\mathbf{V}}^{\text{null}}\mathbf{S}\mathbf{Z}\|_{F}^{2}$
$\displaystyle=d-\|\mathbf{X}\mathbf{Y}\|_{F}^{2}=d-\|\mathbf{Y}\|_{F}^{2}=\|\mathbf{Z}\|_{F}^{2}.$
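The identities of Lemma 2 can be verified numerically. The sketch below draws two random orthonormal matrices, builds the CD decomposition via QR factorizations as in (38), and checks the reconstruction, the constraint $\mathbf{Y}^{H}\mathbf{Y}+\mathbf{Z}^{H}\mathbf{Z}=\mathbf{I}_{d}$, and $d_{c}^{2}(\mathbf{V},\hat{\mathbf{V}})=\|\mathbf{Z}\|_{F}^{2}$ (dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
M, d = 6, 2

def cn(r, c):
    # i.i.d. CN(0,1) entries
    return (rng.standard_normal((r, c)) + 1j * rng.standard_normal((r, c))) / np.sqrt(2)

V = np.linalg.qr(cn(M, d))[0]
Vhat = np.linalg.qr(cn(M, d))[0]
# left null-space basis of Vhat: last M-d columns of the full unitary factor
Uf, _, _ = np.linalg.svd(Vhat, full_matrices=True)
Vnull = Uf[:, d:]

X, Y = np.linalg.qr(Vhat.conj().T @ V)        # Vhat^H V = X Y
S, Z = np.linalg.qr(Vnull.conj().T @ V)       # Vnull^H V = S Z

recon = Vhat @ X @ Y + Vnull @ S @ Z          # reconstruction (38), equals V
dc2 = d - np.linalg.norm(Vhat.conj().T @ V, 'fro') ** 2
```

Since $[\hat{\mathbf{V}},\hat{\mathbf{V}}^{\text{null}}]$ is unitary, the two projections add up to $\mathbf{V}$ exactly, and the chordal distance equals $\|\mathbf{Z}\|_{F}^{2}$ as stated.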
#### -B2 Corollary 3
Let $\mathbf{V}_{j}$ and $\hat{\mathbf{V}}_{j}$ be two sets of precoders such that $d_{c}^{2}\left(\mathbf{V}_{j},\hat{\mathbf{V}}_{j}\right)=0,\forall j$,
i.e., from Lemma 2,
$\mathbf{V}_{j}=\hat{\mathbf{V}}_{j}\mathbf{X}_{j}\mathbf{Y}_{j}$ with
$\mathbf{X}_{j}\mathbf{X}_{j}^{H}=\mathbf{Y}_{j}\mathbf{Y}_{j}^{H}=\mathbf{I}_{d},\forall
j$. The sum rate and the harvested energy will be the same, since precoders with zero CD are related by a unitary matrix, which leaves unchanged the products $\mathbf{V}_{j}\mathbf{V}_{j}^{H}=\hat{\mathbf{V}}_{j}\hat{\mathbf{V}}_{j}^{H},\forall j$, the products $\bar{\mathbf{H}}_{kj}\bar{\mathbf{H}}_{kj}^{H},\forall j,k$, and the norms $\|\mathbf{H}_{rj}\mathbf{V}_{j}\|_{F}^{2},\forall j$.
#### -B3 Corollary 4
From the CD decomposition, the desired displacement matrix can be computed as
$\mathbf{V}\bar{\mathbf{X}}\mathbf{Y}+\mathbf{V}^{\text{null}}\bar{\mathbf{S}}\mathbf{Z}$,
where $\bar{\mathbf{X}},\mathbf{Y},\bar{\mathbf{S}},\mathbf{Z}$ will be
computed to satisfy the constraint in Lemma 2. The CD between this matrix and
$\mathbf{V}$ can be written as
$\displaystyle z$
$\displaystyle=d_{c}^{2}\left(\mathbf{V}\bar{\mathbf{X}}\mathbf{Y}+\mathbf{V}^{\text{null}}\bar{\mathbf{S}}\mathbf{Z},\mathbf{V}\right)$
$\displaystyle\stackrel{{\scriptstyle(a)}}{{=}}d_{c}^{2}\left(\mathbf{V}\bar{\mathbf{X}}\mathbf{U}_{Y}\Sigma_{Y}\mathbf{V}_{Y}^{H}+\mathbf{V}^{\text{null}}\bar{\mathbf{S}}\mathbf{U}_{Z}\Sigma_{Z}\mathbf{V}_{Y}^{H},\mathbf{V}\right)$
$\displaystyle\stackrel{{\scriptstyle(b)}}{{=}}d_{c}^{2}\left(\mathbf{V}\mathbf{X}\Sigma_{Y}+\mathbf{V}^{\text{null}}\mathbf{S}\Sigma_{Z},\mathbf{V}\mathbf{V}_{Y}\right)$
$\displaystyle\stackrel{{\scriptstyle(c)}}{{=}}d_{c}^{2}\left(\mathbf{V}\mathbf{X}\Sigma_{Y}+\mathbf{V}^{\text{null}}\mathbf{S}\Sigma_{Z},\mathbf{V}\right)=d_{c}^{2}\left(\mathbf{V}_{D},\mathbf{V}\right),$
where in $(a)$, the singular value decomposition (SVD) of
$\mathbf{Z}=\mathbf{U}_{Z}\Sigma_{Z}\mathbf{V}_{Y}^{H}$ and
$\mathbf{Y}=\mathbf{U}_{Y}\Sigma_{Y}\mathbf{V}_{Y}^{H}$ with the same right
singular vectors due to the constraint
$\mathbf{Y}^{H}\mathbf{Y}=\mathbf{I}_{d}-\mathbf{Z}^{H}\mathbf{Z}$; in $(b)$,
$\mathbf{S}=\bar{\mathbf{S}}\mathbf{U}_{Z}$,
$\mathbf{X}=\bar{\mathbf{X}}\mathbf{U}_{Y}$ are substituted, and the unitary
matrix $\mathbf{V}_{Y}$ is multiplied into both arguments, since the resulting
CD is unchanged under unitary multiplication, as in $(c)$. This shows that $\mathbf{Z}$ and $\mathbf{Y}$ can be restricted to diagonal matrices without loss of generality.
### -C Proof of Lemma 5: RLUB
###### Proof:
From the literature, we know that the rate loss is proportional to the
interference [53, 60]. Therefore, the rate loss upper bound can be obtained as
$\mathbb{E}\left\\{\Delta R_{k}\right\\}\leq$
$\displaystyle\frac{1}{2}\log_{2}\left|\mathbf{I}_{d}+\bar{\rho}\mathbb{E}\left\\{\left(\sum_{j\neq
k,k^{\prime}}\frac{P_{j}}{d}\underline{\mathbf{H}}_{kj}\hat{\mathbf{V}}_{j}\hat{\mathbf{V}}_{j}^{H}\underline{\mathbf{H}}_{kj}^{H}\right)\mathbf{N}_{k}^{-1}\right\\}\right|,$
where $\underline{\mathbf{H}}_{kj}^{H}=\mathbf{U}_{k}^{H}\mathbf{H}_{kj}$, and the inequality is obtained by Jensen’s inequality. Now, using the CD decomposition, one can write in terms of the perfect IA precoders
$\displaystyle\underline{\mathbf{H}}_{kj}^{H}\hat{\mathbf{V}}_{j}$
$\displaystyle=\underline{\mathbf{H}}_{kj}^{H}\mathbf{V}_{j}\mathbf{X}_{j}\mathbf{Y}_{j}+\underline{\mathbf{H}}_{kj}^{H}\mathbf{S}_{j}\mathbf{Z}_{j}=\underline{\mathbf{H}}_{kj}^{H}\mathbf{S}_{j}\mathbf{Z}_{j},$
where $\underline{\mathbf{H}}_{kj}^{H}\mathbf{V}_{j}=0$ using IA. In the above
decomposition, $\mathbf{S}_{j}\in\mathcal{G}_{M\times d}$ and $\mathbf{Z}_{j}$
are independent of each other [53, Lemma 1]. The above matrix $\underline{\mathbf{H}}_{kj}$ is not an orthonormal $M\times d$ matrix. To make it
orthonormal, let
$\tilde{\mathbf{H}}_{kj}=\underline{\mathbf{H}}_{kj}\mathbf{W}_{kj}\mathbf{\Lambda}_{kj}^{-1/2}$,
subject to $\tilde{\mathbf{H}}_{kj}^{H}\tilde{\mathbf{H}}_{kj}=\mathbf{I}_{d}$
and $\mathbf{W}_{kj}^{H}\mathbf{W}_{kj}=\mathbf{I}_{d}$, where
$\underline{\mathbf{H}}_{kj}=\tilde{\mathbf{H}}_{kj}\mathbf{\Lambda}_{kj}^{1/2}\mathbf{W}_{kj}^{H}$
via singular-value decomposition and $\tilde{\mathbf{H}}_{kj}$,
$\mathbf{W}_{kj}$, and $\mathbf{\Lambda}_{kj}$ are independent of each other.
Thus, the following product is composed of independent terms which can be
simplified as
$\displaystyle\mathbb{E}\left\\{\underline{\mathbf{H}}_{kj}^{H}\mathbf{S}_{j}\mathbb{E}\left\\{\mathbf{Z}_{j}\mathbf{Z}_{j}^{H}\right\\}\mathbf{S}_{j}^{H}\underline{\mathbf{H}}_{kj}\mathbf{N}_{k}^{-1}\right\\}$
$\displaystyle\stackrel{{\scriptstyle(a)}}{{=}}$
$\displaystyle\frac{z_{j}}{d}\mathbb{E}\left\\{\mathbf{W}_{kj}(\mathbf{\Lambda}_{kj}^{1/2})^{H}\tilde{\mathbf{H}}_{kj}^{H}\mathbf{S}_{j}\mathbf{S}_{j}^{H}\tilde{\mathbf{H}}_{kj}\mathbf{\Lambda}_{kj}^{1/2}\mathbf{W}_{kj}^{H}\mathbf{N}_{k}^{-1}\right\\}$
$\displaystyle\stackrel{{\scriptstyle(b)}}{{=}}$
$\displaystyle\frac{z_{j}}{d}\mathbb{E}\left\\{\mathbf{W}_{kj}(\mathbf{\Lambda}_{kj}^{1/2})^{H}\mathbb{E}\left\\{\tilde{\mathbf{H}}_{kj}^{H}\mathbf{S}_{j}\mathbf{S}_{j}^{H}\tilde{\mathbf{H}}_{kj}\right\\}\mathbf{\Lambda}_{kj}^{1/2}\mathbf{W}_{kj}^{H}\mathbf{N}_{k}^{-1}\right\\}$
$\displaystyle\stackrel{{\scriptstyle(c)}}{{=}}$
$\displaystyle\frac{z_{j}}{d}\frac{d}{M-d}\mathbb{E}\left\\{\mathbf{W}_{kj}\mathbf{\Lambda}_{kj}\mathbf{W}_{kj}^{H}\mathbf{N}_{k}^{-1}\right\\}$
$\displaystyle=$
$\displaystyle\frac{z_{j}}{M-d}\mathbb{E}\left\\{\underline{\mathbf{H}}_{kj}^{H}\underline{\mathbf{H}}_{kj}\mathbf{N}_{k}^{-1}\right\\}$
where in (a), the decomposition $\underline{\mathbf{H}}_{kj}=\tilde{\mathbf{H}}_{kj}\mathbf{\Lambda}_{kj}^{1/2}\mathbf{W}_{kj}^{H}$
has been substituted, and the expectation on $\mathbf{Z}_{j}$ is carried out,
which is approximated to be
$\frac{1}{d}\mathbb{E}\left\\{\text{tr}(\mathbf{Z}_{j}\mathbf{Z}_{j}^{H})\right\\}=\frac{z_{j}}{d}$
as [53, App. B], with $z_{j}$ being the expected CD value; in (b),
independence of basis matrices $\tilde{\mathbf{H}}_{kj}$ and $\mathbf{S}_{j}$
is used; (c) follows from the fact that the product of the two orthonormal matrices $\tilde{\mathbf{H}}_{kj}^{H}\mathbf{S}_{j}$ is a matrix-variate Beta distributed random variable $BETA(d,M-2d)$, whose mean is $\frac{d}{M-d}$. Next, for i.i.d. entries in $\mathbf{H}_{kr}$ and $\mathbf{H}_{rj}$, we write
$\displaystyle\mathbb{E}\left\\{\underline{\mathbf{H}}_{kj}^{H}\underline{\mathbf{H}}_{kj}\mathbf{N}_{k}^{-1}\right\\}$
$\displaystyle=$
$\displaystyle\mathbb{E}\Big{\\{}\mathbf{U}_{k}^{H}\mathbf{H}_{kr}\mathbf{G}\mathbf{H}_{rj}\mathbf{H}_{rj}^{H}\mathbf{G}^{H}\mathbf{H}_{kr}^{H}\mathbf{U}_{k}$
$\displaystyle\hfill\times\left[\bar{\rho}\sigma_{ID}^{2}\mathbf{U}_{k}^{H}\mathbf{H}_{kr}\mathbf{G}\mathbf{G}^{H}\mathbf{H}_{kr}^{H}\mathbf{U}_{k}+\sigma^{2}\mathbf{I}_{d}\right]^{-1}\Big{\\}}$
$\displaystyle\stackrel{{\scriptstyle(a)}}{{=}}$ $\displaystyle
M\beta_{rj}\mathbb{E}\Big{\\{}\mathbf{U}_{k}^{H}\mathbf{H}_{kr}\mathbf{G}\mathbf{G}^{H}\mathbf{H}_{kr}^{H}\mathbf{U}_{k}\Big{\\}}$
$\displaystyle\hfill\times\left[\bar{\rho}\sigma_{ID}^{2}\mathbb{E}\mathbf{U}_{k}^{H}\mathbf{H}_{kr}\mathbf{G}\mathbf{G}^{H}\mathbf{H}_{kr}^{H}\mathbf{U}_{k}+\sigma^{2}\mathbf{I}_{d}\right]^{-1}$
$\displaystyle\stackrel{{\scriptstyle(b)}}{{\preceq}}$ $\displaystyle
M\beta_{rj}\mathbb{E}\Big{\\{}\mathbf{U}_{k}^{H}\mathbf{H}_{kr}\mathbf{G}\mathbf{G}^{H}\mathbf{H}_{kr}^{H}\mathbf{U}_{k}$
$\displaystyle\hfill\times\left[\bar{\rho}\sigma_{ID}^{2}\mathbf{U}_{k}^{H}\mathbf{H}_{kr}\mathbf{G}\mathbf{G}^{H}\mathbf{H}_{kr}^{H}\mathbf{U}_{k}+\sigma^{2}\mathbf{I}_{d}\right]^{-1}\Big{\\}}$
$\displaystyle\stackrel{{\scriptstyle(c)}}{{=}}$
$\displaystyle\frac{M\beta_{rj}}{\bar{\rho}\sigma_{ID}^{2}+\frac{\sigma^{2}}{\beta_{kr}\mathbb{E}\|\mathbf{G}\|_{F}^{2}}}\mathbf{I}_{d}$
$\displaystyle\stackrel{{\scriptstyle(d)}}{{\approx}}$
$\displaystyle\frac{M\beta_{rj}}{\bar{\rho}\sigma_{ID}^{2}+\left(\sum_{j=1}^{2K}\bar{\rho}P_{j}\beta_{rj}+\bar{\rho}\sigma_{ID}^{2}\right)\frac{\sigma^{2}}{\beta_{kr}P_{r}}}\mathbf{I}_{d}$
where in (a),
$\mathbb{E}\mathbf{H}_{rj}\mathbf{H}_{rj}^{H}=M\beta_{rj}\mathbf{I}_{R}$; in
(b), the inequality
$\mathbb{E}\left\\{\mathbf{C}_{x}\left(\mathbf{C}_{x}+\kappa\mathbf{I}\right)^{-1}\right\\}\preceq\mathbb{E}\left\\{\mathbf{C}_{x}\right\\}\left[\mathbb{E}\left\\{\mathbf{C}_{x}+\kappa\mathbf{I}\right\\}\right]^{-1}$
is used (and verified via simulations); in (c),
$\mathbb{E}\left[\mathbf{H}_{kr}\mathbf{G}\mathbf{G}^{H}\mathbf{H}_{kr}^{H}\right]=\beta_{kr}tr(\mathbf{G}\mathbf{G}^{H})\mathbf{I}_{M}$
and $\mathbf{U}_{k}^{H}\mathbf{U}_{k}=\mathbf{I}$ are used; in (d), the fact
that the norm of a vector is independent of its direction, and the
approximation
$\frac{\mathbf{G}^{H}\mathbf{G}}{\|\mathbf{G}\|_{F}^{2}}\approx\frac{1}{R}\mathbf{I}_{R}$
are used to get $\mathbb{E}\|\mathbf{G}\|_{F}^{2}$ as $P_{r}=$
$\displaystyle\mathbb{E}\|\mathbf{G}\|_{F}^{2}\mathbb{E}\text{tr}\left[\left(\sum_{j=1}^{2K}\mathbf{H}_{rj}\mathbf{V}_{j}\mathbf{V}_{j}^{H}\mathbf{H}_{rj}^{H}\bar{\rho}\frac{P_{j}}{d}+\bar{\rho}\sigma_{ID}^{2}\mathbf{I}_{R}\right)\frac{\mathbf{G}^{H}\mathbf{G}}{\|\mathbf{G}\|_{F}^{2}}\right]$
$\displaystyle\approx\mathbb{E}\|\mathbf{G}\|_{F}^{2}\mathbb{E}\text{tr}\left[\left(\sum_{j=1}^{2K}\mathbf{H}_{rj}\mathbf{V}_{j}\mathbf{V}_{j}^{H}\mathbf{H}_{rj}^{H}\bar{\rho}\frac{P_{j}}{d}+\bar{\rho}\sigma_{ID}^{2}\mathbf{I}_{R}\right)\frac{1}{R}\right]$
$\displaystyle=\mathbb{E}\|\mathbf{G}\|_{F}^{2}\left(\sum_{j=1}^{2K}d\bar{\rho}\frac{P_{j}}{d}\beta_{rj}+\bar{\rho}\sigma_{ID}^{2}\right)$
Finally, the rate loss bound expression can be simplified as $\Delta
R_{k}\leq$
$\displaystyle\frac{1}{2}\log_{2}\left|\mathbf{I}_{d}+\frac{M_{d}\bar{\rho}\sum_{j\neq
k,k^{\prime}}P_{j}z_{j}\beta_{rj}\mathbf{I}_{d}}{\bar{\rho}\sigma_{ID}^{2}+\left(\sum_{j=1}^{2K}\bar{\rho}P_{j}\beta_{rj}+\bar{\rho}\sigma_{ID}^{2}\right)\frac{\sigma^{2}}{\beta_{kr}P_{r}}}\right|$
$\displaystyle=\frac{d}{2}\log_{2}\left(1+\frac{M_{d}\bar{\rho}\sum_{j\neq
k,k^{\prime}}P_{j}z_{j}\beta_{rj}}{\bar{\rho}\sigma_{ID}^{2}+\left(\sum_{j=1}^{2K}\bar{\rho}P_{j}\beta_{rj}+\bar{\rho}\sigma_{ID}^{2}\right)\frac{\sigma^{2}}{\beta_{kr}P_{r}}}\right),$
where $M_{d}=\frac{M}{d(M-d)}$. Substituting the value of
$\bar{\rho}\sigma_{ID}^{2}=\bar{\rho}\sigma_{R}^{2}+\delta^{2}$ provides the
required expression. ∎
### -D Proof of Lemma 7
###### Proof:
The inequality in the upper bound comes from (12a) as
$\frac{1}{d}\left\|\mathbf{H}_{rj}\mathbf{V}_{j}^{BAL}\right\|_{F}^{2}\leq\frac{1}{d}\sum_{i=1}^{d}\lambda_{ji}\leq\lambda_{j1},\forall
j$, where the second inequality follows from the fact that the average of $d$ values is at most their maximum.
The inequality of the lower bound can be derived from the CD decomposition,
where the equality occurs when $z_{j}=0,\forall j$. For the proposed balanced
precoder with the optimum values of
$\mathbf{X}_{j}^{*},\mathbf{Y}_{j}^{*},\mathbf{S}_{j}^{*}$ and
$\mathbf{Z}_{j}^{*}$, we can write
$\displaystyle\left\|\mathbf{H}_{rj}\mathbf{V}_{j}^{BAL}\right\|_{F}^{2}$
$\displaystyle=\left\|\mathbf{H}_{rj}\mathbf{V}_{j}\mathbf{X}_{j}^{*}\mathbf{Y}_{j}^{*}+\mathbf{H}_{rj}\mathbf{V}_{j}^{\text{null}}\mathbf{S}_{j}^{*}\mathbf{Z}_{j}^{*}\right\|_{F}^{2}$
$\displaystyle\stackrel{{\scriptstyle(a)}}{{\geq}}\left\|\mathbf{H}_{rj}\mathbf{V}_{j}\mathbf{X}_{j}^{*}\sqrt{1-\frac{z_{j}}{d}}+\mathbf{H}_{rj}\mathbf{V}_{j}^{\text{null}}\mathbf{S}_{j}^{*}\sqrt{\frac{z_{j}}{d}}\right\|_{F}^{2}$
$\displaystyle\stackrel{{\scriptstyle(b)}}{{\geq}}\left\|\mathbf{H}_{rj}\mathbf{V}_{j}\mathbf{X}_{j}^{*}\right\|_{F}^{2}\left(1-\frac{z_{j}}{d}\right)+\left\|\mathbf{H}_{rj}\mathbf{V}_{j}^{\text{null}}\mathbf{S}_{j}^{*}\right\|_{F}^{2}\left(\frac{z_{j}}{d}\right)$
$\displaystyle\stackrel{{\scriptstyle(c)}}{{\geq}}\left\|\mathbf{H}_{rj}\mathbf{V}_{j}\right\|_{F}^{2}\left(1-\frac{z_{j}}{d}\right)+\left\|\mathbf{H}_{rj}\mathbf{V}_{j}^{\text{n}}\right\|_{F}^{2}\left(\frac{z_{j}}{d}\right),$
where in $(a)$, the optimal value of the norm is lower bounded by the trivial selection $\mathbf{Y}_{j}=\sqrt{1-\frac{z_{j}}{d}}\mathbf{I}_{d}$ and $\mathbf{Z}_{j}=\sqrt{\frac{z_{j}}{d}}\mathbf{I}_{d}$; in $(b)$, we employ the fact that the cross term in the expansion of the norm is non-negative for the proposed scheme, as stated in Proposition 6; and in
$(c)$, the specific $d$-dimensional null space
$\left(\mathbf{V}_{j}^{\text{null}}\mathbf{S}_{j}^{*}\right)$ can be replaced
with any other $d$-dimensional null space
$\mathbf{V}_{j}^{\text{n}}\in\mathcal{G}_{M,d}$ of $\mathbf{V}_{j}$. ∎
## References
* [1] C. Yang, J. Li, Q. Ni, A. Anpalagan, and M. Guizani, “Interference-aware energy efficiency maximization in 5G ultra-dense networks,” _IEEE Trans. Commun._ , vol. 65, no. 2, pp. 728–739, Dec. 2017.
* [2] J. L. Bing, Y. Rong, L. Gopal, and C. W. R. Chiong, “Transceiver design for SWIPT MIMO relay systems with hybridized power-time splitting-based relaying protocol,” _IEEE Access_ , vol. 8, pp. 190 922–190 933, 2020.
* [3] H. Mahmoud Gamal ElDin Mohammed ElAnzeery, Mohamed Abd ElAziz Saad ElBagouri, and R. Guindi, “Novel radio frequency energy harvesting model,” in _Proc. IEEE International Power Engineering and Optimization Conference_ , Melaka, Malaysia, June 2012.
* [4] R. J. Vyas, B. B. Cook, Y. Kawahara, and M. M. Tentzeris, “E-WEHP: A batteryless embedded sensor-platform wirelessly powered from ambient digital-TV signals,” _IEEE Trans. Microw. Theory Techn._ , vol. 61, no. 6, pp. 2491–2505, May 2013.
* [5] I. Krikidis, S. Timotheou, S. Nikolaou, G. Zheng, D. W. K. Ng, and R. Schober, “Simultaneous wireless information and power transfer in modern communication systems,” _IEEE Commun. Mag._ , vol. 52, no. 11, pp. 104–110, Nov. 2014.
* [6] C. Valenta and G. Durgin, “Harvesting wireless power: Survey of energy-harvester conversion efficiency in far-field, wireless power transfer systems,” _IEEE Microwave Magazine_ , vol. 15, no. 4, pp. 108–120, 2014.
* [7] H. Shen, C. Liu, W. Xu, and C. Zhao, “Optimized full-duplex MIMO DF relaying with limited dynamic range,” _IEEE Access_ , vol. 5, pp. 20 726–20 735, Sep. 2017.
* [8] O. Taghizadeh, A. C. Cirik, and R. Mathar, “Hardware impairments aware transceiver design for full-duplex amplify-and-forward MIMO relaying,” _IEEE Trans. Wireless Commun._ , vol. 17, no. 3, pp. 1644–1659, Dec. 2018.
* [9] Z. Wen, X. Liu, N. C. Beaulieu, R. Wang, and S. Wang, “Joint source and relay beamforming design for full-duplex MIMO AF relay SWIPT systems,” _IEEE Communications Letters_ , vol. 20, no. 2, pp. 320–323, 2016.
* [10] D. Wang, R. Zhang, X. Cheng, and L. Yang, “Capacity-enhancing full-duplex relay networks based on power-splitting (PS-)SWIPT,” _IEEE Transactions on Vehicular Technology_ , vol. 66, no. 6, pp. 5445–5450, 2017.
* [11] W. Wang, R. Wang, W. Duan, R. Feng, and G. Zhang, “Optimal transceiver designs for wireless-powered full-duplex two-way relay networks with SWIPT,” _IEEE Access_ , vol. 5, pp. 22 329–22 343, 2017.
* [12] L. Bariah, S. Muhaidat, and A. Al-Dweik, “Error probability analysis of noma-based relay networks with SWIPT,” _IEEE Communications Letters_ , vol. 23, no. 7, pp. 1223–1226, 2019.
* [13] X. Sun, W. Yang, Y. Cai, Z. Xiang, and X. Tang, “Secure transmissions in millimeter wave swipt uav-based relay networks,” _IEEE Wireless Communications Letters_ , vol. 8, no. 3, pp. 785–788, 2019.
* [14] X. Wang, J. Liu, and C. Zhai, “Wireless power transfer-based multi-pair two-way relaying with massive antennas,” _IEEE Transactions on Wireless Communications_ , vol. 16, no. 11, pp. 7672–7684, 2017.
* [15] S. Gautam, T. X. Vu, S. Chatzinotas, and B. Ottersten, “Cache-aided simultaneous wireless information and power transfer (SWIPT) with relay selection,” _IEEE Journal on Selected Areas in Communications_ , vol. 37, no. 1, pp. 187–201, 2019.
* [16] Y. Hu, Y. Zhu, M. C. Gursoy, and A. Schmeink, “SWIPT-enabled relaying in iot networks operating with finite blocklength codes,” _IEEE Journal on Selected Areas in Communications_ , vol. 37, no. 1, pp. 74–88, 2019.
* [17] J. Yan and Y. Liu, “A dynamic SWIPT approach for cooperative cognitive radio networks,” _IEEE Transactions on Vehicular Technology_ , vol. 66, no. 12, pp. 11 122–11 136, 2017.
* [18] X. Chen, D. W. K. Ng, and H. Chen, “Secrecy wireless information and power transfer: challenges and opportunities,” _IEEE Wireless Communications_ , vol. 23, no. 2, pp. 54–61, 2016.
* [19] X. Lu, P. Wang, D. Niyato, D. I. Kim, and Z. Han, “Wireless networks with RF energy harvesting: A contemporary survey,” _IEEE Communications Surveys Tutorials_ , vol. 17, no. 2, pp. 757–789, 2015.
* [20] M. A. Hossain, R. Md Noor, K. A. Yau, I. Ahmedy, and S. S. Anjum, “A survey on simultaneous wireless information and power transfer with cooperative relay and future challenges,” _IEEE Access_ , vol. 7, pp. 19 166–19 198, 2019.
* [21] D. K. P. Asiedu, H. Lee, and K. Lee, “Simultaneous wireless information and power transfer for decode-and-forward multihop relay systems in energy-constrained IoT networks,” _IEEE Internet of Things Journal_ , vol. 6, no. 6, pp. 9413–9426, 2019.
* [22] R. Fan, S. Atapattu, W. Chen, Y. Zhang, and J. Evans, “Throughput maximization for multi-hop decode-and-forward relay network with wireless energy harvesting,” _IEEE Access_ , vol. 6, pp. 24 582–24 595, 2018.
* [23] G. Huang, Q. Zhang, and J. Qin, “Joint time switching and power allocation for multicarrier decode-and-forward relay networks with SWIPT,” _IEEE Signal Processing Letters_ , vol. 22, no. 12, pp. 2284–2288, 2015.
* [24] B. Chen, X. Zhu, and X. Tu, “Joint precoder design for SWIPT-enabled MIMO relay networks with finite-alphabet inputs,” _IEEE Access_ , vol. 8, pp. 179 105–179 117, 2020.
* [25] B. Chen, X. Zhu, X. Tu, and Y. Guo, “Linear precoder design for SWIPT-enabled relay networks with finite-alphabet inputs,” _IEEE Access_ , vol. 8, pp. 82 012–82 023, 2020.
* [26] Z. Wen, X. Liu, S. Zheng, and W. Guo, “Joint source and relay design for mimo two-way relay networks with SWIPT,” _IEEE Transactions on Vehicular Technology_ , vol. 67, no. 1, pp. 822–826, 2018.
* [27] X. Zhou and Q. Li, “Energy efficiency optimisation for SWIPT af two-way relay networks,” _Electronics Letters_ , vol. 53, no. 6, pp. 436–438, 2017\.
* [28] ——, “Energy efficiency for SWIPT in MIMO two-way amplify-and-forward relay networks,” _IEEE Transactions on Vehicular Technology_ , vol. 67, no. 6, pp. 4910–4924, 2018.
* [29] J. Rostampoor, S. M. Razavizadeh, and I. Lee, “Energy efficient precoding design for SWIPT in MIMO two-way relay networks,” _IEEE Transactions on Vehicular Technology_ , vol. 66, no. 9, pp. 7888–7896, 2017.
* [30] H. K. Sahu and P. R. Sahu, “SSK-based SWIPT with AF relay,” _IEEE Communications Letters_ , vol. 23, no. 4, pp. 756–759, 2019.
* [31] H. Lee, C. Song, S. Choi, and I. Lee, “Outage probability analysis and power splitter designs for SWIPT relaying systems with direct link,” _IEEE Communications Letters_ , vol. 21, no. 3, pp. 648–651, 2017.
* [32] Y. Ye, Y. Li, D. Wang, F. Zhou, R. Q. Hu, and H. Zhang, “Optimal transmission schemes for DF relaying networks using SWIPT,” _IEEE Transactions on Vehicular Technology_ , vol. 67, no. 8, pp. 7062–7072, 2018.
* [33] Y. Ye, Y. Li, Z. Wang, X. Chu, and H. Zhang, “Dynamic asymmetric power splitting scheme for SWIPT-based two-way multiplicative AF relaying,” _IEEE Signal Processing Letters_ , vol. 25, no. 7, pp. 1014–1018, 2018.
* [34] H. Chen, Y. Li, Y. Jiang, Y. Ma, and B. Vucetic, “Distributed power splitting for SWIPT in relay interference channels using game theory,” _IEEE Transactions on Wireless Communications_ , vol. 14, no. 1, pp. 410–420, 2015.
* [35] G. Li, D. Mishra, Y. Hu, and S. Atapattu, “Optimal designs for relay-assisted NOMA networks with hybrid SWIPT scheme,” _IEEE Transactions on Communications_ , vol. 68, no. 6, pp. 3588–3601, 2020.
* [36] W. Wang, R. Wang, H. Mehrpouyan, N. Zhao, and G. Zhang, “Beamforming for simultaneous wireless information and power transfer in two-way relay channels,” _IEEE Access_ , vol. 5, pp. 9235–9250, 2017.
* [37] Y. Cai, M. Zhao, Q. Shi, B. Champagne, and M. Zhao, “Joint transceiver design algorithms for multiuser MISO relay systems with energy harvesting,” _IEEE Transactions on Communications_ , vol. 64, no. 10, pp. 4147–4164, 2016.
* [38] H. Du, T. Ratnarajah, M. Sellathurai, and C. B. Papadias, “Reweighted nuclear norm approach for interference alignment,” _IEEE Transactions on Communications_ , vol. 61, no. 9, pp. 3754–3765, 2013.
* [39] S. M. Razavi and T. Ratnarajah, “Performance analysis of interference alignment under csi mismatch,” _IEEE Transactions on Vehicular Technology_ , vol. 63, no. 9, pp. 4740–4748, 2014.
* [40] Y. Luo, T. Ratnarajah, J. Xue, and F. A. Khan, “Interference alignment in two-tier randomly distributed heterogeneous wireless networks using stochastic geometry approach,” _IEEE Systems Journal_ , vol. 12, no. 3, pp. 2238–2249, 2018.
* [41] J. Xue, S. Biswas, A. C. Cirik, H. Du, Y. Yang, T. Ratnarajah, and M. Sellathurai, “Transceiver design of optimum wirelessly powered full-duplex mimo iot devices,” _IEEE Transactions on Communications_ , vol. 66, no. 5, pp. 1955–1969, 2018.
* [42] V. Cadambe and S. Jafar, “Interference alignment and degrees of freedom of the K-user interference channel,” _IEEE Transactions on Information Theory_ , vol. 54, no. 8, pp. 3425–3441, 2008.
* [43] N. Garg, A. K. Jagannatham, G. Sharma, and T. Ratnarajah, “Precoder feedback schemes for robust interference alignment with bounded CSI uncertainty,” _IEEE Transactions on Signal and Information Processing over Networks_ , vol. 6, pp. 407–425, 2020.
* [44] Z. Xie, X. Geng, Y. Chen, K. Song, B. Panful, Y. Wang, Y. Su, Z. Zhang, and Y. Hu, “Secured green communication scheme for interference alignment based networks,” _Journal of Communications and Networks_ , vol. 22, no. 1, pp. 23–36, 2020.
* [45] M. Chu, B. He, X. Liao, Z. Gao, and V. C. M. Leung, “On the design of power splitting relays with interference alignment,” _IEEE Transactions on Communications_ , vol. 66, no. 4, pp. 1411–1424, 2018.
* [46] F. Benkhelifa, A. S. Salem, and M. Alouini, “Sum-rate enhancement in multiuser MIMO decode-and-forward relay broadcasting channel with energy harvesting relays,” _IEEE Journal on Selected Areas in Communications_ , vol. 34, no. 12, pp. 3675–3684, 2016.
* [47] Z. Fang, Y. Wu, Y. Lu, J. Hu, T. Peng, and J. Ye, “Simultaneous wireless information and power transfer in cellular two-way relay networks with massive MIMO,” _IEEE Access_ , vol. 6, pp. 29 262–29 270, 2018\.
* [48] R. S. Ganesan, T. Weber, and A. Klein, “Interference alignment in multi-user two way relay networks,” in _Proc. IEEE 73rd Vehicular Technology Conference (VTC Spring)_ , Yokohama, Japan, May 2011.
* [49] L. Shi, Y. Ye, R. Q. Hu, and H. Zhang, “Energy efficiency maximization for SWIPT enabled two-way DF relaying,” _IEEE Signal Processing Letters_ , vol. 26, no. 5, pp. 755–759, 2019.
* [50] J. Wang, G. Wang, B. Li, H. Yang, Y. Hu, and A. Schmeink, “Massive MIMO two-way relaying systems with SWIPT in IoT networks,” _IEEE Internet of Things Journal_ , pp. 1–14, Oct. 2020.
* [51] Z. Zong, H. Feng, F. R. Yu, N. Zhao, T. Yang, and B. Hu, “Optimal transceiver design for SWIPT in K-user MIMO interference channels,” _IEEE Trans. Wireless Commun._ , vol. 15, no. 1, pp. 430–445, Aug. 2016.
* [52] C. M. Yetis, T. Gou, S. A. Jafar, and A. H. Kayran, “On feasibility of interference alignment in MIMO interference networks,” _IEEE Trans. Signal Process._ , vol. 58, no. 9, pp. 4771–4782, May 2010.
* [53] N. Ravindran and N. Jindal, “Limited feedback-based block diagonalization for the MIMO broadcast channel,” _IEEE J. Sel. Areas Commun._ , vol. 26, no. 8, pp. 1473–1482, Oct. 2008.
* [54] N. Garg and G. Sharma, “Analog precoder feedback schemes with interference alignment,” _IEEE Transactions on Wireless Communications_ , vol. 17, no. 8, pp. 5382–5396, Aug 2018.
* [55] N. Zhao, F. Yu, and V. Leung, “Wireless energy harvesting in interference alignment networks,” _IEEE Communications Magazine_ , vol. 53, no. 6, pp. 72–78, 2015.
* [56] Y. Li, L. Zhang, L. J. Cimini, and H. Zhang, “Statistical analysis of MIMO beamforming with co-channel unequal-power MIMO interferers under path-loss and rayleigh fading,” _IEEE Transactions on Signal Processing_ , vol. 59, no. 8, pp. 3738–3748, 2011.
* [57] R. S. Ganesan, H. Al-Shatri, A. Kuehne, T. Weber, and A. Klein, “Pair-aware interference alignment in multi-user two-way relay networks,” _IEEE Trans. Wireless Commun._ , vol. 12, no. 8, pp. 3662–3671, May 2013\.
* [58] X. Zhou, R. Zhang, and C. Ho, “Wireless information and power transfer: Architecture design and rate-energy tradeoff,” _IEEE Transactions on Communications_ , vol. 61, no. 11, pp. 4754–4767, 2013.
* [59] J. Kang, I. Kim, and D. I. Kim, “Wireless information and power transfer: Rate-energy tradeoff for nonlinear energy harvesting,” _IEEE Transactions on Wireless Communications_ , vol. 17, no. 3, pp. 1966–1981, 2018\.
* [60] O. E. Ayach and R. W. Heath, “Interference alignment with analog channel state feedback,” _IEEE Trans. Wireless Commun._ , vol. 11, no. 2, pp. 626–636, Dec. 2012.
|
# Optimal cost tuning of frustration: Achieving desired states in the
Kuramoto-Sakaguchi model
Gemma Rosell-Tarragó <EMAIL_ADDRESS>
Departament de Física de la Matèria Condensada, Universitat de Barcelona, Martí Franquès 1, 08028 Barcelona, Spain
Universitat de Barcelona Institute of Complex Systems (UBICS), Martí Franquès 1, 08028 Barcelona, Spain
Albert Díaz-Guilera <EMAIL_ADDRESS>
Departament de Física de la Matèria Condensada, Universitat de Barcelona, Martí Franquès 1, 08028 Barcelona, Spain
Universitat de Barcelona Institute of Complex Systems (UBICS), Martí Franquès 1, 08028 Barcelona, Spain
###### Abstract
There are numerous examples of studied real-world systems that can be described as dynamical systems characterized by individual phases and coupled in a network-like structure. Within the framework of oscillatory models, much attention has been devoted to the Kuramoto model, which considers a collection of oscillators interacting through a sine function of the phase differences. In this paper, we draw on an extension of the Kuramoto model, called the Kuramoto-Sakaguchi model, which adds a phase lag parameter to each node. We construct a general formalism that allows us to compute, within a linear approximation, the set of lag parameters that leads to any desired phase configuration. In particular, we devote special attention to the cases of full synchronization and symmetric configurations. We show that the sets of natural frequencies, phase lag parameters, and steady-state phases are coupled by a single equation, so that a continuous spectrum of solutions is feasible. In order to quantify the system's strain in achieving a particular configuration, we define a cost function and compute the optimal set of parameters that minimizes it. Despite considering a linear approximation of the model, we show that the tuned parameters obtained for the case of full synchronization enhance frequency synchronization in the nonlinear model as well.
## I Introduction
Emergence is one of the key concepts in the analysis of complex systems [1]. Collective properties emerge as a consequence of irregular interactions among their elemental constituents [2]. One of the most paradigmatic examples of emergence is synchronization [3, 4], because the interplay between populations of oscillatory units gives rise to a variety of global states, ranging from perfect synchronization or phase-locked stationary configurations to chimera states [5, 6, 7]. Among the different models that have been used to understand such collective behavior, a lot of effort has been devoted to the Kuramoto model (KM), in which phase oscillators interact continuously with other units through a sine function of the phase difference [8, 9, 10].
In the past few years there has been growing interest in the concept of controllability, which quantifies the feasibility of achieving a desired final state of a given dynamical system [11]. As stated above, the KM can give rise to a wide variety of stationary (phase- or frequency-synchronized) or non-stationary states, with chimeras being an unexpected mixture of both types of behavior [12]. In this context, controllability can be understood as a tuning of the internal parameters of the oscillators to reach specific phase configurations. The simplest setting consists of a collection of identical oscillators interacting through a sine function of the phase differences. In this case it is quite intuitive to see that the final state is a perfectly synchronized one in which all oscillators have exactly the same phase and frequency (the same frequency as the intrinsic one). It is the existence of a distribution of frequencies that gives rise to a transition, in terms of the strength of the coupling, from an incoherent state to a coherent one [13]; such a transition is robust in the sense that the introduction of a lag term, a phase added to the argument of the sine function, does not change the behavior, as long as it is kept below $\pi/2$ [9]. However, the introduction of this lag term for identical oscillators completely changes the structure of the, in principle, synchronized state. In Reference [14] it was shown that, for small and common values of the lag parameter, the synchronized state breaks into partially synchronized groups of oscillators, with symmetry being the reason for the phase synchronization within each group. When this common lag parameter is increased, the system enters an incoherent chaotic state. Indeed, there has been increasing interest recently in the role that symmetries play in the synchronization of oscillatory units and in how the lack of homogeneity in some of the parameters can be compensated by other choices [15, 16, 17].
In a previous work we introduced the concept of “functionability” as a measure of the ability of a given node to change the state of the system by tuning just one internal variable, the node lag in the argument of the sine function of the interaction [18]. Being an intrinsic property of the node, its change produces a measurable global change in the phases of the system of oscillators. Functionability quantifies the reaction of the whole phase distribution to a small change in a node. The analytical expression of functionability reveals a quadratic dependence on the node degree and the node lag value, but also a structural term, such that the most peripheral nodes in the network make larger contributions to this centrality measure. The nodes with higher functionability values may represent positive actors for the network, because they enable more variability in the states of the system, but also potentially dangerous ones, as tiny perturbations can produce cascade-like effects that completely change the network dynamics.
As stated, the addition of a phase lag parameter enables a richer configuration space. However, it is clear that tuning a single parameter will not be enough to generate the wide variety of stationary states that a population of Kuramoto oscillators can achieve. The question that arises, then, is whether a fine tuning of a set of individual parameters can make it possible. This is precisely our proposal in this paper: we construct a general formalism that allows us, within a linear approximation, to compute the set of lag parameters that leads to any phase configuration for a fixed set of intrinsic frequencies. The problem can also be posed the other way around: given a set of frequencies, we may derive the configuration of phases produced by a set of lag parameters.
There are numerous examples of real-world systems that can be described as dynamical systems characterized by individual phases and whose functioning is the object of investigation. Some examples are brain functional networks arising from temporal correlation patterns, AC power in power grids [19], heartbeats [20], multiprocessor and multicore systems, or traffic signaling. Not only may the synchronization between their constituents be intended or prevented, but other particular configurations may also be of interest. For this reason, we propose a mechanism for tuning the intrinsic parameters of the system to achieve any desired phase configuration. A previous work proposed a methodology to enhance frequency synchronization in the nonlinear Kuramoto-Sakaguchi model (an extension of the Kuramoto model with a per-node phase lag parameter) [21]. Another work suggests that an unstable synchronized state becomes stable when, and only when, the oscillator parameters are tuned to nonidentical values [15]. We highlight the work done in Reference [22], where the particular configuration of perfect synchronization is studied and the synchrony alignment function is defined in order to optimize the synchronization of the system for different topologies and frequency scenarios. We address the most general question, following a similar path to theirs, forcing the system to achieve any particular configuration in the linear case of the Kuramoto-Sakaguchi model by means of a fine tuning of the set of phase lag, or frustration, parameters. Despite considering a linear approximation of the model, we show that the tuned parameters obtained for the case of full synchronization enhance frequency synchronization in the nonlinear model as well.
The structure of the paper is as follows. In Section II we present the Kuramoto-Sakaguchi model (a variation of the Kuramoto model) as the proper framework for our purposes. Next, in Section III, we derive the analytic expression for the fine tuning of the frustration parameters needed to achieve any phase configuration. Then, in Section IV, we define a cost function to assess the expense of achieving a particular phase configuration through the corresponding tuning, and we derive the analytic solution for the cases of symmetric and fully synchronized configurations in Sections V and VI, respectively, as well as comment on the validity of our results for the nonlinear model. We conclude in Section VII. An easy-to-follow example and further mathematical derivations can be found in the Appendix.
## II The Kuramoto-Sakaguchi Model
In 1975, Kuramoto suggested one of the best-known dynamical equations to model interacting oscillatory systems [8]: a set of $N$ phase oscillators, characterized by their phases $\theta_{i}$, coupled to each other through the sine of their phase differences. Each unit is influenced directly by the set of its nearest neighbours via the adjacency matrix of the network corresponding to the system, $\mathcal{G}(\mathcal{V},\mathcal{E})$. The coupling strength, $K_{ij}$, describes the intensity of such pair-wise interactions. The set of nodes of the network, $\mathcal{V}(\mathcal{G})$, consists of all the oscillators, while the set of edges, $\mathcal{E}(\mathcal{G})$, is made of the links between them. In his original work, Kuramoto assumed homogeneous interactions, i.e., $K_{ij}=K\ \forall(i,j)$ [8, 10]. Taking into account the connectivity, or topology, of the network and the oscillatory dynamics, the dynamics of the system can be written as a system of differential equations [5]:
$\frac{d\theta_{i}}{dt}=\omega_{i}+K\sum_{j}A_{ij}\sin(\theta_{j}-\theta_{i})\
i=1,...,N\ j\in\Gamma_{i}$ (1)
where $\Gamma_{i}$ is the set of neighbors of node $i$ and $\omega_{i}$ is the
natural frequency of each unit.
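As a quick numerical illustration (not part of the original derivation), the dynamics of Eq. (1) can be integrated with an explicit Euler scheme; for identical natural frequencies the population relaxes to the fully synchronized state, as stated below. All names here (`kuramoto_step`, the ring topology, the step size) are our own choices for this sketch.

```python
import numpy as np

def kuramoto_step(theta, omega, K, A, dt):
    """One explicit-Euler step of Eq. (1)."""
    diff = theta[None, :] - theta[:, None]      # diff[i, j] = theta_j - theta_i
    return theta + dt * (omega + K * np.sum(A * np.sin(diff), axis=1))

# Ring of N identical oscillators: random initial phases relax to full synchronization.
N = 6
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
omega = np.full(N, 1.0)                          # homogeneous natural frequencies
theta = np.random.default_rng(0).uniform(-0.5, 0.5, N)

for _ in range(5000):                            # integrate up to t = 50
    theta = kuramoto_step(theta, omega, K=1.0, A=A, dt=0.01)

spread = np.max(theta) - np.min(theta)           # shrinks to essentially zero
```

Replacing `omega` by a heterogeneous sample reproduces the coupling-strength threshold discussed in the following paragraphs.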
We consider that two nodes are phase synchronized when their phases have the same value,
$\theta_{i}(t)-\theta_{j}(t)=0\ \forall t>t_{0}$
When the phase difference has a constant value, that is, $\theta_{i}(t)-\theta_{j}(t)=c\ \forall t>t_{0}$, we say there is phase locking between nodes $i$ and $j$. Similarly, we consider that two nodes are frequency synchronized when their frequencies have the same value:
$\frac{d\theta_{i}}{dt}-\frac{d\theta_{j}}{dt}=0\ \forall t>t_{0}$
We say that two nodes are fully synchronized when they are phase synchronized, since this implies frequency synchronization.
As long as the distribution of natural frequencies is homogeneous, namely, all
units have the same natural frequency, there is only one attractor of the
dynamics: the fully synchronized state. It can be shown that, if the
distribution of natural frequencies is unimodal, the system becomes frequency
synchronized as long as the coupling strength is larger than a threshold value [10].
In 1986, Sakaguchi and Kuramoto presented a similar model that incorporates a constant phase lag between oscillators [9], which can be written as follows:
as follows:
$\frac{d\theta_{i}}{dt}=\omega_{i}+K\sum_{j}A_{ij}\sin(\theta_{j}-\theta_{i}-\alpha)\
i=1,...,N\ j\in\Gamma_{i}$ (2)
where $\alpha$ is a homogeneous phase lag parameter. This model has also become well known, and several variations of it have been studied. It has been shown that, as long as $|\alpha|<\pi/2$, the system is not chaotic and there exists a threshold value of the coupling strength above which the system synchronizes to a resulting common frequency [9]. In the particular case of homogeneous natural frequencies, i.e., $\omega_{i}=\omega_{0}\ \forall i$, the frustration parameter, $\alpha$, forces the system to break the otherwise fully synchronized state. However, partial synchronization is preserved for symmetric nodes in the network [14, 15]. As the frustration increases, the phases of the asynchronous groups move away from each other.
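The role of symmetry can be checked numerically. The sketch below (our own construction, assuming a ring where every node is symmetric to every other) integrates Eq. (2): full phase synchronization survives the common lag, but the locked frequency is shifted to $\omega_{0}-Kd\sin\alpha$, with $d$ the common degree.

```python
import numpy as np

N, K, omega0, alpha = 6, 1.0, 1.0, 0.4           # ring, common lag alpha < pi/2
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0  # every node has degree d = 2

theta = np.random.default_rng(1).uniform(-0.2, 0.2, N)
dt = 1e-3
for _ in range(100_000):                         # integrate Eq. (2) up to t = 100
    diff = theta[None, :] - theta[:, None]       # diff[i, j] = theta_j - theta_i
    theta = theta + dt * (omega0 + K * np.sum(A * np.sin(diff - alpha), axis=1))

spread = np.max(theta) - np.min(theta)           # full synchronization survives
dtheta = omega0 + K * np.sum(A * np.sin((theta[None, :] - theta[:, None]) - alpha), axis=1)
# locked frequency shifted to omega0 - K * d * sin(alpha), with d = 2 on the ring
```

On a network with several symmetry classes, the same experiment shows the groups of symmetric nodes remaining mutually synchronized while drifting apart from the other groups.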
We are interested here in a more general case, where the frustration parameter is not homogeneous but is an intrinsic property of each unit:
$\frac{d\theta_{i}}{dt}=\omega_{i}+K\sum_{j}A_{ij}\sin(\theta_{j}-\theta_{i}-\alpha_{i})\
i=1,...,N\ j\in\Gamma_{i}$ (3)
In this context, a recent work studies a particular effect of this frustration parameter by defining functionability, a new centrality measure for the nodes of a network, in order to address the issue of which nodes, when perturbed, move the system from a synchronized state to a more asynchronous one, in the sense that the phase differences between all pairs of oscillators are enhanced [18].
## III Analytic expression of the frustration parameters tuning
We address the most general problem, which considers the same dynamics as in Eq.(3) while allowing the edges of the network to be weighted, a more realistic scenario for real-world networks.
For small values of the frustration parameters, and for phases close to each other, which is the case in frequency synchronization, we can linearize Eq.(3) as follows:
$\displaystyle\frac{d\theta_{i}}{dt}=\omega_{i}+K\sum_{j}W_{ij}(\theta_{j}-\theta_{i}-\alpha_{i})=$
$\displaystyle=\omega_{i}-K\sum_{j}L_{ij}\theta_{j}-K\alpha_{i}s_{i}$ (4)
where $W_{ij}$ is the weight of the edge between nodes $i$ and $j$, $s_{i}\equiv\sum_{j}W_{ij}$ is the weighted degree of the $i$th node, and $L$ is the weighted Laplacian matrix, defined as $L_{ij}\equiv\delta_{ij}s_{i}-W_{ij}$. In the stable regime a synchronized frequency is achieved, with $\dot{\theta}_{i}=\Omega$ for all oscillators. We can derive the value of the common oscillation frequency, $\Omega$, by summing Eq. (4) over $i$:
$\sum_{i}\Omega=\sum_{i}\omega_{i}-K\sum_{i}\sum_{j}L_{ij}\theta_{j}^{*}-K\sum_{i}\alpha_{i}s_{i}$
(5)
Taking into account the steady state $\dot{\theta}_{i}=\Omega\ \forall i$ and rearranging the summations:
$N\Omega=\sum_{i}\omega_{i}-K\sum_{j}\theta_{j}^{*}\sum_{i}L_{ij}-K\sum_{i}\alpha_{i}s_{i}.$
(6)
and finally,
$\Omega=\left\langle\omega\right\rangle-K\left\langle\alpha s\right\rangle.$
(7)
where we have used the Laplacian matrix property: $\sum_{i}L_{ij}=0$ and
defined the averages $\sum_{i}\alpha_{i}s_{i}/N=\left\langle\alpha
s\right\rangle$ and $\sum_{i}\omega_{i}/N=\left\langle\omega\right\rangle$.
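Equation (7) is easy to verify numerically. The following sketch (our own construction; variable names and parameter values are assumptions) integrates the linearized dynamics of Eq. (4) on a random weighted graph and compares the locked frequency with $\left\langle\omega\right\rangle-K\left\langle\alpha s\right\rangle$.

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 5, 2.0
W = rng.uniform(0.5, 1.5, (N, N))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)                    # weighted, symmetric, no self-loops
s = W.sum(axis=1)                         # weighted degrees s_i
L = np.diag(s) - W                        # weighted Laplacian
omega = rng.normal(0.0, 0.3, N)           # natural frequencies
alpha = rng.uniform(-0.1, 0.1, N)         # small frustration parameters

theta, dt = np.zeros(N), 1e-3
for _ in range(200_000):                  # integrate Eq. (4) up to t = 200
    theta = theta + dt * (omega - K * (L @ theta) - K * alpha * s)

dtheta = omega - K * (L @ theta) - K * alpha * s      # instantaneous frequencies
Omega_pred = omega.mean() - K * np.mean(alpha * s)    # Eq. (7)
```

All instantaneous frequencies collapse onto the single value predicted by Eq. (7), independently of the topology.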
Now we can plug Eq. (7) into Eq. (4) to obtain the steady-state phases of the oscillators, $\theta_{i}^{*}$:
$\sum_{j}L_{ij}\theta_{j}^{*}=\frac{\omega_{i}}{K}-\frac{\left\langle\omega\right\rangle}{K}+\left\langle\alpha
s\right\rangle-\alpha_{i}s_{i}\hskip 28.45274pt\forall i$ (8)
The solution of Eq.(8) for the phases is undetermined due to the singular nature of the Laplacian matrix. Hence, Eq.(8) is, in general, an underdetermined system of linear equations; there is one free phase, which we use as a reference value for the solution. Nonetheless, we do not work directly with the functional form of the phases, because they are time dependent, $\\{\theta_{i}^{*}\\}=f_{i}(t)$, but with the phase differences with respect to a reference node once the stationary state is achieved,
$\phi_{i}\equiv\theta_{i}-\theta_{R}$ (9)
In this way, we work with time-independent values. In this situation $\phi_{R}=0$ by definition, since $\phi_{R}\equiv\theta_{R}-\theta_{R}=0$.
On the other hand, the contribution $\left\langle\alpha s\right\rangle-\alpha_{i}s_{i}$ on the right-hand side of Eq.(8) can be written in matrix form as:
$\displaystyle-\begin{pmatrix}\frac{N-1}{N}&-\frac{1}{N}&\cdots&-\frac{1}{N}\\ -\frac{1}{N}&\frac{N-1}{N}&\cdots&-\frac{1}{N}\\ \vdots&\vdots&\ddots&\vdots\\ -\frac{1}{N}&-\frac{1}{N}&\cdots&\frac{N-1}{N}\end{pmatrix}\cdot\begin{pmatrix}s_{0}&0&\cdots&0\\ 0&s_{1}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&s_{N-1}\end{pmatrix}\cdot\begin{pmatrix}\alpha_{0}\\ \alpha_{1}\\ \vdots\\ \alpha_{N-1}\end{pmatrix}=(-M\cdot D_{s})\vec{\alpha}$
where we have defined
$M\equiv\begin{pmatrix}\frac{N-1}{N}&-\frac{1}{N}&\cdots&-\frac{1}{N}\\ -\frac{1}{N}&\frac{N-1}{N}&\cdots&-\frac{1}{N}\\ \vdots&\vdots&\ddots&\vdots\\ -\frac{1}{N}&-\frac{1}{N}&\cdots&\frac{N-1}{N}\end{pmatrix}$ and $D_{s}\equiv\begin{pmatrix}s_{0}&0&\cdots&0\\ 0&s_{1}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&s_{N-1}\end{pmatrix}$. We write Eq.(8) in matrix form as
$L\vec{\theta}^{*}=\frac{1}{K}\vec{\Delta\omega}-M\cdot D_{s}\vec{\alpha}$
(10)
where $\Delta\omega_{i}\equiv\omega_{i}-\left\langle\omega\right\rangle$.
Finally, we obtain the set of unknowns $\\{\alpha_{i}\\}$:
$M\cdot D_{s}\vec{\alpha}=\frac{1}{K}\vec{\Delta\omega}-L\vec{\theta}^{*}$
(11)
Equation (11), however, does not have a unique solution, because of the singular nature of the matrix $M\cdot D_{s}$. The matrix $M$ is singular too, and hence its inverse does not exist. Mathematically, $\det(M\cdot D_{s})=\det(M)\cdot\det(D_{s})=0$.
Similarly to what we did for the phases [18], we solve the singularity problem by setting a reference node, which we call the control node, for the frustration parameters; that is, we do not obtain the value of each parameter, but a relation between them:
$\kappa_{i}\equiv\alpha_{i}-\alpha_{C}$ (12)
where $\alpha_{C}$ is the value at the control node. In this situation $\kappa_{C}=0$ by definition, since $\kappa_{C}\equiv\alpha_{C}-\alpha_{C}=0$.
To write the matrix expressions concisely, we define the selection matrix $J_{(n,m)}$, obtained from the $N\times N$ identity matrix by removing the $m$th row and the $n$th column; when one of the indices is left empty, only the corresponding removal is performed.
$L\vec{\theta}^{*}$ turns into $\tilde{L}(k,R)\vec{\tilde{\phi}}^{*}$, where we have removed the $k$th row and the $R$th column. The result does not depend on which row we remove, hence we can choose any $k$. Using the selection matrix, $\tilde{L}(k,R)=J_{(,k)}\cdot L\cdot J_{(R,)}\equiv\tilde{L}$.
Similarly, $\vec{\tilde{\phi}}=J_{(,R)}\cdot\vec{\phi}$, where we have removed the $R$th component (the one fixed to zero). In an equivalent way to the definition of the reduced Laplacian, $\tilde{MD_{s}}(k,C)=J_{(,k)}\cdot MD_{s}\cdot J_{(C,)}\equiv\tilde{MD_{s}}$, where $\tilde{MD_{s}}$ is $MD_{s}$ without the $k$th row and the $C$th column. Similarly, $\vec{\tilde{\kappa}}=J_{(,C)}\cdot\vec{\kappa}$, where we have removed the $C$th component, and $\vec{\tilde{\Delta\omega}}=J_{(,k)}\cdot\vec{\Delta\omega}$, where we have removed the $k$th row.
Considering all the previous definitions and remarks, Eq.(10) can be rewritten as:
$\displaystyle\tilde{MD_{s}}\vec{\tilde{\kappa}}=\frac{1}{K}\vec{\tilde{\Delta\omega}}-\tilde{L}\vec{\tilde{\phi}}^{*}-\alpha_{C}\cdot J_{(,k)}MD_{s}\vec{1}$ (13)
and finally,
$\displaystyle\vec{\tilde{\kappa}}=\left(\tilde{MD_{s}}\right)^{-1}\left(\frac{1}{K}\vec{\tilde{\Delta\omega}}-\tilde{L}\vec{\tilde{\phi}}^{*}-\alpha_{C}\cdot\tilde{M}\vec{s}\right)$ (14)
where we have used $J_{(,k)}MD_{s}\vec{1}=\tilde{M}\vec{s}$, with $MD_{s}\vec{1}$ the vector of row sums of $MD_{s}$. Notice that the matrix $MD_{s}$ is singular, but its row sums are not zero, although its column sums are. Hence, we need to set $\alpha_{C}=0$ if we want to avoid extra constant terms in the final expression. In this particular case:
$\displaystyle\vec{\tilde{\kappa}}=\left(\tilde{MD_{s}}\right)^{-1}\left(\frac{1}{K}\vec{\tilde{\Delta\omega}}-\tilde{L}\vec{\tilde{\phi}}^{*}\right)$ (15)
and keep in mind that $\kappa_{C}=0$.
The obtained values of $\vec{\alpha}$ depend on both the chosen control node, $C$, and the value we set for its frustration parameter, $\alpha_{C}$. Notice, therefore, that there is a continuous spectrum of values of the frustration parameters that achieves a particular phase configuration.
Moreover, and more importantly, because the row sums of the matrix $MD_{s}$ do not vanish, the differences between the obtained values depend on the choice of control node. Mathematically, $\alpha_{i}-\alpha_{j}(C=l)\neq\alpha_{i}-\alpha_{j}(C=k)$ if $l\neq k$. This property will lead us to the definition of a cost for the system to move to the final configuration, which depends on both the control node and the value of its frustration parameter.
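To make the procedure concrete, the reduced system of Eq. (15) can be solved in a few lines. The sketch below (our own construction; node labels, seeds and tolerances are assumptions) tunes the $\alpha$'s for an arbitrary target configuration on a random weighted graph, with $\alpha_{C}=0$, and checks that the linearized dynamics of Eq. (4) indeed settles on the target phase differences.

```python
import numpy as np

rng = np.random.default_rng(3)
N, K = 6, 1.5
W = rng.uniform(0.5, 1.5, (N, N))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)                     # random weighted, symmetric graph
s = W.sum(axis=1)                          # weighted degrees
L = np.diag(s) - W                         # weighted Laplacian
omega = rng.normal(0.0, 0.2, N)            # natural frequencies

R, C, k = 0, 1, 0                          # reference node, control node, dropped row
phi_target = np.array([0.0, 0.1, 0.2, 0.25, -0.2, -0.1])   # target, phi_R = 0

M = np.eye(N) - np.ones((N, N)) / N
MDs = M @ np.diag(s)
rhs = (omega - omega.mean()) / K - L @ phi_target          # RHS of Eq. (11)

rows = [i for i in range(N) if i != k]
cols = [j for j in range(N) if j != C]
alpha = np.zeros(N)                                         # alpha_C = 0 by choice
alpha[cols] = np.linalg.solve(MDs[np.ix_(rows, cols)], rhs[rows])   # Eq. (15)

# Verification: the linearized dynamics, Eq. (4), locks onto the target differences.
theta, dt = np.zeros(N), 1e-3
for _ in range(200_000):
    theta = theta + dt * (omega - K * (L @ theta) - K * alpha * s)
phi = theta - theta[R]
```

Because the column sums of $MD_{s}$, $L$ and $\vec{\Delta\omega}$ all vanish, the dropped $k$th equation is satisfied automatically, so any choice of $k$ yields the same $\vec{\alpha}$.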
We provide an example on a toy network for the case of a homogeneous natural-frequency distribution, i.e., $\omega_{i}=\omega\ \forall i$. In this case, Eq.(14) becomes:
$\vec{\tilde{\kappa}}=\left(\tilde{MD_{s}}\right)^{-1}\left(-\tilde{L}\vec{\tilde{\phi}}^{*}-\alpha_{C}\cdot\tilde{M}\vec{s}\right)$
which, for the network depicted in Figure 1,
Figure 1: Network of seven nodes.
leads to the solution
$\begin{pmatrix}\kappa_{0}\\ \kappa_{2}\\ \kappa_{3}\\ \kappa_{4}\\ \kappa_{5}\\ \kappa_{6}\end{pmatrix}=\begin{pmatrix}\frac{-2\alpha_{1}+3\phi_{1}+\phi_{3}+3\phi_{6}}{4}\\ \frac{3(\phi_{1}-\phi_{2})}{2}\\ \frac{2\phi_{1}-\phi_{2}-2\phi_{3}+\phi_{4}}{2}\\ \frac{2\phi_{1}-\phi_{2}+\phi_{3}-2\phi_{4}+\phi_{5}}{2}\\ \frac{2\phi_{1}-\phi_{2}+\phi_{4}-2\phi_{5}-\phi_{6}}{2}\\ \frac{2\phi_{1}-\phi_{2}+\phi_{5}-2\phi_{6}}{2}\end{pmatrix}$ (16)
where we have chosen $\kappa_{1}=0$ and $\phi_{0}=0$. Hence, the results are written as functions of the value $\alpha_{1}$ and of $\phi_{i},\ i\neq 0$. Therefore, we can achieve any phase configuration, given by the set $\\{\phi_{i}\\}$, by tuning the frustration parameter set $\\{\alpha_{i}\\}$, where $\alpha_{i}=\kappa_{i}+\alpha_{C}$.
To illustrate how we obtain the final values, let us consider the following
phase configuration:
$\vec{\tilde{\phi}}_{(R=0)}=(0.1,0.2,0.25,-0.2,-0.1,0.0)$ (17)
In the general case where $\alpha_{C}=\alpha_{1}\neq 0$:
$\vec{\tilde{\kappa}}_{(C=1)}=(0.1375-\frac{\alpha_{1}}{2},-0.15,-0.35,0.275,0.0,-0.05).$
If we choose $\alpha_{C}=0$, then $\alpha_{i}=\kappa_{i}$, and we can include
the value of the control node $C=1$ in the vector:
$\vec{\tilde{\alpha}}=(0.1375,0.0,-0.15,-0.35,0.275,0.0,-0.05).$
Alternatively, we can choose any value we need for the control node. For
instance, if $\alpha_{C}=\alpha_{1}=0.1$:
$\vec{\tilde{\alpha}}=(0.1875,0.1,-0.05,-0.25,0.375,0.1,0.05)$
and the phase configuration is the same. Importantly, we recover the same
phase differences using the nonlinear model with the tuned $\alpha$’s, up to
a small error. For this last example, using the frustration parameters obtained
by setting $\alpha_{1}=0.1$, the nonlinear model leads to the final phases vector
$\vec{\tilde{\phi}}_{(R=0)}=(0.09969,0.19944,0.25097,-0.19798,-0.09897,0.00012)$ (18)
which represents a relative error of $\sim 0.3\%$ with respect to the initial
configuration in Eq.(17). See the full derivation of the analytical solution in Appendix A.
## IV Optimal Cost tuning of frustration
As pointed out in Section III, there is a continuous spectrum of values for
the choice of the frustration parameters that enables the system to access a
particular phase configuration. The following question arises naturally: among
all the possible solutions, which one makes the system achieve a particular
phase configuration with the minimum required cost?
This question is of particular relevance when we consider plausible real-world
realizations of the system. If a real system needs to access a particular phase
configuration, which may be associated with a precise function, it will tend
to minimize the effort or cost of doing so.
In order to quantify the required cost, we define it as follows:
$e_{T}(C)\equiv\sum_{i}|\alpha_{i}(C)|$ (19)
Hence, the cost associated with each node is given by the absolute value of
the required frustration parameter. The absolute value allows for a sign-free
contribution from each node, a convenient choice when the system is not
specified beforehand and a general definition is proposed instead. Furthermore,
unlike nonlinear cost functions such as the sum of squared parameters, no extra
weight is given to larger values beyond that of a linear function.
As previously remarked, $e_{T}(C)$ will depend on both the chosen control
node, $C$, and the particular choice of its frustration parameter,
$\alpha_{C}$.
The optimal configuration is given by the solution of the minimization problem
$\min_{C,x}e_{T}(C,x)=\min_{C,x}\sum_{i}^{N}|\alpha_{i}(C,x)|$ (20)
where the $x$ variable is not yet specified. Depending on the problem we are
interested in, we would set it either to $\omega_{i}$, $s_{i}$, or any other
combination of the parameters of the model. The minimal value of the cost will
depend on the proper choice of the control node, $C$, in addition to the
particular value of its frustration parameter, $\alpha_{C}$, the free
parameter left to be set. In Sections V and VI we provide a thorough analysis
of it.
The cost required to achieve a particular phase configuration depends on that
configuration, on the control node, and on the chosen value of $\alpha_{C}$. In
Figure 2 we present an example: continuing with the network presented in
Section III and choosing different values of $\alpha_{C}$, we compute
numerically the required cost, using Eq.(19), to achieve the phase
configuration given in Eq.(17). Notice that the global minimum depends on both
the control node and its frustration parameter.
Figure 2: (Color online) Implied cost to achieve the phase configuration in
Eq.(17) as a function of the chosen control node, $C$, for the network in
Figure 1 and considering five different values of $\alpha_{C}$. Notice that
the minimum cost is given, in this case, by $\alpha_{C}=0$ and $C=1$ or $C=5$.
In Section III we derived the general analytical solution of the
frustration parameters as a function of a particular choice of the phase
configuration. In this section we have defined a cost function in order to
assess the optimal way of reaching such a configuration.
Depending on the phase configuration one is interested in achieving, the results
will vary and the analytical expressions will have different features.
In the following sections we will focus on two particular configurations, due
to their intrinsic importance, in order to obtain and discuss the analytical
solution of Eq.(20): the configuration given by the symmetries of the
network [14] and the fully synchronized state.
## V Symmetric phase configuration
As explained in Section II, a particular example of the Kuramoto-Sakaguchi
model is the symmetric case, obtained by a homogeneous distribution of
frustration parameters, i.e., $\alpha_{i}=\alpha_{h}\ \forall i$. For our
purposes, we consider $\alpha_{h}>0$. In this situation, the trivial solution
of the frustration parameters, $\alpha_{i}=\alpha_{h}$, is just one point of
the continuous spectrum of values. That is, we can recover the landscape
given by the symmetric configuration in many different ways. We are, however,
interested in computing the analytical expression of the cost function in
order to select the solution corresponding to the minimum cost.
### V.1 Optimal cost tuning when $\alpha_{C}=0$
Let us first consider the case where $\alpha_{C}=0$ and homogeneous natural
frequencies, $\omega_{i}=\omega_{h}\ \forall i$. In the particular case of the
symmetric configuration, that is, the phase configuration generated by
$\alpha_{i}=\alpha_{h}\ \forall i$, the solution of the frustration parameters
is given by:
$\vec{\tilde{\kappa}}=\left(\tilde{MD_{s}}\right)^{-1}\left(\frac{1}{K}\vec{\tilde{\Delta\omega}}-\tilde{L}\vec{\tilde{\phi}}^{*}\right)=-\left(\tilde{MD_{s}}\right)^{-1}\tilde{L}\vec{\tilde{\phi}}^{*}$
(21)
But $\vec{\tilde{\phi}}^{*}$ corresponds to the symmetric case. Hence (see
Section II),
$\vec{\tilde{\phi}}^{*}=\alpha_{h}\tilde{L}^{-1}\vec{\tilde{\Delta s}}$ (22)
where $\vec{\tilde{\Delta s}}_{i}\equiv\left\langle s\right\rangle-s_{i}$ and
the tilde denotes the removal of the $k$th row.
Plugging Eq.(22) into Eq.(21):
$\vec{\tilde{\kappa}}=-\alpha_{h}\left(\tilde{MD_{s}}\right)^{-1}\tilde{L}\tilde{L}^{-1}\vec{\tilde{\Delta
s}}=-\alpha_{h}\left(\tilde{MD_{s}}\right)^{-1}\vec{\tilde{\Delta s}}$
But $\vec{\tilde{\Delta s}}$ can be written as:
$\vec{\tilde{\Delta s}}=-\tilde{M}\vec{s}$ (23)
Putting it all together:
$\vec{\tilde{\kappa}}=-\alpha_{h}\left(\tilde{MD_{s}}\right)^{-1}\vec{\tilde{\Delta
s}}=\alpha_{h}\left(\tilde{MD_{s}}\right)^{-1}\tilde{M}\vec{s}$ (24)
which in vector form is written as:
$\vec{\tilde{\kappa}}=\alpha_{h}\begin{pmatrix}1-\frac{s_{C}}{s_{0}}\\\
1-\frac{s_{C}}{s_{1}}\\\ \cdots\\\ 1-\frac{s_{C}}{s_{N-1}}\\\ \end{pmatrix}$
(25)
And considering the relation between $\alpha$ and $\kappa$, in Eq.(12):
$\begin{split}\vec{\alpha}=\alpha_{h}\begin{pmatrix}1-\frac{s_{C}}{s_{0}}\\\
1-\frac{s_{C}}{s_{1}}\\\ \cdots\\\ 0\text{ ($C$ node)}\\\ \cdots\\\
1-\frac{s_{C}}{s_{N-1}}\\\ \end{pmatrix}\end{split}$ (26)
Equation (25) gives us the tuned values of the frustration parameters as a
function of the chosen control node, $C$, when $\alpha_{C}=0$. Notice that the
result depends, nonlinearly, only on the ratio between the degree of each node
and that of the control node. This tells us that nodes with the same degree
would be tuned to the same value or, in other words, that the tuning depends
only on the degree sequence of the network.
Once we have computed the analytical solution of the frustration parameters,
we can derive the expression of the cost required to achieve such a state with a
particular choice of $C$. Using the definition in Eq.(19):
$e_{T}(C)=\left\lvert\alpha_{h}\right\rvert\sum_{i}^{N-1}\left\lvert
1-\frac{s_{C}}{s_{i}}\right\rvert=|\alpha_{h}|\sum_{i}^{N-1}\Big{\lvert}\frac{s_{i}-s_{C}}{s_{i}}\Big{\rvert}$
(27)
Before we provide the mathematical solution to the minimization problem
defined in Eq.(20) for this particular case, let us gain an intuitive
understanding of it. Looking at Eq.(27), we see that the contribution of the
$i$th node to the cost depends on $|s_{C}-s_{i}|$ and, hence, if the
chosen control node, $C$, has an extreme degree, i.e., $s_{C}\ll s_{i}$ or
$s_{C}\gg s_{i}$, the contribution will be larger. On the contrary, if the
degree of the control node is similar to that of the remaining nodes, then the
increase in cost will be smaller.
For example, the network in Figure 3(a), with $\vec{s}=(1,6,2,1,2,2,2,2)$ has
the set of unique degrees $\vec{s}_{unique}=(1,2,6)$ and hence three possible
values of the cost, shared by some nodes. If $C=\\{0,3\\}$, $s_{C}=1$:
$e_{T}(C)=|\alpha_{h}|\left(|1-\frac{1}{1}|+5|1-\frac{1}{2}|+|1-\frac{1}{6}|\right)=|\alpha_{h}|\left(\frac{5}{2}+\frac{5}{6}\right)=\frac{10}{3}|\alpha_{h}|$
If $C=\\{2,4,5,6,7\\}$, $s_{C}=2$:
$e_{T}(C)=|\alpha_{h}|\left(2|1-\frac{2}{1}|+4|1-\frac{2}{2}|+|1-\frac{2}{6}|\right)=|\alpha_{h}|\left(2+\frac{2}{3}\right)=\frac{8}{3}|\alpha_{h}|$
And, finally, if $C=1$, $s_{C}=6$:
$e_{T}(C)=|\alpha_{h}|\left(2|1-\frac{6}{1}|+5|1-\frac{6}{2}|\right)=|\alpha_{h}|\left(10+10\right)=20|\alpha_{h}|$
The minimum value of the cost is $\frac{8}{3}|\alpha_{h}|$, corresponding to
the choice $C\in\\{2,4,5,6,7\\}$ with $s_{C}=2$.
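As a quick numerical cross-check, Eq.(27) can be evaluated directly; the following sketch (the function name is ours) reproduces the three cost values computed above for the toy degree sequence:

```python
from fractions import Fraction

def symmetric_cost(degrees, C, alpha_h=1):
    """Cost e_T(C) of Eq.(27) for the symmetric configuration with
    alpha_C = 0: |alpha_h| * sum over i != C of |1 - s_C / s_i|."""
    s_C = degrees[C]
    return abs(alpha_h) * sum(abs(1 - Fraction(s_C, s_i))
                              for i, s_i in enumerate(degrees) if i != C)

s = [1, 6, 2, 1, 2, 2, 2, 2]          # degree sequence of the toy network
costs = {C: symmetric_cost(s, C) for C in range(len(s))}
# degree-1 control nodes cost 10/3, degree-2 nodes 8/3 (the minimum),
# and the degree-6 node costs 20 (all in units of |alpha_h|)
```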
Notice that the optimal choice of the control node (or nodes) does not depend
on the value of $\alpha_{h}$ in the symmetric configuration, but only on the
degree sequence of the network. Moreover, this example illustrates that the
degree of the control node corresponds to an intermediate value within the
degree sequence of the network and not to an extreme value. A more detailed
inspection of Eq.(27) discloses that the proper choice of the control node (or
control nodes) corresponds to the minimization of the relative error of the
degrees. In order to find the particular value of the degree that the control
node must have, we should solve the minimization problem defined in Eq.(20):
$\min_{C,x}e_{T}(C,x)=\min_{C,x}\sum_{i}^{N}|\alpha_{i}(C,x)|$
which, when considering the symmetric configuration case, turns to
$\min_{s_{C}}|\alpha_{h}|\sum_{i}^{N-1}\Big{\lvert}1-\frac{s_{C}}{s_{i}}\Big{\rvert}=|\alpha_{h}|\min_{s_{C}}\sum_{i}^{N}\Big{\lvert}\frac{s_{C}-s_{i}}{s_{i}}\Big{\rvert}$
(28)
Equation (28) is equivalent to the minimization of the absolute value of the
relative error of the degree:
$|\alpha_{h}|\min_{s_{C}}\sum_{i}^{N}|\mathcal{E}_{i}|$ (29)
where
$\mathcal{E}_{i}=\Big{\lvert}\displaystyle\frac{s_{C}-s_{i}}{s_{i}}\Big{\rvert}$.
The most general minimization problem of the relative error of a variable [23]
can be written as
$\min_{d}\sum_{i=1}^{N}w_{i}|x_{i}-d|\ ;d>0$ (30)
where $d$ is the variable one is interested in and $w_{i}$ is the weight
corresponding to each $x_{i}$ variable. The solution of Eq.(30) is given by
$\displaystyle d=x_{m}\text{ , where }m\equiv\min\\{i\ |\
\sum_{k=1}^{i}w_{k}\geq\sum_{k=i}^{n}w_{k}\\}$ $\displaystyle
i\in\\{1,...,n\\}$ (31)
In other words, the value of $d$ that minimizes Eq.(30) corresponds to the
weighted median of the variable $x$ or, equivalently, the 50% weighted
percentile. The weighted median of a set of $n$ distinct ordered elements
$x_{1},x_{2},...,x_{n}$ with positive weights $w_{1},w_{2},...,w_{n}$ is the
element $x_{m}$ with
$m=\min\\{i\ |\ \sum_{k=1}^{i}w_{k}\geq\sum_{k=i}^{n}w_{k}\\}$. In other words, the
solution is given by $x_{m}$, the value such that the sums of the weights at
each side of the pivot, $m$, are as even as possible.
The particular case defined in Eq.(28) can be mapped to the most general
problem defined in Eq.(30), choosing $w_{i}=1/s_{i}$, $x_{i}=s_{i}$ and
$d=s_{C}$. Accordingly, the solution of $s_{C}$ corresponds to the weighted
median of the set $\\{s_{i}\\}$, with weights given by the inverse of the node
degree.
Following the example of the network in Figure 3(a), with degree sequence
$\vec{s}=(1,6,2,1,2,2,2,2)$, let us compute the optimal value of $s_{C}$ by
using Eq.(31).
$\displaystyle\text{sorted}(\vec{s})=(1,1,2,2,2,2,2,6)$
$\displaystyle\vec{w}=\left(1,1,\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{6}\right)$
To find the weighted median, we have to find the minimum value such that the
sum of the weights at each side of the pivot are as even as possible.
$1+1+\frac{1}{2}+\frac{1}{2}=3\geq
2.17=\frac{1}{2}+\frac{1}{2}+\frac{1}{2}+\frac{1}{2}+\frac{1}{6}$
which corresponds to $s_{C}=2$, in agreement with the location of the minimum
for $\alpha_{C}=0$ in Figure 3(b) corresponding to the network in Figure 3(a).
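The weighted-median rule of Eq.(31) is straightforward to implement; the sketch below (the helper name is ours) recovers the optimal degree $s_{C}=2$ for the toy degree sequence with weights $1/s_{i}$:

```python
def weighted_median(values, weights):
    """Smallest pivot x_m (values sorted in ascending order) such that
    sum(w_1..w_m) >= sum(w_m..w_n), as prescribed by Eq.(31)."""
    pairs = sorted(zip(values, weights))
    total = sum(w for _, w in pairs)
    left = 0.0
    for x, w in pairs:
        left += w
        # the right-hand sum of Eq.(31) includes the pivot weight itself
        if left >= total - left + w:
            return x

s = [1, 6, 2, 1, 2, 2, 2, 2]                    # toy degree sequence
print(weighted_median(s, [1 / d for d in s]))   # -> 2, the optimal s_C
```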
Figure 3: (Color online) Implied cost to achieve the symmetric configuration
as a function of the degree corresponding to different choices of the control
node, $s_{C}$, for a network of 8 nodes (upper panels) and 9 nodes (lower
panels). The distinct colors and markers correspond to different values of
$\alpha_{C}$. The symmetric configuration is generated by a value of
$\alpha_{h}=0.1$.
### V.2 Optimal cost tuning when $\alpha_{C}\neq 0$
We next ask which is the optimal choice of the control node in the case we let
$\alpha_{C}\neq 0$ and $\omega_{i}=\omega_{h}\ \forall i$. In this case, we
should look at Eq.(13) and set $\vec{\tilde{\Delta\omega}}=0$. Making use of
the analytical solution of the symmetric configuration in Eq.(22):
$\tilde{MD_{s}}\vec{\tilde{\kappa}}=-\alpha_{h}\tilde{L}\tilde{L}^{-1}\vec{\tilde{\Delta
s}}-\alpha_{C}\cdot\sum_{j}^{\rightarrow}[MD_{s}]_{ij}$
Using the property $\tilde{L}\tilde{L}^{-1}=\mathbb{I}$ and Eq.(23),
$\vec{\tilde{\Delta s}}=-\tilde{M}\vec{s}$,
$\vec{\tilde{\kappa}}=\alpha_{h}\left(\tilde{MD_{s}}\right)^{-1}\tilde{M}\vec{s}-\alpha_{C}\left(\tilde{MD_{s}}\right)^{-1}\sum_{j}^{\rightarrow}[MD_{s}]_{ij}$
Finally, in vector form,
$\displaystyle\vec{\tilde{\kappa}}=\alpha_{h}\begin{pmatrix}1-\frac{s_{C}}{s_{0}}\\\
1-\frac{s_{C}}{s_{1}}\\\ \cdots\\\ 1-\frac{s_{C}}{s_{N-1}}\\\
\end{pmatrix}-\alpha_{C}\begin{pmatrix}1-\frac{s_{C}}{s_{0}}\\\
1-\frac{s_{C}}{s_{1}}\\\ \cdots\\\ 1-\frac{s_{C}}{s_{N-1}}\\\ \end{pmatrix}$
(32)
Using Eq.(12),
$\displaystyle\vec{\alpha}=(\alpha_{h}-\alpha_{C})\begin{pmatrix}1-\frac{s_{C}}{s_{0}}\\\
1-\frac{s_{C}}{s_{1}}\\\ \cdots\\\ 0\text{ ($C$ node)}\\\ \cdots\\\
1-\frac{s_{C}}{s_{N-1}}\\\ \end{pmatrix}+\alpha_{C}$ (33)
where we have used the result in Eq.(25) and the relation
$\displaystyle\left(\tilde{MD_{s}}\right)^{-1}\sum_{j}^{\rightarrow}[MD_{s}]_{ij}=\left(\tilde{MD_{s}}\right)^{-1}\tilde{M}\vec{s}=\begin{pmatrix}1-\frac{s_{C}}{s_{0}}\\\
1-\frac{s_{C}}{s_{1}}\\\ \cdots\\\
1-\frac{s_{C}}{s_{N-1}}\\\ \end{pmatrix}$
In the particular case that $\alpha_{C}=\alpha_{h}$ we recover the trivial
initial configuration $\alpha_{i}=\alpha_{h}\ \forall i$, as expected from the
model.
Once we have computed the analytical solution of the frustration parameters,
we derive the expression of the implied cost to achieve such a state with a
particular choice of $C$. Using the definition in Eq.(19):
$e_{T}(C)=\sum_{i=0}^{N-1}\Big{\lvert}(\alpha_{h}-\alpha_{C})\left(1-\frac{s_{C}}{s_{i}}\right)+\alpha_{C}\Big{\rvert}$
(34)
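The claim that the global minimum lies at $\alpha_{C}=0$ can also be checked numerically by scanning Eq.(34) over control nodes and a grid of $\alpha_{C}$ values; this sketch (the function name is ours) uses the toy degree sequence of Figure 3(a) with $\alpha_{h}=0.1$:

```python
def cost(degrees, C, alpha_h, alpha_C):
    # e_T(C) of Eq.(34): sum over all nodes of
    # |(alpha_h - alpha_C) * (1 - s_C / s_i) + alpha_C|
    s_C = degrees[C]
    return sum(abs((alpha_h - alpha_C) * (1 - s_C / s_i) + alpha_C)
               for s_i in degrees)

s = [1, 6, 2, 1, 2, 2, 2, 2]
alpha_h = 0.1
best = min((cost(s, C, alpha_h, a_C), a_C, C)
           for C in range(len(s))
           for a_C in (-0.2, -0.05, 0.0, 0.05, 0.1, 0.2))
# the scan reproduces Figure 3: the minimum cost, 8/3 * alpha_h,
# is reached at alpha_C = 0 and a degree-2 control node
```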
We next derive the analytical solution of the optimal choice of the control
node and finally prove that the global minimum corresponds to a value of
$\alpha_{C}=0$. Equation (34) can be rearranged as
$\displaystyle
e_{T}(C)=\sum_{i=0}^{N-1}\Big{\lvert}\frac{s_{i}-\left(1-\frac{\alpha_{C}}{\alpha_{h}}\right)s_{C}}{s_{i}/\alpha_{h}}\Big{\rvert}$
(35)
and can thereby be easily mapped to the minimization problem
defined and solved in Eq.(30) and Eq.(31), respectively. Looking at Eq.(35),
we should choose $x_{i}=s_{i}$, $w_{i}=\alpha_{h}/s_{i}$ and
$d=\left(1-\alpha_{C}/\alpha_{h}\right)s_{C}$. With this choice, the value of
$d$ that minimizes Eq.(34) corresponds to the weighted median of the set
$\\{s_{i}\\}$ with weights $\alpha_{h}/s_{i}$. Therefore, the value of $d$ is
the same as in the case $\alpha_{C}=0$, but now $d\neq s_{C}$ and
thus we must apply a transformation in order to obtain the optimal choice of
$s_{C}$. We have to distinguish several cases, considering $\alpha_{h}>0$:
* •
$\alpha_{C}>0$. In this case we inspect Eq.(35) and distinguish two more
cases:
* o
$\alpha_{C}>\alpha_{h}$: in this case, the prefactor of $s_{C}$ is negative,
and we can write:
$\displaystyle
e_{T}(C)=\sum_{i=0}^{N-1}\Big{\lvert}\frac{s_{i}+\Big{\lvert}1-\frac{\alpha_{C}}{\alpha_{h}}\Big{\rvert}s_{C}}{s_{i}/\alpha_{h}}\Big{\rvert}=\sum_{i=0}^{N-1}\Big{\lvert}\alpha_{h}+M\alpha_{h}\frac{s_{C}}{s_{i}}\Big{\rvert}$
where $M\equiv\Big{\lvert}1-\frac{\alpha_{C}}{\alpha_{h}}\Big{\rvert}>0$ is a
positive number. Hence, as the cost function increases with increasing
$s_{C}$, the minimum is achieved when $s_{C}=\min(s_{i})$ (See Figure 3 at
$\alpha_{C}=0.2$).
* o
$\alpha_{C}<\alpha_{h}$: in this case, the prefactor of $s_{C}$ is positive,
and we can write:
$e_{T}(C)=\sum_{i=0}^{N-1}\Big{\lvert}\frac{s_{i}-\Big{\lvert}1-\frac{\alpha_{C}}{\alpha_{h}}\Big{\rvert}s_{C}}{s_{i}/\alpha_{h}}\Big{\rvert}$
Taking into account that
$d=\Big{\lvert}1-\frac{\alpha_{C}}{\alpha_{h}}\Big{\rvert}s_{C}$ and
considering that, in this case, $0<\alpha_{C}<\alpha_{h}$ and hence
$0\leq\Big{\lvert}1-\frac{\alpha_{C}}{\alpha_{h}}\Big{\rvert}\leq 1$, while the
weighted median is bounded by $\min(s_{i})\leq d\leq\max(s_{i})$, the optimal
value of $s_{C}$ falls in the range $d\leq s_{C}\leq\max(s_{i})$. Hence, the
optimal value of $s_{C}$ is always larger than the weighted median, $d$ (see
Figure 3 at $\alpha_{C}=0.05$).
* •
$\alpha_{C}<0$. In this case we can rewrite Eq.(35) as
$\displaystyle
e_{T}(C)=\sum_{i=0}^{N-1}\Big{\lvert}\frac{s_{i}-\left(1+\frac{|\alpha_{C}|}{\alpha_{h}}\right)s_{C}}{s_{i}/\alpha_{h}}\Big{\rvert}$
and distinguish two more cases:
* o
$|\alpha_{C}|>|\alpha_{h}|$: In this case, the prefactor of $s_{C}$ is
positive and bounded by
$2\leq\left(1+\frac{|\alpha_{C}|}{\alpha_{h}}\right)\leq\infty$. Here,
$d=\left(1+\frac{|\alpha_{C}|}{\alpha_{h}}\right)s_{C}$ and hence $0\leq
s_{C}\leq d/2$. Hence, the optimal value of $s_{C}$ is always smaller than
half the value of the weighted median, $d$ (see Figure 3 at
$\alpha_{C}=-0.2$).
* o
$|\alpha_{C}|<|\alpha_{h}|$: In this case, the prefactor of $s_{C}$ is positive
and bounded by $1\leq\left(1+\frac{|\alpha_{C}|}{\alpha_{h}}\right)\leq 2$.
Here, $d=\left(1+\frac{|\alpha_{C}|}{\alpha_{h}}\right)s_{C}$ and hence
$d/2\leq s_{C}\leq d$. Hence, the optimal value of $s_{C}$ is always smaller
than the weighted median, $d$ (see Figure 3 at $\alpha_{C}=-0.05$).
* •
$\alpha_{C}=0$: This case is explored in Section V.1. Equation (35) turns to
$\displaystyle
e_{T}(C)=\sum_{i=0}^{N-1}\Big{\lvert}\frac{s_{i}-s_{C}}{s_{i}/\alpha_{h}}\Big{\rvert}$
The optimal value of $s_{C}$ is the same as the weighted median, $d$,
without any further transformation (see Figure 3 at $\alpha_{C}=0.0$).
* •
$\alpha_{C}=\alpha_{h}$: This case is discussed in the introduction of the
present section. Eq.(35) turns to
$e_{T}(C)=\sum_{i=0}^{N-1}\alpha_{h}=N\alpha_{h}$
and hence the value of the cost is the same constant value for all nodes (see
Figure 3 at $\alpha_{C}=0.1$).
Among all the cases considered concerning the value of $\alpha_{C}$, the global
minimum cost is given by $\alpha_{C}=0$, as shown in Figure 3. This result can
be proved by considering a simplified version of Eq.(35), defined as
$f(x)=\Big{\lvert}\frac{a-(1-x/b)c}{a/b}\Big{\rvert}$ (36)
The minimum value of Eq.(36) is achieved when $x=0$, as long as $a>0$, $b>0$
and $c>0$. These conditions are equivalent to $s_{i}>0$, $\alpha_{h}>0$ and
$s_{C}>0$, and hold for all the summation terms in Eq.(35). Therefore, the
minimum value is given by setting $\alpha_{C}=0$.
Summing up, in order to obtain the optimal set of parameters $\\{\alpha_{i}\\}$
that achieves the symmetric phase configuration with the minimum implied
cost in the Kuramoto-Sakaguchi model, we should set $\alpha_{C}=0$,
independently of the value of $\alpha_{h}$. The remaining parameters have to
be tuned using Eq.(33). Moreover, the optimal choice of the control node (or
nodes) corresponds to that with $s_{C}$ located at the weighted median of
$\\{s_{i}\\}$ (with weights equal to $s_{i}^{-1}$).
Notice also that nodes are grouped by degree regarding the tuned values of
their frustration parameters. In other words, there may be several potential
control nodes, as long as they share the same degree.
## VI Fully synchronized phase configuration
Another particular phase configuration is given by the phase synchronization
of nodes, that is, $\vec{\phi}^{*}=\vec{0}$. If we set, as in Section V,
$\omega_{i}=\omega_{h}\ \forall i$, we end up with the trivial solution
$\alpha_{i}=0\ \forall i$. In the case of full synchronization we want to
recover the completely in-phase state from a phase dispersion produced by a
distribution of natural frequencies, which we consider to be positive. Hence,
applying Eq.(13) to this case:
$\displaystyle\vec{\tilde{\kappa}}=\left(\tilde{MD_{s}}\right)^{-1}\left(\frac{1}{K}\tilde{M}\vec{\omega}-\alpha_{C}\cdot\tilde{M}\vec{s}\right)$
(37)
and in vector form,
$\displaystyle\vec{\tilde{\kappa}}=\begin{pmatrix}\frac{\alpha_{C}(s_{C}-s_{0})-(\omega_{C}-\omega_{0})/K}{s_{0}}\\\
\frac{\alpha_{C}(s_{C}-s_{1})-(\omega_{C}-\omega_{1})/K}{s_{1}}\\\ \cdots\\\
\frac{\alpha_{C}(s_{C}-s_{N-1})-(\omega_{C}-\omega_{N-1})/K}{s_{N-1}}\end{pmatrix}$ (38)
where we have used: $\vec{\tilde{\Delta\omega}}=\tilde{M}\vec{\omega}$.
Finally, from the $\vec{\kappa}$ in Eq.(38) we can obtain $\vec{\alpha}$:
$\vec{\alpha}=\begin{pmatrix}\frac{\alpha_{C}s_{C}-(\omega_{C}-\omega_{0})/K}{s_{0}}\\\
\frac{\alpha_{C}s_{C}-(\omega_{C}-\omega_{1})/K}{s_{1}}\\\ \cdots\\\
\alpha_{C}\\\ \cdots\\\
\frac{\alpha_{C}s_{C}-(\omega_{C}-\omega_{N-1})/K}{s_{N-1}}\end{pmatrix}$ (39)
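Equation (39) translates directly into code; the following sketch (the function name is ours) computes the tuned frustration parameters for an illustrative degree sequence and set of natural frequencies, with $K=1$ and $\alpha_{C}=0$:

```python
def full_sync_alphas(degrees, omegas, K, C, alpha_C=0.0):
    """Frustration parameters of Eq.(39) that drive the linearized system
    toward phi_i = 0 for all i (full synchronization)."""
    s_C, w_C = degrees[C], omegas[C]
    return [alpha_C if i == C
            else (alpha_C * s_C - (w_C - omegas[i]) / K) / degrees[i]
            for i in range(len(degrees))]

s = [1, 6, 2, 1, 2, 2, 2, 2]
w = [0.1, 0.2, 0.05, 0.45, 0.3, 0.4, 0.25, 0.15]
alphas = full_sync_alphas(s, w, K=1.0, C=6)   # control node C = 6, alpha_C = 0
```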
Similarly to the result for the symmetric configuration, given in Eq.(33), the
solution of the fully synchronized configuration concerning $\vec{\alpha}$ is
a continuous spectrum of values, depending on the choice of the control node,
$C$, on the value of its frustration parameter $\alpha_{C}$, which is a free
parameter, and on the natural frequencies of the oscillators. In Sections VI.1
and VI.2 we will make an in-depth analysis of the problem, as well as comment
on the nonlinear expansion of the Kuramoto-Sakaguchi model and the validity of
our approach in this case (Section VI.3).
### VI.1 Optimal cost tuning when $\alpha_{C}=0$
Using the definition of cost in Eq.(19) and the general solution of the
frustration parameters in Eq.(39) we get:
$e_{T}(C)=\sum_{i=0}^{N-1}\Big{\lvert}\frac{\alpha_{C}s_{C}-(\omega_{C}-\omega_{i})/K}{s_{i}}\Big{\rvert}$
(40)
In the particular choice $\alpha_{C}=0$:
$e_{T}(C)=\sum_{i=0}^{N-1}\Big{\lvert}\frac{\omega_{C}-\omega_{i}}{Ks_{i}}\Big{\rvert}$
(41)
Equation (41) shows that the relevant piece of information regarding the
control node is its natural frequency, $\omega_{C}$. Similarly to the
minimization problem posed in Section V, in order to find the optimal
choice of the control node we need to solve Eq.(20) considering the solution
of Eq.(41):
$\min_{\omega_{C}}\sum_{i=0}^{N-1}\Big{\lvert}\frac{\omega_{C}-\omega_{i}}{Ks_{i}}\Big{\rvert}=\frac{1}{K}\min_{\omega_{C}}\sum_{i=0}^{N-1}\Big{\lvert}\frac{\omega_{C}-\omega_{i}}{s_{i}}\Big{\rvert}$
(42)
The optimization problem is equivalent to the most general problem, described
in Eq.(30), with solution given by Eq.(31). In this case, $d=\omega_{C}$,
$x_{i}=\omega_{i}$ and the weight $w_{i}=s_{i}^{-1}$. Accordingly, and in a
similar way as in Section V, the solution for $\omega_{C}$ corresponds to the
weighted median of the set $\\{\omega_{i}\\}$, with weights given by the
inverse of the node degree. Notice that the optimal choice of the control node
is, in general, different from that given in Section V.1. This is due to the fact
that the elements entering the weighted median have to be sorted according to
the natural frequencies instead of the node degrees.
Following the example provided in Section V.1, for the network in Figure
3(a) with degree sequence $\vec{s}=(1,6,2,1,2,2,2,2)$, let us compute the
optimal value of $\omega_{C}$ by using Eq.(31). Consider the following
natural frequencies
$\vec{\omega}=(0.1,0.2,0.05,0.45,0.3,0.4,0.25,0.15)$ (43)
which lead to
$\text{sorted}(\vec{\omega})=(0.05,0.1,0.15,0.2,0.25,0.3,0.4,0.45)$ (44)
and the corresponding weights
$\vec{w}=\left(\frac{1}{2},1,\frac{1}{2},\frac{1}{6},\frac{1}{2},\frac{1}{2},\frac{1}{2},1\right)$
(45)
To find the weighted median, we have to find the minimum value such that the
sum of the weights at each side of the pivot are as even as possible.
$\frac{1}{2}+1+\frac{1}{2}+\frac{1}{6}+\frac{1}{2}=2.67\geq
2.5=\frac{1}{2}+\frac{1}{2}+\frac{1}{2}+1$
Therefore, the optimal value of natural frequency corresponds to the choice
$C=6$ [see $\alpha_{C}=0$ line in Figure 4(a)], with $\omega_{C}=0.25$ [see
$\alpha_{C}=0$ line in Figure 4(b)] and a degree of $s_{C}=2$.
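This result can be cross-checked by brute force: evaluating the cost of Eq.(41) for every candidate control node (a sketch over the same toy data; the function name is ours) indeed singles out node $C=6$:

```python
s = [1, 6, 2, 1, 2, 2, 2, 2]
w = [0.1, 0.2, 0.05, 0.45, 0.3, 0.4, 0.25, 0.15]
K = 1.0

def e_T(C):
    # Eq.(41): cost of reaching full synchronization with alpha_C = 0
    return sum(abs(w[C] - w_i) / (K * s_i) for w_i, s_i in zip(w, s))

best = min(range(len(w)), key=e_T)
print(best, w[best])   # -> 6 0.25, matching the weighted-median argument
```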
Figure 4: (Color online) Implied cost to achieve the fully synchronized
configuration as a function of the chosen control node, $C$ (upper panel) and
natural frequencies of nodes (bottom panel) for the network in Figure 3(a).
Five different values of $\alpha_{C}$ are considered (marked colored lines).
Natural frequencies are set as the example in Eq.(43).
### VI.2 Optimal cost tuning when $\alpha_{C}\neq 0$
The cost corresponding to the fully synchronized configuration is given
by Eq.(40). In the general case where $\alpha_{C}\neq 0$, we can minimize the
cost with respect to $\omega_{C}$ or to $s_{C}$. If we minimize with respect
to $\omega_{C}$, we first have to rewrite Eq.(40) as
$\displaystyle
e_{T}(C)=\sum_{i=0}^{N-1}\Big{\lvert}\frac{\alpha_{C}s_{C}-(\omega_{C}-\omega_{i})/K}{s_{i}}\Big{\rvert}=\frac{1}{K}\sum_{i=0}^{N-1}\Big{\lvert}\frac{\omega_{i}-(\omega_{C}-\alpha_{C}s_{C}K)}{s_{i}}\Big{\rvert}$
(46)
Again, the problem and the solution of Eq.(46) can be taken from Eq.(30) and
Eq.(31), respectively, choosing $d\equiv\omega_{C}-\alpha_{C}s_{C}K$, $w_{i}\equiv
1/Ks_{i}$ and $x_{i}\equiv\omega_{i}$.
Hence, the value $d$ that minimizes the cost is the weighted median with the
same weights as in Section VI.1, $w_{i}=1/s_{i}$ (notice, however, that the
ordering is determined by the natural frequencies and not by the degrees). Let
us analyze the different possibilities regarding the values of $\alpha_{C}$,
keeping $\omega_{C}$ and $s_{C}$ constant:
* •
$\omega_{C}>\alpha_{C}s_{C}K$ or $\alpha_{C}<\frac{\omega_{C}}{Ks_{C}}$: We
can write
$\displaystyle\sum_{i}^{N}\Big{\lvert}\frac{\omega_{i}-|\omega_{C}-\alpha_{C}s_{C}K|}{Ks_{i}}\Big{\rvert}$
The value that minimizes the cost is given by $d=\omega_{k}$, corresponding to
the weighted median. However, this is not directly the value of $\omega_{C}$,
as $d=|\omega_{C}-\alpha_{C}s_{C}K|$ in this case. The actual values of the pair
$\\{\omega_{C},s_{C}\\}$ are given by
$\min_{C}(\omega_{k}-(\omega_{C}-\alpha_{C}s_{C}K))$. Following the
example in Section VI.1, the value of the weighted median is $d=0.25$. In the
case we are considering, however, this is not the optimal choice of the
parameters for the control node. We must shift the values considering the
relation between $d$ and the other parameters. If we choose $\alpha_{C}=0.1$,
for instance, we find that $|\omega_{C}-0.1s_{C}|\approx 0.25$. In Figure 4 we see
that the optimal choice is given by $\omega_{C}=0.4$, which corresponds to
$C=5$ and $s_{C}=2$.
* •
$\omega_{C}<\alpha_{C}s_{C}K$ or $\alpha_{C}>\frac{\omega_{C}}{Ks_{C}}$: We
can write
$\displaystyle\sum_{i}^{N}\Big{\lvert}\frac{\omega_{i}+|\omega_{C}-\alpha_{C}s_{C}K|}{Ks_{i}}\Big{\rvert}$
Hence, as the function increases with increasing
$|\omega_{C}-\alpha_{C}s_{C}K|$, the minimum is achieved by
$\min_{C}(\alpha_{C}s_{C}K-\omega_{C})$.
### VI.3 Non-linear expansion of the Kuramoto-Sakaguchi model
The results obtained in Section VI are based on a linear approximation of the
Kuramoto-Sakaguchi model. We have derived them from the phase
synchronization requirement, assuming that frequency synchronization is
already achieved in the steady state. Nevertheless, when measuring the order
parameter with a large dispersion of natural frequencies or a low coupling
constant, we do not expect such a steady state. We therefore ask to what extent
the proposed values of the frustration parameters are also able to
enhance frequency synchronization in the original nonlinear Kuramoto-Sakaguchi
model:
$\dot{\theta}_{i}=\omega_{i}+K\sum_{j}W_{ij}\sin(\theta_{j}-\theta_{i}-\alpha_{i})$
(47)
We compare our results with those of Ref. [21], considering its Type II
frustration-parameter tuning for both the linear and the nonlinear Kuramoto
model, and we find that, although our approach does not target the enhancement
of frequency synchronization in the nonlinear regime, it is able to improve the
value of the order parameter in a similar fashion as in Ref. [21]. That work
considers the nonlinear Kuramoto-Sakaguchi model and seeks to improve the
number of nodes that fall into the recruitment condition so as to achieve the
same common oscillatory frequency. We consider the same network class as that
paper, as well as the same statistical study.
We make use of the expression in Eq.(39) to tune the set of $\vec{\alpha}$ for
a given configuration of random $\vec{\omega}$ and study the effect on the
synchronization of the system for different values of the coupling strength.
We consider two cases: the linear and the nonlinear model with natural
frequencies obtained from a uniform distribution $\omega_{i}\in[-1,1]$.
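A minimal sketch of such a simulation (not the code used for Figure 5; forward Euler integration, function names ours) integrates Eq.(47) and evaluates the order parameter; here it is demonstrated on an all-to-all network of identical oscillators, for which the classical Kuramoto limit $\alpha_{i}=0$ fully synchronizes:

```python
import numpy as np

def simulate_ks(W, omega, alpha, K=1.0, dt=0.01, steps=20000, seed=0):
    """Euler integration of the nonlinear Kuramoto-Sakaguchi model, Eq.(47):
    dtheta_i/dt = omega_i + K * sum_j W_ij * sin(theta_j - theta_i - alpha_i).
    Returns the order parameter r = |<exp(i*theta)>| at the final time."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(-np.pi, np.pi, len(omega))
    for _ in range(steps):
        diff = theta[None, :] - theta[:, None] - alpha[:, None]
        theta = theta + dt * (omega + K * np.sum(W * np.sin(diff), axis=1))
    return abs(np.mean(np.exp(1j * theta)))

N = 20
W = (np.ones((N, N)) - np.eye(N)) / N     # all-to-all, mean-field scaling
r = simulate_ks(W, omega=np.zeros(N), alpha=np.zeros(N))
# identical oscillators with alpha_i = 0 fully synchronize, so r approaches 1
```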
Figure 5: (Color online) Average order parameter, $\left\langle
r\right\rangle$, as a function of the coupling strength, $K$, for the linear
[in panel (a)] and the nonlinear [in panel (b)] Kuramoto-Sakaguchi (KS) model
on regular random graphs with homogeneous node degree $s_{i}=4$ and $N=100$
nodes. Natural frequencies are obtained from a uniform random distribution in
the range $\omega_{i}\in[-1,1]$. Each data point represents an average over
ten optimized configurations. We compare three types of tuning for the set of
frustration parameters, $\\{\alpha_{i}\\}$: the original Kuramoto dynamics or
$\alpha_{i}=0\ \forall i$ (spotted continuous red line); type II [21] KS
dynamics with the frustration parameters set to
$\sin(\alpha_{i})=-\omega_{i}/(Ks_{i})$ if $|\omega_{i}|<Ks_{i}$ and
$\sin(\alpha_{i})=\pm 1$ otherwise (squared dashed green line); and KS
dynamics with the frustration parameters determined by the derived linear
approximation in Eq.(39) (squared discontinuous purple line).
From Figure 5(a), the linear case of the Kuramoto-Sakaguchi model [see
Eq.(III)], our approach, derived from the analytic expression of the linear
approximation, improves on the analytic tuning of frustration parameters
suggested by Ref. [21]. This is because they seek an enhancement in the
number of nodes oscillating at the same frequency, $\Omega$, without
constraining the exact values of the phases that are reached. On the
contrary, we assume nodes are already frequency synchronized (without setting the
specific value of $\Omega$, as they do) and we look for the fully
synchronized state.
In Figure 5(b), the linear tuning (discontinuous purple line with square markers)
approaches the type II tuning (dashed green line with square markers) in the case of the
nonlinear Kuramoto-Sakaguchi model, even for small values of the coupling
strength. Hence, although the aim of our approach is not to achieve frequency
synchronization, the obtained tuning of the frustration parameters helps to
enhance it as well. This behavior is reminiscent of so-called explosive
synchronization (see Ref. [24] and references therein), since the
transition to the synchronized state is abrupt, as in a first-order
phase transition. We are adjusting the phase-lag parameters as a response to
the frequencies, which makes our scheme similar in spirit to the original proposal
of Ref. [25], the correlated degree-frequency framework.
## VII Conclusions
The Kuramoto-Sakaguchi model adds to the original Kuramoto model a homogeneous
phase lag, $\alpha$, between nodes, which promotes a phase shift between
oscillators. We consider a more general framework, in which the phase lag, or
frustration parameter, $\alpha_{i}$, is an intrinsic property of each
node. A very relevant question in oscillatory models is finding the conditions
for network synchronization. In the present work, we put forward a
methodology to obtain not only the desired synchronized state, but any
convenient phase configuration in the steady state, by means of a fine tuning
of the phase-lag, or frustration, parameters, $\\{\alpha_{i}\\}$. By
linearizing the most general model, we derive the analytical solution for the
frustration parameters that achieves any target phase configuration. The three
intrinsic parameters of the nodes in the model, namely the natural frequencies,
$\\{\omega_{i}\\}$, the frustration parameters, $\\{\alpha_{i}\\}$, and the
steady-state phases, $\\{\phi^{*}_{i}\\}$, are coupled by an equation that
allows one to tune them for a desired configuration. While the set
$\\{\phi^{*}_{i}\\}$ is uniquely determined, the set $\\{\alpha_{i}\\}$ has a
continuous spectrum of solutions.
A main result is that a given phase configuration can be accessed via a
continuous spectrum of frustration parameters, i.e., one phase and one
frustration parameter are left as free parameters. The nodes whose phase and
frustration-parameter values we choose are named the reference and control
nodes, respectively. Once the frustration parameters are tuned so as to obtain
the desired state, we define a cost function to assess the overhead that the
system incurs to achieve such a parameter configuration. Among all possible
tuning solutions $\\{\alpha_{i}\\}$, we select those which minimize this cost.
We develop the analytical solution of the cost function for the cases of the
symmetric configuration and the fully synchronized state and discuss them.
A key result is the solution to the cost minimization problem. For the
symmetric configuration, the nodes to be set as control nodes are those whose
degree is the weighted median of the sample, with a weight equal to the
inverse of the degree. For the fully synchronized state, on the other hand,
the control nodes are those whose natural frequency is the weighted median of
the sample, again with a weight equal to the inverse of the degree. An
extensive analysis of several cases is carried out in the text and a detailed
example on a toy network is provided. We highlight the connection made with
the nonlinear Kuramoto-Sakaguchi model: although our analysis is based on the
linear version of the model, we show that the proposed parameter tuning is
also able to enhance frequency synchronization, as done in Ref. [21]. We
stress that the question ‘among all the possible solutions, which is the one
that makes the system achieve a particular phase configuration with the
minimum required cost?’ is of particular relevance when we consider the
plausibly real nature of the system. If a real system needs to access a
particular phase configuration, which may be associated with a singular
function, then it will tend to minimize the effort, or cost, of doing so.
Further work can be done within this framework by performing real experiments
that measure the energy needed to access a particular configuration. Moreover,
other nonlinear oscillatory models can be analyzed and compared with the
Kuramoto-Sakaguchi model.
Other questions regarding the model are left open. We have considered the
coupled trio of natural frequencies, frustration parameters, and steady-state
phases. A natural extension would be to inspect the possibility of also tuning
the weights of the network edges in order to access a particular
configuration. The higher dimension of the latter with respect to the vectors
of parameters would require further assumptions about the model or the network
structure, such as positive weights or particular distributions or topologies.
Another research avenue would be to consider the effect of removing a node
from the network and the $\\{\alpha_{i}\\}$ set needed to minimize the effect
of the removal on the whole network.
Although we provide the analytical solution for the optimal choice of
parameters that minimizes the cost of achieving both the symmetric and the
fully synchronized configurations, the requirement of access to all nodes'
parameters may not be feasible in real-world networks. Our methodology is
quite general and the optimization procedure refers to a set of parameters to
be tuned. In particular, a finite subset of nodes with accessible phase-lag
parameters could be chosen (the choice could be restricted to any subset of
nodes), holding all other nodes unaltered. This would provide not a globally
optimal condition but a restricted, approximate one that can deal with a
subset of available nodes. A meaningful analysis would be to identify which
subset of nodes yields the closest approximate solution, and to relate those
nodes to their topological properties, although this question is beyond the
scope of this work.
###### Acknowledgements.
The authors acknowledge financial support from MINECO via Project No.
PGC2018-094754-B-C22 (MINECO/FEDER,UE) and Generalitat de Catalunya via Grant
No. 2017SGR341. G.R.-T. also acknowledges MECD for Grant No. FPU15/03053.
## Appendix A Step-by-step derivation of the example
Consider the network in Figure 1, with its Laplacian matrix:
$L=\begin{pmatrix}4&-1&-1&-1&0&0&-1\\\ -1&2&-1&0&0&0&0\\\ -1&-1&2&0&0&0&0\\\
-1&0&0&2&-1&0&0\\\ 0&0&0&-1&2&-1&0\\\ 0&0&0&0&-1&2&-1\\\ -1&0&0&0&0&-1&2\\\
\end{pmatrix}$
We develop the equation step by step:
$\displaystyle\sum_{j}L_{ij}\theta_{j}^{*}=\frac{\omega_{i}}{K}-\frac{\left\langle\omega\right\rangle}{K}+\left\langle\alpha
s\right\rangle-\alpha_{i}s_{i}\hskip 28.45274pt\forall i$
$\displaystyle\begin{pmatrix}4&-1&-1&-1&0&0&-1\\\ -1&2&-1&0&0&0&0\\\
-1&-1&2&0&0&0&0\\\ -1&0&0&2&-1&0&0\\\ 0&0&0&-1&2&-1&0\\\ 0&0&0&0&-1&2&-1\\\
-1&0&0&0&0&-1&2\\\ \end{pmatrix}\cdot\begin{pmatrix}\theta_{0}^{*}\\\
\theta_{1}^{*}\\\ \theta_{2}^{*}\\\ \theta_{3}^{*}\\\ \theta_{4}^{*}\\\
\theta_{5}^{*}\\\ \theta_{6}^{*}\end{pmatrix}=$
$\displaystyle\begin{pmatrix}\frac{\omega_{0}-\left\langle\omega\right\rangle}{K}\\\
\frac{\omega_{1}-\left\langle\omega\right\rangle}{K}\\\
\frac{\omega_{2}-\left\langle\omega\right\rangle}{K}\\\
\frac{\omega_{3}-\left\langle\omega\right\rangle}{K}\\\
\frac{\omega_{4}-\left\langle\omega\right\rangle}{K}\\\
\frac{\omega_{5}-\left\langle\omega\right\rangle}{K}\\\
\frac{\omega_{6}-\left\langle\omega\right\rangle}{K}\\\
\end{pmatrix}-\begin{pmatrix}\frac{6}{7}&-\frac{1}{7}&-\frac{1}{7}&-\frac{1}{7}&-\frac{1}{7}&-\frac{1}{7}&-\frac{1}{7}\\\
-\frac{1}{7}&\frac{6}{7}&-\frac{1}{7}&-\frac{1}{7}&-\frac{1}{7}&-\frac{1}{7}&-\frac{1}{7}\\\
-\frac{1}{7}&-\frac{1}{7}&\frac{6}{7}&-\frac{1}{7}&-\frac{1}{7}&-\frac{1}{7}&-\frac{1}{7}\\\
-\frac{1}{7}&-\frac{1}{7}&-\frac{1}{7}&\frac{6}{7}&-\frac{1}{7}&-\frac{1}{7}&-\frac{1}{7}\\\
-\frac{1}{7}&-\frac{1}{7}&-\frac{1}{7}&-\frac{1}{7}&\frac{6}{7}&-\frac{1}{7}&-\frac{1}{7}\\\
-\frac{1}{7}&-\frac{1}{7}&-\frac{1}{7}&-\frac{1}{7}&-\frac{1}{7}&\frac{6}{7}&-\frac{1}{7}\\\
-\frac{1}{7}&-\frac{1}{7}&-\frac{1}{7}&-\frac{1}{7}&-\frac{1}{7}&-\frac{1}{7}&\frac{6}{7}\end{pmatrix}\cdot$
$\displaystyle\cdot\begin{pmatrix}4&0&0&0&0&0&0\\\ 0&2&0&0&0&0&0\\\
0&0&2&0&0&0&0\\\ 0&0&0&2&0&0&0\\\ 0&0&0&0&2&0&0\\\ 0&0&0&0&0&2&0\\\
0&0&0&0&0&0&2\\\ \end{pmatrix}\cdot\begin{pmatrix}\alpha_{0}\\\ \alpha_{1}\\\
\alpha_{2}\\\ \alpha_{3}\\\ \alpha_{4}\\\ \alpha_{5}\\\ \alpha_{6}\\\
\end{pmatrix}=\begin{pmatrix}\frac{\omega_{0}-\left\langle\omega\right\rangle}{K}\\\
\frac{\omega_{1}-\left\langle\omega\right\rangle}{K}\\\
\frac{\omega_{2}-\left\langle\omega\right\rangle}{K}\\\
\frac{\omega_{3}-\left\langle\omega\right\rangle}{K}\\\
\frac{\omega_{4}-\left\langle\omega\right\rangle}{K}\\\
\frac{\omega_{5}-\left\langle\omega\right\rangle}{K}\\\
\frac{\omega_{6}-\left\langle\omega\right\rangle}{K}\\\ \end{pmatrix}+$
$\displaystyle\begin{pmatrix}-\frac{24}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}\\\
\frac{4}{7}&-\frac{12}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}\\\
\frac{4}{7}&\frac{2}{7}&-\frac{12}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}\\\
\frac{4}{7}&\frac{2}{7}&\frac{2}{7}&-\frac{12}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}\\\
\frac{4}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&-\frac{12}{7}&\frac{2}{7}&\frac{2}{7}\\\
\frac{4}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&-\frac{12}{7}&\frac{2}{7}\\\
\frac{4}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&-\frac{12}{7}\end{pmatrix}\cdot\begin{pmatrix}\alpha_{0}\\\
\alpha_{1}\\\ \alpha_{2}\\\ \alpha_{3}\\\ \alpha_{4}\\\ \alpha_{5}\\\
\alpha_{6}\\\ \end{pmatrix}$
If we set all natural frequencies to the same value, $\omega_{i}=\omega\
\forall i$, we obtain:
$\displaystyle\begin{pmatrix}4&-1&-1&-1&0&0&-1\\\ -1&2&-1&0&0&0&0\\\
-1&-1&2&0&0&0&0\\\ -1&0&0&2&-1&0&0\\\ 0&0&0&-1&2&-1&0\\\ 0&0&0&0&-1&2&-1\\\
-1&0&0&0&0&-1&2\\\ \end{pmatrix}\cdot\begin{pmatrix}\theta_{0}^{*}\\\
\theta_{1}^{*}\\\ \theta_{2}^{*}\\\ \theta_{3}^{*}\\\ \theta_{4}^{*}\\\
\theta_{5}^{*}\\\ \theta_{6}^{*}\end{pmatrix}=$
$\displaystyle=\begin{pmatrix}-\frac{24}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}\\\
\frac{4}{7}&-\frac{12}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}\\\
\frac{4}{7}&\frac{2}{7}&-\frac{12}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}\\\
\frac{4}{7}&\frac{2}{7}&\frac{2}{7}&-\frac{12}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}\\\
\frac{4}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&-\frac{12}{7}&\frac{2}{7}&\frac{2}{7}\\\
\frac{4}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&-\frac{12}{7}&\frac{2}{7}\\\
\frac{4}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&-\frac{12}{7}\end{pmatrix}\cdot\begin{pmatrix}\alpha_{0}\\\
\alpha_{1}\\\ \alpha_{2}\\\ \alpha_{3}\\\ \alpha_{4}\\\ \alpha_{5}\\\
\alpha_{6}\\\ \end{pmatrix}$
Now we choose $R=0$ and $C=1$, i.e., $\phi_{i}=\theta_{i}-\theta_{0}$ and
$\kappa_{i}=\alpha_{i}-\alpha_{1}$ for all $i$.
Let us write the change of variables explicitly (the red and the blue columns
are the ones we can remove thanks to the change of variables, as they no
longer affect the system of equations):
$\displaystyle\begin{pmatrix}\color[rgb]{1,0,0}4&-1&-1&-1&0&0&-1\\\
\color[rgb]{1,0,0}-1&2&-1&0&0&0&0\\\ \color[rgb]{1,0,0}-1&-1&2&0&0&0&0\\\
\color[rgb]{1,0,0}-1&0&0&2&-1&0&0\\\ \color[rgb]{1,0,0}0&0&0&-1&2&-1&0\\\
\color[rgb]{1,0,0}0&0&0&0&-1&2&-1\\\ \color[rgb]{1,0,0}-1&0&0&0&0&-1&2\\\
\end{pmatrix}\cdot\begin{pmatrix}\color[rgb]{1,0,0}\phi_{0}^{*}=0\\\
\phi_{1}^{*}\\\ \phi_{2}^{*}\\\ \phi_{3}^{*}\\\ \phi_{4}^{*}\\\
\phi_{5}^{*}\\\ \phi_{6}^{*}\end{pmatrix}=$
$\displaystyle=\begin{pmatrix}-\frac{24}{7}&\color[rgb]{0,0,1}\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}\\\
\frac{4}{7}&\color[rgb]{0,0,1}-\frac{12}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}\\\
\frac{4}{7}&\color[rgb]{0,0,1}\frac{2}{7}&-\frac{12}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}\\\
\frac{4}{7}&\color[rgb]{0,0,1}\frac{2}{7}&\frac{2}{7}&-\frac{12}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}\\\
\frac{4}{7}&\color[rgb]{0,0,1}\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&-\frac{12}{7}&\frac{2}{7}&\frac{2}{7}\\\
\frac{4}{7}&\color[rgb]{0,0,1}\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&-\frac{12}{7}&\frac{2}{7}\\\
\frac{4}{7}&\color[rgb]{0,0,1}\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&-\frac{12}{7}\end{pmatrix}\cdot$
$\displaystyle\cdot\begin{pmatrix}\kappa_{0}\\\
\color[rgb]{0,0,1}\kappa_{1}=0\\\ \kappa_{2}\\\ \kappa_{3}\\\ \kappa_{4}\\\
\kappa_{5}\\\ \kappa_{6}\\\
\end{pmatrix}+\begin{pmatrix}\frac{-12}{7}\alpha_{1}\\\
\frac{2}{7}\alpha_{1}\\\ \frac{2}{7}\alpha_{1}\\\ \frac{2}{7}\alpha_{1}\\\
\frac{2}{7}\alpha_{1}\\\ \frac{2}{7}\alpha_{1}\\\ \frac{2}{7}\alpha_{1}\\\
\end{pmatrix}$
If we look carefully at Eq. (A), we see that although the left-hand-side and
right-hand-side matrices are both singular, the first one has both column and
row sums equal to zero, while the second one has only column sums equal to
zero. This is reflected in the additional constant term that appears when
performing the change of variables for $\alpha_{i}$, which can be written as:
$b_{i}=\sum_{j}[M\cdot D_{s}]_{ij}\ \neq 0\text{ in general}$ (48)
We can choose any row to remove from either side. We choose row 0:
$\displaystyle\begin{pmatrix}2&-1&0&0&0&0\\\ -1&2&0&0&0&0\\\ 0&0&2&-1&0&0\\\
0&0&-1&2&-1&0\\\ 0&0&0&-1&2&-1\\\ 0&0&0&0&-1&2\\\
\end{pmatrix}\cdot\begin{pmatrix}\phi_{1}^{*}\\\ \phi_{2}^{*}\\\
\phi_{3}^{*}\\\ \phi_{4}^{*}\\\ \phi_{5}^{*}\\\ \phi_{6}^{*}\end{pmatrix}=$
$\displaystyle=\begin{pmatrix}\frac{4}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}\\\
\frac{4}{7}&-\frac{12}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}\\\
\frac{4}{7}&\frac{2}{7}&-\frac{12}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}\\\
\frac{4}{7}&\frac{2}{7}&\frac{2}{7}&-\frac{12}{7}&\frac{2}{7}&\frac{2}{7}\\\
\frac{4}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&-\frac{12}{7}&\frac{2}{7}\\\
\frac{4}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&-\frac{12}{7}\end{pmatrix}\cdot\begin{pmatrix}\kappa_{0}\\\
\kappa_{2}\\\ \kappa_{3}\\\ \kappa_{4}\\\ \kappa_{5}\\\ \kappa_{6}\\\
\end{pmatrix}+\begin{pmatrix}\frac{2}{7}\alpha_{1}\\\ \frac{2}{7}\alpha_{1}\\\
\frac{2}{7}\alpha_{1}\\\ \frac{2}{7}\alpha_{1}\\\ \frac{2}{7}\alpha_{1}\\\
\frac{2}{7}\alpha_{1}\\\ \end{pmatrix}$
In this situation, we can solve for the set $\vec{\tilde{\kappa}}$:
$\displaystyle\begin{pmatrix}\kappa_{0}\\\ \kappa_{2}\\\ \kappa_{3}\\\
\kappa_{4}\\\ \kappa_{5}\\\ \kappa_{6}\\\
\end{pmatrix}=\begin{pmatrix}\frac{4}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}\\\
\frac{4}{7}&-\frac{12}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}\\\
\frac{4}{7}&\frac{2}{7}&-\frac{12}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}\\\
\frac{4}{7}&\frac{2}{7}&\frac{2}{7}&-\frac{12}{7}&\frac{2}{7}&\frac{2}{7}\\\
\frac{4}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&-\frac{12}{7}&\frac{2}{7}\\\
\frac{4}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&\frac{2}{7}&-\frac{12}{7}\end{pmatrix}^{-1}\cdot$
$\displaystyle\cdot\left[\begin{pmatrix}2&-1&0&0&0&0\\\ -1&2&0&0&0&0\\\
0&0&2&-1&0&0\\\ 0&0&-1&2&-1&0\\\ 0&0&0&-1&2&-1\\\ 0&0&0&0&-1&2\\\
\end{pmatrix}\cdot\begin{pmatrix}\phi_{1}^{*}\\\ \phi_{2}^{*}\\\
\phi_{3}^{*}\\\ \phi_{4}^{*}\\\ \phi_{5}^{*}\\\
\phi_{6}^{*}\end{pmatrix}-\begin{pmatrix}\frac{2}{7}\alpha_{1}\\\
\frac{2}{7}\alpha_{1}\\\ \frac{2}{7}\alpha_{1}\\\ \frac{2}{7}\alpha_{1}\\\
\frac{2}{7}\alpha_{1}\\\ \frac{2}{7}\alpha_{1}\\\ \end{pmatrix}\right]$
This leads to the result:
$\begin{pmatrix}\kappa_{0}\\\ \kappa_{2}\\\ \kappa_{3}\\\ \kappa_{4}\\\
\kappa_{5}\\\
\kappa_{6}\end{pmatrix}=\begin{pmatrix}\frac{-2\alpha_{1}+3\phi_{1}+\phi_{3}+\phi_{6}}{4}\\\
\frac{3(\phi_{1}-\phi_{2})}{2}\\\
\frac{2\phi_{1}-\phi_{2}-2\phi_{3}+\phi_{4}}{2}\\\
\frac{2\phi_{1}-\phi_{2}+\phi_{3}-2\phi_{4}+\phi_{5}}{2}\\\
\frac{2\phi_{1}-\phi_{2}+\phi_{4}-2\phi_{5}+\phi_{6}}{2}\\\
\frac{2\phi_{1}-\phi_{2}+\phi_{5}-2\phi_{6}}{2}\end{pmatrix}$ (49)
Note that $\kappa_{1}=0$ and $\phi_{0}=0$.
Consider the following configuration:
$\vec{\tilde{\phi}}_{(R=0)}=(0.1,0.2,0.25,-0.2,-0.1,0.0)$
In this case:
$\vec{\tilde{\kappa}}_{(C=1)}=(0.1375-\alpha_{1}/2,-0.15,-0.35,0.275,0.0,-0.05)$
Note the definition
$\kappa_{i}\equiv\alpha_{i}-\alpha_{C}\Rightarrow\alpha_{i}=\kappa_{i}+\alpha_{C}$.
If we choose $\alpha_{C}=0\Rightarrow\alpha_{i}=\kappa_{i}$, then we can
include the value of the control node $C=1$:
$\vec{\tilde{\alpha}}=(0.1375,0.0,-0.15,-0.35,0.275,0.0,-0.05)$
Alternatively, we can choose any value we need for the control
node. For instance, if $\alpha_{C}=\alpha_{1}=0.1$:
$\vec{\tilde{\alpha}}=(0.1875,0.1,-0.05,-0.25,0.375,0.1,0.05)$
and the phase configuration is the same, as shown in Figure 6.
Figure 6: For the network in Fig 1, phases obtained after tuning the set of
frustration parameters to the symmetric configuration (four distinct
symmetries).
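The worked example above can be checked numerically. The following sketch (assuming NumPy; the variable names are ours) rebuilds the Laplacian of the toy network from its edge list and verifies that the tuned frustration parameters satisfy the linearized steady-state condition with equal natural frequencies, $\sum_{j}L_{ij}\phi_{j}^{*}=\left\langle\alpha s\right\rangle-\alpha_{i}s_{i}$.

```python
import numpy as np

# Edge list of the 7-node toy network (read off its Laplacian matrix)
edges = [(0, 1), (0, 2), (0, 3), (0, 6), (1, 2), (3, 4), (4, 5), (5, 6)]
N = 7
A = np.zeros((N, N))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
s = A.sum(axis=1)               # degrees: [4, 2, 2, 2, 2, 2, 2]
Lap = np.diag(s) - A            # graph Laplacian

# Target relative phases (reference node R = 0, so phi_0 = 0)
phi = np.array([0.0, 0.1, 0.2, 0.25, -0.2, -0.1, 0.0])

# Tuned frustration parameters with control node C = 1 and alpha_1 = 0
alpha = np.array([0.1375, 0.0, -0.15, -0.35, 0.275, 0.0, -0.05])

# Linearized condition for equal natural frequencies:
#   sum_j L_ij phi_j = <alpha s> - alpha_i s_i
lhs = Lap @ phi
rhs = np.mean(alpha * s) - alpha * s
assert np.allclose(lhs, rhs)
print("tuned frustration parameters reproduce the target phases")
```

The check passes for the tuned vector $\vec{\tilde{\alpha}}$ quoted above, confirming the step-by-step derivation.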
## Appendix B Mathematical solution of the cost optimization problem and
intuitive insights
In order to gain a more intuitive understanding of the analytical expression
and solution of the considered cost function, we analyze the continuous case.
### B.1 Symmetric configuration case
Considering the symmetric configuration case and choosing $\alpha_{C}=0$, the
continuous optimization problem can be written as
$\displaystyle\frac{\partial e_{T}(C)}{\partial
s_{C}}=\frac{\partial}{\partial
s_{C}}|\alpha_{h}|\sum_{i}^{N-1}\Big{\lvert}1-\frac{s_{C}}{s_{i}}\Big{\rvert}=$
$\displaystyle|\alpha_{h}|\sum_{i}^{N-1}\frac{\text{sgn}(s_{C}-s_{i})}{s_{i}}$
(50)
Equation (50) is based on the function
$f(x)=\Big{\lvert}\frac{a-x}{a}\Big{\rvert},\qquad a,x>0$ (51)
which is depicted in Figure 7 for different values of $a$ and the sum of all
of them.
Figure 7: Three examples of the general function
$f(x)=\lvert\frac{a-x}{a}\rvert$, with $a=2$, $a=3$ and $a=4$, and the
resulting sum of them.
Regardless of the set of $a_{i}$ values, the sum function $\sum_{i}f(x,a_{i})$
(see the example in the black line in Figure 7) is a convex, piecewise-linear
function and has a unique minimum, which is attained at one of the $a_{i}$
values.
In order to assess the value of $a_{i}$ where the minimum is located, we
compute the derivative of Eq.(51):
$\frac{df(x)}{dx}=\frac{\text{sgn}(x-a)}{a}$ (52)
and hence, $d\sum_{i}f(x,a_{i})/dx=\sum_{i}\text{sgn}(x-a_{i})/a_{i}$, which
is depicted in Figure 8.
Figure 8: Derivative of the function
$f(x)=\lvert\frac{2-x}{2}\rvert+\lvert\frac{3-x}{3}\rvert+\lvert\frac{4-x}{4}\rvert$
defined in Figure 7. Red dashed line at $y=0$.
Notice that, although the derivative of the function is not defined at the
values $x=a_{i}$, it changes sign when moving from $x<3$ to $x>3$; hence, the
minimum is located at this value of $a_{i}$.
To conclude, Eq. (50) behaves in the same way as the function defined in
Eq. (51) and hence displays only one minimum, which is attained at the $s_{i}$
where the derivative changes sign.
Alternatively, and as explained in the main text, we can understand the
minimization problem as part of a general framework. The minimization of the
cost in Eq. (50) is equivalent to the minimization of the sum of absolute
values of the relative errors:
$\sum_{i}^{N-1}\Big{\lvert}1-\frac{s_{C}}{s_{i}}\Big{\rvert}=\sum_{i}^{N-1}\Big{\lvert}\frac{s_{C}-s_{i}}{s_{i}}\Big{\rvert}=\sum_{i}^{N-1}|\mathcal{E}_{i}|$
(53)
The general problem can be written as [23]:
$\displaystyle\min_{d}\sum_{i=1}^{N}w_{i}|x_{i}-d|\ ;d>0$
with the solution:
$\displaystyle d=x_{m}\text{ where }m\equiv\min\\{i\ |\
\sum_{k=1}^{i}w_{k}\geq\sum_{k=i}^{n}w_{k}\\}$ $\displaystyle
i\in\\{1,...,n\\}$ (54)
In other words, the $d$ value that minimizes Eq. (B.1) corresponds to the
weighted median of the variable $x$, or the 50% weighted percentile.
Weighted median: for $n$ distinct ordered elements $x_{1},x_{2},...,x_{n}$
with positive weights $w_{1},w_{2},...,w_{n}$, the weighted median is the
element $x_{k}$ with
$k=\min\\{i\ |\ \sum_{j=1}^{i}w_{j}\geq\sum_{j=i}^{n}w_{j}\\}$.
Therefore, the solution is given by $x_{k}$, the value for which the sums of
the weights on the two sides of the pivot, $k$, are as balanced as possible.
Our problem is a special case of the discrete weighted medians with weights
$1/s_{i}$, which are a special case of the medians of a measure.
Following the example provided in Figure 7, $\\{x\\}=\\{2,3,4\\}$ and
$\\{w\\}=\\{1/2,1/3,1/4\\}$.
The weighted median is achieved for $k=2$, corresponding to $x_{2}=3$ with
weight $w_{2}=1/3$, since $1/2+1/3=5/6>1/3+1/4=7/12$. Conversely, if we take
$k=1$, i.e., $x_{1}=2$ and $w_{1}=1/2$, the condition on the weights does not
hold: $1/2\ngtr 1/2+1/3+1/4$.
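The weighted-median rule can be implemented directly. The helper below (our naming) uses the equivalent cumulative-weight formulation: the weighted median is the smallest element whose cumulative weight reaches half of the total weight (the two formulations can differ only in degenerate tie cases).

```python
def weighted_median(xs, ws):
    """Smallest x whose cumulative weight reaches half the total weight."""
    pairs = sorted(zip(xs, ws))
    total = sum(ws)
    acc = 0.0
    for x, w in pairs:
        acc += w
        if acc >= total / 2:
            return x

# Example from Figure 7: degrees {2, 3, 4} with weights 1/s_i
xs, ws = [2, 3, 4], [1 / 2, 1 / 3, 1 / 4]
print(weighted_median(xs, ws))          # -> 3

# The same value minimizes the cost sum_i |1 - d/s_i|
cost = lambda d: sum(abs(1 - d / x) for x in xs)
print(min(xs, key=cost))                # -> 3
```

Evaluating the cost at the three candidates gives $5/6$, $3/4$, and $4/3$, so the minimum is indeed attained at the weighted median, $s=3$.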
## References
* Nicolis and Nicolis [2007] G. Nicolis and C. Nicolis, _Foundations of Complex Systems: Nonlinear Dynamics, Statistical Physics, Information and Prediction_ (World Scientific, 2007).
* Barrat _et al._ [2008] A. Barrat, M. Barthelemy, and A. Vespignani, _Dynamical Processes on Complex Networks_ (Cambridge University Press, 2008).
* Pikovsky _et al._ [2001] A. Pikovsky, M. Rosenblum, and J. Kurths, _Synchronization: A universal concept in nonlinear sciences_ (Cambridge University Press, Cambridge, UK, 2001).
* Osipov _et al._ [2007] G. Osipov, J. Kurths, and C. Zhou, _Synchronization in Oscillatory Networks_, Springer Series in Synergetics (Springer Berlin Heidelberg, 2007).
* Arenas _et al._ [2008] A. Arenas, A. Díaz-Guilera, J. Kurths, Y. Moreno, and C. Zhou, Synchronization in complex networks, Physics Reports 469, 93 (2008).
* Dörfler and Bullo [2014] F. Dörfler and F. Bullo, Synchronization in complex networks of phase oscillators: A survey, Automatica 50, 1539 (2014).
* Boccaletti _et al._ [2018] S. Boccaletti, A. N. Pisarchik, C. I. del Genio, and A. Amann, _Synchronization_ (Cambridge University Press, 2018).
* [8] Y. Kuramoto, Self-entrainment of a population of coupled non-linear oscillators, in _International Symposium on Mathematical Problems in Theoretical Physics_ (Springer-Verlag) pp. 420–422.
* Sakaguchi and Kuramoto [1986] H. Sakaguchi and Y. Kuramoto, A Soluble Active Rotator Model Showing Phase Transitions via Mutual Entrainment, Progress of Theoretical Physics 76, 576 (1986).
* Acebrón _et al._ [2005] J. A. Acebrón, L. L. Bonilla, C. J. Pérez Vicente, F. Ritort, and R. Spigler, The Kuramoto model: A simple paradigm for synchronization phenomena, Reviews of Modern Physics 77 (2005).
* Liu and Barabási [2016] Y.-Y. Liu and A.-L. Barabási, Control principles of complex systems, Rev. Mod. Phys. 88, 035006 (2016).
* Frolov _et al._ [2020] N. Frolov, V. Maksimenko, S. Majhi, S. Rakshit, D. Ghosh, and A. Hramov, Chimera-like behavior in a heterogeneous kuramoto model: The interplay between attractive and repulsive coupling, Chaos: An Interdisciplinary Journal of Nonlinear Science 30, 081102 (2020).
* Kuramoto [2003] Y. Kuramoto, _Chemical oscillations, waves, and turbulence_ (Dover Publications, 2003) p. 156.
* Nicosia _et al._ [2013] V. Nicosia, M. Valencia, M. Chavez, A. Díaz-Guilera, and V. Latora, Remote Synchronization Reveals Network Symmetries and Functional Modules, Physical Review Letters 110, 174102 (2013).
* Nishikawa and Motter [2016] T. Nishikawa and A. E. Motter, Symmetric states requiring system asymmetry, Physical Review Letters 117, 114101 (2016).
* Molnar _et al._ [2020] F. Molnar, T. Nishikawa, and A. E. Motter, Network experiment demonstrates converse symmetry breaking, Nature Physics 16, 351 (2020).
* Zhang and Motter [2020] Y. Zhang and A. E. Motter, Symmetry-independent stability analysis of synchronization patterns, SIAM Rev. 62, 817 (2020).
* Rosell-Tarragó and Díaz-Guilera [2020] G. Rosell-Tarragó and A. Díaz-Guilera, Functionability in complex networks: Leading nodes for the transition from structural to functional networks through remote asynchronization, Chaos 30, 062315 (2020).
* Lei _et al._ [2001] X. Lei, X. Li, and D. Povh, A nonlinear control for coordinating TCSC and generator excitation to enhance the transient stability of long transmission systems, Electric Power Systems Research 59, 103 (2001).
* Acharya _et al._ [2007] U. R. Acharya, K. P. Joseph, N. Kannathal, L. C. Min, and J. S. Suri, Heart Rate Variability, in _Advances in Cardiac Signal Processing_ (Springer Berlin Heidelberg, Berlin, Heidelberg, 2007) pp. 121–165.
* Brede and Kalloniatis [2016] M. Brede and A. C. Kalloniatis, Frustration tuning and perfect phase synchronization in the Kuramoto-Sakaguchi model, Physical Review E 93, 062315 (2016).
* Skardal _et al._ [2014] P. S. Skardal, D. Taylor, and J. Sun, Optimal synchronization of complex networks, Physical Review Letters 113, 144101 (2014).
* Semsar-Kazerooni and Khorasani [2009] E. Semsar-Kazerooni and K. Khorasani, Multi-agent team cooperation: A game theory approach, Automatica 45, 2205 (2009).
* D’Souza _et al._ [2019] R. M. D’Souza, J. Gómez-Gardeñes, J. Nagler, and A. Arenas, Explosive phenomena in complex networks, Advances in Physics 68, 123 (2019).
* Gómez-Gardeñes _et al._ [2011] J. Gómez-Gardeñes, S. Gómez, A. Arenas, and Y. Moreno, Explosive synchronization transitions in scale-free networks, Phys. Rev. Lett. 106, 128701 (2011).
# Vortex propagation and phase transitions in a chiral antiferromagnetic
nanostripe
Riccardo Tomasello Institute of Applied and Computational Mathematics, FORTH,
Heraklion, Crete, Greece Stavros Komineas Institute of Applied and
Computational Mathematics, FORTH, Heraklion, Crete, Greece Department of
Mathematics and Applied Mathematics, University of Crete, 70013 Heraklion,
Crete, Greece
###### Abstract
We study a vortex in a nanostripe of an antiferromagnet with easy-plane
anisotropy and interfacial Dzyaloshinskii-Moriya interaction. The vortex has
hybrid chirality, being Néel-type close to its center and Bloch-type away from it.
Propagating vortices can acquire velocities up to a maximum value that is
lower than the spin wave velocity. When the vortex is forced to exceed the
maximum velocity, phase transitions occur to a nonflat spiral, vortex chain,
and flat spiral, successively. The vortex chain is a topological configuration
stabilised in the stripe geometry. Theoretical arguments lead to the general
result that the velocity of localized excitations in chiral magnets cannot
reach the spin wave velocity.
## I Introduction
A wide range of materials present antiferromagnetic order, where neighboring
magnetic moments are coupled via a strong exchange interaction and are aligned
antiparallel. Antiferromagnets (AFMs) exhibit features, such as low magnetic
susceptibility, robustness against external fields and lack of stray fields,
that are favorable for the building blocks of spintronic devices Jungwirth2016
; Baltz2018 . They receive renewed interest because current techniques allow
for the antiferromagnetic order to be manipulated by spin-currents and to be
observed despite the lack of net magnetization Wadley2016 ; Grzybowski2017 ;
Moriyama2018 ; Bodnar2019 ; Baldrati2019 ; Shi2020 . This opens the way for a
number of potential applications including storage with picosecond switching
Cheng2015 ; Roy2016 ; Lopez-Dominguez2019 , THz oscillators Cheng2016 ;
Khymyn2017 ; Puliafito2019 , racetrack memory based on magnetic solitons such
as domain walls (DWs) Gomonay2016 ; Shiino2016 ; Sanchez-Tejerina2020 or
skyrmions Zhang2016 ; Barker2016 ; Gomonay2018 ; Salimath2020 , which can
achieve velocities larger than 1 km/s Gomonay2016 ; Shiino2016 ; Salimath2020
.
Some AFM materials, such as $\alpha-{\rm Fe}_{2}{\rm O}_{3}$ and ${\rm Ba}_{2}{\rm
CuGe}_{2}{\rm O}_{7}$, are characterized by easy-plane anisotropy which
supports the formation of vortices. They have been discussed theoretically in
infinite films IvanovSheka_PRL1994 ; Pereira1995 ; Ivanov1996 ; Bogdanov1998 ;
Komineas1998 and observed experimentally by imprinting techniques Wu2011 ;
Chmiel2018 . Despite that, they have received much less attention than DWs
or skyrmions or also than their ferromagnetic counterparts
1989_PRB_GouveaWysinBishopMertens ; PapanicolaouSpathis_NL1999 ;
2000_Science_Shinjo ; 2002_Science_Wachowiak ; WaeyenbergePuzic_Nat2006 ;
Yamada2007 ; Pribiag2007 ; Komineas2007 .
An extensive experimental investigation of an easy-plane AFM with the
Dzyaloshinskii-Moriya interaction (DMI) established spiral antiferromagnetic
order 2011_PRB_MuhlbauerZheludev ; 2012_PRB_MuhlbauerZheludev and a
subsequent theoretical analysis has shown the existence of two spiral phases
2002_PRB_ChovanPapanicolaou ; 2005_springer_ChovanPapanicolaou . For weak DMI,
the Néel is the ground state, but for stronger DMI the system enters a spiral
phase where all Néel vector components vary in space (nonflat spiral). Only
for strong enough DMI the Néel vector lies in a plane and rotates in space
thus giving a flat spiral. The nonflat spiral gives an intermediate phase that
is not there in the case of an easy-axis magnet BogdanovHubert_JMMM1999 .
We study theoretically vortices in easy-plane AFMs with an interfacial DMI. We
consider a stripe geometry as this is the most suitable for applications
involving shifting of magnetic information, while it will also give rise to
interesting effects on the magnetic structure. We calculate the magnetic
ground state and demonstrate that this induces a vortex with a mixed
chirality, i.e., Néel-type near the vortex core and Bloch-type away from it.
This unusual type of vortex will be referred to as a hybrid vortex.
We subsequently study propagating vortices. We show that a propagating vortex
shrinks along the direction of propagation, similarly to AFM DWs Shiino2016 ,
while it elongates along the perpendicular direction, similarly to AFM
skyrmions Salimath2020 ; KomineasPapanicolaou_SciPost2020 . The vortex can
acquire a maximum velocity beyond which it becomes unstable to periodic
configurations, thus giving rise successively to a nonflat spiral, a vortex
chain and a flat spiral. The spirals are extensions of states known within the
one dimensional model, but the vortex chain is a feature of the stripe
geometry. A theoretical explanation for the dynamical behavior is obtained and
it leads to the general result that the velocity of localized excitations in
chiral magnets cannot reach the spin wave velocity. Our results provide an
understanding of the statics and dynamics of vortices in chiral AFMs and could
be useful for the design of antiferromagnetic devices based on magnetic
solitons.
Figure 1: Static hybrid vortex in a stripe with width $w=10$. The length of
the numerical mesh along $x$ is $L=100$. (a), (b), (c) Vector plot of the
static hybrid vortex for three values of the DMI parameter. Vectors show the
projection of the Néel vector on the plane, $(n_{1},n_{2})$, while the
component $n_{3}$ is shown by a color code. (d), (e), (f) The components of
$\bm{n}$ along the line in the center of the stripe ($y=0$) for the
configurations shown in (a), (b), (c), respectively. The vortex core width is
shown by a green solid line.
## II The model and ground states
We consider an antiferromagnetic nanostripe with exchange, interfacial DMI and
easy-plane anisotropy. A continuum model is obtained for the normalized Néel
vector $\bm{n}=(n_{1},n_{2},n_{3})$ BaryakhtarIvanov_SJLTP1979 ;
KomineasPapanicolaou_NL1998 with the potential energy
$\displaystyle V=\int$
$\displaystyle\left[\frac{1}{2}(\partial_{\mu}\bm{n})\cdot(\partial_{\mu}\bm{n})\right.$
(1)
$\displaystyle\left.-\lambda\epsilon_{\mu\nu}\bm{\hat{e}}_{\mu}\cdot(\partial_{\nu}\bm{n}\times\bm{n})+\frac{1}{2}n_{3}^{2}\right]\mathrm{d}x\mathrm{d}y,$
where $\mu,\nu$ take the values 1,2, $\epsilon_{\mu\nu}$ is the totally
antisymmetric tensor, $\bm{\hat{e}}_{\mu}$ denote the unit vectors in the
respective directions, and $\lambda$ is a scaled DMI parameter. The equation
of motion is
$\displaystyle\bm{n}\times(\ddot{\bm{n}}-\bm{f})=0,$ (2)
$\displaystyle\bm{f}=\Delta\bm{n}+2\lambda\epsilon_{\mu\nu}\bm{\hat{e}}_{\mu}\times\partial_{\nu}\bm{n}-n_{3}\bm{\hat{e}}_{3}.$
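For reference, the potential energy (1) can be evaluated on a finite-difference grid. The sketch below (assuming NumPy; grid spacing, array layout, and function name are our choices) computes the exchange, interfacial DMI, and easy-plane anisotropy contributions for a configuration $\bm{n}$ sampled on a rectangular mesh.

```python
import numpy as np

def potential_energy(n, lam, dx=0.1):
    """Discretized version of the energy functional, Eq. (1).

    n   : array of shape (3, Nx, Ny), the normalized Neel vector on a grid
    lam : scaled DMI parameter lambda
    """
    dnx = np.gradient(n, dx, axis=1)    # d n / d x
    dny = np.gradient(n, dx, axis=2)    # d n / d y
    exchange = 0.5 * (dnx**2 + dny**2).sum(axis=0)
    # DMI density: -lam * eps_{mu nu} e_mu . (d_nu n x n)
    #            = -lam * [e_1 . (d_y n x n) - e_2 . (d_x n x n)]
    cy = np.cross(dny, n, axisa=0, axisb=0).transpose(2, 0, 1)
    cx = np.cross(dnx, n, axisa=0, axisb=0).transpose(2, 0, 1)
    dmi = -lam * (cy[0] - cx[1])
    anisotropy = 0.5 * n[2]**2
    return ((exchange + dmi + anisotropy) * dx * dx).sum()

# Sanity check: the uniform Neel state n = e_2 has zero energy
nx, ny = 40, 20
neel = np.zeros((3, nx, ny))
neel[1] = 1.0
print(potential_energy(neel, lam=0.3))
```

A relaxation of this discretized energy (e.g. by gradient descent with a normalization step) is one way to obtain static configurations such as the hybrid vortex of Figure 1, although the numerical method used in the paper is not specified here.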
Let us review the results for a one dimensional (1D) model with the energy (1)
and $\bm{n}=\bm{n}(x)$. Phase transitions occur at the two critical values of
the parameter 2002_PRB_ChovanPapanicolaou
$\lambda_{NF}=\frac{1}{2},\qquad\lambda_{F}\approx 0.705.$ (3)
We give schematically the three regimes separated by the critical values of
$\lambda$.
Néel $\xrightarrow{\;\lambda_{NF}\;}$ nonflat spiral $\xrightarrow{\;\lambda_{F}\;}$ flat spiral, as $\lambda$ increases.
For weak DMI, $\lambda<\lambda_{NF}$, the Néel state is the ground state (see
Ref. Tomasello2020 for a related model). The Néel vector lies in the easy
plane and, for definiteness, we will assume that this is
$\bm{n}=\bm{\hat{e}}_{2}$. Increasing $\lambda$, we enter an intermediate
phase in the form of a nonflat spiral at $\lambda=\lambda_{NF}$. The spiral
presents a continuous rotation of the projection of $\bm{n}$ on the $(13)$
plane as we move along the $x$ axis and, at the same time, the component
$n_{2}$ oscillates around a nonzero value. The period of the spiral tends to
infinity for $\lambda\to\lambda_{NF}$ while the component $n_{2}\to 1$ in the
same limit. As $\lambda$ increases, $n_{2}$ decreases and it vanishes at
$\lambda=\lambda_{F}$ where a flat spiral is obtained with $\bm{n}$ lying
fully and rotating on the $(13)$ plane. For $\lambda>\lambda_{F}$, the flat
spiral remains the ground state and the period of the spiral decreases with
increasing $\lambda$ 2002_PRB_ChovanPapanicolaou ; Tomasello2020 .
## III Vortex in a stripe
Let us now assume a stripe geometry. This extends to infinity along the $x$
axis and it has a width $w$ in the $y$ direction, $-w/2\leq y\leq w/2$.
We focus on the regime $\lambda<\lambda_{NF}$ where we expect a Néel state.
Any solution of Eq. (2) should satisfy the natural boundary condition
$\partial_{y}\bm{n}+\lambda\bm{\hat{e}}_{1}\times\bm{n}=0,\quad
y=\pm\frac{w}{2}.$ (4)
In the finite interval $-w/2<y<w/2$, two degenerate nontrivial ground
states with negative energy can be found, as shown in Appendix A. We denote
these $\bm{n}=\bm{n}_{\pm}$, where $\bm{n}$ is primarily aligned along
$\pm\bm{\hat{e}}_{2}$. In the case of the stripe, we extend the previous 1D
configuration in the $x$ direction and we have two degenerate ground states
where $\bm{n}$ does not depend on $x$, that is, $\bm{n}(x,y)=\bm{n}_{\pm}(y)$.
This is a quasi-uniform state where the Néel vector points primarily along
$\pm\bm{\hat{e}}_{2}$ and tilts out of the plane, in $\bm{\hat{e}}_{3}$, in
the regions close to the boundaries $y=\pm w/2$. One can say that the
boundary condition (4) makes $\bm{\hat{e}}_{2}$ an energetically favorable
axis.
We simulate the system numerically on a stripe domain with a long $x$
dimension, which typically contains $1000$ grid points with lattice spacing
0.1, giving a physical length of $100$. We vary the width of the stripe. We
impose Neumann boundary conditions at the ends of the numerical mesh in the
$x$ direction. In the $y$ direction, we use open boundary conditions at $y=\pm
w/2$. (In Appendix A, it is shown that these give the same results as the
natural boundary conditions.) A relaxation algorithm indeed converges to a
quasi-uniform state of the form $\bm{n}=\bm{n}_{\pm}(y)$, which does not depend
on $x$.
Vortices should be excited states on the quasi-uniform state in the regime
$\lambda<\lambda_{NF}$. Due to the form of the DMI, a vortex solution of model
(2) is expected to be of Néel type in an infinite film. The form of the ground
state forces us to assume in-plane domains oriented primarily along the
$\pm\bm{\hat{e}}_{2}$ on the left and right side of the stripe, respectively,
i.e., $\bm{n}(x\to\pm\infty,y)=\bm{n}_{\pm}(y)$, separated by an out-of-plane
domain wall in the center of the stripe. This ansatz is used as an initial
condition in our numerical relaxation method. We run simulations for different
widths $0pt$ and parameter values $\lambda$. A vortex is obtained as an
equilibrium state for stripes with width larger than a critical width that
depends on $\lambda$. For $0pt>4$, we obtain a vortex for all values of
$\lambda$.
Figure 1 shows the results of simulations on a $1000\times 100$ grid with
lattice spacing 0.1, giving physical dimensions $100\times 10$. In Fig. 1, the
entries (a), (b), (c) show vector plots of a static vortex in a stripe with
$w=10$ for three values of the DMI parameter $\lambda=0.2,0.3,0.4$. The
vortex is Néel close to the vortex core and it gradually becomes Bloch as we
go away from the core thus exhibiting a hybrid character. Starting from the
vortex core, the Néel vector goes towards the in-plane direction by rotating
in the $(13)$ as well as in the $(23)$ plane. This results in a vortex
configuration which is between Néel and Bloch near the vortex core, similarly
to what happens with Dzyaloshinskii DWs Thiaville2012 or skyrmions with
intermediate chirality Buttner2018 ; Olleros-Rodriguez2020 . The $(13)$
rotation is a consequence of the interfacial DMI, while the $(23)$ rotation is
a consequence of the boundary conditions which force the Néel vector to be
oriented primarily along $\bm{\hat{e}}_{2}$ in the far field. As we move
further from the vortex core, the magnetization becomes aligned with
$\pm\bm{\hat{e}}_{2}$ in opposite directions on the left and right side of the
stripe.
In Fig. 1, the entries (d), (e), (f) show the Néel vector profiles along the
line in the center of the stripe, $y=0$, corresponding to the vortices in
entries (a), (b), (c), respectively. Increasing the DMI parameter has two main
effects: (i) an increase of the vortex core width $L_{0}$, as also noted in
the Appendix of Ref. 2002_PRB_ChovanPapanicolaou , and (ii) a faster rotation
of the Néel vector towards $\bm{\hat{e}}_{2}$.
Figure 2: Energy $V$ above the ground state energy as a function of the stripe
width $w$ for the hybrid vortex and for the Néel vortex for $\lambda=0.3$.
The numerical results are given by symbols (rhombus, star) connected by solid
lines.
The vortex energy is finite in a stripe, in contrast to the logarithmically
diverging vortex energy in infinite films. The vortex energy above the ground
state as a function of the stripe width $w$ is shown in Fig. 2. We find
numerically a Néel-type vortex in the same stripe geometries by starting our
relaxation simulations with a Néel vortex as an initial state. Its energy,
shown in Fig. 2, is higher than the energy of the hybrid vortex for the whole
range of stripe widths $w$.
## IV Propagating vortex
We proceed to study the dynamics of the hybrid vortex. Let us assume that a
magnetic configuration is set into motion and we obtain $\bm{n}(x-vt)$
propagating along the axis of the stripe. We initially neglect the dependence
on $y$. Eq. (2) reduces to
$\bm{n}\times\left[(1-v^{2})\partial_{1}^{2}\bm{n}-2\lambda\bm{\hat{e}}_{2}\times\partial_{1}\bm{n}-n_{3}\bm{\hat{e}}_{3}\right]=0.$
(5)
The latter equation is discussed in Appendix B in connection with propagating
domain walls. Applying a rescaling $x\to x\sqrt{1-v^{2}}$, Eq. (5) takes the
form
$\bm{n}\times\left(\partial_{1}^{2}\bm{n}-2\frac{\lambda}{\sqrt{1-v^{2}}}\bm{\hat{e}}_{2}\times\partial_{1}\bm{n}-n_{3}\bm{\hat{e}}_{3}\right)=0,$
(6)
where a single combination of parameters appears. The phases of this 1D system
were explained earlier in the introduction. We have the following three cases.
(a) The Néel state for
$\frac{\lambda}{\sqrt{1-v^{2}}}<\lambda_{NF}\Rightarrow
v<\sqrt{1-\left(\frac{\lambda}{\lambda_{NF}}\right)^{2}}\equiv v_{NF}.$ (7)
(b) The non-flat spiral for
$\displaystyle\lambda_{NF}<\frac{\lambda}{\sqrt{1-v^{2}}}<\lambda_{F}\Rightarrow$
$\displaystyle\sqrt{1-\left(\frac{\lambda}{\lambda_{NF}}\right)^{2}}<v<\sqrt{1-\left(\frac{\lambda}{\lambda_{F}}\right)^{2}}.$
(8)
(c) The flat spiral for
$\frac{\lambda}{\sqrt{1-v^{2}}}>\lambda_{F}\Rightarrow
v>\sqrt{1-\left(\frac{\lambda}{\lambda_{F}}\right)^{2}}\equiv v_{F}.$ (9)
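The critical velocities in Eqs. (7) and (9) follow directly from the critical DMI values of Eq. (3). As a quick numerical illustration, the short sketch below evaluates them for the representative value $\lambda=0.4$ used in the figures:

```python
import math

LAMBDA_NF = 0.5    # Neel -> nonflat spiral, Eq. (3)
LAMBDA_F = 0.705   # nonflat -> flat spiral, Eq. (3)

def v_critical(lam, lam_c):
    """Velocity at which lam / sqrt(1 - v^2) reaches the critical value lam_c,
    cf. Eqs. (7) and (9)."""
    return math.sqrt(1.0 - (lam / lam_c) ** 2)

lam = 0.4
v_nf = v_critical(lam, LAMBDA_NF)   # Neel state stable for v < v_nf (= 0.6 here)
v_f = v_critical(lam, LAMBDA_F)     # flat spiral expected for v > v_f (~ 0.82)
```

Both velocities shrink toward zero as $\lambda$ approaches the respective critical value, and the window for the intermediate nonflat spiral closes as $\lambda\to\lambda_{NF}$ from below.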
Figure 3: (a) Vector plot for $\lambda=0.4$ for the propagating hybrid vortex
for velocity $v=0.60$. Plotting conventions are as in Fig. 1. (b) Vortex core
width $L_{x}$ in the direction of propagation for a propagating vortex as a
function of velocity $v$, for various values of $\lambda$, normalized to the
width of a static vortex $L_{0}$. The red solid line shows the expected result
for Lorentz-type contraction $L_{x}=\sqrt{1-v^{2}}$. The dashed lines mark the
maximum obtained velocities for the respective $\lambda$ values. (c) Vortex
core width $L_{y}$ in the $y$ axis for the propagating vortex as a function of
velocity $v$, normalized to the width of a static vortex $L_{0}$.
Using a numerical relaxation method KomineasPapanicolaou_SciPost2020 applied
to Eq. (5), we find hybrid vortices in a steady-state motion propagating along
the axis of the stripe with a range of velocities $v$. Figure 3(a) shows a
propagating vortex with velocity $v=0.6$. Starting from the static hybrid
vortex and increasing $v$, we find that the propagating vortex is contracted
along the $x$ direction and it is elongated along the $y$ direction.
Figure 4: Vector plots for parameter $\lambda=0.4$ for (a) a non-flat spiral
at velocity $v=0.78$, (b) a vortex chain at $v=0.79$ and (c) a flat spiral at
$v=0.90$. Plotting conventions are as in Fig. 1. (d)-(f) The components of
$\bm{n}$ along the line in the center of the stripe ($y=0$) for the
configurations shown in entries (a)-(c). The numerical mesh boundary, seen at
$x=\pm 50$ in entries (e), (f), has a negligible effect on the
configurations in the lattice interior.
Figure 3(b) shows the width $L_{x}$ of the propagating vortex in the $x$ axis
as a function of velocity for various values of $\lambda$, normalized to the
width $L_{0}$ of the static vortex. We define the width of the vortex core as
the distance between the positions where $n_{3}=0.5$. The width $L_{x}$, in
the direction of propagation, closely follows the law of Lorentz-type
contraction (shown by a solid line in the figure), even though the model is
not Lorentz invariant. Lorentz contraction is exactly followed by a
propagating DW as reported in Ref. Shiino2016 and reviewed in Appendix B. For
each $\lambda$, the vortex achieves a maximum velocity (marked by dashed
lines) as we explain below. Therefore, there is a minimum achievable vortex
width which decreases with decreasing $\lambda$. Figure 3(c) shows the width
$L_{y}$ of the vortex core in the $y$ direction. It increases with the
velocity further pronouncing the vortex elongation.
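The core-width definition used above (the distance between the positions where $n_{3}=0.5$) is straightforward to apply to any sampled profile. The sketch below illustrates it on a synthetic sech-shaped test profile (an assumption for the test, not simulation data); the same routine applied to simulated $n_{3}(x)$ profiles would yield $L_{x}$.

```python
import numpy as np

def core_width(x, n3, level=0.5):
    """Distance between the two crossings of n3 = level, found by
    linear interpolation on a sampled profile."""
    above = (n3 >= level).astype(int)
    idx = np.flatnonzero(np.diff(above))   # indices where the profile crosses
    if len(idx) < 2:
        raise ValueError("profile does not cross the level twice")
    crossings = []
    for i in idx[:2]:
        # linear interpolation between samples i and i+1
        t = (level - n3[i]) / (n3[i + 1] - n3[i])
        crossings.append(x[i] + t * (x[i + 1] - x[i]))
    return abs(crossings[1] - crossings[0])

x = np.linspace(-10.0, 10.0, 4001)
L0 = 1.3
profile = 1.0 / np.cosh(x / L0)       # sech-shaped test core
width = core_width(x, profile)        # analytically 2 * L0 * log(2 + sqrt(3))
```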
When the velocity exceeds the value $v_{NF}(\lambda)$ in Eq. (7), we expect a
nonflat spiral to develop based on the reasoning given following Eq. (6). The
numerical simulations show that this actually happens in the case of the
stripe at a higher velocity. Figure 4(a) shows the nonflat spiral that is
nucleated, for $\lambda=0.4$, when a single vortex is set into motion with a
velocity $v=0.78$. The vortex has survived in the stripe center and it is
strongly elongated in the $y$ direction. Figure 4(d) shows line plots of the
Néel vector components along the line $y=0$ in the center of the stripe. The
spiral configuration is obvious in the $n_{1},n_{3}$ components. The component
$n_{2}$ oscillates around nonzero values with opposite signs on the two sides
of the vortex. The configuration has the features of a DW on top of a spiral
state (or a defect in the periodic structure). Such a DW is connecting two
topologically distinct spatially modulated ground states and it has been
reported in Ref. 2004_APP_ChovanMarderPapanicolaou . Apart from the presence
of a vortex in the center of the stripe, the structure is different from the
ideal 1D nonflat spiral in that (a) $\bm{n}$ tilts out-of-plane close to the
stripe boundaries and (b) the spiral structure is different close to the
stripe boundaries than in the stripe center as seen in the vector plot.
Indeed, edge (half) vortices are present at the boundaries of the stripe.
Further increasing the velocity, for large enough $\lambda$, we obtain a
periodic chain of vortices with opposite polarities, as shown in Fig. 4(b). It
appears that the edge vortices already present in Fig. 4(a) enter the stripe
and develop into full vortices in Fig. 4(b). The transition from the nonflat
spiral to the chain of vortices appears to be a discontinuous one. For
example, in the transition between Figs. 4(a),(b), one can see the sudden
change of the periodicity of the structure. We have a phase transition to a
lattice of topological solitons induced by the dynamics.
When the velocity exceeds the value $v_{F}(\lambda)$ in Eq. (9), we expect a
flat spiral to develop. This actually happens close to $v=v_{F}(\lambda)$ for
small $\lambda$ and for $v$ larger than $v_{F}(\lambda)$ for large $\lambda$.
Figure 4(c) shows a flat spiral in the stripe. The vortex gets elongated
across the width of the stripe and disappears from the configuration, while
the component $n_{2}$ is nearly zero. As a result, the configuration is close
to the 1D spiral but some dependence of $\bm{n}$ on $y$ is seen in the region
close to the boundaries. The transition from the chain of vortices to the
spiral appears to be a continuous one. Figure 5 shows the numerically found
velocities for the transitions to the nonflat spiral, the vortex chain and the
flat spiral for various values of the DMI parameter $\lambda$. The velocities
$v_{NF}(\lambda),v_{F}(\lambda)$ are plotted by solid lines for comparison.
Regarding the transition to the nonflat spiral, we attribute the deviations
from the expected transition velocity to the 2D nature of the structure
explained in connection with Fig. 4(a). In a more quantitative argument, the
boundary conditions favor the orientation of $\bm{n}$ in the
$\bm{\hat{e}}_{2}$ over the $\bm{\hat{e}}_{1}$ direction, and it is therefore
expected that the Néel state will persist longer, compared to the 1D model,
before it is destabilized to the nonflat spiral. Regarding the transition to
the flat spiral, this is happening at $v$ larger than $v_{F}$ clearly due to
the appearance of an additional state, that is, the vortex chain. For small
$\lambda$, no vortex chain is formed because the transition to the nonflat
spiral occurs at a high velocity $v\approx v_{NF}$ where the vortex is
very elongated.
## V Concluding remarks
We have studied vortices and their dynamics in an antiferromagnet with easy-
plane anisotropy and interfacial DMI. We have considered a nanostripe geometry
and applied a continuum model. The stripe boundary induces a quasi-uniform
ground state with the Néel vector lying primarily perpendicular to the
boundary. The form of the ground state forces the vortex to have a hybrid
character with both Néel and Bloch chirality. When propagating, the hybrid
vortex gives rise to phase transitions to a non-flat spiral, a vortex chain,
and a flat spiral successively as the velocity increases. While the spiral
phases are anticipated by a study of the 1D model, the vortex chain is a
feature of the stripe geometry. No vortex lattice has been found in this
system in an infinite film 2002_PRB_ChovanPapanicolaou .
Figure 5: (a) The dots mark the numerically found velocities for the
transition to the nonflat spiral (red squares), to the vortex chain (black
triangles), and to the flat spiral (blue circles) for various values of
$\lambda$. The red solid line shows the velocity $v_{NF}(\lambda)$ of Eq. (7)
and the blue solid line shows $v_{F}(\lambda)$ of Eq. (9), for comparison with
the numerical results.
## Acknowledgements
This work was supported by the project “ThunderSKY” funded by the Hellenic
Foundation for Research and Innovation and the General Secretariat for
Research and Technology, under Grant No. 871. We acknowledge discussions with
Michael Plexousakis on the numerical algorithms.
## Appendix A One-dimensional system with boundaries
We assume a one-dimensional system of length $w$; specifically, we consider
a time-independent Néel vector $\bm{n}=\bm{n}(y)$ in the interval $-w/2\leq
y\leq w/2$. This satisfies a reduced form of Eq. (2) of the main text,
$\bm{n}\times(\bm{n}^{\prime\prime}+2\lambda\bm{\hat{e}}_{1}\times\bm{n}^{\prime}-n_{3}\bm{\hat{e}}_{3})=0$
(10)
where the prime denotes differentiation with respect to $y$. The equation is
supplemented with the boundary condition
$\bm{n}^{\prime}+\lambda\bm{\hat{e}}_{1}\times\bm{n}=0,\quad
y=\pm\frac{w}{2}.$ (11)
We are looking for the ground state of this system.
An obvious solution of Eq. (10) is $\bm{n}=\bm{\hat{e}}_{1}$ and this also
satisfies the boundary condition. Its energy is $V=0$.
A state with negative energy can be found if we write
$n_{1}=0,\quad n_{2}=\cos\Theta,\quad n_{3}=\sin\Theta$ (12)
where we use the parametrization with the polar angle $\Theta$ measured from
the $\bm{\hat{e}}_{2}$ direction. Eq. (1) of the main text for the energy
reduces to the form
$V=\frac{1}{2}\int(\Theta^{\prime})^{2}\,\mathrm{d}y+\frac{1}{2}\int\sin^{2}\Theta\,\mathrm{d}y+\lambda\int\Theta^{\prime}\,\mathrm{d}y$
(13)
where the integrations extend over the interval $-w/2\leq y\leq w/2$.
Energy minimization, $\delta V/\delta\Theta=0$, gives
$(\Theta^{\prime})^{2}=\sin^{2}\Theta+\gamma^{2}$ (14)
where $\gamma$ is a constant. The boundary condition is
$\frac{\delta
V}{\delta\Theta^{\prime}}=0\Rightarrow\Theta^{\prime}=-\lambda,\qquad
y=\pm\frac{w}{2}$ (15)
and coincides with (11). In the present problem, we will assume
$\bm{n}(y=0)=\pm\bm{\hat{e}}_{2}$ in the center of the interval (the solution
will be symmetric with respect to the center). Thus, we confine the problem to
the interval $0\leq y\leq w/2$ and we are seeking solutions with the
boundary conditions
$\Theta(y=0)=0,\pi,\qquad\Theta^{\prime}\left(y=\textstyle{\frac{w}{2}}\right)=-\lambda.$
(16)
Figure 6: The ground state obtained as the solution of Eq. (10) for the
boundary conditions in Eq. (11) for system lengths $w=4$ and 10. A second
solution is obtained by $\bm{n}\to-\bm{n}$. We denote these two states by
$\bm{n}_{\pm}$.
For the case $\Theta(y=0)=0$, Eq. (14) has the implicit solution
$y=-\int_{0}^{\Theta}\frac{d\theta}{\sqrt{\sin^{2}\theta+\gamma^{2}}}$ (17)
where we have chosen the case $\Theta^{\prime}<0$ and thus $\Theta(y)$ is a
monotonically decreasing function of $y$. Fig. 6 shows the components of
$\bm{n}$ found numerically by solving Eq. (10), for two values of the system
length $w$. We have a tilting of the Néel vector out-of-plane near the edges
of the system. The system is symmetric with respect to the transformation
$\bm{n}\to-\bm{n}$. The two equivalent solutions will be denoted by
$\bm{n}_{\pm}$.
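One way to construct this profile numerically, alternative to relaxing Eq. (10) directly, is to treat the implicit solution (17) as a shooting problem for the constant $\gamma$: the boundary slope $\Theta^{\prime}=-\lambda$ is reached when $\sin^{2}\Theta=\lambda^{2}-\gamma^{2}$ (cf. Eq. (19)), and $\gamma$ is tuned by bisection so that this happens exactly at $y=w/2$. A minimal sketch (quadrature resolution and tolerances are arbitrary choices):

```python
import numpy as np

def half_width(gamma, lam, npts=20001):
    """y at which Theta' = -lam is reached, integrating Eq. (17)
    from Theta(0) = 0 down to the boundary angle of Eq. (19)."""
    theta_b = -np.arcsin(np.sqrt(lam**2 - gamma**2))
    t = np.linspace(theta_b, 0.0, npts)
    f = 1.0 / np.sqrt(np.sin(t)**2 + gamma**2)
    return np.sum(0.5 * (f[1:] + f[:-1])) * (t[1] - t[0])   # trapezoid rule

def solve_gamma(lam, w, iters=200):
    """Bisect on gamma so that the boundary condition sits at y = w/2.
    half_width decreases monotonically with gamma."""
    lo, hi = 1e-9, lam * (1.0 - 1e-9)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if half_width(mid, lam) > 0.5 * w:
            lo = mid     # boundary reached too far out: increase gamma
        else:
            hi = mid
    return 0.5 * (lo + hi)

# narrow stripe: gamma approaches lam, cf. Eq. (20)
gamma = solve_gamma(lam=0.3, w=0.2)
```

In the narrow-stripe limit this reproduces $\gamma\approx\lambda$, consistent with the expansion leading to Eq. (20).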
A remark of significant practical importance regarding the numerical
application of the boundary conditions is the following. We have found the
solutions of (10) by using the boundary conditions (11) and also by using open
boundary conditions inspired by the physical problem. In the latter case, the
edge spins have only one neighbor. The result for the states $\bm{n}_{\pm}$ is
the same in both cases indicating that the two boundary conditions are
equivalent (as shown in Appendix C). This could be anticipated as the natural
boundary conditions are indeed derived in order to describe free edges of the
material.
We further denote
$\Theta^{\prime}(y=0)=\gamma,\qquad\Theta\left(y=\textstyle{\pm\frac{w}{2}}\right)=\mp\Theta_{w},\;\pi\mp\Theta_{w}.$
(18)
At the boundaries, $y=\pm w/2$, Eq. (14) gives the tilting angle
$\Theta_{w}$,
$\sin^{2}\Theta_{w}=\lambda^{2}-\gamma^{2}.$ (19)
This also implies that $|\gamma|<\lambda$.
In the case of a narrow stripe, the angle is $|\Theta|\ll 1$ for all $y$
(assume the case $\Theta(y=0)=0$). Eq. (17) gives
$\Theta(y)=-\gamma y+O(y^{3}).$
To the same order of approximation, we have $\gamma\approx\lambda$ and
$\Theta(y)\approx-\lambda y.$ (20)
The maximum angle, attained at the boundary, is $\Theta_{w}=\lambda w/2$.
The condition for the validity of the result is $\lambda
w\ll\gamma\Rightarrow w\ll 1$. The energy (13) has the value
$V=-\frac{\lambda^{2}}{2}\,w,\qquad w\ll 1.$ (21)
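The narrow-stripe estimate can be checked by inserting the linear profile (20) directly into the energy (13) and integrating numerically; the parameter values below are arbitrary test choices.

```python
import numpy as np

lam, w = 0.3, 0.2
y = np.linspace(-w / 2, w / 2, 2001)
theta = -lam * y                       # linear profile, Eq. (20)
dtheta = np.gradient(theta, y)         # Theta' = -lam everywhere

# energy density of Eq. (13), integrated with the trapezoid rule
density = 0.5 * dtheta**2 + 0.5 * np.sin(theta)**2 + lam * dtheta
V = np.sum(0.5 * (density[1:] + density[:-1]) * np.diff(y))

# Eq. (21) predicts V = -lam^2 w / 2, up to small O(lam^4 w^3) corrections
```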
In the case of a wide stripe, we assume that the configuration is almost
uniform in the center, $\sin\Theta=0,\;\Theta^{\prime}=0$. We set $\gamma=0$
in Eq. (14) and this reduces to
$(\Theta^{\prime})^{2}=\sin^{2}\Theta.$ (22)
Eq. (22) has the domain wall solution
$\tan\frac{\Theta}{2}=-e^{y-y_{0}}$ (23)
where $y_{0}$ is a constant. The Néel vector components are
$n_{2}=-\tanh(y-y_{0}),\quad n_{3}=-\operatorname{sech}(y-y_{0}).$ (24)
The constant $y_{0}$ is determined by the boundary conditions (15),
$\Theta^{\prime}(y=\pm\textstyle{\frac{w}{2}})=-\lambda\Rightarrow\operatorname{sech}\left(\pm\textstyle{\frac{w}{2}}-y_{0}\right)=\lambda.$
(25)
At the boundaries, $|\Theta^{\prime}(\pm w/2)|=\lambda<1/2$, thus
$|y_{0}|>w/2$ (that is, the center of the domain wall solution lies beyond the
boundary). The form (24) applies to Fig. 6 for $w=10$.
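The algebra leading from Eq. (23) to Eq. (24) can be spot-checked numerically: with $\tan(\Theta/2)=-e^{y-y_{0}}$ one has $\Theta^{\prime}=-\operatorname{sech}(y-y_{0})$, so $(\Theta^{\prime})^{2}=\sin^{2}\Theta$ as required by Eq. (22). A short sketch (the sample points are arbitrary):

```python
import math

def theta(y, y0):
    """Eq. (23): tan(theta/2) = -exp(y - y0)."""
    return 2.0 * math.atan(-math.exp(y - y0))

def dtheta(y, y0, h=1e-6):
    """Central-difference derivative of theta with respect to y."""
    return (theta(y + h, y0) - theta(y - h, y0)) / (2.0 * h)

for y, y0 in [(-1.0, 0.4), (0.73, -1.2), (2.0, 0.0)]:
    u = y - y0
    # Eq. (22): (Theta')^2 = sin^2(Theta)
    assert abs(dtheta(y, y0)**2 - math.sin(theta(y, y0))**2) < 1e-8
    # Eq. (24): n2 = cos(Theta) = -tanh(u), n3 = sin(Theta) = -sech(u)
    assert abs(math.cos(theta(y, y0)) + math.tanh(u)) < 1e-12
    assert abs(math.sin(theta(y, y0)) + 1.0 / math.cosh(u)) < 1e-12
```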
Figure 7: Energy $V$ of the ground states $\bm{n}_{\pm}$ as a function of the
system length $w$ for $\lambda=0.3$. Symbols show numerical results
connected by a solid line. The slope of the curve for small $w$ is given by
Eq. (21).
We will now prove that the energy for all $\bm{n}_{\pm}$ is $V<0$. For
$0\leq\Theta\leq\pi/2$, Eq. (14) gives that $|\Theta^{\prime}|$ is an
increasing function of $\Theta$ and thus also an increasing function of $y$.
We have $|\Theta^{\prime}(y)|\leq\lambda$ with the maximum value attained at
the boundary, $|\Theta^{\prime}(y=0pt/2)|=\lambda$. We insert Eq. (14) in Eq.
(13) and then use the inequality for $|\Theta^{\prime}|$ to find that the
energy of the configuration is negative,
$V\leq\int(\Theta^{\prime})^{2}\,\mathrm{d}y+\lambda\int\Theta^{\prime}\,\mathrm{d}y<0,$
(26)
where we take into account that $\Theta^{\prime}<0$. Eq. (26) establishes that
the nonuniform states $\bm{n}_{\pm}$ have an energy lower than any uniform
state in the system. They are found numerically to be the lowest energy
states.
Fig. 7 shows the energy of the ground states $\bm{n}_{\pm}$ as a function of
the system length $w$. The dependence is linear for small $w$, following
Eq. (21), and it saturates to a negative value for larger $w$.
## Appendix B Propagating domain wall
Let us consider the 1D system that results from Eq. (2) of the main text when
we assume $\bm{n}=\bm{n}(x,t)$,
$\bm{n}\times\left(\ddot{\bm{n}}-\bm{n}^{\prime\prime}+2\lambda\bm{\hat{e}}_{2}\times\bm{n}^{\prime}+n_{3}\bm{\hat{e}}_{3}\right)=0$
(27)
where the prime denotes differentiation with respect to $x$. Denote by
$\bm{n}_{\rm DW}(x)=(\operatorname{sech}(x),0,\tanh(x))$
the static domain wall solution. This is stable for
$\lambda<\lambda_{NF}=\frac{1}{2}$ while for $\lambda>\lambda_{NF}$ it is
destabilized to the nonflat spiral.
A domain wall propagating with velocity $v$ satisfies
$\bm{n}\times\left[(1-v^{2})\bm{n}^{\prime\prime}-2\lambda\bm{\hat{e}}_{2}\times\bm{n}^{\prime}-n_{3}\bm{\hat{e}}_{3}\right]=0.$
(28)
The solution of the equation is obtained by a Lorentz transformation of the
static wall
$\bm{n}(x,t;v)=\bm{n}_{\rm DW}\left(\frac{x-vt}{\sqrt{1-v^{2}}}\right).$
Note that the DM term vanishes for the static or propagating domain wall
solutions and thus Lorentz invariance is preserved. The propagating solution
is valid for the range of parameter values where the Néel state is stable,
$\frac{\lambda}{\sqrt{1-v^{2}}}<\lambda_{NF}\Rightarrow
v<\sqrt{1-\left(\frac{\lambda}{\lambda_{NF}}\right)^{2}}\equiv v_{0}.$ (29)
As the velocity increases, the domain wall is contracted by a factor
$\sqrt{1-v^{2}}$ and it has a minimum width at $v=v_{0}$. For $v>v_{0}$ the
propagating domain wall is unstable and the system should turn to a
propagating spiral state.
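The statement that the DM term vanishes on the domain wall can be checked directly: for $\bm{n}_{\rm DW}=(\operatorname{sech}x,0,\tanh x)$ one finds $\bm{\hat{e}}_{2}\times\bm{n}^{\prime}=\operatorname{sech}(x)\,\bm{n}$, so $\bm{n}\times(\bm{\hat{e}}_{2}\times\bm{n}^{\prime})=0$. A short numerical sketch (the sample points are arbitrary):

```python
import numpy as np

def n_dw(x):
    """Static domain wall n_DW = (sech x, 0, tanh x)."""
    return np.array([1.0 / np.cosh(x), 0.0, np.tanh(x)])

def dn_dw(x):
    """Analytic derivative of n_DW with respect to x."""
    sech = 1.0 / np.cosh(x)
    return np.array([-sech * np.tanh(x), 0.0, sech**2])

e2 = np.array([0.0, 1.0, 0.0])
for x in (-2.0, 0.3, 1.7):
    n, dn = n_dw(x), dn_dw(x)
    assert abs(np.dot(n, n) - 1.0) < 1e-12            # |n| = 1
    dm = np.cross(e2, dn)                             # e2 x n' = sech(x) * n
    assert np.allclose(np.cross(n, dm), 0.0)          # DM term vanishes
```

The same cancellation holds after the Lorentz boost, since the boost only rescales the argument of $\bm{n}_{\rm DW}$.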
## Appendix C Open and natural boundary conditions
We consider a vector field $\bm{n}=\bm{n}(x,t)$ with components
$\bm{n}=(n_{1},n_{2},n_{3})$ and a constant length $|\bm{n}|=1$. It satisfies
the equation
$\bm{n}\times\left(\bm{n}^{\prime\prime}+2\lambda\bm{\hat{e}}_{2}\times\bm{n}^{\prime}-n_{3}\bm{\hat{e}}_{3}\right)=0$
(30)
where $\lambda$ is a parameter. The problem is defined in an interval
$-w/2\leq x\leq w/2$ and the boundary conditions (the so-called natural
boundary conditions) are
$\bm{n}^{\prime}+\lambda\bm{\hat{e}}_{2}\times\bm{n}=0,\qquad
x=\pm\frac{w}{2}.$ (31)
We discretise space and have a lattice of points $x_{i},\,i=1,\ldots,N$, with
lattice spacing $a$. On the lattice, the discrete version of Eq. (30) reads
$\bm{n}_{i}\times\left(\frac{\bm{n}_{i+1}+\bm{n}_{i-1}}{a^{2}}+\lambda\bm{\hat{e}}_{2}\times\frac{\bm{n}_{i+1}-\bm{n}_{i-1}}{a}-n_{i,3}\bm{\hat{e}}_{3}\right)=0$
(32)
for any site $i=1,\ldots,N$ of the lattice.
We consider the following two approaches for implementing the boundary
conditions.
### Open boundary conditions.
Motivated by the physical problem, we use open boundary conditions, that is,
we assume that there is no interaction to the right of the last site $i=N$,
and thus Eq. (32) gives at the last site, $i=N$,
$\bm{n}_{N}\times\left(\frac{\bm{n}_{N-1}}{a^{2}}-\lambda\bm{\hat{e}}_{2}\times\frac{\bm{n}_{N-1}}{a}-n_{N,3}\bm{\hat{e}}_{3}\right)=0.$
(33)
A similar equation is obtained for the first site $i=1$.
### Applying the boundary conditions to order $O(a)$.
The discrete form of (31) at $i=N$ reads
$\displaystyle\frac{\bm{n}_{N+1}-\bm{n}_{N}}{a}+\lambda\bm{\hat{e}}_{2}\times\bm{n}_{N+1}=0$
$\displaystyle\Rightarrow$
$\displaystyle\frac{\bm{n}_{N+1}}{a}+\lambda\bm{\hat{e}}_{2}\times\bm{n}_{N+1}=\frac{\bm{n}_{N}}{a},$
(34)
correct to order $O(a)$. The latter can be used in Eq. (32) to give (33). This
proves the equivalence of the open boundary conditions with the natural
boundary conditions.
## References
* (1) T. Jungwirth, X. Marti, P. Wadley, and J. Wunderlich, Nature Nanotechnology 11, 231 (2016).
* (2) V. Baltz, A. Manchon, M. Tsoi, T. Moriyama, T. Ono, and Y. Tserkovnyak, Reviews of Modern Physics 90, 015005 (2018).
* (3) P. Wadley, B. Howells, J. Elezny, C. Andrews, V. Hills, R. P. Campion, V. Novak, K. Olejnik, F. Maccherozzi, S. S. Dhesi, S. Y. Martin, T. Wagner, J. Wunderlich, F. Freimuth, Y. Mokrousov, J. Kune, J. S. Chauhan, M. J. Grzybowski, A. W. Rushforth, K. W. Edmonds, B. L. Gallagher, and T. Jungwirth, Science 351, 587 (2016).
* (4) M. J. Grzybowski, P. Wadley, K. W. Edmonds, R. Beardsley, V. Hills, R. P. Campion, B. L. Gallagher, J. S. Chauhan, V. Novak, T. Jungwirth, F. Maccherozzi, and S. S. Dhesi, Physical Review Letters 118, 057701 (2017).
* (5) T. Moriyama, K. Oda, T. Ohkochi, M. Kimata, and T. Ono, Scientific Reports 8, 14167 (2018).
* (6) S. Y. Bodnar, M. Filianina, S. P. Bommanaboyena, T. Forrest, F. Maccherozzi, A. A. Sapozhnik, Y. Skourski, M. Kläui, and M. Jourdan, Physical Review B 99, 140409 (2019).
* (7) L. Baldrati, O. Gomonay, A. Ross, M. Filianina, R. Lebrun, R. Ramos, C. Leveille, F. Fuhrmann, T. R. Forrest, F. Maccherozzi, S. Valencia, F. Kronast, E. Saitoh, J. Sinova, and M. Kläui, Physical Review Letters 123, 177201 (2019).
* (8) J. Shi, V. Lopez-Dominguez, F. Garesci, C. Wang, H. Almasi, M. Grayson, G. Finocchio, and P. Khalili Amiri, Nature Electronics 3, 92 (2020).
* (9) R. Cheng, M. W. Daniels, J.-G. Zhu, and D. Xiao, Physical Review B 91, 064423 (2015).
* (10) P. E. Roy, R. M. Otxoa, and J. Wunderlich, Physical Review B 94, 014439 (2016).
* (11) V. Lopez-Dominguez, H. Almasi, and P. K. Amiri, Physical Review Applied 11, 024019 (2019).
* (12) R. Cheng, D. Xiao, and A. Brataas, Physical Review Letters 116, 207603 (2016).
* (13) R. Khymyn, I. Lisenkov, V. Tiberkevich, B. A. Ivanov, and A. Slavin, Scientific Reports 7, 43705 (2017).
* (14) V. Puliafito, R. Khymyn, M. Carpentieri, B. Azzerboni, V. Tiberkevich, A. Slavin, and G. Finocchio, Physical Review B 99, 024405 (2019).
* (15) O. Gomonay, T. Jungwirth, and J. Sinova, Physical Review Letters 117, 017202 (2016).
* (16) T. Shiino, S.-H. Oh, P. M. Haney, S.-W. Lee, G. Go, B.-G. Park, and K.-J. Lee, Physical Review Letters 117, 087203 (2016).
* (17) L. Sánchez-Tejerina, V. Puliafito, P. Khalili Amiri, M. Carpentieri, and G. Finocchio, Physical Review B 101, 014433 (2020).
* (18) X. Zhang, Y. Zhou, and M. Ezawa, Scientific Reports 6, 24795 (2016).
* (19) J. Barker and O. A. Tretiakov, Physical Review Letters 116, 147203 (2016).
* (20) O. Gomonay, V. Baltz, A. Brataas, and Y. Tserkovnyak, Nature Physics 14, 213 (2018).
* (21) A. Salimath, F. Zhuo, R. Tomasello, G. Finocchio, and A. Manchon, Physical Review B 101, 024429 (2020).
* (22) B. A. Ivanov and D. D. Sheka, Phys. Rev. Lett. 72, 404 (1994).
* (23) A. R. Pereira and A. S. T. Pires, Physical Review B 51, 996 (1995).
* (24) B. A. Ivanov, A. K. Kolezhuk, and G. M. Wysin, Physical Review Letters 76, 511 (1996).
* (25) A. Bogdanov and A. Shestakov, Physics of the Solid State 40, 1350 (1998).
* (26) S. Komineas and N. Papanicolaou, Nonlinearity 11, 265 (1998).
* (27) J. Wu, D. Carlton, J. S. Park, Y. Meng, E. Arenholz, A. Doran, A. T. Young, A. Scholl, C. Hwang, H. W. Zhao, J. Bokor, and Z. Q. Qiu, Nature Physics 7, 303 (2011).
* (28) F. P. Chmiel, N. Waterfield Price, R. D. Johnson, A. D. Lamirand, J. Schad, G. van der Laan, D. T. Harris, J. Irwin, M. S. Rzchowski, C.-B. Eom, and P. G. Radaelli, Nature Materials 17, 581 (2018).
* (29) M. E. Gouvêa, G. M. Wysin, A. R. Bishop, and F. G. Mertens, Phys. Rev. B 39, 11840 (1989).
* (30) N. Papanicolaou and P. N. Spathis, Nonlinearity 12, 285 (1999).
* (31) T. Shinjo, T. Okuno, R. Hassdorf, K. Shigeto, and T. Ono, Science 289, 930 (2000).
* (32) A. Wachowiak, J. Wiebe, M. Bode, O. Pietzsch, M. Morgenstern, and R. Wiesendanger, Science 298, 577 (2002).
* (33) B. V. Waeyenberge, A. Puzic, H. Stoll, K. W. Chou, T. Tyliszczak, R. Hertel, M. Fähnle, H. Brückl, K. Rott, G. Reiss, I. Neudecker, D. Weiss, C. H. Back, and G. Schütz, Nature (London) 444, 461 (2006).
* (34) K. Yamada, S. Kasai, Y. Nakatani, K. Kobayashi, H. Kohno, A. Thiaville, and T. Ono, Nature materials 6, 269 (2007).
* (35) V. S. Pribiag, I. N. Krivorotov, G. D. Fuchs, P. M. Braganca, O. Ozatay, J. C. Sankey, D. C. Ralph, and R. A. Buhrman, Nature Physics 3, 498 (2007).
* (36) S. Komineas, Physical Review Letters 99, 117202 (2007).
* (37) S. Mühlbauer, S. N. Gvasaliya, E. Pomjakushina, and A. Zheludev, Phys. Rev. B 84, 180406 (2011).
* (38) S. Mühlbauer, S. Gvasaliya, E. Ressouche, E. Pomjakushina, and A. Zheludev, Phys. Rev. B 86, 024417 (2012).
* (39) J. Chovan, N. Papanicolaou, and S. Komineas, Phys. Rev. B 65, 064433 (2002).
* (40) J. Chovan and N. Papanicolaou, in Frontiers in Magnetic Materials, edited by A. V. Narlikar (Springer Berlin Heidelberg, Berlin, Heidelberg, 2005), pp. 347–384.
* (41) A. N. Bogdanov and A. Hubert, JMMM 195, 182 (1999).
* (42) S. Komineas and N. Papanicolaou, SciPost Phys. 8, 086 (2020).
* (43) I. V. Baryakhtar and B. A. Ivanov, Sov. J. of Low Temp. Phys. 5, 361 (1979).
* (44) S. Komineas and N. Papanicolaou, Nonlinearity 11, 265 (1998).
* (45) R. Tomasello, L. Sanchez-Tejerina, V. Lopez-Dominguez, F. Garescì, A. Giordano, M. Carpentieri, P. K. Amiri, and G. Finocchio, Physical Review B 102, 224432 (2020).
* (46) A. Thiaville, S. Rohart, É. Jué, V. Cros, and A. Fert, EPL (Europhysics Letters) 100, 57002 (2012).
* (47) F. Büttner, I. Lemesh, and G. S. D. Beach, Scientific Reports 8, 4464 (2018).
* (48) P. Olleros-Rodríguez, R. Guerrero, J. Camarero, O. Chubykalo-Fesenko, and P. Perna, ACS Applied Materials & Interfaces 12, 25419 (2020).
* (49) J. Chovan, M. Marder, and N. Papanicolaou, Acta Physica Polonica 126, 32 (2004).
# The BACCO simulation project: biased tracers in real space
Matteo Zennaro,1 Raul E. Angulo,1,2 Marcos Pellejero-Ibáñez,1 Jens Stücker,1
Sergio Contreras,1 and Giovanni Aricò1,3
1Donostia International Physics Center (DIPC), Paseo Manuel de Lardizabal, 4,
20018, Donostia-San Sebastián, Guipuzkoa, Spain.
2IKERBASQUE, Basque Foundation for Science, 48013, Bilbao, Spain.
3Universidad de Zaragoza, Pedro Cerbuna 12, 50009 Zaragoza, Spain
E-mail: matteo_zennaro001<EMAIL_ADDRESS>
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
We present an emulator for the two-point clustering of biased tracers in real
space. We construct this emulator using neural networks calibrated with more
than $400$ cosmological models in an 8-dimensional cosmological parameter space
that includes massive neutrinos and dynamical dark energy. The properties of
biased tracers are described via a Lagrangian perturbative bias expansion
which is advected to Eulerian space using the displacement field of numerical
simulations. The cosmology-dependence is captured thanks to a cosmology-
rescaling algorithm. We show that our emulator is capable of describing the
power spectrum of galaxy formation simulations for a sample mimicking that of
a typical Emission-Line survey at $z\sim 1$ with an accuracy of $1-2\%$ up to
nonlinear scales $k\sim 0.7h\,{\rm Mpc}^{-1}$.
###### keywords:
cosmology: theory – large-scale structure of Universe – methods: statistical –
methods: computational
pubyear: 2020; pagerange: The BACCO simulation project: biased tracers in
real space–B
## 1 Introduction
The observed spatial distribution of galaxies and quasars offers an extremely
valuable window to the physics of the universe. For instance, their clustering
as a function of scale depends on the early universe physics, where properties
of the primordial fluctuations as well as the parameters of the cosmological
model leave distinctive signatures. Similarly, the relation between cosmic
velocities and densities, encoded in the so-called redshift space distortions,
offers a pathway to constrain the nature of gravity and the growth of
structure (e.g. Kaiser, 1987; Guzzo et al., 2008). Finally, baryonic acoustic
oscillations (e.g. Eisenstein & Hu, 1998) can be employed as a standard ruler
to measure the expansion history of the universe, offering an opportunity to
better constrain the properties of dark energy (see e.g. Dodelson & Schneider,
2013; Taylor et al., 2013; Paz & Sánchez, 2015).
There is a large number of ongoing observational campaigns that will take on
the wealth of information encoded in clustering (e.g. Euclid, see the review
by Amendola et al. 2018, DESI, Levi et al. 2013; DESI Collaboration et al.
2016, and J-PAS Bonoli et al. 2020). These observations will map the position
and shapes of hundreds of millions of galaxies up to tens of thousands of
square degrees (e.g. Bartelmann & Schneider, 2001; Takada et al., 2014; Ivezić
et al., 2019; Dore et al., 2019; Mandelbaum, 2018). Although statistical
uncertainties and observational systematic errors will be crucial to keep
under control, the actual limitation in exploiting the surveys will arise from
the precision with which the observed distribution of the luminous tracers can
be modelled.
The challenge of modelling galaxy clustering arises from the nonlinearity of
the physics involved. Although the early seeds of structure formation can be
predicted accurately and efficiently using linearised Boltzmann-Einstein
equations (Lewis et al., 2000; Lesgourgues, 2011), the subsequent
gravitational evolution will create nonlinearities which will dominate on
small scales. Furthermore, galaxies and quasars are expected to form in
particular regions of the universe with an efficiency that depends on the
details of the galaxy formation physics involved.
The most accurate way to follow all these processes is provided by numerical
simulations (see, e.g. Kuhlen et al., 2012, for a review). In these,
fluctuations in the early universe are represented by a set of $N$-body
particles which are then evolved solving their equations of motion. Recent
advances in the field have allowed different codes and numerical approaches to
converge in the predictions for the nonlinear matter power spectrum to better
than 2% down to scales of $k\sim 10h\,{\rm Mpc}^{-1}$ (Schneider et al., 2016;
Springel et al., 2020; Garrison et al., 2018; Angulo et al., 2020). On the
other hand, simulations are computationally expensive, thus it is typically
not possible to carry them out for more than a handful of different choices of
cosmological parameters. Nevertheless, several approaches have been suggested
in the literature to speed up their production (e.g. Monaco et al., 2002;
Tassev et al., 2013; Izard et al., 2016), or to interpolate among the outputs
of simulations (Heitmann et al., 2014; Liu et al., 2018; Nishimichi et al.,
2019; DeRose et al., 2019; Giblin et al., 2019; Euclid Collaboration et al.,
2019; Wibking et al., 2019; Winther et al., 2019; Angulo et al., 2020; Euclid
Collaboration et al., 2020).
Another problem with modelling tracers directly from numerical
simulations concerns galaxy formation. Although there has been great progress
in understanding the formation of galaxies, correlations with halo properties
and the impact of various processes, predictions for galaxy properties are
still uncertain and model dependent. This has the drawback that, when compared
to observations, small uncertainties in galaxy modelling could heavily bias
cosmological constraints. An alternative could be marginalization over galaxy
formation. Though this is in principle possible, it would add much higher
computational demands, and a poor or incomplete modelling of galaxy properties
could still lead to biased cosmological inferences.
Perturbation theory offers a very attractive alternative. By solving
analytically the relevant fluid equations, predictions for the distribution of
matter down to quasilinear scales can be obtained efficiently (e.g. Bernardeau
et al., 2002) and with control of theoretical uncertainties (see e.g.
Chudaykin et al., 2020). The relation between biased tracers and the
underlying matter field can also be treated perturbatively (McDonald, 2009;
Desjacques et al., 2018a; Fujita & Vlah, 2020). By including all possible
dependences to a given order allowed by symmetries, it is possible to extend
the predictions for any biased tracers, without making an explicit connection
with particular galaxy formation physics. Recent advances in these fields have
significantly improved the accuracy of these predictions, which can typically
reach scales of $k\sim 0.2h\,{\rm Mpc}^{-1}$ (Baumann et al. 2012, Baldauf et
al. 2016, Vlah et al. 2016, Ivanov et al. 2020, d'Amico et al. 2020, Colas et
al. 2020, Nishimichi et al. 2020, Chen et al. 2020 ). Although this is a
remarkable achievement, these predictions still fall short with respect to the
observations and accuracy of future galaxy surveys (Blas et al. 2014, McQuinn
& White 2016). This approach has been extensively used under the assumption of
perturbed dynamics in cosmological parameter estimation (see e.g. Chuang et
al. 2017 and Pellejero-Ibanez et al. 2017).
In this paper we combine both approaches – numerical simulations and
perturbation theory – to create a framework that inherits the accuracy of
numerical simulations with the flexibility of a perturbative bias expansion,
which can reach even small nonlinear scales while being agnostic to galaxy
formation physics and details of any given particular observational surveys.
As shown by Modi et al. (2020), this approach has the potential to accurately
describe galaxy clustering down to smaller scales than those typically reached
by perturbation theory.
We are able to do this by combining the benefits of several recent
developments. On the simulation side, we employ large $N$-body simulations,
with accurate force and mass resolution, which have significant noise
suppression by using special initial conditions, and have been carefully
designed to be used in combinations with cosmology-rescaling algorithms. These
allow to densely cover a target parameter space with the equivalent of
hundreds to thousands of simulations. We combine these with feed forward
neural networks which allow us to quickly predict non-linear fields while
varying any cosmological parameter – including neutrinos and dynamical dark
energy – within defined regions in cosmological parameter space. On the
perturbative side, we employ a Lagrangian bias expansion up to second order
which captures dependences with the local density and tidal fields, and that
includes a higher-order derivative bias parameter, capturing the non-locality
of the galaxy formation process. Using the non-linear displacement field of
the rescaled simulations, the Lagrangian bias descriptions are advected to
Eulerian space where the different fields can be combined to make predictions
for the non-linear power spectrum of biased tracers.
This paper is structured as follows. In §2 we describe the variety of
numerical tools we employ, including numerical simulations and the power
spectrum calculation, and we validate the cosmology-rescaling approach. We
also describe the implementation of the Lagrangian bias expansion. In §3 we
describe a perturbative solution for the matter fields we consider, which we
will employ to complement our numerical approach. In §4 we provide details of
our emulation via neural networks, including its validation and estimation of
its accuracy. In §5 we present an application of our emulator by fitting the
power spectrum of galaxies mimicking a Star Formation Rate selected (SFR)
sample at $z=1$. We conclude in §6.
## 2 Numerical Methods
Here we present the main numerical methods underlying this work. Specifically,
we describe our simulations in §2.1 and how we model biased tracers in §2.2.
We finish by validating the cosmology-rescaling approach for our purposes in
§2.3.
### 2.1 Simulations
We will employ two sets of simulations. The first one, referred to as the
BACCO simulations, will be the core of our biased-tracer emulator. The second
suite will be employed to test the accuracy of our predictions.
Cosmology | $\Omega_{\rm cdm}$ | $\Omega_{\rm b}$ | $h$ | $n_{\rm s}$
---|---|---|---|---
Nenya | 0.265 | 0.050 | 0.60 | 1.01
Narya | 0.310 | 0.050 | 0.70 | 1.01
Vilya | 0.210 | 0.060 | 0.65 | 0.92
TheOne | 0.259 | 0.048 | 0.68 | 0.96
Table 1: Cosmological parameters of the four cosmologies simulated in the
BACCO project. All the cosmologies assume a flat geometry, no massive
neutrinos ($M_{\nu}=0$ eV), a dark energy equation of state with $w_{0}=-1$
and $w_{a}=0$, an amplitude of cold matter fluctuations $\sigma_{8}=0.9$, and
optical depth at recombination $\tau=0.0952$.
#### 2.1.1 BACCO Simulations
Our main suite of simulations corresponds to the core of the BACCO simulation
project. These correspond to 8 large $N$-body simulations specially designed
to cover a wide range of cosmological parameters when combined with cosmology
rescaling. These simulations were presented in Angulo et al. (2020); here we
simply provide a recap of their main characteristics.
Specifically, the BACCO simulations are a set of gravity-only simulations with
a box of sidelength $L=1440\,h^{-1}{\rm Mpc}$ resolved with $4320^{3}$
particles, which implies a mass resolution of $m_{p}\sim 3\times
10^{9}\,h^{-1}{\rm M_{\odot}}$. These simulations are carried out in pairs at
4 distinct sets of cosmological parameters. The parameter values, provided in
Table 1, were chosen so that, in combination, these simulations can
efficiently cover a region of approximately $10\sigma$ around the best fit
values obtained by the analysis of the Planck satellite. We refer to Contreras
et al. (2020a) for details on how these parameters were chosen. For each of
these cosmologies we carried out two simulations whose initial Fourier
amplitudes have been fixed but their initial phases are inverted. As shown by
Angulo & Pontzen (2016), this configuration allows for a dramatic reduction of
the noise in the power spectrum due to cosmic variance – by more than two
orders of magnitude for $k<0.1h\,{\rm Mpc}^{-1}$.
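The amplitude-fixing and phase-inversion just described can be illustrated in a few lines. The sketch below is a hypothetical 1D toy, not the BACCO initial-conditions generator: every Fourier amplitude is pinned to $\sqrt{P(k)}$, and the paired realisation flips each phase by $\pi$.

```python
import numpy as np

def paired_fixed_modes(pk_amplitude, rng):
    """Toy 1D illustration of "paired-&-fixed" initial conditions
    (Angulo & Pontzen 2016): Fourier amplitudes are fixed to the
    ensemble-mean value sqrt(P(k)) while phases are random, and the
    second member of the pair inverts every phase.  `pk_amplitude`
    holds target P(k) values per mode; returns the two complex
    mode-coefficient sets."""
    phases = rng.uniform(0.0, 2.0 * np.pi, size=len(pk_amplitude))
    amp = np.sqrt(np.asarray(pk_amplitude))
    modes_a = amp * np.exp(1j * phases)
    modes_b = amp * np.exp(1j * (phases + np.pi))  # inverted phases
    return modes_a, modes_b
```

Averaging a statistic over the pair cancels the leading cosmic-variance contribution linear in the initial modes, which is the source of the noise reduction quoted above.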
The gravitational evolution was carried out with an updated version of
L-Gadget3 (Springel, 2005; Angulo et al., 2020), and employing a Plummer-
equivalent softening length of $5h^{-1}{\rm kpc}$. Numerical parameters were
chosen so that the power spectrum displays a convergence better than 2% at
$k\sim 10\,h\,{\rm Mpc}^{-1}$. In fact, as shown in Angulo et al. (2020), such
configurations agree to better than 2% with other state-of-the-art codes in a
realization of the "Euclid Simulation Challenge" (Schneider et al., 2016).
#### 2.1.2 Test Suite of Simulations
A key aspect of our framework is the ability to employ the 4 BACCO
cosmologies to sample hundreds of different cosmologies using cosmology
rescaling.
To test the accuracy of such rescaling, we will employ a suite of $35$
independent $N$-body simulations. Each of these simulations consists of
$1536^{3}$ particles in a cubic volume of $512h^{-1}{\rm Mpc}$ on a side.
Naturally, these simulations feature a much smaller volume than our main BACCO
simulations; however, they have identical numerical parameters, mass
resolution, and force resolution. As for our main suite, these were carried
out with the latest version of L-Gadget3 and were initialised in pairs using
the approach of Angulo & Pontzen (2016) and employing 2LPT at $z=49$.
To further improve the accuracy of our comparison, we have carried out a
version of the BACCO simulations but with a volume and initial phase field
matching those of each of our test simulations. In this way, we reduce
significantly the role of cosmic variance and allow for an accurate testing.
The cosmologies of our test suite vary systematically one of eight
cosmological parameters $\boldsymbol{\vartheta}=\\{\Omega_{\rm m},\Omega_{\rm
b},\sigma_{8},n_{s},h,M_{\nu},w_{0},w_{a}\\}$, over the same range in which we
will build our emulator. When testing the accuracy of our predictions we will
compare the results of rescaling these BACCO simulations against simulations
carried out directly with the target cosmology.
Figure 1: Visualization of the different Lagrangian fields that have been
advected to Eulerian space. Each panel corresponds to a projection of the same
$100\times 281\times 25\,h^{-1}{\rm Mpc}$ volume. The first panel corresponds
to a uniform weighting – thereby recovering the dark matter density field,
whereas the other panels are weighted by different components of the
Lagrangian linear density field. One can see how the different Lagrangian bias
components emphasize different aspects of the large-scale structure – for
example the linear density weighting seems to emphasize filaments and clusters
in the cosmic web and the density squared weighting emphasizes the regions
associated with the most massive clusters (small red points).
Figure 2: Ratio showing the comparison of the power spectra of linear
Lagrangian fields advected to Eulerian coordinates as predicted by $N$-body
simulations and by cosmology-rescaled simulations. Each panel displays the
results for the cross-spectra of different linear fields, $P_{ij}$, as
indicated by the legend; whereas lines of different colours show different
cosmological models in our test suite, which includes dynamical dark energy
and massive neutrinos.
### 2.2 Modelling biased tracers
To model biased tracers, we will consider a Lagrangian bias expansion. We
refer to Desjacques et al. (2018b) for a review of the perturbative bias
formalism. As discussed in the introduction, the use of a general biasing
formalism will allow us to model the clustering of any tracer of the
underlying matter field, without making any strong assumptions about galaxy
formation physics.
Specifically, we will describe the overdensity of objects in Lagrangian space,
$\delta_{\rm g}(\boldsymbol{q})$, as a second-order expansion in the linear
matter overdensity $\delta(\boldsymbol{q})$ and include potentially non-local
dependences via a higher-order derivative $\nabla^{2}\delta(\boldsymbol{q})$.
Hence:
$\begin{split}\delta_{\rm
g}(\boldsymbol{q})&=1+b_{1}\delta(\boldsymbol{q})+b_{2}\left[\delta^{2}(\boldsymbol{q})-\left\langle\delta^{2}\right\rangle\right]\\\
&+b_{s^{2}}\left[s^{2}(\boldsymbol{q})-\left\langle
s^{2}\right\rangle\right]+b_{\nabla^{2}\delta}\nabla^{2}\delta(\boldsymbol{q}),\end{split}$
(1)
where $1$ is a homogeneous field;
$s^{2}(\boldsymbol{q})=s_{ij}(\boldsymbol{q})s_{ij}(\boldsymbol{q})$ is the
shear field, defined by the tidal tensor
$s_{ij}(\boldsymbol{q})=\partial_{i}\partial_{j}\Phi(\boldsymbol{q})-\delta_{ij}\delta(\boldsymbol{q})/3$,
with $\Phi(\boldsymbol{q})$ the local matter gravitational potential. Here,
$b_{1},b_{2},b_{s^{2}},$ and $b_{\nabla^{2}\delta}$ are the Lagrangian bias
parameters, which we assume to be scale-independent in Lagrangian space. Note
that it is possible to include higher order bias terms and derivatives (e.g.
Fujita et al., 2020), albeit they are expected to have a small contribution
for low and intermediate mass haloes (Abidi & Baldauf 2018, Lazeyras & Schmidt
2019).
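To make the operators of Eq. (1) concrete, the sketch below builds $\delta^{2}-\langle\delta^{2}\rangle$, $s^{2}$, and $\nabla^{2}\delta$ from a periodic linear density grid with FFTs. It is an illustration only: the grid size and normalisations are placeholder choices, and we assume the trace-free shear convention $s_{ij}(\boldsymbol{k})=(k_{i}k_{j}/k^{2}-\delta_{ij}/3)\,\delta(\boldsymbol{k})$, not the production BACCO code.

```python
import numpy as np

def lagrangian_bias_fields(delta, boxsize):
    """Quadratic Lagrangian bias operators from a periodic linear
    overdensity grid: delta^2 - <delta^2>, the shear s^2 = s_ij s_ij,
    and the Laplacian nabla^2 delta.  Illustrative sketch only."""
    n = delta.shape[0]
    k1 = 2.0 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    kvec = (kx, ky, kz)
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0              # avoid 0/0; that mode is zeroed anyway
    dk = np.fft.fftn(delta)
    dk[0, 0, 0] = 0.0              # remove the mean mode

    # delta^2(q) - <delta^2>
    d2 = delta**2
    d2 -= d2.mean()

    # s^2 from the trace-free tidal tensor (assumed convention, see above)
    s2 = np.zeros_like(delta)
    for i in range(3):
        for j in range(i, 3):
            sij_k = kvec[i] * kvec[j] / k2 * dk
            if i == j:
                sij_k = sij_k - dk / 3.0
            sij = np.fft.ifftn(sij_k).real
            s2 += (1.0 if i == j else 2.0) * sij**2
    s2 -= s2.mean()

    # nabla^2 delta = -k^2 delta(k), back-transformed
    lap = np.fft.ifftn(-k2 * dk).real
    return d2, s2, lap
```

For a single plane wave the Laplacian reduces to $-k^{2}\delta$ and the shear is diagonal, which makes the construction easy to check by hand.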
To describe the tracer overdensity field in Eulerian space, $\delta_{\rm
g}(\boldsymbol{x})$, the coordinate system needs to be advected
$\boldsymbol{x}=\boldsymbol{q}+\boldsymbol{\Psi}(\boldsymbol{q})$, where
$\boldsymbol{\Psi}(\boldsymbol{q})$ is referred to as the displacement field.
In terms of the advected fields $\delta_{i}(\boldsymbol{x})$, the power
spectrum can then be expressed in Eulerian space as
$P_{\rm
gg}=\sum_{i,j\in\\{1,\delta,\delta^{2},s^{2},\nabla^{2}\delta\\}}(2-\delta_{ij})b_{i}b_{j}\,\langle|\delta_{i}(\boldsymbol{k})\delta_{j}^{*}(\boldsymbol{k})|\rangle,$ (2)
where $\delta_{i}(\boldsymbol{k})$ corresponds to the Fourier transform of the
advected field $\delta_{i}(\boldsymbol{x})$, and it is completely determined
by 15 cross-spectra,
$P_{ij}\equiv\langle|\delta_{i}(\boldsymbol{k})\delta_{j}^{*}(\boldsymbol{k})|\rangle$.
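The combination of the 15 cross-spectra into the tracer power spectrum is a short weighted sum; a minimal sketch (field labels and data layout are our own choices, not the BACCO code):

```python
import numpy as np

FIELDS = ("1", "d", "d2", "s2", "lap")  # 1, delta, delta^2, s^2, nabla^2 delta

def pgg(pij, bias):
    """Tracer power spectrum from the 15 cross-spectra:
    P_gg = sum_{i<=j} (2 - delta_ij) b_i b_j P_ij.
    `pij[(i, j)]` holds the cross-spectrum of advected fields i and j
    (i <= j in FIELDS order); `bias` maps field names to Lagrangian bias
    values, with the homogeneous field fixed to b = 1."""
    b = dict(bias, **{"1": 1.0})
    total = 0.0
    for a in range(len(FIELDS)):
        for c in range(a, len(FIELDS)):
            i, j = FIELDS[a], FIELDS[c]
            factor = 1.0 if i == j else 2.0  # off-diagonal terms count twice
            total = total + factor * b[i] * b[j] * np.asarray(pij[(i, j)])
    return total
```

Setting all bias parameters except $b_{1}$ to zero recovers the matter-like combination $P_{11}+2P_{1\delta}+P_{\delta\delta}$ scaled by $b_{1}$, a convenient sanity check.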
Usually, these spectra are computed perturbatively up to, at least, the same
order as the bias expansion. This has the advantage that predictions can be
computed quickly and accurately as a function of cosmology, but only on
relatively large scales.
In this work, we follow a different strategy and directly compute the 15
cross-spectra relevant for the 2nd-order bias expansion using the results of
numerical simulations. As shown by Modi et al. (2020), this approach improves
significantly the reach of scales accessible to analytic bias expressions and
specifically, was able to accurately describe the clustering of mock HOD
galaxies down to scales of $0.6h\,{\rm Mpc}^{-1}$.
Operationally, to implement this approach we create the 5 relevant fields –
$1$, $\delta(q)$, $\delta^{2}$, $s^{2}(q)$, and $\nabla^{2}\delta(q)$ – in
Lagrangian coordinates using a grid of $1080^{3}$ points, using Fourier
amplitudes and phases matching those of our $N$-body simulations. We then
apply a Gaussian smoothing of size $0.75h^{-1}{\rm Mpc}$, mimicking possible
exclusion effects and the non-locality of structure formation. We then compute
a non-linear displacement field on the same Lagrangian field by following
simulation particles initially located at the same grid points. Finally, we
compute the corresponding advected fields by using a cloud-in-cell assignment
in Eulerian space where the position is only given by the cosmology-dependent
displacement field and where the weight is given by the value of the
respective bias expansion field.
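The weighted cloud-in-cell deposit of the last step can be sketched in one dimension. This toy version (our own simplification; the pipeline works on $1080^{3}$ grids in 3D) makes explicit that each tracer carries its Lagrangian bias-field value as a weight while its position comes from the displacement field alone.

```python
import numpy as np

def advect_cic_1d(weights, positions, n_grid, boxsize):
    """Deposit Lagrangian weights at displaced (Eulerian) positions with
    cloud-in-cell assignment on a periodic 1D grid.  Toy illustration of
    the advection step described in the text."""
    x = np.mod(positions, boxsize) / boxsize * n_grid
    i_left = np.floor(x).astype(int)
    frac = x - i_left                       # distance to the left cell centre
    grid = np.zeros(n_grid)
    # np.add.at performs an unbuffered scatter-add (handles repeated cells)
    np.add.at(grid, i_left % n_grid, weights * (1.0 - frac))
    np.add.at(grid, (i_left + 1) % n_grid, weights * frac)
    return grid
```

The deposit conserves the total weight exactly, so the mean of each advected field is fixed by its Lagrangian counterpart.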
In Figure 1 we show a slice through each of these 5 fields in Eulerian space:
the ‘homogeneous’ is the field weighted by a constant $1$, the ‘density’ is
weighted by $\delta(\boldsymbol{q})$, the ‘quadratic density’ field is
weighted by
$\left[\delta^{2}(\boldsymbol{q})-\left\langle\delta^{2}\right\rangle\right]$,
the ‘tidal field’ by $\left[s^{2}(\boldsymbol{q})-\left\langle
s^{2}\right\rangle\right]$, and the ‘Laplacian’ by
$\nabla^{2}\delta(\boldsymbol{q})$.
A key aspect of our work is that the displacement field connecting Lagrangian
and Eulerian spaces can be easily and accurately evaluated for different
cosmologies with a cosmology rescaling algorithm. We will explore this in the
following subsection.
Figure 3: Comparison of the power spectra of linear Lagrangian fields advected
to Eulerian coordinates at $z=0$, as predicted by analytic LPT calculations,
$P_{\rm LPT}$, and various kinds of simulations. Each panel shows results for
a different cross-spectra as indicated by the legend. In each panel coloured
lines show the results of using an ensemble of $1000$ realisations of
$L=1500h^{-1}{\rm Mpc}$ advected using 1st, 2nd, or 3rd-order LPT
displacements; whereas red symbols show the measurements in one of our $N$-body
simulations.
### 2.3 Cosmology Rescaling
To capture the cosmology dependence of the Lagrangian fields in Eulerian
coordinates, we will employ the so-called cosmology-rescaling algorithms.
The main idea of such approach is that the nonlinear structure in a given
cosmology can be mimicked by rescaling the outputs of a simulation carried out
in a nearby cosmology. The rescaling is performed by transforming the length
unit and by considering a different output time such that the linear variance
of the field as a function of scale matches that of the rescaled simulation.
Additionally, large-scale modes are corrected by further displacing each
simulation particle with the difference of the 2LPT displacements in the
original and target cosmologies. Velocities are modified in an analogous
manner.
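The variance-matching step can be illustrated with a toy cost function. This is a sketch of the idea only, assuming callable $\sigma(R)$ functions and a brute-force scan; the full algorithm of Angulo & White (2010) also optimises over the output redshift and uses a proper minimiser.

```python
import numpy as np

def rescaling_cost(s, sigma_orig, sigma_targ, radii):
    """Least-squares mismatch between the target linear mass variance
    sigma_targ(R) and the original simulation's variance evaluated at
    rescaled radii, sigma_orig(R / s).  Minimising over the length
    factor s fixes the unit rescaling."""
    r = np.asarray(radii)
    return np.sum((sigma_targ(r) - sigma_orig(r / s)) ** 2)

def best_length_factor(sigma_orig, sigma_targ, radii, s_grid):
    """Brute-force scan over candidate length factors."""
    costs = [rescaling_cost(s, sigma_orig, sigma_targ, radii) for s in s_grid]
    return s_grid[int(np.argmin(costs))]
```

For a power-law $\sigma(R)$ the recovered factor is exact, since the target can be written as a pure rescaling of the original.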
Cosmology-rescaling was originally proposed by Angulo & White (2010) and has
been extensively tested in multiple subsequent studies. In particular, in
Zennaro et al. (2019) we extended the approach to massive neutrinos; in
Contreras et al. (2020a) we showed that the power spectrum of dark
matter, haloes, and subhaloes can be retrieved to better than 3% up to $k\sim
5h\,{\rm Mpc}^{-1}$; and in Ondaro et al (in prep.) that the halo mass
function can be obtained to better than 2% over the range
$10^{12}-10^{15}\,h^{-1}{\rm M_{\odot}}$. Additionally, this approach was
employed by Angulo et al. (2020) to construct an emulator for the nonlinear
matter power spectrum, and by Aricò et al. (2020) to incorporate the effects
of baryonic physics such as star formation, gas cooling and feedback.
Here, we will quantify the performance of cosmology-rescaling in predicting
the cross-spectra of our Eulerian linear fields. For this, we will compare the
measurements in our suite of test simulations (c.f. §2.1) with those obtained
by rescaling one of our BACCO simulations.
We present our results in Fig. 2 for the 15 possible combinations of our 5
Lagrangian fields at $z=0$, as indicated by the legend in each panel. Each of
the lines displays the results for one of our test cosmologies. We recall that
these cover a parameter space roughly set by a $10\sigma$ region around
Planck’s best fit values. Firstly, we can see that cosmology rescaling
retrieves highly-accurate predictions, agreeing with those of direct $N$-body
simulations to better than 1-2% in most cases. Although not shown here, we
have checked that the precision of the method is even slightly better at
higher redshifts.
The large-scale clustering is expected to be dominated by the terms involving
the linear density and the homogeneous field – $\langle 11\rangle$, $\langle
1\delta\rangle$, and $\langle\delta\delta\rangle$ – these combinations are
particularly well recovered, to better than 1%, even on the smallest scales we
consider. On smaller scales, we expect fields involving $\delta^{2}$ – the
combinations $1\delta^{2}$, $\delta\delta^{2}$, and $\delta^{2}\delta^{2}$ – to also
have relevant contributions. Similarly, for such fields, we also obtain highly
accurate predictions, with deviations being typically below $\sim 2\%$ at
$k\sim 1h\,{\rm Mpc}^{-1}$. Note that these fields have particularly low
amplitudes on large scales, which is why the comparison becomes progressively
noisier as we consider smaller wavenumbers.
Finally, we expect fields involving the tidal field – $1s^{2}$, $\delta
s^{2}$, $s^{2}s^{2}$ – to have subdominant contributions on all scales.
Nevertheless, our predictions are still relatively accurate with typical
discrepancies of less than 5%. The least accurate predictions among all the
cases where we can reliably perform the comparison is $\delta^{2}s^{2}$, where
the cosmology-rescaling can over- or under-predict the results of our test
simulations by $5-10\%$.
We note that, even though these simulations have matching Fourier phases, some
cross-spectra are particularly noisy on large scales. As we will see in the
next sections, these are inherently noisy. These fields also display no
benefit from the “Paired-&-Fixed” method.
In summary, we have shown that, using a cosmology-rescaling, it is possible to
accurately predict all the 15 cross-spectra involved in the perturbative bias
expansion. This is remarkable as it opens up the possibility to compute
predictions for a densely-sampled cosmological parameter space which should
yield an accurate emulation. Nevertheless, numerical predictions for some
combination of fields are particularly noisy on large-scales (e.g.
$\delta^{2}\nabla^{2}\delta$, which is zero at linear order even on large scales),
which could pose a problem for a direct emulation. For this reason we will
combine our numerical results with analytic calculations using Lagrangian
perturbation theory, which will be the subject of our next section.
## 3 Lagrangian Perturbation Theory
In the previous section, we have introduced the Lagrangian bias expansion, and
in particular, the 15 cross-spectra of the different Lagrangian fields that
enter the biased-tracers power spectrum. In this section, we model these 15
terms theoretically, with the aim of using these LPT predictions to reduce the
noise present in the measurements from rescaled simulations, and provide us
with a way of reducing the dynamical range of the quantities we want to
emulate.
### 3.1 LPT predictions
To compare with quantities measured in simulations, we want each of the
components of the field presented in Eq. (1) to be evaluated at a given
Eulerian position $\boldsymbol{x}$. The field at that position will receive
contributions from all Lagrangian positions that, after being displaced, end
up in the same Eulerian position (e.g. Matsubara, 2008). Therefore,
$1+\delta_{F}(\boldsymbol{x})=\int\mathrm{d}^{3}\boldsymbol{q}F(\boldsymbol{q})\delta_{\rm
D}\left(\boldsymbol{x}-\boldsymbol{q}-\boldsymbol{\Psi}(\boldsymbol{q})\right),$
(3)
where $F(\boldsymbol{q})$ can be any of the components of the field.
We obtain the overdensities created in Eulerian space from displacing a
uniform distribution $F(\boldsymbol{q})=1$, a linear density field
$F(\boldsymbol{q})=\delta_{\rm L}(\boldsymbol{q})$, its square
$F(\boldsymbol{q})=\delta^{2}_{\rm L}(\boldsymbol{q})$, the tidal field
$F(\boldsymbol{q})=s^{2}(\boldsymbol{q})$ and the laplacian of the linear
density field $F(\boldsymbol{q})=\nabla^{2}\delta_{\rm L}(\boldsymbol{q})$.
Note that we expand the displacement field
$\boldsymbol{\Psi}\simeq\boldsymbol{\Psi}^{(1)}+\boldsymbol{\Psi}^{(2)}+\dots$,
retaining only its first-order term and expanding the exponential up to second order, so that
in Fourier space
$\begin{split}\delta_{F}(\boldsymbol{k})=\int\mathrm{d}^{3}\boldsymbol{q}e^{-i\boldsymbol{k}\cdot\boldsymbol{q}}F(\boldsymbol{q})\left\\{1-\boldsymbol{k}\cdot\boldsymbol{\Psi}^{(1)}(\boldsymbol{q})-\dfrac{1}{2}[\boldsymbol{k}\cdot\boldsymbol{\Psi}^{(1)}(\boldsymbol{q})]^{2}\right\\},\end{split}$
(4)
ignoring an additional Dirac’s delta $\delta_{D}(\boldsymbol{k})$ that would
only contribute to the wavemode $\boldsymbol{k}=0$. Note that the first order
displacement field can be easily connected to the linear overdensity in
Fourier space, as
$\boldsymbol{\Psi}^{(1)}(\boldsymbol{k})=-i\dfrac{\boldsymbol{k}}{k^{2}}\delta(\boldsymbol{k}).$
(5)
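The first-order displacement of Eq. (5) is straightforward to evaluate on a grid. The sketch below uses numpy's FFT convention ($e^{+ikq}$ in the inverse transform), so the overall sign of the $i\boldsymbol{k}/k^{2}$ prefactor differs from Eq. (5), which follows the $e^{-i\boldsymbol{k}\cdot\boldsymbol{q}}$ convention of Eq. (4); the convention-independent check is $\nabla\cdot\boldsymbol{\Psi}^{(1)}=-\delta$.

```python
import numpy as np

def zeldovich_displacement(delta, boxsize):
    """First-order (Zel'dovich) displacement from a linear overdensity
    grid, Psi^(1)(k) proportional to i k / k^2 delta(k), cf. Eq. (5).
    Sign fixed by requiring div Psi = -delta under numpy's FFT
    convention.  Illustrative sketch only."""
    n = delta.shape[0]
    k1 = 2.0 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                 # the k = 0 mode carries no displacement
    dk = np.fft.fftn(delta)
    dk[0, 0, 0] = 0.0
    return [np.fft.ifftn(1j * ki / k2 * dk).real for ki in (kx, ky, kz)]
```

For a single plane wave $\delta=A\cos(k_{f}x)$ the displacement is $\Psi_{x}=-(A/k_{f})\sin(k_{f}x)$, whose divergence is indeed $-\delta$.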
After advecting these fields with Eq. (4) we combine them into the respective
cross spectra, retaining only terms of order $(11)$ and $(22)$, obtaining
contributions of the form
$\begin{split}P_{ij}(k)&=A_{ij}(k)P(k)\\\
&+B_{ij}(k)\int\dfrac{\mathrm{d}^{3}\boldsymbol{p}}{(2\pi)^{3}}P(\boldsymbol{p})P(|\boldsymbol{k}-\boldsymbol{p}|)C_{ij}(\boldsymbol{p},\boldsymbol{k}),\end{split}$
(6)
where the prefactors $A_{ij}(k)$ and $B_{ij}(k)$, and the kernels
$C_{ij}(\boldsymbol{p},\boldsymbol{k})$ are different for each combination of
the fields and are reported in Appendix A.
Numerically, we compute all LPT integrals using a doubly-adaptive quadrature
integration, as implemented in the GNU Scientific Library, whose accuracy we
have tested against a multi-dimensional numerical integration. Each integral
takes approximately $0.2$ seconds. In Appendix B we compare our calculations
against two publicly-available codes, FastPT (McEwen et al., 2016; Fang et al.,
2017) and velocileptors (Chen et al., 2020).
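To illustrate the adaptive-refinement idea behind such quadrature (the pipeline itself uses the GSL routine, not this code), here is a minimal recursive adaptive Simpson integrator with Richardson correction:

```python
def adaptive_simpson(f, a, b, tol=1e-10, depth=30):
    """Recursive adaptive Simpson quadrature: subdivide until the local
    Richardson error estimate |S_left + S_right - S_whole| falls below
    15 * tol, then apply the correction term.  A stdlib-only stand-in
    illustrating the adaptive integration mentioned in the text."""
    def simpson(lo, hi):
        mid = 0.5 * (lo + hi)
        return (hi - lo) / 6.0 * (f(lo) + 4.0 * f(mid) + f(hi)), mid

    def recurse(lo, mid, hi, whole, tol, depth):
        left, lm = simpson(lo, mid)
        right, rm = simpson(mid, hi)
        if depth <= 0 or abs(left + right - whole) < 15.0 * tol:
            return left + right + (left + right - whole) / 15.0
        return (recurse(lo, lm, mid, left, tol / 2.0, depth - 1)
                + recurse(mid, rm, hi, right, tol / 2.0, depth - 1))

    whole, m = simpson(a, b)
    return recurse(a, m, b, whole, tol, depth)
```

Adaptivity matters here because the mode-coupling kernels of Eq. (6) vary strongly near $\boldsymbol{p}\approx\boldsymbol{k}$, where a fixed grid would waste or lack resolution.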
### 3.2 LPT validation
To validate our LPT expressions and their numerical evaluation, we will
compare them against a suite of realizations of an initially homogeneous
distribution advected to Eulerian space using Lagrangian perturbation theory
at different orders.
Specifically, we construct a suite of $1000$ realizations of the relevant
Lagrangian fields on $384^{3}$ grids in $L=1500h^{-1}{\rm Mpc}$ boxes, and
using a cosmology compatible with current observational constraints. These
grids are then advected to $z=0$ using either first, second, or third-order
Lagrangian perturbation theory, following the implementation of Michaux et al.
(2021).
We then measure the resulting cross-spectra using FFTs with $384^{3}$ grid
points. We show our results in Fig. 3 where we display the measured spectra
over our LPT analytic calculations. For comparison, we also display the
corresponding measurements from one of our main BACCO simulation as red
symbols.
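The FFT-based cross-spectrum measurement can be sketched compactly. Normalisation and binning choices below are our own simplifications for illustration, not the measurement code used in the paper.

```python
import numpy as np

def cross_spectrum(field_a, field_b, boxsize, n_bins=8):
    """Binned cross power spectrum <delta_a delta_b^*> of two periodic
    grids, analogous to the P_ij measurements in the text.  Modes are
    averaged in linear spherical k-bins up to the Nyquist frequency."""
    n = field_a.shape[0]
    vol = boxsize**3
    ak = np.fft.fftn(field_a) / n**3
    bk = np.fft.fftn(field_b) / n**3
    k1 = 2.0 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()
    raw = (ak * np.conj(bk)).real.ravel() * vol
    kny = np.pi * n / boxsize
    edges = np.linspace(0.0, kny, n_bins + 1)
    which = np.digitize(kmag, edges) - 1
    pk = np.full(n_bins, np.nan)
    kmean = np.full(n_bins, np.nan)
    for b in range(n_bins):
        sel = (which == b) & (kmag > 0)
        if sel.any():
            pk[b] = raw[sel].mean()
            kmean[b] = kmag[sel].mean()
    return kmean, pk
```

By construction the auto-spectrum is non-negative and the estimator is bilinear in the two fields, both of which are useful sanity checks.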
Firstly, we can see that for most 15 power spectra, our analytic calculations
and the 1LPT simulations agree remarkably well on large scales – the ratio
approaches unity for low wavenumbers. This is an important validation of our
calculations. Furthermore, for terms involving $1$ and $\delta$, there is
also agreement with higher-order versions of LPT and with the results of our
full $N$-body simulation.
In other cases, there is a disagreement between 1LPT and 2LPT simulations, with
the latter agreeing with the $N$-body results. This suggests that to predict
these quantities accurately, higher orders in the displacement need to be
included. Yet in other instances, mostly those involving $\delta^{2}$, the LPT
simulations appear to converge to a different large-scale limit than the
$N$-body result, which points to shell-crossing and nonlinearities on small
scales being important to correctly predict the amplitude on large-scales. We
leave further investigation of this for future work.
Nevertheless, the discrepancies on large scales appear in fields which are
already highly subdominant on large scales. Thus we conclude that our analytic
LPT calculations are sufficiently accurate to complement our numerical
measurements on large scales.
## 4 Emulating biased tracers
Figure 4: The cross spectra of various Lagrangian fields advected to Eulerian
space, $P_{ij}$ where $i,j=\\{1,\delta,\delta^{2},s^{2},\nabla^{2}\delta\\}$.
Symbols display the measurements employing a randomly-selected cosmology from
our training data at $z\sim 0$. We note that these measurements correspond to
cosmology-rescaled high-resolution simulations, which have been created with
the “Paired-&-Fixed” method. For comparison, solid lines show the predictions
of our analytic LPT calculation for each respective cross-spectrum.
We now describe the procedure we follow to construct an emulator for real-
space biased tracers. We first define our target cosmological parameter space
in §4.1, and then describe our core power spectrum data in §4.2. We continue
by discussing our neural network emulation strategy in §4.3 and finish by
quantifying its accuracy in §4.4.
### 4.1 Parameter Space
We will construct and validate our emulator over a hyper-volume in
cosmological parameter space defined by the ranges:
$\sigma_{8}\in[0.73,0.9]$
$\Omega_{\rm m}\in[0.23,0.4]$
$\Omega_{\rm b}\in[0.04,0.06]$
$n_{\rm s}\in[0.92,1.01]$
$h\in[0.6,0.8]$
$M_{\rm\nu}\,[{\rm eV}]\in[0.0,0.4]$
$w_{0}\in[-1.15,-0.85]$
$w_{\rm a}\in[-0.3,0.3]$ (7)
where $\sigma_{8}$ is the linear mass variance of cold matter in
$8\,h^{-1}{\rm Mpc}$ spheres; $\Omega_{\rm m}$ and $\Omega_{\rm b}$ are the
densities of cold matter and baryons in units of the critical density of the
Universe; $n_{\rm
s}$ is the primordial spectral index; $h$ is the dimensionless Hubble
parameter $h=H_{0}/(100\,{\rm km}\,{\rm s^{-1}}{\rm Mpc^{-1}})$; $M_{\rm\nu}$
is the mass of neutrinos in units of eV; and $w_{0}$ and $w_{\rm a}$ are
parameters describing the time-evolving dark-energy equation of state via
$w(a)=w_{0}+(1-a)\,w_{\rm a}$. We further consider a flat geometry and neglect
the impact of radiation in the background evolution.
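As a minimal illustration, the CPL parametrisation above can be evaluated as follows (a simple sketch; the function name is ours):

```python
def w_of_a(a, w0=-1.0, wa=0.0):
    """CPL (Chevallier-Polarski-Linder) dark-energy equation of state,
    w(a) = w0 + (1 - a) * wa, as used for the emulator parameter space."""
    return w0 + (1.0 - a) * wa

# At a = 1 (today) the equation of state reduces to w0.
print(w_of_a(1.0, w0=-0.9, wa=0.3))   # -0.9
```

Note that $w_{0}$ sets the value today ($a=1$) and $w_{\rm a}$ controls the drift towards higher redshift.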
We note that this parameter volume coincides with that of the non-linear
matter power spectrum and baryonic effects emulators of Angulo et al. (2020)
and Aricò et al. (2020). We also recall that our suite of test simulations –
with which we validated the cosmology-rescaling approach (cf. §4.4) – spans
the same range of parameters.
It is worth highlighting that although this represents a restricted parameter
space, it is significantly larger than that typically covered by emulation of
$N$-body simulations. This is because the cosmology-rescaling allows us to
densely sample the volume, thus keeping emulation errors under control.
Moreover, outside this parameter range it should be possible to extrapolate
gracefully using less precise methods; this would be acceptable because the
region beyond these boundaries is expected to be disfavoured by
large-scale-structure data.
### 4.2 Data
We sample our cosmology hyper-space with $400$ points generated with a Latin-
hypercube algorithm. For each point, we first select the simulation that would
yield the most accurate rescaling results (see Contreras et al., 2020a, for
details), then apply our rescaling approach at $10$ different output times
over $0<z<1.5$, and finally advect the corresponding Lagrangian fields.
For each rescaled output we measure the power spectra and compute the
quantities to be emulated as we will describe in the next two subsections.
This whole procedure takes approximately 2 CPU hours, with the largest
fraction spent in the power spectra calculation and in the creation of the
linear fields.
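The sampling step can be sketched with scipy's quasi-Monte Carlo module (our choice of implementation; the paper does not specify which Latin-hypercube code is used):

```python
import numpy as np
from scipy.stats import qmc

# Parameter ranges from Eq. (7): sigma8, Om, Ob, ns, h, Mnu, w0, wa.
lower = [0.73, 0.23, 0.04, 0.92, 0.6, 0.0, -1.15, -0.3]
upper = [0.90, 0.40, 0.06, 1.01, 0.8, 0.4, -0.85,  0.3]

sampler = qmc.LatinHypercube(d=len(lower), seed=42)
unit_samples = sampler.random(n=400)             # points in [0, 1)^8
cosmologies = qmc.scale(unit_samples, lower, upper)

print(cosmologies.shape)  # (400, 8)
```

Each of the 400 rows then defines one target cosmology for the rescaling pipeline.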
#### 4.2.1 Power Spectrum Measurements
We compute 15 cross-spectra using a Fast Fourier Transform with $1080^{3}$
grid points and a Cloud-in-Cells mass assignment scheme. To reduce the impact
of aliasing, we employ two interlaced grids, following Sefusatti et al.
(2016). We consider $50$ logarithmically-spaced bins over the range of
wavenumbers $k\in[10^{-2},1]\,h\,{\rm Mpc}^{-1}$.
On large scales, the regularity of the Fourier grid imprints features in the
measured spectra. To correct for this, we average the ratio of the measured
Fourier modes to the LPT expectation evaluated at the same wavenumbers, and
then multiply by the LPT expectation at the bin centre, i.e.:
$\hat{P}_{ij}(k)=P^{\rm LPT}_{ij}(k)\sum_{|\vec{k}_{n}|\in[k-\Delta k,k+\Delta
k]}\frac{\delta_{i}(\vec{k}_{n})\delta_{j}(\vec{k}_{n})}{P^{\rm
LPT}_{ij}(\vec{k}_{n})}$ (8)
We have found this procedure to be particularly important for the terms
involving the $1$ and $\delta$ fields on large scales.
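A minimal sketch of this bin-centre correction is given below. All names are illustrative, and we normalise the sum over modes by the bin's mode count, which Eq. (8) leaves implicit:

```python
import numpy as np

def corrected_binned_power(k_modes, delta_i, delta_j, p_lpt, k_edges):
    """Sketch of the bin-centre correction of Eq. (8): average the
    mode-by-mode ratio delta_i * conj(delta_j) / P_LPT within each k bin,
    then multiply by P_LPT at the bin centre.  `p_lpt` is any callable
    returning the analytic LPT cross-spectrum."""
    centres = 0.5 * (k_edges[:-1] + k_edges[1:])
    ratio = (delta_i * np.conj(delta_j)).real / p_lpt(k_modes)
    p_hat = np.full(centres.size, np.nan)
    for n, (lo, hi) in enumerate(zip(k_edges[:-1], k_edges[1:])):
        in_bin = (k_modes >= lo) & (k_modes < hi)
        if in_bin.any():
            p_hat[n] = p_lpt(centres[n]) * ratio[in_bin].mean()
    return centres, p_hat
```

By construction, if the measured modes exactly follow the LPT expectation, the corrected estimate reduces to $P^{\rm LPT}_{ij}$ evaluated at the bin centres.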
We perform the mass assignment and FFT using shared-memory parallelization
over $24$ cores. The mass deposit steps typically take 8 seconds per field,
and the corresponding FFT and power spectrum calculation approximately 17
seconds.
#### 4.2.2 Emulated quantities
Figure 5: Ratio between the cross spectra in our validation set over the
predictions of our neural network emulator. As in previous plots, each panel
displays results for a different combination of Lagrangian fields in Eulerian
coordinates. Lines of different colour show specific test cosmologies, whereas
shaded regions denote the area enclosing 95% of the distribution, with the
mean indicated by the thick black line. Note that the predictions are
percent-level accurate in most cases, and that the largest fractional
discrepancies occur where the emulated quantity crosses zero.
In total, we have computed a set of 15 spectra for approximately $4,000$
combinations of cosmology and redshifts.
In Fig. 4 we show as an example one set of measurements for a randomly-
selected cosmology at $z\sim 0$. We also display our analytic LPT predictions
as solid lines.
We can see that our measurements are in good agreement with LPT on large
scales, as already shown in Fig. 3, especially for fields involving
combinations of $1$, $\delta$, and $\delta^{2}$. Some spectra show a small
disagreement, which in §3 we attributed to either higher-order LPT
contributions or shell-crossing. However, these latter spectra contribute
only subdominantly.
It is worth noting that there does not seem to be any significant noise in the
fields that dominate the large-scale clustering $\langle 11\rangle$, $\langle
1\delta\rangle$ and $\langle\delta\delta\rangle$. This owes to the
“Paired-&-Fixed” method which, as shown by Angulo & Pontzen (2016) (see also
Pontzen et al., 2016; Chuang et al., 2018; Villaescusa-Navarro et al., 2018),
results in a density power spectrum accurate to better than $\sim 0.1\%$ for
$k<0.1\,h\,{\rm Mpc}^{-1}$ at $z=0$ in simulations of comparable volume to
those in the BACCO suite.
To reduce the dynamic range of our measurements, which allows a more accurate
subsequent emulation, we consider the ratio of our measured cross-spectra to
the corresponding analytic LPT expectation. We further take the logarithm of
that ratio whenever the measured spectrum is positive over our whole range of
wavenumbers, i.e.:
$S_{ij}(k,z)\equiv\begin{cases}P_{ij}/P^{\rm LPT}_{ij},&\text{if
}\mathrm{min}[P_{ij}]\leq 0\\\ \log(P_{ij}/P^{\rm LPT}_{ij}),&\text{otherwise
}\end{cases}$ (9)
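The two branches of Eq. (9) can be sketched as follows (a minimal stand-in; the returned flag, our addition, records which branch was taken so the transformation can later be undone):

```python
import numpy as np

def reduce_dynamic_range(p_measured, p_lpt):
    """Sketch of Eq. (9): emulate the measured-to-LPT ratio directly when the
    spectrum changes sign somewhere in the k range, and its logarithm
    otherwise."""
    ratio = p_measured / p_lpt
    if np.min(p_measured) <= 0:
        return ratio, False          # sign-changing spectrum: plain ratio
    return np.log(ratio), True       # strictly positive spectrum: log-ratio
```

The log branch compresses the large dynamic range of the positive spectra, while sign-changing spectra are kept as plain ratios to avoid undefined logarithms.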
It is well known that large-scale coherent motions dilute the amplitude of the
baryonic acoustic oscillations (BAO Eisenstein et al., 2007). This effect can
be captured in perturbation theory via the so-called infrared resummation,
which is, however, not included in our LPT calculations. In principle this is
not a problem for our approach, since we use LPT simply to reduce the dynamic
range of our data. In practice, however, it means that a small oscillatory
feature is present in $S$, which we would then need to emulate and which could
introduce small uncertainties.
To avoid this and improve the accuracy of our predictions, in the definitions
of $\langle 11\rangle$, $\langle 1\delta\rangle$ and
$\langle\delta\delta\rangle$ we replace the corresponding LPT predictions by
a linear-theory power spectrum including an infrared resummation, i.e.:
$P_{\rm linear}^{\rm smeared-BAO}\equiv P_{\rm linear}\,G(k)+P_{\rm
linear}^{\rm no-BAO}\,[1-G(k)]$ (10)
where $G(k)\equiv\exp[-k^{2}/k_{*}]$, with
$k_{*}^{-1}\equiv(6\pi^{2})^{-1}\int{\rm d}k\,P_{\rm linear}$, and $P_{\rm
linear}^{\rm no-BAO}$ is a version of the linear theory power spectrum without
any BAO signal. Operationally, this is obtained by performing a discrete sine
transform, smoothing the result, and returning to Fourier space by an inverse
transform (Baumann et al., 2018; Giblin et al., 2019). We have checked that
this indeed results in an $S(k)$ devoid of any residual oscillations; we
emphasise, however, that our predictions are still essentially provided by
our simulation results.
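A sketch of the smeared-BAO construction of Eq. (10) is given below; the de-wiggled spectrum `p_nobao` is assumed precomputed (e.g. via the sine-transform smoothing cited above), and the trapezoidal quadrature for $k_{*}^{-1}$ is our choice:

```python
import numpy as np

def smeared_bao_spectrum(k, p_linear, p_nobao):
    """Sketch of Eq. (10): damp the BAO wiggles of the linear spectrum with
    G(k) = exp(-k^2 / k_*), where 1/k_* = (6 pi^2)^{-1} \int dk P_linear(k)."""
    kstar_inv = np.sum(0.5 * (p_linear[1:] + p_linear[:-1]) * np.diff(k))
    kstar_inv /= 6.0 * np.pi**2
    g = np.exp(-k**2 * kstar_inv)
    return p_linear * g + p_nobao * (1.0 - g)
```

On large scales ($G\to 1$) the result follows the full linear spectrum, while on small scales ($G\to 0$) it smoothly transitions to the BAO-free version.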
As discussed previously, simulation results have a significant amount of
noise for some cross-spectra, most notably $1s^{2}$, $1\delta^{2}$,
$\delta\delta^{2}$, and $\delta s^{2}$. In such cases the emulation could be
inaccurate, as we would be trying to predict specific noise features. To avoid
this, for the aforementioned spectra we have replaced our measurements at
$k<0.1\,h\,{\rm Mpc}^{-1}$ with our analytic LPT spectra.
To further reduce the dimensionality and improve our emulation, we have
performed a principal component decomposition over our dataset. We have found
that keeping the first $6$ vectors with the largest eigenvalues is enough to
explain most of the variance in all of our dataset.
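The principal-component step can be sketched with a plain SVD (our implementation choice; the paper does not specify one):

```python
import numpy as np

def pca_compress(data, n_components=6):
    """Sketch of the principal-component decomposition: project each S(k)
    curve onto the leading eigenvectors of the (mean-subtracted) training
    set; 6 components were found sufficient in our case.  Rows of `data`
    are cosmology/redshift combinations, columns are k bins."""
    mean = data.mean(axis=0)
    centred = data - mean
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    basis = vt[:n_components]            # (n_components, n_k)
    coeffs = centred @ basis.T           # (n_samples, n_components)
    reconstruction = coeffs @ basis + mean
    return coeffs, basis, reconstruction
```

The emulator then only needs to predict the few coefficients per spectrum rather than every $k$ bin.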
### 4.3 Neural Network
Following Angulo et al. (2020) and Aricò et al. (2020), we construct our
emulators using a feed-forward neural network. We use a relatively simple
architecture with two fully-connected hidden layers of 200 neurons each and a
Rectified Linear Unit (ReLU) activation function. Note that a traditional
alternative for building emulators is Gaussian Processes, which have the
advantage of naturally providing an estimate of the emulator uncertainty.
These methods are, however, computationally expensive in high dimensions and
for abundant training data. We therefore adopt neural networks instead, which
allows future expansions of the training set by including additional points
in our parameter space, in principle arbitrarily reducing the emulation
error.
We build a separate neural network for each cross-spectrum using the Keras
front-end of the TensorFlow library (Abadi et al., 2015). We select the
standard Adam optimization algorithm with a $10^{-3}$ learning rate and a
mean-squared-error loss function.
We split our dataset, consisting of $4000$ sets of cross-spectra, into
disjoint groups for training and validation. The training set comprises 95%
of the data; with it, training each of the 15 emulators takes approximately
30 minutes on a single Nvidia Quadro RTX 8000 GPU card; the
evaluation of each emulator takes approximately $0.05$ seconds on the same
hardware.
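The forward pass of the architecture described above can be sketched in plain numpy (a stand-in for the Keras/TensorFlow implementation; the input dimension of 8 cosmological parameters plus redshift, and the output of 6 PCA coefficients, are our assumptions):

```python
import numpy as np

def init_params(n_in=9, n_hidden=200, n_out=6, seed=0):
    """Random weights for a 9 -> 200 -> 200 -> 6 network (shapes illustrative)."""
    rng = np.random.default_rng(seed)
    shapes = [(n_in, n_hidden), (n_hidden,), (n_hidden, n_hidden),
              (n_hidden,), (n_hidden, n_out), (n_out,)]
    return [rng.normal(scale=0.05, size=s) for s in shapes]

def forward(params, x):
    """Two fully-connected hidden layers of 200 units with ReLU activations
    and a linear output layer, matching the architecture described above."""
    w1, b1, w2, b2, w3, b3 = params
    h1 = np.maximum(x @ w1 + b1, 0.0)    # ReLU
    h2 = np.maximum(h1 @ w2 + b2, 0.0)   # ReLU
    return h2 @ w3 + b3                  # linear output (PCA coefficients)
```

In the actual pipeline the weights are of course fitted with Adam on the mean-squared-error loss rather than drawn at random.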
### 4.4 Validation
We have estimated the accuracy of our neural network by comparing the
prediction of our emulator with the measurements of the spectra in our test
suite comprised of $\sim 200$ different cosmologies and redshifts.
Our results are presented in Fig. 5, where we display the ratio of the values
of $S(k)$ predicted by our neural network and those directly measured in our
test suite. Each panel shows the result for one of our 15 combinations of
Lagrangian fields. We recall that in this testing set there are simulations
with very dissimilar cosmologies including massive neutrinos and dark energy
with time-dependent equations of state. Individual results are indicated by
the coloured lines, whereas the mean and the 95% region of the distribution
are indicated by the thick solid line and the shaded regions, respectively.
We can see that our neural networks (NN) perform remarkably well overall. For
the main fields for large-scale predictions – i.e. combinations of $\delta$,
$1$ – the NN shows a typical uncertainty below 1% at scales $k<0.1h\,{\rm
Mpc}^{-1}$. On smaller scales, the accuracy somewhat degrades but it is still
below 2% for all but 1 case (pink line).
For fields further including $\delta^{2}$ terms, the accuracy is similar, with
differences being less than 5% on small scales. Recall that in the case of
$1s^{2}$, $1\delta^{2}$, $\delta\delta^{2}$, and $\delta s^{2}$, we have
replaced our numerical results at $k<0.1h\,{\rm Mpc}^{-1}$ with our analytic
LPT expressions, which yields an almost perfect emulation and explains the
apparent discontinuity of the emulator precision at that transition scale.
Note that the largest fractional errors occur at $k\sim 0.2h\,{\rm Mpc}^{-1}$
for $\delta^{2}\nabla^{2}\delta$ and at $k\sim 0.7h\,{\rm Mpc}^{-1}$ for
$\delta^{2}s^{2}$, which is where these cross-spectra cross zero (in many
cosmologies), thus the ratio can easily become unbounded. However, since these
fields are close to zero, their absolute contribution to the total power
spectrum is negligible.
## 5 Application: Fitting the galaxy power spectrum
Figure 6: The galaxy auto power spectrum and matter-galaxy cross spectrum for
SFR-selected mock galaxies with $\bar{n}=10^{-3}\,h^{3}\,\mathrm{Mpc}^{-3}$ at
$z=1$. Symbols show the measurements after a galaxy-formation model in one of
our BACCO simulations, whereas blue lines denote the best fitting model based
in the emulator for biased tracers developed in our work.
As an illustration of our emulator of the real-space clustering of biased
tracers, in this section we will constrain the bias parameters that best fit
the power spectrum of a simulated galaxy catalogue. While in principle we
could jointly constrain bias parameters and cosmological parameters, for the
sake of this paper we will fix the cosmology to the fiducial one. We do so to
provide a quick test of the flexibility of the bias model that can be built
with our emulator. We defer the study of possible degeneracies between
cosmology and Lagrangian bias parameters to future work.
We consider a mock galaxy catalogue mimicking a sample of Emission Line
galaxies at $z\sim 1$ with a number density of
$\bar{n}=10^{-3}\,h^{3}\,\mathrm{Mpc}^{-3}$, which can be regarded as
analogous to the sample to be observed by EUCLID or DESI. To build this mock
catalogue, we employ the extended subhalo abundance matching model of
Contreras et al. (2020c). These authors showed that this particular
implementation – which includes models for tidal stripping and disruption of
satellite galaxies, as well as for the star formation rate of galaxies –
accurately reproduces the real- and redshift-space clustering of galaxies in the
state-of-the-art hydrodynamical simulation TNG-300 (Nelson et al., 2018;
Springel et al., 2018; Marinacci et al., 2018; Pillepich et al., 2018; Naiman
et al., 2018).
For this paper, we applied the aforementioned model to one of our main BACCO
simulations (Nenya). We employ the galaxy-formation parameters that best
described SFR-selected TNG-300 galaxies, as provided by Contreras et al.
(2020c). Specifically, we set
$\beta=2.490,\gamma=5.127,\log_{10}M_{1}=12.770,\tau_{0}=4.931,$ and
$\tau_{\rm S}=-0.363$. We then selected the galaxies with the highest SFR
values and kept a sample with a number density of
$\bar{n}=10^{-3}\,h^{3}\,\mathrm{Mpc}^{-3}$. Please note that galaxies
obtained with this method contain some galaxy assembly bias signal, since they
inherit correlations with secondary halo properties (i.e. $v_{\rm peak}$) and
local environment (see Contreras et al., 2020b, for a more comprehensive
treatment of GAB within SHAM methods).
Figure 7: Marginalised 1$\sigma$ and $2\sigma$ credibility regions in the
Lagrangian bias parameters $b_{1}$, $b_{2}$, $b_{s^{2}},$ and
$b_{\nabla^{2}\delta}$ for a sample of mock galaxies with
$\bar{n}=10^{-3}\,h^{3}\,\mathrm{Mpc}^{-3}$ selected according to their star
formation rate at $z=1$. These constraints were derived by simultaneously
fitting the galaxy power spectrum and galaxy-matter cross-spectrum from
$k=10^{-2}\,h\,{\rm Mpc}^{-1}$ up to three different maximum wavenumbers,
$k_{\rm max}=0.1$, $0.3$, and $0.7\,h\,{\rm Mpc}^{-1}$, as indicated by the
legend.
We will consider as our data the average clustering measured in the two phase-
inverted Nenya simulations to reduce the impact of cosmic variance on large
scales. Specifically, we will employ the galaxy auto power spectrum, $P_{\rm
gg}$, and the cross-correlation, $P_{\rm gm}$, with the underlying nonlinear
matter field. As for the previous power spectrum calculations, we perform this
operation using $1080^{3}$ FFTs with a CiC assignment and interlacing. In Fig.
6 we display these measurements as red symbols. Note that $P_{\rm gg}$ and
$P_{\rm gm}$ involve different combinations of the Lagrangian fields, which
makes it important to verify that the model is able to fit them jointly.
We will now explore the values of the free parameters of our real-space model,
which is described by combinations of the cross spectra of Lagrangian fields
and one scale independent shot-noise contribution:
$\displaystyle P_{\rm gg}=$
$\displaystyle\sum_{i,j\in\\{1,\delta,\delta^{2},s^{2},\nabla^{2}\delta\\}}(2-\delta_{ij})b_{i}b_{j}\,P_{ij}+\frac{A_{\rm
sn}}{\bar{n}}$ (11) $\displaystyle P_{\rm gm}=$
$\displaystyle\sum_{i=1,j\in\\{1,\delta,\delta^{2},s^{2},\nabla^{2}\delta\\}}b_{j}\,P_{ij}$
(12)
where $b_{i=1}\equiv 1$, and $P_{ij}$ are the cross spectra predicted by our
neural network for the cosmological parameters of the target galaxy catalogue
(i.e. those of Nenya). Note that, in principle, there could be other
contributions to the noise (e.g. Eggemeier et al., 2020), which could be
included in our model but that we have neglected here. However, our model is
flexible enough for it to be extended with additional terms. We defer an
investigation of a minimal model for describing realistic galaxies to future
work. Our model is thus specified by 5 parameters:
$\boldsymbol{\vartheta}=\\{b_{1},b_{2},b_{s^{2}},b_{\nabla^{2}\delta},A_{\rm
sn}\\}$ (13)
where $b_{i}$ are the perturbative bias parameters of our sample. We assume
flat priors for these parameters over the range $b_{1}\in[-3,3]$,
$b_{2}\in[-5,5]$, $b_{s^{2}}\in[-10,20]$, $b_{\nabla^{2}\delta}\in[-10,20]$,
$A_{\rm sn}\in[0,2]$.
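The bias combination of Eqs. (11)-(12) can be sketched as follows; the field labels, container layout, and function name are illustrative:

```python
import numpy as np

# Shorthand labels for the operators 1, delta, delta^2, s^2, nabla^2(delta).
FIELDS = ["1", "d", "d2", "s2", "lap"]

def galaxy_spectra(p_ij, b, a_sn, nbar):
    """Sketch of Eqs. (11)-(12): combine the 15 emulated cross-spectra with
    the bias parameters.  `p_ij` maps each ordered field pair (i, j), i <= j,
    to its cross-spectrum array; `b` maps each field to its bias, b['1'] = 1."""
    n_k = next(iter(p_ij.values())).size
    p_gg = np.zeros(n_k)
    p_gm = np.zeros(n_k)
    for n, fi in enumerate(FIELDS):
        for fj in FIELDS[n:]:
            factor = 1.0 if fi == fj else 2.0  # the (2 - delta_ij) of Eq. (11)
            p_gg += factor * b[fi] * b[fj] * p_ij[(fi, fj)]
        p_gm += b[fi] * p_ij[("1", fi)]
    return p_gg + a_sn / nbar, p_gm
```

A quick sanity check: if every cross-spectrum were identically one, $P_{\rm gg}$ would reduce to $(\sum_{i}b_{i})^{2}+A_{\rm sn}/\bar{n}$ and $P_{\rm gm}$ to $\sum_{i}b_{i}$.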
We describe the probability of observing a set of power spectra
$\tilde{P}(k)=\\{P_{\rm gg}(k),P_{\rm gm}(k)\\}$ as a multivariate normal
distribution:
$\ln
p[\tilde{P}(k),\boldsymbol{\vartheta}]=-\dfrac{1}{2}\sum_{i}\left[\dfrac{\tilde{P}(k_{i})-\tilde{P}_{\rm
model}(k_{i})}{\sigma_{i}}\right]^{2}.$ (14)
where we have assumed a diagonal covariance matrix computed in the Gaussian
limit for a volume of $V=50\,h^{-3}\mathrm{Gpc}^{3}$, roughly comparable to
that to be surveyed by EUCLID. To account for the uncertainty
associated to our emulation process, we have added in quadrature a constant
fractional error of $2\%$.
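A minimal sketch of this log-likelihood is given below; the standard Gaussian mode-counting variance, $\sigma^{2}=2P^{2}/N_{\rm modes}$ with $N_{\rm modes}=Vk^{2}\Delta k/(2\pi^{2})$, is our assumption of the covariance recipe:

```python
import numpy as np

def gaussian_loglike(p_data, p_model, k, dk, volume=5.0e10, frac_emu=0.02):
    """Sketch of Eq. (14) with Gaussian diagonal errors: cosmic-variance
    scatter for a volume of 50 (Gpc/h)^3 = 5e10 (Mpc/h)^3, plus a constant
    2% emulation error added in quadrature."""
    n_modes = volume * k**2 * dk / (2.0 * np.pi**2)
    var = 2.0 * p_data**2 / n_modes + (frac_emu * p_data)**2
    return -0.5 * np.sum((p_data - p_model)**2 / var)
```

In the fit, this function is evaluated jointly for $P_{\rm gg}$ and $P_{\rm gm}$ at each proposed set of bias parameters.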
We sample the posterior probability of $\boldsymbol{\vartheta}$,
$p[\boldsymbol{\vartheta},\tilde{P}(k)]=\dfrac{p[\tilde{P}(k),\boldsymbol{\vartheta}]p(\boldsymbol{\vartheta})}{p[\tilde{P}(k)]},$
(15)
using the affine-invariant Markov Chain Monte Carlo (MCMC) code emcee
(Foreman-Mackey et al., 2013). At each step of the chain we evaluate our
neural network emulator to obtain $S_{ij}$, evaluate the corresponding LPT
expressions to recover $P_{ij}$, and combine them as defined in Eqs. 11 and
12. We consider $20$ MCMC walkers with 10,000 steps each and a burn-in phase
of 500 steps. We checked that doubling the number of steps does not
significantly change our results, and we therefore consider our runs
converged.
In Fig. 7 we present the two-dimensional marginalised constraints on the free
parameters of our model for the galaxy power spectra. Dark and light regions
denote the 1-$\sigma$ and 2-$\sigma$ two-dimensional confidence levels. We
display the results when limiting our analysis to different maximum
wavenumbers, namely $k_{\rm max}=0.1$, $0.3$, and $0.7\,h\,{\rm Mpc}^{-1}$.
We note that, as we include smaller scales, even though the width of the
contours changes, the best-fitting parameters remain compatible.
Finally, in Fig. 6 we compare the measured galaxy statistics to those
predicted by our model at the values that maximise the posterior probability.
We can see that our model is an excellent description of our simulated galaxy
catalogues, over all the scales we consider and for both $P_{\rm gg}$ and
$P_{\rm gm}$. In particular, our model evaluated for the best-fitting set of
parameters is always within the error-bars of the data, and agrees with the
measured power spectra at the $1-2\%$ level. This is particularly remarkable
keeping in mind that perturbation-theory approaches are typically limited to
scales $k\lesssim 0.2\,h\,{\rm Mpc}^{-1}$, and that we compare against a
galaxy catalogue built directly in Eulerian space following physically-
motivated recipes.
## 6 Conclusions
In this paper we have built and presented a set of 15 emulators for predicting
the cross power spectra necessary to describe the galaxy-galaxy and galaxy-
matter clustering in the context of a second order Lagrangian bias expansion.
Each emulator predicts one such term on a range of wavenumbers $0.01<k/h\,{\rm
Mpc}^{-1}<1$ and at redshifts $0<z<1.5$, in cosmologies spanning $\sim
10\sigma$ around the best-fitting parameters from the Planck collaboration,
including extensions of the standard $\Lambda$CDM paradigm, such as massive
neutrinos and dynamical dark energy.
In order to build our emulator from a densely sampled parameter space, we have
used a cosmology-rescaling algorithm. For this reason, we have first used a
set of 35 intermediate volume simulations, with cosmologies spanning the same
parameter space as our emulator, to show that the Lagrangian fields measured
from simulations assuming different cosmologies can be accurately reproduced
by rescaling the small set of strategically chosen BACCO simulations presented
in Contreras et al. (2020a). From this exercise (see Fig. 2) we conclude that
the
rescaling technique can accurately reproduce the Lagrangian fields needed to
model Lagrangian bias expansion.
We have devoted particular care to obtaining theoretical predictions for the
power spectra of these Lagrangian fields. We have considered LPT terms up to
second order in density and advected them to Eulerian space. In Fig. 3 we
have shown the accuracy of the expressions obtained this way, comparing them
to numerical simulations of these LPT fields. We have used our LPT predictions
for a two-fold purpose: on the one hand, we combine them with the actual data
to mitigate the large-scale noise affecting some of the terms; on the other
hand, we use them to reduce the dynamic range of the quantities that we
emulate.
We have built our 15 emulators by rescaling the large BACCO simulations to
approximately 4000 different combinations of cosmology and redshift (see Fig.
4), and we have then validated them against a set of 200
combinations of cosmologies and redshifts (Fig. 5). We find that most of the
terms that principally contribute to the galaxy power spectrum (namely, the
terms involving combinations of the homogeneous and linear fields, $1,\delta$)
are emulated with percent accuracy. The terms involving the squared density
field $\delta^{2}$, the tidal field $s^{2}$ and the laplacian of the density
field $\nabla^{2}\delta$ are generally reproduced with an accuracy of a few
percent. However, two of these terms, $\delta^{2}s^{2}$ and
$\delta^{2}\nabla^{2}\delta$, perform particularly poorly at the scales where
they cross zero.
Finally, we have used our emulator to constrain the bias parameters of a mock
galaxy sample of known cosmology, using the galaxy-galaxy and galaxy-matter
power spectra. We constructed our galaxy sample at redshift $z=1$, selecting
galaxies according to their Star Formation Rate until reaching a number
density of $0.001h^{3}\,\mathrm{Mpc}^{-3}$ and assigning errors that reflect
the cosmic variance expected for large upcoming surveys such as Euclid. This
proof of concept showed that this technique can be used to model the
clustering of realistic galaxy samples, obtaining constraints on the bias
parameters that are self-consistent over a large range of scales (Fig. 7). In
particular, we were able to fit our mock galaxy sample including scales down
to $k_{\rm max}=0.7\,h\,{\rm Mpc}^{-1}$.
We anticipate this approach will be valuable for studying Lagrangian biases
from realistic galaxy mock samples. Its applications could also be extended
to any problem requiring knowledge of the fully nonlinear clustering of each
of these Lagrangian fields, for example the modelling of galaxy-galaxy
lensing. Although non-trivial, we also expect that an extension of this
emulator to redshift space would provide an unprecedented tool to constrain
cosmology from galaxy clustering.
In the final stages of the preparation of this work, a similar approach to
ours was made public in Kokron et al. (2021). The two works share many
similarities, although differ in many aspects of the implementation and
analysis. We defer to future works a comparison of the two approaches.
## Acknowledgments
The authors acknowledge the support of the ERC-StG number 716151 (BACCO). MPI
acknowledges the support of the “Juan de la Cierva Formación” fellowship
(FJC2019-040814-I). SC acknowledges the support of the “Juan de la Cierva
Formación” fellowship (FJCI-2017-33816). The authors acknowledge the computer
resources at MareNostrum and the technical support provided by Barcelona
Supercomputing Center (RES-AECT-2019-2-0012, RES-AECT-2020-3-0014).
## Data Availability
The data underlying this article will be shared on reasonable request to the
corresponding author. The Neural Network emulator will be made public at
http://www.dipc.org/bacco upon the publication of this article.
## References
* Abadi et al. (2015) Abadi M., et al., 2015, TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems, http://tensorflow.org/
* Abidi & Baldauf (2018) Abidi M. M., Baldauf T., 2018, Journal of Cosmology and Astroparticle Physics, 2018, 029
* Amendola et al. (2018) Amendola L., et al., 2018, Living Reviews in Relativity, 21, 2
* Angulo & Pontzen (2016) Angulo R. E., Pontzen A., 2016, MNRAS, 462, L1
* Angulo & White (2010) Angulo R. E., White S. D. M., 2010, MNRAS, 405, 143
* Angulo et al. (2020) Angulo R. E., Zennaro M., Contreras S., Aricò G., Pellejero-Ibañez M., Stücker J., 2020, arXiv e-prints, p. arXiv:2004.06245
* Aricò et al. (2020) Aricò G., Angulo R. E., Contreras S., Ondaro-Mallea L., Pellejero-Ibañez M., Zennaro M., 2020, arXiv e-prints, p. arXiv:2011.15018
* Baldauf et al. (2016) Baldauf T., Schaan E., Zaldarriaga M., 2016, Journal of Cosmology and Astroparticle Physics, 2016, 017
* Bartelmann & Schneider (2001) Bartelmann M., Schneider P., 2001, Physics Reports, 340, 291
* Baumann et al. (2012) Baumann D., Nicolis A., Senatore L., Zaldarriaga M., 2012, Journal of Cosmology and Astroparticle Physics, 2012, 051
* Baumann et al. (2018) Baumann D., Green D., Wallisch B., 2018, J. Cosmology Astropart. Phys., 2018, 029
* Bernardeau et al. (2002) Bernardeau F., Colombi S., Gaztañaga E., Scoccimarro R., 2002, Phys. Rep., 367, 1
* Blas et al. (2014) Blas D., Garny M., Konstandin T., 2014, Journal of Cosmology and Astroparticle Physics, 2014, 010
* Bonoli et al. (2020) Bonoli S., et al., 2020, arXiv e-prints, p. arXiv:2007.01910
* Chen et al. (2020) Chen S.-F., Vlah Z., White M., 2020, J. Cosmology Astropart. Phys., 2020, 062
* Chuang et al. (2017) Chuang C.-H., et al., 2017, Monthly Notices of the Royal Astronomical Society, 471, 2370
* Chuang et al. (2018) Chuang C.-H., et al., 2018, arXiv e-prints, p. arXiv:1811.02111
* Chudaykin et al. (2020) Chudaykin A., Ivanov M. M., Simonović M., 2020, arXiv e-prints, p. arXiv:2009.10724
* Colas et al. (2020) Colas T., d'Amico G., Senatore L., Zhang P., Beutler F., 2020, Journal of Cosmology and Astroparticle Physics, 2020, 001
* Contreras et al. (2020a) Contreras S., Zennaro R. E. A. M., Aricó G., Pellejero-Ibañez M., 2020a, arXiv e-prints, p. arXiv:2001.03176
* Contreras et al. (2020b) Contreras S., Angulo R., Zennaro M., 2020b, arXiv e-prints, p. arXiv:2005.03672
* Contreras et al. (2020c) Contreras S., Angulo R., Zennaro M., 2020c, arXiv e-prints, p. arXiv:2012.06596
* DESI Collaboration et al. (2016) DESI Collaboration et al., 2016, arXiv e-prints, p. arXiv:1611.00036
* DeRose et al. (2019) DeRose J., et al., 2019, ApJ, 875, 69
* Desjacques et al. (2018a) Desjacques V., Jeong D., Schmidt F., 2018a, Phys. Rep., 733, 1
* Desjacques et al. (2018b) Desjacques V., Jeong D., Schmidt F., 2018b, J. Cosmology Astropart. Phys., 2018, 035
* Dodelson & Schneider (2013) Dodelson S., Schneider M. D., 2013, Phys. Rev. D, 88, 063537
* Dore et al. (2019) Dore O., et al., 2019, Bulletin of the AAS, 51
* Eggemeier et al. (2020) Eggemeier A., Scoccimarro R., Crocce M., Pezzotta A., Sánchez A. G., 2020, Phys. Rev. D, 102, 103530
* Eisenstein & Hu (1998) Eisenstein D. J., Hu W., 1998, ApJ, 496, 605
* Eisenstein et al. (2007) Eisenstein D. J., Seo H.-J., Sirko E., Spergel D. N., 2007, ApJ, 664, 675
* Euclid Collaboration et al. (2019) Euclid Collaboration et al., 2019, MNRAS, 484, 5509
* Euclid Collaboration et al. (2020) Euclid Collaboration et al., 2020, arXiv e-prints, p. arXiv:2010.11288
* Fang et al. (2017) Fang X., Blazek J. A., McEwen J. E., Hirata C. M., 2017, J. Cosmology Astropart. Phys., 2017, 030
* Foreman-Mackey et al. (2013) Foreman-Mackey D., Hogg D. W., Lang D., Goodman J., 2013, PASP, 125, 306
* Fujita & Vlah (2020) Fujita T., Vlah Z., 2020, Journal of Cosmology and Astroparticle Physics, 2020, 059
* Fujita et al. (2020) Fujita T., Mauerhofer V., Senatore L., Vlah Z., Angulo R., 2020, J. Cosmology Astropart. Phys., 2020, 009
* Garrison et al. (2018) Garrison L. H., Eisenstein D. J., Ferrer D., Tinker J. L., Pinto P. A., Weinberg D. H., 2018, The Astrophysical Journal Supplement Series, 236, 43
* Giblin et al. (2019) Giblin B., Cataneo M., Moews B., Heymans C., 2019, MNRAS, 490, 4826
* Guzzo et al. (2008) Guzzo L., et al., 2008, Nature, 451, 541
* Heitmann et al. (2014) Heitmann K., Lawrence E., Kwan J., Habib S., Higdon D., 2014, ApJ, 780, 111
* Ivanov et al. (2020) Ivanov M. M., Simonović M., Zaldarriaga M., 2020, Journal of Cosmology and Astroparticle Physics, 2020, 042
* Ivezić et al. (2019) Ivezić Ž., et al., 2019, The Astrophysical Journal, 873, 111
* Izard et al. (2016) Izard A., Crocce M., Fosalba P., 2016, MNRAS, 459, 2327
* Kaiser (1987) Kaiser N., 1987, MNRAS, 227, 1
* Kokron et al. (2021) Kokron N., DeRose J., Chen S.-F., White M., Wechsler R. H., 2021, arXiv e-prints, p. arXiv:2101.11014
* Kuhlen et al. (2012) Kuhlen M., Vogelsberger M., Angulo R., 2012, Physics of the Dark Universe, 1, 50
* Lazeyras & Schmidt (2019) Lazeyras T., Schmidt F., 2019, Journal of Cosmology and Astroparticle Physics, 2019, 041
* Lesgourgues (2011) Lesgourgues J., 2011, arXiv e-prints, p. arXiv:1104.2932
* Levi et al. (2013) Levi M., et al., 2013, arXiv e-prints, p. arXiv:1308.0847
* Lewis et al. (2000) Lewis A., Challinor A., Lasenby A., 2000, ApJ, 538, 473
* Liu et al. (2018) Liu J., Bird S., Zorrilla Matilla J. M., Hill J. C., Haiman Z., Madhavacheril M. S., Petri A., Spergel D. N., 2018, Journal of Cosmology and Astro-Particle Physics, 2018, 049
* Mandelbaum (2018) Mandelbaum R., 2018, Annual Review of Astronomy and Astrophysics, 56, 393
* Marinacci et al. (2018) Marinacci F., et al., 2018, MNRAS, 480, 5113
* Matsubara (2008) Matsubara T., 2008, Phys. Rev. D, 78, 083519
* McDonald (2009) McDonald P., 2009, Journal of Cosmology and Astroparticle Physics, 2009, 026
* McEwen et al. (2016) McEwen J. E., Fang X., Hirata C. M., Blazek J. A., 2016, J. Cosmology Astropart. Phys., 2016, 015
* McQuinn & White (2016) McQuinn M., White M., 2016, Journal of Cosmology and Astroparticle Physics, 2016, 043
* Michaux et al. (2021) Michaux M., Hahn O., Rampf C., Angulo R. E., 2021, MNRAS, 500, 663
* Modi et al. (2020) Modi C., Chen S.-F., White M., 2020, MNRAS, 492, 5754
* Monaco et al. (2002) Monaco P., Theuns T., Taffoni G., 2002, MNRAS, 331, 587
* Naiman et al. (2018) Naiman J. P., et al., 2018, MNRAS, 477, 1206
* Nelson et al. (2018) Nelson D., et al., 2018, MNRAS, 475, 624
* Nishimichi et al. (2019) Nishimichi T., et al., 2019, ApJ, 884, 29
* Nishimichi et al. (2020) Nishimichi T., D’Amico G., Ivanov M. M., Senatore L., Simonović M., Takada M., Zaldarriaga M., Zhang P., 2020, Phys. Rev. D, 102, 123541
* Paz & Sánchez (2015) Paz D. J., Sánchez A. G., 2015, MNRAS, 454, 4326
* Pellejero-Ibanez et al. (2017) Pellejero-Ibanez M., et al., 2017, Monthly Notices of the Royal Astronomical Society, 468, 4116
* Pillepich et al. (2018) Pillepich A., et al., 2018, MNRAS, 475, 648
* Pontzen et al. (2016) Pontzen A., Slosar A., Roth N., Peiris H. V., 2016, Phys. Rev. D, 93, 103519
* Schneider et al. (2016) Schneider A., et al., 2016, Journal of Cosmology and Astro-Particle Physics, 2016, 047
* Sefusatti et al. (2016) Sefusatti E., Crocce M., Scoccimarro R., Couchman H. M. P., 2016, MNRAS, 460, 3624
* Springel (2005) Springel V., 2005, MNRAS, 364, 1105
* Springel et al. (2018) Springel V., et al., 2018, MNRAS, 475, 676
* Springel et al. (2020) Springel V., Pakmor R., Zier O., Reinecke M., 2020, arXiv e-prints, p. arXiv:2010.03567
* Takada et al. (2014) Takada M., et al., 2014, PASJ, 66
* Tassev et al. (2013) Tassev S., Zaldarriaga M., Eisenstein D. J., 2013, JCAP, 6, 036
* Taylor et al. (2013) Taylor A., Joachimi B., Kitching T., 2013, MNRAS, 432, 1928
* Villaescusa-Navarro et al. (2018) Villaescusa-Navarro F., et al., 2018, ApJ, 867, 137
* Vlah et al. (2016) Vlah Z., Castorina E., White M., 2016, JCAP, 2016, 007
* Wibking et al. (2019) Wibking B. D., et al., 2019, MNRAS, 484, 989
* Winther et al. (2019) Winther H. A., Casas S., Baldi M., Koyama K., Li B., Lombriser L., Zhao G.-B., 2019, Phys. Rev. D, 100, 123540
* Zennaro et al. (2019) Zennaro M., Angulo R. E., Aricò G., Contreras S., Pellejero-Ibáñez M., 2019, MNRAS, 489, 5938
* d'Amico et al. (2020) d'Amico G., Gleyzes J., Kokron N., Markovic K., Senatore L., Zhang P., Beutler F., Gil-Marín H., 2020, JCAP, 2020, 005
## Appendix A LPT power spectra
In Eq. (6) we presented a general form expressing the cross power spectra
$P_{ij}$ of the different fields, with
$i,j=\\{1,\delta,\delta^{2},s^{2},\nabla^{2}\delta\\}$. Here, we report the
prefactors of the terms of order $(11)$ and $(22)$, and the different kernels
entering the $(22)$ terms.
The $(11)$ terms are multiplied by these prefactors (note the $k$-dependence
arising for terms including the Laplacian of the density field),
$A_{ij}(k)=\begin{bmatrix}1&1&0&0&-k^{2}\\ 1&1&0&0&-k^{2}\\ 0&0&0&0&0\\ 0&0&0&0&0\\ -k^{2}&-k^{2}&0&0&k^{4}\end{bmatrix}.$ (16)
The $(22)$ terms are preceded by
$B_{ij}(k)=\begin{bmatrix}1&1&1&1&0\\ 1&1/2&2&2&0\\ 1&2&1&1&1\\ 1&2&1&1&1\\ 0&0&1&1&1\end{bmatrix}.$ (17)
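For numerical work it can help to transcribe the two prefactor matrices literally. The following is a minimal NumPy sketch (the helper names are ours, not from any released code), using the field ordering $\{1,\delta,\delta^{2},s^{2},\nabla^{2}\delta\}$ of Eq. (6):

```python
import numpy as np

def A_prefactors(k):
    """(11)-term prefactor matrix A_ij(k) of Eq. (16).

    Field ordering: (1, delta, delta^2, s^2, nabla^2 delta); note the
    k-dependence of the rows/columns involving the Laplacian term.
    """
    k2, k4 = k**2, k**4
    return np.array([
        [1.0, 1.0, 0.0, 0.0, -k2],
        [1.0, 1.0, 0.0, 0.0, -k2],
        [0.0, 0.0, 0.0, 0.0, 0.0],
        [0.0, 0.0, 0.0, 0.0, 0.0],
        [-k2, -k2, 0.0, 0.0, k4],
    ])

# (22)-term prefactor matrix B_ij of Eq. (17); it carries no k-dependence.
B_PREFACTORS = np.array([
    [1.0, 1.0, 1.0, 1.0, 0.0],
    [1.0, 0.5, 2.0, 2.0, 0.0],
    [1.0, 2.0, 1.0, 1.0, 1.0],
    [1.0, 2.0, 1.0, 1.0, 1.0],
    [0.0, 0.0, 1.0, 1.0, 1.0],
])
```

Both matrices are symmetric, as expected for cross spectra with $P_{ij}=P_{ji}$.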
Finally, we report all the kernels for the $(22)$ terms. Please note that we
use
$F_{\rm
ZA}(\boldsymbol{k}_{1},\boldsymbol{k}_{2})=1+\dfrac{\boldsymbol{k}_{1}\cdot\boldsymbol{k}_{2}}{k_{1}k_{2}}\left(\dfrac{k_{1}}{k_{2}}+\dfrac{k_{2}}{k_{1}}\right)+\left(\dfrac{\boldsymbol{k}_{1}\cdot\boldsymbol{k}_{2}}{k_{1}k_{2}}\right)^{2},$
(18)
and
$S_{2}(\boldsymbol{k}_{1},\boldsymbol{k}_{2})=\left(\dfrac{\boldsymbol{k}_{1}\cdot\boldsymbol{k}_{2}}{k_{1}k_{2}}\right)^{2}-\dfrac{1}{3}.$
(19)
$C_{11}(\boldsymbol{p},\boldsymbol{k})=F_{\rm
ZA}^{2}(\boldsymbol{p},\boldsymbol{k}-\boldsymbol{p})$ (20)
$C_{1\delta}(\boldsymbol{p},\boldsymbol{k})=F_{\rm
ZA}(\boldsymbol{p},\boldsymbol{k}-\boldsymbol{p})\left[\dfrac{\boldsymbol{k}\cdot(\boldsymbol{k}-\boldsymbol{p})}{|\boldsymbol{k}-\boldsymbol{p}|^{2}}+\dfrac{\boldsymbol{k}\cdot\boldsymbol{p}}{p^{2}}\right]$
(21) $C_{1\delta^{2}}(\boldsymbol{p},\boldsymbol{k})=F_{\rm
ZA}(\boldsymbol{p},\boldsymbol{k}-\boldsymbol{p})$ (22)
$C_{1s^{2}}(\boldsymbol{p},\boldsymbol{k})=F_{\rm
ZA}(\boldsymbol{p},\boldsymbol{k}-\boldsymbol{p})S_{2}(\boldsymbol{p},\boldsymbol{k}-\boldsymbol{p})$
(23) $C_{1\nabla^{2}\delta}(\boldsymbol{p},\boldsymbol{k})=0$ (24)
$C_{\delta\delta}(\boldsymbol{p},\boldsymbol{k})=\dfrac{\boldsymbol{k}\cdot(\boldsymbol{k}-\boldsymbol{p})}{|\boldsymbol{k}-\boldsymbol{p}|^{2}}\left[\dfrac{\boldsymbol{k}\cdot(\boldsymbol{k}-\boldsymbol{p})}{|\boldsymbol{k}-\boldsymbol{p}|^{2}}+\dfrac{\boldsymbol{k}\cdot\boldsymbol{p}}{p^{2}}\right]$
(25)
$C_{\delta\delta^{2}}(\boldsymbol{p},\boldsymbol{k})=\dfrac{\boldsymbol{k}\cdot(\boldsymbol{k}-\boldsymbol{p})}{|\boldsymbol{k}-\boldsymbol{p}|^{2}}$
(26) $C_{\delta
s^{2}}(\boldsymbol{p},\boldsymbol{k})=\dfrac{\boldsymbol{k}\cdot(\boldsymbol{k}-\boldsymbol{p})}{|\boldsymbol{k}-\boldsymbol{p}|^{2}}S_{2}(\boldsymbol{p},\boldsymbol{k}-\boldsymbol{p})$
(27) $C_{\delta\nabla^{2}\delta}(\boldsymbol{p},\boldsymbol{k})=0$ (28)
$C_{\delta^{2}\delta^{2}}(\boldsymbol{p},\boldsymbol{k})=1$ (29)
$C_{\delta^{2}s^{2}}(\boldsymbol{p},\boldsymbol{k})=S_{2}(\boldsymbol{p},\boldsymbol{k}-\boldsymbol{p})$
(30)
$C_{\delta^{2}\nabla^{2}\delta}(\boldsymbol{p},\boldsymbol{k})=\left[p^{2}\dfrac{\boldsymbol{k}\cdot(\boldsymbol{k}-\boldsymbol{p})}{|\boldsymbol{k}-\boldsymbol{p}|^{2}}+|\boldsymbol{k}-\boldsymbol{p}|^{2}\dfrac{\boldsymbol{k}\cdot\boldsymbol{p}}{p^{2}}\right]$
(31)
$C_{s^{2}\nabla^{2}\delta}(\boldsymbol{p},\boldsymbol{k})=\left[p^{2}\dfrac{\boldsymbol{k}\cdot(\boldsymbol{k}-\boldsymbol{p})}{|\boldsymbol{k}-\boldsymbol{p}|^{2}}+|\boldsymbol{k}-\boldsymbol{p}|^{2}\dfrac{\boldsymbol{k}\cdot\boldsymbol{p}}{p^{2}}\right]S_{2}(\boldsymbol{p},\boldsymbol{k}-\boldsymbol{p})$
(32)
$C_{\nabla^{2}\delta\nabla^{2}\delta}(\boldsymbol{p},\boldsymbol{k})=\left[\dfrac{p^{4}}{|\boldsymbol{k}-\boldsymbol{p}|^{4}}(k^{2}-\boldsymbol{k}\cdot\boldsymbol{p})^{2}+\boldsymbol{k}\cdot\boldsymbol{p}(k^{2}-\boldsymbol{k}\cdot\boldsymbol{p})\right]$
(33)
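A direct transcription of the two building-block kernels, Eqs. (18)-(19), together with one representative $(22)$ kernel, can serve as a cross-check of any implementation. This sketch (helper names are ours) operates on 3-vectors:

```python
import numpy as np

def F_ZA(k1, k2):
    """Zel'dovich kernel of Eq. (18) for 3-vectors k1, k2."""
    n1, n2 = np.linalg.norm(k1), np.linalg.norm(k2)
    mu = np.dot(k1, k2) / (n1 * n2)  # cosine between k1 and k2
    return 1.0 + mu * (n1 / n2 + n2 / n1) + mu**2

def S_2(k1, k2):
    """Tidal kernel of Eq. (19)."""
    mu = np.dot(k1, k2) / (np.linalg.norm(k1) * np.linalg.norm(k2))
    return mu**2 - 1.0 / 3.0

def C_1s2(p, k):
    """Eq. (23): C_{1 s^2}(p, k) = F_ZA(p, k - p) S_2(p, k - p)."""
    return F_ZA(p, k - p) * S_2(p, k - p)
```

Useful limits for unit tests: parallel arguments of equal length give $F_{\rm ZA}=4$ and $S_{2}=2/3$, while orthogonal arguments give $S_{2}=-1/3$.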
## Appendix B Comparison against public PT codes
In this appendix we compare our implementation of LPT integrals and the
resulting prediction for Lagrangian fields in Eulerian space against those
computed using two publicly available codes.
Specifically, we compare against FastPT, which implements standard
perturbation theory at 1 loop, and against velocileptors, which employs 1-loop
Lagrangian perturbation theory with effective-field theory counterterms. Note
that not all power spectrum combinations are available in those codes, thus we
restrict the comparison to a subset of integrals.
Figure 8: Various cross-spectra, $P_{ij}$ at $z=0$, as indicated by the
legend. Shaded areas represent the standard deviation of 1000 fast simulations
evolved with 2LPT. Solid lines show our results, whereas dashed lines and
crosses denote the values obtained using velocileptors and FastPT,
respectively.
We first compare our LPT solutions against velocileptors. We see that, on
large scales, there is good agreement for $11$, $1\delta$, $\delta\delta$,
$\delta\delta^{2}$, $\delta s^{2}$, $\delta^{2}\delta^{2}$, $\delta^{2}s^{2}$,
and $s^{2}s^{2}$. We find some differences in the large-scale behavior of the
terms $1\delta^{2}$ and $1s^{2}$ that we plan to investigate in the future.
When compared against FastPT, we also see good agreement with our
predictions on large scales for $\delta\delta$, $\delta^{2}\delta^{2}$, and
$s^{2}s^{2}$, but systematic differences for $\delta\delta^{2}$ and $\delta
s^{2}$.
Our ensemble of LPT simulations shows very good agreement with our predictions,
as well as with those measured in full $N$-body simulations (cf. §3). Note
that, for the term $1\delta^{2}$, the LPT simulations appear to have less
power than the predictions. However, for this term, the 2LPT simulations also
exhibit less power than the $N$-body solution, which in turn is in better
agreement with our predictions, as shown in Fig. 3.
# Practical distributed quantum information processing with LOCCNet
Xuanqiang Zhao, Benchi Zhao, Zihe Wang, Zhixin Song, Xin Wang
Institute for Quantum Computing, Baidu Research, Beijing 100193, China
###### Abstract
Distributed quantum information processing is essential for building quantum
networks and enabling more extensive quantum computations. In this regime,
several spatially separated parties share a multipartite quantum system, and
the most natural set of operations is Local Operations and Classical
Communication (LOCC). As a pivotal part in quantum information theory and
practice, LOCC has led to many vital protocols such as quantum teleportation.
However, designing practical LOCC protocols is challenging due to LOCC’s
intractable structure and limitations set by near-term quantum devices. Here
we introduce LOCCNet, a machine learning framework facilitating protocol
design and optimization for distributed quantum information processing tasks.
As applications, we explore various quantum information tasks such as
entanglement distillation, quantum state discrimination, and quantum channel
simulation. We discover protocols with evident improvements, in particular
for entanglement distillation of quantum states of interest in quantum
information. Our approach opens up new opportunities for exploring
entanglement and its applications with machine learning, which will
potentially sharpen our understanding of the power and limitations of LOCC. An
implementation of LOCCNet is available in Paddle Quantum, a quantum machine
learning Python package based on the PaddlePaddle deep learning platform.
## I Introduction
In the past few decades, quantum technologies have been found to have an
increasing number of powerful applications in areas including optimization
Farhi2014 ; Harrigan2020 , chemistry McArdle2018a ; Arute2020 , security
Bennett1984 ; Ekert1991 , and machine learning Biamonte2017a . To realize
large-scale quantum computers and deliver real-world applications, distributed
quantum information processing will be essential in the technology road map,
where quantum entanglement and its manipulation play a crucial role.
Quantum entanglement is central to quantum information by serving as a
fundamental resource which underlies many important protocols such as
teleportation bennett1993teleporting , superdense coding
bennett1992communication , and quantum cryptography Ekert1991 . To achieve
real-world applications of quantum technologies, protocols for manipulating
quantum entanglement are essential ingredients, and it will be important to
improve existing methods. The study of entanglement manipulation is one of the
most active and important areas in quantum information Plenio2007 ;
horodecki2009quantum .
In entanglement manipulation and distributed quantum information processing,
multiple spatially separated parties are usually involved. As direct transfers
of quantum data between these nodes are not feasible with current technology,
Local Operations and Classical Communication (LOCC) bennett1993teleporting is
more practical at this stage. Such an LOCC (or distant lab) paradigm plays a
fundamental role in entanglement theory, and many important results have been
obtained within this paradigm horodecki2009quantum . However, how to design
LOCC protocols on near-term quantum devices preskill2018quantum remains an
important challenge. Such protocols are generally hard to design even with
perfect entanglement due to the complicated and hard-to-characterize structure
of LOCC Chitambar2014 . Moreover, limited capabilities and structure of near-
term quantum devices have to be considered during the design of LOCC
protocols.
Inspired by the breakthroughs of deep learning LeCun2015 in mastering the
game of Go Silver2016 and solving protein folding Jumper2020 , it is
desirable to apply machine learning ideas to explore quantum technologies. For
instance, machine learning has been applied to improve quantum processor
designs Mavadia2017 ; Wan2017 ; Lu2017b ; Niu2019 and quantum communication
Wallnofer2020 ; Bausch2018 . Here, we adopt the ideas from machine learning to
solve the challenges in exploring LOCC protocols. We use parameterized quantum
circuits (PQC) Benedetti2019a to represent the local operations allowed in
each spatially separated party and then incorporate multiple rounds of
classical communication. Then one can formulate the original task as an
optimization problem and adopt classical optimization methods to search the
optimal LOCC protocol. The PQCs have been regarded as machine learning models
with remarkable expressive power, which leads to applications in quantum
chemistry and optimization Benedetti2019a . Here, we generalize PQC to a
larger deep learning network to deal with distributed quantum information
processing tasks and in particular to explore better entanglement manipulation
protocols.
In this work, we introduce a machine learning framework for designing and
optimizing LOCC protocols that are adaptive to near-term quantum devices,
which consists of a set of PQCs representing local operations. As
applications, we explore central quantum information tasks such as
entanglement distillation, state discrimination, and quantum channel
simulation. We discover protocols with evident improvements via this
framework, sharpening our understanding of the power and limitations of LOCC.
As showcases, we establish simple, hardware-efficient protocols for
entanglement distillation and state discrimination that outperform the
previously best-known methods. In particular, for distillation of Bell states
with non-orthogonal product noise, the optimized protocol outputs a state
whose distillation fidelity even reaches the theoretical upper bound and hence
is optimal.
## II Results
### II.1 The LOCCNet framework
In this section, we introduce LOCCNet, a machine learning framework that
facilitates the design of LOCC protocols for various quantum information
processing tasks, including entanglement distillation Bennett1996 ;
deutsch1996quantum ; Murao1998 ; Dur2007 ; Pan2003 ; Devetak2003a , quantum
state discrimination Bennett1999b ; Walgate2000 ; Fan2004a ; Hayashi2006 ;
Ghosh2004 ; Nathanson2005 ; Duan2007a ; Chitambar2014b ; Duan2009d ;
Childs2013 ; Li2017 ; Bandyopadhyay2014a , and quantum channel simulation
Bennett1996c ; Bennett2014 ; Berta2013 ; Pirandola2015b ; Wilde2018 ; WW18 ;
Fang2018 . An LOCC protocol can be characterized as a sequence of local
quantum operations performed by spatially separated parties with classical
communication of measurement outcomes [see Supplementary Note 1].
According to the number of classical communication rounds, one can divide LOCC
into different classes Chitambar2014 . The one-round protocols correspond to
LOCC operations where one party applies a local operation and sends the
measurement outcome to others, who then apply local operations chosen based on
the outcome they receive. Based on one-round protocols, we are able to
construct an $r$-round protocol recursively. All these protocols belong to the
finite-round LOCC class and can be visualized as tree graphs. Each node in
the tree represents a local operation, and the different measurement outcomes
correspond to edges connecting the node to its children, which represent the
different choices of local operations made based on the measurement outcomes
of the previous round.
Although the basic idea of LOCC is relatively easy to grasp, its mathematical
structure is highly complicated Chitambar2014 and hard to characterize. As
indicated by its tree structure, a general $r$-round LOCC protocol could lead
to exponentially many possible results, making LOCC protocol designs for many
essential quantum information processing tasks very challenging. At the same
time, it will be more practical to consider LOCC protocols with hardware-
efficient local operations and a few communication rounds due to the limited
coherence time of local quantum memory. To overcome these challenges, we
propose to find LOCC protocols with the aid of machine learning, inspired by
its recent success in various areas. Specifically, we present the LOCCNet
framework, which incorporates optimization methods from the classical machine
learning field into the workflow of designing LOCC protocols and can, in
principle, simulate any finite-round LOCC protocol.
Figure 1: Illustration of the procedure for optimizing an LOCC protocol with
LOCCNet. For simplicity, only two parties are involved in this workflow,
namely Alice and Bob. The tree presented here corresponds to a specific two-
round LOCC protocol. Such a tree can be customized with LOCCNet. With each
node (Local Operation) encoded as a PQC and arrows between nodes referring to
classical communication, one can define a loss function to guide the training
process depending on the task. The tree branches diverge to indicate the
different possible measurement outcomes. Finally, one can adopt optimization methods to
iteratively update the parameters $\bm{\theta}$ in each local operation and
hence obtain the optimized LOCC protocol.
As illustrated in Fig. 1, each party’s local operations, represented by nodes
in a tree, are described as parameterized quantum circuits (PQC)
Benedetti2019a . Users can measure any chosen qubit and define a customized
loss function from measurement outcomes as well as remaining states. With a
defined loss function for a task of interest, LOCCNet can be optimized to give
a protocol. The effect of classical communication is also well simulated by
LOCCNet in the sense that different PQCs can be built for different
measurement outcomes from previous rounds.
Previously, PQCs have been adapted to many research areas including quantum
simulation peruzzo2014variational , quantum optimization Farhi2014 , and
quantum error correction Johnson2017 . The family of variational quantum
algorithms Cerezo2020 ; Bharti2021 ; Endo2020 , based on PQCs, is one
promising candidate to achieve quantum advantages with near-term devices. In
quantum information, PQCs also help in estimating distance measures for
quantum states Chen2020a ; Cerezo2019 and compressing quantum data Romero2017
; Cao2020 . Here, we take one step further by extending the use of PQCs to the
distributed quantum information processing scenario where LOCC is the most
natural set of operations.
In the next three sections, we will demonstrate the LOCCNet framework in
details with important applications and present some interesting findings,
including protocols that achieve better results than existing ones. We conduct
software implementations of LOCCNet using the Paddle Quantum toolkit
Paddlequantum on the PaddlePaddle Deep Learning Platform Ma2019 ; Paddle .
### II.2 Entanglement distillation
Many applications of LOCC involve entanglement manipulation, and the use of
entanglement is generally required to be in its pure and maximal form. Hence,
the efficient conversion of entanglement into such a form, a process known as
entanglement distillation Bennett1996 ; Bennett1996c , is usually a must for
many quantum technologies. The development of entanglement distillation
methods remains at the forefront of quantum information horodecki2009quantum .
For example, the two-qubit maximally entangled state
$|\Phi^{+}\rangle=1/\sqrt{2}(|00\rangle+|11\rangle)$, which is also known as
the entangled bit (ebit), is the fundamental resource unit in entanglement
theory since it is a key ingredient in many quantum information processing
tasks. Thus, an essential goal for entanglement distillation in a two-qubit
setting is to convert a number of copies of some two-qubit state $\rho_{AB}$
shared by two parties, Alice and Bob, into a state as close as possible to the
ebit. Here, closeness between the state $\rho_{AB}$ and the ebit is usually
measured in terms of the fidelity
$\displaystyle F=\langle\Phi^{+}|\rho_{AB}|\Phi^{+}\rangle.$ (1)
Although theory is more concerned with asymptotic distillation with unlimited
copies of $\rho_{AB}$, protocols considering a finite number of copies are
more practical due to the physical limitations of near-term quantum
technologies. Also, practical distillation protocols usually allow for the
possibility of failure as a trade-off for achieving a higher final fidelity.
Furthermore, due to the limited coherence time of local quantum memories, schemes
involving only one round of classical communication are preferred in practice.
Under these settings, many practical schemes for entanglement distillation
have been proposed Bennett1996 ; deutsch1996quantum ; fujii2009entanglement ;
kalb2017entanglement ; rozpkedek2018optimizing ; krastanov2019optimized . Not
surprisingly, no single scheme applies to all kinds of states. In fact,
designing a protocol even for a specific type of state is a difficult task.
In this section, we apply LOCCNet to entanglement distillation and present
selected results that reinforce the validity and practicality of using this
framework for designing LOCC protocols. To use LOCCNet for finding
distillation protocols for a state $\rho_{AB}$, we build two PQCs, one for
Alice and one for Bob. In the preset event of success, these PQCs output a
state supposed to have a higher fidelity to the ebit. To optimize the PQCs, we
define the infidelity between the output state and the ebit, i.e., $1-F$, as the
loss function to be minimized. As soon as the value of the loss function
converges through training, the PQCs along with the optimized parameters form
an LOCC distillation protocol. In principle, this training procedure is
general and can be applied to find distillation protocols for any initial
state $\rho_{AB}$ given its numerical form. Beyond rediscovering existing
protocols, we are also able to find improved protocols with LOCCNet. Below, we
give two distillation protocols for S states and isotropic states,
respectively, as examples of optimized schemes found with LOCCNet.
An S state is a mixture of the ebit $|\Phi^{+}\rangle$ and non-orthogonal
product noise rozpkedek2018optimizing . Here, we define it to be
$\displaystyle\rho_{AB}=p|\Phi^{+}\rangle\langle\Phi^{+}|+(1-p)|00\rangle\langle
00|,$ (2)
where $p\in[0,1]$. A distillation protocol known to perform well on two copies
of some S state is the DEJMPS protocol deutsch1996quantum , which in this case
outputs a state whose fidelity to the ebit is $(1+p)^{2}/(2+2p^{2})$ with a
probability of $(1+p^{2})/2$ [see Supplementary Note 2].
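These expressions are easy to check numerically. The sketch below (plain NumPy, no quantum toolkit assumed; function names are ours) builds the S state, evaluates its fidelity to the ebit, which works out to $(1+p)/2$, and tabulates the DEJMPS output fidelity:

```python
import numpy as np

def s_state(p):
    """Density matrix of the S state, Eq. (2), in the basis {00, 01, 10, 11}."""
    phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # |Phi+>
    ket00 = np.array([1.0, 0.0, 0.0, 0.0])                  # |00>
    return p * np.outer(phi_plus, phi_plus) + (1 - p) * np.outer(ket00, ket00)

def fidelity_to_ebit(rho):
    """Eq. (1): F = <Phi+| rho |Phi+>."""
    phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
    return float(phi_plus @ rho @ phi_plus)

def dejmps_fidelity(p):
    """Output fidelity of DEJMPS on two copies of an S state."""
    return (1 + p) ** 2 / (2 + 2 * p**2)
```

Since $(1+p)^{2}/(2+2p^{2})>(1+p)/2$ for all $p\in(0,1)$, DEJMPS strictly improves on the input fidelity in this range.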
Here, we present a protocol learned by LOCCNet that can output a state
achieving a fidelity higher than DEJMPS and close to the highest possible
fidelity. Details on this protocol after simplification are given in Fig. 2,
where Alice and Bob apply local operations to their own qubits independently
and then compare their measurement outcomes through classical communication.
The distillation succeeds only when both Alice and Bob get $0$ from
computational basis measurements.
Figure 2: Circuit of a distillation protocol learned by LOCCNet for S states.
This simplified circuit represents local operations in a distillation protocol
learned by LOCCNet for two copies of an S state, $\rho_{A_{0}B_{0}}$ and
$\rho_{A_{1}B_{1}}$. The rotation angles of both $R_{y}$ gates are
$\theta=\arccos(1-p)+\pi$, which depend on the parameter $p$ of the S states
to be distilled.
The final fidelity achieved by this protocol is compared with that achieved by
the DEJMPS protocol in Fig. 3. For benchmarking purposes, techniques based
on the positive partial transpose (PPT) were introduced to derive
fundamental limits of entanglement distillation Rains2001 ; Matthews2008 ;
Wang2016 ; Fang2017 ; rozpkedek2018optimizing ; Wang2016c . The entanglement
theory under PPT operations has been extensively studied in the literature
(e.g., Audenaert2003 ; Wang2016d ; Regula2019 ; Chitambar2017 ; Wang2017d ;
Wang2020c ) and provides useful bounds on the power of LOCC. Here, the PPT bound
obtained with semi-definite programming rozpkedek2018optimizing is an upper
bound to the fidelity achieved by any LOCC protocol [see Supplementary Note
2].
As shown in the figure, the protocol learned by LOCCNet achieves near-optimal
fidelity in the sense that it is close to the PPT bound. Analytically, for two
copies of some S state with a parameter $p$, the post-measurement state in the
event of success is
$\sigma_{AB}=F|\Phi^{+}\rangle\langle\Phi^{+}|+(1-F)|\Phi^{-}\rangle\langle\Phi^{-}|$,
where
$\displaystyle F=\frac{1+\sqrt{2p-p^{2}}}{2}$ (3)
is its fidelity to the ebit and
$|\Phi^{-}\rangle=1/\sqrt{2}(|00\rangle-|11\rangle)$. The probability of
arriving at this state is $p_{\text{succ}}=p^{2}-p^{3}/2$ [see Supplementary
Note 2]. It is noteworthy that the distilled state is a Bell diagonal state of
rank two. For two copies of such a state, the DEJMPS protocol achieves the
optimal fidelity rozpkedek2018optimizing ; Ruan2018 . Thus, combining our
protocol with the DEJMPS protocol offers an efficient and scalable
distillation scheme for more copies of some S state.
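The closed-form expressions above make the comparison with DEJMPS straightforward. This minimal sketch just evaluates Eq. (3) and the quoted success probability (function names are ours):

```python
import numpy as np

def learned_fidelity(p):
    """Eq. (3): output fidelity of the learned protocol on two S-state copies."""
    return (1 + np.sqrt(2 * p - p**2)) / 2

def learned_p_succ(p):
    """Success probability of the learned protocol."""
    return p**2 - p**3 / 2

def dejmps_fidelity(p):
    """DEJMPS output fidelity on two copies of an S state, for comparison."""
    return (1 + p) ** 2 / (2 + 2 * p**2)
```

At $p=0.5$, for instance, the learned protocol reaches $F\approx 0.933$ versus $0.9$ for DEJMPS, at a success probability of $3/16$.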
Figure 3: Fidelity achieved by distillation protocols for two copies of some S
state. The orange dashed line depicts the performance of the protocol learned
by LOCCNet, which outperforms the DEJMPS protocol (green dotted). Also, the
learned protocol is near optimal in the sense that its line almost aligns with
the PPT bound (blue solid).
Another important family of entangled states is the isotropic state family,
defined as
$\displaystyle\rho_{AB}=p|\Phi^{+}\rangle\langle\Phi^{+}|+(1-p)\frac{I}{4},$
(4)
where $p\in[0,1]$ and $I$ is the identity matrix. Distillation protocols for
two copies of some isotropic state have been well studied, and the DEJMPS
protocol achieves empirically optimal fidelity in this case. Given four copies
of some isotropic state with a parameter $p$, a common way to distill
entanglement is to divide them into two groups of two copies and apply the
DEJMPS protocol to each group. Conditioned on success, we then apply the
DEJMPS protocol again to the two resulting states from the previous round.
Since the DEJMPS protocol was originally designed for two-copy distillation,
such a generalization is probably unable to fully exploit the resources
contained in four copies of the state. Indeed, with the aid of LOCCNet, we
find a protocol optimized specifically for four copies of some isotropic
state. As illustrated in Fig. 4, Alice and Bob first apply similar local
operations with three pairs of qubits being measured and then compare their
measurement outcomes through classical communication. If their measurement
outcomes for each pair of qubits are identical, the distillation procedure
succeeds.
Figure 4: Circuit of a distillation protocol learned by LOCCNet for isotropic
states. This simplified circuit represents Alice’s local operation in a
protocol learned by LOCCNet for entanglement distillation with four copies of
some isotropic state. Bob’s local operation is identical to Alice’s, except
that the rotation angles of Bob’s $R_{x}$ gates are $-\pi/2$.
The fidelity achieved by this protocol for different input isotropic states is
plotted in Fig. 5, along with that of the generalized DEJMPS protocol. For
four copies of some isotropic state with a parameter $p$, our protocol
achieves a final fidelity of
$\displaystyle F=\frac{1-2p+9p^{2}}{4-8p+12p^{2}},$ (5)
which is slightly higher than that of the generalized DEJMPS protocol, as shown
in Fig. 5. Details are given in [Supplementary Note 2]. Another advantage of this optimized
protocol is that the output state in the event of success is still an
isotropic state, implying the possibility of a generalized distillation
protocol for $4^{n}$ copies of some isotropic state.
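Eq. (5) passes two immediate sanity checks: a pure ebit input ($p=1$) is preserved with $F=1$, and a maximally mixed input ($p=0$) yields $F=1/4$, the overlap of $I/4$ with the ebit. A one-function sketch:

```python
def isotropic_learned_fidelity(p):
    """Eq. (5): fidelity after the four-copy protocol on isotropic states."""
    return (1 - 2 * p + 9 * p**2) / (4 - 8 * p + 12 * p**2)
```

For comparison, the input fidelity of a single isotropic state is $p+(1-p)/4$, which the protocol exceeds for sufficiently large $p$ (e.g., $p=0.5$ gives $F=3/4$ versus an input fidelity of $5/8$).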
We remark that our protocols are optimized with the goal to achieve the
highest possible fidelity, so their probabilities of success are not high. For
situations where the probability of success is important, one can also design
a customized loss function to optimize a protocol for that metric.
Figure 5: Fidelity achieved by distillation protocols for four copies of some
isotropic state. The blue solid line depicts the fidelity achieved by the
protocol learned by LOCCNet, which outperforms the generalized DEJMPS protocol
(orange dashed).
### II.3 Distributed quantum state discrimination
Another important application of LOCC is quantum state discrimination (QSD).
Distinguishing one physical configuration from another is central to
information theory. When messages are encoded into quantum states for
information transmission, the processing of this information relies on the
distinguishability of quantum states. Hence, QSD has been a central topic in
quantum information Bae2015 ; reviewQSD ; Li2015a , which investigates how
well quantum states can be distinguished and underlies various applications in
quantum information processing tasks, including quantum data hiding
DiVincenzo2002 and dimension witness Gallego2010 .
QSD using global quantum operations is well-understood in the sense that the
optimal strategy maximizing the success probability can be solved efficiently
via semi-definite programming (SDP) Eldar2003 ; Sun2002 ; Jezek2002 . However,
for an important operational setting called distant lab paradigm or
distributed regime, our knowledge of QSD remains limited despite substantial
efforts in the past two decades Bennett1999b ; Walgate2000 ; Fan2004a ;
Hayashi2006 ; Ghosh2004 ; Nathanson2005 ; Duan2007a ; Chitambar2014b ;
Duan2009d ; Childs2013 ; Li2017 ; Bandyopadhyay2014a . In the distributed
regime, multipartite quantum states are distributed to spatially separated
labs, and the goal is to distinguish between these states via LOCC.
For two orthogonal pure states shared between multiple parties, it has been
shown that they can be distinguished via LOCC alone no matter if these states
are entangled or not Walgate2000 . However, it is not easy to design a
concrete LOCC protocol for practical implementation on near-term quantum
devices. Using LOCCNet, one can optimize and obtain practical LOCC protocols
for quantum state discrimination. Furthermore, for non-orthogonal states,
only limited aspects of the feasibility of LOCC discrimination have been
investigated. LOCCNet, however, can provide an optimized and practical
protocol in this realistic setting.
Here, to explore the power of LOCCNet in state discrimination, we focus on the
optimal success probability of discriminating between noiseless and noisy Bell
states via LOCC. Consider two Bell states, $|\Phi^{+}\rangle$ and
$|\Phi^{-}\rangle$, and an amplitude damping (AD) channel $\mathcal{A}$ with
noise parameter $\gamma$ such that $\mathcal{A}(\rho)=E_{0}\rho
E_{0}^{\dagger}+E_{1}\rho E_{1}^{\dagger}$ with $E_{0}=|0\rangle\langle
0|+\sqrt{1-\gamma}|1\rangle\langle 1|$ and
$E_{1}=\sqrt{\gamma}|0\rangle\langle 1|$. If we send each of the two qubits of
$|\Phi^{-}\rangle$ through this AD channel, then the resulting state is
$\mathcal{A}\otimes\mathcal{A}(|\Phi^{-}\rangle\langle\Phi^{-}|)$. The goal is
now to distinguish between $|\Phi^{+}\rangle\\!\langle\Phi^{+}|$ and
$\mathcal{A}\otimes\mathcal{A}(|\Phi^{-}\rangle\\!\langle\Phi^{-}|)$.
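The channel and the noisy target state are simple to construct explicitly. The sketch below (plain NumPy) builds the Kraus operators, which satisfy the completeness relation $E_{0}^{\dagger}E_{0}+E_{1}^{\dagger}E_{1}=I$, and applies $\mathcal{A}\otimes\mathcal{A}$ to $|\Phi^{-}\rangle\langle\Phi^{-}|$:

```python
import numpy as np

def ad_kraus(gamma):
    """Kraus operators of the amplitude damping channel with parameter gamma."""
    E0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - gamma)]])  # |0><0| + sqrt(1-g)|1><1|
    E1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])      # sqrt(g)|0><1|
    return E0, E1

def noisy_phi_minus(gamma):
    """The state (A (x) A)(|Phi-><Phi-|) to be discriminated from |Phi+>."""
    phi_minus = np.array([1.0, 0.0, 0.0, -1.0]) / np.sqrt(2)
    rho = np.outer(phi_minus, phi_minus)
    E0, E1 = ad_kraus(gamma)
    out = np.zeros((4, 4))
    for Ea in (E0, E1):
        for Eb in (E0, E1):
            K = np.kron(Ea, Eb)       # one Kraus term of the product channel
            out += K @ rho @ K.T      # all matrices are real, so .T = dagger
    return out
```

At $\gamma=0$ the channel is the identity and the output is $|\Phi^{-}\rangle\langle\Phi^{-}|$ itself; for any $\gamma$ the output remains a valid unit-trace Hermitian state.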
Suppose $\Phi_{0}$ and $\Phi_{1}$ are some pair of two-qubit states. To find a
protocol discriminating between them, we build an ansatz with measurements on
both qubits. As illustrated in Fig. 6, Alice performs a unitary gate on her
qubit followed by a measurement, whose outcome determines Bob’s operation on
his qubit. Given an ideal discrimination protocol, Bob’s measurement outcome
should be $0$ if and only if the input state is $\Phi_{0}$, so that he can
determine the input state with certainty. Based on this observation, we define
a loss function
$\displaystyle L=P(1|\Phi_{0})+P(0|\Phi_{1}),$ (6)
where $P(j|\Phi_{k})$ is the probability of Bob’s measurement outcome being
$j$ given the input state being $\Phi_{k}$. By minimizing this loss function,
we are able to obtain a protocol for distinguishing between states $\Phi_{0}$
and $\Phi_{1}$ with an optimized probability of success. Specifically, for
$\Phi_{0}\equiv|\Phi^{+}\rangle\\!\langle\Phi^{+}|$ and
$\Phi_{1}\equiv\mathcal{A}\otimes\mathcal{A}(|\Phi^{-}\rangle\\!\langle\Phi^{-}|)$,
through optimization we find a protocol where Alice’s local unitary operation
is $U=R_{y}(\pi/2)$ and Bob’s local unitary operation is
$V=R_{y}((-1)^{a}\theta)$ where $\theta=\pi-\arctan((2-\gamma)/\gamma)$ and
$a=0\text{ or }1$ is Alice’s measurement outcome. This optimized protocol
achieves an average success probability of
$\displaystyle
p_{\text{succ}}=\frac{1}{2}+\frac{\sqrt{2-2\gamma+\gamma^{2}}}{2\sqrt{2}},$
(7)
as given in [Supplementary Note 3].
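Eq. (7) has two useful limits that can be verified directly: at $\gamma=0$ the two states are orthogonal Bell states and $p_{\text{succ}}=1$, while at $\gamma=1$ it drops to $1/2+1/(2\sqrt{2})\approx 0.854$, still above random guessing. A minimal sketch:

```python
import numpy as np

def p_succ(gamma):
    """Eq. (7): average success probability of the learned QSD protocol."""
    return 0.5 + np.sqrt(2 - 2 * gamma + gamma**2) / (2 * np.sqrt(2))
```

Since $2-2\gamma+\gamma^{2}$ decreases on $[0,1]$, $p_{\text{succ}}$ decreases monotonically with the noise strength, consistent with the behavior shown in Fig. 7.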
Figure 6: Ansatz used for finding QSD protocols with LOCCNet. Alice performs a
unitary gate on her qubit and measures. Then Bob performs on his qubit a
unitary gate chosen based on Alice’s measurement result. Bob’s measurement
outcome is then supposed to reveal which of the two states was input.
In Fig. 7, we compare the protocol learned by LOCCNet with the optimal
protocol for perfect discrimination between two noiseless and orthogonal Bell
states $|\Phi^{+}\rangle$ and $|\Phi^{-}\rangle$. The PPT bound shown in Fig.
7 is obtained via SDP and serves as an upper bound to the average probability
of any LOCC protocol recognizing the input state correctly Yu2014a , where the
input state is either $\Phi_{0}$ or $\Phi_{1}$ with equal chance. While the
noiseless protocol remains consistently better than random guessing, its
discrimination ability inevitably degrades as noise in the AD channel
increases, and the gap between its probability of success and the PPT bound
steadily widens. On the other hand, the protocol optimized with LOCCNet
can achieve a near-optimal probability of success for each noise setting, as
shown in the figure.
Figure 7: Average success probability of distinguishing a Bell state and a
noisy Bell state. The orange dashed line depicts the behavior of the protocol
via LOCCNet, which outperforms the protocol for distinguishing perfect
orthogonal Bell states (green dotted). Moreover, the protocol from LOCCNet is
near optimal since it almost matches the upper bounds obtained via PPT POVMs
(blue solid).
### II.4 Quantum channel simulation
One central goal of quantum information is to understand the limitations
governing the use of quantum systems to take advantage of quantum physics
laws. Quantum channels lie at the heart of this question since they
characterize what we can do with quantum states physically Nielsen2010 ;
Wilde2017book ; Watrous2011b . To fully exploit quantum resources, the ability
to manipulate quantum channels under operational settings is important.
Particularly, in distributed quantum computing, one fundamental primitive,
dubbed quantum channel simulation, is to realize quantum channels from one
party to another using entanglement and LOCC protocols. Quantum channel
simulation, exploiting entanglement to synthesize a target channel through
LOCC protocols Bennett1996c ; Bennett2014 ; Berta2013 ; Pirandola2015b ;
Wilde2018 ; WW18 ; Fang2018 ; Pirandola2018 , serves as the basis of many
problems in quantum information, including quantum communication, quantum
metrology Pirandola2017 , and quantum key distribution Pirandola2020 .
One famous example of quantum channel simulation is quantum teleportation
(i.e., simulation of the identity channel). As one of the most important
quantum information processing protocols bennett1993teleporting ;
Pirandola2015 , quantum teleportation exploits the physical resource of
entanglement to realize noiseless quantum channels between different parties
and it is an important building block for quantum technologies including
distributed quantum computing and quantum networks. Similar to quantum
teleportation, quantum channel simulation is a general technique to send an
unknown quantum state $\psi$ from a sender to a receiver such that the
receiver could obtain ${\cal N}_{A^{\prime}\to B}(\psi_{A^{\prime}})$ with the
help of a pre-shared entangled state $\rho_{AB}$ and an LOCC protocol $\Pi$.
The overall scheme simulates the target channel ${\cal N}$ in the sense that
$\displaystyle\Pi(\psi_{A^{\prime}}\otimes\rho_{AB})={\cal N}_{A^{\prime}\to
B}(\psi_{A^{\prime}}),\forall\psi_{A^{\prime}}.$ (8)
For some classes of channels such as Pauli channels, the LOCC-based simulation
protocols were known Bennett1996c ; Horodecki1999 ; Pirandola2015b . However,
the LOCC protocols for general quantum channel simulation are hard to design
due to the complexity of LOCC. Even for the qubit amplitude damping (AD)
channel, the LOCC protocol for simulating this channel in the non-asymptotic
regime is still unknown, and its solution would provide a better estimate of
its secret key capacity Pirandola2020 . Note that the asymptotic simulation of
this channel involving infinite dimensions was introduced in Pirandola2015b .
Here, we apply our LOCCNet to explore the simulation of an AD channel
$\mathcal{A}$ using its Choi state Choi1975
$\rho_{\mathcal{A}}=(I\otimes\mathcal{A})(\Phi^{+})$ as the pre-shared
entangled state. Note that the AD channel is one of the realistic sources of
noise in superconducting quantum processors Chirolli2018 .
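As a concrete illustration, the Choi state of the AD channel is simple to construct numerically. The sketch below is a hypothetical helper of our own (not from the paper's code), built from the standard AD Kraus operators:

```python
import numpy as np

def ad_choi(g):
    """Choi state rho_A = (I x A)(Phi+) of the amplitude damping channel."""
    K = [np.array([[1, 0], [0, np.sqrt(1 - g)]]),  # no-damping Kraus operator
         np.array([[0, np.sqrt(g)], [0, 0]])]      # damping Kraus operator
    phi_p = np.array([1, 0, 0, 1]) / np.sqrt(2)    # |Phi+>
    rho = np.outer(phi_p, phi_p)
    # apply A to the second qubit only
    return sum(np.kron(np.eye(2), Ki) @ rho @ np.kron(np.eye(2), Ki).T
               for Ki in K)
```

At $\gamma=0$ the Choi state is the ebit itself, so channel simulation with this resource reduces to ordinary teleportation, consistent with the discussion above.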
To train the LOCCNet for simulating ${\cal A}$, we select a set of linearly
independent density matrices $S$ as the training set. The loss function for
this channel simulation task is then defined as
$\displaystyle L=-\sum_{\psi\in S}F({\cal A}(\psi),{\cal B}(\psi)),$ (9)
where ${\cal B}$ is the actual channel simulated by LOCCNet with current
parameters and
$F(\rho,\sigma)=\left(\operatorname{Tr}\sqrt{\rho^{1/2}\sigma\rho^{1/2}}\right)^{2}$
gives the fidelity between states $\rho$ and $\sigma$. With this loss function
to be minimized, the parameters in LOCCNet are optimized to maximize the state
fidelity between $\mathcal{A}(\psi)$ and $\mathcal{B}(\psi)$ for all $\psi\in
S$.
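Computing this loss only requires the Uhlmann fidelity, which can be evaluated with a Hermitian eigendecomposition. A minimal sketch (helper names are ours; the channels are passed in as callables, an assumption for illustration):

```python
import numpy as np

def psd_sqrt(m):
    """Square root of a positive semidefinite matrix via eigendecomposition."""
    w, v = np.linalg.eigh(m)
    return (v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T

def fidelity(rho, sigma):
    """Uhlmann fidelity F = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2."""
    s = psd_sqrt(rho)
    return np.trace(psd_sqrt(s @ sigma @ s)).real ** 2

def loss(channel_a, channel_b, training_set):
    """Loss of Eq. (9): negative total fidelity over the training set."""
    return -sum(fidelity(channel_a(psi), channel_b(psi))
                for psi in training_set)
```

For pure states the fidelity reduces to the squared overlap, e.g. $F(|0\rangle\\!\langle 0|,|+\rangle\\!\langle+|)=1/2$.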
Once the LOCCNet is trained to transmit all the basis states in $S$ with near-
perfect fidelity, we obtain a protocol for simulating ${\cal A}$. For
benchmarking, we randomly generate 1000 pure states and send them to Bob
through the learned protocol.
The results are summarized in Fig. 8. Compared with the original teleportation
protocol, we achieve equivalent performance at low noise levels and better
performance at noise levels $\gamma>0.4$. Note that the numerical
simulations are conducted on Paddle Quantum Paddlequantum .
Figure 8: Average fidelity of simulating AD channel with LOCC protocols. The
blue curve depicts the behavior of the protocol via LOCCNet, which outperforms
the original teleportation (orange) at high noise level (noise parameter
$\gamma>0.4$). Each data point contains the statistical results of $1000$
randomly generated states.
## III Discussion
We established LOCCNet for exploring LOCC protocols in distributed quantum
information processing. Its overall pipeline is standard for machine learning
algorithms. For a specific task, one firstly designs an appropriate loss
function and then utilizes different LOCCNet structures and optimization
methods to train the model to obtain an optimal or near-optimal protocol.
Depending on the nature of the task, a selected training data set may be
required, as in the case of channel simulation. Based on the current design of
LOCCNet, more machine learning techniques, such as reinforcement learning
could be incorporated into this framework, making it a more powerful tool for
exploring LOCC protocols.
LOCCNet not only unifies and extends the existing LOCC protocols, but also
sheds light on the power and limitation of LOCC in the noisy intermediate-
scale quantum (NISQ) era preskill2018quantum by providing a plethora of
examples. We developed improved protocols for entanglement distillation, local
state discrimination, and quantum channel simulation as applications. As a
showcase, we applied LOCCNet to establish hardware-efficient and state-of-the-
art protocols for entanglement distillation of noisy entangled states of
interest. In addition to making a significant contribution to entanglement
distillation, LOCCNet finds direct practical use in many settings, as we
exemplified with several explicit applications in distinguishing noisy and
noiseless Bell states as well as simulating amplitude damping channels.
Having shown the ability of LOCCNet to discover improved LOCC protocols, one
future direction is to apply it to further enhance practical entanglement
manipulation and quantum communication and to explore
fundamental problems in quantum information theory. While in this paper we
mainly focus on bipartite cases, LOCCNet also supports multipartite
entanglement manipulation. For example, as an essential part in quantum
repeaters Briegel1998 , entanglement swapping aims to transform two entangled
pairs shared between Alice and Bob and between Bob and Carol into a new
entangled pair shared by Alice and Carol using only LOCC. Indeed, we could use
LOCCNet to design such a protocol. For instance, we can build an LOCCNet where
Bob first operates on and measures his subsystem, and then Alice and Carol
perform local operations according to the measurement results from Bob. The
loss function to minimize can be defined as the infidelity of a target state
and the output state shared between Alice and Carol. Similar procedures can be
followed to apply LOCCNet in optimizing other multipartite protocols as well,
which is worth exploring in future works.
Another important direction is to extend the framework to continuous-variable
quantum information processing, which may be applied to explore
better LOCC protocols of private communication based on continuous variable
systems Pirandola2020 . Having seen the potential of advancing distributed
quantum information processing with the aid of machine learning, we expect
more cases in which classical machine learning is used to improve quantum
technologies, which in turn will enhance quantum machine learning
applications.
## Data Availability
Data that support the plots and other findings of this study are available
from the corresponding authors upon reasonable request.
## Code Availability
Code used in the numerical experiments on quantum channel simulation is
available at https://github.com/vsastray/LOCCNetcodes. Other code used in this
study is available from the corresponding authors upon reasonable request.
## Acknowledgements
We would like to thank Runyao Duan and Kun Fang for helpful discussions.
## Competing Interests
The authors declare no competing interests.
## Author Contributions
X. W. formulated the initial idea and the framework; X. Z. and B. Z. developed
the theory; X. Z., B. Z., Z. W., and Z. S. performed the experiments. All co-
authors contributed to the preparation of the manuscript.
## References
* (1) Farhi, E., Goldstone, J. & Gutmann, S. A Quantum Approximate Optimization Algorithm. Preprint at http://arxiv.org/abs/1411.4028 (2014).
* (2) Harrigan, M. P. _et al._ Quantum approximate optimization of non-planar graph problems on a planar superconducting processor. _Nat. Phys._ 17, 332–336 (2021).
* (3) McArdle, S., Endo, S., Aspuru-Guzik, A., Benjamin, S. & Yuan, X. Quantum computational chemistry. _Rev. Mod. Phys._ 92, 015003 (2020).
* (4) Arute, F. _et al._ Hartree-Fock on a superconducting qubit quantum computer. _Science_ 369, 1084–1089 (2020).
* (5) Bennett, C. H. & Brassard, G. Quantum cryptography: Public key distribution and coin tossing. In _International Conference on Computers, Systems & Signal Processing, Bangalore, India, Dec 9-12, 1984_, 175–179 (1984).
* (6) Ekert, A. K. Quantum cryptography based on Bell’s theorem. _Phys. Rev. Lett._ 67, 661–663 (1991).
* (7) Biamonte, J. _et al._ Quantum machine learning. _Nature_ 549, 195–202 (2017).
* (8) Bennett, C. H. _et al._ Teleporting an unknown quantum state via dual classical and einstein-podolsky-rosen channels. _Phys. Rev. Lett._ 70, 1895 (1993).
* (9) Bennett, C. H. & Wiesner, S. J. Communication via one-and two-particle operators on einstein-podolsky-rosen states. _Phys. Rev. Lett._ 69, 2881 (1992).
* (10) Plenio, M. B. & Virmani, S. S. An introduction to entanglement measures. _Quantum Information and Computation_ 7, 1–51 (2007).
* (11) Horodecki, R., Horodecki, P., Horodecki, M. & Horodecki, K. Quantum entanglement. _Rev. Mod. Phys._ 81, 865 (2009).
* (12) Preskill, J. Quantum Computing in the NISQ era and beyond. _Quantum_ 2, 79 (2018).
* (13) Chitambar, E., Leung, D., Mančinska, L., Ozols, M. & Winter, A. Everything You Always Wanted to Know About LOCC (But Were Afraid to Ask). _Commun. Math. Phys._ 328, 303–326 (2014).
* (14) LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. _Nature_ 521, 436–444 (2015).
* (15) Silver, D. _et al._ Mastering the game of Go with deep neural networks and tree search. _Nature_ 529, 484–489 (2016).
* (16) Jumper, J. _et al._ Highly accurate protein structure prediction with AlphaFold. _Nature_ 596, 583–589 (2021).
* (17) Mavadia, S., Frey, V., Sastrawan, J., Dona, S. & Biercuk, M. J. Prediction and real-time compensation of qubit decoherence via machine learning. _Nat. Commun._ 8, 14106 (2017).
* (18) Wan, K. H., Dahlsten, O., Kristjánsson, H., Gardner, R. & Kim, M. S. Quantum generalisation of feedforward neural networks. _npj Quantum Inf._ 3, 36 (2017).
* (19) Lu, D. _et al._ Enhancing quantum control by bootstrapping a quantum processor of 12 qubits. _npj Quantum Inf._ 3, 1–7 (2017).
* (20) Niu, M. Y., Boixo, S., Smelyanskiy, V. N. & Neven, H. Universal quantum control through deep reinforcement learning. _npj Quantum Inf._ 5, 33 (2019).
* (21) Wallnöfer, J., Melnikov, A. A., Dür, W. & Briegel, H. J. Machine Learning for Long-Distance Quantum Communication. _PRX Quantum_ 1, 010301 (2020).
* (22) Bausch, J. & Leditzky, F. Quantum codes from neural networks. _New J. Phys._ 22, 023005 (2020).
* (23) Benedetti, M., Lloyd, E., Sack, S. & Fiorentini, M. Parameterized quantum circuits as machine learning models. _Quantum Sci. Technol._ 4, 043001 (2019).
* (24) Bennett, C. H. _et al._ Purification of Noisy Entanglement and Faithful Teleportation via Noisy Channels. _Phys. Rev. Lett._ 76, 722–725 (1996).
* (25) Deutsch, D. _et al._ Quantum privacy amplification and the security of quantum cryptography over noisy channels. _Phys. Rev. Lett._ 77, 2818 (1996).
* (26) Murao, M., Plenio, M. B., Popescu, S., Vedral, V. & Knight, P. L. Multiparticle entanglement purification protocols. _Phys. Rev. A_ 57, R4075–R4078 (1998).
* (27) Dür, W. & Briegel, H. J. Entanglement purification and quantum error correction. _Rep. Prog. Phys._ 70, 1381–1424 (2007).
* (28) Pan, J.-W., Gasparoni, S., Ursin, R., Weihs, G. & Zeilinger, A. Experimental entanglement purification of arbitrary unknown states. _Nature_ 423, 417–422 (2003).
* (29) Devetak, I. & Winter, A. Distillation of secret key and entanglement from quantum states. _Proc. R. Soc. A._ 461, 207–235 (2005).
* (30) Bennett, C. H. _et al._ Quantum nonlocality without entanglement. _Phys. Rev. A_ 59, 1070–1091 (1999).
* (31) Walgate, J., Short, A. J., Hardy, L. & Vedral, V. Local distinguishability of multipartite orthogonal quantum states. _Phys. Rev. Lett._ 85, 4972 (2000).
* (32) Fan, H. Distinguishability and indistinguishability by local operations and classical communication. _Phys. Rev. Lett._ 92, 177905 (2004).
* (33) Hayashi, M., Markham, D., Murao, M., Owari, M. & Virmani, S. Bounds on multipartite entangled orthogonal state discrimination using local operations and classical communication. _Phys. Rev. Lett._ 96, 40501 (2006).
* (34) Ghosh, S., Kar, G., Roy, A. & Sarkar, D. Distinguishability of maximally entangled states. _Phys. Rev. A_ 70, 22304 (2004).
* (35) Nathanson, M. Distinguishing bipartitite orthogonal states using LOCC: Best and worst cases. _J. Math. Phys._ 46, 062103 (2005).
* (36) Duan, R., Feng, Y., Ji, Z. & Ying, M. Distinguishing arbitrary multipartite basis unambiguously using local operations and classical communication. _Phys. Rev. Lett._ 98, 230502 (2007).
* (37) Chitambar, E., Duan, R. & Hsieh, M.-H. When Do Local Operations and Classical Communication Suffice for Two-Qubit State Discrimination? _IEEE Trans. Inf. Theory_ 60, 1549–1561 (2014).
* (38) Duan, R., Feng, Y., Xin, Y. & Ying, M. Distinguishability of quantum states by separable operations. _IEEE Trans. Inf. Theory_ 55, 1320–1330 (2009).
* (39) Childs, A. M., Leung, D., Mančinska, L. & Ozols, M. A framework for bounding nonlocality of state discrimination. _Commun. Math. Phys._ 323, 1121–1153 (2013).
* (40) Li, Y., Wang, X. & Duan, R. Indistinguishability of bipartite states by positive-partial-transpose operations in the many-copy scenario. _Phys. Rev. A_ 95, 052346 (2017).
* (41) Bandyopadhyay, S. _et al._ Limitations on separable measurements by convex optimization. _IEEE Trans. Inf. Theory_ 61, 3593–3604 (2014).
* (42) Bennett, C. H., DiVincenzo, D. P., Smolin, J. A. & Wootters, W. K. Mixed-state entanglement and quantum error correction. _Phys. Rev. A_ 54, 3824–3851 (1996).
* (43) Bennett, C. H., Devetak, I., Harrow, A. W., Shor, P. W. & Winter, A. The Quantum Reverse Shannon Theorem and Resource Tradeoffs for Simulating Quantum Channels. _IEEE Trans. Inf. Theory_ 60, 2926–2959 (2014).
* (44) Berta, M., Brandao, F. G. S. L., Christandl, M. & Wehner, S. Entanglement Cost of Quantum Channels. _IEEE Trans. Inf. Theory_ 59, 6779–6795 (2013).
* (45) Pirandola, S., Laurenza, R., Ottaviani, C. & Banchi, L. Fundamental limits of repeaterless quantum communications. _Nat. Commun._ 8, 15043 (2017).
* (46) Wilde, M. M. Entanglement cost and quantum channel simulation. _Phys. Rev. A_ 98, 042338 (2018).
* (47) Wang, X. & Wilde, M. M. Exact entanglement cost of quantum states and channels under PPT-preserving operations. Preprint at http://arxiv.org/abs/1809.09592 (2018).
* (48) Fang, K., Wang, X., Tomamichel, M. & Berta, M. Quantum Channel Simulation and the Channel’s Smooth Max-Information. _IEEE Trans. Inf. Theory_ 66, 2129–2140 (2020).
* (49) Peruzzo, A. _et al._ A variational eigenvalue solver on a photonic quantum processor. _Nat. Commun._ 5, 4213 (2014).
* (50) Johnson, P. D., Romero, J., Olson, J., Cao, Y. & Aspuru-Guzik, A. QVECTOR: an algorithm for device-tailored quantum error correction. Preprint at http://arxiv.org/abs/1711.02249 (2017).
* (51) Cerezo, M. _et al._ Variational quantum algorithms. _Nat. Rev. Phys._ 3, 625–644 (2021).
* (52) Bharti, K. _et al._ Noisy intermediate-scale quantum (NISQ) algorithms. Preprint at http://arxiv.org/abs/2101.08448 (2021).
* (53) Endo, S., Cai, Z., Benjamin, S. C. & Yuan, X. Hybrid Quantum-Classical Algorithms and Quantum Error Mitigation. _J. Phys. Soc. Jpn._ 90, 032001 (2021).
* (54) Chen, R., Song, Z., Zhao, X. & Wang, X. Variational Quantum Algorithms for Trace Distance and Fidelity Estimation. Preprint at http://arxiv.org/abs/2012.05768 (2020).
* (55) Cerezo, M., Poremba, A., Cincio, L. & Coles, P. J. Variational Quantum Fidelity Estimation. _Quantum_ 4, 248 (2020).
* (56) Romero, J., Olson, J. P. & Aspuru-Guzik, A. Quantum autoencoders for efficient compression of quantum data. _Quantum Sci. Technol._ 2, 045001 (2017).
* (57) Cao, C. & Wang, X. Noise-Assisted Quantum Autoencoder. _Phys. Rev. Applied_ 15, 054012 (2021).
* (58) Paddle Quantum. https://github.com/PaddlePaddle/Quantum. (2020).
* (59) Ma, Y., Yu, D., Wu, T. & Wang, H. PaddlePaddle: An Open-Source Deep Learning Platform from Industrial Practice. _Frontiers of Data and Computing_ 1, 105–115 (2019).
* (60) PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice. https://github.com/paddlepaddle/paddle (2016).
* (61) Fujii, K. & Yamamoto, K. Entanglement purification with double selection. _Phys. Rev. A_ 80, 042308 (2009).
* (62) Kalb, N. _et al._ Entanglement distillation between solid-state quantum network nodes. _Science_ 356, 928–932 (2017).
* (63) Rozpedek, F. _et al._ Optimizing practical entanglement distillation. _Phys. Rev. A_ 97, 062333 (2018).
* (64) Krastanov, S., Albert, V. V. & Jiang, L. Optimized entanglement purification. _Quantum_ 3, 123 (2019).
* (65) Rains, E. M. A semidefinite program for distillable entanglement. _IEEE Trans. Inf. Theory_ 47, 2921–2933 (2000).
* (66) Matthews, W. & Winter, A. Pure-state transformations and catalysis under operations that completely preserve positivity of partial transpose. _Phys. Rev. A_ 78, 012317 (2008).
* (67) Wang, X. & Duan, R. Improved semidefinite programming upper bound on distillable entanglement. _Phys. Rev. A_ 94, 050301 (2016).
* (68) Fang, K., Wang, X., Tomamichel, M. & Duan, R. Non-asymptotic Entanglement Distillation. _IEEE Trans. Inf. Theory_ 65, 6454–6465 (2019).
* (69) Wang, X. & Duan, R. Nonadditivity of Rains’ bound for distillable entanglement. _Phys. Rev. A_ 95, 062322 (2017).
* (70) Audenaert, K., Plenio, M. B. & Eisert, J. Entanglement Cost under Positive-Partial-Transpose-Preserving Operations. _Phys. Rev. Lett._ 90, 027901 (2003).
* (71) Wang, X. & Duan, R. Irreversibility of Asymptotic Entanglement Manipulation Under Quantum Operations Completely Preserving Positivity of Partial Transpose. _Phys. Rev. Lett._ 119, 180506 (2017).
* (72) Regula, B., Fang, K., Wang, X. & Gu, M. One-shot entanglement distillation beyond local operations and classical communication. _New J. Phys._ 21, 103017 (2019).
* (73) Chitambar, E., de Vicente, J. I., Girard, M. W. & Gour, G. Entanglement manipulation beyond local operations and classical communication. _J. Math. Phys._ 61, 042201 (2020).
* (74) Wang, X., Fang, K. & Duan, R. Semidefinite Programming Converse Bounds for Quantum Communication. _IEEE Trans. Inf. Theory_ 65, 2583–2592 (2019).
* (75) Wang, X. & Wilde, M. M. Cost of Quantum Entanglement Simplified. _Phys. Rev. Lett._ 125, 040502 (2020).
* (76) Ruan, L., Dai, W. & Win, M. Z. Adaptive recurrence quantum entanglement distillation for two-Kraus-operator channels. _Phys. Rev. A_ 97, 052332 (2018).
* (77) Bae, J. & Kwek, L.-C. Quantum state discrimination and its applications. _J. Phys. A: Math. Theor._ 48, 083001 (2015).
* (78) Barnett, S. M. & Croke, S. Quantum state discrimination. _Adv. Opt. Photon._ 1, 238–278 (2009).
* (79) Li, K. Discriminating quantum states: the multiple Chernoff distance. _Ann. Statist._ 44, 1661–1679 (2016).
* (80) DiVincenzo, D. P., Leung, D. W. & Terhal, B. M. Quantum data hiding. _IEEE Trans. Inf. Theory_ 48, 580–598 (2002).
* (81) Gallego, R., Brunner, N., Hadley, C. & Acín, A. Device-independent tests of classical and quantum dimensions. _Phys. Rev. Lett._ 105, 230501 (2010).
* (82) Eldar, Y. A semidefinite programming approach to optimal unambiguous discrimination of quantum states. _IEEE Trans. Inf. Theory_ 49, 446–456 (2003).
* (83) Sun, X., Zhang, S., Feng, Y. & Ying, M. Mathematical nature of and a family of lower bounds for the success probability of unambiguous discrimination. _Phys. Rev. A_ 65, 44306 (2002).
* (84) Ježek, M., Řeháček, J. & Fiurášek, J. Finding optimal strategies for minimum-error quantum-state discrimination. _Phys. Rev. A_ 65, 60301 (2002).
* (85) Yu, N., Duan, R. & Ying, M. Distinguishability of Quantum States by Positive Operator-Valued Measures With Positive Partial Transpose. _IEEE Trans. Inf. Theory_ 60, 2069–2079 (2014).
* (86) Nielsen, M. A. & Chuang, I. L. _Quantum Computation and Quantum Information: 10th Anniversary Edition_ (Camb. Univ. Press, Cambridge, 2010).
* (87) Wilde, M. M. _Quantum Information Theory_ (Camb. Univ. Press, Cambridge, 2017).
* (88) Watrous, J. _The Theory of Quantum Information_ (Camb. Univ. Press, Cambridge, 2018).
* (89) Pirandola, S. _et al._ Theory of channel simulation and bounds for private communication. _Quantum Sci. Technol._ 3, 035009 (2018).
* (90) Pirandola, S. & Lupo, C. Ultimate Precision of Adaptive Noise Estimation. _Phys. Rev. Lett._ 118, 100502 (2017).
* (91) Pirandola, S. _et al._ Advances in quantum cryptography. _Adv. Opt. Photon._ 12, 1012 (2020).
* (92) Pirandola, S., Eisert, J., Weedbrook, C., Furusawa, A. & Braunstein, S. L. Advances in quantum teleportation. _Nat. Photonics_ 9, 641–652 (2015).
* (93) Horodecki, M., Horodecki, P. & Horodecki, R. General teleportation channel, singlet fraction, and quasidistillation. _Phys. Rev. A_ 60, 1888–1898 (1999).
* (94) Choi, M.-D. Completely positive linear maps on complex matrices. _Linear Algebra Appl._ 10, 285–290 (1975).
* (95) Chirolli, L. & Burkard, G. Decoherence in solid-state qubits. _Adv. Phys._ 57, 225–285 (2008).
* (96) Briegel, H.-J., Dür, W., Cirac, J. I. & Zoller, P. Quantum Repeaters: The Role of Imperfect Local Operations in Quantum Communication. _Phys. Rev. Lett._ 81, 5932–5935 (1998).
Supplemental Information: Practical distributed quantum information processing
with LOCCNet
## SUPPLEMENTARY NOTE 1: Details of LOCC
Preliminaries. We begin with the preliminaries on quantum information. We will
frequently use symbols such as $A$ (or $A^{\prime}$) and $B$ (or $B^{\prime}$)
to denote finite-dimensional Hilbert spaces associated with Alice and Bob,
respectively. We use $d_{A}$ to denote the dimension of system $A$. The set of
linear operators acting on $A$ is denoted by ${\cal L}(A)$. We usually write
an operator with a subscript indicating the system that the operator acts on,
such as $M_{AB}$, and write $M_{A}:=\operatorname{Tr}_{B}M_{AB}$.
A quantum state on system $A$ is a positive operator $\rho_{A}$ with unit
trace. The set of quantum states is denoted as $S(A):=\{\,\rho_{A}\geq
0\,|\,\operatorname{Tr}\rho_{A}=1\,\}$. We call
a positive operator separable if it can be written as a convex combination of
tensor product positive operators. A bipartite positive semidefinite operator
$E_{AB}\in{\cal L}(A\otimes B)$ is said to be Positive-Partial-Transpose (PPT)
if $E_{AB}^{T_{B}}$ is positive semidefinite. Note that the action of partial
transpose (with respect to $B$) is defined as $(|i_{A}\rangle\langle
k_{A}|\otimes|j_{B}\rangle\langle l_{B}|)^{T_{B}}=|i_{A}\rangle\langle
k_{A}|\otimes|l_{B}\rangle\langle j_{B}|$.
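This index-swap definition translates directly into a reshape-and-transpose on a density matrix. A short sketch (our own helpers, assuming a two-qubit system by default):

```python
import numpy as np

def partial_transpose(rho, dA=2, dB=2):
    """Partial transpose with respect to system B of a dA*dB state."""
    r = rho.reshape(dA, dB, dA, dB)  # indices [i, j, k, l] for |ij><kl|
    # T_B swaps the two B indices: |j><l| -> |l><j|
    return r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

def is_ppt(rho, tol=1e-9):
    """True if rho^{T_B} is positive semidefinite."""
    return np.linalg.eigvalsh(partial_transpose(rho)).min() >= -tol
```

For example, the ebit $|\Phi^{+}\rangle\\!\langle\Phi^{+}|$ fails the PPT test (its partial transpose has eigenvalue $-1/2$), certifying its entanglement, while the maximally mixed state passes it.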
LOCC. When a quantum system is distributed to spatially separated parties, it
is natural to consider how the system evolves when the parties perform local
quantum operations with classical communication. A systematic definition of
LOCC can be found in Chitambar2014 . Here, for self-consistency, we give a
detailed description of LOCC as follows.
Consider a setting involving multiple spatially separated parties sharing a
multipartite quantum system. The set $\text{LOCC}_{1}$ consists of the most
elementary LOCC operations corresponding to LOCC protocols with one classical
communication round, where one party performs a local operation and sends the
measurement outcome to others, who then perform corresponding local operations
on their local systems upon receiving the outcome. A local operation can be
described as a set of completely positive (CP) maps $\\{\mathcal{E}_{m}\\}$
such that $\sum_{m}\mathcal{E}_{m}$ is trace-preserving. The subscript $m$
corresponds to an operation’s measurement outcome, which could affect each
party’s choices of subsequent local operations. A more complicated LOCC
operation can be seen as a sequence of $\text{LOCC}_{1}$ operations.
Specifically, for any $r\geq 2$, $\text{LOCC}_{r}$ is defined to be a set of
LOCC operations, in which each operation is constructed from an
$\text{LOCC}_{r-1}$ operation followed by an $\text{LOCC}_{1}$ operation. A
common characteristic of these LOCC operations is that they can be implemented
with finite rounds of classical communication. Thus, we define a set
$\text{LOCC}_{\mathbb{N}}$, corresponding to finite round protocols, such that
an LOCC operation is in this set if it belongs to $\text{LOCC}_{r}$ for some
$r$ in $\mathbb{N}=\\{1,2,\dots\\}$. Besides these finite-round protocols,
infinite-round protocols also exist in theory. The infinite-round protocols,
together with the operations in $\text{LOCC}_{\mathbb{N}}$, form the set known
as LOCC.
LOCCNet is a machine learning framework developed for designing and exploring
LOCC protocols for various quantum information processing tasks. In the main
text, we give a brief introduction to this framework. Here, we give some
common types of LOCC protocols involving two parties, Alice and Bob, as
examples to explain how a protocol can be constructed and optimized using the
LOCCNet.
Optimizing one-round LOCC protocols. One-round LOCC protocols are protocols
having only one round of classical communication. An example is shown in Fig.
S1. An application of such a protocol is quantum state teleportation. To
optimize a one-round protocol with LOCCNet, we need to build and train three
PQCs, shown as a tree in Fig. S2. The PQC $U(\theta_{0})$ is used to optimize
Alice's local operation $U$, and PQCs $V_{0}(\theta_{1})$ and
$V_{1}(\theta_{2})$ are for Bob's local operation in the cases of Alice
measuring $0$ and $1$, respectively.
Figure S1: A circuit illustration of one-round LOCC. Alice first performs a
local operation and sends the measurement outcome to Bob. Bob then performs a
local operation accordingly. Figure S2: Tree structure of the LOCCNet used for
optimizing a one-round protocol.
Optimizing two-round LOCC protocols. A general two-round LOCC protocol
includes Alice performing a local operation and telling Bob her measurement
outcome, then Bob performing a corresponding local operation and telling Alice
his measurement outcome, and finally Alice performing another local operation.
Such a protocol is already a little complicated and optimizing such a protocol
requires seven PQCs. Here, we give two special types of two-round protocols
that are easier to train and have practical applications.
The first type of protocol is shown in Fig. S3 and is widely used for
entanglement distillation. In such a protocol, Alice and Bob first perform
local operations independently and then exchange their measurement outcomes
through classical communication to check whether the expected task is
completed. To optimize such a protocol, we only need to build two PQCs, one
for Alice’s local operation and one for Bob’s local operation.
Figure S3: A circuit illustration of a type of two-round LOCC where Alice and
Bob perform local operations independently before exchanging measurement
outcomes.
Another type of protocol is given in Fig. S4. In such a protocol, after Bob
obtains his measurement outcome and tells it to Alice, Alice does not need to
perform a local operation. An application of such a protocol is state
discrimination, as we show in the main text. Like training a one-round
protocol, optimizing a protocol of this type only requires three PQCs.
Figure S4: A circuit illustration of another type of two-round LOCC. In such a
protocol, Bob sending his measurement outcome to Alice is the last step.
## SUPPLEMENTARY NOTE 2: Analysis of entanglement distillation
The aim of entanglement distillation is to compensate for the impurity caused
by noise and restore a maximally entangled state at the cost of many noisy
entangled states. In this sense, one could also refer an entanglement
distillation protocol as a purification or error-correction protocol. The Bell
states are four two-qubit maximally entangled states defined as
$\displaystyle|\Phi^{\pm}\rangle=\frac{1}{\sqrt{2}}(|00\rangle\pm|11\rangle),\quad|\Psi^{\pm}\rangle=\frac{1}{\sqrt{2}}(|01\rangle\pm|10\rangle).$
(S1)
The state $|\Phi^{+}\rangle$ is also known as the entangled bit (ebit), and
entanglement distillation in two-qubit settings usually means to convert
copies of a state $\rho_{AB}$ shared by two parties, Alice and Bob, into a
state closer to the ebit. Here, closeness between the state $\rho_{AB}$ and
the ebit is usually measured in terms of the fidelity
$\displaystyle F=\langle\Phi^{+}|\rho_{AB}|\Phi^{+}\rangle.$ (S2)
A well known protocol for two-copy entanglement distillation is the DEJMPS
protocol, which is illustrated in Fig. S5. Sharing two copies of an initial
state, $\rho_{A_{0}B_{0}}$ and $\rho_{A_{1}B_{1}}$, both Alice and Bob first
apply $R_{x}$ gates and CNOT gates to their local qubits and then measure a
pair of qubits from the same copy. Finally, they exchange measurement outcomes
and output the unmeasured copy when their outcomes agree. Otherwise, the
distillation procedure fails.
Figure S5: The DEJMPS protocol for two-copy entanglement distillation.
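As a numerical cross-check of the analysis in this note, the DEJMPS circuit can be simulated directly on density matrices. The sketch below is our own reconstruction (all names are ours) with the conventions $R_{x}(\theta)=e^{-i\theta X/2}$ and CNOTs directed from the kept copy to the measured copy; for S-state inputs it reproduces the fidelity and success probability derived in Proposition S1.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
P = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]  # |0><0|, |1><1|

def rx(t):
    """Single-qubit rotation R_x(t) = exp(-i t X / 2)."""
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def embed(ops, n=4):
    """Tensor product placing 2x2 operators at the given qubit positions."""
    out = np.array([[1.0 + 0j]])
    for q in range(n):
        out = np.kron(out, ops.get(q, I2))
    return out

def cnot(c, t, n=4):
    """CNOT with control qubit c and target qubit t."""
    return embed({c: P[0]}, n) + embed({c: P[1], t: X}, n)

def dejmps_s_state(p):
    """Run DEJMPS on two copies of the S state; return (fidelity, p_succ)."""
    phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
    s_state = p * np.outer(phi, phi) + (1 - p) * np.diag([1.0, 0, 0, 0])
    # qubit order: (A0, B0, A1, B1); copy 0 is kept, copy 1 is measured
    rho = np.kron(s_state, s_state)
    G = (cnot(1, 3) @ cnot(0, 2)
         @ embed({0: rx(np.pi / 2), 2: rx(np.pi / 2),
                  1: rx(-np.pi / 2), 3: rx(-np.pi / 2)}))
    rho = G @ rho @ G.conj().T
    out, p_succ = np.zeros((4, 4), dtype=complex), 0.0
    for m in (0, 1):  # keep only matching outcomes on A1 and B1
        proj = embed({2: P[m], 3: P[m]})
        sel = proj @ rho @ proj
        p_succ += np.trace(sel).real
        r = sel.reshape(2, 2, 2, 2, 2, 2, 2, 2)
        out += np.einsum('abijcdij->abcd', r).reshape(4, 4)  # trace A1, B1
    out /= p_succ
    fidelity = (phi.conj() @ out @ phi).real
    return fidelity, p_succ
```

At $p=1$ the routine returns fidelity $1$ with certain success, and at $p=0$ it returns fidelity $1/2$, matching Eqs. (S6) and (S7) below.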
The DEJMPS protocol has been shown to be optimal in purifying two copies of
any Bell diagonal state of rank at most three rozpkedek2018optimizing ,
where a Bell diagonal state is a state of the form
$\displaystyle\rho_{AB}=p_{0}|\Phi^{+}\rangle\\!\langle\Phi^{+}|+p_{1}|\Psi^{+}\rangle\\!\langle\Psi^{+}|+p_{2}|\Phi^{-}\rangle\\!\langle\Phi^{-}|+p_{3}|\Psi^{-}\rangle\\!\langle\Psi^{-}|,$
(S3)
which is a convex combination of the four Bell states. For conciseness, we can
write such a Bell diagonal state as a $4$-tuple,
$\displaystyle\rho_{AB}=(p_{0},p_{1},p_{2},p_{3}).$ (S4)
The DEJMPS protocol can also distill some states besides Bell diagonal states,
like S states. In the following, we will analyze the performance of the DEJMPS
protocol on two copies of an S state and compare it with a protocol learned by
LOCCNet. After that, we will compare the DEJMPS protocol with another protocol
learned by LOCCNet for distilling four copies of an isotropic state, which is
a special Bell diagonal state.
S state. The S state is defined as a Bell state mixed with non-orthogonal
product noise,
$\displaystyle\rho_{AB}(p)=p|\Phi^{+}\rangle\langle\Phi^{+}|+(1-p)|00\rangle\langle
00|,$ (S5)
where $p\in[0,1]$. In the main text, we give expressions for the fidelity achieved
by the DEJMPS protocol and the protocol learned by LOCCNet for two copies of
an S state. Here, we give a detailed derivation of these two expressions.
###### Proposition S1
For two copies of an S state with parameter $p$, the DEJMPS protocol outputs a
state whose fidelity to the ebit is
$\displaystyle F=\frac{(1+p)^{2}}{2+2p^{2}}$ (S6)
with a probability of success
$\displaystyle p_{\text{succ}}=\frac{1+p^{2}}{2}.$ (S7)
###### Proof.
By its definition in Equation (S5), an S state $\rho$ with parameter $p$ can
be written in the matrix form as
$\displaystyle\rho=\begin{pmatrix}1-\frac{p}{2}&0&0&\frac{p}{2}\\\ 0&0&0&0\\\
0&0&0&0\\\ \frac{p}{2}&0&0&\frac{p}{2}\\\ \end{pmatrix}.$ (S8)
Applying the circuit in Fig. S5 to two copies of such a state, Alice and Bob
both get $0$ for measurement outcomes with a probability of
$p_{00}=(1+p^{2})/4$. By matrix calculation, we obtain the post-measurement
state of the unmeasured copy in this case as
$\displaystyle\sigma_{-}=\begin{pmatrix}\alpha&-\beta&-\beta&\alpha\\\
-\beta&\beta&\beta&-\beta\\\ -\beta&\beta&\beta&-\beta\\\
\alpha&-\beta&-\beta&\alpha\\\ \end{pmatrix},$ (S9)
where $\alpha=(1+p)^{2}/(4+4p^{2})$ and $\beta=(1-p)^{2}/(4+4p^{2})$. The
probability that Alice and Bob both get $1$ for measurement outcomes is
$p_{11}=(1+p^{2})/4$, and the post-measurement state in this case is
$\displaystyle\sigma_{+}=\begin{pmatrix}\alpha&\beta&\beta&\alpha\\\
\beta&\beta&\beta&\beta\\\ \beta&\beta&\beta&\beta\\\
\alpha&\beta&\beta&\alpha\\\ \end{pmatrix}.$ (S10)
According to the definition of fidelity, the fidelity of state $\sigma_{\pm}$
to the ebit is
$\displaystyle F$
$\displaystyle=\text{Tr}(\sigma_{\pm}|\Phi^{+}\rangle\langle\Phi^{+}|)=\frac{(1+p)^{2}}{2+2p^{2}}.$
(S11)
The probability of Alice and Bob arriving at state $\sigma_{\pm}$ is
$\displaystyle
p_{\text{succ}}=p_{00}+p_{11}=\frac{1+p^{2}}{4}+\frac{1+p^{2}}{4}=\frac{1+p^{2}}{2}.$
(S12)
$\sqcap$$\sqcup$
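The intermediate state (S9) can be checked for consistency on its own: it should have unit trace, be positive semidefinite, and reproduce the fidelity (S6). A small numpy check (ours, not code from the paper):

```python
import numpy as np

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)

def sigma_minus(p):
    # Post-measurement state of Eq. (S9) for the outcome 00.
    a = (1 + p) ** 2 / (4 + 4 * p ** 2)
    b = (1 - p) ** 2 / (4 + 4 * p ** 2)
    return np.array([[a, -b, -b, a],
                     [-b, b, b, -b],
                     [-b, b, b, -b],
                     [a, -b, -b, a]])

for p in (0.0, 0.25, 0.6, 1.0):
    s = sigma_minus(p)
    assert np.isclose(np.trace(s), 1.0)                    # unit trace
    assert np.linalg.eigvalsh(s).min() > -1e-12            # positive semidefinite
    F = phi_plus @ s @ phi_plus
    assert np.isclose(F, (1 + p) ** 2 / (2 + 2 * p ** 2))  # Eq. (S6)
```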
With LOCCNet, we are able to find a new protocol that achieves a higher
fidelity than the DEJMPS protocol when distilling two copies of an S state.
Indeed, we show in the main text that this protocol is optimal in the sense
that it achieves the highest possible fidelity. With some simplification, we
obtain the circuit shown in Fig. S6. Below, we analyze the performance
of this optimized protocol in Proposition S2.
[Circuit diagram: each party applies a CNOT between its two qubits and an $R_{y}(\theta)$ rotation before measuring; see the caption for $\theta$.]
Figure S6: The simplified circuit of a distillation protocol learned by
LOCCNet for two copies of some S state, $\rho_{A_{0}B_{0}}$ and
$\rho_{A_{1}B_{1}}$. The rotation angles of both $R_{y}$ gates are
$\theta=\arccos(1-p)+\pi$, which depends on the parameter $p$ of the S states
to be distilled.
###### Proposition S2
For two copies of an S state with parameter $p$, the protocol illustrated in
Fig. S6 outputs a state whose fidelity to the ebit is
$\displaystyle F=\frac{1+\sqrt{2p-p^{2}}}{2}$ (S13)
with probability $p_{\text{succ}}=p^{2}-\frac{p^{3}}{2}$ of success.
###### Proof.
The matrix form of an S state $\rho$ with parameter $p$ is given in Equation
(S8). Applying the circuit in Fig. S6 to two copies of such a state, Alice
and Bob both get $0$ for measurement outcomes with a probability of
$p_{00}=p^{2}-p^{3}/2$. By matrix calculation, we obtain the post-measurement
state of the unmeasured copy as
$\displaystyle\sigma=\begin{pmatrix}\frac{1}{2}&0&0&\frac{\sqrt{2p-p^{2}}}{2}\\\
0&0&0&0\\\ 0&0&0&0\\\ \frac{\sqrt{2p-p^{2}}}{2}&0&0&\frac{1}{2}\\\
\end{pmatrix}.$ (S14)
Note that the state $\sigma$ can be written as
$\displaystyle\sigma=\begin{pmatrix}\alpha+\beta&0&0&\alpha-\beta\\\
0&0&0&0\\\ 0&0&0&0\\\ \alpha-\beta&0&0&\alpha+\beta\\\
\end{pmatrix}=\alpha|\Phi^{+}\rangle\\!\langle\Phi^{+}|+\beta|\Phi^{-}\rangle\\!\langle\Phi^{-}|,$
(S15)
where $\alpha=(1+\sqrt{2p-p^{2}})/2$, $\beta=(1-\sqrt{2p-p^{2}})/2$, and
$|\Phi^{-}\rangle=1/\sqrt{2}(|00\rangle-|11\rangle)$. By the definition of
fidelity, we have
$\displaystyle F$
$\displaystyle=\text{Tr}(\sigma|\Phi^{+}\rangle\\!\langle\Phi^{+}|)=\text{Tr}((\alpha|\Phi^{+}\rangle\\!\langle\Phi^{+}|+\beta|\Phi^{-}\rangle\\!\langle\Phi^{-}|)|\Phi^{+}\rangle\\!\langle\Phi^{+}|)$
(S16) $\displaystyle=\alpha=\frac{1+\sqrt{2p-p^{2}}}{2}$ (S17)
since $\langle\Phi^{+}|\Phi^{+}\rangle=1$ and
$\langle\Phi^{-}|\Phi^{+}\rangle=0$, as $|\Phi^{-}\rangle$ is orthogonal to
$|\Phi^{+}\rangle$. The probability of Alice and Bob arriving at state
$\sigma$ is
$\displaystyle p_{\text{succ}}=p_{00}=p^{2}-\frac{p^{3}}{2}.$ (S18)
$\sqcap$$\sqcup$
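A quick numerical comparison of (S13) with the DEJMPS fidelity (S6), together with a consistency check of the output state (S14), can be done as follows (our sketch, not code from the paper):

```python
import numpy as np

def f_dejmps(p):      # Eq. (S6)
    return (1 + p) ** 2 / (2 + 2 * p ** 2)

def f_learned(p):     # Eq. (S13)
    return (1 + np.sqrt(2 * p - p ** 2)) / 2

def sigma_learned(p):  # Eq. (S14), the unmeasured copy after success
    r = np.sqrt(2 * p - p ** 2) / 2
    return np.array([[0.5, 0, 0, r],
                     [0, 0, 0, 0],
                     [0, 0, 0, 0],
                     [r, 0, 0, 0.5]])

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)
for p in np.linspace(0, 1, 101):
    # The learned protocol never does worse than DEJMPS on S states ...
    assert f_learned(p) >= f_dejmps(p) - 1e-12
    # ... and Eq. (S14) reproduces the claimed fidelity of Eq. (S13).
    assert np.isclose(phi_plus @ sigma_learned(p) @ phi_plus, f_learned(p))
```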
Isotropic state. A two-qubit isotropic state is of the form
$\displaystyle\rho_{AB}=p|\Phi^{+}\rangle\langle\Phi^{+}|+(1-p)\frac{I}{4},$
(S19)
where $p\in[0,1]$. Alternatively, one can write an isotropic state as a Bell
diagonal state
$\displaystyle\rho_{AB}=\left(\frac{1+3p}{4},\frac{1-p}{4},\frac{1-p}{4},\frac{1-p}{4}\right).$
(S20)
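The equivalence of (S19) and (S20) follows because the four Bell projectors sum to the identity, so the term $(1-p)I/4$ contributes $(1-p)/4$ to every Bell coefficient. A short numpy check (ours):

```python
import numpy as np

# Bell basis in the ordering (Phi+, Psi+, Phi-, Psi-) of Eq. (S4).
B = [np.array(v, dtype=float) / np.sqrt(2) for v in
     ([1, 0, 0, 1], [0, 1, 1, 0], [1, 0, 0, -1], [0, 1, -1, 0])]

def iso(p):            # Eq. (S19)
    return p * np.outer(B[0], B[0]) + (1 - p) * np.eye(4) / 4

def iso_bell(p):       # Eq. (S20)
    c = [(1 + 3 * p) / 4] + [(1 - p) / 4] * 3
    return sum(ci * np.outer(b, b) for ci, b in zip(c, B))

for p in (0.0, 0.3, 0.7, 1.0):
    assert np.allclose(iso(p), iso_bell(p))
```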
Distillation with the DEJMPS protocol. The DEJMPS protocol is known to achieve
a high fidelity when distilling two copies of a Bell diagonal state, and the
resulting state in the event of success is still a Bell diagonal state.
Specifically, the DEJMPS protocol’s circuit, excluding the measurements, acts
on a Bell diagonal state as a permutation of the Bell states’ coefficients.
For a Bell diagonal state
$\displaystyle\rho=p_{0}|\Phi^{+}\rangle\\!\langle\Phi^{+}|+p_{1}|\Psi^{+}\rangle\\!\langle\Psi^{+}|+p_{2}|\Phi^{-}\rangle\\!\langle\Phi^{-}|+p_{3}|\Psi^{-}\rangle\\!\langle\Psi^{-}|,$
(S21)
the operator $R_{x}(\pi/2)\otimes R_{x}(-\pi/2)$ maps it to another Bell
diagonal state
$\displaystyle\sigma=p_{0}|\Phi^{+}\rangle\\!\langle\Phi^{+}|+p_{1}|\Psi^{+}\rangle\\!\langle\Psi^{+}|+p_{3}|\Phi^{-}\rangle\\!\langle\Phi^{-}|+p_{2}|\Psi^{-}\rangle\\!\langle\Psi^{-}|.$
(S22)
As stated in Eq. (S22), a pair of $R_{x}(\pm\pi/2)$ gates transforms a Bell
diagonal state $(p_{0},p_{1},p_{2},p_{3})$ to another Bell diagonal state
$(p_{0},p_{1},p_{3},p_{2})$. Similarly, a pair of bilateral CNOT gates shown
in Fig. S5 acts on the tensor product of two Bell diagonal states as a
permutation of coefficients. The effect of the bilateral CNOT gates is
summarized as a Table in Bennett1996 . Specifically, for a pair of Bell
diagonal states $(a_{0},a_{1},a_{2},a_{3})$ and $(b_{0},b_{1},b_{2},b_{3})$,
applying the bilateral CNOT gates on the state
$\displaystyle(p_{0},p_{1},\dots,p_{14},p_{15})$
$\displaystyle=(a_{0},a_{1},a_{2},a_{3})\otimes(b_{0},b_{1},b_{2},b_{3})$
(S23) $\displaystyle\equiv(a_{0}b_{0},a_{0}b_{1},\dots,a_{3}b_{2},a_{3}b_{3})$
(S24)
results in a state
$\displaystyle\text{CNOT}(p_{0},p_{1},\dots,p_{14},p_{15})=(p_{0},p_{1},p_{10},p_{11},p_{5},p_{4},p_{15},p_{14},p_{8},p_{9},p_{2},p_{3},p_{13},p_{12},p_{7},p_{6}).$
(S25)
Although the coincidence measurement, referring to Alice and Bob getting
identical measurement outcomes, on a Bell diagonal state is not a Bell basis
permutation, the post-measurement state is still a Bell diagonal state. To be
specific, note that since only $00$ and $11$ are counted as valid results,
Bell states $|\Psi^{\pm}\rangle\\!\langle\Psi^{\pm}|$ are filtered out and
thus a Bell diagonal state $(p_{0},p_{1},p_{2},p_{3})$ collapses to
$(p_{0},0,p_{2},0)$ up to a normalization factor after the coincidence
measurement.
The final fidelity and the probability of success achieved by the DEJMPS
protocol can be derived by permuting coefficients in the Bell basis, and the
result is given in deutsch1996quantum . For completeness, we give a
derivation below.
###### Proposition S3 (deutsch1996quantum )
For two copies of a Bell diagonal state $(a_{0},a_{1},a_{2},a_{3})$, the
DEJMPS protocol outputs a state whose fidelity to the ebit is
$\displaystyle F=\frac{a_{0}^{2}+a_{3}^{2}}{p_{\text{succ}}},$ (S26)
where
$p_{\text{succ}}=a_{0}^{2}+a_{3}^{2}+a_{1}^{2}+a_{2}^{2}+2a_{0}a_{3}+2a_{1}a_{2}$
is the probability of success.
###### Proof.
After the first layer of $R_{x}$ gates, the input state
$(a_{0},a_{1},a_{2},a_{3})^{\otimes 2}$ becomes
$(a_{0},a_{1},a_{3},a_{2})^{\otimes 2}$ according to Eq. (S22). Then,
transformed by the layer of bilateral CNOT gates, the state
$(p_{0},p_{1},\dots,p_{14},p_{15})=(a_{0},a_{1},a_{3},a_{2})^{\otimes 2}$
becomes
$(p_{0},p_{1},p_{10},p_{11},p_{5},p_{4},p_{15},p_{14},p_{8},p_{9},p_{2},p_{3},p_{13},p_{12},p_{7},p_{6})$.
The coincidence measurement in the computational basis on the second copy
filters out $|\Psi^{\pm}\rangle\\!\langle\Psi^{\pm}|$ and the remaining state
is either $|00\rangle\\!\langle 00|$ or $|11\rangle\\!\langle 11|$. In either
case, the first copy becomes
$\displaystyle\sigma$
$\displaystyle=(p_{0}+p_{10},p_{5}+p_{15},p_{8}+p_{2},p_{13}+p_{7})$ (S27)
$\displaystyle=(a_{0}^{2}+a_{3}^{2},a_{1}^{2}+a_{2}^{2},a_{3}a_{0}+a_{0}a_{3},a_{2}a_{1}+a_{1}a_{2})$
(S28)
$\displaystyle=(a_{0}^{2}+a_{3}^{2},a_{1}^{2}+a_{2}^{2},2a_{0}a_{3},2a_{1}a_{2})$
(S29)
up to a normalization factor. The sum of all the unnormalized coefficients is
the probability of measuring $00$ or $11$, i.e.,
$\displaystyle
p_{\text{succ}}=a_{0}^{2}+a_{3}^{2}+a_{1}^{2}+a_{2}^{2}+2a_{0}a_{3}+2a_{1}a_{2}.$
(S30)
The normalized output state is $\sigma/p_{\text{succ}}$, and its fidelity
to the ebit, which is the coefficient before
$|\Phi^{+}\rangle\\!\langle\Phi^{+}|$, is
$\displaystyle F=\frac{a_{0}^{2}+a_{3}^{2}}{p_{\text{succ}}}.$ (S31)
$\sqcap$$\sqcup$
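The Bell-basis bookkeeping used in the proof, namely the swap (S22), the permutation (S25), and the coincidence filter, can be implemented directly on coefficient vectors. The following numpy sketch is ours; it verifies the closed form of Proposition S3 for a sample state:

```python
import numpy as np

PERM = [0, 1, 10, 11, 5, 4, 15, 14, 8, 9, 2, 3, 13, 12, 7, 6]  # Eq. (S25)

def dejmps_bell(a):
    """DEJMPS on two copies of a Bell diagonal state, via Bell-basis bookkeeping."""
    a = np.asarray(a, dtype=float)
    swapped = a[[0, 1, 3, 2]]                # R_x layer, Eq. (S22)
    joint = np.outer(swapped, swapped).ravel()
    joint = joint[PERM]                      # bilateral CNOTs, Eq. (S25)
    q = joint.reshape(4, 4)                  # q[i, j]: coefficient of Bell_i x Bell_j
    out = q[:, 0] + q[:, 2]                  # keep copy-2 components on Phi+/Phi-
    p_succ = out.sum()
    return out / p_succ, p_succ

# Check against the closed form of Proposition S3 for a sample state.
a = np.array([0.55, 0.2, 0.15, 0.1])
out, ps = dejmps_bell(a)
assert np.isclose(ps, a[0]**2 + a[3]**2 + a[1]**2 + a[2]**2
                  + 2 * a[0] * a[3] + 2 * a[1] * a[2])   # Eq. (S30)
assert np.isclose(out[0], (a[0]**2 + a[3]**2) / ps)       # Eq. (S31)
```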
Since the DEJMPS protocol is for distilling two copies of a Bell diagonal
state, to distill four copies of an isotropic state, we can follow these
steps. First, we divide them into two groups where each group consists of two
copies. Then we apply the DEJMPS protocol to both groups independently. In the
event of success, we will get two copies of a Bell diagonal state, to which we
apply the DEJMPS protocol again.
###### Proposition S4
For four copies of an isotropic state $((1+3p)/4,(1-p)/4,(1-p)/4,(1-p)/4)$,
the generalized DEJMPS protocol given above outputs a state whose fidelity to
the ebit is
$\displaystyle F=\frac{1+10p^{2}+8p^{3}+13p^{4}}{4+8p^{2}+20p^{4}}$ (S32)
with $p_{\text{succ}}=\frac{1}{8}\left(1+2p^{2}+5p^{4}\right)$ probability of
success.
###### Proof.
For two copies of the isotropic state to be distilled, the DEJMPS protocol
outputs a state
$\displaystyle\rho$
$\displaystyle=\left(\frac{(1+3p)^{2}+(1-p)^{2}}{16p^{\prime}_{\text{succ}}},\frac{(1-p)^{2}+(1-p)^{2}}{16p^{\prime}_{\text{succ}}},\frac{2(1+3p)(1-p)}{16p^{\prime}_{\text{succ}}},\frac{2(1-p)(1-p)}{16p^{\prime}_{\text{succ}}}\right)$
(S33)
$\displaystyle=\left(\frac{1+2p+5p^{2}}{8p^{\prime}_{\text{succ}}},\frac{1-2p+p^{2}}{8p^{\prime}_{\text{succ}}},\frac{1+2p-3p^{2}}{8p^{\prime}_{\text{succ}}},\frac{1-2p+p^{2}}{8p^{\prime}_{\text{succ}}}\right)$
(S34)
up to a normalization factor with a probability of success
$\displaystyle p^{\prime}_{\text{succ}}$
$\displaystyle=\frac{1}{16}\left((1+3p)^{2}+3(1-p)^{2}+2(1+3p)(1-p)+2(1-p)^{2}\right)=\frac{1+p^{2}}{2}.$
(S35)
Then, the probability of successful distillation for both groups is
$p_{\text{succ}}^{{}^{\prime}2}$. In that case, applying the DEJMPS protocol
to the resulting two copies of state $\rho$ gives a state whose fidelity to
the ebit is
$\displaystyle F$
$\displaystyle=\frac{(1+2p+5p^{2})^{2}+(1-2p+p^{2})^{2}}{64p_{\text{succ}}^{{}^{\prime}2}p^{\prime\prime}_{\text{succ}}}=\frac{1+10p^{2}+8p^{3}+13p^{4}}{8(1+p^{2})^{2}p^{\prime\prime}_{\text{succ}}}$
(S36)
with a probability of success
$\displaystyle
p^{\prime\prime}_{\text{succ}}=\frac{1+2p^{2}+5p^{4}}{2(1+p^{2})^{2}}.$ (S37)
Substituting $p^{\prime\prime}_{\text{succ}}$ into $F$, we have
$\displaystyle F=\frac{1+10p^{2}+8p^{3}+13p^{4}}{4+8p^{2}+20p^{4}}.$ (S38)
The success probability of the whole process is
$\displaystyle
p_{\text{succ}}=p_{\text{succ}}^{{}^{\prime}2}p^{\prime\prime}_{\text{succ}}=\frac{1}{8}\left(1+2p^{2}+5p^{4}\right).$
(S39)
$\sqcap$$\sqcup$
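The algebra above can be rechecked by iterating the single-round map of Eqs. (S29)-(S30) twice (our numpy sketch, not code from the paper):

```python
import numpy as np

def dejmps_bd(a):
    # Output coefficients and success probability of DEJMPS on two copies
    # of the Bell diagonal state a = (a0, a1, a2, a3); Eqs. (S29)-(S30).
    a0, a1, a2, a3 = a
    out = np.array([a0**2 + a3**2, a1**2 + a2**2, 2 * a0 * a3, 2 * a1 * a2])
    p = out.sum()
    return out / p, p

def four_copy_dejmps(p):
    iso = np.array([1 + 3 * p, 1 - p, 1 - p, 1 - p]) / 4
    r1, p1 = dejmps_bd(iso)      # round 1, applied to both pairs
    r2, p2 = dejmps_bd(r1)       # round 2, on the two surviving copies
    return r2[0], p1**2 * p2     # fidelity and total success probability

for p in (0.2, 0.5, 0.8):
    F, ps = four_copy_dejmps(p)
    assert np.isclose(F, (1 + 10*p**2 + 8*p**3 + 13*p**4)
                      / (4 + 8*p**2 + 20*p**4))            # Eq. (S38)
    assert np.isclose(ps, (1 + 2*p**2 + 5*p**4) / 8)       # Eq. (S39)
```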
The protocol found with LOCCNet. As we show in the main text, the DEJMPS
protocol does not fully exploit the resources encoded in four copies of an
isotropic state, and there is a protocol learned by LOCCNet that achieves a
higher fidelity.
[Circuit diagram: Alice's qubits $A_{0},\dots,A_{3}$ are coupled by CNOT gates and rotated by $R_{x}(+\pi/2)$ gates before measurement; see the caption.]
Figure S7: The simplified circuit of a protocol learned by LOCCNet for
entanglement distillation with four copies of some isotropic state. This
circuit only includes Alice’s operation, while Bob’s operation is identical to
Alice’s, except that the rotation angles of Bob’s $R_{x}$ gates are $-\pi/2$.
###### Proposition S5
For four copies of an isotropic state $\rho$ with parameter $p$, the protocol
illustrated in Fig. S7 outputs a state whose fidelity to the ebit is
$\displaystyle F=\frac{1-2p+9p^{2}}{4-8p+12p^{2}}$ (S40)
with a probability of success
$\displaystyle p_{\text{succ}}=\frac{1+4p^{3}+3p^{4}}{8}.$ (S41)
###### Proof.
Similar to the DEJMPS protocol, this optimized protocol consists of
$R_{x}(\pm\pi/2)$ gates, bilateral CNOT gates, and coincidence measurements in
the computational basis. Thus, the claimed fidelity and probability of success
can be derived by simulating the circuit shown in Fig. S7 as a permutation in the
Bell basis. Using similar techniques from the proof of Proposition S3, we
obtain the unnormalized state after three coincidence measurements in the
event of success, which is
$\displaystyle\sigma=\left(\frac{1}{32}(1+p)^{2}(1-2p+9p^{2}),\frac{1}{32}(1-p^{2})^{2},\frac{1}{32}(1-p^{2})^{2},\frac{1}{32}(1-p^{2})^{2}\right).$
(S42)
Then, adding up all the coefficients in $\sigma$, we obtain the probability of
success
$\displaystyle p_{\text{succ}}$
$\displaystyle=\frac{1}{32}(1+p)^{2}(1-2p+9p^{2})+\frac{3}{32}(1-p^{2})^{2}=\frac{1+4p^{3}+3p^{4}}{8}.$
(S43)
Meanwhile, the normalized $\sigma$’s fidelity to the ebit is
$\displaystyle F$
$\displaystyle=\frac{1}{32p_{\text{succ}}}(1+p)^{2}(1-2p+9p^{2})=\frac{1-2p+9p^{2}}{4-8p+12p^{2}}.$
(S44)
$\sqcap$$\sqcup$
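Comparing (S40) with the iterated-DEJMPS fidelity (S32) numerically shows the learned protocol ahead in the high-fidelity regime; for small $p$ the ordering can reverse, so the sketch below (ours) only asserts the comparison for $p\geq 0.4$:

```python
import numpy as np

def f_dejmps4(p):   # Eq. (S32), two rounds of DEJMPS on four copies
    return (1 + 10*p**2 + 8*p**3 + 13*p**4) / (4 + 8*p**2 + 20*p**4)

def f_loccnet4(p):  # Eq. (S40), the learned protocol
    return (1 - 2*p + 9*p**2) / (4 - 8*p + 12*p**2)

# In the high-fidelity regime the learned protocol wins; for small p the
# iterated DEJMPS can be higher, so only p >= 0.4 is checked here.
for p in np.linspace(0.4, 1.0, 61):
    assert f_loccnet4(p) >= f_dejmps4(p) - 1e-12
```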
PPT bound. As the mathematical structure of LOCC is complex and difficult to
characterize Chitambar2014 , we may consider larger but mathematically more
tractable classes of operations. The operations most frequently employed
beyond LOCC are the PPT operations, which completely preserve the positivity
of the partial transpose Rains2001 . A bipartite quantum operation $\Pi_{AB\to
A^{\prime}B^{\prime}}$ is called a PPT operation if its Choi-Jamiołkowski
matrix $J_{\Pi}=\sum_{i,j,m,k}|i_{A}j_{B}\rangle\langle
m_{A}k_{B}|\otimes\Pi(|i_{A}j_{B}\rangle\langle m_{A}k_{B}|)$ is positive
under partial transpose across the bipartition of
$AA^{\prime}\mathrel{\mathop{\mathchar 58\relax}}BB^{\prime}$, where
$\\{|i_{A}\rangle\\}$ and $\\{|j_{B}\rangle\\}$ are orthonormal bases on
Hilbert spaces $A$ and $B$, respectively.
The entanglement theory under PPT operations has been extensively studied in
the literature (e.g., Audenaert2003 ; Wang2016d ; Matthews2008 ; Wang2020c ;
Regula2019 ; Chitambar2017 ) and yields fundamental limits on LOCC. In
particular, the limit of finite-copy entanglement distillation was recently
explored in Fang2017 ; rozpkedek2018optimizing ; Regula2019 . In the
following, we compare our results with the PPT bound from
rozpkedek2018optimizing , which gives the fundamental limits on the fidelity
of distillation with given success probability. To be specific, the maximal
fidelity of distilling a $D$-dimensional maximally entangled state from $\rho$ with fixed
success probability $\delta$ using PPT operations rozpkedek2018optimizing is
given by
$\begin{array}[]{ll}\operatorname{maximize}&\frac{d_{A}d_{B}}{\delta}\operatorname{Tr}\rho_{AB}^{T}M_{AB}\\\
\text{ subject to }&M_{AB}\geqslant 0,\quad E_{AB}\geqslant 0,\\\
&M_{AB}+E_{AB}\leqslant\frac{\mathbb{I}_{AB}}{d_{A}d_{B}},\\\
&M_{AB}^{T_{B}}+E_{AB}^{T_{B}}\leqslant\frac{\mathbb{I}_{AB}}{d_{A}d_{B}},d_{A}d_{B}\operatorname{Tr}\left[\rho_{AB}^{T}\left(M_{AB}+E_{AB}\right)\right]=\delta,\\\
&M_{AB}^{T_{B}}+\frac{1}{D+1}E_{AB}^{T_{B}}\geqslant
0,-M_{AB}^{T_{B}}+\frac{1}{D-1}E_{AB}^{T_{B}}\geqslant 0,\end{array}$ (S45)
where $d_{A},d_{B}$ are the dimensions of systems $A$ and $B$, respectively.
Recall that $\rho_{AB}$ is the initial input state that Alice and Bob are
attempting to distill and in most examples considered here, it will consist of
two copies of some two-qubit state.
## SUPPLEMENTARY NOTE 3: Analysis of LOCC state discrimination
To explore the power of LOCCNet in state discrimination, we focus on the
optimal success probability of discriminating noiseless and noisy Bell states
via LOCC. In the following, we present the LOCC protocol from Walgate2000 for
discriminating two Bell states. After that, we show how to distinguish one
Bell state from one noisy Bell state using the protocol learned via LOCCNet
and compare it with the protocol of the noiseless case.
Noiseless case. Consider two Bell states, $|\Phi^{+}\rangle$ and
$|\Phi^{-}\rangle$. Since these two states are pure and orthogonal to each
other, there exists an LOCC protocol that can perfectly distinguish between
them Walgate2000 . Here, we give a specific discrimination protocol for these
two Bell states. Suppose Alice and Bob share a two-qubit state $\rho_{AB}$,
which could be either $|\Phi^{+}\rangle\\!\langle\Phi^{+}|$ or
$|\Phi^{-}\rangle\\!\langle\Phi^{-}|$. To find out which state it is through
LOCC, they can follow the steps below. First, Alice applies a $R_{y}(\pi/2)$
gate on her qubit followed by a measurement. Then, Alice tells Bob her
measurement outcome through classical communication. Receiving the measurement
outcome from Alice, Bob applies on his qubit a $R_{y}$ gate with the rotation
angle $\theta$ being $\pi/2$ or $-\pi/2$, corresponding to the case where the
communicated measurement outcome is $0$ or $1$, respectively. Finally, Bob
measures his qubit. If he gets $0$, then he can be sure that the state
$\rho_{AB}$ is $|\Phi^{+}\rangle\\!\langle\Phi^{+}|$. Otherwise,
$\rho_{AB}=|\Phi^{-}\rangle\\!\langle\Phi^{-}|$. The whole process is also
illustrated with a circuit shown in Fig. S8, which can perfectly discriminate
$|\Phi^{+}\rangle$ and $|\Phi^{-}\rangle$.
[Circuit diagram: Alice applies $R_{y}(\pi/2)$ and measures; Bob applies $R_{y}(\theta)$ conditioned on her outcome and measures.]
Figure S8: The LOCC protocol distinguishing between the pair of noiseless Bell
states $|\Phi^{+}\rangle$ and $|\Phi^{-}\rangle$. The rotation angle $\theta$
of Bob’s $R_{y}$ gate is either $\pi/2$ or $-\pi/2$, depending on Alice’s
measurement outcome.
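The protocol can be verified to be deterministic by direct simulation. In the numpy sketch below (ours; $R_{y}(\theta)=\exp(-i\theta Y/2)$ is the assumed convention), Bob's outcome identifies the state with certainty:

```python
import numpy as np

I2 = np.eye(2)
P = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]

def ry(t):
    # R_y(t) = exp(-i t Y / 2), the convention assumed here.
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

phi = {0: np.array([1, 0, 0, 1]) / np.sqrt(2),   # |Phi+>
       1: np.array([1, 0, 0, -1]) / np.sqrt(2)}  # |Phi->

def discriminate(state):
    """Return Bob's outcome distribution {b: prob} for the Fig. S8 protocol."""
    probs = {0: 0.0, 1: 0.0}
    psi = np.kron(ry(np.pi / 2), I2) @ phi[state]     # Alice rotates
    for a in (0, 1):                                   # Alice's outcome
        v = np.kron(P[a], I2) @ psi
        theta = np.pi / 2 if a == 0 else -np.pi / 2    # Bob's conditional angle
        w = np.kron(I2, ry(theta)) @ v
        for b in (0, 1):
            u = np.kron(I2, P[b]) @ w
            probs[b] += u @ u
    return probs

# Outcome b = 0 certifies |Phi+> and b = 1 certifies |Phi->, deterministically.
assert np.isclose(discriminate(0)[0], 1.0)
assert np.isclose(discriminate(1)[1], 1.0)
```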
Noisy case. Quantum noise unavoidably occurs in quantum information
processing. One common noise of theoretical and experimental interest is the
amplitude damping channel Nielsen2010 , which is one of the realistic sources
of noise in superconducting quantum processors Chirolli2018 . To be specific,
an amplitude damping (AD) channel $\mathcal{A}$ with noise parameter $\gamma$
acts as $\mathcal{A}(\rho)=E_{0}\rho E_{0}^{\dagger}+E_{1}\rho
E_{1}^{\dagger}$ with $E_{0}=|0\rangle\langle
0|+\sqrt{1-\gamma}|1\rangle\langle 1|$ and
$E_{1}=\sqrt{\gamma}|0\rangle\langle 1|$. If $|\Phi^{-}\rangle$ is affected by
the amplitude damping noise on each qubit, then the resulting state is
$\mathcal{A}\otimes\mathcal{A}(|\Phi^{-}\rangle\langle\Phi^{-}|)$. The goal is
now to distinguish between $\Phi_{0}\equiv|\Phi^{+}\rangle\langle\Phi^{+}|$
and
$\Phi_{1}\equiv\mathcal{A}\otimes\mathcal{A}(|\Phi^{-}\rangle\langle\Phi^{-}|)$.
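The channel and the resulting noisy state can be constructed explicitly. The following numpy sketch (ours) checks that the Kraus operators are trace preserving and that $\mathcal{A}\otimes\mathcal{A}(|\Phi^{-}\rangle\langle\Phi^{-}|)$ matches the matrix form used later in the proof of Proposition S6:

```python
import numpy as np

def ad_kraus(g):
    # Kraus operators of the amplitude damping channel with parameter g.
    E0 = np.array([[1, 0], [0, np.sqrt(1 - g)]])
    E1 = np.array([[0, np.sqrt(g)], [0, 0]])
    return E0, E1

def two_qubit_ad(rho, g):
    # Apply the AD channel independently to each qubit of a two-qubit state.
    K = ad_kraus(g)
    return sum(np.kron(Ei, Ej) @ rho @ np.kron(Ei, Ej).T
               for Ei in K for Ej in K)

phi_minus = np.array([1, 0, 0, -1]) / np.sqrt(2)
for g in (0.0, 0.3, 0.8):
    E0, E1 = ad_kraus(g)
    assert np.allclose(E0.T @ E0 + E1.T @ E1, np.eye(2))  # trace preserving
    out = two_qubit_ad(np.outer(phi_minus, phi_minus), g)
    expected = 0.5 * np.array([[1 + g**2, 0, 0, g - 1],
                               [0, g - g**2, 0, 0],
                               [0, 0, g - g**2, 0],
                               [g - 1, 0, 0, (1 - g)**2]])  # Eq. (S56)
    assert np.allclose(out, expected)
```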
PPT bound. The distinguishability of quantum states under PPT (positive
partial transpose) POVMs was introduced in Yu2014a to better understand the
fundamental limits of the local distinguishability of quantum states. To be
specific, the PPT POVM used for distinguishing a set of $n$ orthogonal quantum
states $\\{\rho_{1},\dots,\rho_{n}\\}$ can be defined as an $n$-tuple of
operators, $(M_{k})_{k=1,\dots,n}$, where $M_{k}$ is PPT for $k=1,\dots,n$ and
$\sum_{k=1}^{n}M_{k}=I_{AB}$. The set of PPT POVMs enjoys a more tractable
mathematical structure than the LOCC POVMs due to the SDP characterization of
the PPT condition.
The optimal success probability of discriminating a collection of quantum
states $\\{\rho_{1},\cdots,\rho_{K}\\}$ using PPT POVMs is given by
$\begin{split}p_{s}(\rho_{1},\cdots,\rho_{K})=\max\
&\frac{1}{K}\sum_{j=1}^{K}\operatorname{Tr}(M_{j}\rho_{j})\\\ \text{s.t.}\
&\sum_{j=1}^{K}M_{j}=I,0\leq M_{j}\leq I,\forall j=1,2,\cdots,K,\\\
&M_{j}^{T_{B}}\geq 0,\forall j=1,2,\cdots,K.\end{split}$ (S46)
where we assume that each state in this collection appears with equal
probability. As the LOCC POVMs are a proper subset of the PPT POVMs, the above
SDP gives an upper bound on the optimal success probability of discriminating
a collection of quantum states via LOCC.
Optimized LOCC protocol. While the PPT bound serves as an upper bound to the
success probability of LOCC discrimination, an optimal LOCC protocol may not
necessarily reach the bound. Here, we present an LOCC protocol optimized by
LOCCNet that achieves a success probability close to the PPT bound.
The only difference between this optimized protocol and the protocol for
noiseless discrimination is that the rotation angle $\theta$ of Bob’s $R_{y}$
gate is not fixed in the noisy case, as shown in Fig. S9. Specifically, for
Alice’s measurement outcome being $0$,
$\displaystyle\theta=\pi-\arctan\left(\frac{2-\gamma}{\gamma}\right),$ (S47)
where $\gamma$ is the noise parameter of the AD channel $\mathcal{A}$ that
$|\Phi^{-}\rangle\\!\langle\Phi^{-}|$ goes through. For Alice’s measurement
outcome being $1$,
$\displaystyle\theta=-\pi+\arctan\left(\frac{2-\gamma}{\gamma}\right).$ (S48)
[Circuit diagram: Alice applies $R_{y}(\pi/2)$ and measures; Bob applies $R_{y}(\theta)$ conditioned on her outcome and measures.]
Figure S9: The optimized LOCC protocol for distinguishing between states
$\Phi_{0}$ and $\Phi_{1}$. Depending on Alice’s measurement outcome, the
rotation angle $\theta$ of Bob’s $R_{y}$ gate is either $\pi-\arctan(\alpha)$
or $\arctan(\alpha)-\pi$, where $\alpha=(2-\gamma)/\gamma$ and $\gamma$ is the
noise parameter of the AD channel.
###### Proposition S6
For states $\Phi_{0}=|\Phi^{+}\rangle\\!\langle\Phi^{+}|$ and
$\Phi_{1}=\mathcal{A}\otimes\mathcal{A}(|\Phi^{-}\rangle\\!\langle\Phi^{-}|)$,
the optimized protocol illustrated in Fig. S9 discriminates between them with
an average probability of
$\displaystyle
p_{\text{succ}}=\frac{1}{2}+\frac{\sqrt{2-2\gamma+\gamma^{2}}}{2\sqrt{2}}.$
(S49)
###### Proof.
We consider this protocol in two cases.
Case 1. The input state is $\Phi_{0}=|\Phi^{+}\rangle\\!\langle\Phi^{+}|$. For
this case, after Alice applies $R_{y}(\pi/2)$ to her qubit, state
$|\Phi^{+}\rangle$ becomes
$\displaystyle R_{y}\left(\frac{\pi}{2}\right)\otimes
I|\Phi^{+}\rangle=\frac{1}{2}|0\rangle\otimes(|0\rangle-|1\rangle)+\frac{1}{2}|1\rangle\otimes(|0\rangle+|1\rangle).$
(S50)
Measuring the first qubit of this state, Alice has $50\%$ chance of getting
$0$ with the normalized post-measurement state being
$|\psi_{0}\rangle=1/\sqrt{2}|0\rangle\otimes(|0\rangle-|1\rangle)$ and $50\%$
chance of getting $1$ with state
$|\psi_{1}\rangle=1/\sqrt{2}|1\rangle\otimes(|0\rangle+|1\rangle)$. The
resulting states of the second qubit after Bob’s operation are given below,
where $\theta_{0}=\pi-\arctan((2-\gamma)/\gamma)$ and
$\theta_{1}=-\pi+\arctan((2-\gamma)/\gamma)$.
$\displaystyle R_{y}(\theta_{0})\frac{1}{\sqrt{2}}(|0\rangle-|1\rangle)$
$\displaystyle=\frac{1}{\sqrt{2}}\left(\left(\cos\frac{\theta_{0}}{2}+\sin\frac{\theta_{0}}{2}\right)|0\rangle-\left(\cos\frac{\theta_{0}}{2}-\sin\frac{\theta_{0}}{2}\right)|1\rangle\right);$
(S51) $\displaystyle R_{y}(\theta_{1})\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)$
$\displaystyle=\frac{1}{\sqrt{2}}\left(\left(\cos\frac{\theta_{1}}{2}-\sin\frac{\theta_{1}}{2}\right)|0\rangle+\left(\cos\frac{\theta_{1}}{2}+\sin\frac{\theta_{1}}{2}\right)|1\rangle\right).$
(S52)
Then, the probability of Bob’s measurement outcome being $0$ given $\Phi_{0}$
as the input state is
$\displaystyle P(0|\Phi_{0})$
$\displaystyle=\frac{1}{2}\left|\frac{1}{\sqrt{2}}\left(\cos\frac{\theta_{0}}{2}+\sin\frac{\theta_{0}}{2}\right)\right|^{2}+\frac{1}{2}\left|\frac{1}{\sqrt{2}}\left(\cos\frac{\theta_{1}}{2}-\sin\frac{\theta_{1}}{2}\right)\right|^{2}=\frac{1}{2}+\frac{\sin\theta_{0}-\sin\theta_{1}}{4}.$
(S53)
Since $\theta_{1}=-\theta_{0}$, we have $\sin\theta_{1}=-\sin\theta_{0}$ and
$\displaystyle P(0|\Phi_{0})$
$\displaystyle=\frac{1}{2}+\frac{\sin\theta_{0}+\sin\theta_{0}}{4}=\frac{1+\sin\theta_{0}}{2}.$
(S54)
Let $\alpha\equiv(2-\gamma)/\gamma$. Then, $\theta_{0}=\pi-\arctan\alpha$ and
$\displaystyle P(0|\Phi_{0})$
$\displaystyle=\frac{1+\sin(\pi-\arctan\alpha)}{2}=\frac{1+\sin(\arctan\alpha)}{2}=\frac{1}{2}+\frac{\alpha}{2\sqrt{1+\alpha^{2}}}.$
(S55)
Case 2. The input state is
$\Phi_{1}=\mathcal{A}\otimes\mathcal{A}(|\Phi^{-}\rangle\\!\langle\Phi^{-}|)$.
If the noise parameter of the AD channel $\mathcal{A}$ is $\gamma$, then the
input state in matrix form is
$\displaystyle\Phi_{1}=\frac{1}{2}\begin{pmatrix}1+\gamma^{2}&0&0&\gamma-1\\\
0&\gamma-\gamma^{2}&0&0\\\ 0&0&\gamma-\gamma^{2}&0\\\
\gamma-1&0&0&(1-\gamma)^{2}\end{pmatrix}.$ (S56)
After Alice applies $R_{y}(\pi/2)$ to her qubit, the input state becomes
$\displaystyle R_{y}\left(\frac{\pi}{2}\right)\otimes
I\Phi_{1}R_{y}^{\dagger}\left(\frac{\pi}{2}\right)\otimes
I=\frac{1}{4}\begin{pmatrix}1+\gamma&1-\gamma&1-\gamma+2\gamma^{2}&\gamma-1\\\
1-\gamma&1-\gamma&1-\gamma&-1+3\gamma-2\gamma^{2}\\\
1-\gamma+2\gamma^{2}&1-\gamma&1+\gamma&\gamma-1\\\
\gamma-1&-1+3\gamma-2\gamma^{2}&\gamma-1&1-\gamma\end{pmatrix}.$ (S57)
From this state, we can see that when Alice measures her qubit, she has $50\%$
chance of getting $0$ with this state collapsing to
$\displaystyle\rho_{0}=|0\rangle\\!\langle
0|\otimes\frac{1}{2}\begin{pmatrix}1+\gamma&1-\gamma\\\
1-\gamma&1-\gamma\end{pmatrix}.$ (S58)
If Alice gets $1$ instead, this state collapses to
$\displaystyle\rho_{1}=|1\rangle\\!\langle
1|\otimes\frac{1}{2}\begin{pmatrix}1+\gamma&\gamma-1\\\
\gamma-1&1-\gamma\end{pmatrix}.$ (S59)
The resulting states of the second qubit after Bob’s operation are given
below, where $\theta_{0}=\pi-\arctan((2-\gamma)/\gamma)$ and
$\theta_{1}=-\pi+\arctan((2-\gamma)/\gamma)$.
$\displaystyle R_{y}(\theta_{0})\frac{1}{2}\begin{pmatrix}1+\gamma&1-\gamma\\\
1-\gamma&1-\gamma\end{pmatrix}R_{y}^{\dagger}(\theta_{0})$
$\displaystyle=\frac{1}{2}\begin{pmatrix}1+\gamma\cos\theta_{0}+(\gamma-1)\sin\theta_{0}&(1-\gamma)\cos\theta_{0}+\gamma\sin\theta_{0}\\\
(1-\gamma)\cos\theta_{0}+\gamma\sin\theta_{0}&1-\gamma\cos\theta_{0}+(1-\gamma)\sin\theta_{0}\end{pmatrix};$
(S60) $\displaystyle
R_{y}(\theta_{1})\frac{1}{2}\begin{pmatrix}1+\gamma&\gamma-1\\\
\gamma-1&1-\gamma\end{pmatrix}R_{y}^{\dagger}(\theta_{1})$
$\displaystyle=\frac{1}{2}\begin{pmatrix}1+\gamma\cos\theta_{1}+(1-\gamma)\sin\theta_{1}&(\gamma-1)\cos\theta_{1}+\gamma\sin\theta_{1}\\\
(\gamma-1)\cos\theta_{1}+\gamma\sin\theta_{1}&1-\gamma\cos\theta_{1}+(\gamma-1)\sin\theta_{1}\end{pmatrix}.$
(S61)
Then, the probability of Bob’s measurement outcome being $1$ given $\Phi_{1}$
as the input state is
$\displaystyle P(1|\Phi_{1})$
$\displaystyle=\frac{1}{2}\cdot\frac{1-\gamma\cos\theta_{0}+(1-\gamma)\sin\theta_{0}}{2}+\frac{1}{2}\cdot\frac{1-\gamma\cos\theta_{1}+(\gamma-1)\sin\theta_{1}}{2}$
(S62)
$\displaystyle=\frac{2-\gamma(\cos\theta_{0}+\cos\theta_{1})+(1-\gamma)(\sin\theta_{0}-\sin\theta_{1})}{4}.$
(S63)
Since $\theta_{1}=-\theta_{0}$, we have $\sin\theta_{1}=-\sin\theta_{0}$,
$\cos\theta_{1}=\cos\theta_{0}$, and thus
$\displaystyle P(1|\Phi_{1})$
$\displaystyle=\frac{2-\gamma(\cos\theta_{0}+\cos\theta_{0})+(1-\gamma)(\sin\theta_{0}+\sin\theta_{0})}{4}$
(S64)
$\displaystyle=\frac{1-\gamma\cos\theta_{0}+(1-\gamma)\sin\theta_{0}}{2}.$
(S65)
Let $\alpha\equiv(2-\gamma)/\gamma$. Then $\theta_{0}=\pi-\arctan\alpha$ and
$\displaystyle
P(1|\Phi_{1})=\frac{1}{2}+\frac{\gamma+(1-\gamma)\alpha}{2\sqrt{1+\alpha^{2}}}.$
(S66)
Combining these two cases, we can obtain this optimized protocol’s average
probability of success as
$\displaystyle p_{\text{succ}}$
$\displaystyle=\frac{1}{2}P(0|\Phi_{0})+\frac{1}{2}P(1|\Phi_{1})=\frac{1}{2}\left(\frac{1}{2}+\frac{\alpha}{2\sqrt{1+\alpha^{2}}}+\frac{1}{2}+\frac{\gamma+(1-\gamma)\alpha}{2\sqrt{1+\alpha^{2}}}\right)$
(S67)
$\displaystyle=\frac{1}{2}\left(1+\frac{\gamma+(2-\gamma)\alpha}{2\sqrt{1+\alpha^{2}}}\right)=\frac{1}{2}+\frac{\sqrt{2-2\gamma+\gamma^{2}}}{2\sqrt{2}}.$
(S68)
$\sqcap$$\sqcup$
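Both cases of the proof can be cross-checked by simulating the Fig. S9 circuit on density matrices. The numpy sketch below is ours, with the convention $R_{y}(\theta)=\exp(-i\theta Y/2)$ assumed; it reproduces the closed form (S49):

```python
import numpy as np

I2 = np.eye(2)
P = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

def ad2(rho, g):
    # Amplitude damping with parameter g applied to each qubit.
    E0 = np.array([[1, 0], [0, np.sqrt(1 - g)]])
    E1 = np.array([[0, np.sqrt(g)], [0, 0]])
    return sum(np.kron(A, B) @ rho @ np.kron(A, B).T
               for A in (E0, E1) for B in (E0, E1))

def bob_outcome_probs(rho, g):
    # Fig. S9: Alice applies R_y(pi/2) and measures; Bob rotates conditionally.
    alpha = (2 - g) / g
    theta = {0: np.pi - np.arctan(alpha),      # Eq. (S47)
             1: -np.pi + np.arctan(alpha)}     # Eq. (S48)
    R = np.kron(ry(np.pi / 2), I2)
    rho = R @ rho @ R.T
    probs = {0: 0.0, 1: 0.0}
    for a in (0, 1):
        Ma = np.kron(P[a], I2)
        sub = Ma @ rho @ Ma
        U = np.kron(I2, ry(theta[a]))
        sub = U @ sub @ U.T
        for b in (0, 1):
            probs[b] += np.trace(np.kron(I2, P[b]) @ sub).real
    return probs

v_p = np.array([1, 0, 0, 1]) / np.sqrt(2)
v_m = np.array([1, 0, 0, -1]) / np.sqrt(2)
phi_p, phi_m = np.outer(v_p, v_p), np.outer(v_m, v_m)
for g in (0.2, 0.5, 0.9):
    p0 = bob_outcome_probs(phi_p, g)[0]          # P(0 | Phi_0)
    p1 = bob_outcome_probs(ad2(phi_m, g), g)[1]  # P(1 | Phi_1)
    p_succ = (p0 + p1) / 2
    assert np.isclose(p_succ,
                      0.5 + np.sqrt(2 - 2 * g + g**2) / (2 * np.sqrt(2)))  # Eq. (S49)
```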
# Computer simulation of surgical interventions
for the treatment of refractory pulmonary hypertension
Seong Woo Han Courant Institute of Mathematical Sciences, New York University
Department of Bioengineering, University of Pennsylvania Charles Puelz
Courant Institute of Mathematical Sciences, New York University Department of
Pediatrics, Section of Cardiology, Baylor College of Medicine and Texas
Children’s Hospital Craig G. Rusin Department of Pediatrics, Section of
Cardiology, Baylor College of Medicine and Texas Children’s Hospital
Daniel J. Penny Department of Pediatrics, Section of Cardiology, Baylor
College of Medicine and Texas Children’s Hospital Ryan Coleman Department of
Pediatrics, Section of Critical Care Medicine, Baylor College of Medicine and
Texas Children’s Hospital Charles S. Peskin Courant Institute of
Mathematical Sciences, New York University
###### Abstract
This paper describes computer models of three interventions used for treating
refractory pulmonary hypertension (RPH). These procedures create either an
atrial septal defect, a ventricular septal defect, or, in the case of a Potts
shunt, a patent ductus arteriosus. The aim in all three cases is to generate a
right-to-left shunt, allowing for either pressure or volume unloading of the
right side of the heart in the setting of right ventricular failure, while
maintaining cardiac output. These shunts are created, however, at the expense
of introducing de-oxygenated blood into the systemic circulation, thereby
lowering the systemic arterial oxygen saturation. The models developed in this
paper are based on compartmental descriptions of human hemodynamics and oxygen
transport. An important parameter included in our models is the cross-
sectional area of the surgically created defect. Numerical simulations are
performed to compare different interventions and various shunt sizes and to
assess their impact on hemodynamic variables and oxygen saturations. We also
create a model for exercise and use it to study exercise tolerance in
simulated pre-intervention and post-intervention RPH patients.
## 1 Introduction
Pulmonary hypertension refers to a spectrum of cardiovascular and/or pulmonary
diseases that involve elevations in a person’s pulmonary vascular resistance
(PVR). Over time, this elevated PVR causes pathologic remodeling of the right
ventricle and the pulmonary vasculature, ultimately resulting in right
ventricular failure and death. In this paper, our focus is on interventions
for refractory pulmonary hypertension (RPH), which corresponds to disease that
is unresponsive to standard medical treatments [6, 14]. Even with aggressive
pharmacotherapy, patients often will ultimately require either lung
transplantation or palliative surgical or catheter-based procedures, with the
goal of either approach being to provide relief to the ailing right ventricle
and thereby to extend the patient’s life, although for an unknown period of
time.
Palliative shunts used in the setting of a failing right ventricle due to
elevated PVR can be classified into two main categories: (1) pre-tricuspid
shunts, meaning the shunt occurs prior to blood crossing the tricuspid valve,
which is what occurs in the setting of an atrial septal defect (ASD), and (2)
post-tricuspid shunts, meaning the shunt occurs after the blood passes the
tricuspid valve, as is the case for the ventricular septal defect (VSD) and
the Potts shunt. This classification scheme is important, as pre-tricuspid shunts
are viewed as volume-unloading shunts for the right ventricle, meaning they
can unload excess volume from a failing right ventricle, but do not directly
affect the pressure the right ventricle has to pump against. Post-tricuspid
shunts, on the other hand, are pressure-unloading shunts, meaning they provide
a lower-resistance pathway for blood to traverse, decreasing the resistance
the failing right ventricle has to pump against. Pressure-unloading shunts are
often preferred, as it is the pressure load the right ventricle has to pump
against that results in its failure and the patient’s demise. Because these
shunts, regardless of location, allow for right-to-left shunting, they result
in a decrease in systemic oxygen saturations of varying degrees and severity.
Shunt effectiveness is determined not only by location of the shunt, but by
the size of the shunt as well. Small septal defects may quickly become
restrictive over time, reducing their effectiveness at either pressure- or
volume-unloading the right ventricle. The same is true for narrow shunts such
as the Potts shunt, particularly as their length increases.
The effects of various shunts on pressures, flows, and oxygen saturations are
often not clear in practice. Furthermore, shunt flows are highly sensitive to
shunt size, a parameter that can be varied within the modeling framework
developed in this paper. The goal of this paper is to use computational models
to study the impact of several possible shunts, used for treating refractory
pulmonary hypertension, on important hemodynamic variables and oxygen
saturations. The three shunts considered here are placed (1) within the atrial
septum, (2) within the ventricular septum, and (3) between the pulmonary artery
and the aorta [13, 2]. We refer to these surgically created defects
respectively by the following names: (1) an atrial septal defect (ASD), (2) a
ventricular septal defect (VSD), or (3) a Potts shunt. For each intervention,
we develop and apply computational models to study both the benefit in terms
of reduced pulmonary artery pressure and also the detriment in terms of
systemic arterial oxygen desaturation, as functions of the cross-sectional
area of the shunt. Furthermore, we develop an exercise model and use it to
study the simulated impact of RPH on exercise tolerance in three distinct
conditions: pre-intervention, immediately post-intervention, and a certain
amount of time post-intervention after which pulmonary vascular remodeling has
occurred.
The complexities associated with pulmonary hypertension have motivated the use
of computational models for studying disease progression, diagnosis, and the
performance of possible treatments. We recall several important contributions
of physics-based models for pulmonary hypertension. Delhaas et al. constructed
compartmental models for two possible interventions considered in this paper,
the ASD and the Potts shunt [5]. Our paper extends their results to a
comparison of the ASD and Potts shunt with the VSD. Gerringer et al. built and
calibrated compartmental models with animal data to study the effect of
progressing pulmonary hypertension on resistance and compliance [8]. Tewari et
al. also used calibrated compartmental models to investigate changes in
important hemodynamic parameters after the onset of disease [30]. Vessel
network models, which incorporate spatial variations of blood flow and
pressure, were used by Acosta et al. to derive early diagnostic indicators of
disease [1]. Qureshi et al. also used vessel network models to study several
classes of pulmonary hypertension as well as to simulate control and diseased
animal models [21, 20]. There have also been modeling efforts to understand
the impact of pulmonary hypertension on remodeling of heart tissue. Rausch et
al. constructed three-dimensional solid mechanics models of the ventricular
chambers that were coupled to a mathematical description of tissue remodeling
under the high pressure loads associated with hypertension [22].
The rest of this paper is organized as follows. Section 2 describes the
mathematical models for blood flow and oxygen transport and the numerical
methods used to approximate the resulting equations. This section also
includes a discussion of parameter selection, the shunt model derived from the
Gorlin equation, and the exercise model. Section 3 describes results from our
models for each of the three possible surgical interventions. Our models are
used to study the dependence of several important hemodynamic variables on
shunt size. We also investigate mean shunt flow as a function of pulmonary
vascular resistance and the corresponding time-dependent details of the shunt
flow waveform. Limitations and conclusions are provided in Sections 4 and 5.
## 2 Circulation models and numerical methods
In this section, we present the models used in this work and the numerical
methods by which the model equations are solved. The following subsections
describe a hemodynamic model, a cardiac chamber model that specifies the time-
varying compliance of each heart chamber in the hemodynamic model, a shunt
model based on the Gorlin equation that makes it possible to include shunts of
specified cross-sectional area, an oxygen transport model that calculates
oxygen saturations throughout the circulation, and finally an exercise model
that modulates several hemodynamic parameters to simulate the impact of
exercise on blood flow and oxygen transport.
### 2.1 Hemodynamic model
The circulation is represented by a collection of compartments corresponding
to compliance chambers. These chambers are connected by resistors that are
equipped with valves [19]. We use the following compliance relation for each
of the $N$ compliance chambers, numbered $i=1,2,...,N$:
$V_{i}=(V_{\text{d}})_{i}+C_{i}P_{i},\quad i=1,...,N.$ (1)
The parameter $C_{i}$ is the compliance of chamber $i$, which is assumed to be
constant for arteries and veins but time-varying for the heart chambers. The
variable $V_{i}$ is the volume of compliance chamber $i$, the variable $P_{i}$
is the pressure of that chamber, and the parameter $(V_{\text{d}})_{i}$ is the
dead volume, that is, the volume of the chamber when the pressure is zero. We
assume the flow from chamber $i$ to chamber $j$ is governed by a pressure-flow
relationship of the following form:
$Q_{ij}=\frac{S_{ij}}{R_{ij}}(P_{i}-P_{j})=S_{ij}\,G_{ij}\,(P_{i}-P_{j}),\quad i,j=1,...,N,$ (2)
where
$S_{ij}=\begin{cases}1,&P_{i}>P_{j},\\ 0,&P_{i}\leq P_{j}.\end{cases}$ (3)
Equation (2) describes the flow through a resistance that is equipped with a
valve. The conductance $G_{ij}$ is the reciprocal of the resistance $R_{ij}$.
Conductance is convenient because it can be set equal to zero to represent the
absence of a connection between two chambers. The variable $S_{ij}$, which is
determined by $P_{i}$ and $P_{j}$ according to equation (3), denotes the state
of the valve, with $S_{ij}$ = 1 when the valve is open, and $S_{ij}$ = 0 when
the valve is closed. Note that the words “open” and “closed” have the opposite
meaning here from their use in electricity, where a closed switch is
conducting and an open switch is non-conducting.
Equipping every connection with a valve does not involve any loss of
generality. Between any pair of chambers $i$ and $j$, our framework allows for
two connections of the type described above, one with a valve that allows flow
only from $i$ to $j$, and another with a valve that allows flow only from $j$
to $i$. To model a situation in which there is no valve in a connection
between chambers $i$ and $j$, we need only set $G_{ij}$ equal to $G_{ji}$. To
model a leaky valve we may set $G_{ij}$ and $G_{ji}$ to positive but unequal
values. Lastly, to model the situation in which there is no connection at all
between chambers $i$ and $j$, we set $G_{ij}$ = $G_{ji}$ = 0. Thus, our
framework allows for a great variety of connection types and patterns merely
by specifying the (non-symmetric) $N$ by $N$ matrix $G$.
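As an illustrative sketch (not the paper's code), the valved pressure-flow relation of equations (2) and (3) can be written as:

```python
# Illustrative sketch of equations (2)-(3): flow through a resistance
# equipped with a valve, whose state S_ij is set by the pressures.
def valve_flow(P_i, P_j, G_ij):
    """Flow from chamber i to chamber j (L/min); zero when the valve
    is closed, i.e. when P_i <= P_j."""
    S_ij = 1.0 if P_i > P_j else 0.0
    return S_ij * G_ij * (P_i - P_j)
```

A connection without a valve is then obtained by pairing `valve_flow(P_i, P_j, G)` with `valve_flow(P_j, P_i, G)`, exactly as described in the text for equal conductances in both directions.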
Upon differentiating equation (1) with respect to time and using the principle
that the rate of change of volume is equal to inflow minus outflow, together
with equation (2), one obtains the following system of ordinary differential
equations for the pressures as functions of time:
$\displaystyle\frac{d}{dt}(C_{i}P_{i})$
$\displaystyle=\displaystyle\sum_{j=1}^{N}(S_{ji}G_{ji}(P_{j}-P_{i})-S_{ij}G_{ij}(P_{i}-P_{j}))$
$\displaystyle=\displaystyle\sum_{j=1}^{N}(S_{ij}G_{ij}+S_{ji}G_{ji})(P_{j}-P_{i}).$
(4)
We assume here that all of the dead volumes are constant but allow for the
possibility that some of the compliances, specifically those of the heart
chambers, are functions of time. How these compliances are specified will be
described in Subsection 2.2. Equation (4) will be modified later to include
shunt flows modeled by the Gorlin equation; see Subsection 2.3.
Our numerical scheme for equation (4) is the backward Euler method:
$\displaystyle\frac{C_{i}^{n}P_{i}^{n}-C_{i}^{n-1}P_{i}^{n-1}}{\Delta t}$
$\displaystyle=\displaystyle\sum_{j=1}^{N}(S_{ij}^{n}G_{ij}^{n}+S_{ji}^{n}G_{ji}^{n})(P_{j}^{n}-P_{i}^{n}).$
(5)
This is a system of equations for the unknown pressures at time step $n$. It
is a nonlinear system, because $S_{ij}$ is a function of $P_{i}$ and $P_{j}$,
see equation (3). The reason for using the backward Euler method here is its
unconditional stability. If two compliance chambers are connected by a very
large conductance (that is, by a very small resistance), their pressures will
equilibrate on a very fast time scale, and we do not want to be required to
use a small enough time step to resolve the details of that rapid
equilibration. This situation actually arises in the circulation whenever
there are two chambers with an open heart valve between them, since an open
valve (at least when it is non-stenotic) has a very high conductance.
The procedure that we use to solve the nonlinear system (5) is based on the
following observation: given the valve states, equation (5) reduces to a
linear system that is easy to solve for the pressures. Also, given the
pressures, it is easy to evaluate the valve states from equation (3). Thus,
the procedure starts with a guess for the valve states (a good guess is the
valve states that were found on the previous time step), solves equation (5)
for the pressures, resets the valve states according to the pressures via
equation (3), and so on. The process stops when the valve states (and
therefore the pressures) stop changing, and in practice this happens very
quickly. On most time steps, the initial guess, that the valves states are the
same as they were at the previous time step, turns out to be correct. When the
valve states stop changing, the problem stated in equation (5) is actually
solved (except, of course, for round-off error), not merely solved to within
some tolerance. This is because the valve states are discrete.
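This solve-and-reset procedure can be sketched as follows (an illustrative implementation, not the authors' code; the compliances are held constant within a step for simplicity, whereas equation (5) allows $C_{i}^{n}\neq C_{i}^{n-1}$):

```python
import numpy as np

def backward_euler_step(P_prev, C, G, dt, max_iter=50):
    """One backward Euler step, equation (5), iterating on the valve
    states until they stop changing. Compliances C are held constant.

    P_prev : pressures at step n-1 (length N)
    C      : compliances (length N)
    G      : N x N conductance matrix (non-symmetric in general)
    """
    P = P_prev.copy()
    # initial guess for the valve states from the previous pressures
    S = (P[:, None] > P[None, :]).astype(float)
    for _ in range(max_iter):
        K = S * G + (S * G).T          # K_ij = S_ij G_ij + S_ji G_ji
        A = np.diag(C / dt + K.sum(axis=1)) - K
        P = np.linalg.solve(A, C * P_prev / dt)
        S_new = (P[:, None] > P[None, :]).astype(float)
        if np.array_equal(S_new, S):   # discrete states: exact convergence
            return P
        S = S_new
    return P
```

With a very large conductance between two chambers, the pressures equilibrate within a single step, illustrating why no stability restriction on the time step is needed.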
For further discussion of the methodology described here, see [10]. As in that
reference, our models use six compliance chambers corresponding to the left
and right ventricles and the systemic and pulmonary arteries and veins. We do
not separately model the atria, but instead treat each atrium as part of the
venous system to which it is connected, and moreover we do not take into
account the time dependence of the atrial compliances. The ventricular
compliances are, of course, time dependent in our model, but not in the same
way as in [10], see Subsection 2.2.
Parameters | Resistance ($R$) | Dead Volume ($V_{\text{d}}$) | Compliance ($C$)
---|---|---|---
Units | mmHg/(L/min) | L | L/mmHg
S | 17.5 | - | -
P | 1.79, 22.75 | - | -
Mi | 0.01 | - | -
Ao | 0.01 | - | -
Tr | 0.01 | - | -
Pu | 0.01 | - | -
SA | - | 0.825 | 0.0012
PA | - | 0.1135 | 0.0042, 0.0021
PV | - | 0.18 | 0.01
SV | - | 3.1 | 0.09
Table 1: Parameters for the circulation models. When two numbers are given,
the first one is used in the normal model and the second one is used in the
RPH model. For example, pulmonary resistance is chosen to be 1.3 times greater
than the systemic resistance (1.3$\times$17.5) to simulate refractory
pulmonary hypertension. The parameters shown in the table are resting values.
The systemic venous dead volume and systemic resistance are modified by our
exercise model as discussed in section 2.5. Abbreviations: S, systemic organs;
P, lungs; Mi, mitral valve; Ao, aortic valve; Tr, tricuspid valve; Pu,
pulmonic valve; SA, systemic arteries; PA, pulmonary arteries; PV, pulmonary
veins; SV, systemic veins.
To create a model for RPH, we first determine parameter values that result in
a normal model for a healthy circulation. Then, parameters for the normal
model are systematically adjusted, as described below, to create a pre-
intervention model corresponding to the RPH disease state. In order to model
severe pulmonary hypertension, the pulmonary resistance is taken to be 1.3
times greater than the systemic resistance [24]. This is very different from
the normal case in which the pulmonary resistance is approximately 10 times
smaller than the systemic resistance [11, 17, 32]. A possible consequence of
pulmonary hypertension is right-heart hypertrophy, making the normally thin-
walled right ventricle into a chamber that more closely resembles the normal
left ventricle [25, 18, 4]. To model this, we use typical left-ventricular
parameters for both ventricles; see the next subsection. Another consequence
of pulmonary hypertension is right heart failure that leads to increased blood
volume [28]. Accordingly, we use a total blood volume of 5.248 L, compared to
the blood volume in our normal model, which is 5.098 L. This change is needed
to elevate the systemic venous pressure sufficiently to fill the hypertrophied
right heart and produce a viable cardiac output. The hemodynamic parameters
used in the present model, other than the cardiac chamber parameters, are
stated in Table 1. When two values are given, the first corresponds to the
normal model and the second corresponds to the pre-intervention RPH model.
### 2.2 Cardiac chamber model
This subsection details the time-varying elastance model used for the left and
right ventricles, adapted from [16]. Note that the elastance, denoted by $E$,
is the reciprocal of the compliance $C$. For a cardiac chamber, the elastance,
and therefore the compliance, is a given function of time. For the pre-
intervention RPH model, we use the same elastance function $E(t)$, with the
same parameters, for the left and right ventricles. This choice is reasonable
because the large right-sided pressures associated with pulmonary hypertension
lead to remodeling and thickening of the right ventricular wall [9]. In severe
pulmonary hypertension, these changes result in right ventricular
pressure/volume characteristics similar to those of the left ventricle [26].
Maximum and minimum ventricular elastances are denoted $E_{\text{max}}$ and
$E_{\text{min}}$. $E_{\text{max}}$ is the end-systolic elastance and
$E_{\text{min}}$ is the end-diastolic elastance. The functional form of the
elastance $E(t)$ is given during the time interval $[0,T]$ as follows:
$E(t)=k\left(\frac{g_{1}(t)}{1+g_{1}(t)}\right)\left(\frac{1}{1+g_{2}(t)}-\frac{1}{1+g_{2}(T)}\right)+E_{\text{min}},$
(6)
where
$g_{1}(t)=\left(\frac{t}{\tau_{1}}\right)^{m_{1}},\quad
g_{2}(t)=\left(\frac{t}{\tau_{2}}\right)^{m_{2}}.$ (7)
Here $T$ is the period of the heartbeat. Note that equation (6) makes $E(0)$ =
$E(T)$. (We have made a slight modification of the formula used in [16] to
ensure this.) Outside of the interval $[0,T]$, we define $E(t)$ as a periodic
function with period $T$, so that $E(t)$ = $E(t+T)$ for all $t$. The parameter
$k$ is chosen so that the maximum value of $E(t)$ is $E_{\text{max}}$. The
formula for $k$ to achieve this is
$k=\frac{E_{\text{max}}-E_{\text{min}}}{\max_{t\in[0,T]}[(\frac{g_{1}(t)}{1+g_{1}(t)})(\frac{1}{1+g_{2}(t)}-\frac{1}{1+g_{2}(T)})]}$
(8)
The maximum in the denominator of the formula for $k$ is computed by
evaluating the expression that needs to be maximized at a collection of
equally spaced points within the interval $[0,T]$, and then choosing the
largest of the values of that expression that are found. Although this
procedure does not yield the exact maximum value, it comes close enough for
practical purposes, especially since the goal is to find the maximum value,
rather than the time at which it occurs. Parameter values used for the heart
chambers are provided in Table 2. When two values are shown, the first
corresponds to the normal model and the second corresponds to the pre-
intervention RPH model. The constant $\tau_{1}$ controls the timescale of
contraction, $\tau_{2}$ controls the duration of systole, and $m_{1}$ and
$m_{2}$ govern the speed of contraction and relaxation respectively. Note that
$\tau$ and $m$ are estimated from previously employed values [27], and the
values for $E_{\text{min}}$ and $E_{\text{max}}$ are similar to those used by
[15].
Parameters | Symbol | Units | Left Ventricle | Right Ventricle
---|---|---|---|---
Minimum elastance | $E_{\text{min}}$ | mmHg/L | $0.08\times 10^{3}$ | $0.04\times 10^{3}$, $0.08\times 10^{3}$
Maximum elastance | $E_{\text{max}}$ | mmHg/L | $3.00\times 10^{3}$ | $0.60\times 10^{3}$, $3.00\times 10^{3}$
Contraction exponent | $m_{1}$ | - | 1.32 | 1.32
Relaxation exponent | $m_{2}$ | - | 27.4 | 27.4
Systolic time constant | $\tau_{1}$ | min | 0.269$\times T$ | 0.269$\times T$
Diastolic time constant | $\tau_{2}$ | min | 0.452$\times T$ | 0.452$\times T$
Dead Volume | $V_{d}$ | L | 0.010 | 0.010
Period of heartbeat | $T$ | min | 0.0125 | 0.0125
Table 2: Parameters for the time varying compliances in the heart model. When
two numbers are given, the first one is used in the normal model and the
second one is used in the RPH model. The parameters shown in the table are
resting values. The period of a cardiac cycle, $T$, will be modified in the
exercise model as described in section 2.5.
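The elastance formulas (6)-(8) can be sketched as follows, with the maximum in equation (8) taken over an equally spaced grid as described in the text (illustrative code, not the authors'):

```python
def elastance(t, T, E_min, E_max, tau1, tau2, m1, m2, n_grid=1000):
    """Time-varying elastance E(t) of equations (6)-(8), extended
    periodically with period T. Units follow Table 2."""
    def shape(s):
        g1 = (s / tau1) ** m1
        g2 = (s / tau2) ** m2
        g2T = (T / tau2) ** m2
        return (g1 / (1.0 + g1)) * (1.0 / (1.0 + g2) - 1.0 / (1.0 + g2T))
    # equation (8): choose k so that max_t E(t) = E_max on a grid
    grid = [T * i / (n_grid - 1) for i in range(n_grid)]
    k = (E_max - E_min) / max(shape(s) for s in grid)
    return k * shape(t % T) + E_min
```

With the left-ventricular values of Table 2 ($T=0.0125$ min, $\tau_{1}=0.269T$, $\tau_{2}=0.452T$, $m_{1}=1.32$, $m_{2}=27.4$), this gives $E(0)=E(T)=E_{\text{min}}$ and a peak equal to $E_{\text{max}}$.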
### 2.3 Shunt model
The Gorlin equation is used to calculate flows through surgically created
shunts (ASD, VSD, or Potts shunt) in our model [23]. This allows us to specify
the cross-sectional area of the connection that the surgeon creates, and to
study how the shunt size affects hemodynamic variables and the transport of
oxygen.
To derive the shunt model, consider two chambers, denoted by the indices 1 and
2, separated by a wall with a hole in it that corresponds to the shunt. Let
$A_{0}$ be the cross-sectional area of the hole. We assume that the velocity
of the blood as it goes through the hole is much larger than the velocity in
the two chambers, so that we may consider the fluid in each of the two
chambers as if it were at rest. Let $Q$ denote the volume of blood flow per
unit time through the hole, with the direction from chamber 1 to chamber 2
considered positive. Then, the spatially averaged velocity of blood flow in
the hole itself is given by
$v=Q/A_{0}.$ (9)
Let $P_{1}$ and $P_{2}$ be the pressures in the two chambers, and let $P_{0}$
be the pressure within the hole. Suppose, for example, that $Q>0$. By
Bernoulli’s equation in the upstream chamber up to the hole itself, one has
$\displaystyle P_{0}$ $\displaystyle=P_{1}-\frac{1}{2}\rho
v^{2}=P_{1}-\frac{\rho}{2A_{0}^{2}}Q^{2}.$ (10)
In the region downstream of the hole, Bernoulli’s equation does not apply
because the flow there is dominated by turbulent eddies that dissipate energy.
The result is that the pressure is relatively constant in the downstream
region, in particular that $P_{2}=P_{0}$. It follows that
$P_{1}-P_{2}=\frac{\rho}{2A_{0}^{2}}Q^{2},\quad Q>0.$ (11)
By the same reasoning, for flow in the other direction
$P_{2}-P_{1}=\frac{\rho}{2A_{0}^{2}}Q^{2},\quad Q<0.$ (12)
Equations (11) and (12) can be combined as follows:
$P_{1}-P_{2}=\frac{\rho}{2A_{0}^{2}}|Q|Q.$ (13)
This shows that the hydraulic resistance of the hole is given by
$R_{\text{shunt}}=\frac{\rho}{2A_{0}^{2}}|Q|.$ (14)
Note that the above formula for $R_{\text{shunt}}$ is independent of
viscosity. In reality, there is a very small viscous resistance as well, so we
modify the above formula to read
$R_{\text{shunt}}=R_{\text{visc}}+\frac{\rho}{2A_{0}^{2}}|Q|.$ (15)
In most situations $R_{\text{visc}}$ is negligible, but we should include it
to prevent $R_{\text{shunt}}$ from being zero, which would otherwise happen in
principle every time that $Q$ changes sign. Since $R_{\text{visc}}$ is
included only for this reason, we choose the very small value
$R_{\text{visc}}$ = 0.1 mmHg/(L/min). Evaluating the conductance of the hole
from equation (15), one obtains
$G_{\text{shunt}}=\frac{1}{R_{\text{visc}}+\frac{\rho}{2A_{0}^{2}}|Q|}$ (16)
In making use of equation (16), one must be careful about units. In this
paper, we use what may be called physiological units, in which volume is
measured in liters, pressure in mmHg, and time in minutes. The constants
$A_{0}$ and $\rho$ need to be expressed in these units. The use of liters for
volume implies that our unit of length is the decimeter (dm), which is equal
to 10 cm. Thus, $A_{0}$ needs to be expressed internally in terms of dm2,
although in the presentation of our results, we use cm2 since these units have
more meaning to the reader. To express density in physiological units, one
needs the units of mass. The units of force are mmHg $\cdot$ dm2, and the
units of acceleration are dm/min2, so the units of mass are mmHg $\cdot$ dm
$\cdot$ min2. Dividing this by dm3, one obtains the units of density as mmHg
$\cdot$ (min/dm)2. After taking units carefully into account in this way, the
density in physiological units is
$\displaystyle\rho=0.00002084167\cdot\frac{\text{mmHg}\cdot{\text{min}^{2}}}{\text{dm}^{2}}.$
(17)
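The conversion behind equation (17) can be checked numerically. The sketch below assumes a blood density of about 1000 kg/m$^{3}$ and the standard conversion 1 mmHg $\approx$ 133.32 Pa; the exact factors used by the authors may differ slightly, but this reproduces the quoted value to within about 0.1%:

```python
# Unit bookkeeping behind equation (17). Assumed inputs: a density of
# 1000 kg/m^3 and 1 mmHg = 133.32 Pa (both approximations).
MMHG_IN_PA = 133.32   # 1 mmHg in Pa = kg/(m s^2)
RHO_SI = 1000.0       # assumed blood density in kg/m^3

# One unit of mmHg*min^2/dm^2 expressed in kg/m^3:
# mmHg -> kg/(m s^2), min^2 -> 3600 s^2, dm^2 -> 0.01 m^2
unit_in_SI = MMHG_IN_PA * 60.0 ** 2 / 0.1 ** 2
rho_phys = RHO_SI / unit_in_SI   # density in mmHg*(min/dm)^2
```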
Another complication in the use of equation (16) is that the shunt conductance
$G_{\text{shunt}}$ is flow-dependent. A simple idea here would be to use the
shunt flow on the previous time step to set the shunt conductance for the
present time step, but instead of this, we use a fixed-point iteration, with
the shunt flow on the previous time step as the initial guess. At each step of
the fixed-point iteration, equation (16) is used to set the shunt conductance
based on the latest guess for the shunt flow. Then, the shunt conductance is
inserted into the appropriate two places in the conductance matrix $G$ (one
entry for each flow direction, since there is no valve involved in the shunt).
Finally, all of the pressures and flows for the circulation are computed,
including the shunt flow. The benefit of doing the fixed-point iteration can
be seen in Figure 1 since it removes the numerical oscillations seen in the
blood flow waveform. In practice, 10 fixed-point iterations are used for each
time step, and this achieves good enough agreement between the flow that is
used to set the shunt conductance and the flow that is calculated on the basis
of that shunt conductance.
Figure 1: A comparison of shunt flow waveforms from the ASD model, obtained
with and without fixed-point iterations (due to the nonlinearity in the shunt
conductance). The shunt area is 1 $\text{cm}^{2}$. The left panel shows flow
computed without the fixed-point iteration. The right panel shows flow
computed with the fixed-point iteration. Notice that the fixed-point iteration
removes the high-frequency oscillations.
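The fixed-point iteration for the flow-dependent conductance can be sketched in simplified form, holding the pressure difference fixed rather than re-solving the whole circulation on each pass (a simplification of this sketch, not of the model itself):

```python
def shunt_conductance(Q, A0, rho=2.084167e-5, R_visc=0.1):
    """Flow-dependent shunt conductance, equation (16).
    A0 in dm^2, rho in mmHg*(min/dm)^2, R_visc in mmHg/(L/min)."""
    return 1.0 / (R_visc + rho / (2.0 * A0 ** 2) * abs(Q))

def shunt_flow_fixed_point(dP, A0, Q_prev, n_iter=10):
    """Fixed-point iteration for the shunt flow at a fixed pressure
    difference dP = P1 - P2 (mmHg), starting from the flow on the
    previous time step as the initial guess."""
    Q = Q_prev
    for _ in range(n_iter):
        Q = shunt_conductance(Q, A0) * dP
    return Q
```

For a 1 $\text{cm}^{2}$ shunt ($A_{0}=0.01$ $\text{dm}^{2}$) and a 10 mmHg gradient, the iteration settles on the flow satisfying the quadratic relation of equation (13); a good initial guess, as noted in the text, keeps the number of passes small.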
### 2.4 Oxygen transport model
An important consequence of the surgical interventions considered in this
paper is the mixing of oxygenated and deoxygenated blood. Our approach to the
modeling of oxygen transport follows Tu and Peskin [31]. Time-varying oxygen
concentrations for each compliance chamber are described by the following
system of differential equations:
$\frac{d}{dt}([O_{2}]_{i}V_{i})=\displaystyle\sum_{j=1\atop j\neq i}^{N}([O_{2}]_{j}Q_{ji}-[O_{2}]_{i}Q_{ij}+M_{ji}).$ (18)
The variable $[O_{2}]_{i}$ is the oxygen concentration in compliance chamber
$i$, the variable $Q_{ji}$ is the blood flow from $j$ to $i$, and the
parameter $M_{ji}$ is the rate at which oxygen is added to the stream of blood
that is flowing from chamber $j$ to chamber $i$. Note that $M_{ji}$ is
positive if oxygen is being added to the blood stream, and negative if oxygen
is being removed. The correctness of equation (18) relies on the fact that all
of the flows that appear in it are positive or zero. This is a benefit of our
formulation in which every connection is equipped with a valve, as described
above in subsection 2.1. These equations describe conservation of oxygen
during transport between chambers, metabolic consumption of oxygen within
systemic organs, and replenishment of oxygen within the lungs. After computing
the flows at time step $n$, those flow values are used to update the oxygen
concentrations from time step $n-1$ to $n$ as follows:
$\frac{[O_{2}]_{i}^{n}V_{i}^{n}-[O_{2}]_{i}^{n-1}V_{i}^{n-1}}{\Delta t}=\displaystyle\sum_{j=1\atop j\neq i}^{N}([O_{2}]_{j}^{n-1}Q_{ji}^{n}-[O_{2}]_{i}^{n-1}Q_{ij}^{n}+M_{ji}).$ (19)
Note that this is the forward Euler method insofar as the oxygen
concentrations are concerned, although it differs from the forward Euler
method by using the flows at time step $n$. The manner in which $M_{ji}$ is
determined for use in these equations is described below.
We use the millimole (mmol) as the unit for the amount of oxygen. It then
follows from our other choices of units that the units of oxygen concentration
are mmol/L and the units of the rate of oxygen consumption by the body are
mmol/min. A standard concentration of hemoglobin in blood is 2.5 mmol/L, and
since each hemoglobin molecule can carry four oxygen molecules, the oxygen
concentration when hemoglobin is fully saturated is 10 mmol/L.
There are only two places in our model where the parameter $M_{ji}$ that
appears in equation (18) is nonzero. One of these is in the connection from
the pulmonary arteries (pa) to the pulmonary veins (pv). We assume that
$M_{\text{pa,pv}}$ is such that the stream of blood flowing from the pulmonary
arteries to the pulmonary veins becomes fully saturated with oxygen during its
passage through the pulmonary capillaries. This gives the equation
$M_{\text{pa,pv}}=(10\text{ mmol/L}-[O_{2}]_{\text{pa}})\,Q_{\text{pa,pv}}.$
(20)
Equation (20) is used to set $M_{\text{pa,pv}}$ at every time step. Note that
this is not the same as setting $[O_{2}]_{\text{pv}}$ = 10 mmol/L. The reason
for this is that there may be other streams of blood entering the pulmonary
venous compartment besides the one coming from the pulmonary arteries. In
particular, since we regard the left atrium as being part of the pulmonary
venous compartment, this will be the case when we are simulating a surgically
created atrial septal defect.
The other nonzero value of $M_{ji}$ in our model is $M_{\text{sa,sv}}$, which
is negative, since it represents oxygen consumption by the tissues. This
oxygen is extracted from the stream of blood that flows from the systemic
arteries (sa) to the systemic veins (sv). In the simulations reported here, we
keep $M_{\text{sa,sv}}$ constant, and we calculate its value by using a normal
cardiac output of 5.6 L/min and a normal amount of oxygen extraction by the
systemic tissue of 30%. These values result in the following:
$\displaystyle-M_{\text{sa,sv}}=0.3\cdot(10\text{ mmol/L})\cdot(5.6\text{ L/min})=16.8\text{ mmol/min}.$ (21)
The initial value for the oxygen concentration is set to 10 mmol/L in all
compartments, but this has no effect on our results because we run the
simulations to a periodic steady state.
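A minimal sketch of the update in equation (19) for a single chamber, together with the pulmonary oxygen source of equation (20), can be written as follows (illustrative only; the function and variable names are ours):

```python
def advance_o2(c_old, V_old, V_new, inflows, outflow_total, dt):
    """One explicit step of equation (19) for a single chamber.

    c_old         : chamber O2 concentration at step n-1 (mmol/L)
    V_old, V_new  : chamber volumes at steps n-1 and n (L)
    inflows       : list of (c_j, Q_ji, M_ji) for each entering stream
    outflow_total : sum of flows Q_ij leaving the chamber (L/min)
    """
    amount = c_old * V_old + dt * (
        sum(c_j * Q + M for (c_j, Q, M) in inflows)
        - c_old * outflow_total
    )
    return amount / V_new

def M_pulmonary(c_pa, Q_pa_pv, c_sat=10.0):
    """Equation (20): oxygen added so that the stream from the pulmonary
    arteries leaves the lungs fully saturated at c_sat = 10 mmol/L."""
    return (c_sat - c_pa) * Q_pa_pv
```

Note that a chamber fed only by a fully saturated stream stays at 10 mmol/L, while adding `M_pulmonary` to a desaturated pulmonary stream delivers exactly saturated blood, as intended by equation (20).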
### 2.5 Exercise model
The surgical interventions studied in this paper have an impact on exercise
tolerance. We develop a simple exercise model to use within our hemodynamic
and oxygen transport models in order to compare exercise tolerance pre- and
post-intervention [7, 29]. The independent variable in our exercise model is
oxygen consumption, denoted $M$, in the systemic tissues. This independent
variable is used to set the parameter $M_{\text{sa,sv}}=-M$ in equations (19)
and determine several other parameters as described below. The resting oxygen
consumption in the systemic tissues is denoted $M_{\text{rest}}$ which we set
equal to 16.8 mmol/min. The heart rate is taken to be a function of the oxygen
consumption using the following equation, adapted from [3]:
$\displaystyle\text{HR}=(0.94\text{ beats/mmol})\times(M-M_{\text{rest}})+80\text{ beats/min}.$ (22)
When changing the heart rate, the period $T=1/\text{HR}$ is modified
accordingly. Note that to create the resting condition, we set
$M=M_{\text{rest}}$, in which case we recover the resting heart rate of 80
beats/min. The systemic resistance $R_{S}$ decreases during exercise and is
taken to be a function of the heart rate as follows:
$\displaystyle R_{S}(\text{HR})=\frac{(17.5\text{ mmHg/(L/min)})\times(80\text{ beats/min})}{\text{HR}}.$ (23)
Note that in the resting case, HR = 80 beats/min, and we recover the resting
systemic resistance of 17.5 mmHg/(L/min). The third component of our exercise
model is a decrease in the systemic venous dead volume. This modification is
described by the following formula:
$\displaystyle V_{d,SV}(\text{HR})=(3.1\text{ L})\times\left(\frac{80\text{ beats/min}}{\text{HR}}\right)^{0.1},$ (24)
where we recover the resting dead volume shown in Table 1 when
$\text{HR}=80$ beats/min. Here, it is better to think of the dead volume as a reserve
volume that can be mobilized by the sympathetic nervous system by constricting
the systemic veins. This has the effect of supporting and even moderately
increasing the stroke volume of the heart, despite the increasing heart rate
associated with exercise.
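Equations (22)-(24) can be collected into a small helper (a sketch with our own names; resting values are those of the text and Table 1):

```python
# Sketch of the exercise model, equations (22)-(24).
M_REST = 16.8      # mmol/min, resting systemic O2 consumption
HR_REST = 80.0     # beats/min, resting heart rate
RS_REST = 17.5     # mmHg/(L/min), resting systemic resistance
VD_SV_REST = 3.1   # L, resting systemic venous dead volume

def exercise_params(M):
    """Heart rate, cardiac period, systemic resistance, and systemic
    venous dead volume as functions of O2 consumption M (mmol/min)."""
    HR = 0.94 * (M - M_REST) + HR_REST           # equation (22)
    T = 1.0 / HR                                 # period in minutes
    R_S = RS_REST * HR_REST / HR                 # equation (23)
    Vd_sv = VD_SV_REST * (HR_REST / HR) ** 0.1   # equation (24)
    return HR, T, R_S, Vd_sv
```

Setting $M=M_{\text{rest}}$ recovers all of the resting values at once, as the text requires.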
## 3 Results and discussion
In this section, we examine the simulated effects for each of the three
interventions. First, we compare results from the normal and pre-intervention
RPH models. Second, we simulate each surgical intervention within our models
in order to investigate changes in pressure and oxygen saturation in the
systemic and pulmonary arteries. Third, we consider various levels of exercise
in a normal model, a pre-intervention RPH model, and in two post-intervention
RPH models, the VSD and Potts shunt. Oxygen saturation, systemic flow, and
oxygen delivery are studied as the oxygen consumption in the systemic tissues
is varied. For the post-intervention cases, we focus on a shunt size that most
significantly lowers the pulmonary artery pressure in both the VSD and Potts
shunt. Fourth, we compare the resting model with the exercise model. In these
cases, we study systemic flow, oxygen saturation, and oxygen delivery as the
pulmonary resistance is varied because of favorable post-intervention
remodeling of the pulmonary vasculature. Finally, we examine mean shunt flow
and shunt flow waveforms to determine whether the shunt is indeed right-to-
left, as anticipated, or is perhaps bidirectional.
In all simulations, 100 time steps are used for each cardiac cycle. The heart
rate at rest is 80 beats/min. Each computer experiment for both rest and
exercise is run for 500 cardiac cycles. This duration is sufficient for each
simulation to achieve a periodic steady state for all variables in all cases.
We remark that a periodic steady state for the hemodynamic variables is
achieved very quickly, within 10-20 cardiac cycles. In contrast, it takes many
more cardiac cycles for the oxygen concentrations to reach a periodic steady
state. When we report a single value for any quantity as the result of a
simulation, it is the average of that quantity over the last five cardiac
cycles.
Model Output | Units | Normal | RPH Pre-Intervention
---|---|---|---
Total blood volume | L | 5.098 | 5.248
Cardiac output | L/min | 5.589 | 3.661
Left ventricle stroke volume | mL | 69.871 | 45.762
Right ventricle stroke volume | mL | 69.871 | 45.762
Diastolic systemic arterial pressure | mmHg | 80.844 | 56.109
Systolic systemic arterial pressure | mmHg | 118.561 | 81.077
Mean systemic arterial pressure | mmHg | 102.249 | 70.327
Diastolic pulmonary arterial pressure | mmHg | 13.834 | 80.635
Systolic pulmonary arterial pressure | mmHg | 24.143 | 96.224
Mean pulmonary arterial pressure | mmHg | 19.505 | 89.526
Table 3: Comparison of model outputs in normal and pre-intervention RPH
circulations. The parameters that are changed to convert the normal
circulation to the RPH circulation are described in Subsection 2.1.
### 3.1 Pressures and oxygen saturations
First, we compare the pre-intervention RPH model to the normal model. Values
for different hemodynamic variables are shown in Table 3. As discussed above,
parameters for the pre-intervention RPH model are derived from the normal
model as follows. The pulmonary resistance is taken to be 1.3 times the
systemic resistance (to describe the onset of pulmonary vascular disease), the
right ventricular elastances are taken to be the same as the left ventricular
elastances (to describe right-heart remodeling in the setting of increased
afterload), and the blood volume is increased from 5.098 L to 5.248 L (to
describe compensation by the body to increase cardiac output in the setting of
heart failure). These changes result in a pre-intervention RPH model with
pulmonary pressures that exceed systemic pressures. This physiologic feature
is seen clinically and motivates the need for the types of surgical
interventions explored in this paper. As expected, cardiac output and stroke
volume are substantially lower in the RPH model as compared to the normal
model.
Next, we consider the impact of each intervention on pressures and oxygen
saturations. Figure 2 shows the systemic and pulmonary arterial blood
pressures (mean values) as functions of the shunt area. Results for the atrial
septal defect (ASD) are in the left panel, results for the ventricular septal
defect (VSD) are in the right panel, and results for the Potts shunt are in
the bottom panel. The ASD results show that this intervention is not
successful in lowering the pulmonary arterial pressure, which is perhaps
consistent with the fact that an ASD is a volume-unloading shunt. There
appears to be a very small effect from the ASD, but it is certainly not one
that would be therapeutic. The VSD intervention lowers the mean pulmonary
artery pressure from about 90 mmHg to about 84 mmHg, which could be
beneficial. The smallest shunt for which this result is achieved has a
cross-sectional area of 0.3 cm2. It is interesting to note that beyond this
value for the shunt area, the pulmonary arterial pressure slightly increases
as the shunt size increases. The Potts shunt most substantially lowers the
mean pulmonary arterial pressure, from about 90 mmHg to about 78 mmHg. Unlike
in the VSD case, the pulmonary arterial pressure with the Potts shunt
decreases monotonically with increasing shunt size, but most of the benefit
has already occurred with a shunt size of 0.3 cm2. Our model suggests that
there is little benefit in using a larger Potts shunt size than this value.
Figure 3 shows the pressures in the pulmonary artery and in the systemic
artery for the three interventions, all on the same plot, as functions of the
shunt area.
Figure 2: A comparison of mean pressures in the pulmonary artery (blue) and
systemic artery (red) as the shunt area is varied in the ASD, VSD, and Potts
shunt.
Figure 3: A comparison of mean pressures in the pulmonary artery and the
systemic artery for the three interventions: the left panel shows the
pulmonary artery pressures for the ASD, VSD, and Potts shunt as the shunt area
is varied. The right panel shows the systemic artery pressures for the three
interventions as the shunt area is varied. Note different pressure scales in
the two panels.
We further investigate oxygen transport for each intervention in Figure 4.
This figure depicts systemic flow, oxygen saturation, and oxygen delivery for
the three interventions. Systemic flow is relevant here because it is used in
the computation of oxygen delivery to the systemic tissues. Oxygen saturation
is the oxygen concentration (in mmol/L) expressed as a percentage of 10
mmol/L, which is the maximum possible oxygen concentration in our model. The
rate at which oxygen is delivered to the systemic tissues is calculated by
multiplying the systemic flow by the systemic arterial oxygen concentration.
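These definitions can be made concrete with a short sketch (illustrative only, not code from the paper; the 10 mmol/L maximum concentration is stated above):

```python
O2_MAX = 10.0  # mmol/L, maximum possible oxygen concentration in the model

def saturation_pct(concentration):
    """Oxygen saturation: concentration (mmol/L) as a percentage of O2_MAX."""
    return 100.0 * concentration / O2_MAX

def delivery_rate(systemic_flow, arterial_concentration):
    """Oxygen delivery rate to the systemic tissues (mmol/min):
    systemic flow (L/min) times systemic arterial O2 concentration (mmol/L)."""
    return systemic_flow * arterial_concentration

print(saturation_pct(8.5))         # 85.0
print(delivery_rate(5.589, 10.0))  # ≈ 55.9 mmol/min at normal cardiac output
```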
Figure 4: A comparison of systemic flow, oxygen saturation, and oxygen
delivery rate for the ASD, VSD, and Potts shunt.
All three interventions increase systemic flow. The increase is substantial in
the case of VSD and Potts shunt. This increase in systemic flow has an
important effect on oxygen delivery to the systemic tissues. All three
interventions decrease systemic arterial oxygen saturation. This is
inevitable, since the interventions by design are allowing deoxygenated blood
to bypass the lungs. The effect is smallest in the case of the ASD, but since
this intervention has minimal benefit in terms of decreasing pulmonary artery
pressure, the fact that it also does the least harm is not really of interest.
The VSD and Potts shunt produce similar decreases in systemic arterial oxygen
saturation, but these two interventions look quite different from the point of
view of oxygen delivery to the systemic tissues. The increase in systemic flow
in the VSD case seems to compensate nicely for the drop in systemic arterial
oxygen saturation. At the shunt size of about 0.5 cm2, the oxygen delivery
increases by about 3% compared to the delivery in the pre-intervention state
(corresponding to a shunt size of zero). Recall, however, that the optimal
reduction in pulmonary artery pressure occurs in the VSD at a shunt size of
about 0.3 cm2, and at this size, the increase in delivery is smaller.
### 3.2 Exercise tolerance
In this section, we consider the effect of exercise in four cases: (1) the
normal circulation, (2) the pre-intervention circulation with refractory
pulmonary hypertension, and a post-intervention circulation with either a (3)
VSD or (4) Potts shunt. The ASD intervention is not considered here since it
does not appear to be useful in lowering pulmonary artery pressure. The normal
circulation is considered as a point of reference. We consider a shunt area of
0.3 cm2 for both the VSD and Potts shunt, since this appears to be the
smallest possible shunt size that corresponds to the largest decrease in
pulmonary artery pressure in either case; refer to Figure 3. Figure 5 shows
systemic flow, oxygen saturation, and oxygen delivery as functions of the
oxygen consumption, the independent variable in our exercise model. Note that
larger consumption indicates a higher level of exercise. Systemic venous
oxygen saturation will be our measure of exercise tolerance. It is an
important variable to consider during exercise because it indicates the amount
of oxygen left after consumption by the systemic tissues, including the
exercising muscles. If the computed value for the systemic venous oxygen
saturation is negative, then the assumed level of exercise in our model
corresponding to that value of oxygen consumption is not possible.
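The feasibility criterion amounts to a steady-state Fick balance: the venous concentration equals the arterial concentration minus consumption divided by systemic flow. The snippet below is an illustrative sketch of that check, not the model's actual implementation; the resting consumption value of 16.8 mmol/min is taken from the text.

```python
def venous_concentration(c_arterial, consumption, systemic_flow):
    """Steady-state Fick balance (mmol/L): arterial O2 concentration minus
    oxygen consumption (mmol/min) divided by systemic flow (L/min)."""
    return c_arterial - consumption / systemic_flow

def exercise_feasible(c_arterial, consumption, systemic_flow):
    """An exercise level is possible only if the implied systemic venous
    oxygen concentration is non-negative."""
    return venous_concentration(c_arterial, consumption, systemic_flow) >= 0.0

# Resting consumption (16.8 mmol/min) with fully saturated arterial blood
# at a normal cardiac output is easily sustained:
print(exercise_feasible(10.0, 16.8, 5.589))  # True
# A consumption level exceeding what the available flow can supply is not:
print(exercise_feasible(10.0, 60.0, 5.0))    # False
```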
The left panel of Figure 5 shows systemic flow for each of these four cases.
In all cases, systemic flow increases as exercise level increases, as
expected. The rate of increase is smaller for the RPH models, both pre- and
post-intervention. We note that the rate of increase in systemic flow for the
Potts shunt appears to be slightly larger than that observed for the VSD. The
right and bottom panels of Figure 5 show oxygen saturation and oxygen
delivery, respectively. The dashed-dotted lines correspond to variables in the
systemic arteries and the solid lines correspond to variables in the systemic
veins. In all cases, oxygen saturation and delivery decrease as exercise level
increases. More rapid decreases in both of these variables are seen for the
pre-intervention and post-intervention VSD and Potts shunt models. This trend
appears consistent with the notion that the body is less tolerant to exercise
in a diseased state.
Surprisingly, saturation and delivery curves for the VSD and Potts shunt
lie beneath the pre-intervention curves, indicating slightly lower exercise
tolerances for the post-intervention models compared to the pre-intervention
model. This finding is inconsistent with anecdotal evidence that these
interventions increase exercise tolerance in RPH patients. In this light, we
hypothesize the following mechanism for an increase in exercise tolerance
post-intervention. First, an intervention such as a Potts shunt or VSD off-loads
the right heart and pulmonary vasculature by decreasing the pulmonary artery
pressure. Then, remodeling in the pulmonary artery occurs, leading to a
decrease in the pulmonary resistance. Such a change in pulmonary resistance
has been observed in Potts shunt patients [12]. We test this hypothesis in our
models by decreasing the pulmonary resistance. These experiments are done at
rest and at moderate exercise corresponding to oxygen consumption values of
16.8 mmol/min and 33.44 mmol/min respectively.
Figure 5: A comparison of systemic flow, oxygen saturation, and oxygen
delivery as functions of oxygen consumption, which corresponds to the level of
exercise and is the independent variable in our exercise model. Only the VSD
and Potts shunt are considered. If the computed value of the systemic venous
saturation is negative, this implies that the assumed level of oxygen
consumption and the corresponding exercise level are not possible. In the right and bottom
panels, the dashed-dotted lines correspond to the systemic arteries, and the
solid lines correspond to the systemic veins. $R_{P}$ is the pulmonary
resistance in units of mmHg/(L/min).
Figures 6, 7, and 8 show the effects of decreasing the pulmonary resistance,
corresponding to favorable pulmonary vascular remodeling, on systemic flow,
oxygen saturation, and oxygen delivery respectively. The model at rest is
shown in the left panel and the model corresponding to moderate exercise is
shown in the right panel. For each of these three variables, only one data
point is shown for the pre-intervention model which has a fixed pulmonary
resistance of 22.75 mmHg/(L/min). This point is used as a baseline to assess
the performance of the post-intervention models as the pulmonary resistance
decreases. For decreasing pulmonary resistance, we see an increase in systemic
flow, oxygen saturation, and oxygen delivery. Systemic venous oxygen delivery
and saturation for the post-intervention models exceed the pre-intervention
model values for pulmonary resistances less than approximately 15-20
mmHg/(L/min). These results confirm that the VSD or Potts shunt intervention,
combined with pulmonary vascular remodeling in the form of decreased pulmonary
resistance, leads to an increase in exercise tolerance over the pre-
intervention state. We also remark that Figure 7 reveals that the systemic
arterial oxygen saturation reaches 100% for small enough pulmonary resistance,
i.e. for substantial enough pulmonary vascular remodeling. Maximum saturation
in the systemic arteries indicates the shunt flow has reversed and is now
left-to-right. An interesting question is whether the shunt should be closed
at this stage, since it apparently serves no therapeutic purpose.
Figure 6: Systemic flow at rest and at moderate exercise. Post-intervention
values are shown as the pulmonary resistance decreases in order to simulate
post-intervention pulmonary vascular remodeling. The pre-intervention value is
shown as a baseline for comparison. The values used for oxygen consumption at
rest and at moderate exercise are $M$ = 16.8 mmol/min and $M$ = 33.44 mmol/min
respectively.
Figure 7: Oxygen saturation at rest and at moderate exercise. Post-
intervention values are shown as the pulmonary resistance decreases in order
to simulate post-intervention pulmonary vascular remodeling. The pre-
intervention value is shown as a baseline for comparison. The values used for
oxygen consumption at rest and at moderate exercise are $M$ = 16.8 mmol/min
and $M$ = 33.44 mmol/min respectively.
Figure 8: Oxygen delivery at rest and at moderate exercise. Post-intervention
values are shown as the pulmonary resistance decreases in order to simulate
post-intervention pulmonary vascular remodeling. The pre-intervention value is
shown as a baseline for comparison. The values used for oxygen consumption at
rest and at moderate exercise are $M$ = 16.8 mmol/min and $M$ = 33.44 mmol/min
respectively.
### 3.3 Shunt flow waveforms
In this section we examine the flow waveforms through the VSD and Potts shunt.
Shunt sizes of 0.3 cm2 are considered for both cases. Figure 9 depicts the
mean shunt flow, as the pulmonary resistance varies, for both interventions.
Both rest and exercise are considered. For each shunt, the mean flow switches
from right-to-left to left-to-right as the pulmonary resistance decreases. For
small enough pulmonary resistances corresponding to favorable remodeling, the
mean shunt flow waveforms are left-to-right in both rest and exercise. As
mentioned above, it might be appropriate in this scenario to close the shunt,
since it does not serve any therapeutic benefit. To examine features of the
shunt flow waveform, we fix the pulmonary resistance to 16 mmHg/(L/min), a
value close to the transition in the mean shunt flow direction (refer to
Figure 9). Note that at this value for the pulmonary resistance, the oxygen
saturation and delivery levels for the VSD and Potts shunt surpass the pre-
intervention values; refer to Figures 7 and 8. In Figure 10, VSD and Potts
shunt flow waveforms are shown, with the rest cases on the left and the
exercise cases on the right. The black solid line in Figure 10 indicates the
mean flow value for the waveform over a cardiac cycle. The elastance function
for the ventricles is shown in the bottom panel for reference to the cardiac
cycle. Note that shunt flows in all cases are complex and bidirectional.
However, there is more right-to-left flow during exercise for both the VSD and
Potts shunt. For this particular value of the pulmonary resistance, the mean
shunt flow is left-to-right at rest and right-to-left during exercise. This
finding is consistent with evidence from post-intervention RPH patients who
have normal arterial saturations at rest and lower-than-normal saturations
during exercise.
As seen in the shunt flow waveforms, bidirectional flow plays an important
role in this study. It is an important feature of our methodology that it
evaluates the shunt flow as a function of time, and not merely the mean flow.
This is because bidirectional flow can exchange oxygen between two
compartments even when there is no mean shunt flow at all, and this can have a
substantial impact on oxygen transport. Indeed, in the congenital heart
disease called transposition of the great arteries, the pulmonary and systemic
circulations form parallel loops, and there cannot be any mean flow from one
to the other. Survival of the patient after birth is then completely dependent
on the existence of a bidirectional shunt [31].
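The point that a bidirectional waveform can carry oxygen even with zero mean flow can be made concrete with a small sketch (illustrative only; the sign convention, right-to-left positive, is an assumption):

```python
import numpy as np

def flow_components(q):
    """Decompose a shunt flow waveform into its mean and the mean magnitudes
    of the right-to-left (q > 0) and left-to-right (q < 0) components."""
    q = np.asarray(q, dtype=float)
    mean_flow = float(np.mean(q))
    right_to_left = float(np.mean(np.clip(q, 0.0, None)))
    left_to_right = float(-np.mean(np.clip(q, None, 0.0)))
    return mean_flow, right_to_left, left_to_right

# A purely oscillatory waveform: the mean shunt flow vanishes, yet both
# directional components are substantial, so oxygen can still be exchanged.
t = np.linspace(0.0, 1.0, 100, endpoint=False)
mean_q, rl, lr = flow_components(np.sin(2.0 * np.pi * t))
print(abs(mean_q) < 1e-9, rl > 0.3, lr > 0.3)  # True True True
```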
## 4 Limitations
The specific results reported above may certainly depend upon the specific
parameters chosen. In order to apply the methodology of this paper with
confidence to any particular patient, it will be necessary to identify the
relevant cardiovascular parameters of that patient. Making the model patient-
specific is also a way to test the validity of the model, since the pre-
operative state of a patient can be used to identify patient-specific
parameters, and then the model can be used to predict what the immediately
post-operative state of the patient will be. Comparison with the actual post-
operative state will then be a strong test of the model. Future work will
therefore be directed toward the development of a methodology for identifying
the model parameters that correspond best to the state of a particular
patient, so that the model can be made useful in clinical practice. Another
limitation of our models is the representation of the systemic arteries as a
single compartment. In the case of a VSD, this shunt results in blood mixing
in the ventricles, and consequently, desaturated blood is delivered to the
brain, coronary arteries, and lower body. In contrast, the Potts shunt
delivers desaturated blood only to the lower body, while fully saturated blood
is delivered to the brain and heart. This is by virtue of the fact that mixing
occurs downstream from the carotid arteries. This important detail could be
studied by constructing a more complex model with additional compartments or
by post-processing data from our current model with knowledge of upper and
lower systemic arterial compartment blood volumes.
Figure 9: Mean shunt flow for the VSD and Potts shunt, at rest ($M$ = 16.8
mmol/min) and at moderate exercise ($M$ = 33.44 mmol/min), as the pulmonary
resistance is varied. The shunt size in both cases is 0.3 cm2. The star-shaped
marker corresponds to a pulmonary resistance value of $R_{P}$ = 16
mmHg/(L/min), which is used for the waveforms in Figure 10.
Figure 10: Shunt flow waveforms at rest ($M$ = 16.8 mmol/min) on the left and
at moderate exercise ($M$ = 33.44 mmol/min) on the right. The bottom two
figures show the elastance function of the ventricle for reference to the
cardiac cycle. Note the different time scales; the heart rate is 80 beats/min
for the resting case (left) and 95.6 beats/min for the exercise case (right).
The shunt size in both cases is 0.3 cm2 and the pulmonary resistance is
$R_{P}$ = 16 mmHg/(L/min). The black solid line represents the mean flow over
a cardiac cycle of each shunt flow waveform.
## 5 Conclusions
In this paper, we have presented a methodology that can be used to study
surgical interventions that are designed to alleviate the detrimental effects
of refractory pulmonary hypertension. We have illustrated the use of this
methodology by comparing three such interventions, all of which are designed
to allow some blood flow to bypass the lungs: an atrial septal defect, a
ventricular septal defect, and a Potts shunt. For each intervention, we have
simulated a range of defect sizes from 0 to 1 cm2. Our results are that the
ASD is ineffective at lowering blood pressure in the pulmonary artery, but
that the VSD and Potts shunt are both effective, with a greater effect being
produced by the Potts shunt. These results are consistent with the fact that
an ASD is volume-unloading while a VSD or Potts shunt is pressure-unloading.
Both the VSD and Potts shunt lower the systemic arterial oxygen saturation in
our study, but this is partially compensated by an increase in systemic flow,
so that oxygen delivery to the systemic tissues is lowered to a lesser degree
than the systemic arterial oxygen saturation. The increase in systemic flow is
slightly greater for the VSD than for the Potts shunt, with the result that
oxygen delivery is slightly increased only in the VSD case. Oxygen delivery is
reduced in the case of the Potts shunt due to the substantial oxygen
saturation reduction for the Potts shunt compared to the VSD. With respect to
exercise, both the VSD and Potts shunt showed a reduction in exercise
tolerance compared to the pre-intervention case. We found that post-intervention
pulmonary vascular remodeling, leading to a drop in pulmonary resistance,
explained the anecdotal increase in exercise tolerance that is seen in some
RPH patients. When judged by reduction of pulmonary arterial pressure alone,
the Potts shunt appears to perform better. When considering oxygen delivery
and exercise tolerance, the VSD appears to be the best choice. As mentioned in
Section 4, it is important to keep in mind that the Potts shunt has the
advantage of delivering fully saturated blood to the brain. The above effects
are quantified in our study as functions of the size of the defect in each
case, and this kind of information could be useful to a clinician who needs to
decide how large a defect to create.
## 6 Acknowledgements
Charles Puelz was supported in part by the Research Training Group in Modeling
and Simulation funded by the National Science Foundation via grant
RTG/DMS-1646339.
## References
* [1] S Acosta, C Puelz, B Rivière, DJ Penny, KM Brady, and CG Rusin. Cardiovascular mechanics in the early stages of pulmonary hypertension: a computational study. Biomechanics and Modeling in Mechanobiology, 16(6):2093–2112, 2017.
* [2] AE Baruteau, E Belli, Y Boudjemline, D Laux, M Lévy, G Simonneau, A Carotti, M Humbert, and D Bonnet. Palliative Potts shunt for the treatment of children with drug-refractory pulmonary arterial hypertension: updated data from the first 24 patients. European Journal of Cardio-Thoracic Surgery, 47(3):e105–e110, 2015.
* [3] S Bot and P Hollander. The relationship between heart rate and oxygen uptake during non-steady state exercise. Ergonomics, 43:1578–1592, 2000.
* [4] D Chemla, V Castelain, P Herve, Y Lecarpentier, and S Brimioulle. Haemodynamic evaluation of pulmonary hypertension. European Respiratory Journal, 20(5):1314–1331, 2002.
* [5] T Delhaas, Y Koeken, H Latus, C Apitz, and D Schranz. Potts shunt to be preferred above atrial septostomy in pediatric pulmonary arterial hypertension patients: a modeling study. Frontiers in Physiology, 9:1252, 2018.
* [6] CD Etz, HA Welp, TDT Tjan, A Hoffmeier, E Weigang, HH Scheld, and C Schmid. Medically refractory pulmonary hypertension: treatment with nonpulsatile left ventricular assist devices. The Annals of Thoracic Surgery, 83(5):1697–1705, 2007.
* [7] DG Fitzjerrell, R White, and RC Croston. Cardiovascular modelling: simulating the human response to exercise, lower body negative pressure, zero gravity and clinical conditions, volume 5, pages 195–229, 1983.
* [8] JW Gerringer, JC Wagner, D Vélez-Rendón, and D Valdez-Jasso. Lumped-parameter models of the pulmonary vasculature during the progression of pulmonary arterial hypertension. Physiological Reports, 6(3):e13586, 2018.
* [9] S Giusca, E Popa, MS Amzulescu, I Ghiorghiu, IM Coman, BA Popescu, M Delcroix, JU Voigt, C Ginghina, and R Jurcut. Is right ventricular remodeling in pulmonary hypertension dependent on etiology? an echocardiographic study. Echocardiography, 33(4):546–554, 2016.
* [10] FC Hoppensteadt and CS Peskin. Modeling and simulation in medicine and the life sciences. Springer Science & Business Media, 2012.
* [11] G Kovacs, A Olschewski, A Berghold, and H Olschewski. Pulmonary vascular resistances during exercise in normal subjects: a systematic review. European Respiratory Journal, 39(2):319–328, 2012.
* [12] Timothy S Lancaster, Shabana Shahanavaz, David T Balzer, Stuart C Sweet, R Mark Grady, and Pirooz Eghtesady. Midterm outcomes of the potts shunt for pediatric pulmonary hypertension, with comparison to lung transplant. The Journal of Thoracic and Cardiovascular Surgery, 161(3):1139–1148, 2021.
* [13] JA Leopold. Catheter-based therapies for patients with medication-refractory pulmonary arterial hypertension. Circulation: Cardiovascular Interventions, 8(11):e003332, 2015.
* [14] M Levy, DS Celermajer, E Bourges-Petit, MJ Del Cerro, F Bajolle, and D Bonnet. Add-on therapy with subcutaneous treprostinil for refractory pediatric pulmonary hypertension. The Journal of Pediatrics, 158(4):584–588, 2011.
* [15] F Liang, S Takagi, R Himeno, and H Liu. Multi-scale modeling of the human cardiovascular system with applications to aortic valvular and arterial stenoses. Medical and Biological Engineering and Computing, 47(7):743–755, 2009.
* [16] JP Mynard, MR Davidson, DJ Penny, and JJ Smolich. A simple, versatile valve model for use in lumped parameter and one-dimensional cardiovascular models. International Journal for Numerical Methods in Biomedical Engineering, 28(6-7):626–641, 2012.
* [17] N Naderi. Chapter 11 - Hemodynamic Study. In Majid Maleki, Azin Alizadehasl, and Majid Haghjoo, editors, Practical Cardiology, pages 183 – 191. Elsevier, 2018.
* [18] AV Noordegraaf, KM Chin, F Haddad, PM Hassoun, AR Hemnes, SR Hopkins, SM Kawut, D Langleben, J Lumens, and R Naeije. Pathophysiology of the right ventricle and of the pulmonary circulation in pulmonary hypertension: an update. European Respiratory Journal, 53(1), 2019.
* [19] CS Peskin and C Tu. Hemodynamics in congenital heart disease. Computers in Biology and Medicine, 16(5):331–359, 1986.
* [20] MU Qureshi, MJ Colebank, LM Paun, LE Fix, N Chesler, MA Haider, NA Hill, D Husmeier, and MS Olufsen. Hemodynamic assessment of pulmonary hypertension in mice: a model-based analysis of the disease mechanism. Biomechanics and Modeling in Mechanobiology, 18(1):219–243, 2019.
* [21] MU Qureshi, GDA Vaughan, C Sainsbury, M Johnson, CS Peskin, MS Olufsen, and NA Hill. Numerical simulation of blood flow and pressure drop in the pulmonary arterial and venous circulation. Biomechanics and Modeling in Mechanobiology, 13(5):1137–1154, 2014.
* [22] MK Rausch, A Dam, S Göktepe, OJ Abilez, and E Kuhl. Computational modeling of growth: systemic and pulmonary hypertension in the heart. Biomechanics and Modeling in Mechanobiology, 10(6):799–811, 2011.
* [23] T Reynolds. The determination of aortic valve area by the gorlin formula: what the cardiac sonographer should know. Journal of the American Society of Echocardiography, 3(4):331–335, 1990.
* [24] AK Roy, SP Gaine, and KP Walsh. Percutaneous atrial septostomy with modified butterfly stent and intracardiac echocardiographic guidance in a patient with syncope and refractory pulmonary arterial hypertension. Heart, Lung and Circulation, 22(8):668–671, 2013.
* [25] JJ Ryan and SL Archer. The right ventricle in pulmonary arterial hypertension: disorders of metabolism, angiogenesis and adrenergic signaling in right ventricular failure. Circulation Research, 115(1):176–188, 2014.
* [26] K Ryo, A Goda, T Onishi, A Delgado-Montero, B Tayal, HC Champion, MA Simon, MA Mathier, MT Gladwin, and J Gorcsan III. Characterization of right ventricular remodeling in pulmonary hypertension associated with patient outcomes by 3-dimensional wall motion tracking echocardiography. Circulation: Cardiovascular Imaging, 8(6):e003176, 2015.
* [27] N Stergiopulos, JJ Meister, and N Westerhof. Determinants of stroke volume and systolic and diastolic aortic pressure. American Journal of Physiology-Heart and Circulatory Physiology, 270(6):H2050–H2059, 1996.
* [28] S Stickel, W Gin-Sing, M Wagenaar, and JSR Gibbs. The practical management of fluid retention in adults with right heart failure due to pulmonary arterial hypertension. European Heart Journal Supplements, 21(Supplement_K):K46–K53, 2019.
* [29] XG Sun, JE Hansen, RJ Oudiz, and K Wasserman. Exercise pathophysiology in patients with primary pulmonary hypertension. Circulation, 104:429–435, 2001.
* [30] S Tewari, S Bugenhagen, Z Wang, D Schreier, B Carlson, N Chesler, and D Beard. Analysis of cardiovascular dynamics in pulmonary hypertensive c57bl6/j mice. Frontiers in Physiology, 4:355, 2013.
* [31] C Tu and CS Peskin. Hemodynamics in transposition of the great arteries with comparison to ventricular septal defect. Computers in Biology and Medicine, 19(2):95–128, 1989.
* [32] J Widrich and M Shetty. Physiology, pulmonary vascular resistance. In StatPearls [Internet]. StatPearls Publishing, 2020.
# Fully Geant4 compatible package for the simulation of Dark Matter in fixed
target experiments
M. Bondi1, A. Celentano1111e-mail<EMAIL_ADDRESS>R. R.
Dusaev2222e-mail<EMAIL_ADDRESS>D. V. Kirpichnikov3333e-mail:
<EMAIL_ADDRESS>
M. M. Kirsanov3444e-mail<EMAIL_ADDRESS>N. V.
Krasnikov3,4555e-mail<EMAIL_ADDRESS>L. Marsicano1, D. Shchukin5
1 Istituto Nazionale di Fisica Nucleare, Sezione di Genova, 16146 Genova,
Italy
2 Tomsk Polytechnic University, 634050 Tomsk, Russia
3 Institute for Nuclear Research of the Russian Academy of Sciences,
117312 Moscow, Russia
4 Joint Institute for Nuclear Research, 141980 Dubna, Russia
5 P.N. Lebedev Physical Institute, Moscow, Russia, 119991 Moscow, Russia
(August 27, 2024)
###### Abstract
We present the package for the simulation of DM (Dark Matter) particles in
fixed target experiments. The most convenient way of this simulation (and the
only possible way in the case of beam-dump) is to simulate it in the framework
of the Monte-Carlo program performing the particle tracing in the experimental
setup. The Geant4 toolkit framework was chosen as the most popular and
versatile solution nowadays.
Specifically, the package includes the codes for the simulation of the
processes of DM particles production via electron and muon bremsstrahlung off
nuclei, resonant in-flight positron annihilation on atomic electrons and gamma
to ALP (axion-like particles) conversion on nuclei. Four types of DM mediator
particles are considered: vector, scalar, pseudoscalar and axial vector. The
total cross sections of bremsstrahlung processes are calculated numerically at
exact tree level (ETL).
The code handles both the case of invisible DM decay and of visible decay into
$e^{+}e^{-}$ ($\mu^{+}\mu^{-}$ for $Z^{\prime}$, $\gamma\gamma$ for ALP).
The proposed extension implements native Geant4 application programming
interfaces (API) designed for these needs and can be unobtrusively embedded
into the existing applications.
As an example of its usage, we discuss the results obtained from the
simulation of a typical active beam-dump experiment. We consider $5\times
10^{12}$ 100 GeV electrons impinging on a lead/plastic heterogeneous
calorimeter playing the role of an active thick target. The expected sensitivity
of the experiment to the four types of DM mediator particles mentioned above
is then derived.
## Program summary
_Program title:_ DMG4
_CPC Library link to program files:_
_Code Ocean capsule:_
_Licensing provisions:_ GNU General Public License 3 (GPL)
_Programming language:_ C++
_Nature of problem:_ The optimal way to simulate Dark Matter production
processes in fixed target experiments in most cases is to do it inside the
program for the full simulation of the experimental setup and not separately,
in event generators. The code that can be easily embedded in such programs is
needed. The code should be able to simulate various DM production processes
that happen in a thick target, in particular on nuclei, with maximal accuracy.
_Solution method:_ We created a Geant4 compatible DM simulation package for
this purpose. The choice of this simulation framework is suggested by its
popularity and versatility. The code includes the cross sections precalculated
at exact tree level for a wide variety of DM particles.
## 1 Introduction
Models with light Dark Matter (DM) particles are very popular in the searches
for physics beyond the Standard Model (SM). The light dark matter (LDM)
hypothesis conjectures the existence of a new class of lighter elementary
particles, not charged under the SM interactions. The simplest model predicts
LDM particles (denoted as $\chi$) with masses below 1 GeV/c2, charged under a
new force in Nature and interacting with the SM particles via the exchange of
a light mediator. In the simplest model, the mediator is a $1^{-}$ vector
boson, usually referred to as “heavy photon” or “dark photon” [1]. However,
relevant model variations correspond to different mediator quantum number
assignments. This picture thus foresees the existence of a new “Dark Sector”
in Nature, with its own particles and interactions, and is compatible with the
well-motivated hypothesis of DM thermal origin. A complete introduction to
this subject can be found, for example, in the 2017 US Cosmic Visions
community report [2], or in the 2019 CERN Physics Beyond Colliders report [3].
Accelerator-based thick-target experiments at moderate beam energy ($\sim$10–100 GeV)
are an ideal tool to probe this hypothesis, since they
have a very large discovery potential in a wide area of parameter space. On
the other hand, direct detection efforts typically show a limited sensitivity
to LDM due to the very low energy of the recoil, often lower than the
detection threshold.
In many cases such searches are performed in (active) beam-dump experiments
[4, 5, 6, 7]. In these experiments, many different processes can result in DM
production inside the thick target with initial particles at a wide spectrum
of energies and topologies, due to the production of secondaries from the
primary impinging particle. Therefore, the optimal way to simulate these
processes is to do it inside the program for the full simulation of the
experimental setup, to account for the correlations among the initial-state
particles' kinematic variables and for the dependence of the production cross
sections on them.
We created a Geant4 compatible package for the simulation of various types of
DM production. The choice of this simulation framework was suggested by the
fact that, today, it is the most versatile and mature toolkit for full
simulation programs used in HEP experiments [8], designed to support the full
lifecycle of HEP experiments. The package is named _DMG4_. The code follows
the Geant4 API conventions as closely as possible.
## 2 DMG4 package structure
The DMG4 package is a cohesive set of DM particle definition classes, DM
process classes and the DM physics class that assembles all together.
Historically, it includes a separate package _DarkMatter_ with a collection of
cross section calculation routines. This package was used previously through
the Geant4 classes inherited from G4UserSteppingAction and G4UserRunAction.
The package structure is illustrated in Figure 1. The new particles introduced
so far in the package are listed in Table 1. The PDG codes are assigned
according to the slightly extended rules of [9].
Figure 1: Component diagram of the DMG4 package.
Table 1: DM particles defined in the package DMG4.
Name | PDG ID | emitted by | spin | parity | stable? | decay
---|---|---|---|---|---|---
DMParticleAPrime | 5500022 | $e^{+},e^{-}$ | 1 | 1 | true | -
DMParticleXBoson | 5500122 | $e^{+},e^{-}$ | 1 | 1 | false | $e^{+}e^{-}$
DMParticleScalar | 5400022 | $e^{+},e^{-}$ | 0 | 1 | true | -
DMParticleXScalar | 5400122 | $e^{+},e^{-}$ | 0 | 1 | false | $e^{+}e^{-}$
DMParticlePseudoScalar | 5410022 | $e^{+},e^{-}$ | 0 | -1 | true | -
DMParticleXPseudoScalar | 5410122 | $e^{+},e^{-}$ | 0 | -1 | false | $e^{+}e^{-}$
DMParticleAxial | 5510022 | $e^{+},e^{-}$ | 1 | -1 | true | -
DMParticleZPrime | 5500023 | $\mu$ | 1 | 1 | true | -
DMParticleALP | 5300122 | $\gamma$ | 0 | -1 | false | $\gamma\gamma$
The dark sector particles that are used for the missing energy signature
simulations are assumed to be stable, although in full models they could decay
into other dark matter particles. However, this is unimportant as long as they
are also invisible and carry away energy; for this reason, in the following
we will generically call “dark matter” the dark-sector particles produced in
the detector. The extension to the case of partly visible DM decay products
that could be observed through cascade decays is straightforward.
The current version of DMG4 package contains the following processes of DM
production:
* •
Bremsstrahlung-like process of the type $bN\to bNX$, where $b$ is a projectile
($e^{-}$, $e^{+}$, $\mu^{-}$ or $\mu^{+}$) and $X$ is a DM particle
* •
Primakoff process of photon conversion $\gamma N\to aN$, where $a$ is an
axion-like particle (ALP) [10]
* •
Resonant in-flight positron annihilation on atomic electrons
$e^{+}e^{-}\rightarrow X\rightarrow\chi\chi$, where $\chi$ is a dark matter
mediator decay product [11].
In the latter case, the DM particle $X$ acts as an $s$-channel intermediate
resonance, with a non-zero intrinsic width due to the decay to final state
invisible particles. For missing energy signature simulations, as discussed
before, the role of the decay products $\chi$ is the same as the role of the
DM particle $X$ in the previous production mechanisms, since they carry away
energy from the active target without being detected.
The physics for a simulation run is configured in the function
DarkMatterPhysicsConfigure called from the constructor of the factory class
DarkMatterPhysics. One has to create an instance of one of the concrete
classes corresponding to the needed process and derived from the base class
DarkMatter, for example DarkPhotons. The factory then instantiates and
registers the needed particles and processes provided by the DMG4 package in
terms of the native Geant4 API. The required parameters include the mixing
parameter $\epsilon$ and the minimal (cut-off) energy of particles that can
initiate the processes of DM production. The latter is needed to avoid
simulating very soft DM particles that are anyway undetectable.
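As a rough illustration of this factory pattern, the self-contained C++ sketch below mimics the flow described above. The class names follow the text, but all signatures, member names and numerical values are assumptions introduced for exposition and do not reproduce the exact DMG4 API.

```cpp
// Self-contained illustration only: class names follow the text, but all
// signatures, members and values here are assumptions, not the DMG4 API.
struct DarkMatter {                     // stand-in for the DMG4 base class
  double epsilon;                       // mixing parameter
  double eCutoffGeV;                    // minimal energy able to initiate DM production
  DarkMatter(double eps, double eCut) : epsilon(eps), eCutoffGeV(eCut) {}
  virtual ~DarkMatter() {}
};

struct DarkPhotons : DarkMatter {       // stand-in for a concrete process class
  DarkPhotons(double eps, double eCut) : DarkMatter(eps, eCut) {}
};

// In DMG4, the analogous choice is made inside DarkMatterPhysicsConfigure(),
// called from the constructor of the factory class DarkMatterPhysics, which
// then registers the needed particles and processes with Geant4.
DarkMatter* DarkMatterPhysicsConfigure()
{
  return new DarkPhotons(1.e-4 /* epsilon */, 1.0 /* GeV cut-off */);
}
```

Selecting a different process (e.g. an ALP instead of a dark photon) would amount to instantiating a different concrete class at this single point.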
As in many other Geant4 physics classes, there is a parameter that can bias
the production cross section, i.e. increase it in such a way that the fraction
of events with DM production is not too small. Simulation without biasing is
practically impossible: for physically interesting values of $\epsilon$ one
would have to simulate far too many events to obtain sufficient statistics. At
the same time, the fraction of events with DM production should remain
significantly smaller than 1, otherwise the energy and coordinate
distributions can be distorted. It is recommended to keep this fraction below
0.07 in any case, and below 0.03 for some processes.
The _DarkMatter_ package contains the routines that calculate the cross
sections, total and differential. This is explained in more detail in the
next section.
## 3 Package DarkMatter and ETL cross sections
The formulas for the cross sections, total and differential, implemented in
the package are derived for different cases. For the bremsstrahlung-like and
the $e^{+}e^{-}$ annihilation processes we consider the following scenarios,
with different quantum number assignments for the DM mediator particles [12,
13], assuming for simplicity that all other DM particles $\chi$, coupled only
to these mediators, are fermions.
Vector case:
$\mathcal{L}\supset\mathcal{L}_{SM}-\frac{1}{4}V_{\mu\nu}^{2}+\frac{1}{2}m_{V}^{2}V_{\mu}^{2}+\sum_{\psi}e\epsilon_{V}V_{\mu}\bar{\psi}\gamma^{\mu}\psi+g^{D}_{V}V_{\mu}\bar{\chi}\gamma^{\mu}\chi+\bar{\chi}(i\gamma^{\mu}\partial_{\mu}-m_{\chi})\chi$
(1)
Axial vector case:
$\mathcal{L}\supset\mathcal{L}_{SM}-\frac{1}{4}A_{\mu\nu}^{2}+\frac{1}{2}m_{A}^{2}A_{\mu}^{2}+\sum_{\psi}e\epsilon_{A}A_{\mu}\bar{\psi}\gamma_{5}\gamma^{\mu}\psi+g^{D}_{A}A_{\mu}\bar{\chi}\gamma_{5}\gamma^{\mu}\chi+\bar{\chi}(i\gamma^{\mu}\partial_{\mu}-m_{\chi})\chi$
(2)
Scalar case:
$\mathcal{L}\supset\mathcal{L}_{SM}+\frac{1}{2}(\partial_{\mu}S)^{2}-\frac{1}{2}m_{S}^{2}S^{2}+\sum_{\psi}e\epsilon_{S}S\bar{\psi}\psi+g^{D}_{S}S\bar{\chi}\chi+\bar{\chi}(i\gamma^{\mu}\partial_{\mu}-m_{\chi})\chi$
(3)
Pseudo-scalar case:
$\mathcal{L}\supset\mathcal{L}_{SM}+\frac{1}{2}(\partial_{\mu}P)^{2}-\frac{1}{2}m_{P}^{2}P^{2}+\sum_{\psi}ie\epsilon_{P}P\bar{\psi}\gamma_{5}\psi+g^{D}_{P}P\bar{\chi}\gamma_{5}\chi+\bar{\chi}(i\gamma^{\mu}\partial_{\mu}-m_{\chi})\chi,$
(4)
where $\epsilon_{V},\epsilon_{A},\epsilon_{S},\epsilon_{P}$ are the mixing (or
coupling) parameters, and $m_{V},m_{A},m_{S},m_{P}$ are the masses of the mediators.
For ALPs, instead, we consider the simplified model [14] with ALP coupling
predominantly to photons:
$\mathcal{L}_{int}\supset-\frac{1}{4}g_{a\gamma\gamma}aF_{\mu\nu}\tilde{F}^{\mu\nu}+\frac{1}{2}(\partial_{\mu}a)^{2}-\frac{1}{2}m_{a}^{2}a^{2},$
(5)
where $F_{\mu\nu}$ denotes the strength of the photon field, and the dual
tensor is defined by
$\tilde{F}_{\mu\nu}=\frac{1}{2}\epsilon_{\mu\nu\lambda\rho}F^{\lambda\rho}$.
We assume that the effective coupling, $g_{a\gamma\gamma}$, and the ALP mass,
$m_{a}$, are independent.
For the electron bremsstrahlung process, the simulation package contains the
analytical expressions for the cross sections, total and differential, derived
in the IWW (improved Weizsäcker-Williams) approximation [4]. However, as
discussed already in [15], these can be rather inexact in some regions of
parameter space. For this reason, the package contains the tabulated K-factors
that correct the total cross sections to the values calculated in the ETL
(exact tree-level) limit [12, 13, 15]. The total ETL cross sections were
pre-calculated with the symbolic computation software Mathematica [16].
Compared to [15], we extended the tables of K-factors to the cases of
scalar, pseudoscalar and axial-vector DM mediator particles. At runtime, the
total cross section is obtained from the tabulated values by interpolation.
The differential cross-section formulas are given in Appendix A. Tabulated
differential cross sections are also used in some limited regions where the
difference is significant.
For the $e^{+}e^{-}$ annihilation process the following expression for the
production cross section is implemented in the code:
$\sigma_{e^{+}e^{-}}=\frac{4\pi\alpha_{EM}\alpha_{D}\varepsilon^{2}}{\sqrt{s}}q\frac{\mathcal{K}}{(s-m_{X}^{2})^{2}+\Gamma_{X}^{2}m_{X}^{2}}\;\;$
(6)
where $s$ is the squared invariant mass of the $e^{+}e^{-}$ system, $m_{X}$ the mass
of the intermediate DM particle (where $X=V,A,S,P$),
$q=\frac{\sqrt{s}}{2}\sqrt{1-\frac{4m_{\chi}^{2}}{s}}$, $\Gamma_{X}$ is the
intermediate DM particle decay width to dark particles $\chi$, discussed in
the following, $\alpha_{EM}$ is the electromagnetic fine structure constant,
and $\alpha_{D}\equiv\frac{\left(g^{D}_{X}\right)^{2}}{4\pi}$ is the coupling
squared to the dark particles $\chi$. Finally, $\mathcal{K}$ is a kinematic
factor that reads, respectively, $(s-\frac{4}{3}q^{2})$ for the vector DM,
$\frac{8}{3}q^{2}$ for the axial vector case, $2q^{2}$ for the scalar case,
and $\frac{s}{2}$ for the pseudo-scalar case. These expressions correspond to
the exact tree-level calculation, with the replacement
$(s-m^{2}_{X})^{2}\rightarrow(s-m^{2}_{X})^{2}+\Gamma_{X}^{2}m^{2}_{X}$ in the
last denominator to regulate the tree-level cross-section divergence at the
resonance pole.
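Eq. (6) translates directly into code. A minimal sketch for the vector-mediator case (kinematic factor $\mathcal{K}=s-\tfrac{4}{3}q^{2}$), with all numerical inputs illustrative:

```cpp
#include <cmath>

// Minimal transcription of Eq. (6), vector-mediator case; all inputs are in
// the same (arbitrary but consistent) energy units, values illustrative.
double sigmaAnnihilationVector(double s, double mX, double GammaX,
                               double mChi, double eps, double alphaD)
{
  const double PI = std::acos(-1.0);
  const double alphaEM = 1.0/137.035999;               // fine-structure constant
  double q  = 0.5*std::sqrt(s)*std::sqrt(1.0 - 4.0*mChi*mChi/s);
  double K  = s - (4.0/3.0)*q*q;                       // vector kinematic factor
  double bw = (s - mX*mX)*(s - mX*mX) + GammaX*GammaX*mX*mX; // regulated pole
  return 4.0*PI*alphaEM*alphaD*eps*eps/std::sqrt(s) * q * K / bw;
}
```

The $\Gamma_{X}^{2}m_{X}^{2}$ term in the denominator keeps the cross section finite on the resonance pole $s=m_{X}^{2}$, where it is largest.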
The following tree-level expressions for the decay widths are implemented. For
the visible decay width, valid for $m_{X}>2m_{e}$, the
vector, axial-vector, scalar, and pseudo-scalar cases read:
$\displaystyle\Gamma_{V\rightarrow
e^{-}e^{+}}=\frac{\alpha_{QED}\epsilon^{2}}{3}m_{V}\bigl{(}1+\frac{2m_{e}^{2}}{m_{V}^{2}}\bigr{)}\sqrt{1-\frac{4m_{e}^{2}}{m_{V}^{2}}},$
(7) $\displaystyle\Gamma_{A\rightarrow
e^{-}e^{+}}=\frac{\alpha_{QED}\epsilon^{2}}{3}m_{A}\left(1-\frac{4m_{e}^{2}}{m_{A}^{2}}\right)^{3/2},$
(8) $\displaystyle\Gamma_{S\rightarrow
e^{-}e^{+}}=\frac{\alpha_{QED}\epsilon_{S}^{2}}{2}m_{S}\left(1-\frac{4m_{e}^{2}}{m_{S}^{2}}\right)^{3/2},$
(9) $\displaystyle\Gamma_{P\rightarrow
e^{-}e^{+}}=\frac{\alpha_{QED}\epsilon_{P}^{2}}{2}m_{P}\left(1-\frac{4m_{e}^{2}}{m_{P}^{2}}\right)^{1/2}$
(10)
For the invisible decay width we have instead:
$\displaystyle\Gamma_{V\rightarrow\bar{\chi}\chi}=\frac{\alpha_{D}}{3}m_{V}\bigl{(}1+\frac{2m_{\chi}^{2}}{m_{V}^{2}}\bigr{)}\sqrt{1-\frac{4m_{\chi}^{2}}{m_{V}^{2}}},$
(11)
$\displaystyle\Gamma_{A\rightarrow\bar{\chi}\chi}=\frac{\alpha_{D}}{3}m_{A}\left(1-\frac{4m_{\chi}^{2}}{m_{A}^{2}}\right)^{3/2},$
(12)
$\displaystyle\Gamma_{S\rightarrow\bar{\chi}\chi}=\frac{\alpha_{D}}{2}m_{S}\left(1-\frac{4m_{\chi}^{2}}{m_{S}^{2}}\right)^{3/2},$
(13)
$\displaystyle\Gamma_{P\rightarrow\bar{\chi}\chi}=\frac{\alpha_{D}}{2}m_{P}\left(1-\frac{4m_{\chi}^{2}}{m_{P}^{2}}\right)^{1/2}.$
(14)
The ALP coupled to photons (5) has the following decay width
$\Gamma_{a\rightarrow\gamma\gamma}=\frac{g_{a\gamma\gamma}^{2}m_{a}^{3}}{64\pi}.$
(15)
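As a sketch, Eq. (7) and Eq. (15) can be implemented as follows; the input values used below for checking are illustrative only. Note the characteristic $m_{a}^{3}$ scaling of the ALP width.

```cpp
#include <cmath>

// Eq. (7): visible width of the vector mediator, valid for mV > 2*me.
double gammaVee(double mV, double eps, double me)
{
  const double alphaQED = 1.0/137.035999;
  return alphaQED*eps*eps/3.0 * mV * (1.0 + 2.0*me*me/(mV*mV))
         * std::sqrt(1.0 - 4.0*me*me/(mV*mV));
}

// Eq. (15): ALP width to two photons, proportional to the cube of the mass.
double gammaALP(double gAgg, double ma)
{
  const double PI = std::acos(-1.0);
  return gAgg*gAgg*ma*ma*ma/(64.0*PI);
}
```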
## 4 Calculation of sensitivity of a typical active beam-dump experiment to
various types of DM particles
We used the DMG4 package described above to calculate the sensitivity to
various types of DM of a typical experiment that uses a missing energy
signature in an electron beam, comparing them for the same beam energy and
EOT (number of electrons on target). We define the sensitivity as the expected
90% C.L. upper limit on the parameter $\epsilon$ in the case of no signal and
very small background. We perform the calculations for a typical CERN SPS
electron beam energy of 100 GeV, with a lead/plastic electromagnetic
calorimeter (ECAL) [17] as the active target.
As only one of the scenarios defined in Section 3 can be chosen for a single
simulation run of the package, in the following instead of
$\epsilon_{V},\epsilon_{A},\epsilon_{S},\epsilon_{P}$ we use simply
$\epsilon$.
In these estimations a signal event is an event with energy deposition in the
ECAL smaller than 50 GeV and no significant energy deposition (less than 1
GeV) in the hadron calorimeter installed downstream of the ECAL. The number of
such signal events (signal yield in the following) produced in the electron
beam for the mixing parameter $\epsilon=10^{-4}$, calculated for the vector DM
(dark photon) and pseudoscalar DM according to cross sections from the package
DarkMatter, is shown in Table 2 and Figure 3. In these calculations only
bremsstrahlung processes are taken into account. The difference between vector
and pseudoscalar particles is significant.
Table 2: Comparison of the signal yields for the vector (VC) and pseudoscalar (PS) cases, per $10^{10}$ EOT. The cross-section ratio calculated for an electron energy of 100 GeV is also shown.
$M_{A}$ [MeV] | $N^{VC}_{sign}$ | $N^{PS}_{sign}$ | $N^{VC}_{sign}/N^{PS}_{sign}$ | $\sigma^{VC}_{tot}/\sigma^{PS}_{tot}$
---|---|---|---|---
1.1 | 24.0 | 5.85 | 4.1 | 4.12
2 | 14.3 | 4.41 | 3.2 | 3.53
4 | 5.23 | 1.99 | 2.6 | 3.114
16.7 | 0.516 | 0.205 | 2.51 | 2.66
20 | 0.41 | 0.16 | 2.5 | 2.64
100 | 0.015 | 0.0066 | 2.3 | 2.47
500 | 0.00035 | 0.00016 | 2.2 | 2.39
900 | 0.00005685 | 0.0000241 | 2.36 | 2.34
The difference in the signal yield between vector and axial-vector DM is
rather small; between scalar and pseudoscalar DM it is even smaller. We show
them separately in Figure 4. The difference is significant only for masses
below 4 MeV.
We calculated the sensitivity of the missing energy signature fixed target
experiment to light DM particles for the statistics corresponding to $5\times
10^{12}$ EOT, assuming background-free conditions and 100% efficiency of
the experiment. The result for the vector and pseudoscalar mediators, with
only bremsstrahlung processes taken into account, is shown in Figure 2. The
contribution from the annihilation processes is significant at masses
above 100 MeV, but it is more model-dependent. The corresponding sensitivity
for the two values of $\alpha_{D}$ is shown in Figure 5.
## 5 Conclusion
The DMG4 package for the simulation of light dark matter production in fixed
target experiments has been created. It can be used in simulation programs of
experimental setups based on the Geant4 framework. As an example, we
calculated the sensitivity of a typical missing energy signature experiment to
various types of light dark matter.
The package is available at http://mkirsano.web.cern.ch/mkirsano/DMG4.tar.gz.
Users are also encouraged to contact the corresponding author, Mikhail
Kirsanov, about its usage.
## 6 Acknowledgements
This work was supported by the Ministry of Science and Higher Education (MSHE)
and RAS (Russia), Tomsk Polytechnic University within the assignment of MSHE
(Russia), the European Research Council (ERC) under the European Union’s
Horizon 2020 research and innovation program (Grant agreement No. 947715 -
POKER Starting Grant).
## 7 Appendix A
In this section we collect the bremsstrahlung-like differential cross sections
of the processes $lN\to lNX$, where $X=(S,P,V,A)$ and $l=(e^{\pm},\mu^{\pm})$.
In the IWW approach [12, 13] one has the following expressions for the cross
sections
$\left(\frac{d\sigma^{X}}{dx\,d\cos\theta}\right)_{IWW}=2\epsilon_{X}^{2}\alpha^{3}|{\bf
k}|E_{0}(1-x)\frac{\chi}{\tilde{u}^{2}}|\mathcal{A}^{X}|^{2}$ (16)
where $x=E_{X}/E_{0}$ is the energy fraction that the DM mediator carries away,
$\theta$ is the emission angle of the DM mediator, $|{\bf
k}|=\sqrt{E_{X}^{2}-m_{X}^{2}}$ is the momentum of the hidden $X$-boson, $E_{0}$
is the initial energy of the incident beam particle,
$\tilde{u}=-xE_{0}^{2}\theta^{2}-m_{X}^{2}(1-x)/x-m_{l}^{2}x$ is the
approximate value of the auxiliary Mandelstam variable, and $\chi$ is the
standard photon flux that takes into account the elastic form-factors
$F_{el}(t)$. The corresponding expressions for $\chi$ and $F_{el}(t)$ can be
found elsewhere [15]. The expressions for the squared amplitudes are [12, 13]
$\displaystyle|\mathcal{A}^{S}|^{2}=$
$\displaystyle\frac{x^{2}}{1-x}+2(m_{S}^{2}-4m_{l}^{2})\frac{\tilde{u}x+m_{S}^{2}(1-x)+m_{l}^{2}x^{2}}{\tilde{u}^{2}},$
$\displaystyle|\mathcal{A}^{P}|^{2}=$
$\displaystyle\frac{x^{2}}{1-x}+2m_{P}^{2}\frac{\tilde{u}x+m_{P}^{2}(1-x)+m_{l}^{2}x^{2}}{\tilde{u}^{2}},$
$\displaystyle|\mathcal{A}^{V}|^{2}=$ $\displaystyle
2\frac{2-2x+x^{2}}{1-x}+4(m_{V}^{2}+2m_{l}^{2})\frac{\tilde{u}x+m_{V}^{2}(1-x)+m_{l}^{2}x^{2}}{\tilde{u}^{2}}$
(17) $\displaystyle|\mathcal{A}^{A}|^{2}=$
$\displaystyle\frac{4m_{l}^{2}x^{2}}{m_{A}^{2}(1-x)}+2\frac{2-2x+x^{2}}{1-x}+4(m_{A}^{2}-4m_{l}^{2})\frac{\tilde{u}x+m_{A}^{2}(1-x)+m_{l}^{2}x^{2}}{\tilde{u}^{2}}.$
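The building blocks of Eqs. (16)–(17) can be coded directly; a sketch for the vector case follows, with the photon flux $\chi$ treated as an external input (see [15]) supplied by the caller.

```cpp
#include <cmath>

// Approximate Mandelstam variable u-tilde entering Eq. (16).
double uTilde(double x, double theta, double E0, double mX, double ml)
{
  return -x*E0*E0*theta*theta - mX*mX*(1.0 - x)/x - ml*ml*x;
}

// Squared amplitude |A^V|^2 for the vector case, Eq. (17).
double amp2Vector(double x, double ut, double mX, double ml)
{
  return 2.0*(2.0 - 2.0*x + x*x)/(1.0 - x)
       + 4.0*(mX*mX + 2.0*ml*ml)
           *(ut*x + mX*mX*(1.0 - x) + ml*ml*x*x)/(ut*ut);
}
```

The differential cross section of Eq. (16) is then the product of these pieces with the prefactor $2\epsilon_{X}^{2}\alpha^{3}|{\bf k}|E_{0}(1-x)\chi/\tilde{u}^{2}$.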
## 8 Appendix B
In this section we place various figures referenced in the main sections of
the paper.
Figure 2: Sensitivity of the missing energy signature experiment to vector and pseudoscalar DM for $5\times 10^{12}$ EOT.
Figure 3: The signal yield per $10^{10}$ EOT in the missing energy signature experiment for vector and pseudoscalar DM.
Figure 4: The signal yield per $10^{10}$ EOT in the missing energy signature experiment for four types of DM.
Figure 5: Sensitivity of the missing energy signature experiment to vector DM for $5\times 10^{12}$ EOT. The sensitivity that takes into account the contribution from the annihilation process for $\alpha_{D}=0.5$ ($\alpha_{D}=0.1$) is shown by the black continuous (dashed) line.
## References
* [1] B. Holdom. _Phys. Lett. B_ 166, 196 (1986). doi:10.1016/0370-2693(86)91377-8.
* [2] M. Battaglieri, et al. US cosmic visions: New ideas in dark matter 2017: Community report (2017). ArXiv:1707.04591 [hep-ph].
* [3] J. Beacham et al. _J. Phys. G_ 47, 010501 (2020). doi:10.1088/1361-6471/ab4cd2.
* [4] J. D. Bjorken, et al. _Phys. Rev. D_ 80, 075018 (2009). doi:10.1103/PhysRevD.80.075018.
* [5] E. Izaguirre, et al. _Phys. Rev. D_ 88, 114015 (2013). doi:10.1103/PhysRevD.88.114015.
* [6] B. Batell, M. Pospelov, and A. Ritz. _Phys. Rev. D_ 80, 095024 (2009). doi:10.1103/PhysRevD.80.095024.
* [7] E. Izaguirre, et al. _Phys. Rev. D_ 91, 094026 (2015). doi:10.1103/PhysRevD.91.094026.
* [8] S. Agostinelli et al. _Nucl. Instrum. Meth. A_ 506, 250 (2003). doi:10.1016/S0168-9002(03)01368-8.
* [9] Monte Carlo Particle Numbering Scheme. https://pdg.lbl.gov/2019/reviews/rpp2019-rev-monte-carlo-numbering.pdf. Accessed: 2021-01-25.
* [10] R. R. Dusaev, D. V. Kirpichnikov, and M. M. Kirsanov. _Phys. Rev. D_ 102, 055018 (2020). doi:10.1103/PhysRevD.102.055018.
* [11] L. Marsicano, et al. _Phys. Rev. Lett._ 121, 041802 (2018). doi:10.1103/PhysRevLett.121.041802.
* [12] Y.-S. Liu, D. McKeen, and G. A. Miller. _Physical Review D_ 95, 036010 (2017). doi:10.1103/physrevd.95.036010.
* [13] Y.-S. Liu and G. A. Miller. _Physical Review D_ 96, 016004 (2017). doi:10.1103/physrevd.96.016004.
* [14] B. Döbrich, et al. _Journal of High Energy Physics_ 2016 (2016). doi:10.1007/jhep02(2016)018.
* [15] S. Gninenko, et al. _Phys. Lett. B_ 782, 406 (2018). doi:10.1016/j.physletb.2018.05.010.
* [16] Wolfram Research, Inc. Mathematica, Version 12.2. Champaign, IL, 2020.
* [17] D. Banerjee, et al. _Phys. Rev. D_ 97, 072002 (2018). doi:10.1103/PhysRevD.97.072002.
# Quadratic estimators for CMB weak lensing
Abhishek S. Maniyar<EMAIL_ADDRESS>Center for Cosmology and
Particle Physics, Department of Physics, New York University, New York, NY
10003, USA Yacine Ali-Haïmoud Center for Cosmology and Particle Physics,
Department of Physics, New York University, New York, NY 10003, USA Julien
Carron Université de Genève, Département de Physique Théorique et CAP, 24
Quai Ansermet, CH-1211 Genève 4, Switzerland Antony Lewis Department of
Physics and Astronomy, University of Sussex, Brighton BN1 9QH, UK Mathew S.
Madhavacheril Perimeter Institute for Theoretical Physics, Waterloo, ON,
Canada N2L 2Y5
###### Abstract
In recent years, weak lensing of the cosmic microwave background (CMB) has
emerged as a powerful tool to probe fundamental physics, such as neutrino
masses, primordial non-Gaussianity, dark energy, and modified gravity. The
prime target of CMB lensing surveys is the lensing potential, which is
reconstructed from the observed CMB temperature $T$ and polarization $E$ and
$B$ fields. Until very recently, this reconstruction has been performed with
quadratic estimators (QEs), which, although known to be suboptimal for high-
sensitivity experiments, are numerically efficient, and useful to make
forecasts and cross-check the results of more sophisticated likelihood-based
methods. It is expected that ongoing and near-future CMB experiments such as
AdvACT, SPT-3G and the Simons Observatory (SO) will also rely on QEs. In this
work, we review different QEs, and clarify and quantify their differences. In
particular, we show that the Hu-Okamoto (HO02) estimator is not the absolute
optimal lensing estimator that can be constructed out of quadratic
combinations of $T,E$ and $B$ fields. Instead, we derive the global-minimum-
variance (GMV) lensing quadratic estimator. Although this estimator can be
found elsewhere in the literature, it was erroneously described as equivalent
to the HO02 estimator, and has never been used in real data analyses. Here, we
show explicitly that the HO02 estimator is suboptimal to the GMV estimator,
with a reconstruction noise larger by up to $\sim 9\%$ for a SO-like
experiment. We further show that the QE used in the Planck and recent SPT
lensing analyses is suboptimal to both the HO02 and GMV estimators, and would
have a reconstruction noise up to $\sim 11\%$ larger than that of the GMV
estimator for a SO-like experiment. In addition to clarifying differences
between different QEs, this work should thus provide motivation to implement
the GMV estimator in future lensing analyses relying on QEs.
## I Introduction
Weak gravitational lensing of the cosmic microwave background (CMB) arises
from the deflection of CMB photons as they travel to us from the last
scattering surface, through the inhomogeneous Universe Blanchard and Schneider
(1987); see e.g. Ref. Lewis and Challinor (2006) for a review. The deflection
angle is proportional to the gradient of the lensing potential $\phi$, which
is determined by the projected mass distribution along the line of sight.
Reconstructing $\phi$ is therefore a powerful cosmological tool, as it gives
direct access to the projected distribution of the _total_ matter – baryonic
and dark – without relying on biased tracers Seljak and Zaldarriaga (1999).
Among other applications, the power spectrum of the lensing potential and its
cross-correlation with other tracers of large-scale structure are a sensitive
probe of the growth of matter fluctuations, primordial non-Gaussianity,
neutrino masses, dark energy, and modified gravity Lewis and Challinor (2006);
Allison _et al._ (2015); Schmittfull and Seljak (2018). CMB lensing has been
successfully measured by ACT, SPT, Planck, BICEP and POLARBEAR Das _et al._
(2011); Sherwin _et al._ (2017); van Engelen _et al._ (2012); Ade _et al._
(2016a); Aghanim _et al._ (2020); Omori _et al._ (2017); Story _et al._
(2015); Wu _et al._ (2019); Millea _et al._ (2020); Ade _et al._ (2016b,
2014); Adachi _et al._ (2020). Current and upcoming wide-field CMB
experiments such as AdvACT Henderson _et al._ (2016), SPT-3G Benson _et al._
(2014) and the Simons Observatory (SO) Ade _et al._ (2019) will measure the
lensing potential with even higher signal-to-noise ratio. Looking ahead, next-
generation “Stage-4” instrumental concepts with unprecedented depth and
angular resolution are currently under development Abazajian _et al._ (2016),
with CMB lensing as one of their main science goals Abazajian _et al._
(2019).
One of the main signatures of weak lensing is the induced correlations between
unequal Fourier modes of the CMB temperature and polarization fields. It is
therefore natural to seek to estimate $\phi$ out of linear combinations of
terms quadratic in different modes of the observed fields Zaldarriaga and
Seljak (1999); Hu (2001); and indeed, almost all CMB lensing analyses thus far
have relied on such quadratic estimators. For the next-generation Stage-4-like
CMB experiments (CMBS4), quadratic estimators are known to be suboptimal,
especially for polarization Hirata and Seljak (2003a). More elaborate
algorithms are being developed, such as the gradient-inversion method
Hadzhiyska _et al._ (2019), or likelihood-based methods Hirata and Seljak
(2003b, a); Carron and Lewis (2017); Millea _et al._ (2020). Meanwhile,
quadratic estimators remain the workhorse tool for current and near-future CMB
experiments like AdvACT, SPT-3G, and SO. They have the advantages of being
very simple to implement and computationally efficient, and will serve as
useful cross-checks even when more accurate and computationally demanding
methods are employed with future data.
The main goal of this paper is to clarify and quantify the differences between
several quadratic estimators commonly used for CMB lensing reconstruction. Our
most important point is that the well-known Hu and Okamoto Hu and Okamoto
(2002) (hereafter, HO02) estimator is _not_ the optimal quadratic estimator
that can be constructed from temperature and polarization maps, even if
generalized to the full sky, and even when using non-perturbative response
functions Lewis _et al._ (2011); Fabbian _et al._ (2019). Instead, we derive
the global-minimum-variance (hereafter GMV) quadratic estimator built out of
all possible quadratic combinations of $T,E$ and $B$. The GMV estimator was in
fact first derived in Hirata and Seljak Hirata and Seljak (2003a), as the
weak-signal limit of their likelihood-based method. Nevertheless, it was
stated there and in subsequent works that this estimator is equivalent to that
of HO02. We explicitly show that this is not the case, and that the
reconstruction noise of the GMV estimator can be up to $\sim 9\%$ lower than
that of the HO02 estimator on large angular scales. We also generalize it to
be accurately unbiased, accounting for higher-order lensing effects.
Furthermore, we show that the quadratic estimator used in the Planck
collaboration Ade _et al._ (2016a); Aghanim _et al._ (2020) and SPT
collaboration Wu _et al._ (2019) lensing analyses, obtained by neglecting
$C^{TE}_{\ell}$ in the inverse filter matrix, is suboptimal to _both_ the GMV
and HO02 estimators. For a SO-like experiment, this suboptimal estimator is up
to $\sim 11\%$ noisier than the GMV estimator. This may motivate implementing
the GMV estimator in future analyses, despite the possible added complexity of
jointly filtering temperature and polarization maps.
The remainder of this paper is organized as follows. After introducing our
notation and convention in Sec. II, we review the HO02 estimator and its close
cousin, the Okamoto-Hu Okamoto and Hu (2003) (hereafter OH03) estimator in
Sec. III. We then derive the GMV estimator in Sec. IV and explicitly show how
it differs from the HO02 estimator. We describe the suboptimal lensing
estimator of Ref. Aghanim _et al._ (2020) in Sec. V. Finally, we compare
estimators in Sec. VI for different instrumental setups, and conclude in Sec.
VII.
## II Notation and conventions
We denote by capital letters $X,Y=T,E,B$ the observed (lensed and noisy) CMB
temperature and polarization fields, and by $\phi$ the projected lensing
potential. Throughout we work in the flat-sky approximation; we denote two-
dimensional Fourier wavenumbers by $\boldsymbol{l}$ for CMB fields and
$\boldsymbol{L}$ for the lensing potential.
The power spectra of the observed temperature and polarization fields are
defined as
$\displaystyle\langle
X(\boldsymbol{l})Y(\boldsymbol{l}^{\prime})\rangle=(2\pi)^{2}\delta(\boldsymbol{l}+\boldsymbol{l}^{\prime})C_{l}^{XY}\,,$
(1)
where $C_{l}^{XY}$ is the total cross-power spectrum of the lensed fields,
including detector noise added in quadrature (for $X=Y$). It can also include
contributions from other sources of variance such as residual foreground
contamination. In this expression the angular brackets denote ensemble
averages over the primordial CMB, the detector noise, and the underlying
large-scale structure.
Gravitational lensing affects the auto- and cross-power spectra of CMB fields,
and moreover produces correlations between non-opposite $\boldsymbol{l}$
modes, proportional to the projected lensing potential. The response of the
non-opposite correlations to lensing can be quantified by non-perturbative
response functions $f_{XY}$ defined by
$\left\langle\frac{\delta}{\delta\phi(\boldsymbol{L})}\left(X(\boldsymbol{l})Y(\boldsymbol{l}^{\prime})\right)\right\rangle=\delta(\boldsymbol{l}+\boldsymbol{l}^{\prime}-\boldsymbol{L})f_{XY}(\boldsymbol{l},\boldsymbol{l}^{\prime}).$
(2)
The coupling coefficients $f_{XY}$ appearing in Eq. (2) are given explicitly
in Table 1. They depend on the lensed gradient spectra
$\widetilde{C}_{l}^{X\nabla Y}$ defined in Refs. Lewis _et al._ (2011);
Fabbian _et al._ (2019), which generalize the unlensed spectra used in the
original work of HO02 so that the response function for each lensing mode
includes the important higher-order effect of other lensing modes. The BB term
has negligible contribution to the signal-to-noise ratio of the reconstructed
$\phi$ field and thus we neglect it in our analysis. Note that different
foregrounds can also contribute to off-diagonal correlations (e.g. Ferraro and
Hill, 2018; Schaan and Ferraro, 2019), but we do not include them in this
work.
$\alpha$ | $f_{\alpha}({\boldsymbol{l}_{1},\boldsymbol{l}_{2}})$
---|---
$TT$ | $\widetilde{C}_{l_{1}}^{T\nabla T}({\boldsymbol{L}}\cdot{\boldsymbol{l}}_{1})+\widetilde{C}_{l_{2}}^{T\nabla T}({\boldsymbol{L}}\cdot{\boldsymbol{l}}_{2})$
$TE$ | $\widetilde{C}_{l_{1}}^{T\nabla E}\cos 2\varphi_{\boldsymbol{l}_{1}\boldsymbol{l}_{2}}({\boldsymbol{L}}\cdot{\boldsymbol{l}}_{1})+\widetilde{C}_{l_{2}}^{T\nabla E}({\boldsymbol{L}}\cdot{\boldsymbol{l}}_{2})$
$EE$ | $[\widetilde{C}_{l_{1}}^{E\nabla E}({\boldsymbol{L}}\cdot{\boldsymbol{l}}_{1})+\widetilde{C}_{l_{2}}^{E\nabla E}({\boldsymbol{L}}\cdot{\boldsymbol{l}}_{2})]\cos 2\varphi_{\boldsymbol{l}_{1}\boldsymbol{l}_{2}}$
$TB$ | $\widetilde{C}_{l_{1}}^{T\nabla E}\sin 2\varphi_{\boldsymbol{l}_{1}\boldsymbol{l}_{2}}({\boldsymbol{L}}\cdot{\boldsymbol{l}}_{1})$
$EB$ | $[\widetilde{C}_{l_{1}}^{E\nabla E}({\boldsymbol{L}}\cdot{\boldsymbol{l}}_{1})+\widetilde{C}_{l_{2}}^{B\nabla B}({\boldsymbol{L}}\cdot{\boldsymbol{l}}_{2})]\sin 2\varphi_{\boldsymbol{l}_{1}\boldsymbol{l}_{2}}$
Table 1: CMB lensing correlation coefficients.
$\varphi_{\boldsymbol{l}_{1}\boldsymbol{l}_{2}}$ is the angle between
$\boldsymbol{l}_{1}$ and $\boldsymbol{l}_{2}$. The quantity
$\widetilde{C}^{X\nabla Y}$ is the lensed gradient spectrum, defined in Refs.
Lewis _et al._ (2011); Fabbian _et al._ (2019). Note that we do not include
curl-like terms $\widetilde{C}_{l}^{TP_{\bot}},\widetilde{C}_{l}^{PP_{\bot}}$,
which are always subdominant Fabbian _et al._ (2019).
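As an illustration of Table 1, the $EB$ response can be evaluated numerically for given flat-sky wavevectors. In the sketch below the lensed gradient spectra are passed in as plain numbers (a real pipeline would interpolate tabulated spectra), and the sign convention for the angle $\varphi_{\boldsymbol{l}_{1}\boldsymbol{l}_{2}}$ is an assumption.

```cpp
#include <cmath>

// Evaluate the EB entry of Table 1 for flat-sky wavevectors l1, l2;
// CEE_l1 and CBB_l2 stand for the lensed gradient spectra at l1 and l2.
double fEB(double l1x, double l1y, double l2x, double l2y,
           double CEE_l1, double CBB_l2)
{
  double Lx = l1x + l2x, Ly = l1y + l2y;          // L = l1 + l2
  double Ldotl1 = Lx*l1x + Ly*l1y;
  double Ldotl2 = Lx*l2x + Ly*l2y;
  double phi1 = std::atan2(l1y, l1x);
  double phi2 = std::atan2(l2y, l2x);
  double sin2phi = std::sin(2.0*(phi1 - phi2));   // sign convention assumed
  return (CEE_l1*Ldotl1 + CBB_l2*Ldotl2) * sin2phi;
}
```

A quick sanity check: for parallel $\boldsymbol{l}_{1}$ and $\boldsymbol{l}_{2}$ the response vanishes, as required by the $\sin 2\varphi_{\boldsymbol{l}_{1}\boldsymbol{l}_{2}}$ factor.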
In the remainder of this work we will describe different estimators
$\hat{\phi}_{\alpha}$ for the lensing potential. All these estimators are
required to be unbiased, i.e. such that
$\langle\hat{\phi}_{\alpha}\rangle=\phi$. They are however noisy, and we
define their variance (or reconstruction noise) $N_{\alpha}(L)$ through
$\langle(\hat{\phi}_{\alpha}-\phi)(\boldsymbol{L})(\hat{\phi}_{\alpha}-\phi)(\boldsymbol{L}^{\prime})\rangle=(2\pi)^{2}\delta_{\rm
D}(\boldsymbol{L}+\boldsymbol{L}^{\prime})N_{\alpha}(L).$ (3)
Here, to optimize the signal-to-noise ratio, we only consider the primary
Gaussian disconnected contractions of the lensed fields,
$N_{\alpha}^{(0)}(L)$; in Appendix A we give an explicit form for the
$N_{\alpha}^{(1)}(L)$ contractions Kesden _et al._ (2003) that should also be
included in any full data likelihood analysis. The superscript values 0 and 1
represent the order to which the variance $N_{\alpha}(L)$ explicitly depends
on $C^{\phi\phi}_{L}$.
We will often deal with convolutions in Fourier space, and for brevity,
introduce the compact notation
$\int_{\begin{subarray}{c}\boldsymbol{l}_{1}+\boldsymbol{l}_{2}\\\
=\boldsymbol{L}\end{subarray}}...\equiv\iint\frac{d^{2}l_{1}d^{2}l_{2}}{(2\pi)^{2}}\delta_{\rm
D}(\boldsymbol{l}_{1}+\boldsymbol{l}_{2}-\boldsymbol{L})...$ (4)
We define the Fourier transform of a configuration-space function
$A(\boldsymbol{\hat{n}})$ as
$\mathcal{F}[A(\boldsymbol{\hat{n}})](\boldsymbol{l})\equiv\int
d^{2}\boldsymbol{\hat{n}}~{}A(\boldsymbol{\hat{n}})\textrm{e}^{-i\boldsymbol{l}\cdot\boldsymbol{\hat{n}}},$
(5)
and the inverse-Fourier transform of a harmonic-space function
$B(\boldsymbol{l})$ as
$\mathcal{F}^{-1}[B(\boldsymbol{l})](\boldsymbol{\hat{n}})\equiv\int\frac{d^{2}\boldsymbol{l}}{(2\pi)^{2}}B(\boldsymbol{l})\textrm{e}^{i\boldsymbol{l}\cdot\boldsymbol{\hat{n}}}.$
(6)
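These conventions map directly onto discrete FFTs. As an illustration, the minimal numpy sketch below (grid size and box length are arbitrary illustrative choices, not taken from the text) implements Eqs. (5)-(6) on a periodic pixel grid, where the continuum transform is the FFT times the pixel area:

```python
import numpy as np

# Flat-sky Fourier conventions of Eqs. (5)-(6) on a periodic pixel grid:
# the continuum transform is the discrete FFT times the pixel area, and
# the inverse transform is the inverse FFT divided by it, so that the
# (2*pi)^-2 measure of Eq. (6) is absorbed by numpy's ifft normalization.
npix = 64                  # pixels per side (illustrative)
box = 10.0 * np.pi / 180   # box side length in radians (illustrative)
dx = box / npix            # pixel size

def fourier(A):
    """F[A](l) = int d^2n A(n) exp(-i l.n)  ->  fft2 * pixel area."""
    return np.fft.fft2(A) * dx**2

def inv_fourier(B):
    """F^{-1}[B](n) = int d^2l/(2pi)^2 B(l) exp(i l.n)  ->  ifft2 / pixel area."""
    return np.fft.ifft2(B) / dx**2

rng = np.random.default_rng(0)
A = rng.standard_normal((npix, npix))
A_back = inv_fourier(fourier(A)).real   # round trip recovers the map
```

The area factors cancel in the round trip, so any consistent choice works; keeping them explicit matters when mixing continuum spectra with discrete maps.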
## III Hu and Okamoto estimators
We now briefly rederive the HO02 and OH03 quadratic estimators for the lensing
potential, setting the stage for our subsequent derivation of the global-
minimum-variance estimator.
The approach of HO02 consists of constructing the single-pair estimators
$\hat{\phi}_{\alpha}({\boldsymbol{L}})$ separately for each pair
$\alpha=TT,TE,EE,TB,EB$, and then combining them together to form their
minimum variance estimator.
OH03 moreover derive efficient full-sky single-pair estimators in
configuration space. They are identical to the HO02 estimators, except for the
$TE$ estimator, which is slightly sub-optimal. We give explicit expressions
for these estimators in the flat-sky limit in Section III.2. Here again, the
final OH03 estimator is obtained by combining these single-pair estimators.
Note that HO02 and OH03 used unlensed spectra in the response functions rather
than the lensed gradient spectra. We still refer to the estimators constructed
with the lensed gradient spectra as the HO02 and OH03 estimators, given that
the procedure is identical.
### III.1 Single-pair minimum-variance quadratic estimators in harmonic space
We start by constructing quadratic estimators out of a single pair $XY$, of
the form
$\hat{\phi}_{XY}(\boldsymbol{L})\equiv\int_{\begin{subarray}{c}\boldsymbol{l}_{1}+\boldsymbol{l}_{2}\\\
=\boldsymbol{L}\end{subarray}}X(\boldsymbol{l}_{1})Y(\boldsymbol{l}_{2})F_{XY}(\boldsymbol{l}_{1},\boldsymbol{l}_{2}).$
(7)
For the estimator to be unbiased, the weights $F_{XY}$ must satisfy the
constraint
$\int_{\begin{subarray}{c}\boldsymbol{l}_{1}+\boldsymbol{l}_{2}\\\
=\boldsymbol{L}\end{subarray}}f_{XY}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})F_{XY}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})=1.$
(8)
The noise of this estimator (defined as in Eq. (3)) is
$\displaystyle
N_{XY}(L)=\int_{\begin{subarray}{c}\boldsymbol{l}_{1}+\boldsymbol{l}_{2}\\\
=\boldsymbol{L}\end{subarray}}F_{XY}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})\Big{(}F_{XY}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})C_{l_{1}}^{XX}C_{l_{2}}^{YY}$
$\displaystyle+F_{XY}(\boldsymbol{l}_{2},\boldsymbol{l}_{1})C_{l_{1}}^{XY}C_{l_{2}}^{XY}\Big{)}.~{}~{}~{}$
(9)
#### III.1.1 All pairs except $TE$
For all pairs except $TE$, either $X=Y$ or $C_{l}^{XY}=0$. As a consequence,
the variance of the estimator takes the form
$\displaystyle
N_{XY}(L)=(1+\delta_{XY})\int_{\begin{subarray}{c}\boldsymbol{l}_{1}+\boldsymbol{l}_{2}\\\
=\boldsymbol{L}\end{subarray}}C_{l_{1}}^{XX}C_{l_{2}}^{YY}[F_{XY}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})]^{2}.~{}~{}~{}~{}$
(10)
Minimizing this variance under the constraint (8) results in the following
coefficients
$\displaystyle F_{XY}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})$ $\displaystyle=$
$\displaystyle\lambda_{XY}(L)~{}\frac{f_{XY}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})}{(1+\delta_{XY})C_{l_{1}}^{XX}C_{l_{2}}^{YY}},$
(11) $\displaystyle\lambda_{XY}(L)$ $\displaystyle\equiv$
$\displaystyle\left[\int_{\begin{subarray}{c}\boldsymbol{l}_{1}+\boldsymbol{l}_{2}\\\
=\boldsymbol{L}\end{subarray}}\frac{[f_{XY}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})]^{2}}{(1+\delta_{XY})C_{l_{1}}^{XX}C_{l_{2}}^{YY}}\right]^{-1}.$
(12)
Inserting back into Eq. (10), we find the corresponding minimum variance
$N_{XY}(L)=\lambda_{XY}(L)$.
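As a numerical illustration of Eq. (12), the brute-force sketch below evaluates $\lambda_{TT}(L)$ on a 2D $\boldsymbol{l}$-grid with $\boldsymbol{l}_{2}=\boldsymbol{L}-\boldsymbol{l}_{1}$. The power-law spectra are toy stand-ins for the true CMB spectra, not values used in the paper:

```python
import numpy as np

def lam_TT(L, n=256, lmax=3000.0):
    """Brute-force lambda_TT(L) of Eq. (12); N_TT(L) = lambda_TT(L)."""
    grid = np.linspace(-lmax, lmax, n)
    lx, ly = np.meshgrid(grid, grid)
    l1 = np.hypot(lx, ly)
    l2x, l2y = L - lx, -ly              # l2 = L - l1, with L along the x-axis
    l2 = np.hypot(l2x, l2y)
    good = (l1 > 2) & (l2 > 2) & (l2 < lmax)

    Ctot = lambda l: 1e3 / (l + 10.0) ** 2   # toy total spectrum C_l^TT
    Cgrad = Ctot                              # toy gradient spectrum

    # Response of Table 1: f_TT = Cgrad(l1) (L.l1) + Cgrad(l2) (L.l2).
    f = Cgrad(l1) * (L * lx) + Cgrad(l2) * (L * l2x)
    # Factor (1 + delta_XY) = 2 for XY = TT, as in Eq. (12).
    integrand = np.where(good, f**2 / (2.0 * Ctot(l1) * Ctot(l2)), 0.0)
    dA = (2.0 * lmax / (n - 1)) ** 2
    return 1.0 / (integrand.sum() * dA / (2.0 * np.pi) ** 2)

N_TT = lam_TT(100.0)
```

With realistic spectra this Riemann sum converges slowly; production codes instead use the separable FFT form discussed in Sec. III.2.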
#### III.1.2 Special case of $XY=TE$
We may decompose $F_{TE}$ into a symmetric and antisymmetric piece:
$\displaystyle F_{TE}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})$ $\displaystyle=$
$\displaystyle
F_{TE}^{+}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})+F_{TE}^{-}(\boldsymbol{l}_{1},\boldsymbol{l}_{2}),$
(13) $\displaystyle F_{TE}^{\pm}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})$
$\displaystyle\equiv$
$\displaystyle\frac{1}{2}\left(F_{TE}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})\pm
F_{TE}(\boldsymbol{l}_{2},\boldsymbol{l}_{1})\right).$ (14)
For each pair $(\boldsymbol{l}_{1},\boldsymbol{l}_{2})$, we further define the
2-dimensional vector
$\boldsymbol{F}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})\equiv\left(F_{TE}^{+}(\boldsymbol{l}_{1},\boldsymbol{l}_{2}),F_{TE}^{-}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})\right).$
(15)
After some algebra, and only keeping even functions of
$(\boldsymbol{l}_{1},\boldsymbol{l}_{2})$ in the integral, Eq. (9) can be
rewritten as
$N_{TE}(L)=\int_{\begin{subarray}{c}\boldsymbol{l}_{1}+\boldsymbol{l}_{2}\\\
=\boldsymbol{L}\end{subarray}}\boldsymbol{F}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})\cdot\boldsymbol{M}(l_{1},l_{2})\cdot\boldsymbol{F}(\boldsymbol{l}_{1},\boldsymbol{l}_{2}),$
(16)
where for each pair $(l_{1},l_{2})$, the 2 by 2 matrix
$\boldsymbol{M}(l_{1},l_{2})$ is given by
$\displaystyle\boldsymbol{M}(l_{1},l_{2})=$
$\displaystyle\begin{pmatrix}C_{(l_{1}}^{TT}C_{l_{2})}^{EE}+C_{l_{1}}^{TE}C_{l_{2}}^{TE}&C_{[l_{1}}^{TT}C_{l_{2}]}^{EE}\\\
C_{[l_{1}}^{TT}C_{l_{2}]}^{EE}&C_{(l_{1}}^{TT}C_{l_{2})}^{EE}-C_{l_{1}}^{TE}C_{l_{2}}^{TE}\end{pmatrix},~{}~{}~{}$
(17)
where $A_{(l_{1}l_{2})}\equiv(A_{l_{1}l_{2}}+A_{l_{2}l_{1}})/2$ and
$A_{[l_{1}l_{2}]}\equiv(A_{l_{1}l_{2}}-A_{l_{2}l_{1}})/2$ are the symmetric
and antisymmetric parts of $A_{l_{1}l_{2}}$.
Similarly, we may define the symmetric and antisymmetric parts of the
correlation coefficients $f_{TE}^{\pm}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})$
and the two-dimensional vector $\boldsymbol{f}=(f_{TE}^{+},f_{TE}^{-})$, for
each pair $(\boldsymbol{l}_{1},\boldsymbol{l}_{2})$, and rewrite the
constraint (8) as
$\displaystyle\int_{\begin{subarray}{c}\boldsymbol{l}_{1}+\boldsymbol{l}_{2}\\\
=\boldsymbol{L}\end{subarray}}\boldsymbol{F}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})\cdot\boldsymbol{f}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})=1.$
(18)
Minimizing the variance (16) under this constraint leads to the solution
$\boldsymbol{F}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})=\lambda(L)~{}\boldsymbol{M}^{-1}(l_{1},l_{2})\cdot\boldsymbol{f}(\boldsymbol{l}_{1},\boldsymbol{l}_{2}),$
(19)
where the Lagrange multiplier $\lambda$ is obtained from the constraint (18):
$\lambda(L)=\left(\int_{\begin{subarray}{c}\boldsymbol{l}_{1}+\boldsymbol{l}_{2}\\\
=\boldsymbol{L}\end{subarray}}\boldsymbol{f}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})\cdot\boldsymbol{M}^{-1}(l_{1},l_{2})\cdot\boldsymbol{f}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})\right)^{-1}.$
(20)
The 2 by 2 matrix $\boldsymbol{M}(l_{1},l_{2})$ is easily invertible, and
after re-expressing Eq. (19) in terms of the original
$F_{XY}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})$ and
$f_{TE}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})$, one recovers the HO02 optimal
weights for $TE$, namely, with our notation,
$\displaystyle F_{TE}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})$ $\displaystyle=$
$\displaystyle\lambda_{TE}(L)\frac{C_{l_{1}}^{EE}C_{l_{2}}^{TT}f_{TE}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})-C_{l_{1}}^{TE}C_{l_{2}}^{TE}f_{TE}(\boldsymbol{l}_{2},\boldsymbol{l}_{1})}{C_{l_{1}}^{TT}C_{l_{2}}^{EE}C_{l_{1}}^{EE}C_{l_{2}}^{TT}-\left(C_{l_{1}}^{TE}C_{l_{2}}^{TE}\right)^{2}},$
(21) $\displaystyle\lambda_{TE}(L)$ $\displaystyle\equiv$
$\displaystyle\Bigg{[}\int_{\begin{subarray}{c}\boldsymbol{l}_{1}+\boldsymbol{l}_{2}\\\
=\boldsymbol{L}\end{subarray}}f_{TE}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})\frac{C_{l_{1}}^{EE}C_{l_{2}}^{TT}f_{TE}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})-C_{l_{1}}^{TE}C_{l_{2}}^{TE}f_{TE}(\boldsymbol{l}_{2},\boldsymbol{l}_{1})}{C_{l_{1}}^{TT}C_{l_{2}}^{EE}C_{l_{1}}^{EE}C_{l_{2}}^{TT}-\left(C_{l_{1}}^{TE}C_{l_{2}}^{TE}\right)^{2}}\Bigg{]}^{-1}\,.$
(22)
Inserting Eq. (19) into Eq. (16), we see that the noise of the minimum-
variance estimator is just $N_{TE}(L)=\lambda_{TE}(L)$.
### III.2 Single-pair efficient configuration-space estimators
#### III.2.1 All pairs except $TE$
The response coefficients $f_{XY}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})$ can
all be written as linear combinations of products of functions of
$\boldsymbol{l}_{1}$ with functions of $\boldsymbol{l}_{2}$, with coefficients
depending on $L$. From Eq. (11), we see that this property transfers to the
optimal weights $F_{XY}$ for all pairs except $TE$. As a consequence, all
single-pair estimators except $TE$ can be written as sums of convolutions of
functions of $\boldsymbol{l}_{1}$ with functions of $\boldsymbol{l}_{2}$. This
implies that they can be written as a sum of products of functions of
configuration space – they are “separable” in configuration space. This allows
the use of Fast Fourier Transforms (FFTs) (or fast harmonic transforms for
full-sky expressions Okamoto and Hu (2003)) to compute them efficiently.
Similar to OH03, we define the following bilinear operator of harmonic-space
functions:
$\boldsymbol{\mathcal{P}}[A(\boldsymbol{l}_{1}),B(\boldsymbol{l}_{2})](\boldsymbol{\hat{n}})\equiv\boldsymbol{\nabla}\mathcal{F}^{-1}[A(\boldsymbol{l}_{1})](\boldsymbol{\hat{n}})\times\mathcal{F}^{-1}[B(\boldsymbol{l}_{2})](\boldsymbol{\hat{n}}).$
(23)
The single-pair estimators can all be written in the form
$\hat{\phi}_{XY}(\boldsymbol{\hat{n}})=-\boldsymbol{\nabla}\cdot\mathcal{F}^{-1}\left[\lambda_{XY}(L)\mathcal{F}\left[\boldsymbol{\psi}_{XY}(\boldsymbol{\hat{n}})\right]\right],$
(24)
with
$\displaystyle\boldsymbol{\psi}_{TT}$ $\displaystyle=$
$\displaystyle\boldsymbol{\mathcal{P}}\left[\frac{\tilde{C}_{l_{1}}^{T\nabla
T}}{C_{l_{1}}^{TT}}T(\boldsymbol{l}_{1}),\frac{T(\boldsymbol{l}_{2})}{C_{l_{2}}^{TT}}\right],$
(25) $\displaystyle\boldsymbol{\psi}_{EE}$ $\displaystyle=$
$\displaystyle\frac{1}{2}\sum_{\epsilon=\pm
1}\boldsymbol{\mathcal{P}}\left[\textrm{e}^{2i\epsilon\varphi_{\boldsymbol{l}_{1}}}\frac{\tilde{C}_{l_{1}}^{E\nabla
E}}{C_{l_{1}}^{EE}}E(\boldsymbol{l}_{1}),\textrm{e}^{-2i\epsilon\varphi_{\boldsymbol{l}_{2}}}\frac{E(\boldsymbol{l}_{2})}{C_{l_{2}}^{EE}}\right],~{}~{}$
(26) $\displaystyle\boldsymbol{\psi}_{TB}$ $\displaystyle=$
$\displaystyle\frac{1}{2i}\sum_{\epsilon=\pm
1}\epsilon\boldsymbol{\mathcal{P}}\left[\textrm{e}^{2i\epsilon\varphi_{\boldsymbol{l}_{1}}}\frac{\tilde{C}_{l_{1}}^{T\nabla
E}}{C_{l_{1}}^{TT}}T(\boldsymbol{l}_{1}),\textrm{e}^{-2i\epsilon\varphi_{\boldsymbol{l}_{2}}}\frac{B(\boldsymbol{l}_{2})}{C_{l_{2}}^{BB}}\right],~{}~{}~{}$
(27) $\displaystyle\boldsymbol{\psi}_{EB}$ $\displaystyle=$
$\displaystyle\frac{1}{2i}\sum_{\epsilon=\pm
1}\epsilon\boldsymbol{\mathcal{P}}\left[\textrm{e}^{2i\epsilon\varphi_{\boldsymbol{l}_{1}}}\frac{\tilde{C}_{l_{1}}^{E\nabla
E}}{C_{l_{1}}^{EE}}E(\boldsymbol{l}_{1}),\textrm{e}^{-2i\epsilon\varphi_{\boldsymbol{l}_{2}}}\frac{B(\boldsymbol{l}_{2})}{C_{l_{2}}^{BB}}\right].~{}~{}~{}$
(28)
These expressions are the flat-sky limit of the OH03 full-sky expressions.
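A minimal FFT implementation of the operator in Eq. (23) and of $\boldsymbol{\psi}_{TT}$ in Eq. (25) might look as follows; the grid parameters and the filters are illustrative toy choices, not the paper's spectra:

```python
import numpy as np

npix, dx = 64, 0.01                        # illustrative grid
freq = 2 * np.pi * np.fft.fftfreq(npix, d=dx)
LX, LY = np.meshgrid(freq, freq, indexing="ij")
lmod = np.hypot(LX, LY)

def P_op(A_l, B_l):
    """Eq. (23): grad F^{-1}[A] times F^{-1}[B]; returns (x, y) components."""
    gx = np.fft.ifft2(1j * LX * A_l).real  # x-derivative via i*l_x in Fourier space
    gy = np.fft.ifft2(1j * LY * A_l).real
    B_n = np.fft.ifft2(B_l).real
    return gx * B_n, gy * B_n

# psi_TT of Eq. (25): P[(Cgrad/Ctot) T, T/Ctot], with toy isotropic spectra.
rng = np.random.default_rng(1)
T_l = np.fft.fft2(rng.standard_normal((npix, npix)))
Ctot = 1.0 / (lmod + 10.0) ** 2            # toy total TT spectrum
Cgrad = Ctot                                # toy gradient spectrum
psi_x, psi_y = P_op((Cgrad / Ctot) * T_l, T_l / Ctot)
```

The remaining pairs in Eqs. (26)-(28) follow by inserting the spin phases $e^{\pm 2i\epsilon\varphi_{\boldsymbol{l}}}$ into the two filtered fields before applying the same operator.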
#### III.2.2 Case of $XY=TE$
The separability property is not satisfied by the $TE$ estimator, due to the
non-factorizable term in the denominator of $F_{TE}$ in Eq. (21). Instead of
the optimal $F_{TE}$, one can use a slightly suboptimal coefficient, obtained
by setting $C_{l}^{TE}=0$ in Eq. (21), namely
$\displaystyle F^{\rm eff}_{TE}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})$
$\displaystyle=$ $\displaystyle\lambda^{\rm
eff}_{TE}(L)\frac{f_{TE}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})}{C_{l_{1}}^{TT}C_{l_{2}}^{EE}},$
(29) $\displaystyle\lambda^{\rm eff}_{TE}(L)$ $\displaystyle\equiv$
$\displaystyle\Bigg{[}\int_{\begin{subarray}{c}\boldsymbol{l}_{1}+\boldsymbol{l}_{2}\\\
=\boldsymbol{L}\end{subarray}}\frac{[f_{TE}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})]^{2}}{C_{l_{1}}^{TT}C_{l_{2}}^{EE}}\Bigg{]}^{-1}\,.$
(30)
The resulting suboptimal estimator $\hat{\phi}_{TE}^{\rm eff}$ also takes the
form Eq. (24), with
$\displaystyle\boldsymbol{\psi}_{TE}^{\rm eff}$ $\displaystyle=$
$\displaystyle\frac{1}{2}\sum_{\epsilon=\pm
1}\boldsymbol{\mathcal{P}}\left[\textrm{e}^{2i\epsilon\varphi_{\boldsymbol{l}_{1}}}\frac{\tilde{C}_{l_{1}}^{T\nabla
E}}{C_{l_{1}}^{TT}}T(\boldsymbol{l}_{1}),\textrm{e}^{-2i\epsilon\varphi_{\boldsymbol{l}_{2}}}\frac{E(\boldsymbol{l}_{2})}{C_{l_{2}}^{EE}}\right]$
(31)
$\displaystyle+\boldsymbol{\mathcal{P}}\left[\frac{\tilde{C}_{l_{2}}^{T\nabla
E}}{C_{l_{2}}^{EE}}E(\boldsymbol{l}_{2}),\frac{T(\boldsymbol{l}_{1})}{C_{l_{1}}^{TT}}\right].$
### III.3 Optimal combination of single-pair estimators
Given the five single-pair estimators $\hat{\phi}_{\alpha}({\boldsymbol{L}})$
constructed for each $\alpha\in\\{TT,TE,EE,TB,EB\\}$, HO02 combine them to
form the estimator
$\hat{\phi}_{\rm
HO02}({\boldsymbol{L}})=\sum_{\alpha}w_{\alpha}(L)\hat{\phi}_{\alpha}({\boldsymbol{L}})\,,$
(32)
where the optimal weights $w_{\alpha}(L)$ are obtained by minimizing the
variance of the linear combination under the constraint that they sum to
unity, i.e. $\sum_{\alpha}w_{\alpha}=1$. Subject to this constraint, one gets
$w_{\alpha}=\frac{\sum_{\beta}({\boldsymbol{N}}^{-1})_{\alpha\beta}}{\sum_{\beta\gamma}({\boldsymbol{N}}^{-1})_{\beta\gamma}},$
(33)
where for each $L$, $\boldsymbol{N}_{\alpha\beta}(L)$ is the covariance matrix
of the separate estimators $\hat{\phi}_{\alpha}$, whose elements are obtained
by the generalization of Eq. (3) to the cross-correlation of two estimators.
The overall noise of this estimator is then $N_{\rm
HO}\equiv\left(\sum_{\alpha\beta}({\boldsymbol{N}}^{-1})_{\alpha\beta}\right)^{-1}$.
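At a fixed $L$, this combination step is a small linear-algebra operation. The sketch below applies Eq. (33) to a toy, positive-definite $5\times 5$ covariance (invented numbers, for illustration only):

```python
import numpy as np

def combine(N):
    """HO02 combination, Eq. (33): weights summing to unity, and N_HO."""
    Ninv = np.linalg.inv(N)
    w = Ninv.sum(axis=1) / Ninv.sum()
    return w, 1.0 / Ninv.sum()

# Toy covariance for the pairs (TT, TE, EE, TB, EB) at one L.
rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
N = A @ A.T + 5.0 * np.eye(5)       # symmetric positive definite
w, N_HO = combine(N)
```

By construction, the combined noise $N_{\rm HO}$ is never larger than the smallest single-pair noise on the diagonal of $\boldsymbol{N}$.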
The final HO02 estimator thus takes the form
$\hat{\phi}_{\rm
HO02}(\boldsymbol{L})=\int_{\begin{subarray}{c}\boldsymbol{l}_{1}+\boldsymbol{l}_{2}\\\
=\boldsymbol{L}\end{subarray}}\sum_{XY}F_{XY}^{\rm
HO02}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})X(\boldsymbol{l}_{1})Y(\boldsymbol{l}_{2}),$
(34)
where the sum runs over the five unique pairs $XY=TT,TE,EE,TB,EB$, and the
weights are proportional to the single-pair optimal weights, each with a
different proportionality coefficient:
$F_{XY}^{\rm
HO02}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})=w_{XY}(L)F_{XY}(\boldsymbol{l}_{1},\boldsymbol{l}_{2}).$
(35)
The same procedure can be carried out with the single-pair separable estimators of
OH03. These estimators are all identical to the minimum-variance estimators of
HO02, except for $\hat{\phi}_{TE}^{\rm eff}$, which is slightly suboptimal
relative to $\hat{\phi}_{TE}$. Upon combining all five estimators, the OH03
estimator also takes the form of Eq. (34), with weights
$F_{XY}^{\rm OH03}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})=w_{XY}^{\rm
eff}(L)\lambda_{XY}^{\rm
eff}(L)\frac{f_{XY}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})}{(1+\delta_{XY})C_{l_{1}}^{XX}C_{l_{2}}^{YY}}.$
(36)
## IV Global minimum-variance quadratic estimator
It is easy to see that the final HO02 estimator is a linear combination of
terms quadratic in $T,E,B$. Rather than splitting the optimization process in
two steps, we instead directly seek the global minimum variance quadratic
estimator, in one single step. By doing so, we can account for the
correlations between different $XY$ pairs _for each
$(\boldsymbol{l}_{1},\boldsymbol{l}_{2})$_, rather than only after integrating
over $(\boldsymbol{l}_{1},\boldsymbol{l}_{2})$, as done in the HO02 estimator.
The estimator built this way is therefore necessarily less noisy than the HO02
estimator, as we will show explicitly.
### IV.1 Harmonic-space expression
We start by deriving the global-minimum-variance (hereafter GMV) estimator in
harmonic space, following the steps of Appendix A of Hirata & Seljak Hirata
and Seljak (2003a).
For each Fourier mode $\boldsymbol{l}$, we define the three-dimensional vector
$\boldsymbol{X}(\boldsymbol{l})=[T(\boldsymbol{l}),E(\boldsymbol{l}),B(\boldsymbol{l})]$.
We seek an estimator of the form
$\hat{\phi}(\boldsymbol{L})=\int_{\begin{subarray}{c}\boldsymbol{l}_{1}+\boldsymbol{l}_{2}\\\
=\boldsymbol{L}\end{subarray}}X^{i}(\boldsymbol{l}_{1})\Xi_{ij}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})X^{j}(\boldsymbol{l}_{2}),$
(37)
where we use the Einstein summation convention. Without loss of generality, we
may assume
$\Xi_{ji}(\boldsymbol{l}_{2},\boldsymbol{l}_{1})=\Xi_{ij}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})$,
as only the part of the integrand symmetric under exchange of
$(\boldsymbol{l}_{1},\boldsymbol{l}_{2})$ contributes to the integral.
For $\boldsymbol{l}_{1}+\boldsymbol{l}_{2}\neq\boldsymbol{0}$, we define
$f_{ij}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})$ through
$\left\langle\frac{\delta}{\delta\phi(\boldsymbol{L})}\left(X^{i}(\boldsymbol{l}_{1})X^{j}(\boldsymbol{l}_{2})\right)\right\rangle=\delta(\boldsymbol{l}_{1}+\boldsymbol{l}_{2}-\boldsymbol{L})f_{ij}(\boldsymbol{l}_{1},\boldsymbol{l}_{2}).$
(38)
In other words, if $i=1,2,3$ correspond to $X^{i}=T,E,B$, we have
$f_{11}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})=f_{TT}(\boldsymbol{l}_{1},\boldsymbol{l}_{2}),f_{12}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})=f_{TE}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})=f_{21}(\boldsymbol{l}_{2},\boldsymbol{l}_{1})$,
etc. Here again, we have
$f_{ji}(\boldsymbol{l}_{2},\boldsymbol{l}_{1})=f_{ij}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})$.
Requiring the estimator to be unbiased thus leads to the constraint equation
$\int_{\begin{subarray}{c}\boldsymbol{l}_{1}+\boldsymbol{l}_{2}\\\
=\boldsymbol{L}\end{subarray}}\Xi_{ij}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})f_{ij}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})=1.$
(39)
Using the symmetry properties of $\Xi$, the variance of this estimator is then
$\displaystyle N(L)$ $\displaystyle=$ $\displaystyle
2\int_{\begin{subarray}{c}\boldsymbol{l}_{1}+\boldsymbol{l}_{2}\\\
=\boldsymbol{L}\end{subarray}}\Xi_{ij}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})\Xi_{pq}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})C_{l_{1}}^{ip}C_{l_{2}}^{jq}.$
(40)
Minimizing this variance under the constraint (39) leads to
$C_{l_{1}}^{ip}\Xi_{pq}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})C_{l_{2}}^{jq}=\frac{\lambda(L)}{2}f_{ij}(\boldsymbol{l}_{1},\boldsymbol{l}_{2}),$
(41)
where $\lambda(L)$ is a Lagrange multiplier. This equation is more easily
solved in matrix form. For each $l$, we define the three by three symmetric
matrix $[\boldsymbol{C}_{l}]$ with elements $C_{l}^{ij}$; similarly, for each
pair $(\boldsymbol{l}_{1},\boldsymbol{l}_{2})$, we define the three by three
matrices $[\boldsymbol{\Xi}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})]$ and
$[\boldsymbol{f}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})]$. Equation (41) then
has the solution
$[\boldsymbol{\Xi}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})]=\frac{\lambda(L)}{2}[\boldsymbol{C}_{l_{1}}]^{-1}[\boldsymbol{f}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})][\boldsymbol{C}_{l_{2}}]^{-1}.$
(42)
Inserting back into the constraint equation, we obtain
$\lambda(L)^{-1}=\int_{\begin{subarray}{c}\boldsymbol{l}_{1}+\boldsymbol{l}_{2}\\\
=\boldsymbol{L}\end{subarray}}\frac{1}{2}\textrm{Tr}\left([\boldsymbol{C}_{l_{1}}]^{-1}[\boldsymbol{f}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})][\boldsymbol{C}_{l_{2}}]^{-1}[\boldsymbol{f}(\boldsymbol{l}_{2},\boldsymbol{l}_{1})]\right).$
(43)
The noise of the minimum-variance estimator is then simply $N(L)=N_{\rm
GMV}(L)=\lambda(L)$.
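The sketch below illustrates the linear algebra of Eqs. (42)-(43) with toy $3\times 3$ covariances and responses (invented power-law forms, and a crude one-dimensional stand-in for the $\boldsymbol{l}_{1}+\boldsymbol{l}_{2}=\boldsymbol{L}$ sum), using the symmetry $[\boldsymbol{f}(\boldsymbol{l}_{2},\boldsymbol{l}_{1})]=[\boldsymbol{f}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})]^{T}$:

```python
import numpy as np

def toy_cov(l):
    """Toy positive-definite 3x3 covariance [C_l] for (T, E, B)."""
    cTT = 1e3 / (l + 10.0) ** 2
    cEE = 0.3 * cTT
    cTE = 0.4 * np.sqrt(cTT * cEE)    # |correlation| < 1 keeps [C_l] invertible
    cBB = 0.05 * cEE
    return np.array([[cTT, cTE, 0.0],
                     [cTE, cEE, 0.0],
                     [0.0, 0.0, cBB]])

def toy_resp(l1, l2):
    """Toy 3x3 response [f(l1, l2)]; any smooth form serves the illustration."""
    v = np.array([1.0, 0.5, 0.1])
    return np.outer(v, v) * (l1 + l2)

acc = 0.0
for l1 in np.linspace(50.0, 2000.0, 40):
    l2 = 2100.0 - l1                  # crude stand-in for |L - l1|
    f12 = toy_resp(l1, l2)
    # Integrand of Eq. (43): (1/2) Tr[C_{l1}^{-1} f(l1,l2) C_{l2}^{-1} f(l2,l1)]
    acc += 0.5 * np.trace(np.linalg.inv(toy_cov(l1)) @ f12
                          @ np.linalg.inv(toy_cov(l2)) @ f12.T)
lam = 1.0 / acc                       # N_GMV(L) = lambda(L)
```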
Putting everything together, the GMV estimator takes the final form
$\displaystyle\hat{\phi}_{\rm GMV}(\boldsymbol{L})$ $\displaystyle=$
$\displaystyle\int_{\begin{subarray}{c}\boldsymbol{l}_{1}+\boldsymbol{l}_{2}\\\
=\boldsymbol{L}\end{subarray}}\sum_{XY}F_{XY}^{\rm
GMV}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})X(\boldsymbol{l}_{1})Y(\boldsymbol{l}_{2}),$
(44)
where the sum runs over the five unique pairs $XY=TT,TE,EE,TB,EB$. Explicitly,
the weights are
$\displaystyle F_{TT}^{\rm
GMV}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})=\frac{N_{\rm
GMV}(L)}{2D_{l_{1}}D_{l_{2}}}\times$
$\displaystyle~{}~{}~{}\Big{[}C_{l_{1}}^{EE}C_{l_{2}}^{EE}f_{TT}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})+C_{l_{1}}^{TE}C_{l_{2}}^{TE}f_{EE}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})$
$\displaystyle~{}~{}~{}~{}~{}-C_{l_{1}}^{EE}C_{l_{2}}^{TE}f_{TE}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})-C_{l_{2}}^{EE}C_{l_{1}}^{TE}f_{TE}(\boldsymbol{l}_{2},\boldsymbol{l}_{1})\Big{]},~{}~{}~{}~{}~{}$
(45) $\displaystyle F_{EE}^{\rm
GMV}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})=\frac{N_{\rm
GMV}(L)}{2D_{l_{1}}D_{l_{2}}}\times$
$\displaystyle~{}~{}~{}\Big{[}C_{l_{1}}^{TE}C_{l_{2}}^{TE}f_{TT}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})+C_{l_{1}}^{TT}C_{l_{2}}^{TT}f_{EE}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})$
$\displaystyle~{}~{}~{}~{}~{}-C_{l_{1}}^{TE}C_{l_{2}}^{TT}f_{TE}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})-C_{l_{2}}^{TE}C_{l_{1}}^{TT}f_{TE}(\boldsymbol{l}_{2},\boldsymbol{l}_{1})\Big{]},$
(46) $\displaystyle F_{TE}^{\rm
GMV}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})=\frac{N_{\rm
GMV}(L)}{D_{l_{1}}D_{l_{2}}}\times$
$\displaystyle~{}~{}~{}\Big{[}-C_{l_{1}}^{TE}C_{l_{2}}^{EE}f_{TT}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})-C_{l_{1}}^{TE}C_{l_{2}}^{TT}f_{EE}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})$
$\displaystyle~{}~{}~{}~{}~{}+C_{l_{1}}^{EE}C_{l_{2}}^{TT}f_{TE}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})+C_{l_{1}}^{TE}C_{l_{2}}^{TE}f_{TE}(\boldsymbol{l}_{2},\boldsymbol{l}_{1})\Big{]},$
(47) $\displaystyle F_{TB}^{\rm
GMV}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})=\frac{N_{\rm
GMV}(L)}{D_{l_{1}}C_{l_{2}}^{BB}}\times$
$\displaystyle~{}~{}~{}\Big{[}C_{l_{1}}^{EE}f_{TB}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})-C_{l_{1}}^{TE}f_{EB}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})\Big{]},$
(48) $\displaystyle F_{EB}^{\rm
GMV}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})=\frac{N_{\rm
GMV}(L)}{D_{l_{1}}C_{l_{2}}^{BB}}\times$
$\displaystyle~{}~{}~{}\Big{[}-C_{l_{1}}^{TE}f_{TB}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})+C_{l_{1}}^{TT}f_{EB}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})\Big{]}\,,$
(49)
where $D_{l}\equiv C^{TT}_{l}C^{EE}_{l}-[C^{TE}_{l}]^{2}$.
These explicit expressions should make it very clear that the GMV estimator is
different from the HO02 estimator. Put differently, the first step of the
likelihood-based iterative technique of Ref. Hirata and Seljak (2003a) is
_not_ equivalent to the HO02 estimator. Indeed, in the HO02 estimator, the
weight $F_{XY}$ of each pair $XY$ is proportional to $f_{XY}$ only (times a
function of $L$), even after combining all single-pair estimators; in
contrast, for the GMV estimator, the weight of each pair is a linear
combination of the response coefficients from _all_ pairs. The weights in the
GMV estimator would not separately minimize the variance of an individual $XY$
pair, but they provide a global optimum when combining all the pairs together.
These expressions also show that the weights are all sums of products of
function of $\boldsymbol{l}_{1}$ with functions of $\boldsymbol{l}_{2}$,
including $F_{TE}^{\rm GMV}$. In other words, the GMV estimator is separable
without requiring any additional approximation, making it well adapted for
efficient computations, as we now discuss.
As a side note, let us point out that the GMV estimator (just like the HO02 and
OH03 estimators) can be split into two pieces, built from $\\{TT,TE,EE\\}$ and
$\\{TB,EB\\}$, respectively, which are uncorrelated as
$C^{TB}_{\ell}=C^{EB}_{\ell}=0$. Our publicly available Python code
GlobalLensQuest first computes these two separate uncorrelated estimators
$\hat{\phi}_{a}\equiv\hat{\phi}_{\\{TT,TE,EE\\}}$ and
$\hat{\phi}_{b}\equiv\hat{\phi}_{\\{TB,EB\\}}$, and then combines them with
inverse variance weighting to obtain the GMV estimator.
### IV.2 Compact configuration-space expression
We may write the GMV estimator in even more compact form by defining the
inverse-covariance-weighted fields
$\overline{\boldsymbol{X}}(\boldsymbol{l})\equiv[\boldsymbol{C}_{l}]^{-1}\boldsymbol{X}(\boldsymbol{l}),$
(50)
and write
$\overline{T}(\boldsymbol{l}_{1})\equiv\overline{X}^{1}(\boldsymbol{l}_{1})=[C_{l_{1}}^{EE}T(\boldsymbol{l}_{1})-C_{l_{1}}^{TE}E(\boldsymbol{l}_{1})]/D_{l_{1}},$
(51)
and similarly
$\overline{E}(\boldsymbol{l}_{1})=\overline{X}^{2},\overline{B}(\boldsymbol{l}_{1})=\overline{X}^{3}$.
We then have
$\hat{\phi}_{\rm
GMV}(\boldsymbol{L})=\frac{\lambda(L)}{2}\int_{\begin{subarray}{c}\boldsymbol{l}_{1}+\boldsymbol{l}_{2}\\\
=\boldsymbol{L}\end{subarray}}\overline{X}^{i}(\boldsymbol{l}_{1})f_{ij}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})\overline{X}^{j}(\boldsymbol{l}_{2}).$
(52)
We moreover define the Wiener-filtered fields
$X^{i}_{\rm
WF}(\boldsymbol{l})\equiv\widetilde{C}_{l}^{ij}~{}\overline{X}^{j}(\boldsymbol{l}),$ (53)
where $\widetilde{C}_{l}^{11}\equiv\widetilde{C}_{l}^{T\nabla
T},\widetilde{C}_{l}^{12}=\widetilde{C}_{l}^{21}\equiv\widetilde{C}_{l}^{T\nabla
E}$, etc., and write
$T_{\rm WF}(\boldsymbol{l})\equiv X^{1}_{\rm
WF}(\boldsymbol{l})\equiv\widetilde{C}_{l}^{T\nabla
T~{}}\overline{T}(\boldsymbol{l})+\widetilde{C}_{l}^{T\nabla
E}~{}\overline{E}(\boldsymbol{l}),$ (54)
and similarly for $E_{\rm WF}(\boldsymbol{l})\equiv X^{2}_{\rm
WF}(\boldsymbol{l})$.
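For a single $(T,E)$ mode, Eqs. (50)-(54) reduce to a 2-by-2 inversion. A sketch with toy numbers (none taken from the text) checks the explicit form of Eq. (51) against the matrix form of Eq. (50):

```python
import numpy as np

# Toy per-mode spectra and amplitudes at one l (illustrative values only).
cTT, cEE, cTE = 2.0e-3, 6.0e-4, 5.0e-4   # toy C_l^{TT}, C_l^{EE}, C_l^{TE}
gTT, gTE = 2.1e-3, 5.2e-4                # toy gradient spectra Ctilde
T, E = 0.1, -0.05                        # toy mode amplitudes

D = cTT * cEE - cTE**2                   # D_l = C^TT C^EE - (C^TE)^2
Tbar = (cEE * T - cTE * E) / D           # Eq. (51)
Ebar = (cTT * E - cTE * T) / D           # E-analogue of Eq. (51)
T_WF = gTT * Tbar + gTE * Ebar           # Wiener-filtered field, Eq. (54)

# Matrix form of Eq. (50): Xbar = [C_l]^{-1} X.
C = np.array([[cTT, cTE], [cTE, cEE]])
Tbar2, Ebar2 = np.linalg.solve(C, np.array([T, E]))
```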
In terms of these fields, the GMV estimator takes the particularly simple form
$\displaystyle\hat{\phi}_{\rm
GMV}(\boldsymbol{L})=\frac{\lambda(L)}{2}\int_{\begin{subarray}{c}\boldsymbol{l}_{1}+\boldsymbol{l}_{2}\\\
=\boldsymbol{L}\end{subarray}}(\boldsymbol{L}\cdot\boldsymbol{l}_{1})\Big{[}2T_{\rm
WF}(\boldsymbol{l}_{1})\overline{T}(\boldsymbol{l}_{2})$
$\displaystyle+\sum_{\epsilon=\pm 1}E_{\rm
WF}(\boldsymbol{l}_{1})\textrm{e}^{-2i\epsilon\varphi_{\boldsymbol{l}_{1}}}\left(\overline{E}(\boldsymbol{l}_{2})+i\epsilon\overline{B}(\boldsymbol{l}_{2})\right)\textrm{e}^{2i\epsilon\varphi_{\boldsymbol{l}_{2}}}\Big{]}.~{}$
(55)
This form is well adapted for efficient evaluation, as it is the sum of
convolutions of functions of $\boldsymbol{l}_{1}$ with functions of
$\boldsymbol{l}_{2}$. To see this, let us define
$\displaystyle~{}_{\pm 2}E_{\rm WF}(\boldsymbol{l})$ $\displaystyle\equiv$
$\displaystyle E_{\rm WF}(\boldsymbol{l})\textrm{e}^{\pm
2i\varphi_{\boldsymbol{l}}},$ (56) $\displaystyle~{}_{\pm
2}\overline{P}(\boldsymbol{l})$ $\displaystyle\equiv$
$\displaystyle{\frac{1}{2}}\left(\overline{E}(\boldsymbol{l})\pm
i\overline{B}(\boldsymbol{l})\right)\textrm{e}^{\pm
2i\varphi_{\boldsymbol{l}}}.$ (57)
We may then express the GMV estimator in terms of the configuration-space
versions of these fields (i.e. their inverse-Fourier transforms):
$\hat{\phi}_{\rm
GMV}(\boldsymbol{\hat{n}})=-\boldsymbol{\nabla}\cdot\mathcal{F}^{-1}\left[\lambda(L)\mathcal{F}[\boldsymbol{\psi}_{\rm
GMV}(\boldsymbol{\hat{n}})]\right],\\\ $ (58)
where
$\displaystyle\boldsymbol{\psi}_{\rm GMV}(\boldsymbol{\hat{n}})$
$\displaystyle=$ $\displaystyle\boldsymbol{\nabla}T_{\rm
WF}(\boldsymbol{\hat{n}})~{}\overline{T}(\boldsymbol{\hat{n}})$ (59)
$\displaystyle+\sum_{s=\pm 2}\boldsymbol{\nabla}[_{-s}E_{\rm
WF}(\boldsymbol{\hat{n}})]~{}_{s}\overline{P}(\boldsymbol{\hat{n}}).$
This expression is the flat-sky equivalent of Eq. (3) in Ref. Aghanim _et
al._ (2020), derived in Ref. Carron (2019). A similar expression is derived in
Ref. Peloton _et al._ (2017), in terms of $(T,Q,U)$ rather than $(T,E,B)$;
nevertheless, it is also incorrectly stated in that paper that this estimator
is identical to an estimator built out of single-pair estimators, i.e. the
HO02 estimator.
## V Suboptimal quadratic estimator (SQE) used in recent data analyses
While the full expression for the configuration-space GMV estimator was
already known (although it was not known that it differs from the HO02
estimator) Aghanim _et al._ (2020); Carron (2019), in practice only an
approximate version was used for data analyses thus far. Instead of using the
full covariance matrix $[\boldsymbol{C}_{l}]$ in Eq. (50), the Planck
collaboration Ade _et al._ (2016a); Aghanim _et al._ (2020) and SPT
collaboration Wu _et al._ (2019) approximate it as diagonal by setting
$C_{l}^{TE}=0$ – note that Aghanim _et al._ (2020) still use the exact
response coefficients $f_{XY}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})$. This
simplification allows to deal with a cut-sky setup with a lower computational
cost; it moreover preserves the configuration space separability. We denote
this suboptimal quadratic estimator SQE. Explicitly, the weights of this
estimator are
$\displaystyle F_{XY}^{\rm SQE}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})$
$\displaystyle=$ $\displaystyle\lambda_{\rm
SQE}(L)\frac{f_{XY}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})}{(1+\delta_{XY})C_{l_{1}}^{XX}C_{l_{2}}^{YY}},$
(60) $\displaystyle\lambda_{\rm SQE}(L)$ $\displaystyle\equiv$
$\displaystyle\left(\int_{\begin{subarray}{c}\boldsymbol{l}_{1}+\boldsymbol{l}_{2}\\\
=\boldsymbol{L}\end{subarray}}\sum_{XY}\frac{f_{XY}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})^{2}}{(1+\delta_{XY})C_{l_{1}}^{XX}C_{l_{2}}^{YY}}\right)^{-1},~{}~{}$
(61)
where again the sum runs over the five distinct pairs $XY$.
By definition, this estimator is suboptimal relative to the GMV estimator.
Furthermore, it should be clear from Eq. (36) that it is also noisier than the
OH03 estimator (and as we will see, also noisier than the HO02 estimator).
Indeed, the OH03 estimator accounts for the covariances between different
single-pair estimators, which depend on $C_{l}^{TE}$; as a consequence, the
$L$-dependent proportionality constant in Eq. (36) is different for each pair.
In contrast, the SQE amounts to neglecting correlations between single-pair
estimators, and simply using their inverse-variance combination, leading to
the same coefficient $\lambda_{\rm SQE}(L)$ for all weights in Eq. (60). Given
that the SQE estimator is effectively a linear combination of single-pair
estimators, and that the OH03 weights represent the optimal linear combination
of single-pair estimators, we conclude that the SQE estimator must be
suboptimal relative to the OH03 estimator.
To compute the noise of this estimator, we must account for $C_{l}^{TE}\neq
0$. We find
$\displaystyle N_{\rm SQE}(L)=\lambda_{\rm
SQE}^{2}(L)\int_{\begin{subarray}{c}\boldsymbol{l}_{1}+\boldsymbol{l}_{2}\\\
=\boldsymbol{L}\end{subarray}}\sum_{XY}\frac{f_{XY}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})}{(1+\delta_{XY})C_{l_{1}}^{XX}C_{l_{2}}^{YY}}$
$\displaystyle\sum_{UV}\left[\frac{f_{UV}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})C_{l_{1}}^{XU}C_{l_{2}}^{YV}}{(1+\delta_{UV})C_{l_{1}}^{UU}C_{l_{2}}^{VV}}+\frac{f_{UV}(\boldsymbol{l}_{2},\boldsymbol{l}_{1})C_{l_{1}}^{XV}C_{l_{2}}^{YU}}{(1+\delta_{UV})C_{l_{2}}^{UU}C_{l_{1}}^{VV}}\right].~{}~{}$
(62)
The SQE estimator enables faster evaluation on a cut sky, at the cost of
only a $\sim 3\%$ increase in the reconstruction noise for Planck (Aghanim _et
al._ , 2020), as we confirm in Fig. 2. However, we will see that for more
sensitive experimental setups, the reconstruction-noise penalty can exceed
$10\%$, so a full joint filtering analysis of the temperature and
polarization maps would be beneficial in the future.
## VI Quantitative comparison of different quadratic estimators
### VI.1 Experimental setups
In order to evaluate the variance of different quadratic estimators, we use
three different setups which correspond to the Planck, SO-like, and CMBS4-like
experiments. In Table 2, we provide the adopted specifications for these
setups. The Gaussian detector noise is modeled as
$C^{T,E/B}_{\ell}|_{\mathrm{noise}}=(\Delta_{T,P})^{2}e^{\ell(\ell+1)\sigma^{2}/(8\ln{2})},$
(63)
where $\Delta_{T,P}$ denote the detector white-noise levels (in $\mu$K-radian
in this formula, though quoted in $\mu$K-arcmin in Table 2), and $\sigma$ is
the full width at half maximum (FWHM) of the beam, converted to radians from
the arcmin values quoted in Table 2.
Experiment | $\ell_{\mathrm{max}}$ | $\Delta_{T}$ [$\mu\mathrm{K}$-arcmin] | $\Delta_{P}$ [$\mu\mathrm{K}$-arcmin] | $\sigma$ [arcmin]
---|---|---|---|---
Planck | 3000 | 35.0 | 60.0 | 5.0
SO | 3000 | 8.0 | 8.0$\sqrt{2}$ | 1.4
CMBS4 | 3000 | 1.0 | 1.0$\sqrt{2}$ | 1.0
Table 2: Experimental specifications used in this work.
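Eq. (63) is straightforward to evaluate; a short sketch (which assumes the $\mu$K-arcmin and arcmin units of Table 2 are converted to radians before use) is:

```python
import numpy as np

def detector_noise(ell, delta_uK_arcmin, fwhm_arcmin):
    """Gaussian detector noise, Eq. (63):
    N_ell = Delta^2 exp(ell(ell+1) sigma^2 / (8 ln 2)),
    with Delta and sigma converted from arcmin to radians."""
    arcmin_to_rad = np.pi / (180.0 * 60.0)
    delta = delta_uK_arcmin * arcmin_to_rad   # white noise in muK-radian
    sigma = fwhm_arcmin * arcmin_to_rad       # beam FWHM in radians
    return delta**2 * np.exp(ell * (ell + 1) * sigma**2 / (8.0 * np.log(2)))

# Planck-like temperature channel from Table 2: 35 muK-arcmin, 5' beam.
ell = np.arange(2, 3001)
nl_tt = detector_noise(ell, 35.0, 5.0)
```

The noise is flat ($\approx\Delta_T^2$) on large scales and grows exponentially once the multipole exceeds the beam scale.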
It has been shown that extra-galactic foregrounds can bias the CMB lensing
reconstruction from temperature maps and different strategies have been
proposed to mitigate this issue (e.g. Ferraro and Hill, 2018; Schaan and
Ferraro, 2019; Madhavacheril and Hill, 2018; Darwish _et al._ , 2021; van
Engelen _et al._ , 2014). We neglect the foregrounds in this study, and
simply choose $\ell_{\mathrm{max}}=3000$ in both temperature and polarization.
Although it is possible to go for a much higher $\ell_{\mathrm{max}}$ in
polarization than in temperature due to lack of strongly polarized
foregrounds, for simplicity we take
$\ell^{T}_{\mathrm{max}}=\ell^{P}_{\mathrm{max}}$. We have checked our results
with different $\ell_{\mathrm{max}}$ ranges and found no drastic difference in
our results.
### VI.2 Comparison of reconstruction noises
We start by comparing the OH03 and HO02 quadratic estimators in Figure 1. We
find that the HO02 estimator is systematically less noisy than the OH03
estimator for large angular scales (by less than 0.5%). Interestingly, the
OH03 estimator becomes slightly less noisy than the HO02 estimator for $L$
larger than several hundreds. We have checked that our numerical integrals are
converged to better than 0.01% relative accuracy up to $L\approx 2000$, and to
better than 0.03% for $L\lesssim 3000$ which we also show in Fig. 4. This
gives us confidence that the $\sim 0.08\%$ improvement of the OH03 estimator over
HO02 seen for a CMBS4-like experiment is real and not a numerical artifact.
The lower noise of OH03 at small angular scales may appear surprising at
first, given that this estimator uses a TE estimator suboptimal to that of
HO02. However, the suboptimal TE estimator does not guarantee that the overall
OH03 estimator (obtained by optimally combining the five single-pair
estimators) is noisier than the overall HO02 estimator: indeed, if the OH03 TE
estimator happens to be more correlated with the TT and EE estimators than the
HO02 TE estimator, which is the case here, then the overall combination of OH03
estimators can be less noisy than that of HO02.
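This counterintuitive point is easy to reproduce with toy numbers (chosen purely for illustration, not the actual estimator covariances): a partner estimator that is individually noisier but strongly correlated with the first can yield a lower combined variance $N=1/(\mathbf{1}^{T}C^{-1}\mathbf{1})$ than a less noisy but weakly correlated one.

```python
import numpy as np

def combined_var(cov):
    """Variance of the minimum-variance unbiased combination: 1/(1^T C^{-1} 1)."""
    ones = np.ones(cov.shape[0])
    return 1.0 / (ones @ np.linalg.solve(cov, ones))

# Estimator A (variance 1) combined with a less noisy, weakly correlated partner...
n1 = combined_var(np.array([[1.0, 0.3], [0.3, 1.1]]))
# ...versus a noisier but strongly correlated partner (covariance still
# positive definite: det = 1.2 - 1.09^2 > 0).
n2 = combined_var(np.array([[1.0, 1.09], [1.09, 1.2]]))
```

Here the strongly correlated pair wins (`n2 < n1`), because the combination can exploit near-cancellation of the common noise.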
Figure 1: Fractional difference between the variance of the OH03 and HO02
estimators. Different colors correspond to the different experimental setups
described in Sec. VI.1.
In Fig. 2, we show the ratios of the reconstruction noise of the GMV and HO02
estimators to that of the SQE estimator for different experimental setups. As
expected, we find that the variance of the GMV quadratic estimator is lower
than that of the HO02 and SQE estimators. Also, the variance of HO02 estimator
is smaller than the SQE estimator. For a Planck-like experimental setup, on
large angular scales ($L\lesssim 500$), the difference between the SQE and GMV
estimators is of the order of 3% and can reach $\sim 6\%$ around $L\sim 2000$.
However, for more sensitive experiments, this difference reaches $\sim 11\%$
and $\sim 12\%$ at $L\lesssim 100$ and $L\sim 2000$ for SO- or CMBS4-like
experiments, respectively. This result may motivate using the full covariance
matrix $[\boldsymbol{C}_{l}]$ in Eq. (50) instead of assuming $C_{l}^{TE}=0$,
in order to obtain more precise results in future data analyses.
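The filtering difference between GMV and SQE amounts to a one-line change in the per-multipole $2\times 2$ $(T,E)$ covariance block; a minimal sketch (illustrative spectrum values, not actual $C_{\ell}$'s) is:

```python
import numpy as np

def te_filter(cl_tt, cl_ee, cl_te, keep_te=True):
    """Per-multipole inverse of the 2x2 (T,E) covariance block used in the
    filtering step. keep_te=False reproduces the SQE approximation of
    dropping C_l^TE from the inverse filter matrix."""
    te = cl_te if keep_te else 0.0
    C = np.array([[cl_tt, te], [te, cl_ee]])
    return np.linalg.inv(C)

# Illustrative spectra in muK^2 (hypothetical values for demonstration).
full = te_filter(5e3, 40.0, 120.0)                 # GMV-style filter
sqe = te_filter(5e3, 40.0, 120.0, keep_te=False)   # SQE approximation
```

With the TE block dropped, the filter is diagonal and the temperature and polarization maps can be filtered independently, which is what makes the SQE convenient on a cut sky.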
Figure 2: Ratio of the minimum variance reconstruction noise of the GMV and
SQE estimators, and HO02 and SQE estimators for different experimental setups.
From Fig. 2, we can also see that for a Planck-like experimental setup, at
small angular scales ($L\gtrsim 1000$) the GMV and HO02 estimators perform
almost equally well, while for SO- and CMBS4-like experiments the GMV outperforms
HO02 everywhere. On large angular scales, the difference between the GMV and
HO02 is much more significant for all the experiments considered here. For
Planck, polarization noise is significant, so the difference in lensing
estimators is quite small on all scales. However, although CMBS4-like
experiments are $EB$-dominated for the purpose of lensing reconstruction, the
improvement of the GMV over HO02 is driven by a significant difference in the
TT-TE-EE part of the minimum-variance estimator (rather than improved
filtering of $E$ improving $EB$; see Appendix C). The effect on the combined
MV estimator is largest at $L\lesssim 200$ where the MV estimator has the
largest contribution from TT-TE-EE.
## VII Conclusions
Quadratic estimators (QEs) are widely used to reconstruct the CMB lensing
potential from CMB temperature and polarization maps. In fact, up until the
very recent POLARBEAR Adachi _et al._ (2020) and SPTPol Millea _et al._
(2020) results, all maps of the CMB lensing potential have been constructed
using QEs. In this work, we present a clear comparison between different QEs,
both in terms of explicit equations, and quantitatively, by comparing their
reconstruction noise. Importantly, we show that the Hu-Okamoto Hu and Okamoto
(2002) (HO02) optimization method, which consists of first constructing optimal
single-pair quadratic estimators and then taking their optimal linear combination,
does not lead to the absolute minimum-variance QE. Instead, we derive the
global-minimum-variance (GMV) QE, which minimizes the variance of quadratic
temperature and polarization combinations in one single step.
Interestingly, the GMV estimator derived here had been hiding in plain sight
in previous works. It is equivalent to the first step (or weak-signal limit)
of likelihood-based methods Hirata and Seljak (2003a); Peloton _et al._
(2017); Carron and Lewis (2017), which is therefore _not_ equivalent to the
HO02 estimator, contrary to what was previously thought (although technically
the GMV estimator with non-perturbative lensed gradient weights presented here
is a modification to the first-step of likelihood based estimators so that the
result is non-perturbatively unbiased). Our work is the first to note that the
HO02 estimator is not the global-optimum quadratic estimator, and to make this
point sharply clear through explicit expressions, as well as numerical
comparisons. Indeed, we show that the reconstruction noise of the GMV
estimator is lower than that of the HO02 estimator by up to $\sim 9\%$ for a
SO-like experiment.
We also study the suboptimal QE used in the 2018 Planck Ade _et al._ (2016a);
Carron and Lewis (2017); Aghanim _et al._ (2020) and recent SPT Wu _et al._
(2019) lensing analyses (SQE), which is obtained from the GMV quadratic
estimator (appropriately generalized to account for beam and pixel
convolution), with the additional approximation of setting $C_{l}^{TE}=0$
in the inverse filter matrix. We show that this approximation makes the SQE
suboptimal not only relative to the GMV estimator, but also relative to the
HO02 and OH03 estimators. We evaluate the reconstruction noise of the
different estimators for ongoing and planned CMB experimental setups and find
that while the improvement in the reconstruction noise between the SQE and
HO02 estimator is of order $\sim 1-8\%$, the difference between the SQE and
GMV estimators is $\sim 9-12\%$ for more sensitive experiments, especially on
large angular scales $L\lesssim 10^{2}$ and scales around $L\sim 2000$. This
improvement amounts to achieving a better sensitivity for the same experiment
at no additional cost. This should motivate overcoming the added complexity
associated with joint filtering of cut-sky temperature and polarization maps,
in order to be able to use the GMV estimator in future lensing data analyses.
Our Python code
GlobalLensQuest (https://github.com/abhimaniyar/GlobalLensQuest), which compares
and computes the noise variances of the HO02, OH03, GMV, and SQE estimators, and
Julien Carron’s codes LensIt (https://github.com/carronj/LensIt) and
plancklens (https://github.com/carronj/plancklens), which can perform the
optimal GMV operation with anisotropic noise and cut-sky, are publicly
available.
While in this work we have chosen to present relevant equations in the flat-
sky limit for conciseness, it is straightforward to generalize our results to
the full-sky case. The flat-sky fields $X(\boldsymbol{l})$ are to be replaced
by the full-sky harmonic coefficients $X_{\ell m}$; the generalization of Eq.
(2) and of the coupling coefficients
$f_{XY}(\boldsymbol{l},\boldsymbol{l}^{\prime})$ are then provided in Ref.
Okamoto and Hu (2003). While Okamoto and Hu (2003) provide the full-sky
expressions for the HO02 estimator, Carron (2019) provides the full-sky
expressions for the GMV estimator incorporating the instrumental beam response
and anisotropic noise. We expect a comparable improvement over the full-sky
version of the HO02 estimator Okamoto and Hu (2003) when using the GMV
estimator Carron (2019).
The approach presented here for the GMV estimator would also apply to any
other joint estimator constructed from similar linear combinations of other
estimators. For example, the foreground-immune hybrid QE of Ref. Schaan and
Ferraro (2019) splits the $TT$ lensing estimator into magnification-only and
shear-only estimators, and then forms a hybrid estimator through a minimum-
variance linear combination of these two estimators. Their hybrid estimator
can be further optimized by following the logic presented here, i.e. searching
for the global-minimum-variance shear and magnification estimator, accounting
for correlations for each $(\boldsymbol{l}_{1},\boldsymbol{l}_{2})$, rather
than after integration over $\boldsymbol{l}_{1},\boldsymbol{l}_{2}$.
For SO- and CMBS4-like experiments, on large angular scales ($L<100$) the
reconstruction is expected to be signal dominated Ade _et al._ (2019);
Abazajian _et al._ (2019). The power spectrum $C^{\phi\phi}_{L}$ uncertainty
is therefore dominated by cosmic variance and using the GMV estimator rather
than the HO02 or SQE estimators would not drastically affect the measurement
of $C^{\phi\phi}_{L}$ on these large angular scales. However, the reduction in
the noise of the reconstructed $\phi$ field on the signal-dominated large
angular scales will be beneficial for science goals which involve cross-
correlation of the $\phi$ field with other tracers of large-scale structure
Ade _et al._ (2019); Schmittfull and Seljak (2018), utilizing the sample
variance cancellation through cross-correlations. Also, lensing induced
B-modes act as a source of noise and limit the measurement of the primordial
B-modes Zaldarriaga and Seljak (1998), a major scientific goal for CMB
experiments. These modes can be removed using map-level estimates of both the
primordial E-modes and the lensing potential $\phi$, a technique called
delensing, which depends on the estimate of the particular realization of $\phi$
in the given patch of the sky Seljak and Hirata (2004); Ade _et al._ (2019);
Abazajian _et al._ (2019). Lower noise estimates of the $\phi$ field will
therefore be crucial for such operations and motivate using the GMV estimator
instead of other QEs.
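The cosmic-variance argument above can be made quantitative with the standard Gaussian per-multipole band-power error; the sketch below uses an illustrative $f_{\rm sky}$ and spectrum values, not forecasts for any specific experiment:

```python
import numpy as np

def cl_phiphi_sigma(L, cl_pp, n0, fsky=0.4):
    """Gaussian per-multipole error on the lensing power spectrum:
    sigma(C_L) = sqrt(2 / ((2L+1) f_sky)) * (C_L^{phiphi} + N^{(0)}(L)).
    In the signal-dominated regime (N0 << C_L) the error saturates at the
    cosmic-variance floor, so further lowering N0 barely helps C_L itself."""
    return np.sqrt(2.0 / ((2 * L + 1) * fsky)) * (cl_pp + n0)

# Hypothetical numbers: signal C_L = 1e-7 with N0 ten times smaller.
L = 50
cv_floor = cl_phiphi_sigma(L, 1e-7, 0.0)   # pure cosmic-variance limit
with_n0 = cl_phiphi_sigma(L, 1e-7, 1e-8)   # only a 10% degradation
```

This is why the GMV noise reduction matters mostly for cross-correlations and delensing, which use the reconstructed $\phi$ map itself rather than its power spectrum.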
Even if future CMBS4 experiments will likely use likelihood-based iterative
methods to reconstruct the lensing potential, QEs will remain very useful as a
forecasting and cross-checking tool. More immediately, QEs are still the
primary lensing reconstruction tool for current and near-future CMB
experiments. The GMV estimator will therefore be a useful tool to harvest even
more information out of the CMB data.
## Acknowledgements
We acknowledge useful discussions with Colin Hill, Emmanuel Schaan, and Simone
Ferraro. YAH is supported by NSF award No 1820861 and NASA grant No
80NSSC20K0532. JC acknowledges support from a SNSF Eccellenza Professorial
Fellowship (No. 186879). AL has support from the UK STFC grant ST/T000473/1.
## Appendix A Explicit form for the GMV $N^{(1)}(L)$
As pointed out in Sec. II, in the main text we only optimize relative to the
reconstruction noise $N^{(0)}(L)$. Here we give an explicit expression for the
additional $N^{(1)}(L)$ bias Kesden _et al._ (2003) for the GMV estimator,
for which an explicit expression has not been provided in the literature.
The covariance of the GMV estimator given by Eq. (37) becomes
$\displaystyle\langle\hat{\phi}(\boldsymbol{L})\hat{\phi}(\boldsymbol{L}^{\prime})\rangle=\int_{\begin{subarray}{c}\boldsymbol{l}_{1}+\boldsymbol{l}_{2}\\\
=\boldsymbol{L}\end{subarray}}\int_{\begin{subarray}{c}\boldsymbol{l}^{\prime}_{1}+\boldsymbol{l}^{\prime}_{2}\\\
=\boldsymbol{L}^{\prime}\end{subarray}}$ $\displaystyle\langle
X^{i}(\boldsymbol{l}_{1})X^{j}(\boldsymbol{l}_{2})X^{p}(\boldsymbol{l}^{\prime}_{1})X^{q}(\boldsymbol{l}^{\prime}_{2})\rangle\Xi_{ij}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})\Xi_{pq}(\boldsymbol{l}^{\prime}_{1},\boldsymbol{l}^{\prime}_{2})\,.$
(64)
Evaluating the expectation value in the brackets
$\displaystyle\langle
X^{i}(\boldsymbol{l}_{1})X^{j}(\boldsymbol{l}_{2})X^{p}(\boldsymbol{l}^{\prime}_{1})X^{q}(\boldsymbol{l}^{\prime}_{2})\rangle$
$\displaystyle=$
$\displaystyle(2\pi)^{4}\big{[}C^{ij}_{l_{1}}C^{pq}_{l^{\prime}_{1}}\delta_{\rm
D}(\boldsymbol{L})\delta_{\rm
D}(\boldsymbol{L}^{\prime})+C^{ip}_{l_{1}}C^{jq}_{l^{\prime}_{1}}\delta_{\rm
D}(\boldsymbol{l}_{1}+\boldsymbol{l}^{\prime}_{1})\delta_{\rm
D}(\boldsymbol{l}_{2}+\boldsymbol{l}^{\prime}_{2})$ (65)
$\displaystyle+C^{iq}_{l_{1}}C^{jp}_{l^{\prime}_{1}}\delta_{\rm
D}(\boldsymbol{l}_{1}+\boldsymbol{l}^{\prime}_{2})\delta_{\rm
D}(\boldsymbol{l}_{2}+\boldsymbol{l}^{\prime}_{1})\big{]}+(2\pi)^{2}T^{ijpq}(\boldsymbol{l}_{1},\boldsymbol{l}_{2},\boldsymbol{l}^{\prime}_{1},\boldsymbol{l}^{\prime}_{2})\delta_{\rm
D}(\boldsymbol{L}+\boldsymbol{L}^{\prime})\,,$
where the first term in the square brackets disappears because
$\boldsymbol{L}\neq 0$ and the rest of the terms in the square brackets represent
$N^{(0)}(L)$ and
$T^{ijpq}(\boldsymbol{l}_{1},\boldsymbol{l}_{2},\boldsymbol{l}^{\prime}_{1},\boldsymbol{l}^{\prime}_{2})$,
the trispectrum containing terms that contribute to the lensing power spectrum
signal and the signal-dependent $N^{(1)}(L)$ bias. Following Kesden _et al._
(2003), the trispectrum term can be written in terms of
$f_{ij}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})$ to first order in explicit
$C^{\phi\phi}_{L}$ as
$\displaystyle
T^{ijpq}(\boldsymbol{l}_{1},\boldsymbol{l}_{2},\boldsymbol{l}^{\prime}_{1},\boldsymbol{l}^{\prime}_{2})=C^{\phi\phi}_{|\boldsymbol{l}_{1}+\boldsymbol{l}_{2}|}f_{ij}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})f_{pq}(\boldsymbol{l}^{\prime}_{1},\boldsymbol{l}^{\prime}_{2})+C^{\phi\phi}_{|\boldsymbol{l}_{1}+\boldsymbol{l}^{\prime}_{1}|}f_{ip}(\boldsymbol{l}_{1},\boldsymbol{l}^{\prime}_{1})f_{jq}(\boldsymbol{l}_{2},\boldsymbol{l}^{\prime}_{2})+C^{\phi\phi}_{|\boldsymbol{l}_{1}+\boldsymbol{l}^{\prime}_{2}|}f_{iq}(\boldsymbol{l}_{1},\boldsymbol{l}^{\prime}_{2})f_{jp}(\boldsymbol{l}_{2},\boldsymbol{l}^{\prime}_{1}).~{}~{}~{}~{}$
(66)
Substituting Eqs. (65) and (66) in Eq. (64) we have
$\langle\hat{\phi}(\boldsymbol{L})\hat{\phi}(\boldsymbol{L}^{\prime})\rangle=(2\pi)^{2}\delta_{\rm
D}(\boldsymbol{L}+\boldsymbol{L}^{\prime})[C^{\phi\phi}_{L}+N^{(0)}(L)+N^{(1)}(L)]\,,$
(67)
where
$\displaystyle N^{(1)}(L)$ $\displaystyle\equiv$
$\displaystyle\int_{\begin{subarray}{c}\boldsymbol{l}_{1}+\boldsymbol{l}_{2}\\\
=\boldsymbol{L}\end{subarray}}\int_{\begin{subarray}{c}\boldsymbol{l}^{\prime}_{1}+\boldsymbol{l}^{\prime}_{2}\\\
=\boldsymbol{L}^{\prime}\end{subarray}}\Xi_{ij}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})\Xi_{pq}(\boldsymbol{l}^{\prime}_{1},\boldsymbol{l}^{\prime}_{2})\times\big{[}C^{\phi\phi}_{|\boldsymbol{l}_{1}+\boldsymbol{l}^{\prime}_{1}|}f_{ip}(\boldsymbol{l}_{1},\boldsymbol{l}^{\prime}_{1})f_{jq}(\boldsymbol{l}_{2},\boldsymbol{l}^{\prime}_{2})+C^{\phi\phi}_{|\boldsymbol{l}_{1}+\boldsymbol{l}^{\prime}_{2}|}f_{iq}(\boldsymbol{l}_{1},\boldsymbol{l}^{\prime}_{2})f_{jp}(\boldsymbol{l}_{2},\boldsymbol{l}^{\prime}_{1})\big{]}\,,$
(68)
The optimal weight matrix $\Xi_{ij}(\boldsymbol{l}_{1},\boldsymbol{l}_{2})$
was determined in order to minimize the variance while only considering
$N^{(0)}(L)$.
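As a quick sanity check of the disconnected (Gaussian) part of Eq. (65), Wick's theorem for a zero-mean Gaussian four-point function, $\langle x_{i}x_{j}x_{p}x_{q}\rangle=C_{ij}C_{pq}+C_{ip}C_{jq}+C_{iq}C_{jp}$, can be verified by Monte Carlo (toy $2\times 2$ covariance, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy covariance standing in for the covariance of the fields X^i.
C = np.array([[2.0, 0.5], [0.5, 1.0]])
x = rng.multivariate_normal([0.0, 0.0], C, size=1_000_000)

# Wick's theorem for <x0 x0 x1 x1>: C00*C11 + 2*C01^2 for zero-mean Gaussians.
lhs = np.mean(x[:, 0] ** 2 * x[:, 1] ** 2)
rhs = C[0, 0] * C[1, 1] + 2.0 * C[0, 1] ** 2
```

With $10^{6}$ samples the Monte Carlo estimate agrees with the Wick prediction (here 2.5) to within a few per mille; the connected piece that survives after subtracting these Gaussian contractions is the trispectrum term of Eq. (65).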
Figure 3: Comparison of the $N^{(0)}(L)$ and $N^{(1)}(L)$ curves for the GMV
estimator (left) and ratio of the $N^{(1)}(L)$ for SQE and GMV estimators
(right) for given experimental configurations.
In the left panel of Fig. 3, we show the comparison between the $N^{(0)}(L)$
and $N^{(1)}(L)$ curves for the GMV estimator for our given experimental
configurations. We find that for Planck-like experiments $N^{(1)}(L)$ is a
couple of orders of magnitude smaller than $N^{(0)}(L)$, while for more
sensitive (less noisy) SO- and CMBS4-like experiments, $N^{(1)}(L)$ is a
factor of a few to an order of magnitude smaller than $N^{(0)}(L)$. On small
scales, however, it can become comparable to the signal spectrum and is
important to model in any likelihood analysis. It would be straightforward to
apply the perturbative likelihood approximation of Ade _et al._ (2016a);
Aghanim _et al._ (2020) (which accounts consistently for the signal
dependence of $N^{(1)}(L)$) to the GMV estimator. In the right panel of Fig.
3, we show the ratio of the $N^{(1)}(L)$ for SQE and GMV estimators for
different experiments considered here. For SO- and CMBS4-like experiments, the
$N^{(1)}(L)$ bias for the GMV estimator is smaller than for the SQE estimator
for $L\lesssim 1800$. For Planck-like experiments, apart from the large
angular scales $L\lesssim 200$, where the GMV estimator gives a smaller $N^{(1)}(L)$
bias than the SQE estimator, both estimators have almost the same
$N^{(1)}(L)$ bias.
## Appendix B Numerical convergence
In Fig. 4, we show the result of the convergence test we perform for our
Python code. It shows the percentage change in the noise calculation of the HO02 (dashed
curves) and OH03 (solid curves) estimators when we double the number of steps
in the angular part of the integration for the given $\ell_{\rm max}$ and
$L\lesssim 3000$. The dashed HO02 curves mostly overlap with the solid OH03
curves and thus are not distinctly visible. This shows that our numerical
integrals are converged to better than 0.01% relative accuracy up to $L\approx
2000$, and to better than 0.03% for $L\lesssim 3000$, and thus makes us
confident that the improvement observed for OH03 over HO02 for small angular
scales is not a numerical artifact.
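The step-doubling test described above can be sketched generically (toy integrand shown here; our actual code applies the same check to the angular part of the noise integrals):

```python
import numpy as np

def _trapz(y, x):
    """Composite trapezoidal rule (written out to avoid NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def convergence_check(integrand, a, b, n):
    """Appendix-B-style test: double the number of integration steps and
    report the relative change between the two estimates."""
    x1 = np.linspace(a, b, n + 1)
    x2 = np.linspace(a, b, 2 * n + 1)
    i1 = _trapz(integrand(x1), x1)
    i2 = _trapz(integrand(x2), x2)
    return i2, abs(i2 - i1) / abs(i2)

# A smooth integrand converges rapidly under step doubling.
value, rel_change = convergence_check(np.sin, 0.0, np.pi, 200)
```

If the relative change upon doubling is well below the effect being measured (here, the sub-percent OH03/HO02 difference), the improvement cannot be an integration artifact.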
Figure 4: Percentage change in the noise curves for the HO02 and OH03 estimators when we
double the number of integration steps in our Python code.
## Appendix C GMV to HO02 comparison
We perform the following exercise to compare the GMV and HO02 estimators. As
mentioned at the end of Sec. IV.1, the GMV estimator can be split into two
independent estimators $\hat{\phi}^{\rm GMV}_{\\{TT,TE,EE\\}}$ and
$\hat{\phi}^{\rm GMV}_{\\{TB,EB\\}}$. We compare the minimum variance
reconstruction noise of these individual estimators with their HO02
counterparts i.e. $\hat{\phi}^{\rm HO02}_{\\{TT,TE,EE\\}}$ and
$\hat{\phi}^{\rm HO02}_{\\{TB,EB\\}}$. This is shown in Fig. 5, where we plot
the ratio of the reconstruction noises for the GMV and HO02 versions of these
two estimators. As we can see, HO02 performs almost as well as the GMV
estimator for the $\\{TB,EB\\}$ pair. The overall improvement of the GMV
estimator over HO02 is thus mainly driven by $\\{TT,TE,EE\\}$, especially for
more sensitive SO- and CMBS4-like experiments. We also show an ideal case
setup in Fig. 5 which corresponds to a noise-less experiment with the same
multipole ranges as the other experiments considered. The HO02 estimator for the
CMBS4-like setup performs almost as well as the GMV estimator for the
$\\{TB,EB\\}$ pair, and only very slightly under-performs for the
$\\{TT,TE,EE\\}$ set, when compared to the ideal-case setup.
Figure 5: Ratio of the minimum variance reconstruction noise of the GMV and
HO02 estimators for $\hat{\phi}_{\\{TT,TE,EE\\}}$ and
$\hat{\phi}_{\\{TB,EB\\}}$ estimators for different experimental setups. The
ideal setup corresponds to a noise-less case.
## References
* Blanchard and Schneider (1987) A. Blanchard and J. Schneider, Astron. Astrophys. 184, 1 (1987).
* Lewis and Challinor (2006) A. Lewis and A. Challinor, Phys. Rep. 429, 1 (2006), arXiv:astro-ph/0601594 [astro-ph.CO] .
* Seljak and Zaldarriaga (1999) U. Seljak and M. Zaldarriaga, Phys. Rev. Lett. 82, 2636 (1999), arXiv:astro-ph/9810092 [astro-ph] .
* Allison _et al._ (2015) R. Allison, P. Caucal, E. Calabrese, J. Dunkley, and T. Louis, Phys. Rev. D 92, 123535 (2015), arXiv:1509.07471 [astro-ph.CO] .
* Schmittfull and Seljak (2018) M. Schmittfull and U. Seljak, Phys. Rev. D 97, 123540 (2018), arXiv:1710.09465 [astro-ph.CO] .
* Das _et al._ (2011) S. Das _et al._ , Phys. Rev. Lett. 107, 021301 (2011), arXiv:1103.2124 [astro-ph.CO] .
* Sherwin _et al._ (2017) B. D. Sherwin _et al._ , Phys. Rev. D 95, 123529 (2017), arXiv:1611.09753 [astro-ph.CO] .
* van Engelen _et al._ (2012) A. van Engelen _et al._ , Astrophys. J. 756, 142 (2012), arXiv:1202.0546 [astro-ph.CO] .
* Ade _et al._ (2016a) P. A. R. Ade _et al._ (Planck Collaboration), Astron. Astrophys. 594, A15 (2016a), arXiv:1502.01591 [astro-ph.CO] .
* Aghanim _et al._ (2020) N. Aghanim _et al._ (Planck Collaboration), Astron. Astrophys. 641, A8 (2020), arXiv:1807.06210 [astro-ph.CO] .
* Omori _et al._ (2017) Y. Omori _et al._ , Astrophys. J. 849, 124 (2017), arXiv:1705.00743 [astro-ph.CO] .
* Story _et al._ (2015) K. T. Story _et al._ , Astrophys. J. 810, 50 (2015), arXiv:1412.4760 [astro-ph.CO] .
* Wu _et al._ (2019) W. L. K. Wu _et al._ , Astrophys. J. 884, 70 (2019), arXiv:1905.05777 [astro-ph.CO] .
* Millea _et al._ (2020) M. Millea _et al._ , (2020), arXiv:2012.01709 [astro-ph.CO] .
* Ade _et al._ (2016b) P. A. R. Ade _et al._ (BICEP2 Collaboration and Keck Array Collaboration), Astrophys. J. 833, 228 (2016b), arXiv:1606.01968 [astro-ph.CO] .
* Ade _et al._ (2014) P. A. R. Ade _et al._ (Polarbear Collaboration), Phys. Rev. Lett. 113, 021301 (2014), arXiv:1312.6646 [astro-ph.CO] .
* Adachi _et al._ (2020) S. Adachi _et al._ (Polarbear Collaboration), Phys. Rev. Lett. 124, 131301 (2020), arXiv:1909.13832 [astro-ph.CO] .
* Henderson _et al._ (2016) S. W. Henderson _et al._ , Journal of Low Temperature Physics 184, 772 (2016), arXiv:1510.02809 [astro-ph.IM] .
* Benson _et al._ (2014) B. A. Benson _et al._ , in _Millimeter, Submillimeter, and Far-Infrared Detectors and Instrumentation for Astronomy VII_, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9153, edited by W. S. Holland and J. Zmuidzinas (2014) p. 91531P, arXiv:1407.2973 [astro-ph.IM] .
* Ade _et al._ (2019) P. Ade _et al._ (Simons Observatory Collaboration), JCAP 2019, 056 (2019), arXiv:1808.07445 [astro-ph.CO] .
* Abazajian _et al._ (2016) K. N. Abazajian _et al._ , arXiv e-prints (2016), arXiv:1610.02743 [astro-ph.CO] .
* Abazajian _et al._ (2019) K. Abazajian _et al._ (CMB-S4 Collaboration), arXiv e-prints (2019), arXiv:1907.04473 [astro-ph.IM] .
* Zaldarriaga and Seljak (1999) M. Zaldarriaga and U. Seljak, Phys. Rev. D 59, 123507 (1999), arXiv:astro-ph/9810257 [astro-ph] .
* Hu (2001) W. Hu, Astrophys. J. Lett. 557, L79 (2001), arXiv:astro-ph/0105424 [astro-ph] .
* Hirata and Seljak (2003a) C. M. Hirata and U. Seljak, Phys. Rev. D 68, 083002 (2003a), arXiv:astro-ph/0306354 [astro-ph] .
* Hadzhiyska _et al._ (2019) B. Hadzhiyska, B. D. Sherwin, M. Madhavacheril, and S. Ferraro, Phys. Rev. D 100, 023547 (2019), arXiv:1905.04217 [astro-ph.CO] .
* Hirata and Seljak (2003b) C. M. Hirata and U. Seljak, Phys. Rev. D 67, 043001 (2003b), arXiv:astro-ph/0209489 [astro-ph] .
* Carron and Lewis (2017) J. Carron and A. Lewis, Phys. Rev. D 96, 063510 (2017), arXiv:1704.08230 [astro-ph.CO] .
* Millea _et al._ (2020) M. Millea, E. Anderes, and B. D. Wandelt, Phys. Rev. D 102, 123542 (2020), arXiv:2002.00965 [astro-ph.CO] .
* Hu and Okamoto (2002) W. Hu and T. Okamoto, Astrophys. J. 574, 566 (2002), arXiv:astro-ph/0111606 [astro-ph] .
* Lewis _et al._ (2011) A. Lewis, A. Challinor, and D. Hanson, JCAP 2011, 018 (2011), arXiv:1101.2234 [astro-ph.CO] .
* Fabbian _et al._ (2019) G. Fabbian, A. Lewis, and D. Beck, JCAP 2019, 057 (2019), arXiv:1906.08760 [astro-ph.CO] .
* Okamoto and Hu (2003) T. Okamoto and W. Hu, Phys. Rev. D 67, 083002 (2003), arXiv:astro-ph/0301031 [astro-ph] .
* Ferraro and Hill (2018) S. Ferraro and J. C. Hill, Phys. Rev. D 97, 023512 (2018), arXiv:1705.06751 [astro-ph.CO] .
* Schaan and Ferraro (2019) E. Schaan and S. Ferraro, Phys. Rev. Lett. 122, 181301 (2019), arXiv:1804.06403 [astro-ph.CO] .
* Kesden _et al._ (2003) M. Kesden, A. Cooray, and M. Kamionkowski, Phys. Rev. D 67, 123507 (2003), arXiv:astro-ph/0302536 [astro-ph] .
* Carron (2019) J. Carron, (2019), arXiv:1908.02016 [astro-ph.CO] .
* Peloton _et al._ (2017) J. Peloton, M. Schmittfull, A. Lewis, J. Carron, and O. Zahn, Phys. Rev. D 95, 043508 (2017), arXiv:1611.01446 [astro-ph.CO] .
* Madhavacheril and Hill (2018) M. S. Madhavacheril and J. C. Hill, Phys. Rev. D 98, 023534 (2018), arXiv:1802.08230 [astro-ph.CO] .
* Darwish _et al._ (2021) O. Darwish _et al._ , Mon. Not. Roy. Ast. Soc. 500, 2250 (2021), arXiv:2004.01139 [astro-ph.CO] .
* van Engelen _et al._ (2014) A. van Engelen, S. Bhattacharya, N. Sehgal, G. P. Holder, O. Zahn, and D. Nagai, Astrophys. J. 786, 13 (2014), arXiv:1310.7023 [astro-ph.CO] .
* Zaldarriaga and Seljak (1998) M. Zaldarriaga and U. Seljak, Phys. Rev. D 58, 023003 (1998), astro-ph/9803150 .
* Seljak and Hirata (2004) U. Seljak and C. M. Hirata, Phys. Rev. D 69, 043005 (2004), arXiv:astro-ph/0310163 [astro-ph] .
# Echoes of Compact Objects in Scalar-Tensor Theories of Gravity
Christoforos Vlachos, Eleftherios Papantonopoulos
Physics Division, National Technical University of Athens,
15780 Zografou Campus, Athens, Greece
Kyriakos Destounis
Theoretical Astrophysics, IAAT, University of Tübingen, 72076
Tübingen, Germany
###### Abstract
Scalar-tensor theory predicts solutions to the gravitational field equations
which describe compact objects in the presence of a scalar field non-minimally
coupled to the Einstein tensor. These objects are black holes with scalar
hair and wormholes supporting scalar phantom matter. The evolution of test
fields in fixed asymptotically-flat backgrounds of exotic compact objects
leads to the formation of echoes in the ringdown signal, which designate the
existence of trapping regions close to the event horizon. Here, we consider
minimally-coupled test scalar fields propagating on compact object solutions
of the Horndeski action, which possess an effective cosmological constant,
leading to anti-de Sitter asymptotics, and show that echoes can form in the
ringdown waveform due to the entrapment of test fields between the photon
sphere and the effective asymptotic boundary. Although the presence of an
event horizon leads to the usual echoes with decaying amplitude, signifying
modal stability of the scalarized black hole considered, we find that test
scalar fields propagating on a scalarized wormhole solution give rise to
echoes of constant amplitude, equal to that of the initial ringdown,
indicating the existence of normal modes. Finally, we find that, near
extremality, the test field exhibits a concatenation of echoes; the primary
ones are associated with the trapping region between the photon sphere and the
effective anti-de Sitter boundary while the secondary ones are linked to the
existence of a potential well at the throat of the wormhole.
###### Contents
1. 1 Introduction
2. 2 Exact compact objects in scalar-tensor theory
3. 3 Scalar field propagation on fixed gravitational backgrounds
4. 4 Time-domain integration scheme
5. 5 Propagation of perturbations on compact objects in scalar-tensor theory
1. 5.1 Black hole
2. 5.2 Wormhole
1. 5.2.1 Extremal wormhole
6. 6 Conclusions
## 1 Introduction
The direct observation of gravitational waves (GWs) produced during the
relativistic collision of two compact objects, offers exciting new
opportunities for the study of the nature of the colliding bodies. In the near
future, following the recent LIGO detections [1]-[5], GW astronomy will
provide us with a new understanding of the gravitational interaction and
astrophysics in extreme-gravity conditions. The recent observations do not yet
probe the detailed structure of spacetime beyond the photon sphere; however,
future GW observations are expected to bring the strong-gravity regime within
observational reach in the coming years. In particular, the expectation is
to precisely detect the ringdown phase, which is governed by a series of
damped oscillatory modes at early times, named quasinormal modes (QNMs)
[6]-[9], and may potentially contain unexpected anomalies due to new physics
at late times [10].
The expectation is that future GW observations will give us some information
on the nature and physics of the near-horizon region of black holes (BHs), and
whether this region exhibits any unexpected structure. Alternatives to BHs, that
is, objects without event horizons, were recently constructed, known as exotic
compact objects (ECOs) [11]-[14]. The existence of any structure at near-
horizon scales would generate a series of “echoes” of the primary
gravitational wave signal, produced during the ringdown phase [10, 15]. The
LIGO data have already been analyzed on the presence of echoes [16, 17, 18].
It is believed that the ringdown waveform is dominated by the QNMs of the
compact object remnant. Thus, the detection of overtones from the ringdown
signal allows for precision measurements of the characteristic parameters of
compact objects like the mass, charge and angular momentum. Various studies
suggest that the ringdown signal is dominated by the mode excitations of
photons trapped in unstable circular orbits at the photon sphere, namely the
photon sphere (PS) modes [19]-[26]. These QNMs are directly related to the
existence of the PS, and if the compact object is an asymptotically flat BH no
other oscillatory mode is excited. For ECOs, on the other hand, although the
PS excitations still exist at the early stage of the ringdown signal, as in
the case of BHs, they do not belong to the QNM spectrum [10, 15].
Wormholes are ECO solutions of the Einstein equations that connect different
parts of the Universe or two different Universes [27, 28]. Although wormholes
have distinct causal structures from BHs, they possess PSs and, therefore, can
disguise themselves as BHs in GW data if one only focuses on the early stage
of the ringdown signal. Lorentzian wormholes in General Relativity (GR) were
discussed in [29, 30, 31], where a static spherically-symmetric metric was
introduced and conditions for traversable wormholes were found. Unfortunately,
wormhole solutions of the Einstein equations lead to the violation of null
energy condition (NEC). A distribution of exotic or phantom matter
allows the formation of traversable wormhole geometries in GR. There have been
many efforts to build a wormhole with ordinary matter satisfying the NEC [31,
32, 33] in modified gravity theories like Brans-Dicke theory [34],
$f({\mathcal{R}})$ gravity [35], Einstein-Gauss-Bonnet theory [36], Einstein-
Cartan theory and general scalar-tensor theories [37].
A series of contemporary studies [10, 15] suggest that the ringdown signal may
provide conclusive proof for the formation of an event horizon. This
expectation is based on the assumption that an ECO would possess a reflective
surface beyond the PS instead of an event horizon. This would lead to the
existence of a trapping region between the surface of the ECO and PS where
perturbations could be confined and manifest themselves as echoes in the late
stage of the gravitational waveform. Then, by considering ringdown waveforms
of ECOs, such as wormholes, it has been claimed [10, 15] that precision
observations of the late-time ringdown signal can distinguish between the
formation of ECOs or BHs.
In this work we will consider the ringdown phase of exact compact object
solutions of scalar-tensor theory, which is part of the Horndeski class of
solutions [38] that give rise to second order field equations in four
dimensions [39, 40, 41] (for a review of this class of Horndeski theories see
[42]). The BH [43]-[48] and wormhole [49] solutions we consider arise from a
gravitational action with a real or phantom scalar field, respectively, non-
minimally coupled to the Einstein tensor. These exact solutions encode the
‘gravitational’ scalar, and its coupling strength, in the metric tensor
components as a primary charge [44, 49], which asymptotically plays the role
of an effective negative cosmological constant. The motivation for considering
these objects is that they have a natural asymptotic reflective boundary, in
the external region of the PS, which may lead to different behaviour compared
with the case of flat spacetimes. The anti-de Sitter (AdS) spacetime is
essential for holographic theories built on the gauge/gravity duality. The aim
of holography is to study strongly coupled
phenomena using dual gravitational systems where the coupling is weak [50].
This duality, which is well founded in string theory, has many interesting
applications and among them is condensed matter physics. In these theories,
AdS BHs play an essential role in the gravity sector in order to achieve the
abundant phase structure of the condensed matter system lying on the conformal
boundary (for a review, see [51]). These holographic theories stimulated the
extended study of AdS BHs, their formation and their stability, with their
QNMs describing the approach to thermal equilibrium in the dual conformal
field theory on the boundary [52].
Wormholes in AdS spacetimes were discussed in [53], in an attempt to yield
some information about the physics of closed Universes. Such discussion is
connected with the physics of inflation, and its connection with vacuum decay.
A unique realization of such ideas is the formation of baby Universes by
quantum tunneling, which eventually disconnect from the parent spacetime [54].
Recently, these ideas of connecting the physics of wormhole spacetimes to
baby-Universes were revisited in [55], using features associated with a
negative cosmological constant and asymptotically AdS boundaries.
Our main goal is to probe the ringdown of the aforementioned exact compact
object solutions of scalar-tensor theory, by considering the propagation of
linear test scalar perturbations minimally coupled to the metric, with the
hope that these two objects are discernible, even though both possess a PS and
effective AdS asymptotics. We will adopt the methodology used in [56, 57] and
introduce a new minimally-coupled test scalar field in the gravitational
action; the simplest case one can possibly envision. We note that the test
scalar field we utilize is linear and should not be confused with the
gravitational scalar of the theory, which backreacts to the metric to give
rise to the scalarized BH and wormhole solutions we consider. By introducing
this minimally-coupled linear test scalar field, we test the response of such
solutions to small fluctuations, which in turn encode the information of the
gravitational scalar that places a natural asymptotic effective boundary,
although a negative cosmological constant is absent from the action. More
complicated non-minimal couplings have been considered in such theories,
though the effect of a coupling between the test and gravitational scalar
leads to a critical coupling constant below which the QNM boundary conditions
are not satisfied [57].
The work is organized as follows. In Section 2 we review the BH and wormhole
solution with the non-minimal derivative coupling in the Horndeski scalar-
tensor theory. In Section 3 we derive the effective potentials for a test
scalar field scattered off the BH and wormhole. In Section 4 we discuss
the time-domain integration scheme. In Section 5 we study the propagation of
the test scalar field in the background of the BH and the wormhole. Finally,
in Section 6 we discuss our results and possible applications.
## 2 Exact compact objects in scalar-tensor theory
In this section, we briefly review two exact compact object solutions [44, 49]
of the Horndeski Lagrangian with non-minimal kinetic coupling
$S=\int
d^{4}x\sqrt{-g}\left\{\frac{\mathcal{R}}{8\pi}-\left[\varepsilon\,g_{\mu\nu}+\eta\,G_{\mu\nu}\right]\phi^{,\mu}\phi^{,\nu}\right\}\leavevmode\nobreak\
.$ (1)
Here, $\mathcal{R}$ is the Ricci scalar, $G_{\mu\nu}$ is the Einstein tensor,
$g_{\mu\nu}$ is the metric tensor with $g=\det g_{\mu\nu}$, $\phi$ is a real
massless scalar field and $\eta$ is a non-minimal coupling constant with
dimensionality length squared. In the case where $\varepsilon=1$, the theory
contains a canonical scalar field with a positive kinetic term, while when
$\varepsilon=-1$ the theory describes a phantom scalar field with a negative
kinetic term.
The BH solution found in [44] is static, spherically symmetric and possesses
an AdS-like boundary. The solution reads
$\displaystyle ds^{2}$
$\displaystyle=-f(r)dt^{2}+g(r)dr^{2}+r^{2}(d\theta^{2}+\sin^{2}\theta
d\varphi^{2})\leavevmode\nobreak\ ,$ (2)
where
$\displaystyle f(r)$
$\displaystyle=\frac{1}{4}\left(3-\frac{8\mu}{r}+\frac{r^{2}}{3l_{\eta}^{2}}+\frac{l_{\eta}}{r}\arctan\frac{r}{l_{\eta}}\right)\,,$
(3) $\displaystyle g(r)$
$\displaystyle=\frac{(r^{2}+2l_{\eta}^{2})^{2}}{(r^{2}+l_{\eta}^{2})^{2}\,4f(r)}\,,$
(4) $\displaystyle\Psi^{2}(r)$
$\displaystyle\equiv\left(\phi^{\prime}(r)\right)^{2}=-\frac{\varepsilon}{8\pi
l^{2}_{\eta}}\,\frac{r^{2}(r^{2}+2l_{\eta}^{2})^{2}}{(r^{2}+l_{\eta}^{2})^{3}\,4f(r)}\,.$
(5)
We would like to stress that this solution describes a BH only in the case
where $\varepsilon=1$ and $\varepsilon\eta<0$ (see [44, 49] for further
details). Here, $\mu$ is an integration constant that plays the role of mass
and $l_{\eta}=\sqrt{|\varepsilon\eta|}$ is a characteristic scale of the non-
minimal coupling. The inverse tangent function restricts the domain of $r$ to
$r\in(0,\infty)$. In the limit $r\rightarrow 0$ the function $f(r)$ yields
Schwarzschild asymptotics, i.e. $f(r)\approx 1-\frac{2\mu}{r}$, while for
$r\rightarrow\infty$ one obtains
$f(r)\approx\frac{3}{4}+\frac{r^{2}}{12l^{2}_{\eta}}$ i.e. AdS-like
asymptotics. Note that in the metric functions (3) and (4) the non-minimal
coupling $l_{\eta}$ is present, which is the strength of the coupling of the
scalar field to curvature. Therefore, the BH solution is dressed with a scalar
field given in Eq. (5).
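As a quick numerical check of the two limits quoted above, the toy script below (our own illustrative sketch; the function name `f_bh` and the sample radii are our choices, not from the paper) evaluates $f(r)$ of Eq. (3) and compares it with the Schwarzschild and AdS-like approximations.

```python
import math

def f_bh(r, mu, l_eta):
    """Metric function f(r) of the black hole solution, Eq. (3)."""
    return 0.25 * (3.0 - 8.0 * mu / r
                   + r**2 / (3.0 * l_eta**2)
                   + (l_eta / r) * math.atan(r / l_eta))

mu, l_eta = 0.1, 1.0

# Small-r limit: (l/r) arctan(r/l) ~ 1 - r^2/(3 l^2), which cancels the
# r^2/(3 l^2) term and leaves the Schwarzschild form f ~ 1 - 2 mu / r.
r_small = 1e-3
schw = 1.0 - 2.0 * mu / r_small
err_small = abs(f_bh(r_small, mu, l_eta) / schw - 1.0)

# Large-r limit: the arctan term is suppressed by 1/r, leaving the
# AdS-like form f ~ 3/4 + r^2 / (12 l^2).
r_large = 1e3
ads = 0.75 + r_large**2 / (12.0 * l_eta**2)
err_large = abs(f_bh(r_large, mu, l_eta) / ads - 1.0)

print(err_small, err_large)
```

Both relative errors are tiny at the sampled radii, consistent with the stated asymptotics.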
The wormhole solution found in [49], following the approach of [44], reads
(note that the coordinates $(t,\xi,\theta,\phi)$ used here are not
Schwarzschild coordinates, since $\xi$ is not the curvature radius of a
coordinate sphere $\xi=$const$>0$)
$ds^{2}=-f(\xi)dt^{2}+g(\xi)d\xi^{2}+(\xi^{2}+a^{2})(d\theta^{2}+\sin^{2}\theta
d\varphi^{2})\leavevmode\nobreak\ .$ (6)
where $\varepsilon\eta<0$, $\varepsilon=-1$ and
$\displaystyle g(\xi)$ $\displaystyle=$
$\displaystyle\frac{\xi^{2}(\xi^{2}+a^{2}+2l_{\eta}^{2})^{2}}{(\xi^{2}+a^{2})(\xi^{2}+a^{2}+l_{\eta}^{2})^{2}F(\xi)}\leavevmode\nobreak\
,$ (7) $\displaystyle f(\xi)$ $\displaystyle=$
$\displaystyle\frac{a}{\sqrt{\xi^{2}+a^{2}}}\exp\left[\int_{0}^{\xi}\frac{\xi(\xi^{2}+a^{2}+2l_{\eta}^{2})^{2}}{l_{\eta}^{2}(\xi^{2}+a^{2})(\xi^{2}+a^{2}+l_{\eta}^{2})F(\xi)}d\xi\right]\leavevmode\nobreak\
,$ (8) $\displaystyle\Psi^{2}(\xi)$ $\displaystyle\equiv$
$\displaystyle\left(\phi^{\prime}(\xi)\right)^{2}=-\frac{\varepsilon}{8\pi
l_{\eta}^{2}}\frac{\xi^{2}(\xi^{2}+a^{2}+2l_{\eta}^{2})^{2}}{(\xi^{2}+a^{2})(\xi^{2}+a^{2}+l_{\eta}^{2})^{3}F(\xi)}\leavevmode\nobreak\
,$ (9)
with
$F(\xi)=3-\frac{8\mu}{\sqrt{\xi^{2}+a^{2}}}+\frac{\xi^{2}+a^{2}}{3l_{\eta}^{2}}+\frac{l_{\eta}}{\sqrt{\xi^{2}+a^{2}}}\arctan\left(\frac{\sqrt{\xi^{2}+a^{2}}}{l_{\eta}}\right).$
(10)
Again, the non-minimal derivative coupling appears in the metric functions and
the phantom scalar field generating the wormhole is given in Eq. (9). The
function $F(\xi)$ has a minimum at $\xi=0$; thus, to make it positive definite,
one should demand $F(0)>0$. Hence, one derives an upper bound on the mass
parameter $\mu$
$2\mu<a\left(\frac{3}{4}+\frac{\alpha^{2}}{12}+\frac{1}{4\alpha}\arctan\alpha\right)\leavevmode\nobreak\
,$ (11)
where $\alpha\equiv a/l_{\eta}$ is a dimensionless parameter which defines the
ratio of the wormhole throat radius $a$ and the scale of the non-minimal
kinetic coupling $l_{\eta}$. Far from the throat, in the limit
$|\xi|\to\infty$, the metric functions $g(\xi)$ and $f(\xi)$ take the
asymptotic form
$g(\xi)=3\frac{l_{\eta}^{2}}{\xi^{2}}+O\left(\frac{1}{\xi^{4}}\right)\leavevmode\nobreak\
,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,f(\xi)=A\frac{\xi^{2}}{l_{\eta}^{2}}+O(\xi^{0})\leavevmode\nobreak\
,$ (12)
where $A$ depends on the parameters $a$, $l_{\eta}$, $\mu$ and can be
calculated only numerically. These asymptotics correspond to AdS space with
constant negative curvature. Close to the throat $\xi=0$ one finds
$g(\xi)=B\frac{\xi^{2}}{l_{\eta}^{2}}+O(\xi^{4})\leavevmode\nobreak\
,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,f(\xi)=1+O(\xi^{2})\leavevmode\nobreak\
,$ (13)
where $B$ depends on $\alpha$ and $\mu$. Moreover, there is a coordinate
singularity at $\xi=0$ where $g(0)=0$.
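The bound (11) follows directly from $F(0)>0$. As an illustrative numerical check (a toy script of our own; the names `F` and `mu_extreme` and the parameter values are our choices), one can verify that $F(0)$ vanishes exactly when $2\mu$ saturates the bound and is positive below it:

```python
import math

def F(xi, mu, a, l_eta):
    """Wormhole metric function F(xi), Eq. (10)."""
    rho = math.sqrt(xi**2 + a**2)   # the areal-type radius sqrt(xi^2 + a^2)
    return (3.0 - 8.0 * mu / rho + rho**2 / (3.0 * l_eta**2)
            + (l_eta / rho) * math.atan(rho / l_eta))

def mu_extreme(a, l_eta):
    """Upper mass bound from F(0) > 0, Eq. (11)."""
    alpha = a / l_eta
    return 0.5 * a * (0.75 + alpha**2 / 12.0
                      + math.atan(alpha) / (4.0 * alpha))

a, l_eta = 1.0, 1.0
mu_max = mu_extreme(a, l_eta)

# At mu = mu_max the minimum of F (attained at xi = 0) vanishes;
# for any smaller mass F(0) > 0 and the geometry is a regular wormhole.
print(mu_max, F(0.0, mu_max, a, l_eta), F(0.0, 0.9 * mu_max, a, l_eta))
```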
## 3 Scalar field propagation on fixed gravitational backgrounds
In this study, we will be interested in the response of the compact object
models described above against a linear test scalar field $\Phi$ minimally-
coupled to the metric, but not the gravitational scalar $\phi$ in (1). The
dynamical propagation of a linear massless scalar perturbation $\Phi$ on the
fixed background spacetime of a compact object, described by the metric tensor
$g_{\mu\nu}$, is governed by the Klein-Gordon equation
$\square\Phi=0\Longleftrightarrow\frac{1}{\sqrt{-g}}\partial_{\mu}\left[\sqrt{-g}g^{\mu\nu}\partial_{\nu}\Phi\right]=0\leavevmode\nobreak\
.$ (14)
Due to spherical symmetry we can decompose the scalar field
$\Phi(t,\rho,\theta,\phi)$ into radial and angular parts by introducing the
ansatz
$\Phi(t,\rho,\theta,\phi)=\frac{\psi(\rho,t)}{R(\rho)}\,Y_{lm}(\theta,\phi)\leavevmode\nobreak\
,$ (15)
where $Y_{lm}$ are the standard spherical harmonics, $\rho$ is a general
radial-like coordinate and $R(\rho)$ is a function of $\rho$, with $R(r)=r$ for
the BH and $R(\xi)=\sqrt{\xi^{2}+a^{2}}$ for the wormhole. Equation (14) can
then be recast into a Schrödinger-like form
$\displaystyle\left[\frac{\partial^{2}}{\partial
t^{2}}-\frac{\partial^{2}}{\partial\rho_{*}^{2}}+V(\rho)\right]\psi(\rho,t)=0\leavevmode\nobreak\
,$ (16)
with the effective potential given by
$\displaystyle
V(\rho)=f(\rho)\left(\,\frac{\ell(\ell+1)}{R(\rho)^{2}}+\frac{R^{{}^{\prime\prime}}(\rho)}{g(\rho)R(\rho)}+\frac{f^{\prime}(\rho)R^{\prime}(\rho)}{2g(\rho)f(\rho)R(\rho)}-\frac{g^{\prime}(\rho)R^{\prime}(\rho)}{2g^{2}(\rho)R(\rho)}\right)\leavevmode\nobreak\
,$ (17)
where $\ell$ is the angular quantum number and $\rho_{\ast}$ is the usual
tortoise coordinate defined by
$d\rho_{*}=\sqrt{\frac{g(\rho)}{f(\rho)}}\;d\rho\,.$
Equation (16) demonstrates that one is able to reduce the problem of scalar
perturbations around compact objects into a single one-dimensional scattering
problem with an effective potential. Applying this procedure to the BH (3)-(5)
and the wormhole (7)-(9), we find the corresponding effective potentials that a
test scalar field “feels” when propagating on these backgrounds.
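For concreteness, the potential (17) can be evaluated numerically on the BH background. The sketch below (an illustrative script of our own; the parameter values $\mu=0.1$, $l_\eta=1$ match Fig. 1, while the bisection bracket and finite-difference step are ad hoc) locates the horizon and tabulates $V(r)$ with $R(r)=r$, so $R'=1$ and $R''=0$:

```python
import math

MU, L = 0.1, 1.0  # mu = 0.1 and l_eta = 1, as in Fig. 1

def f_bh(r):
    """Eq. (3)."""
    return 0.25 * (3 - 8*MU/r + r**2/(3*L**2) + (L/r)*math.atan(r/L))

def g_bh(r):
    """Eq. (4)."""
    return (r**2 + 2*L**2)**2 / ((r**2 + L**2)**2 * 4*f_bh(r))

def V_eff(r, ell, h=1e-6):
    """Eq. (17) with R(r) = r; f' and g' by central finite differences."""
    f, g = f_bh(r), g_bh(r)
    fp = (f_bh(r + h) - f_bh(r - h)) / (2*h)
    gp = (g_bh(r + h) - g_bh(r - h)) / (2*h)
    return f * (ell*(ell + 1)/r**2 + fp/(2*g*f*r) - gp/(2*g**2*r))

# Event horizon: root of f(r) = 0, found by bisection on a bracket
# where f changes sign.
lo, hi = 0.15, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f_bh(mid) < 0 else (lo, mid)
r_h = hi

# V vanishes at the horizon and diverges at large r (the AdS-like boundary).
print(r_h, V_eff(r_h + 1e-4, 1), V_eff(10.0, 1), V_eff(100.0, 1))
```

The output reproduces the qualitative shape of the left panel of Fig. 1: $V\to 0$ at the horizon and $V$ grows without bound at large $r$.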
Figure 1: The effective potential of scalar perturbations with $\ell=1$ for
the black hole (3)-(5) (left) and the wormhole (7)-(9) (right) with throat
radius $a=1$, for three different values of $l_{\eta}$ and $\mu=0.1$.
Fig. 1 shows the effective potentials for various choices of non-minimal
coupling constants. We observe that the BH spacetime has a peak right outside
the event horizon, while asymptotically the effective potential diverges. Such
asymptotic divergence encodes the AdS-like nature of the spacetime. The
increase of $l_{\eta}$ leads to a more distant asymptotic boundary, which can
be understood from the fact that the non-minimal coupling has dimensionality of
length squared. In a sense, $l_{\eta}$ acts as an inverse cosmological
constant; in the limit $l_{\eta}\rightarrow\infty$ the spacetime becomes
asymptotically flat, corresponding to a zero cosmological constant. The
wormhole’s effective potential is clearly different
from that of the BH, as seen in Fig. 1. There is a single peak at the throat
$\xi=0$, which corresponds to the PS, while asymptotically the potential
diverges. The effect of $l_{\eta}$ is apparent in this case as well. Although
not proven explicitly here, our numerics indicate that both peaks of $V$ occur
close to the PS, since their amplitude is solely affected by $\ell$ which is
directly associated with the energy of null particles trapped in unstable
circular orbits at the PS.
## 4 Time-domain integration scheme
In this section we briefly demonstrate the numerical scheme of time-domain
integration, first proposed in [58], which yields the temporal response of the
scalar field as it propagates on a fixed background. By defining
$\psi(\rho_{\ast},t)=\psi(i\Delta\rho_{\ast},j\Delta t)=\psi_{i,j}$,
$V(\rho(\rho_{*}))=V(\rho_{\ast},t)=V(i\Delta\rho_{\ast},j\Delta t)=V_{i,j}$,
equation (16) takes the form
$\displaystyle\frac{\psi_{i+1,j}-2\psi_{i,j}+\psi_{i-1,j}}{\Delta\rho^{2}_{\ast}}-\frac{\psi_{i,j+1}-2\psi_{i,j}+\psi_{i,j-1}}{\Delta
t^{2}}-V_{i}\psi_{i,j}=0\,.$ (18)
Then, by using as initial condition a Gaussian wave-packet of the form
$\psi(\rho_{\ast},0)=\exp\left[-\frac{(\rho_{\ast}-c)^{2}}{2\sigma^{2}}\right]$
and $\psi(\rho_{\ast},t<0)=0$, where $c$ and $\sigma$ correspond to the median
and width of the wave-packet, we can derive the time evolution of the scalar
field $\psi$ by
$\displaystyle\psi_{i,j+1}=-\psi_{i,j-1}+\left(\frac{\Delta
t}{\Delta\rho_{\ast}}\right)^{2}\left(\psi_{i+1,j}+\psi_{i-1,j}\right)+\left(2-2\left(\frac{\Delta
t}{\Delta\rho_{\ast}}\right)^{2}-V_{i}\Delta t^{2}\right)\psi_{i,j}\,,$ (19)
where the von Neumann stability condition requires that $\frac{\Delta
t}{\Delta\rho_{\ast}}<1$. Moreover, the effective potential is positive and
vanishes at the event horizon (but not at the wormhole throat); however, it
diverges as $r\to\infty$ (or $|\xi|\to\infty$). This requires that $\psi$
should vanish at infinity for both compact objects in study, which corresponds
to reflective boundary conditions. To calculate the precise values of the
potential $V_{i}$, we integrate numerically the equation for the tortoise
coordinate and then solve with respect to the corresponding radial coordinate.
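This step can be sketched as follows (an illustrative script of our own for the BH background with $\mu=0.1$, $l_\eta=1$; the grid bounds and resolution are ad hoc, and the helper names `w` and `r_of_rho` are ours): integrate $d\rho_*/dr=\sqrt{g/f}$ by a cumulative trapezoid rule on a radial grid outside the horizon, then invert the monotone table by linear interpolation so that $V$ can be tabulated on the uniform $\rho_*$ grid of Eq. (18).

```python
import math
from bisect import bisect_left

MU, L = 0.1, 1.0

def f_bh(r):
    """Eq. (3)."""
    return 0.25 * (3 - 8*MU/r + r**2/(3*L**2) + (L/r)*math.atan(r/L))

# For the BH metric (2)-(4): sqrt(g/f) = (r^2 + 2L^2) / (2 f (r^2 + L^2))
def w(r):
    return (r**2 + 2*L**2) / (2*f_bh(r)*(r**2 + L**2))

# Cumulative trapezoid for rho_*(r) on a grid outside the horizon (r_h ~ 0.2)
r0, r1, n = 0.25, 60.0, 20000
h = (r1 - r0) / n
rs = [r0 + i*h for i in range(n + 1)]
rho = [0.0]
for i in range(n):
    rho.append(rho[-1] + 0.5*h*(w(rs[i]) + w(rs[i + 1])))

def r_of_rho(rho_star):
    """Invert the monotone table rho_*(r) by linear interpolation."""
    j = min(max(bisect_left(rho, rho_star), 1), n)
    t = (rho_star - rho[j - 1]) / (rho[j] - rho[j - 1])
    return rs[j - 1] + t*(rs[j] - rs[j - 1])
```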
Various convergence tests were performed throughout our numerical evolution,
with different integration steps and precision, to reassure the validity of
our ringdown profiles.
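The update rule (19) can be sketched as a minimal self-contained toy evolution. The following assumptions are ours, not the paper's: an illustrative Pöschl-Teller-type barrier stands in for the potentials of Fig. 1, a time-symmetric start (zero initial time derivative) replaces the strictly one-sided $\psi(t<0)=0$ prescription, and the grid sizes are ad hoc.

```python
import math

# Uniform grid in rho_*; Courant factor dt/dx = 0.5 < 1 (von Neumann).
N, dx = 800, 0.05
dt = 0.5 * dx
x = [(i - N // 2) * dx for i in range(N)]

# Illustrative barrier standing in for the photon-sphere peak of Fig. 1.
V = [0.3 / math.cosh(xi) ** 2 for xi in x]

# Gaussian wave-packet centred at c with width sigma, as in the text.
c, sigma = 10.0, 1.0
curr = [math.exp(-(xi - c) ** 2 / (2 * sigma ** 2)) for xi in x]
prev = curr[:]  # time-symmetric start: zero initial time derivative

r2 = (dt / dx) ** 2
i_obs, signal = N // 2 - 100, []   # observer sitting at x = -5
for step in range(1500):
    nxt = [0.0] * N   # psi = 0 at both ends: reflective boundaries
    for i in range(1, N - 1):
        nxt[i] = (-prev[i] + r2 * (curr[i + 1] + curr[i - 1])
                  + (2.0 - 2.0 * r2 - V[i] * dt * dt) * curr[i])
    prev, curr = curr, nxt
    signal.append(abs(curr[i_obs]))
```

With both endpoints held at $\psi=0$ the evolution stays bounded, and the pulse that crosses the barrier registers at the observer, mimicking the trapped, re-reflected signal discussed in the next section.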
## 5 Propagation of perturbations on compact objects in scalar-tensor theory
By applying the numerical procedure outlined above, we calculate the temporal
response of linear massless scalar field perturbations on the BH and wormhole
solutions discussed. In what follows, we assume the mass of both compact
objects to be $\mu=0.1$ (unless stated otherwise) and obtain the perturbation
response at a position arbitrarily close to the event horizon for the BH, and
at $\xi=0.01$ for the wormhole.
### 5.1 Black hole
Figure 2: Time evolution of scalar perturbations with $\ell=0$ (top panel)
$\ell=1$ (middle panel) and $\ell=2$ (bottom panel) on the black hole
background (3)-(5) with $\mu=0.1$ and $l_{\eta}=1$.
Fig. 2 displays the evolution of a linear scalar perturbation field on the
background of the BH solution (3)-(5). The most obvious effect we can observe
is the emergence of echoes following the initial quasinormal ringdown. This
pattern becomes more evident for higher $\ell$ due to the fact that more
energy is carried away from the PS when perturbed. For spherically-symmetric
$\ell=0$ perturbations, on the other hand, the echo pattern is not so evident
since the field does not excite the PS significantly, therefore the echoes
fall off rapidly. Our investigation confirms that the decay rate of scalar
perturbations follows an exponential fall-off, as in [52, 59], which is more
evident for the case of $\ell=0$. This behaviour is in contrast to that of
asymptotically flat BH perturbations, where the quasinormal ringing gives way
to a power-law cutoff [58, 60, 61], and its occurrence is related to the
asymptotic nature of timelike infinity in AdS spacetimes which serves as a
reflective boundary. The echoes have significantly smaller amplitudes compared
with the initial ringdown, which is in agreement with the studies in [56, 57,
62] and the dissipative nature of the event horizon, designating modal
stability.
It is worth noting that further analytical investigations of perturbations
in AdS BHs led to the conclusion that solutions of the Klein-Gordon equation
with fixed angular quantum number $\ell$, indeed decay exponentially [63].
However, an accumulation of all solutions, possessing finite energy, achieves
a logarithmic decay rate, due to the presence of stable trapping [64] (for a
discussion of the wave equation in the interior of AdS BHs see [65, 66]).
Figure 3: Time evolution of scalar perturbations with $\ell=2$ on the black
hole background (3)-(5) with $\mu=0.1$ and $l_{\eta}=4$ (left), $l_{\eta}=5$
(right). The black vertical dashed lines indicate the time one expects the
next echo to arrive, as calculated from (20).
In Fig. 3 the effect of the derivative coupling constant is illustrated. When
$l_{\eta}$ increases the effective AdS boundary moves further away from the
event horizon (see Fig. 1). As a consequence, the scalar wave reflected off
the PS has to travel a larger distance before it reaches the reflective AdS
boundary and returns to re-perturb the PS. Thus, the increase of $l_{\eta}$
leads to a delay of the echoes. It is important to note that the echoes appear
in this case, not due to trapping of waves between the PS and the surface of
the compact object, but rather due to the asymptotic nature of infinity. This
effect may have important implications for the AdS/CFT correspondence, if an
actual negative cosmological constant is included, where a ringdown in the bulk
corresponds to the approach to thermal equilibrium in the boundary CFT; a
sequence of ringdown signals, such as echoes, does not yet have a proper
boundary interpretation (though see [67, 68]).
To justify our statement we have computed numerically the time interval needed
for light to perform a round trip from the PS to the AdS boundary. For a
metric of the form (2) the characteristic time-scale is given by [10, 69]
$\displaystyle\Delta t=2\int_{PS}^{Boundary}\sqrt{\frac{g(r)}{f(r)}}\;dr\,.$
(20)
As can be seen from Fig. 3, the temporal location of the echoes, as obtained
from the numerical integration, is in good agreement with the values of
$\Delta t$ calculated from Eq. (20) (shown with dashed lines in Fig. 3). This
agreement further supports that the formation of echoes is due to the
secondary perturbations of the PS by the scalar field reflected off the
effective AdS boundary.
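To make the scaling with $l_\eta$ explicit, Eq. (20) can be evaluated directly. For the metric (2)-(4) one has $\sqrt{g/f}=(r^2+2l_\eta^2)/\left[2f(r)(r^2+l_\eta^2)\right]$, and for a static metric of this form the circular null orbit sits at the maximum of $f/r^2$. The sketch below (illustrative; the grid search for the PS, the truncation radius, and the analytic $6l_\eta^2/r^2$ tail correction are our choices) confirms that the round-trip time grows with $l_\eta$:

```python
import math

def f_bh(r, mu, l):
    """Eq. (3)."""
    return 0.25 * (3 - 8*mu/r + r**2/(3*l**2) + (l/r)*math.atan(r/l))

def roundtrip_time(mu, l, r_max=500.0, n=100000):
    # Photon sphere: maximum of f(r)/r^2 outside the horizon (grid search)
    grid = [0.21 + 0.0001*i for i in range(50000)]   # r in [0.21, 5.21]
    r_ps = max(grid, key=lambda r: f_bh(r, mu, l) / r**2)
    # sqrt(g/f) = (r^2 + 2 l^2) / (2 f (r^2 + l^2)) for the metric (2)-(4)
    w = lambda r: (r**2 + 2*l**2) / (2*f_bh(r, mu, l)*(r**2 + l**2))
    # Trapezoid rule from the PS to r_max ...
    h = (r_max - r_ps) / n
    integral = 0.5*(w(r_ps) + w(r_max)) + sum(w(r_ps + i*h) for i in range(1, n))
    integral *= h
    # ... plus the analytic tail, since sqrt(g/f) ~ 6 l^2 / r^2 for r >> l
    return 2.0 * (integral + 6*l**2/r_max)

dt1 = roundtrip_time(0.1, 1.0)
dt2 = roundtrip_time(0.1, 2.0)
print(dt1, dt2)   # the echo delay grows with l_eta
```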
### 5.2 Wormhole
In Fig. 4 we demonstrate the behavior of the test scalar field as it
propagates on the wormhole background (7)-(9). The temporal response exhibits
echoes, as in the BH case above, which follow the initial ringdown due to the
first encounter of the probe field with the PS. In a similar manner, the
$\ell=0$ perturbations do not significantly excite the PS of the wormhole,
thus the echoes are not as oscillatory as the ones obtained for $\ell>0$.
However, we notice that the amplitude of the echoes does not decrease with
time, in contrast to the response in the BH setup.
Through this effect, one then can easily distinguish if the compact object is
a BH or wormhole. The underlying mechanism that leads to such a behavior could
be understood from the fact that instead of an event horizon we have a
wormhole throat, therefore, energy cannot be dissipated. The probe field
travels through the throat and into the second Universe, to be reflected back
from the second effective AdS boundary (see Fig. 1). The small “glitches”
shown in echoes of Fig. 4 (in the linear scale) appear due to the fact that we
measure the response of the test field at $\xi=0.01$, thus the reflected wave
from the AdS boundary of the primary Universe arrives slightly earlier than
the reflected wave from the boundary of the secondary Universe, to superpose
and lead to an echo of equal amplitude to that of the initial outburst.
The effect of the non-minimal coupling $l_{\eta}$ is shown in Fig. 5. Besides
the fact that the initial ringdown and the echoes have the same amplitude, one
can realize that $l_{\eta}$ serves as a scale of the Universe, since for
higher $l_{\eta}$ values, the field has to travel a larger distance from the
throat to the AdS boundary and back. This results in a proportionality between
the coupling $l_{\eta}$ and the echo time. To illustrate this, we have
approximated the time interval between two echoes using the relation
$\displaystyle\Delta
t=2\int_{Throat}^{Boundary}\sqrt{\frac{g(r)}{f(r)}}\;dr\leavevmode\nobreak\ .$
(21)
The agreement between the resulting signal from time integration and the
characteristic time from Eq. (21) (see vertical dashed lines in Fig. 5)
justifies further that the echoes are produced due to the presence of the
effective AdS boundary and not due to the existence of a double barrier
effective potential that usually appears in wormhole solutions (see [10],
[70]-[73]).
Figure 4: Time evolution of scalar perturbations with $\ell=0$ (top panel)
$\ell=1$ (middle panel), $\ell=2$ (bottom panel) on the wormhole background
(8)-(9) with $l_{\eta}=10,\,\mu=0.1$ and $a=1$.
The existence of echoes of equal amplitude to that of the initial ringdown is
an indication that such compact objects may possess normal modes of
oscillation, similar to those found in [74, 75, 76]. In fact, one could
perform a mode decomposition on the test scalar field to calculate these
modes, though the complicated form of the metric components renders such
analysis rather challenging. Since the echoes seem to possess a characteristic
timescale (21), it is intriguing to approximate the normal modes as
$\omega\sim 2\pi/\Delta t$. Our numerics indicate that
$\omega\sim\mu/l_{\eta}$, with $\ell$ not playing a dominant role in this
approximation, besides affecting the oscillation rate of each ringdown. A
modal analysis would shed more light on the validity of our approximation and
to the existence of normal modes in such wormholes.
Figure 5: Time evolution of scalar perturbations with $\ell=1$ on the wormhole
background (8)-(9) with $\mu=0.1$, $a=1$ and $l_{\eta}=5$ (left),
$l_{\eta}=10$ (right).
#### 5.2.1 Extremal wormhole
So far, the wormhole parameters considered give rise to a single peak at the
effective potential, which leads to two trapping regions separated by a
potential barrier. If we consider a different set of parameters, where the
wormhole mass is nearly extremal $\mu\simeq\mu_{extreme}$ (or exactly
extremal), then the potential peak splits into two potential barriers, for
large angular momenta. In this case, three trapping regions appear, which
depend on the throat radius, near-extremal mass and angular momentum, as
demonstrated in Fig. 6. The existence of three potential wells
in the wormhole background will lead to echoes which arise from two different
regions: the primary region between the PS and the asymptotic AdS boundary,
and the novel secondary region around the wormhole throat.
Figure 6: Left panel: Effective potential with $l_{\eta}=1\,,\,\ell=10$ and
$\mu=\mu_{extreme}$ for each value of the throat radius $a$. Right panel:
Effective potential with $l_{\eta}=1\,,\,a=1\,,\,\mu=\mu_{extreme}$ for
various angular momenta $\ell$.
Figure 7: Time evolution of scalar perturbations with $\ell=5$ on the wormhole
background with $\mu=\mu_{extreme}\approx 0.37$, $a=1$ and $l_{\eta}=5$ (top
panel), $l_{\eta}=7$ (bottom panel).
In Fig. 7 we observe a new qualitative behaviour of linear scalar
perturbations which is not present in the non-extremal wormhole setups
considered above. Besides the primary echoes arising from the reflection of
the test field at the AdS boundary, a new series of echoes appears in between
them, with smaller amplitude than the primary ones. These secondary echoes are
a product of the new trapping region at the wormhole throat. Although the
first couple of secondary echoes are quite visible in Fig. 7, at sufficiently
late times the superposition of primary and secondary echoes renders them
hardly distinguishable. Nevertheless, the existence of primary and secondary
echoes, associated with different trapping regions, is reported here for the
first time in a wormhole spacetime with trapping regions that arise naturally.
## 6 Conclusions
The GW ringdown, where the final object relaxes to a stable state, contains
key information about the perturbed compact object’s externally observable
quantities. Recently, it has been argued [10, 15] that the late-time ringdown
signal may incorporate signatures for the existence of ECOs, such as
wormholes, or horizon-scale quantum corrections [77, 78, 79] (though see
[80]), in the form of echoes.
Here, we considered minimally-coupled test scalar perturbations on exact BH
and wormhole solutions of scalar-tensor theory, which possess an effective
negative cosmological constant, leading to AdS asymptotics, due to the
presence of a non-minimally coupled ‘gravitational’ scalar to the Einstein
tensor. We find that similar effects arise in the late-time ringdown waveform
for both compact objects under study. After the initial ringdown, the test
field response exhibits echoes, with timescales proportional to the non-
minimal coupling constant. Although the BH considered here does not contain
any quantum corrections at the event horizon, the effective asymptotic AdS
boundary, that the gravitational scalar introduces to the scalarized solution,
forces the partially reflected waves from the PS to mirror off the AdS
boundary and re-perturb the PS to give rise to a damped beating pattern which
is strikingly similar to echoes from quantum corrected compact objects [81].
The existence of echoes in such AdS-like BHs may lead to interesting
interpretations of such a phenomenon on dual holographic descriptions at the
boundary, as AdS/CFT suggests (see [67, 68] for a holographic description of
echoes on a dual CFT at the near-horizon quantum structure).
Interestingly, even though the wormhole studied here possesses a single
barrier effective potential (for non-extremal masses), with its throat located
at the peak [82] (similar to that of Bronnikov-Ellis [83, 84] and Morris-
Thorne [29] wormholes), we still observe echoes in the late-time response of
the test scalar field due to the existence of the effective AdS boundary.
Moreover, if the wormhole is near-extremal or extremal, the response of the
probe scalar field exhibits primary and secondary echoes, associated with the
AdS boundary and throat potential well, respectively. The concatenation of
echoes we observed here is very similar to that found in [85], though our
setup consists of naturally arising trapping regions, solely depending on the
spacetime and perturbation parameters, in contrast to the reflective
boundaries placed by hand in [85].
In contrast to the BH case, the echoes found in the wormhole background do not
decay with time, but retain a constant amplitude equal to that of the initial
ringdown. The constancy of the amplitude of echoes is related to the absence
of dissipation and may be an indication of the existence of normal oscillation
modes, as well as potential instabilities, similar to that found in [86]. The
introduction of gravitational perturbations (such as the ones considered in
[87, 88, 89, 90]) may lead to an unstable wormhole, due to the presence of
phantom matter, which can potentially expand or collapse into a BH [91, 92].
## Acknowledgments
The authors would like to thank Rodrigo Fontana for helpful discussions.
## References
* [1] LIGO Scientific and Virgo Collaborations collaboration, B. P. Abbott et al., Observation of Gravitational Waves from a Binary Black Hole Merger, Phys. Rev. Lett. 116 (2016) 061102.
  * [2] Virgo, LIGO Scientific collaboration, B. P. Abbott et al., GW151226: Observation of Gravitational Waves from a 22-Solar-Mass Binary Black Hole Coalescence, Phys. Rev. Lett. 116 (2016) 241103.
* [3] VIRGO, LIGO Scientific collaboration, B. P. Abbott et al., GW170104: Observation of a 50-Solar-Mass Binary Black Hole Coalescence at Redshift 0.2, Phys. Rev. Lett. 118 (2017) 221101.
* [4] Virgo, LIGO Scientific collaboration, B. P. Abbott et al., GW170814: A Three-Detector Observation of Gravitational Waves from a Binary Black Hole Coalescence, Phys. Rev. Lett. 119 (2017) 141101.
* [5] Virgo, LIGO Scientific collaboration, B. P. Abbott et al., GW170817: Observation of Gravitational Waves from a Binary Neutron Star Inspiral, Phys. Rev. Lett. 119 (2017) 161101.
* [6] C. V. Vishveshwara, “Scattering of Gravitational Radiation by a Schwarzschild Black-hole,” Nature 227, 936 (1970).
* [7] K. D. Kokkotas and B. G. Schmidt, “Quasinormal modes of stars and black holes,” Living Rev. Rel. 2, 2 (1999) [gr-qc/9909058].
* [8] E. Berti, V. Cardoso and A. O. Starinets, “Quasinormal modes of black holes and black branes,” Class. Quant. Grav. 26, 163001 (2009) [arXiv:0905.2975 [gr-qc]].
* [9] R. A. Konoplya and A. Zhidenko, “Quasinormal modes of black holes: From astrophysics to string theory,” Rev. Mod. Phys. 83, 793 (2011) [arXiv:1102.4014 [gr-qc]].
# The flow group of rooted abelian or quadratic differentials
Mark Bell, Independent, UK<EMAIL_ADDRESS>
Vincent Delecroix, CNRS - Université de Bordeaux, 351 cours de la Libération, 33400 Talence<EMAIL_ADDRESS>
Vaibhav Gadre, School of Mathematics and Statistics, University of Glasgow, University Place, Glasgow G12 8QQ, UK<EMAIL_ADDRESS>
Rodolfo Gutiérrez-Romo, Centro de Modelamiento Matemático, CNRS-IRL 2807, Universidad de Chile, Beauchef 851, Santiago, Chile<EMAIL_ADDRESS>
Saul Schleimer, Department of Mathematics, University of Warwick, Coventry CV4 7AL, UK<EMAIL_ADDRESS>
###### Abstract.
We define the flow group of any component of any stratum of rooted abelian or
quadratic differentials (those marked with a horizontal separatrix) to be the
group generated by almost-flow loops. We prove that the flow group is equal to
the fundamental group of the component. As a corollary, we show that the plus
and minus modular Rauzy–Veech groups are finite-index subgroups of their
ambient modular monodromy groups. This partially answers a question of Yoccoz.
Using this, and recent advances on algebraic hulls and Zariski closures of
monodromy groups, we prove that the Rauzy–Veech groups are Zariski dense in
their ambient symplectic groups. Density, in turn, implies the simplicity of
the plus and minus Lyapunov spectra of any component of any stratum of
quadratic differentials. Thus, we establish the Kontsevich–Zorich conjecture.
###### Key words and phrases:
Moduli of Riemann surfaces, Quadratic differentials, Teichmüller dynamics,
Monodromy groups, Kontsevich–Zorich cocycle, Rauzy–Veech groups, Lyapunov
spectra
###### 2010 Mathematics Subject Classification:
Primary 30F60; Secondary 37D40, 37G15, 37A20
This work is in the public domain.
## 1\. Introduction
Moduli spaces of abelian or quadratic differentials consist of Riemann
surfaces endowed with abelian differentials or, respectively, meromorphic
quadratic differentials with at most simple poles. By integration along
appropriate relative cycles, these moduli spaces are endowed with local
complex (orbifold) charts known as _period coordinates_. The usual
identification of $\mathbb{C}$ with $\mathbb{R}^{2}$ gives rise to a natural
$\operatorname{SL}(2,\mathbb{R})$-action on period coordinates. The resulting
diagonal action is known as the _Teichmüller flow_.
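Concretely, under the identification of $\mathbb{C}$ with $\mathbb{R}^{2}$, the time-$t$ Teichmüller flow acts on period coordinates by the diagonal matrix $g_{t}=\operatorname{diag}(e^{t},e^{-t})$. A minimal sketch, purely for illustration (the function name and the list-of-pairs representation of periods are our own choices, not notation from this paper):

```python
import math

def teichmuller_flow(periods, t):
    """Apply g_t = diag(e^t, e^-t) to period vectors (x, y) in R^2.

    Horizontal components are stretched by e^t and vertical components
    are contracted by e^-t, so the flat area is preserved.
    """
    return [(math.exp(t) * x, math.exp(-t) * y) for (x, y) in periods]
```

For the square torus, with absolute periods $(1,0)$ and $(0,1)$, flowing for time $t$ gives the flat torus with periods $(e^{t},0)$ and $(0,e^{-t})$; the determinant of the period pair, hence the area, is unchanged.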
Moduli spaces are a meeting ground of many mathematical disciplines. A very
deep example of this, which is moreover relevant for our work, is as follows.
Fix any differential $q$ and form its $\operatorname{SL}(2,\mathbb{R})$ orbit
closure. This is dynamically defined, yet is an algebraic variety [Fil16]. In
period coordinates the orbit closure is cut out by homogeneous linear
equations, with (real) algebraic coefficients [EMM15].
In general, one stratifies a space of differentials by fixing various
topological and combinatorial data such as the genus of the underlying surface
$S$, the number and character of the singularities, and so on. The resulting
strata are not necessarily connected; the classification of their components
is known [KZ03, Lan08, CM14].
Suppose now that $\mathcal{C}$ is such a _stratum component_ of abelian or
quadratic differentials. There is a forgetful map from $\mathcal{C}$ to
$\mathcal{M}(S)$: the moduli space of Riemann surface structures on $S$. Both
$\mathcal{C}$ and $\mathcal{M}(S)$ are orbifolds; both have manifold covers
which we will need.
For $\mathcal{M}(S)$ this story is classical. Briefly, points in
$\mathcal{M}(S)$ are in fact equivalence classes; taking the universal cover
breaks these classes apart. This gives the Teichmüller space $\mathcal{T}(S)$
which is homeomorphic to an open ball in $\mathbb{R}^{6g-6}$; here
$g=\operatorname{genus}(S)$. The deck group of this covering is the mapping
class group $\mathrm{Mod}(S)$.
To obtain a manifold cover of $\mathcal{C}$ we consider _rooted_
differentials. A choice of root is simply a horizontal unit tangent vector at
a singularity. The choice of root removes any symmetry of the differential and
so unwraps the orbifold locus. The resulting finite cover is a manifold which
may not be connected. For instance, differentials with roots at zeroes of
different orders lie in different components. We fix a connected component of
this cover and denote it by $\mathcal{C}^{\mathrm{root}}$.
The maps from the previous paragraphs give us the following sequence of
homomorphisms:
$\pi_{1}(\mathcal{C}^{\mathrm{root}})\to\pi_{1}^{\mathrm{orb}}(\mathcal{C})\to\pi_{1}^{\mathrm{orb}}(\mathcal{M}(S))=\mathrm{Mod}(S)\xrightarrow{\rho}\operatorname{Aut}(H_{1}(S;\mathbb{Z}))\cong\operatorname{Sp}(2g,\mathbb{Z})$
Here the third map, $\rho$, is the symplectic representation of the mapping
class group: the action of $\mathrm{Mod}(S)$ on the homology of $S$. We call
the image of $\pi_{1}^{\mathrm{orb}}(\mathcal{C})$, inside the mapping class
group, the _modular monodromy group_. The image of the modular monodromy group
under $\rho$ is known as the _monodromy group_.
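For instance, a Dehn twist about a simple closed curve $c$ acts on homology by the transvection $x\mapsto x+\langle x,c\rangle c$, which preserves the algebraic intersection form. A small sketch checking this in coordinates (the helper names are ours, purely for illustration):

```python
def intersection(x, y, J):
    # evaluate the intersection pairing <x, y> = x^T J y
    return sum(x[i] * J[i][j] * y[j]
               for i in range(len(x)) for j in range(len(y)))

def transvection(c, J):
    # homological action of a Dehn twist about c: x -> x + <x, c> c
    def T(x):
        s = intersection(x, c, J)
        return [xi + s * ci for xi, ci in zip(x, c)]
    return T
```

Since $\langle c,c\rangle=0$ and the pairing is antisymmetric, a direct expansion shows $\langle Tx,Ty\rangle=\langle x,y\rangle$, so the image of a Dehn twist under $\rho$ indeed lies in $\operatorname{Sp}(2g,\mathbb{Z})$.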
The (modular) monodromy groups are “topological offspring” of the stratum
component $\mathcal{C}$.
In an attempt to relate the topology and dynamics of $\mathcal{C}$ we ask the
following: to what extent can $\pi_{1}^{\mathrm{orb}}(\mathcal{C})$ be
“detected” by the Teichmüller flow? More precisely, let
$U\subseteq\mathcal{C}$ be a contractible open set missing the orbifold locus.
Fix a base-point $q_{0}\in U$. Consider the Teichmüller trajectories that
start and end in $U$. For each, we connect its endpoints to $q_{0}$ inside of
$U$ to get a loop based at $q_{0}$. As $U$ is contractible, the resulting
based homotopy class is independent of the choices we made inside of $U$. We
call these based homotopy classes _almost-flow loops_. The _flow group_ of
$\mathcal{C}$ associated with the pair $(U,q_{0})$ is the subgroup of
$\pi_{1}^{\mathrm{orb}}(\mathcal{C})$ generated by all such loops. The
question can then be stated as:
###### Question 1.1.
For any stratum component $\mathcal{C}$ of the moduli space of abelian or
quadratic differentials, is the flow group of $\mathcal{C}$ equal to
$\pi_{1}^{\mathrm{orb}}(\mathcal{C})$?
This is a version of a question of Yoccoz [Yoc10, Section 9.3]. (Yoccoz asks if the image of the flow group in $\mathrm{Mod}(S)$ is all of $\mathrm{Mod}(S)$; however, what is meant here is the modular monodromy group [Mat21].)
This question can also be stated for rooted differentials by defining the flow group analogously. As $\mathcal{C}^{\mathrm{root}}$ is a manifold, the open set $U$ can be any contractible open set in $\mathcal{C}^{\mathrm{root}}$. A positive answer to this question is our main result.
###### Theorem 5.6.
Let $\mathcal{C}^{\mathrm{root}}$ be any component of a stratum of the moduli space of rooted abelian or quadratic differentials. Let $q_{0}\in\mathcal{C}^{\mathrm{root}}$ be any base-point and $U$ any contractible open set containing $q_{0}$. Then the flow group of $\mathcal{C}^{\mathrm{root}}$ associated with the pair $(U,q_{0})$ is equal to $\pi_{1}(\mathcal{C}^{\mathrm{root}},q_{0})$.
Since $\mathcal{C}^{\mathrm{root}}$ is a finite cover of $\mathcal{C}$, this theorem shows that the answer to Question 1.1 is yes, at least up to finite index.
Through the zippered rectangles construction, the Teichmüller flow on $\mathcal{C}^{\mathrm{root}}$ can be coded combinatorially by the reduced Rauzy diagram. In the course of the proof of Theorem 5.6, we prove the following.
###### Theorem 5.2.
Let $\mathcal{D}^{\mathrm{red}}$ be the reduced Rauzy diagram for $\mathcal{C}^{\mathrm{root}}$. Then the natural homomorphism $\pi_{1}(\mathcal{D}^{\mathrm{red}})\to\pi_{1}(\mathcal{C}^{\mathrm{root}})$ is surjective.
Taking a further image to $\mathrm{Mod}(S)$, this answers, up to finite index, a weaker version of Yoccoz’s question. The flow group and some of its applications to Teichmüller dynamics are also discussed by Hamenstädt [Ham18, Section 4.2].
For strata of abelian differentials, previous work by Calderon and
Calderon–Salter also allows us to explicitly compute the image of the flow
group inside of $\mathrm{Mod}(S)$ and of
$\operatorname{Aut}(H_{1}(S;\mathbb{Z}))$ (or some larger group, such as
$\operatorname{Aut}(H_{1}(S,Z;\mathbb{Z}))$), up to finite index [Cal20, CS19,
CS19a, CS20].
### Cocycles
Fix $\mathcal{C}$, a stratum component. Given a bundle over $\mathcal{C}$, the
Teichmüller flow gives us a natural cocycle. The most studied of these is the
_Kontsevich–Zorich cocycle_. This can be lifted to a connected component
$\mathcal{T}\mathcal{C}$ of the Teichmüller space of abelian or quadratic
differentials (the choice of $\mathcal{T}\mathcal{C}$ is, in general, not
unique [Cal20, CS20]).
In more detail, we define a vector bundle over $\mathcal{T}\mathcal{C}$ with a
suitable fibre. In the abelian case, this fibre is the first cohomology of the
underlying topological surface; in the quadratic case, it is the first
cohomology of the orientation double cover. By Poincaré duality, it is also
possible to use the corresponding homology groups as the fibre.
The $\operatorname{SL}(2,\mathbb{R})$-action induces a trivial dynamical
cocycle on this vector bundle. By modding out by the mapping class group, the
vector bundle descends to a bundle over $\mathcal{C}$ known as the _Hodge
bundle_; similarly the cocycle descends to the _Kontsevich–Zorich cocycle_
[KZ97, Kon97]. In the quadratic case, the cocycle then naturally splits into
two distinct symplectically orthogonal blocks, usually referred to as the
_plus_ (or _invariant_) and _minus_ (or _anti-invariant_) pieces.
Many interesting dynamical properties of abelian or quadratic differentials
can be written in terms of the Lyapunov exponents of the Kontsevich–Zorich
cocycle. An important example is given by the deviations of ergodic averages of the
linear flow on almost every abelian or quadratic differential [Zor97, EKZ14].
In fact, when the Lyapunov spectrum of the Kontsevich–Zorich cocycle is
simple, these deviations can be precisely described.
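As a toy illustration of what a Lyapunov exponent measures (this is not the Kontsevich–Zorich cocycle itself), one can estimate the top exponent of a random product of two matrices in $\operatorname{SL}(2,\mathbb{Z})$ by tracking the logarithmic growth of a vector's norm; the matrices, parameters, and names below are our own choices:

```python
import math
import random

# Two parabolic matrices in SL(2, Z); random products of these model a
# (toy) linear cocycle over a Bernoulli shift.
A = [[1, 1], [0, 1]]
B = [[1, 0], [1, 1]]

def apply_matrix(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def top_exponent(steps=50000, seed=0):
    """Estimate the top Lyapunov exponent of the random matrix product."""
    rng = random.Random(seed)
    v, total = [1.0, 1.0], 0.0
    for _ in range(steps):
        v = apply_matrix(rng.choice([A, B]), v)
        nv = math.hypot(v[0], v[1])
        total += math.log(nv)       # accumulate the log-growth
        v = [v[0] / nv, v[1] / nv]  # renormalise to avoid overflow
    return total / steps
```

Since both matrices have determinant one, the spectrum here is $\{\lambda,-\lambda\}$; a strictly positive estimate for $\lambda$ witnesses simplicity in this two-dimensional toy case.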
Kontsevich and Zorich conjectured that the Lyapunov spectrum is simple for all abelian stratum components [Zor97, Conjecture 2; Zor99, page 1499]. Their
conjecture extends naturally to the quadratic case as follows. We form the
branched orientation double cover. The homology of the cover splits into the
plus and minus eigenspace for the involution; the
$\operatorname{SL}(2,\mathbb{R})$ action preserves this splitting. Simplicity
is conjectured in both pieces [Zor18].
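At the level of linear algebra, this splitting is simply the eigenspace decomposition for the involution: $v_{\pm}=\tfrac{1}{2}(v\pm\sigma v)$. A minimal sketch (the names are ours, purely for illustration):

```python
def split_involution(sigma, v):
    """Split v into its components in the +1 and -1 eigenspaces of a
    linear involution sigma (a linear map with sigma(sigma(v)) == v)."""
    sv = sigma(v)
    plus = [(a + b) / 2 for a, b in zip(v, sv)]
    minus = [(a - b) / 2 for a, b in zip(v, sv)]
    return plus, minus
```

The plus part is fixed by $\sigma$, the minus part is negated by it, and the two sum back to $v$; for the orientation double cover, $\sigma$ is the action of the deck involution on homology.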
For the abelian case, this conjecture was established in the famous work by
Avila–Viana [AV07a]. The quadratic case is known for many stratum components
but not in full generality [Tre13, Gut17].
Our paper establishes simplicity in all cases; as discussed below, our proof
relies on certain machinery of these previous authors, but is independent of
their theorems.
### Rauzy–Veech groups
The Rauzy–Veech groups are subgroups of the symplectic group generated by the
matrices (in a preferred basis) induced by evaluating these cocycles over
based loops in the Rauzy diagram. It follows from Theorem 5.2 that Rauzy–Veech
groups have finite index in the corresponding monodromy groups. We leverage
this finiteness with standard techniques of splitting zeroes [AV07a, Gut19] to
extend from simpler strata to more complicated ones in order to prove the
following.
The Rauzy–Veech groups for all components of all abelian strata are Zariski
dense in their ambient symplectic groups. The same holds for the plus and
minus Rauzy–Veech groups for all components of all quadratic strata.
The groups of Theorem 9.1 that arise, by splitting singularities, from abelian
strata are known to be finite index inside the ambient symplectic groups (over
$\mathbb{Z}$) and hence Zariski dense. This was done by Avila–Matheus–Yoccoz
[AMY18] for abelian hyperelliptic components and by the fourth author [Gut19,
Gut17] for all other components mentioned above. Using our techniques, and
again replying on certain machinery from previous work, our Theorem 9.1 gives
Zariski density in all cases.
By the work of Benoist [Ben97], Zariski density of an appropriate Rauzy–Veech
group implies that the monoids associated with the Kontsevich–Zorich cocycles
are “rich” in the sense of the simplicity criterion of Avila–Viana [AV07,
AV07a]. As a consequence of Theorem 9.1, we can apply the Avila–Viana
criterion to prove the Kontsevich–Zorich simplicity conjecture.
###### Theorem.
The Kontsevich–Zorich cocycle has a simple spectrum for all components of all strata of abelian differentials. The plus and minus Kontsevich–Zorich cocycles also have a simple spectrum for all components of all strata of quadratic differentials.
As mentioned before, simplicity was known for all abelian [AV07a] and some
quadratic stratum components [Gut17]. It is also known for the principal
stratum of quadratic differentials by different methods through the recently
announced solution by Eskin–Mirzakhani–Rafi of the Furstenberg problem for
random walks on the mapping class group. However, we have nonetheless included the known results in our statement, as our proof is self-contained and is uniform across all stratum components.
With Theorem 5.6 in hand, we can compute the Kontsevich–Zorich cocycle over
any loop in $\mathcal{C}^{\mathrm{root}}$ and not just along the Teichmüller
flow. This additional flexibility implies that, for Zariski density, we can
always consider a monodromy group instead of a Rauzy–Veech group.
For the monodromy groups of abelian differentials, and also for the monodromy
groups induced by the minus piece of the cocycle for quadratic differentials,
we directly apply some of Filip’s results to obtain Zariski density [Fil17,
Corollary 1.7]. For the monodromy groups induced by the plus piece of the
cocycle, we need to discuss _algebraic hulls_.
The algebraic hull of the Kontsevich–Zorich cocycle restricted to a linear
invariant suborbifold can be thought of as the smallest algebraic group into
which the cocycle can be measurably conjugated. As such, the hull is both an
algebro-geometric and an ergodic-theoretic object. Eskin–Filip–Wright showed
that the algebraic hull is as large as it can be, namely it equals the
stabiliser of the tautological plane (that is, the cohomology classes spanned
by the real and imaginary parts of the differential) in the Zariski closure of
the monodromy group [EFW18, Theorem 1.1].
The plus piece of the Kontsevich–Zorich cocycle does not meet the tautological
plane. The stabiliser then equals the Zariski closure of the monodromy, and
hence so does the algebraic hull. This result, together with Filip’s
classification of the possible Lie algebra representations of algebraic hulls
[Fil17, Theorem 1.2], allows us to show that the Zariski closure of the
monodromy group corresponding to the plus piece is
$\operatorname{Sp}(2g,\mathbb{R})$ by a simple dimension count.
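For orientation (the precise count is carried out in the body of the paper), the target of this comparison is the full symplectic group, whose real dimension is

```latex
\dim_{\mathbb{R}} \operatorname{Sp}(2g,\mathbb{R})
  = \binom{2g+1}{2}
  = g(2g+1).
```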
Finally, we remark that, just as Eskin–Filip–Wright’s theorem shows that the
algebraic hull is as large as it can be, Theorem 5.6 shows that the flow group
is also as large as it can be. Thus, for stratum components, Theorem 5.6 can
be considered as a dynamical analogue of Eskin–Filip–Wright’s result.
### Acknowledgements
The authors are immensely grateful to Carlos Matheus for countless
illuminating conversations. We also thank Giovanni Forni, Maxime Fortier
Bourque, Erwan Lanneau and Alex Wright for their helpful comments on an
earlier version of this article.
The fourth author is grateful to the ANID AFB-170001, the FONDECYT Iniciación
11190034, and the MATHAMSUD 21-MATH-07 grants.
## 2\. Strategies
We outline the steps and the key ideas in our proofs.
#### Passing to rooted differentials
The dynamical issues considered here are stable under passing to a finite
cover of the given stratum component $\mathcal{C}$. We pass to the space
$\mathcal{C}^{\mathrm{root}}$ of _rooted differentials_: differentials marked
with a horizontal separatrix (or, equivalently, a horizontal unit tangent
vector at a marked point). The reasons are two-fold. Unlike $\mathcal{C}$, the
cover $\mathcal{C}^{\mathrm{root}}$ is a manifold, which simplifies various
transversality (and fundamental group) arguments. Also, a generic rooted
differential admits a description via a zippered rectangles construction.
#### Zippered rectangles and the based loop theorem
The _zippered rectangles construction_ is originally due to Veech [Vee82] for
abelian stratum components and due to Boissy–Lanneau [BL09] for quadratic
stratum components. Parameter spaces of zippered rectangles, where the length
of the base-arc is normalised, define contractible open sets in
$\mathcal{C}^{\mathrm{root}}$ which we call _polytopes_. The union of the
polytopes is dense in $\mathcal{C}^{\mathrm{root}}$. However, the complement
of their union is complicated; the polytopes do not give a cell structure on
$\mathcal{C}^{\mathrm{root}}$. For instance, there are compact arcs in
$\mathcal{C}^{\mathrm{root}}$ that intersect polytope faces infinitely many
times. See Section A.1 for an explicit example and relevant discussions. As a
result, the _based loop theorem_ , which we explain below, cannot be deduced
from naïve transversality arguments.
Fortunately, as discussed by Yoccoz [Yoc10, Proposition in Section 9.3], the
subset of rooted differentials that do _not_ admit any zippered rectangle
construction is contained in the (codimension two) set of differentials that
have both a vertical and a horizontal saddle connection. Thus, any based loop
$\gamma\colon[0,1]\to\mathcal{C}^{\mathrm{root}}$ can be homotoped to be
disjoint from such differentials.
After this homotopy, we can cover the image of $\gamma$ by finitely many
reasonably nice charts. Unfortunately, these may not be contained in the
interior of any of the polytopes defined above. We arrange matters so that the
boundaries of these charts are codimension-one embedded submanifolds. A
further homotopy makes $\gamma$ transverse to these boundaries while remaining
covered by the charts. Since a chart may not be contained in the interior of a
polytope, the lengths of the base-arcs in these charts need not be normalised.
We fix the required normalisation as follows.
Given a sufficiently small subsegment of $\gamma$, where the base-arcs are not
normalised, we apply the (forward or backward, as needed) Teichmüller flow.
This replaces (via homotopy) the subsegment of $\gamma$ by two segments
contained in Teichmüller flow lines and one segment contained in the interior
of a polytope.
Doing this finitely many times, we homotope $\gamma$ to be a concatenation of
segments which are forward or backward Teichmüller segments or completely
contained inside a polytope. This is our _based loop theorem_ , namely Theorem
4.23.
#### Rauzy induction and the Teichmüller flow
The combinatorial information of a zippered rectangles construction is an
irreducible generalised permutation [BL09]. Also, there are various associated
parameters such as the dimensions of the rectangles and the heights of the
various zippers. The combinatorics together with the parameters uniquely
specify the differential.
If we apply the forward Teichmüller flow, the base-arc grows until it violates
the normalisation. At this point we pass to the largest base-arc strictly
contained in the original base-arc. In this way we obtain a new irreducible
generalised permutation as well as new parameters. We call a single such
operation a _Rauzy–Veech move_.
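In the abelian case the move has a simple combinatorial description on a pair of words together with widths. The following Python sketch is illustrative only: it implements the classical Rauzy–Veech move for interval exchange transformations (no flip letters), not the generalised version of Boissy–Lanneau, and all names are ours.

```python
# Illustrative sketch of a single Rauzy-Veech move for an interval
# exchange transformation (abelian case only).  A permutation is a pair
# of words (top, bottom) over an alphabet; lam[letter] is the width of
# the corresponding rectangle.

def rauzy_move(top, bottom, lam):
    """Apply one Rauzy-Veech move; return the new (top, bottom, lam)."""
    a, b = top[-1], bottom[-1]           # the two rightmost letters
    lam = dict(lam)                      # do not mutate the input
    if lam[a] > lam[b]:                  # "top" move: the top letter wins
        lam[a] -= lam[b]
        new_bottom = bottom[:-1]         # remove the loser ...
        i = new_bottom.index(a)
        new_bottom = new_bottom[:i + 1] + b + new_bottom[i + 1:]  # ... reinsert after the winner
        return top, new_bottom, lam
    elif lam[b] > lam[a]:                # "bottom" move: the bottom letter wins
        lam[b] -= lam[a]
        new_top = top[:-1]
        i = new_top.index(b)
        new_top = new_top[:i + 1] + a + new_top[i + 1:]
        return new_top, bottom, lam
    raise ValueError("equal widths: the move is undefined")

# Example: one move applied to the pair (ABC, CBA).
top, bottom, lam = rauzy_move("ABC", "CBA", {"A": 0.5, "B": 0.3, "C": 0.2})
```

Here the bottom letter A wins against the top letter C, so C is reinserted just after A in the top word and the width of A shrinks by the width of C.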
The collection of all these moves gives a renormalisation procedure known as
the _Rauzy–Veech renormalisation_ or the _Rauzy–Veech induction_. It was
originally defined by Rauzy and Veech for abelian differentials [Rau79, Vee82]
and by Boissy–Lanneau [BL09] for quadratic differentials. Applying the
Teichmüller flow, we obtain a sequence of pairs of combinatorics and
parameters. Thus, the Rauzy–Veech renormalisation gives a coding for the
Teichmüller flow.
We encode this as an “automaton” (a directed graph) as follows. The vertices
are equivalence classes of irreducible generalised permutations suited to
$\mathcal{C}^{\mathrm{root}}$. Two permutations $\pi$ and $\pi^{\prime}$ are
equivalent if we can precompose with a permutation $\sigma$ to obtain
$\pi\circ\sigma=\pi^{\prime}$. There is a directed edge from $[\pi]$ to
$[\rho]$ if some representative of the latter arises from a single Rauzy–Veech
move. This automaton is called the _reduced Rauzy diagram_. Since the
Teichmüller flow is ergodic, as shown by Masur [Mas82] and Veech [Vee82,
Vee86], it follows that the reduced Rauzy diagram is _strongly connected_ :
there is a directed path from any vertex to any other vertex. By accelerating
the renormalisation, we can derive a coding that has the properties that the
Avila–Viana criterion stated below requires.
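Strong connectivity of a finite directed graph, such as a reduced Rauzy diagram, amounts to every vertex being reachable from a fixed base vertex both in the graph and in its reverse. A minimal sketch (the toy graphs below are ours, not actual Rauzy diagrams):

```python
# Check strong connectivity of a directed graph: every vertex must be
# reachable from a base vertex, both forwards and backwards.

def reachable(graph, start):
    """Return the set of vertices reachable from `start` (iterative DFS)."""
    seen, stack = set(), [start]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(graph.get(v, ()))
    return seen

def is_strongly_connected(graph):
    vertices = set(graph) | {w for ws in graph.values() for w in ws}
    base = next(iter(vertices))
    reverse = {v: [] for v in vertices}      # build the reversed graph
    for v, ws in graph.items():
        for w in ws:
            reverse[w].append(v)
    return (reachable(graph, base) == vertices
            and reachable(reverse, base) == vertices)

# A directed cycle is strongly connected; a directed path is not.
cycle = {"p": ["q"], "q": ["r"], "r": ["p"]}
no_cycle = {"p": ["q"], "q": ["r"], "r": []}
```

The double search costs linear time in the size of the graph, which is why strong connectivity is cheap to verify even for large automata.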
#### Flow groups and the fundamental group
There is a natural homomorphism from the fundamental group of the reduced
Rauzy diagram (as an undirected graph) to the fundamental group of
$\mathcal{C}^{\mathrm{root}}$. By leveraging the based loop theorem and
Rauzy–Veech sequences for Teichmüller segments, we show that the homomorphism
is surjective. This partially answers a question of Yoccoz [Yoc10, Remark in
Section 9.3].
We use the based loop theorem, and the above surjectivity, to show that the
flow group is equal to the fundamental group of $\mathcal{C}^{\mathrm{root}}$.
See Theorem 5.5 and Theorem 5.6. In other words, at the level of the
fundamental group, the Teichmüller flow captures the topology of
$\mathcal{C}^{\mathrm{root}}$, and hence the topology of $\mathcal{C}$, up to
finite index.
#### Cocycle simplicity
By a criterion of Avila–Viana [AV07, AV07a], simplicity of natural integrable
cocycles, such as the Kontsevich–Zorich cocycle, boils down to the existence
of a coding with an almost product structure and a notion of “richness” of the
cocycle. As we indicated earlier, a coding with the required integrability and
distortion properties can be achieved by accelerating the Rauzy–Veech
renormalisation. This was done by Avila–Gouëzel–Yoccoz [AGY06] for abelian
differentials and by Avila–Resende [AR12] for quadratic differentials. See
Section 6 for more details. The remaining task, and the crux of the problem,
is to show the richness of the cocycle. The required richness was established
by Avila–Viana [AV07a] for abelian stratum components by a direct computation.
In general, to obtain the richness condition for a symplectic cocycle it is
enough to establish the Zariski density of an appropriate group inside the
symplectic group; by work of Benoist [Ben97], Zariski density implies the above
notion of richness (Zariski density is, in fact, strictly stronger [AMY18, Appendix A]).
For the Kontsevich–Zorich cocycle, the relevant group is the Rauzy–Veech
group. Its Zariski density was proved for hyperelliptic components by
Avila–Matheus–Yoccoz. In fact, their result is stronger, as they show the
Rauzy–Veech group to be an explicit finite index subgroup of its ambient
symplectic group [AMY18, Theorem 1.1]. This finite index result was extended
by the fourth author to all abelian stratum components and to quadratic
stratum components that have abelian components on their boundary [Gut19,
Theorem 1.1; Gut17, Theorem 1.1].
Our main result, namely Theorem 5.6 stating that the flow group equals the
fundamental group of $\mathcal{C}^{\mathrm{root}}$, is crucial to achieve the
Zariski density of the Rauzy–Veech group of any stratum component. Indeed, it
allows us to compute the cocycle along any loop in
$\mathcal{C}^{\mathrm{root}}$ instead of only along loops arising from the
Teichmüller flow. This extra
flexibility is significant since we do not have to restrict to directed loops
in the reduced Rauzy diagram. In recent work [Fil17], Filip gives a finite
list of possible Zariski closures of the monodromy of a linear invariant
suborbifold. From this description, he also derives the fact that the Zariski
closure of the monodromy restricted to the symplectic block that contains the
tautological plane is the full symplectic group for this block. Combined with
this fact, our Theorem 5.6 directly yields simplicity for abelian components.
A quadratic stratum component lifts to a linear invariant suborbifold of its
orientation double-cover and hence Filip’s result applies to this situation.
The involution on the orientation double-cover splits the Kontsevich–Zorich
cocycle into two symplectically orthogonal blocks, usually referred to as the
_plus_ (or _invariant_) piece and the _minus_ (or _anti-invariant_) piece.
The minus piece contains the tautological plane. Again by Filip’s corollary,
the Zariski closure for the minus cocycle is the full symplectic group.
Simplicity of the minus cocycle follows directly from combining this with
Theorem 5.6.
It remains to tackle the plus cocycle. Here, we exploit our extra flexibility
to build a dimension argument that eliminates all but the full symplectic
group as the Zariski closure. We carry out the dimension argument first for
components of minimal strata and hyperelliptic components with two zeros to
conclude Zariski density for the monodromy groups of these components. This
implies the Zariski density of their Rauzy–Veech groups as they are finite
index in the monodromy groups (a consequence of Theorem 5.2). We then deal
with a few remaining low-genus components by using a well-known criterion for
Zariski density [PR14]. Finally, we extend the density to Rauzy–Veech groups
of all quadratic components by standard techniques of surgery/splitting
zeroes. The density allows us to apply the Avila–Viana criterion to conclude
the proof of the Kontsevich–Zorich conjecture in full generality.
## 3\. Preliminaries
### 3.1. Moduli spaces of abelian and quadratic differentials
A connected, oriented surface $S$ of finite type, that is, with finite genus
and finitely many marked points, can be equipped with a conformal/complex
structure by charts to the complex plane and holomorphic transition functions.
The Teichmüller space of $S$ is the space of marked conformal structures on
$S$. The mapping class group $\mathrm{Mod}(S)$ is the group of orientation
preserving diffeomorphisms of $S$ modulo isotopy. The mapping class group acts
on the Teichmüller space by changing the marking. The quotient $\mathcal{M}$
is the moduli space of Riemann surfaces homeomorphic to $S$.
The cotangent bundle to Teichmüller space is the space of (marked) meromorphic
quadratic differentials on $S$ with at most simple poles. The zeroes or poles
of the differential must lie at the marked points. The quotient by the mapping
class group is the moduli space $\mathcal{Q}$ of quadratic differentials. The
space $\mathcal{Q}$ is stratified by the orders at the marked points and
components of the strata are classified by the following combinatorial and
algebraic invariants:
1. (1)
The singularity data which can be encapsulated as follows. Let $Z\subseteq S$
be a non-empty and finite set of points; we set $n=|Z|$. Let $\kappa\colon
Z\to\\{-1,0\\}\cup\mathbb{N}$ be any function so that $\sum\kappa(z)=4g-4$.
The points $z\in Z$ with $\kappa(z)=-1$ are called _simple poles_ and these
have to be the marked points of $S$. The points with $\kappa(z)=0$ are called
_regular points_. To ensure generality, we allow finitely many additional
points in $S$ to be marked as regular points.
2. (2)
Abelian or quadratic, that is, whether the vertical foliations of
differentials in the component are orientable or not. In the abelian case, the
function $\kappa$ is even at every $z\in Z$, so it is common to consider
$\kappa/2$ instead of $\kappa$ as the function giving the singularity data. We
will follow this convention.
3. (3)
Hyperelliptic or non-hyperelliptic (when possible), that is, whether
differentials in the component have some rotational symmetry of order two with
$2g+2$ fixed points [Lan04].
4. (4)
Odd or even spin (only for abelian components for which $\kappa(z)$ is even
for each $z\in Z$), which is defined as the Arf invariant of a specific
quadratic form [Joh; Zor08, Appendix C].
5. (5)
Regular or irregular (when possible), which can be distinguished by the
dimension of a cohomology group corresponding to a specific divisor [CM14].
For the reader’s convenience, we state the complete classification of abelian
and quadratic stratum components in Section 8.1.
We note that, in general, a stratum component is an _orbifold_. We refer to
the book by Boileau–Maillot–Porti [BMP03] for background on orbifolds and
their fundamental groups, although in most of our exposition we will only
consider the fundamental groups of actual manifolds.
###### Remark 3.2.
The extent to which $q\in\mathcal{C}$ is _marked_ varies in different
expositions. For us, we are assuming that $\mathcal{C}$ is as small as
possible; so we have forgotten the marking by $S$ and forgotten the marking by
$Z$. However, this does not mean that we erase marked regular points—we only
erase their names, as well as the names of all poles and zeros. Thus,
travelling around a loop in $\mathcal{C}$, a pair of points $z,z^{\prime}\in
Z$ may be permuted. We deduce that, while there is no well-defined homomorphism
$\pi_{1}^{\mathrm{orb}}(\mathcal{C})\to\operatorname{Sym}(Z)$, each loop in
$\mathcal{C}$ determines a well-defined conjugacy class in
$\operatorname{Sym}(Z)$.
### 3.3. $\operatorname{SL}(2,\mathbb{R})$-action
We fix a stratum component $\mathcal{C}$ and let $q$ be a differential in
$\mathcal{C}$. By integrating a square-root of $q$ we get charts from $S$ to
$\mathbb{C}$ with transition functions that are translations (or half-
translations), that is, transition functions that are of the form $z\to z+c$
(or $z\to\pm z+c$).
The action of the group $\operatorname{SL}(2,\mathbb{R})$ on
$\mathbb{R}^{2}=\mathbb{C}$ can be restricted to the charts. As the transition
functions are translations (or half-translations), the
$\operatorname{SL}(2,\mathbb{R})$-action preserves the form of the transition
functions. As a result, it descends to an action on the differentials. As the
classifying invariants are also preserved, the
$\operatorname{SL}(2,\mathbb{R})$ orbit of any differential in $\mathcal{C}$
is contained in $\mathcal{C}$.
Fixing a basis for the relative homology of $(S,Z)$, we can compute periods.
The period of a basis element is the integral of a square root of $q$ over an
arc in the homology class of the element. The periods define local charts on
$\mathcal{C}$. By the famous work of Eskin–Mirzakhani–Mohammadi [EMM15],
closures of $\operatorname{SL}(2,\mathbb{R})$-orbits inside $\mathcal{C}$ are
suborbifolds cut out by linear equations (with real coefficients and no
constant terms) in the period coordinates. Such an orbit closure is called a
_linear invariant suborbifold_.
The diagonal part of the $\operatorname{SL}(2,\mathbb{R})$-action defines _the
Teichmüller flow_.
### 3.4. Monodromy groups
The canonical projection $\mathcal{C}\to\mathcal{M}$ associates to a
differential the underlying conformal structure. So we may consider in
$\pi_{1}^{\mathrm{orb}}(\mathcal{M})=\mathrm{Mod}(S)$ the image of
$\pi_{1}^{\mathrm{orb}}(\mathcal{C})$ under the induced map on the orbifold
fundamental groups. The image group $\operatorname{MMon}(\mathcal{C})$ is
called the _modular monodromy group_ of $\mathcal{C}$.
The mapping class group $\mathrm{Mod}(S)$ has a natural action on the
(absolute) integral homology $H_{1}(S;\mathbb{Z})$. The action preserves the
symplectic form on $H_{1}(S;\mathbb{Z})$ given by the algebraic intersection.
As a result, $\mathrm{Mod}(S)$ admits a representation to the automorphism
group of $H_{1}(S;\mathbb{Z})$ that preserves the symplectic form. The
restriction of this symplectic representation to
$\operatorname{MMon}(\mathcal{C})$ gives us a subgroup of the symplectic group
which we call the _monodromy group_ $\operatorname{Mon}(\mathcal{C})$ of
$\mathcal{C}$.
### 3.5. Rooted differentials
For this article, we need to pass to a finite manifold cover of $\mathcal{C}$.
We begin as follows.
###### Definition 3.6.
Suppose that $q\in\mathcal{C}$ is a differential. Let $z$ be a zero, regular
point, or pole of $q$. Let $v$ be a unit tangent vector at $z$ pointing along
the horizontal foliation. We call the pair $(q,v)$ a _rooted_ differential.
The usual correspondence between the order of a point and the total angle at
that point allows us to show that the number of rootings of $q$ is $4g-4+2|Z|$.
However, some rootings of $q$ may be equivalent to others when $q$ has a
symmetry.
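Concretely, a point of order $\kappa(z)$ has cone angle $(\kappa(z)+2)\pi$ and hence $\kappa(z)+2$ horizontal separatrices, so the count is $\sum_{z\in Z}(\kappa(z)+2)=4g-4+2|Z|$. A quick sanity check in Python (the singularity data below is a made-up example, not taken from the text):

```python
# Count rootings: a point of order k carries k + 2 horizontal
# separatrices, so the total is sum(kappa) + 2*|Z| = 4g - 4 + 2*|Z|.

def num_rootings(kappa):
    """kappa: list of orders of the points in Z, summing to 4g - 4."""
    return sum(k + 2 for k in kappa)

# A hypothetical genus-2 quadratic stratum: orders sum to 4g - 4 = 4.
kappa = [2, 1, 1]
g = (sum(kappa) + 4) // 4          # recover the genus from the order sum
assert num_rootings(kappa) == 4 * g - 4 + 2 * len(kappa)
```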
Rooted differentials are intended to reproduce the notion of a _marked
translation surface_ that is widely used in the literature [Yoc10, Boi20]. We
use $\mathcal{C}^{\mathrm{root}}$ to denote the space of rooted differentials.
## 4\. Zippered rectangles
We now outline a procedure to pass from flat geometry to combinatorics with
parameters. To do so, we exhibit a generic rooted quadratic differential as a
collection of rectangles with gluings. This is a well-known construction,
originally due to Veech [Vee82], called _zippered rectangles_. As we are
interested in the topology (fundamental group) of
$\mathcal{C}^{\mathrm{root}}$ and not just in the dynamics of the Teichmüller
flow on $\mathcal{C}^{\mathrm{root}}$, we will present the full details of the
construction for greater clarity.
There are several useful systems of parameters. We make use of the
_singularity_ parameters but there are other commonly used parameters such as
_zipper_ parameters introduced by Veech. We define the parameters and discuss
how to move between them.
### 4.1. The combinatorics
A _saddle connection_ for a quadratic differential is a flat geodesic that
connects a pair of possibly distinct points in $Z$ and is otherwise disjoint
from $Z$. We say that a quadratic differential $q$ has a _vertical vanishing
coordinate_ (respectively, _horizontal vanishing coordinate_) if $q$ has a
horizontal (respectively, vertical) saddle connection. Let
$\mathcal{V}\subseteq\mathcal{C}^{\mathrm{root}}$ be the set of such differentials. So
$\mathcal{V}$ is a countable union of codimension-one loci.
Continuing in this way, we say that a quadratic differential $q$ is _doubly
vanishing_ if $q$ has both a horizontal and a vertical saddle connection. Let
$\mathcal{W}\subseteq\mathcal{C}^{\mathrm{root}}$ be the set of such
differentials. So $\mathcal{W}$ is a countable union of codimension-two loci.
A quadratic differential is said to be _vertically non-vanishing_
(respectively, _horizontally non-vanishing_) if it has no horizontal
(respectively, vertical) saddle connections. We will call a quadratic
differential _doubly non-vanishing_ if it has neither horizontal nor vertical
saddle connections.
Given a rooted quadratic differential $(q,v)$, let $I_{v}$ be the horizontal
separatrix defined by $v$, which may be finite if $q$ has a vertical vanishing
coordinate. Given a point $x\in I_{v}$, let $I(x)$ be the subarc of $I_{v}$
from the base of the root to $x$.
###### Definition 4.2.
We say that $I(x)$ is a _base-arc_ if
1. (1)
the interior of $I(x)$ meets every leaf of the vertical foliation, and
2. (2)
at least one of the two rays (going “up” or “down”) perpendicular to $I_{v}$
and starting at $x$ hit a singularity in $Z$ before hitting $I(x)$ a second
time.
The proof of the following lemma is analogous to the one given by Yoccoz in
the abelian case [Yoc10, Proposition 5.6].
###### Lemma 4.3.
Let $(q,v)$ be a rooted quadratic differential. If $q$ is not doubly
vanishing, then it admits a base-arc.
###### Proof.
Assume first that $q$ has no vertical saddle connection. Thus the vertical
foliation for $q$ is minimal (as otherwise the closure of a vertical leaf
would be a subsurface with boundary, containing saddle connections). Thus, the
interior of any horizontal subarc of $I_{v}$ will meet every leaf of the
vertical foliation, so condition (1) in Definition 4.2 is met. Shortening the
arc as needed, we can arrange condition (2) in Definition 4.2.
Assume instead that $q$ has no horizontal saddle connection. Now the
horizontal foliation for $q$ is minimal. Thus, we can and do take a
sufficiently long subarc of $I_{v}$ so that its interior meets every vertical
leaf. Then, condition (1) in Definition 4.2 is met. Making the arc longer as
needed, we can arrange condition (2) in Definition 4.2. ∎
Fix a base-arc $I$. Since $I$ is simply connected, we can orient the vertical
foliation in a small neighbourhood of $I$. We do so (locally) in such a way
that the “upwards” direction crosses $I$ from right to left.
We consider the first return map to $I$, in $q$, defined by travelling along
the vertical foliation. We travel in both directions (up and down) to find
both the first return map and its “inverse”. Let $S_{\mathrm{t}}$ be the
(finite) set of points $x\in I$ where the upward leaf from $x$ runs in to a
singularity in $Z$ before returning to $I$. We define $S_{\mathrm{b}}$
similarly. If $q$ is horizontally non-vanishing, the sets $S_{\mathrm{t}}$ and
$S_{\mathrm{b}}$ are disjoint, but this is not true in general. To distinguish
between the points in $S_{\mathrm{t}}$ and $S_{\mathrm{b}}$, we will add the
labels $\mathrm{t}$ and $\mathrm{b}$.
Let $I_{\text{t}}$ be the set of components of $I-S_{\mathrm{t}}$. We call
these the _top intervals_. Similarly, we define the _bottom intervals_
$I_{\text{b}}$ to be the components of $I-S_{\mathrm{b}}$. Again, if $q$ is
horizontally non-vanishing, then we have $I_{\text{t}}\cap
I_{\text{b}}=\varnothing$, but this is not true in general. Thus, we will
distinguish them by the labels $\mathrm{t}$ and $\mathrm{b}$.
There is a fixed-point free involution $\tau$ on
$I_{\text{t}}\times\\{\mathrm{t}\\}\sqcup I_{\text{b}}\times\\{\mathrm{b}\\}$
as follows. Any interval $(J,\ast)$, where
$\ast\in\\{\mathrm{t},\mathrm{b}\\}$, pairs with the interval
$(J^{\prime},\ast^{\prime})=\tau(J,\ast)$ so that the first return map takes
$(J,\ast)$ to $(J^{\prime},\ast^{\prime})$. Thus
$|I_{\text{t}}|+|I_{\text{b}}|$ is even. We write
$2d=|I_{\text{t}}|+|I_{\text{b}}|$.
We capture the above information, combinatorially, as follows. Let
$\mathcal{A}$ be a set of $d$ _letters_. Let $\ell=|I_{\text{t}}|$ and
$m=|I_{\text{b}}|$. Note that the sets $I_{\text{t}}$ and $I_{\text{b}}$ are
ordered by how the intervals appear along $I$. We use this to index the
intervals: $I_{\text{t}}\times\\{\mathrm{t}\\}=\\{J_{i}\\}_{i=1}^{\ell}$ and
$I_{\text{b}}\times\\{\mathrm{b}\\}=\\{J_{i}\\}_{i=\ell+1}^{\ell+m}$. We
define $\pi\colon\\{1,2,\ldots,2d\\}\to\mathcal{A}$ to be any two-to-one map
with the following property: for all $a\in\mathcal{A}$, if
$\\{i,j\\}=\pi^{-1}(a)$ then $\tau(J_{i})=J_{j}$. Associated with $\pi$ is a
fixed-point free involution $\sigma$ of $\\{1,2,\ldots,2d\\}$ where
$\sigma(i)=j$ implies $\pi(i)=\pi(j)$. Maps $\pi$ of the above type were first
considered by Danthony–Nogueira [DN88, DN90] and by Boissy–Lanneau [BL09], and
are known as _generalised permutations_. As shown by Boissy–Lanneau [BL09,
Theorems A–D], the generalised permutations $\pi$ that arise in the above
construction are _irreducible_. Moreover, the set of generalised permutations
that arise from $\mathcal{C}^{\mathrm{root}}$ is known as the _Rauzy class_ of
$\mathcal{C}^{\mathrm{root}}$. We denote the Rauzy class by
$\mathcal{R}(\mathcal{C}^{\mathrm{root}})$. Moreover, as we vary over all
stratum components and choices of rootings, all irreducible generalised
permutations arise from this construction. See the article by Boissy–Lanneau
[BL09] for a combinatorial definition of irreducibility and a proof of these
facts.
We refer to the letters $\pi(1),\ldots,\pi(\ell)$ as the _top letters_ for
$\pi$. Similarly, we refer to the letters $\pi(\ell+1),\ldots,\pi(\ell+m)$ as
the _bottom letters_. Any letter that is both a top letter and a bottom letter
is called a _translation letter_. Any letter that is only a top letter (or
only a bottom letter) is called a _flip letter_. We explain the terminology
below.
We say that $\pi$ is an _abelian permutation_ if it has no flip letters. We say
that $\pi$ is a _quadratic permutation_ if it has (at least one) top flip
letter and (at least one) bottom flip letter. All generalised permutations that
arise in the construction above are of one of these two types.
From now on, we will eschew the terminology “generalised permutation” and
collectively refer to abelian and quadratic permutations simply as
_permutations_.
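The letter types admit a direct combinatorial check. The sketch below uses our own encoding (a permutation as a pair of words in which every letter occurs exactly twice in total); it does not test irreducibility, and the examples are illustrative rather than taken from [BL09].

```python
# Classify the letters of a permutation given as a pair of words in
# which every letter occurs exactly twice in total.

def classify(top, bottom):
    letters = set(top) | set(bottom)
    translation = {a for a in letters if a in top and a in bottom}
    top_flip = {a for a in letters if top.count(a) == 2}
    bottom_flip = {a for a in letters if bottom.count(a) == 2}
    return translation, top_flip, bottom_flip

def kind(top, bottom):
    _, tf, bf = classify(top, bottom)
    if not tf and not bf:
        return "abelian"          # no flip letters at all
    if tf and bf:
        return "quadratic"        # flip letters on both sides
    return "neither"              # cannot arise from the construction

# An abelian example and a quadratic example (hypothetical words).
print(kind("ABC", "CBA"))   # every letter is a translation letter
print(kind("AAB", "BCC"))   # A is a top flip letter, C a bottom flip letter
```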
### 4.4. The rectangles
Let $\alpha\in\mathcal{A}$ and let $\pi^{-1}(\alpha)=\\{i,\sigma(i)\\}$. There
is an associated rectangle $R=R_{\alpha}$ with the following properties.
1. (1)
The horizontal sides of $R$ are exactly $R\cap I=J_{i}\cup J_{\sigma(i)}$.
2. (2)
With the exception of one rectangle, each vertical side of $R$ contains
exactly one singularity. The exceptional rectangle is either $R_{\pi(\ell+m)}$
or $R_{\pi(\ell)}$ depending on whether the right endpoint of $I$ is in
$S_{\mathrm{t}}$ or $S_{\mathrm{b}}$, respectively.
### 4.5. The zippers
We now define the zippers. Let $p\in S_{\mathrm{t}}$. By definition, the
perpendicular ray that goes up from $p$ hits a singularity before it can
return to $I$. We call the resulting vertical segment $Z(p)$ a _top zipper_.
Similarly, we define _bottom zippers_ to be the segments of the perpendicular
rays that go down from points in $S_{\mathrm{b}}$ and hit a singularity before
they return to $I$.
### 4.6. Singularity parameters
We have fixed an orientation on the surface. Let $R=R_{\alpha}$ be the
rectangle with letter $\alpha\in\mathcal{A}$. Recall that every rectangle $R$
has two vertical sides and two horizontal sides. Laying $R$ out in the plane
we call its sides the _east_ , _north_ , _west_ , and _south_ sides. By
construction, the east and west sides of $R$ lie in vertical leaves
$\ell_{\text{E}}$ and $\ell_{\text{W}}$ that meet $Z$, the set of
singularities, in exactly one point before returning to $I$. We also lay out
$\ell_{\text{E}}$ and $\ell_{\text{W}}$ in the plane. Let $z_{\text{E}}$ and
$z_{\text{W}}$ be the images of the singularities in $\ell_{\text{E}}$ and
$\ell_{\text{W}}$, as they lie in the plane. Also by construction, at least
one of $z_{\text{E}}$ or $z_{\text{W}}$ lies in (the closure of) a vertical
side of $R$.
Breaking symmetry, suppose that $z_{\text{W}}$ lies in the west side of $R$,
not just in $\ell_{\text{W}}$. Let $m$ be the horizontal spanning arc of $R$,
which has one endpoint at $z_{\text{W}}$. Let $p$ be the endpoint of $m$ lying
in $\ell_{\text{E}}$. We call $p$ the _projection_ of $z_{\text{W}}$ to
$\ell_{\text{E}}$. We define the _singularity width_ of the letter $\alpha$ to
be
$x_{\alpha}=|m|,$
that is, the unsigned length of $m$. Note that $x_{\alpha}$ is exactly the
width of $R=R_{\alpha}$.
If $p=z_{\text{E}}$, we define the _singularity height_ of the letter $\alpha$
to be $y_{\alpha}=0$. Otherwise, let $\ell$ be the bounded segment of
$\ell_{\text{E}}-\\{p,z_{\text{E}}\\}$. We orient the path $\gamma=m\cup\ell$
away from $z_{\text{W}}$. Note that $\gamma$ turns right or left at $p$
depending on the position of $z_{\text{E}}$ in $\ell_{\text{E}}$. This turning
is defined due to the orientation on $q$; also it is independent of the
choices made. We now define the _singularity height_ $y_{\alpha}$ of the
letter $\alpha$. We take the magnitude of $y_{\alpha}$ to be
$|y_{\alpha}|=|\ell|.$
We take the sign of $y_{\alpha}$ to be positive if and only if $\gamma$ turns
left at $p$.
(a) The curve $\gamma$ turns left at $p$ and can be tightened to a saddle
connection.
(b) The curve $\gamma$ turns right at $p$ and can be tightened to a saddle
connection.
(c) The curve $\gamma$ may not be able to be tightened to a saddle connection.
Figure 4.7. Three cases of singularity coordinates.
###### Remark 4.8.
The singularity height $y_{\alpha}$ is _not_ , in general, the height of
$R=R_{\alpha}$. We discuss this point further below.
###### Remark 4.9.
For all but one rectangle $R=R_{\alpha}$, the points $z_{\text{E}}$ and
$z_{\text{W}}$ lie in its east and west sides, respectively. When this
happens, the path $\gamma$ defined above can be tightened to give a saddle
connection in $q$. The parameter $x_{\alpha}+iy_{\alpha}$ is then (up to
global change of sign) the period of $\gamma$. However, there are (abelian and
quadratic) differentials where $\gamma$ is not homotopic (relative to its
endpoints) to a saddle connection Figure 4.7. This accounts for the complexity
of the definition of the singularity height $y_{\alpha}$.
Note that the horizontal edges of rectangles representing the top letters
(correctly repeating the flip letters) are arcs whose union is exactly the
base-arc $I$. The same holds for the bottom letters. We deduce the _width
equality_
(4.10) $\sum_{k=1}^{\ell}x_{\pi(k)}=\sum_{k=\ell+1}^{\ell+m}x_{\pi(k)}.$
Again, the left sum is over the top while the right is over the bottom. Since
every translation letter appears exactly once on each side, we deduce from the
width equality that $\sum x_{\alpha}=\sum x_{\beta}$; here $\alpha$ ranges
over the top flip letters and $\beta$ ranges over the bottom flip letters.
### 4.11. Zipper parameters
Breaking symmetry, let $p\in S_{\mathrm{t}}\times\\{\mathrm{t}\\}$, where we
assume $p\neq r\times\\{\mathrm{t}\\}$ if the right endpoint $r$ of $I$ lies
in $S_{\mathrm{t}}$. Let $Z(p)$ be
a top zipper based at $p$. By a slight abuse of notation, think of $p$ as a
point in $I$. Let $R_{\pi(i)}$ for $i\leqslant\ell$ be the rectangle to the
left of $Z(p)$. Then the horizontal coordinate of $p$, that is the distance of
$p$ from the left-endpoint of $I$, is given by
$x(p)=\sum_{j=1}^{i}x_{\pi(j)}.$
The height of $Z(p)$ is given by
$h(Z(p))=\sum_{j=1}^{i}y_{\pi(j)},$
and we require this to be positive. This gives us the _top zipper
inequalities_
(4.12) $\sum_{j=1}^{i}y_{\pi(j)}>0$
for all $1\leqslant i<\ell$.
Similarly, suppose that $Z(p)$ is a bottom zipper, where $p\in
S_{\mathrm{b}}\times\\{\mathrm{b}\\}$ and we assume $p\neq
r\times\\{\mathrm{b}\\}$ if $r\in S_{\mathrm{b}}$. If $R_{\pi(i)}$, for
$i\geqslant\ell+1$, is the rectangle to the left of $Z(p)$, then the
horizontal coordinate is
$x(p)=\sum_{j=\ell+1}^{i}x_{\pi(j)}$
and the height is
$h(Z(p))=\sum_{j=\ell+1}^{i}y_{\pi(j)}.$
Here we require the height $h(Z(p))$ to be negative. This gives us the _bottom
zipper inequalities_
(4.13) $\sum_{j=\ell+1}^{i}y_{\pi(j)}<0$
for all $\ell+1\leqslant i<\ell+m$.
It remains to consider the right endpoint $r$. The zipper height of $Z(r)$
gives us a linear relation in the $y$ parameters. Note that the above
equalities express the height $h(Z(r))$ in two ways; namely
$h(Z(r))=\sum_{j=1}^{\ell}y_{\pi(j)}$
and
$h(Z(r))=\sum_{j=\ell+1}^{\ell+m}y_{\pi(j)}.$
We deduce the _height_ equality
(4.14) $\sum_{k=1}^{\ell}y_{\pi(k)}=\sum_{k=\ell+1}^{\ell+m}y_{\pi(k)}.$
This is equivalent to $\sum y_{\alpha}=\sum y_{\beta}$, where $\alpha$ ranges
over the top flip letters and $\beta$ ranges over the bottom flip letters.
The height and width equalities are essentially identical. Thus, the
dimensions of the space of $x$ and $y$ parameters are equal; they are
$|\mathcal{A}|$ in the abelian case and $|\mathcal{A}|-1$ in the quadratic
case.
### 4.15. Rectangle parameters
For every rectangle $R=R_{\alpha}$, at least one of the points $z_{\text{E}}$
and $z_{\text{W}}$ lies in its east or west side, respectively. Breaking
symmetry, suppose that $\alpha$ is a top letter and $z_{\text{E}}$ lies in its
east side. Let $Z(p)$ for $p\in S_{\mathrm{t}}\times\\{\mathrm{t}\\}$ be the
zipper with endpoint $z_{\text{E}}$. If $\alpha$ is a translation letter then
there is a zipper $Z(p^{\prime})$ for $p^{\prime}\in
S_{\mathrm{b}}\times\\{\mathrm{b}\\}$ with endpoint $z_{\text{E}}$ such that the
union $Z(p)\cup Z(p^{\prime})$ is the east side of $R_{\alpha}$. Recall that
the heights of bottom zippers are negative. Hence, the height $h(R_{\alpha})$
satisfies
(4.16) $h(R_{\alpha})=h(Z(p))-h(Z(p^{\prime})).$
If $\alpha$ is a flip letter instead then there is a zipper $Z(p^{\prime})$
for $p^{\prime}\in S_{\mathrm{t}}\times\\{\mathrm{t}\\}$ with endpoint
$z_{\text{E}}$ such that the union $Z(p)\cup Z(p^{\prime})$ is the east side of
$R_{\alpha}$. The height $h(R_{\alpha})$ is then
(4.17) $h(R_{\alpha})=h(Z(p))+h(Z(p^{\prime})).$
A similar discussion follows if $\alpha$ is a bottom letter.
### 4.18. Polytopes in $\mathcal{C}^{\mathrm{root}}$
Because of the flexibility in choosing base-arcs, a rooted differential can
have (infinitely) many different zippered rectangles constructions. For
example, suppose that $q$ is a doubly non-vanishing rooted differential. Then
the vertical foliation for $q$ is minimal; to see this, note that otherwise
the closure of a vertical leaf would be a subsurface with boundary, containing
saddle connections. Thus, for this $q$ any subarc $I\subseteq I_{v}$
satisfying condition (2) in Definition 4.2 can serve as a base-arc.
To remove this ambiguity from the combinatorics, an additional base-arc
normalisation must be imposed, as follows. Let $\mathcal{R}$ be the Rauzy
class of $\mathcal{C}^{\mathrm{root}}$. Fix an irreducible permutation
$\pi\in\mathcal{R}$.
###### Definition 4.19.
We define the set $P_{\pi}$ of _parameters_ for $\pi$ to be the pairs
$(x,y)\in\mathbb{R}^{\mathcal{A}}\times\mathbb{R}^{\mathcal{A}}$ satisfying
1. (1)
the width and height equalities (4.10) and (4.14),
2. (2)
the positivity condition $x_{\alpha}>0$ for all $\alpha$,
3. (3)
the zipper inequalities (4.12) and (4.13), and
4. (4)
the _base-arc normalisation_
$1<\sum_{i=1}^{\ell}x_{\pi(i)}<1+\min\\{x_{\pi(\ell)},x_{\pi(\ell+m)}\\}.$
The pair of inequalities in (4) gives the promised restriction on the length of
the base-arc. Any zippered rectangles construction arising from parameters in
this way is called _(base-arc) normalised_.
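The conditions of Definition 4.19 are all linear (in)equalities, so they can be checked mechanically. The following sketch uses our own encoding (a permutation as a pair of words, parameters as dictionaries of singularity widths and heights); it does not verify irreducibility of $\pi$, and the numerical example is hypothetical.

```python
# Check conditions (1)-(4) of Definition 4.19 for parameters (x, y)
# attached to a permutation given by its top and bottom words.

def is_normalised_parameter(top, bottom, x, y, tol=1e-12):
    tops = [x[a] for a in top]
    bots = [x[a] for a in bottom]
    ty = [y[a] for a in top]
    by = [y[a] for a in bottom]
    # (1) the width and height equalities
    if abs(sum(tops) - sum(bots)) > tol or abs(sum(ty) - sum(by)) > tol:
        return False
    # (2) positivity of all widths
    if any(x[a] <= 0 for a in set(top) | set(bottom)):
        return False
    # (3) zipper inequalities: proper partial sums of the heights are
    #     positive along the top row and negative along the bottom row
    if any(sum(ty[:i]) <= 0 for i in range(1, len(top))):
        return False
    if any(sum(by[:i]) >= 0 for i in range(1, len(bottom))):
        return False
    # (4) the base-arc normalisation on the total width of the top row
    total = sum(tops)
    return 1 < total < 1 + min(tops[-1], bots[-1])

# A hypothetical abelian example on the pair (AB, BA).
x = {"A": 0.7, "B": 0.6}
y = {"A": 0.2, "B": -0.1}
print(is_normalised_parameter("AB", "BA", x, y))
```

Shrinking both widths below a total of one violates condition (4), so the same heights with `x = {"A": 0.3, "B": 0.3}` would be rejected.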
Let $q$ be a doubly non-vanishing rooted differential. Let $Z(q,v)$ be the
subset of $x$ along the separatrix $I_{v}$ such that $I(x)$ is a base-arc.
###### Lemma 4.20.
Let $q$ be a doubly non-vanishing rooted differential. Then the base of the
root is the only accumulation point of $Z(q,v)$ in $I_{v}$.
###### Proof.
Suppose that a point $x\in Z(q,v)$ is an accumulation point of $Z(q,v)$ and
that $x$ is not the root. As $q$ is doubly non-vanishing, there is no upper
bound on the lengths of possible base-arcs. Hence, $Z(q,v)$ contains a point
$x^{\prime}$ such that the base-arc $I(x^{\prime})$ is longer than $I(x)$. But
then the first return map to $I(x^{\prime})$ along the vertical has infinitely
many intervals, which is a contradiction. See Yoccoz’s lecture notes for more
details [Yoc10, Section 3.1]. ∎
It follows that $Z(q,v)$ contains a point $r$ whose base-arc is the shortest
among those whose length is at least one. The base-arc $I(r)$ is then the
unique subarc of $I_{v}$ whose zippered rectangle construction yields an
irreducible permutation with parameters that satisfy condition (4) of
Definition 4.19.
The above procedure applies to rooted quadratic differentials off of a
(somewhat complicated) measure zero set. For each such differential, it gives a
pair (combinatorics, parameters).
The opposite direction is provided by Boissy and Lanneau [BL09, Lemma 2.12].
Suppose that $\pi$ in $\mathcal{R}(\mathcal{C}^{\mathrm{root}})$ is an
irreducible permutation. Suppose that $(x,y)$ is any parameter in $P_{\pi}$.
Then, by placing a marked point at the origin of $\mathbb{C}$, by laying out
an arc on the positive real axis, by laying down rectangles, and gluing
according to the associated zipper lengths, Boissy and Lanneau build a
quadratic differential; the details are somewhat subtle.
We call this differential $q_{\pi}(x,y)$; we use $q_{\pi}\colon
P_{\pi}\to\mathcal{C}^{\mathrm{root}}$ to denote the resulting map. We call
the image
$\mathcal{C}_{\pi}=q_{\pi}(P_{\pi})\subseteq\mathcal{C}^{\mathrm{root}}$ a
_polytope_. For any doubly non-vanishing rooted differential $q$ in
$\mathcal{C}_{\pi}$, if $(\pi,(x,y))$ are its normalised combinatorics and
parameters, then $q_{\pi}(x,y)=q$.
On the other hand, it may happen that the polytopes arising from distinct
permutations coincide as sets. More precisely, consider the following
equivalence relation on permutations. Two permutations $\pi$ and
$\pi^{\prime}$ are _equivalent through re-indexing_ if there is a permutation
$p\in\operatorname{Sym}(\mathcal{A})$ such that $\pi^{\prime}=p\circ\pi$. As
letter re-indexing by $p$ does not affect the geometric construction, it
follows that if $\pi$ and $\pi^{\prime}$ are equivalent then
$\mathcal{C}_{\pi}=\mathcal{C}_{\pi^{\prime}}$.
###### Lemma 4.21.
For any irreducible permutation $\pi\in\mathcal{R}$, the map $q_{\pi}$ is a
homeomorphism from $P_{\pi}$ onto $\mathcal{C}_{\pi}$. Moreover, if $\pi$ and
$\pi^{\prime}$ are not equivalent then
$\mathcal{C}_{\pi}\cap\mathcal{C}_{\pi^{\prime}}=\varnothing$. The union of
the sets $\mathcal{C}_{\pi}$ is dense in $\mathcal{C}^{\mathrm{root}}$.
###### Proof.
Fix $\pi$ and a parameter $(x_{\alpha},y_{\alpha})_{\alpha\in\mathcal{A}}$.
For every letter $\alpha\in\mathcal{A}$ we are given an arc $\gamma_{\alpha}$
connecting a pair of singularities whose period is exactly
$x_{\alpha}+iy_{\alpha}$. Boissy–Lanneau show that these periods give
coordinates in $\mathcal{C}^{\mathrm{root}}$ [BL09, Lemma 2.12]. Since
$q_{\pi}$ is linear in these periods, it is continuous and injective. Since
$P_{\pi}$ and $\mathcal{C}_{\pi}$ have the same dimension, the map $q_{\pi}$
is a homeomorphism onto its image.
Thus, for any non-equivalent $\pi$ and $\pi^{\prime}$ the intersection
$\mathcal{C}_{\pi}\cap\mathcal{C}_{\pi^{\prime}}$ is open. Hence, if it is
non-empty, it must contain a doubly non-vanishing differential. However, this
contradicts the uniqueness of the permutation given by the zippered
rectangles construction.
Furthermore, every doubly non-vanishing differential $q$ in
$\mathcal{C}^{\mathrm{root}}$ lies in some $\mathcal{C}_{\pi}$, so we are
done.
∎
The above lemma shows that we can pass unambiguously from a typical
differential (such as a doubly non-vanishing differential) to combinatorics
and parameters.
### 4.22. Based loops in $\mathcal{C}^{\mathrm{root}}$
The main result of this article is the following theorem stating that every
based loop in $\mathcal{C}^{\mathrm{root}}$ can almost be straightened out
into a concatenation of Teichmüller geodesic segments.
###### Theorem 4.23.
Let $\mathcal{C}^{\mathrm{root}}$ be a stratum component of the moduli space
of rooted quadratic differentials. Let $q_{0}$ be a base-point in
$\mathcal{C}^{\mathrm{root}}$. Let
$\gamma\colon[0,1]\to\mathcal{C}^{\mathrm{root}}$ be a loop based at $q_{0}$.
Then, up to a homotopy relative to the base-point, the loop $\gamma$ can be
written as a finite concatenation of paths that are either
* •
(forward or backward) Teichmüller geodesic segments; or
* •
contained inside some polytope.
###### Proof.
We fix a base-point $q_{0}$ in $\mathcal{C}^{\mathrm{root}}$. For convenience,
we assume that $q_{0}$ is doubly non-vanishing and hence contained in some
polytope.
Recall that the set $\mathcal{W}\subseteq\mathcal{C}^{\mathrm{root}}$ of
doubly vanishing rooted differentials is a countable union of codimension-two
loci. Moreover, if $q\in\mathcal{C}^{\mathrm{root}}-\mathcal{W}$, then it
admits a base-arc by Lemma 4.3.
Let $q$ be a differential in $\mathcal{C}^{\mathrm{root}}-\mathcal{W}$ and $I$
be a base-arc. This gives a permutation $\pi$ and singularity parameters
$(x,y)$.
Since the singularity parameters are coordinates for
$\mathcal{C}^{\mathrm{root}}-\mathcal{W}$, there exists an open set in
$\mathcal{C}^{\mathrm{root}}$ containing $q$ given by the zippered rectangles
construction with underlying permutation $\pi$. Note that the base-arc $I$ may
not be normalised, so $q$ may not belong to $\mathcal{C}_{\pi}$. However, the
only condition that the parameters $(x,y)$ may fail to satisfy in order to
belong to $\mathcal{C}_{\pi}$ is
$1<\sum_{k=1}^{\ell}x_{\pi(k)}<1+\min\\{x_{\pi(\ell)},x_{\pi(\ell+m)}\\},$
which can be forced to hold by applying the Teichmüller flow. Thus, there
exists $t(q)\in\mathbb{R}$ such that $g_{t(q)}q\in\mathcal{C}_{\pi}$.
We now define $U(q)\subseteq\mathcal{C}^{\mathrm{root}}$ to be a contractible
open set around $q$ obtained by varying the parameters $(x,y)$ by a very small
amount so that $g_{t(q)}U(q)\subseteq\mathcal{C}_{\pi}$. That is, with respect
to the parameters $(x,y)$, the set $U(q)$ is a _box_ with sides parallel to
the coordinate planes. Therefore, $\partial U(q)$ is a union of finitely many
codimension-one embedded submanifolds (with boundary) in
$\mathcal{C}^{\mathrm{root}}$.
The locus $\mathcal{V}$ of vanishing rooted differentials, that is, rooted
differentials with a horizontal or vertical saddle connection, can be covered
by countably many relatively open codimension-one charts. Hence, we may apply
a homotopy (relative to $q_{0}$) to arrange that $\gamma$ is transverse to
$\mathcal{V}$. This can be done by using standard techniques in differential
topology [Hir94, Theorem 2.5, page 78]. After this, the loop $\gamma$ is
disjoint from $\mathcal{W}$.
Now, the boxes $(U({\gamma(s)}))_{s}$ cover $\gamma$, so, by compactness,
there exists a finite collection $s_{0},\dotsc,s_{n}\in[0,1]$ such that
$(U({\gamma(s_{j})}))_{j}$ covers $\gamma$. Let $U_{j}=U(\gamma(s_{j}))$.
We now perform a further homotopy (relative to $q_{0}$) supported in the union
of the boxes. Again appealing to standard techniques [Hir94, Theorem 2.5, page
78], we now have that $\gamma$ is transverse to the sides of the boxes $U_{j}$
and is again transverse to $\mathcal{V}$.
We obtain that $\gamma$ intersects $\partial U_{j}$ only finitely many times
and, therefore, that $\gamma^{-1}(U_{j})$ is a finite union of intervals in
$[0,1]$ for each $j$. All such intervals are open, except for possibly two
intervals $[0,s)$ and $(s^{\prime},1]$. If these two intervals exist in
$\gamma^{-1}(U_{j})$, we replace them by their union
$[0,s)\cup(s^{\prime},1]$.
Now, we select a minimal subcollection $J_{0},\dotsc,J_{m}$ of these sets that
covers $[0,1]$. Thus, $\gamma(J_{k})$ is contained inside some $U_{j}$, which
we denote by $V_{k}$. Observe that the list $V_{0},\dotsc,V_{m}$ may contain
repetitions. By setting $J_{m+1}=J_{0}$, we assume that the indices are chosen
so that $J_{k}\cap J_{k+1}$ is a non-empty open interval or a set of the form
$[0,s)\cup(s^{\prime},1]$. Without loss of generality, we can assume that
$0\in J_{m}\cap J_{0}$, since this can be arranged by covering $J_{0}$ and
$J_{m}$ with smaller intervals and rearranging the indices.
Since $\gamma$ is transverse to $\mathcal{V}$, we have that $\gamma(s)$ lies
in $\mathcal{V}$ for at most countably many $s\in[0,1]$. Thus, there exists a
doubly non-vanishing quadratic differential $q_{k+1}$ in the image of each set
$\gamma(J_{k}\cap J_{k+1})$ (with $q_{m+1}=q_{0}$). Hence, we obtain a
sequence of times $0=s_{0}\leqslant s_{1}\leqslant\dotsb\leqslant s_{m}=1$
such that the closed intervals $[s_{k},s_{k+1}]\subseteq[0,1]$ cover $[0,1]$
and $\gamma(s_{k})=q_{k}$.
Figure 4.24. Illustration of the proof of Theorem 4.23. Part of the loop
$\gamma$ is depicted as a solid curve. The dotted lines represent the
boundaries of the polytopes. Unlike the boxes $V_{k-1}$ and $V_{k+1}$, the box
$V_{k}$ is not contained inside a polytope, so the Teichmüller flow must be
applied to it. The resulting segment $\delta_{k}$ is shown as a dashed curve.
Since $V_{k}$ is equal to one of the $U_{j}$ by construction, there exists a
real number $t_{k}$ such that $g_{t_{k}}V_{k}$ is completely contained inside
some polytope $\mathcal{C}_{\pi_{k}}$. Let $\delta_{k}$ be the path starting
at $q_{k}$ and ending at $q_{k+1}$ given by the concatenation of the paths
* •
$g_{t}q_{k}$ for $t\in[0,t_{k}]$;
* •
$g_{t_{k}}\gamma(s)$ for $s\in[s_{k},s_{k+1}]$; and
* •
$g_{-t}q_{k+1}$ for $t\in[-t_{k},0]$.
if $t_{k}\geqslant 0$ or
* •
$g_{-t}q_{k}$ for $t\in[0,-t_{k}]$;
* •
$g_{t_{k}}\gamma(s)$ for $s\in[s_{k},s_{k+1}]$; and
* •
$g_{t}q_{k+1}$ for $t\in[t_{k},0]$.
if $t_{k}\leqslant 0$. See Figure 4.24 for an illustration of this proof.
The union of the arcs $\delta_{k}$ and $\gamma_{k}=\gamma|[s_{k},s_{k+1}]$
bounds a disc in $\mathcal{C}^{\mathrm{root}}$ foliated by the arcs
$g_{t}\gamma_{k}$ where $t\in[0,t_{k}]$. In particular, $\delta_{k}$ is
homotopic to $\gamma_{k}$, relative to the endpoints.
Let $\delta$ be the concatenation of the paths $(\delta_{k})_{k}$. By
construction, $\delta$ is a closed curve, homotopic to $\gamma$. Moreover, the
pieces $g_{t_{k}}\gamma(s)$ for $s\in[s_{k},s_{k+1}]$ in the concatenation are
the only paths that are not (forward or backward) Teichmüller geodesic
segments. This concludes the proof of the theorem. ∎
###### Remark 4.25.
We do not attempt to make optimal choices to reduce the length of geodesic
pieces in the concatenation for $\delta$. A simple way to reduce these lengths
is to choose the normalised zippered rectangle construction whenever $q$
admits one. Thus, if $q\in\mathcal{C}^{\mathrm{root}}-\mathcal{W}$ is
contained in some polytope, then we can choose the box $U(q)$ to be contained
inside the same polytope and set $t(q)=0$ for such boxes. On the other hand,
when $q$ does not lie in any polytope we can choose the length of the base-arc
to be as close to $1$ as possible, so $t(q)$ is as small as possible. See
Section A.4 for a concrete example of the construction.
### 4.26. Rauzy–Veech induction and the Teichmüller flow
We will now define Rauzy–Veech induction on zippered rectangles. The induction
is defined by passing to the smaller base-arc with length
$|I|-\min\\{x_{\pi(\ell)},x_{\pi(\ell+m)}\\}$.
Let $\pi$ be an irreducible permutation in $\mathcal{R}$. Let $(x,y)$ be
singularity parameters for a zippered rectangle construction with underlying
permutation $\pi$. Let $\alpha=\pi(\ell)$ and $\beta=\pi(\ell+m)$. Since $\pi$
is irreducible, $\alpha\neq\beta$ and we will assume that $x_{\alpha}\neq
x_{\beta}$. Breaking symmetry, suppose that $x_{\alpha}>x_{\beta}$. In this
case, we say that the _top letter wins_. We set the new width parameters as
$x^{\prime}_{\alpha}=x_{\alpha}-x_{\beta},$
and $x^{\prime}_{\rho}=x_{\rho}$ for all $\rho\neq\alpha$. Similarly we set
the new height parameters as
$y^{\prime}_{\beta}=y_{\beta}+y_{\alpha},$
and $y^{\prime}_{\rho}=y_{\rho}$ for all $\rho\neq\beta$. The parameter
transformations can be encoded in terms of a matrix. Let
$E=(e_{rs})_{r,s\in\mathcal{A}}$ be the $\mathcal{A}\times\mathcal{A}$
elementary matrix with ones along the diagonal, $e_{\alpha\beta}=1$ and all
other entries zero. Then $Ex^{\prime}=x$ and $E^{T}y=y^{\prime}$.
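The matrix identities $Ex^{\prime}=x$ and $E^{T}y=y^{\prime}$ can be verified numerically; the following sketch uses hypothetical widths and heights on a three-letter alphabet, with $\alpha$ and $\beta$ indexed as $0$ and $1$:

```python
import numpy as np

# Hypothetical parameters on a three-letter alphabet {alpha, beta, rho},
# indexed 0, 1, 2.  The top letter wins, so x_alpha > x_beta.
alpha, beta = 0, 1
x = np.array([0.7, 0.3, 0.5])   # widths
y = np.array([0.2, 0.4, 0.6])   # heights

# The induction: x'_alpha = x_alpha - x_beta and y'_beta = y_beta + y_alpha,
# with all other entries unchanged.
x_new = x.copy(); x_new[alpha] -= x[beta]
y_new = y.copy(); y_new[beta] += y[alpha]

# Elementary matrix E: identity plus a single 1 in entry (alpha, beta).
E = np.eye(3)
E[alpha, beta] = 1.0

assert np.allclose(E @ x_new, x)     # E x' = x
assert np.allclose(E.T @ y, y_new)   # E^T y = y'
```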
To define the new permutation we consider the two cases:
1. (1)
$\alpha$ is a translation letter; or
2. (2)
$\alpha$ is a flip letter.
Suppose $\alpha$ is a translation letter and let $\pi(j)=\alpha$ for some
$\ell+1\leqslant j<\ell+m$. We then set
* •
$\pi^{\prime}(i)=\pi(i)$ for all $i\leqslant j$,
* •
$\pi^{\prime}(j+1)=\beta$, and
* •
$\pi^{\prime}(i)=\pi(i-1)$ for all $i>j+1$.
Suppose now that $\alpha$ is a flip letter and let $\pi(j)=\alpha$ for some
$1\leqslant j<\ell$. We set the top indices to range from $1$ to $\ell+1$ and
the bottom indices to range from $\ell+2$ to $\ell+m$ and then set
* •
$\pi^{\prime}(i)=\pi(i)$ for all $i<j$,
* •
$\pi^{\prime}(j)=\beta$,
* •
$\pi^{\prime}(i)=\pi(i-1)$ for all $i>j$.
With the above definitions, we set
$R_{\mathrm{t}}(\pi,x,y)=(\pi^{\prime},x^{\prime},y^{\prime})$. If
$x_{\alpha}<x_{\beta}$ instead, then we say that the _bottom letter wins_. The
definition of $R_{\mathrm{b}}$ is analogous.
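The permutation update in the translation-letter case is a purely combinatorial operation; a minimal sketch (the list encoding of $\pi$ and the example rows are hypothetical illustrations, not the paper's conventions) is:

```python
def rauzy_top_translation(pi, l, j):
    """Permutation update when the top letter alpha = pi(l) wins and alpha is
    a translation letter occurring at bottom position j (l+1 <= j < l+m).

    pi -- list with pi[i-1] = pi(i): the top row followed by the bottom row.
    Returns the new permutation pi' as a list of the same length.
    """
    alpha, beta = pi[l - 1], pi[-1]   # pi(l) and pi(l+m)
    assert pi[j - 1] == alpha         # alpha's occurrence in the bottom row
    new = pi[:j]                      # pi'(i) = pi(i)   for i <= j
    new.append(beta)                  # pi'(j+1) = beta
    new += pi[j:-1]                   # pi'(i) = pi(i-1) for i > j+1
    return new

# Hypothetical example with l = m = 3: top row (a, b, c), bottom row (c, a, b).
# The winner c sits at bottom position j = 4; beta = b is re-inserted after it.
pi = ["a", "b", "c", "c", "a", "b"]
print(rauzy_top_translation(pi, l=3, j=4))  # ['a', 'b', 'c', 'c', 'b', 'a']
```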
The codimension-one locus $x_{\alpha}=x_{\beta}$ is contained in
$\mathcal{V}$; the induction is undefined there.
Rauzy–Veech induction makes the Rauzy class
$\mathcal{R}=\mathcal{R}(\mathcal{C}^{\mathrm{root}})$ into a directed graph:
the vertices are the irreducible permutations in $\mathcal{R}$, with an arrow
from a permutation $\pi$ to $\pi^{\prime}$ if
$\pi^{\prime}=R_{\mathrm{t}}(\pi)$ or $\pi^{\prime}=R_{\mathrm{b}}(\pi)$. A
component $\mathcal{D}$ of this graph is called a _Rauzy diagram_.
We now explain the coding of the Teichmüller flow using normalised parameters
and Rauzy–Veech induction.
Let $\mathcal{C}_{\pi}=q_{\pi}(P_{\pi})$ be the polytope given by normalised
singularity parameters for an irreducible permutation $\pi$. Let
$\alpha=\pi(\ell)$ and $\beta=\pi(\ell+m)$.
We say that $q$ is a _forward-tied_ differential for $\pi$ if there exists a
sequence $q_{n}=q_{\pi}(x_{n},y_{n})$ converging to $q$ for which the
length of the base-arc $I$ tends to
$1+\min\\{x_{\pi(\ell)},x_{\pi(\ell+m)}\\}$ from below. Similarly, we say $q$
is _backward-tied_ for $\pi$ if it is the limit of a sequence where the length
of the base-arc tends to $1$ from above.
The period of the base-arc is linear in period coordinates. Therefore, the
backward-tied differentials are contained in a codimension-one locus in
$\mathcal{C}^{\mathrm{root}}$. It will become clear using Rauzy–Veech
induction that the forward-tied differentials are also contained in a
codimension-one locus in $\mathcal{C}^{\mathrm{root}}$.
Let $\mathcal{F}(\pi)$ and $\mathcal{B}(\pi)$ be the set of forward-tied and
backward-tied differentials for $\pi$. We call these the _flow faces_. Let $q$
be a forward-tied differential. Suppose that there is a sequence
$q_{n}=q_{\pi}(x_{n},y_{n})$ converging to $q$ for which the parameters
$x_{n}$ and $y_{n}$ are bounded away from zero and infinity, respectively. Then,
after passing to a subsequence, $(x_{n},y_{n})$ converges to some $(x,y)$ in the closure of
$P_{\pi}$ such that all its widths $x_{\alpha}>0$ and heights $y_{\alpha}$ are
bounded. It follows that $q$ is contained in the interior of
$\mathcal{F}(\pi)$ and the map $q_{\pi}$ extends from $P_{\pi}$ to such
parameters. We can similarly characterise differentials in the interior of
$\mathcal{B}(\pi)$ and extend $q_{\pi}$ to such parameters.
Let $q$ be a forward-tied differential in the interior of $\mathcal{F}(\pi)$
and further suppose that $x_{\alpha}>x_{\beta}$. The Rauzy–Veech induction on
$(\pi,x,y)$ is then defined and let
$(\pi^{\prime},x^{\prime},y^{\prime})=R_{\mathrm{t}}(\pi,x,y)$. Let $I$ be the
base-arc for $q=q_{\pi}(x,y)$ and $I^{\prime}$ the base-arc for
$q^{\prime}=q_{\pi^{\prime}}(x^{\prime},y^{\prime})$. Note that
$|I^{\prime}|=|I|-x_{\beta}=1+\min\\{x_{\alpha},x_{\beta}\\}-x_{\beta}=1.$
This means that $q^{\prime}$ is a backward-tied differential for
$\pi^{\prime}$.
We conclude that the interiors of
$\mathcal{F}(\pi)\cap\\{x_{\alpha}>x_{\beta}\\}$ and
$\mathcal{F}(\pi)\cap\\{x_{\alpha}<x_{\beta}\\}$ are identified with
corresponding subsets of $\mathcal{B}(R_{\mathrm{t}}(\pi))$ and
$\mathcal{B}(R_{\mathrm{b}}(\pi))$, respectively.
Let $q\in\mathcal{C}^{\mathrm{root}}-\mathcal{V}$ and let $g_{t}q$ for
$t\geqslant 0$ be the Teichmüller geodesic ray through $q$. Since $q$ is
doubly non-vanishing, it is contained in some polytope $\mathcal{C}_{\pi}$ and
its parameters satisfy $x_{\pi(\ell)}\neq x_{\pi(\ell+m)}$. Let $I$ be the
base-arc in $q$. Then the length of the base-arc in $g_{t}q$ is $e^{t}|I|$.
Therefore, there is some time $t_{1}>0$ such that
$g_{t_{1}}q\in\mathcal{F}(\pi)$. By Rauzy induction, $g_{t_{1}}q$ is also
contained in $\mathcal{B}(\pi^{\prime})$, where $\pi^{\prime}=R_{\ast}(\pi)$
with $\ast=\mathrm{t}$ or $\ast=\mathrm{b}$, depending on which of $x_{\pi(\ell)}$
and $x_{\pi(\ell+m)}$ is larger. Let
$\alpha^{\prime},\beta^{\prime}\in\mathcal{A}$ be the last top and bottom
letters of $\pi^{\prime}$, respectively. As $g_{t_{1}}q$ is also doubly non-
vanishing, the widths corresponding to $\alpha^{\prime}$ and $\beta^{\prime}$
do not coincide. So there is time $t_{2}>t_{1}$ such that $g_{t_{2}}q$ is
contained in $\mathcal{F}(\pi^{\prime})$. This description continues
iteratively.
It is possible that the Rauzy diagram $\mathcal{D}$ contains permutations
$\pi$ and $\pi^{\prime}$ that are equivalent under some re-indexing
$p\in\operatorname{Sym}(\mathcal{A})$. For such permutations,
$\mathcal{C}_{\pi}=\mathcal{C}_{\pi^{\prime}}$. Note then that the
permutations arising from $\pi$ and $\pi^{\prime}$ by Rauzy–Veech induction
are also pairwise equivalent under the same re-indexing $p$. It follows that
$p$ induces a symmetry of $\mathcal{D}$ as a directed graph. We call the
quotient of $\mathcal{D}$ by all such symmetries the _reduced Rauzy diagram_
and denote it by $\mathcal{D}^{\mathrm{red}}$. It is then clear that we should
use the quotient graph $\mathcal{D}^{\mathrm{red}}$ to code Teichmüller flow
on $\mathcal{C}^{\mathrm{root}}$.
The above coding of Teichmüller flow has an immediate combinatorial
consequence. By the work of Masur and Veech [Mas82, Vee82], the Teichmüller
flow on $\mathcal{C}^{\mathrm{root}}$ is ergodic for the Masur–Veech measure.
By ergodicity, a positive measure set of differentials in
$\mathcal{C}_{\pi}-\mathcal{V}$ visits every $\mathcal{C}_{\sigma}$ under the
Teichmüller flow. This implies that
the reduced Rauzy diagram $\mathcal{D}^{\mathrm{red}}$ is a strongly connected
graph, that is, there is a directed path between any pair of vertices. It then
follows that each component of $\mathcal{D}$ is also strongly connected. See
the article by Boissy–Lanneau for more details [BL09].
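Strong connectivity of a finite directed graph such as $\mathcal{D}^{\mathrm{red}}$ can be tested directly: the graph is strongly connected exactly when every vertex is reachable from a fixed start vertex both in the graph and in its reverse. A sketch on a toy graph (the vertex names are placeholders, not actual permutations) is:

```python
from collections import defaultdict

def is_strongly_connected(vertices, edges):
    """A directed graph is strongly connected iff every vertex is reachable
    from one fixed start vertex in both the graph and its reverse."""
    def reachable(adj, start):
        seen, stack = {start}, [start]
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen

    fwd, rev = defaultdict(list), defaultdict(list)
    for u, w in edges:
        fwd[u].append(w)
        rev[w].append(u)
    start = next(iter(vertices))
    return reachable(fwd, start) == set(vertices) == reachable(rev, start)

# A toy diagram: a directed 3-cycle with an extra chord is strongly connected.
V = {"p1", "p2", "p3"}
E = [("p1", "p2"), ("p2", "p3"), ("p3", "p1"), ("p1", "p3")]
print(is_strongly_connected(V, E))  # True
```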
The following three lemmas are standard facts in the theory of Rauzy–Veech
sequences and are often implicitly used. We include proof sketches for
completeness.
###### Lemma 4.27.
For any doubly non-vanishing rooted differential $(q,v)$ and any $T>0$ the
geodesic segment $[q,g_{T}q]$ crosses finitely many flow faces.
###### Proof.
By applying Teichmüller flow, we may assume that $(q,v)$ is contained in some
$\mathcal{B}_{0}=\mathcal{B}(\pi)$. Let
$\mathcal{B}_{1},\mathcal{B}_{2},\dots$ be the sequence of backward faces the
geodesic segment $[q,g_{T}q]$ crosses. Let $0<t_{k}\leqslant T$ be the
monotonically increasing sequence of times such that $g_{t_{k}}q$ is contained
in $\mathcal{B}_{k}$.
Recall that $Z(q,v)$ is the set of points $x$ in $I_{v}$ such that $I(x)$ is a
base-arc. Identifying $I_{v}$ with the positive real axis, it follows from the
definitions that each point $e^{-t_{k}}$ is contained in $Z(q,v)$.
By Lemma 4.20, the intersection $Z(q,v)\cap[e^{-T},1)$ is finite. Hence, the
sequence $t_{k}$ is finite and we are done. ∎
###### Remark 4.28.
It follows from the previous lemma that the curve $\delta$ we construct in the
proof of Theorem 4.23 crosses finitely many flow faces.
Let $\zeta$ be a finite Rauzy–Veech sequence that starts at $\pi$ and ends at
$\pi^{\prime}$. Let $P_{\zeta}\subseteq P_{\pi}$ be the parameters of
differentials in $\mathcal{C}_{\pi}$ whose Rauzy–Veech sequence begins with
$\zeta$. By inductively using the definition of Rauzy–Veech induction, it
follows that the set $P_{\zeta}$ is a convex open subset of $P_{\pi}$. In
particular, $\mathcal{C}_{\zeta}=q_{\pi}(P_{\zeta})$ is path connected.
We say that a Teichmüller segment $[q,g_{t}q]$ is a _$\zeta$-segment_ if
$q\in\mathcal{C}_{\zeta}$, $g_{t}q\in\mathcal{C}_{\pi^{\prime}}$, and the
Rauzy–Veech sequence of $[q,g_{t}q]$ is $\zeta$.
###### Lemma 4.29.
Let $\zeta$ be a finite Rauzy–Veech sequence that starts at $\pi$ and ends at
$\pi^{\prime}$. Then any pair of $\zeta$-segments are isotopic in
$\mathcal{C}^{\mathrm{root}}$ through $\zeta$-segments.
###### Proof.
As $[q,g_{t}q]$ is a $\zeta$-segment, it follows that there exists an open set
$U$ in $q_{\pi}(P_{\zeta})$ centred at $q$ such that $g_{t}U$ is contained in
$\mathcal{C}_{\pi^{\prime}}$. Any $\zeta$-segment with an endpoint in $U$ is
thus homotopic to $[q,g_{t}q]$. Indeed, for any $q^{\prime}$ in $U$, we can
connect $q$ to $q^{\prime}$ by an arc contained in $U$. We flow the arc for
time $t$ to get an arc in $\mathcal{C}_{\pi^{\prime}}$. We thus have a
homotopy between $[q,g_{t}q]$ and $[q^{\prime},g_{t}q^{\prime}]$. We then do a
further homotopy from $[q^{\prime},g_{t}q^{\prime}]$ to
$[q^{\prime},g_{t^{\prime}}q^{\prime}]$. The lemma then follows from the path
connectedness of $q_{\pi}(P_{\zeta})$. ∎
###### Lemma 4.30.
Let $U$ be an open set contained in some polytope $\mathcal{C}_{\pi}$. Then
there exists a Rauzy–Veech sequence $\theta$ (that depends on $U$) starting
from $\pi$ such that, for every $q\in\mathcal{C}_{\theta}$, the Teichmüller
segment in $\mathcal{C}_{\pi}$ containing $q$ intersects $U$.
###### Proof.
Let $\Delta$ be the standard simplex in $\mathbb{R}^{\mathcal{A}}$. Let
$p\colon P_{\pi}\to\Delta$ be the projection $(x,y)\mapsto x/\|x\|_{1}$.
By iteration, the transformation on parameters induced by a Rauzy–Veech
sequence $\zeta$ is encoded by a non-negative matrix $B_{\zeta}$, that is, if
$(x,y)$ is in $P_{\zeta}$ then the new parameters $x^{(\zeta)}$ are related to
$x$ by $B_{\zeta}x^{(\zeta)}=x$. Suppose $\zeta$ ends at $\pi^{\prime}$. It
also follows that $p(B_{\zeta}P_{\pi^{\prime}})=p(P_{\zeta})$.
It then suffices to show that there is a Rauzy–Veech sequence $\theta$ such
that $p(B_{\theta}P_{\pi^{\prime}})$ is contained in $p(q_{\pi}^{-1}U)$, where
$\pi^{\prime}$ is the permutation that $\theta$ ends at. This is a standard
fact but we will include a brief justification for completeness.
By Masur’s [Mas82] and Veech’s [Vee82] solution of the Keane conjecture or
even more strongly by Kerckhoff–Masur–Smillie [KMS86], we can find $q\in U$
whose vertical foliation is uniquely ergodic or, more strongly, whose
Teichmüller ray is recurrent. Let $\zeta_{n}$ be the Rauzy–Veech sequence of length $n$ for
$q$ and $B_{n}$ the corresponding matrix. Also, let $\pi_{n}$ denote the
permutation at its end. Since the vertical foliation is uniquely ergodic, the
nested sequence $p(B_{n}P_{\pi_{n}})$ converges to $p(q_{\pi}^{-1}(q))$ as
$n\to\infty$. Since all such sets are polytopes inside the standard simplex
$\Delta$, there is some $n$ large enough such that $p(B_{n}P_{\pi_{n}})$ is
contained inside $p(q_{\pi}^{-1}U)$. We may take $\theta$ to be $\zeta_{n}$
to conclude the proof.
∎
## 5. The flow group is the fundamental group
Let $\pi_{1}(\mathcal{D}^{\mathrm{red}},\pi)$ be the fundamental group based
at $\pi$ of $\mathcal{D}^{\mathrm{red}}$ as an undirected graph. Let $q_{0}$
be a point in $\mathcal{C}_{\pi}$.
To simplify notation, we will fix a component of $\mathcal{D}$ evenly covering
$\mathcal{D}^{\mathrm{red}}$. We will continue to refer to a vertex in
$\mathcal{D}^{\mathrm{red}}$ as an irreducible permutation when in fact it is
an equivalence class of vertices related by letter re-indexing. Note that
every directed path in $\mathcal{D}^{\mathrm{red}}$ is realised by an actual
Rauzy–Veech sequence in $\mathcal{D}$. Thus, we will use the actual
Rauzy–Veech sequences in $\mathcal{D}$ to concatenate sensibly.
###### Proposition 5.1.
There is a natural homomorphism
$\pi_{1}(\mathcal{D}^{\mathrm{red}},\pi)\to\pi_{1}(\mathcal{C}^{\mathrm{root}},q_{0}).$
###### Proof.
Let $\sigma$ be any irreducible permutation in $\mathcal{D}^{\mathrm{red}}$.
As $\mathcal{D}^{\mathrm{red}}$ is strongly connected, we can choose a
directed path $\xi(\sigma)$ from $\pi$ to $\sigma$ and a directed path
$\xi^{\prime}(\sigma)$ from $\sigma$ to $\pi$. We choose empty paths for
$\xi(\pi)$ and $\xi^{\prime}(\pi)$.
Every loop $\kappa$ in $\mathcal{D}^{\mathrm{red}}$ based at $\pi$ can be
written as a concatenation of alternating forward and backward paths. Breaking
symmetry, suppose the odd indexed paths are forward paths and the even indexed
paths are backward paths. We may then write the concatenation as
$\kappa_{1}\kappa_{2}^{-1}\kappa_{3}\kappa_{4}^{-1}\dotsb{}$.
Let $\sigma_{i}$ and $\tau_{i}$ be the beginning and ending permutations for
$\kappa_{i}$. Then we have the string of relations
$\pi=\sigma_{1},\tau_{1}=\sigma_{2},\tau_{2}=\sigma_{3}$, etc. The
concatenation for $\kappa$ is then equal to the concatenation
$\lambda_{1}\lambda_{2}^{-1}\lambda_{3}\lambda_{4}^{-1}\dotsb{}$, where each
$\lambda_{i}$ is a loop based at $\pi$ given by
$\lambda_{i}=\xi(\sigma_{i})\,\kappa_{i}\,\xi^{\prime}(\tau_{i})$.
It thus suffices to associate a loop in $\mathcal{C}^{\mathrm{root}}$ based at
$q_{0}$ with a directed loop $\kappa$ in $\mathcal{D}^{\mathrm{red}}$ based at
$\pi$. Observe that an actual Rauzy–Veech sequence in $\mathcal{D}$
representing $\kappa$ might end at a permutation $\pi^{\prime}$ equivalent to
$\pi$. However, this will not matter as
$\mathcal{C}_{\pi^{\prime}}=\mathcal{C}_{\pi}$ in that case. Let $q$ be a
point in $\mathcal{C}_{\pi}$. As $\mathcal{C}_{\pi}$ is a polytope and hence
contractible, any pair of paths in $\mathcal{C}_{\pi}$ from $q_{0}$ to $q$ are
homotopic relative to the endpoints. We fix one such path and call it
$\eta_{q}$. By $\eta_{q}^{-1}$, we mean the reverse path from $q$ to $q_{0}$.
We now choose any $\kappa$-segment $\gamma_{\kappa}$. By Lemma 4.29, any two
choices of $\gamma_{\kappa}$ are homotopic. Let $q$ and $q^{\prime}$ in
$\mathcal{C}_{\pi}$ be the beginning and end points of $\gamma_{\kappa}$. We
then map $\kappa$ to the based loop
$\eta_{q}\,\gamma_{\kappa}\,\eta_{q^{\prime}}^{-1}$ in
$\mathcal{C}^{\mathrm{root}}$.
It remains to show that this map is a homomorphism. Let $\kappa^{\prime}$ and
$\kappa^{\prime\prime}$ be two directed loops based at $\pi$. Let
$\kappa=\kappa^{\prime}\kappa^{\prime\prime}$ and let $\gamma_{\kappa}$ be a
$\kappa$-segment. Let $q$ and $q^{\prime\prime}$ in $\mathcal{C}_{\pi}$ be the
beginning and the end points of $\gamma_{\kappa}$. We deduce that there exists
a point $q^{\prime}$ in $\mathcal{C}_{\pi}$ on $\gamma_{\kappa}$ such that if
we write $\gamma_{\kappa}=[q,q^{\prime}]\cup[q^{\prime},q^{\prime\prime}]$
then $\gamma_{\kappa^{\prime}}=[q,q^{\prime}]$ is a $\kappa^{\prime}$-segment
and $\gamma_{\kappa^{\prime\prime}}=[q^{\prime},q^{\prime\prime}]$ is a
$\kappa^{\prime\prime}$-segment. Then
$\eta_{q}\,\gamma_{\kappa}\,\eta_{q^{\prime\prime}}^{-1}$ is homotopic to
$(\eta_{q}\,\gamma_{\kappa^{\prime}}\,\eta_{q^{\prime}}^{-1})(\eta_{q^{\prime}}\gamma_{\kappa^{\prime\prime}}\eta_{q^{\prime\prime}}^{-1})$
and so we conclude that the map is a homomorphism.
∎
As a consequence of Theorem 4.23, we prove
###### Theorem 5.2.
Let $\mathcal{C}^{\mathrm{root}}$ be a component of a stratum of rooted
abelian or quadratic differentials. Let $\pi$ be a permutation in
$\mathcal{D}^{\mathrm{red}}$. Let $q_{0}$ be a base-point in
$\mathcal{C}^{\mathrm{root}}$ contained in the polytope $\mathcal{C}_{\pi}$.
Then the natural homomorphism
$\pi_{1}(\mathcal{D}^{\mathrm{red}},\pi)\to\pi_{1}(\mathcal{C}^{\mathrm{root}},q_{0})$
is surjective.
###### Proof.
By Theorem 4.23, a loop $\gamma$ based at $q_{0}$ is homotopic to a finite
concatenation of paths $\gamma_{i}$ where each $\gamma_{i}$ is either a
(forward or backward) Teichmüller geodesic segment or is contained inside a
polytope. Breaking symmetry, we may assume $\gamma$ is the concatenation
$\gamma_{1}\gamma_{2}\dotsb\gamma_{k}$ where the odd indexed $\gamma_{i}$ are
contained in a polytope and the even indexed $\gamma_{i}$ are (forward or
backward) Teichmüller segments. By Lemma 4.27, the Teichmüller segments
$\gamma_{2i}$ give us finite Rauzy–Veech sequences $\zeta_{2i}$ such that
* •
$\zeta_{2}$ starts at $\pi$; and
* •
successive $\zeta_{2i}$ can be concatenated as undirected paths in
$\mathcal{D}$.
The concatenation $\zeta_{2}\zeta_{4}\dotsb{}$ descends to a loop $\kappa$ in
$\mathcal{D}^{\mathrm{red}}$ based at $\pi$. As before, the actual sequence in
$\mathcal{D}$ might end at a permutation $\pi^{\prime}$ equivalent to $\pi$
but that will not matter as $\mathcal{C}_{\pi^{\prime}}=\mathcal{C}_{\pi}$. As
in the proof of Proposition 5.1, the loop $\kappa$ can be written as a
concatenation of (forward and backward) loops based at $\pi$. The surjectivity
follows. ∎
###### Remark 5.3.
As the components of $\mathcal{D}$ evenly cover $\mathcal{D}^{\mathrm{red}}$,
it follows from Theorem 5.2 that there is a finite cover of
$\mathcal{C}^{\mathrm{root}}$ that corresponds to components of $\mathcal{D}$.
We denote this cover by $\mathcal{C}^{\mathrm{lab}}$. This cover is easy to
describe intrinsically in the case of an abelian stratum component but its
description for a quadratic stratum component is an interesting question.
Let $q_{0}$ be a base-point in $\mathcal{C}^{\mathrm{root}}$ and $U$ be a
contractible open set around $q_{0}$. For every $q\in U$, we choose a path
$\eta_{q}$ from $q_{0}$ to $q$ inside $U$. As $U$ is contractible, any choice
of $\eta_{q}$ is homotopic to any other choice relative to their end-points.
We take the convention that $\eta_{q}^{-1}$ is $\eta_{q}$ in reverse
connecting $q$ to $q_{0}$.
Let $\gamma$ be a Teichmüller segment that begins at some $q\in U$ and ends at
some $q^{\prime}\in U$. The concatenation
$\eta_{q}\gamma\eta_{q^{\prime}}^{-1}$ is a loop in
$\mathcal{C}^{\mathrm{root}}$ based at $q_{0}$. We call such loops _almost-
flow loops_ (based at $q_{0}$).
###### Definition 5.4.
The flow group $G(U,q_{0})$ is the subgroup of
$\pi_{1}(\mathcal{C}^{\mathrm{root}},q_{0})$ generated by the almost-flow
loops based at $q_{0}$.
We first prove the following theorem.
###### Theorem 5.5.
Let $\mathcal{C}^{\mathrm{root}}$ be a component of a stratum of rooted
abelian or quadratic differentials. Let $q_{0}$ be a base-point contained in
some polytope $\mathcal{C}_{\pi}$. Then
$G(\mathcal{C}_{\pi},q_{0})=\pi_{1}(\mathcal{C}^{\mathrm{root}},q_{0}).$
###### Proof.
In the proof of Theorem 5.2, we showed that images of directed loops in
$\pi_{1}(\mathcal{D}^{\mathrm{red}},\pi)$ generate
$\pi_{1}(\mathcal{C}^{\mathrm{root}},q_{0})$. So it suffices to show that
every directed loop $\zeta$ in $\mathcal{D}^{\mathrm{red}}$ based at $\pi$ is
realised by an almost-flow loop in $G(\mathcal{C}_{\pi},q_{0})$. We choose any
$q$ in $\mathcal{C}_{\zeta}$ and let $[q,g_{t}q]$ be a $\zeta$-segment that it
gives. By definition, $q^{\prime}=g_{t}q$ is also contained in
$\mathcal{C}_{\pi}$. Then the based loop
$\eta_{q}\,[q,g_{t}q]\,\eta_{q^{\prime}}^{-1}$ realises the loop $\zeta$. This
concludes the proof of the theorem. ∎
As a corollary, we obtain one of our main results.
###### Theorem 5.6.
For any base-point $q_{0}$ in $\mathcal{C}^{\mathrm{root}}$ and any
contractible open set $U$ containing $q_{0}$
$G(U,q_{0})=\pi_{1}(\mathcal{C}^{\mathrm{root}},q_{0}).$
###### Proof.
Let $q$ be a point in $U$. As $U$ is contractible, it follows that
$G(U,q)\cong G(U,q_{0})$. Since the union of polytopes is dense in
$\mathcal{C}^{\mathrm{root}}$, we may then assume that $q_{0}$ is contained in
some polytope $\mathcal{C}_{\pi}$. Suppose $V\subseteq U$ is a smaller
contractible open set that contains $q_{0}$. By definition of the flow groups,
$G(V,q_{0})$ is a subgroup of $G(U,q_{0})$. So we may assume that $U$ is also
contained in $\mathcal{C}_{\pi}$. It now suffices to show that any directed
loop $\zeta$ in $\mathcal{D}^{\mathrm{red}}$ based at $\pi$ can be written as
a word in almost-flow loops in $G(U,q_{0})$.
By Lemma 4.30, there is a Rauzy–Veech sequence $\theta$ starting from $\pi$
such that for any $q\in\mathcal{C}_{\theta}$ the Teichmüller segment in
$\mathcal{C}_{\pi}$ containing $q$ intersects $U$. Note then that the same is
true for any finite extension $\theta\zeta$.
As $\mathcal{D}$ is strongly connected, we may extend $\theta$ to assume that
it also ends at $\pi$. If $\pi^{\prime}$ is equivalent to $\pi$, we also get a
loop $\theta^{\prime}$ based at $\pi^{\prime}$ so that $\theta$ and
$\theta^{\prime}$ descend to the same loop in $\mathcal{D}^{\mathrm{red}}$.
Let $q$ be a differential in $\mathcal{C}_{\theta\theta}$. As the Teichmüller
segment in $\mathcal{C}_{\pi}$ containing $q$ intersects $U$, we may assume
$q$ is in $U$.
By definition, there exists a time $t>0$ such that the Teichmüller segment
$[q,g_{t}q]$ is a $\theta\theta$-segment. We may decompose $[q,g_{t}q]$ as
$[q,g_{s}q]\cup[g_{s}q,g_{t}q]$ such that both segments $[q,g_{s}q]$ and
$[g_{s}q,g_{t}q]$ are $\theta$-segments. In particular, $q^{\prime}=g_{s}q$ is
also contained in $\mathcal{C}_{\theta}$. As the Teichmüller segment in
$\mathcal{C}_{\pi}$ containing $q^{\prime}$ intersects $U$, by tweaking $s$ we
may assume that $q^{\prime}$ is also contained in $U$. Let
$\gamma_{u}=[q,q^{\prime}]$. Then $\eta_{q}\gamma_{u}\eta_{q^{\prime}}^{-1}$
is an almost-flow loop in $G(U,q_{0})$ that realises $\theta$.
Now, let $\zeta$ be an oriented loop in $\mathcal{D}^{\mathrm{red}}$ based at
$\pi$. We consider the oriented loop $\xi=\theta\zeta\theta$. As a sequence in
$\mathcal{D}$, the loop $\zeta$ could terminate at some $\pi^{\prime}$
equivalent to $\pi$. In this case we mean the concatenation
$\theta\zeta\theta^{\prime}$, where $\theta^{\prime}$ is a loop in
$\mathcal{D}$ based at $\pi^{\prime}$ that descends to the same loop in
$\mathcal{D}^{\mathrm{red}}$ as $\theta$. We will assume first that $\zeta$ is
a loop based at $\pi$ in $\mathcal{D}$. The argument below extends to the case
where $\zeta$ ends instead at $\pi^{\prime}$ by concatenating the appropriate
arcs.
Let $\gamma_{\xi}=[q,g_{t}q]$ be a $\xi$-segment. As the Teichmüller segment
in $\mathcal{C}_{\xi}$ containing $q$ intersects $U$, we may choose $q$ to be
in $U$. We then write $\gamma_{\xi}$ as a concatenation
$[q,g_{s}q]\cup[g_{s}q,g_{t}q]$ where $[q,g_{s}q]$ is a $\theta\zeta$-segment
and $[g_{s}q,g_{t}q]$ is a $\theta$-segment. Let $q^{\prime}=g_{s}q$ and
$q^{\prime\prime}=g_{t}q$. As $[q^{\prime},q^{\prime\prime}]$ is a
$\theta$-segment, $q^{\prime}$ is contained in $\mathcal{C}_{\theta}$. The
Teichmüller segment inside $\mathcal{C}_{\pi}$ containing $q^{\prime}$
intersects $U$. So by changing $s$, we may assume that $q^{\prime}$ is also
contained in $U$.
The Teichmüller segment $\gamma^{\prime}=[q,q^{\prime}]$ then begins and ends
in $U$. So the directed loop $\theta\zeta$ is realised by the almost-flow loop
$\eta_{q}\gamma^{\prime}\eta_{q^{\prime}}^{-1}$ in $G(U,q_{0})$.
As we already established that $\theta$ is realised by an element in
$G(U,q_{0})$, we deduce that $\zeta$ must also be realised by an element in
$G(U,q_{0})$. This concludes the proof of the corollary.
∎
## 6\. Dynamics of the Teichmüller flow
### 6.1. Coding formalism
Let $\Pi$ be a finite or countable set. We consider the symbolic space
$\Sigma=\Pi^{\mathbb{Z}}$ endowed with the left shift map S. Suppose
$u\in\Pi^{m}$ is a finite word. The (forward) cylinder $\Sigma(u)$ induced by
$u$ is defined as $\Sigma(u)=\\{a\in\Sigma\text{ such that }a_{k}=u_{k}\text{
for }k=0,\dotsc,m-1\\}$. Given another word $v\in\Pi^{n}$, we write
$uv\in\Pi^{m+n}$ for the concatenation of $u$ and $v$.
###### Definition 6.2.
We say that an S-invariant probability measure $\mu$ has _bounded distortion_
if there exists a constant $K>0$ such that, for any finite words $u\in\Pi^{m}$
and $v\in\Pi^{n}$,
$\frac{1}{K}\mu(\Sigma(u))\mu(\Sigma(v))\leqslant\mu(\Sigma(uv))\leqslant
K\mu(\Sigma(u))\mu(\Sigma(v)).$
The bounded distortion property allows us to treat the symbolic space “almost”
as a Bernoulli shift. For this reason, it is also called an _approximate
product structure_. Since $\mu$ is shift-invariant, the previous definition
would not change if we used backward or centred cylinders instead of forward
cylinders.
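As a minimal sanity check (a sketch of ours, not an example from the paper), a Bernoulli measure on a full shift satisfies Definition 6.2 with $K=1$, since cylinder masses multiply exactly; the symbol set and weights below are illustrative:

```python
# A minimal sketch (not from the paper): for a Bernoulli measure on a full
# shift, cylinder masses multiply exactly, so Definition 6.2 holds with K = 1.
weights = {"a": 0.5, "b": 0.3, "c": 0.2}  # illustrative i.i.d. symbol weights

def cyl(word):
    """Measure of the forward cylinder Sigma(word) under the Bernoulli measure."""
    m = 1.0
    for symbol in word:
        m *= weights[symbol]
    return m

u, v = "ab", "cab"
assert abs(cyl(u + v) - cyl(u) * cyl(v)) < 1e-12  # multiplicative, so K = 1
```

The measures arising from the Teichmüller coding are of course not Bernoulli; bounded distortion only asserts multiplicativity up to the uniform constant $K$.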
We will now describe how the Teichmüller flow can be coded by such a symbolic
setup. Let $\pi$ be an irreducible generalised permutation in
$\mathcal{D}^{\mathrm{red}}$. For a backward-tied differential $q$ in
$\mathcal{B}(\pi)$, let $(q,g_{t}q)$, for some $t>0$, be the longest
Teichmüller segment entirely contained in $\mathcal{C}_{\pi}$. It follows that
$g_{t}q$ is contained in $\mathcal{F}(\pi)$. Let $\zeta$ be a finite
Rauzy–Veech sequence starting at $\pi$. Let $\mathcal{S}_{\zeta}$ be the
differentials $q$ in $\mathcal{B}(\pi)$ for which the Teichmüller segment
above is contained in $\mathcal{C}_{\zeta}$.
Recall that $\Delta$ is the standard simplex in $\mathbb{R}^{\mathcal{A}}$ and
$p:P_{\pi}\to\Delta$ is the projection $(x,y)\mapsto x/\|x\|_{1}$. Let
$B_{\theta}$ denote the matrix of a Rauzy–Veech sequence $\theta$. As
indicated in the proof of Lemma 4.30, it is possible to find a sequence
$\theta$ from $\pi$ to $\pi$ such that $p(B_{\theta}P_{\pi})$ is compactly
contained in $p(P_{\pi})$. Let $\overline{p}:(x,y)\mapsto y$ be the map that
records the height parameters. By extending $\theta$ to a longer loop based at
$\pi$, we may also assume that $\overline{p}(P_{\pi})$ is compactly contained
in $\overline{p}(B_{\theta}P_{\pi})$. We fix this $\theta$ once and for all
and consider $\mathcal{S}_{\theta}$. By finessing $\theta$ further, we may
assume that it is _neat_ , that is, if $\theta=\zeta\eta$ and
$\theta=\eta^{\prime}\zeta$ then $\zeta=\theta$. In the coding that we
consider, the set $\mathcal{S}_{\theta}$ will serve as a transverse section to
the Teichmüller flow.
We recall the bounded distortion theorem for Rauzy–Veech induction. This
theorem states that there exists a constant $K\geqslant 1$, which depends
only on the topology of the surface, and a countable collection of finite
Rauzy–Veech sequences $\zeta$ from $\pi$ to $\pi$ such that
* •
for the map $p(q_{\pi}^{-1}(\mathcal{B}(\pi)))\to
p(q_{\pi}^{-1}(\mathcal{B}(\pi)))$ given by $x\mapsto
B_{\zeta}\,x/\|B_{\zeta}\,x\|_{1}$, its Jacobian $\mathcal{J}$ satisfies
$\frac{1}{K}\mathcal{J}(x_{1})<\mathcal{J}(x_{2})<K\mathcal{J}(x_{1})$
for any pair of points $(x_{1},y_{1}),(x_{2},y_{2})\in
q_{\pi}^{-1}(\mathcal{B}(\pi))$;
* •
no $\zeta$ contains $\theta$, that is, $\zeta$ cannot be written as a
concatenation $\eta\theta\eta^{\prime}$; and
* •
up to excising a set of differentials of measure zero,
$\bigcup_{\zeta}\mathcal{B}(\zeta)=\mathcal{B}(\pi).$
For more details, we refer the reader to the article by Avila–Gouëzel–Yoccoz
[AGY06, Section 4] for abelian differentials, and by Avila–Resende [AR12,
Sections 4 and 5] for quadratic differentials.
We stress that the Rauzy–Veech sequences considered here are sequences in
$\mathcal{D}^{\text{red}}$. To be precise with constraints that involve
concatenations, they should be imposed over all lifts in $\mathcal{D}$. For
instance, fixing a lift of $\pi$ in $\mathcal{D}$, the second constraint
should be read as finding a loop $\zeta$ in $\mathcal{D}$ that returns to
$\pi$ and does not contain a lift of $\theta$. Alternatively, the coding we
describe is a coding of the flow lifted to $\mathcal{C}^{\mathrm{lab}}$ that
is “equivariant” with respect to the Deck group for the covering
$\mathcal{C}^{\mathrm{lab}}\to\mathcal{C}^{\mathrm{root}}$.
With this setup, the first return maps under Teichmüller flow to
$\mathcal{S}_{\theta}$ are given by Rauzy–Veech sequences of the form
$\theta\zeta\theta$. Note then that, by tweaking the constant $K$, the
Jacobian of the map $p(q_{\pi}^{-1}(\mathcal{S}_{\theta}))\to
p(q_{\pi}^{-1}(\mathcal{S}_{\theta}))$ given by $x\mapsto
B_{\theta\zeta\theta}\,x/\|B_{\theta\zeta\theta}\,x\|_{1}$ satisfies
$\frac{1}{K}\mathcal{J}(x_{1})<\mathcal{J}(x_{2})<K\mathcal{J}(x_{1})$
for each $\zeta$ and for any pair of points $(x_{1},y_{1}),(x_{2},y_{2})\in
q_{\pi}^{-1}(\mathcal{S}_{\theta})$. In fact, the Jacobian bound follows from
the stronger property that, up to a constant depending only on the topology of
the surface, all columns of the matrix have the same norm. More precisely,
$\|B_{\theta\zeta\theta}\,x\|_{1}\asymp\|B_{\theta\zeta\theta}\,x^{\prime}\|_{1}$
for any $(x,y),(x^{\prime},y^{\prime})\in q_{\pi}^{-1}(\mathcal{S}_{\theta})$.
Moreover, we have that, up to excising a set of zero measure,
(6.3) $\mathcal{S}_{\theta}=\bigcup_{\zeta}\mathcal{S}_{\theta\zeta\theta}.$
We thus have a countable full measure partition of $\mathcal{S}_{\theta}$ into
sets $\mathcal{S}_{\theta\zeta\theta}$ such that each
$\mathcal{S}_{\theta\zeta\theta}$ is the image of a smooth map
$\phi_{\zeta}\colon\mathcal{S}_{\theta}\to\mathcal{S}_{\theta}$. Each map
$\phi_{\zeta}$ is a diffeomorphism onto its image and its inverse is a _uniformly
expanding Markov map_ in the sense of Avila–Gouëzel–Yoccoz [AGY06, Definition
2.2]. We assemble the inverses of $\phi_{\zeta}$ into a map $\Phi$ on
$\mathcal{S}_{\theta}$.
For such an expanding map $\Phi$, there exists a unique $\Phi$-invariant
absolutely continuous probability measure $\nu$, which is automatically
ergodic and even mixing [Aar97; AGY06, Section 2]. In the case of the
Teichmüller flow, the measure $\nu$ is the restriction of the Masur–Veech
measure on $\mathcal{C}^{\mathrm{root}}$.
We now set $\Pi$ as the countable set of sequences $\zeta$ above. The
partition given by (6.3) induces an equivariant bijection between
$(\Sigma,\text{{S}})$ and
$(\bigcup_{\zeta\in\Pi}\mathcal{S}_{\theta\zeta\theta},\Phi)$. The measure
$\mu$ that we will consider on $\Sigma$ is the unique S-invariant probability
measure rendering this bijection a measure-theoretic conjugation. The bounded
distortion inherited by $\nu$ from the Jacobians becomes equivalent to the
bounded distortion of $\mu$ given by Definition 6.2.
### 6.4. Return times
A function $\xi:\Sigma\to\mathbb{R}$ is _Hölder_ if there exist an
exponent $0<\alpha<1$ and a constant $C>0$ such that for any finite
sequence $u$ and any $a,b\in\Sigma(u)$
$|\xi(a)-\xi(b)|\leqslant C\alpha^{m},$
where $m$ is the length of $u$.
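As an illustration (our own toy construction, not from the paper), a geometrically weighted sum of coordinates is Hölder with $\alpha=1/2$: two sequences agreeing on a prefix of length $m$ differ only in the tail, whose total weight is at most $2\cdot(1/2)^{m}$.

```python
# Illustrative only: xi(a) = sum_{k >= 0} 2^(-k) * w(a_k) is Hoelder with
# alpha = 1/2, since sequences agreeing on a prefix differ only in the tail.
w = {"a": 0.0, "b": 1.0}

def xi(seq, depth=60):
    """Evaluate the weighted sum over the first `depth` coordinates."""
    return sum(2.0 ** (-k) * w[seq[k]] for k in range(depth))

m = 20
prefix = "ab" * (m // 2)          # common prefix of length m
a = prefix + "a" * 40
b = prefix + "b" * 40
C = 2.0                           # tail bound: sum_{k >= m} 2^(-k) = 2 * (1/2)^m
assert abs(xi(a) - xi(b)) <= C * 0.5 ** m
```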
A _roof function_ is a Hölder function
$\xi\colon\Sigma\to\mathbb{R}_{>0}$. A suspension of the symbolic space is
a space that is homeomorphic to $\Sigma\times[0,1]$ with the identification
$(a,1)\sim(\text{{S}}(a),0)$. The roof function equips the suspension with a
flow $\psi$ which moves in the interval direction and satisfies
$\psi_{\xi(a)}(a,0)=(\text{{S}}(a),0)$.
###### Definition 6.5.
A roof function is said to have _exponential tails_ if there exists $h>0$ such
that
$\int_{\Sigma}e^{h\xi}d\mu<\infty.$
The property of having exponential tails implies, in particular, that the
volume of the suspension with respect to the local product measure $d\mu\,dt$
is finite. We will discuss this integrability in more detail when we talk of
cocycles.
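For intuition (an illustration of ours, not a computation from the paper), consider a roof whose return time equals $n$ with geometric probability $p(1-p)^{n-1}$; the exponential-tail integral is then a geometric series, finite exactly when $e^{h}(1-p)<1$. The constants $p$ and $h$ below are arbitrary choices satisfying this condition.

```python
import math

# Illustrative roof with a geometric return-time distribution: xi = n with
# probability p * (1 - p)^(n - 1).  The exponential-tail integral is finite
# exactly when e^h * (1 - p) < 1; p and h below are chosen to satisfy this.
p, h = 0.5, 0.2
assert math.exp(h) * (1 - p) < 1
total = sum(math.exp(h * n) * p * (1 - p) ** (n - 1) for n in range(1, 500))
closed_form = p * math.exp(h) / (1 - math.exp(h) * (1 - p))
assert abs(total - closed_form) < 1e-9  # geometric series, essentially exact
```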
Note that under the measure-theoretic conjugacy, the section
$\mathcal{S}_{\theta}$ gets identified with $\Sigma\times\\{0\\}$. The
function $\xi$ we are interested in is the return time function to
$\mathcal{S}_{\theta}$ under Teichmüller flow. It is easy to check that the
return time on $\mathcal{S}_{\theta\zeta\theta}$ is given by
$\xi(x)=\log\|B_{\theta\zeta\theta}\,x\|_{1}$. The measure $d\nu\,dt$ is the
Masur–Veech measure on $\mathcal{C}^{\mathrm{root}}$. The coding of the
Teichmüller flow can be summarised as follows.
###### Theorem 6.6.
There exists a countable set $\Pi$ whose full shift $(\Sigma,\text{{S}})$,
$\Sigma=\Pi^{\mathbb{Z}}$, carries an S-invariant probability measure with
bounded distortion and a roof function $\xi$ with exponential tails such that
there exists a measure-theoretic conjugacy
$f\colon\Sigma\times[0,1]\to\mathcal{C}^{\mathrm{root}}$ (where
$\mathcal{C}^{\mathrm{root}}$ is equipped with the Masur–Veech measure) that
satisfies $f\circ\psi_{t}=g_{t}\circ f$ for all $t\in\mathbb{R}$.
### 6.7. Cocycles
A linear cocycle with values in $\operatorname{SL}(m,\mathbb{R})$ for the
Teichmüller flow is a map
$C\colon\mathcal{C}\times\mathbb{R}\to\operatorname{SL}(m,\mathbb{R})$
satisfying
* •
$C(q,0)=I$ where $I$ is the identity matrix, and
* •
$C(q,s+t)=C(g_{s}q,t)\,C(q,s)$ for all $q\in\mathcal{C}$ and
$s,t\in\mathbb{R}$.
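The two defining properties can be checked on a toy discrete-time analogue, where the cocycle is the product of per-symbol matrices along a fixed itinerary; the matrices and itinerary below are illustrative, not the Kontsevich–Zorich data.

```python
import numpy as np

# Toy discrete-time cocycle over a shift: per-symbol matrices multiplied along
# a fixed itinerary.  The matrices and the itinerary are illustrative.
A = {"L": np.array([[1, 1], [0, 1]]), "R": np.array([[1, 0], [1, 1]])}
itinerary = list("LRLLRRLR")  # symbolic orbit of a point q

def C(start, n):
    """Cocycle matrix accumulated over n steps starting at position `start`."""
    M = np.eye(2, dtype=int)
    for k in range(n):
        M = A[itinerary[start + k]] @ M  # later steps act on the left
    return M

s, t = 3, 4
assert np.array_equal(C(0, 0), np.eye(2, dtype=int))   # C(q, 0) = I
assert np.array_equal(C(0, s + t), C(s, t) @ C(0, s))  # composition rule
```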
The best-known example is the Kontsevich–Zorich cocycle which records the
dynamics of the flow on the integral first homology group of the surface.
Choosing charts on $\mathcal{C}$, one can choose a basis for a trivialisation
of the surface homology in these charts. As the Teichmüller flow has Poincaré
recurrence, one can consider the change of basis matrices as a flow trajectory
returns to a chart. This is the Kontsevich–Zorich cocycle. As the flow
preserves the intersection form on the homology, the cocycle takes values in
the symplectic group over $\mathbb{Z}$.
Another example is the Rauzy–Veech cocycle defined on
$\mathcal{C}^{\mathrm{lab}}$, which is the finite cover of
$\mathcal{C}^{\mathrm{root}}$ corresponding to the cover
$\mathcal{D}\to\mathcal{D}^{\text{red}}$ defined in Remark 5.3. Here, the
polytopes $\mathcal{C}_{\pi}$ carry preferred coordinates through the
normalised width and height parameters. The itinerary through polytopes of a
typical flow trajectory is recorded by the Rauzy–Veech sequence. The
coordinate transformation of a Rauzy–Veech sequence is linear. This defines
the Rauzy–Veech cocycle. For an abelian stratum component, one can associate
to an irreducible permutation a natural spanning set for the absolute homology
of the surface. With respect to these spanning sets the Kontsevich–Zorich
cocycle lifted to $\mathcal{C}^{\mathrm{lab}}$ is the same as the Rauzy–Veech
cocycle. Thus, the two can be studied simultaneously. This structure is not
available for stratum components of quadratic differentials.
A linear cocycle for the Teichmüller flow is said to be _integrable_ with
respect to a finite flow invariant measure if for all $t$ the functions
$q\mapsto\log\|C(q,t)\|$ and $q\mapsto\log\|C(q,t)^{-1}\|$ are $L^{1}$ with
respect to the measure. As our cocycles are integer valued, the condition
$q\mapsto\log\|C(q,t)\|$ being $L^{1}$ suffices. The flow invariant measure we
are interested in is the Masur–Veech measure on $\mathcal{C}$ and its lifts to
$\mathcal{C}^{\mathrm{root}}$ and $\mathcal{C}^{\mathrm{lab}}$. The lift to
$\mathcal{C}^{\mathrm{root}}$ is exactly the measure $d\nu\,dt$.
The Teichmüller flow is ergodic with respect to the Masur–Veech measure. Thus,
if a cocycle is integrable, then Oseledets theorem applies: for almost every
$q$ and every non-zero vector $v\in\mathbb{R}^{m}$ the limit
$\lim_{t\to\infty}\frac{1}{t}\log\frac{\|C(q,t)v\|_{1}}{\|v\|_{1}}$
exists and depends only on $v$ and not on $q$. Moreover, the limit can achieve
up to $m$ values:
$\lambda_{1}\geqslant\lambda_{2}\geqslant\dots\geqslant\lambda_{m}$. This set
of numbers is known as the _Lyapunov spectrum_.
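A rough numerical sketch of the Oseledets limit, using a random product of the two standard $\operatorname{SL}(2,\mathbb{Z})$ generators rather than an actual Teichmüller cocycle: the normalised log-growth of a vector stabilises at the top exponent.

```python
import numpy as np

# Numerical sketch of an Oseledets limit for a random product of the two
# standard SL(2, Z) generators (an illustration, not a Teichmueller cocycle).
rng = np.random.default_rng(0)
gens = [np.array([[1, 1], [0, 1]]), np.array([[1, 0], [1, 1]])]

v = np.array([1.0, 1.0])
total, n = 0.0, 20000
for _ in range(n):
    v = gens[rng.integers(2)] @ v
    norm = np.linalg.norm(v, 1)
    total += np.log(norm)      # accumulate log-growth, then renormalise
    v /= norm
top_exponent = total / n
assert 0 < top_exponent < 1    # positive, and at most log 2 per step
```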
We say a cocycle on $\mathcal{C}^{\mathrm{root}}$ is _locally constant_ if the
cocycle is constant over the cylinder sets in our coding. By choosing a
trivialisation of the homology over each polytope $\mathcal{C}_{\pi}$, the
Kontsevich–Zorich cocycle lifted to $\mathcal{C}^{\mathrm{root}}$ is locally
constant. The Rauzy–Veech cocycle is locally constant by construction.
We normalise the invariant measure $\mu$ on the section $\mathcal{S}_{\theta}$
to be a probability measure. Let $\zeta$ be a symbol in $\Pi$ and let
$B_{\theta\zeta\theta}$ be the associated Rauzy matrix. By standard methods of
computing volumes of images of projective linear maps, there exists a constant
$C>1$ that depends only on the topology of the surface such that
$\frac{1}{C\|B_{\theta\zeta\theta}\|_{1}^{d-1}}<\mu(\mathcal{S}_{\theta\zeta\theta})<\frac{C}{\|B_{\theta\zeta\theta}\|_{1}^{d-1}}.$
See our previous article for more details [Bel+19, Lemma 5.11]. Thus, for the
discrete Rauzy–Veech cocycle $q\mapsto B(q)$ on $\mathcal{S}_{\theta}$, we get
that, up to a similar uniform multiplicative constant,
$\int_{\mathcal{S}_{\theta}}\log\|B(q)\|_{1}\,d\mu(q)\asymp\sum_{\zeta\in\Pi}\frac{\log\|B_{\theta\zeta\theta}\|_{1}}{\|B_{\theta\zeta\theta}\|_{1}^{d-1}}.$
To estimate the integral above, we organise the sequences $\zeta\in\Pi$ by the
$L^{1}$ norms of the matrices $B_{\theta\zeta\theta}$ considered on a
multiplicative scale on $\mathbb{R}_{+}$. Recurrence estimates in the bounded
distortion theorem for Rauzy–Veech sequences show that there exist constants
$M>1$ and $0<c<1$ that depend only on the topology of the surface such that,
for the set
$\Pi_{n}=\\{\zeta\in\Pi\,:\,\|B_{\theta\zeta\theta}\|_{1}\in[1,M^{n})\\}$,
$\sum_{\zeta\in\Pi_{n}}\mu(\mathcal{S}_{\theta\zeta\theta})>1-c^{n}.$
It follows that
$\sum_{\zeta}\log\|B_{\theta\zeta\theta}\|_{1}/\|B_{\theta\zeta\theta}\|_{1}^{d-1}$
is dominated by $\sum nc^{n}$ and, hence, that the discrete Rauzy–Veech
cocycle is integrable. Note that up to an additive constant that depends only
on the surface, the first return time $\xi(q)$ at any $q$ in
$\mathcal{S}_{\theta\zeta\theta}$ is $\log\|B_{\theta\zeta\theta}\|_{1}$. So
the integrability of the Rauzy–Veech cocycle is equivalent to the finiteness
of the Masur–Veech volume of $\mathcal{C}^{\mathrm{root}}$.
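Numerically, the dominating series behaves as claimed; the constants $M$ and $c$ below are illustrative, since the text only asserts $M>1$ and $0<c<1$.

```python
import math

# Numerical sketch of the integrability bound: symbols at scale n carry
# measure at most c^(n-1) and contribute at most n*log(M) each, so the
# integral is dominated by sum_n n*log(M)*c^(n-1) = log(M)/(1-c)^2.
M, c = 4.0, 0.5  # illustrative; the text only asserts M > 1 and 0 < c < 1
bound = sum(n * math.log(M) * c ** (n - 1) for n in range(1, 200))
assert bound <= math.log(M) / (1 - c) ** 2  # truncated sum below closed form
```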
Using the above estimates for $\zeta\in\Pi_{n}$, it is straightforward to
derive that, if a locally constant cocycle with values in
$\operatorname{SL}(m,\mathbb{Z})$ is integrable with respect to the
Masur–Veech measure, then it is integrable with respect to
$(\mathcal{S}_{\theta},\mu)$. As a result, the integrability over
$(\mathcal{S}_{\theta},\mu)$ of the plus and minus cocycles (that we define in
Section 9) can be deduced from their integrability with respect to the
Masur–Veech measure. In any case, we will give a direct verification of the
integrability over $(\mathcal{S}_{\theta},\mu)$ of the plus and minus cocycles
in Section 10. To do that, we will need the following lemma.
We say that a cocycle $C$ is _dominated by_ a cocycle $C^{\prime}$ if there is
a constant $K>0$ that depends only on the surface such that the $L^{1}$-norms
satisfy $\|C\|_{1}\leqslant K\|C^{\prime}\|_{1}$.
###### Lemma 6.8.
Suppose that $C$ is a locally constant cocycle dominated by the Rauzy–Veech
cocycle. Then $C$ is integrable in either sense.
###### Proof.
The lemma follows directly from integrability of the Rauzy–Veech cocycle. ∎
Recall that a cocycle into $\operatorname{SL}(m,\mathbb{R})$ has a _simple_
Lyapunov spectrum if its Lyapunov spectrum consists of $m$ distinct numbers.
We will now state a weaker version of the Avila–Viana criterion for simplicity
of the Lyapunov spectrum. The actual criterion is more general [AV07a, Theorem 7.1; AV07b], but we state it specifically for our context.
Let $\gamma_{1}$ and $\gamma_{2}$ be almost-flow loops given by directed
Rauzy–Veech sequences $\eta_{1}$ and $\eta_{2}$. Then the concatenation
$\gamma_{1}\gamma_{2}$ is also realised by an almost-flow loop given by the
Rauzy–Veech sequence $\eta_{1}\eta_{2}$. Thus, the almost-flow loops give us a
monoid. By evaluating a locally constant cocycle for each almost-flow loop, we
get a representation of the monoid into $\operatorname{SL}(m,\mathbb{R})$. As
the cocycles we consider here are defined over $\mathbb{Z}$ and preserve a
symplectic structure, this representation has an image in the symplectic
group.
###### Criterion 6.9.
Let $C$ be a locally constant integrable cocycle for the Teichmüller flow. If
the associated monoid is Zariski dense in the symplectic group, then the
Lyapunov spectrum is simple.
As was previously mentioned, the criterion stated by Avila–Viana [AV07a,
Theorem 7.1] does not require Zariski density. Instead, it has the weaker
hypothesis of requiring the presence of _pinching_ and _twisting_ elements in
the group generated by the monoid. It is a classical fact that a Zariski dense
monoid gives a group that contains a pinching element. It follows from the
work of Benoist [Ben97] that this group also contains elements that are
twisting relative to the pinching element. As we directly establish Zariski
density, we will omit the precise definitions of pinching and twisting, which
are technical to state.
Furthermore, the criterion stated by Avila–Viana does not, strictly speaking,
require the cocycle to be symplectic; it requires the cocycle to take values
in the special linear group and satisfy pinching and twisting. Nevertheless,
we state the criterion for symplectic cocycles as all of the cocycles that we
consider are symplectic.
## 7\. Rauzy–Veech groups
Recall that $\mathcal{C}^{\mathrm{lab}}$ is the cover of
$\mathcal{C}^{\mathrm{root}}$ corresponding to the covering
$\mathcal{D}\to\mathcal{D}^{\mathrm{red}}$, as defined in Remark 5.3. In the
definitions that follow, we operate in $\mathcal{C}^{\mathrm{lab}}$.
### 7.1. Rauzy–Veech groups of abelian components
Let $\mathcal{C}$ be an abelian stratum component. There is a natural spanning
set for the absolute homology of the surface that one can associate to any
permutation $\pi$ in its Rauzy diagram $\mathcal{D}$ [AV07a, AMY18, Gut19].
For any loop $\delta$ in $\mathcal{D}$ based at $\pi$, one can define a matrix
in $\operatorname{Sp}(2g,\mathbb{Z})$ by computing the linear action on
absolute homology induced by $\delta$, in terms of the preferred spanning set.
The _Rauzy–Veech group_ of $\pi$, denoted $\operatorname{RV}(\pi)$, is the
subgroup of $\operatorname{Sp}(2g,\mathbb{Z})$ generated by such matrices. The
matrix associated with any loop $\delta$ coincides with the Rauzy–Veech matrix
$B_{\delta}$, but this coincidence is particular to abelian stratum components.
### 7.2. Minus and plus pieces for quadratic stratum components
Let $\mathcal{C}$ be a component of a stratum of quadratic differentials.
There is a branched double cover $\widetilde{S}$ of $S$ such that the lifts
$\widetilde{q}$ for $q\in\mathcal{C}$ are abelian differentials on
$\widetilde{S}$. This is often called _the orientation double cover of the
quadratic differential_. The cover is branched over every zero of $q$ with odd
order and every pole of $q$. We will give the construction of the orientation
double cover of a typical rooted quadratic differential shortly.
The differential $\widetilde{q}$ is symmetric with respect to an involution
and the quotient is $q$. Viewed as an involution of $\widetilde{S}$, the
induced linear action on $H^{1}(\widetilde{S},\widetilde{Z};\mathbb{Z})$ has
eigenvalues $\\{1,-1\\}$. The $(+1)$-eigenspace is usually referred to as the
plus (or invariant) piece and the $(-1)$-eigenspace is usually referred to as
the minus (or anti-invariant) piece. By Poincaré duality, we can also consider
it as a splitting of the homology
$H_{1}(\widetilde{S},\widetilde{Z};\mathbb{Z})$.
The Teichmüller flow on $\mathcal{C}$ defines a cocycle by its action on
$H^{1}(\widetilde{S},\widetilde{Z};\mathbb{Z})$. The cocycle preserves the plus
and minus eigenspaces. The plus Kontsevich–Zorich cocycle is its restriction
to the plus piece. Similarly, we also get the minus Kontsevich–Zorich cocycle.
### 7.3. Plus Rauzy–Veech groups and integrability of the plus cocycle
As the absolute part of the plus piece is invariant under the involution, it
is isomorphic to the absolute homology of $S$. Hence, the _plus Rauzy–Veech
group_ $\operatorname{RV}^{+}(\pi)$ can be defined in a similar way to the
abelian case by associating to each irreducible quadratic permutation a
preferred spanning set for the absolute homology of $S$. Then, for any loop
$\delta$ in the Rauzy diagram $\mathcal{D}$ based at $\pi$, we may associate a
matrix for the homology action induced by $\delta$ using the preferred
spanning set. This matrix does not coincide in general with the Rauzy–Veech
matrix $B_{\delta}$ and so we prefer to give a direct proof of the
integrability of the plus cocycle.
More precisely, for each quadratic permutation in $\mathcal{D}$ there is a
choice of a spanning set $\\{c_{\alpha}\\}_{\alpha\in\mathcal{A}}$ for the
plus piece such that the matrix for the plus cocycle has a simple form in each
Rauzy–Veech move. See the work by the fourth author [Gut17, Section 4.1] for
the description of the spanning set.
The simple form of the matrix has the following description. Suppose $\alpha$
and $\beta$ are top and bottom letters in $\pi$. Breaking symmetry, suppose
that $x_{\alpha}>x_{\beta}$. Let $\delta=\pi\to\tau$ be the Rauzy–Veech
move dictated by the width constraint. If $c_{\alpha}$ and $c_{\beta}$ have
non-zero algebraic intersection then the matrix satisfies
$C_{\delta}=I+M_{\alpha,\beta}$, where $M_{\alpha,\beta}$ is the matrix whose
$(\alpha,\beta)$-entry is one and zero otherwise. On the other hand, if
$c_{\alpha}$ and $c_{\beta}$ have zero algebraic intersection, then
$C_{\delta}=I-M_{\alpha,\beta}-2M_{\alpha,\alpha}$.
The explicit matrices give us a direct proof of the integrability of the plus
cocycle. Inductively, the matrix for the cocycle can be defined for any finite
Rauzy–Veech sequence $\delta$ as a product of matrices for individual
Rauzy–Veech moves.
###### Lemma 7.4.
For any finite Rauzy–Veech sequence $\delta$
$\|C_{\delta}\|_{1}\leqslant\|B_{\delta}\|_{1},$
where $B_{\delta}$ is the Rauzy–Veech matrix for $\delta$.
###### Proof.
For any matrix $C$ with coefficients $c_{rs}$, let $|C|$ be the non-negative
matrix with coefficients $|c_{rs}|$.
Note that for an individual Rauzy move $\delta$ the plus cocycle satisfies
$|C_{\delta}|=B_{\delta}$. Now let
$\delta=\delta_{1}\delta_{2}\dotsc\delta_{k}$ be a finite Rauzy–Veech
sequence. We observe that
$\|C_{\delta}\|_{1}=\|C_{\delta_{1}}\dotsc
C_{\delta_{k}}\|_{1}\leqslant\||C_{\delta_{1}}|\dotsc|C_{\delta_{k}}|\|_{1}=\|B_{\delta_{1}}\dotsc
B_{\delta_{k}}\|_{1}=\|B_{\delta}\|_{1},$
where the inequality holds entrywise by the triangle inequality.
∎
As the above lemma shows, the plus cocycle is dominated by the Rauzy–Veech
cocycle. The integrability of the plus cocycle now follows from Lemma 6.8.
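The entrywise argument in the proof of Lemma 7.4 can be tested on random products of matrices of the two shapes $I+M_{\alpha,\beta}$ and $I-M_{\alpha,\beta}-2M_{\alpha,\alpha}$ described above; the dimension and word length below are arbitrary choices of ours.

```python
import numpy as np

# Entrywise check of ||C_delta||_1 <= ||B_delta||_1 for random products of the
# two matrix shapes from Section 7.3; dimension and word length are arbitrary.
rng = np.random.default_rng(1)
d = 4

def move(i, j, sign):
    """C = I + M_ij if sign > 0, else C = I - M_ij - 2*M_ii (with i != j)."""
    C = np.eye(d, dtype=int)
    if sign > 0:
        C[i, j] += 1
    else:
        C[i, j] -= 1
        C[i, i] -= 2
    return C

norm1 = lambda X: int(np.abs(X).sum())
prod_C = np.eye(d, dtype=int)
prod_B = np.eye(d, dtype=int)
for _ in range(25):
    i, j = rng.integers(d), rng.integers(d)
    if i == j:
        continue
    C = move(i, j, rng.choice([-1, 1]))
    prod_C = prod_C @ C
    prod_B = prod_B @ np.abs(C)  # for a single move, B = |C|
assert norm1(prod_C) <= norm1(prod_B)
```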
(a) Original double cover.
(b) Double cover after a Rauzy move.
Figure 7.5. Example of the spanning set for the minus piece rendering the
linear transformations coming from Rauzy moves equal to the Rauzy–Veech
matrices. The original permutation is $\big{(}\begin{smallmatrix}1&2&1&2&3\\\
&3&4&4\end{smallmatrix}\big{)}$ representing the stratum
$\mathcal{Q}(2,-1,-1)$, which becomes $\big{(}\begin{smallmatrix}1&2&1&2\\\
3&3&4&4\end{smallmatrix}\big{)}$ after one bottom Rauzy move. In this case,
the cycles in the spanning set can be tightened to saddle connections, so they
are drawn in this manner. The general case is similar.
### 7.6. Minus Rauzy–Veech groups and integrability of the minus cocycle
The minus piece is in the kernel of the map induced on homology by the
branched covering $\widetilde{S}\to S$. As a result, the minus cocycle has to
be analysed directly in the orientation double cover of a quadratic
differential. Here again, for each irreducible quadratic permutation there is
a natural choice for a spanning set for the minus piece. Using this preferred
set, the _minus Rauzy–Veech group_ $\operatorname{RV}^{-}(\pi)$ can now be
defined in a similar way to the other types of Rauzy–Veech groups. For rooted
quadratic differentials that admit a zippered rectangles construction, we will
explicitly construct their orientation double cover and then precisely
describe the resulting matrices. These matrices preserve a specific
alternating form defined by Avila–Resende [AR12, Equation 9].
Consider the arcs between singularities that we used to define singularity
parameters. These arcs are a spanning set in the relative homology. Using
these arcs, we can build a spanning set for the minus piece as follows.
To construct the orientation double cover of the rooted differential, we take
two copies of the zippered rectangles. Let $1\leqslant i\leqslant\ell+m$. As
notation, a rectangle $R_{i}$ will be denoted as $R_{i}^{(1)}$ in the first
copy and $R_{i}^{(2)}$ in the second copy. The gluings are now constructed as
follows.
* •
If $\pi(i)$ is a translation letter, then $R_{i}^{(1)}$ is glued to
$R_{\sigma(i)}^{(1)}$ and $R_{i}^{(2)}$ is glued to $R_{\sigma(i)}^{(2)}$ as
before.
* •
If $\pi(i)$ is a flip letter then $R_{i}^{(1)}$ is glued to
$R_{\sigma(i)}^{(2)}$, and $R_{i}^{(2)}$ is glued to $R_{\sigma(i)}^{(1)}$, by
translations.
The resulting abelian differential is the orientation double cover of the
original quadratic differential. The involution rotates each rectangle by 180
degrees and maps it to the corresponding rectangle in the other copy.
Let $\alpha\in\mathcal{A}$ be a letter. Let $a_{\alpha}$ be the arc in the
original quadratic differential oriented so that its period is
$x_{\alpha}+y_{\alpha}$ where $x_{\alpha},y_{\alpha}$ are the singularity
parameters. Let $a_{\alpha}^{(1)}$ and $a_{\alpha}^{(2)}$ be the lifts of
$a_{\alpha}$ to the double cover. The spanning set for the minus piece in the
relative homology of the orientation double cover is now defined as follows:
* •
Suppose $\alpha$ is a translation letter. Then let
$A_{\alpha}=a_{\alpha}^{(1)}+a_{\alpha}^{(2)}$.
* •
Suppose $\alpha$ is a flip letter. Then let
$A_{\alpha}=a_{\alpha}^{(1)}-a_{\alpha}^{(2)}$.
See Figure 7.5 for an illustration of these cycles.
It is straightforward to check that in a Rauzy–Veech move the linear change of
these spanning sets is exactly encoded by the Rauzy–Veech matrix. Thus the
Kontsevich–Zorich cocycle on the minus piece of the relative cohomology (by
duality) coincides with the Rauzy–Veech cocycle. In particular, the
$L^{1}$-norm of the restriction to the minus piece in absolute cohomology is
dominated by the $L^{1}$-norm of the Rauzy–Veech matrix. Thus, the minus
cocycle is integrable.
### 7.7. Modular Rauzy–Veech groups
By considering mapping classes instead of the homological actions induced by
loops in the Rauzy diagram, we can define the _modular Rauzy–Veech groups_ ,
the _plus modular Rauzy–Veech groups_ and the _minus modular Rauzy–Veech
groups_. These groups are then subgroups of mapping class groups and their
images by the symplectic representations coincide with the corresponding
Rauzy–Veech groups. They are denoted $\operatorname{MRV}(\pi)$,
$\operatorname{MRV}^{+}(\pi)$ and $\operatorname{MRV}^{-}(\pi)$, respectively.
### 7.8. Rauzy–Veech groups and monodromy groups
Elementary theory of covering spaces, together with our main theorems
(Theorem 4.23 and Theorem 5.5), implies that Rauzy–Veech groups are finite-index
subgroups of appropriate monodromy groups. More precisely:
###### Corollary 7.9.
Let $\mathcal{C}$ be a component of a stratum of abelian or quadratic
differentials and let $\pi$ be an irreducible permutation that represents
$\mathcal{C}$. Let $\widetilde{\mathcal{C}}$ be any finite manifold cover of
$\mathcal{C}$ (which, in particular, can be taken to be either
$\mathcal{C}^{\mathrm{root}}$ or $\mathcal{C}^{\mathrm{lab}}$). Then, the
following groups are finite-index subgroups inside the following larger
groups:
1. (1)
$\pi_{1}(\widetilde{\mathcal{C}})$ inside
$\pi_{1}^{\mathrm{orb}}(\mathcal{C})$;
2. (2)
the (modular) monodromy group of $\widetilde{\mathcal{C}}$ inside the
(modular) monodromy group of $\mathcal{C}$;
3. (3)
if $\pi$ is abelian, the (modular) Rauzy–Veech group of $\pi$ inside the
(modular) monodromy group of $\mathcal{C}$;
4. (4)
if $\pi$ is quadratic, the (modular) plus (respectively, minus) Rauzy–Veech
group of $\pi$ inside the (modular) monodromy group of $\mathcal{C}$
corresponding to the plus (respectively, minus) piece of the homology.
###### Proof.
Since $\widetilde{\mathcal{C}}$ is a finite cover of $\mathcal{C}$, parts (1)
and (2) follow from elementary theory of covering spaces.
Suppose that $\pi$ is abelian. Then the modular Rauzy–Veech group of $\pi$ is
a subgroup of the modular monodromy group of $\mathcal{C}^{\mathrm{lab}}$. The
push-forward of the modular Rauzy–Veech group to $\mathcal{C}^{\mathrm{root}}$
is exactly the image of the flow group $G(\mathcal{C}_{\pi},q_{0})$ inside the
mapping class group. By Theorem 5.5, this image equals the modular monodromy
group of $\mathcal{C}^{\mathrm{root}}$. By part (2), it is a finite-index
subgroup of the modular monodromy group of $\mathcal{C}$. The rest of part (3)
is obtained by applying the symplectic representation. Part (4) is obtained
analogously. ∎
Part (3) of the previous corollary provides a partial answer (that is, up to
finite index) to a question of Yoccoz [Yoc10, Remark in Section 9.3]. Part (4)
extends these partial answers to analogous questions for quadratic stratum
components.
###### Remark 7.10.
For abelian components, the overall structure of how various groups that we
have considered fit together can be organised in the following commutative
diagram:
[Commutative diagram relating $\pi_{1}(\mathcal{D})$, $\pi_{1}(\mathcal{C}^{\mathrm{lab}})$, $\operatorname{MRV}(\pi)$, $\operatorname{RV}(\pi)$, $\pi_{1}(\mathcal{D}^{\mathrm{red}})$, $G(\mathcal{C}_{\pi})=\pi_{1}(\mathcal{C}^{\mathrm{root}})$, $\operatorname{MMon}(\mathcal{C}^{\mathrm{root}})$, $\operatorname{Mon}(\mathcal{C}^{\mathrm{root}})$, $\pi_{1}^{\mathrm{orb}}(\mathcal{C})$, $\operatorname{MMon}(\mathcal{C})$, $\operatorname{Mon}(\mathcal{C})$, $\mathrm{Mod}(S)$ and $\operatorname{Sp}(2g,\mathbb{Z})$, with several arrows labelled "f.i.",]
where “f.i.” stands for “finite index”, and recall that $\operatorname{MMon}$
is the modular monodromy group and that $\operatorname{Mon}$ is the monodromy
group. This diagram actually allows us to define the “most general” version of
a Rauzy–Veech group, which is the image of the group homomorphism
$\pi_{1}(\mathcal{D})\to\pi_{1}(\mathcal{C}^{\mathrm{lab}})$. Theorem 5.2
shows that this group is actually equal to
$\pi_{1}(\mathcal{C}^{\mathrm{lab}})$. Thus, any other version of Rauzy–Veech
group can be obtained as the image of $\pi_{1}(\mathcal{C}^{\mathrm{lab}})$ by
an appropriate group homomorphism.
Similar commutative diagrams can also be stated for quadratic components by
considering the images into the plus and minus pieces separately.
Combining Corollary 7.9 with the work of Calderon and Calderon–Salter [Cal20,
CS19, CS19a, CS20], we obtain a classification of the modular Rauzy–Veech
groups and the Rauzy–Veech groups in relative homology, up to finite index,
for non-hyperelliptic components in genus at least five. More precisely:
###### Corollary 7.11.
Let $S$ be a topological surface of genus at least five. Let $\mathcal{C}$ be
a non-hyperelliptic component of a stratum of abelian differentials on $S$
whose set of marked points is $Z$. Let $\phi$ be the absolute framing induced
by the horizontal vector field of a surface in $\mathcal{C}$. We have that:
1. (1)
The modular Rauzy–Veech group in $\mathrm{Mod}(S,Z)$ is a finite-index
subgroup of $\mathrm{Mod}(S,Z)[\phi]$, that is, of the stabiliser of $\phi$
inside $\mathrm{Mod}(S,Z)$.
2. (2)
The Rauzy–Veech group in $\operatorname{PAut}(H_{1}(S,Z;\mathbb{Z}))$ is a
finite-index subgroup of the kernel of the crossed homomorphism
$\Theta_{\phi}\colon\operatorname{PAut}(H_{1}(S,Z;\mathbb{Z}))\to
H^{1}(S,\mathbb{Z}/2\mathbb{Z})$ defined by Calderon–Salter [CS19a, Section
4].
This classification was already known for hyperelliptic components, and in
this case the index is known to be one [AMY18].
## 8\. Classification of components and a reduction strategy
### 8.1. Classification of the components of strata of abelian and quadratic
differentials
For the reader’s convenience, we restate the complete classification of the
components of abelian and quadratic strata.
###### Theorem 8.2 ([KZ03]).
The following is the classification of the components of the strata of abelian
differentials (up to regular marked points).
* •
In genus one, the only stratum is $\mathcal{H}(0)$. It is non-empty, connected
and hyperelliptic.
* •
In genus two, the only strata are $\mathcal{H}(2)$ and $\mathcal{H}(1,1)$.
They are non-empty, connected and hyperelliptic.
* •
In genus three, the strata $\mathcal{H}(4)$ and $\mathcal{H}(2,2)$ have two
components. One of them is hyperelliptic and the other one corresponds to odd
spin structures. Every other stratum is non-empty and connected.
* •
Finally, for genus $g$ at least four:
* –
The stratum $\mathcal{H}(2g-2)$ has three components. One of them is
hyperelliptic, and the other two are distinguished by even and odd spin
structures.
* –
The stratum $\mathcal{H}(g-1,g-1)$ can have two or three components depending
on the parity of $g$. If $g$ is even, it has two components. One of them is
hyperelliptic, and the other one is not. If $g$ is odd, it has three
components. One of them is hyperelliptic, and the other two are distinguished
by even and odd spin structures.
* –
All other strata of the form $\mathcal{H}(2\kappa_{1},\dotsc,2\kappa_{n})$
have two components, distinguished by even and odd spin structures.
* –
The remaining strata are non-empty and connected.
###### Theorem 8.3 ([Lan08, CM14]).
The following is the classification of the components of the strata of
quadratic differentials (up to regular marked points).
* •
In genus zero, every stratum is non-empty and connected.
* •
In genus one, the strata $\mathcal{Q}(0)$ and $\mathcal{Q}(1,-1)$ are empty.
All other strata are non-empty and connected.
* •
In genus two, the strata $\mathcal{Q}(4)$ and $\mathcal{Q}(3,1)$ are empty.
Moreover, the stratum $\mathcal{Q}(2,2)$ is non-empty, connected and
hyperelliptic.
* •
In genus three, the strata $\mathcal{Q}(9,-1)$, $\mathcal{Q}(6,3,-1)$ and
$\mathcal{Q}(3,3,3,-1)$ have two components, known as _regular_ and
_irregular_ components.
* •
In genus four, the strata $\mathcal{Q}(6,6)$, $\mathcal{Q}(6,3,3)$ and
$\mathcal{Q}(3,3,3,3)$ have three components. One of them is hyperelliptic,
and the other two are known as _regular_ and _irregular_ components. Moreover,
the strata $\mathcal{Q}(12)$ and $\mathcal{Q}(9,3)$ have two components, known
as _regular_ and _irregular_ components.
* •
Finally, for genus at least two:
* –
The strata of the form $\mathcal{Q}(4j+2,4k+2)$, $\mathcal{Q}(4j+2,2k-1,2k-1)$
and $\mathcal{Q}(2j-1,2j-1,2k-1,2k-1)$ for $j,k\geqslant 0$ not contained in
the previous list have two components. One of them is hyperelliptic and the
other one is not.
* –
The remaining strata are non-empty and connected.
### 8.4. Adjacency of strata and a reduction strategy
For abelian differentials [AV07a], the adjacency between components of
different strata was exploited to show that Rauzy–Veech groups of simpler
components are contained inside the Rauzy–Veech groups of more complex ones.
We will exploit the same strategy to obtain the containment of Rauzy–Veech
groups of quadratic stratum components.
We start with the notions of _simple extensions_.
###### Definition 8.5.
Let $\sigma$ be an irreducible permutation. We say that a permutation $\tau$
is a _type preserving simple extension_ of $\sigma$ if $\tau$ is quadratic
(respectively, abelian) when $\sigma$ is quadratic (respectively, abelian) and
$\tau$ can be obtained from $\sigma$ by inserting a single letter $\alpha$ in
such a way that:
* •
at most one occurrence of $\alpha$ is at the beginning of a row in $\tau$; and
* •
no occurrence of $\alpha$ is at the end of a row in $\tau$.
Similarly, we have
###### Definition 8.6.
Let $\sigma$ be an irreducible abelian permutation. We say that $\tau$ is a
_type changing simple extension_ of $\sigma$ if $\tau$ is quadratic and $\tau$
can be obtained from $\sigma$ by inserting a single top flip letter $\alpha$
and a single bottom flip letter $\beta$ such that
* •
at most one occurrence of $\alpha$ (respectively, $\beta$) is at the beginning
of a row in $\tau$; and
* •
no occurrence of $\alpha$ (respectively, $\beta$) is at the end of a row in
$\tau$.
As irreducible quadratic permutations already possess flip letters, there are
no type changing extensions from a quadratic permutation to an abelian one.
The notion of simple extensions was originally introduced by Avila–Viana [AV07a]
in their proof of the Kontsevich–Zorich conjecture for abelian differentials.
By expanding it to type changing extensions, the notion was extended to
quadratic differentials by the fourth author [Gut17].
Note that if $\sigma$ is an irreducible permutation and $\tau$ is a (type
preserving or changing) simple extension of $\sigma$, then $\tau$ is also
irreducible [Gut17, Lemma 3.2]. Also, note that the genera of the embodying
strata of $\sigma$ and $\tau$ are the same.
Let $\tau$ be a (type preserving or changing) simple extension of $\sigma$ and
suppose that $\zeta$ is a directed loop based at $\sigma$ in the Rauzy diagram
$\mathcal{D}$ that contains $\sigma$. It is then possible to shadow $\zeta$ by
a Rauzy–Veech sequence starting from $\tau$ by requiring that the added letter
(or letters) always loses whenever it participates in a Rauzy move [Gut17, Section
3]. It also follows from this description that the shadowing Rauzy–Veech
sequence also returns to $\tau$. This implies that $\operatorname{RV}(\sigma)$
is a subgroup of $\operatorname{RV}(\tau)$.
We now explain the geometric content underlying simple extensions. A type
preserving simple extension allows us to split a singularity into a pair of
singularities [Gut17, Lemma 5.1]. A type changing simple extension allows us
to split a singularity into three singularities, at least one of which has odd
order [Gut17, Corollary 5.2].
For quadratic stratum components, which are our focus, the numerical invariant
$\kappa$ gets re-organised as follows:
* •
If $\kappa_{1}\geqslant 1$, $\kappa_{2},\dotsc,\kappa_{n}\geqslant-1$ are
integers and $\kappa_{1,1},\kappa_{1,2}\geqslant-1$ are integers satisfying
$\kappa_{1,1}+\kappa_{1,2}=\kappa_{1}$, then there exists a permutation
$\sigma$ whose embodying stratum is
$\mathcal{Q}(\kappa_{1},\dotsc,\kappa_{n})$ and a permutation $\tau$ whose
embodying stratum is
$\mathcal{Q}(\kappa_{1,1},\kappa_{1,2},\kappa_{2},\dotsc,\kappa_{n})$ such
that $\tau$ is a simple extension of $\sigma$.
* •
If $\kappa_{1}\geqslant 0$, $\kappa_{2},\dotsc,\kappa_{n}\geqslant 1$ are
integers and $\kappa_{1,1},\kappa_{1,2},\kappa_{1,3}\geqslant-1$ are integers
which are not all even and satisfy
$\kappa_{1,1}+\kappa_{1,2}+\kappa_{1,3}=2\kappa_{1}$, then there exists a
permutation $\sigma$ whose embodying stratum is
$\mathcal{H}(\kappa_{1},\dotsc,\kappa_{n})$ and a permutation $\tau$ whose
embodying stratum is
$\mathcal{Q}(\kappa_{1,1},\kappa_{1,2},\kappa_{1,3},2\kappa_{2},\dotsc,2\kappa_{n})$
such that $\tau$ is a simple extension of $\sigma$.
These facts suggest the following strategy to show the Zariski density of
every Rauzy–Veech group corresponding to the plus piece of quadratic
differentials:
1. (1)
Prove Zariski density for minimal quadratic strata, that is, for strata of the
form $\mathcal{Q}(4g-4)$ for $g\geqslant 3$ with the exception of
$\mathcal{Q}(12)^{\mathrm{reg}}$ and $\mathcal{Q}(12)^{\mathrm{irr}}$ (with
$g=4$) whose density has to be proved separately for technical reasons. Then
use simple extensions to extend the density to every connected stratum with
$g\geqslant 3$. For strata in $g\geqslant 3$ that have two components, one
non-hyperelliptic and the other hyperelliptic, the non-hyperelliptic component
can also be reached by such simple extensions from minimal strata. Note that a
hyperelliptic component cannot arise by a simple extension of a non-
hyperelliptic one. So we pass to the next case in our strategy.
2. (2)
Prove Zariski density for every hyperelliptic component that has the form
$\mathcal{Q}(4j+2,4k+2)$ for $j,k\geqslant 1$. The rest of the hyperelliptic
components arise by a string of simple extensions of these and hence we can
extend density, except for those containing poles. Nevertheless, these last
components arise as a string of simple extensions from the hyperelliptic
component of minimal abelian strata.
The remaining cases are all in genus four and lower and we will outline the
strategy for those.
3. (3)
The regular and irregular components in genus four arise by a simple extension
from $\mathcal{Q}(12)$ or from $\mathcal{H}(6)$; the extensions from
$\mathcal{H}(6)$ are already treated in previous work by the fourth author
[Gut17, Table 1]. The density then extends to these.
4. (4)
Prove Zariski density explicitly for $\mathcal{Q}(9,-1)^{\mathrm{irr}}$ in
genus three. The remaining regular and irregular components in genus three can
be handled by exhibiting a simple extension from $\mathcal{Q}(8)$ or from
$\mathcal{H}(4)$; the extensions from $\mathcal{H}(4)$ are again already
treated in previous work by the fourth author [Gut17, Table 1].
5. (5)
Prove Zariski density explicitly for $\mathcal{Q}(5,-1)$ in genus two. The
remaining non-hyperelliptic components all have at least three singularities
and not all of these have even orders. So we may extend density from
$\mathcal{H}(2)$ by using simple extensions.
6. (6)
All quadratic strata in genus one have at least three singularities and not
all of the singularities can have even orders. So we may extend density from
$\mathcal{H}(0)$ by using simple extensions.
For abelian differentials, a similar strategy gives containment of Rauzy–Veech
groups. As the Rauzy–Veech groups can be explicitly figured out in the base
cases, they can then also be classified in all abelian cases using the
containment. They turn out to be either the full symplectic group, or certain
special finite index subgroups of the symplectic group. See the work by
Avila–Matheus–Yoccoz [AMY18] for hyperelliptic abelian components and by the
fourth author [Gut19] for general abelian components. Plus and minus
Rauzy–Veech groups for quadratic stratum components that can arise by a string
of simple extensions from abelian base cases are also more tractable even
though here a complete classification is not yet achieved [Gut17].
## 9\. Zariski density
In this section we prove one of our main results.
###### Theorem 9.1.
The Rauzy–Veech groups for all components of all abelian strata are Zariski
dense in their ambient symplectic groups. The same holds for the plus and
minus Rauzy–Veech groups for all components of all quadratic strata.
Since they are either the full symplectic group or a finite-index subgroup of
it, the Rauzy–Veech groups for abelian strata and for quadratic components
that arise by simple extensions from abelian strata are Zariski dense.
The Rauzy–Veech groups of the quadratic base cases are harder to track
directly and here we bypass them. Instead, we leverage Filip’s results [Fil17]
to prove Zariski density of their monodromy groups. By Corollary 7.9,
Rauzy–Veech groups are finite index in the monodromy groups. We deduce that
they are Zariski dense. For clarity, we again refer the reader to the
commutative diagram in Section 7. We then extend the density to all quadratic
stratum components by simple extensions.
The proof presented here is self-contained and works for any (abelian or
quadratic) connected component. A key ingredient is the work [Fil17] of Filip
that gives a list of possible Zariski closures for algebraic hulls of linear
invariant suborbifolds.
### 9.2. Filip’s results
We briefly describe Filip’s results [Fil17, Theorem 1.2 and Corollary 1.7] for
the possible Zariski closures of the monodromy and algebraic hulls of a linear
invariant suborbifold. Let $\mathcal{N}$ be a linear invariant suborbifold.
* •
If $p(T\mathcal{N})$ is the subbundle of the Kontsevich–Zorich cocycle that
contains the tangent space to $\mathcal{N}$, then the monodromy has no zero
exponents on $p(T\mathcal{N})$ and the closure of the monodromy for the action
on $p(T\mathcal{N})$ is the full symplectic group [Fil17, Corollary 1.7]. This
readily implies the Zariski density of the monodromy group of any abelian
component, and, as detailed in Section 9.4, also implies the Zariski density
of the monodromy group of the minus piece of any quadratic component.
* •
For strongly irreducible subbundles that do not contain the tautological
plane, Filip shows that the Lie algebra representation of the corresponding
piece of the algebraic hull must be, up to compact factors, one from the
following list [Fil17, Theorem 1.2]:
1. (1)
$\mathfrak{sp}(2g,\mathbb{R})$ in the standard representation,
2. (2)
$\mathfrak{su}(p,q)$ in the standard representation or $\mathfrak{su}(p,1)$ in
any exterior power representation,
3. (3)
$\mathfrak{so}(2n-1,2)$ in the spin representation,
4. (4)
$\mathfrak{so}^{\ast}(2n)$ in the standard representation, or
$\mathfrak{so}(2n-2,2)$ in either of the spin representations.
On the other hand, Eskin–Filip–Wright show that the algebraic hull coincides
with the Zariski closure of the monodromy group for subbundles that do not
contain the tautological plane [EFW18, Theorem 1.1]. Therefore, the previous
theorem also classifies the Lie algebra representations of the Zariski closure
of such monodromy groups.
### 9.3. Zariski density for abelian components
The Zariski density of the monodromy group for all abelian components follows
directly from Filip’s results, as the subbundle $p(T\mathcal{N})$ is the
entire Hodge bundle.
### 9.4. Zariski density for the minus piece
Assume that $\mathcal{C}$ is a component of a stratum of the moduli space of
quadratic differentials. Let $\widetilde{\mathcal{C}}$ be the linear invariant
suborbifold consisting of the abelian differentials obtained by the
orientation double cover of the quadratic differentials in $\mathcal{C}$. Let
$\widetilde{S}$ be the underlying topological surface of the elements of
$\widetilde{\mathcal{C}}$.
Let $p\colon H^{1}(\widetilde{S},\widetilde{Z};\mathbb{Z})\to
H^{1}(\widetilde{S};\mathbb{Z})$ be the restriction map to the absolute
homology. By Filip’s classification, the monodromy group acting on
$p(T\widetilde{\mathcal{C}})$ is Zariski dense [Fil17, Corollary 1.7] in the
full symplectic group. On the other hand, we have that
$p(T\widetilde{\mathcal{C}})$ is exactly
$H_{-}^{1}(\widetilde{S};\mathbb{Z})$, that is, the cohomology classes which
are anti-invariant with respect to the action of $\iota$. This can be seen by
observing that if $(X,\omega)$ is an element of $\widetilde{\mathcal{C}}$,
then $\iota^{*}\omega=-\omega$.
By Lefschetz-duality, this is equivalent to saying that the monodromy group is
Zariski dense when acting on
$H_{1}^{-}(\widetilde{S},\widetilde{Z};\mathbb{Z})$, so we obtain Zariski
density for the minus piece of the homology.
### 9.5. Zariski density for the plus piece
It remains to show the Zariski density for the plus piece. We will first prove
the Zariski density of the monodromy groups for minimal strata, hyperelliptic
components with two singularities and some sporadic components. As Theorem 5.2
implies that the Rauzy–Veech groups are finite index in monodromy groups, it
will follow that the Rauzy–Veech groups for these base components are also
Zariski dense. We will then extend the density to the Rauzy–Veech groups of
all components by simple extensions. In turn, this will imply that the
monodromy groups of all components are Zariski dense.
For the base components, we will work directly on $H_{1}(S;\mathbb{Z})$ as it
is isomorphic to $H_{1}^{+}(\widetilde{S};\mathbb{Z})$ in such a way that the
corresponding monodromy groups are conjugate.
First observe that the monodromies in $H_{1}^{+}(S;\mathbb{Z})$ and
$H_{+}^{1}(S;\mathbb{Z})$ are isomorphic by Poincaré duality. Let $M$ denote
this monodromy group. Note that $H_{+}^{1}(S;\mathbb{Z})$ does not contain the
tautological plane. By Eskin–Filip–Wright [EFW18, Theorem 1.1], the algebraic
hull coincides with the Zariski closure of the monodromy group $M$. This
ensures that we can directly apply Filip’s classification. Moreover, Treviño
[Tre13, Theorem 1] proved that the plus Lyapunov spectrum contains no zero
exponents.
Recall that the action of $M$ on $H_{1}(S;\mathbb{R})$ is said to be _strongly
irreducible_ if no finite-index subgroup of $M$ preserves a non-trivial vector
subspace of $H_{1}(S;\mathbb{R})$. We will later show that this action is
indeed strongly irreducible.
Assume for now that the action of $M$ is strongly irreducible and let
$\mathfrak{m}$ be the Lie algebra of the Zariski closure of $M$. Applying
Filip’s classification to $\mathfrak{m}$ and using the absence of zero
exponents, we deduce that the possibilities for $\mathfrak{m}$ are, up to
compact factors:
1. (1)
$\mathfrak{sp}(2g,\mathbb{R})$ in the standard representation (degree $2g$,
dimension $g(2g+1)$);
2. (2)
$\mathfrak{su}(p,p)$ in the standard representation (degree $4p$, dimension
$4p^{2}-1$);
3. (3)
$\mathfrak{so}(2n-1,2)$ in the spin representation (degree $2^{n}$, dimension
$n(2n+1)$);
4. (4)
$\mathfrak{so}(2n-2,2)$ in one of the spin representations (degree $2^{n-1}$,
dimension $n(2n-1)$); or
5. (5)
$\mathfrak{so}^{*}(2n)$ in the standard representation for even $n$ (degree
$4n$, dimension $n(2n-1)$).
This list can be further refined by observing that the degree and dimension of
the representation must match because of the strong irreducibility. In each of
the above cases, we derive:
1. (1)
$\dim_{\mathbb{R}}\mathfrak{sp}(2g,\mathbb{R})=g(2g+1)$;
2. (2)
$p=g/2$, so $\dim_{\mathbb{R}}\mathfrak{su}(g/2,g/2)=g^{2}-1$;
3. (3)
$n=\log_{2}(2g)$, so
$\dim_{\mathbb{R}}\mathfrak{so}(2n-1,2)=\log_{2}(2g)\log_{2}(8g^{2})$;
4. (4)
$n=\log_{2}(4g)$, so
$\dim_{\mathbb{R}}\mathfrak{so}(2n-2,2)=\log_{2}(4g)\log_{2}(8g^{2})$; and
5. (5)
$n=g/2$, so $\dim_{\mathbb{R}}\mathfrak{so}^{*}(g)=g(g-1)/2$.
Note that possibilities (2)–(4) require the genus to be even. Thus, if $g$ is
odd and if the action of $M$ is strongly irreducible then $\mathfrak{m}$ has
to be $\mathfrak{sp}(2g,\mathbb{R})$. Hence, for odd genus it suffices to
establish the strong irreducibility of the $M$-action. For even genus, along
with showing the strong irreducibility, we will eliminate all but the
symplectic representation by setting up a dimension count. Here, we exploit
the additional flexibility that Theorem 5.6 provides.
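As a quick sanity check (not part of the argument), the degree constraints and dimension formulas above can be tabulated in a short script. This is our own sketch: the function name `candidate_dims` is hypothetical, and for item (5) we only record the constraint that $g$ be even.

```python
import math

def candidate_dims(g):
    """Dimensions of the candidate Lie algebras from the refined list,
    acting irreducibly in degree 2g; None if the degree constraint fails."""
    cands = {"sp(2g,R)": g * (2 * g + 1)}                 # item (1)
    # item (2): su(p,p) has degree 4p, so 4p = 2g forces g even and p = g/2
    cands["su(g/2,g/2)"] = g * g - 1 if g % 2 == 0 else None
    # item (3): spin rep of so(2n-1,2) has degree 2^n, so 2^n = 2g
    n = math.log2(2 * g)
    cands["so(2n-1,2)"] = int(n * (2 * n + 1)) if n.is_integer() else None
    # item (4): spin rep of so(2n-2,2) has degree 2^(n-1), so 2^(n-1) = 2g
    m = math.log2(4 * g)
    cands["so(2n-2,2)"] = int(m * (2 * m - 1)) if m.is_integer() else None
    # item (5): so*(2n) with n = g/2 (we only record the g-even constraint)
    cands["so*(g)"] = g * (g - 1) // 2 if g % 2 == 0 else None
    return cands
```

For odd $g$ (say $g=5$) every non-symplectic entry is `None`, matching the observation that strong irreducibility alone forces $\mathfrak{m}=\mathfrak{sp}(2g,\mathbb{R})$ in odd genus.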
As we use Dehn twists to build a dimension count, we record the following
formula: suppose $c,c^{\prime}$ are oriented multi-curves. As an element of
the homology
(9.6) $T(c)(c^{\prime})=c^{\prime}+\omega(c^{\prime},c)c,$
where $T$ is the left Dehn twist and $\omega(\ast,\ast)$ is the algebraic
intersection number.
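Equation 9.6 can be illustrated concretely. The sketch below is our own (the helper names are hypothetical): it builds the matrix of the transvection $T(c)$ with respect to a standard symplectic form on $\mathbb{R}^{2g}$ and lets one check that it preserves the form.

```python
import numpy as np

def symplectic_form(g):
    # standard form on R^{2g}: omega(a_i, b_i) = 1 in the basis (a_1..a_g, b_1..b_g)
    return np.block([[np.zeros((g, g)), np.eye(g)],
                     [-np.eye(g), np.zeros((g, g))]])

def dehn_twist_matrix(c, g):
    # Equation 9.6: T(c)(x) = x + omega(x, c) c, i.e. T = Id + c (Jc)^T
    J = symplectic_form(g)
    c = np.asarray(c, dtype=float)
    return np.eye(2 * g) + np.outer(c, J @ c)
```

Iterating the twist gives $T(c)^{k}(x)-x=k\,\omega(x,c)c$, the computation behind Lemma 9.7 below.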
The next lemma shows how Dehn twists in $M$ can be used to prove the largeness
of a subspace of $H_{1}(S;\mathbb{R})$ invariant under some finite index
subgroup of $M$.
###### Lemma 9.7.
Let $V\neq\\{0\\}$ be a subspace of $H_{1}(S;\mathbb{R})$ on which a finite-index
subgroup $N$ of $M$ acts irreducibly. Suppose that $T(v)\in M$ for some $v\in
H_{1}(S;\mathbb{R})$. If there exists $v^{\prime}\in V$ such that
$\omega(v,v^{\prime})\neq 0$, then $v\in V$.
###### Proof.
By Equation 9.6, $T(v)^{k}(v^{\prime})-v^{\prime}=k\,\omega(v^{\prime},v)v$ is
a non-zero multiple of $v$ for every $k\geqslant 1$. Since $|M:N|<\infty$, we
have $T(v)^{k}\in N$ for some $k\geqslant 1$. As $V$ is $N$-invariant,
$T(v)^{k}(v^{\prime})-v^{\prime}\in V$, and hence $v\in V$.
∎
We now state and prove a strong irreducibility criterion that we will use
throughout the discussion that follows.
###### Lemma 9.8.
Let $B$ be a finite set of cycles in $H_{1}(S;\mathbb{R})$ such that
* •
the span of $B$ is $H_{1}(S;\mathbb{R})$;
* •
for any pair $u\neq u^{\prime}\in B$, there exists a _chain_
$u=u_{0},\dotsc,u_{k}=u^{\prime}$ such that $\omega(u_{j},u_{j+1})\neq 0$ for
all $0\leqslant j\leqslant k-1$; and
* •
$T(u)$ is in $M$ for all $u\in B$.
Then, $M$ acts strongly irreducibly on $H_{1}(S;\mathbb{R})$.
###### Proof.
Let $V\neq\\{0\\}$ be a subspace on which a finite-index subgroup $N$ of $M$
acts irreducibly. By hypothesis, it suffices to show that $B$ is contained in
$V$.
Since $B$ spans $H_{1}(S;\mathbb{R})$, there exists $v\in V$, $u\in B$ such
that $\omega(v,u)\neq 0$. Then, by Lemma 9.7, $u\in V$. Let $u^{\prime}\neq u$
be an element of $B$. By hypothesis, there is a chain
$u=u_{0},\dotsc,u_{k}=u^{\prime}$ such that $\omega(u_{j},u_{j+1})\neq 0$ for
all $0\leqslant j\leqslant k-1$. By applying Lemma 9.7 inductively, it follows
that for all $j$ the cycles $u_{j}$ are contained in $V$. We conclude that $B$
is contained in $V$, so the lemma follows. ∎
###### Remark 9.9.
The explicit sets $B$ to which we will apply the previous lemma will not
always be bases of $H_{1}(S;\mathbb{R})$; extra cycles may be needed to
satisfy the chain hypothesis.
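To make the chain hypothesis concrete, here is a small sketch of our own (the function name is hypothetical) that checks the linear-algebraic hypotheses of Lemma 9.8 for cycles given as coordinate vectors: spanning, plus connectivity of the graph whose edges are pairs with non-zero algebraic intersection.

```python
import numpy as np

def satisfies_chain_hypothesis(cycles, J):
    """cycles: list of coordinate vectors in H_1(S; R); J: matrix of omega.
    Checks the spanning and chain conditions of Lemma 9.8 (the Dehn twist
    condition must be verified geometrically and is not checked here)."""
    V = np.array(cycles, dtype=float)
    if np.linalg.matrix_rank(V) < J.shape[0]:
        return False                        # B does not span H_1(S; R)
    k = len(cycles)
    adj = np.abs(V @ J @ V.T) > 1e-9        # edge iff omega(u, u') != 0
    seen, stack = {0}, [0]
    while stack:                            # connectivity = chain condition
        i = stack.pop()
        for j in range(k):
            if adj[i, j] and j not in seen:
                seen.add(j)
                stack.append(j)
    return len(seen) == k
```

For example, a symplectic basis in genus two spans but fails the chain condition (the graph splits into the pairs $(a_{i},b_{i})$), while adding the cycle $b_{1}+b_{2}$ connects it, illustrating Remark 9.9.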
### 9.10. Minimal strata
For $g\geqslant 3$, let $d=2g$ and
$\pi_{d}=\setcounter{MaxMatrixCols}{11}\begin{pmatrix}1&2&1&3&4&5&6&7&8&\cdots&d-1\\\
2&4&3&5&4&8&7&\cdots&d&d-1&d\end{pmatrix}.$
The rooted differentials in $\mathcal{C}_{\pi_{d}}$ belong to
$\mathcal{Q}(4g-4)$.
As the first step, we will prove that the action of the monodromy group on
$H_{1}(S;\mathbb{R})$ is strongly irreducible. To do so, we will choose an
explicit set of cycles spanning the homology. The parity of $g$ will dictate a
choice of slightly different sets of cycles. See Figure 11(a) for a flat
surface in $\mathcal{Q}(16)$ for $g=5$, and Figure 11(b) and Figure 11(c) for
a flat surface in $\mathcal{Q}(12)$ for $g=4$. Note, however, that the stratum
$\mathcal{Q}(12)$ has two components, and that the figure depicts a quadratic
differential in $\mathcal{Q}(12)^{\mathrm{reg}}$. We will complete the proof
for $g=4$ in Appendix B, but for now we will only show the strong
irreducibility for $\mathcal{Q}(12)^{\mathrm{reg}}$. The pattern for the
cycles in even genus greater than four remains the same as in Figure 11(b) and
Figure 11(c), so we don’t need a separate figure.
In the second step, we will prove Zariski density of the monodromy group for
even genus (odd genera being directly covered by Filip’s result).
In our proofs, we will use a subset of the cycles (and their combinations)
that are core curves of cylinders on the flat surface. Dehn twists in such
cycles are in the monodromy group, so we may use them in our computations.
Let $M$ be the monodromy group and let $\mathfrak{m}$ be the Lie algebra of
its Zariski closure. We set $\varepsilon=(-1)^{g}$.
(a) Odd $g$.
(b) Even $g$ with the slope-$1$ curve $b$.
(c) Even $g$ with the modified slope-$1$ curve $b^{\prime}$.
Figure 9.11. Basis and useful curves.
We consider the collection of the following homology classes:
* •
$c_{1}$ and $c_{d}$ are the homology classes of the dashed curves;
* •
$c_{2}$ and $c_{d-1}$ are the homology classes of the dash-dotted curves; and
* •
$c_{3},\dotsc,c_{d-2}$ are the homology classes of the solid curves.
These curves form a basis for $H_{1}(S;\mathbb{R})$ (which is symplectic if
ordered appropriately). The densely dotted slope-$1$ curve is
$b=\sum_{i=1}^{d}c_{i}$ and the loosely dotted horizontal curve is
$p=c_{1}+\varepsilon c_{d}$. In even genus, we also need the curve
$b^{\prime}=\sum_{i=1}^{d-4}c_{i}-c_{d-1}-c_{d}$ obtained by modifying $b$.
Note that the set $\\{c_{2},\dotsc,c_{d-1},b,p\\}$ is a basis for
$H_{1}(S;\mathbb{R})$ when $g$ is odd and the set
$\\{c_{2},\dotsc,c_{d-1},b,b^{\prime},p\\}$ is a spanning set for
$H_{1}(S;\mathbb{R})$ when $g$ is even.
###### Lemma 9.12.
The action of $M$ on $H_{1}(S;\mathbb{R})$ is strongly irreducible.
###### Proof.
We set
$B=\begin{cases}\\{c_{2},\dotsc,c_{d-1},b,p\\}&\text{if $g$ is odd}\\\
\\{c_{2},\dotsc,c_{d-1},b,b^{\prime},p\\}&\text{if $g$ is even}\\\
\end{cases}$
As directly seen from Figure 11(a) and Figure 11(b), the cycles in the set
$\\{c_{2},\dotsc,c_{d-1},b\\}$ are given by core curves of cylinders on the
corresponding flat surfaces. Now consider the cylinder with core curve
$c_{d-3}-c_{d-2}$. A left-handed shear applied inside of this cylinder makes
the four horizontal saddle connections have slope one. Equivalently, it
performs a one-quarter Dehn twist. This straightens the modified slope-one
curve $b^{\prime}$; see Figure 11(c). Thus $b^{\prime}$ is the core curve of a
cylinder on the deformed surface. We deduce that for any cycle $u$ in $B$ the
Dehn twist $T(u)$ is in $M$.
We also note that for any pair $u\neq u^{\prime}$ in $B$ there exists a chain
$u=u_{0},\dotsc,u_{k}=u^{\prime}$ such that $\omega(u_{j},u_{j+1})\neq 0$ for
all $0\leqslant j\leqslant k-1$.
It follows that the set $B$ satisfies the hypothesis of Lemma 9.8 and thus the
action of $M$ on $H_{1}(S;\mathbb{R})$ is strongly irreducible. ∎
The Dehn twist $T(c)$ along a homology cycle $c$ is also a symplectic
transvection. For the following calculation we will think of it as such. If
$T(c)^{k}\in M$, then $T(c)^{k}-\mathrm{Id}\in\mathfrak{m}$. As notation, let
$D(c)=T(c)-\mathrm{Id}$, $E(c)=T(c)^{2}-\mathrm{Id}$.
The cycles $c_{2},\dotsc,c_{d-1}$ and $p$ are realised as core curves of
cylinders on the surface. Hence, the Dehn twists $T(c_{i})$ for
$i=2,\dotsc,d-1$ and $T(p)$ are in $M$. Observe that:
$\begin{aligned} D(c_{2})(c_{1})&=-c_{2}, & D(c_{2})(c_{i})&=0\text{ for }i\neq 1,\\\
D(c_{2i+1})(c_{2i+2})&=c_{2i+1}, & D(c_{2i+1})(c_{j})&=0\text{ for }j\neq 2i+2,\\\
D(c_{2i+2})(c_{2i+1})&=-c_{2i+2}, & D(c_{2i+2})(c_{j})&=0\text{ for }j\neq 2i+1,\\\
D(c_{d-1})(c_{d})&=-c_{d-1}, & D(c_{d-1})(c_{j})&=0\text{ for }j\neq d,\\\
D(p)(c_{2})=p,\ D(p)(c_{d-1})&=\varepsilon p, & D(p)(c_{j})&=0\text{ for }j\notin\\{2,d-1\\}.\end{aligned}$
All elements above belong to $\mathfrak{m}$.
Moreover, for each $1\leqslant i,j\leqslant g-2$ we have that the elements
$T(c_{2i+1}+c_{2j+1})^{2}$, $T(c_{2i+1}+c_{2j+2})^{2}$ and
$T(c_{2i+2}+c_{2j+2})^{2}$ all belong to $M$. Indeed, if $i=j$ then
$|\omega(c_{2i+1},c_{2j+2})|=1$, so $T(c_{2i+1}+c_{2j+2})\in M$. Otherwise,
this follows from a result by the fourth author using $b$ as the auxiliary
vector [Gut19, Corollary 2.8]. Observe that
$\begin{aligned} E(c_{2i+1}+c_{2j+1})(c_{2i+2})&=2(c_{2i+1}+c_{2j+1}),\\\
E(c_{2i+1}+c_{2j+1})(c_{2j+2})&=2(c_{2i+1}+c_{2j+1}),\\\
E(c_{2i+1}+c_{2j+2})(c_{2i+2})&=2(c_{2i+1}+c_{2j+2}),\\\
E(c_{2i+1}+c_{2j+2})(c_{2j+1})&=-2(c_{2i+1}+c_{2j+2}),\\\
E(c_{2i+2}+c_{2j+2})(c_{2i+1})&=-2(c_{2i+2}+c_{2j+2}),\\\
E(c_{2i+2}+c_{2j+2})(c_{2j+1})&=-2(c_{2i+2}+c_{2j+2}).\end{aligned}$
The number of elements of the form $D(\ast)$ is $d-1=2g-1$, and the number of
elements of the form $E(\ast)$ is $\binom{2g-4}{2}$.
With basis $\\{c_{1},\dotsc,c_{d}\\}$, we identify the vector space of linear
transformations of $H_{1}(S;\mathbb{R})$ with $\text{Mat}_{d\times
d}(\mathbb{R})$. We may then associate a matrix to the elements $D(\ast)$ and
$E(\ast)$ considered above. Let $M_{i,j}$ be the matrix with the $(i,j)$-entry
one and all other entries zero. We may then write the matrices for $D(\ast)$
and $E(\ast)$ as linear combinations of $M_{i,j}$.
From the calculations above, we note that
* •
The matrix for $D(c_{2i+1})$ is $M_{(2i+1),(2i+2)}$.
* •
The matrix for $D(c_{2i+2})$ is $-M_{(2i+2),(2i+1)}$.
* •
The matrix $M_{(2i+2),(2j+1)}$ features only in the linear combination of the
matrix for $E(c_{2i+1}+c_{2j+1})$.
* •
The matrix $M_{(2i+2),(2j+2)}$ features only in the linear combination of the
matrix for $E(c_{2i+1}+c_{2j+2})$.
* •
The matrix $M_{(2i+1),(2j+2)}$ features only in the linear combination of the
matrix for $E(c_{2i+2}+c_{2j+2})$.
It follows that all the elements considered above are linearly independent in
$\text{Mat}_{d\times d}(\mathbb{R})$. Thus,
$\dim_{\mathbb{R}}\mathfrak{m}\geqslant\binom{2g-4}{2}+2g-1=2g^{2}-7g+9.$
We conclude that if $g\geqslant 6$, we have that
$\displaystyle\dim_{\mathbb{R}}\mathfrak{m}$
$\displaystyle>\dim_{\mathbb{R}}\mathfrak{su}(g/2,g/2)=g^{2}-1$
$\displaystyle\dim_{\mathbb{R}}\mathfrak{m}$
$\displaystyle>\dim_{\mathbb{R}}\mathfrak{so}(2n-1,2)=\log_{2}(2g)\log_{2}(8g^{2})\text{
for }n=\log_{2}(2g)$ $\displaystyle\dim_{\mathbb{R}}\mathfrak{m}$
$\displaystyle>\dim_{\mathbb{R}}\mathfrak{so}(2n-2,2)=\log_{2}(4g)\log_{2}(8g^{2})\text{
for }n=\log_{2}(4g)$ $\displaystyle\dim_{\mathbb{R}}\mathfrak{m}$
$\displaystyle>\dim_{\mathbb{R}}\mathfrak{so}^{*}(g/2)=g(g-1)/2.$
Hence, in these cases, $\mathfrak{m}=\mathfrak{sp}(2g,\mathbb{R})$ and
$M=\operatorname{Sp}(2g,\mathbb{R})$. Recall directly from the list that
$\mathfrak{m}$ is $\mathfrak{sp}(2g,\mathbb{R})$ when $g$ is odd, as the
action of $M$ on $H_{1}(S;\mathbb{R})$ is strongly irreducible.
Since minimal strata only occur for $g\geqslant 3$, the only remaining case is
$g=4$. We treat this case in Appendix B.
### 9.13. Hyperelliptic components with two singularities
Let $r,s\geqslant 1$ be odd integers. Consider the permutation $\pi_{r,s}$
defined by
$\setcounter{MaxMatrixCols}{11}\begin{pmatrix}A&0&1&2&\cdots&r-1&A&r&r+1&\cdots&r+s\\\
r+s&r+s-1&\cdots&r&B&r-1&r-2&r-3&\cdots&0&B\\\ \end{pmatrix}.$
The component represented by this permutation is
$\mathcal{Q}(2r,2s)^{\mathrm{hyp}}$ and we may assume that $r\geqslant s$. See
Figure 9.14 for a flat surface in $\mathcal{Q}(6,2)^{\mathrm{hyp}}$. The genus
of the underlying surface is $g=(r+s+2)/2$. Let:
* •
$c_{0},\dotsc,c_{r+s}$ be the homology classes of the solid curves;
* •
$c_{A}$ and $c_{B}$ be the homology classes of the dashed curves; and
* •
$c_{AB}$ be the homology class of the dotted curve.
These cycles form a spanning set of the relative homology
$H_{1}(S,Z;\mathbb{Z})$. As a relative cycle,
$c_{AB}=-c_{s}+c_{r+s}-c_{A}+c_{B}$.
In absolute homology, $c_{A}+c_{B}=0$. Excising $c_{B}$ (or $c_{A}$) we obtain
exactly $2g=r+s+2$ curves. So $\\{c_{0},\dotsc,c_{r+s}\\}\cup\\{c_{A}\\}$ is a
basis of $H_{1}(S;\mathbb{Z})$. Note that
* •
all cycles in $\\{c_{0},\dotsc,c_{r+s}\\}\cup\\{c_{AB}\\}$ are represented by
core curves of cylinders and hence all $T(c_{j})$, for $0\leqslant j\leqslant
r+s$, and $T(c_{AB})$ are in $M$; and
* •
any pair of cycles in $\\{c_{0},\dotsc,c_{r+s}\\}$ intersect. Moreover,
$c_{AB}$ intersects $c_{0}$.
Thus, the collection $\\{c_{0},\dotsc,c_{r+s}\\}\cup\\{c_{AB}\\}$ satisfies the
hypotheses of Lemma 9.8, which proves that the action of $M$ on
$H_{1}(S;\mathbb{R})$ is strongly irreducible.
We again consider the Dehn twist $T(c)$ in a cycle $c$ as a symplectic
transvection. If $T(c)^{k}\in M$, then $T(c)^{k}-\mathrm{Id}\in\mathfrak{m}$.
As notation, let $D(c)=T(c)-\mathrm{Id}$.
The cycles $c_{i}$, for $0\leqslant i\leqslant r+s$, and $c_{i}+c_{j}$, for
$0\leqslant i<j\leqslant r+s$, are core curves of cylinders on the flat
surface. Hence, $T(c_{i}),T(c_{i}+c_{j})\in M$ for each $0\leqslant
i<j\leqslant r+s$ and, thus, $D(c_{i}),D(c_{i}+c_{j})\in\mathfrak{m}$. As
before, we use the basis $\\{c_{0},c_{1},\dotsc,c_{r+s},c_{A}\\}$ to identify
linear transformations of $H_{1}(S;\mathbb{R})$ with $\text{Mat}_{2g\times
2g}(\mathbb{R})$. Again as before, for $0\leqslant i,j\leqslant r+s$ let
$M_{i,j}$ be the matrix with $(i,j)$-entry one and all remaining entries zero.
The matrices $M_{i,A}$, $M_{A,j}$ and $M_{A,A}$ are defined similarly.
Figure 9.14. Curves for the hyperelliptic case.
We use the following notation. If $P(i,j)$ is a logical proposition on $i$ and
$j$, we define
$\llbracket P(i,j)\rrbracket=\begin{cases}1&\text{if }P(i,j)\text{ is true}\\\
0&\text{if }P(i,j)\text{ is false}.\end{cases}$
We then note that
$D(c_{i})=-\sum_{k<i}M_{i,k}+\sum_{k>i}M_{i,k}+\llbracket i<r\rrbracket
M_{i,A}.$
Similarly, with $i<j$ note that
$\displaystyle D(c_{i}+c_{j})=$
$\displaystyle{}-(M_{i,i}+M_{j,i})+(M_{i,j}+M_{j,j})$
$\displaystyle{}-2\sum_{k<i}(M_{i,k}+M_{j,k})+2\sum_{j<k}(M_{i,k}+M_{j,k})$
$\displaystyle{}+2\llbracket i<j<r\rrbracket(M_{i,A}+M_{j,A})$
$\displaystyle{}+\llbracket i<r\leqslant j\rrbracket(M_{i,A}+M_{j,A}).$
###### Lemma 9.15.
The collection of matrices $\\{D(c_{i})\\}\cup\\{D(c_{i}+c_{j})\\}_{i<j}$ are
linearly independent.
###### Proof.
Any linear combination can be regrouped as $\sum_{i=0}^{r+s}S_{i}$ where
$S_{i}=a_{i}D(c_{i})+\sum_{j=i+1}^{r+s}a_{i,j}D(c_{i}+c_{j}).$
In particular, when $i=r+s$ the summation is empty and
$S_{r+s}=a_{r+s}D(c_{r+s})$.
Note that the matrices in $S_{0}$ are the only ones in the whole linear
combination whose first row does not vanish. Inductively, the matrices in
$S_{i+1}$ are the only ones outside $S_{0},\dotsc,S_{i}$ whose $(i+1)$-th row
is empty. Hence, it suffices to argue that each such collection
$\\{D(c_{i}),D(c_{i}+c_{i+1}),\dotsc,D(c_{i}+c_{r+s})\\}$ is linearly
independent.
Let $0\leqslant i\leqslant r+s$ and let $W$ be the $i$-th row of $D(c_{i})$.
Moreover, for $i+1\leqslant j\leqslant r+s$, let $V_{j}$ be the $i$-th row of
$D(c_{i}+c_{j})$.
###### Claim 9.16.
The row vectors $W,V_{i+1},\dotsc,V_{r+s}$ are linearly independent.
###### Proof (Claim).
We index the standard basis in $\mathbb{R}^{2g}$ as
$\\{e_{0},\dotsc,e_{r+s},e_{A}\\}$. From the formulae above, we deduce
$W=-\sum_{k<i}e_{k}+\sum_{k>i}e_{k}+\llbracket i<r\rrbracket e_{A}.$
and for $i+1\leqslant j\leqslant r+s$
$\displaystyle V_{j}=$
$\displaystyle{}-2\sum_{k<i}e_{k}-e_{i}+e_{j}+2\sum_{k>j}e_{k}$
$\displaystyle{}+(2\llbracket i<j<r\rrbracket+\llbracket i<r\leqslant
j\rrbracket)e_{A}$
Consider $V_{r+s}=-2\sum_{k<i}e_{k}-e_{i}+e_{r+s}+\llbracket i<r\rrbracket
e_{A}$. Observe that the linear combination of canonical vectors realising
$V_{j}-2V_{r+s}$ does not feature $e_{r+s}$ for $i+1\leqslant j<r+s$. Thus, we
redefine $V_{j}$ to be $V_{j}-2V_{r+s}$ for such $j$ and continue inductively.
That is, the next step redefines $V_{j}$ to be $V_{j}-2V_{r+s-1}$ for every
$i+1\leqslant j<r+s-1$.
We obtain that
$\displaystyle V_{j}={}$ $\displaystyle
2(-1)^{j+1}\sum_{k<i}e_{k}+(-1)^{j+1}e_{i}+e_{j}$
$\displaystyle{}+2(-1)^{j}\llbracket i<j<r\rrbracket e_{A}$
$\displaystyle{}+(-1)^{j}\llbracket i<r\leqslant j\rrbracket e_{A}.$
We also apply a similar process to $W$, redefining it to be
$W-V_{r+s}-V_{r+s-1}-\dotsb-V_{i+1}$. Then, we obtain that
$W=\begin{cases}-\sum_{k<i}e_{k}+\llbracket i<r\rrbracket e_{A}&\text{ if $i$
is even}\\\ +\sum_{k\leqslant i}e_{k}-\llbracket i<r\rrbracket e_{A}&\text{ if
$i$ is odd}.\end{cases}$
We observe
* •
for $i+1\leqslant j\leqslant r+s$, the vector $e_{j}$ is featured only in the
linear combination for $V_{j}$, and
* •
the vector $W$ is in the kernel of the projection
$\mathbb{R}^{2g}\to\mathbb{R}^{r+s-i}$ given by
$(u_{0},\dotsc,u_{r+s})\mapsto(u_{i+1},\dotsc,u_{r+s})$.
We thus conclude the claim. ∎
By the claim, for each fixed $i$ the collection
$\\{D(c_{i})\\}\cup\\{D(c_{i}+c_{j})\\}_{j>i}$
is linearly independent. The lemma then follows from our
grouping. ∎
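Lemma 9.15 can be sanity-checked numerically for small odd values of $r$ and $s$. The following Python sketch (assuming numpy is available) builds the matrices $D(c_{i})$ and $D(c_{i}+c_{j})$ directly from the formulas above, in the basis used above, and verifies their linear independence by a rank computation:

```python
import numpy as np

def d_matrices(r, s):
    """Matrices D(c_i) and D(c_i + c_j), 0 <= i < j <= r+s, from the
    formulas in the text, in the basis {c_0, ..., c_{r+s}, c_A}."""
    n = r + s + 1          # number of cycles c_0, ..., c_{r+s}
    dim = n + 1            # plus c_A; dim = 2g with g = (r+s+2)/2
    A = n                  # index of the row/column for c_A
    mats = []
    for i in range(n):
        D = np.zeros((dim, dim), dtype=int)
        for k in range(n):
            if k < i:
                D[i, k] = -1
            elif k > i:
                D[i, k] = 1
        if i < r:          # Iverson bracket [i < r]
            D[i, A] = 1
        mats.append(D)
    for i in range(n):
        for j in range(i + 1, n):
            D = np.zeros((dim, dim), dtype=int)
            D[i, i] -= 1; D[j, i] -= 1
            D[i, j] += 1; D[j, j] += 1
            for k in range(i):
                D[i, k] -= 2; D[j, k] -= 2
            for k in range(j + 1, n):
                D[i, k] += 2; D[j, k] += 2
            if i < j < r:          # Iverson bracket [i < j < r]
                D[i, A] += 2; D[j, A] += 2
            elif i < r <= j:       # Iverson bracket [i < r <= j]
                D[i, A] += 1; D[j, A] += 1
            mats.append(D)
    return mats

for r, s in [(1, 1), (3, 1)]:
    g = (r + s + 2) // 2
    mats = d_matrices(r, s)
    rank = np.linalg.matrix_rank(np.array([D.flatten() for D in mats]))
    assert len(mats) == g * (2 * g - 1)   # binom(2g-1, 2) + (2g - 1)
    assert rank == len(mats)              # linear independence (Lemma 9.15)
```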
By Lemma 9.15,
$\dim_{\mathbb{R}}\mathfrak{m}\geqslant\binom{2g-1}{2}+2g-1=g(2g-1).$
We conclude that if $g\geqslant 5$, we have that
$\displaystyle\dim_{\mathbb{R}}\mathfrak{m}$
$\displaystyle>\dim_{\mathbb{R}}\mathfrak{su}(g/2,g/2)=g^{2}-1$
$\displaystyle\dim_{\mathbb{R}}\mathfrak{m}$
$\displaystyle>\dim_{\mathbb{R}}\mathfrak{so}(2n-1,2)=\log_{2}(2g)\log_{2}(8g^{2})\text{
for }n=\log_{2}(2g)$ $\displaystyle\dim_{\mathbb{R}}\mathfrak{m}$
$\displaystyle>\dim_{\mathbb{R}}\mathfrak{so}(2n-2,2)=\log_{2}(4g)\log_{2}(8g^{2})\text{
for }n=\log_{2}(4g)$ $\displaystyle\dim_{\mathbb{R}}\mathfrak{m}$
$\displaystyle>\dim_{\mathbb{R}}\mathfrak{so}^{*}(g/2)=g(g-1)/2.$
Thus, we obtain that $\mathfrak{m}=\mathfrak{sp}(2g,\mathbb{R})$ and that
$M=\operatorname{Sp}(2g,\mathbb{R})$ for $g\geqslant 5$.
To extend to $g=4$, we include $D(c_{AB})$ in the collection of matrices and
prove linear independence. Note that
$\displaystyle D(c_{AB})={}$ $\displaystyle 3(M_{0,0}-M_{s,0}-M_{A,0})$
$\displaystyle{}+4\sum_{k<s}(M_{0,k}-M_{s,k}-M_{A,k})$
$\displaystyle{}+2\sum_{k=s+1}^{r-1}(M_{0,k}-M_{s,k}-M_{A,k}).$
We expand the collection $S_{0}$ in the proof of Lemma 9.15 to include
$D(c_{AB})$. Let $\\{W,V_{1},\dotsc,V_{r+s}\\}$ be the vectors that arise from
$S_{0}$ in 9.16. It suffices to show that the first row of $D(c_{AB})$ cannot
be written as a linear combination of $\\{W,V_{1},\dotsc,V_{r-1}\\}$. This is
readily seen after the vectors $W,V_{1},\dotsc,V_{r-1}$ are redefined as per
the Gaussian elimination process described in the proof of 9.16.
For $g=3$, we have directly from the list that
$\mathfrak{m}=\mathfrak{sp}(2g,\mathbb{R})$ since the action of $M$ on
$H_{1}(S;\mathbb{R})$ is strongly irreducible.
The only remaining case is $g=2$. Here, we have that
$\displaystyle\dim_{\mathbb{R}}\mathfrak{m}\geqslant 6$
$\displaystyle>3=\dim_{\mathbb{R}}\mathfrak{su}(1,1)$
$\displaystyle\mathfrak{sp}(4,\mathbb{R})$
$\displaystyle\cong\mathfrak{so}(3,2)$
$\displaystyle\dim_{\mathbb{R}}\mathfrak{sp}(4,\mathbb{R})=10$
$\displaystyle<15=\dim_{\mathbb{R}}\mathfrak{so}(4,2),$
so the only possibility for $\mathfrak{m}$ is
$\mathfrak{sp}(4,\mathbb{R})\cong\mathfrak{so}(3,2)$.
### 9.17. Exceptional non-minimal strata
In Table 1, we exhibit an explicit simple extension from a component of a
minimal stratum to each non-hyperelliptic component of exceptional non-minimal
strata that was not already treated by the fourth author [Gut17, Table 1],
except for $\mathcal{Q}(9,-1)^{\mathrm{irr}}$. Indeed, there does not exist a
simple extension from $\mathcal{Q}(8)$ to this last component as noted by
Lanneau [Lan08], so we treat it separately in Appendix B. These computations
were performed by using the surface_dynamics package for SageMath [Ste+20].
Source | Target | Permutation
---|---|---
$\mathcal{Q}(8)$ | $\mathcal{Q}(9,-1)^{\mathrm{reg}}$ | $\begin{pmatrix}1&2&1&3&4&3&5&2\\\ &6&5&4&A&A&6\end{pmatrix}$
$\mathcal{Q}(12)^{\mathrm{reg}}$ | $\mathcal{Q}(6,6)^{\mathrm{reg}}$ | $\begin{pmatrix}1&2&A&3&4&5&A&6&7&5\\\ &2&4&6&8&7&8&3&1\end{pmatrix}$
$\mathcal{Q}(12)^{\mathrm{irr}}$ | $\mathcal{Q}(6,6)^{\mathrm{irr}}$ | $\begin{pmatrix}&1&2&3&4&3&5&6&7\\\ 8&1&6&8&A&4&2&7&A&5\end{pmatrix}$
$\mathcal{Q}(12)^{\mathrm{reg}}$ | $\mathcal{Q}(9,3)^{\mathrm{reg}}$ | $\begin{pmatrix}1&2&3&A&4&3&5&6&7\\\ 2&4&6&8&7&8&5&A&1\end{pmatrix}$
$\mathcal{Q}(12)^{\mathrm{irr}}$ | $\mathcal{Q}(9,3)^{\mathrm{irr}}$ | $\begin{pmatrix}1&2&3&4&5&A&4&A&6&7\\\ &2&6&8&5&3&7&8&1\end{pmatrix}$
Table 1. Explicit extensions into non-hyperelliptic components of exceptional
strata with fewer than three singularities. The permutation in the third column
represents the component in the second column. Erasing letters $A$ and $B$
produces a permutation representing the component of a minimal stratum in the
first column.
## 10\. Simplicity
We now prove the Kontsevich–Zorich conjecture.
###### Theorem 10.1.
The Kontsevich–Zorich cocycle has a simple spectrum for all components of all
strata of abelian differentials. The plus and minus Kontsevich–Zorich cocycles
also have a simple spectrum for all components of all strata of quadratic
differentials.
###### Proof.
Let $\mathcal{C}$ be any component of a stratum of abelian or quadratic
differentials. We have the following facts.
* •
The Teichmüller flow on a finite cover of $\mathcal{C}$ admits a coding as a
countable shift with approximate product structure.
* •
We have established that the plus and minus cocycles lifted to this cover are
locally constant and integrable.
* •
We have also established that associated monoids for the plus and minus
cocycles (lifted to this cover) are Zariski dense in the corresponding
symplectic groups.
By 6.9, it follows that the Lyapunov spectra for the plus and minus cocycles
are simple. ∎
## Appendix A Examples
### A.1. The decomposition of $\mathcal{C}^{\mathrm{root}}$ is not polytopal
(a) $\gamma(0)$.
(b) $\gamma(1)$.
Figure A.2. A curve $\gamma\colon[0,1]\to\mathcal{H}(0,0)^{\mathrm{root}}$
that ends at $\mathcal{V}$. The real periods are deformed following the
arrows, while the imaginary periods remain constant.
In this section, we present an explicit example showing that the decomposition
of $\mathcal{C}^{\mathrm{root}}$ into the union of the
$\overline{\mathcal{C}_{\pi}}$, where $\pi\in\mathcal{R}$, is “not polytopal”.
Indeed, a compact arc in $\mathcal{C}^{\mathrm{root}}$ may intersect
$\mathcal{S}=\mathcal{C}^{\mathrm{root}}-\bigcup_{\pi\in\mathcal{R}}\mathcal{C}_{\pi}$
infinitely many times even if it is transverse to $\mathcal{V}$.
Let $\mathcal{C}=\mathcal{H}(0,0)$ and consider the curve shown in Figure A.2.
The rooted differential $\gamma(1)$ contains a “wide” vertical cylinder, that
is, a vertical cylinder such that an arc of length one emanating from the root
ends before it crosses the cylinder entirely. On the other hand, we assume
that the rooted differential $\gamma(0)$ is doubly non-vanishing, and
that it admits a normalised zippered rectangles construction for the
underlying permutation $\pi=\big{(}\begin{smallmatrix}1&2&3\\\
3&2&1\end{smallmatrix}\big{)}$.
Starting from $s=0$ and as $s$ increases, the normalised base-arc shrinks
until its length is exactly equal to $1+\min\\{x_{1},x_{3}\\}$ at $s=s_{1}>0$.
Thus, $\gamma(s)$ admits a zippered rectangles construction for every
$0\leqslant s<s_{1}$, but $\gamma(s_{1})$ does not. Indeed, $\gamma(s_{1})$
belongs to a flow face. As $\gamma$ passes through this flow face, a forward
Rauzy move must be performed to again obtain a normalised zippered rectangles
construction. The winning letter is $3$, so the resulting permutation after
the Rauzy move is again $\pi$.
This process continues inductively. Indeed, starting from $s=s_{k}$, for any
integer $k\geqslant 1$, and as $s$ increases, the normalised base-arc
continues to shrink until the curve hits the flow face again at
$s=s_{k+1}>s_{k}$. Thus, $\gamma(s)$ admits a normalised zippered rectangles
construction for every $s_{k}<s<s_{k+1}$. A Rauzy move must be performed when
the curve crosses the flow face; the winning letter continues to be $3$.
Hence, the resulting permutation is again $\pi$.
In summary, there exists a countable collection
$0<s_{1}<\dotsb<s_{k}<s_{k+1}<\dotsb{}$ such that:
1. (1)
$\gamma(s)$ admits a normalised zippered rectangles construction for
$0\leqslant s<s_{1}$ and every $s_{k}<s<s_{k+1}$ for $k\geqslant 1$;
2. (2)
$\gamma(s_{k})$ belongs to a flow face for every $k\geqslant 1$.
Thus, $\gamma$ intersects $\mathcal{S}$ infinitely many times and there is no
finite Rauzy–Veech sequence shadowing $\gamma$. Moreover, as this process
unfolds, the width of the rectangle $R_{1}$ goes to zero, while its height
grows indefinitely. Hence, the normalised zippered rectangles
constructions along $\gamma$ become more and more degenerate and do not
converge to a well-defined element of $P_{\pi}$. See Figure A.3 for an
illustration of this phenomenon.
(a) $\gamma(0)$.
(b) $\gamma(s)$ for $s_{1}<s<s_{2}$.
(c) $\gamma(s)$ for $s_{2}<s<s_{3}$.
Figure A.3. Normalised zippered rectangles construction of $\gamma(s)$.
Similar examples also exist for _toppling faces_, that is, when either some
width $x_{\alpha}$ or some zipper height goes to zero. It is possible to find
a compact arc in $\mathcal{C}^{\mathrm{root}}$ transverse to $\mathcal{V}$
that intersects infinitely many toppling faces. Indeed, a simple way to obtain
such an example is to consider a horizontal slit $J$ that does not meet the
normalised base-arc $I$, and such that some vertical segments emanating from
$I$ meet $J$ before their first return. Then, the slit can be rotated until it
becomes vertical (in a way that it still does not meet the base-arc). This
forces the widths of some rectangles to hit zero infinitely many times before
the slit becomes vertical.
### A.4. Crossing a toppling face
In this section, we present concretely the construction used in the based loop
theorem, namely Theorem 4.23, in which any loop in
$\pi_{1}(\mathcal{C}^{\mathrm{root}},q_{0})$ is written as a finite
concatenation of paths that are forward (or backward) Teichmüller segments or
are contained inside a polytope. The path in $\mathcal{C}^{\mathrm{root}}$
that we present is not closed, but it still illustrates the key point. For
more complicated paths or loops, this procedure must be repeated several times.
(a) $\gamma(0)$.
(b) $\gamma(1/2)$.
(c) $\gamma(1)$.
Figure A.5. A curve $\gamma\colon[0,1]\to\mathcal{H}(0,0)^{\mathrm{root}}$
that passes through $\mathcal{V}$. The real periods are deformed following the
arrows, while the imaginary periods remain constant.
Let $\mathcal{C}=\mathcal{H}(0,0)$. Consider the path
$\gamma\colon[0,1]\to\mathcal{C}^{\mathrm{root}}$ illustrated in Figure A.5.
Assume that $\gamma(0)$ and $\gamma(1)$ are doubly non-vanishing, while
$\gamma(1/2)$ has a vertical saddle connection and, thus, belongs to
$\mathcal{V}$.
Figure A.6. Normalised zippered rectangles construction of $\gamma(0)$.
Since $\gamma(0)$ is doubly non-vanishing, it admits a normalised base-arc.
The resulting zippered rectangles construction, with underlying permutation
$\pi=\big{(}\begin{smallmatrix}1&2&3\\\ 3&2&1\end{smallmatrix}\big{)}$, is
shown in Figure A.6.
If $0\leqslant s<1/2$, a parameter $(x_{s},y_{s})\in P_{\pi}$ of this zippered
rectangles construction satisfies $q_{\pi}(x_{s},y_{s})=\gamma(s)$. As $s$
increases towards $1/2$, these parameters approach the boundary of $P_{\pi}$
and $\gamma(1/2)\notin\mathcal{C}_{\pi}$. In particular, as $s$ increases to
$1/2$, the width $x_{2}$ tends to zero while all other parameters stay bounded
away from zero and infinity, and thus $\gamma(1/2)$ can be said to lie on a
toppling face.
(a) Before cutting and pasting.
(b) After cutting and pasting by two backward Rauzy moves.
Figure A.7. Zippered rectangles construction of $\gamma(1/2)$ with a base-arc
of length at least $5/3$.
On the other hand, $\gamma(1/2)$ is not doubly vanishing. It thus admits a
base-arc. We take a base-arc of length at least $5/3$, since the interior of
any horizontal segment with length at least $5/3$ meets every leaf of the
vertical foliation. The resulting (unnormalised) zippered rectangles
construction, with underlying permutation
$\sigma=\big{(}\begin{smallmatrix}1&3&2\\\ 3&2&1\end{smallmatrix}\big{)}$, is
shown in Figure A.7. As Teichmüller flow by $T=-\log(5/3)$ normalises the
base-arc, we have that $g_{T}(\gamma(1/2))\in\mathcal{C}_{\sigma}$. Observe
that $\sigma$ is obtained from $\pi$ by two backward Rauzy moves.
(a) $\gamma(0)$.
(b) $\gamma(1)$.
Figure A.8. Zippered rectangles constructions with a base-arc of length at
least $5/3$.
For “large” deformations of parameters, the zippered rectangles construction
with a base-arc of length at least $5/3$ is contained in
$\mathcal{C}_{\sigma}$ after Teichmüller flow by $T=-\log(5/3)$. In
particular, we have that $g_{T}(\gamma(s))\in\mathcal{C}_{\sigma}$ for every
$s\in[0,1]$. Let $(x_{s}^{\prime},y_{s}^{\prime})\in P_{\sigma}$ such that
$q_{\sigma}(x_{s}^{\prime},y_{s}^{\prime})=g_{T}(\gamma(s))$. Figure A.8 shows
these zippered rectangles constructions for $\gamma(0)$ and $\gamma(1)$.
(a) Before cutting and pasting.
(b) After cutting and pasting by two backward Rauzy moves followed by two
forward Rauzy moves.
Figure A.9. Normalised zippered rectangles construction of $\gamma(1)$.
Finally, $\gamma(1)$ is also doubly non-vanishing. Thus it admits a normalised
base-arc. The resulting zippered rectangles construction, with underlying
permutation $\tau=\big{(}\begin{smallmatrix}1&2&3\\\
3&1&2\end{smallmatrix}\big{)}$, is shown in Figure A.9. Observe that $\tau$ is
obtained from $\pi$ by two backward Rauzy moves followed by two forward Rauzy
moves.
If $1/2<s\leqslant 1$, a parameter $(x_{s},y_{s})$ of this zippered rectangles
construction satisfies $q_{\tau}(x_{s},y_{s})=\gamma(s)$. As $s$ decreases
towards $1/2$, these parameters approach the boundary of $P_{\tau}$ and
$\gamma(1/2)\notin\mathcal{C}_{\tau}$.
Putting everything together, we obtain three open sets
$U_{0},U_{1/2},U_{1}\subseteq\mathcal{C}^{\mathrm{root}}$ satisfying:
* •
$U_{0}=q_{\pi}(W_{0})$, where $W_{0}\subseteq P_{\pi}$ is an open set
containing $(x_{0},y_{0})$ whose closure is contained in $P_{\pi}$;
* •
$U_{1/2}=g_{-T}(q_{\sigma}(W_{1/2}))$, where $W_{1/2}\subseteq P_{\sigma}$ is
an open set containing $(x_{s}^{\prime},y_{s}^{\prime})$ for every $s\in[0,1]$
whose closure is contained in $P_{\sigma}$; and
* •
$U_{1}=q_{\tau}(W_{1})$, where $W_{1}\subseteq P_{\tau}$ is an open set
containing $(x_{1},y_{1})$ whose closure is contained in $P_{\tau}$.
Then, $\gamma$ is homotopic, relative to its endpoints, to the concatenation
of the paths:
* •
$g_{t}\gamma(0)$ for $t\in[0,T]$;
* •
$g_{T}\gamma(s)$ for $s\in[0,1]$; and
* •
$g_{-t}\gamma(1)$ for $t\in[0,T]$.
Therefore, the combinatorial description of this concatenation is
$\begin{pmatrix}1&2&3\\\
3&2&1\end{pmatrix}\xrightarrow{\mathrm{b}^{-1}}\begin{pmatrix}1&3&2\\\
3&2&1\end{pmatrix}\xrightarrow{\mathrm{t}^{-1}}\begin{pmatrix}1&3&2\\\
3&2&1\end{pmatrix}\xrightarrow{\mathrm{b}}\begin{pmatrix}1&2&3\\\
3&2&1\end{pmatrix}\xrightarrow{\mathrm{t}}\begin{pmatrix}1&2&3\\\
3&1&2\end{pmatrix}$
which is the (undirected) Rauzy–Veech sequence shadowing $\gamma$.
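The moves in this sequence can be reproduced with a short Python sketch of forward Rauzy induction on permutation pairs. The convention used below (the loser is reinserted immediately after the winner's occurrence in the opposite row) is an assumption chosen to match the displayed sequence, not a definition taken from the text:

```python
def rauzy(pi, move):
    """One forward Rauzy move on a permutation pair (top row, bottom row).

    In a top move 't' the last letter of the top row wins: the last letter
    of the bottom row is removed and reinserted immediately after the
    winner's occurrence in the bottom row. A bottom move 'b' is symmetric.
    """
    top, bot = list(pi[0]), list(pi[1])
    if move == "t":
        winner, loser = top[-1], bot.pop()
        bot.insert(bot.index(winner) + 1, loser)
    elif move == "b":
        winner, loser = bot[-1], top.pop()
        top.insert(top.index(winner) + 1, loser)
    else:
        raise ValueError(move)
    return tuple(top), tuple(bot)

# The forward portion of the displayed sequence.
pi = ((1, 3, 2), (3, 2, 1))
pi = rauzy(pi, "t")   # winner 2, loser 1: permutation is unchanged
assert pi == ((1, 3, 2), (3, 2, 1))
pi = rauzy(pi, "b")   # winner 1, loser 2: top row becomes (1, 2, 3)
assert pi == ((1, 2, 3), (3, 2, 1))
pi = rauzy(pi, "t")   # winner 3, loser 1: bottom row becomes (3, 1, 2)
assert pi == ((1, 2, 3), (3, 1, 2))
```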
## Appendix B Zariski density of the remaining cases
In this section, we explicitly check the Zariski density for the plus piece of
the four remaining components, namely $\mathcal{Q}(5,-1)$,
$\mathcal{Q}(9,-1)^{\mathrm{irr}}$, $\mathcal{Q}(12)^{\mathrm{reg}}$ and
$\mathcal{Q}(12)^{\mathrm{irr}}$. We do this by using the following sufficient
criterion.
###### Criterion B.1 ([PR14, Theorem 9.10]).
Let $G$ be a subgroup of $\operatorname{Sp}(2g,\mathbb{Z})$. We have that $G$
is Zariski dense in $\operatorname{Sp}(2g,\mathbb{R})$ provided the Zariski
closure of $G$ is not a power of $\operatorname{SL}(2,\mathbb{R})$, and there
exist elements $A,B\in G$ satisfying:
1. (1)
$A$ is Galois-pinching in the sense of Matheus–Möller–Yoccoz [MMY15]. That is,
all of its eigenvalues are real and have distinct moduli, and the Galois group
of its characteristic polynomial is maximal; and
2. (2)
$B$ has infinite order and does not commute with $A$.
Since $A$ is symplectic, its characteristic polynomial $P$ is reciprocal.
Thus, the Galois group of $P$ is contained inside an appropriate
hyperoctahedral group. Hence, this group is maximal if and only if it has
order $2^{g}g!$. Moreover, if a monodromy group is a power of
$\operatorname{SL}(2,\mathbb{R})$, then it has more than one compact factor,
which is forbidden for strongly irreducible pieces [Fil17, Theorem 1.2;
Esk-Fil-Wri, Theorem 1.1]. Thus, if we can verify the hypotheses of B.1
together with those of Lemma 9.8, we obtain the Zariski density of $G$ inside
$\operatorname{Sp}(2g,\mathbb{R})$.
For all of the remaining components, we will follow the same strategy to show
that the hypotheses of B.1 hold. We will start with a specific permutation
$\pi$. We will then exhibit two cycles $\delta_{1}$ and $\delta_{2}$ based at
$\pi$ in the reduced Rauzy diagram such that their induced matrices $A_{1}$
and $A_{2}$ in a preferred basis (for the action on absolute homology) can be
combined to produce the matrices $A$ and $B$ that the criterion requires.
Specifically, in all cases, $B$ can be taken to be $A_{1}$ and $A$ can be
taken to be $A_{1}A_{2}$.
These cycles $\delta_{1}$ and $\delta_{2}$ were found by a randomised computer
search on the reduced Rauzy diagrams. We chose relatively short cycles so that
the entries of the resulting matrices do not exceed $100$.
### B.2. Zariski density of $\mathcal{Q}(5,-1)$
Let
$\pi=\begin{pmatrix}1&2&3&2&4\\\ 4&5&5&3&1\end{pmatrix}$
and
$\displaystyle\delta_{1}={}$
$\displaystyle\mathrm{b}^{3}\mathrm{t}^{2}\mathrm{b}^{3}\mathrm{t}\mathrm{b}^{2}\mathrm{t}\mathrm{b}^{3}\mathrm{t}^{2}\mathrm{b}^{3}$
$\displaystyle\delta_{2}={}$
$\displaystyle\mathrm{t}^{2}\mathrm{b}\mathrm{t}\mathrm{b}\mathrm{t}\mathrm{b}\mathrm{t}^{3}\mathrm{b}\mathrm{t}\mathrm{b}\mathrm{t}\mathrm{b}^{2}\mathrm{t}^{2}$
Figure B.3. Representative of $\mathcal{Q}(5,-1)$.
Consider the four curves $c_{1},\dotsc,c_{4}$ depicted in Figure B.3 as solid
or dashed lines. These cycles form a basis for the absolute homology as their
intersection matrix is
$\Omega=\begin{pmatrix}0&0&-1&-1\\\ 0&0&1&0\\\ 1&-1&0&-1\\\
1&0&1&0\end{pmatrix}$
that has determinant $1$. On the other hand, the cycle $v\in
H_{1}(S;\mathbb{R})$ depicted in Figure B.3 as dash-dotted vertical lines can
be written as $v=-c_{2}+c_{3}+c_{4}$. Thus, the set
$B=\\{c_{1},c_{3},c_{4},v\\}$ readily satisfies the hypotheses of Lemma 9.8,
so the $M$-action is strongly irreducible.
In the chosen basis, the matrices induced by $\delta_{1}^{2}$ and
$\delta_{2}^{2}$ are
$A_{1}=\begin{pmatrix}1&-2&-2&0\\\ 0&-1&-2&0\\\ 0&0&1&2\\\
0&0&0&-1\end{pmatrix},\quad A_{2}=\begin{pmatrix}-1&0&0&0\\\ 0&-2&2&-1\\\
1&2&-1&2\\\ -2&1&-2&0\end{pmatrix}$
Then, $A=A_{1}A_{2}$ has the form
$A=\begin{pmatrix}-1&0&1&-2\\\ 2&2&-4&3\\\ 2&6&-7&0\\\ 0&5&-4&-4\\\
\end{pmatrix}$
The characteristic polynomial $P$ of $A$ is
$P(t)=t^{4}+10t^{3}+22t^{2}+10t+1$. We verified in Magma [BCP97] that $A$ is
Galois pinching, that is, it satisfies condition (1) of B.1. Setting
$B=A_{1}$, we similarly check that $B$ satisfies condition (2) of the
criterion. Thus, the plus piece of $\mathcal{Q}(5,-1)$ is Zariski dense inside
$\operatorname{Sp}(4,\mathbb{R})$.
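The stated characteristic polynomial, its reciprocity, and the realness and distinctness of the moduli of the eigenvalues of $A$ can be double-checked outside Magma. The following Python sketch (assuming sympy and numpy are available) does so; the maximality of the Galois group itself still requires the Magma computation:

```python
import numpy as np
import sympy as sp

# The matrix A = A_1 A_2 from Section B.2, in the chosen homology basis.
A = sp.Matrix([
    [-1, 0,  1, -2],
    [ 2, 2, -4,  3],
    [ 2, 6, -7,  0],
    [ 0, 5, -4, -4],
])

t = sp.symbols("t")
P = A.charpoly(t).as_expr()
assert P == t**4 + 10*t**3 + 22*t**2 + 10*t + 1

# P is reciprocal, as the characteristic polynomial of a symplectic
# matrix must be: t^4 P(1/t) = P(t).
assert sp.expand(t**4 * P.subs(t, 1 / t)) == P

# Necessary part of the Galois-pinching condition: all eigenvalues are
# real with pairwise distinct moduli.
eigs = np.linalg.eigvals(np.array(A.tolist(), dtype=float))
assert np.all(np.abs(eigs.imag) < 1e-9)
moduli = np.sort(np.abs(eigs))
assert np.all(np.diff(moduli) > 1e-9)
```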
### B.4. Zariski density of $\mathcal{Q}(9,-1)^{\mathrm{irr}}$
Let
$\pi=\begin{pmatrix}1&2&3&4&5&6&3\\\ 7&7&6&5&4&2&1\end{pmatrix}$
and
$\displaystyle\delta_{1}={}$
$\displaystyle\mathrm{b}^{4}\mathrm{t}^{5}\mathrm{b}^{3}\mathrm{t}\mathrm{b}^{5}\mathrm{t}\mathrm{b}^{6}\mathrm{t}^{2}$
$\displaystyle\delta_{2}={}$
$\displaystyle\mathrm{b}^{4}\mathrm{t}^{2}\mathrm{b}^{3}\mathrm{t}^{3}\mathrm{b}^{7}\mathrm{t}^{3}\mathrm{b}\mathrm{t}^{3}\mathrm{b}\mathrm{t}^{2}\mathrm{b}^{2}\mathrm{t}^{3}\mathrm{b}^{2}\mathrm{t}^{2}\mathrm{b}^{2}\mathrm{t}^{2}\mathrm{b}^{2}$
Consider the six curves $c_{1},\dotsc,c_{6}$ depicted in Figure B.5 as solid
or dashed lines. These cycles form a basis for the absolute homology, as their
intersection matrix is
$\Omega=\begin{pmatrix}0&-1&0&-1&-1&-1\\\ 1&0&0&-1&-1&-1\\\ 0&0&0&1&1&1\\\
1&1&-1&0&-1&-1\\\ 1&1&-1&1&0&-1\\\ 1&1&-1&1&1&0\end{pmatrix}$
that has determinant $1$. On the other hand, the cycle $v\in
H_{1}(S;\mathbb{R})$ depicted in Figure B.5 as dash-dotted vertical lines can
be written as $v=c_{2}+c_{3}$. Thus, the set
$B=\\{c_{1},c_{2},c_{4},c_{5},c_{6},v\\}$ readily satisfies the hypotheses of
Lemma 9.8, so the $M$-action is strongly irreducible.
Figure B.5. Representative of $\mathcal{Q}(9,-1)^{\mathrm{irr}}$.
In the chosen basis, the matrices induced by $\delta_{1}^{2}$ and
$\delta_{2}^{2}$ are
$A_{1}=\begin{pmatrix}0&2&-1&-1&-1&-2\\\ 0&-1&0&0&0&0\\\ -1&-1&0&-2&-2&-1\\\
0&0&0&-1&0&0\\\ 0&0&0&0&-1&0\\\ 1&3&-1&2&2&0\end{pmatrix},\quad
A_{2}=\begin{pmatrix}-3&-10&-2&-4&-6&-4\\\ 1&3&0&2&2&0\\\ 2&3&-2&0&-1&0\\\
0&-2&-2&-1&-2&-2\\\ -3&-7&1&-2&-2&0\\\ 0&0&0&0&0&-1\end{pmatrix}$
Then, $A=A_{1}A_{2}$ has the form
$A=\begin{pmatrix}-2&0&2&0&-1&-1\\\ -6&-1&3&-2&0&-3\\\ 7&-1&-2&2&3&1\\\
3&-3&2&1&3&-2\\\ 5&-3&3&2&3&-2\\\ 8&-2&-2&2&5&0\end{pmatrix}$
The characteristic polynomial $P$ of $A$ is
$P(t)=t^{6}+t^{5}-22t^{4}-52t^{3}-22t^{2}+t+1$. Again, we use Magma to check
that $A$ is Galois pinching. Setting $B=A_{1}$, we can readily check that $B$
satisfies condition (2) of the criterion. Thus, the plus piece of
$\mathcal{Q}(9,-1)^{\mathrm{irr}}$ is Zariski dense inside
$\operatorname{Sp}(6,\mathbb{R})$.
### B.6. Zariski density of $\mathcal{Q}(12)^{\mathrm{reg}}$
Let
$\pi=\begin{pmatrix}1&2&1&3&4&5&6&7\\\ 2&4&3&6&5&8&7&8\end{pmatrix}$
and
$\displaystyle\delta_{1}={}$
$\displaystyle\mathrm{b}^{4}\mathrm{t}^{2}\mathrm{b}\mathrm{t}^{6}\mathrm{b}\mathrm{t}^{4}\mathrm{b}^{2}\mathrm{t}\mathrm{b}^{4}\mathrm{t}^{2}\mathrm{b}^{7}\mathrm{t}^{2}\mathrm{b}^{2}\mathrm{t}\mathrm{b}^{5}\mathrm{t}^{4}\mathrm{b}\mathrm{t}\mathrm{b}\mathrm{t}\mathrm{b}$
$\displaystyle\delta_{2}={}$
$\displaystyle\mathrm{b}\mathrm{t}\mathrm{b}\mathrm{t}^{3}\mathrm{b}\mathrm{t}^{6}\mathrm{b}^{2}\mathrm{t}^{4}\mathrm{b}\mathrm{t}^{3}\mathrm{b}^{2}\mathrm{t}^{3}\mathrm{b}^{2}\mathrm{t}\mathrm{b}^{4}\mathrm{t}^{2}\mathrm{b}\mathrm{t}^{3}\mathrm{b}^{2}\mathrm{t}^{2}\mathrm{b}^{3}\mathrm{t}\mathrm{b}^{5}$
Let $M$ be the monodromy group of $\mathcal{Q}(12)^{\mathrm{reg}}$. We have
that the $M$-action is strongly irreducible by Lemma 9.12.
Consider the eight curves $c_{1},\dotsc,c_{8}$ shown in Figure B.7. Ordered
appropriately, these curves form a symplectic basis. In this basis, the
matrices induced by $\delta_{1}^{2}$ and $\delta_{2}^{2}$ are
$\displaystyle A_{1}$ $\displaystyle=\begin{pmatrix}2&5&-1&7&6&2&1&0\\\
-1&0&1&-1&-2&1&3&0\\\ -1&2&-1&2&1&1&-2&0\\\ -1&-4&1&-6&-5&-2&0&0\\\
1&1&2&1&0&2&5&0\\\ -1&-3&1&-4&-4&-1&3&0\\\ 0&0&0&0&0&0&-1&0\\\
-1&-5&7&-11&-10&2&11&-1\end{pmatrix}$ $\displaystyle A_{2}$
$\displaystyle=\begin{pmatrix}1&3&-5&1&6&-7&-3&-1\\\ 0&-2&2&0&-3&3&1&1\\\
4&-1&-1&2&3&-4&1&0\\\ -7&5&-3&-2&-1&3&-5&-1\\\ 5&-5&3&2&-2&1&5&2\\\
2&0&-1&0&2&-3&0&-1\\\ 8&-1&-4&3&7&-10&0&-1\\\
2&3&-5&1&6&-7&-3&-2\end{pmatrix}$
Then, $A=A_{1}A_{2}$ has the form
$A=\begin{pmatrix}17&-7&15&-17&5&11&36&20\\\ 23&-13&25&-38&8&24&62&33\\\
6&0&0&7&0&-7&-5&-2\\\ 33&-20&34&-50&6&37&89&51\\\ 28&-16&31&-47&9&33&81&44\\\
15&-7&12&-15&3&8&27&15\\\ 21&-6&5&12&-6&-6&7&11\\\
1&-1&0&1&-2&1&1&2\end{pmatrix}$
Figure B.7. Representative of $\mathcal{Q}(12)^{\mathrm{reg}}$.
The characteristic polynomial $P$ of $A$ is
$P(t)=t^{8}+20t^{7}-1686t^{6}-24t^{5}+36258t^{4}-24t^{3}-1686t^{2}+20t+1$.
Using Magma, we can check that $A$ is Galois pinching. Setting $B=A_{1}$, we
can also similarly check that $B$ satisfies condition (2) of the criterion.
Thus, the plus piece of $\mathcal{Q}(12)^{\mathrm{reg}}$ is Zariski dense
inside $\operatorname{Sp}(8,\mathbb{R})$.
### B.8. Zariski density of $\mathcal{Q}(12)^{\mathrm{irr}}$
Let
$\pi=\begin{pmatrix}1&2&1&3&4&5&6&7\\\ 2&6&5&4&3&8&7&8\end{pmatrix}$
and
$\displaystyle\delta_{1}={}$
$\displaystyle\mathrm{b}^{3}\mathrm{t}\mathrm{b}^{2}\mathrm{t}^{5}\mathrm{b}\mathrm{t}^{5}\mathrm{b}^{2}\mathrm{t}^{3}\mathrm{b}\mathrm{t}^{2}\mathrm{b}^{2}\mathrm{t}^{6}\mathrm{b}\mathrm{t}^{2}\mathrm{b}^{2}\mathrm{t}\mathrm{b}\mathrm{t}^{3}\mathrm{b}^{2}\mathrm{t}^{6}\mathrm{b}^{4}$
$\displaystyle\delta_{2}={}$
$\displaystyle\mathrm{b}^{5}\mathrm{t}^{5}\mathrm{b}\mathrm{t}\mathrm{b}\mathrm{t}^{4}\mathrm{b}^{3}\mathrm{t}^{7}\mathrm{b}^{2}\mathrm{t}\mathrm{b}\mathrm{t}^{4}\mathrm{b}^{4}$
Consider the eight curves $c_{1},\dotsc,c_{8}$ depicted in Figure B.9 as solid,
dashed or dash-dotted lines. These cycles form a basis for the absolute
homology as their intersection matrix is
$\begin{pmatrix}0&1&0&0&0&0&0&0\\\ -1&0&0&0&0&0&0&0\\\ 0&0&0&-1&-1&-1&0&0\\\
0&0&1&0&-1&-1&0&0\\\ 0&0&1&1&0&-1&0&0\\\ 0&0&1&1&1&0&0&0\\\ 0&0&0&0&0&0&0&1\\\
0&0&0&0&0&0&-1&0\end{pmatrix}$
and has determinant $1$. On the other hand, the cycle $b$, depicted as the
slope-$1$ densely dotted lines, can be written as
$b=c_{1}+c_{2}-c_{3}+c_{4}+c_{7}-c_{8}$, and the cycle $p$ depicted as loosely
dotted horizontal lines can be written as $p=c_{1}+c_{8}$. Thus, the set
$B=\\{c_{2},c_{3},c_{4},c_{5},c_{6},c_{7},b,p\\}$ readily satisfies the
hypotheses of Lemma 9.8, so the $M$-action is strongly irreducible.
Figure B.9. Representative of $\mathcal{Q}(12)^{\mathrm{irr}}$.
In the chosen basis, the matrices induced by $\delta_{1}^{2}$ and
$\delta_{2}^{2}$ are
$\displaystyle A_{1}$ $\displaystyle=\begin{pmatrix}-1&6&-2&6&11&8&6&-4\\\
2&-5&0&-6&-9&-6&-4&6\\\ -1&-3&-2&-6&-8&-5&-3&4\\\ -1&-5&-1&-9&-11&-7&-5&6\\\
0&0&0&0&-1&0&0&0\\\ 1&3&1&6&8&4&3&-4\\\ -2&-4&2&-4&-8&-6&-5&0\\\
0&-6&2&-6&-11&-8&-6&3\\\ \end{pmatrix}$ $\displaystyle A_{2}$
$\displaystyle=\begin{pmatrix}-2&1&1&1&-1&-2&-3&0\\\ 3&-5&3&3&3&0&-2&0\\\
0&0&-1&0&0&0&0&0\\\ 0&0&0&-1&0&0&0&0\\\ 0&2&-2&-2&-1&2&2&0\\\
-3&4&-3&-3&-3&-1&2&0\\\ 0&0&0&0&0&0&-1&0\\\
-7&13&-9&-9&-7&2&5&-1\end{pmatrix}$
Then, $A=A_{1}A_{2}$ has the form
$A=\begin{pmatrix}6&-15&1&1&6&12&2&43\\\ -19&27&3&5&4&-25&4&-43\\\
-7&-19&2&1&12&18&-2&51\\\ -33&11&6&9&22&-11&4&13\\\
-41&34&8&11&21&-33&8&-29\\\ -24&30&5&7&8&-28&6&-40\\\
-15&24&3&5&4&-23&5&-35\\\ 32&-12&-4&-6&-16&10&0&5\end{pmatrix}$
The characteristic polynomial $P$ of $A$ is
$P(t)=t^{8}-47t^{7}-794t^{6}+11691t^{5}-22022t^{4}+11691t^{3}-794t^{2}-47t+1$.
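As a quick sanity check on the printed coefficients: since $A\in\operatorname{Sp}(8,\mathbb{Z})$, its characteristic polynomial must be reciprocal, i.e. $t^{8}P(1/t)=P(t)$, so the coefficient list is a palindrome and the constant term equals $\det A=1$. A short Python verification:

```python
# Coefficients of P(t), highest degree first, as printed above.
coeffs = [1, -47, -794, 11691, -22022, 11691, -794, -47, 1]

# Reciprocity t^8 * P(1/t) = P(t) <=> palindromic coefficient list.
assert coeffs == coeffs[::-1]

def P(t):
    """Evaluate P at t by Horner's scheme."""
    acc = 0
    for c in coeffs:
        acc = acc * t + c
    return acc

assert P(0) == 1  # constant term = det(A) = 1, as required for a symplectic matrix
```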
By using Magma again, we can explicitly check that $A$ is Galois pinching.
Setting $B=A_{1}$, we can similarly check that $B$ satisfies condition (2) of
the criterion. Thus, the plus piece of $\mathcal{Q}(12)^{\mathrm{irr}}$ is
Zariski dense inside $\operatorname{Sp}(8,\mathbb{R})$.
## References
* [Aar97] Jon Aaronson “An introduction to infinite ergodic theory” 50, Mathematical Surveys and Monographs American Mathematical Society, Providence, RI, 1997, pp. xii+284 DOI: 10.1090/surv/050
* [AGY06] Artur Avila, Sébastien Gouëzel and Jean-Christophe Yoccoz “Exponential mixing for the Teichmüller flow” In _Publ. Math. Inst. Hautes Études Sci._ , 2006, pp. 143–211 DOI: 10.1007/s10240-006-0001-5
* [AMY18] Artur Avila, Carlos Matheus and Jean-Christophe Yoccoz “Zorich conjecture for hyperelliptic Rauzy-Veech groups” In _Math. Ann._ 370.1-2, 2018, pp. 785–809 DOI: 10.1007/s00208-017-1568-5
* [AR12] Artur Avila and Maria João Resende “Exponential mixing for the Teichmüller flow in the space of quadratic differentials” In _Comment. Math. Helv._ 87.3, 2012, pp. 589–638 DOI: 10.4171/CMH/263
* [AV07] Artur Avila and Marcelo Viana “Simplicity of Lyapunov spectra: a sufficient criterion” In _Port. Math. (N.S.)_ 64.3, 2007, pp. 311–376 DOI: 10.4171/PM/1789
* [AV07a] Artur Avila and Marcelo Viana “Simplicity of Lyapunov spectra: proof of the Zorich-Kontsevich conjecture” In _Acta Math._ 198.1, 2007, pp. 1–56 DOI: 10.1007/s11511-007-0012-1
* [BCP97] Wieb Bosma, John Cannon and Catherine Playoust “The Magma algebra system. I. The user language” Computational algebra and number theory (London, 1993) In _J. Symbolic Comput._ 24.3-4, 1997, pp. 235–265 DOI: 10.1006/jsco.1996.0125
* [Bel+19] Mark Bell, Vincent Delecroix, Vaibhav Gadre, Rodolfo Gutiérrez-Romo and Saul Schleimer “Coding Teichmüller flow using veering triangulations”, 2019 arXiv:1909.00890 [math.DS]
* [Ben97] Y. Benoist “Propriétés asymptotiques des groupes linéaires” In _Geom. Funct. Anal._ 7.1, 1997, pp. 1–47 DOI: 10.1007/PL00001613
* [BL09] Corentin Boissy and Erwan Lanneau “Dynamics and geometry of the Rauzy-Veech induction for quadratic differentials” In _Ergodic Theory Dynam. Systems_ 29.3, 2009, pp. 767–816 DOI: 10.1017/S0143385708080565
* [BMP03] Michel Boileau, Sylvain Maillot and Joan Porti “Three-dimensional orbifolds and their geometric structures” 15, Panoramas et Synthèses [Panoramas and Syntheses] Société Mathématique de France, Paris, 2003, pp. viii+167
* [Boi20] Corentin Boissy “Moduli space of meromorphic differentials with marked horizontal separatrices” In _Algebr. Geom. Topol._ 20.5, 2020, pp. 2373–2412 DOI: 10.2140/agt.2020.20.2373
* [Cal20] Aaron Calderon “Connected components of strata of Abelian differentials over Teichmüller space” In _Comment. Math. Helv._ 95.2, 2020, pp. 361–420 DOI: 10.4171/CMH/491
* [CM14] Dawei Chen and Martin Möller “Quadratic differentials in low genus: exceptional and non-varying strata” In _Ann. Sci. Éc. Norm. Supér. (4)_ 47.2, 2014, pp. 309–369 DOI: 10.24033/asens.2216
* [CS19] Aaron Calderon and Nick Salter “Higher spin mapping class groups and strata of Abelian differentials over Teichmüller space”, 2019 arXiv:1906.03515 [math.GT]
* [CS19a] Aaron Calderon and Nick Salter “Relative homological representations of framed mapping class groups” In _Bull. Lond. Math. Soc._ , 2019 DOI: 10.1112/blms.12412
* [CS20] Aaron Calderon and Nick Salter “Framed mapping class groups and the monodromy of strata of Abelian differentials”, 2020 arXiv:2002.02472 [math.GT]
* [DN88] Claude Danthony and Arnaldo Nogueira “Involutions linéaires et feuilletages mesurés” In _C. R. Acad. Sci. Paris Sér. I Math._ 307.8, 1988, pp. 409–412
* [DN90] Claude Danthony and Arnaldo Nogueira “Measured foliations on nonorientable surfaces” In _Ann. Sci. École Norm. Sup. (4)_ 23.3, 1990, pp. 469–494 URL: http://www.numdam.org/item?id=ASENS_1990_4_23_3_469_0
* [EFW18] Alex Eskin, Simion Filip and Alex Wright “The algebraic hull of the Kontsevich-Zorich cocycle” In _Ann. of Math. (2)_ 188.1, 2018, pp. 281–313 DOI: 10.4007/annals.2018.188.1.5
* [EKZ14] Alex Eskin, Maxim Kontsevich and Anton Zorich “Sum of Lyapunov exponents of the Hodge bundle with respect to the Teichmüller geodesic flow” In _Publ. Math. Inst. Hautes Études Sci._ 120, 2014, pp. 207–333 DOI: 10.1007/s10240-013-0060-3
* [EMM15] Alex Eskin, Maryam Mirzakhani and Amir Mohammadi “Isolation, equidistribution, and orbit closures for the ${\rm SL}(2,\mathbb{R})$ action on moduli space” In _Ann. of Math. (2)_ 182.2, 2015, pp. 673–721 DOI: 10.4007/annals.2015.182.2.7
* [Fil16] Simion Filip “Splitting mixed Hodge structures over affine invariant manifolds” In _Ann. of Math. (2)_ 183.2, 2016, pp. 681–713 DOI: 10.4007/annals.2016.183.2.5
* [Fil17] Simion Filip “Zero Lyapunov exponents and monodromy of the Kontsevich-Zorich cocycle” In _Duke Math. J._ 166.4, 2017, pp. 657–706 DOI: 10.1215/00127094-3715806
* [Gut17] Rodolfo Gutiérrez-Romo “Simplicity of the Lyapunov spectra of certain quadratic differentials”, 2017 arXiv:1711.02006 [math.DS]
* [Gut19] Rodolfo Gutiérrez-Romo “Classification of Rauzy-Veech groups: proof of the Zorich conjecture” In _Invent. Math._ 215.3, 2019, pp. 741–778 DOI: 10.1007/s00222-018-0836-7
* [Ham18] Ursula Hamenstädt “Typical properties of periodic Teichmüller geodesics: stretch factors”, 2018 URL: http://www.math.uni-bonn.de/people/ursula/tracefield.pdf
* [Hir94] Morris W. Hirsch “Differential topology” Corrected reprint of the 1976 original 33, Graduate Texts in Mathematics Springer-Verlag, New York, 1994, pp. x+222
* [Joh80] Dennis Johnson “Spin structures and quadratic forms on surfaces” In _J. London Math. Soc. (2)_ 22.2, 1980, pp. 365–373 DOI: 10.1112/jlms/s2-22.2.365
* [KMS86] Steven Kerckhoff, Howard Masur and John Smillie “Ergodicity of billiard flows and quadratic differentials” In _Ann. of Math. (2)_ 124.2, 1986, pp. 293–311 DOI: 10.2307/1971280
* [Kon97] M. Kontsevich “Lyapunov exponents and Hodge theory” In _The mathematical beauty of physics (Saclay, 1996)_ 24, Adv. Ser. Math. Phys. World Sci. Publ., River Edge, NJ, 1997, pp. 318–332
* [KZ03] Maxim Kontsevich and Anton Zorich “Connected components of the moduli spaces of Abelian differentials with prescribed singularities” In _Invent. Math._ 153.3, 2003, pp. 631–678 DOI: 10.1007/s00222-003-0303-x
* [KZ97] Maxim Kontsevich and Anton Zorich “Lyapunov exponents and Hodge theory”, 1997 arXiv:hep-th/9701164 [hep-th]
* [Lan04] Erwan Lanneau “Hyperelliptic components of the moduli spaces of quadratic differentials with prescribed singularities” In _Comment. Math. Helv._ 79.3, 2004, pp. 471–501 DOI: 10.1007/s00014-004-0806-0
* [Lan08] Erwan Lanneau “Connected components of the strata of the moduli spaces of quadratic differentials” In _Ann. Sci. Éc. Norm. Supér. (4)_ 41.1, 2008, pp. 1–56 DOI: 10.24033/asens.2062
* [Mas82] Howard Masur “Interval exchange transformations and measured foliations” In _Ann. of Math. (2)_ 115.1, 1982, pp. 169–200 DOI: 10.2307/1971341
* [Mat21] Carlos Matheus, 2021
* [MMY15] Carlos Matheus, Martin Möller and Jean-Christophe Yoccoz “A criterion for the simplicity of the Lyapunov spectrum of square-tiled surfaces” In _Invent. Math._ 202.1, 2015, pp. 333–425 DOI: 10.1007/s00222-014-0565-5
* [PR14] Gopal Prasad and Andrei S. Rapinchuk “Generic elements in Zariski-dense subgroups and isospectral locally symmetric spaces” In _Thin groups and superstrong approximation_ 61, Math. Sci. Res. Inst. Publ. Cambridge Univ. Press, Cambridge, 2014, pp. 211–252
* [Rau79] Gérard Rauzy “Échanges d’intervalles et transformations induites” In _Acta Arith._ 34.4, 1979, pp. 315–328 DOI: 10.4064/aa-34-4-315-328
* [Ste+20] W.. Stein “Sage Mathematics Software (Version 9.1)” http://www.sagemath.org, 2020 The Sage Development Team
* [Tre13] Rodrigo Treviño “On the non-uniform hyperbolicity of the Kontsevich-Zorich cocycle for quadratic differentials” In _Geom. Dedicata_ 163, 2013, pp. 311–338 DOI: 10.1007/s10711-012-9751-z
* [Vee82] William A. Veech “Gauss measures for transformations on the space of interval exchange maps” In _Ann. of Math. (2)_ 115.1, 1982, pp. 201–242 DOI: 10.2307/1971391
* [Vee86] William A. Veech “The Teichmüller geodesic flow” In _Ann. of Math. (2)_ 124.3, 1986, pp. 441–530 DOI: 10.2307/2007091
* [Yoc10] Jean-Christophe Yoccoz “Interval exchange maps and translation surfaces” In _Homogeneous flows, moduli spaces and arithmetic_ 10, Clay Math. Proc. Amer. Math. Soc., Providence, RI, 2010, pp. 1–69
* [Zor08] Anton Zorich “Explicit Jenkins-Strebel representatives of all strata of abelian and quadratic differentials” In _J. Mod. Dyn._ 2.1, 2008, pp. 139–185 DOI: 10.3934/jmd.2008.2.139
* [Zor18] Anton Zorich, 2018
* [Zor97] Anton Zorich “Deviation for interval exchange transformations” In _Ergodic Theory Dynam. Systems_ 17.6, 1997, pp. 1477–1499 DOI: 10.1017/S0143385797086215
* [Zor99] Anton Zorich “How do the leaves of a closed $1$-form wind around a surface?” In _Pseudoperiodic topology_ 197, Amer. Math. Soc. Transl. Ser. 2 Amer. Math. Soc., Providence, RI, 1999, pp. 135–178 DOI: 10.1090/trans2/197/05
# Unusual heat transport of the Kitaev material Na2Co2TeO6: putative quantum
spin liquid and low-energy spin excitations
Xiǎochén Hóng,1,2,∗ Matthias Gillig,1,∗ Richard Hentrich,1,∗
Weiliang Yao,3 Vilmos Kocsis,1 Arthur R. Witte,1,4 Tino Schreiner,1 Danny
Baumann,1 Nicolás Pérez,1 Anja U. B. Wolter,1 Yuan Li,3,5 Bernd Büchner,1,4,6
and Christian Hess1,2
∗These authors contributed equally to this work.
1Leibniz-Institute for Solid State and Materials Research (IFW-Dresden), 01069
Dresden, Germany
2Fakultät für Mathematik und Naturwissenschaften, Bergische Universität
Wuppertal, 42097 Wuppertal, Germany
3International Center for Quantum Materials, School of Physics, Peking
University, 100871 Beijing, China
4Institute of Solid State and Materials Physics and Würzburg-Dresden Cluster
of Excellence ct.qmat, Technische Universität Dresden, 01062 Dresden, Germany
5Collaborative Innovation Center of Quantum Matter, 100871 Beijing, China
6Center for Transport and Devices, Technische Universität Dresden, 01069
Dresden, Germany
###### Abstract
We studied the field-dependent thermal conductivity ($\kappa$) of Na2Co2TeO6,
a compound considered a realization of the Kitaev model based on
high-spin $d^{7}$ Co2+ ions. We found that in-plane magnetic fields beyond a
critical value $B_{c}\approx$ 10 T are able to drastically enhance $\kappa$ at
low temperatures, resulting in a double-peak structure of $\kappa(T)$ that
closely resembles the behavior of $\alpha$-RuCl3. This result suggests that
heat transport in Na2Co2TeO6 is primarily phononic, and it is strongly
affected by scattering from magnetic excitations that are highly tunable by
external fields. Interestingly, for magnetic fields
$\textbf{B}\mathbin{\\!/\mkern-5.0mu/\\!}\textbf{a}$ (i.e., along the zigzag
direction of the Co-Co bonds), there is an extended field range which
separates the long-range magnetic order for $B\leq B_{c}\approx 10$ T and the
partially spin-polarized gapped high-field phase for $B\gtrsim 12$ T. The low-
energy phonon scattering is particularly strong in this field range,
consistent with the notion that the system becomes a quantum spin liquid with
prominent spin fluctuations down to energies of no more than 2 meV.
The Kitaev model, a compass model of effective spin-$1/2$ moments on a
two-dimensional honeycomb lattice, is an emerging research field Kitaev ; JvdB ;
Trebst ; JPCM . Its exact ground states are topological quantum spin liquids
(QSL) with Majorana fermion excitations Kitaev . To accommodate
pseudospin$-1/2$ magnetism and bond-dependent exchange couplings in real
materials, heavy transition metal compounds in the low-spin $d^{5}$ electron
configuration are preferred TakagiRev ; JK . Guided by theoretical predictions
JK ; Rau ; ChenG , Ir4+ oxides and Ru3+ chloride have been studied extensively
Takagi ; Na213 ; a213 ; b213 ; g213 ; Cu213 ; Ag213 ; H213 ; CuLi213 ; CaoHB ;
Baenitz . Those studies imply that in real materials, the Heisenberg
interactions ($J$) and off-diagonal terms ($\Gamma$) must also be taken
into account in the Hamiltonian. Thus, instead of a pure Kitaev QSL, the
ground states of these compounds are usually some form of magnetic order JPCM
. Nevertheless, the Kitaev physics is still expected to be prominent in
$proximate$ Kitaev materials as long as the Kitaev term ($K$) is relatively
large Perkins ; Pollmann .
Very recently, $\alpha$-RuCl3 has become the most representative proximate Kitaev
material. Despite its zigzag-ordered ground state in zero field,
evidence of Kitaev-Heisenberg excitations (i.e. the exotic magnetic
excitations associated with the prominence of Kitaev interactions) have been
reported by diverse experimental tools Raman ; NS-NM ; NS-NP ; NS-PRL ; NS-S ;
Loidl ; THz ; Wellm ; Richard ; LeeMY ; Shibauchi ; MiaoH . More
interestingly, its magnetic order can be melted by a moderate in-plane field
Baek ; Sears ; Anja ; Richard ; LeeMY ; WangZ , and the sought-after half-
integer quantization of the thermal Hall conductivity was claimed in a narrow
field range above this critical point Kasahara . In the high field limit,
$\alpha$-RuCl3 further enters into a partially-polarized phase with a spin
excitation gap which opens up linearly in field HF ; Richard ; Baek ; Anja ;
WangZ .
The rich temperature-field phase diagram of the $JK\Gamma$ model suggested by
$\alpha$-RuCl3 is exciting, in particular due to the putative QSL phase
surrounding the quantum critical point that separates the long-range magnetic
order from the partially field-polarized phase (see Fig. 1). However, it
remains to be clarified which of the features in the phase diagram are
related to the $JK\Gamma$ model, and which of them are merely material-
specific properties. The latter is a particularly complex issue since
$\alpha$-RuCl3 is sensitive to disorder and suffers from stacking faults and
twinning Vojta ; Coldea ; Yamashita . In this regard, it is useful to
investigate a sibling Kitaev material in parallel and compare it to
$\alpha$-RuCl3. Recently, two theoretical papers simultaneously proposed
another route to realize the Kitaev model in $d^{7}$ cobaltates LiuHM ; Motome
. Demonstrating Kitaev interactions in the high-spin $d^{7}$ systems not only
extends the terrain of candidate Kitaev compounds, but also introduces a
mechanism to strongly reduce the $J$ interactions, thus intensifying the
dominance of the $K$ term LiuHM ; Motome . In Ref. LiuHM , Na2Co2TeO6 was
explicitly pointed out to be such a candidate. Stimulated by these predictions,
the previously overlooked Co-based frustrated materials came back into focus. Some
surprising results have already been reported in several honeycomb and
triangular Co-compounds since then ZRDd ; ZRDk ; ZRDt ; SunXF ; Trump ; Stock
; MaJ ; Park ; ChenWJ .
Figure 1: (Color online). Schematic sketch of the possible phase diagram of
the $JK\Gamma$ pseudospin-1/2 model, shaped by the studies mainly on
$\alpha$-RuCl3 JPCM ; TakagiRev . In-plane magnetic field can tune the zero-
field zigzag antiferromagnetic order into the high-field (partially) polarized
state with a gap which opens up linearly. A $Z_{2}$ QSL state is frequently
claimed to exist in a small field window at low temperature TakagiRev ; NS-NP
; Kasahara . The Kitaev-Heisenberg paramagnons are expected to survive up to
high temperatures comparable to the Kitaev interaction strength NS-S ; NS-NM ;
NS-NP ; Raman ; Richard ; JPCM .
In this letter, we report the thermal conductivity $\kappa$ of Na2Co2TeO6
single crystals from room temperature down to 6 K, with in-plane magnetic
fields $B$ up to 15 T. We unveil a strong impact of B on $\kappa$ at low
temperature: (i) beyond a magnetic field of $B_{c}\approx$ 10 T, $\kappa$
increases more than one order of magnitude, and (ii) the $\kappa$($T$) curves
at high fields exhibit a double-peak structure. Remarkably, this peculiar
$\kappa$($T,B$) profile is strikingly similar to that of $\alpha$-RuCl3 Richard
. Thus, our data strongly suggest a commonality in the physics of Na2Co2TeO6
and $\alpha$-RuCl3. We conclude, in particular, that $\kappa$($T,B$) is
essentially phononic, where the predominant phonon scattering is caused by
Kitaev-Heisenberg-type excitations, similar to those of $\alpha$-RuCl3.
Figure 2: (Color online). (a) Temperature dependence of the thermal
conductivity of two Na2Co2TeO6 single crystals without magnetic field. The
heat current was applied in two orthogonal directions, parallel to the
armchair ($\kappa_{A}$) and zigzag ($\kappa_{Z}$) edges, respectively. Insert
shows the derivative of the $\kappa_{A}$($T$) curve around the magnetic
ordering temperature. (b) The ratio between $\kappa_{Z}$($T$) and
$\kappa_{A}$($T$). The dashed region indicates the three-dimensional
magnetically ordered state ChenWJ . The dotted line highlights the peak
temperature $T^{peak}$. Figure 3: (Color online). Thermal conductivity of
Na2Co2TeO6 crystals in various magnetic fields applied along the two in-plane
high-symmetry directions. As the sketches show, the field was applied parallel
to the thermal gradient. (a) Fixed-field $\kappa$($T$) curves with
$\textbf{B}\mathbin{\\!/\mkern-5.0mu/\\!}\textbf{a}^{*}$. The field values (in
T) are indicated by the numbers next to each curve. Note that the $T$ axis is
plotted in log scale for clarity. (b) Magnified view of the low-temperature
part to show the low-field curves clearly. (c) Magnified view around the
magnetic transition $T_{N}\approx$ 26 K note . The zero-field $T_{N}$ is
indicated by the black arrow. (d) $\kappa$($B$) isotherms at selected
temperatures for $\textbf{B}\mathbin{\\!/\mkern-5.0mu/\\!}\textbf{a}^{*}$.
(e), (f), (g) and (h) show the same aspects for sample Z with
$\textbf{B}\mathbin{\\!/\mkern-5.0mu/\\!}\textbf{a}$. The slight increase of
$\kappa$ by suppressing the magnetic order is highlighted by the light green
region as an example for the 25 K isotherm in (h).
High-quality Na2Co2TeO6 single crystals were grown by a modified flux method
Yao . Two regular bar-shaped samples of $5\times 1\times 0.1$ mm3 were
employed in the thermal transport study. Due to the well-defined geometry, we
estimate the geometric error of $\kappa$ to be less than 10%. The longest
edges of the crystals were set parallel to the armchair (
$\mathbin{\\!/\mkern-5.0mu/\\!}\textbf{a}^{*}$, Sample A) and zigzag (
$\mathbin{\\!/\mkern-5.0mu/\\!}\textbf{a}$, Sample Z) directions of the Co-Co
bonds, respectively. Steady-state thermal conductivity measurements were
performed with the standard four point geometry. One side of the sample was
glued directly to the heat sink, and the other side was thermally excited by a
chip heater. The temperature gradient $\nabla T$ generated along the long edge
of the sample was measured by a differential AuFe/Chromel-P thermocouple which
had been calibrated carefully in magnetic field. A custom-built high-vacuum
low-noise probe provides the controlled heat sink temperature from 300 K to 6
K with a stability of about 0.1 mK. Magnetic fields up to 15 T were generated
by a commercial superconducting magnet. Fields were applied parallel to the
thermal gradient.
The temperature dependent thermal conductivity $\kappa$($T$) of both samples
in zero magnetic field are shown in Fig. 2(a). At first glance, these
$\kappa$($T$) curves resemble that of a conventional phononic heat conductor
unaffected by novel excitations Berman : here, $\kappa_{p}$($T$) $\approx
C_{p}\nu_{p}l_{p}$, with the phononic specific heat $C_{p}$, velocity
$\nu_{p}$, and mean free path $l_{p}$. At low temperature, $\kappa_{p}$
follows the specific heat, since $\nu_{p}$ and $l_{p}$ are practically
temperature independent, and thus rapidly grows with $T$. Towards higher $T$,
the reduction of the phonon mean free path $l_{p}$ by the sample-independent
phonon Umklapp processes dictates a $1/T$-like tail of $\kappa$($T$). A single
broad peak of $\kappa_{p}$($T$) is thus typically present at around 1/10 or
less of the Debye temperature $T^{peak}\lessapprox\Theta_{D}/10$ Berman .
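For orientation, the kinetic formula can be inverted to estimate the phonon mean free path; the sketch below includes the kinetic factor of 1/3 that the proportionality above absorbs, and all numerical values are illustrative placeholders rather than parameters of this work:

```python
# Kinetic estimate l_p = 3*kappa/(C_p*v_p); inputs are illustrative only.
kappa = 9.0    # W m^-1 K^-1, hypothetical conductivity near the peak
C_p = 3.0e5    # J m^-3 K^-1, hypothetical phononic specific heat per volume
v_p = 3.0e3    # m s^-1, typical acoustic sound velocity

l_p = 3.0 * kappa / (C_p * v_p)
print(f"phonon mean free path ~ {l_p * 1e9:.0f} nm")  # prints 30 nm
```

With such placeholder inputs the mean free path comes out in the tens of nanometers, far below the sample dimensions, consistent with scattering-limited transport.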
However, a closer inspection of the results shows that such a trivial
interpretation is untenable. The observed $T^{peak}\approx 45$ K seems abnormally high
considering $\Theta_{D}\approx$ 280 K which can be evaluated from the specific
heat data Yao . Additionally, as shown in Fig. 2(b), the
$\kappa_{Z}/\kappa_{A}$ ratio is essentially flat around $T^{peak}$, but drops
sharply at much lower $T$, suggesting $T^{peak}$ is not the result of a
canonical competition between growing $C_{p}$ and falling $l_{p}$ due to the
Umklapp scattering. Furthermore, the development of a three-dimensional long-
range magnetic order (see Ref. ChenWJ ) indeed has an impact on the
$\kappa$($T$) curves. The pertinent anomaly, albeit weak, manifests itself
clearly in the $T$-derivative of $\kappa$($T$) as shown in the inset of Fig. 2.
For common magnets, including $\alpha$-RuCl3, a sharp upturn of $\kappa$($T$)
is expected below the magnetic transition temperature $T_{N}$, due to the
suppression of the phonon-magnon scattering in the magnetically ordered state
ZRDk ; Richard ; LeeMY . Notably, we find $\kappa$($T$) of Na2Co2TeO6 to
continue decreasing below $T_{N}$. In our scenario of phonon-dominated heat
transport, which will be elaborated below, this difference may be attributed to
a substantially greater ratio between $k_{B}T_{N}$ ($T_{N}\sim 26$ K) and the
magnetic excitation gap ($\Delta\sim 1$ meV) of Na2Co2TeO6 than that of
$\alpha$-RuCl3 ($T_{N}\sim 7$ K and $\Delta\sim 2$ meV) ChenWJ ; NS-PRL ;
note .
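This ratio argument can be made quantitative with the values quoted above; the comparison below is a simple arithmetic sketch:

```python
k_B = 0.08617  # Boltzmann constant in meV per K

# k_B*T_N relative to the spin gap, using the values quoted in the text
ratio_ncto = k_B * 26.0 / 1.0   # Na2Co2TeO6:  ~2.24
ratio_rucl3 = k_B * 7.0 / 2.0   # alpha-RuCl3: ~0.30

assert ratio_ncto > 7 * ratio_rucl3  # more than sevenfold larger
```

In Na2Co2TeO6 the ordering temperature thus sits well above the gap scale, so magnetic excitations remain thermally populated (and keep scattering phonons) deep inside the ordered phase.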
Our above notion of predominant phonon heat transport, which is limited at low
$T$ by magnetic scattering, is further supported by the in-plane magnetic
field effects on $\kappa$. As depicted in Fig. 3, the $\kappa$($T$,$B$)
profiles are qualitatively similar between the two samples under B applied in
orthogonal in-plane directions, and also between Na2Co2TeO6 and $\alpha$-RuCl3
Richard . Magnetic fields have a negligible impact on the $\kappa$($T$) curves
above 50 K, but change them profoundly at lower temperatures. The
$\kappa$($T$) curves soar in high fields ($B>$ 10 T), and display a double-
peak structure, resembling the peculiar features of $\alpha$-RuCl3 Richard .
The same holds for the low-temperature $\kappa$($B$) isotherms which increase
dramatically above a critical field $B_{c}\sim$ 10 T. The strong similarity of
our data to the findings for $\alpha$-RuCl3 therefore corroborates our above
notion of predominant phonon heat transport with a characteristic scattering
due to the Kitaev-Heisenberg excitations. These peculiar features presented by
$\kappa$($T$) and $\kappa$($B$) at high fields can straightforwardly be
explained by restoring the phonon conductivity via opening a gap for the
Kitaev-Heisenberg excitations in the partially-polarized high-field phase,
exactly the same mechanism established for explaining the mentioned features
in $\alpha$-RuCl3 Richard .
Figure 4: (Color online). Color-contour representation of the $T$ derivative
of $\kappa$($T,B$) with field (a)
$\textbf{B}\mathbin{\\!/\mkern-5.0mu/\\!}\textbf{a}^{*}$ and (b)
$\textbf{B}\mathbin{\\!/\mkern-5.0mu/\\!}\textbf{a}$. The magnetic transitions
were determined by thermodynamic probes note , and are represented by the blue
points. The evolution of the onset temperature $T_{N}$ ($\ast$) of three-
dimensional magnetic order is nearly the same for both B directions. A field-
induced canting reversal ($\times$) is found for
$\textbf{B}\mathbin{\\!/\mkern-5.0mu/\\!}\textbf{a}^{*}$ but cannot be
detected for $\textbf{B}\mathbin{\\!/\mkern-5.0mu/\\!}\textbf{a}$ Yao ; note .
There is one additional transition (+) recognizable only for
$\textbf{B}\mathbin{\\!/\mkern-5.0mu/\\!}\textbf{a}^{*}$ at low $T$ note . The
lines are guides to the eye. The gap energy $\hbar$$\omega_{0}$/$k_{B}$ in the
partially-polarized phase extracted from the Callaway fit of the data (see
text) are shown as black circles (squares) to the right ordinates.
These results are presented in a more intuitive way in Fig. 4 as a color
contour plot of $\partial\kappa/\partial T$, against the magnetic phase
boundaries extracted from the magnetization measurements note . Three main
regions (I, II and III) can be assigned by referring to the established
conclusions of $\alpha$-RuCl3 Richard . $Region$ $I$ represents the (canted)
antiferromagnetic order at low-temperatures and low-fields, enclosed by the
$T_{N}$($B$) line Yao . As mentioned above, the phase boundary at $T_{N}$ is
barely visible in $\kappa$($T$) curves, and the curves bend downwards in the
ordered phase. Instead of gapping out the Kitaev-Heisenberg excitations like
$\alpha$-RuCl3 Richard , spin-phonon scattering is enhanced by entering this
phase. Notably, an intermediate field region labeled as $Region$ $I^{*}$ can
be recognized below about 10 K for both B directions. It features an
enhancement of $\kappa$, seen more clearly in the low-$T$ $\kappa$($T$) curves
(Fig. 3(b) and 3(f)) and the $T=7$ K $\kappa$($B$) isotherms (Fig. 3(d) and
3(h)). It is obvious that the more complex magnetic phases detected via
thermodynamic studies for
$\textbf{B}\mathbin{\\!/\mkern-5.0mu/\\!}\textbf{a}^{*}$ confer more features
to the $\kappa$($B$) curve. We are aware that an intermediate-field phase was
also reported in recent $\alpha$-RuCl3 studies, but it only exists for a
certain B direction (corresponding to
$\textbf{B}\mathbin{\\!/\mkern-5.0mu/\\!}\textbf{a}$ in our representation)
Balz . One explanation for the features of Region I∗ is that the magnetic texture
changes inside the canted AFM ordered state, which reduces the spin-phonon
scattering. Of course, it is also possible that additional transport channels
of magnons are activated inside Region I∗. Further studies on this region are
desired.
$Region$ $II$ is constituted by a narrow field range above $B_{c}\approx 10$ T
before the system enters the higher field phase (Region III). This Region II
is smoothly connected to the broader region at higher temperatures which is
dominated by the Kitaev-Heisenberg paramagnons. It is of particular interest
whether the ground state of this phase is a QSL as is conjectured for
$\alpha$-RuCl3 Kasahara . Indeed, while our data for
$\textbf{B}\mathbin{\\!/\mkern-5.0mu/\\!}\textbf{a}^{*}$ suggest a very narrow
width of a possible QSL state in Region II, the data for
$\textbf{B}\mathbin{\\!/\mkern-5.0mu/\\!}\textbf{a}$ leave a quite large range
of approximately 10 $-$ 11.5 T for it.
Finally, $Region$ $III$ is the high-field phase where the $\kappa$($T$) curves
acquire a double peak structure and $\kappa$($B$) exhibits a strong increase.
For $\textbf{B}\mathbin{\\!/\mkern-5.0mu/\\!}\textbf{a}^{*}$, Region III seems
to set in for $B>B_{c}\approx 10$ T, whereas for
$\textbf{B}\mathbin{\\!/\mkern-5.0mu/\\!}\textbf{a}$, our data suggest this
phase to appear at $B\gtrsim 11.5$ T. Both features imply the opening of a gap
for the spin excitations in high fields, connected to the partially polarized
phase HF ; Richard ; Baek ; Anja ; WangZ ; MV ; Winter . The inflection point
of $\kappa$($T$), $T_{min}$, can serve as a rough gauge of the energy of the
phonon-scattering magnetic modes Richard ; HF . $T_{min}$ manifests itself as
the zero contour line in Fig. 4. One can also see the tendency in Fig. 3 that
$T_{min}$ shifts to higher temperature in larger field.
We analyzed the high-field data based on the Callaway model as performed in
$\alpha$-RuCl3 Richard ; note . The extracted gap sizes
($\hbar\omega_{0}/k_{B}$) of magnetic excitations are summarized on the right
ordinates of Fig. 4. It is clear that fields applied along the two directions
open this gap roughly linearly. The $\hbar\omega_{0}/k_{B}$ $vs$ $B$ slopes
for the two field directions are slightly different, and both of them are much
larger than that of $\alpha$-RuCl3 Richard ; Anja ; Baek . In analogy to
$\alpha$-RuCl3 and according to the results of the theory and calculations for
the $JK\Gamma$ model, one might speculate that $\hbar\omega_{0}$ can be
identified with a van Hove singularity near the $\Gamma$ point Anja ; Winter ;
MV . We expect that our experimentally determined parameters of Na2Co2TeO6
will prove essential for a theoretical construction of its magnon spectrum and
for finding its place in the $JK\Gamma$ parameter space.
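To illustrate the fitting idea (this is not the actual fit code of Ref. Richard), consider a minimal Debye-Callaway sketch in which phonons are scattered by a background channel plus thermally populated magnetic excitations whose number is suppressed by a Boltzmann factor $\exp(-\hbar\omega_{0}/k_{B}T)$ once the field opens a gap. All rate parameters are arbitrary placeholders:

```python
import math

def callaway_kappa(T, gap_K, theta_D=280.0, rate_bg=1.0e10, rate_mag=5.0e9, n=2000):
    """Debye-Callaway phonon conductivity in arbitrary units.

    T      : temperature (K)
    gap_K  : magnetic gap hbar*omega_0/k_B (K); apart from theta_D (taken
             from the specific-heat estimate quoted in the text), all other
             parameters are arbitrary placeholders.
    """
    # magnetic scatterers freeze out as exp(-gap/T) once the gap opens
    rate = rate_bg + rate_mag * math.exp(-gap_K / T)
    xmax = theta_D / T
    total = 0.0
    for i in range(n):  # midpoint rule over x = hbar*omega/(k_B*T)
        x = xmax * (i + 0.5) / n
        ex = math.exp(x)
        total += x ** 4 * ex / (ex - 1.0) ** 2 / rate  # Debye spectral weight
    return T ** 3 * total * xmax / n

# a larger field-induced gap suppresses the scattering and restores kappa
assert callaway_kappa(10.0, gap_K=60.0) > callaway_kappa(10.0, gap_K=5.0)
```

Fitting such a model to the measured $\kappa$($T$) at each field yields the gap values $\hbar\omega_{0}/k_{B}$ plotted in Fig. 4.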
It is interesting to apply the information on the spin excitation gap gained
in Region III to Regions I and II, in particular at zero field. As
exhibited in Fig. 4, $T_{min}$ is related to the gap size by $\alpha
T_{min}=\hbar\omega_{0}/k_{B}$ with $\alpha\approx 3$ note . Since there is no
$T_{min}$ resolved in the zero field $\kappa$($T$) curves down to 6 K, we can
use this information to estimate that strong spin excitations exist at least
down to an energy scale of 2 meV, deep inside the magnetically ordered phase
in zero field. Besides, we stress that for
$\textbf{B}\mathbin{\\!/\mkern-5.0mu/\\!}\textbf{a}$, $\kappa$($T$) at
$B\approx B_{c}$ is virtually identical to that at zero field down to the
lowest $T$ (see Fig. 3(f)). This implies a similarly small energy scale for
the spin excitations in Region II, which underpins the possible realization of
a field-driven $U$(1) QSL state Hickey .
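The quoted bound follows from simple arithmetic with the values given above:

```python
k_B = 0.08617   # Boltzmann constant in meV per K
alpha = 3.0     # empirical T_min-to-gap ratio from the Callaway analysis
T_lowest = 6.0  # K; no minimum in kappa(T) resolved down to here at B = 0

gap_upper = alpha * T_lowest * k_B
print(f"spin excitations persist below ~{gap_upper:.2f} meV")  # ~1.55 meV
```

The result, about 1.55 meV, is what the text rounds to the 2 meV scale.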
To summarize, we presented the thermal conductivity of a new Kitaev material
Na2Co2TeO6, and explained the data as phononic heat transport strongly
scattered by magnetic excitations. In-plane magnetic field confers different
ground states to Na2Co2TeO6. Strong scattering survives below $T_{N}$ in the
low-field magnetically ordered state Region I. A linearly opened excitation
gap is extracted from the unusual double-peak $\kappa$($T$) curves in the
high-field partially polarized state Region III. In between them is the most
exciting state Region II (and perhaps also Region I∗), where strong low-energy
spin fluctuations are present despite the absence of magnetic order.
Especially for $\textbf{B}\mathbin{\\!/\mkern-5.0mu/\\!}\textbf{a}$, we found
a relatively wide field window suitable for a QSL state. Overall, our results
support the conjecture that Na2Co2TeO6 is another materialization of the
Kitaev model. The many interesting phenomena established for $\alpha$-RuCl3
deserve to be sought in Na2Co2TeO6 as well, which will promote the
understanding of proximate Kitaev materials.
We would like to thank Vladislav Kataev, Christoph Wellm, and Xenophon Zotos
for fruitful discussions, and thank Juliane Scheiter for technical support.
This work has been supported by the Deutsche Forschungsgemeinschaft through
SFB 1143 (project-id 247310070), the Würzburg-Dresden Cluster of Excellence on
Complexity and Topology in Quantum Matter ct.qmat (EXC 2147, Project No.
390858490). This work has further been supported by the European Research
Council (ERC) under the European Union’s Horizon 2020 research and innovation
programme (Grant Agreement No. 647276-MARS-ERC-2014-CoG). Work at Peking
University was supported by the NSF of China under Grant No. 11888101, and by
the NBRP of China under Grant No. 2018YFA0305602.
(Present address: FU Berlin, Department of Mathematics and Computer Science, Arnimallee 6, 14195 Berlin, Germany)
# Insights from exact exchange-correlation kernels
N. D. Woods<EMAIL_ADDRESS>
Theory of Condensed Matter, Cavendish Laboratory, University of Cambridge, Cambridge, CB3 0HE, United Kingdom

M. T. Entwistle
Department of Physics, University of York, and European Theoretical Spectroscopy Facility, Heslington, York YO10 5DD, United Kingdom

R. W. Godby
Department of Physics, University of York, and European Theoretical Spectroscopy Facility, Heslington, York YO10 5DD, United Kingdom
###### Abstract
The exact exchange-correlation (xc) kernel
$f_{\text{xc}}(x,x^{\prime},\omega)$ of linear response time-dependent density
functional theory is computed over a wide range of frequencies, for three
canonical one-dimensional finite systems. Methods used to ensure the numerical
robustness of $f_{\text{xc}}$ are set out. The frequency dependence of
$f_{\text{xc}}$ is found to be due largely to its analytic structure, i.e. its
singularities at certain frequencies, which are required in order to capture
particular transitions, including those of double excitation character.
However, within the frequency range of the first few interacting excitations,
$f_{\text{xc}}$ is approximately $\omega$-independent, meaning the exact
adiabatic approximation $f_{\text{xc}}(\omega=0)$ remedies the failings of the
local density approximation and random phase approximation for these lowest
transitions. The key differences between the exact $f_{\text{xc}}$ and its
common approximations are analyzed, and cannot be eliminated by exploiting the
limited gauge freedom in $f_{\text{xc}}$. The optical spectrum benefits from using an $f_{\text{xc}}$ and a ground-state xc potential that are as accurate as possible, while maintaining exact compatibility between the two is of less importance.
## I Introduction
The density-density linear response function of a many-body quantum system can
be used to extract a great deal of excited-state information about the system,
for example, its optical transition probabilities and transition energies when
subject to incident light [1]. Linear response time-dependent density
functional theory (DFT) constitutes an exact methodology, in principle, for
recovering an interacting response function from the response function of a
corresponding Kohn-Sham system [2, 3, 4, 5, 6, 7, 8]. The interacting density-density response function $\chi(x,x^{\prime},|t-t^{\prime}|)$ describes the first-order change in the density due to a perturbation in the external potential [2, 3]:
$\displaystyle\chi(x,x^{\prime},|t-t^{\prime}|)=\left.\frac{\delta
n(x,t)}{\delta v_{\text{ext}}(x^{\prime},t^{\prime})}\right|_{n_{0}}.$ (1)
The Kohn-Sham response function $\chi_{0}=\delta n/\delta v_{\text{KS}}$ is
specified with the ground-state exchange-correlation (xc) potential
$v_{\text{xc}}(x)$, and then a map from the Kohn-Sham response function to the
interacting response function is established using the xc kernel,
$\displaystyle f_{\text{xc}}(x,x^{\prime},|t-t^{\prime}|)=\left.\frac{\delta
v_{\text{xc}}(x,t)}{\delta n(x^{\prime},t^{\prime})}\right|_{n_{0}},$ (2)
i.e. the first-order change in the xc potential due to a perturbation in the
density. The uniqueness of this map is guaranteed by the Runge-Gross theorem
of time-dependent DFT [9, 10], and the definition of $f_{\text{xc}}$ in Eq.
(2) demonstrates that, like the ground-state xc potential, $f_{\text{xc}}$ is
a functional of the ground-state density $n_{0}$. The principal aim of this
work is to elucidate the structure and features of the exact numerical
$f_{\text{xc}}$, that is, $f_{\text{xc}}(x,x^{\prime},|t-t^{\prime}|)$
including the full extent of its spatial and temporal character.
The map from the Kohn-Sham response function to the interacting response
function is identified with the requirement that density perturbations in the
Kohn-Sham system match those in the interacting system. This map is often
referred to as the Dyson equation of linear response time-dependent DFT,
$\displaystyle\chi(x,x^{\prime},\omega)=\chi_{0}(x,x^{\prime},\omega)+\iint\chi_{0}(x,x^{\prime\prime},\omega)\left\{f_{\text{H}}(x^{\prime\prime},x^{\prime\prime\prime})+f_{\text{xc}}(x^{\prime\prime},x^{\prime\prime\prime},\omega)\right\}\chi(x^{\prime\prime\prime},x^{\prime},\omega)\,dx^{\prime\prime}dx^{\prime\prime\prime},$
where $f_{\text{H}}=\delta v_{\text{H}}/\delta n$ is the Hartree kernel (the
electron-electron interaction) and all objects are now expressed in the
frequency domain $\omega$, the Fourier transform of the time domain
$|t-t^{\prime}|$. An approximate Kohn-Sham response function in conjunction
with an approximate $f_{\text{xc}}$ provides an approximation to the exact
interacting response function, from which a host of properties can be
calculated, such as the optical absorption spectrum [11] and the ground-state
correlation energy [12, 13]. The optical absorption spectrum [2],
$\displaystyle\sigma(\omega)=-\frac{4\pi\omega}{c}\iint\text{Im}(\chi(x,x^{\prime},\omega))xx^{\prime}\
dxdx^{\prime},$ (3)
is the main focus of this work, and provides the transition energies and
transition rates of a sample subject to classical light within the dipole
approximation; Hartree atomic units $m_{e}=\hbar=e=4\pi\varepsilon_{0}=1$ are used throughout. Linear response time-dependent DFT is now a prominent method used to compute excited- and ground-state properties of finite and extended systems.
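As a concrete illustration of how Eq. (3) might be evaluated on a real-space grid, the following sketch (our own, not taken from any code associated with the paper; all names are hypothetical) approximates the double integral by a Riemann sum, with a rank-one $\text{Im}\,\chi$ standing in for a single transition; $c\approx 137.036$ is the speed of light in atomic units:

```python
import numpy as np

def optical_spectrum(im_chi, x, omega, c=137.036):
    """sigma(omega) = -(4 pi omega / c) * iint Im[chi(x, x', omega)] x x' dx dx',
    approximated by a Riemann sum on a uniform grid."""
    dx = x[1] - x[0]
    # dipole-weighted double sum: sum_{ij} x_i Im[chi]_{ij} x_j dx^2
    return -4.0 * np.pi * omega / c * (x @ im_chi @ x) * dx * dx

# toy check: a single transition with an odd (dipole-allowed) transition density
x = np.linspace(-5.0, 5.0, 401)
f = x * np.exp(-x**2)            # hypothetical transition density <0|n(x)|n>
im_chi = -np.outer(f, f)         # Im chi < 0 near resonance (absorption)
sigma = optical_spectrum(im_chi, x, omega=1.0)
```

A dipole-allowed transition (odd transition density in a symmetric system) yields a positive absorption strength, as expected from Eq. (3).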
The development and understanding of approximate xc kernels has been the
subject of intense interest over the past decades, see, for example, [15, 7,
2, 16, 11] and references therein. In most cases, these approximations can be
sorted hierarchically depending on the level of theory involved in the
approximation. The lowest orders of this hierarchy contain the random phase
approximation (RPA) and the adiabatic local density approximation (LDA). The
former ignores exchange and correlation at the level of the xc kernel entirely
by setting $f^{\text{RPA}}_{\text{xc}}=0$ [17], and the latter includes
exchange and correlation within the framework of an LDA, leading to an
adiabatic, spatially local
$f_{\text{xc}}^{\text{ALDA}}\propto\delta(x-x^{\prime})\delta(t-t^{\prime})$
[18, 19].
The xc kernel itself, however, is known to possess a range of pathological
features that depart significantly from these approximations. In particular,
certain circumstances demand a spatial ultra-non-locality in $f_{\text{xc}}$
[2]. Furthermore, a non-adiabatic temporal structure is known to be essential
to capture excitations of a multi-particle character [20, 21, 22, 23]. These
are two manifestations of the fact that the exact $f_{\text{xc}}$ contains all
correlated many-body effects. More sophisticated approximations to
$f_{\text{xc}}$ seek to include these effects in some form or another, such as
those that utilize the $GW$ approximation and the Bethe-Salpeter equation [24,
25, 26, 16], exact-exchange kernels [27, 28, 29, 30, 31], and long-range
corrected kernels [32, 33].
The use of model systems has been effective in developing understanding of
$f_{\text{xc}}$. In particular, the frequency dependence of $f_{\text{xc}}$
has been the subject of model analytic studies [34, 21, 35, 36], numerical
studies using model Hamiltonians, e.g. the Hubbard model [37, 38, 39, 40], and
numerical studies of exact one-dimensional Hamiltonians in a truncated Hilbert
space [41, 42, 43]. This work continues along the lines of the last approach,
and seeks to address the spatial and frequency dependence of $f_{\text{xc}}$
for energies far beyond the first few excitations. The observed features of
$f_{\text{xc}}$ are examined in relation to matters of practical interest,
such as optical properties.
## II Methodology
### II.1 Background
The `iDEA` code [44] is used in order to obtain the interacting and Kohn-Sham
response functions. This software implements quantum mechanics for finite
systems in one dimension interacting with a softened Coulomb electron-electron
interaction
$\displaystyle v_{c}(x,x^{\prime})=\frac{1}{|x-x^{\prime}|+\alpha},$ (4)
where $\alpha$ is the extent of the softening; we use $\alpha=1$ a.u. A delta
function basis set, i.e. real-space grid, of dimension $N$ is used to
discretize the spatial domain $[-a,a]$ of length $L=2a$ subject to Dirichlet
boundary conditions.
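On such a grid the softened interaction of Eq. (4) becomes an $N\times N$ matrix, which also plays the role of the Hartree kernel $f_{\text{H}}$ in the matrix form of the Dyson equation. A minimal sketch (function name is ours):

```python
import numpy as np

def softened_coulomb_kernel(x, alpha=1.0):
    """Matrix of v_c(x, x') = 1 / (|x - x'| + alpha) on a real-space grid."""
    return 1.0 / (np.abs(x[:, None] - x[None, :]) + alpha)

x = np.linspace(-10.0, 10.0, 5)
f_H = softened_coulomb_kernel(x, alpha=1.0)
```

The matrix is symmetric, with the softening parameter $\alpha$ capping the diagonal at $1/\alpha$ where the bare Coulomb interaction would diverge.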
Our three prototype systems each consist of two spinless electrons in the
external potentials described below. (The use of spinless fermions delivers a
richer ground state and excitation spectrum for a given number of electrons.)
For some input external potential $v_{\text{ext}}(x)$, the full set of
eigenvectors $\{|\Psi_{i}\rangle\}$ of the interacting Hamiltonian is found
using exact diagonalization. The corresponding exact Kohn-Sham potential
$v_{\text{KS}}(x)$ is then reverse-engineered by applying preconditioned root-
finding techniques to an appropriate fixed-point map [45]. The full set of
Kohn-Sham eigenvectors $\{|\phi_{i}\rangle\}$ is also obtained using exact
diagonalization. The causal response functions are computed in the frequency
domain directly using the Lehmann representation; the interacting response
function, for example, is given by
$\displaystyle\chi(x,x^{\prime},\omega)=\lim_{\eta\rightarrow 0^{+}}\sum_{n=1}^{\infty}\left[\frac{\langle\Psi_{0}|\hat{n}(x)|\Psi_{n}\rangle\langle\Psi_{n}|\hat{n}(x^{\prime})|\Psi_{0}\rangle}{\omega-\Omega_{n}+i\eta}-\frac{\langle\Psi_{0}|\hat{n}(x^{\prime})|\Psi_{n}\rangle\langle\Psi_{n}|\hat{n}(x)|\Psi_{0}\rangle}{\omega+\Omega_{n}+i\eta}\right],$
where $\Omega_{n}=E_{n}-E_{0}$ is the $n^{\text{th}}$ excitation energy of the interacting Hamiltonian, and causality requires the response function to vanish for times $t<t^{\prime}$. Construction of the interacting response function in this
fashion is an accurate but demanding procedure, whereas the methods outlined
in [42] to construct the response functions are amenable to larger systems,
but more prone to error; either method will suffice here. On a finite spatial
grid, the response functions at a given $\omega$ become response matrices,
denoted $\chi(\omega)$ and $\chi_{0}(\omega)$. The Dyson equation gives an
alternate definition of the xc kernel,
$\displaystyle
f_{\text{xc}}(\omega)=\chi_{0}^{-1}(\omega)-\chi^{-1}(\omega)-f_{\text{H}},$
(5)
where the superscript $-1$ denotes the matrix inverse, and the Hartree kernel
$f_{\text{H}}$ becomes softened due to Eq. (4). This expression is used as the
definition of $f_{\text{xc}}$ in the present context, where the matrix
inverses require the careful treatment described in the next section.
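A minimal matrix sketch of Eq. (5), together with a round-trip check that solving the Dyson equation with the extracted kernel reproduces $\chi$, is given below. Well-conditioned toy matrices stand in for the actual response matrices, grid quadrature weights are assumed absorbed into the matrices, and all names are ours:

```python
import numpy as np

def xc_kernel(chi, chi0, f_H):
    """f_xc(omega) = chi0^{-1} - chi^{-1} - f_H  (Eq. (5)), as dense matrices."""
    return np.linalg.inv(chi0) - np.linalg.inv(chi) - f_H

def solve_dyson(chi0, f_Hxc):
    """chi = chi0 + chi0 f_Hxc chi  =>  chi = (I - chi0 f_Hxc)^{-1} chi0."""
    I = np.eye(chi0.shape[0])
    return np.linalg.solve(I - chi0 @ f_Hxc, chi0)

# round trip on toy data: the kernel extracted from (chi, chi0) reproduces chi
rng = np.random.default_rng(0)
N = 8
A = rng.standard_normal((N, N))
chi0 = -(A @ A.T + N * np.eye(N))   # invertible, negative definite like a static response
f_H = 0.01 * np.eye(N)
f_xc_true = 0.02 * np.eye(N)
chi = solve_dyson(chi0, f_H + f_xc_true)
f_xc = xc_kernel(chi, chi0, f_H)
```

The round trip works because the Dyson equation is algebraically equivalent to $\chi^{-1}=\chi_{0}^{-1}-f_{\text{Hxc}}$ whenever both inverses exist; the difficulty described in the next section is precisely that, numerically, they barely do.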
### II.2 Challenges in computing exact exchange-correlation kernels
Numerical difficulties arise when attempting to construct $f_{\text{xc}}$ as
an object in itself using Eq. (5), which is one of a few reasons that has
prevented or hindered studies of the exact $f_{\text{xc}}$ along these lines
[41, 42, 43]. The xc kernel represents the solution to an inverse problem,
i.e. find the $\delta v_{\text{xc}}$ that produces a given $\delta n$, and
inverse problems are notoriously sensitive to small error [46], such as those
introduced by finite-precision arithmetic. As discussed, $f_{\text{xc}}$ in
Eq. (5) requires the matrix inverse of the response matrices at a given
$\omega$, and hence a naive inversion procedure introduces numerical error at
a given $\omega$ in proportion to the condition number of the response matrices, that is, the ratio of the maximum to minimum eigenvalue magnitude.
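This error amplification can be made concrete. In the sketch below (ours, with hypothetical names), a symmetric matrix with a prescribed condition number stands in for a response matrix; a tiny perturbation, playing the role of finite-precision round-off, is amplified in the inverse roughly in proportion to the condition number:

```python
import numpy as np

rng = np.random.default_rng(1)

def relative_inverse_error(cond, n=50, eps=1e-12):
    """Relative change in A^{-1} when A is perturbed by noise of size eps,
    for a symmetric A whose eigenvalues span the given condition number."""
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    A = (Q * np.geomspace(1.0, 1.0 / cond, n)) @ Q.T   # A = Q diag(s) Q^T, cond(A) = cond
    dA = eps * rng.standard_normal((n, n))
    A_inv = np.linalg.inv(A)
    return np.linalg.norm(np.linalg.inv(A + dA) - A_inv) / np.linalg.norm(A_inv)

err_mild = relative_inverse_error(cond=1e2)
err_ill = relative_inverse_error(cond=1e8)
```

For the ill-conditioned case the same round-off-sized perturbation produces a relative error in the inverse that is many orders of magnitude larger, which is why a naive inversion in Eq. (5) fails at frequencies where the response matrices have near-null eigenvalues.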
Physical eigenvalues that are close to, or below, machine precision manifest
in the response matrices from various sources [47]. One such source is related
to the linear response $v$-representability problem. That is to say, there
exist density perturbations that oscillate with some non-resonant frequency
$\omega$ that cannot be produced by a perturbing potential at linear order
[48, 47, 27]. Therefore, at such a frequency, the response function has an
eigenvector $|u(\omega)\rangle$ whose eigenvalue is zero, indicating that the
density perturbation $\delta n=|u(\omega)\rangle$ does not correspond to some
finite $\delta v$, as the response function is non-invertible (for example, a spatially constant perturbation $\delta v=c(\omega)$ oscillating with frequency $\omega$ produces no response in the density $\delta n$ at any order, including first order). This can happen in both the interacting and Kohn-
Sham response functions at distinct frequencies. These eigenvalues are of
importance for the work to follow, and are discussed in more depth in Section
III.2.
Another source of low eigenvalues is due to extended regions of nearly
vanishing ground-state density. Since the aim of this work is, in part, to
study the optical response of confined systems far beyond the first
excitation, an appropriately large spatial domain $[-a,a]$ is required in
order to accommodate the more extended excited states without introducing
spurious features due to the boundary conditions. Within such a system, a
perturbing potential localized toward the edge of the domain yields a
negligible ground-state density response, the effect of which is to introduce
near-zero eigenvalues into the response functions. Therefore, ill-conditioning
is unavoidable if we are to study the response functions, and thus
$f_{\text{xc}}$, at high frequencies. The extent of this ill-conditioning
depends on the maximum frequency up to which one wishes to examine matters.
Note that these near-machine-precision eigenvalues of the response functions are problematic only if the difference between the interacting and Kohn-Sham response functions within the ill-conditioned subspace accounts for some particular physical phenomenon, such as charge transfer: to capture excitations of charge-transfer character, an $f_{\text{xc}}$ that diverges in proportion to the increasing separation between the subsystems involved is required (we observed this situation in a double-well system that we explored as background to the present study [51]). In such cases the construction of numerical xc kernels will be challenging. However, the systems
studied in this work do not suffer this issue: the ill-conditioning of the
interacting and Kohn-Sham response functions can be assumed to cancel in the
definition of $f_{\text{xc}}$, Eq. (5), thus producing a regular
$f_{\text{xc}}$. This procedure can be viewed as a form of basis set
truncation, i.e. assign $\chi=\chi_{0}$ within some subset of the basis
responsible for ill-conditioning and proceed to compute $f_{\text{xc}}$ under
this assumption. We now describe two such approaches: a truncation in real
space, and a truncation in eigenspace.
#### II.2.1 Real-space truncation
In finite systems with a confining potential, the response functions tend
toward zero outside of the confined region, and this so-called long-range
behavior is known to be relatively unimportant in the present context – this
is not the case in periodic systems [16, 15, 52, 53]. Therefore, as we
demonstrate in this work, forcing the interacting response function to equal
the non-interacting response function within some yet undefined outer region
does not much alter the derived properties of the interacting response
function, such as its optical spectrum.
To this end, a partition of the spatial domain $[-a,a]$ is made such that an
inner region is defined where $x$ takes values $-b\leq x\leq b$; the numerical
parameter $b$ defines the extent of the truncation. The outer region
constitutes the remaining space between the inner region and the edges of the
domain, $-a$ and $a$. This partition of the space, as it applies to the
response functions, can be seen in Fig. 1. The assumption is then made that
$\displaystyle\tilde{\chi}(x,x^{\prime},\omega)$
$\displaystyle\coloneqq\chi(x,x^{\prime},\omega)\text{ for
}(x,x^{\prime})\text{ in inner region}$
$\displaystyle\tilde{\chi}(x,x^{\prime},\omega)$
$\displaystyle\coloneqq\chi_{0}(x,x^{\prime},\omega)\text{ for
}(x,x^{\prime})\text{ in outer region},$
where $\tilde{\chi}$ is the truncated response function.
Figure 1: A schematic depiction of the real-space truncation strategy used to
regularize computations of $f_{\text{xc}}$, whereby a truncated response
function $\tilde{\chi}$ is defined as the interacting response function within
some region parameterized by $b$, the inner region (shaded gray), and is
otherwise set equal to the Kohn-Sham response function.
The xc kernel is now defined as the object that returns the truncated response
function $\tilde{\chi}$, rather than the interacting response function $\chi$,
upon solution of the Dyson equation. This leads to the following piecewise
form for $f_{\text{xc}}$,
$f_{\text{xc}}=\begin{cases}\chi_{0}^{-1}-\chi^{-1}-f_{\text{H}}&\text{ for
}(x,x^{\prime})\text{ in inner region}\\\ -f_{\text{H}}&\text{ for
}(x,x^{\prime})\text{ in outer region}.\end{cases}$
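The truncation can be sketched in a few lines. This is our own reading of the prescription, in which the inner-region block of $f_{\text{xc}}$ is obtained by inverting the inner-region sub-blocks of the response matrices; toy matrices stand in for actual response functions and all names are hypothetical:

```python
import numpy as np

def realspace_truncated_kernel(chi, chi0, f_H, x, b):
    """Piecewise f_xc: Eq. (5) applied to the inner-region sub-blocks
    (|x|, |x'| <= b), and f_xc = -f_H in the outer region."""
    inner = np.abs(x) <= b
    sub = np.ix_(inner, inner)
    f_xc = -f_H.copy()                                   # outer region
    f_xc[sub] = np.linalg.inv(chi0[sub]) - np.linalg.inv(chi[sub]) - f_H[sub]
    return f_xc

# toy data standing in for response matrices at one frequency
N = 40
x = np.linspace(-10.0, 10.0, N)
chi0 = -np.eye(N)
chi = chi0 + 0.1 * np.exp(-(x[:, None]**2 + x[None, :]**2))  # differs only near x = 0
f_H = 1.0 / (np.abs(x[:, None] - x[None, :]) + 1.0)
f_xc = realspace_truncated_kernel(chi, chi0, f_H, x, b=5.0)
```

By construction the Hxc kernel $f_{\text{Hxc}}=f_{\text{xc}}+f_{\text{H}}$ vanishes wherever either coordinate lies in the outer region, so all nontrivial structure is confined to the well-conditioned inner block.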
The regularizing effect of the method presented above can be understood by
examining the role of the truncation parameter $b$. Two extremes are instructive. First, setting $b=a$ means the inversion of the response matrix at a given $\omega$ is performed over the whole domain, and hence is dominated by error due to extended regions of nearly vanishing ground-state density; this error is hereafter referred to as the numerical error. Second, setting $b=0$ turns the truncated response function into the Kohn-Sham response function over the entire domain (an evidently unsatisfactory state of affairs); error of this kind is referred to as the method error. Whilst this need not be so in principle, for the systems studied here it is possible to choose $b$ such that an acceptable balance is struck between method error and numerical error. In other words, the truncated
response function is able to retain all the physical properties of the
interacting response function and ensure the computation of the resulting
piecewise $f_{\text{xc}}$ is well-conditioned. A discussion on the notion of
error in the present context, including an elaboration of the method error and
numerical error, is given in the supplemental material (URL to be inserted).
One might object that the above piecewise expression for $f_{\text{xc}}$
is spuriously discontinuous at the boundary of the inner and outer region,
where the extent of this discontinuity depends on the long-range behavior of
$f_{\text{xc}}$. However, we shall be concerned with the behavior of
$f_{\text{xc}}$ inside the inner region, i.e. the region where departure of
the Hxc kernel $f_{\text{Hxc}}=f_{\text{xc}}+f_{\text{H}}$ from zero produces
meaningful features in the output of the Dyson equation.
#### II.2.2 Eigenspace truncation
A second, related, method used in this work in order to regularize the
computation of $f_{\text{xc}}$ is to truncate the interacting response matrix
in the eigenspace of the Kohn-Sham response matrix. This method is much more
accurate than the real-space truncation, but is limited to Hermitian response
matrices, i.e. response matrices constructed without an artificial broadening
$\eta$.
Consider the eigendecomposition of the interacting and Kohn-Sham response
matrices at a given $\omega$, where the eigenpairs are denoted $\{|u_{i}\rangle,\lambda_{i}\}$ and $\{|u^{\text{KS}}_{i}\rangle,\lambda^{\text{KS}}_{i}\}$ respectively. Consider further some value $\bar{\lambda}$ such that the effective null space, $\text{Null}_{\text{eff}}$, is defined as the subspace spanned by eigenvectors whose eigenvalue has modulus below $\bar{\lambda}$; formally, $\text{Null}_{\text{eff}}(\chi_{0})=\text{Span}(\{|u_{i}^{\text{KS}}\rangle\ |\ |\lambda^{\text{KS}}_{i}|<\bar{\lambda}\})$. The assumption is now made
that the truncated response matrix $\tilde{\chi}$ acts on vectors in the effective null space in the same way as the Kohn-Sham response matrix,
$\displaystyle\tilde{\chi}|v\rangle=\chi_{0}|v\rangle\text{ for
}|v\rangle\in\text{Null}_{\text{eff}}(\chi_{0}),$ (6)
see Fig. 2. Given $P_{\text{N}}$ as the projection operator onto the effective null
space, the expression in Eq. (6) is established as follows,
$\displaystyle\tilde{\chi}=(I-P_{\text{N}})\chi+P_{\text{N}}\chi_{0};$ (7)
the first term on the right-hand side removes the effective null space from
$\chi$, and the second term ensures $\tilde{\chi}$ operates as intended on
elements of the effective null space. Another view of this manipulation is
that the truncated and Kohn-Sham response functions are required to share
eigenvectors and eigenvalues inside the effective null space,
$\displaystyle\{|\tilde{u}_{i}\rangle,\tilde{\lambda}_{i}\}=\{|u^{\text{KS}}_{i}\rangle,\lambda^{\text{KS}}_{i}\}\text{
for }|\lambda^{\text{KS}}_{i}|<\bar{\lambda},$ (8)
and the truncated response function is otherwise equal to the interacting
response function; this is also depicted in Fig. 2.
Figure 2: A schematic depiction of the eigenspace truncation strategy used to
regularize computations of $f_{\text{xc}}$, whereby the interacting response
function is expanded in the basis of eigenvectors of the Kohn-Sham response
function, and set equal to the Kohn-Sham response function inside the
effective null space, parameterized by $\bar{\lambda}$.
The pseudoinverse [56] with cutoff $\bar{\lambda}$, i.e. the
eigendecomposition with eigenpairs below $\bar{\lambda}$ discarded, is now an
exact procedure to obtain $f_{\text{xc}}$ that recovers $\tilde{\chi}$ after
solution of the Dyson equation,
$\displaystyle f_{\text{xc}}=\chi_{0}^{+}-\tilde{\chi}^{+}-f_{\text{H}},$ (9)
where + denotes the pseudoinverse.
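The truncation and recovery steps, Eqs. (6)–(9), can be sketched directly on generic Hermitian matrices; this is a minimal NumPy illustration, not the production implementation:

```python
import numpy as np

def pinv_cutoff(M, lam_bar):
    """Pseudoinverse discarding singular values below the cutoff lam_bar."""
    U, s, Vt = np.linalg.svd(M)
    keep = s >= lam_bar
    return (Vt[keep].T / s[keep]) @ U[:, keep].T

def fxc_eigenspace_truncated(chi, chi0, f_H, lam_bar):
    """Build the truncated response, Eq. (7), from the effective null space
    of chi0, then recover f_xc via the cutoff pseudoinverse, Eq. (9)."""
    lam0, U = np.linalg.eigh(chi0)                 # Hermitian chi0 assumed
    null = np.abs(lam0) < lam_bar                  # effective null space
    P_N = U[:, null] @ U[:, null].T                # projector onto Null_eff
    chi_t = (np.eye(len(lam0)) - P_N) @ chi + P_N @ chi0          # Eq. (7)
    fxc = pinv_cutoff(chi0, lam_bar) - pinv_cutoff(chi_t, lam_bar) - f_H  # Eq. (9)
    return fxc, chi_t
```

Solving the Dyson equation with this $f_{\text{xc}}$ reproduces $\tilde{\chi}$; the residual mismatch between the effective null spaces of $\chi$ and $\chi_{0}$ is the method error.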
In direct analogy with the previous method, the parameter $\bar{\lambda}$
plays the role of $b$: it parameterizes the extent of the truncation.
The difference between the truncated response function Eq. (6) and the exact
interacting response function is again termed the method error. Note that
this approach is quite distinct from applying the pseudoinverse directly to the
response functions in Eq. (5) – doing so would introduce much more error,
and the source of that error is unclear. In contrast, the error inherent in
the method presented here is identified as the extent to which the effective
null spaces of the interacting and Kohn-Sham response functions do not overlap,
and this error can be tracked without reference to $f_{\text{xc}}$ via the
method error.
Since the eigenspace truncation method is much more accurate than the real-
space truncation method, results are given using the eigenspace truncation
where possible. However, visualization of the optical spectrum relies on
evaluating the response functions slightly above the real axis in the
frequency domain, along $\omega+i\eta$. This leads to response matrices that
are complex-symmetric, and thus (weakly) non-Hermitian, in which case the
real-space truncation method is used.
### II.3 Gauge freedom
As noted in Refs. [28, 27, 57, 37], the following transformation
$\displaystyle f_{\text{xc}}(x,x^{\prime},\omega)\rightarrow
f_{\text{xc}}(x,x^{\prime},\omega)+g(x,\omega)+h(x^{\prime},\omega)+c(\omega),$
leaves the output of the Dyson equation unchanged, and thus we are, in
principle, free to choose the arbitrary complex-valued functions
$g(x,\omega),h(x^{\prime},\omega)$ and $c(\omega)$. All three transformations
are a direct manifestation of the invariance of quantum Hamiltonian systems
under a spatially constant, time-dependent shift of the potential. From the point of view
of $f_{\text{xc}}$ approximations, two xc kernels are equivalent if they exist
within this family of functions 666Note that the quantities $\langle
ij|f_{\text{xc}}(\omega)|kl\rangle$, where $(i,j,k,l)$ label indices of
single-particle wavefunctions, are unique [27], and it is these quantities
that form the input to the Casida equation [72], for example..
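This invariance is easy to verify numerically. The toy check below, a sketch on random matrices rather than physical response functions, relies on the discretized response matrices having vanishing row and column sums (no density response to a spatially constant potential shift, and particle-number conservation), which we enforce by projecting out the constant vector:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
ones = np.ones(n)
P = np.eye(n) - np.outer(ones, ones) / n    # project out the constant vector
A = rng.standard_normal((n, n))
chi0 = -P @ (A @ A.T) @ P / n               # symmetric, zero row/column sums
K = 0.01 * rng.standard_normal((n, n))      # some discretized Hxc kernel

def dyson(chi0, K):
    # chi_Dyson = (I - chi0 K)^{-1} chi0
    return np.linalg.solve(np.eye(len(chi0)) - chi0 @ K, chi0)

# gauge transform: K -> K + g(x) + h(x') + c
g, h, c = rng.standard_normal(n), rng.standard_normal(n), 0.7
K_gauged = K + np.outer(g, ones) + np.outer(ones, h) + c * np.outer(ones, ones)
assert np.allclose(dyson(chi0, K), dyson(chi0, K_gauged))
```

Every occurrence of the kernel in the Dyson series is sandwiched between response matrices, so the gauge terms are annihilated and the output is unchanged to machine precision.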
A preferred gauge is defined by Eq. (5), since the objects $\chi$ and
$\chi_{0}$ are themselves invariant to a shift in the potential. The unique
$f_{\text{xc}}$, modulo a constant shift (see below), defined in Eq. (5) can
be considered the physical $f_{\text{xc}}$, and it is this definition of
$f_{\text{xc}}$ that is assumed in discussions on its various properties and
limits [16, 15, 52, 53, 59]. To modify this $f_{\text{xc}}$ using its gauge
freedom changes its underlying structure; for example, setting $g\neq 0$ gives
$f_{\text{xc}}$ spurious long-range behavior, and setting $g\neq h$ produces
an $f_{\text{xc}}$ that is not symmetric under interchange of
$x\leftrightarrow x^{\prime}$. In this work, we illustrate $f_{\text{xc}}$ as
it is defined in Eq. (5), which corresponds to $g=h=0$. The constant shift $c$
has special meaning: the constant function is itself an eigenvector of $\chi$
and $\chi_{0}$ with eigenvalue zero, i.e. the response functions are non-invertible in this
direction. Since the Dyson equation is therefore silent regarding the value of
$c$, we anchor $f_{\text{xc}}$ by requiring that, in the long-range limit, far
outside the confined density, $f_{\text{xc}}+f_{\text{H}}\rightarrow 0$, and
find that this limit is reliably achieved.
Having decided upon a preferred gauge, one can consider the possible
consequences of this gauge freedom on matters of practical interest. If an
approximate $f_{\text{xc}}$ differed in relevant structure from the exact
$f_{\text{xc}}$ largely due to a change of gauge, this would provide at least
a partial explanation for the performance of that approximation; in
Section III.4 we consider this line of inquiry.
## III Results and discussion
### III.1 Atom
Our first system consists of two interacting electrons confined in the atom-
like potential, $v_{\text{ext}}(x)=-2/(|0.1x|+0.5)$, within the domain
$[-8,8]$ a.u. An illustration of this system, and its associated numerical
parameters, are given in the supplemental material. The purpose of the atom
demonstration is two-fold. First, it constitutes a proof-of-concept, and
defines a standard of accuracy to which the remainder of the calculations are
held unless stated otherwise. Second, the optical spectrum of the atom is
calculated in the range $\omega\in[0,6]$ a.u., which includes many excitations
beyond the first, and the efficacy of various approximations to
$f_{\text{xc}}$ is examined in relation to the optical spectrum.
The exact $f_{\text{xc}}$, constructed using the real-space truncation method,
is shown for the first three visible interacting excitations in the optical
spectrum, and at $\omega=0$ a.u., in Fig. 3. The last,
$f_{\text{xc}}(x,x^{\prime},\omega=0)$, is sometimes termed the exact
adiabatic $f_{\text{xc}}$, and it correctly describes any system in which the
response to a perturbation is essentially instantaneous.
Figure 3: The real part of the exact numerical xc kernel
$f_{\text{xc}}(x,x^{\prime},\omega)$ for the atom at (a) $\omega=0$ (exact
adiabatic), (b) the first visible interacting excitation, (c) the second
visible interacting excitation, and (d) the third visible interacting
excitation. For illustrative purposes, $f_{\text{xc}}$ is shown for
$-3.5<x<3.5$, where its essential structure is most visible. The xc kernel
across the first two transitions remains approximately equal to the exact
adiabatic $f_{\text{xc}}(\omega=0)$, after which a significant departure from
the adiabatic limit is observed. The exact adiabatic $f_{\text{xc}}(\omega=0)$
displays considerable non-local structure, despite a local dominance along
$x=x^{\prime}$.
The real-space truncation parameter is chosen as $b=5.8$ a.u., meaning the
inner region is defined as $-5.8<x<5.8$. It is important to stress that this
choice of $b$ is not unique, and there exists some feasible range of $b$
within which $f_{\text{xc}}$ itself is insensitive to changes. Moreover,
within this feasible range, both the method error and numerical error are
acceptable – a discussion on the precise quantification of error here is given
in the supplemental material. The mean absolute error between the output of
the Dyson equation, which is hereafter defined as
$\displaystyle\chi_{\text{Dyson}}(\omega)=\frac{\chi_{0}(\omega)}{I-\chi_{0}(\omega)(f_{\text{xc}}(\omega)+f_{\text{H}})},$
(10)
and the interacting response function $\chi(\omega)$ is $\mathcal{O}(10^{-9})$
over the entire grid. The zero-force sum rule [2, 60] is used to further
validate the numerics, which is discussed and illustrated in the supplemental
material.
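Evaluating Eq. (10) amounts to a single linear solve; the sketch below reads the matrix "division" in Eq. (10) as left multiplication by the inverse, together with the grid-wise mean absolute error used for validation:

```python
import numpy as np

def dyson_output(chi0, f_xc, f_H):
    """Eq. (10): chi_Dyson = (I - chi0 (f_xc + f_H))^{-1} chi0."""
    n = len(chi0)
    return np.linalg.solve(np.eye(n) - chi0 @ (f_xc + f_H), chi0)

def mean_absolute_error(chi_dyson, chi):
    """Grid-wise MAE used to validate the recovered f_xc against chi."""
    return np.mean(np.abs(chi_dyson - chi))
```

With the exact recovered $f_{\text{xc}}$, this error is $\mathcal{O}(10^{-9})$ over the entire grid, as quoted above.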
In order to extract the optical transition energies and transition rates,
given a single-particle response function $\chi_{0}$ and xc kernel
$f_{\text{xc}}$, one can construct the entire optical absorption spectrum,
Eq. (3), using the corresponding output of the Dyson equation, denoted
$\chi_{\text{Dyson}}(f_{\text{xc}},v_{\text{xc}})$, where $\chi_{0}$ is
specified by some $v_{\text{xc}}$; see Fig. 4.
Figure 4: The optical spectrum for the atom is computed using the interacting
response function $\chi$ (blue solid), the Kohn-Sham response function
$\chi_{0}$ (red dash), and the output of the Dyson equation
$\chi_{\text{Dyson}}$ with the exact $f_{\text{xc}}(x,x^{\prime},\omega)$
(black dot). The exact numerical $f_{\text{xc}}$ reproduces the interacting
peaks perfectly, as expected.
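Eq. (3) itself is not reproduced in this excerpt; the sketch below assumes the standard dipole–dipole contraction $S(\omega)\propto-\omega\,\mathrm{Im}\iint x\,\chi(x,x^{\prime},\omega)\,x^{\prime}\,\mathrm{d}x\,\mathrm{d}x^{\prime}$ on a uniform grid, with the overall prefactor left unspecified:

```python
import numpy as np

def optical_spectrum(chi_of_omega, x, dx, omegas):
    """Contract chi(omega) with the dipole operator on a uniform grid:
    S(omega) ∝ -omega * Im[ x^T chi(omega) x ] dx^2.
    `chi_of_omega` returns the (complex) response matrix at a frequency."""
    return np.array([-w * np.imag(x @ chi_of_omega(w) @ x) * dx**2
                     for w in omegas])
```

Fed with $\chi$, $\chi_{0}$, or $\chi_{\text{Dyson}}$, this produces the three curves of Fig. 4 up to normalization.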
As established in [61, 62, 63], the exact Kohn-Sham single-particle
transitions are in excellent agreement with the interacting transitions, but
this agreement becomes increasingly poor at higher energies. An approach to
understanding this is to consider the overlap between the final states
involved in a given interacting $|\Psi_{0}\rangle\rightarrow|\Psi_{f}\rangle$
and Kohn-Sham $|\Phi_{(0,1)}\rangle\rightarrow|\Phi_{f}\rangle$ transition,
where $|\Phi_{(i,j)}\rangle$ denotes the Slater determinant constructed from
the $i^{\text{th}}$ and $j^{\text{th}}$ Kohn-Sham single-particle states.
The overlap of the ground-state is
$\langle\Psi_{0}|\Phi_{(0,1)}\rangle=0.99991$, and the overlap of the final
states involved in the first transition at $\omega=0.76$ a.u. is
$\langle\Psi_{1}|\Phi_{(0,2)}\rangle=0.9995$. The static correlation in the
interacting state here is modest, meaning the interacting state has strong
single-particle character, which leads to agreement between the low-energy
transitions in the optical spectrum. At higher energies, the overlap decays by
multiple orders of magnitude; however, this is not the predominant source of
error in higher energy transitions. Rather, interacting excitations that
correspond to Kohn-Sham single-particle excitations out of the highest
occupied state (the second) are much more accurate than single-particle
excitations out of the first state. For example, the interacting excitation
$|\Psi_{0}\rangle\rightarrow|\Psi_{19}\rangle$ at $\omega=4.41$ a.u. in Fig. 4
is captured well with the Kohn-Sham excitation
$|\Phi_{(0,1)}\rangle\rightarrow|\Phi_{(0,12)}\rangle$, whereas this is not
true of the preceding interacting excitation. This is not surprising, as the
highest occupied Kohn-Sham state has energy equal to minus the exact electron
removal energy [64], and thus at least one energy involved in the transition
is correct. Whether this pattern holds more generally would require
additional many-body calculations to establish.
The following $f_{\text{xc}}$ approximations are now considered: the RPA
$f^{\text{RPA}}_{\text{xc}}=0$, an adiabatic LDA
$f^{\text{ALDA}}_{\text{xc}}[n](x,x^{\prime},\omega=0)\propto\delta(x-x^{\prime})$
parameterized with reference to the homogeneous electron gas in [65], and the
exact adiabatic xc kernel $f_{\text{xc}}(x,x^{\prime},\omega=0)$, Fig. 3(a).
These $f_{\text{xc}}$ approximations are used to solve the Dyson equation in
conjunction with the exact Kohn-Sham response function. The atomic optical
spectrum, using the aforementioned series of approximations, is shown in Fig.
5 for the first transition and a chosen higher energy transition.
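For concreteness, the discretized form of the first two approximations can be sketched as below; the actual homogeneous-electron-gas parameterization of Ref. [65] is not reproduced here, so `fxc_of_n` is a placeholder:

```python
import numpy as np

def rpa_kernel(n_grid):
    """RPA: f_xc = 0, so only f_H enters the Dyson equation."""
    return np.zeros((n_grid, n_grid))

def alda_kernel(density, dx, fxc_of_n):
    """Adiabatic LDA: f_xc(x, x') = fxc_of_n(n(x)) * delta(x - x').
    On a uniform grid, delta(x - x') discretizes to 1/dx on the diagonal.
    `fxc_of_n` stands in for the HEG parameterization (a placeholder)."""
    return np.diag(fxc_of_n(np.asarray(density))) / dx
```

Either kernel (plus $f_{\text{H}}$) is then inserted into Eq. (10) together with the exact $\chi_{0}$.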
Figure 5: The optical absorption spectrum for the atom around the first
transition, and around two higher energy transitions (inset), calculated at
various levels of approximation. The optical spectra calculated from the
interacting and Kohn-Sham response functions are plotted alongside the optical
spectrum computed from the Dyson equation using the RPA, adiabatic LDA, and
exact adiabatic xc kernels. The adiabatic LDA and RPA fail to accurately
reproduce the interacting transitions, whereas the exact adiabatic
approximation successfully describes the low-energy transition, but fails at
higher energies.
The exact adiabatic $f_{\text{xc}}$ reproduces the first peak well, which
reflects the fact that $f_{\text{xc}}$ at the first excitation, Fig. 3(b), is
visually indistinguishable from the exact adiabatic $f_{\text{xc}}$. This
agreement demonstrates that, not only is the adiabatic approximation valid for
low-energy transitions, but also the non-local spatial structure in the exact
adiabatic $f_{\text{xc}}$ is required in order to reproduce the low-energy
peaks in the optical spectrum – in lacking this structure, both the RPA and
adiabatic LDA significantly over-correct the underestimation of the non-
interacting transition energy. At higher energies, i.e. beyond the third peak
in the optical spectrum (not shown), all three approximations perform
similarly, and do not improve matters significantly beyond the corresponding
non-interacting peak. This behavior appears to be generic for all peaks
observed up to $\omega=6$ a.u., namely, the transition energies output from
the Dyson equation with these approximate xc kernels are bound to the quality
of the non-interacting transition energies. Furthermore, the inset of Fig. 5
demonstrates that the exact adiabatic approximation gives an excitation energy
that is worse than those of the other adiabatic approximations. This suggests that, in
order to capture higher energy transitions, a frequency dependence is
required, and in particular ‘improving’ the spatial structure of adiabatic
approximations toward the exact adiabatic structure does not assist matters
here.
As alluded to above, the exact adiabatic approximation ceases to outperform
the adiabatic LDA and RPA beyond the third transition in the optical spectrum,
i.e. the same transition for which the corresponding exact $f_{\text{xc}}$
departs from its adiabatic structure in a serious manner, Fig. 3. In fact, as
the subsection to follow demonstrates, this violent departure from
adiabaticity has a particular and fairly simple form, and its origin is
understood in the context of eigenvalues of $\chi(\omega)$ or
$\chi_{0}(\omega)$ that cross zero. The eigenvector corresponding to the
eigenvalue that touches zero is a non-$v$-representable density perturbation
at linear order.
### III.2 Infinite potential well
The infinite potential well is defined with the external potential
$v_{\text{ext}}=0$ inside the domain $[-5,5]$ a.u. (See supplemental material
for the corresponding Kohn-Sham potential, density, and numerical parameters.)
The numerical response functions for this system are well-conditioned and
valid up to arbitrary $\omega$: there are no regions of nearly vanishing
density, and thus no significant numerical error is present here.
The non-adiabatic character of $f_{\text{xc}}$ is illustrated up to
$\omega\approx 6$ a.u. in Fig. 6. As in the atom, $f_{\text{xc}}$ exhibits
little frequency dependence until after the second transition. This behavior
is related to the poles that occur in $f_{\text{xc}}$ infinitesimally below
the $\omega$-axis [47, 27]. The eigenvalues of $f_{\text{xc}}$ as a function
of frequency are given in Fig. 7, and in particular, three divergences are
shown (the singularities are tempered slightly by evaluating the response
functions just above the real $\omega$-axis). The lower panels of Fig. 7
demonstrate that these singularities coincide with a single eigenvalue of
either the interacting or non-interacting response function crossing zero.
Thus, the visual character of $f_{\text{xc}}$ is dominated by the outer
product of the eigenvector whose eigenvalue is either beginning to diverge, or
recovering from a divergence. Moreover, nothing in principle prevents
these singularities in $f_{\text{xc}}$ from coming arbitrarily close to an
interacting excitation – either $\chi_{0}$ can cross zero close to an
interacting excitation, or $\chi$ itself can cross zero close to an
interacting excitation 777The interacting response function $\chi$ diverges at
an interacting excitation, but this does not, in a finite basis, prevent an
eigenvalue from crossing zero at this energy under certain circumstances that
are set out in the supplemental material.. Since most $f_{\text{xc}}$
approximations lack these divergences, one can question the importance of
these divergences in relation to optical properties.
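Locating such zero crossings numerically is straightforward once the eigenvalues are tracked as continuous branches in $\omega$; the helper below is a minimal sketch using sign changes plus linear interpolation:

```python
import numpy as np

def zero_crossings(omegas, eigenvalue_branch):
    """Locate frequencies at which a real eigenvalue branch of chi or chi0
    crosses zero: detect sign changes between adjacent samples, then refine
    by linear interpolation. Assumes a single continuous branch
    Re(lambda(omega)) sampled on the grid `omegas`."""
    omegas = np.asarray(omegas)
    lam = np.asarray(eigenvalue_branch)
    s = np.sign(lam)
    idx = np.nonzero(s[:-1] * s[1:] < 0)[0]      # sign change between samples
    w0, w1 = omegas[idx], omegas[idx + 1]
    l0, l1 = lam[idx], lam[idx + 1]
    return w0 - l0 * (w1 - w0) / (l1 - l0)       # linear-interpolated roots
```

Applied to the branches of $\chi_{0}(\omega)$ and $\chi(\omega)$, this recovers the crossing frequencies underlying the divergences in Fig. 7.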
Figure 6: The real part of the exact numerical xc kernel
$f_{\text{xc}}(x,x^{\prime},\omega)$ for the infinite potential well at (a)
$\omega=0$ (exact adiabatic), and at three higher frequencies that demonstrate
its non-adiabatic character, (b) the sixth interacting excitation,
$\omega=1.09$ a.u., (c) $\omega=3.28$ a.u., and (d) $\omega=5.55$ a.u. The xc
kernels are shifted such that their maximum value is zero, since there exists
no long-range limit, see Section II.3. A strong frequency dependence, i.e.
departure from the adiabatic limit, is observed. Figure 7: Eigenvalues as a
function of frequency, labeled Re($\lambda(\omega)$), of $f_{\text{xc}}$
(upper), $\chi_{0}$ (lower left), and $\chi$ (lower right). Within the
frequency range shown, the predominant non-adiabatic behavior of
$f_{\text{xc}}$ is a result of three singularities and their surrounding
divergences at $\omega=0.99,1.01,1.17$ a.u. The source of the last two is
observed to be an eigenvalue crossing zero in $\chi_{0}$ and $\chi$
respectively.
In order to determine the impact of the divergences in $f_{\text{xc}}$ on the
optical spectrum, we examine how the optical spectrum is affected after
projecting out the divergent eigendirection in $f_{\text{xc}}$, but otherwise
keeping $f_{\text{xc}}$ identical (the details of this procedure are given in
the supplemental material). This is tantamount to setting
$\chi|v\rangle\coloneqq\chi_{0}|v\rangle$ for some vector $|v\rangle$ within
the Hilbert space that is associated with the divergence.
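This projection can be realized as a rank-one analogue of Eq. (7); the sketch below is our illustration of the idea (the exact procedure used in this work is specified in the supplemental material):

```python
import numpy as np

def remove_divergent_direction(chi, chi0, v):
    """Replace the action of chi along the normalized direction v, which is
    associated with a divergence of f_xc, by the action of chi0 - a
    rank-one analogue of Eq. (7). Elsewhere chi is left untouched."""
    v = v / np.linalg.norm(v)
    P = np.outer(v, v)                  # rank-one projector |v><v|
    return (np.eye(len(chi)) - P) @ chi + P @ chi0
```

Recovering the kernel from this modified response then yields $f_{\text{xc}}^{\text{projected}}$.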
The interacting excitation at $\omega=1.09$ a.u., see Fig. 7, is visible in
the optical spectrum, and furthermore the character of $f_{\text{xc}}$ at this
energy, see Fig. 6, is dominated by the outer product of an eigenvector whose
eigenvalue is much larger in magnitude than the rest, and which lies between the two
divergences at $\omega=1.01,1.17$ a.u. The removal of these divergences from
$f_{\text{xc}}$ across the energy range of interest, and subsequently solving
the Dyson equation with the projected xc kernel
$f_{\text{xc}}^{\text{projected}}$, shifts the interacting optical peak back
toward the non-interacting peak, Fig. 8. Conversely, the much weaker
divergence at $\omega=0.99$ a.u., as seen in Fig. 7, has a tail that also
yields an eigenvalue much larger in magnitude than the rest at the interacting
excitation, and removal of this divergence produces no change in the optical
spectrum, i.e. this eigenvector is not relevant for capturing the transition
in question. In the inset of Fig. 8, the visual character of $f_{\text{xc}}$
is shown at the interacting excitation $\omega=1.09$ a.u. after the
divergences visible in Fig. 7 have been removed. Underneath these divergences,
$f_{\text{xc}}$ is indistinguishable from the adiabatic $f_{\text{xc}}$,
meaning the frequency dependence of $f_{\text{xc}}$ in this system is largely
due to its pole structure. These results suggest that functional
approximations that do not capture the non-adiabatic pole structure of
$f_{\text{xc}}$ will struggle to improve matters beyond the non-interacting
peaks for certain transitions.
Figure 8: The optical absorption spectrum for the infinite potential well
around a visible interacting excitation at $\omega=1.094$ a.u, and its
corresponding non-interacting excitation at $\omega=1.024$ a.u. The projected
xc kernel (inset), i.e. $f_{\text{xc}}$ with its divergences removed, is
indistinguishable from the adiabatic xc kernel, see Fig. 6(a), and the optical
peak associated with $f_{\text{xc}}^{\text{projected}}$ is only a slight
improvement on the non-interacting peak.
A further point of note is that there are many more zeros in the interacting
response function than the non-interacting response function, as a simple
counting argument is sufficient to demonstrate. All $N$ eigenvalues of the
response functions begin negative [47], and each excitation carries a negative
eigenvalue to a positive one across a divergence; away from divergences, each
eigenvalue evolves as a continuous function of $\omega$. For two electrons discretized
with a basis set of dimension $N$, there are $\frac{1}{2}N(N-1)-1$ interacting
excitations, and $2N-4$ non-interacting excitations, the difference being made
up of double (triple, etc. with more than two electrons) excitations that are
notoriously not present in the Kohn-Sham response function [21]. Therefore,
many more eigenvalues must pass through zero in the
interacting response function, and moreover, these eigenvalues that cross zero
are connected to the excitations that require them to do so – an account of
the precise conditions under which this occurs is given in the supplemental
material using a two-state model, see also [65].
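The two counts can be verified by explicit enumeration for two electrons in a basis of dimension $N$; a minimal sketch (enumerating antisymmetric pairs, with spin suppressed as in the models here):

```python
from itertools import combinations

def count_interacting_excitations(N):
    """Two-electron states are the C(N, 2) antisymmetric pairs of
    single-particle basis states; excitations exclude the ground state."""
    return len(list(combinations(range(N), 2))) - 1        # N(N-1)/2 - 1

def count_noninteracting_excitations(N):
    """Kohn-Sham single-particle excitations promote one electron out of
    the two occupied states (0 and 1) into any of the N - 2 unoccupied."""
    occupied, unoccupied = (0, 1), range(2, N)
    return sum(1 for _ in occupied for _ in unoccupied)    # 2N - 4
```

The gap between the two counts grows quadratically in $N$, which is the excess of interacting zero crossings referred to above.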
In Fig. 7, there are three divergences, two of which are paired at
$\omega=1.01,1.17$ a.u., meaning the zero in the non-interacting response
function is a shifted version of the zero in the interacting response
function; such behavior can be related to single excitations. There also
exists an unpaired divergence at $\omega=0.99$ a.u. related to the excitation
at $\omega=0.98$ a.u. which has double excitation character. That is, the
overlap between the final state involved in this transition
$|\Psi_{0}\rangle\rightarrow|\Psi_{5}\rangle$ and the Slater determinant
$|\Phi_{(2,3)}\rangle$ is $\langle\Phi_{(2,3)}|\Psi_{5}\rangle=0.98$.
It is perhaps, then, no surprise that removal of the eigenvector and
eigenvalue whose source is the unpaired divergence did not alter the visible
(single excitation) transition in Fig. 8. In fact, the pole in the interacting
response function relating to the double excitation disappears with removal of
the unpaired divergence, and so this divergence, and its surrounding
character, is important in order to capture the transition for which it is
relevant. The xc kernel exhibits unpaired divergences after all double
excitations up to $\omega=6$ a.u. at frequencies slightly higher than the
interacting double excitation energy. Refs. [21, 23] derive, under certain
assumptions, the necessarily divergent character of $f_{\text{xc}}$ around a
double excitation. The above analysis reveals that divergences in
$f_{\text{xc}}$ are, in fact, common and associated with the $\omega$
neighborhood containing multiple and single excitations alike.
### III.3 Quantum harmonic oscillator
The quantum harmonic oscillator is defined in the domain $[-8,8]$ a.u. with
the potential $v_{\text{ext}}(x)=\frac{1}{2}\nu^{2}x^{2}$ where $\nu\coloneqq
0.45$ a.u. (See supplemental material for the corresponding exact Kohn-Sham
potential, the interacting ground-state density, and the numerical
parameters.)
First, the spatial structure, and in particular the long-range behavior [16],
of the exact $f_{\text{xc}}$ is examined. The delicate nature of the numerics
involved in computing the exact $f_{\text{xc}}$ is brought to the fore when
attempting to capture its long-range limit. The ill-conditioning in the
response matrices can be identified with regions of nearly vanishing ground-
state density, see Section II.2, and it is precisely in these regions that we
expect to observe the long-range character of $f_{\text{xc}}$. However,
$f_{\text{xc}}$, when evaluated outside some central region where we can be
confident there is little-to-no numerical error, diverges in a manner that is
not consistent with the known long-range limit of $f_{\text{xc}}$. This
spurious divergence in $f_{\text{xc}}$ does not much affect the accuracy of
the output of the Dyson equation, $\chi_{\text{Dyson}}$, because the Dyson
equation Eq. (10) involves the matrix product $\chi_{0}f_{\text{xc}}$, and the
divergent regions of $f_{\text{xc}}$ operate on the nearly vanishing regions
of $\chi_{0}$.
If we are to observe the long-range limit of $f_{\text{xc}}$ in the present
context, the region where the numerical error is low must overlap with the
region where the long-range limit is observed. This is the case for the exact
adiabatic $f_{\text{xc}}$, which is shown, together with slices of
$f_{\text{xc}}$ along a particular axis, in Fig. 9 and Fig. 10.
Figure 9: The exact adiabatic xc kernel $f_{\text{xc}}(x,x^{\prime},\omega=0)$
for the quantum harmonic oscillator constructed with the eigenspace truncation
method. The precise spatial structure exhibited by the exact adiabatic kernel,
including its long-range limit and non-local character, is discussed in the
main text. Figure 10: The exact adiabatic xc kernel
$f_{\text{xc}}(x,x^{\prime},\omega=0)$ for the quantum harmonic oscillator
along $x^{\prime}=0$ a.u. (top) and $x^{\prime}=-2$ a.u. (bottom). The xc
kernel, Hxc kernel, and Hartree kernel, are shown, and the long-range limit
$f_{\text{xc}}\rightarrow-f_{\text{H}}$ is observed.
The center-most region of the domain is where exchange and correlation effects
are most important, and in this region $f_{\text{xc}}$ exhibits a strong local
response that quickly decays away from $x=x^{\prime}$. Since this region can be
interpreted as the most crucial for recovering accurate observable properties
from the Dyson equation, this observation supports, at least in part, local
approximations to $f_{\text{xc}}$, such as the adiabatic LDA.
The non-local structure of $f_{\text{xc}}$ at $\omega=0$ a.u. can be seen most
clearly in the lower panel of Fig. 10, where perturbations in the density
outside the center-most region cause a significant change in the exchange-
correlation potential inside the center-most region. The failure of the
adiabatic LDA to capture this non-local response leads to fairly poor
agreement between the exact and approximate transition energies, which is seen
for the atom in Fig. 5, and illustrated for the quantum harmonic oscillator
below 888It is possible that the particular form of the non-local
$f_{\text{xc}}$ functionals considered in [73, 13], in which
$f_{\text{xc}}(x,x^{\prime})=f_{\text{xc}}[(n(x)+n(x^{\prime}))/2]$, can
capture certain features of the adiabatic non-local response depicted in Fig.
9 and Fig. 10..
The long-range limit $f_{\text{xc}}(x,x^{\prime},\omega=0)\rightarrow-
f_{\text{H}}(|x-x^{\prime}|)$ is explored in Fig. 10 along both $x^{\prime}=0$
a.u. and $x^{\prime}=-2$ a.u. Due to the rapid decay of the ground-state
density in the quantum harmonic oscillator, convergence of $f_{\text{xc}}$
toward $-f_{\text{H}}$ along $x^{\prime}=-2$ a.u. is not directly observed in the
$x\rightarrow-\infty$ limit – the atom, whose ground-state density does not
decay as quickly, converges toward $-f_{\text{H}}$ in both the positive and
negative limits (see supplemental material). It is known that the long-range
limit of $f_{\text{xc}}$, in both finite and periodic systems, satisfies
$f_{\text{xc}}(\omega)\rightarrow-\alpha(\omega)f_{\text{H}}$ [16, 15, 52,
53], where $\alpha(\omega)$ is a frequency-dependent coefficient that reflects
dielectric screening in the system. Since a finite system exhibits no
macroscopic dielectric screening, one expects $\alpha=1$, and at lower
frequencies, where the numerical methodology
provides a robust long-range limit, our observations are consistent with this,
see Fig. 10.
To conclude matters for the quantum harmonic oscillator, we examine its
optical absorption spectrum, and in particular highlight a failing of the
$f_{\text{xc}}$ approximations considered in this work when compared to the
exact and exact adiabatic xc kernels. The optical absorption spectrum, using
the same range of approximations discussed in the case of the atom, is shown
in Fig. 11. In the non-interacting quantum harmonic oscillator, all
transitions but the first are dipole-disallowed – its selection rules.
The inclusion of the Coulomb interaction breaks these special symmetries
of the non-interacting quantum harmonic oscillator, but not enough for the
previously disallowed transitions to become visible in the optical spectrum –
the dipole matrix elements for these now-allowed transitions are
$\mathcal{O}(10^{-8})$. On the other hand, the exact Kohn-Sham potential
differs significantly from the harmonic form, which creates a series of
visible peaks beyond the first in the Kohn-Sham optical spectrum; the
transition rates for these transitions are vastly overestimated.
In this situation, the RPA and adiabatic LDA do not achieve the required
strong suppression of the optical peaks. Interestingly, the exact adiabatic
kernel, as seen in Fig. 9, is able to reproduce the exact optical spectrum
across the entire frequency range considered $\omega\in[0,6]$ a.u., presumably
due to the correct oscillator strengths used in its construction. This
suggests that, in cases where the exact system possesses heavily suppressed
transitions, perhaps due to symmetries that are not shared by the Kohn-Sham
system, the typical $f_{\text{xc}}$ approximations are insufficient to recover
the exact state of affairs, but improvement of their non-local spatial
structure toward the exact adiabatic kernel can assist matters.
Figure 11: The optical absorption spectrum for the quantum harmonic oscillator
around the first transition, and around a chosen higher energy transition,
calculated at various levels of approximation. The optical spectrum calculated
from the interacting and Kohn-Sham response functions are plotted alongside
the optical spectrum computed from the Dyson equation using the RPA, ALDA, and
exact adiabatic xc kernels. At higher frequencies (inset), all optical peaks
are suppressed for the quantum harmonic oscillator, a state of affairs that
the exact adiabatic $f_{\text{xc}}$ reproduces, but the RPA and adiabatic LDA
do not.
### III.4 Gauge freedom
If two xc kernels differ in structure that is predominantly captured by
the gauge transform defined in Section II.3, this would provide an explanation
for approximate agreement between the derived properties of the two xc kernels
in question. This possibility is now considered, and in fact we shall demonstrate
that the gauge freedom of $f_{\text{xc}}$ is not sufficient to explain the
similarity observed between, for example, the optical properties calculated
using the adiabatic LDA and RPA in Section III. Moreover, the particular form
of the non-local spatial structure within the exact
$f_{\text{xc}}(x,x^{\prime},\omega)$ is in general not possible to capture
with this gauge freedom, and it is unlikely that this line of reasoning is
able to explain the efficacy of approximations of any kind.
In order to demonstrate this, an optimal gauge is defined that transforms,
insofar as possible, one xc kernel into another using the gauge freedom,
i.e. it brings one $f_{\text{xc}}$ toward another $f_{\text{xc}}$ in a
particular matrix norm. The definition and derivation of the optimal gauge are
given in the relevant section of the supplemental material. Furthermore, an
illustration of the optimal gauge transform for each of the examples to follow
is also provided in the supplemental material. The atom of Section III is
considered, and in particular the optimal gauge is found in order to match the
RPA, $f_{\text{xc}}^{\text{RPA}}=0$, with the adiabatic LDA used in this work,
$f_{\text{xc}}^{\text{ALDA}}$ [65]. As discussed in Section II.3, the vectors
$g$ and $h$ introduce an unavoidable degree of non-locality, and the local
spatial structure of the adiabatic LDA simply cannot be reproduced with a
gauge transform of this kind applied to the RPA.
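The optimal-gauge fit described above can be sketched numerically. As a simplifying assumption (the actual gauge form is defined in Section II.3 and the supplemental material), suppose the gauge freedom adds a separable term $g(x)+h(x^{\prime})$ to the kernel on a discretised grid; the best additive fit in the Frobenius norm then has a closed form via row, column, and grand means:

```python
import numpy as np

def optimal_gauge(f_a, f_b):
    """Least-squares additive fit: find g, h minimising
    ||f_a + g 1^T + 1 h^T - f_b||_F (hypothetical additive gauge form)."""
    d = f_b - f_a                 # structure the gauge transform must absorb
    row = d.mean(axis=1)          # row means of the residual
    col = d.mean(axis=0)          # column means of the residual
    tot = d.mean()                # grand mean
    g = row - tot / 2.0           # split the grand mean evenly between g and h
    h = col - tot / 2.0           # (g, h are defined up to a constant shift)
    return g, h

def residual_norm(f_a, f_b, g, h):
    """Frobenius norm left over after the best additive gauge transform."""
    n = len(g)
    gauged = f_a + np.outer(g, np.ones(n)) + np.outer(np.ones(n), h)
    return np.linalg.norm(gauged - f_b)
```

When the residual norm remains comparable to $\|f_b-f_a\|$, as found here for the ALDA versus the RPA, the gauge freedom cannot account for the similarity of derived properties.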
The story remains much the same when an attempt is made to match the adiabatic
LDA xc kernel to the exact adiabatic xc kernel for the atom,
$f_{\text{xc}}(x,x^{\prime},\omega=0)$. The particular form of the spatial
non-locality in the exact adiabatic xc kernel does not lend itself to the
fairly inflexible structural freedom afforded by the functions $g$ and $h$.
This conclusion holds true for almost all the xc kernels examined in this
work, namely, the spatial profile of the exact $f_{\text{xc}}$ at any $\omega$
is not possible to reproduce with a gauge transform applied to the adiabatic
LDA or RPA, despite, at high frequencies, the similarity of the derived
optical spectra from these approximations.
A final remark is given in relation to the divergences in $f_{\text{xc}}$
studied in the previous section. The xc kernel around a divergence is
approximately of the form of an outer product $|u\rangle\langle u|$, where
$|u\rangle$ is the eigenvector of $f_{\text{xc}}$ that is diverging. The xc
kernel is therefore mostly composed of $N$ degrees of freedom, a state of
affairs that is a priori much more amenable to the gauge freedom. In fact, the
first pole in the $f_{\text{xc}}$ of the atom, i.e. the beginning of the non-
adiabatic behavior, see panel (d) of Fig. 3, is non-divergent after the
optimal gauge is applied. This divergence therefore does not affect observable
properties such as its associated optical peak, and indeed the exact adiabatic
kernel is able to capture this peak, despite it existing squarely within the
divergence. However, such a situation is not detected again for the remainder
of the divergences.
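The statement that the kernel near a divergence is approximately an outer product $|u\rangle\langle u|$ can be illustrated with a toy symmetric kernel (the numbers below are illustrative, not taken from the paper's systems):

```python
import numpy as np

def dominant_rank_one(f_xc):
    """Extract the (near-)diverging eigenpair: near a divergence the kernel
    is approximately lam * |u><u| for its largest-magnitude eigenvector u."""
    lam, vecs = np.linalg.eigh(f_xc)        # symmetric kernel assumed
    k = np.argmax(np.abs(lam))              # index of the diverging mode
    u = vecs[:, k]
    return lam[k] * np.outer(u, u)

# Toy kernel: one very large eigenvalue plus a small symmetric background.
rng = np.random.default_rng(1)
n = 20
u = rng.normal(size=n)
u /= np.linalg.norm(u)
background = rng.normal(size=(n, n))
background = (background + background.T) / 2.0
f_xc = 1e4 * np.outer(u, u) + background

rank1 = dominant_rank_one(f_xc)
# Relative error should be small: the N^2 entries of the kernel are
# essentially captured by the ~N numbers that specify the vector u.
rel_err = np.linalg.norm(f_xc - rank1) / np.linalg.norm(f_xc)
```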
### III.5 Exchange-correlation potential versus exchange-correlation kernel
In finite systems, it is the conventional wisdom that an accurate ground-state
xc potential is more important for capturing the interacting excitation
spectrum than a sophisticated $f_{\text{xc}}$ [16, 15, 68, 69, 70]. The effect
of an improved treatment of exchange and correlation in the ground state is
shown in Fig. 12, which demonstrates the optical spectrum for the atomic
system calculated from the non-interacting response function $\chi_{0}$ at
various levels of approximation. The transition energies and rates calculated
from the exact Kohn-Sham system are considerably closer to the exact
transitions than, for example, Hartree theory, which is increasingly poor at
higher energies.
Figure 12: The exact optical absorption spectra for the atomic system
illustrated alongside the optical absorption spectrum from the non-interacting
response functions $\chi_{0}$ calculated with wavefunctions and energies given
by Hartree theory, an LDA, and the exact Kohn-Sham potential. The corrected
transitions, given upon solution of the Dyson equation with the corresponding
$f_{\text{xc}}$, are also shown for Hartree theory/RPA, and LDA/adiabatic LDA.
The most accurate optical spectrum is that of the non-interacting exact Kohn-
Sham system, and hence improving treatment of ground-state exchange-
correlation toward this ideal is found to be most important. In particular,
use of $f_{\text{xc}}$ to shift the non-interacting peaks toward the
interacting peaks is, in general, unable to achieve accuracy improvements
comparable to those observed from improving ground-state exchange-correlation
– this is seen most clearly at higher frequencies (inset).
These non-interacting response functions can be used, in conjunction with
their corresponding $f_{\text{xc}}$, to solve the Dyson equation, thus
shifting the position and weight of the peaks. An xc kernel functional
corresponds to an xc potential functional if it is the functional derivative
of the xc potential with respect to the density (equivalently, the second
functional derivative of the xc energy). Thus, the RPA xc
kernel corresponds to Hartree theory, and the adiabatic LDA xc kernel
corresponds to the ground-state LDA from which it came. To use an xc kernel
that does not correspond to the ground-state xc potential can violate various
exact conditions [71] – this is evidently the case here for the zero-force sum
rule.
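The Dyson-equation step referred to above can be sketched in a finite spatial basis (a hypothetical discretisation; grid weights are assumed absorbed into the matrices):

```python
import numpy as np

def solve_dyson(chi0, f_hxc):
    """Interacting response from the Dyson equation
    chi = chi0 + chi0 (f_H + f_xc) chi, in a finite basis at fixed omega."""
    n = chi0.shape[0]
    # Rearranged: (I - chi0 @ f_hxc) chi = chi0
    return np.linalg.solve(np.eye(n) - chi0 @ f_hxc, chi0)
```

With `f_hxc = 0` one recovers the bare Kohn-Sham response `chi0` (the $f_{\text{xc}}=-f_{\text{H}}$ case), while the RPA corresponds to keeping only the Hartree kernel in `f_hxc`.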
The aforementioned conventional wisdom is exhibited clearly in Fig. 12,
namely, use of a corresponding $f_{\text{xc}}$ is only able to improve matters
slightly beyond the non-interacting peaks, which is most visible at higher
frequencies. For this reason, even when using an incompatible $f_{\text{xc}}$,
the calculation benefits from an improved treatment of ground-state exchange
and correlation. This is evidenced in the case of the atom in Section III,
where using the RPA and adiabatic LDA xc kernels in conjunction with the exact
Kohn-Sham ground-state yields a more accurate optical spectrum than if one
were to attempt to keep the xc potential and xc kernel compatible.
## IV Conclusions
We have calculated and analyzed the spatial and frequency dependence of the
exact xc kernel $f_{\text{xc}}$ of time-dependent DFT for three one-
dimensional model systems: an atom, an infinite potential well, and a quantum
harmonic oscillator. A set of numerical methods is designed to ensure
numerical robustness.
The xc kernel exhibits a significant non-local spatial structure at all
frequencies, including at $\omega=0$, i.e. the exact adiabatic
$f_{\text{xc}}$. In lacking this structure, local approximations to
$f_{\text{xc}}$ are found to be insufficient for recovering the lowest energy
excitations, whereas the exact adiabatic xc kernel performs well. However,
beyond the lowest few excitations, all the approximations considered here –
the exact adiabatic xc kernel, the RPA, and the adiabatic LDA – are equally
poor, and do not generally improve the optical spectrum obtained directly from
the non-interacting Kohn-Sham response function $\chi_{0}$ (that is, setting
$f_{\text{xc}}+f_{\text{H}}$ to zero everywhere). A notable exception is the
quantum harmonic oscillator, whose optical transitions beyond the first are
heavily suppressed, a feature that the exact adiabatic xc kernel is able to
capture. In general, improvement of the spatial structure of adiabatic xc
kernel approximations toward the exact adiabatic xc kernel is expected to
assist matters for the lowest energy transitions, but beyond these transitions
the lack of frequency dependence hinders all adiabatic xc kernels. In
addition, the long-range limit of $f_{\text{xc}}$ for finite systems
$f_{\text{xc}}\rightarrow-f_{\text{H}}$ is confirmed, although the long-range
character of $f_{\text{xc}}$ is demonstrated to be unimportant in the present
context, in contrast to its character within and around the region of high
density.
Drastic non-adiabatic behavior is observed in $f_{\text{xc}}$ for all systems
studied in this work, and is, to a considerable extent, attributable to
specific aspects of its analytic structure as a function of $\omega$. (Simple)
poles in $f_{\text{xc}}$, related to certain interacting or non-interacting
transitions that necessitate them, can in practice appear close to interacting
excitations, for example, between two nearly degenerate charge-transfer
excitations in a double-well system [50]. It is possible that a gauge
transform can remove the divergence in specific cases without affecting the
optical spectrum, but this is the exception rather than the rule. If
$f_{\text{xc}}$ is kept identical apart from removal of the diverging
eigenvalue and its associated eigenvector, then $f_{\text{xc}}$ can be
rendered unable to capture the relevant transition. This suggests that an
$f_{\text{xc}}$ approximation that does not attempt to exhibit the non-
adiabatic pole structure of the exact $f_{\text{xc}}$ cannot reproduce
transitions with energies higher than the first few excitations. This is the
case for single, double, triple, and so on, excitations alike. The fact that
these divergences can be related to certain excitations that necessitate them
provides a new perspective on the divergent character of $f_{\text{xc}}$ that
is known to exist around double excitations [21, 36].
In general, the subtle spatial structure of the exact $f_{\text{xc}}$ cannot
be captured by applying a gauge transformation to one of the usual kernel
approximations. However, the divergent $\omega$-dependence discussed in the
previous paragraph is more amenable to the gauge freedom. Indeed, in the case
of the atom, the first divergence (around the third peak in the optical
spectrum) turns out to be related to the exact adiabatic $f_{\text{xc}}$ with
a gauge transform. Hence, the exact adiabatic approximation is able to
describe the third peak in the optical spectrum, despite the significant
frequency dependence of $f_{\text{xc}}$ around this peak.
As noted earlier in this section, the simple non-interacting kernel
$f_{\text{xc}}=-f_{\text{H}}$ often yields surprisingly good spectra, provided
that the exact Kohn-Sham potential, or a good approximation to it, is used to
calculate $\chi_{0}$. This is in part due to the fact that the exact Kohn-Sham
transitions are, in certain circumstances, good approximations to the
interacting transitions [70]. By extension, in practical calculations, effort
may be usefully devoted to improving approximations used for $f_{\text{xc}}$
and $v_{\text{xc}}$ individually, without an overriding need to maintain one
as the functional derivative of the other. The quality of $v_{\text{xc}}$ is
of particular importance, as previous authors have observed in specific cases
[16, 15, 68, 69, 70].
In systems where predictive spectral accuracy beyond that provided by the
simplest kernels is required, note should be taken of the intricate spatial
non-locality and analytic structure as a function of $\omega$ exhibited by the
exact kernels calculated in this paper. Approximate kernels that are spatially
local (whether local-density or not), and/or exhibit no more than a gentle
variation with $\omega$, are unlikely to prove adequate for calculating
optical spectra and other aspects of the density response function. A fruitful
direction appears to be kernels that are obtained by making a connection
between the time-dependent DFT description and some level of many-body
perturbation theory, such as the kernel obtained from the Bethe-Salpeter
equation presented in [26], since non-locality and frequency dependence emerge
automatically from even the simplest level of many-body perturbation theory.
###### Acknowledgements.
The authors thank Michael Hutcheon and Matt Smith for helpful discussions.
N.D.W is supported by the EPSRC Centre for Doctoral Training in Computational
Methods for Materials Science for funding under grant number EP/L015552/1.
## References
* Martin _et al._ [2016] R. M. Martin, L. Reining, and D. M. Ceperley, _Interacting Electrons Theory and Computational Approaches_ (Cambridge University Press, 2016).
* Ullrich [2012] C. A. Ullrich, _Time-dependent density-functional theory : concepts and applications_ (Oxford University Press, 2012) p. 526.
* Marques _et al._ [2006] M. A. Marques, C. A. Ullrich, F. Nogueira, A. Rubio, K. Burke, and E. K. U. Gross, eds., _Time-Dependent Density Functional Theory_, Lecture Notes in Physics, Vol. 706 (Springer Berlin Heidelberg, Berlin, Heidelberg, 2006).
* Ullrich and Hui Yang [2014] C. A. Ullrich and Z. Hui Yang, Brazilian Journal of Physics 44, 154 (2014).
* Burke _et al._ [2005] K. Burke, J. Werschnik, and E. K. Gross, Journal of Chemical Physics 123, 062206 (2005).
* Casida and Huix-Rotllant [2012] M. Casida and M. Huix-Rotllant, Annual Review of Physical Chemistry 63, 287 (2012).
* Marques and Gross [2004] M. Marques and E. Gross, Annual Review of Physical Chemistry 55, 427 (2004).
* Gross and Kohn [1990] E. Gross and W. Kohn, in _Density Functional Theory of Many-Fermion Systems_, Advances in Quantum Chemistry, Vol. 21, edited by P.-O. Löwdin (Academic Press, 1990) pp. 255–291.
* Runge and Gross [1984] E. Runge and E. K. Gross, Physical Review Letters 52, 997 (1984).
* van Leeuwen [1999] R. van Leeuwen, Physical Review Letters 82, 3863 (1999).
* Onida _et al._ [2002] G. Onida, L. Reining, and A. Rubio, Reviews of Modern Physics 74, 601 (2002).
* Görling [2019] A. Görling, Physical Review B 99, 235120 (2019).
* Olsen _et al._ [2019] T. Olsen, C. E. Patrick, J. E. Bates, A. Ruzsinszky, and K. S. Thygesen, npj Computational Materials 5, 2057 (2019).
* Note [1] Hartree atomic units $m_{e}=\hbar=e=4\pi\varepsilon_{0}=1$ are used throughout.
* Botti _et al._ [2007] S. Botti, A. Schindlmayr, R. Del Sole, and L. Reining, Reports on Progress in Physics 70, 357 (2007).
* Ullrich and Yang [2016] C. A. Ullrich and Z. H. Yang, Topics in Current Chemistry 368, 185 (2016).
* Gavrilenko and Bechstedt [1997] V. I. Gavrilenko and F. Bechstedt, Phys. Rev. B 55, 4343 (1997).
* Corradini _et al._ [1998] M. Corradini, R. Del Sole, G. Onida, and M. Palummo, Phys. Rev. B 57, 14569 (1998).
* Moroni _et al._ [1995] S. Moroni, D. M. Ceperley, and G. Senatore, Phys. Rev. Lett. 75, 689 (1995).
* Maitra _et al._ [2002] N. T. Maitra, K. Burke, and C. Woodward, Phys. Rev. Lett. 89, 023002 (2002).
* Maitra _et al._ [2004] N. T. Maitra, F. Zhang, R. J. Cave, and K. Burke, The Journal of Chemical Physics 120, 5932 (2004).
* Elliott _et al._ [2011] P. Elliott, S. Goldson, C. Canahui, and N. T. Maitra, Chemical Physics 391, 110 (2011), arXiv:1101.3379 .
* Cave _et al._ [2004] R. J. Cave, F. Zhang, N. T. Maitra, and K. Burke, Chemical Physics Letters 389, 39 (2004).
* Reining _et al._ [2002] L. Reining, V. Olevano, A. Rubio, A. Rubio, and G. Onida, Physical Review Letters 88, 4 (2002).
* Marini _et al._ [2003] A. Marini, R. Del Sole, and A. Rubio, Physical Review Letters 91, 256402 (2003), arXiv:0310495 [cond-mat] .
* Romaniello _et al._ [2009] P. Romaniello, D. Sangalli, J. A. Berger, F. Sottile, L. G. Molinari, L. Reining, and G. Onida, Journal of Chemical Physics 130, 044108 (2009).
* Hellgren and Von Barth [2009] M. Hellgren and U. Von Barth, Journal of Chemical Physics 131, 044110 (2009).
* Hellgren and von Barth [2008] M. Hellgren and U. von Barth, Phys. Rev. B 78, 115107 (2008).
* Kim and Görling [2002] Y.-H. Kim and A. Görling, Phys. Rev. Lett. 89, 096402 (2002).
* Görling [1999] A. Görling, Phys. Rev. Lett. 83, 5459 (1999).
* Görling [1998] A. Görling, Physical Review A - Atomic, Molecular, and Optical Physics 57, 3433 (1998).
* Del Sole _et al._ [2003] R. Del Sole, G. Adragna, V. Olevano, and L. Reining, Physical Review B - Condensed Matter and Materials Physics 67, 045207 (2003).
* Botti _et al._ [2004] S. Botti, F. Sottile, N. Vast, V. Olevano, L. Reining, H.-C. Weissker, A. Rubio, G. Onida, R. Del Sole, and R. W. Godby, Phys. Rev. B 69, 155112 (2004).
* Ruggenthaler _et al._ [2013] M. Ruggenthaler, S. E. Nielsen, and R. Van Leeuwen, Physical Review A - Atomic, Molecular, and Optical Physics 88, 022512 (2013).
* Maitra and Tempel [2006] N. T. Maitra and D. G. Tempel, Journal of Chemical Physics 125, 184111 (2006).
* Gritsenko and Jan Baerends [2009] O. V. Gritsenko and E. Jan Baerends, Physical Chemistry Chemical Physics 11, 4640 (2009).
* Aryasetiawan and Gunnarsson [2002] F. Aryasetiawan and O. Gunnarsson, Phys. Rev. B 66, 165119 (2002).
* Carrascal _et al._ [2018] D. J. Carrascal, J. Ferrer, N. Maitra, and K. Burke, European Physical Journal B 91, 142 (2018).
* Turkowski and Rahman [2014] V. Turkowski and T. S. Rahman, Journal of Physics Condensed Matter 26, 6 (2014).
* Fuks and Maitra [2014] J. I. Fuks and N. T. Maitra, Phys. Chem. Chem. Phys. 16, 14504 (2014).
* Thiele and Kümmel [2009] M. Thiele and S. Kümmel, Physical Review A - Atomic, Molecular, and Optical Physics 80, 012514 (2009).
* Thiele and Kümmel [2014] M. Thiele and S. Kümmel, Physical Review Letters 112, 083001 (2014).
* Entwistle and Godby [2019] M. T. Entwistle and R. W. Godby, Physical Review B 99, 161102 (2019).
* Hodgson _et al._ [2013] M. J. Hodgson, J. D. Ramsden, J. B. Chapman, P. Lillystone, and R. W. Godby, Physical Review B - Condensed Matter and Materials Physics 88, 241102 (2013).
* Ruggenthaler _et al._ [2015] M. Ruggenthaler, M. Penz, and R. van Leeuwen, Journal of Physics: Condensed Matter 27, 203202 (2015).
* Tarantola [2005] A. Tarantola, _Inverse Problem Theory and Methods for Model Parameter Estimation_ (Society for Industrial and Applied Mathematics, 2005).
* Van Leeuwen [2001] R. Van Leeuwen, International Journal of Modern Physics B 15, 1969 (2001).
* Mearns and Kohn [1987] D. Mearns and W. Kohn, Physical Review A 35, 4796 (1987).
* Note [2] For example, the constant perturbation $\delta v=c(\omega)$ oscillating with frequency $\omega$ produces no response in the density $\delta n$ at all orders, including at first order.
* Note [3] We observed this situation in a double-well system that we explored as background to the present study.
* Maitra [2017] N. T. Maitra, Journal of Physics Condensed Matter 29, 423001 (2017).
* Ghosez _et al._ [1997] P. Ghosez, X. Gonze, and R. Godby, Physical Review B - Condensed Matter and Materials Physics 56, 12811 (1997).
* Byun _et al._ [2020] Y.-M. Byun, J. Sun, and C. A. Ullrich, Electronic Structure 2, 023002 (2020).
* Note [4] URL to be inserted.
* Note [5] The formal definition of the effective null space is $\text{Null}_{\text{eff}}(\chi_{0})=\text{Span}(\\{|u_{i}^{\text{KS}}\rangle\ |\ |\lambda^{\text{KS}}_{i}|<\bar{\lambda}\\})$.
* Ben-Israel and Greville [2003] A. Ben-Israel and T. N. E. Greville, _Generalized Inverses_ (Springer-Verlag, 2003).
* Hellgren and Gross [2012] M. Hellgren and E. K. Gross, Physical Review A - Atomic, Molecular, and Optical Physics 85, 022514 (2012).
* Note [6] Note that the quantities $\langle ij|f_{\text{xc}}(\omega)|kl\rangle$, where $(i,j,k,l)$ label indices of single-particle wavefunctions, are unique [27], and it is these quantities that form the input to the Casida equation [72], for example.
* Godby and Needs [1989] R. W. Godby and R. J. Needs, Physical Review Letters 62, 1169 (1989).
* Wagner _et al._ [2012] L. O. Wagner, Z.-h. Yang, and K. Burke, in _Fundamentals of Time-Dependent Density Functional Theory_ (Springer-Verlag Berlin Heidelberg, 2012) pp. 101–123.
* Appel _et al._ [2003] H. Appel, E. K. Gross, and K. Burke, Physical Review Letters 90, 4 (2003).
* Savin _et al._ [1998] A. Savin, C. J. Umrigar, and X. Gonze, Chemical Physics Letters 288, 391 (1998).
* Al-Sharif _et al._ [1998] A. I. Al-Sharif, R. Resta, and C. J. Umrigar, Physical Review A - Atomic, Molecular, and Optical Physics 57, 2466 (1998).
* Perdew _et al._ [1982] J. P. Perdew, R. G. Parr, M. Levy, and J. L. Balduz, Phys. Rev. Lett. 49, 1691 (1982).
* Entwistle _et al._ [2018] M. T. Entwistle, M. Casula, and R. W. Godby, Phys. Rev. B 97, 235143 (2018).
* Note [7] The interacting response function $\chi$ diverges at an interacting excitation, but this does not, in a finite basis, prevent an eigenvalue from crossing zero at this energy under certain circumstances that are set out in the supplemental material.
* Note [8] It is possible that the particular form of the non-local $f_{\text{xc}}$ functionals considered in [73, 13], in which $f_{\text{xc}}(x,x^{\prime})=f_{\text{xc}}[(n(x)+n(x^{\prime}))/2]$, can capture certain features of the adiabatic non-local response depicted in Fig. 9 and Fig. 10.
* Marques and Gross [2003] M. A. L. Marques and E. K. U. Gross, in _A Primer in Density Functional Theory_ (Springer, Berlin, Heidelberg, 2003) pp. 144–184.
* Marques _et al._ [2001] M. A. Marques, A. Castro, and A. Rubio, Journal of Chemical Physics 115, 3006 (2001).
* Petersilka _et al._ [2000] M. Petersilka, E. K. U. Gross, and K. Burke, International Journal of Quantum Chemistry 80, 534 (2000).
* Liebsch [1985] A. Liebsch, Physical Review B 32, 6255 (1985).
* Casida [1995] M. E. Casida, _Recent Advances in Density Functional Methods_ (World Scientific Publishing, 1995) pp. 155–192.
* Olsen and Thygesen [2014] T. Olsen and K. S. Thygesen, Physical Review Letters 112, 203001 (2014).
# Peptipedia: a comprehensive database for peptide research supported by
assembled predictive models and Data Mining approaches
Cristofer Quiroz Facultad de Ingeniería, Universidad Autonóma de Chile, Cinco
Pte. 1670, Talca, 3467987, Chile. Yasna Barrera Saavedra Escuela de
Ingeniería en Bioinformática, Universidad de Talca, Avenida Lircay SN,
3460000, Talca, Chile. Benjamín Armijo-Galdames Centre for Biotechnology and
Bioengineering, University of Chile, Beauchef 851, Santiago, 8370456, Chile.
Department of Chemical Engineering, Biotechnology and Materials, University of
Chile, Beauchef 851, Santiago, 8370456, Chile. Juan Amado-Hinojosa Centre
for Biotechnology and Bioengineering, University of Chile, Beauchef 851,
Santiago, 8370456, Chile. Department of Chemical Engineering, Biotechnology
and Materials, University of Chile, Beauchef 851, Santiago, 8370456, Chile.
Álvaro Olivera-Nappa Centre for Biotechnology and Bioengineering, University
of Chile, Beauchef 851, Santiago, 8370456, Chile. Department of Chemical
Engineering, Biotechnology and Materials, University of Chile, Beauchef 851,
Santiago, 8370456, Chile. Anamaria Sanchez-Daza David Medina-Ortiz Centre
for Biotechnology and Bioengineering, University of Chile, Beauchef 851,
Santiago, 8370456, Chile.
###### Abstract
Motivation: Peptides have attracted attention in this century due to their
remarkable therapeutic properties. Computational tools are being developed to
take advantage of existing information, encapsulating knowledge and making it
available in a simple way for general public use. However, existing resources
are property-specific, contain redundant data, and usually do not display the
data clearly. In some cases, downloading the information is not even possible.
These data need to be available in a simple form for drug design and other
biotechnological applications.
Results: We developed Peptipedia, a user-friendly database and web application
to search, characterise and analyse peptide sequences. Our tool integrates the
information from thirty previously reported databases, making it the largest
repository of peptides with recorded activities to date. In addition, we
implemented a variety of services to increase our tool's usability. The
significant differences between our tool and other existing alternatives make
it a substantial contribution to the development of biotechnological and
bioengineering applications for peptides.
Availability: Peptipedia is available for non-commercial use as an open-access
software, licensed under the GNU General Public License, version GPL 3.0. The
web platform is publicly available at pesb2.cl/peptipedia. Both the source
code and sample datasets are available in the GitHub repository
https://github.com/CristoferQ/PeptideDatabase.
Contact: <EMAIL_ADDRESS>; <EMAIL_ADDRESS>
_Keywords_: Protein Engineering $\cdot$ predictive models $\cdot$ peptide
databases $\cdot$ machine-learning algorithms $\cdot$ digital signal
processing $\cdot$ assembled models
## INTRODUCTION
Peptides play a crucial role as signaling molecules, encompassing diverse
therapeutic activities such as antimicrobial, antitumoral, hormone
replacement, anti-inflammatory and antihypertensive action (Lau and Dunn,
2018; Lien and Lowman, 2003). Peptides are polymers that can be found in
natural sources or obtained synthetically; they consist of at least 2 amino
acids, and their maximum length is usually set to 50-100 amino acids. However,
there appears to be no consensus on the maximum number of amino acids a
sequence may contain and still be considered a peptide rather than a protein
(Morrison and Boyd, 1973; Latham, 1999; Lien and Lowman, 2003; Uhlig et al.,
2014).
As therapeutic agents, peptides are especially attractive because they exhibit
high biological activity and specificity, reduced side effects and low
toxicity. Nevertheless, peptides have some disadvantages compared with other
molecules, such as high synthesis cost and low stability due to the lack of
tertiary structure, which makes them particularly susceptible to enzymatic
degradation, and difficulty in crossing biological membranes due to their high
polarity, molecular weight, and hydrophilicity (Vlieghe et al., 2010; Uhlig et
al., 2014).
Despite these disadvantages, research interest in peptides has increased,
resulting in a significant accumulation of new peptide sequences together with
their related activities and properties. This has brought to the market over
70 peptides approved as therapeutics in the US, Europe, and Japan, with more
than 200 in clinical trials and more than 600 in pre-clinical tests
(Srivastava, 2019; Usmani et al., 2017; Lau and Dunn, 2018).
One of the most significant trends in recent times is "drug discovery": the
identification of new drugs or new functionalities for specific targets. In
this context, computational approaches are continually developed as support
tools for biological fields, where methodologies based on Machine Learning and
Data Mining have become relevant (Wu et al., 2019; Basith et al., 2020).
However, these techniques require prior knowledge, which can be obtained from
biological databases that accumulate information on molecules and their
characteristics. These data of interest can be collected and processed to
develop a tool for solving a specific problem.
Several dedicated databases have emerged for grouping peptides, mostly
according to their activities (e.g., antimicrobial: APD3 (Wang et al., 2016),
antituberculosis: AntiTbPdb (Usmani et al., 2018), antihypertensive: AHTPDB
(Kumar et al., 2015)) or source of origin (e.g., plant: PlantPepDB (Das et
al., 2020), bacterial: BACTIBASE (Hammami et al., 2010), anuran: DADP
(Novković et al., 2012)). The first web-based databases including peptides
were reported in 1998, as reviewed by Tossi and Sandri (2002), followed by
SYFPEITHI, JenPep, FIMM and the HIV database (Rammensee et al., 1999; Blythe
et al., 2002; Schönbach et al., 2000; Korber et al., 1998). Then in 2003, the
Antimicrobial Peptide Database (APD) appeared and has been continuously
updated, although its link is currently down (Wang and Wang, 2004; Wang et
al., 2016); since then, around 40 peptide databases have arisen.
Each database is useful in its specific context, but a comprehensive,
integrated database focused on peptides has not been available so far.
Moreover, many of the databases present issues that hinder their usability.
Most do not indicate their last update and, upon review, appear not to have
been updated since their launch, with the exceptions of DRAMP, AllergenOnline,
BactPepDB, DBAASP, ConoServer and APD. Other sites are no longer reachable:
PenBase and ANTIMIC. Almost all databases contain redundant sequences (see
section 1 of the Supplementary Information). Others require an informatics
background, making them unfriendly for users without advanced computational
skills. Many do not provide a download tool (YADAMP, Quorumpeps database,
DADP, BIOPEP, BioDADpep, Péptaibol); for others, the download tool is not
working (PepBank, StraPep, PeptideDB, BactPepDB, MHCBN, ForPep, CancerPPD).
Peptipedia was developed to fulfill the needs that each database cannot meet
separately. We have implemented a user-friendly web application with a new
database that encompasses the highest number of peptide sequences with
reported activity, curated from 30 existing peptide databases. Peptipedia
classifies the reported activity of each peptide into categories and
subcategories defined according to our analysis and the literature (Kastin,
2013).
Our application is more than a database compilation: it is the most extensive
integrated peptide persistent-storage system to date. This user-friendly
platform also includes an estimator of useful physicochemical and statistical
properties of peptides, amino acid sequence characterisation, and a tool for
Machine Learning-based activity prediction for a query peptide.
## Methods
### Collection, preprocessing, characterisation, and database generation
We consolidated the information for Peptipedia by integrating data from
different computational tools and previously reported databases, such as APD
(Wang et al., 2016), LAMP (Zhao et al., 2013), and UniProt (Consortium, 2015),
among others (see section 1 of the Supplementary Information for more
details). First, we manually downloaded the sequences from each tool and
processed them independently, generating different CSV files to facilitate
their manipulation. We filtered the sequences according to their length,
considering a minimum of 2 residues and a maximum of 150. Second, we generated
a single file with all sequences, eliminating redundancy between them. For
each sequence, we searched for its activities, using the information from all
databases employed to develop our web information system. It is important to
note that taxonomic and structural information, as well as specific
information for particular activities (such as IC50 measurements and
experiments), was also included in Peptipedia. Furthermore, the sequences were
categorized depending on whether they present modifications or non-canonical
residues. We then used the modlAMP library (Müller et al., 2017) to
characterise the peptides based on physicochemical and thermodynamic
properties. Statistical properties were obtained for each sequence using the
DMAKit-Lib library (Medina-Ortiz et al., 2020b). Finally, the amino acid
frequency of each sequence was obtained through scripts implemented in Python
3. The processed information is stored in a NoSQL database, using MongoDB as a
handler due to its manipulation characteristics, information extraction speed
and scalability.
### Strategies for classification systems
Most sequences report a specific activity in terms of their biochemical roles
and/or biological effects, especially in humans. We noted that a significant
number of peptides are used or were designed for therapeutic purposes, but
there are seven other types of peptide activity that cannot be classified as
therapeutic. Consequently, we classify all peptides into eight categories:
(1) ’therapeutic’, (2) ’immunological’, (3) ’sensorial’, (4) ’neurological’,
(5) ’drug delivery vehicle’, (6) ’transit’, (7) ’propeptide’ and (8) ’signal’.
Each category has subclassifications within it. However, a small group of
peptides has particular activities, so we place them in the category (9)
’other activity’. All peptides with no reported activity are in the category
(10) ’no activity reported’.
One of the essential services of Peptipedia is the activity classification
system for peptide sequences based on Machine Learning strategies. The
training of the models was based on supervised learning algorithms combined
with sequence-coding approaches using physicochemical properties and Digital
Signal Processing, according to the strategies proposed by Medina-Ortiz et al.
(2020a). In this way, we generated assembled binary models that recognize
activities of peptide sequences employing the categories proposed in this
work. The training process was based on developing binary data sets that
evaluate two classes: presence or absence of activity. Additionally, we
generated each data set using the one-vs-rest strategy, keeping class
imbalance to a minimum. Finally, for models with low performance, we used
recursive binary partition strategies, according to the method proposed by
Medina-Ortiz et al. (2020c), to improve the performance of the assembled
classification models.
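The one-vs-rest training scheme described above can be sketched with scikit-learn; the feature encoding and the choice of classifier are placeholders for the physicochemical/DSP descriptors and assembled models actually used:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_one_vs_rest(features, label_sets, categories):
    """One binary model per activity category: positives are peptides with
    that activity, negatives are everything else (one-vs-rest)."""
    models = {}
    for cat in categories:
        y = np.asarray([cat in labels for labels in label_sets], dtype=int)
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        models[cat] = clf.fit(features, y)
    return models

def predict_activities(models, x):
    """Report every category whose binary model fires for a query peptide."""
    x = np.asarray(x, dtype=float).reshape(1, -1)
    return [cat for cat, clf in models.items() if clf.predict(x)[0] == 1]
```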
### Implementation and Availability
Peptipedia was designed using the Model View Controller (MVC) design pattern.
The view component and the controllers were implemented in JavaScript through
the Express framework. Display components were optimised using Bootstrap 4.
All model members, including all services provided by the tool proposed in
this work, were developed in Python 3, supported by the DMAKit-Lib
(Medina-Ortiz et al., 2020b) and Scikit-learn (Pedregosa et al., 2011)
libraries. Both the proposed software architecture and implementation features
are detailed in section 2 of the Supplementary Information.
## Results and Discussion
Figure 1: Representative scheme of the construction and characteristics of Peptipedia.
Peptipedia is a computational tool for peptide sequence analysis. The
information presented by our tool was consolidated from 30 databases,
considering information on the sequence, taxonomy, and different properties of
stored peptides. Searching for sequences and relevant information in our web
application is easy, personalised and intuitive, and allows users to download
the information in multiple formats. Peptipedia has enabled different tools that
will help characterise and analyse sequences, as well as functionalities
supported by Machine Learning methods that facilitate the development of
predictive models and an activity predictor system.
Peptipedia is a user-friendly web application system to search, analyse,
evaluate and characterise peptide sequences using different strategies,
Machine Learning, and Data Mining techniques. This web tool has a NoSQL
database system with 92,055 peptides registered and described, being the most
extensive database of peptide sequences with activities reported to date. This
tool reports different types of information for each sequence, considering
structural, physicochemical, and phylogenetic properties. Additionally, the
various activities previously identified for each peptide are reported and so
are the databases or repositories where they were extracted from. Finally,
statistical properties related to the percentage of residues for each sequence
and the average per category are included in the database, providing
interesting, useful, and easy-to-understand information for scientists and
researchers (see Figure 1).
### Relevant tools and services available in Peptipedia
#### Searches, Visualization and Downloads
Different types of searches can be generated in Peptipedia, either with the
sequence or through information related to its activity, physicochemical
properties, frequency of residues, among other relevant information. Besides,
it is possible to apply different filters to generate a personalised
exploration for the user’s interest.
We provide a general summary for each search, showing statistical descriptions
and various visualizations to display the information. Furthermore, we present
specific details for each peptide, including thermodynamic properties,
taxonomy, phylogeny, activity and sequence descriptors; we also show the
databases where the peptide sequence was previously reported. Remarkably,
Peptipedia offers specific information such as IC50 values, assay
information, evaluated organisms and other relevant characteristics for
particular activities such as the antihypertensive, anti-HIV, and antiviral
subcategories.
Peptipedia has general and specific modules for downloading data, making it
easier to obtain information, facilitating the download in CSV, Fasta, and
JSON formats. Besides, our tool enables the complete database download in
easily manipulable forms, considering both the sequence and its reported
information.
#### Services
Different services were implemented in Peptipedia to facilitate the analysis
and characterisation of peptide sequences. We propose various services that
allow characterisation through physicochemical and thermodynamic properties, using
the ModLamp (Müller et al.,, 2017) library. We also provide modules that
enable the estimation of statistical properties for peptide sequences.
Bioinformatic tools such as sequence alignments are available in our web
tool: using the Edlib library (Sosic and Sikic, 2017), it is possible to
align any sequence against those registered in our database.
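The quantity Edlib computes is the edit distance between two sequences; a minimal pure-Python sketch of that computation, and of ranking database hits by it, is shown below (Edlib itself implements a much faster bit-vector algorithm in C/C++, and the sequences here are toy examples):

```python
def edit_distance(query, target):
    """Classic dynamic-programming Levenshtein distance between two
    peptide sequences (Edlib computes the same quantity, much faster)."""
    prev = list(range(len(target) + 1))
    for i, a in enumerate(query, 1):
        curr = [i]
        for j, b in enumerate(target, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (a != b)))  # substitution
        prev = curr
    return prev[-1]

# Align a query against every registered sequence and rank the hits.
database = ["GIGKFLHSAKKFGKAFVGEIMNS", "GLFDIIKKIAESF", "RRRRRRRR"]
hits = sorted(database, key=lambda s: edit_distance("GLFDIIKKIAESI", s))
```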
Another relevant service is the peptide activity classification system supported
by assembled predictive models: the user can upload a list of amino acid
sequences, and our tool classifies them by the categories proposed in this
work, evaluating each of them. Furthermore, a peptide encoding service is
implemented using common strategies such as One Hot Encoder and more
sophisticated ones such as Embedding through the Tape library.
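A One Hot Encoder for peptides can be sketched as below; the padding length and the flat-vector layout are illustrative choices, not the tool's exact encoding:

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 canonical residues

def one_hot_encode(sequence, max_length=30):
    """Encode a peptide as a flat binary vector: one 20-dimensional
    indicator per residue, zero-padded up to `max_length` positions."""
    vector = []
    for position in range(max_length):
        row = [0] * len(AMINO_ACIDS)
        if position < len(sequence):
            row[AMINO_ACIDS.index(sequence[position])] = 1
        vector.extend(row)
    return vector

encoded = one_hot_encode("ACDG")
```

Learned embeddings (e.g. via TAPE) replace these sparse indicator rows with dense, pretrained per-residue vectors, but the sequence-to-matrix framing is the same.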
Finally, Peptipedia allows the generation of predictive models for sets of
peptides with specific user requirements through supervised learning
algorithms and cross-validation techniques. Configuration of hyperparameters,
coding strategy and validation method are selectable. The tool reports the
performance of the generated model by the user, allowing the download to use
it locally. Besides, this service enables the interpretation of the results
giving different recommendations about them.
### Peptides registered, categories, and relevant information in Peptipedia
We developed the largest database of peptides with reported activity to date,
with a total of 92,055 records. Considering the information on previously
reported activities and the specific properties of each peptide, we propose a
system of ten categories, which present sub-categories according to the
features of the activities that constitute them. Using these categories, we
analysed the peptide sequences, identifying therapeutic peptides, signal
peptides, and sensory-activity peptides as those with the highest prevalence
in our records, while immunological, transit, and neurological activities
have the fewest records (see Figure 2 A).
It is important to highlight the moonlighting characteristics of peptides.
This feature is the feasibility of a peptide to present different activities
at the same time (Jeffery, 1999). The main moonlighting tendencies found are
between therapeutic and sensorial peptides, and between propeptides and
signal peptides. This last overlap of activities makes sense because
propeptides generally contain a signal peptide in their sequence (Wang et
al., 2018), which they lose once processed (see Figure 2 B). Such properties
reflect the potential of a peptide when acting as a drug or presenting
different biotechnological uses, making them interesting to study. Residue
frequency analysis allows evaluating amino acid trends for particular
activities. We compared trends for the main reported categories, finding a
clear preference for arginine residues in drug delivery peptides; this can be
explained because these peptides are usually designed to cross membranes and
therefore need a chemical affinity for negatively charged membranes, which is
given by the positive charge of arginine. In contrast, signal, transit and
propeptides generally show similar trends, with no other major visible
patterns identified (see Section 4 in Supplementary Information).
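The residue-frequency comparison can be sketched as follows; the sequences are illustrative examples, not actual database entries:

```python
from collections import Counter

def residue_frequencies(sequences):
    """Average per-residue frequency (%) across a set of peptides."""
    counts = Counter()
    total = 0
    for seq in sequences:
        counts.update(seq)
        total += len(seq)
    return {aa: 100 * counts[aa] / total for aa in sorted(counts)}

# Arginine-rich toy examples standing in for drug delivery peptides.
drug_delivery = ["RRRRRRRR", "GRKKRRQRRR"]
freqs = residue_frequencies(drug_delivery)
```

Comparing such frequency dictionaries between categories is what reveals the arginine preference described above.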
Figure 2: Visualisation of the peptides registered in Peptipedia. A:
distribution of peptides according to the categories proposed in this work.
B: analysis of the relationship of
simultaneous activities for the same type of peptide; the most significant
trends are seen between therapeutics and sensorial, and between propeptides
and signal.
### Binary classification categories supported by Assembled Models
Using the coding of physicochemical properties and their representation in
frequency space (Medina-Ortiz et al., 2020a) and employing recursive binary
division strategies to optimise performance measures (Medina-Ortiz et al.,
2020c), we developed 44 assembled binary models for the classification of
peptide sequence activity, considering the categories and subcategories
proposed in this work. We used k-fold cross-validation to avoid model
overfitting. Remarkably, all the models generated presented an accuracy of
over 83% (see Table 1 and Section 5 of the Supplementary Information for
details).
We previously compared the results obtained by applying this type of
strategy against classical sequence coding methods, demonstrating better
results (Medina-Ortiz et al., 2020a). Furthermore, we compared our results
with previously developed classification models for peptide sequences. Xiao
et al. (2013) proposed a classification system for antimicrobial peptides
with 86% accuracy; for the same task, our model achieves a performance of
88.7%. Similarly, Yi et al. (2019) proposed a classification system for
anticancer peptides using Deep Learning Long Short-Term Memory strategies,
achieving an accuracy of 81.48%, while our model achieves 83.54%. Another
relevant example is identifying quorum sensing peptides (QSPs): Rajput et al.
(2015) proposed an identification system for QSPs based on sequence features
combined with support vector machine algorithms, obtaining 93% accuracy; our
accuracy is slightly lower for these peptides, reaching 86.4%. Even though we
present lower performance than previously developed methods in particular
situations, the proposed strategy is generic and can be applied to peptide
activity classification problems, property prediction, and multiple issues in
protein engineering (Medina-Ortiz et al., 2020a). Notably, we validated all our
models using statistical methods. Each data set was created by selecting
random samples and repeating this process 100 times, providing statistical
support and demonstrating the robustness of the activity classification models
implemented in Peptipedia.
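The fold construction and the repeated random-sampling validation described above can be sketched as follows; this is a simplified illustration of the procedure, not Peptipedia's actual pipeline, and the toy scoring function is hypothetical:

```python
import random

def k_fold_indices(n, k, seed=0):
    """Shuffle sample indices and split them into k roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def repeated_validation(evaluate, n, k=5, repetitions=100):
    """Repeat k-fold cross-validation with freshly sampled folds each
    time and report the mean score, giving statistical support."""
    scores = []
    for rep in range(repetitions):
        for fold in k_fold_indices(n, k, seed=rep):
            test_set = set(fold)
            train = [i for i in range(n) if i not in test_set]
            scores.append(evaluate(train, fold))
    return sum(scores) / len(scores)

# Toy evaluation: the "score" is just the fraction of data used for training.
mean_score = repeated_validation(lambda tr, te: len(tr) / 10, n=10)
```

In the real setting, `evaluate` would train an assembled binary classifier on the training indices and return its accuracy on the held-out fold.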
# | Category | Size dataset | Weighted Performance
---|---|---|---
1 | Sensorial Peptides | 19982 | 85.27
2 | Drug Delivery | 4912 | 86.02
3 | Therapeutic | 50000 | 87.32
4 | Neurological | 2712 | 89.33
5 | Immunological | 2178 | 86.12
6 | Other Activity | 490 | 82.98
7 | Transit Peptide | 1350 | 88.48
8 | Signal Peptide | 26794 | 86.41
9 | Propeptide | 17768 | 88.63
Table 1: Weighted performance for binary classification models for the nine
main categories proposed in this work.
### Using Peptipedia to develop predictive models
The study of anti-HIV peptides is relevant because of their potential
therapeutic applications. They interact with a specific domain of
glycoprotein 41, which is their pharmacological target for inhibiting virus
fusion and entry into the host cell. Different efforts have focused on
designing new sequences, either through traditional techniques such as
directed evolution or through rational design strategies. Both strategies
currently benefit from the application of Machine Learning, since it
facilitates simulating the effects of new variants. To demonstrate the
usability of Peptipedia, we implemented an IC50 predictive model for anti-HIV
peptides, because IC50 is a crucial parameter for assessing the performance
of antimicrobial and antiviral drugs. First, using the sequence search
engine, we identified all the peptides in this category. We manually
downloaded and filtered those with a quantitative IC50 measurement,
discarding the cases in which it was expressed in terms of low, medium or
high effect, and standardising the measured values to the same units.
Subsequently, we used the Peptipedia predictive model training tool,
selecting digital signal processing with the alpha-structure property as the
coding strategy, Random Forest as the supervised learning algorithm, and
$k$-fold cross-validation with $k=10$. The tool reported the model's
performance, achieving a Pearson coefficient of 0.8 (see Figure 3 A).
Furthermore, Peptipedia allows us to analyse the randomness of the prediction
error to determine whether there are biases in the generated predictions (see
Figure 3 B). In this way, we are able to predict the therapeutic potency of a
new anti-HIV peptide without performing lab assays, which, combined with the
coding module, becomes powerful support for designing peptides with desirable
activities.
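The workflow above can be sketched with Scikit-Learn, which Peptipedia uses internally; the data here are synthetic placeholders, since the actual anti-HIV records and the alpha-structure signal encoding come from the web tool itself:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

# Illustrative data: each row stands in for a peptide already encoded
# through a physicochemical property; y stands in for standardised IC50.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 16))
y = X[:, 0] * 3 + rng.normal(scale=0.1, size=120)  # synthetic target

# Random Forest regressor with 10-fold cross-validation, as in the text.
model = RandomForestRegressor(n_estimators=100, random_state=0)
cv = KFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="r2")

model.fit(X, y)  # final model trained on the full filtered data set
```

Cross-validated scores estimate out-of-sample performance; the final fit on all data is what would be downloaded for local use.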
Figure 3: Predictive modelling of IC50 for anti-HIV peptides using
Peptipedia. A: scatter plot of predicted v/s real values, denoting the
performance of the predictive model. In general, there is no tendency to
over-predict or under-predict in any particular range, which shows that the cross-validation
strategies were correctly applied. B: histogram of the error distribution. The
probability of error analysis indicates no tendency for significant errors
that adversely alter the model predictions. The errors are mainly concentrated
between -5 and 5, which is quite acceptable considering the nature of the
entered values, where the largest reach 100 and the smallest are close to
zero.
## Conclusions
We designed and implemented Peptipedia, a web application supported by machine
learning algorithms and data mining strategies to characterise and analyse
peptide sequences. Additionally, our tool has the most extensive database of
peptides with activity reported so far, with a total of 92,055 amino acid
sequences integrated from thirty databases or repositories of previously
reported peptides. Peptipedia also provides tools that help with
characterisation, the computation of statistical properties, and
bioinformatic analyses supported by sequence alignments, as well as services
that facilitate the development of predictive models.
Additionally, the sequence and the reported activity information of the
registered peptides are integrated into a robust binary classification system,
implemented through Machine Learning strategies, allowing the prediction of
putative peptide activities. These services are useful as a preliminary step
before experimental work, enabling an activity screening of novel peptides
with unknown activity. Peptide design also benefits, since this tool helps to
find residue patterns based on their activity.
Both the usability and the wide range of services available in Peptipedia and
the robustness of the implemented predictive systems considerably improve the
current state of the art, making Peptipedia an attractive alternative to
existing traditional applications and solid support for research in peptide
engineering and its biotechnological applications.
## CODE AVAILABILITY
All code is available at the authors’ GitHub repository
https://github.com/CristoferQ/PeptideDatabase.
## ACKNOWLEDGEMENTS
This work was supported mainly by the Centre for Biotechnology and
Bioengineering - CeBiB (PIA project FB0001, ANID, Chile), Fondecyt 1180882
project, and Universidad de Magallanes for MAG1895 project. DM-O gratefully
acknowledges ANID, Chile, for Ph.D. fellowship 21181435. JA-H gratefully
acknowledges ANID, for Ph.D. fellowship 21182109. AS-D thanks PAI Programme
(I7818010006).
#### Conflict of interest statement.
None declared.
## References
* Basith et al., (2020) Basith, S., Manavalan, B., Hwan Shin, T., and Lee, G. (2020). Machine intelligence in peptide therapeutics: A next-generation tool for rapid disease screening. Medicinal research reviews, 40(4):1276–1314.
* Blythe et al., (2002) Blythe, M. J., Doytchinova, I. A., and Flower, D. R. (2002). Jenpep: a database of quantitative functional peptide data for immunology. Bioinformatics, 18(3):434–439.
* Consortium, (2015) Consortium, U. (2015). Uniprot: a hub for protein information. Nucleic acids research, 43(D1):D204–D212.
* Das et al., (2020) Das, D., Jaiswal, M., Khan, F. N., Ahamad, S., and Kumar, S. (2020). Plantpepdb: A manually curated plant peptide database. Scientific Reports, 10(1):1–8.
* Hammami et al., (2010) Hammami, R., Zouhir, A., Le Lay, C., Hamida, J. B., and Fliss, I. (2010). Bactibase second release: a database and tool platform for bacteriocin characterization. Bmc Microbiology, 10(1):1–5.
* Jeffery, (1999) Jeffery, C. J. (1999). Moonlighting proteins. Trends in biochemical sciences, 24(1):8–11.
* Kastin, (2013) Kastin, A. (2013). Handbook of biologically active peptides. Academic press.
* Korber et al., (1998) Korber, B., Moore, J., Brander, C., Walker, B., Haynes, B., and Koup, R. (1998). Hiv molecular immunology compendium. Los Alamos National Laboratory, Theoretical Biology and Biophysics, Los Alamos, NM.
* Kumar et al., (2015) Kumar, R., Chaudhary, K., Sharma, M., Nagpal, G., Chauhan, J. S., Singh, S., Gautam, A., and Raghava, G. P. (2015). Ahtpdb: a comprehensive platform for analysis and presentation of antihypertensive peptides. Nucleic acids research, 43(D1):D956–D962.
* Latham, (1999) Latham, P. W. (1999). Therapeutic peptides revisited. Nature biotechnology, 17(8):755–757.
* Lau and Dunn, (2018) Lau, J. L. and Dunn, M. K. (2018). Therapeutic peptides: Historical perspectives, current development trends, and future directions. Bioorganic & medicinal chemistry, 26(10):2700–2707.
* Lien and Lowman, (2003) Lien, S. and Lowman, H. B. (2003). Therapeutic peptides. Trends in biotechnology, 21(12):556–562.
* Medina-Ortiz et al., (2020a) Medina-Ortiz, D., Contreras, S., Amado-Hinojosa, J., Torres-Almonacid, J., Asenjo, J. A., Navarrete, M., and Olivera-Nappa, Á. (2020a). Combination of digital signal processing and assembled predictive models facilitates the rational design of proteins. arXiv preprint arXiv:2010.03516.
* Medina-Ortiz et al., (2020b) Medina-Ortiz, D., Contreras, S., Quiroz, C., Asenjo, J. A., and Olivera-Nappa, Á. (2020b). Dmakit: A user-friendly web platform for bringing state-of-the-art data analysis techniques to non-specific users. Information Systems, page 101557.
* Medina-Ortiz et al., (2020c) Medina-Ortiz, D., Contreras, S., Quiroz, C., and Olivera-Nappa, Á. (2020c). Development of supervised learning predictive models for highly non-linear biological, biomedical, and general datasets. Frontiers in Molecular Biosciences, 7.
* Morrison and Boyd, (1973) Morrison, R. and Boyd, R. (1973). Organic Chemistry 3rd Ed., 1973. Allyn and Bacon.
* Müller et al., (2017) Müller, A. T., Gabernet, G., Hiss, J. A., and Schneider, G. (2017). modlAMP: Python for antimicrobial peptides. Bioinformatics, 33(17):2753–2755.
* Novković et al., (2012) Novković, M., Simunić, J., Bojović, V., Tossi, A., and Juretić, D. (2012). Dadp: the database of anuran defense peptides. Bioinformatics, 28(10):1406–1407.
* Pedregosa et al., (2011) Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., et al. (2011). Scikit-learn: Machine learning in python. the Journal of machine Learning research, 12:2825–2830.
* Rajput et al., (2015) Rajput, A., Gupta, A. K., and Kumar, M. (2015). Prediction and analysis of quorum sensing peptides based on sequence features. PLoS One, 10(3):e0120066.
* Rammensee et al., (1999) Rammensee, H.-G., Bachmann, J., Emmerich, N. P. N., Bachor, O. A., and Stevanović, S. (1999). Syfpeithi: database for mhc ligands and peptide motifs. Immunogenetics, 50(3-4):213–219.
* Schönbach et al., (2000) Schönbach, C., Koh, J. L., Sheng, X., Wong, L., and Brusic, V. (2000). Fimm, a database of functional molecular immunology. Nucleic acids research, 28(1):222–224.
* Sosic and Sikic, (2017) Sosic, M. and Sikic, M. (2017). Edlib: a C/C ++ library for fast, exact sequence alignment using edit distance. Bioinformatics, 33(9):1394–1395.
* Srivastava, (2019) Srivastava, V., editor (2019). Peptide Therapeutics. Drug Discovery. The Royal Society of Chemistry.
* Tossi and Sandri, (2002) Tossi, A. and Sandri, L. (2002). Molecular diversity in gene-encoded, cationic antimicrobial polypeptides. Current pharmaceutical design, 8(9):743–761.
* Uhlig et al., (2014) Uhlig, T., Kyprianou, T., Martinelli, F. G., Oppici, C. A., Heiligers, D., Hills, D., Calvo, X. R., and Verhaert, P. (2014). The emergence of peptides in the pharmaceutical business: From exploration to exploitation. EuPA Open Proteomics, 4:58–69.
* Usmani et al., (2017) Usmani, S. S., Bedi, G., Samuel, J. S., Singh, S., Kalra, S., Kumar, P., Ahuja, A. A., Sharma, M., Gautam, A., and Raghava, G. P. (2017). Thpdb: Database of fda-approved peptide and protein therapeutics. PloS one, 12(7):e0181748.
* Usmani et al., (2018) Usmani, S. S., Kumar, R., Kumar, V., Singh, S., and Raghava, G. P. (2018). Antitbpdb: a knowledgebase of anti-tubercular peptides. Database, 2018.
* Vlieghe et al., (2010) Vlieghe, P., Lisowski, V., Martinez, J., and Khrestchatisky, M. (2010). Synthetic therapeutic peptides: science and market. Drug discovery today, 15(1-2):40–56.
* Wang et al., (2016) Wang, G., Li, X., and Wang, Z. (2016). Apd3: the antimicrobial peptide database as a tool for research and education. Nucleic acids research, 44(D1):D1087–D1093.
* Wang et al., (2018) Wang, J., Yin, T., Xiao, X., He, D., Xue, Z., Jiang, X., and Wang, Y. (2018). StraPep: a structure database of bioactive peptides. Database, 2018(bay038).
* Wang and Wang, (2004) Wang, Z. and Wang, G. (2004). Apd: the antimicrobial peptide database. Nucleic acids research, 32(suppl_1):D590–D592.
* Wu et al., (2019) Wu, Q., Ke, H., Li, D., Wang, Q., Fang, J., and Zhou, J. (2019). Recent progress in machine learning-based prediction of peptide activity for drug discovery. Current topics in medicinal chemistry, 19(1):4–16.
* Xiao et al., (2013) Xiao, X., Wang, P., Lin, W.-Z., Jia, J.-H., and Chou, K.-C. (2013). iamp-2l: a two-level multi-label classifier for identifying antimicrobial peptides and their functional types. Analytical biochemistry, 436(2):168–177.
* Yi et al., (2019) Yi, H.-C., You, Z.-H., Zhou, X., Cheng, L., Li, X., Jiang, T.-H., and Chen, Z.-H. (2019). Acp-dl: a deep learning long short-term memory model to predict anticancer peptides using high-efficiency feature representation. Molecular Therapy-Nucleic Acids, 17:1–9.
* Zhao et al., (2013) Zhao, X., Wu, H., Lu, H., Li, G., and Huang, Q. (2013). Lamp: a database linking antimicrobial peptides. PloS one, 8(6):e66557.
# Spatially Resolved Star Formation and Inside-out Quenching in the TNG50
Simulation and 3D-HST Observations
Erica J. Nelson,1,2 Sandro Tacchella,2 Benedikt Diemer,3 Joel Leja,4,5,6 Lars
Hernquist,2 Katherine E. Whitaker,7,8 Rainer Weinberger,2 Annalisa Pillepich,9
Dylan Nelson,10 Bryan A. Terrazas,2 Rebecca Nevin,2 Gabriel B. Brammer,8,11
Blakesley Burkhart,12,13 Rachel K. Cochrane,2 Pieter van Dokkum,14 Benjamin D.
Johnson,2 Federico Marinacci,15 Lamiya Mowla,16 Rüdiger Pakmor,19 Rosalind E.
Skelton,18 Joshua Speagle,16,18 Volker Springel,19 Paul Torrey,20 Mark
Vogelsberger,21 Stijn Wuyts22
1Department for Astrophysical and Planetary Science, University of Colorado,
Boulder, CO 80309, USA
2Center for Astrophysics | Harvard-Smithsonian, Cambridge, MA 02138, USA
3Department of Astronomy, University of Maryland, College Park, MD 20742, USA
4Department of Astronomy & Astrophysics, The Pennsylvania State University,
University Park, PA 16802, USA
5Institute for Computational & Data Sciences, The Pennsylvania State
University, University Park, PA, USA
6Institute for Gravitation and the Cosmos, The Pennsylvania State University,
University Park, PA 16802, USA
7Department of Astronomy, University of Massachusetts, Amherst, MA 01003, USA
8Cosmic Dawn Center (DAWN), Copenhagen, Denmark
9Max-Planck-Institut für Astronomie, Königstuhl 17, 69117 Heidelberg, Germany
10Universität Heidelberg, Zentrum für Astronomie, Institut für theoretische
Astrophysik, Albert-Ueberle-Str. 2, 69120 Heidelberg, Germany
11Niels Bohr Institute, University of Copenhagen, Jagtvej 128, København N,
DK-2200, Denmark
12Department of Physics and Astronomy, Rutgers University, 136 Frelinghuysen
Rd., Piscataway, NJ 08854, USA
13Center for Computational Astrophysics, Flatiron Institute, 162 5th Ave., New
York, NY 10010, USA
14Astronomy Department, Yale University, New Haven, CT 06511, USA
15Department of Physics and Astronomy “Augusto Righi", University of Bologna,
via Gobetti 93/2, 40129 Bologna, Italy
16Dunlap Institute for Astronomy & Astrophysics, University of Toronto,
Toronto, ON M5S 3H4, Canada
17South African Astronomical Observatory, Cape Town 7935, South Africa
18Department of Statistical Sciences, University of Toronto, Toronto, ON M5S
3G3, Canada
19Max-Planck-Institut für Astrophysik, 85740 Garching bei München, Germany
20Department of Astronomy, University of Florida, Gainesville, FL 32611, USA
21Department of Physics and Kavli Institute for Astrophysics and Space
Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
22Department of Physics, University of Bath, Claverton Down, Bath BA2 7AY, UK
E-mail<EMAIL_ADDRESS>
###### Abstract
We compare the star forming main sequence (SFMS) – both integrated and
resolved on 1kpc scales – between the high-resolution TNG50 simulation of
IllustrisTNG and observations from the 3D-HST slitless spectroscopic survey at
$z\sim 1$. Contrasting integrated star formation rates (SFRs), we find that
the slope and normalization of the star-forming main sequence in TNG50 are
quantitatively consistent with values derived by fitting observations from
3D-HST with the Prospector Bayesian inference framework. The previous offsets
of 0.2-1 dex between observed and simulated main sequence normalizations are
resolved when using the updated masses and SFRs from Prospector. The scatter
is generically smaller in TNG50 than in 3D-HST for more massive galaxies with
M∗$>10^{10}$M⊙, even after accounting for observational uncertainties. When
comparing resolved star formation, we also find good agreement between TNG50
and 3D-HST: average specific star formation rate (sSFR) radial profiles of
galaxies at all masses and radii below, on, and above the SFMS are similar in
both normalization and shape. Most noteworthy, massive galaxies with
M∗$>10^{10.5}$M⊙, which have fallen below the SFMS due to ongoing quenching,
exhibit a clear central SFR suppression, in both TNG50 and 3D-HST. In TNG this
inside-out quenching is due to the supermassive black hole (SMBH) feedback
model operating at low accretion rates. In contrast, the original Illustris
simulation, without this same physical SMBH mechanism, does not reproduce the
central SFR profile suppression seen in data. The observed sSFR profiles
provide support for the TNG quenching mechanism and how it affects gas on
kiloparsec scales in the centers of galaxies.
###### keywords:
galaxies: evolution – galaxies: formation – galaxies: high-redshift –
galaxies: star formation – galaxies: structure
††pubyear: 2021††pagerange: Spatially Resolved Star Formation and Inside-out
Quenching in the TNG50 Simulation and 3D-HST Observations–A
## 1 Introduction
Very generally, the fundamental challenge in trying to understand how galaxies
form is that it happens over such long timescales. At its present star
formation rate, the Milky Way would take over thirty billion years to double
its stellar mass (e.g. Licquia & Newman, 2015). No matter the advances in
telescope technology, we cannot watch a galaxy through the billions of years
of its evolution to see how it builds its bulge and disk, what drives changes
in its star formation rate, or how it responds to interactions with other
galaxies or changes in accretion rate. Various methods have been devised to
trace galaxies across cosmic time (e.g. van Dokkum et al., 2010; Leja et al.,
2013; Behroozi et al., 2013; Moster et al., 2013; Papovich et al., 2015;
Wellons & Torrey, 2017; Torrey et al., 2017). But clever as these methods are,
they can only tell us about the statistical evolution of a population; they
can give us a description of the buildup of a group of similar mass galaxies
through time but cannot tell us how it happened. Similarly, the archaeological
approach to galaxy evolution tends to be mainly limited to understanding the
stellar-mass assembly and chemical evolution of galaxies (e.g. Thomas et al.,
1999; Graves et al., 2009; Trager & Somerville, 2009; Pacifici et al., 2016).
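The timescale argument that opens this section can be made explicit. Taking the approximate Milky Way values of Licquia & Newman (2015), $M_{*}\simeq 6\times 10^{10}\,\mathrm{M}_{\odot}$ and $\mathrm{SFR}\simeq 1.65\,\mathrm{M}_{\odot}\,\mathrm{yr}^{-1}$, the mass-doubling time is

$t_{\mathrm{double}}\sim M_{*}/\mathrm{SFR}\approx 6\times 10^{10}\,\mathrm{M}_{\odot}\,/\,1.65\,\mathrm{M}_{\odot}\,\mathrm{yr}^{-1}\approx 3.6\times 10^{10}\,\mathrm{yr},$

i.e. well over thirty billion years, more than twice the current age of the Universe.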
A complementary approach to this problem is to simulate galaxy formation
rather than observe it. Simulating a universe in a box allows us to track
galaxies through time to see how they grow and determine the key physical
processes driving that growth. Cosmological hydrodynamical simulations evolve
a box of dark matter, gas, stars, and supermassive black holes through time
using gravity and hydrodynamics. Refining these simulations has informed us
about the plethora of physical processes involved in galaxy formation.
However, it is only in the last decade that hydrodynamical simulations have
begun to produce galaxies with realistic morphologies (e.g. Governato et al.,
2010; Brooks et al., 2011; Guedes et al., 2011; Aumer & White, 2013;
Christensen et al., 2014; Hopkins et al., 2014; Vogelsberger et al., 2014b, a;
Genel et al., 2014; Sijacki et al., 2015; Schaye et al., 2015; Crain et al.,
2015; Khandai et al., 2015; Davé et al., 2016; Dubois et al., 2016).
In general, these simulations come in two types: cosmological volumes focusing
on population statistics at the expense of resolution, and zoom-in simulations
focusing on individual galaxies at the expense of population statistics. With
gradual improvements in physical models, computational methods, and spatial
resolution, it has become possible to simulate a cosmological volume with
resolution sufficient to study the structural evolution of galaxies (thousands
of galaxies at sub-kpc resolution). TNG50 is the highest resolution simulation
of the IllustrisTNG project, covering a 50 Mpc box with a median spatial
resolution of $\sim 100$ pc (TNG: Weinberger et al. 2017; Pillepich et al.
2018; Springel et al. 2018; Naiman et al. 2018; Marinacci et al. 2018; Nelson
et al. 2018, 2019a, TNG50: Pillepich et al. 2019; Nelson et al. 2019b).
Studying the structural evolution of galaxies and its relation to the
regulation of star formation requires both the spatial resolution and the
population statistics afforded by TNG50. However, before it is used for this
purpose, the simulation needs to be validated against key observables.
In the space of colour and magnitude, we have long known that galaxies occupy
the ‘blue cloud’ and ‘red sequence’ (e.g. Strateva et al., 2001; Kauffmann et
al., 2003; Blanton et al., 2003; Bell et al., 2004; Faber et al., 2007;
Brammer et al., 2009; Whitaker et al., 2011; Taylor et al., 2015). With
improvements in our ability to constrain the physical properties of galaxies,
we have found that this blue ‘cloud’ in colour-magnitude space resolves itself
into a ‘sequence’ in SFR-M space. This so-called ‘star-forming main sequence’
(SFMS) is a somewhat sublinear relation between log(SFR) and log(M). The
normalization declines with time reflecting slower relative growth rates of
galaxies through cosmic time (e.g. Noeske et al., 2007; Daddi et al., 2007;
Salim et al., 2007; Rodighiero et al., 2011; Karim et al., 2011; Wuyts et al.,
2011; Whitaker et al., 2012; Whitaker et al., 2014; Speagle et al., 2014;
Shivaei et al., 2015; Tasca et al., 2015; Schreiber et al., 2015; Tomczak et
al., 2016; Lee et al., 2018).
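A commonly used parametrization, from the compilation of Speagle et al. (2014) cited above, makes both the sublinearity and the declining normalization explicit:

$\log\,\mathrm{SFR}(M_{*},t)\approx(0.84-0.026\,t)\,\log M_{*}-(6.51-0.11\,t),$

with $t$ the age of the Universe in Gyr, so the slope remains below unity while the zero-point drops steadily with cosmic time.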
The star-forming main sequence has a scatter of about a factor of two (which
has been deemed ‘tight’). However, not all galaxies reside on the main
sequence at all times; they form stars more rapidly or more slowly over the
course of their assembly history. What drives their evolution through this plane,
however, remains uncertain. Star formation across the main sequence has been
proposed to be regulated by mergers; episodes of ‘compaction’ and inside-out
quenching; bursty star formation; self regulation by accretion and outflows;
and variations in dark matter halo formation times (e.g. Hernquist, 1989;
Wuyts et al., 2011; Sparre et al., 2015, 2017; Tacchella et al., 2016;
Tacchella et al., 2020; Nelson et al., 2016; Orr et al., 2017; Matthee &
Schaye, 2019).
Recently we have developed the ability to place spatially resolved constraints
on the star forming main sequence. This became possible owing to the
capability of mapping tracers of star formation and stellar mass in
representative samples of galaxies with e.g. HST/WFC3, VLT/SINFONI, SDSS
IV/MaNGA, and in particular of measuring where star formation happens in
galaxies on, above, and below the star forming main sequence at different
masses (Nelson et al., 2016; Tacchella et al., 2018; Ellison et al., 2018;
Belfiore et al., 2018; Abdurro’uf & Akiyama, 2018; Morselli et al., 2019).
This tells us where star formation occurs when galaxies are forming stars
normally and where it is enhanced and suppressed relative to the existing
stars. Galaxy structure and the regulation of star formation appear to be
intimately coupled and this measurement provides a direct link between them.
The integrated and resolved star forming main sequence depends on several key
aspects of galaxy formation models: where gas settles in galaxies, feedback,
and the conversion of gas into stars. For this reason, the star forming main
sequence has been used regularly to validate simulations (e.g. Torrey et al.,
2014; Sparre et al., 2015; Schaye et al., 2015; Somerville & Davé, 2015; Davé
et al., 2016; Donnari et al., 2019). However, while the star forming main
sequence in recent state-of-the-art simulations has been found to match
observations qualitatively, it does not usually match quantitatively,
typically having a normalization which is $0.1-1$ dex too low, especially at
$z=1-3$ (Somerville & Davé, 2015). Specifically compared to the chosen
observations in each work, it is 0.1-0.5 dex lower in Illustris at $1<z<2$
(Torrey et al., 2014; Sparre et al., 2015), 0.2 dex lower in EAGLE at
$0.05<z<0.3$ (Schaye et al., 2015), $0.2-1$ dex lower in SIMBA (Davé et al.,
2019), and $0.2-0.5$ dex lower in TNG100 (Donnari et al., 2019).
It is unclear whether this is due to problems with the simulations or
uncertainties in the observations. Given the phenomenological nature of
prescriptions for AGN feedback, stellar feedback, and star formation, it is entirely
possible that this points to a problem with the simulations. On the other
hand, measurements of star formation rates from observations are notoriously
difficult and are typically subject to a factor of two systematic uncertainty.
The other dimension of the SFR-M∗ plane, stellar mass, is better constrained
but still has systematic uncertainties of at least 0.1 dex (e.g. Muzzin et
al., 2009). Resolved measurements of star formation across the main sequence
have also been compared between observations and simulations yielding
qualitative disagreements. While observations generally find specific star
formation rate (sSFR) profiles that are flat or rising on and below the star
forming main sequence respectively, simulations typically find they are
falling with radius, in particular below the main sequence and in sharp
contrast to observations (FIRE, Illustris, SIMBA respectively: Orr et al.,
2017; Starkenburg et al., 2019; Appleby et al., 2020). In order to use a
simulation to understand the structural evolution of galaxies and the
regulation of star formation, we must be confident it reproduces the
integrated and resolved star forming main sequence. We must understand where
the simulation can or cannot reproduce these key observables and determine why
in order to physically interpret the observations based on the models we
compare them with.
With high quality observational measurements and simulations with improved
resolution and prescriptions for feedback, in this paper we compare the
integrated and resolved star forming main sequence from the Illustris TNG50
magneto-hydrodynamical cosmological simulation to that inferred from
observations as part of the 3D-HST survey at $z\sim 1$. We first compare the
normalization, slope, and scatter of the integrated star forming main
sequence. We then compare the resolved specific star formation rate radial
profiles of galaxies below, on, and above the main sequence.
Hubble, Spitzer, and Herschel have spent thousands of hours imaging the
CANDELS/3D-HST extragalactic legacy fields to place the best possible
photometric constraints on the UV-FIR spectral energy distributions (SEDs) of
galaxies which we model to derive physical parameters. This community
investment provides the backbone of this work. Two additional features make
our work unique. First, owing to the new Bayesian inference framework
Prospector, we now have improved measurements of the star formation rates and
stellar masses of galaxies changing observed estimates of the star forming
main sequence (Johnson & Leja, 2017; Leja et al., 2017, 2019; Johnson et al.,
2020). Second, owing to the Hubble space telescope WFC3/G141 grism and
multiband imaging, we now have spatially resolved measurements of the specific
star formation rates for large samples of galaxies across the star forming
main sequence (e.g. Nelson et al., 2016).
The observations on which this comparison is based are from the 3D-HST survey.
The 3D-HST survey is a 248 orbit survey with the Hubble Space Telescope (HST)
Wide Field Camera 3 (WFC3) grism which provided spatially resolved near-
infrared spectra for 200,000 objects in the five major extragalactic legacy
fields (Brammer et al., 2012a; Skelton et al., 2014; Momcheva et al., 2015).
At $0.7<z<1.5$ these spectra can be used to create H$\alpha$ emission line
maps, which trace where star formation is occurring (e.g. van Dokkum et al.,
2011; Nelson et al., 2012, 2013; Brammer et al., 2012b; Lundgren et al., 2012;
Schmidt et al., 2013; Wuyts et al., 2013; Vulcani et al., 2015, 2016), for
3200 galaxies with $9<$log(M${}_{*})<11$ across the star-forming main sequence
(e.g. Nelson et al., 2016), over an order of magnitude more than was
previously possible. Enormous gains were made in our ability to map H$\alpha$
emission with near infrared integral field units on 10-meter class telescopes
with adaptive optics (e.g. Förster Schreiber et al., 2006, 2009, 2011a, 2011b;
Tacchella et al., 2015b). The information content in these deep spectra allows
detailed study of physical processes in those objects similarly to
cosmological zoom simulations. As with the computational cost of zoom
simulations, the observational costs of these types of observations are high,
limiting samples to of order 100 galaxies. The WFC3/G141 grism
provided another window into this problem that is well matched to cosmological
simulations like TNG. The slitless spectra provide spatially resolved emission
line diagnostics for all objects in its field of view, dramatically increasing
the multiplexing capabilities. On a strategic level, we note that with a
richer information content, these VLT/SINFONI observations for tens of
galaxies are well-matched to zoom simulations while HST/WFC3 grism
observations are well-matched to simulations of cosmological volumes with
thousands of galaxies. With a similar resolution and volume, TNG50 and 3D-HST
are particularly well-suited to each other.
This paper is organized as follows. In §2, we describe the data used for this
project and how we infer physical properties of galaxies from them. In §3 we
describe the TNG50 simulation. In §4, we compare the integrated star forming
main sequence slope, normalization, and scatter in TNG50 to observations from
3D-HST/Prospector. In §5 we compare the specific star formation rate profiles
of galaxies below, on, and above the star forming main sequence between TNG50
and 3D-HST. In §6 we summarize our findings.
## 2 Observational Data
### 2.1 Integrated Quantities
In this paper, the key quantities are redshifts, stellar masses, and star
formation rates, both integrated and resolved in the case of the latter two.
The 3D-HST+CANDELS dataset is particularly well designed for deriving these
quantities in the $z=0.5-2$ Universe as it has 1 kpc spatial resolution
imaging and spectroscopy in the rest-frame optical that is key for inferring
structural stellar population properties. CANDELS is a 902 orbit HST survey
providing optical and near-infrared imaging (Grogin et al., 2011; Koekemoer et
al., 2011). 3D-HST is a 248 orbit HST survey including near-infrared imaging
and slitless spectroscopy over the same area (van Dokkum et al., 2011; Brammer
et al., 2012a; Skelton et al., 2014; Momcheva et al., 2016). These surveys
cover five major extragalactic fields AEGIS, COSMOS, GOODS-N, GOODS-S, and UDS
which, crucially, have a wealth of publicly available data from the
ultraviolet through the infrared (Giavalisco et al., 2004; Whitaker et al.,
2011; Grogin et al., 2011; Koekemoer et al., 2011; Brammer et al., 2012a;
Ashby et al., 2013; Skelton et al., 2014; Momcheva et al., 2016; Oesch et al.,
2018; Whitaker et al., 2019, see Table 3 of Skelton et al. 2014 for additional
references).
Redshifts are derived from template fits to the combination of photometry and
near infrared slitless spectroscopy (Momcheva et al., 2016). Galaxy stellar
masses and star formation rates are derived by modeling the 0.3–24$\mu$m (UV-
IR) spectral energy distribution (SED) from the observed photometry. Aperture
photometry was performed on PSF-matched images to measure consistent colours
across passbands. For the HST imaging, a $0\aas@@fstack{\prime\prime}7$
diameter aperture was used and an aperture correction was performed to arrive
at the total flux (see Skelton et al., 2014, for many more details). To
determine stellar population parameters, the SED is fit with the
Bayesian inference framework Prospector (Johnson & Leja, 2017) as presented in
(Leja et al., 2019). Prospector uses the Flexible Stellar Population Synthesis
code (Conroy & Wechsler, 2009, FSPS) to construct a physical model and the
nested sampler dynesty to sample the posterior space (Speagle, 2020). This
model includes a non-parametric star formation history, a two-component dust
attenuation model with a flexible attenuation curve, variable stellar
metallicity, and dust emission powered by energy balance (see Leja et al.,
2017, for more details). With this new model, our new catalogs have
systematically higher stellar masses and lower star formation rates than
previous versions (Leja et al., 2019). In this work we use the SFRs averaged
over the last 30 Myr.
### 2.2 Mapping stellar mass and star formation
In this paper we compare specific star formation rate profiles of galaxies
across the star forming main sequence from TNG50 to observations at $z\sim 1$.
Deriving specific star formation profiles observationally is challenging due
primarily to the difficulty of mapping star formation. Our process for
deriving sSFR profiles for this comparison builds on Nelson et al. (2016), so
we refer the reader there for details. The primary update is that we use
spatially resolved SED fitting to derive stellar mass maps and perform a dust
correction to the H$\alpha$ emission to map star formation. We summarize our
methodological choices and their impact below and briefly describe the rest of
the analysis and the data from whence it came, with an emphasis on what is new
and what is certain or uncertain.
Our aspiration here is to compare sSFR profiles from TNG50 to the real
Universe, meaning that we need to map stellar mass and SFRs from observations.
We map stellar mass and star formation in two ways, with one method closer to
the data and the other with more layers of interpretation. In both of these
analysis tracks we stack maps, correct for the effects of the point spread
function (PSF) on the stack, and then construct radial surface brightness
profiles. In the following section, we first describe the different ways we
map sSFR and then describe the stacking, PSF-correcting, and profile
extraction.
#### 2.2.1 Resolved sSFR from maps of H$\alpha$ equivalent width
The method closest to the data is to simply use maps of H$\alpha$ equivalent
width as a proxy for sSFR. Hot young stars photoionize their surrounding gas.
The recombination and subsequent cascade of electrons in hydrogen atoms
produces the H$\alpha$[$6563$Å] emission line (amongst others) which is thus a
tracer of stars formed in the past $\sim 10$ million years. At the same
wavelength, the rest-frame R-band continuum traces the longer-lived,
lower-mass stars that make up the bulk of the stellar mass, making it an
oft-used tracer of the distribution of stellar mass
(e.g. van der Wel et al., 2014). Here we trace this redshifted R-band emission
with the WFC3/$JH_{F140}$ filter. The quotient of these,
H$\alpha$/$JH_{F140}$, which we will here call the H$\alpha$ equivalent width
(EW(H$\alpha$)) hence traces sSFR.
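As a minimal sketch (the array handling and the continuum floor are our own assumptions), this proxy is a masked pixelwise quotient:

```python
import numpy as np

def ew_halpha_map(halpha_map, jh140_map, min_continuum=1e-3):
    """sSFR proxy: pixelwise H-alpha / JH_F140 'equivalent width' map.

    min_continuum is an illustrative floor; pixels with continuum below
    it are set to NaN to avoid dividing by noise.
    """
    halpha = np.asarray(halpha_map, dtype=float)
    cont = np.asarray(jh140_map, dtype=float)
    ew = np.full_like(halpha, np.nan)
    ok = cont > min_continuum
    ew[ok] = halpha[ok] / cont[ok]
    return ew
```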
The key innovation here is the ability to map the H$\alpha$ emission line, a
tracer of star formation, in large samples of galaxies. We do this using the
slitless spectroscopy from the 3D-HST survey which provides spatially resolved
maps of emission lines for everything in its field of view (e.g. Nelson et
al., 2012, 2013, 2016; Brammer et al., 2012b; Lundgren et al., 2012; Schmidt
et al., 2013; Wuyts et al., 2013; Vulcani et al., 2015, 2016). Due to its
large multiplexing capacity and unbiased sampling, this mode has grown
increasingly popular on HST and likely will on JWST as well. The grism (a
portmanteau of “grating" and “prism"), is a spectral element in the WFC3 IR
channel filter wheel dispersing incident light onto the WFC3 detector, and as
such providing spectra for all objects in the field of view. This observing
mode features a unique combination of HST’s high native spatial resolution and
the grism’s low spectral resolution: $\sim$1 kpc and $\sim$1000 km/s at $z=1$,
our redshift of interest. This means that for all galaxies in our sample we
will get a map of the spatial distribution of line-emitting gas. Because of
the low spectral resolution, these spectra contain virtually no kinematic
information; besides e.g. >1000 km/s outflows, the entire velocity structure
of the galaxy will be contained in a single spectral resolution element. Hence
we obtain maps of the emission lines of all galaxies in the field of view
which are redshifted into the wavelength coverage of the grism.
The wavelength coverage of the G141 grism ($1.15-1.65\,\mu$m) samples
redshifted H$\alpha$ at $0.7<z<1.5$. The spectra of all objects in the field
are forward-modeled based on imaging. This provides the extraction window for
each spectrum based on the geometric transformation onto the detector.
Furthermore, because there is nothing blocking the light from other objects,
many of the spectra overlap or “contaminate" one another. The forward-modeling
also maps where contaminating flux from other objects will fall on the 2D
spectrum of the object of interest. All pixels predicted to have contaminating
flux more than a third of the background are masked. Finally, the continuum
light of a galaxy is modeled by convolving the best-fit SED without emission
lines with its HST image at the same wavelength (combined
$J_{F125W}/JH_{F140W}/H_{F160W}$). We subtract the continuum model from the 2D
grism spectrum which simultaneously removes the continuum emission and
corrects the H$\alpha$ maps for underlying stellar absorption. What remains
for all 3200 galaxies at $0.7<z<1.5$ is a map of their H$\alpha$ emission. One
complication of the low spectral resolution is that Nii and H$\alpha$ are
blended and Sii and H$\alpha$ are separated by three resolution elements. To
mitigate this, we use a double wedge mask along the dispersion direction
covering Sii. The overall contribution of Nii has less of an impact because
the total map is scaled to the integrated SFR measured from Prospector and the
mask decreases the impact of very high ratios extending emission in the
dispersion direction. Radial gradients in Nii/H$\alpha$, on the other hand, do
matter. We account for these in §2.4.
More details on the reduction and analysis of the 3D-HST grism spectroscopy
are available in Brammer et al. (2012a); Momcheva et al. (2016); more details
on the creation of H$\alpha$ maps are in Nelson et al. (2016). Mapping the
$JH_{F140}$ emission is much more straightforward. Stamps are cut around the
objects in the interlaced frames. Light from nearby objects is masked
according to the SExtractor segmentation map.
This first method for mapping sSFR comes straight from the data: it is simply
the quotient of the measured H$\alpha$ map and the measured $JH_{F140}$ map.
No dust correction is done to either the H$\alpha$ or the $JH_{F140}$ maps,
with the assumption that they are subject to similar dust attenuation because
they are at the same wavelength (modulo differential extinction toward Hii
regions) hence the dust attenuation multiplier cancels out in the quotient.
#### 2.2.2 Resolved sSFR from spatially resolved SED fitting
The effect of dust attenuation in principle cancels when scaling the observed
H$\alpha$/$JH_{F140}$ directly to sSFR (as described in the previous section).
However, in addition to dust, the continuum light, which we are scaling to
stellar mass, is also subject to age gradients which affect the mass-to-light
ratio ($M/L$). In particular, the centers of galaxies are typically observed
to be older than their outskirts (e.g. Wuyts et al., 2012; Cibinel et al.,
2015; Tacchella et al., 2015a). Older stars have a higher $M/L$ meaning that
the stellar mass is more concentrated than the light. Consequently the actual
sSFR profiles could be more centrally depressed than the observed profiles of
H$\alpha$/$JH_{F140}$.
Our second method attempts to mitigate the effects of dust and stellar age on
the observed light using spatially resolved spectral energy distribution (SED)
modelling to map the stellar mass and dust attenuation in our galaxies.
Spatially resolved SED modelling is done using the eight band HST imaging
described in §2.1. This methodology is described in detail in Cibinel et al.
(2015), but we outline it here for completeness. Image postage stamps are cut
from the mosaics in each HST band convolved by PSF-matching to the resolution
of the reddest band, $H_{F160}$, which has the lowest resolution. The images
are adaptively smoothed using Adaptsmooth (Zibetti et al., 2009) requiring
$S/N>5$ in each spatial bin in the $H_{F160W}$ image, which has the highest
$S/N$. The SPS code LePhare (Arnouts et al., 1999; Ilbert et al., 2006) is run
on the photometry in each spatial bin using the Bruzual & Charlot (2003)
synthetic spectral library, a Chabrier (2003) initial mass function, a
Calzetti et al. (2000) dust law and three metallicity values
($Z=0.2,0.4,1\,Z_{\odot}$). The star formation history is parameterized as a
delayed exponential $(t/\tau^{2})\,{\rm exp}\,(-t/\tau)$ having a
characteristic timescale $\tau$ with 22 values between 0.01 and 10 Gyr and a
minimum age of 100 Myr.
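The delayed-exponential SFH above rises linearly at early times and peaks at $t=\tau$; a minimal numerical sketch:

```python
import numpy as np

def delayed_tau_sfh(t, tau):
    """Delayed-exponential SFH: SFR(t) proportional to (t/tau**2) * exp(-t/tau).

    t and tau are in the same units (e.g. Gyr); the SFR peaks at t = tau.
    """
    t = np.asarray(t, dtype=float)
    return (t / tau**2) * np.exp(-t / tau)
```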
We use the model $E(B-V)$ maps to correct our H$\alpha$ maps for the effects
of dust using
$A_{cont}=k(\lambda)E(B-V)$
$A_{extra}=0.9A_{cont}-0.15A_{cont}^{2}$
$F(H\alpha)_{\rm intr}=F(H\alpha)_{\rm obs}\times 10^{0.4A_{cont}}\times 10^{0.4A_{extra}}$
where $A_{cont}$ is the dust attenuation toward the stellar continuum at the
wavelength of H$\alpha$, and $k(\lambda)$ is computed using the Calzetti et al.
(2000) dust attenuation law, giving $k(6563$ Å$)=3.32$. $A_{extra}$ is the
amount of extra attenuation towards Hii regions calculated following Wuyts et
al. (2013). These dust-corrected maps of SFR(H$\alpha$) are then divided by the
SED-modeled stellar mass to get the sSFR. Both are scaled to the
integrated values from Prospector.
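A per-pixel sketch of this correction (the array handling is our own; the coefficients are those quoted above):

```python
import numpy as np

K_HALPHA = 3.32  # Calzetti et al. (2000) k at 6563 A, as quoted in the text

def dust_correct_halpha(f_obs, ebv):
    """Two-step dust correction for H-alpha maps.

    A_cont  = k(lambda) * E(B-V)
    A_extra = 0.9*A_cont - 0.15*A_cont**2  (extra attenuation toward HII regions)
    F_intr  = F_obs * 10**(0.4*(A_cont + A_extra))
    """
    a_cont = K_HALPHA * np.asarray(ebv, dtype=float)
    a_extra = 0.9 * a_cont - 0.15 * a_cont**2
    return np.asarray(f_obs, dtype=float) * 10**(0.4 * (a_cont + a_extra))
```

With $E(B-V)=0$ the flux is unchanged, and the correction factor grows monotonically for the modest reddening values typical here.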
### 2.3 Selection
We select galaxies in the redshift range $0.75<z<1.5$ for which we can map the
H$\alpha$ emission line using the HST/G141 grism. We confine this analysis to
the mass range $9<$log(M∗)$<11$; the lower boundary is driven by our
completeness limit (Tal et al., 2014), the upper by number statistics. Here
and for the remainder of this paper when we refer to log(M∗), it is in units
of M⊙. Here we are interested in an analysis of the SFMS rather than the star
formation properties of the full population of galaxies, so we select only
those galaxies which are actively forming stars. We do this according to a
doubling-time criterion, specifically by comparing the doubling time to the age
of the Universe (Tacchella et al., 2019). We use a slightly less restrictive
criterion to encompass the tail of the distribution to low SFRs:
$t_{\rm double}<20t_{\rm Hubble}(z)$
This corresponds to a galaxy’s current star formation rate doubling its mass
in 20 Hubble times (or adding 5% to its mass in a Hubble time). This is the
extent of the selection criteria applied for §4 comparing the integrated SFMS
in observations and TNG50.
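In code, the cut reads as follows (the unit choices are an assumption of this sketch; any consistent set works):

```python
def is_star_forming(sfr, mstar, t_hubble):
    """Doubling-time cut: keep galaxies with t_double = M*/SFR < 20 t_Hubble(z).

    sfr in M_sun/yr, mstar in M_sun, t_hubble in yr (illustrative units;
    any consistent choice works). Galaxies with SFR == 0 fail automatically.
    """
    if sfr <= 0:
        return False
    return mstar / sfr < 20.0 * t_hubble
```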
For §5 comparing sSFR profiles across the main sequence, a few additional
selection criteria are required on the observational side. We remove all
galaxies flagged as having unreliable photometry as well as galaxies with
X-ray luminosity $L_{x}>10^{42.5}{\rm erg\,\,s}^{-1}$ or H$\alpha$ emission
line widths of $\sigma>2000$ km/s likely indicating that emission from an
active galactic nucleus (AGN) will contaminate the central H$\alpha$ flux we
interpret as star formation. For the H$\alpha$ maps, we also reject galaxies
whose spectra are too badly contaminated (See §2.2.1). Together these criteria
result in a selection of $\sim 3200$ galaxies. Finally we note that we have
maps of $E(B-V)$ in only two of our five fields, GOODS-N and GOODS-S, where
the HDUV program provides UV data.
### 2.4 Stacking & specific star formation rate profiles
We stack galaxies across the main sequence in bins of stellar mass and
position with respect to the SFMS ($\Delta$MS). Stellar mass bins are 0.5 dex
from log(M/M⊙)=9-11. We fit the SFMS as described in §4 and divide the
galaxies into bins below, on, and above the main sequence according to
log($\Delta$MS) [-0.8,-0.4], [-0.4,0.4], and [0.4,1.2], respectively. To
create each stack, we take a pixel by pixel mean of all maps in that bin. Many
pixels in a given map are masked so we also make a mean stack of the masks and
divide this out to correctly normalize the mean in each pixel. No weighting is
done for ease of comparison to the simulations.
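The mask-normalized mean described above is equivalent to summing the masked maps and dividing by the per-pixel count of unmasked contributions; a sketch (the array shapes are our own assumption):

```python
import numpy as np

def stack_maps(maps, masks):
    """Mask-normalized mean stack.

    maps  : (N, ny, nx) individual maps, with masked pixels set to 0
    masks : (N, ny, nx) weights, 1 where a pixel is valid, 0 where masked
    Returns the per-pixel mean over valid contributions only (NaN where none).
    """
    maps = np.asarray(maps, dtype=float)
    masks = np.asarray(masks, dtype=float)
    n_valid = masks.sum(axis=0)
    total = (maps * masks).sum(axis=0)
    return np.where(n_valid > 0, total / np.maximum(n_valid, 1), np.nan)
```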
A key step in this process is to correct observations for the effect of the
point spread function (PSF). The PSF blurs images, resulting in dense regions
appearing less dense (and vice versa of course). Our method for correcting for
the effect of the PSF uses a parametric model to account for the effects of
the PSF on the radial light distribution. To do this, we fit the light
distribution (or derived physical quantity) of each stack with a Sérsic model
(Sérsic, 1968) using galfit (Peng et al., 2002). We fit a single Sérsic model
letting the brightness, effective radius, Sérsic index, centroid, projected
axis ratio, and position angle be free and forcing the background level to be
zero. The fit is found by convolving each model with the PSF and minimizing the
reduced $\chi^{2}$. All images are background subtracted and their backgrounds
have been tested and found to be zero. Forcing the background to zero allows
galfit less freedom to fit the wings of the profile. With these best fit
parameters, we create a model not convolved with the PSF and add the residuals
from the fit. This means that regions of the fit in which the image deviates
from the model will be accounted for. The resulting “PSF-corrected” image will
have the bulk of its light corrected for the PSF but the residuals will not be
(e.g. Szomoru et al., 2010).
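Schematically, the model-plus-residual step looks like the following. The Sérsic fit itself is done with galfit; here `model_noPSF` stands in for its unconvolved best-fit model, and the PSF handling is our own sketch.

```python
import numpy as np
from scipy.ndimage import convolve

def psf_correct(stack, model_noPSF, psf):
    """Model-plus-residual PSF correction (cf. Szomoru et al. 2010).

    The residuals of the PSF-convolved fit are added back onto the
    unconvolved model, so structure the model misses is retained
    (though left uncorrected for the PSF).
    """
    model_conv = convolve(np.asarray(model_noPSF, dtype=float),
                          np.asarray(psf, dtype=float), mode="constant")
    return model_noPSF + (stack - model_conv)
```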
There are of course several shortcomings with this methodology. First, this
method corrects based on a single, axisymmetric Sérsic profile. This is a
reasonable approximation for the mass profile of a high redshift galaxy but
real galaxies are of course more complicated. This model effectively
reconstructs the radial profile of the light but will not e.g. deconvolve non-
axisymmetric features at larger radii, like clumps or spiral arms. While this
means that individual images are not as they would be without the PSF, we
average over these types of features twice: once in the stack and again in
computing the radial profile, so it is not important for this analysis.
Because on average the profiles of both mass and star formation peak in the
centers of galaxies (see e.g. Nelson et al., 2016), what matters most for us
is restoring light to the center; it is less important that e.g. spiral arms
or clumps at large radii be deconvolved from the PSF. Second, because we stack
galaxies with different radii, the stack
will have a steeper profile than any of the galaxies do intrinsically. The
resulting fit will typically have a larger Sérsic index than the average of
individual galaxy images and plausibly put too much light back in the center.
We acknowledge that this step may induce a feeling of unease in the
uninitiated but it is necessary and the best we can do with current tools.
Ideally, our SED modelling would account for the effects of the PSF so this
step would not be required, however a tool of this kind does not yet exist.
Radial profiles are computed in circular apertures. To generate specific star
formation rate profiles from the equivalent width profiles, we scale the
integral of the H$\alpha$ profile to the mean total star formation rate from
Prospector described above and the $JH_{F140}$ to the stellar mass. We also
normalize the SED modeled profiles of stellar mass and star formation to the
mean integrated values from Prospector. Error bars are computed by bootstrap
resampling the stacks. The sSFR profiles are the SFR profiles divided by the
M∗ profiles.
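A minimal sketch of a circular-aperture (annular mean) profile; the bin width and centering convention are our own assumptions:

```python
import numpy as np

def radial_profile(image, center=None, bin_width=1.0):
    """Azimuthally averaged profile in circular annuli of width bin_width pixels."""
    image = np.asarray(image, dtype=float)
    ny, nx = image.shape
    if center is None:
        center = ((nx - 1) / 2.0, (ny - 1) / 2.0)
    y, x = np.indices(image.shape)
    r = np.hypot(x - center[0], y - center[1])
    bins = (r / bin_width).astype(int)
    totals = np.bincount(bins.ravel(), weights=image.ravel())
    counts = np.bincount(bins.ravel())
    return totals / counts
```

The sSFR profile is then the quotient of the SFR and M∗ profiles computed this way, each normalized to the integrated Prospector values as described above.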
To summarize (and make the order of operations clear): we make maps of
H$\alpha$, F140W, stellar mass, and dust attenuation for all galaxies where
they are available and then stack all available maps for a given bin. For
method two, the stacked dust attenuation map is applied to the stacked
H$\alpha$ map. Next, all stacks are PSF-corrected, the radial profiles are
computed for each H$\alpha$, $JH_{F140}$, stellar mass, and H$\alpha$
corrected for dust, and finally quotients are performed for each pair.
Two observational issues merit a (somewhat) brief discussion before moving on,
with both most strongly affecting the sSFR profiles of massive galaxies.
First, it is possible that some fraction of the central light comes from an
AGN: both the broad band emission and in particular the H$\alpha$ emission. To
estimate the possible extent of this effect, we subtract observational
estimates of the contribution of AGN to our observed H$\alpha$ emission from
the literature. Förster Schreiber et al. (2014) and Genzel et al. (2014) find
that in their sample of $z\sim 2$ galaxies with a detected broad velocity
component, an average of 37% of the nuclear H$\alpha$ flux comes from this
broad component that they attribute to AGN-driven winds (rather than star
formation). Additionally, because of the low resolution of the G141 grism, the
H$\alpha$ line we observe is contaminated by Nii. In these same studies, the
authors find nuclear Nii/H$\alpha$ = 0.55 in stacks of galaxies with a broad
line detection. That being said, in the galaxy population writ large, Förster
Schreiber et al. (2019) find fairly flat Nii/H$\alpha$ gradients. Genzel et
al. (2014) find 35% of galaxies with 10.5<log(M∗/M⊙)<11 have a broad
component. Accounting for these effects reduces the observed central sSFR by
25%. We use this as the default in our analysis but note it has a minimal
effect.
Second, age gradients will affect the observed sSFR profiles in high mass
galaxies. Because older stellar populations emit less light per unit mass and
we expect the centers of massive galaxies to be older than their outskirts,
there is likely more stellar mass in the centers of these galaxies than we
infer from the $JH_{F140}$ light profiles. This can be seen in the sSFR
profiles based on resolved SED fitting that get more centrally depressed at
high mass on and below the main sequence. Above the main sequence, on the
other hand, central dust obscuration becomes an issue at high masses. As can
be seen in Fig. 5, the sSFR profile based on SED modelling has a higher
central sSFR than that based on EW(H$\alpha$). Because of the importance of
these effects at high masses, we take the SED modeled sSFR profiles as the
default for Fig. 3.
## 3 Simulation data
### 3.1 The TNG50 simulation
TNG50 is a magnetohydrodynamical cosmological simulation of galaxy formation.
It is the highest resolution member of the IllustrisTNG family (TNG from now
on), The Next Generation of the Illustris project. Henceforth the original
Illustris simulation will be referred to simply as Illustris (Vogelsberger et
al., 2014b; Genel et al., 2014; Sijacki et al., 2015). The TNG model was built
on the successes of the original Illustris model. However, a few issues with
the star formation rates and structures of galaxies in the original Illustris
simulation were soon noticed in comparison to observations: the effective
radii (of the stellar mass) in Illustris were larger than observed (Pillepich
et al., 2018; Genel et al., 2018), the distribution of galaxy colors showed
only a weak bimodality between red and blue (Vogelsberger et al., 2014a;
Nelson et al., 2018), and the normalization of the star-forming main sequence
was too low at $z=1-2$ (Sparre et al., 2015) in comparison to observational
estimates. Thus for TNG, the models for feedback from star formation and AGN
were modified as described below. Furthermore, the parameters of the TNG model
were chosen to provide a better match to a few key observations including the
cosmic star formation history and the following at $z=0$: the galaxy stellar
mass function, stellar mass – halo mass relation, supermassive black hole –
galaxy mass relation, size – stellar mass relation, and gas fraction within
group-mass halos.
TNG50 evolves dark matter, gas, stars, black holes and magnetic fields from
$z=127$ to 0. With a cubic volume of 51.7 Mpc side length, and a density-
dependent resolution in galaxy star forming regions of $70-140$ pc, TNG50
provides resolution typical of zoom simulations of single galaxies for 1600
galaxies with $10^{9}<{\rm m}<10^{10}$ M⊙ and 530 with $10^{10}<{\rm
m}<10^{11}$ M⊙ at $z\sim 1$. The baryon mass resolution is $8.5\times
10^{4}$M⊙, the gravitational softening length of the dark matter and stars is
0.3 kpc. This is the most computationally demanding run of the simulation
suite, requiring 130 million CPU hours (see Pillepich et al., 2019; Nelson et
al., 2019b, for more details). TNG50 evolves a total of $2\times 2160^{3}$
initial resolution elements: half dark matter particles, half gas cells. They are
evolved using AREPO, a massively parallel simulation code optimized for large
runs on distributed memory machines (Springel, 2010).
The TNG physical model for galaxy formation includes several physical processes
thought to be important to galaxy evolution that are implemented at the
spatial and mass resolution of the simulation. In addition to gravity and
hydrodynamics, the model includes gas cooling and heating, star formation,
aging of single age star particles, chemical enrichment of the interstellar
medium, and feedback from supernovae and super massive black holes (SMBHs).
Star formation is modeled with the very simple density threshold-based
parameterization of Springel & Hernquist (2003). In such a prescription, gas
is stochastically converted into star particles once its density exceeds
$n_{H}=0.1~{\rm cm}^{-3}$ on a timescale determined such that the galaxy-wide empirical
Kennicutt-Schmidt relation (Kennicutt, 1989) is broadly reproduced.
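A toy version of such a threshold-plus-timescale prescription (the timescale value below is a placeholder, not the TNG parameter):

```python
import numpy as np

def star_formation_probability(n_H, dt, t_sf=2.0e9, n_thresh=0.1):
    """Toy threshold-based stochastic star formation (after Springel & Hernquist 2003).

    Gas with density n_H [cm^-3] above n_thresh forms stars on a timescale
    t_sf [yr]; over a timestep dt [yr] the conversion probability is
    p = 1 - exp(-dt / t_sf). t_sf here is a placeholder value.
    """
    n_H = np.asarray(n_H, dtype=float)
    p = 1.0 - np.exp(-dt / t_sf)
    return np.where(n_H > n_thresh, p, 0.0)
```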
As in any model of galaxy formation, feedback from stars and black holes is
essential. Supernova feedback associated with star formation drives galactic
scale outflows. In TNG, these outflows are launched directly from star forming
gas with energy proportional to the local and instantaneous star formation
rate. There are several changes to the Illustris star formation-driven wind
model in TNG: the wind injection is isotropic rather than bipolar; the
velocity of wind particles now scales with redshift and has a floor; and the
energy now depends on metallicity and has a thermal component (Pillepich et
al., 2018). The net result is that the star-formation driven winds in TNG are
faster at all masses and times and generally more effective at preventing star
formation.
The TNG50 model for feedback from SMBHs is described in detail in Weinberger
et al. (2017): SMBH feedback comes in two flavors, decided by the rate at
which the black hole is accreting nearby gas. In the high accretion rate
flavor, thermal energy is injected continuously into the surrounding gas, as
in Illustris (Springel et al., 2005; Di Matteo et al., 2005). At low accretion
rates, kinetic energy is injected into the surrounding gas as a time-pulsed,
oriented wind in a different random direction at each SMBH timestep. By
contrast, in Illustris, highly bursty thermal energy was injected into large
($\sim 50-100$kpc) bubbles displaced away from the central galaxy (Sijacki et
al., 2007). The new AGN feedback model, particularly the kinetic mode,
effectively quenches galaxies that reside in intermediate to high mass halos,
including realistic gas fractions (Weinberger et al., 2017; Pillepich et al.,
2018).
### 3.2 SFRs from TNG50 and other galaxy properties
In this work, for all galaxies we take the galaxy stellar mass to be the total
mass of all star particles that are gravitationally bound to each subhalo,
according to the subfind halo finder (Springel et al., 2001). We take the star
formation rate to be the sum of the individual star formation rates of all
individual gas cells in each subhalo. These are thus instantaneous star
formation rates and total masses. While this is what we attempt to measure in
observations, as explored in depth in Donnari et al. (2019) and Donnari et al.
(2021), aperture corrections and imperfect star formation tracers make this
inexact, complicating comparisons between observations and simulations.
However, we attempt to make our comparison as consistent as possible.
As for the 3D-HST data, we also exclude from the simulated galaxy analysis those with very low SFRs, retaining only galaxies with stellar-mass doubling times $t_{\rm double}<20\,t_{\rm Hubble}(z)$. This cut automatically removes completely quenched objects and galaxies whose SFRs are so low that they fall below the resolution limit of TNG50, i.e. objects whose SFR$\equiv 0$.
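The selection above can be sketched as follows; the function, argument names, and unit choices are ours, purely illustrative:

```python
import numpy as np

def keep_star_forming(mstar, sfr, t_hubble_yr, max_ratio=20.0):
    """Keep galaxies whose stellar-mass doubling time t_double = M*/SFR
    is shorter than `max_ratio` Hubble times (illustrative sketch).
    mstar in Msun, sfr in Msun/yr, t_hubble_yr in years."""
    mstar = np.asarray(mstar, dtype=float)
    sfr = np.asarray(sfr, dtype=float)
    # Guard against division by zero: SFR == 0 maps to an infinite t_double
    t_double = np.where(sfr > 0, mstar / np.where(sfr > 0, sfr, 1.0), np.inf)
    return t_double < max_ratio * t_hubble_yr

# t_Hubble ~ 5.9e9 yr at z ~ 1 (approximate)
keep = keep_star_forming(mstar=[1e10, 1e10, 1e10],
                         sfr=[10.0, 1e-4, 0.0],
                         t_hubble_yr=5.9e9)
# -> [True, False, False]: quenched and SFR == 0 objects are removed
```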
Furthermore, when comparing the distribution of SFRs about the main sequence
in observations and simulations (§ 4.2) it is essential to account for
observational uncertainties. To do this, in observations instead of looking at
simply the best-fit value of the SFR, we use the full information about the
probability density function (PDF) of the fit. To measure the scatter of the
main sequence we sum the probability density function of each galaxy’s SFR
instead of just looking at the distribution of the best fit values. We apply
the same treatment to the SFRs from the simulation. We assign an observed PDF
to each SFR and sum the PDFs to determine the scatter of the main sequence in
TNG50. In this way we account for observational uncertainties in the
comparison between observations and simulations.
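As a sketch of this procedure (the Gaussian PDFs here stand in for the actual Prospector posteriors, and all names are ours):

```python
import numpy as np

def stacked_pdf_width(delta_ms, sigma, grid=None):
    """Sum per-galaxy PDFs of the log-SFR offset from the main sequence
    (modeled here as Gaussians) and return the width of the central 68%
    of the stacked, normalized distribution."""
    if grid is None:
        grid = np.linspace(-3.0, 3.0, 601)
    dx = grid[1] - grid[0]
    pdf = np.zeros_like(grid)
    for mu, s in zip(delta_ms, sigma):
        pdf += np.exp(-0.5 * ((grid - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    pdf /= pdf.sum() * dx                      # normalize to unit area
    cdf = np.cumsum(pdf) * dx
    return np.interp(0.84, cdf, grid) - np.interp(0.16, cdf, grid)

# Zero offsets with 0.3 dex uncertainties recover a ~0.6 dex 16-84% width
width = stacked_pdf_width(delta_ms=np.zeros(50), sigma=np.full(50, 0.3))
```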
### 3.3 Radial profiles of sSFR in TNG50
The standard approach to making radial profiles from simulations is to rotate galaxies to be face-on and then extract profiles in circular annuli. This is of course not how observations are done; observers unfortunately cannot travel
out to distant galaxies and rotate them. In observations, the light from
galaxies as they are oriented on the sky is what falls on our detectors. The
blurring done by the point spread function (PSF) will happen on the randomly
oriented image in the plane of the detector. For high S/N images, it is
possible to do a PSF correction on an individual galaxy image and then
deproject it before stacking. With our shallow H$\alpha$ images, however, a PSF correction is not possible on individual galaxy images; it is only possible on a stack. We therefore cannot deproject the observed H$\alpha$
images and instead project the TNG50 particle distributions to mimic the
observations.
Maps of stellar mass and star-forming gas cells are made by projecting particles and cells onto a grid of $121^{2}$ pixels representing a physical size of $60^{2}$ kpc${}^{2}$ (0.5 kpc/pixel) using the methods developed in Diemer et al. (2017); Diemer (2018); Diemer et al. (2019) and Tacchella et al. (2019). Each
particle/cell is distributed onto pixels according to the kernel smoothing
used by the simulation. This includes all particles/cells bound to the galaxy
according to the subfind halo finder. The centroid is defined as the co-moving
center of mass of the subhalo calculated by summing the mass weighted relative
coordinates of particles of all types in the subhalo. We project galaxies in
the xy plane in the simulation box to mimic the random projection of galaxies
in observations. These maps are then mean-stacked and we compute profiles in
radial bins. As for the observations, error bars are determined by bootstrap
resampling the stacks. We include the three full snapshots in the redshift
range of the observations ($z=0.7,1.0,1.5$).
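A minimal sketch of the stacking geometry (nearest-grid-point deposition and simple annular averaging instead of the kernel smoothing actually used; names are ours):

```python
import numpy as np

def radial_profile(image, pixel_kpc=0.5, bin_kpc=1.0):
    """Azimuthally averaged radial profile of a 2D map about the grid
    center (a simplified stand-in for the actual pipeline)."""
    ny, nx = image.shape
    yy, xx = np.indices(image.shape)
    # Distance of each pixel from the image center, in kpc
    r = np.hypot(xx - (nx - 1) / 2.0, yy - (ny - 1) / 2.0) * pixel_kpc
    edges = np.arange(0.0, r.max() + bin_kpc, bin_kpc)
    idx = np.digitize(r.ravel(), edges) - 1
    sums = np.bincount(idx, weights=image.ravel(), minlength=len(edges))
    counts = np.bincount(idx, minlength=len(edges))
    prof = sums / np.maximum(counts, 1)        # mean value in each annulus
    centers = edges[:-1] + bin_kpc / 2.0
    return centers, prof[: len(centers)]

# A mean-stack is just the average of the individual maps:
# stack = np.mean(maps, axis=0); r, prof = radial_profile(stack)
```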
In Fig. 7, we show the difference between the sSFR profiles derived from the
standard face-on projection, the edge-on projection, and the random xy, xz, yz
projections. The differences are fairly small but we include this correction
for completeness. In particular, this correction has the largest effect in the
highest mass bin below the main sequence, which, as we will soon see is a
particularly important regime to treat accurately for this comparison.
## 4 The Integrated Star Forming Main Sequence: TNG50 vs. 3D-HST
Figure 1: The star forming main sequence (SFMS) in TNG50 (blue points) versus the 3D-HST survey (black points) at $0.7<z<1.5$. The curves show quadratic fits
to the running median star formation rates. For the data we include the
original 3D-HST stellar masses and star formation rates (red points and red
line Whitaker et al., 2014; Skelton et al., 2014) and a literature compilation
from Speagle et al. (2014). The new fits from Prospector infer stellar masses
$0.1-0.3$ dex higher and star formation rates $0.1-1$ dex lower resulting in a
star forming main sequence with a normalization systematically lower by $\sim
0.2-0.5$ dex. We also show the star forming main sequence from the original
Illustris simulation (purple line). The slope and normalization of the SFMS in the simulations are remarkably consistent with observations at this redshift, owing to the newly inferred values from the data.
Here we investigate similarities and differences in the distribution of
galaxies in the star formation rate – stellar mass plane at $0.7<z<1.5$
between observations from the 3D-HST survey (see section 2.1) versus TNG50
cosmological hydrodynamical simulations (see section 3). To make this
comparison as informative as possible, we analyze the simulations and
observations in the same way.
### 4.1 Locus of the star forming main sequence
We compute running median star formation rates as a function of stellar mass
for both samples. To define the main sequence, we fit these running medians
with a quadratic:
$\log({\rm SFR})=a+b\log(M_{*})+c\log(M_{*})^{2}$
As described in §3 and §2, in both observed and simulated sample galaxies with
very low SFRs are removed from the analysis.
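This fitting procedure can be sketched as follows (bin edges and names are our choices):

```python
import numpy as np

def fit_sfms(log_m, log_sfr, edges=None):
    """Quadratic fit, log SFR = a + b log M* + c (log M*)^2, to the
    running median log SFR in bins of log stellar mass."""
    if edges is None:
        edges = np.arange(9.0, 11.25, 0.25)
    centers, medians = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (log_m >= lo) & (log_m < hi)
        if sel.any():
            centers.append(0.5 * (lo + hi))
            medians.append(np.median(log_sfr[sel]))
    c, b, a = np.polyfit(centers, medians, 2)  # highest power first
    return a, b, c

# Sanity check on noiseless mock data built from a known quadratic:
log_m = np.linspace(9.0, 11.0, 2001)
log_sfr = -20.46 + 3.38 * log_m - 0.127 * log_m ** 2
a, b, c = fit_sfms(log_m, log_sfr)
```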
Figure 1 shows the distribution of galaxies in the SFR-M∗ plane from 3D-HST
(dark orange points) and TNG50 (blue points) as well as the SFMS fits to the
median SFRs (dark orange vs. light orange curves). The median fits are
remarkably similar between TNG50 and 3D-HST: they are within 0.1 dex at all
masses $9<\log(M_{*}/M_{\odot})<11$. The SFMS fits are so similar, in fact, that one might be tempted to conclude the first author bungled the plotting and used the same array twice. We assure the reader this is not the case: these are truly nearly identical. That being said, there remains of order $\sim 0.1$ dex
uncertainty in this comparison due to aperture effects and the timescale on
which the SFR is measured, as described in Donnari et al. (2019).
Let us not lose sight of the main point, however: the SFMS in the TNG50
simulation and observations from 3D-HST are in remarkable agreement. This is
surprising given the longstanding $0.1-1$ dex offset between the SFMSs in
observations and simulations at $z=1-2$ (e.g. Torrey et al., 2014; Sparre et
al., 2015; Somerville & Davé, 2015; Davé et al., 2016; Donnari et al., 2019).
So what changed? Let us first consider the simulations. Illustris and TNG50
main sequences are shown in Fig. 1: light orange vs. yellow curves. There is
little change going from Illustris to TNG50 at $z\sim 1$; the slope and
normalization of the main sequence have remained similar. Turning to the
observations, the star forming main sequence from the original 3D-HST catalogs
(v4.1.5; Whitaker et al., 2014; Skelton et al., 2014) as well as a literature
compilation (Speagle et al., 2014) are also shown. The normalization of the
main sequence at $z\sim 1$ has decreased by $0.2-0.5$ dex when using the
Prospector Bayesian inference framework to determine stellar population
parameters compared to previous determinations. The offset is minimized when
adopting a non-parametric star formation history in the Bayesian inference
framework, coupled with accounting for infrared emission due to dust heated by
older stellar populations and supermassive black holes rather than star
formation (see Leja et al., 2019, for more information). Thus the long-
standing $0.1-1$ dex offset between the SFMS in observations and simulations
at $z\sim 1$ disappears in this work not due to changes in the simulations but
rather to changes in the stellar population parameters inferred from
observations. Values for the coefficients in the $z\sim 1$ main sequence fit
(equation above) are listed in Table 1.
As shown in Torrey et al. (2014), the star forming main sequence in
simulations is fairly insensitive to the nature of the feedback prescription.
The integrated main sequence is thus not a particularly discerning validation
of a simulation’s feedback model. As we will show in the next section, this is
not the case when looking at the resolved properties of star formation across
the main sequence. Furthermore, Leja et al. (2015) showed that earlier
measurements of the star forming main sequence and the evolution of the
stellar mass function were not self consistent in observations: the SFMS
dramatically overpredicted galaxy stellar mass growth. In the simulations these quantities are self-consistent by construction, so it is unsurprising that the simulations could not simultaneously match both the observed main sequence and mass function. With data that are self-consistent, the simulations can match both, as the two quantities are directly coupled.
Table 1: Coefficients in the fit to the star forming main sequence, $\log{(\rm SFR)}=a+b\log{(M_{*})}+c\log{(M_{*})}^{2}$, in the TNG50 simulation versus observations from the 3D-HST survey (both the original v4.1.5 catalogs and the update with Prospector).

data/sim | a | b | c
---|---|---|---
3D-HST/Prospector | -22.13 | 3.74 | -0.146
TNG50 | -20.46 | 3.38 | -0.127
3D-HST/orig | -37.48 | 6.87 | -0.302
Illustris/orig | -14.21 | 2.08 | -0.060
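Evaluating the Table 1 fits shows how closely the two main sequences track each other; for example, at $\log(M_{*}/M_{\odot})=10$:

```python
def log_sfr_ms(log_mstar, a, b, c):
    """Quadratic main-sequence fit: log SFR = a + b logM* + c (logM*)^2."""
    return a + b * log_mstar + c * log_mstar ** 2

# Coefficients from Table 1
coeff = {
    "3D-HST/Prospector": (-22.13, 3.74, -0.146),
    "TNG50": (-20.46, 3.38, -0.127),
}
obs = log_sfr_ms(10.0, *coeff["3D-HST/Prospector"])  # 0.67 dex
sim = log_sfr_ms(10.0, *coeff["TNG50"])              # 0.64 dex
# The two fits differ by only ~0.03 dex at log M* = 10
```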
### 4.2 Width and outliers of the star forming main sequence
Although the medians are nearly identical between TNG50 and Prospector/3D-HST,
the distribution of galaxies about these medians is not, even when accounting
for observational uncertainties in our treatment of the simulations. We look
at the distribution of the distance of galaxies from the median fit
($\Delta$MS) in bins of stellar mass in this Section: see Fig. 2. However,
investigating the shape of this distribution requires properly accounting for
observational uncertainties. Prospector returns a probability density function
(PDF) for each parameter it fits. In each bin, we sum the PDFs of SFR
normalized to the main sequence fit then normalize the overall distribution to
have an area of 1. To make the distribution from simulations more directly
comparable, as mentioned in Section 3.2, we assign an observed PDF to each SFR
in the simulation by drawing randomly from the observed galaxies with similar
masses and SFRs (as the width of the PDF is dependent on these quantities). We
then sum the TNG50 PDFs in the same way as the observed ones. In other words
we add observational uncertainties to the simulated SFRs so that the scatter
is directly comparable.
Figure 2: Top: Distribution of simulated and observed galaxies around the main
sequence in bins of stellar mass. We contrast the 3D-HST data (black), TNG50
simulation (light blue), and original Illustris simulation (purple). Although the widths of these distributions are broadly consistent between the simulations and observations at lower galaxy stellar mass, the simulated scatter is smaller than observed at high mass ($M_{\star}>10^{10}$ M⊙). Bottom: As
above, except with the distribution shifted to the ridgeline of the
distribution of star formation rather than the median. With this shift
applied, the shape of the distribution of galaxies above the main sequence is
similar between observations from 3D-HST/prospector and TNG50. The orange
solid line shows the definition of “starbursts" used in §4 following
Rodighiero et al. (2011) and Sparre et al. (2015): $>2.5\sigma$ above the main
sequence.
Figure 2 (top row) shows this comparison – the distribution of simulated and
observed galaxies around the main sequence in bins of stellar mass. We remind
the reader that observed and simulated galaxies with very low SFRs ($t_{\rm
double}<20t_{\rm Hubble}(z)$) are not considered in this analysis. At all
masses, the TNG main sequence is narrower than the observations. That is,
there is less scatter in the SFRs of the simulated galaxies than there is
amongst the observed galaxies. We note that we use the instantaneous SFR in
the simulations and the SFR averaged over 30Myr in observations. The scatter
of the main sequence measured from instantaneous SFRs will be larger than
those averaged over longer timescales (e.g. Caplar & Tacchella, 2019; Donnari
et al., 2019; Tacchella et al., 2020) so likely the difference in the scatter
is even larger than we see here. The narrowing is less dramatic below log(M∗) = 10 and more dramatic above. We quantify this difference in width by computing the
width of the region that contains 68% of the distribution. These values are
listed in table 2. For $9<\log({\rm M_{*}})<10$, we find the simulations are
80-90% the width of observations. For $10<\log({\rm M_{*}})<10.5$, the
difference grows to 70%; for $10.5<\log({\rm M_{*}})<11.$ to 60%.
Table 2: Scatter in the star forming main sequence in 3D-HST/Prospector versus TNG50, measured in bins of stellar mass of width 0.5 dex, including observational uncertainties on both the observations and simulations. (See §4.2 for more details.)

mass bins | 3D-HST/Prospector | TNG50 | ratio
---|---|---|---
$9<\log({\rm M_{*}})<9.5$ | 0.41 | 0.33 | 0.81
$9.5<\log({\rm M_{*}})<10$ | 0.38 | 0.33 | 0.88
$10<\log({\rm M_{*}})<10.5$ | 0.45 | 0.32 | 0.72
$10.5<\log({\rm M_{*}})<11$ | 0.57 | 0.33 | 0.58
The distribution is more skewed toward low SFRs in observations. While in TNG50 the distributions are self-similar at all masses, in observations they become increasingly skewed at higher masses. We quantify this by measuring the skew
of the distributions of the simulated vs. observed galaxies based on the
ridgeline of the distribution instead of the mean as in the standard
definition. At $10.5<\log({\rm M_{*}/M_{\odot}})<11$, the observed
distribution of SFR has a skew of -2.3 while TNG50 has -1.5. The relative lack
of low SFR galaxies in TNG50 is likely due to the prescriptions for AGN
feedback in the simulation. Although we note from the observational side that
SFRs of low-sSFR galaxies are the most model-dependent. On the simulation
side, kinetic radio-mode AGN feedback is designed to very efficiently shut down star formation, while the thermal quasar-mode is comparatively inefficient (Weinberger et al., 2018). Within the model, every black hole is in one of
these two modes, with low-mass, rapidly accreting black holes (living in low-
mass or high redshift galaxies) being in the thermal mode. Once the accretion
rate (relative to the Eddington accretion limit) drops below a black hole mass
dependent factor, the feedback mode switches to a kinetic mode, leading to an
overly sharp decline, almost a jump, in sSFR as a function of black hole mass
as well as stellar mass and other properties of the simulated $z\sim 0$ galaxy
population (Terrazas et al., 2020; Habouzit et al., 2019; Li et al., 2020;
Habouzit et al., 2020). We speculate that, similarly, $z\sim 1$ galaxies in
TNG50 quickly quench whereas in the real universe massive galaxies seem more
likely to tarry below the main sequence before becoming fully quenched.
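The modified skew can be sketched as follows (we use a histogram mode as the ridgeline; the binning and names are our choices):

```python
import numpy as np

def skew_about_ridgeline(delta_ms, bin_width=0.1):
    """Third moment of the Delta-MS distribution about its ridgeline
    (histogram mode) rather than its mean, normalized by the standard
    deviation -- a sketch of the modified skew described in the text."""
    delta_ms = np.asarray(delta_ms, dtype=float)
    edges = np.arange(delta_ms.min(), delta_ms.max() + bin_width, bin_width)
    hist, edges = np.histogram(delta_ms, bins=edges)
    ridge = 0.5 * (edges[:-1] + edges[1:])[np.argmax(hist)]
    s = delta_ms.std()
    return np.mean((delta_ms - ridge) ** 3) / s ** 3

rng = np.random.default_rng(0)
# A distribution with a long tail to low Delta-MS has strongly negative skew:
tailed = -rng.exponential(scale=0.5, size=50_000)
symmetric = rng.normal(0.0, 0.3, size=50_000)
```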
Furthermore, at M∗ > $10^{10}$ M⊙, an insufficient number of starbursts above the main sequence as compared to the real Universe was noted in Illustris (Sparre et al., 2015). We quantify this by comparing the fraction of star formation
that occurs more than 2.5$\sigma$ above the main sequence in 3D-HST and TNG50
(as in Rodighiero et al., 2011; Sparre et al., 2015). We calculate this
fraction based on both ridgelines of the distributions. Our definition is
shown in Fig. 2 (bottom row). At high masses, TNG50 has a ridgeline which is
$\sim 0.15$ dex lower than observations despite the medians being the same. We also use the TNG50 scatter as a function of mass to define the cut for both observations and TNG50, because the scatter is significantly smaller in TNG50 than in observations at high masses. For mass bins
[9,9.5],[9.5,10],[10,10.5],[10.5,11] in observations we find the following
fractions of star formation occurring in starbursts [12%,10%,6%,2%] and for
TNG50 we find [15%,13%,5%,3%]. After accounting for the difference between the
ridgeline and median of the distribution of SFRs, using the same value for
scatter, and accounting for errors on the observed SFRs, the fraction of star
formation occurring in “starbursts" is very similar in observations from
3DHST/Prospector and TNG50. The primary issue with the TNG50 SFRs is the shape
of the distribution from the ridgeline to low SFRs.
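The starburst-fraction computation above can be sketched as (a 2.5$\sigma$ cut above the ridgeline; names are ours):

```python
import numpy as np

def starburst_fraction(delta_ms, sfr, sigma):
    """Fraction of the total star formation occurring in galaxies more
    than 2.5 sigma above the main sequence (here `delta_ms` is the
    offset from the ridgeline and `sigma` the adopted scatter, in dex)."""
    delta_ms = np.asarray(delta_ms, dtype=float)
    sfr = np.asarray(sfr, dtype=float)
    burst = delta_ms > 2.5 * sigma
    return sfr[burst].sum() / sfr.sum()

# Toy example: one of three galaxies lies > 2.5 sigma above the ridge
frac = starburst_fraction(delta_ms=[0.0, 1.0, 0.2],
                          sfr=[1.0, 1.0, 2.0], sigma=0.3)
# -> 0.25: that galaxy hosts a quarter of the star formation
```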
## 5 sSFR profiles in TNG50 vs. 3D-HST
Here we compare the average radial profiles of sSFR in galaxies on, above, and
below the SFMS in observations from the 3D-HST survey at $z\sim 1$ and the
TNG50 magnetohydrodynamical cosmological simulations. The derivation of the
profiles is described in section 2.4 for the observations and section 3 for
the simulations.
The sSFR profiles are a powerful diagnostic of where galaxies are growing. A flat sSFR profile indicates that the stellar mass doubles at all radii at the same pace, implying self-similar growth of the stellar mass density profile. An sSFR increasing toward the outskirts implies that the galaxy grows stellar mass faster in the outskirts than in the center (the galaxy grows in size), while an sSFR decreasing toward the outskirts is consistent with a galaxy that shrinks in size.
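This reading of the profiles can be made precise. Neglecting stellar mass loss and radial migration, the stellar surface density at each radius obeys

```latex
\frac{\partial \Sigma_{*}(r,t)}{\partial t} = \mathrm{sSFR}(r)\,\Sigma_{*}(r,t)
\quad\Longrightarrow\quad
\Sigma_{*}(r,t) = \Sigma_{*}(r,0)\, e^{\,\mathrm{sSFR}\cdot t}
\quad \text{if sSFR is independent of } r,
```

so a flat sSFR profile multiplies the whole profile by a single factor, preserving its shape (and hence the characteristic size), while a radial gradient in sSFR grows one region faster than the rest.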
### 5.1 Inside-out Quenching
Figure 3: Top row: stacks of sSFR in TNG50 at random orientations. Bottom:
specific star-formation rate (sSFR) radial profiles of massive galaxies, with
$10^{10.5}<M_{\star}/\rm{M}_{\odot}<10^{11}$ at $z\sim 1$. We contrast
profiles inferred from observations with 3D-HST (dashed curves) against the
outcome of the TNG50 hydrodynamical simulation (solid curves), as a function
of offset from the star-forming main sequence: galaxies which reside below
(left), on (center), and above (right). In all cases the TNG50 simulation
broadly reproduces both the normalization and shape of the observed SFR radial
profiles. A key result of this work is that quenching galaxies (left) exhibit
a clear central SFR suppression in the data as well as in TNG50. This supports
the scenario of inside-out quenching, which in TNG50 arises due to a central,
short time-scale, ejective supermassive black hole feedback mechanism at low
accretion rates. This is not the case with the jet-inflated bubble black hole
feedback model in Illustris as shown by the dash-dot purple line. The grey
shaded region is inside the observed PSF.
A key result of this paper is that star formation is quenched from the inside-
out, which in the simulations is caused directly by AGN feedback. Figure 3
shows that below the main sequence at 10.5$<$log(M∗)$<$11, the sSFR profiles
are strongly centrally suppressed in both TNG50 and in observations (e.g.
Nelson et al., 2016; Tacchella et al., 2018; Ellison et al., 2018; Belfiore et
al., 2018; Morselli et al., 2019). In TNG50, this centrally suppressed star
formation is a key signature of locally acting AGN feedback (see also Nelson
et al., 2019b).
Figure 4: sSFR profiles of $z\sim 1$ galaxies across the star forming main
sequence – comparison between observations and TNG50, the original Illustris
simulation, and a lower resolution version of TNG50 with resolution more
similar to that of Illustris (TNG50-2). Profiles are cut off when their
signal-to-noise ratio falls below 1. We find that TNG50 is more consistent
with observations than the original Illustris simulation and that this is not
primarily due to resolution effects. The grey shaded region is inside the
observed PSF.
This signature is not seen in the original Illustris simulation, where AGN
feedback acts non-locally. In Fig. 4 we also disentangle the impact of
resolution, comparing TNG50-1 to TNG50-2, the analogous simulation run with
eight times lower mass resolution (two times lower spatial resolution). As
shown through the comparison to the lower resolution version of TNG50, this is
not a resolution effect but due to the physics in the simulation. In
Illustris, bubbles are blown at galactocentric distances of 50-100 kpc and
consequently have a hard time propagating back into the denser gas to affect
the center of the galaxy. Hence the sSFR profiles in Illustris are not
centrally suppressed. In TNG50 on the other hand, both kinetic and thermal
feedback are done on the gas immediately surrounding the black hole,
suppressing star formation from the inside-out.
In quantitative detail there remain small differences between the observed and
TNG50 sSFR profiles at high masses (i.e. log M∗> 10.5) below the main
sequence. While the sSFR profiles agree at the centers, for $2<r<4$ kpc TNG50 is about a factor of two higher than observations. This implies that the central suppression of SFR in TNG50 does not extend to as large radii as seen in the data. This could be related to the modeling of the interstellar medium
in this simulation: in particular, there is no explicit multi-phase medium
with cold clouds embedded in a hot, volume filling component, but cells have a
single, volume averaged density value and a pressure according to an effective
equation of state (Springel & Hernquist, 2003). This means that AGN driven
winds that interact with this medium impact the entire mass budget, while a
situation where the wind propagates in low density channels while cold clouds
continue forming stars (e.g. Dugan et al., 2017) is not possible within the
IllustrisTNG model. It is possible that the effect of AGN winds would differ
with a more realistically modeled ISM – a scenario testable with future
simulations.
In general the TNG AGN feedback model produces sSFR profiles which are in
better agreement with observations than the original Illustris simulation. The
TNG black hole feedback model introduces powerful kicks when a black hole
reaches a certain mass (Weinberger et al., 2017; Nelson et al., 2019b). These
kicks evacuate gas from the very center of the galaxy (Zinger et al., 2020),
introducing enough feedback energy to gravitationally unbind gas from the
galaxy. As described in Terrazas et al. (2020), these galaxies are likely in the process of unbinding their gas, starting from the very central regions, with the effect eventually expanding to larger radii. Notably the original
Illustris simulation, with its rather different physical mechanism for AGN
feedback at low accretion rates, based on jet-inflated bubbles heating the ICM
at distances of tens of kpc or more from the galaxy, does not reproduce the
central SFR profile suppression seen in data. This different manifestation
between the two feedback models is clearly constrained by the observations. In
summary, our findings support two key ideas: (i) in reality, massive galaxies quench from the inside-out, possibly due to locally acting AGN feedback, and (ii) in the TNG simulations, the details of how supermassive black hole feedback is implemented, and in particular how this feedback energy physically affects, heats, and redistributes gas, appear to be consistent, to zeroth order, with constraints from the observed star formation rate radial profiles on kiloparsec scales.
### 5.2 Flat sSFR profiles across the star-forming main sequence
Figure 5: The average radial sSFR profiles of galaxies across the star forming
main sequence are very similar between TNG50 and observations at $0.75<z<1.5$.
The top row is above the main sequence, middle is on, bottom is below. The
magenta in the bottom right corresponds to the AGN correction described in
§2.4; note that it makes little difference. The grey shaded region is inside the
observed PSF.
Average specific star formation rate (sSFR) profiles of galaxies on, above,
and below the star forming main sequence in observations and simulations are
shown in Fig. 5. The main takeaway is that the sSFR profiles across the main
sequence in TNG50 are remarkably similar to those in observations. With few
exceptions, at all masses and radii the observed and simulated sSFR profiles
lie within 0.3 dex (a factor of two) of each other.
This agreement is surprising; it did not have to turn out this way. The
consistency shows that the distribution of dense gas and the conversion of gas
into stars are roughly correct in the simulation, at least relative to the
existing stellar mass. This means that the physical TNG50 model governing how
galaxies grow in size and build their structures across the SFMS yields high-fidelity predictions. The distribution of cold gas is set by the spatially
dependent interplay between gas inflows, outflows, and star formation. The
accretion of gas onto the galaxies is driven by gravity (a model about which
there is less uncertainty than the others) and suppressed by feedback. TNG50
uses the Springel & Hernquist (2003) model for star formation. In this model
gas above a certain density threshold is converted to stars stochastically.
While this model is too simple on small scales (e.g. Semenov et al., 2019), it
appears that on kpc scales this model produces results that are consistent
with observations.
Feedback has significant effects in all parts of the baryon cycle: it affects
inflow rates and geometries (e.g. Nelson et al., 2015) and it determines the
distribution of cold gas and hence where the galaxy can form stars. In TNG50
outflows driven by supernova feedback are launched from star forming gas with
their energy given by the star formation rate. This result means that at $0.75<z<1.5$ and 9<log(M∗)<10.5, the way outflows are implemented in TNG50 produces results that are, on population and azimuthal average, consistent with observations on, above, and below the star forming main sequence. TNG50’s
combination of outflows and conversion of gas into stars produces galaxies
that have a radial structure of new star formation over past star formation
that is consistent with the real Universe.
Above the main sequence the sSFR profiles from TNG50 and 3D-HST are fairly
flat. Star formation is not preferentially enhanced in the center, meaning that it is not primarily driven by central starbursts. In this regime, the match with
observations improved from Illustris to TNG. In Illustris the profiles have
somewhat of a negative gradient while in TNG50 (and 3D-HST observations) they
do not. This is not primarily a resolution effect, as the profiles in the lower resolution TNG50-2 run are fairly flat like those in TNG50. Instead this is likely a physical
effect owing to the implementation of supernova feedback. As shown in observations as well as in TNG50 (Förster Schreiber et al., 2019; Nelson et al., 2019b), star formation driven winds are strongest above the SFMS, at
least at $z\sim 1$. The implementation of these winds changed from Illustris
to TNG. As shown in Hemler et al. (2020), this affects the metallicity
gradients in galaxies. Here we see that it also affects the shape of the sSFR
profiles of galaxies above the main sequence. In TNG50 wind energy has an
additional scaling with the metallicity (Pillepich et al., 2018). These
changes produce flatter sSFR profiles above the main sequence, more in line
with observations from the 3D-HST survey at $z\sim 1$.
What do the shapes of the sSFR profiles mean for how galaxies build
structurally? Across the main sequence at all masses and star formation rates,
the sSFR profiles on average are fairly flat, meaning that galaxies grow
largely self-similarly on average (Nelson et al., 2019c). This is consistent
with the fact that the size-mass relation for star-forming galaxies has a
shallow slope (e.g. van Dokkum et al., 2013, 2015; Patel et al., 2013; Suess
et al., 2019; Mosleh et al., 2020). Star formation adds stars to galaxies with close to the same distribution as the existing stars, so the structure of the galaxy population as a function of mass changes fairly slowly. This is not necessarily true of individual galaxies; indeed, the purpose of this detailed comparison between observations and simulations is to enable the use of these simulations to track individual galaxies through time and see what drives their evolution through the SFR-M∗ plane.
Figure 6: Difference between the sSFR profiles in 3D-HST and TNG50. As shown
by the dotted grey lines, the profiles are nearly always within $\pm 0.3$dex
(a factor of two) of each other.
## 6 Summary
In this paper we have compared the integrated and kpc-resolved star forming
main sequence in the TNG50 magnetohydrodynamical cosmological simulation and
observations from the 3D-HST survey. TNG50 is the highest resolution iteration
of the IllustrisTNG project, resolving 2100 galaxies with M∗$>10^{9}$ M⊙ at a
spatial resolution of $\sim 100$ pc at $z\sim 1$. The 3D-HST program is a 248-orbit near-infrared spatially resolved spectroscopic survey with the Hubble
Space Telescope that provides maps of the specific star formation rate in
thousands of galaxies at $z\sim 1$. These are complemented by a new analysis
of the integrated photometry of these galaxies with the Prospector Bayesian
inference framework, providing improved estimates for stellar masses and SFRs.
These simulated and observed datasets are well-matched to determine how well
the simulation can be used to understand how galaxies move through the star
forming main sequence, what causes star formation to be enhanced and
suppressed, and how galaxies evolve structurally during this process.
We find that the star forming main sequence in TNG50 is consistent to within
0.1 dex of observations from 3D-HST for all masses $10^{9}<$M∗$<10^{11}$M⊙ at
$0.75<z<1.5$ derived from Prospector. This is a significantly stronger
agreement than previously reported for the TNG simulations in comparison to
then-available observationally-inferred results (Donnari et al., 2019) and a
strong validation of the model in a galaxy integrated population sense (see
Fig. 1). This is also better than the agreement typically reported in
cosmological hydrodynamical simulations (Torrey et al., 2014; Sparre et al.,
2015; Schaye et al., 2015; Somerville & Davé, 2015; Davé et al., 2016). We
find that the previous 0.2-1 dex offset between observations and simulations
may be driven by the inference of stellar population parameters from
observations rather than necessarily the physical model in simulations,
although uncertainties remain to be tested regarding star formation histories
and other aspects of the inference of stellar populations. The newly-derived
stellar mass estimates are 0.1-0.3 dex higher and the star formation rates
0.1-1 dex lower than previous estimates (see Leja et al., 2019, for more
details). While the median SFRs are nearly identical between TNG50 and
observations, some discrepancies do arise in the higher order moments of the
SFR distribution. The scatter of SFRs around the main sequence in TNG50 is
narrower at all masses than in observations. It is also self-similar while the
observed SFRs skew towards lower values as mass increases (Fig. 2).
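The higher-order moments discussed above can be quantified in a simple way. The sketch below uses synthetic data and a toy power-law main sequence (the slope, normalisation, and scatter are placeholders, not the actual TNG50 or Prospector measurements); it computes the main-sequence offset $\Delta\mathrm{MS}=\log\mathrm{SFR}-\log\mathrm{SFR}_{\mathrm{MS}}(M_{*})$, a percentile-based scatter, and a simple skew indicator:

```python
import numpy as np

def delta_ms(log_mstar, log_sfr, slope=0.8, norm=1.0, log_m0=10.0):
    """Offset from a toy power-law main sequence, in dex.

    The slope and normalisation here are placeholders, not a 3D-HST fit."""
    log_sfr_ms = norm + slope * (log_mstar - log_m0)
    return log_sfr - log_sfr_ms

def ms_moments(dms):
    """Percentile-based scatter (half the 16th-84th range) and a simple
    skew indicator (median minus mean; positive when SFRs skew low)."""
    p16, p50, p84 = np.percentile(dms, [16, 50, 84])
    return 0.5 * (p84 - p16), p50 - np.mean(dms)

# synthetic population: 0.3 dex log-normal scatter about the toy ridge line
rng = np.random.default_rng(0)
log_mstar = rng.uniform(9.0, 11.0, 5000)
log_sfr = 1.0 + 0.8 * (log_mstar - 10.0) + rng.normal(0.0, 0.3, 5000)

dms = delta_ms(log_mstar, log_sfr)
scatter, skew = ms_moments(dms)   # scatter ~0.3 dex, skew ~0 by construction
```

A skewed input distribution (e.g. an extra low-SFR tail at high mass) would show up as a positive skew indicator while leaving the percentile-based scatter nearly unchanged, which is why both moments are worth tracking separately.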
Further, we find surprisingly good agreement between the simulated and
observed average sSFR radial profiles of galaxies above, on, and below the
star forming main sequence. With a few exceptions, they agree qualitatively
and quantitatively. They are within a factor of two at all masses and radii
across the main sequence. Qualitatively, in both observations and simulations,
across the main sequence, the sSFR profiles are fairly flat, meaning galaxies
on average grow self-similarly regardless of where they are in the SFR-$M_{*}$
plane, which is likely why the size growth of galaxies is so gradual. This
means, importantly, that the distribution of gas and its conversion into stars
in the simulation are at least roughly correct on kpc scales.
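As an illustration of what such a radial sSFR profile measurement involves, here is a minimal sketch (using a toy exponential disk of particles, not TNG50 data) that sums SFR and stellar mass in circular annuli and takes their ratio:

```python
import numpy as np

def ssfr_profile(r_kpc, mstar, sfr, r_edges):
    """Average sSFR in radial annuli: total SFR over total stellar mass
    per bin (units of 1/yr if SFR is in Msun/yr and mass in Msun)."""
    sfr_sum, _ = np.histogram(r_kpc, bins=r_edges, weights=sfr)
    mass_sum, _ = np.histogram(r_kpc, bins=r_edges, weights=mstar)
    with np.errstate(invalid="ignore", divide="ignore"):
        ssfr = sfr_sum / mass_sum
    return 0.5 * (r_edges[:-1] + r_edges[1:]), ssfr

# toy galaxy: exponential disk of particles with a flat sSFR of 1e-9 / yr
rng = np.random.default_rng(1)
r = rng.exponential(scale=3.0, size=20000)   # radii in kpc
m = np.full_like(r, 1e6)                     # Msun per stellar particle
s = 1e-9 * m                                 # flat sSFR by construction
r_mid, ssfr = ssfr_profile(r, m, s, np.linspace(0.0, 10.0, 11))
```

By construction the flat input sSFR is recovered in every annulus; a centrally suppressed galaxy would instead show a dip in the innermost bins.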
The agreement between TNG50 and 3D-HST data is particularly interesting below
the main sequence at high masses, a region of parameter space that galaxies
must necessarily traverse on their journey from star forming to quenched. Here
we find that both simulated and observed $z\sim 1$ galaxies exhibit
depressions in sSFR in the central regions, up to a few kpc wide. The inside-
out suppression of star formation in high mass galaxies below the main
sequence is similar in both 3D-HST observations and in the TNG50 simulation, a
key signature of locally acting AGN feedback. This behavior is not seen in the
original Illustris simulation, where AGN feedback affects gas at large radii
rather than acting directly from the innermost regions of galaxies. Taken
together, our results provide evidence for AGN feedback as the source of
galaxy quenching.
Looking ahead, because the simulation reasonably reproduces the observations,
we should be able to use the simulation to understand how galaxies move
through the SFR-$M_{*}$ plane and build structurally through star formation.
## Acknowledgements
The TNG50 simulation was realized with compute time granted by the Gauss
Centre for Supercomputing (GCS) via the Large-Scale Project GCS-DWAR (2016;
PIs Nelson/Pillepich); its lower resolution counterparts were carried out on
the Draco and Hydra supercomputers at the Max Planck Computing and Data
Facility (MPCDF); the original Illustris simulation was performed at the CURIE
supercomputer at CEA/France as part of PRACE project RA0844 and at the
SuperMUC computer at the Leibniz Computing Centre, Germany, as part of project
pr85je. EJN acknowledges support of the National Hubble Fellowship Program
through grant number HST-HF2-51416.001-A. ST is supported by the Smithsonian
Astrophysical Observatory through the CfA Fellowship. BB acknowledges support
of the Simons Foundation Flatiron Institute and is a Packard Fellow. FM
acknowledges support through the Program "Rita Levi Montalcini" of the Italian
MUR. BAT was supported by the Harvard Future Faculty Leaders Postdoctoral
Fellowship. The Cosmic Dawn Center (DAWN) is funded by the Danish National
Research Foundation under grant No. 140. RKC acknowledges funding from the
John Harvard Distinguished Science Fellowship.
## References
* Abdurro’uf & Akiyama (2018) Abdurro’uf, Akiyama M., 2018, MNRAS, 479, 5083
* Appleby et al. (2020) Appleby S., Davé R., Kraljic K., Anglés-Alcázar D., Narayanan D., 2020, MNRAS, 494, 6053
* Arnouts et al. (1999) Arnouts S., Cristiani S., Moscardini L., Matarrese S., Lucchin F., Fontana A., Giallongo E., 1999, MNRAS, 310, 540
* Ashby et al. (2013) Ashby M. L. N., et al., 2013, ApJ, 769, 80
* Aumer & White (2013) Aumer M., White S. D. M., 2013, MNRAS, 428, 1055
* Behroozi et al. (2013) Behroozi P. S., Wechsler R. H., Conroy C., 2013, ApJ, 770, 57
* Belfiore et al. (2018) Belfiore F., et al., 2018, MNRAS, 477, 3014
* Bell et al. (2004) Bell E. F., et al., 2004, ApJ, 608, 752
* Blanton et al. (2003) Blanton M. R., et al., 2003, ApJ, 594, 186
* Brammer et al. (2009) Brammer G. B., et al., 2009, ApJ, 706, L173
* Brammer et al. (2012a) Brammer G. B., et al., 2012a, ApJS, 200, 13
* Brammer et al. (2012b) Brammer G. B., et al., 2012b, ApJ, 758, L17
* Brooks et al. (2011) Brooks A. M., et al., 2011, ApJ, 728, 51
* Bruzual & Charlot (2003) Bruzual G., Charlot S., 2003, MNRAS, 344, 1000
* Calzetti et al. (2000) Calzetti D., Armus L., Bohlin R. C., Kinney A. L., Koornneef J., Storchi-Bergmann T., 2000, ApJ, 533, 682
* Caplar & Tacchella (2019) Caplar N., Tacchella S., 2019, MNRAS, 487, 3845
* Chabrier (2003) Chabrier G., 2003, PASP, 115, 763
* Christensen et al. (2014) Christensen C. R., Brooks A. M., Fisher D. B., Governato F., McCleary J., Quinn T. R., Shen S., Wadsley J., 2014, MNRAS, 440, L51
* Cibinel et al. (2015) Cibinel A., et al., 2015, ApJ, 805, 181
* Conroy & Wechsler (2009) Conroy C., Wechsler R. H., 2009, ApJ, 696, 620
* Crain et al. (2015) Crain R. A., et al., 2015, MNRAS, 450, 1937
* Daddi et al. (2007) Daddi E., et al., 2007, ApJ, 670, 156
* Davé et al. (2016) Davé R., Thompson R., Hopkins P. F., 2016, MNRAS, 462, 3265
* Davé et al. (2019) Davé R., Anglés-Alcázar D., Narayanan D., Li Q., Rafieferantsoa M. H., Appleby S., 2019, MNRAS, 486, 2827
* Di Matteo et al. (2005) Di Matteo T., Springel V., Hernquist L., 2005, Nature, 433, 604
* Diemer (2018) Diemer B., 2018, ApJS, 239, 35
* Diemer et al. (2017) Diemer B., Sparre M., Abramson L. E., Torrey P., 2017, ApJ, 839, 26
* Diemer et al. (2019) Diemer B., et al., 2019, MNRAS, 487, 1529
* Donnari et al. (2019) Donnari M., et al., 2019, MNRAS, 485, 4817
* Donnari et al. (2021) Donnari M., et al., 2021, MNRAS, 500, 4004
* Dubois et al. (2016) Dubois Y., Peirani S., Pichon C., Devriendt J., Gavazzi R., Welker C., Volonteri M., 2016, MNRAS, 463, 3948
* Dugan et al. (2017) Dugan Z., Gaibler V., Silk J., 2017, ApJ, 844, 37
* Ellison et al. (2018) Ellison S. L., Sánchez S. F., Ibarra-Medel H., Antonio B., Mendel J. T., Barrera-Ballesteros J., 2018, MNRAS, 474, 2039
* Faber et al. (2007) Faber S. M., et al., 2007, ApJ, 665, 265
* Förster Schreiber et al. (2006) Förster Schreiber N. M., et al., 2006, ApJ, 645, 1062
* Förster Schreiber et al. (2009) Förster Schreiber N. M., et al., 2009, ApJ, 706, 1364
* Förster Schreiber et al. (2011a) Förster Schreiber N. M., Shapley A. E., Erb D. K., Genzel R., Steidel C. C., Bouché N., Cresci G., Davies R., 2011a, ApJ, 731, 65
* Förster Schreiber et al. (2011b) Förster Schreiber N. M., et al., 2011b, ApJ, 739, 45
* Förster Schreiber et al. (2014) Förster Schreiber N. M., et al., 2014, ApJ, 787, 38
* Förster Schreiber et al. (2019) Förster Schreiber N. M., et al., 2019, ApJ, 875, 21
* Genel et al. (2014) Genel S., et al., 2014, MNRAS, 445, 175
* Genel et al. (2018) Genel S., et al., 2018, MNRAS, 474, 3976
* Genzel et al. (2014) Genzel R., et al., 2014, ApJ, 796, 7
* Giavalisco et al. (2004) Giavalisco M., et al., 2004, ApJ, 600, L93
* Governato et al. (2010) Governato F., et al., 2010, Nature, 463, 203
* Graves et al. (2009) Graves G. J., Faber S. M., Schiavon R. P., 2009, ApJ, 693, 486
* Grogin et al. (2011) Grogin N. A., et al., 2011, ApJS, 197, 35
* Guedes et al. (2011) Guedes J., Callegari S., Madau P., Mayer L., 2011, ApJ, 742, 76
* Habouzit et al. (2019) Habouzit M., et al., 2019, MNRAS, 484, 4413
* Habouzit et al. (2020) Habouzit M., et al., 2020, arXiv e-prints, p. arXiv:2006.10094
* Hemler et al. (2020) Hemler Z. S., et al., 2020, arXiv e-prints, p. arXiv:2007.10993
* Hernquist (1989) Hernquist L., 1989, Nature, 340, 687
* Hopkins et al. (2014) Hopkins P. F., Kereš D., Oñorbe J., Faucher-Giguère C.-A., Quataert E., Murray N., Bullock J. S., 2014, MNRAS, 445, 581
* Ilbert et al. (2006) Ilbert O., et al., 2006, A&A, 457, 841
* Johnson & Leja (2017) Johnson B., Leja J., 2017, Bd-J/Prospector: Initial Release, doi:10.5281/zenodo.1116491
* Johnson et al. (2020) Johnson B. D., Leja J., Conroy C., Speagle J. S., 2020, arXiv e-prints, p. arXiv:2012.01426
* Karim et al. (2011) Karim A., et al., 2011, ApJ, 730, 61
* Kauffmann et al. (2003) Kauffmann G., et al., 2003, MNRAS, 341, 54
* Kennicutt (1989) Kennicutt Robert C. J., 1989, ApJ, 344, 685
* Khandai et al. (2015) Khandai N., Di Matteo T., Croft R., Wilkins S., Feng Y., Tucker E., DeGraf C., Liu M.-S., 2015, MNRAS, 450, 1349
* Koekemoer et al. (2011) Koekemoer A. M., et al., 2011, ApJS, 197, 36
* Lee et al. (2018) Lee B., et al., 2018, ApJ, 853, 131
* Leja et al. (2013) Leja J., et al., 2013, ApJ, 778, L24
* Leja et al. (2015) Leja J., van Dokkum P. G., Franx M., Whitaker K. E., 2015, ApJ, 798, 115
* Leja et al. (2017) Leja J., Johnson B. D., Conroy C., van Dokkum P. G., Byler N., 2017, ApJ, 837, 170
* Leja et al. (2019) Leja J., et al., 2019, ApJ, 877, 140
* Li et al. (2020) Li Y., et al., 2020, ApJ, 895, 102
* Licquia & Newman (2015) Licquia T. C., Newman J. A., 2015, ApJ, 806, 96
* Lundgren et al. (2012) Lundgren B. F., et al., 2012, arXiv e-prints, arXiv:1206.1867
* Marinacci et al. (2018) Marinacci F., et al., 2018, MNRAS, 480, 5113
* Matthee & Schaye (2019) Matthee J., Schaye J., 2019, MNRAS, 484, 915
* Momcheva et al. (2015) Momcheva I. G., et al., 2015, arXiv e-prints, arXiv:1510.02106
* Momcheva et al. (2016) Momcheva I. G., et al., 2016, ApJS, 225, 27
* Morselli et al. (2019) Morselli L., Popesso P., Cibinel A., Oesch P. A., Montes M., Atek H., Illingworth G. D., Holden B., 2019, A&A, 626, A61
* Mosleh et al. (2020) Mosleh M., Hosseinnejad S., Hosseini-ShahiSavandi S. Z., Tacchella S., 2020, ApJ, 905, 170
* Moster et al. (2013) Moster B. P., Naab T., White S. D. M., 2013, MNRAS, 428, 3121
* Muzzin et al. (2009) Muzzin A., Marchesini D., van Dokkum P. G., Labbé I., Kriek M., Franx M., 2009, ApJ, 701, 1839
* Naiman et al. (2018) Naiman J. P., et al., 2018, MNRAS, 477, 1206
* Nelson et al. (2012) Nelson E. J., et al., 2012, ApJ, 747, L28
* Nelson et al. (2013) Nelson E. J., et al., 2013, ApJ, 763, L16
* Nelson et al. (2015) Nelson D., Genel S., Vogelsberger M., Springel V., Sijacki D., Torrey P., Hernquist L., 2015, MNRAS, 448, 59
* Nelson et al. (2016) Nelson E. J., et al., 2016, ApJ, 828, 27
* Nelson et al. (2018) Nelson D., et al., 2018, MNRAS, 475, 624
* Nelson et al. (2019a) Nelson D., et al., 2019a, Computational Astrophysics and Cosmology, 6, 2
* Nelson et al. (2019b) Nelson D., et al., 2019b, MNRAS, 490, 3234
* Nelson et al. (2019c) Nelson E. J., et al., 2019c, ApJ, 870, 130
* Noeske et al. (2007) Noeske K. G., et al., 2007, ApJ, 660, L43
* Oesch et al. (2018) Oesch P. A., et al., 2018, ApJS, 237, 12
* Orr et al. (2017) Orr M. E., et al., 2017, ApJ, 849, L2
* Pacifici et al. (2016) Pacifici C., Oh S., Oh K., Lee J., Yi S. K., 2016, ApJ, 824, 45
* Papovich et al. (2015) Papovich C., et al., 2015, ApJ, 803, 26
* Patel et al. (2013) Patel S. G., et al., 2013, ApJ, 766, 15
* Peng et al. (2002) Peng C. Y., Ho L. C., Impey C. D., Rix H.-W., 2002, AJ, 124, 266
* Pillepich et al. (2018) Pillepich A., et al., 2018, MNRAS, 473, 4077
* Pillepich et al. (2019) Pillepich A., et al., 2019, MNRAS, 490, 3196
* Rodighiero et al. (2011) Rodighiero G., et al., 2011, ApJ, 739, L40
* Salim et al. (2007) Salim S., et al., 2007, ApJS, 173, 267
* Schaye et al. (2015) Schaye J., et al., 2015, MNRAS, 446, 521
* Schmidt et al. (2013) Schmidt K. B., et al., 2013, MNRAS, 432, 285
* Schreiber et al. (2015) Schreiber C., et al., 2015, A&A, 575, A74
* Semenov et al. (2019) Semenov V. A., Kravtsov A. V., Gnedin N. Y., 2019, ApJ, 870, 79
* Sérsic (1968) Sérsic J. L., 1968, Atlas de galaxias australes
* Shivaei et al. (2015) Shivaei I., et al., 2015, ApJ, 815, 98
* Sijacki et al. (2007) Sijacki D., Springel V., Di Matteo T., Hernquist L., 2007, MNRAS, 380, 877
* Sijacki et al. (2015) Sijacki D., Vogelsberger M., Genel S., Springel V., Torrey P., Snyder G. F., Nelson D., Hernquist L., 2015, MNRAS, 452, 575
* Skelton et al. (2014) Skelton R. E., et al., 2014, ApJS, 214, 24
* Somerville & Davé (2015) Somerville R. S., Davé R., 2015, ARA&A, 53, 51
* Sparre et al. (2015) Sparre M., et al., 2015, MNRAS, 447, 3548
* Sparre et al. (2017) Sparre M., Hayward C. C., Feldmann R., Faucher-Giguère C.-A., Muratov A. L., Kereš D., Hopkins P. F., 2017, MNRAS, 466, 88
* Speagle (2020) Speagle J. S., 2020, MNRAS, 493, 3132
* Speagle et al. (2014) Speagle J. S., Steinhardt C. L., Capak P. L., Silverman J. D., 2014, ApJS, 214, 15
* Springel (2010) Springel V., 2010, MNRAS, 401, 791
* Springel & Hernquist (2003) Springel V., Hernquist L., 2003, MNRAS, 339, 289
* Springel et al. (2001) Springel V., White S. D. M., Tormen G., Kauffmann G., 2001, MNRAS, 328, 726
* Springel et al. (2005) Springel V., Di Matteo T., Hernquist L., 2005, MNRAS, 361, 776
* Springel et al. (2018) Springel V., et al., 2018, MNRAS, 475, 676
* Starkenburg et al. (2019) Starkenburg T. K., Tonnesen S., Kopenhafer C., 2019, ApJ, 874, L17
* Strateva et al. (2001) Strateva I., et al., 2001, AJ, 122, 1861
* Suess et al. (2019) Suess K. A., Kriek M., Price S. H., Barro G., 2019, ApJ, 885, L22
* Szomoru et al. (2010) Szomoru D., et al., 2010, ApJ, 714, L244
* Tacchella et al. (2015a) Tacchella S., et al., 2015a, Science, 348, 314
* Tacchella et al. (2015b) Tacchella S., et al., 2015b, ApJ, 802, 101
* Tacchella et al. (2016) Tacchella S., Dekel A., Carollo C. M., Ceverino D., DeGraf C., Lapiner S., Mandelker N., Primack Joel R., 2016, MNRAS, 457, 2790
* Tacchella et al. (2018) Tacchella S., et al., 2018, ApJ, 859, 56
* Tacchella et al. (2019) Tacchella S., et al., 2019, MNRAS, 487, 5416
* Tacchella et al. (2020) Tacchella S., Forbes J. C., Caplar N., 2020, MNRAS, 497, 698
* Tal et al. (2014) Tal T., et al., 2014, ApJ, 789, 164
* Tasca et al. (2015) Tasca L. A. M., et al., 2015, A&A, 581, A54
* Taylor et al. (2015) Taylor E. N., et al., 2015, MNRAS, 446, 2144
* Terrazas et al. (2020) Terrazas B. A., et al., 2020, MNRAS, 493, 1888
* Thomas et al. (1999) Thomas D., Greggio L., Bender R., 1999, MNRAS, 302, 537
* Tomczak et al. (2016) Tomczak A. R., et al., 2016, ApJ, 817, 118
* Torrey et al. (2014) Torrey P., Vogelsberger M., Genel S., Sijacki D., Springel V., Hernquist L., 2014, MNRAS, 438, 1985
* Torrey et al. (2017) Torrey P., Wellons S., Ma C.-P., Hopkins P. F., Vogelsberger M., 2017, MNRAS, 467, 4872
* Trager & Somerville (2009) Trager S. C., Somerville R. S., 2009, MNRAS, 395, 608
* Vogelsberger et al. (2014a) Vogelsberger M., et al., 2014a, MNRAS, 444, 1518
* Vogelsberger et al. (2014b) Vogelsberger M., et al., 2014b, Nature, 509, 177
* Vulcani et al. (2015) Vulcani B., et al., 2015, ApJ, 814, 161
* Vulcani et al. (2016) Vulcani B., et al., 2016, ApJ, 833, 178
* Weinberger et al. (2017) Weinberger R., et al., 2017, MNRAS, 465, 3291
* Weinberger et al. (2018) Weinberger R., et al., 2018, MNRAS, 479, 4056
* Wellons & Torrey (2017) Wellons S., Torrey P., 2017, MNRAS, 467, 3887
* Whitaker et al. (2011) Whitaker K. E., et al., 2011, ApJ, 735, 86
* Whitaker et al. (2012) Whitaker K. E., van Dokkum P. G., Brammer G., Franx M., 2012, ApJ, 754, L29
* Whitaker et al. (2014) Whitaker K. E., et al., 2014, ApJ, 795, 104
* Whitaker et al. (2019) Whitaker K. E., et al., 2019, ApJS, 244, 16
* Wuyts et al. (2011) Wuyts S., et al., 2011, ApJ, 742, 96
* Wuyts et al. (2012) Wuyts S., et al., 2012, ApJ, 753, 114
* Wuyts et al. (2013) Wuyts S., et al., 2013, ApJ, 779, 135
* Zibetti et al. (2009) Zibetti S., Charlot S., Rix H.-W., 2009, MNRAS, 400, 1181
* Zinger et al. (2020) Zinger E., et al., 2020, MNRAS, 499, 768
* van Dokkum et al. (2010) van Dokkum P. G., et al., 2010, ApJ, 709, 1018
* van Dokkum et al. (2011) van Dokkum P. G., et al., 2011, ApJ, 743, L15
* van Dokkum et al. (2013) van Dokkum P. G., et al., 2013, ApJ, 771, L35
* van Dokkum et al. (2015) van Dokkum P. G., et al., 2015, ApJ, 813, 23
* van der Wel et al. (2014) van der Wel A., et al., 2014, ApJ, 788, 28
## Appendix A
Figure 7: Here we show the difference between sSFR profiles from TNG50 at
different orientations. Blue is face-on, orange is edge-on, and grays are the
xy, xz, and yz projections, respectively. The top row is above the main
sequence, the middle row is on the main sequence, and the bottom row is below.
As noted in §3.3, in Fig. 7 we show the impact of orientation on average sSFR
profiles across the SFMS. The primary region of parameter space where this
turns out to be relevant is also the most interesting: at high masses below
the main sequence. Projection effects result in sSFR profiles that appear less
centrally depressed than they would if one could measure them face-on. This is
relevant for our interpretation of observations.
# Accreted or Not Accreted? The Fraction of Accreted Mass in Galaxies from
Simulations and Observations
Rhea-Silvia Remus1 and Duncan A. Forbes2
1Universitäts-Sternwarte München, Fakultät für Physik, LMU München,
Scheinerstr. 1, D-81679 München, Germany
2Centre for Astrophysics and Supercomputing, Swinburne University of
Technology, Hawthorn VIC 3122, Australia. E-mail: <EMAIL_ADDRESS>
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
In the two-phase scenario of galaxy formation, a galaxy’s stellar mass growth
is first dominated by in-situ star formation, and subsequently by accretion.
We analyse the radial distribution of the accreted stellar mass in $\sim$500
galaxies from the hydrodynamical cosmological simulation Magneticum.
Generally, we find good agreement with other simulations in that higher mass
galaxies have larger accreted fractions, but we predict higher accretion
fractions for low-mass galaxies. Based on the radial distribution of the
accreted and in-situ components, we define 6 galaxy classes, from completely
accretion dominated to completely in-situ dominated, and measure the
transition radii between in-situ and accretion-dominated regions for galaxies
that have such a transition. About 70% of our galaxies have one transition
radius. However, we also find about 10% of the galaxies to be accretion
dominated everywhere, and about 13% to have two transition radii, with the
centre and the outskirts both being accretion dominated. We show that these
classes are strongly correlated with the galaxy merger histories, especially
with the mergers’ cold gas fractions. We find high total in-situ (low
accretion) fractions to be associated with smaller, lower mass galaxies, lower
central dark matter fractions, and larger transition radii. Finally, we show
that the dips in observed surface brightness profiles seen in many early-type
galaxies do not correspond to the transition from in-situ to accretion-
dominated regions, and any inferred mass fractions are not indicative of the
true accreted mass. Instead, these dips contain information about the
galaxies’ dry minor merger assembly history.
###### keywords:
galaxies: structure – galaxies: evolution – galaxies: formation – methods:
numerical – methods: observational
††pubyear: 2021
## 1 Introduction
In the two-phase scenario of galaxy formation (e.g., Oser et al., 2010;
Pillepich et al., 2014), galaxies undergo two main phases of growth: first the
in-situ, and subsequently the ex-situ (or accretion) growth phase. In the
former phase, stars are formed within the primary galaxy. In the latter phase,
mass growth occurs through accretion of satellite galaxies. A pioneering work
in this area was presented by Oser et al. (2010), who, using a set of zoom
simulations of massive galaxies, found that the in-situ phase occurs
between redshifts 6 and 2, giving rise to a ‘galaxy core’ of size $\sim$2 kpc,
followed by accretion-dominated growth at lower redshifts. They also found
higher mass galaxies to have higher fractions of accreted mass, and that the
accreted mass is more often deposited in the galaxy outer halo regions. This
has been subsequently supported by parameter studies using binary merger
simulations of different mass ratios, demonstrating that larger mass ratios
for host and satellite galaxies are more likely to lead to a deposition of the
accreted stellar mass at larger radii, while small merger mass ratios usually
lead to full mixture of the accreted and in-situ formed stars (e.g., Hilz et
al., 2012; Karademir et al., 2019). However, the picture is not that clear, as
the orbital configurations of the mergers have been shown to influence the
radius of mass deposition for satellite galaxies especially for mergers of
larger mass ratios, with circular orbits leading to mass depositions at larger
radii than radial merger orbits (e.g., Amorisco, 2017; Karademir et al.,
2019).
This two-phase scenario has been further refined over the last decade with
increasingly sophisticated, full cosmological simulations. Those focusing on
predictions for the accreted stellar component of early-type galaxies include
the dark matter particle tagging approach of Cooper et al. (2013); Cooper et
al. (2015) and, more recently, hydrodynamical cosmological simulations like
Illustris (Pillepich et al., 2014; Rodriguez-Gomez et al., 2016), EAGLE
(Davison et al., 2020), and Illustris-TNG (Tacchella et al., 2019; Pulsoni et
al., 2020).
In particular, Cooper et al. (2013) modeled 1872 central galaxies in the mass
range $10.7<\log M_{*}<11.4$ (i.e., $11.5<\log M_{\mathrm{200}}<14.0$) using a
semi-analytic method to tag dark matter particles, not including a full
treatment of gas physics in the simulation itself. Being mainly massive
galaxies, the sample is dominated by early-type galaxies. They found that
accretion leads to a break, or change, in the slope of the stellar mass
surface density profile at the radius where the accreted material starts to
dominate over that formed in-situ. They fit Sérsic profiles to the in-situ and
accreted stars separately in surface density space, finding that the resulting
double Sérsic profiles provided a good overall fit. They showed that the
fraction of accreted material approaches 100% for the most massive early-type
galaxies, with more accretion dominated galaxies having shallower density
profiles with little, or no, obvious transition in the overall profile. This
study was further expanded for the very high mass end ($\log
M_{\mathrm{200}}\sim 14$) by Cooper et al. (2015). They found double Sérsic
profiles to be a good fit to the stellar mass surface density profiles, with
the inner component having Sérsic indices of $n\sim 4$ and the outer component
having $n\sim 1$ (similar to observational results for BCGs, e.g., Seigar et
al. (2007); Kluge et al. (2019)). However, these inner and outer components
were found to correspond to the relaxed and unrelaxed accreted stars rather
than the in-situ and accreted stars.
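To make the profile decomposition concrete, the sketch below builds a synthetic, noiseless double Sérsic surface-density profile with made-up parameters (a compact $n=4$ inner component plus an extended $n=1$ envelope; none of these numbers come from the studies cited above), refits it with SciPy, and locates the transition radius where the two fitted components contribute equally:

```python
import numpy as np
from scipy.optimize import brentq, curve_fit
from scipy.special import gammaincinv

def sersic(r, sigma_e, r_e, n):
    """Sersic surface-density profile; b_n is chosen so that r_e
    encloses half of the total light."""
    b_n = gammaincinv(2.0 * n, 0.5)
    return sigma_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

def log_double_sersic(r, ls1, re1, n1, ls2, re2, n2):
    """log10 of a two-component Sersic sum; amplitudes are fit in log10
    to keep the parameters well scaled for the optimizer."""
    return np.log10(sersic(r, 10.0**ls1, re1, n1) +
                    sersic(r, 10.0**ls2, re2, n2))

# synthetic profile: compact n=4 inner component + extended n=1 envelope
r = np.logspace(-0.5, 2.0, 60)                       # radii in kpc
log_sigma = log_double_sersic(r, 9.0, 2.0, 4.0, 7.0, 20.0, 1.0)

popt, _ = curve_fit(log_double_sersic, r, log_sigma,
                    p0=(8.7, 3.0, 3.0, 7.3, 15.0, 1.5),
                    bounds=([5.0, 0.1, 0.3, 5.0, 0.1, 0.3],
                            [12.0, 50.0, 8.0, 12.0, 100.0, 8.0]))

# radius where the two fitted components contribute equally (50:50)
r_tr = brentq(lambda x: sersic(x, 10.0**popt[0], popt[1], popt[2])
                      - sersic(x, 10.0**popt[3], popt[4], popt[5]), 0.5, 80.0)
```

With these made-up parameters the crossover falls near 10 kpc. When the total profile shows no obvious break in slope, this radius is poorly constrained by the fit, which is exactly the difficulty the studies above describe.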
Results from the fully hydrodynamical cosmological simulations all confirm the
idea of the two-phase scenario of galaxy formation, and the general trend for
higher mass galaxies to have larger amounts of accreted stars and shallower
radial stellar density profiles (Pillepich et al., 2014; Rodriguez-Gomez et
al., 2016; Tacchella et al., 2019; Davison et al., 2020). They also
agree that the transition radius between accretion-dominated and in-situ-
dominated stars is generally smaller for more massive galaxies. However, they
show very different results when it comes to the details of the accreted
versus in-situ components of galaxies, caused by the different subgrid models
describing the star formation and feedback processes (see Vogelsberger et al.,
2020, for a recent review).
For example, Rodriguez-Gomez et al. (2016) find for the old Illustris
simulations that all galaxies reveal a clear ‘transition’ radius for which the
in-situ and accreted components contribute equally (i.e., 50:50), while
Tacchella et al. (2019) reported for the new Illustris-TNG simulations that
their most massive galaxies can be dominated by accreted stars at all radii.
In addition, Tacchella et al. (2019) find much higher accreted mass fractions
at a given stellar mass for the new Illustris-TNG simulations than Rodriguez-
Gomez et al. (2016) found for the old Illustris simulations. Using the EAGLE
simulation, Davison et al. (2020) found similar ex-situ fractions as a
function of stellar mass as Tacchella et al. (2019). They also confirmed
previous studies’ findings that the majority of accreted material is deposited
in the outer regions, while for the most massive galaxies the accreted mass
can dominate over in-situ material in the inner regions. However, the mass-
size relations found for the galaxies from these two simulations are
different, clearly showing that the details of galaxy formation still strongly
differ between the different simulations.
More recently, these studies were also broadened to study the radial kinematic
profiles of galaxies as possible tracers for the transition radii from in-situ
to accretion dominated parts of galaxies (e.g., Schulze et al. (2020) using
the Magneticum simulations and Pulsoni et al. (2020) using the Illustris-TNG
simulations). Both studies find that the shape of the kinematic profile (i.e.,
$v/\sigma$) does not, in general, trace the transition between in-situ and ex-
situ dominated galaxy regions. Schulze et al. (2020) showed that the kinematic
profiles can be used as a tracer for the transition radius only for a special
subset of galaxies that have experienced only very small mergers since $z\sim 1$.
Pulsoni et al. (2020) split their galaxies into four classes of stellar mass
density profiles based on the variation of the in-situ and ex-situ fractions
with radius. We comment further on their findings in the main sections of this
work.
Observationally, the in-situ and accreted components of galaxies are much
harder to tackle. Deep imaging studies of massive early-type galaxies (ETGs)
found that around 3/4 of their galaxies reveal evidence for substructures in
their stellar halos (e.g., Schweizer & Seitzer (1992); Forbes & Thomson
(1992); Tal et al. (2009); Duc et al. (2015); Kluge et al. (2019)). Such
substructures, in the form of shells, plumes, envelopes, etc., likely represent
the debris of accreted satellite galaxies. After stacking a large sample of
42,000 luminous red galaxies (with $\log M_{*}>11$), Tal & van Dokkum (2011)
found that within about $8R_{\mathrm{e}}$ their surface brightness profiles
could be well represented by a single Sérsic profile, but beyond that an extra
component was required. A transition at similar radii has been reported in
the globular cluster systems of massive ETGs (e.g., Forbes & Remus (2018)).
D’Souza et al. (2014) fit 45,500 galaxies (avoiding edge-on disks) in several
stellar mass bins, finding good fits to a double Sérsic. There have also been
claims that the surface brightness profiles of elliptical galaxies are better
represented by three components (Huang et al., 2013), with radii of
$<1R_{\mathrm{e}}$, $\sim 2.5R_{\mathrm{e}}$, and $\sim 10R_{\mathrm{e}}$, all
with Sérsic values of $n\sim 1$–$2$. Thus, a range of radii from much smaller
than $1R_{\mathrm{e}}$ to 8–10$R_{\mathrm{e}}$ have been reported in the
literature as key transition radii. Very few studies have quantified the
transition radius between different galaxy components and the mass associated
with each component. This has, however, been attempted by Spavone et al.
(2017) and Spavone et al. (2020), using deep imaging of ETGs from the VEGAS
survey. Fitting double Sérsic functions to the galaxy surface brightness
profiles and deriving transition radii from this, they inferred outer halo mass
fractions, finding evidence for higher mass fractions in the outer component
for more massive galaxies.
We note that several late-type galaxies have been studied in order to measure
their outer halo light, e.g., the deep imaging of Merritt et al. (2016) using
the Dragonfly camera. This study revealed a large range in the fraction of
halo light in late-type galaxies beyond 5 disk scale lengths from $\sim 10\%$
to $<0.01\%$. However, it is not clear to what extent this represents the accreted
component of the disk galaxies as this outer light could also come from
extended thick disk components.
In this work, we investigate the in-situ and accreted components using
galaxies from the hydrodynamical cosmological simulation Magneticum. We
compare their total and radial accretion properties to results from other
simulations as well as observations of massive early-type galaxies, especially
addressing the question in how far the transition radii from accreted to in-
situ components can be inferred from Sérsic fits to the radial surface density
profiles. In Sec. 2 we present the simulations and the details of our
classification of in-situ and accreted. Results from the simulations are
presented in Sec. 3. This includes the classification of the radial density
profiles into 6 classes (3.1), probing their assembly history (3.2), a
comparison with other simulations (3.3), and a study of the correlation of the
in-situ/accreted fractions with various galaxy properties (3.4) and the
transition radii (3.5). Sec. 4 provides a comparison between the 2D density
profiles from simulations with observations and the question of possible
recovery of the accreted fractions from the projected profiles. Finally we
present our summary and conclusions in Sec. 5.
## 2 The Magneticum Pathfinder Simulations
We use the Magneticum Pathfinder111www.magneticum.org simulations (Dolag et
al. 2021, in prep.), which are a set of cosmological hydrodynamical SPH-
simulations of several boxes with volumes ranging from
$(2688~{}\mathrm{Mpc}/h)^{3}$ to $(48~{}\mathrm{Mpc}/h)^{3}$ and different
resolutions, with the lowest having $m_{\mathrm{Gas}}=2.6\times
10^{9}M_{\odot}/h$ and the currently highest having
$m_{\mathrm{Gas}}=7.3\times 10^{6}M_{\odot}/h$. Each gas particle can spawn up
to four stellar particles during its lifetime, and as such the average mass of
a stellar particle is one quarter of the gas particle mass at each resolution. A WMAP7
$\Lambda$CDM cosmology (Komatsu et al., 2011) is adopted throughout all
simulations, with $\sigma_{8}=0.809$, $h=0.704$, $\Omega_{\Lambda}=0.728$,
$\Omega_{\mathrm{M}}=0.272$, $\Omega_{\mathrm{B}}=0.0451$, and an initial
slope for the power spectrum of $n_{\mathrm{s}}=0.963$.
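For orientation, the quoted parameters can be plugged into the standard flat $\Lambda$CDM distance integrals. A minimal sketch (standard library only; radiation is neglected, and the trapezoidal integration is a simple stand-in for a proper cosmology package):

```python
import math

# WMAP7 parameters adopted by Magneticum (see text); radiation neglected
H0 = 70.4              # km/s/Mpc  (h = 0.704)
OMEGA_M = 0.272
OMEGA_L = 0.728
C_KMS = 299792.458     # speed of light, km/s

def efunc(z):
    """Dimensionless Hubble rate E(z) = H(z)/H0 for a flat LCDM model."""
    return math.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)

def comoving_distance(z, n=10000):
    """Comoving distance D_C = (c/H0) * integral_0^z dz'/E(z'), in Mpc,
    evaluated with a simple trapezoidal rule over n steps."""
    dz = z / n
    vals = [1.0 / efunc(i * dz) for i in range(n + 1)]
    integral = dz * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    return (C_KMS / H0) * integral

d_c = comoving_distance(1.0)   # comoving distance to z = 1 in this cosmology
```

In this cosmology the comoving distance to $z=1$ comes out near 3.3 Gpc, a useful scale to keep in mind when relating the simulation boxes to observed volumes.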
All simulations are performed with a version of GADGET-3 that includes various
updates in the formulation of SPH (Dolag et al., 2004, 2005; Donnert et al.,
2013; Beck et al., 2016) as well as in the sub-grid physics, especially with
respect to the star formation and metal enrichment descriptions (Tornatore et
al., 2004, 2007; Wiersma et al., 2009) and the black hole feedback (Fabjan et
al., 2010; Hirschmann et al., 2014). For more details on the physics included
in the Magneticum Pathfinder simulations we refer the reader to Hirschmann et
al. (2014); Teklu et al. (2015) and Dolag et al. (2017). Structures are
identified using a modified version of SUBFIND (Springel et al., 2001; Dolag
et al., 2009).
As shown in previous works, the Magneticum Pathfinder simulations successfully
reproduce many observational results over a broad range of masses, from galaxy
clusters down to field galaxies. Most relevant for the work presented here,
they capture the evolution and properties of black holes (BHs) and AGN
(Hirschmann et al., 2014; Steinborn et al., 2015, 2016), and the angular
momentum, kinematic, and dynamical properties of galaxies at low and high
redshifts (Teklu et al., 2015; Remus et al., 2017; Schulze et al., 2018; Teklu
et al., 2018; van de Sande et al., 2019). Especially relevant for the work
presented in this paper, the Magneticum spheroidal and disk galaxies
successfully match the observed size-mass relation up to redshifts of $z=2$
(Remus et al., 2017; Schulze et al., 2018).
### 2.1 High resolution simulation
As we are focusing on the internal properties of galaxies in this work, we use
the currently largest volume of Magneticum with the highest resolution level
available. This box has a size of $(48~{}\mathrm{Mpc}/h)^{3}$. It initially
contains a total of $2\times 576^{3}$ (dark matter and gas) particles. The
mass resolution for the dark matter, gas, and stellar particles is
$m_{\mathrm{DM}}=3.6\times 10^{7}M_{\odot}/h$, $m_{\mathrm{Gas}}=7.3\times
10^{6}M_{\odot}/h$, and $m_{\mathrm{*}}\simeq 2\times 10^{6}M_{\odot}/h$,
respectively, with a softening of
$\epsilon_{\mathrm{DM}}=\epsilon_{\mathrm{Gas}}=1.4~{}\mathrm{kpc}/h$ for dark
matter and gas particles, and $\epsilon_{\mathrm{*}}=0.7~{}\mathrm{kpc}/h$ for
stellar particles.
We choose a lower stellar mass limit of $M_{\mathrm{*}}\geq 2\times
10^{10}M_{\odot}$ to ensure sufficient resolution for radial density profile
fits, and additionally limit the sample to central galaxies to ensure a proper
treatment of the in-situ/ex-situ classification. The highest mass galaxies in
this simulation are $M_{\mathrm{*}}\sim 10^{12}M_{\odot}$. With these
restrictions, we select 511 galaxies, which include 4 galaxies that are
brightest cluster galaxies (BCGs), and 43 galaxies which are brightest group
galaxies.
### 2.2 Definition of Accreted and In-situ Stars
We are interested in the accreted (ex-situ) and the in-situ components of the
galaxies. Therefore, we have to trace all stars that are part of a galaxy at
$z=0$ back to their formation redshift. If the star is born inside the main-
branch progenitor of the galaxy, it is considered to be formed “in-situ”. If
the star was born outside the virial radius of the main-branch progenitor of
the galaxy, and only later in its life accreted onto that galaxy, then it is
considered to be “accreted” independent of whether it was accreted smoothly or
as part of another galaxy. If the star particle is born inside the virial
radius of the main-branch progenitor but in the wake of a gas-rich merger, we
still consider the star to be formed “in-situ”, as otherwise all stars would
be accreted since ultimately all gas has been accreted onto the galaxy. These
stars have been handled differently in the literature, however, for the sake
of a clean classification with respect to the accreted fraction we use the
classification described above. This gives us the smallest possible fraction
of accreted stars, and the largest possible fraction of in-situ formed stars.
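The tagging rule above can be sketched as follows; the function name and inputs are hypothetical, assuming one has already traced each star back to its formation time and knows its distance from the main-branch progenitor's centre and the progenitor's virial radius at that time:

```python
def star_origin(birth_radius, r_vir_at_birth):
    """Tag a star particle of a z=0 galaxy as in-situ or accreted.

    Sketch of the rule in Sect. 2.2: a star born inside the virial
    radius of the main-branch progenitor at its formation time counts
    as in-situ (even in the wake of a gas-rich merger); a star born
    outside and accreted later counts as accreted, regardless of
    whether it arrived smoothly or as part of another galaxy.
    Both arguments are in the same length unit (e.g. kpc).
    """
    return "in-situ" if birth_radius <= r_vir_at_birth else "accreted"
```

This convention deliberately yields the smallest possible accreted fraction, as stated in the text.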
Figure 1: Examples for three of the six different in-situ/accreted profile
classes, from left to right: class A (extremely accretion dominated), class B
(accretion dominated), and class C (classic). The vertical black dashed lines
in the upper panels indicate the half mass radius. Upper panels: Radial
stellar density profiles for all stars (black lines), in-situ formed stars
(red lines) and accreted stars (blue lines). Middle panels: Relative mass
fractions of the in-situ (red) and accreted (blue) subcomponents. Bottom
panels: The assembly history of the stellar mass of the example galaxies.
Dashed red lines show major mergers (mass ratios of 1:1 to 3:1), green lines
show minor mergers (mass ratios of 3:1 to 10:1), and blue lines show mini
mergers (mass ratios below 10:1).
Figure 2: Same as Fig. 1 but for the other three profile classes, from left to
right: class D (double cross-over), class E (balanced), and class F (in-situ
dominated).
### 2.3 Galaxy Classification
The sample of 511 Magneticum galaxies includes all galaxy types. However, in
the second part of this study, we restrict our investigation to spheroidal (or
early-type) galaxies. They are selected using the $b$-value
$b=\log_{10}\left(\frac{j_{*}}{\mathrm{kpc}~{}\mathrm{km/s}}\right)-\frac{2}{3}\log_{10}\left(\frac{M_{*}}{\mathrm{M_{\odot}}}\right),$
which effectively gives a galaxy's position in the $M_{*}$–$j_{*}$ plane, as
discussed by Teklu et al. (2015). At $z=0$, galaxies with a $b$-value of
$b\leq-4.73$ are classified as spheroidals, while galaxies with $b\geq-4.35$
are classified as disks (Teklu et al., 2017). Galaxies with $b$-values in
between these limits have intermediate properties, that is they include S0
galaxies and disk galaxies with large bulges, but also a small number of
ongoing-merger and interacting galaxies. On this basis our sample includes 154
spheroidal, 105 disk, and 252 intermediate galaxies. This is the same
classification that has been used by Teklu et al. (2017), Schulze et al.
(2018), and Schulze et al. (2020).
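A minimal sketch of this classification (function names are our own; $j_*$ is assumed to be given in kpc km/s and $M_*$ in $M_{\odot}$, with the $z=0$ thresholds quoted above):

```python
import math

def b_value(j_star, m_star):
    """b-value of Teklu et al. (2015):
    b = log10(j* / (kpc km/s)) - (2/3) log10(M* / Msun)."""
    return math.log10(j_star) - (2.0 / 3.0) * math.log10(m_star)

def morphology(b):
    """z=0 thresholds from Teklu et al. (2017): b <= -4.73 spheroidal,
    b >= -4.35 disk, in between intermediate (S0s, large-bulge disks,
    and a few ongoing mergers)."""
    if b <= -4.73:
        return "spheroidal"
    if b >= -4.35:
        return "disk"
    return "intermediate"
```

For example, a $10^{11}\,M_{\odot}$ galaxy with $j_*=100$ kpc km/s has $b\approx-5.33$ and would be classified as a spheroidal.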
## 3 Accreted and In-situ Formed Stars in Magneticum Galaxies
### 3.1 Radial Stellar Mass Density Profiles
We calculate the radial stellar mass density profiles for the Magneticum
galaxies using equal particle bins with at least 200 particles per bin, in
spherical shells around the galaxy center. For the in-situ and the accreted
components, the same radial bins are used as for the total profile to ensure a
direct radial comparability of the two components. To account for the
softening, we exclude the inner 1.4 kpc, but we do not use an outer radial
limit.
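A minimal sketch of such an equal-particle-number binning (names are hypothetical; for simplicity, trailing particles that do not fill a complete bin are dropped here):

```python
import math

def density_profile(radii, masses, n_per_bin=200, r_min=1.4):
    """Radial 3D stellar mass density profile in spherical shells with
    n_per_bin particles per bin, excluding r < r_min (the softening,
    in kpc), as in Sect. 3.1. Returns lists of mid-bin radii [kpc]
    and densities [Msun / kpc^3]."""
    # sort particles by radius, excluding the softened centre
    pairs = sorted((r, m) for r, m in zip(radii, masses) if r >= r_min)
    mid_r, rho = [], []
    for i in range(0, len(pairs) - n_per_bin + 1, n_per_bin):
        chunk = pairs[i:i + n_per_bin]
        r_in, r_out = chunk[0][0], chunk[-1][0]
        vol = 4.0 / 3.0 * math.pi * (r_out**3 - r_in**3)
        mid_r.append(0.5 * (r_in + r_out))
        rho.append(sum(m for _, m in chunk) / vol)
    return mid_r, rho
```

The same bin edges would then be reused for the in-situ and accreted subsets, so that the two components remain directly comparable radius by radius.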
Figure 3: Random 2D projected views of the example Magneticum galaxies for
each of the six profile classes. For each galaxy, a box with a length of 200
kpc centred on the galaxy is shown, with the left plot showing the intensity
map derived from all stars in the galaxy and the right plot showing the origin
of the stars colour coded according to the in-situ/accreted fraction (with
blue colours showing 100% in-situ fractions and red colours showing 100%
accreted fraction). From left to right, top to bottom, the shown galaxies are
from class A (upper left), class B (upper right), class C (central left),
class D (central right), class E (bottom left), and class F (bottom right). As
can be clearly seen, the amount of in-situ stars increases from the upper left
class (A) to the lower right class (F).
Inspecting the 3D radial stellar density profiles (in
$M_{\odot}/\mathrm{kpc}^{3}$) of the Magneticum galaxies, we find six
different classes based on their in-situ/accreted behaviour. Examples for each
class are shown in Figs. 1, 2, and 3, and described in the
following:
* •
Class A: Extremely accretion dominated profiles. For these galaxies, the
accreted stellar component is always dominant, even in the inner regions (see
left panels of Fig. 1 and upper left panels of Fig. 3). About 7% of all
galaxies show this kind of behaviour (see Tab. 1). Such galaxies have no clear
transition radius between in-situ and accreted stellar mass.
* •
Class B: Accretion dominated profiles. For these galaxies, the fraction of in-
situ and accreted stars near the galaxy center is equal, but for all larger
radii the accreted fraction dominates (see middle panels of Fig. 1 and upper
right panels of Fig. 3). This is a rare class, with only about 2% of all
Magneticum galaxies in this class (see Tab. 1). It could also be interpreted
as an extreme case of class A, but we here study it as a separate class. The
transition radius for these galaxies is very small, and is not a real
transition in all cases: the in-situ component does not necessarily dominate
in the center, where the in-situ and accreted amounts are sometimes simply
equal.
* •
Class C: Classic profiles. The inner regions of these galaxies are dominated
by in-situ formed stars, while in the outskirts the accreted stellar component
is dominant (see right panels of Fig. 1 and central left panels of Fig. 3).
This is by far the most common class of profiles, with 72% of all Magneticum
galaxies showing this behaviour (see Tab. 1). This is also the behaviour found
most commonly in previous work for example by Cooper et al. (2010); Rodriguez-
Gomez et al. (2016); Pulsoni et al. (2020). These galaxies have a clear
transition radius from in-situ to accretion dominated.
* •
Class D: Double cross-over profiles. These galaxies have a large accreted
fraction dominating in the inner and the outer regions, with their
intermediate-radii regions dominated by in-situ formed stars (see left panels
of Fig. 2 and central right panels of Fig. 3). 13% of all Magneticum galaxies
fall in this category (see Tab. 1), making this the second most common profile
type. Given its nature, these profiles have two transition radii.
* •
Class E: Balanced profiles. A small fraction ($\approx 4\%$, see Tab. 1) of
all Magneticum galaxies reveal profiles for which the in-situ and accreted
contributions are nearly equal over a large radial range (see middle panels of
Fig. 2 and bottom left panels of Fig. 3). For these profiles, we usually find
a transition radius at very large radii, however, even if the outer parts are
slightly dominated by accreted stars, the fraction of accreted stars usually
stays below 60%.
* •
Class F: In-situ dominated profiles. Galaxies in this class have radial
density profiles that are always dominated by in-situ formed stars at all
radii, even at their outskirts (see right panels of Fig. 2 and bottom right
panels of Fig. 3). Only 2.7% of all Magneticum galaxies show this behaviour,
with the in-situ fraction always larger than the accreted fraction (Tab. 1).
As for class A galaxies, there is no transition radius for these galaxies.
Class | All (N) | All (%) | Spheroidals (N) | Spheroidals (%) | Disks (N) | Disks (%)
---|---|---|---|---|---|---
Class A | 36 | 7.1 | 15 | 9.7 | 1 | 0.95
Class B | 9 | 1.8 | 6 | 3.9 | 1 | 0.95
Class C | 367 | 71.8 | 100 | 64.9 | 78 | 74.3
Class D | 66 | 12.9 | 24 | 15.6 | 18 | 17.1
Class E | 19 | 3.7 | 7 | 4.6 | 2 | 1.9
Class F | 14 | 2.7 | 2 | 1.3 | 5 | 4.8
Table 1: Numbers and percentages for the six different profile classes for all
511 galaxies from the Magneticum simulation used in this work, and for those
galaxies that are classified as spheroidals (154 galaxies) or disks (105
galaxies).
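The six classes can be illustrated with a simplified, automated version of the classification based only on the radial in-situ mass fraction; this is a sketch of the scheme described above, not the procedure used to build Tab. 1, and the tolerance band defining "balanced" profiles is our own choice:

```python
def profile_class(insitu_frac, tol=0.1):
    """Assign one of the six profile classes of Sect. 3.1 from the
    in-situ mass fraction per radial bin, ordered inside-out.
    `tol` sets the band around 0.5 counted as 'balanced' (class E);
    its value is an assumption of this sketch."""
    f = list(insitu_frac)
    if all(abs(x - 0.5) < tol for x in f):
        return "E"  # balanced: near-equal contributions at all radii
    if all(x < 0.5 for x in f):
        return "A"  # extremely accretion dominated, no transition radius
    if all(x > 0.5 for x in f):
        return "F"  # in-situ dominated at all radii, no transition radius
    dom = [x > 0.5 for x in f]  # True where in-situ dominates
    crossings = sum(a != b for a, b in zip(dom, dom[1:]))
    if crossings >= 2 and not dom[0]:
        return "D"  # double cross-over: accreted / in-situ / accreted
    if dom[0] and not dom[-1]:
        return "C"  # classic: in-situ core, accreted outskirts
    return "B"      # near-equal centre, accretion dominated beyond
```

For instance, a profile with in-situ fractions of 0.8, 0.6, 0.3 from the centre outwards would fall into the classic class C, with the transition radius in the outermost bin.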
Recently, a similar analysis of radial in-situ and accreted profiles has been
presented by Pulsoni et al. (2020) using the Illustris-TNG simulations.
In contrast to our six classes of 3D mass density profiles, they reported four
classes based on 2D mass surface density profiles: Their class 1 (20% of the
sample) galaxies are in-situ dominated at all radii, equivalent to our class F
galaxies, albeit our class F only covers 2.7% of the galaxies. Class 2 is
their most common profile (57%), and is equivalent to our class C, albeit we
have more galaxies of class C (72%). This is also the kind of in-situ/accreted
profiles that have solely been reported for Illustris galaxies (Rodriguez-
Gomez et al., 2016). Their class 3 (15%) is closest to our double cross-over
profiles (class D), and with 13% of our galaxies being class D the numbers are
very similar between the two simulations. Their class 4 galaxies are accretion
dominated and their least common profile at 8%. In our work such profiles were
divided into class A (extremely accretion dominated) and B (accretion
dominated), representing a total of about 9% of galaxies, again in good
agreement with each other. We also classified $\sim 4\%$ of our galaxies to
be ‘balanced’ with similar in-situ and accreted components, a class that does
not appear in the classifications by Pulsoni et al. (2020). Bearing in mind
that the relative proportions of each profile class will depend on the range
of galaxy types and masses in each simulation, the main differences are that
we find more classic profiles (72% vs 57%) and fewer in-situ dominated
profiles (2.7% vs 20%), clearly highlighting the intrinsic differences between
the simulations.
### 3.2 Assembly History
One of the first steps to understand the origin of the different profile
classes is to test whether they correlate with the stellar mass of the galaxy.
In the left panel of Fig. 4 we show the normalised distribution of the
different profile classes as a function of stellar mass. We find a clear trend
that galaxies of classes E and F, where the in-situ component is 50% or
larger, are always galaxies with relatively low stellar masses, while galaxies
of classes A and B, which are everywhere accretion dominated, usually reside
at the high mass end. This mass trend is expected, as more massive galaxies
have generally accreted more mass than low mass galaxies. Similar trends were
seen by Pulsoni et al. (2020) in their study. Interestingly, the two most
common classes of galaxies, namely classes C and D, are most likely to be
found in the middle mass range of our galaxy sample, with the galaxies having
stellar masses of $10.5<\log(M_{*})<11$. This indicates that it is not the
frequency of mergers, but the type of merger that is crucial in establishing
the differences.
Figure 4: Profile class dependence on stellar mass and merger frequency. Left
panel: Histogram of the stellar mass distribution, colour coded by profile
class. Low mass galaxies are dominated by classes C, E and F, while classes A
and B are common for high mass galaxies. Right panels: Histogram showing the
frequency of major (red), minor (green), and mini (blue) mergers since $z=2$
for each class. Major mergers are least common for galaxies of class C, where
nearly 60% of the galaxies have had no major merger since $z=2$.
Figure 5: Mass
fraction added via major, minor, mini mergers and all mergers together since
$z=2$ for the six different profile classes. Classes A/B/C/D/E/F are given as
red/orange/blue/cyan/yellow/green filled circles, respectively. Top panel:
Fraction of stellar mass accreted through mergers of different types. Bottom
panel: Fraction of gas mass accreted through mergers of different types. Gas
mass includes hot and cold gas. For the most massive galaxies a significant
fraction of the gas is in a hot (and thus not star forming) phase.
To understand this in more detail, we study the assembly history of all
galaxies with respect to their main formation branch from $z=2$ to $z=0$. We
distinguish three different kind of mergers:
* •
Major mergers with mass ratios of 1:1 to 3:1.
* •
Minor mergers with mass ratios of 3:1 to 10:1.
* •
Mini mergers with mass ratios below 10:1.
For the mini mergers, there is no lower mass limit in general; however, due to
the resolution limit of the simulation, the smallest mergers that we can
resolve here have mass ratios on the order of 100:1. Below this limit, we do
not count accretion events as mergers, even in the few cases where they would
still be resolvable, to keep the merger counts consistent.
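The mass-ratio scheme above can be sketched as follows (the helper is hypothetical; the 100:1 cut mirrors the resolution limit just described):

```python
def merger_type(m_primary, m_secondary):
    """Classify a merger by mass ratio, following Sect. 3.2:
    major 1:1 to 3:1, minor 3:1 to 10:1, mini below 10:1.
    Events below the ~100:1 resolution limit are not counted."""
    ratio = max(m_primary, m_secondary) / min(m_primary, m_secondary)
    if ratio > 100.0:
        return None  # below the resolution limit: not counted as a merger
    if ratio <= 3.0:
        return "major"
    if ratio <= 10.0:
        return "minor"
    return "mini"
```

A 2:1 encounter is thus a major merger, a 5:1 encounter a minor merger, and a 50:1 encounter a mini merger.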
Major mergers are known to have strong impacts on the main progenitor galaxy
at all radii, however, this is not in general the case for minor and mini
mergers. While minor mergers, especially in the mass range around 5:1, can
still strongly influence the mass distribution of the progenitor galaxies even
at their centres (e.g., Hilz et al., 2013; Karademir et al., 2019), mini
mergers mostly contribute to the outer halos of galaxies and only play a role
for the central evolution of the host galaxy for radial infall orbits or head-
on collisions (Karademir et al., 2019). Since we focus on radial ranges beyond
the half-mass radius, all types of mergers can play a role in establishing the
different profile classes.
Examples of the assembly history for the six classes are given in the lower
panels of Fig. 1 and Fig. 2, with the history always belonging to the galaxy
for which the radial density profiles are shown in the upper panels of the
same figures, and the intensity maps are shown in the according panels of Fig.
3. The redshifts of past merger events are marked as red/green/blue dashed
lines for major/minor/mini mergers, respectively. These six examples show that
all galaxies, but one, experience major mergers, with the galaxy that
experiences no major merger being of class C. In the case of the example
galaxies from classes A and B (the accretion dominated profile classes), both
show two major merger events since $z=2$. Interestingly, the galaxy from class
F, which is dominated by in-situ stars at all radii, experiences a major
merger at a redshift of about $z=1$.
More quantitatively, in the right panel of Fig. 4 we show for each profile
class histograms of the frequency of major (red dashed lines), minor (green
dash-dotted lines), and mini mergers (blue solid line) since redshift $z=2$
($\sim 10~{}\mathrm{Gyr}$ in look-back time). Furthermore, Fig. 5 shows the
amount of stellar mass relative to the present-day stellar mass that was
accreted through the mergers of different mass ratios and through all mergers
for each accretion class in the upper panel, while the lower panel shows the
fraction of gas relative to the present-day stellar mass accreted through
these mergers for the different accretion classes.
In general, about half of all Magneticum galaxies have experienced at least
one major merger since $z=2$. However, when considering individual profile
classes, we find large differences. Most strikingly, major mergers generally
play a significant role in the formation of galaxies from accretion classes A,
B, D, and E, while they only play a minor role in the evolution of galaxies
from accretion classes C and F. This reflects what could already be seen from
the example cases, but more statistically we find the following accretion
history patterns for our six accretion profile classes:
* •
Class A: The accretion history of this “overmerged” class of galaxies is not
surprisingly completely dominated by merger events, with a total of nearly 70%
of the present-day stellar mass being accreted (see upper panel of Fig. 5).
Surprisingly, these mergers are not necessarily dry. Especially the major
mergers contribute an average of 25% of the total stellar mass in gas, but the
sheer amount of accreted stellar mass is enough to dominate the final galaxy
at all radii. Additionally, most of these galaxies are rather massive, and as
such a larger amount of the gas will be present as a hot gas halo and not
participate in the star formation process. Galaxies of this profile class are
rather common for the spheroidals, but only one of these is found among the
disk galaxies (a very special case in which the accretion occurs along the
galaxy's disk plane, similar to what was discussed for mini mergers by
Karademir et al., 2019).
* •
Class B: Galaxies from this accretion class have a similar accretion history
to galaxies of class A, with more than 60% of their stellar mass being
accreted. The main difference here is that the mergers were all gas poor,
resulting in the lowest amount of gas accreted since $z=2$ in the whole sample
(see lower panel of Fig. 5).
* •
Class C: The “classic” profile shows a significantly different behaviour from
all the others, namely 2/3 of the galaxies in this class have never
experienced a major merger since $z=2$ (see right panel of Fig. 4). Even those
class C galaxies with major mergers only get about 10% of their mass from this
pathway, implying that the major mergers happen rather early for this class
(see upper panel of Fig. 5). In general, galaxies of this class only accrete
about 30% of their stellar mass through mergers, which is the lowest fraction
found for all classes. Furthermore, the mass accreted through the major and
minor mergers is approximately equal, and the relative contribution of the
mini mergers is rather large for galaxies of this group. This clearly shows
the importance of minor and mini mergers in the assembly history of class C
galaxies. In addition, we find that the mergers that a galaxy of class C
experiences, are usually rather dry, and contribute only about 20% of the
total stellar mass in gas (see lower panel of Fig. 5). This also explains the
dominance of the accreted material at large radii and the dominance of the in-
situ components in the center, as the minor and mini merger, especially when
dry, often do not reach the central parts of the host galaxy at all but rather
deposit their mass at large radii (Purcell et al., 2007; Amorisco, 2017;
Karademir et al., 2019).
* •
Class D: Major mergers are important for half of the galaxies in this profile
class, but those mergers are relatively dry (gas-poor). The host galaxy, on
the other hand, is relatively wet (gas-rich) at the time of merging, and
through the merger the gas is moved outwards into a ring-like structure, where
the star formation occurs. Our simulation does not have the resolution to
confirm this, but this may be one possible way to form an (old) bulge inside a
gas-rich galaxy. In addition, these galaxies are equally common for both disks
and spheroidals (see Tab. 1).
* •
Class E: The mass accretion history of galaxies from this class is dominated
by a single merger which is either a major merger (60%) or a massive minor
merger (see right panels of Fig. 4). These mergers were gas-rich, causing a
starburst after accretion, which effectively leads to this special profile
case where the in-situ and accreted radial fractions are identical over a
broad radial range. As known from classical binary merger simulations (e.g.,
Hernquist & Barnes, 1991), these mergers usually result in a spheroidal
galaxy as long as the merger is not in-plane or has a very high gas fraction
(Springel & Hernquist, 2005). This is reflected in the low fraction of disk
galaxies in this class (only 1.9%), while for the spheroidals they account for
4.6%, as seen in Tab. 1.
* •
Class F: Galaxies of class F show a similar behaviour to galaxies of class C,
as only half of them experience a major merger and only about 40% of their
stellar mass is accreted (see right panel of Fig. 4 and upper panel of Fig.
5). The major difference is the amount of gas accreted through the merger
events independent of the merger mass ratio: We find that all mergers deliver
significantly more gas than for the galaxies of class C, with a total of about
80% of the stellar mass being accreted in gas mass (see lower panel of Fig.
5), which is the highest fraction found among the different profile classes.
Since all these galaxies have stellar masses well below $10^{11}M_{\odot}$,
they do not host a large hot gas halo, and thus most of that gas is cold and
contributes to star formation, resulting in an overall in-situ dominated
radial density profile. This clearly shows that the origin of these overall-
in-situ dominated profiles is gas-rich accretion, and as such it is surprising
that two of these galaxies are actually spheroidals.
### 3.3 Accreted Mass Fractions and Galaxy Mass
Figure 6: Ex-situ (accreted) fractions for Magneticum galaxies in comparison
to other simulations. Upper left panel: Mean ex-situ fraction versus critical
halo mass $M_{\mathrm{200c}}$ for Magneticum (solid blue line), with the
$1\sigma$ scatter shown in light blue. For comparison, mean values are also
shown for Illustris-TNG100 (Pillepich et al. (2018), dash-dot-dot-dotted pink
line and shaded area), and the particle tagging models from Cooper et al.
(2013) (dash-dotted lines). Lower left panel: Same as upper panel but showing
the individual values for the Magneticum galaxies, with the colours marking
the different profile classes as indicated in the legend. The solid line (here
black instead of blue for better visibility) and blue shaded area mark the
mean and $1\sigma$ scatter for this distribution, as in the upper panel. Upper
right panel: Ex-situ fraction versus stellar mass $M_{*}$. The blue solid line
and shaded area show the mean and the $1\sigma$ scatter for the Magneticum
galaxies, as in the left panel. For comparison, we include the relations for
four other fully cosmological simulations: Illustris (Rodriguez-Gomez et al.
(2016), dashed black line and gray shade), EAGLE (Davison et al. (2020),
yellow dash-dotted line and yellow shade), Illustris-TNG (Tacchella et al.
(2019), pink dash-dot-dot-dotted line and shade), and Horizon-AGN (Dubois et
al. (2016), green dashed lines, light green without AGN, dark green with AGN).
Additionally, we include the mean values from two zoom-simulations that cover
a range of stellar masses: the SPH-based GADGET from Oser et al. (2010) as
x-circle symbols, and the AMR-based ENZO simulations from Lackner et al.
(2012) as +-circle symbols. The mean values in three mass bins from the semi-
analytic modeling by Lee & Yi (2013) are shown as black half-circles. Finally,
we also include the mean values for the zoom-simulations by Hirschmann et al.
(2015) (square for non-feedback, diamond for the feedback-runs, joined by a
vertical line). These data points were extracted from Rodriguez-Gomez et al.
(2016). We include these simulations as they nicely demonstrate the impact of
the stellar feedback on the simulations. Lower right panel: Same as the upper
right panel, but for the individual Magneticum galaxies with the colours
marking the different profile classes as in the lower left panel. Again, the
Magneticum mean is shown as solid black line and the blue shaded area marks
the $1\sigma$ scatter for this distribution, as in the upper panel. Magneticum
galaxies tend to have higher accretion fractions at low masses compared to
other simulations.
Overall, there is broad agreement between simulations and models of different
kinds that the fraction of accreted stars is correlated with the stellar (and
halo) mass of galaxies. However, the different simulations vary strongly in
the actual (average) values found for the accreted (or ex-situ) fractions at a
given mass. This can clearly be seen in Fig. 6 for halo mass (upper left
panel) and for stellar mass (upper right panel). Here, we compile data from
the literature for the five large hydrodynamical simulations:
* •
Magneticum from this study, shown as the blue solid line and shaded area in
both panels.
* •
Illustris-TNG, shown as the pink dash-dot-dot-dotted line and shaded area,
from Pillepich et al. (2018) for the halo mass comparison (left panel) and
from Tacchella et al. (2019) for the stellar mass comparison (right panel).
* •
EAGLE, shown as dash-dotted yellow line and shaded area, from Davison et al.
(2020), only for the stellar mass comparison (right panel).
* •
Horizon-AGN, shown as green dashed lines, from Dubois et al. (2016), only for
the stellar mass comparison (right panel) for the runs with (dark green) and
without (light green) AGN.
* •
Illustris, shown as black dotted line and gray shaded area, from Rodriguez-
Gomez et al. (2016), only for the stellar mass comparison (right panel).
As can be seen immediately, Magneticum galaxies have larger accreted fractions
at the low mass end compared to the other simulations, while at the high mass
end the accreted fractions for all the simulations converge to around 70-80%
for galaxies of stellar masses above $3\times 10^{11}M_{\odot}$ (or halo
masses above $1\times 10^{13}M_{\odot}$).
There are different reasons for these simulations to show such different
behaviour, and it is still unclear which ones are closest to reality. One
commonly discussed reason for the different results with respect to the
amount of accreted and in-situ components in galaxies is the stellar and/or
AGN feedback, which is modeled slightly differently in each simulation.
That both processes have strong effects on the resulting accretion fractions
has been reported by previous studies: Hirschmann et al. (2015) used a subset
of the galaxies presented by Oser et al. (2010) and simulated them with and
without strong stellar feedback (see open black diamond and square in the
upper right panel of Fig. 6, respectively). They found that, while the stellar
mass of the galaxies did not change much, the accreted fraction dropped
significantly, on average from about 50% to 20%, when the stellar feedback was
switched on. This is due to the fact that the feedback from the stars heats up
the gas inside the galaxies and thus suppresses star formation, especially for
the lower mass galaxies, leading to lower stellar masses for the galaxies in
general and thus smaller amounts of accreted stars. However, the merger events
still lead to starbursts and subsequent star formation, but this is then in-
situ star formation. An opposite effect was reported by Dubois et al. (2016,
see green dashed lines in Fig. 6) and Dubois et al. (2013) for the AGN
feedback. Here, the simulation runs without AGN feedback (light green) result
in lower accreted fractions than the simulation with AGN feedback (dark
green).
As all five fully hydrodynamical simulations shown here include both types of
feedback, albeit in different implementations, it is not possible to know
which of the processes is the main driver of the differences between the
simulations. Note, for example, that the values found for Magneticum are very
similar to the average values found by Oser et al. (2010) in their zoom-
simulations (x-circles), but the latter has no AGN feedback and the galaxies
found in that simulation also differ strongly from those in Magneticum in
other parameters (see e.g., Remus et al., 2017). For Magneticum, we know from
Teklu et al. (2017) that our stellar feedback is slightly too weak for the low
mass end and our AGN feedback is slightly too strong at the high mass end when
looking at the baryon convergence efficiency, which is most likely also the
reason for the difference at the low mass end between Magneticum and EAGLE and
Illustris-TNG.
Another possible reason for the different accretion fractions found in the
different simulations could be the use of AMR versus SPH codes. Those
simulations, both fully cosmological and zoom-in, that were performed with AMR
codes (namely Horizon-AGN (Dubois et al., 2016) and the simulations by Lackner
et al. (2012)) show significantly lower accreted fractions at all mass ranges
than those simulations that were performed with SPH codes (namely Magneticum,
EAGLE, and the simulations by Oser et al. (2010) and Hirschmann et al.
(2014)), independent of the included feedback. This is especially interesting
given that the studies by Lackner et al. (2012) and Oser et al. (2010) both do
not include the AGN feedback, but show the strongest differences. It is well
known that all codes have their shortcomings, and many improvements have been
implemented in recent years; to understand how much influence the choice
of the code has on the accreted fractions, however, would require a detailed
comparison study, which has not been done so far.
Note that, for the five large cosmological simulations shown here, the
definitions of in-situ and accreted largely agree in that we only count those
stars as accreted that were already born at infall, and count those stars that
were born from gas that was accreted through a merger but only formed after
the merger event from this gas as in-situ (see also Rodriguez-Gomez et al.,
2016; Tacchella et al., 2019). This means that, effectively, our accreted
fractions are lower limits, and the values could only get higher for more
elaborate definitions of in-situ and accreted; thus, this definition is not
responsible for the differences found between these simulation samples.
We also include the accreted fractions with mass obtained from two semi-
analytic models (SAMs): Cooper et al. (2013) used a particle-tagging method on
top of the SAM presented by Guo et al. (2011) based on the Millenium II
simulation (Boylan-Kolchin et al., 2009), and the resulting accretion
fractions for different halo masses are shown as dash-dotted lines in the left
panel of Fig. 6, split according to the morphology as disks (gray line) or
spheroidals (black line). The results found for their sample of spheroidal
galaxies is close to the average values found for the Magneticum galaxies in
this work, albeit our sample includes both disks and spheroidals. When we
separate our disks and spheroidals, we find a general trend for the disks to
have lower accreted fractions than spheroidals of the same mass in agreement
with Cooper et al. (2013), but the trend is much less pronounced and the
scatter is large. Lee & Yi (2013) use a SAM built on their own Gadget-2 based
dark matter only simulation, and provide accreted fractions for a range of
stellar masses (see half-circles in the upper panel of Fig. 6). Their values
lie in between those of the five fully cosmological simulations, and are not
in agreement with our Magneticum results, especially at the low-mass end.
So far, we have discussed the mean values of the accreted fractions with
stellar and halo mass for the Magneticum simulation in comparison to other
simulations; now we take a closer look at the distribution of the
individual galaxies with regard to their accretion classes A–F. They are shown
in the lower two panels of Fig. 6 in comparison to the mean value lines shown
in black. As can be seen immediately, there are strong differences between the
galaxies of the different accretion classes: The galaxies from the over-merged
classes A and B all show high accretion fractions, well above 60%, with no
real trend with mass visible. On the other hand, the in-situ dominated
galaxies of class F all have, as expected, low accretion fractions below the
mean Magneticum values, and their spread in mass is too small to see any trend
with mass for both stellar and halo mass. Similarly, galaxies of the major
merger class E also show no trend in mass, and the overall accretion fractions
are around 50%. For the other two classes, C and D, we find a clear
correlation of the accreted fraction with both stellar and halo mass, with a
tendency for the galaxies of class D to be slightly above the mean Magneticum
accretion values per mass, and for class C to be slightly below on average.
Interestingly, class C also includes the lowest accretion fractions at all
mass bins, even lower than the galaxies of the in-situ dominated class F,
clearly demonstrating that the accretion fractions can be really low if most
of the accretion is provided by dry minor and mini mergers in the outskirts of
a galaxy, while the centre is left undisturbed.
Figure 7: Left panel: Stellar mass-size relation for the Magneticum galaxies,
colour coded according to their profile class (see legend). The 3D half-mass
radius is shown against total stellar mass. For comparison, the observed mass-
size relation from the GAMA survey by Lange et al. (2015) is shown for ETGs
(dashed line) and LTGs (solid line). Note that the observations measure half-
light radii instead of half-mass radii. Right panel: In-situ fraction versus
3D half-mass radius for the Magneticum galaxies, colour coded as in the left
panel. Only class C reveals a clear correlation between in-situ fraction and
galaxy size.
### 3.4 Accreted Mass Fractions and Global Galaxy Properties
Next we examine trends between the 3D half-mass radius and dark matter
fraction with total stellar mass and in-situ fraction. It has been shown
already by Remus et al. (2017) and Schulze et al. (2018) that the Magneticum
galaxies successfully reproduce the observed stellar mass-size relations
(e.g., GAMA, by Lange et al., 2015), but here we now take a closer look at the
different profile classes in this relation (left panel of Fig. 7). The
overmerged galaxies of classes A and B show only small scatter close to the
observed relation over the whole mass range, but due to the fact that they are
by far the most common class at high stellar masses, they are also most common
among the large galaxies. The classical profile galaxies of class C, however,
show a significantly different behaviour from the galaxies of the other
classes: they are the clearly dominant class among the small galaxies at all
stellar masses. Although the scatter is large for this class and there are
also very large galaxies among them, they clearly dominate the small-size end,
especially at lower stellar masses. This reflects our previous
findings that these galaxies are dominated by compact star formation in their
centers and only little accretion mostly to the outskirts, resulting in a
rather compact central part and consequently a smaller half-mass radius. On
the other hand, we see a very different behaviour for those galaxies of
classes D, E, and F, all of which have large sizes for their stellar masses,
clearly dominating the region of the mass-size relation that is usually
occupied by disk galaxies. This is in good agreement with the fact that all of
them have large amounts of cold gas accreted through their formation history
since z=2, resulting in in-situ star formation in disks and thus larger
half-mass radii (even if, in the case of class D galaxies, the large central
accreted component will prevent their classification as disks given its
massive bulge-like nature).
As stellar mass $M_{*}$ and 3D half-mass radius $r_{1/2}$ of a galaxy are
correlated, it is not surprising that we also find a correlation between the
in-situ fraction of a galaxy and its half-mass radius (right panel of Fig. 7).
It can best be seen for the galaxies of accretion class C, as they cover the
largest range of both half-mass radii and in-situ fractions, with a clear
tendency for smaller galaxies to have larger in-situ fractions and large
galaxies to have small in-situ fractions. A similar behaviour is found for
galaxies of classes D and E, albeit class E only covers such a small range of
in-situ fractions that no clear correlation between size and in-situ fraction
can be inferred from these galaxies alone. In general, the in-situ fraction
decreases with increasing half-mass radius, i.e., accretion leads to a growth
in the scaled size (e.g., Oser et al., 2012), which indicates that most of the
accreted stars are deposited at large radii (see also Amorisco, 2017; Lagos et
al., 2018; Karademir et al., 2019; Davison et al., 2020).
Figure 8: Dark matter fraction within the half-mass radius trends. Left panel:
Dark matter fraction $f_{\mathrm{DM}}$ versus stellar mass $M_{*}$, with
colours as in the right panel. Observations for LTGs from the SPARC survey
(Tortora et al., 2019) are included as a lilac solid line and shaded area, and
observations for ETGs from the SPIDER survey (Tortora et al., 2014) are
included as an aqua solid line and shaded area. Right panel: Dark matter
fraction $f_{\mathrm{DM}}$ versus in-situ fraction $f_{\mathrm{in-situ}}$ for
the Magneticum galaxies, with colours indicating the different accretion
classes.
For galaxies of the overmerged classes A and B we find a similarly large range
of half-mass radii, but only a small range of in-situ fractions around 20%,
and they reveal no correlation at all between size and in-situ fraction. This
well reflects the known fact that a major merger results in a much more
compact galaxy than a series of minor mergers that bring in the same total
mass as the major merger but deposit their masses at different radii (Naab et
al., 2009; Hilz et al., 2012). So while all galaxies of these two classes had
plenty of mergers, we find the differences in the individual merger mass
ratios mirrored in the size distribution. Galaxies of the in-situ dominated
class F show the strongest deviation from the correlation between size and in-
situ fraction: while all the in-situ fractions are rather high, the sizes are
generally larger than those of the class C galaxies of similar in-situ
fraction, in agreement with our previous finding that class F galaxies are
more similar to disks than the average class C galaxy.
Finally, we investigate if the fraction of dark matter within the half-mass
radius, $f_{\mathrm{DM}}$, is correlated with the in-situ fraction and stellar
mass. As can be seen in Fig. 8, there is a broad tendency for galaxies with
smaller in-situ fractions to have larger central dark matter fractions,
indicating that (massive) accretion events lead to larger fractions of dark
matter in the center by either enhancing the relative amount of dark matter in
the center or dispersing the baryonic matter. This tendency can be seen for
galaxies of all accretion classes but those of class F, the in-situ dominated
class. Galaxies of that class show much higher central dark matter fractions
than galaxies of class C with similar in-situ fractions. This is in good
agreement with our previous conclusion that class F galaxies closely resemble
the typical behaviour of disk galaxies, since observations show that LTGs
have, at the same stellar mass, larger central dark matter fractions than ETGs
(Tortora et al., 2019, but also e.g., Courteau & Dutton (2015); Genzel et al.
(2020), albeit they use velocity dispersions instead of stellar masses and
different definitions of central radius). This can also be seen in the left
panel of Fig. 8 where we included the observational results for LTGs and ETGs
from Tortora et al. (2019) and Tortora et al. (2014), respectively. As can
immediately be seen, most of the class D and F galaxies clearly resemble the
properties of the LTGs, while the clear observed correlation between
$f_{\mathrm{DM}}$ and $M_{*}$ for ETGs is most strongly populated by galaxies
of the classical accretion profile class C, in good agreement with the idea
that dry merging lowers the central dark matter fractions while wet merging
and smooth gas accretion lead to larger central dark matter fractions.
However, the details of the interactions between the baryons and the dark
matter in the centers of galaxies and the influence of gas and feedback on
this interaction are currently under debate and are beyond the scope of this
work.
### 3.5 Accreted Mass Fractions and Transition Radii
Figure 9: Transition radius trends for all profile classes as indicated in the
label. Classes A and F have no transition radii and are therefore not shown.
Class D galaxies (cyan diamonds) have two transition radii, so the outer ones
are shown as filled diamonds and the inner ones are shown as open diamonds.
Upper left panel: Stellar mass $M_{*}$ versus 3D transition radius
$r_{\mathrm{trans}}$ in kpc. Upper right panel: In-situ fraction
$f_{\mathrm{in-situ}}$ versus transition radius $r_{\mathrm{trans}}$ in kpc.
Lower left panel: Stellar mass $M_{*}$ versus 3D transition radius
$r_{\mathrm{trans}}$ normalised by 3D half-mass radius $r_{\mathrm{1/2}}$.
Lower right panel: In-situ fraction $f_{\mathrm{in-situ}}$ versus normalised
transition radius. The horizontal line in both lower panels represents a
normalised transition radius of 1 half-mass radius, i.e.,
$r_{\mathrm{trans}}/r_{\mathrm{1/2}}=1$. Dashed black lines and shaded areas
show the results from the Illustris simulation (Rodriguez-Gomez et al., 2016),
and the dash-dot-dot-dotted pink line and shaded area are the results found
for Illustris-TNG (Pulsoni et al., 2020). The horizontal dotted line marks
where the transition radius equals the half-mass radius.
As discussed before, for most galaxies there exists a radius at which the
contribution from in-situ and accreted stars is 50% each, that is, at which the
dominance of the two components switches. We call this radius the transition
radius $r_{\mathrm{trans}}$. For our classic profile (class C), this is the
radius where the dominant stellar component switches from in-situ in the
center to accreted in the outskirts, and thus separates the inner, self-made
part of the galaxy from the outer, dry-merger dominated part.
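Operationally, given radial profiles of the in-situ and accreted components, this radius can be located as the first crossover of the two profiles. A minimal sketch follows; the binning and linear interpolation are our own illustrative choices, not necessarily those used for the Magneticum profiles:

```python
import numpy as np

def transition_radius(r, m_insitu, m_accreted):
    """Return the radius where the accreted stellar component first
    overtakes the in-situ one (the 'transition radius'), or None if
    one component dominates at all radii (class A- or F-like profiles).

    r           -- radial bin centres (e.g. in kpc), increasing
    m_insitu    -- in-situ stellar mass (or density) per bin
    m_accreted  -- accreted stellar mass (or density) per bin
    """
    r = np.asarray(r, dtype=float)
    diff = np.asarray(m_insitu, dtype=float) - np.asarray(m_accreted, dtype=float)
    crossings = np.where(np.diff(np.sign(diff)) != 0)[0]
    if crossings.size == 0:
        return None
    i = crossings[0]
    # linear interpolation between the two bins bracketing the crossover
    f = diff[i] / (diff[i] - diff[i + 1])
    return r[i] + f * (r[i + 1] - r[i])
```

For a class D profile, which has two crossovers, one would keep all entries of `crossings` rather than only the first.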
Previous works by Cooper et al. (2013) and Rodriguez-Gomez et al. (2016)
already reported this radius to be smaller for larger stellar masses and
smaller in-situ fractions, and we can confirm these general trends for our
class C galaxies as shown in Fig. 9. However, we do not find a tight
correlation between the transition radius and stellar mass, and only a weak
correlation is seen between transition radius and in-situ fraction (see upper
panels of Fig. 9), with a large scatter. Only when moving to the normalised
transition radius (i.e., the transition radius divided by the half-mass
radius) do the trends become clearer: we see a clear positive
correlation between normalised transition radius and in-situ fraction, very
similar to the correlation found by Rodriguez-Gomez et al. (2016) but slightly
less steep (see lower panels of Fig. 9). We also find a clear negative trend
between the normalised transition radius and the stellar mass, with galaxies
that have accreted a lot of material (i.e., high-mass galaxies) tending to have
normalised transition radii of $r_{\mathrm{trans}}/r_{\mathrm{1/2}}\leq 1$.
However, this trend is more an upper limit on the normalised transition radius
at a given mass, as we also find low-mass galaxies with normalised transition radii
$r_{\mathrm{trans}}/r_{\mathrm{1/2}}\leq 1$, but basically no high-mass
galaxies with $r_{\mathrm{trans}}/r_{\mathrm{1/2}}>1$. The trend found for the
Magneticum galaxies is weaker than what has been found by Rodriguez-Gomez et
al. (2016), and much weaker than the trend reported for the Illustris-TNG
galaxies by Pulsoni et al. (2020). This reflects the result found already in
Fig. 6, namely that the galaxies in Magneticum have, on average, accreted more
dry stellar mass through mergers than galaxies from Illustris or
Illustris-TNG.
So far, we only discussed those galaxies of profile class C as most previous
works only discussed this profile class with no mention of other profile
classes. For classes A and F, we cannot provide a transition radius as these
galaxies are always dominated by accreted or in-situ stars, respectively, but
for the profile classes B, D, and E such transition radii exist: Galaxies of
class B usually have a very small transition radius of only about
$2~{}\mathrm{kpc}$, close to the limits of our spatial resolution (and hence
may be somewhat smaller than indicated). We do not find any trend, positive or
negative, with stellar mass, and only a weak positive correlation between in-
situ fraction and normalised transition radius. This is not surprising as this
class is very close to being overmerged like class A, and thus we do not
expect the transition radius to have any relevant meaning.
In the case of class D, where the center and the outskirts are dominated by
accreted stars but the middle radial range is dominated by in-situ stars, even
two transition radii exist. For the outer transition radii of class D (filled
cyan symbols in Fig. 9) and class E we find the trend with in-situ fraction to
be very similar, but generally steeper than the correlation seen for class C.
This trend is even clearer for the normalised transition radii.
classes we also find a tighter anti-correlation between normalised transition
radius and stellar mass, albeit the scatter is still large.
The inner transition radii for class D galaxies (open cyan diamonds in Fig. 9)
are usually comparably small and often well below the half-mass radius.
Generally, the inner transition radii of class D behave significantly
differently from all other transition radii: there is no trend of stellar mass
with either the transition radius or the normalised transition radius, and
there is actually a negative trend with in-situ
fraction, clearly showing that the larger the fraction of stars formed in-
situ, the smaller the accreted core in the centre, indicating that more stars
are formed in-situ if the mass accreted onto the centre was small compared to
the gas disk of the progenitor galaxy.
As these transition radii are very indicative of the accretion history of the
galaxies and may provide a method to estimate the in-situ fraction of a
galaxy, it would be very instructive to be able to measure this transition
radius observationally. Therefore, in the next section of this paper we
address the question of whether it is possible to measure the radius of the
transition from in-situ to accretion dominance from the observed surface
brightness profiles of galaxies, as suggested by Cooper et al. (2013) and
Rodriguez-Gomez et al. (2016).
## 4 Accreted Fractions from Sérsic Fits for ETGs
Figure 10: Upper panels: Example of a class C mass density profile in 3D and
2D projections. Top left panel: 3D total stellar mass density profile (black
curve) with the in-situ and accreted components in red and blue, respectively.
Top right panels: Projected 2D stellar mass density profiles from three
different projections: The projected total stellar profile is shown as black
curve, the in-situ and accreted components are shown as solid red and blue
lines, respectively. The dashed lines show the inner (red) and the outer
(blue) fits from the double Sérsic fits, and the single Sérsic fits (green) to
the total projected stellar density profiles. In the upper right of each panel
we list the transition radius in 3D and the crossing radii in 2D, in units of
kpc. In this example, the single Sérsic fit is never a good fit to any of the
projections. The double Sérsic fits describe the total profile very well in
all projections, and are also a good approximation to the in-situ and accreted
profiles in all cases. However, the crossing radii $R_{\mathrm{cross}}$ (i.e.,
the radius where the two Sérsic profiles cross) vary on the order of 1 kpc
between the three projections, and are in all three cases only about half as
large as the real 3D transition radius $r_{\mathrm{trans}}$. Lower panels:
Same as upper panels but for a class A profile. Class A galaxies are extremely
accretion dominated and have no transition radius between in-situ and accreted
components in their 3D or 2D stellar density profiles, as can be seen from the
solid red and blue curves in all four panels. However, a single Sérsic fit is
not a good fit to any of the projections and the double Sérsic fit is clearly
needed in all three projections to describe the total stellar profiles.
Thus, we obtain crossing radii $R_{\mathrm{cross}}$ from these double Sérsic
components, which again vary between the three projections, but in no case are
they representative of the true in-situ or accreted components.
Motivated by previous simulation results from Cooper et al. (2013) and
Rodriguez-Gomez et al. (2016), there have been observational attempts to
measure the in-situ and accreted fractions of galaxies using a double Sérsic
fit (Sérsic, 1963) to the observed surface brightness profiles of galaxies.
This assumes that the inner Sérsic fit describes the in-situ component of the
galaxy, and the outer Sérsic fit describes the accreted component. In some
cases, a third fit to the very outskirts of a galaxy was carried out, under
the assumption that the third component describes the stellar halo of the
galaxy and not the galaxy itself (e.g., Spavone et al., 2017), but we will not
investigate this approach here. Instead, in this section we use our simulated
galaxies to investigate whether the double-Sérsic approach really supplies a
good measure for the in-situ and accreted components of galaxies. As the
observational work has so far focused mostly on early-type galaxies (ETGs), we
restrict our sample to Magneticum ETGs only.
Figure 11: Left panel: Crossing radius $R_{\mathrm{cross}}$ obtained from
double Sérsic fits to the 2D projected mass density profiles (random
projection) versus the 3D transition radius $r_{\mathrm{trans}}$ between in-
situ and accreted components, for all profile classes (colours as indicated in
the right panel) with well defined transition radii. Right panel: Fraction of
integrated mass from the outer Sérsic function fit to the 2D mass density
profiles versus the true accreted mass fraction. In both panels, the dashed
line shows a 1:1 relation. Error bars indicate the maximum and minimum values
obtained for edge-on and face-on projections.
### 4.1 Sérsic Fits to Projected Surface Density Profiles of Simulated ETGs
To compare the simulated mass density profiles with observed surface
brightness profiles, we need to create 2D projections of the simulated
galaxies. As this projection is rather arbitrary, we choose a random
projection along with the face-on and edge-on projections to test for each
galaxy in our sample. In all cases, we find that the profile class of our
galaxies does not change, and the in-situ to accreted relations stay the same
under all projections. Thus, for all galaxies that have a transition radius
($r_{\mathrm{trans}}$) in 3D, we also find a radius in the 2D projections
which indicates a transition from in-situ to accretion dominance.
To follow the observational approach, we fit the projected surface density
profiles with both single and double Sérsic fits. In most cases, a double
Sérsic fit is a better fit to the projected surface density profiles,
independent of the projection. In those cases where a single Sérsic fit is
sufficient, this is true for all tested projections. This is rather promising
as this clearly indicates that, if a double Sérsic fit is needed to describe
the observed surface brightness profile, then it is independent of the viewing
angle and reflects the underlying 3D density distribution. For the double
Sérsic fits to the 2D surface density profiles, we define the crossing radius,
$R_{\mathrm{cross}}$, as the radius where the inner and outer Sérsic profiles
cross each other.
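The crossing radius can be sketched concretely: each component follows the Sérsic form $\Sigma(R)=\Sigma_e\exp\\{-b_n[(R/R_e)^{1/n}-1]\\}$, and $R_{\mathrm{cross}}$ is the root of the (log-)ratio of the two fitted components. The sketch below uses the standard Ciotti & Bertin (1999) approximation for $b_n$; the fit parameters in any call are purely illustrative:

```python
import numpy as np
from scipy.optimize import brentq

def b_n(n):
    # Ciotti & Bertin (1999) approximation to the Sersic normalisation
    return 2.0 * n - 1.0 / 3.0 + 4.0 / (405.0 * n)

def sersic(R, Sigma_e, R_e, n):
    """Sersic surface density profile: Sigma_e at the effective radius R_e."""
    return Sigma_e * np.exp(-b_n(n) * ((R / R_e) ** (1.0 / n) - 1.0))

def crossing_radius(p_inner, p_outer, R_lo=0.1, R_hi=200.0):
    """R_cross: radius where the inner and outer Sersic components are equal.
    p_inner, p_outer are (Sigma_e, R_e, n) tuples; radii e.g. in kpc."""
    log_ratio = lambda R: np.log(sersic(R, *p_inner) / sersic(R, *p_outer))
    return brentq(log_ratio, R_lo, R_hi)
```

Note that `brentq` requires the two components to actually cross within `[R_lo, R_hi]`; in practice the bracket would be set from the radial range of the fit.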
The upper row of Fig. 10 shows an example of a class C profile galaxy with its
well defined transition radius in 3D (here
$r_{\mathrm{trans}}=10.52~{}\mathrm{kpc}$, left panel). A transition is also
seen between the in-situ and accretion dominated regions of the galaxy in all
three 2D projections (upper right panels), however, the values of the
transition radii in the 2D projections are all smaller than the 3D transition
radius $r_{\mathrm{trans}}$. We find that this is not simply a matter of
unlucky projections but is rather a common feature of class C profiles (which
make up the majority of profiles). This disconnect between the transition
radii seen in 3D and the 2D profiles also occurs in all other classes with
well-defined transition radii, namely classes B, D, and E.
While the transition radii are already disconnected from 3D to 2D, the matter
is even worse if we use the double Sérsic fits to describe the underlying in-
situ and accreted components: In a few cases like the one shown in the upper
panels of Fig. 10, the two Sérsic components are a good approximation of the
in-situ and accreted components, and the resulting crossing radius between the
two Sérsic components, $R_{\mathrm{cross}}$, is a good approximation to the 2D
transition radius. However, for most galaxies this is not the case. One
example of a galaxy that demonstrates the issue nicely is shown in the lower
panels of Fig. 10: This galaxy is of class A, i.e., is accretion dominated at
all radii and has no transition radius from in-situ to accretion dominated,
neither in 3D (left lower panel) nor in projection (three panels on the lower
right). However, the stellar 2D surface density profiles in this example,
under all projections, clearly require a double Sérsic fit, thus providing a
crossing radius $R_{\mathrm{cross}}$. The two resulting Sérsic components in
this case do not describe the underlying in-situ and accreted components. They
instead mark the radius where accretion due to massive mergers transitions
into accretion from small mergers and mini mergers that never reach the centre
of the galaxy.
To further quantify this issue, the left panel of Fig. 11 shows the
differences between the crossing radii of the double Sérsic fits
$R_{\mathrm{cross}}$ for the random projection (with error bars marking the
values for the edge-on and face-on projections), and the true 3D transition
radius $r_{\mathrm{trans}}$ between the in-situ and accreted components for
all galaxies where such a transition radius is well defined (classes B, C, D,
and E). The plot is largely a scatter diagram with little, or no,
correspondence between the two measured radii, independent of the profile
class. This is also true for the projected 2D transition radii, but as the
behaviour is nearly identical we do not show this plot here.
For galaxies of classes D and E, the 2D crossing radii are much smaller than
the real transition radii, while for galaxies of class B we find the opposite
trend (with two of them having such large crossing radii $R_{\mathrm{cross}}$
that they are well above the plotted radius range). Galaxies of class C show
both kinds of behaviour, with $R_{\mathrm{cross}}$ both lower and higher than
the true $r_{\mathrm{trans}}$. Independent of the profile class, we conclude
that it is a lucky coincidence if the crossing radius $R_{\mathrm{cross}}$ of
the double Sérsic fits is a good approximation to the transition radius
$r_{\mathrm{trans}}$. In summary, the transition from an inner to an outer
Sérsic fit to an observed surface brightness profile bears little, or no,
connection to the true transition between the in-situ and accreted components
in a galaxy.
We also measure the integrated mass within the outer Sérsic component and
compare it to the true accreted mass fraction. This is shown in the right
panel of Fig. 11 for all galaxies where a double Sérsic fit was a good fit,
even those of profile classes A and F that are dominated by accreted or in-
situ stars at all radii, respectively. This plot reveals a large scatter with
a very weak trend about the unity line, indicating that an outer (inner)
Sérsic component fit to a surface brightness profile is a poor guide to the
true accreted (in-situ) mass fraction.
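The component masses in this comparison follow from integrating the fitted profiles, $M = 2\pi\int\Sigma(R)\,R\,\mathrm{d}R$. A minimal sketch of such an integration and of the resulting accreted-fraction proxy is given below; the fit parameters are purely illustrative, not values from the paper:

```python
import numpy as np
from scipy.integrate import quad

def sersic(R, Sigma_e, R_e, n):
    b = 2.0 * n - 1.0 / 3.0 + 4.0 / (405.0 * n)  # Ciotti & Bertin (1999)
    return Sigma_e * np.exp(-b * ((R / R_e) ** (1.0 / n) - 1.0))

def component_mass(Sigma_e, R_e, n):
    """Projected mass of one component: 2*pi * integral of Sigma(R) R dR.
    The upper limit 1e4*R_e is effectively infinity for any realistic n."""
    integrand = lambda R: 2.0 * np.pi * R * sersic(R, Sigma_e, R_e, n)
    return quad(integrand, 0.0, 1e4 * R_e, limit=200,
                points=[R_e, 10.0 * R_e])[0]

# accreted-fraction proxy from a hypothetical double Sersic fit:
m_inner = component_mass(100.0, 2.0, 4.0)   # inner (in-situ-like) component
m_outer = component_mass(20.0, 30.0, 1.0)   # outer (accreted-like) component
f_accreted_proxy = m_outer / (m_inner + m_outer)
```

The numerical integral can be checked against the analytic total, $M = 2\pi n\,\Gamma(2n)\,e^{b_n} b_n^{-2n}\,\Sigma_e R_e^2$.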
In summary, we find that fitting a double Sérsic profile to the 2D surface
density profile of an ETG will not reveal the true radius for the transition
from in-situ to accretion dominated material. This suggests that the dips seen
in observed surface brightness profiles cannot, in general, be taken as a
signature of a division between in-situ and accreted components of the galaxy.
They may instead be more indicative of a transition from stars being formed
in-situ plus stars accreted by major mergers, to the component of stars mostly
accreted through minor or mini mergers. Furthermore, the integrated mass
associated with the inner and outer Sérsic functions provide a poor guide to
the true in-situ and accreted mass fractions of a galaxy, respectively. We
conclude that the (two) components visible in the (projected) density profiles
do not reflect the in-situ and accreted components in general, but rather mark
the transition from the inner part of the galaxy which can be dominated by in-
situ stars but also be dominated by a massive merger event, and the outer part
of the galaxy which is dominated by small minor or mini mergers that get
disrupted in the outskirts of the galaxy and never interact with its
center (note that in the very inner parts of galaxies, additional components
can be visible in the (projected) density profiles, caused for example by bars
and bulges, but we cannot include these structures in our analysis as the
resolution of the Magneticum galaxies is not high enough to resolve these
inner structures). This is similar to the dynamical split of
the ICL and the BCG in galaxy clusters, and might be a way to distinguish
outer stellar halos of galaxies from the galaxies themselves instead.
### 4.2 Observational Comparison Samples
Some of the deepest imaging of nearby ETGs available comes from the VEGAS
survey (Capaccioli et al., 2015). The survey probes surface brightness
profiles out to $\sim 10R_{\mathrm{e}}$ and down to surface brightness levels
of $\sim 29~{}\mathrm{mag/arcsec}^{2}$ in the g-band. The survey is still
ongoing, however, results on the radial surface brightness profiles have been
published for several massive galaxies in group/cluster environments by
Spavone et al. (2017) and Spavone et al. (2020). Spavone et al. (2017) fit two
or three Sérsic profiles to 6 ETGs, with the Sérsic indices $n$ constrained
to a narrow range. More recently, Spavone et al. (2020) fit 19 ETGs in the
Fornax cluster with either two or three Sérsic components. Here we focus on
the two component fit, for which $n$ was a free parameter. The two component
fits have a single intermediate radius, and the accreted mass fractions are
calculated from the second (outer) component. These are referred to as the
“relaxed” components following Cooper et al. (2015). Since the approach of
providing unconstrained fits is more comparable to ours, we focus on the
study by Spavone et al. (2020) instead of Spavone et al. (2017).
Another very deep imaging study has been carried out by Kluge et al. (2019),
who fit double Sérsic profiles to extremely low surface brightness profiles
targeting especially BCGs. Both single and double Sérsic fits were obtained,
as well as accreted fractions from the double Sérsic fits.
We also compare to the double Sérsic fits of $\sim 45,500$ galaxies, observed
at a mean redshift $z\sim 0.08$ and stacked in mass bins by D’Souza et al.
(2014). Taking mean values from their figure 13 for ETG-like galaxies, we note
that their data covers a similar stellar mass range to our modelled galaxies
and that they find effective radii of the inner and outer components to be
around 3 and 8 kpc, respectively. D’Souza et al. (2014) also provided such
measurements for their stacked LTG sample; however, we focus on their
results regarding the ETGs here.
### 4.3 Accreted Fractions from Double Sérsic Fits
Figure 12: Mass fraction of accreted stars versus stellar mass (same as the
upper right panel of Fig. 6) but with observations included as black symbols:
Data points from the VEGAS survey (Spavone et al., 2020) as open diamonds, with
additional data points from the literature as presented by Spavone et al.
(2020) as open triangles. Data points for BCGs from Kluge et al. (2019) are
shown as filled black diamonds. The values for the stacked SDSS galaxies by
D’Souza et al. (2014) are shown as open squares, with the upper line the one
obtained for the ETG-like galaxies where a double Sérsic fit is a good
description, and the lower line for the LTG-like galaxies where a third Sérsic
fit is required. The blue solid line and shaded region indicate the Magneticum
average values from this work, and all other coloured lines and shaded areas
indicate other cosmological simulations as described in Fig. 6.
In Fig. 12 we reproduce the top right panel of Fig. 6, showing the ex-situ
accretion fraction versus stellar mass. Again, the lines and shading show the
results of various models, colors as indicated in Fig. 6, with the blue solid
line showing the average fraction of accreted mass for Magneticum galaxies.
This time, we include in Fig. 12 observational data from the literature.
These observations do not directly measure the fraction of accreted mass but
rather fit double Sérsic profiles to the surface brightness profiles of early-
type galaxies, as described in Sec. 4.2, inferring the mass fraction from the
outer Sérsic component.
Figure 13: Sérsic index $n$ versus stellar mass for single Sérsic fits to the
2D projected mass density profiles of all Magneticum galaxies. The dashed
black line shows the results from simulations by Hopkins et al. (2009), with
the shaded area marking the scatter inferred from their simulations. The solid
black line is the mean relation from observations by Graham (2013) converted
into stellar mass assuming $M/L_{\mathrm{B}}=10$. Left panel: All Magneticum
galaxies are coloured according to their profile classes as indicated in the
legend. Right panel: Only the ETGs from the Magneticum galaxy sample are
shown. In addition, open black diamonds show observations of ETGs from
Kormendy et al. (2009), and the solid black diamonds show BCGs from the
observations by Kluge et al. (2019).
While all simulations suggest a general trend of increasing accretion fraction
for higher mass galaxies, there is no clear trend for the observational proxy
(the outer component mass) to vary with stellar mass for the Fornax galaxy
sample by Spavone et al. (2020) or the BCG sample by Kluge et al. (2019). Only
the stacked sample provided by D’Souza et al. (2014) shows a decrease in
accreted fraction with mass for their ETG sample. In fact, the different
observations differ strongly from each other and do not show a clear picture
of the accreted fraction estimated through the outer Sérsic fit component
being correlated with the total stellar mass of a galaxy. This further
supports our results from Sec. 4.1, that dips in the observed surface
brightness profile do not, in general, correspond to the true transition from
in-situ to accreted dominated material. An alternative approach to estimating
accretion fraction may come from star formation histories (Boecker et al.,
2020) or 2D chemo-dynamical analysis (Poci et al., 2019), and needs to be
investigated in future studies.
### 4.4 Indices from Single and Double Sérsic Fits
As a final test, we compare the Sérsic indices $n$ from the single and double
Sérsic fits to our projected simulated galaxies with observations to ensure
that the simulated and observed galaxy samples really are comparable in their
radial properties. Fig. 13 shows the Sérsic indices $n$ from single Sérsic
fits to the 2D projected mass density profiles of the Magneticum galaxies, for
all galaxies in the left panel and ETGs-only in the right panel. We also
include the observed relation for galaxies using eq. 2.7 from Graham (2013)
and simply assuming $M/L_{\mathrm{B}}=10$ for all galaxies, shown as a solid black
line. We find a trend of increasing Sérsic index $n$ for higher mass galaxies
that matches the observed mean trend well, clearly showing that the overall
mass distribution of the simulated galaxies is in good agreement with
observations. Galaxies of the different profile classes are well spread in
Sérsic index for a given stellar mass, with no clear trends apart from the
fact that the largest Sérsic indices are clearly found in class C galaxies. We
also include the range of Sérsic indices from the empirical model by Hopkins
et al. (2009), which covers a mass range similar to that of our simulated galaxies.
The model predicts on average Sérsic indices that are, at a given stellar
mass, somewhat larger than the Magneticum and the observed galaxies of Graham
(2013), especially at the high mass end, but the scatter range is very
similar, and the overall trend of larger Sérsic indices with larger stellar
mass is in good agreement.
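For reference, the single Sérsic (1963) profile fitted throughout this section has the standard form

```latex
I(R) = I_{e}\,\exp\left\{-b_{n}\left[\left(\frac{R}{R_{e}}\right)^{1/n}-1\right]\right\},
\qquad b_{n}\approx 2n-\tfrac{1}{3},
```

where $R_{e}$ is the effective (here, projected half-mass) radius and $I_{e}$ the surface density at $R_{e}$; larger $n$ produces profiles that are more centrally concentrated and have more extended wings. Double Sérsic fits model the profile as the sum of two such components.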
When limiting our galaxy sample to ETGs-only (right panel of Fig. 13), we find
the Sérsic indices to be slightly larger on average. We additionally include
the observational data for individual ETGs from Kormendy et al. (2009) and
Kluge et al. (2019), further showing that the simulations also provide good
descriptions of the stellar mass distributions of ETGs especially. There is a
slight tendency for galaxies with stellar masses above $\log(M_{*})>11.5$ to
have larger Sérsic indices in the observed samples than in our simulated
sample, however, at this mass range our simulated sample is statistically not
representative anymore, especially since there are only 4 BCGs in our sample
while the observations by Kluge et al. (2019) are BCGs only (for a more
detailed comparison of the BCG properties from a larger Magneticum simulation
volume with observations, see Remus et al., in prep.).
Figure 14: Sérsic index $n$ versus stellar mass for double Sérsic fits to the
2D projected mass density profiles of the Magneticum galaxies, with colours as
indicated in the legend. Observations from the VEGAS survey (Spavone et al.,
2020) are shown as open black diamonds, and BCGs from Kluge et al. (2019) as
solid black diamonds. The black lines mark the values obtained by D’Souza et
al. (2014) from fits to over $45,500$ galaxies. Left panel: Inner Sérsic fits.
Right panel: Outer Sérsic fits.
For double Sérsic fits, we here focus only on ETGs as the observational
comparison samples only include ETGs. As shown in Fig. 14, we find the inner
profiles of our simulated ETGs to have Sérsic indices of $n\sim 2.5$ on
average (left panel), and the outer profiles to have Sérsic indices of
$n\sim{}$0.5–1, on average (right panel). We find little variation of these
indices with stellar mass; if anything, there is a slight trend for the
inner Sérsic indices of more massive galaxies to be somewhat larger. The
agreement with the 9 ETGs from the VEGAS survey (Spavone et al., 2020), with
masses $>10^{10}M_{\odot}$, is reasonable, with a slight tendency for the
observed outer Sérsic indices to be larger than for the simulated ETGs.
Compared to the observations of BCGs by Kluge et al. (2019), we find that the
agreement with their outer Sérsic indices is rather good, while their inner
Sérsic indices are on average larger than those of our simulated galaxies. However,
this could also be due to the fact that most of the Magneticum ETGs in the
mass range comparable to the sample by Kluge et al. (2019) are not BCGs.
Additionally, we also show in Fig. 14 the inner and outer Sérsic indices from
the stacked observations by D’Souza et al. (2014), using their double Sérsic
fits (solid lines). As can clearly be seen, they differ rather strongly from
the simulated galaxies, but also from the other observations, with their inner
slopes generally smaller and their outer slopes much larger in comparison. We
note that D’Souza et al. (2014) also fit triple Sérsic profiles to their
highest mass galaxies. In this case, their outer Sérsic fits have $n\sim 1.5$,
which is much closer to our simulation values and the other observations,
indicating that their inner slopes really are “inner” slopes, which we
do not fit in this work to avoid resolution issues. This clearly highlights
the importance of clear definitions regarding the fitted regions of galaxies
when performing comparisons.
Overall, we find reasonable agreement between the Sérsic indices $n$ predicted
by our simulated galaxies and the observed Sérsic indices, especially of ETGs,
for both single and double Sérsic fit indices, clearly showing that the
simulated galaxies used in this work capture the observed matter
distributions. This clearly shows that our result, that double Sérsic fits do
not describe the in-situ and accreted components of galaxies, is applicable to
observations. We rather suggest that the double Sérsic fit (excluding the
central inner bulge areas) describes the relaxed inner and the unrelaxed outer
stellar (halo) components of galaxies and could therefore be used to
distinguish the outer stellar halo from the main body of a galaxy.
## 5 Summary and Conclusions
Using the Magneticum simulation we have studied a sample of 511 model galaxies
in the log stellar mass range $10.3<\log(M_{*})<12$. These simulated galaxies reproduce well
the observed galaxy size-mass relation. We also find that the fraction of
accreted material, as a function of total halo and stellar mass, reveals
similar trends to those found by previous simulations. One notable difference
is that our simulations predict somewhat higher accretion fractions in the
lower mass galaxies.
We examined the stellar mass density profiles of our sample, split into accreted
and in-situ components, and classified them into 6 classes depending on the
profile type: The most common class reveals, at intermediate radii, a
transition radius where the accreted and the in-situ component are equal, with
the in-situ component being dominant in the centre and the accreted component
dominating the outskirts. This class is comparable to the profiles found in
previous studies. However, for about 30% of our galaxies we find different
profiles: Some are accretion dominated at all radii, even in the centre;
another group of galaxies is in-situ dominated at all radii; and most
interestingly, we find one class of galaxies that have even two transition
radii at which the in-situ and accreted material are equal, with an accretion
dominated core, an in-situ dominated shell around it, and an accretion
dominated outskirt. This is actually the second most common class of galaxies
in our sample.
We show that these profile classes correlate with galaxy mass, and that the
type of mergers they undergo helps to shape their profiles. We especially show
that the amount of gas that is involved in these mergers is more important in
shaping these profiles than the actual merger, and that the most common class
is, in about 70% of cases, not dominated by major mergers but rather by smaller merger
events. Their outer regions are largely built up by dry minor and mini
mergers, clearly showing the importance of minor and especially mini mergers
in shaping the outer stellar halos of galaxies.
We find that galaxies with high in-situ fractions (low accretion fractions)
tend to be lower mass galaxies with smaller half-mass radii, and we see a weak
trend for high in-situ fraction galaxies to have lower central dark matter
fractions, with the exception of the overall in-situ dominated galaxies that
have clearly larger central dark matter fractions at a given stellar mass than
galaxies of the most common class, as is typical for disk galaxies.
We measure the radius between the in-situ and accretion-dominated regions for
those galaxies that reveal a clear transition, which are the majority of our
sample. This transition radius is found to be weakly inversely correlated with
stellar mass, but strongly correlated with the in-situ fraction for our most
common class of galaxies. However, galaxies of the other classes that have one
or even two transition radii do not follow the same relations.
In order to compare our 3D profiles with observations, we projected the
stellar mass density along different viewing angles for each galaxy to be able to
directly compare with observed surface brightness profiles. We find that the
transition radius from in-situ to accretion dominated profiles seen in many 3D
profiles also occurs in all projections, but always at different radii in the
2D profiles, usually at smaller radii. None of our galaxies changes its
in-situ profile class under projection. We also find that, similar to
observations, our projected stellar mass surface density profiles usually
require a double Sérsic fit to be described accurately, with Sérsic indices
for both components similar to the range of observed Sérsic indices. However,
we clearly see that these two Sérsic components usually do not describe the
underlying in-situ and accreted components, but are rather disjunct from
those. Only in very few cases does the crossing radius of the two Sérsic
components coincide with the transition radius of the galaxy. Even worse, we also
clearly see that most galaxies that are dominated by accreted stars at all
radii still require a double Sérsic fit to describe the stellar surface
density profiles, thus having a crossing radius but no transition radius.
In other words, we clearly conclude that the dip seen in 2D profiles does not
correspond to the true transition radius between in-situ and accretion
dominated regions. Similarly, any mass inferred from these double-Sérsic fits
will not trace the true in-situ or accreted mass of a galaxy. Thus, fits to
the dips seen in some observed surface brightness profiles of early-type
galaxies are not a true measure of a galaxy’s accreted material. However,
they do hold some information about the assembly history of that galaxy, as we
find indications that these dips more likely mark the
transition from the inner (in-situ and massive merger dominated) core of a
galaxy to its stellar halo, mostly accreted through minor and mini mergers,
similar to the ICL component around BCGs. To confirm this, a more detailed
study including also the radial kinematics of a galaxy in addition to its
density component is needed in the future, to disentangle the formation
pathways of galaxies from observational tracers.
## Acknowledgements
We thank Klaus Dolag and Felix Schulze for very useful discussions. We also
thank Thomas Davison, Enrica Iodice, and Marilena Spavone for their helpful
comments. We also acknowledge funding from the DAAD PPP Germany-Australia
Exchange Program. The Magneticum Pathfinder simulations were partially
performed at the Leibniz-Rechenzentrum with CPU time assigned to the Project
“pr86re”, supported by the DFG Cluster of Excellence “Origin and Structure of
the Universe”. We are especially grateful for the support by M. Petkova
through the Computational Center for Particle and Astrophysics (C2PAP).
## Data Availability
The data underlying this article will be shared on reasonable request to the
corresponding author.
## References
* Amorisco (2017) Amorisco N. C., 2017, MNRAS, 464, 2882
* Beck et al. (2016) Beck A. M., et al., 2016, MNRAS, 455, 2110
* Boecker et al. (2020) Boecker A., Leaman R., van de Ven G., Norris M. A., Mackereth J. T., Crain R. A., 2020, MNRAS, 491, 823
* Boylan-Kolchin et al. (2009) Boylan-Kolchin M., Springel V., White S. D. M., Jenkins A., Lemson G., 2009, MNRAS, 398, 1150
* Capaccioli et al. (2015) Capaccioli M., et al., 2015, A&A, 581, A10
* Cooper et al. (2010) Cooper A. P., et al., 2010, MNRAS, 406, 744
* Cooper et al. (2013) Cooper A. P., D’Souza R., Kauffmann G., Wang J., Boylan-Kolchin M., Guo Q., Frenk C. S., White S. D. M., 2013, MNRAS, 434, 3348
* Cooper et al. (2015) Cooper A. P., Gao L., Guo Q., Frenk C. S., Jenkins A., Springel V., White S. D. M., 2015, MNRAS, 451, 2703
* Courteau & Dutton (2015) Courteau S., Dutton A. A., 2015, ApJ, 801, L20
* D’Souza et al. (2014) D’Souza R., Kauffman G., Wang J., Vegetti S., 2014, MNRAS, 443, 1433
* Davison et al. (2020) Davison T. A., Norris M. A., Pfeffer J. L., Davies J. J., Crain R. A., 2020, MNRAS, 497, 81
* Dolag et al. (2004) Dolag K., Jubelgas M., Springel V., Borgani S., Rasia E., 2004, ApJ, 606, L97
* Dolag et al. (2005) Dolag K., Vazza F., Brunetti G., Tormen G., 2005, MNRAS, 364, 753
* Dolag et al. (2009) Dolag K., Borgani S., Murante G., Springel V., 2009, MNRAS, 399, 497
* Dolag et al. (2017) Dolag K., Mevius E., Remus R.-S., 2017, Galaxies, 5, 35
* Donnert et al. (2013) Donnert J., Dolag K., Brunetti G., Cassano R., 2013, MNRAS, 429, 3564
* Dubois et al. (2013) Dubois Y., Gavazzi R., Peirani S., Silk J., 2013, MNRAS, 433, 3297
* Dubois et al. (2016) Dubois Y., Peirani S., Pichon C., Devriendt J., Gavazzi R., Welker C., Volonteri M., 2016, MNRAS, 463, 3948
* Duc et al. (2015) Duc P.-A., et al., 2015, MNRAS, 446, 120
* Fabjan et al. (2010) Fabjan D., Borgani S., Tornatore L., Saro A., Murante G., Dolag K., 2010, MNRAS, 401, 1670
* Forbes & Remus (2018) Forbes D. A., Remus R.-S., 2018, MNRAS, 479, 4760
* Forbes & Thomson (1992) Forbes D. A., Thomson R. C., 1992, MNRAS, 254, 723
* Genzel et al. (2020) Genzel R., et al., 2020, arXiv e-prints, p. arXiv:2006.03046
* Graham (2013) Graham A. W., 2013, Elliptical and Disk Galaxy Structure and Modern Scaling Laws. p. 91, doi:10.1007/978-94-007-5609-0_2
* Guo et al. (2011) Guo Q., et al., 2011, MNRAS, 413, 101
* Hernquist & Barnes (1991) Hernquist L., Barnes J. E., 1991, Nature, 354, 210
* Hilz et al. (2012) Hilz M., Naab T., Ostriker J. P., Thomas J., Burkert A., Jesseit R., 2012, MNRAS, 425, 3119
* Hilz et al. (2013) Hilz M., Naab T., Ostriker J. P., 2013, MNRAS, 429, 2924
* Hirschmann et al. (2014) Hirschmann M., Dolag K., Saro A., Bachmann L., Borgani S., Burkert A., 2014, MNRAS, 442, 2304
* Hirschmann et al. (2015) Hirschmann M., Naab T., Ostriker J. P., Forbes D. A., Duc P.-A., Davé R., Oser L., Karabal E., 2015, MNRAS, 449, 528
* Hopkins et al. (2009) Hopkins P. F., Hernquist L., Cox T. J., Keres D., Wuyts S., 2009, ApJ, 691, 1424
* Huang et al. (2013) Huang S., Ho L. C., Peng C. Y., Li Z.-Y., Barth A. J., 2013, ApJ, 766, 47
* Karademir et al. (2019) Karademir G. S., Remus R.-S., Burkert A., Dolag K., Hoffmann T. L., Moster B. P., Steinwandel U. P., Zhang J., 2019, MNRAS, 487, 318
* Kluge et al. (2019) Kluge M., et al., 2019, arXiv e-prints, p. arXiv:1908.08544
* Komatsu et al. (2011) Komatsu E., et al., 2011, ApJS, 192, 18
* Kormendy et al. (2009) Kormendy J., Fisher D. B., Cornell M. E., Bender R., 2009, ApJS, 182, 216
* Lackner et al. (2012) Lackner C. N., Cen R., Ostriker J. P., Joung M. R., 2012, MNRAS, 425, 641
* Lagos et al. (2018) Lagos C. d. P., et al., 2018, MNRAS, 473, 4956
* Lange et al. (2015) Lange R., et al., 2015, MNRAS, 447, 2603
* Lee & Yi (2013) Lee J., Yi S. K., 2013, ApJ, 766, 38
* Merritt et al. (2016) Merritt A., van Dokkum P., Abraham R., Zhang J., 2016, ApJ, 830, 62
* Naab et al. (2009) Naab T., Johansson P. H., Ostriker J. P., 2009, ApJ, 699, L178
* Oser et al. (2010) Oser L., Ostriker J. P., Naab T., Johansson P. H., Burkert A., 2010, ApJ, 725, 2312
* Oser et al. (2012) Oser L., Naab T., Ostriker J. P., Johansson P. H., 2012, ApJ, 744, 63
* Pillepich et al. (2014) Pillepich A., et al., 2014, MNRAS, 444, 237
* Pillepich et al. (2018) Pillepich A., et al., 2018, MNRAS, 475, 648
* Poci et al. (2019) Poci A., McDermid R. M., Zhu L., van de Ven G., 2019, MNRAS, 487, 3776
* Pulsoni et al. (2020) Pulsoni C., Gerhard O., Arnaboldi M., Pillepich A., Rodriguez-Gomez V., Nelson D., Hernquist L., Springel V., 2020, arXiv e-prints, p. arXiv:2009.01823
* Purcell et al. (2007) Purcell C. W., Bullock J. S., Zentner A. R., 2007, ApJ, 666, 20
* Remus et al. (2017) Remus R.-S., Dolag K., Naab T., Burkert A., Hirschmann M., Hoffmann T. L., Johansson P. H., 2017, MNRAS, 464, 3742
* Rodriguez-Gomez et al. (2016) Rodriguez-Gomez V., et al., 2016, MNRAS, 458, 2371
* Schulze et al. (2018) Schulze F., Remus R.-S., Dolag K., Burkert A., Emsellem E., van de Ven G., 2018, MNRAS, 480, 4636
* Schulze et al. (2020) Schulze F., Remus R.-S., Dolag K., Bellstedt S., Burkert A., Forbes D. A., 2020, MNRAS, 493, 3778
* Schweizer & Seitzer (1992) Schweizer F., Seitzer P., 1992, AJ, 104, 1039
* Seigar et al. (2007) Seigar M. S., Graham A. W., Jerjen H., 2007, MNRAS, 378, 1575
* Sérsic (1963) Sérsic J. L., 1963, Boletin de la Asociacion Argentina de Astronomia La Plata Argentina, 6, 41
* Spavone et al. (2017) Spavone M., et al., 2017, A&A, 603, A38
* Spavone et al. (2020) Spavone M., et al., 2020, A&A, 639, A14
* Springel & Hernquist (2005) Springel V., Hernquist L., 2005, ApJ, 622, L9
* Springel et al. (2001) Springel V., White S. D. M., Tormen G., Kauffmann G., 2001, MNRAS, 328, 726
* Steinborn et al. (2015) Steinborn L. K., Dolag K., Hirschmann M., Prieto M. A., Remus R.-S., 2015, MNRAS, 448, 1504
* Steinborn et al. (2016) Steinborn L. K., Dolag K., Comerford J. M., Hirschmann M., Remus R.-S., Teklu A. F., 2016, MNRAS, 458, 1013
* Tacchella et al. (2019) Tacchella S., et al., 2019, MNRAS, 487, 5416
* Tal & van Dokkum (2011) Tal T., van Dokkum P. G., 2011, ApJ, 731, 89
* Tal et al. (2009) Tal T., van Dokkum P. G., Nelan J., Bezanson R., 2009, AJ, 138, 1417
* Teklu et al. (2015) Teklu A. F., Remus R.-S., Dolag K., Beck A. M., Burkert A., Schmidt A. S., Schulze F., Steinborn L. K., 2015, ApJ, 812, 29
* Teklu et al. (2017) Teklu A. F., Remus R.-S., Dolag K., Burkert A., 2017, MNRAS, 472, 4769
* Teklu et al. (2018) Teklu A. F., Remus R.-S., Dolag K., Arth A., Burkert A., Obreja A., Schulze F., 2018, ApJ, 854, L28
* Tornatore et al. (2004) Tornatore L., Borgani S., Matteucci F., Recchi S., Tozzi P., 2004, MNRAS, 349, L19
* Tornatore et al. (2007) Tornatore L., Borgani S., Dolag K., Matteucci F., 2007, MNRAS, 382, 1050
* Tortora et al. (2014) Tortora C., La Barbera F., Napolitano N. R., Romanowsky A. J., Ferreras I., de Carvalho R. R., 2014, MNRAS, 445, 115
* Tortora et al. (2019) Tortora C., Posti L., Koopmans L. V. E., Napolitano N. R., 2019, MNRAS, 489, 5483
* Vogelsberger et al. (2020) Vogelsberger M., Marinacci F., Torrey P., Puchwein E., 2020, Nature Reviews Physics, 2, 42
* Wiersma et al. (2009) Wiersma R. P. C., Schaye J., Smith B. D., 2009, MNRAS, 393, 99
* van de Sande et al. (2019) van de Sande J., et al., 2019, MNRAS, 484, 869
# gleam: Galaxy Line Emission & Absorption Modeling
Andra Stroe (Clay Fellow, Center for Astrophysics | Harvard & Smithsonian, 60 Garden St., Cambridge, MA 02138, USA) and Victor-Nicolae Savu
(Received December 5, 2020; Revised January 26, 2021; Accepted January 27,
2021)
###### Abstract
We present gleam (Galaxy Line Emission & Absorption Modeling), a Python tool
for fitting Gaussian models to emission and absorption lines in large samples
of 1D extragalactic spectra. gleam is tailored to work well in batch mode
without much human interaction. With gleam, users can uniformly process a
variety of spectra, including galaxies and active galactic nuclei, in a wide
range of instrument setups and signal-to-noise regimes. gleam also takes
advantage of multiprocessing capabilities to process spectra in parallel. With
the goal of enabling reproducible workflows for its users, gleam employs a
small number of input files, including a central, user-friendly configuration
in which fitting constraints can be defined for groups of spectra and
overrides can be specified for edge cases. For each spectrum, gleam produces a
table containing measurements and error bars for the detected spectral lines
and continuum, and upper limits for non-detections. For visual inspection and
publishing, gleam can also produce plots of the data with fitted lines
overlaid. In the present paper, we describe gleam’s main features, the
necessary inputs, expected outputs, and some example applications, including
thorough tests on a large sample of optical/infra-red multi-object
spectroscopic observations and integral field spectroscopic data. gleam is
developed as an open-source project hosted at
https://github.com/multiwavelength/gleam and welcomes community contributions.
Astronomy software (1855), Astronomical techniques (1684), Spectroscopy
(1558), Astronomy data analysis (1858), Galaxies (573), Active galactic nuclei
(16), Open source software (1866)
Journal: AJ. Software: gleam (Stroe & Savu, 2020), Matplotlib (Hunter, 2007),
Astropy (Astropy Collaboration et al., 2013), LMFIT (Newville et al., 2019),
Numpy (Harris et al., 2020)
## 1 Introduction
One of the main goals of extragalactic astronomy is to understand the cosmic
evolution of galaxies and black holes in the context of large scale structure.
To obtain a comprehensive view of the physical processes driving their
evolution and unveil their spatial distribution, spectroscopic observations of
large samples of galaxies and active galactic nuclei (AGN) at increasingly
high redshift are required. The most efficient way to obtain large samples
($>100$ to hundreds of thousands of sources) covering large volumes is through
simultaneous observations of many objects. As a consequence, multi-object
spectroscopy (MOS) and integral field unit (IFU) spectroscopy have experienced
significant growth since the 1980s.
MOS plays an important role in repositioning mid-size telescopes, with
instruments dedicated exclusively to completing large surveys of galaxies and
quasars, such as LAMOST (Cui et al., 2012), WHT/WEAVE (Dalton et al., 2012),
SDSS-IV (Blanton et al., 2017) and DESI (DESI Collaboration et al., 2016). The
instrument suite of $6-10$-m class optical/infrared telescopes usually
contains MOS and IFU capabilities, e.g. Keck/DEIMOS (Faber et al., 2003),
Keck/MOSFIRE (McLean et al., 2012), VLT/VIMOS (Le Fèvre et al., 2003),
VLT/KMOS (Sharples et al., 2006), Gemini/GMOS (Hook et al., 2004), Subaru/FMOS
(Kimura et al., 2010), MMT/Hectospec (Fabricant et al., 2005), Magellan/IMACS
(Dressler et al., 2011). In the near future, new wide-field ($>1^{\circ}$),
high-multiplex ($>1000$ targets) MOS instruments will be mounted, such as
VLT/MOONS (Cirasuolo et al., 2012) and VISTA/4MOST (de Jong et al., 2012). All
new-generation ground-based optical/infrared telescopes have MOS and IFU
instruments planned, e.g. ELT/MOSAIC (Jagourel et al., 2018), TMT/WFOS (Pazder
et al., 2006) and GMT/GMACS (Pak et al., 2020). Instruments on the flagship
James Webb Space Telescope, including NIRSpec and MIRI, will also have MOS/IFU
capabilities. MOS/IFU techniques have been also routinely used in the radio
and sub-mm regime (e.g. VLA, ALMA, Thompson et al., 1980; Wootten & Thompson,
2009).
As a result of transformational large-scale public surveys and concerted
guaranteed time efforts completed over the past 3 decades, a growing body of
spectroscopic observations have been made available to the community.
Complementing guaranteed time observations and large scale public surveys,
individual investigators have added MOS and IFU observations tailored to
specific extragalactic science goals. Obtaining statistically robust samples
and particular science cases that involve targets distributed across the sky
(e.g. galaxy population studies in galaxy clusters, quasar surveys, high
redshift galaxy surveys, or intra/circum galactic medium absorption-line
surveys) require the combination of data coming from different telescopes and
instruments. Further, the advent of online databases has made access to fully
or partially reduced observations easier (Ginsburg et al., 2019), enabling
individual authors to make use of existing spectroscopic observations for new
science goals, possibly combining data from different telescopes. The sheer
volume of data warrants automated analysis pipelines with minimal human
interaction.
Striving for reproducible results, many authors in the field provide machine-
readable data plus scripts used to obtain the results with the publication
e.g. the Jupyter (Kluyver et al., 2016) notebook used for creating figures or
the CASA script used to reduce and make images from ALMA data. However,
publications mainly focus on the originally intended science case, leaving
byproducts and intermediate results largely unreported (e.g. unreported line
fluxes when the goal was measuring redshifts). In order to incorporate
archival spectroscopic observations in new projects, researchers need to
partially reproduce and build upon the efforts of the original authors.
The first fundamental property encoded by a spectrum is the source’s redshift.
A number of powerful, modern tools assist astronomers in obtaining accurate
redshifts for large samples in an automatic and unsupervised way while also
ensuring the reliability of the results (e.g. EZ, Garilli et al., 2010). Apart
from providing redshifts, the scientific potential of MOS and IFU observations
is realized in extracting the (resolved) physics and chemistry of
extragalactic objects from emission and absorption lines. With a growing body
of literature with tailored science goals, each publication uses heterogeneous
data and methods to measure emission and absorption lines. As the astronomy
community further adopts the Python programming language (e.g. Astropy,
Astropy Collaboration et al., 2013), various interfaces for fitting functions
exist. However, the low-level function fitting packages require individual
authors to write their own bindings to interface between the reduced
astronomical data and the fitting software. With a high level of duplicated
effort in the community to write tailored code to fit spectral lines and with
the high costs associated with sharing and maintaining it, access to data
analysis software entails a great deal of overhead and represents a barrier to
entry for the field of spectroscopy.
gleam (Galaxy Line Emission & Absorption Modeling; Stroe & Savu, 2020;
https://github.com/multiwavelength/gleam) is a software tool for fitting
Gaussian models to emission and absorption lines in large samples of galaxy
and AGN spectra. gleam has versatile science applications involving large
samples of 1D spectra or IFU observations. For example, gleam is ideally
suited for unveiling the detailed physics and chemistry of galaxies, as
derived from interstellar medium line ratios and stellar absorption lines, for
a variety of samples spanning both cosmic time and environment. gleam can also
aid in the exploration of IFU cubes through spatially resolved physics,
kinematics, and chemistry. Requiring only the source redshifts and with little
to no interaction, the user can analyze large numbers of spectra in a uniform
manner, even with data taken in different conditions, with different
instrument setups, on different telescopes, at a range of signal-to-noise
(S/N) regimes, and for a wide variety of sources. We tested gleam mainly on
optical and infrared spectra; however, we expect it to also work well on radio
and sub-mm spectra.
In this paper, we provide an introduction to the gleam software, focusing on
features contained in the v1.0 release. Section 2 describes the basic
functionality, while Section 3 discusses the necessary input files and
expected outputs. Section 4 covers some example applications and uses. In
Section 5, we present the open-source development model adopted for gleam,
while in Section 6 we discuss possible extensions to the code in the near
future.
## 2 gleam: Galaxy Line Emission and Absorption Modeling
With gleam, the user can process large numbers of sources in batch mode,
taking advantage of the multiprocessing capabilities of modern CPUs.
Optionally, gleam also provides an interactive interface to inspect individual
line fits on a spectral line and source-by-source basis. gleam fits emission
and absorption lines in fully-reduced 1D spectra using per-source
spectroscopic redshift information and fitting constraints from a central
configuration. The central configuration encourages users to define common
fitting constraints for broad groups of spectra and be deliberate when
defining overrides, which helps prevent user errors and facilitates an easier
review of the methods and results by collaborators, referees, and readers.
gleam fits all lines listed in a central line list and can jointly fit lines
located close together. gleam also reports upper limits and identifies lines
without spectral coverage. If so required, the user can provide a file
containing sky bands, sky lines and/or OH lines to be masked and disregarded
during line fitting. At its core, gleam uses the popular LMFIT Python
package (https://lmfit.github.io/lmfit-py/; Newville et al., 2019) to perform
the line fitting and to calculate and report errors on fit parameters. gleam
is also well integrated with Astropy (https://www.astropy.org/; Astropy
Collaboration et al., 2013), which enables the use of units and FITS tables.
As output, gleam creates a FITS table with Gaussian line measurements and
upper limits (as the case may be), including central wavelength, width,
height, and amplitude, as well as estimates for the continuum under the line,
the line flux, luminosity, equivalent width, and velocity width. gleam can
also make plots of the entire spectrum with fitted lines overlaid, as well as
plots for each individual line fitted, using Matplotlib (Hunter, 2007).
gleam follows open-source practices, with planned features to be added to the
living codebase published online on Github at
https://github.com/multiwavelength/gleam. The latest release of gleam can be
installed easily with Python pip.
## 3 The Software
Below, we introduce gleam’s main functionality, features, required inputs, and
outputs. For a full documentation of the code, we encourage the reader to
consult gleam’s Github page at https://github.com/multiwavelength/gleam.
### 3.1 Model fitting
In fitting the spectrum, gleam groups neighboring spectral lines. For each
spectral line group, a user-defined window of the spectrum around the group is
considered for fitting. gleam models each group as the sum of a constant for
the continuum and one Gaussian for each spectral line. The assumption that the
continuum is locally constant might fail if the window is too wide, while too
narrow a window will not contain enough line-free spectrum to properly
constrain the continuum. Sections of the spectrum suffering from
contamination, such as areas with sky lines, can also be masked.
The centers of all Gaussian components can be fixed, constrained to user-
defined intervals, or left as free parameters. An initial guess for the
central wavelength of each Gaussian is used to identify the component. This
initial guess is calculated from the user-provided redshift for each spectrum
and the global list of lines at rest-frame wavelengths. To offer the
flexibility to fit emission and absorption lines in a range of galaxy and AGN
spectra, gleam relies on a single prior, the source redshift, for initializing
the line fitting solution. In all but the brightest sources with the highest
S/N spectral lines, a spectroscopic-quality redshift is required.
A fit is accepted when every Gaussian component passes the user-specified S/N
threshold. When, due to noise or overlap with sky lines, there is
insufficient information in the data to fit the entire model, gleam
iteratively removes Gaussian components in search of an acceptable fit. Any
removed Gaussian components are treated as non-detections and upper limits are
computed for them.
gleam employs LMFIT to perform the fitting (Newville et al., 2019). Through a
non-linear least-squares minimization using the Levenberg-Marquardt method,
LMFIT enables the robust estimation of both model parameters and their errors.
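LMFIT builds its Levenberg-Marquardt fitting on SciPy; the same fit-with-errors pattern can be sketched directly with `scipy.optimize.curve_fit` (an illustration of the method, not gleam’s actual call into LMFIT; values are synthetic):

```python
import numpy as np
from scipy.optimize import curve_fit

def line_model(w, cont, amp, center, sigma):
    """Constant continuum plus a single Gaussian."""
    return cont + amp * np.exp(-0.5 * ((w - center) / sigma) ** 2)

rng = np.random.default_rng(0)
wave = np.linspace(4980.0, 5040.0, 300)
flux = line_model(wave, 1.0, 4.0, 5007.0, 2.5) + rng.normal(0.0, 0.05, wave.size)

# Levenberg-Marquardt least squares; pcov is the parameter covariance matrix
popt, pcov = curve_fit(line_model, wave, flux, p0=(1.0, 1.0, 5005.0, 2.0), method="lm")
perr = np.sqrt(np.diag(pcov))  # 1-sigma errors on (cont, amp, center, sigma)
```
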
### 3.2 Naming convention
When handling large numbers of spectra coming from different observations,
sometimes from different telescopes, it is important to adopt a consistent
naming convention. gleam helps with this by prescribing a four-part
hierarchical naming convention that allows for easy identification and
grouping of spectra.
Each measured spectrum is uniquely identified by the combination of the
following four properties:
Sample
A label for the parent sample for the source, e.g. name of the parent galaxy
cluster or famous field,
Setup
A label for the telescope, instrument, or mode used for the observation,
Pointing
An identifier for the individual pointing, fiber configurations, or slit
configuration/mask the observation is part of,
SourceNumber
A source number to distinguish a target within a sample, setup, and pointing
combination.
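In code, such an identifier can be modeled as a simple value type (a sketch; gleam’s internal representation may differ). The dotted form matches the per-source keys seen in the configuration examples of Section 4:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SpectrumID:
    """The four-part gleam naming convention: Sample.Setup.Pointing.SourceNumber."""
    sample: str
    setup: str
    pointing: str
    source_number: int

    def __str__(self) -> str:
        return f"{self.sample}.{self.setup}.{self.pointing}.{self.source_number}"

sid = SpectrumID("A115", "MMT", "Q1_EXT1", 194)
```
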
### 3.3 Inputs
gleam uses five kinds of input files to gather information about spectra in
order to compute the properties of its spectral lines:
* •
A set of 1D spectra,
* •
Metafiles that provide a reference redshift for each spectrum,
* •
Configuration file, which specifies choices of spectral lines, fitting
parameters, cosmological parameters and sky masking,
* •
Line table with the rest-frame wavelengths of the spectral lines of interest,
* •
(Optional) Sky band catalog, with details of any wavelengths contaminated by
sky absorption/emission.
In a single run, gleam can process spectra that originate from different data
sets and which might have different units for wavelength or flux. It is
therefore highly recommended to include units in the headers of all spectra
files. gleam propagates the units to the results. Line files and sky band
files should also specify units in the file headers to ensure alignment with
the spectra.
The metadata file contains information about individual spectra in the
project, such as the setup and pointing they were observed with, a numeric
identifier, and their redshift. The user may add custom columns to the
metadata file to store other information about the spectra, such as the sky
coordinates, quality flags, source types, etc. The project can have a single
metadata file or multiple ones, as long as spectra are uniquely labeled.
Because gleam does not process the sky coordinates for sources, it cannot
detect when two spectra pertain to the same source and, therefore, will
produce separate independent fits for each input spectrum. It is incumbent
upon the user to reason about which spectrum best fits their science
requirements. When it is appropriate, another approach would be to
combine/stack the relevant observations into a single spectrum before running
gleam.
gleam can uniformly process large numbers of spectra, even with data taken in
different conditions, with different instruments on different telescopes, and
for a wide variety of sources. The configuration file is used to concisely
describe how the different spectra should be processed, so they can be
analyzed together. For easy editing and review, the configuration file for
gleam uses the YAML (https://yaml.org/) format. Taking advantage of the naming
convention and the many reasonable defaults, the user can tailor the analysis
at three levels. The global-level parameters override the default configuration
for all the spectra. The setup level offers a way to apply configuration
overrides to groups of spectra (named setups). This level can be used to
capture differences between telescopes or instruments, such as the spectral
resolution. At the most granular level, the user can customize parameters for
individual sources. While per-source overrides can help account for some
particular cases (e.g. a small percentage of sources with both narrow and
broad emission lines), they should be used sporadically due to the associated
typing burden, and in the spirit of keeping the results comparable. The model
parameters for each spectrum are computed by stacking the applicable overrides
on top of the default in order: first, the global overrides, then any
applicable per-setup overrides, and, finally, any applicable per-source
overrides.
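The stacking of overrides amounts to a layered dictionary merge, with the most specific layer applied last (a flat-dictionary sketch with hypothetical values; gleam’s real merge also handles nested sections such as fitting):

```python
def resolve_config(defaults, global_overrides, setup_overrides=None, source_overrides=None):
    """Apply configuration layers in order; later (more specific) layers win."""
    params = dict(defaults)
    for layer in (global_overrides, setup_overrides or {}, source_overrides or {}):
        params.update(layer)
    return params

defaults = {"mask_sky": False, "center": "constrained"}
global_overrides = {"line_table": "line_lists/Main_optical_lines.fits", "mask_sky": True}
setup_overrides = {"resolution": "6 Angstrom", "mask_sky": False}  # e.g. one instrument

params = resolve_config(defaults, global_overrides, setup_overrides)
```
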
gleam cannot specify any default for the line table and the instrumental
resolution, so this information needs to appear at some level in the
configuration file. With these two fields, we present a minimal working gleam
configuration example:
globals:
  line_table: line_lists/Main_optical_lines.fits
  resolution: 4.4 Angstrom
gleam fits all lines listed in the line table and iteratively eliminates model
components when the data does not yield satisfactory fits for all the lines. A
S/N parameter, which defines the minimum accepted ratio between the estimated
amplitude of a component and its error, separates detections from upper limits
for each spectral line. In some setups, the user may select a starting subset
of the lines and avoid unnecessary trials (e.g. excluding faint lines the user
does not expect to be detected).
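The detection criterion reduces to a ratio test on each fitted amplitude (a sketch of the logic; names are ours):

```python
def classify_line(amplitude, amplitude_err, sn_limit):
    """Detection when the amplitude exceeds sn_limit times its error, else an upper limit."""
    if amplitude_err > 0 and amplitude / amplitude_err >= sn_limit:
        return "detection"
    return "upper limit"
```
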
Figure 1: gleam emission and absorption line fits, highlighting different
source types and origin telescopes. (a) A spectrum of a source at $z\sim 0.1$
dominated by AGN features, including broad emission lines; taken with
MMT/Hectospec, a multi-fiber instrument. (b) A high-S/N spectrum at $z\sim
0.6$ dominated by star formation, taken in $>1^{\prime\prime}$ seeing
conditions with the fiber-fed WHT/AF2 instrument. (c) A star-formation-
dominated galaxy at $z\sim 0.17$, with emission lines slightly overlapping a
sky-contaminated wavelength range; data taken in $\sim 1^{\prime\prime}$
seeing conditions with thin clouds, with a multi-slit spectrograph
(VLT/VIMOS). (d) A passive galaxy spectrum with absorption and emission
features at $z\sim 0.26$; data taken in $\sim 1^{\prime\prime}$ seeing with
the MMT/Hectospec instrument.
By default, there is no sky masking and the entire spectrum is used. However,
the user may control whether the model should ignore portions of the spectrum
where sky bands may not have been reliably subtracted. These bands are masked
and disregarded for fitting and treated as if no spectral coverage is
available. For example, for a data set where sky subtraction only failed for a
few of the spectra, the configuration may specify the sky band catalog at the
global level but only turn masking on for individual sources (or setups) for
which the sky subtraction is inadequate.
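Masking a set of sky bands amounts to excluding those wavelength ranges from the fit (a minimal sketch; the function name is ours):

```python
import numpy as np

def sky_mask(wave, sky_bands):
    """Boolean mask that is False inside contaminated sky bands."""
    keep = np.ones(wave.size, dtype=bool)
    for lo, hi in sky_bands:
        keep &= ~((wave >= lo) & (wave <= hi))
    return keep

wave = np.array([5570.0, 5577.0, 5585.0, 6300.0])
keep = sky_mask(wave, [(5572.0, 5582.0)])  # e.g. around the 5577 Angstrom sky line
```
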
gleam offers users a lot of flexibility in choosing the way line models are
fit to the data. Neighboring Gaussian components can be fit jointly to account
for nearby or blended lines. The user can also define the amount of continuum
to be fitted on either side of a group, which should be large enough to
encompass enough line-free continuum. Over the range specified, the continuum
should be well approximated by a constant. Any unrelated lines that fall
within the selected continuum are automatically masked. The center of each
Gaussian component is first estimated based on the rest-frame wavelength in
the line catalog and on the redshift estimate of the spectrum (listed in the
metadata file). The Gaussian center can be fixed to the initial guess, allowed
to vary within a small range around it, or can vary freely within the spectral
range of data considered when fitting. This final option can lead to lines
being mislabeled or cross-labeled, so it should only be used when the redshift
estimate for the spectrum is so poor that neither of the two other options is
feasible.
To report luminosities based on the fitted models, gleam uses a set of
cosmological parameters: Hubble constant ($H_{0}$), $\Omega_{0}$ and the CMB
temperature. While the default values for these parameters are reasonably
accurate and up to date, some projects may require slightly different values.
The cosmology section overrides one or more of these parameters, and the
resulting cosmology is then used consistently across all the spectra within a
project. This is the only set of overrides that cannot be made on a per-setup
or per-source basis, since doing so could produce results that are not
comparable between sources.
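Luminosities follow from line fluxes through the luminosity distance implied by these parameters. As a self-contained sketch (flat ΛCDM, ignoring radiation and the CMB temperature; gleam itself likely delegates this to a cosmology library, and all names here are ours):

```python
import math

C_KMS = 299792.458  # speed of light, km/s

def luminosity_distance_mpc(z, H0=70.0, Om0=0.3, n=10000):
    """Flat LCDM luminosity distance in Mpc via trapezoidal integration of 1/E(z)."""
    dz = z / n
    integral = 0.0
    for i in range(n + 1):
        zi = i * dz
        inv_E = 1.0 / math.sqrt(Om0 * (1.0 + zi) ** 3 + (1.0 - Om0))
        integral += (0.5 if i in (0, n) else 1.0) * inv_E * dz
    return (1.0 + z) * (C_KMS / H0) * integral

def line_luminosity(flux_cgs, z, **cosmo):
    """L = 4 pi d_L^2 F, with d_L converted from Mpc to cm."""
    d_cm = luminosity_distance_mpc(z, **cosmo) * 3.0857e24  # cm per Mpc
    return 4.0 * math.pi * d_cm ** 2 * flux_cgs
```
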
### 3.4 Outputs
For each of the sources in the sample, gleam produces a FITS table with all of
the line fits and upper limits (with units derived from the input data, if
available). Each line fitted is represented in a separate row, with all the
corresponding line fit details contained in different columns. The output
table contains fit parameters and their associated errors (such as the
continuum estimation, central line wavelength, Gaussian height, standard
deviation, and amplitude), line fluxes, luminosities, and equivalent widths.
Each row also contains a flag for spectral coverage (i.e. whether the line is
covered by the input spectrum) and another to indicate detection (whether the
line is detected above the required S/N). Fit values and errors are omitted
when the spectrum does not cover the spectral line. If a line is not detected,
gleam only reports an upper limit in the amplitude column and omits all other
Gaussian fit parameters. The deconvolved full-width-at-half-maximum (FWHM) and
the velocity FWHM are only reported if the line is spectrally resolved.
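The deconvolved FWHM follows the usual quadrature subtraction of the instrumental resolution; when the observed width does not exceed the instrumental one, the line is unresolved and no value is reported (a sketch of the standard convention, not necessarily gleam’s exact code):

```python
import math

def deconvolved_fwhm(fwhm_obs, fwhm_instr):
    """Intrinsic FWHM from quadrature deconvolution; None when the line is unresolved."""
    if fwhm_obs <= fwhm_instr:
        return None
    return math.sqrt(fwhm_obs ** 2 - fwhm_instr ** 2)
```
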
If plotting is enabled, gleam produces two types of figures. The first type of
figure shows the entire spectrum with zoom-ins on the emission and absorption
line fits (see Figure 1). The second type of plot is focused on each line fit.
Masked sky areas are shaded gray for clarity.
## 4 Example applications
We demonstrate gleam’s capabilities by showcasing two natural applications,
which also served as testbeds during the code development.
### 4.1 A Large, Heterogeneous Sample of Extragalactic Spectra
gleam is well-suited for measuring emission and absorption lines in large,
heterogeneous samples of extragalactic 1D spectra. In Stroe & Sobral (2021),
we thoroughly tested gleam on all the spectroscopy available to us, which
included about 4200 passive galaxies, star-forming galaxies, AGN, and quasars
at redshifts from 0 to $\sim 1$. The data were taken with four different
instruments (VLT/VIMOS, WHT/AF2, Keck/DEIMOS, and MMT/Hectospec) that employ
different techniques to achieve MOS capabilities (slits, fibers), under a
range of sky, weather, and seeing conditions. In this project, the focus was
on measuring optical emission lines, such as [Oii] ($\lambda\,3728$ Å),
H$\beta$, [Oiii] ($\lambda\lambda\,4960,\,5007$ Å), H$\alpha$, [Nii]
($\lambda\lambda\,6550,\,6585$ Å), and [Sii]
($\lambda\lambda\,6718,\,6733$ Å), with the spectral resolution for all
instruments being sufficient for separating nearby narrow emission lines.
For the entire sample of 4200 sources, a simple, short configuration file was
sufficient, as illustrated below. The fitting constraints, the set of spectral
lines to be fit, and the sky lines to be masked were the same for the bulk of
the sources. With gleam, it was easy to set a different resolution for each
instrumental setup and, when necessary, add, for example, a different
continuum width (which resulted in more stable fits), turn off sky masking
(when data reduction adequately corrected for the sky absorption), or use a
different line table (when an air versus a vacuum wavelength calibration was
applied to the data). We also set overrides for several individual sources.
For example, we fit the full line list when [Nii] was also present in the data
or when lines were blended. The YAML configuration file for this example
application can be found below and demonstrates all the customization options
that gleam provides. Examples of line fits can be found in Figure 1.
globals:
  sky: line_lists/Sky_bands.fits
  mask_sky: True
  line_table: line_lists/Main_optical_lines.fits
  lines:
    - OII
    - Hb
    - OIII4
    - OIII5
    - Ha
    - NII1
    - SII1
    - SII2
  fitting:
    SN_limit: 2
    tolerance: 26.0 Angstrom
    w: 3.0 Angstrom
    mask_width: 20.0 Angstrom
    cont_width: 70.0 Angstrom
    center: constrained

setups:
  VIMOS:
    resolution: 12.5 Angstrom
    fitting:
      cont_width: 60 Angstrom
  MMT:
    resolution: 6 Angstrom
    line_table: line_lists/rsvao.fits
    mask_sky: False
  Keck:
    resolution: 1 Angstrom
    fitting:
      cont_width: 40 Angstrom
  WHT_R316R:
    resolution: 8.1 Angstrom
  WHT_R600R:
    resolution: 4.4 Angstrom

sources:
  …
  A115.MMT.Q1_EXT1.194:
    lines: all
  …
Without plotting, the fitting of 10 spectral lines (with H$\alpha$+[Nii] and
the [Sii] doublet fit jointly) for the sample of $>4000$ sources could be
completed in less than 20 min, using 6 threads on a modern laptop with a 2.9 GHz
6-Core Intel Core i9 processor. With plotting, the process can take up to 3 h.
### 4.2 Integral Field Unit Observations
Very powerful Python wrappers for the analysis of IFU data exist in
the literature (e.g. GIST, Bittner et al., 2019). They are designed to
accomplish complex applications, such as the detailed modeling of absorption
lines in passive galaxies, which require input spectra with good continuum
detections. gleam does not make assumptions on the underlying physics of the
sources, making it a complementary tool for emission-line dominated sources
that do not benefit from high S/N continuum detections.
In Stroe et al. (2020), gleam was used to measure spectral lines in
Gemini/GMOS IFU observations of 5 emission-line dominated cluster galaxies at
$z\sim 0.2$. For analysis with gleam, the IFU cube for each of the sources was
split into individual spaxels. The nature of the project required slight
reinterpretations of the naming convention. The Sample component was set
to the name of the parent galaxy cluster, Pointing was used to specify the
galaxy, while SourceNumber was used to label each spaxel in the IFU.
Coordinates for each spaxel were added to the metadata file to track the
connection between the spaxels and their sky positions with respect to each
galaxy. This is a good example of using the metadata file for storing more
than just the redshift information. With this interpretation of the naming
convention, the YAML configuration for the project could be specified in just
a few lines:
globals:
  mask_sky: False
  line_table: line_lists/rsvao.fits
  lines:
    - Ha
    - NII1
    - SII1
    - SII2
  fitting:
    SN_limit: 3
    tolerance: 26.0 Angstrom
    w: 9.0 Angstrom
    mask_width: 20.0 Angstrom
    cont_width: 70.0 Angstrom
    center: constrained

setups:
  GMOS:
    resolution: 11.4 Angstrom
## 5 Development model
gleam is developed as an open-source project hosted at
https://github.com/multiwavelength/gleam and published under the permissive
BSD-3-Clause License. The authors welcome community contributions in the form
of bug reports and feature suggestions, as well as code contributions under
the same license via the GitHub pull-request system. gleam is written in the
Python programming language, which is a popular choice both within and outside
the astronomical community, with the hope that interested contributors would
find it easy to get started. The project aims to offer an inclusive and
welcoming place for collaboration and has adopted the Astropy Community Code
of Conduct (https://www.astropy.org/code_of_conduct.html).
## 6 Future developments
In the near future, we will explore a number of natural extensions to gleam.
In its first iteration, gleam was envisioned to work out-of-the-box for most
extragalactic science cases, hence the choice of Gaussian models for the
fitting. In the future, we plan to expand the choices for component types with
other models, such as Voigt (e.g. absorption lines towards quasars) or
asymmetric (e.g. Ly$\alpha$ emission) profiles and more complex continuum
models.
We aim to also provide better support for multiple component fits, such as
when both broad and narrow lines are present in an AGN spectrum. In its
iterative refinement of the model, gleam removes Gaussian components in a
sequential fashion. As such, the most complex model that converges is passed
through the S/N criterion to identify detected spectral lines. As evidenced in
Section 4, gleam robustly fits well-separated spectral lines at non-redundant
wavelength separations. For scenarios in which many ($>3$) blended or nearby
lines at lower S/N are jointly fit, sometimes lines are cross-identified.
Incorrect matching/labeling of spectral lines can be avoided by making use of
constraints on the center of each Gaussian component. In the future, a number
of new additions to gleam will ensure successful and correct fits to a wider
variety of science cases. At the moment, line fitting in gleam does not take
into account line ratio predictions from radiative modeling and, as such,
allows for any ratio between spectral lines. The option for a tighter coupling
between line ratios could be desirable for specific science cases or, for
example, low S/N regimes.
Another direction for development would be to investigate other back-ends for
performing the line fitting, e.g. astropy.modeling from Astropy, which was not
available at the time the main code development was occurring for gleam. This
approach would enable a closer integration with the Astropy suite of packages.
Further, as mentioned in Section 4, we tested gleam on a variety of optical
and infrared observations. In the near future, we will test its robustness
when applied to data at other wavelengths, such as radio (e.g. focusing on Hi
observations) and sub-mm observations (e.g. molecular and atomic lines,
especially at high redshift).
The authors are grateful to the referee for their constructive suggestions,
which improved the paper. Andra Stroe gratefully acknowledges the support of a
Clay Fellowship. gleam heavily relies on a number of scientific Python
dependencies, including Astropy, LMFIT, Matplotlib, and NumPy. Its development
makes use of other packages, tools, and services, including git, Poetry, Mypy,
Black, Colorama, pydantic, Click, PyYAML, GitHub, and VSCode. In testing the
software, we made use of observations obtained with the International Gemini
Observatory, with ESO Telescopes at the La Silla Paranal Observatory, with the
William Herschel Telescope, with the W.M. Keck Observatory, and with the MMT
Observatory.
## References
* Astropy Collaboration et al. (2013) Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33, doi: 10.1051/0004-6361/201322068
* Bittner et al. (2019) Bittner, A., Falcón-Barroso, J., Nedelchev, B., et al. 2019, A&A, 628, A117, doi: 10.1051/0004-6361/201935829
* Blanton et al. (2017) Blanton, M. R., Bershady, M. A., Abolfathi, B., et al. 2017, AJ, 154, 28, doi: 10.3847/1538-3881/aa7567
* Cirasuolo et al. (2012) Cirasuolo, M., Afonso, J., Bender, R., et al. 2012, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 8446, Ground-based and Airborne Instrumentation for Astronomy IV, ed. I. S. McLean, S. K. Ramsay, & H. Takami, 84460S, doi: 10.1117/12.925871
* Cui et al. (2012) Cui, X.-Q., Zhao, Y.-H., Chu, Y.-Q., et al. 2012, Research in Astronomy and Astrophysics, 12, 1197, doi: 10.1088/1674-4527/12/9/003
* Dalton et al. (2012) Dalton, G., Trager, S. C., Abrams, D. C., et al. 2012, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 8446, Ground-based and Airborne Instrumentation for Astronomy IV, ed. I. S. McLean, S. K. Ramsay, & H. Takami, 84460P, doi: 10.1117/12.925950
* de Jong et al. (2012) de Jong, R. S., Bellido-Tirado, O., Chiappini, C., et al. 2012, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 8446, Ground-based and Airborne Instrumentation for Astronomy IV, ed. I. S. McLean, S. K. Ramsay, & H. Takami, 84460T, doi: 10.1117/12.926239
* DESI Collaboration et al. (2016) DESI Collaboration, Aghamousa, A., Aguilar, J., et al. 2016, arXiv e-prints, arXiv:1611.00036. https://arxiv.org/abs/1611.00036
* Dressler et al. (2011) Dressler, A., Bigelow, B., Hare, T., et al. 2011, PASP, 123, 288, doi: 10.1086/658908
* Faber et al. (2003) Faber, S. M., Phillips, A. C., Kibrick, R. I., et al. 2003, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 4841, Instrument Design and Performance for Optical/Infrared Ground-based Telescopes, ed. M. Iye & A. F. M. Moorwood, 1657–1669, doi: 10.1117/12.460346
* Fabricant et al. (2005) Fabricant, D., Fata, R., Roll, J., et al. 2005, PASP, 117, 1411, doi: 10.1086/497385
* Garilli et al. (2010) Garilli, B., Fumana, M., Franzetti, P., et al. 2010, PASP, 122, 827, doi: 10.1086/654903
* Ginsburg et al. (2019) Ginsburg, A., Sipőcz, B. M., Brasseur, C. E., et al. 2019, AJ, 157, 98, doi: 10.3847/1538-3881/aafc33
* Harris et al. (2020) Harris, C. R., Millman, K. J., van der Walt, S. J., et al. 2020, Nature, 585, 357, doi: 10.1038/s41586-020-2649-2
* Hook et al. (2004) Hook, I. M., Jørgensen, I., Allington-Smith, J. R., et al. 2004, PASP, 116, 425, doi: 10.1086/383624
* Hunter (2007) Hunter, J. D. 2007, Computing in Science & Engineering, 9, 90, doi: 10.1109/MCSE.2007.55
* Jagourel et al. (2018) Jagourel, P., Fitzsimons, E., Hammer, F., et al. 2018, in Ground-based and Airborne Instrumentation for Astronomy VII, ed. C. J. Evans, L. Simard, & H. Takami, Vol. 10702, International Society for Optics and Photonics (SPIE), 3162 – 3171, doi: 10.1117/12.2314135
* Kimura et al. (2010) Kimura, M., Maihara, T., Iwamuro, F., et al. 2010, PASJ, 62, 1135, doi: 10.1093/pasj/62.5.1135
* Kluyver et al. (2016) Kluyver, T., Ragan-Kelley, B., Pérez, F., et al. 2016, in Positioning and Power in Academic Publishing: Players, Agents and Agendas, ed. F. Loizides & B. Scmidt (Netherlands: IOS Press), 87–90. https://eprints.soton.ac.uk/403913/
* Le Fèvre et al. (2003) Le Fèvre, O., Saisse, M., Mancini, D., et al. 2003, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 4841, Instrument Design and Performance for Optical/Infrared Ground-based Telescopes, ed. M. Iye & A. F. M. Moorwood, 1670–1681, doi: 10.1117/12.460959
* McLean et al. (2012) McLean, I. S., Steidel, C. C., Epps, H. W., et al. 2012, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 8446, Ground-based and Airborne Instrumentation for Astronomy IV, ed. I. S. McLean, S. K. Ramsay, & H. Takami, 84460J, doi: 10.1117/12.924794
* Newville et al. (2019) Newville, M., Otten, R., Nelson, A., et al. 2019, lmfit/lmfit-py 1.0.0, 1.0.0, Zenodo, doi: 10.5281/zenodo.3588521
* Pak et al. (2020) Pak, S., DePoy, D. L., Marshall, J. L., et al. 2020, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 11203, Advances in Optical Astronomical Instrumentation 2019, 1120308, doi: 10.1117/12.2547889
* Pazder et al. (2006) Pazder, J. S., Roberts, S., Abraham, R., et al. 2006, in Ground-based and Airborne Instrumentation for Astronomy, ed. I. S. McLean & M. Iye, Vol. 6269, International Society for Optics and Photonics (SPIE), 644 – 655, doi: 10.1117/12.672712
* Sharples et al. (2006) Sharples, R., Bender, R., Bennett, R., et al. 2006, New Astronomy Reviews, 50, 370, doi: 10.1016/j.newar.2006.02.014
* Stroe et al. (2020) Stroe, A., Hussaini, M., Husemann, B., Sobral, D., & Tremblay, G. 2020, ApJ, 905, L22, doi: 10.3847/2041-8213/abcb04
* Stroe & Savu (2020) Stroe, A., & Savu, V.-N. 2020, multiwavelength/gleam: Initial release of gleam, v1.0, Zenodo, doi: 10.5281/zenodo.3974969
* Stroe & Sobral (2021) Stroe, A., & Sobral, D. 2021, ApJ (submitted)
* Thompson et al. (1980) Thompson, A. R., Clark, B. G., Wade, C. M., & Napier, P. J. 1980, ApJS, 44, 151, doi: 10.1086/190688
* Wootten & Thompson (2009) Wootten, A., & Thompson, A. R. 2009, IEEE Proceedings, 97, 1463, doi: 10.1109/JPROC.2009.2020572
# Dual gauge theory formulation of planar quasicrystal elasticity and fractons
Piotr Surówka<EMAIL_ADDRESS>Max Planck Institute for the Physics of
Complex Systems, 01187 Dresden, Germany Department of Theoretical Physics,
Wrocław University of Science and Technology, 50-370 Wrocław, Poland
###### Abstract
Elastic description of planar quasicrystals can be formulated as an interplay
between two Goldstone fields corresponding to phonon and phason degrees of
freedom. We reformulate this description as a gauge theory with one gauge
field that is symmetric under exchange of indices and one that is not. We also
show which topological defects in quasicrystals can be succinctly incorporated
in the dual description and interpret them as fractonic excitations. Finally
we calculate the static interaction potential between defects in a
quasicrystal with fivefold symmetry. This is done in the limit of a small
coupling between phonon and phason stresses.
Solid materials are most commonly represented by crystals, whose atomic
constituents are arranged in a highly ordered microscopic structure, forming a
lattice. Macroscopic description of crystals is provided by the theory of
elasticity that deals with mechanics of bodies modelled as a continuous object
rather than a crystalline lattice of atoms Landau et al. (1986). An important
point is, however, that not all solids are crystals. In fact there are various
classes of solid materials, whose microscopic structure is not a periodic
lattice. Examples include polycrystals, glasses, and quasicrystals. Our
understanding of these materials is not as detailed as that of crystals; both
their microscopic structure and their macroscopic description remain active
fields of study.
Topological defects play a key role in the physics of elastic solids Nabarro
(1987). They are characterized by a discontinuity in the order parameter. In
the context of classical elasticity there are two types of such
discontinuities: dislocations and disclinations. They are crucial in two-
dimensional phase transitions, as first shown by Kosterlitz and Thouless and
applied to solids by Nelson, Halperin, and Young José (2013); Nelson and
Halperin (1979); Young (1979). The transition happens through the
proliferation of topological defects at finite temperature, which results in
thermal melting of the crystal. In elasticity such melting occurs in two
stages. First the dislocations condense, while the disclinations are still
energetically too costly. This is the hexatic state, which is an example of
the quantum nematic order Nelson (2002). In the next stage both dislocations
and disclinations condense, leading to the isotropic state. Although defects
are a classic subject in materials science, the intricate geometric
constructions involved are quite far from the effective field theory framework
usually employed to study phase transitions. A major theoretical insight that
circumvents this issue, by allowing one to easily incorporate defects as
sources in the field theory formulation, was provided by Kleinert Kleinert
(1982, 1983) (for a review see Kleinert (1989, 1995); Beekman et al. (2017a)).
In complete analogy to the particle-vortex duality, he rewrote elasticity as a
theory of a symmetric tensor gauge field. This duality maps defects to matter
fields charged under the dual gauge fields.
Symmetric tensor gauge fields emerge also as a low-energy description of
certain spin liquids Xu (2006); Xu and Hořava (2010); Pretko (2017a, b); You
et al. (2020). Such gauge fields are sourced by matter fields with restricted
mobility, whose presence indicate a new type of topological phase of matter
Nandkishore and Hermele (2019); Pretko et al. (2020). This similarity with
quantum elasticity can be made precise in the language of dualities formulated
earlier. Dislocations are vector charges that can move along their Burgers
vector, while disclinations are fully immobile scalar charges. This behavior
leads to the conclusion that elastic defects are in fact fractons Pretko and
Radzihovsky (2018a). As a result elastic dualities serve to expand our
knowledge on the classical and quantum elasticity as well as to give insights
into new fractonic phases of matter Zaanen et al. (2004); Beekman et al.
(2017b); Pretko and Radzihovsky (2018b); Gromov (2019); Kumar and Potter
(2019); Pretko et al. (2019); Zhai and Radzihovsky (2019); Gromov and Surówka
(2020); Nguyen et al. (2020); Nampoothiri et al. (2020); Fruchart and Vitelli
(2020); Manoj et al. (2020); Zhai and Radzihovsky (2020).
In this paper we intend to study the connection between fractons and
elasticity of quasicrystals. Quasicrystals are solids, with long-range
positional order and no periodicity Levine and Steinhardt (1984); Levine et
al. (1985); Socolar et al. (1986); Baggioli and Landry (2020) (See Fan (2016)
for a review). This means that the elastic description of quasicrystals is
fundamentally different than the one for crystals. The free energy of ordinary
crystals is unchanged under a discrete translation corresponding to the
lattice vectors defining the unit cell of the periodic structure. When the
translation is allowed to vary slowly as a function of the position the free
energy increases. This can be parameterized by a displacement field
$u_{i}(x)$, which describes phonons, Goldstone bosons that are low-energy
collective excitations of the crystal. In quasicrystals the free energy also
remains constant under the global rearrangements of atomic positions that can
be parameterized by an additional vector $w_{i}(x)$. Small fluctuations of
this vector introduce the so-called phason field, or phason strain, capturing
the low-energy collective excitations of the quasicrystal, called phasons.
The main goal of the present work is to reformulate the elastodynamics of
quasicrystals Ding et al. (1993) in terms of gauge theories. Such a
formulation allows for a systematic study of defects and their interactions
and opens up the possibility of investigating two-dimensional defect-mediated
phase transitions in quasicrystals. It also provides a theoretical background
for quantum phases with quasicrystalline symmetries.
Elasticity of quasicrystals.$-$Quasicrystals are characterized by two types of
displacement fields $u_{i}(x)$ and $w_{i}(x)$. The first is analogous to the
phonon displacement and leads to the symmetric strain tensor $u_{ij}=u_{ji}$,
where $u_{ij}=\frac{1}{2}(\partial_{i}u_{j}+\partial_{j}u_{i}).$ Contrary to
the phonon field $u_{ij}$ the phason displacement tensor
$w_{ij}=\partial_{i}w_{j}$ is not symmetric $w_{ij}\neq w_{ji}$. Following the
usual procedure of writing a free energy as an expansion around the zero
displacement $u_{ij}=w_{ij}=0$ one can write down the potential part of the
effective action $S=\int dtd^{2}x\mathcal{L}$ as
$\displaystyle S_{\text{pot}}[u_{i},w_{i}]=-\frac{1}{2}\int
dt\,d^{2}x\,\Big(C^{ijkl}u_{ij}u_{kl}+K^{ijkl}w_{ij}w_{kl}+R^{ijkl}u_{ij}w_{kl}+R^{\prime\,ijkl}w_{ij}u_{kl}\Big)$ (1)
$\displaystyle\equiv-\frac{1}{2}\int
dt\,d^{2}x\left(u_{ij}\;w_{ij}\right)\begin{pmatrix}C_{ijkl}&R_{ijkl}\\
R^{\prime}_{ijkl}&K_{ijkl}\end{pmatrix}\begin{pmatrix}u_{kl}\\
w_{kl}\end{pmatrix},$
where we have introduced four tensors of elastic coefficients parameterizing
different couplings between fluctuations. We always assume a summation over
repeated indices, which run over spatial coordinates $x$ and $y$ as we focus
on two-dimensional systems. It is important to note that the phonon elastic
tensor $C_{ijkl}$ possesses both minor and major symmetries, while the
phonon-phason couplings, represented by $R^{ijkl}$ and $R^{\prime ijkl}$, have
a minor symmetry only in the pair of indices contracted with the phonon field.
Finally, $K^{ijkl}$ has neither minor nor major symmetries. The potential part
of the effective action can be supplemented by the kinetic part
$S_{\text{kin}}[u_{i},w_{i}]=\int
dtd^{2}x\Big{[}\dot{u}_{i}\dot{u}_{i}+\dot{w}_{i}\dot{w}_{i}\Big{]}\,.$ (2)
It is well known that the phason field is diffusive Lubensky et al. (1985);
Francoual et al. (2003); therefore, the above elastodynamics describes either
the short-time behavior of classical quasicrystals or systems at zero temperature.
Several realizations of quantum quasicrystalline symmetry have been proposed
Gopalakrishnan et al. (2013); Kraus et al. (2013); Tran et al. (2015); Sagi
and Nussinov (2016); Bandres et al. (2016); Huang and Liu (2018); Ahn et al.
(2018); Varjas et al. (2019). The partition function for quasicrystals
elasticity reads
$Z=\int
Du^{i}Dw^{i}e^{iS_{\text{kin}}[u_{i},w_{i}]+iS_{\text{pot}}[u^{i},w^{i}]}\,.$
(3)
Our intention is to perform the duality transformation on the above action. In
order to do that we need to write it first in terms of stress variables. There
are two equivalent ways of doing that. One can directly perform the Hubbard-
Stratonovich transformation or write down generalized Hooke’s laws
$\displaystyle T_{ij}$ $\displaystyle=-\frac{\partial\mathcal{L}}{\partial
u_{ij}}=C_{ijkl}u_{kl}+R_{ijkl}w_{kl}$ (4a) $\displaystyle H_{ij}$
$\displaystyle=-\frac{\partial\mathcal{L}}{\partial
w_{ij}}=K_{ijkl}w_{kl}+R^{\prime}_{ijkl}u_{kl},$ (4b)
solve for displacements in terms of stresses $T_{ij}$ and $H_{ij}$ and express
the action in terms of these variables. To write down (4a) and (4b) we use the
fact that $R^{\prime}_{klij}=R_{ijkl}$. These transformations bring the action
to the following form
$\displaystyle S[T_{ij},H_{ij},u_{i},w_{j}]$ $\displaystyle=\int
dtd^{2}x\frac{1}{2}\Big{[}P_{i}P^{i}+\mathcal{P}_{i}\mathcal{P}^{i}\,$
$\displaystyle+\left(T_{ij}\,H_{ij}\right)\begin{pmatrix}C_{ijkl}&R_{ijkl}\\\
R^{\prime}_{ijkl}&K_{ijkl}\end{pmatrix}^{-1}\begin{pmatrix}T_{kl}\\\
H_{kl}\end{pmatrix}$
$\displaystyle+2u_{i}(\partial_{\mu}T^{i\mu})+2w_{i}(\partial_{\mu}H^{i\mu})\Big{]},$
(5)
where we have introduced momentum operators $P^{i}=T^{i0}$ and
$\mathcal{P}^{i}=H^{i0}$. Greek letters run over spacetime indices. After the
transformation to stress variables displacements act as Lagrange multipliers
for the conservation of momentum.
Elastic duality for quasicrystals.$-$Dualities offer new insights into non-
perturbative physics of strongly correlated systems. In $2+1$ dimensions the
core of dualities lies in the mapping of the Goldstone boson fluctuations
onto appropriate gauge fields. For example particle-vortex duality maps scalar
fields onto $U(1)$ gauge fields and crystal elasticity maps strain
fluctuations into symmetric tensor gauge fields. Elasticity of quasicrystals
generalizes this mapping and introduces two sets of distinct elastic gauge
fields. In order to see this we note that integrating out $u_{i}(x)$ and
$w_{i}(x)$ we obtain two constraints coming from rewriting the action in terms
of stress variables $\delta\left(\partial_{\mu}T^{i\mu}\right)$ and
$\delta\left(\partial_{\mu}H^{i\mu}\right)$. In order to have a dual action
for quasicrystals we resolve these constraints by two tensor gauge fields
$T^{i\mu}=\epsilon^{\mu\nu\rho}\partial_{\nu}A^{i}_{\rho}\,,\qquad
H^{i\mu}=\epsilon^{\mu\nu\rho}\partial_{\nu}\mathcal{A}^{i}_{\rho}\,.$ (6)
We note that a stress tensor with a symmetric spatial part, such as
$T^{i\mu}$, can be resolved by a symmetric tensor field $A_{ij}=A_{ji}$ and a
scalar $\phi$, in complete analogy with crystal elasticity
$P^{i}=\epsilon^{kl}\partial_{k}A^{i}_{l}\,,\qquad
T^{ij}=\epsilon^{jk}(-\partial_{0}A^{i}_{k}+\partial_{k}\partial_{i}\phi)\,,$
(7)
however, the resolution of the constraint for $H_{ij}$ leads to a tensor field
$\mathcal{A}_{ij}\neq\mathcal{A}_{ji}$ that is not symmetric under exchange of
indices and a vector potential $\Phi_{i}$
$\mathcal{P}^{i}=\epsilon^{kl}\partial_{k}\mathcal{A}^{i}_{l}\,,\qquad
H^{ij}=\epsilon^{jk}(-\partial_{0}\mathcal{A}^{i}_{k}+\partial_{k}\Phi^{i})\,.$
(8)
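The curl-type parametrizations above solve the conservation constraints identically. As a quick symbolic check (a sketch using sympy, with an arbitrary smooth gauge potential for one fixed flavor index $i$), one can confirm that $T^{i\mu}=\epsilon^{\mu\nu\rho}\partial_{\nu}A^{i}_{\rho}$ from Eq. (6) gives $\partial_{\mu}T^{i\mu}=0$ by the antisymmetry of the Levi-Civita symbol:

```python
# Check that T^{mu} = eps^{mu nu rho} d_nu A_rho is identically divergence-free.
# Index 0 is time, 1 and 2 are x and y; the flavor index i is suppressed.
import itertools
import sympy as sp

t, x, y = sp.symbols('t x y')
coords = (t, x, y)

# Arbitrary smooth gauge potentials A_rho(t, x, y)
A = [sp.Function(f'A{rho}')(t, x, y) for rho in range(3)]

# T^{mu} = eps^{mu nu rho} d_nu A_rho
T = [sum(sp.LeviCivita(mu, nu, rho) * sp.diff(A[rho], coords[nu])
         for nu, rho in itertools.product(range(3), repeat=2))
     for mu in range(3)]

# d_mu T^{mu} vanishes: symmetric mixed partials contract with antisymmetric eps
div = sum(sp.diff(T[mu], coords[mu]) for mu in range(3))
print(sp.simplify(div))  # prints 0
```

The same cancellation applies verbatim to $H^{i\mu}$ and $\mathcal{A}^{i}_{\rho}$.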
In analogy with Maxwell electrodynamics we can define electric and magnetic
fields
$B^{i}=\epsilon^{kl}\partial_{k}A^{i}_{l}\,,\qquad
E^{i}_{j}=\epsilon^{i}{}_{k}(-\partial_{0}A^{k}_{j}+\partial_{j}\partial_{k}\phi)\,.$
(9)
$\mathcal{B}^{i}=\epsilon^{kl}\partial_{k}\mathcal{A}^{i}_{l}\,,\qquad\mathcal{E}^{i}_{j}=\epsilon^{i}{}_{k}(-\partial_{0}\mathcal{A}^{k}_{j}+\partial_{j}\Phi^{k})\,.$
(10)
These fields are invariant under the following gauge transformations
$\delta
A_{ij}=\partial_{i}\partial_{j}\alpha\,,\qquad\delta\phi=\dot{\alpha}\,,$ (11)
$\delta\mathcal{A}_{ij}=\partial_{j}\beta_{i}\,,\qquad\delta\Phi_{i}=\dot{\beta}_{i}\,.$
(12)
Already at this stage we see that the gauge structure for the symmetric stress
tensor is the same as in scalar fracton theories and the asymmetric stress
field leads to a vector gauge theory Pretko (2018). Thus quasicrystal
elasticity combines these two degrees of freedom. We can now write the
effective action in the dual formulation
$\displaystyle S_{\text{dual}}$ $\displaystyle=\int
dtd^{2}x\frac{1}{2}\Bigg{[}B_{i}B^{i}+\mathcal{B}_{i}\mathcal{B}^{i}\,$
$\displaystyle+\left(E_{ij}\,\mathcal{E}_{ij}\right)\begin{pmatrix}\tilde{\mathcal{C}}_{ijkl}&\tilde{\mathcal{R}}_{ijkl}\\\
\tilde{\mathcal{R}}^{\prime}_{ijkl}&\tilde{\mathcal{K}}_{ijkl}\end{pmatrix}\begin{pmatrix}E_{kl}\\\
\mathcal{E}_{kl}\end{pmatrix}\Bigg{]}$ (13) $\displaystyle+\int
dtd^{2}x\,\mathcal{L}_{\text{sources}}\,.$
We have introduced a set of tensors that are yet to be fixed by calculating
the proper inverse defined in Eq. (17) below. We note that the entries are not just
the inverses of the original elastic coefficients as we need to invert the
whole matrix of tensors. Tilde denotes index rotations, e.g.
$\tilde{\mathcal{C}}_{ijkl}=\epsilon_{ii^{\prime}}\epsilon_{jj^{\prime}}\epsilon_{kk^{\prime}}\epsilon_{ll^{\prime}}\mathcal{C}^{i^{\prime}j^{\prime}k^{\prime}l^{\prime}}$.
The dual charges, which we later map to defects, couple to the gauge potentials
in the following way
$\mathcal{L}_{\text{sources}}=\phi\rho+A_{ij}J_{ij}+\Phi_{i}\varrho_{i}+\mathcal{A}_{ij}\mathcal{J}_{ij}\,.$
(14)
The last ingredient that we need is the inverse matrix of elastic tensors. The
explicit form of this matrix is in general complicated and depends on the
symmetries of the quasicrystal that we want to study. In order to have a
better intuition about the structure of this matrix, we focus on a sub-class,
for which the tensors of elastic coefficients can be decomposed into a set of
projectors
$\displaystyle C_{ijkl}$
$\displaystyle=c_{0}P^{(0)}_{ijkl}+c_{1}P^{(1)}_{ijkl}+c_{2}P^{(2)}_{ijkl},$
(15a) $\displaystyle K_{ijkl}$
$\displaystyle=k_{0}P^{(0)}_{ijkl}+k_{1}P^{(1)}_{ijkl}+k_{2}P^{(2)}_{ijkl},$
(15b) $\displaystyle R_{ijkl}$
$\displaystyle=r_{0}P^{(0)}_{ijkl}+r_{1}P^{(1)}_{ijkl}+r_{2}P^{(2)}_{ijkl},$
(15c) $\displaystyle R^{\prime}_{ijkl}$
$\displaystyle=r^{\prime}_{0}P^{(0)}_{ijkl}+r^{\prime}_{1}P^{(1)}_{ijkl}+r^{\prime}_{2}P^{(2)}_{ijkl},$
(15d)
where the basis tensors are given by
$\displaystyle P^{(0)}_{ijkl}$
$\displaystyle=\frac{1}{2}\delta_{ij}\delta_{kl},$ (16a) $\displaystyle
P^{(1)}_{ijkl}$
$\displaystyle=\frac{1}{2}(\delta_{ik}\delta_{jl}-\delta_{il}\delta_{jk}),$
(16b) $\displaystyle P^{(2)}_{ijkl}$
$\displaystyle=\frac{1}{2}(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk})-\frac{1}{2}\delta_{ij}\delta_{kl}.$
(16c)
One can check that $P^{m}_{ijab}P^{n}_{abkl}=P^{m}_{ijkl}$ if $m=n$ and zero
otherwise. Such a choice is of course an oversimplification, which will not
apply to all systems. However, it illustrates the proof of concept behind this
construction and the result can be presented in a compact form. In general, in
order to construct a dual formulation for a specific quasicrystal, one has to
specify elastic tensors and then explicitly construct an inverse. The simplest
way to achieve this in two dimensions, for a given set of elastic parameters,
is to pass to the Pauli matrix representation (see e.g. Scheibner et al.
(2020); Banerjee et al. (2020); Fruchart and Vitelli (2020)) and then apply an
algorithm for the inversion of a block matrix Lu and Shiou (2002).
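The orthogonality relation $P^{m}_{ijab}P^{n}_{abkl}=\delta_{mn}P^{m}_{ijkl}$ quoted above, together with the completeness of the basis, can be verified directly; a short numpy sketch of Eqs. (16a)-(16c) in two dimensions:

```python
import numpy as np

d = np.eye(2)  # Kronecker delta in two spatial dimensions

# Projector basis of Eqs. (16a)-(16c), stored as rank-4 arrays P[i,j,k,l]
P0 = 0.5 * np.einsum('ij,kl->ijkl', d, d)
P1 = 0.5 * (np.einsum('ik,jl->ijkl', d, d) - np.einsum('il,jk->ijkl', d, d))
P2 = 0.5 * (np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d)) - P0

# orthogonality: P^m_{ijab} P^n_{abkl} = delta_{mn} P^m_{ijkl}
Ps = [P0, P1, P2]
for m, Pm in enumerate(Ps):
    for n, Pn in enumerate(Ps):
        prod = np.einsum('ijab,abkl->ijkl', Pm, Pn)
        expected = Pm if m == n else np.zeros_like(Pm)
        assert np.allclose(prod, expected)

# completeness: the three projectors sum to the rank-4 identity
Id = np.einsum('ik,jl->ijkl', d, d)
assert np.allclose(P0 + P1 + P2, Id)
```

This decomposition is what makes the inversion of the matrix of elastic tensors tractable channel by channel.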
The inverse matrix is determined by the following matrix equation
$\begin{pmatrix}\mathcal{C}_{ijmn}&\mathcal{R}_{ijmn}\\\
\mathcal{R}^{\prime}_{ijmn}&\mathcal{K}_{ijmn}\end{pmatrix}\begin{pmatrix}C_{ijkl}&R_{ijkl}\\\
R^{\prime}_{ijkl}&K_{ijkl}\end{pmatrix}=\begin{pmatrix}\text{Id}_{ijmn}&0\\\
0&\text{Id}_{ijmn}\end{pmatrix},$ (17)
where appropriate index contractions are understood after the multiplication
of matrices. $\text{Id}_{ijmn}=\delta_{im}\delta_{jn}$ is the identity
operator for four-tensors. For the case considered this is a system of linear
equations for twelve coefficients. The solution in the basis of projectors
reads
$\displaystyle\mathcal{C}_{ijkl}$
$\displaystyle=\frac{k_{0}}{\Delta_{0}}P^{(0)}_{ijkl}+\frac{k_{1}}{\Delta_{1}}P^{(1)}_{ijkl}+\frac{k_{2}}{\Delta_{2}}P^{(2)}_{ijkl},$
(18a) $\displaystyle\mathcal{K}_{ijkl}$
$\displaystyle=\frac{c_{0}}{\Delta_{0}}P^{(0)}_{ijkl}+\frac{c_{1}}{\Delta_{1}}P^{(1)}_{ijkl}+\frac{c_{2}}{\Delta_{2}}P^{(2)}_{ijkl},$
(18b) $\displaystyle\mathcal{R}_{ijkl}$
$\displaystyle=-\frac{r_{0}}{\Delta_{0}}P^{(0)}_{ijkl}-\frac{r_{1}}{\Delta_{1}}P^{(1)}_{ijkl}-\frac{r_{2}}{\Delta_{2}}P^{(2)}_{ijkl},$
(18c) $\displaystyle\mathcal{R}^{\prime}_{ijkl}$
$\displaystyle=-\frac{r^{\prime}_{0}}{\Delta_{0}}P^{(0)}_{ijkl}-\frac{r^{\prime}_{1}}{\Delta_{1}}P^{(1)}_{ijkl}-\frac{r^{\prime}_{2}}{\Delta_{2}}P^{(2)}_{ijkl},$
(18d)
where $\Delta_{m}=c_{m}k_{m}-r_{m}r^{\prime}_{m}$ for each channel labelled by
$m\in\{0,1,2\}$. We note that it may happen that the matrix of elastic
coefficients is not invertible. In this case it is enough to construct the
inverse in the invertible subspace, as in classical elasticity. Moreover, in
analogy with block matrices, not all individual entries have to be invertible
for the total inverse matrix to exist.
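Because the projectors are mutually orthogonal, Eq. (17) decouples into three independent $2\times 2$ inversions, one per channel $m$. A small numerical sketch (with hypothetical coefficient values, not taken from any specific quasicrystal) illustrates the pattern of Eqs. (18a)-(18d):

```python
import numpy as np

# Hypothetical channel coefficients (c_m, k_m, r_m, r'_m) for m = 0, 1, 2
coeffs = [
    (3.0, 2.0, 0.5, 0.5),
    (1.5, 4.0, 0.2, 0.2),
    (2.5, 1.0, 0.7, 0.7),
]

for c, k, r, rp in coeffs:
    block = np.array([[c, r], [rp, k]])   # channel-m block of the matrix in Eq. (17)
    inv = np.linalg.inv(block)
    det = c * k - r * rp
    # the inverse reproduces the channel-wise pattern of Eqs. (18a)-(18d):
    # diagonal entries k/det and c/det, off-diagonal entries -r/det and -r'/det
    assert np.allclose(inv, np.array([[k, -r], [-rp, c]]) / det)
```

A non-invertible channel (vanishing determinant) is simply excluded, which is the invertible-subspace statement made above.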
Defects.$-$Soon after the discovery of quasicrystals, questions about the
nature of defects, their dynamics and interactions became relevant Levine et
al. (1985); Socolar et al. (1986); Lubensky et al. (1986); Bohsung and Trebin
(1987). Several intricate geometric constructions are available, however, the
intrinsic difficulty of these methods prevents us from having definite answers
to all relevant questions about defects. Below we argue that the duality
offers a natural, simple language for a systematic study of topological
singularities in quasicrystals. In the dual language the elastic defects are
mapped to the charges of the gauge theory. In order to see this mapping one
can decompose the phonon and phason displacement fields into regular and
singular parts $u^{i}=u^{i}_{\rm reg}+u^{i}_{\rm sing}$, $w^{i}=w^{i}_{\rm
reg}+w^{i}_{\rm sing}$. Phonon displacement singularities couple to the
conservation of the stress tensor
$\delta S=\int dtd^{2}x\,\,\,\Big{[}u^{i}_{\rm
sing}\partial_{\mu}T^{i\mu}\Big{]}=\int
dtd^{2}x\,\,\,\Big{[}\rho\phi+J^{ij}A_{ij}\Big{]}\,,$ (19)
where $\rho=\partial^{i}\rho_{i}$,
$\rho^{i}=\epsilon^{i}{}_{j}\epsilon^{kl}\partial_{k}\partial_{l}u^{j}_{\rm
sing}$ and $J^{ij}=\epsilon^{i}{}_{n}\epsilon^{\mu\nu
j}\partial_{\mu}\partial_{\nu}u^{n}_{\rm sing}$. In the last equality we
employ an integration by parts. The charge $\rho$ is mapped to the
disclination density
$\rho_{\rm
disc}=\frac{1}{2}\epsilon^{k}{}_{l}\epsilon^{ij}\partial_{i}\partial_{j}\partial_{k}u^{l}_{\rm
sing}\,.$ (20)
We now study the coupling of the phason singularities $w^{i}_{\rm{sing}}$.
Following the same logic as above we find
$\delta S=\int dtd^{2}x\,\,\,\Big{[}w^{i}_{\rm
sing}\partial_{\mu}H^{i\mu}\Big{]}=\int
dtd^{2}x\,\,\,\Big{[}\varrho_{i}\Phi^{i}+\mathcal{J}^{ij}\mathcal{A}_{ij}\Big{]}\,,$
(21)
where we have introduced the matching faults (or stacking faults, as the
phason defects are sometimes dubbed in the literature)
$\varrho^{i}=\epsilon^{i}{}_{j}\epsilon^{kl}\partial_{k}\partial_{l}w^{j}_{\rm
sing}$ (22)
and the current
$\mathcal{J}^{ij}=\epsilon^{i}{}_{n}\epsilon^{\mu\nu
j}\partial_{\mu}\partial_{\nu}w^{n}_{\rm sing}.\,$ (23)
We can now identify the duality mapping between charges and defects for
phasons. Vector charges in the dual theory map to the rotated matching faults
$\epsilon^{i}{}_{j}\varrho^{j}$. With these mappings we can study the Gauss
laws in the theory. They follow from the gauge transformations in the
Hamiltonian formulation of the theory
$\mathcal{H}=\Pi_{ij}\dot{A}_{ij}+\Pi^{H}_{ij}\dot{\mathcal{A}}^{ij}-\mathcal{L},$
(24)
where $\Pi_{ij}=\frac{\delta S}{\delta\dot{A}_{ij}}$ and
$\Pi^{H}_{ij}=\frac{\delta S}{\delta\dot{\mathcal{A}}_{ij}}$ are the canonical
momenta of the phonon and phason fields respectively. Invariance with respect
to the gauge transformations leads to the following generalized Gauss laws
$\partial_{i}\partial_{j}\Pi^{ij}=\rho,$ (25)
$\partial_{i}\Pi^{ij}_{H}=\varrho^{j}.$ (26)
We see that these Gauss laws are exactly the same as the ones constructed for
scalar and vector gauge theories Pretko (2018). As a result we can now
identify defects in quasicrystals as different types of fractons. As far as
the phonon field is concerned the defects are the same as in classical
elasticity. We have disclinations that are immobile and dislocations
corresponding to the disinclination dipoles that can move only along their
Burgers vector. In addition to that the matching faults correspond to vector
charges. The matching faults conserve the dipole moment $D_{i}$, where
$D_{i}=\int x^{i}\partial_{j}\varrho^{j}=0$. Therefore the matching faults are
fractons with restricted mobility. Given this, quasicrystals offer a natural
playground to investigate scalar and vector fracton theories.
Defect potential.$-$We now apply the duality to study the static defect
potential in a simple illustrative example of quasicrystals with fivefold
symmetry in the limit of negligible phonon-phason coupling. The relevant part
of the action reads
$\displaystyle S$ $\displaystyle=\int
dtd^{2}x\frac{1}{2}\Bigg{[}\left(\partial_{i}\partial_{j}\phi\,\partial_{i}\Phi_{j}\right)\begin{pmatrix}\tilde{\mathcal{C}}_{ijkl}&0\\\
0&\tilde{\mathcal{K}}_{ijkl}\end{pmatrix}\begin{pmatrix}\partial_{k}\partial_{l}\phi\\\
\partial_{k}\Phi_{l}\end{pmatrix}$ (27)
$\displaystyle+\phi\rho+\Phi_{i}\varrho_{i}\Bigg{]}\,.$
In the case of planar fivefold symmetry the elastic tensors are given by Ding
et al. (1993)
$\displaystyle C_{ijkl}$
$\displaystyle=\lambda\delta_{ij}\delta_{kl}+\mu(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}),$
(28a) $\displaystyle K_{ijkl}$
$\displaystyle=K_{1}\delta_{ik}\delta_{jl}+K_{2}(\delta_{ij}\delta_{kl}-\delta_{il}\delta_{jk}).$
(28b)
The above tensors can be expanded in the basis of projectors upon the
following identifications: $c_{0}=2(\lambda+\mu)$, $c_{1}=0$, $c_{2}=2\mu$,
$k_{0}=K_{1}+K_{2}$, $k_{1}=K_{1}+K_{2}$, $k_{2}=K_{1}-K_{2}$. Here $\lambda$
and $\mu$ are the first and second Lamé parameters. Integrating out the gauge
fields $\phi$ and $\Phi_{i}$ one arrives at the following expression,
$\mathcal{L}=-\frac{1}{2}\rho_{\text{vec}}^{\text{T}}(-q)\mathcal{V}\rho_{\text{vec}}(q),$
(29)
where
$\rho_{\text{vec}}^{\text{T}}=\left(\rho\,\,\varrho_{1}\,\,\varrho_{2}\right)$.
It gives the static potential between the defects. The disclination potential
is $\mathcal{V}_{\rho\rho}=\frac{4\mu(\lambda+\mu)}{q^{4}(\lambda+2\mu)}$ and
the explicit form of the matching fault potential is given by
$\displaystyle\mathcal{V_{\varrho\varrho}}=$
$\displaystyle\begin{pmatrix}\frac{(K_{1}-K_{2})\left[(K_{1}+K_{2})q_{1}^{2}+2K_{1}q_{2}^{2}\right]}{q^{4}K_{1}}&-\frac{(K_{1}-K_{2})^{2}q_{1}q_{2}}{q^{4}K_{1}}\\\
-\frac{(K_{1}-K_{2})^{2}q_{1}q_{2}}{q^{4}K_{1}}&\frac{(K_{1}-K_{2})\left[2K_{1}q_{1}^{2}+(K_{1}+K_{2})q_{2}^{2}\right]}{q^{4}K_{1}}\end{pmatrix}.$
(30)
In the limit we consider there is no potential between matching faults and
defects in the phonon field. Our computation shows the power of the duality
that greatly simplifies the analysis by matching the defects into charges of
appropriate gauge fields that can be easily dealt with using field theory
methods.
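As a consistency check of the disclination potential quoted above, one can contract the rotated inverse elastic tensor with four momenta, as dictated by Eq. (27), and invert. The sympy sketch below treats the phonon sector only, with the $c_{1}=0$ channel handled by restricting to the invertible subspace (dropping $P^{(1)}$ is harmless here, since it projects out the symmetric combination $q_{i}q_{j}$ anyway):

```python
# Verify V_{rho rho} = 4 mu (lam + mu) / (q^4 (lam + 2 mu)) for the fivefold case.
import itertools
import sympy as sp

lam, mu, q1, q2 = sp.symbols('lambda mu q1 q2', positive=True)
d = sp.eye(2)
idx = range(2)

def P0(i, j, k, l):
    return sp.Rational(1, 2) * d[i, j] * d[k, l]

def P2(i, j, k, l):
    return sp.Rational(1, 2) * (d[i, k] * d[j, l] + d[i, l] * d[j, k]) - P0(i, j, k, l)

# inverse elastic tensor on the invertible subspace: P0/(2(lam+mu)) + P2/(2 mu)
def Cinv(i, j, k, l):
    return P0(i, j, k, l) / (2 * (lam + mu)) + P2(i, j, k, l) / (2 * mu)

eps = sp.Matrix([[0, 1], [-1, 0]])   # two-dimensional Levi-Civita symbol
q = [q1, q2]

# q_i q_j Ctilde_{ijkl} q_k q_l, with the tilde denoting the epsilon rotations
kernel = sum(eps[i, a] * eps[j, b] * eps[k, c] * eps[l, e]
             * Cinv(a, b, c, e) * q[i] * q[j] * q[k] * q[l]
             for i, j, k, l, a, b, c, e in itertools.product(idx, repeat=8))

# integrating out phi inverts the kernel
V = sp.simplify(1 / kernel)
target = 4 * mu * (lam + mu) / ((q1**2 + q2**2)**2 * (lam + 2 * mu))
assert sp.simplify(V - target) == 0
```

The same contraction applied to the $\tilde{\mathcal{K}}$ sector (with only two derivatives on $\Phi_{i}$) produces the matching fault potential matrix.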
Discussion.$-$We have constructed a dual formulation of quasicrystal
elasticity. It has two stress tensors, one symmetric and one not, which
ultimately leads to the dual description in terms of the two tensor gauge
fields. Both these fields couple to fractonic charges dual to dislocations and
disclinations in the phonon sector and to matching faults for phasons. It
follows that quasicrystal elasticity is a natural place where scalar and
vector fracton theories coexist.
The duality offers a way to address open questions about the phase structure
of quasicrystals. It was speculated in the past that a brittle-ductile
transition can be related to a BKT-type transition Kleman (2003). The
dual formulation simplifies an analysis of phase transitions due to defect
proliferation. Thus the construction presented here can be used as a starting
point for a detailed study.
Acknowledgements.$-$PS was supported by the Deutsche Forschungsgemeinschaft
through the Leibniz Program, the cluster of excellence ct.qmat (EXC 2147,
project-id 39085490) and the National Science Centre Sonata Bis grant
2019/34/E/ST3/00405.
## References
* Landau et al. (1986) L. Landau, E. Lifshitz, A. Kosevich, and L. Pitaevskii, _Theory of elasticity_, Theoretical Physics (Butterworth-Heinemann, 1986), ISBN 9780750626330.
* Nabarro (1987) F. R. N. Nabarro, _Theory of Crystal Dislocations (Dover Books on Physics and Chemistry)_ (Dover Pubns, 1987), ISBN 0486654885.
* José (2013) J. V. José, in _40 Years of Berezinskii–Kosterlitz–Thouless Theory_ (World Scientific, 2013), pp. 69–91, URL https://doi.org/10.1142/9789814417648_0002.
* Nelson and Halperin (1979) D. R. Nelson and B. I. Halperin, Phys. Rev. B 19, 2457 (1979), URL https://link.aps.org/doi/10.1103/PhysRevB.19.2457.
* Young (1979) A. P. Young, Phys. Rev. B 19, 1855 (1979), URL https://link.aps.org/doi/10.1103/PhysRevB.19.1855.
* Nelson (2002) D. R. Nelson, _Defects and Geometry in Condensed Matter Physics_ (Cambridge University Press, 2002), ISBN 0521004004.
* Kleinert (1982) H. Kleinert, Physics Letters A 91, 295 (1982).
* Kleinert (1983) H. Kleinert, Physics Letters A 97, 51 (1983).
* Kleinert (1989) H. Kleinert, _Gauge Fields in Condensed Matter_ (World Scientific Publishing Company, 1989), ISBN 9971502119.
* Kleinert (1995) H. Kleinert, in _Formation and Interactions of Topological Defects_ (Springer, 1995), pp. 201–232, URL https://doi.org/10.1007/978-1-4615-1883-9_8.
* Beekman et al. (2017a) A. J. Beekman, J. Nissinen, K. Wu, K. Liu, R.-J. Slager, Z. Nussinov, V. Cvetkovic, and J. Zaanen, Physics Reports 683, 1 (2017a), URL https://doi.org/10.1016/j.physrep.2017.03.004.
* Xu (2006) C. Xu, Phys. Rev. B 74, 224433 (2006), URL https://link.aps.org/doi/10.1103/PhysRevB.74.224433.
* Xu and Hořava (2010) C. Xu and P. Hořava, Phys. Rev. D 81, 104033 (2010), URL https://link.aps.org/doi/10.1103/PhysRevD.81.104033.
* Pretko (2017a) M. Pretko, Phys. Rev. B 95, 115139 (2017a), URL https://link.aps.org/doi/10.1103/PhysRevB.95.115139.
* Pretko (2017b) M. Pretko, Phys. Rev. B 96, 035119 (2017b), URL https://link.aps.org/doi/10.1103/PhysRevB.96.035119.
* You et al. (2020) Y. You, Z. Bi, and M. Pretko, Phys. Rev. Research 2, 013162 (2020), URL https://link.aps.org/doi/10.1103/PhysRevResearch.2.013162.
* Nandkishore and Hermele (2019) R. M. Nandkishore and M. Hermele, Annual Review of Condensed Matter Physics 10, 295 (2019).
* Pretko et al. (2020) M. Pretko, X. Chen, and Y. You, Int. J. Mod. Phys. A 35, 2030003 (2020).
* Pretko and Radzihovsky (2018a) M. Pretko and L. Radzihovsky, Physical Review Letters 120 (2018a), URL https://doi.org/10.1103/physrevlett.120.195301.
* Zaanen et al. (2004) J. Zaanen, Z. Nussinov, and S. I. Mukhin, Annals of Physics 310, 181 (2004).
* Beekman et al. (2017b) A. J. Beekman, J. Nissinen, K. Wu, and J. Zaanen, Physical Review B 96 (2017b), URL https://doi.org/10.1103/physrevb.96.165115.
* Pretko and Radzihovsky (2018b) M. Pretko and L. Radzihovsky, Phys. Rev. Lett. 121, 235301 (2018b), URL https://link.aps.org/doi/10.1103/PhysRevLett.121.235301.
* Gromov (2019) A. Gromov, Phys. Rev. Lett. 122, 076403 (2019), URL https://link.aps.org/doi/10.1103/PhysRevLett.122.076403.
* Kumar and Potter (2019) A. Kumar and A. C. Potter, Phys. Rev. B 100, 045119 (2019), URL https://link.aps.org/doi/10.1103/PhysRevB.100.045119.
* Pretko et al. (2019) M. Pretko, Z. Zhai, and L. Radzihovsky, Phys. Rev. B 100, 134113 (2019).
* Zhai and Radzihovsky (2019) Z. Zhai and L. Radzihovsky, Phys. Rev. B 100, 094105 (2019).
* Gromov and Surówka (2020) A. Gromov and P. Surówka, SciPost Phys. 8, 065 (2020), eprint 1908.06984.
* Nguyen et al. (2020) D. X. Nguyen, A. Gromov, and S. Moroz, SciPost Phys. 9, 076 (2020).
* Nampoothiri et al. (2020) J. N. Nampoothiri, Y. Wang, K. Ramola, J. Zhang, S. Bhattacharjee, and B. Chakraborty, Phys. Rev. Lett. 125, 118002 (2020), URL https://link.aps.org/doi/10.1103/PhysRevLett.125.118002.
* Fruchart and Vitelli (2020) M. Fruchart and V. Vitelli, Phys. Rev. Lett. 124, 248001 (2020), URL https://link.aps.org/doi/10.1103/PhysRevLett.124.248001.
* Manoj et al. (2020) N. Manoj, R. Moessner, and V. B. Shenoy (2020), eprint 2011.11401.
* Zhai and Radzihovsky (2020) Z. Zhai and L. Radzihovsky (2020), eprint 2012.02208.
* Levine and Steinhardt (1984) D. Levine and P. J. Steinhardt, Phys. Rev. Lett. 53, 2477 (1984), URL https://link.aps.org/doi/10.1103/PhysRevLett.53.2477.
* Levine et al. (1985) D. Levine, T. C. Lubensky, S. Ostlund, S. Ramaswamy, P. J. Steinhardt, and J. Toner, Phys. Rev. Lett. 54, 1520 (1985), URL https://link.aps.org/doi/10.1103/PhysRevLett.54.1520.
* Socolar et al. (1986) J. E. S. Socolar, T. C. Lubensky, and P. J. Steinhardt, Phys. Rev. B 34, 3345 (1986), URL https://link.aps.org/doi/10.1103/PhysRevB.34.3345.
* Baggioli and Landry (2020) M. Baggioli and M. Landry, SciPost Phys. 9, 062 (2020).
* Fan (2016) T.-Y. Fan, _Mathematical Theory of Elasticity of Quasicrystals and Its Applications_ (Springer Singapore, 2016), URL https://doi.org/10.1007/978-981-10-1984-5.
* Ding et al. (1993) D.-h. Ding, W. Yang, C. Hu, and R. Wang, Phys. Rev. B 48, 7003 (1993), URL https://link.aps.org/doi/10.1103/PhysRevB.48.7003.
* Lubensky et al. (1985) T. C. Lubensky, S. Ramaswamy, and J. Toner, Phys. Rev. B 32, 7444 (1985), URL https://link.aps.org/doi/10.1103/PhysRevB.32.7444.
* Francoual et al. (2003) S. Francoual, F. Livet, M. de Boissieu, F. Yakhou, F. Bley, A. Létoublon, R. Caudron, and J. Gastaldi, Phys. Rev. Lett. 91, 225501 (2003), URL https://link.aps.org/doi/10.1103/PhysRevLett.91.225501.
* Gopalakrishnan et al. (2013) S. Gopalakrishnan, I. Martin, and E. A. Demler, Phys. Rev. Lett. 111, 185304 (2013), URL https://link.aps.org/doi/10.1103/PhysRevLett.111.185304.
* Kraus et al. (2013) Y. E. Kraus, Z. Ringel, and O. Zilberberg, Phys. Rev. Lett. 111, 226401 (2013), URL https://link.aps.org/doi/10.1103/PhysRevLett.111.226401.
* Tran et al. (2015) D.-T. Tran, A. Dauphin, N. Goldman, and P. Gaspard, Phys. Rev. B 91, 085125 (2015), URL https://link.aps.org/doi/10.1103/PhysRevB.91.085125.
* Sagi and Nussinov (2016) E. Sagi and Z. Nussinov, Phys. Rev. B 94, 035131 (2016), URL https://link.aps.org/doi/10.1103/PhysRevB.94.035131.
* Bandres et al. (2016) M. A. Bandres, M. C. Rechtsman, and M. Segev, Phys. Rev. X 6, 011016 (2016), URL https://link.aps.org/doi/10.1103/PhysRevX.6.011016.
* Huang and Liu (2018) H. Huang and F. Liu, Phys. Rev. Lett. 121, 126401 (2018), URL https://link.aps.org/doi/10.1103/PhysRevLett.121.126401.
* Ahn et al. (2018) S. J. Ahn, P. Moon, T.-H. Kim, H.-W. Kim, H.-C. Shin, E. H. Kim, H. W. Cha, S.-J. Kahng, P. Kim, M. Koshino, et al., Science 361, 782 (2018), URL https://doi.org/10.1126/science.aar8412.
* Varjas et al. (2019) D. Varjas, A. Lau, K. Pöyhönen, A. R. Akhmerov, D. I. Pikulin, and I. C. Fulga, Phys. Rev. Lett. 123, 196401 (2019), URL https://link.aps.org/doi/10.1103/PhysRevLett.123.196401.
* Pretko (2018) M. Pretko, Phys. Rev. B 98, 115134 (2018), URL https://link.aps.org/doi/10.1103/PhysRevB.98.115134.
* Scheibner et al. (2020) C. Scheibner, A. Souslov, D. Banerjee, P. Surówka, W. T. M. Irvine, and V. Vitelli, Nat. Phys. 16, 475 (2020).
* Banerjee et al. (2020) D. Banerjee, V. Vitelli, F. Jülicher, and P. Surówka (2020), eprint 2002.12564.
* Lu and Shiou (2002) T.-T. Lu and S.-H. Shiou, Computers & Mathematics with Applications 43, 119 (2002).
* Lubensky et al. (1986) T. C. Lubensky, S. Ramaswamy, and J. Toner, Phys. Rev. B 33, 7715 (1986), URL https://link.aps.org/doi/10.1103/PhysRevB.33.7715.
* Bohsung and Trebin (1987) J. Bohsung and H.-R. Trebin, Phys. Rev. Lett. 58, 1204 (1987), URL https://link.aps.org/doi/10.1103/PhysRevLett.58.1204.
* Kleman (2003) M. Kleman, European Physical Journal B 31, 315 (2003).
# Classical Novae Masquerading as Dwarf Novae?
Outburst Properties of Cataclysmic Variables with ASAS-SN
A. Kawash Center for Data Intensive and Time Domain Astronomy, Department of
Physics and Astronomy, Michigan State University, East Lansing, MI 48824, USA
L. Chomiuk Center for Data Intensive and Time Domain Astronomy, Department of
Physics and Astronomy, Michigan State University, East Lansing, MI 48824, USA
J. Strader Center for Data Intensive and Time Domain Astronomy, Department of
Physics and Astronomy, Michigan State University, East Lansing, MI 48824, USA
E. Aydi Center for Data Intensive and Time Domain Astronomy, Department of
Physics and Astronomy, Michigan State University, East Lansing, MI 48824, USA
K. V. Sokolovsky Center for Data Intensive and Time Domain Astronomy,
Department of Physics and Astronomy, Michigan State University, East Lansing,
MI 48824, USA Sternberg Astronomical Institute, Moscow State University,
Universitetskii pr. 13, 119992 Moscow, Russia T. Jayasinghe Department of
Astronomy, The Ohio State University, 140 West 18th Avenue, Columbus, OH
43210, USA C. S. Kochanek Department of Astronomy, The Ohio State
University, 140 West 18th Avenue, Columbus, OH 43210, USA Center for
Cosmology and Astroparticle Physics, The Ohio State University, 191 W.
Woodruff Avenue, Columbus, OH 43210, USA P. Schmeer Bischmisheim, Am
Probstbaum 10, 66132 Saarbrücken, Germany K. Z. Stanek Department of
Astronomy, The Ohio State University, 140 West 18th Avenue, Columbus, OH
43210, USA Center for Cosmology and Astroparticle Physics, The Ohio State
University, 191 W. Woodruff Avenue, Columbus, OH 43210, USA K. Mukai CRESST
II and X-ray Astrophysics Laboratory, NASA/GSFC, Greenbelt, MD 20771, USA
Department of Physics, University of Maryland, Baltimore County, 1000 Hilltop
Circle, Baltimore, MD 21250, USA B. Shappee Institute for Astronomy,
University of Hawai‘i at Mānoa, 2680 Woodlawn Dr., Honolulu 96822, USA Z. Way
Department of Astronomy, The Ohio State University, 140 West 18th Avenue,
Columbus, OH 43210, USA C. Basinger Department of Astronomy, The Ohio State
University, 140 West 18th Avenue, Columbus, OH 43210, USA T. W.-S. Holoien
Carnegie Observatories, 813 Santa Barbara Street, Pasadena, CA 91101, USA J.
L. Prieto Núcleo de Astronomía de la Facultad de Ingeniería y Ciencias,
Universidad Diego Portales, Av. Ejército 441, Santiago, Chile Millennium
Institute of Astrophysics, Santiago, Chile
###### Abstract
The unprecedented sky coverage and observing cadence of the All-Sky Automated
Survey for SuperNovae (ASAS-SN) has resulted in the discovery and continued
monitoring of a large sample of Galactic transients. The vast majority of
these are accretion-powered dwarf nova outbursts in cataclysmic variable
systems, but a small subset are thermonuclear-powered classical novae. Despite
improved monitoring of the Galaxy for novae from ASAS-SN and other surveys,
the observed Galactic nova rate is still lower than predictions. One way
classical novae could be missed is if they are confused with the much larger
population of dwarf novae. Here, we examine the properties of 1617 dwarf nova
outbursts detected by ASAS-SN and compare them to classical novae. We find
that the mean classical nova brightens by $\sim$11 magnitudes during outburst,
while the mean dwarf nova brightens by only $\sim$5 magnitudes, with the
outburst amplitude distributions overlapping by roughly 15$\%$. For the first
time, we show that the amplitude of an outburst and the time it takes to
decline by two magnitudes from maximum are positively correlated for dwarf
nova outbursts. For classical novae, we find that these quantities are
negatively correlated, but only weakly, compared to the strong anti-
correlation of these quantities found in some previous work. We show that,
even if located at large distances, only a small number of putative dwarf
novae could be mis-classified classical novae, suggesting that there is minimal
confusion between these populations. Future spectroscopic follow-up of these
candidates can show whether any are indeed classical novae.
Classical novae (251), Dwarf novae (418), Novae (1127), Cataclysmic variable
stars (203), White dwarf stars (1799)
††journal: ApJ
## 1 Introduction
Interacting binary systems that consist of a white dwarf accreting material
from a close companion star are known as Cataclysmic Variables (CVs). The
secondary, usually a low-mass main-sequence star, transfers matter through
Roche-lobe overflow, which forms an accretion disk around the white dwarf (see
Warner 1995 and Hellier 2001 for reviews).
A Dwarf Nova (DN) outburst is a common event that occurs in a CV, and is
generally thought to be caused by a thermal instability in the accretion disk
(see Hameury 2020 for a review). The disk rapidly transitions from neutral to
ionized, leading to a sudden increase in disk viscosity and mass-accretion
rate, and a dramatic brightening of the accretion disk (Hellier, 2001). DNe
are one of the most common types of Galactic transients, with new objects
being discovered generally every week (see Kato et al. 2020 and previous
papers for examples). Individual systems will typically outburst every 20–300
days (Osaki, 2001). The peak absolute magnitude of a DN depends mostly on the
physical size and inclination of the accretion disk, and ranges from
M${}_{V,\mathrm{max}}\approx 7$ to $2$ mag, with long orbital period systems viewed face-on
producing brighter outbursts (Harrison et al., 2004; Patterson, 2011).
A Classical Nova (CN) is another type of event that occurs in a CV, and is
caused by a thermonuclear runaway on the surface of the white dwarf (see Bode
& Evans 2008 for a review). Accreted material builds up on the surface of the
white dwarf over time, until a critical pressure is reached, which triggers
explosive thermonuclear burning and the puffing up and expulsion of the
accreted envelope. Recent studies of CNe in M31 with well-constrained
luminosities show that the absolute magnitude at peak brightness can range
from M${}_{V}\approx-4$ to $-10$ mag, much more luminous than DNe (Shafter,
2017). This is consistent with early estimates of the Galactic nova luminosity
function (McLaughlin, 1945), even with more precise distances from Gaia DR2
(Della Valle & Izzo, 2020). Outbursts of CNe are expected to recur on a
timescale that depends on the white dwarf mass and accretion rate (Yaron et
al., 2005), and if a CN has been observed to erupt more than once, it is
referred to as a recurrent nova. Of the ten recurrent novae known in our
Galaxy, the recurrence time ranges from 10 to 80 years (Schaefer, 2010).
However, in other galaxies, more rapidly recurring novae are being discovered
(Darnley, 2019), with a nova in M31 that has been found to recur every year
(Darnley et al., 2016), and nova LMC 1968 recently exhibiting a four year
recurrence time (Kuin et al., 2020; Page et al., 2020). For the purposes of
this work, we are interested in both recurrent and singular classical novae
and do not distinguish between the two classes, considering them both
thermonuclear-powered CNe.
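The gap between the DN and CN luminosity ranges quoted above is what drives the question of confusion at large distances: with the distance modulus, a luminous but distant CN can land at the same apparent magnitude as a nearby DN. A back-of-the-envelope sketch (the distances here are chosen purely for illustration and are not from this work):

```python
import math

def apparent_mag(M, d_pc):
    """Distance modulus: m = M + 5 log10(d / 10 pc), neglecting extinction."""
    return M + 5 * math.log10(d_pc / 10)

# Peak absolute magnitudes at the extremes of the ranges quoted in the text
M_dn_bright = 2    # brightest dwarf novae
M_cn_faint = -4    # faintest classical novae

# Illustrative distances: a bright DN at 500 pc vs a faint CN at 8 kpc
m_dn = apparent_mag(M_dn_bright, 500)
m_cn = apparent_mag(M_cn_faint, 8000)
print(f"DN: m = {m_dn:.1f}, CN: m = {m_cn:.1f}")  # both come out near m ~ 10.5
```

Extinction, which is neglected here, only widens the range of CN distances that can masquerade as local DN outbursts.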
There have been many estimates of the Galactic nova rate (see Della Valle &
Izzo 2020 for a review), but it remains poorly constrained. Recently,
Shafter (2017) derived a rate of $50_{-23}^{+31}$ novae per year from
Galactic observations and a rate between $\sim 50$ and $\sim 70$ novae per
year from extragalactic observations. These rates, though mutually consistent,
are larger than previous estimates, and significantly higher than the
discovered rate. Historically, amateur astronomers have played a leading role
in the discovery and observations of CN outbursts, and found only a small
fraction of the predicted population. The average number of discovered
Galactic novae increased from about 3 per year in the mid 20th century (when
many discoveries were made visually) to 4 per year in the 1980s and 1990s
(when film photography was often used) to 8 per year in the 2000s and 2010s
(when digital cameras became widely available). Up-to-date lists of novae may
be found at https://asd.gsfc.nasa.gov/Koji.Mukai/novae/novae.html and
https://github.com/Bill-Gray/galnovae. Amateur observers use a variety of
equipment, which often include an astronomical CCD camera attached to a
telephoto lens or small telescope with typical detection limits down to
$V\approx 12$ mag. As amateur observations do not systematically cover the
entire sky down to a well defined limiting magnitude, one explanation for the
discrepancy between the number of predicted and discovered CNe in the Galaxy
is that most CNe eruptions go undiscovered. To test this possibility, a deep
wide-field survey with high observing cadence is needed. Fortunately, such a
survey now exists.
The All-Sky Automated Survey for SuperNovae (ASAS-SN) is the only survey to
date observing the entire night sky with nearly nightly cadence (Shappee et
al., 2014; Kochanek et al., 2017). Early ASAS-SN observations were conducted
at two facilities in Hawaii and Chile, using a V filter with a few day cadence
down to a depth of V $\approx$ 17 mag. In 2017, ASAS-SN added facilities in
Texas, South Africa, and an additional facility in Chile, switched to
observing in a g filter down to a median depth of g $\approx$ 18.5 mag and
became able to observe the entire night sky (including the Galactic plane)
with nearly nightly cadence (Kochanek et al., 2017; Jayasinghe et al., 2020).
The primary goal of ASAS-SN is to discover bright, extragalactic supernovae,
but due to the all-sky nature of the survey, there are a wide variety of
transients discovered, including CV outbursts. The observing capabilities of
ASAS-SN make it uniquely suited for monitoring fast outbursts from CVs
brighter than g $\approx$ 18 mag, considerably deeper than most amateur
observations. Various models from Shafter (2017) predict anywhere from 30 to
110 CNe brighter than V $\approx$ 18 mag in the Galaxy each year, but so far
ASAS-SN observations have yielded no large increase in the number of discovered
novae.
With no increase in the discovery rate, another explanation must exist if the
rate estimates are correct. One possibility is that CNe are being confused
with the more numerous DN outbursts. Due to the high frequency of nearby DN
outbursts, it is not feasible to obtain a classification spectrum of even a
substantial fraction of DN candidate outbursts in the Galaxy. This problem
will only be exacerbated by next generation time domain surveys like the Large
Synoptic Survey Telescope (LSST; Ivezić et al. 2019). The default assumption
for a fainter CV outburst (g${}_{\rm peak}>$ 13 mag) is that it is a DN; this
is usually a safe assumption since a Galactic CN should be very bright, even
when observed on the other side of the Galaxy unless the dust extinction is
very high.
The main goal of this work is to investigate the possibility that some
Galactic CNe are being mistaken for DNe by inspecting the outburst properties
of a large sample of both types of transients. For extragalactic CNe, the
association with a well-studied nearby galaxy means that the CN distances, and
therefore absolute magnitudes, are well constrained. Although the luminosity
function of CNe is reasonably well-measured (e.g., Shafter, 2017), it has
limited utility in the Galaxy where nova distances are usually poorly
constrained; many Galactic CN progenitors are too faint in quiescence or too
distant for an accurate parallax measurement even with Gaia (Schaefer, 2018;
Selvelli & Gilmozzi, 2019). However, photometry of the field prior to outburst
often exists for Galactic events, making it possible to estimate how much the
object brightened during outburst, typically called the outburst amplitude. The
outburst amplitude has the potential to be a powerful discriminant between CNe
and other transients but is relatively little studied.
Significant effort has been invested in understanding the potential
relationship between the absolute magnitude at peak outburst and the rate of
decline for CNe (MMRD; Capaccioli et al. 1989a; della Valle & Livio 1995;
Kasliwal et al. 2011; Shara et al. 2017). There are far fewer studies of the
relationship between outburst amplitude and decline time for CNe. Warner
(1987) found that CNe exhibit outburst amplitudes ranging from 8 to 15
magnitudes in V-band, with large-amplitude CNe fading quickly and small-
amplitude CNe sometimes taking years to fade back to quiescence, and this
relationship has been used to identify potential recurrent nova candidates
from the sample of known CNe (Pagnotta & Schaefer, 2014). Recurrent novae are
expected to occur on massive white dwarfs and have short decline times (Yaron
et al., 2005). Given the large luminosity differences between CNe and DNe,
this relationship could also be useful in identifying potential CNe candidates
hiding in the large population of DNe. DNe typically have outbursts that last
roughly a week with amplitudes of 2–5 mag, much lower than CNe. However, WZ
Sge type DNe (Ortolani et al., 1980; Kato, 2015; Hellier, 2001; Howell et al.,
1995) show rare (once in decades) accretion-powered superoutbursts with
amplitudes reaching 9 magnitudes and lasting for weeks (e.g., Tampo et al.,
2020). Although the vast majority of DNe should have lower outburst amplitudes
than CNe, WZ Sge type superoutbursts could be confused with CNe if only the
outburst amplitude but not the absolute magnitude is known. It is a goal of
this paper to better understand this potential for confusion, and to
investigate possibilities for alleviating it in order to more confidently
identify CNe.
Recently, time-domain surveys have found large numbers of DN outbursts due to
their high frequency. A few examples of these include the Sloan Digital Sky
Survey (SDSS; Gänsicke et al. 2009), the Catalina Real-time Transient Survey
(CRTS; Coppejans et al. 2016), and the Optical Gravitational Lensing
Experiment (OGLE; Mróz et al. 2015). These have resulted in large sample
studies of a multitude of DN outburst properties, but we have found no
previous discussion of the relationship between outburst amplitude and decline
time from maximum for DNe.
In this work, we focus on the relationship between outburst amplitude and
decline time as a potential tool for distinguishing CNe from DNe. To do this,
we estimate the outburst properties of DNe, along with CNe that have erupted
since ASAS-SN started observing in 2013 and compare the two samples. In
Section 2, we describe how the sample of CVs was obtained, how the light
curves were generated, and how the various outburst properties were measured.
In Section 3, we present the outburst properties of the CN and DN populations,
fit the distributions of outburst amplitudes and decline times, measure the
correlation between these two properties, and discuss the observable
differences between the two types of outbursts. We then assess in Section 4
whether CNe could be hiding amongst DNe, and how we can ensure in the future
that the two types of transients are not confused.
## 2 Methods
### 2.1 Catalog
The list of CVs analyzed in this work was obtained from the AAVSO
International Variable Star Index (VSX; Watson et al. 2006), which contains
the most up-to-date and comprehensive list of known CVs, including CVs
discovered by ASAS-SN. VSX was queried using TAPVizieR (Landais et al., 2013)
for any objects flagged as one of the following (variability types are
described at https://www.aavso.org/vsx/index.php?view=about.vartypes):
* •
U Geminorum-type variables (“UG” flag), including all the sub-classes in the
VSX catalog. These are CVs that have been typed as DNe.
* •
DQ Herculis-type variables (“DQ” flag), which are CVs with intermediate-
strength magnetic fields, and are also known as intermediate polars. Given the
right orbital period, accretion rate, and magnetic field strength, these
systems can still produce DN outbursts (Hameury & Lasota, 2017).
* •
CVs of unknown type (“CV” flag). These are often CVs that have recently been
discovered in surveys like ASAS-SN, and which have not yet been assigned a
type in VSX.
A total of 9333 objects had these flags in the VSX catalog at the time the
catalog was queried (December 2019). There were a total of 62 CN outbursts
discovered in the Galaxy between January 2013 and April 2020. The positions of
these CNe, like the sample of DNe, were obtained from VSX.
### 2.2 Light curves
Image-subtraction light curves were generated using ASAS-SN observations for
all objects in our sample following the procedures described in Jayasinghe et
al. (2018, 2019). ASAS-SN light curves for most fields outside of
the Galactic plane span back to 2013. In 2017, ASAS-SN switched from observing
in a V-band filter to a g-band filter and started more regularly monitoring
the Galactic plane. For the purposes of our analysis, g-band is used as the
standard filter; the conversions of V-band measurements to g-band are outlined
in Section A.1. An example of an ASAS-SN light curve for a DN is shown in
Figure 1 and additional light curves are shown in Figure 7, Figure 8, and
Figure A.3.
Figure 1: The ASAS-SN light curve of the dwarf nova ASASSN-14fu. The
$\geq 5\sigma$ detections are shown for V-band and g-band observations in blue
and orange, respectively. The black triangles denote $5\sigma$ upper limits
derived from non-detections, and the gaps in the data are due to seasonal
Solar constraints.
Image-subtraction photometry is preferred over aperture photometry for
studying CV outbursts. Reference images are created using the best images of
each field with outlier rejection before the final average, which
automatically rejects any outbursts. By subtracting this reference image from
individual epochs, any flux at the position of a CV in the individual
observations should be from an outburst. Contaminating flux from nearby stars,
a problem given ASAS-SN’s angular resolution (2 pixel full width at half
maximum = 16 arcseconds), is removed by the subtraction, although bright stars
are not always subtracted cleanly. All the light curves used to measure the
outburst properties were inspected for contamination. Some CVs within a few
pixels of a bright star (g $<$ 14 mag) show artifacts due to reference image
subtraction errors. These were flagged and ultimately dropped, leading to the
elimination of $\sim$ 2% of the sample. Higher resolution photometry of the
environment was provided by the Panoramic Survey Telescope and Rapid Response
System (Pan-STARRS; Chambers et al. 2016).
ASAS-SN light curves for objects near the edge of a detector chip can have
problems. These sources often lie in the overlap regions between fields, so
data from one camera were flagged if the median magnitude was more than two
magnitudes different from other cameras or if the flux limit of the ASAS-SN
image was much lower than expected based on the observation duration.
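The first of the camera checks described above (flagging a camera whose median magnitude is discrepant by more than two magnitudes) can be sketched in a few lines. This is an illustrative reconstruction, not the actual ASAS-SN pipeline: the function and variable names are ours, and the companion flux-limit check is omitted.

```python
from statistics import median

def flag_outlier_cameras(mags_by_camera, threshold=2.0):
    """Flag cameras whose median magnitude differs from the median of
    the remaining cameras' data by more than `threshold` magnitudes."""
    flagged = []
    for cam, mags in mags_by_camera.items():
        others = [v for c, m in mags_by_camera.items() if c != cam for v in m]
        if abs(median(mags) - median(others)) > threshold:
            flagged.append(cam)
    return flagged

# Hypothetical overlap-region source: camera "bd" is ~3 mag discrepant.
lcs = {"ba": [15.1, 15.0, 15.2],
       "bc": [15.3, 14.9, 15.0, 15.1, 15.2],
       "bd": [18.2, 18.1, 18.3]}
print(flag_outlier_cameras(lcs))  # -> ['bd']
```

Using the median rather than the mean keeps the check robust against individual outburst epochs in any one camera's light curve.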
Light curves of Galactic CNe that have erupted since 2013 were also generated
using image-subtraction photometry from ASAS-SN. These data were combined with
V-band observations from the American Association of Variable Star Observers
(AAVSO; Kafka 2020) to increase the cadence and expand the sensitivity of our
analysis for the CNe brighter than the saturation limit of ASAS-SN (g
$\approx$ 10 mag). The AAVSO data were visually inspected, and observations
from individual observers were discarded if they were inconsistent with data
from other contributors. These erroneous observations, though rare, likely
occur when one object is mistaken for another in a crowded field.
### 2.3 Outburst Peak Magnitude and Decline Time
Various aspects of the outburst can be measured directly from the light curve.
To measure the maximum brightness, it is common to smooth the light curves of
CNe (e.g., Burlak & Henden, 2008). This allows jitters and short flares to be
ignored when estimating the peak. However, for the purposes of this work, we
define the peak brightness simply as the brightest observation in the light
curve, as done by Strope et al. (2010). For most objects, the cadence of ASAS-
SN provides observations very close to maximum brightness, but for transients
evolving on a timescale less than a day, the maximum brightness can be
underestimated. Also, outbursts that are discovered immediately after a field
emerges from its Solar conjunction can have significantly underestimated peak
brightness.
Another quantity that we are able to measure directly from the light curve is
the decline time, $t_{2}$, defined as the time in days it takes for the light
curve to decline by two magnitudes from maximum brightness. For DN outbursts,
this is relatively straightforward, as they typically exhibit smooth declines,
though we consider any plateaus in the light curve after maximum brightness to
be part of the decline. CN light curves can exhibit jitters, flares, and cusps
(see Figure A.3 and Strope et al. 2010), which can cause $t_{2}$ to change
depending on the definition (e.g., first decline by two magnitudes versus
final fade by two magnitudes). We define $t_{2}$ as the last time the light
curve drops below two magnitudes from maximum in order to be consistent with
the estimates by Strope et al. (2010).
To measure $t_{2}$, we first assumed that the brightest detection was the peak
of an outburst. Then, we required that the decline have at least two
detections separated by more than one hour to automatically eliminate
satellite trails and asteroids. Next, we required that all data used to
measure $t_{2}$ be brighter than the independently measured quiescent
magnitude (discussed in §A.2). This eliminates objects with outburst
amplitudes less than 2 magnitudes, but was necessary to distinguish outbursts,
CV variability in quiescence, and contamination from nearby bright stars. We
also required that there is no gap between consecutive observations longer
than 40 days to eliminate artificially extended $t_{2}$ values due to Solar
conjunctions. Linear interpolation between the two data points above and below
the two-magnitude threshold was used to estimate $t_{2}$.
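The interpolation step above can be sketched as follows. The function name is ours, and for brevity the sketch omits the duplicate-detection, quiescent-magnitude, and 40-day-gap cuts described in the text.

```python
def measure_t2(times, mags):
    """Estimate t2: the time in days for the light curve to decline by
    two magnitudes from its brightest detection.  Linearly interpolates
    between the last epoch brighter than, and the first epoch fainter
    than, the two-magnitude threshold."""
    ipeak = min(range(len(mags)), key=lambda i: mags[i])  # brightest = min mag
    thresh = mags[ipeak] + 2.0
    t, m = times[ipeak:], mags[ipeak:]
    for j in range(1, len(m)):
        if m[j] > thresh:  # first epoch fainter than the threshold
            frac = (thresh - m[j - 1]) / (m[j] - m[j - 1])
            return t[j - 1] + frac * (t[j] - t[j - 1]) - t[0]
    return None  # decline never tracked down to the threshold

# A toy outburst peaking at g = 12.0 mag (threshold at g = 14.0 mag):
t = [0, 2, 4, 6, 8, 10]
m = [12.0, 12.5, 13.0, 13.5, 14.5, 15.0]
print(measure_t2(t, m))  # -> 7.0
```

Here the light curve crosses the threshold halfway between days 6 and 8, so the interpolated $t_{2}$ is 7.0 days.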
We are able to tightly constrain $t_{2}$ when ASAS-SN observations are able to
detect the outburst below the two-magnitude threshold. In these cases, we
consider this a measurement of $t_{2}$. For some faint and fast outbursts
close to the ASAS-SN detection limit, the outburst decline is not tracked all
the way to the two magnitude threshold, but a subsequent non-detection places
a limit fainter than the two-magnitude threshold. In this case, we take these
two epochs as lower and upper bounds on $t_{2}$.
### 2.4 Outburst Amplitude
Observations of the brightest outburst of a DN from ASAS-SN were combined with
observations from The Pan-STARRS $3\pi$ Steradian Survey (Chambers et al.,
2016) of the same object in quiescence to estimate the amplitude of outburst.
We estimate the quiescent magnitude of the CNe in the same way for those in
the observing field of Pan-STARRS (declination $>-30^{\circ}$); otherwise we
use Gaia DR2 photometry (Gaia Collaboration et al., 2018) for CNe that erupted
after _Gaia_ DR2 observations were completed (2016 May 23). The details of the
quiescent magnitude measurements are discussed in §A.2 and §A.3.
If an object is unambiguously detected in quiescence, we make a measurement of
its outburst amplitude. However, if an object is clearly not detected (no
match within 4 arcseconds for DNe and 2 arcseconds for CNe), we place a
lower limit on its outburst amplitude. The outburst amplitude we estimate is
simply the difference between the peak magnitude of the outburst detected by
ASAS-SN or AAVSO observations and the magnitude obtained from the Pan-STARRS
or Gaia photometry catalogs, after correcting for filter transformations
(§A.1). If a CN was detected immediately after Solar conjunction, it is likely
that the peak brightness was missed (See Figure A.3). We expect this is only
an issue for CNe, since they can still be detected in outburst months after
eruption. For these CNe, we place a lower limit on the outburst amplitude and
an upper limit on $t_{2}$.
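The amplitude bookkeeping above (a measurement when a quiescent counterpart exists, a lower limit otherwise) mirrors the Amp. Flag convention of Table 1 and can be sketched as follows; the function name is ours, and the filter corrections of §A.1 are assumed to have been applied already.

```python
def outburst_amplitude(peak_mag, quiescent_mag=None, survey_limit=None):
    """Outburst amplitude in magnitudes.  Returns (amplitude, flag),
    where flag=True marks a measurement (counterpart detected in
    quiescence) and flag=False marks a lower limit computed from the
    survey's faint completeness limit."""
    if quiescent_mag is not None:
        return quiescent_mag - peak_mag, True
    return survey_limit - peak_mag, False

# Detected in quiescence at g = 20.5 mag, peak at g = 14.1 mag:
print(outburst_amplitude(14.1, quiescent_mag=20.5))  # -> (6.4, True)
# Undetected; an assumed completeness limit of g = 23.2 mag gives a limit:
print(outburst_amplitude(14.1, survey_limit=23.2))   # -> (9.1, False)
```

The 23.2 mag completeness limit in the second call is illustrative, not a value taken from the paper.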
Figure 2: Galactic (top) and equatorial (bottom) coordinate positions of CVs
with outburst property estimates. Dwarf novae are shown in blue, and classical
novae are shown in red. The gap in the data for dwarf novae is due to the
survey limits of Pan-STARRS.
## 3 Results
### 3.1 Detected DN and CN Outbursts
In total, we find 2688 DNe with outbursts that declined by at least two
magnitudes from maximum in the ASAS-SN data, around 30$\%$ of all DNe in VSX.
This does not test the discovery and classification of DNe in ASAS-SN, as we
have only searched for outbursts from known DNe discovered by a variety of
surveys and techniques. In order to be detected in our analysis, a DN needs to
have gone into outburst in a field ASAS-SN regularly monitored (the entire sky
since 2017), and to have reached a peak outburst magnitude in the range g
$\approx$ 10–16 mag (a more detailed analysis of our detection capabilities is
discussed in Section A.4). From the subset of objects with detected outbursts,
1791 objects have declination greater than $-30^{\circ}$, and are therefore in
Pan-STARRS. We are able to unambiguously estimate or place a limit on the
quiescent brightness for 1617 of these. The measured properties of these DNe
are presented in Table 1 (the entirety of which is available online in a
machine-readable format).
Table 1: Outburst Properties of Dwarf Novae
Name | RAJ2000 | DEJ2000 | Peak | Amp. | Amp. Flag | t2 | t2 Flag | t2,low | t2,up
---|---|---|---|---|---|---|---|---|---
| hms | dms | mag | mag | boolean | days | boolean | days | days
ASASSN-18xt | 2:25:06.37 | 8:06:38.6 | 14.1 | 6.4 | 1 | 12.1 | 0 | 10.9 | 16.2
CSS 091106 023638+111157 | 2:36:37.98 | 11:11:56.5 | 15.1 | 4.9 | 1 | 12.9 | 0 | 9.0 | 16.1
TCP J03005508+1802290 | 3:00:55.05 | 18:02:28.7 | 12.1 | 3.4 | 1 | 11.9 | 1 | 11.4 | 11.9
MLS 130110 034256+171739 | 3:42:56.18 | 17:17:40.1 | 16.0 | 4.7 | 1 | 3.8 | 0 | 3.7 | 5.1
MLS 130302 035906+175034 | 3:59:05.90 | 17:50:34.5 | 16.3 | 2.4 | 1 | 18.4 | 0 | 6.0 | 21.0
CSS 081118 041139+232220 | 4:11:38.58 | 23:22:20.3 | 15.0 | 4.6 | 1 | 11.0 | 0 | 7.0 | 13.9
CSS 081107 033104+172540 | 3:31:04.44 | 17:25:40.2 | 15.5 | 4.9 | 1 | 5.9 | 0 | 4.0 | 7.0
CSS 081107 033556+191119 | 3:35:55.78 | 19:11:19.1 | 15.6 | 5.3 | 1 | 14.2 | 0 | 6.9 | 16.1
CSS 090213 033031+201402 | 3:30:31.41 | 20:14:01.2 | 15.6 | 4.3 | 1 | 5.0 | 0 | 4.5 | 5.0
V0701 Tau | 3:44:01.97 | 21:57:07.4 | 15.2 | 6.4 | 1 | 16.3 | 0 | 15.0 | 19.9
Note. — Names, positions, peak apparent brightness, amplitude of outburst, and
t2 for the dwarf novae in our sample. The Amp. Flag column equals 1 when we
are able to make a measurement of the outburst amplitude and 0 when we are
able to place a lower limit. The t2 Flag column is 1 when we are able to
detect the object below the two magnitude threshold and 0 when there is only a
non-detection below this threshold. When t2 Flag = 0, the value listed for t2
is likely larger than the true value. The t2,low column gives the time until
last detection above the two magnitude threshold and the t2,up column gives
the time until the first detection or non-detection below this threshold.
These last two columns are lower and upper limits on t2, respectively. The
first 10 dwarf novae are shown here, and the entirety of this table is
available in a machine-readable format in the electronic paper.
By combining data from ASAS-SN and AAVSO, we are able to measure or place a
limit on $t_{2}$ for 50 CNe. We are able to unambiguously estimate the
quiescent brightness for 40 of these objects. The measured properties of these
CNe are presented in Table A.1 in Section A.5. In order to make a more robust
comparison between DN and CN outbursts, previous CN outburst estimates and
limits were also obtained from Strope et al. (2010). This yielded an
additional 92 CNe for the sample, bringing the total number of CN outbursts
studied to 132.
The positions of both the DNe and CNe are shown in Galactic and equatorial
coordinates in Figure 2. The DNe in our sample are restricted to the Pan-
STARRS observing field, but we also used _Gaia_ to have full sky coverage for
the CNe. The CNe are generally restricted to within several degrees of the
Galactic plane, as expected if CVs track the stellar mass density of the
Galaxy (e.g., Shafter, 2017). However, as DNe are likely to be nearby, they
often appear at higher Galactic latitudes. Without significant dust
extinction, we expect to detect CNe even at the largest Galactic distances,
but we do not expect to detect even the brightest DNe outbursts beyond $\sim$
6 kpc. Since CNe are more luminous, we are still able to detect them down to
latitudes near $b=0^{\circ}$, although dust extinction can obscure CNe at the
lowest latitudes.
### 3.2 Outburst vs. Quiescent Brightness
Figure 3 shows the distribution of the sources by plotting peak outburst
brightness against brightness in quiescence. The peak outburst magnitudes of
CNe are significantly brighter than for DNe, although this is largely due to
selection effects. For DNe, the brightness in outburst were studied using data
solely from ASAS-SN, so we do not include DN outbursts brighter than $\sim$10
mag (the saturation limit of ASAS-SN) in this study. The region with g
$\gtrsim$ 10 mag, where ASAS-SN is saturated, is populated only with CNe
because we rely on AAVSO observations to measure brighter peak magnitudes for
CNe. We also note that ASAS-SN is sensitive to transients as faint as $\sim$18
magnitude, but since we are interested in measuring $t_{2}$, the outburst has
to reach at least two magnitudes brighter than the survey magnitude limit.
This detection range is shown in the non-shaded region in Figure 3. Objects
that are not detected in quiescence are indicated by leftward facing triangles
at the expected 98% completeness limits. Where Darnley et al. (2012)
classified the companion of a CN, we have included the classification as main
sequence (MS), red giant (RG), and sub-giant (SG) stars. Luminous companions
may contribute significantly to the quiescent flux we measure (indeed, novae
with giant companions are found to have bright quiescent magnitudes), which
will lower the estimated outburst amplitude.
Figure 3: Peak magnitude of outburst versus measured brightness in quiescence
for all outbursts discussed in this work. Dwarf nova outbursts are shown in
blue, and classical nova outbursts without companion information are shown in
red. Those classical novae where the companion type is known are denoted as
green X’s, orange stars, and cyan crosses for main sequence, red giant, and
sub-giant companions, respectively. The non-shaded region indicates where our
analysis can measure outburst properties by combining ASAS-SN and Pan-STARRS
observations. Only these surveys were used to study DNe, but AAVSO V-band
observations were utilized to study CNe that peak above the saturation limit
of ASAS-SN. The dashed line shows an outburst amplitude of 8 mag. The
“amplitude-limited” diagonal shaded region shows the requirement that
outbursts in our catalog must have amplitudes $>$2 mag, and quiescent
brightness measurements in this region are likely contaminated by outbursts.
The horizontal gray shaded region at top denotes the saturation limit of ASAS-
SN (g $\lesssim 10$ mag). The horizontal shaded region at bottom signifies two
magnitudes brighter than the sensitivity limit of ASAS-SN (g $\gtrsim 18$
mag). Finally, the vertical shaded region at right represents the saturation
limit of the Pan-STARRS 3$\pi$ survey (g $\lesssim 13$ mag).
### 3.3 Outburst amplitude vs. t2
The amplitude of outburst is shown as a function of
$\log_{10}\left(t_{2}\right)$ in Figure 4 for both CN and DN outbursts. As
expected, the majority of DNe have smaller outburst amplitudes than CNe,
although there is significant overlap for amplitudes of 5–10 mag. We find that
the outburst amplitudes and decline times of both samples are well fit by
normal distributions, with the exception of the decline times of DNe. These
distributions were fit using censored statistics, as a fraction of our
estimates for the amplitude of outburst and $t_{2}$ are limits and are shown
along with histograms of measured values, not including limits, in Figure 4.
For CNe, the normal distribution of the outburst amplitude has a mean and
standard deviation of $\mu=11.43\pm 0.25$ mag and $\sigma=2.57\pm 0.20$ mag,
respectively. This is in comparison with the amplitudes of DNe, where
$\mu=5.13\pm 0.04$ mag and $\sigma=1.55\pm 0.03$ mag. There is a roughly
15$\%$ overlap in the outburst amplitude distributions of CNe and DNe,
suggesting that this property alone is not sufficient to distinguish the two
classes of objects.
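One natural way to quantify the quoted overlap is the overlap coefficient, the integral of the pointwise minimum of the two fitted PDFs; the paper does not state its estimator, so the sketch below is illustrative and uses the quoted point estimates.

```python
import math

def norm_pdf(x, mu, sigma):
    """Gaussian probability density."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

# Fitted outburst-amplitude distributions quoted in Section 3.3.
CN_MU, CN_SIGMA = 11.43, 2.57   # classical novae
DN_MU, DN_SIGMA = 5.13, 1.55    # dwarf novae

# Overlap coefficient: integral of the pointwise minimum of the PDFs,
# evaluated with the trapezoidal rule on a fine grid.
N, LO, HI = 20000, -5.0, 25.0
dx = (HI - LO) / N
vals = [min(norm_pdf(LO + i * dx, CN_MU, CN_SIGMA),
            norm_pdf(LO + i * dx, DN_MU, DN_SIGMA)) for i in range(N + 1)]
overlap = dx * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
print(f"overlap coefficient ~ {overlap:.2f}")  # ~0.12 with this definition
```

This definition yields roughly 0.12, broadly consistent with the ~15% overlap quoted in the text; other reasonable overlap measures give somewhat different values.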
Figure 4: Amplitude of outburst versus the time t2 to decline by two
magnitudes from maximum for both CNe and DNe. A filled circle signifies that
both the outburst amplitude and $t_{2}$ were estimated for that object. A
triangle signifies that a lower limit was placed on the outburst amplitude,
and an open symbol shows the upper limit that was placed on $t_{2}$. Though we
are able to place lower and upper limits on $t_{2}$, we only show the upper
limit for visualization purposes. Blue objects denote DN outbursts, while CNe
analyzed in this work and Strope et al. (2010) are represented with symbols as
in Figure 3. The top and right panels show the distributions of $t_{2}$ and
outburst amplitude, respectively, with DNe shown in blue and CNe shown in red.
The dashed histograms show the distributions of only measured values, not
limits, and the solid lines show the fits to the measured values including
the limits.
The mean of the outburst amplitude distribution for DNe presented in this
paper is larger than other measurements found using transient survey data
alone (Coppejans et al., 2016; Mróz et al., 2015). With typical CCD dynamic
ranges of about 5 magnitudes, it is difficult to detect both the transient
peak and the quiescent system in a single survey unless the amplitude is less
extreme (Drake et al., 2014). By combining ASAS-SN and Pan-STARRS
observations, we are able to measure and place lower limits on outburst
amplitudes as high as 12 magnitudes.
We do not include error bars on Figure 4 for visualization purposes, but the
lower and upper bounds on t2 for each DNe and CNe can be found in Tables 1 and
A.1, respectively. We have not estimated the error on the outburst amplitude
and expect systematics to dominate. The error on the outburst amplitude should
be relatively small in most instances, but for some small fraction of objects,
the outburst amplitude may be significantly underestimated. The peak of the
outburst can be missed if the object declines rapidly or occurs during a time
of lower temporal cadence by ASAS-SN. In addition, for DNe that outburst
frequently, there is a chance that the quiescent magnitude we measure from
Pan-STARRS data is contaminated by outbursts. A more detailed discussion of
possible errors, the sensitivity of our analysis, and possible selection
effects is provided in Section A.4. In considering the results presented in
Tables 1 and A.1, we encourage the reader to take these caveats into
consideration.
Fitting a log-normal distribution to the CN decline times, we find a mean and
standard deviation of $\left\langle t_{2}\right\rangle$ = 18.7 $\pm$ 1.9 days
and 3.2 $\pm$ 0.2 days, respectively. The distribution of $t_{2}$ values for
DNe is not well fit by a single log-normal distribution, but can be described
as a homoscedastic double-log-normal distribution, with mean values equal to
2.4 $\pm$ 0.2 days for 12$\%$ of the sample and 10.5 $\pm$ 0.2 days for the
remaining 88$\%$ of the sample. The common standard deviation is 1.52 $\pm$
0.02 days. Visual inspection of ASAS-SN images of these “fast” outbursts
confirms that the transients are real.
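The double-log-normal fit above can be written down directly, assuming the quoted multiplicative means and common width convert to dex via $\log_{10}$; the names below are ours, and the parameters are the quoted point estimates.

```python
import math

W = [0.12, 0.88]                          # component weights (fast, slow)
MU = [math.log10(2.4), math.log10(10.5)]  # component means (dex)
SIGMA = math.log10(1.52)                  # common width (dex)
NORM = SIGMA * math.sqrt(2.0 * math.pi)

def mixture_pdf(log_t2):
    """Density of log10(t2/days) under the homoscedastic
    double-log-normal model."""
    return sum(w * math.exp(-0.5 * ((log_t2 - m) / SIGMA) ** 2) / NORM
               for w, m in zip(W, MU))

def p_fast(t2_days):
    """Posterior probability that an outburst with decline time t2
    belongs to the fast (~2.4 day) component."""
    x = math.log10(t2_days)
    terms = [w * math.exp(-0.5 * ((x - m) / SIGMA) ** 2)
             for w, m in zip(W, MU)]
    return terms[0] / sum(terms)

print(round(p_fast(2.4), 3))    # a 2.4-day decline: almost surely "fast"
print(round(p_fast(10.5), 3))   # a 10.5-day decline: almost surely not
```

The posterior in `p_fast` gives one way to label an individual outburst as belonging to the fast or slow population under this model.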
The bimodality of the outburst durations in SU UMa dwarf novae is well
documented, with normal outbursts lasting a few days and superoutbursts
lasting roughly two weeks (Warner, 1995; Osaki, 1996). In our analysis, we
only measure $t_{2}$ of the brightest outburst, so we expect our sample to be
biased towards superoutbursts rather than normal outbursts. One explanation
for a short outburst is that the heating wave fails to move fully throughout
the accretion disk of the CV, and the unheated colder region pulls material
from hotter regions of the disk, shutting down the outburst (Smak, 1984). In
the case of superoutbursts, the heating wave reaches the outer edge of the
disk, causing the disk to remain hot for a longer amount of time.
In addition to finding that the distributions are well fit by Gaussians, we
also find strong evidence of a relationship between the amplitude of outburst
and $t_{2}$ for DNe, and modest evidence of an inverse correlation for CNe. We
use censored statistics to measure a linear correlation of the form
$\log_{10}\left(t_{2}\right)=\beta\left(\rm{Amp}-\left<Amp\right>\right)+\alpha$
(1)
where $\rm{Amp}$ is the outburst amplitude and $\left<\rm{Amp}\right>$ is the
mean of only the measured outburst amplitudes ($\left<\rm{Amp}\right>$ = 10.57
for CNe and $\left<\rm{Amp}\right>$ = 4.91 for DNe). For DNe, we exclude the
subset of fast DNe ($\log_{10}\left(t_{2}\right)<0.4$), and we find a fit with
$\alpha$ = 0.980 $\pm$ 0.005 and $\beta$ = 0.061 $\pm$ 0.004. This fit has a
modest intrinsic scatter of $\sigma=0.178\pm 0.004$, and the correlation is
highly significant, roughly 10$\sigma$.
We are unable to find a previous study of our observed correlation between
amplitude and $t_{2}$ in DNe. However, Otulakowska-Hypka et al. (2016) studied
the correlation between outburst duration (the total time of the outburst) and
the amplitude of outburst for DNe. For normal outbursts of SU UMa stars they
found no significant correlation between outburst duration and outburst
amplitude. However, for superoutbursts, they did find evidence for a
correlation. We make no distinction between subtypes of DN outbursts and
expect this sample to contain a higher fraction of superoutbursts since our
measurement is for the brightest observed outburst of an object since 2013 and
excludes the faster outbursts from the fit.
For CNe, we find a less significant (roughly 3$\sigma$) correlation with best-
fit parameters to Equation 1 of $\alpha$ = 1.33 $\pm$ 0.05 and
$\beta=-0.083\pm 0.024$, and a large intrinsic scatter of $\sigma=0.50\pm
0.04$. This fit is shown in Figure 5 along with the predicted correlation
derived from the MMRD relationship for various inclination angles and assuming
an absolute magnitude in quiescence of M${}_{V}=3.8$ (Warner, 1995; Capaccioli
et al., 1989b). Although the correlation for CNe is less significant than for
DNe, the two populations have opposite slopes: amplitude and $t_{2}$ are anti-
correlated for CNe, while they are positively correlated for DNe.
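Equation 1 with the best-fit values for the two populations can be evaluated directly; the container names below are ours, and the numbers are the point estimates quoted above (ignoring their uncertainties and intrinsic scatter).

```python
def predict_t2(amp, alpha, beta, mean_amp):
    """Decline time in days implied by Equation 1:
    log10(t2) = beta * (Amp - <Amp>) + alpha."""
    return 10.0 ** (beta * (amp - mean_amp) + alpha)

# Best-fit parameters quoted in Section 3.3.
DN_FIT = dict(alpha=0.980, beta=0.061, mean_amp=4.91)   # dwarf novae
CN_FIT = dict(alpha=1.33, beta=-0.083, mean_amp=10.57)  # classical novae

# In the 8-mag overlap region the two relations disagree, and because
# the slopes have opposite signs the relations cross (near ~10-11 mag):
print(round(predict_t2(8.0, **DN_FIT), 1))  # ~14.7 days
print(round(predict_t2(8.0, **CN_FIT), 1))  # ~34.9 days
```

For an 8-mag outburst the CN relation predicts a decline roughly twice as long as the DN relation, though the large intrinsic scatter of the CN fit means individual objects can deviate substantially.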
Figure 5: The outburst amplitude versus $\log{t_{2}}$ for the CNe analyzed in
this work. The markers are the same as Figure 4. The red dashed line and the
red shaded region show our best-fit relation for CNe, with values and
uncertainty given in §3.3. The dotted black lines show the expected
theoretical correlation derived from the MMRD relationship, as shown in
Figure 5.4 of Warner (1995).
Warner (1987) noted the substantial scatter in the relation between amplitude
and $t_{2}$ for CNe. He attributed it to observational errors, a random
distribution of inclination angles, and/or nova outbursts depending on
multiple binary parameters (e.g., white dwarf mass, accretion rate, and core
temperature), but the measured values appeared to be in general agreement with
predictions. Yaron et al. (2005) point out that their models predict a
population of low-amplitude CNe ($<$7 mag). Both Warner (1987) and Yaron et
al. (2005) analyzed novae from the Duerbeck (1987) catalog where few low
amplitude novae are present. Our sample includes a larger population of such
low-amplitude CNe (many of which also have small $t_{2}$), likely due to newer
surveys that are more sensitive to fainter and faster CNe. This serves to
steepen the slope of the fit and further increase the variance around the
anti-correlation between amplitude and $t_{2}$, compared to the results of
Warner (1987). Our findings are consistent with the recent results from
Kasliwal et al. (2011) and Shara et al. (2017), who find a class of novae that
deviate from the proposed MMRD relationship.
## 4 Which Dwarf Novae Might Be Mis-classified Classical Novae?
To test the idea that some CNe commonly get mis-characterized as DNe, we
search for possible CN candidates in our sample of DN outbursts. This is
likely the first large-sample analysis of this kind, but there is at least one
example of an ASAS-SN transient that was initially characterized as a DN
candidate but turned out to be a highly reddened CN (ASASSN-20ga; De et al.
2020). Because of their high luminosities, Galactic CN eruptions can only
appear faint (g $\gtrsim$ 12 mag) if they are affected by substantial dust
extinction. For a Galactic transient to be a CN, it must have a peak absolute
magnitude $M_{\rm g,peak}$ brighter than $-4.2$ mag ($M_{\rm g,peak}=-4.2$ mag
is 3$\sigma$ fainter than the mean of the log-normal CN luminosity function
presented in Shafter 2017). The absolute magnitude is tied to the peak
apparent magnitude $m_{\rm g,peak}$ by
$M_{\rm g,peak}=m_{\rm g,peak}-5\log_{10}\left(\frac{d}{10~\mathrm{pc}}\right)-A_{g},$ (2)
where $d$ is the distance of the object in pc and $A_{g}$ is the amount of
extinction. To place an upper limit on how luminous a given transient can
possibly be, we take the peak apparent magnitude of the transient measured
from ASAS-SN, an upper limit on the distance given reasonable constraints, and
the maximum g-band extinction in the direction of the transient from Schlafly
& Finkbeiner (2011).
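As a minimal sketch, the luminosity screen implied by Equation (2) can be written as follows; the function names and the example numbers are hypothetical, while the $-4.2$ mag cut and 30 kpc distance limit are the values quoted above.

```python
import numpy as np

def peak_absolute_mag(m_peak, d_pc, A_g):
    """Equation (2): peak absolute magnitude for distance d (pc) and extinction A_g."""
    return m_peak - 5.0 * np.log10(d_pc / 10.0) - A_g

def could_be_classical_nova(m_peak, d_max_pc, A_g_max, M_cut=-4.2):
    """Most luminous the transient could be: place it at its distance upper
    limit behind all of the line-of-sight dust, then compare to the CN cut."""
    return bool(peak_absolute_mag(m_peak, d_max_pc, A_g_max) <= M_cut)

# Hypothetical transient: g_peak = 15 mag, 30 kpc Galactic limit, A_g = 5 mag
print(could_be_classical_nova(15.0, 30e3, 5.0))  # True: passes the luminosity screen
```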
In our previous analysis, we only considered DNe in the Pan-STARRS field of
view ($\delta>-30^{\circ}$), but here we apply those lessons learned to
inspect all DN outbursts detected in ASAS-SN. This results in 2688 DNe with
outbursts detected by ASAS-SN, spread over the entire sky. For 1039 of the
objects, parallaxes were measured with _Gaia_ at $\geq 3\sigma$ significance,
and for those, we use the 1$\sigma$ upper limits on the distances given in
Bailer-Jones et al. (2018). For those objects without significant distance
estimates in Bailer-Jones et al. (2018), we used a Galactic upper limit of
$d=$ 30 kpc. This is likely too conservative for any direction in the Galaxy,
but a directional upper limit on the distance is beyond the scope of this
work. At this stage, we prioritize completeness over robustness when
identifying candidates, and plan to investigate a more realistic Galactic
distance upper limit as a function of position in Kawash et al. (2021, in
preparation).
We find that 201 (< 10%) objects classified as DNe in our sample could have
$M_{\rm g,peak}\lesssim-4.2$ mag, if they were behind all of the dust
estimated by Schlafly & Finkbeiner (2011), as shown in Figure 6. To be clear,
these are not exact peak absolute magnitude measurements, especially for
objects with no reliable distances (shown in purple) and high extinction, and
are only being used to identify CN candidates. We further rule out objects
where we have measured the outburst amplitude to be $<$ 5 mag, the lower bound
of CN amplitudes based on Figure 4, and objects with more than one detected
outburst in an observing season (bounded by Solar conjunction). Though objects
with multiple outbursts in ASAS-SN data are much more likely to be dwarf novae
than recurrent novae, we cannot rule out the latter. There are no known
recurrent novae in the Galaxy that recur on timescales less than a decade, but
M31 recurrent nova M31N 2008-12a erupts every year (Darnley & Henze, 2019). If
objects like this, dubbed ‘rapid recurrent novae,’ exist in the Galaxy, they
should be less luminous and evolve more quickly than a typical classical nova,
making them easily confused with DNe. Therefore, we only eliminate objects
with multiple outbursts within a single year, so that our search remains
sensitive to rapid recurrent novae. Overall, we find that 94 objects have
outburst amplitudes,
recurrence times, and possibly luminosities consistent with that of a Galactic
classical or recurrent nova.
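Combining the three cuts described above (the Equation 2 luminosity screen, the $\geq 5$ mag amplitude bound, and at most one outburst per observing season), the candidate selection can be sketched as below; the record fields and values are hypothetical.

```python
# Hypothetical record fields mirroring the three cuts described above.
def is_nova_candidate(rec, M_cut=-4.2):
    luminous_enough = rec["M_g_brightest"] <= M_cut    # Equation (2) screen
    big_amplitude = rec["amplitude_mag"] >= 5.0        # lower bound of CN amplitudes
    single_per_season = rec["max_outbursts_per_season"] <= 1
    return luminous_enough and big_amplitude and single_per_season

sample = [
    {"M_g_brightest": -6.1, "amplitude_mag": 7.2, "max_outbursts_per_season": 1},
    {"M_g_brightest": -3.0, "amplitude_mag": 8.0, "max_outbursts_per_season": 1},
    {"M_g_brightest": -5.5, "amplitude_mag": 4.0, "max_outbursts_per_season": 1},
    {"M_g_brightest": -7.0, "amplitude_mag": 9.0, "max_outbursts_per_season": 3},
]
candidates = [r for r in sample if is_nova_candidate(r)]
print(len(candidates))  # 1: only the first record survives all three cuts
```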
Figure 6: The brightest possible peak g-band absolute magnitude an outburst
could have while still being in the Galaxy, as a function of the amount of
g-band extinction. The objects with 3$\sigma$ parallax detections in Gaia are
assumed to be at the distances in Bailer-Jones et al. (2018) and are plotted
in green. The violet points are objects that do not have significant Gaia
parallaxes, so we place a limit on the brightest peak absolute magnitude by
assuming a maximum Galactic distance of 30 kpc. In order to be luminous enough
to be a CN, the absolute magnitude needs to be brighter than M${}_{g}\approx$
$-$4.2 mag, and this cutoff is shown as the dashed black line. The peak
absolute magnitude presented here is not an accurate measurement especially
for objects with no reliable distance estimates and those with large amounts
of dust extinction; the plotted values are only used to select CN candidates.
Many of these objects have been confirmed as DNe through an identification
spectrum or a measurement of the superhump period, but we find that 27 sources
were not confirmed through any method. Quiescent multi-band photometry of
these remaining candidates can provide insights into the distance and ultimately
constrain the luminosity of the transient. In the Galactic plane, a candidate
that is relatively blue is likely close by, and therefore will have a lower
luminosity at peak brightness, suggesting a DN outburst. Conversely, a
candidate that is brighter in redder filters is likely reddened by dust,
implying a higher luminosity and a CN outburst. This strategy—identifying
highly reddened quiescent counterparts—is only possible for candidates within
a few degrees of the Galactic plane and that can be securely matched to multi-
band optical catalogs. As described in §2.4, we find quiescent counterparts in
Pan-STARRS, and also add in coverage of southerly declinations by using the
DeCAPS catalog (Schlafly et al., 2018) (cross matching for both catalogs is as
explained in Section A.3). We use the $griz$ photometry to estimate reddening
by fitting the observed spectral energy distributions of the CVs in question
to de-reddened SDSS CV colors from Kato et al. (2012) by varying the amount of
reddening according to extinction laws (Cardelli et al., 1989; Mathis, 1990).
For each of our candidates, this results in a distribution of extinction
values that are plugged into the three-dimensional all-sky extinction map
stitched together in Bovy et al. (2016) to find a range of plausible
distances. This strategy only worked for 19 of the candidates: those that
could be cross-matched and that lie in fields close to the plane, where the large
amount of dust can constrain the distance. We find that all 19 are consistent
with the peak luminosity of a dwarf nova, and zero are consistent with the
peak luminosity of a classical nova. This analysis will also be useful to shed
light on the nature of CV candidates that are discovered in the future, so it
is made publicly available as an iPython notebook at
https://github.com/amkawash/CV_colors_luminosity
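A simplified version of this reddening estimate is a chi-square scan over a grid of extinction values; the intrinsic colors and extinction coefficients below are rough, Cardelli-like stand-ins (not the actual Kato et al. 2012 values), and the fit compares colors so the unknown distance cancels.

```python
import numpy as np

# Rough A_band/A_V ratios for griz (Cardelli-like, R_V = 3.1); illustrative only.
EXT_COEFF = {"g": 1.19, "r": 0.84, "i": 0.63, "z": 0.46}

# Hypothetical de-reddened CV magnitudes, up to an arbitrary constant.
INTRINSIC = {"g": 0.0, "r": -0.10, "i": -0.15, "z": -0.18}

def best_fit_extinction(observed, av_grid=np.linspace(0.0, 10.0, 1001)):
    """Chi-square scan over A_V, comparing observed colors (relative to g)
    to reddened intrinsic colors."""
    chi2 = []
    for av in av_grid:
        model = {b: INTRINSIC[b] + EXT_COEFF[b] * av for b in EXT_COEFF}
        resid = [(observed[b] - observed["g"]) - (model[b] - model["g"])
                 for b in ("r", "i", "z")]
        chi2.append(np.sum(np.square(resid)))
    return float(av_grid[int(np.argmin(chi2))])

# Fake observed photometry of a CV reddened by A_V = 3:
obs = {b: INTRINSIC[b] + EXT_COEFF[b] * 3.0 for b in EXT_COEFF}
print(best_fit_extinction(obs))  # recovers A_V close to 3.0
```

The recovered extinction distribution can then be compared to a 3D dust map, as described above, to bracket the plausible distance.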
So, for all but 8 of the 2688 DN outbursts detected by ASAS-SN, we find
evidence suggestive of their DN nature. An identification spectrum is needed
to determine if any of the 8 remaining candidates are mis-classified classical
or recurrent novae. These candidates are shown in Table 2 listing equatorial
and Galactic coordinates, the peak g-band apparent magnitude observed in the
ASAS-SN light curve, a limit on the outburst amplitude where measurable,
$t_{2}$, the extinction along the line of sight from Schlafly & Finkbeiner
(2011), an estimate of the outburst recurrence time measured from the ASAS-SN
light curve ($\tau_{R}$), and whether the outburst would be luminous enough to
be a CN at a distance of 10 kpc. The ASAS-SN light curves of all of these
candidates are shown in Figures
7 and 8.
Figure 7: ASAS-SN light curves for 4 of the candidates listed in Table 2. The left column shows all observations of these objects and the right column shows the observations around the brightest outburst. Blue and orange points denote the $\geq 5\sigma$ detections from V-band and g-band observations, respectively, and the black triangles signify the 5$\sigma$ upper limits from non-detections.
Figure 8: Same as Figure 7 for the remaining 4 candidates in Table 2.
Table 2: Classical and Rapid Recurrent Nova Candidates
Name | Right Ascension | Declination | l | b | Peak | Amp. | t2 | Ag | $\tau_{R}$ | 10 kpc
---|---|---|---|---|---|---|---|---|---|---
| (h m s) | (∘ ′ $"$) | (degrees) | (degrees) | (mag) | (mag) | (days) | (mag) | (years) | (bool)
ASASSN-17li | 18:38:22.00 | $-$09:43:47.4 | 22.790 | $-$1.551 | 16.2 | nan | [12.0 - 32.9] | 10.2 | >3 | Y
ASASSN-17ar | 10:01:11.14 | $-$55:11:56.3 | 280.224 | $-$0.018 | 14.4 | nan | 20.7 | 7.6 | >3 | Y
ASASSN-19nf | 14:19:35.09 | $-$59:58:24.0 | 313.755 | 1.033 | 16.1 | $>7.5$ | [7.6 - 15.5] | 19.8 | 0.9 | Y
ASASSN-19am | 09:30:39.31 | $-$54:47:04.3 | 276.609 | $-$2.521 | 16.3 | nan | [10.9 - 17.8] | 7.4 | 0.8 | Y
ASASSN-17lq | 17:29:28.81 | $-$38:02:26.8 | 350.512 | $-$2.042 | 15.4 | $>8.2$ | [9.0 - 11.0] | 10.1 | >3 | Y
ASASSN-19pw | 18:31:05.75 | $-$14:47:52.6 | 17.469 | $-$2.303 | 15.6 | 6.7 | [14.2 - 16.6] | 6.4 | >4 | Y
ASASSN-17js | 18:21:09.05 | $-$19:24:47.6 | 12.275 | $-$2.347 | 15.0 | 6.5 | [5.9 - 9.2] | 6.9 | >3 | Y
ASASSN-19fd | 17:03:19.29 | $-$29:52:23.3 | 354.045 | 7.099 | 13.5 | nan | 7.9 | 1.4 | >4 | N
## 5 Conclusions
In this work, we characterize the brightest outbursts of 1618 DNe detected by
ASAS-SN and of 93 CNe observed by ASAS-SN and AAVSO contributors. In
general agreement with previous results, we find that the mean outburst
amplitude of CNe is 11.4 magnitudes, significantly larger than the mean DN
outburst of 5.1 magnitudes. However, we find significant overlap in their
distributions, at the $\sim 15\%$ level. Although the outburst amplitude is a
fairly good indicator for determining the nature of a CV outburst, it is clear
that a CV outburst with amplitude in the range 5$-$10 mag is ambiguous in
nature. Similarly, the mean decline time, or $t_{2}$, of CNe is larger than
that of DNe, but a majority of the distributions overlap, especially at lower
values. Because there is an overlap in parameter space between DN outbursts
and CN eruptions, we have presented a technique to identify CN versus DN
candidates based solely on photometric data. This will be a necessary tool to
handle the large number of CV outbursts discovered in the LSST era.
The primary motivation for this work was driven by the possibility that CNe
are being mis-characterized as DNe. To explore this prospect, we have
investigated every DN that declines by two magnitudes from maximum in ASAS-SN.
The majority of outbursts are inconsistent with the luminosity of a CN, but
there is a small fraction that could be bright enough if they are behind most
of the dust along the line of sight. We looked into this subset and found that
only 8 objects (out of 2688) are still ambiguous based on available data. A
classification spectrum will be needed to confirm whether any of these
candidates are CNe mis-classified as DNe, but it is clear that there is no significant
number of Galactic classical novae hiding in the large sample of dwarf novae.
The transient community appears to be doing an effective job classifying CV
outbursts.
Our results suggest that either Galactic nova rate predictions are too high or
there must be factors other than classical nova mis-classification causing the
discrepancy between reported and predicted classical novae. Recent
observations from Palomar Gattini-IR have revealed a sample of highly reddened
and optically missed novae due to Galactic extinction (De et al., 2021). We
plan to explore to what degree interstellar dust affects the ability of
ASAS-SN, and other optical surveys, to discover classical novae.
## Acknowledgments
This research has made use of the International Variable Star Index (VSX)
database, operated at AAVSO, Cambridge, Massachusetts, USA. We acknowledge
with thanks the variable star observations from the _AAVSO International
Database_ contributed by observers worldwide and used in this research.
A.K., L.C., E.A., and K.V.S. acknowledge financial support of NSF award
AST-1751874 and a Cottrell fellowship of the Research Corporation. J.S.
acknowledges support from the Packard Foundation. BJS, CSK, and KZS are
supported by NSF grant AST-1907570. CSK and KZS are supported by NSF grant
AST-181440.
We thank the Las Cumbres Observatory and its staff for its continuing support
of the ASAS-SN project. ASAS-SN is supported by the Gordon and Betty Moore
Foundation through grant GBMF5490 to the Ohio State University, and NSF grants
AST-1515927 and AST-1908570. Development of ASAS-SN has been supported by NSF
grant AST-0908816, the Mt. Cuba Astronomical Foundation, the Center for
Cosmology and AstroParticle Physics at the Ohio State University, the Chinese
Academy of Sciences South America Center for Astronomy (CAS- SACA), and the
Villum Foundation.
The analysis for this work was performed primarily in ipython (Perez &
Granger, 2007) using numpy (Oliphant, 2006; Van Der Walt et al., 2011),
Astropy (Price-Whelan et al., 2018), Matplotlib (Hunter, 2007), and scipy
(Virtanen et al., 2020). The Pan-STARRS1 Catalog was accessed using packages
from MAST CasJobs (http://casjobs.sdss.org/CasJobs) developed by the JHU/SDSS
team.
## References
* Alard (2000) Alard, C. 2000, A&AS, 144, 363, doi: 10.1051/aas:2000214
* Alard & Lupton (1998) Alard, C., & Lupton, R. H. 1998, ApJ, 503, 325, doi: 10.1086/305984
* Bailer-Jones et al. (2018) Bailer-Jones, C. A. L., Rybizki, J., Fouesneau, M., Mantelet, G., & Andrae, R. 2018, AJ, 156, 58, doi: 10.3847/1538-3881/aacb21
* Bode & Evans (2008) Bode, M. F., & Evans, A. 2008, Classical Novae, Vol. 43
* Bovy et al. (2016) Bovy, J., Rix, H.-W., Green, G. M., Schlafly, E. F., & Finkbeiner, D. P. 2016, ApJ, 818, 130, doi: 10.3847/0004-637X/818/2/130
* Bruch (1984) Bruch, A. 1984, A&AS, 56, 441
* Burlak & Henden (2008) Burlak, M. A., & Henden, A. A. 2008, Astronomy Letters, 34, 241, doi: 10.1134/S1063773708040038
* Capaccioli et al. (1989a) Capaccioli, M., Della Valle, M., D’Onofrio, M., & Rosino, L. 1989a, AJ, 97, 1622, doi: 10.1086/115104
* Capaccioli et al. (1989b) —. 1989b, AJ, 97, 1622, doi: 10.1086/115104
* Cardelli et al. (1989) Cardelli, J. A., Clayton, G. C., & Mathis, J. S. 1989, ApJ, 345, 245, doi: 10.1086/167900
* Chambers et al. (2016) Chambers, K. C., Magnier, E. A., Metcalfe, N., et al. 2016, arXiv e-prints, arXiv:1612.05560. https://arxiv.org/abs/1612.05560
* Coppejans et al. (2016) Coppejans, D. L., Körding, E. G., Knigge, C., et al. 2016, MNRAS, 456, 4441, doi: 10.1093/mnras/stv2921
* Darnley (2019) Darnley, M. J. 2019, arXiv e-prints, arXiv:1912.13209. https://arxiv.org/abs/1912.13209
* Darnley & Henze (2019) Darnley, M. J., & Henze, M. 2019, arXiv e-prints, arXiv:1909.10497. https://arxiv.org/abs/1909.10497
* Darnley et al. (2012) Darnley, M. J., Ribeiro, V. A. R. M., Bode, M. F., Hounsell, R. A., & Williams, R. P. 2012, ApJ, 746, 61, doi: 10.1088/0004-637X/746/1/61
* Darnley et al. (2016) Darnley, M. J., Henze, M., Bode, M. F., et al. 2016, ApJ, 833, 149, doi: 10.3847/1538-4357/833/2/149
* De et al. (2020) De, K., Hankins, M., Kasliwal, M. M., et al. 2020, The Astronomer’s Telegram, 13790, 1
* De et al. (2021) De, K., Kasliwal, M. M., Hankins, M. J., et al. 2021, arXiv e-prints, arXiv:2101.04045. https://arxiv.org/abs/2101.04045
* Della Valle & Izzo (2020) Della Valle, M., & Izzo, L. 2020, arXiv e-prints, arXiv:2004.06540. https://arxiv.org/abs/2004.06540
* della Valle & Livio (1995) della Valle, M., & Livio, M. 1995, ApJ, 452, 704, doi: 10.1086/176342
* Drake et al. (2014) Drake, A. J., Gänsicke, B. T., Djorgovski, S. G., et al. 2014, MNRAS, 441, 1186, doi: 10.1093/mnras/stu639
* Duerbeck (1987) Duerbeck, H. W. 1987, Space Sci. Rev., 45, 1, doi: 10.1007/BF00187826
* Flewelling et al. (2016) Flewelling, H. A., Magnier, E. A., Chambers, K. C., et al. 2016, arXiv e-prints, arXiv:1612.05243. https://arxiv.org/abs/1612.05243
* Gaia Collaboration et al. (2018) Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2018, A&A, 616, A1, doi: 10.1051/0004-6361/201833051
* Gänsicke et al. (2009) Gänsicke, B. T., Dillon, M., Southworth, J., et al. 2009, MNRAS, 397, 2170, doi: 10.1111/j.1365-2966.2009.15126.x
* Hameury (2020) Hameury, J. M. 2020, Advances in Space Research, 66, 1004, doi: 10.1016/j.asr.2019.10.022
* Hameury & Lasota (2017) Hameury, J. M., & Lasota, J. P. 2017, A&A, 602, A102, doi: 10.1051/0004-6361/201730760
* Harrison et al. (2004) Harrison, T. E., Johnson, J. J., McArthur, B. E., et al. 2004, AJ, 127, 460, doi: 10.1086/380228
* Hellier (2001) Hellier, C. 2001, Cataclysmic Variable Stars
* Howell et al. (1995) Howell, S. B., Szkody, P., & Cannizzo, J. K. 1995, ApJ, 439, 337, doi: 10.1086/175177
* Hunter (2007) Hunter, J. D. 2007, Computing in Science & Engineering, 9, 90, doi: 10.1109/MCSE.2007.55
* Ivezić et al. (2019) Ivezić, Ž., Kahn, S. M., Tyson, J. A., et al. 2019, ApJ, 873, 111, doi: 10.3847/1538-4357/ab042c
* Jayasinghe et al. (2018) Jayasinghe, T., Kochanek, C. S., Stanek, K. Z., et al. 2018, MNRAS, 477, 3145, doi: 10.1093/mnras/sty838
* Jayasinghe et al. (2019) Jayasinghe, T., Stanek, K. Z., Kochanek, C. S., et al. 2019, MNRAS, 486, 1907, doi: 10.1093/mnras/stz844
* Jayasinghe et al. (2020) —. 2020, MNRAS, 493, 4186, doi: 10.1093/mnras/staa499
* Kafka (2020) Kafka, S. 2020, Observations from the AAVSO International Database, https://www.aavso.org
* Kasliwal et al. (2011) Kasliwal, M. M., Cenko, S. B., Kulkarni, S. R., et al. 2011, ApJ, 735, 94, doi: 10.1088/0004-637X/735/2/94
* Kato (2015) Kato, T. 2015, PASJ, 67, 108, doi: 10.1093/pasj/psv077
* Kato et al. (2012) Kato, T., Maehara, H., & Uemura, M. 2012, PASJ, 64, 63, doi: 10.1093/pasj/64.3.63
* Kato et al. (2020) Kato, T., Isogai, K., Wakamatsu, Y., et al. 2020, PASJ, 72, 14, doi: 10.1093/pasj/psz134
* Kochanek et al. (2017) Kochanek, C. S., Shappee, B. J., Stanek, K. Z., et al. 2017, PASP, 129, 104502, doi: 10.1088/1538-3873/aa80d9
* Kostov & Bonev (2018) Kostov, A., & Bonev, T. 2018, Bulgarian Astronomical Journal, 28, 3. https://arxiv.org/abs/1706.06147
* Kuin et al. (2020) Kuin, N. P. M., Page, K. L., Mróz, P., et al. 2020, MNRAS, 491, 655, doi: 10.1093/mnras/stz2960
* Landais et al. (2013) Landais, G., Ochsenbein, F., & Simon, A. 2013, Astronomical Society of the Pacific Conference Series, Vol. 475, TAPVizieR: A New Way to Access the VizieR Database, ed. D. N. Friedel, 227
* Lasker et al. (2008) Lasker, B. M., Lattanzi, M. G., McLean, B. J., et al. 2008, AJ, 136, 735, doi: 10.1088/0004-6256/136/2/735
* Mathis (1990) Mathis, J. S. 1990, ARA&A, 28, 37, doi: 10.1146/annurev.aa.28.090190.000345
* Mclaughlin (1945) Mclaughlin, D. B. 1945, PASP, 57, 69, doi: 10.1086/125689
* Mróz et al. (2015) Mróz, P., Udalski, A., Poleski, R., et al. 2015, Acta Astron., 65, 313. https://arxiv.org/abs/1601.02617
* Munari et al. (2011) Munari, U., Siviero, A., Dallaporta, S., et al. 2011, New A, 16, 209, doi: 10.1016/j.newast.2010.08.010
* Oliphant (2006) Oliphant, T. E. 2006, A guide to NumPy, Vol. 1 (Trelgol Publishing USA)
* Ortolani et al. (1980) Ortolani, S., Rafanelli, P., Rosino, L., & Vittone, A. 1980, A&A, 87, 31
* Osaki (1996) Osaki, Y. 1996, PASP, 108, 39, doi: 10.1086/133689
* Osaki (2001) —. 2001, Astronomical Society of the Pacific Conference Series, Vol. 245, Dwarf Nova and Accretion Disks, ed. T. von Hippel, C. Simpson, & N. Manset, 57
* Otulakowska-Hypka et al. (2016) Otulakowska-Hypka, M., Olech, A., & Patterson, J. 2016, MNRAS, 460, 2526, doi: 10.1093/mnras/stw1120
* Page et al. (2020) Page, K. L., Kuin, N. P. M., & Darnley, M. J. 2020, The Astronomer’s Telegram, 13731, 1
* Pagnotta & Schaefer (2014) Pagnotta, A., & Schaefer, B. E. 2014, ApJ, 788, 164, doi: 10.1088/0004-637X/788/2/164
* Patterson (2011) Patterson, J. 2011, MNRAS, 411, 2695, doi: 10.1111/j.1365-2966.2010.17881.x
* Perez & Granger (2007) Perez, F., & Granger, B. E. 2007, Computing in Science Engineering, 9, 21
* Price-Whelan et al. (2018) Price-Whelan, A. M., Sipőcz, B. M., Günther, H. M., et al. 2018, AJ, 156, 123, doi: 10.3847/1538-3881/aabc4f
* Schaefer (2010) Schaefer, B. E. 2010, ApJS, 187, 275, doi: 10.1088/0067-0049/187/2/275
* Schaefer (2018) —. 2018, MNRAS, 481, 3033, doi: 10.1093/mnras/sty2388
* Schlafly & Finkbeiner (2011) Schlafly, E. F., & Finkbeiner, D. P. 2011, ApJ, 737, 103, doi: 10.1088/0004-637X/737/2/103
* Schlafly et al. (2018) Schlafly, E. F., Green, G. M., Lang, D., et al. 2018, ApJS, 234, 39, doi: 10.3847/1538-4365/aaa3e2
* Selvelli & Gilmozzi (2019) Selvelli, P., & Gilmozzi, R. 2019, A&A, 622, A186, doi: 10.1051/0004-6361/201834238
* Shafter (2017) Shafter, A. W. 2017, ApJ, 834, 196, doi: 10.3847/1538-4357/834/2/196
* Shappee et al. (2014) Shappee, B. J., Prieto, J. L., Grupe, D., et al. 2014, ApJ, 788, 48, doi: 10.1088/0004-637X/788/1/48
* Shara et al. (2017) Shara, M. M., Doyle, T., Lauer, T. R., et al. 2017, ApJ, 839, 109, doi: 10.3847/1538-4357/aa65cd
* Smak (1984) Smak, J. 1984, Acta Astron., 34, 161
* Strope et al. (2010) Strope, R. J., Schaefer, B. E., & Henden, A. A. 2010, AJ, 140, 34, doi: 10.1088/0004-6256/140/1/34
* Tampo et al. (2020) Tampo, Y., Naoto, K., Isogai, K., et al. 2020, PASJ, 72, 49, doi: 10.1093/pasj/psaa043
* Van Der Walt et al. (2011) Van Der Walt, S., Colbert, S. C., & Varoquaux, G. 2011, Computing in Science & Engineering, 13, 22
* Virtanen et al. (2020) Virtanen, P., Gommers, R., Oliphant, T. E., et al. 2020, Nature Methods, 17, 261, doi: https://doi.org/10.1038/s41592-019-0686-2
* Warner (1987) Warner, B. 1987, MNRAS, 227, 23, doi: 10.1093/mnras/227.1.23
* Warner (1995) —. 1995, Cambridge Astrophysics Series, 28
* Watson et al. (2006) Watson, C. L., Henden, A. A., & Price, A. 2006, Society for Astronomical Sciences Annual Symposium, 25, 47
* Yaron et al. (2019a) Yaron, O., Gal-Yam, A., Ofek, E., & Sass, A. 2019a, Transient Name Server AstroNote, 37, 1
* Yaron et al. (2019b) Yaron, O., Gal-Yam, A., Ofek, E., Sass, A., & Knezevic, N. 2019b, Transient Name Server AstroNote, 15, 1
* Yaron et al. (2005) Yaron, O., Prialnik, D., Shara, M. M., & Kovetz, A. 2005, ApJ, 623, 398, doi: 10.1086/428435
* York et al. (2000) York, D. G., Adelman, J., Anderson, John E., J., et al. 2000, AJ, 120, 1579, doi: 10.1086/301513
## Appendix A
### A.1 Filter Transformations
The photometry utilized in this work makes use of a range of blue-green
filters: the V filter used by AAVSO observers, the ASAS-SN V filter, the ASAS-
SN g filter, the Pan-STARRS gP1 filter, and the Gaia GBP filter. Although all
of these filters are centered around a similar wavelength, the flux of a
source in each filter band can be different, especially for reddened objects.
To account for this, all V-band observations were transformed to g-band using
$\textit{V}-\textit{g}=-0.017-0.508\times\left(\textit{g}-\textit{r}\right),$ (A1)
when gP1 and rP1 observations were available (Kostov & Bonev, 2018). Pan-
STARRS only provides estimates of colors in quiescence, so to transform V-band
data in outburst to g-band, we assume typical CV colors in quiescence (B$-$V =
0.1; Bruch 1984) and typical color changes during outburst ($\Delta$(B$-$V) =
$-$0.1 for DNe and $\Delta$(B$-$V) = 0.13 for CNe; Warner 1995). These rough
color estimates were converted from B and V to g and r using equation A1 and
additional filter transformations from Kostov & Bonev (2018). This implies an
intrinsic color of g$-$r = $-$0.29 for DNe and $-$0.08 for CNe. We then use
the measured Pan-STARRS colors to estimate the reddening and ultimately the
observed g $-$ r color in outburst.
For objects without color information in Pan-STARRS, we have to make
additional assumptions. For DNe, we assume g${}_{P1}-$rP1 = 0 in quiescence,
but for CNe we assume a g${}_{P1}-$rP1 = 1, as most CNe are more distant and
closer to the Galactic plane, and therefore reddened by dust. Although all of
these color estimates and assumptions are very crude, this transformation only
changes the flux of a typical object by a fraction of a magnitude. However,
for some CNe with high reddening, the transformation applied can exceed a
magnitude. If the reddening is substantially underestimated, the error in the
magnitude shift could be as high as a magnitude, making the source appear
brighter than it actually was in quiescence.
GBP observations of CNe in quiescence were corrected as
$\textit{g}-\textit{G}_{BP}=-0.318+0.932x-0.932x^{2}+0.507x^{3}-0.107x^{4}+0.007x^{5},$ (A2)
where $x=G_{BP}-G_{RP}$, using a polynomial we fit to sources with colors
ranging from $G_{BP}-G_{RP}=0.6$ to $G_{BP}-G_{RP}=3.4$ in both Gaia and
SDSS. This correction is typically a fraction of a magnitude, reaching up to
$\sim$1 magnitude for the most highly reddened objects. We assume any
differences between the ASAS-SN g filter, the Pan-STARRS g filter, and the
SDSS g filter are negligible.
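The two transformations above can be collected into a small helper; Equations (A1) and (A2) are reproduced directly, and the example color is the typical DN outburst value quoted above.

```python
def v_to_g(V, g_minus_r):
    """Equation (A1): V-band to g-band given a g-r color (Kostov & Bonev 2018)."""
    v_minus_g = -0.017 - 0.508 * g_minus_r
    return V - v_minus_g

def gbp_to_g(G_BP, bp_minus_rp):
    """Equation (A2): Gaia G_BP to g via the polynomial in x = G_BP - G_RP
    (fit over 0.6 < x < 3.4)."""
    x = bp_minus_rp
    g_minus_gbp = (-0.318 + 0.932 * x - 0.932 * x**2
                   + 0.507 * x**3 - 0.107 * x**4 + 0.007 * x**5)
    return G_BP + g_minus_gbp

# For a typical DN in outburst (g-r = -0.29), the V -> g shift is small:
print(round(v_to_g(15.0, -0.29) - 15.0, 3))  # about -0.13 mag
```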
### A.2 Quiescent Magnitude Measurements
The Pan-STARRS $3\pi$ Steradian Survey (Chambers et al., 2016) covers the sky
north of declination $\delta=-30^{\circ}$, and the stacked catalog has a
median 5$\sigma$ depth of g${}_{P1}\simeq 23.3$ mag. Quiescent magnitude
measurements were made using the gP1 passband, as this filter is the most
similar to the g-band measurements made by ASAS-SN and avoids the need for any
extinction correction to the outburst amplitude.
The Pan-STARRS 3$\pi$ catalog reaches its full depth and astrometric accuracy
by combining 12 exposures taken between 2009 Jun 2 and 2014 Mar 31 (Chambers
et al., 2016). This is a potential issue for estimating the quiescent
magnitude of DNe, because a substantial fraction of them have multiple
outbursts over this time frame. Therefore, it is possible, and in many cases
highly probable, that the flux of a DN listed in the stack or mean catalog is
contaminated by outbursts and does not accurately reflect the quiescent
brightness of the source. To minimize the possibility of this issue, flux
measurements were obtained from the ForcedWarpMeasurement table from Pan-
STARRS Data Release 2. This table contains single epoch forced photometry
measurements at the position of objects detected in the stacked images
(Flewelling et al., 2016). For each source, the quiescent magnitude was
estimated from the median flux from the faintest 50$\%$ of gP1 observations.
This increases the chance that observations contaminated by outbursts were
excluded, and we only measure the quiescent brightness for objects where the
median absolute deviation of the apparent magnitude measurements is less than
0.9 mag to avoid estimates that are still likely contaminated.
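A sketch of this estimator is below; the text does not spell out exactly which epochs enter the median-absolute-deviation cut, so here it is applied to the faintest 50% of measurements, and the example light curve is hypothetical.

```python
import numpy as np

def quiescent_magnitude(mags, mad_limit=0.9):
    """Median of the faintest 50% of single-epoch measurements; reject the
    estimate when the scatter of those epochs (median absolute deviation)
    suggests lingering outburst contamination."""
    mags = np.sort(np.asarray(mags, dtype=float))   # ascending: brightest first
    faintest_half = mags[len(mags) // 2:]           # larger magnitude = fainter
    mad = np.median(np.abs(faintest_half - np.median(faintest_half)))
    if mad >= mad_limit:
        return None                                 # likely still contaminated
    return float(np.median(faintest_half))

# Hypothetical light curve: quiescence near 20 mag plus two outburst epochs
mags = [20.1, 20.0, 19.9, 20.2, 16.5, 15.8, 20.05, 19.95]
print(quiescent_magnitude(mags))  # close to 20.1 mag
```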
If no source is present in the Pan-STARRS catalog (details of cross matching
in Section A.3), an upper limit was placed on the brightness based on the gP1
magnitude corresponding to the 98$\%$ completeness of the field obtained from
the StackDetEffMeta table. The limiting magnitudes are estimated from the
number of fake sources recovered for each skycell in the 3$\pi$ stacked survey
(Munari et al., 2011).
We estimate the quiescent magnitude of the CNe in the same way for those in
the observing field of Pan-STARRS. Since the CNe in our sample only outburst
once, we are able to supplement the Pan-STARRS photometry with _Gaia_ DR2
photometry (Gaia Collaboration et al., 2018) for CNe that erupted after the
_Gaia_ DR2 observations and lie outside of the observing field of Pan-STARRS.
Quiescent magnitudes for CNe estimated using _Gaia_ were measured in the
$G_{BP}$ filter, as it is the most similar to the ASAS-SN g-band, and
transformed to g as described in §A.1.
### A.3 Astrometry and Catalog Matching
When matching sources from different catalogs, the possibility exists that two
different sources with similar sky positions can be mistaken as the same
object; this largely depends on the astrometric accuracy of the surveys
involved. Yaron et al. (2019b, a) investigated the astrometric accuracy of
various surveys, including ASAS-SN, by comparing the positions of objects
reported by an individual survey to those independently reported by the Gaia
Alerts Project444http://gsaweb.ast.cam.ac.uk/alerts/home. They estimated the
positional accuracy of various surveys and found that 95$\%$ of discoveries by
ASAS-SN have astrometric errors $<$ 3.4 arcseconds. Since many of the objects
in our sample are discovered by ASAS-SN, we use a positional offset threshold
of four arcseconds when considering matches between the DNe in our sample and
objects in Pan-STARRS. This is a generous search radius, as the typical error
on the discovery position is $\sim$1 arcsecond (Jayasinghe et al., 2018). For
CNe, we assume the discoveries are followed up at higher angular resolution,
and therefore place a stricter bound on the astrometric uncertainty of one
arcsecond.
We use CN and DN coordinates as listed in the VSX catalog. When transients
discovered by ASAS-SN and other surveys are entered into VSX, their positions
are updated using _Gaia_ DR2 (taking epoch and equinox J2000.0) if there is an
object within a fraction of an arcsecond from the reported transient position.
If no object exists within $\sim$1 arcsecond, other optical surveys with
similar limits like Pan-STARRS, SDSS (York et al., 2000), and the Guide Star
Catalog (GSC2.3; Lasker et al. 2008) are checked for matches within a fraction
of an arcsecond. If there are no matches in surveys with reliable astrometry,
the position is derived from the discovery report or follow-up astrometry.
For each DN in our sample, we estimate the probability that it is coincident
by chance with a different, nearby object in the Pan-STARRS catalog. A
positional offset was defined for each source as the angle between the DN’s
VSX position and the closest object in the Pan-STARRS stack catalog. We then
search the Pan-STARRS catalog at random positions on a circle of one degree
around the source and compute the frequency of having a Pan-STARRS source
closer than the measured positional offset. Only sources that had random
matches less than 5% of the time were considered secure matches. The maximum
positional offset that results in a secure match is roughly 2 arcseconds.
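The random-ring test can be sketched as follows; `catalog_query` is a hypothetical stand-in for a Pan-STARRS nearest-neighbor lookup, replaced here by a toy uniform-density catalog so the example is self-contained.

```python
import numpy as np

def chance_match_rate(offset_deg, catalog_query, center_ra, center_dec,
                      n_trials=500, ring_radius_deg=1.0, rng=None):
    """Fraction of random positions on a one-degree ring around the source
    that have a catalog neighbor closer than the measured offset.
    A match is 'secure' if this rate is below 5%."""
    if rng is None:
        rng = np.random.default_rng()
    theta = rng.uniform(0.0, 2.0 * np.pi, n_trials)
    # small-angle ring offsets (adequate at this scale)
    ras = center_ra + ring_radius_deg * np.cos(theta) / np.cos(np.radians(center_dec))
    decs = center_dec + ring_radius_deg * np.sin(theta)
    hits = sum(catalog_query(r, d) < offset_deg for r, d in zip(ras, decs))
    return hits / n_trials

# Toy catalog with uniform density ~ 10^4 sources/deg^2: nearest-neighbor
# distances drawn from the corresponding distribution P(r > x) = exp(-pi*rho*x^2).
rng = np.random.default_rng(1)
def toy_query(ra, dec, density=1e4):
    u = rng.uniform()
    return np.sqrt(-np.log(1.0 - u) / (np.pi * density))

rate = chance_match_rate(2.0 / 3600.0, toy_query, 270.0, -20.0, rng=rng)
print(f"chance-match rate for a 2 arcsec offset: {rate:.3f}")
```

At this toy source density a 2 arcsecond offset yields a chance-match rate of roughly 1%, comfortably below the 5% threshold used above.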
For CNe, we only consider a Pan-STARRS object within 1 arcsecond of the VSX
position to be a secure match. DNe with a Pan-STARRS source within 4
arcseconds and CNe with a source within 2 arcseconds where random matches were
found $>$5% of the time are considered ambiguous matches, and we do not
attempt to estimate their outburst amplitudes. If no source exists in the Pan-
STARRS catalog within 4 arcseconds for DNe and 2 arcseconds for CNe of the VSX
position, we consider the quiescent counterpart definitively not detected in
the Pan-STARRS 3$\pi$ stack catalog, and place an upper limit on the quiescent
brightness. We successfully estimate or place a limit on the quiescent
brightness for $\sim$90% of DNe in the Pan-STARRS observing field.
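The matching rules in the preceding paragraphs amount to a small decision procedure. The sketch below is our own illustrative encoding of those rules, not code from the survey; in particular, the treatment of a CN counterpart between 1 and 2 arcseconds is our reading of the text.

```python
def classify_counterpart(offset_arcsec, random_match_fraction, is_dwarf_nova):
    """Classify the nearest Pan-STARRS source for a DN or CN.

    offset_arcsec: separation to the nearest source, or None if no source
    lies within the search radius (4" for DNe, 2" for CNe).
    random_match_fraction: how often random positions match this closely.
    Returns 'secure', 'ambiguous', or 'undetected' (quiescent upper limit).
    """
    search_radius = 4.0 if is_dwarf_nova else 2.0
    if offset_arcsec is None or offset_arcsec > search_radius:
        return "undetected"
    if is_dwarf_nova:
        # DNe: secure when chance coincidences occur <5% of the time
        return "secure" if random_match_fraction < 0.05 else "ambiguous"
    # CNe: secure only within 1 arcsecond of the VSX position
    return "secure" if offset_arcsec <= 1.0 else "ambiguous"
```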
For CNe south of $\delta=-30$ degrees, we consider Gaia DR2 sources within one
arcsecond to be secure matches. If no _Gaia_ DR2 source appears to be within
two arcseconds of the CNe position, we place an upper limit on the brightness
of the source. The magnitude that corresponds to 98$\%$ completeness in Gaia
is not currently available and likely spatially variable, but we assume this
will roughly be the magnitude that corresponds to a magnitude error of 0.1
mag. We find that this value is on average $G_{BP}$ $\approx$ 19.7 mag, and
therefore use this as an upper limit on the magnitude for CNe with no _Gaia_
DR2 source within two arcseconds. By using Pan-STARRS and Gaia observations we
are able to measure or place a limit on $\sim$80% of CN outbursts.
### A.4 Sensitivity/Contamination
We injected fake transients into our data and attempted to recover them in
order to estimate the ranges of peak magnitudes and decline times to which our
analysis is sensitive, given the cadence of ASAS-SN. We generated linearly
declining (in magnitude versus time) outbursts with peak apparent magnitudes
ranging from 18 to 10 mag and $t_{2}$ values ranging from an hour to a year.
These outbursts were injected into the ASAS-SN light curves obtained for our
CV sample with random outburst epochs. We sampled the mock-outburst evolution
with the cadence and sensitivity of observed ASAS-SN light curves. The same
analysis that was run on the real CV sample was run on the fake transients, in
order to estimate how frequently we could successfully measure or place limits
on $t_{2}$. The results are shown in Figure A.1 and are compared to the
measured values from our CV sample.
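A minimal version of this injection-recovery exercise can be written as follows. This is an illustrative sketch under simplifying assumptions (no rise is modeled, and the outburst and quiescent magnitudes are combined by taking the brighter value rather than summing fluxes); the function names are ours.

```python
import numpy as np

def inject_outburst(times, quiescent_mag, limit_mag, peak_mag, t2_days, t_peak):
    """Inject a linearly declining mock outburst at the survey's real epochs.

    The transient peaks at `peak_mag` at `t_peak` and fades by two
    magnitudes every `t2_days`. Returns per-epoch magnitudes (brighter of
    outburst and quiescence) and a detection mask against `limit_mag`.
    """
    times = np.asarray(times, dtype=float)
    rate = 2.0 / t2_days  # mag per day
    burst = np.where(times >= t_peak,
                     peak_mag + rate * (times - t_peak), np.inf)
    observed = np.minimum(burst, quiescent_mag)  # smaller magnitude = brighter
    return observed, observed < limit_mag

def recovered_t2(times, observed, detected, peak_mag):
    """Measured t2: time from the brightest detection to the first epoch
    fainter than peak_mag + 2; None if the decline is never sampled."""
    times = np.asarray(times, dtype=float)
    det_t = times[detected]
    if det_t.size == 0:
        return None
    t0 = det_t[np.argmin(observed[detected])]
    below = times[(times > t0) & (observed > peak_mag + 2.0)]
    return float(below[0] - t0) if below.size else None
```

Running this over grids of peak magnitude, $t_{2}$, and random outburst epochs, using the real ASAS-SN epochs and limiting magnitudes, yields recovery fractions like those contoured in Figure A.1.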
The top panel in Figure A.1 shows the distributions of real CV decline times
as a blue histogram, and the relative frequency with which fake transient
decline times could be estimated (red line). This shows that our analysis is
best at detecting transients with $t_{2}\approx$ 30 days, although we most
frequently find DN outbursts that decline by two magnitudes from maximum in
roughly 10 days. Also, the sensitivity of our analysis as a function of
decline time drops off much more slowly than the distribution of real decline
times. Though we are certainly worse at detecting CV outbursts with extremely
short decline times, the results are not significantly biased by our analysis
and observing cadence.
The bottom panel of Figure A.1 shows the distribution of peak magnitude versus
$t_{2}$ for the real DN outbursts (shown in blue) and contours of the
probability that the fake transients were recovered (shown in red). Even the
brightest and slowest transients are not always recovered successfully, and
this is largely due to an outburst happening while the source is Sun-constrained. The
completeness is not strongly dependent on the peak magnitude of the source
unless it is near the limiting magnitude of ASAS-SN.
Figure A.1: Top: Distribution of observed time to decline by two magnitudes
for detected DN outbursts in blue and the probability distribution for
measuring the decline as a function of $t_{2}$ in red. Both distributions are
normalized so that the area under the curve is unity, but the red curve is
then multiplied by a factor of four for visualization purposes. Bottom: Peak g
magnitudes of DN outbursts vs. measurements and limits of the time to decline
by two magnitudes from maximum. Real DN outbursts are shown as blue circles.
The red contours show the fraction of time the decline time of fake transients
could be successfully measured or constrained with an upper limit in our
analysis.
As discussed in the main text, there is a chance that the quiescent magnitude
measured from Pan-STARRS is contaminated by an outburst. To investigate the
likelihood for this to occur, we measure the outburst duty cycle, defined as
the fraction of time a CV spends in outburst. We estimate this as the number
of days the object is detected by ASAS-SN divided by the total number of days
the field is observed. Since image subtraction light curves were used to study
the DN outbursts in our sample, the assumption that each detection is during
an outburst is a safe one, but it can break down when contamination from a
nearby bright star occurs. Due to this, we only estimate the duty cycle for
objects with no g $<$ 14 mag stars within an 8 ASAS-SN pixel (64 arcseconds)
radius.
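The duty-cycle estimate described above is just a ratio of day counts; a minimal sketch (our own, with hypothetical inputs):

```python
def duty_cycle(detection_epochs, observed_epochs):
    """Fraction of observed days on which the object was detected.

    Epochs are days (e.g., truncated Julian dates); with subtracted
    light curves each detection is assumed to be during outburst.
    """
    detected_days = {int(t) for t in detection_epochs}
    observed_days = {int(t) for t in observed_epochs}
    return len(detected_days & observed_days) / len(observed_days)
```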
Figure A.2: Outburst amplitude for the dwarf novae as a function of duty
cycle.
As shown in Figure A.2, it does appear to be the case that DNe with larger
duty cycles have smaller outburst amplitudes. Though this is consistent with
the nature of DNe (Coppejans et al., 2016), our analysis may significantly
underestimate the outburst amplitude for an object with a high duty cycle.
Objects that spend more time in outburst have a higher probability of being
observed by Pan-STARRS in outburst, and thus will result in an underestimate
in the outburst amplitude estimated in this work. This should not
significantly alter the distribution of the outburst amplitude, but we
encourage the use of caution when quoting the outburst amplitude of an
individual object.
Contamination in the ASAS-SN light curve from a nearby, bright star could
result in a false positive of a DN outburst. Careful inspection and flagging
of the light curves was performed to mitigate these artifacts, though the
possibility still remains.
### A.5 Classical and Recurrent Nova Outburst Properties
Here, we provide the outburst properties of the classical novae and recurrent
novae in our sample in Table A.1. This table is available electronically in a
machine readable format. The data used to measure these properties for four
classical novae are shown as light curves in Figure A.3.
Figure A.3: Light curves of four classical novae analyzed in this work. The
$\geq$5-sigma detections for ASAS-SN g-band, ASAS-SN V-band and AAVSO V-band
observations are shown in orange, blue, and green respectively after
converting to brightness in g-band. The black triangles denote 5-sigma upper
limits from non-detections. These show examples of faint outbursts (top left),
flares causing multiple peaks (top right), outbursts during solar conjunction
(bottom left), and smooth declines (bottom right).
Table A.1: Outburst Properties of Classical and Recurrent Novae
Name | RAJ2000 | DEJ2000 | Peak | Amp. | Amp. Flag | t2 | t2 Flag | t2,low | t2,up
---|---|---|---|---|---|---|---|---|---
| h m s | ∘ ′ $"$ | mag | mag | boolean | days | boolean | days | days
V0392 Per | 4:43:21.37 | 47:21:25.9 | 7.0 | 10.5 | 1 | 3.0 | 1 | 3.0 | 3.1
V0339 Del | 20:23:30.68 | 20:46:03.8 | 4.6 | 13.2 | 1 | 11.8 | 1 | 11.8 | 12.5
V2659 Cyg | 20:21:42.32 | 31:03:29.4 | 9.9 | 11.7 | 1 | 115.5 | 1 | 114.7 | 116.4
V0569 Vul | 19:52:08.25 | 27:42:20.9 | 16.2 | 6.0 | 0 | 6.0 | 0 | 5.0 | 6.8
V0962 Cep | 20:54:23.75 | 60:17:06.9 | 11.6 | 11.1 | 0 | 32.5 | 1 | 31.2 | 34.1
V0435 CMa | 7:13:45.84 | $-$21:12:31.3 | 10.4 | 11.5 | 1 | 53.4 | 1 | 49.3 | 55.3
V2860 Ori | 6:09:57.45 | 12:12:25.2 | 10.6 | 9.6 | 1 | 9.4 | 1 | 9.0 | 10.0
V5668 Sgr | 18:36:56.83 | $-$28:55:40.0 | 4.4 | 11.9 | 1 | 75.3 | 1 | 74.5 | 77.8
V5855 Sgr | 18:10:28.29 | $-$27:29:59.3 | 8.4 | 11.9 | 1 | 12.7 | 1 | 7.3 | 16.3
V5856 Sgr | 18:20:52.25 | $-$28:22:12.1 | 6.5 | 14.4 | 1 | 7.2 | 1 | nan | 14.5
V1707 Sco | 17:37:09.54 | $-$35:10:23.2 | 12.9 | 6.8 | 1 | 4.5 | 1 | 3.6 | 5.6
V1659 Sco | 17:42:57.68 | $-$33:25:42.9 | 13.6 | 6.1 | 1 | 22.0 | 1 | 20.7 | 22.6
V3661 Oph | 17:35:50.41 | $-$29:34:23.8 | 12.3 | 7.9 | 0 | 3.8 | 1 | 2.1 | 5.6
V5669 Sgr | 18:03:32.77 | $-$28:16:05.3 | 9.5 | 9.5 | 1 | 33.6 | 1 | 33.3 | 41.9
V5853 Sgr | 18:01:07.78 | $-$26:31:43.4 | 12.9 | 8.2 | 1 | 36.7 | 1 | 36.6 | 37.6
V5667 Sgr | 18:14:25.15 | $-$25:54:34.7 | 10.0 | 10.0 | 1 | 54.5 | 1 | 54.1 | 55.1
V3662 Oph | 17:39:46.10 | $-$24:57:55.8 | 14.7 | 7.6 | 0 | 43.7 | 1 | 43.0 | 47.3
V3890 Sgr | 18:30:43.29 | $-$24:01:08.9 | 8.1 | 12.9 | 1 | 4.1 | 1 | 4.1 | 4.1
V5666 Sgr | 18:25:08.76 | $-$22:36:02.6 | 10.1 | 8.6 | 1 | 12.7 | 1 | 12.2 | 13.0
V0612 Sct | 18:31:45.86 | $-$14:18:55.5 | 9.4 | 10.6 | 1 | 13.9 | 1 | 13.9 | 13.9
V0613 Sct | 18:29:22.93 | $-$14:30:44.2 | 11.6 | 9.5 | 0 | 36.8 | 1 | 36.2 | 37.1
V3665 Oph | 17:14:02.53 | $-$28:49:23.3 | 10.0 | 11.6 | 0 | 34.8 | 1 | 33.5 | 35.0
V3666 Oph | 17:42:24.11 | $-$20:53:08.6 | 9.4 | 12.7 | 0 | 21.6 | 1 | 21.5 | 23.2
V2944 Oph | 17:29:13.42 | $-$18:46:13.8 | 9.6 | 10.4 | 1 | 16.2 | 1 | 16.0 | 16.7
V5857 Sgr | 18:04:09.45 | $-$18:03:55.8 | 11.7 | 10.6 | 0 | 16.9 | 1 | 15.7 | 17.4
V0670 Ser | 18:10:42.29 | $-$15:34:18.0 | 13.5 | 8.7 | 0 | 118.3a | 0 | 1.0 | 118.3
V0659 Sct | 18:39:59.70 | $-$10:25:41.9 | 8.9 | 13.0 | 0 | 7.6 | 1 | 7.3 | 8.3
V1830 Aql | 19:02:33.38 | 3:15:19.0 | 16.8 | 5.8 | 0 | 20.6 | 1 | 20.6 | 20.6
V1831 Aql | 19:21:50.15 | 15:09:24.8 | 15.8 | 7.0 | 0 | 17.8 | 1 | 17.5 | 18.7
V0906 Car | 10:36:15.42 | $-$59:35:53.7 | 6.5 | 13.1 | 1 | 43.7 | 1 | 43.0 | 44.5
V0549 Vel | 8:50:29.62 | $-$47:45:28.3 | 9.7 | 8.2 | 1 | 90.1 | 1 | 90.0 | 92.7
FM Cir | 13:53:27.59 | $-$67:25:00.9 | 6.7 | 10.6 | 1 | 82.3 | 1 | 81.2 | 82.9
V1405 Cen | 13:20:55.35 | $-$63:42:19.1 | 11.3 | 8.2 | 1 | 108.4 | 1 | 102.5 | 108.8
V1655 Sco | 17:38:19.31 | $-$37:25:08.7 | 12.0 | 8.0 | 1 | 28.9 | 1 | 28.8 | 28.9
V1662 Sco | 16:48:49.62 | $-$44:57:03.2 | 10.4 | 9.3 | 1 | 8.1 | 1 | 6.6 | 9.6
V1657 Sco | 16:52:18.87 | $-$37:54:18.9 | 13.3 | 6.4 | 1 | 38.7 | 1 | 36.0 | 40.9
V1656 Sco | 17:22:51.46 | $-$31:58:37.1 | 12.0 | 7.7 | 1 | 9.0 | 1 | 8.9 | 9.5
V1661 Sco | 17:18:06.37 | $-$32:04:27.7 | 11.3 | 8.4 | 1 | 10.7 | 1 | 10.0 | 13.6
V0408 Lup | 15:38:43.86 | $-$47:44:42.1 | 10.4 | 9.7 | 1 | 59.4 | 1 | 58.1 | 60.7
V0407 Lup | 15:29:01.79 | $-$44:49:39.5 | 7.4 | 12.3 | 1 | 5.5 | 1 | 5.1 | 5.6
(a) The eruption likely occurred during solar constraint, and t2,up
is the time from before solar constraint to when the light curve dropped below
the two magnitude threshold.
Note. — Names, positions, peak apparent brightness, amplitude of outburst, and
time it takes to decline two magnitudes from maximum brightness of
classical and recurrent novae in our sample. The Amp. Flag column equals one
when we are able to make a measurement of the outburst amplitude and zero when
we are able to place a lower limit. The t2 Flag column equals one when we
are able to detect the object below the two magnitude threshold and is zero
when there is only a non-detection below this threshold or when the eruption
appeared to occur during solar constraint. The t2,low column gives the time
until last detection brighter than the two magnitude threshold and the t2,up
column gives the time until the first detection or non-detection fainter than this
threshold. These last two columns are lower and upper limits on t2,
respectively.
# Approachable Free Subsets and Fine Structure Derived Scales
(2010 Mathematics Subject Classification. Primary 03E04, 03E45, 03E55)
Dominik Adolf and Omer Ben-Neria
###### Abstract
Shelah showed that the existence of free subsets over internally approachable
subalgebras follows from the failure of the PCF conjecture on intervals of
regular cardinals. We show that a stronger property called the Approachable
Bounded Subset Property can be forced from the assumption of a cardinal
$\lambda$ for which the set of Mitchell orders $\\{o(\mu)\mid\mu<\lambda\\}$
is unbounded in $\lambda$. Furthermore, we study the related notion of
continuous tree-like scales, and show that such scales must exist on all
products in canonical inner models. We use this result, together with a
covering-type argument, to show that the large cardinal hypothesis from the
forcing part is optimal.
## 1 Introduction
The study of set theoretic algebras has been central in many areas, with many
applications to compactness principles, cardinal arithmetic, and combinatorial
set theory.
An algebra on a set $X$ is a tuple $\mathfrak{A}=\langle
X,f_{n}\rangle_{n<\omega}$ where $f_{n}:X^{k_{n}}\rightarrow X$ is a function.
A sub-algebra is a subset $M\subseteq X$ such that
$f_{n}(x_{0},\ldots,x_{k_{n}-1})\in M$ for all $(x_{0},\ldots,x_{k_{n}-1})\in
M^{k_{n}}$ and $n<\omega$. The set of sub-algebras of $\mathfrak{A}$ forms
a club (in $\mathcal{P}(X)$). The characteristic function $\chi_{M}$ of $M$
is defined on the ordinals of $M$ by $\chi_{M}(\tau)=\sup(M\cap\tau)$.
Shelah’s celebrated bound in cardinal arithmetic ([26]) states that if
$\aleph_{\omega}$ is a strong limit cardinal then
$2^{\aleph_{\omega}}<\min\\{\aleph_{\omega_{4}},\aleph_{(2^{\aleph_{0}})^{+}}\\}.$
Starting from a supercompact cardinal, Shelah proved that for every
$\alpha<\omega_{1}$, there exists a generic extension in which
$2^{\aleph_{\omega}}=\aleph_{\alpha+1}$ (see [15]). It is a central open
problem in cardinal arithmetic if $2^{\aleph_{\omega}}\geq\aleph_{\omega_{1}}$
is consistent. A major breakthrough towards a possible solution is the work of
Gitik ([14],[10]) on the failure of the PCF-conjecture. Shelah’s PCF
conjecture states that $|\operatorname{pcf}(A)|\leq|A|$ for every
progressive (i.e., $\min(A)>|A|$) set $A$ of regular cardinals. In [27],
Shelah has extracted remarkable freeness properties of sets over subalgebras,
from the assumption of $2^{\aleph_{\omega}}\geq\aleph_{\omega_{1}}$, or more
generally, from the assumption $|\operatorname{pcf}(A)|>|A|$ for a progressive
interval of regular cardinals $|A|$.
###### Definition 1.
Let $\mathfrak{A}=\langle X,f_{n}\rangle_{n}$ be an algebra and $x\subset X$.
We say that $x$ is free with respect to $\mathfrak{A}$ if for every $\delta\in
x$ and $n<\omega$, $\delta\not\in f_{n}``(x\setminus\\{\delta\\})^{<\omega}$.
More generally, $x$ is free over a subalgebra $N\subseteq\mathfrak{A}$ if for
every $\delta\in x$ and $n<\omega$, $\delta\not\in
f_{n}``(N\cup(x\setminus\\{\delta\\}))^{<\omega}$.
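For finite toy algebras, the freeness condition of Definition 1 can be checked mechanically. The following sketch is purely illustrative (the paper works with infinite algebras and elementary substructures): it closes a seed set under the algebra's operations and tests freeness over the generated subalgebra.

```python
from itertools import product

def closure(seed, functions):
    """Smallest superset of `seed` closed under each (arity, f) pair —
    a finite stand-in for the subalgebra generated by `seed`."""
    current = set(seed)
    while True:
        new = {f(*args)
               for k, f in functions
               for args in product(current, repeat=k)}
        if new <= current:
            return current
        current |= new

def is_free_over(x, n_seed, functions):
    """x is free over the subalgebra generated by n_seed: no delta in x
    lies in the closure of n_seed together with x minus {delta}."""
    x = set(x)
    return all(d not in closure(set(n_seed) | (x - {d}), functions)
               for d in x)
```

For instance, with addition modulo 7 as the single binary operation, $\{3\}$ is free over the subalgebra generated by $\{0\}$, while $\{1,2\}$ is not, since $2$ generates the whole structure.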
A cardinal $\lambda$ has the Free Subset Property if every algebra
$\mathfrak{A}$ on $\lambda$, or on a bigger set such as $H_{\theta}$, has a free subset
$x\subseteq\lambda$ which is cofinal in $\lambda$. A regular cardinal
$\lambda$ with the Free Subset Property is Jonsson. Koepke [19] has shown that
the free subset property at $\aleph_{\omega}$ is equiconsistent with the
existence of a measurable cardinal. For a singular limit $\lambda$ of a
progressive interval $|A|$, it is shown in [27] that if
$|\operatorname{pcf}(A)|>|A|$ then $\lambda$ satisfies the Free Subset
Property. In his PhD thesis ([23]), Pereira has isolated the notion of the
Approachable Free Subset Property (AFSP) to play a critical role in the result
from [27]. The Approachable Free Subset Property for a singular cardinal
$\lambda$ asserts that there exists some sufficiently large $H_{\theta}$,
$\theta>\lambda$ and an algebra $\mathfrak{A}$ on $H_{\theta}$ such that for
every internally approachable substructure (see Definition 10)
$N\prec\mathfrak{A}$ with $|N|<\lambda$, there exists an infinite sequence of
regular cardinals $\langle\tau_{i}\mid i<\operatorname{cof}(\lambda)\rangle\in
N$ such that the set $x=\\{\chi_{N}(\tau_{i})\mid
i<\operatorname{cof}(\lambda)\\}$ is free over $N$.
Pereira showed that Shelah’s proof yields that if $\lambda$ is a limit of a
progressive interval $A$ of regular cardinals and
$|\operatorname{pcf}(A)|>|A|$ then the Approachable Free Subset Property holds
at $\lambda$.
Working with fixed sequences $\langle\tau_{n}\mid n<\omega\rangle$ of regular
cardinals, we consider here the following version of this property.
###### Definition 2.
The Approachable Free Subset Property (AFSP) with respect to
$\langle\tau_{n}\rangle_{n}$ asserts that for every sufficiently large regular
$\theta>\lambda=(\cup_{n}\tau_{n})$ and for every internally approachable
subalgebra $N\prec\mathfrak{A}$, of an algebra $\mathfrak{A}$ extending
$(H_{\theta},\in,\langle\tau_{n}\rangle_{n})$, satisfying $|N|<\lambda$ there
exists a cofinite set $x\subseteq\\{\chi_{N}(\tau_{n})\mid n<\omega\\}$ which
is free over $N$.
By moving from one cardinal $\theta$ to $\theta^{\prime}>\theta$ if needed, it
is routine to verify that the definition of AFSP with respect to a sequence
$\langle\tau_{n}\rangle_{n}$ can be replaced with a similar assertion in which
the requirement of “every internally approachable $N$” is replaced with “every
internally approachable $N$ in some closed unbounded subset of
$\mathcal{P}_{\lambda}(\mathfrak{A})$”. Clearly, if AFSP holds with respect to
a sequence $\langle\tau_{n}\rangle_{n}$ then AFSP holds with respect to the
singular limit $\lambda=\cup_{n}\tau_{n}$, as in the original definition of
[23].
The above-mentioned results suggest that AFSP can provide a path to possibly
improving Shelah’s bound to $2^{\aleph_{\omega}}<\aleph_{\omega_{1}}$. I.e.,
proving (in ZFC) that AFSP must fail at $\aleph_{\omega}$ (or AFSP fails w.r.t
every subsequence $\langle\tau_{n}\rangle_{n}$ of $\\{\aleph_{k}\mid
k<\omega\\}$) would imply that $2^{\aleph_{\omega}}<\aleph_{\omega_{1}}$. To
this end, Pereira ([23]) has isolated the notion of tree-like scales, as a
potential tool of proving AFSP must fail.
###### Definition 3.
Let $\langle\tau_{n}\rangle_{n<\omega}$ be an increasing sequence of regular
cardinals. A scale (see Definition 9 for the definition of a continuous scale)
$\vec{f}=\langle f_{\alpha}\mid\alpha<\eta\rangle$ is a tree-like scale on
$\prod_{n}\tau_{n}$ if for every $\alpha\neq\beta<\eta$ and $n<\omega$,
$f_{\alpha}(n+1)=f_{\beta}(n+1)$ implies $f_{\alpha}(n)=f_{\beta}(n)$.
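The tree-like condition is a purely combinatorial constraint on coordinates: the value of a function at coordinate $n+1$ determines its value at coordinate $n$. For finite fragments of a scale it can be tested directly; the sketch below is our illustrative check, with functions represented as tuples.

```python
from itertools import combinations

def is_tree_like(functions):
    """True if, for every pair f, g and every n, f(n+1) == g(n+1)
    forces f(n) == g(n); functions are finite tuples of ordinals."""
    for f, g in combinations(functions, 2):
        for n in range(min(len(f), len(g)) - 1):
            if f[n + 1] == g[n + 1] and f[n] != g[n]:
                return False
    return True
```

In this representation a tree-like family is exactly one in which reading each function backwards traces a branch of a tree: two branches that agree at level $n+1$ must agree at all earlier levels.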
Pereira shows in [24] that the existence of a continuous tree-like scale on a
product $\prod\limits_{n<\omega}\tau_{n}$ guarantees the failure of AFSP with
respect to $\langle\tau_{n}\rangle_{n}$ (see also Lemma 15), and further
proves that continuous tree-like scales, unlike other well-known types of
scales, such as good scales, can exist in models with some of the strongest
large cardinal notions, e.g. $I_{0}$-cardinals. Moreover, Cummings [5] proved
that tree-like scales can exist above supercompact cardinals. These results
show that as opposed to other well-known properties of scales such as good and
very-good scales, which exhibit desirable ”local” behaviour but cannot exist
in the presence of certain large cardinals ([6]), the notion of continuous
tree-like scales may coexist with some of the strongest large cardinal
hypotheses.
The consistency of the non-existence of a continuous tree-like scale on a
product $\prod_{n}\tau_{n}$ of regular cardinals has been established by Gitik
in [12], from the consistency assumption of a cardinal $\kappa$ satisfying
$o(\kappa)=\kappa^{++}+1$. The argument makes sophisticated use of the key
features of Gitik’s extender-based Prikry forcing by a
$(\kappa,\kappa^{++})$-extender (e.g., of the fact that there are
unboundedly many pairs $(\alpha,\alpha^{*})\in[\kappa]^{2}$ sharing the same
Rudin-Keisler projection map $\pi_{\alpha^{*},\alpha}$). Concerning the
possible consistency of the Approachable Free Subset Property, Welch ([31])
has shown that AFSP with respect to a sequence $\langle\tau_{n}\rangle_{n}$
implies that the large cardinal assumption of Theorem 4 holds in an inner
model.
It remained open whether AFSP with respect to some sequence
$\langle\tau_{n}\mid n<\omega\rangle$ is consistent at all, and if so, whether
its consistency strength is strictly stronger than the (seemingly) weaker
property, of no continuous tree-like scale on $\prod_{n}\tau_{n}$. The current
work answers both questions:
###### Theorem 4.
It is consistent relative to the existence of a cardinal $\lambda$ such that
the set of Mitchell orders $\\{o(\mu)\mid\mu<\lambda\\}$ is unbounded in
$\lambda$, that the Approachable Free Subset Property holds with respect to
some sequence of regular cardinals $\vec{\tau}=\langle\tau_{n}\rangle_{n}$.
Moreover, the sequence $\langle\tau_{n}\rangle_{n}$ can be made to be a subsequence of the first
uncountable cardinals, in a model where $\lambda=\aleph_{\omega}$.
###### Theorem 5.
Let $\lambda$ be a singular cardinal of countable cofinality such that there
is no inner model $M$ with
$\lambda=\sup\\{\operatorname{o}^{M}(\mu)\mid\mu<\lambda\\}$. Let
$\langle\tau_{n}\mid n<\omega\rangle$ be a sequence of regular cardinals
cofinal in $\lambda$. Then $\prod\limits_{n<\omega}\tau_{n}$ carries a
continuous tree-like scale.
To achieve the proof of Theorem 5, we establish a result of independent
interest: continuous tree-like scales naturally appear in fine-structural
canonical inner models. We thus obtain a result complementary to the
aforementioned theorems of Pereira and Cummings; i.e., no large cardinal
property that can consistently appear in canonical inner models disproves the
existence of products with continuous tree-like scales (e.g., Woodin
cardinals).
###### Theorem 6.
Let $\mathcal{M}$ be a premouse such that each countable hull has an
$\omega$-maximal $(\omega_{1}+1)$-iteration strategy. Let
$\lambda\in\mathcal{M}$ be a singular cardinal of countable cofinality. Let
$\langle\kappa_{i}:i<\omega\rangle$ be a sequence of regular cardinals cofinal
in $\lambda$. Then $\prod\limits_{i<\omega}\kappa_{i}/J_{bd}$ carries a
continuous tree-like scale.
Continuous tree-like scales on products of successor cardinals in $L$ were
implicitly constructed by Donder, Jensen, and Stanley in [7]. In the course of
proving Theorem 4, we establish the consistency of a principle stronger than
AFSP, which we call the Approachable Bounded Subset Property.
Let $N$ be a subalgebra of $\mathfrak{A}=\langle H_{\theta},f_{n}\rangle_{n}$
and $\vec{\tau}=\langle\tau_{n}\rangle_{n}$ be an increasing sequence of
cardinals. Given a set $x\subseteq H_{\theta}$, we define $N[x]$ to be the
$\mathfrak{A}$-closure of the set $(x\cup N)$. We say that $N$ satisfies the
Bounded Appending Property with respect to $\vec{\tau}$ if for every
$n_{0}<\omega$, setting $x=\\{\chi_{N}(\tau_{n})\mid n\neq n_{0}\\}$, the
addition of $x$ to $N$ does not increase the supremum below $\tau_{n_{0}}$,
namely $\chi_{N[x]}(\tau_{n_{0}})=\chi_{N}(\tau_{n_{0}})$.
###### Definition 7.
The Approachable Bounded Subset Property holds with respect to
$\langle\tau_{n}\rangle_{n}$ if for every sufficiently large regular
$\theta>\lambda=(\cup_{n}\tau_{n})$ and every internally approachable subalgebra
$N\prec\mathfrak{A}$ of an algebra $\mathfrak{A}$ extending
$(H_{\theta},\in,\langle\tau_{n}\rangle_{n})$ that satisfies $|N|<\lambda$,
$N$ satisfies the bounded appending property with respect to a tail of
$\langle\tau_{n}\rangle_{n}$.
We show in Lemma 15 that ABSP with respect to a sequence
$\langle\tau_{n}\rangle_{n}$ implies AFSP with respect to the same sequence,
as well as the non-existence of a continuous essentially tree-like scale, a
weakening of tree-like scales introduced by Pereira (see Definition 9). The
proof of the forcing Theorem 4, stated above, goes through proving that ABSP
is consistent with respect to a sequence of regulars
$\langle\tau_{n}\rangle_{n}$.
The following summarizes the main results of this paper:
###### Corollary 8.
The following principles are equiconsistent:
1. 1.
There exists a sequence of regular cardinals $\langle\tau_{n}\mid
n<\omega\rangle$ for which the Approachable Bounded Subset Property holds.
2. 2.
There exists a sequence of regular cardinals $\langle\tau_{n}\mid
n<\omega\rangle$ for which the Approachable Free Subset Property holds.
3. 3.
There exists a sequence of regular cardinals $\langle\tau_{n}\mid
n<\omega\rangle$ for which the product $\prod_{n}\tau_{n}$ does not carry a
continuous Tree-Like scale.
4. 4.
There exists a cardinal $\lambda$ such that the set of Mitchell orders
$\\{o(\mu)\mid\mu<\lambda\\}$ is unbounded in $\lambda$.
The paper is organized as follows: The remainder of this section will be
dedicated to discussing preliminary material in PCF theory and the theory of
inner models. Section 2 is dedicated to the forcing argument establishing
the proof of Theorem 4. In Section 3 we discuss how to construct tree-like scales
from the fine structure of canonical inner models. In Section 4 we will use
these fine structural scales to derive scales on products in $V$ using a
covering-like argument. Finally, in Section 5 we finish with a list of open
problems.
Acknowledgments:
The work on this project was initiated following a suggestion by Assaf Rinot
to study the consistency of the Approachable Free Subset Property. The authors
are grateful for this suggestion and for supporting the first author during
the academic year of 2018-2019 at Bar-Ilan University under a grant from the
European Research Council (grant agreement ERC-2018-StG 802756). The initial
idea for the inner model construction of continuous tree-like scale was
conceived during the Berkeley conference on Inner Model theory in July 2019.
The authors would like to thank Ralf Schindler and John Steel for organizing
the meeting and creating the opportunity for this collaboration. The first
author would like to thank Grigor Sargsyan for his generous support and warm
hospitality during the Spring of 2020 (NSF career award DMS-1352034). During
that time the first author had the opportunity to travel to Pittsburgh. It was
there that some significant improvements were made to the lower bound
argument, and would like to thank James Cummings for the opportunity to
present this research and insightful conversations. The second author was
partially supported by the Israel Science Foundation (Grant 1832/19). He would
like to thank Luis Pereira for insightful discussions on the subject and many
valuable remarks on this paper, and to Spencer Unger and Philip Welch for many
valuable comments and suggestions.
### 1.1 Preliminaries
For a set $X$ and a cardinal $\lambda$, $\mathcal{P}_{\lambda}(X)$ denotes
the collection of all subsets $a\subseteq X$ of size $|a|<\lambda$. $J_{bd}$
denotes the ideal of bounded subsets of $\omega$. Let $I$ be an ideal on
$\omega$ and $f,g$ two functions from $\omega$ to ordinals. We write $f<_{I}g$
if $\\{n<\omega\mid f(n)\geq g(n)\\}\in I$. We write $f<^{*}g$ for
$f<_{J_{bd}}g$.
#### 1.1.1 Continuous and Tree-Like Scales
Let $\langle\tau_{n}\mid n<\omega\rangle$ be a sequence of ordinals of
strictly increasing cofinalities. A sequence of functions $\vec{f}=\langle
f_{\alpha}\mid\alpha<\eta\rangle\subseteq\prod_{n}\tau_{n}$ of a regular
length $\eta$, is a pre-scale in $(\prod_{n}\tau_{n},<_{I})$ if $\vec{f}$ is
strictly increasing in the ordering $<_{I}$. A prescale is a scale if it is
cofinal in $\prod_{n}\tau_{n}$. As we focus on $J_{bd}$ from this point
forward, we will frequently say that $\vec{f}$ is a (pre-)scale in
$\prod_{n}\tau_{n}$, without mentioning the ideal $J_{bd}$.
###### Definition 9.
Suppose that $\vec{f}=\langle f_{\alpha}\mid\alpha<\eta\rangle$ is a
(pre-)scale in $\prod_{n}\tau_{n}$.
1. 1.
$\vec{f}$ is continuous if for every limit ordinal $\delta<\eta$ of
uncountable cofinality, the sequence $\vec{f}\upharpoonright\delta$ is
$<^{*}$-cofinal in $\prod_{n}f_{\delta}(n)$.
2. 2.
$\vec{f}$ is Tree-like if for every $\alpha\neq\beta<\eta$ and $n<\omega$, if
$f_{\alpha}(n+1)=f_{\beta}(n+1)$ then $f_{\alpha}(n)=f_{\beta}(n)$.
3. 3.
$\vec{f}$ is Essentially Tree-like if for every $n<\omega$ and
$\mu\in[\tau_{n},\tau_{n+1})$ the set
$\\{\mu^{\prime}<\tau_{n}\mid\exists\beta<\eta,f_{\beta}(n+1)=\mu\text{ and
}f_{\beta}(n)=\mu^{\prime}\\}$
is nonstationary in $\tau_{n}$.
If a product $\prod_{n}\tau_{n}$ carries a scale, it is not difficult to find
another scale on it with the tree-like property (see Pereira [23]), but such a
scale need not be continuous.
#### 1.1.2 Internally Approachable Structures and related Principles
Considering notions such as the Approachable Free Subset Property or the
Approachable Bounded Subset Property with respect to subalgebras of algebras
$\mathfrak{A}=(\theta,f_{n})_{n}$, there is no harm in replacing the domain
$\theta$ with another set of the same size, such as $H_{\theta}$ in cases
relevant to us, and adding more structure to the algebra. Therefore, from this
point on, we will restrict ourselves to set theoretic algebras
$\mathfrak{A}$ of the form $\mathfrak{A}=(H_{\theta},\in,f_{n})_{n}$, which
extend the model $(H_{\theta},\in)$ in the language of set theory, and include
Skolem functions. In particular, a subalgebra $N\prec\mathfrak{A}$ will always
be an elementary substructure.
This allows us to reformulate our notion of freeness. Assuming (as we always
may) that the algebra $\mathfrak{A}$ is rich enough to satisfy a fragment of
ZFC (specifically, the Replacement axiom), and that $N\subseteq\mathfrak{A}$ is
sufficiently closed so that it is an elementary substructure
$N\prec\mathfrak{A}$, the fact that a set $x$ is free over $N$ is equivalent to
having that for every $\delta\in x$ and every function $f\in N$, $\delta\not\in
f``(x\setminus\\{\delta\\})^{<\omega}$.
The notion of internally approachable structures was formally introduced in
[9]. We refer the reader to [8] for further exposition. The definition below
is similar to the standard ones, with the addition that here, we will focus on
internally approachable unions of uncountable cofinality.
###### Definition 10.
An elementary subalgebra (substructure) $N\prec\mathfrak{A}$ of an algebra
$\mathfrak{A}=(H_{\theta};\in,f_{n})_{n}$ is said to be internally
approachable of length $\rho$ if $N=\bigcup_{i<\rho}N_{i}$ is a union of a
sequence $\vec{N}=\langle N_{i}\mid i<\rho\rangle$ of elementary subalgebras
$N_{i}\prec N$, and for every $j<\rho$, $\vec{N}\upharpoonright j=\langle
N_{i}\mid i<j\rangle$ belongs to $N$.
We say that $N\prec\mathfrak{A}$ is internally approachable if it is
internally approachable of length $\rho$ for some $\rho$ of uncountable
cofinality $\operatorname{cof}(\rho)>\aleph_{0}$.
###### Notation 11.
Let $N\prec(H_{\theta};\in)$ for a regular cardinal $\theta$.
* •
For every regular cardinal $\tau\in N$, define
$\chi_{N}(\tau)=\sup(N\cap\tau)$.
* •
Given a sequence $\vec{\tau}=\langle\tau_{n}\mid n<\omega\rangle\subseteq N$
define the function $\chi^{\vec{\tau}}_{N}\in\prod_{n}\tau_{n}$ by
$\chi^{\vec{\tau}}_{N}(n)=\chi_{N}(\tau_{n})$ if this ordinal is strictly
smaller than $\tau_{n}$, and $0$ otherwise.
The following folklore result connects continuous scales with characteristic
functions of internally approachable structures. We include a proof for
completeness.
###### Lemma 12.
Suppose that $\vec{\tau}=\langle\tau_{n}\mid n<\omega\rangle\in N$ is a
strictly increasing sequence of regular cardinals for which
$\prod_{n}\tau_{n}$ carries a continuous scale $\vec{f}=\langle
f_{\alpha}\mid\alpha<\eta\rangle$. For every $N\prec H_{\theta}$ which is
internally approachable of size $|N|<\bigcup_{n}\tau_{n}$, with $\vec{f}\in
N$, if $\delta=\chi_{N}(\eta)$ then $\chi_{N}^{\vec{\tau}}(n)=f_{\delta}(n)$
for all but finitely many $n<\omega$.
###### Proof.
Let $\vec{N}=\langle N_{i}\mid i<\rho\rangle$ be a sequence witnessing
$N=\cup_{i}N_{i}$ is internally approachable of length $\rho$ which has
uncountable cofinality. Since $\vec{f}$ is continuous, it suffices to show
that $\vec{f}\upharpoonright\delta$ is $<^{*}$-cofinally interleaved with the
functions in $\prod_{n}\chi_{N}^{\vec{\tau}}(n)$ to prove that
$\chi_{N}^{\vec{\tau}}(n)=f_{\delta}(n)$ for almost all $n<\omega$. First, for
every $f_{\alpha}\in\vec{f}\upharpoonright\delta$ there exists some $\beta\in
N\cap\delta$ so that $\alpha<\beta$, and thus $f_{\alpha}<^{*}f_{\beta}$. But
$f_{\beta}\in N$ since $\beta\in N$, which means that
$f_{\beta}\in\prod_{n}\chi_{N}^{\vec{\tau}}(n)$. Next, fix
$g\in\prod_{n}\chi_{N}(\tau_{n})$. We show that $g<^{*}f_{\alpha}$ for some
$\alpha<\delta$. To this end, $N=\bigcup_{i<\rho}N_{i}$ guarantees that for
each $n<\omega$ there is $i<\rho$ such that $g(n)<\chi_{N_{i}}(\tau_{n})$.
Since $\operatorname{cof}(\rho)>\aleph_{0}$ there is $i<\rho$ such that
$g(n)<\chi_{N_{i}}(\tau_{n})$ for all $n$, and in particular,
$g<^{*}\chi_{N_{i}}^{\vec{\tau}}$. Since $\vec{f}\in N$ is $<^{*}$-cofinal in
$\prod_{n}\tau_{n}$ and $\chi_{N_{i}}^{\vec{\tau}}\in N$, there is some
$\alpha\in N\cap\eta\subseteq\delta$ so that
$\chi_{N_{i}}^{\vec{\tau}}<^{*}f_{\alpha}$, and thus $g<^{*}f_{\alpha}$. ∎
###### Lemma 13.
Let $\lambda<\theta$ be cardinals with $\theta$ regular, and $\triangleleft$
be a well-ordering of $H_{\theta}$. Suppose that
$\mathcal{S}\subseteq\mathcal{P}_{\lambda}(H_{\theta})$ is a stationary set of
internally approachable structures $N\prec(H_{\theta};\in,\triangleleft)$, and
$X\in H_{\theta}$ is a set which belongs to all $N\in\mathcal{S}$, and
satisfies $|X^{\omega}|\leq\eta$, where $\eta$ is a regular cardinal and
$\rho^{\aleph_{0}}<\eta$ for every cardinal $\rho<\lambda$. Then, for every
assignment which maps each $N\in\mathcal{S}$ to a countable sequence $\langle
x^{N}_{n}\mid n<\omega\rangle\in X^{\omega}$ which is contained in $N$, there
exists a stationary subset $S^{*}\subseteq\eta$ and a constant sequence
$\langle x_{n}\mid n<\omega\rangle$ such that for every $\delta\in S^{*}$
there is $N\in\mathcal{S}$ satisfying $\chi_{N}(\eta)=\delta$ and $\langle
x^{N}_{n}\rangle_{n}=\langle x_{n}\rangle_{n}$.
###### Proof.
Let $\langle\vec{x}^{\alpha}\mid\alpha<\eta\rangle$ be the
$\triangleleft$-least enumeration of $X^{\omega}$ in $H_{\theta}$, where each
$\vec{x}^{\alpha}$ is of the form $\langle x^{\alpha}_{n}\mid
n<\omega\rangle$. For each $N\in\mathcal{S}$ let $\alpha_{N}<\eta$ be such
that $\langle x^{N}_{n}\mid n<\omega\rangle=\vec{x}^{\alpha_{N}}$. Note that
$\alpha_{N}$ need not be a member of $N$ since $\langle x^{N}_{n}\mid
n<\omega\rangle$ need not. Since each $N\in\mathcal{S}$ is the union of a
sequence $\langle N_{i}\mid i<\rho\rangle$ with
$\operatorname{cof}(\rho)>\aleph_{0}$, and $\langle x^{N}_{n}\mid
n<\omega\rangle\subseteq N$ there is some $i<\rho$ so that $\langle
x^{N}_{n}\mid n<\omega\rangle\subset N_{i}$, and thus $\langle x^{N}_{n}\mid
n<\omega\rangle\in(X\cap N_{i})^{\omega}\in N$. Moreover, as $|X\cap
N_{i}|<\lambda$, we have that $|(X\cap N_{i})^{\omega}|<\eta$, and therefore
there exists some $\beta_{N}\in\eta\cap N$ so that $(X\cap
N_{i})^{\omega}\subset\langle\vec{x}^{\alpha}\mid\alpha<\beta_{N}\rangle$. We
conclude that $\alpha_{N}<\beta_{N}<\chi_{N}(\eta)$. Next, define
$S=\\{\chi_{N}(\eta)\mid N\in\mathcal{S}\\}$. The set $S\subseteq\eta$ is stationary,
and by choosing for each $\delta\in S$ a specific structure
$N_{\delta}\in\mathcal{S}$ with $\delta=\chi_{N_{\delta}}(\eta)$, we can form a
pressing down assignment taking each $\delta\in S$ to
$\alpha_{N_{\delta}}<\delta$. By Fodor's lemma, there are $\alpha^{*}<\eta$ and a
stationary $S^{*}\subseteq S$ such that $\alpha_{N_{\delta}}=\alpha^{*}$ for all $\delta\in S^{*}$. The claim
follows for $S^{*}$ and $\langle x_{n}\mid
n<\omega\rangle=\vec{x}^{\alpha^{*}}$. ∎
Let $\vec{\tau}=\langle\tau_{n}\mid n<\omega\rangle$ be an increasing sequence
of regular cardinals, $\lambda=\cup_{n}\tau_{n}$, and $\theta>\lambda^{+}$
regular. A set $C\subseteq\mathcal{P}_{\lambda}(H_{\theta})$ is a closed
unbounded set if it contains all elementary substructures $M\prec\mathfrak{A}$
of size $|M|<\lambda$ of some algebra
$\mathfrak{A}=(H_{\theta},\in,f_{n})_{n}$ on $H_{\theta}$. We reformulate the
definitions of Approachable Free Subset Property and Approachable Bounded
Subset Property from the introduction.
###### Definition 14.
1. 1.
Let $F:[\lambda]^{<\omega}\to\lambda$ be a function. We say that a subset
$X\subseteq\lambda$ is free with respect to $F$ if for every $\gamma\in X$,
$\gamma\not\in F\left[\left[X\setminus\\{\gamma\\}\right]^{<\omega}\right]$.
2. 2.
The Approachable Free Subset Property (AFSP) with respect to $\vec{\tau}$
asserts that there exists a closed unbounded set
$C\subseteq\mathcal{P}_{\lambda}(H_{\theta})$ of structures
$N\prec(H_{\theta};\in)$ so that for every internally approachable structure
$N\in C$ there exists some $m<\omega$ such that the set
$\\{\chi_{N}(\tau_{n})\mid m\leq n<\omega\\}$ is free with respect to every
function $F\in N$.
3. 3.
The Approachable Bounded Subset Property (ABSP) with respect to $\vec{\tau}$
asserts that there exists a closed unbounded set
$C\subseteq\mathcal{P}_{\lambda}(H_{\theta})$ of structures
$N\prec(H_{\theta};\in)$ so that for every internally approachable structure
$N\in C$ there exists some $m<\omega$ such that for every $F\in N$,
$F:[\lambda]^{k}\to\lambda$ of finite arity $k<\omega$, and distinct numbers
$d,d_{1},d_{2},\dots,d_{k}\in\omega\setminus m$, if
$F\left(\chi_{N}(\tau_{d_{1}}),\dots,\chi_{N}(\tau_{d_{k}})\right)<\tau_{d}$
then
$F\left(\chi_{N}(\tau_{d_{1}}),\dots,\chi_{N}(\tau_{d_{k}})\right)<\chi_{N}(\tau_{d}).$
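As a toy illustration of freeness (with a hypothetical function, not one from
the text): let $F:[\lambda]^{<\omega}\to\lambda$ be given by $F(s)=\sup(s)+1$
for nonempty $s$ and $F(\emptyset)=0$. Every value of $F$ on a nonempty finite
set is then a successor ordinal, so any set $X$ of nonzero limit ordinals below
$\lambda$ satisfies
$\gamma\not\in F\left[\left[X\setminus\\{\gamma\\}\right]^{<\omega}\right]\text{ for every }\gamma\in X,$
i.e. $X$ is free with respect to $F$.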
To see that the formulations in Definition 14 are equivalent to the ones given
in the introduction, note the following: if $\theta>\lambda=\cup_{n}\tau_{n}$
is the least for which there exists a club
$C\subseteq\mathcal{P}_{\lambda}(H_{\theta})$, definable from $\vec{\tau}$,
consisting of subalgebras
$M\subseteq\mathfrak{A}=(H_{\theta},\in,f_{n})_{n}$, then for every
$\theta^{\prime}>\theta$ and $M^{\prime}\prec H_{\theta^{\prime}}$, if
$\vec{\tau}\in M^{\prime}$ then $\theta,C\in M^{\prime}$ and $M^{\prime}\cap
C\in C$.
###### Lemma 15.
Suppose that $\vec{\tau}=\langle\tau_{n}\mid n<\omega\rangle$ is an increasing
sequence of regular cardinals.
1. 1.
If there is no continuous essentially tree-like scale on $\prod_{n}\tau_{n}$
then there is no continuous tree-like scale on $\prod_{n}\tau_{n}$.
2. 2.
AFSP w.r.t $\vec{\tau}$ implies that there is no continuous tree-like scale on
$\prod_{n}\tau_{n}$.
3. 3.
ABSP w.r.t $\vec{\tau}$ implies both
(i) AFSP w.r.t $\vec{\tau}$, and
(ii) there is no continuous essentially tree-like scale on
$\prod_{n}\tau_{n}$.
###### Proof.
1. 1.
This is an immediate consequence of the definitions of an essentially tree-
like scale and a tree-like scale on $\prod_{n}\tau_{n}$.
2. 2.
We prove the contrapositive statement, that if there exists a continuous tree-
like scale on $\prod_{n}\tau_{n}$ then AFSP fails with respect to
$\vec{\tau}$. Suppose that $\vec{f}$ is a continuous tree-like scale on
$\prod_{n}\tau_{n}$. Since $\vec{f}$ is tree-like, we can assign to it a
function $F:\lambda\to\lambda$, $\lambda=\cup_{n}\tau_{n}$, defined as
follows: For every $n<\omega$ and $\mu$, $\tau_{n}\leq\mu<\tau_{n+1}$, define
$F(\mu)=\begin{cases}f_{\alpha}(n)&\mbox{if }\mu=f_{\alpha}(n+1)\text{ for
some }\alpha<\eta\\\ 0&\mbox{otherwise.}\end{cases}$
$F(\mu)$ is well defined, i.e., does not depend on the choice of $\alpha$ such
that $\mu=f_{\alpha}(n+1)$, since $\vec{f}$ is tree-like. It is clear from the
definition of $F$ that for every $\delta<\eta$ and $n<\omega$,
$F(f_{\delta}(n+1))=f_{\delta}(n)$. Now, if
$C\subseteq\mathcal{P}_{\lambda}(H_{\theta})$ is a closed unbounded subset,
$N\in C$ is an internally approachable structure with $F\in N$, and
$\delta=\chi_{N}(\eta)$, then, by Lemma 12, $\chi_{N}^{\vec{\tau}}(n)=f_{\delta}(n)$ for all
but finitely many $n<\omega$. Hence, for all but finitely many $n<\omega$,
$F(\chi_{N}(\tau_{n+1}))=\chi_{N}(\tau_{n})$, which means that
$\\{\chi_{N}(\tau_{n+1}),\chi_{N}(\tau_{n})\\}$ is not free with respect to
$F\in N$. Since $C$ was an arbitrary closed and unbounded subset, AFSP with
respect to $\vec{\tau}$ fails.
3. 3.
The fact that ABSP implies AFSP is immediate from the definition of the two
properties. To show that ABSP w.r.t $\vec{\tau}$ implies that there is no
continuous scale on $\prod_{n}\tau_{n}$ which is essentially tree-like, we
prove the contrapositive statement. Suppose that $\langle
f_{\alpha}\mid\alpha<\eta\rangle$ is a continuous essentially tree-like scale
on a product $\prod_{n}\tau_{n}$. Then, by Definition 9, for every $n<\omega$,
there is a function $C_{n}:\tau_{n+1}\to\mathcal{P}(\tau_{n})$ so that for
every $\mu<\tau_{n+1}$, $C_{n}(\mu)$ is a closed and unbounded subset of
$\tau_{n}$ which is disjoint from
$\\{\mu_{n}<\tau_{n}\mid\exists\beta<\eta,f_{\beta}(n+1)=\mu\text{ and
}f_{\beta}(n)=\mu_{n}\\}$. Let $C$ be any club of elementary substructures of
$(H_{\theta};\in)$. Take an internally approachable substructure $N\in C$ of
size $|N|<\lambda=\cup_{n}\tau_{n}$, so that both $\langle\tau_{n}\mid
n<\omega\rangle$ and $\langle C_{n}\mid n<\omega\rangle$ belong to $N$. Define
$\delta=\chi_{N}(\eta)$ and, using Lemma 12, let $m<\omega$ be such that
$f_{\delta}(n)=\chi_{N}(\tau_{n})$ for all $n\geq m$. Fixing $n\geq m$ and
examining the elementary extension
$N^{\prime}=N[\\{f_{\delta}(n+1)\\}]=\\{F(f_{\delta}(n+1))\mid F\in
N\\}\prec(H_{\theta};\in)$ of $N$, we have that $C_{n}(f_{\delta}(n+1))\in
N^{\prime}$ since $C_{n}\in N$. Now, as
$C_{n}(f_{\delta}(n+1))\subseteq\tau_{n}$ is closed unbounded, we must have
that $\chi_{N^{\prime}}(\tau_{n})\in C_{n}(f_{\delta}(n+1))$. However
$\chi_{N}(\tau_{n})=f_{\delta}(n)\not\in C_{n}(f_{\delta}(n+1))$ by the
definition of $C_{n}$. This implies that
$\chi_{N^{\prime}}(\tau_{n})>\chi_{N}(\tau_{n})=f_{\delta}(n)$, which in turn,
implies that $F(f_{\delta}(n+1))>f_{\delta}(n)$ for some $F\in N$. Since $N\in
C$ where $C$ is an arbitrary closed unbounded collection, and $n$ is an
arbitrarily large finite ordinal, we conclude that ABSP fails with respect to
$\langle\tau_{n}\mid n<\omega\rangle$.
∎
### 1.2 Fine structure primer
#### 1.2.1 Ultrafilters
We shall take our fine structure from [30]. Our result almost certainly also
applies to different forms of fine structure such as the fine structure theory
of [17], in fact, the proof of Theorem 6 in particular would be greatly
simplified, but at the cost of significantly complicating the arguments in the
core model part of this paper. As there is currently no account of the
covering lemma for $\lambda$-indexing, we think it prudent to choose Mitchell-
Steel mice at this time. We don’t use [32], as $\lnot O^{\P}$ is much too
strong a limitation for this section. (While technically Mitchell and Steel
operate under the assumption of $\lnot M^{\\#}_{1}$ in [30], it is well
understood by now that their fine structure theory functions well past this
point.)
For our purposes an extender $F$ is a directed system of ultrafilters
$\\{(a,X)|a\in{\left[\operatorname{lh}(F)\right]}^{\mathord{<}\omega},X\subset\left[\operatorname{crit}(F)\right]^{|a|}\\}$
as described in [16, p. 384]. The individual ultrafilters will be denoted as
$F_{a}:=\\{X\subset\left[\operatorname{crit}(F)\right]^{|a|}|(a,X)\in F\\}$. For $a\subset
b$ and $f$ a function with domain $\left[\operatorname{crit}{F}\right]^{|a|}$,
we let $f^{a,b}$ be the function with domain
$\left[\operatorname{crit}{F}\right]^{|b|}$ determined by
$f^{a,b}(\bar{b})=f(\bar{a})$ where $\bar{a}$ is the unique subset of
$\bar{b}$ determined by the type of $a$ and $b$. This gives rise to an
embedding from $\operatorname{Ult}(\mathcal{M},F_{a})$ into
$\operatorname{Ult}(\mathcal{M},F_{b})$. The direct limit along those
embeddings is the extender ultrapower $\operatorname{Ult}(\mathcal{M},F)$,
elements of which we will present as pairs
$\left[f,a\right]^{\mathcal{M}}_{F}$ where $f\in\mathcal{M}$ is a function
with domain $\left[\operatorname{crit}(F)\right]^{|a|}$ and
$a\in{\left[\operatorname{lh}(F)\right]}^{\mathord{<}\omega}$. The direct
limit map shall be denoted
$\iota^{\mathcal{M}}_{F}:\mathcal{M}\rightarrow\operatorname{Ult}(\mathcal{M},F)$.
We will generally omit the superscript in this notation. This should not lead
to confusion. Note that we will later form ultrapowers where some functions
involved in the construction are not elements of the structure but merely
definable over it.
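To illustrate the coordinate bookkeeping in the definition of $f^{a,b}$,
suppose $a=\\{\alpha_{0},\alpha_{2}\\}\subset b=\\{\alpha_{0},\alpha_{1},\alpha_{2}\\}$
with $\alpha_{0}<\alpha_{1}<\alpha_{2}$. For
$\bar{b}=\\{\beta_{0},\beta_{1},\beta_{2}\\}$ with
$\beta_{0}<\beta_{1}<\beta_{2}$, the positions that $a$ occupies inside $b$
select
$f^{a,b}(\bar{b})=f(\\{\beta_{0},\beta_{2}\\}).$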
$\beta<\operatorname{lh}(F)$ is a generator of $F$ if it cannot be represented
as $\left[f,a\right]_{F}$ for any
$f\in{}^{\left[\operatorname{crit}(F)\right]^{|a|}}\operatorname{crit}(F)\cap\mathcal{M}$
and $a\in{\left[\beta\right]}^{\mathord{<}\omega}$, i.e.
$\\{b\cup\\{\xi\\}|f(b)=\xi\\}\notin F_{a\cup\\{\beta\\}}$. Let
$\operatorname{gen}(F)$ denote the strict supremum of the generators of $F$.
Also let
$\nu(F)=\max\\{\operatorname{gen}(F),(\operatorname{crit}(F)^{+})^{\mathcal{M}}\\}$.
For a subset $A$ of $\operatorname{lh}(F)$ we will write $F\upharpoonright A:=\\{(a,X)\in
F|a\subset A\\}$. We will consider this an extender, forming ultrapowers etc.,
even if $A$ is not an ordinal. Let $\eta<\operatorname{lh}(F)$ be such that
$\eta=\operatorname{gen}(F\upharpoonright\eta)$, then the trivial completion
is the
$(\operatorname{crit}(F),(\eta^{+})^{\operatorname{Ult}(\mathcal{M};F\upharpoonright\eta)})$-extender
derived from $\iota_{F\upharpoonright\eta}$.
#### 1.2.2 Premice
A potential premouse is a structure of the form $\mathcal{M}=\langle
J^{\vec{E}}_{\alpha};\in,\vec{E},F\rangle$ where $J^{\vec{E}}_{\alpha}$ is a
model constructed from a sequence of extenders $\vec{E}$ using the Jensen
hierarchy. For $\beta\leq\alpha$ we define
$\mathcal{M}|\beta:=(J^{\vec{E}\upharpoonright\beta}_{\beta};\in,\vec{E}\upharpoonright\beta,\vec{E}_{\beta})$
and
$\mathcal{M}||\beta:=(J^{\vec{E}\upharpoonright\beta}_{\beta};\in,\vec{E}\upharpoonright\beta)$.
(The difference between the two notations lies in including a top predicate.)
If $\mathcal{N}$ is of one of the above forms then we write
$\mathcal{N}\trianglelefteq\mathcal{M}$ and say $\mathcal{N}$ is an initial
segment of $\mathcal{M}$.
$\vec{E}$ must be good, i.e. it has the following properties:
* (Idx)
for all $\beta<\alpha$ if $\vec{E}_{\beta}\neq\emptyset$, then
$\beta=(\nu(\vec{E}_{\beta})^{+})^{\operatorname{Ult}(\mathcal{M}|\beta;\vec{E}_{\beta})}$;
* (Coh)
for all $\beta<\alpha$ if $\vec{E}_{\beta}\neq\emptyset$, then
$\mathcal{M}||\beta=\operatorname{Ult}(\mathcal{M}|\beta;\vec{E}_{\beta})|\beta$;
* (ISC)
for all $\beta<\alpha$ if $\vec{E}_{\beta}\neq\emptyset$, then for all
$\eta<\alpha$ such that
$\eta=\operatorname{gen}(\vec{E}_{\beta}\upharpoonright\eta)$ the trivial
completion of $\vec{E}_{\beta}\upharpoonright\eta$ is on $\vec{E}$ or
$\vec{E}_{\eta}\neq\emptyset$ and it is on $\iota_{\vec{E}_{\eta}}(\vec{E})$.
Note that $\vec{E}_{\beta}$ measures exactly those subsets of its critical
point that are in $\mathcal{M}||\beta$ for any $\beta<\alpha$ such that
$\vec{E}_{\beta}\neq\emptyset$. $F$ the top extender must be such that
$\vec{E}{}^{\smallfrown}F$ remains good. $F$ can be empty in which case
$\mathcal{M}$ is called passive, otherwise $\mathcal{M}$ is active.
To an active potential premouse we associate three constants:
$\mu^{\mathcal{M}}$ the critical point of the top extender;
$\nu^{\mathcal{M}}$ the strict supremum of the generators of $\mathcal{M}$’s
top extender or $((\mu^{\mathcal{M}})^{+})^{\mathcal{M}}$ whichever is larger;
$\gamma^{\mathcal{M}}$ the index of the longest initial segment of
$\mathcal{M}$’s top extender (if it exists).
We distinguish three different types of active potential premice:
$\mathcal{M}$ is active type I if
$\nu^{\mathcal{M}}=(\mu^{\mathcal{M},+})^{\mathcal{M}}$; $\mathcal{M}$ is
active type II if $\nu^{\mathcal{M}}$ is a successor ordinal; $\mathcal{M}$ is
active type III if it is neither type I nor type II, i.e. the set of
generators of $\mathcal{M}$’s top extender has limit order type.
#### 1.2.3 Fine structure
The big disadvantage of Mitchell-Steel indexing is that we cannot deal
directly with definability over $\mathcal{M}$, but instead need to work with
an amenable code of our original structure. The exact nature of this coding is
dependent on the type of $\mathcal{M}$. We will take inspiration from [29] and
use a uniform notation $\mathcal{C}_{0}(\mathcal{M})$ for this code.
If $\mathcal{M}:=\langle|\mathcal{M}|;\in,\vec{E},F\rangle$ is an active
potential premouse of type I or II, we will define an alternative predicate
$F^{c}$ coding the top extender $F$: $F^{c}$ consists of tuples
$(\gamma,\xi,a,X)$ such that
$\xi\in\left(\mu^{\mathcal{M}},(\mu^{\mathcal{M},+})^{\mathcal{M}}\right)$ and
$\gamma\in\left(\nu(F),\operatorname{On}\cap|\mathcal{M}|\right)$ is such that
$(F\cap({\left[\nu(F)\right]}^{\mathord{<}\omega}\times\mathcal{M}||\xi))\in\mathcal{M}||\gamma$,
and
$(a,X)\in(F\cap({\left[\gamma\right]}^{\mathord{<}\omega}\times\mathcal{M}||\xi))$.
The point is that $F^{c}$ is amenable. We let
$\mathcal{C}_{0}(\mathcal{M}):=\langle|\mathcal{M}|;\in,\vec{E},F^{c}\rangle$.
If $\mathcal{M}$ on the other hand is active type III we have to make bigger
changes. In the language of [30] we have to “squash”, that is remove ordinals
from the structure. (This is to ensure that the initial segment condition is
preserved by iterations.) We let $\mathcal{C}_{0}(\mathcal{M}):=\langle
J^{\vec{E}}_{\nu(F)};\in,\vec{E}\upharpoonright\nu(F),F\upharpoonright\nu(F)\rangle$.
We then define $r\Sigma_{1}$-formulae to be $\Sigma_{1}$ over
$\mathcal{C}_{0}(\mathcal{M})$, and $r\Sigma_{n+1}$-formulae to be
$\Sigma_{1}$ in a predicate coding an appropriate segment of the
$r\Sigma_{n}$-theory of $\mathcal{C}_{0}(\mathcal{M})$. We will let
$\operatorname{Th}^{\mathcal{M}}_{n}(\alpha,q):=\\{(\lceil\phi\rceil,b)|\phi\text{
is
}r\Sigma_{n},b\in{\left[\alpha\right]}^{\mathord{<}\omega},\mathcal{C}_{0}(\mathcal{M})\models\phi(b,q)\\}$.
Projecta can then be defined relative to these formulas, i.e.
$\rho_{n+1}(\mathcal{M})$ is the least ordinal such that some
$r\Sigma_{n+1}$-definable (in parameters) subset of it is not in
$\mathcal{C}_{0}(\mathcal{M})$.
$\rho_{0}(\mathcal{M})=\operatorname{On}\cap\mathcal{C}_{0}(\mathcal{M})$
(which might be smaller than $\operatorname{On}\cap\mathcal{M}$).
As usual we define $p_{n+1}(\mathcal{M})$, the $(n+1)$-th standard parameter,
to be the lexicographically least
$p\in{\left[\operatorname{On}\cap\mathcal{C}_{0}(\mathcal{M})\setminus\rho_{n+1}(\mathcal{M})\right]}^{\mathord{<}\omega}$
that defines a missing subset of $\rho_{n+1}(\mathcal{M})$.
We can also define a canonical $r\Sigma_{n+1}$-Skolem function, allowing us to
form $\operatorname{Hull}^{\mathcal{M}}_{n+1}(A)$ given a subset $A$ of
$\mathcal{C}_{0}(\mathcal{M})$. Note that while our notation makes it look
like a hull of $\mathcal{M}$ it is a substructure of
$\mathcal{C}_{0}(\mathcal{M})$ not $\mathcal{M}$.
We say $\mathcal{M}$ is $n$-sound above $\beta$ relative to $p$ iff
$\mathcal{C}_{0}(\mathcal{M})=\operatorname{Hull}^{\mathcal{M}}_{n}(\beta\cup\\{p\\})$.
We will not mention the parameter if $\mathcal{M}$ is $n$-sound above $\beta$
relative to $p_{n}(\mathcal{M})$. If $\mathcal{M}$ is $n$-sound above
$\rho_{n}(\mathcal{M})$, we simply say that $\mathcal{M}$ is $n$-sound.
A potential premouse is then a premouse if all its initial segments are
$n$-sound for all $n$. We can now also define fine structural ultrapowers. Let
$\mathcal{M}$ be a premouse and let $F$ be an extender that measures all
subsets of its critical point in $\mathcal{M}$. Let $n$ be such that
$\operatorname{crit}(F)<\rho_{n}(\mathcal{M})$ and $\mathcal{M}$ is $n$-sound.
Then $\operatorname{Ult}_{n}(\mathcal{M},F)$ is the ultrapower formed using
all equivalence classes $\left[f,a\right]_{F}$ where
$a\in{\left[\operatorname{lh}(F)\right]}^{\mathord{<}\omega}$ and $f$ is a
function with domain $\left[\operatorname{crit}(F)\right]^{|a|}$ that is
$r\Sigma_{n}$-definable over $\mathcal{M}$ (in parameters).
###### Lemma 16.
Let $\mathcal{M}$ be a premouse, and let
$\kappa\in\mathcal{C}_{0}(\mathcal{M})$ be a regular cardinal there. Assume
$\rho_{n+1}(\mathcal{M})\leq\beta<\kappa\leq\rho_{n}(\mathcal{M})$ for some
$n$ such that $\mathcal{M}$ is $(n+1)$-sound above $\beta$. Then
$\operatorname{cof}(\kappa)=\operatorname{cof}(\rho_{n}(\mathcal{M}))$.
###### Proof.
For $\xi<\rho_{n}(\mathcal{M})$ we let $\mathcal{N}_{\xi}$ be the structure
$\mathcal{M}||\xi$ with
$\operatorname{Th}^{\mathcal{M}}_{n}(\xi,p_{n}(\mathcal{M}))$ as an additional
predicate. Let then $\kappa_{\xi}$ be the supremum of ordinals less than
$\kappa$ which are $\Sigma_{1}$-definable over $\mathcal{N}_{\xi}$ from
$p_{n+1}(\mathcal{M})$ and ordinals less than $\beta$. As all objects involved
are elements of $\mathcal{M}$ and $\kappa$ is regular in
$\mathcal{C}_{0}(\mathcal{M})$, we must have $\kappa_{\xi}<\kappa$. On the
other hand, $\sup\limits_{\xi<\rho_{n}(\mathcal{M})}\kappa_{\xi}=\kappa$ as
$\mathcal{M}$ is $(n+1)$-sound above $\beta$. Since $\xi\mapsto\kappa_{\xi}$ is
monotone, the two cofinalities agree. ∎
An additional fact that we will need is that if $\mathcal{M}$ is an active
(potential) premouse, then
$\operatorname{cof}(\operatorname{On}\cap\mathcal{M})=\operatorname{cof}((\mu^{\mathcal{M},+})^{\mathcal{M}})$.
See the last remark of Chapter 1 in [30].
#### 1.2.4 Iterability
A (normal, $\omega$-maximal) iteration tree on a premouse $\mathcal{M}$ is a
tuple
$\mathcal{T}:=\langle\langle\mathcal{M}^{\mathcal{T}}_{\alpha}:\alpha\leq\operatorname{lh}(\mathcal{T})\rangle,\langle
E^{\mathcal{T}}_{\alpha}:\alpha<\operatorname{lh}(\mathcal{T})\rangle,D^{\mathcal{T}},\langle\iota^{\mathcal{T}}_{\alpha,\beta}:\alpha\leq_{\mathcal{T}}\beta\leq\operatorname{lh}(\mathcal{T})\rangle\rangle$
where $\mathcal{M}^{\mathcal{T}}_{\alpha}$ is a premouse for all
$\alpha\leq\operatorname{lh}(\mathcal{T})$
($\mathcal{M}^{\mathcal{T}}_{0}=\mathcal{M}$); $E^{\mathcal{T}}_{\alpha}$ is
an extender from the $\mathcal{M}^{\mathcal{T}}_{\alpha}$-sequence for all
$\alpha<\operatorname{lh}(\mathcal{T})$, $\alpha<\beta$ implies
$\operatorname{lh}(E^{\mathcal{T}}_{\alpha})<\operatorname{lh}(E^{\mathcal{T}}_{\beta})$;
$\iota^{\mathcal{T}}_{\alpha,\beta}:\mathcal{C}_{0}(\mathcal{M}^{\mathcal{T}}_{\alpha})\rightarrow\mathcal{C}_{0}(\mathcal{M}^{\mathcal{T}}_{\beta})$
is the (possibly) partial iteration map for all
$\alpha\leq_{\mathcal{T}}\beta\leq\operatorname{lh}(\mathcal{T})$, it is total
iff
$D^{\mathcal{T}}\cap\left(\alpha,\beta\right]_{\leq_{\mathcal{T}}}=\emptyset$;
$\leq_{\mathcal{T}}$ is the tree order on
$\operatorname{lh}(\mathcal{T})$ with root $0$, if
$\gamma+1\leq\operatorname{lh}(\mathcal{T})$, then the
$\mathcal{T}$-predecessor of $\gamma+1$ is the least $\beta$ such that
$\operatorname{crit}(E^{\mathcal{T}}_{\gamma})<\operatorname{gen}(E^{\mathcal{T}}_{\beta})$,
in that case $(\mathcal{M}^{\mathcal{T}}_{\gamma+1})^{*}$ is the segment of
$\mathcal{M}^{\mathcal{T}}_{\beta}$ to which
$E^{\mathcal{T}}_{\gamma}$ is applied, if
$\lambda\leq\operatorname{lh}(\mathcal{T})$ is a limit, then
$b^{\mathcal{T}}_{\lambda}:=\left[0,\lambda\right)_{\leq_{\mathcal{T}}}$ is a
cofinal branch whose intersection with $D^{\mathcal{T}}$ is finite,
$\mathcal{M}^{\mathcal{T}}_{\lambda}$ must be the direct limit of
$\langle\mathcal{M}^{\mathcal{T}}_{\alpha},\iota^{\mathcal{T}}_{\alpha,\beta}:\alpha\leq_{\mathcal{T}}\beta\in
b^{\mathcal{T}}_{\lambda}\rangle$;
finally, $\gamma+1\in D^{\mathcal{T}}$ if and only if
$(\mathcal{M}^{\mathcal{T}}_{\gamma+1})^{*}\neq\mathcal{M}^{\mathcal{T}}_{\beta}$.
A $\gamma$-iteration strategy $\Sigma$ for a premouse $\mathcal{M}$ is a
function such that $\Sigma(\mathcal{T})$ is a cofinal and wellfounded branch
for every iteration tree $\mathcal{T}$ on $\mathcal{M}$ of limit length $\mathord{<}\gamma$
and with the property that
$\Sigma(\mathcal{T}\upharpoonright\alpha)=\left[0,\alpha\right)_{\leq_{\mathcal{T}}}$
for all limit $\alpha<\operatorname{lh}(\mathcal{T})$. $\mathcal{M}$ is
$\gamma$-iterable if there exists a $\gamma$-iteration strategy for
$\mathcal{M}$. We will just say $\mathcal{M}$ is iterable if it is
$\gamma$-iterable for all ordinals $\gamma$.
Let $\mathcal{M}$ be a premouse, $n<\omega$ and let
$p_{n+1}(\mathcal{M})=\langle\xi_{0},\ldots,\xi_{k-1}\rangle$. The $(n+1)$-th
solidity witness $w_{n+1}(\mathcal{M})$ is a tuple $\langle
t_{0},\ldots,t_{k-1}\rangle$ where
$t_{i}=\operatorname{Th}^{\mathcal{M}}_{n+1}(\xi_{i},\langle\xi_{0},\ldots,\xi_{i-1}\rangle).$
We say $\mathcal{M}$ is $(n+1)$-solid if
$w_{n+1}(\mathcal{M})\in\mathcal{C}_{0}(\mathcal{M})$.
A core result of [30] is that any reasonably iterable $n$-sound premouse is
$(n+1)$-solid. Mitchell-Steel also showed the following with similar methods,
see the remark after Theorem 8.2. Note that the requirement for unique
branches can be replaced by the weak Dodd-Jensen property from [29].
###### Lemma 17 (Condensation Lemma).
Let $\mathcal{M}:=(|\mathcal{M}|;\in,\vec{E},F)$ be an $(n+1)$-sound premouse
such that every countable hull of $\mathcal{M}$ has a
$(\omega_{1}+1)$-iteration strategy. Let $\mathcal{N}$ be a premouse such that
there exists an $r\Sigma_{n+1}$-elementary embedding
$\pi:\mathcal{C}_{0}(\mathcal{N})\rightarrow\mathcal{C}_{0}(\mathcal{M})$ with
$\operatorname{crit}(\pi)\geq\rho_{n+1}(\mathcal{N})$. Then $\mathcal{N}$ is
an initial segment of $\mathcal{M}$ or of
$\operatorname{Ult}(\mathcal{M},\vec{E}_{\operatorname{crit}(\pi)})$.
Both these results use the notion of a phalanx (although this notion was not
yet fully developed by the time of [30]) of which we too will have need. A
phalanx is a tuple
$\langle\langle\mathcal{M}_{i}:i\leq\alpha\rangle,\langle\kappa_{i}:i<\alpha\rangle\rangle$
where $\mathcal{M}_{i}$ agrees with $\mathcal{M}_{j}$ up to
$(\kappa^{+}_{i})^{\mathcal{M}_{j}}$ for all $i<j\leq\alpha$.
Phalanxes are a natural byproduct of iteration trees, i.e. if $\mathcal{T}$ is
a normal iteration tree on some premouse, then
$\langle\langle\mathcal{M}^{\mathcal{T}}_{i}:i\leq\operatorname{lh}(\mathcal{T})\rangle,\langle\nu(E^{\mathcal{T}}_{i}):i<\operatorname{lh}(\mathcal{T})\rangle\rangle$
is a phalanx.
We can then also define iterability on phalanxes as a natural extension of
the structure of iteration trees. Given a phalanx
$\langle\langle\mathcal{M}_{i}:i\leq\alpha\rangle,\langle\kappa_{i}:i<\alpha\rangle\rangle$
and an extender $E$ we can extend the phalanx by applying $E$ to
$\mathcal{M}_{i}$ where $i$ is minimal with
$\operatorname{crit}(E)<\kappa_{i}$. (Note we have to require that the length
of $E$ is above $\sup\limits_{i<\alpha}\kappa_{i}$ to maintain “normality”.)
A notion of iteration then follows naturally. The most critical difference
here is that we have to keep track above which element of the phalanx any
given model of the iteration tree lies. The art of phalanx iteration lies in
arranging things such that the last model of a co-iteration lies above the
“right” model.
## 2 Forcing the Approachable Bounded Subset Property
Our forcing notation is mostly standard. We use the Jerusalem forcing
convention by which “a condition $p$ extends (is more informative than) $q$”
is denoted by $p\geq q$. In general, names for a set $x$ in a generic
extension will be denoted by $\dot{x}$. If $x$ is in the ground model then its
canonical name is denoted by $\check{x}$.
We denote our initial ground model by $V^{\prime}$, which we assume to satisfy
the following assumptions: there are two increasing sequences
$\langle\kappa_{n}\mid n<\omega\rangle$, $\langle\lambda_{n}\mid
n<\omega\rangle$ of regular cardinals, with
$\lambda_{n}<\kappa_{n+1}<\lambda_{n+1}$ for all $n$, and that each
$\lambda_{n}$ is measurable of Mitchell order $o(\lambda_{n})=\kappa_{n}$.
For each $n<\omega$, let $\langle
U_{\lambda_{n},\alpha}\mid\alpha<\kappa_{n}\rangle$ be a
$\triangleleft$-increasing sequence of normal measures on $\lambda_{n}$. I.e.,
$U_{\lambda_{n},\alpha}$ belongs to the ultrapower by $U_{\lambda_{n},\beta}$,
whenever $\alpha<\beta$. Denote $\lambda=\cup_{n}\lambda_{n}$.
In order to apply our main extender-based forcing notion, we first force with
a preparatory forcing $\mathbb{P}^{\prime}$ over $V^{\prime}$ to transform the
Mitchell-order increasing sequences $\langle
U_{\lambda_{n},\alpha}\mid\alpha<\kappa_{n}\rangle$ of normal measures, to
Rudin-Keisler increasing sequences. For this, we force with a Gitik-iteration
$\mathbb{P}^{\prime}$ ([11]) for changing the cofinality of measurable
cardinals between the cardinals $(\kappa_{n},\lambda_{n})$ for all $n<\omega$.
Let $G^{\prime}\subseteq\mathbb{P}^{\prime}$ be a generic filter over
$V^{\prime}$, and set $V=V^{\prime}[G^{\prime}]$. We list a number of facts concerning
the extensions in $V$ of the measures $\langle
U_{\lambda_{n},\alpha}\mid\alpha<\kappa_{n}\rangle$ from $V^{\prime}$. The
analysis leading to these facts can be found in [11], or [3] for a similar
type of poset. The Mitchell-order increasing sequence $\langle
U_{\lambda_{n},\alpha}\mid\alpha<\kappa_{n}\rangle$ extends to a Rudin-Keisler
increasing sequence of $\lambda_{n}$-complete measures $\langle
U^{*}_{\lambda_{n},\alpha}\mid\alpha<\kappa_{n}\rangle$, with Rudin-Keisler
projections $\pi^{n}_{\beta,\alpha}:\lambda_{n}\to\lambda_{n}$ for each
$\alpha<\beta<\kappa_{n}$. We note that the least measure
$U^{*}_{\lambda_{n},0}$ remains normal. We denote for each $n<\omega$ the
linear directed system of measures
$\\{U^{*}_{\lambda_{n},\alpha},\pi^{n}_{\beta,\alpha}\mid\alpha\leq\beta<\kappa_{n}\\}$
by $E_{n}$, and further denote each $U^{*}_{\lambda_{n},\alpha}$ by
$E_{n}(\alpha)$. Let
$j_{E_{n}}:V\to
M_{E_{n}}=\operatorname{Ult}(V,E_{n})=\operatorname{dirlim}_{\alpha<\kappa_{n}}\operatorname{Ult}(V,E_{n}(\alpha))$
Each measure $E_{n}(\alpha)$ can be derived from $j_{E_{n}}$ using a generator
$\gamma^{E_{n}}_{\alpha}<j_{E_{n}}(\lambda_{n})$. The following list
summarizes the key properties of the extenders $E_{n}$:
###### Fact 18.
1. 1.
$\operatorname{cp}(j_{E_{n}})=\lambda_{n}$ and
$M_{E_{n}}^{<\kappa_{n}}\subseteq M_{E_{n}}$
2. 2.
$\gamma^{E_{n}}_{0}=\lambda_{n}$ and
$\langle\gamma^{E_{n}}_{\alpha}\mid\alpha<\kappa_{n}\rangle$ is a strictly increasing
and continuous sequence
3. 3.
$\gamma^{E_{n}}=\sup_{\alpha<\kappa_{n}}\gamma^{E_{n}}_{\alpha}$ is strongly
inaccessible in $M_{E_{n}}$, and we may assume that there exists a function
$g_{n}:\lambda_{n}\to\lambda_{n}$ such that
$\gamma^{E_{n}}=j_{E_{n}}(g_{n})(\lambda_{n})$
4. 4.
for each $\alpha<\beta<\kappa_{n}$, $E_{n}(\alpha)$ is strictly weaker than
$E_{n}(\beta)$ in the Rudin-Keisler order. I.e., for every $A\in
E_{n}(\alpha)$ there is $\nu\in A$ such that
$\pi_{\beta,\alpha}^{-1}(\\{\nu\\})$ is unbounded in $\lambda_{n}$.
5. 5.
for every $\alpha<\kappa_{n}$ and $h:\lambda_{n}\to\lambda_{n}$ such that
$j_{E_{n}}(h)(\gamma^{E_{n}}_{\alpha})<\gamma^{E_{n}}$, we have
$j_{E_{n}}(h)(\gamma^{E_{n}}_{\alpha})<\gamma^{E_{n}}_{\beta}$ for all
$\beta>\alpha$.
Next, we force over $V$ with a short extender-based-type forcing $\mathbb{P}$,
associated with the extenders $E_{n}$, $n<\omega$. $\mathbb{P}$ is a variant
of the forcing in [2]. Extending the arguments of [2], we focus here on the
generic scale associated with the extender-based-forcing, and use it to
analyze the possible internally approachable structures in the generic
extensions. This approach follows the one taken in [1], where an extender-
based forcing has been used to obtain results concerning internally-
approachable structures witnessing that ground model sequences $\langle
S_{n}\mid n<\omega\rangle$ being tightly-stationary.
###### Definition 19.
Conditions $p\in\mathbb{P}$ are sequences $p=\langle p_{n}\mid
n<\omega\rangle$ such that there is some $\ell<\omega$ for which the following
requirements hold:
1. 1.
for $n<\ell$, $p_{n}=\langle f_{n}\rangle$, where
$f_{n}:\lambda^{+}\to\lambda_{n}$ is a partial function of size
$|f_{n}|\leq\lambda$, with $0\in\operatorname{dom}(f_{n})$, such that both
$f_{n}(0)$ and $g_{n}(f_{n}(0))$ are strongly inaccessible cardinals below $\lambda_{n}$.
2. 2.
For $n\geq\ell$, $p_{n}=\langle f_{n},a_{n},A_{n}\rangle$, where $f_{n}$ is as
above, $a_{n}:\lambda^{+}\to\kappa_{n}$ is a partial continuous and order-
preserving function, whose domain is a closed and bounded subset of $\lambda^{+}$
of size $|a_{n}|<\kappa_{n}$.
We define $\operatorname{mc}(a_{n})$ to be
$a_{n}(\max(\operatorname{dom}(a_{n})))=\max(\operatorname{rng}(a_{n}))$, and require the set
$A_{n}$ to be contained in $\lambda_{n}\setminus\lambda_{n-1}$ and to belong to
$E_{n}(\operatorname{mc}(a_{n}))$.
3. 3.
$\operatorname{dom}(a_{n})\cap\operatorname{dom}(f_{n})=\emptyset$ and
$\operatorname{dom}(a_{n})\subseteq\operatorname{dom}(a_{n+1})$ for every
$n\geq\ell$, $a_{n}(0)=0$, and for every
$\delta\in\cup_{n}\operatorname{dom}(f_{n})$ there exists some $m<\omega$ such
that $\delta\in\operatorname{dom}(a_{m})$.
For a condition $p\in\mathbb{P}$ as above, we denote $\ell,f_{n},a_{n},A_{n}$
by $\ell^{p},f_{n}^{p},a_{n}^{p},A_{n}^{p}$ respectively. Direct extensions
and end-extensions of conditions are defined as follows. A condition $p^{*}$
is a direct extension of $p$, if $\ell^{p^{*}}=\ell^{p}$, $f_{n}^{p}\subseteq
f_{n}^{p^{*}}$ for all $n<\omega$, and $a_{n}^{p}\subseteq a_{n}^{p^{*}}$,
$A_{n}^{p^{*}}\subseteq(\pi^{n}_{\operatorname{mc}(a_{n}^{p^{*}}),\operatorname{mc}(a_{n}^{p})})^{-1}A_{n}^{p}$
for all $n\geq\ell^{p}$.
For every $\nu\in A^{p}_{n}$, define
$p_{n}{}^{\frown}\langle\nu\rangle=\langle f^{\prime}_{n}\rangle$, where
$f^{\prime}_{n}=f^{p}_{n}\cup\\{\langle\alpha,\pi^{n}_{\operatorname{mc}(a_{n}^{p}),a_{n}^{p}(\alpha)}(\nu)\rangle\mid\alpha\in\operatorname{dom}(a_{n}^{p})\\}.$
If $\vec{\nu}=\langle\nu_{\ell^{p}},\dots,\nu_{n-1}\rangle$ belongs to $\prod_{i=\ell^{p}}^{n-1}A^{p}_{i}$, we define the end-extension of $p$ by $\vec{\nu}$, denoted $p{}^{\frown}\vec{\nu}$, to be the condition
$p^{\prime}=\langle p^{\prime}_{n}\mid n<\omega\rangle$, defined by
$p^{\prime}_{k}=p_{k}$ for every $k\not\in\\{\ell^{p},\dots,n-1\\}$, and
$p^{\prime}_{k}=p_{k}{}^{\frown}\langle\nu_{k}\rangle$ otherwise. A condition
$q\in\mathbb{P}$ extends $p$ if $q$ is obtained from $p$ by a finite sequence
of end-extensions and direct extensions. Equivalently, $q$ is a direct
extension of an end-extension $p{}^{\frown}\vec{\nu}$ of $p$. Following the
Jerusalem forcing convention, we write $p\geq q$ if $p$ extends $q$, and
$p\geq^{*}q$ if $p$ is a direct extension of $q$.
###### Notation 20.
We introduce the following notational convention for the Rudin-Keisler
projections $\pi^{n}_{\alpha,\beta}$ to be applied in the context of the
forcing $\mathbb{P}$. Let $p$ be a condition and $\nu\in A^{p}_{n}$ for some
$n\geq\ell^{p}$ and $\alpha\in\operatorname{dom}(a_{n}^{p})$. We write
$\pi^{p}_{\operatorname{mc}(p),\alpha}(\nu)$ for
$\pi^{n}_{\operatorname{mc}(a_{n}^{p}),a_{n}^{p}(\alpha)}(\nu)$.
(Note that the index $n$ is determined by the fact that $\nu\in A^{p}_{n}\subseteq\lambda_{n}\setminus\lambda_{n-1}$.) Similarly, for a
sequence $\vec{\nu}=\langle\nu_{i}\rangle_{\ell^{p}\leq
i<n}\in\prod_{\ell^{p}\leq i<n}A^{p}_{i}$, we write
$\pi^{p}_{\operatorname{mc}(p),\alpha}(\vec{\nu})$ for the projected sequence
$\langle\pi^{p}_{\operatorname{mc}(p),\alpha}(\nu_{i})\mid\ell^{p}\leq
i<n\rangle$.
We proceed to list several standard basic properties of the poset
$\mathbb{P}$, referring the reader to [2] for details.
###### Lemma 21.
1. 1.
$(\mathbb{P},\leq,\leq^{*})$ is a Prikry-type forcing
2. 2.
for each $p\in\mathbb{P}$, the direct extension order $\leq^{*}$ of $\mathbb{P}/p$ is $\kappa_{\ell^{p}}$-closed
3. 3.
$\mathbb{P}$ satisfies the $\lambda^{++}$-c.c.
4. 4.
(Strong Prikry Property) Let $D\subseteq\mathbb{P}$ be a dense open set. For
every $p\in\mathbb{P}$ there are $p^{*}\geq^{*}p$ and $n<\omega$, such that
for every $\vec{\nu}\in\prod_{\ell^{p}\leq i<n}A_{i}^{p^{*}}$,
$p^{*}{}^{\frown}\vec{\nu}\in D$.
It is routine to verify that the above properties imply that $\mathbb{P}$ does not add new bounded subsets to $\lambda$, and does not collapse $\lambda^{++}$. We extend our analysis of $\mathbb{P}$ below to show that it preserves $\lambda^{+}$. This result can also be derived by a standard application of the Weak Covering Theorem. To extend our study of the poset $\mathbb{P}$, we introduce a family of orderings $\leq^{m}$, $m<\omega$, which refine the direct extension ordering $\leq^{*}$.
###### Notation 22.
Let $p,q$ be two conditions in $\mathbb{P}$. For $m<\omega$ we write
$p\leq^{m}q$ if $p\leq^{*}q$ and $a^{p}_{n}=a^{q}_{n}$, $A^{p}_{n}=A^{q}_{n}$
for all $n<m$.
Therefore, for each $m<\omega$, $\leq^{m}$ is $\kappa_{m}$-closed and
$\leq^{m+1}\thinspace\subseteq\thinspace\leq^{m}$.
###### Lemma 23.
Let $\theta>\lambda^{+}$ be regular, $\triangleleft$ be a well-ordering of $H_{\theta}$, and $M\prec(H_{\theta};\in,\triangleleft)$ satisfy $\mathbb{P}\in M$, $|M|=\lambda$, and $V_{\lambda}\subseteq M$. Suppose that there exists an enumeration $\vec{D}=\langle D_{\mu}\mid\mu<\lambda\rangle$ of all dense open subsets of $\mathbb{P}$ in $M$, so that $\vec{D}\upharpoonright\nu\in M$ for every $\nu<\lambda$. Then for every condition $p\in\mathbb{P}\cap M$ and $\ell^{*}$, $\ell^{p}\leq\ell^{*}<\omega$, there exists $p^{*}\geq^{\ell^{*}}p$ so that for each dense open set $D\in M$ there are:
* •
$q\in M$ with $p\leq^{\ell^{*}}q\leq^{\ell^{*}}p^{*}$,
* •
a finite ordinal $n^{D}<\omega$, and
* •
a function $N^{D}:\prod_{\ell^{p}\leq i<n^{D}}A^{q}_{i}\to\omega$,
such that for every pair of sequences $\vec{\nu}^{1},\vec{\nu}^{2}$,
satisfying
$\vec{\nu}^{1}\in\prod_{\ell^{p}\leq i<n^{D}}A^{q}_{i},\text{ and }\vec{\nu}^{2}\in\prod_{n^{D}\leq i<N^{D}(\vec{\nu}^{1})}A^{q}_{i},$
the condition $q{}^{\frown}\vec{\nu}^{1}{}^{\frown}\vec{\nu}^{2}$ belongs to
$D$.
###### Remark 24.
We note that the condition $q{}^{\frown}\vec{\nu}^{1}{}^{\frown}\vec{\nu}^{2}$
in the statement of the Lemma belongs to $M$ as $V_{\lambda}\subseteq M$.
Therefore, Lemma 23 implies that $p^{*}$ is a generic condition for
$(M,\mathbb{P})$, namely, it forces the statement
$\dot{G}\cap\check{M}\cap\check{D}\neq\emptyset$ for every dense open set
$D\in M$.
###### Proof.
We assume for notational simplicity that $\ell^{p}=0$. The proof for the
general case is similar. We fix for each $n<\omega$ a bijection
$\psi_{n}:\lambda_{n}\to[\lambda_{n}]^{n+1}\times\lambda_{n}$ in $M$. Our
final condition $p^{*}$ will be obtained as a limit of a carefully constructed
sequence $\langle p^{n}\mid\ell^{*}\leq n<\omega\rangle$, starting from
$p^{\ell^{*}}=p$, and consisting of conditions in $M$. Moreover, it will
satisfy $p^{n}\leq^{n+1}p^{n+1}$ for all $n\geq\ell^{*}$. Suppose that $p^{n}$
has been defined for some $n\geq\ell^{*}$. Our goal is to construct an
extension $p^{n+1}\geq^{n+1}p^{n}$, so that for every ordinal $\mu$,
$\lambda_{n-1}\leq\mu<\lambda_{n}$ there exists a function
$N^{D_{\mu}}:\prod_{i\leq n}A^{p^{n+1}}_{i}\to\omega$ so that for every
$\vec{\nu}^{1}\in\prod_{i\leq n}A^{p^{n+1}}_{i}\text{ and
}\thinspace\vec{\nu}^{2}\in\prod_{n+1\leq
i<N^{D_{\mu}}(\vec{\nu}^{1})}A^{p^{n+1}}_{i}$
$p^{n+1}{}^{\frown}\vec{\nu}^{1}{}^{\frown}\vec{\nu}^{2}\in D_{\mu}$. We note that this will guarantee $n^{D_{\mu}}=n+1$ for $\lambda_{n-1}\leq\mu<\lambda_{n}$. $p^{n+1}$ will be constructed from $p^{n}$
in $(\lambda_{n}+1)$-many steps, using two sequences of condition parts,
$\langle\vec{f}_{i}\mid i\leq\lambda_{n}\rangle$ and $\langle q^{i}\mid i\leq\lambda_{n}\rangle$ which satisfy the following requirements:
1. 1.
$\vec{f}_{i}=\langle f_{i,0},f_{i,1},\dots,f_{i,n}\rangle\in M$ is an $(n+1)$-tuple consisting of Cohen functions $f_{i,k}:\lambda^{+}\to\lambda_{k}$ of size at most $\lambda$. For $i=0$, $\vec{f}_{0}=\langle f^{p^{n}}_{0},\dots,f^{p^{n}}_{n}\rangle$ is the tuple of the Cohen functions of the first $(n+1)$ components of $p^{n}$.
2. 2.
For each $k\leq n$, the sequence $f_{i,k}$, $i\leq\lambda_{n}$, is $\subseteq$-increasing, and $\operatorname{dom}(f_{i,k})\cap\operatorname{dom}(a_{k}^{p^{n}})=\emptyset$ for all $i\leq\lambda_{n}$.
3. 3.
$q^{i}=\langle q^{i}_{m}\mid n<m<\omega\rangle$ consists of tail segments of
conditions in $\mathbb{P}$ starting from the $(n+1)$-th component, and
$q^{0}=p^{n}\setminus n+1=\langle p^{n}_{m}\mid n<m<\omega\rangle$.
4. 4.
the sequence $q^{i}$, $i\leq\lambda_{n}$, will be $\leq^{*}$-increasing in the obvious sense.
The construction of the two sequences will be internal to $M$, and definable
from $p^{n},\vec{D}\upharpoonright\lambda_{n+1}$, and using the fixed well-
ordering $\triangleleft$ of $H_{\theta}$. Let $\delta\leq\lambda_{n}$ and
suppose that $\langle q^{i},\vec{f}_{i}\mid i<\delta\rangle$ has been defined
and belongs to $M$. If $\delta$ is a limit ordinal, we define
$\vec{f}_{\delta}=\langle f_{\delta,k}\mid k\leq n\rangle$ by
$f_{\delta,k}=\bigcup_{i<\delta}f_{i,k}$. Similarly, $q^{\delta}$ is taken to
be the supremum in the direct extension ordering of $q^{i}$, $i<\delta$, which
is possible due to the fact that for each $k>n$, the direct extension ordering
of the $k$-th components $q^{i}_{k}$, is $\kappa_{k+1}$-closed, and
$\kappa_{k+1}>\lambda_{n}$. Therefore, for every $k>n$, we define
$q^{\delta}_{k}=(f_{k}^{q^{\delta}},a_{k}^{q^{\delta}},A_{k}^{q^{\delta}})$
where
$f_{k}^{q^{\delta}}=\bigcup_{i<\delta}f_{k}^{q^{i}},\ a_{k}^{q^{\delta}}=\bigcup_{i<\delta}a_{k}^{q^{i}}\cup\\{(\alpha,\gamma)\\},\text{ and }A_{k}^{q^{\delta}}=\bigcap_{i<\delta}(\pi^{k}_{\gamma,\operatorname{mc}(a_{k}^{q^{i}})})^{-1}A_{k}^{q^{i}},$
where
$\alpha=\sup\left(\bigcup_{i<\delta}\operatorname{dom}(a_{k}^{q^{i}})\right)$,
and
$\gamma=\sup\left(\bigcup_{i<\delta}\operatorname{rng}(a_{k}^{q^{i}})\right)$.
Clearly, $q^{\delta}\in M$. Suppose now that $\delta=i+1$ is a successor
ordinal. We appeal to our fixed bijection
$\psi_{n}:\lambda_{n}\to[\lambda_{n}]^{n+1}\times\lambda_{n}$, and consider
$\psi_{n}(i)=(\vec{\nu}^{i},\mu^{i})$, where
$\vec{\nu}^{i}\in[\lambda_{n}]^{n+1}$ and $\mu^{i}<\lambda_{n}$. We proceed as follows: If $\vec{\nu}^{i}\not\in\prod_{k\leq n}A^{p^{n}}_{k}$ we make no change, setting $\vec{f}_{\delta}=\vec{f}_{i}$ and $q^{\delta}=q^{i}$. Otherwise, $\vec{\nu}^{i}\in\prod_{k\leq n}A^{p^{n}}_{k}$ and we consider the
associated functions $\langle g_{i,0},\dots,g_{i,n}\rangle$, defined by
$g_{i,k}=\\{\langle\alpha,\pi^{k}_{\operatorname{mc}(a_{k}^{p^{n}}),a^{p^{n}}_{k}(\alpha)}(\nu^{i}_{k})\rangle\mid\alpha\in\operatorname{dom}(a_{k}^{p^{n}})\\},$ where $\nu^{i}_{k}$ denotes the $k$-th coordinate of $\vec{\nu}^{i}$.
We note that since
$\operatorname{dom}(g_{i,k})=\operatorname{dom}(a^{p^{n}}_{k})$, it is
disjoint from $\operatorname{dom}(f_{i,k})$, and we can therefore take their unions $f^{*}_{i,k}=f_{i,k}\cup g_{i,k}$, $k\leq n$, to define a sequence of functions $\vec{f_{i}^{*}}=\langle f^{*}_{i,0},\dots,f^{*}_{i,n}\rangle$. By concatenating the $(n+1)$-sequence $\vec{f_{i}^{*}}$ with the tail $q^{i}$, we get a condition $q^{i}_{*}=\vec{f_{i}^{*}}{}^{\frown}q^{i}\in\mathbb{P}$ with
$\ell^{q^{i}_{*}}=n+1$, to which we apply the last clause of Lemma 21 (Strong
Prikry Property) and find a direct extension $q^{i}_{**}\geq^{*}q^{i}_{*}$ and
an integer $N$ so that for every $\vec{\nu}\in\prod_{k\leq
N}A^{q^{i}_{**}}_{n+1+k}$, $q^{i}_{**}{}^{\frown}\vec{\nu}$ belongs to
$D_{\mu^{i}}$. Specifically, we choose $q^{i}_{**}\in M$ to be such a
condition which is minimal according to the fixed well-ordering
$\triangleleft$ of $H_{\theta}$, and define
* •
$N^{D_{\mu^{i}}}(\vec{\nu}^{i})=N$,
* •
$\vec{f}_{\delta}=\langle f_{\delta,k}\mid k\leq n\rangle$ with $f_{\delta,k}=f^{q^{i}_{**}}_{k}\setminus g_{i,k}$ (thus, $\operatorname{dom}(f_{\delta,k})$ is disjoint from $\operatorname{dom}(g_{i,k})=\operatorname{dom}(a^{p^{n}}_{k})$), and
* •
$q^{\delta}=\langle q^{\delta}_{m}\mid m\geq n+1\rangle$ with
$q^{\delta}_{m}=(q^{i}_{**})_{m}$ for every $m\geq n+1$.
Finally, given $\vec{f}_{\lambda_{n}}=\langle f_{\lambda_{n},k}\mid k\leq n\rangle$ and $q^{\lambda_{n}}=\langle q^{\lambda_{n}}_{m}\mid m\geq n+1\rangle$, we define $p^{n+1}\geq^{*}p^{n}$ by setting
$a_{m}^{p^{n+1}}=a_{m}^{p^{n}}$ and $A_{m}^{p^{n+1}}=A_{m}^{p^{n}}$ and
$f_{m}^{p^{n+1}}=f_{\lambda_{n},m}$ for $m\leq n$, and
$p^{n+1}_{m}=q^{\lambda_{n}}_{m}$ for $m\geq n+1$. Our use of the well-
ordering $\triangleleft$ throughout the construction guarantees that
$p^{n+1}\in M$.
This concludes the construction of the sequence $\langle p^{n}\mid\ell^{*}\leq n<\omega\rangle$. We now define $p^{*}\geq^{*}p$ by $p^{*}_{n}=p_{n}^{n}$. It is straightforward to verify from the construction that $p^{*}\geq^{\ell^{*}}p$ satisfies the conclusion in the statement of the Lemma. ∎
We now show that there are plenty of models $M$, satisfying the conclusion of
Lemma 23.
###### Proposition 25.
Let $\vec{M}=\langle M_{\alpha}\mid\alpha<\lambda^{+}\rangle$ be an internally approachable (i.e., $\vec{M}\upharpoonright\beta\in M_{\beta+1}$ for every $\beta<\lambda^{+}$), $\subseteq$-increasing, and continuous sequence of elementary substructures $M_{\alpha}\prec(H_{\theta};\in,\triangleleft)$ of size $|M_{\alpha}|=\lambda$, satisfying $M_{\alpha}\cap\lambda^{+}\in\lambda^{+}$. For every limit ordinal $\alpha<\lambda^{+}$ with $\operatorname{cof}(\alpha)=\omega$, satisfying $\alpha=M_{\alpha}\cap\lambda^{+}$, every $\ell^{*}<\omega$, and every $p\in M_{\alpha}$,
there exists a direct extension $p^{*}\geq^{\ell^{*}}p$ satisfying the
conclusion of Lemma 23 with respect to $M=M_{\alpha}$.
Moreover, if the approachable ideal on $\lambda^{+}$ is trivial, i.e.,
$I[\lambda^{+}]=\lambda^{+}$, then the requirement of
$\operatorname{cof}(\alpha)=\omega$ can be removed.
###### Proof.
Suppose first that $\operatorname{cof}(\alpha)=\omega$ and let
$\langle\alpha_{n}\mid n<\omega\rangle$ be a cofinal sequence in $\alpha$.
Then $M_{\alpha}=\cup_{n}M_{\alpha_{n}}$, and for each $n<\omega$, since $M_{\alpha_{n}}\in M_{\alpha}$, there exists an enumeration
$\vec{D}^{n}=\langle D^{n}_{\mu}\mid\mu<\lambda\rangle\in M_{\alpha}$ of all
dense open subsets of $\mathbb{P}$ in $M_{\alpha_{n}}$. Using bijections from
$\lambda_{n}\times n$ to $\lambda_{n}$, we can form a sequence
$\vec{D}=\langle D_{\mu}\mid\mu<\lambda\rangle$ so that for every $n<\omega$,
$\vec{D}\upharpoonright\lambda_{n}$ enumerates
$\vec{D}^{i}\upharpoonright\lambda_{n}$ for each $i<n$. Therefore $\vec{D}$
enumerates all dense open sets of $\mathbb{P}$ in $M_{\alpha}$ and satisfies
$\vec{D}\upharpoonright\beta\in M_{\alpha}$ for every $\beta<\lambda$. It
follows from Lemma 23 that for every condition $p\in M_{\alpha}$ and
$\ell^{*}<\omega$ there exists a direct extension $p^{*}\geq^{\ell^{*}}p$ as
in the statement of the lemma. This concludes the first part of the statement.
Suppose now that $I[\lambda^{+}]=\lambda^{+}$. We proceed to prove by
induction on limit ordinals $\alpha<\lambda^{+}$ with
$\alpha=M_{\alpha}\cap\lambda^{+}$, that for every $\ell^{*}<\omega$ and $p\in
M_{\alpha}$, there is $p^{*}\geq^{\ell^{*}}p$ satisfying the desirable
property for $M_{\alpha}$. Let $\alpha$ be such an ordinal and assume the statement holds for all $\beta<\alpha$. If $\operatorname{cof}(\alpha)=\omega$
we are done by the first case above. Therefore, suppose that
$\operatorname{cof}(\alpha)=\rho$ is an uncountable regular cardinal. Since $I[\lambda^{+}]=\lambda^{+}$, there exists a closed and unbounded subset $X\subset\alpha$ of order-type $\operatorname{otp}(X)=\rho$ so that $X\cap\beta\in M_{\alpha^{\prime}}$ whenever $\beta<\alpha^{\prime}$, $\alpha^{\prime}\in\\{\alpha\\}\cup X$. Moreover, since
$\vec{M}\upharpoonright\beta$ belongs to $M_{\alpha^{\prime}}$, so does
$\vec{M}\upharpoonright(X\cap\beta)=\langle M_{\gamma}\mid\gamma\in
X\cap\beta\rangle$. Given $\ell^{*}<\omega$ as in the statement of the claim,
we further increase it to assume that $\kappa_{\ell^{*}}>\rho$. Let
$\langle\beta_{i}\mid i\leq\rho\rangle$ be an increasing enumeration of the
limit points $\beta$ in $X\cup\\{\alpha\\}$ which satisfy that
$M_{\beta}\cap\lambda^{+}=\beta$. Given $p\in M_{\alpha}$, we may assume that
$p\in M_{\beta_{0}}$ and denote it by $p^{0}$. Then, by applying the inductive
assumption and using the well-ordering $\triangleleft$, we form a sequence of conditions $\langle p^{i}\mid i\leq\rho\rangle$ which is increasing in $\leq^{\ell^{*}}$, so that for each $i<\rho$, $p^{i}\in M_{\beta_{i+1}}$ and $p^{i+1}\geq^{\ell^{*}}p^{i}$ is the $\triangleleft$-minimal such extension which satisfies the conclusion of Lemma 23 for $M_{\beta_{i+1}}$. Suppose
now that $j\leq\rho$ is limit. Then every initial segment of $\langle
p^{i}\mid i<j\rangle$ belongs to $M_{\beta_{j}}$, and the sequence has an
upper bound in $\leq^{\ell^{*}}$ since this ordering is
$\kappa_{\ell^{*}}$-closed and $\kappa_{\ell^{*}}>\rho$. Defining the upper
bound by $p^{j}$, it follows from the continuity of the sequence $\vec{M}$
that $p^{j}$ satisfies the desirable property for $M_{\beta_{j}}$. In
particular, for $j=\rho$, we obtain a suitable condition $p^{*}=p^{\rho}$ for
$M=M_{\alpha}$. ∎
The following consequences of Lemma 23 and Proposition 25 will play a key role in our arguments concerning the Approachable Bounded Subset Property in $V[G]$.
###### Lemma 26.
Let $\dot{F}$ be a $\mathbb{P}$-name of a function from $[\lambda]^{<\omega}$ to ordinals, and $p\in\mathbb{P}$. There is a direct extension $p^{*}\geq^{*}p$ and a function $f^{*}:[\lambda]^{<\omega}\times[\lambda]^{<\omega}\to\operatorname{On}$ which provide the following recipe for deciding the $\mathbb{P}$-names of ordinals $\dot{F}(\vec{\mu})$, $\vec{\mu}\in[\lambda]^{<\omega}$:
For every $\vec{\mu}\in[\lambda]^{<\omega}$ there are $n^{\vec{\mu}}<\omega$
and a function $N^{\vec{\mu}}:[\lambda]^{<\omega}\to\omega$ such that for
every $\vec{\nu}^{1}\in\prod_{\ell^{p}\leq i<n^{\vec{\mu}}}A_{i}^{p^{*}}$ and
$\vec{\nu}^{2}\in\prod_{n^{\vec{\mu}}\leq i<N^{\vec{\mu}}(\vec{\nu}^{1})}A_{i}^{p^{*}}$,
$p^{*}{}^{\frown}\vec{\nu}^{1}{}^{\frown}\vec{\nu}^{2}\Vdash\dot{F}(\vec{\mu})=\check{f^{*}}(\check{\vec{\mu}},\check{\vec{\nu}}^{1}{}^{\frown}\check{\vec{\nu}}^{2}).$
###### Proof.
Let $M\prec(H_{\theta};\in,\triangleleft)$ be a model of size $\lambda$ which satisfies the assumptions of Lemma 23 and has $\dot{F},p\in M$ (the proof of Proposition 25 shows that such structures exist). Since $\lambda\subseteq M$, for
every $\vec{\mu}\in[\lambda]^{<\omega}$, the dense open set
$E_{\vec{\mu}}=\\{q\in\mathbb{P}\mid\exists\xi\in\operatorname{On},q\Vdash\dot{F}(\check{\vec{\mu}})=\check{\xi}\\}$
belongs to $M$. By taking $p^{*}\geq^{*}p$ as in the statement of Lemma 23 we
obtain the desired extension of $p$. ∎
###### Corollary 27.
$\mathbb{P}$ preserves $\lambda^{+}$.
###### Proof.
If $\dot{F}:\lambda\to\lambda^{+}$ is a $\mathbb{P}$-name of a function, then
by Lemma 26 for every condition $p$ there are $p^{*}\geq^{*}p$ and a function
$f^{*}:[\lambda]^{<\omega}\times[\lambda]^{<\omega}\to\lambda^{+}$ in $V$, so
that $p^{*}$ forces that $\operatorname{rng}(\dot{F})$ is contained in $\operatorname{rng}(f^{*})$. ∎
Let $G\subseteq\mathbb{P}$ be a generic filter. By a standard density
argument, for every $\alpha<\lambda^{+}$ and $n<\omega$ there exists $p\in G$
so that $\ell^{p}>n$ and $\alpha\in\operatorname{dom}(f^{p}_{n})$. We define
the generic scale $\langle t_{\alpha}\mid\alpha<\lambda^{+}\rangle$ by $t_{\alpha}(n)=f^{p}_{n}(\alpha)$ for any such condition $p\in G$.
Recalling that our setup includes that $a_{n}(0)=0$ and $E_{n}(0)$ is a normal
measure on $\lambda_{n}$, we get that the sequence $\langle\rho_{n}\mid
n<\omega\rangle$, given by $\rho_{n}=t_{0}(n)$, is generic over $V$ for the
diagonal Prikry forcing with the sequence of normal measures $\langle
E_{n}(0)\mid n<\omega\rangle$.
Recall that for every $n<\omega$, there exists a function
$g_{n}:\lambda_{n}\to\lambda_{n}$ so that $j_{E_{n}}(g_{n})(\lambda_{n})$ is
the supremum of the generators of $E_{n}$, and is inaccessible in $M_{E_{n}}$.
It follows from a standard density argument that the sequence $\langle
t_{\alpha}\mid\alpha<\lambda^{+}\rangle$ is a scale in the product
$\prod_{n}g_{n}(\rho_{n})$, and that $g_{n}(\rho_{n})<\lambda_{n}$ is regular
for almost all $n<\omega$. Moreover, it is straightforward to verify that our
assumption that the functions $a_{n}$ in conditions $p\in\mathbb{P}$ are
continuous and have closed domains, implies that the scale $\langle
t_{\alpha}\mid\alpha<\lambda^{+}\rangle$ is continuous.
###### Notation 28.
In $V[G]$, we denote $g_{n}(\rho_{n})$ by $\tau_{n}$.
###### Theorem 29.
The Approachable Bounded Subset Property (ABSP) holds in $V[G]$ with respect
to the sequence $\langle\tau_{n}\mid n<\omega\rangle$.
###### Proof.
Suppose otherwise. Then there exists a stationary set $\mathcal{S}\subseteq\mathcal{P}_{\lambda}(H_{\theta})$ of internally
approachable structures $N\prec(H_{\theta};\in)$ such that for every
$N\in\mathcal{S}$ and $n<\omega$ there is a function
$F^{N}_{n}:[\lambda]^{k^{N}_{n}}\to\lambda$ in $N$, of a finite arity
$k^{N}_{n}<\omega$, and a finite sequence of distinct numbers
$\vec{d}^{N,n}=\langle
d^{N,n}_{0},\dots,d^{N,n}_{k^{N}_{n}}\rangle\subseteq\omega\setminus n$,
satisfying
$\chi_{N}(\tau_{d^{N,n}_{0}})\leq
F^{N}_{n}\left(\chi_{N}(\tau_{d^{N,n}_{1}}),\dots,\chi_{N}(\tau_{d^{N,n}_{k^{N}_{n}}})\right)<\tau_{d^{N,n}_{0}}.$
By Lemma 13, applied to the assignments $N\mapsto\langle F^{N}_{n}\mid
n<\omega\rangle$ and $N\mapsto\langle\vec{d}^{N,n}\mid n<\omega\rangle$, there
exists a stationary set $S^{*}\subseteq\lambda^{+}$ and two fixed sequences
$\langle F_{n}\mid n<\omega\rangle$, $\langle\vec{d}^{n}\mid n<\omega\rangle$,
with $F_{n}:[\lambda]^{k_{n}}\to\lambda$ and $\vec{d}^{n}=\langle d^{n}_{0},\dots,d^{n}_{k_{n}}\rangle$, such that for every $\delta\in S^{*}$
there exists $N\in\mathcal{S}$ so that $\delta=\chi_{N}(\lambda^{+})$,
$\langle F_{n}^{N}\mid n<\omega\rangle=\langle F_{n}\mid n<\omega\rangle$, and
$\langle\vec{d}^{N,n}\mid n<\omega\rangle=\langle\vec{d}^{n}\mid
n<\omega\rangle$. For each $\delta\in S^{*}$ there are $m_{\delta}<\omega$ and $N\in\mathcal{S}$ such that for every $n\geq m_{\delta}$,
$t_{\delta}(n)=\chi_{N}(\tau_{n})$ and thus,
$t_{\delta}(d^{n}_{0})\leq
F_{n}\left(t_{\delta}({d^{n}_{1}}),\dots,t_{\delta}({d^{n}_{k_{n}}})\right)<\tau_{d^{n}_{0}}$
(1)
We move back to $V$ to contradict the above, and complete the proof. Let $p$
be a condition forcing the statement of (1) with respect to the
$\mathbb{P}$-names $\dot{S^{*}}$, $\langle\dot{F_{n}}\mid n<\omega\rangle$,
and $\langle{\dot{\vec{d}}^{n}}\mid n<\omega\rangle$. By taking a direct
extension if needed, we may assume $p$ decides the integer values for
$\vec{d}^{n}\subseteq\omega\setminus n$, for all $n<\omega$. Apply Lemma 26
repeatedly for each $F_{n}$, $n<\omega$, to form a sequence $\langle p^{n}\mid n<\omega\rangle$ of $\leq^{*}$-extensions of $p$ and a sequence $\langle f^{n}\mid n<\omega\rangle$ of functions $f^{n}:[\lambda]^{<\omega}\times[\lambda]^{<\omega}\to\lambda$, so that for each $n<\omega$, $p^{n}\geq^{*}p^{n-1}$ and $f^{n}$ satisfy the conclusion of Lemma 26 with respect to $\dot{F_{n}}$. We define
$\alpha=\sup\left(\bigcup_{n,m}(\operatorname{dom}(a^{p^{n}}_{m})\cup\operatorname{dom}(f^{p^{n}}_{m}))\right)+1$
and let $p^{*}$ be a common direct extension of $\langle p^{n}\mid
n<\omega\rangle$ with $\alpha=\max(\operatorname{dom}(a_{m}^{p^{*}}))$ for all
$m\geq\ell^{p^{*}}$. Next, let $q$ be an extension of $p^{*}$ which forces
$\check{\delta}\in\dot{S^{*}}$ for some ordinal $\delta>\alpha$. Since $q$
extends $p^{*}$, it is a direct extension of $p^{*}{}^{\frown}\vec{\nu}^{*}$ for some $\vec{\nu}^{*}\in\prod_{\ell^{p^{*}}\leq i<\ell^{q}}A^{p^{*}}_{i}$.
By taking a direct extension of $q$ if needed, we may also assume that $q$
decides the integer values $m_{\delta}$ from above and that
$\delta\in\operatorname{dom}(a^{q}_{n})$ for some $n<\omega$.
Next, we pick $n<\omega$ satisfying $n\geq m_{\delta},\ell^{q}$ and
$\delta\in\operatorname{dom}(a_{n}^{q})$, and denote for ease of notation,
$k_{n}$, $\langle d^{n}_{0},\dots,d^{n}_{k_{n}}\rangle$ by $k$, $\langle
d_{0},\dots,d_{k}\rangle$ respectively. Our choice of $p^{*}\geq^{*}p^{n}$ and of the function $f^{n}$ guarantees that for every
$\vec{\mu}=\langle\mu_{d_{1}},\dots,\mu_{d_{k}}\rangle\in[\lambda]^{k}$ there
are $n_{*}^{\vec{\mu}}<\omega$ and a function
$N_{*}^{\vec{\mu}}:\prod_{\ell^{q}\leq
i<n_{*}^{\vec{\mu}}}A^{q}_{i}\to\omega,$
such that for every
$\vec{\nu}^{1}\in\prod_{\ell^{q}\leq i<n_{*}^{\vec{\mu}}}A^{q}_{i}\text{ and
}\vec{\nu}^{2}\in\prod_{n_{*}^{\vec{\mu}}\leq
i<N_{*}^{\vec{\mu}}(\vec{\nu}^{1})}A^{q}_{i},$
denoting
$\vec{\nu}^{*}{}^{\frown}\pi^{q}_{\operatorname{mc}(q),\alpha}(\vec{\nu}^{1}{}^{\frown}\vec{\nu}^{2})$
by $\vec{\nu}$, we have that $p^{*}{}^{\frown}\vec{\nu}$ forces
$\dot{F_{n}}(\check{\vec{\mu}})=\check{f}^{n}(\vec{\mu},\vec{\nu})$.
Recalling that $q\geq^{*}p^{*}{}^{\frown}\vec{\nu}^{*}$, we get that in
particular, $q{}^{\frown}\vec{\nu}^{1}{}^{\frown}\vec{\nu}^{2}$, which extends
$p^{*}{}^{\frown}\vec{\nu}$ forces the same value, which depends only on
$\pi^{q}_{\operatorname{mc}(q),\alpha}(\vec{\nu}^{1}{}^{\frown}\vec{\nu}^{2})$.
To complete the argument, we will make use of the last fact, and of the fact that $E_{n}(a^{q}_{n}(\delta))$ is strictly stronger than $E_{n}(a^{q}_{n}(\alpha))$ in the Rudin-Keisler ordering, to find many distinct choices of sequences $\vec{\nu}^{1}{}^{\frown}\vec{\nu}^{2}$ whose projections $\pi^{q}_{\operatorname{mc}(q),\alpha}(\vec{\nu}^{1}{}^{\frown}\vec{\nu}^{2})$ are fixed, as well as the values they force for $t_{\delta}(d_{i})$, $1\leq i\leq k$, yet which force many distinct values for $t_{\delta}(d_{0})$. This will be used to find a condition which extends $p$ but forces (1) to fail.
To this end, we first fix a sequence of $\langle d_{1},\dots,d_{k}\rangle$-coordinates, $\vec{\nu}_{+}=\langle\nu_{d_{1}},\dots,\nu_{d_{k}}\rangle\in\prod_{i\in\\{d_{1},\dots,d_{k}\\}}A^{q}_{i}$ (note that we omit choosing a $d_{0}$-coordinate) and define $\vec{\mu}=\pi^{q}_{\operatorname{mc}(q),\delta}(\vec{\nu}_{+})$.
Let $j\leq k+1$ be largest so that $d_{i}<n_{*}^{\vec{\mu}}$ for every $i<j$ and define $\vec{\nu}^{1}_{+}=\langle\nu_{d_{1}},\dots,\nu_{d_{j-1}}\rangle$.
Therefore, in order to extend $\vec{\nu}^{1}_{+}$ to a relevant sequence
$\vec{\nu}^{1}\in\prod_{\ell^{q}\leq i<n_{*}^{\vec{\mu}}}A^{q}_{i}$, one needs
to choose remaining coordinates
$\vec{\nu}^{1}_{-}\in\prod_{i\in[\ell^{q},n_{*}^{\vec{\mu}})\setminus\vec{d}}A^{q}_{i}.$
In particular, to every such sequence $\vec{\nu}^{1}_{-}$ we can assign the
integer $N^{\vec{\mu}}_{*}(\vec{\nu}^{1})$, where
$\vec{\nu}^{1}=\vec{\nu}^{1}_{-}\cup\vec{\nu}^{1}_{+}$, and by taking a direct
extension of $q$ if needed, we may assume that the numbers
$N_{*}^{\vec{\mu}}(\vec{\nu}^{1})$ take a constant value $N$ for all
$\vec{\nu}^{1}_{-}\in\prod_{i\in[\ell^{q},n_{*}^{\vec{\mu}})\setminus\vec{d}}A^{q}_{i}$.
Moreover, by increasing $N$ if necessary, we may also assume that $N>d_{k}$.
With $N$ being fixed, we conclude that every choice of a sequence
$\vec{\nu}^{2}_{-}\in\prod_{i\in[n_{*}^{\vec{\mu}},N)\setminus\vec{d}}A^{q}_{i}$
will allow us to extend the remaining portion of the fixed sequence
$\vec{\nu}_{+}$ to
$\vec{\nu}^{2}\in\prod_{i\in[n_{*}^{\vec{\mu}},N)}A^{q}_{i}$, which together
with a choice of $\vec{\nu}^{1}$ will produce a suitable extension $q{}^{\frown}\vec{\nu}^{1}{}^{\frown}\vec{\nu}^{2}$ forcing
$\dot{F_{n}}(\vec{\mu})=f^{n}\left(\vec{\mu},\vec{\nu}^{*}{}^{\frown}\pi^{q}_{\operatorname{mc}(q),\alpha}(\vec{\nu}^{1}{}^{\frown}\vec{\nu}^{2})\right).$
Following this recipe, we extend our fixed choice of $\vec{\nu}_{+}$ to a
choice of all relevant coordinates for
$\vec{\nu}^{1}{}^{\frown}\vec{\nu}^{2}$, except for the coordinate of
$i=d_{0}$. Namely, we extend $\vec{\nu}_{+}$ to a fixed sequence
$\vec{\nu}_{++}\in\prod_{i\in[\ell^{q},n_{*}^{\vec{\mu}}+N)\setminus\\{d_{0}\\}}A^{q}_{i}$.
With the fixed choice $\vec{\nu}_{++}$, we derive a function
$h:A^{q}_{d_{0}}\to\tau_{d_{0}}$, defined by
$h(\nu)=f^{n}(\vec{\mu},\pi^{q}_{\operatorname{mc}(q),\alpha}(\vec{\nu}_{++}\cup\\{\nu\\}))$
if the last ordinal value is below $\tau_{d_{0}}$, and $h(\nu)=0$ otherwise.
The properties of the function $f^{n}$ guarantee that $h(\nu)$ depends only on $\pi^{q}_{\operatorname{mc}(q),\alpha}(\nu)$, and by the last item of 18, there is a subset $A^{*}\subseteq A^{q}_{d_{0}}$ in $E_{d_{0}}(\operatorname{mc}(a^{q}_{d_{0}}))$, so that $h(\nu)<\pi^{q}_{\operatorname{mc}(q),\delta}(\nu)$ for all $\nu\in A^{*}$.
Picking such an ordinal $\nu$, and setting
$\vec{\nu}^{1}{}^{\frown}\vec{\nu}^{2}=\vec{\nu}_{++}\cup\\{\nu\\}$, we
conclude that $p^{*}{}^{\frown}\vec{\nu}^{1}{}^{\frown}\vec{\nu}^{2}\geq p$
must force
$h(\nu)=\dot{F_{n}}\left(t_{\delta}(d_{1}),\dots,t_{\delta}(d_{k})\right)<t_{\delta}(d_{0}).$
contradicting the statement (1) forced by $p$. ∎
###### Corollary 30.
ABSP holds in $V[G]$ with respect to $\langle\tau_{n}\mid n<\omega\rangle$ and
thus, by Lemma 15, AFSP holds and there are no continuous scales on
$\prod_{n}\tau_{n}$ which are essentially tree-like.
### 2.1 Down to $\aleph_{\omega}$
We define a variant $\hat{\mathbb{P}}$ of the forcing $\mathbb{P}$ from the
previous section, to obtain the result of Theorem 29 in a model where
$\langle\tau_{n}\mid n<\omega\rangle$ form a subsequence of the first
uncountable cardinals.
Conditions $q\in\hat{\mathbb{P}}$ are pairs $q=\langle p,h\rangle$ of
sequences, $p=\langle p_{n}\mid n<\omega\rangle$ and $h=\langle h_{-1}^{\operatorname{high}}\rangle{}^{\frown}\langle h_{n}\mid n<\omega\rangle$, satisfying the following conditions:
1. 1.
$p\in\mathbb{P}$, i.e., $p$ satisfies Definition 19 above
2. 2.
for every $n<\ell^{p}$, $h_{n}=\langle h_{n}^{\operatorname{low}},h_{n}^{\operatorname{high}}\rangle$ is a pair of functions which satisfy the following properties:
* •
$h_{n}^{\operatorname{low}}\in\operatorname{Coll}(\rho_{n}^{p},<\tau_{n}^{p})$ where $\rho_{n}^{p}=f^{p}_{n}(0)$ and $\tau_{n}^{p}=g_{n}(\rho_{n}^{p})$ (by Definition 19, $\rho_{n}^{p}<\tau_{n}^{p}<\lambda_{n}$ are both inaccessible for $n\geq 0$).
* •
$h_{n}^{\operatorname{high}}\in\operatorname{Coll}((\tau_{n}^{p})^{+},<\rho_{n+1}^{p})$
if $n<\ell^{p}-1$, and
$h_{\ell^{p}-1}^{\operatorname{high}}\in\operatorname{Coll}((\tau_{\ell^{p}-1}^{p})^{+},<\lambda_{\ell^{p}})$.
3. 3.
for every $n\geq\ell^{p}$, $h_{n}=\langle h_{n}^{\operatorname{low}},h_{n}^{\operatorname{high}}\rangle$ is a pair of functions which satisfy the following properties:
* •
$\operatorname{dom}(h_{n}^{\operatorname{low}})=\operatorname{dom}(h_{n}^{\operatorname{high}})=A_{n}^{p}$,
* •
for every $\nu\in A_{n}^{p}$, $h_{n}^{\operatorname{low}}(\nu)\in\operatorname{Coll}(\rho_{n}^{\nu},<\tau_{n}^{\nu})$ where $\rho_{n}^{\nu}=\pi^{n}_{\operatorname{mc}(a_{n}^{p}),0}(\nu)$ and $\tau_{n}^{\nu}=g_{n}(\rho_{n}^{\nu})$, and $h_{n}^{\operatorname{high}}(\nu)\in\operatorname{Coll}\left((\tau_{n}^{\nu})^{+},<\lambda_{n+1}\right)$.
4. 4.
$h_{-1}^{\operatorname{high}}$ belongs to
$\operatorname{Coll}(\omega,<\rho_{0}^{p})$ if $\ell^{p}\geq 1$, and to
$\operatorname{Coll}(\omega,<\lambda_{0})$ otherwise
5. 5.
$h_{\ell^{p}-1}^{\operatorname{high}}\in V_{\rho_{\ell^{p}}^{\nu}}$ for every $\nu\in A_{\ell^{p}}^{p}$, and $h_{n-1}^{\operatorname{high}}(\nu^{\prime})\in V_{\rho_{n}^{\nu}}$ for every $n>\ell^{p}$, $\nu^{\prime}\in A_{n-1}^{p}$, and $\nu\in A_{n}^{p}$.
A condition $q^{*}=\langle p^{*},h^{*}\rangle$ is a direct extension of $q=\langle p,h\rangle$ if the following conditions hold:
1. 1.
$p^{*}\geq^{*}p$ in the sense of $\mathbb{P}$,
2. 2.
for every $n<\ell^{p}$,
$h_{n}^{\operatorname{low}}\subseteq(h_{n}^{*})^{\operatorname{low}}$ and
$h_{n}^{\operatorname{high}}\subseteq(h_{n}^{*})^{\operatorname{high}}$,
3. 3.
for every $n\geq\ell^{p}$, and $\nu\in A_{n}^{p^{*}}$,
$h_{n}^{\operatorname{low}}(\pi^{n}_{\operatorname{mc}(a_{n}^{p^{*}}),\operatorname{mc}(a_{n}^{p})}(\nu))\subseteq(h_{n}^{*})^{\operatorname{low}}(\nu)$,
and
$h_{n}^{\operatorname{high}}(\pi^{n}_{\operatorname{mc}(a_{n}^{p^{*}}),\operatorname{mc}(a_{n}^{p})}(\nu))\subseteq(h_{n}^{*})^{\operatorname{high}}(\nu)$.
Given a condition $q=\langle p,h\rangle$ and an ordinal $\nu\in
A_{\ell^{p}}^{p}$, we define the one-point end-extension of $q$ by $\nu$,
denoted $q{}^{\frown}\langle\nu\rangle$ to be the condition $\langle
p^{\prime},h^{\prime}\rangle$ given as follows:
* •
$p^{\prime}=p{}^{\frown}\langle\nu\rangle$ in the sense of $\mathbb{P}$, in
particular $\ell^{p^{\prime}}=\ell^{p}+1$,
* •
$h^{\prime}_{n}=h_{n}$ for every $n<\ell^{p}$, and in addition, $(h^{\prime}_{\ell^{p}-1})^{\operatorname{high}}$ is now considered as a condition of the restricted collapse poset $\operatorname{Coll}((\tau_{\ell^{p}-1}^{p})^{+},<\rho_{\ell^{p}}^{\nu})$ (replacing $\operatorname{Coll}((\tau_{\ell^{p}-1}^{p})^{+},<\lambda_{\ell^{p}})$).
* •
$(h^{\prime}_{\ell^{p}})^{\operatorname{low}}=h_{\ell^{p}}^{\operatorname{low}}(\nu)$
and
$(h^{\prime}_{\ell^{p}})^{\operatorname{high}}=h_{\ell^{p}}^{\operatorname{high}}(\nu)$,
* •
$h^{\prime}_{n}=h_{n}$ for every $n\geq\ell^{p}+1$.
Given a condition $q=\langle p,h\rangle$ and a finite sequence $\vec{\nu}=\langle\nu_{\ell^{p}},\dots,\nu_{n-1}\rangle\in\prod_{\ell^{p}\leq i<n}A^{p}_{i}$, the end-extension of $q$ by $\vec{\nu}$ is defined by
$q{}^{\frown}\vec{\nu}=q{}^{\frown}\langle\nu_{\ell^{p}}\rangle{}^{\frown}\langle\nu_{\ell^{p}+1}\rangle{}^{\frown}\dots{}^{\frown}\langle\nu_{n-1}\rangle.$
In general, a condition in $\hat{\mathbb{P}}$ extends $q$ if it is obtained from $q$ by finitely many end-extensions and direct extensions. Equivalently, it is a direct extension of $q{}^{\frown}\vec{\nu}$ for some $n\geq\ell^{p}$ and $\vec{\nu}\in\prod_{\ell^{p}\leq i<n}A^{p}_{i}$.
Let $G\subseteq\hat{\mathbb{P}}$ be a $V$-generic filter. Through its projection to $\mathbb{P}$, given by $q=\langle p,h\rangle\mapsto p$, it is clear that $V[G]$ adds a sequence of functions $\langle t_{\alpha}\mid\alpha<\lambda^{+}\rangle$ in the product $\prod_{n}\tau_{n}$,
where $\tau_{n}=g_{n}(\rho_{n})$ is derived from the generic filter, as in the
case of $\mathbb{P}$. In addition, it is clear that the cardinals in the
intervals $(\tau_{n-1}^{+},\rho_{n})\cup(\rho_{n},\tau_{n})$, $n<\omega$, are
all collapsed in $V[G]$.
A standard argument for diagonal Prikry-type forcings with collapses (e.g.,
see [13]) shows that no other cardinals are collapsed. It follows that
$\tau_{n}=\aleph_{3n+2}^{V[G]}$ for every $n\in\omega$. Most relevant to us is Lemma 31 below, which is the $\hat{\mathbb{P}}$ analog of the Strong Prikry Property from 21.
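For orientation, the indexing $\tau_{n}=\aleph_{3n+2}^{V[G]}$ can be read off from the collapsed intervals $(\tau_{n-1}^{+},\rho_{n})\cup(\rho_{n},\tau_{n})$; the following is only a sketch of the bookkeeping, under the collapse pattern just described:

```latex
% Each interval (tau_{n-1}^+, rho_n) and (rho_n, tau_n) is collapsed,
% so the surviving cardinals below lambda line up as:
\aleph_{3n+1}^{V[G]}=\rho_{n},\qquad
\aleph_{3n+2}^{V[G]}=\tau_{n},\qquad
\aleph_{3n+3}^{V[G]}=\tau_{n}^{+}
\qquad(n<\omega),
```

with the base case $\rho_{0}=\aleph_{1}^{V[G]}$ coming from the factor $\operatorname{Coll}(\omega,<\rho_{0})$.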
We first introduce some useful notation. For a sequence of regular
cardinals $\vec{\rho}=\langle\rho_{i}\mid i<n\rangle$ satisfying
$\lambda_{i-1}<\rho_{i}<g_{i}(\rho_{i})<\lambda_{i}$, we define
$\mathbb{Q}_{\vec{\rho}}$ to be the product of collapse forcings
$\operatorname{Coll}(\omega,<\rho_{0})\times\left(\prod_{i<n-1}\operatorname{Coll}(\rho_{i},<g_{i}(\rho_{i}))\times\operatorname{Coll}(g_{i}(\rho_{i})^{+},<\rho_{i+1})\right)\times\operatorname{Coll}(g_{n-1}(\rho_{n-1})^{+},<\lambda_{n})$
where for $i=0$ we set $g_{-1}(\rho_{-1})=\omega$. For a condition $q=\langle p,h\rangle\in\hat{\mathbb{P}}$ we denote $\vec{\rho}^{q}=\langle f_{n}^{p}(0)\mid n<\ell^{p}\rangle$ and $\mathbb{Q}_{q}=\mathbb{Q}_{\vec{\rho}^{q}}$, and define, for every $q^{\prime}=\langle p^{\prime},h^{\prime}\rangle\geq q$, the collapse restriction $q^{\prime}\upharpoonright\mathbb{Q}_{q}$ to be $h^{\prime}\upharpoonright(\ell^{p}+1)=\langle h^{\prime}_{0},\dots,h^{\prime}_{\ell^{p}}\rangle$.
###### Lemma 31.
Suppose that $D\subseteq\hat{\mathbb{P}}$ is a dense open set and $q=\langle p,h\rangle\in\hat{\mathbb{P}}$ a condition. Then there exist a direct extension $q^{*}\geq^{*}q$ and $n<\omega$ such that for every $\vec{\nu}\in\prod_{\ell^{p}\leq i<n}A^{q^{*}}_{i}$, $q^{*}{}^{\frown}\vec{\nu}$ reduces meeting $D$ to $\mathbb{Q}_{q^{*}{}^{\frown}\vec{\nu}}$, in the sense that there exists a dense open subset $D(\vec{\nu})$ of $\mathbb{Q}_{q^{*}{}^{\frown}\vec{\nu}}$ so that for every $q^{\prime}\geq q^{*}{}^{\frown}\vec{\nu}$, if $q^{\prime}\upharpoonright\mathbb{Q}_{q^{*}{}^{\frown}\vec{\nu}}\in D(\vec{\nu})$ then $q^{\prime}\in D$.
This version of the Strong Prikry Property naturally extends to versions of Lemmas 23 and 26: in addition to $n^{D},N^{D}$ (respectively $n^{\vec{\mu}}$, $N^{\vec{\mu}}$), which were used to determine the length of the sequences $\vec{\nu}^{1}{}^{\frown}\vec{\nu}^{2}$ for which $q^{*}{}^{\frown}\vec{\nu}^{1}{}^{\frown}\vec{\nu}^{2}$ meets dense open sets $D$ (or decides values of functions $\dot{F}(\vec{\mu})$), one adds a function $\bar{D}$ mapping sequences $\vec{\nu}^{1}{}^{\frown}\vec{\nu}^{2}$ to dense open subsets $\bar{D}(\vec{\nu}^{1}{}^{\frown}\vec{\nu}^{2})$ of $\mathbb{Q}_{q^{*}{}^{\frown}\vec{\nu}^{1}{}^{\frown}\vec{\nu}^{2}}$, which reduces the problem of finding $q^{\prime}\geq q^{*}{}^{\frown}\vec{\nu}^{1}{}^{\frown}\vec{\nu}^{2}$ inside $D$ (or deciding $\dot{F}(\vec{\mu})$) to the requirement that $q^{\prime}\upharpoonright\mathbb{Q}_{q^{*}{}^{\frown}\vec{\nu}^{1}{}^{\frown}\vec{\nu}^{2}}$ be a member of $\bar{D}(\vec{\nu}^{1}{}^{\frown}\vec{\nu}^{2})$.
We note that in particular, the conclusion of 24 applies to
$\hat{\mathbb{P}}$, since $V_{\lambda}\subseteq M$ implies that the finite
collapse products $\mathbb{Q}_{\vec{\rho}}$ are contained in $M$.
From this point, it is straightforward to verify that the rest of the argument, leading to an analogous proof of Theorem 29, remains essentially the same, the additional key point being that the identity of the newly introduced collapse products $\mathbb{Q}_{q^{*}{}^{\frown}\vec{\nu}}$ and their dense sets $\bar{D}^{\vec{\mu}}(\vec{\nu})$ is decided by the generic information of a bounded part of the scale, $t_{\beta}$, $\beta<\alpha=\sup(M\cap\lambda^{+})$, for a suitable structure $M$ of size $\lambda$. This information remains independent of the higher generic scale functions $t_{\delta}$, $\delta>\alpha$, which allows one to naturally modify the proof of Theorem 29 and conclude the same result.
###### Theorem 32.
Let $G\subseteq\hat{\mathbb{P}}$ be a generic filter over $V$. The
Approachable Bounded Subset Property (ABSP) holds in $V[G]$ with respect to
the sequence $\langle\aleph_{3n+2}\mid n<\omega\rangle$.
## 3 Fine structure and the tree-like scale
### 3.1 Successor Cardinals
Let $\mathcal{M}\models\operatorname{ZFC}^{-}$ be a premouse such that every countable hull of $\mathcal{M}$ has an $(\omega_{1}+1)$-iteration strategy, and let $\lambda\in\mathcal{M}$ be a limit cardinal (in $\mathcal{M}$) of $V$-cofinality $\omega$ (which need not agree with its cofinality in $\mathcal{M}$) such that $\lambda^{+}$ exists in $\mathcal{M}$.
Note that if $\mathcal{N}$ is a premouse and $\alpha\in\mathcal{N}$ is such that
$\mathcal{N}\models\alpha\text{ is the largest cardinal}$, then we let
$(\alpha^{+})^{\mathcal{N}}=\operatorname{On}\cap\mathcal{N}$.
Let $\vec{\kappa}:=\langle\kappa_{n}:n<\omega\rangle$ be a sequence of $\mathcal{M}$-cardinals cofinal in $\lambda$. We do not assume that $\vec{\kappa}$ is in $\mathcal{M}$. Let $\tau_{n}:=(\kappa^{+}_{n})^{\mathcal{M}}$. We will define a sequence in $\prod\limits_{n<\omega}\tau_{n}$ that is increasing, tree-like, and continuous.
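For reference, a compact way to state these three properties of a sequence $\langle f_{\alpha}\mid\alpha\in C\rangle$ (they are established in Lemmas 33–35 below, with $\alpha<\beta$ ranging over $C$):

```latex
% The three properties of the sequence <f_alpha | alpha in C>:
\begin{align*}
&\text{tree-like:} && f_{\alpha}(m)=f_{\beta}(m)\ \Longrightarrow\ f_{\alpha}(n)=f_{\beta}(n)\text{ for all }n\leq m,\\
&\text{increasing:} && \alpha<\beta\ \Longrightarrow\ f_{\alpha}(n)<f_{\beta}(n)\text{ for all but finitely many }n,\\
&\text{continuous:} && \operatorname{cof}(\beta)>\omega\ \Longrightarrow\ f_{\beta}\text{ is an exact upper bound of }\langle f_{\alpha}:\alpha\in C\cap\beta\rangle.
\end{align*}
```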
Let
$C^{\lambda,\mathcal{M}}:=\\{\alpha<(\lambda^{+})^{\mathcal{M}}|\mathcal{M}||\alpha\prec\mathcal{M}||(\lambda^{+})^{\mathcal{M}}\\}$.
For $\alpha\in C^{\lambda,\mathcal{M}}$ let $\mathcal{M}_{\alpha}$ be the collapsing level for $\alpha$. Let $n_{\alpha}$ be minimal such that $\rho^{\mathcal{M}_{\alpha}}_{n_{\alpha}+1}=\lambda$, and set $p_{\alpha}:=p^{\mathcal{M}_{\alpha}}_{n_{\alpha}+1}$ and $w_{\alpha}:=w^{\mathcal{M}_{\alpha}}_{n_{\alpha}+1}$. Let also $F_{\alpha}$ be the top predicate of $\mathcal{M}_{\alpha}$.
By Lemma 17 there exists some
$\mathcal{M}^{n}_{\alpha}\trianglelefteq\mathcal{M}$ such that
$\mathcal{C}_{0}(\mathcal{M}^{n}_{\alpha})$ is isomorphic to
$\operatorname{Hull}^{\mathcal{M}_{\alpha}}_{n_{\alpha}+1}(\kappa_{n}\cup\\{p_{\alpha}\\})$. We then define
$f^{\vec{\kappa},\mathcal{M}}_{\alpha}(n)=\begin{cases}(\kappa^{+}_{n})^{\mathcal{M}^{n}_{\alpha}}&\\{w_{\alpha},\lambda\\}\subseteq\operatorname{Hull}^{\mathcal{M}_{\alpha}}_{n_{\alpha}+1}(\kappa_{n}\cup\\{p_{\alpha}\\})\\\
0&\text{otherwise}\end{cases}$
Note that the above function is non-zero almost everywhere provided that $\lambda\in\mathcal{C}_{0}(\mathcal{M}_{\alpha})$. This can fail if (and only if) $\mathcal{M}_{\alpha}$ is active and $\nu^{\mathcal{M}_{\alpha}}=\lambda$. Such $\alpha$ we will call anomalous. For such $\alpha$ we define:
$f^{\vec{\kappa},\mathcal{M}}_{\alpha}(n)=\begin{cases}(\kappa^{+}_{n})^{\operatorname{Ult}(\mathcal{M};F_{\alpha}\upharpoonright\kappa_{n})}&\kappa_{n}>\mu^{\mathcal{M}_{\alpha}}\\\
0&\text{otherwise}\end{cases}$
By the initial segment condition there must be some $\gamma<\lambda$ such that the trivial completion of $F_{\alpha}\upharpoonright\kappa_{n}$ is indexed at $\gamma$. We note that the alternative case of the initial segment condition is impossible, as $\kappa_{n}$ is a cardinal and hence not an index. (It also cannot be of type Z; type Z extenders have a largest generator.)
Note that in either case $\mathcal{M}^{n}_{\alpha}$ is the least level of $\mathcal{M}$ over which a surjection from $\kappa_{n}$ onto the ordinal $f^{\vec{\kappa},\mathcal{M}}_{\alpha}(n)$ is definable. Hence the ordinal defines the level and vice versa.
_In cases where it is clear which mouse and which sequence of cardinals we are
talking about, e.g. for the rest of this subsection, we will omit the
superscripts._
###### Lemma 33.
Let $\alpha<\beta$ both in $C$. If $m$ is such that
$f_{\alpha}(m)=f_{\beta}(m)$ then $f_{\alpha}(n)=f_{\beta}(n)$ for all $n\leq
m$.
###### Proof.
Note first that if $f_{\beta}(m)=0$, then $f_{\beta}(n)=0$ for all $n\leq m$, and the same holds for $\alpha$, so the claim is trivial in this case. Let us then consider the case $f_{\beta}(m)\neq 0$; it follows that $\mathcal{M}^{m}_{\alpha}=\mathcal{M}^{m}_{\beta}$. We will start with the assumption that neither $\alpha$ nor $\beta$ is anomalous. In that situation we must have that $w_{\alpha}\in\operatorname{Hull}^{\mathcal{M}_{\alpha}}_{n_{\alpha}+1}(\kappa_{m}\cup\\{p_{\alpha}\\})$. This implies that $p_{\alpha}$ collapses down to $p_{n_{\alpha}+1}(\mathcal{M}^{m}_{\alpha})$. The same, of course, holds for $\beta$. Note we must have $n_{\alpha}=n_{\beta}$. It follows that, for every $n\leq m$,
$\displaystyle\mathcal{C}_{0}(\mathcal{M}^{n}_{\alpha})$
$\displaystyle\cong\operatorname{Hull}^{\mathcal{M}^{m}_{\alpha}}_{n_{\alpha}+1}(\kappa_{n}\cup\\{p_{n_{\alpha}+1}(\mathcal{M}^{m}_{\alpha})\\})$
$\displaystyle=\operatorname{Hull}^{\mathcal{M}^{m}_{\beta}}_{n_{\beta}+1}(\kappa_{n}\cup\\{p_{n_{\beta}+1}(\mathcal{M}^{m}_{\beta})\\})\cong\mathcal{C}_{0}(\mathcal{M}^{n}_{\beta}).$
This implies $f_{\beta}(n)=f_{\alpha}(n)$. Note that $f_{\beta}(n)=0$ if and
only if
$w_{n_{\beta}+1}(\mathcal{M}^{m}_{\beta})\notin\operatorname{Hull}^{\mathcal{M}^{m}_{\beta}}_{n_{\beta}+1}(\kappa_{n}\cup\\{p_{n_{\beta}+1}(\mathcal{M}^{m}_{\beta})\\})$
and similarly for $\alpha$.
Assume then that at least one of $\alpha$ and $\beta$ is anomalous. Let us assume that $\alpha$ is anomalous; the proof for $\beta$ is only notationally different. We will see that, in fact, both must be anomalous. As types are preserved by taking hulls, both must be active of type III. As at least one is anomalous, we do know that the top extender of $\mathcal{M}^{m}_{\alpha}$ has no generators above $\kappa_{m}$. If the other were not anomalous, we would have that $\lambda$ is an element of the appropriate hull. This implies that $\mathcal{C}_{0}(\mathcal{M}^{m}_{\beta})$ has ordinals, and hence generators, above $\kappa_{m}$. Contradiction!
As then both are anomalous and
$\mathcal{M}^{m}_{\alpha}=\mathcal{M}^{m}_{\beta}$, we have
$F_{\alpha}\upharpoonright\kappa_{m}=F_{\beta}\upharpoonright\kappa_{m}$. From
this follows $\mu^{\mathcal{M}_{\alpha}}=\mu^{\mathcal{M}_{\beta}}$ and
$F_{\alpha}\upharpoonright\kappa_{n}=F_{\beta}\upharpoonright\kappa_{n}$.
Therefore $f_{\alpha}(n)=f_{\beta}(n)$. ∎
###### Lemma 34.
Let $\alpha<\beta$ both in $C$. Then $f_{\alpha}(n)<f_{\beta}(n)$ for all but
finitely many $n$.
###### Proof.
Note that since $\alpha<\beta$ are both in $C$, we have $\mathcal{M}_{\alpha}\neq\mathcal{M}_{\beta}$ and so $\mathcal{M}_{\alpha}\in\mathcal{M}_{\beta}$. Let us first assume that $\beta$ is not anomalous.
Let $n^{*}$ be such that
$\mathcal{M}_{\alpha}\in\operatorname{Hull}^{\mathcal{M}_{\beta}}_{n_{\beta}+1}(\kappa_{n^{*}}\cup\\{p_{\beta}\\})$.
The pre-image of $\mathcal{M}_{\alpha}$ in $\mathcal{M}^{n}_{\beta}$ ($n\geq
n^{*}$) can then compute $\mathcal{M}^{n}_{\alpha}$ and hence $f_{\alpha}(n)$
correctly.
If on the other hand $\beta$ is anomalous, let $n^{*}$ be such that $\mathcal{M}_{\alpha}$ is generated by some $a\in{\left[\kappa_{n^{*}}\right]}^{\mathord{<}\omega}$, i.e. $\mathcal{M}_{\alpha}=\iota_{F_{\beta}}(h)(a)$ for some $h\in{}^{\mu^{\mathcal{M}_{\beta}}}(\mathcal{M}||\mu^{\mathcal{M}_{\beta}})$. Then $f_{\alpha}(n)$ ($n\geq n^{*}$) can be computed from $\iota_{F_{\beta}\upharpoonright\kappa_{n}}(h)(a)$ inside $\operatorname{Ult}(\mathcal{M};F_{\beta}\upharpoonright\kappa_{n})$ by Łoś’s Theorem. ∎
###### Lemma 35.
Let $\beta\in C$ be of uncountable cofinality. Then $\beta$ is a continuity point of the sequence (i.e., $f_{\beta}$ is the exact upper bound of $\langle f_{\alpha}:\alpha\in C\cap\beta\rangle$).
###### Proof.
Let $\langle\alpha_{n}:n<\omega\rangle$ be such that $\alpha_{n}<f_{\beta}(n)$ for every $n$. We shall find some $\alpha<\beta$ such that $f_{\alpha}$ dominates $\langle\alpha_{n}:n<\omega\rangle$ almost everywhere. Towards that end, we deal first with the case where $\beta$ is not anomalous.
For almost all $n<\omega$ we have some surjection from $\kappa_{n}$ onto
$\alpha_{n}$ in $\mathcal{M}^{n}_{\beta}$, given by some parameter
$a_{n}\in{\left[\kappa_{n}\right]}^{\mathord{<}\omega}$ and term $\tau_{n}$.
Let $\xi_{n}<\rho_{n_{\beta}}(\mathcal{M}_{\beta})$ be such that the image of
such a surjection is ($\Sigma_{1}$)-definable over $\mathcal{M}||\xi_{n}$ with
$\operatorname{Th}^{\mathcal{M}_{\beta}}_{n_{\beta}}(\xi_{n},p_{n_{\beta}}(\mathcal{M}_{\beta}))$
as an additional predicate.
By Lemma 16, $\rho_{n_{\beta}}(\mathcal{M}_{\beta})$ has uncountable
cofinality. So
$\xi:=\sup\limits_{n<\omega}\xi_{n}<\rho_{n_{\beta}}(\mathcal{M}_{\beta})$.
Take then some $A$ that codes the $\Sigma_{1}$ theory of $\mathcal{M}||\xi$
with
$\operatorname{Th}^{\mathcal{M}_{\beta}}_{n_{\beta}}(\xi,p_{n_{\beta}}(\mathcal{M}_{\beta}))$
as an additional predicate. Such an $A$ exists in $\mathcal{M}_{\beta}$.
Pick some $\alpha<\beta$ such that $A\in\mathcal{M}_{\alpha}$. Let $n<\omega$
be such that $A$ has a pre-image $\bar{A}$ in $\mathcal{M}^{n}_{\alpha}$.
$\mathcal{M}^{n}_{\alpha}$ can then compute $\alpha_{n}$ as the ordertype of
$\\{(\gamma,\delta)|(k,a_{n}{}^{\smallfrown}\langle\gamma,\delta\rangle)\in\bar{A}\\}$
where $k$ is the Gödel number of
$``\tau_{n}(a_{n})(\gamma)<\tau_{n}(a_{n})(\delta)^{\prime\prime}$. Hence
$\alpha_{n}<f_{\alpha}(n)$. Similarly, if $\alpha$ were anomalous, we could pick $n$ such that $A=\iota_{F_{\alpha}}(h)(a)$ for some $h\in{}^{\mu^{\mathcal{M}_{\alpha}}}(\mathcal{M}||\mu^{\mathcal{M}_{\alpha}})$ and $a\in{\left[\kappa_{n}\right]}^{\mathord{<}\omega}$. The rest of the argument remains the same.
Let us then assume that $\beta$ is anomalous. Pick $h_{n}\in{}^{\mu^{\mathcal{M}_{\beta}}}(\mathcal{M}||\mu^{\mathcal{M}_{\beta}})$ such that $\iota_{F_{\beta}}(h_{n})(a_{n})$ is a surjection from $\kappa_{n}$ onto $\alpha_{n}$ for some $a_{n}\in{\left[\kappa_{n}\right]}^{\mathord{<}\omega}$. We have that $\operatorname{cof}(((\mu^{\mathcal{M}_{\beta}})^{+})^{\mathcal{M}})>\omega$. Pick then some $\xi<((\mu^{\mathcal{M}_{\beta}})^{+})^{\mathcal{M}}$ such that $\langle h_{n}:n<\omega\rangle\subset\mathcal{M}||\xi$. By weak amenability the extender fragment $\bar{F}:=\\{(a,X)\in F_{\beta}|X\in\mathcal{M}||\xi,a\in{\left[\lambda\right]}^{\mathord{<}\omega}\\}$ is in $\mathcal{M}_{\beta}$. Pick then $\alpha<\beta$ with
$\bar{F}\in\mathcal{M}_{\alpha}$. Any $\mathcal{M}^{n}_{\alpha}$ containing a pre-image $\bar{\bar{F}}$ of $\bar{F}$ can then compute $\alpha_{n}$ as the ordertype of $\\{(\gamma,\delta)|B^{\gamma,\delta}_{n}\in\bar{\bar{F}}\\}$ where
$B^{\gamma,\delta}_{n}=\\{\bar{a}\in\left[\mu^{\mathcal{M}_{\beta}}\right]^{|b^{\gamma,\delta}_{n}|}|h^{a_{n},b^{\gamma,\delta}_{n}}_{n}(\bar{a})(\operatorname{id}^{\gamma,b^{\gamma,\delta}_{n}}(\bar{a}))<h^{a_{n},b^{\gamma,\delta}_{n}}_{n}(\bar{a})(\operatorname{id}^{\delta,b^{\gamma,\delta}_{n}}(\bar{a}))\\},$
and $b^{\gamma,\delta}_{n}:=a_{n}\cup\\{\gamma,\delta\\}$. Hence $\alpha_{n}<f_{\alpha}(n)$. ∎
###### Lemma 36.
Assume $\langle\kappa_{n}:n<\omega\rangle\in\mathcal{M}$, then $\langle
f_{\alpha}:\alpha\in C\rangle$ is a scale in
$\prod\limits_{n<\omega}\tau_{n}\cap\mathcal{M}$.
###### Proof.
Let $f\in\prod\limits_{n<\omega}\tau_{n}\cap\mathcal{M}$. Pick $\alpha\in C$ such that $f\in\mathcal{M}_{\alpha}$. Then $f(n)<f_{\alpha}(n)$ for all but finitely many $n$. ∎
###### Remark 37.
We note that it is possible to associate a sequence in
$\prod\limits_{n<\omega}\tau_{n}$ to any initial segment of $\mathcal{M}$
projecting to $\lambda$ and it would obey the established rules.
In certain situations we will want to consider a variant construction. Let us
consider an additional set of parameters
$\vec{\alpha}:=\langle\alpha_{n}:n<\omega\rangle\in\prod\limits_{n<\omega}\tau_{n}$.
Let $\beta\in C$. By the condensation lemma there exists some
$\mathcal{M}^{n,\alpha_{n}}_{\beta}$ such that
$\mathcal{C}_{0}(\mathcal{M}^{n,\alpha_{n}}_{\beta})$ is isomorphic to
$\operatorname{Hull}^{\mathcal{M}_{\beta}}_{n_{\beta}+1}(\kappa_{n}\cup\\{p_{\beta}{}^{\smallfrown}\langle\alpha_{n}\rangle\\})$.
We then define:
$f^{\vec{\kappa},\vec{\alpha},\mathcal{M}}_{\beta}(n)=\begin{cases}(\kappa^{+}_{n})^{\mathcal{M}^{n,\alpha_{n}}_{\beta}}&\\{\lambda,w_{\beta}\\}\subseteq\operatorname{Hull}^{\mathcal{M}_{\beta}}_{n_{\beta}+1}(\kappa_{n}\cup\\{p_{\beta}{}^{\smallfrown}\langle\alpha_{n}\rangle\\})\\\
0&\text{otherwise}\end{cases}$
If $\beta$ is anomalous, then we use $F_{\beta}\upharpoonright(\alpha_{n}+1)$
(instead of $F_{\beta}\upharpoonright\kappa_{n}$) to define the sequence.
This sequence will behave just like the previously defined sequence, and the proofs are mostly the same. The only minor problem in adapting these arguments lies in the preservation of standard parameters. Let $p^{n}_{\beta}$ be the image of $p_{\beta}$ under the collapse map in $\mathcal{M}^{n,\alpha_{n}}_{\beta}$. Then $p^{n}_{\beta}$ might fail to be the standard parameter of $\mathcal{M}^{n,\alpha_{n}}_{\beta}$, as it can fail to be a good parameter.
We certainly do know that $p^{n}_{\beta}{}^{\smallfrown}\langle\alpha_{n}\rangle$ is a parameter, so the standard parameter is below it in the lexicographic order. As we do have a preimage of the solidity witness in $\mathcal{M}^{n,\alpha_{n}}_{\beta}$, its standard parameter can only be smaller in the last component, i.e. $p_{n_{\beta}+1}(\mathcal{M}^{n,\alpha_{n}}_{\beta})=p^{n}_{\beta}{}^{\smallfrown}\alpha^{\prime}$ with $\alpha^{\prime}\leq\alpha_{n}$.
Then $\mathcal{M}^{m,\alpha_{m}}_{\beta}$ can always compute $\mathcal{M}^{n,\alpha_{n}}_{\beta}$ from its standard parameter and the ordinal $\alpha_{n}$ in a consistent manner, guaranteeing tree-likeness of the sequence. Everything else goes through with minor changes.
### 3.2 Limit cardinals
Let now each of the $\kappa_{n}$ be an inaccessible cardinal in $\mathcal{M}$. We want to extract from $\mathcal{M}_{\beta}$, $\beta\in C$, a sequence of structures that singularize some $g_{\beta}(n)<\kappa_{n}$. For this we need a vector of parameters $\vec{\alpha}=\langle\alpha_{n}:n<\omega\rangle$ where $\alpha_{n}<\kappa_{n}$. We also require that $\sup\limits_{n<\omega}\alpha_{n}=\lambda$. When do these parameters give rise to the right structure? This will depend on whether $\beta$ is anomalous or not. We begin by listing three key conditions for the case that $\beta$ is not anomalous:
* $(1)^{\beta}_{n}$
$\sup(\operatorname{Hull}^{\mathcal{M}_{\beta}}_{n_{\beta}+1}(\alpha_{n}\cup\\{p_{\beta}\\})\cap\kappa_{n})>\alpha_{n}$;
* $(2)^{\beta}_{n}$
$\kappa_{n}\in\operatorname{Hull}^{\mathcal{M}_{\beta}}_{n_{\beta}+1}(\alpha_{n}\cup\\{p_{\beta}\\})$
* $(3)^{\beta}_{n}$
$\operatorname{Hull}^{\mathcal{M}_{\beta}}_{n_{\beta}+1}(\alpha_{n}\cup\\{p_{\beta}\\})$
is cofinal in $\rho_{n_{\beta}}(\mathcal{M}_{\beta})$.
If $\beta$ is anomalous, we have the following two considerations:
* $(4)^{\beta}_{n}$
$\sup(\\{\iota_{F_{\beta}}(h)(a)|h\in{}^{\mu^{\mathcal{M}_{\beta}}}\mu^{\mathcal{M}_{\beta}},a\in{\left[\alpha_{n}\right]}^{\mathord{<}\omega}\\}\cap\kappa_{n})>\alpha_{n}$;
* $(5)^{\beta}_{n}$
$\kappa_{n}=\iota_{F_{\beta}}(h)(a)$ for some
$h\in{}^{\mu^{\mathcal{M}_{\beta}}}\mu^{\mathcal{M}_{\beta}}$ and
$a\in{\left[\alpha_{n}\right]}^{\mathord{<}\omega}$.
We say $\beta$ is adequate iff
$(1)^{\beta}_{n}+(2)^{\beta}_{n}+(3)^{\beta}_{n}$ or
$(4)^{\beta}_{n}+(5)^{\beta}_{n}$ (depending on type) are met for all but
finitely many $n$. If $\beta$ is adequate and not anomalous then
$g^{\vec{\kappa},\vec{\alpha},\mathcal{M}}_{\beta}(n):=\begin{cases}\sup(\operatorname{Hull}^{\mathcal{M}_{\beta}}_{n_{\beta}+1}(\alpha_{n}\cup\\{p_{\beta}\\})\cap\kappa_{n})&(1)^{\beta}_{m}+(2)^{\beta}_{m}+(3)^{\beta}_{m}\text{ for all }m\geq n\\\ 0&\text{otherwise}\end{cases}$
We then let $\mathcal{M}^{n}_{\beta}$ be the unique initial segment of
$\mathcal{M}$ such that $\mathcal{C}_{0}(\mathcal{M}^{n}_{\beta})$ is
isomorphic to
$\operatorname{Hull}^{\mathcal{M}_{\beta}}_{n_{\beta}+1}(g_{\beta}(n)\cup\\{p_{\beta}\\})$.
(Note that the second case in the condensation lemma cannot hold as
$g_{\beta}(n)$ is a limit of cardinals and hence a cardinal itself. This
follows by elementarity, trivially so when $n_{\beta}>0$ otherwise by
$(3)^{\beta}_{n}$.)
If on the other hand $\beta$ is anomalous then
$g^{\vec{\kappa},\vec{\alpha},\mathcal{M}}_{\beta}(n):=\begin{cases}\sup(\\{\iota_{F_{\beta}}(h)(a)|h\in{}^{\mu^{\mathcal{M}_{\beta}}}\mu^{\mathcal{M}_{\beta}},a\in{\left[\alpha_{n}\right]}^{\mathord{<}\omega}\\}\cap\kappa_{n})&(4)^{\beta}_{m}+(5)^{\beta}_{m}\text{ for all }m\geq n\\\ 0&\text{otherwise}\end{cases}$
$\mathcal{M}^{n}_{\beta}$ will be the unique initial segment of $\mathcal{M}$
with the trivial completion of $F_{\beta}\upharpoonright g_{\beta}(n)$ as its
top extender. As in the previous section, we will omit superscripts for the
remainder of this section.
To ensure tree-likeness for this sequence we need a strong interdependence
between the ordinal $g_{\beta}(n)$ and structure $\mathcal{M}^{n}_{\beta}$.
Towards that end notice that $g_{\beta}(n)$ is definably singularized over
$\mathcal{M}^{n}_{\beta}$. The next lemma will show that
$\mathcal{M}^{n}_{\beta}$ is the least level of $\mathcal{M}$ with this
property.
###### Lemma 38.
$g_{\beta}(n)$ is regular in $\mathcal{M}^{n}_{\beta}$ for all $n$ such that
$(1)^{\beta}_{n}+(2)^{\beta}_{n}+(3)^{\beta}_{n}$ or
$(4)^{\beta}_{n}+(5)^{\beta}_{n}$ holds.
###### Proof.
First we will consider $\beta$ that is not anomalous. Since $\kappa_{n}$ is
regular in $\mathcal{M}_{\beta}$, it will then be enough to show that
$\sup(\operatorname{Hull}^{\mathcal{M}_{\beta}}_{n_{\beta}+1}(g_{\beta}(n)\cup\\{p_{\beta}\\})\cap\kappa_{n})=g_{\beta}(n)$.
Let $\xi<\kappa_{n}$ be such that
$\xi\in\operatorname{Hull}^{\mathcal{M}_{\beta}}_{n_{\beta}+1}(g_{\beta}(n)\cup\\{p_{\beta}\\})$.
We can then take $\gamma<g_{\beta}(n)$ and
$\delta<\rho_{n_{\beta}}(\mathcal{M}_{\beta})$ such that
$\xi\in\operatorname{Hull}^{\mathcal{N}_{\delta}}_{1}(\gamma)$ where
$\mathcal{N}_{\delta}$ is $\mathcal{M}||\delta$ together with
$\operatorname{Th}^{\mathcal{M}_{\beta}}_{n_{\beta}}(\delta,p_{n_{\beta}}(\mathcal{M}_{\beta}))$
as an additional predicate. We can take $\gamma$ and $\delta$ to be in
$\operatorname{Hull}^{\mathcal{M}_{\beta}}_{n_{\beta}+1}(\alpha_{n}\cup\\{p_{\beta}\\})$
(by definition of $g_{\beta}(n)$ and $(3)_{n}$ respectively).
Then
$\eta:=\sup(\operatorname{Hull}^{\mathcal{N}_{\delta}}_{1}(\gamma)\cap\kappa_{n})$
is also in that hull (uses $(2)_{n}$) and thus $\xi<\eta<g_{\beta}(n)$.
Now consider an anomalous $\beta$. We will show that $g_{\beta}(n)$ is regular in $\operatorname{Ult}(\mathcal{M};F_{\beta}\upharpoonright g_{\beta}(n))$. We have some $h\in{}^{\mu^{\mathcal{M}_{\beta}}}\mu^{\mathcal{M}_{\beta}}$ and $a\in{\left[\alpha_{n}\right]}^{\mathord{<}\omega}$ such that $\kappa_{n}=\iota_{F_{\beta}}(h)(a)$. We will show that $g_{\beta}(n)=\iota_{F_{\beta}\upharpoonright g_{\beta}(n)}(h)(a)$. As this pair represents a regular cardinal in the larger ultrapower, this will suffice. Pick then some $h_{0}$ and $b$ such that $b\in{\left[g_{\beta}(n)\right]}^{\mathord{<}\omega}$ and $\iota_{F_{\beta}\upharpoonright g_{\beta}(n)}(h_{0})(b)<\iota_{F_{\beta}\upharpoonright g_{\beta}(n)}(h)(a)$.
Pick some $c\in{\left[\alpha_{n}\right]}^{\mathord{<}\omega}$ (w.l.o.g.
$a\subset c$) and $h_{1}$ such that $b\subset\iota_{F_{\beta}}(h_{1})(c)$.
Define
$h_{2}:\left[\mu^{\mathcal{M}_{\beta}}\right]^{|c|}\rightarrow\mu^{\mathcal{M}_{\beta}}$
by
$d\mapsto\sup\\{h_{0}(e)|e\in\left[h_{1}(d)\right]^{|b|},h_{0}(e)<h^{a,c}(d)\\}$.
We then have $\iota_{F_{\beta}\upharpoonright
g_{\beta}(n)}(h_{0})(b)\leq\iota_{F_{\beta}\upharpoonright
g_{\beta}(n)}(h_{2})(c)<g_{\beta}(n)$ as required. ∎
Thus $g_{\beta}(n)$ is regular in $\mathcal{M}^{n}_{\beta}$ but definably singular over it. Hence $\mathcal{M}^{n}_{\beta}$ is uniquely determined as a level of $\mathcal{M}$ by $g_{\beta}(n)$.
The following is a straightforward corollary of the proof of the previous
Lemma.
###### Corollary 39.
Let $\beta\in C$ be such that $g_{\beta}(n)$ is defined for all but finitely many $n$. Let $\vec{\alpha}^{*}:=\langle\alpha^{*}_{n}:n<\omega\rangle$ be such that $\alpha_{n}\leq\alpha^{*}_{n}<g_{\beta}(n)$ for all but finitely many $n$. Then $g^{\vec{\kappa},\mathcal{M},\vec{\alpha}^{*}}_{\beta}$ is defined and agrees with $g_{\beta}$ almost everywhere.
###### Lemma 40.
Say $\beta^{*}$ is adequate. Then every $\beta>\beta^{*}$ in $C$ of uncountable cofinality is also adequate.
###### Proof.
Let us assume for simplicity’s sake that
$(1)^{\beta^{*}}_{n}+(2)^{\beta^{*}}_{n}+(3)^{\beta^{*}}_{n}$ holds for all
$n$. Let then $n^{*}$ be such that
$\beta^{*}\in\operatorname{Hull}^{\mathcal{M}_{\beta}}_{n_{\beta}+1}(\alpha_{m}\cup\\{p_{\beta}\\})$
for all $m\geq n^{*}$. Then that hull can compute
$\operatorname{Hull}^{\mathcal{M}_{\beta^{*}}}_{n_{\beta^{*}}+1}(\alpha_{m}\cup\\{p_{\beta^{*}}\\})$
for all $m\geq n^{*}$. $(1)^{\beta}_{m}+(2)^{\beta}_{m}$ then follows.
That is, if $\beta^{*}$ is not anomalous. If it is anomalous, note that $\operatorname{Hull}^{\mathcal{M}_{\beta}}_{n_{\beta}+1}(\alpha_{m}\cup\\{p_{\beta}\\})$ has access to the extender $F_{\beta^{*}}$ and can compute $\kappa_{m}$ from it, assuming $\alpha_{m}>\mu^{\mathcal{M}_{\beta^{*}}}$. $(1)^{\beta}_{m}$ follows for similar reasons.
$(3)^{\beta}_{m}$ almost everywhere follows for cofinality reasons.
If $\beta$ is anomalous then we take some $h$ and
$a\in{\left[\alpha_{m}\right]}^{\mathord{<}\omega}$ such that
$\beta^{*}=\iota_{F_{\beta}}(h)(a)$, and let $\tau_{m}$ be some $r\Sigma_{n_{\beta^{*}}+1}$-term such that $\kappa_{m}=\tau^{\mathcal{M}_{\beta^{*}}}_{m}(b_{m},p_{\beta^{*}})$ for some $b_{m}\in{\left[\alpha_{m}\right]}^{\mathord{<}\omega}$. Define then
$h^{m}_{0}:\left[\mu^{\mathcal{M}_{\beta}}\right]^{|a\cup
b_{m}|}\rightarrow\mu^{\mathcal{M}_{\beta}}$ by
$c\mapsto\tau^{\mathcal{M}_{h^{a,a\cup
b_{m}}(c)}}_{m}(\operatorname{id}^{b_{m},a\cup b_{m}}(c),p_{h^{a,a\cup
b_{m}}(c)}).$
We then have $\iota_{F_{\beta}}(h^{m}_{0})(a\cup b_{m})=\kappa_{m}$. $(4)^{\beta}_{m}$ follows for similar reasons.
The idea is similar if $\beta^{*}$ and $\beta$ are both anomalous. (Pick $h,a$ representing $F_{\beta^{*}}$, etc.) We skip further details. ∎
Assuming the existence of an adequate ordinal $\beta^{*}$ we can then show
that $\langle g_{\beta}:\beta\in
C\backslash\beta^{*}\cap\operatorname{cof}(\mathord{>}\omega)\rangle$ is
increasing (mod finite), tree-like, and continuous as before.
###### Lemma 41.
Suppose that $\langle\kappa_{n}:n<\omega\rangle$ and $\langle\alpha_{n}:n<\omega\rangle$ are both in $\mathcal{M}$. Then there exists an adequate $\beta^{*}$, and $\langle g_{\beta}:\beta\in C\backslash\beta^{*}\cap\operatorname{cof}(\mathord{>}\omega)\rangle$ is a scale in $\prod\limits_{n<\omega}\kappa_{n}\cap\mathcal{M}$.
###### Proof.
Any $\beta^{*}$ of uncountable cofinality (in $\mathcal{M}$) such that
$\mathcal{M}||\beta^{*}$ contains both sequences will be adequate. The rest is
then as before. ∎
Note that while we have only considered sequences of a “pure” type, we could
easily deal with sequences $\langle\kappa_{n}:n<\omega\rangle$ of regular
cardinals with both successor cardinals and inaccessible cardinals by mixing
both constructions using parameters where needed. With this we finish the
proof of Theorem 6.
###### Remark 42.
Assuming that $\lambda$ is not subcompact in $\mathcal{M}$ the sequences we
defined should be very good, but we have yet to check this in detail. The
proof would presumably proceed along similar lines as in [7].
## 4 Core models and the tree-like scale
We now want to consider the question of when the sequences constructed in the previous section are scales in $V$. For this we need to consider the right mouse. The natural candidate is, of course, the core model. But even core model sequences are not always scales.
To keep the following as accessible as possible we are going to operate under a smallness assumption. This will allow us to cover all known anti-tree-like-scale results while greatly simplifying the following arguments.
This assumption is:
$\begin{split}\text{There is no inner model }W&\text{ and }F\in W\text{ a
total extender }\\\ &\text{such that
}\operatorname{gen}(F)\geq(\operatorname{crit}(F)^{++})^{W}\end{split}$ (2)
###### Corollary 43.
There is no $\omega_{1}$-iterable premouse $(\mathcal{M},\in,\vec{E},F)$ such
that $\operatorname{gen}(F)>(\operatorname{crit}(F)^{++})^{\mathcal{M}}$.
###### Proof.
Assume towards a contradiction that $(\mathcal{M};\in,\vec{E},F)$ is a
counterexample. Then we can generate an inner model $W$ by iterating the top
extender out of the universe. Note that by a standard reflection argument,
$\omega_{1}$-iterability is enough to ensure that this model is wellfounded.
By the initial segment condition
$F\upharpoonright(\operatorname{crit}(F)^{++})^{W}$ is then in $W$
contradicting (2). ∎
The reader should be aware, though, that our main results will hold under much weaker anti-large-cardinal assumptions (up to one Woodin cardinal and beyond). Neither should the choice of indexing scheme affect their validity (though we have yet to check this in detail).
The most immediate payoff of (2) will be that all iterations we are going to
consider are linear (This is one instance in which ms-indexing will make
things simpler for us).
###### Proposition 44.
Let $\mathcal{M}$ be an $\omega_{1}$-iterable premouse, and $\mathcal{T}$ a normal iteration tree on $\mathcal{M}$. Then there are no $\alpha<\beta\leq\operatorname{lh}(\mathcal{T})$ such that $\operatorname{crit}(E^{\mathcal{T}}_{\beta})<\operatorname{gen}(E^{\mathcal{T}}_{\alpha})$.
###### Proof.
Let $\alpha<\beta$ be such that $\operatorname{crit}(E^{\mathcal{T}}_{\beta})<\operatorname{gen}(E^{\mathcal{T}}_{\alpha})$. There are three cases:
Case 1:
$\operatorname{crit}(E^{\mathcal{T}}_{\alpha})<\operatorname{crit}(E^{\mathcal{T}}_{\beta})$
By agreement between models in an iteration we have that
$\operatorname{crit}(E^{\mathcal{T}}_{\beta})$ is inaccessible in
$\mathcal{M}^{\mathcal{T}}_{\alpha}||\operatorname{lh}(E^{\mathcal{T}}_{\alpha})$
and thus
$(\operatorname{crit}(E^{\mathcal{T}}_{\alpha})^{++})^{\mathcal{M}^{\mathcal{T}}_{\alpha}||\operatorname{lh}(E^{\mathcal{T}}_{\alpha})}<\operatorname{crit}(E^{\mathcal{T}}_{\beta})$.
As $E^{\mathcal{T}}_{\alpha}$ has generators above
$\operatorname{crit}(E^{\mathcal{T}}_{\beta})$,
$\mathcal{M}^{\mathcal{T}}_{\alpha}|\operatorname{lh}(E^{\mathcal{T}}_{\alpha})$
is a counterexample to Corollary 43.
Case 2:
$\operatorname{crit}(E^{\mathcal{T}}_{\alpha})>\operatorname{crit}(E^{\mathcal{T}}_{\beta})$
Due to the agreement between models in an iteration, $\operatorname{lh}(E^{\mathcal{T}}_{\alpha})$, which is greater than $\operatorname{crit}(E^{\mathcal{T}}_{\beta})$, is a cardinal in $\mathcal{M}^{\mathcal{T}}_{\beta}$; so by strong acceptability $\operatorname{crit}(E^{\mathcal{T}}_{\alpha})$ is inaccessible in $\mathcal{M}^{\mathcal{T}}_{\beta}$ and thus above $(\operatorname{crit}(E^{\mathcal{T}}_{\beta})^{++})^{\mathcal{M}^{\mathcal{T}}_{\beta}}$.
As $\mathcal{T}$ is a normal iteration
$\operatorname{lh}(E^{\mathcal{T}}_{\beta})>\operatorname{lh}(E^{\mathcal{T}}_{\alpha})>(\operatorname{crit}(E^{\mathcal{T}}_{\alpha})^{+})^{\mathcal{M}^{\mathcal{T}}_{\beta}}$
and so
$\operatorname{gen}(E^{\mathcal{T}}_{\beta})>\operatorname{crit}(E^{\mathcal{T}}_{\alpha})$
but then
$\mathcal{M}^{\mathcal{T}}_{\beta}|\operatorname{lh}(E^{\mathcal{T}}_{\beta})$
is a counterexample to Corollary 43.
Case 3:
$\operatorname{crit}(E^{\mathcal{T}}_{\alpha})=\operatorname{crit}(E^{\mathcal{T}}_{\beta})$
Because $\mathcal{T}$ is normal we have
$\operatorname{lh}(E^{\mathcal{T}}_{\alpha})<\operatorname{lh}(E^{\mathcal{T}}_{\beta})$
but this means that $E^{\mathcal{T}}_{\beta}$ must have generators cofinal in
$(\operatorname{crit}(E^{\mathcal{T}}_{\beta})^{++})^{\mathcal{M}^{\mathcal{T}}_{\beta}}$.
Now, let $\gamma$ be the last drop in the interval $\left(\alpha,\beta\right]$
if it exists or $\alpha+1$ otherwise. We can assume that
$\operatorname{crit}(E^{\mathcal{T}}_{\gamma-1})\geq\operatorname{crit}(E^{\mathcal{T}}_{\beta})$.
Then
$\iota^{\mathcal{T}}_{\gamma-1,\beta}(\operatorname{crit}(E^{\mathcal{T}}_{\gamma-1}))$
is the critical point of an extender on the
$\mathcal{M}^{\mathcal{T}}_{\beta}$-sequence and greater than
$\operatorname{crit}(E^{\mathcal{T}}_{\beta})$. As $E^{\mathcal{T}}_{\beta}$
must be total over $\mathcal{M}^{\mathcal{T}}_{\beta}$, we can produce a
class-size model $W$ containing $E^{\mathcal{T}}_{\beta}$ and agreeing with
$\mathcal{M}^{\mathcal{T}}_{\beta}$ past
$(\operatorname{crit}(E^{\mathcal{T}}_{\beta})^{++})^{W}$, which contradicts
(2).
∎
Another consequence of (2) is that the Jensen-Steel core model $K$ exists by
[18]. Note that by the smallness assumption there can be no anomalous ordinals
in $K$. For the following results we will follow the general framework of the
proof of weak covering for that model. Before going into the proofs we shall
take a quick note of the objects involved.
Let $\lambda$ be a singular cardinal of countable cofinality. Let
$\vec{\kappa}:=\langle\kappa_{n}:n<\omega\rangle$ be a sequence cofinal in
$\lambda$. Let $\tau_{n}:=(\kappa^{+}_{n})^{K}$. Consider some $X\prec
H_{\theta}$ ($\theta\gg\lambda$) and let $\sigma_{X}:H_{X}\to X$ be the inverse
of the transitive collapse map. $X$ will need to satisfy certain properties:
* •
certain phalanxes “lift” through $\sigma^{X}$
* •
$\operatorname{card}(X)<\lambda$,
* •
$X$ is tight on $\vec{\kappa}$ (and $\langle\tau_{n}:n<\omega\rangle$), i.e.
$X\cap\prod\limits_{n<\omega}\kappa_{n}$ is cofinal in
$\prod\limits_{n<\omega}(X\cap\kappa_{n})/J_{bd}$,
* •
the collection of $X\prec H_{\theta}$ with the above three properties is
stationary.
The first point is quite vague, and we will provide more details where needed
in the course of the argument. By [21] $\omega$-closed $X$ do satisfy the
first property, but it seems possible that there are not enough, i.e.
stationary many, $X$ with all properties available. In such cases, by [22] we
do know that for every internally approachable chain $\vec{Y}:=\langle
Y_{i}:i<\kappa\rangle$ in $H_{\theta}$ there exists some $i<\kappa$ of
uncountable cofinality such that $Y_{i}$ satisfies the first property. That it
satisfies the other properties should be easy to see.
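Unwinding the third point, tightness of $X$ on $\vec{\kappa}$ says precisely the following (as we read the definition above — this is a restatement, not an extra assumption):

```latex
% X is tight on \vec{\kappa} iff the functions available inside X dominate,
% modulo the ideal J_{bd} of bounded subsets of \omega, every function
% lying below the characteristic function n \mapsto \sup(X \cap \kappa_n):
\forall g \in \prod_{n<\omega} (X \cap \kappa_n)\
\exists f \in X \cap \prod_{n<\omega} \kappa_n\
\exists N<\omega\ \forall n \geq N:\ g(n) \leq f(n).
```

It is in this form that tightness will be used, e.g. in the proof of Corollary 52, where it guarantees that $\langle\sup(X\cap\tau_{n}):n<\omega\rangle$ is an exact upper bound.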
From now on let $X$ be some such set with the required properties, and let
$\sigma_{X}:H_{X}\rightarrow X$ be the inverse of its transitive collapse,
where $H_{X}$ is transitive. Write $K_{X}:={\sigma^{-1}_{X}}"\left[{K}\right]$,
$\lambda_{X}:=\sigma^{-1}_{X}(\lambda)$,
$\kappa^{X}_{n}:=\sigma^{-1}_{X}(\kappa_{n})$, etc.
As is standard, we will compare $K_{X}$ with $K$; for our choice of $X$ the
iteration tree on $K_{X}$ should be trivial (this is $(1)_{\alpha}$ from [21]
or $(1)^{i}_{\alpha}$ from [22], respectively). Let then $\mathcal{I}_{X}$ be
the iteration tree on $K$ that arises from the coiteration. We will simplify
notation by writing $\mathcal{M}^{X}_{\alpha}$ for
$\mathcal{M}^{\mathcal{I}_{X}}_{\alpha}$ etc. Let
$\zeta_{X}:=\operatorname{lh}(\mathcal{I}_{X})$ be the length of the
iteration.
###### Lemma 45.
$(\operatorname{crit}(\sigma_{X})^{+})^{K_{X}}<(\operatorname{crit}(\sigma_{X})^{+})^{K}$
and if $E^{X}_{0}$ is defined then it is not total over $K$.
Note that $K_{X}$ and $K$ agree up to
$(\operatorname{crit}(\sigma_{X})^{+})^{K_{X}}$ as a result of the
condensation lemma.
###### Proof.
Assume towards a contradiction that
$(\operatorname{crit}(\sigma_{X})^{+})^{K_{X}}=(\operatorname{crit}(\sigma_{X})^{+})^{K}$.
Then $E_{\sigma_{X}}$, the
$(\operatorname{crit}(\sigma_{X}),\sigma_{X}(\operatorname{crit}(\sigma_{X})))$-extender
derived from $\sigma_{X}$, measures all subsets of its critical point that are
in $K$. It also coheres with $K$ by the elementarity of $\sigma_{X}$. (This is
a little bit of a lie. We would actually need to know that all Mitchell-Steel
initial segments of $E_{\sigma_{X}}$ are on the $K$-sequence. But if this
fails we could simply apply the argument we are about to give to the least
missing segment instead.)
We do know that the phalanx $\langle\langle
K,\operatorname{Ult}(K;E_{\sigma_{X}})\rangle,\sigma_{X}(\operatorname{crit}(\sigma_{X}))\rangle$
is iterable. This is $(2)_{\alpha}$ from [21] or [22] where
$\operatorname{crit}(\sigma_{X})=(\aleph_{\alpha})^{K_{X}}$. (Once again this
is something of a lie. We actually have to replace $K$ with an appropriate
soundness witness in the above statement, but we can choose $W$ such that it
agrees with $K$ past the level we actually care about. Thus this will not make
a difference here.)
But then by [28, 8.6] we have that $E_{\sigma_{X}}$ is on the $K$-sequence. It
should be obvious that
$\operatorname{gen}(E_{\sigma_{X}})=\sigma_{X}(\operatorname{crit}(\sigma_{X}))$
and thus $K|\operatorname{lh}(E_{\sigma_{X}})$ contradicts Corollary 43.
As for the second part, assume $E^{X}_{0}$ is applied to $K$. By the first
part, if $\operatorname{crit}(E^{X}_{0})\geq\operatorname{crit}(\sigma_{X})$,
then we must truncate. If
$(\operatorname{crit}(E^{X}_{0})^{+})^{K_{X}}=\operatorname{crit}(\sigma_{X})$,
then by elementarity
$(\operatorname{crit}(E^{X}_{0})^{+})^{K}=\sigma_{X}(\operatorname{crit}(\sigma_{X}))$,
so we must truncate.
If
$\operatorname{crit}(\sigma_{X})\geq(\operatorname{crit}(E^{X}_{0})^{++})^{K_{X}}$
then its generators must be cofinal in $\operatorname{crit}(\sigma_{X})$. So,
if the strict inequality holds then $E^{X}_{0}$ contradicts Corollary 43. A
similar argument applies if
$\operatorname{lh}(E^{X}_{0})>(\operatorname{crit}(\sigma_{X})^{+})^{K_{X}}$.
So, we must have that
$\operatorname{crit}(\sigma_{X})=(\operatorname{crit}(E^{X}_{0})^{++})^{K_{X}}$.
Consider $\mathcal{M}^{X}_{1}$. It agrees with $K_{X}$ up to
$(\operatorname{crit}(\sigma_{X})^{+})^{K_{X}}$ and that ordinal is a cardinal
there. Thus we can apply the extender $E_{\sigma_{X}}$ to it. The properties
of $X$ will guarantee that
$\tilde{K}:=\operatorname{Ult}(\mathcal{M}^{X}_{1};E_{\sigma_{X}})$ is
iterable (similar to the proof of [21, Lemma 3.13]).
We have that $K$ and $\tilde{K}$ agree up to
$\sup({\sigma_{X}}"\left[{(\operatorname{crit}(\sigma_{X})^{+})^{K_{X}}}\right])$,
which lies past $\sigma_{X}(\operatorname{crit}(\sigma_{X}))$, their common
$(\operatorname{crit}(E^{X}_{0})^{++})$; but on the other hand
$(\operatorname{crit}(E^{X}_{0})^{+++})^{\tilde{K}}=\sup({\sigma_{X}}"\left[{(\operatorname{crit}(\sigma_{X})^{+})^{K_{X}}}\right])<\sigma_{X}((\operatorname{crit}(\sigma_{X})^{+})^{K_{X}})=(\operatorname{crit}(E^{X}_{0})^{+++})^{K}$
as a result of weak covering.
Consider then $\tilde{E}$ the first extender applied on the $K$ side in the
co-iteration of $K$ and $\tilde{K}$. Its index must be above
$\sigma_{X}(\operatorname{crit}(\sigma_{X}))$ but its critical point cannot be
larger than $\operatorname{crit}(E^{X}_{0})$. $\tilde{E}$ on the $K$-sequence
then contradicts (2).
∎
Remember now the sequence $\langle\kappa_{n}:n<\omega\rangle$ and the sequence
of successors $\langle\tau_{n}:n<\omega\rangle$. The general idea for the
following proofs is to find some ordinal $\alpha_{X}<\lambda^{+}$ such that
the natural scales of the core model at $\alpha_{X}$ align with the
characteristic function of $X$.
From now on we shall assume that $\kappa_{n}$ is a cutpoint of (the extender
sequence of) $K$ and hence $\kappa^{X}_{n}$ is a cutpoint of $K_{X}$. (Here
$\alpha$ is a cutpoint of $(\mathcal{M};\in,\vec{E})$, i.e. of $\vec{E}$, iff
$\operatorname{crit}(E_{\beta})<\alpha$ implies
$\operatorname{lh}(E_{\beta})<\alpha$ for all
$\beta\in\operatorname{dom}(\vec{E})$.)
###### Lemma 46.
There exist some $n_{X},k_{X}<\omega$, a sequence of models
$\langle\mathcal{N}^{X}_{n}:n_{X}\leq n<\omega\rangle$, and maps
$\langle\upsilon^{X}_{n,m}:n_{X}\leq n\leq m<\omega\rangle$ such that:
* •
$((\kappa^{X}_{n})^{+})^{\mathcal{N}^{X}_{n}}=\tau^{X}_{n}$ and
$\mathcal{N}^{X}_{n}$ agrees with $K_{X}$ up to $\tau^{X}_{n}$ for all $n\geq
n_{X}$;
* •
$\mathcal{N}^{X}_{n}$ is $(k_{X}+1)$-sound above $\kappa^{X}_{n}$ for all
$n\geq n_{X}$;
* •
$\upsilon^{X}_{n,m}:\mathcal{C}_{0}(\mathcal{N}^{X}_{n})\rightarrow\mathcal{C}_{0}(\mathcal{N}^{X}_{m})$
is $r\Sigma_{k_{X}+1}$-elementary for all $m\geq n\geq n_{X}$;
* •
$\operatorname{crit}(\upsilon^{X}_{n,m})\geq\kappa^{X}_{n}$ for all $m\geq
n\geq n_{X}$.
For our purposes, the critical point of the identity will be defined as the
ordinal height of its domain.
###### Proof.
There are a couple of cases.
Case 1: $\mathcal{I}_{X}$ has no indices below $\lambda_{X}$.
In that case, we have $K_{X}|(\lambda^{+})^{K_{X}}\trianglelefteq K$. By Lemma
45 we do know that some $\mathcal{N}^{\prime}\trianglelefteq K$ exists with
$(\operatorname{crit}(\sigma_{X})^{+})^{\mathcal{N}^{\prime}}=(\operatorname{crit}(\sigma_{X})^{+})^{K_{X}}$
and $\rho_{\omega}(\mathcal{N}^{\prime})\leq\operatorname{crit}(\sigma_{X})$.
By assumption we must have
$K_{X}|(\lambda^{+})^{K_{X}}\trianglelefteq\mathcal{N}^{\prime}$.
Take then $\mathcal{N}^{*}$ to be the smallest initial segment of $K$ that
end-extends $K_{X}|(\lambda^{+})^{K_{X}}$ such that
$\rho_{\omega}(\mathcal{N}^{*})<\lambda_{X}$.
Let $k_{X}$ be minimal such that
$\rho_{k_{X}+1}(\mathcal{N}^{*})<\lambda_{X}$. Let $n_{X}$ be minimal such
that $\kappa^{X}_{n_{X}}\geq\rho_{k_{X}+1}(\mathcal{N}^{*})$. We then let
$\mathcal{N}^{X}_{n}:=\mathcal{N}^{*}$ for all $n\geq n_{X}$, and let
$\upsilon^{X}_{n,m}$ be the identity for all $m\geq n\geq n_{X}$. As an
initial segment of $K$, $\mathcal{N}^{*}$ is sound so this works.
Case 2: The set $\\{\operatorname{lh}(E^{X}_{\beta})|\beta<\zeta_{X}\\}$ is
bounded below $\lambda_{X}$.
Let $\eta_{X}<\zeta_{X}$ be minimal such that $E^{X}_{\eta_{X}}$ has length
$\mathord{>}\lambda_{X}$, if it exists. If there is no such ordinal, let
$\eta_{X}=\zeta_{X}$. We must then have that $\mathcal{M}^{X}_{\eta_{X}}$
agrees with $K_{X}$ past $\lambda_{X}$. If $\mathcal{M}^{X}_{\eta_{X}}$ has
some proper initial segment of length greater than $\lambda_{X}$ projecting
below $\lambda_{X}$, then this is no different from the previous case.
So let us assume that this is not the case. Let $n_{X}$ be minimal such that
$\kappa^{X}_{n_{X}}$ bounds the set
$\\{\operatorname{lh}(E^{X}_{\beta})|\beta<\eta_{X}\\}$. By Lemma 45,
$\mathcal{M}^{X}_{\eta_{X}}$ is not a weasel and is $(k_{X}+1)$-sound above
$\kappa^{X}_{n_{X}}$ for some unique $k_{X}$.
We then let $\mathcal{N}^{X}_{n}:=\mathcal{M}^{X}_{\eta_{X}}$ for all $n\geq
n_{X}$, and $\upsilon^{X}_{n,m}$ the identity for all $m\geq n\geq n_{X}$.
Case 3: The set $\\{\operatorname{lh}(E^{X}_{\beta})|\beta<\zeta_{X}\\}$ is
cofinal below $\lambda_{X}$.
Let
$\eta_{X}:=\sup(\\{\beta<\zeta_{X}|\operatorname{lh}(E^{X}_{\beta})<\lambda_{X}\\})$.
By assumption and Lemma 45 there is some drop in the interval
$\left(0,\eta_{X}\right)$. Let then $\gamma+1$ be the last such.
Let $k_{X}$ be minimal such that
$\rho_{k_{X}+1}((\mathcal{M}^{X}_{\gamma+1})^{*})\leq\operatorname{crit}(E^{X}_{\gamma})$.
Let $n_{X}$ be minimal such that
$\kappa^{X}_{n_{X}}\geq\operatorname{lh}(E^{X}_{\gamma})$. Let
$\eta^{X}_{n}<\eta_{X}$ be minimal such that
$\operatorname{crit}(E^{X}_{\eta^{X}_{n}})\geq\kappa^{X}_{n}$ for $n\geq
n_{X}$.
Let then $\mathcal{N}^{X}_{n}:=\mathcal{M}^{X}_{\eta^{X}_{n}}$ and
$\upsilon^{X}_{n,m}:=\iota^{X}_{\eta^{X}_{n},\eta^{X}_{m}}$ for $m\geq n\geq
n_{X}$. It is easy to see that the maps are as wanted, but it remains to check
that $\mathcal{N}^{X}_{n}$ is $(k_{X}+1)$-sound above $\kappa^{X}_{n}$. This
is going to be the one critical use of the assumption that $\kappa^{X}_{n}$ is
a cutpoint.
We have to show that the generators of the iteration up to $\eta^{X}_{n}$ are
bounded by $\kappa^{X}_{n}$. If $\eta^{X}_{n}$ is a limit this is obvious as
by choice of $\eta^{X}_{n}$ all previous critical points are less than
$\kappa^{X}_{n}$. So assume that $\eta^{X}_{n}=\delta+1$ and $E^{X}_{\delta}$
has a generator $\mathord{\geq}\kappa^{X}_{n}$. By the initial segment
condition we then have that the trivial completion $G$ of
$E^{X}_{\delta}\upharpoonright\kappa^{X}_{n}$ is on the sequence of $K_{X}$.
But we have
$\operatorname{crit}(G)=\operatorname{crit}(E^{X}_{\delta})<\kappa^{X}_{n}$
and $\operatorname{lh}(G)>\kappa^{X}_{n}$, contradicting that $\kappa^{X}_{n}$
is a cutpoint. ∎
The covering argument goes through three cases. Thanks to Lemma 45 we can
eliminate one of these cases; we will now see that we can also eliminate the
other, less convenient case.
###### Lemma 47.
If $\mathcal{N}^{X}_{n}$ for $n\geq n_{X}$ has a top extender, then
$\mu^{X}_{n}$, its critical point, is $\mathord{\geq}\kappa^{X}_{n}$.
###### Proof.
Let us first assume that $\mathcal{N}^{X}_{n}$ has been constructed according
to Case 1 or Case 2. Then $\lambda_{X}$ is a limit cardinal in
$\mathcal{N}^{X}_{n}$ and thus by (2) $\mu^{X}_{n}$ cannot be smaller than
$\lambda_{X}$.
If $\mathcal{N}^{X}_{n}$ is constructed according to Case 3, then some ordinal
$\mathord{\geq}\kappa^{X}_{n}$ has to be the critical point of an extender on
the $\mathcal{N}^{X}_{n}$-sequence. As no overlaps can exist on the
$\mathcal{N}^{X}_{n}$-sequence, $\mu^{X}_{n}\geq\kappa^{X}_{n}$ follows. ∎
###### Remark 48.
Note that $\mathcal{N}^{X}_{n}$ in the notation of [21] is the mouse
$\mathcal{P}_{\gamma}$ where $\kappa_{n}=\aleph^{K_{X}}_{\gamma}$. Recall that
$\mathcal{P}_{\gamma}$ is the least initial segment (if it exists) of
$\mathcal{M}^{X}_{\delta}$ that defines a subset of $\kappa_{n}$ not in
$K_{X}$ where $\delta<\zeta_{X}$ is least such that
$\operatorname{gen}(E^{X}_{\delta})>\kappa_{n}$. In addition, by the preceding
lemma $\mathcal{P}_{\gamma}=\mathcal{Q}_{\gamma}$, i.e. we are avoiding
protomice in this construction.
Let then
$\mathcal{N}_{X}:=\operatorname{dirlim}(\langle\mathcal{N}^{X}_{n},\upsilon^{X}_{n,m}:n_{X}\leq
n\leq m<\omega\rangle)$ and
$\upsilon^{X}_{n}:\mathcal{C}_{0}(\mathcal{N}^{X}_{n})\rightarrow\mathcal{C}_{0}(\mathcal{N}_{X})$
the direct limit map. It should be easy to see that $\mathcal{N}_{X}$ is
wellfounded and that the direct limit maps are $r\Sigma_{k_{X}+1}$-elementary
as they are generated by an iteration on $K$. But more is true:
###### Lemma 49.
The phalanx $((K_{X},\mathcal{N}_{X}),\lambda_{X})$ is iterable.
###### Proof.
We cannot quote [21] here as it seems a priori possible that the mouse (or
weasel) $\mathcal{P}_{\beta}$, where $\lambda_{X}=\aleph^{K_{X}}_{\beta}$,
from that proof is not equal to $\mathcal{N}_{X}$. (This would happen if
$(\lambda^{+}_{X})^{K_{X}}$ is not equal to
$(\lambda^{+}_{X})^{\mathcal{N}_{X}}$.)
Nevertheless, the proof presented in [21] works just as well with
$\mathcal{N}_{X}$ substituted for $\mathcal{P}_{\beta}$.
For those readers not content with this answer, we want to point out that
there is an easy cheat available to us in this case as (2) implies that
$\lambda_{X}$ must be a cutpoint in $\mathcal{N}_{X}$, and hence the
iterability of the phalanx reduces to the iterability of $\mathcal{N}_{X}$.
The latter holds as $\mathcal{N}_{X}$ is an iterate of the core model. ∎
###### Theorem 50.
Let $\lambda$ be a singular cardinal of countable cofinality. Let
$\vec{\kappa}:=\langle\kappa_{n}:n<\omega\rangle$ be a sequence of
$K$-cutpoints cofinal in $\lambda$. Let $\tau_{n}:=(\kappa^{+}_{n})^{K}$; then
$\prod\limits_{n<\omega}\tau_{n}$ carries a continuous, tree-like scale.
###### Proof.
We will show that $\vec{f}:=\langle f^{\vec{\kappa},K}_{\alpha}:\alpha\in
C^{\lambda,K}\rangle$ as defined in the last section is that scale. Towards
that purpose we need to show that this sequence is cofinal in
$\prod\limits_{n<\omega}\tau_{n}/J_{bd}$. Let
$g\in\prod\limits_{n<\omega}\tau_{n}/J_{bd}$ be arbitrary. Let
$X\prec(H_{\theta};\in,K||\theta,\vec{f})$ be of good type, as explained at
the beginning of this section, with $g\in X$. It will suffice to show that
there is some $\alpha_{X}$ such that
$f^{\vec{\kappa},K}_{\alpha_{X}}(n)=\sup(X\cap\tau_{n})$ for all but finitely
many $n$.
Let $\langle\mathcal{N}^{X}_{n},\upsilon^{X}_{n,m}:n_{X}\leq n\leq
m<\omega\rangle$ and $\langle\mathcal{N}_{X},\upsilon^{X}_{n}:n_{X}\leq
n<\omega\rangle$ be as previously discussed appropriate to our choice of
$\vec{\kappa}$ and $X$.
The first step will be to show that the least level of $K$ defining a
surjection onto $\sup(X\cap\tau_{n})$ can be realized by taking an ultrapower
of $\mathcal{N}^{X}_{n}$ for $n\geq n_{X}$. Let
$\mathcal{O}^{X}_{n}:=\operatorname{Ult}_{k_{X}}(\mathcal{N}^{X}_{n};\sigma_{X}\upharpoonright
K_{X}|\tau^{X}_{n})$ and $\tilde{\sigma}^{X}_{n}$ be the ultrapower map for
$n\geq n_{X}$. ( This ultrapower is formed using equivalence classes
$\left[f,a\right]_{\sigma_{X}}$ where
$a\in{\left[\kappa_{n}\right]}^{\mathord{<}\omega}$ and $f$ is a function with
domain $\left[\kappa^{X}_{n}\right]^{|a|}$ that is $r\Sigma_{k_{X}}$-definable
over $\mathcal{N}^{X}_{n}$.)
We do know that these models are wellfounded, in fact, the phalanx
$((K,\mathcal{O}^{X}_{n}),\kappa_{n})$ must be iterable. (This is
$(2)_{\beta}$ from [21] or [22], where
$\kappa^{X}_{n}=\aleph^{K_{X}}_{\beta}$.) This means that
$\mathcal{O}^{X}_{n}$ is an initial segment of $K$. Furthermore,
$\mathcal{O}^{X}_{n}$ is sound above $\kappa_{n}$, and
$\tilde{\sigma}^{X}_{n}(\tau^{X}_{n})=\sup(X\cap\tau_{n})$ is a cardinal there
by the choice of $\mathcal{N}^{X}_{n}$. This means that $\mathcal{O}^{X}_{n}$
is the level of $K$ we are looking for.
The next step must be to tie the sequence
$\langle\mathcal{O}^{X}_{n}:n_{X}\leq n<\omega\rangle$ to some level of $K$
projecting to $\lambda$. Our candidate is
$\mathcal{O}_{X}:=\operatorname{Ult}_{k_{X}}(\mathcal{N}_{X};\sigma_{X}\upharpoonright
K_{X}|\lambda_{X})$. Let $\tilde{\sigma}_{X}$ be the ultrapower map. By Lemma
49 and the lifting properties of our $X$ not only is $\mathcal{O}_{X}$
wellfounded, but it is an initial segment of the core model. Let
$\alpha_{X}:=(\lambda^{+})^{\mathcal{O}_{X}}$.
The last thing we need is a family of appropriate embeddings from
$\mathcal{O}^{X}_{n}$ into $\mathcal{O}_{X}$ for $n\geq n_{X}$. Define
$\pi^{X}_{n}:\mathcal{C}_{0}(\mathcal{O}^{X}_{n})\rightarrow\mathcal{C}_{0}(\mathcal{O}_{X})$:
let
$\pi^{X}_{n}(\left[f,a\right]_{\sigma_{X}})=\left[\upsilon^{X}_{n}(f)\upharpoonright{\left[\kappa^{X}_{n}\right]}^{\mathord{<}\omega},a\right]_{\sigma_{X}}$.
It is to be understood here that if $f$ is not an element of
$\mathcal{C}_{0}(\mathcal{N}^{X}_{n})$ but merely definable over it, then
$\upsilon^{X}_{n}(f)$ is the function over $\mathcal{C}_{0}(\mathcal{N}_{X})$
with the same definition and parameters moved according to $\upsilon^{X}_{n}$.
Let now $f$ be an $r\Sigma_{k_{X}}$-definable function over
$\mathcal{C}_{0}(\mathcal{N}^{X}_{n})$, $\phi$ an
$r\Sigma_{k_{X}}$-formula, and
$a\in{\left[\kappa_{n}\right]}^{\mathord{<}\omega}$.
$\displaystyle\mathcal{C}_{0}(\mathcal{O}^{X}_{n})\models\phi(\left[f,a\right]_{\sigma_{X}})$
$\displaystyle\Leftrightarrow
a\in\sigma_{X}(\\{b\in{\left[\kappa^{X}_{n}\right]}^{\mathord{<}\omega}|\mathcal{N}^{X}_{n}\models\phi(f(b))\\})$
$\displaystyle\Leftrightarrow
a\in\sigma_{X}(\\{b\in{\left[\kappa^{X}_{n}\right]}^{\mathord{<}\omega}|\mathcal{N}_{X}\models\phi(\upsilon^{X}_{n}(f)(b))\\})$
$\displaystyle\Leftrightarrow\mathcal{C}_{0}(\mathcal{O}_{X})\models\phi(\left[\upsilon^{X}_{n}(f)\upharpoonright{\left[\kappa^{X}_{n}\right]}^{\mathord{<}\omega},a\right]_{\sigma_{X}})$
This shows that $\pi^{X}_{n}$ is $r\Sigma_{k_{X}}$-elementary. Consider then
the following diagram:
$\mathcal{C}_{0}(\mathcal{N}^{X}_{n})\xrightarrow{\ \upsilon^{X}_{n}\ }\mathcal{C}_{0}(\mathcal{N}_{X})\xrightarrow{\ \tilde{\sigma}_{X}\ }\mathcal{C}_{0}(\mathcal{O}_{X})$
and
$\mathcal{C}_{0}(\mathcal{N}^{X}_{n})\xrightarrow{\ \tilde{\sigma}^{X}_{n}\ }\mathcal{C}_{0}(\mathcal{O}^{X}_{n})\xrightarrow{\ \pi^{X}_{n}\ }\mathcal{C}_{0}(\mathcal{O}_{X})$.
The diagram commutes, and all of $\upsilon^{X}_{n}$, $\tilde{\sigma}_{X}$,
$\tilde{\sigma}^{X}_{n}$ are cofinal (in $\rho_{k_{X}}(\cdot)$). Thus so is
$\pi^{X}_{n}$, which shows that it is $r\Sigma_{k_{X}+1}$-elementary. Also
note that the critical point of $\pi^{X}_{n}$ is $\mathord{\geq}\kappa_{n}$.
It then follows that $\mathcal{C}_{0}(\mathcal{O}^{X}_{n})$ is isomorphic to
$\operatorname{Hull}^{\mathcal{O}_{X}}_{k_{X}+1}(\kappa^{X}_{n}\cup\\{p_{k_{X}+1}(\mathcal{O}_{X})\\})$,
so $\sup(X\cap\tau_{n})=f^{\vec{\kappa},K}_{\alpha_{X}}(n)$ for $n\geq n_{X}$.
∎
###### Remark 51.
The last line is inaccurate, as it seems possible that $\alpha_{X}\notin
C^{\lambda,K}$ meaning $f^{\vec{\kappa},K}_{\alpha_{X}}$ might not be defined.
Nevertheless the structure $\mathcal{O}^{X}_{n}$ is definable from
$\alpha_{X}$ and $\kappa_{n}$ in $K$ which implies that the sequence
$\langle\sup(X\cap\tau_{n}):n_{X}\leq n<\omega\rangle$ is dominated by some
$f^{\vec{\kappa},K}_{\beta}$ for $\beta\in C^{\lambda,K}$.
###### Corollary 52.
In the above situation $\alpha_{X}=\sup(X\cap\lambda^{+})$.
###### Proof.
By continuity $f^{\vec{\kappa},K}_{\sup(X\cap\lambda^{+})}$ is the exact upper
bound of $\langle
f^{\vec{\kappa},K}_{\beta}:\beta<\sup(X\cap\lambda^{+})\rangle$. On the other
hand, as we know that $\vec{f}$ is a scale, by the tightness of $X$ we also
know that $\langle\sup(X\cap\tau_{n}):n<\omega\rangle$ is also an exact upper
bound for this sequence. This implies that both agree almost everywhere, but
the latter equals $f^{\vec{\kappa},K}_{\alpha_{X}}$ almost everywhere. The
desired equality then follows. ∎
Let us now move on to the second theorem. This one concerns scales on products
that concentrate on ordinals that are inaccessible in $K$. We will see that
scales on such ordinals are significantly more restricted.
###### Theorem 53.
Let $\lambda$ be a singular cardinal of countable cofinality. Let
$\langle\kappa_{n}:n<\omega\rangle$ be a cofinal sequence such that each
$\kappa_{n}$ is an inaccessible limit of cutpoints of $K$. Assume there is
some $\delta<\lambda$ such that ordinals $\beta$ with
$\operatorname{o}^{K}(\beta)\geq\delta$ are bounded in each of the
$\kappa_{n}$. Then $\prod\limits_{n<\omega}\kappa_{n}$ admits a continuous,
tree-like scale.
Let from now on $\vec{\kappa}:=\langle\kappa_{n}:n<\omega\rangle$ and
$\delta<\lambda$ be as in the statement of the theorem. As this theorem deals
with scales on ordinals which are inaccessible in $K$ we will have need of a
theorem that provides information about the possible cofinalities of such
ordinals. In general, we cannot expect these cofinalities to be high because
of the existence of Prikry forcing. The next theorem essentially states that
this is the only real obstacle. Versions of this theorem for different forms
of the core model have existed for some time, but its newest form appropriate
for the Jensen-Steel core model is due to Mitchell and Schimmerling.
###### Theorem 54 (Mitchell-Schimmerling).
Assume there is no inner model with a Woodin cardinal, and let $K$ be the
Jensen-Steel core model. Let $\alpha\geq\aleph_{2}$ be such that $\alpha$ is
regular in $K$, but $\operatorname{cof}(\alpha)<\operatorname{card}(\alpha)$.
Then $\operatorname{o}^{K}(\alpha)\geq\nu$ where
$\operatorname{cof}(\alpha)=\omega\cdot\nu$.
See [20]. Alternatively, as we only deal with linear iterations here, it is
plausible that the results from [4], even though not directly applicable, can
be mimicked to achieve a similar end.
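As an illustration, here is the simplest special case of the theorem, included only for orientation (it is an instance of the statement above, nothing more):

```latex
% Special case of Theorem 54: countable cofinality.
% If \alpha \geq \aleph_2 is regular in K but cof(\alpha) = \omega, then
% \omega = \omega \cdot 1 gives \nu = 1, hence
\operatorname{cof}(\alpha)=\omega
\ \Longrightarrow\
\operatorname{o}^{K}(\alpha)\geq 1,
% i.e. \alpha is measurable in K -- exactly the configuration produced by
% Prikry forcing, which singularizes a measurable to cofinality \omega.
```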
Let us now again consider some $X\prec(H_{\theta};\in,\ldots)$ containing the
relevant objects. In addition to its previous properties we will require that
$\operatorname{cof}(\sup(X\cap\kappa_{n}))>\delta$. Note then that, by our
assumption and Theorem 54, $\sup(X\cap\kappa_{n})$ is a singular cardinal in
$K$; this fact will be crucial.
We will once again have need of the directed system
$\langle\mathcal{N}^{X}_{n},\upsilon^{X}_{n,m}:n_{X}\leq n\leq
m<\omega\rangle$ and its limit
$\langle\mathcal{N}_{X},\upsilon^{X}_{n}:n_{X}\leq n<\omega\rangle$, but we
will require some additional properties.
###### Lemma 55.
There exist $\tilde{\alpha}^{X}_{n}<\kappa^{X}_{n}$ such that
$\operatorname{Hull}^{\mathcal{N}^{X}_{n}}_{k_{X}+1}(\tilde{\alpha}^{X}_{n}\cup\\{p_{k_{X}+1}(\mathcal{N}^{X}_{n})\\})$
is cofinal in $\kappa^{X}_{n}$.
###### Proof.
If the system is constructed as in Cases 1 and 2, then there is a single
$\alpha$ such that $\mathcal{N}^{X}_{n}$ (which is independent of $n$) is
sound above $\alpha$, so the conclusion follows.
Consider then that the system is constructed as in Case 3. Pick $n\geq n_{X}$.
Recall that $\mathcal{N}^{X}_{n}=\mathcal{M}^{X}_{\eta^{X}_{n}}$ and
$\gamma+1$ is the last drop below $\eta^{X}_{n}$. Note that by definition of
$\eta^{X}_{n}$ all critical points before that point are less than
$\kappa^{X}_{n}$. There are two cases.
Case 3.1: $\eta^{X}_{n}=\bar{\eta}+1$.
In that case as $\kappa^{X}_{n}$ is a limit cardinal we must have that
$\operatorname{lh}(E^{X}_{\bar{\eta}})<\kappa^{X}_{n}$. Let
$\tilde{\alpha}^{X}_{n}<\kappa^{X}_{n}$ be such that
$\mathcal{M}^{X}_{\eta^{X}_{n}}$ is sound above $\tilde{\alpha}^{X}_{n}$.
Case 3.2: $\eta^{X}_{n}$ is a limit ordinal.
Let $\gamma<\beta<\eta^{X}_{n}$ be such that
$\iota^{X}_{\beta,\eta^{X}_{n}}(\bar{\kappa})=\kappa^{X}_{n}$ for some
$\bar{\kappa}\in\mathcal{M}^{X}_{\beta}$. We must have that
$\iota^{X}_{\beta,\xi}(\bar{\kappa})\geq\operatorname{crit}(\iota^{X}_{\xi,\eta^{X}_{n}})$
for all $\xi\in\left[\beta,\eta^{X}_{n}\right)$. The key is to consider when
equality holds in the above inequality.
Let us assume towards a contradiction that
$\iota^{X}_{\beta,\xi}(\bar{\kappa})=\operatorname{crit}(\iota^{X}_{\xi,\eta^{X}_{n}})$
for all $\xi$ in some set $A$ unbounded in $\eta^{X}_{n}$. For
$\nu\in\lim(A)\cap\eta^{X}_{n}$ we have
$\operatorname{crit}(\iota^{X}_{\nu,\eta^{X}_{n}})\geq\sup\limits_{\xi\in
A\cap\nu}\operatorname{crit}(\iota^{X}_{\xi,\eta^{X}_{n}})=\sup\limits_{\xi\in
A\cap\nu}\iota^{X}_{\beta,\xi}(\bar{\kappa})=\iota^{X}_{\beta,\nu}(\bar{\kappa})$
and hence $\nu\in A$. But then
$B:=\\{\iota^{X}_{\beta,\xi}(\bar{\kappa})|\xi\in A\\}$ is a club of
indiscernibles in $\kappa^{X}_{n}$. As $\sigma_{X}$ is continuous at points of
cofinality $\omega$, $C:={\sigma_{X}}"\left[{B}\right]$ is an $\omega$-club in
$\sup(X\cap\kappa_{n})$. As the latter was singular there must exist a club
$D\subset\sup(X\cap\kappa_{n})$ consisting of $K$-singulars. But $C\cap
D\neq\emptyset$, and $C$ consists of $K$-regulars. Contradiction!
We conclude that
$\operatorname{crit}(\iota^{X}_{\xi,\eta^{X}_{n}})<\iota^{X}_{\beta,\xi}(\bar{\kappa})$
for all $\xi\geq\nu$ for some $\nu\in\left[\beta,\eta^{X}_{n}\right)$. This
means that $\iota^{X}_{\nu,\eta^{X}_{n}}$ is continuous at
$\iota^{X}_{\beta,\nu}(\bar{\kappa})$. We then finish the argument by noticing
that $\mathcal{M}^{X}_{\nu}$ is $(k_{X}+1)$-sound above
$\operatorname{crit}(\iota^{X}_{\nu,\eta^{X}_{n}})$, and thus
$\operatorname{Hull}^{\mathcal{M}^{X}_{\eta^{X}_{n}}}_{k_{X}+1}(\operatorname{crit}(\iota^{X}_{\nu,\eta^{X}_{n}})\cup\\{p_{k_{X}+1}(\mathcal{M}^{X}_{\eta^{X}_{n}})\\})$
is cofinal in $\kappa^{X}_{n}$. ∎
###### Proof of Theorem 53.
We want to show that for some
$\vec{\alpha}_{X}:=\langle\alpha^{X}_{n}:n_{X}\leq n<\omega\rangle\in X$ the
sequence $\langle\sup(X\cap\kappa_{n}):n<\omega\rangle$ agrees almost
everywhere with $g^{\vec{\kappa},K,\vec{\alpha}_{X}}_{\alpha_{X}}$. (Implicit
here is that $\alpha_{X}$ will be adequate.)
Recall the structures $\mathcal{O}^{X}_{n}$ from the proof of the preceding
theorem. We will need a slightly different structure here. Let
$(\mathcal{O}^{X}_{n})^{*}:=\operatorname{Ult}_{k_{X}}(\mathcal{N}^{X}_{n};\sigma_{X}\upharpoonright
K_{X}|\kappa^{X}_{n})$. (This ultrapower is formed using equivalence classes
$\left[f,a\right]_{\sigma_{X}}$ where
$a\in{\left[\sup(X\cap\kappa_{n})\right]}^{\mathord{<}\omega}$ and $f$ is a
function with domain $\left[\gamma\right]^{|a|}$ where $\gamma<\kappa_{n}$ is
a cardinal with $a\subset\sigma_{X}(\gamma)$ and $f$ is
$r\Sigma_{k_{X}}$-definable over $\mathcal{N}^{X}_{n}$. Note that functions
with different domains can be compared by adding dummy values.)
Let $\bar{\sigma}^{X}_{n}$ be the ultrapower map. Note that
$\bar{\sigma}^{X}_{n}$ maps $\kappa^{X}_{n}$ cofinally into
$\sup(X\cap\kappa_{n})$, so
$\operatorname{Hull}^{(\mathcal{O}^{X}_{n})^{*}}_{k_{X}+1}(\alpha^{X}_{n}\cup\\{p_{k_{X}+1}((\mathcal{O}^{X}_{n})^{*})\\})$
is cofinal in $\sup(X\cap\kappa_{n})$ where
$\alpha^{X}_{n}:=\bar{\sigma}^{X}_{n}(\tilde{\alpha}^{X}_{n})$.
The phalanx $((K,(\mathcal{O}^{X}_{n})^{*}),\sup(X\cap\kappa_{n}))$ is
iterable as $\mathcal{C}_{0}((\mathcal{O}^{X}_{n})^{*})$ can be mapped into
$\mathcal{C}_{0}(\mathcal{O}^{X}_{n})$ by a map with critical point
$\sup(X\cap\kappa_{n})$, so $(\mathcal{O}^{X}_{n})^{*}$ is an initial segment
of $K$; in fact, it is the least one to define a witness to the singularity of
$\sup(X\cap\kappa_{n})$.
Just as before we can map $\mathcal{C}_{0}((\mathcal{O}^{X}_{n})^{*})$ into
$\mathcal{C}_{0}(\mathcal{O}_{X})$, so
$g^{\vec{\kappa},K,\vec{\alpha}_{X}}_{\alpha_{X}}(n)=\sup(X\cap\kappa_{n})$
for all $n\geq n_{X}$. We would like to have $\vec{\alpha}_{X}\in X$. This is
obvious if $X$ is $\omega$-closed. If $X$ is merely internally approachable
then we can still find some $\vec{\alpha}^{\prime}\in
X\cap\prod\limits_{n<\omega}\sup(X\cap\kappa_{n})$ that dominates
$\vec{\alpha}_{X}$ almost everywhere. Then
$g^{\vec{\kappa},K,\vec{\alpha}_{X}}_{\alpha_{X}}$ and
$g^{\vec{\kappa},K,\vec{\alpha}^{\prime}}_{\alpha_{X}}$ agree almost
everywhere by Corollary 39, so we can replace $\vec{\alpha}_{X}$ with
$\vec{\alpha}^{\prime}$.
By Fodor’s Lemma we then have a stationary set of $X$ and a single
$\vec{\alpha}$ such that $g^{\vec{\kappa},K,\vec{\alpha}}_{\alpha_{X}}$ agrees
with $\langle\sup(X\cap\kappa_{n}):n<\omega\rangle$ almost everywhere. This
then shows that $\langle g^{\vec{\kappa},K,\vec{\alpha}}_{\alpha}:\alpha\in
C\cap\operatorname{cof}(\mathord{>}\omega)\rangle$ is a scale. ∎
We are going to finish by showing how to weaken the assumption of Theorem 50
while achieving the same result. It is here that we will make use of the
sequence $\langle f^{\vec{\kappa},K,\vec{\alpha}}_{\alpha}:\alpha\in
C^{\lambda,K}\rangle$.
We say a cardinal $\kappa\in K$ is a weak cutpoint if
$\operatorname{crit}(E)<\kappa$ implies
$\operatorname{lh}(E)<(\kappa^{+})^{K}$ for all extenders $E$ on the
$K$-sequence.
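Side by side, the two notions read as follows (both taken directly from the definitions above); in particular every cutpoint is a weak cutpoint, so the hypothesis here is indeed weaker than that of Theorem 50:

```latex
% For all extenders E on the K-sequence:
% cutpoint \kappa:       crit(E) < \kappa  =>  lh(E) < \kappa
% weak cutpoint \kappa:  crit(E) < \kappa  =>  lh(E) < (\kappa^+)^K
\operatorname{crit}(E)<\kappa\ \Rightarrow\ \operatorname{lh}(E)<\kappa
\qquad\text{vs.}\qquad
\operatorname{crit}(E)<\kappa\ \Rightarrow\ \operatorname{lh}(E)<(\kappa^{+})^{K}.
% The weak version permits extenders with critical point below \kappa
% and length in the interval [\kappa, (\kappa^+)^K).
```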
###### Theorem 56.
Let $\lambda$ be a singular cardinal of countable cofinality. Let
$\langle\kappa_{n}:n<\omega\rangle$ be a sequence of weak cutpoints cofinal in
$\lambda$. Let $\tau_{n}:=(\kappa^{+}_{n})^{K}$. Then
$\prod\limits_{n<\omega}\tau_{n}$ carries a continuous, tree-like scale.
###### Lemma 57.
There exist some $n_{X},k_{X}<\omega$, a sequence of ordinals
$\langle\tilde{\alpha}^{X}_{n}:n_{X}\leq n<\omega\rangle$, a sequence of
models $\langle\mathcal{N}^{X}_{n}:n_{X}\leq n<\omega\rangle$, and maps
$\langle\upsilon^{X}_{n,m}:n_{X}\leq n\leq m<\omega\rangle$ such that:
* •
$((\kappa^{X}_{n})^{+})^{\mathcal{N}^{X}_{n}}=\tau^{X}_{n}$ and
$\mathcal{N}^{X}_{n}$ agrees with $K_{X}$ up to $\tau^{X}_{n}$ for all $n\geq
n_{X}$;
* •
$\mathcal{N}^{X}_{n}$ is $(k_{X}+1)$-sound above $\kappa^{X}_{n}$ relative to
$p_{k_{X}+1}(\mathcal{N}^{X}_{n}){}^{\smallfrown}\tilde{\alpha}^{X}_{n}$ for
all $n\geq n_{X}$;
* •
$\upsilon^{X}_{n,m}:\mathcal{C}_{0}(\mathcal{N}^{X}_{n})\rightarrow\mathcal{C}_{0}(\mathcal{N}^{X}_{m})$
is $r\Sigma_{k_{X}+1}$-elementary for all $m\geq n\geq n_{X}$;
* •
$\operatorname{crit}(\upsilon^{X}_{n,m})\geq\max\\{\kappa^{X}_{n},\tilde{\alpha}^{X}_{n}+1\\}$
for all $m\geq n\geq n_{X}$.
###### Proof.
This proof goes through the same cases as the proof of Lemma 46; in fact, many
of the cases will be identical. (In those cases we can take
$\tilde{\alpha}^{X}_{n}$ to be $0$.) In the interest of time we shall only
deal with the case that is unique to this situation.
Let us assume that
$\eta_{X}:=\sup(\\{\beta<\zeta_{X}|\operatorname{lh}(E^{X}_{\beta})<\lambda\\})$
is a limit ordinal. Let $\gamma+1$ be the last drop in the interval
$\left(0,\eta_{X}\right)$. Let $k_{X}$ be minimal such that
$\rho_{k_{X}+1}((\mathcal{M}^{X}_{\gamma+1})^{*})\leq\operatorname{crit}(E^{X}_{\gamma})$.
Let $n_{X}$ be minimal such that
$\kappa^{X}_{n_{X}}\geq\operatorname{lh}(E^{X}_{\gamma})$. Let
$\eta^{X}_{n}<\eta_{X}$ be minimal such that
$\operatorname{crit}(E^{X}_{\eta^{X}_{n}})\geq\kappa^{X}_{n}$ for $n\geq
n_{X}$.
Let then $\mathcal{N}^{X}_{n}:=\mathcal{M}^{X}_{\eta^{X}_{n}}$ and
$\upsilon^{X}_{n,m}:=\iota^{X}_{\eta^{X}_{n},\eta^{X}_{m}}$ for $m\geq n\geq
n_{X}$. Assume now that $n\geq n_{X}$ is such that
$\eta^{X}_{n}=\tilde{\eta}^{X}_{n}+1$ and
$\operatorname{gen}(E^{X}_{\tilde{\eta}^{X}_{n}})\geq\kappa_{n}$; otherwise
the argument proceeds just as in the proof of Lemma 46 (with
$\tilde{\alpha}^{X}_{n}=0$).
First note then
$\operatorname{gen}(E^{X}_{\tilde{\eta}^{X}_{n}})<\tau^{X}_{n}$ as otherwise
$\kappa^{X}_{n}$ could not be a weak cutpoint by the initial segment
condition. Moreover, it then follows that $E^{X}_{\tilde{\eta}^{X}_{n}}$ has a
largest generator as otherwise
$\operatorname{gen}(E^{X}_{\tilde{\eta}^{X}_{n}})\in\left(\kappa^{X}_{n},\tau^{X}_{n}\right)$
must be a cardinal in $\mathcal{N}^{X}_{n}$.
Let $\tilde{\alpha}^{X}_{n}$ be that largest generator. We will be done if we
can show that $\kappa^{X}_{n}\cup\\{\tilde{\alpha}^{X}_{n}\\}$ generates the
whole ultrapower. Let then
$\tilde{\mathcal{M}}:=\operatorname{Ult}(\mathcal{M}^{X}_{\tilde{\eta}^{X}_{n}};E^{X}_{\tilde{\eta}^{X}_{n}}\upharpoonright\kappa^{X}_{n}\cup\\{\tilde{\alpha}^{X}_{n}\\})$
and
$\tilde{\iota}:\mathcal{C}_{0}(\tilde{\mathcal{M}})\rightarrow\mathcal{C}_{0}(\mathcal{N}^{X}_{n})$
be the canonical embedding.
We have that
$\tilde{\alpha}^{X}_{n}\in\left(\kappa^{X}_{n},\tau^{X}_{n}\right)$ is in the
range of $\tilde{\iota}$, thus so is
$\kappa^{X}_{n}=\operatorname{card}^{\mathcal{N}^{X}_{n}}(\tilde{\alpha}^{X}_{n})$
and some surjection from $\kappa^{X}_{n}$ onto $\tilde{\alpha}^{X}_{n}$. Then
$\tilde{\alpha}^{X}_{n}\subset\operatorname{ran}{\tilde{\iota}}$ and thus so
are all of the other generators of $E^{X}_{\tilde{\eta}^{X}_{n}}$. ∎
###### Remark 58.
Note that in the “special” case of Lemma 57, unlike in Remark 48,
$\mathcal{N}^{X}_{n}$ is not equal to the mouse $\mathcal{P}_{\gamma}$ (where
$\kappa_{n}=\aleph^{K_{X}}_{\gamma}$) from [21], but it is equal to
$\mathcal{Q}_{\gamma}$. To see this we must first realize that
$\mathcal{P}_{\gamma}$ must be an initial segment of
$\mathcal{M}^{X}_{\tilde{\eta}^{X}_{n}}$ as in this case
$E^{X}_{\tilde{\eta}^{X}_{n}}$ has generators $\mathord{\geq}\kappa_{n}$.
As the Dodd projectum of $E^{X}_{\tilde{\eta}^{X}_{n}}$ is below $\kappa_{n}$
we have, in fact,
$\mathcal{P}_{\gamma}=\mathcal{M}^{X}_{\tilde{\eta}^{X}_{n}}|\operatorname{lh}(E^{X}_{\tilde{\eta}^{X}_{n}})$.
Note though that lifting this mouse by $\sigma_{X}$ would create a proto
mouse. Hence we must move to the mouse $\mathcal{Q}_{\gamma}$ which is formed
by applying the extender $E^{X}_{\tilde{\eta}^{X}_{n}}$ using the usual
iteration tree rules. Hence the resulting mouse must be equal to
$\mathcal{M}^{X}_{\tilde{\eta}^{X}_{n}+1}=\mathcal{N}^{X}_{n}$.
###### Proof of Theorem 56.
Let $\langle\mathcal{N}^{X}_{n}:n_{X}\leq n<\omega\rangle$,
$\langle\upsilon^{X}_{n,m}:n_{X}\leq n\leq m<\omega\rangle$,
and $\langle\tilde{\alpha}^{X}_{n}:n_{X}\leq n<\omega\rangle$ be as in the lemma.
We will find some $\alpha_{X}$ and
$\vec{\alpha}_{X}:=\langle\alpha^{X}_{n}:n_{X}\leq n<\omega\rangle$ such that
$\sup(X\cap\tau_{n})=f^{\vec{\kappa},K,\vec{\alpha}}_{\alpha_{X}}(n)$ for all
$n\geq n_{X}$. In fact, $\alpha^{X}_{n}=\sigma_{X}(\tilde{\alpha}^{X}_{n})$
will do. A priori $\vec{\alpha}_{X}$ will depend on $X$ but we will be able to
deal with that by pressing down just as in the proof of Theorem 53.
We can mostly proceed as in the proof of Theorem 50. We will form
$\mathcal{O}^{X}_{n}$ and $\mathcal{O}_{X}$ as before, and generate embeddings
between them by lifting $\upsilon^{X}_{n}$. Note that $\upsilon^{X}_{n}$ will
not move $\tilde{\alpha}^{X}_{n}$ as iteration maps do not move generators. So
neither will its lift move $\alpha^{X}_{n}$. Thus
$\mathcal{C}_{0}(\mathcal{O}^{X}_{n})$ will be isomorphic to
$\operatorname{Hull}^{\mathcal{O}_{X}}_{k_{X}+1}(\kappa_{n}\cup\\{p_{k_{X}+1}(\mathcal{O}_{X}){}^{\smallfrown}\alpha^{X}_{n}\\})$
as required.
As before the fact that the phalanx $\langle\langle
K,\mathcal{O}^{X}_{n}\rangle,\kappa_{n}\rangle$ is iterable follows from the
covering lemma, noticing that in this case we might have to consider the mouse
$\mathcal{Q}_{\beta}$ rather than $\mathcal{P}_{\beta}$, as explained above.
Fortunately, this does not change anything about the rest of the argument. We
skip further detail. ∎
###### Proof of Theorem 5.
We have different cases depending on whether the $\kappa_{i}$ are limit
cardinals or successor cardinals in the core model. Let us first assume that
all $\kappa_{i}$ share a type. If that shared type is limit cardinals, then we
can use Theorem 53 to finish. If that type is successor cardinals, we have two
cases: if $\bar{\kappa}_{i}$, the $K$-predecessor of $\kappa_{i}$, is
measurable, then it must be a cutpoint by the smallness assumption, and
therefore we can use Theorem 50 to finish; if it is not, then it must be a
weak cutpoint and thus we can use Theorem 56 to finish.
In cases of mixed type, divide the sequence into three parts of pure type.
Each of these parts has a scale by the above. These individual scales can
then be integrated. This works as individual elements of the different scales
can be tied to some common ordinal $\mathord{<}\lambda^{+}$. ∎
## 5 Open questions
We conclude this work with a discussion on further possible developments, and
open questions.
1. 1.
Consider the following natural strengthening of the ABSP with respect to a sequence
of regular cardinals $\vec{\tau}=\langle\tau_{n}\mid n<\omega\rangle$ with
$\lambda=\cup_{n}\tau_{n}$: For every sufficiently large regular cardinal
$\theta$ and internally approachable structure
$N\prec(H_{\theta},\in,\vec{\tau})$, there is some $m<\omega$, so that for
every strictly increasing sequence $d_{0},\dots,d_{k}\in\omega\setminus m$ and
$F\in N$, $F:[\lambda]^{k}\to\lambda$, if
$F(\chi_{N}(\tau_{d_{1}}),\dots,\chi_{N}(\tau_{d_{k}}))<\tau_{d_{0}}$
then
$F(\chi_{N}(\tau_{d_{1}}),\dots,\chi_{N}(\tau_{d_{k}}))\in N.$
Is it consistent?
2. 2.
We saw in Section 2.1 that from the same large cardinal assumptions of Theorem
29, it is consistent that ABSP holds with respect to a sequence
$\langle\tau_{n}\mid n<\omega\rangle$ so that $\tau_{n}=\aleph_{2n}$ for all
$n<\omega$. Is ABSP consistent with respect to a cofinite sequence of the
$\aleph_{n}$’s?
3. 3.
The definitions of Tree-like scales, Essentially Tree-like scales, AFSP, and
ABSP naturally extend to uncountable sequences of cardinals
$\langle\tau_{i}\mid i<\rho\rangle$, $\rho>\aleph_{0}$ regular. Are those
principles consistent? If so, what is their consistency strength?
4. 4.
Another natural extension of the principles AFSP and ABSP is to require the
appropriate principle to hold for any elementary substructure
$N\prec(H_{\theta},\in,\vec{\tau})$. Is it consistent?
5. 5.
Is there a version of Theorem 6 for Neeman-Steel long extender mice?
6. 6.
Pereira showed in [25] that it is consistent relative to the existence of a
supercompact cardinal that there exist products
$\prod\limits_{n<\omega}\tau_{n}$ carrying a continuous tree-like scale of
length greater than $\sup(\langle\tau_{n}\rangle_{n})^{+}$. Can the same be
achieved from weaker large cardinal assumptions at the level of strong
cardinals?
## References
* [1] Omer Ben-Neria. On singular stationarity II. Journal of Symbolic Logic, 84(1):320–342, 2019.
* [2] Omer Ben-Neria, Moti Gitik, Itay Neeman, and Spencer Unger. On the powersets of singular cardinals in HOD. Proceedings of the American Mathematical Society, 148:1777–1789, 2020.
* [3] Omer Ben-Neria and Spencer Unger. Homogeneous changes in cofinalities with applications to HOD. Journal of Mathematical Logic, 17(2):1750007, 2017.
* [4] Sean D. Cox. Covering theorems for the core model, with an application to stationary set reflection. Annals of Pure and Applied Logic, 161:66 – 93, 2009.
* [5] James Cummings. Continuous tree like scales. Central European Journal of Mathematics, 8:314 – 318, 2010.
* [6] James Cummings, Matthew Foreman, and Menachem Magidor. Squares, scales and stationary reflection. Journal of Mathematical Logic, 01(01):35–98, 2001.
* [7] Hans-Dieter Donder, R. B. Jensen, and L. Stanley. Condensation-coherent global square systems. In Anil Nerode and Richard A. Shore, editors, Recursion Theory, 1985.
* [8] Matthew Foreman and Menachem Magidor. Large cardinals and definable counterexamples to the continuum hypothesis. Annals of Pure and Applied Logic, 76(1):47 – 97, 1995.
* [9] Matthew Foreman, Menachem Magidor, and Saharon Shelah. Martin’s maximum, saturated ideals, and non-regular ultrafilters. Part I. Annals of Mathematics, 127(1):1–47, 1988.
* [10] Moti Gitik. Short extenders forcing II. Preprint, online at http://www.math.tau.ac.il/~gitik/shortextendersforcing2-2017.pdf.
* [11] Moti Gitik. Changing cofinalities and the nonstationary ideal. Israel Journal of Mathematics, 56:280–314, 1986.
* [12] Moti Gitik. On a question of Pereira. Archive for Mathematical Logic, 47:53–64, 2008.
* [13] Moti Gitik. Prikry-type forcings. In Matthew Foreman and Akihiro Kanamori, editors, Handbook of Set Theory, pages 1351–1447. Springer Netherlands, Dordrecht, 2010.
* [14] Moti Gitik. Short extenders forcing I. Journal of Mathematical Logic, 12(2), 2013.
* [15] Moti Gitik and Menachem Magidor. The singular cardinal hypothesis revisited. In Haim Judah, Winfried Just, and Hugh Woodin, editors, Set Theory of the Continuum, pages 243 – 279, 1992.
* [16] Thomas Jech. Set Theory. Springer Verlag, 3rd edition, 2002.
* [17] Ronald Jensen. A new fine structure. handwritten notes, online at https://www.mathematik.hu-berlin.de/~raesch/org/jensen.html.
* [18] Ronald Jensen and John Steel. K without the measurable. The Journal of Symbolic Logic, 78(3):708 – 734, 2013.
* [19] Peter Koepke. The consistency strength of the free subset problem for $\omega_{\omega}$. The Journal of Symbolic Logic, 49(4):1198 – 1204, 1984.
* [20] William Mitchell and Ernest Schimmerling. Covering at limit cardinals of K. submitted, online preprint at http://www.math.cmu.edu/~eschimme/Measurables-in-K.pdf.
* [21] William Mitchell, Ernest Schimmerling, and J Steel. The covering lemma up to a woodin cardinal. Annals of Pure and Applied Logic, 84(2):219 – 255, 1997.
* [22] William Mitchell and Ernest Schimmerling. Weak covering without countable closure. Mathematical Research Letters, 2:595–609, 1995.
* [23] Luis Pereira. Combinatoire des cardinaux singuliers et structures PCF. PhD thesis, University of Paris VII, 2007.
* [24] Luis Pereira. The PCF conjecture and large cardinals. The Journal of Symbolic Logic, 73(2):674–688, 2008.
* [25] Luis Pereira. Morasses, semimorasses and supercompact ultrafilters. Acta Mathematica Hungarica, 152:257 – 268, 2017.
* [26] Saharon Shelah. Cardinal Arithmetic. Oxford Science Publications, 1994.
* [27] Saharon Shelah. PCF and infinite free subsets in an algebra. Archive for Mathematical Logic, 41:321–359, 2002.
* [28] John Steel. The Core Model Iterability Problem, volume 8 of Lecture Notes in Logic. Springer Verlag, 1996.
* [29] John Steel. An outline of inner model theory. In M. Foreman and A. Kanamori, editors, Handbook of Set Theory, pages 1595 – 1684. Springer Netherlands, 2010.
* [30] John Steel and William Mitchell. Fine Structure and Iteration Trees, volume 3 of Lecture Notes in Logic. Springer Verlag, 1994.
* [31] Philip Welch. A question on free subsets in internally approachable models. preprint.
* [32] Martin Zeman. Inner Models and Large Cardinals. De Gruyter, 2002.
11institutetext: Technische Universität Darmstadt, Germany
11email: <EMAIL_ADDRESS>
22institutetext: Johannes Kepler Universität Linz, Austria
22email: <EMAIL_ADDRESS>
# Revisiting Non-Specific Syndromic Surveillance
Moritz Kulessa 11 Eneldo Loza Mencía 11 Johannes Fürnkranz 22
###### Abstract
Infectious disease surveillance is of great importance for the prevention of
major outbreaks. Syndromic surveillance aims at developing algorithms which
can detect outbreaks as early as possible by monitoring data sources that
capture the occurrences of a certain disease. Recent research mainly
focuses on the surveillance of specific, known diseases, putting the focus on
the definition of the disease pattern under surveillance. Until now, only
little effort has been devoted to what we call _non-specific_ syndromic
surveillance, i.e., the use of all available data for detecting any kind of
outbreaks, including infectious diseases which are unknown beforehand. In this
work, we revisit published approaches for non-specific syndromic surveillance
and present a set of simple statistical modeling techniques which can serve as
benchmarks for more elaborate machine learning approaches. Our experimental
comparison on established synthetic data and real data in which we injected
synthetic outbreaks shows that these benchmarks already achieve very
competitive results and often outperform more elaborate algorithms.
###### Keywords:
Syndromic Surveillance Outbreak Detection Anomaly Detection
## 1 Introduction
The early detection of infectious disease outbreaks is of great significance
for public health. The spread of such outbreaks could be diminished
tremendously by applying control measures as early as possible, which indeed
can save lives and reduce suffering. For that purpose, _syndromic
surveillance_ has been introduced which aims to identify illness clusters
before diagnoses are confirmed and reported to public health agencies [6].
The fundamental concept of syndromic surveillance is to define indicators for
a particular infectious disease on the given data, also referred to as
_syndromes_ , which are monitored over time to be able to detect unexpectedly
high numbers of infections which might indicate an outbreak of that disease.
Syndromic data can be obtained from _clinical_ data sources (e.g., diagnoses
in an emergency department), which directly measure the symptoms of
individuals, as well as _alternative_ data sources (e.g., internet-based
health inquiries), which indirectly capture the presence of a disease [6].
In general, the definition of syndromes is a challenging task since symptoms
are often shared by different diseases and a particular disease can have
different disease patterns in the early phase of an infection. Moreover, this
kind of filtering is a highly handcrafted approach and only allows the monitoring of
known infectious diseases. Rather than developing highly specialized
algorithms which are based on a specific disease and assume particular
characteristics of outbreak shapes [8], we argue that the task of outbreak
detection should be viewed as a general anomaly detection problem where an
outbreak alarm is triggered if the distribution of the incoming data changes
in an unforeseen and unexpected way. Therefore, we distinguish between
_specific_ syndromic surveillance, where factors related to a specific disease
are monitored, and _non-specific_ syndromic surveillance, where general,
universal characteristics of the stream of data are monitored for anomalies.
While specific syndromic surveillance is a well-studied research area, we
found that only little research has been devoted to non-specific syndromic
surveillance with only very few algorithms available. In particular, the close
relation to anomaly detection motivated us to investigate the problem of non-
specific syndromic surveillance from a machine learning perspective and to
make the task more approachable for the anomaly detection community.
In this paper, we revisit algorithms for non-specific syndromic surveillance
and compare them to a broad range of anomaly detection algorithms. Since
previous works on non-specific syndromic surveillance have devoted little
effort to implementing baselines, we propose a set of benchmarks relying on
simple statistical assumptions which have nonetheless been widely used in
specific syndromic surveillance. We experimentally compare the methods on an
established synthetic dataset [3, 11] and real data from a German emergency
department in which we injected synthetic outbreaks. Our results demonstrate
that the simple statistical approaches, which have not been considered in
previous works, are quite effective and often can outperform more elaborate
machine learning algorithms.
## 2 Non-Specific Syndromic Surveillance
### 2.1 Problem Definition
Syndromic data can be seen as a constant stream of instances of a population
$\mathcal{C}$. Each instance $\mathbf{c}\in\mathcal{C}$ is represented by a
set of attributes $\mathcal{A}=\\{A_{1},A_{2},\ldots,A_{m}\\}$ where each
attribute can be either categorical (e.g., gender), continuous (e.g., age) or
text (e.g., chief complaint). Following the notation of Wong et al. [11], we
refer to these attributes as _response attributes_. To be able to detect
changes over time, instances are grouped together according to pre-specified
time slots (e.g., all patients arriving at the emergency department in one
day). Hence, the instances for a specific time slot $t$ are denoted as
$\mathcal{C}(t)\subset\mathcal{C}$.
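This grouping of the stream into time slots can be sketched in a few lines; the `(slot_id, patient)` record layout below is our own assumption, not prescribed by the paper:

```python
from collections import defaultdict

def group_by_slot(stream):
    """Partition a stream of (slot_id, patient) records into the sets
    C(t): one list of patient records per time slot (e.g., per day)."""
    slots = defaultdict(list)
    for slot_id, patient in stream:
        slots[slot_id].append(patient)
    return dict(slots)
```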
In addition, each group $\mathcal{C}(t)$ is associated with an environmental
setting $\mathbf{e}(t)\in E_{1}\times E_{2}\times\ldots\times E_{k}$ where
$\mathcal{E}=\\{E_{1},E_{2},\ldots,E_{k}\\}$ is a set of _environmental
attributes_. Environmental attributes are independent of the response
attributes and represent external factors which might have an influence on the
distribution of instances $\mathcal{C}(t)$ (e.g., during the winter, flu-like
symptoms are more frequent). In particular, a specific characteristic of
syndromic data is _seasonality_ , in machine learning also known as _cyclic
drift_ [10]. Environmental variables can help the algorithm to adapt to this
kind of concept drift. Thus, the information available for time slot $t$ can
be represented by the tuple $(\mathcal{C}(t),\mathbf{e}(t))$ and the
information about prior time slots can be denoted as
$\mathcal{H}=((\mathcal{C}(1),\mathbf{e}(1)),\ldots,(\mathcal{C}(t-1),\mathbf{e}(t-1)))$.
The main goal of non-specific syndromic surveillance is to detect anomalies in
the set $\mathcal{C}(t)$ of the current time slot $t$ w.r.t. the previous time
slots $\mathcal{H}$ as potential indicators of an infectious disease outbreak.
Therefore, the history $\mathcal{H}$ is used to fit a model
$f_{\mathcal{H}}(\mathbf{e}(t),\mathcal{C}(t))$ which is able to generate a
score for time slot $t$, representing the likelihood of being in an outbreak.
Viewed from the perspective of specific syndromic surveillance, the non-
specific setting can be seen as the monitoring of all possible syndromes at
the same time. The set of all possible syndromes can be defined as
$\displaystyle\mathcal{S}=\left\\{\prod_{i\in\mathcal{I}}A_{i}\mid
A_{i}\in\mathcal{A}\wedge\mathcal{I}\subseteq\\{1,2,\ldots,m\\}\wedge|\mathcal{I}|\geq
1\right\\}$
where $\prod_{i\in\mathcal{I}}A_{i}$ for $|\mathcal{I}|=1$ is defined as
$\\{\\{a\\}\mid a\in A\wedge A\in\mathcal{A}\\}$. In addition, we denote
$\mathcal{S}_{\leq n}=\\{s\mid s\in\mathcal{S}\wedge|s|\leq n\\}$ as the set
of all possible syndromes having a maximum of $n$ conditions and
$\mathcal{H}_{s}=(s(1),s(2),\ldots,s(t-1))$ as the time series of counts for a
particular syndrome $s\in\mathcal{S}$.
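As an illustration, the counts for all syndromes in $\mathcal{S}_{\leq 2}$ within one time slot can be obtained by enumerating attribute-value combinations. This is a minimal sketch assuming patients are given as attribute-to-value dictionaries (our own representation):

```python
from itertools import combinations
from collections import Counter

def syndrome_counts(patients, max_conditions=2):
    """Count, for one time slot, every syndrome with at most
    `max_conditions` attribute-value conditions (the set S_{<=n})."""
    counts = Counter()
    for patient in patients:  # each patient: dict attribute -> value
        items = sorted(patient.items())
        for k in range(1, max_conditions + 1):
            for combo in combinations(items, k):
                counts[combo] += 1
    return counts

# Toy slot with two patients: the syndrome {fever=yes} occurs twice,
# the two-condition syndrome {fever=yes, gender=m} once.
slot = [{"gender": "m", "fever": "yes"},
        {"gender": "f", "fever": "yes"}]
c = syndrome_counts(slot)
```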
### 2.2 Evaluation
To evaluate a data stream it is split into two parts, namely a _training part_
, containing the first time slots which are only used for training, and a
_test part_ , which contains the remaining time slots of the data stream. The
evaluation is performed on the test part incrementally which means that for
evaluating each time slot $t$ the model will be newly fitted on the complete
set of previously observed data points
$\mathcal{H}=((\mathcal{C}(1),\mathbf{e}(1)),\ldots,(\mathcal{C}(t-1),\mathbf{e}(t-1)))$.
Alarms raised during an outbreak are considered as true positives while all
other raised alarms are considered as false positives.
For measuring the performance, we rely on the _activity monitor operating
characteristic (AMOC)_ [4]. AMOC can be seen as an adaptation of the _receiver
operating characteristic_ in which the true positive rate is replaced by the
_detection delay_ , i.e., the number of time points until an outbreak has been
first detected by the algorithm. In case the algorithm does not raise an alarm
during the period of an outbreak, the detection delay is equal to the length
of the outbreak. Moreover, for syndromic surveillance we are interested in a
very low false alarm rate for the algorithms and therefore only report the
partial area under the AMOC curve for a false alarm rate of less than $5\%$,
which we refer to as $AAUC_{5\%}$. Note that, contrary to conventional AUC
values, in this case lower values represent better results. Since one data
stream normally does not contain enough outbreaks to draw conclusions, the
evaluation is
usually performed on a set of data streams. To obtain a final score for the
set, we take the average over the computed $AAUC_{5\%}$ results which are
computed on each data stream.
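This evaluation can be sketched as follows for a single test stream with one injected outbreak. It is a minimal reading of the AMOC construction; the exact delay convention of [4] may differ slightly (here an alarm on the first outbreak day counts as delay $0$):

```python
def amoc_points(scores, outbreak_start, outbreak_len):
    """Compute (false alarm rate, detection delay) pairs for every
    score threshold on one test stream with a single known outbreak.
    Missing the outbreak entirely yields the full outbreak length."""
    scores = list(scores)
    in_outbreak = [outbreak_start <= i < outbreak_start + outbreak_len
                   for i in range(len(scores))]
    points = []
    for thr in sorted(set(scores)):
        alarms = [s >= thr for s in scores]
        normal = [a for a, o in zip(alarms, in_outbreak) if not o]
        far = sum(normal) / len(normal)   # false alarm rate
        hits = [i for i, (a, o) in enumerate(zip(alarms, in_outbreak))
                if a and o]
        delay = (hits[0] - outbreak_start) if hits else outbreak_len
        points.append((far, delay))
    return sorted(points)

def partial_aauc(points, max_far=0.05):
    """Trapezoidal area under the AMOC curve restricted to false alarm
    rates of at most `max_far` (lower is better)."""
    pts = sorted((f, d) for f, d in points if f <= max_far)
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area
```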
## 3 Machine Learning Algorithms
In a survey of the relevant literature we have identified only a few
algorithms which relate to non-specific syndromic surveillance, described in
Sections 3.1 to 3.3. In Section 3.4, we describe how common anomaly
detection algorithms can be applied in the setting of non-specific syndromic
surveillance.
### 3.1 Data Mining Surveillance System (DMSS)
One of the first algorithms able to identify new and interesting patterns in
syndromic data was proposed by Brossette et al. [1], who adapted the idea of
association rule mining [12] to the field of public health surveillance. In
order to detect an outbreak for time slot $t$, an association rule mining
algorithm needs to be run on $\mathcal{C}(t)$ and a reference set of patients
$\mathcal{R}\subset\mathcal{C}$ is created by merging the instances of a
selected set of previous time slots. For each association rule the confidence
of the rule on $\mathcal{C}(t)$ is compared to the confidence of the rule
computed on $\mathcal{R}$ using a $\chi^{2}$ or a Fisher’s test. If the
confidence has significantly increased on $\mathcal{C}(t)$, the finding is
reported as an unexpected event. In order to reduce the complexity, the
authors propose to focus only on mining high-support association rules. An
aggregation of the observations for one time slot is not performed and
environmental attributes are not considered by this approach.
### 3.2 What is strange about recent events? (WSARE)
The family of _WSARE_ algorithms has been proposed by Wong et al. [11]. All
algorithms share the same underlying concept, namely to monitor all possible
syndromes having a maximum of two conditions $\mathcal{S}_{\leq 2}$
simultaneously. The three WSARE algorithms only differ in the way how the
reference set of patients $\mathcal{R}$ is created on which the expected
proportion for each syndrome is estimated. Each expected proportion is
compared to the proportion of the respective syndrome observed on the set
$\mathcal{C}(t)$ using the $\chi^{2}$ or Fisher’s exact test. In order to
aggregate the $p$-values of the statistical tests for one time slot, a
_permutation test_ with 1,000 repetitions is performed. The following three
versions have been considered:
WSARE 2.0
merges the instances of a selected set of prior time slots together for the
reference set $\mathcal{R}$. Since their evaluation was based on single-day
time slots, they combined the instances of the previous time slots $35$, $42$,
$49$ and $56$ to consider only instances of the same weekday.
WSARE 2.5
merges the instances of all prior time slots together which share the same
environmental setting as for the current day $\mathbf{e}(t)$. This has the
advantage that the expected proportions are conditioned on the environmental
setting $\mathbf{e}(t)$ and that potentially more instances are contained in
the reference set $\mathcal{R}$, allowing to have more precise expectations.
WSARE 3.0
learns a Bayesian network over all recent data $\mathcal{H}$ from which 10,000
instances for the reference set $\mathcal{R}$ are sampled given the
environmental attributes $\mathbf{e}(t)$ as evidence.
### 3.3 Eigenevent
The key idea of the _Eigenevent_ algorithm proposed by Fanaee-T and Gama [3]
is to track changes in the data correlation structure using eigenspace
techniques. Instead of monitoring all possible syndromes, only overall changes
and dimension-level changes are observed by the algorithm. Therefore, a
dynamic baseline tensor is created using the information of prior time slots
$\mathcal{H}$ which share the same environmental setting $\mathbf{e}(t)$. In
the next step, information of the instances $\mathcal{C}(t)$ and the baseline
tensor are decomposed to a lower-rank subspace in which the eigenvectors and
eigenvalues are compared to each other, respectively. Any significant changes
in the eigenvectors and eigenvalues between the baseline tensor and the
information of instances $\mathcal{C}(t)$ indicate an outbreak.
### 3.4 Anomaly Detection Algorithms
A direct application of point anomaly detection is in general not suitable for
syndromic surveillance [11] because these methods aim to identify single
instances $\mathbf{c}\in\mathcal{C}$ as outliers and could thus, e.g., be
triggered by a patient who is over a hundred years old. In order to still
apply point anomaly detectors to discover outbreaks, we form a dataset
$\mathcal{D}$ using the syndromes $s\in\mathcal{S}$ as features and the
respective syndrome counts $\mathcal{H}_{s}$ as values. Hence, each instance
represents the occurrence counts of all syndromes for one particular time slot
and the dataset contains $t-1$ instances in total. This dataset can be used to
fit an anomaly detector which can then be applied to the instance of syndrome
counts for time slot $t$. Hence, an outbreak could be identified by an unusual
combination of syndrome counts. In this work, we consider the following
anomaly detection algorithms. Due to space restrictions, we refer to Chandola
et al. [2] and Zhao et al. [13] and the references therein for a comprehensive
review of the methods.
One-Class SVM
extends the support vector machine algorithm to perform outlier detection by
separating instances $\mathcal{D}$ from the complement of $\mathcal{D}$.
Local Outlier Factor
computes the outlier score for an instance based on how isolated the instance
is with respect to the surrounding neighborhood.
Gaussian Mixture Models
approximate the distribution of the dataset $\mathcal{D}$ using a mixture of
Gaussian distributions. The outlier score is based on how dense the region of
the evaluated instance is.
Copula-Based Outlier Detection
(COPOD) creates an empirical copula for the multi-variate distribution of
$\mathcal{D}$ on which tail probabilities for an instance can be predicted to
estimate the outlier score.
Isolation Forest
constructs an ensemble of randomly generated decision trees in which anomalies
can be identified by counting the number of splittings required to isolate an
instance.
Autoencoder
learns an identity function of the data through a network of multiple hidden
layers. Instances which have a high reconstruction error are considered to be
anomalous.
Multiple-Objective Generative Adversarial Active Learning
(GAAL)
constructs multiple generators having different objectives to generate
outliers for learning a discriminator which can assign outlier scores to new
instances.
## 4 Basic Statistical Approaches
In addition to the machine learning models introduced in Section 3, we also
include statistical techniques, which are commonly used for specific syndromic
surveillance, into our comparison and adapt them to a non-specific syndromic
surveillance setting. The key idea of these adaptations is to monitor all
possible syndromes $\mathcal{S}$ simultaneously. For the purpose of monitoring
syndromes, a parametric distribution $P_{s}(x)$ is fitted for each single
syndrome $s\in\mathcal{S}$ using the empirical mean $\mu$ and the empirical
variance $\sigma^{2}$ computed over $\mathcal{H}_{s}$:
$\displaystyle\mu=\frac{1}{|\mathcal{H}_{s}|}\sum_{i=1}^{|\mathcal{H}_{s}|}s(i)$
$\displaystyle\sigma^{2}=\frac{1}{|\mathcal{H}_{s}|-1}\sum_{i=1}^{|\mathcal{H}_{s}|}(s(i)-\mu)^{2}$
On the fitted distribution $P_{s}(x)$, a one-tailed significance test is
performed in order to identify a suspicious increase of cases. For a
particular observed count $s(t)$, the $p$-value is computed as the probability
$\int_{s(t)}^{\infty}P_{s}(x)dx$ of observing $s(t)$ or higher counts. Thus,
for evaluating a single time slot $t$, we obtain $|\mathcal{S}|$ $p$-values
which need to be aggregated under consideration of the multiple-testing
problem. Following Roure et al. [7], we only report the minimum $p$-value for
each time slot $t$ because the Bonferroni correction can be regarded as a form
of aggregation of $p$-values based on the minimum function. In particular,
note that scale-free anomaly scores are sufficient for the purpose of
identifying the most suspicious time slots. The complement of the selected
$p$-value represents the anomaly score reported for time slot $t$. For our
benchmarks we have considered the following distributions:
Gaussian.
Not tailored for count data but often used in syndromic surveillance is the
Gaussian distribution $N(\mu,\sigma^{2})$. This distribution will serve as
reference for the other distributions which are specifically designed for
count data.
Poisson.
The Poisson distribution $Pois(\lambda)$ is directly designed for count data.
For estimating the parameter $\lambda$, we use the maximum likelihood estimate
which is the mean $\mu$.
Negative Binomial.
To be able to adapt to overdispersion, we include the negative binomial
distribution $NB(r,p)$. We have estimated the parameters with
$r=\frac{\mu^{2}}{\sigma^{2}-\mu}$ and $p=\frac{r}{r+\mu}$.
Our preliminary experiments showed that statistical tests on rare syndromes
are often too sensitive to changes, causing many false alarms. In addition,
outbreaks are usually associated with a high number of infections. Therefore,
we set the variance $\sigma^{2}$ to a minimum of one before fitting the
Gaussian distribution, and for the Poisson and the negative binomial
distribution we set the mean $\mu$ to a minimum of one. We leave the variance
untouched for the negative binomial distribution, since manipulating the
overdispersion can lead to extreme distortions in the estimation.
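A minimal sketch of the Gaussian and Poisson benchmarks, including the floors on the variance and the mean described above. The negative binomial case is omitted for brevity, and histories are assumed to have at least two entries:

```python
import math

def gaussian_p(count, history):
    """Upper-tail p-value of `count` under a Gaussian fitted to the
    syndrome's history; the variance is floored at one as described above."""
    n = len(history)
    mu = sum(history) / n
    var = max(sum((x - mu) ** 2 for x in history) / (n - 1), 1.0)
    z = (count - mu) / math.sqrt(var)
    return 0.5 * math.erfc(z / math.sqrt(2))

def poisson_p(count, history):
    """Upper-tail p-value P(X >= count) for a Poisson whose rate is the
    historical mean, floored at one as described above."""
    lam = max(sum(history) / len(history), 1.0)
    cdf, term = 0.0, math.exp(-lam)
    for k in range(count):          # accumulate pmf(0), ..., pmf(count - 1)
        cdf += term
        term *= lam / (k + 1)
    return 1.0 - cdf

def min_p_score(current_counts, histories):
    """Anomaly score for one time slot: the complement of the smallest
    p-value over all monitored syndromes (minimum-based aggregation)."""
    p_min = min(poisson_p(current_counts.get(s, 0), h)
                for s, h in histories.items())
    return 1.0 - p_min
```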
## 5 Experiments and Results
The goal of the experimental evaluation reported in this section is to provide
an overview of the performance of non-specific syndromic surveillance methods
in general, and in particular, to re-evaluate the established methods in
the context of the proposed basic statistical approaches and the anomaly detection
algorithms. We conducted experiments on synthetic data, which already have
been used for the evaluation of the algorithms Eigenevent and WSARE [3, 11],
and on real data of a German emergency department (cf. Section 5.3). As the
emergency department data do not contain any information about real outbreaks,
we decided to inject synthetic outbreaks, which is common practice in the area
of syndromic surveillance, allowing us to evaluate and compare the algorithms
in a controlled environment.
### 5.1 Evaluation Setup
Table 1: Information about the attributes of the synthetic data.

attribute | type | #values
---|---|---
age | response | 3
gender | response | 2
action | response | 3
symptom | response | 4
drug | response | 4
location | response | 9
flu level | environmental | 4
day of week | environmental | 3
weather | environmental | 2
season | environmental | 4
Table 2: Information about the attributes of the real data.

attribute | type | #values
---|---|---
age | response | 3
gender | response | 2
mts | response | 28
fever | response | 2
pulse | response | 3
respiration | response | 3
oxygen saturation | response | 2
blood pressure | response | 2
day of week | environmental | 7
season | environmental | 4
#### Synthetic Data.
The synthetic data consists of $100$ data streams, generated with the
synthetic data generator proposed by Wong et al. [11]. The data generator is
based on a Bayesian network and simulates a population of people living in a
city of whom only a subset are reported to the data stream at each simulated
time slot. Detailed information about the attributes in the data stream is
given in Table 1. Each data stream captures the information about the people
on a daily basis over a time period of two years, i.e., each time slot
$\mathcal{C}(t)$ contains the patients of one day. On average, $34$ instances
are reported per time slot and $275$ possible syndromes are contained in the
set $\mathcal{S}_{\leq 2}$. The first year is used for the training part while
the second year serves as the test part. Exactly one outbreak is simulated in
the test part which starts at a randomly chosen day and always lasts for $14$
days. During the outbreak period, the simulated people have a higher chance of
catching a particular disease.
#### Real Data.
We rely on routinely collected, fully anonymized patient data of a German
emergency department, captured on a daily basis over a time period of two
years. We have extracted a set of response attributes and added two
environmental attributes (cf. Table 2). Continuous attributes, such as
_respiration_ , have been discretized with the help of a physician into
meaningful categories. In addition, we include the Manchester-Triage-System
(MTS) [5] initial assessment which is filled out for every patient on arrival.
To reduce the number of values for the attribute MTS, we group classifications
which do not relate to any infectious disease, such as various kinds of
injuries, into a single value. On average, $165$ patients are reported per day
and in total $574$ syndromes can be formed in the set $\mathcal{S}_{\leq
2}$. In preparation for the injection of simulated outbreaks, we replicated
the data stream 100 times. For each data stream, we used the first year as the
training part and the second year as the test part in which we injected
exactly one outbreak. In order to simulate an outbreak, we first uniformly
sampled a syndrome from $\mathcal{S}_{\leq 2}$. In a second step, we sampled
the size of the outbreak from a Poisson distribution with mean equal to the
standard deviation of the daily patient visits and randomly selected the
corresponding number of patients from all patients that exhibit the sampled
syndrome. To avoid over-representing outbreaks on rare syndromes, only $20$
data streams contain outbreaks with syndromes that have a lower frequency than
one per day. In total, $29$ outbreaks are based on syndromes with one
condition and $71$ with two.
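The injection procedure can be sketched as follows; this is a simplified, hypothetical rendering (sampling with replacement, our own helper names), and since Python's standard library has no Poisson sampler, one is included via Knuth's multiplication method:

```python
import math
import random

def sample_poisson(rng, lam):
    """Draw one Poisson(lam) sample using Knuth's multiplication method."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def inject_outbreak(rng, patients, syndromes, mean_size):
    """Uniformly sample a syndrome, draw an outbreak size from a Poisson
    whose mean is the std. dev. of the daily visits, then sample that many
    patients (here: with replacement) among those exhibiting the syndrome."""
    syndrome = rng.choice(syndromes)  # each syndrome is a set of conditions
    carriers = [p for p in patients if syndrome <= p]
    if not carriers:
        return syndrome, []
    size = sample_poisson(rng, mean_size)
    return syndrome, [rng.choice(carriers) for _ in range(size)]
```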
#### Additional Benchmarks.
We also include the _control chart_ , the _moving average_ and the _linear
regression_ algorithms into our analysis. Compared to our _syndrome-based_
statistical benchmarks, these _global_ statistical benchmarks only monitor the
total number of instances per time slot and therefore can only give a very
broad assessment of outbreak detection performance. For a detailed explanation
of these algorithms, we refer to Wong et al. [11].
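To illustrate how coarse these global benchmarks are, a basic control chart reduces to a z-score of each daily total against the training period (a sketch with our own naming; real implementations differ in thresholds and windowing):

```python
import statistics

def control_chart_scores(train_totals, test_totals):
    """Score each test-day total by its z-score under the training mean
    and standard deviation; larger scores indicate a stronger alarm."""
    mu = statistics.mean(train_totals)
    sigma = max(statistics.stdev(train_totals), 1e-9)  # guard against zero spread
    return [(x - mu) / sigma for x in test_totals]
```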
Table 3: Results for the $AAUC_{5\%}$ measure on the synthetic data.

name | rerun | min. $p$-value | permutation test | imported $p$-values
---|---|---|---|---
Eigenevent | 4.993 | – | – | 4.391
WSARE 2.0 | – | 2.963 | 3.805 | 4.925
WSARE 2.5 | – | 1.321 | 1.614 | 1.931
WSARE 3.0 | – | 0.899 | 1.325 | 1.610
#### Implementation and Parameterization.
For the Eigenevent algorithm we rely on the code provided by the
authors.111https://github.com/fanaee/EigenEvent All other algorithms are
implemented in Python.222Our code is publicly available at
https://github.com/MoritzKulessa/NSS Parameters for the DMSS and the anomaly
detection algorithms have been tuned in a grid search using $1000$ iterations
of _Bootstrap Bias Corrected Cross-Validation_ [9], which allows us to integrate
hyperparameter tuning and reliable performance estimation into a single
evaluation loop. The evaluated parameter combinations can be found in our
repository. The WSARE, the Eigenevent, the COPOD and the statistical
algorithms do not contain any parameters which need to be tuned.
### 5.2 Preliminary Evaluation
In a first experiment, we replicated the experiments on the synthetic data of
[3]. More specifically, we imported and re-evaluated the outlier scores for
the synthetic data from the Eigenevent repository (_imported $p$-values_) and
compare them to our own results from rerunning the Eigenevent algorithm
(_rerun_) and to our implementation of the WSARE algorithms. For the latter,
we additionally evaluate the results of just reporting the minimal $p$-value
for each time slot (_min. $p$-value_, cf. Section 4) instead of performing the
originally proposed permutation test with $1000$ repetitions (_permutation
test_). The results are shown in Table 3.
Our rerun of the Eigenevent algorithm returned slightly worse results than the
imported $p$-values, which could be caused by the random initialization. For
the WSARE algorithms, we can observe that our implementation achieves better
results than the imported $p$-values, probably due to the different Bayesian
network used. In particular, the results for the minimal $p$-value were better
than those for the more expensive permutation test. Thus, we chose to only
report the minimal $p$-value for the WSARE algorithms in the following
experiments.
### 5.3 Results
Table 4: Results for the $AAUC_{5\%}$ measure on the synthetic and real data.

category | algorithm name | synthetic data: none | synthetic: $\mathcal{S}_{\leq 1}$ | synthetic: $\mathcal{S}_{\leq 2}$ | real data: none | real: $\mathcal{S}_{\leq 1}$ | real: $\mathcal{S}_{\leq 2}$
---|---|---|---|---|---|---|---
non-specific syndromic surveillance | WSARE 2.0 | – | 3.028 | 2.963 | – | 0.661 | 0.590
 | WSARE 2.5 | – | 1.099 | 1.321 | – | 0.917 | 0.867
 | WSARE 3.0 | – | 0.803 | 0.899 | – | 0.882 | 0.847
 | DMSS | 2.430 | – | – | 0.953 | – | –
 | Eigenevent | 4.993 | – | – | 0.878 | – | –
anomaly detectors | one-class SVM | – | 1.043 | 1.262 | – | 0.468 | 0.495
 | local outlier factor | – | 2.000 | 2.260 | – | 0.642 | 0.610
 | Gaussian mixture model | – | 1.117 | 3.547 | – | 0.444 | 0.791
 | isolation forest | – | 4.576 | 4.948 | – | 0.873 | 0.835
 | COPOD | – | 5.216 | 5.032 | – | 0.816 | 0.800
 | autoencoder | – | 1.521 | 1.643 | – | 0.550 | 0.576
 | GAAL | – | 7.024 | 6.766 | – | 0.792 | 0.866
global benchmarks | control chart | 5.086 | – | – | 0.891 | – | –
 | moving average | 7.012 | – | – | 0.910 | – | –
 | linear regression | 3.279 | – | – | 0.819 | – | –
syndrome-based benchmarks | Gaussian | – | 0.806 | 0.941 | – | 0.328 | 0.267
 | Poisson | – | 1.294 | 1.347 | – | 0.598 | 0.486
 | negative binomial | – | 0.895 | 0.958 | – | 0.299 | 0.216
The results on the synthetic and real data are both shown in Table 4. For
syndrome-based algorithms, the results for monitoring $\mathcal{S}_{\leq 1}$
and $\mathcal{S}_{\leq 2}$ are reported in the respective columns while
results for the other methods are reported in the columns _none_. Note that
the worst possible result on the synthetic data is $14$ while for the real
data the worst result is $1$. In the first paragraphs, we will discuss the
results without specifically considering the size of the syndrome sets unless
needed. The effect of using $\mathcal{S}_{\leq 1}$ or $\mathcal{S}_{\leq 2}$
is discussed in the last paragraph.
#### Comparison between Non-Specific Syndromic Surveillance Algorithms.
Firstly, we analyze the results of the non-specific syndromic surveillance
approaches presented in Sections 3.1 to 3.3. In general, the
WSARE algorithms outperform the other algorithms in the group. In particular,
the results of the modified versions WSARE 2.5 and WSARE 3.0 on the synthetic
data show that the use of environmental attributes can be beneficial. However,
the results on the real data indicate the opposite. We further investigated
this finding by rerunning WSARE 3.0 on the real data without the use of
environmental variables and observed a substantial improvement of the results
to $0.613$ for $\mathcal{S}_{\leq 1}$ and $0.570$ for $\mathcal{S}_{\leq 2}$,
respectively. Therefore, we conclude that the modelling of environmental
factors should be done with care, since it can easily lead to worse estimates
if the real distribution does not follow the categorization imposed by the
defined attributes.
The results of the DMSS algorithm suggest that monitoring association rules is
not as effective as monitoring syndromes. In particular, the space of possible
association rules is much greater than the space of possible syndromes
$\mathcal{S}$ which worsens the problem of multiple testing. Especially on the
real data this results in a bad performance since the high number of instances
per time slot yields too many rules. Conversely, by monitoring only rules with
very high support, most of the outbreaks remain undetected since the disease
patterns can no longer be captured. In contrast to the results reported by
Fanaee-T and Gama [3], the Eigenevent algorithm performs poorly compared to
the WSARE algorithms. A closer analysis reveals that the difference in these
results can be explained by the used evaluation measure. Fanaee-T and Gama [3]
consider only $p$-values in the range $[0.02,0.25]$ to create the AMOC-curve.
However, exactly the omitted low $p$-values are particularly important when
precise predictions with low false positive rates are required which is why we
explicitly included this range into the computation of the AMOC-curve.
#### Comparison to the Anomaly Detection Algorithms.
Regarding the synthetic data, which was specifically created in order to
evaluate the WSARE algorithms, we can observe that no anomaly detection
algorithm reaches $AAUC_{5\%}$ scores competitive with WSARE 3.0. Considering
the gap to WSARE 2.0, which in comparison to 3.0 does not distinguish between
environmental settings, one reason could be that the anomaly detection
algorithms are not able to take the environmental variables into account.
Another reason could be the low number of training instances (one for each
day) which might have caused problems, especially for the neural networks.
Only the SVM, which is known to work well with only a few instances, and the
Gaussian mixture model are able to achieve acceptable results. These two
approaches are in fact able to outperform the WSARE variants on the real data
for which we already found evidence that the environmental information might
not be useful.
#### Comparison to the Benchmarks.
In the following, we will put the previously discussed results in relation to
the benchmarks. For the global benchmarks, we can observe that monitoring the
total number of cases per time slot is not sufficient to adequately detect
most of the outbreaks. Notably, many of the machine learning approaches do in
fact not perform considerably better than these simple benchmarks. The
comparison to our proposed statistical benchmarks, applied to each possible
syndrome separately, allows further important insights. Our main observation is
that, despite their simplicity, they outperform most of the previously
discussed, more sophisticated approaches. In fact, in the case of the real
data the Gaussian and the negative binomial benchmarks achieve the best
scores. On the synthetic data they are able to achieve results that are
competitive to WSARE 3.0 even though the benchmarks do not take the
environmental attributes into account. We were also surprised by the good
results of the Gaussian benchmark since this modelling is not specifically
designed for count data. The advantage may be explained in the context of
multiple testing: the Gaussian yields smoother, less extreme estimates and
hence more reliable outlier scores for the time slots. However, the results on
the real data, which obviously contain a more realistic representation of
count data than the completely generated synthetic data, show that the
negative binomial benchmark can improve over the Gaussian benchmark.
#### Comparison between $\mathcal{S}_{\leq 1}$ and $\mathcal{S}_{\leq 2}$.
We can make two basic observations regarding the complexity of the monitored
syndromes: Firstly, the outbreaks in the synthetic data are better detected by
the algorithms and benchmarks for non-specific syndromic surveillance when
monitoring single condition syndromes $\mathcal{S}_{\leq 1}$ while for the
real data we benefit from pair patterns $\mathcal{S}_{\leq 2}$. Secondly,
almost no anomaly detector is able to profit from the explicit counts for
$\mathcal{S}_{\leq 2}$ regardless of the dataset. For understanding the first
effect, we take a closer look at the results of our proposed benchmarks. These
approaches can only take co-occurrences between conditions into account if
explicitly given or if the $\mathcal{S}\setminus\mathcal{S}_{\leq 1}$ patterns
greatly affect the counts for the composing conditions. Hence, monitoring a
larger set of syndromes increases the sensitivity of detecting outbreaks with
complex disease patterns. However, it comes at the cost of a higher false
alarm rate due to multiple testing. For the real dataset, for which we know
that it contains more outbreaks based on two than on one condition, the higher
sensitivity is able to outweigh the increased false alarm rate. On the other
hand, the results on the synthetic dataset suggest that most of the outbreaks
in the synthetic data are led by single indicators, resulting in more false
alarms when monitoring $\mathcal{S}_{\leq 2}$.
In contrast to the non-specific syndromic surveillance approaches, only some
anomaly detectors, such as the local outlier factor algorithm and the
isolation forest, benefit slightly from the explicit counts for
$\mathcal{S}_{\leq 2}$. This indicates that the remaining approaches, such as the SVM
and neural networks, already adequately consider correlations between
attributes. Especially remarkable is the case of the Gaussian mixture model,
which achieves the best results in the group when monitoring
$\mathcal{S}_{\leq 1}$ but is strongly affected by the $\mathcal{S}_{\leq 2}$
patterns.
## 6 Conclusion
In this work, we presented non-specific syndromic surveillance from the
perspective of machine learning and gave an overview of the few approaches
addressing this task. Furthermore, we introduced a way to apply anomaly
detection algorithms to this problem, together with a set of simple
statistical algorithms which we believe should serve as reference points for
future experimental comparisons. In an experimental evaluation, we revisited
the non-specific syndromic surveillance approaches in face of the previously
not considered statistical benchmarks and a variety of anomaly detectors.
Eventually, we found that these benchmarks outperform most of the more
sophisticated techniques and are competitive to the best approaches in the
field.
## References
* Brossette et al. [1998] Brossette, S., Sprague, A., Hardin, J., Waites, K., Jones, W., Moser, S.: Association rules and data mining in hospital infection control and public health surveillance. Journal of the American Medical Informatics Association 5, 373–381 (1998)
* Chandola et al. [2009] Chandola, V., Banerjee, A., Kumar, V.: Anomaly detection: A survey. ACM Computing Surveys 41(3), 1–58 (2009)
* Fanaee-T and Gama [2015] Fanaee-T, H., Gama, J.: Eigenevent: An algorithm for event detection from complex data streams in syndromic surveillance. Intelligent Data Analysis 19, 597–616 (2015)
* Fawcett and Provost [1999] Fawcett, T., Provost, F.: Activity monitoring: Noticing interesting changes in behavior. In: Proceedings of the 5th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 53–62 (1999)
* Gräff et al. [2014] Gräff, I., Goldschmidt, B., Glien, P., Bogdanow, M., Fimmers, R., Hoeft, A., Kim, S.C., Grigutsch, D.: The German version of the Manchester triage system and its quality criteria—First assessment of validity and reliability. PloS one 9(2) (2014)
* Henning [2004] Henning, K.J.: What is syndromic surveillance? Morbidity and Mortality Weekly Report: Supplement 53, 7–11 (2004)
* Roure et al. [2007] Roure, J., Dubrawski, A., Schneider, J.: A study into detection of bio-events in multiple streams of surveillance data. In: NSF Workshop on Intelligence and Security Informatics, pp. 124–133, Springer (2007)
* Shmueli and Burkom [2010] Shmueli, G., Burkom, H.: Statistical challenges facing early outbreak detection in biosurveillance. Technometrics 52(1), 39–51 (2010)
* Tsamardinos et al. [2018] Tsamardinos, I., Greasidou, E., Borboudakis, G.: Bootstrapping the out-of-sample predictions for efficient and accurate cross-validation. Machine Learning 107(12), 1895–1922 (2018)
* Webb et al. [2016] Webb, G.I., Hyde, R., Cao, H., Nguyen, H.L., Petitjean, F.: Characterizing concept drift. Data Mining and Knowledge Discovery 30(4), 964–994 (2016)
* Wong et al. [2005] Wong, W., Moore, A., Cooper, G., Wagner, M.: What’s strange about recent events (WSARE): An algorithm for the early detection of disease outbreaks. Journal of Machine Learning Research 6, 1961–1998 (12 2005)
* Zhang and Zhang [2002] Zhang, C., Zhang, S.: Association Rule Mining: Models and Algorithms. Springer-Verlag (2002)
* Zhao et al. [2019] Zhao, Y., Nasrullah, Z., Li, Z.: PyOD: A Python toolbox for scalable outlier detection. Journal of Machine Learning Research 20(96), 1–7 (2019)
# A Survey of Complex-Valued Neural Networks
Joshua Bassey, Xiangfang Li, Lijun Qian Center of Excellence in Research and
Education for Big Military Data Intelligence (CREDIT)
Department of Electrical and Computer Engineering
Prairie View A&M University, Texas A&M University System
Prairie View, TX 77446, USA
Email<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
Artificial neural networks (ANNs) based machine learning models and especially
deep learning models have been widely applied in computer vision, signal
processing, wireless communications, and many other domains, where complex
numbers occur either naturally or by design. However, most of the current
implementations of ANNs and machine learning frameworks are using real numbers
rather than complex numbers. There is growing interest in building ANNs
using complex numbers and in exploring the potential advantages of the so-called
complex-valued neural networks (CVNNs) over their real-valued counterparts. In
this paper, we discuss the recent development of CVNNs by performing a survey
of the works on CVNNs in the literature. Specifically, a detailed review of
various CVNNs in terms of activation functions, learning and optimization,
input and output representations, and applications in tasks such as signal
processing and computer vision is provided, followed by a discussion of some
pertinent challenges and future research directions.
###### Index Terms:
complex-valued neural networks; complex number; machine learning; deep
learning
## I Introduction
Artificial neural networks (ANNs) are data-driven computing systems inspired
by the dynamics and functionality of the human brain. With the advances in
machine learning, especially deep learning, ANN-based models have gained
tremendous usage in many domains and have become tightly woven into
our daily lives. Applications such as automatic speech recognition make it
possible to have conversations with computers, enable computers to generate
speech and musical notes with realistic sounds, and separate a mixture of
speech into single audio-streams for each speaker [1]. Other examples include
object identification and tracking, personalized recommendations, and
automating important tasks more efficiently [2].
In many of the practical applications, complex numbers are often used such as
in telecommunications, robotics, bioinformatics, image processing, sonar,
radar, and speech recognition. This suggests that ANNs using complex numbers
to represent inputs, outputs, and parameters such as weights, have potential
in these domains. For example, it has been shown that the phase spectrum is
able to encode fine-scale temporal dependencies [1]. Furthermore, the real and
imaginary parts of a complex number have some statistical correlation. By
knowing in advance the importance of phase and magnitude to our learning
objective, it makes more sense to adopt a complex-valued model, as this offers
a more constrained system than a real-valued model [3].
Complex-valued neural networks (CVNN) are ANNs that process information using
complex-valued parameters and variables [4]. The main reason for their
advocacy lies in the arithmetic of complex numbers, especially the
multiplication operation, which results in a phase rotation and amplitude
modulation and thereby yields an advantageous reduction of the degrees of
freedom [5]. The advantage of ANNs is their self-organization and high degree
of freedom in learning; by knowing a priori the amplitude and phase structure
of the data, a potentially harmful portion of this freedom can be constrained
by using CVNNs.
Recently, CVNNs have received increased interests in signal processing and
machine learning research communities. In this paper, we discuss the recent
development of CVNNs by performing a survey of the works on CVNNs in the
literature. The contributions of this paper include
1. 1.
A systematic review and categorization of the state-of-the-art CVNNs has been
carried out based on their activation functions, learning and optimization
methods, input and output representations, and their applications in various
tasks such as signal processing and computer vision.
2. 2.
Detailed description of the different schools of thoughts, similarities and
differences in approaches, and advantages and limitations of various CVNNs are
provided.
3. 3.
Further discussions on some pertinent challenges and future research
directions are given.
To the best of our knowledge, this is the first work solely dedicated to a
comprehensive review of complex-valued neural networks.
The rest of this paper is structured as follows. A background on CVNNs, as
well as their use cases are presented in Section II. Section III discusses
CVNNs according to the type of activation functions used. Section IV reviews
CVNNs based on their learning and optimization approaches. The CVNNs
characterized by their input and output representations are reviewed in
Section V. Various applications of CVNNs are presented in Section VI and
challenges and potential research directions are discussed in Section VII.
Section VIII contains the concluding remarks.
The symbols and notations used in this review are summarized in Table I.
TABLE I: Symbols and Notations

symbol | meaning
---|---
$C$ | multivalued neural network (MVN) learning rate
$\mathbb{C}$ | complex domain
$\mathbb{R}$ | real domain
$d$ | desired output
$e$ | individual error of network output
$e_{log}$ | logarithmic error
$E$ | error of network output
$f$ | activation function
$i$ | imaginary unity
$Im$ | imaginary component
$j$ | values of k-valued logic
$J$ | regularization cost function
$k$ | output indices
$l$ | indices of preceding network layer
$n$ | indices for input samples
$N$ | total number of input samples
$o$ | actual output (prediction)
$p$ | dimension of real values
$Re$ | real component
$m$ | indices for output layer of MVN
$t$ | regularization threshold parameter
$T$ | target for MVN
$u$ | real part of activation function
$v$ | imaginary part of activation function
$x$ | real part of weighted sum
$y$ | imaginary part of weighted sum
$Y$ | output of MVN
$\delta$ | partial differential
$\Delta$ | total differential
$\nabla$ | gradient operator
$\mathfrak{l}(e)$ | mean square loss function
$\mathfrak{l}(e_{log})$ | logarithmic loss function
$\epsilon^{*}$ | global error for MVN
$\epsilon$ | neuron error for MVN
$\omega$ | error threshold for MVN
$\hat{\beta}$ | regularized weights
$\lambda$ | regularization parameter
$\mathbf{X}$ | all inputs
$w$ | individual weight
$\mathbf{W}$ | all network weights
$z$ | weighted sum
$|\cdot|$ | modulo operation
$\lVert\cdot\rVert$ | Euclidean distance
$\angle$ | angle
## II Background
### II-A Historical Notes
The ADALINE machine [6], a one-neuron, one-layer machine is one of the
earliest implementations of a trainable neural network influenced by the
Rosenblatt perceptron [7]. ADALINE used least mean square (LMS) and stochastic
gradient descent for deriving optimal weights.
The LMS algorithm was first extended to the complex domain in [8], where
gradient descent was derived with respect to the real and imaginary parts.
Gradient descent was further generalized in [9] using Wirtinger calculus, such
that the gradient is taken with respect to the complex variables themselves
instead of their real components. Wirtinger calculus provides a framework for
obtaining the gradient of complex-valued functions [10]; the complex-valued
representation of the gradient is equivalent to obtaining the gradients of the
real and imaginary components separately.
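To make the complex-valued gradient concrete, consider a scalar toy example (our own illustration, not the exact formulation of [8] or [9]): minimizing $|d-wx|^{2}$ with the Wirtinger derivative taken with respect to $\bar{w}$ yields the update $w\leftarrow w+\mu\,\bar{x}\,e$ with error $e=d-wx$.

```python
import random

def complex_lms(samples, mu=0.05, w0=0j):
    """Scalar complex LMS: for each pair (x, d), update
    w <- w + mu * conj(x) * e  with error  e = d - w * x."""
    w = w0
    for x, d in samples:
        e = d - w * x
        w += mu * x.conjugate() * e
    return w

# Recover a known complex coefficient from noiseless samples.
rng = random.Random(42)
w_true = 0.7 - 0.3j
xs = [complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(500)]
w_hat = complex_lms([(x, w_true * x) for x in xs])
```

On noiseless data the estimate converges to the true coefficient for any step size below the usual stability bound.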
### II-B Why Complex-Valued Neural Networks
Artificial neural networks (ANNs) based machine learning models and especially
deep learning models have gained wide spread usage in recent years. However,
most of the current implementations of ANNs and machine learning frameworks
are using real numbers rather than complex numbers. There is growing interest
in building ANNs using complex numbers and in exploring the potential
advantages of the so-called complex-valued neural networks (CVNNs) over their
real-valued counterparts. The first question is: why are CVNNs needed?
Although in many analyses involving complex numbers, the individual components
of the complex number have been treated independently as real numbers, it
would be erroneous to apply the same concept to CVNNs by assuming that a CVNN
is equivalent to a two-dimensional real-valued neural network. In fact, it has
been shown that this is not the case [11], because the operation of complex
multiplication limits the degree of freedom of the CVNNs at the synaptic
weighting. This suggests that the phase-rotational dynamics strongly underpins
the process of learning.
From a biological perspective, the complex-valued representation has been used
in a neural network [12]. The output of a neuron was expressed as a function
of its firing rate specified by its amplitude, and the comparative timing of
its activity is represented by its phase. Exploiting complex-valued neurons
resulted in more versatile representations. With this formulation, input
neurons with similar phases add constructively and are termed synchronous, and
asynchronous neurons with dissimilar phases interfere with each other because
they add destructively. This is akin to the behavior of the gating operation
applied in deep feedforward neural networks [13], as well as in recurrent
neural networks [14]. In the gating mechanism, the controlling gates are
typically the sigmoid-based activation, and synchronization describes the
propagation of inputs with simultaneously high values held by their
controlling gates. This property of incorporating phase information may be one
of the reasons for the effectiveness of using complex-valued representations
in recurrent neural networks.
The importance of phase is supported not only from the biological perspective
but also from a signal processing point of view. Several studies have shown that
intelligibility of speech is affected largely by the information contained in
the phase portion of the audio signal [15, 1]. Similar results have also been
shown for images. For example, it was shown in [16] that by exploiting the
information encoded in the phase of an image, one can sufficiently recover
most of its content even without the magnitude. This is because the phase
describes objects in an image in terms of edges, shapes, and their
orientations.
From a computational perspective, holographic reduced representations (HRRs)
were employed in [17] to enhance the storage of data as key-value pairs. The
idea is to mitigate two major limitations of recurrent neural networks: (1)
the dependence of the number of memory cells on the recurrent weight matrices’
size, and (2) lack of memory indexing during writing and reading, causing the
inability to learn to represent common data structures such as arrays. The
complex conjugate was used for key retrieval instead of the inverse of the
weight matrix in [17]. The authors showed that the use of complex numbers by
Holographic Reduced Representation for data retrieval from associative
memories is more numerically stable and efficient. They further showed that
even conventional networks such as residual networks [18] and Highway networks
[13] display similar framework to that of associative memories. In other
words, each residual network uses the identity connection to insert or
“aggregate” the computed residual into memory.
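The conjugate-based retrieval can be illustrated directly in the frequency domain, where binding is an element-wise product: if the key has unit-modulus entries, multiplying by its conjugate recovers the value exactly (a simplified sketch of the idea, not the full Holographic Reduced Representation machinery of [17]):

```python
import math
import random

def random_key(rng, n):
    """A key whose entries all have unit modulus, so conj(k) * k == 1."""
    return [complex(math.cos(t), math.sin(t))
            for t in (rng.uniform(0, 2 * math.pi) for _ in range(n))]

def bind(key, value):
    """Store: element-wise product of key and value (frequency domain)."""
    return [k * v for k, v in zip(key, value)]

def retrieve(key, memory):
    """Retrieve with the complex conjugate instead of an explicit inverse."""
    return [k.conjugate() * m for k, m in zip(key, memory)]
```

When several bound pairs are superposed by addition, the same conjugate retrieval returns the stored value plus crosstalk noise, which is the setting actually used for associative memories.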
Furthermore, orthogonal weight matrices have been shown to mitigate the well-
known exploding and vanishing gradient problems associated with
recurrent neural networks in the real-valued case. Unitary weight matrices are
a generalization of orthogonal weight matrices to the complex plane. Unitary
matrices are the core of Unitary RNNs [19], and uncover spectral
representations by applying the discrete Fourier transform. Hence, they offer
richer representations than orthogonal matrices. This idea behind unitary RNNs
was exploited in [20], where a general framework was derived for learning
unitary matrices and applied on a real-world speech problem as well as other
toy tasks.
As for the theoretical point of view, a complex number can be represented
either in vector or matrix form. Consequently, the multiplication of two
complex numbers can be represented as a matrix-vector multiplication. However,
the use of the matrix representation increases the number of dimensions and
parameters of the model. It is also common knowledge in machine learning that
the more complicated a model in terms of parameters, the greater the tendency
of the model to overfit. Hence, using real-valued operations to approximate
these complex parameters could result in a model with undesirable
generalization characteristics. In the complex domain, by contrast, the
matrix representation mimics a scaled rotation matrix. This means that half of
the entries of the matrix are fixed once the other half is known. This constraint
reduces the degrees of freedom and enhances the generalization capacity of the
model.
Based on the above discussions, it is clear that there are two main reasons
for the use of complex numbers in neural networks:
1. 1.
In many application domains such as wireless communication or audio
processing, where complex numbers occur naturally or by design, there is a
correlation between the real and imaginary parts of the complex signal. For
instance, the Fourier transform is a linear transformation, with a direct
correspondence between the multiplication of a signal by a scalar in the time-
domain, and multiplying the magnitude of the signal in the frequency domain.
In the time domain, the circular rotation of a signal is equivalent to
shifting its phase in the frequency domain. This means that during phase
change, the real and imaginary components of a complex number are
statistically correlated. This correlation is discarded when real-valued
models are used, especially on frequency-domain signals.
2. 2.
If the relevance of the magnitude and phase to the learning objective is known
_a priori_ , then it is more reasonable to use a complex-valued model, because
it imposes more constraints than a real-valued model would.
## III Activation Functions of CVNNs
Activation functions introduce non-linearity to the affine transformations in
neural networks. This gives the model more expressiveness. Given an input
$x\in\mathbb{C}^{M}$, and weights $W\in\mathbb{C}^{N\times M}$, where $M$ and
$N$ represent the dimensionality of the input and output respectively, the
output $y\in\mathbb{C}^{N}$ of any layer of neurons is:
$\displaystyle y$ $\displaystyle=$ $\displaystyle f(\mathbf{Wx}),$ (1)
where $f$ is a nonlinear activation function applied element-wise.
Neural networks have been shown to be universal approximators using sigmoid
squashing activation functions that are monotonic as well as bounded [21, 22,
23, 24].
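A minimal sketch of such a complex-valued layer, $y=f(\mathbf{Wx})$, assuming a split (Type A) sigmoid is chosen as $f$ (one of several choices discussed below):

```python
import numpy as np

# Minimal sketch (assumption: a split Type A sigmoid is used as f) of a
# complex-valued layer computing y = f(Wx) for x in C^M, W in C^{N x M}.
def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def complex_layer(W, x):
    z = W @ x                                      # affine part, z in C^N
    return sigmoid(z.real) + 1j * sigmoid(z.imag)  # element-wise nonlinearity
```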
TABLE II: Complex-Valued Activation Functions
Activation Function | Corresponding Publications
---|---
Split-type A | [25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 11, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50]
Split-type B | [51, 5, 26, 52, 53, 54, 55, 56, 57, 58, 11, 59, 60, 46]
Fully Complex (ETF) | [61, 62, 63, 64, 65, 51, 66]
Non Parametric | [67]
Energy Functions | [68, 69, 70, 71]
Complex ReLU | [72, 73, 74, 75, 76, 77, 78, 34, 79, 80]
Nonlinear Phase | [81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107]
Linear Activation | [108]
Split Kernel Activation | [67]
Complex Kernel Activation | [67]
Absolute Value Activation | [109]
Hybrid Activation | [110]
Mobius Activation | [110]
The majority of the activation functions that have been proposed for CVNNs in
the literature are summarized in Table II. Activation functions are generally
thought of as being holomorphic or non-holomorphic. In other words, the major
concern is whether the activation function is differentiable everywhere,
differentiable around certain points, or not differentiable at all. Complex
functions that are holomorphic at every point are known as “entire functions”.
However, in the complex domain, one cannot have a complex activation function
that is both bounded and complex-differentiable everywhere. This stems from
Liouville’s theorem, which states that every bounded entire function is
constant. Hence, it is not possible to have a CVNN whose squashing activation
functions are entire.
One of the first works on complex activation functions was done by Naum
Aizenberg [111, 112]. According to these works, a multi-valued neuron (MVN) is
a neural element with $n$ inputs and one output lying on the unit circle, and
with complex-valued weights. The mapping is described as:
$f(x_{1},...,x_{n})=f(w_{0}+w_{1}x_{1}+...+w_{n}x_{n})$ (2)
where $x_{1},...,x_{n}$ are the variables of the function and
$w_{0},w_{1},...,w_{n}$ are the weights. All variables are complex, and all
outputs of the function are the $k$th roots of unity
$\epsilon^{j}=\exp(i2\pi j/k)$, $j\in[0,k-1]$, where $i$ is the imaginary
unit. $f$ is an activation function on the weighted sum, defined as:
$f(z)=\exp(i2\pi j/k),\quad\text{if}\quad 2\pi j/k\leq\arg(z)<2\pi(j+1)/k$ (3)
where $w_{0}+w_{1}x_{1}+...+w_{n}x_{n}$ is the weighted sum, $\arg(z)$ is the
argument of $z$, and $j=0,1,...,k-1$ represents the values of the $k$-valued
logic.
A geometric interpretation of the MVN activation function is shown in Figure
1. The activation function represented by equation (3) divides the complex
plane into $k$ equal sectors and implements a mapping of the entire complex
plane onto the unit circle.
Figure 1: Geometric Interpretation for discrete-valued MVN activation function
The same concept can be extended to continuous-valued inputs. By making
$k\rightarrow\infty$, the angles of the sectors in Figure 1 will tend towards
zero. This continuous-valued MVN is obtained by transforming equation (3) to:
$f(z)=\exp(i\arg(z))=z/|z|$ (4)
where $z$ is the weighted sum and $|z|$ is the modulus of $z$. Equation (3)
maps the complex plane to a discrete subset of points on the unit circle,
whereas equation (4) maps the complex plane to the entire unit circle.
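The discrete activation of equation (3) and its continuous limit of equation (4) can be sketched as follows (the sector count $k$ is a free parameter chosen for illustration):

```python
import numpy as np

# Sketch of the discrete MVN activation of Eq. (3) and the continuous
# version of Eq. (4). The input z is the complex weighted sum.
def mvn_discrete(z, k):
    j = np.floor((np.angle(z) % (2 * np.pi)) / (2 * np.pi / k))  # sector index
    return np.exp(1j * 2 * np.pi * j / k)   # project onto a k-th root of unity

def mvn_continuous(z):
    return z / np.abs(z)                    # project onto the unit circle
```

As $k\rightarrow\infty$, the discrete output approaches the continuous one, matching the limit described above.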
There is no general consensus on the most appropriate activation function for
CVNNs in the literature. The main requirement is to have a nonlinear function
that is not susceptible to exploding or vanishing gradients during training.
Since the work of Aizenberg, other activation functions have been proposed.
For example, holomorphic functions [113] have been proposed and used in so-
called “fully complex” networks. The hyperbolic tangent is an example of a
fully complex activation function and has been used in [63]. Figure 2 shows
its surface plots. It can be observed that singularities in the output can be
caused by values on the imaginary axis. In order to avoid explosion of values,
inputs have to be properly scaled.
Figure 2: Split-type hyperbolic tangent activation function
Some researchers suggest that it is not necessary to impose the strict
constraint of requiring that the activation function be holomorphic. Those of
this school of thought advocate for activations which are differentiable with
respect to their imaginary and real parts. These activations are called split
activation functions and can either be real-imaginary (Type A) or amplitude-
phase (Type B) [114]. A Type A split-real activation was used in [25] where
the real and imaginary parts of the signal were input into a Sigmoid function.
A Type B split amplitude-phase nonlinear activation function that squashes the
magnitude and preserves phase was proposed in [51] and [5].
While Type B activations preserve the phase and squash the magnitude,
activation functions that instead preserve the magnitude and squash the phase
have also been proposed. These types of activation functions were termed
“phasor networks” when originally introduced [115], where output values extend
from the origin to the unit circle [105, 116, 117, 118, 119]. The multi-threshold
logic activation functions used by multi-valued neural networks [120] are also
based on a similar idea.
Although most of the early approaches favor the split method based on non-
holomorphic functions, gradient-preserving backpropagation can still be
performed on fully complex activations. Consequently, elementary
transcendental functions (ETF) have been proposed [121, 66, 122] to train the
neural network when there are no singularities found in the domain of
interest. This is because singularities in ETFs are isolated.
Radial basis functions have also been proposed for complex networks [123, 124,
125, 126], and spline-based activation functions were proposed in works such
as [127] and [128]. In a more recent work [67], non-parametric functions were
proposed. The non-parametric kernels can be used in the complex domain
directly or separately on the imaginary and real components.
Given its widespread adoption and success in deep real-valued networks, the
ReLU activation [129]:
$\mathrm{ReLU}(x):=\max(x,0)\;,$ (5)
has been proposed because it mitigates the problem of vanishing gradients
encountered with Sigmoid activation. For example, ReLU was applied in [19]
after a trainable bias parameter was added to the magnitude of the complex
number. In their approach, the phase is preserved while the magnitude is non-
linearly transformed. The authors in [130] applied ReLU separately on both
real and imaginary parts of the complex number. The phase is nonlinearly
mapped when the input lies in a quadrant other than the upper-right quadrant
of the Argand diagram. However, neither of these activations is holomorphic.
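The two variants just described can be sketched as follows (illustrative Python; the bias $b$ is trainable in [19] but fixed here, and the small constant guarding division by zero is an implementation assumption):

```python
import numpy as np

# Sketches of two complex ReLU variants (illustrative; see [19] and [130]).
def mod_relu(z, b):
    """Phase-preserving: ReLU applied to |z| + b, with z's phase kept."""
    mag = np.abs(z)
    scale = np.maximum(mag + b, 0.0) / np.maximum(mag, 1e-12)
    return scale * z

def crelu(z):
    """Split: ReLU applied separately to the real and imaginary parts."""
    return np.maximum(z.real, 0.0) + 1j * np.maximum(z.imag, 0.0)
```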
The activation functions discussed in this section along with some others are
listed in Table II. For example, a cardioid activation in [85] is defined as:
$f(z)=\frac{1}{2}(1+\cos(\angle z))z,$ (6)
and this was applied in [85] for magnetic resonance imaging (MRI)
fingerprinting. The cardioid activation function is a phase-sensitive complex
extension of the ReLU.
There are other activation functions besides those described in this section,
including functions that are suitable for both real and complex hidden
neurons. There is still no consensus on which scenarios warrant the use of
holomorphic functions such as ETFs, and which warrant non-holomorphic
functions that are more closely related to the nonlinear activation functions
widely used in current state-of-the-art real-valued deep architectures. In
general, no single group of activation functions is deemed the best for both
real and complex neural networks.
## IV Optimization and learning in CVNNs
Learning in neural networks refers to the process of tuning the weights of the
network to optimize learning objectives such as minimizing a loss function.
The optimal set of weights is the one that allows the neural network to generalize
best to out-of-sample data. Given the desired and predicted output of a
complex neural network, or ground truth and prediction in the context of
supervised learning, $d\in\mathbb{C}^{N}$ and $o\in\mathbb{C}^{N}$,
respectively, the error is
$e:=d-o.$ (7)
The complex mean square loss is a non-negative scalar
$\displaystyle\mathcal{L}(e)$ $\displaystyle=$
$\displaystyle\sum_{k=0}^{N-1}|e_{k}|^{2}$ (8) $\displaystyle=$
$\displaystyle\sum_{k=0}^{N-1}e_{k}\bar{e}_{k}.$ (9)
It is also a real-valued mapping that tends to zero as the modulus of the
complex error reduces.
The log error between the target and the prediction is given by [131]
$\mathcal{L}(e_{log}):=\sum_{k=0}^{N-1}\left(\log(o_{k})-\log(d_{k})\right)\overline{\left(\log(o_{k})-\log(d_{k})\right)}$ (10)
where $\overline{(\cdot)}$ denotes complex conjugation. If
$d=r\>\text{exp}(i\phi)$ and $o=\hat{r}\>\text{exp}(i\hat{\phi})$ are the
polar representations of the desired and actual outputs respectively, the log
error decreases monotonically as the prediction approaches the target. With
this representation, the errors in magnitude and phase appear explicitly in
the logarithmic loss function, given as:
$\mathcal{L}(e_{log})=\frac{1}{2}\sum_{k=0}^{N-1}\bigg{(}\Big{[}\log\frac{\hat{r}_{k}}{r_{k}}\Big{]}^{2}+\big{[}\hat{\phi}_{k}-\phi_{k}\big{]}^{2}\bigg{)}$
(11)
such that $e_{log}\rightarrow 0$ when $\hat{r}_{k}\rightarrow r_{k}$ and
$\hat{\phi}_{k}\rightarrow\phi_{k}$. The loss functions in equations (8) and
(10) are suitable for complex-valued regression. For classification, one may
map the output of the network to the real domain using a transform that is not
necessarily holomorphic.
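The two regression losses above can be sketched as follows (an illustrative implementation; the phase differences entering the logarithmic loss are assumed small enough not to require wrapping):

```python
import numpy as np

# Sketch of the complex mean square loss of Eq. (8) and the logarithmic
# loss of Eq. (10). Both reduce to non-negative real scalars.
def complex_mse(d, o):
    e = d - o
    return np.sum(e * np.conj(e)).real       # equals sum_k |e_k|^2

def complex_log_loss(d, o):
    e_log = np.log(o) - np.log(d)            # log ratio: magnitude + i * phase error
    return np.sum(e_log * np.conj(e_log)).real
```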
Table III details the most popular learning methods implemented in the
literature. In general, there are two approaches for training CVNNs. The first
approach follows the same method used in training the real-valued neural
networks where the error is backpropagated from the output layer to the input
layer using gradient descent. In the second approach, the error is
backpropagated but _without_ gradient descent.
TABLE III: Learning Methods for Complex-Valued Neural Networks
Error Propagation Method | Corresponding Publications
---|---
Split-Real Backpropagation | [25, 6, 5, 30, 36, 37, 39, 11, 43, 45, 46, 47, 55, 57, 64, 72, 79, 132, 133, 134, 135, 136]
Fully Complex Backpropagation (CR) | [8, 63, 51, 121, 65, 66, 33, 34, 35, 61, 137, 138]
MLMVN | [111, 112, 139, 103, 82, 86, 89, 90, 91, 92, 93, 95, 97, 98, 99, 101, 104, 140, 141, 142, 107]
Orthogonal Least Square | [124, 125]
Quarternion-based Backpropagation | [54]
Hebbian Learning | [56]
Complex Barzilai-Borwein Training Method | [137]
### IV-A Gradient-based Approach
Different approaches to the backpropagation algorithm in the complex domain
were independently proposed by various researchers in the early 1990s. For
example, a derivation for single hidden layer complex networks was given in
[132]. In this work, the authors showed that a complex neural network with one
hidden layer and Sigmoid activation was able to solve the XOR problem.
Similarly using Sigmoid activation, the authors in [143] derived the complex
backpropagation algorithm. Derivations were also given for Cartesian split
activation function in [25] and for non-holomorphic activation functions in
[51].
In reference to gradient based methods, the Wirtinger Calculus [10] was used
to derive the complex gradient, Jacobian, and Hessian in [9] and [144].
Wirtinger developed a framework that simplifies the process of obtaining
derivatives of complex-valued functions, both holomorphic and non-holomorphic.
With this framework, the derivatives of complex-valued functions can be
computed entirely in the complex domain instead of with respect to the real
and imaginary components independently. The Wirtinger approach has not always
been favored, but it is
beginning to gain more interest with newer approaches like [58, 145].
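For reference, the pair of operators at the heart of this framework is (these are the standard definitions, stated here for completeness): for $f(z)=u(x,y)+iv(x,y)$ with $z=x+iy$,
$\frac{\partial f}{\partial z}=\frac{1}{2}\bigg{(}\frac{\partial f}{\partial x}-i\frac{\partial f}{\partial y}\bigg{)},\qquad\frac{\partial f}{\partial\bar{z}}=\frac{1}{2}\bigg{(}\frac{\partial f}{\partial x}+i\frac{\partial f}{\partial y}\bigg{)},$
and for a real-valued loss $\mathcal{L}$, steepest descent follows the conjugate derivative $-\partial\mathcal{L}/\partial\bar{z}$.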
The process of learning with complex domain backpropagation is similar to the
learning process in the real domain. The error calculated after the forward
pass is backpropagated to each neuron in the network, and the weights are
adjusted in the backward pass. Let the activation function of a neuron be
$f(z)=u(x,y)+iv(x,y)$, where $z=x+iy$, $u$ and $v$ are the real and imaginary
parts of $f$, and $x$ and $y$ are the real and imaginary parts of $z$. The
partial derivatives $u_{x}=\partial u/\partial x,\;u_{y}=\partial u/\partial
y,\;v_{x}=\partial v/\partial x,\;v_{y}=\partial v/\partial y$ are assumed to
exist for all $z\in\mathbb{C}$; the Cauchy-Riemann equations need not be
satisfied. Given an input pattern, the error is given by
$E=\frac{1}{2}\sum_{k}e_{k}\bar{e}_{k},\qquad e_{k}=d_{k}-o_{k}$ (12)
where $d_{k}$ and $o_{k}$ are the desired and actual outputs of the $k$th
neuron, respectively. The over-bar denotes the complex conjugate operation.
Given a neuron $j$ in the network, the output $o_{j}$ is given by
$o_{j}=f(z_{j})=u^{j}+iv^{j},\quad z_{j}=x_{j}+iy_{j}=\sum_{l}W_{jl}X_{jl}$
(13)
where the $W_{jl}$’s are the complex weights of neuron $j$ and $X_{jl}$ its
complex input. A complex bias (1,0) may be added. The following partial
derivatives are defined:
$\displaystyle\frac{\partial x_{j}}{\partial W_{jlR}}=X_{jlR},\;\frac{\partial
y_{j}}{\partial W_{jlR}}=X_{jlI},\;$ $\displaystyle\frac{\partial
x_{j}}{\partial W_{jlI}}=-X_{jlI},\;\frac{\partial y_{j}}{\partial
W_{jlI}}=X_{jlR}$ (14)
where $R$ and $I$ represent the real and imaginary parts. The chain rule is
used to find the gradient of the error function $E$ with respect to $W_{jl}$.
The gradient of the error function with respect to the $W_{jlR}$ and $W_{jlI}$
are given by
$\displaystyle\frac{\partial E}{\partial W_{jlR}}$ $\displaystyle=$
$\displaystyle\frac{\partial E}{\partial u^{j}}\bigg{(}\frac{\partial
u^{j}}{\partial x_{j}}\frac{\partial x_{j}}{\partial W_{jlR}}+\frac{\partial
u^{j}}{\partial y_{j}}\frac{\partial y_{j}}{\partial W_{jlR}}\bigg{)}
+\frac{\partial E}{\partial
v^{j}}\bigg{(}\frac{\partial v^{j}}{\partial x_{j}}\frac{\partial
x_{j}}{\partial W_{jlR}}+\frac{\partial v^{j}}{\partial y_{j}}\frac{\partial
y_{j}}{\partial W_{jlR}}\bigg{)}$ (16)
$\displaystyle=$
$\displaystyle-\delta_{jR}(u^{j}_{x}X_{jlR}+u^{j}_{y}X_{jlI})
-\delta_{jI}(v^{j}_{x}X_{jlR}+v^{j}_{y}X_{jlI})$
$\displaystyle\frac{\partial E}{\partial W_{jlI}}$ $\displaystyle=$
$\displaystyle\frac{\partial E}{\partial u^{j}}\bigg{(}\frac{\partial
u^{j}}{\partial x_{j}}\frac{\partial x_{j}}{\partial W_{jlI}}+\frac{\partial
u^{j}}{\partial y_{j}}\frac{\partial y_{j}}{\partial W_{jlI}}\bigg{)}
+\frac{\partial E}{\partial
v^{j}}\bigg{(}\frac{\partial v^{j}}{\partial x_{j}}\frac{\partial
x_{j}}{\partial W_{jlI}}+\frac{\partial v^{j}}{\partial y_{j}}\frac{\partial
y_{j}}{\partial W_{jlI}}\bigg{)}$ (18)
$\displaystyle=$
$\displaystyle-\delta_{jR}\big{(}u^{j}_{x}(-X_{jlI})+u^{j}_{y}X_{jlR}\big{)}
-\delta_{jI}\big{(}v^{j}_{x}(-X_{jlI})+v^{j}_{y}X_{jlR}\big{)}$
where $\delta_{j}=-\partial E/\partial u^{j}-i\partial E/\partial v^{j}$,
$\delta_{jR}=-\partial E/\partial u^{j}$ and $\delta_{jI}=-\partial E/\partial
v^{j}$. By combining equations (16) and (18), the gradient of the error
function with respect to $W_{jl}$ is given by
$\displaystyle\nabla_{wjl}E$ $\displaystyle=$ $\displaystyle\frac{\partial
E}{\partial W_{jlR}}+i\frac{\partial E}{\partial W_{jlI}}$ (19)
$\displaystyle=$
$\displaystyle-\bar{X}_{jl}((u_{x}^{j}+iu_{y}^{j})\delta_{jR}+(v_{x}^{j}+iv_{y}^{j})\delta_{jI})$
Hence, given a positive constant learning rate $\alpha$, the complex weight
$W_{jl}$ must be changed by a value $\Delta W_{jl}$ proportional to the
negative gradient:
$\Delta
W_{jl}=\alpha\bar{X}_{jl}\left((u_{x}^{j}+iu_{y}^{j})\delta_{jR}+(v_{x}^{j}+iv_{y}^{j})\delta_{jI}\right)$
(20)
For an output neuron, $\delta_{jR}$ and $\delta_{jI}$ in equation (19) are
given by
$\displaystyle\delta_{jR}=-\frac{\partial E}{\partial
u^{j}}=\epsilon_{jR}=d_{jR}-u^{j}$ $\displaystyle\delta_{jI}=-\frac{\partial
E}{\partial v^{j}}=\epsilon_{jI}=d_{jI}-v^{j}$ (21)
In compact form, this is
$\delta_{j}=\epsilon_{j}=d_{j}-o_{j}$ (22)
The chain rule is used to compute $\delta_{jR}$ and $\delta_{jI}$ for the
hidden neuron. Note that $k$ is an index for a neuron receiving input from
neuron $j$. The net input $z_{k}$ to neuron $k$ is
$z_{k}=x_{k}+iy_{k}=\sum_{l}(u^{l}+iv^{l})(W_{klR}+iW_{klI})$ (23)
where $l$ is the index for the neurons that feed into neuron $k$. Computing
$\delta_{jR}$ using the chain rule yields
$\displaystyle\delta_{jR}=-\frac{\partial E}{\partial u^{j}}$ $\displaystyle=$
$\displaystyle-\sum_{k}\frac{\partial E}{\partial u^{k}}\bigg{(}\frac{\partial
u^{k}}{\partial x_{k}}\frac{\partial x_{k}}{\partial u^{j}}+\frac{\partial
u^{k}}{\partial y_{k}}\frac{\partial y_{k}}{\partial u^{j}}\bigg{)}
-\sum_{k}\frac{\partial E}{\partial v^{k}}\bigg{(}\frac{\partial
v^{k}}{\partial x_{k}}\frac{\partial x_{k}}{\partial u^{j}}+\frac{\partial
v^{k}}{\partial y_{k}}\frac{\partial y_{k}}{\partial u^{j}}\bigg{)}$ (24)
$\displaystyle=$
$\displaystyle\sum_{k}\delta_{kR}(u^{k}_{x}W_{kjR}+u^{k}_{y}W_{kjI})
+\sum_{k}\delta_{kI}(v^{k}_{x}W_{kjR}+v^{k}_{y}W_{kjI})$
Similarly, $\delta_{jI}$ is computed as:
$\displaystyle\delta_{jI}=-\frac{\partial E}{\partial v^{j}}$ $\displaystyle=$
$\displaystyle-\sum_{k}\frac{\partial E}{\partial u^{k}}\bigg{(}\frac{\partial
u^{k}}{\partial x_{k}}\frac{\partial x_{k}}{\partial v^{j}}+\frac{\partial
u^{k}}{\partial y_{k}}\frac{\partial y_{k}}{\partial v^{j}}\bigg{)}
-\sum_{k}\frac{\partial E}{\partial v^{k}}\bigg{(}\frac{\partial
v^{k}}{\partial x_{k}}\frac{\partial x_{k}}{\partial v^{j}}+\frac{\partial
v^{k}}{\partial y_{k}}\frac{\partial y_{k}}{\partial v^{j}}\bigg{)}$
$\displaystyle=$
$\displaystyle\sum_{k}\delta_{kR}\big{(}u^{k}_{x}(-W_{kjI})+u^{k}_{y}W_{kjR}\big{)}
+\sum_{k}\delta_{kI}\big{(}v^{k}_{x}(-W_{kjI})+v^{k}_{y}W_{kjR}\big{)}$ (25)
The expression for $\delta_{j}$ is obtained by combining equations (24) and
(25):
$\displaystyle\delta_{j}$ $\displaystyle=$
$\displaystyle\delta_{jR}+i\delta_{jI}$ (26) $\displaystyle=$
$\displaystyle\sum_{k}\bar{W}_{kj}\left((u^{k}_{x}+iu^{k}_{y})\delta_{kR}+(v^{k}_{x}+iv^{k}_{y})\delta_{kI}\right)$
where $\delta_{j}$ is computed for neuron $j$ starting in the output layer
using equation (22), then using equation (26) for the neurons in the hidden
layers. After computing $\delta_{j}$ for neuron $j$, equation (20) is used to
update its weights.
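As a concrete check of the derivation, the sketch below (assumptions: a single output neuron with a split Type A sigmoid activation and the loss of equation (12)) implements the gradient of equation (19); it can be verified against finite differences of the loss with respect to the real and imaginary parts of each weight, confirming equations (16) and (18):

```python
import numpy as np

# Sketch: split complex backpropagation for one output neuron with
# a Type A activation f(z) = sigmoid(x) + i*sigmoid(y).
def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def forward(W, X):
    z = np.dot(W, X)                              # complex weighted sum z_j
    o = sigmoid(z.real) + 1j * sigmoid(z.imag)    # split activation output
    return z, o

def grad_E(W, X, d):
    """Gradient of E = 0.5*|d - o|^2 w.r.t. W, following Eq. (19)."""
    z, o = forward(W, X)
    delta = d - o                                 # output-neuron delta, Eq. (22)
    u_x = sigmoid(z.real) * (1.0 - sigmoid(z.real))  # du/dx; du/dy = 0 (split)
    v_y = sigmoid(z.imag) * (1.0 - sigmoid(z.imag))  # dv/dy; dv/dx = 0 (split)
    return -np.conj(X) * (u_x * delta.real + 1j * v_y * delta.imag)
```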
### IV-B Non-Gradient-based Approach
Different from the gradient based approach, the learning process of a neural
network based on the multi-valued neuron (MVN) is derivative-free and it is
based on the error-correction learning rule [103]. For a single neuron, weight
correction in the MVN is determined by the neuron’s error, and learning is
reduced to a simple movement along the unit circle. The corresponding
activation function of equations (3) and (4) are not differentiable, which
implies that there are no gradients.
Figure 3: Example of MLMVN with one hidden-layer and a single output
Consider an example MLMVN with one hidden layer and a single output, as shown
in Figure 3. Let $T$ be the target and $Y_{12}$ the output, and assume the
following definitions:
* $\epsilon^{*}=T-Y_{12}$: global error of the network
* $w^{12}_{0},w^{12}_{1},\cdots,w^{12}_{n}$: initial weighting vector of neuron $Y_{12}$
* $Y_{i1}$: initial output of the $i$th hidden-layer neuron
* $Z_{12}$: weighted sum of neuron $Y_{12}$ before weight correction
* $\epsilon_{12}$: error of neuron $Y_{12}$
The weight correction for the second to the $m$th (output) layer, and then for
the input layer are given by
$\displaystyle\tilde{w}^{kj}_{i}$ $\displaystyle=$ $\displaystyle
w^{kj}_{i}+\frac{C_{kj}}{(N_{j-1}+1)}\epsilon_{kj}\bar{\tilde{Y}}_{i,j-1},\quad\text{$i=1,\dots,n$}$
$\displaystyle\tilde{w}^{kj}_{0}$ $\displaystyle=$ $\displaystyle
w^{kj}_{0}+\frac{C_{kj}}{(N_{j-1}+1)}\epsilon_{kj}$ (27)
$\displaystyle\tilde{w}^{k1}_{i}$ $\displaystyle=$ $\displaystyle
w^{k1}_{i}+\frac{C_{k1}}{(n+1)}\epsilon_{k1}\bar{x}_{i},\quad\text{$i=1,\dots,n$}$
$\displaystyle\tilde{w}^{k1}_{0}$ $\displaystyle=$ $\displaystyle
w^{k1}_{0}+\frac{C_{k1}}{(n+1)}\epsilon_{k1}$ (28)
$C_{kj}$ is the learning rate for the $k$th neuron of the $j$th layer.
However, in applying this learning rule two situations may arise: (1) the
absolute value of the weighted sum being corrected may jump erratically, or
(2) the output of the hidden neuron varies around some constant value. In
either of these scenarios, a large number of weight updates can be wasted. The
workaround is to instead apply a modified learning rule [92], which adds a
normalization factor to the updates for the hidden and input layers. The
output-layer update, however, is not normalized.
The final correction rule for the $k$th neuron of the $m$th (output) layer is
$\displaystyle\tilde{w}^{km}_{i}$ $\displaystyle=$ $\displaystyle
w^{km}_{i}+\frac{C_{km}}{(N_{m-1}+1)}\epsilon_{km}\bar{\tilde{Y}}_{i,m-1},\quad\text{$i=1,\dots,n$}$
$\displaystyle\tilde{w}^{km}_{0}$ $\displaystyle=$ $\displaystyle
w^{km}_{0}+\frac{C_{km}}{(N_{m-1}+1)}\epsilon_{km}$ (29)
For the second to the $(m-1)$th layers (the $k$th neuron of the $j$th layer,
$j=2,\cdots,m-1$), the correction rule is
$\displaystyle\tilde{w}^{kj}_{i}$ $\displaystyle=$ $\displaystyle
w^{kj}_{i}+\frac{C_{kj}}{(N_{j-1}+1)|z_{kj}|}\epsilon_{kj}\bar{\tilde{Y}}_{i,j-1},\quad\text{$i=1,\dots,n$}$
$\displaystyle\tilde{w}^{kj}_{0}$ $\displaystyle=$ $\displaystyle
w^{kj}_{0}+\frac{C_{kj}}{(N_{j-1}+1)|z_{kj}|}\epsilon_{kj}$ (30)
and for the input layer:
$\displaystyle\tilde{w}^{k1}_{i}$ $\displaystyle=$ $\displaystyle
w^{k1}_{i}+\frac{C_{k1}}{(n+1)|z_{k1}|}\epsilon_{k1}\bar{x}_{i},\quad\text{$i=1,\dots,n$}$
$\displaystyle\tilde{w}^{k1}_{0}$ $\displaystyle=$ $\displaystyle
w^{k1}_{0}+\frac{C_{k1}}{(n+1)|z_{k1}|}\epsilon_{k1}$ (31)
Given a pre-specified learning precision $\omega$, the condition for
termination of the learning process is
$\frac{1}{N}\sum_{s=1}^{N}\sum_{k}(\epsilon^{*}_{km_{s}})^{2}(W)=\frac{1}{N}\sum_{s=1}^{N}E_{s}\leq\omega.$
(32)
One of the advantages of the non-gradient-based approach is its ease of
implementation. In addition, since no derivatives are involved, the problems
associated with typical gradient-descent-based methods, such as getting stuck
in local minima, do not apply here. Furthermore, because
of the structure and representation of the network, it is possible to design a
hybrid network architecture where some nodes have discrete activation
functions, whereas some others use continuous activation functions. This may
have a great potential in future applications [139].
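For a single neuron, the error-correction step of equations (27)-(28) can be sketched as follows (a single-neuron reduction written for illustration; inputs are assumed to lie on the unit circle, as in the MVN formulation):

```python
import numpy as np

# Sketch of one derivative-free error-correction step for a single MVN with
# the continuous activation f(z) = z/|z| (single-neuron form of Eqs. (27)-(28)).
def mvn_step(w, x, target, lr=1.0):
    """w: complex weights with w[0] the bias; x: complex inputs on the unit circle."""
    z = w[0] + np.dot(w[1:], x)                   # weighted sum
    err = target - z / abs(z)                     # neuron error on the unit circle
    n = len(x)
    w_new = w.copy()
    w_new[0] += lr * err / (n + 1)                # bias correction
    w_new[1:] += lr * err * np.conj(x) / (n + 1)  # weight correction with conjugated inputs
    return w_new
```

With unit-modulus inputs and lr = 1, a single step moves the weighted sum by exactly the error term, since $1+\sum_{i}\bar{x}_{i}x_{i}=n+1$.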
### IV-C Training and hyperparameters optimization
The adaptation of real-valued activation, weight initialization and batch
normalization was analyzed in [130]. More recently, building on the work of
[9], optimization via second-order methods was introduced through the use of
the complex Hessian in the complex gradient [144], and linear approaches have been
proposed to model second-order complex statistics [146]. The training of
complex-valued neural networks using complex learning rate was proposed in
[137]. The problem of vanishing gradients in recurrent neural networks was
addressed by using unitary weight matrices in [19] and [20], thereby improving
time and space efficiency through the properties of orthogonal matrices.
However, regarding regularization, besides the proposed use of noise in [147],
not much work has been done in the literature. This is an interesting open
research problem and it is further discussed in Section VII.
## V Input and Output representations in CVNNs
Input representations can be complex either naturally or by design. In the
former case, a set of complex numbers represents the data domain. An example is
Fourier transform on images. In the latter case, inputs have magnitude and
phase which are statistically correlated. An example is in radio frequency or
wind data [148]. For the outputs, complex values are more suitable for
regression tasks. However, real values are more appropriate if the goal is to
perform inference over a probability distribution of complex parameters.
Weights can be complex or real irrespective of input/output representations.
In [1], the impact of, and tradeoffs among, possible representation choices
were studied using a toy experiment. In the experiment, a toy model was
trained to learn a function that adds sinusoids. Four input representations
were used:
1.
amplitude-phase where a real-valued vector is formed from concatenating the
phase offset and amplitude parameters;
2.
complex representation where the phase offset is used as the phase of the
complex phasor;
3.
real-imaginary representation where the real and imaginary components of the
complex vector in 2) are used as real vectors;
4.
augmented complex representation which is a concatenation of the complex
vector and its conjugate.
Two representations were considered for the target:
1.
the straightforward representation, which uses the raw real and complex target
values;
2.
analytic representation which uses the Hilbert transform for the imaginary
part.
The activation functions used were (1) identity, (2) hyperbolic tangent, (3)
split real-imaginary activation, and (4) split amplitude phase.
Different models with the combinations of input and output representations as
well as various activation functions were tested in the experiments, and the
model with the real-imaginary sigmoid activation performed the best. However,
it is interesting that this best model diverged and performed poorly when it
was trained on real inputs. There is a closed-form solution for the real-
imaginary input representation when the target representation is either
straightforward or analytic. However, there is no closed-form solution for the
amplitude-phase representation.
In general, certain input and output representations when combined yield a
closed form solution, and there are some scenarios in which even when there is
no closed form solution, the complexity of the solution can be reduced by
transforming either the input, output or internal representation. In summary,
the question as to which input and output representation is the best would
typically depend on the application and it is affected by the level of
constraint imposed on the system.
## VI Applications of CVNNs
Various applications of CVNNs are summarized in Table IV. Because of the
complex nature of many natural and man-made signals, CVNNs find most of their
applications in the area of signal (including radio frequency signal, audio
and image) processing.
### VI-A Applications in Radio Frequency Signal Processing in Wireless
Communications
The majority of the work on complex-valued neural networks has been focused on
signal processing research and applications. Complex-valued neural network
research in signal processing applications include channel equalization [133,
149], satellite communication equalization [150], adaptive beamforming [151],
coherent-lightwave networks [4, 152] and source separation [128]. In
interferometric synthetic aperture radar, complex networks were used for
adaptive noise reduction [153]. In electrical power systems, complex valued
networks were proposed to enhance power transformer modeling [154], and
analysis of load flow [134]. In [150], for example, the authors considered the
issue of requiring long training sequences, which results in a lot of wasted
channel capacity in cases where the nonlinearities associated with the channel
are slowly time-varying. To solve this problem, they investigated the
use of CVNN for adaptive channel equalization. The approach was tested on the
task of equalizing a digital satellite radio channel amidst intersymbol
interference and minor nonlinearities, and their approach showed competitive
results. They also pointed out that their approach does not require prior
knowledge about the nonlinear characteristics of the channel.
TABLE IV: Applications of Complex-Valued Neural Networks
Applications | Corresponding Publications
---|---
Radio Frequency Signal Processing in Wireless Communications | [6, 8, 65, 66, 123, 124, 125, 126, 127, 128, 67, 32, 33, 35, 36, 11, 52, 53, 55, 56, 72, 89, 91, 109, 133, 149, 150, 151, 152, 154, 155]
Image Processing and Computer Vision | [119, 103, 19, 130, 85, 31, 34, 37, 39, 40, 54, 62, 69, 73, 75, 76, 77, 82, 92, 98, 99, 101, 102, 156, 157, 158, 159, 141, 142]
Audio Signal Processing and Analysis | [130, 26, 48, 49, 58, 79, 136]
Radar / Sonar Signal Processing | [139, 74, 110, 160, 161, 153]
Cryptography | [162]
Time Series Prediction | [139, 103]
Associative Memory | [105, 116]
Wind Prediction | [30, 43, 148]
Robotics | [38]
Traffic Signal Control (robotics) | [46, 60]
Spam Detection | [59]
Precision Agriculture (soil moisture prediction) | [82]
### VI-B Applications in Image Processing and Computer Vision
Complex valued neural networks have also been applied in image processing and
computer vision. There have been some works on applying CVNN for optical flow
[157, 156], and CVNNs have been combined with holographic movies [159, 158,
135]. CVNNs were used for reconstruction of gray-scale images [106], image
deblurring [141, 99, 142], and classification of microarray gene expression data [107].
Clifford networks were applied for character recognition [163] and complex-
valued neural networks were also applied for automatic gender recognition in
[37]. A complex valued VGG network was implemented by [75] for image
classification. In this work, building on [130], the building blocks of the
VGG network including batch normalization, ReLU activation function, and the
2-D convolution operation were transformed to the complex domain. When testing
their model on classification of the popular CIFAR10 benchmark image dataset,
the complex-valued VGG model performed slightly better than the real-valued
VGG in both training and testing accuracy. Moreover, the complex-valued
network requires fewer parameters.
Another noteworthy computer vision application that showcases the potential of
complex-valued neural networks can be found in [19], where the authors
investigated the benefits of using unitary weight matrices to mitigate the
vanishing gradient problem. Among other applications, their method was tested
on the MNIST hand-writing benchmark dataset in two modes. In the first mode,
the pixels were read in order (left to right and bottom up), and in the second
mode the pixels were read in arbitrarily. The real-valued LSTM performed
slightly better than the unitary-RNN for the first mode, but in the second
mode the unitary-RNN outperformed the real-valued LSTM in spite of having
below a quarter of the parameters than the real-valued LSTM. Moreover, it took
the real-valued LSTM between 5 and 10 times as many epochs to reach
convergence comparing to the unitary RNN.
Non-gradient based learning has also been used extensively in image processing
and computer vision applications. For example, the complex neural network with
multi-valued neurons (MLMVN) was used as an intelligent image filter in [87].
The task was to apply the filter to simultaneously filter all pixels in an
$n\times n$ region, and the results from overlapping patches were then
averaged. The integer input intensities were mapped to complex inputs before
being fed into the MLMVN, and the outputs of the MLMVN were transformed back
to integer intensities. This approach proved successful and efficient, as very
good nonlinear filters were obtained by training the network with as few as
400 images. Furthermore, from their simulations it
was observed that the filtering results got better as more images were added
to the training set.
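The intensity-to-complex mapping used by multi-valued neurons can be sketched as follows: integers in $\{0,\dots,K-1\}$ are mapped to the $K$-th roots of unity, and a complex output is decoded back to the index of its angular sector. The choice $K=256$ gray levels below is an assumption for illustration, not a detail taken from [87]:

```python
import numpy as np

K = 256  # number of gray levels (assumed for illustration)

def intensity_to_complex(j):
    """Map an integer intensity j in {0,...,K-1} to the j-th K-th root of unity."""
    return np.exp(2j * np.pi * j / K)

def complex_to_intensity(z):
    """Decode a complex value back to the index of its angular sector."""
    return int(np.round(np.angle(z) % (2 * np.pi) / (2 * np.pi / K))) % K

# Round-tripping recovers the original intensity, and all inputs lie on the unit circle.
assert complex_to_intensity(intensity_to_complex(0)) == 0
assert complex_to_intensity(intensity_to_complex(137)) == 137
assert np.isclose(abs(intensity_to_complex(200)), 1.0)
```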
### VI-C Applications in Audio Signal Processing and Analysis
In audio processing, complex networks were proposed to improve the MP3 codec [58] and for audio source localization [136]. CVNNs were shown to denoise noise-corrupted waveforms better than real-valued neural networks in [11].
Complex associative memories were proposed for temporal series obtained from
symbolic music representation [48, 49]. A notable work in this regard is [130]
where deep complex networks were used for speech spectrum prediction and music
transcription. In this work, the authors compared various complex-valued ReLU-based activations and formulated the building blocks necessary for deep complex networks, including complex batch normalization and complex weight initialization. Apart from image recognition, the authors tested their deep complex network on the MusicNet dataset for the music transcription task and on the TIMIT dataset for the speech spectrum prediction task. It is interesting to note that these datasets contain real values; the imaginary components are learned using operations in one real-valued residual block.
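The ReLU-based complex activations compared in [130] are commonly defined as CReLU (ReLU applied to the real and imaginary parts separately), zReLU (identity on the first quadrant of the complex plane, zero elsewhere), and modReLU (a magnitude shrinkage that preserves phase). A minimal NumPy sketch of these standard definitions:

```python
import numpy as np

def crelu(z):
    """CReLU: ReLU applied separately to the real and imaginary parts."""
    return np.maximum(z.real, 0) + 1j * np.maximum(z.imag, 0)

def zrelu(z):
    """zReLU: pass z through only when its phase lies in [0, pi/2]."""
    keep = (z.real >= 0) & (z.imag >= 0)
    return np.where(keep, z, 0)

def modrelu(z, b):
    """modReLU: shrink the magnitude by a (learned) bias b, preserve the phase."""
    mag = np.abs(z)
    scale = np.maximum(mag + b, 0) / np.maximum(mag, 1e-12)
    return scale * z

z = np.array([1 + 1j, -1 + 2j, 0.5 - 0.5j])
assert np.allclose(crelu(z), [1 + 1j, 2j, 0.5 + 0j])
assert np.allclose(zrelu(z), [1 + 1j, 0, 0])
# modReLU with b = -0.2 zeroes inputs with magnitude below 0.2 and rescales
# the rest while keeping their phase unchanged.
out = modrelu(z, -0.2)
assert np.allclose(np.angle(out[0]), np.angle(z[0]))
```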
### VI-D Other Applications
In wind prediction, the axes of the Cartesian coordinate plane of a complex number were used to represent the cardinal points (north, south, east, west), and wind strength was expressed as the distance from the origin in [138]. However, although this work exploits the circularity property of complex numbers, the method does not reduce the degrees of freedom; it has the same degrees of freedom as a real-valued network because wind is typically highly anisotropic. In other words, in this representation the absolute value of the phase carries no meaning; only the difference from a specified reference is meaningful.
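This encoding can be sketched as a single complex number whose phase encodes direction and whose magnitude encodes strength. The compass convention below (east on the positive real axis, north on the positive imaginary axis) is an assumption for illustration, not necessarily the convention fixed in [138]:

```python
import numpy as np

def encode_wind(speed, bearing_deg):
    """Encode wind as a complex number: compass bearing as phase, speed as magnitude.
    Convention (assumed): east = positive real axis, north = positive imaginary axis."""
    theta = np.deg2rad(90.0 - bearing_deg)  # compass bearing -> mathematical angle
    return speed * np.exp(1j * theta)

# A 10 m/s wind toward the north maps to (approximately) 10i.
z = encode_wind(10.0, 0.0)
assert np.isclose(abs(z), 10.0)
assert np.isclose(z.imag, 10.0)
# Only the phase *difference* between two winds carries physical meaning,
# matching the caveat in the text.
z2 = encode_wind(10.0, 90.0)  # toward the east
assert np.isclose(np.angle(z) - np.angle(z2), np.pi / 2)
```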
A recent application in radar can be found in [160, 161]. In this work, a complex-valued convolutional neural network (CV-CNN) was used to exploit the inherent nature of the time-frequency data of human echoes in order to classify human activities. The human echo combines phase-modulation information caused by motion with amplitude data obtained from different parts of the body. The short-time Fourier transform was applied to the human radar echo signals, and the result was used to train the CV-CNN. The authors also demonstrated that their method outperformed other machine learning approaches at low signal-to-noise ratio (SNR), while achieving accuracies as high as 99.81$\%$. CV-CNNs were also used in [50] to enhance radar images. Interestingly, this is an application of a CNN to a regression problem, where the inputs are radar echoes and the outputs are the expected images.
The first use of a complex-valued generative adversarial network (GAN) was
proposed in [164] to mitigate the issue of lack of labeled data in
polarimetric synthetic aperture radar (PolSAR) images. This approach, which retains the amplitude and phase information of the PolSAR images, performed better than state-of-the-art real-valued approaches, especially in scenarios involving fewer annotated data.
A complex-valued tree parity machine network (CVTPM) was used for neural cryptography in [162]. In neural cryptography, two neural networks exchange public keys by applying the principle of neural network synchronization. The benefit of using a complex network over a real one is that two group keys can be exchanged in one process of neural synchronization. Furthermore, compared to a real network with the same architecture, the CVTPM was shown to be more secure.
There have been numerous other applications of CVNNs. For instance, a discriminative complex-valued convolutional neural network was applied to electroencephalogram (EEG) signals in [33] to automatically deduce features from the data and predict sleep stages. By leveraging the Fisher criterion, which accounts for the minimum error, the maximum between-class distance, and the minimum within-class distance, the authors assert that their model, named fast discriminative complex-valued convolutional neural network (FDCCNN), is capable of learning inherent features that are sufficiently discriminative even when the dataset is imbalanced.
## VII Challenges and Potential Research
Complex-valued neural networks have shown potential in domains where the data encountered is naturally complex, or complex by design. Since the 1980s, research on both real-valued neural networks (RVNNs) and complex-valued neural networks (CVNNs) has advanced rapidly. However, during the development of deep learning, research on CVNNs has been far less active than on RVNNs. So far, research on CVNNs has mainly targeted shallow architectures and specific signal processing applications such as channel equalization. One reason for this is the difficulty associated with training. _This stems from the limitation that a complex-valued activation cannot be bounded and complex-differentiable at the same time: by Liouville's theorem, a bounded entire function must be constant_. Several studies [130] have suggested that the constraint of requiring a complex-valued activation to be simultaneously bounded and complex-differentiable need not be met, and propose activations that are differentiable independently with respect to the real and imaginary components. This remains an open area of research.
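A split-type activation of the kind these studies propose can be illustrated with a split tanh: it is bounded and differentiable with respect to the real and imaginary parts separately, but it violates the Cauchy-Riemann equations and so is not holomorphic. A minimal sketch:

```python
import numpy as np

def split_tanh(z):
    """Split-type activation: bounded, and differentiable with respect to the
    real and imaginary parts separately, though not holomorphic."""
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

z = np.array([3 + 4j, -50 - 50j, 0.1 + 0.1j])
out = split_tanh(z)
# Bounded: |Re| and |Im| never exceed 1, so |out| <= sqrt(2) everywhere.
assert np.all(np.abs(out) <= np.sqrt(2) + 1e-9)
# Not holomorphic: with u = tanh(x), v = tanh(y), the Cauchy-Riemann equation
# du/dx = dv/dy fails, e.g. at z = 1 + 0j: sech^2(1) != sech^2(0) = 1.
assert not np.isclose(1 / np.cosh(1) ** 2, 1.0)
```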
Another reason for the slow development of research in complex-valued neural
networks is that almost all deep learning libraries are optimized for real-
valued operations and networks. There are hardly any public libraries
developed and optimized specifically for training CVNNs. In the experiments
performed in [165] and validated in [166], baseline, wide, and deep networks were built for real-, complex-, and split-valued neural networks. Computations were represented by computational sub-graphs of operations between the real and imaginary parts. This enabled the use of TensorFlow, a standard neural network library, instead of the generalized derivatives in Clifford algebra. However, this approach to modeling CVNNs is still fundamentally based on a library optimized for real-valued arithmetic computations. This issue concerns practical implementation, and _there is a need for deep learning libraries targeted and optimized for complex-valued computations_.
Regarding weight initialization, a formulation for complex weight initialization was presented in [130]. By formulating this in terms of the variance of the magnitude of the weights, which follows a Rayleigh distribution (a chi distribution with two degrees of freedom), the authors expressed the variance of the weights in terms of the Rayleigh distribution's scale parameter. They showed that the variance of the weights depends on the magnitude and not on the phase, hence the phase was initialized uniformly within the range $[-\pi,\pi]$. More research on this could provide insights into _alternative methods of complex-valued weight initialization for CVNNs_. For instance, a weight initialization scheme could make use of the phase parameter in addition to the magnitude.
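Following that formulation, a Glorot-style complex initializer can be sketched by drawing magnitudes from a Rayleigh distribution and phases uniformly from $[-\pi,\pi]$. The scale below is chosen so that $\mathrm{Var}(W)=\mathbb{E}|W|^{2}=2/(\mathrm{fan_{in}}+\mathrm{fan_{out}})$, an assumption in the spirit of [130] rather than its exact recipe:

```python
import numpy as np

def complex_glorot_init(fan_in, fan_out, rng):
    """Rayleigh magnitude + uniform phase. With Rayleigh scale sigma,
    E|W|^2 = 2*sigma^2, so sigma = 1/sqrt(fan_in + fan_out) targets
    Var(W) = 2/(fan_in + fan_out)."""
    sigma = 1.0 / np.sqrt(fan_in + fan_out)
    mag = rng.rayleigh(scale=sigma, size=(fan_out, fan_in))
    phase = rng.uniform(-np.pi, np.pi, size=(fan_out, fan_in))
    return mag * np.exp(1j * phase)

rng = np.random.default_rng(42)
W = complex_glorot_init(256, 256, rng)
# Empirical check on a large draw: E|W|^2 ~= 2/(fan_in + fan_out).
assert np.isclose(np.mean(np.abs(W) ** 2), 2.0 / 512, rtol=0.1)
# The uniform phase makes the mean weight approximately zero.
assert abs(np.mean(W)) < 0.01
```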
There have been some strides in applications involving complex-valued recurrent networks. For example, the introduction of unitary matrices, the complex generalization of real-valued orthogonal matrices, mitigates the problem of vanishing and exploding gradients. However, in applications involving long sequences, gradient backpropagation requires all hidden-state values to be stored, which can become impractical given the limited GPU memory available for optimization. Considering that the inverse of a unitary matrix is its conjugate transpose, it may be possible to derive an invertible nonlinear function with which the states can be recomputed during the backward pass, eliminating the need to store the hidden-state values.
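The idea can be sketched for the linear part of the recurrence: since $U^{-1}=U^{H}$, earlier hidden states can be recomputed during the backward pass instead of stored. Handling the nonlinearity invertibly is exactly the open part; the sketch below omits it:

```python
import numpy as np

rng = np.random.default_rng(7)
Z = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
U, _ = np.linalg.qr(Z)          # random unitary transition matrix

h0 = rng.normal(size=6) + 1j * rng.normal(size=6)
# Forward pass of a purely linear unitary recurrence (nonlinearity omitted).
h1 = U @ h0
h2 = U @ h1
# Backward pass: recompute earlier states from later ones using
# U^{-1} = U^H (conjugate transpose) instead of storing them.
h1_rec = U.conj().T @ h2
h0_rec = U.conj().T @ h1_rec
assert np.allclose(h0_rec, h0)
```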
Introducing complex parameters increases the number of operations required and thus the computational complexity. Whereas a real-valued parameter involves a single real multiplication, a complex-valued parameter requires up to four real multiplications and two real additions. This means that merely doubling the number of real-valued parameters in each layer does not give the equivalent effect observed in a complex-valued neural network [167]. Furthermore, the capacity of a network, in terms of its ability to approximate structurally complex functions, can be quantified by the number of (real-valued) parameters in the network. Consequently, by representing a complex number $a+ib$ using the real pair $(a,b)$, the number of real parameters in each layer is doubled. This implies that a complex-valued network may gain expressiveness but runs the risk of overfitting due to the increase in parameters as the network goes deeper. Hence, _regularization during the training of CVNNs is important but remains an open problem_.
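The parameter-counting argument can be made concrete: a complex linear map corresponds to a structured real block matrix $\begin{pmatrix}A&-B\\B&A\end{pmatrix}$ of doubled size, which has $2n^{2}$ free real parameters ($A$ and $B$) rather than the $4n^{2}$ of an unconstrained real matrix of the same shape. A minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(4, 4))      # real part of the complex weight matrix
B = rng.normal(size=(4, 4))      # imaginary part
x = rng.normal(size=4) + 1j * rng.normal(size=4)

# A complex matrix-vector product: (A+iB)(a+ib) = (Aa-Bb) + i(Ab+Ba),
# i.e. four real matrix products under the hood.
W = A + 1j * B
y = W @ x

# Equivalently, a *structured* real matrix of doubled size acting on (Re, Im):
block = np.block([[A, -B], [B, A]])
xy = np.concatenate([x.real, x.imag])
y2 = block @ xy
assert np.allclose(y2[:4], y.real) and np.allclose(y2[4:], y.imag)
# The block matrix has only 2*n^2 free real parameters (A and B), not the
# 4*n^2 of an unconstrained real matrix of the same shape: doubling real
# parameters is not equivalent to going complex.
```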
Ridge ($L_{2}$) and LASSO ($L_{1}$) regularization are two penalized forms of regression aimed at mitigating the effects of multicollinearity. Regularization enforces an upper threshold on the values of the coefficients and produces a solution with coefficients of smaller variance. With $L_{2}$ regularization, large weights are penalized, forcing the magnitudes of the weights to be small and hence reducing overfitting. Depending on the application, the $L_{1}$ norm can be applied instead to force some weights to zero (a sparse representation).
However, this formulation cannot be applied directly over the field of complex numbers, because applying the $L_{2}$ norm to a complex number simply yields its magnitude, which is real-valued, and the phase information is lost. As far as we know, there has not been any successful attempt either to provide a satisfactory transformation of this problem into the complex domain or to derive a method that can efficiently search the underlying hyper-parameter solution space. Apart from the work of the authors in [147], who propose to use noise, there is very little work on the regularization of CVNNs.
Complex- and split-complex-valued neural networks are considered in [165] to further understand their computational graphs, algebraic systems, and expressiveness. The results show that complex-valued neural networks are more sensitive to hyperparameter tuning due to the increased complexity of the computational graph. In one of the experiments performed in [165], $L_{2}$ regularization was added for all parameters in the real, complex, and split-complex neural networks, which were trained on the MNIST and CIFAR-10 benchmark datasets. In the unregularized case, both real and complex networks showed comparable validation accuracy. However, when $L_{2}$ regularization was added, overfitting was reduced in the real-valued networks but the regularization had very little effect on the performance of the complex and split-complex networks. It seems that complex neural networks are not self-regularizing and are more difficult to regularize than their real-valued counterparts.
## VIII Conclusion
A comprehensive review on complex-valued neural networks has been presented in
this work. The argument for advocating the use of complex-valued neural
networks in domains where complex numbers occur naturally or by design was
presented. The state-of-the-art in complex-valued neural networks was
presented by classifying them according to activation function, learning
paradigm, input and output representations, and applications. Open problems
and future research directions have also been discussed. Compared to their real-valued counterparts, complex-valued neural networks are still an emerging field and require more attention from the deep learning and signal processing research communities.
## IX Acknowledgment
This research work is supported by the U.S. Office of the Under Secretary of
Defense for Research and Engineering (OUSD(R&E)) under agreement number
FA8750-15-2-0119. The U.S. Government is authorized to reproduce and
distribute reprints for governmental purposes notwithstanding any copyright
notation thereon. The views and conclusions contained herein are those of the
authors and should not be interpreted as necessarily representing the official
policies or endorsements, either expressed or implied, of the Office of the
Under Secretary of Defense for Research and Engineering (OUSD(R&E)) or the
U.S. Government.
## References
* [1] A. M. Sarroff, “Complex Neural Networks for Audio,” Tech. Rep. TR2018-859, Dartmouth College, Computer Science, Hanover, NH, May 2018.
* [2] P. P. Shinde and S. Shah, “A review of machine learning and deep learning applications,” in 2018 Fourth International Conference on Computing Communication Control and Automation (ICCUBEA), pp. 1–6, 2018.
* [3] A. Hirose and S. Yoshida, “Comparison of complex- and real-valued feedforward neural networks in their generalization ability,” in Neural Information Processing - 18th International Conference, ICONIP, 2011, Shanghai, China, November 13-17, 2011, Proceedings, Part I, pp. 526–531, 2011.
* [4] A. Hirose, “Applications of complex-valued neural networks to coherent optical computing using phase-sensitive detection scheme,” Information Sciences \- Applications, vol. 2, no. 2, pp. 103 – 117, 1994.
* [5] A. Hirose, “Continuous complex-valued back-propagation learning,” Electronics Letters, vol. 28, pp. 1854–1855, Sep. 1992.
* [6] B. Widrow and M. E. Hoff, “Adaptive switching circuits,” in 1960 IRE WESCON Convention Record, Part 4, (New York), pp. 96–104, IRE, 1960.
* [7] F. F. Rosenblatt, “The perceptron: a probabilistic model for information storage and organization in the brain.,” Psychological review, vol. 65 6, pp. 386–408, 1958.
* [8] B. Widrow, J. McCool, and M. Ball, “The complex lms algorithm,” Proceedings of the IEEE, vol. 63, pp. 719–720, April 1975.
* [9] D. H. Brandwood, “A complex gradient operator and its application in adaptive array theory,” IEE Proceedings H - Microwaves, Optics and Antennas, vol. 130, pp. 11–16, February 1983.
* [10] W. Wirtinger, “Zur formalen theorie der funktionen von mehr komplexen veränderlichen,” Mathematische Annalen, vol. 97, pp. 357–375, Dec 1927.
* [11] A. Hirose and S. Yoshida, “Generalization characteristics of complex-valued feedforward neural networks in relation to signal coherence,” IEEE Transactions on Neural Networks and Learning Systems, vol. 23, pp. 541–551, April 2012.
* [12] D. P. Reichert and T. Serre, “Neuronal synchrony in complex-valued deep networks,” CoRR, vol. abs/1312.6115, 2014.
* [13] R. K. Srivastava, K. Greff, and J. Schmidhuber, “Training very deep networks,” in Advances in Neural Information Processing Systems 28 (C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, eds.), pp. 2377–2385, Curran Associates, Inc., 2015.
* [14] K. Cho, B. van Merrienboer, D. Bahdanau, and Y. Bengio, “On the properties of neural machine translation: Encoder-decoder approaches,” in SSST@EMNLP, 2014.
* [15] G. Shi, M. M. Shanechi, and P. Aarabi, “On the importance of phase in human speech recognition,” Trans. Audio, Speech and Lang. Proc., vol. 14, p. 1867–1874, Sept. 2006.
* [16] A. V. Oppenheim and J. S. Lim, “The importance of phase in signals,” Proceedings of the IEEE, vol. 69, no. 5, pp. 529–541, 1981.
* [17] I. Danihelka, G. Wayne, B. Uria, N. Kalchbrenner, and A. Graves, “Associative long short-term memory,” in Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ICML’16, pp. 1986–1994, JMLR.org, 2016.
* [18] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778, 2016.
* [19] M. Arjovsky, A. Shah, and Y. Bengio, “Unitary evolution recurrent neural networks,” in Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ICML’16, pp. 1120–1128, JMLR.org, 2016.
* [20] S. Wisdom, T. Powers, J. R. Hershey, J. L. Roux, and L. Atlas, “Full-capacity unitary recurrent neural networks,” in Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS’16, (USA), pp. 4887–4895, Curran Associates Inc., 2016.
* [21] G. Cybenko, “Approximation by superpositions of a sigmoidal function,” 1989.
* [22] A. Barron, “Approximation and estimation bounds for artificial neural networks,” vol. 14, pp. 243–249, 01 1991.
* [23] K.-I. Funahashi, “On the approximate realization of continuous mappings by neural networks,” Neural Networks, vol. 2, no. 3, pp. 183 – 192, 1989\.
* [24] K. Hornik, M. Stinchcombe, and H. White, “Multilayer feedforward networks are universal approximators,” Neural Networks, vol. 2, no. 5, pp. 359 – 366, 1989.
* [25] N. Benvenuto and F. Piazza, “On the complex backpropagation algorithm,” IEEE Transactions on Signal Processing, vol. 40, pp. 967–969, April 1992.
* [26] D. Hayakawa, T. Masuko, and H. Fujimura, “Applying complex-valued neural networks to acoustic modeling for speech recognition,” in 2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), pp. 1725–1731, Nov 2018.
* [27] Y. E. ACAR, M. CEYLAN, and E. YALDIZ, “An examination on the effect of cvnn parameters while classifying the real-valued balanced and unbalanced data,” in 2018 International Conference on Artificial Intelligence and Data Processing (IDAP), pp. 1–5, Sep. 2018.
* [28] Y. Ishizuka, S. Murai, Y. Takahashi, M. Kawai, Y. Taniai, and T. Naniwa, “Modeling walking behavior of powered exoskeleton based on complex-valued neural network,” in 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 1927–1932, Oct 2018.
* [29] Q. Yi, L. Xiao, Y. Zhang, B. Liao, L. Ding, and H. Peng, “Nonlinearly activated complex-valued gradient neural network for complex matrix inversion,” in 2018 Ninth International Conference on Intelligent Control and Information Processing (ICICIP), pp. 44–48, Nov 2018\.
* [30] H. H. Çevik, Y. E. Acar, and M. Çunkaş, “Day ahead wind power forecasting using complex valued neural network,” in 2018 International Conference on Smart Energy Systems and Technologies (SEST), pp. 1–6, Sep. 2018\.
* [31] D. Gleich and D. Sipos, “Complex valued convolutional neural network for terrasar-x patch categorization,” in EUSAR 2018; 12th European Conference on Synthetic Aperture Radar, pp. 1–4, June 2018.
* [32] W. Gong, J. Liang, and D. Li, “Design of high-capacity auto-associative memories based on the analysis of complex-valued neural networks,” in 2017 International Workshop on Complex Systems and Networks (IWCSN), pp. 161–168, Dec 2017.
* [33] J. Zhang and Y. Wu, “A new method for automatic sleep stage classification,” IEEE Transactions on Biomedical Circuits and Systems, vol. 11, pp. 1097–1110, Oct 2017.
* [34] C. Popa, “Complex-valued convolutional neural networks for real-valued image classification,” in 2017 International Joint Conference on Neural Networks (IJCNN), pp. 816–822, May 2017.
* [35] S. Liu, M. Xu, J. Wang, F. Lu, W. Zhang, H. Tian, and G. Chang, “A multilevel artificial neural network nonlinear equalizer for millimeter-wave mobile fronthaul systems,” Journal of Lightwave Technology, vol. 35, pp. 4406–4417, Oct 2017.
* [36] M. Peker, B. Sen, and D. Delen, “A novel method for automated diagnosis of epilepsy using complex-valued classifiers,” IEEE Journal of Biomedical and Health Informatics, vol. 20, pp. 108–118, Jan 2016.
* [37] S. Amilia, M. D. Sulistiyo, and R. N. Dayawati, “Face image-based gender recognition using complex-valued neural network,” in 2015 3rd International Conference on Information and Communication Technology (ICoICT), pp. 201–206, May 2015.
* [38] Y. Maeda, T. Fujiwara, and H. Ito, “Robot control using high dimensional neural networks,” in 2014 Proceedings of the SICE Annual Conference (SICE), pp. 738–743, Sep. 2014.
* [39] Y. Liu, H. Huang, and T. Huang, “Gain parameters based complex-valued backpropagation algorithm for learning and recognizing hand gestures,” in 2014 International Joint Conference on Neural Networks (IJCNN), pp. 2162–2166, July 2014.
* [40] R. F. Olanrewaju, O. Khalifa, A. Abdulla, and A. M. Z. Khedher, “Detection of alterations in watermarked medical images using fast fourier transform and complex-valued neural network,” in 2011 4th International Conference on Mechatronics (ICOM), pp. 1–6, May 2011.
* [41] R. Haensch and O. Hellwich, “Complex-valued convolutional neural networks for object detection in polsar data,” in 8th European Conference on Synthetic Aperture Radar, pp. 1–4, June 2010.
* [42] H. Nait-Charif, “Complex-valued neural networks fault tolerance in pattern classification applications,” in 2010 Second WRI Global Congress on Intelligent Systems, vol. 3, pp. 154–157, Dec 2010.
* [43] T. Kitajima and T. Yasuno, “Output prediction of wind power generation system using complex-valued neural network,” in Proceedings of SICE Annual Conference 2010, pp. 3610–3613, Aug 2010.
* [44] S. Fukami, T. Ogawa, and H. Kanada, “Regularization for complex-valued network inversion,” in 2008 SICE Annual Conference, pp. 1237–1242, Aug 2008.
* [45] M. F. Amin, M. M. Islam, and K. Murase, “Single-layered complex-valued neural networks and their ensembles for real-valued classification problems,” in 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), pp. 2500–2506, June 2008.
* [46] I. Nishikawa, K. Sakakibara, T. Iritani, and Y. Kuroe, “2 types of complex-valued hopfield networks and the application to a traffic signal control,” in Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005., vol. 2, pp. 782–787 vol. 2, July 2005.
* [47] A. Yadav, D. Mishra, S. Ray, R. N. Yadav, and P. K. Kalra, “Representation of complex-valued neural networks: a real-valued approach,” in Proceedings of 2005 International Conference on Intelligent Sensing and Information Processing, 2005., pp. 331–335, Jan 2005.
* [48] M. Kataoka, M. Kinouchi, and M. Hagiwara, “Music information retrieval system using complex-valued recurrent neural networks,” in SMC’98 Conference Proceedings. 1998 IEEE International Conference on Systems, Man, and Cybernetics (Cat. No.98CH36218), vol. 5, pp. 4290–4295 vol.5, Oct 1998.
* [49] M. Kinouchi and M. Hagiwara, “Memorization of melodies by complex-valued recurrent network,” in Proceedings of International Conference on Neural Networks (ICNN’96), vol. 2, pp. 1324–1328 vol.2, June 1996.
* [50] X. Tan, M. Li, P. Zhang, Y. Wu, and W. Song, “Complex-valued 3-d convolutional neural network for polsar image classification,” IEEE Geoscience and Remote Sensing Letters, vol. 17, no. 6, pp. 1022–1026, 2020.
* [51] G. M. Georgiou and C. Koutsougeras, “Complex domain backpropagation,” IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, vol. 39, pp. 330–334, May 1992.
* [52] S. Hu, S. Nagae, and A. Hirose, “Millimeter-wave adaptive glucose concentration estimation with complex-valued neural networks,” IEEE Transactions on Biomedical Engineering, pp. 1–1, 2018.
* [53] S. Hu and A. Hirose, “Proposal of millimeter-wave adaptive glucose-concentration estimation system using complex-valued neural networks,” in IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium, pp. 4074–4077, July 2018.
* [54] R. Hata and K. Murase, “Multi-valued autoencoders for multi-valued neural networks,” in 2016 International Joint Conference on Neural Networks (IJCNN), pp. 4412–4417, July 2016.
* [55] T. Ding and A. Hirose, “Fading channel prediction based on combination of complex-valued neural networks and chirp z-transform,” IEEE Transactions on Neural Networks and Learning Systems, vol. 25, pp. 1686–1695, Sep. 2014.
* [56] Y. Suzuki and M. Kobayashi, “Complex-valued bidirectional auto-associative memory,” in The 2013 International Joint Conference on Neural Networks (IJCNN), pp. 1–7, Aug 2013.
* [57] K. Mizote, H. Kishikawa, N. Goto, and S. Yanagiya, “Optical label routing processing for bpsk labels using complex-valued neural network,” Journal of Lightwave Technology, vol. 31, pp. 1867–1876, June 2013.
* [58] A. Y. H. Al-Nuaimi, M. Faijul Amin, and K. Murase, “Enhancing mp3 encoding by utilizing a predictive complex-valued neural network,” in The 2012 International Joint Conference on Neural Networks (IJCNN), pp. 1–6, June 2012.
* [59] J. Hu, Z. Li, Z. Hu, D. Yao, and J. Yu, “Spam detection with complex-valued neural network using behavior-based characteristics,” in 2008 Second International Conference on Genetic and Evolutionary Computing, pp. 166–169, Sep. 2008.
* [60] I. Nishikawa, T. Iritani, and K. Sakakibara, “Improvements of the traffic signal control by complex-valued hopfield networks,” in The 2006 IEEE International Joint Conference on Neural Network Proceedings, pp. 459–464, July 2006.
* [61] T. Nitta and Y. Kuroe, “Hyperbolic gradient operator and hyperbolic back-propagation learning algorithms,” IEEE Transactions on Neural Networks and Learning Systems, vol. 29, pp. 1689–1702, May 2018.
* [62] Y. Kominami, H. Ogawa, and K. Murase, “Convolutional neural networks with multi-valued neurons,” in 2017 International Joint Conference on Neural Networks (IJCNN), pp. 2673–2678, May 2017.
* [63] D. P. Mandic, “Complex valued recurrent neural networks for noncircular complex signals,” in 2009 International Joint Conference on Neural Networks, pp. 1987–1992, June 2009.
* [64] A. I. Hanna and D. P. Mandic, “A normalised complex backpropagation algorithm,” in 2002 IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 1, pp. I–977–I–980, May 2002.
* [65] T. Kim and T. Adali, “Fully complex backpropagation for constant envelope signal processing,” in Neural Networks for Signal Processing X. Proceedings of the 2000 IEEE Signal Processing Society Workshop (Cat. No.00TH8501), vol. 1, pp. 231–240 vol.1, Dec 2000.
* [66] T. Kim and T. Adali, “Fully complex multi-layer perceptron network for nonlinear signal processing,” Journal of VLSI signal processing systems for signal, image and video technology, vol. 32, pp. 29–43, Aug 2002.
* [67] S. Scardapane, S. Van Vaerenbergh, A. Hussain, and A. Uncini, “Complex-valued neural networks with nonparametric activation functions,” IEEE Transactions on Emerging Topics in Computational Intelligence, pp. 1–11, 2018.
* [68] M. Kobayashi, “Noise robust projection rule for hyperbolic hopfield neural networks,” IEEE Transactions on Neural Networks and Learning Systems, pp. 1–5, 2019.
* [69] C. Popa, “Complex-valued deep boltzmann machines,” in 2018 International Joint Conference on Neural Networks (IJCNN), pp. 1–8, July 2018\.
* [70] M. Kobayashi, “Stability of rotor hopfield neural networks with synchronous mode,” IEEE Transactions on Neural Networks and Learning Systems, vol. 29, pp. 744–748, March 2018.
* [71] Y. Kuroe, N. Hashimoto, and T. Mori, “On energy function for complex-valued neural networks and its applications,” in Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP ’02., vol. 3, pp. 1079–1083 vol.3, Nov 2002.
* [72] Q. Yuan, D. Li, Z. Wang, C. Liu, and C. He, “Channel estimation and pilot design for uplink sparse code multiple access system based on complex-valued sparse autoencoder,” IEEE Access, pp. 1–1, 2019.
* [73] L. Li, L. G. Wang, F. L. Teixeira, C. Liu, A. Nehorai, and T. J. Cui, “Deepnis: Deep neural network for nonlinear electromagnetic inverse scattering,” IEEE Transactions on Antennas and Propagation, vol. 67, pp. 1819–1825, March 2019.
* [74] J. Gao, B. Deng, Y. Qin, H. Wang, and X. Li, “Enhanced radar imaging using a complex-valued convolutional neural network,” IEEE Geoscience and Remote Sensing Letters, vol. 16, pp. 35–39, Jan 2019.
* [75] S. Gu and L. Ding, “A complex-valued vgg network based deep learing algorithm for image recognition,” in 2018 Ninth International Conference on Intelligent Control and Information Processing (ICICIP), pp. 340–343, Nov 2018.
* [76] M. Matlacz and G. Sarwas, “Crowd counting using complex convolutional neural network,” in 2018 Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA), pp. 88–92, Sep. 2018.
* [77] C. Popa, “Deep hybrid real-complex-valued convolutional neural networks for image classification,” in 2018 International Joint Conference on Neural Networks (IJCNN), pp. 1–6, July 2018.
* [78] I. Shafran, T. Bagby, and R. J. Skerry-Ryan, “Complex evolution recurrent neural networks (cernns),” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5854–5858, April 2018.
* [79] Y. Lee, C. Wang, S. Wang, J. Wang, and C. Wu, “Fully complex deep neural network for phase-incorporating monaural source separation,” in 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 281–285, March 2017.
* [80] Y.-S. Lee, K. Yu, S.-H. Chen, and J.-C. Wang, “Discriminative training of complex-valued deep recurrent neural network for singing voice separation,” in Proceedings of the 25th ACM International Conference on Multimedia, MM ’17, (New York, NY, USA), pp. 1327–1335, ACM, 2017.
* [81] M. Kobayashi, “O(2)-valued hopfield neural networks,” IEEE Transactions on Neural Networks and Learning Systems, pp. 1–6, 2019.
# Hybrid leg compliance enables robots to operate with sensorimotor delays and
low control update frequencies
Milad Shafiee Ashtiani ‡, Alborz Aghamaleki Sarvestani ‡ and Alexander Badri-
Spröwitz ∗
###### Abstract
Animals locomote robustly and with agility despite the significant sensorimotor delays of their nervous systems. The sensorimotor control of legged robots is implemented at much higher frequencies, often in the kilohertz range, with sensor and actuator delays in the low millisecond range. Yet especially at harsh impacts with unknown touch-down timing, legged robots show unstable controller behavior, while animals appear unaffected. Here we examine this discrepancy and suggest a hybrid robotic leg and controller design. We implemented a physical, parallel joint compliance dimensioned in combination with an active, virtual leg length controller. We present an extensive set of systematic experiments, both in computer simulation and in hardware. Our hybrid leg and controller design shows previously unseen robustness in the presence of sensorimotor delays up to 60 ms, or control frequencies as low as 20 Hz, for a drop-landing task from 1.3 leg lengths height and with a passive compliance ratio of 0.7. In computer simulations, we report successful drop-landings of the hybrid compliant leg from 3.8 leg lengths (1.2 m) for a 2 kg quadruped robot with a 100 Hz control frequency and a sensorimotor delay of 35 ms. The results of our hybrid leg design and control provide a further explanation for the robust performance of animals, and for the resulting discrepancy between animals and legged robots.
## 1 Keywords:
legged robots, parallel and passive compliance, hybrid actuation and leg
design, sensorimotor delay, feedback, latency
## 2 Introduction
Animals make use of complex networks of mono-articular and multi-articular muscle-tendon units to create the joint torque and work needed for bodyweight support (Biewener, 1989). The time until an animal’s actuator (muscle) responds to an external stimulus depends on nerve conduction velocity and other parameters. The delay is 30 ms and higher in a cat-sized animal (More and Donelan, 2018). In comparison, the entire stance phase at 4 Hz running with a duty factor of 0.4 lasts only 100 ms. Yet animals are seemingly unaffected by the significant delay governing their neuromuscular control, especially at the arguably harshest locomotion event: the touch-down impact. Instead, evidence shows that running birds traverse unforeseen pot-hole perturbations with ease (Daley et al., 2006).
On the other end stands an emerging class of latest-generation legged robots driven by ‘proprioceptive’ actuation and control (Park et al., 2017), ‘quasi-direct-drive’ strategies (Ding and Park, 2017), and the like. These robots are capable of truly dynamic behaviors and agile maneuvers, with high jumps, landings, and fast locomotion (Grimminger et al., 2020; Park et al., 2017). These systems require high-frequency control loops of 500 Hz and more, with communication and control delays of a few milliseconds (Bledt et al., 2018; Grimminger et al., 2020; Li et al., 2020). Such fully actuated robots consume energy even for a simple standing task. Proprioceptive and other legged robot designs work well, especially during the stance phase, but they show limitations in the transition from swing to stance phase. Touch-downs can be harsh and include rapid leg loading from zero to multiple body weights in a few milliseconds (Mo et al., 2020), and side-effects from wobbling masses (Günther et al., 2003). Uncertainties in touch-down timing (Daley and Biewener, 2006) are problematic because the robot’s actuators are often gain and impedance scheduled (Hammoud et al., 2020; Hubicki et al., 2016). Learned control strategies can mitigate some of the effects of sensor noise and timing uncertainties and also function in unstructured terrain (Bledt et al., 2018). To function well, the robot’s legs must rapidly establish secure and steady ground contact and immediately produce the joint work to support the robot’s weight.
In teleoperation, one application of legged robots, time delays are inherent and are caused by the communication distance (Varkonyi et al., 2014). Sensorimotor delays in legged robotics can be defined more broadly for cases where feedback is transmitted late compared to the expected time-frame to react, i.e., at step-down or push-like perturbations (Daley, 2018; Daley and Biewener, 2006). Recent hierarchical, optimization-based controllers behave robustly and are versatile, but they depend heavily on the exact timing of the sensorimotor control (Shafiee et al., 2019; Shafiee-Ashtiani et al., 2017). In general, force feedback works well in minimal-delay systems, and a large enough feedback delay eventually causes control instabilities. Thus, the communication delay is a significant challenge for legged robots in real-world teleoperation applications.
This leads us back to our initial question: how do animals successfully manage an in-built neuromuscular control with large sensorimotor delays, yet show no obvious signs of decline in robustness, responsiveness, or agility? More and Donelan report a power-law relation between the total neuromuscular delay and the animal’s mass:
$t_{Delay}=0.031M^{0.21}$ (1)
with $M$ the animal’s mass in kilograms, and $t_{Delay}$ the sensorimotor delay in seconds. For example, for animals with 600 g and 2 kg body mass, the sensorimotor delays are 27 ms and 35 ms, respectively.
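As a quick check of this scaling law, Equation 1 can be evaluated directly; a minimal sketch with the constants taken from the equation above:

```python
def sensorimotor_delay(mass_kg):
    """Total neuromuscular delay in seconds, Equation 1 (More and Donelan)."""
    return 0.031 * mass_kg ** 0.21

# the two examples from the text: 600 g and 2 kg animals give roughly
# 27.8 ms and 35.9 ms (the text rounds these to 27 ms and 35 ms)
for mass in (0.6, 2.0):
    print(round(sensorimotor_delay(mass) * 1000, 1), "ms")
```

Note the weak exponent of 0.21: a tripling of body mass adds only a few milliseconds of delay.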
Inspired by the animal locomotion apparatus, legged robots increasingly apply designs with passive and active compliance (Ambrose and Ames, 2020; Nasiri et al., 2016). Designs featuring ‘series-elastic actuation’ (Pratt and Williamson, 1995) can improve energy efficiency, robustness, and interaction safety (Calanca et al., 2015; Hutter et al., 2011), and protect the robot’s actuators. Compliant and elastic parts applied in parallel to the actuator system are labeled ‘parallel-elastic actuation’ (Ambrose and Ames, 2020; Plooij et al., 2016; Niehues et al., 2015; Roozing, 2018; Ruppert and Spröwitz, 2019; Spröwitz et al., 2013).
Legged robots with in-series and parallel joint elasticity can locomote purely
driven by feed-forward control, without sensor feedback informing the
controller of the robot’s touch-down status, or the overall robot state
(Spröwitz et al., 2013; Narioka et al., 2012; Ruppert and Spröwitz, 2019;
Spröwitz et al., 2018). Compliance in-parallel to the actuator system can
improve energy efficiency (Roozing et al., 2019; Liu et al., 2018; Yesilevskiy
et al., 2018), and increase the system’s stability (AhmadSharbafi et al., 2020). Most importantly for the touch-down event, parallel compliance provides an _immediate, physical_ response from the spring-equipped leg. The leg’s spring charges without delay, sensory feedback, or control input, and it carries the robot’s weight.
been shown to mitigate step-down perturbations similarly to running animals
(Spröwitz et al., 2013; Daley and Biewener, 2006). But compliant elements
cause under-actuation and therefore lower the robot’s active control
authority. In parallel-elastic systems, some amount of control authority in
task-space remains, but the robot’s actuators will work against its full-
strength, in-parallel elasticities.
Animals are different in many ways, and it seems they benefit from combining passive and active compliant leg actuation (Alexander, 1990). Physical, compliant structures allow a design where the control task is partially taken over by the body’s mechanics (Blickhan et al., 2007). Keeping in mind the animals’ discrepancy between neuromuscular control delays (More and Donelan, 2018; More et al., 2010) and high locomotion robustness, we wondered whether these aspects are related.
We developed a hybrid of the two approaches, merging what we see in legged animals, to gain the best of both: A) passively spring-loaded legged robots that work well with open-loop control, i.e., without control feedback, and B) high-bandwidth, fully actuated legged robots with full control authority and rapid response times.
Inspired by biological systems, we show a hybrid robot mechanism and control design with _complementing levels of passive and active joint compliance_. The concept successfully overcomes significant sensorimotor delays and works with lower sensorimotor control update frequencies than state-of-the-art fully actuated legged robots, which operate with delays in the low millisecond range (Grimminger et al., 2020; Park et al., 2017). Our hybrid robot leg draws comparatively lower actuator power due to its partial, in-parallel passive compliance. The proposed hybrid design leads to a balanced level of control authority, between that of robots with passively compliant legs and that of fully state-controlled robots.
In Section 3, we present a theoretical stability analysis of a simplified
joint design with hybrid passive and active stiffness in the presence of
sensorimotor delays. We then present extensive computer simulation and
hardware experiments and investigate the effect of varying actuation update
control frequencies and sensorimotor delays on a robotic leg with varying
ratios of passive and active stiffness and a simulated quadruped robot
equipped with these legs (Section 4). We conclude our work in Section 5.
## 3 Materials and methods
Figure 1: Different sources of delay in the robotic leg and its biological counterpart, and modeling of the leg as a simple pendulum.
We first analyse the poles of a linear, simplified actuated system with sensorimotor delays. Its mechanics consist of a pendulum mounted with parallel compliant elements (Figure 1B). We then implement hardware experiments and simulations to investigate the effect of hybrid active and passive compliance on a multi-body leg’s control performance. We quantify the total system stiffness as the sum of the active stiffness and the passive (spring-based) stiffness acting in parallel at a robotic knee joint (Figure 1C):
$K_{Total}=K_{Active}+K_{Passive}$ (2)
where $K_{Passive}$ ($\frac{Nm}{rad}$) is the joint’s passive rotational stiffness, $K_{Active}$ ($\frac{Nm}{rad}$) is the joint’s active (virtual) rotational stiffness provided by its actuator, and $K_{Total}$ ($\frac{Nm}{rad}$) is the total rotational stiffness of the joint. We define the ratio $\lambda_{Passive}$ as the passive stiffness over the total stiffness:
$\lambda_{Passive}=\frac{K_{Passive}}{K_{Total}}$ (3)
For example, a low passive compliance ratio of $0.1$ indicates that the knee spring supplies $10\,\%$ of the knee’s stiffness to carry the robot, and the active, virtual leg compliance supplies the remaining $90\,\%$.
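The stiffness split of Equations 2 and 3 can be expressed directly; a minimal sketch (the function name is ours):

```python
def split_stiffness(k_total, lam_passive):
    """Split the total joint stiffness (Nm/rad) into its passive (spring)
    and active (virtual) parts, following Equations 2 and 3."""
    k_passive = lam_passive * k_total
    k_active = k_total - k_passive
    return k_active, k_passive

# lambda_passive = 0.1: the spring supplies 10 % of a 1.15 Nm/rad joint
k_active, k_passive = split_stiffness(1.15, 0.1)
print(round(k_active, 3), round(k_passive, 3))  # 1.035 0.115
```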
### 3.1 Theoretical analysis of a reduced order model
We analysed a simplified system without impacts, with a load and parallel
compliance, to analytically quantify the effects of sensorimotor delays. The
reduced order model consists of a strut-leg mounted as a single degree-of-
freedom pendulum (Figure 1B), and represents a robot’s lower leg. The
equations governing the pendulum motion are:
$I\ddot{\theta}+mgl\sin(\theta-\theta_{d})+K_{Passive}(\theta-\theta_{d})+B(\dot{\theta}-\dot{\theta}_{d})=\tau_{knee}$
(4)
where $B$ is the physical system’s damping, $K_{Passive}$ is the stiffness of the parallel compliance element, $I$ is the moment of inertia, $m$ is the mass, $l$ is the distance from the center of mass to the pivot point, and $g$ is the gravitational acceleration. $\theta_{d}$ is the equilibrium knee joint angle corresponding to the orientation of a relaxed spring, $\theta$ is the joint angle, and $\tau_{knee}$ is the control torque input to the knee joint. We implement the input torque as an active compliance:
$I\ddot{\theta}+mgl\sin(\theta-\theta_{d})+K_{Passive}(\theta-\theta_{d})+B(\dot{\theta}-\dot{\theta}_{d})=-K_{Active}(\theta_{feedback}-\theta_{d})$
(5)
where $K_{Active}$ is the active stiffness realized through the motor torque, and $\theta_{feedback}$ is the joint angle read by the sensor. We assume a small enough angular deviation of the pendulum around the equilibrium point, $\sin(\theta-\theta_{d})\simeq(\theta-\theta_{d})$, so that Equation 5 becomes a linear differential equation. We convert Equation 5 to the Laplace domain
and incorporate a fixed time delay $t_{d}$ in the feedback loop of the control
input (active compliance). The resulting closed loop system transfer function
can be presented in the frequency domain as:
$\frac{\Theta_{s}}{\Theta_{ds}}=\frac{K_{Active}e^{-t_{d}s}+mgl+K_{Passive}}{s^{2}I+Bs+K_{Active}e^{-t_{d}s}+K_{Passive}+mgl}$
(6)
The displacement of the system’s poles can be analyzed to understand the effect that a combination of active and passive compliance has on the closed-loop stability in the presence of sensorimotor delay. We linearised the system’s exponential time-delay term with a third-order Padé approximation.
The desired, full joint stiffness $K_{Total}$ is achieved with the combination
of active and passive joint compliance (Equation 3).
Our analysis of the system’s poles is illustrated in Figure 2A. The $"\times"$
mark shows cases where the joint’s compliance is fully active, i.e.,
$\lambda_{Passive}=0$. The $"\circ"$ mark indicates cases where $70\%$ of the
total compliance is caused by the spring, i.e., $\lambda_{Passive}=0.7$.
Figure 2A shows that for $\lambda_{Passive}=0$ and with increasing feedback-loop delay, the dominant system poles move rapidly from the stable region towards the unstable region at the imaginary axis. For passive compliance in parallel with active actuator compliance, the rate of divergence is much lower. The system’s step response (Figure 2B) indicates that increasing the sensorimotor delay with $\lambda_{Passive}=0$ leads to high oscillations, and a resonance effect eventually destabilizes the joint. However, in the case of combined passive and active stiffness with an increased delay of $20\,ms$, the closed-loop system’s step response is stable and smooth (Figure 2C, purple line).
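The pole analysis can be reproduced numerically. The sketch below builds the closed-loop characteristic polynomial of Equation 6 with a third-order ([3/3]) Padé approximation of the delay term. The pendulum length $l=0.16\,m$ and the inertia $I=ml^{2}$ are our assumptions, since Figure 2 states only $m$, $K_{total}$, and $B$:

```python
import numpy as np

def closed_loop_poles(k_total, lam_passive, t_d, m=0.5, l=0.16, B=0.14, g=9.81):
    """Poles of the delayed closed loop (Equation 6), with the exponential
    delay exp(-t_d*s) replaced by a [3/3] Pade approximation.
    m, k_total, B follow Figure 2; l and I = m*l^2 are assumptions."""
    I = m * l ** 2
    k_active = (1 - lam_passive) * k_total
    k_passive = lam_passive * k_total
    # [3/3] Pade: exp(-x) ~ N(x)/D(x) with N(x) = D(-x), where x = t_d * s
    D = np.array([1 / 120, 1 / 10, 1 / 2, 1.0])      # highest power of x first
    N = D * np.array([-1.0, 1.0, -1.0, 1.0])
    powers = np.array([3, 2, 1, 0])
    Ds = D * t_d ** powers                            # rewrite in powers of s
    Ns = N * t_d ** powers
    plant = np.array([I, B, k_passive + m * g * l])   # I s^2 + B s + K_p + mgl
    # characteristic polynomial: plant(s) * D(t_d s) + K_active * N(t_d s) = 0
    char = np.polyadd(np.polymul(plant, Ds), k_active * Ns)
    return np.roots(char)

# fully active joint vs. 70 % passive joint, same total stiffness, 20 ms delay
for lam in (0.0, 0.7):
    poles = closed_loop_poles(k_total=1.15, lam_passive=lam, t_d=0.020)
    print(lam, np.max(poles.real))
```

The dominant pole pair of the passive-dominant joint sits further left in the complex plane, mirroring the slower drift toward instability reported for Figure 2A.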
Figure 2: A) Pole analysis of a simplified pendulum system with in-parallel,
passive and active joint elasticity. The effects of varying amounts of delay
and passive compliance on the stability of the system are shown. Here, the
system’s mass is $m=0.5\;kg$, the total joint stiffness is
$K_{total}=1.15\;\frac{Nm}{rad}$, and the damping coefficient is
$B=0.14\;\frac{Nms}{rad}$. B) The system’s step response for varying delays
with $\lambda_{Passive}=0$. C) The system’s step response for varying delays
with $\lambda_{Passive}=0.7$.
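The step responses of Figures 2B and 2C can also be approximated by direct time-domain integration of Equation 5, storing the joint-angle history to emulate the feedback delay. This is a sketch under the same parameter assumptions as the pole analysis (the inertia $I=ml^{2}$ with $l=0.16\,m$ is not stated in the text):

```python
import math

def simulate_step(lam_passive, t_d, k_total=1.15, m=0.5, l=0.16, B=0.14,
                  g=9.81, theta_d=0.3, T=5.0, dt=0.0005):
    """Explicit-Euler integration of the delayed pendulum (Equation 5):
    a step from theta = 0 to the new equilibrium theta_d.  The feedback
    angle is read from a history buffer to emulate the sensorimotor delay."""
    I = m * l ** 2
    k_active = (1 - lam_passive) * k_total
    k_passive = lam_passive * k_total
    n_delay = int(round(t_d / dt))
    theta, omega = 0.0, 0.0
    history = [theta]
    for _ in range(int(round(T / dt))):
        # delayed feedback: the sensor reading from time t - t_d
        theta_fb = history[-(n_delay + 1)] if len(history) > n_delay else 0.0
        tau = -k_active * (theta_fb - theta_d)          # active compliance
        acc = (tau
               - m * g * l * math.sin(theta - theta_d)  # gravity term
               - k_passive * (theta - theta_d)          # parallel spring
               - B * omega) / I                         # damping
        omega += acc * dt
        theta += omega * dt
        history.append(theta)
    return history

# hybrid joint, 20 ms delay: the response settles at theta_d (cf. Figure 2C)
hist = simulate_step(lam_passive=0.7, t_d=0.020)
print(round(hist[-1], 3))  # 0.3
```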
### 3.2 Computer simulation of articulated robot legs
The previous results from the analysis of poles of a simplified, linearised
system provided insights about sensorimotor systems under considerable
sensorimotor delay, and hybrid joint compliance. Here we are extending our
characterization to an articulated leg with hybrid joint compliance,
sensorimotor delays, and impacts from touch-down. The landing task is one of the most challenging motions due to the large impulsive ground reaction force applied to the system. The landing load case is nonlinear, hybrid, and under-actuated, and it is essentially a system step response. A step-response analysis like the drop experiment applied here is a conventional way to characterize a system in control theory. We implemented our simulation in the
Pybullet simulator (Coumans and Bai, 2016–2019), and we performed extensive
computer simulations for the landing task, for a broad range of sensorimotor
delays, frequencies, and varying $\lambda_{Passive}$ values. We simulated a
single leg and a quadruped robot, both modified from the open-source quadruped
robot ’Solo’ (Grimminger et al., 2020).
In Figure 3, we depict the tested control and sensorimotor strategies. The
black curve is a schematic, desired knee motor torque trajectory, and the
control command frequency (red line) is measured in commands per second. For a
reference; the control frequency of active actuators in proprioceptive legged
robot is often around 1 kHz, i.e., a cycle period
$dt_{control}=\frac{1}{freq}$ of 1 ms. Here we are especially interested in
investigating scenarios with control frequencies much below 1 kHz. We defined
three strategies for applying torque with a duration of $dt_{Activation}$, and
a duty cycle $DC$, which is the fraction of $dt_{control}$ with a non-zero
actuator torque. $dt_{Activation}$ is defined as the duration of non-zero torque within each control cycle, i.e., $dt_{Activation}=DC\times dt_{control}$, and ranges from $1\text{\,}\mathrm{ms}$ to a maximum of $\frac{1}{freq}$. For $dt_{Activation,min}$, the
control command is applied for $1\text{\,}\mathrm{m}\mathrm{s}$, and then
reset to zero. For $dt_{Activation,max}$, the actuator will send a command
equal to the previous value until the control command is updated (Figure 3,
red line). We further simulate cases where the control command is applied
after a given sensorimotor delay (Figure 3, blue curve).
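The command strategies above amount to a zero-order hold with a duty cycle and a sensorimotor delay. The following sketch is our illustrative reconstruction of such a command generator, not the authors' controller code:

```python
def motor_command(t, desired_torque, freq, duty_cycle, delay):
    """Sample-and-hold motor command with duty cycle and sensorimotor delay.

    t:              current time [s]
    desired_torque: callable mapping time [s] -> desired torque [Nm]
    freq:           control update frequency [Hz]
    duty_cycle:     DC, fraction of each control cycle with non-zero torque
    delay:          sensorimotor delay [s]
    """
    dt_control = 1.0 / freq                  # cycle period, e.g. 1 ms at 1 kHz
    dt_activation = duty_cycle * dt_control  # active portion of each cycle
    t_delayed = t - delay                    # command based on delayed sensing
    if t_delayed < 0.0:
        return 0.0                           # no command before delay elapses
    cycle_start = (t_delayed // dt_control) * dt_control
    if t_delayed - cycle_start < dt_activation:
        return desired_torque(cycle_start)   # hold value sampled at cycle start
    return 0.0                               # torque reset within the cycle
```

With a duty cycle of 1, the command is held until the next update; with a smaller duty cycle it is reset to zero for the remainder of each cycle, as in Figure 3.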
Figure 3: Knee motor command for different combinations of control frequency,
frequency duty cycle, and sensorimotor delay. A) Shows a $100\text{\,}\%$ duty cycle at a relatively low update frequency. B) Shows a
set sensorimotor delay between the desired knee output torque, and the
commanded output torque. C) Shows a $50\text{\,}\%$ duty cycle at the same
control frequency as A). D) Indicates the portion of parallel, passive joint
compliance, of the joint's overall compliance. Here, a passive compliance ratio of $\lambda_{Passive}=0.5$ is shown, such that the knee's
mechanical spring produces half of the knee torque, and the virtual knee
spring produces the remaining knee torque.
The active compliance controller input is applied at the knee joint as:
$\tau_{knee,motor}=K_{Total}(1-\lambda_{Passive})(\theta_{feedback,Knee}-\theta_{d,knee})$
(7)
To represent the physical spring in Pybullet, we implement a spring torque
applied at the knee joint:
$\tau_{knee,spring}=K_{Total}(\lambda_{Passive})(\theta_{knee}-\theta_{d,knee})$
(8)
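Equations (7) and (8) split the total knee stiffness between the active (motor) and passive (spring) components. The minimal sketch below transcribes this split with symbol names taken from the equations; the separate feedback angle reflects that only the active part is subject to sensing delay:

```python
def hybrid_knee_torques(k_total, lam_passive, theta_feedback, theta_knee, theta_d):
    """Split the knee restoring torque between active and passive compliance.

    k_total:        total rotational knee stiffness K_Total [Nm/rad]
    lam_passive:    passive compliance ratio lambda_Passive in [0, 1]
    theta_feedback: knee angle seen by the (possibly delayed) controller [rad]
    theta_knee:     instantaneous knee angle acting on the physical spring [rad]
    theta_d:        desired knee angle [rad]
    """
    # Eq. (7): active, virtual-spring torque commanded to the knee motor
    tau_motor = k_total * (1.0 - lam_passive) * (theta_feedback - theta_d)
    # Eq. (8): passive torque of the physical spring, unaffected by delay
    tau_spring = k_total * lam_passive * (theta_knee - theta_d)
    return tau_motor, tau_spring
```

With zero delay (theta_feedback equal to theta_knee), the two torques sum to the full stiffness response $K_{Total}(\theta_{knee}-\theta_{d})$ for any $\lambda_{Passive}$.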
### 3.3 Setup hardware experiments
We modified a single leg of the 8-DoF open-source, quadruped robot ’Solo’ with
its proprioceptive design and controller architecture (Grimminger et al.,
2020). The leg has two active degrees of freedom, one at the hip and one at
the knee. Both leg segments are $0.16\>m$ long. A brushless motor (Antigravity
MN4004-kv380, T-Motor) drives a two-stage belt transmission with an overall
$9:1$ gear ratio. Two encoders (AEDT-9810-T00, Avago) measure the motor’s
rotor position, which is recalculated into joint angles. We added a physical
spring (SWY 16.5-30, Misumi) in parallel to the knee. The spring is inserted
into the knee joint via a tendon and a pulley with
$18.9\text{\,}\mathrm{m}\mathrm{m}$ radius (Figure 1C). We designed the spring
mount to rapidly exchange springs by softer or harder ones, between
experiments. To simplify the touch-down scenario, the robot leg was dropped
while guided by a vertical rail (Figure 4A). The hip joint is constrained to
follow half of the knee joint angle at all times, controlled by a position
controller, and is meant to create foot contact vertically below the hip
joint. We monitor the robot's vertical position with a draw-wire sensor (LX-PA-40, WayCon); two draw-wire sensors mounted at the top and bottom cancel the unwanted tension forces produced by the sensors themselves. With the vertical robot
leg position, we quantify the robot’s landing behavior and evaluate the
effectiveness of the proposed hybrid active/passive joint stiffness framework.
The recorded vertical position is sampled by an analog-to-digital (A/D) port
of the brushless motor driver board. The brushless motor driver board sends
motor position and vertical position data via a communication board to a PC,
and it receives the motor control commands, via a SPI Protocol. The PC
communication board is the bridge between the brushless motor driver board and
the PC (Intel® Xeon(R) W-2145 CPU @ 3.70GHz $\times$ 16, 64-bit, RAM 62.5 GB,
Ubuntu 18.04) via EtherCAT. We wrote a custom robot control program in a
Python wrapper, which communicates via the PC communication board with the
robot leg. The Python wrapper adds a time stamp to all input parameters and
sensory data such as joint angles, motor currents, and body height, and saves
all data into a text file for further analysis. A custom Matlab script was
written to post-process and plot the data.
Figure 4: Experimental setup. A) A 2-DoF hybrid compliant leg; a one-
directional spring (‘passive compliance’) extends the knee joint via a knee
tendon and a knee pulley/cam. Knee springs with varying stiffness were mounted
during the experiments, supporting between $0\text{\,}\%$ and $100\text{\,}\%$ of the
robot’s weight. A rail guides the robot’s vertical drop, and a set of
potentiometers measures the robot’s height. The leg’s active knee actuation
acts in combination with the in-parallel mounted knee spring. B) Setup details.
C) The URDF model of the hybrid compliant robot leg in the Pybullet
simulation.
## 4 Results and discussion
In this section we present the results from the computer simulation of a
single robot leg with hybrid joint compliance, and from dropping an equivalent
quadruped robot in simulation. We then present the results from our hardware
experiments with a single leg mounted to a vertical slider.
### 4.1 Single leg computer simulation
We study the effects of varying combinations of sensorimotor delay, control
loop update frequency, and the ratio of passive compliance $\lambda_{Passive}$
on the compliance controller performance during landing. We derive a reference
landing hip height trajectory from dropping a parallel compliant leg with
$\lambda_{Passive}=1$. Deviations in settling time and final hip height are
accepted as successful landings, within given limits.
We performed computer simulations to quantify the viability of the landing
task. We varied $\lambda_{Passive}$ from $0$ to $1$ by steps of $0.05$, the
sensorimotor delay from $0\>ms$ to $60\>ms$ in steps of $5\>ms$, and the
sensorimotor control loop update frequency in a range of
$[20,\>50,\>250,\>1000]\>Hz$. In Pybullet, we set joint damping values of
$0.01\frac{Nms}{rad}$ and $0.05\frac{Nms}{rad}$ for the hip and knee,
respectively. The weight of the single robotic leg is $0.6\>kg$, the weight of
the quadruped is $2.0\>kg$. We chose the total stiffness for the knee joint
compliance for an approximate leg length deflection of $10\%$ during mid-
stance, after dropping the leg from a height of $42.5\>cm$. We also assumed
that the robot’s weight is concentrated at its hip joint. We realized a
$\lambda_{Passive}=1$ rotational spring as a combination of a pulley with
radius $r=0.0189\>m$, and a linear spring with a stiffness of
$K=4680\>\frac{N}{m}$ acting at the knee joint. The total rotational stiffness
is then $Kr^{2}=1.67\>\frac{Nm}{rad}$ as a combination of passive and active
compliance components. In simulation, the delay and frequency modes do not
affect $\tau_{knee,spring}$, as it represents a physical spring. Instead, delay and frequency variations only affect the active part of the joint stiffness, $\tau_{knee,motor}$. We dropped the robot from $0.425\>m$
height, which is 1.3 times the length of both leg segments. The robot’s body
was constrained to move vertically only, and we monitored the robot’s height
during landing. The vertical hip trajectory represents the system’s step
response. We then assessed the robot’s controller performance by measuring the
settling time and the final hip height. The settling time is the duration until the difference $|\mathrm{Height(t)-Height_{final}}|$ between the hip height trajectory $\mathrm{Height(t)}$ and the steady-state hip height $\mathrm{Height_{final}}$ stays within a fixed tolerance band. The reference final hip height is that of a $\lambda_{Passive}=1$ robot leg. We classified cases with a settling time above $0.7\>s$, or a final hip height below $0.3\>m$, as failing cases.
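The failure criteria above can be expressed compactly. In the sketch below, the thresholds of $0.7\>s$ and $0.3\>m$ come from the text, while the width of the settling tolerance band is an assumed parameter, since the text does not state it:

```python
def classify_landing(times, heights, tol=0.01,
                     max_settling_time=0.7, min_final_height=0.3):
    """Classify a drop-landing from the sampled vertical hip trajectory.

    times, heights:    hip-height trajectory samples [s], [m]
    tol:               ASSUMED settling tolerance band half-width [m]
    max_settling_time: failure threshold on settling time [s] (from the text)
    min_final_height:  failure threshold on final hip height [m] (from the text)
    """
    h_final = heights[-1]  # steady-state hip height
    # settling time: last instant the trajectory leaves the tolerance band
    settling_time = 0.0
    for t, h in zip(times, heights):
        if abs(h - h_final) > tol:
            settling_time = t
    success = settling_time <= max_settling_time and h_final >= min_final_height
    return success, settling_time, h_final
```

Applied to a damped oscillation settling near the reference height, the function reports a short settling time and a successful landing; a trajectory that converges below $0.3\>m$ is flagged as a failure.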
In Figure 5, the results of 315 drop-landing simulations are illustrated, with
varying sensorimotor delays and $\lambda_{Passive}$ settings, and for a single
control frequency of $1\>kHz$. The grey points represent failed simulation
cases with a steady-state hip height of less than $0.3\>m$ or settling times
higher than $0.7\>s$. Our computer simulation results show that for $\lambda_{Passive}=0.0$, all scenarios with sensorimotor delays above $25\>ms$ failed. Yet by setting $\lambda_{Passive}$ higher than $0.7$, the system is stable in the presence of $60\>ms$ delays. The
computer simulation results demonstrate that the hybrid compliant leg has
stable impact control regimes in the presence of large sensorimotor delays,
with an appropriate combination of passive and active compliance.
Figure 5: Results from 315 computer simulations for a control update frequency
of $1000\;Hz$ for the drop-landing task of a simulated robot leg. The passive
compliant ratio $\lambda_{passive}$ was varied between $0$ to $1$ in steps of
$0.05$, and the sensorimotor delay between $0\>ms$ to $60\>ms$ by steps
$5\>ms$. The grey data points and the grey hip height trajectory show a failed
landing task with too much initial deflection and an excessive settling time.
The colored data points and trajectories show viable landings. Viable landings
are shown for sensorimotor delays up to $60\text{\,}\mathrm{m}\mathrm{s}$, in
combination with a passive compliant ratio of $\lambda_{Passive}=0.6$.
We then investigated the effect of varying the control update frequency, and
we performed computer simulations for frequencies
$[20,\>50,\>100,\>250,\>1000]\>Hz$, and duty cycles of $[25,\;50,\;100]\>\%$. The results are shown in Figure 6. Most visible is a decreasing viable area for all three duty cycles as the control frequency is reduced. Comparing duty cycles of $25\%$ and $100\;\%$ (Figure 6A, Figure 6C) shows that for a reduced duty cycle, the viable areas did not change much overall. Low amounts of
$\lambda_{Passive}\approx 0.2$ only lead to viable landings in combination
with a DC of $50\text{\,}\%$ or the highest update frequency
($1\text{\,}\mathrm{k}\mathrm{H}\mathrm{z}$). Figure 6C shows that a DC of $100\>\%$ yields a similar viable area at almost all control frequencies $[\>100,\>250,\>1000]\>Hz$. Only when switching to control frequencies as low as
$20\>Hz$, the viable area largely changes. For a DC of $50\;\%$ (Figure 6B),
the viable area changes slightly when switching between control frequencies
$[\>50,\>100,\>250]\>Hz$. The biggest viable area changes are visible when
changing from $1000\>Hz$ to $250\>Hz$, or from $50\>Hz$ to $20\>Hz$. At a DC of
$25\;\%$ (Figure 6A) and when switching between control frequencies of
$1000\>Hz$ to $250\>Hz$, the viable area is reduced drastically. The change of
control frequency for the remaining values has seemingly less influence on the
change of viable area. For control frequencies of $20,\>50,\>100,\>250\>Hz$ and high sensorimotor delays around $50,\>60\>ms$, reducing the duty cycle enlarges the viable area for large $\lambda_{passive}$. For all $\lambda_{Passive}$ above $0.6$ we observe stable landings, including critical combinations of $60\>ms$ delay and $20\>Hz$ update frequency. These examples show how capable a hybrid active and passive compliance system can be. In general, all results indicate stable landing for passive compliance ratios equal to or higher than $\lambda_{Passive}=0.7$.
Figure 6: Results showing three different duty cycles (DC). Each DC plot is
made from 1575 simulations, in sum 4725 computer simulations. Top and bottom
plots show the same data. The reference landing performance is the top left
data point in each plot, from dropping a $\lambda_{Passive}=1$ robot leg. A)
$DC=25\>\%$ B) $DC=50\>\%$ C) $DC=100\>\%$.
### 4.2 Quadruped computer simulation
The previous computer simulation results from landing a single leg indicate
that by choosing a high ratio of passive compliance, the robot’s performance
becomes largely independent of the sensorimotor delay, and the control
frequency. But fully passive compliance limits a robot’s agility, and to
maintain control authority we suggest reducing the passive compliance ratio.
In seven drop-landing scenarios, we altered a quadruped robot’s drop height,
and its hybrid passive and active stiffness (Figure 7). The simulation
parameters are provided in Table 1. The robot in case 1 features a passive
compliance ratio of $1$. It was dropped from a height of $0.7\>m$, and landed
successfully. The robot in case 2 uses identical parameters, and it was
dropped from $1\>m$ height, and fails to land properly. This example shows the
drawback of pure, passive and non-adjustable compliance. The robot
configuration in case 3 is a controller with full, bi-directionally active
compliance, no passive compliance, and no sensorimotor motor delay. The
robot’s controller successfully guided the landing. In case 4 a fully-active
compliance with a $17\>ms$ sensorimotor delay failed to land properly, which
shows the vulnerability of active compliance in the presence of sensorimotor
delay. Case 5 shows a successful landing scenario by combining passive and
active compliance, with a $27\>ms$ sensorimotor delay. The control update
frequency was reduced from $1000\>Hz$ to $200\>Hz$. Case 6 uses the same
passive compliance ratio $\lambda_{Passive}$ as case 5, and we reduced the
control update frequency to $100\>Hz$. The robot’s controller did not land the
robot successfully. Finally in case 7, we decreased the passive compliance
ratio $\lambda_{Passive}$ by increasing the active stiffness, and the robot
landed successfully, for a drop height of $1.2\>m$, a sensorimotor delay of
$35\>ms$, and $100\>Hz$ control frequency. The last case shows how an
appropriate combination of hybrid active and passive compliance at low control
frequency increases the control authority, energy efficiency (the spring
partially captures the robot’s load) and robustness in the presence of
sensorimotor delay.
Figure 7: Computer simulated quadruped robots landing, in seven different
scenarios. A) The Robot’s initial state. The initial drop heights are
indicated in red. B) An intermediate robot state at 4 s simulation time. The
panels also provide controller parameters. C) Converged robot state after 10
s. Simulation cases 1, 3, 5, and 7 landed successfully.
### 4.3 Hardware experiments
We validated previous simulation results with hardware experiments. We
selected passive compliance ratios of
$\lambda_{Passive}=[0\>,0.37\>,0.67\>,1]$ and a total knee stiffness of $K_{Total}=4680\>\frac{N}{m}$. We then varied the physical spring stiffness, control frequencies $[1000,\>100,\>10]\>Hz$, and sensorimotor delays $[0,\>10,\>20,\>30,\>50]\>ms$. In Figure 8 we
assess the difference $e_{sim2real}$ between the computer simulation results
and the hardware experimental results, as the mean-square-error between both
resulting vertical hip trajectory, normalized by the maximum leg length. Grey
color data shows failure cases in both experiments and simulations. Viable
cases with a mean-square-error of less than $6\text{\,}\%$ (Figure 8, deep
red) indicate a good consistency between the hardware experiment, and the
computer simulation. We show four exemplary hip trajectories, with varying
passive compliance ratios $\lambda_{passive}$ (Figure 8, I-IV). The first
three cases are viable cases with a good consistency between simulation and
experiments, and with a $e_{sim2real}$ of less than $6\text{\,}\%$. The fourth
case is a typical failure case. It shows an atypical settling and transition
behavior both in the computer simulation and in the hardware experiment.
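The error metric $e_{sim2real}$ described above, a mean-square-error of the hip trajectories normalized by the maximum leg length, can be sketched as follows. The $0.32\>m$ leg length follows from the two $0.16\>m$ segments; the function itself is our illustrative reconstruction, not the authors' analysis script:

```python
def sim2real_error(sim_heights, real_heights, leg_length=0.32):
    """Mean-square-error between simulated and measured hip trajectories,
    computed on height differences normalized by the maximum leg length.

    sim_heights, real_heights: equal-length samples of vertical hip height [m]
    leg_length:                maximum leg length [m]; two 0.16 m segments
    """
    assert len(sim_heights) == len(real_heights)
    n = len(sim_heights)
    return sum(((s - r) / leg_length) ** 2
               for s, r in zip(sim_heights, real_heights)) / n
```

Under this metric, a constant offset of $10\>\%$ of the leg length gives an error of $0.01$, so the $6\>\%$ consistency threshold used in Figure 8 corresponds to small average trajectory deviations.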
Figure 8: Comparing results from computer simulation and hardware experiments,
in the form of a mean-square-error of the normalized body height. Left) Good
similarities are shown as colored data patches, grey data patches indicate
unviable landing experiments. I-IV) exemplary hip trajectories for common
control frequency and passive compliant ratio combinations. I-III are
successful landings with short settling times and a good final hip height. IV
shows an unsuccessful landing example.
## 5 Conclusion and summary
We proposed a new, hybrid parallel passive and active compliance architecture
for the design and control of robotic legs, where a portion of the leg loading
is captured by the in-parallel knee spring. The remaining load is captured by
the robot’s active, virtual spring. We show robust landing scenarios with
varying hybrid compliance ratios in the presence of sensorimotor delays and
low control frequencies. The hybrid compliance design remains relatively
energy efficient because the leg’s in-parallel spring stores and releases
significant elastic energy. Yet with an appropriate amount of active
compliance, the robot’s controller maintains its control authority. We
performed extensive computer simulations and systematically characterized the
system’s response to varying sensorimotor delays and control frequencies in a
drop-landing task. The computer simulation results show that drop-landings
were successful for sensorimotor delays up to $60\>ms$, and a control
frequency as low as $20\>Hz$, in combination with a passive compliance ratio
of $\lambda_{Passive}=0.7$. We also simulated drop-landings with a quadruped
robot, with varying total leg stiffness and multiple drop heights. In these
examples, the hybrid compliant actuation showed good robustness in the
presence of sensorimotor delay and low update control frequencies. With a
significant amount of active compliance ratio, we still have good control
authority; we altered the robot's total compliance and successfully drop-landed
the robot from multiple heights. Our results are well consistent with the
locomotion robustness of running animals, which evidently and successfully
deal with neuromuscular sensorimotor delays in a similar range. We verified
our computer simulation results with hardware experiments for a selected range
of control and design parameters and could show a good agreement between both.
## Author Contributions
MSA and AAS contributed to the concept, robot design, experimental setup,
simulation, and experimentation. AB-S developed the original hybrid
active/passive compliant concept. All authors discussed the data, agreed with
the presented results, and contributed to the writing.
## Acknowledgments
The authors thank the International Max Planck Research School for Intelligent
Systems (IMPRS-IS) for supporting AAS. We thank Felix Grimminger and Jad Saud
for support developing the robot leg, the Robotic ZWE for prototyping support
and Julian Viereck for providing the URDF file of the quadruped.
## Supplemental Data
The supplementary material includes a code algorithms snippet driving the
simulations, and our video presenting results from computer simulation and
hardware experiment.
## Data Availability Statement
The datasets recorded and generated for this study are available on request
from the corresponding author.
## References
* AhmadSharbafi et al. (2020) AhmadSharbafi, M., Yazdanpanah, M. J., Ahmadabadi, M. N., and Seyfarth, A. (2020). Parallel compliance design for increasing robustness and efficiency in legged locomotion-theoretical background and applications. _IEEE/ASME Transactions on Mechatronics_ 10.1109/TMECH.2020.3019686
* Alexander (1990) Alexander, R. (1990). Three uses for springs in legged locomotion. _International Journal of Robotics Research_ 9, 53–61. 10.1177/027836499000900205
* Ambrose and Ames (2020) Ambrose, E. and Ames, A. D. (2020). Improved performance on moving-mass hopping robots with parallel elasticity. In _2020 IEEE International Conference on Robotics and Automation (ICRA)_ (IEEE), 2457–2463. 10.1109/ICRA40945.2020.9197070
* Biewener (1989) Biewener, A. A. (1989). Scaling Body Support in Mammals: Limb Posture and Muscle Mechanics. _Science_ 245, 45–48. 10.1126/science.2740914
* Bledt et al. (2018) Bledt, G., Wensing, P. M., Ingersoll, S., and Kim, S. (2018). Contact Model Fusion for Event-Based Locomotion in Unstructured Terrains. In _2018 IEEE International Conference on Robotics and Automation (ICRA)_. 4399–4406. 10.1109/ICRA.2018.8460904. ISSN: 2577-087X
* Blickhan et al. (2007) Blickhan, R., Seyfarth, A., Geyer, H., Grimmer, S., Wagner, H., and Günther, M. (2007). Intelligence by mechanics. _Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences_ 365, 199–220. 10.1098/rsta.2006.1911
* Calanca et al. (2015) Calanca, A., Muradore, R., and Fiorini, P. (2015). A review of algorithms for compliant control of stiff and fixed-compliance robots. _IEEE/ASME Transactions on Mechatronics_ 21, 613–624. 10.1109/TMECH.2015.2465849
* Coumans and Bai (2016–2019) [Dataset] Coumans, E. and Bai, Y. (2016–2019). Pybullet, a python module for physics simulation for games, robotics and machine learning. http://pybullet.org
* Daley (2018) Daley, M. A. (2018). Understanding the agility of running birds: sensorimotor and mechanical factors in avian bipedal locomotion. _Integrative and comparative biology_ 58, 884–893. doi.org/10.1093/icb/icy058
* Daley and Biewener (2006) Daley, M. A. and Biewener, A. A. (2006). Running over rough terrain reveals limb control for intrinsic stability. _Proceedings of the National Academy of Sciences_ 103, 15681–15686. 10.1073/pnas.0601473103
* Daley et al. (2006) Daley, M. A., Usherwood, J. R., Felix, G., and Biewener, A. A. (2006). Running over rough terrain: guinea fowl maintain dynamic stability despite a large unexpected change in substrate height. _J Exp Biol_ 209, 171–187. 10.1242/jeb.01986
* Ding and Park (2017) Ding, Y. and Park, H.-W. (2017). Design and Experimental Implementation of a Quasi-Direct-Drive Leg for Optimized Jumping. In _2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. 300–305
* Grimminger et al. (2020) Grimminger, F., Meduri, A., Khadiv, M., Viereck, J., Wüthrich, M., Naveau, M., et al. (2020). An open torque-controlled modular robot architecture for legged locomotion research. _IEEE Robotics and Automation Letters_ 5, 3650–3657. 10.1109/LRA.2020.2976639
* Günther et al. (2003) Günther, M., Sholukha, V. A., Kessler, D., Wank, V., and Blickhan, R. (2003). Dealing with skin motion and wobbling masses in inverse dynamics. _Journal of Mechanics in Medicine and Biology_ 3, 309–335
* Hammoud et al. (2020) Hammoud, B., Khadiv, M., and Righetti, L. (2020). Impedance Optimization for Uncertain Contact Interactions Through Risk Sensitive Optimal Control. _arXiv:2011.04684 [cs, eess]_
* Hubicki et al. (2016) Hubicki, C., Grimes, J., Jones, M., Renjewski, D., Spröwitz, A., Abate, A., et al. (2016). ATRIAS: Design and validation of a tether-free 3D-capable spring-mass bipedal robot. _The International Journal of Robotics Research_ 35, 1497–1521. 10.1177/0278364916648388
* Hutter et al. (2011) Hutter, M., Remy, C. D., Hoepflinger, M. A., and Siegwart, R. (2011). Scarleth: Design and control of a planar running robot. In _2011 IEEE/RSJ International Conference on Intelligent Robots and Systems_ (IEEE), 562–567. 10.1142/9789814415958_0062
* Li et al. (2020) Li, C., Ding, Y., and Park, H.-W. (2020). Centroidal-momentum-based trajectory generation for legged locomotion. _Mechatronics_ 68, 102364. 10.1016/j.mechatronics.2020.102364
* Liu et al. (2018) Liu, X., Rossi, A., and Poulakakis, I. (2018). A switchable parallel elastic actuator and its application to leg design for running robots. _IEEE/ASME Transactions on Mechatronics_ 23, 2681–2692. 10.1109/TMECH.2018.2871670
* Mo et al. (2020) Mo, A., Izzi, F., Haeufle, D. F. B., and Badri-Spröwitz, A. (2020). Effective Viscous Damping Enables Morphological Computation in Legged Locomotion. _Frontiers in Robotics and AI_ 7\. 10.3389/frobt.2020.00110
* More and Donelan (2018) More, H. L. and Donelan, J. M. (2018). Scaling of sensorimotor delays in terrestrial mammals. _Proceedings of the Royal Society B: Biological Sciences_ 285, 20180613\. 10.1098/rspb.2018.0613
* More et al. (2010) More, H. L., Hutchinson, J. R., Collins, D. F., Weber, D. J., Aung, S. K., and Donelan, J. M. (2010). Scaling of sensorimotor control in terrestrial mammals. _Proceedings of the Royal Society B: Biological Sciences_ 277, 3563–3568. 10.1098/rspb.2010.0898
* Narioka et al. (2012) Narioka, K., Rosendo, A., Spröwitz, A., and Hosoda, K. (2012). Development of a Minimalistic Pneumatic Quadruped Robot for Fast Locomotion. In _Proceedings of IEEE International Conference on Robotics and Biomimetics (ROBIO)_ (Guangzhou, China), 307–311. 10.1109/ROBIO.2012.6490984
* Nasiri et al. (2016) Nasiri, R., Khoramshahi, M., Shushtari, M., and Ahmadabadi, M. N. (2016). Adaptation in variable parallel compliance: Towards energy efficiency in cyclic tasks. _IEEE/ASME Transactions on Mechatronics_ 22, 1059–1070. 10.1109/TMECH.2016.2637826
* Niehues et al. (2015) Niehues, T. D., Rao, P., and Deshpande, A. D. (2015). Compliance in parallel to actuators for improving stability of robotic hands during grasping and manipulation. _The International Journal of Robotics Research_ 34, 256–269. 10.1177/0278364914558016
* Park et al. (2017) Park, H.-W., Wensing, P. M., and Kim, S. (2017). High-speed bounding with the MIT Cheetah 2: Control design and experiments. _The International Journal of Robotics Research_. 10.1177/0278364917694244
* Plooij et al. (2016) Plooij, M., Wisse, M., and Vallery, H. (2016). Reducing the energy consumption of robots using the bidirectional clutched parallel elastic actuator. _IEEE Transactions on Robotics_ 32, 1512–1523. 10.1109/TRO.2016.2604496
* Pratt and Williamson (1995) Pratt, G. A. and Williamson, M. M. (1995). Series elastic actuators. In _Proceedings 1995 IEEE/RSJ International Conference on Intelligent Robots and Systems. Human Robot Interaction and Cooperative Robots_ (IEEE), vol. 1, 399–406. 10.1109/IROS.1995.525827
* Roozing (2018) Roozing, W. (2018). Modeling and control of adjustable articulated parallel compliant actuation arrangements in articulated robots. _Frontiers in Robotics and AI_ 5, 4. 10.3389/frobt.2018.00004
* Roozing et al. (2019) Roozing, W., Ren, Z., and Tsagarakis, N. G. (2019). An efficient leg with series–parallel and biarticular compliant actuation: design optimization, modeling, and control of the eleg. _The International Journal of Robotics Research_. 10.1177/0278364919893762
* Ruppert and Spröwitz (2019) Ruppert, F. and Spröwitz, A. (2019). Series elastic behavior of biarticular muscle-tendon structure in a robotic leg. _Frontiers in neurorobotics_ 13, 64. 10.3389/fnbot.2019.00064
* Shafiee et al. (2019) Shafiee, M., Romualdi, G., Dafarra, S., Chavez, F. J. A., and Pucci, D. (2019). Online dcm trajectory generation for push recovery of torque-controlled humanoid robots. In _2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids)_ (IEEE), 671–678. 10.1109/Humanoids43949.2019.9034996
* Shafiee-Ashtiani et al. (2017) Shafiee-Ashtiani, M., Yousefi-Koma, A., and Shariat-Panahi, M. (2017). Robust bipedal locomotion control based on model predictive control and divergent component of motion. In _2017 IEEE International Conference on Robotics and Automation (ICRA)_ (IEEE), 3505–3510. 10.1109/ICRA.2017.7989401
* Spröwitz et al. (2013) Spröwitz, A., Tuleu, A., Vespignani, M., Ajallooeian, M., Badri, E., and Ijspeert, A. J. (2013). Towards dynamic trot gait locomotion: Design, control, and experiments with cheetah-cub, a compliant quadruped robot. _The International Journal of Robotics Research_ 32, 932–950. 10.1177/0278364913489205
* Spröwitz et al. (2018) Spröwitz, A. T., Tuleu, A., Ajallooeian, M., Vespignani, M., Möckel, R., Eckert, P., et al. (2018). Oncilla Robot: A Versatile Open-Source Quadruped Research Robot With Compliant Pantograph Legs. _Frontiers in Robotics and AI_ 5\. 10.3389/frobt.2018.00067
* Varkonyi et al. (2014) Varkonyi, T. A., Rudas, I. J., Pausits, P., and Haidegger, T. (2014). Survey on the control of time delay teleoperation systems. In _IEEE 18th International Conference on Intelligent Engineering Systems INES 2014_ (IEEE), 89–94. 10.1109/INES.2014.6909347
* Yesilevskiy et al. (2018) Yesilevskiy, Y., Gan, Z., and David Remy, C. (2018). Energy-optimal hopping in parallel and series elastic one-dimensional monopeds. _Journal of Mechanisms and Robotics_ 10\. 10.1115/1.4039496
Case number | Total compliance [$\frac{N.m}{rad}$] | $\lambda_{Passive}$ [%] | Control frequency [Hz] | Delay [ms]
1 | 1.6717 | 100 | 1000 | 0
2 | 1.6717 | 100 | 1000 | 0
3 | 1.6717 | 0 | 1000 | 0
4 | 1.6717 | 0 | 1000 | 17
5 | 2.5076 | 67 | 200 | 27
6 | 2.5076 | 67 | 100 | 27
7 | 2.8419 | 59 | 100 | 35
Table 1: Simulation parameters of the quadruped robot for the landing task with different control frequencies and $\lambda_{Passive}$.
# Exceptional surgeries in $3$–manifolds
Kenneth L. Baker Department of Mathematics
University of Miami
Coral Gables, FL 33146
USA<EMAIL_ADDRESS>and Neil R. Hoffman Department of Mathematics
Oklahoma State University
Stillwater, OK 74078
USA<EMAIL_ADDRESS>
###### Abstract.
Myers shows that every compact, connected, orientable $3$–manifold with no
$2$–sphere boundary components contains a hyperbolic knot. We use work of
Ikeda with an observation of Adams-Reid to show that every $3$–manifold
subject to the above conditions contains a hyperbolic knot which admits a non-
trivial non-hyperbolic surgery, a toroidal surgery in particular. We conclude
with a question and a conjecture about reducible surgeries.
Myers shows that there are hyperbolic knots in every compact, connected,
orientable $3$–manifold whose boundary contains no $2$–spheres [Mye93]. Might
there be such a $3$–manifold for which every hyperbolic knot has no non-
trivial exceptional surgeries?
One approach to showing the answer is Yes would be to prove that there exists
a $3$–manifold in which every hyperbolic knot has cusp volume larger than $18$
so that the $6$–Theorem [Ago00, Lac00] would obstruct any exceptional surgery.
However, [ACF+06, Corollary 5.2] implies that every closed, connected,
orientable $3$–manifold contains infinitely many hyperbolic knots with cusp
volume at most $9$. So this approach will not work. Furthermore, the knots
constructed in [ACF+06, Corollary 5.2] do not necessarily have any exceptional
surgery, so that work does not address our question.
In this short note we demonstrate the answer to the question is actually No by
constructing hyperbolic knots with a non-trivial toroidal surgery in any
$3$–manifold.
###### Theorem 1.
Let $M$ be a compact, connected, orientable $3$–manifold such that $\partial
M$ contains no $2$–spheres. There exist infinitely many hyperbolic knots in
$M$ that admit a toroidal surgery.
###### Proof.
Let $M$ be a compact, connected orientable $3$–manifold whose boundary
contains no $2$–spheres. In [Ike12], Ikeda shows that $M$ contains an infinite family of embedded genus $2$ handlebodies, each with hyperbolic and anannular complement of its interior in which its genus $2$ boundary is totally geodesic. Let $H$ be any one of these handlebodies.
In Lemma 2 we find a knot $K$ in $H$ that bounds an embedded once-punctured
Klein bottle $\Sigma$ such that $H-K$ is a one-cusped anannular hyperbolic
manifold in which $\partial H$ is totally geodesic. Therefore $M-K$ decomposes
along $\partial H$ into two anannular hyperbolic manifolds. Thus, following an
observation of Adams and Reid [AR93, Observation 2.1], $M-K$ is a hyperbolic
manifold containing a quasi-Fuchsian surface isotopic to $\partial H$, and $K$
is a hyperbolic knot in $M$. (Note that while $\partial H$ is totally geodesic
in both $M-int(H)$ and $H-K$, its hyperbolic structure may not be the same in
these two manifolds. Hence $\partial H$ is not necessarily totally geodesic in
$M-K$.)
Since $K$ bounds the once-punctured Klein bottle $\Sigma$, surgery on $K$
along the slope $\sigma$ of $\partial\Sigma$ produces a manifold
$M_{K}(\sigma)$ containing an embedded Klein bottle $\widehat{\Sigma}$. The
manifold $M_{K}(\sigma)$ will be toroidal unless the torus
$\partial\mathcal{N}(\widehat{\Sigma})$ compresses. However, Lemma 2 shows
that $K$ may be further chosen in $M$ so that $H_{K}(\sigma)-\widehat{\Sigma}$
is also a one-cusped anannular hyperbolic manifold in which $\partial H$ is
totally geodesic. Therefore, as $M_{K}(\sigma)-\widehat{\Sigma}$
decomposes along $\partial H$ into the hyperbolic manifolds $M-int(H)$ and
$H_{K}(\sigma)-\widehat{\Sigma}$, it follows that
$\partial\mathcal{N}(\widehat{\Sigma})$ must be incompressible in
$M_{K}(\sigma)$. ∎
###### Lemma 2.
There is a knot $K$ in a genus $2$ handlebody $H$ that bounds a once-punctured
Klein bottle $\Sigma$ so that $H-K$ is a one-cusped anannular hyperbolic
manifold in which $\partial H$ is totally geodesic. Hence surgery on $K$ along
the slope $\sigma$ of $\partial\Sigma$ produces a manifold $H_{K}(\sigma)$
containing an embedded Klein bottle $\widehat{\Sigma}$. Furthermore, $K$ may
be chosen so that $H_{K}(\sigma)-\widehat{\Sigma}$ is also a one-cusped
anannular hyperbolic manifold in which $\partial H$ is totally geodesic.
###### Proof.
Figure 2(a) shows a surgery description of a trivial $3$–strand tangle in the
ball, along with an arc $k$ that has its endpoints on the tangle strands.
Figure 2(b) shows the result of an isotopy in which the tangle is more
obviously trivial at the expense of elongating the arc $k$. The double
branched cover of this trivial $3$–strand tangle is a handlebody $H$ in which
the arc $k$ lifts to a knot $K$. Figure 2(c),(d), and (e) illustrate the
construction of the knot $K$ in the handlebody $H$. In (c), two caps with red
dual arcs are attached to the $3$–strand tangle to form a trivial $1$–strand
tangle in the ball. After straightening the strand in (d), the double branched
cover is taken in (e). The two caps each lift to $2$–handles attached to $H$.
The two red arcs in (e) are the co-cores of these two $2$–handles, so $H$ is
obtained by drilling them out. For lifting the surgery description, note that
a curve linking the branch locus once with surgery coefficient $1/2a$ lifts to
a single curve with surgery coefficient $1/a$. Thus for each pair of integers
$n,m$, we obtain a knot $K$ in a genus $2$ handlebody $H$.
Figure 2(f) shows a surgery description of the double of $(H,K)$ across
$\partial H$, the link $K\cup\overline{K}$ in $H\cup\overline{H}=S^{1}\times
S^{2}\\#S^{1}\times S^{2}$, obtained by mirroring Figure 2(e) and performing
$0$ surgery on the components formed from the co-cores of the $2$–handles and
their mirrors. A verified computation in SnapPy [CDGW] confirms that the
complement of the link $K\cup\overline{K}$ in $S^{1}\times S^{2}\\#S^{1}\times
S^{2}$ is hyperbolic for choices of $n,m\in\mathbb{Z}$ with $|n|$ and $|m|$
suitably large. (More specifically, a verified computation in SnapPy shows
that after doing the two $0$-surgeries on the two red components in Figure
2(f), the resulting $6$–component link in $S^{1}\times S^{2}\\#S^{1}\times
S^{2}$ has a hyperbolic complement. Then there is a constant $N$ such that the
$2$–component link complement resulting from the surgeries on the green and
purple components will be hyperbolic if both $|n|>N$ and $|m|>N$; see [Koj88,
Lemma 5] or [BHL19, Theorem 3.1].) Since the double has the reflective
symmetry in which $\partial H$ is the fixed set, it must be a totally geodesic
surface. Hence $H-K$ must be a one-cusped anannular hyperbolic manifold in
which $\partial H$ is totally geodesic.
In Figure 2(e) one observes that $K$ bounds a once-punctured Klein bottle
$\Sigma$ in $H$ that is disjoint from the two curves of the surgery
description. As such, Dehn surgery on $K$ in $H$ along the boundary slope
$\sigma=\partial\Sigma$ produces the manifold $H_{K}(\sigma)$ which contains
the Klein bottle $\widehat{\Sigma}$ obtained by capping off $\Sigma$ with a
meridional disk of the surgery.
All that remains is to show that $\widehat{\Sigma}$ is essential in the
filling. First, we may understand the complement of $\widehat{\Sigma}$ through
tangles. As apparent in Figure 2(e), the surface $\Sigma$ may be taken to be
invariant under the involution of $H$ from the branched covering so that the
fixed set intersects $\Sigma$ in two points and an arc. Then $\Sigma$ descends
to a disk $D$ containing the arc $k$ in its boundary and meeting the branch
locus in the remainder of its boundary and two points in its interior. This
disk $D$ may be tracked from its initial quotient of $\Sigma$ in Figure 2(d)
back to Figure 2(a). Now Figure 1(a) shows the exterior of the arc $k$ while
Figure 1(b) shows the rational tangle filling associated to $\sigma$–framed
surgery on $K$. In particular, the disk $D-k$ is completed to a disk
$\widehat{D}$ containing the closed component of the branch locus as its
boundary and meeting the strands of the branch locus in two interior points.
Indeed, the double branched cover of the tangle Figure 1(b) is the manifold
$H_{K}(\sigma)$ in which $\widehat{D}$ lifts to $\widehat{\Sigma}$. Finally,
Figure 1(c) shows the tangle that is the complement of a small regular
neighborhood of $\widehat{D}$.
Figure 3(a) shows a rational tangle filling of Figure 1(c) with the arc
$k^{\prime}$ that is the core of the rational tangle. This $3$–strand tangle
is a trivial tangle as made more apparent in Figures 3(b), (c), and (d) which
isotop the tangle while elongating arc $k^{\prime}$. As before, (c) shows the
attachment of two caps with dual arcs and (d) straightens the resulting
$1$–strand tangle. Figure 3(e) shows the double branched cover which
illustrates the lift of the arc $k^{\prime}$ as the knot $K^{\prime}$ in
another genus $2$ handlebody $H^{\prime}$. Again, the two caps each lift to
$2$–handles attached to $H^{\prime}$, the two red arcs in (e) are the co-cores
of these two $2$–handles, and so $H^{\prime}$ is obtained by drilling them
out. Note that the knot $K^{\prime}$ in $H^{\prime}$ depends on the previously
chosen pair of integers $n,m$ of the surgery description.
It now follows that, by construction, $H_{K}(\sigma)-\widehat{\Sigma}$ is
homeomorphic to $H^{\prime}-K^{\prime}$. We show that $H^{\prime}-K^{\prime}$
is a one-cusped anannular hyperbolic manifold in which $\partial H^{\prime}$
is totally geodesic just as we did for $H-K$. Figure 3(f) shows a surgery
description of the double of $(H^{\prime},K^{\prime})$ across $\partial
H^{\prime}$, the link $K^{\prime}\cup\overline{K^{\prime}}$ in
$H^{\prime}\cup\overline{H^{\prime}}=S^{1}\times S^{2}\\#S^{1}\times S^{2}$,
obtained by mirroring Figure 3(e) and performing $0$ surgery on the components
formed from the co-cores of the $2$–handles and their mirrors. A verified
computation in SnapPy [CDGW] confirms that the complement of the link
$K^{\prime}\cup\overline{K^{\prime}}$ in $S^{1}\times S^{2}\\#S^{1}\times
S^{2}$ is hyperbolic if both $|n|$ and $|m|$ are suitably large. Since the
double has the reflective symmetry in which $\partial H^{\prime}$ is the fixed
set, it must be a totally geodesic surface. Hence $H^{\prime}-K^{\prime}$ is a
one-cusped anannular hyperbolic manifold in which $\partial H^{\prime}$ is
totally geodesic.
Since $H_{K}(\sigma)-\widehat{\Sigma}\cong H^{\prime}-K^{\prime}$, we obtain
the desired results whenever $|n|$ and $|m|$ are large enough to be suitably
large in both situations. ∎
###### Remark 3.
To give a concrete example, taking $n=m=1$ is sufficient for the knots
$K\subset H$ and $K^{\prime}\subset H^{\prime}$ to be hyperbolic. Certainly,
one could verify the hyperbolicity of these knots by hand in the spirit of
what was done in [AR93], but the argument would take longer. Hence we content
ourselves with verified computations in SnapPy [CDGW].
Figure 1. (b) A surgery description of a $3$–strand tangle in the ball with an
unknot component that bounds a disk intersected twice by the strands. (a) The
complement of a rational tangle in this tangle. (c) The complement of a small
neighborhood of the disk bounded by the unknot component.
Figure 2. (a) A rational tangle filling of Figure 1(a) with its core arc $k$.
(b) & (c) An isotopy showing the filled tangle is a rational $3$–strand
tangle. The arc $k$ is carried along. (c) Attached to the rational $3$–strand
tangle are two caps with their dual arcs to form a $1$–strand tangle in the
ball. (d) The $1$–strand tangle is straightened. (e) The double branched cover
is formed. Drilling out the red arcs leaves a genus $2$ handlebody $H$
containing the knot $K$ that covers $k$. (f) A surgery description of the
double of $(H,K)$ is formed.
Figure 3. (a) A rational tangle filling of Figure 1(c) with its core arc
$k^{\prime}$. (b) & (c) An isotopy showing the filled tangle is a rational
$3$–strand tangle. The arc $k^{\prime}$ is carried along. (c) Attached to the
rational $3$–strand tangle are two caps with their dual arcs to form a
$1$–strand tangle in the ball. (d) The $1$–strand tangle is straightened. (e)
The double branched cover is formed. Drilling out the red arcs leaves a genus
$2$ handlebody $H^{\prime}$ containing the knot $K^{\prime}$ that covers
$k^{\prime}$. (f) A surgery description of the double of
$(H^{\prime},K^{\prime})$ is formed.
What can be said about other kinds of exceptional surgeries? Considerations of
Betti numbers show that many closed, compact, orientable $3$–manifolds cannot
contain a knot with a Dehn surgery to a lens space or a small Seifert fibered
space. In light of the Cabling Conjecture [GAnS86] whose proof would imply
that no hyperbolic knot in $S^{3}$ has a reducible surgery, it is reasonable
to expect that there are $3$–manifolds in which no hyperbolic knot admits a
reducible surgery. However, we are presently unaware of any $3$–manifold known
to not have a hyperbolic knot with a non-trivial reducible surgery.
###### Question 4.
Which compact, connected, orientable $3$–manifolds do not contain a hyperbolic
knot with a non-trivial reducible surgery?
While non-trivial reducible surgeries on hyperbolic knots in reducible
manifolds do exist, see e.g. [HM03], we suspect that manifolds whose prime
decompositions have at least $3$ summands are candidates.
###### Conjecture 5.
A closed orientable $3$–manifold with at least $3$ summands does not contain a
hyperbolic knot with a non-trivial reducible surgery.
Towards the conjecture, suppose $K$ is a hyperbolic knot in a closed
orientable $3$–manifold $M$ with at least $3$ summands. One may hope that each
planar meridional surface in the knot complement $M-K$ arising from $K$
intersecting multiple reducing spheres would contribute a certain amount to
the length of the shortest longitude of $K$. From this, at least if $M$ had
sufficiently many summands, one would be able to use the 6-Theorem to obstruct
a non-trivial reducible surgery. However this would also obstruct a toroidal
surgery contrary to Theorem 1. Indeed, it would also contradict [ACF+06,
Corollary 5.2] which shows that the topology of $M$ cannot force all
longitudes of hyperbolic knots in $M$ to be long.
On the other hand, combinatorial structures in knot complements can induce
obstructions. For instance, hyperbolic alternating knots in $S^{3}$ that have
at least $9$ twist regions (in twist-reduced diagrams) provide an obstruction
to the existence of non-trivial exceptional fillings; see [Lac00, Theorem 5.1].
###### Acknowledgments 6.
KB thanks Jacob Caudell for conversations related to [Cau21, Conjecture 5]
that prompted this note. This work was partially supported by Simons
Foundation grant #209184 to Kenneth Baker and by Simons Foundation grant
#524123 to Neil Hoffman.
## References
* [ACF+06] C. Adams, A. Colestock, J. Fowler, W. Gillam, and E. Katerman. Cusp size bounds from singular surfaces in hyperbolic 3-manifolds. Trans. Amer. Math. Soc., 358(2):727–741, 2006.
* [Ago00] Ian Agol. Bounds on exceptional Dehn filling. Geom. Topol., 4(1):431–449, 2000.
* [AR93] Colin C. Adams and Alan W. Reid. Quasi-Fuchsian surfaces in hyperbolic knot complements. Journal of the Australian Mathematical Society. Series A. Pure Mathematics and Statistics, 55(1):116–131, 1993.
* [BHL19] Kenneth L Baker, Neil R Hoffman, and Joan E Licata. Jointly primitive knots and surgeries between lens spaces. arXiv preprint arXiv:1904.03268, 2019. To appear in Communications in Analysis and Geometry.
* [Cau21] Jacob Caudell. Three lens space summands from the Poincaré homology sphere, 2021. arXiv:2101.01256.
* [CDGW] Marc Culler, Nathan M. Dunfield, Matthias Goerner, and Jeffrey R. Weeks. SnapPy, a computer program for studying the geometry and topology of $3$-manifolds. Available at http://snappy.computop.org (17/01/2021).
* [GAnS86] Francisco González-Acuña and Hamish Short. Knot surgery and primeness. Math. Proc. Cambridge Philos. Soc., 99(1):89–102, 1986.
* [HM03] James A. Hoffman and Daniel Matignon. Examples of bireducible Dehn fillings. Pacific J. Math., 209(1):67–83, 2003.
* [Ike12] Toru Ikeda. Hyperbolic spatial graphs in 3-manifolds. Topology and its Applications, 159(1):279–282, 2012.
* [Koj88] Sadayoshi Kojima. Isometry transformations of hyperbolic 3-manifolds. Topology and its Applications, 29(3):297–307, 1988.
* [Lac00] Marc Lackenby. Word hyperbolic Dehn surgery. Inventiones mathematicae, 140(2):243–282, 2000.
* [Mye93] Robert Myers. Excellent $1$-manifolds in compact $3$-manifolds. Topology Appl., 49(2):115–127, 1993.
# Explicit zero density for the Riemann zeta function
Habiba Kadiri Department of Mathematics and Computer Science
University of Lethbridge
4401 University Drive
Lethbridge, Alberta
T1K 3M4 Canada<EMAIL_ADDRESS>, Allysa Lumley Department of
Mathematics and Statistics
York University
4700 Keele St
Toronto, Ontario
M3J 1P3 Canada<EMAIL_ADDRESS>and Nathan Ng Department of Mathematics and
Computer Science
University of Lethbridge
4401 University Drive
Lethbridge, Alberta
T1K 3M4 Canada<EMAIL_ADDRESS>
###### Abstract.
Let $N(\sigma,T)$ denote the number of nontrivial zeros of the Riemann zeta
function with real part greater than $\sigma$ and imaginary part between $0$
and $T$. We provide explicit upper bounds for $N(\sigma,T)$ commonly referred
to as a zero density result. In 1937, Ingham showed the following asymptotic
result $N(\sigma,T)=\mathcal{O}(T^{\frac{8}{3}(1-\sigma)}(\log T)^{5})$.
Ramaré recently proved an explicit version of this estimate. We discuss a
generalization of the method used in these two results which yields an
explicit bound of a similar shape while also improving the constants.
###### Key words and phrases:
Riemann zeta function, zero density, explicit results
###### 2010 Mathematics Subject Classification:
Primary 11M06, 11M26; Secondary 11Y35
Research for this article is partially supported by the NSERC Discovery grants
of H.K. (RGPIN-2015-06799) and N.N. (RGPIN-2015-05972). The calculations were
executed on the University of Lethbridge Number Theory Group Eudoxus machine,
supported by an NSERC RTI grant.
## 1\. Introduction
Throughout this article $\zeta(s)$ denotes the Riemann zeta function and
$\varrho$ denotes a non-trivial zero of $\zeta(s)$ lying in the critical
strip, $0<{\mathfrak{Re}}(s)<1$. Let $\frac{1}{2}<\sigma<1,T>0$, and define
(1.1) $N(\sigma,T)=\\#\\{\varrho=\beta+i\gamma:\
\zeta(\varrho)=0,0<\gamma<T\text{ and }\sigma<\beta<1\\}.$
We shall prove a non-trivial, explicit upper bound for $N(\sigma,T)$. Such a
bound is commonly referred to as a zero-density estimate. We write RH for the
Riemann Hypothesis and $\text{RH}(H_{0})$ for the statement:
(1.2) $\text{RH}(H_{0}):\text{ all non-trivial zeros }\varrho\text{ of
}\zeta(s)\ \text{ with }\ |{\mathfrak{Im}}(\varrho)|\leq H_{0}\text{ satisfy
}{\mathfrak{Re}}(\varrho)=\frac{1}{2}.$
Currently, the best published value of $H_{0}$ for which (1.2) is true is due
to David Platt [19]:
$H_{0}=3.0610046\cdot 10^{10}$
with $N(H_{0})=103\,800\,788\,359$. Further strong evidence towards RH comes
from the large body of zero-density estimates for $\zeta(s)$, namely very good
bounds for $N(\sigma,T)$ in various ranges of $\sigma$.
Let $\sigma>\frac{1}{2}$. In 1913 Bohr and Landau [2] showed that
(1.3) $N(\sigma,T)=\mathcal{O}\left(\frac{T}{\sigma-\frac{1}{2}}\right)$
for $T$ asymptotically large. This result implies that for any fixed
$\varepsilon>0$, almost all zeros of $\zeta(s)$ lie in the band
$|\frac{1}{2}-{\mathfrak{Re}}(s)|<\varepsilon$. This was improved in 1937 by
Ingham [12], who showed
(1.4) $N(\sigma,T)=\mathcal{O}\left(T^{(2+4c)(1-\sigma)}(\log T)^{5}\right)$
assuming that $\zeta(\frac{1}{2}+it)=\mathcal{O}\left(t^{c+\epsilon}\right)$.
In particular, the Lindelöf Hypothesis
$\zeta(\frac{1}{2}+it)=\mathcal{O}\left(t^{\epsilon}\right)$ implies that
$N(\sigma,T)=\mathcal{O}\left(T^{2(1-\sigma)+\epsilon}\right)$, also known as
the Density Hypothesis. There is a prolific literature on the bounds for
$\zeta(s)$, starting with the convexity bound of $c=\frac{1}{4}=0.25$
(Lindelöf), the first subconvexity bound of Hardy & Littlewood [8]
$c=\frac{1}{6}=0.1666\ldots$, to some more recent results of Huxley [10]
(2005) $c=\frac{32}{205}=0.1560\ldots$ and of Bourgain [3] (2017)
$c=\frac{13}{84}=0.1547\ldots$. In addition, there are also many articles on
estimates for $N(\sigma,T)$. A selection of some notable results may be found
in [10], [11], [13], and [3]. On the other hand, there are few explicit bounds
for $N(\sigma,T)$. We refer the reader to a result of the first author [14]
for an explicit version of Bohr and Landau’s bound. The method provides two
kinds of results: for $T$ asymptotically large, as in $N(0.90,T)\leq
0.4421T+0.6443\log T-363\,301,$ and for $T$ taking a specific value, as in
$N(0.90,H_{0})<96.20$. These bounds are useful to improve estimates of prime
counting functions, as in [5], [4], [20], [26] and in [15] to find primes in
short intervals. Ramaré had earlier proven a version of (1.4) in his D.E.A.
memoire, which remained unpublished until recently. Let $\sigma\geq 0.52$ be
fixed. In [24] he proves that for any $T\geq 2000$
(1.5) $N(\sigma,T)\leq 965(3T)^{\frac{8(1-\sigma)}{3}}(\log
T)^{5-2\sigma}+51.5(\log T)^{2}.$
(Equation (1.1) of [24, p. 326] gives the bound $N(\sigma,T)\leq
4.9(3T)^{\frac{8(1-\sigma)}{3}}(\log T)^{5-2\sigma}+51.5(\log T)^{2}$;
however, there is a mistake in [24]. The authors have been in communication
with Professor Ramaré, and he has sent us a proof of the revised inequality
(1.5).) For $\sigma=0.90$, (1.5) gives $N(0.90,T)<1293.48(\log
T)^{\frac{16}{5}}T^{\frac{4}{15}}+51.50(\log T)^{2},$ and hence, for
$T=H_{0}$, $N(0.90,H_{0})<2.1529\cdot 10^{10}.$ The purpose of this article is
to bound $N(\sigma,T)$ by applying Ingham’s argument with a general weight and
to improve both [14] and [24].
###### Theorem 1.1.
Let $\frac{10^{9}}{H_{0}}\leq k\leq 1,d>0,H\in[1002,H_{0})$, $\alpha>0$,
$\delta\geq 1$, $\eta_{0}=0.23622\ldots$, $1+\eta_{0}\leq\mu\leq 1+\eta$, and
$\eta\in(\eta_{0},\tfrac{1}{2})$ be fixed. Let
$\sigma>\frac{1}{2}+\frac{d}{\log H_{0}}$.
Then there exist $\mathcal{C}_{1},\mathcal{C}_{2}>0$ such that, for any $T\geq
H_{0}$,
(1.6) $N(\sigma,T)\leq\frac{(T-H)(\log T)}{2\pi
d}\log\Big{(}1+\frac{\mathcal{C}_{1}(\log(kT))^{2\sigma}(\log
T)^{4(1-\sigma)}T^{\frac{8}{3}(1-\sigma)}}{T-H}\Big{)}+\frac{\mathcal{C}_{2}}{2\pi
d}(\log T)^{2},$
where $\mathcal{C}_{1}=\mathcal{C}_{1}(\alpha,d,\delta,k,H,\sigma)$ and
$\mathcal{C}_{2}=\mathcal{C}_{2}(d,\eta,k,H,\mu,\sigma)$ are defined in (4.72)
and (4.73). Since $\log(1+x)\leq x$ for $x\geq 0$, (1.6) implies
(1.7) $N(\sigma,T)\leq\frac{\mathcal{C}_{1}}{2\pi d}(\log(kT))^{2\sigma}(\log
T)^{5-4\sigma}T^{\frac{8}{3}(1-\sigma)}+\frac{\mathcal{C}_{2}}{2\pi d}(\log
T)^{2}.$
In addition, numerical results are displayed in tables in Section 5.
For instance (1.7) gives $N(0.90,T)<11.499(\log
T)^{\frac{16}{5}}T^{\frac{4}{15}}+3.186(\log T)^{2},$ and (1.6) gives
$N(0.90,H_{0})<130.07.$ This improves previous results both numerically and
methodologically (one of the key ingredients is the choice of a more efficient
weight function in Ingham’s method). Note that choosing $k<1$ and optimizing
in $H$ can provide extra improvements to (1.5). In addition, we prove a
stronger bound for the argument of a holomorphic function. We now explain the
main ideas to prove Theorem 1.1.
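The numerical gain at $\sigma=0.90$ can be reproduced directly from the two closed-form bounds quoted above. The following sketch (constants taken from the text; the still sharper value $130.07$ uses the logarithmic form (1.6) and the constants of Section 4, which are not evaluated here) recovers Ramaré's $2.15\cdot 10^{10}$ figure:

```python
import math

H0 = 3.0610046e10          # Platt's verified height H0
logT = math.log(H0)

# Ramare's explicit bound (1.5) specialized to sigma = 0.90
ramare = 1293.48 * logT ** (16 / 5) * H0 ** (4 / 15) + 51.50 * logT ** 2

# The new bound (1.7) at sigma = 0.90
new_bound = 11.499 * logT ** (16 / 5) * H0 ** (4 / 15) + 3.186 * logT ** 2

print(f"Ramare (1.5) at T = H0: {ramare:.4e}")      # about 2.15e10
print(f"new (1.7) at T = H0:    {new_bound:.4e}")   # about 1.9e8
```

The power-of-$T$ terms are identical, so the roughly hundredfold improvement comes entirely from the leading constant.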
## 2\. Setting up the proof
### 2.1. Littlewood’s classical method to count the zeros
Let $h(s)=\zeta(s)M(s)$ where $M(s)$ is entire and
(2.1)
$N_{h}(\sigma,T)=\\#\Big{\\{}\varrho^{\prime}=\beta^{\prime}+i\gamma^{\prime}\in\mathbb{C}\
:\ h(\varrho^{\prime})=0,\sigma<\beta^{\prime}<1,\text{ and
}0<\gamma^{\prime}<T\Big{\\}}.$
Then for a parameter $H\in(0,H_{0})$, we have by (1.2) that
$N(\sigma,T)=N(\sigma,T)-N(\sigma,H)\leq N_{h}(\sigma,T)-N_{h}(\sigma,H)$
for $T\geq H_{0}$. We compare the above number of zeros for $h$ to its
average:
$N_{h}(\sigma,T)-N_{h}(\sigma,H)\leq\frac{1}{\sigma-\sigma^{\prime}}\int_{\sigma^{\prime}}^{\mu}(N_{h}(\tau,T)-N_{h}(\tau,H))\,d\tau$
where $\mu>1$ and $\sigma^{\prime}$ is a parameter satisfying
$\frac{1}{2}<\sigma^{\prime}<\sigma$. Let $\mathcal{R}$ be the rectangle with
vertices $\sigma^{\prime}+iH$, $\mu+iH$, $\mu+iT$, and $\sigma^{\prime}+iT$.
We apply the classical lemma of Littlewood as stated in [25, (9.9.1)]:
(2.2)
$\int_{\sigma^{\prime}}^{\mu}\Big{(}N_{h}(\tau,T)-N_{h}(\tau,H)\Big{)}d\tau=-\frac{1}{2\pi
i}\int_{\mathcal{R}}\log h(s)ds.$
Thus
(2.3)
$N(\sigma,T)\leq\frac{1}{2\pi(\sigma-\sigma^{\prime})}\Big{(}\int_{H}^{T}\log|h(\sigma^{\prime}+it)|dt\Big{.}\\\
\Big{.}+\int_{\sigma^{\prime}}^{\mu}\arg
h(\tau+iT)d\tau-\int_{\sigma^{\prime}}^{\mu}\arg
h(\tau+iH)d\tau-\int_{H}^{T}\log|h(\mu+it)|dt\Big{)}.$
As $T$ grows larger, the main contribution arises from the first integral. The
second and third integrals can be treated by using a general result for
bounding $\arg f(s)$ for $f$ a holomorphic function. To do this we give an
improvement of a lemma of Titchmarsh [25, p. 213] (see Proposition 4.10 and
Corollary 4.11 below). The fourth integral can be estimated with a standard
mean value theorem for Dirichlet polynomials (see Lemma 3.6). A key goal is to
minimize the above expression over admissible functions $h$. We now give an
idea of how to estimate the first integral in (2.3).
### 2.2. How the second mollified moment of $\zeta(s)$ occurs
Let $X\geq 1$ be a parameter and define the mollifier to be
(2.4) $M_{X}(s)=\sum_{n\leq X}\frac{\mu(n)}{n^{s}}$
where $\mu(n)$ is the Möbius function. Note that this is a truncation of the
Dirichlet series for $\zeta(s)^{-1}$. These mollifiers were invented by Bohr
and Landau [2] to help control the size of $\zeta(s)$ in the critical strip.
Futhermore, let
(2.5) $f_{X}(s)=\zeta(s)M_{X}(s)-1.$
Note that the series expansion for $f_{X}$ is given by
(2.6) $\displaystyle f_{X}(s)=\sum_{n>X}\Big{(}\sum_{\begin{subarray}{c}d\mid
n\\\ d\leq X\end{subarray}}\mu(d)\Big{)}n^{-s}=\sum_{n\geq
1}\frac{\lambda_{X}(n)}{n^{s}},$ (2.7) with $\displaystyle\lambda_{X}(n)=0\
\text{ if }n\leq X,\ \ \lambda_{X}(n)=\sum_{\begin{subarray}{c}d\mid n\\\
d\leq X\end{subarray}}\mu(d)\ \text{ if }n>X.$
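The vanishing of the coefficients $\lambda_{X}(n)$ for $n\leq X$ follows from $\sum_{d\mid n}\mu(d)=[n=1]$, which cancels against the subtracted $1$ in $f_{X}=\zeta M_{X}-1$. A small illustrative computation (with the sample value $X=10$):

```python
def mobius(n):
    # Moebius function mu(n) via trial division
    mu, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:   # square factor => mu(n) = 0
                return 0
            mu = -mu
        p += 1
    if n > 1:
        mu = -mu
    return mu

def lam(X, n):
    # lambda_X(n) as in (2.7)
    if n <= X:
        return 0
    return sum(mobius(d) for d in range(1, X + 1) if n % d == 0)

X = 10
# For n <= X every divisor of n is <= X, so the Dirichlet coefficient of
# zeta(s) M_X(s) - 1 equals sum_{d|n} mu(d) - [n=1] = 0 there.
print([lam(X, n) for n in range(1, 16)])
# -> [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, -1, -1]
```

Equivalently, for $n>X$ one has $\lambda_{X}(n)=-\sum_{d\mid n,\,d>X}\mu(d)$, e.g. $\lambda_{10}(14)=-\mu(14)=-1$.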
We shall choose $h=h_{X}$ with
(2.8) $h_{X}(s)=1-f_{X}(s)^{2}=\zeta(s)M_{X}(s)(2-\zeta(s)M_{X}(s)).$
Since we have
$\frac{1}{b-a}\int_{a}^{b}\log
f(t)dt\leq\log\left(\frac{1}{b-a}\int_{a}^{b}f(t)dt\right),$
for any positive continuous $f$ (Jensen’s inequality for the concave
logarithm), and since $|h_{X}(s)|\leq 1+|f_{X}(s)|^{2}$, we deduce that
(2.9)
$\int_{H}^{T}\log\left(|h_{X}(\sigma^{\prime}+it)|\right)dt\leq(T-H)\log\left(1+\frac{1}{T-H}\int_{H}^{T}|f_{X}(\sigma^{\prime}+it)|^{2}dt\right).$
We denote
(2.10) $F_{X}(\sigma,T)=\int_{0}^{T}|f_{X}(\sigma+it)|^{2}dt\text{ where
}\sigma\geq\frac{1}{2}.$
To summarize, the key point in getting a good bound on
$N(\sigma,T)-N(\sigma,H)$ is to obtain a good bound for $F_{X}(\sigma,T)$.
Following a classical method due to Ingham, we compare it to a smoothed
version of itself.
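The log-average inequality invoked above is easy to illustrate numerically; the sketch below uses the arbitrary sample function $f(t)=1+t^{2}$ on $[0,3]$ and the midpoint rule:

```python
import math

# Check (1/(b-a)) \int_a^b log f(t) dt  <=  log( (1/(b-a)) \int_a^b f(t) dt )
# for the sample positive continuous function f(t) = 1 + t^2 on [0, 3].
a, b, N = 0.0, 3.0, 100000
h = (b - a) / N
ts = [a + (i + 0.5) * h for i in range(N)]
avg_of_log = sum(math.log(1 + t * t) for t in ts) * h / (b - a)
log_of_avg = math.log(sum(1 + t * t for t in ts) * h / (b - a))
print(avg_of_log, "<=", log_of_avg)   # ~1.135 <= ~1.386 (= log 4)
```

Here the average of $f$ over $[0,3]$ is exactly $4$, so the right-hand side is $\log 4$.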
### 2.3. Ingham’s smoothing method
Let $\sigma_{1}$ and $\sigma_{2}$ be such that $\sigma_{1}<\sigma<\sigma_{2}$.
Let $T>0$ and $g=g_{T}$ be a non-negative, real valued function, depending on
the parameter $T$, and holomorphic in
$\sigma_{1}\leq{\mathfrak{Re}}(s)\leq\sigma_{2}$. We define
(2.11)
$\mathcal{M}_{g,T}(X,\sigma)=\int_{-\infty}^{+\infty}|g(\sigma+it)|^{2}|f_{X}(\sigma+it)|^{2}dt.$
We shall consider $g$ of a special shape. For $\alpha,\beta>0$, assume that
there exist positive functions $\omega_{1},\omega_{2}$ such that $g$
satisfies, for all $\sigma\in[\sigma_{1},\sigma_{2}]$,
(2.12)
$\displaystyle|g(\sigma+it)|\leq\omega_{1}(\sigma,T,\alpha)e^{-\alpha\left(\frac{|t|}{T}\right)^{\beta}}\
\text{ for all}\ t,$ (2.13)
$\displaystyle\omega_{2}(\sigma,T,\alpha)\leq|g(\sigma+it)|\ \text{ for all}\
t\in[H,T].$
In addition, we assume that $|g|$ is even in $t$:
(2.14) $|g(\sigma-it)|=|g(\sigma+it)|\text{ for
}\sigma\in(\sigma_{1},\sigma_{2})\text{ and }t\in\mathbb{R}.$
Thus $F_{X}(\sigma,T)\ll_{g}\mathcal{M}_{g,T}(X,\sigma)$, and more precisely
(2.15)
$F_{X}(\sigma,T)\leq\frac{\mathcal{M}_{g,T}(X,\sigma)}{2(\omega_{2}(\sigma,T,\alpha))^{2}}.$
In this article, we shall choose a family of weights of the form
(2.16) $g(s)=g_{T}(s)=\frac{s-1}{s}e^{\alpha(\frac{s}{T})^{2}},\text{ where
}\alpha>0.$
These weights will satisfy the above conditions with $\beta=2$. We remark that
Ingham [12] made use of the weight $g(s)=\frac{s-1}{s\cos(\frac{1}{2T})}$ and
Ramaré [24] used $g(s)=\frac{s-1}{s(\cos s)^{\frac{1}{2T}}}$. These weights
satisfy (2.12) with $\beta=1$. We also studied the weights
$g(s)=\frac{s-1}{s(\cos s)^{\frac{\alpha}{T}}}$ and
$g(s)=\frac{s-1}{s(\cos\frac{\alpha}{T})}$. However, we obtained the best
results with $g$ given by (2.16). The function $g$ is chosen so that, for
fixed $\sigma$, $g(\sigma+it)$ behaves like the indicator function
$\mathds{1}_{[0,T]}(t)$, while for $t$ large, $g(\sigma+it)$ has rapid decay.
Nevertheless, it is an open problem to determine the best weights $g$ to use
in this problem.
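Lemma 3.7 below verifies (2.12) and (2.13) for the Gaussian-type weight (2.16); the two envelope inequalities can also be confirmed on a grid. A sketch with arbitrary sample parameters ($\alpha=1$, $T=100$, $H=10$, $\sigma=0.75$, not values used in the paper):

```python
import cmath
import math

# Grid check of |g(sigma+it)| <= omega_1 * exp(-alpha (|t|/T)^2) for all t,
# and omega_2 <= |g(sigma+it)| for H <= t <= T, with g from (2.16).
alpha, T, H, sigma = 1.0, 100.0, 10.0, 0.75

def g(s):
    return (s - 1) / s * cmath.exp(alpha * (s / T) ** 2)

w1 = math.exp(alpha * (sigma / T) ** 2)                        # omega_1 of Lemma 3.7
w2 = (1 - 1 / H) * math.exp(alpha * (sigma / T) ** 2 - alpha)  # omega_2 of Lemma 3.7

upper_ok = all(
    abs(g(complex(sigma, t))) <= w1 * math.exp(-alpha * (abs(t) / T) ** 2) + 1e-12
    for t in (k * 0.5 for k in range(-400, 401))
)
lower_ok = all(
    w2 <= abs(g(complex(sigma, t))) + 1e-12
    for t in (H + k * (T - H) / 200 for k in range(201))
)
print(upper_ok, lower_ok)   # True True
```

The upper bound uses $|(s-1)/s|\leq 1$ for $\sigma\geq\tfrac12$ and $|e^{\alpha(s/T)^{2}}|=e^{\alpha(\sigma^{2}-t^{2})/T^{2}}$; the lower bound uses $|(s-1)/s|\geq 1-1/t\geq 1-1/H$ on $[H,T]$.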
### 2.4. Final bound
Finally, to bound the integral $\mathcal{M}_{g,T}$, we appeal to a convexity
estimate for integrals (see [7]). For $\sigma_{2}>1$ (and $\sigma_{2}$ close
to $1$), if $\frac{1}{2}\leq\sigma\leq\sigma_{2}$, then
(2.17)
$\mathcal{M}_{g,T}(X,\sigma)\leq\mathcal{M}_{g,T}(X,\tfrac{1}{2})^{\frac{\sigma_{2}-\sigma}{\sigma_{2}-\frac{1}{2}}}\mathcal{M}_{g,T}(X,\sigma_{2})^{\frac{\sigma-\frac{1}{2}}{\sigma_{2}-\frac{1}{2}}}.$
The largest contribution arises from $\mathcal{M}_{g,T}(X,\frac{1}{2})$. To
bound this we make use of:
* •
bounds (2.12), (2.13) for $g$ (see Lemma 3.7),
* •
a version of Montgomery and Vaughan’s Mean Value Theorem for Dirichlet
polynomials (see Lemma 3.6),
* •
bounds for arithmetic sums to bound the second moment of the mollifier $M_{X}$
(we use Ramaré’s bounds, see Lemma 3.3 and 3.4),
* •
the most recent explicit subconvexity bound for the Riemann zeta function (due
to Hiary [9], see Lemma 3.2).
## 3\. Preliminary lemmas
### 3.1. Bounds for the Riemann zeta function
In this section we record a number of bounds for the zeta function. Rademacher
[22, Theorem 4] established the following explicit convexity bound.
###### Lemma 3.1.
For $-\frac{1}{2}\leq-\eta\leq\sigma\leq 1+\eta\leq\frac{3}{2}$, we have
(3.1) $|\zeta(s)|\leq
3\frac{|1+s|}{|1-s|}\left(\frac{|1+s|}{2\pi}\right)^{\frac{1}{2}(1-\sigma+\eta)}\zeta(1+\eta).$
The next lemma is an explicit version of van der Corput’s subconvexity bound
for $\zeta$ on the critical line, recently proven by Hiary [9].
###### Lemma 3.2.
We have
(3.2) $\displaystyle\ |\zeta(\tfrac{1}{2}+it)|\leq a_{1}t^{\frac{1}{6}}\log t$
$\displaystyle\ \text{for all }\ t\geq 3,$ (3.3) $\displaystyle\max_{|t|\leq
T}|\zeta(\tfrac{1}{2}+it)|\leq a_{1}T^{\frac{1}{6}}\log T+a_{2}$
$\displaystyle\ \text{for all }\ T>0,$
with
(3.4) $a_{1}=0.63\text{ and }a_{2}=2.851.$
###### Proof of Lemma 3.2.
Statement (3.2) is [9, Theorem 1.1]. For $T\in[0,3]$, [9, Theorem 1.1]
provides that $|\zeta(\tfrac{1}{2}+it)|\leq 1.461$. The minimum of the
function $t^{\frac{1}{6}}\log(t)$ occurs at $t=e^{-6}$, where it equals
$-6/e$. We require $a_{1}t^{\frac{1}{6}}\log(t)+a_{2}\geq 1.461$; choosing
$a_{2}$ as in the statement of the lemma achieves this. ∎
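The calculus step in this proof is a one-line check (the derivative of $t^{1/6}\log t$ is $t^{-5/6}(\log(t)/6+1)$, vanishing at $t=e^{-6}$):

```python
import math

a1, a2 = 0.63, 2.851
# Minimum of t^{1/6} log t occurs at t = e^{-6}; its value there is -6/e.
t_min = math.exp(-6)
min_val = t_min ** (1 / 6) * math.log(t_min)
print(min_val)              # -6/e ~ -2.2073
print(a1 * min_val + a2)    # ~ 1.4604, the target 1.461 up to rounding
```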
### 3.2. Bounds for arithmetic sums
We list here some preliminary lemmas from [24] providing estimates for finite
arithmetic sums. Let
(3.5) $\begin{split}b_{1}=0.62,\ b_{2}=1.048,\ b_{3}=0.605,\ \text{and}\
b_{4}=0.529.\end{split}$
###### Lemma 3.3.
We have
(3.6) $\displaystyle\sum_{n\leq X}\mu^{2}(n)\leq b_{1}X\ \text{ for all }\
X\geq 1700,$ (3.7) $\displaystyle\sum_{n\leq
X}\frac{\mu^{2}(n)}{n}-\frac{6}{\pi^{2}}\log X\leq b_{2}\ \text{ for all }\
X\geq 1002.$
(3.6) is [24, Lemma 3.1] and (3.7) is [24, Lemma 3.4].
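Both inequalities of Lemma 3.3 are theorems of [24] in the stated ranges; the sketch below only spot-checks them at the single sample value $X=10^{5}$, using a squarefree sieve for $\mu^{2}$:

```python
import math

# Spot-check of (3.6) and (3.7) at X = 10^5 (they hold for all X >= 1700
# and X >= 1002 respectively, by [24]).
X = 100000
squarefree = [True] * (X + 1)
p = 2
while p * p <= X:
    for m in range(p * p, X + 1, p * p):
        squarefree[m] = False   # n divisible by a square is not squarefree
    p += 1

count = sum(1 for n in range(1, X + 1) if squarefree[n])       # sum mu^2(n)
harm = sum(1.0 / n for n in range(1, X + 1) if squarefree[n])  # sum mu^2(n)/n

print(count <= 0.62 * X)                                        # (3.6)
print(harm - (6 / math.pi ** 2) * math.log(X) <= 1.048)         # (3.7)
```

The density of squarefree integers is $6/\pi^{2}\approx 0.6079$, so the constant $b_{1}=0.62$ leaves only a small margin.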
###### Lemma 3.4.
Let $\tau>1$, $\delta>0$, and $X\geq 10^{9}$, and let $\gamma$ denote Euler’s constant.
Then
(3.8)
$\displaystyle\sum_{X<n<5X}\frac{\lambda_{X}(n)^{2}}{n^{2}}\leq\frac{b_{3}}{X},$
(3.9) $\displaystyle\sum_{n\geq
1}\frac{\lambda_{X}(n)^{2}}{n^{\tau}}\leq\frac{b_{4}\tau^{2}}{\tau-1}e^{\gamma(\tau-1)}\log
X,$ (3.10) $\displaystyle\sum_{n\geq
1}\frac{\lambda_{X}(n)^{2}}{n^{1+\frac{\delta}{\log
X}}}\leq\frac{b_{4}}{\delta}\Big{(}1+\frac{\delta}{\log
X}\Big{)}^{2}e^{\frac{\delta\gamma}{\log X}}(\log X)^{2},$ (3.11)
$\displaystyle\sum_{n\geq 1}\frac{\lambda_{X}(n)^{2}}{n^{2+\frac{2\delta}{\log
X}}}\leq\frac{b_{4}}{5\delta e^{\delta}}\Big{(}1+\frac{{\delta}}{\log
X}\Big{)}^{2}e^{\frac{{\delta}(\gamma-\log 5)}{\log X}}\frac{(\log
X)^{2}}{X}+\frac{b_{3}e^{-2{\delta}}}{X}.$
###### Proof.
(3.8) is [24, Lemma 5.6] and (3.9) is [24, Lemma 5.5]. (3.10) is a direct
consequence of (3.9), taking $\tau=1+\frac{{\delta}}{\log X}$.
For (3.11) we set $\tau=2+\frac{2{\delta}}{\log X}$. Since
$\lambda_{X}(n)^{2}=0$ when $1\leq n\leq X$, then
$\sum_{n\geq
1}\frac{\lambda_{X}(n)^{2}}{n^{\tau}}=\sum_{X<n<5X}\frac{\lambda_{X}(n)^{2}}{n^{\tau}}+\sum_{n\geq
5X}\frac{\lambda_{X}(n)^{2}}{n^{\tau}}.$
Since $\tau\geq 2$, we use (3.8) and find that the first sum is
$\leq\frac{1}{X^{\tau-2}}\sum_{X<n<5X}\frac{\lambda_{X}(n)^{2}}{n^{2}}\leq\frac{1}{X^{\tau-2}}\frac{b_{3}}{X}=\frac{b_{3}e^{-2{\delta}}}{X}.$
We bound the second sum using $n^{\tau}\geq(5X)^{1+\frac{{\delta}}{\log
X}}n^{1+\frac{{\delta}}{\log X}}$ and (3.10). We find that it is
$\leq\frac{1}{(5X)^{1+\frac{{\delta}}{\log
X}}}\frac{b_{4}}{{\delta}}\Big{(}1+\frac{{\delta}}{\log
X}\Big{)}^{2}e^{\frac{{\delta}\gamma}{\log X}}(\log X)^{2}.$
Combining these bounds, we obtain
$\sum_{n\geq
1}\frac{\lambda_{X}(n)^{2}}{n^{\tau}}\leq\frac{b_{4}}{5{\delta}e^{\delta}}\Big{(}1+\frac{{\delta}}{\log
X}\Big{)}^{2}e^{\frac{{\delta}(\gamma-\log 5)}{\log X}}\frac{(\log
X)^{2}}{X}+\frac{b_{3}e^{-2{\delta}}}{X}.$
∎
###### Lemma 3.5.
Let $\tau>1$ and let $\gamma$ denote Euler’s constant. Then for $X\geq 1$,
(3.12) $\sum_{n\geq
X}\frac{d(n)}{n^{\tau}}\leq\frac{\tau}{X^{\tau-1}}\left(\frac{\log
X}{\tau-1}+\frac{1}{(\tau-1)^{2}}+\frac{\gamma}{\tau-1}+\frac{7}{12\tau
X}\right)\\\ $
and for $X\geq 47$,
(3.13) $\sum_{n\geq
X}\frac{d(n)^{2}}{n^{\tau}}\leq\frac{2\tau}{X^{\tau-1}}\Big{(}\frac{(\log
X)^{3}}{\tau-1}+\frac{3\log^{2}X}{(\tau-1)^{2}}+\frac{6\log
X}{(\tau-1)^{3}}+\frac{6}{(\tau-1)^{4}}\Big{)}.$
###### Proof.
By partial summation, we have
$\displaystyle\sum_{n\geq
X}\frac{d(n)}{n^{\tau}}\leq\tau\int_{X}^{\infty}\frac{\sum_{n\leq
t}d(n)}{t^{\tau+1}}dt.$
Using $\sum_{n\leq t}d(n)\leq t(\log t+\gamma+\frac{7}{12t})$, for $t\geq 1$,
which follows from [23, Equation 3.1], we have
$\displaystyle\sum_{n\geq
X}\frac{d(n)}{n^{\tau}}\leq\tau\left(\int_{X}^{\infty}\frac{\log
t}{t^{\tau}}dt+\gamma\int_{X}^{\infty}\frac{dt}{t^{\tau}}+\frac{7}{12}\int_{X}^{\infty}\frac{dt}{t^{\tau+1}}\right).$
Using the integral evaluations
$\int_{X}^{\infty}\frac{\log t}{t^{c}}dt=\frac{\log
X}{(c-1)X^{c-1}}+\frac{1}{(c-1)^{2}X^{c-1}}\text{ and
}\int_{X}^{\infty}\frac{dt}{t^{c}}=\frac{1}{(c-1)X^{c-1}},\text{ where }c>1,$
we obtain (3.12). The second estimate is similar. We have
$\sum_{n\geq
X}\frac{d(n)^{2}}{n^{\tau}}\leq\tau\int_{X}^{\infty}\frac{\sum_{n\leq
t}d(n)^{2}}{t^{\tau+1}}\,dt.$
It suffices to use the elementary bound $\sum_{n\leq t}d(n)^{2}\leq t(\log
t+1)^{3}\leq 2t\log^{3}t$ for $t\geq 47$, derived by Gowers [6]. Thus
$\sum_{n\geq X}\frac{d(n)^{2}}{n^{\tau}}\leq 2\tau\int_{X}^{\infty}\frac{\log^{3}t}{t^{\tau}}\,dt=\frac{2\tau}{X^{\tau-1}}\left(\frac{(\log X)^{3}}{\tau-1}+\frac{3\log^{2}X}{(\tau-1)^{2}}+\frac{6\log X}{(\tau-1)^{3}}+\frac{6}{(\tau-1)^{4}}\right).$
∎
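As a numerical sanity check, the two ingredients of this proof — the partial-sum bound $\sum_{n\leq t}d(n)\leq t(\log t+\gamma+\frac{7}{12t})$ and the resulting tail estimate (3.12) — can be tested directly; the choices $\tau=2$, $X=100$ below are illustrative:

```python
import math

# Divisor sieve: d[n] = number of divisors of n.
GAMMA = 0.5772156649015329  # Euler's constant
N = 10**5
d = [0] * (N + 1)
for i in range(1, N + 1):
    for j in range(i, N + 1, i):
        d[j] += 1

# Partial-sum bound sum_{n<=t} d(n) <= t(log t + gamma + 7/(12t)) for t >= 1.
s = 0
for t in range(1, 5001):
    s += d[t]
    assert s <= t * (math.log(t) + GAMMA + 7 / (12 * t))

# Tail bound (3.12) at tau = 2, X = 100; truncating the sum at N only
# lowers the left-hand side, so this is a (weaker) consistency check.
tau, X = 2, 100
tail = sum(d[n] / n**tau for n in range(X, N + 1))
rhs = (tau / X**(tau - 1)) * (math.log(X) / (tau - 1) + 1 / (tau - 1)**2
                              + GAMMA / (tau - 1) + 7 / (12 * tau * X))
assert tail <= rhs
```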
### 3.3. Mean value theorem for Dirichlet polynomials
We require Montgomery and Vaughan’s mean value theorem for Dirichlet
polynomials in the form derived by Ramaré [24].
###### Lemma 3.6.
Let $(u_{n})$ be a real-valued sequence. For every $T\geq 0$ we have
(3.14)
$\int_{0}^{T}\Big{|}\sum_{n=1}^{\infty}u_{n}n^{it}\Big{|}^{2}dt\leq\sum_{n\geq
1}|u_{n}|^{2}(T+\pi m_{0}(n+1)),$
with
(3.15) $m_{0}=\sqrt{1+\frac{2}{3}\sqrt{\frac{6}{5}}}.$
Let $0<T_{1}<T_{2}$. Then
(3.16)
$\int_{T_{1}}^{T_{2}}|\sum_{n=1}^{\infty}u_{n}n^{it}|^{2}dt\leq\sum_{n\geq
1}|u_{n}|^{2}(T_{2}-T_{1}+2\pi m_{0}(n+1)).$
###### Proof.
The inequality (3.14) is [24, Lemma 6.5], and (3.16) follows from two applications of (3.14). This argument is an explicit version of Corollary 3 of [18], which makes use of the main theorem of [21]. ∎
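The constant $m_{0}$ in (3.15) is approximately $1.31540$, and inequality (3.14) can be illustrated numerically on a short Dirichlet polynomial; the coefficients $u_{n}$ below are arbitrary test values, not from the paper:

```python
import cmath, math

m0 = math.sqrt(1 + (2 / 3) * math.sqrt(6 / 5))   # the constant (3.15)

u = [1.0, 0.5, -0.3]      # illustrative u_n for n = 1, 2, 3
T = 10.0
steps = 20000
dt = T / steps
lhs = 0.0
for j in range(steps):
    t = (j + 0.5) * dt    # midpoint rule for int_0^T |sum u_n n^{it}|^2 dt
    z = sum(un * cmath.exp(1j * t * math.log(n + 1)) for n, un in enumerate(u))
    lhs += abs(z) ** 2 * dt
# Right-hand side of (3.14): sum |u_n|^2 (T + pi m0 (n+1)), n = 1, 2, 3.
rhs = sum(un ** 2 * (T + math.pi * m0 * (n + 2)) for n, un in enumerate(u))
assert lhs <= rhs
```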
### 3.4. Choice for the smooth weight $g$
###### Lemma 3.7.
Let $\alpha>0$ and $\beta=2$. Let $s=\sigma+it$ and let $g$ be as defined in
(2.16):
(3.17) $g(s)=\frac{s-1}{s}e^{\alpha\left(\frac{s}{T}\right)^{2}}.$
Let $\sigma_{1}=\frac{1}{2},\sigma_{2}>1$, and $H<T$. Define
(3.18) $\displaystyle\omega_{1}(\sigma,T,\alpha)$
$\displaystyle=e^{\alpha\left(\frac{\sigma}{T}\right)^{2}},$ (3.19)
$\displaystyle\omega_{2}(\sigma,T,\alpha)$
$\displaystyle=\left(1-\frac{1}{H}\right)e^{\alpha\left(\frac{\sigma}{T}\right)^{2}-\alpha}.$
Then for $\frac{1}{2}\leq\sigma\leq\sigma_{2}$, $g$ satisfies (2.12) and
(2.13):
(3.20)
$\displaystyle|g(\sigma+it)|\leq\omega_{1}(\sigma,T,\alpha)e^{-\alpha\left(\frac{|t|}{T}\right)^{2}}$
$\displaystyle\text{ for all }\ t,$ (3.21)
$\displaystyle\omega_{2}(\sigma,T,\alpha)\leq$ $\displaystyle|g(\sigma+it)|$
$\displaystyle\text{ for }\ H\leq t\leq T.$
###### Proof.
Since $\sigma\geq\frac{1}{2}$, we have
$\left|\frac{s-1}{s}\right|^{2}=1-\frac{2\sigma-1}{\sigma^{2}+t^{2}}\leq 1.$
Thus
$|g(s)|\leq|e^{\alpha(\frac{s}{T})^{2}}|=e^{\frac{\alpha\sigma^{2}}{T^{2}}}e^{\frac{-\alpha
t^{2}}{T^{2}}}$ and we have the expression for $\omega_{1}(\sigma,T,\alpha)$.
In addition, $|\frac{s-1}{s}|=|1-\frac{1}{s}|\geq 1-\frac{1}{|s|}\geq
1-\frac{1}{|t|}$, so for all $t\in[H,T]$, we have
$|g(s)|\geq\left(1-|t|^{-1}\right)e^{\frac{\alpha\sigma^{2}}{T^{2}}}e^{\frac{-\alpha
t^{2}}{T^{2}}}\geq(1-H^{-1})e^{\frac{\alpha\sigma^{2}}{T^{2}}}e^{-\alpha},$
which gives $\omega_{2}(\sigma,T,\alpha)$. ∎
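A quick numerical check of the two bounds (3.20) and (3.21), with illustrative values $\alpha=2$, $T=1000$, $H=50$ (not from the paper; any $\alpha>0$ and $1<H<T$ behave the same way):

```python
import cmath, math

alpha, T, H = 2.0, 1000.0, 50.0

def g(s):
    # The weight (3.17): g(s) = ((s-1)/s) exp(alpha (s/T)^2).
    return (s - 1) / s * cmath.exp(alpha * (s / T) ** 2)

for sigma in (0.5, 0.75, 1.0, 1.5):
    w1 = math.exp(alpha * (sigma / T) ** 2)                        # omega_1, (3.18)
    w2 = (1 - 1 / H) * math.exp(alpha * (sigma / T) ** 2 - alpha)  # omega_2, (3.19)
    for j in range(2001):
        t = j * T / 2000
        val = abs(g(complex(sigma, t)))
        assert val <= w1 * math.exp(-alpha * (t / T) ** 2) + 1e-12  # (3.20)
        if H <= t <= T:
            assert val >= w2 - 1e-12                                # (3.21)
```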
## 4\. Proof of the Main Theorem
Unless otherwise specified, in the rest of the article we set $H_{0}=3.0610046\cdot 10^{10}$, and we impose the following conditions on the parameters $k,\sigma_{1},\delta$, and $\sigma_{2}$:
(4.1) $\displaystyle k\geq\frac{10^{9}}{H_{0}},\ \sigma_{1}=\frac{1}{2},\
\delta>0,\ \text{and}\ \sigma_{2}=1+\frac{{\delta}}{\log X}.$
### 4.1. Bounding $F_{X}(\sigma,T)$
We establish here some preliminary lemmas to estimate $F_{X}(\sigma,T)$ at
$\frac{1}{2}$ and at $1+\frac{{\delta}}{\log X}$.
#### 4.1.1. Bounding $F_{X}(\frac{1}{2},T)$
We first need to bound the second moment of $M_{X}(\frac{1}{2}+it)$, where
$M_{X}$ is defined in (2.4).
###### Lemma 4.1.
Let $T>0$, $X\geq kH_{0}$, and let $k$ satisfy (4.1). Then
(4.2)
$\int_{0}^{T}\left|M_{X}(\tfrac{1}{2}+it)\right|^{2}dt\leq(C_{1}T+C_{2}X)(\log
X),$
where
(4.3) $\displaystyle C_{1}$
$\displaystyle=C_{1}(k)=\frac{6}{\pi^{2}}+\frac{b_{2}}{\log(kH_{0})},$ (4.4)
$\displaystyle C_{2}$ $\displaystyle=C_{2}(k)=\frac{\pi
m_{0}b_{1}}{\log(kH_{0})}+\frac{6m_{0}}{\pi kH_{0}}+\frac{\pi
m_{0}b_{2}}{kH_{0}\log(kH_{0})},$
and the $b_{i}$’s are defined in (3.5) and $m_{0}$ in (3.15).
###### Proof.
We apply (3.14) to $u_{n}=\frac{\mu(n)}{n^{\frac{1}{2}}}$:
$\int_{0}^{T}\left|M_{X}\left(\tfrac{1}{2}+it\right)\right|^{2}dt\leq\sum_{n\leq
X}\frac{\mu^{2}(n)}{n}(T+\pi m_{0}(n+1)).$
Since $X\geq 1700$, we apply (3.6) to $(T+\pi m_{0})\sum_{n\leq
X}\frac{\mu^{2}(n)}{n}$ and (3.7) to $(\pi m_{0})\sum_{n\leq X}\mu^{2}(n)$
respectively. We factor $\log X$ to give
$\displaystyle\int_{0}^{T}\left|M_{X}\left(\tfrac{1}{2}+it\right)\right|^{2}dt$
$\displaystyle\leq(T+\pi m_{0})\left(\frac{6}{\pi^{2}}\log X+b_{2}\right)+\pi
m_{0}b_{1}X$ $\displaystyle=\left(\left(\frac{6}{\pi^{2}}+\frac{b_{2}}{\log
X}\right)T+\left(\frac{6m_{0}}{\pi X}+\frac{\pi m_{0}b_{2}}{X\log X}+\frac{\pi
m_{0}b_{1}}{\log X}\right)X\right)(\log X),$
and use the fact that $X\geq kH_{0}$ to obtain the announced bound. ∎
###### Lemma 4.2.
Let $T>0$, $X\geq kH_{0}$, and let $k$ satisfy (4.1). Then
(4.5) $F_{X}(\tfrac{1}{2},T)\leq C_{4}\left(T^{\frac{1}{6}}\log
T+\frac{a_{2}}{a_{1}}\right)^{2}\left(T+\frac{C_{2}}{C_{1}}X\right)(\log X),$
where $a_{1},a_{2}$ are defined in (3.4), $C_{1}$ in (4.3), $C_{2}$ in (4.4),
and
(4.6) $\displaystyle a_{3}=-\frac{6a_{1}}{e}+a_{2},$ (4.7) $\displaystyle
C_{3}=C_{3}(k)=a_{3}^{2}C_{1}(k)\log(kH_{0}),$ (4.8) $\displaystyle
C_{4}=C_{4}(k)=C_{1}(k)a_{1}^{2}\left(1+\frac{1}{\sqrt{C_{3}(k)}}\right)^{2}.$
###### Proof.
From the definition (2.10) of $F_{X}(\sigma,T)$ and Minkowski’s inequality, we have
$\sqrt{|F_{X}(\tfrac{1}{2},T)|}\leq\sqrt{\int_{0}^{T}|\zeta(\tfrac{1}{2}+it)M_{X}(\tfrac{1}{2}+it)|^{2}dt}+\sqrt{T}.$
To the last integral we apply Hiary’s subconvexity bound (3.3) to bound zeta
and (4.2) to bound the mean square of $M_{X}$. We let $I_{0}$ denote the
resulting bound so that
$I_{0}=(a_{1}T^{\frac{1}{6}}\log T+a_{2})^{2}(C_{1}T+C_{2}X)(\log X),$
and thus
$|F_{X}(\tfrac{1}{2},T)|\leq\left(\sqrt{I_{0}}+\sqrt{T}\right)^{2}=I_{0}\left(1+\sqrt{\frac{T}{I_{0}}}\right)^{2}.$
We note that $a_{1}T^{\frac{1}{6}}\log T+a_{2}$ is minimized at $T=e^{-6}$ and
we let $a_{3}$ represent this minimum. Then
$I_{0}\geq a_{3}^{2}(C_{1}T+C_{2}X)\log X\geq a_{3}^{2}C_{1}T\log X.$
We conclude with the lower bound $\frac{I_{0}}{T}\geq
a_{3}^{2}C_{1}\log(kH_{0})$, which is labeled $C_{3}$, and
$I_{0}=C_{1}a_{1}^{2}\left(T^{\frac{1}{6}}\log
T+\frac{a_{2}}{a_{1}}\right)^{2}\left(T+\frac{C_{2}}{C_{1}}X\right)(\log X),$
which completes the proof. ∎
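The minimization step in this proof can be checked numerically with placeholder values of $a_{1}$ and $a_{2}$ (the true values are fixed by (3.4)):

```python
import math

# Placeholder positive values for a_1, a_2; the claim being checked is that
# f(T) = a1 T^{1/6} log T + a2 attains its minimum a3 = -6 a1/e + a2 at T = e^{-6}.
a1, a2 = 1.0, 3.0

def f(T):
    return a1 * T ** (1 / 6) * math.log(T) + a2

a3 = -6 * a1 / math.e + a2          # the claimed minimum, as in (4.6)

assert abs(f(math.exp(-6)) - a3) < 1e-12
for j in range(1, 100001):
    T = j / 1000.0                  # grid over (0, 100]
    assert f(T) >= a3 - 1e-12
```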
#### 4.1.2. Bounding $F_{X}(\sigma_{2},T)$ at
$\sigma_{2}=1+\frac{\delta}{\log X}$
###### Lemma 4.3.
Let $T>0$, $X\geq kH_{0}$ and $k,\delta,\sigma_{2}$ satisfy (4.1). Then
(4.9)
$F_{X}(\sigma_{2},T)\leq\left(C_{5}(k,{\delta})+\frac{C_{6}(k,{\delta})(T+\pi
m_{0})}{X}\right)(\log X)^{2},$
where
(4.10) $\displaystyle C_{5}(k,{\delta})=\frac{\pi
m_{0}b_{4}}{2{\delta}}\Big{(}1+\frac{2{\delta}}{\log(kH_{0})}\Big{)}^{2}e^{\frac{2{\delta}\gamma}{\log(kH_{0})}},$
(4.11) $\displaystyle
C_{6}(k,{\delta})=\frac{b_{4}}{5{\delta}e^{\delta}}\Big{(}1+\frac{{\delta}}{\log(kH_{0})}\Big{)}^{2}+\frac{b_{3}e^{-2{\delta}}}{(\log(kH_{0}))^{2}},$
the $b_{i}$’s are defined in (3.5), $m_{0}$ in (3.15), and $\gamma$ is Euler’s
constant.
###### Proof.
Recall that $F_{X}$ is defined by (2.10); by (2.6) we have
$F_{X}(\sigma_{2},T)=\int_{0}^{T}|f_{X}(\sigma_{2}+it)|^{2}dt=\int_{0}^{T}\Big{|}\sum_{n\geq
1}\frac{\lambda_{X}(n)}{n^{\sigma_{2}+it}}\Big{|}^{2}dt.$
Inequality (3.14) implies the bound
$F_{X}(\sigma_{2},T)\leq\pi m_{0}\sum_{n\geq
1}\frac{\lambda_{X}(n)^{2}}{n^{2\sigma_{2}-1}}+(T+\pi m_{0})\sum_{n\geq
1}\frac{\lambda_{X}(n)^{2}}{n^{2\sigma_{2}}}.$
For $2\sigma_{2}-1=1+\frac{2{\delta}}{\log X}$ and $2\sigma_{2}=2+\frac{2{\delta}}{\log X}$, we apply the bounds for arithmetic sums (3.10) and (3.11) to bound the two sums above, respectively. Thus
$\displaystyle\sum_{n\geq
1}\frac{\lambda_{X}(n)^{2}}{n^{1+2\frac{{\delta}}{\log
X}}}\leq\frac{b_{4}}{2{\delta}}\Big{(}1+\frac{2{\delta}}{\log
X}\Big{)}^{2}e^{\frac{2{\delta}\gamma}{\log X}}(\log X)^{2},$ and
$\displaystyle\sum_{n\geq
1}\frac{\lambda_{X}(n)^{2}}{n^{2+\frac{2{\delta}}{\log
X}}}\leq\frac{b_{4}}{5{\delta}e^{\delta}}\Big{(}1+\frac{{\delta}}{\log
X}\Big{)}^{2}\frac{(\log X)^{2}}{X}+\frac{b_{3}e^{-2{\delta}}}{X}.$
We combine these results and use the fact that $X\geq kH_{0}$ to complete the
proof. ∎
From here we may derive a bound for $\mathcal{M}_{g,T}(X,\sigma)$.
### 4.2. Explicit upper bounds for the mollifier
$\mathcal{M}_{g,T}(X,\sigma)$
The results in this section are proven for a general weight $g$ satisfying the
conditions described in Section 2.3. In [7, Theorem 7], Hardy et al. proved
the following convexity estimate:
###### Lemma 4.4.
Let $\frac{1}{2}\leq\sigma_{1}<1<\sigma_{2}$, let $T>0$, and $X>1$. Then
(4.12)
$\mathcal{M}_{g,T}(X,\sigma)\leq\mathcal{M}_{g,T}(X,\sigma_{1})^{\frac{\sigma_{2}-\sigma}{\sigma_{2}-\sigma_{1}}}\mathcal{M}_{g,T}(X,\sigma_{2})^{\frac{\sigma-\sigma_{1}}{\sigma_{2}-\sigma_{1}}}.$
In order to obtain a bound for the mollifier $\mathcal{M}_{g,T}(X,\sigma)$
inside the strip $\frac{1}{2}\leq\sigma\leq 1+\frac{{\delta}}{\log X}$, we
need explicit bounds at the extremities $\frac{1}{2}$ and
$1+\frac{{\delta}}{\log X}$.
###### Lemma 4.5.
Let $T>0$, $X>0,\sigma\geq\frac{1}{2}$, and let $g$ satisfy conditions (2.12)
and (2.14). Then
(4.13) $\mathcal{M}_{g,T}(X,\sigma)\leq
4\omega_{1}(\sigma,T,\alpha)^{2}\alpha\beta\int_{0}^{\infty}x^{\beta-1}e^{-2\alpha
x^{\beta}}F_{X}(\sigma,xT)dx.$
###### Proof.
By (2.14), the symmetry $|g(\sigma+it)|=|g(\sigma-it)|$ for $t\in\mathbb{R}$, and an application of (2.12) to the weight $g$ in the definition (2.11) of $\mathcal{M}_{g,T}(X,\sigma)$, we have
(4.14) $\mathcal{M}_{g,T}(X,\sigma)\leq
2\omega_{1}(\sigma,T,\alpha)^{2}\int_{0}^{\infty}e^{-2\alpha\left(\frac{t}{T}\right)^{\beta}}|f_{X}(\sigma+it)|^{2}dt.$
Note that $\int_{0}^{U}|f_{X}(\sigma+it)|^{2}dt=F_{X}(\sigma,U)$ with
$F_{X}(\sigma,0)=0$ and
$\displaystyle{\lim_{U\to\infty}\Big{(}F_{X}(\sigma,U)e^{-2\alpha\left(\frac{U}{T}\right)^{\beta}}\Big{)}=0}$.
Integrating by parts gives
$\displaystyle\int_{0}^{\infty}e^{-2\alpha\left(\frac{t}{T}\right)^{\beta}}|f_{X}(\sigma+it)|^{2}dt$
$\displaystyle=2\alpha\beta\int_{0}^{\infty}\Big{(}\frac{t}{T}\Big{)}^{\beta}e^{-2\alpha\left(\frac{t}{T}\right)^{\beta}}F_{X}(\sigma,t)\frac{dt}{t}$
$\displaystyle=2\alpha\beta\int_{0}^{\infty}x^{\beta}e^{-2\alpha
x^{\beta}}F_{X}(\sigma,xT)\frac{dx}{x},$
by the variable change $x=\frac{t}{T}$. This combined with (4.14) yields the
announced (4.13). ∎
#### 4.2.1. Bounding $\mathcal{M}_{g,T}(X,\frac{1}{2})$
Let $\alpha,\beta,A>0$ and let $n$ be a non-negative integer. We define
(4.15) $I(A,n)=\int_{0}^{\infty}x^{A}e^{-2\alpha x^{\beta}}(\log x)^{n}dx.$
In our context, $I(A,n)$ is a constant depending on the parameters $\alpha$, $\beta$, $A$, and $n$, and is $\mathcal{O}(1)$ in comparison with $T$. The change of variable $y=2\alpha x^{\beta}$ leads to the identity
(4.16)
$I(A,n)=(2\alpha)^{-\frac{A+1}{\beta}}\beta^{-(n+1)}\sum_{j=0}^{n}\binom{n}{j}(-\log(2\alpha))^{j}\Gamma^{(n-j)}\left(\frac{A+1}{\beta}\right),$
where $\Gamma^{(j)}(z)$ denotes the $j$-th derivative of Euler’s gamma
function. We also define
(4.17)
$\begin{split}\mathcal{J}(k,T)=&I(\beta+\tfrac{1}{3},0)+\frac{C_{2}}{C_{1}}kI(\beta-\tfrac{2}{3},0)+\frac{2I(\beta+\frac{1}{3},1)+2\frac{C_{2}}{C_{1}}kI(\beta-\frac{2}{3},1)}{(\log
T)}\\\
&+\frac{I(\beta+\frac{1}{3},2)+\frac{C_{2}}{C_{1}}kI(\beta-\frac{2}{3},2)}{(\log
T)^{2}}+\frac{2a_{2}\left(I(\beta+\frac{1}{6},0)+\frac{C_{2}k}{C_{1}}I(\beta-\frac{5}{6},0)\right)}{a_{1}T^{\frac{1}{6}}(\log
T)}\\\
&+\frac{2a_{2}\left(I(\beta+\frac{1}{6},1)+\frac{C_{2}k}{C_{1}}I(\beta-\frac{5}{6},1)\right)}{a_{1}T^{\frac{1}{6}}(\log
T)^{2}}+\frac{a_{2}^{2}\left(I(\beta,0)+\frac{C_{2}k}{C_{1}}I(\beta-1,0)\right)}{a_{1}^{2}T^{\frac{1}{3}}(\log
T)^{2}},\end{split}$ (4.18) $\mathcal{U}(\alpha,k,T)=4\alpha\beta
C_{4}\omega_{1}(\tfrac{1}{2},T,\alpha)^{2}\mathcal{J}(k,T),$
where $\omega_{1}$ and $C_{4}$ are respectively defined in (3.18) and (4.8).
We remark that for our weight $g$ we have $\beta=2$, and we specialize to this value in our calculations of $\mathcal{J}(k,T)$.
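In the case $n=0$, identity (4.16) reduces to $I(A,0)=(2\alpha)^{-\frac{A+1}{\beta}}\beta^{-1}\Gamma(\frac{A+1}{\beta})$, which can be checked against direct quadrature for the exponents $A$ appearing in $\mathcal{J}(k,T)$; the value $\alpha=1$ below is illustrative:

```python
import math

alpha, beta = 1.0, 2.0   # beta = 2 as for our weight g; alpha is illustrative

def I_closed(A):
    # Identity (4.16) specialized to n = 0.
    return (2 * alpha) ** (-(A + 1) / beta) / beta * math.gamma((A + 1) / beta)

def I_numeric(A, steps=200000, xmax=10.0):
    # Midpoint rule for int_0^infty x^A e^{-2 alpha x^beta} dx;
    # the integrand is negligible beyond xmax.
    h = xmax / steps
    return sum(((j + 0.5) * h) ** A
               * math.exp(-2 * alpha * ((j + 0.5) * h) ** beta) * h
               for j in range(steps))

for A in (beta + 1 / 3, beta - 2 / 3, beta, beta - 1):
    assert abs(I_numeric(A) - I_closed(A)) < 1e-6
```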
###### Lemma 4.6.
Let $\alpha,\beta>0$ and let $g$ be a function satisfying (2.12) and (2.14). Let $T\geq H_{0}$, $X=kT$, and let $k$ satisfy (4.1). Then
$\mathcal{M}_{g,T}(X,\tfrac{1}{2})\leq\mathcal{U}(\alpha,k,T)(\log(kT))(\log
T)^{2}T^{\frac{4}{3}}.$
###### Proof.
We combine the bound (4.13) for $\mathcal{M}_{g,T}$ with the bound (4.5) for
$F_{X}(\frac{1}{2},xT)$:
$\mathcal{M}_{g,T}(X,\tfrac{1}{2})\leq 4\alpha\beta
C_{4}\omega_{1}(\tfrac{1}{2},T,\alpha)^{2}(\log
X)\left\\{T^{\frac{4}{3}}\int_{0}^{\infty}x^{\beta+\frac{1}{3}}(\log(xT))^{2}e^{-2\alpha
x^{\beta}}dx\right.\\\
\left.+\frac{2a_{2}}{a_{1}}T^{\frac{7}{6}}\int_{0}^{\infty}x^{\beta+\frac{1}{6}}\log(xT)e^{-2\alpha
x^{\beta}}dx+\frac{a_{2}^{2}}{a_{1}^{2}}T\int_{0}^{\infty}x^{\beta}e^{-2\alpha
x^{\beta}}dx\right.\\\
\left.+\frac{C_{2}}{C_{1}}XT^{\frac{1}{3}}\int_{0}^{\infty}x^{\beta-\frac{2}{3}}(\log(xT))^{2}e^{-2\alpha
x^{\beta}}dx+\frac{2a_{2}}{a_{1}}\frac{C_{2}}{C_{1}}XT^{\frac{1}{6}}\int_{0}^{\infty}x^{\beta-\frac{5}{6}}\log(xT)e^{-2\alpha
x^{\beta}}dx\right.\\\
\left.+\frac{a_{2}^{2}}{a_{1}^{2}}\frac{C_{2}}{C_{1}}X\int_{0}^{\infty}x^{\beta-1}e^{-2\alpha
x^{\beta}}dx\right\\}.$
We also use the fact that $(\log(xT))^{2}=(\log x)^{2}+2(\log x)(\log T)+(\log
T)^{2}$ and obtain
$\mathcal{M}_{g,T}(X,\tfrac{1}{2})\leq 4\alpha\beta
C_{4}\omega_{1}(\tfrac{1}{2},T,\alpha)^{2}(\log
X)\left\\{T^{\frac{4}{3}}\left(I(\beta+\tfrac{1}{3},2)+2(\log
T)I(\beta+\tfrac{1}{3},1)\right.\right.\\\ \left.\left.+(\log
T)^{2}I(\beta+\tfrac{1}{3},0)\right)+\frac{2a_{2}}{a_{1}}T^{\frac{7}{6}}\left(I(\beta+\tfrac{1}{6},1)+(\log
T)I(\beta+\tfrac{1}{6},0)\right)+\frac{a_{2}^{2}}{a_{1}^{2}}TI(\beta,0)\right.\\\
\left.+\frac{C_{2}}{C_{1}}XT^{\frac{1}{3}}\left(I(\beta-\tfrac{2}{3},2)+2(\log
T)I(\beta-\tfrac{2}{3},1)+(\log T)^{2}I(\beta-\tfrac{2}{3},0)\right)\right.\\\
\left.+\frac{2a_{2}}{a_{1}}\frac{C_{2}}{C_{1}}XT^{\frac{1}{6}}\left(I(\beta-\tfrac{5}{6},1)+(\log
T)I(\beta-\tfrac{5}{6},0)\right)+\frac{a_{2}^{2}}{a_{1}^{2}}\frac{C_{2}}{C_{1}}XI(\beta-1,0)\right\\},$
where $I$ is the integral defined in (4.15). At this point we use the choice $X=kT$, which optimizes the above bound, and we factor out the main term $T^{\frac{4}{3}}(\log T)^{2}$:
$\mathcal{M}_{g,T}(X,\tfrac{1}{2})\leq 4\alpha\beta
C_{4}\omega_{1}(\tfrac{1}{2},T,\alpha)^{2}(\log(kT))(\log
T)^{2}T^{\frac{4}{3}}\left\\{I(\beta+\tfrac{1}{3},0)+\frac{kC_{2}}{C_{1}}I(\beta-\tfrac{2}{3},0)\right.\\\
\left.+2\frac{I(\beta+\tfrac{1}{3},1)+\frac{kC_{2}}{C_{1}}I(\beta-\tfrac{2}{3},1)}{(\log
T)}+\frac{I(\beta+\frac{1}{3},2)+\frac{kC_{2}}{C_{1}}I(\beta-\frac{2}{3},2)}{(\log
T)^{2}}+\frac{2a_{2}}{a_{1}}\frac{I(\beta+\frac{1}{6},0)+\frac{kC_{2}}{C_{1}}I(\beta-\frac{5}{6},0)}{(\log
T)T^{\frac{1}{6}}}\right.\\\
\left.+\frac{2a_{2}}{a_{1}}\frac{I(\beta+\frac{1}{6},1)+\frac{kC_{2}}{C_{1}}I(\beta-\frac{5}{6},1)}{(\log
T)^{2}T^{\frac{1}{6}}}+\frac{a_{2}^{2}}{a_{1}^{2}}\frac{I(\beta,0)+\frac{kC_{2}}{C_{1}}I(\beta-1,0)}{(\log
T)^{2}T^{\frac{1}{3}}}\right\\}.$
We recognize the bracketed expression above as $\mathcal{J}(k,T)$, introduced in (4.17). ∎
#### 4.2.2. Bounding $\mathcal{M}_{g,T}(X,\sigma_{2})$ at
$\sigma_{2}=1+\frac{\delta}{\log X}$
###### Lemma 4.7.
Let $g$ be as defined in Lemma 3.7. Let $T\geq H_{0}$, $X=kT$, and
$k,\delta,\sigma_{2}$ satisfy (4.1). Then
$\mathcal{M}_{g,T}(X,\sigma_{2})\leq\mathcal{V}(\alpha,k,{\delta},T)(\log(kT))^{2},$
where
(4.19)
$\displaystyle\mathcal{V}(\alpha,k,{\delta},T)=8\alpha\omega_{1}(\sigma_{2},T,\alpha)^{2}\mathcal{K}(k,{\delta},T),$
(4.20)
$\displaystyle\mathcal{K}(k,{\delta},T)=\left(C_{5}(k,{\delta})+\frac{C_{6}(k,{\delta})\pi
m_{0}}{kT}\right)I(1,0)+\frac{C_{6}(k,{\delta})}{k}I(2,0),$
and $m_{0},\omega_{1},C_{5},C_{6},$ and $I$ are respectively defined in
(3.15), (3.18), (4.10), (4.11), and (4.15).
###### Proof.
We combine the bound (4.13) for $\mathcal{M}_{g,T}$ with the bound (4.9) for
$F_{X}(\sigma_{2},xT)$ (since $X\geq kH_{0}$) to obtain
(4.21) $\mathcal{M}_{g,T}(X,\sigma_{2})\leq
4\alpha\beta\omega_{1}(\sigma_{2},T,\alpha)^{2}\Bigg{(}\int_{0}^{\infty}x^{\beta-1}e^{-2\alpha
x^{\beta}}\Big{(}C_{5}(k,\delta)+\frac{C_{6}(k,\delta)(xT+\pi
m_{0})}{X}\Big{)}(\log X)^{2}dx\Bigg{)}.$
Rearranging this and recalling the definition for $I$ in (4.15) we obtain
$\mathcal{M}_{g,T}(X,\sigma_{2})\leq
4\alpha\beta\omega_{1}(\sigma_{2},T,\alpha)^{2}(\log
X)^{2}\left(\left(C_{5}(k,{\delta})+\frac{C_{6}(k,{\delta})\pi
m_{0}}{X}\right)I(\beta-1,0)\right.\\\
\left.+\frac{C_{6}(k,{\delta})T}{X}I(\beta,0)\right).$
We conclude by noting that $X=kT$ and for our $g$, $\beta=2$. ∎
#### 4.2.3. Conclusion
Finally, we provide bounds for $\mathcal{M}_{g,T}$.
###### Lemma 4.8.
Let $g$ be as defined in Lemma 3.7. Let $T\geq H_{0}$, $X=kT$, and let $k$ satisfy (4.1). Assume $\frac{1}{2}\leq\sigma\leq 1+\frac{\delta}{\log X}$.
Then
(4.22) $\begin{split}\mathcal{M}_{g,T}(X,\sigma)&\leq
e^{\frac{8}{3}\delta(2\sigma-1)M(k,\delta)+\frac{4\delta(2\sigma-1)\log\log
H_{0}}{\log(kH_{0})+2\delta}}\mathcal{U}(\alpha,k,T)^{2(1-\sigma)+\frac{2\delta(2\sigma-1)}{\log(kT)+2\delta}}\times\\\
&\mathcal{V}(\alpha,k,{\delta},T)^{2\sigma-1-\frac{2\delta(2\sigma-1)}{\log(kT)+2\delta}}(\log(kT))^{2\sigma}(\log
T)^{4(1-\sigma)}T^{\frac{8}{3}(1-\sigma)},\end{split}$
where $\mathcal{U}$ and $\mathcal{V}$ are respectively defined in (4.18) and
(4.19) and
(4.23) $M(k,\delta)=\max\Big{(}\frac{\log
H_{0}}{\log(kH_{0})+2\delta},1\Big{)}.$
###### Proof.
Let $\sigma_{1}=\frac{1}{2}$ and $\sigma_{2}=1+\frac{\delta}{\log X}$, and let
$\sigma\in[\sigma_{1},\sigma_{2}]$. We apply the convexity inequality (4.12)
with exponents
(4.24)
$a=\frac{\sigma_{2}-\sigma}{\sigma_{2}-\sigma_{1}}=\frac{1+\frac{\delta}{\log
X}-\sigma}{(1+\frac{\delta}{\log X})-\frac{1}{2}}\text{ and
}b=1-a=\frac{\sigma-\sigma_{1}}{\sigma_{2}-\sigma_{1}}=\frac{\sigma-\frac{1}{2}}{(1+\frac{\delta}{\log
X})-\frac{1}{2}}$
in combination with Lemmas 4.6 and 4.7 to obtain
(4.25)
$\mathcal{M}_{g,T}(X,\sigma)\leq\mathcal{U}(\alpha,k,T)^{a}\mathcal{V}(\alpha,k,{\delta},T)^{b}(\log(kT))^{a+2b}(\log
T)^{2a}T^{\frac{4}{3}a}.$
Next, from the definitions in (4.24) it may be checked that
(4.26) $a=2(1-\sigma)+\frac{2\delta(2\sigma-1)}{\log X+2\delta},\text{ and }\
b=2\sigma-1-\frac{2\delta(2\sigma-1)}{\log X+2\delta}.$
From these equalities it follows that $a+2b\leq 2\sigma$. Using (4.26) and the
bound for $a+2b$ (since $\log(kT)\geq\log(kH_{0})\geq\log(10^{9})>1$), we have
(4.27) $\begin{split}\mathcal{M}_{g,T}(X,\sigma)&\leq
e^{\frac{4}{3}\times\frac{2\delta(2\sigma-1)\log
T}{\log(kT)+2\delta}+2\times\frac{2\delta(2\sigma-1)\log\log
T}{\log(kT)+2\delta}}\mathcal{U}(\alpha,k,T)^{2(1-\sigma)+\frac{2\delta(2\sigma-1)}{\log(kT)+2\delta}}\times\\\
&\mathcal{V}(\alpha,k,{\delta},T)^{2\sigma-1-\frac{2\delta(2\sigma-1)}{\log(kT)+2\delta}}(\log(kT))^{2\sigma}(\log
T)^{4(1-\sigma)}T^{\frac{8}{3}(1-\sigma)}.\end{split}$
Next we observe that the function $\frac{\log T}{\log(kT)+2\delta}$ decreases
if $\log k+2\delta<0$ and increases if $\log k+2\delta>0$ and thus
(4.28) $\frac{\log T}{\log(kT)+2\delta}\leq
M(k,\delta):=\begin{cases}\frac{\log H_{0}}{\log(kH_{0})+2\delta}&\text{ if
}\log k+2\delta<0,\\\ 1&\text{ if }\log k+2\delta\geq 0\end{cases}$
where $M(k,\delta)$ was defined in (4.23). Furthermore, it may be checked, using the conditions on $k$, that $\frac{\log\log T}{\log(kT)+2\delta}$ decreases as long as $0<\delta<\frac{\log(H_{0})(\log\log H_{0}-1)}{2}$. Using these observations in (4.27) we deduce (4.22). ∎
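The identities (4.26) and the bound $a+2b\leq 2\sigma$ are elementary algebra; a quick numerical verification over a grid of illustrative values of $\sigma$, $\delta$, and $L=\log X$:

```python
# L plays the role of log X; all values below are illustrative.
for L in (25.0, 50.0, 100.0):
    for delta in (0.5, 1.0, 5.0):
        s2 = 1 + delta / L                              # sigma_2
        for j in range(101):
            sigma = 0.5 + j * (s2 - 0.5) / 100          # grid on [1/2, sigma_2]
            a_def = (s2 - sigma) / (s2 - 0.5)           # a as in (4.24)
            b_def = (sigma - 0.5) / (s2 - 0.5)          # b as in (4.24)
            a_id = 2 * (1 - sigma) + 2 * delta * (2 * sigma - 1) / (L + 2 * delta)
            b_id = 2 * sigma - 1 - 2 * delta * (2 * sigma - 1) / (L + 2 * delta)
            assert abs(a_def - a_id) < 1e-12            # identities (4.26)
            assert abs(b_def - b_id) < 1e-12
            assert a_def + 2 * b_def <= 2 * sigma + 1e-12
```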
### 4.3. Bounding $F_{X}(\sigma,T)-F_{X}(\sigma,H)$
###### Lemma 4.9.
Let $g$ be as defined in Lemma 3.7. Let $\sigma\in[\frac{1}{2},1]$ and
$\alpha>0$. Let $T\geq H_{0}\geq H>0$, $X=kT$, $k$ satisfies (4.1), and
$0<\delta<\frac{\log(H_{0})(\log\log H_{0}-1)}{2}=26.36\ldots$. Then
(4.29)
$\begin{split}F_{X}(\sigma,T)-F_{X}(\sigma,H)&\leq\frac{e^{\frac{8}{3}\delta(2\sigma-1)M(k,\delta)+\frac{4\delta(2\sigma-1)\log\log
H_{0}}{\log(kH_{0})+2\delta}}\mathcal{U}(\alpha,k,T)^{2(1-\sigma)+\frac{2\delta(2\sigma-1)}{\log(kT)+2\delta}}\mathcal{V}(\alpha,k,{\delta},T)^{2\sigma-1-\frac{2\delta(2\sigma-1)}{\log(kT)+2\delta}}}{2(\omega_{2}(\sigma,T,\alpha))^{2}}\\\
&\times(\log(kT))^{2\sigma}(\log
T)^{4(1-\sigma)}T^{\frac{8}{3}(1-\sigma)},\end{split}$
where $\omega_{2},\mathcal{U},\mathcal{V}$ are respectively defined in (3.19),
(4.18), (4.19).
###### Proof.
By the assumed lower bound on $g$, (2.13), we have
$F_{X}(\sigma,T)-F_{X}(\sigma,H)=\int_{H}^{T}|f_{X}(\sigma+it)|^{2}dt\leq\frac{1}{(\omega_{2}(\sigma,T,\alpha))^{2}}\int_{H}^{T}|g(\sigma+it)|^{2}|f_{X}(\sigma+it)|^{2}dt.$
Since $t\to|g(\sigma+it)f_{X}(\sigma+it)|$ is even, it follows that
$F_{X}(\sigma,T)-F_{X}(\sigma,H)\leq\frac{\mathcal{M}_{g,T}(X,\sigma)}{2(\omega_{2}(\sigma,T,\alpha))^{2}}$
and we conclude by inserting the bound (4.22) for
$\mathcal{M}_{g,T}(X,\sigma)$. ∎
### 4.4. Explicit upper bounds for $\int_{\sigma^{\prime}}^{\mu}\arg
h_{X}(\tau+iT)d\tau-\int_{\sigma^{\prime}}^{\mu}\arg h_{X}(\tau+iH)d\tau$
The following Proposition and Corollary are a variant of Titchmarsh [25, Lemma, p. 213]. This proposition gives a bound for $\arg f(\sigma+iT)$ where
$f$ is a holomorphic function. The argument we use here is due to Backlund [1]
in the case that $f(s)=\zeta(s)$. The cases of Dirichlet $L$-functions and
Dedekind zeta functions have been worked out by McCurley [17] and by the first
and third authors [16] respectively.
###### Proposition 4.10.
Let $\eta>0$. Let $f(s)$ be holomorphic for ${\mathfrak{Re}}(s)\geq-\eta$ and real for real $s$. Assume there exist positive
constants $M$ and $m$ such that
(4.30) $\displaystyle|f(s)|\leq M\text{ for }{\mathfrak{Re}}(s)\geq 1+\eta,$
(4.31) $\displaystyle|{\mathfrak{Re}}f(1+\eta+it)|\geq m>0\text{ for all
}t\in\mathbb{R}.$
Let $\sigma\in(0,1+\eta]$ and assume that $U$ is not the ordinate of a zero of
$f(s)$. Then there exists an increasing sequence of natural numbers
$\\{N_{k}\\}_{k=1}^{\infty}$ such that
(4.32) $\left|\arg f(\sigma+iU)\right|\leq\frac{\pi}{\log
2}\mathscr{L}_{k}+\frac{\pi\log M}{2\log 2}-\frac{\pi\log m}{\log
2}+\frac{\pi}{2}+o_{k}(1)$
where
(4.33) $\mathscr{L}_{k}=\frac{1}{2\pi
N_{k}}\int_{\frac{\pi}{2}}^{\frac{3\pi}{2}}\log\Big{(}\frac{1}{2}\sum_{j=0}^{1}|f(1+\eta+(1+2\eta)e^{i\theta}+(-1)^{j}iU)|^{N_{k}}\Big{)}\,d\theta$
and $o_{k}(1)$ is a term that approaches $0$ as $k\to\infty$.
###### Proof of Proposition 4.10.
Let $\eta>0$. We define $\arg f(1+\eta)=0$, and $\arg
f(s)=\arctan\frac{{\mathfrak{Im}}f(s)}{{\mathfrak{Re}}f(s)}$ for
${\mathfrak{Re}}(s)=1+\eta$, since, by (4.31), ${\mathfrak{Re}}(f(s))$ does
not vanish on ${\mathfrak{Re}}(s)=1+\eta$. It follows that
(4.34) $|\arg f(1+\eta+iU)|<\frac{\pi}{2}.$
Recall that $\arg f(\sigma+iU)$ is defined by continuous variation, moving
along the line $\mathcal{C}$ from $1+\eta+iU$ to $\sigma+iU$. It follows that
(4.35) $|\arg f(\sigma+iU)|\leq\left|\Delta_{\mathcal{C}}\arg
f(s)\right|+\frac{\pi}{2}.$
We now bound the argument change on $\mathcal{C}$. Let $N\in\mathbb{N}$ and
let
(4.36) $F_{N}(w)=\frac{1}{2}(f(w+iU)^{N}+f(w-iU)^{N}).$
Since $f(s)$ is real when $s$ is real, the reflection principle gives
$F_{N}(\sigma)={\mathfrak{Re}}\,f(\sigma+iU)^{N}$ for all $\sigma$ real.
Suppose $F_{N}$ has $n$ real zeros in the interval $[\sigma,1+\eta]$.
These zeros partition the interval into $n+1$ subintervals. On each of these
subintervals $\arg f(\sigma+iU)^{N}$ can change by at most $\pi$, since
${\mathfrak{Re}}\,f(\sigma+iU)^{N}$ is nonzero on the interior of each
subinterval. It follows that
(4.37) $|\Delta_{\mathcal{C}}\arg f(s)|=\frac{1}{N}|\Delta_{\mathcal{C}}\arg
f(s)^{N}|\leq\frac{(n+1)\pi}{N}.$
We now provide an upper bound for $n$. Jensen’s theorem asserts that
$\log|F_{N}(1+\eta)|+\int_{0}^{1+2\eta}\frac{n(u)du}{u}=\frac{1}{2\pi}\int_{-\frac{\pi}{2}}^{\frac{3\pi}{2}}\log|F_{N}(1+\eta+(1+2\eta)e^{i\theta})|d\theta,$
where $n(u)$ denotes the number of zeros of $F_{N}(z)$ in the circle centered
at $1+\eta$ of radius $u$. Observe that $n(u)\geq n$ for
$u\geq\frac{1}{2}+\eta$ and thus
(4.38) $n\log
2\leq\frac{1}{2\pi}\int_{-\frac{\pi}{2}}^{\frac{3\pi}{2}}\log|F_{N}(1+\eta+(1+2\eta)e^{i\theta})|d\theta-\log|F_{N}(1+\eta)|.$
Trivially from (4.36),
$|F_{N}(1+\eta+(1+2\eta)e^{i\theta})|\leq\frac{1}{2}\sum_{j=0}^{1}|f(1+\eta+(1+2\eta)e^{i\theta}+(-1)^{j}iU)|^{N},$
so for the left part of the contour in (4.38),
(4.39)
$\int_{\frac{\pi}{2}}^{\frac{3\pi}{2}}\log|F_{N}(1+\eta+(1+2\eta)e^{i\theta})|d\theta\leq\int_{\frac{\pi}{2}}^{\frac{3\pi}{2}}\log\Big{(}\frac{1}{2}\sum_{j=0}^{1}|f(1+\eta+(1+2\eta)e^{i\theta}+(-1)^{j}iU)|^{N}\Big{)}\,d\theta.$
For the right part of the contour in (4.38), we have
$-\frac{\pi}{2}\leq\theta\leq\frac{\pi}{2}$, so
${\mathfrak{Re}}(1+\eta+(1+2\eta)e^{i\theta})\geq 1+\eta$. We apply (4.30) and
obtain
(4.40)
$\frac{1}{2\pi}\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\log|F_{N}(1+\eta+(1+2\eta)e^{i\theta})|d\theta\leq\frac{N}{2}\log
M.$
To complete our bound for $n$, we require a lower bound for
$\log|F_{N_{k}}(1+\eta)|$.
We write $\displaystyle{f(1+\eta+iU)=re^{i\phi}}$ and then choose (by
Dirichlet’s approximation theorem) an increasing sequence of positive integers
$N_{k}$ tending to infinity such that $N_{k}\phi$ tends to $0$ modulo $2\pi$.
Since
$\displaystyle{\frac{F_{N_{k}}(1+\eta)}{|f(1+\eta+iU)|^{N_{k}}}=\frac{r^{N_{k}}\cos(N_{k}\phi)}{r^{N_{k}}}}$,
it follows that
$\displaystyle{\lim_{k\to\infty}\frac{F_{N_{k}}(1+\eta)}{|f(1+\eta+iU)|^{N_{k}}}=1}$.
Thus we derive
$\log|F_{N_{k}}(1+\eta)|\geq N_{k}\log|f(1+\eta+iU)|+o_{k}(1),$
where the term $o_{k}(1)\rightarrow 0$ as $k\rightarrow\infty$. Together with
(4.31), we obtain
(4.41) $\log|F_{N_{k}}(1+\eta)|\geq N_{k}\log m+o_{k}(1).$
Then (4.38), (4.39), (4.40), and (4.41) give
(4.42) $n\log
2\leq\frac{1}{2\pi}\int_{\frac{\pi}{2}}^{\frac{3\pi}{2}}\log\Big{(}\frac{1}{2}\sum_{j=0}^{1}|f(1+\eta+(1+2\eta)e^{i\theta}+(-1)^{j}iU)|^{N_{k}}\Big{)}\,d\theta\\\
+\frac{N_{k}\log M}{2}-N_{k}\log m+o_{k}(1).$
By (4.37) it follows that
$\left|\Delta_{\mathcal{C}}\arg f(s)\right|\leq\frac{\pi}{\log
2}\mathscr{L}_{k}+\frac{\pi\log M}{2\log 2}-\frac{\pi\log m}{\log
2}+o_{k}(1),$
where $\mathscr{L}_{k}$ is defined by (4.33). We conclude by combining this
with (4.35). ∎
We derive the following Corollary for $\arg h_{X}(s)$ from Proposition 4.10.
###### Corollary 4.11.
Let $\eta_{0}=0.23622\ldots$, $\eta\in[\eta_{0},\frac{1}{2})$, and $X\geq
10^{9}$. Assume that $U\geq H\geq 1002$ and that $U$ is not the ordinate of a
zero of $h_{X}(s)$. Then for all $\tau\in(0,1+\eta]$,
$|\arg h_{X}(\tau+iU)|\leq\frac{(1+2\eta)}{\log
2}\log\Big{(}\frac{b_{8}(\eta,H)}{2\pi}U\Big{)}+\frac{\pi(1+\eta)}{\log
2}(\log X)+\frac{\pi\log b_{7}(k,\eta,H_{0})}{2\log 2}+\frac{\pi\log
b_{5}(\eta)}{2\log 2}\\\ -\frac{\pi\log(1-b_{6}(10^{9},\eta)^{2})}{\log
2}+\frac{\pi}{2},$
where $b_{5},b_{6},b_{7},b_{8}$ are defined in (4.44), (4.45), (4.50), and
(4.51).
###### Proof of Corollary 4.11.
We apply Proposition 4.10 to $f=h_{X}$ as defined in (2.8):
$h_{X}(s)=1-f_{X}(s)^{2}=\zeta(s)M_{X}(s)(2-\zeta(s)M_{X}(s)).$
Let $\sigma\geq\eta+1$ and $t\in{\mathbb{R}}$. We establish an upper bound for
$|h_{X}(\sigma+it)|$. The triangle inequality in conjunction with
$\displaystyle{|\zeta(s)|\leq\zeta(1+\eta)}$ and with
$\displaystyle{|M_{X}(s)|\leq\sum_{n=1}^{\infty}\frac{|\mu(n)|}{n^{1+\eta}}=\frac{\zeta(1+\eta)}{\zeta(2+2\eta)}}$
gives
(4.43) $|h_{X}(s)|\leq b_{5}(\eta)$
with
(4.44)
$b_{5}(\eta)=\frac{\zeta(1+\eta)^{4}}{\zeta(2+2\eta)^{2}}+\frac{2\zeta(1+\eta)^{2}}{\zeta(2+2\eta)}.$
We now give a lower bound for $|{\mathfrak{Re}}h_{X}(1+\eta+it)|$. We use the
reverse triangle inequality $|h_{X}(s)|\geq 1-|f_{X}(s)|^{2}$. It remains to
provide an upper bound for $|f_{X}(s)|$. Trivially from (2.5),
$|f_{X}(s)|\leq\sum_{n>X}\frac{|\lambda_{X}(n)|}{n^{1+\eta}}\leq\sum_{n>X}\frac{d(n)}{n^{1+\eta}},$
and by Lemma 3.5, we obtain
(4.45) $|f_{X}(1+\eta+it)|\leq b_{6}(X,\eta)=\frac{(1+\eta)(\log X)}{\eta
X^{\eta}}\Big{(}1+\frac{1}{\eta\log X}+\frac{\gamma}{\log
X}+\frac{7\eta}{12(1+\eta)X(\log X)}\Big{)}.$
Note that $\frac{(\log X)}{X^{\eta}}$ decreases when $\eta>\frac{1}{\log X}$,
which is the case since we assumed
$\eta>\frac{1}{\log(10^{9})}=0.048254\ldots$ and $X\geq 10^{9}$. Thus
$|f_{X}(s)|\leq b_{6}(10^{9},\eta)$ and
(4.46)
$|{\mathfrak{Re}}(h_{X}(s))|=|1-{\mathfrak{Re}}(f_{X}(s)^{2})|\geq 1-|f_{X}(s)^{2}|=1-|f_{X}(s)|^{2}\geq 1-b_{6}(10^{9},\eta)^{2}.$
Note our assumption $\eta\geq\eta_{0}=0.23622\ldots$ ensures
$1-b_{6}(10^{9},\eta)^{2}>0$.
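Numerically, $\eta_{0}=0.23622\ldots$ appears to be (up to rounding) precisely the point where $b_{6}(10^{9},\eta)=1$; since $b_{6}(10^{9},\cdot)$ is decreasing over this range, this gives $1-b_{6}(10^{9},\eta)^{2}>0$ for $\eta>\eta_{0}$. A quick sketch of this check:

```python
import math

GAMMA = 0.5772156649015329  # Euler's constant

def b6(X, eta):
    # The bound (4.45) on |f_X(1 + eta + it)|.
    L = math.log(X)
    return ((1 + eta) * L / (eta * X**eta)) * (
        1 + 1 / (eta * L) + GAMMA / L + 7 * eta / (12 * (1 + eta) * X * L))

assert abs(b6(10**9, 0.23622) - 1) < 0.01   # eta_0 nearly solves b6 = 1
assert b6(10**9, 0.2) > 1                   # below eta_0 the bound exceeds 1
assert b6(10**9, 0.24) < 1                  # above eta_0 we get 1 - b6^2 > 0
assert b6(10**9, 0.5) < 1
```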
Finally, we must bound $\mathscr{L}_{k}$ as defined in (4.33) in the case
$f=h_{X}$. We assume $w$ is a complex number such that
$-\eta\leq{\mathfrak{Re}}w\leq 1+\eta$ and $|{\mathfrak{Im}}w|\geq
U-(1+2\eta)$. Recall that by Lemma 3.1
$|\zeta(w)|\leq
3\frac{|1+w|}{|1-w|}\Big{(}\frac{|w+1|}{2\pi}\Big{)}^{\frac{1+\eta-{\mathfrak{Re}}w}{2}}\zeta(1+\eta).$
Since $\frac{|1+w|}{|1-w|}=\Big{|}1+\frac{2}{w-1}\Big{|}\leq
1+\frac{2}{|{\mathfrak{Im}}(w)|}\leq 1.002$ when $|{\mathfrak{Im}}(w)|\geq
1000$, we have
(4.47) $|\zeta(w)|\leq
3.006\zeta(1+\eta)\Big{(}\frac{|w+1|}{2\pi}\Big{)}^{\frac{1+\eta-u}{2}}\text{
for }|{\mathfrak{Im}}(w)|\geq 1000.$
From the definition (2.4), we have the trivial bound
(4.48) $|M_{X}(w)|\leq X^{1+\eta}.$
It follows from
$|h_{X}(w)|\leq|\zeta(w)M_{X}(w)|^{2}+2|\zeta(w)||M_{X}(w)|,$
the bounds (4.47), (4.48),
$\frac{|w+1|}{2\pi}>1,-\frac{1+\eta-{\mathfrak{Re}}w}{2}<0$, and $X\geq
kH_{0}$, that
(4.49) $|h_{X}(w)|\leq
b_{7}(k,\eta,H_{0})\Big{(}\frac{|w+1|}{2\pi}\Big{)}^{1+\eta-u}X^{2(1+\eta)}\text{
for }|{\mathfrak{Im}}(w)|\geq 1000,$
with
(4.50)
$b_{7}(k,\eta,H_{0})=\left(1+\frac{2}{3.006\zeta(1+\eta)(kH_{0})^{1+\eta}}\right)\left(3.006\zeta(1+\eta)\right)^{2}.$
We apply this with $w=1+\eta+(1+2\eta)e^{i\theta}\pm iU$. Since
$\cos\theta\leq 0$, a little calculation gives
$|w+1|=|2+\eta+(1+2\eta)e^{i\theta}\pm
iU|\leq\sqrt{(2+\eta)^{2}+(1+2\eta+U)^{2}}\leq b_{8}(\eta,H)U,$
with
(4.51)
$b_{8}(\eta,H)=\sqrt{\frac{(2+\eta)^{2}}{H^{2}}+\Big{(}\frac{1+2\eta}{H}+1\Big{)}^{2}}.$
In addition
$1+\eta-u=1+\eta-(1+\eta+(1+2\eta)\cos\theta)=-(1+2\eta)(\cos\theta)$, and
(4.49) gives
(4.52) $|h_{X}(1+\eta+(1+2\eta)e^{i\theta}\pm iU)|\leq
b_{7}(k,\eta,H_{0})\Big{(}\frac{b_{8}(\eta,H)}{2\pi}U\Big{)}^{-(1+2\eta)(\cos\theta)}X^{2(1+\eta)},$
since $|{\mathfrak{Im}}(1+\eta+(1+2\eta)e^{i\theta}\pm iU)|\geq U-(1+2\eta)\geq H-2\geq 1000$. We use this to bound $\mathscr{L}_{k}$ as
defined in (4.33):
$\mathscr{L}_{k}\leq\frac{1}{2\pi}\int_{\frac{\pi}{2}}^{\frac{3\pi}{2}}\left(\log
b_{7}(k,\eta,H_{0})-(1+2\eta)(\cos\theta)\log\Big{(}\frac{b_{8}(\eta,H)}{2\pi}U\Big{)}+2(1+\eta)(\log
X)\right)d\theta.$
Calculating the integrals gives
(4.53) $\mathscr{L}_{k}\leq\frac{\log
b_{7}(k,\eta,H_{0})}{2}+\frac{(1+2\eta)}{\pi}\log\Big{(}\frac{b_{8}(\eta,H)}{2\pi}U\Big{)}+(1+\eta)(\log
X).$
By (4.43) and (4.46) we may take $M=b_{5}(\eta)$ and
$m=1-b_{6}(10^{9},\eta)^{2}$ in (4.30) and (4.31) in the case of
$f(s)=h_{X}(s)$. Therefore by Proposition 4.10
(4.54) $\left|\arg h_{X}(\sigma+iU)\right|\leq\frac{\pi}{\log
2}\mathscr{L}_{k}+\frac{\pi\log b_{5}(\eta)}{2\log
2}-\frac{\pi\log(1-b_{6}(10^{9},\eta)^{2})}{\log 2}+\frac{\pi}{2}+o_{k}(1).$
Inserting the upper bound for $\mathscr{L}_{k}$ from (4.53) and letting $k\to\infty$, we complete the proof, since the $o_{k}(1)$ term goes to zero. ∎
We are now in a position to bound the arguments.
###### Lemma 4.12.
Let $0<H\leq H_{0}\leq T$ and $X\leq T$. Let $\eta\in(\eta_{0},\frac{1}{2})$ with $\eta_{0}=0.23622\ldots$, and let $\sigma^{\prime}$ and $\mu$ satisfy $\frac{1}{2}\leq\sigma^{\prime}<1<\mu\leq 1+\eta$. Then
(4.55) $\Big{|}\int_{\sigma^{\prime}}^{\mu}\arg
h_{X}(\tau+iT)d\tau-\int_{\sigma^{\prime}}^{\mu}\arg
h_{X}(\tau+iH)d\tau\Big{|}\leq C_{7}(\eta,H)\,(\mu-\sigma^{\prime})(\log T),$
where
(4.56) $C_{7}(\eta,H)=\frac{2(1+2\eta)+2\pi(1+\eta)}{\log 2}+\frac{b_{9}(\eta,H)}{\log H_{0}},$
with $b_{9}(\eta,H)$ defined in (4.59).
###### Proof.
Note that
(4.57) $\Big{|}\int_{\sigma^{\prime}}^{\mu}\arg
h_{X}(\tau+iT)d\tau-\int_{\sigma^{\prime}}^{\mu}\arg
h_{X}(\tau+iH)d\tau\Big{|}\leq(\mu-\sigma^{\prime})\max_{\tau\in(\sigma^{\prime},\mu)}\Big{(}|\arg
h_{X}(\tau+iT)|+|\arg h_{X}(\tau+iH)|\Big{)}.$
By Corollary 4.11 we have
(4.58) $|\arg h_{X}(\tau+iH)|+|\arg h_{X}(\tau+iT)|\\\ \leq
b_{9}(\eta,H)+\frac{(1+2\eta)}{\log 2}(\log(HT))+\frac{2\pi(1+\eta)}{\log
2}(\log X)$
with
(4.59) $b_{9}(\eta,H)=\frac{\pi\log b_{7}(k,\eta,H_{0})}{\log 2}+\frac{\pi\log
b_{5}(\eta)}{\log 2}-\frac{2\pi\log(1-b_{6}(10^{9},\eta)^{2})}{\log
2}+\pi+\frac{2(1+2\eta)}{\log 2}\log\Big{(}\frac{b_{8}(\eta,H)}{2\pi}\Big{)}$
where $b_{7},b_{5},b_{6},b_{8}$ are defined in (4.50), (4.44), (4.45), (4.51).
Factoring $\log T$ in the right hand side of (4.58), using $H\leq T$, $X\leq
T$, and $H_{0}\leq T$ yields
(4.60) $|\arg h_{X}(\tau+iH)|+|\arg h_{X}(\tau+iT)|\leq(\log
T)\left(\frac{2(1+2\eta)+2\pi(1+\eta)}{\log 2}+\frac{b_{9}(\eta,H)}{\log
H_{0}}\right).$
Combining (4.57) and (4.60) leads to (4.55). ∎
### 4.5. Explicit lower bounds for $\int_{H}^{T}\log|h_{X}(\mu+it)|dt$
First, observe that (4.45) implies that for
(4.61) $\mu\geq 1+\eta_{0}=1.23622\ldots,\ |f_{X}(\mu+it)|<1.$
This fact is used in the next lemma.
###### Lemma 4.13.
Assume $\mu\geq 1+\eta_{0}$ where $\eta_{0}=0.23622\ldots$. Let $X=kT$ where
$T\geq H_{0}$, $k$ satisfies (4.1), $k\leq 1$, and $2\pi m_{0}\leq H<T$. Then
(4.62) $-\int_{H}^{T}\log|h_{X}(\mu+it)|dt\leq C_{8}(k,\mu)(\log T),$
with
(4.63)
$C_{8}(k,\mu)=b_{10}(k,\mu)\frac{(\log(kH_{0}))^{2}}{(kH_{0})^{2\mu-2}}\left(\frac{4\mu
b_{11}(kH_{0},2\mu)}{k(2\mu-1)}+\frac{2\pi
m_{0}(2\mu-1)b_{11}(kH_{0},2\mu-1)}{(\mu-1)}\right),$
$b_{10}$ is defined in (4.66), $b_{11}$ in (4.68), and $m_{0}$ in (3.15).
###### Proof.
We begin by remarking that (4.45) implies $|f_{X}(\mu+it)|\leq
b_{6}(kH_{0},\mu-1)<1$ since $X\geq kH_{0}\geq 10^{9}$ and $\mu\geq
1+\eta_{0}$. Next, observe that $|h_{X}(\mu+it)|\geq|1-f_{X}(\mu+it)^{2}|\geq
1-|f_{X}(\mu+it)|^{2}$ and thus
(4.64) $-\log|h_{X}(\mu+it)|\leq-\log(1-|f_{X}(\mu+it)|^{2}).$
Since $-\frac{\log(1-u^{2})}{u^{2}}$ increases with $u\in(0,1)$, we have
(4.65) $-\log(1-|f_{X}(\mu+it)|^{2})\leq b_{10}(k,\mu)|f_{X}(\mu+it)|^{2},$
with
(4.66)
$b_{10}(k,\mu)=-\frac{\log\left(1-b_{6}(kH_{0},\mu-1)^{2}\right)}{b_{6}(kH_{0},\mu-1)^{2}}$
where $b_{6}$ is defined in (4.45). It follows from (4.64) and (4.65) that
(4.67) $-\int_{H}^{T}\log|h_{X}(\mu+it)|dt\leq
b_{10}(k,\mu)\int_{H}^{T}|f_{X}(\mu+it)|^{2}dt.$
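The monotonicity of $-\log(1-u^{2})/u^{2}$ invoked in (4.65) can be checked from the power-series expansion (a short verification, not part of the original argument):

```latex
-\frac{\log(1-u^{2})}{u^{2}}
  = \sum_{j=1}^{\infty}\frac{u^{2(j-1)}}{j}
  = 1+\frac{u^{2}}{2}+\frac{u^{4}}{3}+\cdots,
```

and each term of the series is nondecreasing in $u$ on $(0,1)$, so the sum is increasing there.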
We apply Lemma 3.6 and the bound $|\lambda_{X}(n)|\leq d(n)$ with
$\lambda_{X}(n)=0$ if $n\leq X$. We obtain
$\displaystyle\int_{H}^{T}|f_{X}(\mu+it)|^{2}dt$
$\displaystyle\leq\sum_{n=1}^{\infty}\frac{|\lambda_{X}(n)|^{2}}{n^{2\mu}}(T-H+2\pi
m_{0}(n+1))$ $\displaystyle\leq(T-H+2\pi
m_{0})\sum_{n>X}\frac{d(n)^{2}}{n^{2\mu}}+2\pi
m_{0}\sum_{n>X}\frac{d(n)^{2}}{n^{2\mu-1}}.$
We appeal to (3.13) to bound the above sums:
$\sum_{n\geq X}\frac{d(n)^{2}}{n^{\tau}}\leq\frac{(\log
X)^{3}}{X^{\tau-1}}\frac{2\tau b_{11}(kH_{0},\tau)}{(\tau-1)},$
since $X\geq kH_{0}$ where
(4.68) $b_{11}(X,\tau)=1+\frac{3}{(\tau-1)(\log X)}+\frac{6}{(\tau-1)^{2}(\log
X)^{2}}+\frac{6}{(\tau-1)^{3}(\log X)^{3}}.$
Since $X=kT$ we deduce that
$\int_{H}^{T}|f_{X}(\mu+it)|^{2}dt\leq\frac{(\log(kT))^{3}}{(kT)^{2\mu-2}}\left(\frac{4\mu
b_{11}(kH_{0},2\mu)}{k(2\mu-1)}+\frac{2\pi
m_{0}(2\mu-1)b_{11}(kH_{0},2\mu-1)}{(\mu-1)}\right).$
Note that $\frac{(\log(kT))^{2}}{(kT)^{2\mu-2}}$ decreases with $T$ as long as
$10^{9}>e^{\frac{1}{\mu-1}}$ (i.e. $\mu>\mu_{2}=1.072382\ldots$). Using this
and $\log(kT)\leq\log T$ (since $k\leq 1$) implies
(4.69)
$\int_{H}^{T}|f_{X}(\mu+it)|^{2}dt\leq\frac{(\log(kH_{0}))^{2}}{(kH_{0})^{2\mu-2}}\left(\frac{4\mu
b_{11}(kH_{0},2\mu)}{k(2\mu-1)}+\frac{2\pi
m_{0}(2\mu-1)b_{11}(kH_{0},2\mu-1)}{(\mu-1)}\right)(\log T).$
We conclude by combining this with (4.67). ∎
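The monotonicity of $(\log(kT))^{2}/(kT)^{2\mu-2}$ used in the last step can be verified by differentiation (a short check, not part of the original argument): writing $f(x)=(\log x)^{2}x^{-(2\mu-2)}$,

```latex
f'(x)=x^{-(2\mu-1)}\log x\,\bigl(2-(2\mu-2)\log x\bigr)<0
\quad\text{whenever}\quad \log x>\frac{1}{\mu-1},
```

so $f$ decreases for $x=kT\geq 10^{9}>e^{\frac{1}{\mu-1}}$.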
### 4.6. Proof of Zero Density Result
Finally, we are able to compile our bounds to obtain an upper bound for
$N(\sigma,T)$.
###### Lemma 4.14.
Assume
$\alpha>0,d>0,\delta>0,\eta_{0}=0.23622\ldots,\eta\in[\eta_{0},\frac{1}{2}),$
and $\mu\in[1+\eta_{0},1+\eta]$. Let $H_{0}=3.0610046\cdot 10^{10},\ 1002\leq
H\leq H_{0},\ \frac{10^{9}}{H_{0}}\leq k\leq 1$, $T\geq H_{0}$, and $X=kT$.
Assume $\sigma>\frac{1}{2}+\frac{d}{\log H_{0}}$,
$\mathcal{U}(\alpha,k,H_{0})>1$, and that $\mathcal{U}(\alpha,k,T)$ decreases in
$T$. Then
(4.70) $N(\sigma,T)\leq\frac{(T-H)(\log T)}{2\pi
d}\log\left(1+\mathcal{C}_{1}\frac{(\log(kT))^{2\sigma}(\log
T)^{4(1-\sigma)}T^{\frac{8}{3}(1-\sigma)}}{T-H}\right)+\frac{\mathcal{C}_{2}}{2\pi
d}(\log T)^{2},$ (4.71) $N(\sigma,T)\leq\frac{\mathcal{C}_{1}}{2\pi
d}(\log(kT))^{2\sigma}(\log
T)^{5-4\sigma}T^{\frac{8}{3}(1-\sigma)}+\frac{\mathcal{C}_{2}}{2\pi d}(\log
T)^{2},$
with
(4.72)
$\displaystyle\mathcal{C}_{1}=\mathcal{C}_{1}(\alpha,d,\delta,k,H,\sigma)$
$\displaystyle=b_{12}(H)e^{\frac{8}{3}\delta(2\sigma-1)M(k,\delta)+\frac{4\delta(2\sigma-1)\log\log
H_{0}}{\log(kH_{0})+2\delta}}\mathcal{U}(\alpha,k,H_{0})^{2(1-\sigma)+\frac{2d}{\log
H_{0}}+\frac{2\delta(2\sigma-1)}{\log(kH_{0})+2\delta}}\times$
$\displaystyle\mathcal{V}(\alpha,k,{\delta},H_{0})^{2\sigma-1}e^{\frac{2d(2\log\log
H_{0}-\log\log(kH_{0}))}{\log H_{0}}+\frac{8d}{3}+2\alpha},$ (4.73)
$\displaystyle\mathcal{C}_{2}=\mathcal{C}_{2}(d,\eta,k,H,\mu,\sigma)$
$\displaystyle=C_{7}(\eta,H)\Big{(}\mu-\sigma+\frac{d}{\log
H_{0}}\Big{)}+C_{8}(k,\mu),$
and $\mathcal{U},\mathcal{V},M(k,\delta),C_{7},C_{8}$ and $b_{12}$ are
respectively defined in (4.18), (4.19), (4.23), (4.56), (4.63), and (4.75).
Remark. The assumptions that $\mathcal{U}(\alpha,k,H_{0})>1$ and that
$\mathcal{U}(\alpha,k,T)$ is decreasing can be removed from the theorem.
However, this would overly complicate its statement. In all instances where we
apply this theorem (for various values of $\alpha$ and $k$) these conditions hold.
###### Proof.
We begin by assuming that $T$ is not the ordinate of a zero of $\zeta(s)$.
From (2.3), (2.9), and the definition (2.10) of $F_{X}$, we have for
$\sigma\in[\sigma^{\prime},1]$ where $\sigma^{\prime}\geq\frac{1}{2}$ and
$\mu\in[1+\eta_{0},1+\eta]$
$N(\sigma,T)\leq\frac{1}{2\pi(\sigma-\sigma^{\prime})}\Big{(}(T-H)\log\left(1+\frac{F_{X}(\sigma^{\prime},T)-F_{X}(\sigma^{\prime},H)}{(T-H)}\right)\Big{.}\\\
\Big{.}+\int_{\sigma^{\prime}}^{\mu}\arg
h_{X}(\tau+iT)d\tau-\int_{\sigma^{\prime}}^{\mu}\arg
h_{X}(\tau+iH)d\tau-\int_{H}^{T}\log|h_{X}(\mu+it)|dt\Big{)}.$
We apply Lemma 4.9, Lemma 4.12, and Lemma 4.13 to achieve
(4.74)
$\begin{split}&N(\sigma,T)\leq\frac{(T-H)}{2\pi(\sigma-\sigma^{\prime})}\times\\\
&\log\Bigg{(}1+\frac{e^{\frac{8}{3}\delta(2\sigma^{\prime}-1)M(k,\delta)+\frac{4\delta(2\sigma^{\prime}-1)\log\log
H_{0}}{\log(kH_{0})+2\delta}}\mathcal{U}(\alpha,k,T)^{2(1-\sigma^{\prime})+\frac{2\delta(2\sigma^{\prime}-1)}{\log(kT)+2\delta}}\mathcal{V}(\alpha,k,{\delta},T)^{2\sigma^{\prime}-1-\frac{2\delta(2\sigma^{\prime}-1)}{\log(kT)+2\delta}}}{2(\omega_{2}(\sigma^{\prime},T,\alpha))^{2}}\times\\\
&\frac{(\log(kT))^{2\sigma^{\prime}}(\log
T)^{4(1-\sigma^{\prime})}T^{\frac{8}{3}(1-\sigma^{\prime})}}{(T-H)}\Bigg{)}+\frac{\left(C_{7}(\eta,H)\,(\mu-\sigma^{\prime})+C_{8}(k,\mu)\right)(\log
T)}{2\pi(\sigma-\sigma^{\prime})}.\end{split}$
We make the choice $\sigma^{\prime}=\sigma-\frac{d}{\log T}$, for some $d>0$.
From the definition (4.19), we note that $\mathcal{V}(\alpha,k,{\delta},T)$
decreases with $T$. Since by assumption $\mathcal{U}(\alpha,k,H_{0})>1$ and
$T\mapsto\mathcal{U}(\alpha,k,T)$ decreases, it follows that
$\mathcal{U}(\alpha,k,H_{0})^{\frac{2d}{\log
T}+\frac{2\delta(2\sigma^{\prime}-1)}{\log(kT)+2\delta}}$ decreases with $T$
and thus
$\mathcal{U}(\alpha,k,T)^{2(1-\sigma^{\prime})+\frac{2\delta(2\sigma^{\prime}-1)}{\log(kT)+2\delta}}\leq\mathcal{U}(\alpha,k,H_{0})^{2(1-\sigma)+\frac{2d}{\log
H_{0}}+\frac{2\delta(2\sigma^{\prime}-1)}{\log(kH_{0})+2\delta}}.$
It may be shown that, for our choice of the parameters $\alpha,k,\delta$, we have
$\mathcal{V}(\alpha,k,{\delta},T)>1$ for all $T\geq H_{0}$, and thus
$\mathcal{V}(\alpha,k,{\delta},T)^{2\sigma^{\prime}-1-\frac{2\delta(2\sigma^{\prime}-1)}{\log(kT)+2\delta}}\leq\mathcal{V}(\alpha,k,{\delta},T)^{2\sigma^{\prime}-1}.$
In addition,
$\displaystyle(\log(kT))^{2\sigma^{\prime}}(\log
T)^{4(1-\sigma^{\prime})}T^{\frac{8}{3}(1-\sigma^{\prime})}$
$\displaystyle=e^{\frac{2d}{\log T}(2\log\log
T-\log\log(kT))+\frac{8d}{3}}(\log(kT))^{2\sigma}(\log
T)^{4(1-\sigma)}T^{\frac{8}{3}(1-\sigma)}$ $\displaystyle\leq
e^{\frac{2d(2\log\log H_{0}-\log\log(kH_{0}))}{\log
H_{0}}+\frac{8d}{3}}(\log(kT))^{2\sigma}(\log
T)^{4(1-\sigma)}T^{\frac{8}{3}(1-\sigma)},$
since $T\geq H_{0}$ and $\frac{10^{9}}{H_{0}}\leq k\leq 1$ imply
$\frac{2\log\log T-\log\log(kT)}{\log T}$ decreases in $T$. Since
$\omega_{2}(\sigma^{\prime},T,\alpha)$ as defined in (3.19) increases with
$\sigma^{\prime}\geq\sigma-\frac{d}{\log H_{0}}$ and decreases with $T$, then
(4.75) $\frac{1}{2(\omega_{2}(\sigma^{\prime},T,\alpha))^{2}}\leq
b_{12}(H)e^{2\alpha}\ \text{with}\ b_{12}(H)=\frac{1}{2(1-\frac{1}{H})^{2}}.$
Combining the above inequalities establishes (4.70), and thus (4.71) upon
applying $\log(1+y)\leq y$; explicitly,
(4.76) $\begin{split}N(\sigma,T)&\leq\frac{(T-H)(\log T)}{2\pi
d}\log\left(1+b_{12}(H)e^{\frac{8}{3}\delta(2\sigma^{\prime}-1)M(k,\delta)+\frac{4\delta(2\sigma^{\prime}-1)\log\log
H_{0}}{\log(kH_{0})+2\delta}}\right.\\\
&\times\mathcal{U}(\alpha,k,H_{0})^{2(1-\sigma)+\frac{2d}{\log
H_{0}}+\frac{2\delta(2\sigma^{\prime}-1)}{\log(kH_{0})+2\delta}}\mathcal{V}(\alpha,k,{\delta},H_{0})^{2\sigma-1}e^{\frac{2d(2\log\log
H_{0}-\log\log(kH_{0}))}{\log H_{0}}+\frac{8d}{3}+2\alpha}\\\
&\times\left.\frac{(\log(kT))^{2\sigma}(\log
T)^{4(1-\sigma)}T^{\frac{8}{3}(1-\sigma)}}{(T-H)}\right)\\\
&+\frac{\left(C_{7}(\eta,H)\,(\mu-\sigma+\frac{d}{\log
H_{0}})+C_{8}(k,\mu)\right)(\log T)^{2}}{2\pi d}.\end{split}$
Since $\sigma^{\prime}\leq\sigma$, each remaining occurrence of
$\sigma^{\prime}$ may be replaced by $\sigma$. Finally, by a continuity
argument these inequalities extend to the case where $T$ is the ordinate of a
zero of the zeta function. ∎
## 5\. Tables of Computation
For fixed values of $\sigma$, Table 1 provides bounds for $N(\sigma,T)$ of the
shape (4.71). We fix values for $k$ in $[\frac{10^{9}}{H_{0}},1]$. The
parameters $\alpha,d,\delta,\eta$ and $H$ are chosen to make
$\frac{\mathcal{C}_{1}}{2\pi d}$ as small as possible with
$\mathcal{C}_{1}(\alpha,d,\delta,k,H,\sigma)$ as defined in (4.72). The
program returns $H=H_{0}-1$ for all lines in the table. With this $H$ we
minimize $C_{7}(\eta,H)$, which gives $\eta=0.25618\ldots$. Then $\mu$ is
chosen to minimize $\mu C_{7}(\eta,H)+C_{8}(k,\mu)$ (as in the definition
(4.73) of $\mathcal{C}_{2}=\mathcal{C}_{2}(d,\eta,k,H,\mu,\sigma)$). We remark
that there is a small subtlety when considering $\mathcal{U}(\alpha,k,T)$: it
is necessary to ensure that all the coefficients in $\mathcal{J}(k,T)$ are
positive, and this is checked for each set of parameters used. This
guarantees that $\mathcal{U}(\alpha,k,T)$ decreases with $T$.
Table 1. The bound $N(\sigma,T)\leq A(\log(kT))^{2\sigma}(\log T)^{5-4\sigma}T^{\frac{8}{3}(1-\sigma)}+B(\log T)^{2}$ (4.71) for $\sigma=\sigma_{0}$ with $\frac{10^{9}}{H_{0}}\leq k\leq 1$.

$\sigma_{0}$ | $k$ | $\mu$ | $\alpha$ | ${\delta}$ | $d$ | $A=\frac{\mathcal{C}_{1}}{2\pi d}$ | $B=\frac{\mathcal{C}_{2}}{2\pi d}$
---|---|---|---|---|---|---|---
$0.60$ | $0.5$ | $1.251$ | $0.288$ | $0.3140$ | $0.341$ | $2.177$ | $5.663$
$0.65$ | $0.6$ | $1.249$ | $0.256$ | $0.3070$ | $0.340$ | $2.963$ | $5.249$
$0.70$ | $0.8$ | $1.247$ | $0.222$ | $0.3040$ | $0.339$ | $3.983$ | $4.824$
$0.75$ | $1.0$ | $1.245$ | $0.189$ | $0.3030$ | $0.338$ | $5.277$ | $4.403$
$0.80$ | $1.0$ | $1.245$ | $0.160$ | $0.3030$ | $0.337$ | $6.918$ | $3.997$
$0.85$ | $1.0$ | $1.245$ | $0.133$ | $0.3030$ | $0.336$ | $8.975$ | $3.588$
$0.86$ | $1.0$ | $1.245$ | $0.127$ | $0.3030$ | $0.335$ | $9.441$ | $3.514$
$0.87$ | $1.0$ | $1.245$ | $0.122$ | $0.3030$ | $0.335$ | $9.926$ | $3.430$
$0.88$ | $1.0$ | $1.245$ | $0.116$ | $0.3030$ | $0.335$ | $10.431$ | $3.346$
$0.89$ | $1.0$ | $1.245$ | $0.111$ | $0.3030$ | $0.335$ | $10.955$ | $3.262$
$0.90$ | $1.0$ | $1.245$ | $0.105$ | $0.3030$ | $0.334$ | $11.499$ | $3.186$
$0.91$ | $1.0$ | $1.245$ | $0.100$ | $0.3030$ | $0.334$ | $12.063$ | $3.102$
$0.92$ | $1.0$ | $1.245$ | $0.095$ | $0.3030$ | $0.334$ | $12.646$ | $3.017$
$0.93$ | $1.0$ | $1.245$ | $0.089$ | $0.3030$ | $0.333$ | $13.250$ | $2.941$
$0.94$ | $1.0$ | $1.245$ | $0.084$ | $0.3030$ | $0.333$ | $13.872$ | $2.856$
$0.95$ | $1.0$ | $1.245$ | $0.079$ | $0.3030$ | $0.333$ | $14.513$ | $2.772$
$0.96$ | $1.0$ | $1.245$ | $0.074$ | $0.3030$ | $0.332$ | $15.173$ | $2.694$
$0.97$ | $1.0$ | $1.245$ | $0.069$ | $0.3030$ | $0.332$ | $15.850$ | $2.609$
$0.98$ | $1.0$ | $1.245$ | $0.064$ | $0.3030$ | $0.331$ | $16.544$ | $2.532$
$0.99$ | $1.0$ | $1.245$ | $0.060$ | $0.3030$ | $0.331$ | $17.253$ | $2.446$
For fixed values of $\sigma$, Table 2 provides bounds for $N(\sigma,H_{0})$ of
the shape (4.70). In this case, the choice of $H$ is essential and we choose
$H=H_{0}-10^{-6}$. As a consequence the “main term” is $\frac{10^{-6}}{2\pi
d}(\log H_{0})\log\Big{(}1+10^{6}\mathcal{C}_{1}(\log(kH_{0}))^{2\sigma}(\log
H_{0})^{4(1-\sigma)}H_{0}^{\frac{8}{3}(1-\sigma)}\Big{)}$ which becomes
insignificant in comparison to
$\frac{\mathcal{C}_{2}(d,\eta,k,H,\mu,\sigma)}{2\pi d}(\log H_{0})^{2}$, the
term arising from the argument. We take $\alpha=0.324$, $\delta=0.3000$, and
$k=1$ (as we did not find any other values giving better bounds). The
parameter $\eta$ is chosen to minimize $C_{7}(\eta,H)$, and then $\mu$ to
minimize $\mu C_{7}(\eta,H)+C_{8}(k,\mu)$: $\eta=0.2561\ldots$ and
$\mu=1.2453\ldots$.
Table 2. Bound (4.70) with $k=1$.

$\sigma$ | $d$ | $\frac{1}{2\pi d}$ | $\mathcal{C}_{1}$ | $\frac{\mathcal{C}_{2}}{2\pi d}$ | $N(\sigma,H_{0})\leq$
---|---|---|---|---|---
$0.60$ | $2.414$ | $0.066$ | $2094.73$ | $0.893$ | $520.28$
$0.65$ | $3.621$ | $0.044$ | $97986.60$ | $0.595$ | $346.85$
$0.70$ | $4.828$ | $0.033$ | $4583580.34$ | $0.447$ | $260.14$
$0.75$ | $6.036$ | $0.027$ | $214409007.32$ | $0.357$ | $208.11$
$0.80$ | $7.243$ | $0.022$ | $10029544375.44$ | $0.298$ | $173.42$
$0.85$ | $8.450$ | $0.019$ | $469158276689.92$ | $0.255$ | $148.65$
$0.86$ | $8.691$ | $0.019$ | $1012341447042.27$ | $0.248$ | $144.52$
$0.87$ | $8.933$ | $0.018$ | $2184412502812.95$ | $0.242$ | $140.61$
$0.88$ | $9.174$ | $0.018$ | $4713486735514.76$ | $0.235$ | $136.91$
$0.89$ | $9.416$ | $0.017$ | $10170678467214.40$ | $0.229$ | $133.40$
$0.90$ | $9.657$ | $0.017$ | $21946110446020.33$ | $0.224$ | $130.07$
$0.91$ | $9.899$ | $0.017$ | $47354929689448.17$ | $0.218$ | $126.90$
$0.92$ | $10.140$ | $0.016$ | $102181631292174.11$ | $0.213$ | $123.88$
$0.93$ | $10.382$ | $0.016$ | $220485720114084.42$ | $0.208$ | $120.99$
$0.94$ | $10.623$ | $0.015$ | $475760194464125.94$ | $0.203$ | $118.24$
$0.95$ | $10.864$ | $0.015$ | $1026586948666903.92$ | $0.199$ | $115.62$
$0.96$ | $11.106$ | $0.015$ | $2215151194732183.30$ | $0.195$ | $113.10$
$0.97$ | $11.347$ | $0.015$ | $4779814142285142.58$ | $0.190$ | $110.70$
$0.98$ | $11.589$ | $0.014$ | $10313798574616601.14$ | $0.186$ | $108.39$
$0.99$ | $11.830$ | $0.014$ | $22254932487167323.15$ | $0.183$ | $106.18$
###### Abstract
In recent years, there has been a massive increase in the number of Internet
of Things (IoT) devices as well as in the data generated by such devices. The
participating devices in IoT networks can be problematic due to their
resource-constrained nature, and integrating security on these devices is
often overlooked. This has resulted in attackers having an increased incentive
to target IoT devices. As the number of attacks possible on a network
increases, it becomes more difficult for traditional intrusion detection
systems (IDS) to cope with these attacks efficiently. In this paper, we
highlight several machine learning (ML) methods such as k-nearest neighbour
(KNN), support vector machine (SVM), decision tree (DT), naive Bayes (NB),
random forest (RF), artificial neural network (ANN), and logistic regression
(LR) that can be used in IDS. In this work, ML algorithms are compared for
both binary and multi-class classification on Bot-IoT dataset. Based on
several parameters such as accuracy, precision, recall, F1 score, and log
loss, we experimentally compared the aforementioned ML algorithms. In the case
of HTTP distributed denial-of-service (DDoS) attack, the accuracy of RF is
99%. Furthermore, other simulation results based on the precision, recall, F1
score, and log loss metrics reveal that RF outperforms the other algorithms on
all types of attacks in binary classification. However, in multi-class
classification, KNN outperforms
other ML algorithms with an accuracy of 99%, which is 4% higher than RF.
###### keywords:
Internet of Things (IoT); IoT attacks; security; intrusion detection systems;
privacy; machine learning; ML models; multi-class classification
An Experimental Analysis of Attack Classification Using Machine Learning in
IoT Networks
Andrew Churcher 1, Rehmat Ullah 2,*, Jawad Ahmad 1, Sadaqat Ur Rehman 3,
Fawad Masood 4, Mandar Gogate 1, Fehaid Alqahtani 5, Boubakr Nour 6 and
William J. Buchanan 1
Correspondence:<EMAIL_ADDRESS>Tel.: +44-7459-408406
## 1 Introduction
The Internet of Things (IoT) offers a vision where devices with the help of
sensors can understand the context and through networking functions can
connect with each other Dorsemaine et al. (2015). The devices in the IoT
network can be employed for collecting information based on the use cases.
These include retail, healthcare, and manufacturing industries that use IoT
devices for tasks such as tracking purchased items, remote patient monitoring,
and fully autonomous warehouses. It is reported that the number of IoT devices
has been growing every year, with the number of devices predicted to reach
75.44 billion by 2025 Statista (2019). Such a massive surge of IoT devices
ultimately results in more attackers targeting IoT networks. Reports state
that most of the attack traffic generated on IoT networks is automated through
various means such as scripts and malware Doffman (2019). The increase in
attacks, combined with their autonomous nature, is a problem for IoT networks,
as the devices are mostly used in a fire-and-forget fashion for years without
any human interaction. This, combined with the limitations of IoT devices,
including limited processing power and bandwidth, means that providing
adequate security can be difficult, which can result in network-layer attacks
such as denial of service (DoS). Therefore, it is important to research ways
to identify this kind of traffic on networks which can be used in intrusion
detection and prevention systems.
Machine learning (ML) methods can be exploited to detect malicious traffic in
intrusion detection and prevention systems. ML is a subset of artificial
intelligence (AI) that involves using algorithms to learn from data and make
predictions based on the data provided Furbush (2018). ML has many
applications including in retail, healthcare, and finance where AI algorithms
may be applied for predicting customer spending habits, predicting medical
problems in patients, and detecting bank fraud, respectively Jmj (2018).
Due to the large increases in cyberattacks seen on a yearly basis, ML methods
are being incorporated to help tackle these growing threats. ML has several
uses within the field of
cybersecurity, such as network threat analysis, which can be defined as the
act of analyzing threats to the network Dosal (2018). ML can be beneficial in
this task as it is able to monitor incoming and outgoing traffic to identify
potentially suspicious traffic Groopman (2019). This area of research is known
as intrusion detection and is a widely known research area. ML can be applied
to intrusion detection systems (IDS) to help improve the system's ability to
run autonomously and increase its accuracy when raising the alarm on a
suspected attack Technologies (2019). To this end, our primary goal is to
identify the best ML methods for detecting attacks on IoT networks,
using a state-of-the-art dataset by utilizing both binary and multi-class
classification testing.
The main contributions of this paper can be summarized as follows:
1. We conduct an in-depth and comprehensive survey on the role of various ML
methods and attack detection, specifically in regard to IoT networks.
2. We evaluate and compare state-of-the-art ML algorithms in terms of various
performance metrics such as confusion matrix, accuracy, precision, recall, F1
score, log loss, ROC AUC, and Cohen’s kappa coefficient (CKC).
3. We evaluate and compare the results of both binary-class and multi-class
testing.
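For reference, several of the scalar metrics named above derive directly from the confusion-matrix counts; a minimal Python sketch follows (the counts below are hypothetical, chosen only to illustrate the formulas):

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and their harmonic mean (F1) from confusion counts."""
    precision = tp / (tp + fp)   # of flows flagged as attack, how many really were
    recall = tp / (tp + fn)      # of real attacks, how many were flagged
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical binary attack-detection outcome.
p, r, f1 = precision_recall_f1(tp=90, fp=10, fn=30)
print(round(p, 3), round(r, 3), round(f1, 3))  # 0.9 0.75 0.818
```

The same counts also feed accuracy and the other confusion-matrix-derived metrics used later in the paper.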
The rest of the paper is organized as follows: Table 1 lists all the
abbreviations used in the paper. Section 2 is devoted to a literature review
involving investigating IoT intrusion detection techniques as well as ML
methods and how they are being used to aid intrusion detection efforts
specifically in regards to IoT networks. Details of various attacks that can
occur in IoT networks are also showcased with an explanation of how the
various ML methods and performance metrics work. Section 3 explains the
performance evaluation, which also includes an in-depth examination of the
data used in the datasets. The models are compared against each other for both
binary and multi-class classification with an overall best model being
selected. Finally, Section 4 draws a conclusion.
Table 1. Abbreviations and their explanations.
Acronym | Explanation | Acronym | Explanation
---|---|---|---
IDS | Intrusion Detection Systems | ANN | Artificial Neural Network
ML | Machine Learning | KNN | K-nearest Neighbour
SVM | Support Vector Machine | DT | Decision Tree
NB | Naive Bayes | RF | Random Forest
LR | Logistic Regression | DDoS | Distributed Denial-of-Service
IoT | Internet of Things | CKC | Cohen’s Kappa Coefficient
TP | True Positive | TN | True Negative
FP | False Positive | FN | False Negative
TPR | True Positive Rate | FPR | False Positive Rate
## 2 Background and Related Work
This section presents the background and examines the current literature in
order to clarify, for the reader, the design of the experiments conducted in
this paper. Firstly, we discuss IDS, including the use of ML in attack
detection, and the related work, which helps with selecting the algorithms to
be used as well as with identifying datasets that could be utilized for
testing the models. Each algorithm is explored with further
research into the suitability of the algorithm for use in an IDS. The IoT is
also described including the attacks that are used in the dataset that has
been selected.
### 2.1 Intrusion Detection System
An IDS is a tool that allows a network to be monitored for potentially harmful
traffic. An IDS can be implemented using two distinct types: signature-based
detection and anomaly-based detection. A signature-based IDS uses a database
of existing attack signatures and compares the incoming traffic with the
database, meaning that an attack can be detected only if the signature is
already available in the database. An anomaly-based IDS monitors network
traffic and attempts to identify any traffic that is abnormal with respect to
the normal network traffic.
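The two detection styles can be contrasted in a toy sketch (purely illustrative; the signature set, baseline, and tolerance values are invented):

```python
# Hypothetical database of known attack signatures.
KNOWN_ATTACK_SIGNATURES = {"SYN-flood-v1", "mirai-scan"}

def signature_ids(flow_signature):
    """Signature-based: flag only flows matching a known attack signature."""
    return flow_signature in KNOWN_ATTACK_SIGNATURES

def anomaly_ids(packets_per_sec, baseline=50, tolerance=10):
    """Anomaly-based: flag flows deviating too far from a learned baseline."""
    return abs(packets_per_sec - baseline) > tolerance

print(signature_ids("mirai-scan"))    # True: the signature is in the database
print(signature_ids("zero-day-xyz"))  # False: an unseen attack slips through
print(anomaly_ids(900))               # True: far from the normal baseline
```

The second `signature_ids` call illustrates the zero-day weakness discussed next: a signature-based IDS cannot flag what is not already in its database.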
The signature-based detection approach has a major flaw as a signature-based
IDS will always be susceptible to a zero-day attack or an attacker that
modifies the attack to hide from the signature database. Anomaly-based IDSs
are much better suited to using ML, as the IDS can be trained to detect the
difference between normal traffic and attack traffic. However, integrating ML
with IDS is not a silver bullet and may result in some problems. Research
conducted by Sommer and Paxson Sommer and Paxson (2010) identified several
problems, one important one being that models can produce false positives,
which can render the IDS unusable as normal data cause the IDS to alert the
system. Even though that research is now dated, this is still a major problem
when using ML with IDS. As a result, it is of
paramount importance to identify models that produce the lowest number of
false positives.
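The false-positive concern described above can be quantified from the confusion-matrix counts; a minimal sketch (the flow counts below are hypothetical):

```python
def false_positive_rate(tn, fp):
    """FPR = FP / (FP + TN): the fraction of normal traffic wrongly flagged."""
    return fp / (fp + tn)

# Hypothetical IDS evaluation: 9800 normal flows correctly passed, 200 wrongly flagged.
fpr = false_positive_rate(tn=9800, fp=200)
print(fpr)  # 0.02
```

Even a 2% false-positive rate can overwhelm operators on a busy network, which is why models with the lowest FPR are preferred.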
### 2.2 IoT Intrusion Detection Using Machine Learning
ML is a subset of AI that involves giving an algorithm or in this case a model
a dataset which will be used to identify patterns that can be used to make
predictions with future data. There has been limited research devoted to IDS
using ML on IoT networks. To this end, a recent study used the Defense
Advanced Research Projects Agency (DARPA) ML datasets to test models such
as support vector machine (SVM), Naive Bayes (NB), random forest (RF), and
multi-layer perceptron Foley et al. (2020). The results of this research were
presented in terms of root mean squared error, mean absolute percentage error,
receiver operating characteristic curve, and accuracy, yielding good results
with RF being one of the top models. However, this research has two main
limitations: Firstly, it used the DARPA datasets, which were over 20 years old
at the time of writing. Secondly, it was not performed for multi-class testing
using the datasets.
Research has also been conducted using the Bot-IoT dataset with the models
k-nearest neighbour (KNN), quadratic discriminant analysis, iterative
dichotomiser 3, RF, adaptive boosting, multi-layer perceptron, and NB Alsamiri
and Alsubhi (2019). The research did yield very good results in terms of
accuracy, precision, recall, F1 score, and time. This study used an up-to-date
dataset as well as a wide variety of ML models. However, this research did not
include any multi-class testing for any of the models.
In regards to multi-class classification, the authors of Hasan et al. (2019)
used several ML methods. This research compared the algorithms such as
logistic regression (LR), decision tree (DT), RF, and artificial neural
network (ANN) using a dataset created by the researchers which was not
available for public use. It was concluded in the study that RF was the best
model for multi-class classification. This research shows that with multi-
class classification it is possible to achieve high results. Testing with
additional algorithms could help bolster the results of the research.
Overall, there is currently a lack of research into intrusion detection within
the area of IoT networks. This could be due to the lack of datasets as well as
lack of real hardware with all datasets being comprised of simulated IoT
devices on regular computers. There is also a lack of research into multi-
class classification, which could be due to the lack of a dedicated multi-
class dataset. With all available datasets being created with binary
classification in mind, performing multi-class testing requires the datasets
to be merged into one with proper labelling for each class.
Various ML models can be utilized to perform ML tasks, each with their own
mathematical equations powering the analysis of the data presented. In the
next subsections, we discuss various ML algorithms for our analysis such as:
(i) KNN; (ii) SVM; (iii) DT; (iv) NB; (v) LR; and (vi) ANN.
#### 2.2.1 K-Nearest Neighbor
KNN is a supervised learning model that is considered to be one of the
simplest ML models available Brownlee (2019). KNN is referred to as a lazy
learner because there is no training done with KNN; instead, the training data
are used when making predictions to classify the data Brownlee (2019). KNN
operates under the assumption that similar data points will group and finds
the closest data points using the K value, which can be set to any number
Harrison (2019). KNN is a suitable model for intrusion detection, as showcased
by several pieces of research. The authors of Liao
and Vemuri (2002) examined the effectiveness of KNN at distinguishing between
attack and normal data. The results of this research show that KNN was an
effective model of detecting attack data and had a low false-positive rate.
Moreover, recent research also examined the effectiveness of KNN Nikhitha and
Jabbar (2019) with a similar consensus being met. The research showed that KNN
was an effective model beating SVM and DT.
#### 2.2.2 Support Vector Machine
Support Vector Machine (SVM) is a supervised learning algorithm that uses a
hyperplane to separate the training data to classify future predictions. The
hyperplanes divide a dataset into two classes and they are decision boundaries
that help classify the data points. A hyperplane can be represented as a line
or a plane in a multi-dimensional space and is used to separate the data based
on the class they belong to. It does this by finding the maximum margin space
between the support vectors. SVM is a suitable model for intrusion detection
as evident by the large amount of research conducted over the years. One older
piece of research created an enhanced SVM model for intrusion detection Yao et
al. (2006). The research was successful at creating the model but proved to be
only a slight improvement over regular SVM, showing that the model even
without enhancements or augmenting is capable of accurately classifying attack
data. Other more recent research compared the abilities of SVM and ANNs to
classify attack data Cahyo et al. (2016). As previously mentioned, SVM relies
on placing a hyperplane to separate data, which can be expressed as follows:
$ax-b=0$ (1)
where $a$ is the vector of the same dimensions as the input feature vector $x$
and $b$ is the bias. In this case, $ax$ can be written as
$a^{1}x^{1}+a^{2}x^{2}+...+a^{n}x^{n}$ where $n$ is the number of dimensions
of the feature vector $x$. When making predictions, the following expression
is used:
$y=sign(ax-b)$ (2)
where $sign$ is a function that returns either $+1$ or $-1$ depending if the
input is a positive number or a negative number respectively. This value is
used to determine the prediction of what class the feature vector belongs to.
Here, $x_{i}$ is the $i$-th feature vector and $y_{i}$ is its label, which
can be either $+1$ or $-1$, written as follows:
$ax_{i}-b\geq+1\;\text{if}\;y_{i}=+1$ $ax_{i}-b\leq-1\;\text{if}\;y_{i}=-1$
SVMs use kernels, which are mathematical functions that take data as input
and transform them into the form required for processing. Kernels can be
linear, nonlinear, polynomial, Gaussian, radial basis function (RBF),
sigmoid, etc.
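The hyperplane and sign rule of Equations (1) and (2) can be sketched with scikit-learn's linear SVC; the toy data are assumptions, and note that sklearn stores the hyperplane as coef_ · x + intercept_ = 0, so the intercept plays the role of the bias with the opposite sign convention to the text.

```python
# Minimal linear-SVM sketch of Equations (1)-(2); the toy data are
# illustrative assumptions only.
import numpy as np
from sklearn.svm import SVC

X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
y = np.array([-1, -1, 1, 1])

clf = SVC(kernel="linear")
clf.fit(X, y)

# sklearn's hyperplane: a.x + b = 0, with a in coef_ and b in intercept_
# (sign convention differs from the text's a.x - b = 0).
a, b = clf.coef_[0], clf.intercept_[0]

# Predictions follow the sign rule of Equation (2).
manual = np.sign(X @ a + b).astype(int)
```

For linearly separable data like this, the manual sign computation reproduces `clf.predict(X)` exactly.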
#### 2.2.3 Decision Tree
DT is a supervised learning algorithm that provides a visual representation
of the model. A DT uses a hierarchical model that resembles a
flow chart which has several connected nodes. These nodes represent tests on
the attribute in the dataset with a branch that leads to either another node
or a decision on the data being classified Sharma and Kumar (2016). The
training data are used to build the tree with the prediction data being run
through the nodes until the data can be classified. DT is a suitable model for
intrusion detection based on the research conducted. One fairly recent piece
of research compared DT with several other models including NB and KNN Stampar
and Fertalj (2015). The results show that DT was one of the better models
along with NB when compared to ANNs, which dominate IDS research. Other
research created an IDS for connected vehicles in smart cities Aloqaily et al.
(2019). This research showed that the model that used DT was the best model
with high accuracy and a low false positive rate. As previously mentioned, DT
creates a hierarchical model using the training data to create nodes that act
as tests for making predictions. When building a DT, the root node and the
other nodes that make up the tree must be selected. There are many ways to do
this; entropy is used in this case. Entropy measures the probability of a
data point being incorrectly classified when chosen at random and is
expressed as follows:
$E=-\sum^{c}_{i=1}p_{i}\log_{2}(p_{i})$ (3)
where $p_{i}$ is the probability of the data being classified to a given class
of $i$ and $c$ is the number of classes. The attribute with the lowest entropy
would be used for the root node.
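Equation (3) can be sketched directly; the probabilities below are made-up examples.

```python
# A small sketch of Equation (3): entropy of a class distribution at a node.
import math

def entropy(probabilities):
    """E = -sum(p_i * log2(p_i)); terms with p_i == 0 contribute nothing."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A pure node has zero entropy; a 50/50 split has the maximum (1 bit).
pure = entropy([1.0])        # -> 0.0
mixed = entropy([0.5, 0.5])  # -> 1.0
```

The attribute whose split yields the lowest entropy (purest child nodes) is the one chosen for the root.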
#### 2.2.4 Random Forest
RF is a supervised learning algorithm that is seen to be an improvement on the
DT model. The random aspect of the model comes from two key concepts. The
first is that, when training the model, each tree is given a random assortment
of the data which can result in some trees using the same data multiple times.
The reason behind this is to lower the variance of the model, which lowers the
difference in the predicted results Koehrsen (2018). The second concept
involves using only a small subset of the features when splitting the nodes
in the trees Dubey (2018). This is done to prevent overfitting, where the
model fits the training data so closely that its predictions do not
generalize Brownlee (2019). When making predictions with RF, the average of the individual trees'
predictions is used to determine the overall class of the data; this process
is called bootstrap aggregating Brownlee (2019). The reason RF is seen as an
improvement on DT is that, instead of relying on one tree to make the
classification, multiple trees with different training data and with a
different selection of features are used for giving predictions. This allows
for a fairer analysis of the data when making predictions. RF is proven to be
a suitable model for intrusion detection. To this end, the authors of Farnaaz
and Jabbar (2016) compared RF to other frameworks used in intrusion detection.
They found that the RF model outperformed the other frameworks with increased
accuracy, precision, recall, and F1 score.
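A minimal sketch of bootstrap-aggregated trees with scikit-learn; the toy clusters, n_estimators = 10, and the fixed random seed are illustrative assumptions.

```python
# Minimal RF sketch; data and parameters are illustrative assumptions.
from sklearn.ensemble import RandomForestClassifier

# Two small clusters standing in for "no attack" (0) and "attack" (1).
X = [[0.0, 0.1], [0.1, 0.0], [0.2, 0.2], [0.1, 0.1],
     [1.0, 0.9], [0.9, 1.0], [1.1, 1.1], [1.0, 1.0]]
y = [0, 0, 0, 0, 1, 1, 1, 1]

# Each of the 10 trees is trained on a bootstrap sample of the rows and
# considers a random subset of features at each split.
rf = RandomForestClassifier(n_estimators=10, random_state=0)
rf.fit(X, y)

# The forest's prediction aggregates the votes of the individual trees.
pred = rf.predict([[0.05, 0.05], [1.05, 1.05]])
```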
#### 2.2.5 Naive Bayes
NB is a probabilistic algorithm that works by getting the probability of all
the feature vectors and their outcome. The algorithm is used to determine the
probability of an event occurring based on previous events occurring which is
called posterior probability and is expressed as follows:
$P(A|B)=\frac{P(B|A)P(A)}{P(B)}$ (4)
where $P(A|B)$ is the posterior probability, $P(A)$ is known as the prior
probability, $P(B)$ is marginal likelihood (evidence), and $P(B|A)$ is
referred to as the likelihood. This formula can be applied to datasets in the
following way:
$P(y|x)=\frac{P(x|y)P(y)}{P(x)}$ (5)
where $y$ is the class variable and $x$ is the feature vector of size $n$
shown as the following:
$x=(x_{1},x_{2},x_{3},...,x_{n})$ (6)
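A minimal sketch of Equation (5) using Gaussian Naive Bayes (one common NB variant); the toy feature vectors and labels are illustrative assumptions.

```python
# Minimal Gaussian NB sketch of Equation (5); toy data are assumptions.
from sklearn.naive_bayes import GaussianNB

X_train = [[0.0, 0.1], [0.1, 0.0], [0.2, 0.2],
           [1.0, 0.9], [0.9, 1.0], [1.1, 1.1]]
y_train = [0, 0, 0, 1, 1, 1]

nb = GaussianNB()
nb.fit(X_train, y_train)

# predict_proba returns the posterior P(y|x) for each class,
# normalized so the probabilities sum to 1.
proba = nb.predict_proba([[0.05, 0.05]])
pred = nb.predict([[0.05, 0.05], [1.0, 1.0]])
```

The "naive" assumption is that the features $x_{1},\dots,x_{n}$ are conditionally independent given the class, which lets the likelihood $P(x|y)$ factor into a product of per-feature terms.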
#### 2.2.6 ANN
An ANN refers to a model of performing machine learning that is based on how
the human brain operates and can be used to perform supervised learning. An
ANN consists of neurons or nodes that make up the layers of the network
Saritas and Yasar (2019). The three types of layers in an ANN are input,
hidden, and output layers where the input layer takes information provided and
passes it onto the hidden layer. The hidden layer performs computations and
transfers the data to the output layer. The output layer also performs
computations and presents the output of the ANN Ujjwalkarn (2016). When
performing supervised learning, the network is given the inputs and expected
outputs for training. The connections between the nodes in the network have
numbers assigned to them called weights. When an error is made by the network,
the data are propagated back through the network and the weights are adjusted.
This process occurs repeatedly until the error is minimized, and then the test
data can be fed through the network Maind et al. (2014). Training an ANN is
described as follows:
The first step in training the ANN involves multiplying the input values
$x_{i}$ and the weights $w_{i}$, and then summing the values expressed as the
following:
$x_{i}\cdot w_{i}=(x_{1}\cdot w_{1})+(x_{2}\cdot w_{2})+...+(x_{n}\cdot
w_{n})$ (7)
The second step involves adding the summed values to the bias $b$ of the
hidden layer node as expressed as the following:
$z=x_{i}\cdot w_{i}+b$ (8)
The third step is to pass the $z$ value through an activation function such as
ReLU and Softmax. ReLU $R(z)$ can be defined as follows:
$\hat{y}=R(z)=max(0,z),$ (9)
where $z$ is the input to a neuron. When $z$ is smaller than zero, the
function outputs zero; when $z$ is greater than or equal to zero, the output
is simply the input. Softmax can be defined as follows:
$\hat{y}=s(z)_{i}=\frac{e^{z_{i}}}{\sum\nolimits_{j=1}^{n}e^{z_{j}}}$ (10)
where $e$ is the base of the natural logarithm, $z$ is a vector of the inputs,
and $i$ and $j$ index the input and output units, respectively.
To train the ANN, the loss needs to be calculated so the network can
effectively evaluate its performance and make the appropriate changes. Once
the loss has been calculated, the next step is to minimize this loss by
changing the weights and the biases. Knowing how the cost function $C$ (which
is a measure of how well the network performed with respect to a given
training sample and the expected output) changes in relation to the weights
$w_{i}$ can be done using gradients. Using the following chain rule, the
gradient of the cost function in relation to the weights can be calculated:
$\frac{\partial C}{\partial w_{i}}=\frac{\partial
C}{\partial\hat{y}}\times\frac{\partial\hat{y}}{\partial
z}\times\frac{\partial z}{\partial w_{i}}$ (11)
where $\frac{\partial C}{\partial\hat{y}}$ is the gradient of the cost
function, $\frac{\partial\hat{y}}{\partial z}$ is the gradient of the
predicted value, and $\frac{\partial z}{\partial w_{i}}$ is the gradient of
$z$ in regards to $w_{i}$.
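Equations (7)-(9) and the chain rule of Equation (11) can be traced by hand for a single ReLU neuron with a squared-error cost; all numeric values below are illustrative assumptions.

```python
# Hand-rolled sketch of Equations (7)-(9) and (11) for one ReLU neuron
# with a squared-error cost; all values are illustrative assumptions.
import numpy as np

x = np.array([0.5, -0.2])   # inputs x_i
w = np.array([0.8, 0.4])    # weights w_i
b = 0.1                     # bias

z = x @ w + b               # Eqs. (7)-(8): z = sum(x_i * w_i) + b
y_hat = max(0.0, z)         # Eq. (9): ReLU activation
y_true = 1.0
C = (y_hat - y_true) ** 2   # squared-error cost for this sample

# Eq. (11): dC/dw_i = (dC/dy_hat) * (dy_hat/dz) * (dz/dw_i)
dC_dyhat = 2 * (y_hat - y_true)        # gradient of the cost
dyhat_dz = 1.0 if z > 0 else 0.0       # gradient of ReLU
grad_w = dC_dyhat * dyhat_dz * x       # dz/dw_i = x_i
```

A gradient-descent step would then update `w -= learning_rate * grad_w`, which is the weight adjustment described in the backpropagation discussion above.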
ANNs are well suited to detecting IoT attacks and have had many
implementations. Recently, the authors of Anitha and Arockiam (2019)
implemented an ANN-based model for detecting IoT-based attacks. The model was
successful and can be used on IoT networks to perform intrusion detection.
Shenfield et al. (2018) also implemented intrusion detection using ANNs; this
research had very good results, with the model achieving near-perfect
accuracy and a very low false-positive rate.
#### 2.2.7 Logistic Regression
LR is a supervised learning algorithm that uses the logistic function also
known as the Sigmoid function. Logistic regression is similar to linear
regression except, instead of predicting data that are continuous, it is used
for classifying data as either true or false. Linear regression can output any
value, whereas LR has values between 0 and 1 Rajput (2018). Logistic
regression is a model that is less represented in intrusion detection than
other models. Its suitability for use in intrusion detection is not as well
established as the previous models. However, some research has examined a
logistic regression based intrusion detection model Ghosh and Mitra (2015).
This model was tested using multi-class classification and was able to
outperform the other models.
As previously mentioned, logistic regression can be thought of as linear
regression but for classification problems. The reason that logistic
regression is used is because with linear regression the hypothesis $h_{o}(x)$
can be greater than one or less than zero. With logistic regression, the
hypothesis is between zero and one, e.g., $0\leq h_{o}(x)\leq 1$, where
$h_{o}$ is a single hypothesis that maps inputs to outputs and can be
evaluated and used to make predictions.
To get a value between zero and one, the Sigmoid function is used which is
represented as follows:
$S(x)=\frac{1}{1+e^{-x}}$ (12)
This function returns a number between 0 and 1 which can be mapped to a
particular class of data by using a decision boundary to determine the
likelihood of the data of a certain class, which can be expressed as follows:
$p\geq 0.5\ class=1$ $p<0.5\ class=0$
Once the threshold is set, predictions can be made using the Sigmoid function
to determine the likelihood that the data belongs to class 1 as follows:
$S(class=1)=\frac{1}{1+e^{-x}}$ (13)
This function gives back a number that represents the probability that the
data should be classified as Class 1. With the previously defined threshold,
if the number is 0.5 or above then the data will be classified as Class 1, and
anything less than 0.5 will be classified as class 0.
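Equations (12) and (13) and the 0.5 decision boundary can be sketched as follows; the threshold and inputs are illustrative.

```python
# Sketch of Equations (12)-(13): sigmoid plus a 0.5 decision threshold.
import math

def sigmoid(x):
    """Equation (12): squashes any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def classify(x, threshold=0.5):
    """Map the sigmoid output to class 1 or class 0 via the threshold."""
    return 1 if sigmoid(x) >= threshold else 0

p = sigmoid(0.0)  # exactly on the decision boundary
```

A strongly positive input is mapped near 1 (class 1) and a strongly negative input near 0 (class 0), which is what makes the sigmoid suitable for binary classification where linear regression is not.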
The following subsection provides some details on IoT including the attacks
that are used in the dataset for this paper.
### 2.3 Internet of Things Attacks
As previously discussed, IoT is considered as a network of devices/objects
communicating through wired or wireless communication technologies Hussain et
al. (2020). The protocols used by IoT devices are designed to be used on
devices with limited computation, storage, and communication capabilities that
need to conserve as much battery power as possible. Such protocols include
ZigBee, radio-frequency identification (RFID), and smart Bluetooth. The
rapid increase in IoT devices in use has outpaced standardization
activities, resulting in a massive influx of unsecured devices being
connected to networks Saleem et al. (2018). This in turn creates a large
attack surface, leaving many vulnerable devices open to exploitation by
attackers. In the following subsections, we describe relevant threats and
attacks faced by IoT.
#### 2.3.1 Data Exfiltration
A data exfiltration attack involves attackers gaining access to a private
network and stealing data stored on the network Ullah et al. (2017). This type
of attack can result in the theft of data such as credit card information and
personal data. Several studies have been conducted in the field of detecting
data exfiltration attacks using methods such as partially observable Markov
decision process Carthy et al. (2016) and a method that involves capturing
metadata at the file system level Fadolalkarim and Bertino (2019).
#### 2.3.2 DoS and DDoS
Denial of service (DoS) and distributed denial of service (DDoS) attacks are
very similar in execution. The primary difference involves the scale of the
attack. A DoS attack involves a single system and Internet connection being
used to attack the victim, whereas a DDoS attack involves multiple systems and
Internet connections on a global scale being used to attack the victim, which
are typically referred to as botnets Malik and Singh (2015).
There are many different ways to perform either of these attacks depending on
what protocol is used in the attack. These different methods include HTTP
flood, TCP SYN, and UDP flood attack, as identified by Mahjabin et al. (2017).
An HTTP flood attack involves altering either the GET or POST requests sent
via HTTP. A GET request is used when a client wishes to receive information
from the server, whereas a POST request is used to send information to the
server, such as uploading a file. Sending thousands of these requests to a
server or cluster of servers at once increases the workload at the server(s)
side exponentially, slowing the entire system down or preventing legitimate
users from accessing the server(s).
A TCP SYN attack exploits the three-way handshake that occurs during a TCP
connection, in which a client sends a SYN packet that elicits a SYN-ACK
response from the server. During the attack, the source address in the SYN
packets is spoofed, so the server's SYN-ACK replies never receive a response.
Each half-open connection stores an entry in the server's connection table,
which eventually becomes full and prevents legitimate users from accessing
the server. A UDP flood attack involves sending UDP packets with a
port number and sometimes a spoofed IP address as well. Once the server
receives such a packet, it checks for any application listening on that port
and, if none is found, sends back a “Destination Unreachable” packet. As more
and more packets are received, the system becomes unresponsive
to other clients.
Moreover, attackers are able to turn on devices such as webcams and digital
video recorders (DVRs). One such example of this was the Mirai botnet in 2016
which was able to make use of up to 400,000 devices and take down large
websites such as Twitter and GitHub Kolias et al. (2017). Due to the lack of
security on IoT devices, considerable research has been conducted into
detecting DoS and DDoS traffic Galeano-Brajones et al. (2020); Ul et al.
(2018). However, these approaches lack the use of ML techniques.
#### 2.3.3 Keylogging
The basic function of a keylogger is to store the keystrokes made by a user on
their keyboard. Keyloggers can be both hardware and software based Olzak
(2008); Abukar et al. (2014). Software keylogging is typically done by
installing malware on the victim machine that saves the key strokes and relays
this to the attacker. Some research has been devoted to keylogging detection
methods (see, e.g., Ortolani et al. (2010); Wajahat et al. (2019)).
#### 2.3.4 OS Scan and Service Scan
Operating system (OS) and service scans are similar in nature and can be
grouped into the attack category of probing. This can be done either
passively, in which the attacker gathers packets from the network, or
actively, in which the attacker sends traffic and records the responses.
Since passive scanning generates no traffic, active scanning is needed to
produce traffic to test against. OS scans allow the attacker to discover the OS
being used by the victim machine. This information can help an attacker
identify the type of device, e.g., server, computer or IoT device. It can also
help the attacker identify the version of the OS being used. This can help the
attacker find vulnerabilities related to the OS.
There has been a plethora of research conducted into using OS scans to identify
if a device is an IoT device. One study used neural networks to identify if
the device scanned was an IoT device Yang et al. (2019). Another study used
deep learning techniques to identify Raspberry Pi devices that were acting as
IoT devices Aneja et al. (2018). Both studies show that it is possible to
identify IoT devices using OS scanning techniques.
Service scans, more commonly referred to as port scans, involve the attacker
probing a network in order to identify open ports on the network Bhuyan et al.
(2011). This is commonly used by an attacker to gain a better insight into the
types of activity on the network as well as showcasing any open ports that are
vulnerable to being exploited. A port scan works by having the software used
send a request to a port on another network to set up a connection. The
software will then wait for a response from the network.
Due to the fact that IoT devices can range from printers to heating
controllers, the ports that can be used by devices can vary. To this end, the
authors of Markowsky and Markowsky (2015) conducted a study performing a scan
on printers to identify vulnerable ports. The results showcase that port 9100
was a commonly opened port on printers. The port is used to carry data to and
from printers over TCP. It was also noted that gaining access to the network
using this port was a simple process.
Port scanning can also be used to identify if a device is an IoT device. An
analysis by Sivanathan et al. (2018) showed that by scanning for a small
number of TCP ports it could be determined whether a device was an IoT device
including information on the device itself, such as identifying a device as an
HP printer. Since IoT devices are generally more vulnerable than other
devices, this could be used to identify an entry point to a network. A study
using an approach based on Dempster–Shafer evidence theory produced a solid
groundwork for detecting port scan traffic Shao et al. (2016). Another study
proposed a new evaluation metric for IDS, which was reported to take less time
to identify port scan data than previous metrics Lopez-Vizcaino et al. (2019).
Neither of these studies included IoT devices, and there is currently a lack
of research into OS scans in regards to IoT devices.
Recently, several efforts have been devoted to ML in IoT networks Hussain et
al. (2020); Rashid et al. (2020); Soe et al. (2020); Ioannou and Vassiliou
(2019). However, in most of the existing works, the performance is checked
only for specific types of ML algorithms, such as ANN, J48 DT, and NB,
without a detailed performance evaluation. Although some work is based on
various ML algorithms such as LR, SVM, DT, RF, ANN, and KNN, most of them are
used to mitigate IoT cybersecurity threats in special environments such as a smart city.
Contrary to existing works, our study provides a comprehensive evaluation for
both real attack and simulated attack data that were created by simulating a
realistic network at the University of New South Wales where real attacks on
IoT networks were recorded.
## 3 Performance Evaluation
### 3.1 Benchmark Data
Our evaluation involves using several datasets with several ML models to
identify the best model for correctly classifying IoT attack data. When
selecting the datasets, the two most important factors were the amount of
variety in the attack data and how up-to-date the datasets are. The datasets
chosen were the bot-IoT datasets Koroniotis et al. (2019) because they met the
two criteria previously mentioned.
### 3.2 Performance Evaluation Metrics
For evaluation, we consider the following metrics.
#### 3.2.1 Confusion Matrix
A confusion matrix shows the predictions made by the model. It is designed to
show where the model has correctly and incorrectly classified the data.
The confusion matrix for binary and multi-class classification is different.
With binary classification, the matrix shows the true positive (TP), true
negative (TN), false positive (FP), and false negative (FN) results, as shown
in Table 3.2.1. The columns represent the actual classification of the data
and the rows represent the predicted classification.
Table: Confusion matrix example.

| Predicted label | Actual: No attack | Actual: Attack |
|---|---|---|
| No attack | True negative | False negative |
| Attack | False positive | True positive |
TP and TN are when the data are correctly classified as either attack or no
attack. FP and FN are when data are incorrectly predicted as the other class.
When using a confusion matrix for multi-class problems, the same principles
apply. However, the matrix shows all the classes, which allows for observing
where misclassification occurs among the classes, as shown in Table
3.2.1.
Table: Multi-class confusion matrix example.

| Predicted label | Actual: Class 1 | Actual: Class 2 | Actual: Class 3 |
|---|---|---|---|
| Class 1 | C | W | W |
| Class 2 | W | C | W |
| Class 3 | W | W | C |
In Table 3.2.1, C represents where the correct classifications are located and
W represents incorrect classifications. It is to be noted that correct
classifications create a diagonal path through the table from the top left
corner to the bottom right corner.
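A sketch of the binary confusion matrix using scikit-learn; the label vectors are made-up examples, and note that sklearn arranges actual labels as rows and predicted labels as columns.

```python
# Sketch of a binary confusion matrix with sklearn; labels are made up.
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 1, 1, 1, 0]   # 0 = no attack, 1 = attack (actual)
y_pred = [0, 1, 1, 1, 0, 0]   # model predictions

# sklearn returns rows as actual labels, columns as predicted labels:
# [[TN, FP], [FN, TP]] for binary data ordered (0, 1).
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
```

The correct classifications (TN and TP) again sit on the matrix diagonal, as noted above.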
#### 3.2.2 Accuracy
Accuracy is a metric that can be used to identify the percentage of
predictions that were classified correctly and is expressed as follows:
$Accuracy=\frac{\text{Number of correct predictions}}{\text{Total number of
predictions}}$ (14)
This can be expanded upon by utilizing the results of a confusion matrix
including TP, TN, FP, and FN and can be defined as follows:
$Accuracy=\frac{\text{TP + TN}}{\text{TP + TN + FP + FN}}$ (15)
#### 3.2.3 Precision
Precision is used to determine the ratio of correctly predicted positive
outcomes against the total number of predicted positive outcomes and can be
defined as follows:
$Precision=\frac{\text{TP}}{\text{TP + FP}}$ (16)
#### 3.2.4 Recall
Recall is used to determine the ratio of correctly predicted positive outcomes
to all the outcomes in the given class and can be defined as follows:
$Recall=\frac{\text{TP}}{\text{TP + FN}}$ (17)
#### 3.2.5 F1 Score
F1 score is the weighted average of both precision and recall which produces a
number between 0 and 1. F1 score is seen as a better performance metric than
accuracy and can be defined as follows:
$F1score=\frac{\text{2 $\times$ (recall $\times$ precision)}}{\text{recall +
precision}}$ (18)
It is to be noted that selection of F1 score or accuracy is dependent on how
the data are distributed. The F1 score seems a better performance metric than
accuracy in the case where the classes are highly unbalanced. F1 score takes
into account how the data are distributed, and, in most real-life
classification problems, imbalanced class distribution exists and thus F1
score is a better metric to be used. Accuracy is used when the class
distribution is similar and it does not take into account how the data are
distributed, which may lead to wrong conclusion.
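Equations (15)-(18) can be computed directly from confusion-matrix counts; the counts below are made-up examples.

```python
# Equations (15)-(18) from confusion-matrix counts; values are made up.
tp, tn, fp, fn = 90, 80, 10, 20

accuracy = (tp + tn) / (tp + tn + fp + fn)            # Eq. (15)
precision = tp / (tp + fp)                            # Eq. (16)
recall = tp / (tp + fn)                               # Eq. (17)
f1 = 2 * (recall * precision) / (recall + precision)  # Eq. (18)
```

With these counts the accuracy is 0.85 while the F1 score is lower, reflecting the 20 false negatives that precision alone does not penalize.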
#### 3.2.6 Log Loss
Log loss is used to measure the performance of a model by using the predicted
probability of the expected outcome. The higher the predicted probability of
the actual class is, the lower the log loss will be. A lower score indicates
that the model has performed better.
For binary classification where number of possible classes (_M_) = 2, log loss
can be expressed as follows:
$-{(y_{i}\log(p_{i})+(1-y_{i})\log(1-p_{i}))}$ (19)
For multi-class classification where _M_ $>$ 2, a separate loss for each
class label is calculated, and the results are summed, which is expressed as
follows:
$-\sum_{c=1}^{M}y_{o,c}\log(p_{o,c})$ (20)
where _M_ is the number of possible classes (e.g., 0, 1, 2), log is the
natural logarithm, $y_{i}$ is a binary indicator of whether class label $i$
is the correct classification for the observation, and $p_{i}$ is the model's
predicted probability.
#### 3.2.7 ROC AUC
ROC is a graph used to plot the results of the model at various thresholds
when making predictions. The graph uses the true positive rate (TPR) and false
positive rates (FPR), which are expressed as follows:
$TPR=\frac{\text{TP}}{\text{TP + FN}}$ (21)
$FPR=\frac{\text{FP}}{\text{FP + TN}}$ (22)
#### 3.2.8 Cohen’s Kappa Coefficient
Cohen’s kappa coefficient (CKC), also referred to as the kappa statistic, is
used to test the inter rater reliability of prediction and can be expressed as
follows:
$k=\frac{\text{Pr(a) - Pr(e)}}{\text{1 - Pr(e)}}$ (23)
where Pr(a) is the observed agreement and Pr(e) is the expected agreement.
This metric is useful as it compares the model against a model that guesses
based on the frequency of the classes. This allows for the disparity in a
dataset to be evaluated particularly with multi-class testing as the dataset
has varying numbers of data points per attack.
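Equation (23) is implemented by scikit-learn's `cohen_kappa_score`; the label vectors below are made-up examples.

```python
# Sketch of Equation (23) via sklearn; the label vectors are made up.
from sklearn.metrics import cohen_kappa_score

y_true = [0, 0, 0, 1, 1, 1]
y_pred = [0, 0, 1, 1, 1, 1]

# Observed agreement Pr(a) = 5/6; chance agreement Pr(e) = 1/2 here,
# so kappa = (5/6 - 1/2) / (1 - 1/2) = 2/3.
kappa = cohen_kappa_score(y_true, y_pred)
```

A kappa of 0 means the model does no better than frequency-based guessing, which is why the metric is informative on the imbalanced multi-class dataset used here.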
### 3.3 Dataset Description
The dataset named Bot-IoT was submitted to the IEEE website on 16/10/19 and
was created by the University of New South Wales (UNSW). The dataset consists
of ten CSV files containing records for the following attacks on IoT networks:
(i) Data exfiltration; (ii) DoS HTTP; (iii) DoS TCP; (iv) DoS UDP; (v) DDoS
HTTP; (vi) DDoS TCP; (vii) DDoS UDP; (viii) Keylogging; (ix) OS Scan; and (x)
Service Scan. The dataset comprises both real attack and simulated attack data
and was created by simulating a realistic network at the UNSW Koroniotis et
al. (2019).
Table 3.3 shows the features used in the experiments. There are 35 columns in
the dataset. However, only the ones in Table 3.3 were used. When deciding what
features to use, the contents of the columns are examined and any columns that
have no values are removed as well as columns that contain text and columns
that are deemed to be irrelevant to the overall classification of the data.
Table: Dataset features and description.

| Feature | Description |
|---|---|
| Stime | Record start time |
| Sport | Port that data is being sent from |
| Dport | Port that data is being received from |
| Pkts | Total number of packets transferred |
| Bytes | Total number of bytes transferred |
| Ltime | Record last time |
| Seq | Sequence number |
| Dur | Record total duration |
| Mean | Average duration of aggregated records |
| Sum | Total duration of aggregated records |
| Min | Minimum duration of aggregated records |
| Max | Maximum duration of aggregated records |
| Spkts | Source-to-destination packet count |
| Dpkts | Destination-to-source packet count |
| Sbytes | Source-to-destination byte count |
| Dbytes | Destination-to-source byte count |
| Rate | Total packets per second in transaction |
| Srate | Source-to-destination packets per second |
| Drate | Destination-to-source packets per second |
One important part of examining the dataset involves checking the
representation of the classes in the dataset, i.e. whether one class is over
or under represented, as this can have a detrimental effect on the
experiments. Table 3.3 shows the amount of attack data and no attack data for
each dataset used in the experiments.
Table: Dataset label distribution.

| Dataset | No Attack Data | Attack Data | Total |
|---|---|---|---|
| Data exfiltration | 24 | 118 | 142 |
| DDoS HTTP | 55 | 19771 | 19826 |
| DDoS TCP | 32 | 1048543 | 1048575 |
| DDoS UDP | 36 | 1048539 | 1048575 |
| Keylogging | 164 | 1469 | 1633 |
| OS Scan | 3949 | 358275 | 362224 |
| Service Scan | 1993 | 1046582 | 1048575 |
| DoS HTTP | 56 | 29706 | 29762 |
| DoS TCP | 106 | 1048469 | 1048575 |
| DoS UDP | 37 | 1048538 | 1048575 |
To conduct multi-class testing, a new CSV file is created using the binary
classification datasets. The datasets were collected and then randomized and
put into a new file. Due to the large size of the dataset, only a selected
percentage of the data is used to prevent excessive run times. Table 3.3 shows
the class representation of the training and test data in the multi-class
dataset. It is observable in both the binary and multi-class datasets that not
all classes have equal representation. Testing with weighted classes can be
done to see the effects of having equal representation among the classes. The
models SVM, DT, RF, ANN, and LR are able to use the balanced weighted classes
option, which applies to the class weights as follows:
$W=\frac{Samples}{Classes\times Y}$ (24)
where $Samples$ is the number of rows in the dataset, $Classes$ is the number
of classes in the dataset, and $Y$ is the number of occurrences of each label.
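Equation (24) matches scikit-learn's "balanced" class-weight heuristic, where $Y$ is the per-class label count; the toy label vector below is an assumption for illustration.

```python
# Sketch of Equation (24) as implemented by sklearn's "balanced" option:
# weight_c = n_samples / (n_classes * count_c); labels are made up.
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

y = np.array([0, 0, 0, 0, 0, 0, 1, 1])   # heavily imbalanced toy labels
weights = compute_class_weight("balanced", classes=np.array([0, 1]), y=y)

# Manually: 8 samples / (2 classes * [6, 2] occurrences per label)
manual = 8 / (2 * np.bincount(y))
```

The under-represented class receives the larger weight, which counteracts the unequal class representation discussed above.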
Table: Multi-class data representation.

| Class | Training Data | Test Data | Total |
|---|---|---|---|
| No attack | 1398 | 335 | 1733 |
| Data exfiltration | 22 | 7 | 29 |
| DDoS HTTP | 4209 | 1015 | 5224 |
| DDoS TCP | 221638 | 56377 | 278015 |
| DDoS UDP | 222728 | 55302 | 278030 |
| Keylogging | 314 | 81 | 395 |
| OS Scan | 75877 | 18907 | 94784 |
| Service Scan | 221745 | 55768 | 277509 |
| DoS HTTP | 6343 | 1475 | 7818 |
| DoS TCP | 223555 | 55236 | 278791 |
| DoS UDP | 222171 | 55501 | 277672 |
### 3.4 Implementation
#### 3.4.1 Tools Used
We use the Python 3.7.4 programming language for the implementation of ML
algorithms. The two main modules used for the implementation of the models are
sklearn (also referred to as scikit-learn) and Keras. Keras is used to
implement the ANN while sklearn is used to implement the other models. It is
to be noted that, for comparison purposes, we used the default values of
hyperparameters for each classifier. Table 3.4.1 contains names of the modules
used and a brief description of the module.
Table: Modules used and description.

| Module Name | Description |
|---|---|
| numpy | Used to store the dataset in an array |
| pandas | Used to read the dataset CSV file |
| preprocessing | Used to normalize feature data |
| model_selection | Used for splitting the training and test data |
| random | Used to randomize the multi-class dataset |
| metrics | Contains the performance metrics used in testing the models |
| neighbors | Contains the KNN model |
| svm Noble (2006) | Contains the SVM model |
| tree | Contains the DT model |
| naive_bayes | Contains the NB model |
| ensemble | Contains the RF model |
| linear_model | Contains the LR model |
| models | Contains the ANN model |
| layers | Contains ANN layers |
| utils | Contains class weight for the ANN |
#### 3.4.2 Feature Extraction
The dataset contains features that either contain no information or have
information that is irrelevant in helping the model classify the data. The
unwanted features can be removed during the preprocessing stage using the
pandas module. Several features, such as flgs, proto, dir, state, saddr,
daddr, srcid, smac, dmac, soui, doui, sco, record, category, and subcategory,
were removed from the dataset.
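Dropping the unwanted columns can be sketched with pandas; the tiny frame below only mimics a few columns of the real dataset and is an illustrative assumption.

```python
# Sketch of dropping unwanted columns during preprocessing with pandas;
# the tiny frame only mimics a few of the dataset's columns.
import pandas as pd

df = pd.DataFrame({
    "pkts": [10, 20],
    "bytes": [1000, 2000],
    "proto": ["tcp", "udp"],       # text column, removed before training
    "category": ["DoS", "Normal"]  # label text, removed from the features
})

features = df.drop(columns=["proto", "category"])
```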
#### 3.4.3 Feature Scaling
The features in the dataset contain large numbers that vary in size.
Therefore, it is important to normalize the data in the features. This is done
by re-scaling the values of the features to within a defined scale such as
$-$1 to 1 and can be defined as follows:
$x^{\prime}=a+\frac{(x-min(x))(b-a)}{max(x)-min(x)}$ (25)
where $x^{\prime}$ is the normalized value, $x$ is the original value, and $a$
and $b$ are the minimum and maximum values of the target range. This maps
every value to a number between $-$1 and 1. It can be done in Python using
the MinMaxScaler in the preprocessing module.
#### 3.4.4 Multi-Class Dataset
The multi-class dataset is created by collecting all the rows of all the
datasets and then randomizing the rows using the random Python module. The
random module contains the shuffle method, which allows an array, in this case
the rows of the dataset, to be randomized. Due to the large size of the
dataset when using it for testing, only roughly 25% of the dataset is used,
which is 1,500,000 rows.
#### 3.4.5 Training Data
The data used by the model to learn are called the training data. Data can be
split into training and test data with multiple ratios. For this study, a
split of 80:20 was used, with 80% of the data used for training the models;
this common choice is often motivated by the Pareto principle, which states
that 80% of results come from 20% of the effort.
#### 3.4.6 Test Data
Twenty percent of the data is used for testing, which is typically a good
amount of data. However, if the dataset is small, this can result in a low
amount of test data and in the illusion that the model has done extremely well
when in fact it has not had enough data to be properly tested. To split the
dataset into training and test data, train_test_split can be used from the Python module named model_selection. When using this function, the random_state parameter can be used to set the seed of the pseudo-random number generator; in this case, 121 was used.
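The split described above can be approximated with a minimal pure-Python stand-in for sklearn's `model_selection.train_test_split(X, test_size=0.2, random_state=121)`; the shuffle-then-cut behavior shown here is a sketch, not sklearn's exact internal algorithm.

```python
import random

def split_train_test(rows, test_size=0.2, random_state=121):
    """Shuffle with a fixed seed, then cut off the first test_size
    fraction as the test set (a stand-in for train_test_split)."""
    rows = list(rows)
    random.Random(random_state).shuffle(rows)
    n_test = int(len(rows) * test_size)
    return rows[n_test:], rows[:n_test]  # train, test

train, test = split_train_test(range(1000))
print(len(train), len(test))  # 800 200
```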
### 3.5 Results and Discussion
To test several ML algorithms and to identify which are the best and worst for
classifying attack data on IoT networks, this section provides all the results
and analysis based on several performance metrics including binary and multi-
class testing.
#### 3.5.1 Binary Classification
Data Exfiltration: Table 3.5.1 shows the results for data exfiltration data, where RF has the best scores across all performance metrics, including log loss. DT also has perfect scores, but its high log loss indicates that the RF model is more confident in its predictions.

Table: Data exfiltration results.

Algorithm | Accuracy | Precision | Recall | F1 Score | Log Loss | ROC AUC
KNN (Harrison, 2019) | 0.86 | 0.95 | 0.87 | 0.91 | 0.19 | 0.83
SVM (Noble, 2006) | 0.89 | 0.95 | 0.91 | 0.93 | 0.27 | 0.85
DT (Sharma and Kumar, 2016) | 1.0 | 1.0 | 1.0 | 1.0 | 9.99 | 1.0
NB (Rish et al., 2001) | 0.89 | 1.0 | 0.87 | 0.93 | 3.57 | 0.93
RF (Farnaaz and Jabbar, 2016) | 1.0 | 1.0 | 1.0 | 1.0 | 0.059 | 1.0
ANN (Saritas and Yasar, 2019) | 0.82 | 0.82 | 1.0 | 0.90 | 2.57 | 0.5
LR (Ghosh and Mitra, 2015) | 0.89 | 0.95 | 0.91 | 0.93 | 0.22 | 0.85
Table 3.5.1 shows the confusion matrix for RF and reveals two noteworthy points: the amount of data tested is very low, and the classes are not equally represented. It is possible that the small test set is affecting the results; however, all models except DT score relatively poorly compared to RF.

Table: Data exfiltration RF confusion matrix.

Actual \ Predicted | No Attack | Attack
No Attack | 5 | 0
Attack | 0 | 24
Table 3.5.1 shows that increasing the test data to 30% decreases the log loss, indicating that the model performs better with more test data, although only marginally. Once the test portion reaches 40% and beyond, the results begin to worsen, although the model maintains perfect recall with up to a 50% train/test split.

Table: Data exfiltration RF test data amounts.

Test Amount (%) | Accuracy | Precision | Recall | F1 Score | Log Loss | ROC AUC
20 | 1.0 | 1.0 | 1.0 | 1.0 | 0.059 | 1.0
30 | 1.0 | 1.0 | 1.0 | 1.0 | 0.043 | 1.0
40 | 0.98 | 0.97 | 1.0 | 0.98 | 0.042 | 0.94
50 | 0.97 | 0.96 | 1.0 | 0.98 | 0.083 | 0.9
60 | 0.94 | 0.97 | 0.95 | 0.96 | 0.089 | 0.89
Because the class representation is imbalanced, the weighted classes parameter can be used to compensate for the disparity between the classes; the results are shown in Table 3.5.1. This option is not available for the KNN and NB models. The table shows that SVM benefits from weighted classes, with all metrics increasing and log loss decreasing. ANN is unaffected by weighted classes, and LR is marginally affected, gaining perfect precision but losing some recall. DT loses its perfect scores, while RF keeps its perfect scores but with a slightly higher log loss.
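The weighted classes option corresponds to scikit-learn's `class_weight="balanced"` heuristic, which weights each class by `n_samples / (n_classes * count_c)`. A small sketch of that computation, using the class counts from the exfiltration test set above:

```python
from collections import Counter

def balanced_weights(labels):
    """Weights as computed by class_weight="balanced":
    w_c = n_samples / (n_classes * count_c), so rare classes
    contribute more to the loss."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

# 5 "No Attack" rows vs 24 "Attack" rows, as in the exfiltration test set
labels = ["No Attack"] * 5 + ["Attack"] * 24
print(balanced_weights(labels))  # {'No Attack': 2.9, 'Attack': 0.604...}
```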
Table: Data exfiltration weighted classes results.

Algorithm | Accuracy | Precision | Recall | F1 Score | Log Loss | ROC AUC
KNN (Harrison, 2019) | n/a | n/a | n/a | n/a | n/a | n/a
SVM (Noble, 2006) | 0.93 | 1.0 | 0.91 | 0.95 | 0.25 | 0.95
DT (Sharma and Kumar, 2016) | 0.93 | 1.0 | 0.91 | 0.95 | 0.12 | 0.95
NB (Rish et al., 2001) | n/a | n/a | n/a | n/a | n/a | n/a
RF (Farnaaz and Jabbar, 2016) | 1.0 | 1.0 | 1.0 | 1.0 | 0.074 | 1.0
ANN (Saritas and Yasar, 2019) | 0.82 | 0.82 | 1.0 | 0.90 | 2.57 | 0.5
LR (Ghosh and Mitra, 2015) | 0.89 | 1.0 | 0.87 | 0.93 | 0.38 | 0.93
Without using weighted classes, RF is the best model due to its low log loss
when compared to DT. When weighted classes are applied, RF is still the best
model with perfect scores and a low log loss, indicating that the model is
confident in making predictions.
DDoS HTTP: Table 3.5.1 shows the results for DDoS HTTP data. DT has perfect performance scores but a high log loss of 7.25. This dataset does not suffer from a lack of data; rather, it suffers from a large class imbalance, since attack data are far more prevalent, as shown in Table 3.5.1.

Table: DDoS HTTP results.

Algorithm | Accuracy | Precision | Recall | F1 Score | Log Loss | ROC AUC
KNN (Harrison, 2019) | 0.99 | 0.99 | 1.0 | 0.99 | 0.0095 | 0.83
SVM (Noble, 2006) | 0.99 | 0.99 | 1.0 | 0.99 | 0.0093 | 0.77
DT (Sharma and Kumar, 2016) | 1.0 | 1.0 | 1.0 | 1.0 | 7.25 | 1.0
NB (Rish et al., 2001) | 0.99 | 0.99 | 0.99 | 0.99 | 0.063 | 0.66
RF (Farnaaz and Jabbar, 2016) | 0.99 | 0.99 | 1.0 | 0.99 | 0.0021 | 0.88
ANN (Saritas and Yasar, 2019) | 0.99 | 0.99 | 1.0 | 0.99 | 0.044 | 0.5
LR (Ghosh and Mitra, 2015) | 0.99 | 0.99 | 1.0 | 0.99 | 0.0069 | 0.77
Table: DDoS HTTP DT confusion matrix.

Actual \ Predicted | No Attack | Attack
No Attack | 9 | 0
Attack | 0 | 3950
This confusion matrix shows a large disparity in the data with a ratio of
3:1319 in favor of attack data. A large disparity in the dataset can cause the
log loss to be affected, as log loss is based on probability, and, because the
data are more likely to be attack data, this can result in a skewed log loss.
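The skew described above can be made concrete by computing the binary cross-entropy directly: on a test set dominated by attack rows, even a classifier that always outputs a high attack probability scores a tiny log loss. The row counts below are taken from the DDoS HTTP confusion matrix; the 0.99 probability is an illustrative assumption.

```python
import math

def log_loss(y_true, p_attack, eps=1e-15):
    """Binary cross-entropy: -mean(y*log(p) + (1-y)*log(1-p)),
    with probabilities clipped away from 0 and 1."""
    total = 0.0
    for y, p in zip(y_true, p_attack):
        p = min(max(p, eps), 1 - eps)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

# 3950 attack rows vs 9 normal rows: always predicting "attack" with
# 99% confidence still yields a very small average loss
y = [1] * 3950 + [0] * 9
p = [0.99] * len(y)
print(round(log_loss(y, p), 4))  # 0.0205
```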
Table 3.5.1 shows the results of weighted classes on the DDoS HTTP data. With weighted classes, both SVM and LR have a sizeable decrease in performance across all metrics except log loss, which decreased for both, and ROC AUC, which increased for both. ANN is unaffected by the weighted classes and retains its perfect recall, whereas RF loses its perfect recall. DT loses its perfect scores but has a large decrease in its log loss.
Table: DDoS HTTP weighted classes results.

Algorithm | Accuracy | Precision | Recall | F1 Score | Log Loss | ROC AUC
KNN (Harrison, 2019) | n/a | n/a | n/a | n/a | n/a | n/a
SVM (Noble, 2006) | 0.89 | 0.99 | 0.89 | 0.94 | 0.013 | 0.83
DT (Sharma and Kumar, 2016) | 0.99 | 0.99 | 0.99 | 0.99 | 0.018 | 0.88
NB (Rish et al., 2001) | n/a | n/a | n/a | n/a | n/a | n/a
RF (Farnaaz and Jabbar, 2016) | 0.99 | 0.99 | 0.99 | 0.99 | 0.0047 | 0.88
ANN (Saritas and Yasar, 2019) | 0.99 | 0.99 | 1.0 | 0.99 | 0.044 | 0.5
LR (Ghosh and Mitra, 2015) | 0.91 | 0.99 | 0.91 | 0.95 | 0.15 | 0.90
Without using weighted classes, DT is the best model due to the perfect
scores, although the high log loss is a factor to consider. RF would be the
second best as it has perfect recall as well as the lowest log loss and the
highest ROC AUC. When weighted classes are applied, ANN is the best model as
it has perfect recall and a low log loss.
DDoS TCP: Table 3.5.1 shows the results for DDoS TCP data. DT and RF both have perfect scores except for log loss, which is high for both. Table 3.5.1 shows the confusion matrix for RF, and once again the matrix shows a very large disparity in the represented classes.

Table: DDoS TCP results.

Algorithm | Accuracy | Precision | Recall | F1 Score | Log Loss | ROC AUC
KNN (Harrison, 2019) | 0.99 | 0.99 | 1.0 | 0.99 | 1.76 | 0.83
SVM (Noble, 2006) | 0.99 | 1.0 | 0.99 | 0.99 | 5.82 | 0.83
DT (Sharma and Kumar, 2016) | 1.0 | 1.0 | 1.0 | 1.0 | 9.99 | 1.0
NB (Rish et al., 2001) | 0.99 | 1.0 | 0.99 | 0.99 | 0.029 | 0.99
RF (Farnaaz and Jabbar, 2016) | 1.0 | 1.0 | 1.0 | 1.0 | 2.55 | 1.0
ANN (Saritas and Yasar, 2019) | 0.99 | 0.99 | 1.0 | 0.99 | 4.75 | 0.5
LR (Ghosh and Mitra, 2015) | 0.99 | 0.99 | 1.0 | 0.99 | 0.00010 | 0.58
Table: DDoS TCP RF confusion matrix.

Actual \ Predicted | No Attack | Attack
No Attack | 6 | 0
Attack | 0 | 209709
Table 3.5.1 shows the results for DDoS TCP data with weighted classes enabled. With weighted classes, SVM has lost its perfect precision but lowered its log loss significantly. DT and ANN are unaffected by the weighted classes, while RF retains its perfect scores and lowers its log loss slightly. LR has lost its perfect recall but increased its ROC AUC, while its log loss has increased.

Table: DDoS TCP weighted classes results.

Algorithm | Accuracy | Precision | Recall | F1 Score | Log Loss | ROC AUC
KNN (Harrison, 2019) | n/a | n/a | n/a | n/a | n/a | n/a
SVM (Noble, 2006) | 0.99 | 0.99 | 0.99 | 0.99 | 0.00040 | 0.83
DT (Sharma and Kumar, 2016) | 1.0 | 1.0 | 1.0 | 1.0 | 9.99 | 1.0
NB (Rish et al., 2001) | n/a | n/a | n/a | n/a | n/a | n/a
RF (Farnaaz and Jabbar, 2016) | 1.0 | 1.0 | 1.0 | 1.0 | 1.33 | 1.0
ANN (Saritas and Yasar, 2019) | 0.99 | 0.99 | 1.0 | 0.99 | 4.75 | 0.5
LR (Ghosh and Mitra, 2015) | 0.99 | 0.99 | 0.99 | 0.99 | 0.025 | 0.91
Both with and without weighted classes, RF is the best model as it has perfect scores. With weighted classes, its log loss is lowered but is still quite high compared to LR, which has a very low log loss.
DDoS UDP: Table 3.5.1 shows the results for DDoS UDP data, where both KNN and DT have perfect scores, but KNN is the better model as it has a lower log loss. Although KNN's log loss is still high, this is the case for all the models apart from NB. Table 3.5.1 shows the confusion matrix for KNN, which shows the disparity in the class representation.

Table: DDoS UDP results.

Algorithm | Accuracy | Precision | Recall | F1 Score | Log Loss | ROC AUC
KNN (Harrison, 2019) | 1.0 | 1.0 | 1.0 | 1.0 | 4.56 | 1.0
SVM (Noble, 2006) | 0.99 | 0.99 | 1.0 | 0.99 | 8.93 | 0.92
DT (Sharma and Kumar, 2016) | 1.0 | 1.0 | 1.0 | 1.0 | 9.99 | 1.0
NB (Rish et al., 2001) | 0.99 | 1.0 | 0.99 | 0.99 | 0.00098 | 0.99
RF (Farnaaz and Jabbar, 2016) | 0.99 | 0.99 | 1.0 | 0.99 | 5.71 | 0.92
ANN (Saritas and Yasar, 2019) | 0.99 | 0.99 | 1.0 | 0.99 | 5.30 | 0.5
LR (Ghosh and Mitra, 2015) | 0.99 | 0.99 | 1.0 | 0.99 | 7.77 | 0.78
Table: DDoS UDP KNN confusion matrix.

Actual \ Predicted | No Attack | Attack
No Attack | 7 | 0
Attack | 0 | 209708
Table 3.5.1 shows the results for DDoS UDP data with weighted classes enabled. SVM has gained perfect scores and lowered its log loss, while DT has lost its perfect scores but lowered its log loss substantially. RF has gained perfect scores and lowered its log loss, while ANN is unaffected. LR has lost perfect recall but gained perfect precision, lowered its log loss, and increased its ROC AUC.

Table: DDoS UDP weighted classes results.

Algorithm | Accuracy | Precision | Recall | F1 Score | Log Loss | ROC AUC
KNN (Harrison, 2019) | n/a | n/a | n/a | n/a | n/a | n/a
SVM (Noble, 2006) | 1.0 | 1.0 | 1.0 | 1.0 | 2.84 | 1.0
DT (Sharma and Kumar, 2016) | 0.99 | 1.0 | 0.99 | 0.99 | 0.000011 | 0.99
NB (Rish et al., 2001) | n/a | n/a | n/a | n/a | n/a | n/a
RF (Farnaaz and Jabbar, 2016) | 1.0 | 1.0 | 1.0 | 1.0 | 0.0020 | 1.0
ANN (Saritas and Yasar, 2019) | 0.99 | 0.99 | 1.0 | 0.99 | 5.30 | 0.5
LR (Ghosh and Mitra, 2015) | 0.99 | 1.0 | 0.99 | 0.99 | 0.00028 | 0.99
Without weighted classes, KNN is the best model as it has perfect scores but
the log loss is high. NB would be second best as it has perfect precision and
a low log loss. With weighted classes, RF is the best model as it has perfect
scores and a low log loss.
Key logging: Table 3.5.1 shows the results for key logging data. DT is the best model, with the best log loss and ROC AUC scores combined with perfect precision and high scores on the other metrics.

Table: Key logging results.

Algorithm | Accuracy | Precision | Recall | F1 Score | Log Loss | ROC AUC
KNN (Harrison, 2019) | 0.98 | 0.98 | 1.0 | 0.99 | 0.33 | 0.93
SVM (Noble, 2006) | 0.96 | 0.96 | 1.0 | 0.98 | 0.16 | 0.81
DT (Sharma and Kumar, 2016) | 0.99 | 1.0 | 0.99 | 0.99 | 0.0085 | 0.99
NB (Rish et al., 2001) | 0.91 | 0.92 | 0.98 | 0.95 | 2.64 | 0.58
RF (Farnaaz and Jabbar, 2016) | 0.99 | 0.99 | 1.0 | 0.99 | 0.022 | 0.96
ANN (Saritas and Yasar, 2019) | 0.91 | 0.91 | 1.0 | 0.95 | 1.58 | 0.5
LR (Ghosh and Mitra, 2015) | 0.96 | 0.96 | 1.0 | 0.98 | 0.16 | 0.79

Table 3.5.1 shows the confusion matrix for DT, where it is observable that the dataset contains little data and the classes are imbalanced.
Table: Key logging DT confusion matrix.

Actual \ Predicted | No Attack | Attack
No Attack | 29 | 0
Attack | 2 | 296
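The table metrics can be recovered directly from a confusion matrix like the one above. A minimal sketch, using the DT key-logging counts (TN=29, FP=0, FN=2, TP=296) with "Attack" as the positive class:

```python
def metrics_from_confusion(tn, fp, fn, tp):
    """Precision, recall, and accuracy from a 2x2 confusion matrix,
    treating "Attack" as the positive class."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tn + fp + fn + tp)
    return precision, recall, accuracy

# Key logging DT matrix: TN=29, FP=0, FN=2, TP=296
p, r, a = metrics_from_confusion(tn=29, fp=0, fn=2, tp=296)
print(round(p, 2), round(r, 2), round(a, 2))  # 1.0 0.99 0.99
```

These rounded values match the DT row of the key logging results table.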
Just as with data exfiltration, the amount of test data can be increased to observe the effect on the scores of the DT model. Table 3.5.1 shows the results of increasing the test data for key logging data. Increasing the test data to 30% gives the model perfect recall instead of perfect precision. Once the test data are increased to 50%, the model no longer has perfect recall or precision. Based on these changes, it is observable that the low amount of data has a significant impact on the model's results.

Table: Key logging DT test data amounts.

Test Amount (%) | Accuracy | Precision | Recall | F1 Score | Log Loss | ROC AUC
20 | 0.99 | 1.0 | 0.99 | 0.99 | 0.0085 | 0.99
30 | 0.99 | 0.99 | 1.0 | 0.99 | 0.080 | 0.96
40 | 0.99 | 0.98 | 1.0 | 0.99 | 0.11 | 0.95
50 | 0.99 | 0.99 | 0.99 | 0.99 | 0.13 | 0.95
Table 3.5.1 shows the results for key logging data with weighted classes enabled. SVM shows an overall decrease in performance, with the model no longer having perfect recall. DT and RF also show a drop in performance, losing their perfect precision and recall, respectively. ANN is unaffected, while LR has a large decrease in recall, leading to the worst performance of all the models.

Table: Key logging weighted classes results.

Algorithm | Accuracy | Precision | Recall | F1 Score | Log Loss | ROC AUC
KNN (Harrison, 2019) | n/a | n/a | n/a | n/a | n/a | n/a
SVM (Noble, 2006) | 0.87 | 0.98 | 0.87 | 0.92 | 0.17 | 0.88
DT (Sharma and Kumar, 2016) | 0.98 | 0.99 | 0.98 | 0.99 | 0.038 | 0.97
NB (Rish et al., 2001) | n/a | n/a | n/a | n/a | n/a | n/a
RF (Farnaaz and Jabbar, 2016) | 0.98 | 0.99 | 0.98 | 0.99 | 0.051 | 0.97
ANN (Saritas and Yasar, 2019) | 0.91 | 0.91 | 1.0 | 0.95 | 1.58 | 0.5
LR (Ghosh and Mitra, 2015) | 0.77 | 0.98 | 0.76 | 0.85 | 0.46 | 0.82
Without weighted classes, DT is the best model, with the lowest log loss, the highest ROC AUC, and perfect precision. With weighted classes, all the models tested had a decrease in performance except for ANN, which was unchanged. Apart from its perfect recall, ANN still has comparatively worse scores than DT and RF. Unless perfect recall is required, DT should be used, as it will correctly classify more data than ANN.
OS Scan: Table 3.5.1 shows the results for OS scan data. All of the models have good scores, with RF, ANN, and LR having perfect recall, indicating that these models made no false negatives. RF has higher precision than LR and ANN, as well as a lower log loss and a higher ROC AUC, which would suggest that RF is the best model. However, inspection of the confusion matrix reveals a large imbalance in the dataset, as shown in Table 3.5.1.

Table: OS Scan results.

Algorithm | Accuracy | Precision | Recall | F1 Score | Log Loss | ROC AUC
KNN (Harrison, 2019) | 0.99 | 0.99 | 0.99 | 0.99 | 0.063 | 0.80
SVM (Noble, 2006) | 0.94 | 0.99 | 0.94 | 0.97 | 0.024 | 0.96
DT (Sharma and Kumar, 2016) | 0.99 | 0.99 | 0.99 | 0.99 | 0.0038 | 0.98
NB (Rish et al., 2001) | 0.98 | 0.98 | 0.99 | 0.99 | 0.54 | 0.51
RF (Farnaaz and Jabbar, 2016) | 0.99 | 0.99 | 1.0 | 0.99 | 0.0061 | 0.83
ANN (Saritas and Yasar, 2019) | 0.98 | 0.98 | 1.0 | 0.99 | 0.16 | 0.5
LR (Ghosh and Mitra, 2015) | 0.98 | 0.98 | 1.0 | 0.99 | 0.036 | 0.50
Table: OS Scan RF confusion matrix.

Actual \ Predicted | No Attack | Attack
No Attack | 608 | 161
Attack | 0 | 71673
Table 3.5.1 shows the results for OS scan data with weighted classes enabled. SVM shows a decrease in accuracy, recall, F1 score, log loss, and ROC AUC; overall, the models' performance decreased. DT shows a higher log loss and a lower ROC AUC, marking lower confidence and a reduced ability to perform well at different thresholds. RF has lost its perfect recall and has an increased log loss and ROC AUC. ANN sees no change in its results, whereas LR has a large performance decrease, with only ROC AUC improving.

Table: OS scan weighted classes results.

Algorithm | Accuracy | Precision | Recall | F1 Score | Log Loss | ROC AUC
KNN (Harrison, 2019) | n/a | n/a | n/a | n/a | n/a | n/a
SVM (Noble, 2006) | 0.89 | 0.99 | 0.89 | 0.94 | 0.013 | 0.83
DT (Sharma and Kumar, 2016) | 0.99 | 0.99 | 0.99 | 0.99 | 0.025 | 0.88
NB (Rish et al., 2001) | n/a | n/a | n/a | n/a | n/a | n/a
RF (Farnaaz and Jabbar, 2016) | 0.99 | 0.99 | 0.99 | 0.99 | 0.030 | 0.99
ANN (Saritas and Yasar, 2019) | 0.98 | 0.98 | 1.0 | 0.99 | 0.16 | 0.5
LR (Ghosh and Mitra, 2015) | 0.90 | 0.99 | 0.90 | 0.95 | 0.19 | 0.94
Without weighted classes, RF is the best of the models with perfect recall, as it has the lowest log loss and the highest ROC AUC among them. With weighted classes, ANN is the only model with perfect recall, but DT and RF both have better accuracy, precision, log loss, and ROC AUC. If avoiding false negatives is the priority, then ANN is the best choice, but DT is better at classifying data in general.
Service Scan: Table 3.5.1 shows the results for service scan data. The models SVM, RF, and ANN have perfect recall but poor ROC AUC scores. DT has the highest ROC AUC and the lowest log loss, but RF could be considered the best due to its perfect recall.

Table: Service Scan results.

Algorithm | Accuracy | Precision | Recall | F1 Score | Log Loss | ROC AUC
KNN (Harrison, 2019) | 0.99 | 0.99 | 0.99 | 0.99 | 0.013 | 0.79
SVM (Noble, 2006) | 0.99 | 0.99 | 1.0 | 0.99 | 0.012 | 0.54
DT (Sharma and Kumar, 2016) | 0.99 | 0.99 | 0.99 | 0.99 | 0.0028 | 0.84
NB (Rish et al., 2001) | 0.99 | 0.99 | 0.99 | 0.99 | 0.26 | 0.58
RF (Farnaaz and Jabbar, 2016) | 0.99 | 0.99 | 1.0 | 0.99 | 0.0039 | 0.54
ANN (Saritas and Yasar, 2019) | 0.99 | 0.99 | 1.0 | 0.99 | 0.029 | 0.5
LR (Ghosh and Mitra, 2015) | 0.99 | 0.99 | 0.99 | 0.99 | 0.0087 | 0.54
Table 3.5.1 shows the confusion matrix for RF, which again reveals the imbalanced data.

Table: Service Scan RF confusion matrix.

Actual \ Predicted | No Attack | Attack
No Attack | 31 | 350
Attack | 0 | 209334
Table 3.5.1 shows the results for service scan data with weighted classes enabled. SVM was not tested due to excessive running times. DT, RF, and LR have increased their ROC AUC, but all their other metrics have been negatively affected. ANN is unaffected, being the only model to keep its perfect recall.

Table: Service scan weighted classes results.

Algorithm | Accuracy | Precision | Recall | F1 Score | Log Loss | ROC AUC
KNN (Harrison, 2019) | n/a | n/a | n/a | n/a | n/a | n/a
SVM (Noble, 2006) | n/a | n/a | n/a | n/a | n/a | n/a
DT (Sharma and Kumar, 2016) | 0.97 | 0.99 | 0.97 | 0.98 | 0.079 | 0.97
NB (Rish et al., 2001) | n/a | n/a | n/a | n/a | n/a | n/a
RF (Farnaaz and Jabbar, 2016) | 0.94 | 0.99 | 0.94 | 0.97 | 0.13 | 0.96
ANN (Saritas and Yasar, 2019) | 0.99 | 0.99 | 1.0 | 0.99 | 0.029 | 0.5
LR (Ghosh and Mitra, 2015) | 0.85 | 0.99 | 0.85 | 0.92 | 0.29 | 0.90
Without weighted classes, of the models with perfect recall, RF is the best as it has the lowest log loss and highest ROC AUC among them; DT has an even lower log loss but does not have perfect recall. With weighted classes, ANN is the best as it is the only model to retain perfect recall, although its ROC AUC is the poorest of all the models.
DoS HTTP: Table 3.5.1 shows the results for DoS HTTP data; DT and RF both have perfect scores and a low log loss, with DT narrowly beating RF.

Table: DoS HTTP results.

Algorithm | Accuracy | Precision | Recall | F1 Score | Log Loss | ROC AUC
KNN (Harrison, 2019) | 0.99 | 0.99 | 1.0 | 0.99 | 0.0063 | 0.90
SVM (Noble, 2006) | 0.99 | 0.99 | 1.0 | 0.99 | 0.0065 | 0.86
DT (Sharma and Kumar, 2016) | 1.0 | 1.0 | 1.0 | 1.0 | 0.00013 | 1.0
NB (Rish et al., 2001) | 0.99 | 0.99 | 0.99 | 0.99 | 0.034 | 0.77
RF (Farnaaz and Jabbar, 2016) | 1.0 | 1.0 | 1.0 | 1.0 | 0.00094 | 1.0
ANN (Saritas and Yasar, 2019) | 0.99 | 0.99 | 1.0 | 0.99 | 0.029 | 0.5
LR (Ghosh and Mitra, 2015) | 0.99 | 0.99 | 1.0 | 0.99 | 0.0044 | 0.81
Table 3.5.1 shows the confusion matrix for RF which showcases the disparity in
the dataset.
Table: DoS HTTP RF confusion matrix.

Actual \ Predicted | No Attack | Attack
No Attack | 11 | 0
Attack | 0 | 5942
Table 3.5.1 shows the results for DoS HTTP data with weighted classes enabled. SVM shows decreased performance in all metrics except ROC AUC. DT and RF have lost their perfect scores and have an increased log loss. ANN is unaffected, whereas LR has seen a decrease in all performance metrics apart from ROC AUC, which has increased.

Table: DoS HTTP weighted classes results.

Algorithm | Accuracy | Precision | Recall | F1 Score | Log Loss | ROC AUC
KNN (Harrison, 2019) | n/a | n/a | n/a | n/a | n/a | n/a
SVM (Noble, 2006) | 0.90 | 0.99 | 0.90 | 0.95 | 0.0067 | 0.90
DT (Sharma and Kumar, 2016) | 0.99 | 0.99 | 0.99 | 0.99 | 0.0098 | 0.95
NB (Rish et al., 2001) | n/a | n/a | n/a | n/a | n/a | n/a
RF (Farnaaz and Jabbar, 2016) | 0.99 | 0.99 | 0.99 | 0.99 | 0.0097 | 0.95
ANN (Saritas and Yasar, 2019) | 0.99 | 0.99 | 1.0 | 0.99 | 0.029 | 0.5
LR (Ghosh and Mitra, 2015) | 0.88 | 0.99 | 0.88 | 0.94 | 0.21 | 0.89
Table: DoS TCP results.

Algorithm | Accuracy | Precision | Recall | F1 Score | Log Loss | ROC AUC
KNN (Harrison, 2019) | 0.99 | 0.99 | 1.0 | 0.99 | 0.00035 | 0.90
SVM (Noble, 2006) | 0.99 | 0.99 | 1.0 | 0.99 | 0.0011 | 0.61
DT (Sharma and Kumar, 2016) | 0.99 | 0.99 | 1.0 | 0.99 | 1.62 | 0.95
NB (Rish et al., 2001) | 0.99 | 0.99 | 0.99 | 0.99 | 0.026 | 0.69
RF (Farnaaz and Jabbar, 2016) | 0.99 | 0.99 | 1.0 | 0.99 | 2.25 | 0.92
ANN (Saritas and Yasar, 2019) | 0.99 | 0.99 | 1.0 | 0.99 | 0.0016 | 0.5
LR (Ghosh and Mitra, 2015) | 0.99 | 0.99 | 1.0 | 0.99 | 0.00066 | 0.61
Table 3.5.1 shows the confusion matrix for DT and shows the imbalance of the
data in the dataset.
Table: DoS TCP DT confusion matrix.

Actual \ Predicted | No Attack | Attack
No Attack | 19 | 2
Attack | 0 | 209694
Without weighted classes, DT is the best model, as it has perfect scores and the lowest log loss. With weighted classes, ANN comes out on top, as it is the only model with perfect recall.
DoS TCP: Table 3.5.1 shows the results for DoS TCP data, where all the models
apart from NB have perfect recall. DT and RF have the best ROC AUC scores, but
both have high log losses when compared to the other models. KNN has the
lowest log loss and a ROC AUC almost as good as RF.
Table 3.5.1 shows the results for DoS TCP data with weighted classes enabled. SVM was not recorded due to excessively long running times. With weighted classes, both DT and RF have lost their perfect recall, but DT has gained perfect precision. Both models have also seen an improvement in log loss and ROC AUC. ANN is unaffected, and LR has had a performance decrease in almost all metrics.

Table: DoS TCP weighted classes results.

Algorithm | Accuracy | Precision | Recall | F1 Score | Log Loss | ROC AUC
KNN (Harrison, 2019) | n/a | n/a | n/a | n/a | n/a | n/a
SVM (Noble, 2006) | n/a | n/a | n/a | n/a | n/a | n/a
DT (Sharma and Kumar, 2016) | 0.99 | 1.0 | 0.99 | 0.99 | 0.018 | 0.99
NB (Rish et al., 2001) | n/a | n/a | n/a | n/a | n/a | n/a
RF (Farnaaz and Jabbar, 2016) | 0.99 | 0.99 | 0.99 | 0.99 | 0.022 | 0.97
ANN (Saritas and Yasar, 2019) | 0.99 | 0.99 | 1.0 | 0.99 | 0.0016 | 0.5
LR (Ghosh and Mitra, 2015) | 0.96 | 0.99 | 0.96 | 0.98 | 0.078 | 0.91
Without weighted classes, KNN could be considered the best model as it has the
lowest log loss and a reasonably high ROC AUC. DT and RF have a higher ROC AUC
but also have a considerably higher log loss than KNN. With weighted classes,
both DT and ANN could be considered the best with DT having perfect precision
and ANN having perfect recall. Both models also have a low log loss, but ANN
has a poorer ROC AUC score.
DoS UDP: Table 3.5.1 shows the results for DoS UDP data. NB is the best model, with perfect precision, a low log loss, and a high ROC AUC, as well as high scores across all categories. All the other models have perfect recall but either a high log loss, a low ROC AUC, or both.

Table: DoS UDP results.

Algorithm | Accuracy | Precision | Recall | F1 Score | Log Loss | ROC AUC
KNN (Harrison, 2019) | 0.99 | 0.99 | 1.0 | 0.99 | 3.28 | 0.75
SVM (Noble, 2006) | 0.99 | 0.99 | 1.0 | 0.99 | 0.00039 | 0.68
DT (Sharma and Kumar, 2016) | 0.99 | 0.99 | 1.0 | 0.99 | 3.41 | 0.87
NB (Rish et al., 2001) | 0.99 | 1.0 | 0.99 | 0.99 | 0.00065 | 0.99
RF (Farnaaz and Jabbar, 2016) | 0.99 | 0.99 | 1.0 | 0.99 | 1.61 | 0.87
ANN (Saritas and Yasar, 2019) | 0.99 | 0.99 | 1.0 | 0.99 | 5.30 | 0.5
LR (Ghosh and Mitra, 2015) | 0.99 | 0.99 | 1.0 | 0.99 | 0.00030 | 0.56
Table 3.5.1 shows the confusion matrix for NB which shows the disparity
between the data in the dataset.
Table: DoS UDP NB confusion matrix.

Actual \ Predicted | No Attack | Attack
No Attack | 8 | 0
Attack | 4 | 209703
Table 3.5.1 shows the results for DoS UDP data with weighted classes enabled. ANN is unaffected and maintains poor log loss and ROC AUC scores. SVM has gained perfect precision but lost perfect recall, with an increase in log loss and ROC AUC. DT has likewise swapped its precision and recall scores, with an increase in both log loss and ROC AUC. RF has lost its perfect recall and increased its log loss and ROC AUC. LR has improved its log loss and ROC AUC and gained perfect precision while losing perfect recall.

Table: DoS UDP weighted classes results.

Algorithm | Accuracy | Precision | Recall | F1 Score | Log Loss | ROC AUC
KNN (Harrison, 2019) | n/a | n/a | n/a | n/a | n/a | n/a
SVM (Noble, 2006) | 0.99 | 1.0 | 0.99 | 0.99 | 0.00053 | 0.99
DT (Sharma and Kumar, 2016) | 0.99 | 1.0 | 0.99 | 0.99 | 5.24 | 0.99
NB (Rish et al., 2001) | n/a | n/a | n/a | n/a | n/a | n/a
RF (Farnaaz and Jabbar, 2016) | 0.99 | 0.99 | 0.99 | 0.99 | 1.34 | 0.93
ANN (Saritas and Yasar, 2019) | 0.99 | 0.99 | 1.0 | 0.99 | 5.30 | 0.5
LR (Ghosh and Mitra, 2015) | 0.99 | 1.0 | 0.99 | 0.99 | 0.00079 | 0.99
Without weighted classes, NB is the best model having perfect precision with a
low log loss and high ROC AUC. With weighted classes, both SVM and LR perform
very well but SVM is the better model as it has the lower log loss of the two
models.
#### 3.5.2 Model Comparison
Table 3.5.2 shows the best model for each dataset, both with and without weighted classes. DT and RF appear most often in the table, with ANN appearing frequently in the weighted classes column. Without weighted classes, RF achieves the best performance; with weighted classes, ANN does. However, using weighted classes generally decreases a model's overall performance.

Table: Model comparison.

Dataset | Best (no weighted classes) | Best (weighted classes)
Data Exfiltration | RF | RF
DDoS HTTP | DT | ANN
DDoS TCP | RF | RF
DDoS UDP | KNN | RF
Key logging | DT | DT
OS Scan | RF | ANN
Service Scan | RF | ANN
DoS HTTP | DT | ANN
DoS TCP | KNN | DT
DoS UDP | NB | SVM
Most occurrences | RF | ANN
#### 3.5.3 Multi-Class Classification
Table 3.5.3 shows the results for multi-class classification. KNN has the best performance metrics, including the lowest log loss and the highest CKS. LR is the worst model, with the lowest metrics including the lowest CKS and a log loss exceeded only by SVM.

Table: Multi-class results.

Algorithm | Accuracy | Precision | Recall | F1 Score | Log Loss | CKS
KNN (Harrison, 2019) | 0.99 | 0.99 | 0.99 | 0.99 | 0.042 | 0.99
SVM (Noble, 2006) | 0.79 | 0.79 | 0.79 | 0.79 | 0.65 | 0.75
DT (Sharma and Kumar, 2016) | 0.96 | 0.96 | 0.96 | 0.96 | 0.11 | 0.95
NB (Rish et al., 2001) | 0.94 | 0.94 | 0.94 | 0.94 | 0.30 | 0.93
RF (Farnaaz and Jabbar, 2016) | 0.95 | 0.95 | 0.95 | 0.95 | 0.30 | 0.94
ANN (Saritas and Yasar, 2019) | 0.97 | 0.97 | 0.97 | 0.97 | 0.066 | 0.97
LR (Ghosh and Mitra, 2015) | 0.74 | 0.74 | 0.74 | 0.74 | 0.63 | 0.68
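The CKS column above is Cohen's kappa score, which corrects raw agreement for the agreement expected by chance from the class marginals (scikit-learn exposes it as `metrics.cohen_kappa_score`). A minimal sketch computing it from a confusion matrix, with an illustrative toy matrix:

```python
def cohens_kappa(confusion):
    """Cohen's kappa score (CKS) from a square confusion matrix:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement from the row/column marginals."""
    n = sum(sum(row) for row in confusion)
    p_o = sum(confusion[i][i] for i in range(len(confusion))) / n
    p_e = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(len(confusion))
    ) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Toy 2-class example: strong but imperfect agreement
m = [[45, 5],
     [5, 45]]
print(round(cohens_kappa(m), 2))  # 0.8
```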
Table 3.5.3 shows the results with weighted classes. KNN and NB cannot use weighted classes, and SVM was not tested because of its excessively long running time. Weighted classes have reduced the performance metrics for all models apart from ANN, which has had a small decrease in log loss, making it the best model with weighted classes.

Table: Multi-class weighted classes results.

Algorithm | Accuracy | Precision | Recall | F1 Score | Log Loss | CKS
KNN (Harrison, 2019) | n/a | n/a | n/a | n/a | n/a | n/a
SVM (Noble, 2006) | n/a | n/a | n/a | n/a | n/a | n/a
DT (Sharma and Kumar, 2016) | 0.92 | 0.92 | 0.92 | 0.92 | 0.46 | 0.90
NB (Rish et al., 2001) | n/a | n/a | n/a | n/a | n/a | n/a
RF (Farnaaz and Jabbar, 2016) | 0.86 | 0.86 | 0.86 | 0.86 | 0.79 | 0.83
ANN (Saritas and Yasar, 2019) | 0.97 | 0.97 | 0.97 | 0.97 | 0.063 | 0.97
LR (Ghosh and Mitra, 2015) | 0.69 | 0.69 | 0.69 | 0.69 | 0.75 | 0.63
Table 3.5.3 shows that KNN performs very well with the multi-class dataset
with all the classes having low amounts of incorrectly classified data. [H]
KNN confusion matrix. Predicted True 0 1 2 3 4 5 6 7 8 9 10 0 172 0 0 2 0 1
107 50 0 2 1 1 1 4 0 0 0 2 0 0 0 0 0 2 0 0 965 4 2 0 0 0 43 1 0 3 0 0 1 56368
3 0 0 0 0 2 3 4 0 0 0 4 55296 0 0 0 0 0 2 5 0 1 0 0 0 80 0 0 0 0 0 6 48 0 0 0
0 0 18294 565 0 0 0 7 12 0 0 0 0 0 395 55357 0 0 0 8 0 0 38 6 3 0 0 0 1427 1 0
9 0 0 2 9 1 0 0 0 4 55218 2 10 0 0 0 4 1 0 0 0 0 1 55496
Table 3.5.3 shows that SVM performs poorly with the multi-class dataset with
data exfiltration (1), DDoS HTTP (2), and key logging (5) data all being
incorrectly classified. These classes are ones featuring low amounts of data,
which could be the reason for the low accuracy.
[H] SVM confusion matrix. Predicted True 0 1 2 3 4 5 6 7 8 9 10 0 10 0 0 2 3 0
183 111 6 15 5 1 0 0 0 0 4 0 0 0 2 1 0 2 0 0 0 296 6 0 0 0 79 630 4 3 0 0 0
19626 17561 0 0 0 55 17778 1357 4 0 0 0 429 54506 0 0 0 0 2 365 5 0 0 0 0 7 0
0 0 72 2 0 6 0 0 0 1 0 0 13056 5779 1 41 29 7 0 0 0 1 0 0 3097 52658 0 8 0 8 0
0 0 512 17 0 0 0 56 885 5 9 0 0 0 1804 442 0 0 0 48 52933 9 10 0 0 0 4021 5139
0 0 0 0 22 46319
Table 3.5.3 shows the confusion matrix for DT multi-class classification. The model performs very well overall; however, it appears to have difficulty correctly classifying the classes that are imbalanced in the dataset. This is evident in Table 3.5.3, with data exfiltration (1), DDoS HTTP (2), and key logging (5) being incorrectly classified. [H] DT confusion
matrix Predicted True 0 1 2 3 4 5 6 7 8 9 10 0 3 0 0 1 20 0 98 227 0 1 4 1 0 0
0 0 3 0 0 0 0 0 0 2 0 0 0 0 1084 0 0 0 0 0 0 3 0 0 0 55648 0 0 0 0 0 0 0 4 0 0
0 0 55460 0 0 0 0 0 0 5 0 0 0 0 84 0 0 0 0 0 0 6 0 0 0 0 0 0 9620 9474 0 0 0 7
0 0 0 0 0 0 0 55504 0 0 0 8 0 0 0 0 0 0 0 0 1597 0 0 9 1 0 0 0 0 0 0 0 0 55784
0 10 0 0 0 0 0 0 0 0 0 0 55387
Table 3.5.3 shows the confusion matrix for DT with weighted classes enabled. Using weighted classes has resulted in an overall decrease in the model's performance, but has improved the correct classification of data for normal traffic (0), data exfiltration (1), and key logging (5). It has also resulted in DoS HTTP having all of its data incorrectly classified.
[H] DT weighted classes confusion matrix. Predicted True 0 1 2 3 4 5 6 7 8 9
10 0 297 2 0 1 3 10 0 21 0 10 3 1 0 6 0 0 3 0 0 0 0 0 0 2 0 0 0 0 1055 0 0 0 0
0 0 3 0 0 0 55716 0 0 0 0 0 0 0 4 0 0 0 0 55324 0 0 0 0 0 0 5 0 0 0 0 0 70 0 0
0 0 0 6 1037 0 0 0 0 0 17965 0 0 0 0 7 1442 0 0 0 0 0 0 54099 0 0 0 8 0 0 0 0
0 0 0 0 0 0 1520 9 0 0 0 0 0 0 0 0 0 55745 0 10 0 0 0 0 0 0 0 0 0 0 55674
Table 3.5.3 shows the confusion matrix for NB multi-class classification, which performs quite well, with no class having all of its data incorrectly classified. The model also handles the class disparity well, with the low-data classes achieving good classification results. [H] NB confusion
matrix. Predicted True 0 1 2 3 4 5 6 7 8 9 10 0 33 1 0 1 0 9 224 62 1 5 0 1 0
7 0 0 0 0 0 0 0 0 0 2 2 0 1008 0 0 0 0 7 0 0 0 3 29 0 0 56296 0 0 0 29 23 0 0
4 2 0 0 0 55298 0 0 2 0 0 0 5 0 0 0 0 0 81 0 0 0 0 0 6 67 0 0 0 0 0 18199 641
0 0 0 7 362 0 0 0 0 0 15754 39648 0 0 0 8 0 0 0 0 0 0 0 0 1475 0 0 9 1 0 0 0 0
0 0 55 0 55180 0 10 1 0 0 0 0 0 0 1 0 0 55499
Table 3.5.3 shows the results for RF multi-class classification, which has good classification accuracy for the classes with plenty of data; the low-data classes have no correctly classified data. [H] RF confusion
matrix. Predicted True 0 1 2 3 4 5 6 7 8 9 10 0 0 0 0 16 2 0 31 100 0 172 14 1
0 0 0 5 2 0 0 0 0 0 0 2 0 0 0 955 20 0 0 0 0 0 0 3 0 0 0 56377 0 0 0 0 0 0 0 4
0 0 0 95 55207 0 0 0 0 0 0 5 0 0 0 77 4 0 0 0 0 0 0 6 0 0 0 1 0 0 9238 9514 0
151 3 7 0 0 0 0 0 0 0 55716 0 47 1 8 0 0 0 1422 0 0 0 0 0 0 53 9 0 0 0 0 0 0 0
0 0 55229 7 10 0 0 0 10 0 0 0 0 0 0 55491
Table 3.5.3 shows the results with weighted classes. Despite the model having fewer correct classifications overall, it performs better on the low-data classes, classifying them more accurately.
[H] RF weighted classes confusion matrix Predicted True 0 1 2 3 4 5 6 7 8 9 10
0 250 1 1 0 1 11 8 32 10 12 9 1 0 7 0 0 0 0 0 0 0 0 0 2 0 0 1013 0 0 2 0 0 0 0
0 3 0 0 231 37544 18602 0 0 0 0 0 0 4 0 0 0 76 55226 0 0 0 0 0 0 5 0 4 1 0 0
76 0 0 0 0 0 6 180 11 0 0 0 0 17748 514 443 11 0 7 333 0 0 0 0 0 16281 38936
131 83 0 8 0 0 0 0 0 0 2 0 1473 0 0 9 3758 0 0 0 0 0 0 0 25 50448 1005 10 1 0
0 0 0 0 0 0 0 0 55500
Table 3.5.3 shows the results for ANN multi-class classification. The model
performs well except for exfiltration (1) and key logging (5), which have
incorrectly classified data. [H] ANN confusion matrix. Predicted True 0 1 2 3
4 5 6 7 8 9 10 0 4 0 1 14 0 0 219 86 4 6 1 1 0 0 0 7 0 0 0 0 0 0 0 2 0 0 981
34 0 0 0 0 0 0 0 3 0 0 27 56349 1 0 0 0 0 0 0 4 0 0 0 4 55298 0 0 0 0 0 0 5 0
0 0 81 0 0 0 0 0 0 0 6 0 0 0 9 0 0 15312 3586 0 0 0 7 0 0 0 0 0 0 2701 53063 0
0 0 8 0 0 0 59 0 0 0 0 1415 0 1 9 0 0 0 55 0 0 19 0 1 55161 0 10 0 0 0 2 0 0 0
0 0 0 55499
The table below shows the results with weighted classes enabled. The model classifies most classes much better, with OS scan (6) and service scan (7) having the most incorrectly classified samples. The model is still unable to correctly classify any samples of normal data (0) or data exfiltration (1).

ANN weighted-classes confusion matrix:

True \ Predicted | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
---|---|---|---|---|---|---|---|---|---|---|---
0 | 0 | 0 | 1 | 1 | 0 | 12 | 228 | 81 | 4 | 6 | 2
1 | 0 | 0 | 0 | 0 | 7 | 0 | 0 | 0 | 0 | 0 | 0
2 | 0 | 0 | 1010 | 0 | 5 | 0 | 0 | 0 | 0 | 0 | 0
3 | 0 | 0 | 0 | 56377 | 0 | 0 | 0 | 0 | 0 | 0 | 0
4 | 1 | 0 | 0 | 2 | 55299 | 0 | 0 | 0 | 0 | 0 | 0
5 | 0 | 0 | 0 | 0 | 0 | 81 | 0 | 0 | 0 | 0 | 0
6 | 0 | 0 | 0 | 0 | 0 | 0 | 16950 | 1956 | 0 | 1 | 0
7 | 0 | 0 | 0 | 0 | 0 | 0 | 4195 | 51569 | 0 | 0 | 0
8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1473 | 0 | 2
9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 | 55232 | 0
10 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 55500

The table below shows the results for LR multi-class classification, which performs poorly overall; the low-data classes again have no correctly classified samples.
LR confusion matrix:

True \ Predicted | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
---|---|---|---|---|---|---|---|---|---|---|---
0 | 0 | 0 | 0 | 11 | 2 | 0 | 200 | 101 | 0 | 9 | 12
1 | 0 | 0 | 0 | 7 | 0 | 0 | 0 | 0 | 0 | 0 | 0
2 | 0 | 0 | 0 | 93 | 6 | 0 | 0 | 0 | 308 | 601 | 7
3 | 0 | 0 | 0 | 16470 | 6901 | 0 | 0 | 0 | 44 | 24490 | 8472
4 | 0 | 0 | 0 | 145 | 49111 | 0 | 0 | 0 | 0 | 71 | 5947
5 | 0 | 0 | 0 | 81 | 0 | 0 | 0 | 0 | 0 | 0 | 0
6 | 2 | 0 | 0 | 0 | 0 | 0 | 14690 | 4186 | 0 | 0 | 29
7 | 0 | 0 | 0 | 0 | 0 | 0 | 3713 | 52049 | 0 | 0 | 2
8 | 0 | 0 | 0 | 139 | 18 | 0 | 9 | 0 | 482 | 819 | 8
9 | 0 | 0 | 0 | 274 | 453 | 0 | 2 | 0 | 435 | 54035 | 37
10 | 0 | 0 | 0 | 10658 | 9195 | 0 | 0 | 0 | 0 | 15 | 35633

The table below shows the results with weighted classes. The overall classification accuracy decreases; however, the model improves at classifying the low-data classes.

LR weighted-classes confusion matrix:

True \ Predicted | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
---|---|---|---|---|---|---|---|---|---|---|---
0 | 291 | 4 | 0 | 0 | 2 | 7 | 10 | 14 | 0 | 6 | 1
1 | 0 | 7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
2 | 0 | 9 | 692 | 0 | 2 | 0 | 0 | 0 | 306 | 2 | 4
3 | 0 | 105 | 432 | 14010 | 7516 | 0 | 0 | 0 | 1004 | 23636 | 9674
4 | 0 | 44 | 50 | 186 | 52423 | 0 | 0 | 0 | 34 | 71 | 2494
5 | 0 | 3 | 0 | 0 | 0 | 78 | 0 | 0 | 0 | 0 | 0
6 | 3570 | 0 | 0 | 0 | 0 | 0 | 13753 | 1582 | 2 | 0 | 0
7 | 4353 | 0 | 0 | 0 | 0 | 0 | 9652 | 41758 | 0 | 0 | 1
8 | 0 | 15 | 710 | 0 | 8 | 0 | 0 | 0 | 736 | 4 | 2
9 | 0 | 0 | 954 | 424 | 453 | 0 | 11 | 0 | 2479 | 50907 | 8
10 | 0 | 177 | 197 | 10008 | 9850 | 0 | 0 | 0 | 216 | 16 | 35037
## 4 Conclusions
In this paper, state-of-the-art ML algorithms are compared in terms of accuracy, precision, recall, F1 score, and log loss on both the weighted and non-weighted Bot-IoT dataset. RF performs best in terms of accuracy and precision on the non-weighted dataset. On the weighted dataset, however, ANN has higher accuracy for binary classification. In multi-class classification, KNN and ANN are highly accurate on the weighted and non-weighted datasets, respectively. From the results, it is evident that, when all attack types are weighted, ANN predicts the type of attack with higher accuracy.
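All of the per-class metrics compared above can be read directly off a confusion matrix. A minimal sketch in plain Python; the 3-class matrix is a hypothetical example, not one of the paper's results:

```python
def per_class_metrics(cm):
    """Per-class precision and recall from a confusion matrix.

    cm[i][j] = number of samples with true class i predicted as class j.
    precision_c = cm[c][c] / column_sum_c, recall_c = cm[c][c] / row_sum_c.
    Returns (precisions, recalls); 0.0 is used when a denominator is zero,
    as happens for classes with no predicted (or no true) samples.
    """
    n = len(cm)
    precisions, recalls = [], []
    for c in range(n):
        col = sum(cm[r][c] for r in range(n))
        row = sum(cm[c])
        precisions.append(cm[c][c] / col if col else 0.0)
        recalls.append(cm[c][c] / row if row else 0.0)
    return precisions, recalls

# Hypothetical 3-class confusion matrix.
cm = [[50, 10, 0],
      [5, 80, 15],
      [0, 20, 20]]
prec, rec = per_class_metrics(cm)
accuracy = sum(cm[i][i] for i in range(3)) / sum(map(sum, cm))
print(prec, rec, accuracy)
```

The zero-denominator guard matters for the tables above: a class that a model never predicts (e.g., normal data under LR) has an empty column, so its precision is conventionally reported as 0.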
In the future, we intend to integrate the models explored in this research into an IDS prototype and test it using diverse data, including a mix of attacks, to validate the multi-class functionality of the models.
Conceptualization, A.C.; Methodology, A.C.; Validation, A.C., J.A., and R.U.;
Formal Analysis, A.C.; Investigation, A.C., J.A., R.U., and B.N.; resources,
A.C.; writing—original draft preparation, A.C. and J.A.; writing—review and
editing, A.C., J.A., R.U., B.N., F.M., S.U.R., F.A., and W.J.B.; supervision,
J.A. and W.J.B.; and funding acquisition, F.A. and J.A. All authors have read
and agreed to the published version of the manuscript.
The authors declare no conflict of interest.
## References
* Dorsemaine et al. (2015) Dorsemaine, B.; Gaulier, J.P.; Wary, J.P.; Kheir, N.; Urien, P. Internet of Things: A Definition & Taxonomy. In Proceedings of the 2015 9th International Conference on Next Generation Mobile Applications, Services and Technologies, Cambridge, UK, 9–11 September 2015, doi:10.1109/ngmast.2015.71.
* Statista (2019) Statista. _IoT: Number of Connected Devices Worldwide 2012–2025_ ; Statista: Hamburg, Germany, 2019.
* Doffman (2019) Doffman, Z. Cyberattacks On IOT Devices Surge 300% In 2019, ’Measured in Billions’, Report Claims. 2019.
* Furbush (2018) Furbush, J. Machine learning: A quick and simple definition. _O’Reilly_ , 3 May 2018.
* Jmj (2018) Jmj, A. 5 Industries that heavily rely on Artificial Intelligence and Machine Learning. 2018.
* Dosal (2018) Dosal, E. 3 Advantages of a Network Threat Analysis. _Compuquip_ , 4 September 2018.
* Groopman (2019) Groopman, J. Understand the top 4 use cases for AI in cybersecurity. 2019.
* Technologies (2019) Technologies, C. Evaluation of Machine Learning Algorithms for Intrusion Detection System. 2019.
* Sommer and Paxson (2010) Sommer, R.; Paxson, V. Outside the Closed World: On Using Machine Learning for Network Intrusion Detection. In Proceedings of the 2010 IEEE Symposium on Security and Privacy, Oakland, CA, USA, 16–19 May 2010; pp. 305–316.
* Foley et al. (2020) Foley, J.; Moradpoor, N.; Ochenyi, H. Employing a Machine Learning Approach to Detect Combined Internet of Things Attacks against Two Objective Functions Using a Novel Dataset. Secur. Commun. Netw. 2020, 2020, doi:10.1155/2020/2804291.
* Alsamiri and Alsubhi (2019) Alsamiri, J.; Alsubhi, K. Internet of Things Cyber Attacks Detection using Machine Learning. Int. J. Adv. Comput. Sci. Appl. 2019, 10, 628–634, doi:10.14569/ijacsa.2019.0101280.
* Hasan et al. (2019) Hasan, M.; Islam, M.M.; Zarif, M.I.I.; Hashem, M. Attack and anomaly detection in IoT sensors in IoT sites using machine learning approaches. Internet Things 2019, 7, 100059, doi:10.1016/j.iot.2019.100059.
* Brownlee (2019) Brownlee, J. K-Nearest Neighbors for Machine Learning. 2019.
* Harrison (2019) Harrison, O. Machine Learning Basics with the K-Nearest Neighbors Algorithm. _Towards Data Science_ , 10 September 2019.
* Liao and Vemuri (2002) Liao, Y.; Vemuri, V. Use of K-Nearest Neighbor classifier for intrusion detection. Comput. Secur. 2002, 21, 439–448, doi:10.1016/s0167-4048(02)00514-x.
* Nikhitha and Jabbar (2019) Nikhitha, M.; Jabbar, M. K Nearest Neighbor Based Model for Intrusion Detection System. Int. J. Recent Technol. Eng. 2019, 8, 2258–2262, doi:10.35940/ijrte.b2458.078219.
* Yao et al. (2006) Yao, J.; Zhao, S.; Fan, L. An Enhanced Support Vector Machine Model for Intrusion Detection. Rough Sets Knowl. Technol. Lect. Notes Comput. Sci. 2006, 538–543, doi:10.1007/11795131_78.
* Cahyo et al. (2016) Cahyo, A.N.; Hidayat, R.; Adhipta, D. Performance comparison of intrusion detection system based anomaly detection using artificial neural network and support vector machine. AIP Conf. Proc. 2016, doi:10.1063/1.4958506.
* Sharma and Kumar (2016) Sharma, H.; Kumar, S. A Survey on Decision Tree Algorithms of Classification in Data Mining. Int. J. Sci. Res. (IJSR) 2016, 5, 2094–2097, doi:10.21275/v5i4.nov162954.
* Stampar and Fertalj (2015) Stampar, M.; Fertalj, K. Artificial intelligence in network intrusion detection. In Proceedings of the 38th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Opatija, Croatia, 25–29 May 2015; pp. 1318–1323, doi:10.1109/MIPRO.2015.7160479.
* Aloqaily et al. (2019) Aloqaily, M.; Otoum, S.; Al Ridhawi, I.; Jararweh, Y. An Intrusion Detection System for Connected Vehicles in Smart Cities. Ad. Hoc. Netw. 2019, doi:10.1016/j.adhoc.2019.02.001.
* Koehrsen (2018) Koehrsen, W. An Implementation and Explanation of the Random Forest in Python. 2018.
* Dubey (2018) Dubey, A. Feature Selection Using Random forest. 2018.
* Farnaaz and Jabbar (2016) Farnaaz, N.; Jabbar, M. Random Forest Modeling for Network Intrusion Detection System. Procedia Comput. Sci. 2016, 89, 213–217, doi:10.1016/j.procs.2016.06.047.
* Saritas and Yasar (2019) Saritas, M.M.; Yasar, A. Performance analysis of ANN and Naive Bayes classification algorithm for data classification. Int. J. Intell. Syst. Appl. Eng. 2019, 7, 88–91.
* Ujjwalkarn (2016) Ujjwalkarn. A Quick Introduction to Neural Networks. _The Data Science Blog_ , 9 August 2016.
* Maind et al. (2014) Maind, S.B.; Wankar, P. Research paper on basic of artificial neural network. Int. J. Recent Innov. Trends Comput. Commun. 2014, 2, 96–100.
* Anitha and Arockiam (2019) Anitha, A.A.; Arockiam, L. ANNIDS: Artificial Neural Network based Intrusion Detection System for Internet of Things. Int. J. Innov. Technol. Explor. Eng. Regul. Issue 2019, 8, 2583–2588, doi:10.35940/ijitee.k1875.0981119.
* Shenfield et al. (2018) Shenfield, A.; Day, D.; Ayesh, A. Intelligent intrusion detection systems using artificial neural networks. ICT Express 2018, 4, 95–99, doi:10.1016/j.icte.2018.04.003.
* Rajput (2018) Rajput, H. MachineX: Simplifying Logistic Regression. _Knoldus Blogs_ , 28 March 2018.
* Ghosh and Mitra (2015) Ghosh, P.; Mitra, R. Proposed GA-BFSS and logistic regression based intrusion detection system. In Proceedings of the 2015 Third International Conference on Computer, Communication, Control and Information Technology (C3IT), Hooghly, India, 7–8 February 2015; pp. 1–6.
* Hussain et al. (2020) Hussain, F.; Hussain, R.; Hassan, S.A.; Hossain, E. Machine Learning in IoT Security: Current Solutions and Future Challenges. IEEE Commun. Surv. Tutorials 2020.
* Saleem et al. (2018) Saleem, J.; Hammoudeh, M.; Raza, U.; Adebisi, B.; Ande, R. IoT standardisation. In Proceedings of the 2nd International Conference on Future Networks and Distributed Systems—ICFNDS 18, Amman, Jordan, 26–27 June 2018, doi:10.1145/3231053.3231103.
* Ullah et al. (2017) Ullah, F.; Edwards, M.; Ramdhany, R.; Chitchyan, R.; Babar, M.A.; Rashid, A. Data exfiltration: A review of external attack vectors and countermeasures. J. Netw. Comput. Appl. 2017, 101, 18–54, doi:10.1016/j.jnca.2017.10.016.
* Carthy et al. (2016) Carthy, S.M.M.; Sinha, A.; Tambe, M.; Manadhata, P. Data Exfiltration Detection and Prevention: Virtually Distributed POMDPs for Practically Safer Networks. Lect. Notes Comput. Sci. Decis. Game Theory Secur. 2016, 39–61, doi:10.1007/978-3-319-47413-7_3.
* Fadolalkarim and Bertino (2019) Fadolalkarim, D.; Bertino, E. A-PANDDE: Advanced Provenance-based ANomaly Detection of Data Exfiltration. Comput. Secur. 2019, 84, 276–287, doi:10.1016/j.cose.2019.03.021.
* Malik and Singh (2015) Malik, M.; Singh, Y. A Review: DoS and DDoS Attacks. Int. J. Comput. Sci. Mob. Comput. 2015, 4, 260–265.
* Mahjabin et al. (2017) Mahjabin, T.; Xiao, Y.; Sun, G.; Jiang, W. A survey of distributed denial-of-service attack, prevention, and mitigation techniques. Int. J. Distrib. Sens. Networks 2017, 13, 2–33, doi:10.1177/1550147717741463.
* Kolias et al. (2017) Kolias, C.; Kambourakis, G.; Stavrou, A.; Voas, J. DDoS in the IoT: Mirai and Other Botnets. Computer 2017, 50, 80–84, doi:10.1109/mc.2017.201.
* Galeano-Brajones et al. (2020) Galeano-Brajones, J.; Carmona-Murillo, J.; Valenzuela-Valdés, J.F.; Luna-Valero, F. Detection and Mitigation of DoS and DDoS Attacks in IoT-Based Stateful SDN: An Experimental Approach. Sensors 2020, 20, 816, doi:10.3390/s20030816.
* Ul et al. (2018) Ul, I.; Bin, M.; Asif, M.; Ullah, R. DoS/DDoS Detection for E-Healthcare in Internet of Things. Int. J. Adv. Comput. Sci. Appl. 2018, 9, 297–300, doi:10.14569/ijacsa.2018.090140.
* Olzak (2008) Olzak, T. Keystroke logging (keylogging). 2008.
* Abukar et al. (2014) Abukar, Y.; Maarof, M.; Hassan, F.; Abshir, M. Survey of Keylogger Technologies. Int. J. Comput. Sci. Telecommun. 2014, 5, 25–31.
* Ortolani et al. (2010) Ortolani, S.; Giuffrida, C.; Crispo, B. Bait Your Hook: A Novel Detection Technique for Keyloggers. Lect. Notes Comput. Sci. Recent Adv. Intrusion Detect. 2010, 198–217, doi:10.1007/978-3-642-15512-3_11.
* Wajahat et al. (2019) Wajahat, A.; Imran, A.; Latif, J.; Nazir, A.; Bilal, A. A Novel Approach of Unprivileged Keylogger Detection. In Proceedings of the 2019 2nd International Conference on Computing, Mathematics and Engineering Technologies (iCoMET), Sukkur, Pakistan, 30–31 January 2019, doi:10.1109/icomet.2019.8673404.
* Yang et al. (2019) Yang, K.; Li, Q.; Sun, L. Towards automatic fingerprinting of IoT devices in the cyberspace. Comput. Netw. 2019, 148, 318–327, doi:10.1016/j.comnet.2018.11.013.
* Aneja et al. (2018) Aneja, S.; Aneja, N.; Islam, M.S. IoT Device Fingerprint using Deep Learning. In Proceedings of the 2018 IEEE International Conference on Internet of Things and Intelligence System (IOTAIS), Bali, Indonesia, 1–3 November 2018, doi:10.1109/iotais.2018.8600824.
* Bhuyan et al. (2011) Bhuyan, M.H.; Bhattacharyya, D.K.; Kalita, J.K. Surveying Port Scans and Their Detection Methodologies. Comput. J. 2011, 54, 1565–1581, doi:10.1093/comjnl/bxr035.
* Markowsky and Markowsky (2015) Markowsky, L.; Markowsky, G. Scanning for vulnerable devices in the Internet of Things. In Proceedings of the 2015 IEEE 8th International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS), Warsaw, Poland, 24–26 September 2015, doi:10.1109/idaacs.2015.7340779.
* Sivanathan et al. (2018) Sivanathan, A.; Gharakheili, H.H.; Sivaraman, V. Can We Classify an IoT Device using TCP Port Scan? In Proceedings of the 2018 IEEE International Conference on Information and Automation for Sustainability (ICIAfS), Colombo, Sri Lanka, 21–22 December 2018, doi:10.1109/iciafs.2018.8913346.
* Shao et al. (2016) Shao, G.L.; Chen, X.S.; Yin, X.Y.; Ye, X.M. A fuzzy detection approach toward different speed port scan attacks based on Dempster-Shafer evidence theory. Secur. Commun. Netw. 2016, 9, 2627–2640, doi:10.1002/sec.1508.
* Lopez-Vizcaino et al. (2019) Lopez-Vizcaino, M.; Novoa, F.J.; Fernandez, D.; Carneiro, V.; Cacheda, F. Early Intrusion Detection for OS Scan Attacks. In Proceedings of the 2019 IEEE 18th International Symposium on Network Computing and Applications (NCA), Cambridge, MA, USA, 26–28 September 2019, doi:10.1109/nca.2019.8935067.
* Rashid et al. (2020) Rashid, M.M.; Kamruzzaman, J.; Hassan, M.M.; Imam, T.; Gordon, S. Cyberattacks Detection in IoT-Based Smart City Applications Using Machine Learning Techniques. Int. J. Environ. Res. Public Health 2020, 17, 9347.
* Soe et al. (2020) Soe, Y.N.; Feng, Y.; Santosa, P.I.; Hartanto, R.; Sakurai, K. Machine Learning-Based IoT-Botnet Attack Detection with Sequential Architecture. Sensors 2020, 20, 4372.
* Ioannou and Vassiliou (2019) Ioannou, C.; Vassiliou, V. Classifying Security Attacks in IoT Networks Using Supervised Learning. In Proceedings of the 2019 15th International Conference on Distributed Computing in Sensor Systems (DCOSS), Santorini Island, Greece, 29–31 May 2019; pp. 652–658, doi:10.1109/DCOSS.2019.00118.
* Koroniotis et al. (2019) Koroniotis, N.; Moustafa, N.; Sitnikova, E.; Turnbull, B. Towards the development of realistic botnet dataset in the Internet of Things for network forensic analytics: Bot-IoT dataset. Future Gener. Comput. Syst. 2019, 100, 779–796, doi:10.1016/j.future.2019.05.041.
* Noble (2006) Noble, W.S. What is a support vector machine? Nat. Biotechnol. 2006, 24, 1565–1567.
* Rish et al. (2001) Rish, I. An empirical study of the naive Bayes classifier. In Proceedings of the IJCAI 2001 Workshop on Empirical Methods in Artificial Intelligence, Seattle, WA, USA, 4 August 2001; Volume 3, pp. 41–46.
# A Causal Convolutional Neural Network for Motion Modeling and Synthesis
Shuaiying Hou
Zhejiang University
China
<EMAIL_ADDRESS>
&Weiwei Xu
Zhejiang University
China
<EMAIL_ADDRESS>
&Jinxiang Chai
Texas A&M University
USA
<EMAIL_ADDRESS>&Congyi Wang
Xmov
China
<EMAIL_ADDRESS>&Wenlin Zhuang
Southeast University
China
<EMAIL_ADDRESS>&Yu Chen
Xmov
China
<EMAIL_ADDRESS>&Hujun Bao
Zhejiang University
China
<EMAIL_ADDRESS>&Yangang Wang
Southeast University
China
<EMAIL_ADDRESS>corresponding author
###### Abstract
We propose a novel deep generative model based on causal convolutions for
multi-subject motion modeling and synthesis, which is inspired by the success
of WaveNet in multi-subject speech synthesis. However, it is nontrivial to
adapt WaveNet to handle high-dimensional and physically constrained motion
data. To this end, we add an encoder and a decoder to the WaveNet to translate
the motion data into features and back to the predicted motions. We also add
1D convolution layers to take skeleton configuration as an input to model
skeleton variations across different subjects. As a result, our network can
scale up well to large-scale motion data sets across multiple subjects and
support various applications, such as random and controllable motion
synthesis, motion denoising, and motion completion, in a unified way. Complex
motions, such as punching, kicking, and kicking while punching, are also well
handled. Moreover, our network can synthesize motions for novel skeletons not
in the training dataset. After fine-tuning the network with a few motion data
of the novel skeleton, it is able to capture the personalized style implied in
the motion and generate high-quality motions for the skeleton. Thus, it has
the potential to be used as a pre-trained network in few-shot learning for
motion modeling and synthesis. Experimental results show that our model can
effectively handle the variation of skeleton configurations, and it runs fast
to synthesize different types of motions on-line. We also perform user studies
to verify that the quality of motions generated by our network is superior to
the motions of state-of-the-art human motion synthesis methods.
Keywords: Deep learning $\cdot$ Temporal convolutional neural network $\cdot$ Motion synthesis and control $\cdot$ Optimization $\cdot$ Motion denoising $\cdot$ Motion completion
Figure 1: Our causal convolutional neural network can scale up well to a
large-scale motion dataset across multiple subjects and support a variety of
applications, as listed in this figure, in a unified way with a single set of
network parameters. Left: Examples of motion capture data. Middle: Training of
our network. Right: Applications.
## 1 Introduction
It is a challenging task to learn a powerful generative motion model from
prerecorded human motion data because human motion is intrinsically governed
by highly nonlinear dynamical systems. An appealing solution for generative
motion models should scale up well to motion datasets across multiple
subjects. In addition, it should be accurate and compact, efficient for
runtime evaluation, and amenable to various forms of applications, such as
motion synthesis with or without control inputs, motion prediction, motion
denoising, and motion completion.
Recent deep learning-based motion synthesis algorithms show great potential in resolving these issues. Deep neural networks with nonlinear activation functions can model nonlinear dynamics well and generate different motions without retaining the motion capture data used for training, which saves memory. Autoregressive models, such as restricted Boltzmann machines (RBMs)
and recurrent neural networks (RNNs) [1, 2, 3], have been applied to motion
synthesis by predicting the possibility of motions in the future. However, to
avoid causal error accumulation in such models, careful training strategies
must be employed. Conditioned on control signals, convolutional neural
networks (CNN), phase-functioned neural network (PFNN), Long-short term memory
(LSTM) networks, and variational autoencoders have been applied to generate
controllable motions to interact with the environment for a specific person
[4, 5, 6, 7]. Despite significant progress in deep learning-based motion
modeling, there is no accurate generative model that can scale up well to
motion datasets across multiple subjects and support different applications in
a unified way.
This paper proposes a novel causal convolutional neural network (CCNet) to
address the aforementioned issues in generative motion modeling and synthesis.
The network architecture is inspired by the success of causal-convolution-
based WaveNet [8] in multi-subject speech synthesis. The causal convolution is
appealing to generative human motion modeling since it can explicitly model
the correlation among a long-range temporal window, which is demonstrated to
be more effective than the hidden states used in RNNs and their variations in
speech synthesis experiments. However, it is a non-trivial task to apply
WaveNet to motion synthesis because motion data is a high dimensional signal,
and one needs to pay attention to foot contact to synthesize plausible
motions. To this end, we adapt the network architecture of the WaveNet by
adding an encoder and a decoder to translate the motion data into features and
back to the predicted motions. Moreover, we add 1D convolution layers to allow
the CCNet to take skeleton configurations as an input, which is critical for
the network to handle the skeleton variations of different subjects. The
output of the CCNet is the probabilistic density function (PDF) of the motion
at the next time step that is conditioned on the motions at previous time
steps, control signals, and the skeleton configuration. Consequently, with a
meticulously-designed training strategy, our CCNet can capture personalized
style variation of different skeletons and effectively support random and
controllable motion synthesis in a unified way using a single set of network
parameters.
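The causal convolution borrowed from WaveNet constrains the output at frame t to depend only on frames at or before t. A minimal plain-Python sketch of a dilated causal 1D convolution; the kernel and input signal here are illustrative, not learned weights from the CCNet:

```python
def causal_conv1d(x, kernel, dilation=1):
    """Dilated causal 1D convolution.

    y[t] = sum_k kernel[k] * x[t - k * dilation], with out-of-range
    inputs treated as zero (implicit left padding). y[t] therefore
    never reads x[t'] for t' > t, which is what makes it causal and
    suitable for autoregressive prediction.
    """
    y = []
    for t in range(len(x)):
        acc = 0.0
        for k, w in enumerate(kernel):
            i = t - k * dilation
            if i >= 0:
                acc += w * x[i]
        y.append(acc)
    return y

x = [1.0, 2.0, 3.0, 4.0]
y = causal_conv1d(x, kernel=[0.5, 0.5], dilation=1)
# y[t] averages x[t] and x[t-1]; y[0] only sees x[0].
print(y)  # [0.5, 1.5, 2.5, 3.5]
```

Increasing the dilation lets each layer reach further back in time without extra parameters, which is how a stack of such layers covers a long temporal window.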
The CCNet possesses the desirable properties of the generative motion model.
It is a compact model of approximately 4.5 MB. Combined with a Gaussian loss that
penalizes the deviation of joint angles, positions, and velocities
simultaneously in training, the drifting or freezing issues that are
frequently encountered in RNN models can be effectively mitigated. Thus, the
CCNet can be applied to random motion synthesis, i.e., synthesizing long-time
motion sequences of different subjects without control signals. This renders
it suitable for motion generation, denoising, and completion applications. The
CCNet can also be applied to the controllable motion synthesis. After training
on the motion capture (mocap) data across multi-subjects, we allow the user to
control the motion synthesis result through various control signals, such as
heading direction, velocity, and motion type. The skeleton configuration is hereafter also treated as a control signal and processed in the same way as the other control signals in the CCNet. Moreover, the CCNet can synthesize motions for novel
skeletons not in the training dataset. After fine-tuning the network with a
few motion data of the novel skeleton, it is able to capture the personalized
style implied in the motion and generate high-quality motions for the
skeleton. Hence, the CCNet has the potential to be used as a pre-trained
network in few-shot learning for motion modeling and synthesis.
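A loss that penalizes deviations in joint values and their velocities at the same time, as described above, can be sketched in a few lines. This is an illustrative simplification (the frame data and weights below are hypothetical), not the paper's exact Gaussian loss:

```python
def motion_loss(pred, target, w_pos=1.0, w_vel=1.0):
    """Weighted sum of squared value errors and squared velocity errors.

    pred, target: lists of frames, each frame a list of joint values
    (angles or positions). Velocities are finite differences between
    consecutive frames; penalizing them discourages the freezing and
    drifting artifacts common in autoregressive motion models.
    """
    def sq_err(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))

    pos = sum(sq_err(p, t) for p, t in zip(pred, target))
    vel = 0.0
    for f in range(1, len(pred)):
        dp = [a - b for a, b in zip(pred[f], pred[f - 1])]
        dt = [a - b for a, b in zip(target[f], target[f - 1])]
        vel += sq_err(dp, dt)
    return w_pos * pos + w_vel * vel

# Hypothetical 3-frame, 2-joint sequences.
pred = [[0.0, 0.0], [1.0, 0.5], [2.0, 1.0]]
target = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]
print(motion_loss(pred, target))
```

The velocity term is what distinguishes this from a plain per-frame reconstruction loss: a prediction that freezes in place matches some frames well but accumulates large velocity error.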
We have built a large-scale motion database of 12 subjects with a variety of
motion types and transitional clips between different types of motions.
Complex motions, such as punching, kicking, and kicking while punching, are
also included. The database has a total of 486,282 frames and will be made
public with our code. Experimental results show that, with such a database and
data augmentation, the CCNet can be efficiently trained, and it can generate
high-quality motions fast (~65fps) in the inference. We show its advantages in
the applications of motion processing and controllable motion synthesis for
different subjects and styles. User studies verify that the quality of motions
generated by our network is superior to the motions of state-of-the-art human
motion modeling and synthesis methods.
## 2 Related Work
Human motion modeling and synthesis is a long-standing problem in the area of
computer graphics. In the following, we focus our review on data-driven human
motion modeling, as well as their applications in human motion synthesis and
processing.
Motion Control and Synthesis. Data-driven motion synthesis is built
successfully on the interpolation of motions in a database. For example, Rose
et al. [9] classified the motion database into verbs and adverbs. Then human
motions are interpolated by the constructed radial basis as well as low-order
polynomials. However, interpolation-based methods can not adapt to the rich
repertoire of human behaviors. Generative statistical models, which describe
human movements by hidden parameters and the associated probability
distributions, became the mainstream in the previous decades. Tanco et al.
[10] learned a statistical model from the data set, and motion is then
interpolated by giving the start and end frames, as well as a few keyframes
with the learned statistical model. Other varieties of different statistical
models, including Hidden Markov Models (HMMs) [11, 12], spatial-causal dynamic
models [13, 14], and low-dimensional statistical models [15, 16] are developed
one after the other for human pose analysis and synthesis. Furthermore, motion
graphs [17, 18] and their various extensions [19, 20] are proposed to extend
the statistical models for representing complex human motions. These directed
graphs also provide benefits for the interactive editing and control for
complex human motions. In [21], a generative motion graph named MotionGraph++ is proposed to process motions at semantic and kinematic levels.
In this paper, we propose an end-to-end deep learning framework for human
motion modeling and synthesis. Our deep-learning model is much more compact than MotionGraph++ and has a much better generalization ability. Our experiments show that it can model not only a
rich set of human actions and their transitions but also delicate motion
variations across multiple subjects, a capability that has not been
demonstrated in previous work.
Methods | Generative Model I | Generative Model II | Scalability | Applications I | Applications II | Applications III | Applications IV | Applications V | Motion Quality
---|---|---|---|---|---|---|---|---|---
MotionGraph [22, 23, 17] | $\times$ | $\times$ | $\times$ | ✓ | ✓ | $\times$ | $\times$ | $\times$ | ✓
MotionGraph++ [21] | o | o | $\times$ | ✓ | ✓ | $\times$ | $\times$ | $\times$ | ✓
ERD-LSTM [2, 6, 3, 24] | ✓ | ✓ | o | ✓ | o | o | ✓ | ✓ | o
DAE-LSTM [25] | ✓ | ✓ | o | o | ✓ | o | o | o | ✓
PFNN [5] | ✓ | ✓ | ✓ | - | ✓ | - | - | - | ✓
CCNet | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓

Table 1: Comparisons between the CCNet and state-of-the-art deep-learning-based models in motion modeling and synthesis. Columns I and II of "Generative Model" are the model size and the motion variation during synthesis, respectively; "Scalability" represents multiple subjects; and Columns I–V of "Applications" are random motion synthesis without control signals, motion synthesis with control signals, motion denoising, motion completion, and motion prediction. ✓ means good, $\times$ means poor, and o means ok, namely between poor and good; - means not implemented. Since PFNN [5] is designed to predict the pose for the next frame using phase values, not the probability distribution of the pose, it is not suitable for random motion synthesis. For fair comparisons, we only extend PFNN to controllable motion synthesis for multiple subjects by adding the skeleton configuration as an additional input. We also extend LSTM networks to support multi-subject motion modeling (see Sec. 6 for details).
Motion Style. Our method has superior performance for generating different
motion styles associated with different people, even when they possess the
same type of human motions. It is noted that motion style and motion type are
clearly differentiated in this paper. For example, a fat man and a thin man
would have different walking styles for the same motion type, i.e., walking.
The critical challenge is how to model the motion style. In the work of [12],
the motion style is parameterized to learn a statistical model, and different
types of motion can be synthesized from the learned model according to the
input style parameters. Arikan et al. [19] proposed categorizing the human
motions into different motion types, such as turning left and turning right,
and then generating motion sequences according to the input motion types
through a dynamic programming search algorithm. In the work of [26], modeling the motion style of a single person was proposed to address the problem of unlabeled heterogeneous motions. However, all the methods mentioned above
only focus on the motion styles for a specific person. None of them can model
the variations of human motion styles across different people. Similar to
[27], our deep generative model can handle personalized style variations
across multiple subjects. However, unlike [27], which can only model
personalized style variations for a particular human motion such as walking or
running, our generative model can scale up well to a rich repertoire of human
motions as well as their transitions. Aberman et al. [28] handled the skeleton
variations by representing skeletons as the static offsets in some arbitrary
initial pose together with the dynamic motions in a tree graph structure.
However, they focused on motion retargeting while we are interested in motion
synthesis.
Deep Learning based Motion Synthesis. In recent years, deep learning has
gained lots of attention in computer vision and computer graphics. Like many
other tasks, such as image segmentation, classification, and object
recognition, human motion synthesis has also benefited from the rapid progress
of deep learning. It provides a remarkable tool for directly learning a
compact, low-dimensional motion space from a dataset without any motion
feature designations. By comparison, traditional successful generative
statistical models for human motion synthesis heavily rely on the human-made
ad-hoc motion features [16, 29, 20]. Holden et al. [30] proposed a
convolutional auto-encoder to learn compact motion representation, termed as
the motion manifolds, for human motion synthesis. Such motion representation
can be utilized to fill in the missing data and perform the motion
interpolation in the manifold space.
In [4], user-friendly high-level parameters for motion control, such as
character motion trajectory and foot contact information, are investigated for
synthesizing the human motions. Phase variable for cyclic motions is
explicitly used as an input to control the weights in the network [5]. In
contrast, the hidden state in LSTM can model the causal dependence of the
motion implicitly. Thus motion synthesis with multi-objective control can be
realized without the phase information [6]. Mixture-of-expert (MoE)
architecture [31, 32, 33] are used to ease the burden of phase labeling for
motions and improve the capacity of the networks for motion synthesis. Ling et
al. [7] leveraged MoE as the decoder of motion VAEs to model the distribution
of possible next poses. Starke et al. [34] added local motion phase feature to
MoE to learn asynchronous movements of each bone and its interaction with
external objects and environments. Reinforcement learning has been widely
applied [35, 36, 37, 38, 39] to train physics-based motion controllers. Peng
et al. [40] adopted a two-level hierarchical control policy with high-level
environment information to make the character be aware of the surroundings.
Won et al. [33] clustered the reference library of motions generated by the
kinematic controller to construct experts and then combined these experts by
deep reinforcement learning. The adversarial training strategy is adopted in
[41, 36] to improve the quality of generated motions. Recently, mapping the
features extracted from music [42] or language [43] to the motion space is
utilized to generate character motions synchronizing with such multi-modal
inputs.
Numerous neural networks have been proposed to address the long-term motion prediction problem, including recurrent neural networks (RNNs) [2, 44, 45, 46], fully
connected networks [47, 48], reinforcement learning [49, 50], graph networks
[51, 52], and generative adversarial networks (GAN) [53]. To better model the
randomness of human motions, Aliakbarian et al. [54] combined the root of
variations with previous pose information to force the network to learn the
motion variations. At the same time, Zhao et al. [55] exploited a Bayesian
Hierarchical Hidden Semi-Markov Model (BH-HSMM) as the generator and discriminator
for adversarial learning. To solve the error accumulation problem in long-
term motion prediction, practical strategies, including adding residual blocks
and introducing sampling in training, are applied to improve RNN [3], and the
auto-condition scheme is adopted in RNN in the work of [56]. QuaterNet [57]
conducts extensive experiments to demonstrate that the quaternion
representation is beneficial to improve the quality of synthesized motions.
Despite significant progress in deep-learning-based motion modeling and
synthesis, constructing a generative model capable of accurately modeling
motion data sets across different subjects remains challenging. The CCNet is
more appealing for human motion modeling because the explicit causal
convolution adopted in CCNet has a larger and more efficient receptive field
than widely used networks such as RNNs or their variations. As shown in our
comparisons (see Sec. 6), in the case of multiple subjects, the CCNet can
capture personalized style variation and produce results superior to
alternative deep learning models in terms of both motion synthesis quality and
motion control accuracy, a capability that has not been demonstrated in
previous work. Finally, as shown in Tab. 1, our model is more flexible and
powerful for motion synthesis and processing. In contrast, traditional motion
graph techniques [22, 23, 17, 21] are hard to scale up to handle large-scale
motion data, and their required memory footprint is usually large. For fair
comparisons, ERD-LSTM models [2, 3, 24, 6], DAE-LSTM models [25] and PFNN [5]
are extended to model multi-subject motion data by taking skeleton
configuration as an input (see Sec. 6 for details).
## 3 Overview
The overall framework of our system is illustrated in Fig. 1. Given the
processed motion data (Sec. 4), we train a CCNet to predict the PDF of the
future pose conditioned on the poses of past frames and optional control
signals; the details of the control signals are discussed in Sec. 4.2. The
designed CCNet has three types of functional blocks, i.e., encoder, separate
residual block, and decoder, and it outputs the mean and variance of the PDF
(Sec. 5.1). A variety of motion synthesis applications, such as motion
denoising, motion completion, and motion control, can be realized by this
unified generative model with a single set of network parameters. We also test
how the network generalizes to novel skeletons not in the training dataset
(Sec. 6).
During training, we use a Gaussian loss, a foot contact loss, and a smoothness
loss to learn the network parameters (Sec. 5.2). Noise is added to the sampled
training motion data so that the trained network is robust to the accumulated
error in motion synthesis and can produce high-quality, non-freezing motions.
The slight foot sliding in the generated motion is removed by an inverse
kinematics (IK) algorithm according to the foot contact label predicted by the
CCNet. To ease interactive control, we also train the
proposed network to output the direction and velocity control signal for the
next frame.
## 4 Data Processing and Representation
We build a human motion database of 12 different subjects using the mocap
technique. Three of the subjects are female, and the rest of them are male.
The database includes 10 types of motions: walking, running, jumping with the
left foot, jumping with the right foot, jumping with both feet, back walking,
zombie-walking, kicking, punching, and kicking while punching. All subjects are
asked to perform the first 7 types of motions, and 5 of them are asked to
perform the last 3 types of complex motions additionally. The motion recording
speed is 120fps, and the recording time for each subject is within 2 hours.
Thus, there are around 80,000 frames of motion data for each subject. During
recording, we ask each subject to perform two types of motions in one motion
sequence to facilitate the learning of transition between different motion
types. Afterwards, we down-sample the recorded motion data to 60fps and obtain
a total of 486,282 frames to be used as our training and validation datasets.
The validation dataset is formed by randomly selecting one motion sequence of
each subject. Moreover, all the motion sequences of subject 7 are removed from
the training dataset and only present in the validation dataset, which is used
to test how our network can handle the skeleton variation after training on
multi-subject motion data. As a result, the validation dataset contains 41
motion sequences and a total of 88,649 frames. The remaining motion data is
used for training.
### 4.1 Motion representation
The character skeleton in the mocap data is modeled as an articulated figure
with rigid links connected by ball-and-socket joints. The motion at each frame
is recorded as the translation and rotation at the root joint and the relative
rotations at other joints. However, such motion representation is a relatively
local feature since most rotations at ball-and-socket joints are relative to
their parent joint. Thus, we add 3D joint positions and the joints’ angular
and linear velocities into the representation to better model the global
influence of joint rotations on rigid links’ positions and orientations.
Overall, the motion information at $n_{th}$ frame is represented as
$x_{n}=\\{x_{n}^{e},x_{n}^{\omega},x_{n}^{p},x_{n}^{v},x_{n}^{f}\\}$, where
$x_{n}^{e}$ denotes the vector of relative joint rotations represented using
exponential coordinates [58], $x_{n}^{\omega}$ the vector of relative angular
velocities of the joints, $x_{n}^{p}$ the 3D joint positions, and $x_{n}^{v}$
denotes the vector of joint linear velocities. The foot contact information at
$n_{th}$ frame is represented as a 2-dimensional binary vector $x_{n}^{f}$.
Before converting the mocap data into our representation, we first align each
recorded motion clip by translating its first frame to the origin of the
global coordinate system on the XOZ plane and setting the root’s rotation
around the global Y-axis to be zero. For the $n_{th}$ frame in the clip, we
first rotate the root orientation represented in the global coordinate system
of frame $n-1$ to a coordinate system whose Y-axis is $\\{0,1,0\\}$. We then
represent the root position and orientation of the $n_{th}$ frame in the rotated
coordinate system of frame $n-1$. However, we still represent the $y$
coordinate of the root joint in the global coordinate system to emphasize this
quantity in network training. The rotations for non-root joints remain the
same with the motion capture data. The linear and angular velocities are
computed by subtracting the corresponding joint positions and rotations in
exponential coordinates at frame $n$ and $n-1$ and representing the difference
vectors in the rotated coordinate system of frame $n-1$. The motion alignment
makes our motion representation invariant to translation and facing
orientation in the plane, which means that no matter where the global root
position and orientation of frame $n-1$ are, the aligned motion representation
remains the same as long as the relative motion is the same.
The foot contact information $x_{n}^{f}$ is used to alleviate the foot sliding
of the generated motions. It can be directly computed from the motion data. We
first compute the distance between the current and the previous frame and
their heights for the left and right toe joints. The height of the joint is
just the y coordinate in our global coordinate system. If the distance and the
height are less than the thresholds $\delta_{d}=5mm$ and $\delta_{y}=80mm$,
respectively, we set the foot contact label to 1, and to 0 otherwise.
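The labeling step above can be sketched as follows; this is a minimal illustration (the function name, the array layout, and the assumption that positions are given in meters are ours):

```python
import numpy as np

def foot_contact_labels(toe_positions, delta_d=0.005, delta_y=0.08):
    """Binary contact labels for one toe joint.

    toe_positions: (N, 3) global positions with y as the height axis,
    assumed to be in meters (so 5 mm -> 0.005, 80 mm -> 0.08).
    """
    # Per-frame displacement; the first frame reuses the first
    # available displacement so the output covers all N frames.
    disp = np.linalg.norm(np.diff(toe_positions, axis=0), axis=1)
    disp = np.concatenate([disp[:1], disp])
    height = toe_positions[:, 1]
    # Contact iff the toe is both nearly static and close to the ground.
    return ((disp < delta_d) & (height < delta_y)).astype(np.int32)
```

Running this once per toe joint yields the 2-dimensional binary vector $x_{n}^{f}$ for each frame.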
### 4.2 Control Signal Representation
For the motion data at $n_{th}$ frame, our system allows the user to input
three types of control signals, which control the motions at future frames.
The control signal is denoted by $c_{n}=\\{c_{n}^{d},c_{n}^{t},c_{n}^{s}\\}$,
where $c_{n}^{d}$ denotes the direction and velocity control, $c_{n}^{t}$ the
motion type control, and $c_{n}^{s}$ the skeleton configuration signal used to
differentiate subjects. The control signals used in training are extracted
from the motion capture data.
Direction and velocity control. The control signal $c_{n}^{d}$ is a
12-dimensional vector formed by sparsely sampling, in the causal domain, the
points on the motion trajectory starting from the $n_{th}$ frame in a motion
clip. Thus, this control signal also implicitly encodes the velocity
information. It is inspired by the trajectory used in [5], but we only use the
future trajectory.
Specifically, starting from the $n_{th}$ frame in the clip, we first extract a
motion sequence of future frames lasting for 1 second, which is 60 frames in
our 60fps motion capture setting. Second, we compute the 3D
positions of root joints for all the extracted frames and then represent them
in the $n_{th}$ frame’s coordinate system. All the relative root positions are
projected onto XOZ plane to obtain a future motion trajectory. Finally, the
trajectory is uniformly down-sampled in the causal domain (every 10 frames) to
6 2D points to form the $c_{n}^{d}$. This procedure is repeated for all the
frames in the motion database.
The direction control signal itself contains the velocity information because
it describes the heading trajectory over the next second of motion. Thus, it
is not necessary to compute a separate velocity control signal for each frame.
When the user needs to control the velocity of the motion, we allow the user
to input the trajectory length for the future 1-second motion. It will be used
to update the 6 points in $c_{n}^{d}$.
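The sampling of $c_{n}^{d}$ can be sketched as below; the exact sample indices (here frames $0,10,...,50$ of the 60-frame window) and the function name are our assumptions:

```python
import numpy as np

def direction_control(future_roots_xz):
    """Form the 12-dimensional c_n^d from a 1-second future trajectory.

    future_roots_xz: (60, 2) future root positions projected onto the
    XOZ plane, already expressed in the n-th frame's coordinate system
    (the alignment of Sec. 4.1 is assumed to be done upstream).
    """
    assert future_roots_xz.shape == (60, 2)
    sampled = future_roots_xz[::10]  # every 10 frames -> 6 2D points
    return sampled.reshape(-1)       # flat 12-dimensional vector
```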
Motion type. Since our database consists of ten types of motions, the motion
type control signal $c_{n}^{t}$ is a 10-dimensional vector using one-hot
coding. We manually label frames in the captured motion to obtain the motion
type signal for training. However, it is difficult to give an exact type label
for those transitional frames between two types of motions. Thus, when the
transition happens, we treat the right-foot-touching-the-ground as the ending
state of the first type of motion and the left-foot-leaving-the-ground as the
starting state of the second type of motion. According to the frame index, we
linearly interpolate between the first and second type label vectors and
assign the interpolation results as the motion type signals of the
transitional frames between the ending and starting states.
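The interpolation of type labels over a transition can be sketched as follows (frame-index conventions and names are ours):

```python
import numpy as np

def interpolate_type_labels(label_a, label_b, start, end, n_frames):
    """Blend one-hot motion type vectors over a transition.

    Frames before `start` (the ending state of the first motion) keep
    label_a, frames from `end` on (the starting state of the second
    motion) get label_b, and the transitional frames in between get a
    linear interpolation according to the frame index.
    """
    label_a, label_b = np.asarray(label_a, float), np.asarray(label_b, float)
    labels = np.zeros((n_frames, len(label_a)))
    labels[:start] = label_a
    labels[end:] = label_b
    for i in range(start, end):
        t = (i - start) / max(end - start, 1)
        labels[i] = (1 - t) * label_a + t * label_b
    return labels
```

Each interpolated row still sums to one, so it remains a valid soft type label.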
Skeleton configuration. Since our database consists of different skeletons,
the skeleton configuration is necessary to help the network to discern the
skeleton variations across different persons. We transform the skeleton’s
chain structure at T-pose into a control signal $c_{n}^{s}$ that consists of
the following components:
$c_{n}^{s}=\\{h_{r},t_{1}^{x},t_{1}^{y},t_{1}^{z},...,t_{m}^{x},t_{m}^{y},t_{m}^{z}\\},$
(1)
where $h_{r}$ is the height of the root joint, and the 3D positions of non-
root joints, i.e.,
$\\{t_{1}^{x},t_{1}^{y},t_{1}^{z},...,t_{m}^{x},t_{m}^{y},t_{m}^{z}\\}$,
are set to be relative to the root. The number of joints $m$ is set to be 27,
which is the number of non-finger joints in our database. The 27 non-finger
joints’ relative 3D positions and the root joint’s height form the
82-dimensional $c_{n}^{s}$. Note that the skeleton configuration is input to the
network all the time, but other control signals are optional in the motion
synthesis.
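Constructing $c_{n}^{s}$ from a T-pose amounts to the following sketch ($1+27\times 3=82$ entries; names are ours):

```python
import numpy as np

def skeleton_signal(root_height, joint_positions_tpose, root_position):
    """Build the 82-dimensional skeleton configuration of Eq. (1).

    joint_positions_tpose: (27, 3) global T-pose positions of the
    non-root joints; root_position: (3,) global root position.
    """
    rel = np.asarray(joint_positions_tpose) - np.asarray(root_position)
    # Root height followed by 27 root-relative 3D offsets: 1 + 81 = 82.
    return np.concatenate([[root_height], rel.reshape(-1)])
```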
Figure 2: The network architecture of the CCNet. It consists of an encoder,
separate residual blocks, and a decoder. The kernel size of Conv1D is set to
1, and the kernel size of CausalConv1D is set to 2 in all the blocks. Given
the input frames, it outputs the Gaussian distribution of a future frame,
direction control signal, and foot contact label.
## 5 CCNet
In this section, we first describe the network structure of CCNet and then
proceed to its training details.
### 5.1 The Network Structure
The network structure of our CCNet $F$ is illustrated in Fig. 2 (detailed
network parameters are reported in the supplementary material). It models the
PDF of the predicted motion for $n_{th}$ frame with the following formula:
$p(x_{n}|X,c_{n-1})=\mathcal{\psi}(X,c_{n-1})=\mathcal{\psi}_{D}\big{(}\mathcal{\psi}_{R}^{0}(\mathcal{\psi}_{E}(X),c_{n-1})+\sum_{i=1}^{19}\mathcal{\psi}_{R}^{i}(\mathcal{\psi}_{R}^{i-1,..,0}(\mathcal{\psi}_{E}(X)),c_{n-1})\big{)},$ (2)
where $X=\\{x_{n-1},...,x_{n-l-1}\\}$ and $c_{n-1}$ are the motion data of the
past $l$ frames and the control signals of the $(n-1)_{th}$ frame. The dropout
layer before the encoder, denoted by $D$, is used to resolve the possible
over-fitting issue, and its drop probability is set to be 0.5.
The encoder $\mathcal{\psi}_{E}$, the separate residual blocks (SRB)
$\mathcal{\psi}_{R}^{i}$, and the decoder $\mathcal{\psi}_{D}$ are the
functional blocks in the network. There are a total of 20 SRBs in our network.
The superscript $\\{i-1,..,0\\}$ in Eq. 2 indicates that the SRBs in the
network are executed recursively, and the $i_{th}$ block takes the output of
the $(i-1)_{th}$ block and the control signals as its inputs. The first separate
residual block $\mathcal{\psi}_{R}^{0}$ takes the output from the encoder and
the control signals as inputs. The output PDF $p(x_{n})$ is set to be the
Gaussian $\mathcal{N}(\hat{\mu}_{n},\hat{\sigma}_{n})$, where its mean
$\hat{\mu}_{n}$ and standard deviation $\hat{\sigma}_{n}$ are the output of
the decoder $\mathcal{\psi}_{D}$. Since the decoder $\mathcal{\psi}_{D}$ might
output a negative standard deviation value after convolution, we compute the
final standard deviation values as $\hat{\sigma}_{n}=e^{\sigma_{n}}$, where
$\sigma_{n}$ is the direct output of the $\mathcal{\psi}_{D}$.
Encoder. The motion representation $X$ of the past $l$ frames is first input
to an encoder $\mathcal{\psi}_{E}$ that maps the data into features. The
encoder is of a simple "Conv1D-ReLU" structure, where the 1D convolution
kernel size is $1$ and ReLU is the activation function. Note that the kernel of size
$1$ makes sure that the feature of the motion at each input frame is
independent.
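In PyTorch, such an encoder is just a kernel-size-1 convolution followed by ReLU; the channel sizes below are illustrative (we reuse the 32-channel feature width mentioned later), not the paper's exact hyperparameters:

```python
import torch
import torch.nn as nn

# Kernel size 1 mixes channels within a frame but never across frames,
# so each frame's feature stays independent of its neighbors.
encoder = nn.Sequential(
    nn.Conv1d(in_channels=96, out_channels=32, kernel_size=1),
    nn.ReLU(),
)

x = torch.randn(1, 96, 120)  # (batch, motion features, past frames)
features = encoder(x)        # (1, 32, 120): one feature per frame
```

Perturbing one input frame changes only that frame's feature, which is exactly the per-frame independence noted above.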
Separate residual blocks. The core component of the CCNet is the set of
separate residual blocks $\mathcal{\psi}_{R}^{i}$. They are similar to the
residual blocks used in WaveNet [8], which use dilated causal convolution to
preserve the temporal ordering of the input motion data. The control signals are also
inputted to the residual blocks through convolution layers with kernel size
$1$. The difference is that we use two separate dilated causal convolution
layers and 1D convolution layers to compute separate features for the gated
activation. This is designed to disentangle the information to increase the
capacity of our network. Moreover, the features from the motion data and
control signals are fused through summation. This enables us to switch the
control signals on/off online during training and inference. The dilated
causal convolution is implemented as in [8]. Specifically, zeros are padded
before the feature of $\mathcal{\psi}_{E}(x_{n-l-1})$ so that the output of
the convolution at a frame $i$ depends only on that frame and the frames
before it. The
padding size can be easily computed as $(k-1)*d$, where $k$ is the kernel size
and $d$ the dilation size. The kernel size of the causal convolution and
dilation size in SRBs are set to be 2. As a result, the causal receptive field
of the CCNet is 41 frames, which can be computed using the following formula:
$F=(k-1)+\sum_{i=0}^{19}(k-1)*2,$ (3)
where $k=2$ is the kernel size of the dilated causal convolutions used in our
CCNet.
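The left-padding scheme and the receptive field computation can be sketched as follows (a single causal layer is shown; layer sizes are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

k, d = 2, 2  # kernel size and dilation, as in the SRBs
conv = nn.Conv1d(32, 32, kernel_size=k, dilation=d)

def causal_conv(x):
    # Pad (k-1)*d zeros on the left (past) side only, so the output at
    # frame i depends on frame i and earlier frames, never on the future.
    x = F.pad(x, ((k - 1) * d, 0))
    return conv(x)

x = torch.randn(1, 32, 41)
y = causal_conv(x)  # same temporal length as the input

# Eq. (3): the first causal layer plus the 20 SRBs give 41 frames.
receptive_field = (k - 1) + sum((k - 1) * 2 for _ in range(20))
```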
Fig. 3 shows the details of how the control signals are input to the separate
residual blocks: each type of control signal is input to its own Conv1D layer,
and the kernel size of Conv1D is 1. The input motion feature (channel number:
32) for $\text{SRB}^{i}$ is the feature $O_{r}^{i-1}$ output by
$\text{SRB}^{i-1}$. The output feature $O_{s}^{i}$ (dimensionality of the
output features: 512) is sent to the decoder.
Decoder. The decoder $\mathcal{\psi}_{D}$ maps the summed features from the
SRBs to the PDF of the predicted motion. It is of a simple "ReLU-Conv1D"
structure, where the convolution kernel size is also set to be $1$.
### 5.2 Training Loss
The training loss consists of four terms, a Gaussian loss $L_{G}$, a motion
smoothness loss $L_{s}$, a foot contact label loss $L_{f}$ and a direction
control loss $L_{d}$. It is formulated as:
$L=L_{G}+\lambda_{1}*L_{s}+\lambda_{2}*L_{f}+\lambda_{3}*L_{d}$ (4)
where the weight $\lambda_{1}$ is set to be $10.0$, $\lambda_{2}$ be $2.0$ and
$\lambda_{3}$ be $1.0$ in all our experiments.
The first term $L_{G}$ is the Gaussian loss. This term follows the Gaussian
mixture loss in [2], but we only use one mode and set the covariance matrix
to be diagonal to reduce the number of parameters. It can be written as:
$L_{G}=-\ln(p(x_{n}|\hat{\mu}_{n},\hat{\sigma}_{n})),$ (5)
where $x_{n}$ is the motion representation extracted at $n_{th}$ frame. Thus,
this term enforces the network to output the values of mean $\hat{\mu}_{n}$
and standard deviation $\hat{\sigma}_{n}$ so that the captured motion data is
of high probability. The binary foot contact label in $x_{n}$ is handled in
$L_{f}$ and thus not included in this term. We add a constraint in our
implementation to ensure the standard deviation $\hat{\sigma}_{n}$ is greater
than 1e-4 by a clipping operation, and we observe that the standard deviation
output by the trained CCNet is usually between 1e-4 and 1e-3. Consequently, in
motion synthesis, we can sample a motion according to the Gaussian
distribution to enrich the variation of the synthesized motion. Note that the
Gaussian function is used to maximize the probability of the motion
representation vector of the ground-truth mocap data during training. Thus,
the joint positions and linear velocities included in this term can help to
model the correlations between the rotational degrees of freedom of different
joints since such quantities are affected by all the parent joints on the
kinematic chain connected to the joints.
Figure 3: The detailed architecture of the separate residual block. The
numbers besides $O_{r}^{i-1}$, $O_{r}^{i}$ and $O_{s}^{i}$ indicate the number
of channels.
The second term is the smoothness term to prevent the sudden change of the
motion among neighboring frames, which is as follows:
$L_{s}=\sum_{n=2}^{N}\|\hat{\mu}_{n-2}+\hat{\mu}_{n}-2\hat{\mu}_{n-1}\|^{2}.$ (6)
The smoothness loss is only optimized for the mean of predicted Gaussian
distributions since the motion generated by the network is usually close to
the mean at each frame. This term is a soft constraint to prevent the sudden
change of accelerations at joints and make the synthesized motion smoother.
The third term is used to train the network to predict whether the foot is in
contact with the supporting plane at $n_{th}$ frame. Specifically, we adopt
the binary cross-entropy (BCE) loss function to compute this term:
$L_{f}=BCE(x_{n}^{f},\hat{x}_{n}^{f}),$ (7)
where $x_{n}^{f}$ is the ground-truth foot contact label from the data, and
$\hat{x}_{n}^{f}$ is the network prediction. The foot contact label is used to
trigger IK algorithms to remove the foot sliding in the synthesized motion.
The last term is used when the direction and velocity controls are switched
on. It can be simply written as:
$L_{d}=\|\hat{c}_{n}^{d}-{c}_{n}^{d}\|^{2}+\|\hat{c}_{n}^{v}-{c}_{n}^{v}\|^{2}$
(8)
where ${c}_{n}^{d}$ and ${c}_{n}^{v}$ are the direction and velocity control
signals computed from the motion data, and $\hat{c}_{n}^{d}$ and
$\hat{c}_{n}^{v}$ are the predicted values. This term is useful in interactive
motion control applications when control signals might be input by the user
occasionally. In this case, the predicted control signal values will be fed
into the network to continue the motion synthesis.
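A sketch of the combined loss of Eq. (4) is given below; the reductions, tensor shapes, and the squared smoothness penalty are our assumptions, and the diagonal Gaussian NLL drops the constant term:

```python
import torch
import torch.nn.functional as F

def training_loss(mu, log_sigma, x, contact_logit, contact_gt,
                  d_pred, d_gt, w_s=10.0, w_f=2.0, w_d=1.0):
    """Total loss L = L_G + 10*L_s + 2*L_f + 1*L_d.

    mu, log_sigma, x: (N, D) per-frame Gaussian parameters and targets,
    with sigma = exp(log_sigma) as in Sec. 5.1; contact_*: foot contact
    prediction (logits) and labels; d_*: direction control signals.
    """
    sigma = torch.exp(log_sigma).clamp(min=1e-4)
    # Diagonal Gaussian negative log-likelihood (Eq. 5, constant dropped).
    l_g = (torch.log(sigma) + 0.5 * ((x - mu) / sigma) ** 2).sum(1).mean()
    # Smoothness (Eq. 6): penalize acceleration of the predicted means.
    accel = mu[:-2] + mu[2:] - 2 * mu[1:-1]
    l_s = (accel ** 2).sum(1).mean()
    # Foot contact BCE (Eq. 7) and direction control loss (Eq. 8).
    l_f = F.binary_cross_entropy_with_logits(contact_logit, contact_gt)
    l_d = F.mse_loss(d_pred, d_gt)
    return l_g + w_s * l_s + w_f * l_f + w_d * l_d
```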
### 5.3 Training Details
We train our CCNet using the RMSProp optimizer [59]. The initial learning rate
is 1e-4 and will be decayed by multiplying it by 0.5 every 500 epochs. The
loss curve usually converges around 1000 epochs. The batch size is set to be
256, and each sample in the batch is a motion sequence of 240 consecutive
frames. There are two steps to generate 240 samples in a batch: 1) randomly
select a motion clip from the database and then the starting frame index in
the clip. 2) Sample 240 frames in the clip repeatedly using a one frame
interval, i.e., the starting frame index, $F_{s+1}$, of the next 240 frames
sequence is equal to $F_{s}+1$. For an input sequence,
$X=\\{x_{0},x_{1},...,x_{n-1}\\}$, the CCNet can produce the output
$Y=\\{y_{1},y_{2},...,y_{n}\\}$ due to the guaranteed causal ordering in all
the dilated causal convolutional operations in CCNet. Thus, we can compute the
training loss for all the output motions at different frames, which helps the
CCNet learn to handle the input motions of different lengths.
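The two-step window sampling above can be sketched as follows (helper names and the random number generator are ours):

```python
import numpy as np

def sample_training_sequences(clips, n_windows, seq_len=240, rng=None):
    """Step 1: pick a random clip and starting index; step 2: take
    overlapping seq_len-frame windows whose starts advance by one frame.

    clips: list of (T_i, D) motion arrays with T_i large enough to hold
    the requested windows.
    """
    if rng is None:
        rng = np.random.default_rng()
    clip = clips[rng.integers(len(clips))]
    start = int(rng.integers(len(clip) - seq_len - n_windows + 2))
    return np.stack([clip[start + j: start + j + seq_len]
                     for j in range(n_windows)])
```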
(a) skeletons of different subjects
(b) meshes of different subjects
Figure 4: Skeletons and meshes in our database. The skeletons of subjects
0$\sim$11 are from left to right, and their heights are 160cm, 170cm, 168cm,
183cm, 172cm, 165cm, 195cm, 170cm, 178cm, 166cm, 155cm, 154cm, respectively.
Subjects 2, 9, and 10 are female.
Data augmentation. We add independent and identically distributed Gaussian
noise to each sampled motion representation vector of the training motion
data to train the network to handle accumulated errors in motion synthesis.
The mean and standard deviation of the noise are set to $0$ and $0.03$,
respectively.
## 6 Experiments
We have implemented our algorithm using Pytorch 1.6 on a desktop PC with
Intel(R) Xeon(R) Gold 5120 CPU, 128G RAM, and one Tesla V100 SXM2 32GB
graphics card. The trained CCNet has 1.16M parameters, resulting in a model of
size ~4.5 MB. The skeleton information is always input to the CCNet in
both random and controllable motion synthesis since it is required to
differentiate between subjects in the experiments. Although the network can
generate high-quality motion, slight foot sliding might still occur. If not
mentioned, the inverse kinematics algorithm is adopted to completely remove
the foot sliding in the generated motion according to the predicted foot
contact label. Besides, we denote the initial frames input to the CCNet to
begin the motion generation as seed frames hereafter.
(a) Random motion synthesis
(b) Noisy motion
(c) Denoised motion
(d) Incomplete motion
(e) Completed motion
Figure 5: Random motion synthesis results. (a) Random motion generation result
for subject 3 (the fourth skeleton shown in Fig. 4). Different colors of the
clothing indicate the different random motions generated by the trained CCNet.
(b) and (c) A motion denoising result for subject 7 (the eighth skeleton shown
in Fig. 4). (d) and (e) A motion completion result for subject 5 (the sixth
skeleton shown in Fig. 4).
Baseline networks: We compare the CCNet to state-of-the-art motion synthesis
networks that are listed below:
* •
ERD-4LR: the encoder-recurrent-decoder network structure in [2]. We implement
the network using 4 LSTM layers as in [6].
* •
DAE-LSTM: the network structure in [25] that uses a dropout autoencoder to
filter the predicted poses output by an LSTM network.
* •
PFNN: the network structure in [5] that adopts a cyclic function to compute
the neural network’s weights by taking the motion phase as an input.
To test these three network structures’ performance in multi-subject motion
synthesis, we add parameters at their first layer to accept as input the
skeleton configuration and the same set of control signals. For clarity, we
refer to the modified networks for random motion synthesis as ERD-4LR-rand and
DAE-LSTM-rand and the modified networks for controllable motion synthesis as
ERD-4LR-cond and DAE-LSTM-cond. Since PFNN is mainly designed for controllable
motion synthesis, not for a generative model, we only compare the CCNet with
PFNN on controllable motion synthesis. We refer to the modified PFNN as PFNN-
cond. Please refer to the supplementary_material.pdf in "other supplementary
materials" for the detailed network parameters of the modified networks.
We separately train these five networks, namely ERD-4LR-rand, DAE-LSTM-rand,
ERD-4LR-cond, DAE-LSTM-cond, and PFNN-cond, and our CCNet, on the multi-
subject motion dataset. For all the networks, the initial learning rate is set
to 1e-4, and it is decayed by multiplying it by 0.5 every 500 epochs. The
batch size is 256, and each sample in the batch is a motion sequence of 240
consecutive frames. We train these networks for 2000 epochs. The rest training
settings for baseline networks remain the same as reported in their papers.
Please refer to [2], [6], [25] and [5] for the detailed settings. Finally, we
choose the best-performing network snapshots that achieve the lowest validation
loss as the final networks used in the comparisons. Specifically, the network
snapshots are selected as follows: CCNet, ERD-4LR-rand, and ERD-4LR-cond:
1510th epoch, the PFNN-cond network: 1045th epoch, DAE-LSTM-rand: 1650th
epoch, and DAE-LSTM-cond: 1120th epoch.
### 6.1 Random motion synthesis
Random motion generation: In this experiment, the skeleton configuration and
seed frames are fed to the CCNet to generate high-quality motions. However, we
do not input control signals, such as direction, velocity, and motion type, to
the CCNet. Also, we use 120 seed frames to facilitate the comparisons since
such length is chosen in ERD-4LR and DAE-LSTM [6, 25]. As illustrated in Fig.
5a, for the fourth skeleton shown in Fig. 4, five motion sequences are
generated by sampling the pose at frame $n$ using the predicted Gaussian
distribution. The sampled pose is fed back to the network to generate future
frames. The random motion generation can be used in motion prediction given a
long motion sequence as an input, which is useful in on-line pose detection in
computer vision or RGBD-based motion capture [60].
We also feed ERD-4LR-rand and DAE-LSTM-rand with the same 120 seed frames and
the skeleton configuration to synthesize random motions for fair comparisons.
As a result, we found that the CCNet, ERD-4LR-rand, and DAE-LSTM-rand can all
synthesize long motion sequences with more than 20,000 frames. The
random motion generated by ERD-4LR-rand is also of good quality but a little
bit less smooth than the motion generated by the CCNet. Comparisons conducted
in the user study (Sec. 6.4) also show that the quality of the random motions
generated by the CCNet is superior.
Motion denoising and completion: To test how the CCNet performs in motion
denoising, we randomly select a motion sequence $X=\\{x_{0},x_{1},..,x_{k}\\}$
of a subject and then add independent and identically distributed Gaussian
noise (mean 0, standard deviation 0.01~0.1) to obtain the noisy motion data
$\hat{X}$. The network takes $\hat{X}$ as an input and outputs the denoised
motion $Y$.
However, the denoised pose is not fed back into the network, which is
different from the random motion generation. Those frames with indices less
than the causal receptive length, 41 frames in our network, are
denoised according to all the frames before them since the CCNet is trained to
handle motions of different lengths. Fig. 5b and 5c illustrate a denoising
result. The standard deviation of the noise for this experiment is set to
0.08. Before denoising, the right hand and right toe’s trajectories are very
jerky, and the foot is underneath the ground at some frames. It can be seen
that these artifacts are significantly reduced in the denoised motion. To
compare the CCNet to baseline networks in the quality of denoised motions, we
add Gaussian noise to the test data with standard deviations 0.03, 0.05, and
0.1, respectively, and then use CCNet, DAE-LSTM-rand, and ERD-4LR-rand to
denoise the noisy motion data. The error between the ground truth motion and
the denoising result is computed as the Euclidean distance between their
motion representation vectors as described in Sec. 4. As shown in Tab. 2, the
error of the denoised motion generated by the CCNet is less than the motions
denoised by DAE-LSTM-rand and ERD-4LR-rand.
Noise STD | ERD-4LR-rand | DAE-LSTM-rand | CCNet
---|---|---|---
0.03 | 0.768±0.357 | 0.687±0.384 | 0.528±0.126
0.05 | 0.768±0.357 | 0.687±0.384 | 0.539±0.125
0.1 | 0.77±0.361 | 0.687±0.384 | 0.584±0.117
Table 2: Motion denoising comparison. Errors are reported as mean±std; STD:
standard deviation of the added noise. IK is disabled in this experiment. The
errors of the motions denoised by the CCNet are less than those of
ERD-4LR-rand and DAE-LSTM-rand.
The procedure of motion completion is similar to motion denoising. In the
experimental results illustrated in Fig. 5d and 5e, we first select a
700-frame motion sequence containing walking and jumping with both feet, and
then set the rotations of the right-leg joints in 30% of the frames to 0. The
CCNet takes the incomplete motion as the input and outputs a completed,
natural-looking motion.
(a) (b)
Figure 6: Trajectory-following results of two subjects. (a) A synthesized
motion transiting from walking to running then to zombie walking for subject
10 (the eleventh skeleton shown in Fig. 4). (b) A synthesized motion
transiting from jumping with the right foot to jumping with the left foot then
to jumping with both feet for subject 11 (the twelfth skeleton shown in Fig.
4). We use different colors to represent different motion types at the
different part of trajectories as follows: red-> walking, magenta->running,
green->jumping with the left foot, cyan->jumping with the right foot, green-
yellow->jumping with both feet, blue->back walking, pink->zombie walking,
orange->kicking, purple->punching, and brown->kicking while punching.
(a)
(b)
Figure 7: Our CCNet can synthesize (a) motions heading along a complex
trajectory and (b) complex motions for subject 5 (the sixth skeleton shown in
Fig. 4).
### 6.2 Controllable motion synthesis
Motion control using user-specified trajectories: Synthesizing different types
of motions along a specified trajectory is a desirable function in motion
planning. We allow users to specify a motion trajectory $J$ on the XOZ plane
with additional velocity and motion type information and then map the
trajectory information into the control signals $c_{n}^{d}$ and $c_{n}^{t}$
supported in our system. Specifically, the trajectory $J$ is represented as
$J=\\{\\{J_{i},t_{i},v_{i}\\},i=1,..,k\\}$, where $J_{i}$ is the $i_{th}$ part
of the whole trajectory represented as an ordered list of densely sampled 2D
points, and the motion type information $t_{i}$ and a scalar velocity value
$v_{i}$ are associated with $J_{i}$. For two adjacent parts of the trajectory with
different motion types, we set up 20 transitional frames with interpolated
motion type control signal (see Sec. 4 for the details of motion type
interpolation).
Suppose we have already synthesized frame $n$, whose root position projected
onto the XOZ plane, denoted by $t_{n}^{p}$, might deviate from the input
trajectory. We first find the closest point $t^{J}$ on $J$, and then
extract the part of trajectory $\hat{J}^{E}$ lasting for one second starting
from $t^{J}$ using the specified velocities. We then linearly interpolate
between $\hat{J}^{E}$ and the line connecting $t_{n}^{p}$ and the endpoints of
the expected trajectory $\hat{J}^{E}$ to obtain a blended trajectory
$\hat{J}^{b}$. The blending weight for the $\hat{J}^{E}$ starts with 0 and
increases towards 1 according to the time parameter. Afterward, 6 2D points
are uniformly sampled in the causal domain from $\hat{J}^{b}$ to be input as
the $c_{n}^{d}$. All the sampled points are represented into the root
orientation at frame $n$ as described in Sec. 4.
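The blending step can be sketched as follows. This is a minimal NumPy sketch with hypothetical names; the exact time parameterization and causal-domain sampling in the paper may differ:

```python
import numpy as np

def blend_trajectory(expected, current_pos, n_samples=6):
    """Blend the expected trajectory (XOZ points sampled over one second)
    with the straight line from the character's current root position to
    the expected trajectory's endpoint. The blend weight for the expected
    trajectory ramps from 0 to 1 over time, then 6 direction-control
    points are uniformly sampled from the blended curve."""
    expected = np.asarray(expected, dtype=float)      # (n, 2) points
    current_pos = np.asarray(current_pos, dtype=float)  # (2,)
    t = np.linspace(0.0, 1.0, len(expected))[:, None]
    # straight line from the current root position to the endpoint
    line = current_pos[None, :] + t * (expected[-1] - current_pos)[None, :]
    blended = (1.0 - t) * line + t * expected
    # uniformly sample points as the direction control signal c_n^d
    idx = np.linspace(0, len(blended) - 1, n_samples).astype(int)
    return blended[idx]
```

By construction the blended curve starts exactly at the character's current root position and ends on the expected trajectory, which is what removes the visible jump when the root has drifted off the input path.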
As shown in Fig. 6, our system can synthesize motions following two user-specified
trajectories, where different trajectory colors indicate different types
of motion. Fig. 7a illustrates that the synthesized motions can closely follow a
trajectory with large curvatures and frequent changes of motion type. In Fig.
7b, we show that the CCNet can generate complex motions, such as the kicking and
punching motions present in our training dataset, when the user specifies
these two motion types along a trajectory.
Figure 8: The user interface for interactive control. The green dots on the
ground represent the direction control signal. IK is disabled.
(a) CCNet
(b) ERD-4LR-cond
(c) PFNN-cond
Figure 9: Comparisons against ERD-4LR-cond and PFNN-cond. The character starts
by jumping with the left foot, then changes to jumping with the right foot
till the end. The total errors between the synthesized trajectories (the
yellow lines) and the input trajectories (the green lines) of ERD-4LR-cond,
PFNN-cond and the CCNet are 177.143cm, 156.604cm and 27.043cm, respectively.
IK is disabled in this experiment.
Interactive control: The CCNet can be easily integrated into interactive
applications, and we demonstrate this capability by developing a demo that
allows the user to control the direction, velocity, and motion type through a
keyboard. Direction and velocity signals are used to generate future motion
trajectory $c_{n}^{d}$ online similar to PFNN [5]. The PyTorch implementation
is exported to C++ through the LibTorch API to ease the implementation of this
demo.
Specifically, to control the motion type, the user can use the number keys
from 1 to 5 to select among 5 motion types: walking, running, jumping with the
left foot, jumping with the right foot, and jumping with both feet. Once a
key (for instance, 2) is pressed, we update the motion type label by
interpolating the new type label with the previous one over 20 frames, so that
the character transitions smoothly from the previous motion type to the new
one. The user can also control the velocity by pressing the up and down keys,
and the heading direction of the character by pressing the left and right
keys. Once the left key is pressed, the trajectory is turned left. This is
achieved by first computing a small offset vector $o_{n}=[1,0]*h*0.015$,
where $h$ is the root’s height. This offset is added to $c_{n}^{d}$
as $o_{n}*w_{i}$, where $w_{i}=i/5,\ i=0,\ldots,5$; that is, the offset is
applied to the 6 points of the predicted control signal $\hat{c}_{n}^{d}$ through
the corresponding $w_{i}$. The distance between the 2D points in the updated
$\hat{c}_{n}^{d}$ is then adjusted according to the user-specified velocity
$v_{u}$. Since $\hat{c}_{n}^{d}$ represents the future motion trajectory
within one second, we can adjust the distance between its 2D points by
multiplying it by the ratio between $v_{u}$ and the current scalar velocity of
the character $v_{cur}$ (computed from the length of the 2D points in $c_{n}^{d}$),
i.e., $v_{u}/v_{cur}$. The velocity is changed from the
current velocity to the user-specified velocity over an interval of 20 frames. Fig.
8 illustrates the user interface used in interactive control.
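The steering-offset and velocity-rescaling rules described above can be sketched as follows. Function names are hypothetical, and we assume the weights run as $w_i = i/5$ for the 6 future points:

```python
import numpy as np

def steer_left(c_d, root_height):
    """Turn the predicted direction signal c_d (6 XOZ points) left by
    adding a small height-scaled offset o_n = [1, 0] * h * 0.015,
    weighted more strongly for points further in the future
    (w_i = i/5 for i = 0..5)."""
    c_d = np.asarray(c_d, dtype=float).copy()
    offset = np.array([1.0, 0.0]) * root_height * 0.015
    for i in range(6):
        c_d[i] += offset * (i / 5.0)
    return c_d

def rescale_velocity(c_d, v_user):
    """Rescale the spacing between the 6 future points so the path length
    per second matches the user-specified velocity v_user (the scale
    factor is v_user / v_cur)."""
    c_d = np.asarray(c_d, dtype=float)
    segs = np.diff(c_d, axis=0)
    v_cur = np.linalg.norm(segs, axis=1).sum()  # current path length per second
    scaled = np.concatenate(
        [c_d[:1], c_d[:1] + np.cumsum(segs * (v_user / v_cur), axis=0)])
    return scaled
```

Because $w_0 = 0$, the nearest control point stays fixed and the turn ramps in gradually, which keeps the steering response smooth.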
Figure 10: Trajectory-following results generated by the CCNet for an unseen skeleton, subject 7_b. The skeleton of subject 7_b is generated by scaling the lower body of subject 7’s skeleton by 0.8. Please see the accompanying video from 2m 44s to 3m 3s for the demo.

Models | Fine-tuning | Relative Pose Difference (mean±std)
---|---|---
CCNet on entire dataset | no fine-tuning | 0.083±0.0925
 | fine-tuning with walking data | 0.0536±0.0365
 | fine-tuning with walking and running data | 0.0483±0.0544
CCNet on dataset1 | no fine-tuning | 0.0861±0.155
 | fine-tuning with walking data | 0.0543±0.0646
 | fine-tuning with walking and running data | 0.0525±0.0594
CCNet on dataset2 | no fine-tuning | 0.0914±0.108
 | fine-tuning with walking data | 0.0598±0.077
 | fine-tuning with walking and running data | 0.059±0.0713
Table 3: The influence of the fine-tuning of CCNet with partial motion data of
an unseen skeleton subject 7. Dataset1 contains motions of subjects 1, 3, 4, 8
and dataset2 contains motions of subjects 0, 5, 6, 11. Fine-tuning with
walking and running mocap data of subject 7 with our entire training dataset
(third row) achieves the lowest relative pose difference. IK is disabled in
this experiment.
Comparisons: We first compare the CCNet with baseline models on how accurate
the generated motion is with respect to the user-specified trajectory. Thus,
we leverage the average distance between the user-specified trajectory and the
root trajectory on the XOZ plane as the criterion. In this accuracy
experiment, with six different trajectories manually specified by users, we
extract the direction control signals and randomly assign motion type
information to trajectory segments. Afterward, we synthesize motions using the
first 120 frames of the 33 locomotion sequences in the validation dataset as
the seed frames for each specified trajectory and get a total of 198
controllable motion synthesis results. The trajectory distance is computed by
summing, over frames, the closest distance between the projected root position
and the target trajectory. The means and standard deviations of the averaged
trajectory distance are as follows: 27.878±8.516 for the CCNet, 158.67±30.94
for PFNN-cond, and 171.973±31.862 for ERD-4LR-cond. Fig. 9 shows an example.
The results of the CCNet are more accurate than those of the baseline models.
The six trajectories are shown in the supplementary material.
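The trajectory-accuracy criterion can be transcribed directly. This sketch averages the per-frame closest distances over the sequence (a hypothetical helper name; the paper sums per frame and then averages over sequences, which differs only by a constant factor per sequence):

```python
import numpy as np

def trajectory_error(root_xz, target_xz):
    """Average, over frames, of the distance from each projected root
    position to its closest point on the densely sampled target
    trajectory. root_xz is (N, 2) and target_xz is (M, 2), both XOZ
    points (e.g. in cm)."""
    root_xz = np.asarray(root_xz, dtype=float)
    target_xz = np.asarray(target_xz, dtype=float)
    # pairwise distances between every root position and target sample
    d = np.linalg.norm(root_xz[:, None, :] - target_xz[None, :, :], axis=-1)
    return d.min(axis=1).mean()
```

Since the target trajectory is densely sampled, the nearest-sample distance is a good approximation of the true point-to-curve distance.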
We also compare the motion quality in the case of controllable motion
synthesis through a user study (see Sec. 6.4 for details). The user study
results also verify that the CCNet can generate controllable motions of better
quality in the setting of multi-subjects.
### 6.3 Generalization to Unseen Skeletons
After training with multi-subject motion data, the CCNet can generate motions
for skeletons not in the training dataset. As illustrated in Figs. 4, 5b, 5c,
and 7a, we have applied the trained CCNet to automatically generate denoised
and controllable motions for the skeleton of subject 7, which is not in the
training dataset. We also test the generalization ability of the CCNet by
applying it to a specially designed skeleton generated by scaling the skeleton of
subject 7. Fig. 10 illustrates that the CCNet can generalize well to the new
skeleton. Since there is no mocap data for it, we utilize a motion retargeting
algorithm [61] to generate 120 seed frames for the new skeleton. We also
use ERD-4LR-cond and PFNN-cond to generate motions for this skeleton. The
results show that the motions generated by both of them contain large sudden
changes between the seed frames and the generated frames, making them inferior
to the motions generated by the CCNet.
Moreover, the CCNet can be used as a pre-trained network in few-shot learning
for motion modeling and synthesis. Given a small amount of motion data of a novel
skeleton, it can learn to capture the personalized style implied in that
skeleton’s motion. Tab. 3 shows that, after fine-tuning the network on the
walking and running motions of subject 7, the relative pose difference,
$rel_{p}$, for all mocap data of this subject in the validation dataset is
significantly reduced. We compute the relative pose difference as
$rel_{p}=\frac{1}{N}\sum_{n=1}^{N}(\|\hat{x}_{n}-x_{n}\|_{2}/\|x_{n}\|_{2})$,
where $N$ is the number of frames, and $\hat{x}_{n}$ and $x_{n}$ are the
motion representation vectors of the motions generated by the fine-tuned CCNet
and the corresponding mocap data, respectively. The ability to generalize to
new skeletons is crucial since it saves the effort of capturing a large amount
of high-quality mocap data for each new skeleton in motion synthesis applications.
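The $rel_{p}$ metric is a direct NumPy transcription (averaging the per-frame relative errors over the $N$ frames), with a hypothetical function name:

```python
import numpy as np

def relative_pose_difference(generated, mocap):
    """rel_p = (1/N) * sum_n ||x_hat_n - x_n||_2 / ||x_n||_2, where each
    row of `generated` / `mocap` is the motion representation vector of
    one frame."""
    generated = np.asarray(generated, dtype=float)
    mocap = np.asarray(mocap, dtype=float)
    num = np.linalg.norm(generated - mocap, axis=1)  # per-frame error norm
    den = np.linalg.norm(mocap, axis=1)              # per-frame reference norm
    return float(np.mean(num / den))
```

Identical sequences give 0, and a uniform 10% scaling of every frame gives exactly 0.1, so the metric reads as a fractional pose error.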
To evaluate how the number of subjects in the dataset influences the
generalization ability of the CCNet, we intentionally put the motions of
subjects 1, 3, 4, and 8 into dataset1 and the motions of the remaining subjects
0, 5, 6, and 11 into dataset2. Tab. 3 lists the statistics of $rel_{p}$ of the
motions generated by the CCNet fine-tuned on dataset1 (CCNet on dataset1) and
dataset2 (CCNet on dataset2). Since the heights of subjects 1, 3, 4, and 8 are
closer to subject 7’s height, the $rel_{p}$ of CCNet on dataset1 is less than
that of dataset2, but still larger than that of the CCNet trained with all
mocap data in the training dataset. Thus, to improve the generalization ability
of the CCNet to new skeletons, it is better to construct a database with more
subjects so that the network can learn to handle skeleton variations.
### 6.4 User Study
(a)
(b)
(c)
Figure 11: Foot-ground penetrations in the motions generated by DAE-LSTM-rand,
ERD-4LR-cond and PFNN-cond. (a) The 611th frame generated by DAE-LSTM-rand for
subject 9 in random motion synthesis. (b) The 450th frame generated by
ERD-4LR-cond for subject 7 in controllable motion synthesis. (c) The 666th
frame generated by PFNN-cond for subject 8 in controllable motion synthesis.
DAE-LSTM-rand, ERD-4LR-cond, and PFNN-cond cannot effectively differentiate the
styles of different skeletons, leading to foot-ground penetrations, as
indicated by the red rectangles. Furthermore, the motions generated by the CCNet
are smoother.
VS | ERD-4LR-rand (RAND) | DAE-LSTM-rand (RAND) | ERD-4LR-cond (CONTR) | DAE-LSTM-cond (CONTR) | PFNN-cond (CONTR)
---|---|---|---|---|---
Baseline (mean, std) | 3.065, 1.39 | 1.75, 0.968 | 2.75, 1.199 | 0, 0 | 2.375, 0.992
CCNet (RAND: mean 6.25, std 1.09; CONTR: mean 6.625, std 1.615) | P-value: 9.1671e-8, t-value: -6.9878 | P-value: 6.1188e-13, t-value: -11.9588 | P-value: 2.7365e-8, t-value: -7.4383 | P-value: 1.7796e-16, t-value: -16.3353 | P-value: 1.0127e-9, t-value: -8.7165
Table 4: T-test of user-study in the cases of random and controllable motion
synthesis (confidence interval=0.95). VS: performing t-test between the
results of the CCNet and all the results of baseline models in the second row.
RAND: random motion synthesis. CONTR: controllable motion synthesis. Mean: the
average number of generated sequences selected by all the participants after
comparing to mocap sequences in the same group. Std: the standard deviation of
the number being selected.
We perform a t-test to verify the hypothesis that the CCNet can generate
motions of better quality than baseline models. To this end, we design the
user study as follows. First, we present 16 participants with all groups of
motion sequences: three groups for random motion synthesis and four groups for
controllable motion synthesis. Each group contains 16 pairs of motion
sequences. One is the mocap sequence, and the other is generated by the CCNet
or one of the baseline models (Please refer to the supplementary_material.pdf
in "other supplementary materials" for the details of motion generation in the
user study). Second, we ask the participants to select which sequence in a
pair is of better motion quality. If the chosen number of CCNet-generated
motion sequences is larger than the chosen number for other baseline models
with statistical significance, we deem that our hypothesis is verified. The
participants include six females and ten males, and all of them have
experience with 3D animation or games. Before the user study, we present a few
mocap sequences to the participants as examples of good-quality motions.
Besides, participants are instructed that if a sequence contains sudden changes
or foot penetrations into the ground plane, it should be regarded as the worse
one. In the end, we obtained 15 valid questionnaires for random motion synthesis
and 16 for controllable motion synthesis.
The t-test results are shown in Tab. 4. The P-values of the CCNet vs. other
baseline models are all less than the selected threshold (0.05). Therefore,
the motions generated by the CCNet are significantly different from the
motions generated by baseline models. According to the average number of
motion sequences selected by the participants (mean in Tab. 4), the average
number of selected motion sequences of CCNet is larger than that of other
baseline models. It verifies that the CCNet can generate better motions than
state-of-the-art baseline models in different scenarios (The ANOVA test result
in the supplementary material also verifies the statistical significance of
the user study). Furthermore, we prepare another five groups of data that contain
pairs of motion sequences generated by the CCNet and each baseline model. As
listed in Tab. 5, the number of CCNet-generated motion sequences selected by the
participants is still larger than that of the sequences generated by the baseline
models. Fig. 11 shows examples of the generated motion sequences used in this
study. Motion jittering and penetrations into the ground plane frequently
occur in the motion sequences generated using DAE-LSTM-rand, ERD-4LR-cond, and
PFNN-cond, which indicates that these models cannot handle skeleton
variations as well as the CCNet. In addition, we observe that DAE-LSTM-cond
fails to generate long-period, controllable motion.
Groups | numbers for CCNet (mean±std)
---|---
ERD-4LR-rand vs. CCNet | 10.94±1.56
DAE-LSTM-rand vs. CCNet | 12.31±2.34
ERD-4LR-cond vs. CCNet | 11.63±2.87
DAE-LSTM-cond vs. CCNet | 16±0.0
PFNN-cond vs. CCNet | 11.445±1.87
Table 5: The average selected number for CCNet-generated motion sequences.
Baseline-X vs. CCNet: a group of 16 pairs of motion sequences generated by a
baseline model and the CCNet. Mean±std: mean and standard deviation of the
number of CCNet-generated motion sequences selected by all the participants.
### 6.5 Evaluation of Network Hyper-Parameters and Training Settings
In this section, we report the experiments conducted to figure out the hyper-
parameters and training settings selected for the CCNet, including causal
receptive length (CRL), numbers of consecutive frames of each sample in a
batch (NCF), the length of seed frames, and the choice of smoothness loss
term. We conduct all the experiments on the same training and validation
datasets to verify our choices. Specifically, the chosen hyper-parameters for our
CCNet in the random and controllable motion synthesis experiments above are as
follows: CRL=41 and NCF=240. They are selected to minimize the loss in Eq. 2
computed on mocap data in the validation set. For better visualization, the
loss curves are plotted using the formula ${\log_{10}(loss+320)}$, where the
loss is computed using Eq. 2. A bias $320$ is added to make $loss+320$
positive since the loss value is usually around $-300$.
Figure 12: Loss curves obtained using different hyper-parameters of the CCNet.
Left: Training. Right: Validation. We modify each hyper-parameter, including
CRL, NCF and with/without skeleton configuration (w/_sk or w/o_sk), and
compute the losses on the training and validation datasets respectively.
Figure 13: Ablation study of smoothness loss term. We remove the smoothness
loss term and evaluate the corresponding re-trained model. Left: Training.
Right: Validation.
Causal receptive length: We conduct experiments to choose the causal receptive
length among three choices: 31 (with the dilations of the SRBs being repeatedly
1, 2), 41 (with the dilations of the SRBs all being 2), and 46 (with the
dilations of the SRBs being repeatedly 1, 2, 4). The number of SRBs is fixed at
20, and we keep all the other settings the same as described in Sec. 5.1. When
the causal receptive length is 31 or 46, the network converges to a higher loss
after training on the training dataset, as shown by the red dashed lines and
green dash-dot lines in Fig. 12. We choose a CRL of 41, which leads to the
lowest loss in this experiment.
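The three CRL values follow directly from the dilation schedules, assuming each SRB contains a single causal convolution with kernel size 2 (an assumption consistent with WaveNet-style blocks, not stated explicitly here): each layer with dilation $d$ extends the receptive length by $(k-1)\,d$ frames beyond the initial frame. A quick check:

```python
def causal_receptive_length(dilations, kernel_size=2):
    """Receptive length of stacked causal convolutions: one initial frame
    plus (kernel_size - 1) * d extra frames per layer with dilation d."""
    return 1 + sum((kernel_size - 1) * d for d in dilations)

# 20 SRBs with the three dilation schedules discussed above
print(causal_receptive_length([1, 2] * 10))            # 31
print(causal_receptive_length([2] * 20))               # 41
print(causal_receptive_length(([1, 2, 4] * 7)[:20]))   # 46
```

This matches all three reported CRL values, which supports the kernel-size-2 reading of the SRB design.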
NCF in a batch: We also test the NCF of each sample in a batch, which is set
to 60, 120, and 240, respectively. These numbers correspond to sequences of 1
second, 2 seconds, and 4 seconds. This can easily be achieved by changing the
number of frames in a batch while keeping the other parameters the same. For
convenience of comparison, we keep the number of consecutive frames of each
sample in a batch fixed at 240 when computing the loss on the validation
dataset. The cyan dashed lines in Fig. 12 illustrate that both the training and
validation losses exploded before converging when NCF is 60. The magenta dashed
lines in Fig. 12 show that the network converges to a higher loss in the case
of NCF=120, compared to NCF=240. We hypothesize that 60 and 120 consecutive
frames result in inadequate training data in a batch and too noisy a gradient
when training the network. Thus, we set NCF to 240 frames in training.
The importance of the skeleton configuration and the smoothness loss term: We
evaluate the importance of the skeleton configuration by disconnecting the 1D
convolution module for the skeleton configuration signal from the network and
checking whether the loss on the validation set increases significantly. The
black dash-dot lines in Fig. 12 indicate that the network without the skeleton
configuration converges to a much higher loss when training on our
multi-subject training dataset. Moreover, its loss explodes at around 700
epochs on the validation dataset. Thus, the skeleton configuration plays an
important role in allowing the network to disambiguate the motions of different
subjects. To verify the importance of the smoothness loss term to the CCNet, we
train the network with this term removed from the loss. Fig. 13 indicates that,
without the smoothness term, the loss curves on both the training and
validation datasets explode before converging, which means the smoothness term
is essential for preventing the generated motions from exhibiting large sudden
changes and for converging to a better result.
Seed frame length | ERD-4LR-cond (mean±std) | DAE-LSTM-cond (mean±std) | CCNet (mean±std)
---|---|---|---
1 | 0.214±0.302 | 0.56±0.572 | 0.13±0.179
5 | 0.18±0.145 | 0.325±0.508 | 0.091±0.128
10 | 0.14±0.159 | 0.303±0.49 | 0.084±0.136
30 | 0.145±0.179 | 0.182±0.27 | 0.074±0.128
60 | 0.12±0.138 | 0.178±0.253 | 0.074±0.124
120 | 0.126±0.16 | 0.115±0.144 | 0.074±0.125
Table 6: Ablation study on the length of seed frames in the case of
controllable motion synthesis. We synthesize motion sequences using different
lengths of seed frames to check their influences on the generated motions. IK
is disabled in this ablation study.
Seed frame length: The influence of the seed frame length on the quality of the
generated motions is measured by computing the relative pose differences
between generated motions and the corresponding mocap data. Specifically, we
extract seed frames from mocap data in the validation dataset and then let the
networks predict a frame to be compared with the corresponding mocap frame in
the case of controllable motion synthesis. Tab. 6 shows the computed relative
pose difference $rel_{p}$ (see its definition in Sec. 6.3) for different seed
frame lengths: 1, 5, 10, 30, 60, and 120. A lower difference value indicates
that the generated motion is more similar to the mocap data and thus of better
quality. It can be seen that the CCNet is robust to variations in the length of
the seed frames compared to ERD-4LR-cond and DAE-LSTM-cond, and it does not
need very long seed frames to synthesize high-quality motions. However, we
observe more obvious jitters between the seed frames and the generated frames
for seed frame lengths of 1 and 5. We hypothesize that, with such short seed
frames, the CCNet cannot get enough information to generate smooth motions.
## 7 Conclusion
We have designed a deep generative motion model, i.e., CCNet, based on causal
convolution proposed in WaveNet [8] to synthesize high-quality motions for
multiple subjects. The trained CCNet can also synthesize several types of
complex motions, such as punching, kicking, and kicking while punching,
included in our database. The CCNet can be applied to various applications,
such as random motion generation, motion denoising, motion completion, and
controllable motion synthesis. Moreover, the CCNet can generate motions for
novel skeletons. Given sample motions of a novel skeleton, the pre-trained
CCNet can be fine-tuned to capture the skeleton’s motion style.
Limitation and future work: Currently, the CCNet trained on our database cannot
handle arbitrary skeleton variations. For instance, if we scale the lower
body of a skeleton in our database by a ratio less than 0.6, the CCNet
without fine-tuning will generate motions with severe foot-ground penetration
for the scaled skeleton. We hypothesize that this is because the 11 different
skeletons in our training set might not be enough for the network to learn how
to handle the large space of skeleton variations. Therefore, we plan to
increase the number of subjects in the database to investigate the capacity of
the CCNet. In addition, we also plan to increase the number and types of
complex motions in the database to improve the quality of complex motion
generation. Another issue with our CCNet is the number of seed frames required
to initialize the motion synthesis. When the seed frame length is set to
between 1 and 5, jitters occur in the generated motion, which might limit the
application of the CCNet to modeling swift motions. We are also interested in
investigating taking more temporal information as input, for instance, phase-like
information as in PFNN or joint accelerations, to mitigate this issue.
## References
* [1] Graham W Taylor and Geoffrey E Hinton. Factored conditional restricted boltzmann machines for modeling motion style. In Proceedings of the 26th annual international conference on machine learning, pages 1025–1032. ACM, 2009.
* [2] Katerina Fragkiadaki, Sergey Levine, Panna Felsen, and Jitendra Malik. Recurrent network models for human dynamics. In Proceedings of the IEEE International Conference on Computer Vision, pages 4346–4354, 2015.
* [3] Julieta Martinez, Michael J Black, and Javier Romero. On human motion prediction using recurrent neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4674–4683, 2017.
* [4] Daniel Holden, Jun Saito, and Taku Komura. A deep learning framework for character motion synthesis and editing. ACM Transactions on Graphics (TOG), 35(4):138, 2016.
* [5] Daniel Holden, Taku Komura, and Jun Saito. Phase-functioned neural networks for character control. ACM Trans. Graph., 36(4), July 2017.
* [6] Kyungho Lee, Seyoung Lee, and Jehee Lee. Interactive character animation by learning multi-objective control. ACM Trans. Graph., 37(6), December 2018.
* [7] Hung Yu Ling, Fabio Zinno, George Cheng, and Michiel van de Panne. Character controllers using motion vaes. 39(4), 2020.
* [8] Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio, 2016.
* [9] Charles Rose, Michael F Cohen, and Bobby Bodenheimer. Verbs and adverbs: Multidimensional motion interpolation. IEEE Computer Graphics and Applications, 18(5):32–40, 1998.
* [10] Luis Molina Tanco and Adrian Hilton. Realistic synthesis of novel human movements from a database of motion capture examples. In Proceedings Workshop on Human Motion, pages 137–142. IEEE, 2000.
* [11] Richard Bowden. Learning statistical models of human motion. In IEEE Workshop on Human Modeling, Analysis and Synthesis, CVPR, volume 2000, 2000.
* [12] Matthew Brand and Aaron Hertzmann. Style machines. In Proceedings of the 27th annual conference on Computer graphics and interactive techniques, pages 183–192. ACM Press/Addison-Wesley Publishing Co., 2000.
* [13] Jinxiang Chai and Jessica K Hodgins. Constraint-based motion optimization using a statistical dynamic model. In ACM Transactions on Graphics (TOG), volume 26, page 8. ACM, 2007.
* [14] Xiaolin Wei, Jianyuan Min, and Jinxiang Chai. Physically valid statistical models for human motion generation. ACM Transactions on Graphics (TOG), 30(3):19, 2011.
* [15] Keith Grochow, Steven L Martin, Aaron Hertzmann, and Zoran Popović. Style-based inverse kinematics. In ACM transactions on graphics (TOG), volume 23, pages 522–531. ACM, 2004.
* [16] Jinxiang Chai and Jessica K Hodgins. Performance animation from low-dimensional control signals. ACM Transactions on Graphics (ToG), 24(3):686–696, 2005.
* [17] Jehee Lee, Jinxiang Chai, Paul SA Reitsma, Jessica K Hodgins, and Nancy S Pollard. Interactive control of avatars animated with human motion data. In ACM Transactions on Graphics (ToG), volume 21, pages 491–500. ACM, 2002.
* [18] Rachel Heck and Michael Gleicher. Parametric motion graphs. In Proceedings of the 2007 symposium on Interactive 3D graphics and games, pages 129–136. ACM, 2007.
* [19] Okan Arikan, David A Forsyth, and James F O’Brien. Motion synthesis from annotations. In ACM Transactions on Graphics (TOG), volume 22, pages 402–408. ACM, 2003.
* [20] Yongjoon Lee, Kevin Wampler, Gilbert Bernstein, Jovan Popović, and Zoran Popović. Motion fields for interactive character locomotion. In ACM Transactions on Graphics (TOG), volume 29, page 138. ACM, 2010.
* [21] Jianyuan Min and Jinxiang Chai. Motion graphs++: a compact generative model for semantic motion analysis and synthesis. ACM Transactions on Graphics (TOG), 31(6):153, 2012.
* [22] Okan Arikan and David A Forsyth. Interactive motion generation from examples. In ACM Transactions on Graphics (TOG), volume 21, pages 483–490. ACM, 2002.
* [23] Lucas Kovar, Michael Gleicher, and Frédéric Pighin. Motion graphs. In ACM SIGGRAPH 2008 classes, page 51. ACM, 2008.
* [24] Yi Zhou, Zimo Li, Shuangjiu Xiao, Chong He, Zeng Huang, and Hao Li. Auto-conditioned recurrent networks for extended complex human motion synthesis. 2018.
* [25] Partha Ghosh, Jie Song, Emre Aksan, and Otmar Hilliges. Learning human motion models for long-term predictions. In 2017 International Conference on 3D Vision (3DV), pages 458–466. IEEE, 2017.
* [26] Shihong Xia, Congyi Wang, Jinxiang Chai, and Jessica Hodgins. Realtime style transfer for unlabeled heterogeneous human motion. ACM Transactions on Graphics (TOG), 34(4):119, 2015.
* [27] Jianyuan Min, Huajun Liu, and Jinxiang Chai. Synthesis and editing of personalized stylistic human motion. In Proceedings of the 2010 ACM SIGGRAPH symposium on Interactive 3D Graphics and Games, pages 39–46, 2010.
* [28] Kfir Aberman, Peizhuo Li, Dani Lischinski, Olga Sorkine-Hornung, Daniel Cohen-Or, and Baoquan Chen. Skeleton-aware networks for deep motion retargeting. ACM Transactions on Graphics (TOG), 39(4):62, 2020.
* [29] Jack M Wang, David J Fleet, and Aaron Hertzmann. Gaussian process dynamical models for human motion. IEEE transactions on pattern analysis and machine intelligence, 30(2):283–298, 2007.
* [30] Daniel Holden, Jun Saito, Taku Komura, and Thomas Joyce. Learning motion manifolds with convolutional autoencoders. In SIGGRAPH Asia 2015 Technical Briefs, page 18. ACM, 2015.
* [31] He Zhang, Sebastian Starke, Taku Komura, and Jun Saito. Mode-adaptive neural networks for quadruped motion control. ACM Trans. Graph., 37(4), July 2018.
* [32] Sebastian Starke, He Zhang, Taku Komura, and Jun Saito. Neural state machine for character-scene interactions. ACM Trans. Graph., 38(6), November 2019.
* [33] Jungdam Won, Deepak Gopinath, and Jessica Hodgins. A scalable approach to control diverse behaviors for physically simulated characters. ACM Trans. Graph., 39(4), July 2020.
* [34] Sebastian Starke, Yiwei Zhao, Taku Komura, and Kazi Zaman. Local motion phases for learning multi-contact character movements. ACM Trans. Graph., 39(4), July 2020.
* [35] Xue Bin Peng, Pieter Abbeel, Sergey Levine, and Michiel van de Panne. Deepmimic: Example-guided deep reinforcement learning of physics-based character skills. ACM Transactions on Graphics (TOG), 37(4):143, 2018.
* [36] Ying-Sheng Luo, Jonathan Hans Soeseno, Trista Pei-Chun Chen, and Wei-Chao Chen. Carl: Controllable agent with reinforcement learning for quadruped locomotion. ACM Transactions on Graphics (Proceedings of SIGGRAPH 2020), 39(4), 2020.
* [37] Libin Liu and Jessica Hodgins. Learning basketball dribbling skills using trajectory optimization and deep reinforcement learning. ACM Trans. Graph., 37(4), July 2018.
* [38] Libin Liu and Jessica Hodgins. Learning to schedule control fragments for physics-based characters using deep q-learning. ACM Trans. Graph., 36(3), June 2017.
* [39] Libin Liu, Michiel van de Panne, and KangKang Yin. Guided learning of control graphs for physics-based characters. ACM Transactions on Graphics, 35(3), 2016.
* [40] Xue Bin Peng, Glen Berseth, KangKang Yin, and Michiel Van De Panne. Deeploco: Dynamic locomotion skills using hierarchical deep reinforcement learning. ACM Transactions on Graphics (TOG), 36(4):41, 2017.
* [41] Zhiyong Wang, Jinxiang Chai, and Shihong Xia. Combining recurrent neural networks and adversarial training for human motion synthesis and control. IEEE Transactions on Visualization and Computer Graphics, pages 1–1, 2019.
* [42] Omid Alemi, Jules Françoise, and Philippe Pasquier. Groovenet: Real-time music-driven dance movement generation using artificial neural networks. networks, 8(17):26, 2017.
* [43] Angela S Lin, Lemeng Wu, Rodolfo Corona, Kevin Tai, Qixing Huang, and Raymond J Mooney. Generating animated videos of human activities from natural language descriptions. In NeurIPS, 2018.
* [44] Ashesh Jain, Amir R Zamir, Silvio Savarese, and Ashutosh Saxena. Structural-rnn: Deep learning on spatio-temporal graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5308–5317, 2016.
* [45] Qiang Nie, Ziwei Liu, and Yunhui Liu. Unsupervised 3d human pose representation with viewpoint and pose disentanglement. In European Conference on Computer Vision (ECCV), 2020.
* [46] Enric Corona, Albert Pumarola, Guillem Alenya, and Francesc Moreno-Noguer. Context-aware human motion prediction. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
* [47] Judith Butepage, Michael J Black, Danica Kragic, and Hedvig Kjellstrom. Deep representation learning for human motion prediction and classification. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6158–6166, 2017.
* [48] Mao Wei, Liu Miaomiao, and Salzemann Mathieu. History repeats itself: Human motion prediction via motion attention. In ECCV, 2020.
* [49] Ye Yuan and Kris Kitani. Residual force control for agile human behavior imitation and extended motion synthesis. In Advances in Neural Information Processing Systems, 2020.
* [50] Jingwei Xu, Huazhe Xu, Bingbing Ni, Xiaokang Yang, Xiaolong Wang, and Trevor Darrell. Hierarchical style-based networks for motion synthesis. In eccv, 2020.
* [51] Maosen Li, Siheng Chen, Yangheng Zhao, Ya Zhang, Yanfeng Wang, and Qi Tian. Dynamic multiscale graph neural networks for 3d skeleton based human motion prediction. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
* [52] Qiongjie Cui, Huaijiang Sun, and Fei Yang. Learning dynamic relationships for 3d human motion prediction. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
* [53] Ruben Villegas, Jimei Yang, Duygu Ceylan, and Honglak Lee. Neural kinematic networks for unsupervised motion retargetting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8639–8648, 2018.
* [54] Sadegh Aliakbarian, Fatemeh Sadat Saleh, Mathieu Salzmann, Lars Petersson, and Stephen Gould. A stochastic conditioning scheme for diverse human motion prediction. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
* [55] Rui Zhao, Hui Su, and Qiang Ji. Bayesian adversarial human motion synthesis. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
* [56] Zimo Li, Yi Zhou, Shuangjiu Xiao, Chong He, and Hao Li. Auto-conditioned lstm network for extended complex human motion synthesis. arXiv preprint arXiv:1707.05363, 3, 2017.
* [57] Dario Pavllo, David Grangier, and Michael Auli. Quaternet: A quaternion-based recurrent model for human motion. In BMVC, 2018.
* [58] Wikipedia. Quaternion — Wikipedia, the free encyclopedia, 2020. [Online; accessed 19-November-2020].
* [59] Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
* [60] Xiaolin K. Wei, Peizhao Zhang, and Jinxiang Chai. Accurate realtime full-body motion capture using a single depth camera. ACM Trans. Graph., 31(6):188:1–188:12, 2012.
* [61] Samuel R. Buss. Introduction to inverse kinematics with jacobian transpose, pseudoinverse and damped least squares methods. Technical report, IEEE Journal of Robotics and Automation, 2004.
## 8 Supplementary Material
### 8.1 Details of Network Parameters
CCNet: The detailed parameters of the separate residual blocks (SRBs) used in the CCNet are shown in Tab.8 and Tab.9, respectively. The format of the "output size" column is #channel$\times$#NCF, where #channel indicates the number of feature channels and #NCF indicates the number of consecutive frames (NCF) of each sample in a batch.
The modification of ERD-4LR and DAE-LSTM: Both the original ERD-4LR and DAE-LSTM networks in Sec.6 of our paper have 1024 hidden units in the linear layers and 512 hidden units in the LSTM layers. To test the performance of these two networks on multi-subject motion synthesis, we add parameters that let each network accept the skeleton configuration as an input. Specifically, we add a Linear-Tanh module (with 1024 hidden units in the Linear layer) to ERD-4LR and DAE-LSTM to map skeleton configurations to features, which are then added to the features output by the encoder. The summed features are fed to the LSTM layers for random motion synthesis. We denote these two adapted networks for random motion synthesis as _ERD-4LR-rand_ and _DAE-LSTM-rand_. Their network parameters are shown in Tab.10.
Similarly, to extend ERD-4LR and DAE-LSTM to multi-subject, controllable motion synthesis, we additionally add a Linear-Tanh module (with 1024 hidden units in the Linear layer) to convert each control signal into features. These features are likewise added to the encoder output to form the input of the subsequent LSTM layers. We denote the two adapted networks for controllable motion synthesis as _ERD-4LR-cond_ and _DAE-LSTM-cond_. Their network parameters are shown in Tab.11.
The modification of PFNN: We also adapt the network architecture of PFNN [5] to take the skeleton configuration as an input. This is implemented by replacing the terrain data in its original inputs with the skeleton configuration, since we only test the synthesis of motions on a ground plane. The remaining inputs of PFNN are the same as in [5]. Since PFNN is mainly designed for controllable motion synthesis rather than as a generative model, we only compare the CCNet with PFNN on controllable motion synthesis. The modified PFNN is denoted as _PFNN-cond_, and its network parameters are shown in Tab.12.
### 8.2 Trajectory-following Error Comparisons
Fig.14 illustrates the six manually specified trajectories used in the
trajectory-following error comparison of controllable motion synthesis (the
"comparisons" paragraph in Sec. 6.2 in our paper). The trajectory-following
error for all the trajectories is visualized in Fig.9.
### 8.3 User Study
Motion generation: To conduct the user study, we first randomly select 16 mocap sequences from the validation dataset, including five motion sequences of subject 7 and one motion sequence from each of the remaining subjects in our database. Since subject 7 is not included in the training dataset, we choose more of its motions so that we can check the motion quality in this case more carefully. Secondly, for the user study on random motion synthesis, we initialize the CCNet and the baseline models using the first 120 frames of each mocap sequence as seed frames. Consequently, we obtain 16 sequences generated by each of CCNet, ERD-4LR-rand, and DAE-LSTM-rand, forming three groups of motion pairs. For controllable motion synthesis, the seed frames are the same 120 frames of each sequence, and the control signals are extracted from the 16 mocap sequences as described in Sec. 4.2. The extracted control signals guarantee that these networks generate motions with the same motion types as the mocap sequences. We apply CCNet, ERD-4LR-cond, DAE-LSTM-cond, and PFNN-cond to generate 16 motion sequences each. The average length of the selected motion sequences is around 10 seconds, and each generated motion sequence is chosen to have the same length as the corresponding mocap sequence. All the motion sequence pairs are presented to the participants in a random order for their evaluation.
We name the videos of the generated motion sequences as follows: "rand0" and
"cond0" are used for the videos of mocap data; "rand1" and "cond1" for the
videos of motions generated by the CCNet; "rand2" and "cond2" for ERD-4LR-rand
and ERD-4LR-cond; "rand3" and "cond3" for DAE-LSTM-rand and DAE-LSTM-cond; and
"cond4" for PFNN-cond.
Test | Source of Variation | SS | df | MS | F | P-value | F-critical
---|---|---|---|---|---|---|---
Random motion synthesis | Between Groups | 171.375 | 2 | 85.6875 | 59.3792 | 2.3862e-13 | 3.2043
 | Within Groups | 64.9375 | 45 | 1.4431 | - | - | -
 | Total | 236.3125 | 47 | - | - | - | -
Controllable motion synthesis | Between Groups | 346.6875 | 3 | 115.5625 | 90.342 | 3.1855e-22 | 2.7581
 | Within Groups | 76.75 | 60 | 1.2792 | - | - | -
 | Total | 423.4375 | 63 | - | - | - | -
Table 7: ANOVA test of the user study (confidence level 0.95) for random and controllable motion synthesis. SS: sum of squares. df: degrees of freedom. MS: mean squares. F: F ratio. -: not applicable.
ANOVA test: We also perform a one-way ANOVA test on the user study results, as shown in Tab.7, which confirms that the differences between groups are statistically significant.
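The quantities reported in Tab.7 follow the standard one-way ANOVA decomposition and can be reproduced from the raw score groups. A minimal pure-Python sketch (the score lists in the test are hypothetical; with three groups of 16 ratings, as in random motion synthesis, the degrees of freedom come out to 2 and 45, matching Tab.7):

```python
def one_way_anova(groups):
    """Return (SS_between, SS_within, df_between, df_within, F) for a
    one-way ANOVA over groups of scores."""
    scores = [s for g in groups for s in g]
    n, k = len(scores), len(groups)
    grand_mean = sum(scores) / n
    # Between-group SS: group sizes times squared offsets of group means.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group SS: squared deviations from each group's own mean.
    ss_within = sum((s - sum(g) / len(g)) ** 2 for g in groups for s in g)
    df_between, df_within = k - 1, n - k
    f_ratio = (ss_between / df_between) / (ss_within / df_within)
    return ss_between, ss_within, df_between, df_within, f_ratio
```

The F ratio is then compared against the F distribution with (df_between, df_within) degrees of freedom to obtain the P-value and F-critical columns.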
Block name | Output size | Filter size
---|---|---
CausalConv1 | 32×240 | 1×1, 32, stride 1, dilation 2, padding (2, 0); input: motion frames
cond1_conv1+ReLU | 32×240 | 1×1, 32, stride 1, padding 1, dilation 1; input: c1
cond2_conv1+ReLU | 32×240 | 1×1, 32, stride 1, padding 1, dilation 1; input: c2
cond3_conv1+ReLU | 32×240 | 1×1, 32, stride 1, padding 1, dilation 1; input: c3
CausalConv2 | 32×240 | 1×1, 32, stride 1, dilation 2, padding (2, 0); input: the same as it to CausalConv1
cond1_conv2+ReLU | 32×240 | 1×1, 32, stride 1, padding 1, dilation 1; input: the same as it to cond1_conv1
cond2_conv2+ReLU | 32×240 | 1×1, 32, stride 1, padding 1, dilation 1; input: the same as it to cond2_conv1
cond3_conv2+ReLU | 32×240 | 1×1, 32, stride 1, padding 1, dilation 1; input: the same as it to cond3_conv1
sigmoid | 32×240 | input: the sum of CausalConv1, cond1_conv1+ReLU, cond2_conv1+ReLU and cond3_conv1+ReLU
Tanh | 32×240 | input: the sum of CausalConv2, cond1_conv2+ReLU, cond2_conv2+ReLU and cond3_conv2+ReLU
element-wise multiply | 32×240 | input: sigmoid and Tanh
conv_res | 32×240 | 1×1, 32, stride 1, padding 1, dilation 1; input: element-wise multiply; output (SRB_resi): the sum of conv_res and the input to CausalConv1
conv_skip | 512×240 | 1×1, 512, stride 1, padding 1, dilation 1; input: element-wise multiply; output (SRB_skipi): the output of conv_skip
Table 8: Network parameters of a single SRBi. The dilation of SRBi is 2.
Block name | Output size | Filter size
---|---|---
Encoder
conv1+ReLU | 32×240 | 1×1, 32, stride 1, padding 1, dilation 1
conv2+ReLU | 32×240 | 1×1, 32, stride 1, padding 1, dilation 1
Separate Residual Blocks(SRBs)
SRB0 | SRB_res0: 32×240 | dilation 2; input: conv2, c1, c2 and c3
SRB_skip0: 512×240 | dilation 2; input: conv2, c1, c2 and c3
SRB1 | SRB_res1: 32×240 | dilation 2; input: SRB_res0, c1, c2 and c3
SRB_skip1: 512×240 | dilation 2; input: SRB_res0, c1, c2 and c3
SRB2 | SRB_res2: 32×240 | dilation 2; input: SRB_res1, c1, c2 and c3
SRB_skip2: 512×240 | dilation 2; input: SRB_res1, c1, c2 and c3
SRB3 | SRB_res3: 32×240 | dilation 2; input: SRB_res2, c1, c2 and c3
SRB_skip3: 512×240 | dilation 2; input: SRB_res2, c1, c2 and c3
SRB4 | SRB_res4: 32×240 | dilation 2; input: SRB_res3, c1, c2 and c3
SRB_skip4: 512×240 | dilation 2; input: SRB_res3, c1, c2 and c3
SRB5 | SRB_res5: 32×240 | dilation 2; input: SRB_res4, c1, c2 and c3
SRB_skip5: 512×240 | dilation 2; input: SRB_res4, c1, c2 and c3
SRB6 | SRB_res6: 32×240 | dilation 2; input: SRB_res5, c1, c2 and c3
SRB_skip6: 512×240 | dilation 2; input: SRB_res5, c1, c2 and c3
SRB7 | SRB_res7: 32×240 | dilation 2; input: SRB_res6, c1, c2 and c3
SRB_skip7: 512×240 | dilation 2; input: SRB_res6, c1, c2 and c3
SRB8 | SRB_res8: 32×240 | dilation 2; input: SRB_res7, c1, c2 and c3
SRB_skip8: 512×240 | dilation 2; input: SRB_res7, c1, c2 and c3
SRB9 | SRB_res9: 32×240 | dilation 2; input: SRB_res8, c1, c2 and c3
SRB_skip9: 512×240 | dilation 2; input: SRB_res8, c1, c2 and c3
SRB10 | SRB_res10: 32×240 | dilation 2; input: SRB_res9, c1, c2 and c3
SRB_skip10: 512×240 | dilation 2; input: SRB_res9, c1, c2 and c3
SRB11 | SRB_res11: 32×240 | dilation 2; input: SRB_res10, c1, c2 and c3
SRB_skip11: 512×240 | dilation 2; input: SRB_res10, c1, c2 and c3
SRB12 | SRB_res12: 32×240 | dilation 2; input: SRB_res11, c1, c2 and c3
SRB_skip12: 512×240 | dilation 2; input: SRB_res11, c1, c2 and c3
SRB13 | SRB_res13: 32×240 | dilation 2; input: SRB_res12, c1, c2 and c3
SRB_skip13: 512×240 | dilation 2; input: SRB_res12, c1, c2 and c3
SRB14 | SRB_res14: 512×240 | dilation 2; input: SRB_res13, c1, c2 and c3
Decoder
ReLU+conv3 | 512×240 | 1×1, 512, stride 1, padding 1, dilation 1; input: the sum of SRB_skip0, SRB_skip1, …, SRB_skip14
ReLU+conv4 | 613×240 | 1×1, 613, stride 1, padding 1, dilation 1
Table 9: Network parameters of the CCNet. The input to the CCNet includes 240
frames of motions, the skeleton configuration c1, the direction and velocity
c2, and the motion type c3. The network architecture of the CCNet is inspired
by [8].
Block name | Output size | Input/Output
---|---|---
linear1+Tanh | 1024×240 | | input: motion frames
---
output: linear1
linear2+Tanh | 1024×240 | | input: linear1
---
output: linear2
LSTM1 | 512×240 | | input: linear2
---
output: lstm1
LSTM2 | 512×240 | | input: lstm1
---
output: lstm2
LSTM3 | 512×240 | | input: lstm2
---
output: lstm3
LSTM4 | 1024×240 | | input: lstm3
---
output: lstm4
linear3+Tanh | 1024×240 | | input: lstm4
---
output: linear3
linear4+Tanh | 613×240 | | input: linear3
---
output: the predicted motion frames
(a) ERD-4LR-rand.
Block name | Output size | Input/Output
---|---|---
linear1+Tanh | 1024×240 | | input: motion frames
---
output: linear1
linear2+Tanh | 1024×240 | | input: linear1
---
output: linear2
cond1_linear+Tanh | 1024×240 | | input: c1
---
output: cond1_linear
cond2_linear+Tanh | 1024×240 | | input: c2
---
output: cond2_linear
cond3_linear+Tanh | 1024×240 | | input: c3
---
output: cond3_linear
LSTM1 | 512×240 | | input: the sum of linear2,
---
cond1_linear, cond2_linear and cond3_linear
output: lstm1
LSTM2 | 512×240 | | input: lstm1
---
output: lstm2
LSTM3 | 512×240 | | input: lstm2
---
output: lstm3
LSTM4 | 1024×240 | | input: lstm3
---
output: lstm4
linear3+Tanh | 1024×240 | | input: lstm4
---
output: linear3
linear4+Tanh | 613×240 | | input: linear3
---
output: the predicted motion frames
(b) ERD-4LR-cond.
Table 10: Network parameters of the ERD-4LR-rand and ERD-4LR-cond. The inputs
to the ERD-4LR-rand and ERD-4LR-cond include 240 frames of motions, the
skeleton configuration c1, the direction and velocity c2, and the motion type
c3. The overall network architectures of ERD-4LR-rand and ERD-4LR-cond are adaptations of [2] to multi-subject motion synthesis, but implemented with a 4-layer LSTM as in [6].
Block name | Output size | Input/Output
---|---|---
dropout+linear1+ReLU | 1024×240 | | dropout probability: 0.3
---
input: motion frames
output: linear1
linear2+ReLU | 1024×240 | | input: linear1
---
output: linear2
linear3+ReLU | 1024×240 | | input: linear2
---
output: linear3
linear4 | 305×240 | | input: linear3
---
output: linear4
LSTM1 | 512×240 | | input: linear2,
---
output: lstm1
LSTM2 | 512×240 | | input: lstm1
---
output: lstm2
LSTM3 | 512×240 | | input: lstm2
---
output: lstm3
linear5 | 613×240 | | input: lstm3
---
output: the predicted motion frames
(a) DAE-LSTM-rand.
Block name | Output size | Input/Output
---|---|---
dropout+linear1+ReLU | 1024×240 | | dropout probability: 0.3
---
input: motion frames
output: linear1
linear2+ReLU | 1024×240 | | input: linear1
---
output: linear2
linear3+ReLU | 1024×240 | | input: linear2
---
output: linear3
linear4 | 305×240 | | input: linear3
---
output: linear4
cond1_linear+Tanh | 1024×240 | | input: c1
---
output: cond1_linear
cond2_linear+Tanh | 1024×240 | | input: c2
---
output: cond2_linear
cond3_linear+Tanh | 1024×240 | | input: c3
---
output: cond3_linear
LSTM1 | 512×240 | | input: the sum of linear4,
---
cond1_linear, cond2_linear and cond3_linear
output: lstm1
LSTM2 | 512×240 | | input: lstm1
---
output: lstm2
LSTM3 | 512×240 | | input: lstm2
---
output: lstm3
linear5 | 613×240 | | input: lstm3
---
output: the predicted motion frames
(b) DAE-LSTM-cond.
Table 11: Network parameters of DAE-LSTM-rand and DAE-LSTM-cond. The inputs to
the DAE-LSTM-rand and DAE-LSTM-cond include 240 frames of motions, the
skeleton configuration c1, the direction and velocity c2, and the motion type
c3. The overall network architectures of DAE-LSTM-rand and DAE-LSTM-cond are
the adaptation of the network in [25] to multi-subject motion synthesis.
Block name | Output size | Input/Output
---|---|---
dropout+pfnn_linear1+ELU | 512×1 | | dropout probability: 0.7
---
input: the concatenation of
motion frames and control signals
output: pfnn_linear1
dropout+pfnn_linear2+ELU | 512×1 | | dropout probability: 0.7
---
input: pfnn_linear1
output: pfnn_linear2
dropout+pfnn_linear3 | 437×1 | | dropout probability: 0.7
---
input: pfnn_linear2
output: the predicted
motion frames and control signals
Table 12: Network parameters of the PFNN-cond. The input to the PFNN-cond is described in Sec. 8.1 of this supplementary material.
(a) trajectory 1 (b) trajectory 2 (c) trajectory 3 (d) trajectory 4
(e) trajectory 5 (f) trajectory 6
Figure 14: Six trajectories used in trajectory-following comparisons. The
green color indicates the starting point of a trajectory, while the red color
indicates the terminal point.
# Stellar Evolution and Tidal Dissipation in REBOUNDx
Stanley A. Baronett,1 Noah Ferich,2 Daniel Tamayo,3 Jason H. Steffen,1
1Department of Physics & Astronomy, University of Nevada, Las Vegas, 4505 S.
Maryland Pkwy, Las Vegas 89154, USA
2Department of Astrophysical & Planetary Sciences, University of Colorado
Boulder, Boulder, CO 80309, USA
3Department of Astrophysical Sciences, Princeton University, Princeton, NJ
08544, USA
E-mail: <EMAIL_ADDRESS>; <EMAIL_ADDRESS>; Sagan Fellow, <EMAIL_ADDRESS>; <EMAIL_ADDRESS>
###### Abstract
We introduce two new features to REBOUNDx, an extended library for the N-body
integrator REBOUND. The first is a convenient parameter interpolator for
coupling different physics and integrators using numerical splitting schemes.
The second implements a constant time lag model for tides (without evolving
spins) from Hut (1981). We demonstrate various examples of these features
using post-main sequence stellar evolution data from MESA (Modules for
Experiments in Stellar Astrophysics). These additional effects are publicly
available as of REBOUNDx’s latest release.
###### keywords:
software: public release – stars: evolution – planet–star interactions –
software: simulations – software: development – software: documentation
Publication year: 2020.
## 1 Introduction
REBOUND is an open-source, N-body integrator, which simulates the dynamical
motion of particles (e.g., stars, planets, and dust) under the influence of
forces such as gravity (Rein & Liu, 2012). Written entirely in C, with memory
and computational efficiency in mind, the code can also be conveniently called
as a Python module and supports parallelisation for both shared and
distributed memory systems. REBOUNDx (eXtras) is an extended library and
flexible framework for incorporating additional physics into its integrations,
e.g., post-Newtonian corrections or radiation forces (Tamayo et al., 2020).
With the development of increasingly sophisticated codes to model different
physics, leveraging numerical schemes that couple distinct integrators in a
modular fashion can prove useful. For example, rather than duplicating stellar
evolution models in REBOUNDx, it would be preferable to use state-of-the-art
codes, e.g., the open-source Modules for Experiments in Stellar Astrophysics
(MESA, Paxton et al., 2011; Paxton et al., 2013, 2015, 2018, 2019). A simple
and powerful class of schemes for coupling integrators are splitting schemes,
which alternate (in this case) between evolving the orbits and the star using
fixed timesteps (Strang, 1968; Hairer et al., 2006). Calling integrators
separately in this fashion minimizes code duplication, and the numerical
scheme errors can be understood in terms of the commutation relations between
the operators being combined (e.g., Tamayo et al., 2020). This strategy has
been vigorously pursued in the Astronomical Multipurpose Software Environment
(AMUSE) package (Portegies Zwart & McMillan, 2018; Portegies Zwart, 2018;
Zwart et al., 2020), which couples a wide range of codes with splitting
schemes of various orders.
In this paper, we develop this capability for the REBOUNDx package. For the
adiabatic case where one set of variables (e.g., stellar parameters) change
much more slowly than the other (orbital parameters), one can take the much
more efficient approach of running a single stellar model and interpolating
its results for a large number of N-body integrations. To this end, we present
a machine-independent implementation of parameter interpolation in § 2 and
apply it to stellar evolution data from MESA as an example. To further show
the modularity of such splitting schemes, we implement a constant time lag
model for tides (without evolving spins) from Hut (1981) in § 3, and we show
results that combine both stellar and tidal evolution in § 4.
The latest version of REBOUNDx is available at
https://github.com/dtamayo/reboundx. We make available Jupyter notebooks and
sample Python scripts used to generate the following results and figures at
https://github.com/sabaronett/REBOUNDxPaper. (Any questions or problems can be reported by opening an issue at https://github.com/sabaronett/REBOUNDxPaper/issues.)
## 2 Splitting Schemes for Additional Effects
### 2.1 REBOUNDx Implementation
We can couple distinct integrators that model different physics using the
following numerical scheme. Formally, we have a coupled set of differential
equations for the N-body evolution $\hat{N}\>\mathbf{z}$ and the parameters
themselves $\hat{P}\>\mathbf{z}$, where we define differential operators
$\hat{N}$ and $\hat{P}$, which act on the current state of the system
$\mathbf{z}$. If we have a solution for the parameter differential equations
in isolation, we define a corresponding integration operator $\mathcal{P}(h)$
that advances the state $\mathbf{z}$ by a timestep $h$ according to
$\hat{P}\>\mathbf{z}$. We can also define a solution to the N-body equations
through its own corresponding integration operator $\mathcal{N}(h)$ that
similarly advances the state according to $\hat{N}\>\mathbf{z}$.
Thus, we construct a first-order splitting scheme $\mathcal{S}$ that alternates between an N-body step and a parameter-evolution step, each over the splitting time interval $dt_{\textrm{split}}$:
$\mathcal{SNP}(dt_{\textrm{split}})\mathbf{z}(t)\equiv\mathcal{N}(dt_{\textrm{split}})\circ\mathcal{P}(dt_{\textrm{split}}),$
(1)
where $\mathcal{N}(dt_{\textrm{split}})$ is made up of many N-body steps of
size $dt$. For small enough timesteps, this splitting method approximates the
true solution:
$(\mathcal{N}+\mathcal{P})(dt_{\textrm{split}})=\mathbf{z}(t+dt_{\textrm{split}})\approx\mathcal{N}(dt_{\textrm{split}})\circ\mathcal{P}(dt_{\textrm{split}}).$
(2)
The integration errors of such splitting schemes can be understood precisely
in terms of the non-commutative properties of the two operators (see Tamayo et
al., 2020). This also helps guide an appropriate choice of
$dt_{\textrm{split}}$, such that $dt_{\textrm{split}}\ll\tau_{\textrm{PI}},$
the time-scale of the parameter evolution (see § 4.1).
Precise adherence to this splitting scheme would use parameter integration
outputs that correspond to the specific and exact time intervals of
$dt_{\textrm{split}}$. For incorporating stellar evolution as an example, this
amounts to alternating timestep calls between REBOUND and MESA. However,
repeating runs of the same stellar model for many different N-body
integrations in this way can be inefficient.
Ensuring REBOUND’s N-body steps always fall at exactly the same times as
MESA’s can be impractical for a survey of planetary systems with different
orbital periods (and hence different timesteps). This is also challenging when
using integrators with adaptive timesteps as we do here and as used by MESA.
Yet many effects, such as stellar evolution, are very slow compared to orbital
time-scales. In such adiabatic cases, a simple approach is to interpolate the
results of a single MESA integration at arbitrary times. The error from
interpolating at $dt_{\textrm{split}}$, instead of evaluating
$\mathcal{P}($$dt_{\textrm{split}}$) with MESA explicitly, is negligible
compared to the splitting scheme error.
We introduce a new feature to REBOUNDx to accomplish the following: (1) load
and store parametric time-series data in the simulation’s allocated memory;
and (2) spline the data so users can interpolate a parameter’s value at any
arbitrary time in the simulation. Using a cubic spline, we reduce the
potential for Runge’s phenomenon around discontinuous derivatives (compared to
polynomial interpolation) when interpolating non-smooth data, e.g., stellar
mass and radius profiles around the tips of the red-giant branch (TRGB) and
asymptotic giant branch (AGB) (see Fig. 1).
We developed REBOUNDx’s interpolation functions adapting the cubic spline
algorithm from Press et al. (1992), tailored for the C language. We ensure
machine independence and avoid requiring installation of any additional
libraries or dependencies by adding these functions directly to the core C
source code. We also incorporate a custom and optimised searching algorithm
into the interpolation function. This function allows the code to support
forward and backward integrations (e.g., using REBOUND’s JANUS integrator; Rein & Tamayo, 2017) and interpolations at arbitrary times. This ‘Parameter
Interpolation’ (PI) feature, available as of version 3.1.0, therefore allows users to import data from other codes into their REBOUND simulations. (Documentation, as well as both C and Python examples of its uses, can be found at https://reboundx.readthedocs.io/en/latest/effects.html#parameter-interpolation.)
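The natural cubic spline routines of Press et al. (1992) that we adapt can be sketched in Python as follows (the actual REBOUNDx implementation is in C; the names here are illustrative). spline_coeffs precomputes second derivatives once; splint then evaluates at any time with a bisection search, supporting forward and backward integrations:

```python
def spline_coeffs(x, y):
    """Second derivatives of a natural cubic spline through (x, y)
    (cf. the 'spline' routine of Press et al. 1992)."""
    n = len(x)
    y2, u = [0.0] * n, [0.0] * n  # natural boundary conditions: y2[0] = y2[-1] = 0
    for i in range(1, n - 1):     # forward sweep of the tridiagonal solve
        sig = (x[i] - x[i - 1]) / (x[i + 1] - x[i - 1])
        p = sig * y2[i - 1] + 2.0
        y2[i] = (sig - 1.0) / p
        u[i] = ((y[i + 1] - y[i]) / (x[i + 1] - x[i])
                - (y[i] - y[i - 1]) / (x[i] - x[i - 1]))
        u[i] = (6.0 * u[i] / (x[i + 1] - x[i - 1]) - sig * u[i - 1]) / p
    for k in range(n - 2, 0, -1):  # back-substitution
        y2[k] = y2[k] * y2[k + 1] + u[k]
    return y2

def splint(x, y, y2, t):
    """Interpolate the spline at time t (cf. 'splint'), locating the
    bracketing interval by bisection."""
    lo, hi = 0, len(x) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if x[mid] > t:
            hi = mid
        else:
            lo = mid
    h = x[hi] - x[lo]
    a, b = (x[hi] - t) / h, (t - x[lo]) / h
    return (a * y[lo] + b * y[hi]
            + ((a ** 3 - a) * y2[lo] + (b ** 3 - b) * y2[hi]) * h * h / 6.0)
```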
The interpolator object incorporates a time series by accepting two arrays:
(1) a monotonically increasing time series, in one-to-one correspondence with
(2) a series of values for a given parameter. Users can populate these arrays
in any desired manner, including, but not limited to, importing values from an
external data file. For example, users can generate a discrete set of
parameter values (e.g., stellar mass) from their own formulas, from their own
integrations, or from existing stellar evolution codes, e.g., MESA or SSE.
When using MESA, we recommend the methodology laid out in the mesa2txt.ipynb
Jupyter notebook, available at the repository for this paper (see § 1). The
procedure isolates a parameter from standard MESA output logs and generates a
two-column, tab-separated text file. This method also accounts for when a MESA integration restarts from an earlier timestep (for example, MESA may automatically attempt a ‘backup’ or ‘retry’ when convergence fails between timesteps) and ensures the time-series part of the data imported into REBOUNDx is strictly increasing.
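One way to enforce this in the presence of restarts is to scan the log from the end and discard any row superseded by a later retry (an illustrative sketch of the idea; the actual procedure is the one in mesa2txt.ipynb):

```python
def monotonic_tail(times, values):
    """Keep only the final (post-restart) value for each epoch: scan the log
    from the end, keeping a row only if its time is strictly below the
    smallest time kept so far. The result is strictly increasing in time."""
    kept = []
    t_min = float("inf")
    for t, v in zip(reversed(times), reversed(values)):
        if t < t_min:
            kept.append((t, v))
            t_min = t
    kept.reverse()
    return [t for t, _ in kept], [v for _, v in kept]
```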
Before starting an integration, we create a separate interpolator object for
each varying parameter. We then repeatedly call REBOUND’s main integration
function when looping over a list of times to update the parameters to their
interpolated values at each iterated time of the loop. This results in two
distinct intervals: (1) the existing integration timestep $dt$, and (2) an
interpolation interval $dt_{\textrm{split}}$.
### 2.2 Demonstration
Here we interpolate stellar evolution data to demonstrate the splitting scheme
in action. We use MESA to model the Sun’s evolution from pre-main sequence
(MS) to white dwarf (WD) (release version 12778 and MESA SDK version 20.3.2; DOI 10.5281/zenodo.3706650). MESA supports various preloaded and custom mass-
loss rate configurations along different evolutionary stages (Paxton et al.,
2011, p. 16). For the red-giant branch (RGB) phase, we used the default
Reimers (1975) formula for MESA’s ‘cool-wind RGB scheme’:
$\dot{M}=4\times 10^{-13}\eta\dfrac{LR}{M},$ (3)
where $L$, $R$, and $M$ respectively are the stellar luminosity, radius, and
mass (all in solar units), and $\eta$ is a dimensionless scaling factor. We
opted to maintain the test suite’s default factor of $\eta=0.8$, which falls
on the upper bound of a reasonable range for the Sun ($0.4<\eta<0.8$),
according to Sackmann et al. (1993), and as discussed in Veras & Wyatt (2012,
p. 2970).
Figure 1: MESA results of post-MS stellar mass (top) and radius (bottom)
evolution for a 1-$M_{\sun}$ main sequence star during its RGB and AGB phases.
For reference, 1 au $\approx 215~{}R_{\sun}$.
Fig. 1 shows post-MS results from MESA for the mass and radius evolution of a
$1~{}M_{\sun}$ star. With these data in hand, we twice simulate an idealised Sun-Earth system starting roughly 4 million years (Myr) before the TRGB using the WHFast integrator (Rein & Tamayo, 2015). (A Jupyter notebook of this interpolation example can be found at https://github.com/dtamayo/reboundx/blob/master/ipython_examples/ParameterInterpolation.ipynb.)
We invoke our new PI code in REBOUNDx to load the Sun’s post-MS MESA data and interpolate and update its mass and radius. We do this first with $dt_{\textrm{split}}=4000$ yr and then with $dt_{\textrm{split}}=400$ yr (a ten-times-shorter interval) to check the convergence of the numerical splitting scheme’s results. We initialise Earth’s semi-major axis at 1 au, although in reality its orbit would have expanded somewhat from any stellar mass loss prior to the start of our simulation.
Figure 2: Evolution of the Sun’s mass $M(t)$ and radius $R(t)$ (both in solid
red) and Earth’s semi-major axis $a(t)$ for splitting intervals
$dt_{\textrm{split}}$ of 400 yr (solid yellow) and 4000 yr (dotted blue). The
simulation starts approximately 4 Myr before the TRGB phase. Earth’s orbital
radius starts at 1 au.
As functions of simulation time, Fig. 2 shows the Sun’s mass and radius compared with Earth’s semi-major axis for the two splitting intervals $dt_{\textrm{split}}$. (Note this mass-loss profile corresponds to a smaller, 4-Myr window, slightly off-centred around the TRGB, seen in Fig. 1; the solar radius only reaches 0.8 au.) As we expect, Earth’s orbit adiabatically expands in sync with the solar mass loss, stopping at about 1.5 au when the Sun reaches its TRGB. Comparing the semi-major axis plots for the two values of $dt_{\textrm{split}}$ shows the solutions are indistinguishable and converged.
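The converged result can be sanity-checked against the adiabatic invariant $aM\approx\mathrm{const}$, which holds when mass loss is slow compared to the orbital period. A one-line sketch (the final mass here is illustrative rather than a quoted MESA value):

```python
def adiabatic_sma(a0, m0, m_final):
    """Semi-major axis after slow (adiabatic) mass loss: a*M is conserved."""
    return a0 * m0 / m_final

# Expanding a 1-au orbit to ~1.5 au corresponds to the star retaining
# roughly 2/3 of its initial mass by the TRGB.
a_final = adiabatic_sma(1.0, 1.0, 2.0 / 3.0)
```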
## 3 Tides Constant Time Lag (TCTL)
### 3.1 REBOUNDx Implementation
We implement a general form of the weak friction model for tidal interaction
in binary systems with constant time lag from Hut (1981) (see also Bolmont et
al., 2015). The tidal perturbing force from Hut (1981, Eq. 8) is
$\bm{F}=-G\dfrac{Mm}{r^{2}}\left\\{\hat{r}+3q\left(\dfrac{R}{r}\right)^{5}k_{2}\left[\left(1+3\dfrac{\dot{r}}{r}\tau\right)\hat{r}-(\Omega-\dot{\theta})\tau\hat{\theta}\right]\right\\},$
(4)
where $G$ is the gravitational constant; $M$ and $m$ are the masses of the
tidally deformed body and perturber respectively; $r$ is the radial distance
between the two as point masses; $q=m/M$ is the mass ratio; $R$ is the
perturbed body’s physical radius; $\tau$ is a small constant time lag that
corresponds to the slight change in both amplitude and direction (i.e.,
misalignment) of the tides; $\Omega$ and $\dot{\theta}$ are the rotational
(spin) and instantaneous orbital angular velocities of the perturbed body and
perturber respectively ($\theta$ is the true anomaly); and $\hat{r}$ and
$\hat{\theta}$ are unit vectors in the $r$ and $\theta$ directions.
The perturbed body’s potential Love number of degree 2, $k_{2}$, is defined as
(e.g., Becker & Batygin, 2013),
$k_{2}=\frac{3-\eta_{2}}{2+\eta_{2}},$ (5)
where $\eta_{2}$ is the solution of Radau’s equation for $j=2$ at the body’s
surface. Hut (1981) confusingly refers to this quantity as the apsidal motion
constant $k$, which instead would imply a coefficient of $6$ in the $k_{2}$
term in Eq. 4 (e.g., Csizmadia et al., 2019). We therefore follow the more
standard notation of Bolmont et al. (2015).
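For a planar orbit, Eq. 4 translates directly into code; the following illustrative Python sketch (not the REBOUNDx C implementation) evaluates the two force components:

```python
import math

def tidal_force(G, M, m, r_vec, v_vec, R, k2, tau, Omega):
    """Tidal perturbing force of Eq. 4 on the perturber m, planar case.
    M and R describe the tidally deformed body; r_vec and v_vec are the
    perturber's position and velocity relative to it."""
    x, y = r_vec
    vx, vy = v_vec
    r = math.hypot(x, y)
    rhat = (x / r, y / r)
    that = (-rhat[1], rhat[0])             # theta-hat, prograde tangential
    rdot = (x * vx + y * vy) / r           # radial velocity
    thetadot = (x * vy - y * vx) / r ** 2  # instantaneous orbital angular velocity
    q = m / M
    pref = -G * M * m / r ** 2
    amp = 3.0 * q * (R / r) ** 5 * k2
    radial = 1.0 + amp * (1.0 + 3.0 * rdot / r * tau)
    trans = -amp * (Omega - thetadot) * tau
    return (pref * (radial * rhat[0] + trans * that[0]),
            pref * (radial * rhat[1] + trans * that[1]))
```

For $\tau=0$ the tangential term vanishes and the force is purely radial, recovering the conservative equilibrium-tide case; for $\tau>0$ and $\Omega<\dot{\theta}$ the tangential component opposes the orbital motion.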
We release this implementation of ‘Tides Constant Time Lag’ (TCTL) in version
3.0.5 of REBOUNDx. (Documentation is available at https://reboundx.readthedocs.io/en/latest/effects.html#tides-constant-time-lag.) When activated, the tidal effect applies to all other bodies in a REBOUND
simulation, allowing for arbitrary orbital inclinations and eccentricities.
Tides can be raised on either the primary or the orbiting bodies – or both –
by setting the requisite parameters on all desired particles. For example, if
we set a physical radius for the primary, any orbiting body with non-zero mass will raise tides on the primary. Similarly, if we add a physical radius
and $k_{2}$ to any of the orbiting bodies, the primary will raise tides on
those particles, e.g., modeling binary star systems. We note that for
computational efficiency, secondary bodies themselves (i.e., all particles
added to the simulation beyond the first) will not raise tides on one another
with the current implementation.
The inclusion of a non-zero constant time lag $\tau$ introduces dissipation to
the system, whereas $\tau=0$ corresponds to the case of instantaneous
equilibrium tides. The latter case provides a conservative, radial, non-
Keplerian potential, i.e., the total energy will be conserved, but the
pericentre will precess. However, in the former case a delayed response
typically causes eccentricity damping and will drive orbiting bodies radially
either inward or outward depending on whether they orbit faster or slower than
the spin ($\Omega$) of the tidally deformed body.
There are two main limitations with the current implementation. First, the
effect does not evolve the spins; it is thus applicable to cases where the
angular momentum change due to tides has a negligible effect on the spins or
in cases where $\dot{\theta}\ll\Omega$. Thus, users must consider whether more
angular momentum is being exchanged in the system than is available in the
spins. Second, it assumes all of the bodies’ spins remain fixed along the
reference $z$-axis. Thus if a body’s orbit is inclined with respect to the
$xy$-plane, then its spin will be inclined with respect to its orbital plane.
### 3.2 Demonstration
We compare the results of our code with an analytic approximation of Earth’s
orbital decay around a non-rotating RGB Sun. We use the following tidal
evolution equation from Hut (1981, Eq. 9) to predict the decay of Earth’s
orbit as a function of time:
$\displaystyle\dfrac{da}{dt}=$
$\displaystyle-6\dfrac{k_{2}}{T}q(1+q)\left(\dfrac{R}{a}\right)^{8}\dfrac{a}{(1-e^{2})^{15/2}}$
(6)
$\displaystyle\cdot\left\\{f_{1}(e^{2})-(1-e^{2})^{3/2}f_{2}(e^{2})\dfrac{\Omega}{n}\right\\},$
where
$f_{1}(e^{2})=1+\tfrac{31}{2}e^{2}+\tfrac{255}{8}e^{4}+\tfrac{185}{16}e^{6}+\tfrac{25}{64}e^{8}$, $f_{2}(e^{2})=1+\tfrac{15}{2}e^{2}+\tfrac{45}{8}e^{4}+\tfrac{5}{16}e^{6}$,
$n=G^{1/2}(M+m)^{1/2}a^{-3/2}$ is the mean orbital angular velocity, and
$T=\dfrac{R^{3}}{GM\tau}$
‘is a typical time scale on which significant changes in the orbit take place
through tidal evolution’ (Hut, 1981, p. 128). We assume a circular orbit
($e=0$) and solve differential Eq. 6 to get a predictive expression for
Earth’s semi-major axis as a function of time:
$a(t)=R\left[\left(\dfrac{a_{0}}{R}\right)^{8}-48\dfrac{k_{2}}{T}q(1+q)t\right]^{1/8}.$
(7)
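As a rough numerical check of Eq. 7, the sketch below evaluates the closed-form decay and the implied engulfment time. It assumes the idealised setup of this section with the stellar radius taken as 0.85 au (a TRGB-sized Sun), $k_{2}=0.03$ as adopted in § 4, and units of au, $M_{\sun}$, yr; it is an illustrative calculation, not the paper's integration.

```python
import math

# Hedged sketch: engulfment time from the closed-form decay, Eq. (7),
# for the idealised Sun-Earth setup of Sec. 3.2. k2 = 0.03 is taken
# from Sec. 4; units are au, M_sun, yr (so G = 4*pi**2).
G = 4 * math.pi**2           # au^3 / (M_sun yr^2)
M, R, tau = 0.86, 0.85, 0.4  # stellar mass, radius (au), time lag (yr)
k2 = 0.03                    # apsidal motion constant (Sec. 4)
q = 3.003e-6 / M             # Earth/Sun mass ratio
a0 = 1.0                     # initial semi-major axis (au)

T = R**3 / (G * M * tau)     # Hut's tidal time scale
rate = 48 * (k2 / T) * q * (1 + q)  # decay rate appearing in Eq. (7)

def a(t):
    """Semi-major axis (au) at time t (yr), Eq. (7)."""
    return R * ((a0 / R)**8 - rate * t)**0.125

# Engulfment when a(t) = R, i.e. when the bracket reaches 1:
t_eng = ((a0 / R)**8 - 1) / rate
print(f"engulfment after {t_eng/1e3:.1f} kyr")  # ~24 kyr, cf. Fig. 3
```

The resulting decay time of roughly 24 kyr is consistent with the 25-kyr integration window shown in Fig. 3.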
We set up an idealised Sun-Earth system just before the TRGB, with
$M=0.86~{}M_{\sun}$, $R=0.85$ au, and $\Omega=0$. All solar parameters
remain constant, and Earth’s initial semi-major axis is at 1 au. We vary
Earth’s initial eccentricity in two different setups, with $e_{\earth}=0$ and
$e_{\earth}=0.03$. In both cases, we set the constant time lag $\tau=0.4$
yr. (A Jupyter Notebook containing these tidal tests can be found at
https://github.com/dtamayo/reboundx/blob/master/ipython_examples/TidesConstantTimeLag.ipynb.)
We plot results for a 25-kyr integration in Fig. 3. In the top subplot, we see
the dissipative tidal effect causes Earth’s orbit, measured by its semi-major
axis (dotted blue), to decay into the solar photosphere (dashed red). We run
the simulation with the high-accuracy ‘Implicit integrator with Adaptive
timeStepping, 15th order,’ or IAS15 (Rein & Spiegel, 2015) to better compare
our results with the theoretical decay predicted by Eq. 7 (solid yellow). In
the bottom subplot, in our variation with an initial $e_{\earth}=0.03$, we
observe eccentricity damping due to the dissipative tidal effect, consistent
with physical expectations.
Figure 3: 25-kyr simulation of the Earth’s orbital decay and engulfment due to
dissipative tidal interactions with the Sun. (Top) $a(t)_{\textrm{pred}}$ and
$a(t)_{\textrm{sim}}$ respectively are the predicted (solid yellow) and
simulated (dotted blue) evolutions of Earth’s semi-major axis; cf. $R(t)$, the
solar radius (red). (Bottom) A similar setup where Earth’s orbital
eccentricity (solid blue), initialised to $e_{\earth}=0.03$, dampens over time
due to dissipative tides.
## 4 Combining Effects
To further showcase the capabilities of these new features in REBOUNDx, we
demonstrate both dynamical stellar evolution (via § 2) and the effects of
dissipative tidal interactions (via § 3) running simultaneously. We first
derive an expression for $\tau$ in Eq. 4 solely in terms of stellar mass,
radius, and luminosity, as all these values are generated from MESA. We then
interpolate the time-varying solar data, generated in § 2.2, to evaluate and
update the corresponding TCTL parameter $\tau$ (§ 3.1) throughout a
simulation.
For a highly-evolved RGB Sun, tidal friction in the outer convective envelope
will retard tidal bulges on the solar photosphere (Schröder & Smith, 2008),
resulting in a non-zero value for $\tau$. Setting Eq. 11 in Zahn (1989) equal
to the azimuthal ($\hat{\theta}$) component of our Eq. 4 and solving for
$\tau$, we find
$\tau=\dfrac{2R^{3}}{GMt_{f}},$ (8)
where $t_{f}(t)=(M(t)R(t)^{2}/L(t))^{1/3}\approx\mathcal{O}(1~\textrm{yr})$ is
the convective friction time (Zahn, 1989, Eq. 7).
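As a numerical illustration of Eq. 8, with TRGB-like stellar values assumed purely for this sketch (they are not taken from the paper's MESA track):

```python
import math

# Hedged sketch of Eq. (8): evaluating the constant time lag tau from
# stellar mass, radius, and luminosity in cgs units, with Zahn's
# convective friction time t_f = (M R^2 / L)^(1/3). The TRGB-like
# values below are assumptions for illustration only.
G = 6.674e-8                                   # cm^3 g^-1 s^-2
Msun, Rsun, Lsun = 1.989e33, 6.957e10, 3.828e33
yr = 3.156e7                                   # s

M = 0.86 * Msun
R = 180.0 * Rsun     # assumed TRGB-like radius
L = 2700.0 * Lsun    # assumed TRGB-like luminosity

t_f = (M * R**2 / L) ** (1.0 / 3.0)  # convective friction time (s)
tau = 2.0 * R**3 / (G * M * t_f)     # Eq. (8), time lag (s)

print(f"t_f = {t_f/yr:.2f} yr, tau = {tau/yr:.4f} yr")
```

With these assumed values, $t_{f}$ indeed comes out of order 1 yr, as stated above.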
With this expression, one can separately interpolate stellar mass, radius, and
luminosity data to evaluate and update $\tau$ as needed throughout a
simulation. However, as discussed in § 4.1 and § 4.3, the computational
overhead associated with excessive interpolation calls can result in increased
simulation runtimes. Since the stellar profiles for $R(t)$, $M(t)$, and $L(t)$
are known in advance from MESA’s output, we instead precalculate the values of
$\tau(t)$ with Eq. 8 for use with its own interpolator object (§ 2.1). This
requires only one interpolation call per update of $\tau$ and is therefore
more computationally efficient.
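The precalculation strategy can be sketched as follows. The stellar profile arrays below are synthetic placeholders standing in for MESA output, and `np.interp` is a linear stand-in for the spline interpolator object of § 2.1:

```python
import numpy as np

# Hedged sketch: tau(t) is evaluated once on the stellar model's time
# grid via Eq. (8), then served by a single interpolator during the run.
# Profile arrays are synthetic placeholders (consistent cgs units), not
# MESA data; np.interp stands in for the interpolator object of Sec. 2.1.
G = 6.674e-8                              # cm^3 g^-1 s^-2
t_grid = np.linspace(0.0, 5e6, 2000)      # yr since simulation start
x = t_grid / t_grid[-1]
M = (0.90 - 0.04 * x) * 1.989e33          # g   (toy mass loss)
R = (20.0 + 160.0 * x**3) * 6.957e10      # cm  (toy RGB growth)
L = (300.0 + 2400.0 * x**3) * 3.828e33    # erg/s (toy)

t_f = (M * R**2 / L) ** (1.0 / 3.0)       # convective friction time
tau_grid = 2.0 * R**3 / (G * M * t_f)     # Eq. (8), precalculated once

def tau_at(t):
    """One interpolation call per parameter update."""
    return np.interp(t, t_grid, tau_grid)

print(f"tau at 2.5 Myr: {tau_at(2.5e6)/3.156e7:.5f} yr")
```

Only one interpolation call per update is needed, rather than three (for $M$, $R$, and $L$ separately).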
The remaining two tidal parameters are $\Omega$ and $k_{2}$ (see § 3.1). As
conservation of angular momentum and post-MS magnetic braking effectively
result in a non-rotating RGB Sun (Schröder & Smith, 2008), we set $\Omega=0$
in the following simulations. Meanwhile, $k_{2}$ is approximately equal to
$\lambda_{2}$ (Zahn, 1989, 1977), which depends on properties of the Sun’s
convective envelope. Following Schröder & Smith (2008, p. 6), we set
$k_{2}\approx\lambda_{2}\approx 0.03$ for a fully convective envelope.
### 4.1 Convergence Tests
We study the effect that various time intervals between parameter updates has
on dynamical results in two different convergence tests. The first compares
the engulfment time of a body closely-orbiting an RGB Sun against a range of
time intervals for updating parameters. The second compares the final semi-
major axis reached by a more distant body at the TRGB as a function of the
same range of intervals. We use the high-accuracy IAS15 integrator for all
setups and record each runtime.
Figure 4: Relative errors of convergence tests of dynamical results as a
function of stellar- and tidal-parameter update intervals of two-body, post-MS
systems approximately 5 Myr pre-TRGB. The top subplot shows the relative error
(Eq. 9) in engulfment times $\delta t_{\textrm{eng}}$ (blue circles) and
simulation runtimes (orange triangles) versus update intervals for an Earth-
mass planet with initial semi-major axis of 0.7 au. The bottom subplot shows
the relative error in final semi-major axes $\delta a_{\textrm{f}}$ (blue
circles) and simulation runtimes (orange triangles) versus update intervals
for a Jupiter-mass planet with initial semi-major axis of 5 au.
In our first test, we initialise an Earth-mass planet at 0.7 au about 5 Myr
before the TRGB. We enable both stellar evolution and dissipative tidal
interactions, and we record both the integration time when REBOUND detects a
particle collision (i.e., the planet is engulfed), $t_{\textrm{eng}}$, and the
elapsed (wall-clock) real time of the simulation. Without tides at this
initial distance the planet escapes engulfment via adiabatic expansion from
stellar mass loss before the TRGB (see § 2.2 and Fig. 2). We interpolate and
update the RGB Sun’s mass, radius, and time-lag $\tau$ at regular intervals
and repeat the runs across a logarithmic range in decades from every 1 Myr to
every one-tenth of a year.
We take our highest accuracy result of the engulfment time,
$t_{\textrm{eng}}=3.2997761626440026$ Myr for an update interval of 0.1-yr, as
our true value. We then calculate the relative error defined by
$\delta t_{\textrm{eng}}=\frac{|t_{\textrm{eng,0}}-t_{\textrm{eng}}|}{t_{\textrm{eng}}},$ (9)
where $t_{\textrm{eng,0}}$ is the engulfment time measured at each update
interval. Our results of engulfment-time relative errors versus update
intervals can be seen in the top subplot of Fig. 4.
We note a difference of almost 0.5 Myr in engulfment time between the least
frequent (every Myr) and the most frequent update intervals (ten times per
year). The shortest update intervals (yearly and $10^{-1}$-yr) coincide with
noticeable increases in total runtimes. The case with the most frequent
updates takes more than three times longer to run than the fastest simulation
with $10^{2}$-yr updates. Comparing the two curves, the additional
computational overhead from excessive interpolation and updating yields
diminishing returns in accuracy.
Looking to the left-hand side of the subplot, between the $10^{6}$- and
$10^{4}$-yr intervals, we find runtimes first start out longer than those
around the middle and decrease with shorter intervals. Since our runs
terminate upon engulfment, this behaviour corresponds to instances where the
planet is able to survive longer due to a slower orbital decay. Rewriting Eq.
7 for the analytic approximation of the planet’s semi-major axis as a function
of time, we find
$a(t)=\left[a_{0}^{8}-48R^{5}GM\tau k_{2}q(1+q)t\right]^{1/8}.$ (10)
For a positive time-lag $\tau$, inspection of Eq. 10 reveals that an increase
in solar radius $R$ results in a decrease in semi-major axis $a(t)$. Thus
shorter parameter update intervals that more accurately capture the rapid
growth of the TRGB solar radius serve to accelerate orbital decay toward
engulfment. In other words, down to the $10^{2}$- and $10$-yr range, more
frequent updates result in shorter runtimes since engulfment occurs sooner.
Conversely, longer update intervals less accurately capture radial growth,
helping to slow orbital decay and resulting in the longer aforementioned
engulfment times.
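This radius dependence can be checked numerically: for fixed positive $\tau$, the engulfment time implied by Eq. 10 shrinks as $R$ grows. A hedged sketch with illustrative values (au, $M_{\sun}$, yr units; these are stand-ins, not the survey's actual parameters):

```python
import math

# Hedged numerical check of the claim below Eq. (10): for fixed tau > 0,
# a larger solar radius R shortens the time to engulfment. All values
# here are illustrative assumptions, not the paper's survey setup.
G = 4 * math.pi**2          # au^3 / (M_sun yr^2)
M, tau, k2 = 0.86, 0.4, 0.03
q = 3.0e-6 / M              # Earth/Sun mass ratio
a0 = 1.0                    # initial semi-major axis (au)

def t_eng(R):
    """Time (yr) until a(t) = R in Eq. (10), i.e. engulfment."""
    rate = 48 * R**5 * G * M * tau * k2 * q * (1 + q)
    return (a0**8 - R**8) / rate

print(t_eng(0.70), t_eng(0.85))  # decay completes sooner for larger R
```

With these numbers the engulfment time drops by roughly a factor of three between $R=0.70$ and $R=0.85$ au, so intervals that track the rapid TRGB radial growth more faithfully do accelerate the decay.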
In our second test, we initialise a Jupiter-mass planet at 5 au at the same solar
age. Stellar evolution and dissipative tides remain enabled. As engulfment by
the TRGB does not occur, we record the final semi-major axis of the planet,
$a_{f}$, after a full 5-Myr integration. We repeat the simulation for the same
range of parameter update intervals as before and take
$a_{f}=7.7047437314161416$ au from our update interval of 0.1-yr as our true
value.
Following Eq. 9, we plot the relative error of our results in the bottom
subplot of Fig. 4. We note a difference of about 0.5 au in final semi-major
axis reached by the planet between the longest ($10^{6}$ yr) and shortest
($10^{-1}$ yr) update intervals. Again we find that excessive interpolation
and updating, between intervals of $10$- and $10^{-1}$ yr, result in longer
computational runtimes (more than 60 times in the worst case) with diminishing
returns in accuracy.
### 4.2 Engulfment Survey
We survey several suites of single-planet setups about 5 Myr before the TRGB.
We include stellar evolution in all cases and run each setup twice: once with
TCTL on and once with it off. We use the IAS15 integrator, and we opt for a
parameter update interval of every 100 yr based on Fig. 4’s results in § 4.1.
Our three main testing suites involve a single planet of either 1, 10, or 100
Earth-masses. For each suite we initialise the planet’s semi-major axis
between 0.4 and 1.4 au in increments of 0.2. We choose a lower bound for the
orbital distance of 0.4 au because the solar radius is already larger than 0.3
au at the start of the 5 Myr integrations.
Figure 5: Suites of simulations for the case of a single planet around the
Sun, approximately 5 Myr before the TRGB, with TCTL (§ 3) enabled (solid
coloured curves) and disabled (dashed coloured curves), and evolving solar
radius (thick black curve) and mass using PI of MESA data (§ 2). Initial semi-
major axes of the planets range from 0.4 to 1.4 au in increments of 0.2. The
heavier line weights of the solid curves correspond to more massive planets as
shown in the legend. With tides off, planets starting at the same semi-major
axis follow the same trajectory regardless of mass.
We show the results of our survey in Fig. 5. The thick black curves correspond
to the RGB Sun’s radius, reaching its tip around 4.7 Myr into the simulation
(cf. Figs. 1 and 2). The solid and dashed coloured curves correspond to the
planet’s semi-major axis with and without tides present, respectively.
We first note that the planets’ orbits in the non-tidal cases (dashed coloured
curves) all exhibit the same adiabatic expansion due to stellar mass loss,
stopping once the Sun reaches the TRGB (cf. § 2.2 and Fig. 2). Differences in
the final semi-major axis reached without tides depend only on the initial semi-
major axis, with no dependence on planetary mass. Among these non-tidal cases,
engulfment occurs only for an initial semi-major axis of 0.4 au (dashed blue
curves).
With TCTL enabled (solid coloured curves), we observe the tidal drag effect
begin to dominate adiabatic expansion (see § 3). In the 1-$M_{\oplus}$ suite
(thinnest solid curves), we see that drag on the planet from tides raised on
the Sun result in engulfment by the TRGB for initial semi-major axes between
0.4 and 0.8 au. We find similar results between $0.4\leq a_{0}\leq 1.0$ au for
the 10-$M_{\oplus}$ suite (thicker solid curves), and between $0.4\leq
a_{0}\leq 1.4$ au for the 100-$M_{\oplus}$ suite (thickest solid curves).
The wider range of $a_{0}$ that leads to engulfment as a function of planetary
mass is mathematically consistent with the tidal force being directly
proportional to the perturbing mass ($m$ in Eq. 4) and physically consistent
with raising larger tidal bulges on the Sun’s surface which lag behind the
planet’s orbit. As a final note, we observe attenuation of adiabatic expansion
due to tides in the surviving planetary cases, e.g., $a_{0}\geq 1.0$ au for 1
$M_{\oplus}$ and $a_{0}\geq 1.2$ au for 10 $M_{\oplus}$.
### 4.3 Time Performance
To measure the computational performance costs of these two new features, we
record runtimes over multiple trials of the terrestrial planets simultaneously
orbiting a pre-TRGB Sun. The four configurations we specify include (1) no new
effects; (2) ‘Parameter Interpolation’ Stellar Evolution (PISE) only; (3) TCTL
only; and (4) both effects running simultaneously. For these runs, we instead
use the WHFast integrator with a fixed timestep of one-tenth Mercury’s initial
orbital period to rule out any differences in performance between the four
setups caused by adaptive timesteps (e.g., IAS15).
We end the integration after 920 kyr for all configurations, which corresponds
to the engulfment of Mercury when both stellar evolution and tidal
interactions are enabled. In configurations (2) and (4), we interpolate and
update stellar mass, radius and time lag $\tau$ parameters 1000 times
throughout the run, corresponding to an update interval of 920 yr. For
configuration (3), we evaluate and set $\tau$ only once before the start of
the integration. We perform ten single-threaded runs of each setup on a
computing cluster with each node containing two Intel Xeon E5-2640v3 (8-core)
CPUs and 128 GB of available memory.
Table 1: Computational time performance results from 920 kyr simulations of all four terrestrial planets in various REBOUNDx configurations, using the WHFast integrator with fixed timesteps. We computed the average and standard deviation of 10 runs for each of the following setups: no REBOUNDx effects (None); ‘Parameter Interpolation’ Stellar Evolution only (PISE); tidal interaction only through TCTL; and both effects simultaneously (PISE & TCTL).

Effects | Avg. Runtime (s) | Std. Dev. (s) | Increase (%)
---|---|---|---
None | 57.73 | $\pm 0.37$ |
PISE | 58.69 | $\pm 0.37$ | +1.7
TCTL | 67.30 | $\pm 0.74$ | +16.6
PISE & TCTL | 68.26 | $\pm 0.62$ | +18.2
Table 1 shows the computed averages, standard deviations, and percentage
increases of runtimes for each configuration. We find including PISE alone
adds (on average) less than a 2 per cent increase in overhead. The addition of
TCTL alone adds an average of 17 per cent to the computation time. As expected
from the above benchmarks, including both effects extends the runtime by about
18 per cent. While exact runtimes will vary depending on hardware, these
increases in overhead are not prohibitive for extended integrations, e.g., on
the order of hundreds of millions or billions of orbits.
## 5 Conclusion
We add two new features to REBOUNDx’s existing library of astrophysical
effects: generalised parameter interpolation for splitting schemes (§ 2) and
dissipative tidal interactions (§ 3). The former conveniently allows the
results of other integration codes to be used as parameter inputs for REBOUND.
The latter lets users examine tidal effects among close encounter situations,
e.g., ‘hot Jupiters’ around post-MS stars. Users can also utilise both
features simultaneously (§ 4) to study in detail a wide range of orbital
instabilities caused by stellar mass loss and tidal drag. Hearkening back to the
concluding notes of Tamayo et al. (2020), we hope users find these additional
effects straightforward to incorporate – especially with the convenient Python
wrapper – and useful in their REBOUND N-body integrations. More importantly,
we encourage others in the community to continue adding to the REBOUNDx
extended library.
## Acknowledgements
We thank Tamás Borkovits for helpful discussions. Simulations in this paper
made use of the MESA, REBOUND and REBOUNDx codes, all of which are freely
available at http://mesa.sourceforge.net/,
http://github.com/hannorein/rebound, and https://github.com/dtamayo/reboundx.
This research was made possible by the open-source projects Jupyter (Kluyver
et al., 2016), IPython (Perez & Granger, 2007), and matplotlib (Hunter, 2007;
Caswell et al., 2020).
The MESA EOS is a blend of the OPAL (Rogers & Nayfonov, 2002), SCVH (Saumon et
al., 1995), PTEH (Pols et al., 1995), HELM (Timmes & Swesty, 2000), and PC
(Potekhin & Chabrier, 2010) EOSes. Radiative opacities are primarily from OPAL
(Iglesias & Rogers, 1993, 1996), with low-temperature data from Ferguson et
al. (2005) and the high-temperature, Compton-scattering dominated regime by
Buchler & Yueh (1976). Electron conduction opacities are from Cassisi et al.
(2007). Nuclear reaction rates are a combination of rates from NACRE (Angulo
et al., 1999), JINA REACLIB (Cyburt et al., 2010), plus additional tabulated
weak reaction rates (Fuller et al., 1985; Oda et al., 1994; Langanke &
Martínez-Pinedo, 2000). (For MESA versions before 11701): Screening is
included via the prescriptions of Salpeter (1954); Dewitt et al. (1973);
Alastuey & Jancovici (1978); Itoh et al. (1979). (For MESA versions 11701 or
later): Screening is included via the prescription of Chugunov et al. (2007).
Thermal neutrino loss rates are from Itoh et al. (1996).
All simulations performed in § 4 were run on the ‘Penguin’ Cherry-Creek 2
cluster at the UNLV National Supercomputing Institute for High Performance
Computing and Communications in Nevada (https://www.nscee.edu/).
## References
* Alastuey & Jancovici (1978) Alastuey A., Jancovici B., 1978, ApJ, 226, 1034
* Angulo et al. (1999) Angulo C., et al., 1999, Nuclear Physics A, 656, 3
* Becker & Batygin (2013) Becker J. C., Batygin K., 2013, ApJ, 778, 100
* Bolmont et al. (2015) Bolmont E., Raymond S. N., Leconte J., Hersant F., Correia A. C. M., 2015, A&A, 583, A116
* Buchler & Yueh (1976) Buchler J. R., Yueh W. R., 1976, ApJ, 210, 440
* Cassisi et al. (2007) Cassisi S., Potekhin A. Y., Pietrinferni A., Catelan M., Salaris M., 2007, ApJ, 661, 1094
* Caswell et al. (2020) Caswell T. A., et al., 2020, matplotlib/matplotlib: REL: v3.3.1, doi:10.5281/zenodo.3984190
* Chugunov et al. (2007) Chugunov A. I., Dewitt H. E., Yakovlev D. G., 2007, Phys. Rev. D, 76, 025028
* Csizmadia et al. (2019) Csizmadia S., Hellard H., Smith A. M. S., 2019, A&A, 623, A45
* Cyburt et al. (2010) Cyburt R. H., et al., 2010, ApJS, 189, 240
* Dewitt et al. (1973) Dewitt H. E., Graboske H. C., Cooper M. S., 1973, ApJ, 181, 439
* Ferguson et al. (2005) Ferguson J. W., Alexander D. R., Allard F., Barman T., Bodnarik J. G., Hauschildt P. H., Heffner-Wong A., Tamanai A., 2005, ApJ, 623, 585
* Fuller et al. (1985) Fuller G. M., Fowler W. A., Newman M. J., 1985, ApJ, 293, 1
* Hairer et al. (2006) Hairer E., Lubich C., Wanner G., 2006, Geometric numerical integration: structure-preserving algorithms for ordinary differential equations. Springer Science & Business Media
* Hunter (2007) Hunter J. D., 2007, Computing in Science Engineering, 9, 90
* Hut (1981) Hut P., 1981, A&A, 99, 126
* Iglesias & Rogers (1993) Iglesias C. A., Rogers F. J., 1993, ApJ, 412, 752
* Iglesias & Rogers (1996) Iglesias C. A., Rogers F. J., 1996, ApJ, 464, 943
* Itoh et al. (1979) Itoh N., Totsuji H., Ichimaru S., Dewitt H. E., 1979, ApJ, 234, 1079
* Itoh et al. (1996) Itoh N., Hayashi H., Nishikawa A., Kohyama Y., 1996, ApJS, 102, 411
* Kluyver et al. (2016) Kluyver T., et al., 2016, in Loizides F., Scmidt B., eds, Positioning and Power in Academic Publishing: Players, Agents and Agendas. IOS Press, pp 87–90, https://eprints.soton.ac.uk/403913/
* Langanke & Martínez-Pinedo (2000) Langanke K., Martínez-Pinedo G., 2000, Nuclear Physics A, 673, 481
* Oda et al. (1994) Oda T., Hino M., Muto K., Takahara M., Sato K., 1994, Atomic Data and Nuclear Data Tables, 56, 231
* Paxton et al. (2011) Paxton B., Bildsten L., Dotter A., Herwig F., Lesaffre P., Timmes F., 2011, ApJS, 192, 3
* Paxton et al. (2013) Paxton B., et al., 2013, ApJS, 208, 4
* Paxton et al. (2015) Paxton B., et al., 2015, ApJS, 220, 15
* Paxton et al. (2018) Paxton B., et al., 2018, ApJS, 234, 34
* Paxton et al. (2019) Paxton B., et al., 2019, ApJS, 243, 10
* Perez & Granger (2007) Perez F., Granger B. E., 2007, Computing in Science Engineering, 9, 21
* Pols et al. (1995) Pols O. R., Tout C. A., Eggleton P. P., Han Z., 1995, MNRAS, 274, 964
* Portegies Zwart (2018) Portegies Zwart S., 2018, Science, 361, 979
* Portegies Zwart & McMillan (2018) Portegies Zwart S., McMillan S., 2018, Astrophysical Recipes: The Art of AMUSE. IOP Publishing, Bristol, UK
* Potekhin & Chabrier (2010) Potekhin A. Y., Chabrier G., 2010, Contributions to Plasma Physics, 50, 82
* Press et al. (1992) Press W. H., Teukolsky S. A., Vetterling W. T., Flannery B. P., 1992, Numerical recipes in C. The art of scientific computing. Cambridge Univ. Press
* Reimers (1975) Reimers D., 1975, Memoires of the Societe Royale des Sciences de Liege, 8, 369
* Rein & Liu (2012) Rein H., Liu S. F., 2012, A&A, 537, A128
* Rein & Spiegel (2015) Rein H., Spiegel D. S., 2015, MNRAS, 446, 1424
* Rein & Tamayo (2015) Rein H., Tamayo D., 2015, MNRAS, 452, 376
* Rein & Tamayo (2017) Rein H., Tamayo D., 2017, MNRAS, 473, 3351
* Rogers & Nayfonov (2002) Rogers F. J., Nayfonov A., 2002, ApJ, 576, 1064
* Sackmann et al. (1993) Sackmann I.-J., Boothroyd A. I., Kraemer K. E., 1993, ApJ, 418, 457
* Salpeter (1954) Salpeter E. E., 1954, Australian Journal of Physics, 7, 373
* Saumon et al. (1995) Saumon D., Chabrier G., van Horn H. M., 1995, ApJS, 99, 713
* Schröder & Smith (2008) Schröder K.-P., Smith R. C., 2008, MNRAS, 386, 155
* Strang (1968) Strang G., 1968, SIAM Journal on Numerical Analysis, 5, 506
* Tamayo et al. (2020) Tamayo D., Rein H., Shi P., Hernandez D. M., 2020, MNRAS, 491, 2885
* Timmes & Swesty (2000) Timmes F. X., Swesty F. D., 2000, ApJS, 126, 501
* Veras & Wyatt (2012) Veras D., Wyatt M. C., 2012, MNRAS, 421, 2969
* Zahn (1977) Zahn J. P., 1977, A&A, 500, 121
* Zahn (1989) Zahn J. P., 1989, A&A, 220, 112
* Zwart et al. (2020) Zwart S. P., Pelupessy I., Martínez-Barbosa C., van Elteren A., McMillan S., 2020, Communications in Nonlinear Science and Numerical Simulation, p. 105240
# GF-Flush: A GF(2) Algebraic Attack on
Secure Scan Chains
Dake Chen, Chunxiao Lin, Peter A. Beerel
###### Abstract
Scan chains provide increased controllability and observability for testing
digital circuits. The increased testability, however, can also be a source of
information leakage for sensitive designs. The state-of-the-art defenses to
secure scan chains apply dynamic keys to pseudo-randomly invert the scan
vectors. In this paper, we pinpoint an algebraic vulnerability of these
dynamic defenses that involves creating and solving a system of linear
equations over the finite field GF(2). In particular, we propose a novel
GF(2)-based flush attack that breaks even the most rigorous version of state-
of-the-art dynamic defenses. Our experimental results demonstrate that our
attack recovers keys as long as 500 bits in less than 7 seconds; the attack
times are about one hundredth of those of state-of-the-art SAT-based attacks on
the same defenses. We then demonstrate how our attacks can be extended to scan
chains compressed with Multiple-Input Signature Registers (MISRs).
###### Index Terms:
Hardware Security, Logic Locking, Dynamic Obfuscated Scan Chain, GF(2)
Analysis, Algebraic Attack
## I Introduction
The decentralized supply chain of modern integrated circuit (IC) design and
manufacturing raises significant concerns related to threats that include
intellectual property (IP) piracy [1] and Trojan insertion [2]. For many
designs, the scan chain used in manufacturing testing presents a significant
threat vector as it provides extensive controllability and observability of
chip internals to the attacker [3, 4, 5].
The state-of-the-art defenses involve applying dynamic keys to obfuscate the
scan chain [6, 7, 8]. They leverage a linear feedback shift register (LFSR)
that controls XOR gates along the scan chain to pseudo-randomly invert the
scan chain sequence. The pseudo-random sequence depends on the seed of
the LFSR, which must remain secret to ensure security. Recently, some SAT-based
attacks [9, 10] were proposed to unveil the seed by converting the scan flip-
flops to pseudo-inputs and outputs and thereby modeling the sequential circuit
and LFSR as a combinational circuit that can be analyzed through well-known
SAT attacks. The work [8] points out that this conversion from sequential to
combinational logic increases the number of SAT literals and clauses,
increasing the complexity and associated run-times of SAT attacks.
In contrast, a simple flush-and-reset attack was proposed in [11]. Here, all
flip-flops on the scan chain are reset to 0 and the attack examines the initial
sequence of scan out bits. Because the attacker can also reverse engineer the
location of the locking gates, they are able to reveal the key input values
from the scan out patterns. One recent dynamic obfuscation design [7, 8]
resists this reset attack by adding a shadow chain between the LFSR and scan
chain. Due to the presence of the shadow chain which has the same length as
LFSR, the initial scan out patterns remain zero and leak no information about
the secret seed.
In this paper, we propose a more comprehensive flush attack based on GF(2)
algebra that unveils the secret key of the dynamic scan locking defenses even
when protected by a shadow chain. In contrast to SAT-based attacks [9, 10]
which attack the scan chain coupled with locked combinational logic, our
attack isolates the scan chain, enabling the use of more computationally
scalable algebraic techniques used in crypto-analysis [12], including attacks
on LFSRs [13], and automatic test pattern generation [14, 15]. In particular,
the attack involves solving a system of linear equations over the finite field
GF(2) whose size scales linearly with the size of the key. We empirically
validate that the complexity of our attack scales as a low-degree polynomial,
recovering the key that is as long as 500 bits in less than 7 seconds. The
attack times are about one hundredth of state-of-the-art SAT based attacks on
the same defense.
We further consider the case when the only access to the scan chain outputs is
through test compression logic, such as a Multiple-Input Signature Register
(MISR). Because MISRs also consist of XOR gates and FFs, they can be modeled,
analyzed, and thus included in our attack. To the best of our knowledge, this
is the first attack on obfuscated scan chains that considers the impact of
test compression logic. Although slower with MISRs, we demonstrate that our
attack times remain manageable.
The remainder of this paper is organized as follows. Section II reviews the
background leveraged in this paper. Section III describes the proposed attack.
Section IV details experimental results of our attack. Some conclusions and
opportunities for future work are discussed in the last section.
## II Background
Figure 1: LFSR and basic structure of a dynamically secured scan chain
### II-A A Linear Feedback Shift Register (LFSR)
A Linear Feedback Shift Register (LFSR) is often used as a pseudo-random number
generator in many cryptographic and secure systems because of its light weight,
low overhead, and high throughput [16, 17].
The generic structure of an LFSR is shown in Figure 1, where $\lambda$ denotes
its length and the binary values $c_{0}$ to $c_{\lambda-1}$ determine its
feedback structure. The next-state equations can be represented as
$f^{t+1}_{i}=f^{t}_{i+1},\quad\text{for }i\in[0,\lambda-1)$ (1)

$f^{t+1}_{\lambda-1}=\sum_{j=0}^{\lambda-1}c_{j}f^{t}_{j}$ (2)
where $t$ and $t+1$ represent the current and next state, respectively,
$f_{i}^{t}$ denotes the value of stage $i$ of LFSR at time $t$, and all
operations are in GF(2).
The sequence generated by an LFSR is periodic and the period depends on the
values of $c_{i}$ and the initial state, or _seed_ of the LFSR. The maximum
period of an LFSR of length $\lambda$ is $2^{\lambda}-1$ [18]. The sequences
generated by LFSRs with maximum period are referred to as PN-sequences and
these are desired for secure systems as they are more difficult to break than
LFSRs with small periods.
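The recurrence of Eqs. 1–2 and the maximum-period behaviour can be sketched in a few lines. The taps below implement $x^{4}+x+1$, a primitive polynomial chosen purely for illustration (practical defenses use much longer LFSRs):

```python
# Hedged sketch of Eqs. (1)-(2): one update step of a Fibonacci LFSR
# over GF(2) with feedback taps c_j. The example taps implement the
# maximum-period polynomial x^4 + x + 1 (illustrative choice only).
def lfsr_step(f, c):
    """Advance state f = [f_0, ..., f_{lambda-1}] by one clock cycle."""
    fb = 0
    for cj, fj in zip(c, f):
        fb ^= cj & fj             # sum over GF(2) is XOR
    return f[1:] + [fb]           # f_i <- f_{i+1}; last stage <- feedback

c = [1, 1, 0, 0]                  # taps for x^4 + x + 1
state = [0, 0, 0, 1]              # nonzero seed
seen = set()
while tuple(state) not in seen:   # enumerate the state cycle
    seen.add(tuple(state))
    state = lfsr_step(state, c)
print(len(seen))                  # 15 = 2^4 - 1, a PN-sequence
```

Starting from any nonzero seed, this register visits all $2^{4}-1=15$ nonzero states before repeating, i.e., it generates a PN-sequence.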
### II-B Dynamically Obfuscated Scan Chains
Due to the effectiveness of SAT-based attacks [9] on static scan chain
obfuscation techniques [19], state-of-the-art defenses dynamically obscure
scan chains using XORs that are driven by an LFSR [6, 7, 8] and pseudo-
randomly invert the scan sequence. (MUXes can also be used to selectively
invert the scan bit by muxing between the $Q$ and $Q_{bar}$ outputs of the
scan FFs [6].) The basic structure of these schemes is shown in Figure 1, where
$\lambda$ represents the length of the LFSR and key, $N$ denotes the length of
scan chain, and $b$ represents the spacing of locking gates throughout the
chain. The most secure version of these methods updates the LFSR every clock
cycle, applying new key bits to the scan locking gates every cycle.
### II-C MISR
As the size of chips and number of scanned FFs increase, the latency and
memory requirements to shift out and process their stored values during test
grows. For this reason test compression techniques, involving both a
decompressor and compressor, have become an essential part of the design. The
decompressor expands one scanned-in sequence into many parallel scan chain
segments and the compressor compresses the outputs of many parallel scan
segments into one. The most commonly used compressor is a Multiple-Input
Signature Register (MISR) [20] illustrated in Figure 3,
Because the MISR can prohibit direct access to the scan outputs, it has
significant impact on all HW security attacks that rely on scan chain access,
including previous SAT-based attacks [9, 10]. Interestingly, as the MISR uses
XOR gates that are commonly used to obfuscate combinational logic, one might
think the MISR effectively encrypts the scan outputs.
### II-D Algebraic Analysis
LFSRs are commonly used in built-in-self-test structures and algebraic
analysis establishing the relationship between seed and outputs [14, 15] has
been used to find seeds and characteristic polynomials that lead to high test
coverage. Moreover, algebraic cryptanalysis, or algebraic attacks [13, 12], have
been widely used for attacking various ciphers. These attacks first find low-
degree equations to approximate the function of feedback shift registers (FSRs)
or algorithms based on their features, then leverage the XL algorithm [21] to
solve the resulting system of multivariate polynomial equations, thereby
acquiring the key bits. These algebraic techniques, however, have never been
applied to scan-chain locking. Considering that all operations in the LFSR,
scan-chain locking gates, and MISR are effectively XOR operations, we
hypothesize that an algebraic attack over GF(2) can be very efficient.
## III GF-Flush: A GF(2) Algebraic Attack
### III-A Algebraic Foundations of the Attack
Figure 2: Flow of the proposed attack
The basic flow of our proposed attack is illustrated in Figure 2. Similar to
previous attacks on the same defenses [10], we assume that the netlist is
reverse-engineered and thus the structural information about the LFSR $c_{i}$,
the length of the scan chain $N$, and the location of XOR gates $b$ are known
to the attacker. We also assume the attacker has access to an oracle, which in
this case amounts to a working scan-chain with the correct seed programmed in
the LFSR.
To obtain enough algebraic expressions, our attack shifts a sequence of logic
$0$s into the oracle scan chain obfuscated by the LFSR and captures the
corresponding scan outputs $\boldsymbol{o}$. This is known as _flushing_ the
scan chain [9]. As we show below, choosing logic $0$s to scan in instead of
random bits simplifies the algebraic expression of the scan output and the
corresponding final system of equations.
In particular, we can derive an algebraic representation of the secure scan
chain. The matrix representation of the LFSR states reveals many properties
and can be derived from Equation 2 as follows
$\begin{pmatrix}f^{t+1}_{0}\\ \vdots\\ f^{t+1}_{\lambda-2}\\ f^{t+1}_{\lambda-1}\end{pmatrix}=\begin{pmatrix}0&1&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&1\\ c_{0}&c_{1}&\cdots&c_{\lambda-1}\end{pmatrix}\begin{pmatrix}f^{t}_{0}\\ \vdots\\ f^{t}_{\lambda-2}\\ f^{t}_{\lambda-1}\end{pmatrix}$ (3)
where, $t$ and $t+1$ represent the current and next cycle, respectively. We
will refer to this transition matrix as $\boldsymbol{T}$. The state at any
time step $t^{\prime}$ can then be derived from the LFSR seed and
$\boldsymbol{T}$ as follows
$\begin{pmatrix}f^{t^{\prime}}_{0}\\ \vdots\\ f^{t^{\prime}}_{\lambda-2}\\ f^{t^{\prime}}_{\lambda-1}\end{pmatrix}=\begin{pmatrix}0&1&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&1\\ c_{0}&c_{1}&\cdots&c_{\lambda-1}\end{pmatrix}^{t^{\prime}}\begin{pmatrix}s_{0}\\ \vdots\\ s_{\lambda-2}\\ s_{\lambda-1}\end{pmatrix}$ (4)
To simplify this representation, we use the matrix and vector forms as follows
$\boldsymbol{f}^{t+1}=\boldsymbol{T}*\boldsymbol{f}^{t}$ (5)
$\boldsymbol{f}^{t^{\prime}}=\boldsymbol{T}^{t^{\prime}}*\boldsymbol{s}$ (6)
Using Equation 6, we can symbolically represent the key input of any locking
gate driven by the $i$th stage of the LFSR at time step $t^{\prime}$:
$f^{t^{\prime}}_{i}=(\boldsymbol{T}^{t^{\prime}}*\boldsymbol{s})[i]\\\ $ (7)
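The transition-matrix formulation above is easy to make concrete. The sketch below is a minimal Python rendering of Equations 3 to 7 (the helper names are our own): it builds the companion matrix $\boldsymbol{T}$ from the feedback coefficients $c_{i}$ and computes $\boldsymbol{T}^{t^{\prime}}\boldsymbol{s}$ over GF(2) by iterative squaring, cross-checked against a direct bit-level LFSR simulation.

```python
import numpy as np

def lfsr_transition_matrix(c):
    """Companion-form transition matrix T of Eq. (3) for feedback
    coefficients c = (c_0, ..., c_{lambda-1})."""
    lam = len(c)
    T = np.zeros((lam, lam), dtype=int)
    T[:-1, 1:] = np.eye(lam - 1, dtype=int)  # pure shift for stages 0..lambda-2
    T[-1, :] = c                             # feedback row for the last stage
    return T

def lfsr_state(T, seed, t):
    """State f^t = T^t s over GF(2), computed by iterative squaring."""
    M = np.eye(len(seed), dtype=int)
    P = T.copy()
    while t:
        if t & 1:
            M = (M @ P) % 2
        P = (P @ P) % 2
        t >>= 1
    return (M @ seed) % 2

def lfsr_step(c, f):
    """Direct bit-level LFSR update, used as a cross-check."""
    fb = int(np.dot(c, f)) % 2
    return np.append(f[1:], fb)
```

The iterative-squaring form also reflects the parallelization remark made later for the MISR experiments: successive powers of $\boldsymbol{T}$ can be precomputed independently.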
We observe that when logic $0$s go through the scan chain, they are simply
XORed with the key bits $f^{t^{\prime}}_{i}$. We can thus derive a symbolic
expression for the expected values of the scan-out signal. Let $\boldsymbol{o_{m}}$
denote the scan output associated with the $m$th scan input. We then
have
$\displaystyle\boldsymbol{o_{m}}=$
$\displaystyle(\boldsymbol{T}^{m}\boldsymbol{s})[0]+(\boldsymbol{T}^{m+b}\boldsymbol{s})[1]+(\boldsymbol{T}^{m+2b}\boldsymbol{s})[2]$
$\displaystyle+...+(\boldsymbol{T}^{m+(\lambda-1)b}\boldsymbol{s})[\lambda-1]$
(8)
By introducing an identity matrix $\boldsymbol{R}$ of size
$\lambda\times\lambda$ and factoring out $\boldsymbol{s}$, we can further simplify
this expression as follows
$\displaystyle\boldsymbol{o_{m}}=$
$\displaystyle[\boldsymbol{r_{0}}\boldsymbol{T}^{m}+\boldsymbol{r_{1}}\boldsymbol{T}^{m+b}+\boldsymbol{r_{2}}\boldsymbol{T}^{m+2b}$
$\displaystyle+...+\boldsymbol{r_{\lambda-1}}\boldsymbol{T}^{m+(\lambda-1)b}]\boldsymbol{s}$
(9)
where $\boldsymbol{r_{i}}$ is the $i$th row of $\boldsymbol{R}$. The bracketed
term
$\boldsymbol{a}=\boldsymbol{r_{0}}\boldsymbol{T}^{m}+...+\boldsymbol{r_{\lambda-1}}\boldsymbol{T}^{m+(\lambda-1)b}$
is a $1\times\lambda$ row vector. Applying the symbolic equation for
$\boldsymbol{o_{m}}$ repeatedly over $\lambda$ clock cycles and extracting the
row vector $\boldsymbol{a}$ from each, we can compose a
system of linear equations in GF(2)
$\boldsymbol{As}=\boldsymbol{o}$ (10)
where $\boldsymbol{A}$ consists of $\lambda\ \boldsymbol{a}$’s and
$\boldsymbol{o}$ is the corresponding captured scan outputs. Our attack
completes by solving this system of equations in GF(2).
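Solving Equation 10 requires only Gaussian elimination over GF(2). The snippet below is a self-contained sketch of that step (our own illustration, not the authors' MATLAB `gflineq` flow): it reduces the augmented system with XOR row operations and returns one particular solution together with the rank, from which the $2^{\lambda-\mathrm{rank}}$ candidate seeds follow.

```python
import numpy as np

def solve_gf2(A, o):
    """Gauss-Jordan elimination over GF(2) for A s = o.
    Returns (one particular solution or None if inconsistent, rank of A)."""
    A = A.copy().astype(int)
    o = o.copy().astype(int)
    n, m = A.shape
    pivots, row = [], 0
    for col in range(m):
        piv = next((r for r in range(row, n) if A[r, col]), None)
        if piv is None:
            continue                       # free column, no pivot here
        A[[row, piv]] = A[[piv, row]]      # swap pivot row into place
        o[[row, piv]] = o[[piv, row]]
        for r in range(n):
            if r != row and A[r, col]:
                A[r] ^= A[row]             # XOR row elimination
                o[r] ^= o[row]
        pivots.append(col)
        row += 1
    if o[row:].any():                      # a zero row with o = 1: no solution
        return None, row
    s = np.zeros(m, dtype=int)
    for r, col in enumerate(pivots):       # free variables are set to 0
        s[col] = o[r]
    return s, row
```

When the rank is $k$ short of $\lambda$, enumerating the null space on top of this particular solution yields the $2^{k}$ candidate seeds discussed in the next subsection.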
### III-B Analysis of the Proposed Attack
Since the system of linear equations in Eq. 10 is based on the physical
structure of the circuit, it is guaranteed to be solvable. If $\boldsymbol{A}$
is full-rank, the solution yields the unique secret seed vector
$\boldsymbol{s}$. Otherwise, the solution yields a set of potential seed
vectors characterized by a particular solution of
$\boldsymbol{As}=\boldsymbol{o}$ along with the null space of
$\boldsymbol{A}$. More precisely, when the rank is $k$ less than $\lambda$,
there are $2^{k}$ possible seeds. These seeds can be used in further analysis,
such as brute-force or SAT attacks, possibly in conjunction with attacking the
combinational logic.
State-of-the-art secure scan chains are protected by a shadow chain, which
prevents the scan chain from being influenced by the LFSR for the first $\lambda$
clock cycles [8]. Because the scan chain is longer than the LFSR, the first
scan output fully affected by the LFSR is scanned out at cycle $N+1$. Interestingly,
our attack circumvents this defense by simply skipping the first $N$ scan
outputs and collecting the next $\lambda$ scan outputs to compose the matrix
$\boldsymbol{A}$.
### III-C Attack on MISR
Figure 3: Structure of a secured scan chain with MISR
Figure 3 shows the structure of a dynamically secured scan chain with a MISR,
where the length of every chain is $N$, the Boolean values $d_{i}$ define the
structure of the MISR, and $D_{i}$ represent the internal Boolean state of the
MISR, which is available for reading after every round of tests. We can observe
that the MISR thwarts direct access to the scan outputs $im_{i}$. Importantly,
the $h$ XOR gates in the MISR act as locking gates which corrupt the scan outputs
$im$ and render ineffective any attack that demands direct access to scan outputs.
Therefore, it is important to integrate the MISR into our algebraic model.
In our attack on scan chains with a MISR, we still shift a sequence of logic
0s into the scan chain; after $2N$ cycles, the MISR forms the signature outputs
$D^{2N}_{i}$, which we can read out.
In the following equations, all superscripts denote time steps. First, we
derive the scan outputs $im_{i}$ from the LFSR key bits $f$:
$im^{t}_{i}=\sum^{N-1}_{r=0}f^{t-N+r}_{r+iN}$ (11)
where $im^{t}_{i}$ denotes the scan output of $i^{th}$ chain at cycle $t$, the
sum is addition in GF(2), and all $f$ can be obtained using Equation 6. Then,
we can derive $D^{t}_{i}$ as follows
$D^{t}_{0}=im^{t-1}_{0}+d_{0}*D^{t-1}_{h-1}$ (12)
$D^{t}_{i}=im^{t-1}_{i}+D^{t-1}_{i-1}+d_{i}*D^{t-1}_{h-1}\ \ for\ i>0$ (13)
where $D^{t}_{i}$ represents the internal values of the MISR stage $i$ at
cycle $t$ and the initial $D_{i}^{0}$ are reset to 0. After $2N$ cycles, the
signature outs are formed and available for reading:
$signature\ out_{i}=D^{2N}_{i}$ (14)
where every signature output is an equation in the seed bits; we thus
obtain $h$ such equations in each round of test.
We do not reset the LFSR but, as is typical, we reset the MISR at the
beginning of every test sequence. Hence we require $\lambda/h=h*N/h=N$ tests,
each yielding $h$ equations in the seed bits, to obtain a sufficient
number of equations to recover the secret seed. As in the analysis of
Section III-B, a unique seed vector $\boldsymbol{s}$ is acquired when
these equations are full-rank; otherwise, we acquire a set of
potential seed vectors.
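The MISR recurrence of Equations 12 and 13 is straightforward to simulate, which is useful both for generating the symbolic signature equations and for validating them against an oracle. A minimal sketch (the helper name is ours; the all-zero initial state follows the reset assumption in the text):

```python
import numpy as np

def misr_signature(scan_outs, d):
    """Apply the MISR update of Equations 12 and 13 to a stream of per-cycle
    scan outputs. scan_outs has shape (cycles, h); d holds the taps
    d_0..d_{h-1}. The internal state D is reset to all zeros."""
    h = len(d)
    D = np.zeros(h, dtype=int)
    for im in scan_outs:
        fb = D[h - 1]                                 # D^{t-1}_{h-1}, feedback bit
        newD = np.empty(h, dtype=int)
        newD[0] = im[0] ^ (d[0] & fb)                 # Eq. 12
        for i in range(1, h):
            newD[i] = im[i] ^ D[i - 1] ^ (d[i] & fb)  # Eq. 13
        D = newD
    return D
```

Running this forward with symbolic rather than numeric $im$ values is what turns each signature output into one linear equation in the seed bits.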
## IV Experimental Results
### IV-A Experiment Setup
Our first experiment compares our algebraic attack to SAT-based attacks on
scan chains and thus excludes a MISR. (In practice, there often exists a
bypass signal to circumvent the MISR; this analysis considers the case where the
attacker has access to such a signal.) Both experiments demonstrate results for
different key lengths. Since our attack isolates the scan chain, LFSR, and
MISR, there is no need to model the combinational logic driven by the scan
chain. We assume the key length $\lambda$ equals the scan chain length ($N$
without a MISR and $hN$ with a MISR), i.e., we set $b=1$. The update of the
LFSR is synchronized to the scan clock, which is presumed to be the most
secure configuration of the defense. In addition, we assume the existence of a
shadow chain of length $\lambda$.
We used MATLAB to generate the LFSR transition matrix $\boldsymbol{T}$,
transition matrix of the secure scan chain $\boldsymbol{A}$ and MISR signature
out recursively. We then utilized the MATLAB function $\textit{gflineq}()$
and, when necessary, $\textit{gf2null}()$ to identify all the solutions over
GF(2). For each key length, we randomly chose 10 configuration vectors $c$,
constrained to have $c_{0}=1$, made all $d_{i}=1$, and measured the average
run-time including the generation of matrix $\boldsymbol{A}$ and
$\boldsymbol{T}$ and the solving of the system of linear equations. All
experiments were run on Intel i7-8700 CPU running at 3.20 GHz with 16-GB RAM.
### IV-B Analysis of Basic Obfuscated Scan Chains
Figure 4: Average attack run-times vs. LFSR length $\lambda$
Figure 4 plots the average attack run-time on the defense without a MISR as the
number of key bits $\lambda$ ranges from 3 to 500. Even with 500 key bits,
the attack on average took less than 7 seconds. The run-time trend suggests the
complexity of our attack scales as no more than a low-degree polynomial. This
is expected because solving a system of linear equations has complexity no
worse than $O(\lambda^{3})$. To further show the scalability of our proposed
attack, we also tried $\lambda=1000$ and the attack took 66 seconds.
Interestingly, 87% of the random configurations led to a unique seed; however,
the average number of seeds is skewed by a few extreme cases and is
$43.8$. We further experimented with $\lambda=500$ and explored 1000 different
random configurations of $c$. The average number of seeds was 2.5, with the vast
majority of cases yielding a unique seed. We emphasize, however, that for
configurations where we could verify that the characteristic polynomial of the
LFSR is primitive, a unique seed was always unveiled.
### IV-C Analysis of Impact of MISRs
Figure 5: Average attack run-times with different size MISRs
Figure 5 demonstrates the average attack run-times on the dynamically secured
scan chain with different lengths of MISRs $h$ as a function of varying LFSR
length $\lambda$ constrained by the relationship $\lambda=h*N$. The
experiments with $\lambda>300$ timed-out after 8 hours for smaller values of
$h$. This is because with a MISR, we obtain only $h$ equations every test
round (i.e., $2N$ cycles) compared to the case without a MISR which produces
roughly one equation every cycle. For practical MISR lengths that are
typically greater than 16 [22], the attack run-time remains under 8 hours for
LFSR lengths under $250$. In all cases, the run-time is dominated by the
computation of the various powers of the system matrix $T$. (Although not
experimentally tested, we note that this computation can be parallelized
across multiple processors by pre-computing increasingly larger powers of $T$
via iterative squaring.)
### IV-D Comparison to Other Attacks
State-of-the-art attacks on dynamically secured scan chains are based on SAT
attacks [9, 10]. In particular, [9] observed that the LFSR logic can be
unrolled and combined with the associated combinational logic circuit and then
attacked with SAT. They tested their attack framework with various ISCAS
benchmarks and demonstrated that even with 368 key bits they can successfully
uncover the LFSR seed in less than one hour. However, their attack assumed the
combinational logic was not logic locked, in contrast to what is advocated in
[8]. This is an important limitation because several combinational logic
techniques are known to be SAT-resistant [23, 24] which would complicate this
type of attack. Furthermore, the SAT-based attacks rely on the access to scan
outputs. The presence of a MISR may restrict this access and should not be
neglected.
In contrast, our proposed attack isolates the scan chain and in particular
does not involve modeling or attacking the combinational logic and thus
circumvents any effort to also unlock the combinational logic. Moreover,
because it leverages the algebraic nature of the problem, it can integrate the
MISR into the attack model, and for the same defenses without a MISR, it
recovers the set of potential seeds orders of magnitude faster than equivalent
SAT-based attacks.
## V Conclusions and Discussion
This paper presents a scalable GF(2) algebraic attack on scan chains that are
obfuscated by dynamic keys generated by an LFSR. The experimental results
demonstrate that the defenses with 500 key bits can be cracked in 7 seconds.
The power of the proposed attack stems from the observation that all
operations in the defensive circuitry can be modeled in GF(2). The results
highlight that while SAT-attacks are powerful, algebraic attacks should not be
overlooked as they can be significantly more efficient.
The results lead to several ideas of improving secure scan chains to protect
against such algebraic attacks. For example, obfuscating the structure of the
LFSR or using non-linear LFSRs [25, 26] may make anticipating the scan output
vectors more challenging. Studying whether such additional defenses can be
circumvented with more sophisticated algebraic attacks becomes an important
and interesting area of future work.
## References
* [1] J. A. Roy, F. Koushanfar, and I. L. Markov, “Ending Piracy of Integrated Circuits,” _Computer_ , vol. 43, no. 10, pp. 30–38, 2010.
* [2] M. Tehranipoor and F. Koushanfar, “A Survey of Hardware Trojan Taxonomy and Detection,” _IEEE Des. Test. Comput._ , vol. 27, no. 1, pp. 10–25, 2010\.
* [3] B. Yang, K. Wu, and R. Karri, “Secure Scan: A Design-for-Test Architecture for Crypto Chips,” _IEEE Trans. Comput.-Aided Design Integr. Circuits Syst._ , vol. 25, no. 10, pp. 2287–2293, 2006.
* [4] R. Nara, N. Togawa, M. Yanagisawa, and T. Ohtsuki, “Scan-based Attack against Elliptic Curve Cryptosystems,” in _15th ASP-DAC_ , 2010.
* [5] J. DaRolt, A. Das, G. Natale, M. Flottes, B. Rouzeyre, and I. Verbauwhede, “A New Scan Attack on RSA in Presence of Industrial Countermeasures,” in _COSADE_ , 2012.
* [6] R. Karmakar, S. Chattopadhyay, and R. Kapur, “A Scan Obfuscation Guided Design-for-Security Approach for Sequential Circuits,” _IEEE Trans. Circuits Syst. II, Exp. Briefs_ , vol. 67, no. 3, pp. 546–550, 2020.
* [7] X. Wang, D. Zhang, M. He, D. Su, and M. Tehranipoor, “Secure Scan and Test Using Obfuscation Throughout Supply Chain,” _IEEE Trans. Comput.-Aided Design Integr. Circuits Syst_ , vol. 37, no. 9, pp. 1867–1880, 2018\.
* [8] M. M. Rahman, A. Nahiyan, S. Amir, F. Rahman, F. Farahmandi, D. Forte, and M. Tehranipoor, “Dynamically Obfuscated Scan Chain To Resist Oracle-Guided Attacks On Logic Locked Design,” _IACR Cryptol. ePrint Arch._ , vol. 2019, p. 946, 2019.
* [9] L. Alrahis, M. Yasin, N. Limaye, H. Saleh, B. Mohammad, M. Alqutayri, and O. Sinanoglu, “ScanSAT: Unlocking Static and Dynamic Scan Obfuscation,” _IEEE Transactions on Emerging Topics in Computing_ , pp. 1–1, 2019.
* [10] N. Limaye and O. Sinanoglu, “DynUnlock: Unlocking Scan Chains Obfuscated using Dynamic Keys,” in _DATE_ , 2020, pp. 270–273.
* [11] G. Sengar, D. Mukhopadhyay, and D. R. Chowdhury, “Secured Flipped Scan-Chain Model for Crypto-Architecture,” _IEEE Trans. Comput.-Aided Design Integr. Circuits Syst._ , vol. 26, no. 11, pp. 2080–2084, 2007.
* [12] N. T. Courtois and G. V. Bard, “Algebraic Cryptanalysis of the Data Encryption Standard,” in _IMA International Conference on Cryptography and Coding_. Springer, 2007, pp. 152–169.
* [13] N. T. Courtois and W. Meier, “Algebraic Attacks on Stream Ciphers with Linear Feedback,” in _Advances in Cryptology — EUROCRYPT_ , E. Biham, Ed. Springer, 2003, pp. 345–359.
* [14] Li-Ren Huang, Jing-Yang Jou, and Sy-Yen Kuo, “Gauss-elimination-based generation of multiple seed-polynomial pairs for LFSR,” _IEEE Trans. Comput.-Aided Design Integr. Circuits Syst._ , vol. 16, no. 9, pp. 1015–1024, 1997\.
* [15] H. Wunderlich, “Self test using unequiprobable random patterns,” in _Proc. IEEE 17th International Symposium on Fault-Tolerant Computing, FTCS-17_ , 1987.
* [16] R. Shiva Prasad, A. Siripagada, S. Selvaraj, and N. Mohankumar, _Random Seeding LFSR-Based TRNG for Hardware Security Applications_. Springer, 2019, pp. 427–434.
* [17] J. Melià-Seguí, J. Garcia-Alfaro, and J. Herrera-Joancomartí, “Multiple-polynomial LFSR based pseudorandom number generator for EPC Gen2 RFID tags,” in _IECON 2011 - 37th Annual Conference of the IEEE Industrial Electronics Society_ , 2011, pp. 3820–3825.
* [18] W. Wardlaw, “A Matrix Model for the Linear Feedback Shift Register,” Naval Research Lab, Tech. Rep., July 1989.
* [19] R. Karmakar, S. Chattopadhyay, and R. Kapur, “Encrypt Flip-Flop: A Novel Logic Encryption Technique For Sequential Circuits,” _ArXiv_ , vol. abs/1801.04961, 2018.
* [20] F. Elguibaly and M. W. El-Kharashi, “Multiple-input signature registers: an improved design,” in _PACRIM._ , vol. 2, 1997, pp. 519–522 vol.2.
* [21] N. Courtois, A. Klimov, J. Patarin, and A. Shamir, “Efficient Algorithms for Solving Overdefined Systems of Multivariate Polynomial Equations,” in _Advances in Cryptology — EUROCRYPT_ , B. Preneel, Ed. Berlin, Heidelberg: Springer, 2000, pp. 392–407.
* [22] K. N. Devika and R. Bhakthavatchalu, “Programmable MISR modules for logic BIST based VLSI testing,” in _ICCICCT_ , 2016, pp. 699–703.
* [23] M. Yasin, A. Sengupta, M. T. Nabeel, M. Ashraf, J. Rajendran, and O. Sinanoglu, “Provably-secure Logic Locking: From Theory to Practice,” in _ACM CCS_ , 2017, pp. 1601–1618.
* [24] K. Shamsi, T. Meade, M. Li, D. Z. Pan, and Y. Jin, “On the Approximation Resiliency of Logic Locking and IC Camouflaging Schemes,” _IEEE Trans. Inf. Forensics Security_ , vol. 14, no. 2, pp. 347–359, 2019\.
* [25] M. Hell, T. Johansson, A. Maximov, and W. Meier, _The Grain Family of Stream Ciphers_. Springer, 2008, pp. 179–190.
* [26] S. W. Golomb _et al._ , _Shift Register Sequences_. Aegean Park Press, 1967.
# On the Degeneracy of Spin Ice Graphs, and Its Estimate via the Bethe
Permanent
Francesco Caravelli Theoretical Division (T4), Los Alamos National
Laboratory, Los Alamos, New Mexico 87545, USA Michael Saccone Center for
Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, New Mexico
87545, USA Theoretical Division (T4), Los Alamos National Laboratory, Los
Alamos, New Mexico 87545, USA Cristiano Nisoli Theoretical Division (T4),
Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA
###### Abstract
The concept of spin ice can be extended to a general graph. We study the
degeneracy of spin ice graphs on arbitrary interaction structures via graph
theory. Via the mapping of spin ices to the Ising model, we clarify whether the
the inverse mapping is possible via a modified Krausz construction. From the
gauge freedom of frustrated Ising systems, we derive exact, general results
about frustration and degeneracy. We demonstrate for the first time that every
spin ice graph, with the exception of the 1D Ising model, is degenerate. We
then study how degeneracy scales in size, using the mapping between Eulerian
trails and spin ice manifolds, and a permanental identity for the number of
Eulerian orientations. We show that the Bethe permanent technique provides
both an estimate and a lower bound to the frustration of spin ices on
arbitrary graphs of even degree. While such a technique can also be used to
obtain an upper bound, we find that in all the examples we studied but one,
another upper bound based on the Schrijver inequality is tighter.
## I Introduction
Ever since the discovery of degeneracy of ground states in constrained,
disordered systems obeying the so-called ice rule bernal ; pauling ; lieb and
their subsequent experimental implementation in magnetic systems called Spin
Ices ramirez , there has been an active interest in frustrated materials.
Recently, the idea has been extended both theoretically and experimentally to
artificial realizations called artificial spin ices (ASI) colloq which have
allowed for design of frustration to generate exotic behaviors in their
collective physics. ASIs are arrays of interacting, shape-anisotropic nano-
islands, each of which can be modeled as a binary Ising spin. They can be
characterized directly in real time and real space via a variety of techniques
Wang1 ; Sandra ; Bader .
A spin ice can be described abstractly as a set of binary spins arranged on
the edges of a lattice, such that its low-energy configurations obey the ice
rule. This rule dictates that for each vertex the absolute difference between
spins pointing toward the vertex and spins pointing out of the vertex is zero
if the vertex has even coordination, or one if the vertex has odd
coordination.
Recently, there has been an intense investigation both in the physical
underpinnings and control of artificial spin ices and their emergent
interactions reddim ; Heyderman ; Canals1 ; Nisoli1 ; Morgan ; Budrikis ;
Branford ; Ryzhkin ; Moeller ; Chern2 ; Le ; Chern3 ; Gliga ; Nisoli4 ;
Gilbert2 ; Bhat ; CaravelliMASI , with a broad interest in applications,
ranging from topological order Castelnovo1 ; topor , memory in materials
Lammert2 ; GilbertMem , disordered systems and slow relaxation Cugliandolo2 ,
novel resistive switching memristors ; caravelli , and embedding logic
circuits in the magnetic substrate logic2 ; logic3 ; logic4 ; logic5 . These
new materials are highly controllable gartside ; WangYL2 ; WangYL ; colloq ;
Nisoli4 ; vavassori0 ; vavassori and can be used to realize novel models via
engineering geometric frustration Nisoli4 ; Schanilec ; Nisoli8 ; mol .
The collective behavior of these artificial structures typically depends on
the geometry, which is open to design Morrison ; Nisoli4 via lithographic
printing. As novel lithographic techniques are discovered, the control of the
dimensionality of these materials requires new theoretical tools to understand
the frustration of non-planar artificial spin ices Ladak3d ; Ladak3d2 .
Moreover, it is now possible to embed general spin ice graphs into quantum
annealers QuantumASI . For these reasons, this paper is focused on a more
theoretical and extensible approach for calculating lower and upper bounds to
the degeneracy of the ground state of generic spin ices.
Previous work Field set up the concept of spin ice on a general graph. Spin
ice concepts are often translatable into the language of graph theory, and
vice versa. For instance in the mathematical literature a balanced graph (with
even degree nodes) is a directed graph whose indegrees is equal to the
outdegrees Graph1 . In spin ice language, this corresponds to a configuration
of the ice manifold Field . Thus, the problem of finding the Pauling entropy
pauling of a spin ice graph is equivalent to the problem of counting the
number of balanced digraphs. Moreover, it is one of the many celebrated
results by Euler that only graphs which can be balanced via an orientation
support an Eulerian trail Euler1 . In this sense, many results from graph
theory can be borrowed to study frustration in spin ice.
In this paper we use and generalize some known results from graph
theory. We demonstrate for the first time that a general spin ice graph is
always degenerate, with the exception of the trivial case: the one-dimensional
Ising model. Then, because the scaling of degeneracy with size is fundamental
to the notion of Pauling entropy in spin ices, we compute lower and upper
bounds on the ground state degeneracy of several spin ices.
In the first part of the paper, we use the Line Graph dual representation
graph , and derive properties for the effective Ising model on arbitrary
graphs. In particular, we use the Krausz coarse-graining procedure to show
that from the effective Ising model there is a well-known technique to obtain
the original spin ice interaction graph.
In the second part of the paper, we focus on estimation techniques. For even-
degree graphs there exists a permanental identity which in principle, but not
in practice, allows for the evaluation of the degeneracy of the spin ice.
However, using the Bethe Permanent and the Schrijver bound, it is possible to
obtain lower and upper bounds to geometric frustration for the cases in which
the degree of the graph is even. We apply these bounds to the square lattice,
the triangular tiling, the cubic lattice, and tournaments.
## II Graph theory for Spin Ice
General graph theoretic approaches are common tools in Statistical Physics
Baxter ; Fisher ; CaravelliMarkopoulou . Previously, one of us has discussed
the notion of spin ice on a general undirected graph $\mathcal{G}$, where a
spin configuration may be thought of as a directed graph, considered for its
Coulomb phase properties, and shown that charge correlations are computable
from graph spectral analysis Field . We use the same approach here, but unlike
the previous approximate study, we derive exact results for the degeneracy of
the ice manifold. Later in the paper we will also introduce a way to obtain
estimates (which double as lower bounds), but here we prefer to keep the
discussion general.
Consider a set of spins $s_{j}$ lying on the bonds of a graph, and the
Hamiltonian
$H=J\sum_{v}(\sum_{j\rightarrow v}\pm s_{j})^{2},$ (1)
where $v$ are the vertices of the graph $\mathcal{G}$, and $s_{j}$ have a
certain orientation, such that the minimum of the energy corresponds to
minimal absolute value of charge, defined as the difference between vertices
pointing in or out.
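As a concrete illustration of the Hamiltonian (our own sketch, not from the paper), the snippet below evaluates the vertex-charge energy on a 4-cycle using the directed incidence convention introduced below (+1 where an edge enters a vertex, -1 where it leaves), and enumerates the ice manifold by brute force.

```python
import numpy as np
from itertools import product

def hamiltonian(B, s, J=1.0):
    """Vertex-charge Hamiltonian H = J * sum_v (sum_beta B[v, beta] s_beta)^2."""
    q = B @ s                     # vertex charges
    return J * float(np.sum(q ** 2))

# 4-cycle with reference orientation 0 -> 1 -> 2 -> 3 -> 0; entries are
# +1 where an edge enters a vertex and -1 where it leaves it.
B = np.array([[-1,  0,  0,  1],
              [ 1, -1,  0,  0],
              [ 0,  1, -1,  0],
              [ 0,  0,  1, -1]])

energies = {s: hamiltonian(B, np.array(s)) for s in product([-1, 1], repeat=4)}
ground = min(energies.values())
ice_manifold = [s for s, E in energies.items() if E == ground]
# Every vertex has degree 2, so the ice rule (one in, one out) can be
# satisfied exactly; the two ground states are the two Eulerian
# orientations of the cycle.
```

This brute-force enumeration is only feasible for tiny graphs, which is precisely why the estimation techniques developed later in the paper are needed.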
To see the connection between frustrated spins in the system, consider
$e=|E(\mathcal{G})|$, the total number of edges (unordered pairs of adjacent
vertices) of the graph graph , while $n=|V(\mathcal{G})|$ is the total number
of nodes. Let us first introduce a few graph-theoretical constructs in order
to fix the notation. For a generic and undirected graph $\mathcal{G}$ consider
the (undirected) incidence matrix $B$ of size $n\times e$ with entries
$B_{i\beta}$, where $i$ is an integer between $1$ and $n$ on the set of
vertices and $\beta$ is an integer between $1$ and $e$ on the set of edges,
such that:
$B_{i\beta}=\left\\{\begin{array}[]{rl}1&\text{if the edge $\beta$ contains
the vertex $i$},\\\ 0&\text{otherwise }.\end{array}\right.$ (2)
Let us now consider instead the directed incidence matrix, of
size $n\times e$, constructed as follows. First, assign an orientation to each
edge of the graph. This can be thought of as a possible spin configuration Field
. Given such an orientation $\mathcal{O}$, we assign the matrix elements of
$B_{v\beta}=\begin{cases}0&\text{if the edge }\beta\text{ is not incident to
the vertex }v\\\ 1&\text{if the edge, given the orientation $\mathcal{O}$,
enter $v$}\\\ -1&\text{if the edge, given the orientation $\mathcal{O}$,
leaves $v$}\end{cases}$ (3)
(we use latin indices for vertices and greek for edges or spins).
Crucially, we can rewrite the Hamiltonian for a generic spin ice as
$H=J\sum_{v=1}^{n}\Big(\sum_{\beta=1}^{e}B_{v,\beta}s_{\beta}\Big)^{2}=J\sum_{v=1}^{n}\sum_{\beta,\beta^{\prime}=1}^{e}B_{v,\beta}B_{v,\beta^{\prime}}s_{\beta}s_{\beta^{\prime}}$
(4)
Swapping the vertex and spin summations, we write
$H=J\sum_{\beta,\beta^{\prime}=1}^{e}Q_{\beta,\beta^{\prime}}s_{\beta}s_{\beta^{\prime}},$
(5)
where
$Q_{\beta,\beta^{\prime}}=\sum_{v}B_{v,\beta}B_{v,\beta^{\prime}}\equiv(B^{t}B)_{\beta,\beta^{\prime}}$
is symmetric.
We now note the following. Given the directed incidence matrix, we have
$Q_{\beta,\beta^{\prime}}=\begin{cases}2&:\text{if }\beta=\beta^{\prime}\\ 0&:\text{if the spins $\beta$ and $\beta^{\prime}$ have no vertex in common}\\ 1&:\text{if both spins $\beta$, $\beta^{\prime}$ leave or enter a common vertex $v$}\\ -1&:\text{if one spin $\beta$ leaves a common vertex $v$ and $\beta^{\prime}$ enters it, and vice versa}.\end{cases}$ (6)
Given the matrix $Q$, we can write $Q=2I-A$, thereby defining $A$, a
weighted adjacency matrix with support on the line graph
$\mathcal{L}(\mathcal{G})$. In order to gain some intuition about the matrix
$A$, let us discuss the non-directed case first.
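As a quick sanity check of the sign structure of $Q$ (our own illustration), one can compute $Q=B^{t}B$ directly for a tiny oriented graph, a two-edge path, using the incidence convention of Eq. (3): edges meeting a vertex with one entering and one leaving produce an off-diagonal entry of $-1$.

```python
import numpy as np

# Oriented two-edge path 0 -> 1 -> 2: edge a = (0, 1), edge b = (1, 2).
# Directed incidence convention of Eq. (3): +1 where an edge enters a
# vertex, -1 where it leaves it.
B = np.array([[-1,  0],   # vertex 0: edge a leaves
              [ 1, -1],   # vertex 1: edge a enters, edge b leaves
              [ 0,  1]])  # vertex 2: edge b enters
Q = B.T @ B
# Diagonal entries equal 2; edges a and b meet vertex 1 with opposite
# incidence signs (one enters, one leaves), giving the off-diagonal -1.
assert np.array_equal(Q, np.array([[2, -1], [-1, 2]]))
```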
### II.1 Undirected Line Graphs
We start by defining line graphs, which are graphs constructed from an
undirected graph $\mathcal{G}$ such that the edges of $\mathcal{G}$ become the
vertices of $\mathcal{L}(\mathcal{G})$. The edges of the line graph are then
constructed based on the connectivity of the original graph, as follows
Whitney ; Krausz ; Harary ; Beineke . (It is interesting to note that here
there is a mismatch with the original literature in graph theory, starting with
the original work of Harary Harary (1965): the original line graph of a digraph
did not have any negative values; e.g., if two edges do not have zero sum, a
value of zero is assigned.)
Let $\mathcal{G}=(V,E)$ denote a graph with vertex set
$V=\\{v_{1},v_{2},...,v_{n}\\}$ and edge set $E=\\{e_{1},e_{2},...,e_{p}\\}$.
Figure 1: The Line Graph construction. Black vertices and dashed lines
correspond the original graph $\mathcal{G}$, while grey vertices and the solid
lines correspond to $\mathcal{L}(\mathcal{G})$.
Each vertex $\tilde{v}\in{\widetilde{V}}({\mathcal{L}}(\mathcal{G}))$
corresponds to an edge $e\in E(\mathcal{G})$. Two vertices $\tilde{v}_{1}$ and
$\tilde{v}_{2}$ in $\widetilde{V}(\mathcal{L}(\mathcal{G}))$ are adjacent if
and only if the edges in $\mathcal{G}$ corresponding to $\tilde{v}_{1}$ and
$\tilde{v}_{2}$ share a vertex. The correspondence between $\mathcal{G}$ and
$\mathcal{L}(\mathcal{G})$ is injective but not surjective: from a given graph
$\mathcal{G}$ we can construct only one $\mathcal{L}(\mathcal{G})$, and an
example is provided in Fig. 1. Yet, in general it is not true that any graph
can be thought of as the line graph of another graph. In fact, according to the
Beineke classification, there are 9 minimal graphs that are not line
graphs of any graph, and each graph containing one of them as an induced
subgraph is thus not a line graph of any other graph. We will discuss this
later in detail Beineke .
Given a graph $\mathcal{G}$, we can construct its line graph using the
following procedure:
1. 1.
Enumerate the vertices of $\mathcal{G}$.
2. 2.
Enumerate the edges of $\mathcal{G}$ with a fixed prescription (see example
below).
3. 3.
If two edges share a vertex, draw a bold line between them.
4. 4.
Remove $\mathcal{G}$ and its enumeration.
What is left is the line graph of $\mathcal{G}$, $\mathcal{L}(\mathcal{G})$.
Consider now the Kirchhoff matrix, obtained from the (undirected) incidence
matrix $B$ of the graph $\mathcal{G}$.
The Kirchhoff matrix $K$ is the $p\times p$ matrix built from $B$, such that:
$K=B^{t}B,$ (7)
$B^{t}$ being the transpose of $B$. A well-known theorem now gives the
relationship between the incidence matrix and the adjacency matrix of the line
graph $\mathcal{L}(\mathcal{G})$:
Let $\mathcal{G}$ be a graph with $p$ edges and $n$ vertices and let
$\mathcal{L}(\mathcal{G})$ be its line graph. Then we have
$K=A+2\ I,$ (8)
where $I$ is the $p\times p$ identity matrix, and $A$ is the adjacency matrix
of $\mathcal{L}(\mathcal{G})$.
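This relation can be verified numerically: each column of $B$ has exactly two unit entries, so the diagonal of $B^{t}B$ is $2$, while off-diagonal entries count shared endpoints. A small self-check on the triangle graph (our own illustration, with hypothetical helper names):

```python
import numpy as np
from itertools import combinations

def incidence_matrix(n, edges):
    """Undirected incidence matrix B of Eq. (2)."""
    B = np.zeros((n, len(edges)), dtype=int)
    for j, (u, v) in enumerate(edges):
        B[u, j] = B[v, j] = 1
    return B

def line_graph_adjacency(edges):
    """Adjacency matrix of L(G): two edges of G are adjacent iff they
    share a vertex."""
    p = len(edges)
    A = np.zeros((p, p), dtype=int)
    for i, j in combinations(range(p), 2):
        if set(edges[i]) & set(edges[j]):
            A[i, j] = A[j, i] = 1
    return A

# Triangle graph: every pair of its three edges shares a vertex,
# so L(G) is again a triangle.
edges = [(0, 1), (1, 2), (0, 2)]
B = incidence_matrix(3, edges)
K = B.T @ B
A = line_graph_adjacency(edges)
assert np.array_equal(K, A + 2 * np.eye(3, dtype=int))
```

Note the sketch assumes a simple graph; a multi-edge would contribute 2 to an off-diagonal entry.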
### II.2 Directed Line Graphs
We see immediately that the definitions of $Q$ and $K$ are very similar, with
an important difference. The matrix $Q$ can be written as
$Q=2I-A,$ (9)
where $A$, the weighted adjacency matrix, has the same support as the
adjacency matrix of the undirected line graph $\mathcal{L}(\mathcal{G})$, but
can take both positive and negative values on the edges of the line graph,
depending on the orientation $\mathcal{O}$ (see Fig. 1 and Fig. 2). In fact,
$A$ takes the value $+1$ if the two edges have zero sum on the shared vertex
according to the orientation, e.g. if one leaves and one enters, and $-1$ if
they both enter or both leave.
A fundamental result follows: in general, we can write any spin ice model (in
the charge formulation) as
$H=-J\sum_{\beta\beta^{\prime}}A_{\beta\beta^{\prime}}s_{\beta}s_{\beta^{\prime}}$
(10)
where $A$ is the weighted adjacency matrix defined by the rule above. Thus,
in the case of directed graphs we can have both ferromagnetic
$(JA_{\beta\beta^{\prime}}>0)$ and antiferromagnetic
$(JA_{\beta\beta^{\prime}}<0)$ couplings. If an element of $A$ is positive, the
interaction on the line graph is ferromagnetic (i.e., the two spins are aligned
in the ground state), while if it is negative the interaction is
antiferromagnetic (anti-aligned in the ground state).
Note that spin ices are generally frustrated, and the frustration cannot be
reabsorbed by a spin redefinition, as it is invariant under the Ising model
gauge freedom Hey , which in our graph-theoretical language corresponds to
$s_{\beta}\rightarrow\xi_{\beta}s_{\beta}$, $B_{v\beta}\to
B_{v\beta}\xi_{\beta}$ for $\xi_{\beta}=\pm 1$. As such, the couplings that
one obtains in the procedure depend on the gauge transformation. What does not
change is the frustration, which cannot be removed.
As an example, consider the hexagonal spin ice of Fig. 2. We see that, with
the orientation we have used, the only frustrated cycles are those associated
with the vertices of the hexagonal model. This implies naturally that the
degeneracy of the ground state of the model must scale with the number of
vertices.
Figure 2: The weighted line graph $A$ superimposed on the hexagonal lattice,
for a given orientation, where blue are antiferromagnetic couplings (negative)
and red are ferromagnetic (positive). Figure 3: Equivalent interaction model
on the directed line graph, with red lines equal to antiferromagnetic
interactions. Every single fundamental cycle (the triangles) is frustrated.
### II.3 Spin Ice, Frustration, and Degeneracy: General Facts
We are now in a position to state some general facts for a spin ice on a
graph.
Remark 1: Not all frustrated Ising models are spin ices. Because the directed
line graph dual has support on the undirected line graph, we can borrow the
results from the undirected case Beineke . If the line graph contains any of
the subgraphs in the Beineke classification, then we know that it is not the
line graph of any root graph.
Remark 2: Vertices are mapped to complete-graph interactions. This is a well-
known fact that we restate graph-theoretically. Vertices in a spin ice are
mapped to a complete graph with a number of vertices equal to the degree of
the vertex. This implies immediately that if a spin ice is composed of
vertices of degrees $d=\{d_{1},\cdots,d_{n}\}$, and for some $i$ we have
$d_{i}>3$, then the line graph dual will not be planar.
Remark 3: The only purely ferromagnetic spin ice graph is the one-dimensional
Ising model. It is also the only spin ice whose ground state is non-
degenerate. Consider the following one-dimensional spin ice:
$H=J\sum_{i=1}^{n}(s_{i}-s_{i+1})^{2}.$ (11)
The ground state has a $Z_{2}$ symmetry: the two ground states are all spins
pointing right or all pointing left, which is equivalent to a one-dimensional
ferromagnetic Ising model. This implies that at least one spin ice is non-
degenerate. Interestingly, it is the only one.
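This claim can be confirmed by brute force on a small ring. The sketch below (ours) enumerates all spin configurations of Eq. (11) with periodic boundaries and finds exactly the two $Z_{2}$-related ground states:

```python
from itertools import product

J, n = 1, 6                                   # ring of n Ising spins

def energy(s):
    # H = J * sum_i (s_i - s_{i+1})^2 with periodic boundary, Eq. (11)
    return J * sum((s[i] - s[(i + 1) % n]) ** 2 for i in range(n))

energies = {s: energy(s) for s in product((-1, +1), repeat=n)}
e0 = min(energies.values())
ground = [s for s, e in energies.items() if e == e0]

assert e0 == 0 and len(ground) == 2           # only the two Z_2-related states
```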
In order to see this, consider a vertex with $d$ edges or spins. Then, the
total number of interaction terms is $d(d-1)/2$. Assume an orientation in
which $d_{1}$ spins go in and $d_{2}$ go out, with $d=d_{1}+d_{2}$. Then, the
numbers of antiferromagnetic and ferromagnetic interactions are
$\displaystyle\text{antiferromagnetic}:\ \frac{d_{1}(d_{1}-1)}{2}+\frac{d_{2}(d_{2}-1)}{2},\qquad\text{ferromagnetic}:\ d_{1}d_{2}.$ (12)
The only case in which we have no antiferromagnetic interaction is
$d_{1}=d_{2}=1$, which is a vertex of degree $2$. It follows that the only
graphs that can be formed with vertices of degree $2$ are path or cycle
graphs, and thus one-dimensional spin ices are the only ferromagnetic models.
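The counting in Eq. (12) can be reproduced by enumerating pairs of spins at a single vertex; in the following sketch (function name ours) each spin is labeled by its orientation sign, and same-sign pairs are the antiferromagnetic ones:

```python
from itertools import combinations
from math import comb

def vertex_interactions(d1, d2):
    # d1 spins point into the vertex, d2 point out of it.
    signs = [+1] * d1 + [-1] * d2
    ferro = sum(1 for a, b in combinations(signs, 2) if a != b)
    antiferro = sum(1 for a, b in combinations(signs, 2) if a == b)
    return ferro, antiferro

for d1 in range(5):
    for d2 in range(5):
        f, af = vertex_interactions(d1, d2)
        assert f == d1 * d2                       # Eq. (12), ferromagnetic
        assert af == comb(d1, 2) + comb(d2, 2)    # Eq. (12), antiferromagnetic
        # Antiferromagnetic bonds vanish only for d1 = d2 = 1 (degree-2 vertex).
        if d1 + d2 >= 2 and af == 0:
            assert (d1, d2) == (1, 1)
```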
This does not mean that all models are necessarily extensively degenerate,
that is, possess a nonzero Pauling entropy bernal ; pauling . This is a more
complicated notion, which depends on the product of the signs of the
interactions along a loop. We discuss this next.
Remark 4: All spin ices with $d>2$ have frustration at the vertex level.
Since frustration is gauge invariant, we can pick any orientation of the
vertex configuration and compute the frustration along a given cycle (a
closed sequence of edges) in that particular configuration. Let us choose
$d_{2}=0$, i.e. all spins going into the vertex. Now, all the fundamental
cycles of the vertex interactions are of length 3, as the effective
interaction is a complete graph $K_{d}$. There are
$m=\binom{d}{3}=\frac{d(d-1)(d-2)}{3!}$ fundamental cycles of length $3$.
Since all the interactions are antiferromagnetic, the product of the signs
along every such cycle is $-1$, and each cycle is thus frustrated.
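A direct check of this remark, for a vertex of degree $d=4$ with the all-incoming orientation (a sketch of ours):

```python
from itertools import combinations
from math import comb

d = 4                                   # vertex of degree d -> clique K_d
# All-incoming orientation: every pairwise interaction is antiferromagnetic.
A = {frozenset(pair): -1 for pair in combinations(range(d), 2)}

triangles = list(combinations(range(d), 3))
assert len(triangles) == comb(d, 3)     # d(d-1)(d-2)/3! fundamental 3-cycles

for i, j, k in triangles:
    sign = A[frozenset((i, j))] * A[frozenset((j, k))] * A[frozenset((i, k))]
    assert sign == -1                   # every 3-cycle is frustrated
```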
Remark 5: For planar spin ices, frustration is only at the vertex level.
This is a byproduct of the following fact. Consider a cycle in a spin ice (in
the original lattice). We can always choose an orientation of the lattice such
that, along a given cycle, the arrows go head to tail. Thus, when we construct
the directed line graph, the interactions between consecutive spins along the
cycle are ferromagnetic. Hence, going around the corresponding cycle in the
line graph dual we encounter only ferromagnetic interactions, and the cycle is
not frustrated. To see that this always holds, note that for a planar graph we
can always choose orientations of the spins such that this configuration is
consistent. Thus, frustration is due only to the vertices (cliques) in the
directed line graph dual.
Note that this implies that so-called vertex frustration Morrison , that is,
the inability to arrange all vertices collectively in a lowest-energy
configuration, cannot exist in a graph spin ice whose Hamiltonian depends only
on the vertex charge. Indeed, all the vertex-frustrated systems Morrison ,
many of which have been realized tetris ; shakti , depend upon a lifting of
degeneracy within vertices of the same charge. They are therefore not pure
spin ice graphs.
### II.4 Spin ice reconstruction via Krausz clique partitions
One of the most interesting byproducts of the direct construction is that
there is an inverse procedure, known as Krausz partitioning. We know that if
the original spin ice interaction is planar, then vertices are mapped to fully
frustrated cliques (condition 1). Also, any cycle subgraph which is not a
clique must not be frustrated (condition 2). If these conditions are
satisfied, and if none of the Beineke graphs are present (condition 3), then
we can reconstruct the original spin ice interaction via the Krausz
decomposition, which goes as follows.
Figure 4: The coarse graining procedure according to the Krausz partitioning.
Given a graph $\mathcal{Q}$, if conditions $1$ and $2$ are satisfied, consider
the unweighted graph $|\mathcal{Q}|$, and
1. Enumerate all complete subgraphs of the graph $|\mathcal{Q}|$, and define them as partitions $\mathcal{K}$;
2. If any two partitions $\mathcal{K}$ have only one vertex in common, contract each clique into a vertex, and connect the partitions by an edge;
3. The resulting graph is the spin ice interaction matrix: assign spins to the edges of the resulting graph and add a term $J(\sum_{i}s_{i})^{2}$ to the corresponding interaction;
4. Because of gauge invariance, the directionality of the original spins is irrelevant.
It is important to note that a graph is the line graph of a root graph
$\mathcal{G}$ if and only if there is a Krausz partitioning of the graph
$\mathcal{Q}$. An example of such a procedure is shown in Fig. 4: each
complete subgraph is identified and the coarse-graining procedure is
performed.
## III Lower and upper bounds to degeneracy
Extending Pauling’s estimate pauling can become extremely challenging on
arbitrary graphs. In order to estimate the degeneracy of the ice manifold
beyond the case of planar or non-bipartite graphs, we will use a general
graph-theoretic approach and bound the entropy associated with the ice
manifold. First, let us note that the maximum degeneracy a spin ice can have
is $2^{N_{spins}}$. If the spin ice is degree-regular and has $N_{v}$
vertices, then the maximum entropy of the ice manifold is naturally given by
$\epsilon_{max}=N_{v}\frac{d_{v}}{2}\ln 2.$ Below we provide a procedure to
systematically calculate upper bounds based on the theory of Eulerian tours on
graphs. Given a certain spin ice graph $\mathcal{G}$, we are interested in
calculating the number of configurations in the ice manifold,
$\epsilon(\mathcal{G})$, and its entropy
$S(\mathcal{G})=\ln\epsilon(\mathcal{G})$ pauling .
### III.1 Case of even degrees
We first focus on the case in which all vertices have even degree,
independently of planarity. Consider a graph $\mathcal{G}$. It is well known
that Euler became interested in the problem of walks on graphs with the
following property: starting from a certain vertex $v$, perform a walk on the
graph $\mathcal{G}$ such that no edge is ever used twice. An Eulerian cycle
starts at a vertex $v$ and ends at the same vertex $v$. Euler proved the
following theorem Euler1 :
Theorem (Euler) Let $\mathcal{G}$ be a connected graph. Then, $\mathcal{G}$
has an Eulerian cycle if and only if every vertex $v$ has even degree $d_{v}$.
One direction of this theorem is rather obvious: if an Eulerian cycle exists,
$d_{v}$ must necessarily be even. The theorem is powerful because it ensures
that the converse also holds: if all $d_{v}$ are even, an Eulerian cycle
exists. While enumerating the Eulerian tours of an undirected graph is an open
problem, a formula for the number of Eulerian orientations exists. Let
$\mathcal{G}$ be a graph with degrees $d_{v}$. Then, the total number of
Eulerian orientations $\epsilon(\mathcal{G})$ is given by schrijver :
$\displaystyle\epsilon(\mathcal{G})=\frac{\text{perm}(A)}{\prod_{v\in
V}(\frac{d_{v}}{2})!},$ (13)
where the matrix $A$ is constructed as follows. Consider the incidence
matrix $B$ of the undirected graph $\mathcal{G}$. For each vertex $v$, $A$
contains $\frac{d_{v}}{2}$ identical copies of the corresponding row of $B$.
This implies that $A$ is square, of size equal to the number of edges of
$\mathcal{G}$.
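Eq. (13) can be tested against a brute-force enumeration of orientations on small graphs. The sketch below (all function names ours) uses Ryser's inclusion-exclusion formula for the permanent and recovers $\epsilon=2$ for the triangle and $\epsilon=24$ for $K_{5}$, i.e. the number of regular tournaments on five nodes:

```python
from itertools import product
from math import factorial, prod

def permanent(M):
    # Ryser's inclusion-exclusion formula; fine for small matrices.
    n = len(M)
    total = 0
    for mask in range(1, 1 << n):
        term = 1
        for i in range(n):
            term *= sum(M[i][j] for j in range(n) if mask >> j & 1)
        total += (-1) ** bin(mask).count("1") * term
    return (-1) ** n * total

def count_eulerian_brute(n, edges):
    # Enumerate all 2^m orientations; keep those with in-degree = out-degree.
    degree = [0] * n
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    count = 0
    for dirs in product((0, 1), repeat=len(edges)):
        indeg = [0] * n
        for (u, v), d in zip(edges, dirs):
            indeg[v if d == 0 else u] += 1
        if all(2 * indeg[x] == degree[x] for x in range(n)):
            count += 1
    return count

def count_eulerian_schrijver(n, edges):
    # Eq. (13): repeat row v of the incidence matrix d_v/2 times -> square A.
    degree = [0] * n
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    rows = []
    for v in range(n):
        row = [1 if v in e else 0 for e in edges]
        rows.extend([row] * (degree[v] // 2))
    assert len(rows) == len(edges)
    return permanent(rows) // prod(factorial(d // 2) for d in degree)

triangle = (3, [(0, 1), (1, 2), (0, 2)])
k5 = (5, [(i, j) for i in range(5) for j in range(i + 1, 5)])

for n, edges in (triangle, k5):
    assert count_eulerian_brute(n, edges) == count_eulerian_schrijver(n, edges)

assert count_eulerian_brute(*k5) == 24   # regular tournaments on 5 nodes
```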
The formula above is exact but hard to use, given that the permanent is hard
to calculate, being $\#P$-complete valiant . However, one can upper bound the
permanent of $(0,1)$ matrices using the Bregman–Minc result for the permanent
schrijver , which is
$\displaystyle\text{perm}(A)\leq\prod_{i=1}^{m}(r_{i}!)^{\frac{1}{r_{i}}},$
(14)
where $r_{i}$ is the row sum of the $i$-th row of $A$. This implies that, if
$d_{v}$ is the degree of vertex $v$, one has Schrijver’s inequality schrijver
$\displaystyle\epsilon(\mathcal{G})\leq\prod_{v\in
V}\sqrt{\binom{d_{v}}{\frac{d_{v}}{2}}}.$ (15)
We note that the upper bound above is based on the fact that the graph has an
Eulerian orientation, and thus every $d_{v}$ must be even. For degree-regular
graphs, we have
$\displaystyle\ln\epsilon(\mathcal{G})\leq\frac{N}{2}\ln\binom{d_{v}}{\frac{d_{v}}{2}}.$
(16)
which is a bound of a fairly general nature, depending only on graph
properties such as the vertex degree.
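A quick check of the inequality on the two small graphs whose exact counts are known ($2$ Eulerian orientations for the triangle, $24$ regular tournaments for $K_{5}$); the sketch is ours:

```python
from math import comb, prod, sqrt

# (vertex degrees, exact number of Eulerian orientations)
cases = {"triangle": ([2, 2, 2], 2), "K5": ([4] * 5, 24)}

for name, (degrees, exact) in cases.items():
    # Schrijver's bound, Eq. (15): product over vertices of sqrt(C(d, d/2)).
    bound = prod(sqrt(comb(d, d // 2)) for d in degrees)
    assert exact <= bound
```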
### III.2 Approximating the number of Eulerian configurations from the
permanent
For completeness, we first discuss a technique which proved unfruitful for us,
but which deserves to be mentioned. One way to calculate this quantity
exploits the Godsil–Gutman theorem GGe ; Lovasz . Let $A$ be a non-negative
matrix. Then, if we define $B_{ij}=\epsilon_{ij}\sqrt{a_{ij}}$, where the
$\epsilon_{ij}$’s are uncorrelated random variables distributed according to
$P(\epsilon_{ij})$, with mean $0$ and variance $1$, we have
$\displaystyle\text{perm}(A)=\langle\text{det}B^{t}B\rangle_{P(\epsilon)}.$
(17)
Since we are interested in the logarithm, we have
$\displaystyle\ln\text{perm}(A)=\ln\langle\text{det}(B)^{2}\rangle_{P(\epsilon)}=\ln\langle\text{det}(B^{t}B)\rangle_{P(\epsilon)}.$
(18)
If $\epsilon_{ij}\in\{1,-1\}$ the estimator above is called the Godsil–Gutman
estimator, while if $\epsilon_{ij}\in\mathcal{N}(0,1)$ it is called the
Barvinok estimator. We have tested both the Godsil–Gutman and Barvinok
estimators for the case of the triangular, square, and cubic lattice
degeneracies, but we have found that the variance of the estimates does not
fall fast enough with the number of Monte Carlo samples.
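The unbiasedness of the estimator can nevertheless be verified exactly on a small matrix by averaging $\det(B)^{2}$ over all $2^{9}$ sign patterns rather than sampling (a sketch of ours, not the Monte Carlo procedure used in the text):

```python
import numpy as np
from itertools import permutations, product

rng = np.random.default_rng(0)
A = rng.integers(1, 5, size=(3, 3)).astype(float)   # arbitrary non-negative matrix

# Exact permanent by summing over all permutations.
perm_exact = sum(np.prod([A[i, pm[i]] for i in range(3)])
                 for pm in permutations(range(3)))

# Average det(B)^2 over ALL sign patterns: the estimator is exactly unbiased.
vals = []
for signs in product((-1.0, 1.0), repeat=9):
    B = np.array(signs).reshape(3, 3) * np.sqrt(A)
    vals.append(np.linalg.det(B) ** 2)

assert abs(np.mean(vals) - perm_exact) < 1e-6
```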
Figure 5: Hypergraph construction for the spin ice degeneracy.
The second approximation method for the permanent of a non-negative matrix is
based on belief propagation, and it is the one we present here huang .
Consider the square matrix $A$ obtained via the Schrijver augmentation. The
permanent of the matrix $A$ is defined via the sum over all permutations, as
$\text{perm}(A)=\sum_{\pi\in S_{n}}f(\pi;A),$ (19)
where $f(\pi;A)=\prod_{i=1}^{n}A_{i\pi(i)}$. Another way of thinking of these
permutations is in terms of perfect matchings between two sets $A$ and $B$,
and in particular “double dimerizations”, as follows.
Given a certain permutation $\sigma$, a matching between the set $A$ (the
first index) and the set $B$ (the second index) can be represented as
$\big(i,\sigma(i)\big)$. Similarly, a particular valid configuration of
the permanent is a set of $n$ non-overlapping dimers
$A_{1,\sigma(1)}\cdots A_{n,\sigma(n)}$. One then constructs a bipartite graph
in which a dimer is placed between $(i,j)$ if $A_{ij}>0$. The permanent is a
sum over all possible dimerizations.
Based on this idea, Huang and Jebara mapped the permanent to a set of double
dimerizations which correspond to a valid choice of the permanent, as follows
huang . A dimer between the sets $A$ and $B$ is an assignment between the
variables $X=\{x_{1},\cdots,x_{n}\}$ and the variables
$Y=\{y_{1},\cdots,y_{n}\}$. We now introduce the potentials
$\phi(x_{i})=\sqrt{A_{ix_{i}}}$ and $\phi(y_{j})=\sqrt{A_{y_{j}j}}$ and
introduce the function
$\displaystyle
f(X,Y)=\prod_{ij}\psi(x_{i},y_{j})\prod_{k}\phi(x_{k})\phi(y_{k})$ (20)
which is a function of the assignment. If the function $\psi(x_{i},y_{j})$
enforces that, given two sets of assignments (two possible dimer
configurations) between $A$ and $B$, the two assignments are identical, then
one obtains
$\displaystyle\prod_{k}\phi_{x_{k}}\phi(y_{k})=A_{1,\sigma(1)}....A_{n,\sigma(n)},$
(21)
and
$\displaystyle Z\equiv\text{perm}(A)=\sum_{\sigma,\pi\in
S_{n}}f(X,Y),$ (22)
where $\sigma=(i,x_{i})$ and $\pi=(y_{j},j)$. In terms of the dimer
representation, a valid configuration is such that two dimers either overlap
completely or do not overlap at all. Then, a logic function which ensures such
condition is the negation of the XOR function $I(\neg(j=x_{i}\oplus
i=y_{j}))$, where the function $I(\cdot)$ is zero if the condition is false
and one otherwise.
It is interesting at this point to note that Eq. (22) can be interpreted as
the partition function of a particular factor graph with pairwise
interactions, and can be analyzed in terms of belief propagation. Let us
define the Bethe free energy $F_{\text{Bethe}}$ as
$\displaystyle F_{\text{Bethe}}=-\sum_{ij}\sum_{x_{i},y_{j}}b(x_{i},y_{j})\ln\Big{(}\psi(x_{i},y_{j})\phi(x_{i})\phi(y_{j})\Big{)}+\sum_{ij}\sum_{x_{i},y_{j}}b(x_{i},y_{j})\ln b(x_{i},y_{j})-(n-1)\sum_{i}\sum_{x_{i}}b(x_{i})\ln b(x_{i})-(n-1)\sum_{j}\sum_{y_{j}}b(y_{j})\ln b(y_{j}).$ (23)
Using belief propagation, the partition function can be approximated by the
minimization of the Bethe free energy,
$\displaystyle Z\approx\text{Bperm}(A)=e^{-\text{min}_{b}F_{\text{Bethe}}(b)}.$ (24)
The minimum of the Bethe free energy can be obtained via a message-passing
algorithm. The beliefs must satisfy $b(x_{i})=\sum_{y_{j}}b(x_{i},y_{j})$,
$b(y_{j})=\sum_{x_{i}}b(x_{i},y_{j})$, and
$\sum_{x_{i},y_{j}}b(x_{i},y_{j})=1$, and these functions can be obtained
iteratively as
$b(x_{i},y_{j})\propto\psi(x_{i},y_{j})\phi(x_{i})\phi(y_{j})\prod_{k\neq
j}m_{y_{k}}(x_{i})\prod_{l\neq i}m_{x_{l}}(y_{j})$ (25)
and
$\displaystyle b(x_{i})$ $\displaystyle\propto$
$\displaystyle\phi(x_{i})\prod_{l\neq i}m_{y_{l}}(x_{i})$ $\displaystyle
b(y_{j})$ $\displaystyle\propto$ $\displaystyle\phi(y_{j})\prod_{l\neq
j}m_{x_{l}}(y_{j}).$ (26)
The messages can be obtained iteratively; starting from a random initial
state, one has
$\displaystyle
m_{x_{i}}^{t+1}(y_{j})=\sum_{x_{i}}\Big{(}\phi(x_{i})\psi(x_{i},y_{j})\prod_{k\neq
j}m_{y_{k}}^{t}(x_{i})\Big{)}$ (27)
An interesting byproduct is that it is possible to prove that the Bethe
permanent serves as both a lower and an upper bound vontobel ; gurvits :
Theorem (Vontobel-Gurvits)
$\displaystyle\text{\text{Bperm}}(A)\leq\text{perm}(A)\leq\sqrt{2}^{n}\text{\text{Bperm}}(A).$
(28)
where $n$ is the size of $A$ and $\text{Bperm}(A)$ is its Bethe permanent.
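The bound can be illustrated for a $2\times 2$ matrix, where the doubly stochastic beliefs reduce to a one-parameter family. The sketch below (ours) assumes Vontobel's variational form of the Bethe free energy for the permanent, $F(\beta)=\sum_{ij}\beta_{ij}\ln(\beta_{ij}/A_{ij})-\sum_{ij}(1-\beta_{ij})\ln(1-\beta_{ij})$, minimized over doubly stochastic $\beta$, which is not spelled out in the text:

```python
import numpy as np

# For a 2x2 matrix the doubly stochastic beliefs form a one-parameter family
# beta = [[t, 1-t], [1-t, t]]; minimize the Bethe free energy over t.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
perm = A[0, 0] * A[1, 1] + A[0, 1] * A[1, 0]      # exact permanent = 10

def bethe_free_energy(t):
    beta = np.array([[t, 1 - t], [1 - t, t]])
    with np.errstate(divide="ignore", invalid="ignore"):
        h = np.where(beta > 0, beta * np.log(beta / A), 0.0)
        g = np.where(beta < 1, (1 - beta) * np.log(1 - beta), 0.0)
    return h.sum() - g.sum()

ts = np.linspace(0.0, 1.0, 10001)
bperm = np.exp(-min(bethe_free_energy(t) for t in ts))

# Vontobel-Gurvits bounds, Eq. (28), with n = 2.
assert bperm <= perm + 1e-6
assert perm <= 2 * bperm + 1e-6
```

For this matrix the minimum sits at an endpoint of the family and the Bethe permanent comes out $6$, sandwiching the exact permanent $10$ as $6\leq 10\leq 12$.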
The theorem above implies that, via the Bethe permanent $\text{Bperm}(A)$, we
can obtain a level of confidence on the value of the permanent and, in
particular, a certificate for the lower-bound scaling. Note that, in practice,
we have found that Schrijver’s upper bound is typically tighter than the one
obtained via the Bethe permanent.
Given a certain lattice described by the graph $\mathcal{G}$, we call
$B\epsilon(\mathcal{G})$ the lower bound obtained via the Bethe permanent. We
thus have
$\displaystyle
B\epsilon(\mathcal{G})\leq\epsilon(\mathcal{G})\leq\prod_{v}\sqrt{\binom{d_{v}}{\frac{d_{v}}{2}}},$
(29)
where $d_{v}$’s are the vertex degrees.
The numerical results are shown in Fig. 6 for the planar cases of the square
lattice and the triangular tiling, which are two perfect Archimedean lattices
harrison . For the case of square ice we have Lieb’s exact result lieb
$\epsilon^{Exact}=(\frac{4}{3})^{3L^{2}/2}$. For the triangular tiling there
is no exact solution, and Pauling’s argument pauling does not apply, as it
relies on the bipartiteness of the lattice. For the triangular tiling we find
that $\frac{\ln\epsilon}{L^{2}}\geq 2.33[..]$, using the fact that the Bethe
permanent is a lower bound.
Figure 6: Numerical scaling of the degeneracy of the ground state via the
Bethe Permanent for the square spin ice and the triangular tiling (on the
torus), of linear size $L$, so that the number of nodes is $L^{2}$. The lower bound
(gold solid) corresponds to the Bethe Permanent $\text{Bperm}(A)$, while the
upper bound (purple dashed) corresponds to $\sqrt{2^{n}}\text{Bperm}(A)$. A
numerical fit shows that the Bethe permanent scales as
$\epsilon_{SI}\approx\text{Bperm}(A_{L})/(\prod(d_{v}/2)!)\approx(1.419[..])^{L^{2}}$,
while Lieb’s exact result is
$\epsilon_{SI}^{Exact}=(\frac{8}{3\sqrt{3}})^{L^{2}}\approx(1.53[..])^{L^{2}}$.
For the case of the triangular tiling, the scaling of the Bethe Permanent can
be fit as $\epsilon_{TT}\approx(2.3396[..])^{L^{2}}$. In both figures, the
shaded area is the bound on the (logarithm) of the degeneracy of the balanced
configuration according to the Bethe Permanent bounds.
For the cubic lattice (with toroidal boundary conditions), Pauling’s
calculation suggests that the entropy of the spin ice scales with the number
of vertices $L^{3}$. In fact, given a certain node, we have $2^{6}=64$
possible configurations, but only $20$ of them satisfy the ice rule. It
follows from Pauling’s argument that the spin ice ground state degeneracy
should be
$\displaystyle\epsilon_{\text{Pauling}}=2^{3L^{3}}(\frac{20}{64})^{L^{3}}=(5/2)^{L^{3}}.$
(30)
or $\ln\epsilon_{\text{Pauling}}=L^{3}\ln\frac{5}{2}\approx 0.9163\cdot
L^{3}$. Using the Bethe permanent (see Fig. 7), we observe that Pauling’s
estimate is not far from the Bethe permanent lower bound, which gives
$B\epsilon\approx 2.41^{L^{3}}$ and provides a certificate for a lower bound.
Figure 7: Numerical scaling of the degeneracy of the ground state via the
Bethe Permanent for the cubic spin ice with a number of nodes $L^{3}$. The
lower bound (gold solid) corresponds to the Bethe Permanent $\text{Bperm}(A)$,
while the upper bound (purple dashed) corresponds to
$\sqrt{2^{n}}\text{Bperm}(A)$. A numerical fit shows that the Bethe permanent
scales as
$\epsilon_{SI}\approx\text{Bperm}(A_{L})/(\prod(d_{v}/2)!)\approx(2.41(0))^{L^{3}}$,
while Pauling’s estimate is $\epsilon_{\text{Pauling}}=(2.5)^{L^{3}}$. The
lower bound obtained via the Bethe permanent is thus within $3\%$ of Pauling’s
estimate.
As a last application, we consider regular tournaments, i.e. the total
number of Eulerian orientations of a complete graph with $L$ nodes, as in
Fig. 8, or equivalently the degeneracy of the spin ice configurations on a
complete graph. This number was calculated by McKay mckay and is given by
$\displaystyle\epsilon=\left(\frac{2^{L+1}}{\pi
L}\right)^{\frac{L-1}{2}}\frac{\sqrt{L}}{\sqrt{e}}\left[1+O(\frac{1}{\sqrt{L}})\right].$
(31)
It can be easily seen that also in this case the Bethe permanent provides a
good estimate for the number of tournaments.
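Evaluating the leading term of Eq. (31) at $L=5$ (a sketch of ours) gives roughly $22.5$, within the expected $O(1/\sqrt{L})$ corrections of the exact count of $24$ regular tournaments on five nodes:

```python
from math import e, pi, sqrt

def mckay_asymptotic(L):
    # Leading term of Eq. (31) for the number of regular tournaments on K_L
    # (L odd).
    return (2 ** (L + 1) / (pi * L)) ** ((L - 1) / 2) * sqrt(L) / sqrt(e)

exact_5 = 24                          # regular tournaments on 5 nodes
ratio = mckay_asymptotic(5) / exact_5
assert 0.7 < ratio < 1.2              # sizeable O(1/sqrt(L)) terms at small L
```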
Figure 8: Number of Eulerian orientations for the complete graph $K_{L}$, also
called regular tournaments. We provide a comparison between the upper and
lower bounds given by the Bethe Permanent and the McKay exact calculation
given in Eq. (31). We note that for this particular case, Schrijver’s upper
bound is looser than the Bethe permanent upper bound.
A summary of the results is provided in Table 1.
## IV Conclusions
We have discussed some theoretical results for spin ice on arbitrary graphs.
In the first part of the paper we provided a graph-theoretical mapping
between spin ices and Ising models, based on the directed incidence matrix of
the line graph. Specifically, we have restated something that was already
known, namely that while all spin ices can be mapped to Ising models, not all
Ising models can be mapped to spin ices; here, however, we have given a
general procedure to extract the original spin ice. We have also shown that
all spin ices are degenerate (except the trivial case of the 1D Ising model).
We proved this result using the known gauge transformation for the Ising
model. Another fact that we proved in the first part of this paper, and that
we found surprising, is that for planar spin ices frustration occurs only at
the vertex level. This implies that, in the effective Ising model, cycles
between different vertices of the original spin ice are unfrustrated, as can
be seen via a gauge transformation that aligns these vertices. The same is
not true for the interactions at the vertices, implying that the frustration
scales with the number of vertices of the original spin ice. In the second
part of the paper we focused on techniques to bound the degeneracy of the ice
manifold. The method is based on the number of Eulerian tours, as for every
balanced graph (ice manifold state) there exists at least one Eulerian trail.
The advantage of using Eulerian trails is that there are spectral techniques
to count the number of Eulerian trails of a directed graph. We applied these
techniques to even-degree, connected graphs, and noted that these bounds can
be extended to the case of odd-degree reducible graphs. To conclude, we used
an exact formulation of the degeneracy of spin ice configurations in terms of
a permanent identity. Given that the permanent is a $\#P$-complete quantity
to compute, we employed numerical methods based on the Bethe free energy.
Such methods provide numerical lower bounds to the permanent, and we have
thus obtained lower bounds to the spin ice degeneracy for various regular
spin ice lattices. The advantage of this procedure is that these certified
lower-bound estimates can be obtained relatively quickly and without using
loop Monte Carlo techniques barkema ; loop ; giawei which, however, would
give a more precise estimate.
In this respect, it is worth mentioning that there are also other
implementations using fractional belief propagation chertkov , with an extra
parameter which is known (given the matrix) to yield the exact value of the
permanent, and which for values above and below gives upper and lower bounds,
respectively. These will be considered in the future.
Graph | Exact | Pauling | Bethe | Maximum
---|---|---|---|---
Square | 1.53[..] | $\frac{3}{2}$ | 1.41 | $4$
Triangular | | | 2.33 | 8
Cubic | | $\frac{5}{2}$ | 2.41 | 8
Table 1: Normalized ground state degeneracy per vertex, $e^{s}$ with
$s=\frac{\ln\epsilon}{N_{v}}$ and $N_{v}$ the total number of vertices. The
Bethe permanent typically provides a certified lower bound given the matrix,
and the scaling is extracted numerically. For the complete graph, tournaments
do not scale simply exponentially with the number of vertices (see Eq. (31)).
The maximum is obtained via the relationship for regular graphs,
$e^{s}=2^{\frac{d}{2}}$.
###### Acknowledgements.
We thank Prof. A. Schrijver for some clarifications regarding the permanent
formula for the eulerian orientations of eqn. (13), and Prof. M. Chertkov for
some clarifications regarding the Bethe Permanent. We also acknowledge the
support of NNSA for the U.S. DoE at LANL under Contract No. DE-AC52-06NA25396.
This work was carried out under the auspices of the U.S. DoE through the Los
Alamos National Laboratory, operated by Triad National Security, LLC (Contract
No. 892333218NCA000001). FC was also financed via DOE-LDRD grants PRD20190195.
M. Saccone acknowledges the Center for Nonlinear Studies for a fellowship.
## References
* (1) J. Bernal and R. Fowler, “A theory of water and ionic solution, with particular reference to hydrogen and hydroxyl ions,” The Journal of Chemical Physics 1, 515-548 (1933).
* (2) L. Pauling, “The structure and entropy of ice and of other crystals with some randomness of atomic arrangement,” Journal of the American Chemical Society 57, 2680-2684 (1935).
* (3) Lieb E.H. (2004) Residual Entropy of Square Ice. In: Nachtergaele B., Solovej J.P., Yngvason J. (eds) Condensed Matter Physics and Exactly Soluble Models. Springer, Berlin, Heidelberg.
* (4) A. P. Ramirez, A. Hayashi, R. J. Cava, R. Siddharthan, and B. S. Shastry, Zero-point entropy in ‘spin ice’,” Nature 399, 333-335 (1999).
* (5) C. Nisoli et al.,“Colloquium: Artificial spin ice: Designing and imaging magnetic frustration”, Rev. Mod. Phys. 85(1473), (2013)
* (6) R. F. Wang et al., Artificial spin ice in a geometrically frustrated lattice of nanoscale ferromagnetic islands, Nature 439(7074):303-6, (2006).
* (7) S. H. Skjærvø, C. H. Marrows, RL Stamps, LJ Heyderman “Advances in artificial spin ice” Nature Reviews Physics 1-16 (2019).
* (8) S.D. Bader, Colloquium: Opportunities in nanomagnetism, Rev. Mod. Phys., 78(1):1, (2006).
* (9) I. Gilbert et al., “Emergent reduced dimensionality by vertex frustration in artificial spin ice”, Nature Phys. 12, 162-165 (2016)
* (10) L. J. Heyderman, R. L. Stamps, Artificial ferroic systems: novel functionality from structure, interactions and dynamics, J. of Phys.: Condensed Matter, 25(36):363201 (2013)
* (11) B. Canals et al., Fragmentation of magnetism in artificial kagome dipolar spin ice, Nat. Comm. 7 (2016)
* (12) C. Nisoli et al., Ground State Lost but Degeneracy Found: The Effective Thermodynamics of Artificial Spin Ice, Phys. Rev. Lett., 98(21):217203 (2007)
* (13) J. P. Morgan et al., Thermal ground state ordering and elementary excitations in artificial magnetic square ice, Nat. Phys. 7(1):75-70 (2010)
* (14) Z. Budrikis et al., Disorder strength and field-driven ground state domain formation in artificial spin ice: experiment, simulation and theory, Phys. Rev. Lett 109(30):037203 (2012)
* (15) W. R. Branford et al., Emerging Chirality in Artificial Spin Ice, Science, 335(6076):1597-1600 (2012)
* (16) I.A. Ryzhkin. Zhurnal Ehksperimentalnoj i Teoreticheskoj Fiziki, 128(3):559-566 (2005)
* (17) G. Moeller, R. Moessner, Magnetic multipole analysis of kagome and artificial spin-ice dipolar arrays, Phys. Rev. B, 80(14):140409 (2009)
* (18) G.-W. Chern, P. Mellado, Magnetic monopole polarons in artificial spin ices, EPL 114 (3): 37004 (2016)
* (19) B. L. Le et al., Understanding magnetotransport signatures in networks of connected permalloy nanowires. Phys. Rev. B, 95:060405 (2017)
* (20) G.-W. Chern, Magnetotransport in Artificial Kagome Spin Ice, Phys. Rev. App. 8(6) : 064006 (2017)
* (21) S. Gliga, et al., Spectral analysis of topological defects in an artificial spin-ice lattice, Phys. Rev. Lett, 110(11):117205 (2013).
* (22) I. Gilbert et al., Emergent ice rule and magnetic charge screening from vertex frustration in artificial spin ice, Nat Phys. 10(9):670-675 (2014)
* (23) V. S. Bhat et al., Controlled magnetic reversal in permalloy films patterned into artificial quasicrystals, Phys. Rev. Lett. 111(7):077201 (2013)
* (24) F. Caravelli, A model for the Mediated Artificial Square Ice phenomenology, EPL 120(4),2020
* (25) C. Nisoli, V. Kapaklis, P. Schiffer, Deliberate exotic magnetism via frustration and topology, Nature Phys.13(3):200-203 (2017)
* (26) M J Morrison, et al., Unhappy vertices in artificial spin ice: new degeneracies from vertex frustration. New Journal of Physics, 15(4):045009 (2013)
* (27) C. Castelnovo et al., Spin ice, fractionalization, and topological order, Ann. Rev. Condens. Matter Phys., 3(1): 35-55 (2012)
* (28) Y. Lao et al., “Classical topological order in the kinetics of artificial spin ice”, Nature Phys. 14, 723-727 (2018)
* (29) I. Gilbert et al., Direct visualization of memory effects in artificial spin ice. Phys. Rev. B, 92(10):104417 (2015)
* (30) P. E. Lammert et al., Direct entropy determination and application to artificial spin ice. Nat. Phys., 6(10):786-789 (2010)
* (31) D. Levis et al., Thermal phase transitions in artificial spin ice, Phys. Rev. Lett., 110(20):207206 (2013)
* (32) F. Caravelli, G.-W. Chern, C. Nisoli, Artificial Spin Ice Memory Resistors, arXiv:1908.08073
* (33) F. Caravelli, J. Carbajal, Memristors for the curious outsiders, Technologies 6(4):118 (2019)
* (34) H. Arava et al., ”Computational logic with square rings of nanomagnets.” Nanotechnology 29, no. 26 265205 (2018)
* (35) J. H. Hensen, E. Folven, G. Tufte, Computation in artificial spin ice, Proc. of ALIFE 2018, pp. 15-22, MIT Press, 10.1162/isal-a-00011 (2018)
* (36) M. T. Niemier, et al.,‘Nanomagnet logic: progress toward system-level integration”, J. of Phys.: Condensed Matter, 23(49):493202 (2011)
* (37) F. Caravelli, C. Nisoli, Logical gates embedding in Artificial Spin Ice, arXiv:1810.09190 (to appear in the NJP)
# From Geometry to Topology: Inverse Theorems for Distributed Persistence
(Thanks: The first and third authors were partially supported by the Air Force Office of Scientific Research under the grant “Geometry and Topology for Data Analysis and Fusion”, AFOSR FA9550-18-1-0266. The second author was partially supported by the National Science Foundation under the grant “HDR TRIPODS: Innovations in Data Science: Integrating Stochastic Modeling, Data Representations, and Algorithms”, NSF CCF-1934964.)
Elchanan Solomon
Department of Mathematics,
Duke University
Durham, USA
<EMAIL_ADDRESS>Alexander Wagner
Department of Mathematics,
Duke University
Durham, USA
<EMAIL_ADDRESS>Paul Bendich
Department of Mathematics, Duke University
Geometric Data Analytics
Durham, USA
<EMAIL_ADDRESS>
###### Abstract
What is the “right” topological invariant of a large point cloud X? Prior
research has focused on estimating the full persistence diagram of X, a
quantity that is very expensive to compute, unstable to outliers, and far from
a sufficient statistic. We therefore propose that the correct invariant is not
the persistence diagram of X, but rather the collection of persistence
diagrams of many small subsets. This invariant, which we call “distributed
persistence,” is trivially parallelizable, more stable to outliers, and has a
rich inverse theory. The map from the space of point clouds (with the quasi-
isometry metric) to the space of distributed persistence invariants (with the
Hausdorff-Bottleneck distance) is a global quasi-isometry. This is a much
stronger property than simply being injective, as it implies that the inverse
of a small neighborhood is a small neighborhood, and is to our knowledge the
only result of its kind in the TDA literature. Moreover, the quasi-isometry
bounds depend on the size of the subsets taken, so that as the size of these
subsets goes from small to large, the invariant interpolates between a purely
geometric one and a topological one. Lastly, we note that our inverse results
do not actually require considering all subsets of a fixed size (an enormous
collection), but a relatively small collection satisfying certain covering
properties that arise with high probability when randomly sampling subsets.
These theoretical results are complemented by two synthetic experiments
demonstrating the use of distributed persistence in practice.
## I Introduction
Morphometric techniques in data analysis can be loosely divided into the
geometric and the topological. Geometric techniques, like landmarks, the
Procrustes distance, the Gromov-Hausdorff metric, optimal transport methods,
PCA, MDS [Kruskal:1964aa], LLE [Roweis2323], and Isomap [Tenenbaum2319], are
designed to capture some combination of global and local metric structure.
Many geometric methods can be solved exactly or approximately via spectral
methods, and hence are fast to implement using iterative and sketching
algorithms. In contrast, topological techniques, like t-SNE
[JMLR:v9:vandermaaten08a], UMAP [McInnes2018], Mapper [SPBG:SPBG07:091-100],
and persistent homology, aim to capture large-scale connectivity structure in
data. The growing popularity of t-SNE and UMAP as dimensionality reduction
methods suggests that many data sets are topologically, but not metrically,
low-dimensional.
The goal of this paper is to introduce a new technique into topological data
analysis (TDA) that:
1. 1.
Provably interpolates between topological and geometric structure (Theorem
V.15).
2. 2.
Is trivially parallelizable.
3. 3.
Is exactly computable via deterministic and stochastic methods (Porisms V.17
and V.18 and Propositions V.23 and V.25).
4. 4.
Is provably stable to perturbation of the data (Proposition V.2).
5. 5.
Is provably invertible, with globally stable inverse (Theorems V.9, V.15,
V.21, and Porism V.19).
6. 6.
Suggests new methods for a host of morphometric challenges, ranging from
dimensionality reduction to feature extraction (Section VI).
The theoretical guarantees provided here are, to our knowledge, unmatched by
any other method in topological data analysis; the same can be said of many
spectral methods, which are famously unstable in the presence of a small
spectral gap. In addition to these theoretical contributions, we demonstrate
our theoretical results empirically on synthetic data sets.
## II The Distributed Topology Problem
Let $\lambda$ be a statistic of finite point clouds in $\mathbb{R}^{d}$. Let
$X$ be an abstract indexing set with an embedding $\psi:X\to\mathbb{R}^{d}$.
For $k\in\mathbb{Z}$, we can define a distributed statistic $\lambda_{k}$ that
maps the _labeled point cloud_ $(X,\psi)$ to the set
$\\{(S,\lambda(\psi(S)))\mid S\subset X,|S|=k\\}$ if $k>0$ and to $\emptyset$
otherwise. Put another way, $\lambda_{k}(X,\psi)$ records the values of
$\lambda$ on subsets of $\psi(X)$ of a fixed size, together with abstract
labels identifying which invariant corresponds to which subset. (It is also
possible to do away with these labels, and we will consider this possibility
later on in the paper.) For the remainder of this paper, we will omit
mentioning the embedding $\psi$, and will refer to $X$ as a point cloud,
unless it becomes important to disambiguate between $X$ as an abstract set and
$X$ as a set with a fixed embedding.
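To make the definition concrete, here is a minimal sketch (our own illustration, not part of the paper's toolchain) of $\lambda_{k}$ for the diameter statistic, with subsets recorded together with their label sets; the names `diameter` and `distributed_statistic` are ours:

```python
from itertools import combinations
from math import dist

def diameter(points):
    """lambda: the largest pairwise Euclidean distance in a point cloud."""
    return max((dist(p, q) for p, q in combinations(points, 2)), default=0.0)

def distributed_statistic(X, psi, k):
    """lambda_k: the set of pairs (S, lambda(psi(S))) over all S of size k."""
    if k <= 0:
        return set()
    return {(S, diameter([psi[x] for x in S]))
            for S in combinations(sorted(X), k)}

# X is an abstract index set; psi embeds it in R^2.
X = {"a", "b", "c", "d"}
psi = {"a": (0, 0), "b": (1, 0), "c": (0, 1), "d": (2, 2)}
lam2 = distributed_statistic(X, psi, 2)
```

Each element of `lam2` pairs a label set with the value of the statistic on its image, exactly as in the definition above.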
When the computational complexity of $\lambda$ scales poorly in the size of
$X$, the statistic $\lambda_{k}$ can be easier to compute. Moreover,
$\lambda_{k}$ may contain information not accessible via $\lambda$ itself. We
will say that $\lambda$ is $k$-distributed if $\lambda_{k}(X)$ determines
$\lambda(X)$ for any subset $X\subset\mathbb{R}^{d}$ with $|X|\geq k$. Many
common geometric invariants are $k$-distributed:
* •
Let $\lambda$ send a finite set $X$ to its Euclidean distance matrix. This
invariant is $k$-distributed for all $k\geq 2$.
* •
Let $\lambda$ send a finite set $X$ to its diameter. This invariant is
$k$-distributed for all $k\geq 2$.
* •
Let $\lambda$ send a finite set $X$ to its mean. This invariant is
$k$-distributed for all $k\geq 1$.
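The second bullet can be checked in a line or two of code (our own sketch): the diameter of $X$ is recoverable from $\lambda_{2}(X)$, since it is simply the maximum of the pairwise diameters.

```python
from itertools import combinations
from math import dist

def diameter(points):
    return max((dist(p, q) for p, q in combinations(points, 2)), default=0.0)

def lambda_2(points):
    # Distributed statistic: diameter of every 2-element subset,
    # i.e. every pairwise distance.
    return [diameter(pair) for pair in combinations(points, 2)]

X = [(0, 0), (3, 4), (1, 1), (6, 0)]
recovered = max(lambda_2(X))  # diameter is 2-distributed
```

The same one-line aggregation (a maximum over $\lambda_{2}$) is what makes this invariant $k$-distributed for every $k\geq 2$ as well.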
The primary theoretical goal of this paper is to address the following three
questions:
###### Problem II.1.
Which invariants in applied algebraic topology are $k$-distributed for various
$k$?
###### Problem II.2.
If $\lambda$ is $k$-distributed, how much additional topological or geometric
information does $\lambda_{k}$ contain, as compared to $\lambda$, and how does
this depend on $k$?
###### Problem II.3.
Can $\lambda_{k}$ be well-approximated, with high probability, using only a
small fraction of the total number of subsets of size $k$?
### II-A Case Study: The Noisy Circle
To illustrate the advantage of working with distributed invariants, we compare
three data sets of $500$ points. The first is spaced regularly around a
circle, the second sampled uniformly from the unit disc, and the third
contains $450$ points on the circle and $50$ points sampled from the disc (we
call this the _noisy circle_), see Figure II.1. For each of these point
clouds, we compute their full $1$-dimensional persistence diagrams, see Figure
II.2. In addition, for each point cloud, we sample $1000$ subsets of size
$10$, compute the resulting $1000$ $1$-dimensional persistence diagrams,
vectorize them as _persistence images_ (a technique for turning a persistence
diagram into a function by placing a Gaussian kernel at each dot in the
diagram, with mean and variance varying by location, cf.
[adams2017persistence]), and average the results, see Figure II.3. The
persistence diagram of the noisy circle is most similar to that of the disc
(in Bottleneck distance), demonstrating that ordinary persistence does not see
the circle around which most of the data points are clustered. The distributed
persistence, however, tells a different story. The distribution for the noisy
circle interpolates between the distributions of the other two spaces, but is
substantially closer to that of the circle than the disc.
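The experiment itself distributes $1$-dimensional persistence diagrams; as a lightweight stand-in that still exercises the subsampling machinery, the sketch below (entirely our own; computing persistence from scratch is out of scope here) distributes a scalar statistic, the subset diameter, over random size-$10$ subsets of the three clouds, using the cloud sizes and subset counts from the text.

```python
import math, random

random.seed(0)

def circle_points(n):
    return [(math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n))
            for i in range(n)]

def disc_points(n):
    pts = []
    while len(pts) < n:  # rejection-sample the unit disc
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            pts.append((x, y))
    return pts

def diameter(pts):
    return max(math.dist(p, q) for p in pts for q in pts)

def distributed_mean(cloud, n_subsets=1000, k=10):
    # Average of the stand-in statistic over many random size-k subsets.
    return sum(diameter(random.sample(cloud, k))
               for _ in range(n_subsets)) / n_subsets

circle = circle_points(500)
disc = disc_points(500)
noisy = circle_points(450) + disc_points(50)

means = {name: distributed_mean(c) for name, c in
         [("circle", circle), ("noisy circle", noisy), ("disc", disc)]}
```

Even with this crude statistic, the distributed mean for the noisy circle lands near the circle's rather than the disc's, mirroring the behavior of the distributed persistence images described above.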
Figure II.1: Three point clouds: the circle, the noisy circle, and the disc.
Figure II.2: The persistence diagrams of our three point clouds, plotted in
birth-persistence coordinates. Figure II.3: Averaged distributed persistence
images of our three spaces. The dominant orange/yellow region is the overlay
of the circle (red) distribution and the noisy circle (green) distribution.
## III Prior Work on Distributed Topology
In [pmlr-v37-chazal15], Chazal et al. propose the following framework. Given a
metric measure space $(\mathbb{X},\rho,\mu)$, sample $m$ points and compute
the persistence landscape of the associated Vietoris-Rips filtration. This
procedure produces a random persistence landscape, $\lambda$, whose
distribution is denoted $\Psi_{\mu}^{m}$. Repeating this procedure $n$ times
and averaging produces the empirical average landscape, an unbiased estimator
of the average landscape $E_{\Psi_{\mu}^{m}}[\lambda]$. This approach is
similar to the distributed topological statistics considered in this paper,
except we consider a collection of topological statistics as a labeled set
rather than taking their sum. Though Bubenik [10.1007/978-3-030-43408-3_4]
gives conditions in Theorem $5.11$ under which a collection of persistence
diagrams may be reconstructed from the average of their corresponding
persistence landscapes, such an inverse exists only generically, and is highly
unstable.
The main theorem of [pmlr-v37-chazal15] is that the average landscape is
stable with respect to the underlying measure. Specifically, if $\mu$ and
$\nu$ are two probability measures on the same metric space
$(\mathbb{X},\rho)$, then the sup norm between induced average landscapes is
bounded by $m^{1/p}W_{\rho,p}(\mu,\nu)$ for any $p\geq 1$. Similar results
were obtained in [Blumberg:2014aa] for distributions of persistence diagrams
of subsamples. In particular, Blumberg et al. showed that the distribution of
barcodes with the Prohorov metric is stable with respect to the associated
compact metric measure space with the Gromov-Prohorov metric. Both these
results are analogous to the stability of the distributed topological
statistics given in Proposition V.2. However, working with labeled collections
of distributed topological statistics, we are also able to provide inverse
stability results, such as our main Theorem V.15, which states that changes in
the metric structure are bounded with respect to changes in the distributed
topological statistics.
In [Bubenik_2020], Bubenik et al. consider unit disks, denoted $D_{K}$, in
surfaces of constant curvature $K$ with $K\in[-2,2]$. Since these spaces are
all contractible, their reduced singular homology is trivial and global
homology cannot distinguish them. However, the authors prove that the maximum
Čech persistence for three points sampled from $D_{K}$ determines $K$. The
authors also successfully apply the same empirical framework of average
persistence landscapes from [pmlr-v37-chazal15] to experimentally determine
the curvature of $D_{K}$ for various $K$. The authors in [PhysRevE.93.052138]
used average persistence landscapes to provide experimental verification of a
known phase transition. Finally, the authors in [10.1007/978-3-030-42266-0_14]
use average persistence landscapes to achieve improved results, compared to
standard machine learning algorithms, in disease phenotype prediction based on
subject gene expressions.
## IV Background
The content of this paper assumes familiarity with the concepts and tools of
persistent homology. Interested readers can consult the articles of Carlsson
[carlsson2009topology] and Ghrist [ghrist2008barcodes] and the textbooks of
Edelsbrunner and Harer [edelsbrunner2010computational] and Oudot
[oudot2015persistence]. We include the following primer for readers interested
in a high-level, non-technical summary.
Persistent homology records the way topology evolves in a parametrized
sequence of spaces. To apply persistent homology to a point cloud, a pre-
processing step is needed that converts the point cloud into such a sequence.
The two classical ways of doing this are called the Rips and Čech filtrations,
respectively; the former is much easier to compute than the latter, at the
expense of some geometric fidelity. Both consist of inserting simplices into
the point cloud at a parameter value equal to the proximity of the associated
vertex points. As the sequence of spaces evolves, the addition of certain
edges or higher-dimensional simplices changes the homological type of the
space – these simplices are called critical. Persistent homology records the
parameter values at which critical simplices appear, notes the dimension in
which the homology changes, and pairs critical values by matching the critical
value at which a new homological feature appears to the critical value at
which it disappears. This information is organized into a data structure
called a persistence diagram, and there are a number of metrics with which
persistence diagrams can be compared.
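For readers who want to see one piece of this concretely: the degree-$0$ part of a Rips persistence diagram can be computed by a Kruskal-style union-find sweep, since components are born at scale $0$ and die at the lengths of minimum-spanning-tree edges. A minimal sketch (all helper names are ours):

```python
from itertools import combinations
from math import dist

def h0_rips_deaths(points):
    """Death times of degree-0 Rips classes: each finite class dies when an
    edge first merges two components, i.e. at an MST edge length. One class
    (the final component) never dies and is omitted here."""
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i
    deaths = []
    edges = sorted((dist(points[i], points[j]), i, j)
                   for i, j in combinations(range(len(points)), 2))
    for r, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:            # the edge at scale r merges two components
            parent[ri] = rj
            deaths.append(r)
    return deaths

# Four collinear points: components merge at scales 1, 2, 3.
print(h0_rips_deaths([(0, 0), (1, 0), (3, 0), (6, 0)]))  # → [1.0, 2.0, 3.0]
```

Higher-degree persistence requires genuinely more machinery (boundary matrices and their reduction), which is why dedicated libraries are used in practice.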
If one forgets about the pairing and retains only the dimension information of
the critical values, the resulting invariant is called a Betti curve. Betti
curves are simpler to compute and work with than persistence diagrams, but are
less informative and harder to compare. Finally, if one also drops the
dimension information by taking the alternating sum of the Betti curves, one
gets an Euler curve. Euler curves are even less discriminative than Betti
curves, but enjoy the special symmetry properties of the Euler characteristic.
These symmetries will be put to good use in this paper.
Persistence theory guarantees that a small modification to the parametrization
of a sequence of spaces implies only small changes in its persistence diagram.
To be precise, if the appearance time of any given simplex is not delayed or
advanced by more than $\epsilon$, the persistence diagram as a whole is not
distorted by more than $\epsilon$ in the appropriate metric (called the
_Bottleneck distance_). Throughout this paper we will use the trick of
modifying filtrations by rounding their critical values to a fixed, discrete
set.
As a rule, the map sending a point cloud to its persistence diagram is not
injective, as many different point clouds share the same persistence diagram.
Moreover, the set of point clouds sharing a common persistence diagram need
not be bounded, so that arbitrarily distinct point clouds might have the same
persistence. There are a number of constructions in the TDA literature that
attempt to correct this lack of injectivity by constructing more sophisticated
invariants; these are often called _topological transforms_. Examples include
the Persistent Homology Transform [turner2014persistent] and Intrinsic
Persistent Homology Transform [oudot2017barcode]; consult [oudot2020inverse]
for a survey of inverse results in persistence. These methods are largely
unfeasible to compute exactly, unstable, and provide no global Lipschitz
bounds on their inverse, so two wildly different spaces may produce
arbitrarily similar (though not exactly identical) transforms. The distributed
topology invariant studied in this paper is injective, practically computable,
stable, and with Lipschitz inverse.
## V Theoretical Results
In what follows, we let $\lambda$ be any of the following four topological
invariants:
* •
Rips Persistence (RP).
* •
Rips Euler Curve (RE).
* •
Čech Persistence (CP).
* •
Čech Euler Curve (CE).
To be precise, RP and CP consist of persistence diagrams for every homological
degree. When working with either of these invariants, the Bottleneck or
Wasserstein distance is the maximum of the Bottleneck or Wasserstein distances
over all degrees.
### V-A Stability
A result of the following form is standard in the TDA literature, and
demonstrates the ease of producing stable invariants using persistent
homology.
###### Definition V.1.
Let $(X,d_{X})$ and $(Y,d_{Y})$ be metric spaces. A map
$\phi:(X,d_{X})\to(Y,d_{Y})$ is an $\epsilon$-quasi-isometry if
$|d_{X}(x_{1},x_{2})-d_{Y}(\phi(x_{1}),\phi(x_{2}))|\leq\epsilon$ for all
$x_{1},x_{2}\in X$.
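A quick numeric sanity check of this definition (our own illustration): moving each point of a planar cloud by at most $\epsilon/2$ yields, by the triangle inequality, an $\epsilon$-quasi-isometry of the cloud onto its perturbation.

```python
import math, random

random.seed(1)
eps = 0.2
X = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(50)]

def jitter(p, radius):
    theta = random.uniform(0, 2 * math.pi)
    r = random.uniform(0, radius)
    return (p[0] + r * math.cos(theta), p[1] + r * math.sin(theta))

# phi moves every point by at most eps/2, so each pairwise distance
# changes by at most eps (triangle inequality).
phi = {p: jitter(p, eps / 2) for p in X}

distortion = max(abs(math.dist(p, q) - math.dist(phi[p], phi[q]))
                 for p in X for q in X)
```

Here `distortion` is the quantity bounded by $\epsilon$ in the definition, so `phi` is an `eps`-quasi-isometry, and Proposition V.2 then bounds the Bottleneck distance between the corresponding persistence diagrams by the same $\epsilon$.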
###### Proposition V.2.
Let $\phi:(X,d_{X})\to(Y,d_{Y})$ be an $\epsilon$-quasi-isometry of metric
spaces. Then for all subsets $S\subseteq X$, and $\lambda$ either RP or CP,
$d_{B}(\lambda(S),\lambda(\phi(S)))\leq\epsilon$, where $d_{B}$ is the
Bottleneck distance on persistence diagrams.
###### Proof.
This follows immediately from the Gromov-Hausdorff stability theorem for
persistence diagrams of point clouds [chazal2016structure,
cohen2007stability]. ∎
### V-B $k$-Distributivity
In this section, we show how many distributed invariants suffice to determine
the isometry type of a point cloud. This provides an answer to Problem II.1.
To help motivate this result, we consider the simple cases of $k=2$ and $k=3$.
###### Lemma V.3.
All of our $\lambda$ are $2$-distributed. Moreover, the knowledge of
$\lambda_{2}$ determines the isometry type of $X$.
###### Proof.
Regardless of the invariant used, it is possible to read off the distances
between any pair of points in $X$. This determines the embedding of $X$ up to
rigid isometry (see [singer2008remark]), and hence the Rips and Čech
filtrations. ∎
Already at $k=3$, however, the distributed invariant no longer determines the isometry type.
###### Lemma V.4.
$\lambda_{3}$ does not determine the isometry type of $X$.
###### Proof.
A simple counterexample suffices. Let $X$ consist of the vertices of an obtuse
triangle with angle $\theta>\pi/2$. Varying the angle $\theta$ in
$(\pi/2,\pi)$ alters the isometry type of $X$, but leaves its topology
unchanged. ∎
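The counterexample can be checked numerically. For three points, the full Rips diagram reduces to its degree-$0$ part (the $1$-cycle and the $2$-simplex both enter at the longest edge, so no degree-$1$ class survives), and the degree-$0$ deaths are the two shortest side lengths, which do not depend on $\theta$ here. A sketch, with our own helper names:

```python
import math

def rips_diagram_3pts(a, b, c):
    """Rips persistence of a 3-point cloud: H1 is trivial (cycle and
    2-simplex appear together at the longest edge), so the diagram is the
    two finite degree-0 bars, dying at the two shortest pairwise distances."""
    d = sorted([math.dist(a, b), math.dist(b, c), math.dist(a, c)])
    return [(0.0, d[0]), (0.0, d[1])]

def obtuse_triangle(theta):
    # Two unit sides meeting at angle theta; the third side grows with theta.
    return (0.0, 0.0), (1.0, 0.0), (math.cos(theta), math.sin(theta))

# Three obtuse angles in (pi/2, pi): distinct isometry types, one diagram.
diagrams = {theta: rips_diagram_3pts(*obtuse_triangle(theta))
            for theta in (1.7, 2.2, 2.9)}
```

All three diagrams consist of two bars dying at scale $1$, even though the third side length $\sqrt{2-2\cos\theta}$ varies, so $\lambda_{3}$ cannot recover the isometry type.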
To obtain stronger results, we introduce the following two generalizations,
one to the notion of distributivity, and the other to the invariants
$\lambda$.
###### Definition V.5.
We say that $\lambda$ is $(k_{1},k_{2},\cdots,k_{r})$-distributed if
$\lambda_{k_{1}}$ through $\lambda_{k_{r}}$, taken together, determine
$\lambda$.
###### Definition V.6.
For any of our four invariants $\lambda$, let $\lambda^{m}$ be the modified
invariant restricted to the $m$-skeleton of the Rips or Čech complex. In other
words, they are persistence invariants of filtrations whose top simplices have
dimension $m$.
Setting $m=0$ provides information only on the cardinality of $X$. The
$1$-skeleton contains both geometric and topological information, and its
persistence is fast to compute. As $m$ increases, computational complexity
goes up, and the resulting invariants record higher-dimensional topological
information. The following lemma demonstrates how knowing sufficiently many
Euler characteristic invariants allows one to determine new ones.
###### Lemma V.7.
Let $\lambda$ be RE or CE. For any point cloud $X$ and $k\geq m+2$,
$\\{\lambda_{k}^{m},\lambda_{k-1}^{m},\cdots\lambda_{k-m-1}^{m}\\}$ determine
$\lambda_{k-m-2}^{m}$.
###### Proof.
Let $Y\subset X$ be a subset of size $(k-m-2)$. Let
$\\{x_{1},\cdots,x_{m+2}\\}$ be points in $X\setminus Y$, set
$W=Y\cup\\{x_{1},\cdots,x_{m+2}\\}$ and $Y_{i}=W\setminus\\{x_{i}\\}$. Then
$|W|=k$ and $|Y_{i}|=(k-1)$ for all $i$. Note that every subset of size
$(m+1)$ in $W$ is contained in some $Y_{i}$. Thus if we write $K^{m}(W)$ to
denote the $m$-skeleton of the full simplex on $W$, we have
$K^{m}(W)=\bigcup_{i}K^{m}(Y_{i})$, and the same equality holds true when the
full simplex is replaced with the Rips or Čech complex at a fixed scale $r$.
Note that in general, $K^{m}(S)\cap K^{m}(T)=K^{m}(S\cap T)$ for any subsets
$S,T\subset X$, but the same equality does not hold with intersections
replaced with unions, as there may be simplices in $K^{m}(S\cup T)$ whose set
of vertices are not contained in either $S$ or $T$. This explains why we take
all $Y_{1},\cdots Y_{m+2}$ to cover $W$.
Let us now apply the inclusion-exclusion property of the Euler characteristic
to compare the Euler characteristic of $W$ (at a given scale $r$) with those
of the $Y_{i}$.
$\chi(W^{r})=\chi\left(\bigcup_{i}Y_{i}^{r}\right)=\sum_{i}\chi(Y_{i}^{r})-\sum_{i<j}\chi(Y_{i}^{r}\cap Y_{j}^{r})+\sum_{i<j<k}\chi(Y_{i}^{r}\cap Y_{j}^{r}\cap Y_{k}^{r})-\cdots+(-1)^{m+3}\chi(Y_{1}^{r}\cap\cdots\cap Y_{m+2}^{r})$
The resulting alternating sum involves all intersections of the
$Y_{i}$, and only a single union term, $\lambda^{m}(W)$. By hypothesis, we
know the Euler characteristics of all intersections of cardinality at
least $k-m-1$. The only unknown term in the sum is $\lambda^{m}(Y)$, the
intersection of all the $Y_{i}$, which we can then solve for, completing the
proof. See Figure V.1 for a concrete example. ∎
Figure V.1: Our goal is to deduce the Euler Characteristic (at a fixed scale
$r$) of $Y$, a $1$-simplex consisting of $k=2$ points. This can be derived
from the Euler Characteristics of the other subcomplexes in the diagram above.
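The bookkeeping in this proof can be verified directly on a small example (our own code, with $m=1$, so $m+2=3$ points are added to $Y$): the Euler characteristic of an $m$-skeleton is a signed simplex count, and the alternating sum recovers $\chi$ of the unknown subset exactly.

```python
import math, random
from itertools import combinations

random.seed(2)
pts = [(random.uniform(0, 2), random.uniform(0, 2)) for _ in range(6)]
r, m = 1.0, 1

def chi(idx, scale):
    """Euler characteristic of the m-skeleton of the Rips complex on
    pts[idx] at the given scale: alternating count of simplices of
    dimension at most m (i.e. with at most m+1 vertices)."""
    total = 0
    for j in range(m + 1):                  # simplices of dimension j
        for s in combinations(idx, j + 1):
            if all(math.dist(pts[a], pts[b]) <= scale
                   for a, b in combinations(s, 2)):
                total += (-1) ** j
    return total

W = list(range(6))
xs = W[-3:]                                 # the m+2 = 3 removed points
Y = W[:-3]
Ys = [[v for v in W if v != x] for x in xs]  # Y_i = W \ {x_i}

# Inclusion-exclusion (m = 1): chi(W) = sum chi(Y_i)
#   - sum chi(Y_i ∩ Y_j) + chi(Y_1 ∩ Y_2 ∩ Y_3), with triple intersection Y.
lhs = chi(W, r)
rhs = (sum(chi(Yi, r) for Yi in Ys)
       - sum(chi([v for v in Yi if v in Yj], r)
             for Yi, Yj in combinations(Ys, 2))
       + chi(Y, r))
```

Since every simplex with at most $m+1=2$ vertices misses one of the three added points, the union of the $K^{1}(Y_{i})$ really is $K^{1}(W)$, and the two sides agree as integers, so $\chi(Y)$ can be solved for from the other terms.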
###### Corollary V.8.
Let $\lambda$ be RE or CE. For any point cloud $X$ and $k\geq m+2$,
$\\{\lambda_{k}^{m},\lambda_{k-1}^{m},\cdots\lambda_{k-m-1}^{m}\\}$ determine
$\lambda_{2}^{m}$.
###### Proof.
Lemma V.7 shows that
$\\{\lambda_{k}^{m},\lambda_{k-1}^{m},\cdots\lambda_{k-m-1}^{m}\\}$ determines
$\lambda_{k-m-2}^{m}$. By the same logic,
$\\{\lambda_{k-1}^{m},\lambda_{k-2}^{m},\cdots\lambda_{k-m-2}^{m}\\}$
determines $\lambda_{k-m-3}^{m}$. Repeating this argument, we can deduce
$\lambda_{2}^{m}$. ∎
Leveraging Lemma V.7, we prove that all of our persistence invariants are
appropriately distributed.
###### Theorem V.9.
For any of the four invariants $\lambda$, the $m$-skeleton invariant
$\lambda^{m}$ is $(k,k-1,\cdots,k-m-1)$-distributed for all $k\geq m+1\geq 2$.
Moreover, $\\{\lambda_{k}^{m},\lambda_{k-1}^{m},\cdots\lambda_{k-m-1}^{m}\\}$
determine the isometry type of $X$.
###### Proof.
When $m\geq 1$, the $m$-skeleton contains all the edges of $X$, so Lemma V.3
still applies whenever subsets of size $2$ are available. In particular, if the
set $\\{k,k-1,k-2,\cdots,k-m-1\\}$ contains $2$, the claim follows
directly from Lemma V.3. Otherwise, let us assume $\lambda$ is either RE or CE, as RP
or CP contain strictly more information than their Euler characteristic
counterparts. By Corollary V.8, we can determine $\lambda_{2}^{m}$ and then
apply Lemma V.3. ∎
###### Remark V.10.
Note that $m=1$ is sufficient to apply the prior theorem. As $m$ gets larger,
more topological information is needed to determine the isometry type of the
underlying space.
### V-C Approximate Distributivity
We now consider what happens if two point clouds have distributed invariants
which are similar but not identical. We show that this implies a quasi-
isometry between $X$ and $Y$, with constant depending quadratically on the
subset size parameter $k$. This provides a precise answer to Problem II.2 on
how the distributed statistic interpolates between geometry and topology.
The key insight in the proof of this result is that there is always a way to
modify the Rips or Čech filtrations on $X$ and $Y$ to force their distributed
invariants to coincide exactly. Taken together with the telescoping trick of
Corollary V.8, this modified invariant must agree for all subsets of size two.
Persistence stability allows us to assert that the modified invariant and the
original persistence invariant are a bounded distance apart, so equality of
the modified invariant gives near-equality of the Rips or Čech persistences on
subsets of size two, which is nothing more than pairwise distance data.
The proposed modification to our filtration consists of rounding it to a
discrete set of values. The following technical lemma shows how to pick a
rounding set $R$ that aligns two sets of points without moving any point more
than a bounded amount.
###### Lemma V.11 (Rounding Lemma).
Let $P=\\{p_{1}\leq p_{2}\leq\cdots\leq p_{N}\\}$ and
$Q=\\{q_{1},q_{2},\cdots,q_{N}\\}$ be two sets of real numbers. Define
$d_{i}=|p_{i}-q_{i}|$, let $\epsilon=\max d_{i}$ and
$\delta=\sum_{i=1}^{N}d_{i}$. Then there exists a subset $R\subset\mathbb{R}$
and a map $\pi:P\cup Q\to R$ sending a point $x$ to the unique closest element
in $R$ (rounding up at midpoints), with:
1. 1.
$\pi(p_{i})=\pi(q_{i})$ for all $i$.
2. 2.
$|\pi(x)-x|\leq 3\epsilon+4\delta$.
In particular, since $\epsilon\leq\delta$, we can replace (2) with (2*)
$|\pi(x)-x|\leq 7\delta$.
###### Proof.
The proof is a recursive construction. The first step is to add $p_{1}$ to
$R$. We then repeat the following argument, iterating through $P$. Consider
$p_{n}$, and let $r_{*}$ be the largest element of $R$ so far. If
$p_{n}<r_{*}+2\epsilon+4\delta$, skip $p_{n}$. Otherwise, initialize
$r_{n}=p_{n}$, and iterate over all $i<n$ and check that
$p_{i}>(r_{n}+r_{*})/2$ iff $q_{i}>(r_{n}+r_{*})/2$. Every time an index $i$
is found for which this condition is violated, increment $r_{n}\leftarrow
r_{n}+2d_{i}$. The effect of this incrementation is to force both $q_{i}$ and
$p_{i}$ to be strictly closer to $r_{*}$ than they are to $r_{n}$. This
condition can be violated at most once for each $p_{i}$, hence the total
incrementation is at most $2\delta$, at the end of which $r_{n}$ is added to
$R$.
Let us see why the resulting set $R$ satisfies (1) and (2). If $r_{n}$ was
added to $R$, then it is at most $2\delta$ from $p_{n}$ and $2\delta+\epsilon$
from $q_{n}$, whereas $|r_{*}-p_{n}|>2\epsilon+4\delta$ and
$|r_{*}-q_{n}|>\epsilon+4\delta$ by the triangle inequality. Thus
$\pi(q_{n})=\pi(p_{n})=r_{n}$. For $i<n$, the recursive incrementation ensures
$\pi(p_{i})=r_{n}$ if and only if $\pi(q_{i})=r_{n}$, and otherwise the value
of $\pi$ on $(p_{i},q_{i})$ is unchanged. Thus (1) is preserved. To check (2),
note that if $\pi(p_{i})=\pi(q_{i})=r_{n}$ for $i<n$, then $p_{i}$ and $q_{i}$
are closer to $r_{n}$ than any other element in $R$. By recursive hypothesis,
this distance is at most $3\epsilon+4\delta$, so $|p_{i}-r_{n}|$ and
$|q_{i}-r_{n}|\leq 3\epsilon+4\delta$.
If, on the other hand, no point was added to $R$, then
$p_{n}<r_{*}+2\epsilon+4\delta$. Let $p_{*}\in P$ be the point corresponding
to $r_{*}$. Since $r_{*}+2\epsilon+4\delta>p_{n}\geq p_{*}\geq r_{*}-2\delta$,
we know $|p_{n}-r_{*}|\leq 2\epsilon+4\delta$ and
$|q_{n}-r_{*}|\leq|q_{n}-p_{n}|+|p_{n}-r_{*}|\leq 3\epsilon+4\delta$. If we
can show that $\pi(p_{n})=r_{*}$ and $\pi(q_{n})=r_{*}$, the proof will be
complete. If $p_{n}\geq r_{*}$ then it is clear that $\pi(p_{n})=r_{*}$, and
similarly, if $q_{n}\geq r_{*}$, we have $\pi(q_{n})=r_{*}$. Thus we need to
consider what happens if $p_{n}$ or $q_{n}$ are strictly less than $r_{*}$.
Let $r_{**}<r_{*}$ be the penultimate point in $R$. Our goal is to show that
$p_{n}$ or $q_{n}$ are strictly closer to $r_{*}$ than they are to $r_{**}$.
Recall the point $p_{*}\in P$ corresponding to $r_{*}$. Since $p_{*}\leq
p_{n}$ and $|r_{*}-p_{*}|\leq 2\delta$, we know that $p_{n}\geq r_{*}-2\delta$
and $q_{n}\geq r_{*}-2\delta-\epsilon$. Thus if $p_{n}$ or $q_{n}$ are
strictly less than $r_{*}$, they are no further than $2\delta$ and
$2\delta+\epsilon$ away, respectively. However, since $|r_{*}-r_{**}|\geq
2\epsilon+4\delta$, the triangle inequality implies that $|p_{n}-r_{**}|\geq
2\epsilon+2\delta$ and $|q_{n}-r_{**}|\geq\epsilon+2\delta$. Thus, if $p_{n}$
or $q_{n}$ are smaller than $r_{*}$, they must still round up to $r_{*}$, and
not to $r_{**}$ or any other element of $R$. ∎
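The construction in this proof can be transcribed into executable form. The sketch below is our own rendering (with the scan over earlier indices repeated until no violations remain, each index triggering at most once, as the proof's accounting requires), so treat it as an illustration rather than a reference implementation.

```python
def rounding_set(P, Q):
    """Build R as in the Rounding Lemma. P must be sorted ascending; Q is
    paired with P by index."""
    d = [abs(p - q) for p, q in zip(P, Q)]
    eps, delta = max(d), sum(d)
    R = [P[0]]
    for n in range(1, len(P)):
        r_star = R[-1]
        if P[n] < r_star + 2 * eps + 4 * delta:
            continue                          # skip p_n
        r_n, used, changed = P[n], set(), True
        while changed:                        # each index triggers at most once
            changed = False
            for i in range(n):
                mid = (r_n + r_star) / 2
                if i not in used and (P[i] > mid) != (Q[i] > mid):
                    r_n += 2 * d[i]           # push the split pair below mid
                    used.add(i)
                    changed = True
        R.append(r_n)
    return R, eps, delta

def pi(x, R):
    # Nearest element of R, rounding up at midpoints.
    return min(R, key=lambda r: (abs(r - x), -r))

P = [0.0, 0.3, 5.0]
Q = [0.05, 0.25, 5.05]
R, eps, delta = rounding_set(P, Q)
```

On this example $p_{2}=0.3$ falls inside the skip window of $r_{*}=0$, so $R=\\{0,5\\}$; properties (1) and (2) of the lemma can then be checked directly against `pi`.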
###### Corollary V.12.
We can extend the set $R$ in the Rounding Lemma to a $14\delta$-dense subset
$R^{\prime}\subset\mathbb{R}$, without changing $\pi$ on $P\cup Q$. All that
is necessary is to enrich $R$ by adding points in $(\cup_{r\in
R}N(r,14\delta))^{C}$.
With our rounding trick in hand, we can now prove the central result of this
section, Theorem V.15. The following pieces of notation clarify the statement
and proof of the theorem:
###### Definition V.13.
Let $m<k$ be natural numbers. We define the following partial sum of binomial
coefficients:
$S(k,m)={k\choose 2}+{k\choose 3}+\cdots+{k\choose m+1}$
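In code, this partial sum is a one-liner (our own helper):

```python
from math import comb

def S(k, m):
    # Partial sum of binomial coefficients: C(k,2) + C(k,3) + ... + C(k, m+1).
    return sum(comb(k, j) for j in range(2, m + 2))
```

For example, $S(5,2)=\binom{5}{2}+\binom{5}{3}=20$; this constant appears in the Čech quasi-isometry bound of Theorem V.15 below.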
###### Definition V.14.
Let $(K,f)$ be a filtered simplicial complex, i.e. a simplicial complex $K$
with a real-valued function $f:K\to\mathbb{R}$ encoding the appearance times
of simplices. Given a subset $R\subset\mathbb{R}$, rounding this filtration to
$R$ consists of post-composing $f$ with the map sending every element of
$\mathbb{R}$ to its nearest element in $R$ (rounding up at midpoints). Thus,
the simplices in the rounding filtration appear only at values contained in
$R$. The effect of rounding on the resulting persistence diagrams is to round
the birth and death times of its constituent dots; no new points are
introduced.
###### Theorem V.15.
Let $\lambda$ be either RP or CP, and take $k>m>0$. Let $\phi:X\to Y$ be a
bijection such that for all $S\subseteq X$ with
$|S|\in\\{k,k-1,\cdots,k-m-1\\}$,
$d_{B}(\lambda^{m}(S),\lambda^{m}(\phi(S)))\leq\epsilon$. If $\lambda$ is RP,
$\phi$ is a $112k^{2}\epsilon$-quasi-isometry, and if $\lambda$ is CP, $\phi$
is a $224S(k,m)k^{m+1}\epsilon$-quasi-isometry.
###### Proof.
Let $(x_{1},x_{2})$ be an edge in $X$, and let $(y_{1},y_{2})$ be the
corresponding edge in $Y$. Let $S\subseteq X$ be a subset of size $k$
containing $(x_{1},x_{2})$. Let $A(S)$ be the set of appearance times of
simplices in the $m$-skeleton of $S$, and define $A(\phi(S))$ similarly. Apply
the Rounding Lemma to the following set of pairs:
$\\{(l,l+2\epsilon),(l,l-2\epsilon)\mid l\in A(S)\cup A(\phi(S))\\}$
In the language of the hypotheses of the Rounding Lemma, we have $\delta=\sum
d_{i}=4\epsilon|A(S)|+4\epsilon|A(\phi(S))|$. Let $R$ be the subset given by
the Rounding Lemma and its corollary, and let $\lambda^{R}$ denote the
invariant $\lambda^{m}$ with filtration rounded to $R$. Note that if
$S^{\prime}\subset S$ has the property that
$d_{B}(\lambda^{m}(S^{\prime}),\lambda^{m}(\phi(S^{\prime})))\leq\epsilon$,
then $\lambda^{R}(S^{\prime})=\lambda^{R}(\phi(S^{\prime}))$. To see why this
is the case, let $p=(a,b)\in\lambda^{R}(S^{\prime})\cup\Delta$ and
$p^{\prime}=(a^{\prime},b^{\prime})\in\lambda^{R}(\phi(S^{\prime}))\cup\Delta$
be dots paired in an optimal Bottleneck matching, where $\Delta$ is the
diagonal.
Let us first assume that $p$ is on the diagonal, so that
$|b^{\prime}-a^{\prime}|\leq 2\epsilon$. If $p^{\prime}$ is also on the
diagonal, then both $p$ and $p^{\prime}$ remain on the diagonal after rounding
to $R$ (or, indeed, rounding to any set of values). If $p^{\prime}$ is not on
the diagonal, $a^{\prime},b^{\prime}\in A(\phi(S))$; since
$|b^{\prime}-a^{\prime}|\leq 2\epsilon$, $a^{\prime}$ and $b^{\prime}$ are
rounded to the same point in $R$, and hence the point
$(a^{\prime},b^{\prime})$ is rounded to the diagonal.
If $p$ is not on the diagonal, then $a,b\in A(S)$, and since
$a^{\prime}\in[a-\epsilon,a+\epsilon]$ and
$b^{\prime}\in[b-\epsilon,b+\epsilon]$, we can conclude that $a$ and
$a^{\prime}$ round to the same point in $R$, and the same is true for $b$ and
$b^{\prime}$. In any case, the points $p$ and $p^{\prime}$ become identical
after rounding to $R$. Thus, using $\lambda^{R}$, $\phi$ preserves persistence
diagrams of all subsets of $S$ of size $k$ through $k-m-1$, and hence, by
Corollary V.8, all subsets of size two, in particular $(x_{1},x_{2})$. Thus,
$\lambda^{R}((x_{1},x_{2}))=\lambda^{R}((y_{1},y_{2}))$ (noting that for
subsets of size two, Euler curves and persistence diagrams contain identical
information). As $R$ is $(4\times 14)\epsilon|A(S)|+(4\times
14)\epsilon|A(\phi(S))|$ dense in $\mathbb{R}$, persistence stability implies
that $\lambda^{m}$ and $\lambda^{R}$ are within
$56\epsilon(|A(S)|+|A(\phi(S))|)$ of each other in Bottleneck distance. The
triangle inequality then tells us that
$d_{B}(\lambda^{m}((x_{1},x_{2})),\lambda^{m}((y_{1},y_{2})))\leq
112\epsilon(|A(S)|+|A(\phi(S))|)$, which is equivalent to
$|\|x_{1}-x_{2}\|-\|y_{1}-y_{2}\||\leq 112\epsilon(|A(S)|+|A(\phi(S))|)$. To
conclude the proof, note that for the Rips complex,
$|A(S)|,|A(\phi(S))|\leq{k\choose 2}=\frac{k^{2}-k}{2}\leq\frac{k^{2}}{2}$, as
all appearance times of simplices are just pairwise distances between points.
For the Čech complex, there may be a total of $S(k,m)$ distinct appearance
times in $A(S)$ or $A(\phi(S))$, one for each simplex of dimension between $1$
and $m$, that need to be rounded correctly (all dimension zero simplices
necessarily appear at height zero). ∎
###### Remark V.16.
Theorem V.15 answers Problem II.2 by showing that smaller values of $k$ give
more control of quasi-isometry type than larger values. This justifies our
claim that distributed topology interpolates between local geometry and global
topology.
Moving on to Problem II.3, the following two porisms, resulting from the proof
of Theorem V.15, show that our inverse results do not require checking _all_
subsets with cardinality $k$ through $k-m-1$, but a much smaller collection
that covers the space $X$ in the right way. Subsection V-E bounds the number
of randomly selected subsets needed to produce such a covering with high
probability.
###### Porism V.17.
The results of Theorem V.15 do not require $\phi$ to preserve the topology for
_all_ subsets $S$ with $|S|\in\\{k,k-1,\cdots,k-m-1\\}$. Rather, it suffices
to consider a collection $C$ of subsets of $X$ with the following properties:
* •
(Covering property) For every subset $\sigma$ of $X$ with $|\sigma|\leq 2$,
there is a subset $S\in C$ containing $\sigma$ with $|S|=k$.
* •
(Closure property) If $S\in C$ has $|S|=k$, and $S^{\prime}\subset S$ has
$|S^{\prime}|\geq k-m-1$, then $S^{\prime}\in C$.
This requires checking many fewer subsets of $X$, rather than ${|X|\choose
k}+{|X|\choose k-1}+\cdots+{|X|\choose k-m-1}$.
One can often check even fewer subsets by replacing the covering property with
a $\delta$-dense version:
* •
($\delta$-dense covering property) There exists a subset $X^{\prime}\subseteq
X$ with $|X^{\prime}|\geq k$, such that $X^{\prime}$ is $\delta$-dense in $X$
and $\phi(X^{\prime})$ is $\delta$-dense in $Y$, and such that for every
subset $\sigma$ of $X^{\prime}$ with $|\sigma|=2$, there is a subset $S\in C$
containing $\sigma$ with $|S|=k$.
The resulting bound is not in the quasi-isometry distance but in the Gromov-
Hausdorff distance.
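The covering and closure properties are mechanical to verify for a candidate collection $C$; the following Python sketch assumes $X$ is a finite set and $C$ a list of subsets:

```python
from itertools import combinations

def has_covering_property(X: set, C: list, k: int) -> bool:
    """Every subset sigma of X with |sigma| <= 2 lies in some S in C of size k.
    Checking pairs suffices: a covered pair also covers its singletons."""
    k_sets = [S for S in C if len(S) == k]
    return all(
        any(set(sigma) <= S for S in k_sets)
        for sigma in combinations(X, 2)
    )

def has_closure_property(C: list, k: int, m: int) -> bool:
    """If S in C has |S| = k, every S' in S with |S'| >= k - m - 1 is also in C."""
    family = {frozenset(S) for S in C}
    for S in C:
        if len(S) != k:
            continue
        for size in range(k - m - 1, k):
            for Sp in combinations(S, size):
                if frozenset(Sp) not in family:
                    return False
    return True
```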
###### Porism V.18.
Let $\lambda$ be either RP or CP, and take $k>m>0$. Let $\phi:X\to Y$ be a
bijection between metric spaces, and let $C$ be a collection of subsets of
cardinality between $k$ and $k-m-1$ that satisfies both the $\delta$-dense
covering property and the closure property. Suppose that
$d_{B}(\lambda^{m}(S),\lambda^{m}(\phi(S)))\leq\epsilon$ for all $S\in C$. If
$\lambda$ is RP, then $d_{GH}(X,Y)\leq 112k^{2}\epsilon+2\delta$, and if
$\lambda$ is CP, then $d_{GH}(X,Y)\leq 224S(k,m)k^{m+1}\epsilon+2\delta$.
###### Proof.
The proof of Theorem V.15 implies that $\phi$ is a quasi-isometry from
$X^{\prime}$ to $\phi(X^{\prime})$. We can extend this to a Gromov-Hausdorff
matching between $X$ and $Y$, and two applications of the triangle inequality
increase the bound by $2\delta$. ∎
###### Porism V.19.
If $X\subset\mathbb{R}^{d_{1}}$ and $Y\subset\mathbb{R}^{d_{2}}$, then the
quasi-isometry bound for Čech persistence in the prior theorem can be replaced
with:
$112k^{2}\left(\epsilon+\sqrt{\frac{2d_{1}}{d_{1}+1}}+\sqrt{\frac{2d_{2}}{d_{2}+1}}\right)$
Note that the added terms sum to at most $2\sqrt{2}$, so this bound is
better than the bound given in Porism V.18 for non-infinitesimal $\epsilon$,
but fails to go to $0$ as $\epsilon\to 0$.
###### Proof.
The Rips and Čech persistence of point clouds in $\mathbb{R}^{d}$ are always
within $\sqrt{\frac{2d}{d+1}}$ of one another in the bottleneck distance, cf.
Theorem 2.5 in [de2007coverage]. The result then follows by replacing Čech
persistence with Rips persistence and using the triangle inequality. ∎
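The Rips-versus-Čech interleaving term $\sqrt{2d/(d+1)}$ is easy to tabulate; a small Python sketch:

```python
from math import sqrt

def rips_cech_gap(d: int) -> float:
    """Bottleneck-distance bound between Rips and Cech persistence of a
    point cloud in R^d, cf. Theorem 2.5 of [de2007coverage]."""
    return sqrt(2 * d / (d + 1))

# rips_cech_gap(d) increases toward sqrt(2) as d grows, so the two added
# terms in Porism V.19 indeed sum to at most 2 * sqrt(2).
```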
### V-D Topology + Sparse Geometry
Our goal now is to improve the results of the prior section by giving quasi-
isometry bounds that scale linearly in $k$, rather than quadratically. This
can be accomplished by using an inclusion-exclusion argument on the
$1$-skeleton persistence of $X$ that uses only subsets of size $k$ and
$(k-1)$. Namely, given a subset $Y\subset X$ with $|Y|=(k-2)$, we take
$Y=Y_{1}\cap Y_{2}$ for $|Y_{1}|=|Y_{2}|=(k-1)$ and $W=(Y_{1}\cup Y_{2})$ with
$|W|=k$, as shown in Figure V.2, and attempt to deduce the Euler
characteristic of $Y$ from those of $Y_{1},Y_{2}$, and $W$. However, the union
of the $1$-skeleton complexes on $Y_{1}$ and $Y_{2}$ is not the $1$-skeleton
complex on $W$, owing to the fact that $W$ contains an extra edge connecting
the pair of vertices in $W\setminus Y$.
Figure V.2: Our goal is to deduce the Euler characteristic (at a fixed scale
$r$) of $Y$, a subcomplex of size 3, using subcomplexes of sizes 4 and 5 (here
$k=5$). However, the inclusion-exclusion argument fails because the union of
the complexes of $Y_{1}$ and $Y_{2}$ is not the complex on $W=Y_{1}\cup
Y_{2}$; the missing edge is shown in red.
The effect of this extra edge on persistence is quite subtle, but its effect
on the Euler curve is trivial, as it amounts to subtracting a step function
supported on $[r,\infty)$, where $r$ is the appearance time of the extra edge
in the complex. If we knew $r$, we could correct the deficit in our inclusion-
exclusion argument. Note that we have the freedom to choose $Y_{1}$ and
$Y_{2}$ as we like, so to make this argument work we need only know the length
of a single edge in $X$ that does not intersect $Y$. A very small collection
of edge lengths suffices to patch up the inclusion-exclusion argument for all
subsets of $X$ of size at most $k$. Before proving our quasi-isometry bound,
we need the following corollary of the Rounding Lemma.
###### Lemma V.20.
Given persistence diagrams $A_{1},\cdots,A_{n}$ and $B_{1},\cdots,B_{n}$ with
$W^{1}(A_{i},B_{i})\leq\delta$, there exists a $28n\delta$-dense subset
$R\subset\mathbb{R}$ such that rounding all the persistence diagrams to the
grid $R\times R$ forces $\pi(A_{i})=\pi(B_{i})$ for all $i$.
###### Proof.
This is a straightforward application of the Rounding Lemma. We take the set
$P$ to consist of all the birth and death times of all the dots in the
$A_{i}$, and construct $Q$ from the $B_{i}$ similarly. As each $(A_{i},B_{i})$
pair contributes two sets of points, births and deaths, the total $\ell^{1}$
norm of pairing $P$ with $Q$ is $2\times n\delta=2n\delta$. By Corollary V.12,
one can find a subset $R$ of density $28n\delta$ which ensures
$\pi(p_{i})=\pi(q_{i})$ for all matched pairs $p_{i}\in P,q_{i}\in Q$, and
hence $\pi(A_{i})=\pi(B_{i})$ for all $i$. ∎
###### Theorem V.21.
Let $\lambda$ be either RP or CP, and take $k>m=1$. Let $\phi:X\to Y$ be a
bijection such that for all $S\subseteq X$ with $|S|\in\\{k,k-1\\}$,
$W^{1}(\lambda^{1}(S),\lambda^{1}(\phi(S)))\leq\epsilon_{1}$. Suppose further
that there is a subset $X^{\prime}\subset X$ of size $(k-1)$ with
$\sum_{(x_{i},x_{j})\in X^{\prime}\times
X^{\prime}}|\|x_{i}-x_{j}\|-\|\phi(x_{i})-\phi(x_{j})\||\leq\epsilon_{2}.$
Then $\phi$ is a $56(k+1)\epsilon_{1}+28\epsilon_{2}$ quasi-isometry.
###### Proof.
Let $x_{1},x_{2}$ be a pair of points in $X$. Without loss of generality, we
can assume that at least one of these points is not in $X^{\prime}$, as the
proof is otherwise trivial. Thus, we can extend $x_{1},x_{2}$ to a subset $S$
of size $k$ by adding points in $X^{\prime}$. $S$ has $k$ subsets of size
$(k-1)$. The prior lemma tells us that we can find a
$28(k+1)\epsilon_{1}$-dense subset $R\subset\mathbb{R}$ such that
$\lambda^{R}(S)=\lambda^{R}(\phi(S))$, and
$\lambda^{R}(S^{\prime})=\lambda^{R}(\phi(S^{\prime}))$ for any subset
$S^{\prime}\subset S$ with $|S^{\prime}|=(k-1)$. We can further demand from the
Rounding Lemma that the appearance time of every edge in $X^{\prime}$ and
every edge in $\phi(X^{\prime})$ be exactly the same, where $R$ will now be
$28(k+1)\epsilon_{1}+14\epsilon_{2}$ dense in $\mathbb{R}$.
Now, for any subset $S^{\prime}\subset S$ containing $(x_{1},x_{2})$ with size
$|S^{\prime}|=k-2$, the set $S\setminus S^{\prime}$ consists of a pair of
points $(p_{1},p_{2})\in X^{\prime}$. We then know that
$\lambda^{R}(S^{\prime})=\lambda^{R}(\phi(S^{\prime}))$ by using an inclusion-
exclusion calculation with $S^{\prime}\cup p_{1},S^{\prime}\cup p_{2}$, and
$S^{\prime}\cup p_{1}\cup p_{2}$, since the missing term in the inclusion-
exclusion formula is exactly the same for both $X$ and $Y$, after rounding to
$R$. This argument can be iterated on the entire sublattice of $S$ consisting
of those subsets $S^{\prime}\subset S$ with $|S^{\prime}|\leq k-2$ and which
contain $(x_{1},x_{2})$. The proof concludes by an identical stability
analysis to that of Theorem V.15. ∎
###### Remark V.22.
The above proof does not require all pairwise distances in $X^{\prime}$, as
the inclusion-exclusion trick can be carried out with $O(k)$ intersections,
rather than the full sublattice of $O(k^{2})$ intersections. We have omitted
this analysis as it obfuscates the statement of the theorem and does not
significantly improve it.
### V-E Probabilistic Results
Porisms V.17 and V.18 tell us that we do not need to sample all ${|X|\choose
k}+{|X|\choose k-1}+\cdots+{|X|\choose k-m-1}$ subsets $S\subseteq X$ of size
$|S|\in\\{k,\cdots,k-m-1\\}$, so long as the collection $C$ of subsets
considered satisfies appropriate cover and closure properties. The goal of
this section is to give bounds on the probability that a randomly chosen
collection of subsets of size $k$ has the covering property. The closure
property can then be ensured by adding subsets of the appropriate
cardinalities.
###### Proposition V.23.
Let $X$ be a set of size $n$, and choose $M$ subsets
$\\{S_{1},\cdots,S_{M}\\}$ of size $k$ by uniform sampling without
replacement. Let $p\leq k$ and $A$ be the outcome that every set of $p$ points
$(x_{1},\cdots,x_{p})$ is contained in at least one $S_{i}$. Then
$P(A)\geq 1-{n\choose
p}\left(1-\left(\frac{k-p+1}{n-p+1}\right)^{p}\right)^{M}.$
###### Proof.
$\displaystyle P(A)$ $\displaystyle=1-P(\exists(x_{1},\cdots,x_{p})\mbox{ not
in any }S_{i})$ (1) $\displaystyle\geq
1-\sum_{(x_{1},\cdots,x_{p})}P((x_{1},\cdots,x_{p})\mbox{ not in any }S_{i})$
(2) $\displaystyle=1-{n\choose p}P((x_{1},\cdots,x_{p})\mbox{ not in any
}S_{i})$ (3) $\displaystyle=1-{n\choose
p}\prod_{i=1}^{M}P((x_{1},\cdots,x_{p})\mbox{ not in }S_{i})$ (4)
$\displaystyle=1-{n\choose p}\prod_{i=1}^{M}(1-P((x_{1},\cdots,x_{p})\subseteq
S_{i}))$ (5)
An elementary counting argument provides:
$P((x_{1},\cdots,x_{p})\subseteq S_{i})=\frac{{n-p\choose k-p}}{{n\choose k}}$
Note further that:
$\frac{{n-p\choose k-p}}{{n\choose
k}}=\frac{k(k-1)(k-2)\cdots(k-p+1)}{n(n-1)(n-2)\cdots(n-p+1)}\geq\left(\frac{k-p+1}{n-p+1}\right)^{p}$
Finally, observe that the effect of replacing $P((x_{1},\cdots,x_{p})\subseteq
S_{i})$ with $\left(\frac{k-p+1}{n-p+1}\right)^{p}$ is to decrease the value
of (5), and so the result is proved. ∎
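The bound of Proposition V.23 can be checked empirically against a Monte Carlo estimate; the following Python sketch (the parameter choices in the comment are illustrative) compares the two:

```python
import random
from math import comb
from itertools import combinations

def covering_bound(n: int, k: int, p: int, M: int) -> float:
    """Lower bound on P(A) from Proposition V.23."""
    q = ((k - p + 1) / (n - p + 1)) ** p
    return 1 - comb(n, p) * (1 - q) ** M

def covering_estimate(n, k, p, M, trials=2000, seed=0):
    """Monte Carlo estimate of P(A): every p-subset of an n-set is
    contained in at least one of M uniformly sampled k-subsets."""
    rng = random.Random(seed)
    X = range(n)
    hits = 0
    for _ in range(trials):
        samples = [frozenset(rng.sample(X, k)) for _ in range(M)]
        if all(any(set(sigma) <= S for S in samples)
               for sigma in combinations(X, p)):
            hits += 1
    return hits / trials

# Up to sampling noise, the bound should sit below the estimate, e.g. for
# n=10, k=5, p=2, M=20.
```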
###### Proposition V.24.
Let $A$ be as in the prior proposition. For any $\epsilon\in(0,1)$, if
$M\geq(p\log\left(\frac{ne}{p}\right)-\log(1-\epsilon))\left(\frac{n-p+1}{k-p+1}\right)^{p}$
then $P(A)\geq\epsilon$.
###### Proof.
Our goal is to have:
$1-{n\choose
p}\left(1-\left(\frac{k-p+1}{n-p+1}\right)^{p}\right)^{M}\geq\epsilon$
which is equivalent to
${n\choose p}\left(1-\left(\frac{k-p+1}{n-p+1}\right)^{p}\right)^{M}\leq
1-\epsilon$
Taking the log of both sides gives
$\log{n\choose
p}+M\log\left(1-\left(\frac{k-p+1}{n-p+1}\right)^{p}\right)\leq\log(1-\epsilon)$
Solving for $M$ gives:
$M\geq\frac{\log(1-\epsilon)-\log{n\choose
p}}{\log\left(1-\left(\frac{k-p+1}{n-p+1}\right)^{p}\right)}$ (6)
The denominator on the right-hand side of (6) is negative, so using the
identity ${n\choose p}<\left(\frac{ne}{p}\right)^{p}$, we can replace (6) with
the strictly stronger inequality:
$M\geq\frac{\log(1-\epsilon)-p\log\frac{ne}{p}}{\log\left(1-\left(\frac{k-p+1}{n-p+1}\right)^{p}\right)}$
(7)
We can then apply the identity $0\geq-x\geq\log(1-x)$ for $x\in(0,1)$, and so
replace (7) with the stronger inequality,
$M\geq\frac{\log(1-\epsilon)-p\log\frac{ne}{p}}{-\left(\frac{k-p+1}{n-p+1}\right)^{p}}$
(8)
The result then follows via simple algebra. ∎
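The bound of Proposition V.24 translates directly into a sample-size calculator; a Python sketch (the example parameters in the comment are illustrative):

```python
from math import log, ceil, e

def required_M(n: int, k: int, p: int, eps: float) -> int:
    """Number of uniformly sampled k-subsets sufficient to guarantee
    P(A) >= eps, by Proposition V.24."""
    bound = (p * log(n * e / p) - log(1 - eps)) \
        * ((n - p + 1) / (k - p + 1)) ** p
    return ceil(bound)

# e.g. how many 25-point subsets of a 100-point cloud are needed to cover
# all pairs with probability at least 0.95: required_M(100, 25, 2, 0.95)
```

As expected, the required number of samples grows as the confidence level $\epsilon$ increases and shrinks as the subset size $k$ increases.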
The following proposition can be used to bound the probability that a
collection $C$ is a $\delta$-dense covering.
###### Proposition V.25.
Suppose that the set $X$ has a probability measure $\mu$ and can be covered by
$s$ subsets $\\{X_{1},\cdots,X_{s}\\}$ with measure $\mu(X_{i})\geq 1/s$.
Choose $\\{S_{1},\cdots,S_{M}\\}$ subsets of size $k$ according to $\mu$. Let
$A$ be the outcome that for every collection of $p$ subsets
$\\{X_{i_{1}},\cdots,X_{i_{p}}\\}$, there exists some $S_{i}$ such that
$S_{i}\cap X_{i_{j}}\neq\emptyset$ for all $j$. Then
$P(A)\geq 1-{s\choose
p}\left(1-\left(\frac{k-p+1}{s-p+1}\right)^{p}\right)^{M}$
###### Proof.
Construct the set $\tilde{X}$ whose points are the sets
$\\{[X_{1}],\cdots,[X_{s}]\\}$. A subset $S\subseteq X$ maps to subset
$\tilde{S}\subseteq\tilde{X}$ in the following way: $\tilde{S}$ contains
$[X_{i}]$ if $S\cap X_{i}\neq\emptyset$. It is evident that the outcome $A$ is
equivalent to the condition that any $\\{[X_{i_{1}}],\cdots,[X_{i_{p}}]\\}$ is
contained in some $\tilde{S}_{i}$. Let $B$ be the same outcome, with a
different sampling procedure: instead of randomly picking subsets $S\subset X$
and constructing $\tilde{S}$, pick subsets $\tilde{S}$ uniformly in
$\tilde{X}$ directly. It is clear that $P(A)\geq P(B)$, because
$\mu(X_{i})\geq 1/s$ means that the likelihood of $\tilde{S}$ containing
$[X_{i}]$ is higher for the first sampling procedure than the second. But
Proposition V.23 implies that
$P(B)\geq 1-{s\choose
p}\left(1-\left(\frac{k-p+1}{s-p+1}\right)^{p}\right)^{M}$
∎
Let us explain how to produce such a measure $\mu$. Given $\phi:X\to Y$, we
define
$d_{\phi}(x_{1},x_{2})=\max\\{\|x_{1}-x_{2}\|,\|\phi(x_{1})-\phi(x_{2})\|\\}$.
Using furthest point sampling, we can produce a subset
$\\{x_{1},\cdots,x_{s}\\}$ of $X$ that is $\delta$-dense in $d_{\phi}$ for
some $\delta$, and let $X_{i}=N(x_{i},\delta)$. We define $\mu$ on $X$ via the
following mixed sampling procedure: we randomly pick a subset $X_{i}$ and then
uniformly sample its elements. The resulting measure $\mu$ satisfies the
hypotheses of the prior proposition, and a $\delta$-dense covering $C$ can be
obtained with high probability by sampling i.i.d. from $\mu$.
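The furthest point sampling step above can be sketched as follows, assuming $X$ and $Y$ are given as aligned coordinate lists with $Y[i]=\phi(X[i])$; this is a greedy $O(sn)$ sketch rather than an optimized implementation:

```python
import math

def d_phi(x1, x2, y1, y2):
    """The metric d_phi: the max of the two pairwise distances."""
    return max(math.dist(x1, x2), math.dist(y1, y2))

def furthest_point_sample(X, Y, s):
    """Greedily pick s landmark indices under d_phi, starting from index 0.
    Returns the landmarks and the density delta they achieve."""
    chosen = [0]
    dist = [d_phi(X[0], X[i], Y[0], Y[i]) for i in range(len(X))]
    while len(chosen) < s:
        i = max(range(len(X)), key=dist.__getitem__)  # furthest point
        chosen.append(i)
        dist = [min(dist[j], d_phi(X[i], X[j], Y[i], Y[j]))
                for j in range(len(X))]
    # The landmarks are delta-dense in d_phi for delta = max(dist).
    return chosen, max(dist)
```

The balls $X_{i}=N(x_{i},\delta)$ around the returned landmarks then cover $X$, and the mixed sampling measure $\mu$ picks a ball uniformly and then a point within it.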
## VI Applications
Let us return to viewing $X$ as an abstract set, and $\psi:X\to\mathbb{R}^{d}$
an embedding that turns $X$ into a point cloud. The distributed topology
$\lambda_{k}$ of $X$, as we defined it, is $\\{(S,\lambda(\psi(S)))\mid
S\subset X,|S|=k\\}$. It is often also necessary to consider the un-labeled
invariant $\\{\lambda(\psi(S))\mid S\subset X,|S|=k\\}$, particularly in
situations when distributed persistence is a feature extraction method. As we
list some applications of distributed persistence below, we will take care to
identify if the invariant needed is labeled or unlabeled.
* •
(Dimensionality Reduction) When the target dimension of
$\psi:X\to\mathbb{R}^{d}$ is too high, we may wish to learn a lower-
dimensional embedding $\pi:X\to\mathbb{R}^{d^{\prime}}$. We can force $\pi$ to
preserve the topological structure of $\psi$ by minimizing the following sum
over $\\{S\subset X\mid|S|=k\\}$:
$\sum_{S}d_{B}(\lambda(\psi(S)),\lambda(\pi(S)))$
This application uses labeled distributed topology.
* •
(Shape Registration) Given two embedded point clouds $X$ and $Y$ modeling the
same shape, it can be of interest to learn a map $f:X\to Y$ aligning
corresponding points. This can be accomplished by having $f$ minimize the
following sum over $\\{S\subset X\mid|S|=k\\}$:
$\sum_{S}d_{B}(\lambda(S),\lambda(f(S)))$
This application uses labeled distributed topology.
* •
(Feature Extraction) Given an embedded point cloud $X$, we can consider the
unlabeled set $\\{\lambda(\psi(S))\mid S\subset X,|S|=k\\}$ as a bag-of-
features invariant. These features can be vectorized, averaged, transformed
into a measure, and in any other way summarized, before being fed into a
standard supervised or unsupervised machine learning pipeline.
## VII Experiments
Suppose $X$ and $Y$ are finite subsets of Euclidean spaces and $\phi:X\to Y$
is a bijection between them. Theorem V.15 shows that we may test if $\phi$ is
a quasi-isometry by evaluating $d_{B}(\lambda^{m}(S),\lambda^{m}(\phi(S)))$
for a certain collection of subsets $S\subseteq X$. If $X$ is fixed and $Y$ is
variable, we can minimize $d_{B}(\lambda^{m}(S),\lambda^{m}(\phi(S)))$ thanks
to the differentiability of persistence computations; this has the effect of
bringing $Y$ closer in alignment with $X$. Moreover, Porisms V.17 and V.18 and
the probabilistic results in Section V-E show that correcting a relatively
small number of subsets $S\subseteq X$ is likely to force a quasi-isometry.
In the following two synthetic experiments, we follow the methodology
described above for $X$ as (1) $100$ points evenly distributed on a circle in
$\mathbb{R}^{2}$ and (2) $256$ points evenly distributed on a torus in
$\mathbb{R}^{3}$. The codomain $Y$ is initialized to be $X$ with independent
Gaussian noise added coordinate-wise. Our aim is to see whether minimizing a
distributed topological functional via gradient descent succeeds in correcting
for the large geometric distortion of adding Gaussian noise. In both cases,
every iteration step consists of uniformly sampling $k=25$ points, denoted
$S$, from $X$ and taking a step (i.e. perturbing $Y$) to minimize the loss
$W_{2}^{2}(D_{0}(S),D_{0}(\phi(S)))+W_{2}^{2}(D_{1}(S),D_{1}(\phi(S)))$, where
$D_{i}$ is the degree $i$ persistence diagram of the Rips filtration. Because
we are updating $Y$ based on only a single sample $S$, we use the Adam
optimizer [kingma2014adam] to benefit from momentum. The first (resp. second)
row in Figure VII.1 shows the initial state of $Y$, $Y$ after $10^{5}$ (resp.
$10^{6}$) iterations, and $Y$ after $2\times 10^{5}$ (resp. $2\times 10^{6}$)
iterations. For both
experiments, we observe the codomain space $Y$ re-organizing itself to closely
resemble $X$. The coloring of the points in Figure VII.1 denotes their
labeling in $X$, so that nearby points have similar colors. The fact that the
color gradients in the final positions of $Y$ are largely continuous affirms
that our optimization fixes not only the global geometry of $Y$, but also the
labeled pairwise distances, and hence gives a space quasi-isometric to $X$.
Figure VII.1: Synthetic optimization experiments. Columns correspond to
initial, intermediate, and final positions of $Y$. Color denotes labelling.
## VIII Conclusion
It has long been understood that computational complexity and sensitivity to
outliers are major challenges in the application of persistent homology in
data analysis. Moreover, the lack of a stable inverse makes it very hard to
say which geometric information is retained in a persistence diagram, and
which is forgotten. Multiple lines of research have sought to address these
problems by constructing more sophisticated topological invariants and tools,
such as the persistent homology transform, multiparameter persistence,
distributed persistence calculations [10.1145/3330345.3332147] and discrete
Morse theory. However, any gains in invertibility are compromised by sizeable
increases in computational complexity.
The focus of this paper was the simplest scheme for speeding up persistence
calculations: subsampling. Subsampling and bootstrapping are ubiquitous in
machine learning and are already being applied in topological data analysis.
What we have shown is that this simple approach also enjoys uniquely strong
theoretical guarantees. In particular, the manner in which distributed
persistence interpolates between geometry and topology is explicitly given by
quadratic bounds. Moreover, these theoretical guarantees are complemented by
the success that subsampling has seen in the TDA literature, and the robust
synthetic experiments shown above.
There remain a number of outstanding problems, both theoretical and
computational, that would complement the results of this paper and facilitate
its practical application.
* •
Distributed persistence, as we have defined it, depends on an alignment of two
data sets. In practice, we use it as an unlabeled bag of features. What
injectivity results can be obtained in this unstructured setting?
* •
Individual persistence diagrams can be challenging to work with, due to the
fact that the space of diagrams admits no Hilbert space structure [MR3968607,
Bubenik:2020aa, 2019arXiv191013935W], though there are a number of effective
vectorizations in the literature. How can these be extended or adapted to
provide vectorizations of sets of persistence diagrams coming from subsamples
of a fixed point cloud? This is a more structured problem than working with
arbitrary collections of persistence diagrams.
* •
If we are interested in recovering the global topology of $X$ rather than its
quasi-isometry or Gromov-Hausdorff type, it suffices to estimate pairwise
distances between points in adjacent Voronoi cells, at least when working with
the full Rips or Čech complex and not a skeleton. A careful analysis of this
setting could dramatically decrease the Lipschitz constants appearing in
Theorem V.15.
## References
# Independent Hyperplanes in Oriented Paving Matroids
Lamar Chidiac Winfried Hochstättler
###### Abstract
In 1993, Csima and Sawyer [3] proved that in a non-pencil arrangement of n
pseudolines, there are at least $\frac{6}{13}n$ simple points of intersection.
Since pseudoline arrangements are the topological representations of
reorientation classes of oriented matroids of rank $3$, in this paper, we will
use this result to prove by induction that an oriented paving matroid of rank
$r\geq 3$ on $n$ elements, where $n\geq 5+r$, has at least
$\frac{12}{13(r-1)}\binom{n}{r-2}$ independent hyperplanes, yielding a new
necessary condition for a paving matroid to be orientable.
Fakultät für Mathematik und Informatik,
FernUniversität in Hagen, Germany,
<EMAIL_ADDRESS>
## 1 Introduction
In 1893 Sylvester asked in [18] the following question: “Prove that it is not
possible to arrange any finite number of real points so that a right line
through every two of them shall pass through a third, unless they all lie in
the same right line.” This question remained unsolved for almost 40 years,
until it was independently raised by Erdős and solved shortly after by Gallai
in 1933 [7, 8, 6] and later on several other proofs have been found (for
reference [16], p.451 and [2], p. 65).
In other words, the Sylvester-Gallai theorem states that given a set of
non-collinear points in the Euclidean plane, we can always find at least one
line that contains exactly two of the given points; we call such a line an
ordinary line. A
generalization of this theorem to higher dimensions is not always true: given
a finite set of points in $d$-dimensional Euclidean space which is not
contained in a hyperplane, we cannot always find a hyperplane containing
exactly $d$ of the given points; such a hyperplane is called an independent
hyperplane. A counterexample is a set of 6 points, 3 on each of two skew
lines, as seen in Figure 1 below; no hyperplane spanned by these points
contains exactly 3 of them. This counterexample is due to Hansen [11];
another is the Desargues configuration in 3-space [16, p. 452].
Figure 1: An illustration of Hansen's construction [11].
Looking into Hansen’s counterexample we notice that the main issue lies in the
3-point lines, and we can actually forbid this by considering a more specific
type of point configuration. We consider oriented paving matroids which we
will define later. In fact, realizable simple oriented paving matroids of rank
4 are exactly the point configurations in 3-space that have no three-point
lines. By considering this type of matroid we can go further with the
generalization and not only prove the existence of an independent hyperplane
but also examine how many there are. To do so we will have to go back and
start with the 2-dimensional case.
The Sylvester-Gallai theorem can be dualized to the projective plane, where it
becomes equivalent to the following: for every finite set of lines not all
passing through one point (a non-pencil arrangement), among all the
intersection points of these lines, at least one is incident with exactly two
of the lines. We call such a point a simple point.
After proving the existence of at least one simple point, Dirac turned to the
question of how many. He [5] proved that there are at least 3 simple points in
an arrangement of more than 2 lines and conjectured that there are at least
n/2 simple points in an arrangement of n lines. This is a long-standing
conjecture which has been known as the Dirac-Motzkin conjecture. Apparently
neither author formally conjectured this in print, but Dirac [5] states twice
that its truth is "likely". Motzkin [16] does not seem to mention it at all.
This conjecture was and still is a motivation for many mathematicians to work
on a lower bound for the number of simple points in line arrangements, but
very few results have been obtained since then. The latest was Tao and
Green's result in 2013 [10], where they proved Dirac's conjecture when $n$ is
sufficiently large. The conjecture, however, remains open in general.
A careful consideration of the proofs of most of the results for line
arrangements reveals that the straightening of the lines which form the
arrangements plays only a very limited role. This leads naturally to the idea
of investigating arrangements of more general types; arrangements of
pseudolines.
###### Definition 1.
An arrangement of pseudolines is a finite collection of simple curves (no self
intersection) in the real projective plane satisfying the following two
properties:
1. 1.
any two curves intersect in exactly one point, where they cross.
2. 2.
the intersection of all curves is empty.
As in line arrangements, a simple point in a pseudoline arrangement is the
point formed by the intersection of exactly 2 pseudolines (curves). In [3],
simple points are referred to as ordinary points.
In the following, we present our main result that concerns a lower bound for
the number of independent hyperplanes in an oriented paving matroid. In
Section 2, we show an application of this result and then give its proof in
Section 3. Before going any further, let us describe the main connection between
oriented matroids and pseudoline arrangements and the main motivation behind
paving matroids.
In this work we consider matroids to be pairs $M=(E,\mathcal{I})$ where $E$ is
a finite set and $\mathcal{I}$ is the collection of independent sets in $M$.
We denote by $M/e$ the matroid we get when we contract an element $e$ in $M$,
and $M\backslash e$ the matroid we get when we delete an element $e$ in $M$.
We assume basic familiarity with matroid theory and oriented matroids. The
standard references are [17, 1].
###### Definition 2.
Let $M$ be a matroid of rank $r$ and $H$ a hyperplane. We say that $H$ is
* •
simple, if there exists $e\in H$ such that $H\backslash\\{e\\}$ is a flat.
* •
independent, if it has exactly $r-1$ elements, i.e., if it is simple for every
element $e\in H$.
* •
multiple if it is not simple.
After Hansen's counterexample to the natural generalization of Sylvester's
theorem to higher dimensions, another generalization was proven by him in
[11], where he showed that any real point configuration of any dimension has
a simple hyperplane, but not necessarily an independent one. Later on, Murty
[14] conjectured that Hansen's theorem is also true for oriented matroids,
i.e., any simple oriented matroid (a matroid without loops or parallel
elements) has a simple hyperplane.
As mentioned earlier, the main problem with Hansen’s counterexample is that
the 6 point configuration does not correspond to a paving matroid since we
have three points lying on the same line.
###### Definition 3.
A uniform matroid with a ground set $E$ of size $n$ and rank $r$ is a matroid
where every subset of $E$ of size $r$ is a basis. A paving matroid is a
matroid in which every circuit has size either $r$ or $r+1$, where $r$ is the
rank of the matroid. In a point configuration, this means that no $r-1$ points
lie on the same flat of co-dimension 2.
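The paving condition can be tested from a basis oracle, since a rank-$r$ matroid is paving exactly when every $(r-1)$-subset is independent (equivalently, every circuit has size at least $r$); a Python sketch for a matroid given by its list of bases:

```python
from itertools import combinations

def rank(S, bases):
    """Rank of S in the matroid given by its list of bases:
    rank(S) = max over bases B of |S intersect B|."""
    S = set(S)
    return max(len(S & set(B)) for B in bases)

def is_paving(E, bases, r):
    """A rank-r matroid is paving iff every (r-1)-subset is independent."""
    return all(rank(S, bases) == r - 1 for S in combinations(E, r - 1))

# The uniform matroid U(2, 4), whose bases are all 2-subsets, is paving;
# a rank-3 matroid with two parallel elements (a 2-circuit) is not.
```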
Note that paving matroids are those matroids which are uniform or only
slightly deviate from being uniform. Mayhew et al. [15] conjecture that almost
all matroids are paving. This is mainly why we find paving matroids
interesting, especially since in rank 3 all simple matroids are paving. The
following is immediate:
###### Proposition 1.
The class of paving matroids is closed under minors.
Oriented matroids and pseudoline arrangements are strongly connected by the
topological representation theorem.
###### Theorem 1 (The Topological Representation Theorem [9, 12]).
Any reorientation class of a rank-3 oriented matroid has a representation as a
pseudoline arrangement.
In this representation, the pseudolines of the arrangement are the elements of
the oriented matroid, their intersection points are the hyperplanes of the
matroid, and their simple points of intersection are exactly the independent
hyperplanes of the oriented matroid. Therefore, counting simple points in a
pseudoline arrangement amounts to counting independent hyperplanes in the
oriented matroid.
Only two main results on the lower bound for the number of simple points in
line arrangements have been obtained since Dirac’s conjecture in 1951. The
first is due to Kelly and Moser in 1958 [13], who proved that an arrangement
of $n$ lines has at least $3n/7$ simple points, with equality occurring in the
Kelly-Moser configuration shown in Figure 2. The second is due to Csima and
Sawyer in 1993 [3], who improved the Kelly-Moser bound to $6n/13$, with the
Kelly-Moser configuration being the only exception. In the same paper, Csima
and Sawyer indicated that their proof extends easily to arrangements of
pseudolines. This is why we can use their result to prove our main result, a
lower bound on the number of independent hyperplanes in oriented paving
matroids.
Figure 2: The Kelly-Moser configuration [13].
###### Theorem 2.
An oriented paving matroid $M$ of rank $r\geq 3$ on $n$ elements, where $n\geq
5+r$, has at least
$f(n,r)=\dfrac{12}{13(r-1)}\,\,\binom{n}{r-2}$
independent hyperplanes.
The proof can be found in Section 3.
## 2 Application of Theorem 2
We apply Theorem 2 to give a short proof that the matroid $AG(3,2)^{\prime}$
is not orientable. The matroid $AG(3,2)^{\prime}$, shown in Figure 3, is the
unique relaxation of the binary affine cube AG(3,2). It is an 8-element
matroid of rank 4 whose 4-point planes are the six faces of the cube, the six
diagonal planes such as {1,2,7,8}, and one twisted plane {1,8,3,6}.
Figure 3: An illustration of the non-orientable simple paving matroid $AG(3,2)^{\prime}$ (see [17], page 508).
$AG(3,2)^{\prime}$ is a simple paving matroid of rank 4. By our theorem, in
order for $AG(3,2)^{\prime}$ to be orientable, it should have at least
$f(8,4)=\dfrac{12}{13\cdot 3}\,\,\binom{8}{2}=\dfrac{336}{39}\cong 8.6$
independent hyperplanes. The independent hyperplanes in this matroid are the
sets of 3 elements that do not lie on a common 4-point plane. Inspecting the
figure, we find that there are only 4 independent hyperplanes, namely {2,4,5},
{2,4,7}, {2,5,7}, and {4,5,7}. Therefore, by Theorem 2, $AG(3,2)^{\prime}$ is
not orientable. This was already shown by da Silva in [4], where she in fact
proves a more general and stronger result, but by a more complicated argument.
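The bound of Theorem 2 is easy to evaluate numerically; the following sketch (with a hypothetical helper `f`) reproduces the computation above:

```python
from math import comb

def f(n, r):
    """Lower bound of Theorem 2 on the number of independent hyperplanes."""
    return 12 / (13 * (r - 1)) * comb(n, r - 2)

# AG(3,2)' has n = 8 elements and rank r = 4, so orientability would require
# at least f(8,4) = 336/39 independent hyperplanes; it only has 4.
print(f(8, 4))  # 336/39, approximately 8.615
```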
## 3 Proof of Theorem 2
We proceed by induction on $r\geq 3$.
By the Topological Representation Theorem 1, a rank-3 oriented matroid has a
representation as a pseudoline arrangement in the projective plane. By Csima
and Sawyer [3], except for the Kelly-Moser configuration, a pseudoline
arrangement has at least $6n/13$ simple points. Since $n\geq 5+r$, the Kelly-
Moser configuration is excluded. In matroid terms, this yields $6n/13$
independent hyperplanes in a rank-3 oriented matroid, which establishes the
base case of our induction.
Now assume $r>3$. We fix an element $e\in E$. By Proposition 1, the
contraction $M/e$ is paving and has at least $5+(r-1)$ elements, so by
induction, $M/e$ has at least
$f(n-1,r-1)=\dfrac{12}{13(r-2)}\,\,\binom{n-1}{r-3}$
independent hyperplanes.
Now any independent hyperplane in $M/e$ can be extended by $e$ to an
independent hyperplane in $M$. To prove this, we take an independent
hyperplane $H$ in $M/e$ and prove that $H\cup e$ is an independent hyperplane
in $M$.
We have $r(H)=r-2$, thus $r(H\cup e)=r-1=|H\cup e|$, so $H\cup e$ is
independent. It is also a flat: if it were not, there would exist an element
$x\in\operatorname{cl}(H\cup e)\backslash(H\cup e)$, and then
$x\in\operatorname{cl}_{M/e}(H)\backslash H$, contradicting the fact that $H$
is closed in $M/e$. Therefore $H\cup e$ is an independent flat of rank $r-1$,
i.e. an independent hyperplane in $M$.
Each of the $f(n-1,r-1)$ independent hyperplanes in $M/e$ extends by $e$ to an
independent hyperplane in $M$, and this holds for every element $e$ of the
matroid. Since an independent hyperplane of $M$ has $r-1$ elements, it arises
in this way for exactly $r-1$ choices of $e$ and is thus counted $r-1$ times,
so $M$ has at least
$\frac{n}{r-1}f(n-1,r-1)=\dfrac{12}{13(r-1)}\,\,\binom{n}{r-2}=f(n,r)$
independent hyperplanes. This concludes the proof of Theorem 2.
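The two ingredients of the induction can be sanity-checked numerically; the snippet below (with `f` a hypothetical helper implementing the bound) verifies the base case and the counting identity used in the last step:

```python
from math import comb, isclose

def f(n, r):
    """The bound f(n, r) = 12/(13(r-1)) * C(n, r-2) of Theorem 2."""
    return 12 / (13 * (r - 1)) * comb(n, r - 2)

# Base case r = 3 recovers the Csima-Sawyer bound 6n/13 ...
for n in range(8, 30):
    assert isclose(f(n, 3), 6 * n / 13)

# ... and the counting identity n/(r-1) * f(n-1, r-1) = f(n, r) closes the
# induction step.
for r in range(4, 10):
    for n in range(r + 5, r + 25):
        assert isclose(n / (r - 1) * f(n - 1, r - 1), f(n, r))
```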
## 4 Discussion
In the rank-3 case all simple matroids are paving, so being paving is a
restriction only in rank at least four. While in the pseudoline case the bound
is known to be almost tight [3], one might expect that, due to the simplicity
of the proof, our bound is quite weak in higher dimensions. However, taking
$n-1$ points in general position in $\mathbb{R}^{d-1}$ together with an extra
point that increases the dimension yields a paving matroid of rank $d+1$ with
one multiple hyperplane and $\binom{n-1}{d-1}=\binom{n-1}{r-2}$ independent
ones. Note that in this example all but one hyperplane are independent. It
still seems possible that, in rank at least $4$, at least a linear fraction of
the hyperplanes is always independent.
As we have seen, paving matroids are closed under minors. But they are not
closed under taking duals. A paving matroid where the dual is paving as well
is called sparse paving. It is easy to see that a paving matroid is sparse
paving if and only if any two circuits $C_{1},C_{2}$ with
$|C_{1}|=|C_{2}|=r$ satisfy $|C_{1}\cap C_{2}|\leq r-2$.
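The intersection condition above can likewise be tested mechanically; `is_sparse_paving` is a hypothetical sketch that assumes the input matroid is already known to be paving:

```python
def is_sparse_paving(rank, circuits):
    """For a paving matroid: sparse paving iff any two circuits of size
    `rank` intersect in at most rank - 2 elements."""
    r_circuits = [set(c) for c in circuits if len(c) == rank]
    return all(len(a & b) <= rank - 2
               for i, a in enumerate(r_circuits)
               for b in r_circuits[i + 1:])

# Two rank-size circuits sharing rank - 1 elements violate the condition.
assert not is_sparse_paving(3, [{0, 1, 2}, {0, 1, 3}])
# Disjoint rank-size circuits satisfy it.
assert is_sparse_paving(3, [{0, 1, 2}, {3, 4, 5}])
```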
One might be interested in the following two problems:
###### Problem 1.
Is there a class of sparse paving matroids of rank $4$ with only a subcubic
number of independent hyperplanes?
Finally, coming back to the simple hyperplanes we would be interested in the
following
###### Problem 2.
Does any paving matroid of rank at least $4$ always have $r-2$ points which
belong to at least as many simple as multiple hyperplanes?
Note that in rank 3 the matroid of $K_{4}$ is a counterexample to the above.
### Acknowledgment
The authors would like to thank Manfred Scheucher for his collaboration and
efforts which helped to enhance the presentation of the results.
## References
* [1] Anders Björner, Michel Las Vergnas, Bernd Sturmfels, Neil White, and Günter M. Ziegler. Oriented matroids, volume 46 of Encyclopedia of Mathematics and its Applications. Cambridge University Press, second edition, 1999.
* [2] H. S. M. Coxeter. Introduction to geometry. John Wiley & Sons, Inc., New York-London, 1961.
* [3] J. Csima and E. T. Sawyer. There exist $6n/13$ ordinary points. Discrete Comput. Geom., 9(2):187–202, 1993.
* [4] I.P.F. da Silva. Orientability of cubes and las vergnas cube conjecture. preprint, 2006.
* [5] G. A. Dirac. Collinearity properties of sets of points. Quart. J. Math. Oxford Ser. (2), 2:221–227, 1951.
* [6] P. Erdős. Personal reminiscences and remarks on the mathematical work of Tibor Gallai. Combinatorica, 2(3):207–212, 1982.
* [7] P. Erdős, Richard Bellman, H. S. Wall, James Singer, and V. Thébault. Problems and Solutions: Advanced Problems: Problems for Solution: 4065-4069. Amer. Math. Monthly, 50(1):65–66, 1943.
* [8] P. Erdős and Robert Steinberg. Problems and Solutions: Advanced Problems: Solutions: 4065. Amer. Math. Monthly, 51(3):169–171, 1944.
* [9] Jon Folkman and Jim Lawrence. Oriented matroids. J. Combin. Theory Ser. B, 25(2):199–236, 1978.
* [10] Ben Green and Terence Tao. On sets defining few ordinary lines. Discrete Comput. Geom., 50(2):409–468, 2013.
* [11] Sten Hansen. A generalization of a theorem of Sylvester on the lines determined by a finite point set. Math. Scand., 16:175–180, 1965.
* [12] W. Hochstättler. Oriented matroids from wild spheres. In Proceedings of the Colloguy “Discrete Optimization—Structure and Stability of Dynamical Systems” (Cologne, 2000), volume 7, pages 16–26, 2002. Special Issue.
* [13] L. M. Kelly and W. O. J. Moser. On the number of ordinary lines determined by $n$ points. Canadian J. Math., 10:210–219, 1958.
* [14] Arnaldo Mandel. Topology of oriented matroids. ProQuest LLC, Ann Arbor, MI, 1982. Thesis (Ph.D.)–University of Waterloo (Canada).
* [15] Dillon Mayhew, Mike Newman, Dominic Welsh, and Geoff Whittle. On the asymptotic proportion of connected matroids. European J. Combin., 32(6):882–890, 2011.
* [16] Th. Motzkin. The lines and planes connecting the points of a finite set. Trans. Amer. Math. Soc., 70:451–464, 1951.
* [17] James Oxley. Matroid theory, volume 21 of Oxford Graduate Texts in Mathematics. Oxford University Press, Oxford, second edition, 2011.
* [18] James Joseph Sylvester. Mathematical question 11851. Educational Times, 59(98):256, 1893.
# Magic Conditions for Multiple Rotational States of Bialkali Molecules in
Optical Lattices
Q. Guan Department of Physics, Temple University, Philadelphia, PA 19122, USA
Simon L. Cornish Joint Quantum Centre (JQC) Durham-Newcastle, Department of
Physics, Durham University, South Road, Durham, DH1 3LE S. Kotochigova
Department of Physics, Temple University, Philadelphia, PA 19122, USA
###### Abstract
We investigate magic-wavelength trapping of ultracold bialkali molecules in
the vicinity of weak optical transitions from the vibrational ground state of
the X${}^{1}\Sigma^{+}$ potential to low-lying rovibrational states of the
b${}^{3}\Pi_{0}$ potential, focussing our discussion on the 87Rb133Cs molecule
in a magnetic field of $B=181\,$G. We show that a frequency window exists
between two nearest neighbor vibrational poles in the dynamic polarizability
where the trapping potential is “near magic” for multiple rotational states
simultaneously. We show that the addition of a modest DC electric field of
$E=0.13\,\text{kV}/\text{cm}$ leads to an exact magic-wavelength trap for the
lowest three rotational states at an angular-frequency detuning of
$\Delta_{v^{\prime}=0}=2\pi\times 218.22$ GHz from the
X${}^{1}\Sigma^{+}(v=0,J=0)\rightarrow$ b${}^{3}\Pi_{0}(v^{\prime}=0,J=1)$
transition. We derive a set of analytical criteria that must be fulfilled to
ensure the existence of such magic frequency windows and present an analytic
expression for the position of the frequency window in terms of a set of
experimentally measurable parameters. These results should inform future
experiments requiring long coherence times on multiple rotational transitions
in ultracold polar molecules.
## I Introduction
Ultracold polar molecules present a wealth of opportunities in quantum science
and technology Carr et al. (2009). Proposed applications span the fields of
precision measurement and metrology Zelevinsky et al. (2008); Salumbides et
al. (2011, 2013); Tarbutt et al. (2013); Schiller et al. (2014); Borkowski
(2018); Borkowski et al. (2019), quantum-state resolved chemistry Krems
(2008); Bell and Softley (2009); Ospelkaus et al. (2010a); Dulieu et al.
(2011); Balakrishnan (2016), dipolar quantum matter Santos et al. (2000);
Micheli et al. (2007); Pollet et al. (2010); Capogrosso-Sansone et al. (2010);
Baranov et al. (2012); Lechner and Zoller (2013), quantum simulation Barnett
et al. (2006); Micheli et al. (2006); Büchler et al. (2007); Macià et al.
(2012); Manmana et al. (2013); Gorshkov et al. (2013) and quantum information
processing DeMille (2002); Yelin et al. (2006); Zhu et al. (2013); Herrera et
al. (2014); Ni et al. (2018); Sawant et al. (2020); Hughes et al. (2020).
Recent experimental progress on the production of ultracold molecules by
association Ni et al. (2008); Danzl et al. (2008); Lang et al. (2008);
Takekoshi et al. (2014); Molony et al. (2014); Park et al. (2015); Guo et al.
(2016); Rvachov et al. (2017); Seeßelberg et al. (2018); Yang et al. (2019);
Voges et al. (2020) and direct laser cooling Shuman et al. (2010); Barry et
al. (2014); Truppe et al. (2017); Kozyryev et al. (2017); Anderegg et al.
(2018); Collopy et al. (2018) has brought many of these applications within
reach.
In the realm of quantum simulation and computation, the rotational structure
of ultracold molecules provides a rich basis of long-lived states in which to
encode pseudo-spins or quantum information. Owing to the permanent molecular-
frame electric dipole moment, the rotational states can be conveniently
manipulated with microwave fields, as already demonstrated in a number of
settings Ospelkaus et al. (2010b); Yan et al. (2013); Gregory et al. (2016);
Will et al. (2016); Guo et al. (2018); Blackmore et al. (2020). Moreover,
laboratory-frame dipole moments can be engineered using applied electric
fields or superpositions of rotational states. The resulting long-range
interaction between molecules can be exploited to realise model Hamiltonians
in quantum magnetism Barnett et al. (2006); Micheli et al. (2006); Gorshkov et
al. (2011a, b); Manmana et al. (2013); Hazzard et al. (2013) and two-qubit
gates for quantum information processing DeMille (2002); Yelin et al. (2006);
Zhu et al. (2013); Herrera et al. (2014); Ni et al. (2018); Sawant et al.
(2020); Hughes et al. (2020). Generating useful interaction strengths
necessitates inter-molecular distances below a micrometre. This is most
readily achieved using optical potentials, either in the form of an optical
lattice Moses et al. (2015); Reichsöllner et al. (2017) or an array of optical
tweezers Liu et al. (2019); Anderegg et al. (2019).
For diatomic molecules, such as ground-state bialkali molecules Ni et al.
(2008); Danzl et al. (2008); Lang et al. (2008); Takekoshi et al. (2014);
Molony et al. (2014); Park et al. (2015); Guo et al. (2016); Rvachov et al.
(2017); Seeßelberg et al. (2018); Yang et al. (2019); Voges et al. (2020), the
dynamic polarizability along the molecular axis ($\alpha_{\parallel}$) is, in
general, different from that perpendicular to it ($\alpha_{\perp}$). For light
polarized at an angle $\theta$ to the molecular axis, this leads to a dynamic
polarizability in the body-fixed frame given by,
$\displaystyle\alpha(\theta)=\alpha^{(0)}+\alpha^{(2)}P_{2}(\cos(\theta)),$
(1)
where $\alpha^{(0)}=\frac{1}{3}(\alpha_{\parallel}+2\alpha_{\perp})$ and
$\alpha^{(2)}=\frac{2}{3}(\alpha_{\parallel}-\alpha_{\perp})$ are the
isotropic and anisotropic components of the polarizability tensor,
respectively. $\alpha_{\parallel}$ and $\alpha_{\perp}$ result from a sum over
all allowed molecular transitions for the component of the dipole operator
parallel or perpendicular to the molecular axis, respectively, and are smooth
functions of wavelength in the regime where the frequency of the trapping
laser is far-detuned from any rovibronic transitions Kotochigova and Tiesinga
(2006); Vexiau et al. (2017); Li et al. (2017). In the lab frame, the dynamic
polarizability can be thought of as the spatial average of $\alpha(\theta)$.
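A minimal numerical sketch of Eq. (1), with illustrative (not physical) values of $\alpha_{\parallel}$ and $\alpha_{\perp}$, confirms that the decomposition recovers the parallel and perpendicular polarizabilities at $\theta=0$ and $\theta=\pi/2$:

```python
import numpy as np

def alpha(theta, a_par, a_perp):
    """Body-fixed dynamic polarizability of Eq. (1)."""
    a0 = (a_par + 2 * a_perp) / 3           # isotropic component
    a2 = 2 * (a_par - a_perp) / 3           # anisotropic component
    p2 = (3 * np.cos(theta) ** 2 - 1) / 2   # Legendre polynomial P_2(cos theta)
    return a0 + a2 * p2

# Sanity checks: theta = 0 recovers a_par; theta = pi/2 recovers a_perp.
a_par, a_perp = 7.0, 3.0  # illustrative values in arbitrary units
assert np.isclose(alpha(0.0, a_par, a_perp), a_par)
assert np.isclose(alpha(np.pi / 2, a_par, a_perp), a_perp)
```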
Although $\alpha^{(0)}$ is the same for all rotational states, $\alpha^{(2)}$
strongly mixes states with different rotational projections in excited
rotational states. It follows that for molecules confined in an optical
potential, the anisotropic polarizability leads to rotational transition
frequencies that are strongly dependent on the intensity and polarization of
the trapping light. The concomitant state-dependent light shifts make it
highly challenging to achieve rotational coherence times that are sufficiently
long to be sensitive to the $\sim$kHz interaction strengths Yan et al. (2013);
Seeßelberg et al. (2018) typical of most molecules. Nevertheless, several
approaches have been developed to match the polarizabilities of two specific
states within a molecule. These include judicious choice of the intensity and
polarisation of the trapping light Neyenhuis et al. (2012); Gregory et al.
(2017); Blackmore et al. (2018) and the addition of applied electric fields to
simplify the couplings within the molecule Kotochigova and DeMille (2010); Li
et al. (2017); Seeßelberg et al. (2018).
Inspired by the magic-wavelength traps used in atomic clocks Katori et al.
(2003); Ye et al. (2008), it is natural to investigate magic-wavelength
trapping for molecules. Intuitively, magic trapping independent of the
molecular rotational state can be realized under the condition of
$\alpha^{(2)}=0$. To search for this condition, one needs to tune the trapping
laser wavelength into a regime where there is significant interplay between
several ro-vibrational poles in $\alpha_{\parallel}$ and $\alpha_{\perp}$.
Indeed, following this approach, very recent work has demonstrated state-
insensitive trapping for two vibrational Kondov et al. (2019) or rotational
Bause et al. (2020) levels. These magic-frequency traps show reduced
sensitivity to experimental parameters, enabling longer coherence times to be
achieved. However, numerous proposed applications make greater use of the rich
internal structure of molecules by simultaneously addressing more than two
rotational levels. Examples include, coupling three rotational levels with
microwave fields to realize highly tunable models in quantum magnetism
Gorshkov et al. (2011b) and mapping many rotational levels onto a synthetic
dimension Sundar et al. (2018). It is therefore pertinent to ask whether the
concept of a magic frequency trap can be extended to multiple rotational
levels simultaneously.
In this work, we investigate magic-wavelength trapping of ultracold bialkali
molecules in the vicinity of weak optical transitions from the vibrational
ground state of the X${}^{1}\Sigma^{+}$ potential to low-lying rovibrational
states of the b${}^{3}\Pi_{0}$ potential, focussing our discussion on the
87Rb133Cs molecule. We show that a magic trapping frequency window for
multiple rotational states of the X${}^{1}\Sigma^{+}$ potential exists between
two nearest neighbor vibrational poles of the b${}^{3}\Pi_{0}$ potential, far
away from any rotational poles. Within this window, the laser trapping is
“near magic” for multiple rotational states simultaneously and is exactly
magic for pairs of neighboring rotational states at specific laser
frequencies. Moreover, the “near magic” frequency window can be tuned to a
true magic frequency for the lowest three rotational states by applying an
experimentally accessible DC electric field. This true triple magic condition
is expected to be useful for future studies of synthetic spin-1 systems using
ultracold molecules. The existence of such a magic frequency window relies on
a set of strict criteria which we derive analytically. We show that these
criteria can be satisfied near the narrow
X${}^{1}\Sigma^{+}\rightarrow\mathrm{b}^{3}\Pi_{0}$ transitions for heavy
molecules, including 87Rb133Cs and 23Na87Rb. We also derive an analytic
expression for the position of the frequency window in terms of a set of
experimentally measurable parameters, such as transition widths and transition
wavelengths. This will provide a straightforward, self-consistent approach to
search for the magic trapping frequency window in future experiments.
This paper is organized as follows. Section II presents the general
theoretical framework describing the molecular rotational states in the lowest
vibrational state of the ground electronic potential in the presence of
applied magnetic, electric and optical fields. In section III, we discuss the
hyperfine structure of the 87Rb133Cs molecule in the presence of applied
magnetic and electric fields with a view to identifying the best target states
in each rotational level for magic trapping. In section IV, we consider the
AC-Stark shift and dynamic polarizability of 87Rb133Cs molecules in the
vicinity of the weakly allowed
X${}^{1}\Sigma^{+}\rightarrow\mathrm{b}^{3}\Pi_{0}$ transitions. In section V,
we identify magic trapping frequencies by searching for crossings among the
frequency-dependent dynamic polarizability curves of different rotational
states. We present a simple analytic treatment that shows excellent agreement
with our numerical results, both near-resonance and in the magic frequency
window between two vibrational poles. Imaginary polarizabilities for
rotational states in the magic frequency window are also calculated. In
section VI, we discuss the wider significance of our work, before concluding
in section VII.
## II Theoretical Framework
We focus on the molecular rotational states $\vec{J}$ associated with the
$v=0$ vibrational state of the ground electronic state of RbCs. The effective
Hamiltonian that describes the system in the presence of a static magnetic
field $\vec{B}$, a static electric field $\vec{E}$, and an optical laser field
of intensity $I$ Petrov et al. (2013); Gregory et al. (2016); Li et al. (2017)
is given by:
$\displaystyle
H=H_{\text{rot}}+H_{Z}+H_{\text{hf}}+H_{\text{DC}}+H_{\text{AC}},$ (2)
where the rotational Hamiltonian is
$\displaystyle H_{\text{rot}}=B_{v}\vec{J}^{2}\,,$ (3)
the Zeeman Hamiltonian is
$\displaystyle H_{Z}=-g_{r}\mu_{\rm
N}\vec{J}\cdot\vec{B}-\sum_{k=1}^{2}g_{k}\mu_{\rm
N}\vec{I}_{k}\cdot\vec{B}(1-\sigma_{k})\,,$ (4)
the nuclear quadrupole interaction is
$\displaystyle
H_{\text{hf}}=\sum_{k=1}^{2}\frac{(eqQ)_{k}}{I_{k}(I_{k}-1)}C_{2}(\alpha,\beta)T_{2}(\vec{I}_{k},\vec{I}_{k})\,,$
(5)
and the DC-Stark shift is
$\displaystyle H_{\text{DC}}=-\vec{d}\cdot\vec{E}\,.$ (6)
In Eqs. (3)-(6), $\vec{J}$, $\vec{I}_{k}$, and $\vec{d}$ denote the molecular
orbital angular momentum operator, the nuclear spin operator of the $k$-th
atom, and the permanent molecular electric dipole moment operator,
respectively. The nuclear quadrupole interaction $H_{\text{hf}}$ couples the
nuclear spin to rotational states and depends on the quadrupole coupling
constants $(eqQ)_{k}$ for Rb and Cs obtained from Ref. Gregory et al. (2016).
The operator $T_{2}(\vec{I}_{k},\vec{I}_{k})$ is a rank-2 tensor and
$C_{2}(\alpha,\beta)=\sqrt{4\pi/5}Y_{20}(\alpha,\beta)$ is the modified
spherical harmonic function, where the angles $\alpha$, $\beta$ describe the
orientation of the diatomic molecule in the space-fixed coordinate frame. In
these equations $B_{v}$ is the rotational constant, $\mu_{\rm N}$ is the
nuclear magneton, and $g_{r}$ is the molecule rotational $g$-factor. Moreover,
$g_{k}$ and $\sigma_{k}$ with $k=1,2$ are nuclear-spin $g$-factors and
isotropic molecular nuclear shielding factors, respectively.
Here, the direction of the external magnetic field is our quantization axis
along which we define projection quantum numbers of angular momenta. The
matrix elements of the Hamiltonian are determined in a low-energy set of basis
functions $|J,M;m_{1},m_{2}\rangle$, where $J$ and $M$ are the orbital angular
momentum and its projection, respectively. The quantum numbers $m_{k}$ are the
nuclear spin projections of the $k$-th atom.
The AC-Stark Hamiltonian $H_{\text{AC}}$ in Eq. (2) is constructed up to
second order in the electric field strength of the driving laser in the regime
where the AC-Stark shift is much smaller than the rotational constant. In this
regime, the AC-Stark Hamiltonian $H_{\text{AC}}$ is
$\displaystyle H_{\text{AC}}$ $\displaystyle=$
$\displaystyle-\frac{I}{\epsilon_{0}c}\sum_{\begin{subarray}{c}J,M,M^{\prime},\\\
m_{1},m_{2}\end{subarray}}|J,M^{\prime};m_{1},m_{2}\rangle\langle
J,M;m_{1},m_{2}|$ (7) $\displaystyle\quad\times\sum_{f}\frac{\langle
J,M^{\prime}|\vec{d}_{\rm tr}\cdot\vec{\epsilon}^{*}|f\rangle\langle
f|\vec{d}_{\rm
tr}\cdot\vec{\epsilon}|J,M\rangle}{E_{f}-(E_{J}+\hbar\omega)}\,,$
where energies $E_{J}$ are the eigenvalues of $H_{\rm rot}$, $\vec{d}_{\rm
tr}$, $\vec{\epsilon}$, and $\omega$ are the molecular transition electric
dipole moment operator, the laser polarization, and the laser angular
frequency, respectively. The summations over $J$, $M$, $M^{\prime}$, and
$m_{k}$ only contain basis functions in the low-energy space. The summation
$f$ in Eq. (7) is over all ro-vibrational states and continua of excited
electronic states with energies $E_{f}$ excluding their Zeeman, hyperfine, and
DC-Stark shifts. We have included previously studied Kotochigova and Tiesinga
(2006); Kotochigova and DeMille (2010) excited electronic states that
dissociate to limits where only one of Rb or Cs is excited to its
energetically-lowest excited $n$P state. In this work, we are interested in
the regime where the AC-Stark shift is much smaller than the rotational
constant. Thus, in writing Eq. (7), couplings between the states with
different orbital angular momenta $J$ are neglected. Finally, $\epsilon_{0}$,
$c$, and $\hbar$ are the vacuum permittivity, the speed of light in vacuum,
and the reduced Planck’s constant, respectively.
We diagonalize Eq. (2) in the basis $|J,M;m_{1},m_{2}\rangle$ including
$J\leq$ 20 to find eigenenergies $E_{i}$ and corresponding eigenstates
$|i\rangle$ of the molecular system. The dynamic polarizability of an
eigenstate is $-\partial E_{i}/\partial I$. By mapping out the intensity-
dependence of the eigenenergies of the effective low-energy Hamiltonian, we
obtain the dynamic polarizabilities for various rotational states. The
electric field, magnetic field, and laser frequency serve as our tuning
parameters which can be manipulated, as shown in the following discussions, to
realize various magic trapping conditions. Although, in this work we focus our
discussion on the 87Rb133Cs molecule, the extension to other diatomic alkali
molecules is implied.
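The prescription $\alpha_i=-\partial E_{i}/\partial I$ can be sketched as a central finite difference; `energy_of_I` below is a hypothetical stand-in for the eigenenergy obtained by diagonalizing Eq. (2) at a given laser intensity:

```python
import numpy as np

def polarizability(energy_of_I, I0, dI=1e-6):
    """Dynamic polarizability as -dE/dI via a central finite difference."""
    return -(energy_of_I(I0 + dI) - energy_of_I(I0 - dI)) / (2 * dI)

# In the linear AC-Stark regime, E(I) = E0 - alpha*I, the finite difference
# recovers alpha exactly (up to floating-point error).
alpha_true = 2.5
E_toy = lambda I: 1.0 - alpha_true * I  # toy eigenenergy, arbitrary units
assert np.isclose(polarizability(E_toy, I0=0.5), alpha_true)
```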
## III Zeeman Splittings and DC-Stark Shifts in RbCs molecules
Figure 1: The hyperfine energy levels for the $J=0$, $1$, and $2$ manifolds as
functions of the magnetic field strength $B$ (the left column (a), (c), and
(e) panels, respectively) and the static electric field strength $E$ applied
parallel to a magnetic field of $B=181$ G (the right column (b), (d), and (f)
panels, respectively). The red dashed lines in (a), (c), and (e) mark the
target trapping state (see text). Panel (b) consists of a band with 32 energy
levels. Panel (d) consists of two bands; the upper one contains 32 energy
levels with $M=0$ and the lower one 64 energy levels with $M=\pm 1$. Panel (f)
consists of three bands; the upper one contains 32 energy levels with $M=0$,
the middle one 64 energy levels with $M=\pm 1$, and the lower one 64 energy
levels with $M=\pm 2$.
The nuclear spins of 87Rb and 133Cs atoms are $I_{1}=3/2$ and $I_{2}=7/2$,
respectively. Because of the multiple combinations of the atomic nuclear spin
projections and the molecular orbital angular momentum projections, there
exist $(2J+1)(2I_{1}+1)(2I_{2}+1)$ energy levels that are associated with the
rotational state with orbital angular momentum $J$. In the presence of the
magnetic field, the static electric field, and the hyperfine interactions,
these nearly degenerate energy levels split. Before we discuss the magic
trapping conditions, it is necessary to select the best target states to trap
among these levels, one for each rotational state.
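The level counting $(2J+1)(2I_{1}+1)(2I_{2}+1)$ can be checked in one line; for $J=0,1,2$ it reproduces the 32, 96, and 160 levels of the manifolds in Fig. 1:

```python
def n_levels(J, I1=3/2, I2=7/2):
    """Hyperfine-level count (2J+1)(2I1+1)(2I2+1) for 87Rb (I1=3/2)
    and 133Cs (I2=7/2)."""
    return int((2 * J + 1) * (2 * I1 + 1) * (2 * I2 + 1))

print([n_levels(J) for J in (0, 1, 2)])  # [32, 96, 160]
```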
The left column of Fig. 1 shows the magnetic field strength dependence of the
rotational energy manifold $E_{J}$ $(J=0,1,2)$ with vanishing static electric
field. In the weak magnetic field regime, the splitting between the levels of
the same energy manifold is dominated by the hyperfine interactions. In this
regime, the total angular momentum
$\vec{F}^{2}=(\vec{J}+\vec{I}_{1}+\vec{I}_{2})^{2}$ and the total projection
$M_{F}=M+m_{1}+m_{2}$ are approximately good quantum numbers which means that
the eigenstates consist of strong admixture of states with different nuclear
spin projections. The level repulsion is strong in this regime, leading to
quadratic Zeeman shifts dominating over linear Zeeman shifts for $B<50$ G in
the left column of Fig. 1. With increasing magnetic field strength, the linear
Zeeman shift dominates. Due to the differences in the various $g$-factors,
$g_{r}=0.0062$, $g_{1}=1.836(3)$, and $g_{2}=0.738(1)$ in Eq. (4) for
87Rb133Cs Aldegunde et al. (2008); Gregory et al. (2016), the projections $M$,
$m_{1}$, and $m_{2}$ are all approximately good quantum numbers in the high-
field regime. Thus, the eigenstates have significantly reduced admixture. The
levels marked by the red dashed lines in the left column of Fig. 1 correspond
to the states containing more than $50\%$ occupation in the
$|J,M=0;m_{1}=3/2,m_{2}=7/2\rangle$ component. Note this corresponds to the
spin-stretched state in $J=0$ and is the initial state created in experiments
Takekoshi et al. (2014); Molony et al. (2014). We select these states as our
target trapping states. For $B>150$ G, the admixture of the other components
into the target trapping states is less than $20\%$ for $J=0,1$, and $2$. The
red dashed lines in Fig. 1 (c) and Fig. 1 (e) terminate around $B=25$ G and
$B=100$ G, respectively, because no target state can be identified in the
small $B$ regime due to strong admixture. In this paper, we focus on a
magnetic field strength of $B=181$ G according to the experimental work Molony
et al. (2014).
To further suppress the admixture of other components into our target trapping
states, we take advantage of an applied static electric field. Due to the
small magnitude of the rotational $g$-factor $g_{r}$, the states with
different orbital angular momentum projections $M$ and the same nuclear spin
projections are close in the spectrum. For example, the energy splitting
between states with a unit difference in the orbital angular momentum
projection $M$ is $\sim 10$ kHz for $B=181$ G. A static electric field along
the magnetic field direction separates the levels with different absolute
values of $M$ in the spectrum.
The right column of Fig. 1 shows the dependence of the rotational energy
manifold $E_{J}$ $(J=0,1,2)$ on the electric field strength in the presence of
a parallel magnetic field of $B=181$ G. With increasing $E$, the energies of
the $J=0$ manifold decrease quadratically [see Fig. 1 (b)] due to the second-
order level repulsion with the states $|J=1,M=0;m_{1},m_{2}\rangle$. It turns
out that the energies of the states $|J=1,M=0;m_{1},m_{2}\rangle$ are pushed
up. Due to the level repulsion between the states $|J=1,M=\pm
1;m_{1},m_{2}\rangle$ with the states $|J=2,M=\pm 1;m_{1},m_{2}\rangle$, the
states with $M=\pm 1$ in the $J=1$ manifold are pushed down with increasing
$E$. Thus, the static electric field separates the $J=1$ rotational energy
manifold into two bands, the upper one with $M=0$ and the lower one with
$M=\pm 1$ [see Fig. 1 (d)]. Similarly, for the $J=2$ manifold, a three-band
structure is seen with the upper, the middle, and the lower one corresponding
to $M=0$, $M=\pm 1$, and $M=\pm 2$, respectively. For $B=181$ G, a static
electric field of strength $E=0.1\,\text{kV}/\text{cm}$ already makes the
admixture of the states with finite $M$ into the state with $M=0$ negligible.
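The quadratic decrease of the $J=0$ energies can be illustrated with a deliberately oversimplified two-state model (nuclear spins ignored, only $|0,0\rangle$ and $|1,0\rangle$ retained, with $\langle 0,0|\cos\theta|1,0\rangle=1/\sqrt{3}$; the numbers below are arbitrary units, not RbCs parameters):

```python
import numpy as np

# Two-state sketch of the second-order DC-Stark shift of the J = 0 level.
B_rot = 1.0   # rotational constant, arbitrary units
dE = 0.1      # product d*E of dipole moment and field, weak-field regime
H = np.array([[0.0,              -dE / np.sqrt(3)],
              [-dE / np.sqrt(3),  2 * B_rot]])
evals = np.linalg.eigvalsh(H)

# Second-order perturbation theory: J = 0 is pushed down by (dE)^2 / (6 B),
# while |1,0> is pushed up by the same amount.
shift_pt = -(dE ** 2) / (6 * B_rot)
assert np.isclose(evals[0], shift_pt, rtol=1e-2)
```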
## IV AC-Stark Shifts Near the Narrow
$\mathrm{X}^{1}\Sigma^{+}\rightarrow\mathrm{b}^{3}\Pi_{0}$ Transitions
Figure 2: Ground and relevant excited adiabatic relativistic $\Omega=0^{+}$
potentials of the 87Rb133Cs molecule as a function of internuclear separation
$R$. The energetically-lowest potential is identified by non-relativistic
label X${}^{1}\Sigma^{+}$. The two excited adiabatic potentials have a narrow
avoided crossing at $R_{\rm c}\approx 10a_{0}$. For $R<R_{\rm c}$ the
electronic wavefunction of the second adiabat is well described by the non-
relativistic b${}^{3}\Pi_{0}$ symmetry. For $R>R_{\rm c}$ this state is well
described by the A${}^{1}\Sigma^{+}$ symmetry. The vertical lines indicate
transitions from the $J=0$ trapping state in the X${}^{1}\Sigma^{+}$ state to
the lowest $J^{\prime}=1$ ro-vibrational states of the coupled
A${}^{1}\Sigma^{+}$-b${}^{3}\Pi_{0}$ complex. The transition wavelength is
$1146.287$ nm.
To study the AC-Stark shift of the 87Rb133Cs molecule, we consider the
application of a driving laser field with the angular frequency $\omega$ to
induce coupling between the target trapping states and electronically excited
states. Figure 2 shows the selected relativistic adiabatic $\Omega=0^{+}$
potential curves of the 87Rb133Cs molecule, where $\Omega$ is the total
projection quantum number of the electronic angular momentum and nuclear spins
along the diatomic molecule axis. The b${}^{3}\Pi_{0}$ potential and the
A${}^{1}\Sigma^{+}$ potential are coupled by the spin-orbit coupling terms
which lead to an avoided crossing near $R_{c}=10a_{0}$. Here, the potentials
and the spin-orbit coupling functions are generated based on the data in Refs.
Docenko et al. (2010, 2011); Rakić et al. (2016); Vexiau et al. (2017). Due to
the spin-orbit coupling, the few lowest bound states lying near the bottom of
the b${}^{3}\Pi_{0}$ potential have some admixture of the A${}^{1}\Sigma^{+}$
component which enables the electric dipole coupling from these states to the
states of the ground electronic potential X${}^{1}\Sigma^{+}$. These
transitions are much narrower than the transitions to the states with dominant
occupation in the A${}^{1}\Sigma^{+}$ potential. In this work, we are
particularly interested in the AC-Stark shift and the dynamic polarizabilities
near these narrow transitions, indicated by the blue dashed line in Fig. 2. We
denote by $\omega_{v^{\prime}}$ the resonance transition frequency from the
$(v=0,J=0)$ state of the X${}^{1}\Sigma^{+}$ potential to the
$(v^{\prime},J=1)$ state of the b${}^{3}\Pi_{0}$ potential. For
$v^{\prime}=0$, the resonance frequency reads $\omega_{0}=2\pi\times 261.533$
THz which corresponds to a wavelength of $1146.287$ nm. When the driving laser
frequency $\omega$ is close to the resonance frequency $\omega_{v^{\prime}}$,
we reference $\omega$ to $\omega_{v^{\prime}}$ through the detuning
$\Delta_{v^{\prime}}=\omega-\omega_{v^{\prime}}$.
Figure 3: Microwave transition frequencies from the
$|J=0,M=0;m_{1}=3/2,m_{2}=7/2\rangle$ ground state to the $J=1$ manifold as a
function of the laser intensity for a laser frequency near the resonance
transition to the $v^{\prime}=0$ vibrational state of the b${}^{3}\Pi_{0}$
potential. A magnetic field of strength $B=181$ G is applied in the
$z$-direction. Panels (a) and (c) correspond to a detuning of
$\Delta_{v^{\prime}=0}=2\pi\times 3$ GHz. Panels (b) and (d) correspond to a
detuning of $\Delta_{v^{\prime}=0}=2\pi\times 200$ GHz. Panels (a) and (b)
correspond to vanishing static electric field. Panels (c) and (d) correspond
to a static electric field of $E=0.2\,\text{kV}/\text{cm}$ applied in the
$z$-direction. The red circles in all panels mark the energy level of the
target trapping state.
Figure 3 shows the impact of the static electric field on the AC-Stark shifts
of the microwave transition frequencies from the
$|J=0,M=0;m_{1}=3/2,m_{2}=7/2\rangle$ ground state to the $J=1$ rotational
energy manifold in the small and large detuning regimes. The driving laser is
linearly polarized with a polarization parallel to the magnetic field. The red
circles correspond to the target trapping state as discussed in Sec. III. For
the case with the detuning of $\Delta_{v^{\prime}=0}=2\pi\times 3$ GHz and
vanishing static electric fields [Fig. 3 (a)], the AC-Stark shifts can be
characterized into two bands; one going up with increasing laser intensity
while the other staying almost independent of the laser intensity. The former
corresponds to states with $M=0$ while the latter to states with $M=\pm 1$. As
shown by the red circles in Fig. 3 (a), the energy level of the target
trapping state in the $J=1$ manifold crosses those of the other levels with
increasing laser intensity. These crossings lead to strong level interactions
[see the gap in the red circles near $I=0.1\,\text{kW}/\text{cm}^{2}$ in Fig. 3
(a)] and hence to large hyper-polarizabilities, which make the system unstable
with respect to fluctuations of the trapping laser intensity.
The level-crossing behavior in the AC-Stark shift can be avoided by separating
the $M=0$ band and the $M=\pm 1$ band using a static electric field as
discussed in Sec. III. Figure 3 (c) shows the AC-Stark shifts in the presence
of a static electric field of $E=0.2\,\text{kV}/\text{cm}$. Compared to Fig. 3
(a), the $M=0$ band lies roughly $5$ MHz above the $M=\pm 1$ band for $I=0$.
With increasing laser intensity, the energy gap between the $M=0$ band and the
$M=\pm 1$ band keeps increasing. The energy of the target trapping state does
not cross any of the $M=\pm 1$ states any more.
The level crossings seen in Fig. 3 (a) result from the fact that the AC-Stark
shift of the target trapping state is larger than the energy splitting between
the nearest neighbor hyperfine levels. With larger laser detuning, the
differential AC-Stark shift is greatly reduced. For example, for a detuning of
$\Delta_{v^{\prime}=0}=2\pi\times 200$ GHz as shown in Fig. 3 (b), the level
crossings between the target trapping state and the other states in the $J=1$
manifold disappear for the laser intensity regime shown here. A finite static
electric field still separates the $M=0$ band from the $M=\pm 1$ band as shown
in Fig. 3 (d), which does make the system more robust, but is not necessary in
this case.
In the following discussion of dynamic polarizabilities, we describe the
detuning as near-resonance when $\Delta_{v^{\prime}}<2\pi\times 10$ GHz and as
medium-detuned otherwise. According to the above discussion, the static
electric field is always turned on for the near-resonance cases and is not
mandatory for the medium-detuned cases. This setup makes our results
independent of the laser intensity in a broad intensity regime for both cases.
## V Magic conditions for multiple rotational states
We may identify magic trapping frequencies by searching for crossings among
the frequency-dependent dynamic polarizability curves of different rotational
states. We start the discussion with the dynamic polarizabilities $\alpha_{J}$
near the resonance from which we extract the parallel and perpendicular
background polarizabilities $\alpha_{\text{bg},\parallel}$ and
$\alpha_{\text{bg},\perp}$ and the transition width $\Gamma_{0,v^{\prime}}$.
Given the values of $\alpha_{\text{bg},\parallel}$,
$\alpha_{\text{bg},\perp}$, and $\Gamma_{0,v^{\prime}}$, we prove analytically
and verify with our numerical calculations that there exists a
“near” magic frequency window for multiple rotational states in the medium-
detuned regime between vibrational poles. By tuning the static electric field,
a true triple magic frequency is found for the $J=0$, $J=1$, and $J=2$ target
trapping states for the 87Rb133Cs molecule.
### V.1 Near-Resonance Dynamic Polarizabilities
Figure 4: The dynamic polarizabilities near the resonance transition to the
$v^{\prime}=0$ vibrational state of the b${}^{3}\Pi_{0}$ potential. A magnetic field of
strength $B=181$ G and a static electric field of strength
$E=0.2\,\text{kV}/\text{cm}$ are applied in the $z$-direction. The driving
laser polarization is (a) parallel and (b) perpendicular to the external
static fields. The black circles and red squares correspond to the numerical
results of the dynamic polarizabilities of the $J=0$ and $J=1$ target trapping
states. The black and red solid lines correspond to the
analytical results generated using Eqs. (8) and (9). The green upper
triangle in Panel (b) marks the crossing between the black circles and red
squares.
In the near-resonance regime, we fix the strength of the static electric field
to be $E=0.2\,\text{kV}/\text{cm}$. The angle between the laser polarization
and the magnetic field is denoted $\theta$. In this case, the dynamic
polarizabilities $\alpha_{J=0}$ of the $J=0,M=0$ target trapping state and
$\alpha_{J=1}$ of the $J=1,M=0$ target trapping state can be approximated,
following Neyenhuis et al. (2012), by
$\displaystyle\alpha_{J=0}=-\frac{3\pi
c^{2}}{2\omega_{v^{\prime}}^{3}}\frac{\Gamma_{0,v^{\prime}}}{3\Delta_{v^{\prime}}}+\frac{1}{3}\alpha_{\text{bg},\parallel}+\frac{2}{3}\alpha_{\text{bg},\perp},$
(8)
and
$\displaystyle\alpha_{J=1}$ $\displaystyle=-\frac{3\pi
c^{2}}{2\omega_{v^{\prime}}^{3}}\Big{[}\frac{\cos^{2}(\theta)}{3}\frac{\Gamma_{0,v^{\prime}}}{\Delta_{v^{\prime}}+2B_{v}+2B_{v^{\prime}}}+$
(9)
$\displaystyle\frac{3+\cos^{2}(\theta)}{15}\frac{\Gamma_{0,v^{\prime}}}{\Delta_{v^{\prime}}+2B_{v}-4B_{v^{\prime}}}\Big{]}+$
$\displaystyle\frac{2\cos^{2}(\theta)+1}{5}\alpha_{\text{bg},\parallel}+\frac{4-2\cos^{2}(\theta)}{5}\alpha_{\text{bg},\perp},$
respectively. Here, the parameters $B_{v}$ and $B_{v^{\prime}}$ correspond to
the rotational constants for the $v=0$ vibrational state of the
X${}^{1}\Sigma^{+}$ potential and the $v^{\prime}=0$ vibrational state of the
b${}^{3}\Pi_{0}$ potential. The transition width $\Gamma_{0,v^{\prime}}$ can
be calculated via
$\displaystyle\Gamma_{0,v^{\prime}}=\frac{\omega_{v^{\prime}}^{3}}{3\pi\epsilon_{0}\hbar
c^{3}}|\mu_{0,v^{\prime}}|^{2}$ (10)
where $\mu_{0,v^{\prime}}$ is the transition dipole moment between the
$v=0$ vibrational state of the X${}^{1}\Sigma^{+}$ potential and the
$v^{\prime}$ vibrational state of the b${}^{3}\Pi_{0}$ potential. The parallel
and perpendicular background polarizabilities $\alpha_{\text{bg},\parallel}$
and $\alpha_{\text{bg},\perp}$ contain the contributions from all the far-
detuned rovibronic states with $\Omega=0$ and $\Omega=1$, respectively
Kotochigova and Tiesinga (2006); Vexiau et al. (2017); Li et al. (2017). For
87Rb133Cs, we find $B_{v}=2\pi\times 0.490$ GHz, $B_{v^{\prime}}=2\pi\times
0.510$ GHz, $\Gamma_{0,v^{\prime}=0}=2\pi\times 15.5\,\text{kHz}$,
$\alpha_{\text{bg},\parallel}=h\times 0.127$ kHz/(W/cm2), and
$\alpha_{\text{bg},\perp}=h\times 0.0340$ kHz/(W/cm2). Experimentally, these
values can be extracted by fitting the measured dynamic polarizability curves
near the poles.
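As an illustrative cross-check (our own Python sketch, not part of the original analysis), Eqs. (8) and (9) can be evaluated with the parameters quoted above; for $\theta=90^{\circ}$ a bisection on $\alpha_{J=1}-\alpha_{J=0}$ locates the magic crossing discussed in connection with Fig. 4 (b):

```python
import numpy as np

# Eqs. (8) and (9) with the 87Rb133Cs parameters quoted in the text.
h = 6.62607015e-34           # J s
c = 2.99792458e8             # m/s
w0 = 2*np.pi*261.533e12      # resonance angular frequency, rad/s
Gam = 2*np.pi*15.5e3         # transition width Gamma_{0,v'=0}, rad/s
Bv, Bvp = 0.490, 0.510       # rotational constants in (2*pi GHz) units
a_par, a_perp = 0.127, 0.0340   # background polarizabilities, h*kHz/(W/cm^2)

# resonant prefactor (3*pi*c^2/2w0^3)*Gamma, converted to h*kHz/(W/cm^2)*GHz
P = (3*np.pi*c**2/(2*w0**3)) * Gam * 1e4/(h*1e3) / (2*np.pi*1e9)

def alpha_J0(D):                    # Eq. (8); D = detuning/(2*pi GHz)
    return -P/(3*D) + a_par/3 + 2*a_perp/3

def alpha_J1(D, theta):             # Eq. (9)
    c2 = np.cos(theta)**2
    return (-P*(c2/3)/(D + 2*Bv + 2*Bvp)
            - P*((3 + c2)/15)/(D + 2*Bv - 4*Bvp)
            + (2*c2 + 1)/5*a_par + (4 - 2*c2)/5*a_perp)

# For theta = 90 deg the left pole term vanishes and alpha_J1 crosses
# alpha_J0 near the quoted magic detuning of ~2*pi*2.68 GHz (bisection):
f = lambda D: alpha_J1(D, np.pi/2) - alpha_J0(D)
lo, hi = 2.0, 3.5
for _ in range(60):
    mid = 0.5*(lo + hi)
    if f(lo)*f(mid) <= 0: hi = mid
    else: lo = mid
Dm = 0.5*(lo + hi)
print(round(Dm, 2), round(alpha_J0(Dm), 2))
```

The small residual difference from the quoted crossing comes from the rounding of the input parameters.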
Figure 4 shows the dynamic polarizabilities for laser polarizations parallel
and perpendicular to the magnetic field direction in the near-resonance
regime. The symbols correspond to the numerical results and the lines show the
analytical results generated using Eqs. (8) and (9). The agreement in both
cases is excellent. As can be seen, there is no crossing between the
$\alpha_{J=0}$ curve and the $\alpha_{J=1}$ curve in the near-resonance regime
for $\theta=0^{\circ}$. According to Eq. (9), the dynamic polarizability
$\alpha_{J=1}$ can be tuned by varying the polarization direction of the
driving laser. For example, for $\theta=90^{\circ}$, the term in the first row
of Eq. (9) inside the square bracket vanishes and the pole structure at
$\Delta_{v^{\prime}=0}=-2\pi\times 2.00$ GHz is missing, as shown by the red
squares in Fig. 4 (b). In addition, the pole at
$\Delta_{v^{\prime}=0}=2\pi\times 1.06$ GHz is slightly narrower compared to
the $\theta=0^{\circ}$ case. In this case, the $\alpha_{J=1}$ curve crosses
the $\alpha_{J=0}$ curve at the magic detuning of $2\pi\times 2.68$ GHz, as
shown by the green upper triangle in Fig. 4 (b). The value of the
polarizability at the magic detuning is $-h\times 2.71$ kHz/(W/cm2). The
negative polarizability indicates that the molecules can be trapped at the
nodal point of an optical lattice where the laser intensity is the local
minimum. This trapping condition is also beneficial for minimizing heating and
loss from incoherent photon scattering.
### V.2 Multiple Magic Frequency Window
For arbitrary $J$, we derive the general formula for the dynamic
polarizability near the resonance transition to one of the states of the
b${}^{3}\Pi_{0}$ potential,
$\displaystyle\alpha_{J}$ $\displaystyle=-\frac{3\pi
c^{2}}{2\omega_{v^{\prime}}^{3}}\left[A_{J}(\theta)\frac{\Gamma_{0,v^{\prime}}}{\Delta_{v^{\prime}}+L_{J}}+B_{J}(\theta)\frac{\Gamma_{0,v^{\prime}}}{\Delta_{v^{\prime}}+R_{J}}\right]$
(11)
$\displaystyle+\left[A_{J}(\theta)+B_{J}(\theta)\right]\alpha_{\text{bg},\parallel}+\left[1-A_{J}(\theta)-B_{J}(\theta)\right]\alpha_{\text{bg},\perp},$
where the pole positions $L_{J}$ of the left branch and $R_{J}$ of the right
branch read
$\displaystyle L_{J}=J(J+1)B_{v}-[J(J-1)-2]B_{v^{\prime}},$ (12)
and
$\displaystyle R_{J}=J(J+1)B_{v}-[(J+1)(J+2)-2]B_{v^{\prime}},$ (13)
respectively. The angular factors $A_{J}(\theta)$ and $B_{J}(\theta)$ in Eq.
(11) are,
$\displaystyle A_{J}(\theta)=\left\{\begin{aligned}&\frac{(J+1)(J-1)}{2(2J+1)(2J-1)}+\frac{J^{2}+1}{2(2J+1)(2J-1)}\cos^{2}(\theta)&&J>0\\ &0&&J=0,\end{aligned}\right.$ (14)
and
$\displaystyle B_{J}(\theta)=\frac{(J+2)(J+1)}{2(2J+3)(2J+1)}+\frac{J(J+1)}{2(2J+3)(2J+1)}\cos^{2}(\theta).$ (15)
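As a quick sanity check (our own Python sketch, with an arbitrary overall prefactor and unit system), the general formula of Eqs. (11)-(15) should reduce to Eq. (8) for $J=0$ and to Eq. (9) for $J=1$:

```python
import numpy as np

def A(J, th):                     # Eq. (14)
    if J == 0:
        return 0.0
    return ((J+1)*(J-1) + (J**2+1)*np.cos(th)**2) / (2*(2*J+1)*(2*J-1))

def B(J, th):                     # Eq. (15)
    return ((J+2)*(J+1) + J*(J+1)*np.cos(th)**2) / (2*(2*J+3)*(2*J+1))

def L(J, Bv, Bvp):                # Eq. (12), left-branch pole position
    return J*(J+1)*Bv - (J*(J-1) - 2)*Bvp

def R(J, Bv, Bvp):                # Eq. (13), right-branch pole position
    return J*(J+1)*Bv - ((J+1)*(J+2) - 2)*Bvp

def alpha(J, D, th, Gam=1.0, Bv=0.49, Bvp=0.51, ap=0.127, aq=0.034):
    """Eq. (11) with the prefactor 3*pi*c^2*Gamma/(2*w^3) set to 1."""
    return (-(A(J, th)*Gam/(D + L(J, Bv, Bvp))
              + B(J, th)*Gam/(D + R(J, Bv, Bvp)))
            + (A(J, th) + B(J, th))*ap + (1 - A(J, th) - B(J, th))*aq)

th, D = 0.7, 5.3   # arbitrary test point
# J=0: A_0=0, B_0=1/3, R_0=0  ->  -Gam/(3D) + ap/3 + 2*aq/3, i.e. Eq. (8)
eq8 = -1.0/(3*D) + 0.127/3 + 2*0.034/3
print(np.isclose(alpha(0, D, th), eq8))   # True
```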
By Taylor-expanding the right hand side of Eq. (11) with respect to $L_{J}$
and $R_{J}$, we obtain,
$\displaystyle\alpha_{J}$
$\displaystyle=\left[A_{J}(\theta)+B_{J}(\theta)\right]\left(-\frac{3\pi
c^{2}}{2\omega_{v^{\prime}}^{3}}\frac{\Gamma_{0,v^{\prime}}}{\Delta_{v^{\prime}}}+\alpha_{\text{bg},\parallel}-\alpha_{\text{bg},\perp}\right)+$
(16)
$\displaystyle\alpha_{\text{bg},\perp}+T_{J}(\Delta_{v^{\prime}},\theta),$
where the remaining term $T_{J}(\Delta_{v^{\prime}},\theta)$ reads,
$\displaystyle T_{J}(\Delta_{v^{\prime}},\theta)=$ $\displaystyle\frac{3\pi
c^{2}}{2\omega_{v^{\prime}}^{3}}\frac{\Gamma_{0,v^{\prime}}}{\Delta_{v^{\prime}}^{2}}\left[A_{J}(\theta)L_{J}+B_{J}(\theta)R_{J}\right]+$
(17)
$\displaystyle\mathcal{O}\left(\frac{\Gamma_{0,v^{\prime}}L_{J}^{2}}{\Delta_{v^{\prime}}^{3}}\right)+\mathcal{O}\left(\frac{\Gamma_{0,v^{\prime}}R_{J}^{2}}{\Delta_{v^{\prime}}^{3}}\right).$
Based on Eq. (16), we can always find a detuning
$\Delta_{v^{\prime},\text{cr}}$ such that,
$\displaystyle\alpha_{J}=\alpha_{\text{bg},\perp}+T_{J}(\Delta_{v^{\prime},\text{cr}},\theta),$
(18)
where,
$\displaystyle\Delta_{v^{\prime},\text{cr}}=\frac{3\pi
c^{2}}{2\omega_{v^{\prime}}^{3}}\frac{\Gamma_{0,v^{\prime}}}{\alpha_{\text{bg},\parallel}-\alpha_{\text{bg},\perp}}.$
(19)
For the transitions with $\Delta_{v^{\prime},\text{cr}}$ lying in the medium-
detuned regime, i.e., $|\Delta_{v^{\prime},\text{cr}}|\gg\left|L_{J}\right|$,
$|\Delta_{v^{\prime},\text{cr}}|\gg\left|R_{J}\right|$, and
$|\Delta_{v^{\prime},\text{cr}}|\gg\Gamma_{0,v^{\prime}}$, the remaining term
$T_{J}(\Delta_{v^{\prime},\text{cr}},\theta)$ can be neglected. In this case,
both the $\theta$-dependence and the $J$-dependence of $\alpha_{J}$ in Eq.
(18) disappear, indicating that the frequency-dependent dynamic
polarizabilities of all rotational states pass through the same fixed point;
the trap is magic for all rotational states at this laser detuning. The
multiple magic frequency is approximately given by Eq. (19) and the value of
the dynamic polarizability is approximately equal to the background
perpendicular dynamic polarizability $\alpha_{\text{bg},\perp}$.
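The prediction of Eq. (19) is easy to evaluate numerically; the short Python sketch below (our illustration) uses the Sec. V.1 parameters for 87Rb133Cs:

```python
import numpy as np

# Evaluate Eq. (19) with the Sec. V.1 parameters; the predicted multiple magic
# detuning should land in the medium-detuned regime (a few hundred GHz).
h = 6.62607015e-34
c = 2.99792458e8
w0 = 2*np.pi*261.533e12                  # rad/s
Gam = 2*np.pi*15.5e3                     # rad/s
to_SI = h*1e3/1e4                        # h*kHz/(W/cm^2) -> J/(W/m^2)
a_par, a_perp = 0.127*to_SI, 0.0340*to_SI

D_cr = (3*np.pi*c**2/(2*w0**3)) * Gam/(a_par - a_perp)   # rad/s
print(round(D_cr/(2*np.pi*1e9)))   # detuning in units of 2*pi GHz
```

With the quoted inputs this reproduces the detuning of roughly $2\pi\times 240$ GHz cited in the discussion of Fig. 5.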
Figure 5: The triple magic conditions for $J=0$, $1$, and $2$ rotational
states near the resonance transition to the $v^{\prime}=0$ state of the
b${}^{3}\Pi_{0}$ potential. A magnetic field $B=181$ G is applied in the
$z$-direction. The laser polarization is parallel to the magnetic field. The
circles mark the crossings between different curves in (b) and (c). The static
electric field is vanishing in (a) and (b). A finite static electric field of
$E=0.13\,\text{kV}/\text{cm}$ is applied along the $z$-direction in (c). A
near triple magic condition exists in (b) and a true triple magic condition
exists in (c).
Figure 5 (a) shows the triple-crossing magic frequency for $\alpha_{J}$ with
$J=0$, $1$, and $2$ near the resonance transition to the $v^{\prime}=0$
vibrational state of the b${}^{3}\Pi_{0}$ potential. The three curves cross
each other in the detuning window of $2\pi\times 216$ GHz to $2\pi\times 219$
GHz, as highlighted in Fig. 5 (b). Evaluating Eq. (19) using the values of the
transition width and the background polarizabilities obtained in Sec. V.1, the
predicted magic frequency corresponds to a detuning of $2\pi\times 240$ GHz.
The difference comes from the higher order corrections in the remaining term
$T_{J}(\Delta_{v^{\prime}},\theta)$. The range of the $\alpha_{J}$ values in
Fig. 5 (b) is consistent with the value of $\alpha_{\text{bg},\perp}$ as
calculated in Sec. V.1. Even though the three curves do not intersect each
other at the same frequency, their values are very close in the frequency
window shown in Fig. 5 (b). The percent difference
$\left|\alpha_{J}-\alpha_{J^{\prime}}\right|/|\alpha_{J^{\prime}}|$ for any
pair of $J$ and $J^{\prime}$ in Fig. 5 (b) is less than $0.6\%$ within the
detuning range of $2\pi\times 3$ GHz, which makes the magic trapping condition
robust to uncertainty in the trapping laser frequency. This near triple magic
frequency window can be tuned to a true triple magic frequency by adding a
weak static electric field. Figure 5 (c) shows that the three curves cross at
$\Delta_{v^{\prime}=0}=2\pi\times 218.22$ GHz for $E=0.13$ kV/cm. The value of
the polarizability at this detuning is $\alpha_{J}=h\times 0.03392$
kHz/(W/cm2).
Figure 6: The dynamic polarizabilities near the resonance transition to the
$v^{\prime}=0$ vibrational state of the b${}^{3}\Pi_{0}$ potential for multiple
rotational states up to $J=4$. A magnetic field of $B=181$ G and a static
electric field of $E=0.13\,\text{kV}/\text{cm}$ are applied in the
$z$-direction. The laser polarization is parallel to the $z$-axis. The insets
show the zoom-in of the “near magic” frequency window in which the
polarizabilities of many rotational states are either crossing or close to each
other.
Our theory further predicts that the triple magic frequency window also holds for
higher rotational states. Figure 6 shows the $\alpha_{J}$ curves up to $J=4$
for the parallel driving case in the presence of the static electric field of
strength $E=0.13$ kV/cm. It can be seen that all the values of $\alpha_{J}$ are
very close to $\alpha_{\text{bg},\perp}$ in the same magic frequency
window as discussed before. A further zoom-in of the magic frequency window,
shown in the inset of Fig. 6, indicates that $\alpha_{J=3}$ and $\alpha_{J=4}$
almost run parallel to $\alpha_{J=2}$ and, consequently, do not pass through
the triple magic frequency point for the $\alpha_{J=0,1,2}$ curves. The higher
rotational states make the contribution from the remaining term
$T_{J}(\Delta_{v^{\prime},\text{cr}},\theta)$ more important due to larger
values of $|L_{J}|$ and $|R_{J}|$. Thus, no crossings among the polarizability
curves of higher $J$ values are expected within the magic frequency window.
The similarity of the $\alpha_{J}$ curves in the medium-detuned regime with
increasing $J$ values is explained by the asymptotic behavior of the angular
factors $A_{J}(\theta)$ and $B_{J}(\theta)$ in Eqs. (14) and (15) in the large
$J$ limit. Expanding $A_{J}(\theta)$ and $B_{J}(\theta)$ in terms of $1/J$, we
obtain,
$\displaystyle
A_{J}(\theta)=\frac{1+\cos^{2}(\theta)}{8}+\mathcal{O}\left(\frac{1}{J^{2}}\right),$
(20)
and,
$\displaystyle
B_{J}(\theta)=\frac{1+\cos^{2}(\theta)}{8}+\frac{\sin^{2}(\theta)}{8J}+\mathcal{O}\left(\frac{1}{J^{2}}\right).$
(21)
With increasing $J$, the leading order terms of both $A_{J}(\theta)$ and
$B_{J}(\theta)$ are independent of the value of $J$; hence the expression for
$\alpha_{J}$ in Eq. (16) becomes the same for all $J$, neglecting the
remaining $T_{J}(\Delta_{v^{\prime}},\theta)$ term. Thus, for large $J$, the
various $\alpha_{J}$ curves are close and almost parallel to each other in the
medium-detuned regime. Combining the true triple magic condition for the lower
$J$ values with the similarity between the $\alpha_{J}$ for higher $J$ values
leads to a “near magic” trapping window for multiple rotational states that
should be possible to realize experimentally.
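The large-$J$ asymptotics of Eqs. (20) and (21) can be checked directly (a short Python sketch of ours): both angular factors approach $(1+\cos^{2}\theta)/8$, with $A_{J}$ converging as $O(1/J^{2})$ and $B_{J}$ carrying the extra $\sin^{2}(\theta)/8J$ term:

```python
import numpy as np

def A(J, th):   # Eq. (14), J > 0 branch
    return ((J+1)*(J-1) + (J**2+1)*np.cos(th)**2) / (2*(2*J+1)*(2*J-1))

def B(J, th):   # Eq. (15)
    return ((J+2)*(J+1) + J*(J+1)*np.cos(th)**2) / (2*(2*J+3)*(2*J+1))

th = 0.9
lim = (1 + np.cos(th)**2)/8          # common large-J limit, Eqs. (20)-(21)
for J in (5, 50, 500):
    # deviations shrink with J: O(1/J^2) for A_J, ~sin^2(th)/(8J) for B_J
    print(J, abs(A(J, th) - lim), abs(B(J, th) - lim))
```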
Figure 7: The dynamic polarizabilities $\alpha_{J=1}$ near the resonance
transition to the $v^{\prime}=0$ vibrational state of the b${}^{3}\Pi_{0}$
potential for various driving laser polarization directions. The angle
$\theta$ is scanned from $0^{\circ}$ to $90^{\circ}$ in $5^{\circ}$
increments. A magnetic field of $B=181$ G and a static electric field of
$E=0.13$ kV/cm are applied in the $z$-direction.
The $\theta$-independence of $\alpha_{J}$ within the multiple magic frequency
window is also verified by our numerical results. Figure 7 shows the dynamic
polarizability $\alpha_{J=1}$ for angles between $0^{\circ}$ and $90^{\circ}$.
All the curves nearly cross the same point around the detuning of $2\pi\times
218$ GHz.
Based on all the results and observations discussed above, we conclude that
the existence of the multiple magic frequency window presents a frequency
region of a few gigahertz within which the system is super robust with respect
to the fluctuations of the trapping laser frequency and the polarization
direction for arbitrary rotational states. Within this window long-rotational
coherences should be possible on multiple rotational transitions in the
87Rb133Cs molecule.
### V.3 Criteria for the Multiple Magic Frequency Window
Figure 8: The dynamic polarizabilities of the $J=0$, $1$, and $2$ rotational
states near the resonance transitions to the (a) $v^{\prime}=1$, (b)
$v^{\prime}=2$, and (c) $v^{\prime}=3$ vibrational states of the
b${}^{3}\Pi_{0}$ potential. A magnetic field of $B=181$ G is applied in the
$z$-direction. No static electric field is applied. The black solid, red
dashed, and blue dotted lines correspond to the dynamic polarizabilities of
$J=0$, $1$, and $2$ rotational states, respectively. A near triple magic
condition exists in (a) and (b) but not in (c).
The existence of the multiple magic frequency window relies on the condition
that the remaining $T_{J}(\Delta_{v^{\prime}},\theta)$ term in Eq. (18) is
much smaller than $\alpha_{\text{bg},\perp}$ and can thus be neglected.
Taking the leading-order term of $T_{J}(\Delta_{v^{\prime}},\theta)$ in Eq. (17), the
condition
$|T_{J}(\Delta_{v^{\prime},\text{cr}},\theta)|\ll|\alpha_{\text{bg},\perp}|$
yields a lower bound on the transition width $\Gamma_{0,v^{\prime}}$ in terms
of the background polarizabilities and rotational constants,
$\displaystyle\Gamma_{0,v^{\prime}}\gg\frac{2\omega_{v^{\prime}}^{3}}{3\pi
c^{2}}\frac{\left(\alpha_{\text{bg},\parallel}-\alpha_{\text{bg},\perp}\right)^{2}}{|\alpha_{\text{bg},\perp}|}\sqrt{B_{v}^{2}+B_{v^{\prime}}^{2}}.$
(22)
For 87Rb133Cs molecules near the narrow transitions to the bottom of the
b${}^{3}\Pi_{0}$ potential, the right-hand side of Eq. (22) is equal to
$2\pi\times 0.125$ kHz. As the transition linewidth $\Gamma_{0,v^{\prime}}$
decreases with increasing $v^{\prime}$, this condition puts a constraint on
the number of vibrational poles around which the multiple magic frequency
window exists.
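The quoted value of the bound follows directly from Eq. (22) and the Sec. V.1 parameters, as this Python sketch of ours shows:

```python
import numpy as np

# Right-hand side of Eq. (22) for 87Rb133Cs with the Sec. V.1 parameters;
# the text quotes 2*pi*0.125 kHz for this lower bound.
h = 6.62607015e-34
c = 2.99792458e8
w0 = 2*np.pi*261.533e12                       # rad/s
Bv, Bvp = 2*np.pi*0.490e9, 2*np.pi*0.510e9    # rotational constants, rad/s
to_SI = h*1e3/1e4                             # h*kHz/(W/cm^2) -> J/(W/m^2)
a_par, a_perp = 0.127*to_SI, 0.0340*to_SI

bound = (2*w0**3/(3*np.pi*c**2)) * (a_par - a_perp)**2/abs(a_perp) \
        * np.sqrt(Bv**2 + Bvp**2)             # rad/s
print(round(bound/(2*np.pi), 1), "Hz")        # close to the quoted 0.125 kHz
```

The widths $\Gamma_{0,v^{\prime}=0,1,2}$ comfortably exceed this bound, while $\Gamma_{0,v^{\prime}=3}$ does not, consistent with the behavior seen in Fig. 8.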
Figure 8 shows $\alpha_{J}$ for $J=0$, $1$, and $2$ near the $v^{\prime}=1$,
$2$, and $3$ vibrational poles at the bottom of the b${}^{3}\Pi_{0}$ potential.
With increasing $v^{\prime}$, the transition is narrower and the triple
crossing moves towards the pole of $\alpha_{J}$. The transition widths are
$\Gamma_{0,v^{\prime}=1}=2\pi\times 6.84$ kHz for the $v^{\prime}=1$ pole and
$\Gamma_{0,v^{\prime}=2}=2\pi\times 1.44$ kHz for the $v^{\prime}=2$ pole.
Triple crossings can be seen around $\Delta_{v^{\prime}=1}=2\pi\times 120$ GHz
for the $v^{\prime}=1$ vibrational pole (Fig. 8 (a)) and around
$\Delta_{v^{\prime}=2}=2\pi\times 22$ GHz near the $v^{\prime}=2$ vibrational
pole (Fig. 8 (b)). For $v^{\prime}=3$, the transition width
$\Gamma_{0,v^{\prime}=3}$ is $2\pi\times 0.206$ kHz which is already close to
the lower bound. Thus, no triple crossings can be seen in Fig. 8 (c).
### V.4 Imaginary Polarizability in the Magic Trapping Window
Figure 9: The imaginary polarizabilities of the $v=0$, $M=0$ states with
$J=0$, $1$, and $2$ of the X${}^{1}\Sigma^{+}$ potential near the resonance
transitions to the lower vibrational states of the b${}^{3}\Pi_{0}$ potential
for $\sigma_{z}$ polarization of the trapping light.
Light-induced decoherence of rovibrational levels of a polar molecule is often
characterized by the imaginary part of the polarizability Chotia et al.
(2012), which accounts for losses due to spontaneous emission and other decay
mechanisms of intermediate electronically excited states. Here, we evaluate the
imaginary part of the complex molecular dynamic polarizability
$\alpha(\hbar\omega,\vec{\epsilon})$ as
$\displaystyle\alpha(\hbar\omega,\vec{\epsilon})=$
$\displaystyle\frac{1}{\epsilon_{0}c}\sum_{f}\frac{(E_{f}-ih\gamma_{f}/2-E_{i})}{(E_{f}-ih\gamma_{f}/2-E_{i})^{2}-(\hbar\omega)^{2}}\times|\langle
f|{\vec{d}_{tr}}\cdot\vec{\epsilon}|i\rangle|^{2}\,,$
assuming that each of these intermediate states with energy $E_{f}$ has a
linewidth $\gamma_{f}$ equal to 6 MHz, the atomic linewidth of the Rb
5p(${}^{2}$P) state. This
assumption is justified by previous calculations of the imaginary
polarizability of rovibrational levels of ground state KRb molecules Petrov et
al. (2013) and a comparison of $\alpha_{\text{imag}}$ with an experimentally
measured value Chotia et al. (2012). The sum over $f$ in the equation above is limited to
transitions to relativistic electronic excited potentials that dissociate to
either a singly excited Rb or a singly excited Cs atom.
Figure 9 shows the calculated imaginary part of the polarizability of the
$v=0,J=0,1,2$ X${}^{1}\Sigma^{+}$ states as functions of laser frequency. By
construction the imaginary part is negative. It is several orders of magnitude
smaller than the real part. The resonances in the graph correspond to poles
due to the lowest vibrational $v^{\prime}$ of the $\Omega=0$ relativistic
component of the b${}^{3}\Pi_{0}$ potential. For a detuning of
$\Delta_{v^{\prime}=0}=2\pi\times 218$ GHz close to the triple magic frequency
shown in Fig. 5, the value of the imaginary part of the polarizability is
$1.0\times 10^{-9}$ kHz/(W/cm2). For comparison, the polarizability at this
detuning is $\alpha_{J}=h\times 0.03392$ kHz/(W/cm2), as stated earlier.
## VI Discussion
Although all the results above are derived by considering transitions to the
b${}^{3}\Pi_{0}$ potential, similar results to Eqs. (18) and (19) are found
for $\Omega=1$ potentials with $\alpha_{\text{bg},\perp}$ replaced by
$\alpha_{\text{bg},\parallel}$ and vice versa. These observations indicate
that any rovibrational pole that is associated with a resonance transition to
the state with quantum number $\Omega$ can be used to cancel the contributions
to the rank-2 dynamic polarizability tensor from all the other far-detuned
states with the same quantum number $\Omega$. What remains is the contribution
to the dynamic polarizability from the states with different $\Omega$. This
cancellation happens at a frequency that is independent of the rotational
quantum number $J$ and the polarization direction of the laser.
Even though the derivation of the equations in Sec. V.2 is “universal”, i.e.
independent of the molecule species, the existence of the magic frequency
window does require certain conditions to be fulfilled. For example, Eq. (22)
gives us a lower bound on the transition width. For heavier molecules, such as
87Rb133Cs, this condition can be satisfied near the narrow transitions to the
bottom of b${}^{3}\Pi_{0}$ potential, since the spin-orbit coupling effect is
stronger and the rotational constants, $B_{v}$ and $B_{v^{\prime}}$, are
smaller. For 23Na87Rb, we also find that the multiple magic frequency window
exists near the narrow transitions to the b${}^{3}\Pi_{0}$ potential. However,
compared to 87Rb133Cs, the window only exists near the $v^{\prime}=0$ and
$v^{\prime}=1$ vibrational poles and is missing near the $v^{\prime}=2$ pole.
Here, we emphasise that the condition on the lower bound of the transition
width given by Eq. (22) is not the only criterion for the existence of the
multiple magic frequency window. Eq. (22) allows the multiple magic frequency
window to also be found near broad transitions. However, in this case, the
predicted magic frequency position in Eq. (19) cannot be larger than the
energy spacing between two nearest neighbor vibrational poles (i.e.,
$|\Delta_{v^{\prime},cr}|\ll\left|\omega_{v^{\prime}\pm
1}-\omega_{v^{\prime}}\right|$). This condition puts an upper bound for the
transition width,
$\displaystyle\Gamma_{0,v^{\prime}}\ll\frac{2\omega_{v^{\prime}}^{3}}{3\pi
c^{2}}\left|\alpha_{\text{bg},\parallel}-\alpha_{\text{bg},\perp}\right|\times\left|\omega_{v^{\prime}\pm
1}-\omega_{v^{\prime}}\right|,$ (24)
where the “$+/-$” should be used for the positive/negative value of
$\alpha_{\text{bg},\parallel}-\alpha_{\text{bg},\perp}$. This condition is
very easily satisfied near the narrow transitions, however, it needs to be
examined near the broad ones. This condition implies that we need to be in
the “medium-detuned” regime to find the multiple magic frequency window.
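For completeness, the upper bound of Eq. (24) can be evaluated in the same way. Since the vibrational spacing of the b-state potential is not quoted here, the Python sketch below uses a hypothetical spacing of $2\pi\times 1$ THz purely to illustrate the procedure:

```python
import numpy as np

# Illustration of the Eq. (24) upper bound. NOTE: dw_vib is a hypothetical
# placeholder for the nearest-neighbor vibrational spacing, not a value
# taken from the text.
h = 6.62607015e-34
c = 2.99792458e8
w0 = 2*np.pi*261.533e12                  # rad/s
to_SI = h*1e3/1e4                        # h*kHz/(W/cm^2) -> J/(W/m^2)
a_par, a_perp = 0.127*to_SI, 0.0340*to_SI
dw_vib = 2*np.pi*1e12                    # hypothetical vibrational spacing, rad/s

upper = (2*w0**3/(3*np.pi*c**2)) * abs(a_par - a_perp) * dw_vib   # rad/s
print(upper > 0)   # True; the bound scales linearly with dw_vib
```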
Although the existence of the multiple magic frequency windows needs to be
checked case-by-case, the results derived in this work will greatly benefit
the search for them. In experiments, the background values of the
polarizabilities and the transition widths can both be straightforwardly
measured. According to Eq. (19), the magic detuning can then be predicted
based entirely upon these measured values.
## VII Conclusion
We have investigated magic-wavelength trapping of ultracold bialkali molecules
in the vicinity of weak optical transitions from the vibrational ground state
of the X${}^{1}\Sigma^{+}$ potential to low-lying rovibrational states of the
b${}^{3}\Pi_{0}$ potential, focussing our discussion on the 87Rb133Cs
molecule. We have shown that a magic trapping frequency window for multiple
rotational states exists between two nearest neighbor vibrational poles, far
away from any rotational poles. Within this window, the laser trapping is
“near magic” for multiple rotational states simultaneously and is exactly
magic for pairs of neighboring rotational states at specific laser
frequencies. Moreover, the “near magic” frequency window can be tuned to a
true magic frequency for the lowest three rotational states by applying an
experimentally accessible DC electric field. This true triple magic condition
is expected to be useful for future studies of synthetic spin-1 systems using
ultracold molecules.
We have derived a set of criteria that must be fulfilled to ensure the
existence of such magic frequency windows and have also presented an analytic
expression for the position of the frequency window in terms of a set of
experimentally measurable parameters. These will provide a straightforward,
self-consistent approach to search for the magic trapping frequency window in
future experiments. We expect the realization of optical traps which are
simultaneously magic for multiple rotational states will enable the
implementation of highly tunable models in quantum magnetism Gorshkov et al.
(2011b) and the mapping of many rotational levels onto a synthetic dimension
Sundar et al. (2018). More broadly, our work is relevant in settings where
there is a need to control the relative polarizabilities of different
molecular rotational states, facilitating, for example, the study of Hopf
insulators in dipolar systems Schuster et al. (2019).
## VIII Acknowledgements
SLC acknowledges support from the UK Engineering and Physical Sciences
Research Council (grant numbers EP/P01058X/1 and EP/P008275/1). Work at Temple
University is supported by the Army Research Office Grant No. W911NF-
17-1-0563, the U.S. Air Force Office of Scientific Research Grant No.
FA9550-19-1-0272 and the National Science Foundation Grant No. PHY-1908634.
## References
* Carr et al. (2009) L. D. Carr, D. DeMille, R. V. Krems, and J. Ye, New Journal of Physics 11, 055049 (2009), URL https://doi.org/10.1088%2F1367-2630%2F11%2F5%2F055049.
* Zelevinsky et al. (2008) T. Zelevinsky, S. Kotochigova, and J. Ye, Physical Review Letters 100, 043201 (2008), URL https://doi.org/10.1103%2Fphysrevlett.100.043201.
* Salumbides et al. (2011) E. J. Salumbides, G. D. Dickenson, T. I. Ivanov, and W. Ubachs, Physical Review Letters 107, 043005 (2011), URL https://doi.org/10.1103%2Fphysrevlett.107.043005.
* Salumbides et al. (2013) E. J. Salumbides, J. C. J. Koelemeij, J. Komasa, K. Pachucki, K. S. E. Eikema, and W. Ubachs, Physical Review D 87, 112008 (2013), URL https://doi.org/10.1103%2Fphysrevd.87.112008.
* Tarbutt et al. (2013) M. R. Tarbutt, B. E. Sauer, J. J. Hudson, and E. A. Hinds, New Journal of Physics 15, 053034 (2013), URL https://doi.org/10.1088%2F1367-2630%2F15%2F5%2F053034.
* Schiller et al. (2014) S. Schiller, D. Bakalov, and V. Korobov, Physical Review Letters 113, 023004 (2014), URL https://doi.org/10.1103%2Fphysrevlett.113.023004.
* Borkowski (2018) M. Borkowski, Physical Review Letters 120, 083202 (2018), URL https://doi.org/10.1103%2Fphysrevlett.120.083202.
* Borkowski et al. (2019) M. Borkowski, A. A. Buchachenko, R. Ciuryło, P. S. Julienne, H. Yamada, Y. Kikuchi, Y. Takasu, and Y. Takahashi, Scientific Reports 9 (2019), ISSN 2045-2322.
* Krems (2008) R. V. Krems, Physical Chemistry Chemical Physics 10, 4079 (2008).
* Bell and Softley (2009) M. T. Bell and T. P. Softley, Molecular Physics 107, 99 (2009).
* Ospelkaus et al. (2010a) S. Ospelkaus, K.-K. Ni, D. Wang, M. H. G. de Miranda, B. Neyenhuis, G. Quéméner, P. S. Julienne, J. L. Bohn, D. S. Jin, and J. Ye, Science 327, 853 (2010a), ISSN 0036-8075, URL https://science.sciencemag.org/content/327/5967/853.
* Dulieu et al. (2011) O. Dulieu, R. Krems, M. Weidemüller, and S. Willitsch, Physical Chemistry Chemical Physics 13, 18703 (2011).
* Balakrishnan (2016) N. Balakrishnan, Journal of Chemical Physics 145, 150901 (2016).
* Santos et al. (2000) L. Santos, G. V. Shlyapnikov, P. Zoller, and M. Lewenstein, Physical Review Letters 85, 1791 (2000).
* Micheli et al. (2007) A. Micheli, G. Pupillo, H. P. Büchler, and P. Zoller, Physical Review A 76, 043604 (2007), URL https://doi.org/10.1103%2Fphysreva.76.043604.
* Pollet et al. (2010) L. Pollet, J. D. Picon, H. P. Büchler, and M. Troyer, Physical Review Letters 104, 125302 (2010), URL https://doi.org/10.1103%2Fphysrevlett.104.125302.
* Capogrosso-Sansone et al. (2010) B. Capogrosso-Sansone, C. Trefzger, M. Lewenstein, P. Zoller, and G. Pupillo, Physical Review Letters 104, 125301 (2010), URL https://doi.org/10.1103%2Fphysrevlett.104.125301.
* Baranov et al. (2012) M. A. Baranov, M. Dalmonte, G. Pupillo, and P. Zoller, Chemical Reviews 112, 5012 (2012).
* Lechner and Zoller (2013) W. Lechner and P. Zoller, Physical Review Letters 111, 185306 (2013), URL https://doi.org/10.1103%2Fphysrevlett.111.185306.
* Barnett et al. (2006) R. Barnett, D. Petrov, M. Lukin, and E. Demler, Physical Review Letters 96, 190401 (2006).
* Micheli et al. (2006) A. Micheli, G. K. Brennen, and P. Zoller, Nature Physics 2, 341 (2006).
* Büchler et al. (2007) H. P. Büchler, E. Demler, M. Lukin, A. Micheli, N. Prokof’ev, G. Pupillo, and P. Zoller, Physical Review Letters 98, 060404 (2007).
* Macià et al. (2012) A. Macià, D. Hufnagl, F. Mazzanti, J. Boronat, and R. E. Zillich, Physical Review Letters 109, 235307 (2012).
* Manmana et al. (2013) S. R. Manmana, E. M. Stoudenmire, K. R. A. Hazzard, A. M. Rey, and A. V. Gorshkov, Physical Review B 87, 081106 (2013), URL https://doi.org/10.1103%2Fphysrevb.87.081106.
* Gorshkov et al. (2013) A. V. Gorshkov, K. R. A. Hazzard, and A. M. Rey, Molecular Physics 111, 1908 (2013), ISSN 0026-8976.
* DeMille (2002) D. DeMille, Physical Review Letters 88, 067901 (2002), URL https://link.aps.org/doi/10.1103/PhysRevLett.88.067901.
* Yelin et al. (2006) S. F. Yelin, K. Kirby, and R. Côté, Physical Review A 74, 050301 (2006), URL https://doi.org/10.1103%2Fphysreva.74.050301.
* Zhu et al. (2013) J. Zhu, S. Kais, Q. Wei, D. Herschbach, and B. Friedrich, The Journal of Chemical Physics 138, 024104 (2013), ISSN 0021-9606.
* Herrera et al. (2014) F. Herrera, Y. Cao, S. Kais, and K. B. Whaley, New Journal of Physics 16, 075001 (2014), URL https://doi.org/10.1088%2F1367-2630%2F16%2F7%2F075001.
* Ni et al. (2018) K.-K. Ni, T. Rosenband, and D. D. Grimes, Chemical Science 9, 6830 (2018), URL http://dx.doi.org/10.1039/C8SC02355G.
* Sawant et al. (2020) R. Sawant, J. A. Blackmore, P. D. Gregory, J. Mur-Petit, D. Jaksch, J. Aldegunde, J. M. Hutson, M. R. Tarbutt, and S. L. Cornish, New Journal of Physics 22, 013027 (2020), URL https://doi.org/10.1088%2F1367-2630%2Fab60f4.
* Hughes et al. (2020) M. Hughes, M. D. Frye, R. Sawant, G. Bhole, J. A. Jones, S. L. Cornish, M. R. Tarbutt, J. M. Hutson, D. Jaksch, and J. Mur-Petit, Phys. Rev. A 101, 062308 (2020), URL https://link.aps.org/doi/10.1103/PhysRevA.101.062308.
* Ni et al. (2008) K.-K. Ni, S. Ospelkaus, M. H. G. de Miranda, A. Pe’er, B. Neyenhuis, J. J. Zirbel, S. Kotochigova, P. S. Julienne, D. S. Jin, and J. Ye, Science 322, 231 (2008).
* Danzl et al. (2008) J. G. Danzl, E. Haller, M. Gustavsson, M. J. Mark, R. Hart, N. Bouloufa, O. Dulieu, H. Ritsch, and H.-C. Nägerl, Science 321, 1062 (2008).
* Lang et al. (2008) F. Lang, K. Winkler, C. Strauss, R. Grimm, and J. Hecker Denschlag, Physical Review Letters 101, 133005 (2008).
* Takekoshi et al. (2014) T. Takekoshi, L. Reichsöllner, A. Schindewolf, J. M. Hutson, C. R. Le Sueur, O. Dulieu, F. Ferlaino, R. Grimm, and H.-C. Nägerl, Physical Review Letters 113, 205301 (2014).
* Molony et al. (2014) P. K. Molony, P. D. Gregory, Z. Ji, B. Lu, M. P. Köppinger, C. R. Le Sueur, C. L. Blackley, J. M. Hutson, and S. L. Cornish, Physical Review Letters 113, 255301 (2014).
* Park et al. (2015) J. W. Park, S. A. Will, and M. W. Zwierlein, Physical Review Letters 114, 205302 (2015).
* Guo et al. (2016) M. Guo, B. Zhu, B. Lu, X. Ye, F. Wang, R. Vexiau, N. Bouloufa-Maafa, G. Quéméner, O. Dulieu, and D. Wang, Physical Review Letters 116, 205303 (2016).
* Rvachov et al. (2017) T. M. Rvachov, H. Son, A. T. Sommer, S. Ebadi, J. J. Park, M. W. Zwierlein, W. Ketterle, and A. O. Jamison, Physical Review Letters 119, 143001 (2017).
* Seeßelberg et al. (2018) F. Seeßelberg, N. Buchheim, Z.-K. Lu, T. Schneider, X.-Y. Luo, E. Tiemann, I. Bloch, and C. Gohle, Physical Review A 97, 013405 (2018), URL https://doi.org/10.1103%2Fphysreva.97.013405.
* Yang et al. (2019) H. Yang, D.-C. Zhang, L. Liu, Y.-X. Liu, J. Nan, B. Zhao, and J.-W. Pan, Science 363, 261 (2019), ISSN 0036-8075, URL https://science.sciencemag.org/content/363/6424/261.
* Voges et al. (2020) K. K. Voges, P. Gersema, M. Meyer zum Alten Borgloh, T. A. Schulze, T. Hartmann, A. Zenesini, and S. Ospelkaus, Phys. Rev. Lett. 125, 083401 (2020), URL https://link.aps.org/doi/10.1103/PhysRevLett.125.083401.
* Shuman et al. (2010) E. S. Shuman, J. F. Barry, and D. DeMille, Nature 467, 820 (2010).
* Barry et al. (2014) J. F. Barry, D. J. McCarron, E. B. Norrgard, M. H. Steinecker, and D. DeMille, Nature 512, 286 (2014).
* Truppe et al. (2017) S. Truppe, H. J. Williams, M. Hambach, L. Caldwell, N. J. Fitch, E. A. Hinds, B. E. Sauer, and M. R. Tarbutt, Nature Physics 13, 1173 (2017).
* Kozyryev et al. (2017) I. Kozyryev, L. Baum, K. Matsuda, B. L. Augenbraun, L. Anderegg, A. P. Sedlack, and J. M. Doyle, Physical Review Letters 118, 173201 (2017).
* Anderegg et al. (2018) L. Anderegg, B. L. Augenbraun, Y. Bao, S. Burchesky, L. W. Cheuk, W. Ketterle, and J. M. Doyle, Nature Physics 14, 890 (2018), ISSN 1745-2481, URL https://doi.org/10.1038/s41567-018-0191-z.
* Collopy et al. (2018) A. L. Collopy, S. Ding, Y. Wu, I. A. Finneran, L. Anderegg, B. L. Augenbraun, J. M. Doyle, and J. Ye, Phys. Rev. Lett. 121, 213201 (2018), URL https://link.aps.org/doi/10.1103/PhysRevLett.121.213201.
* Ospelkaus et al. (2010b) S. Ospelkaus, K.-K. Ni, G. Quéméner, B. Neyenhuis, D. Wang, M. H. G. de Miranda, J. L. Bohn, J. Ye, and D. S. Jin, Phys. Rev. Lett. 104, 030402 (2010b), URL https://link.aps.org/doi/10.1103/PhysRevLett.104.030402.
* Yan et al. (2013) B. Yan, S. A. Moses, B. Gadway, J. P. Covey, K. R. Hazzard, A. M. Rey, D. S. Jin, and J. Ye, Nature 501, 521 (2013), ISSN 00280836.
* Gregory et al. (2016) P. D. Gregory, J. Aldegunde, J. M. Hutson, and S. L. Cornish, Phys. Rev. A 94, 041403 (2016), URL https://link.aps.org/doi/10.1103/PhysRevA.94.041403.
* Will et al. (2016) S. A. Will, J. W. Park, Z. Z. Yan, H. Loh, and M. W. Zwierlein, Phys. Rev. Lett. 116, 225306 (2016), URL https://link.aps.org/doi/10.1103/PhysRevLett.116.225306.
* Guo et al. (2018) M. Guo, X. Ye, J. He, G. Quéméner, and D. Wang, Phys. Rev. A 97, 020501 (2018), URL https://link.aps.org/doi/10.1103/PhysRevA.97.020501.
* Blackmore et al. (2020) J. A. Blackmore, P. D. Gregory, S. L. Bromley, and S. L. Cornish, Phys. Chem. Chem. Phys. pp. – (2020), URL http://dx.doi.org/10.1039/D0CP04651E.
* Gorshkov et al. (2011a) A. V. Gorshkov, S. R. Manmana, G. Chen, J. Ye, E. Demler, M. D. Lukin, and A. M. Rey, Physical Review Letters 107, 115301 (2011a), URL https://doi.org/10.1103%2Fphysrevlett.107.115301.
* Gorshkov et al. (2011b) A. V. Gorshkov, S. R. Manmana, G. Chen, E. Demler, M. D. Lukin, and A. M. Rey, Physical Review A 84, 033619 (2011b), URL https://doi.org/10.1103%2Fphysreva.84.033619.
* Hazzard et al. (2013) K. R. A. Hazzard, S. R. Manmana, M. Foss-Feig, and A. M. Rey, Physical Review Letters 110, 075301 (2013), URL https://doi.org/10.1103%2Fphysrevlett.110.075301.
* Moses et al. (2015) S. A. Moses, J. P. Covey, M. T. Miecnikowski, B. Yan, B. Gadway, J. Ye, and D. S. Jin, Science 350, 659 (2015), ISSN 0036-8075, URL https://science.sciencemag.org/content/350/6261/659.
* Reichsöllner et al. (2017) L. Reichsöllner, A. Schindewolf, T. Takekoshi, R. Grimm, and H.-C. Nägerl, Phys. Rev. Lett. 118, 073201 (2017), URL https://link.aps.org/doi/10.1103/PhysRevLett.118.073201.
* Liu et al. (2019) L. R. Liu, J. D. Hood, Y. Yu, J. T. Zhang, K. Wang, Y.-W. Lin, T. Rosenband, and K.-K. Ni, Phys. Rev. X 9, 021039 (2019), URL https://link.aps.org/doi/10.1103/PhysRevX.9.021039.
* Anderegg et al. (2019) L. Anderegg, L. W. Cheuk, Y. Bao, S. Burchesky, W. Ketterle, K.-K. Ni, and J. M. Doyle, Science 365, 1156 (2019), ISSN 0036-8075, URL https://science.sciencemag.org/content/365/6458/1156.
* Kotochigova and Tiesinga (2006) S. Kotochigova and E. Tiesinga, Phys. Rev. A 73, 041405 (2006), URL https://link.aps.org/doi/10.1103/PhysRevA.73.041405.
* Vexiau et al. (2017) R. Vexiau, D. Borsalino, M. Lepers, A. Orbán, M. Aymar, O. Dulieu, and N. Bouloufa-Maafa, International Reviews in Physical Chemistry 36, 709 (2017), eprint https://doi.org/10.1080/0144235X.2017.1351821, URL https://doi.org/10.1080/0144235X.2017.1351821.
* Li et al. (2017) M. Li, A. Petrov, C. Makrides, E. Tiesinga, and S. Kotochigova, Phys. Rev. A 95, 063422 (2017), URL https://link.aps.org/doi/10.1103/PhysRevA.95.063422.
* Seeßelberg et al. (2018) F. Seeßelberg, X.-Y. Luo, M. Li, R. Bause, S. Kotochigova, I. Bloch, and C. Gohle, Phys. Rev. Lett. 121, 253401 (2018), URL https://link.aps.org/doi/10.1103/PhysRevLett.121.253401.
* Neyenhuis et al. (2012) B. Neyenhuis, B. Yan, S. A. Moses, J. P. Covey, A. Chotia, A. Petrov, S. Kotochigova, J. Ye, and D. S. Jin, Physical Review Letters 109, 230403 (2012), URL https://link.aps.org/doi/10.1103/PhysRevLett.109.230403.
* Gregory et al. (2017) P. D. Gregory, J. A. Blackmore, J. Aldegunde, J. M. Hutson, and S. L. Cornish, Physical Review A 96, 021402(R) (2017).
* Blackmore et al. (2018) J. A. Blackmore, L. Caldwell, P. D. Gregory, E. M. Bridge, R. Sawant, J. Aldegunde, J. Mur-Petit, D. Jaksch, J. M. Hutson, B. E. Sauer, et al., Quantum Science and Technology 4, 014010 (2018), URL https://doi.org/10.1088%2F2058-9565%2Faaee35.
* Kotochigova and DeMille (2010) S. Kotochigova and D. DeMille, Physical Review A 82, 063421 (2010), URL https://link.aps.org/doi/10.1103/PhysRevA.82.063421.
* Katori et al. (2003) H. Katori, M. Takamoto, V. G. Pal’chikov, and V. D. Ovsiannikov, Phys. Rev. Lett. 91, 173005 (2003), URL https://link.aps.org/doi/10.1103/PhysRevLett.91.173005.
* Ye et al. (2008) J. Ye, H. J. Kimble, and H. Katori, Science 320, 1734 (2008), ISSN 0036-8075, URL https://science.sciencemag.org/content/320/5884/1734.
* Kondov et al. (2019) S. S. Kondov, C. H. Lee, K. H. Leung, C. Liedl, I. Majewska, R. Moszynski, and T. Zelevinsky, Nature Physics 15, 1118 (2019).
* Bause et al. (2020) R. Bause, M. Li, A. Schindewolf, X.-Y. Chen, M. Duda, S. Kotochigova, I. Bloch, and X.-Y. Luo, Phys. Rev. Lett. 125, 023201 (2020), URL https://link.aps.org/doi/10.1103/PhysRevLett.125.023201.
* Sundar et al. (2018) B. Sundar, B. Gadway, and K. R. Hazzard, Scientific Reports 8, 1 (2018), ISSN 20452322.
* Petrov et al. (2013) A. Petrov, C. Makrides, and S. Kotochigova, Mol. Phys. 111, 1731 (2013).
* Aldegunde et al. (2008) J. Aldegunde, B. A. Rivington, P. S. Żuchowski, and J. M. Hutson, Phys. Rev. A 78, 033434 (2008), URL https://link.aps.org/doi/10.1103/PhysRevA.78.033434.
* Docenko et al. (2010) O. Docenko, M. Tamanis, R. Ferber, T. Bergeman, S. Kotochigova, A. V. Stolyarov, A. de Faria Nogueira, and C. E. Fellows, Phys. Rev. A 81, 042511 (2010), URL https://link.aps.org/doi/10.1103/PhysRevA.81.042511.
* Docenko et al. (2011) O. Docenko, M. Tamanis, R. Ferber, H. Knöckel, and E. Tiemann, Phys. Rev. A 83, 052519 (2011), URL https://link.aps.org/doi/10.1.
* Rakić et al. (2016) M. Rakić, R. Beuc, N. Bouloufa-Maafa, O. Dulieu, R. Vexiau, G. Pichler, and H. Skenderović, J. Chem. Phys. 144, 204310 (2016), URL https://doi.org/10.1063/1.4952758.
* Chotia et al. (2012) A. Chotia, B. Neyenhuis, S. A. Moses, B. Yan, J. P. Covey, F.-F. M., A. M. Rey, D. S. Jin, and J. Ye, Physical Review Letters 108, 080405 (2012).
* Schuster et al. (2019) T. Schuster, F. Flicker, M. Li, S. Kotochigova, J. E. Moore, J. Ye, and N. Y. Yao, arXiv:1901.08597 (2019), URL https://arxiv.org/abs/1901.08597.
# Combining pre-trained language models and structured knowledge
Pedro Colon-Hernandez (MIT Media Lab; 75 Amherst St, Cambridge, MA, 02139; e-mail:
<EMAIL_ADDRESS>), Catherine Havasi (Dalang Health), Jason Alonso (Dalang
Health), Matthew Huggins (MIT Media Lab), Cynthia Breazeal (MIT Media Lab)
###### Abstract
In recent years, transformer-based language models have achieved state of the
art performance in various NLP benchmarks. These models are able to extract
mostly distributional information, with some semantics, from unstructured text;
however, it has proven challenging to integrate structured information, such as
knowledge graphs, into these models. We examine a variety of approaches to
integrate structured knowledge into current language models and identify
challenges and possible opportunities to leverage both structured and
unstructured information sources. From our survey, we find that there are
still opportunities to exploit adapter-based injections and that it may be
possible to further combine several of the explored approaches into one
system.
## 1 Introduction
Recent developments in Language Modeling (LM) techniques have greatly improved
the performance of systems in a wide range of Natural Language Processing
(NLP) tasks. Many of the current state of the art systems are based on
variations to the transformer Vaswani et al. (2017) architecture. The
transformer architecture, along with modifications such as the Transformer XL
Dai et al. (2019) and various training regimes such as the Masked Language
Modeling (MLM) used in BERT Devlin et al. (2018) or the Permutation Language
Modeling (PLM) used in XLNet Yang et al. (2019), uses an attention-based
mechanism to model long-range dependencies in text. This modeling encodes
syntactic knowledge, and to a certain extent some semantic knowledge contained
in unstructured texts.
There has been interest in being able to understand what kinds of knowledge
are encoded in these models’ weights. Hewitt et al. Hewitt and Manning (2019)
devise a system that generates a distance metric between word embeddings in
language models such as BERT. They show evidence that syntax trees are embedded
in transformer language models, which could explain the performance of these
models on tasks that rely on syntactic elements of text.
Petroni et al. Petroni et al. (2019) build a system (LAMA) to gauge what kinds
of knowledge are encoded in these weights. They discover that language models
embed some facts and relationships in their weights during pre-training. This
in turn can help explain the performance of these models in semantic tasks.
However, these transformer-based language models have some tendency to
hallucinate knowledge (whether through bias or incorrect knowledge in the
training data). This also means that some of the semantic knowledge they
incorporate is not rigidly enforced or utilized effectively.
Avenues of research have begun to open up on how to prevent this hallucination
and how to inject additional knowledge from external sources into the
transformer-based language models. One promising avenue is through the
integration of knowledge graphs such as Freebase Bollacker et al. (2008),
WordNet Miller (1998), ConceptNet Speer, Chin, and Havasi (2017), and ATOMIC
Sap et al. (2019).
A knowledge graph (used somewhat interchangeably with knowledge base although
they are different concepts) is defined as “a graph of data intended to
accumulate and convey knowledge of the real world, whose nodes represent
entities of interest and whose edges represent relations between these
entities" Hogan et al. (2020). Formally, a knowledge graph is a set of triples
that represents nodes and edges between these nodes. Let us define a set of
vertices (which we will refer to as concepts) as $V$, a set of edges as $E$
(which we will refer to as assertions as per Speer and HavasiSpeer and Havasi
(2012)), and a set of labels $L$ (which we will refer to as relations). A
knowledge graph is a tuple $G:=(V,E,L)$ (we use the formal definitions found
in Appendix B of Hogan et al. (2020)). The set of edges ($E$) or assertions is
composed of triples $E\subseteq V\times L\times V$, each seen as a subject
(a concept), a relation (a label), and an object (another concept), respectively
(e.g., $(subject,relation,object)$). These edges can in some cases carry weights
to represent the strength of the assertion. Broadly speaking, knowledge graphs
(KGs) are a collection of tuples that represent things that should be true
within the knowledge of the world that we are representing. An example
assertion is “a dog is an animal" and its representation as a tuple would be:
(dog,isA,animal).
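The triple-based definition above can be made concrete with a minimal sketch; the class and method names here are illustrative, not taken from any particular KG library:

```python
# Minimal sketch of a knowledge graph as a set of (subject, relation, object)
# triples, following the G := (V, E, L) definition above.
from typing import NamedTuple, Set


class Assertion(NamedTuple):
    subject: str
    relation: str
    obj: str


class KnowledgeGraph:
    def __init__(self):
        self.assertions: Set[Assertion] = set()  # the edge set E

    def add(self, subject: str, relation: str, obj: str) -> None:
        self.assertions.add(Assertion(subject, relation, obj))

    @property
    def concepts(self):  # the vertex set V
        return ({a.subject for a in self.assertions}
                | {a.obj for a in self.assertions})

    @property
    def relations(self):  # the label set L
        return {a.relation for a in self.assertions}


kg = KnowledgeGraph()
kg.add("dog", "isA", "animal")
kg.add("population", "AtLocation", "city")
```

Real KGs such as ConceptNet additionally attach weights and provenance to each assertion, which a fuller implementation would store alongside the triple.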
Ideally, we would want to “inject" this structured collection of confident
information (i.e. knowledge graph) into that of the high-coverage, contextual
information found in language models. This injection would permit the model to
incorporate some of the information found in the KG to improve its performance
in inference tasks.
There are currently various approaches that try to achieve this injection. The
approaches in general take either one or combinations of three forms: input
focused injections, architecture focused injections, and output focused
injections. We define an input focused injection as any technique that
modifies the data pre-processing or the pre-transformer layer inputs that the
base model uses (i.e., injecting knowledge graph triples into the training data
to pre-train/fine-tune on them, or combining entity embeddings into the static
word embeddings that the models have). We define architecture focused
injections as techniques that alter a base model’s transformer layers (i.e.
adding additional layers that inject in some representation). Lastly, we
define an output focused injection as any techniques that either modify the
output of the base models or that modify/add custom loss functions. In
addition to these three basic types, there are approaches that utilize
combinations of these (i.e. a system that uses both input and output
injections), which we call combination injections. Figure 1 gives an abstract
visualization of the types of injections that we describe.
To be consistent across the types of injections, we will now give some
definitions and nomenclature. Let us define a sequence of words (unstructured
text) as $S$. Typically in a transformer-based model, this sequence of words
is converted to a sequence of tokens that is then converted into some initial
context-independent embeddings. To a word sequence we can apply a tokenization
technique $\mathcal{T}$ to convert the word sequence into a token sequence
$T$. This can be seen as $\mathcal{T}(S)=T$. This sequence $T$ is used as a
lookup in an embedding layer $\mathcal{E}$ to produce context independent
token vector embeddings: $\mathcal{E}(T)=E$. These are then passed
sequentially through various contextualization layers (i.e. transformers)
which we define as the set $\mathcal{H}$, $\mathcal{H}=(H_{1},...,H_{n})$. The
successive application of these ultimately produces a sequence of contextual
embeddings $C$: $C=H_{n}(H_{n-1}(...H_{1}(E)))$. We additionally define
$\mathcal{G}$ as graph embeddings of a knowledge graph $G$ that are the result
of some embedding function $\mathcal{E}_{g}$:
$\mathcal{G}=\mathcal{E}_{g}(G)$. The final sequence $C$ is run through a final
layer $L_{LM}$ that is used to calculate the language modeling loss function
$\mathcal{L}$ that is optimized through back-propagation. The notation that we
utilize is intentionally vague on the definition of the functions, in order
for us to fit the different works that we survey.
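The notation can be illustrated with a toy end-to-end sketch; every component below is a stand-in (a whitespace tokenizer, a random lookup table, and a trivial mixing layer), not a real transformer:

```python
# Toy illustration of the pipeline above: a word sequence S is tokenized
# (T = 𝒯(S)), embedded (E = ℰ(T)), and passed through contextualization
# layers H_1..H_n to produce contextual embeddings C.
import numpy as np


def tokenize(sentence):                    # 𝒯: S -> T
    return sentence.lower().split()


def make_embedding_layer(vocab, dim=4):    # ℰ: context-independent lookup table
    rng = np.random.default_rng(0)
    return {tok: rng.standard_normal(dim) for tok in vocab}


def contextualize(E, n_layers=2):          # C = H_n(...H_1(E))
    C = np.stack(E)
    for _ in range(n_layers):
        # stand-in for attention: every token mixes in the sequence mean
        C = C + C.mean(axis=0)
    return C


S = "a dog is an animal"
T = tokenize(S)
emb = make_embedding_layer(set(T))
E = [emb[tok] for tok in T]
C = contextualize(E)                       # shape: (len(T), dim)
```

Each injection category in the following sections can then be read as intervening at a different point in this chain: before `contextualize` (input), inside it (architecture), or after it (output).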
In the following sections we will look at attempts of injecting knowledge
graph information that fall into the aforementioned categories. Additionally,
we will highlight relevant benefits in these approaches. We conclude with
possible opportunities and future directions for injecting structured
knowledge into language models.
Figure 1: Visualization of boundaries of the different categories of knowledge
injections. Combination injections involve combinations of the three
categories.
## 2 Input Focused Injections
In this section we will describe knowledge injections whose techniques center
around modifying either the structure of the input or the data that is
selected to be fed into the base transformer models. A common approach to
inject information from a knowledge graph is by converting its assertions into
a set of words (possibly including separator tokens) and pre-training or fine-
tuning a language model with these inputs. We discuss two particular papers
that focus on structuring the input in different ways as to capture the
semantic information from triples found in a KG. These approaches start from a
pre-trained model, and fine-tune on their knowledge infusing datasets. A
summary of these approaches can be found in table 1.
Input focused injections can be seen as any technique whose output is a
modified $E$, hereafter denoted $E^{\prime}$. This modification can be achieved
by modifying $S$, $\mathcal{T}$, $T$, $\mathcal{E}$, or directly $E$ (i.e., the
word sequence, the tokenization function, the token sequence, the context-
independent embedding function, or the actual context-independent embeddings).
The hope of input
focused injections is that the knowledge in $E^{\prime}$ will be distributed
and contextualized through $\mathcal{H}$ as the language models are trained.
### 2.1 Align, Mask, Select (AMS) Ye et al. (2019)
AMS is an approach in which a question answering dataset is created, whose
questions and possible answers are generated by aligning a knowledge graph (in
this particular case ConceptNet) with plain text. A BERT model is trained on
this dataset to inject it with the knowledge.
Taking an example from their work, the ConceptNet triple (population,
AtLocation, city) is aligned with a sentence from the English Wikipedia (i.e.
“The largest city by population is Birmingham, which has long been the most
industrialized city.") that contains both concepts in the triple. They then
proceed to mask out one of the concepts with a special token $([QS])$ and
produce 4 plausible concepts as answers to the masking task by looking at the
neighbors in ConceptNet that have the same masked token and relationship.
Lastly, they concatenate the generated question with the plausible answers and
run it through a BERT model tailored for question answering (QA) (following
the same approach as the architecture and loss for the SWAG task in the
original BERT). At the output, they run the classification token $([CLS])$
through a softmax classifier to determine if the selected concept is the
correct one or not.
The authors note that the work is sensitive to what it has seen in the pre-
training because when asked a question that needs to disambiguate a pronoun it
tries to match what it has seen the most in the training data. This may mean
that the generalization or understanding of the structured knowledge (here
commonsense information) is overshadowed by the distributional information the
model is learning; however, more testing would need to be done to verify this.
Overall, some highlights of the work are:
* •
Automated pre-training approach which constructs a QA dataset aligned to a KG
* •
Utilization of graph-based confounders in generated dataset entries
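The align-mask-select construction described above can be sketched as follows; the `[QS]` token matches the description in the text, but the data layout, function name, and distractor-selection details are simplifications rather than the authors' exact procedure:

```python
# Hedged sketch of AMS-style dataset construction: align a triple with a
# sentence containing both of its concepts, mask one concept, and gather
# distractor answers from KG neighbors sharing the same relation and object.
def align_mask_select(triple, sentence, kg_triples, n_distractors=4):
    subject, relation, obj = triple
    if subject not in sentence or obj not in sentence:
        return None                                  # alignment failed
    question = sentence.replace(subject, "[QS]", 1)  # mask one concept
    # distractors: other subjects linked to the same object by the same relation
    distractors = [s for (s, r, o) in kg_triples
                   if r == relation and o == obj and s != subject]
    return {"question": question,
            "answers": [subject] + distractors[:n_distractors],
            "label": 0}                              # correct answer first


triples = [("population", "AtLocation", "city"),
           ("park", "AtLocation", "city"),
           ("mall", "AtLocation", "city")]
entry = align_mask_select(("population", "AtLocation", "city"),
                          "The largest city by population is Birmingham.",
                          triples)
```

Each generated entry would then be fed to a BERT QA head, with the `[CLS]` output classifying whether a candidate answer is the masked concept.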
### 2.2 COMmonsEnse Transformers (COMET) Bosselut et al. (2019)
COMET is a GPTRadford et al. (2018) based system which is trained on triples
from KGs (ConceptNet and Atomic) to learn to predict the object of the triple
(the triples being defined as (subject, relation, object)). The triples are
fed as a concatenated sequence of words into the model (i.e. the words for the
subject, the relationship, and the object) along with some separators.
The authors initialize the GPT model to the final weights in the training from
Radford et al.Radford et al. (2018) and proceed to train it to predict the
words that belong to the object in the triple. A very interesting part of this
work is that it is directly capable of performing knowledge graph completion,
in the form of sentences, for nodes and relations that may not have been seen
during training.
One plausible shortcoming of this work is that the model must still extract
the semantic information from the distributional one, possibly suffering from
the same bias as AMS. In addition, training on the text version of these
triples may cause the model to lose some of the syntax it learns, due to
awkwardly formatted inputs (i.e., “cat located at housing" rather than “a cat
is located at a house"); however, further testing of both points needs to be
performed.
There is some relevant derivative work for COMET by Bosselut et al.Bosselut
and Choi (2019) which looks into how effective COMET is at building KGs on the
fly given a certain context, a question, and a proposed answer. They combine
the context with a relation from ATOMIC and feed it into COMET to represent
reasoning hops. They do this for multiple relations and keep redoing this with
the generated outputs to represent a reasoning chain for which they can derive
a probability. They use this in a zero-shot evaluation of a question-answering
system and find that it is effective. Overall some highlights of COMET are:
* •
Generative language model that can provide natural language representations of
triples
* •
Useful model for zero-shot KG completion
* •
Simple pre-processing of triples for training
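The triple serialization that COMET trains on can be sketched in a few lines; the separator tokens below mirror the example in Table 1, but the exact special tokens and masking scheme are simplifications of the published format:

```python
# Sketch of COMET-style input formatting: a (subject, relation, object) triple
# is serialized into one token sequence so a generative LM can learn to
# produce the object's words given the subject and relation.
def triple_to_sequence(subject, relation, obj):
    return f"{subject} [MASK] <{relation}> {obj}"


def training_pair(subject, relation, obj):
    # input: subject + relation; target: the object words to be generated
    prompt = f"{subject} [MASK] <{relation}>"
    return prompt, obj


seq = triple_to_sequence("PersonX goes to the mall", "xIntent", "to buy clothes")
prompt, target = training_pair("PersonX goes to the mall", "xIntent",
                               "to buy clothes")
```

Because the model generates the object as free text, the same formatting also supports zero-shot completion for unseen subjects, as discussed above.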
Model | Summary of Injection | Example of Injection
---|---|---
Align, Mask, Select | Aligns a knowledge base with textual sentences, masks entities in the sentences, and selects alternatives with confounders to create a QA dataset | KG Assertion: (population, AtLocation, city). Model Input: The largest [QW] by population is Birmingham, which has long been the most industrialized city? city, Michigan, Petrie dish, area with people inhabiting, country
COMET | Ingests a formatted sentence version of a triple from ConceptNet and Atomic | KG Assertion: (PersonX goes to the mall, xIntent, to buy clothes). Model input: PersonX goes to the mall [MASK] $\langle$ xIntent $\rangle$ to buy clothes

Table 1: Input Injection System Comparisons
## 3 Architecture Injections
In this section we describe approaches that focus on architectural changes to
language models. This involves either adding additional layers that integrate
knowledge in some way with the contextual representations or modifying
existing layers to manipulate things such as attention mechanisms. We discuss
two approaches within this category that fall under layer modifications. These
approaches utilize adapter-like mechanisms to be able to inject information
into the models. A summary of these approaches can be found in table 2.
### 3.1 KnowBERTPeters et al. (2019)
KnowBERT modifies BERT’s architecture by integrating some layers that they
call the Knowledge Attention and Recontextualization (KAR). These layers take
graph entity embeddings, that are based on Tucker Tensor Decompositions for KG
completion Balažević, Allen, and Hospedales (2019), and run them through an
attention mechanism to generate entity span embeddings. These span embeddings
are then added to the regular BERT contextual representations. The summed
representations are then uncompressed and passed on to the next layer in a
regular BERT. Once the KAR entity linker has been trained, the rest of the
BERT model is unfrozen and trained during pre-training. These KAR layers
are trained for every KG that is to be injected; in this work, they use data
from Wikipedia and WordNet.
An interesting observation is that the injection happens in the later layers,
which means that the contextual representation up to that point may be
unaltered by the injected knowledge. This is done to stabilize the training,
but could present an opportunity to inject knowledge in earlier levels.
Additionally, the way the system is trained, the entity linking is first
trained, and then the whole system is unfrozen to incorporate the additional
knowledge into BERT. This strategy could lead to the catastrophic
forgetting Kirkpatrick et al. (2017) problem, where the knowledge from the
underlying BERT model or the additional structured injection may be forgotten
or ignored.
This technique falls into a broader category of what are called Adapters
Houlsby et al. (2019). Adapters are layers that are added into a language model and
are subsequently fine tuned to a specific task. The interesting aspect of
adapters is that they add a minimal amount of additional parameters, and
freeze the original model weights. The added parameters are also initialized
to produce a close-to-identity output. It is worth noting that KnowBERT is
not strictly an Adapter technique, as the model is unfrozen during training.
Some highlights of KnowBERT are the following:
* •
Fusion of contextual and graph representation of entities
* •
Attention enhanced entity spanned knowledge infusion
* •
Permits the injection of multiple KGs in varying levels of the model
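The core fusion idea in the KAR layer can be sketched with numpy; this shows only the span-level mixing (pool a span, attend over candidate entity embeddings, add the result back), and omits the projections, entity linker, and recontextualization of the real KnowBERT:

```python
# Rough sketch of KnowBERT-style entity fusion: pool an entity span from the
# contextual representation, attend over candidate entity embeddings, and add
# the resulting entity vector back into the span.
import numpy as np


def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()


def kar_fuse(C, span, entity_embs):
    """C: (seq_len, dim) contextual reps; span: (start, end);
    entity_embs: (k, dim) candidate graph entity embeddings."""
    start, end = span
    span_vec = C[start:end].mean(axis=0)          # pooled span representation
    scores = entity_embs @ span_vec               # attention over candidates
    entity_vec = softmax(scores) @ entity_embs    # weighted entity embedding
    C = C.copy()
    C[start:end] += entity_vec                    # inject into the span
    return C


rng = np.random.default_rng(1)
C = rng.standard_normal((6, 4))                   # 6 tokens, dim 4
entities = rng.standard_normal((3, 4))            # 3 candidate entities
C_fused = kar_fuse(C, (2, 4), entities)           # fuse into tokens 2-3
```

Note that only the tokens inside the entity span are altered, which matches the observation above that the contextual representation elsewhere can pass through unchanged.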
### 3.2 Common sense or world knowledge? investigating adapter-based
knowledge injection into pre-trained transformersLauscher et al. (2020)
This work explores what kinds of knowledge are infused by fine tuning an
adapter equipped version of BERT on ConceptNet. They generate and test models
trained on sentences from the Open Mind Common Sense (OMCS)Singh et al. (2002)
corpus and from walks in the ConceptNet graph. They note that with simple
adapters and as little as 25k/100k update steps on their training sentences,
they are able to greatly improve the encoded “World Knowledge" (another name for
the knowledge found in ConceptNet).
However, it is worth noting that the information is presented as sentences on
which the adapters are fine-tuned. This means the model may share the possible
shortcomings of the input-focused approaches (relying more on the
distributional than the semantic information); however, testing needs to be
performed to confirm this. Overall, some highlights of this work are the
following:
* •
Adapter based approach which fine-tunes a minimal amount of parameters
* •
Shows that a relatively small amount of additional iterations can inject the
knowledge in the adapters
* •
Show that adapters, trained on KGs, do indeed boost the semantic performance
of transformer-based models
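A minimal bottleneck adapter of the kind used in these works can be sketched as follows; the dimensions and initialization scale are illustrative, but the structure (down-projection, nonlinearity, up-projection, residual, near-identity start) follows the Houlsby et al. design:

```python
# Minimal numpy sketch of a bottleneck adapter: down-project, apply a
# nonlinearity, up-project, and add a residual connection. Initializing the
# up-projection to zero makes the layer start as an identity, so it can be
# inserted into a frozen pre-trained model without disturbing its outputs.
import numpy as np


class Adapter:
    def __init__(self, dim, bottleneck, seed=0):
        rng = np.random.default_rng(seed)
        self.W_down = rng.standard_normal((dim, bottleneck)) * 0.01
        self.W_up = np.zeros((bottleneck, dim))   # near-identity initialization

    def __call__(self, h):
        z = np.maximum(h @ self.W_down, 0.0)      # ReLU bottleneck
        return h + z @ self.W_up                  # residual connection


adapter = Adapter(dim=8, bottleneck=2)
h = np.ones((3, 8))                               # 3 tokens, hidden dim 8
out = adapter(h)                                  # identical to h before training
```

Only `W_down` and `W_up` would be updated during fine-tuning, which is why adapters add so few trainable parameters relative to the frozen base model.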
Model | Injected in Pre-Training | Injected in Fine-Tuning | Summary of Injection
---|---|---|---
KnowBERT | Yes | Yes | Sandwich Adapter-like layers which sum contextual representation of layer with graph representation of entities and distributes it in an entity span
Common sense or world knowledge?[…] | No | Yes | Use sandwich adapters to fine tune on a KG
Table 2: Architecture Injection System Comparisons
## 4 Output Injections
In this section we describe approaches that focus on incorporating knowledge
by changing either the output structure or the losses used in the base model.
Only one model falls strictly under this category: SemBERT, which injects
semantic role embeddings into the output of a BERT model.
### 4.1 SemBERT Zhang et al. (2019)
SemBERT uses a subsystem that generates embedding representations of the
output of a semantic role labeling Màrquez et al. (2008) system. These
representations are then concatenated with the contextualized representations
from BERT to help incorporate relational knowledge. The approach, although
clever, may fall short in that it gives a representation for the roles but
leaves the model to figure out the exact relationship the roles are
performing; testing would need to be performed to check this. Some highlights
of SemBERT are:
* •
Encodes semantic role in an entity embedding that is combined at the output
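The core output-injection step, concatenating a semantic-role embedding onto each contextual token vector, is simple to sketch (a toy illustration with made-up dimensions; SemBERT additionally aligns subword pieces to words before concatenating):

```python
def concat_semantic(contextual, role_embeddings):
    """Concatenate a semantic-role embedding onto each token's contextual vector."""
    assert len(contextual) == len(role_embeddings)
    return [tok + role for tok, role in zip(contextual, role_embeddings)]

# toy: 3 tokens, 4-dim contextual output, 2-dim role embedding
bert_out = [[0.1] * 4, [0.2] * 4, [0.3] * 4]
roles = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]  # e.g. ARG0, V, other
joint = concat_semantic(bert_out, roles)
print(len(joint[0]))  # 6: 4 contextual dims + 2 role dims
```

Downstream layers then consume the widened vectors, which carry the role label explicitly but leave interpreting the role relations to the model, as noted above.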
## 5 Combination and Hybrid Injections
Here we describe approaches that use combinations of injection types, such as
input/output injections or architecture/output injections. We start by
looking at models that perform input injections and reinforce them with
output injections (LIBERT, KALM). We then look at models that manipulate the
attention mechanism to mimic graph connections (BERT-MK, K-BERT). We follow
this by looking into KG-BERT, a model that operates on KG triples, and
K-Adapter, a modification of RoBERTa that encodes KGs into adapter layers and
fuses them. After this, we look into the approach presented as Cracking the
Contextual Commonsense Code[…], which determines that there are areas lacking
in BERT that could be addressed by supplying appropriate data, and we look at
ERNIE 2.0, a framework for multi-task training of semantically aware models.
Lastly, we look at two hybrid approaches which extract LM knowledge and
leverage it for different tasks. A summary of these injections can be found in
Table 3.
### 5.1 Knowledge-Aware Language Model (KALM) Pre-Training Rosset et al.
(2020)
KALM does not modify the internal architecture of the model it injects
knowledge into; rather, it modifies the model's input by fusing entity
embeddings with the normal word embeddings the language model (GPT-2, in
KALM's case) uses. The model is then forced to uphold the entity information
in the output by an additional pre-training loss: a max-margin between the
cosine distance of the output contextual representation to the input entity
embedding and its cosine distance to a confounder entity. Altogether, this
forces the model to notice when an entity is present and pushes the
contextual representation toward the semantics of the correct input entity.
Some highlights of KALM are:
* •
Sends an entity signal at the beginning and enforces it in the output of a
generative model so that it notices the entity's semantics
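KALM's entity-enforcing loss can be sketched as a hinge over cosine distances (a simplified single-example version; the function names are ours and the margin value is illustrative):

```python
import math

def cosine_distance(u, v):
    """1 - cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def entity_max_margin(contextual, entity, confounder, margin=0.5):
    """Hinge loss: the contextual representation should be closer (in cosine
    distance) to the true entity embedding than to a confounder, by `margin`."""
    return max(0.0, margin + cosine_distance(contextual, entity)
                           - cosine_distance(contextual, confounder))
```

When the contextual representation already aligns with the true entity and not the confounder, the loss is zero; otherwise the gradient pulls the representation toward the correct entity's semantics, which is the behavior the paper describes.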
### 5.2 Exploiting structured knowledge in text via graph-guided
representation learning Shen et al. (2020)
This work masks, in BERT's MLM objective, informative entities that are drawn
from a knowledge graph. In addition, it has an auxiliary ranking objective: a
bilinear model calculates a similarity score between the contextual
representation of an entity mention and the representation of the $[CLS]$
token for the text, with a max-margin loss used to determine whether the
mention is a relevant entity or a distractor. Both KALM and this work are very
similar, but a key difference is that KALM uses a generative model without any
kind of MLM objective, and KALM does not do any kind of filtering for the
entities. Some highlights of this work are:
* •
Filters relevant entities to incorporate their information into the model
* •
Enforces entity signal at beginning and end of the model through masking and
max-margin losses
### 5.3 Lexically Informed BERT (LIBERT) Lauscher et al. (2019)
LIBERT converts batches of lexical constraints, along with negative examples,
into a BERT-compatible format. The lexical constraints are synonyms and direct
hyponym-hypernym (specific, broad) pairs and take the form of a set of word
tuples: $C=\{(w_{1},w_{2})_{i}\}_{i=1}^{N}$. In addition to this set, the
authors generate negative examples by finding the words that are semantically
close to $w_{1}$ and $w_{2}$ in a given batch. They then format the examples
into something BERT can use: simply the wordpieces of the words in the batch,
separated by the separator token. This input is passed through BERT, and the
$[CLS]$ token is fed to a softmax classifier that determines whether the
example is a valid lexical relation. During pre-training they alternate
between a batch of sentences and a batch of constraints. LIBERT outperforms
BERT with fewer (1M) iterations of pre-training. It is worth noting that as
the number of training iterations increases, the gap between the two systems,
although present, becomes smaller. This may indicate that, although the
additional training objective is effective, it may be getting overshadowed by
the regular MLM coupled with large amounts of data; more testing needs to be
performed. It is also worth noting that the authors do not align the sentences
with the constraint batches or combine the training tuples, which may hinder
training as BERT has to alternate between different training input structures;
lastly, they do not incorporate antonymy constraints in their confounder
selection, so further experimentation would be required to verify the effects
of these choices. Some highlights of LIBERT are the following:
* •
Incorporates lexical constraints (synonymy and hyponymy-hypernymy) into pre-training
* •
Good performance with constrained amounts of data
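The conversion of a constraint tuple into a BERT-compatible sequence can be sketched as below (the exact token layout is our assumption based on the paper's description of wordpieces separated by the separator token):

```python
def format_constraint(w1_pieces, w2_pieces):
    """Format a lexical-constraint pair as a BERT-style input sequence:
    [CLS] wordpieces(w1) [SEP] wordpieces(w2) [SEP]."""
    return ["[CLS]"] + w1_pieces + ["[SEP]"] + w2_pieces + ["[SEP]"]

# a hyponym-hypernym pair whose second word splits into two wordpieces
seq = format_constraint(["dog"], ["can", "##ine"])
print(seq)  # ['[CLS]', 'dog', '[SEP]', 'can', '##ine', '[SEP]']
```

The $[CLS]$ position of this sequence is what LIBERT feeds to its relation-validity classifier, alternating such constraint batches with ordinary sentence batches during pre-training.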
### 5.4 BERT-MK He et al. (2019)
BERT-MK utilizes a combination of an architecture injection and an output
injection (an additional training loss). It utilizes KG-transformer modules,
which are transformer layers combined with learned entity representations.
These entity representations are generated from another set of transformer
layers that are trained on a KG converted to natural-language sentences. The
interesting aspect is that these additional layers incorporate an attention
mask that mimics the connections in the KG, to a certain extent incorporating
the structure of the graph and propagating it back into the embeddings. These
additional layers are trained to reconstruct the input set of triples. The
authors evaluate the system on medical knowledge (MK); it would be
interesting to evaluate it on the GLUE benchmark and with other KGs such as
ATOMIC or ConceptNet. Some highlights of BERT-MK are:
* •
Utilization of a modified attention mechanism to mimic KG structure between
terms
* •
Incorporation of triple reconstruction loss to train the KG-transformer
modules
* •
Merges KG-transformer with regular transformer for contextual+knowledge-
informed representation
Model | Injected in Pre-Training | Injected in Fine-Tuning | Summary of Input Injection | Summary of Architecture Injection | Summary of Output Injection
---|---|---|---|---|---
KALM | Yes | No | Combines an entity embedding with the model’s word embeddings | N/A | Incorporates a max-margin loss with cosine distances to enforce semantic information
Exploiting structured knowledge in text […] | Yes | No | Uses a KG informed masking scheme to exploit MLM learning | N/A | Incorporates a max-margin loss with distractors from a KG and a bilinear scoring model for the MM loss.
LIBERT | Yes | No | Alternates between batches of sentences, and batches of tuples that are lexically related | N/A | Adds a binary classifier as a third training task to determine if the tuples form a valid lexical relation
BERT-MK | Yes | No | N/A | Combines a base transformer with KG transformer modules which are trained to learn contextual entity representations and have an attention mechanism that mimics the connections between a graph | Use a triple reconstruction loss, similar to MLM, but for triples
K-BERT | Yes | No | Incorporate as part of their training batches, assertions from entities present in a sample | Modify the attention mechanism and position embeddings to reorder injected information, and mimic KG connections | N/A
KG-BERT | No | Yes | Feeds triples from a knowledge graph as input examples | N/A | Uses a binary classification objective to determine if a triple is correct and a multi-class classification objective to determine what kind of relation the triple has.
K-Adapter | No | Yes | N/A | Fine tune adapter layers to store information from a KG | Combine the model and different adapter layers to give a contextual representation with information from different sources, additionally use a relation classification loss for each trained adapter.
Cracking the contextual commonsense code[…] | Yes | No | Pre-process the data to address commonsense relation properties that are deficient in BERT | N/A | Concatenates a graph embedding to the output of the BERT model
ERNIE 2.0 | Yes | No | Construct data for pre-training tasks | N/A | Provides a battery of tasks that are trained in parallel to enforce different semantic areas in a model
Table 3: Combination Injection Systems Comparisons
### 5.5 K-BERT Liu et al. (2020)
K-BERT uses a combination of input and architecture injections. For a given
sentence, it injects relevant triples for the entities that are present both
in the sentence and in a KG. These triples are injected in between the actual
text, and soft-position embeddings determine the order in which the triple
tokens are evaluated: the injected triple tokens simply receive additional
positional embeddings. This in turn creates a problem: since tokens are
injected wherever entities appear in a sentence, the ordering of the tokens is
altered.
To remedy this, the authors utilize a masked self-attention similar to
BERT-MK's: the attention mechanism can only see everything up to the entity
that matched the injected triple. This attention mechanism helps the model
focus on which relevant knowledge it should incorporate. It would have been
good to see a comparison with simply adding the triples as sentences in the
input, rather than having to fix the attention mechanism to compensate for
the erratic placement. Some highlights of K-BERT are:
* •
Utilization of attention mechanism to mimic connected subgraphs of injected
triples
* •
Injection of relevant triples as text inputs
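K-BERT's masked self-attention can be illustrated with a toy visibility matrix (a simplified sketch of the paper's visible matrix; soft-position indices and real token bookkeeping are omitted):

```python
def visible_matrix(sentence_len, injections):
    """Build a token-visibility mask for a sentence with injected triples.
    `injections` maps an anchor-entity position in the sentence to the list of
    absolute positions holding its injected triple tokens. Sentence tokens see
    each other; injected tokens see only their anchor and their own branch."""
    n = sentence_len + sum(len(v) for v in injections.values())
    M = [[False] * n for _ in range(n)]
    for i in range(sentence_len):                # sentence tokens are mutually visible
        for j in range(sentence_len):
            M[i][j] = True
    for anchor, branch in injections.items():
        for p in branch:
            M[anchor][p] = M[p][anchor] = True   # anchor <-> its injected branch
            for q in branch:
                M[p][q] = True                   # branch tokens see each other
    return M

# sentence of 3 tokens; a 2-token triple injected for the entity at position 1,
# occupying absolute positions 3 and 4
M = visible_matrix(3, {1: [3, 4]})
assert M[1][3] and M[3][1]   # the anchor entity sees its injected branch
assert not M[0][3]           # other sentence tokens cannot see the branch
```

Masking attention this way keeps the injected knowledge from distorting the representations of unrelated sentence tokens, which is the problem the erratic placement would otherwise cause.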
### 5.6 KG-BERT Yao, Mao, and Luo (2019)
The authors present a combination approach that fine-tunes a BERT model on
the text of triples from a KG, similar to COMET. They also feed confounders,
in the form of random samples of entities, into the training of the system.
KG-BERT uses a binary classification task to determine whether a triple is
valid and a relationship-type prediction task to determine which relations
are present between pairs of entities. Although the system is useful for KG
completion, there is no evidence of its performance on other tasks.
Additionally, triples are trained one at a time, which may limit the model's
ability to learn the extended relationships for a given set of entities. Some
highlights of KG-BERT are the following:
* •
Fine tunes BERT into completing triples from a KG
* •
Uses a binary classification to predict if a triple is valid
* •
Uses multi-class classification to predict relation type
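KG-BERT's training-data construction, serializing triples as text and adding corrupted negatives, can be sketched as follows (the function names and exact serialization format are our assumptions; the paper uses the entity and relation descriptions as the text):

```python
import random

def triple_to_sequence(head, relation, tail):
    """Serialize a KG triple as a BERT-style input string."""
    return f"[CLS] {head} [SEP] {relation} [SEP] {tail} [SEP]"

def make_examples(triple, entities, rng):
    """One positive example plus one negative built by corrupting the tail
    with a randomly sampled confounder entity (label 1 = valid, 0 = invalid)."""
    head, rel, tail = triple
    fake_tail = rng.choice([e for e in entities if e != tail])
    return [(triple_to_sequence(head, rel, tail), 1),
            (triple_to_sequence(head, rel, fake_tail), 0)]

rng = random.Random(0)  # seeded for reproducibility
examples = make_examples(("cat", "IsA", "animal"), ["animal", "car", "tree"], rng)
```

The binary labels feed the triple-validity classifier; a separate multi-class head over the relation vocabulary handles relation-type prediction.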
### 5.7 K-Adapter Wang et al. (2020)
A work based on adapters, K-Adapter adds projection layers before and after a
subset of transformer layers, specifically the first, middle, and last layers
of a pre-trained RoBERTa model. RoBERTa is then frozen, as per the adapter
work of Houlsby et al. (2019), and two adapters are trained: one to learn
factual knowledge from Wikipedia triples Elsahar et al. (2019) and one to
learn linguistic knowledge from the outputs of the Stanford parser Chen and
Manning (2014). The adapters are trained with a triple-classification task
(whether the triple is true or not), similar to KG-BERT.
It is worth noting that the authors compare RoBERTa and their K-Adapter
approach against BERT, and BERT has considerably better performance on the
LAMA probes. The authors attribute the major performance delta between their
approach and BERT to RoBERTa's byte-pair encodings Shibata et al. (1999)
(BPE). Another possible reason may be that injection is performed in only a
few layers rather than throughout the entire model, although testing needs to
be done to confirm this. Some highlights of K-Adapter are:
* •
Approach provides a framework for continual learning
* •
Use a fusion of trained adapter outputs for evaluation tasks
### 5.8 Cracking the Contextual Commonsense Code: Understanding Commonsense
Reasoning Aptitude of Deep Contextual Representations Da and Kusai (2019)
The authors analyze BERT and determine that it is deficient in certain
attribute representations of entities. Using the RACE Lai et al. (2017)
dataset and five attribute categories (Visual, Encyclopedic, Functional,
Perceptual, Taxonomic), they select samples from the dataset that may help a
BERT model compensate for its deficiencies in these areas, and they fine-tune
on this data. In addition, the authors concatenate the fine-tuned BERT
embeddings with knowledge graph embeddings. These graph embeddings are
generated from assertions involving the entities present in the questions and
passages they train their final joint model on (MCScript 2.0 Ostermann, Roth,
and Pinkal (2019)). Their selection of additional fine-tuning data for BERT
improves performance on MCScript 2.0, highlighting that the selection
addressed missing knowledge.
It is worth noting that the concatenated graph embeddings boost the
performance of the system, which shows that there is still some information
in KGs that is not in BERT. We classify this approach as a combination
approach because the BERT embeddings and KG embeddings are concatenated and
fine-tuned at the same time. The authors, however, gave no insight as to how
the KG embeddings could have been incorporated in the fine-tuning/pre-training
of BERT with the RACE dataset. Some highlights of this work are:
* •
BERT has some commonsense information in some areas, but is lacking in others
* •
Fine-tuning on the deficient areas increases performance accordingly
* •
The combination of graph embeddings plus contextual representations are useful
### 5.9 ERNIE 2.0 Sun et al. (2020)
The authors develop a framework that constructs pre-training tasks centered
around word-aware, structure-aware, and semantic-aware pre-training, and
proceeds to train a transformer-based model on these tasks. An interesting
aspect is that as training on new tasks finishes, the model keeps training on
older tasks so that it does not forget what it has learned. ERNIE 2.0 does
not incorporate KG information explicitly, but a sub-task within the
word-aware pre-training masks entities and phrases in the hope that the model
learns the dependencies of the masked elements, which may help incorporate
assertion information.
A possible shortcoming of this model is that some of the tasks intended to
infuse semantic information (i.e., the semantic-aware tasks, which are a
discourse-relation task and an information-retrieval (IR) relevance task)
rely on the model to pick the information up from the distributional
examples. This could have the same possible issue as the input injections and
would need to be investigated further. Additionally, the work does not
explicitly use KGs. Some highlights of ERNIE 2.0 are:
* •
Continual learning platform keeps training on older tasks to maintain their
information
* •
Framework permits flexibility on the underlying model
* •
Wide variety of semantic pre-training tasks
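The continual-learning idea, rehearsing older tasks as new ones are added, can be illustrated with a simple round-robin schedule (a simplification we introduce for illustration; ERNIE 2.0 actually allocates iteration budgets per task within its multi-task framework):

```python
def continual_schedule(tasks, steps_per_task):
    """After a new task is introduced, keep interleaving all previously
    introduced tasks so earlier knowledge is rehearsed, not forgotten."""
    schedule = []
    for k in range(1, len(tasks) + 1):
        active = tasks[:k]                       # the new task plus all older ones
        for step in range(steps_per_task):
            schedule.append(active[step % k])    # round-robin over active tasks
    return schedule

sched = continual_schedule(["masking", "sentence-order", "ir-relevance"], 4)
```

Even in the final phase the earliest task ("masking" here) keeps appearing in the schedule, which is the rehearsal behavior that guards against catastrophic forgetting.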
### 5.10 Graph-based reasoning over heterogeneous external knowledge for
commonsense question answering Lv et al. (2020)
A hybrid approach in which the authors do not inject knowledge into a
language model (namely XLNet Yang et al. (2019)); rather, they utilize the
language model as a way to unify graph knowledge and contextual information,
combining XLNet embeddings as nodes in a Graph Convolutional Network (GCN) to
answer questions.
They generate relevant subgraphs of ConceptNet and Wikipedia (from
ConceptNet, the relations that include entities in a question/answer
exercise; from Wikipedia, the top 10 most relevant sentences retrieved with
ElasticSearch). They then perform a topological sorting on the combined
graphs and pass them as input to XLNet. XLNet generates contextual
representations that are then used as representations for nodes in the GCN.
Graph attention is then used to generate a graph-level representation, which
is combined with XLNet's input ([CLS] token) representation to determine
whether an answer is valid for a question. In this model XLNet is not
fine-tuned, which could have been done on the dataset to give better
contextual representations; additionally, the different levels of
representation present in XLNet are not leveraged. Some highlights of this
work are the following:
* •
Combination of GCN, Generative Language Model, and Search systems to answer
questions
* •
Use XLNet as contextual embedding for GCN nodes
* •
Perform QA reasoning with the GCN output
### 5.11 Commonsense knowledge base completion with structural and semantic
context Malaviya et al. (2020)
In another hybrid approach, the authors fine-tune a BERT model on a list of
the unique phrases that are used to represent nodes in a KG. They then take
the embeddings from BERT and from a sub-graph encoded by a GCN and run them
through an encoder/decoder structure to determine the validity of an
assertion. (It is worth noting that the two hybrid projects possibly
benefited from the ability of these language models to encode assertions, as
shown by Davison, Feldman, and Rush (2019) and Petroni et al. (2019).)
This input is then concatenated with node representations for a subgraph (in
this case a combination of ConceptNet and ATOMIC). The concatenation is
treated as an encoded representation, and combinations of these are run
through a convolutional decoder that additionally takes an embedding of a
relation type.
The result of the convolutional decoder is run through a bilinear model and a
sigmoid function to determine the validity of the assertion. It seems
interesting that the authors only run the convolution through one side: the
convolution of $(e_{i},e_{rel})$ rather than the convolutions of both
$(e_{i},e_{rel})$ and $(e_{rel},e_{j})$ followed by a concatenation (where
$e_{i}$ and $e_{j}$ are the entity embeddings for entities $i$ and $j$,
respectively, and $e_{rel}$ is the embedding for a specific relationship).
They rely on the bilinear model to join the two representations.
Some highlights of this work are the following:
* •
Use a GCN and a LM to generate contextualized assertions representations
* •
Use BERT to generate contextual embeddings for nodes
* •
Use an encoder-decoder structure to learn triples
## 6 Future Directions
### 6.1 Input Injections
Most input injections format KG information into whatever form a transformer
model can ingest. Although KALM has explored incorporating a signal into the
input representations, it would be interesting to add additional information,
such as the lexical constraints mentioned in LIBERT, to the word embeddings
trained with transformer-based models like BERT. A possible approach could be
to build a post-specialization system that generates retrofitted Faruqui et
al. (2014) representations that can then be fed into language models.
### 6.2 Architecture Injections
Adapters seem to be a promising field of research in language models overall.
The idea that one can fine-tune a small number of parameters may simplify the
injection of knowledge, and KnowBERT has explored some of these benefits. It
would be interesting to apply a similar approach to generative models and see
the results.
Another possible avenue of research would be to incorporate neural memory
models/modules, such as the ones by Munkhdalai et al. (2019), into
adapter-based injections. The reasoning is that the model could simply look
up relevant information encoded in a memory architecture and fuse it into a
contextual representation.
### 6.3 Combined Approaches
There are a variety of combined approaches, but none of them tackle all three
areas (input, architecture, and output) at the same time. It seems promising
to test a signaling method such as KALM's together with an adapter-based
method similar to KnowBERT's, the idea being that the input signal could help
the entity embeddings contextualize better within the injected layers.
Additionally, it would be interesting to see how the aforementioned
combination would look with a system similar to LIBERT, such that entity
embeddings could be fused with semantic information.
## 7 Conclusion
Infusing structured information from Knowledge Graphs into pre-trained
language models has had some success. Overall, the works reviewed here give
evidence that the models benefit from the incorporation of the structured
information. By analyzing the existing works, we give some research avenues
that may help to develop more tightly coupled language/KG models.
## References
* Balažević, Allen, and Hospedales (2019) Balažević, Ivana, Carl Allen, and Timothy M Hospedales. 2019. Tucker: Tensor factorization for knowledge graph completion. _arXiv preprint arXiv:1901.09590_.
* Bodenreider (2004) Bodenreider, Olivier. 2004. The unified medical language system (umls): integrating biomedical terminology. _Nucleic acids research_ , 32(suppl_1):D267–D270.
* Bollacker et al. (2008) Bollacker, Kurt, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In _Proceedings of the 2008 ACM SIGMOD international conference on Management of data_ , pages 1247–1250.
* Bosselut and Choi (2019) Bosselut, Antoine and Yejin Choi. 2019. Dynamic knowledge graph construction for zero-shot commonsense question answering. _arXiv preprint arXiv:1911.03876_.
* Bosselut et al. (2019) Bosselut, Antoine, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. Comet: Commonsense transformers for automatic knowledge graph construction. _arXiv preprint arXiv:1906.05317_.
* Chen and Manning (2014) Chen, Danqi and Christopher D Manning. 2014. A fast and accurate dependency parser using neural networks. In _Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)_ , pages 740–750.
* Da and Kusai (2019) Da, Jeff and Jungo Kusai. 2019. Cracking the contextual commonsense code: Understanding commonsense reasoning aptitude of deep contextual representations. _arXiv preprint arXiv:1910.01157_.
* Dai et al. (2019) Dai, Zihang, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. 2019. Transformer-xl: Attentive language models beyond a fixed-length context. _arXiv preprint arXiv:1901.02860_.
* Davison, Feldman, and Rush (2019) Davison, Joe, Joshua Feldman, and Alexander M Rush. 2019. Commonsense knowledge mining from pretrained models. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 1173–1178.
* Devlin et al. (2018) Devlin, Jacob, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_.
* Elsahar et al. (2019) Elsahar, Hady, Pavlos Vougiouklis, Arslen Remaci, Christophe Gravier, Jonathon Hare, Elena Simperl, and Frederique Laforest. 2019. T-rex: A large scale alignment of natural language with knowledge base triples.
* Faruqui et al. (2014) Faruqui, Manaal, Jesse Dodge, Sujay K Jauhar, Chris Dyer, Eduard Hovy, and Noah A Smith. 2014. Retrofitting word vectors to semantic lexicons. _arXiv preprint arXiv:1411.4166_.
* Gabrilovich, Ringgaard, and Subramanya (2013) Gabrilovich, Evgeniy, Michael Ringgaard, and Amarnag Subramanya. 2013. Facc1: Freebase annotation of clueweb corpora, version 1 (release date 2013-06-26, format version 1, correction level 0). _Note: http://lemurproject. org/clueweb09/FACC1/Cited by_ , 5:140.
* He et al. (2019) He, Bin, Di Zhou, Jinghui Xiao, Qun Liu, Nicholas Jing Yuan, Tong Xu, et al. 2019. Integrating graph contextualized knowledge into pre-trained language models. _arXiv preprint arXiv:1912.00147_.
* Hewitt and Manning (2019) Hewitt, John and Christopher D Manning. 2019. A structural probe for finding syntax in word representations. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4129–4138.
* Hogan et al. (2020) Hogan, Aidan, Eva Blomqvist, Michael Cochez, Claudia d’Amato, Gerard de Melo, Claudio Gutierrez, José Emilio Labra Gayo, Sabrina Kirrane, Sebastian Neumaier, Axel Polleres, et al. 2020. Knowledge graphs. _arXiv preprint arXiv:2003.02320_.
* Houlsby et al. (2019) Houlsby, Neil, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for nlp. _arXiv preprint arXiv:1902.00751_.
* Kirkpatrick et al. (2017) Kirkpatrick, James, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. 2017. Overcoming catastrophic forgetting in neural networks. _Proceedings of the national academy of sciences_ , 114(13):3521–3526.
* Lai et al. (2017) Lai, Guokun, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. Race: Large-scale reading comprehension dataset from examinations. _arXiv preprint arXiv:1704.04683_.
* Lauscher et al. (2020) Lauscher, Anne, Olga Majewska, Leonardo FR Ribeiro, Iryna Gurevych, Nikolai Rozanov, and Goran Glavaš. 2020. Common sense or world knowledge? investigating adapter-based knowledge injection into pretrained transformers. _arXiv preprint arXiv:2005.11787_.
* Lauscher et al. (2019) Lauscher, Anne, Ivan Vulić, Edoardo Maria Ponti, Anna Korhonen, and Goran Glavaš. 2019. Informing unsupervised pretraining with external linguistic knowledge. _arXiv preprint arXiv:1909.02339_.
* Liu et al. (2020) Liu, Weijie, Peng Zhou, Zhe Zhao, Zhiruo Wang, Qi Ju, Haotang Deng, and Ping Wang. 2020. K-bert: Enabling language representation with knowledge graph. In _AAAI_ , pages 2901–2908.
* Lv et al. (2020) Lv, Shangwen, Daya Guo, Jingjing Xu, Duyu Tang, Nan Duan, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, and Songlin Hu. 2020. Graph-based reasoning over heterogeneous external knowledge for commonsense question answering. In _AAAI_ , pages 8449–8456.
* Malaviya et al. (2020) Malaviya, Chaitanya, Chandra Bhagavatula, Antoine Bosselut, and Yejin Choi. 2020. Commonsense knowledge base completion with structural and semantic context. In _AAAI_ , pages 2925–2933.
* Màrquez et al. (2008) Màrquez, Lluís, Xavier Carreras, Kenneth C Litkowski, and Suzanne Stevenson. 2008. Semantic role labeling: an introduction to the special issue.
* Miller (1998) Miller, George A. 1998. _WordNet: An electronic lexical database_. MIT press.
* Munkhdalai et al. (2019) Munkhdalai, Tsendsuren, Alessandro Sordoni, Tong Wang, and Adam Trischler. 2019. Metalearned neural memory. In _Advances in Neural Information Processing Systems_ , pages 13331–13342.
* Ostermann, Roth, and Pinkal (2019) Ostermann, Simon, Michael Roth, and Manfred Pinkal. 2019. MCScript2.0: A machine comprehension corpus focused on script events and participants. _arXiv preprint arXiv:1905.09531_.
* Peters et al. (2019) Peters, Matthew E, Mark Neumann, Robert L Logan IV, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A Smith. 2019. Knowledge enhanced contextual word representations. _arXiv preprint arXiv:1909.04164_.
* Petroni et al. (2019) Petroni, Fabio, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H Miller, and Sebastian Riedel. 2019. Language models as knowledge bases? _arXiv preprint arXiv:1909.01066_.
* Radford et al. (2018) Radford, Alec, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.
* Rosset et al. (2020) Rosset, Corby, Chenyan Xiong, Minh Phan, Xia Song, Paul Bennett, and Saurabh Tiwary. 2020. Knowledge-aware language model pretraining. _arXiv preprint arXiv:2007.00655_.
* Sap et al. (2019) Sap, Maarten, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A Smith, and Yejin Choi. 2019. Atomic: An atlas of machine commonsense for if-then reasoning. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 33, pages 3027–3035.
* Shen et al. (2020) Shen, Tao, Yi Mao, Pengcheng He, Guodong Long, Adam Trischler, and Weizhu Chen. 2020\. Exploiting structured knowledge in text via graph-guided representation learning. _arXiv preprint arXiv:2004.14224_.
* Shibata et al. (1999) Shibata, Yusuxke, Takuya Kida, Shuichi Fukamachi, Masayuki Takeda, Ayumi Shinohara, Takeshi Shinohara, and Setsuo Arikawa. 1999. Byte pair encoding: A text compression scheme that accelerates pattern matching. Technical report, Technical Report DOI-TR-161, Department of Informatics, Kyushu University.
* Singh et al. (2002) Singh, Push, Thomas Lin, Erik T Mueller, Grace Lim, Travell Perkins, and Wan Li Zhu. 2002. Open mind common sense: Knowledge acquisition from the general public. In _OTM Confederated International Conferences "On the Move to Meaningful Internet Systems"_ , pages 1223–1237, Springer.
* Speer and Havasi (2012) Speer, Robert and Catherine Havasi. 2012. Representing general relational knowledge in conceptnet 5. In _LREC_ , pages 3679–3686.
* Speer, Chin, and Havasi (2017) Speer, Robyn, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In _Thirty-First AAAI Conference on Artificial Intelligence_.
* Sun et al. (2020) Sun, Yu, Shuohuan Wang, Yu-Kun Li, Shikun Feng, Hao Tian, Hua Wu, and Haifeng Wang. 2020. Ernie 2.0: A continual pre-training framework for language understanding. In _AAAI_ , pages 8968–8975.
* Vaswani et al. (2017) Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In _Advances in neural information processing systems_ , pages 5998–6008.
* Wang et al. (2020) Wang, Ruize, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Cuihong Cao, Daxin Jiang, Ming Zhou, et al. 2020. K-adapter: Infusing knowledge into pre-trained models with adapters. _arXiv preprint arXiv:2002.01808_.
* Yang et al. (2019) Yang, Zhilin, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In _Advances in neural information processing systems_ , pages 5753–5763.
* Yao, Mao, and Luo (2019) Yao, Liang, Chengsheng Mao, and Yuan Luo. 2019. Kg-bert: Bert for knowledge graph completion. _arXiv preprint arXiv:1909.03193_.
* Ye et al. (2019) Ye, Zhi-Xiu, Qian Chen, Wen Wang, and Zhen-Hua Ling. 2019. Align, mask and select: A simple method for incorporating commonsense knowledge into language representation models. _arXiv preprint arXiv:1908.06725_.
* Zhang et al. (2019) Zhang, Zhuosheng, Yuwei Wu, Hai Zhao, Zuchao Li, Shuailiang Zhang, Xi Zhou, and Xiang Zhou. 2019. Semantics-aware bert for language understanding. _arXiv preprint arXiv:1909.02209_.
Knowledge Injection Approach | Underlying Language Model | Type of Injection | Knowledge Sources | Training Objective
---|---|---|---|---
Align, Mask, Select | BERT | Input | ConceptNet | Binary Cross Entropy
COMET | GPT | Input | Atomic, ConceptNet | Language Modeling
KnowBERT | BERT | Architecture | Wikipedia, WordNet | Masked Language Modeling
Common sense or world knowledge?[…] | BERT | Architecture | ConceptNet | Masked Language Modeling
SemBERT | BERT | Output | Semantic Role Labeling of Pre-Training data | Masked Language Modeling
KALM | GPT-2 | Combination (Input + Output) | FACC1 and FAKBA entity annotations Gabrilovich, Ringgaard, and Subramanya (2013) | Language Modeling + Max Margin
Exploiting Structured[…] | BERT | Combination (Input + Output) | ConceptNet | Masked Language Modeling, Max Margin
LiBERT | BERT | Combination (Input + Output) | WordNet, Roget’s Thesaurus | Masked Language Modeling + Max Margin
Graph-based reasoning over heterogeneous external knowledge | XLNet + Graph Convolutional Network | Hybrid (Language Model + Graph Reasoning) | Wikipedia, ConceptNet | Cross Entropy
Commonsense knowledge base completion with structural and semantic context | BERT + Graph Convolutional Network | Hybrid (Language Model + GCN Embeddings) | Atomic, ConceptNet | Binary Cross Entropy
ERNIE 2.0 | Transformer-Based Model | Combination (Input + Output) | Wikipedia, BookCorpus, Reddit, Discovery Data (various types of relationships extracted from these datasets) | Various tasks, among them Knowledge Masking, Token-Document Relation Prediction, Sentence Distance Task, IR Relevance Task
BERT-MK | BERT | Combination (Architecture + Output) | Unified Medical Language System (UMLS) Bodenreider (2004) | Masked Language Modeling, Max Margin
K-BERT | BERT | Combination (Input + Architecture) | TBD | Same as BERT
KG-BERT | BERT | Combination (Input + Output) | Freebase, WordNet, UMLS | Binary + Categorical Cross Entropy
K-Adapter | RoBERTa | Combination (Architecture + Output) | Wikipedia, Dependency Parsing from BookCorpus | Binary Cross Entropy
Cracking the Commonsense Code | BERT | Combination (Input + Output) | N/A: Fine-Tuning on RACE dataset subset | Binary Cross Entropy
Table 4: Knowledge Injection Models Overview
Knowledge Injection Approach | Benchmark Name | Base Model | Base Model Benchmark Performance | Knowledge Injected Model Performance | Percent Difference
---|---|---|---|---|---
BERT-CSbase (Align, Mask, Select) | GLUE (Average) | BERT-base | 78.975 | 79.612 | 0.81%
BERT-CSlarge (Align, Mask, Select) | GLUE (Average) | BERT-large | 81.5 | 81.45 | -0.06%
LIBERT (2M) | GLUE (Average) | BERT baseline trained with 2M examples | 72.775 | 74.275 | 2.06%
SemBERTbase | GLUE (Average) | BERT-base | 78.975 | 80.35 | 1.74%
SemBERTlarge | GLUE (Average) | BERT-large | 81.5 | 84.262 | 3.39%
K-Adapter F+L | CosmosQA, TACRED | RoBERTa + Multitask training | 81.19, 71.62 | 81.83, 71.93 | 1.54%, 0.95%
Ernie 2.0 (large) | GLUE (Average) | BERT-large | 81.5 | 84.65 | 3.87%
BERT-MK | Entity Typing, Rel. Classification (using UMLS) | BERT-base | 96.55, 77.75 | 97.26, 83.02 | 0.74%, 6.78%
K-BERT | XNLI | BERT-base | 75.4 | 76.1 | 0.93%
Cracking the contextual commonsense code (BERT-large + KB + RACE) | MCScript 2.0 | BERT-large | 82.3 | 85.5 | 3.89%
KnowBERT | TACRED | BERT-base | 66 | 71.5 | 8%
Common Sense or World Knowledge? (OM-ADAPT 100K) | GLUE (Average) | BERT-base | 78.975 | 79.225 | 0.40%
Common Sense or World Knowledge? (CN-ADAPT 50K) | GLUE (Average) | BERT-base | 78.975 | 79.225 | 0.32%
Table 5: Knowledge Injection Models Performance Comparison
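The Percent Difference column in Table 5 appears to be the relative change of the knowledge-injected score over the base-model score. A minimal sketch of that presumed formula (the helper name `percent_diff` is ours; the numbers are taken from the BERT-CSbase row):

```python
def percent_diff(base, injected):
    # relative improvement of the injected model over the base model, in percent
    return 100 * (injected - base) / base

# BERT-CSbase row of Table 5: base 78.975, injected 79.612
print(round(percent_diff(78.975, 79.612), 2))  # 0.81
```

The same formula reproduces most other rows of the table to two decimal places.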
# Spectral $\boldsymbol{\zeta}$-Functions and $\boldsymbol{\zeta}$-Regularized
Functional Determinants for Regular Sturm–Liouville Operators
Guglielmo Fucci Department of Mathematics, East Carolina University, 331
Austin Building, East Fifth Street, Greenville, NC 27858-4353, USA
<EMAIL_ADDRESS>http://myweb.ecu.edu/fuccig/ , Fritz Gesztesy Department of
Mathematics, Baylor University, Sid Richardson Bldg., 1410 S. 4th Street,
Waco, TX 76706, USA<EMAIL_ADDRESS>http://www.baylor.edu/math/index.php?id=935340 , Klaus Kirsten Department of
Mathematics, Baylor University, Sid Richardson Bldg., 1410 S. 4th Street,
Waco, TX 76706, USA, and Mathematical Reviews, American Mathematical Society,
416 4th Street, Ann Arbor, MI 48103, USA<EMAIL_ADDRESS>http://www.baylor.edu/math/index.php?id=54012 and Jonathan Stanfill
Department of Mathematics, Baylor University, Sid Richardson Bldg., 1410 S.
4th Street, Waco, TX 76706, USA<EMAIL_ADDRESS>http://sites.baylor.edu/jonathan-stanfill/
###### Abstract.
The principal aim in this paper is to employ a recently developed unified
approach to the computation of traces of resolvents and $\zeta$-functions to
efficiently compute values of spectral $\zeta$-functions at positive integers
associated to regular (three-coefficient) self-adjoint Sturm–Liouville
differential expressions $\tau$. Depending on the underlying boundary
conditions, we express the $\zeta$-function values in terms of a fundamental
system of solutions of $\tau y=zy$ and their expansions about the spectral
point $z=0$. Furthermore, we give the full analytic continuation of the
$\zeta$-function through a Liouville transformation and provide an explicit
expression for the $\zeta$-regularized functional determinant in terms of a
particular set of this fundamental system of solutions.
An array of examples illustrating the applicability of these methods is
provided, including regular Schrödinger operators with zero, piecewise
constant, and a linear potential on a compact interval.
###### Key words and phrases:
$\zeta$-function, Sturm–Liouville operators, Traces, (modified) Fredholm
determinants, zeta regularized functional determinants.
###### 2020 Mathematics Subject Classification:
Primary: 47A10, 47B10, 47G10. Secondary: 34B27, 34L40.
###### Contents
1. 1 Introduction
2. 2 Background on Self-Adjoint Regular Sturm–Liouville Operators
3. 3 Expansion in z for Fundamental Solutions, Asymptotic Expansion, and the Zeta Regularized Functional Determinant
1. 3.1 Expansion in z for Fundamental Solutions
2. 3.2 Asymptotic Expansion of the Characteristic Function
3. 3.3 Analytic Continuation of the Spectral Zeta Function and the Zeta Regularized Functional Determinant
4. 4 Computing Spectral Zeta Function Values and Traces for Regular Sturm–Liouville Operators
1. 4.1 Computing Spectral Zeta Function Values and Traces for Separated Boundary Conditions
2. 4.2 Computing Spectral Zeta Function Values and Traces for Coupled Boundary Conditions
5. 5 Examples
1. 5.1 The Example q=0
2. 5.2 Examples of Nonnegative (Piecewise) Constant Potentials
3. 5.3 Example of a Negative Constant Potential
4. 5.4 Example of a Linear Potential
## 1\. Introduction
The principal motivation for this paper is to illustrate how a recently
developed unified approach to the computation of Fredholm determinants, traces
of resolvents, and $\zeta$-functions in [33] can be used to efficiently
compute certain values of spectral $\zeta$-functions associated to regular
Sturm–Liouville operators as well as give the full analytic continuation of
the $\zeta$-function through a Liouville transformation and finally provide an
explicit expression for the $\zeta$-regularized functional determinant.
In Section 2 we begin by outlining the background for regular self-adjoint
Sturm–Liouville operators on bounded intervals, that is, operators in
$L^{2}((a,b);rdx)$ with separated and coupled boundary conditions and the
associated spectral $\zeta$-functions. Under appropriate hypotheses on the
Sturm–Liouville operator associated with three-coefficient differential
expressions of the type $\tau=r^{-1}[-(d/dx)p(d/dx)+q]$, certain values of the
spectral $\zeta$-function can be found via complex contour integration
techniques to be equal to residues of explicit functions involving a canonical
system of fundamental solutions $\phi(z,\,\cdot\,,a)$ and
$\theta(z,\,\cdot\,,a)$ of $\tau y=zy$ for separated or coupled boundary
conditions. Moreover, the zeros with respect to the parameter $z$ of $\phi$,
$\theta$, and some of their (boundary condition dependent) linear combinations
are precisely the eigenvalues corresponding to the underlying operator,
including multiplicity.
In Section 3 we provide a series expansion for $\phi(z,\,\cdot\,,a)$ and
$\theta(z,\,\cdot\,,a)$ about $z=0$ using the Volterra integral equation
associated with the general three-coefficient regular self-adjoint
Sturm–Liouville operator. This method leads to an expansion in powers of $z$
of the fundamental solutions and their $z$-derivative involving their values
at $z=0$ and the appropriate Volterra Green’s function. We also investigate
the $|z|\to\infty$ asymptotic expansion of the characteristic function
appearing in the complex integral representation of the spectral
$\zeta$-function given in Section 2. This asymptotic expansion is then
exploited in order to construct the analytic continuation of the spectral
$\zeta$-function and to obtain an explicit expression for the zeta regularized
functional determinant.
Section 4 contains the main theorems that allow for the calculation of the
values of spectral $\zeta$-functions of general regular Sturm–Liouville
operators on bounded intervals as ratios of series expansions of (boundary
condition dependent) solutions of $\tau y=zy$ about $z=0$. In particular, we
consider separated boundary conditions when zero is not an eigenvalue, or,
when it is (necessarily) a simple eigenvalue, and coupled boundary conditions
when either zero is not an eigenvalue, or, an eigenvalue of multiplicity
(necessarily) at most two. (For more details in this context see [33] as well
as [34, Ch. 3], [76, Sect. 8.4], [77, Sect. 13.2], and [78, Ch. 4].)
We continue by providing some examples in Section 5 illustrating the main
theorems and corollaries of Section 4 and the zeta regularized functional
determinant given in Section 3. In particular, we present the case of
Schrödinger operators with zero potential imposing Dirichlet, Neumann,
periodic, antiperiodic, and Krein–von Neumann boundary conditions. We then
consider positive (piecewise) constant and negative constant potentials for
Dirichlet boundary conditions, and finally the case of a linear potential.
Here we summarize some of the basic notation used in this manuscript. If $A$
is a linear operator mapping (a subspace of) a Hilbert space into another,
then $\operatorname{dom}(A)$ and $\ker(A)$ denote the domain and the kernel
(i.e., null space) of $A$. The spectrum, point spectrum, and resolvent set of
a closed linear operator in a separable complex Hilbert space,
${\mathcal{H}}$, will be denoted by $\sigma(\,\cdot\,),\
\sigma_{p}(\,\cdot\,),$ and $\rho(\,\cdot\,)$ respectively. If $S$ is self-
adjoint in ${\mathcal{H}}$, the multiplicity of an eigenvalue
$z_{0}\in\sigma_{p}(S)$ is denoted $m(z_{0};S)$ (the geometric and algebraic
multiplicities of $S$ coincide in this case). The proper setting for our
investigations is the Hilbert space $L^{2}((a,b);rdx)$, which we will
occasionally abbreviate as $L^{2}_{r}((a,b))$. The spectral $\zeta$-function
of a self-adjoint linear operator $S$ is denoted by $\zeta(s;S)$. In addition,
${\operatorname{tr}}_{{\mathcal{H}}}(T)$ denotes the trace of a trace class
operator $T\in{\mathcal{B}}_{1}({\mathcal{H}})$ and
$\det_{\mathcal{H}}(I_{{\mathcal{H}}}-T)$ the Fredholm determinant of
$I_{{\mathcal{H}}}-T$.
For consistency of notation, throughout this manuscript we follow the
convention that derivatives annotated with superscripts are taken with
respect to $x$, while derivatives with respect to $\xi$ will
be abbreviated by $\overset{\textbf{\large.}}{\ }=d/d\xi$. We also employ the
notation ${\mathbb{N}}_{0}={\mathbb{N}}\cup\\{0\\}$.
## 2\. Background on Self-Adjoint Regular Sturm–Liouville Operators
In the first part of this section we briefly recall basic facts on regular
Sturm–Liouville operators and their self-adjoint boundary conditions. This
material is standard and well-known, hence we just refer to some of the
standard monographs on this subject, such as, [9, Sect. 6.3], [34, Ch. 3],
[41, Sect. II.5], [61, Ch. V], [76, Sect. 8.4], [77, Sect. 13.2], [78, Ch. 4].
In the second part we discuss Fredholm determinants, traces of resolvents, and
spectral $\zeta$-functions associated with these regular Sturm–Liouville
problems. For background as well as relevant material in this context we refer
to [3], [7], [12], [13], [17], [19], [20], [25], [26], [27], [28], [29], [31],
[33], [38], [39], [42], [49], [50], [51], [53], [57], [58], [59], [62], [64],
[71], [72, Sects. 5.4, 5.5, 6.3], [75].
Throughout our discussion of regular Sturm–Liouville operators we make the
following assumptions:
###### Hypothesis 2.1.
Let $(a,b)\subset{\mathbb{R}}$ be a finite interval and suppose that $p,q,r$
are $($Lebesgue $)$ measurable functions on $(a,b)$ such that the following
items $(i)$–$(iii)$ hold:
$(i)$ $r>0$ a.e. on $(a,b)$, $r\in L^{1}((a,b);dx)$.
$(ii)$ $p>0$ a.e. on $(a,b)$, $1/p\in L^{1}((a,b);dx)$.
$(iii)$ $q$ is real-valued a.e. on $(a,b)$, $q\in L^{1}((a,b);dx)$.
Given Hypothesis 2.1, we now study Sturm–Liouville operators associated with
the general, three-coefficient differential expression $\tau$ of the type,
$\displaystyle\tau=\frac{1}{r(x)}\left[-\frac{d}{dx}p(x)\frac{d}{dx}+q(x)\right]\,\text{
for a.e.~{}$x\in(a,b)\subseteq{\mathbb{R}}$.}$ (2.1)
We start with the notion of minimal and maximal
${L^{2}((a,b);rdx)}$-realizations associated with the regular differential
expression $\tau$ on the finite interval $(a,b)\subset{\mathbb{R}}$. Here, and
elsewhere throughout this manuscript, the inner product in
${L^{2}((a,b);rdx)}$ is defined by
$\displaystyle(f,g)_{{L^{2}((a,b);rdx)}}=\int_{a}^{b}r(x)dx\,\overline{f(x)}g(x),\quad
f,g\in{L^{2}((a,b);rdx)}.$ (2.2)
Assuming Hypothesis 2.1, the differential expression $\tau$ of the form (2.1)
on the finite interval $(a,b)\subset{\mathbb{R}}$ is called regular on
$[a,b]$. The corresponding maximal operator $T_{max}$ in ${L^{2}((a,b);rdx)}$
associated with $\tau$ is defined by
$\displaystyle T_{max}f=\tau f,$ $\displaystyle
f\in\operatorname{dom}(T_{max})=\big{\\{}g\in{L^{2}((a,b);rdx)}\,\big{|}\,g,g^{[1]}\in{AC([a,b])};$
(2.3) $\displaystyle\hskip 170.71652pt\tau g\in{L^{2}((a,b);rdx)}\big{\\}},$
and the corresponding minimal operator $T_{min}$ in ${L^{2}((a,b);rdx)}$
associated with $\tau$ is given by
$\displaystyle T_{min}f=\tau f,$ $\displaystyle
f\in\operatorname{dom}(T_{min})=\big{\\{}g\in{L^{2}((a,b);rdx)}\,\big{|}\,g,g^{[1]}\in{AC([a,b])};$
(2.4) $\displaystyle\hskip 85.35826ptg(a)=g^{[1]}(a)=g(b)=g^{[1]}(b)=0;\;\tau
g\in{L^{2}((a,b);rdx)}\big{\\}}.$
Here (with $\prime:=d/dx$)
$\displaystyle y^{[1]}(x)=p(x)y^{\prime}(x),$ (2.5)
denotes the first quasi-derivative of a function $y$ on $(a,b)$, assuming that
$y,py^{\prime}\in AC_{loc}((a,b))$.
Assuming Hypothesis 2.1 so that $\tau$ is regular on $[a,b]$, the following is
well-known (see, e.g., [9, Sect. 6.3], [34, Sect. 3.2], [41, Sect. II.5], [61,
Ch. V], [76, Sect. 8.4], [77, Sect. 13.2], [78, Ch. 4]): $T_{min}$ is a
densely defined, closed operator in ${L^{2}((a,b);rdx)}$, moreover, $T_{max}$
is densely defined and closed in ${L^{2}((a,b);rdx)}$, and
$\displaystyle T_{min}^{*}=T_{max},\quad T_{min}=T_{max}^{*}.$ (2.6)
Moreover, $T_{min}\subset T_{max}=T_{min}^{*}$, and hence $T_{min}$ is
symmetric, while $T_{max}$ is not.
The next theorem describes all self-adjoint extensions of $T_{min}$ (cf.,
e.g., [77, Sect. 13.2], [78, Ch. 4]).
###### Theorem 2.2.
Assume Hypothesis 2.1 so that $\tau$ is regular on $[a,b]$. Then the following
items $(i)$–$(iii)$ hold:
$(i)$ All self-adjoint extensions $T_{\alpha,\beta}$ of $T_{min}$ with
separated boundary conditions are of the form
$\displaystyle T_{\alpha,\beta}f=\tau f,\quad\alpha,\beta\in[0,\pi),$
$\displaystyle
f\in\operatorname{dom}(T_{\alpha,\beta})=\big{\\{}g\in\operatorname{dom}(T_{max})\,\big{|}\,g(a)\cos(\alpha)+g^{[1]}(a)\sin(\alpha)=0;$
(2.7) $\displaystyle\hskip
159.3356ptg(b)\cos(\beta)-g^{[1]}(b)\sin(\beta)=0\big{\\}}.$
Special cases: $\alpha=0$ $($i.e., $g(a)=0$$)$ is called the Dirichlet
boundary condition at $a$; $\alpha=\frac{\pi}{2}$, $($i.e., $g^{[1]}(a)=0$$)$
is called the Neumann boundary condition at $a$ $($analogous facts hold at the
endpoint $b$$)$.
$(ii)$ All self-adjoint extensions $T_{\varphi,R}$ of $T_{min}$ with coupled
boundary conditions are of the type
$\displaystyle\begin{split}&T_{\varphi,R}f=\tau f,\\\
&f\in\operatorname{dom}(T_{\varphi,R})=\bigg{\\{}g\in\operatorname{dom}(T_{max})\,\bigg{|}\begin{pmatrix}g(b)\\\
g^{[1]}(b)\end{pmatrix}=e^{i\varphi}R\begin{pmatrix}g(a)\\\
g^{[1]}(a)\end{pmatrix}\bigg{\\}},\end{split}$ (2.8)
where $\varphi\in[0,2\pi)$, and $R$ is a real $2\times 2$ matrix with
$\det(R)=1$ $($i.e., $R\in SL(2,{\mathbb{R}})$$)$. Special cases: $\varphi=0$,
$R=I_{2}$ $($i.e., $g(b)=g(a)$, $g^{[1]}(b)=g^{[1]}(a)$$)$ are called periodic
boundary conditions; similarly, $\varphi=\pi$, $R=I_{2}$ $($i.e.,
$g(b)=-g(a)$, $g^{[1]}(b)=-g^{[1]}(a)$$)$ are called antiperiodic boundary
conditions.
$(iii)$ Every self-adjoint extension of $T_{min}$ is either of type $(i)$
$($i.e., separated $)$ or of type $(ii)$ $($i.e., coupled $)$.
Next we state some of the most pertinent concepts and results summarized from
[33] (in particular, Section 3) and will then illustrate how this permits one
to effectively calculate certain values for the spectral $\zeta$-functions of
the regular Sturm–Liouville operators considered.
For this purpose we introduce the fundamental system of solutions
$\theta(z,x,a)$, $\phi(z,x,a)$ of $\tau y=zy$ defined by
$\displaystyle\theta(z,a,a)=\phi^{[1]}(z,a,a)=1,\quad\theta^{[1]}(z,a,a)=\phi(z,a,a)=0,$
(2.9)
such that
$\displaystyle W(\theta(z,\,\cdot\,,a),\phi(z,\,\cdot\,,a))=1,$ (2.10)
noting that for fixed $x,$ each is entire with respect to $z$. Here the
Wronskian of $f$ and $g$, for $f,g\in{AC_{loc}((a,b))}$, is defined by
$\displaystyle W(f,g)(x)=f(x)g^{[1]}(x)-f^{[1]}(x)g(x).$ (2.11)
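In the simplest case $p=r=1$, $q=0$, $a=0$ (so $\tau=-d^{2}/dx^{2}$), the fundamental system is $\theta(z,x,0)=\cos(\sqrt{z}\,x)$ and $\phi(z,x,0)=\sin(\sqrt{z}\,x)/\sqrt{z}$, and the initial conditions (2.9) and the normalization (2.10) can be checked numerically. A minimal sketch (the sample point $z,x$ is an arbitrary choice of ours):

```python
import cmath

# Fundamental system for p = r = 1, q = 0, a = 0 (tau = -d^2/dx^2):
# theta(z,x,0) = cos(sqrt(z) x), phi(z,x,0) = sin(sqrt(z) x)/sqrt(z),
# with quasi-derivatives (2.5): theta^{[1]} = -sqrt(z) sin(sqrt(z) x),
# phi^{[1]} = cos(sqrt(z) x).
def theta(z, x):
    return cmath.cos(cmath.sqrt(z) * x)

def phi(z, x):
    s = cmath.sqrt(z)
    return cmath.sin(s * x) / s

def theta1(z, x):
    s = cmath.sqrt(z)
    return -s * cmath.sin(s * x)

def phi1(z, x):
    return cmath.cos(cmath.sqrt(z) * x)

# Wronskian (2.11): W(theta, phi)(x) = theta * phi^{[1]} - theta^{[1]} * phi
z, x = 2.3 + 1.1j, 0.7
W = theta(z, x) * phi1(z, x) - theta1(z, x) * phi(z, x)
print(abs(W - 1))  # ~ 0, confirming the normalization (2.10)
```

Since $\cos^{2}+\sin^{2}=1$, the Wronskian equals $1$ identically in $z$ and $x$.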
Furthermore, we introduce the boundary values for $g,g^{[1]}\in{AC([a,b])}$,
see [60, Ch. I], [78, Sect. 3.2],
$\displaystyle\begin{split}&U_{\alpha,\beta,1}(g)=g(a)\cos(\alpha)+g^{[1]}(a)\sin(\alpha),\\\
&U_{\alpha,\beta,2}(g)=g(b)\cos(\beta)-g^{[1]}(b)\sin(\beta),\end{split}$
(2.12)
in the case ($i$) of separated boundary conditions in Theorem 2.2, and
$\displaystyle\begin{split}&V_{\varphi,R,1}(g)=g(b)-e^{i\varphi}R_{11}g(a)-e^{i\varphi}R_{12}g^{[1]}(a),\\\
&V_{\varphi,R,2}(g)=g^{[1]}(b)-e^{i\varphi}R_{21}g(a)-e^{i\varphi}R_{22}g^{[1]}(a),\end{split}$
(2.13)
in the case ($ii$) of coupled boundary conditions in Theorem 2.2. Moreover, we
define the _characteristic functions_
$\displaystyle
F_{\alpha,\beta}(z)=\det\begin{pmatrix}U_{\alpha,\beta,1}(\theta(z,\,\cdot\,,a))&U_{\alpha,\beta,1}(\phi(z,\,\cdot\,,a))\\\
U_{\alpha,\beta,2}(\theta(z,\,\cdot\,,a))&U_{\alpha,\beta,2}(\phi(z,\,\cdot\,,a))\end{pmatrix},$
(2.14)
and
$\displaystyle
F_{\varphi,R}(z)=\det\begin{pmatrix}V_{\varphi,R,1}(\theta(z,\,\cdot\,,a))&V_{\varphi,R,1}(\phi(z,\,\cdot\,,a))\\\
V_{\varphi,R,2}(\theta(z,\,\cdot\,,a))&V_{\varphi,R,2}(\phi(z,\,\cdot\,,a))\end{pmatrix}.$
(2.15)
Notational Convention. To describe all possible self-adjoint boundary
conditions associated with self-adjoint extensions of $T_{min}$ effectively,
we will frequently employ the notation $T_{A,B}$, $F_{A,B}$,
$\lambda_{A,B,j}$, $j\in J$, etc., where $A,B$ represents $\alpha,\beta$ in
the case of separated boundary conditions and $\varphi,R$ in the context of
coupled boundary conditions.
By construction, eigenvalues of $T_{A,B}$ are determined via $F_{A,B}(z)=0$,
with multiplicity of eigenvalues of $T_{A,B}$ corresponding to multiplicity of
zeros of $F_{A,B}$, and $F_{A,B}(z)$ is entire with respect to $z$. In
particular, for $T_{\alpha,\beta}$, that is, for separated boundary
conditions, one has
$\displaystyle\begin{split}F_{\alpha,\beta}(z)&=\cos(\alpha)[-\sin(\beta)\
\phi^{[1]}(z,b,a)+\cos(\beta)\ \phi(z,b,a)]\\\
&\quad-\sin(\alpha)[-\sin(\beta)\ \theta^{[1]}(z,b,a)+\cos(\beta)\
\theta(z,b,a)],\quad\alpha,\beta\in[0,\pi),\end{split}$ (2.16)
and for $T_{\varphi,R}$, that is, for coupled boundary conditions, one has for
$\varphi\in[0,2\pi)$ and $R\in SL(2,{\mathbb{R}})$,
$\displaystyle F_{\varphi,R}(z)$
$\displaystyle=e^{i\varphi}\big{(}R_{12}\theta^{[1]}(z,b,a)-R_{22}\theta(z,b,a)+R_{21}\phi(z,b,a)-R_{11}\phi^{[1]}(z,b,a)\big{)}$
$\displaystyle\quad+e^{2i\varphi}+1.$ (2.17)
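To illustrate (2.16): for $p=r=1$, $q=0$ on $(0,\pi)$ with Dirichlet boundary conditions ($\alpha=\beta=0$), one has $F_{0,0}(z)=\phi(z,\pi,0)=\sin(\pi\sqrt{z})/\sqrt{z}$, whose zeros $z=n^{2}$, $n\in\mathbb{N}$, are precisely the Dirichlet eigenvalues. A quick numerical sketch of this check (our own, not code from the references):

```python
import cmath, math

def F_dirichlet(z, b=math.pi):
    # F_{0,0}(z) = phi(z, b, 0) = sin(sqrt(z) b)/sqrt(z), eq. (2.16) with
    # alpha = beta = 0, for p = r = 1, q = 0 on (0, b)
    s = cmath.sqrt(z)
    return cmath.sin(s * b) / s

# zeros occur exactly at the Dirichlet eigenvalues z = n^2 (here b = pi)
print([abs(F_dirichlet(n ** 2)) < 1e-9 for n in (1, 2, 3)])  # [True, True, True]
print(abs(F_dirichlet(2.5)) > 0.1)  # True: z = 2.5 is not an eigenvalue
```

The multiplicity statement is visible here as well: each zero of $\sin(\pi\sqrt{z})/\sqrt{z}$ is simple, matching the simplicity of Dirichlet eigenvalues.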
Next we will demonstrate that $F_{A,B}(\,\cdot\,)$ is an entire function of
order $1/2$ and finite type, independent of the boundary conditions chosen.
This result is used when considering convergence of the complex contour
integral representation of the spectral $\zeta$-function for large values of
the spectral parameter $z$.
For this purpose we recall the following facts (see, e.g., [10, Ch. 2], [52,
Ch. I]): Supposing that $F(\,\cdot\,)$ is entire, one introduces
$M_{F}(R)=\sup_{|z|=R}|F(z)|,\quad R\in[0,\infty).$ (2.18)
Then the order $\rho_{F}$ of $F$ is defined by
$\rho_{F}=\limsup_{R\to\infty}\text{\rm ln}(\text{\rm ln}(M_{F}(R)))/\text{\rm
ln}(R)\in[0,\infty)\cup\\{\infty\\}.$ (2.19)
In addition, if $\rho_{F}>0$, the type $\tau_{F}$ of $F$ is defined as
$\tau_{F}=\limsup_{R\to\infty}\text{\rm
ln}(M_{F}(R))/R^{\rho_{F}}\in[0,\infty)\cup\\{\infty\\},$ (2.20)
and, in obvious notation, $F$ is called of order $\rho_{F}>0$ and of finite
type $\tau_{F}$ if $\tau_{F}\in[0,\infty)$.
Thus, $F$ is of finite order $\rho_{F}\in[0,\infty)$ if and only if for every
$\varepsilon>0$, but for no $\varepsilon<0$,
$M_{F}(R)\underset{R\to\infty}{=}O\big{(}\exp\big{(}R^{\rho_{F}+\varepsilon}\big{)}\big{)},$
(2.21)
and $F$ is of positive and finite order $\rho_{F}\in(0,\infty)$ and finite
type $\tau_{F}\in[0,\infty)$ if and only if for every $\varepsilon>0$, but for
no $\varepsilon<0$,
$M_{F}(R)\underset{R\to\infty}{=}O\big{(}\exp\big{(}(\tau_{F}+\varepsilon)R^{\rho_{F}}\big{)}\big{)}.$
(2.22)
By definition, if $F_{j}$ are entire of order $\rho_{j}$, $j=1,2$, then the
order of $F_{1}F_{2}$ does not exceed the larger of $\rho_{1}$ and $\rho_{2}$.
For $F$ entire we also introduce the zero counting function
$N_{F}(R)=\\#\big{(}Z_{F}\cap\overline{D(0;R)}\big{)},\quad R\in(0,\infty),$
(2.23)
where $\\#$ denotes cardinality and $Z_{F}$ represents the set of zeros of $F$
counting multiplicity (i.e., $N_{F}(R)$ counts the number of zeros of $F$ in
the closed disk of radius $R>0$ centered at the origin).
###### Remark 2.3.
Assuming Hypothesis 2.1, then all solutions $\psi(z,\,\cdot\,)$ of the regular
Sturm–Liouville problem $(\tau y)(z,x)=zy(z,x)$, $z\in{\mathbb{C}}$,
$x\in[a,b]$, satisfying $z$-independent initial conditions
$\psi(z,x_{0})=c_{0},\quad\psi^{[1]}(z,x_{0})=c_{1},$ (2.24)
for some $x_{0}\in[a,b]$ and some $(c_{0},c_{1})\in{\mathbb{C}}^{2}$, together
with $\psi^{[1]}(z,\,\cdot\,)$, for any fixed $x\in[a,b]$, are entire
functions of $z$ of order at most $1/2$. Indeed, as shown in [5, Sect. 8.2]
(see also [56], [78, Theorem 2.5.3]), upon employing a Prüfer-type
transformation, one obtains
$\displaystyle|z||\psi(z,x)|^{2}+\big{|}\psi^{[1]}(z,x)\big{|}^{2}\leqslant
C(x_{0})\exp\bigg{(}|z|^{1/2}\int_{\min(x_{0},x)}^{\max(x_{0},x)}dt\,\big{[}|p(t)|^{-1}+|r(t)|\big{]}$
$\displaystyle\hskip
85.35826pt+|z|^{-1/2}\int_{\min(x_{0},x)}^{\max(x_{0},x)}dt\,|q(t)|\bigg{)},\quad
z\in{\mathbb{C}},\;x_{0},x\in[a,b].$ (2.25)
In particular, (2.16) and (2.17) yield that $F_{A,B}$ is an entire function of
order at most $1/2$ for any self-adjoint boundary condition represented by
$A,B$, that is,
$\rho_{F_{A,B}}\leqslant 1/2.$ (2.26)
Given Hypothesis 2.1, one infers that
$T_{A,B}\geqslant\Lambda_{A,B}I_{L^{2}_{r}((a,b))}$ for some
$\Lambda_{A,B}\in{\mathbb{R}}$, with purely discrete spectrum, and hence
$Z_{F_{A,B}}(R)\subset[\Lambda_{A,B},R]$, the elements of $Z_{F_{A,B}}(R)$
being precisely the eigenvalues of $T_{A,B}$ in the interval
$[\max(-R,\Lambda_{A,B}),R]$. Employing the theory of Volterra operators in
Hilbert spaces (and under some additional lower boundedness hypotheses on $q$;
upon closer inspection, the additional condition stated on [36, pp. 305, 306]
just ensures lower semiboundedness of $T_{A,B}$, which is independently known
to hold in our present scalar context) in [36, Chs. VI, VII],
alternatively, using oscillation theoretic methods in [6], it is shown that
the eigenvalue counting function $N_{F_{A,B}}$ associated with $T_{A,B}$
satisfies
$N_{F_{A,B}}(\lambda)\underset{\lambda\to\infty}{=}\pi^{-1}\int_{a}^{b}dx\,[r(x)/p(x)]^{1/2}\lambda^{1/2}[1+o(1)].$
(2.27)
Ignoring finitely many nonpositive eigenvalues of $T_{A,B}$, equivalently,
splitting off the factors in the infinite product representation associated
with nonpositive zeros of $F_{A,B}$, that is, replacing $F_{A,B}$ by
$\widetilde{F}_{A,B}(z)=C_{A,B}\prod_{\begin{subarray}{c}j\in{\mathbb{N}},\\\
\lambda_{A,B,j}>0\end{subarray}}[1-(z/\lambda_{A,B,j})]$ (2.28)
with
$N_{\widetilde{F}_{A,B}}(\lambda)\underset{\lambda\to\infty}{=}\pi^{-1}\int_{a}^{b}dx\,[r(x)/p(x)]^{1/2}\lambda^{1/2}[1+o(1)],$
(2.29)
implies (cf. [10, Theorem 4.1.1], [73], [74]),
$\text{\rm
ln}\big{(}\widetilde{F}_{A,B}(\lambda)\big{)}\underset{\lambda\to\infty}{=}\int_{a}^{b}dx\,[r(x)/p(x)]^{1/2}\lambda^{1/2}[1+o(1)].$
(2.30)
Thus,
$\rho_{F_{A,B}}=\rho_{\widetilde{F}_{A,B}}\geqslant 1/2,$ (2.31)
and hence by (2.26),
$\rho_{F_{A,B}}=1/2.$ (2.32)
Moreover, by (2.25), $F_{A,B}$ is of order 1/2 and finite type. Finally, we
also mention that (2.27) implies that
$\lambda_{A,B,j}\underset{j\to\infty}{=}\bigg{[}\int_{a}^{b}dx\,[r(x)/p(x)]^{1/2}\bigg{]}^{-2}\pi^{2}j^{2}[1+o(1)]$
(2.33)
(cf. also the discussion in [65, Sects. 1.11, 9.1], [78, Sect. 4.3]).
$\diamond$
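As a sanity check of (2.27) and (2.33) in the simplest setting: for the Dirichlet problem with $p=r=1$ on $(0,1)$ the eigenvalues are $\lambda_{j}=\pi^{2}j^{2}$, so the exact counting function $N(\lambda)=\lfloor\sqrt{\lambda}/\pi\rfloor$ agrees with the Weyl term $\pi^{-1}\int_{0}^{1}dx\,\lambda^{1/2}=\sqrt{\lambda}/\pi$ up to a bounded error. A short sketch (our own illustration):

```python
import math

# Dirichlet eigenvalues of -d^2/dx^2 on (0,1): lambda_j = (pi j)^2, j = 1, 2, ...
lam = 1.0e6
N_exact = sum(1 for j in range(1, 2000) if (math.pi * j) ** 2 <= lam)
N_weyl = math.sqrt(lam) / math.pi   # leading Weyl term in (2.27) with r = p = 1
print(N_exact, round(N_weyl, 2))  # 318 318.31
```

The discrepancy stays below $1$ for every $\lambda$, consistent with the $o(1)$ correction in (2.27).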
The following theorem (see [33, Thm. 3.4]) directly relates the function
$F_{A,B}$ to Fredholm determinants and traces (see [35, Ch. IV], [63, Sect.
XIII.17], [68], [69, Ch. 3], [70, Ch. 3] for background).
###### Theorem 2.4.
Assume Hypothesis 2.1 and denote by $T_{\alpha,\beta}$ and $T_{\varphi,R}$ the
self-adjoint extensions of $T_{min}$ as described in cases $(i)$ and $(ii)$ of
Theorem 2.2, respectively.
$(i)$ Suppose $z_{0}\in\rho(T_{\alpha,\beta})$, then
$\displaystyle\begin{split}\det&{}_{L_{r}^{2}((a,b))}\big{(}I_{L_{r}^{2}((a,b))}-(z-z_{0})(T_{\alpha,\beta}-z_{0}I_{L_{r}^{2}((a,b))})^{-1}\big{)}\\\
&=F_{\alpha,\beta}(z)/F_{\alpha,\beta}(z_{0}),\quad
z\in{\mathbb{C}}.\end{split}$ (2.34)
In particular,
$\displaystyle\operatorname{tr}_{L_{r}^{2}((a,b))}\big{(}(T_{\alpha,\beta}-zI_{L_{r}^{2}((a,b))})^{-1}\big{)}=-(d/dz)\text{\rm
ln}(F_{\alpha,\beta}(z)),\quad z\in\rho(T_{\alpha,\beta}).$ (2.35)
$(ii)$ Suppose $z_{0}\in\rho(T_{\varphi,R})$, then
$\displaystyle\begin{split}\det&{}_{L_{r}^{2}((a,b))}\big{(}I_{L_{r}^{2}((a,b))}-(z-z_{0})(T_{\varphi,R}-z_{0}I_{L_{r}^{2}((a,b))})^{-1}\big{)}\\\
&=F_{\varphi,R}(z)/F_{\varphi,R}(z_{0}),\quad z\in{\mathbb{C}}.\end{split}$
(2.36)
In particular,
$\displaystyle\operatorname{tr}_{L_{r}^{2}((a,b))}\big{(}(T_{\varphi,R}-zI_{L_{r}^{2}((a,b))})^{-1}\big{)}=-(d/dz)\text{\rm
ln}(F_{\varphi,R}(z)),\quad z\in\rho(T_{\varphi,R}).$ (2.37)
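As an illustration of the trace formula (2.35) in the zero-potential Dirichlet case (a numerical sketch of ours, not taken from the references): for $p=r=1$, $q=0$ on $(0,\pi)$ one has $F_{0,0}(z)=\sin(\pi\sqrt{z})/\sqrt{z}$ with eigenvalues $n^{2}$, so $-(d/dz)\ln F_{0,0}(z)$ at a point $z<0$ can be compared directly with the eigenvalue sum $\sum_{n\geq 1}(n^{2}-z)^{-1}$:

```python
import cmath, math

def F(z):
    # characteristic function F_{0,0}(z) = sin(pi*sqrt(z))/sqrt(z) for the
    # Dirichlet problem with p = r = 1, q = 0 on (0, pi); real for real z
    s = cmath.sqrt(z)
    return (cmath.sin(math.pi * s) / s).real

z0, h = -1.0, 1e-6
# trace formula (2.35): tr (T - z I)^{-1} = -(d/dz) ln F(z); central difference in z
trace_via_F = -(math.log(F(z0 + h)) - math.log(F(z0 - h))) / (2 * h)
trace_direct = sum(1.0 / (n ** 2 - z0) for n in range(1, 200_000))
print(abs(trace_via_F - trace_direct) < 1e-4)  # True
```

Both quantities equal $(\pi\coth(\pi)-1)/2$, the classical value of $\sum_{n\geq1}(n^{2}+1)^{-1}$.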
Given these preparations, we let $T_{A,B}$ denote the self-adjoint extension
of $T_{min}$ with either separated ($T_{\alpha,\beta}$) or coupled
($T_{\varphi,R}$) boundary conditions as described in cases $(i)$ and $(ii)$
of Theorem 2.2. One recalls (see, e.g., [33]), the spectral $\zeta$-function
of the operator, $T_{A,B}$, is defined as
$\displaystyle\zeta(s;T_{A,B}):=\sum_{\underset{\lambda_{A,B,j}\neq 0}{j\in
J}}\lambda_{A,B,j}^{-s},$ (2.38)
with $J\subset{\mathbb{Z}}$ an appropriate index set counting eigenvalues
according to their multiplicity and $\text{\rm Re}(s)>0$ sufficiently large
such that (2.38) converges absolutely. Applying Theorem 2.4, it was shown in
[33] that for $\text{\rm Re}(s)>0$ sufficiently large,
$\displaystyle\begin{split}\zeta(s;T_{A,B})&=\dfrac{1}{2\pi
i}\ointctrclockwise_{\gamma}dz\ z^{-s}\bigg{(}\dfrac{d}{dz}\text{\rm
ln}(F_{A,B}(z))-z^{-1}m(0;T_{A,B})\bigg{)}\\\ &=\dfrac{1}{2\pi
i}\ointctrclockwise_{\gamma}dz\ z^{-s}\bigg{(}\dfrac{d}{dz}\text{\rm
ln}(F_{A,B}(z))-z^{-1}m_{0}\bigg{)},\end{split}$ (2.39)
where $m(0;T_{A,B})=m_{0}$ is the multiplicity of zero as an eigenvalue of
$T_{A,B}$ and $\gamma$ is a simple contour enclosing
$\sigma(T_{A,B})\backslash\\{0\\}$ in a counterclockwise manner so as to dip
under (and hence avoid) the point 0 (cf. Figure 3). Here, following [47] (see
also [48]), we take
$\displaystyle
R_{\psi}=\\{z=te^{i\psi}:t\in[0,\infty)\\},\quad\psi\in(\pi/2,\pi),$ (2.40)
to be the branch cut of $z^{-s}$, and, once again, eigenvalues will be
determined via $F_{A,B}(z)=0$, with the multiplicity of eigenvalues of
$T_{A,B}$ corresponding to the multiplicity of zeros of $F_{A,B}$.
Figure 1. Contour $\gamma$ in the complex $z$-plane, together with the branch cut $R_{\psi}$ for $z^{-s}$.
Figure 2. Deforming $\gamma$ toward the branch cut $R_{\psi}$.
Figure 3. The contour $C_{\varepsilon}$ in the $z$-plane.
To continue the computation of (2.39) and deform the contour $\gamma$ as to
“hug” the branch cut $R_{\psi}$ (cf. Figure 3) requires knowledge of the
asymptotic behavior of $F_{A,B}(z)$ as $|z|\to\infty$, which in turn demands
$\text{\rm Re}(s)>1/2$ for large-$z$ convergence (cf. Remark 2.3).
Furthermore, if one is interested in the calculation of the value of the
spectral zeta function at positive integers, the following method provides a
very simple way of obtaining those values. In fact, by letting $s=n$,
$n\in{\mathbb{N}}$, in (2.39), one no longer needs a branch cut for the
fractional powers of $z^{-s}$ given in Figures 3 and 3. This reduces the
integral along the curve $\gamma$ to a clockwise oriented integral along the
circle $C_{\varepsilon}$, centered at zero with radius $\varepsilon>0$ (cf.
Figure 3). Letting $s=n$ also ensures that $m_{0}$ (the multiplicity of zero
as an eigenvalue of $T_{A,B}$) does not contribute to the integral in (2.39).
Hence,
$\displaystyle\begin{split}\zeta(n;T_{A,B})&=-\dfrac{1}{2\pi
i}\ointctrclockwise_{C_{\varepsilon}}dz\ z^{-n}\dfrac{d}{dz}\text{\rm
ln}(F_{A,B}(z))\\\ &=-\text{\rm Res}\left[z^{-n}\dfrac{d}{dz}\text{\rm
ln}(F_{A,B}(z));\ z=0\right],\quad n\in{\mathbb{N}}.\end{split}$ (2.41)
Thus, determining an expansion of $F_{A,B}(z)$ about $z=0$ enables one to
effectively compute $\zeta(n;T_{A,B})$. In addition, by (2.16) and (2.17),
$F_{A,B}(z)$ is a linear combination of $\theta$, $\theta^{[1]}$, $\phi$, and
$\phi^{[1]}$ for each boundary condition considered, so it suffices to find
the expansion of each of these functions individually.
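To make (2.41) concrete in the simplest case (a sketch of ours; the expansion used is the explicit Taylor series of $\sin(\pi\sqrt{z})/\sqrt{z}$ rather than the output of the general method of Section 3): for the Dirichlet problem with $p=r=1$, $q=0$ on $(0,\pi)$, one has $F_{0,0}(z)=\pi\sum_{m\geq 0}(-1)^{m}(\pi^{2}z)^{m}/(2m+1)!$, and extracting the coefficient of $z^{n-1}$ in $(d/dz)\ln F_{0,0}(z)$ reproduces $\zeta(1;T)=\pi^{2}/6$ and $\zeta(2;T)=\pi^{4}/90$:

```python
from fractions import Fraction
from math import factorial

def zeta_values(n_max):
    """Evaluate zeta(n; T) = -Res[z^{-n} (d/dz) ln F(z); z = 0], eq. (2.41),
    for F(z) = sin(pi sqrt z)/sqrt z = pi * G(pi^2 z), where
    G(u) = sum_m (-1)^m u^m / (2m+1)!.  Writing (d/du) ln G = sum_k l_k u^k,
    one finds zeta(n) = -pi^{2n} l_{n-1}; this returns the rational factors
    -l_{n-1} (to be multiplied by pi^{2n})."""
    g = [Fraction((-1) ** m, factorial(2 * m + 1)) for m in range(n_max + 1)]
    l = []
    for k in range(n_max):
        # solve G * L = G' order by order:
        # g0 l_k = (k+1) g_{k+1} - sum_{j>=1} g_j l_{k-j}
        s = (k + 1) * g[k + 1] - sum(g[j] * l[k - j] for j in range(1, k + 1))
        l.append(s / g[0])
    return {n: -l[n - 1] for n in range(1, n_max + 1)}

print(zeta_values(2))  # {1: Fraction(1, 6), 2: Fraction(1, 90)}
```

The returned factors multiply $\pi^{2n}$, so $\zeta(1;T)=\pi^{2}/6$, $\zeta(2;T)=\pi^{4}/90$, in agreement with $\sum_{n\geq1}n^{-2s}$ at $s=1,2$.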
## 3\. Expansion in z for Fundamental Solutions, Asymptotic Expansion, and
the Zeta Regularized Functional Determinant
### 3.1. Expansion in z for Fundamental Solutions
Assuming Hypothesis 2.1 throughout this section, we discuss next the expansion
in $z$ about $z=0$ for the solutions $\phi(z,\,\cdot\,,a)$ and
$\theta(z,\,\cdot\,,a)$ of $\tau y=zy$,
$\displaystyle\phi(z,x,a)$
$\displaystyle=\phi(0,x,a)+z\int_{a}^{x}r(x^{\prime})dx^{\prime}\,g(0,x,x^{\prime})\phi(z,x^{\prime},a),$
(3.1) $\displaystyle\theta(z,x,a)$
$\displaystyle=\theta(0,x,a)+z\int_{a}^{x}r(x^{\prime})dx^{\prime}\,g(0,x,x^{\prime})\theta(z,x^{\prime},a),$
(3.2) $\displaystyle\hskip 123.76965ptz\in{\mathbb{C}},\,x\in[a,b],$
employing the following expression for the Volterra Green’s function
$\displaystyle
g(0,x,x^{\prime})=\theta(0,x,a)\phi(0,x^{\prime},a)-\theta(0,x^{\prime},a)\phi(0,x,a),\quad
x,x^{\prime}\in[a,b].$ (3.3)
That (3.1) and (3.2) indeed represent solutions of $\tau y=zy$ is clear from
applying $\tau$ to either side, moreover, the initial conditions (2.9) are
readily verified.
Iterating these integral equations establishes the power series expansions
$\displaystyle\phi(z,x,a)=\sum_{m=0}^{\infty}z^{m}\phi_{m}(x),\quad
z\in{\mathbb{C}},\;x\in[a,b],$ (3.4)
where
$\displaystyle\begin{split}\phi_{0}(x)&=\phi(0,x,a),\\\
\phi_{1}(x)&=\int_{a}^{x}r(x_{1})dx_{1}\ g(0,x,x_{1})\phi(0,x_{1},a),\\\
\phi_{k}(x)&=\int_{a}^{x}r(x_{1})dx_{1}\
g(0,x,x_{1})\int_{a}^{x_{1}}r(x_{2})dx_{2}\ g(0,x_{1},x_{2})\dots\\\
&\quad\dots\int_{a}^{x_{k-1}}r(x_{k})dx_{k}\
g(0,x_{k-1},x_{k})\phi(0,x_{k},a),\quad k\in{\mathbb{N}},\end{split}$ (3.5)
and
$\displaystyle\theta(z,x,a)=\sum_{m=0}^{\infty}z^{m}\theta_{m}(x),\quad
z\in{\mathbb{C}},\;x\in[a,b],$ (3.6)
where
$\displaystyle\begin{split}\theta_{0}(x)&=\theta(0,x,a),\\\
\theta_{1}(x)&=\int_{a}^{x}r(x_{1})dx_{1}\ g(0,x,x_{1})\theta(0,x_{1},a),\\\
\theta_{k}(x)&=\int_{a}^{x}r(x_{1})dx_{1}\
g(0,x,x_{1})\int_{a}^{x_{1}}r(x_{2})dx_{2}\ g(0,x_{1},x_{2})\dots\\\
&\quad\dots\int_{a}^{x_{k-1}}r(x_{k})dx_{k}\
g(0,x_{k-1},x_{k})\theta(0,x_{k},a),\quad k\in{\mathbb{N}}.\end{split}$ (3.7)
Analogously one obtains
$\displaystyle\phi^{[1]}(z,x,a)=\sum_{m=0}^{\infty}z^{m}\phi^{[1]}_{m}(x),\quad
z\in{\mathbb{C}},\;x\in[a,b],$ (3.8)
where
$\displaystyle\begin{split}\phi^{[1]}_{0}(x)&=\phi^{[1]}(0,x,a),\\\
\phi^{[1]}_{1}(x)&=\int_{a}^{x}r(x_{1})dx_{1}\
g^{[1]}(0,x,x_{1})\phi(0,x_{1},a),\\\
\phi^{[1]}_{k}(x)&=\int_{a}^{x}r(x_{1})dx_{1}\
g^{[1]}(0,x,x_{1})\int_{a}^{x_{1}}r(x_{2})dx_{2}\ g(0,x_{1},x_{2})\dots\\\
&\quad\dots\int_{a}^{x_{k-1}}r(x_{k})dx_{k}\
g(0,x_{k-1},x_{k})\phi(0,x_{k},a),\quad k\in{\mathbb{N}},\end{split}$ (3.9)
using the abbreviation
$\displaystyle
g^{[1]}(0,x,x_{1})=\theta^{[1]}(0,x,a)\phi(0,x_{1},a)-\theta(0,x_{1},a)\phi^{[1]}(0,x,a).$
(3.10)
Similarly, one finds from (3.6)
$\displaystyle\theta^{[1]}(z,x,a)=\sum_{m=0}^{\infty}z^{m}\theta^{[1]}_{m}(x),\quad
z\in{\mathbb{C}},\;x\in[a,b],$ (3.11)
where
$\displaystyle\begin{split}\theta^{[1]}_{0}(x)&=\theta^{[1]}(0,x,a),\\\
\theta^{[1]}_{1}(x)&=\int_{a}^{x}r(x_{1})dx_{1}\
g^{[1]}(0,x,x_{1})\theta(0,x_{1},a),\\\
\theta^{[1]}_{k}(x)&=\int_{a}^{x}r(x_{1})dx_{1}\
g^{[1]}(0,x,x_{1})\int_{a}^{x_{1}}r(x_{2})dx_{2}\ g(0,x_{1},x_{2})\dots\\\
&\quad\dots\int_{a}^{x_{k-1}}r(x_{k})dx_{k}\
g(0,x_{k-1},x_{k})\theta(0,x_{k},a),\quad k\in{\mathbb{N}}.\end{split}$ (3.12)
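As a sanity check on the iterates (3.5), consider the free case $p=r=1$, $q=0$, $a=0$ (our illustrative choice, not part of the general setup): then $\phi(0,x,0)=x$, $g(0,x,x^{\prime})=x^{\prime}-x$ by (3.3), and the exact solution $\phi(z,x,0)=\sin(z^{1/2}x)/z^{1/2}$ shows that the iteration must reproduce $\phi_{m}(x)=(-1)^{m}x^{2m+1}/(2m+1)!$. A minimal sketch with exact rational arithmetic:

```python
from fractions import Fraction

# Free-particle check of the Volterra iteration (3.4), (3.5):
# p = r = 1, q = 0, a = 0, so phi(0, x, 0) = x and the Volterra Green's
# function (3.3) is g(0, x, t) = t - x.  The exact solution of
# -y'' = z y with y(0) = 0, y'(0) = 1 is sin(sqrt(z) x)/sqrt(z), whose
# z-expansion has coefficients phi_m(x) = (-1)^m x^(2m+1)/(2m+1)!.

def next_iterate(coeffs):
    """Given phi_{k-1} as {degree: coefficient}, return phi_k via
    phi_k(x) = int_0^x (t - x) phi_{k-1}(t) dt."""
    out = {}
    for j, c in coeffs.items():
        # int_0^x t^{j+1} dt  -  x * int_0^x t^j dt
        out[j + 2] = c * (Fraction(1, j + 2) - Fraction(1, j + 1))
    return out

phi = {1: Fraction(1)}          # phi_0(x) = x
iterates = [phi]
for _ in range(3):
    phi = next_iterate(phi)
    iterates.append(phi)
```

The same loop started from $\theta_{0}\equiv 1$ recovers the cosine series of $\theta(z,x,0)$ in (3.6), (3.7).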
### 3.2. Asymptotic Expansion of the Characteristic Function
Next we investigate the $|z|\to\infty$ asymptotic expansion of the function
$F_{A,B}(z)$ in order to provide an analytic continuation of the spectral
$\zeta$-function, $\zeta(s;T_{A,B})$, and compute the zeta regularized
functional determinant. We first strengthen Hypothesis 2.1 by imposing the
following assumptions on $p,q,r$, taken from [33, Sect. 3]. These additional
assumptions are necessary in order to perform a Liouville-type transformation.
###### Hypothesis 3.1.
Let $(a,b)\subset{\mathbb{R}}$ be a finite interval and suppose that $p,q,r$
are $($Lebesgue $)$ measurable functions on $(a,b)$ such that the following
items $(i)$–$(iv)$ hold:
$(i)$ $r>0$ a.e. on $(a,b)$, $r\in L^{1}((a,b);dx)$, $1/r\in
L^{\infty}((a,b);dx)$.
$(ii)$ $p>0$ a.e. on $(a,b)$, $1/p\in L^{1}((a,b);dx)$.
$(iii)$ $q$ is real-valued a.e. on $(a,b)$, $q\in L^{1}((a,b);dx)$.
$(iv)$ $pr$ and $(pr)^{\prime}/r$ are absolutely continuous on $[a,b]$, and
for some $\varepsilon>0$, $pr\geqslant\varepsilon$ on $[a,b]$.
The variable transformations (cf. [54, p. 2]),
$\displaystyle\xi(x)=\dfrac{1}{c}\int_{a}^{x}dt\
[r(t)/p(t)]^{1/2},\quad\xi(x)\in[0,1]\,\text{ for }\,x\in[a,b],$ (3.13)
$\displaystyle\xi^{\prime}(x)=c^{-1}[r(x)/p(x)]^{1/2}>0\,\text{ a.e.~{}on
$(a,b)$,}$ (3.14) $\displaystyle
u(z,\xi)=[p(x(\xi))r(x(\xi))]^{1/4}y(z,x(\xi)),$ (3.15)
with $c>0$ given by
$\displaystyle c=\int_{a}^{b}dt\ [r(t)/p(t)]^{1/2},$ (3.16)
transform the Sturm–Liouville problem $(\tau y(z,\,\cdot\,))(x)=zy(z,x)$,
$x\in(a,b)$, into
$\displaystyle-\overset{\textbf{\large.\large.}}{u}(z,\xi)+V(\xi)u(z,\xi)=c^{2}zu(z,\xi),\quad\xi\in(0,1),$
(3.17)
and abbreviating
$\nu(\xi)=[p(x(\xi))r(x(\xi))]^{1/4},$ (3.18)
one verifies that
$\displaystyle\begin{split}V(\xi)&=\dfrac{\overset{\textbf{\large.\large.}}{\nu}(\xi)}{\nu(\xi)}+c^{2}\dfrac{q(x)}{r(x)}\\\
&=-\dfrac{c^{2}}{16}\dfrac{1}{p(x)r(x)}\left[\dfrac{(p(x)r(x))^{\prime}}{r(x)}\right]^{2}+\dfrac{c^{2}}{4}\dfrac{1}{r(x)}\left[\dfrac{(p(x)r(x))^{\prime}}{r(x)}\right]^{\prime}+c^{2}\dfrac{q(x)}{r(x)},\end{split}$
(3.19)
and
$V\in L^{1}((0,1);d\xi),$ (3.20)
as guaranteed by Hypothesis 3.1.
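The identity (3.19) can be checked numerically for a concrete choice. Assuming (our illustration) $p(x)=1$, $r(x)=(1+x)^{2}$, $q=0$ on $(0,1)$, (3.16) gives $c=3/2$, (3.13) gives $\xi(x)=(x+x^{2}/2)/c$, and, since $(1+x)^{2}=1+3\xi$, (3.18) yields $\nu(\xi)=(1+3\xi)^{1/4}$. A minimal sketch comparing a central-difference value of $\overset{\textbf{\large.\large.}}{\nu}/\nu$ with the explicit right-hand side of (3.19):

```python
# Numerical check of (3.19) for the illustrative choice p(x) = 1,
# r(x) = (1 + x)^2, q(x) = 0 on (0, 1).  Then c = 3/2 by (3.16),
# xi(x) = (x + x^2/2)/c by (3.13), and nu(xi) = (1 + 3*xi)**0.25 by
# (3.18), since (1 + x)^2 = 1 + 3*xi.

c = 1.5

def nu(xi):
    return (1.0 + 3.0 * xi) ** 0.25

def V_explicit(x):
    # -(c^2/16)(1/(pr))[(pr)'/r]^2 + (c^2/4)(1/r)[(pr)'/r]' + c^2 q/r
    pr = (1.0 + x) ** 2                 # p(x) r(x)
    A = 2.0 / (1.0 + x)                 # (pr)'/r
    dA = -2.0 / (1.0 + x) ** 2          # [(pr)'/r]'
    return -c**2 / 16.0 / pr * A**2 + c**2 / 4.0 / (1.0 + x) ** 2 * dA

h = 1e-4
errors = []
for x in (0.1, 0.5, 0.9):
    xi = (x + x * x / 2.0) / c
    nu_dd = (nu(xi + h) - 2.0 * nu(xi) + nu(xi - h)) / h**2
    errors.append(abs(nu_dd / nu(xi) - V_explicit(x)))
```

For this choice both expressions in (3.19) reduce to $-27/[16(1+x)^{4}]$, so the finite-difference errors are of order $h^{2}$.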
In order to construct the asymptotic expansion of $F_{A,B}(z)$ we begin by
assuming Hypothesis 3.1, but note that throughout the construction of the
expansion stronger assumptions will be necessary, all of which will be
addressed once the final asymptotic expansion is given.
When applying the Liouville transformation the boundary conditions undergo a
similar transformation. In fact, setting
$Q(\xi)=[(pr)^{\prime}/r](x(\xi))$ (3.21)
one can write
$\displaystyle\begin{pmatrix}u(z,\xi)\\\
\overset{\textbf{\large.}}{u}(z,\xi)\end{pmatrix}=M(\xi)\begin{pmatrix}y(z,x(\xi))\\\
y^{[1]}(z,x(\xi))\end{pmatrix},$ (3.22)
where
$\displaystyle M(\xi)=\begin{pmatrix}\nu(\xi)&0\\\
(c/4)\nu(\xi)^{-1}Q(\xi)&c\nu(\xi)^{-1}\end{pmatrix},\quad\xi\in[0,1],\quad{\det}_{{\mathbb{C}}^{2}}(M(\,\cdot\,))=c.$
(3.23)
Employing relation (3.22), the separated boundary conditions for the function
$g(\,\cdot\,)$ in Theorem 2.2 $(i)$ transform into separated boundary
conditions for the transformed function $v(\,\cdot\,)$ as follows,
$\displaystyle\begin{pmatrix}\cos(\alpha)&\sin(\alpha)\\\
0&0\end{pmatrix}M(0)^{-1}\begin{pmatrix}v(0)\\\
\overset{\textbf{\large.}}{v}(0)\end{pmatrix}+\begin{pmatrix}0&0\\\
\cos(\beta)&-\sin(\beta)\end{pmatrix}M(1)^{-1}\begin{pmatrix}v(1)\\\
\overset{\textbf{\large.}}{v}(1)\end{pmatrix}=\begin{pmatrix}0\\\ 0\end{pmatrix},$ (3.24)
where $\alpha,\beta\in[0,\pi)$, and the inverse matrix $M^{-1}(\,\cdot\,)$ has
the form
$\displaystyle M(\xi)^{-1}=\begin{pmatrix}\nu(\xi)^{-1}&0\\\
-(1/4)\nu(\xi)^{-1}Q(\xi)&c^{-1}\nu(\xi)\end{pmatrix},\quad\xi\in[0,1],$
(3.25)
or, more explicitly,
$\displaystyle\begin{split}c^{-1}\nu(0)\sin(\alpha)\overset{\textbf{\large.}}{v}(0)+\nu(0)^{-1}\left[\cos(\alpha)-4^{-1}\sin(\alpha)Q(0)\right]v(0)&=0,\\\
-c^{-1}\nu(1)\sin(\beta)\overset{\textbf{\large.}}{v}(1)+\nu(1)^{-1}\left[\cos(\beta)+4^{-1}\sin(\beta)Q(1)\right]v(1)&=0.\end{split}$
(3.26)
With the help of relation (3.22) the coupled boundary conditions for
$g(\,\cdot\,)$ in Theorem 2.2 $(ii)$ transform into coupled boundary
conditions for $v(\,\cdot\,)$ via
$\displaystyle\begin{pmatrix}v(1)\\\
\overset{\textbf{\large.}}{v}(1)\end{pmatrix}=e^{i\varphi}\widetilde{R}\begin{pmatrix}v(0)\\\
\overset{\textbf{\large.}}{v}(0)\end{pmatrix},\quad\varphi\in[0,2\pi),$ (3.27)
where
$\widetilde{R}=M(1)RM(0)^{-1}\in SL(2,{\mathbb{R}})$ (3.28)
is of the form
$\displaystyle\begin{split}\widetilde{R}_{11}&=\nu(0)^{-1}\nu(1)\left[R_{11}-4^{-1}Q(0)R_{12}\right],\quad\widetilde{R}_{12}=c^{-1}\nu(0)\nu(1)R_{12},\\\
\widetilde{R}_{21}&=c\nu(0)^{-1}\nu(1)^{-1}\left[R_{21}-4^{-1}Q(0)R_{22}+4^{-1}Q(1)R_{11}-(16)^{-1}Q(0)Q(1)R_{12}\right],\\\
\widetilde{R}_{22}&=\nu(0)\nu(1)^{-1}\left[R_{22}+4^{-1}Q(1)R_{12}\right].\end{split}$
(3.29)
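One verifies from (3.22) and (3.27) that $\widetilde{R}=M(1)RM(0)^{-1}$, and the resulting entries match the closed forms (3.29); the determinant is $\det M(1)^{-1}\det R\,\det M(0)=1$. A minimal sketch with illustrative sample values (our own, chosen with $\det R=1$):

```python
from fractions import Fraction as F

# Exact-arithmetic check that Rt = M(1) R M(0)^{-1} reproduces the
# entries (3.29) and lies in SL(2, R).  The values nu0, nu1, Q0, Q1, c
# and R below are arbitrary illustrations.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

nu0, nu1, Q0, Q1, c = F(2), F(3), F(4), F(8), F(5)
R = [[F(2), F(3)], [F(1), F(2)]]                      # det R = 1

M1 = [[nu1, F(0)], [c * Q1 / (4 * nu1), c / nu1]]     # M(1), cf. (3.23)
M0inv = [[1 / nu0, F(0)], [-Q0 / (4 * nu0), nu0 / c]]  # M(0)^{-1}, cf. (3.25)
Rt = matmul(matmul(M1, R), M0inv)

# closed forms (3.29)
Rt11 = nu1 / nu0 * (R[0][0] - Q0 * R[0][1] / 4)
Rt12 = nu0 * nu1 / c * R[0][1]
Rt21 = c / (nu0 * nu1) * (R[1][0] - Q0 * R[1][1] / 4
                          + Q1 * R[0][0] / 4 - Q0 * Q1 * R[0][1] / 16)
Rt22 = nu0 / nu1 * (R[1][1] + Q1 * R[0][1] / 4)
```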
The fundamental system of solutions $\phi(z,\,\cdot\,,a)$ and
$\theta(z,\,\cdot\,,a)$ of $\tau y=zy$ satisfying (2.9) is transformed into
the set of solutions $\Phi(z,\,\cdot\,,0)$ and $\Theta(z,\,\cdot\,,0)$ of
(3.17) satisfying the conditions
$\displaystyle\Phi(z,0,0)=0,\qquad\,\,\,\overset{\textbf{\large.}}{\Phi}(z,0,0)=c\nu(0)^{-1},$
(3.30)
$\displaystyle\Theta(z,0,0)=\nu(0),\quad\overset{\textbf{\large.}}{\Theta}(z,0,0)=4^{-1}c\nu(0)^{-1}Q(0),$
(3.31)
where, once again, the derivatives of $\Phi(z,\xi,0)$ and $\Theta(z,\xi,0)$
are understood with respect to the variable $\xi$ (cf. (3.17)) and one notes
that for fixed $\xi$, each is entire with respect to $z$. By writing a generic
solution of (3.17) as a linear combination of $\Phi(z,\xi,0)$ and
$\Theta(z,\xi,0)$ and by imposing the separated boundary conditions in (3.26)
one obtains the following characteristic function
$\displaystyle\begin{split}{\mathcal{F}}_{\alpha,\beta}(z)&=\sin(\alpha)\big{\\{}c^{-1}\nu(1)\sin(\beta)\overset{\textbf{\large.}}{\Theta}(z,1,0)\\\
&\quad-\nu(1)^{-1}\left[\cos(\beta)+4^{-1}\sin(\beta)Q(1)\right]\Theta(z,1,0)\big{\\}}\\\
&\quad+\cos(\alpha)\big{\\{}-c^{-1}\nu(1)\sin(\beta)\overset{\textbf{\large.}}{\Phi}(z,1,0)\\\
&\quad+\nu(1)^{-1}\left[\cos(\beta)+4^{-1}\sin(\beta)Q(1)\right]\Phi(z,1,0)\big{\\}},\quad
z\in{\mathbb{C}}.\end{split}$ (3.32)
The zeros of ${\mathcal{F}}_{\alpha,\beta}(z)$ represent, including
multiplicity, the eigenvalues $\lambda_{A,B,j}$, $j\in J$, of the original
Sturm–Liouville problem $\tau y=zy$ endowed with the separated boundary
conditions in (2.7). By repeating this argument for coupled boundary
conditions (2.8) one obtains the characteristic function
$\displaystyle\begin{split}{\mathcal{F}}_{\varphi,\widetilde{R}}(z)&=e^{i\varphi}\big{\\{}2\cos(\varphi)-\big{[}c^{-1}\nu(0)\widetilde{R}_{11}+4^{-1}\nu(0)^{-1}Q(0)\widetilde{R}_{12}\big{]}\overset{\textbf{\large.}}{\Phi}(z,1,0)\\\
&\quad+\big{[}c^{-1}\nu(0)\widetilde{R}_{21}+4^{-1}\nu(0)^{-1}Q(0)\widetilde{R}_{22}\big{]}\Phi(z,1,0)\\\
&\quad+\widetilde{R}_{12}\nu(0)^{-1}\overset{\textbf{\large.}}{\Theta}(z,1,0)-\widetilde{R}_{22}\nu(0)^{-1}\Theta(z,1,0)\big{\\}},\quad
z\in{\mathbb{C}}.\end{split}$ (3.33)
###### Remark 3.2.
Explicit computations confirm that in the case of separated as well as coupled
boundary conditions one finds
$\displaystyle F_{\alpha,\beta}(z)$
$\displaystyle={\mathcal{F}}_{\alpha,\beta}(z),\quad z\in{\mathbb{C}},$ (3.34)
$\displaystyle F_{\varphi,R}(z)$
$\displaystyle={\mathcal{F}}_{\varphi,\widetilde{R}}(z),\quad
z\in{\mathbb{C}}.$ (3.35)
$\diamond$
As an example we now consider the case of the Krein–von Neumann extension
(see, e.g., [30] and the literature cited therein for details):
###### Example 3.3.
The Krein–von Neumann boundary conditions in terms of the variable $x\in[a,b]$
are characterized by imposing the coupled boundary conditions $\varphi=0$,
$R=R_{K}$ $($cf., e.g., [33, eq. (3.35)]$)$ with
$\displaystyle R_{K}=\begin{pmatrix}\theta(0,b,a)&\phi(0,b,a)\\\
\theta^{[1]}(0,b,a)&\phi^{[1]}(0,b,a)\end{pmatrix}.$ (3.36)
In terms of the variable $\xi\in[0,1]$, these boundary conditions are
transformed into $\varphi=0$ and $\widetilde{R}=\widetilde{R}_{K}$ with
$\displaystyle\widetilde{R}_{K}=\begin{pmatrix}\nu(0)^{-1}\big{[}\Theta(0,1,0)-4^{-1}Q(0)\Phi(0,1,0)\big{]}&c^{-1}\nu(0)\Phi(0,1,0)\\\
\nu(0)^{-1}\big{[}\overset{\textbf{\large.}}{\Theta}(0,1,0)-4^{-1}Q(0)\overset{\textbf{\large.}}{\Phi}(0,1,0)\big{]}&c^{-1}\nu(0)\overset{\textbf{\large.}}{\Phi}(0,1,0)\\\
\end{pmatrix}.$ (3.37)
By using these parameters in (3.33) one obtains the transformed characteristic
function
$\displaystyle\begin{split}{\mathcal{F}}_{0,\widetilde{R}_{K}}(z)&=2-c^{-1}\big{[}\overset{\textbf{\large.}}{\Phi}(0,1,0)\Theta(z,1,0)+\Theta(0,1,0)\overset{\textbf{\large.}}{\Phi}(z,1,0)\\\
&\hskip
48.36958pt-\Phi(0,1,0)\overset{\textbf{\large.}}{\Theta}(z,1,0)-\overset{\textbf{\large.}}{\Theta}(0,1,0)\Phi(z,1,0)\big{]},\quad
z\in{\mathbb{C}},\end{split}$ (3.38)
to be compared with $($see [33, eq. (3.36), (3.37)]$)$
$\displaystyle\begin{split}F_{0,R_{K}}(z)&=2-\big{[}\phi^{[1]}(0,b,a)\theta(z,b,a)+\theta(0,b,a)\phi^{[1]}(z,b,a)\\\
&\hskip
32.72049pt-\phi(0,b,a)\theta^{[1]}(z,b,a)-\theta^{[1]}(0,b,a)\phi(z,b,a)\big{]},\quad
z\in{\mathbb{C}}.\end{split}$ (3.39)
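For the free case $p=r=1$, $q=0$ on $[0,1]$ (our illustration), one has $\theta(z,x,0)=\cos(z^{1/2}x)$, $\phi(z,x,0)=\sin(z^{1/2}x)/z^{1/2}$, $\theta^{[1]}(z,x,0)=-z^{1/2}\sin(z^{1/2}x)$, $\phi^{[1]}(z,x,0)=\cos(z^{1/2}x)$, so (3.39) reduces to $F_{0,R_{K}}(z)=2-2\cos\big{(}z^{1/2}\big{)}-z^{1/2}\sin\big{(}z^{1/2}\big{)}=z^{2}/12+O(z^{3})$, exhibiting the double zero eigenvalue ($m_{0}=2$, kernel spanned by $1$ and $x$) of the Krein–von Neumann extension. A minimal numerical sketch:

```python
import math

# Free-case illustration of (3.39): with p = r = 1, q = 0 on [0, 1],
#   F_{0, R_K}(z) = 2 - 2 cos(sqrt(z)) - sqrt(z) sin(sqrt(z)).
def F_krein(z):
    s = math.sqrt(z)
    return 2.0 - 2.0 * math.cos(s) - s * math.sin(s)

# Small-z behavior F(z) = z^2/12 + O(z^3) reflects the double zero
# eigenvalue (m_0 = 2); z = (2 pi)^2 is a nonzero eigenvalue, since
# 2 - 2 cos(2 pi) - 2 pi sin(2 pi) = 0.
ratio = F_krein(1e-4) / 1e-8
```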
In order to obtain a large-$z$ asymptotic expansion of the functions (3.32)
and (3.33), we need the asymptotic expansion of the transformed fundamental
set of solutions $\Phi(z,\xi,0)$ and $\Theta(z,\xi,0)$. To this end, and since
the principal results we are focused on in this section are of a local nature
with respect to $\xi\in[0,1]$, we now envisage that $V(\,\cdot\,)$ is
continued in a sufficiently smooth and compactly supported manner to a
function on ${\mathbb{R}}$ (by a slight abuse of notation still abbreviated by
$V$),
$V\in C_{0}^{N}({\mathbb{R}})\cap C^{\infty}((-\infty,-1)\cup(2,\infty)),$
(3.40)
for $N\in{\mathbb{N}}$ to be determined later on. In addition, we consider the
associated Weyl–Titchmarsh (resp., Jost) solutions $u_{\pm}(z,\,\cdot\,)$ such
that for all $x_{0}\in{\mathbb{R}}$,
$u_{+}(z,\,\cdot\,)\in L^{2}([x_{0},\infty);d\xi),\quad u_{-}(z,\,\cdot\,)\in
L^{2}((-\infty,x_{0}];d\xi),\quad\text{\rm Im}\big{(}z^{1/2}\big{)}>0.$ (3.41)
Writing
$\displaystyle u_{\pm}(z,\xi)=\exp\bigg{\\{}\int_{0}^{\xi}dt\
{\mathcal{S}}_{\pm}(z,t)\bigg{\\}},\quad{\mathcal{S}}_{\pm}(z,\xi)=\frac{\overset{\textbf{\large.}}{u}_{\pm}(z,\xi)}{u_{\pm}(z,\xi)},\quad\xi\in{\mathbb{R}},\;\text{\rm
Im}\big{(}z^{1/2}\big{)}\geqslant 0$ (3.42)
(the compact support hypothesis on $V$ on ${\mathbb{R}}$, more generally, a
suitable short-range, i.e., integrability assumption on $V$, permits the
continuous extension of ${\mathcal{S}}_{\pm}(z,\,\cdot\,)$ to $\text{\rm
Im}\big{(}z^{1/2}\big{)}\geqslant 0$), one infers that
${\mathcal{S}}_{\pm}(z,\,\cdot\,)$ satisfy the Riccati differential equation
$\overset{\textbf{\large.}}{{\mathcal{S}}}_{\pm}(z,\xi)+{\mathcal{S}}_{\pm}(z,\xi)^{2}-V(\xi)+c^{2}z=0,\quad\xi\in{\mathbb{R}},\;\text{\rm
Im}\big{(}z^{1/2}\big{)}\geqslant 0.$ (3.43)
In addition, ${\mathcal{S}}_{\pm}(z,\xi)$ represent the half-line
Weyl–Titchmarsh functions on $[\xi,+\infty)$, respectively, $(-\infty,\xi]$;
in particular, for each $\xi\in{\mathbb{R}}$,
$\pm{\mathcal{S}}_{\pm}(\,\cdot\,,\xi)$ are Nevanlinna–Herglotz functions on
${\mathbb{C}}_{+}$ (i.e., analytic on ${\mathbb{C}}_{+}$ with strictly
positive imaginary part on ${\mathbb{C}}_{+}$).
Inserting the formal asymptotic expansion
$\displaystyle{\mathcal{S}}_{\pm}(z,\,\cdot\,)\underset{\begin{subarray}{c}|z|\to\infty\\\
\text{\rm Im}(z^{1/2})\geqslant 0\end{subarray}}{=}\pm
icz^{1/2}+\sum_{j=1}^{\infty}(\mp 1)^{j}S_{j}(\,\cdot\,)z^{-j/2}$ (3.44)
into the Riccati equation (3.43) yields the recursion relation
$\displaystyle\begin{split}&S_{1}(\xi)=[i/(2c)]V(\xi),\quad
S_{2}(\xi)=[1/4c^{2}]\overset{\textbf{\large.}}{V}(\xi),\\\
&S_{j+1}(\xi)=-[i/(2c)]\bigg{[}\overset{\textbf{\large.}}{S}_{j}(\xi)+\sum_{k=1}^{j-1}S_{k}(\xi)S_{j-k}(\xi)\bigg{]},\quad
j\in{\mathbb{N}},\;\xi\in{\mathbb{R}}.\end{split}$ (3.45)
The first few terms $S_{j}(\,\cdot\,)$ explicitly read
$\displaystyle\begin{split}S_{3}(\xi)&=\big{[}i\big{/}\big{(}8c^{3}\big{)}\big{]}\big{[}V^{2}(\xi)-\overset{\textbf{\large.\large.}}{V}(\xi)\big{]},\\\
S_{4}(\xi)&=-\big{[}1/16c^{4}\big{]}\big{[}V^{(3)}(\xi)-4V(\xi)\overset{\textbf{\large.}}{V}(\xi)\big{]},\\\
S_{5}(\xi)&=\big{[}i\big{/}\big{(}32c^{5}\big{)}\big{]}\big{[}2V^{3}(\xi)-5\overset{\textbf{\large.}}{V}(\xi)^{2}-6V(\xi)\overset{\textbf{\large.\large.}}{V}(\xi)+V^{(4)}(\xi)\big{]},\\\
&\text{etc.}\end{split}$ (3.46)
See [32, Sects. 5, 6] for a variety of closely related asymptotic expansions.
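The recursion (3.45) lends itself to symbolic verification: writing $S_{j}=i^{j}T_{j}$ turns (3.45) into the real recursion $T_{1}=V/(2c)$, $T_{j+1}=-(1/(2c))\big{[}\overset{\textbf{\large.}}{T}_{j}+\sum_{k=1}^{j-1}T_{k}T_{j-k}\big{]}$, which can be iterated on truncated Taylor series. A minimal sketch (with $c=1$ and an arbitrary illustrative polynomial potential) confirming the closed forms in (3.46):

```python
from fractions import Fraction as F

# Truncated-Taylor-series check of the recursion (3.45), with c = 1.
# The substitution S_j = i^j T_j removes the factors of i:
#   T_1 = V/2,   T_{j+1} = -(1/2)[T_j' + sum_{k=1}^{j-1} T_k T_{j-k}].
N = 8  # truncation order for Taylor coefficients around xi = 0

def deriv(a):
    return [k * a[k] for k in range(1, len(a))]

def mul(a, b):
    out = [F(0)] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                out[i + j] += ai * bj
    return out

def add(a, b):
    get = lambda v, i: v[i] if i < len(v) else F(0)
    return [get(a, i) + get(b, i) for i in range(max(len(a), len(b)))]

def scale(a, s):
    return [s * x for x in a]

V = [F(2), F(3), F(-1), F(5), F(1), F(0), F(0), F(0)]  # V(xi), degree 4

T = {1: scale(V, F(1, 2))}
for j in range(1, 5):
    s = deriv(T[j])
    for k in range(1, j):
        s = add(s, mul(T[k], T[j - k]))
    T[j + 1] = scale(s, F(-1, 2))

# closed forms from (3.46) with c = 1:  T_3 = (V'' - V^2)/8  and
# T_5 = (2 V^3 - 5 (V')^2 - 6 V V'' + V'''')/32
T3 = scale(add(deriv(deriv(V)), scale(mul(V, V), F(-1))), F(1, 8))
T5 = scale(add(add(scale(mul(V, mul(V, V)), F(2)),
                   scale(mul(deriv(V), deriv(V)), F(-5))),
               add(scale(mul(V, deriv(deriv(V))), F(-6)),
                   deriv(deriv(deriv(deriv(V)))))),
           F(1, 32))
```

The low-order Taylor coefficients of the recursively computed $T_{3}$, $T_{5}$ agree with the closed forms (the comparison is restricted to the orders unaffected by truncation).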
Assuming (3.40), the formal asymptotic expansion (3.44) turns into an actual
asymptotic expansion of the type (see [11]),
$\displaystyle{\mathcal{S}}_{\pm}(z,\xi)\underset{\begin{subarray}{c}|z|\to\infty\\\
\text{\rm Im}(z^{1/2})\geqslant 0\end{subarray}}{=}\pm
icz^{1/2}+\sum_{j=1}^{N}(\mp
1)^{j}S_{j}(\xi)z^{-j/2}+o\big{(}|z|^{-N/2}\big{)},$ (3.47)
with the $o\big{(}|z|^{-N/2}\big{)}$-term uniform with respect to
$\xi\in[0,1]$.
###### Remark 3.4.
There is an enormous literature available in connection with asymptotic high-
energy expansions of Weyl–Titchmarsh $m$-functions (see, e.g., the detailed
list in [14]) and the associated spectral function, however, much less can be
found in connection with (local) uniformity of the error term
$o\big{(}|z|^{-N/2}\big{)}$ with respect to $x$ in expansions of the type
(3.47). Notable exceptions are, for instance, [11], [16], [40], [43], [66],
[67]. In particular, [11] (see [55, Sects. 1.4, 3.1]) and [16] use the theory
of transformation operators, while [40] and [43] employ a detailed analysis of
the Riccati equation (3.43), and [66], [67] iterate an underlying Volterra
integral equation. In addition we note that the compact support hypothesis on
$V$ can be relaxed to the condition
$\int_{{\mathbb{R}}}(1+|x|)dx\,\big{|}V^{(\ell)}(x)\big{|}<\infty,\quad
0\leqslant\ell\leqslant N.$ (3.48)
$\diamond$
The correct asymptotic behavior as $|z|\to\infty$ of any solution
$u(z,\,\cdot\,)$ of (3.17) is obtained by writing it as a linear combination
of $u_{\pm}(z,\,\cdot\,)$,
$\displaystyle
u(z,\xi)={\mathcal{A}}(z)u_{+}(z,\xi)+{\mathcal{B}}(z)u_{-}(z,\xi),\quad\text{\rm
Im}(z)>0,\;\xi\in[0,1],$ (3.49)
and one notices that the solutions $u_{\pm}(z,\,\cdot\,)$ satisfy the initial
conditions
$\displaystyle
u_{\pm}(z,0)=1,\quad\overset{\textbf{\large.}}{u}_{\pm}(z,0)={\mathcal{S}}_{\pm}(z,0),\quad\text{\rm
Im}(z)>0.$ (3.50)
Since $W(u_{+}(z,\,\cdot\,),u_{-}(z,\,\cdot\,))(\xi)\neq 0$, $\xi\in[0,1]$,
one infers that
${\mathcal{S}}_{+}(z,0)-{\mathcal{S}}_{-}(z,0)\neq 0,\quad\text{\rm Im}(z)>0.$
(3.51)
By imposing the initial conditions (3.30) and (3.31) on the function (3.49),
one obtains an expression for $\Phi(z,\,\cdot\,,0)$ and
$\Theta(z,\,\cdot\,,0)$ suitable for an asymptotic expansion. For instance, in
the case of $\Phi(z,\xi,0)$ one obtains
$\displaystyle\begin{split}\Phi(z,\xi,0)&=\frac{c\nu(0)^{-1}}{{\mathcal{S}}_{-}(z,0)-{\mathcal{S}}_{+}(z,0)}\exp\bigg{(}\int_{0}^{\xi}d\eta\,{\mathcal{S}}_{-}(z,\eta)\bigg{)}\\\
&\quad\,\times\bigg{[}1-\exp\bigg{(}\int_{0}^{\xi}d\eta\,[{\mathcal{S}}_{+}(z,\eta)-{\mathcal{S}}_{-}(z,\eta)]\bigg{)}\bigg{]}.\end{split}$
(3.52)
Furthermore, for large values of $z$, with $\text{\rm Im}(z)>0$, (3.47)
implies
$\displaystyle\exp\bigg{(}\int_{0}^{\xi}d\eta\,[{\mathcal{S}}_{+}(z,\eta)-{\mathcal{S}}_{-}(z,\eta)]\bigg{)}$
$\displaystyle\quad\underset{\begin{subarray}{c}|z|\to\infty\\\ \text{\rm
Im}(z^{1/2})\geqslant
0\end{subarray}}{=}\exp\big{(}2icz^{1/2}\xi\big{)}\exp\bigg{(}-2\sum_{n=1}^{N}z^{-n+(1/2)}\int_{0}^{\xi}d\eta\,S_{2n-1}(\eta)\bigg{)}$
(3.53) $\displaystyle\hskip 54.06006pt\times[1+o\big{(}z^{-N+1/2}\big{)}].$
Since the integrals on the right-hand side of (3.53) are finite, one finds
$\displaystyle\exp\bigg{(}-2\sum_{n=1}^{N}z^{-n+1/2}\int_{0}^{\xi}d\eta\,S_{2n-1}(\eta)\bigg{)}\underset{\begin{subarray}{c}|z|\to\infty\\\
\text{\rm Im}(z^{1/2})\geqslant 0\end{subarray}}{=}O(1),$ (3.54)
uniformly in $\xi\in[0,1]$. Relations (3.47) and (3.53) permit one to conclude
that
$\displaystyle\exp\bigg{(}\int_{0}^{\xi}d\eta\,[{\mathcal{S}}_{+}(z,\eta)-{\mathcal{S}}_{-}(z,\eta)]\bigg{)}\underset{\begin{subarray}{c}|z|\to\infty\\\
\text{\rm Im}(z^{1/2})\geqslant
0\end{subarray}}{=}O\big{(}e^{2icz^{1/2}}\big{)},$ (3.55)
uniformly for $\xi\in[0,1]$, and therefore,
$\displaystyle\Phi(z,\xi,0)$
$\displaystyle\underset{\begin{subarray}{c}|z|\to\infty\\\ \text{\rm
Im}(z^{1/2})\geqslant
0\end{subarray}}{=}\frac{c\nu(0)^{-1}}{{\mathcal{S}}_{-}(z,0)-{\mathcal{S}}_{+}(z,0)}\exp\bigg{(}\int_{0}^{\xi}d\eta\,{\mathcal{S}}_{-}(z,\eta)\bigg{)}\big{[}1+O\big{(}e^{2icz^{1/2}}\big{)}\big{]}.$
(3.56)
Similar arguments permit one to derive the following expressions:
$\displaystyle\Theta(z,\xi,0)$
$\displaystyle\underset{\begin{subarray}{c}|z|\to\infty\\\ \text{\rm
Im}(z^{1/2})\geqslant
0\end{subarray}}{=}\frac{(c/4)\nu(0)^{-1}Q(0)-\nu(0){\mathcal{S}}_{+}(z,0)}{{\mathcal{S}}_{-}(z,0)-{\mathcal{S}}_{+}(z,0)}\exp\bigg{(}\int_{0}^{\xi}d\eta\,{\mathcal{S}}_{-}(z,\eta)\bigg{)}
$\displaystyle\hskip
45.52458pt\times\big{[}1+O\big{(}e^{2icz^{1/2}}\big{)}\big{]},$ (3.57)
$\displaystyle\overset{\textbf{\large.}}{\Phi}(z,\xi,0)$
$\displaystyle\underset{\begin{subarray}{c}|z|\to\infty\\\ \text{\rm
Im}(z^{1/2})\geqslant
0\end{subarray}}{=}\frac{c\nu(0)^{-1}{\mathcal{S}}_{-}(z,1)}{{\mathcal{S}}_{-}(z,0)-{\mathcal{S}}_{+}(z,0)}\exp\bigg{(}\int_{0}^{\xi}d\eta\,{\mathcal{S}}_{-}(z,\eta)\bigg{)}\big{[}1+O\big{(}e^{2icz^{1/2}}\big{)}\big{]},$
(3.58) $\displaystyle\overset{\textbf{\large.}}{\Theta}(z,\xi,0)$
$\displaystyle\underset{\begin{subarray}{c}|z|\to\infty\\\ \text{\rm
Im}(z^{1/2})\geqslant
0\end{subarray}}{=}\frac{\left[(c/4)\nu(0)^{-1}Q(0)-\nu(0){\mathcal{S}}_{+}(z,0)\right]{\mathcal{S}}_{-}(z,1)}{{\mathcal{S}}_{-}(z,0)-{\mathcal{S}}_{+}(z,0)}$
$\displaystyle\hskip
45.52458pt\times\exp\bigg{(}\int_{0}^{\xi}d\eta\,{\mathcal{S}}_{-}(z,\eta)\bigg{)}\big{[}1+O\big{(}e^{2icz^{1/2}}\big{)}\big{]},$
(3.59)
uniformly with respect to $\xi\in[0,1]$.
By utilizing the expressions (3.56)–(3.59) in the characteristic functions
(3.32) and (3.33) we obtain
$\displaystyle{\mathcal{F}}_{A,B}(z)$
$\displaystyle\underset{\begin{subarray}{c}|z|\to\infty\\\ \text{\rm
Im}(z^{1/2})\geqslant
0\end{subarray}}{=}\frac{1}{{\mathcal{S}}_{-}(z,0)-{\mathcal{S}}_{+}(z,0)}\exp\bigg{(}\int_{0}^{1}d\eta\,{\mathcal{S}}_{-}(z,\eta)\bigg{)}
(3.60) $\displaystyle\hskip
45.52458pt\times\left[j_{A,B}+k_{A,B}{\mathcal{S}}_{+}(z,0)+\ell_{A,B}{\mathcal{S}}_{-}(z,1)+m_{A,B}{\mathcal{S}}_{+}(z,0){\mathcal{S}}_{-}(z,1)\right]$
$\displaystyle\hskip
45.52458pt\times\big{[}1+O\big{(}e^{2icz^{1/2}}\big{)}\big{]}.$
The first line on the right-hand side of (3.60) is entirely independent of
boundary conditions, in particular, it does not distinguish between separated
and coupled boundary conditions. In contrast, the terms
$j_{A,B},k_{A,B},\ell_{A,B}$, and $m_{A,B}$ in the second line on the right-
hand side of (3.60) encode the specific information about the boundary
conditions imposed. In the case of separated boundary conditions, where $A,B$
represents $\alpha,\beta$ as in (2.7) one obtains
$\displaystyle\begin{split}j_{\alpha,\beta}&=-\dfrac{c}{\nu(0)\nu(1)}\left[\cos(\beta)+(1/4)\sin(\beta)Q(1)\right]\left[\cos(\alpha)-(1/4)\sin(\alpha)Q(0)\right],\\\
k_{\alpha,\beta}&=-\dfrac{\nu(0)}{\nu(1)}\sin(\alpha)\left[\cos(\beta)+(1/4)\sin(\beta)Q(1)\right],\\\
\ell_{\alpha,\beta}&=\dfrac{\nu(1)}{\nu(0)}\sin(\beta)\left[\cos(\alpha)-(1/4)\sin(\alpha)Q(0)\right],\\\
m_{\alpha,\beta}&=(1/c)\nu(0)\nu(1)\sin(\alpha)\sin(\beta).\end{split}$ (3.61)
In the case of coupled boundary conditions, where $A,B$ represents
$\varphi,\widetilde{R}$ as in (3.27), (3.29), one infers
$\displaystyle\begin{split}j_{\varphi,\widetilde{R}}=-e^{i\varphi}\widetilde{R}_{21},\quad
k_{\varphi,\widetilde{R}}=-e^{i\varphi}\widetilde{R}_{22},\quad\ell_{\varphi,\widetilde{R}}=e^{i\varphi}\widetilde{R}_{11},\quad
m_{\varphi,\widetilde{R}}=e^{i\varphi}\widetilde{R}_{12}.\end{split}$ (3.62)
For the purpose of the analytic continuation of the spectral $\zeta$-function
one needs the large-$z$ asymptotic expansion of $\text{\rm
ln}({\mathcal{F}}_{A,B}(z))$ rather than the one for ${\mathcal{F}}_{A,B}(z)$.
For this reason we will focus next on the derivation of the large-$z$
asymptotic expansion of the expression
$\displaystyle\text{\rm
ln}({\mathcal{F}}_{A,B}(z))\underset{\begin{subarray}{c}|z|\to\infty\\\
\text{\rm Im}(z^{1/2})\geqslant 0\end{subarray}}{=}-\text{\rm
ln}\big{(}{\mathcal{S}}_{+}(z,0)-{\mathcal{S}}_{-}(z,0)\big{)}+\int_{0}^{1}d\eta\,{\mathcal{S}}_{-}(z,\eta)$
$\displaystyle\quad+\text{\rm
ln}\big{(}j_{A,B}+k_{A,B}{\mathcal{S}}_{+}(z,0)+\ell_{A,B}{\mathcal{S}}_{-}(z,1)+m_{A,B}{\mathcal{S}}_{+}(z,0){\mathcal{S}}_{-}(z,1)\big{)}$
(3.63) $\displaystyle\quad+O\big{(}e^{2icz^{1/2}}\big{)}.$
We can now use the expansion (3.47) in (3.60) to obtain a large-$z$ asymptotic
expansion of (3.63). We start with the part of (3.63) that is independent of the
boundary conditions. For the integral in (3.63) one finds
$\displaystyle\int_{0}^{1}d\eta\,{\mathcal{S}}_{-}(z,\eta)\underset{\begin{subarray}{c}|z|\to\infty\\\
\text{\rm Im}(z^{1/2})\geqslant
0\end{subarray}}{=}-iz^{1/2}c+\sum_{m=1}^{N}z^{-m/2}\int_{0}^{1}d\eta\,S_{m}(\eta)+o\big{(}z^{-N/2}\big{)}.$
(3.64)
For the first term in (3.63) one concludes that
$\displaystyle{\mathcal{S}}_{+}(z,0)-{\mathcal{S}}_{-}(z,0)\underset{\begin{subarray}{c}|z|\to\infty\\\
\text{\rm Im}(z^{1/2})\geqslant
0\end{subarray}}{=}2icz^{1/2}\bigg{(}1+(i/c)\sum_{j=1}^{N}S_{2j-1}(0)z^{-j}\bigg{)}+o\big{(}z^{-N+1/2}\big{)}.$
(3.65)
Relation (3.65) permits one to write
$\displaystyle\text{\rm
ln}\big{(}{\mathcal{S}}_{+}(z,0)-{\mathcal{S}}_{-}(z,0)\big{)}\underset{\begin{subarray}{c}|z|\to\infty\\\
\text{\rm Im}(z^{1/2})\geqslant 0\end{subarray}}{=}\text{\rm
ln}(2ic)+2^{-1}\text{\rm
ln}(z)+\sum_{m=1}^{N}D_{2m-1}z^{-m}+o\big{(}z^{-N}\big{)},$ (3.66)
where the terms $D_{2m-1}$ are determined through the formal asymptotic
expansion
$\displaystyle\text{\rm
ln}\bigg{(}1+(i/c)\sum_{m=1}^{\infty}S_{2m-1}(0)z^{-m}\bigg{)}=\sum_{j=1}^{\infty}D_{2j-1}z^{-j}.$
(3.67)
We refer to (4.7)–(4.9) for a recursive formula for $D_{2j-1}$ in terms of
$(i/c)S_{2m-1}(0)$. The first few $D_{2j-1}$ explicitly read
$\displaystyle\begin{split}D_{1}&=-V(0)\big{/}\big{[}2c^{2}\big{]},\quad
D_{3}=\big{[}\overset{\textbf{\large.\large.}}{V}(0)-2V(0)^{2}\big{]}\big{/}\big{[}8c^{4}\big{]},\\\
D_{5}&=-\big{[}3V^{(4)}(0)-24V(0)\overset{\textbf{\large.\large.}}{V}(0)-15\overset{\textbf{\large.}}{V}(0)^{2}+16V(0)^{3}\big{]}\big{/}\big{[}96c^{6}\big{]},\\\
D_{7}&=\big{(}128c^{8}\big{)}^{-1}\big{[}V^{(6)}(0)+48V(0)^{2}\overset{\textbf{\large.\large.}}{V}(0)-20\overset{\textbf{\large.\large.}}{V}(0)^{2}-12V(0)V^{(4)}(0)\\\
&\quad+60V(0)\overset{\textbf{\large.}}{V}(0)^{2}-28V^{(3)}(0)\overset{\textbf{\large.}}{V}(0)-16V(0)^{4}\big{]},\\\
&\text{etc.}\end{split}$ (3.68)
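The coefficients in (3.68) can be reproduced with the standard logarithm-series recursion (cf. the recursive formulas (4.7)–(4.9)): if $\text{\rm ln}\big{(}1+\sum_{m\geqslant 1}g_{m}z^{-m}\big{)}=\sum_{j\geqslant 1}d_{j}z^{-j}$, then $d_{j}=g_{j}-(1/j)\sum_{k=1}^{j-1}kd_{k}g_{j-k}$. A minimal sketch (with $c=1$ and illustrative sample values for $V(0)$, $\overset{\textbf{\large.\large.}}{V}(0)$) matching the first two explicit coefficients listed in (3.68):

```python
from fractions import Fraction as F

# Log-series recursion behind (3.67), with c = 1 and sample data
# V(0) = 2, V''(0) = -1 (illustrative values).  By (3.46),
#   (i/c) S_1(0) = -V(0)/2,   (i/c) S_3(0) = (V''(0) - V(0)^2)/8,
# and the coefficients of ln(1 + sum_m g_m z^{-m}) obey
#   d_j = g_j - (1/j) sum_{k=1}^{j-1} k d_k g_{j-k}.
V0, V2 = F(2), F(-1)
g = {1: -V0 / 2, 2: (V2 - V0 * V0) / 8}

d = {}
for j in (1, 2):
    d[j] = g[j] - sum(k * d[k] * g[j - k] for k in range(1, j)) / F(j)
```

With these sample values, $d_{1}=-V(0)/2$ and $d_{2}=[\overset{\textbf{\large.\large.}}{V}(0)-2V(0)^{2}]/8$, in agreement with (3.68).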
Computing the asymptotic expansion of the last logarithmic term in (3.63),
namely the term which depends on the boundary conditions, is somewhat more
involved. By using the asymptotic expansion (3.47) it is not difficult to find
$\displaystyle\begin{split}&j_{A,B}+k_{A,B}{\mathcal{S}}_{+}(z,0)+\ell_{A,B}{\mathcal{S}}_{-}(z,1)\\\
&\quad\underset{\begin{subarray}{c}|z|\to\infty\\\ \text{\rm
Im}(z^{1/2})\geqslant
0\end{subarray}}{=}-icz^{1/2}(\ell_{A,B}-k_{A,B})+\sum_{m=0}^{N}\Delta_{m}z^{-m/2}+o\big{(}z^{-N/2}\big{)},\end{split}$
(3.69)
where
$\displaystyle\Delta_{0}=j_{A,B},\quad\Delta_{m}=\ell_{A,B}S_{m}(1)+(-1)^{m}k_{A,B}S_{m}(0),\quad
m\in{\mathbb{N}},$ (3.70)
and
$\displaystyle
m_{A,B}{\mathcal{S}}_{+}(z,0){\mathcal{S}}_{-}(z,1)\underset{\begin{subarray}{c}|z|\to\infty\\\
\text{\rm Im}(z^{1/2})\geqslant
0\end{subarray}}{=}m_{A,B}c^{2}z\bigg{(}1+\sum_{m=2}^{N}\Lambda_{m}z^{-m/2}\bigg{)}+o\big{(}z^{-(N-2)/2}\big{)},$
(3.71)
(3.71)
where
$\displaystyle\Lambda_{m}=\sum_{\ell=0}^{m}\Omega^{-}_{\ell}(0)\Omega_{m-\ell}^{+}(1),\quad
m\in{\mathbb{N}},\;m\geqslant 2,$ (3.72)
with (employing the convention $S_{0}(\,\cdot\,)\equiv 0$, consistent with the
absence of a $z^{0}$-term in (3.44))
$\displaystyle\Omega^{-}_{0}(0)=\Omega_{0}^{+}(1)=1,\quad\Omega_{j}^{+}(x)=(-1)^{j}\Omega_{j}^{-}(x)=(i/c)S_{j-1}(x),\quad
j\in{\mathbb{N}}.$ (3.73)
The first few $\Lambda_{m}$ have the explicit form,
$\displaystyle\begin{split}\Lambda_{2}&=-2^{-1}c^{-2}[V(1)+V(0)],\quad\Lambda_{3}=-i4^{-1}c^{-3}\big{[}\overset{\textbf{\large.}}{V}(1)+\overset{\textbf{\large.}}{V}(0)\big{]},\\\
\Lambda_{4}&=8^{-1}c^{-4}\big{[}\overset{\textbf{\large.\large.}}{V}(1)+\overset{\textbf{\large.\large.}}{V}(0)-V(0)^{2}-V(1)^{2}+2V(1)V(0)\big{]},\\\
\Lambda_{5}&=i(16)^{-1}c^{-5}\Big{[}V^{(3)}(0)-V^{(3)}(1)-2V(0)\big{(}2\overset{\textbf{\large.}}{V}(0)+\overset{\textbf{\large.}}{V}(1)\big{)}+2V(1)\big{[}\overset{\textbf{\large.}}{V}(0)\\\
&\quad+2\overset{\textbf{\large.}}{V}(1)\big{]}\Big{]},\\\
&\text{etc.}\end{split}$ (3.74)
This finally implies
$\displaystyle\begin{split}&j_{A,B}+k_{A,B}{\mathcal{S}}_{+}(z,0)+\ell_{A,B}{\mathcal{S}}_{-}(z,1)+m_{A,B}{\mathcal{S}}_{+}(z,0){\mathcal{S}}_{-}(z,1)\\\
&\quad\underset{\begin{subarray}{c}|z|\to\infty\\\ \text{\rm
Im}(z^{1/2})\geqslant
0\end{subarray}}{=}\sum_{m=-2}^{N}\Gamma_{m}z^{-m/2}+o\big{(}z^{-N/2}\big{)},\end{split}$
(3.75)
where
$\displaystyle\begin{split}&\Gamma_{-2}=m_{A,B}c^{2},\quad\Gamma_{-1}=-ic(\ell_{A,B}-k_{A,B}),\\\
&\Gamma_{m}=\Delta_{m}+m_{A,B}c^{2}\Lambda_{m+2},\quad
m\in{\mathbb{N}}_{0}.\end{split}$ (3.76)
Let $\Gamma_{k_{0}}$ with $k_{0}\in\mathbb{Z}$ and $k_{0}\geqslant-2$, be the
first non-vanishing term of the series in (3.75). Since $\Gamma_{k_{0}}\neq 0$
one can write
$\displaystyle\text{\rm
ln}\big{(}j_{A,B}+k_{A,B}{\mathcal{S}}_{+}(z,0)+\ell_{A,B}{\mathcal{S}}_{-}(z,1)+m_{A,B}{\mathcal{S}}_{+}(z,0){\mathcal{S}}_{-}(z,1)\big{)}$
(3.77) $\displaystyle\quad\underset{\begin{subarray}{c}|z|\to\infty\\\
\text{\rm Im}(z^{1/2})\geqslant 0\end{subarray}}{=}\text{\rm
ln}(\Gamma_{k_{0}})-(k_{0}/2)\text{\rm ln}(z)+\text{\rm
ln}\bigg{(}1+\sum_{m=1}^{N}[\Gamma_{m+k_{0}}/\Gamma_{k_{0}}]z^{-m/2}+o\big{(}z^{-N/2}\big{)}\bigg{)},$
which, in turn, yields
$\displaystyle\begin{split}&\text{\rm
ln}\big{(}j_{A,B}+k_{A,B}{\mathcal{S}}_{+}(z,0)+\ell_{A,B}{\mathcal{S}}_{-}(z,1)+m_{A,B}{\mathcal{S}}_{+}(z,0){\mathcal{S}}_{-}(z,1)\big{)}\\\
&\quad\underset{\begin{subarray}{c}|z|\to\infty\\\ \text{\rm
Im}(z^{1/2})\geqslant 0\end{subarray}}{=}\text{\rm
ln}(\Gamma_{k_{0}})-(k_{0}/2)\text{\rm
ln}(z)+\sum_{j=1}^{N}\Pi_{j}z^{-j/2}+o\big{(}z^{-N/2}\big{)},\end{split}$
(3.78)
where the terms $\Pi_{j}$ are obtained via the formal asymptotic expansion
$\displaystyle\text{\rm
ln}\bigg{(}1+\sum_{m=1}^{\infty}[\Gamma_{m+k_{0}}/\Gamma_{k_{0}}]z^{-m/2}\bigg{)}=\sum_{j=1}^{\infty}\Pi_{j}z^{-j/2}.$
(3.79)
Once again we refer to (4.7)–(4.9) for a recursive determination of $\Pi_{j}$
in terms of $\Gamma_{m+k_{0}}/\Gamma_{k_{0}}$. The first few $\Pi_{m}$ are
explicitly of the form,
$\displaystyle\Pi_{1}$
$\displaystyle=\Gamma_{1+k_{0}}/\Gamma_{k_{0}},\quad\Pi_{2}=2^{-1}\Gamma^{-2}_{k_{0}}\big{[}2\Gamma_{k_{0}}\Gamma_{k_{0}+2}-\Gamma^{2}_{k_{0}+1}\big{]},$
$\displaystyle\Pi_{3}$
$\displaystyle=3^{-1}\Gamma^{-3}_{k_{0}}\big{[}\Gamma^{3}_{k_{0}+1}-3\Gamma_{k_{0}}\Gamma_{k_{0}+2}\Gamma_{k_{0}+1}+3\Gamma^{2}_{k_{0}}\Gamma_{k_{0}+3}\big{]},$
(3.80) $\displaystyle\Pi_{4}$
$\displaystyle=-4^{-1}\Gamma^{-4}_{k_{0}}\big{[}\Gamma^{4}_{k_{0}+1}-4\Gamma_{k_{0}}\Gamma_{k_{0}+2}\Gamma^{2}_{k_{0}+1}+4\Gamma^{2}_{k_{0}}\Gamma_{k_{0}+3}\Gamma_{k_{0}+1}$
$\displaystyle\quad+2\Gamma^{2}_{k_{0}}\left(\Gamma^{2}_{k_{0}+2}-2\Gamma_{k_{0}}\Gamma_{k_{0}+4}\right)\big{]},$
etc. (3.81)
More explicit expressions for $\Pi_{m}$ in terms of the potential $V$ and its
derivatives can be obtained with a simple computer program once the index
$k_{0}$ has been determined.
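A minimal version of such a computer program: the $\Pi_{j}$ obey the standard logarithm-series recursion $\Pi_{j}=g_{j}-(1/j)\sum_{k=1}^{j-1}k\Pi_{k}g_{j-k}$ with $g_{m}=\Gamma_{m+k_{0}}/\Gamma_{k_{0}}$, which the sketch below (with illustrative nonzero sample values for $\Gamma_{k_{0}},\dots,\Gamma_{k_{0}+4}$) checks against the closed forms (3.80), (3.81):

```python
from fractions import Fraction as F

# Coefficients Pi_j of ln(1 + sum_m g_m w^m) = sum_j Pi_j w^j, with
# w = z^{-1/2} and g_m = Gamma_{m+k0}/Gamma_{k0}, via the standard
# recursion Pi_j = g_j - (1/j) sum_{k=1}^{j-1} k Pi_k g_{j-k}.
def log_coeffs(g, n):
    Pi = [F(0)] * (n + 1)
    for j in range(1, n + 1):
        Pi[j] = g[j] - sum(k * Pi[k] * g[j - k] for k in range(1, j)) / F(j)
    return Pi

# illustrative nonzero sample values: G[m] = Gamma_{k0 + m}
G = [F(2), F(3), F(5), F(7), F(11)]
g = [None] + [G[m] / G[0] for m in range(1, 5)]
Pi = log_coeffs(g, 4)
```

The assertions below reproduce $\Pi_{1},\dots,\Pi_{4}$ of (3.80), (3.81) exactly.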
Finally, we can provide the large-$z$ asymptotic expansion of the logarithm of
the characteristic function in the form
$\displaystyle\begin{split}\text{\rm
ln}({\mathcal{F}}_{A,B}(z))&\underset{\begin{subarray}{c}|z|\to\infty\\\
\text{\rm Im}(z^{1/2})\geqslant
0\end{subarray}}{=}-icz^{1/2}-2^{-1}(k_{0}+1)\text{\rm ln}(z)+\text{\rm
ln}(\Gamma_{k_{0}}/(2ic))\\\ &\hskip
45.52458pt+\sum_{m=1}^{N}\Psi_{m}z^{-m/2}+o\big{(}z^{-N/2}\big{)},\end{split}$
(3.82)
where
$\displaystyle\begin{split}&\Psi_{2n}=\int_{0}^{1}d\eta\,S_{2n}(\eta)-D_{2n-1}+\Pi_{2n},\quad
n\in\mathbb{N},\\\
&\Psi_{2n+1}=\int_{0}^{1}d\eta\,S_{2n+1}(\eta)+\Pi_{2n+1},\quad
n\in\mathbb{N}_{0}.\end{split}$ (3.83)
### 3.3. Analytic Continuation of the Spectral Zeta Function and the Zeta
Regularized Functional Determinant
In order to perform the analytic continuation of the spectral
$\zeta$-function, we need to investigate the specific behavior for
$z\downarrow 0$ and $|z|\to\infty$. The characteristic function
${\mathcal{F}}_{A,B}(z)$ is constructed as a linear combination of the basis
functions $\phi(z,\,\cdot\,,a)$ and $\theta(z,\,\cdot\,,a)$ (or equivalently
the transformed basis functions $\Phi(z,\,\cdot\,,0)$ and
$\Theta(z,\,\cdot\,,0)$) and their quasi-derivatives. In Section 3.1 we proved
that $\phi(z,\,\cdot\,,a)$ and $\theta(z,\,\cdot\,,a)$, and consequently
$\Phi(z,\,\cdot\,,0)$ and $\Theta(z,\,\cdot\,,0)$, admit small-$z$ asymptotic
expansions in the form of power series in the variable $z$.
This implies that, in general, the characteristic function
${\mathcal{F}}_{A,B}(z)$ has a small-$z$ asymptotic expansion of the form
$\displaystyle{\mathcal{F}}_{A,B}(z)={\mathcal{F}}_{m_{0}}z^{m_{0}}+\sum_{m=m_{0}+1}^{\infty}{\mathcal{F}}_{m}z^{m},$
(3.84)
where $m_{0}\in\\{0,1,2\\}$ represents the multiplicity of the zero eigenvalue
and ${\mathcal{F}}_{m_{0}}\neq 0$. The asymptotic expansion (3.84) suggests
that the appropriate characteristic function to use in the integral
representation of the spectral $\zeta$-function is
$z^{-{m_{0}}}{\mathcal{F}}_{A,B}(z)$ rather than simply
${\mathcal{F}}_{A,B}(z)$ (obviously the two coincide when no zero eigenvalue
is present). In this case it is easy to verify that
$\displaystyle\frac{d}{dz}\text{\rm
ln}\left({\mathcal{F}}_{A,B}(z)z^{-m_{0}}\right)\underset{|z|\downarrow
0}{=}O(1).$ (3.85)
From the large-$z$ asymptotic expansion (3.82) of the characteristic function,
namely,
$\displaystyle\begin{split}\text{\rm
ln}({\mathcal{F}}_{A,B}(z))&\underset{\begin{subarray}{c}|z|\to\infty\\\
\text{\rm Im}(z^{1/2})\geqslant
0\end{subarray}}{=}-icz^{1/2}-[(k_{0}+1)/2]\text{\rm ln}(z)+\text{\rm
ln}(\Gamma_{k_{0}}/(2ic))\\\ &\hskip
45.52458pt+\sum_{m=1}^{N}\Psi_{m}z^{-m/2}+o\big{(}|z|^{-N/2}\big{)},\end{split}$
(3.86)
one readily infers that
$\displaystyle\frac{d}{dz}\text{\rm
ln}({\mathcal{F}}_{A,B}(z)z^{-m_{0}})\underset{\begin{subarray}{c}|z|\to\infty\\\
\text{\rm Im}(z^{1/2})\geqslant 0\end{subarray}}{=}O\big{(}|z|^{-1/2}\big{)}.$
(3.87)
The asymptotic behaviors in (3.85) and (3.87) justify deforming the contour
$\gamma$ in the integral representation (2.39) to one that surrounds the
branch cut $R_{\psi}$ as shown in Figure 3. This contour deformation leads to
the following integral representation (with $\psi$ introduced in (2.40))
$\displaystyle\zeta(s;T_{A,B})=e^{is(\pi-\psi)}\pi^{-1}\sin(\pi
s)\int_{0}^{\infty}dt\,t^{-s}\frac{d}{dt}\text{\rm
ln}\left({\mathcal{F}}_{A,B}(te^{i\psi})t^{-m_{0}}e^{-im_{0}\psi}\right),$
(3.88)
which is valid in the region $1/2<\text{\rm Re}(s)<1$. To obtain the analytic
continuation of (3.88) to the left of the abscissa of convergence $\text{\rm
Re}(s)=1/2$ we subtract and then add $N$ terms of the large-$z$ asymptotic
expansion of $\text{\rm
ln}\left({\mathcal{F}}_{A,B}(te^{i\psi})t^{-m_{0}}e^{-im_{0}\psi}\right)$.
This process leads to the following expression of the spectral
$\zeta$-function
$\displaystyle\zeta(s;T_{A,B})=Z(s,A,B)+\sum_{j=-1}^{N}h_{j}(s,A,B),$ (3.89)
which is valid in the region $-(N+1)/2<\text{\rm Re}(s)<1$. The explicit form
of the functions in the analytically continued expression of
$\zeta(s;T_{A,B})$ in (3.89) is
$\displaystyle Z(s,A,B)$ $\displaystyle=e^{is(\pi-\psi)}\pi^{-1}\sin(\pi
s)\int_{0}^{\infty}dt\,t^{-s}\frac{d}{dt}\bigg{\\{}\text{\rm
ln}\left({\mathcal{F}}_{A,B}(te^{i\psi})t^{-m_{0}}e^{-im_{0}\psi}\right)$
$\displaystyle\quad-H(t-1)\bigg{[}-ict^{1/2}e^{i\psi/2}-[((k_{0}+1)/2)+m_{0}]\text{\rm
ln}(t)$ (3.90)
$\displaystyle\quad-\big{[}((k_{0}+1)/2)+m_{0}\big{]}i\psi+\text{\rm
ln}(\Gamma_{k_{0}}/(2ic))+\sum_{n=1}^{N}\Psi_{n}e^{-in\psi/2}t^{-n/2}\bigg{]}\bigg{\\}},$
where $H(s)=\begin{cases}1,&s>0,\\\ 0,&s<0,\end{cases}$ represents the
Heaviside function, and
$\displaystyle\begin{split}h_{-1}(s,A,B)&=-ie^{is(\pi-\psi)}\pi^{-1}\sin(\pi
s)c\,e^{i\psi/2}/(2s-1),\\\
h_{0}(s,A,B)&=-(k_{0}+1+2m_{0})e^{is(\pi-\psi)}(2\pi s)^{-1}\sin(\pi s),\\\
h_{n}(s,A,B)&=-e^{is(\pi-\psi)}\pi^{-1}\sin(\pi
s)[n/(2s+n)]e^{-in\psi/2}\Psi_{n},\quad n\in{\mathbb{N}}.\end{split}$ (3.91)
Thanks to the expression (3.89) we are now able to compute the zeta
regularized functional determinant in terms of $\zeta^{\prime}(0;T_{A,B})$ as
in [33, Thm. 2.9]. For the purpose of computing $\zeta^{\prime}(0;T_{A,B})$,
it is sufficient to set $N=0$ in (3.89) to obtain
$\displaystyle\zeta^{\prime}(0;T_{A,B})=Z^{\prime}(0,A,B)+h^{\prime}_{-1}(0,A,B)+h^{\prime}_{0}(0,A,B).$
(3.92)
By computing the derivative with respect to $s$ of (3.90) and of the first two
expressions in (3.91) at $s=0$ one obtains the remarkably simple formula
$\displaystyle\zeta^{\prime}(0;T_{A,B})=i\pi n-\text{\rm
ln}(2c|{\mathcal{F}}_{m_{0}}/\Gamma_{k_{0}}|),$ (3.93)
where $n$ is the number of strictly negative eigenvalues of $T_{A,B}$.
## 4\. Computing Spectral Zeta Function Values and Traces for Regular
Sturm–Liouville Operators
We have now completed the necessary preparations to give the main theorem for
computing values of the spectral $\zeta$-function for self-adjoint regular
Sturm–Liouville operators when imposing either separated or coupled boundary
conditions. When zero is not an eigenvalue we also find an expression for
computing the trace of the inverse Sturm–Liouville operator.
###### Theorem 4.1.
Assume Hypothesis 2.1, denote by $T_{A,B}$ the self-adjoint extension of
$T_{min}$ with either separated or coupled boundary conditions as described in
Theorem 2.2, and let $m_{0}=0,1,2$, denote the multiplicity of zero as an
eigenvalue of $T_{A,B}$ $($with $m_{0}=0$ denoting zero is not an
eigenvalue$)$. Suppose that $F_{A,B}(z)$ given in (2.39) has the series
expansion
$\displaystyle F_{A,B}(z)=\sum_{j=0}^{\infty}a_{j}z^{j},\quad
0\leqslant|z|\text{ sufficiently small}.$ (4.1)
Then,
$\displaystyle\zeta(n;T_{A,B})=-\text{\rm
Res}\left[z^{-n}\dfrac{d}{dz}\text{\rm ln}(F_{A,B}(z));\
z=0\right]=-n\,b_{n},\quad n\in{\mathbb{N}},$ (4.2)
where
$\displaystyle\begin{split}&b_{1}=a_{1+m_{0}}/a_{m_{0}},\\\
&b_{j}=[a_{j+m_{0}}/a_{m_{0}}]-\sum_{\ell=1}^{j-1}[\ell/j][a_{j-\ell+m_{0}}/a_{m_{0}}]b_{\ell},\quad
j\in{\mathbb{N}},\;j\geqslant 2.\end{split}$ (4.3)
In particular, if zero is not an eigenvalue of $T_{A,B}$, then
$\displaystyle\text{\rm
tr}_{L^{2}_{r}((a,b))}\big{(}T_{A,B}^{-1}\big{)}=\zeta(1;T_{A,B})=-a_{1}/a_{0}.$
(4.4)
###### Proof.
The residue in equation (4.2) coincides with the $z^{-1}$ coefficient of the
Laurent expansion, in the neighborhood of $z=0$, of the integrand in (2.41).
By using the expansion (4.1) one obtains, for $0\leqslant|z|$ sufficiently
small and for $n\in{\mathbb{N}}$, that
$\displaystyle z^{-n}\dfrac{d}{dz}\text{\rm
ln}(F_{A,B}(z))=z^{-n}\dfrac{d}{dz}\text{\rm
ln}\bigg{(}\sum_{j=0}^{\infty}a_{j}z^{j}\bigg{)}.$ (4.5)
Since $z=0$ can be an eigenvalue of multiplicity at most 2, the expansion can
be rewritten as follows
$\displaystyle z^{-n}\dfrac{d}{dz}\text{\rm ln}(F_{A,B}(z))$
$\displaystyle=z^{-n}\dfrac{d}{dz}\text{\rm
ln}\bigg{(}\sum_{j=m_{0}}^{\infty}a_{j}z^{j}\bigg{)}$
$\displaystyle=z^{-n}\dfrac{d}{dz}\bigg{(}\text{\rm
ln}\big{(}a_{m_{0}}z^{m_{0}}\big{)}+\text{\rm
ln}\bigg{(}1+\sum_{j=1}^{\infty}[a_{j+m_{0}}/a_{m_{0}}]z^{j}\bigg{)}\bigg{)}$
$\displaystyle=m_{0}z^{-n-1}+z^{-n}\dfrac{d}{dz}\text{\rm
ln}\bigg{(}1+\sum_{j=1}^{\infty}[a_{j+m_{0}}/a_{m_{0}}]z^{j}\bigg{)}.$ (4.6)
Since $n\in{\mathbb{N}}$, the term $m_{0}z^{-n-1}$ never contributes to the
residue and the only contribution comes from the $z^{n}$ coefficient of the
small-$|z|$ asymptotic expansion of the logarithm on the right-hand side. This
expansion can be obtained by making use of the fact that if $F$ has the
analytic expansion
$\displaystyle F(z)=\sum_{m=1}^{\infty}c_{m}z^{m},\quad 0\leqslant|z|\text{
sufficiently small},$ (4.7)
then
$\displaystyle\text{\rm ln}(1+F(z))=\sum_{m=1}^{\infty}d_{m}z^{m},\quad
0\leqslant|z|\text{ sufficiently small},$ (4.8)
where
$d_{1}=c_{1},\quad
d_{j}=c_{j}-\sum_{\ell=1}^{j-1}[\ell/j]c_{j-\ell}d_{\ell},\quad
j\in{\mathbb{N}},\;j\geqslant 2.$ (4.9)
By using (4.8) one obtains
$\displaystyle\text{\rm
ln}\bigg{(}1+\sum_{j=1}^{\infty}[a_{j+m_{0}}/a_{m_{0}}]z^{j}\bigg{)}=\sum_{j=1}^{\infty}b_{j}z^{j},$
(4.10)
with the coefficients $b_{j}$ given by equation (4.3). From the last expansion
one finally obtains
$\displaystyle z^{-n}\dfrac{d}{dz}\text{\rm
ln}(F_{A,B}(z))=m_{0}z^{-n-1}+z^{-n}\dfrac{d}{dz}\sum_{j=1}^{\infty}b_{j}z^{j}=m_{0}z^{-n-1}+\sum_{j=1}^{\infty}jb_{j}z^{j-n-1}.$
(4.11)
This is the Laurent expansion, and from it one can easily deduce that
$\displaystyle\text{\rm Res}\left[z^{-n}\dfrac{d}{dz}\text{\rm
ln}(F_{A,B}(z));\ z=0\right]=n\,b_{n},\quad n\in{\mathbb{N}},$ (4.12)
proving (4.2).
Assertion (4.4) about the trace of the inverse operator when $z=0$ is not an
eigenvalue is obtained by noting
$\displaystyle-\dfrac{d}{dz}\text{\rm
ln}(F_{A,B}(z))\bigg{|}_{z=0}=-d_{1}=-a_{1}/a_{0}$ (4.13)
from the analytic expansions (4.7) and (4.8), and upon applying Theorem 2.4. ∎
This theorem allows one to utilize the series expansions found in the previous
section in order to express the $\zeta$-function values for each of the
boundary conditions considered.
### 4.1. Computing Spectral Zeta Function Values and Traces for Separated
Boundary Conditions
We begin by applying Theorem 4.1 to find an expression for values of
$\zeta(n;T_{\alpha,\beta})$ when imposing separated boundary conditions.
###### Theorem 4.2.
Assume Hypothesis 2.1, consider $T_{\alpha,\beta}$ as described in Theorem 2.2
$(i)$, and let $m_{0}=0,1$, denote the multiplicity of zero as an eigenvalue
of $T_{\alpha,\beta}$. Then,
$\displaystyle\zeta(n;T_{\alpha,\beta})=-\text{\rm
Res}\left[z^{-n}\dfrac{d}{dz}\text{\rm ln}(F_{\alpha,\beta}(z));\
z=0\right]=-n\,b_{n},\quad n\in{\mathbb{N}},$ (4.14)
where
$\displaystyle\begin{split}&b_{1}=a_{1+m_{0}}/a_{m_{0}},\\\ &b_{j}=[a_{j+m_{0}}/a_{m_{0}}]-\sum_{\ell=1}^{j-1}[\ell/j][a_{j-\ell+m_{0}}/a_{m_{0}}]b_{\ell},\quad
j\in{\mathbb{N}},\;j\geqslant 2,\end{split}$ (4.15)
with
$\displaystyle a_{k}=\cos(\alpha)\big{[}\cos(\beta)\,\phi_{k}(b)-\sin(\beta)\,\phi^{[1]}_{k}(b)\big{]}-\sin(\alpha)\big{[}\cos(\beta)\,\theta_{k}(b)-\sin(\beta)\,\theta^{[1]}_{k}(b)\big{]},\quad
k\in{\mathbb{N}}_{0}.$
In particular, if zero is not an eigenvalue of $T_{\alpha,\beta}$, then
$\displaystyle\text{\rm
tr}_{L^{2}_{r}((a,b))}\big{(}T^{-1}_{\alpha,\beta}\big{)}=\zeta(1;T_{\alpha,\beta})=-a_{1}/a_{0}.$
(4.16)
###### Proof.
One substitutes (3.4), (3.6), (3.8), and (3.11) into equation (2.16) for
$\alpha,\beta\in[0,\pi)$ to find
$\displaystyle\begin{split}F_{\alpha,\beta}(z)=\sum_{m=0}^{\infty}\big{\\{}\cos(\alpha)\big{[}&\cos(\beta)\
\phi_{m}(b)-\sin(\beta)\ \phi^{[1]}_{m}(b)\big{]}\\\
&-\sin(\alpha)\big{[}\cos(\beta)\ \theta_{m}(b)-\sin(\beta)\
\theta^{[1]}_{m}(b)\big{]}\big{\\}}z^{m}.\end{split}$ (4.17)
From (4.17) one proves the assertion by applying Theorem 4.1 with
$\displaystyle\begin{split}a_{k}=\cos(\alpha)\big{[}&\cos(\beta)\
\phi_{k}(b)-\sin(\beta)\ \phi^{[1]}_{k}(b)\big{]}\\\
&-\sin(\alpha)\big{[}-\sin(\beta)\ \theta^{[1]}_{k}(b)+\cos(\beta)\
\theta_{k}(b)\big{]},\quad k\in{\mathbb{N}}_{0}.\end{split}$ (4.18)
∎
We now give a few corollaries that will be of use in the context of specific
boundary conditions. One notes that for Dirichlet boundary conditions one has
$\alpha=\beta=0$ and for Neumann boundary conditions one has
$\alpha=\beta=\pi/2$.
###### Corollary 4.3 (Dirichlet boundary conditions).
Assume Hypothesis 2.1, consider $T_{0,0}$ as described in case Theorem 2.2
$(i)$, and let $m_{0}=0,1$, denote the multiplicity of zero as an eigenvalue
of $T_{0,0}$. Then,
$\displaystyle\zeta(n;T_{0,0})=-\text{\rm
Res}\left[z^{-n}\dfrac{d}{dz}\text{\rm ln}(F_{0,0}(z));\
z=0\right]=-n\,b_{n},\quad n\in{\mathbb{N}},$ (4.19)
where
$\displaystyle\begin{split}&b_{1}=\phi_{1+m_{0}}(b)/\phi_{m_{0}}(b),\\\
&b_{j}=[\phi_{j+m_{0}}(b)/\phi_{m_{0}}(b)]-\sum_{\ell=1}^{j-1}[\ell/j][\phi_{j-\ell+m_{0}}(b)/\phi_{m_{0}}(b)]b_{\ell},\quad
j\in{\mathbb{N}},\;j\geqslant 2.\end{split}$ (4.20)
In particular, if zero is not an eigenvalue of $T_{0,0}$, then
$\displaystyle\text{\rm
tr}_{L^{2}_{r}((a,b))}\big{(}T^{-1}_{0,0}\big{)}=\zeta(1;T_{0,0})=-\phi_{1}(b)/\phi_{0}(b).$
(4.21)
###### Proof.
Take $\alpha=\beta=0$ in Theorem 4.2. ∎
In particular, one finds explicitly for $n=2,3,4$, when zero is not an
eigenvalue of $T_{0,0}$:
$\displaystyle\zeta(2;T_{0,0})=-2b_{2}=-2\left[\dfrac{\phi_{2}(b)}{\phi_{0}(b)}-\dfrac{[\phi_{1}(b)]^{2}}{2[\phi_{0}(b)]^{2}}\right],$
$\displaystyle\zeta(3;T_{0,0})=-3b_{3}=-3\Bigg{[}\dfrac{\phi_{3}(b)}{\phi_{0}(b)}-\dfrac{\phi_{1}(b)\phi_{2}(b)}{[\phi_{0}(b)]^{2}}+\dfrac{[\phi_{1}(b)]^{3}}{3[\phi_{0}(b)]^{3}}\Bigg{]},$
(4.22)
$\displaystyle\zeta(4;T_{0,0})=-4b_{4}=-4\Bigg{[}\dfrac{\phi_{4}(b)}{\phi_{0}(b)}-\dfrac{\phi_{1}(b)\phi_{3}(b)}{[\phi_{0}(b)]^{2}}-\dfrac{[\phi_{2}(b)]^{2}}{2[\phi_{0}(b)]^{2}}+\dfrac{[\phi_{1}(b)]^{2}\phi_{2}(b)}{[\phi_{0}(b)]^{3}}-\dfrac{[\phi_{1}(b)]^{4}}{4[\phi_{0}(b)]^{4}}\Bigg{]}.$
One also finds explicitly for $n=2,3,4$, when zero is a simple eigenvalue of
$T_{0,0}$:
$\displaystyle\zeta(2;T_{0,0})=-2b_{2}=-2\left[\dfrac{\phi_{3}(b)}{\phi_{1}(b)}-\dfrac{[\phi_{2}(b)]^{2}}{2[\phi_{1}(b)]^{2}}\right],$
$\displaystyle\zeta(3;T_{0,0})=-3b_{3}=-3\Bigg{[}\dfrac{\phi_{4}(b)}{\phi_{1}(b)}-\dfrac{\phi_{2}(b)\phi_{3}(b)}{[\phi_{1}(b)]^{2}}+\dfrac{[\phi_{2}(b)]^{3}}{3[\phi_{1}(b)]^{3}}\Bigg{]},$
(4.23)
$\displaystyle\zeta(4;T_{0,0})=-4b_{4}=-4\Bigg{[}\dfrac{\phi_{5}(b)}{\phi_{1}(b)}-\dfrac{\phi_{2}(b)\phi_{4}(b)}{[\phi_{1}(b)]^{2}}-\dfrac{[\phi_{3}(b)]^{2}}{2[\phi_{1}(b)]^{2}}+\dfrac{[\phi_{2}(b)]^{2}\phi_{3}(b)}{[\phi_{1}(b)]^{3}}-\dfrac{[\phi_{2}(b)]^{4}}{4[\phi_{1}(b)]^{4}}\Bigg{]}.$
###### Corollary 4.4 (Dirichlet boundary condition at $a$).
Assume Hypothesis 2.1, consider $T_{0,\beta}$ as described in Theorem 2.2
$(i)$, and let $m_{0}=0,1$, denote the multiplicity of zero as an eigenvalue
of $T_{0,\beta}$. Then,
$\displaystyle\zeta(n;T_{0,\beta})=-\text{\rm
Res}\left[z^{-n}\dfrac{d}{dz}\text{\rm ln}(F_{0,\beta}(z));\
z=0\right]=-n\,b_{n},\quad n\in{\mathbb{N}},$ (4.24)
where
$\displaystyle\begin{split}b_{1}&=\dfrac{\cos(\beta)\phi_{1+m_{0}}(b)-\sin(\beta)\phi^{[1]}_{1+m_{0}}(b)}{\cos(\beta)\phi_{m_{0}}(b)-\sin(\beta)\phi^{[1]}_{m_{0}}(b)},\\\
b_{j}&=\dfrac{\cos(\beta)\phi_{j+m_{0}}(b)-\sin(\beta)\phi^{[1]}_{j+m_{0}}(b)}{\cos(\beta)\phi_{m_{0}}(b)-\sin(\beta)\phi^{[1]}_{m_{0}}(b)}\\\
&\quad-\sum_{\ell=1}^{j-1}[\ell/j]\dfrac{\cos(\beta)\phi_{j-\ell+m_{0}}(b)-\sin(\beta)\phi^{[1]}_{j-\ell+m_{0}}(b)}{\cos(\beta)\phi_{m_{0}}(b)-\sin(\beta)\phi^{[1]}_{m_{0}}(b)}b_{\ell},\quad
j\in{\mathbb{N}},\;j\geqslant 2.\end{split}$ (4.25)
In particular, if zero is not an eigenvalue of $T_{0,\beta}$, then
$\displaystyle\text{\rm
tr}_{L^{2}_{r}((a,b))}\big{(}T^{-1}_{0,\beta}\big{)}=\zeta(1;T_{0,\beta})=-\dfrac{\cos(\beta)\phi_{1}(b)-\sin(\beta)\phi^{[1]}_{1}(b)}{\cos(\beta)\phi_{0}(b)-\sin(\beta)\phi^{[1]}_{0}(b)}.$
(4.26)
###### Proof.
Take $\alpha=0$ in Theorem 4.2. ∎
###### Corollary 4.5 (Dirichlet boundary condition at $b$).
Assume Hypothesis 2.1, consider $T_{\alpha,0}$ as described in Theorem 2.2
$(i)$, and let $m_{0}=0,1$, denote the multiplicity of zero as an eigenvalue
of $T_{\alpha,0}$. Then,
$\displaystyle\zeta(n;T_{\alpha,0})=-\text{\rm
Res}\left[z^{-n}\dfrac{d}{dz}\text{\rm ln}(F_{\alpha,0}(z));\
z=0\right]=-n\,b_{n},\quad n\in{\mathbb{N}},$ (4.27)
where
$\displaystyle\begin{split}b_{1}&=\dfrac{\cos(\alpha)\phi_{1+m_{0}}(b)-\sin(\alpha)\theta_{1+m_{0}}(b)}{\cos(\alpha)\phi_{m_{0}}(b)-\sin(\alpha)\theta_{m_{0}}(b)},\\\
b_{j}&=\dfrac{\cos(\alpha)\phi_{j+m_{0}}(b)-\sin(\alpha)\theta_{j+m_{0}}(b)}{\cos(\alpha)\phi_{m_{0}}(b)-\sin(\alpha)\theta_{m_{0}}(b)}\\\
&\quad-\sum_{\ell=1}^{j-1}[\ell/j]\dfrac{\cos(\alpha)\phi_{j-\ell+m_{0}}(b)-\sin(\alpha)\theta_{j-\ell+m_{0}}(b)}{\cos(\alpha)\phi_{m_{0}}(b)-\sin(\alpha)\theta_{m_{0}}(b)}b_{\ell},\quad
j\in{\mathbb{N}},\;j\geqslant 2.\end{split}$ (4.28)
In particular, if zero is not an eigenvalue of $T_{\alpha,0}$, then
$\displaystyle\text{\rm
tr}_{L^{2}_{r}((a,b))}\big{(}T^{-1}_{\alpha,0}\big{)}=\zeta(1;T_{\alpha,0})=-\dfrac{\cos(\alpha)\phi_{1}(b)-\sin(\alpha)\theta_{1}(b)}{\cos(\alpha)\phi_{0}(b)-\sin(\alpha)\theta_{0}(b)}.$
(4.29)
###### Proof.
Take $\beta=0$ in Theorem 4.2. ∎
###### Corollary 4.6 (Neumann boundary conditions).
Assume Hypothesis 2.1, consider $T_{\pi/2,\pi/2}$ as described in Theorem 2.2
$(i)$, and let $m_{0}=0,1$, denote the multiplicity of zero as an eigenvalue
of $T_{\pi/2,\pi/2}$. Then,
$\displaystyle\zeta(n;T_{\pi/2,\pi/2})=-\text{\rm
Res}\left[z^{-n}\dfrac{d}{dz}\text{\rm ln}(F_{\pi/2,\pi/2}(z));\
z=0\right]=-n\,b_{n},\quad n\in{\mathbb{N}},$ (4.30)
where
$\displaystyle b_{1}=\theta^{[1]}_{1+m_{0}}(b)\big{/}\theta^{[1]}_{m_{0}}(b),$
(4.31) $\displaystyle
b_{j}=\theta^{[1]}_{j+m_{0}}(b)\big{/}\theta^{[1]}_{m_{0}}(b)-\sum_{\ell=1}^{j-1}[\ell/j]\big{[}\theta^{[1]}_{j-\ell+m_{0}}(b)\big{/}\theta^{[1]}_{m_{0}}(b)\big{]}b_{\ell},\quad
j\in{\mathbb{N}},\;j\geqslant 2.$
In particular, if zero is not an eigenvalue of $T_{\pi/2,\pi/2}$, then
$\displaystyle\text{\rm
tr}_{L^{2}_{r}((a,b))}\big{(}T^{-1}_{\pi/2,\pi/2}\big{)}=\zeta(1;T_{\pi/2,\pi/2})=-\theta^{[1]}_{1}(b)\big{/}\theta^{[1]}_{0}(b).$
(4.32)
###### Proof.
Take $\alpha=\beta=\pi/2$ in Theorem 4.2. ∎
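As an independent numerical cross-check (ours, not part of the original text): for $q=0$ on an interval of length $L$, the Neumann spectrum is $\{k^{2}\pi^{2}/L^{2}\}_{k\geqslant 0}$, so zero is a simple eigenvalue ($m_{0}=1$) and one expects the zeta value at $1$, summed over the nonzero eigenvalues, to equal $L^{2}/6$; a truncated eigenvalue sum agrees:

```python
import math

L = 2.0        # interval length b - a (illustrative choice)
K = 200_000    # truncation level of the eigenvalue sum
# Nonzero Neumann eigenvalues for q = 0: k^2 pi^2 / L^2, k >= 1
zeta1 = sum(L**2 / (k * math.pi) ** 2 for k in range(1, K + 1))
agrees = abs(zeta1 - L**2 / 6) < 1e-4
print(agrees)
```

The truncation error of the tail is of order $L^{2}/(\pi^{2}K)$, well below the tolerance used above.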
These corollaries cover only a few of the most commonly considered separated
boundary conditions. One can also impose Neumann boundary conditions at
only one endpoint, or any other combination of separated boundary conditions,
by referring back to Theorem 4.2 with the appropriate values chosen for
$\alpha,\beta\in[0,\pi)$.
### 4.2. Computing Spectral Zeta Function Values and Traces for Coupled
Boundary Conditions
We now apply Theorem 4.1 to find values of $\zeta(n;T_{\varphi,R})$ when
imposing coupled boundary conditions. Notice that according to [30], zero is
an eigenvalue of multiplicity 2 only for the Krein–von Neumann extension.
###### Theorem 4.7.
Assume Hypothesis 2.1, consider $T_{\varphi,R}$ as described in Theorem 2.2
$(ii)$, and let $m_{0}=0,1$, denote the multiplicity of zero as an eigenvalue
of $T_{\varphi,R}$. Then,
$\displaystyle\zeta(n;T_{\varphi,R})=-\text{\rm
Res}\left[z^{-n}\dfrac{d}{dz}\text{\rm ln}(F_{\varphi,R}(z));\
z=0\right]=-n\,b_{n},\quad n\in{\mathbb{N}},$ (4.33)
where for $m_{0}=0$,
$\displaystyle
b_{1}=\dfrac{e^{i\varphi}\big{(}R_{12}\theta_{1}^{[1]}(b)-R_{22}\theta_{1}(b)+R_{21}\phi_{1}(b)-R_{11}\phi_{1}^{[1]}(b)\big{)}}{e^{i\varphi}\big{(}R_{12}\theta_{0}^{[1]}(b)-R_{22}\theta_{0}(b)+R_{21}\phi_{0}(b)-R_{11}\phi_{0}^{[1]}(b)\big{)}+e^{2i\varphi}+1},$
$\displaystyle
b_{j}=\dfrac{e^{i\varphi}\big{(}R_{12}\theta_{j}^{[1]}(b)-R_{22}\theta_{j}(b)+R_{21}\phi_{j}(b)-R_{11}\phi_{j}^{[1]}(b)\big{)}}{e^{i\varphi}\big{(}R_{12}\theta_{0}^{[1]}(b)-R_{22}\theta_{0}(b)+R_{21}\phi_{0}(b)-R_{11}\phi_{0}^{[1]}(b)\big{)}+e^{2i\varphi}+1}$
(4.34)
$\displaystyle\qquad-\displaystyle\sum_{\ell=1}^{j-1}\frac{\ell}{j}\dfrac{e^{i\varphi}\big{(}R_{12}\theta_{j-\ell}^{[1]}(b)-R_{22}\theta_{j-\ell}(b)+R_{21}\phi_{j-\ell}(b)-R_{11}\phi_{j-\ell}^{[1]}(b)\big{)}}{e^{i\varphi}\big{(}R_{12}\theta_{0}^{[1]}(b)-R_{22}\theta_{0}(b)+R_{21}\phi_{0}(b)-R_{11}\phi_{0}^{[1]}(b)\big{)}+e^{2i\varphi}+1}b_{\ell},$
$\displaystyle\hskip 278.83708ptj\in{\mathbb{N}},\;j\geqslant 2,$
and for $m_{0}=1$,
$\displaystyle
b_{1}=\dfrac{e^{i\varphi}\big{(}R_{12}\theta_{2}^{[1]}(b)-R_{22}\theta_{2}(b)+R_{21}\phi_{2}(b)-R_{11}\phi_{2}^{[1]}(b)\big{)}}{e^{i\varphi}\big{(}R_{12}\theta_{1}^{[1]}(b)-R_{22}\theta_{1}(b)+R_{21}\phi_{1}(b)-R_{11}\phi_{1}^{[1]}(b)\big{)}},$
$\displaystyle
b_{j}=\dfrac{e^{i\varphi}\big{(}R_{12}\theta_{j+1}^{[1]}(b)-R_{22}\theta_{j+1}(b)+R_{21}\phi_{j+1}(b)-R_{11}\phi_{j+1}^{[1]}(b)\big{)}}{e^{i\varphi}\big{(}R_{12}\theta_{1}^{[1]}(b)-R_{22}\theta_{1}(b)+R_{21}\phi_{1}(b)-R_{11}\phi_{1}^{[1]}(b)\big{)}}$
(4.35)
$\displaystyle\qquad-\displaystyle\sum_{\ell=1}^{j-1}\frac{\ell}{j}\dfrac{e^{i\varphi}\big{(}R_{12}\theta_{j-\ell+1}^{[1]}(b)-R_{22}\theta_{j-\ell+1}(b)+R_{21}\phi_{j-\ell+1}(b)-R_{11}\phi_{j-\ell+1}^{[1]}(b)\big{)}}{e^{i\varphi}\big{(}R_{12}\theta_{1}^{[1]}(b)-R_{22}\theta_{1}(b)+R_{21}\phi_{1}(b)-R_{11}\phi_{1}^{[1]}(b)\big{)}}b_{\ell},$
$\displaystyle\hskip 300.17665ptj\in{\mathbb{N}},\;j\geqslant 2.$
In particular, if zero is not an eigenvalue of $T_{\varphi,R}$, then
$\displaystyle\begin{split}&\text{\rm
tr}_{L^{2}_{r}((a,b))}\big{(}T_{\varphi,R}^{-1}\big{)}=\zeta(1;T_{\varphi,R})\\\
&\quad=\dfrac{-e^{i\varphi}\big{(}R_{12}\theta_{1}^{[1]}(b)-R_{22}\theta_{1}(b)+R_{21}\phi_{1}(b)-R_{11}\phi_{1}^{[1]}(b)\big{)}}{e^{i\varphi}\big{(}R_{12}\theta_{0}^{[1]}(b)-R_{22}\theta_{0}(b)+R_{21}\phi_{0}(b)-R_{11}\phi_{0}^{[1]}(b)\big{)}+e^{2i\varphi}+1}.\end{split}$
(4.36)
###### Proof.
Substituting (3.4), (3.6), (3.8), and (3.11) into the definition of $F_{\varphi,R}(z)$ yields
$\displaystyle
F_{\varphi,R}(0)=e^{i\varphi}\big{(}R_{12}\theta_{0}^{[1]}(b)-R_{22}\theta_{0}(b)+R_{21}\phi_{0}(b)-R_{11}\phi_{0}^{[1]}(b)\big{)}+e^{2i\varphi}+1.$
(4.37)
Thus, the coefficient of the $z^{m}$ term for $m\geqslant 1$ in the series is
given by
$\displaystyle
e^{i\varphi}\big{(}R_{12}\theta_{m}^{[1]}(b)-R_{22}\theta_{m}(b)+R_{21}\phi_{m}(b)-R_{11}\phi_{m}^{[1]}(b)\big{)}.$
(4.38)
Hence, assertions (4.34) and (4.35) follow from Theorem 4.1 with
$\displaystyle\begin{split}a_{0}&=e^{i\varphi}\big{(}R_{12}\theta_{0}^{[1]}(b)-R_{22}\theta_{0}(b)+R_{21}\phi_{0}(b)-R_{11}\phi_{0}^{[1]}(b)\big{)}+e^{2i\varphi}+1,\\\
a_{k}&=e^{i\varphi}\big{(}R_{12}\theta_{k}^{[1]}(b)-R_{22}\theta_{k}(b)+R_{21}\phi_{k}(b)-R_{11}\phi_{k}^{[1]}(b)\big{)},\quad
k\in{\mathbb{N}}.\end{split}$ (4.39)
∎
Next, we provide corollaries covering the most common coupled boundary
conditions, namely periodic and antiperiodic ones, as well as the Krein–von
Neumann extension.
###### Corollary 4.8 (Periodic boundary conditions).
Assume Hypothesis 2.1, consider $T_{0,I_{2}}$ as described in Theorem 2.2
$(ii)$, and let $m_{0}=0,1$, denote the multiplicity of zero as an eigenvalue
of $T_{0,I_{2}}$. Then,
$\displaystyle\zeta(n;T_{0,I_{2}})=-\text{\rm
Res}\left[z^{-n}\dfrac{d}{dz}\text{\rm ln}(F_{0,I_{2}}(z));\
z=0\right]=-n\,b_{n},\quad n\in{\mathbb{N}},$ (4.40)
where for $m_{0}=0$,
$\displaystyle b_{1}$
$\displaystyle=\big{[}-\theta_{1}(b)-\phi^{[1]}_{1}(b)\big{]}\big{/}\big{[}-\theta_{0}(b)-\phi^{[1]}_{0}(b)+2\big{]},$
(4.41) $\displaystyle b_{j}$
$\displaystyle=\dfrac{-\theta_{j}(b)-\phi^{[1]}_{j}(b)}{-\theta_{0}(b)-\phi^{[1]}_{0}(b)+2}-\sum_{\ell=1}^{j-1}\frac{\ell}{j}\,\dfrac{-\theta_{j-\ell}(b)-\phi^{[1]}_{j-\ell}(b)}{-\theta_{0}(b)-\phi^{[1]}_{0}(b)+2}b_{\ell},\quad
j\in{\mathbb{N}},\;j\geqslant 2,$
and for $m_{0}=1$,
$\displaystyle b_{1}$
$\displaystyle=\big{[}\theta_{2}(b)+\phi^{[1]}_{2}(b)\big{]}\big{/}\big{[}\theta_{1}(b)+\phi^{[1]}_{1}(b)\big{]},$
(4.42) $\displaystyle b_{j}$
$\displaystyle=\dfrac{\theta_{j+1}(b)+\phi^{[1]}_{j+1}(b)}{\theta_{1}(b)+\phi^{[1]}_{1}(b)}-\sum_{\ell=1}^{j-1}\frac{\ell}{j}\,\dfrac{\theta_{j-\ell+1}(b)+\phi^{[1]}_{j-\ell+1}(b)}{\theta_{1}(b)+\phi^{[1]}_{1}(b)}b_{\ell},\quad
j\in{\mathbb{N}},\;j\geqslant 2.$
In particular, if zero is not an eigenvalue of $T_{0,I_{2}}$, then
$\displaystyle\text{\rm
tr}_{L^{2}_{r}((a,b))}\big{(}T^{-1}_{0,I_{2}}\big{)}=\zeta(1;T_{0,I_{2}})=\big{[}\theta_{1}(b)+\phi^{[1]}_{1}(b)\big{]}\big{/}\big{[}-\theta_{0}(b)-\phi^{[1]}_{0}(b)+2\big{]}.$
(4.43)
###### Proof.
Take $\varphi=0$ and $R=I_{2}$ in Theorem 4.7. ∎
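A quick numerical consistency check (ours, not part of the original text): for $q=0$ on an interval of length $L$ with periodic boundary conditions, zero is a simple eigenvalue ($m_{0}=1$) and the nonzero eigenvalues are $4k^{2}\pi^{2}/L^{2}$, $k\geqslant 1$, each of multiplicity two, so the zeta value at $1$ over the nonzero spectrum should equal $L^{2}/12$:

```python
import math

L = 1.5        # interval length b - a (illustrative choice)
K = 200_000    # truncation level of the eigenvalue sum
# Nonzero periodic eigenvalues for q = 0: 4 k^2 pi^2 / L^2, multiplicity 2
zeta1 = 2 * sum(L**2 / (2 * k * math.pi) ** 2 for k in range(1, K + 1))
agrees = abs(zeta1 - L**2 / 12) < 1e-4
print(agrees)
```

Indeed $2\sum_{k\geqslant 1}L^{2}/(4k^{2}\pi^{2})=L^{2}/(2\pi^{2})\cdot\pi^{2}/6=L^{2}/12$, which the partial sum reproduces to well within the stated tolerance.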
###### Corollary 4.9 (Antiperiodic boundary conditions).
Assume Hypothesis 2.1, consider $T_{\pi,I_{2}}$ as described in Theorem 2.2
$(ii)$, and let $m_{0}=0,1$, denote the multiplicity of zero as an eigenvalue
of $T_{\pi,I_{2}}$. Then,
$\displaystyle\zeta(n;T_{\pi,I_{2}})=-\text{\rm
Res}\left[z^{-n}\dfrac{d}{dz}\text{\rm ln}(F_{\pi,I_{2}}(z));\
z=0\right]=-n\,b_{n},\quad n\in{\mathbb{N}},$ (4.44)
where for $m_{0}=0$,
$\displaystyle b_{1}$
$\displaystyle=\big{[}\theta_{1}(b)+\phi^{[1]}_{1}(b)\big{]}\big{/}\big{[}\theta_{0}(b)+\phi^{[1]}_{0}(b)+2\big{]},$
(4.45) $\displaystyle b_{j}$
$\displaystyle=\dfrac{\theta_{j}(b)+\phi^{[1]}_{j}(b)}{\theta_{0}(b)+\phi^{[1]}_{0}(b)+2}-\sum_{\ell=1}^{j-1}\frac{\ell}{j}\,\dfrac{\theta_{j-\ell}(b)+\phi^{[1]}_{j-\ell}(b)}{\theta_{0}(b)+\phi^{[1]}_{0}(b)+2}b_{\ell},\quad
j\in{\mathbb{N}},\;j\geqslant 2,$
and for $m_{0}=1$,
$\displaystyle b_{1}$
$\displaystyle=\big{[}\theta_{2}(b)+\phi^{[1]}_{2}(b)\big{]}\big{/}\big{[}\theta_{1}(b)+\phi^{[1]}_{1}(b)\big{]},$
(4.46) $\displaystyle b_{j}$
$\displaystyle=\dfrac{\theta_{j+1}(b)+\phi^{[1]}_{j+1}(b)}{\theta_{1}(b)+\phi^{[1]}_{1}(b)}-\sum_{\ell=1}^{j-1}\frac{\ell}{j}\,\dfrac{\theta_{j-\ell+1}(b)+\phi^{[1]}_{j-\ell+1}(b)}{\theta_{1}(b)+\phi^{[1]}_{1}(b)}b_{\ell},\quad
j\in{\mathbb{N}},\;j\geqslant 2.$
In particular, if zero is not an eigenvalue of $T_{\pi,I_{2}}$, then
$\displaystyle\text{\rm
tr}_{L^{2}_{r}((a,b))}\big{(}T^{-1}_{\pi,I_{2}}\big{)}=\zeta(1;T_{\pi,I_{2}})=-\big{[}\theta_{1}(b)+\phi^{[1]}_{1}(b)\big{]}\big{/}\big{[}\theta_{0}(b)+\phi^{[1]}_{0}(b)+2\big{]}.$
(4.47)
###### Proof.
Take $\varphi=\pi$ and $R=I_{2}$ in Theorem 4.7. ∎
###### Corollary 4.10 (Krein–von Neumann extension).
Assume Hypothesis 2.1, consider $T_{0,R_{K}}$ the Krein–von Neumann extension
of $T_{min}$ with
$\displaystyle\varphi=0,\quad
R_{K}=\begin{pmatrix}\theta(0,b,a)&\phi(0,b,a)\\\
\theta^{[1]}(0,b,a)&\phi^{[1]}(0,b,a)\end{pmatrix},$ (4.48)
and let $m_{0}=2$, denote the multiplicity of zero as an eigenvalue of
$T_{0,R_{K}}$. Then,
$\displaystyle\zeta(n;T_{0,R_{K}})=-\text{\rm
Res}\left[z^{-n}\dfrac{d}{dz}\text{\rm ln}(F_{0,R_{K}}(z));\
z=0\right]=-n\,b_{n},\quad n\in{\mathbb{N}},$ (4.49)
where
$\displaystyle
b_{1}=\dfrac{\phi_{0}(b)\theta_{3}^{[1]}(b)-\phi_{0}^{[1]}(b)\theta_{3}(b)+\theta_{0}^{[1]}(b)\phi_{3}(b)-\theta_{0}(b)\phi_{3}^{[1]}(b)}{\phi_{0}(b)\theta_{2}^{[1]}(b)-\phi_{0}^{[1]}(b)\theta_{2}(b)+\theta_{0}^{[1]}(b)\phi_{2}(b)-\theta_{0}(b)\phi_{2}^{[1]}(b)},$
$\displaystyle
b_{j}=\dfrac{\phi_{0}(b)\theta_{j+2}^{[1]}(b)-\phi_{0}^{[1]}(b)\theta_{j+2}(b)+\theta_{0}^{[1]}(b)\phi_{j+2}(b)-\theta_{0}(b)\phi_{j+2}^{[1]}(b)}{\phi_{0}(b)\theta_{2}^{[1]}(b)-\phi_{0}^{[1]}(b)\theta_{2}(b)+\theta_{0}^{[1]}(b)\phi_{2}(b)-\theta_{0}(b)\phi_{2}^{[1]}(b)}$
$\displaystyle\qquad-\sum_{\ell=1}^{j-1}\frac{\ell}{j}\,\dfrac{\phi_{0}(b)\theta_{j-\ell+2}^{[1]}(b)-\phi_{0}^{[1]}(b)\theta_{j-\ell+2}(b)+\theta_{0}^{[1]}(b)\phi_{j-\ell+2}(b)-\theta_{0}(b)\phi_{j-\ell+2}^{[1]}(b)}{\phi_{0}(b)\theta_{2}^{[1]}(b)-\phi_{0}^{[1]}(b)\theta_{2}(b)+\theta_{0}^{[1]}(b)\phi_{2}(b)-\theta_{0}(b)\phi_{2}^{[1]}(b)}b_{\ell},$
$\displaystyle\hskip 256.0748ptj\in{\mathbb{N}},\;j\geqslant 2.$ (4.50)
###### Proof.
As shown in [15, Example 3.3], the resulting operator $T_{0,R_{K}}$ represents
the Krein–von Neumann extension of $T_{min}$. Take $\varphi=0$ and $R=R_{K}$
(as defined by (4.48)) in Theorem 4.7, denoting $\phi(0,b,a)=\phi_{0}(b)$,
$\phi_{0}^{[1]}(b)=\phi^{[1]}(0,b,a)$, $\theta_{0}(b)=\theta(0,b,a)$, and
$\theta_{0}^{[1]}(b)=\theta^{[1]}(0,b,a)$ as before, for simplicity. ∎
## 5\. Examples
In this section, we provide an array of examples illustrating our approach for
computing spectral $\zeta$-function values of regular Schrödinger operators
starting with the simplest case of $q=0$, then a positive (piecewise) constant
potential, followed by a constant negative potential, and ending with the case
of a linear potential.
Throughout this section we suppose that
$p=r=1\,\text{ a.e.~{}on }\,(a,b)$ (5.1)
which leaves only the real-valued potential coefficient $q\in L^{1}((a,b);dx)$, so that the differential expression reduces to
$\tau=-\big{(}d^{2}/dx^{2}\big{)}+q(x),\quad x\in(a,b).$ (5.2)
### 5.1. The Example $q=0$
We start by providing examples for calculating spectral $\zeta$-function
values for the simple case $q(x)=0$, $x\in(a,b)$, imposing various boundary
conditions. In this case $\tau y=-y^{\prime\prime}=zy$ has the following
linearly independent solutions,
$\displaystyle\phi(z,x,a)=z^{-1/2}\sin\big{(}z^{1/2}(x-a)\big{)},\quad\theta(z,x,a)=\cos\big{(}z^{1/2}(x-a)\big{)},\quad
z\in{\mathbb{C}}.$ (5.3)
Hence,
$\displaystyle\phi(z,b,a)$
$\displaystyle=\sum_{m=0}^{\infty}z^{m}\phi_{m}(b),\quad
z\in{\mathbb{C}},\quad\phi_{k}(b)=\dfrac{(-1)^{k}}{(2k+1)!}(b-a)^{2k+1},\quad k\in{\mathbb{N}}_{0},$
$\displaystyle\theta(z,b,a)$
$\displaystyle=\sum_{m=0}^{\infty}z^{m}\theta_{m}(b),\quad
z\in{\mathbb{C}},\quad\theta_{k}(b)=\dfrac{(-1)^{k}}{(2k)!}(b-a)^{2k},\quad k\in{\mathbb{N}}_{0},$
(5.4) $\displaystyle\phi^{[1]}(z,b,a)$
$\displaystyle=\sum_{m=0}^{\infty}z^{m}\phi^{[1]}_{m}(b),\quad
z\in{\mathbb{C}},\quad\phi^{[1]}_{k}(b)=\dfrac{(-1)^{k}}{(2k)!}(b-a)^{2k},\quad k\in{\mathbb{N}}_{0},$
$\displaystyle\theta^{[1]}(z,b,a)$
$\displaystyle=\sum_{m=0}^{\infty}z^{m}\theta^{[1]}_{m}(b),\quad
z\in{\mathbb{C}},\quad\theta^{[1]}_{0}(b)=0,\quad\theta^{[1]}_{k}(b)=\dfrac{(-1)^{k}}{(2k-1)!}(b-a)^{2k-1},\quad k\in{\mathbb{N}}$ (5.5)
(recall that $p=1$ here, so $\phi^{[1]}=\phi^{\prime}$ and $\theta^{[1]}=\theta^{\prime}$).
One can explicitly write the corresponding expressions for
$F_{\alpha,\beta}(z)$ and $F_{\varphi,R}(z)$ for this example to find for
$\alpha,\beta\in[0,\pi)$,
$\displaystyle F_{\alpha,\beta}(z)=\cos(\alpha)\big{[}-\sin(\beta)\
\cos\big{(}z^{1/2}(b-a)\big{)}+\cos(\beta)z^{-1/2}\sin\big{(}z^{1/2}(b-a)\big{)}\big{]}$
$\displaystyle\quad-\sin(\alpha)\big{[}\sin(\beta)\
z^{1/2}\sin\big{(}z^{1/2}(b-a)\big{)}+\cos(\beta)\
\cos\big{(}z^{1/2}(b-a)\big{)}\big{]},$ (5.6)
and for $\varphi\in[0,2\pi),\ R\in SL(2,{\mathbb{R}})$,
$\displaystyle
F_{\varphi,R}(z)=e^{i\varphi}\big{[}-R_{12}z^{1/2}\sin\big{(}z^{1/2}(b-a)\big{)}-R_{22}\cos\big{(}z^{1/2}(b-a)\big{)}$
$\displaystyle\quad+R_{21}z^{-1/2}\sin\big{(}z^{1/2}(b-a)\big{)}-R_{11}\cos\big{(}z^{1/2}(b-a)\big{)}\big{]}+e^{2i\varphi}+1.$
(5.7)
We provide an explicit expression for $\zeta(1;T_{A,B})$ since it only
involves the first few coefficients of the small-$z$ expansion. In the case of
separated boundary conditions one obtains
$\displaystyle\begin{split}a_{0}&=\cos(\alpha)((b-a)\cos(\beta)-\sin(\beta))-\sin(\alpha)\cos(\beta),\\\
a_{1}&=\cos(\alpha)\left(\frac{1}{2}(b-a)^{2}\sin(\beta)-\frac{1}{6}(b-a)^{3}\cos(\beta)\right)\\\
&\quad+\sin(\alpha)\left(\frac{1}{2}(b-a)^{2}\cos(\beta)-(b-a)\sin(\beta)\right),\\\
a_{2}&=\sin(\alpha)\left(\frac{1}{6}(b-a)^{3}\sin(\beta)-\frac{1}{24}(b-a)^{4}\cos(\beta)\right)\\\
&\quad+\cos(\alpha)\left(\frac{1}{120}(b-a)^{5}\cos(\beta)-\frac{1}{24}(b-a)^{4}\sin(\beta)\right).\end{split}$
(5.8)
If $T_{\alpha,\beta}$ does not have a zero eigenvalue, then $a_{0}\neq 0$ and,
hence, one finds from (4.4),
$\displaystyle\text{\rm
tr}_{L^{2}_{r}((a,b))}\big{(}T_{\alpha,\beta}^{-1}\big{)}=\zeta(1;T_{\alpha,\beta})=-\dfrac{a_{1}}{a_{0}}$
$\displaystyle\quad=-\dfrac{\cos(\alpha)\left(\frac{1}{2}(b-a)^{2}\sin(\beta)-\frac{1}{6}(b-a)^{3}\cos(\beta)\right)+\sin(\alpha)\left(\frac{1}{2}(b-a)^{2}\cos(\beta)-(b-a)\sin(\beta)\right)}{\cos(\alpha)((b-a)\cos(\beta)-\sin(\beta))-\sin(\alpha)\cos(\beta)}.$
(5.9)
If, instead, $T_{\alpha,\beta}$ has a zero eigenvalue then $a_{0}=0$ and one
finds
$\displaystyle\zeta(1;T_{\alpha,\beta})=-\dfrac{a_{2}}{a_{1}}=-\dfrac{\sin(\alpha)\left(\frac{1}{6}(b-a)^{3}\sin(\beta)-\frac{1}{24}(b-a)^{4}\cos(\beta)\right)+\cos(\alpha)\left(\frac{1}{120}(b-a)^{5}\cos(\beta)-\frac{1}{24}(b-a)^{4}\sin(\beta)\right)}{\cos(\alpha)\left(\frac{1}{2}(b-a)^{2}\sin(\beta)-\frac{1}{6}(b-a)^{3}\cos(\beta)\right)+\sin(\alpha)\left(\frac{1}{2}(b-a)^{2}\cos(\beta)-(b-a)\sin(\beta)\right)}.$ (5.10)
In the case of coupled boundary conditions one finds
$\displaystyle a_{0}$
$\displaystyle=e^{i\varphi}((b-a)R_{21}-R_{11}-R_{22})+e^{2i\varphi}+1,$
$\displaystyle a_{1}$
$\displaystyle=e^{i\varphi}\left(-\frac{1}{6}(b-a)^{3}R_{21}+\frac{1}{2}(b-a)^{2}R_{11}+\frac{1}{2}(b-a)^{2}R_{22}+(a-b)R_{12}\right),$
(5.11) $\displaystyle a_{2}$
$\displaystyle=e^{i\varphi}\left(\frac{1}{120}(b-a)^{5}R_{21}-\frac{1}{24}(b-a)^{4}R_{11}-\frac{1}{24}(b-a)^{4}R_{22}+\frac{1}{6}(b-a)^{3}R_{12}\right).$
Once again, if zero is not an eigenvalue of $T_{\varphi,R}$, $a_{0}\neq 0$ and
one finds
$\displaystyle\begin{split}&\text{\rm
tr}_{L^{2}_{r}((a,b))}\big{(}T_{\varphi,R}^{-1}\big{)}=\zeta(1;T_{\varphi,R})\\\
&\quad=\dfrac{e^{i\varphi}\left(R_{21}(b-a)^{3}-3(b-a)^{2}R_{11}-3(b-a)^{2}R_{22}+6(b-a)R_{12}\right)}{6e^{i\varphi}((b-a)R_{21}-R_{11}-R_{22})+6e^{2i\varphi}+6}.\end{split}$
(5.12)
If, on the other hand, zero is an eigenvalue of $T_{\varphi,R}$ with
multiplicity one, then $a_{0}=0$ and
$\displaystyle\begin{split}\zeta(1;T_{\varphi,R})=\dfrac{(b-a)^{5}R_{21}-5(b-a)^{4}R_{11}-5(b-a)^{4}R_{22}+20(b-a)^{3}R_{12}}{20(b-a)^{3}R_{21}-60(b-a)^{2}R_{11}-60(b-a)^{2}R_{22}+120(b-a)R_{12}}.\end{split}$
(5.13)
If zero is an eigenvalue of $T_{\varphi,R}$ with multiplicity two, we refer to
the Krein–von Neumann extension, see Example 5.5.
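As a concrete sanity check of (5.12) (our own sketch, not part of the original text), take antiperiodic conditions $\varphi=\pi$, $R=I_{2}$: zero is then not an eigenvalue, the eigenvalues are $(2k-1)^{2}\pi^{2}/(b-a)^{2}$, $k\geqslant 1$, each of multiplicity two, and both the trace formula and the direct eigenvalue sum should give $(b-a)^{2}/4$:

```python
import cmath
import math

L = 2.0                                   # interval length b - a (illustrative)
phi = math.pi                             # antiperiodic boundary conditions
R11, R12, R21, R22 = 1.0, 0.0, 0.0, 1.0   # R = I_2

e1, e2 = cmath.exp(1j * phi), cmath.exp(2j * phi)
# Trace formula (5.12) for coupled boundary conditions with q = 0
trace = (e1 * (R21 * L**3 - 3 * L**2 * R11 - 3 * L**2 * R22 + 6 * L * R12)
         / (6 * e1 * (L * R21 - R11 - R22) + 6 * e2 + 6))

# Direct eigenvalue sum: (2k-1)^2 pi^2 / L^2, each of multiplicity 2
K = 200_000
direct = 2 * sum(L**2 / ((2 * k - 1) * math.pi) ** 2 for k in range(1, K + 1))

formula_ok = abs(trace - L**2 / 4) < 1e-9
sum_ok = abs(direct - L**2 / 4) < 1e-4
print(formula_ok, sum_ok)
```

The eigenvalue sum converges since $2\sum_{k\geqslant 1}L^{2}/((2k-1)^{2}\pi^{2})=2L^{2}/\pi^{2}\cdot\pi^{2}/8=L^{2}/4$, matching the value produced by (5.12).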
Finally we give the form of the zeta regularized functional determinant for
this example. As $z\downarrow 0$, one obtains
$\displaystyle
F_{\alpha,\beta}(z)=(b-a)\cos(\alpha)\cos(\beta)-\sin(\alpha+\beta)+O\big{(}z^{1/2}\big{)},$
(5.14)
which implies that for particular values of $\alpha$ and $\beta$ one finds a
zero eigenvalue. For now we will assume that no zero eigenvalue is present and
hence we consider the following set of parameters
$\displaystyle{\mathcal{A}}=\\{\alpha,\beta\in(0,\pi)|(b-a)\cos(\alpha)\cos(\beta)-\sin(\alpha+\beta)\neq
0\\}.$ (5.15)
For $\alpha,\beta\in{\mathcal{A}}$ we have, by construction, that $m_{0}=0$
and the product $\sin(\alpha)\sin(\beta)\neq 0$. The latter condition implies
that in (3.93) one must set $k_{0}=-2$. By using (5.14), one obtains
$\displaystyle\begin{split}\zeta^{\prime}(0;T_{\alpha,\beta})&=-\text{\rm
ln}\left(\left|\frac{2{\mathcal{F}}_{\alpha,\beta}(0)}{\sin(\alpha)\sin(\beta)}\right|\right)\\\
&=-\text{\rm
ln}\left(\left|\frac{2(b-a)\cos(\alpha)\cos(\beta)-2\sin(\alpha+\beta)}{\sin(\alpha)\sin(\beta)}\right|\right),\end{split}$
(5.16)
which coincides with [33, Eq. (3.72)].
Furthermore, as $z\downarrow 0$, one obtains
$\displaystyle
F_{\varphi,R}(z)=e^{i\varphi}[(b-a)R_{21}-R_{11}-R_{22}]+e^{2i\varphi}+1+O\big{(}z^{1/2}\big{)},$
(5.17)
which implies that for particular choices of $\varphi$ and $R$ one finds a
zero eigenvalue. For now we will assume that no zero eigenvalue is present and
hence we consider the following set of parameters
$\displaystyle{\mathcal{B}}=\\{\varphi\in(0,2\pi),R\in
SL(2,{\mathbb{R}})|e^{i\varphi}[(b-a)R_{21}-R_{11}-R_{22}]+e^{2i\varphi}+1\neq
0\\}.$ (5.18)
For $\varphi,R\in{\mathcal{B}}$ we have, by construction, that $m_{0}=0$.
Making the additional assumption $R_{12}\neq 0$ implies that in (3.93) one
must set $k_{0}=-2$. By using (5.17), one obtains
$\displaystyle\begin{split}\zeta^{\prime}(0;T_{\varphi,R})&=-\text{\rm
ln}\left(\left|2{\mathcal{F}}_{\varphi,R}(0)/R_{12}\right|\right)\\\
&=-\text{\rm
ln}\left(\left|\frac{2[(b-a)R_{21}-R_{11}-R_{22}]+4\cos(\varphi)}{R_{12}}\right|\right).\end{split}$
(5.19)
If $R_{12}=0$, then since $R\in SL(2,{\mathbb{R}})$, necessarily
$R_{11}\neq-R_{22}$, which implies that in (3.93) one must set $k_{0}=-1$. By
once again using (5.17), one obtains
$\displaystyle\begin{split}\zeta^{\prime}(0;T_{\varphi,\widetilde{R}})&=-\text{\rm
ln}\left(\left|\frac{2{\mathcal{F}}_{\varphi,\widetilde{R}}(0)}{R_{11}+R_{22}}\right|\right)\\\
&=-\text{\rm
ln}\left(\left|\frac{2[(b-a)R_{21}-R_{11}-R_{22}]+4\cos(\varphi)}{R_{11}+R_{22}}\right|\right).\end{split}$
(5.20)
The following examples, each with different boundary conditions, will
illustrate how the main theorems and corollaries of the previous section can
be used to effectively compute the spectral $\zeta$-function values
$\zeta(n;T_{A,B})$, $n\in{\mathbb{N}}$.
###### Example 5.1 (Dirichlet boundary conditions).
Consider the case $\alpha=\beta=0$. Then the operator $T_{0,0}$ has
eigenvalues and eigenfunctions given by
$\displaystyle\lambda_{k}=k^{2}\pi^{2}\big{/}(b-a)^{2},\quad
y_{k}(x)=\lambda_{k}^{-1/2}\sin\big{(}\lambda_{k}^{1/2}(x-a)\big{)},\quad
k\in{\mathbb{N}}$ (5.21)
$($in particular, $z=0$ is not an eigenvalue of $T_{0,0}$$)$, and
$\displaystyle F_{0,0}(z)=z^{-1/2}\sin\big{(}z^{1/2}(b-a)\big{)},\quad
z\in{\mathbb{C}}.$ (5.22)
Applying Corollary 4.3 with $m_{0}=0$ one finds for $n=1,2,3,4$,
$\displaystyle\begin{split}&\zeta(1;T_{0,0})=(b-a)^{2}\pi^{-2}\sum_{k=1}^{\infty}k^{-2}=\text{\rm
tr}_{L^{2}_{r}((a,b))}\big{(}T^{-1}_{0,0}\big{)}=(b-a)^{2}/6,\\\
&\zeta(2;T_{0,0})=(b-a)^{4}/90,\\\ &\zeta(3;T_{0,0})=(b-a)^{6}/945,\\\
&\zeta(4;T_{0,0})=(b-a)^{8}/9450.\end{split}$ (5.23)
Next, we explicitly compute the zeta regularized functional determinant with
Dirichlet boundary conditions. Since no zero eigenvalue is present and
$\Gamma_{0}=-(b-a)$, one easily obtains
$\displaystyle\zeta^{\prime}(0;T_{0,0})=-\text{\rm ln}[2F_{0,0}(0)]=-\text{\rm
ln}[2(b-a)].$ (5.24)
One can corroborate the values found in Example 5.1 by utilizing the following
relation of $\zeta(s;T_{0,0})$ with the Riemann $\zeta$-function (see, e.g.,
[8], [18] for some background)
$\displaystyle\zeta(s;T_{0,0})=(b-a)^{2s}\pi^{-2s}\zeta(2s),\quad\text{\rm
Re}(s)>1/2.$ (5.25)
By using [37, 0.2333], the last expression allows us to find for
$s=n\in{\mathbb{N}}$,
$\displaystyle\zeta(n;T_{0,0})=2^{2n-1}(b-a)^{2n}|B_{2n}|/[(2n)!],$ (5.26)
where $B_{2n}$ is the $2n$th Bernoulli number (cf. [1, Ch. 23]).
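The explicit values (5.23) and the closed form (5.26) invite a quick numerical cross-check. The following is a minimal sketch (assuming, purely for illustration, an interval of length $b-a=1$; the function names and truncation are ours):

```python
import math

# Truncated eigenvalue sum for zeta(n; T_{0,0}) with lambda_k = k^2 pi^2
# (illustrative interval of length b - a = 1).
def zeta_dirichlet(n, terms=100_000):
    return sum((k * k * math.pi * math.pi) ** (-n) for k in range(1, terms + 1))

# Closed form (5.26): 2^(2n-1) |B_{2n}| / (2n)!; the Bernoulli numbers
# |B_2|, ..., |B_8| are hardcoded (cf. [1, Ch. 23]).
ABS_BERNOULLI = {2: 1 / 6, 4: 1 / 30, 6: 1 / 42, 8: 1 / 30}

def zeta_dirichlet_closed(n):
    return 2 ** (2 * n - 1) * ABS_BERNOULLI[2 * n] / math.factorial(2 * n)
```

For $n=1$ the truncation error of the direct sum decays only like $1/(\pi^{2}N)$, so a few digits are reproduced; for $n\geq 2$ the agreement is essentially to machine precision.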
###### Example 5.2 (Neumann boundary conditions).
Consider the case $\alpha=\beta=\pi/2$. Then the operator $T_{\pi/2,\pi/2}$
has eigenvalues and eigenfunctions given by
$\displaystyle\lambda_{k}=k^{2}\pi^{2}/(b-a)^{2},\quad
y_{k}(x)=\cos\big{(}\lambda_{k}^{1/2}(x-a)\big{)},\quad k\in{\mathbb{N}}_{0}$
(5.27)
$($in particular, $z=0$ is a simple eigenvalue of $T_{\pi/2,\pi/2}$$)$ and
$\displaystyle F_{\pi/2,\pi/2}(z)=-z^{1/2}\sin\big{(}z^{1/2}(b-a)\big{)},\quad
z\in{\mathbb{C}}.$ (5.28)
Applying Corollary 4.6 with $m_{0}=1$ one finds for $n=1,2,3,4$,
$\displaystyle\begin{split}&\zeta(1;T_{\pi/2,\pi/2})=(b-a)^{2}\pi^{-2}\sum_{k=1}^{\infty}k^{-2}=(b-a)^{2}/6,\\\
&\zeta(2;T_{\pi/2,\pi/2})=(b-a)^{4}/90,\\\
&\zeta(3;T_{\pi/2,\pi/2})=(b-a)^{6}/945,\\\
&\zeta(4;T_{\pi/2,\pi/2})=(b-a)^{8}/9450.\end{split}$ (5.29)
Noting that the series expression for $\zeta(s;T_{\pi/2,\pi/2})$ in (2.38)
sums only over non-zero eigenvalues, and that the eigenvalues for Dirichlet
and Neumann boundary conditions only differ by zero being an eigenvalue for
the latter, but not the former, the same expressions apply as in Example 5.1,
which is reflected in equations (5.23) and (5.29) yielding the same values.
###### Example 5.3 (Periodic boundary conditions).
Consider the case $\varphi=0,\ R=I_{2}$. Then the operator $T_{0,I_{2}}$ has
eigenvalues given by
$\displaystyle\lambda_{k}=(2k)^{2}\pi^{2}/(b-a)^{2},\quad
k\in{\mathbb{N}}_{0}.$ (5.30)
In particular, $z=0$ is a simple eigenvalue of $T_{0,I_{2}}$ and all other
eigenvalues of $T_{0,I_{2}}$ are of multiplicity 2, and
$\displaystyle F_{0,I_{2}}(z)=-2\cos\big{(}z^{1/2}(b-a)\big{)}+2,\quad
z\in{\mathbb{C}}.$ (5.31)
Applying Corollary 4.8 with $m_{0}=1$ one finds for $n=1,2,3,4$,
$\displaystyle\begin{split}&\zeta(1;T_{0,I_{2}})=2(b-a)^{2}\pi^{-2}\sum_{k=1}^{\infty}(2k)^{-2}=(b-a)^{2}/12,\\\
&\zeta(2;T_{0,I_{2}})=(b-a)^{4}/720,\\\
&\zeta(3;T_{0,I_{2}})=(b-a)^{6}/30240,\\\
&\zeta(4;T_{0,I_{2}})=(b-a)^{8}/1209600.\end{split}$ (5.32)
Here, once again, one can verify the values found in Example 5.3 by utilizing
the following relation of $\zeta(s;T_{0,I_{2}})$ with the Riemann
$\zeta$-function,
$\displaystyle\zeta(s;T_{0,I_{2}})=2^{1-2s}\pi^{-2s}(b-a)^{2s}\zeta(2s),\quad\text{\rm
Re}(s)>1/2.$ (5.33)
By using [37, 0.2333], the last expression allows one to find for
$s=n\in{\mathbb{N}}$,
$\displaystyle\zeta(n;T_{0,I_{2}})=(b-a)^{2n}|B_{2n}|/[(2n)!].$ (5.34)
###### Example 5.4 (Antiperiodic boundary conditions).
Consider the case $\varphi=\pi,\ R=I_{2}$. Then the operator $T_{\pi,I_{2}}$
has eigenvalues given by
$\displaystyle\lambda_{k}=(2k-1)^{2}\pi^{2}/(b-a)^{2},\quad k\in{\mathbb{N}}.$
(5.35)
In particular, $z=0$ is not an eigenvalue of $T_{\pi,I_{2}}$ and all
eigenvalues of $T_{\pi,I_{2}}$ are of multiplicity 2, and
$\displaystyle F_{\pi,I_{2}}(z)=2\cos\big{(}z^{1/2}(b-a)\big{)}+2,\quad
z\in{\mathbb{C}}.$ (5.36)
Applying Corollary 4.9 with $m_{0}=0$ one finds for $n=1,2,3,4$,
$\displaystyle\begin{split}&\zeta(1;T_{\pi,I_{2}})=2(b-a)^{2}\pi^{-2}\sum_{k=1}^{\infty}(2k-1)^{-2}=\text{\rm
tr}_{L^{2}_{r}((a,b))}\big{(}T^{-1}_{\pi,I_{2}}\big{)}=(b-a)^{2}/4,\\\
&\zeta(2;T_{\pi,I_{2}})=(b-a)^{4}/48,\\\
&\zeta(3;T_{\pi,I_{2}})=(b-a)^{6}/480,\\\
&\zeta(4;T_{\pi,I_{2}})=[17/80640](b-a)^{8}.\end{split}$ (5.37)
One can verify the values found in Example 5.4 by utilizing the following
relation,
$\displaystyle\zeta(s;T_{\pi,I_{2}})=2(b-a)^{2s}\pi^{-2s}\sum_{k\in{\mathbb{N}}}(2k-1)^{-2s}=\big{(}1-2^{-2s}\big{)}2(b-a)^{2s}\pi^{-2s}\zeta(2s),$
$\displaystyle\hskip 256.0748pt\text{\rm Re}(s)>1/2,$ (5.38)
which in turn by using either [37, 0.2335] on the first equality or [37,
0.2333] on the second allows one to find for $s=n\in{\mathbb{N}}$,
$\displaystyle\zeta(n;T_{\pi,I_{2}})=(2^{2n}-1)(b-a)^{2n}|B_{2n}|/[(2n)!].$
(5.39)
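The Bernoulli-number formulas (5.34) and (5.39) can be checked the same way, summing the doubly degenerate eigenvalues directly. Again a sketch, with the illustrative normalization $b-a=1$:

```python
import math

ABS_BERNOULLI = {2: 1 / 6, 4: 1 / 30, 6: 1 / 42, 8: 1 / 30}

# Periodic: doubly degenerate eigenvalues (2 k pi)^2, k >= 1 (zero mode excluded).
def zeta_periodic(n, terms=100_000):
    return 2 * sum((2 * k * math.pi) ** (-2 * n) for k in range(1, terms + 1))

# Antiperiodic: doubly degenerate eigenvalues ((2k - 1) pi)^2, k >= 1.
def zeta_antiperiodic(n, terms=100_000):
    return 2 * sum(((2 * k - 1) * math.pi) ** (-2 * n) for k in range(1, terms + 1))
```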
###### Example 5.5 (Krein–von Neumann boundary conditions).
Consider the case $\varphi=0,\ R=R_{K}$, with
$\displaystyle R_{K}=\begin{pmatrix}\theta(0,b,a)&\phi(0,b,a)\\\
\theta^{[1]}(0,b,a)&\phi^{[1]}(0,b,a)\end{pmatrix}=\begin{pmatrix}1&b-a\\\
0&1\end{pmatrix}.$ (5.40)
As shown in [15, Example 3.3], the resulting operator $T_{0,R_{K}}$ represents
the Krein–von Neumann extension of $T_{min}$. For more on the Krein–von
Neumann extension, including an extensive discussion of eigenvalues and
eigenfunctions, see [2] or [4]. From (2) with $\varphi=0,\ R=R_{K}$ defined as
in (5.40),
$\displaystyle
F_{0,R_{K}}(z)=(a-b)z^{1/2}\sin\big{(}z^{1/2}(b-a)\big{)}-2\cos\big{(}z^{1/2}(b-a)\big{)}+2,\quad
z\in{\mathbb{C}}.$ (5.41)
Using the series expansions in (5.41), one finds
$\displaystyle F_{0,R_{K}}(z)\underset{z\downarrow
0}{=}\big{[}(b-a)^{4}/12\big{]}z^{2}+O\big{(}z^{3}\big{)},$ (5.42)
so that $z=0$ is a zero of multiplicity two of $F_{0,R_{K}}(z)$ and hence an
eigenvalue of multiplicity two of $T_{0,R_{K}}$ $($coinciding with what was
found in [4] and noted in [33, Example 3.7]$)$. Applying Corollary 4.10 with
$m_{0}=2$ gives
$\displaystyle\begin{split}\zeta(1;T_{0,R_{K}})&=(b-a)^{2}/15,\\\
\zeta(2;T_{0,R_{K}})&=[11/12600](b-a)^{4},\\\
\zeta(3;T_{0,R_{K}})&=(b-a)^{6}/54000,\\\
\zeta(4;T_{0,R_{K}})&=[457/317520000](b-a)^{8}.\end{split}$ (5.43)
### 5.2. Examples of Nonnegative (Piecewise) Constant Potentials
Next, we provide examples of calculating spectral $\zeta$-function values for
a positive (piecewise) constant potential $q$, imposing Dirichlet boundary
conditions.
###### Example 5.6.
Let $V_{0}\in(0,\infty)$, consider $q(x)=V_{0}$, $x\in(a,b)$, and denote by
$T_{0,0}$ the associated Schrödinger operator with Dirichlet boundary
conditions at $a$ and $b$ $($i.e., $\alpha=\beta=0$$)$. Then,
$\displaystyle\begin{split}&\phi(z,x,a)=(z-V_{0})^{-1/2}\sin\big{(}(z-V_{0})^{1/2}(x-a)\big{)},\\\
&\theta(z,x,a)=\cos\big{(}(z-V_{0})^{1/2}(x-a)\big{)},\quad
x\in(a,b),\;z\in{\mathbb{C}}.\end{split}$ (5.44)
Furthermore, the eigenvalues and eigenfunctions for $T_{0,0}$ with
$q(x)=V_{0}>0,\ x\in(a,b),$ are given by
$\displaystyle\begin{split}&\lambda_{k}=k^{2}\pi^{2}/(b-a)^{2}+V_{0},\\\
&y_{k}(x)=(\lambda_{k}-V_{0})^{-1/2}\sin\big{(}(\lambda_{k}-V_{0})^{1/2}(x-a)\big{)},\quad
k\in{\mathbb{N}}\end{split}$ (5.45)
$($in particular, $z=0$ is not an eigenvalue of $T_{0,0}$$)$, and
$\displaystyle
F_{0,0}(z)=(z-V_{0})^{-1/2}\sin\big{(}(z-V_{0})^{1/2}(b-a)\big{)},\quad
z\in{\mathbb{C}}.$ (5.46)
Applying Corollary 4.3 with $m_{0}=0$ one finds for $n=1,2,3$ $($the
expression for $n=4$ is significantly longer and hence is omitted here$)$,
$\displaystyle\zeta(1;T_{0,0})=\sum_{k=1}^{\infty}\left[\frac{k^{2}\pi^{2}}{(b-a)^{2}}+V_{0}\right]^{-1}=\text{\rm
tr}_{L^{2}_{r}((a,b))}\big{(}T^{-1}_{0,0}\big{)}$ $\displaystyle\hskip
39.26494pt=\big{[}V_{0}^{1/2}(b-a)\coth\big{(}V_{0}^{1/2}(b-a)\big{)}-1\big{]}\big{/}(2V_{0}),$
$\displaystyle\zeta(2;T_{0,0})=\big{[}V_{0}^{1/2}(b-a)\sinh\big{(}2V_{0}^{1/2}(b-a)\big{)}+2V_{0}(b-a)^{2}$
$\displaystyle\hskip
39.26494pt\quad-4\sinh^{2}\big{(}V_{0}^{1/2}(b-a)\big{)}\big{]}\big{/}\big{[}8V_{0}^{2}\sinh^{2}\big{(}V_{0}^{1/2}(b-a)\big{)}\big{]},$
$\displaystyle\zeta(3;T_{0,0})=\big{(}64V_{0}^{3}\sinh^{2}\big{(}V_{0}^{1/2}(b-a)\big{)}\big{)}^{-1}\big{[}12V_{0}(b-a)^{2}-16\cosh\big{(}2V_{0}^{1/2}(b-a)\big{)}$
$\displaystyle\hskip
39.26494pt\quad+16+V_{0}^{1/2}(b-a)\big{(}8a^{2}V_{0}-16abV_{0}+8b^{2}V_{0}-3\big{)}\coth\big{(}V_{0}^{1/2}(b-a)\big{)}$
$\displaystyle\hskip
39.26494pt\quad-3aV_{0}^{1/2}\cosh\big{(}3V_{0}^{1/2}(b-a)\big{)}\big{(}\sinh\big{(}V_{0}^{1/2}(b-a)\big{)}\big{)}^{-1}$
$\displaystyle\hskip
39.26494pt\quad+3bV_{0}^{1/2}\cosh\big{(}3V_{0}^{1/2}(b-a)\big{)}\big{(}\sinh\big{(}V_{0}^{1/2}(b-a)\big{)}\big{)}^{-1}\big{]}.$
(5.47)
Taking the limit $V_{0}\downarrow 0$ of (5.47) recovers the expressions in
Example 5.1.
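The trace formula for $\zeta(1;T_{0,0})$ in (5.47) is easy to stress-test numerically; the sketch below uses the illustrative values $V_{0}=10$ and $b-a=1$ (the names and tolerances are ours):

```python
import math

def zeta1_const(V0, L=1.0, terms=200_000):
    # Truncated sum over lambda_k = (k pi / L)^2 + V0.
    return sum(1.0 / ((k * math.pi / L) ** 2 + V0) for k in range(1, terms + 1))

def zeta1_const_closed(V0, L=1.0):
    # Closed form from (5.47): [sqrt(V0) L coth(sqrt(V0) L) - 1] / (2 V0).
    w = math.sqrt(V0) * L
    return (w / math.tanh(w) - 1.0) / (2.0 * V0)
```

Taking $V_{0}$ very small in the closed form also reproduces the limiting value $(b-a)^{2}/6$ of Example 5.1 numerically.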
###### Remark 5.7.
One can also verify the expressions found in Example 5.6 by means of the one-
dimensional Epstein $\zeta$-function given by
$\displaystyle\zeta_{E}(s;m^{2})=\sum_{k=-\infty}^{\infty}\big{(}k^{2}+m^{2}\big{)}^{-s},\quad
m^{2}\neq 0,\;s>1/2$ (5.48)
(see, e.g., the classical sources [23], [24], [44], and [21, Sect. 1.1.3],
[22, Sects. 1.2.2, 5.3.2], [45], [46, Ch. 3, App. A], and the extensive list
of references therein). Now one finds that $\zeta(s;T_{0,0})$ in Example 5.6
can be written
$\displaystyle\begin{split}\zeta(s;T_{0,0})&=\sum_{k=1}^{\infty}\left[\frac{k^{2}\pi^{2}}{(b-a)^{2}}+V_{0}\right]^{-s}=(b-a)^{2s}\pi^{-2s}\sum_{k=1}^{\infty}\left[k^{2}+m^{2}\right]^{-s}\\\
&=2^{-1}(b-a)^{2s}\pi^{-2s}\big{[}\zeta_{E}(s;m^{2})-m^{-2s}\big{]},\quad
s>1/2,\end{split}$ (5.49)
where
$\displaystyle m^{2}=(b-a)^{2}V_{0}\pi^{-2}>0.$ (5.50)
Then the following formula for the analytic continuation of
$\zeta_{E}(s;m^{2})$ in $s$ for $m\neq 0,-1,-2,\ldots$ (see [21, Sect. 4.1.1])
$\displaystyle\begin{split}\zeta_{E}(s;m^{2})=\pi^{1/2}\dfrac{\Gamma(s-\frac{1}{2})}{\Gamma(s)}m^{1-2s}+\dfrac{4\pi^{s}}{\Gamma(s)}m^{1/2-s}\sum_{n=1}^{\infty}n^{s-1/2}K_{s-1/2}(2\pi
mn),\\\ s\neq(1/2)-\ell,\ \ell\in{\mathbb{N}}_{0},\
s\in{\mathbb{C}},\end{split}$ (5.51)
where $K_{\mu}(\,\cdot\,)$ is the modified Bessel function of the second kind
(see for example [1, Chs. 9-10]), can be used to explicitly verify the
expressions found in Example 5.6.
We verify the expressions for $\zeta(1;T_{0,0})$ and $\zeta(2;T_{0,0})$ next.
From (5.51) one has, using the fact that
$K_{1/2}(z)=\pi^{1/2}(2z)^{-1/2}e^{-z}$,
$\displaystyle\zeta_{E}(1;m^{2})$ $\displaystyle=\pi m^{-1}+4\pi
m^{-1/2}\sum_{n=1}^{\infty}n^{1/2}\pi^{1/2}(4\pi mn)^{-1/2}e^{-2\pi mn}$
$\displaystyle=\pi m^{-1}+2\pi m^{-1}\sum_{n=1}^{\infty}e^{-2\pi mn}=\pi
m^{-1}+2\pi m^{-1}\dfrac{1}{e^{2\pi m}-1}$
$\displaystyle=\dfrac{\pi}{m}\coth(\pi m).$ (5.52)
Thus, from (5.49) and (5.50) one obtains
$\displaystyle\zeta(1;T_{0,0})$
$\displaystyle=\dfrac{(b-a)^{2}}{2\pi^{2}}\big{(}\zeta_{E}(1;m^{2})-m^{-2}\big{)}=\dfrac{(b-a)^{2}}{2\pi^{2}}\bigg{(}\dfrac{\pi
m\coth(\pi m)-1}{m^{2}}\bigg{)}$
$\displaystyle=\big{[}V_{0}^{1/2}(b-a)\coth\big{(}V_{0}^{1/2}(b-a)\big{)}-1\big{]}\big{/}(2V_{0}),$
(5.53)
in accordance with Example 5.6.
Next we verify the expression for $\zeta(2;T_{0,0})$ by first noting that
$\displaystyle\dfrac{d}{dm}\big{(}\zeta_{E}(s;m^{2})\big{)}=-2sm\zeta_{E}(s+1;m^{2}),$
(5.54)
which implies the functional equation
$\displaystyle\zeta_{E}(s+1;m^{2})=-\dfrac{1}{2sm}\dfrac{d}{dm}\big{(}\zeta_{E}(s;m^{2})\big{)}.$
(5.55)
From (5.52) and (5.55) with $s=1$ one has
$\displaystyle\zeta_{E}(2;m^{2})$
$\displaystyle=-\dfrac{\pi}{2m}\dfrac{d}{dm}\bigg{(}\dfrac{\coth(\pi
m)}{m}\bigg{)}$ $\displaystyle=\dfrac{\pi\sinh(2\pi
m)+2\pi^{2}m}{4m^{3}\sinh^{2}(\pi m)}.$ (5.56)
Thus from (5.49) and (5.50) one obtains
$\displaystyle\zeta(2;T_{0,0})=\dfrac{(b-a)^{4}}{2\pi^{4}}\big{(}\zeta_{E}(2;m^{2})-m^{-4}\big{)}$
$\displaystyle\hskip
39.26494pt=\dfrac{(b-a)^{4}}{2\pi^{4}}\bigg{(}\dfrac{\pi\sinh(2\pi
m)+2\pi^{2}m}{4m^{3}\sinh^{2}(\pi m)}-\dfrac{1}{m^{4}}\bigg{)}$ (5.57)
again in accordance with Example 5.6. All other positive integer values can be
found recursively by means of (5.52) and the functional equation (5.55).
$\diamond$
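Both (5.52) and the expression (5.56) derived from it can be checked directly against truncated two-sided sums; the value $m=0.7$ below is an arbitrary illustrative choice:

```python
import math

def epstein(s, m, terms=200_000):
    # Truncated sum over k in Z of (k^2 + m^2)^(-s): the k = 0 term plus twice k >= 1.
    return m ** (-2 * s) + 2 * sum((k * k + m * m) ** (-s) for k in range(1, terms + 1))

def epstein1_closed(m):
    # (5.52): zeta_E(1; m^2) = (pi / m) coth(pi m)
    return math.pi / (m * math.tanh(math.pi * m))

def epstein2_closed(m):
    # (5.56): zeta_E(2; m^2) = [pi sinh(2 pi m) + 2 pi^2 m] / [4 m^3 sinh^2(pi m)]
    sh = math.sinh(math.pi * m)
    return (math.pi * math.sinh(2 * math.pi * m) + 2 * math.pi ** 2 * m) / (4 * m ** 3 * sh ** 2)
```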
Next, we turn to the case of a nonnegative piecewise constant potential (a
potential barrier):
###### Example 5.8.
Let $c,d\in(a,b)$, $c<d$, $V_{0}\in(0,\infty)$, consider
$\displaystyle q(x)=\begin{cases}0&x\in(a,c),\\\ V_{0}&x\in(c,d),\\\
0&x\in(d,b),\end{cases}$ (5.58)
and denote by $T_{0,0}$ the associated Schrödinger operator with Dirichlet
boundary conditions at $a$ and $b$. Then, for $z\in{\mathbb{C}}$,
$\displaystyle\phi(z,x,a)$
$\displaystyle=z^{-1/2}\sin\big{(}z^{1/2}(x-a)\big{)},\quad x\in(a,c),$
$\displaystyle\theta(z,x,a)$
$\displaystyle=\cos\big{(}z^{1/2}(x-a)\big{)},\quad x\in(a,c),$
$\displaystyle\phi(z,x,a)$
$\displaystyle=\cos\big{(}z^{1/2}(c-a)\big{)}(z-V_{0})^{-1/2}\sin\big{(}(z-V_{0})^{1/2}(x-c)\big{)}$
$\displaystyle\quad+z^{-1/2}\sin\big{(}z^{1/2}(c-a)\big{)}\cos\big{(}(z-V_{0})^{1/2}(x-c)\big{)},\quad
x\in(c,d),$ $\displaystyle\theta(z,x,a)$
$\displaystyle=-z^{1/2}\sin\big{(}z^{1/2}(c-a)\big{)}(z-V_{0})^{-1/2}\sin\big{(}(z-V_{0})^{1/2}(x-c)\big{)}$
$\displaystyle\quad+\cos\big{(}z^{1/2}(c-a)\big{)}\cos\big{(}(z-V_{0})^{1/2}(x-c)\big{)},\quad
x\in(c,d),$ $\displaystyle\phi(z,x,a)$
$\displaystyle=\bigg{[}\cos\big{(}z^{1/2}(c-a)\big{)}\cos\big{(}(z-V_{0})^{1/2}(d-c)\big{)}$
$\displaystyle\quad\;\;-(z-V_{0})^{1/2}z^{-1/2}\sin\big{(}z^{1/2}(c-a)\big{)}\sin\big{(}(z-V_{0})^{1/2}(d-c)\big{)}\bigg{]}$
$\displaystyle\quad\times z^{-1/2}\sin\big{(}z^{1/2}(x-d)\big{)}$ (5.59)
$\displaystyle+\bigg{[}\cos\big{(}z^{1/2}(c-a)\big{)}(z-V_{0})^{-1/2}\sin\big{(}(z-V_{0})^{1/2}(d-c)\big{)}$
$\displaystyle\quad\;\;+z^{-1/2}\sin\big{(}z^{1/2}(c-a)\big{)}\cos\big{(}(z-V_{0})^{1/2}(d-c)\big{)}\bigg{]}\cos\big{(}z^{1/2}(x-d)\big{)},$
$\displaystyle\hskip 227.62204ptx\in(d,b),$ $\displaystyle\theta(z,x,a)$
$\displaystyle=-\bigg{[}z^{1/2}\sin\big{(}z^{1/2}(c-a)\big{)}\cos\big{(}(z-V_{0})^{1/2}(d-c)\big{)}$
$\displaystyle\qquad\;+(z-V_{0})^{1/2}\cos\big{(}z^{1/2}(c-a)\big{)}\sin\big{(}(z-V_{0})^{1/2}(d-c)\big{)}\bigg{]}$
$\displaystyle\qquad\;\times z^{-1/2}\sin\big{(}z^{1/2}(x-d)\big{)}$
$\displaystyle\quad+\bigg{[}-z^{1/2}\sin\big{(}z^{1/2}(c-a)\big{)}(z-V_{0})^{-1/2}\sin\big{(}(z-V_{0})^{1/2}(d-c)\big{)}$
$\displaystyle\qquad\;+\cos\big{(}z^{1/2}(c-a)\big{)}\cos\big{(}(z-V_{0})^{1/2}(d-c)\big{)}\bigg{]}\cos\big{(}z^{1/2}(x-d)\big{)},$
$\displaystyle\hskip 227.62204ptx\in(d,b).$
In particular,
$\displaystyle\phi(z,b,a)=\sum_{m=0}^{\infty}z^{m}\phi_{m}(b),\quad
z\in{\mathbb{C}},$ (5.60)
where
$\displaystyle\phi_{0}(b)$
$\displaystyle=\left[\cosh\big{(}V_{0}^{1/2}(d-c)\big{)}+V_{0}^{1/2}(c-a)\sinh\big{(}V_{0}^{1/2}(d-c)\big{)}\right](b-d)$
$\displaystyle\quad+V_{0}^{-1/2}\sinh\big{(}V_{0}^{1/2}(d-c)\big{)}+(c-a)\cosh\big{(}V_{0}^{1/2}(d-c)\big{)},$
$\displaystyle\phi_{1}(b)$
$\displaystyle=\big{(}6V_{0}^{3/2}\big{)}^{-1}\big{\\{}3\big{[}\big{(}aV_{0}(c-d)-c^{2}V_{0}+cdV_{0}-1\big{)}\sinh\big{(}V_{0}^{1/2}(c-d)\big{)}$
(5.61)
$\displaystyle\quad+V_{0}^{1/2}(c-d)\cosh\big{(}V_{0}^{1/2}(c-d)\big{)}\big{]}+V_{0}\big{[}\sinh\big{(}V_{0}^{1/2}(d-c)\big{)}(aV_{0}(b-d)$
$\displaystyle\quad-
bcV_{0}+cdV_{0}-3)+V_{0}^{1/2}(3a-b-3c+d)\cosh\big{(}V_{0}^{1/2}(d-c)\big{)}\big{]}$
$\displaystyle\qquad\times(b-d)^{2}\big{[}V_{0}^{1/2}\sinh\big{(}2V_{0}^{1/2}(d-c)\big{)}+\cosh\big{(}2V_{0}^{1/2}(d-c)\big{)}\big{]}$
$\displaystyle\quad+V_{0}^{3/2}(a-c)^{3}\big{\\}},$ etc.
By construction, $\phi(z,a,a)=0$, so eigenvalues are given by solving
$\phi(z,b,a)=0$, or, equivalently, by solving
$\displaystyle\tan\big{(}z^{1/2}(b-d)\big{)}=-\frac{z^{1/2}\big{[}\cos\big{(}z^{1/2}(c-a)\big{)}(z-V_{0})^{-1/2}\sin\big{(}(z-V_{0})^{1/2}(d-c)\big{)}+z^{-1/2}\sin\big{(}z^{1/2}(c-a)\big{)}\cos\big{(}(z-V_{0})^{1/2}(d-c)\big{)}\big{]}}{\cos\big{(}z^{1/2}(c-a)\big{)}\cos\big{(}(z-V_{0})^{1/2}(d-c)\big{)}-(z-V_{0})^{1/2}z^{-1/2}\sin\big{(}z^{1/2}(c-a)\big{)}\sin\big{(}(z-V_{0})^{1/2}(d-c)\big{)}}.$ (5.62)
From (2.16), one has
$\displaystyle F_{0,0}(z)$
$\displaystyle=\bigg{[}\cos\big{(}z^{1/2}(c-a)\big{)}\cos\big{(}(z-V_{0})^{1/2}(d-c)\big{)}$
$\displaystyle\quad\;\;-(z-V_{0})^{1/2}\dfrac{\sin\big{(}z^{1/2}(c-a)\big{)}}{z^{1/2}}\sin\big{(}(z-V_{0})^{1/2}(d-c)\big{)}\bigg{]}\dfrac{\sin\big{(}z^{1/2}(b-d)\big{)}}{z^{1/2}}$
$\displaystyle\quad+\bigg{[}\cos\big{(}z^{1/2}(c-a)\big{)}(z-V_{0})^{-1/2}\sin\big{(}(z-V_{0})^{1/2}(d-c)\big{)}$
(5.63) $\displaystyle\qquad\;\;+\
z^{-1/2}\sin\big{(}z^{1/2}(c-a)\big{)}\cos\big{(}(z-V_{0})^{1/2}(d-c)\big{)}\bigg{]}\cos\big{(}z^{1/2}(b-d)\big{)},$
$\displaystyle\hskip 294.48619ptz\in{\mathbb{C}}.$
Hence, applying Corollary 4.3 with $m_{0}=0$ one explicitly finds the sum of
the inverses of these eigenvalues, namely
$\displaystyle\zeta(1;T_{0,0})=\text{\rm
tr}_{L^{2}_{r}((a,b))}\big{(}T^{-1}_{0,0}\big{)}=-\phi_{1}(b)/\phi_{0}(b).$
(5.64)
Taking the limits $c\downarrow a$ and $d\uparrow b$ of (5.64) recovers the
expression in Example 5.6. Furthermore, taking the limit $V_{0}\downarrow 0$
recovers the same expression as in Example 5.1. The expression for $n=2$ is
significantly longer and hence is omitted here.
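Independently of the explicit coefficients in (5.61), the trace $\zeta(1;T_{0,0})=\operatorname{tr}\big(T_{0,0}^{-1}\big)$ can be approximated by discretizing $-y''+qy$ with Dirichlet conditions and summing the diagonal of the inverse of the resulting tridiagonal matrix. The sketch below is ours (the grid size and the sample values $a=0$, $b=1$, $c=0.3$, $d=0.7$, $V_{0}=10$ are illustrative); it uses the standard determinant recurrences for the diagonal of the inverse of a tridiagonal matrix:

```python
def trace_inverse(q, a, b, n=2000):
    # zeta(1; T_{0,0}) = tr(T^{-1}) via a second-order finite-difference
    # discretization; M = h^2 A is tridiagonal with off-diagonal entries -1.
    h = (b - a) / (n + 1)
    diag = [2.0 + h * h * q(a + (i + 1) * h) for i in range(n)]
    theta = [1.0, diag[0]]                 # leading principal minors of M
    for i in range(1, n):
        theta.append(diag[i] * theta[-1] - theta[-2])
    phi = [1.0] * (n + 2)                  # trailing minors: phi[i] = det of rows i..n
    phi[n] = diag[n - 1]
    for i in range(n - 1, 0, -1):
        phi[i] = diag[i - 1] * phi[i + 1] - phi[i + 2]
    # (M^{-1})_{ii} = theta[i-1] * phi[i+1] / det(M); tr(A^{-1}) = h^2 tr(M^{-1}).
    return h * h * sum(theta[i - 1] * phi[i + 1] for i in range(1, n + 1)) / theta[n]

# Step potential (5.58) with illustrative parameters:
step = trace_inverse(lambda x: 10.0 if 0.3 < x < 0.7 else 0.0, 0.0, 1.0)
```

By eigenvalue monotonicity the step-potential trace must lie strictly between the constant-potential value of Example 5.6 and the free value $(b-a)^{2}/6$ of Example 5.1, which makes a convenient sanity check on both limits stated above.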
### 5.3. Example of a Negative Constant Potential
Next, we derive spectral $\zeta$-function values for the case of a negative
constant potential. This case is dealt with separately since the question as
to whether $z=0$ is an eigenvalue of $T_{0,0}$ depends on the actual constant
value of the potential.
###### Example 5.9.
Let $V_{0}\in(0,\infty)$, consider $q(x)=-V_{0}$, $x\in(a,b)$, and denote by
$T_{0,0}$ the associated Schrödinger operator with Dirichlet boundary
conditions at $a$ and $b$. Then,
$\displaystyle\begin{split}&\phi(z,x,a)=(z+V_{0})^{-1/2}\sin\big{(}(z+V_{0})^{1/2}(x-a)\big{)},\\\
&\theta(z,x,a)=\cos\big{(}(z+V_{0})^{1/2}(x-a)\big{)},\quad
x\in(a,b),\;z\in{\mathbb{C}}.\end{split}$ (5.65)
Furthermore, eigenvalues and eigenfunctions for $T_{0,0}$ with
$q(x)=-V_{0}<0,\ x\in(a,b),$ are given by
$\displaystyle\lambda_{k}=\dfrac{k^{2}\pi^{2}}{(b-a)^{2}}-V_{0},\quad
y_{k}(x)=(\lambda_{k}+V_{0})^{-1/2}\sin\big{(}(\lambda_{k}+V_{0})^{1/2}(x-a)\big{)},\quad
k\in{\mathbb{N}},$ (5.66)
where one notes that due to $q(x)=-V_{0}<0$, $z=0$ is an eigenvalue of
$T_{0,0}$ for certain values of $V_{0}$. Specifically, if one has
$\displaystyle V_{0}=k^{2}\pi^{2}/(b-a)^{2},\text{ for some
}k\in{\mathbb{N}},$ (5.67)
then $z=0$ is a simple eigenvalue of $T_{0,0}$. Otherwise, $z=0$ is not an
eigenvalue of $T_{0,0}$. Moreover,
$\displaystyle
F_{0,0}(z)=(z+V_{0})^{-1/2}\sin\big{(}(z+V_{0})^{1/2}(b-a)\big{)},\quad
z\in{\mathbb{C}}.$ (5.68)
Applying Corollary 4.3 with $m_{0}=0$ when $V_{0}\neq k^{2}\pi^{2}/(b-a)^{2}$,
$k\in{\mathbb{N}}$, one finds for $n=1,2,3$ $($the expression for $n=4$ is
significantly longer and hence is omitted here$)$,
$\displaystyle\zeta(1;T_{0,0})=\sum_{k=1}^{\infty}\left[\frac{k^{2}\pi^{2}}{(b-a)^{2}}-V_{0}\right]^{-1}=\text{\rm
tr}_{L^{2}_{r}((a,b))}\big{(}T_{0,0}^{-1}\big{)}$ $\displaystyle\hskip
39.26494pt=\big{[}V_{0}^{1/2}(a-b)\cot\big{(}V_{0}^{1/2}(b-a)\big{)}+1\big{]}\big{/}(2V_{0}),$
$\displaystyle\zeta(2;T_{0,0})=\big{[}V_{0}^{1/2}(b-a)\sin\big{(}2V_{0}^{1/2}(b-a)\big{)}+2V_{0}(b-a)^{2}$
$\displaystyle\hskip
39.26494pt\quad-4\sin^{2}\big{(}V_{0}^{1/2}(b-a)\big{)}\big{]}\big{/}\big{[}8V_{0}^{2}\sin^{2}\big{(}V_{0}^{1/2}(b-a)\big{)}\big{]},$
$\displaystyle\zeta(3;T_{0,0})=\big{(}64V_{0}^{3}\sin^{2}\big{(}V_{0}^{1/2}(b-a)\big{)}\big{)}^{-1}\big{[}-12V_{0}(b-a)^{2}-16\cos\big{(}2V_{0}^{1/2}(b-a)\big{)}$
$\displaystyle\hskip
39.26494pt\quad+16-V_{0}^{1/2}(b-a)\big{(}8a^{2}V_{0}-16abV_{0}+8b^{2}V_{0}-3\big{)}\cot\big{(}V_{0}^{1/2}(b-a)\big{)}$
$\displaystyle\hskip
39.26494pt\quad-3aV_{0}^{1/2}\cos\big{(}3V_{0}^{1/2}(b-a)\big{)}\big{(}\sin\big{(}V_{0}^{1/2}(b-a)\big{)}\big{)}^{-1}$
$\displaystyle\hskip
39.26494pt\quad+3bV_{0}^{1/2}\cos\big{(}3V_{0}^{1/2}(b-a)\big{)}\big{(}\sin\big{(}V_{0}^{1/2}(b-a)\big{)}\big{)}^{-1}\big{]}.$
(5.69)
When $V_{0}=k_{0}^{2}\pi^{2}/(b-a)^{2}$ for some $k_{0}\in{\mathbb{N}}$,
applying Corollary 4.3 with $m_{0}=1$ one finds for $n=1,2$ $($the expressions
for $n=3,4$ are significantly longer and hence are omitted here$)$,
$\displaystyle\zeta(1;T_{0,0})$ $\displaystyle=\sum_{\underset{k\neq
k_{0}}{k=1}}^{\infty}\left[\frac{k^{2}\pi^{2}}{(b-a)^{2}}-V_{0}\right]^{-1}=\frac{\pi^{2}}{(b-a)^{2}}\sum_{\underset{k\neq
k_{0}}{k=1}}^{\infty}\big{[}k^{2}-k_{0}^{2}\big{]}^{-1}$
$\displaystyle=\dfrac{\big{(}V_{0}(b-a)^{2}-3\big{)}\sin\big{(}V_{0}^{1/2}(a-b)\big{)}+3V_{0}^{1/2}(a-b)\cos\big{(}V_{0}^{1/2}(a-b)\big{)}}{4V_{0}\big{(}\sin\big{(}V_{0}^{1/2}(b-a)\big{)}+V_{0}^{1/2}(a-b)\cos\big{(}V_{0}^{1/2}(b-a)\big{)}\big{)}},$
$\displaystyle\zeta(2;T_{0,0})$
$\displaystyle=\dfrac{1}{24V_{0}^{2}\big{(}\sin\big{(}V_{0}^{1/2}(b-a)\big{)}+V_{0}^{1/2}(a-b)\cos\big{(}V_{0}^{1/2}(b-a)\big{)}\big{)}}$
$\displaystyle\quad\times\big{\\{}2\big{[}3\big{(}5-2V_{0}(b-a)^{2}\big{)}\sin\big{(}V_{0}^{1/2}(a-b)\big{)}$
$\displaystyle\qquad-
V_{0}^{1/2}(b-a)(V_{0}(b-a)^{2}-15)\cos\big{(}V_{0}^{1/2}(a-b)\big{)}\big{]}$
$\displaystyle\qquad+3\big{(}\sin\big{(}V_{0}^{1/2}(b-a)\big{)}\big{)}^{-1}\big{[}\big{(}V_{0}(b-a)^{2}-3)\sin\big{(}V_{0}^{1/2}(a-b)\big{)}$
$\displaystyle\qquad-3V_{0}^{1/2}(b-a)\cos\big{(}V_{0}^{1/2}(a-b)\big{)}\big{]}$
$\displaystyle\qquad\quad\times\big{[}\sin\big{(}V_{0}^{1/2}(b-a)\big{)}-V_{0}^{1/2}(b-a)\cos\big{(}V_{0}^{1/2}(b-a)\big{)}\big{]}\big{\\}}.$
(5.70)
Taking the limit $V_{0}\downarrow 0$ of (5.69) recovers the expressions in
Example 5.1.
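As with Example 5.6, the trace formula in (5.69) is easy to test numerically when $z=0$ is not an eigenvalue; here $(a,b)=(0,1)$ and $V_{0}=2<\pi^{2}$ are illustrative choices, so that all eigenvalues are positive:

```python
import math

def zeta1_neg(V0, L=1.0, terms=200_000):
    # Truncated sum over lambda_k = (k pi / L)^2 - V0 (all positive for V0 < pi^2 / L^2).
    return sum(1.0 / ((k * math.pi / L) ** 2 - V0) for k in range(1, terms + 1))

def zeta1_neg_closed(V0, L=1.0):
    # Closed form from (5.69): [1 - sqrt(V0) L cot(sqrt(V0) L)] / (2 V0).
    w = math.sqrt(V0) * L
    return (1.0 - w / math.tan(w)) / (2.0 * V0)
```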
###### Remark 5.10.
In the case $z=0$ is not an eigenvalue, one can verify these results via the
method outlined in Remark 5.7. Namely, letting
$\displaystyle m^{2}=-(b-a)^{2}V_{0}\pi^{-2}<0$ (5.71)
so that
$\displaystyle m=(i/\pi)(b-a)V_{0}^{1/2}$ (5.72)
in (5.53) and (5.57), one verifies the expressions for $n=1,2$ as before.
$\diamond$
### 5.4. Example of a Linear Potential
We finish with an example for calculating spectral $\zeta$-function values for
the linear potential, $q(x)=x$, $x\in(a,b)$.
###### Example 5.11.
Consider $q(x)=x$, $x\in(a,b)$, and denote by $T_{0,0}$ the associated
Schrödinger operator with Dirichlet boundary conditions at $a$ and $b$. Then,
noting that $W(\operatorname{Ai},\operatorname{Bi})(x)=\pi^{-1}$ $($cf. [1,
Eq. 10.4.10]$)$, one finds
$\displaystyle\phi(z,x,a)$
$\displaystyle=\pi(\operatorname{Ai}(a-z)\operatorname{Bi}(x-z)-\operatorname{Bi}(a-z)\operatorname{Ai}(x-z)),$
(5.73) $\displaystyle\theta(z,x,a)$
$\displaystyle=-\pi(\operatorname{Ai}^{\prime}(a-z)\operatorname{Bi}(x-z)-\operatorname{Bi}^{\prime}(a-z)\operatorname{Ai}(x-z)),\quad
z\in{\mathbb{C}},$ (5.74)
where $\operatorname{Ai}(\,\cdot\,)$ and $\operatorname{Bi}(\,\cdot\,)$
represent the Airy functions of the first and second kind, respectively $($cf.
[1, Sect. 10.4]$)$. In particular, substituting $z=0$ in (5.73) yields
$\displaystyle\phi_{0}(x)=\pi(\operatorname{Ai}(a)\operatorname{Bi}(x)-\operatorname{Bi}(a)\operatorname{Ai}(x)),\quad\theta_{0}(x)=-\pi(\operatorname{Ai}^{\prime}(a)\operatorname{Bi}(x)-\operatorname{Bi}^{\prime}(a)\operatorname{Ai}(x)),$
(5.75)
and thus the Volterra Green’s function becomes
$\displaystyle
g(0,x,x^{\prime})=\pi(\operatorname{Ai}(x)\operatorname{Bi}(x^{\prime})-\operatorname{Ai}(x^{\prime})\operatorname{Bi}(x)).$
(5.76)
Hence,
$\displaystyle\phi(z,b,a)=\sum_{m=0}^{\infty}z^{m}\phi_{m}(b),\quad
z\in{\mathbb{C}},$ (5.77)
where
$\displaystyle\phi_{0}(b)=\pi(\operatorname{Ai}(a)\operatorname{Bi}(b)-\operatorname{Bi}(a)\operatorname{Ai}(b)),$
$\displaystyle\phi_{1}(b)=\pi^{2}\int_{a}^{b}dx_{1}\
(\operatorname{Ai}(b)\operatorname{Bi}(x_{1})-\operatorname{Ai}(x_{1})\operatorname{Bi}(b))(\operatorname{Ai}(a)\operatorname{Bi}(x_{1})-\operatorname{Bi}(a)\operatorname{Ai}(x_{1}))$
$\displaystyle\hskip
22.76228pt=\pi^{2}\big{[}\operatorname{Ai}(a)\operatorname{Ai}(b)\left(\operatorname{Bi}^{\prime}(a)^{2}-\operatorname{Bi}^{\prime}(b)^{2}\right)+\operatorname{Bi}(a)\operatorname{Bi}(b)\big{(}\operatorname{Ai}^{\prime}(a)^{2}-\operatorname{Ai}^{\prime}(b)^{2}\big{)}$
(5.78) $\displaystyle\hskip
34.14322pt+(\operatorname{Ai}^{\prime}(b)\operatorname{Bi}^{\prime}(b)-\operatorname{Ai}^{\prime}(a)\operatorname{Bi}^{\prime}(a))(\operatorname{Bi}(a)\operatorname{Ai}(b)+\operatorname{Ai}(a)\operatorname{Bi}(b))\big{]},$
etc.
Furthermore, one has by construction, $\phi(z,a,a)=0$, so eigenvalues are
given by solving $\phi(z,b,a)=0$, or, equivalently, by solving
$\operatorname{Ai}(a-z)\operatorname{Bi}(b-z)=\operatorname{Bi}(a-z)\operatorname{Ai}(b-z)$.
In particular, the characteristic function is given by
$\displaystyle
F_{0,0}(z)=\pi(\operatorname{Ai}(a-z)\operatorname{Bi}(b-z)-\operatorname{Bi}(a-z)\operatorname{Ai}(b-z)),\quad
z\in{\mathbb{C}}.$ (5.79)
If zero is not an eigenvalue, applying Corollary 4.3 with $m_{0}=0$ one finds
the sum of the inverses of these eigenvalues, namely
$\displaystyle\zeta(1;T_{0,0})$
$\displaystyle=\operatorname{tr}_{L_{r}^{2}((a,b))}\big{(}T_{0,0}^{-1}\big{)}=-\phi_{1}(b)/\phi_{0}(b)$
$\displaystyle=\pi(\operatorname{Bi}(a)\operatorname{Ai}(b)-\operatorname{Ai}(a)\operatorname{Bi}(b))^{-1}\big{[}\operatorname{Ai}(a)\operatorname{Ai}(b)\left(\operatorname{Bi}^{\prime}(a)^{2}-\operatorname{Bi}^{\prime}(b)^{2}\right)$
$\displaystyle\quad+\operatorname{Bi}(a)\operatorname{Bi}(b)\big{(}\operatorname{Ai}^{\prime}(a)^{2}-\operatorname{Ai}^{\prime}(b)^{2}\big{)}$
(5.80)
$\displaystyle\quad+(\operatorname{Ai}^{\prime}(b)\operatorname{Bi}^{\prime}(b)-\operatorname{Ai}^{\prime}(a)\operatorname{Bi}^{\prime}(a))(\operatorname{Bi}(a)\operatorname{Ai}(b)+\operatorname{Ai}(a)\operatorname{Bi}(b))\big{]}.$
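Since $\operatorname{tr}\big(T_{0,0}^{-1}\big)$ equals the integral over $(a,b)$ of the diagonal Green's function at $z=0$, the value (5.80) can be probed without Airy function tables: build $G(0,x,x)=u(x)v(x)/W$ from the solutions $u$ (vanishing at $a$, $u'(a)=1$) and $v$ (vanishing at $b$, $v'(b)=-1$) of $-y''+qy=0$ and integrate. The sketch below (RK4 plus trapezoidal quadrature; the interval $(0,1)$, grid size, and function names are our illustrative choices) accepts a general $q$, so it can be calibrated against the constant-potential closed form of Example 5.6; for $q(x)=x$ on $(0,1)$, eigenvalue monotonicity pins the result between the constant-potential values for $q\equiv 0$ and $q\equiv 1$.

```python
def zeta1_green(q, a=0.0, b=1.0, n=2000):
    # u solves u'' = q u with u(a) = 0, u'(a) = 1; v solves the same ODE
    # with v(b) = 0, v'(b) = -1.  The Wronskian is constant, W = u(b), and
    # zeta(1) = tr(T^{-1}) = integral over (a, b) of u(x) v(x) / W.
    h = (b - a) / n

    def rk4_path(x0, y0, p0, step, count):
        vals, y, p, x = [y0], y0, p0, x0
        for _ in range(count):
            k1y, k1p = p, q(x) * y
            k2y, k2p = p + step * k1p / 2, q(x + step / 2) * (y + step * k1y / 2)
            k3y, k3p = p + step * k2p / 2, q(x + step / 2) * (y + step * k2y / 2)
            k4y, k4p = p + step * k3p, q(x + step) * (y + step * k3y)
            y += step * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
            p += step * (k1p + 2 * k2p + 2 * k3p + k4p) / 6
            x += step
            vals.append(y)
        return vals

    u = rk4_path(a, 0.0, 1.0, h, n)            # u(a), u(a + h), ..., u(b)
    v = rk4_path(b, 0.0, -1.0, -h, n)[::-1]    # v(a), ..., v(b)
    g = [ui * vi / u[-1] for ui, vi in zip(u, v)]
    return h * (sum(g) - 0.5 * (g[0] + g[-1]))  # trapezoidal rule
```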
Acknowledgments. We are indebted to Angelo Mingarelli for very helpful
discussions.
## References
* [1] M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions, Dover, New York, 1972.
* [2] A. Alonso and B. Simon, The Birman–Krein–Vishik theory of self-adjoint extensions of semibounded operators, J. Operator Th. 4, 251–270 (1980); Addenda: 6, 407 (1981).
* [3] P. Amore, Spectral sum rules for the Schrödinger equation, Ann. Phys. 423 (2020) 168334.
* [4] M. S. Ashbaugh, F. Gesztesy, M. Mitrea, and G. Teschl, Spectral theory for perturbed Krein Laplacians in nonsmooth domains, Adv. Math. 223, 1372–1467 (2010).
* [5] F. V. Atkinson, Discrete and Continuous Boundary Value Problems, Academic Press, New York, 1964.
* [6] F. V. Atkinson and A. B. Mingarelli, Asymptotics of the number of zeros and of the eigenvalues of general weighted Sturm–Liouville problems, J. reine angew. Math. 375/376, 380–393 (1987).
* [7] R. O. Awonusika, Determinants of the Laplacians on complex projective spaces ${\mathbb{P}}_{n}({\mathbb{C}})$ ($n\geqslant 1$), J. Number Th. 190, 131–155 (2018).
* [8] R. Ayub, Euler and the Zeta Function, Amer. Math. Monthly 81, 1067–1086 (1974).
* [9] J. Behrndt, S. Hassi, and H. De Snoo, Boundary Value Problems, Weyl Functions, and Differential Operators, Monographs in Math., Vol. 108, Birkhäuser, Springer, 2020.
* [10] R. P. Boas, Entire Functions, Pure and Appl. Math., Vol. V, Academic Press, New York, 1954.
* [11] A. Boutet de Monvel and V. Marchenko, Asymptotic formulas for spectral and Weyl functions of Sturm–Liouville operators with smooth coefficients, in New Results in Operator Theory and Its Applications. The Israel M. Glazman Memorial Volume, I. Gohberg and Yu. Lyubich (eds.), Operator Theory: Advances and Applications, Vol. 98, Birkhäuser, Boston, 1997, pp. 102–117.
* [12] D. Burghelea, L. Friedlander, and T. Kappeler, On the determinant of elliptic boundary value problems on a line segment, Proc. Amer. Math. Soc. 123, 3027–3038 (1995).
* [13] V. S. Buslaev and L. D. Faddeev, Formulas for traces for a singular Sturm–Liouville differential operator, Sov. Math. Dokl. 1, 451–454 (1960).
* [14] S. Clark and F. Gesztesy, Weyl–Titchmarsh $M$-function asymptotics for matrix-valued Schrödinger operators, Proc. London Math. Soc. (3) 82, 701–724 (2001).
* [15] S. Clark, F. Gesztesy, R. Nichols, and M. Zinchenko, Boundary data maps and Krein’s resolvent formula for Sturm–Liouville operators on a finite interval, Operators and Matrices 8, 1–71 (2014).
* [16] A. A. Danielyan and B. M. Levitan, On the asymptotic behavior of the Weyl–Titchmarsh $m$-function, Math. USSR Izv. 36, 487–496 (1991).
* [17] S. Demirel and M. Usman, Trace formulas for Schrödinger operators on the half-line, Bull. Math. Sci. 1, 397–427 (2011).
* [18] L. A. Dikiĭ, The zeta function of an ordinary differential equation on a finite interval, Izv. Akad. Nauk SSSR Ser. Mat. 19, 187–200 (1955). (Russian.)
* [19] L. A. Dikiĭ, Trace formulas for Sturm–Liouville differential operators, Amer. Math. Soc. Transl. (2) 18, 81–115 (1961).
* [20] T. Dreyfus and H. Dym, Product formulas for the eigenvalues of a class of boundary value problems, Duke Math. J. 45, 15–37 (1978).
* [21] E. Elizalde, Ten Physical Applications of Spectral Zeta Functions, 2nd ed., Lecture Notes in Physics, Vol. 855, Springer, New York, 2012.
* [22] E. Elizalde, S. D. Odintsov, A. Romeo, A. A. Bytsenko, and S. Zerbini, Zeta Regularization Techniques with Applications, World Scientific, Singapore, 1994.
* [23] P. Epstein, Zur Theorie allgemeiner Zetafunktionen, Math. Ann. 56, 615–644 (1903).
* [24] P. Epstein, Zur Theorie allgemeiner Zetafunktionen. II, Math. Ann. 63, 205–216 (1907).
* [25] G. M. Falco, A. A. Fedorenko, and I. A. Gruzberg, On functional determinants of matrix differential operators with multiple zero modes, J. Phys. A 50, (2017), 485201, 29 pp.
* [26] R. Forman, Functional determinants and geometry, Invent. Math. 88, 447–493 (1987); Erratum 108, 453–454 (1992).
* [27] R. Forman, Determinants, finite-difference operators and boundary value problems, Commun. Math. Phys. 147, 485–526 (1992).
* [28] P. Freitas and J. Lipovský, Spectral determinant for the damped wave equation on an interval, Acta Phys. Polonica A 136, 817–823 (2019).
* [29] P. Freitas and J. Lipovský, The determinant of one-dimensional polyharmonic operators of arbitrary order, J. Funct. Anal. 279, 108783 (2020), 30 pp.
* [30] G. Fucci, F. Gesztesy, K. Kirsten, L. L. Littlejohn, R. Nichols, and J. Stanfill, The Krein–von Neumann extension revisited, preprint, 2020.
* [31] G. Fucci, C. Graham, and K. Kirsten, Spectral functions for regular Sturm–Liouville problems, J. Math. Phys. 56 (2015), 043503, 24 pp.
* [32] F. Gesztesy, H. Holden, B. Simon, and Z. Zhao, Higher order trace relations for Schrödinger operators, Rev. Math. Phys. 7, 893–922 (1995).
* [33] F. Gesztesy and K. Kirsten, Effective computation of traces, determinants, and $\zeta$-functions for Sturm–Liouville operators, J. Funct. Anal. 276, 520–562 (2019).
* [34] F. Gesztesy and M. Zinchenko, Sturm–Liouville Operators, Their Spectral Theory, and Some Applications. Volume I, to appear.
* [35] I. Gohberg and M. G. Krein, Introduction to the Theory of Linear Nonselfadjoint Operators, Transl. Math. Monogr., Vol. 18., Amer. Math. Soc., Providence, RI, 1969.
* [36] I. C. Gohberg and M. G. Krein, Theory and Applications of Volterra Operators in Hilbert Space, Translations of Mathematical Monographs, Vol. 24, Amer. Math. Soc., Providence, RI, 1970.
* [37] I. S. Gradshteyn and I. M. Ryzhik, Table of Integrals, Series, and Products, corrected and enlarged edition, prepared by A. Jeffrey, Academic Press, San Diego, 1980.
* [38] C. Graham, K. Kirsten, P. Morales-Almazan, and B. Quantz Streit, Functional determinants for Laplacians on annuli and elliptical regions, J. Math. Phys. 59 (2018), 013508, 22 pp.
* [39] L. Hermi and N. Saito, On Rayleigh-type formulas for a non-local boundary value problem associated with an integral operator commuting with the Laplacian, Appl. Comput. Harmon. Anal. 45, 59–83 (2018).
* [40] D. B. Hinton, M. Klaus, and J. K. Shaw, Series representation and asymptotics for Titchmarsh–Weyl $m$-functions, Diff. Integral Eqs. 2, 419–429 (1989).
* [41] K. Jörgens and F. Rellich, Eigenwerttheorie Gewöhnlicher Differentialgleichungen, Springer-Verlag, Berlin, 1976. (German.)
* [42] R. Jost and A. Pais, On the scattering of a particle by a static potential, Phys. Rev. 82, 840–851 (1951).
* [43] H. G. Kaper and Man Kam Kong, Asymptotics of the Titchmarsh–Weyl $m$-coefficient for integrable potentials, Proc. Roy. Soc. Edinburgh 103A, 347–358 (1986).
* [44] W. Kapteyn, Le calcul numérique, Mém. Soc. Roy. Sci. Liége, Ser. 3 VI, No. 9, 14 pp. (1906).
* [45] K. Kirsten, Generalized multidimensional Epstein zeta functions, J. Math. Phys. 35, 459–470 (1994).
* [46] K. Kirsten, Spectral Functions in Mathematics and Physics, CRC Press, Boca Raton, 2002.
* [47] K. Kirsten and A.J. McKane, Functional determinants by contour integration methods, Ann. Phys. 308, 502–527 (2003).
* [48] K. Kirsten and A.J. McKane, Functional determinants for general Sturm–Liouville problems, J. Phys. A 37, 4649–4670 (2004).
* [49] M. Lesch, Determinants of regular singular Sturm–Liouville operators, Math. Nachr. 194, 139–170 (1998).
* [50] M. Lesch and J. Tolksdorf, On the determinant of one-dimensional elliptic boundary value problems, Commun. Math. Phys. 193, 643–660 (1998).
* [51] M. Lesch and B. Vertman, Regular singular Sturm–Liouville operators and their zeta-determinants, J. Funct. Anal. 261, 408–450 (2011).
* [52] B. Ja. Levin, Distribution of Zeros of Entire Functions, rev., ed., Transl. of Math. Monographs, Vol. 5, Amer. Math. Soc., Providence, RI, 1980.
* [53] S. Levit and U. Smilansky, A theorem on infinite products of eigenvalues of Sturm–Liouville type operators, Proc. Amer. Math. Soc. 65, 299–302 (1977).
* [54] B. M. Levitan and I. S. Sargsjan, Introduction to Spectral Theory, Transl. of Math. Monographs, Vol. 39, Amer. Math. Soc., Providence, RI, 1975.
* [55] V. A. Marchenko, Sturm–Liouville Operators and Applications, rev. ed., AMS Chelsea Publ., Amer. Math. Soc., Providence, RI, 2011.
* [56] A. B. Mingarelli, Some remarks on the order of an entire function associated with a second order differential equation, in Ordinary Differential Equations and Operators. A tribute to F.V. Atkinson, Proc. of a Symposium held at Dundee, Scotland, March–July 1982, W. N. Everitt and R. T. Lewis (eds.), Lecture Notes in Math., Vol. 1032, Springer, Berlin, 1983, pp. 384–389.
* [57] K. A. Mirzoev and T. A. Safonova, Green’s function of ordinary differential operators and an integral representation of sums of certain power series, Dokl. Math. 98, 486–4489 (2018).
* [58] W. Müller, Relative zeta functions, relative determinants and scattering theory, Commun. Math. Phys. 192, 309–347 (1998).
* [59] J. M. Muñoz-Castañeda, K. Kirsten, and M. Bordag, QFT over the finite line. Heat kernel coefficients, spectral zeta functions and selfadjoint extensions, Lett. Math. Phys. 105, 523–549 (2015).
* [60] M. A. Naimark, Linear Differential Operators. Part I: Elementary Theory of Linear Differential Operators, Transl. by E. R. Dawson, Engl. translation edited by W. N. Everitt, Ungar Publishing, New York, 1967.
* [61] M. A. Naimark, Linear Differential Operators. Part II: Linear Differential Operators in Hilbert Space, Transl. by E. R. Dawson, Engl. translation edited by W. N. Everitt, Ungar Publishing, New York, 1968.
* [62] J. Östensson and D. R. Yafaev, A trace formula for differential operators of arbitrary order, in A Panorama of Modern Operator Theory and Related Topics. The Israel Gohberg Memorial Volume, H. Dym, M. A. Kaashoek, P. Lancaster, H. Langer, and L. Lerer (eds.), Operator Theory: Advances and Appls., Vol. 218, Birkhäuser, Springer, 2012, pp. 541–570.
* [63] M. Reed and B. Simon, Methods of Modern Mathematical Physics. IV: Analysis of Operators, Academic Press, New York, 1978.
* [64] D. Robert and V. Sordoni, Generalized determinants for Sturm–Liouville problems on the real line, in Partial Differential Equations and Their Applications, P. C. Greiner, V. Ivrii, L. A. Seco, and C. Sulem (eds.), CRM Proceedings & Lecture Notes, Vol. 12, Amer. Math. Soc., Providence, RI, 1997, pp. 251–259.
* [65] G. V. Rozenblum, M. A. Shubin, and M. Z. Solomyak, Spectral theory of differential operators, in Partial Differential Equations VII, Encyclopedia of Math. Sci., Vol. 64, Springer, Berlin, 1994.
* [66] A. Rybkin, On a complete analysis of high-energy scattering matrix asymptotics for one dimensional Schrödinger operators with integrable potentials, Proc. Amer. Math. Soc. 130, 59–67 (2001).
* [67] A. Rybkin, Some new and old asymptotic representations of the Jost solution and the Weyl $m$-function for Schrödinger operators on the line, Bull. London Math. Soc. 34, 61–72 (2002).
* [68] B. Simon, Notes on infinite determinants of Hilbert space operators, Adv. Math. 24, 244–273 (1977).
* [69] B. Simon, Trace Ideals and Their Applications, Mathematical Surveys and Monographs, Vol. 120, 2nd ed., Amer. Math. Soc., Providence, RI, 2005.
* [70] B. Simon, Operator Theory. A Comprehensive Course in Analysis, Part 4, Amer. Math. Soc. Providence, RI, 2015.
* [71] M. Spreafico, Zeta determinants of Sturm–Liouville operators, Funct. Anal. Appl. 54, 149–154 (2020).
* [72] L. A. Takhtajan, Quantum Mechanics for Mathematicians, Graduate Studies in Math., Vol. 95, Amer. Math. Soc., Providence, RI, 2008.
* [73] E. C. Titchmarsh, A theorem on infinite products, J. London Math. Soc. 1, 35–37 (1926).
* [74] E. C. Titchmarsh, On integral functions with real negative zeros, Proc. London Math. Soc. 26, 186–200 (1927).
* [75] B. Vertman, Regularized limit of determinants for discrete tori, Monatsh. Math. 186, 539–557 (2018).
* [76] J. Weidmann, Linear Operators in Hilbert Spaces, Graduate Texts in Mathematics, Vol. 68, Springer, New York, 1980.
* [77] J. Weidmann, Lineare Operatoren in Hilberträumen. Teil II: Anwendungen, Teubner, Stuttgart, 2003.
* [78] A. Zettl, Sturm–Liouville Theory, Math. Surveys and Monographs, Vol. 121, Amer. Math. Soc., Providence, RI, 2005.
|
# Asymptotics of $p$-torsion subgroup sizes in class groups of monogenized
cubic fields
Mikaeel Yunus
###### Abstract
Bhargava, Hanke, and Shankar have recently shown that the asymptotic average
$2$-torsion subgroup size of the family of class groups of monogenized cubic
fields with positive and negative discriminants is $3/2$ and $2$,
respectively. In this paper, we provide strong computational evidence for
these asymptotes. We then develop a pair of novel conjectures that predict,
for $p$ prime, the asymptotic average $p$-torsion subgroup size in class
groups of monogenized cubic fields.
## 1 Introduction
Asymptotics of class groups of number fields over the rationals have been
studied for hundreds of years by a wide variety of mathematicians, including
Gauss [12] at the turn of the 19th century; Davenport, Heilbronn, Cohen, Lenstra, and
Martinet [8, 6, 7] in the late 20th century; and Bhargava, Fouvry, and Klüners
in the 21st century [2, 10]. In 2014, Bhargava and Varma [5] showed that the
asymptotic average $2$-torsion subgroup size of the family of class groups of
cubic fields ordered by discriminant remains the same regardless of any local
conditions imposed on these cubic fields. In 2018, Ho, Shankar, and Varma [14]
showed that this average size remains the same even when ordering by height rather
than discriminant.
However, Bhargava, Hanke, and Shankar [3] have recently shown that mandating
these cubic fields to have monogenic rings of integers – a global condition –
surprisingly changes the asymptotic average in question when ordering by
height. This result is unexpected and interesting, and little is known about
it aside from the calculation of the average $2$-torsion subgroup size in [3]
(see also [20] and [21]).
In this paper, we provide computational evidence that imposing the global
monogenicity condition mentioned above does indeed increase the asymptotic
average in question. We then provide models that verify the results of [3] on
the average $2$-torsion subgroup size of the family of class groups of cubic
fields with monogenic rings of integers.
Our general approach is as follows. We first use SageMath [18] to calculate
the average $2$-torsion subgroup size of the family of class groups for
irreducible, monogenic, and maximal binary cubic forms with height less than
$Y$. Here, $Y=10^{11}$ for positive-discriminant binary cubic forms, and
$Y=10^{10}$ for negative-discriminant binary cubic forms. We then employ
genetic programming [15, 19] to predict the necessary asymptotes and provide
evidence for their agreement with the theoretical values [3].
We subsequently present our main result: a pair of new conjectures predicting,
for prime $p$, the asymptotic averages of $p$-torsion subgroup sizes of class
groups of monogenized cubic fields ordered by height.111Note that ordering
these monogenized cubic fields by height instead of discriminant should not
change the asymptotic averages in question (compare the results of [2] and
[14]). For positive discriminants, we predict the average size to be:
$1+\dfrac{1}{p(p-1)}$ (1)
For negative discriminants:
$1+\dfrac{1}{p-1}$ (2)
We develop these conjectures using similar computational methods to the ones
we use to provide evidence for the results of [3].
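As a quick sanity check, the conjectured formulas (1) and (2) specialize at $p=2$ to the proven averages of [3]. The following minimal Python sketch (function names are ours) evaluates both conjectures exactly:

```python
from fractions import Fraction

def avg_p_torsion_positive(p):
    # Conjectured average p-torsion size, positive discriminant: 1 + 1/(p(p-1))
    return 1 + Fraction(1, p * (p - 1))

def avg_p_torsion_negative(p):
    # Conjectured average p-torsion size, negative discriminant: 1 + 1/(p-1)
    return 1 + Fraction(1, p - 1)

# At p = 2 these recover the averages 3/2 and 2 of Bhargava-Hanke-Shankar [3].
print(avg_p_torsion_positive(2), avg_p_torsion_negative(2))  # 3/2 2
```

At $p=3$ the conjectures predict averages of $7/6$ and $3/2$, respectively.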
## 2 Theoretical Background
Here, we present definitions that are necessary for the result of [3] that we
work with in this paper.
### 2.1 Preliminary Definitions
Our goal is to study class groups222Note that we will be using the same
definition of a class group (also known as an ideal class group) as the one
provided in [1]. of number fields. We begin by reviewing number fields.
###### Definition 2.1.1.
Let $K/K_{0}$ be a field extension. Then, the degree of $K/K_{0}$ is
the dimension of $K$ as a vector space over $K_{0}$.
###### Definition 2.1.2.
A number field $K$ is a field extension of $\mathbb{Q}$ that has a finite
degree.
Number fields are a generalization of the rational numbers, $\mathbb{Q}$. Just
as the rational numbers contain the ring $\mathbb{Z}$, there is an analogous
ring of integers of a number field.
###### Definition 2.1.3.
A ring of integers $\mathcal{O}_{K}$ of a number field $K$ is the ring of all
$\alpha$ in $K$ for which there exists a monic polynomial $f$ with
coefficients in $\mathbb{Z}$ such that $f(\alpha)=0$.
The ring of integers of a number field is always a Dedekind domain. An
important difference between rings of integers and $\mathbb{Z}$ is that,
while the rational integers form a principal ideal domain (in fact, a
Euclidean domain), a general ring of integers need not be one.
Additionally, the ring of integers of a number field is maximal in the sense
that it is the maximal order of a number field $K$.
###### Definition 2.1.4.
Let $K$ be a number field. Then, an order of $K$ is a subring of
$\mathcal{O}_{K}$ whose fraction field is equal to $K$.
While orders may not be Dedekind domains, they are always integral domains.
###### Definition 2.1.5.
An integral domain $R$ has rank $n$ if its fraction field is a number field of
degree $n$.
###### Definition 2.1.6.
We say an integral domain $R$ of rank $n$ is monogenic if and only if
$R=\left\\{\sum_{i=0}^{n-1}a_{i}\gamma^{i}\mid a_{i}\in\mathbbm{Z}\right\\}$
for some $\gamma\in R$. In other words, all elements of $R$ can be represented
as a sum of integer multiples of the powers of exactly one element.
We now recall two definitions from [3]:
###### Definition 2.1.7.
We define a monogenized cubic integral domain to be a pair $(R,\alpha)$, where
$R$ is a cubic integral domain, and $\alpha\in R$ such that
$R=\mathbb{Z}[\alpha]$.
###### Remark 2.1.
A monogenized cubic integral domain is monogenic by definition.
###### Definition 2.1.8.
We define a monogenized cubic field to be a pair $(K,\alpha)$, where $K$ is a
cubic field, and $\alpha\in\mathcal{O}_{K}$ such that
$\mathcal{O}_{K}=\mathbb{Z}[\alpha]$.
Finally, we recall a definition from [3] that is essential to Theorem 2.9
below.
###### Definition 2.1.9.
Let $(R,\alpha)$ and $(R^{\prime},\alpha^{\prime})$ be two monogenized cubic
integral domains. Then, $(R,\alpha)$ and $(R^{\prime},\alpha^{\prime})$ are
isomorphic if there exists an isomorphism $R\rightarrow R^{\prime}$ under
which $\alpha$ is mapped to $\alpha+m$ for some $m\in\mathbb{Z}$.
### 2.2 The Delone-Faddeev Correspondence
###### Definition 2.2.1.
A binary cubic form $f$ is a cubic homogeneous polynomial in two variables
with integer coefficients, i.e.
$f(x,y)=f_{0}x^{3}+f_{1}x^{2}y+f_{2}xy^{2}+f_{3}y^{3}$ where
$f_{0},f_{1},f_{2},f_{3}\in\mathbb{Z}$.
We define the action of
$\mathrm{GL}_{2}(\mathbb{Z})=\left\\{\gamma=\left(\begin{matrix}a&b\\ c&d\end{matrix}\right)\ \middle|\ a,b,c,d\in\mathbb{Z}\ \mbox{ and }\det(\gamma)=ad-bc=\pm 1\right\\}$
on the space of all integral binary cubic forms by:
$\gamma\cdot f(x,y):=\frac{1}{\det(\gamma)}\,f\left(\left(\begin{matrix}x&y\end{matrix}\right)\cdot\gamma\right)=\frac{1}{ad-bc}\,f\left(\left(\begin{matrix}x&y\end{matrix}\right)\cdot\left(\begin{matrix}a&b\\ c&d\end{matrix}\right)\right)$
For the remainder of this paper, we refer to an integral domain of rank 3 as a
cubic integral domain.
###### Definition 2.2.2.
The discriminant of a binary cubic form is
$\textrm{Disc}(f):=f_{1}^{2}f_{2}^{2}-4f_{0}f_{2}^{3}-4f_{1}^{3}f_{3}-27f_{0}^{2}f_{3}^{2}+18f_{0}f_{1}f_{2}f_{3}$
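Since $\textrm{Disc}$ is homogeneous of degree $4$ in the coefficients and picks up a factor $\det(\gamma)^{6}$ under substitution, the $1/\det(\gamma)$ twist in the action makes it a genuine $\mathrm{GL}_{2}(\mathbb{Z})$-invariant. This can be checked numerically with a pure-Python sketch (helper names and coefficient-list conventions are ours; the discriminant is the one with the $-27f_{0}^{2}f_{3}^{2}$ term):

```python
def poly_mul(p, q):
    # Multiply homogeneous polynomials in (x, y) given as coefficient lists
    # [x^m, x^(m-1) y, ..., y^m] by convolution.
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def act(gamma, f):
    # Twisted action: (gamma . f)(x, y) = f((x y) . gamma) / det(gamma).
    a, b, c, d = gamma
    det = a * d - b * c
    X, Y = [a, c], [b, d]                      # images of x and y
    basis = [poly_mul(poly_mul(X, X), X),      # X^3
             poly_mul(poly_mul(X, X), Y),      # X^2 Y
             poly_mul(poly_mul(X, Y), Y),      # X Y^2
             poly_mul(poly_mul(Y, Y), Y)]      # Y^3
    return tuple(sum(fi * m[k] for fi, m in zip(f, basis)) // det
                 for k in range(4))

def disc(f):
    f0, f1, f2, f3 = f
    return (f1**2 * f2**2 - 4*f0*f2**3 - 4*f1**3*f3
            - 27*f0**2 * f3**2 + 18*f0*f1*f2*f3)

f = (1, 1, 2, 3)
print(disc(f), disc(act((2, 1, 1, 1), f)))  # equal values
```

Matrices of determinant $-1$ work as well, since $\det(\gamma)^{6}/\det(\gamma)^{4}=\det(\gamma)^{2}=1$.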
We now recall the Delone-Faddeev correspondence as given in [4].
###### Theorem 2.2 (Delone-Faddeev [9]).
There is a natural bijection between the set of
$\mathrm{GL}_{2}(\mathbb{Z})$-equivalence classes of irreducible integral
binary cubic forms and the set of isomorphism classes of cubic integral
domains. Furthermore, the discriminant of an integral binary cubic form $f$ is
equal to the discriminant of the corresponding cubic integral domain $R(f)$.
Although we do not define the discriminant of a cubic integral domain here, we
can utilize the above theorem to define
$\textrm{Disc}(R(f)):=\textrm{Disc}(f)$
The construction of a cubic integral domain from a binary cubic form can be
described quite explicitly in the form of a multiplication table. Given a
binary cubic form $f(x,y)=f_{0}x^{3}+f_{1}x^{2}y+f_{2}xy^{2}+f_{3}y^{3}$, the
cubic integral domain associated to $f(x,y)$ under Theorem 2.2 can be written
as
$R(f):=\\{a+b\cdot\omega+c\cdot\theta\mid a,b,c\in\mathbbm{Z}\\}$
where:
$\omega\cdot\theta=\theta\cdot\omega=-f_{0}\cdot f_{3}$ (3)
$\omega^{2}=-f_{0}\cdot f_{2}-f_{1}\cdot\omega+f_{0}\cdot\theta$ (4)
$\theta^{2}=-f_{1}\cdot f_{3}-f_{3}\cdot\omega+f_{2}\cdot\theta$ (5)
Now, building on Theorem 2.2, we can establish a correspondence between binary
cubic forms and fraction fields.
###### Lemma 2.3.
Let $f(x,y)=f_{0}x^{3}+f_{1}x^{2}y+f_{2}xy^{2}+f_{3}y^{3}$ be a binary cubic
form, and let $R(f)$ be its corresponding integral domain (cf. Theorem 2.2).
Then the fraction field $\textrm{Frac}(R(f))$ is precisely the fraction field
of the polynomial $g(x)=x^{3}+f_{1}x^{2}+f_{0}f_{2}x+f_{0}^{2}f_{3}$.
###### Proof.
If $f(x,y)=f_{0}x^{3}+f_{1}x^{2}y+f_{2}xy^{2}+f_{3}y^{3}$, then the basis
elements of $R(f)$ are $1,\omega,\theta$, where $\omega,\theta$ are determined
via Equations (3), (4), and (5) in the discussion following Theorem 2.2.
This means that
$\begin{cases}\theta=\dfrac{-f_{0}f_{3}}{\omega}\\ \omega^{2}=-f_{0}f_{2}-f_{1}\omega+f_{0}\theta\end{cases}$
Thus, substituting the first equation above into the second equation,
$\omega^{2}=-f_{0}f_{2}-f_{1}\omega-\frac{f_{0}^{2}f_{3}}{\omega}$
Multiplying both sides by $\omega$ and rearranging, we have
$\omega^{3}+f_{1}\omega^{2}+f_{0}f_{2}\omega+f_{0}^{2}f_{3}=0$
Since the solution to the above equation is the second basis element of the
cubic integral domain corresponding to the given binary cubic form, the
fraction field of this binary cubic form is precisely the fraction field of
the following polynomial:
$g(x)=x^{3}+f_{1}x^{2}+f_{0}f_{2}x+f_{0}^{2}f_{3}$
∎
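The identity $g(\omega)=0$ can also be verified coordinate-wise in the basis $(1,\omega,\theta)$: expanding $\omega^{3}=\omega\cdot\omega^{2}$ via the multiplication table gives $\omega^{3}=(f_{0}f_{1}f_{2}-f_{0}^{2}f_{3})+(f_{1}^{2}-f_{0}f_{2})\,\omega-f_{0}f_{1}\,\theta$, and all three coordinates of $g(\omega)$ cancel. A short Python sketch (the expansion of $\omega^{3}$ is hand-computed from the table; the function name is ours):

```python
def g_of_omega(f):
    # Coordinates (const, omega, theta) of g(omega) in R(f), where
    # g(x) = x^3 + f1 x^2 + f0 f2 x + f0^2 f3, computed from the
    # Delone-Faddeev multiplication table.
    f0, f1, f2, f3 = f
    omega  = (0, 1, 0)
    omega2 = (-f0 * f2, -f1, f0)                           # relation (4)
    # omega^3 = omega * omega^2, expanded via the table:
    omega3 = (f0*f1*f2 - f0**2 * f3, f1**2 - f0*f2, -f0*f1)
    return tuple(omega3[k] + f1 * omega2[k] + f0 * f2 * omega[k]
                 + (f0**2 * f3 if k == 0 else 0) for k in range(3))
```

For every integer tuple $(f_{0},f_{1},f_{2},f_{3})$ the result is the zero vector, confirming the lemma.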
### 2.3 Special Properties of Binary Cubic Forms
We start with the following definition:
###### Definition 2.3.1.
A monogenic binary cubic form is a binary cubic form that corresponds to a
monogenic cubic integral domain through the Delone-Faddeev correspondence.
In this subsection, we present two constraints that we are allowed to make on
the coefficients of monogenic binary cubic forms. These constraints prove
useful in the computational methodology (Section 3).
We begin with a constraint on the $x^{3}$-coefficient $f_{0}$ of an arbitrary
monogenic binary cubic form.
###### Lemma 2.4.
Given $f_{0},f_{1},f_{2},f_{3}\in\mathbb{Z}$, the binary cubic form
$f_{0}x^{3}+f_{1}x^{2}y+f_{2}xy^{2}+f_{3}y^{3}$ is monogenic if and only if we
can perform a $\mathrm{GL}_{2}(\mathbb{Z})$-action on it to achieve another
binary cubic form
$f_{0}^{\prime}x^{3}+f_{1}^{\prime}x^{2}y+f_{2}^{\prime}xy^{2}+f_{3}^{\prime}y^{3}$
with
$f_{0}^{\prime},f_{1}^{\prime},f_{2}^{\prime},f_{3}^{\prime}\in\mathbb{Z}$ and
$f_{0}^{\prime}=1$.
###### Proof.
First, we write the proof of the reverse direction. Assume that
$f_{0}^{\prime}=1$. Then, by the Delone-Faddeev correspondence, the following
system of equations is satisfied:
$\omega\cdot\theta=\theta\cdot\omega=-f_{3}^{\prime}$
$\omega^{2}=-f_{2}^{\prime}-f_{1}^{\prime}\cdot\omega+\theta$
$\theta^{2}=-f_{1}^{\prime}\cdot f_{3}^{\prime}-f_{3}^{\prime}\cdot\omega+f_{2}^{\prime}\cdot\theta$
From the second equation in the system above, we have
$\theta=\omega^{2}+f_{2}^{\prime}+f_{1}^{\prime}\omega$. As a result, we have
that any element in $R(f)$ can be written as a linear combination of powers of
$\omega$:
$a+b\omega+c\theta=a+b\omega+c(\omega^{2}+f_{2}^{\prime}+f_{1}^{\prime}\omega)=(a+cf_{2}^{\prime})+(b+cf_{1}^{\prime})\omega+c\omega^{2}$
Let $a^{\prime}=a+cf_{2}^{\prime}$, $b^{\prime}=b+cf_{1}^{\prime}$, and
$c^{\prime}=c$. Then, we have
$R(f)=\\{a^{\prime}+b^{\prime}\omega+c^{\prime}\omega^{2}\mid
a^{\prime},b^{\prime},c^{\prime}\in\mathbb{Z}\\}$. Thus, $R(f)$ is monogenic.
Now, we write the proof of the forward direction. Since the binary cubic form
$f(x,y)$ is monogenic, the corresponding integral domain $R(f)$ is monogenic.
For some $\alpha\in R(f)$, we can write $R(f)=\langle
1,\alpha,\alpha^{2}\rangle$.
Since $\alpha$ is a member of a cubic integral domain, there exists a monic
polynomial $g(x)=x^{3}+g_{1}x^{2}+g_{2}x+g_{3}$, where
$g_{1},\cdots,g_{3}\in\mathbb{Z}$, such that $g(\alpha)=0$. Keeping this in
mind, we can redefine $R(f)$ as follows:
$R(f)=\langle 1,\alpha+g_{1},\alpha^{2}+g_{2}\rangle$
We note that
$(\alpha+g_{1})(\alpha^{2}+g_{2})=\alpha^{3}+g_{1}\alpha^{2}+g_{2}\alpha+g_{1}g_{2}=-g_{3}+g_{1}g_{2}\in\mathbb{Z}$
Thus, the basis $\langle 1,\alpha+g_{1},\alpha^{2}+g_{2}\rangle$ corresponds to a binary
cubic form through the Delone-Faddeev correspondence.
Let $\omega=\alpha+g_{1}$, and $\theta=\alpha^{2}+g_{2}$. We compute the
following:
$\omega^{2}=(\alpha+g_{1})^{2}=\alpha^{2}+2\alpha g_{1}+g_{1}^{2}$
Note that since $\omega=\alpha+g_{1}$ and $\theta=\alpha^{2}+g_{2}$, we have
$\alpha=\omega-g_{1}$ and $\alpha^{2}=\theta-g_{2}$. Thus,
$\omega^{2}=(\theta-g_{2})+2(\omega-g_{1})g_{1}+g_{1}^{2}=\theta-g_{2}+2g_{1}\omega-2g_{1}^{2}+g_{1}^{2}=-(g_{1}^{2}+g_{2})+2g_{1}\omega+\theta$
Comparing this expression for $\omega^{2}$ with Equation (4) shows that the binary cubic form corresponding to this basis has $f_{0}=1$.
∎
For the remainder of this paper, we assume any monogenic binary cubic form has
$x^{3}$-coefficient $1$, because our interest lies in cubic integral domains,
not their bases, and we may choose any representative of a given
$\mathrm{GL}_{2}(\mathbb{Z})$-equivalence class.
Before we proceed, we define the following property of monogenic binary cubic
forms, which we call “height.” This property lets us not only assign a weak
ordering to binary cubic forms, but also a weak ordering to cubic fields.
###### Definition 2.3.2.
Let $f=x^{3}+f_{1}x^{2}y+f_{2}xy^{2}+f_{3}y^{3}$ be a monogenic binary cubic
form, and let $I(f)=f_{1}^{2}-3f_{2}$, $J(f)=-2f_{1}^{3}+9f_{1}f_{2}-27f_{3}$.
Then the height $H$ of $f$ is as follows:
$H(f):=\max\left(|I|^{3},\frac{J^{2}}{4}\right)$
If $f$ is clear from context, we will simply let $I=I(f)$ and $J=J(f)$.
Now, having defined the height invariant, we set a constraint on the second
coefficient $f_{1}$ of an arbitrary binary cubic form. But first, we must
introduce a little more theory.
###### Definition 2.3.3.
We define the subgroup $F(\mathbb{Z})<\mathrm{GL}_{2}(\mathbb{Z})$ as follows:
$F(\mathbb{Z})=\left\\{\gamma_{a}=\left(\begin{matrix}1&0\\\
a&1\end{matrix}\right)\ \middle|\ a\in\mathbb{Z}\right\\}$
Note that every $\gamma_{a}\in F(\mathbb{Z})$ has determinant $1$.
Furthermore, note that for some monogenic binary cubic form
$f(x,y)=x^{3}+f_{1}x^{2}y+f_{2}xy^{2}+f_{3}y^{3}$ and $\gamma_{a}\in
F(\mathbb{Z})$, we have
$\left(\begin{matrix}x&y\end{matrix}\right)\cdot\gamma_{a}=\left(\begin{matrix}x+ay&y\end{matrix}\right)$.
Thus, $\gamma_{a}\cdot f(x,y)=f(x+ay,y)$.
###### Lemma 2.5.
Let $f(x,y)$ be a binary cubic form with $x^{3}$-coefficient $1$, and let
$\gamma_{a}\in F(\mathbb{Z})$. Then, the $x^{3}$-coefficient of
$\gamma_{a}\cdot f(x,y)$ is also $1$.
###### Proof.
Let $f(x,y)=x^{3}+f_{1}x^{2}y+f_{2}xy^{2}+f_{3}y^{3}$. In the discussion
following Definition 2.3.3, we show that $\gamma_{a}\cdot f(x,y)=f(x+ay,y)$.
We now compute the following:
$f(x+ay,y)=(x+ay)^{3}+f_{1}(x+ay)^{2}y+f_{2}(x+ay)y^{2}+f_{3}y^{3}$
$=x^{3}+3ax^{2}y+3a^{2}xy^{2}+a^{3}y^{3}+f_{1}x^{2}y+2af_{1}xy^{2}+a^{2}f_{1}y^{3}+f_{2}xy^{2}+af_{2}y^{3}+f_{3}y^{3}$
$=x^{3}+(3a+f_{1})x^{2}y+(3a^{2}+2f_{1}a+f_{2})xy^{2}+(a^{3}+f_{1}a^{2}+f_{2}a+f_{3})y^{3}$
Upon observation, we can see that the $x^{3}$-coefficient of this polynomial
is still $1$. ∎
###### Lemma 2.6.
Let $f(x,y)$ be a binary cubic form with $x^{3}$-coefficient $1$, and let
$\gamma_{a}\in F(\mathbb{Z})$. Then $I(\gamma_{a}\cdot f)=I(f)$ and
$J(\gamma_{a}\cdot f)=J(f)$.
###### Proof.
Let $f(x,y)=x^{3}+f_{1}x^{2}y+f_{2}xy^{2}+f_{3}y^{3}$. In the proof of Lemma
2.5, we show that
$\gamma_{a}\cdot
f=f(x+ay,y)=x^{3}+(3a+f_{1})x^{2}y+(3a^{2}+2f_{1}a+f_{2})xy^{2}+(a^{3}+f_{1}a^{2}+f_{2}a+f_{3})y^{3}$
We can see that the action of $\gamma_{a}$ sends $f_{1}$ to $3a+f_{1}$,
$f_{2}$ to $3a^{2}+2f_{1}a+f_{2}$, and $f_{3}$ to
$a^{3}+f_{1}a^{2}+f_{2}a+f_{3}$. Thus, this action sends
$I(f)=f_{1}^{2}-3f_{2}$ to the following:
$I(\gamma_{a}\cdot
f)=(3a+f_{1})^{2}-3(3a^{2}+2f_{1}a+f_{2})=9a^{2}+6af_{1}+f_{1}^{2}-9a^{2}-6f_{1}a-3f_{2}$
Thus, by cancellation, $I(\gamma_{a}\cdot f)=f_{1}^{2}-3f_{2}$, so
$I(\gamma_{a}\cdot f)=I(f)$.
Now, we turn towards $J(f)=-2f_{1}^{3}+9f_{1}f_{2}-27f_{3}$. We compute the
following:
$J(\gamma_{a}\cdot f)=-2(3a+f_{1})^{3}+9(3a+f_{1})(3a^{2}+2f_{1}a+f_{2})-27(a^{3}+f_{1}a^{2}+f_{2}a+f_{3})$
$=-2(27a^{3}+27a^{2}f_{1}+9af_{1}^{2}+f_{1}^{3})+9(9a^{3}+9a^{2}f_{1}+2af_{1}^{2}+3af_{2}+f_{1}f_{2})-27(a^{3}+f_{1}a^{2}+f_{2}a+f_{3})$
$=-54a^{3}-54a^{2}f_{1}-18af_{1}^{2}-2f_{1}^{3}+81a^{3}+81a^{2}f_{1}+18af_{1}^{2}+27af_{2}+9f_{1}f_{2}-27a^{3}-27a^{2}f_{1}-27af_{2}-27f_{3}$
Thus, by cancellation, $J(\gamma_{a}\cdot f)=-2f_{1}^{3}+9f_{1}f_{2}-27f_{3}$,
so $J(\gamma_{a}\cdot f)=J(f)$. This concludes the proof. ∎
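The invariance of $I$ and $J$ under the $F(\mathbb{Z})$-action is easy to spot-check numerically. A minimal Python sketch (function names are ours), using the explicit coefficient transformation from the proof of Lemma 2.5:

```python
def IJ(f1, f2, f3):
    # Invariants of the monic form x^3 + f1 x^2 y + f2 x y^2 + f3 y^3.
    return (f1**2 - 3*f2, -2*f1**3 + 9*f1*f2 - 27*f3)

def shift(a, f1, f2, f3):
    # Action of gamma_a in F(Z): f(x, y) -> f(x + a*y, y).
    return (3*a + f1,
            3*a**2 + 2*f1*a + f2,
            a**3 + f1*a**2 + f2*a + f3)
```

For any integer $a$, `IJ(*shift(a, f1, f2, f3))` agrees with `IJ(f1, f2, f3)`.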
###### Definition 2.3.4.
We define the height of a monogenized cubic integral domain to be equivalent
to the height of its corresponding orbit of binary cubic forms.
###### Lemma 2.7.
In every $F(\mathbb{Z})$-equivalence class of monogenic binary cubic forms,
there exists exactly one binary cubic form
$f(x,y)=x^{3}+f_{1}x^{2}y+f_{2}xy^{2}+f_{3}y^{3}$ with $f_{1}\in\\{-1,0,1\\}$.
Additionally, $f_{1}\equiv J(f)\ (\mathrm{mod}\ 3)$.
###### Proof.
To prove the first part of the lemma, we will start by demonstrating that the
binary cubic form $f(x,y)$, as described in the lemma, exists in any arbitrary
$F(\mathbb{Z})$-equivalence class. Then, we will show that only one such
binary cubic form exists in this equivalence class.
We have shown in the proof of Lemma 2.5 that under the action of an
$F(\mathbb{Z})$-matrix, the $x^{2}y$-coefficient $f_{1}$ of $f(x,y)$ becomes
$f_{1}+3a$. Now, let $f_{1}^{\prime}\in\\{-1,0,1\\}$ such that
$f_{1}^{\prime}\equiv f_{1}\ (\mathrm{mod}\ 3)$. We want to find an
$a\in\mathbb{Z}$ such that $f_{1}+3a=f_{1}^{\prime}$. We have
$3a=f_{1}^{\prime}-f_{1}$. Thus,
$a=\frac{f_{1}^{\prime}-f_{1}}{3}$
Since $f_{1}^{\prime}\equiv f_{1}\ (\mathrm{mod}\ 3)$ – in other words,
$f_{1}^{\prime}-f_{1}\equiv 0\ (\mathrm{mod}\ 3)$ – $a$ is always an integer.
Hence, we are able to reduce the original binary cubic form to one whose
$x^{2}y$-coefficient is $-1$, $0$, or $1$.
Now that we have shown that such a binary cubic form exists, we will
demonstrate that only one such binary cubic form exists. Let
$f_{2}^{\prime}=3a^{2}+2f_{1}a+f_{2}$ and
$f_{3}^{\prime}=a^{3}+f_{1}a^{2}+f_{2}a+f_{3}$, where
$a=\frac{f_{1}^{\prime}-f_{1}}{3}$ and $f_{1}^{\prime}$ is defined as above.
Let $g(x,y)$ be the “reduced” form of $f(x,y)$ – i.e.
$g(x,y)=x^{3}+f_{1}^{\prime}x^{2}y+f_{2}^{\prime}xy^{2}+f_{3}^{\prime}y^{3}$.
By Lemma 2.6, $I(f)$ and $J(f)$ are invariant in this
$F(\mathbb{Z})$-equivalence class. Thus, we can derive the following
equations:
$\begin{cases}I(f)=f_{1}^{\prime 2}-3f_{2}^{\prime}\implies
f_{2}^{\prime}=\frac{f_{1}^{\prime 2}-I(f)}{3}\\\ J(f)=-2f_{1}^{\prime
3}+9f_{1}^{\prime}f_{2}^{\prime}-27f_{3}^{\prime}\implies
f_{3}^{\prime}=-\frac{2f_{1}^{\prime
3}}{27}+\frac{f_{1}^{\prime}f_{2}^{\prime}}{9}-\frac{J(f)}{27}\end{cases}$
We can see from the above equations that there is only one possible value for
$f_{2}^{\prime}$ and $f_{3}^{\prime}$. Thus, there is exactly one binary cubic
form with $x^{2}y$-coefficient in the set $\\{-1,0,1\\}$ in every
$F(\mathbb{Z})$-equivalence class.
Finally, we show that $f_{1}\equiv J(f)\ (\mathrm{mod}\ 3)$. Working modulo
$3$, and using $f_{1}^{3}\equiv f_{1}\ (\mathrm{mod}\ 3)$ (Fermat's little
theorem),
$J(f)\equiv-2f_{1}^{3}+9f_{1}f_{2}-27f_{3}\equiv-2f_{1}^{3}\equiv-2f_{1}\equiv f_{1}\ (\mathrm{mod}\ 3)$
∎
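The reduction procedure in this proof is constructive: shift by $a=(f_{1}^{\prime}-f_{1})/3$. A short Python sketch (the function name is ours) implements it:

```python
def reduce_form(f1, f2, f3):
    # Return the unique F(Z)-translate of x^3 + f1 x^2 y + f2 x y^2 + f3 y^3
    # whose x^2 y-coefficient lies in {-1, 0, 1}.
    r = f1 % 3
    f1p = r if r < 2 else -1        # representative of f1 mod 3 in {-1, 0, 1}
    a = (f1p - f1) // 3             # an integer, since f1p == f1 (mod 3)
    return (3*a + f1,               # equals f1p
            3*a**2 + 2*f1*a + f2,
            a**3 + f1*a**2 + f2*a + f3)
```

For example, the form with $(f_{1},f_{2},f_{3})=(7,0,0)$ reduces to $(1,-16,20)$, and one can check that $I$ is unchanged.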
As before, we can treat any arbitrary monogenic binary cubic form as being in
an $F(\mathbb{Z})$-equivalence class with a binary cubic form with
$x^{2}y$-coefficient in the set $\\{-1,0,1\\}$. Thus, we can disregard all
binary cubic forms with other $x^{2}y$-coefficients.
###### Lemma 2.8.
Let $f(x,y)$ and $g(x,y)$ be binary cubic forms with $x^{3}$-coefficient $1$.
If $I(f)=I(g)$ and $J(f)=J(g)$, then $f$ and $g$ are
$F(\mathbb{Z})$-equivalent.
###### Proof.
Let $f(x,y)=x^{3}+f_{1}x^{2}y+f_{2}xy^{2}+f_{3}y^{3}$, and let
$g(x,y)=x^{3}+g_{1}x^{2}y+g_{2}xy^{2}+g_{3}y^{3}$. Moreover, suppose that
$I(f)=I(g)$ and $J(f)=J(g)$.
According to Lemma 2.7, $f_{1}\equiv J\ (\mathrm{mod}\ 3)$ and $g_{1}\equiv J\
(\mathrm{mod}\ 3)$ – hence, $f_{1}\equiv g_{1}\ (\mathrm{mod}\ 3)$. We can
pick an $a\in\mathbb{Z}$ such that $g_{1}=3a+f_{1}$.
Since $I(f)=I(g)$, we have
$f_{1}^{2}-3f_{2}=g_{1}^{2}-3g_{2}=(3a+f_{1})^{2}-3g_{2}=9a^{2}+6f_{1}a+f_{1}^{2}-3g_{2}$
After cancelling the $f_{1}^{2}$ terms and dividing both sides by $3$, we have
$-f_{2}=3a^{2}+2f_{1}a-g_{2}$
Rearranging, we get
$g_{2}=3a^{2}+2f_{1}a+f_{2}$
Since $J(f)=J(g)$, we have
$-2f_{1}^{3}+9f_{1}f_{2}-27f_{3}=-2(3a+f_{1})^{3}+9(3a+f_{1})(3a^{2}+2f_{1}a+f_{2})-27g_{3}$
Expanding the right-hand side as in the proof of Lemma 2.6, it equals
$-2f_{1}^{3}+9f_{1}f_{2}+27a^{3}+27a^{2}f_{1}+27af_{2}-27g_{3}$
Cancelling $-2f_{1}^{3}+9f_{1}f_{2}$ from both sides, dividing by $27$, and rearranging, we get
$g_{3}=a^{3}+a^{2}f_{1}+af_{2}+f_{3}$
Combining the expressions that we have obtained for $g_{1}$, $g_{2}$, and
$g_{3}$, we have the following:
$g(x,y)=x^{3}+(3a+f_{1})x^{2}y+(3a^{2}+2f_{1}a+f_{2})xy^{2}+(a^{3}+f_{1}a^{2}+f_{2}a+f_{3})y^{3}$
In the proof of Lemma 2.5, we show that if $\gamma_{a}\in F(\mathbb{Z})$ such
that $\gamma_{a}=\left(\begin{smallmatrix}1&0\\\ a&1\end{smallmatrix}\right)$,
then
$\gamma_{a}\cdot
f(x,y)=x^{3}+(3a+f_{1})x^{2}y+(3a^{2}+2f_{1}a+f_{2})xy^{2}+(a^{3}+f_{1}a^{2}+f_{2}a+f_{3})y^{3}$
Thus, $g(x,y)=\gamma_{a}\cdot f(x,y)$, so $f$ and $g$ are
$F(\mathbb{Z})$-equivalent.
∎
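Lemma 2.8, together with the formulas from the proof of Lemma 2.7, gives an explicit inverse map from a pair of invariants $(I,J)$ back to the reduced form, which is exactly what the generation loop of Section 3 needs. A hedged Python sketch (the function name is ours; not every $(I,J)$ pair is eligible, so failures of integrality are reported as `None`):

```python
def form_from_IJ(I, J):
    # Attempt to recover the reduced monic form (f1, f2, f3) with
    # f1 in {-1, 0, 1} from the invariants (I, J); return None if no
    # integral binary cubic form exists for this pair.
    r = J % 3
    f1 = r if r < 2 else -1          # f1 = J (mod 3), by Lemma 2.7
    if (f1 * f1 - I) % 3:
        return None                  # f2 = (f1^2 - I)/3 must be an integer
    f2 = (f1 * f1 - I) // 3
    num = -2 * f1**3 + 9 * f1 * f2 - J
    if num % 27:
        return None                  # f3 = (-2 f1^3 + 9 f1 f2 - J)/27 likewise
    return (f1, f2, num // 27)
```

For instance, the reduced form $(1,-16,20)$ has $(I,J)=(49,-686)$ and is recovered exactly, while an ineligible pair such as $(1,0)$ returns `None`.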
Finally, we state a property of monogenic binary cubic forms that proves to be
crucial in our computational methodology (see Section 3).
###### Theorem 2.9 (Theorem 2.2 of [3]).
There is a natural bijection between $F(\mathbb{Z})$-equivalence classes of
binary cubic forms with $x^{3}$-coefficient $1$ and isomorphism classes of
monogenized cubic integral domains.
### 2.4 Average $2$-Torsion Sizes of Class Groups of Cubic Fields
Before stating Theorem 1.2 of [3], we define the following:
###### Definition 2.4.1.
Given a finite abelian group $G$ and a prime number $p$, the $p$-torsion
subgroup of $G$, denoted $G[p]$, is defined as follows:
$G[p]:=\\{g\in G\ |\ g^{p}=1\\}$
Moreover, the $p$-torsion size of $G$ is defined to be the cardinality of the
$p$-torsion subgroup of $G$.
A fact about $p$-torsion subgroups, which follows from Lagrange’s Theorem,
proves to be useful in our later discussion of the computational methodology:
###### Lemma 2.10.
Let $G$ be a finite abelian group. For a prime number $p$,
$G[p]=\\{g\in G\ |\ g^{p}=g_{0}^{|G|}\quad\forall g_{0}\in G\\}$
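When the group is given through its cyclic decomposition $G\cong\mathbb{Z}/n_{1}\times\cdots\times\mathbb{Z}/n_{k}$ (as a class-group computation typically reports it), each factor $\mathbb{Z}/n_{i}$ contributes $\gcd(n_{i},p)$ elements of order dividing $p$, so $|G[p]|=p^{\#\{i\,:\,p\mid n_{i}\}}$. A minimal sketch (function name is ours):

```python
def p_torsion_size(cyclic_orders, p):
    # |G[p]| for G = Z/n1 x ... x Z/nk and p prime: each cyclic factor
    # contributes a factor of p exactly when p divides its order.
    size = 1
    for n in cyclic_orders:
        if n % p == 0:
            size *= p
    return size

# Z/4 x Z/6 has 2-torsion subgroup of size 4 and 3-torsion subgroup of size 3.
print(p_torsion_size([4, 6], 2), p_torsion_size([4, 6], 3))  # 4 3
```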
Now we have enough theoretical background to state the theorems in question
from [2] and [3]. First, the theorem from [2]:
###### Theorem 2.11 (Theorem 5 of [2]).
Let us denote the $2$-torsion subgroup of the class group of a cubic field $K$
as $Cl(K)[2]$. Then,
* •
The average size, when ordering by discriminant, of $Cl(K)[2]$ for cubic
fields $K$ of positive discriminant is $5/4$.
* •
The average size, when ordering by discriminant, of $Cl(K)[2]$ for cubic
fields $K$ of negative discriminant is $3/2$.
Note that in [14], the authors prove the same mean values, but when ordering
by height.
Now, the theorem from [3]:
###### Theorem 2.12 (Theorem 1.2 of [3]).
Again, let us denote the $2$-torsion subgroup of the class group of a cubic
field $K$ as $Cl(K)[2]$. Then,
* •
The average size, when ordering by height, of $Cl(K)[2]$ for cubic fields $K$
having monogenic rings of integers with positive discriminant is $3/2$.
* •
The average size, when ordering by height, of $Cl(K)[2]$ for cubic fields $K$
having monogenic rings of integers with negative discriminant is $2$.
This is the result that we provide computational evidence for in this paper.
It is surprising that the average $2$-torsion sizes from Theorem 2.11 change
when we mandate that the cubic fields in question have monogenized rings of
integers. Based on local behavior, we would expect these average $2$-torsion
sizes not to change under the added restriction.
Before we continue, we introduce some notation that can be used to express the
above theorem:
###### Definition 2.4.2.
For some $Y\in\mathbb{Z}$, $Y\geq 0$, let $\mathcal{F}^{+}_{Y}$ denote the set
of positive-discriminant, maximal, and monogenized cubic integral domains
$(R,\alpha)$ with height less than $Y$. Similarly, let $\mathcal{F}^{-}_{Y}$
denote the set of negative-discriminant, maximal, and monogenized cubic
integral domains $(R,\alpha)$ with height less than $Y$.
Let $K$ denote the fraction field $\textrm{Frac}(R)$. Then, for $p$ prime, we
define the following:
$\mu_{p}(\mathcal{F}^{+}(Y)):=\frac{\sum\limits_{(R,\alpha)\in\mathcal{F}^{+}_{Y}}|Cl(K)[p]|}{|\mathcal{F}^{+}_{Y}|}$
(6)
$\mu_{p}(\mathcal{F}^{-}(Y)):=\frac{\sum\limits_{(R,\alpha)\in\mathcal{F}^{-}_{Y}}|Cl(K)[p]|}{|\mathcal{F}^{-}_{Y}|}$
(7)
With this new notation, Theorem 2.12 can be rewritten as
$\lim_{Y\rightarrow\infty}\mu_{2}(\mathcal{F}^{+}(Y))=3/2$
$\lim_{Y\rightarrow\infty}\mu_{2}(\mathcal{F}^{-}(Y))=2$
## 3 Computational Methodology
Our computational methodology for verifying Theorem 2.12 is summarized below.
Our goal is to compute the averages $\mu_{2}(\mathcal{F}^{+}(Y))$ and
$\mu_{2}(\mathcal{F}^{-}(Y))$ for very large values of $Y$.
* •
Step 1: Generate all monogenic binary cubic forms of bounded height and
positive/negative discriminant
We can assume that any monogenic binary cubic form $f(x,y)$ in our computation
has $f_{0}=1$ by Lemma 2.4. Moreover, by Lemma 2.7, we can assume that
$f_{1}\in\\{-1,0,1\\}$ and $f_{1}\equiv J(f)\ (\mathrm{mod}\ 3)$. By looping
through all of the possible values of $I$ and $J$, we can see that every
$(I,J)$ pair corresponds to a single $(f_{0},f_{1})$ pair (by Lemma 2.8). This
approach requires us to impose bounds on $I$ and $J$ so that the number of
binary cubic forms we generate remains finite. Upon calculation, we find that
given a positive integer $Y$, for all binary cubic forms $f(x,y)$ such that
$H(f)<Y$, $|I|<\sqrt[3]{Y}$ and $|J|<2\sqrt{Y}$.
We now construct the following nested loop: we loop from
$I=-\lfloor\sqrt[3]{Y}\rfloor$ to $I=\lfloor\sqrt[3]{Y}\rfloor$, and for
each individual value of $I$ in this loop, we loop from
$J=-2\lfloor\sqrt{Y}\rfloor$ to $J=2\lfloor\sqrt{Y}\rfloor$. For any
$(I,J)$ pair, we will always have $f_{0}=1$ and $f_{1}=-1$, $0$, or $1$
depending on the value of $J$. According to Definition 2.3.2, we also have
$\begin{cases}I=f_{1}^{2}-3f_{2}\implies f_{2}=\frac{f_{1}^{2}-I}{3}\\\
J=-2f_{1}^{3}+9f_{1}f_{2}-27f_{3}\implies
f_{3}=-\frac{2f_{1}^{3}}{27}+\frac{f_{1}f_{2}}{3}-\frac{J}{27}\end{cases}$
If $f_{2},f_{3}$ are integers, then we have generated a valid binary cubic
form; otherwise, the $(I,J)$ pair does not have a corresponding binary cubic
form.
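The generation loop above can be sketched in plain Python. The height convention $H(f)=\max(|I|^{3},J^{2}/4)$ used in the final filter is an assumption inferred from the stated bounds $|I|<\sqrt[3]{Y}$ and $|J|<2\sqrt{Y}$; the coefficient formulas follow Definition 2.3.2 as quoted above.

```python
def generate_monogenic_forms(Y):
    """Enumerate monogenic binary cubic forms x^3 + f1*x^2*y + f2*x*y^2 + f3*y^3
    with f1 in {-1, 0, 1}, f1 = J (mod 3), and (assumed) height
    H(f) = max(|I|^3, J^2/4) < Y.  Returns (f0, f1, f2, f3) tuples."""
    forms = []
    I_max = int(round(Y ** (1 / 3)))
    J_max = 2 * int(Y ** 0.5)
    for I in range(-I_max, I_max + 1):
        for J in range(-J_max, J_max + 1):
            f1 = ((J + 1) % 3) - 1          # unique f1 in {-1,0,1} with f1 = J (mod 3)
            if (f1 * f1 - I) % 3 != 0:      # I = f1^2 - 3*f2 forces 3 | (f1^2 - I)
                continue
            f2 = (f1 * f1 - I) // 3
            num = -2 * f1 ** 3 + 9 * f1 * f2 - J
            if num % 27 != 0:               # J = -2*f1^3 + 9*f1*f2 - 27*f3
                continue
            f3 = num // 27
            if abs(I) ** 3 < Y and J * J < 4 * Y:
                forms.append((1, f1, f2, f3))
    return forms
```

For example, the pair $(I,J)=(0,0)$ yields the (degenerate) form $x^{3}$; reducible forms like this are ejected later, in Step 2.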
* •
Step 2: Calculate the fraction field $K$ of the monogenized cubic integral
domain corresponding to each monogenic binary cubic form: Before starting this
step, note that we can only construct a correspondence between monogenized
cubic integral domains and binary cubic forms with $x^{3}$-coefficient $1$ and
$x^{2}y$-coefficient in the set $\\{-1,0,1\\}$ because of Theorem 2.9,
together with Remark 2.1.
We now have a large list of binary cubic forms that we must loop through.
Given a specific binary cubic form within our loop, we can find the minimal
polynomial for the corresponding fraction field using Lemma 2.3. But since the
binary cubic forms we are concerned with are all monogenic, by Lemma 2.4, we
can simplify the minimal polynomial for the fraction field to the following:
$g(x)=x^{3}+f_{1}x^{2}+f_{2}x+f_{3}$
We can now compute the fraction field corresponding to the given binary cubic
form. (Note that if the above polynomial is reducible, the resulting quotient
ring is not a field – thus, the corresponding binary cubic form will be
ejected from the computation.)
* •
Step 3: Ensure that each binary cubic form is maximal: Within our large binary
cubic form loop described in Step 2, we must ensure that every binary cubic
form’s corresponding cubic integral domain is maximal inside its fraction
field. In order to do so, we must compute the discriminant of each binary
cubic form and its corresponding fraction field, and then ensure that the two
discriminants are equal. This will imply that the cubic integral domain
corresponding to each binary cubic form is isomorphic to the ring of integers
inside the binary cubic form’s fraction field, implying that the cubic
integral domain is indeed maximal. Any binary cubic form whose cubic integral
domain is not maximal will be ejected from the computation – otherwise, it
will remain until the end of the computation.
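The form discriminants needed in this step can be computed directly from the coefficients. As a sanity check on the sign conventions used for $I$ and $J$ above, the discriminant of the monic cubic $f(x,1)=x^{3}+f_{1}x^{2}+f_{2}x+f_{3}$ satisfies the standard identity $\mathrm{disc}(f)=(4I^{3}-J^{2})/27$; the sketch below verifies both sides agree (the identity is classical, not a result of this paper):

```python
def disc_cubic(b, c, d):
    """Discriminant of the monic cubic x^3 + b*x^2 + c*x + d."""
    return 18 * b * c * d - 4 * b ** 3 * d + b * b * c * c - 4 * c ** 3 - 27 * d * d

def disc_from_invariants(b, c, d):
    """(4*I^3 - J^2)/27 with I = b^2 - 3c and J = -2b^3 + 9bc - 27d,
    matching the I, J conventions quoted from Definition 2.3.2."""
    I = b * b - 3 * c
    J = -2 * b ** 3 + 9 * b * c - 27 * d
    return (4 * I ** 3 - J * J) // 27      # exact: 27 always divides 4I^3 - J^2
```

For instance, $x^{3}-x$ has $I=3$, $J=0$, and discriminant $4\cdot 27/27=4$.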
* •
Step 4: Find the class group of each fraction field $K$ resulting from Step 2:
We do this using SageMath [18].
* •
Step 5: Calculate the number of 2-torsion elements in class group of each
fraction field $K$: By Lemma 2.10, we simply have to find all the elements of
$Cl(K)[2]=\\{g\in Cl(K)\ |\ g^{2}=e=g_{0}^{|Cl(K)|}\\}$
where $e$ is the multiplicative identity of $Cl(K)$ and $g_{0}$ is an
arbitrary element of $Cl(K)$. We perform this computation for every surviving
fraction field $K$ in the large binary cubic form loop described in Step 2.
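In practice, a computer algebra system such as SageMath returns the class group as a product of cyclic groups $\mathbb{Z}/d_{1}\times\cdots\times\mathbb{Z}/d_{k}$, in which case the count in this step reduces to a product over the invariant factors: each cyclic factor $\mathbb{Z}/d$ contributes $p$ torsion elements if $p\mid d$ and only the identity otherwise. A sketch, assuming that representation:

```python
def p_torsion_size(invariants, p):
    """Size of the p-torsion subgroup of Z/d_1 x ... x Z/d_k for prime p.

    Each cyclic factor Z/d contributes p elements of order dividing p when
    p | d, and only the identity otherwise; the contributions multiply."""
    size = 1
    for d in invariants:
        if d % p == 0:
            size *= p
    return size
```

For example, a class group isomorphic to $\mathbb{Z}/2\times\mathbb{Z}/4\times\mathbb{Z}/3$ has $2$-torsion size $4$.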
* •
Step 6: Compute averages $\mu_{2}(\mathcal{F}^{+}(Y))$ and
$\mu_{2}(\mathcal{F}^{-}(Y))$
Our ultimate goal now is to increase the height restriction $Y$ from Step 1 to
a large enough number so that we can predict the values of
$\mu_{2}(\mathcal{F}^{+}(Y))$ and $\mu_{2}(\mathcal{F}^{-}(Y))$. We were able
to reach a $Y$-value of 100 billion for positive-discriminant monogenic binary
cubic forms, and 10 billion for negative-discriminant monogenic binary cubic
forms. (The limiting factor here was computation time – our longest run took
about a week to complete.)
## 4 Computational Data Towards Theorem 2.12
In this section, we present our computational data on the average $2$-torsion
size of class groups of monogenized cubic fields, ordered by height. First, we
will present two propositions from [3] and compare them with computational
data.
### 4.1 Asymptotics on Binary Cubic Form Counts
The following proposition is equivalent to Proposition 3.2 of [3].
###### Proposition 4.1.
Let the number of $F(\mathbb{Z})$-equivalence classes of positive-discriminant
monogenic binary cubic forms with height less than $Y$ be $N^{+}(Y)$, and let
the number of $F(\mathbb{Z})$-equivalence classes of negative-discriminant
monogenic binary cubic forms with height less than $Y$ be $N^{-}(Y)$. Then,
$N^{+}(Y)=\frac{8}{135}Y^{5/6}+O(Y^{1/2+\epsilon})$
$N^{-}(Y)=\frac{32}{135}Y^{5/6}+O(Y^{1/2+\epsilon})$
Figure 1 below shows $N^{+}(Y)$ in a dotted blue line (computed by our
program) and $\frac{8}{135}Y^{5/6}$ in a solid red line. Figure 2 shows
$N^{-}(Y)$ in a dotted blue line (also computed by our program) and
$\frac{32}{135}Y^{5/6}$ in a solid red line. In Figure 1, $Y$ goes up to
$10^{11}$; in Figure 2, $Y$ goes up to $10^{10}$.
Figure 1: The number of positive-discriminant monogenic binary cubic forms bounded by height $Y$, compared to the equation $\frac{8}{135}Y^{5/6}$
Figure 2: The number of negative-discriminant monogenic binary cubic forms bounded by height $Y$, compared to the equation $\frac{32}{135}Y^{5/6}$
Note that in both Figure 1 and Figure 2, from a visual standpoint, the
theoretical and computational results are in agreement. Below is a more
detailed error analysis.
For $Y=10^{11}$, our program calculated that $N^{+}(Y)=86,961,377$. The
expected theoretical value of $N^{+}(Y)$ is
$\frac{8}{135}(10^{11})^{5/6}=86,980,697.341$, implying our calculated value
has an error of $0.022\%$.
For $Y=10^{10}$, our program calculated that $N^{-}(Y)=51,074,450$. The
expected theoretical value of $N^{-}(Y)$ is
$\frac{32}{135}(10^{10})^{5/6}=51,068,081.542$, implying our calculated value
has an error of $0.012\%$.
From these low error percentages, it is clear that our computational results
quickly agree with the corresponding theoretical results.
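Both relative errors quoted above can be reproduced directly from the leading terms of Proposition 4.1:

```python
def relative_error_percent(computed, theoretical):
    """Percent deviation of a computed count from its theoretical prediction."""
    return abs(theoretical - computed) / theoretical * 100

# Leading-term predictions from Proposition 4.1 at the largest heights reached
n_plus_theory = 8 / 135 * (10 ** 11) ** (5 / 6)    # ~86,980,697
n_minus_theory = 32 / 135 * (10 ** 10) ** (5 / 6)  # ~51,068,082

err_plus = relative_error_percent(86_961_377, n_plus_theory)    # ~0.022 %
err_minus = relative_error_percent(51_074_450, n_minus_theory)  # ~0.012 %
```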
### 4.2 Asymptotics on Binary Cubic Form Maximality Ratios
The following proposition can be deduced from Proposition 4.1 above, in
conjunction with Theorem 3.6 of [3].
###### Proposition 4.2.
Let $N_{max}^{+}(Y)$ be the number of $F(\mathbb{Z})$-equivalence classes of
maximal positive-discriminant binary cubic forms with height less than $Y$,
and let $N_{max}^{-}(Y)$ be the number of $F(\mathbb{Z})$-equivalence classes
of maximal negative-discriminant binary cubic forms with height less than $Y$.
In addition, let $N^{+}(Y)$, $N^{-}(Y)$ be as defined in Proposition 4.1.
Then,
$\frac{N_{max}^{+}(Y)}{N^{+}(Y)}=\frac{N_{max}^{-}(Y)}{N^{-}(Y)}=\frac{1}{\zeta(2)}$
Figure 3 below demonstrates the convergence of both
$\frac{N_{max}^{+}(Y)}{N^{+}(Y)}$ and $\frac{N_{max}^{-}(Y)}{N^{-}(Y)}$ to
$\frac{1}{\zeta(2)}$.
Figure 3: The fractions of positive-discriminant and negative-discriminant monogenic binary cubic forms that are maximal, compared with $\frac{1}{\zeta(2)}$
In Figure 3, as early as $Y=2\cdot 10^{6}$, we have
$\dfrac{N_{max}^{+}(Y)}{N^{+}(Y)}=\dfrac{6318}{10405}=0.607$
$\dfrac{N_{max}^{-}(Y)}{N^{-}(Y)}=\dfrac{25662}{42143}=0.609$
Both of these values are within $0.2$ percent of $\frac{1}{\zeta(2)}$.
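The comparison against $\frac{1}{\zeta(2)}=\frac{6}{\pi^{2}}$ can be checked numerically from the counts above:

```python
import math

inv_zeta2 = 6 / math.pi ** 2    # 1/zeta(2) = 6/pi^2, approximately 0.6079

ratio_pos = 6318 / 10405        # N_max^+ / N^+ at Y = 2 * 10^6
ratio_neg = 25662 / 42143       # N_max^- / N^- at Y = 2 * 10^6

dev_pos = abs(ratio_pos - inv_zeta2) / inv_zeta2 * 100   # percent deviation
dev_neg = abs(ratio_neg - inv_zeta2) / inv_zeta2 * 100
```

Both deviations come out below $0.2$ percent, as claimed.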
In order to ensure that a maximality constraint on our binary cubic forms
would not introduce too much additional error to our computation, let us
perform an error analysis similar to the one we conducted at the end of
Section 4.1, with the requirement that our binary cubic forms be maximal.
For $Y=2\cdot 10^{6}$, our program calculated that $N_{max}^{+}(Y)=6,318$. The
expected theoretical value of $N_{max}^{+}(Y)$ is
$\frac{8}{135\zeta(2)}(2\cdot 10^{6})^{5/6}=6,418.980$, implying our
calculated value has an error of $1.573\%$.
For the same value of $Y$, our program calculated that
$N_{max}^{-}(Y)=25,662$. The expected theoretical value of $N_{max}^{-}(Y)$ is
$\frac{32}{135\zeta(2)}(2\cdot 10^{6})^{5/6}=25,675.922$, implying our
calculated value has an error of $0.054\%$.
Since these error percentages are once again low, we are assured that our
computational results agree with the corresponding theoretical predictions.
### 4.3 Asymptotic Averages of Binary Cubic Forms
We are now ready to give evidence towards Theorem 2.12. The computational
verification process relied heavily on Eureqa, a computer application that
employs genetic programming [15] to find regression curves that optimally fit
the data given. See section 5.1 for more details on how Eureqa works.
Figures 4 and 5 below display $\mu_{2}(\mathcal{F}^{+}(Y))$ and
$\mu_{2}(\mathcal{F}^{-}(Y))$ with respect to $Y$:
Figure 4: $\mu_{2}(\mathcal{F}^{+}(Y))$ (the plot marks $Y=787,702,088$), compared to the general binary cubic form asymptote in [2] (in black), as well as the monogenic binary cubic form asymptote predicted in [3] (in red)
Figure 5: $\mu_{2}(\mathcal{F}^{-}(Y))$ (the plot marks $Y=17,382,351$), compared to the general binary cubic form asymptote in [2] (in black), as well as the monogenic binary cubic form asymptote predicted in [3] (in red)
Note that upon examination of the above two graphs, we can instantly determine
that $\lim_{Y\rightarrow\infty}\mu_{2}(\mathcal{F}^{+}(Y))$ and
$\lim_{Y\rightarrow\infty}\mu_{2}(\mathcal{F}^{-}(Y))$ are different from the
asymptotic averages in Theorem 2.11 (where no monogenicity condition is
imposed). This is undoubtedly a surprising result.
After we inserted all of the data comprising the above two graphs into a
spreadsheet, Eureqa formulated the following pair of equations nearly
instantaneously:
$\begin{cases}\mu_{2}(\mathcal{F}^{+}(Y))\approx 1.51-1.34Y^{-0.0810}\\\
\mu_{2}(\mathcal{F}^{-}(Y))\approx 1.99-2.05Y^{-0.0878}\end{cases}$
These equations have the exact form we are looking for: asymptotes of
approximately $3/2$ and $2$, respectively, followed by second-order terms that
vanish to zero as $Y$ approaches infinity. It is also worth noting that, when
compared against the data, both of the above equations have mean-squared error
(MSE) values below $10^{-7}$.
It is evident now that this model independently relates the computational data
to the asymptotics given in Theorem 2.12. The next step, as mentioned
previously, is to extend our approach for computing class group $2$-torsion
sizes to computing class group $p$-torsion sizes, where $p$ is a prime
integer. This extension is detailed in the next section, along with our main
results.
## 5 Main Results
### 5.1 Introduction to Genetic Programming
As mentioned in Section 4.3, Eureqa [19] is a computer application that uses
genetic programming [15] to find regression curves that optimally fit the data
given. Here we will discuss what genetic programming is, and why it is useful
for solving regression problems like the ones we are working with.
Genetic programming (GP) is an artificial intelligence-based problem-solving
technique. Broadly speaking, if a GP-based algorithm is given a problem to
solve, it will start by developing randomly generated solutions that often do
not solve the problem well. It will then conduct two types of operations,
namely “reproduction” operations and “mutation” operations, to try to modify
the original solutions and develop new ones. A “reproduction” operation will
take two proposed solutions and combine them together somehow to form a new
solution. A “mutation” operation will take a proposed solution and change it
in some way. (It may be obvious by now that the name “genetic programming,”
as well as the names of the two types of operations involved in GP, are
derived from Darwin’s theory of evolution.)
Through combinations of these two types of operations, the algorithm will
improve the original randomly-generated solutions and develop new solutions
that solve the problem better. We are able to discern whether or not a
proposed solution solves the given problem well based on some evaluation
metric (e.g. mean-squared error) – this metric is usually referred to as a
loss function. Generally, a solution with a lower loss (i.e. a lower loss-
function value) is a better solution.
In our case, we are using a GP-based algorithm – i.e. Eureqa – to solve a
regression problem. We start with a set of random solutions, and develop
better solutions by conducting “reproduction” and “mutation” operations on the
original solutions. We determine which solutions are better by looking at
their mean-squared errors with respect to the original data (i.e. the
$\mu_{p}(\mathcal{F}^{\pm}(Y))$ values that we provide Eureqa, for fixed $p$
and increasing $Y$). It is worth noting, however, that we often do not choose
the solutions with the best mean-squared errors, because these solutions are
often so complex that they over-fit the given data. In other words, they have
poor predictive power for high $Y$-values that are not supplied to Eureqa due
to computational limitations. Thus, we tend to look for solutions with
sufficiently low mean-squared errors whose forms are as simple as possible.
These solutions often take the same form as the
$\mu_{2}(\mathcal{F}^{\pm}(Y))$ equations stated at the end of Section 4.3:
$\mu_{p}(\mathcal{F}^{\pm}(Y))\approx\alpha^{\pm}-\beta^{\pm}Y^{\gamma^{\pm}}$
where $\alpha^{\pm},\beta^{\pm},\gamma^{\pm}\in\mathbb{R}$.
Note that GP-based algorithms are often highly probabilistic, so running the
same program several times in a row may yield multiple different answers. When
comparing the results presented at the end of Section 5.3 with the conjectures
stated in Section 6, bear in mind that any differences between computed and
conjectured values of $\lim_{Y\rightarrow\infty}\mu_{p}(\mathcal{F}^{\pm}(Y))$
can be at least partially attributed to the noise inherent to Eureqa’s
training algorithm.
### 5.2 Eureqa Methodology
In principle, the approach for calculating class group $p$-torsion sizes is
highly similar to the approach for calculating class group $2$-torsion sizes.
The only theoretical difference lies in Step 5 of the methodology detailed in
Section 3. By Lemma 2.10, all we have to do is find all the elements of
$Cl(K)[p]$.
However, due to Proposition 4.1, when counting monogenic binary cubic forms
bounded by height $Y$, there are far more negative-discriminant binary cubic
forms than positive-discriminant binary cubic forms. Thus, it is easier to
push the positive-discriminant height bound to a high value than the negative-
discriminant height bound. In our computation, we pushed the positive-
discriminant height bound to $10^{11}$, and we pushed the negative-
discriminant height bound to $10^{10}$.
As we saw in Section 4.3, the formulas Eureqa discovered were of the form
$\mu_{p}(\mathcal{F}^{\pm}(Y))\approx\alpha_{i}^{\pm}-\beta_{i}^{\pm}Y^{\gamma_{i}^{\pm}}$
where $\alpha_{i}^{\pm},\beta_{i}^{\pm},\gamma_{i}^{\pm}\in\mathbb{R}$. It is
safe to assume that $\gamma_{i}^{+}=\gamma_{i}^{-}$ [17]. Thus, since we have
a higher height bound for positive-discriminant binary cubic forms than for
negative-discriminant binary cubic forms, we can give $\gamma_{i}^{+}$ to the
negative-discriminant Eureqa model as a fixed constant. In other words, we can
set $\gamma_{i}^{-}$ to always be equal to $\gamma_{i}^{+}$.
With this in mind, we can develop a new Eureqa strategy. Let $p$ be a prime
number. First, we train the positive-discriminant model as usual. This yields
our formula for $\mu_{p}(\mathcal{F}^{+}(Y))$, which has the form
$\alpha_{i}^{+}-\beta_{i}^{+}Y^{\gamma_{i}^{+}}$. We then train a second
positive discriminant model where Eureqa is told to use the form
$\alpha_{f}^{+}-\beta_{f}^{+}Y^{\gamma_{f}^{+}}$. In addition, the
$\alpha_{f}^{+}$ value – i.e. the asymptote – is fixed. This way, we can
eliminate some of the unwanted noise inherent to Eureqa’s training algorithm
and focus on finding a more accurate value of the exponent $\gamma_{f}^{+}$.
(This is discussed in Section 5.1 above.) We refer to this process as “fine-
tuning” the exponent.
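The “fixed-asymptote” step has a simple classical analogue: once $\alpha$ is pinned, the model $\mu\approx\alpha-\beta Y^{\gamma}$ becomes linear after a log transform, $\log(\alpha-\mu)=\log\beta+\gamma\log Y$, so $\beta$ and $\gamma$ can be recovered by ordinary least squares. The sketch below illustrates this idea; it is not Eureqa's actual algorithm.

```python
import math

def fit_tail(Ys, mus, alpha):
    """Fit mu ~ alpha - beta * Y**gamma with the asymptote alpha held fixed,
    via least squares on log(alpha - mu) = log(beta) + gamma * log(Y).
    Assumes mu < alpha for every data point."""
    xs = [math.log(y) for y in Ys]
    zs = [math.log(alpha - m) for m in mus]
    n = len(xs)
    xbar, zbar = sum(xs) / n, sum(zs) / n
    gamma = (sum((x - xbar) * (z - zbar) for x, z in zip(xs, zs))
             / sum((x - xbar) ** 2 for x in xs))
    beta = math.exp(zbar - gamma * xbar)
    return beta, gamma
```

On synthetic data generated from the positive-discriminant formula of Section 4.3 ($\alpha=1.51$, $\beta=1.34$, $\gamma=-0.0810$), the fit recovers $\beta$ and $\gamma$ essentially exactly.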
After all of this is finished, we begin training the negative model. We
mandate that Eureqa uses the form
$\alpha_{f}^{-}-\beta_{f}^{-}Y^{\gamma_{f}^{-}}$, and we set $\gamma_{f}^{-}$
to be equal to $\gamma_{f}^{+}$, i.e. the fine-tuned version of $\gamma_{i}^{+}$.
After Eureqa discovers $\alpha_{f}^{-}$ and $\beta_{f}^{-}$ for us, we have
our formula for $\mu_{p}(\mathcal{F}^{-}(Y))$. (Note that we use the
$\gamma_{i}^{+}$ exponents to state our $\mu_{p}(\mathcal{F}^{+}(Y))$
formulas, and we use the $\gamma_{f}^{-}$ exponents to state our
$\mu_{p}(\mathcal{F}^{-}(Y))$ formulas. Thus, for a given $p$, the exponent in
the $\mu_{p}(\mathcal{F}^{+}(Y))$ expression will not necessarily be the same
as the exponent in the $\mu_{p}(\mathcal{F}^{-}(Y))$ expression.)
### 5.3 Results
Below are the graphs that we obtained for the average class group $p$-torsion
sizes – the first graph is for positive-discriminant binary cubic forms, and
the second is for negative-discriminant binary cubic forms.
Figure 6: $\mu_{p}(\mathcal{F}^{+}(Y))$ for $p=2,3,5,7,$ and $11$
Figure 7: $\mu_{p}(\mathcal{F}^{-}(Y))$ for $p=2,3,5,7,$ and $11$.
Note that at $Y=10^{11}$,
$\begin{cases}\mu_{2}(\mathcal{F}^{+}(10^{11}))=1.333\\\
\mu_{3}(\mathcal{F}^{+}(10^{11}))=1.259\\\
\mu_{5}(\mathcal{F}^{+}(10^{11}))=1.039\\\
\mu_{7}(\mathcal{F}^{+}(10^{11}))=1.020\\\
\mu_{11}(\mathcal{F}^{+}(10^{11}))=1.008\end{cases}$
Also note that at $Y=10^{10}$,
$\begin{cases}\mu_{2}(\mathcal{F}^{-}(10^{10}))=1.714\\\
\mu_{3}(\mathcal{F}^{-}(10^{10}))=1.645\\\
\mu_{5}(\mathcal{F}^{-}(10^{10}))=1.203\\\
\mu_{7}(\mathcal{F}^{-}(10^{10}))=1.142\\\
\mu_{11}(\mathcal{F}^{-}(10^{10}))=1.090\end{cases}$
Using the new Eureqa strategy described in Section 5.2, we obtained the
following generalized asymptotic average predictions. Note that we used mean-
squared error (MSE) as our error metric, since it is commonly used in the
machine learning community.
Positive Discriminants:
$\begin{cases}\mu_{2}(\mathcal{F}^{+}(Y))\approx
1.51-1.34Y^{-0.0810}\longrightarrow\text{MSE}=5.620\times 10^{-9}\\\
\mu_{3}(\mathcal{F}^{+}(Y))\approx 1.30-1.72Y^{-0.143}\
\longrightarrow\text{MSE}=1.106\times 10^{-7}\\\
\mu_{5}(\mathcal{F}^{+}(Y))\approx 1.04-0.595Y^{-0.196}\
\longrightarrow\text{MSE}=8.359\times 10^{-9}\\\
\mu_{7}(\mathcal{F}^{+}(Y))\approx 1.02-0.248Y^{-0.171}\
\longrightarrow\text{MSE}=7.595\times 10^{-9}\\\
\mu_{11}(\mathcal{F}^{+}(Y))\approx 1.01-0.548Y^{-0.261}\
\longrightarrow\text{MSE}=3.108\times 10^{-9}\end{cases}$
Negative Discriminants (note that the formula for
$\mu_{2}(\mathcal{F}^{-}(Y))$ differs from the one in Section 4.3 because we
recalculated it using the new Eureqa strategy from Section 5.2):
$\begin{cases}\mu_{2}(\mathcal{F}^{-}(Y))\approx 2.00-1.95Y^{-0.0832}\
\longrightarrow\text{MSE}=1.746\times 10^{-7}\\\
\mu_{3}(\mathcal{F}^{-}(Y))\approx 1.45+0.02Y^{0.0939}\
\longrightarrow\text{MSE}=5.936\times 10^{-5}\\\
\mu_{5}(\mathcal{F}^{-}(Y))\approx 1.24-0.254Y^{-0.0872}\
\longrightarrow\text{MSE}=2.349\times 10^{-6}\\\
\mu_{7}(\mathcal{F}^{-}(Y))\approx 1.16-0.485Y^{-0.145}\
\longrightarrow\text{MSE}=2.995\times 10^{-6}\\\
\mu_{11}(\mathcal{F}^{-}(Y))\approx 1.10-0.643Y^{-0.178}\
\longrightarrow\text{MSE}=1.138\times 10^{-6}\end{cases}$
## 6 Conjectures on Class Group $p$-Torsion Sizes
Based on the set of Eureqa-predicted asymptotes from Section 5, we can now
formulate conjectures on the general form of
$\lim_{Y\rightarrow\infty}\mu_{p}(\mathcal{F}^{+}(Y))$ and
$\lim_{Y\rightarrow\infty}\mu_{p}(\mathcal{F}^{-}(Y))$ for $p$ prime, $p\neq
3$. (Theoretical results indicate that we should not include $p=3$ in the
statement of our conjectures [6].) The following conjectures were also
predicted by Eureqa.
###### Conjecture 6.1.
Let $p$ be a prime number such that $p\neq 3$. Then,
$\lim_{Y\rightarrow\infty}\mu_{p}(\mathcal{F}^{+}(Y))=1+\dfrac{1}{p(p-1)}$
###### Conjecture 6.2.
Let $p$ be a prime number such that $p\neq 3$. Then,
$\lim_{Y\rightarrow\infty}\mu_{p}(\mathcal{F}^{-}(Y))=1+\dfrac{1}{p-1}$
These conjectures predict most of the asymptotes in Section 5 with minimal
error. The one pair of exceptions is
$\lim_{Y\rightarrow\infty}\mu_{3}(\mathcal{F}^{\pm}(Y))$ – however, note that
these conjectures are not expected to hold for $p=3$ [6].
###### Remark 6.3.
It is worth noting that according to conjectures 6.1 and 6.2,
$\lim_{Y\rightarrow\infty}\mu_{p}(\mathcal{F}^{-}(Y))-\lim_{Y\rightarrow\infty}\mu_{p}(\mathcal{F}^{+}(Y))=\left(1+\dfrac{1}{p-1}\right)-\left(1+\dfrac{1}{p(p-1)}\right)=\dfrac{1}{p}$
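As a consistency check, the conjectured limits reproduce the fitted asymptotes of Section 5.3 (for $p\neq 3$) to within about $0.01$, and the difference in Remark 6.3 is exactly $1/p$:

```python
def conjectured_limit_pos(p):
    """Conjecture 6.1: limit of mu_p(F^+(Y)) for prime p != 3."""
    return 1 + 1 / (p * (p - 1))

def conjectured_limit_neg(p):
    """Conjecture 6.2: limit of mu_p(F^-(Y)) for prime p != 3."""
    return 1 + 1 / (p - 1)

# Fitted asymptotes from Section 5.3 (p = 3 excluded, as discussed)
fitted_pos = {2: 1.51, 5: 1.04, 7: 1.02, 11: 1.01}
fitted_neg = {2: 2.00, 5: 1.24, 7: 1.16, 11: 1.10}
```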
## 7 Conclusions
Conjectures 6.1 and 6.2 are the main contributions of this paper. They are the
culminating result of our overall methodology – computationally rediscovering
the theoretical result in [3] on the average $2$-torsion size of class groups
of cubic fields, and then extending this result from average $2$-torsion sizes
to average $p$-torsion sizes, for all prime $p$. It is worth noting, however,
that these conjectures were made solely based on computational evidence. We
hope that future work in this field presents theoretical evidence for these
conjectures and provides deeper insight into what these conjectures truly
mean.
## Acknowledgements
Many thanks to Prof. Ila Varma (University of Toronto), whose guidance
throughout the course of this project was invaluable. Also, thanks to Prof.
Jon Hanke (Princeton University) and Dr. Dylan Yott (UC Berkeley) for their
mentorship during the early stages of this project, and to Prof. Arul Shankar
(University of Toronto) for his advice later in the project. Finally, thanks
to Hari Pingali and Stephen New, whose contributions during the early stages
of the project helped shape its future.
## References
* [1] M. Baker. Algebraic Number Theory Course Notes (Fall 2006) Math 8803, Georgia Tech (2006). http://people.math.gatech.edu/~mbaker/pdf/ANTBook.pdf
* [2] M. Bhargava, The density of discriminants of quartic rings and fields, Annals of Mathematics 162(2) (2005), 1031-1063.
* [3] M. Bhargava, J. Hanke, and A. Shankar, The mean number of 2-torsion elements in the class groups of $n$-monogenized cubic fields (2020). https://arxiv.org/abs/2010.15744
* [4] M. Bhargava, A. Shankar, and J. Tsimerman, On the Davenport-Heilbronn theorems and second order terms, Inventiones Mathematicae 193 (2013), 439-499.
* [5] M. Bhargava and I. Varma, On the mean number of 2-torsion elements in the class groups and narrow class groups of cubic orders and fields, Duke Mathematical Journal 164(10) (2015), 1911-1933.
* [6] H. Cohen and H. W. Lenstra, Heuristics on class groups of number fields, Lecture Notes in Mathematics 1068 (1984), 33-62.
* [7] H. Cohen and J. Martinet, Class groups of number fields: numerical heuristics. Mathematics of Computation 48(177) (1987), 123-137.
* [8] H. Davenport and H. Heilbronn, On the density of discriminants of cubic fields II, Proceedings of the Royal Society of London Series A 322(1551) (1971), 405-420.
* [9] B. Delone and D. Faddeev. The theory of irrationalities of the third degree, AMS Translations of Mathematical Monographs 10 (1964).
* [10] E. Fouvry and J. Klüners. On the 4-rank of class groups of quadratic number fields, Inventiones Mathematicae 167 (2007), 455–513.
* [11] W. Gan, B. Gross, and G. Savin, Fourier coefficients of modular forms on $G_{2}$, Duke Mathematical Journal 115 (2002), 105-169.
* [12] C. F. Gauss, Disquisitiones Arithmeticae, Springer-Verlag, New York (1986).
* [13] J. Hanke, I. Varma, and D. Yott, Class Numbers of Monogenic Cubic Fields (2016). https://tinyurl.com/hanke-varma-yott
* [14] W. Ho, A. Shankar, and I. Varma, Odd degree number fields with odd class number, Duke Mathematical Journal 167(5) (2018), 995-1047.
* [15] J. R. Koza, Genetic Programming: On the Programming of Computers by Means of Natural Selection, MIT Press, Cambridge, MA (1992).
* [16] G. Malle, On the distribution of class groups of number fields, Experimental Mathematics 19(4) (2010), 465-474.
* [17] D. P. Roberts, Density of cubic field discriminants, Mathematics of Computation 70(236) (2001), 1699-1705.
* [18] SageMath, the Sage Mathematics Software System, The Sage Developers, 2016, https://www.sagemath.org.
* [19] M. Schmidt and H. Lipson. Distilling Free-Form Natural Laws from Experimental Data, Science 324(5923) (2009), 81-85.
* [20] A. Siad. Monogenic fields with odd class number Part I: odd degree (2020). https://arxiv.org/abs/2011.08834
* [21] A. Siad. Monogenic fields with odd class number Part II: even degree (2020). https://arxiv.org/abs/2011.08842
# Growth and Collapse of an Isolated Bubble Driven by a Single Negative
Histotripsy Cycle in Agarose Gel: Stress, Strain, and Strain Rate Fields
Lauren Mancia, Jonathan R. Sukovich, Zhen Xu, and Eric Johnsen L. Mancia and
E. Johnsen are with the Department of Mechanical Engineering, University of
Michigan, Ann Arbor, MI, 48109 USA e-mail<EMAIL_ADDRESS>ejohnsen@umich.edu).J Sukovich and Z Xu are with the Department of Biomedical
Engineering, University of Michigan, Ann Arbor, MI, 48109 USA e-mail:
<EMAIL_ADDRESS>zhenx@umich.edu).
###### Abstract
Histotripsy relies on cavitation to mechanically homogenize soft tissue. There
is strong evidence that the high stresses, strains, and strain rates developed
as bubbles grow and collapse contribute to this tissue homogenization. While
such stresses and strains have been examined computationally in model systems
with assumed constitutive models (e.g., finite-deformation Neo-Hookean model)
and viscoelastic properties determined under quasi-static conditions, recent
studies proposed that the Quadratic Law Kelvin-Voigt (QLKV) constitutive
model, which additionally accounts for strain stiffening, more accurately
represents the viscoelastic response of soft materials subjected to
cavitation; this model has also been used to infer viscoelastic properties at
high rates. In this work, we use the QLKV model and these properties to
calculate the time-dependent stress, strain, and strain rate fields produced
during the growth and collapse of individual bubbles subjected to a
histotripsy-relevant pressure waveform in agarose gels of 0.3 % and 1.0 %
concentration and corresponding to actual (past) experiments. We find that, as
the gel concentration is increased, strain stiffening manifests in larger
elastic stresses and compressive stresses extending into the collapse phase,
particularly for the 1.0 % concentration gel. As a result, the duration of the
collapse phase also increases. In comparison with the conventional Neo-Hookean
model, the compressive stress has a larger magnitude, extends farther into the
surrounding medium, and shows an increased departure from growth/collapse
symmetry close to the bubble; all of these effects are magnified in the
stiffer gel.
###### Index Terms:
histotripsy, cavitation, bubble dynamics, ultrasound bioeffects.
## I Introduction
Noninvasive surgical techniques are broadly favored in the treatment of
pathologies affecting nearly all systems of the body as they avoid the
numerous complications that may arise during invasive procedures. To this end,
hosts of technologies have been investigated and developed to address the wide
variety of surgical needs associated with the treatment of different
pathologies throughout the body [1, 2, 3, 4, 5, 6, 7, 8, 9]. High-intensity
focused ultrasound (HIFU) therapies in particular have garnered significant
interest in the noninvasive surgical space owing to their broad applicability,
diverse mechanisms of therapeutic action (e.g. targeted drug delivery, thermal
or mechanical ablation), and the relative safety of ultrasound waves as a
delivery mechanism (e.g., compared to ionizing radiation which is dangerous
and inherently damaging to tissues [10]). Indeed, HIFU therapies, through
their various mechanisms of action, have been demonstrated effective in pre-
clinical and/or clinical settings for treating pathologies ranging from
varicose veins [11, 12], uterine fibroids [5], and essential tremor [13], to
kidney- and gallstones [14, 15], to stroke [16] and various cancers [17].
Histotripsy [18, 19] is a non-thermal HIFU therapy that relies on the targeted
generation of cavitation events to mechanically fractionate and destroy soft
tissues, and has been demonstrated in a wide variety of tissues [20].
Cavitation during histotripsy is generated using short-duration ($\leq 2$
acoustic cycles), high-amplitude ($\gtrsim 28\,\mathrm{MPa}$ [21])
focused ultrasound pulses delivered from an extracorporeal ultrasound
transducer. The high tension produced by focusing megahertz pressure pulses
gives rise to clouds of cavitation bubbles, which explosively grow and
collapse, thereby destroying the surrounding tissue. There is strong evidence
that the high stresses, strains, and strain rates developed during bubble
growth and collapse are connected to the tissue homogenization process [22,
23]. However, although cavitation events during histotripsy can be generated
in a controllable and targeted fashion, accurately predicting the bubble cloud
dynamics, upon which the induction of damage within the tissue and overall
size of the resulting homogenized region depends, remains a challenge and
requires an accurate characterization of the pressure field in the focal
region and relevant constitutive model for the tissue. The former is
challenging due to the presence of many bubbles and nonlinear bubble-bubble
and bubble-waveform interactions. The latter is especially difficult at rates
relevant to cavitation ($>10^{5}\,\mathrm{s}^{-1}$), where the constitutive properties of
materials are known to exhibit strong rate-dependence; adequately resolving
the dynamics of the bubbles necessary to allow these properties to be
determined at such high rates remains experimentally challenging. More
importantly, acoustically nucleating the types of single spherical bubbles
required to compare to predictive models has been difficult to accomplish in
practice in a repeatable fashion without the inclusion of external agents to
act as seed nuclei [24].
Recent advances in ultrasound transducer technology have enabled the
development of experimental setups capable of repeatably growing a single
bubble using well defined histotripsy pulses [25] and data from these
experiments were instrumental to validating modeling strategies describing the
resulting bubble dynamics in water, including determining the appropriate
driving pressure [26] and inferring nuclei size distributions [27]. However,
the stress and strain fields in the medium surrounding the bubble, which are
readily determined for a Newtonian liquid like water, are not representative
of those in soft materials like hydrogels or tissues, which exhibit a
viscoelastic response. As such, more sophisticated models, along with well
characterized sets of viscoelastic properties, are required to accurately
predict the stresses and strains generated in these materials by cavitation
and ultimately inform our understanding of how damage is thereby generated.
By analogy to laser-induced cavitation rheometry [28], ultrasound-driven
cavitation was proposed in [29] as a means to characterize the viscoelastic
properties of soft materials, including agarose gels, at high rates. That
study modeled agarose gel using the Neo-Hookean model and the higher-order
Quadratic Law Kelvin-Voigt (QLKV) model. The former had been favored for
representing large-amplitude bubble growth observed under high-amplitude
ultrasound forcing [30, 23, 22] and deformation of cells in other contexts
[31]. However, the higher-order QLKV model was shown to achieve superior
agreement with experimental data [29]. The QLKV model is based on a model
originally proposed by Fung [32] and includes strain-stiffening effects
considered significant at high strain rates developed during cavitation events
[28, 33]. Notably, the QLKV model inferred shear moduli that were close to
their quasi-static measurements whereas shear moduli inferred with the Neo-
Hookean model were significantly larger [29]. These differences indicate that
use of the QLKV model could have significant implications for mechanical
damage thresholds (e.g., stresses, strains, and strain rates) considered in
prior studies which modeled tissue as a Neo-Hookean material [23, 30, 22].
With a strategy to accurately model single-bubble dynamics in histotripsy [30]
and a constitutive model valid at high rates, the stress and strain fields in
surrounding soft matter due to cavitation-bubble growth and collapse can be
determined. The present study uses the QLKV model to calculate stress, strain,
and strain rate fields produced by the growth and collapse of bubbles driven
by histotripsy-relevant waveforms, based on past experiments in 0.3 % and 1.0
% agarose gel specimens. Viscoelastic properties of the gels were previously
determined using a variant of the Inertial Microcavitation high strain-rate
Rheometry (IMR) technique [28, 34] with the QLKV model [29]. Previous studies
investigating histotripsy bubble dynamics in viscoelastic media have assumed
initial radii are equal to cavitation nucleus sizes in water [27, 30, 21]; to
avoid reliance on this assumption, simulations are initialized with the mean
stress-free radius measured for each gel specimen [29]. We conclude by
considering how the QLKV model impacts prior work demonstrating distinct
mechanical origins of maximum compressive stress with increasing distance from
the bubble [23].
## II Methods
In this work, we select two representative data sets from past experiments of
ultrasound-generated growth and collapse of a single cavitation bubble in
agarose [25], one in a 0.3 % gel and the other in a 1.0 % gel. We then
simulate the corresponding bubble growth and collapse using a single-bubble
model to yield the time history of the bubble radius, from which we calculate
the associated radial stress, strain, and strain rate fields. We describe here
the experiments, model, and field calculation.
### II-A Experiments
In the present work, we select one representative data set from the ensemble
of 19 experiments in 0.3 % agarose and one from the ensemble of 20 experiments
in 1.0 % agarose reported in [25]. Briefly, those experiments were carried out
in an open-topped, $10\text{\,}\mathrm{cm}$-diameter spherical histotripsy
array comprising 16
focused acoustic transducer elements with a center frequency of
$1\text{\,}\mathrm{MHz}$. During experiments the array was filled with
deionized water, filtered to $2\,\mu\mathrm{m}$ and
degassed to $4\text{\,}\mathrm{kPa}$. Agarose gel samples measuring
$2.5\text{\,}\mathrm{cm}$ in diameter and $7.5\text{\,}\mathrm{cm}$ in length,
with concentrations (w/v) of $0.3$ % and $1.0$ % [20], were prepared for
cavitation experiments and inserted into the transducer for nucleation via the
opening at the top. For reference, the quasi-static shear moduli of the $0.3$
% and $1.0$ % gel samples were $3.4\text{\,}\mathrm{kPa}$ and
$65.1\text{\,}\mathrm{kPa}$, respectively. The acoustic pulses responsible for
nucleating single spherical bubbles in the gel samples measured 1.5 acoustic
cycles ($1.5\,\mu\mathrm{s}$) and contained only a
single rarefactional pressure half-cycle with a peak focal pressure of
$-24\text{\,}\mathrm{MPa}$ [21]. All bubbles were nucleated $\geq
5\text{\,}\mathrm{mm}$ from the edge of the gel samples to avoid potential
influences from boundary effects on the resulting bubble dynamics. The
dynamics of the nucleated bubbles were monitored from their inception until
the time of first collapse using a high speed camera in combination with an
adaptive, multi-flash-per-camera-exposure illumination technique [35]. Each of
the two selected data sets is the realization closest to the mean of each
ensemble. The time history of the bubble radius is shown in Fig. 1 with circle
markers corresponding to the $0.3$ % data set and square markers corresponding
to the $1.0$ % data set, indicating excellent agreement between the model
results and the experiments. The emphasis of the present study lies in this
first growth and collapse. A full description of the experiments can be found
in [25].
### II-B Theoretical Model and Numerical Methods
Following past studies [30, 36, 23, 37, 22, 20], our modeling approach
considers the dynamics of a single spherical bubble in an infinite,
homogeneous viscoelastic medium. The time-history of the bubble radius $R(t)$
is governed by the Keller-Miksis equation [38]:
$\displaystyle\begin{split}&\left(1-\frac{\dot{R}}{c_{\infty}}\right)R\ddot{R}+\frac{3}{2}\left(1-\frac{\dot{R}}{3c_{\infty}}\right)\dot{R}^{2}=\\\
&\frac{1}{\rho_{\infty}}\left(1+\frac{\dot{R}}{c_{\infty}}+\frac{R}{c_{\infty}}\frac{d}{dt}\right)\Biggl{[}p_{b}-p_{\infty}\Biggl{(}t+\frac{R}{c_{\infty}}\Biggr{)}-\frac{2S}{R}+J\Biggr{]},\end{split}$
(1)
where the sound speed, $c_{\infty}$, density, $\rho_{\infty}$, and surface
tension, $S$, are fixed at the values given in prior work [25], and $J$ is the
integral of the deviatoric contribution of the stresses in the surroundings.
The far-field driving pressure, $p_{\infty}(t)$, is a sum of the constant
ambient pressure, $P_{0}$, and an analytic function representative of a
histotripsy pulse [30, 23, 26]. This pressure waveform has an amplitude of
$-24$ MPa, as shown in Fig. 1. The bubble is homobaric with pressure
$p_{b}(t)$; the calculation of the bubble pressure is coupled to the energy
balance partial differential equation, which is discretized inside the bubble
to more accurately account for energy transport [39, 40, 41]. The bubble-gel
interface is assumed to be impervious to gas, and the gel surrounding the bubble
remains at a constant ambient temperature of $25\,^{\circ}\mathrm{C}$. These assumptions have
been adopted by previous authors [39, 40, 41, 42, 26, 30] and are acceptable
for modeling the single cycle of growth and collapse typically resolved in
histotripsy-relevant cavitation experiments [25, 26]. As in [29], the
Quadratic Law Kelvin-Voigt (QLKV) constitutive relation [34] is used to relate
stresses and strains in agarose gels. In this model, the stress integral $J$
takes the following form:
$\displaystyle\begin{split}J&=-\frac{4\mu\dot{R}}{R}+\frac{G(3\alpha-1)}{2}\left[5-4\left(\frac{R_{0}}{R}\right)-\left(\frac{R_{0}}{R}\right)^{4}\right]\\\
&+2G\alpha\left[\frac{27}{40}+\frac{1}{8}\left(\frac{R_{0}}{R}\right)^{8}+\frac{1}{5}\left(\frac{R_{0}}{R}\right)^{5}+\left(\frac{R_{0}}{R}\right)^{2}-\frac{2R}{R_{0}}\right],\end{split}$
(2)
where $R_{0}$ is the stress-free radius corresponding to a reference
configuration; departures from this radius give rise to restoring elastic
stresses. The viscoelastic properties of the gel specimens include viscosity,
$\mu$, shear modulus, $G$, and stiffening parameter, $\alpha$, and are taken
to be constant over the course of the simulation. When $\alpha=0$, the QLKV
model reduces to the Neo-Hookean model [43]. The values of these properties
obtained by Mancia et al. [29] for the Neo-Hookean and QLKV models are given
in Table I.
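Evaluated directly, Eq. 2 is only a few lines of code. The sketch below is our own Python transcription of the stress integral (not the authors' MATLAB implementation); the example values are the 0.3 % gel QLKV entries from Table I, converted to SI units.

```python
import numpy as np

def stress_integral_qlkv(R, Rdot, R0, G, alpha, mu):
    """Stress integral J of Eq. 2 for the Quadratic Law Kelvin-Voigt model.

    R, Rdot: current bubble radius and wall velocity; R0: stress-free radius;
    G, alpha, mu: shear modulus, stiffening parameter, viscosity.
    Setting alpha = 0 recovers the Neo-Hookean (Kelvin-Voigt) limit.
    """
    x = R0 / R
    viscous = -4.0 * mu * Rdot / R
    elastic_lo = 0.5 * G * (3.0 * alpha - 1.0) * (5.0 - 4.0 * x - x ** 4)
    elastic_hi = 2.0 * G * alpha * (27.0 / 40.0 + x ** 8 / 8.0 + x ** 5 / 5.0
                                    + x ** 2 - 2.0 / x)
    return viscous + elastic_lo + elastic_hi

# 0.3 % gel QLKV properties from Table I, converted to SI units.
G, alpha, mu, R0 = 0.44e3, 1.5e-2, 0.079, 0.21e-6
J_at_rest = stress_integral_qlkv(R0, 0.0, R0, G, alpha, mu)        # vanishes at R = R0
J_stretched = stress_integral_qlkv(5.0 * R0, 0.0, R0, G, alpha, mu)  # negative: resists expansion
```

A useful sanity check is that $J$ vanishes at the stress-free radius with zero wall velocity, since every elastic term in Eq. 2 cancels there.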
The discretized equations are solved numerically as described previously [30]
with the MATLAB ode15s time-marching scheme [44, 45] and second-order central
differences for spatial derivatives in the energy equation [28, 46]. The
$R(t)$ solutions obtained for the two representative data sets considered in
this study are shown as the line traces in Fig. 1. Stresses, strains, and
strain rates are calculated using the $R(t)$ results obtained for each gel
concentration as described in the following section (Sect. II-C).
TABLE I: Properties inferred from representative 0.3 % and 1 % agarose experiments [29] shown in Fig. 1.

| Model | $G$ (kPa) | $\alpha$ ($10^{-2}$) | $\mu$ (Pa$\cdot$s) | $R_{0}$ ($\mu$m) |
|---|---|---|---|---|
| **0.3 % gel** | | | | |
| NH | 9.1 | 0 | 0.077 | 0.25 |
| QLKV | 0.44 | 1.5 | 0.079 | 0.21 |
| **1 % gel** | | | | |
| NH | 31 | 0 | 0.15 | 1.3 |
| QLKV | 7.5 | 2.8 | 0.15 | 1.3 |
Figure 1: Left: Validated analytic waveform used in simulations. Right:
Representative radius vs. time data sets for 0.3 % (circle markers) and 1.0 %
(square markers) gels time-shifted such that maximum radius occurs at $t=0$.
Traces correspond to simulation results obtained with inferred gel parameters
in Table I.
### II-C Calculation of Stresses, Strains, and Strain rates
Stress, strain, and strain rate fields are calculated as described previously
[30] using the QLKV and Neo-Hookean models. For all field quantities, the
original, $r_{0}$ and current, $r$, radial coordinates are related by:
$r_{0}(r,t)=\sqrt[3]{r^{3}-R^{3}+R_{0}^{3}},$ (3)
where $R_{0}$ is the stress-free radius. We consider only the radial
components of total deviatoric stress, $\tau_{rr}$, which has a simple
relation to hoop stresses, $\tau_{rr}=-2\tau_{\theta\theta}$ in incompressible
gels. As in the Neo-Hookean model [23], total stress in the QLKV model can be
expressed as a sum of its elastic, $\tau_{rr}^{E}$, and viscous,
$\tau_{rr}^{V}$, components:
$\tau_{rr}=\tau_{rr}^{E}+\tau_{rr}^{V},$ (4)
$\displaystyle\tau_{rr}^{E}=\frac{2G}{3}\left[1+\alpha\left(\left(\frac{r_{0}}{r}\right)^{4}+2\left(\frac{r}{r_{0}}\right)^{2}-3\right)\right]\left[\left(\frac{r_{0}}{r}\right)^{4}-\left(\frac{r}{r_{0}}\right)^{2}\right]$
(5)
$\displaystyle\tau_{rr}^{V}=-4\mu\frac{R^{2}\dot{R}}{r^{3}}.$ (6)
Strain fields are calculated using the Hencky (true strain) definition:
$\displaystyle E_{rr}$ $\displaystyle=-2\ln\left(\frac{r}{r_{0}}\right).$ (7)
Strain rate fields are calculated using a time derivative of Eq. 7:
$\displaystyle\dot{E}_{rr}$ $\displaystyle=-2\frac{R^{2}\dot{R}}{r^{3}}.$ (8)
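Given an $R(t)$ history, Eqs. 3-8 amount to simple algebraic post-processing. The following numpy sketch is our own illustration of that post-processing (not the authors' code), with arbitrary example values in SI units:

```python
import numpy as np

def qlkv_fields(r, R, Rdot, R0, G, alpha, mu):
    """Radial stress, strain, and strain-rate fields (Eqs. 3-8) at current radii r >= R.

    Returns (tau_E, tau_V, E_rr, Edot_rr); the hoop stress follows from
    tau_rr = -2 tau_thetatheta for an incompressible gel.
    """
    r0 = np.cbrt(r ** 3 - R ** 3 + R0 ** 3)           # Eq. 3: reference coordinate
    a, b = (r0 / r) ** 4, (r / r0) ** 2
    tau_E = (2.0 * G / 3.0) * (1.0 + alpha * (a + 2.0 * b - 3.0)) * (a - b)  # Eq. 5
    tau_V = -4.0 * mu * R ** 2 * Rdot / r ** 3        # Eq. 6
    E_rr = -2.0 * np.log(r / r0)                      # Eq. 7 (Hencky strain)
    Edot_rr = -2.0 * R ** 2 * Rdot / r ** 3           # Eq. 8
    return tau_E, tau_V, E_rr, Edot_rr

# Example: bubble grown to 50 um from a 0.21 um stress-free radius
# (0.3 % gel QLKV properties of Table I), wall moving outward at 100 m/s.
r = np.linspace(50e-6, 150e-6, 200)
tau_E, tau_V, E_rr, Edot_rr = qlkv_fields(r, R=50e-6, Rdot=100.0, R0=0.21e-6,
                                          G=440.0, alpha=1.5e-2, mu=0.079)
```

In the undeformed state ($R = R_{0}$, $\dot{R} = 0$) all four fields vanish, and during growth the radial Hencky strain is compressive everywhere outside the bubble.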
Figure 2: Total deviatoric radial stress fields developed in the 0.3 % (left)
and 1.0 % (right) gels. The white region corresponds to the bubble; stresses in
the deforming gel are shown with respect to distance from the stress-free
radius as a function of time. Black lines overlaying the color plots correspond
to Lagrangian paths taken by particles starting 8, 10, 20, 50, and 100 $\mu$m
from the bubble center.
Plots in the lower row show the magnitude of total stress along these paths.
## III Results
### III-A Stress fields
The radial dynamics and total deviatoric radial stress fields obtained with
Eq. 4 and corresponding to the bubble growth shown in Fig. 1 are shown in Fig.
2 for each gel concentration. The white region represents the bubble, which
displaces the gels thereby causing their deformation and the associated
stresses. The bubble achieves a larger maximum radius in the less stiff 0.3 %
gel, which also gives rise to a longer collapse time. Stresses are initially
compressive (negative) during bubble growth and become tensile (positive) as
the bubble collapses to its minimum radius. The stresses are largest near the
bubble wall and decrease with distance from the bubble. In the stiffer 1.0 %
gel, the compressive stress has larger magnitude, extends farther into the
surrounding medium, and shows an increased departure from growth/collapse
symmetry as evidenced by the relatively large tension near the bubble after
reaching maximum radius. Lagrangian paths initially located at 8, 10, 20, 50,
and 100 microns from the bubble center are indicated by the solid black line
overlay in the contours. Total stress experienced along each Lagrangian path
is shown below the corresponding contour plot. As the bubble expands, the
separation distance between these different trajectories becomes smaller as a
material element becomes smaller in the radial direction and elongates in the
hoop direction. Local maxima in stress occur at the onset of bubble growth, at
bubble collapse (minimum radius), and at the point of maximum bubble radius.
At 10 microns from the bubble center, the absolute maximum stress is tensile
and occurs at collapse (minimum radius) in the $0.3$ % gel; in contrast, the
absolute maximum stress is compressive and occurs at maximum bubble radius in
the 1.0 % gel.
Figure 3: Elastic components of deviatoric radial stress as a function of time
experienced by particles starting at various distances from the bubble center
for 0.3 % (left) and 1.0 % (right) gels. Figure 4: Viscous components of
deviatoric radial stress as a function of time experienced by particles
starting at various distances from the bubble center for 0.3 % (left) and 1.0
% (right) agarose gels.
To illustrate the relative contribution of viscous and elastic stresses, Figs.
3 and 4 show the elastic and viscous components of the total stress calculated
using Eqs. 5 and 6 along each Lagrangian path depicted in Fig. 2. Elastic
stress is largest at maximum bubble radius while peaks in viscous stress occur
at the onset of bubble growth and at bubble collapse to minimum radius.
Significantly larger elastic stresses are developed in the stiffer 1.0 % gel
while viscous stress is maximized in the $0.3$ % gel. By comparing the elastic
and viscous stresses to the total stress plots in Fig. 2, it is clear that the
absolute maximum compressive stress occurs at maximum bubble radius and is
elastic in origin in the 1.0 % gel, while the absolute maximum compressive
stress occurs at the onset of bubble growth in the 0.3 % gel and is viscous in
origin.
### III-B Strain & Strain Rate Fields
The radial strains and strain rates experienced in each gel along Lagrangian
paths initially located at 8, 10, 20, 50, and 100 microns from the bubble
center are calculated using Eqs. 7 and 8 and are shown in Figs. 5 and 6,
respectively. Strains are purely compressive 5 microns from the bubble center
and are largest at maximum bubble radius. A slightly larger maximum strain is
achieved in the $0.3$ % gel, reflecting the larger maximum bubble radius
relative to the equilibrium radius in this case. The strain rate magnitude is
largest at the onset of bubble growth and at collapse to minimum bubble
radius. Strain rates then follow a power law decay in space. Although
comparable strain rates are reached in both gels, a slightly larger maximum
strain rate at collapse is observed in the $0.3$ % gel.
Figure 5: Strain as a function of time experienced by particles starting at
various distances from the bubble center for 0.3 % (left) and 1.0 % (right)
agarose gels. Figure 6: Strain rate as a function of time experienced by
particles starting at various distances from the bubble center for 0.3 %
(left) and 1.0 % (right) agarose gels.
## IV Discussion
The investigation of cavitation damage mechanisms in tissue and tissue-like
media has been limited by uncertainties in viscoelastic properties. Until our
work integrating single-bubble dynamics modeling and cavitation experiments to
determine viscoelastic properties at high rates [29], previous studies [30,
23, 36] had to rely on values originating from quasi-static measurements. An
additional modeling uncertainty pertaining to the composition of the material
is the initial bubble radius, i.e., the nucleus/nidus size, as evidenced by the
wide range of values assumed for histotripsy bubbles in different media [47, 48,
23]. Our choice to initialize our simulations using the stress-free radius is
consistent with past observations that the gel is likely to fracture due to
the large stretch during explosive bubble growth [30]; we note that the
calculated stresses and strains are dependent upon this quantity. Though it is
not possible to infer the actual nucleus size, our approach does not require
modeling of this rupturing process [49].
The distinguishing feature of the QLKV model is strain stiffening, i.e.,
increased stiffness at large strains. To better appreciate this effect, Fig. 7 shows the
time history of the total deviatoric stress when computing the stresses using
the Neo-Hookean model, which does not account for strain stiffening; this
figure should be compared to the stress traces in Fig. 2. In the less stiff,
0.3 % gel, the stresses are comparable; the compressive (elastic) stresses at
maximum radius calculated with the Neo-Hookean model are slightly larger than
those computed with the QLKV model. However, for the stiffer 1.0 % gel, the
QLKV model yields significantly larger stresses. This result highlights the
importance of strain stiffening at large deformations in stiffer materials.
Figure 7: Total deviatoric radial stress as a function of time experienced by
particles starting at various distances from the bubble center for 0.3 %
(left) and 1.0 % (right) agarose gels obtained with a Neo-Hookean model. These
plots are in contrast to the QLKV model stress plots in Fig. 2.
A consequence of this strain stiffening is a more pronounced asymmetry of the
bubble radius in time between growth to maximum radius and maximum radius to
collapse as the agarose concentration is increased. Experiments indicate that
the collapse phase is longer than the growth phase when scaled by the maximum
radius and Rayleigh collapse time [25]. When examining the computed stress
fields, it is clear that the region of high compressive stress (e.g., the pink
region in Fig. 2) extends beyond the maximum radius. By Newton’s third law,
this implies that there is resistance to the collapsing bubble due to the
elastic stresses, which is expected to lead to a longer collapse.
As noted in previous studies based on a Neo-Hookean model [23, 30], stress
maxima may be of elastic or viscous origin. Elastic stresses are largest at
maximum bubble radius, when the strain is largest. Viscous stresses are
largest at the onset of bubble growth and at bubble collapse, when the strain
rate is largest. Although maximum tensile stress is always of viscous origin,
maximum compressive stress can be of viscous or elastic origin [23]. These
same observations hold for the stresses calculated with the QLKV model. The
compressive stresses at 8 and 10 microns from the bubble center in Figs. 2, 3,
and 4 reflect these different regimes of mechanical behavior. In the $0.3$ %
gel, the maximum compressive stress occurs at the onset of bubble growth and
is viscous in origin. At the same distances in the 1.0 % gel, the maximum
compressive stress occurs at maximum bubble radius and is elastic in origin.
The origin of maximum compressive stresses in gels modeled with the QLKV model
can be further explored by examining the magnitude of maximum compressive
stress at increasing distances from the bubble. Fig. 8 shows the magnitude of
maximum compressive stress over the course of the simulation as a function of
distance from the stress-free radius for each model and gel specimen. All
maximum stress traces exhibit an abrupt change in slope in the boxed region
shown enlarged at right. These “kinks” in the traces correspond to the
distance from the bubble center at which compressive stress of viscous origin
first exceeds compressive stress of elastic origin. This has been called the
elastic-to-viscous transition distance in previous studies of stresses
obtained with the Neo-Hookean model [23, 30]. The elastic-to-viscous
transition distances calculated using each viscoelastic model in each gel
specimen are given in Table II. The Neo-Hookean model results in nearly
equivalent transition distances in the two gel concentrations because the 1.0 %
gel has both a larger viscosity and a larger shear modulus than the $0.3$ %
gel, and these competing effects nearly offset; the transition distance is
slightly smaller in the 1.0 % gel due to its significantly larger shear
modulus. In the QLKV model, the elastic-to-viscous
transition occurs closer to the stress-free radius for the less stiff $0.3$ %
gel. Additionally, the QLKV model results in a larger elastic-to-viscous
transition distance in the 1.0 % gel. This behavior is consistent with
expectations that the higher-order elastic effects of the QLKV model give rise
to larger elastic stresses that persist to a greater distance from the bubble.
The elastic-to-viscous transition distances predicted with the QLKV and Neo-Hookean
models, while distinct, are separated by less than 4 microns. Beyond
these locations, the models display identical stress behavior that reflects
their shared viscous stress term and similar strain rates (Eq. 6). Use of the
QLKV model is most likely to affect damage predictions in stiffer gels, in
which the higher-order elastic effects encompassed by the $\alpha$-dependent
terms of Eq. 4 contribute to markedly larger stresses prior to the elastic-to-
viscous transition. This behavior is evident when the total deviatoric radial
stress fields obtained with the QLKV model (Fig. 2) are compared to those
obtained with the Neo-Hookean model (Fig. 7). Also, the $0.3$ % gel stress
trace at $8$ microns from the bubble center reveals a maximum compressive
elastic stress at maximum bubble radius that is comparable to the maximum
compressive viscous stress. This behavior reflects the slightly farther
transition distance obtained with the Neo-Hookean model in this case. In
contrast, Neo-Hookean stress traces in the stiffer $1$ % gel have
significantly smaller maximum elastic stresses than those obtained with the
QLKV model. The stiffer gel plot also demonstrates the comparable magnitudes
of elastic and viscous compressive stresses at $8$ microns, reflecting a
transition distance closer to the bubble center in the Neo-Hookean case.
TABLE II: Elastic-to-viscous transition distance in microns obtained with each model in each gel specimen.

| Model | 0.3% gel | 1% gel |
|---|---|---|
| NH | 8.54 | 8.49 |
| QLKV | 7.49 | 12.0 |
Figure 8: Maximum compressive stress as a function of distance from the bubble
center (starting at the stress-free radius) in the $0.3$ % and $1.0$ % gels
calculated using the Neo-Hookean (NH) and QLKV models. Black box outlines
elastic-to-viscous transition points (abrupt changes in slope enlarged in
inset).
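Numerically, this transition distance can be located by tabulating, for each material point, the most compressive value of the elastic and viscous stresses over the whole history and finding where the viscous magnitude first exceeds the elastic one. The sketch below is our own illustration; a made-up sinusoidal growth-collapse history stands in for the simulated $R(t)$, and the parameter values are illustrative, so the resulting distance is not directly comparable to Table II.

```python
import numpy as np

def transition_distance(R, Rdot, R0, G, alpha, mu, r0_grid):
    """Reference radius at which peak compressive viscous stress first
    exceeds peak compressive elastic stress (cf. Fig. 8 and Table II)."""
    # Current position of each material point at each time (inverse of Eq. 3).
    r = np.cbrt(r0_grid[None, :] ** 3 + R[:, None] ** 3 - R0 ** 3)
    a = (r0_grid[None, :] / r) ** 4
    b = (r / r0_grid[None, :]) ** 2
    tau_E = (2.0 * G / 3.0) * (1.0 + alpha * (a + 2.0 * b - 3.0)) * (a - b)  # Eq. 5
    tau_V = -4.0 * mu * (R ** 2 * Rdot)[:, None] / r ** 3                    # Eq. 6
    max_comp_E = -tau_E.min(axis=0)   # peak compressive magnitude over time
    max_comp_V = -tau_V.min(axis=0)
    crossed = max_comp_V >= max_comp_E
    return r0_grid[np.argmax(crossed)] if crossed.any() else None

# Made-up growth-collapse history: 0.21 um nucleus expanding to 100 um over 40 us.
R0, Rmax, tc = 0.21e-6, 100e-6, 40e-6
t = np.linspace(0.0, tc, 2000)
R = R0 + (Rmax - R0) * np.sin(np.pi * t / tc)
Rdot = (Rmax - R0) * (np.pi / tc) * np.cos(np.pi * t / tc)
r0_grid = np.linspace(5e-6, 50e-6, 400)
d = transition_distance(R, Rdot, R0, G=440.0, alpha=1.5e-2, mu=0.079, r0_grid=r0_grid)
```

Even for this toy history, elastic stress dominates at the grid points nearest the bubble and viscous stress dominates far from it, so a crossover radius exists inside the sampled range.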
## V Conclusions
Recent studies proposed that the Quadratic Law Kelvin-Voigt (QLKV)
constitutive model, which accounts for strain stiffening, more accurately
represents the viscoelastic response of soft materials subjected to cavitation
than previously used models (e.g., finite-deformation Neo-Hookean model); this
model has also been used to measure viscoelastic properties at high rates. In
this work, we use the QLKV model and these properties to calculate the time-
dependent stress, strain, and strain rate fields produced during the growth
and collapse of individual bubbles subjected to a histotripsy-relevant
pressure waveform in agarose gels of 0.3 % and 1.0 % concentration and
corresponding to actual (past) experiments. We find that, as the gel
concentration is increased, strain stiffening manifests in larger elastic
stresses and compressive stresses extending into the collapse phase,
particularly for the 1.0 % concentration gel. As a result, the duration of the
collapse phase also increases. In comparison with the conventional Neo-Hookean
model, the compressive stress has a larger magnitude, extends farther into the
surrounding medium, and shows an increased departure from growth/collapse
symmetry close to the bubble; all of these effects are magnified in the
stiffer gel. In the future, more detailed experimental observations of the
rupture of soft materials during explosive bubble growth would greatly benefit
the development of models for cavitation damage to soft matter.
## Acknowledgment
This work was supported by ONR Grant No. N00014-18-1-2625 (under Dr. Timothy
Bentley).
## References
* [1] W. J. Fry, F. Fry, J. Barnard, R. Krumins, and J. Brennan, “Ultrasonic lesions in mammalian central nervous system,” _Science_ , vol. 122, no. 3179, pp. 1091–1091, 1955.
* [2] S. Aronow, “The use of radio-frequency power in making lesions in the brain,” _Journal of neurosurgery_ , vol. 17, no. 3, pp. 431–438, 1960.
* [3] W. Sweet, V. Mark, and H. Hamlin, “Radiofrequency lesions in the central nervous system of man and cat: including case reports of eight bulbar pain-tract interruptions,” _Journal of neurosurgery_ , vol. 17, no. 2, pp. 213–225, 1960.
* [4] I. S. Cooper, “Cryogenic surgery of the basal ganglia,” _Jama_ , vol. 181, no. 7, pp. 600–604, 1962.
* [5] E. A. Stewart, W. M. Gedroyc, C. M. Tempany, B. J. Quade, Y. Inbar, T. Ehrenstein, A. Shushan, J. T. Hindley, R. D. Goldin, M. David _et al._ , “Focused ultrasound treatment of uterine fibroid tumors: safety and feasibility of a noninvasive thermoablative technique,” _American journal of obstetrics and gynecology_ , vol. 189, no. 1, pp. 48–54, 2003.
* [6] S. M. Sørensen, F. V. Mortensen, and D. T. Nielsen, “Radiofrequency ablation of colorectal liver metastases: long-term survival,” _Acta Radiologica_ , vol. 48, no. 3, pp. 253–258, 2007.
* [7] J. Fenner, M. Gwilliam, R. Mehrem, A. Bird, and L. Walton, “Analytical description of dose profile behaviour in gamma knife radiosurgery,” _Physics in Medicine & Biology_, vol. 53, no. 8, p. 2035, 2008.
* [8] N. McDannold, G. Clement, P. Black, F. Jolesz, and K. Hynynen, “Transcranial mri-guided focused ultrasound surgery of brain tumors: Initial findings in three patients,” _Neurosurgery_ , vol. 66, no. 2, p. 323, 2010.
* [9] C. Correa-Gallego, Y. Fong, M. Gonen, M. I. D’Angelica, P. J. Allen, R. P. DeMatteo, W. R. Jarnagin, and T. P. Kingham, “A retrospective comparison of microwave ablation vs. radiofrequency ablation for colorectal cancer hepatic metastases,” _Annals of surgical oncology_ , vol. 21, no. 13, pp. 4278–4283, 2014.
* [10] L. M. DeAngelis, J.-Y. Delattre, and J. B. Posner, “Radiation-induced dementia in patients cured of brain metastases,” _Neurology_ , vol. 39, no. 6, pp. 789–789, 1989.
* [11] H. Schultz-Haakh, J. K. Li, W. Welkowitz, and N. Rosenberg, “Ultrasonic treatment of varicose veins,” _Angiology_ , vol. 40, no. 2, pp. 129–137, 1989.
* [12] A. Obermayer, J.-F. Aubry, and N. Barnat, “Extracorporeal treatment with high intensity focused ultrasound of an incompetent perforating vein in a patient with active venous ulcers,” in _EJVES Vascular Forum_. Elsevier, 2020.
* [13] W. J. Elias, D. Huss, T. Voss, J. Loomba, M. Khaled, E. Zadicario, R. C. Frysinger, S. A. Sperling, S. Wylie, S. J. Monteith _et al._ , “A pilot study of focused ultrasound thalamotomy for essential tremor,” _New England Journal of Medicine_ , vol. 369, no. 7, pp. 640–648, 2013.
* [14] C. Chaussy and E. Schmiedt, “Extracorporeal shock wave lithotripsy (eswl) for kidney stones. an alternative to surgery?” _Urologic radiology_ , vol. 6, no. 1, pp. 80–87, 1984.
* [15] M. Sackmann, M. Delius, T. Sauerbruch, J. Holl, W. Weber, E. Ippisch, U. Hagelauer, O. Wess, W. Hepp, W. Brendel _et al._ , “Shock-wave lithotripsy of gallbladder stones,” _New England journal of medicine_ , vol. 318, no. 7, pp. 393–397, 1988.
Lauren Mancia received the B.S.E. degree in engineering physics from the
University of Michigan, Ann Arbor, MI, USA, in 2012. She completed post-
baccalaureate studies in the biological sciences at Wayne State University,
Detroit, MI, USA in 2013. She returned to the University of Michigan to
complete the M.S.E. degree in mechanical engineering in 2015. Her M.S.E.
studies were funded through a National Science Foundation Graduate Research
Fellowship. She then began medical school at the University of Michigan and
joined the Medical Scientist Training Program in 2017. In 2020, she
successfully defended her Ph.D. thesis in mechanical engineering and will
complete her M.D. degree in 2021. Her research interests include high strain-
rate injury mechanics and focused ultrasound therapies.
Jonathan R. Sukovich received the B.S. and Ph.D. degrees in mechanical
engineering from Boston University, Boston, MA, USA, in 2008 and 2013,
respectively, where he studied laser interactions with water at high pressures
and phenomena associated with high-energy bubble collapse events. He joined
the University of Michigan, Ann Arbor, MI, USA, in the summer of 2013 to study
histotripsy for brain applications. He is currently an Assistant Research
Scientist with the Department of Biomedical Engineering, University of
Michigan. His research interests include high-energy bubble collapse
phenomena, focused ultrasound therapies, and acoustic cavitation.
Zhen Xu (Member, IEEE) received the B.S.E. degree (Hons.) in biomedical
engineering from Southeast University, Nanjing, China, in 2001, and the M.S.
and Ph.D. degrees in biomedical engineering from the University of Michigan,
Ann Arbor, MI, USA, in 2003 and 2005, respectively. She is currently an
Associate Professor with the Department of Biomedical Engineering, University
of Michigan. Her research is focused on ultrasound therapy, particularly the
applications of histotripsy for noninvasive surgeries. Dr. Xu received the
IEEE Ultrasonics, Ferroelectrics, and Frequency Control Society Outstanding
Paper Award in 2006, the American Heart Association Outstanding research in
Pediatric Cardiology in 2010, the National Institutes of Health New
Investigator Award at the First National Institute of Biomedical Imaging and
Bioengineering Edward C. Nagy New Investigator Symposium in 2011, and the
Frederic Lizzi Early Career Award from the International Society of
Therapeutic Ultrasound in 2015. She is also an Associate Editor of the IEEE
TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL (UFFC).
Eric Johnsen received the B.S. degree from the University of California at
Santa Barbara, Santa Barbara, CA, USA, and the M.S. and Ph.D. degrees from the
California Institute of Technology, Pasadena, CA, USA, all in mechanical
engineering. He was a Postdoctoral Fellow at the Center for Turbulence
Research, Stanford University, Stanford, CA, USA. He is currently an Associate
Professor in the Mechanical Engineering Department, University of Michigan,
Ann Arbor, MI, USA. His group’s research draws from applied mathematics,
numerical/physical modeling and high-performance computing to develop
numerical simulations and modeling techniques to investigate the basic physics
underlying complex multiscale and multiphysics flows, with a focus on
multiphase flows, turbulence, shocks, and high-energy-density physics. His
work finds applications in biomedical engineering (diagnostic and therapeutic
ultrasound, cardiovascular flow, traumatic brain injury), transportation
engineering (aeronautical, automotive, naval, hypersonics), astrophysics, and
the energy sciences (inertial fusion, nuclear energy). Dr. Johnsen received
the National Science Foundation CAREER Award and the Office of Naval Research
Young Investigator Award, and is an associate fellow of the AIAA.
# Atomic Clocks in Space: A Search for Rubidium and Cesium Masers in M- and
L-Dwarfs
Jeremy Darling Center for Astrophysics and Space Astronomy
Department of Astrophysical and Planetary Sciences
University of Colorado, 389 UCB
Boulder, CO 80309-0389, USA
###### Abstract
I searched for the ground state 6.8 and 9.2 GHz hyperfine transitions of
rubidium and cesium toward M- and L-dwarfs that show Rb and Cs optical
resonance lines. The optical lines can pump the hyperfine transitions,
potentially forming masers. These spin-flip transitions of Rb and Cs are the
principal transitions used in atomic clocks (the 133Cs hyperfine transition
defines the second). If they are detected in stellar atmospheres, these
transitions would provide exceptionally precise clocks that can be used as
accelerometers, as exoplanet detectors, as probes of the predictions of
general relativity, as probes of light propagation effects, and as a means to
do fundamental physics with telescopes. Observations of 21 M- and L-dwarfs,
however, show no evidence for Rb or Cs maser action, and a previous survey of
giant stars made no Rb maser detections.
Facilities: VLA. Software: CASA (McMullin et al., 2007).
## 1 A Rubidium and Cesium Primer
Rubidium has atomic number 37 and two common isotopes: 85Rb (stable) and 87Rb
(49 Gyr half-life); the terrestrial isotopic ratio is 72:28 (Pringle &
Moynier, 2017). The 87Rb ground state hyperfine transition at
6.83468261090429(9) GHz (Bize et al., 1999) can form a maser and is often used
as an atomic clock. Rb has one valence electron in the $5^{2}S_{1/2}$ ground
state, and the primary optical resonance transitions are
$5^{2}S_{1/2}\rightarrow 5^{2}P_{1/2}$ and $5^{2}S_{1/2}\rightarrow
5^{2}P_{3/2}$ at 795 and 780 nm (see http://steck.us/alkalidata/). The 6.8 GHz
87Rb atomic clock maser relies on the hyperfine structure of the 85Rb optical
resonance lines to selectively filter and optically pump the 87Rb hyperfine
ground states, creating a population inversion and promoting maser action
(Bender et al., 1958; Davidovits & Novick, 1966). The same processes may occur
in stellar and sub-stellar atmospheres, producing a natural 6.8 GHz 87Rb
maser.
The analogous hyperfine cesium transition occurs at exactly 9.192631770 GHz;
this transition defines the second. Unlike Rb, there is only one stable
isotope of Cs, 133Cs. The optical resonance lines at 852.3 and 894.6 nm
correspond to the transitions $6^{2}S_{1/2}\rightarrow 6^{2}P_{3/2}$ and
$6^{2}S_{1/2}\rightarrow 6^{2}P_{1/2}$. The pumping of coherent 9.2 GHz Cs
emission occurs via collisions with buffer gases of similar pressure to that
found in stellar photospheres, seems to be fairly independent of buffer gas
species, and increases with temperature (Vanier et al., 1998). Laboratory work
was limited by temperature and did not include ions (although this is not an
issue for M- and L-dwarf atmospheres), so the expectation for stellar Cs maser
action is less certain than it is for Rb (but still favorable).
Astrophysical maser action requires the population inversion of a metastable
state, seed photons to amplify (either continuum or spontaneous emission in
the maser transition), and a velocity-coherent amplification pathway. These
processes can obtain in stellar atmospheres, which can be prodigious emitters
of molecular masers such as SiO, OH, and H2O (typically AGB stars). I predict
that the conditions in stellar and sub-stellar atmospheres are promising for
6.8 GHz 87Rb and 9.2 GHz 133Cs maser action. The Rb I optical pumping lines
have been observed in stellar and brown dwarf atmospheres (e.g., Reiners et
al., 2007). Cs is usually detected when Rb is detected, and the Cs maser is
collisionally pumped.
## 2 Science with Clocks
Pulsars have been used as cosmic clocks with great success (e.g., Backer &
Hellings, 1986; Burke-Spolaor, 2015); detection and subsequent development of
Rb and/or Cs masers would provide clocks in new classes of celestial objects.
By tying terrestrial standards to clocks in space, one can test basic physics
using telescopes, make (weak) tests of general relativity, detect exoplanets
via Doppler wobble with unprecedented sensitivity, and, in concert with Gaia
proper motions, obtain precise three-dimensional kinematics of stars.
It is worth stressing that all spectral lines are clocks. The power of Rb or
Cs hyperfine transitions would lie in their radio frequency maser action,
which enables extremely precise Doppler tracking and astrometry compared to
any UV, optical, or IR transitions (no bright radio emission lines are known
in main sequence stars or brown dwarfs).
## 3 Astrophysical Rubidium and Cesium
The optical resonance lines of alkali metals including Rb I and Cs I have been
detected in main sequence stars, brown dwarfs (Manjavacas et al., 2016), giant
stars (e.g., García-Hernández et al., 2006), and even a candidate Thorne-
Żytkow object in the Small Magellanic Cloud (Levesque et al., 2014). The 85Rb
and 87Rb lines are blended and cannot be distinguished in optical spectra, but
the observed presence of other s-process elements, such as Zr, can be used to
infer the presence of 87Rb when the blended Rb I lines are detected.
Despite the lower abundance of Cs compared to Rb (Lodders, 2003), Cs
absorption lines can be optically thick. Velocity-coherent column density is
key for maser action, and stellar atmospheres satisfy this requirement; the
question for maser production is whether the pumping of 87Rb or 133Cs is
quenched by collisions. It is worth noting that masers have almost always been
discovered rather than predicted; they amplify small-scale physical conditions
that may not be representative of the bulk properties of a gas. Addressing the
possibility of Rb or Cs masers in stellar atmospheres therefore requires
observations. A small Green Bank Telescope survey of giant stars found no 6.8
GHz 87Rb emission (Darling, 2018), so I turn to low-mass stars and brown
dwarfs, which provide less distance-dimming and more practical scientific
applications for maser lines, including exoplanet detection and
characterization.
## 4 Observations
I selected a sample of 13 M-dwarfs and 8 L-dwarfs where Rb I and Cs I are
prominent in SDSS DR16 optical spectra (Ahumada et al., 2020), indicating
these elements are abundant and that the maser pumping lines are optically
thick. Using the NSF’s Karl G. Jansky Very Large Array (VLA; the National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.), I searched for the 6.8 GHz 87Rb and 9.2 GHz 133Cs lines. VLA observations used
the C configuration with integration times of $\sim$10 min, 3 s sampling, and
dual circular polarizations. Bandpasses with 3.91 kHz (0.17 km s-1) channels
spanning 8 MHz (351 km s-1) were centered on the 6.83468261 GHz 87Rb and
6.66852 GHz CH3OH transitions, appropriately Doppler shifted to the velocity
of each target. The 9.19263177 GHz Cs observations used 5.208 kHz (0.17 km
s-1) channels spanning 16 MHz (522 km s-1). Synthesized beams ranged from
2.6″$\times$2.0″ to 7.3″$\times$2.6″. I used CASA (McMullin et al., 2007) for
interferometric flagging, calibration, and imaging. Spectral cubes were
polarization-averaged and smoothed to 1 km s-1 to achieve 2 mJy rms noise. No
continuum was subtracted from the cubes.
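The quoted channel and bandpass widths in velocity units follow from the non-relativistic Doppler relation $\Delta v=c\,\Delta\nu/\nu$. A short sketch checking the numbers (illustrative only, not part of the observing or reduction pipeline):

```python
# Convert the quoted frequency channel widths and bandpass spans
# into velocity units, using dv = c * dnu / nu (non-relativistic).
C_KM_S = 299_792.458  # speed of light in km/s

def channel_velocity_km_s(dnu_hz, nu_hz):
    """Velocity width of a frequency interval dnu at rest frequency nu."""
    return C_KM_S * dnu_hz / nu_hz

# 87Rb setup: 3.91 kHz channels and an 8 MHz bandpass at 6.83468261 GHz
rb_nu = 6.83468261e9
print(round(channel_velocity_km_s(3.91e3, rb_nu), 2))  # 0.17 km/s per channel
print(round(channel_velocity_km_s(8.0e6, rb_nu)))      # 351 km/s span

# 133Cs setup: 5.208 kHz channels and a 16 MHz bandpass at 9.192631770 GHz
cs_nu = 9.19263177e9
print(round(channel_velocity_km_s(5.208e3, cs_nu), 2))  # 0.17 km/s per channel
print(round(channel_velocity_km_s(16.0e6, cs_nu)))      # 522 km/s span
```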
## 5 Results and Conclusions
I searched for emission features over a broad velocity range ($\pm 125$ km
s-1), taking into account the sometimes high proper motions of the targets. No
credible maser features were identified. Table 1 lists the targets, SDSS
spectra, and the rms noise of the non-detected transitions.
These results suggest that Rb and Cs masers are unlikely to occur frequently
in M- and L-dwarf atmospheres. A survey of 10 giant stars and two globular
clusters for the 6.8 GHz 87Rb maser by Darling (2018) likewise made no
detections. I suggest that the search should continue, perhaps toward other
types of stars and in the interstellar medium.
Table 1: Observations and Results. SDSS spectra are identified by plate-MJD-fiber; velocities are heliocentric optical velocities from the SDSS model fits; rms noise values are per 1.0 km s-1 channel.

Star | SDSS Spectrum | Coordinates (J2000) | Spectral Type | SDSS Velocity (km s-1) | 87Rb rms (mJy) | 133Cs rms (mJy) | CH3OH rms (mJy)
---|---|---|---|---|---|---|---
SDSS J000127.84$-$094209.6 | 7167-56604-0196 | 00:01:27.84 $-$09.42.09.73 | M | $-$4.4 | 2.9 | 2.1 | 3.0
2MASS J00191165+0030176 | 4218-55479-0989 | 00:19:11.64 +00.30.17.66 | M | 6.7 | 3.1 | 2.0 | 3.2
SDSS J002226.64+000023.3 | 4219-55480-0199 | 00:22:26.63 +00.00.23.06 | M | $-$9.7 | 2.9 | 2.1 | 3.0
Gaia DR2 2534780227973449728 | 3735-55209-0976 | 01:08:46.43 +00.04.06.94 | M | $-$72.6 | 2.4 | 2.5 | 2.4
SDSS J011453.90+141914.3 | 4664-56192-0948 | 01:14:53.91 +14.19.14.28 | M | $-$32.0 | 2.5 | 2.4 | 2.4
SDSS J012418.33+002242.0 | 4228-55484-0889 | 01:24:18.33 +00.22.42.05 | M | $-$27.0 | 2.4 | 2.6 | 2.4
2MASS J01325911+1312482 | 4666-55832-0755 | 01:32:59.12 +13.12.48.27 | M | $-$3.5 | 2.5 | 2.4 | 2.5
SDSS J015450.56$-$010610.5 | 4233-55449-0242 | 01:54:50.68 $-$01.06.11.01 | M | $-$19.8 | 2.5 | 2.5 | 2.4
SDSS J023100.81+000855.9 | 3647-55945-0818 | 02:31:00.82 +00.08.55.99 | M | $-$45.5 | 2.7 | 2.7 | 2.7
SDSS J023402.03+000623.7 | 3744-55209-0860 | 02:34:02.07 +00.06.22.74 | M | $-$49.3 | 2.4 | 2.5 | 2.3
2MASS J08054990+5113130 | 4528-55559-0368 | 08:05:49.89 +51.13.12.11 | L | $-$15.2 | 2.5 | 2.4 | 2.6
SDSS J081110.35+185527.9 | 4486-55588-0464 | 08:11:10.31 +18.55.27.85 | L | 30.2 | 2.5 | 2.4 | 2.6
2MASS J08175749+1824048 | 4486-55588-0118 | 08:17:57.49 +18.24.04.99 | L | 4.5 | 2.6 | 2.4 | 2.7
SDSS J082906.61+145620.7 | 4503-55563-0828 | 08:29:06.60 +14.56.19.47 | L | $-$5.0 | 2.6 | 2.4 | 2.7
SDSS J083558.28+054830.7 | 4903-55927-0474 | 08:35:58.22 +05.48.30.54 | L | 8.7 | 2.7 | 2.4 | 2.8
2MASS J08433323+1024470 | 5284-55866-0967 | 08:43:33.34 +10.24.40.20 | L | $-$11.4 | 2.6 | 2.4 | 2.7
2MASS J10224821+5825453 | 7089-56661-0444 | 10:22:46.83 +58.25.35.19 | L | 15.3 | 2.8 | 2.7 | 2.9
SDSS J103947.32+151251.5 | 5350-56009-0554 | 10:39:47.24 +15.12.51.06 | L | 6.3 | 3.1 | 2.6 | 3.3
SDSS J221451.86+004349.9 | 4200-55499-0873 | 22:14:51.86 +00.43.50.00 | M | $-$73.0 | 2.6 | 2.1 | 2.7
2MASS J22585897+1520461 | 6140-56189-0572 | 22:58:58.87 +15.20.45.06 | M | $-$57.6 | 2.4 | 2.1 | 2.6
2MASS J23522533$-$0944105 | 7166-56602-0319 | 23:52:25.39 $-$09.44.17.52 | M | $-$89.9 | 2.7 | 2.2 | 2.9
I thank Z. Berta-Thompson for help with astrometry.
## References
* Ahumada et al. (2020) Ahumada, R., Prieto, C. A., Almeida, A., et al. 2020, ApJS, 249, 3
* Backer & Hellings (1986) Backer, D. C. & Hellings, R. W. 1986, ARAA, 24, 537
* Bender et al. (1958) Bender, P. L., Beaty, E. C., & Chi, A. R. 1958, Phys. Rev. Lett., 1, 311
* Bize et al. (1999) Bize, S., Sortais, Y., Santos, M. S., et al. 1999, EPL, 45, 558
* Burke-Spolaor (2015) Burke-Spolaor, S. 2015, PASP, in press (arxiv:1511.07869)
* Darling (2018) Darling, J. 2018, RNAAS, 2, 15
* Davidovits & Novick (1966) Davidovits, P. & Novick, R. 1966, IEEE, 54, 155
* García-Hernández et al. (2006) García-Hernández, D. A., García-Lario, P., Plez, B., et al. 2006, Science, 314, 1751
* Levesque et al. (2014) Levesque, E. M., Massey, P., Żytkow, A. N., & Morrell, N. 2014, MNRAS, 433, L94
* Lodders (2003) Lodders, K. 2003, ApJ, 591, 1220
* Manjavacas et al. (2016) Manjavacas, E., Goldman, B., Alcalá, J. M., et al. 2016, MNRAS, 455, 1341
* McMullin et al. (2007) McMullin, J. P., Waters, B., Schiebel, D., et al. 2007, Astronomical Data Analysis Software and Systems XVI (ASP Conf. Ser. 376), ed. R. A. Shaw, F. Hill, & D. J. Bell (San Francisco, CA: ASP), 127
* Pringle & Moynier (2017) Pringle, E. A. & Moynier, F. 2017, EPSL, 473, 62
* Reiners et al. (2007) Reiners, A., Homeier, D., Hauschildt, P. H., & Allard, F. 2007, A&A, 473, 245
* Vanier et al. (1998) Vanier, J., Godone, A., & Levi, F. 1998, Phys. Rev. A, 58, 2345
# Putting gradual types to work

Bhargav Shivkumar (Bloomberg, New York, USA; SUNY - University at Buffalo, New York, USA; ORCID 0000-0002-8430-9229), Enrique Naudon (Bloomberg, New York, USA; ORCID 0000-0001-9878-2781), Lukasz Ziarek (SUNY - University at Buffalo, New York, USA)
###### Abstract
In this paper, we describe our experience incorporating gradual types in a
statically typed functional language with Hindley-Milner style type inference.
Where most gradually typed systems aim to improve static checking in a
dynamically typed language, we approach it from the opposite perspective and
promote dynamic checking in a statically typed language. Our approach provides
a glimpse into how languages like SML and OCaml might handle gradual typing.
We discuss our implementation and challenges faced—specifically how gradual
typing rules apply to our representation of composite and recursive types. We
review the various implementations that add dynamic typing to a statically
typed language in order to highlight the different ways of mixing static and
dynamic typing and examine possible inspirations while maintaining the gradual
nature of our type system. This paper also discusses our motivation for adding
gradual types to our language, and the practical benefits of doing so in our
industrial setting.
###### Keywords:
Gradual typing Type inference Functional programming
## 1 Introduction
Static typing and dynamic typing are two opposing type system paradigms.
Statically typed languages are able to catch more programmer bugs early in the
compilation process, at the expense of a more flexible semantics. On the other
hand, dynamically typed languages allow greater flexibility, while allowing
more bugs at runtime. The proponents of each paradigm often feel very strongly
in favor of their paradigm. Language designers are stranded in the middle of
this dichotomy and left to decide between the two extremes when designing
their languages.
At Bloomberg, we have felt this pain while designing a domain specific
language for programmatically defining financial contracts. For the purposes
of this paper, we will call our language Bloomberg Contract Language (BCL).
BCL is a statically typed functional language with Hindley-Milner style type
inference [4, 17], structural composite types and recursive types. Users of
BCL are split into two groups—end users and language maintainers. End users
are typically financial professionals whose primary programming experience
involves scripting in dynamically typed languages such as Python and MATLAB.
On the other hand, language maintainers are Bloomberg software engineers who
are most at ease programming in statically typed and often functional
languages like OCaml. Whilst it is of paramount importance to provide our end
users with an environment in which they are comfortable, our domain—financial
contracts—is one in which correctness is of extraordinary importance, since
errors can lead to large financial losses. This makes static types appealing,
as they catch many errors that dynamic systems might miss. Even though static
types provide a more error-free runtime, they do require extra effort from our
end users who must learn an unfamiliar system. Our desire to simultaneously
satisfy our end users and our language maintainers led us to gradual typing
[23], which seeks to integrate static and dynamic typing in one system.
Gradual typing in BCL allows language maintainers to stick to static typing
and end users to selectively disable static typing when it interferes with
their ability to work in BCL.
Since its introduction, gradual typing [23] has been making its way into more
mainstream languages [30, 29] and more people have acknowledged the varied
benefits of mixing static and dynamic typing in the same program. As
identified by Siek and Taha [26], there has been considerable interest in
integrating static and dynamic typing, both in academia and in industry. There
has also been a plethora of proposed approaches, from adding a dynamic keyword
[2], to using objects in object-oriented languages [16], to Siek and Taha’s
gradual typing itself [23]. While there seems to be no one-size-fits-all
approach to designing a system that mixes static and dynamic types, Siek and
Taha standardize the guarantees [26] we can expect from such a system. For
language designers, this provides a more methodical way to approach the
integration. Language designers can also draw from a large body of literature
exploring the combination of gradual types with other common features, such as
objects [24] and type inference [25, 5].
While it is typical for dynamically typed languages to go the gradual route in
order to incorporate more static type checking, we go the other way and add
more dynamism to our already static language. Most static languages that
incorporate dynamic typing do so by following in the footsteps of Abadi et al. [2]; C# is a prime example of this [9]. Since BCL already supports type
inference and we want to retain the dynamic feel of the language, we implement
the inference algorithm described by Siek and Vachharajani [25], putting us in
an interesting position. Our approach promotes the use of a ? annotation to
explicitly signify dynamically typed terms while un-annotated terms are
(implicitly) statically typed, much like that of Garcia and Cimini [5]. This
approach provides a simple escape hatch to end users who want to use dynamic
typing as well as avenues to automate this process to ensure backwards
compatibility of BCL with legacy code.
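To make the escape hatch concrete, the following sketch (illustrative Python, not BCL; the names `cast` and `apply_dynamic` are hypothetical) emulates what a use site of a `?`-annotated value amounts to at runtime: statically typed code is checked up front, while a dynamic value meets an inserted runtime cast at the boundary with static code.

```python
# Emulating a `?`-annotated term: the compiler would insert a runtime
# cast wherever the dynamic value flows into a statically typed position.

class CastError(TypeError):
    pass

def cast(value, expected_type):
    """Runtime check inserted where a dynamic value meets a static type."""
    if not isinstance(value, expected_type):
        raise CastError(
            f"expected {expected_type.__name__}, got {type(value).__name__}")
    return value

def double(n: int) -> int:
    # A statically typed consumer: requires an int.
    return n + n

def apply_dynamic(fn, x):
    # `x : ?` in the source; a cast guards the statically typed call.
    return fn(cast(x, int))

print(apply_dynamic(double, 21))  # 42
try:
    apply_dynamic(double, "oops")
except CastError:
    print("runtime cast failed")
```

The well-typed call succeeds silently; the ill-typed one fails loudly at the cast rather than producing a wrong answer, which is the behavior gradual typing prescribes for dynamically typed code.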
Finally, we feel there is a need to study the adaptation of gradual types to
an existing language with a substantial user base and lots of existing code.
We aim to provide a technical report in this paper that models our design
decisions and implementation details of bringing in gradual types to BCL. Our
primary contributions include:
* •
A brief review of other statically typed languages that add dynamic types, to
compare and possibly derive inspiration for our own design in Section 2.
* •
Introduce a new use case that shows how a gradually typed language benefits
different user groups of a language in Section 3.
* •
An inference algorithm, which is an adaptation of a prominent inference
algorithm to add gradual types to a language with type inference in Section 4.
Note that throughout this paper we use "gradual" to indicate an implementation
that provides gradual guarantees as specified in [26]. While we do not state this formally for BCL, leaving that to future work, our implementation
supports the smooth evolution of programs from static to dynamic typing as
prescribed for gradually typed systems.
## 2 Background
In this section we briefly survey the existing literature to better
contextualize our design choices. The incorporation of static and dynamic
typing has been extensively studied [28, 15, 23, 26, 8], though usually in the
context of a core calculus instead of a full-featured language. There also
seems to be a juxtaposition of the literature, which generally follows a
static-first approach, and practical implementations, which generally follow a
dynamic-first approach [7]. (Here, static-first refers to elaborating a static surface language to a gradually typed intermediate representation; conversely, by dynamic-first we mean the opposite: elaborating a dynamic surface language to a gradually typed intermediate representation.)
Abadi et al [2] has been an inspiration for many static languages looking to
incorporate dynamic typing. This work is a precursor to gradual typing, and
while it does not qualify as gradual à la [23], it is nevertheless a standard
when it comes to adding dynamic checks to a static language. Abadi’s work uses
a dynamic construct to build terms of type Dynamic and a typecase construct to
perform case analysis on the runtime type of an expression of type Dynamic.
This is similar to runtime type inspection, such as Python’s type() function, which resolves the type of an expression at runtime. Siek and Taha observe that
translating from their language of explicit casts to Abadi et al’s language is
not straightforward [23]. Nevertheless we believe that it is worthwhile to
introduce something like the typecase construct in a static language with
gradual types. We identify and discuss some potential applications of this in
Section 5.
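As a rough illustration of the case analysis such a construct performs, the following Python sketch emulates typecase with isinstance dispatch. The names `typecase` and `describe` are hypothetical, and BCL currently has no such construct.

```python
# An Abadi-style typecase on a dynamically typed value, emulated with
# isinstance dispatch: run the first branch whose guard matches the
# runtime type, or fail if no case applies.

def typecase(value, cases, default=None):
    for guard_type, branch in cases:
        if isinstance(value, guard_type):
            return branch(value)
    if default is not None:
        return default(value)
    raise TypeError(f"no case for {type(value).__name__}")

def describe(v):
    return typecase(v, [
        # bool precedes int because bool is a subclass of int in Python
        (bool, lambda b: f"bool: {b}"),
        (int,  lambda n: f"int: {n}"),
        (str,  lambda s: f"string of length {len(s)}"),
    ])

print(describe(3))      # int: 3
print(describe("hey"))  # string of length 3
print(describe(True))   # bool: True
```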
Statically typed object oriented languages like C# and Java have worked to
incorporate some form of dynamic typing [6, 16]. C# 4.0 introduced the dynamic
type to declare objects that can bypass static type checking [1]. Although
this achieves dynamic type checking, there is no indication of it being
gradual à la [23]. Moreover, using the dynamic type in a C# program runs the
program on the Dynamic Language Runtime (DLR) which is a separate runtime from
the Common Language Runtime and which supports dynamic checking.
While works like [18, 22] examine gradual type inference from the perspective
of removing dynamic checks by performing type inference at runtime, Garcia and
Cimini [5] (much like BCL) deals with static reasoning about programs, based
on the consistency relation. [5] explores an alternate approach to gradual
type inference and presents a statically typed language and its gradual
counterpart. Instead of inferring gradual types based on type precision [25],
this work limits the inference problem to static types only and requires
consistency constraints between gradual types. An interesting feature of their
language is that they distinguish between static type parameters and gradual
type parameters to tell static parametric polymorphism apart from polymorphism
due to the dynamic type.
Our approach is to adopt the properly gradual system defined by Siek and
Vachharajani [25]. That work describes the incorporation of gradual typing
into a language with unification-based type inference. Unification-based
inference is a common implementation of the Hindley-Milner type system [17],
and is the implementation that BCL already uses. This makes our integration
work relatively easier and also lets us leverage all the benefits of the
standard for gradual typing laid out by Siek and Taha [26].
### 2.1 Gradual types and unification based inference
(a)

  * (SVar) if $\Gamma(x)=\tau$ then $S;\Gamma\vdash x:\tau$
  * (SCnst) $S;\Gamma\vdash c:\mathit{typeof}(c)$
  * (SApp) if $S;\Gamma\vdash e_{1}:\tau_{1}$ and $S;\Gamma\vdash e_{2}:\tau_{2}$ and $S(\tau_{1})=S(\tau_{2}\rightarrow\tau_{3})$, then $S;\Gamma\vdash e_{1}\,e_{2}:\tau_{3}$
  * (SAbs) if $S;\Gamma(x\mapsto\tau_{1})\vdash e:\tau_{2}$ then $S;\Gamma\vdash\lambda x{:}\tau_{1}.\,e:\tau_{1}\rightarrow\tau_{2}$

(b)

  * (GVar) if $\Gamma(x)=\tau$ then $S;\Gamma\vdash_{g}x:\tau$
  * (GCnst) $S;\Gamma\vdash_{g}c:\mathit{typeof}(c)$
  * (GApp) if $S;\Gamma\vdash_{g}e_{1}:\tau_{1}$ and $S;\Gamma\vdash_{g}e_{2}:\tau_{2}$ and $S\models\tau_{1}\simeq\tau_{2}\rightarrow\beta$ ($\beta$ fresh), then $S;\Gamma\vdash_{g}e_{1}\,e_{2}:\beta$
  * (GAbs) if $S;\Gamma(x\mapsto\tau_{1})\vdash_{g}e:\tau_{2}$ then $S;\Gamma\vdash_{g}\lambda x{:}\tau_{1}.\,e:\tau_{1}\rightarrow\tau_{2}$

Figure 1: Simply (a) and gradually (b) typed lambda calculus with type variables. All rules are relative to a substitution $S$; the gradual rules use the judgment $\vdash_{g}$ and, in (GApp), replace the equality constraint of (SApp) with a consistent-equal constraint $\simeq$ against a fresh type variable $\beta$.
Figure 2: Huet’s unification of
$\\{\alpha\rightarrow\alpha=Int\rightarrow\beta\\}$
Siek and Vachharajani [25] (S&V) propose an innovative solution for performing
gradual type inference which combines gradual typing with type inference.
Their main goal is to allow inference to operate on the statically typed parts
of the code, while leaving the dynamic parts to runtime checks. Furthermore,
the dynamic type must unify with static types and type variables, so that the
static and dynamic portions of code may freely interact. In this section, we
summarize their work.
The work of S&V is based on the gradually typed lambda calculus [23]. The
gradually typed lambda calculus extends the simply typed lambda calculus
($\lambda_{\rightarrow}$) with an unknown type, $?$–pronounced “dynamic”; type
checking for terms of this type is left until runtime. The gradually typed
lambda calculus ($\lambda_{\rightarrow}^{?}$) allows static and dynamic types
to freely mix and satisfies the gradual guarantee [26], ensuring smooth
migration between static and dynamic code while maintaining the correctness of
the program.
Type inconsistencies in $\lambda_{\rightarrow}^{?}$ are caught by a
_consistent_ relation, instead of equality as in $\lambda_{\rightarrow}$. The
_consistent_ relation only compares parts of a type that are statically known;
it is one of the key contributions of $\lambda_{\rightarrow}^{?}$. All type
errors that cannot be statically resolved by the gradual type system are
delegated to runtime checks.
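To make this relation concrete, here is a minimal sketch of the _consistent_ relation in Python (our own model for illustration, not part of BCL: `"?"` is the dynamic type, other strings are ground types, and `("->", a, b)` encodes the function type $a\rightarrow b$):

```python
# Sketch of the `consistent` relation: two gradual types are consistent
# when their statically known parts agree; `?` is consistent with anything.
def consistent(a, b):
    if a == "?" or b == "?":
        return True                       # the dynamic type matches everything
    if isinstance(a, tuple) and isinstance(b, tuple):
        # function types: compare argument and result positions recursively
        return consistent(a[1], b[1]) and consistent(a[2], b[2])
    return a == b                         # ground types must match exactly

print(consistent(("->", "?", "Int"), ("->", "Bool", "Int")))  # True
print(consistent("Int", "Bool"))                              # False
```

Note that consistency is deliberately not transitive: $Int$ is consistent with $?$, and $?$ with $Bool$, yet $Int$ is not consistent with $Bool$; this is what keeps the statically known parts of a program fully checked.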
Type inference allows programmers to omit type annotations in their programs
and have the compiler infer the types for them. Hindley-Milner type inference
is often cast as a two step process that consists of generating constraints
and then solving them by a unification algorithm [32, 20, 21]. The inference
algorithm models the typing rules as equations, called constraints, between
type variables, while the unification algorithm computes a substitution $S$,
which is a mapping from type variables to types, such that for each equation
$\tau_{1}=\tau_{2}$, we have $S(\tau_{1})=S(\tau_{2})$.
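The substitution view of unification can be sketched in a few lines (our own minimal Python model, omitting the occurs check; by convention here, lowercase strings stand for type variables and `("->", a, b)` for function types):

```python
# Minimal substitution-based unification for Hindley-Milner constraints.
def apply(s, t):
    """Apply substitution s (dict: var -> type) to type t, following chains."""
    if isinstance(t, str):
        return apply(s, s[t]) if t in s else t
    tag, a, b = t
    return (tag, apply(s, a), apply(s, b))

def unify(t1, t2, s=None):
    """Solve the constraint t1 = t2, extending substitution s."""
    s = dict(s or {})
    t1, t2 = apply(s, t1), apply(s, t2)
    if t1 == t2:
        return s
    if isinstance(t1, str) and t1.islower():      # lowercase = type variable
        s[t1] = t2
        return s
    if isinstance(t2, str) and t2.islower():
        s[t2] = t1
        return s
    if isinstance(t1, tuple) and isinstance(t2, tuple) and t1[0] == t2[0]:
        s = unify(t1[1], t2[1], s)
        return unify(t1[2], t2[2], s)
    raise TypeError(f"cannot unify {t1} with {t2}")

# Solving {a -> a = Int -> b} yields S(a) = Int and S(b) = Int.
s = unify(("->", "a", "a"), ("->", "Int", "b"))
print(apply(s, "a"), apply(s, "b"))   # Int Int
```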
S&V introduce the gradually typed lambda calculus with type variables (
$\lambda_{\rightarrow}^{?\alpha}$), which is $\lambda_{\rightarrow}^{?}$
extended with type variables, $\alpha$. They define a new relation,
_consistent-equal_ ($\simeq$), which extends the _consistent_ relation from
$\lambda_{\rightarrow}^{?}$ to treat type variables $\alpha$. Fig. 1
compares the typing rules for $\lambda_{\rightarrow}^{\alpha}$, the statically
typed lambda calculus with type variables, to the new type system
$\lambda_{\rightarrow}^{?\alpha}$. S&V also specify a unification algorithm
for $\lambda_{\rightarrow}^{?\alpha}$ which integrates the _consistent-equal_
into Huet’s unification algorithm [10, 14] which is a popular algorithm that
doesn’t rely on substitution.
Huet’s unification algorithm uses a graph representation for types. For
example, a type like $Int\rightarrow\beta$ is represented as a subgraph in
Fig. 2. Each node represents a type (a ground type, a type variable, or the
function type $\rightarrow$), and edges connect a $\rightarrow$ node to the
nodes of its argument and result types. Unification then amounts to merging
the two graphs appearing in a constraint equation according to the rules of
the type system. Huet's algorithm maintains a union-find structure [27] to
track equivalence classes of nodes, and thereby of types. When node
$A$ unifies with node $B$ according to the type rules, the merge results in
one of the two nodes becoming the representative of the merge. This signifies
that the representative node is the solution to the constraint being unified.
Fig. 2 shows how the unification of the constraint
$\\{\alpha\rightarrow\alpha=Int\rightarrow\beta\\}$ proceeds.
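Huet's style of unification can be sketched with an explicit union-find over type nodes (our own illustrative Python model; node and function names are ours, not BCL's):

```python
# Union-find-based unification in the style of Huet's algorithm: instead of
# building a substitution, nodes are merged into equivalence classes whose
# representative is the solved type.
class Node:
    def __init__(self, label, children=()):
        self.label, self.children, self.parent = label, list(children), None

def find(n):
    """Follow parent links to the class representative."""
    while n.parent is not None:
        n = n.parent
    return n

def merge(rep, other):
    """Union the two classes, making `rep` the representative."""
    if rep is not other:
        other.parent = rep

def unify(a, b):
    a, b = find(a), find(b)
    if a is b:
        return
    if a.label == "var":
        merge(b, a)               # the non-variable side becomes representative
    elif b.label == "var":
        merge(a, b)
    elif a.label == b.label and len(a.children) == len(b.children):
        merge(a, b)
        for x, y in zip(a.children, b.children):
            unify(x, y)
    else:
        raise TypeError("clash")

# Unify alpha -> alpha with Int -> beta (cf. Fig. 2).
alpha, beta, Int = Node("var"), Node("var"), Node("Int")
unify(Node("->", [alpha, alpha]), Node("->", [Int, beta]))
print(find(alpha).label, find(beta).label)   # Int Int
```

After unification, both $\alpha$ and $\beta$ sit in the class represented by $Int$, mirroring the merge steps of Fig. 2.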
## 3 Introduction to BCL
Our motivation to explore gradual types for BCL is rooted in several
historical and contextual details, which we discuss in this section. It is
first helpful to understand that BCL is predominantly used to model financial
contracts, by providing end users with programmatic access to a financial
contract library. The library we use is based upon the composable contracts of
Peyton Jones, Eber and Seward [19]. Its internal contract data structure is
used throughout our broader derivatives system to support various downstream
analyses. In this way, BCL serves as an expressive front-end for describing
contracts to our derivatives system.
let receive currency amount = scale (one currency) amount in
let european_stock_option args =
let first = stock_price args.effective_date args.company in
let last = stock_price args.expiry_date args.company in
let payoff = match args.call_or_put with
| Call -> (last / first - args.strike)
| Put -> (args.strike - last / first)
in
european args.expiry_date (receive args.currency payoff)
in
european_stock_option
{ company = "ABC Co.",
call_or_put = Call,
strike = 100.0,
currency = USD,
effective_date = 2021-01-17,
expiry_date = 2021-01-22 }
Figure 3: European stock option
Let us look at a short illustrative example of BCL code. Fig. 3 provides an
example of the sort of thing for which BCL might be used. The
european_stock_option function produces a Contract which models a European
stock option. European stock options grant their holder the right, but not the
obligation, to buy or sell stock in a company. The “European” in European
stock option refers to the fact that, on one specific date, the holder must
choose whether or not s/he would like to buy (or sell) the stock. This is in
contrast to “American” options, where the holder may choose to buy (or sell)
on any date within a specified range of dates.
This stock option is based on several helper functions, defined in [19], which
we must examine first. The european function constructs a contract which
allows its holder to choose between receiving “something” or nothing on a
specified date. receive constructs a contract that pays the specified amount
of the specified currency passed as arguments, and uses the scale and one
primitives. The scale primitive takes an amount of type $Obs\ Double$–where
type $Obs\ d$ represents a time-varying quantity of type $d$–and a contract as
arguments and multiplies key values in the contract by the amount. Note that
european_stock_option uses - and / operators which are built-ins that operate
on $Obs\ Double$ arguments. stock_price is a primitive for looking up the
price of the specified stock on the specified date.
european_stock_option starts off by using stock_price to look up the price of
the specified company’s stock on the “effective” (contract start) and “expiry”
(contract end) dates. It uses these stock prices to construct the payoff based
on the specified call or put style, and feeds the payoff to receive to
construct a contract that pays it. Finally, european_stock_option passes the
result of receive to european, which allows the holder to choose between the
payoff and nothing. Note that the payoff may well be negative, so the
holder's choice is not entirely clear. The end of Fig. 3 provides an example
call to european_stock_option, which constructs a call option on ABC Co. In
practice, functions like european_stock_option would be defined in BCL’s
standard library, and would be called by users who wish to model European
stock options directly or who wish to model contracts that contain such
options as sub-contracts.
### 3.1 Motivation for gradual types
Given that BCL is mostly used to describe financial contracts, it should come
as no surprise that our users are largely financial professionals. In
particular, many are financial engineers or quantitative analysts with some
programming experience in dynamic languages such as Python and MATLAB.
Typically these users need to translate term sheets, plain-English
descriptions of a contract, into BCL for consumption by our system. These
contracts are mostly one-off and, once finished, are unlikely to be reused as
subcontracts to build further contracts. For these reasons, the users of BCL
are primarily concerned with development speed. Ideally, they would like to be
able to translate a term sheet as quickly as possible, so that they may focus
on analyzing the contract’s behavior once it has been ingested by our system.
On the other hand, the maintainers of BCL and its standard library are
software engineers and functional programmers with extensive experience in
OCaml, C++ and other static languages. The main jobs of the BCL maintainers
are implementing language extensions and standard library functions. One of
the significant constraints that they face is preserving backwards
compatibility. All existing user contracts must continue to work as BCL
evolves–even minor changes in behavior are unacceptable! Given the broad reuse
of the features that BCL’s language maintainers implement and the difficulties
involved in rolling back features, correctness is the paramount concern of BCL
maintainers.
Finally, it is important to note that the version of BCL described here is
actually the second version of BCL. The first version of BCL was dynamically
typed, so we will distinguish it from the second version by referring to it as
Dynamic BCL. Dynamic BCL supports only a few primitive data types, as well as
a list composite type; it does not support algebraic types. It also runs only
minimal validation before attempting evaluation. This simplicity makes Dynamic
BCL well suited to our users who seek to quickly feed contracts into our
system, but ill-suited to the library code written by our maintainers.
Additionally, some users who encounter runtime type errors while implementing
particularly complex contracts would turn to the maintainers for assistance,
further increasing the burden on the maintainers. It was in light of these
issues that we developed (Static) BCL.
To address the issues with Dynamic BCL while remaining useful to our users,
BCL aims to be a static language that feels roughly dynamic. To this end, BCL
supports implicit static types via type inference; we chose Hindley-Milner
style inference so that our users could omit type annotations in almost all
cases. BCL also supports record and variant types, although they are
structural rather than the nominal ones typically seen in OCaml and Haskell.
This choice also lends BCL a more dynamic feel.
The goal of BCL’s design is to retain enough flexibility for our users, while
introducing static types for the benefit of our language maintainers. However,
“enough flexibility” is entirely subjective and some users may well feel that
any amount of static checking results in a system that is too inflexible.
Gradual types address this concern by allowing users to use dynamic types
where they like, while also allowing maintainers to use static types where
they would like. Importantly, gradual types guarantee that fully dynamic code
and fully static code can co-exist, and that static code is never blamed for
runtime type errors. Taken together, these two guarantees satisfy both groups,
and ensure that the type errors that dynamic users see are isolated to the
code that they themselves wrote.
### 3.2 Core calculus
BCL’s core calculus is the lambda calculus extended with structural composite
types and recursive types. Furthermore, BCL is implicitly-typed and supports
Hindley-Milner style type inference. This section describes the types and
terms of this core calculus. Note, however, that the grammars in this section
are abstract representations of BCL’s theoretical underpinnings, and do not
cover the full set of productions in BCL’s grammar.
#### 3.2.1 Kinds and Types
$\kappa ::= * \mid \rho \mid \kappa\Rightarrow\kappa$
$C ::= \rightarrow \mid \Pi \mid \Sigma \mid ...$
$\tau ::= \alpha \mid C \mid \tau\ \tau \mid l:\tau;\tau \mid \epsilon \mid \mu\alpha.\tau$
$\sigma ::= \tau \mid \forall\alpha.\sigma$
Figure 4: Grammar of types and kinds
The grammar of the types and kinds that describe BCL is given in Fig. 4. Our
kind system is fairly standard and consists of only three forms. The base
kind, $*$, is the kind of “proper” types–$Int$ and $Int\rightarrow Int$, for
example–which themselves describe terms. The row kind, $\rho$, is of course
the kind for rows. The operator kind, $\Rightarrow$, is the kind of type
operators – $Array$ and $\rightarrow$, for example – which take types as
arguments and which do not directly describe terms.
$C$ ranges over type constructors, including the type operators for function
types ($\rightarrow$ of kind $*\Rightarrow*\Rightarrow*$), record types ($\Pi$
of kind $\rho\Rightarrow*$) and variant types ($\Sigma$ of kind
$\rho\Rightarrow*$). $C$ may also include additional constructors for base
types (e.g. $Int$ and $String$) and more type operators (e.g. $Array$) as
desired. However, these additional constructors are not useful for our
purposes here, so we make no further mention of them.
Our type system is stratified into monomorphic types and type schemes, per
[4]. Monomorphic types, $\tau$, consist of type variables, type constructors,
and record, variant and recursive types. Type variables are ranged over by
$\alpha$, $\beta$, $\gamma$, etc., and are explicitly bound by $\mu$ and
$\forall$ types, as described below. Rows are written $l:\tau;\tau^{\prime}$,
indicating that the row has a field labeled $l$ of type $\tau$.
$\tau^{\prime}$ has kind $\rho$ and dictates the other fields that the row may
contain. If $\tau^{\prime}$ is a type variable, the row can contain arbitrary
additional fields; if $\tau^{\prime}$ is the empty row, $\epsilon$, the row
contains no additional fields; finally if $\tau^{\prime}$ is another type of
the form $l:\tau;\tau^{\prime}$, then the row contains exactly the fields
specified therein. Recursive types are written $\mu\alpha.\tau$, where the
variable $\alpha$ represents the point of recursion and is bound within
$\tau$. BCL’s recursive types are equi-recursive, so it does not have explicit
constructs for rolling and unrolling recursive types. Finally, type schemes
have two forms: monomorphic types and universally quantified schemes.
Monomorphic types, $\tau$, are merely the types described above. Universally
quantified schemes, $\forall\alpha.\sigma$, bind the variable $\alpha$ within
the scheme $\sigma$. Naturally, it is through universal quantification that
BCL supports parametric polymorphism.
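To make the row and recursion forms concrete, here are a few types written in this grammar (our illustrative examples, not drawn from BCL's library):

$\Pi(x:Int;y:Bool;\epsilon)$: a closed record with exactly the fields $x$ and $y$
$\Pi(x:Int;\alpha)$: an open record with at least the field $x$, where $\alpha$ ranges over the remaining fields
$\Sigma(Some:Int;None:\Pi\epsilon;\epsilon)$: a variant whose payload is either an $Int$ or the empty record
$\mu\alpha.\Pi(value:Int;next:\alpha;\epsilon)$: an infinite stream of $Int$-valued nodes, with $\alpha$ marking the point of recursion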
#### 3.2.2 Terms
$t ::= x \mid \lambda x.t \mid t\ t \mid \textbf{let}\ \textbf{rec}\ x=t\ \textbf{in}\ t \mid t:\tau \mid \\{\overline{l_{i}:t_{i}}\\} \mid t.l \mid l\ t \mid \textbf{match}\ t\ \textbf{with}\ \overline{l_{i}x_{i}\Rightarrow t_{i}}$
Figure 5: Grammar of terms
The grammar of the terms in BCL is given in Fig. 5. Most of the term forms are
drawn directly from the lambda calculus. Term variables are ranged over by
$x$, $y$, $z$, etc., and are introduced by lambda abstraction, let-bindings
and match-expressions. Lambda abstraction is written $\lambda x.t$ and binds
$x$ within the expression $t$. Lambda abstractions are eliminated by
application, which is denoted by juxtaposition: $t\ t$. Let-bindings are
written $\textbf{let}\ \textbf{rec}\ x=t\ \textbf{in}\ t$. The rec is optional
and, when present, indicates $x$ may be referenced by the expression to the
right of the $=$; $x$ may of course always be referenced by the expression to
the right of the in. Type annotations are written $t:\tau$, and serve to
ensure that $t$ has the type $\tau$.
In addition to the forms described above, BCL supports records and variants.
Record introduction is written $\\{\overline{l_{i}:t_{i}}\\}$ , where $t_{i}$
evaluates to the value stored in the field $l_{i}$. Records are eliminated by
field projection. The projection of the field $l$ from the record $t$ is
written $t.l$. Variant introduction is written $l\ t$, where the label $l$ is
used to tag the variant’s payload, $t$. Variants are eliminated by case
analysis, which is written $\textbf{match}\ t\ \textbf{with}\
\overline{l_{i}x_{i}\Rightarrow t_{i}}$, which evaluates to the branch
specified by the tag associated with the variant $t$.
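As a concrete reading of this grammar, the following term (our own example) introduces a variant and immediately eliminates it by case analysis:

$\textbf{match}\ (some\ 3)\ \textbf{with}\ some\ x\Rightarrow x;\ none\ y\Rightarrow 0$

The tag $some$ selects the first branch and binds $x$ to the payload $3$, so the whole term evaluates to $3$.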
## 4 Implementation
We identify three main components required to add gradual typing to a
statically typed language with type inference, such as BCL. The first is the
ability to annotate terms with types, as these annotations dictate whether a
term is type-checked statically or dynamically. The second is the addition of
a dynamic type to the existing set of types, and the third is an algorithm to
unify the existing types with the newly added dynamic type. Since our grammar,
shown in Fig. 5, already supports explicit annotation of terms, we have the
means to differentiate between dynamically typed and statically typed code. We
add a dynamic type, ?, to our set of types; it serves to indicate a term
that will be dynamically typed. BCL's type inference algorithm statically
infers a type for every term, meaning that by default BCL programs are
completely statically typed. In order to tell the type system to dynamically
type some terms, we must explicitly annotate those terms with the ? type.
For example, a simple increment function can be defined in BCL as follows.
let incr x = x + 1 in incr
The type system will infer the type $Int\rightarrow Int$ for the incr
function. However, we can instead provide an explicit annotation.
let incr x = x + 1 in incr : ? -> Int
In this case, the inference algorithm retains the annotated type as the type
of the function. Any type checks on the argument of the incr function would be
put off until runtime. While the type checks pertaining to ? types are
delayed, we still need to complete the inference procedure in order to infer
the types of the un-annotated portions of the program (like the return type of
incr). Siek and Vachchrajani [25] (S&V) extend the standard unification-based
inference algorithm to handle the ? type. Their algorithm is based on the
_consistent-equal_ relation which takes into consideration the type variables
that are generated as part of a typical type inference algorithm. Fortunately
for us, their algorithm works well for our implementation with only minor
adaptations.
maybe_copy_dyns ($\tau_{1}\simeq\tau_{2}$) =
  $\tau_{1}^{\prime}$ $\leftarrow$ if was_copied $\tau_{1}$ then $\tau_{1}$ else copy_dyn $\tau_{1}$
  $\tau_{2}^{\prime}$ $\leftarrow$ if was_copied $\tau_{2}$ then $\tau_{2}$ else copy_dyn $\tau_{2}$
  $\tau_{1}^{\prime}\simeq\tau_{2}^{\prime}$

unify $\tau_{1}^{\prime\prime}$ $\tau_{2}^{\prime\prime}$ =
  $\tau_{1}$ $\leftarrow$ find $\tau_{1}^{\prime\prime}$
  $\tau_{2}$ $\leftarrow$ find $\tau_{2}^{\prime\prime}$
  if was_visited $\tau_{1}$ and was_visited $\tau_{2}$ then ()
  else case maybe_copy_dyns ($\tau_{1}\simeq\tau_{2}$) of
    $\alpha\simeq\tau$ $\mid$ $\tau\simeq\alpha$ $\Rightarrow$ merge $\tau$ $\alpha$  (*Case 1 & 2*)
    $\mid$ $?\simeq\tau_{1}\rightarrow\tau_{2}$ $\mid$ $\tau_{1}\rightarrow\tau_{2}\simeq\ ?$ $\Rightarrow$  (*Case 3 & 4*)
      unify $\tau_{1}$ (new $?$)
      unify $\tau_{2}$ (new $?$)
    $\mid$ $?\simeq\tau$ $\mid$ $\tau\simeq\ ?$ $\Rightarrow$ merge $\tau$ $?$  (*Case 5 & 6*)
    $\mid$ $\tau_{11}\rightarrow\tau_{12}\simeq\tau_{21}\rightarrow\tau_{22}$ $\Rightarrow$  (*Case 7*)
      unify $\tau_{11}$ $\tau_{21}$
      unify $\tau_{12}$ $\tau_{22}$
    $\mid$ $l:\tau_{1};\tau_{2}\simeq l^{\prime}:\tau_{1}^{\prime};\tau_{2}^{\prime}$ if $l$ = $l^{\prime}$ $\Rightarrow$  (*Case 8*)
      unify $\tau_{1}$ $\tau_{1}^{\prime}$
      unify $\tau_{2}$ $\tau_{2}^{\prime}$
    $\mid$ $l:\tau_{1};\tau_{2}\simeq l^{\prime}:\tau_{1}^{\prime};\tau_{2}^{\prime}$ $\Rightarrow$  (*Case 9*)
      $\alpha$ $\leftarrow$ fresh_type_variable()
      unify ($l:\tau_{1};\alpha$) $\tau_{2}^{\prime}$
      unify ($l^{\prime}:\tau_{1}^{\prime};\alpha$) $\tau_{2}$
    $\mid$ $\mu\alpha.\tau\simeq\tau^{\prime}$ $\mid$ $\tau^{\prime}\simeq\mu\alpha.\tau$ $\Rightarrow$  (*Case 10 & 11*)
      mark_visited ($\mu\alpha.\tau$)
      unify $\tau[\mu\alpha.\tau/\alpha]$ $\tau^{\prime}$
    $\mid$ $\epsilon\simeq\epsilon$ $\Rightarrow$ ()  (*Case 12*)
    $\mid$ _ $\Rightarrow$ error

infer $\Gamma$ $t$ =
  case $t$ of
  …
  $t:\tau$ $\rightarrow$
    case (unify (infer $\Gamma$ $t$) $\tau$) of
      Error $\Rightarrow$ Error : inconsistent types
      $\mid$ _ $\Rightarrow$ $\tau$

Figure 6: Type inference algorithm
Fig. 6 shows an outline of our adaptation of S&V's inference algorithm. Unlike
the original algorithm by S&V, BCL's does not separate constraint generation
and constraint solving. (Put another way, our inference algorithm solves each
constraint immediately after generating it, and before generating the next
constraint.) This difference is important, as it means that our inference
algorithm does not have access to the whole constraint set prior to
unification. Instead the infer function traverses the term, generating and
solving constraints on the fly. For example, if it encounters an application
$t_{1}\ t_{2}$, it figures out the type of the term from the environment
($\Gamma$) and generates a constraint like
$\\{\tau_{1}\simeq\tau_{2}\rightarrow\alpha\\}$, where $\tau_{1}$ is the type
of term $t_{1}$, $\tau_{2}$ is the type of $t_{2}$ and $\alpha$ is a fresh
type variable. infer sends this constraint to unify, which attempts to satisfy
it or raises an error if the constraint cannot be satisfied.
Fig. 6 shows the infer case for a term $t$ annotated with the type $\tau$.
infer generates a constraint which tries to unify the type inferred for $t$
with the annotated type, $\tau$. We highlight this case for two reasons.
First, the only way we can currently introduce a ? type in BCL is through an
annotation. Therefore, this is the only point where constraints involving the
? type originate. Second it is critically important that this case returns the
annotated type and not the inferred type. Note that in incr the inferred type
$Int\rightarrow Int$ differs from–but is _consistent-equal_ with–the annotated
type $\texttt{?}\rightarrow Int$. We always want the user’s explicit
annotation to take precedence in this situation.
BCL’s unification algorithm is already based on Huet’s unification algorithm,
which makes adopting the changes suggested by S&V easier. The crux of S&V’s
algorithm lies in the way the ? type unifies with other types, and
particularly with type variables. When ? unifies with a type variable, S&V’s
algorithm makes ? the representative node. However, when ? unifies with types
other than type variables, the other type becomes the representative element
of the resulting set. The find and merge functions in Fig. 6 come from the
union-find data structure that underlies Huet’s unification algorithm.
Respectively, they compute a node’s representative element, and union two
nodes’ sets keeping the representative element of the first node’s set.
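The two representative-choice rules for ? can be pictured with a toy union-find (our own Python sketch with hypothetical names; not BCL's actual implementation, and it handles only the leaf cases):

```python
# Representative choice for the dynamic type, per S&V:
#   ? wins over type variables; every other type wins over ?.
parent = {}

def find(n):
    """Follow parent links to the class representative."""
    while parent.get(n, n) != n:
        n = parent[n]
    return n

def merge(rep, other):
    """Union the two classes, keeping `rep` as representative."""
    parent[find(other)] = find(rep)

def unify_with_dyn(dyn, other):
    """`dyn` is a '?' node; pick the representative per the rules above."""
    if other.startswith("var"):
        merge(dyn, other)     # ? becomes representative of a type variable
    else:
        merge(other, dyn)     # the concrete type becomes representative of ?

unify_with_dyn("?1", "var_a")
unify_with_dyn("?2", "Int")
print(find("var_a"), find("?2"))   # ?1 Int
```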
The first six cases of the unify function handle unification with the ? type
as laid out by S&V. We say first six because Cases 1 and 2 take care of
unifying the ? type with type variables as specified by S&V’s algorithm. Cases
3 and 4 handle an edge case in their algorithm. These two cases simulate the
operational semantics of Siek and Taha [23], which require constraints like
$\\{\texttt{?}\simeq\alpha\rightarrow\beta\\}$ to be treated as
$\\{\texttt{?}\rightarrow\texttt{?}\simeq\alpha\rightarrow\beta\\}$. We use
new to create a new node different from what was passed in to handle this
case.
Cases 8-11 take care of unifying with row and recursive types, neither of
which are covered by S&V’s solution. However, it is our observation that these
types do not require special handling. A constraint like
$\\{x:Int;\epsilon\simeq\texttt{?}\\}$ would be handled by Case 2 and ? would
be merged with the row type $x:Int;\epsilon$. Now suppose the ? is present
inside the row type like in the following constraint
$\\{x:\texttt{?};\epsilon\simeq x:Int;\epsilon\\}$; this will be handled by
Case 8 and then Cases 5 and 12 when we recursively call unify with the types
within the row. The same holds true for unification with the recursive type.
For example, a constraint like $\\{List\ Int\simeq List\ \texttt{?}\\}$ will
have the following successful unification trace:

$\\{List\ Int\simeq List\ \texttt{?}\\}$  (Case 10)
$\rightarrow\\{\Pi(head:Int;tail:List\ Int;\epsilon)\simeq List\ \texttt{?}\\}$  (Case 11)
$\rightarrow\\{\Pi(head:Int;tail:List\ Int;\epsilon)\simeq\Pi(head:\texttt{?};tail:List\ \texttt{?};\epsilon)\\}$  (Case 8)
$\rightarrow\\{\\{Int\simeq\texttt{?}\\},\\{List\ Int\simeq List\ \texttt{?}\\}\\}$  (Case 6, Case 10)
$\rightarrow\cdots$
$\rightarrow\\{\Pi\ \epsilon\simeq\Pi\ \epsilon\\}$  (Case 8)
$\rightarrow\\{\epsilon\simeq\epsilon\\}\rightarrow()$  (Case 12)

where $List$ is defined as

$List\ \alpha\equiv\mu a.\Sigma(Nil:\Pi\epsilon;Cons:\Pi(head:\alpha;tail:a;\epsilon);\epsilon)$

Note that BCL supports equi-recursive types, as mentioned in Section 3, so unify
tracks the types it visits with mark_visited and was_visited to detect cycles.
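The role of this cycle check can be isolated in a toy unifiability test over cyclic type graphs (our own Python sketch; it only answers yes/no and performs no merging, with the seen-pair set playing the role of mark_visited/was_visited):

```python
# Equi-recursive types as cyclic Python lists: ["Cons", "Int", <self>].
# Tracking in-progress pairs makes unrolling terminate (coinductively,
# a pair we are already unifying is assumed to succeed).
def unify(a, b, seen=None):
    seen = set() if seen is None else seen
    if (id(a), id(b)) in seen:        # already unifying this pair: a cycle
        return True
    seen.add((id(a), id(b)))
    if not isinstance(a, list) or not isinstance(b, list):
        return a == b                 # leaves: ground types must match
    if a[0] != b[0] or len(a) != len(b):
        return False                  # constructor clash
    return all(unify(x, y, seen) for x, y in zip(a[1:], b[1:]))

# Two separately built cyclic graphs for "List Int".
l1 = ["Cons", "Int"]; l1.append(l1)
l2 = ["Cons", "Int"]; l2.append(l2)
print(unify(l1, l2))   # terminates with True despite the cycles
```

Without the seen-pair check, the recursive call on the tail positions would unroll the cycle forever.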
#### 4.0.1 The copy_dyn conundrum:
The copy_dyn function is a crucial part of the way ? unifies with other types.
In S&V’s presentation, copy_dyn ensures that each ? node in the constraint set
is physically unique. Without this step, multiple types might unify with the
same ? node, and then transitively with each other. This has the potential to
cause spurious failures in unify. S&V’s solution to this is to traverse the
constraint set and duplicate each ? node prior to unification; this is
performed by their implementation of copy_dyn. Unfortunately, we do not have
access to the full constraint set, because our inference algorithm generates
and solves constraints in one step.
Our first attempt at working around this issue was to call copy_dyn as the
first step in unification. However, this leads to over copying. For example,
consider the constraint $\\{\texttt{?}\rightarrow\alpha\simeq\alpha\rightarrow\tau\\}$. According to Case
7 of unify, when $\alpha$ unifies with the ? node, copy_dyn is called and a
new ? node is created in the union-find structure. But when $\alpha$ then
unifies with $\tau$, find looks up ? as $\alpha$’s representative element, and
copy_dyn is called once more. $\tau$ therefore unifies with the new ? node,
instead of the one which unified with $\alpha$. Thus, we lose the fact that
$\tau$ and $\alpha$ are the same type.
To rectify this, we implement maybe_copy_dyns, which traverses a constraint
and copies each ? node exactly once. (There are many ways to accomplish this.
Our approach was to use one canonical ? node in type annotations, and compare
each ?'s address to the canonical node's address before copying.) The result of
this is the same as originally intended by S&V’s copy_dyn function. That is,
we ensure there is a unique ? node in the union-find structure for every
unique use of ?.
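The copy-once discipline for the leaf case can be sketched as follows (our own Python model of the canonical-node trick; it handles a bare ? only, not traversal of a whole constraint):

```python
# Every syntactic use of '?' must become its own physically distinct node,
# but each node must be copied at most once. We mimic the canonical-node
# trick: annotations share one canonical object, and identity against it
# decides whether a node still needs copying.
CANONICAL_DYN = ["?"]              # the single node shared by all annotations

def was_copied(node):
    return node is not CANONICAL_DYN

def copy_dyn(node):
    return list(node)              # a fresh, physically distinct '?' node

def maybe_copy_dyn(node):
    return node if was_copied(node) else copy_dyn(node)

a = maybe_copy_dyn(CANONICAL_DYN)
b = maybe_copy_dyn(CANONICAL_DYN)
print(a is b, maybe_copy_dyn(a) is a)   # False True
```

Two uses of the canonical ? yield distinct nodes, while an already-copied node is left alone, which is exactly the over-copying bug avoided above.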
### 4.1 Discussion
In Section 2 we gave an overview of how statically typed languages approach
this problem of promoting dynamic typing. It is our observation that most
statically typed, object-oriented languages approach dynamic typing following
Abadi et al. That is, their dynamic type exploits subtype polymorphism to
bypass static type checking. This is a natural direction for object-oriented
languages which rely heavily on subtyping. In order to inspect the types at
runtime, these languages make use of type reflection. Java is one such
language where work has been done to add dynamic types using reflection,
contracts and mirrors [6]. The Java Virtual Machine supports many dynamic
languages like Jython and JRuby, demonstrating that such runtime constructs
help static languages add more dynamic type checks. However, these
implementations only add dynamic checks, and do not achieve the gradual goal
of a seamless mix of static and dynamic types as in [26]. To our knowledge,
only Featherweight Java [11] has attempted to support proper gradual typing
[12]. In any case, the primary purpose for dynamic types in these languages is
inter-operation with other dynamic languages. This differs from our own
purpose and the end result does not fit our needs well. Thus we conclude that
this approach was not a good design choice for us.
The languages closest to BCL are statically typed functional languages with
type inference, such as SML, OCaml, and Haskell. OCaml has incorporated
dynamic typing at the library level by leveraging its support for generalized
algebraic data types [3]. Similarly, Haskell supports a dynamic type as a
derivative of the Typeable type class, which uses reflection [13] to look at
the runtime representation of types. While these approaches introduce more
dynamism, they lack the simplicity of gradual typing, which hides all the
nuts and bolts of the type system behind a simple ? annotation.
Seamless interoperation of static and dynamic types as promised by gradual
typing fits very well with our use case. It lets our end users access both
paradigms without knowledge of specialized types or constructs. Furthermore,
the approach we use—extending unification-based inference with gradual
typing—is a natural extension for languages like BCL, which support static
type inference. The addition of dynamic types to the type system easily boils
down to how we handle this new type in the unification algorithm, and does not
require reworking the entire type system. We attribute this benefit to S&V’s
proposed inference algorithm, which incorporates the essence of the
$\lambda_{\rightarrow}^{?}$ type system. This makes it easier to adapt to an
existing language with similar constructs.
Garcia and Cimini’s work takes a different approach to this problem but their
end goal is the same: gradual type inference in a statically typed language.
The authors of that work feel that S&V’s approach has “complexities that make
it unclear how to adopt, adapt, and extend this approach with modern features
of implicitly typed languages like let-polymorphism, row-polymorphism and
first class polymorphism”. Our experience with S&V’s approach was different:
we found the integration fairly simple without major changes to the original
inference algorithm. We leave a deep dive into the differences between these
two schemes to future work. Based on Garcia and Cimini’s design principle, Xie
et al. [34] introduce an inference algorithm with support for higher-rank
polymorphism, using a _consistent subtyping_ relation. In contrast, BCL only
infers rank-1 polymorphic types and doesn’t support higher-rank polymorphism.
We recognize an added benefit of going from a static to a dynamic language
with explicit ? annotations. Promoting static type checking in a dynamic
language without type inference requires the programmer to add annotations to
all parts of the code that they want statically checked. Needing to add these
annotations is such a burden for the programmer that they often skip some
annotations and miss out on static optimizations. These un-annotated types are
implicitly dynamic, leading to runtime overhead, despite the fact that on many
occasions they could be statically inferred. This in turn has led to efforts
to make gradual typing more efficient [22].
BCL does not have this issue as it provides static inference by default. It
therefore enjoys the optimizations of static typing and can skip
unnecessary runtime checks. Moreover, BCL could support a dynamic-by-default
mode with an additional compiler flag that implicitly annotates un-annotated
terms with the ? type. This makes it even more seamless to go from complete
static typing to complete dynamic typing. We might also consider doing this
implicit annotation on a file-level or function-level. In cases where it is
possible to separate dynamic and static components, this could even lead to
cleaner refactoring. These ideas have not yet been implemented in BCL but are
something we intend to do as future work.
## 5 Application of gradual types
Gradual typing enables the quick prototyping common in dynamic languages, as
well as specialized applications that enable simplification of existing code.
In this section, we focus on the latter, due to space constraints. Notice, in
Fig. 3, that the scale combinator [19] is simply multiplication of a contract
by a floating-point _observable_. In a domain specific language like BCL, it
is convenient to reuse the ** multiplication syntax for scale as well. We can
fit this more general _observable_ multiplication operator into the type
system with the gradual type $Obs\
Double\rightarrow\texttt{?}\rightarrow\texttt{?}$. Our new multiplication
operator can delegate to scale when the second argument is a $Contract$ at
runtime and continue with _observable_ multiplication, or raise a runtime type
error based on the runtime type of the second argument. With this new
operator, the receive function can be rewritten thus:
⬇
let receive currency amount = one currency ** amount in ...
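The runtime dispatch this operator performs can be sketched in Python; here `Contract`, `Obs`, and `scale` are stand-ins for BCL's built-in notions, not the real API:

```python
# Hypothetical Python sketch of the gradually typed multiplication operator.
# Contract, Obs and scale are stand-ins for BCL built-ins, not the real API.

class Contract:
    def __init__(self, desc):
        self.desc = desc

class Obs:
    """A Double-valued observable."""
    def __init__(self, value):
        self.value = value

def scale(obs, contract):
    # Stand-in for the scale combinator of Peyton Jones et al. [19].
    return Contract(f"scale({obs.value}, {contract.desc})")

def obs_mul(x, y):
    """Obs Double -> ? -> ? : branch on the runtime type of y."""
    if isinstance(y, Contract):
        return scale(x, y)             # delegate to scale
    if isinstance(y, Obs):
        return Obs(x.value * y.value)  # plain observable multiplication
    raise TypeError(f"cannot multiply Obs by {type(y).__name__}")
```

The first two branches correspond to delegation and plain observable multiplication; the final raise corresponds to the runtime type error of the gradual semantics.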
There are a variety of extensions to Hindley-Milner that enable this sort of
ad-hoc polymorphism statically. Type classes, for example, extend the
signature of overloaded functions with classes [31], which our users would
need to learn. Similarly, modular implicits introduce a separate syntax for
implicit modular arguments [33]. However, these constructs require effort to
educate our end users in their use and detract from the dynamic feel of the
language. Gradual types, by contrast, are much easier for our end users since
they already work in a dynamic environment and no new syntax is required
(save a ? annotation).
It is worth noting that, while the new multiplication operator can be given a
valid type in BCL, it cannot currently be implemented in BCL; it can only be
implemented as a built-in operator because BCL provides no way to perform case
analysis on the runtime type of a dynamic value. However, addressing this is
actually quite easy if we reuse BCL’s existing support for variants. That is,
we could implement a dynamic_to_type primitive which consumes a dynamic value
and produces a variant describing its runtime type. This would allow us to
then branch on this variant with the existing match construct. Fig. 7 shows a
prototype of a function that achieves this effect assuming the dynamic_to_type
primitive is defined.
⬇
let dyn_obs_mul x y = match dynamic_to_type (y) with
| Obs Double => x ** y
| Contract => scale x y
in
dyn_obs_mul : Obs Double -> Dyn -> Dyn
Figure 7: Sample of a dynamic Observable multiplication function
dynamic_to_type is interesting in light of our discussion in Section 2, which
describes dynamic programming as the territory of BCL’s users and not its
maintainers. Clearly, however, the dynamic multiplication operator is
something that would live in BCL’s standard library and be maintained by the
language maintainers. Indeed there are a number of interesting standard
library functions which we might add on top of dynamic_to_type. Another simple
example would be an any_to_string function, which could produce a string
representation for arbitrary types by traversing their runtime type and
delegating to the appropriate type-specific to-string function. Such a
function would be very handy for debugging and quick inspection of values.
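A minimal sketch of such a function in Python, where the rendering conventions are illustrative and not part of BCL:

```python
# Hypothetical sketch of any_to_string: traverse the runtime type of a value
# and delegate to type-specific printers.  The rendering conventions are
# illustrative, not part of BCL.

def any_to_string(v):
    if isinstance(v, bool):            # check bool before other cases
        return "true" if v else "false"
    if isinstance(v, float):
        return f"{v:g}"
    if isinstance(v, str):
        return f'"{v}"'
    if isinstance(v, (list, tuple)):
        return "[" + "; ".join(any_to_string(x) for x in v) + "]"
    if isinstance(v, dict):            # records as name = value pairs
        return "{" + "; ".join(f"{k} = {any_to_string(x)}"
                               for k, x in v.items()) + "}"
    return str(v)
```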
The any_to_string example is a function which consumes an arbitrary value.
However, there are equally compelling use cases for producing arbitrary
values. For example, property-based testing frameworks rely on automatically
generating values that conform to certain constraints. We could implement a
simple property-based testing framework with a function which consumes the
output of dynamic_to_type and generates arbitrary values that conform to that
type. Such a framework would be especially useful in a domain such as ours,
where real money is at stake, and where robust testing is absolutely critical.
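The generation side can be sketched as follows; the tuple encoding of type descriptors is an assumption for illustration, standing in for whatever dynamic_to_type would produce:

```python
import random

# Hypothetical sketch of property-based generation driven by a runtime type
# descriptor (the kind of value dynamic_to_type might produce).  The tuple
# encoding of types is an assumption for illustration.

def arbitrary(ty, rng):
    tag = ty[0]
    if tag == "double":
        return rng.uniform(-1e6, 1e6)
    if tag == "bool":
        return rng.random() < 0.5
    if tag == "list":
        return [arbitrary(ty[1], rng) for _ in range(rng.randrange(5))]
    if tag == "pair":
        return (arbitrary(ty[1], rng), arbitrary(ty[2], rng))
    raise ValueError(f"unknown type tag {tag!r}")

def check_property(ty, prop, runs=100, seed=0):
    """Generate `runs` arbitrary values of type `ty` and assert `prop` holds."""
    rng = random.Random(seed)
    for _ in range(runs):
        v = arbitrary(ty, rng)
        assert prop(v), f"property failed for {v!r}"
```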
## 6 Conclusion
Dynamic languages are extremely popular with many users. For users with a
limited computer science background, for whom ease of use is paramount,
this is doubly true. However, despite the flexibility offered by dynamic
typing, the safety offered by static typing is helpful in domains where
correctness is critical. In such an arena, gradual types are a perfect blend
of both paradigms, and they provide a middle ground to please a larger group
of users. Given this, it is important for the literature to speak about
adapting gradual types to existing languages. As a first step towards that, we
write about our experiences adapting gradual typing to our implementation of a
statically typed functional language with type inference. We provide context
in terms of how others in similar situations approached this problem, and we
elaborate our inference algorithm with key insights around what worked for us
and what did not. We identify an interesting use case for gradual types here
at Bloomberg, where we look to harmonize end users and language maintainers
with competing goals. End users want to specify financial contracts without
worrying about static typing demands, while language maintainers need a more
rigorous type system that ensures the libraries they write are error-free.
Gradual types allow us to satisfy both groups. We also intend to gather
feedback from our end users and maintainers about how gradual types are being
used, which can give insight into possible tweaks to make this system more
amenable to all.
## References
* [1] Using type dynamic. https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/types/using-type-dynamic, accessed: 2020-10-02
* [2] Abadi, M., Cardelli, L., Pierce, B., Plotkin, G.: Dynamic typing in a statically typed language. ACM transactions on programming languages and systems (TOPLAS) 13(2), 237–268 (1991)
* [3] Balestrieri, F., Mauny, M.: Generic programming in ocaml. Electronic Proceedings in Theoretical Computer Science 285, 59–100 (Dec 2018). https://doi.org/10.4204/eptcs.285.3
* [4] Damas, L., Milner, R.: Principal type-schemes for functional programs. In: Proceedings of the 9th ACM SIGPLAN-SIGACT symposium on Principles of programming languages. pp. 207–212 (1982)
* [5] Garcia, R., Cimini, M.: Principal type schemes for gradual programs. SIGPLAN Not. 50(1), 303–315 (Jan 2015). https://doi.org/10.1145/2775051.2676992
* [6] Gray, K.E., Findler, R.B., Flatt, M.: Fine-grained interoperability through mirrors and contracts. ACM SIGPLAN Notices 40(10), 231–245 (2005)
* [7] Greenberg, M.: The dynamic practice and static theory of gradual typing. In: 3rd Summit on Advances in Programming Languages (SNAPL 2019). Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik (2019)
* [8] Gronski, J., Knowles, K., Tomb, A., Freund, S.N., Flanagan, C.: Sage: Hybrid checking for flexible specifications. In: Scheme and Functional Programming Workshop. vol. 6, pp. 93–104 (2006)
* [9] Hejlsberg, A.: C# 4.0 and beyond by anders hejlsberg. Microsoft Channel 9 (2010)
* [10] Huet, G.: A unification algorithm for typed lambda calculus. Theoretical Computer Science 1(1), 27–57 (1975). https://doi.org/10.1016/0304-3975(75)90011-0
* [11] Igarashi, A., Pierce, B.C., Wadler, P.: Featherweight java: A minimal core calculus for java and gj. ACM Trans. Program. Lang. Syst. 23(3), 396–450 (May 2001). https://doi.org/10.1145/503502.503505
* [12] Ina, L., Igarashi, A.: Gradual typing for featherweight java. Computer Software 26(2), 18–40 (2009)
* [13] Jones, S.P., Weirich, S., Eisenberg, R.A., Vytiniotis, D.: A reflection on types. In: A List of Successes That Can Change the World, pp. 292–317. Springer (2016)
* [14] Knight, K.: Unification: A multidisciplinary survey. ACM Computing Surveys (CSUR) 21(1), 93–124 (1989)
* [15] Matthews, J., Findler, R.B.: Operational semantics for multi-language programs. ACM Transactions on Programming Languages and Systems (TOPLAS) 31(3), 1–44 (2009)
* [16] Meijer, E., Drayton, P.: Static typing where possible, dynamic typing when needed: The end of the cold war between programming languages. Citeseer (2004)
* [17] Milner, R.: A theory of type polymorphism in programming. Journal of computer and system sciences 17(3), 348–375 (1978)
* [18] Miyazaki, Y., Sekiyama, T., Igarashi, A.: Dynamic type inference for gradual hindley–milner typing. Proc. ACM Program. Lang. 3(POPL) (Jan 2019). https://doi.org/10.1145/3290331
* [19] Peyton Jones, S., Eber, J.M., Seward, J.: Composing contracts: An adventure in financial engineering (functional pearl). SIGPLAN Not. 35(9), 280–292 (Sep 2000). https://doi.org/10.1145/357766.351267
* [20] Pottier, F.: A modern eye on ml type inference: old techniques and recent developments (09 2005)
* [21] Pottier, F., Rémy, D.: The Essence of ML Type Inference, pp. 389–489 (01 2005)
* [22] Rastogi, A., Chaudhuri, A., Hosmer, B.: The ins and outs of gradual type inference. SIGPLAN Not. 47(1), 481–494 (Jan 2012). https://doi.org/10.1145/2103621.2103714
* [23] Siek, J., Taha, W.: Gradual typing for functional languages. i: Scheme and functional programming workshop (2006)
* [24] Siek, J., Taha, W.: Gradual typing for objects. In: European Conference on Object-Oriented Programming. pp. 2–27. Springer (2007)
* [25] Siek, J.G., Vachharajani, M.: Gradual typing with unification-based inference. In: Proceedings of the 2008 symposium on Dynamic languages. pp. 1–12 (2008)
* [26] Siek, J.G., Vitousek, M.M., Cimini, M., Boyland, J.T.: Refined Criteria for Gradual Typing. In: Ball, T., Bodik, R., Krishnamurthi, S., Lerner, B.S., Morrisett, G. (eds.) 1st Summit on Advances in Programming Languages (SNAPL 2015). Leibniz International Proceedings in Informatics (LIPIcs), vol. 32, pp. 274–293. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, Dagstuhl, Germany (2015). https://doi.org/10.4230/LIPIcs.SNAPL.2015.274
* [27] Tarjan, R.E.: Efficiency of a good but not linear set union algorithm. Journal of the ACM (JACM) 22(2), 215–225 (1975)
* [28] Tobin-Hochstadt, S., Felleisen, M.: Interlanguage migration: from scripts to programs. In: Companion to the 21st ACM SIGPLAN symposium on Object-oriented programming systems, languages, and applications. pp. 964–974 (2006)
* [29] Tobin-Hochstadt, S., Felleisen, M.: The design and implementation of typed scheme. ACM SIGPLAN Notices 43(1), 395–406 (2008)
* [30] Vitousek, M.M., Kent, A.M., Siek, J.G., Baker, J.: Design and evaluation of gradual typing for python. In: Proceedings of the 10th ACM Symposium on Dynamic Languages. p. 45–56. DLS ’14, Association for Computing Machinery, New York, NY, USA (2014). https://doi.org/10.1145/2661088.2661101
* [31] Wadler, P., Blott, S.: How to make ad-hoc polymorphism less ad hoc. In: Proceedings of the 16th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages. p. 60–76. POPL ’89, Association for Computing Machinery, New York, NY, USA (1989). https://doi.org/10.1145/75277.75283
* [32] Wand, M.: A simple algorithm and proof for type inference (1987)
* [33] White, L., Bour, F., Yallop, J.: Modular implicits. Electronic Proceedings in Theoretical Computer Science 198, 22–63 (Dec 2015). https://doi.org/10.4204/eptcs.198.2, http://dx.doi.org/10.4204/EPTCS.198.2
* [34] Xie, N., Bi, X., Oliveira, B.C.D.S., Schrijvers, T.: Consistent subtyping for all. ACM Trans. Program. Lang. Syst. 42(1) (Nov 2019). https://doi.org/10.1145/3310339
EUROPEAN ORGANIZATION FOR NUCLEAR RESEARCH
CERN-EP-2021-018
28 January 2021
Search for $K^{+}$ decays to a muon and invisible particles
The NA62 Collaboration
###### Abstract
The NA62 experiment at CERN reports searches for $K^{+}\to\mu^{+}N$ and
$K^{+}\to\mu^{+}\nu X$ decays, where $N$ and $X$ are massive invisible
particles, using the 2016–2018 data set. The $N$ particle is assumed to be a
heavy neutral lepton, and the results are expressed as upper limits of ${\cal
O}(10^{-8})$ of the neutrino mixing parameter $|U_{\mu 4}|^{2}$ for $N$ masses
in the range 200–384 MeV/$c^{2}$ and lifetime exceeding 50 ns. The $X$
particle is considered a scalar or vector hidden sector mediator decaying to
an invisible final state, and upper limits of the decay branching fraction for
$X$ masses in the range 10–370 MeV/$c^{2}$ are reported for the first time,
ranging from ${\cal O}(10^{-5})$ to ${\cal O}(10^{-7})$. An improved upper
limit of $1.0\times 10^{-6}$ is established at 90% CL on the
$K^{+}\to\mu^{+}\nu\nu\bar{\nu}$ branching fraction.
To be submitted to Physics Letters B
The NA62 Collaboration 111Corresponding author: Evgueni Goudzovski, email:
<EMAIL_ADDRESS>
Université Catholique de Louvain, Louvain-La-Neuve, Belgium
E. Cortina Gil, A. Kleimenova, E. Minucci 111Corresponding author: Evgueni
Goudzovski, email: evgueni.goudzovski@cern.ch${}^{,}\,$222Deceased, S.
Padolski 333Present address: Brookhaven National Laboratory, Upton, NY 11973,
USA, P. Petrov, A. Shaikhiev 444Also at Institute for Nuclear Research of the
Russian Academy of Sciences, 117312 Moscow, Russia, R. Volpe 555Present
address: Faculty of Mathematics, Physics and Informatics, Comenius University,
842 48, Bratislava, Slovakia
TRIUMF, Vancouver, British Columbia, Canada
T. Numao, Y. Petrov, B. Velghe
University of British Columbia, Vancouver, British Columbia, Canada
D. Bryman 666Also at TRIUMF, Vancouver, British Columbia, V6T 2A3, Canada, J.
Fu
Charles University, Prague, Czech Republic
T. Husek 777Present address: Department of Astronomy and Theoretical Physics,
Lund University, Lund, SE 223-62, Sweden, J. Jerhot 888Present address:
Université Catholique de Louvain, B-1348 Louvain-La-Neuve, Belgium, K. Kampf,
M. Zamkovsky
Institut für Physik and PRISMA Cluster of Excellence, Universität Mainz,
Mainz, Germany
R. Aliberti 999Present address: Institut für Kernphysik and Helmholtz
Institute Mainz, Universität Mainz, Mainz, D-55099, Germany, G. Khoriauli
101010Present address: Universität Würzburg, D-97070 Würzburg, Germany, J.
Kunze, D. Lomidze 111111Present address: European XFEL GmbH, D-22761 Hamburg,
Germany, L. Peruzzo, M. Vormstein, R. Wanke
Dipartimento di Fisica e Scienze della Terra dell’Università e INFN, Sezione
di Ferrara, Ferrara, Italy
P. Dalpiaz, M. Fiorini, I. Neri, A. Norton 121212Present address: University
of Glasgow, Glasgow, G12 8QQ, UK, F. Petrucci, H. Wahl 131313Present address:
Institut für Physik and PRISMA Cluster of excellence, Universität Mainz,
D-55099 Mainz, Germany
INFN, Sezione di Ferrara, Ferrara, Italy
A. Cotta Ramusino, A. Gianoli
Dipartimento di Fisica e Astronomia dell’Università e INFN, Sezione di
Firenze, Sesto Fiorentino, Italy
E. Iacopini, G. Latino, M. Lenti, A. Parenti
INFN, Sezione di Firenze, Sesto Fiorentino, Italy
A. Bizzeti 141414Also at Dipartimento di Fisica, Università di Modena e Reggio
Emilia, I-41125 Modena, Italy, F. Bucci
Laboratori Nazionali di Frascati, Frascati, Italy
A. Antonelli, G. Georgiev 151515Also at Faculty of Physics, University of
Sofia, BG-1164 Sofia, Bulgaria, V. Kozhuharov 151515Also at Faculty of
Physics, University of Sofia, BG-1164 Sofia, Bulgaria, G. Lanfranchi, S.
Martellotti, M. Moulson, T. Spadaro
Dipartimento di Fisica “Ettore Pancini” e INFN, Sezione di Napoli, Napoli,
Italy
F. Ambrosino, T. Capussela, M. Corvino 111Corresponding author: Evgueni
Goudzovski, email<EMAIL_ADDRESS>D. Di Filippo, P. Massarotti,
M. Mirra, M. Napolitano, G. Saracino
Dipartimento di Fisica e Geologia dell’Università e INFN, Sezione di Perugia,
Perugia, Italy
G. Anzivino, F. Brizioli, E. Imbergamo, R. Lollini, R. Piandani 161616Present
address: Institut für Experimentelle Teilchenphysik (KIT), D-76131 Karlsruhe,
Germany, C. Santoni
INFN, Sezione di Perugia, Perugia, Italy
M. Barbanera, P. Cenci, B. Checcucci, P. Lubrano, M. Lupi 171717Present
address: Institut am Fachbereich Informatik und Mathematik, Goethe
Universität, D-60323 Frankfurt am Main, Germany, M. Pepe, M. Piccini
Dipartimento di Fisica dell’Università e INFN, Sezione di Pisa, Pisa, Italy
F. Costantini, L. Di Lella 131313Present address: Institut für Physik and
PRISMA Cluster of excellence, Universität Mainz, D-55099 Mainz, Germany, N.
Doble 131313Present address: Institut für Physik and PRISMA Cluster of
excellence, Universität Mainz, D-55099 Mainz, Germany, M. Giorgi, S. Giudici,
G. Lamanna, E. Lari, E. Pedreschi, M. Sozzi
INFN, Sezione di Pisa, Pisa, Italy
C. Cerri, R. Fantechi, L. Pontisso, F. Spinella
Scuola Normale Superiore e INFN, Sezione di Pisa, Pisa, Italy
I. Mannelli
Dipartimento di Fisica, Sapienza Università di Roma e INFN, Sezione di Roma I,
Roma, Italy
G. D’Agostini, M. Raggi
INFN, Sezione di Roma I, Roma, Italy
A. Biagioni, E. Leonardi, A. Lonardo, P. Valente, P. Vicini
INFN, Sezione di Roma Tor Vergata, Roma, Italy
R. Ammendola, V. Bonaiuto 181818Also at Department of Industrial Engineering,
University of Roma Tor Vergata, I-00173 Roma, Italy, A. Fucci, A. Salamon, F.
Sargeni 191919Also at Department of Electronic Engineering, University of Roma
Tor Vergata, I-00173 Roma, Italy
Dipartimento di Fisica dell’Università e INFN, Sezione di Torino, Torino,
Italy
R. Arcidiacono 202020Also at Università degli Studi del Piemonte Orientale,
I-13100 Vercelli, Italy, B. Bloch-Devaux, M. Boretto 111Corresponding author:
Evgueni Goudzovski, email<EMAIL_ADDRESS>E. Menichetti, E.
Migliore, D. Soldi
INFN, Sezione di Torino, Torino, Italy
C. Biino, A. Filippi, F. Marchetto
Instituto de Física, Universidad Autónoma de San Luis Potosí, San Luis Potosí,
Mexico
J. Engelfried, N. Estrada-Tristan 212121Also at Universidad de Guanajuato,
Guanajuato, Mexico
Horia Hulubei National Institute of Physics for R&D in Physics and Nuclear
Engineering, Bucharest-Magurele, Romania
A. M. Bragadireanu, S. A. Ghinescu, O. E. Hutanu
Joint Institute for Nuclear Research, Dubna, Russia
A. Baeva, D. Baigarashev, D. Emelyanov, T. Enik, V. Falaleev, V. Kekelidze, A.
Korotkova, L. Litov 151515Also at Faculty of Physics, University of Sofia,
BG-1164 Sofia, Bulgaria, D. Madigozhin, M. Misheva 222222Present address:
Institute of Nuclear Research and Nuclear Energy of Bulgarian Academy of
Science (INRNE-BAS), BG-1784 Sofia, Bulgaria, N. Molokanova, S. Movchan, I.
Polenkevich, Yu. Potrebenikov, S. Shkarovskiy, A. Zinchenko 222Deceased
Institute for Nuclear Research of the Russian Academy of Sciences, Moscow,
Russia
S. Fedotov, E. Gushchin, A. Khotyantsev, Y. Kudenko 232323Also at National
Research Nuclear University (MEPhI), 115409 Moscow and Moscow Institute of
Physics and Technology, 141701 Moscow region, Moscow, Russia, V. Kurochka, M.
Medvedeva, A. Mefodev
Institute for High Energy Physics - State Research Center of Russian
Federation, Protvino, Russia
S. Kholodenko, V. Kurshetsov, V. Obraztsov, A. Ostankov 222Deceased, V.
Semenov 222Deceased, V. Sugonyaev, O. Yushchenko
Faculty of Mathematics, Physics and Informatics, Comenius University,
Bratislava, Slovakia
L. Bician 111Corresponding author: Evgueni Goudzovski, email:
<EMAIL_ADDRESS>T. Blazek, V. Cerny, Z. Kucerova
CERN, European Organization for Nuclear Research, Geneva, Switzerland
J. Bernhard, A. Ceccucci, H. Danielsson, N. De Simone 242424Present address:
DESY, D-15738 Zeuthen, Germany, F. Duval, B. Döbrich, L. Federici, E.
Gamberini, L. Gatignon, R. Guida, F. Hahn 222Deceased, E. B. Holzer, B.
Jenninger, M. Koval 252525Present address: Charles University, 116 36 Prague
1, Czech Republic, P. Laycock 333Present address: Brookhaven National
Laboratory, Upton, NY 11973, USA, G. Lehmann Miotto, P. Lichard, A. Mapelli,
R. Marchevski, K. Massri, M. Noy, V. Palladino 262626Present address: Physics
Department, Imperial College London, London, SW7 2BW, UK, M. Perrin-Terrin
272727Present address: Aix Marseille University, CNRS/IN2P3, CPPM, F-13288,
Marseille, France${}^{,}\,$282828Also at Université Catholique de Louvain,
B-1348 Louvain-La-Neuve, Belgium, J. Pinzino 292929Present address: INFN,
Sezione di Pisa, I-56100 Pisa, Italy, V. Ryjov, S. Schuchmann 131313Present
address: Institut für Physik and PRISMA Cluster of excellence, Universität
Mainz, D-55099 Mainz, Germany, S. Venditti
University of Birmingham, Birmingham, United Kingdom
T. Bache, M. B. Brunetti 303030Present address: Department of Physics,
University of Warwick, Coventry, CV4 7AL, UK, V. Duk 313131Present address:
INFN, Sezione di Perugia, I-06100 Perugia, Italy, V. Fascianelli 323232Present
address: Center for theoretical neuroscience, Columbia University, New York,
NY 10027, USA, J. R. Fry, F. Gonnella, E. Goudzovski111Corresponding author:
Evgueni Goudzovski, email<EMAIL_ADDRESS>J. Henshaw, L.
Iacobuzio, C. Lazzeroni, N. Lurkin 888Present address: Université Catholique
de Louvain, B-1348 Louvain-La-Neuve, Belgium, F. Newson, C. Parkinson
888Present address: Université Catholique de Louvain, B-1348 Louvain-La-Neuve,
Belgium, A. Romano, A. Sergi 333333Present address: Dipartimento di Fisica
dell’Università e INFN, Sezione di Genova, I-16146 Genova, Italy, A. Sturgess,
J. Swallow
University of Bristol, Bristol, United Kingdom
H. Heath, R. Page, S. Trilov
University of Glasgow, Glasgow, United Kingdom
B. Angelucci, D. Britton, C. Graham, D. Protopopescu
University of Lancaster, Lancaster, United Kingdom
J. Carmignani, J. B. Dainton, R. W. L. Jones, G. Ruggiero
University of Liverpool, Liverpool, United Kingdom
L. Fulton, D. Hutchcroft, E. Maurice 343434Present address: Laboratoire
Leprince Ringuet, F-91120 Palaiseau, France, B. Wrona
George Mason University, Fairfax, Virginia, USA
A. Conovaloff, P. Cooper, D. Coward 353535Also at SLAC National Accelerator
Laboratory, Stanford University, Menlo Park, CA 94025, USA, P. Rubin
11footnotetext: Present address: CERN, European Organization for Nuclear
Research, CH-1211 Geneva 23, Switzerland22footnotetext: Also at Laboratori
Nazionali di Frascati, I-00044 Frascati, Italy
## Introduction
All Standard Model (SM) fermions except neutrinos are known to exhibit both
chiralities. The existence of right-handed neutrinos, or heavy neutral leptons
(HNLs), is hypothesised in many SM extensions to generate non-zero masses of
the SM neutrinos via the seesaw mechanism [1]. For example, the Neutrino
Minimal Standard Model [2] accounts for dark matter, baryogenesis, neutrino
masses and oscillations by postulating two HNLs in the MeV–GeV mass range and
a third HNL at the keV mass scale, which is a dark matter candidate.
Mixing between HNLs (denoted $N$ below) and active neutrinos gives rise to HNL
production in meson decays. The expected branching fraction of the
$K^{+}\to\mu^{+}N$ decay is [3]
${\cal B}(K^{+}\to\mu^{+}N)={\cal
B}(K^{+}\to\mu^{+}\nu)\cdot\rho_{\mu}(m_{N})\cdot|U_{\mu 4}|^{2},$
where ${\cal B}(K^{+}\to\mu^{+}\nu)$ is the measured branching fraction of the
SM leptonic decay [4], $|U_{\mu 4}|^{2}$ is the mixing parameter, and
$\rho_{\mu}(m_{N})$ is a kinematic factor which depends on the HNL mass
$m_{N}$:
$\rho_{\mu}(m_{N})=\frac{(x+y)-(x-y)^{2}}{x(1-x)^{2}}\cdot\lambda^{1/2}(1,x,y),$
(1)
with $x=(m_{\mu}/m_{K})^{2}$, $y=(m_{N}/m_{K})^{2}$ and
$\lambda(1,x,y)=1+x^{2}+y^{2}-2(x+y+xy)$. The factor $\rho_{\mu}(m_{N})$
increases from unity at $m_{N}=0$ to a maximum of 4.13 at $m_{N}=263$
MeV/$c^{2}$, and decreases to zero at the kinematic limit
$m_{N}=m_{K}-m_{\mu}$. Assuming that the HNL decays exclusively to SM
particles, its lifetime in the mass range $m_{N}<m_{K}$ exceeds
$10^{-4}/|U_{4}|^{2}~{}\mu$s, where $|U_{4}|^{2}$ is the largest of the three
coupling parameters $|U_{\ell 4}|^{2}$ ($\ell=e,\mu,\tau$) [5]. Therefore
under the above assumption, and additionally assuming conservatively that
$|U_{\ell 4}|^{2}<10^{-4}$, the HNL can be considered stable in production-
search experiments.
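The kinematic factor in Eq. (1) can be checked numerically; the sketch below uses PDG values for the charged kaon and muon masses and guards the square root against rounding at the kinematic limit:

```python
import math

# Numerical check of the kinematic factor rho_mu(m_N) of Eq. (1).
# Masses in MeV/c^2 (PDG values, truncated).
M_MU, M_K = 105.658, 493.677

def rho_mu(m_N):
    x = (M_MU / M_K) ** 2
    y = (m_N / M_K) ** 2
    lam = 1 + x**2 + y**2 - 2 * (x + y + x * y)
    # max() guards against tiny negative rounding at the kinematic limit
    return ((x + y) - (x - y) ** 2) / (x * (1 - x) ** 2) * math.sqrt(max(lam, 0.0))
```

This reproduces the quoted behaviour: $\rho_{\mu}(0)=1$, a maximum of about 4.13 near $m_{N}=263$ MeV/$c^{2}$, and zero at $m_{N}=m_{K}-m_{\mu}$.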
A hypothetical scalar or vector hidden sector mediator $X$ with mass
$m_{X}<m_{K}-m_{\mu}$ and coupling preferentially to the muon can be produced
in $K^{+}\to\mu^{+}\nu X$ decays. The existence of such a light mediator
offers a solution to the muon $g-2$ puzzle that also accommodates dark matter
(DM) freeze-out [6]. Within these scenarios the $X$ particle is expected to
decay promptly with a sizeable invisible branching fraction. In the case of a
vector mediator, gauge invariance requires the decay $X\to\nu\bar{\nu}$. In
the light DM freeze-out model, the decay $X\to\chi\bar{\chi}$ is expected,
where $\chi$ is the DM particle.
The $K^{+}\to\mu^{+}\nu\nu\bar{\nu}$ decay occurs within the SM at second
order in the Fermi constant $G_{F}$, and the expected branching fraction at
leading order in chiral perturbation theory, ${\cal B_{\rm SM}}=1.62\times
10^{-16}$ [7], is experimentally out of reach. The strongest upper limit to
date, ${\cal B}(K^{+}\to\mu^{+}\nu\nu\bar{\nu})<2.4\times 10^{-6}$ at 90% CL,
has been established by the BNL-E949 experiment [8].
The $K^{+}\to\mu^{+}N$, $K^{+}\to\mu^{+}\nu X$ and
$K^{+}\to\mu^{+}\nu\nu\bar{\nu}$ decays with invisible $N$ and $X$ particles
are characterised by a single muon and missing energy in the final state.
Searches for these decays using the data collected by the NA62 experiment at
CERN in 2016–2018 are reported here. The $N$ particle is interpreted as a HNL,
and the results are presented as upper limits of the extended neutrino mixing
matrix element $|U_{\mu 4}|^{2}$ for $m_{N}$ in the range 200–384 MeV/$c^{2}$,
with the assumption that the HNL lifetime exceeds 50 ns. For the
$K^{+}\to\mu^{+}\nu X$ decays (in a number of $m_{X}$ hypotheses within the
range 10–370 MeV/$c^{2}$) and the $K^{+}\to\mu^{+}\nu\nu\bar{\nu}$ decay,
upper limits on the branching fractions are reported.
## 1 Beam, detector and data sample
The layout of the NA62 beamline and detector [9] is shown schematically in
Fig. 1. An unseparated secondary beam of $\pi^{+}$ (70%), protons (23%) and
$K^{+}$ (6%) is created by directing 400 GeV/$c$ protons extracted from the
CERN SPS onto a beryllium target in spills of 3 s effective duration. The
central beam momentum is 75 GeV/$c$, with a momentum spread of 1% (rms).
Beam kaons are tagged with 70 ps time resolution by a differential Cherenkov
counter (KTAG) using as radiator nitrogen gas at 1.75 bar pressure contained
in a 5 m long vessel. Beam particle positions, momenta and times (to better
than 100 ps resolution) are measured by a silicon pixel spectrometer
consisting of three stations (GTK1,2,3) and four dipole magnets. A muon
scraper (SCR) is installed between GTK1 and GTK2. A 1.2 m thick steel
collimator (COL) with a central aperture of $76\times 40$ mm$^2$ and outer
dimensions of $1.7\times 1.8$ m$^2$ is placed upstream of GTK3 to absorb hadrons
from upstream $K^{+}$ decays (a variable aperture collimator of $0.15\times
0.15$ m$^2$ outer dimensions was used up to early 2018). Inelastic interactions
of beam particles in GTK3 are detected by an array of scintillator hodoscopes
(CHANTI). The beam is delivered into a vacuum tank evacuated to a pressure of
$10^{-6}$ mbar, which contains a 75 m long fiducial decay volume (FV) starting
2.6 m downstream of GTK3. The beam divergence at the FV entrance is 0.11 mrad
(rms) in both horizontal and vertical planes. Downstream of the FV, undecayed
beam particles continue their path in vacuum.
Figure 1: Schematic side view of the NA62 beamline and detector.
Momenta of charged particles produced by $K^{+}$ decays in the FV are measured
by a magnetic spectrometer (STRAW) located in the vacuum tank downstream of
the FV. The spectrometer consists of four tracking chambers made of straw
tubes, and a dipole magnet (M) located between the second and third chambers
that provides a horizontal momentum kick of 270 MeV/$c$. The momentum
resolution achieved is $\sigma_{p}/p=(0.30\oplus 0.005p)\%$, where the
momentum $p$ is expressed in GeV/$c$.
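Here $\oplus$ denotes addition in quadrature; as a quick numerical check (not NA62 software), the quoted resolution can be evaluated as:

```python
import math

# sigma_p/p = (0.30 "+ in quadrature" 0.005*p) %, with p in GeV/c.
def straw_resolution_percent(p_gev):
    return math.hypot(0.30, 0.005 * p_gev)
```

At the 75 GeV/$c$ beam momentum this evaluates to about 0.48%.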
A ring-imaging Cherenkov detector (RICH), consisting of a 17.5 m long vessel
filled with neon at atmospheric pressure (with a Cherenkov threshold for muons
of 9.5 GeV/$c$), is used for the identification of charged particles and for
time measurement with 70 ps precision for particles well above the threshold.
Two scintillator hodoscopes (CHOD), which include a matrix of tiles and two
planes of slabs arranged in four quadrants downstream of the RICH, provide
trigger signals and time measurements with 200 ps precision.
A $27X_{0}$ thick quasi-homogeneous liquid krypton (LKr) electromagnetic
calorimeter is used for particle identification and photon detection. The
calorimeter has an active volume of 7 m$^3$, is segmented in the transverse
direction into 13248 projective cells of approximately $2\times 2$ cm$^2$,
and provides an energy resolution $\sigma_{E}/E=(4.8/\sqrt{E}\oplus 11/E\oplus
0.9)\%$, where $E$ is expressed in GeV. To achieve hermetic acceptance for
photons emitted in the FV by $K^{+}$ decays at angles up to 50 mrad to the
beam axis, the LKr calorimeter is supplemented by annular lead glass detectors
(LAV) installed in 12 positions inside and downstream of the vacuum tank, and
two lead/scintillator sampling calorimeters (IRC, SAC) located close to the
beam axis. An iron/scintillator sampling hadronic calorimeter formed of two
modules (MUV1,2) and a muon detector (MUV3) consisting of 148 scintillator
tiles located behind an 80 cm thick iron wall are used for particle
identification.
The data sample used for this analysis is obtained from $0.92\times 10^{6}$
SPS spills recorded during 410 days of operation in 2016–2018, with the
typical beam intensity increasing over time from $1.3\times 10^{12}$ to
$2.2\times 10^{12}$ protons per spill. The latter value corresponds to a mean
instantaneous beam particle rate at the FV entrance of 500 MHz, and a mean
$K^{+}$ decay rate in the FV of 3.7 MHz. Data recorded with a minimum-bias
trigger based on CHOD signals [10], downscaled by a factor of 400, is used for
the analysis. This trigger is 99% efficient for single charged particles in
the CHOD acceptance.
## 2 Measurement principles and event selection
The rates of the signal processes are measured with respect to the
$K^{+}\to\mu^{+}\nu$ decay rate. This approach benefits from first-order
cancellations of residual detector inefficiencies not fully accounted for in
simulations, as well as trigger inefficiencies and random veto losses common
to signal and normalization modes.
Candidate signal decays, as well as the $K^{+}\to\mu^{+}\nu$ decay, are
characterised by a single muon and no other detectable particles in the final
state. Backgrounds are due to beam particle decays upstream of the vacuum
tank, decays to multiple detectable particles, and inelastic interactions of
beam particles in GTK3. Event selection is optimized to suppress these
backgrounds. The principal selection criteria are listed below.
* •
A positively charged muon track is required to be reconstructed in the STRAW
spectrometer with momentum in the range 5–70 GeV/$c$. The track’s trajectory
through the STRAW chambers and its extrapolation to the LKr calorimeter, CHOD
and MUV3 should be within the geometrical acceptance of these detectors. The
muon time is evaluated using the RICH and CHOD signals spatially associated
with the track.
* •
Particle identification criteria are applied to the STRAW track to suppress
the backgrounds due to misidentification. The ratio of the energy deposited in
the LKr calorimeter, $E$, to the momentum, $p$, measured by the STRAW
spectrometer is required to be $E/p<0.2$. For tracks with momentum below 30
GeV/$c$, a particle identification algorithm is applied based on the RICH
signal pattern within 3 ns of the CHOD time. In particular, tracks with
momenta below the muon Cherenkov threshold must not be identified as
positrons. At least one signal in the MUV3 detector must be within 3 ns of the
muon time and spatially consistent with the projected track impact point in
the MUV3 front plane.
* •
Backgrounds from $K^{+}\to\mu^{+}\nu$ decays upstream of the KTAG and
$\pi^{+}\to\mu^{+}\nu$ decays upstream of GTK3, in coincidence with a beam
pion or proton track in the GTK, are suppressed by requiring a kaon signal in
the KTAG detector within 1 ns of the muon time.
* •
The decay vertex is defined as the point of closest approach of the $K^{+}$
track in the GTK and the muon track in the STRAW, taking into account the
stray magnetic field in the vacuum tank. Identification of the $K^{+}$ track
in the GTK relies on the time difference, $\Delta t_{\rm GK}$, between a GTK
track and the KTAG signal, and spatial compatibility of the GTK and STRAW
tracks quantified by the distance, $d$, of closest approach. A discriminant
${\cal D}(\Delta t_{\rm GK},d)$ is defined using the $\Delta t_{\rm GK}$ and
$d$ distributions measured with $K^{+}\to\pi^{+}\pi^{+}\pi^{-}$ decays [11].
Among GTK tracks with $|\Delta t_{\rm GK}|<0.5$ ns, the track of the parent
kaon is assumed to be the one with the ${\cal D}$ value most consistent with a
$K^{+}\to\mu^{+}$ decay. It is required that $d<7$ mm to reduce the background
from upstream decays.
* •
Background from $K^{+}\to\mu^{+}\nu$ decays between KTAG and GTK3 with pileup
in the GTK is suppressed by geometrical conditions. The reconstructed $K^{+}$
decay vertex is required to be located in the FV at a minimum distance from
the start of the FV, varying from 8 m to 35 m depending on the angle between
the $K^{+}$ momentum in the laboratory frame and the muon momentum in the
$K^{+}$ rest frame.
* •
Backgrounds from $K^{+}$ decays to multiple detectable particles are
suppressed by veto conditions. The muon track must not form a vertex with any
additional STRAW track segment. Energy deposits are not allowed in the LKr
calorimeter that are spatially incompatible with the muon track within 12 ns
of the muon time. No activity is allowed in the large-angle (LAV) or small-
angle (SAC, IRC) photon veto detectors within 3 ns of the muon time, or in the
CHANTI detector within 4 ns of the muon time. No more than two signals in the
CHOD tiles within 6 ns of the muon time, and no more than three signals in the
RICH PMTs within 3 ns of the muon time, spatially incompatible with the muon
track, are allowed. Data loss due to the veto conditions from accidental
activity (random veto) averaged over the data sample is measured to be about
30%.
The squared missing mass is computed as $m_{\rm
miss}^{2}=(P_{K}-P_{\mu})^{2}$, where $P_{K}$ and $P_{\mu}$ are the kaon and
muon 4-momenta, obtained from the 3-momenta measured by the GTK and STRAW
spectrometers under the $K^{+}$ and $\mu^{+}$ mass hypotheses.
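The missing-mass computation can be sketched as follows (a minimal illustration rather than NA62 analysis code; PDG mass values are assumed):

```python
import math

# Minimal sketch of the missing-mass computation described above.
# PDG mass values (GeV/c^2) are assumed; momenta are in GeV/c.
M_K, M_MU = 0.493677, 0.105658

def four_momentum(p3, mass):
    """Build (E, px, py, pz) from a 3-momentum under a mass hypothesis."""
    energy = math.sqrt(mass ** 2 + sum(c ** 2 for c in p3))
    return (energy, *p3)

def m_miss_sq(pK3, pMu3, mK=M_K, mMu=M_MU):
    """Squared missing mass (P_K - P_mu)^2 in GeV^2/c^4."""
    PK = four_momentum(pK3, mK)
    PMu = four_momentum(pMu3, mMu)
    dE = PK[0] - PMu[0]
    dp2 = sum((a - b) ** 2 for a, b in zip(PK[1:], PMu[1:]))
    return dE ** 2 - dp2

# Sanity check: for K+ -> mu+ nu with the kaon at rest, the two-body muon
# momentum p* = (mK^2 - mMu^2)/(2 mK) ~ 0.2355 GeV/c gives m_miss^2 ~ 0.
```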
Monte Carlo simulations of particle interactions with the detector and its
response are performed with a software package based on the Geant4 toolkit
[12]. The $m_{\rm miss}^{2}$ spectra of the selected events from data and
simulated samples, and their ratio, are displayed in Fig. 2. The signal from
the SM leptonic decay $K^{+}\to\mu^{+}\nu$ is observed as a peak at $m_{\rm
miss}^{2}=0$ with a resolution of $1.5\times 10^{-3}~{}{\rm GeV}^{2}/c^{4}$,
and the SM signal region is defined in terms of the reconstructed squared
missing mass as $|m_{\rm miss}^{2}|<0.01~{}{\rm GeV}^{2}/c^{4}$. In contrast,
the $K^{+}\to\mu^{+}N$, $K^{+}\to\mu^{+}\nu X$ and
$K^{+}\to\mu^{+}\nu\nu\bar{\nu}$ decays are characterised by larger $m_{\rm
miss}^{2}$ values.
Figure 2: Left: reconstructed $m_{\rm miss}^{2}$ distributions for data and
the estimated background. The full uncertainties ($\pm 1\sigma$) in each mass
bin of the background spectrum for $m_{\rm miss}^{2}>0$ are shown with a
contour. The boundaries of the SM signal region $|m_{\rm
miss}^{2}|<0.01~{}{\rm GeV}^{2}/c^{4}$ used for normalisation are indicated
with arrows. Top-right: the region $m_{\rm miss}^{2}>0.03~{}{\rm
GeV}^{2}/c^{4}$, with simulated hypothetical $K^{+}\to\mu^{+}\nu X$ (scalar
mediator model, two $m_{X}$ values) and $K^{+}\to\mu^{+}\nu\nu\bar{\nu}$
signals with branching fractions of $10^{-4}$. Bottom-right: ratio of data and
simulated spectra in the region $m_{\rm miss}^{2}>0.03~{}{\rm GeV}^{2}/c^{4}$
with the full uncertainties. Systematic components of the uncertainties are
correlated among the bins.
## 3 Normalisation to the $K^{+}\to\mu^{+}\nu$ decay
The effective number of $K^{+}$ decays in the FV, denoted $N_{K}$, is
evaluated using the number of $K^{+}\to\mu^{+}\nu$ candidates reconstructed in
the data sample. The quantity $N_{K}$ is not corrected for trigger
inefficiency and random veto effects, which cancel between signal and
normalisation, thus making the $N_{K}$ value specific to this analysis. The
background in the SM signal region is negligible (Fig. 2). It is found that
$N_{K}=\frac{N_{\rm SM}}{A_{\rm SM}\cdot{\cal B}(K^{+}\to\mu^{+}\nu)}=(1.14\pm
0.02)\times 10^{10},$
where $N_{\rm SM}=2.19\times 10^{9}$ is the number of selected data events in
the SM signal region, $A_{\rm SM}=0.302\pm 0.005$ is the acceptance of the
selection for the $K^{+}\to\mu^{+}\nu$ decay evaluated using simulations, and
${\cal B}(K^{+}\to\mu^{+}\nu)=0.6356\pm 0.0011$ is the branching fraction of
this decay [4]. The uncertainty of $A_{\rm SM}$, which dominates that of
$N_{K}$, is mainly systematic due to the accuracy of the simulation, and is
evaluated by variation of the selection criteria including the algorithm used
for identification of the $K^{+}$ track in the GTK.
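The normalisation arithmetic above can be reproduced directly. In this sketch the $A_{\rm SM}$ and branching-fraction uncertainties are combined in quadrature, which is an assumption of the check (the paper notes the $A_{\rm SM}$ uncertainty is mainly systematic):

```python
import math

# Quoted inputs from Section 3
N_SM = 2.19e9            # K+ -> mu+ nu candidates in the SM signal region
A_SM, dA = 0.302, 0.005  # selection acceptance and its uncertainty
B, dB = 0.6356, 0.0011   # PDG branching fraction B(K+ -> mu+ nu)

N_K = N_SM / (A_SM * B)
# Propagate the relative uncertainties in quadrature (assumption of this sketch)
dN_K = N_K * math.hypot(dA / A_SM, dB / B)
print(f"N_K = ({N_K/1e10:.2f} +- {dN_K/1e10:.2f}) x 10^10")
# prints: N_K = (1.14 +- 0.02) x 10^10
```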
## 4 Background evaluation with simulations
The main backgrounds to the potential signals at large $m_{\rm miss}^{2}$
values are due to the $K^{+}\to\mu^{+}\nu\gamma$, $K^{+}\to\pi^{0}\mu^{+}\nu$
($\pi^{0}\to\gamma\gamma$) and $K^{+}\to\pi^{+}\pi^{+}\pi^{-}$ decays inside
and upstream of the vacuum tank. Their contributions are estimated with
simulations. The $K^{+}\to\mu^{+}\nu\gamma$ decay is simulated including inner
bremsstrahlung (IB) and structure-dependent processes, and the interference
between these processes [13].
The $K^{+}\to\mu^{+}\nu\gamma$ and $K^{+}\to\pi^{0}\mu^{+}\nu$ backgrounds
arise from the photon detection inefficiency in the hermetic NA62 photon veto
system, and photon conversions in the STRAW and RICH detectors. Photon
detection inefficiency is modelled for the simulated events using the LAV,
LKr, IRC and SAC inefficiencies measured as functions of photon energy using a
$K^{+}\to\pi^{+}\pi^{0}$ decay sample [14]. To evaluate the systematic
uncertainties in the background estimates, an alternative photon veto response
model is used for the simulated events involving photon detector
inefficiencies increased by one sigma of the measurements, and a conservative
assumption that photons converting upstream of the STRAW spectrometer dipole
magnet are not detected in the LAV, IRC and SAC systems. The latter assumption
accounts for the different photon veto conditions used in this analysis with
respect to those used for the inefficiency measurements [14]. The resulting
systematic uncertainty of the estimated background comes mainly from the
limited accuracy of the LAV inefficiency measurements. In particular, the LAV
inefficiency is measured to be $(0.30\pm 0.06)\%$ for photons in the 0.3–3 GeV
energy range, which contains most photons from $K^{+}\to\mu^{+}\nu\gamma$
decays intercepting the LAV geometrical acceptance.
The accuracy of the description of the non-Gaussian $m_{\rm miss}^{2}$ tails
of the $K^{+}\to\mu^{+}\nu(\gamma)$ decay is affected by the limited precision
in the simulation of beam particle pileup and inefficiency in the GTK. This
leads to a deficit of simulated events in the negative tail of the $m_{\rm
miss}^{2}$ distribution populated by the $K^{+}\to\mu^{+}\nu(\gamma)$ decays
only (Fig. 2). For example, a 40% deficit is observed in the region $m_{\rm
miss}^{2}<-0.05~{}{\rm GeV}^{2}/c^{4}$. To account for the missing component
in the positive tail, it is assumed that the non-Gaussian tails of the $m_{\rm
miss}^{2}$ spectrum are left-right symmetrical. A “tail” component (shown
separately in Fig. 2) is added to the estimated background in each $m_{\rm
miss}^{2}$ bin in the region $m_{\rm miss}^{2}>0$ equal to the difference
between the data and simulated spectra in the symmetric mass bin with respect
to $m_{\rm miss}^{2}=0$. A 100% uncertainty is conservatively assigned to this
component to account for the above assumption.
The composition of the estimated background in the kinematic region $m_{\rm
miss}^{2}>0.1~{}{\rm GeV}^{2}/c^{4}$ is reported in Table 1. The largest
component is the radiative $K^{+}\to\mu^{+}\nu\gamma$ (IB) tail, and its
uncertainty is dominated by a contribution due to the accuracy of the
description of the non-Gaussian tail. Further systematic uncertainties due to
beam tuning, calibrations, trigger and reconstruction efficiency are
negligible compared with the overall systematic uncertainty from the sources
considered. The background represents an ${\cal O}(10^{-6})$ fraction of the
number of reconstructed SM $K^{+}\to\mu^{+}\nu$ candidates. Within the region
$m_{\rm miss}^{2}>0.03~{}{\rm GeV}^{2}/c^{4}$, the estimated background agrees
with the data within uncertainties as shown in Fig. 2.
Table 1: Estimated backgrounds in the kinematic region $m_{\rm miss}^{2}>0.1~{}{\rm GeV}^{2}/c^{4}$ with their uncertainties. The uncertainties labelled “PV” are systematic due to the accuracy of the photon veto efficiency modelling (positively correlated among the background sources), and the one labelled “tail” is systematic and accounts for the accuracy of the non-Gaussian $m_{\rm miss}^{2}$ tail simulation.

Background source | Estimated background
---|---
$K^{+}\to\mu^{+}\nu\gamma$ | $6224\pm 105_{\rm stat}\pm 333_{\rm PV}\pm 780_{\rm tail}$
$K^{+}\to\pi^{0}\mu^{+}\nu$ | $1016\pm 47_{\rm stat}\pm 178_{\rm PV}$
$K^{+}\to\pi^{+}\pi^{+}\pi^{-}$ | $309\pm 32_{\rm stat}$
Total background | $7549\pm 119_{\rm stat}\pm 920_{\rm syst}$
## 5 Search for $K^{+}\to\mu^{+}N$ decays
The $K^{+}\to\mu^{+}N$ process is investigated in 269 mass hypotheses,
$m_{N}$, within the HNL search region 200–384 MeV/$c^{2}$. Distances between
adjacent $m_{N}$ values considered are 1 (0.5) MeV/$c^{2}$ below (above) the
mass of 300 MeV/$c^{2}$. The decay is characterised by a narrow peak in the
reconstructed missing mass ($m_{\rm miss}$) spectrum. Therefore the
$K^{+}\to\mu^{+}N$ event selection requires that $|m_{\rm
miss}-m_{N}|<1.5\sigma_{m}$ for each mass hypothesis $m_{N}$, where
$\sigma_{m}$ is the mass resolution evaluated with simulations, as shown in
Fig. 3 (left). The resolution improves by a factor of three with respect to
the NA62 2015 data sample collected without the GTK spectrometer [15].
Considering the peaking nature of the $K^{+}\to\mu^{+}N$ signal, the
background in each $m_{N}$ hypothesis is evaluated using sidebands in the
reconstructed $m_{\rm miss}$ spectrum of the data events. This method is more
precise than one based on simulation. Sidebands are defined in each mass
hypothesis as $1.5\sigma_{m}<|m_{\rm miss}-m_{N}|<11.25\sigma_{m}$,
additionally requiring that $m_{\rm miss}$ is within the range 188–386
MeV/$c^{2}$. The number of expected background events, $N_{\rm exp}$, within
the $\pm 1.5\sigma_{m}$ signal window is evaluated with a second-order
polynomial fit to the sideband data of the $m_{\rm miss}$ spectrum, where the
bin size is $0.75\sigma_{m}$. The uncertainty, $\delta N_{\rm exp}$, in the
number of expected background events includes statistical and systematic
components. The former comes from the uncertainties in the fit parameters,
while the latter is evaluated as the difference between values of $N_{\rm
exp}$ obtained from fits using second and third order polynomials. The
dominant contribution to $\delta N_{\rm exp}$ is statistical, although
systematic uncertainties become comparable as $m_{N}$ approaches the
boundaries of the HNL search region. Systematic errors due to possible HNL
signals in the sidebands are found to be negligible; this check is made
assuming $|U_{\mu 4}|^{2}$ to be equal to the expected sensitivity of the
analysis. The uncertainty in the background estimate, $\delta N_{\rm
exp}/N_{\rm exp}$, increases from 1–2% for $m_{N}$ below 300 MeV/$c^{2}$ to
10% at the upper limit of the HNL search region.
Figure 3: HNL mass resolution $\sigma_{m}$ (left) and acceptance $A_{N}$ of
the selection (right) evaluated from simulations as functions of the HNL mass.
Boundaries of the HNL search region are indicated by vertical arrows.
The signal selection acceptance, $A_{N}$, as a function of $m_{N}$ obtained
with simulations assuming infinite HNL lifetime is displayed in Fig. 3
(right). The acceptance for a mean lifetime of 50 ns (considering decays to
detectable particles) is lower by ${\cal O}(1\%)$ in relative terms, making
the results of the search valid for lifetimes in excess of 50 ns. For shorter
lifetimes, the HNL mean decay length in the laboratory frame becomes
comparable to or smaller than the length of the apparatus. Acceptances for
lifetimes of 5 (1) ns decrease by factors up to 2 (10), depending on $m_{N}$.
Simulations reproduce the $m_{\rm miss}^{2}$ resolution at the
$K^{+}\to\mu^{+}\nu$ peak to a 1% relative precision. Modelling of the
resolution outside the peak is validated using data and simulated
$K^{+}\to\pi^{+}\pi^{+}\pi^{-}$ decay samples; the corresponding systematic
effects on $A_{N}$ do not exceed 2% in relative terms [16].
Figure 4: Left: observed number of events $N_{\rm obs}$, observed upper limit
at 90% CL of the number of signal events $N_{S}$, and expected $\pm 1\sigma$
and $\pm 2\sigma$ bands of the upper limit in the null hypothesis for each HNL
mass value considered. Right: single event sensitivity values of ${\cal
B}_{\rm SES}(K^{+}\to\mu^{+}N)$ (dashed line) and $|U_{\mu 4}|^{2}_{\rm SES}$
(solid line) as functions of the assumed HNL mass. Boundaries of the HNL
search region are indicated by vertical arrows.
The number of observed events, $N_{\rm obs}$, within the signal window and the
quantities $N_{\rm exp}$ and $\delta N_{\rm exp}$ are used to compute the
local signal significance for each mass hypothesis. It is found that the
significance never exceeds 3; therefore, no HNL production signal is observed.
Upper limits at 90% CL of the number of $K^{+}\to\mu^{+}N$ decays, $N_{S}$, in
each HNL mass hypothesis are evaluated from the quantities $N_{\rm obs}$,
$N_{\rm exp}$ and $\delta N_{\rm exp}$ using the ${\rm CL_{S}}$ method [17].
The values of $N_{\rm obs}$, the observed upper limits of $N_{S}$, and the
expected $\pm 1\sigma$ and $\pm 2\sigma$ bands of variation of $N_{S}$ in the
null (i.e. background-only) hypothesis are shown in Fig. 4 (left).
The single-event sensitivity (SES) branching fraction ${\cal B}_{\rm
SES}(K^{+}\to\mu^{+}N)$ and mixing parameter values $|U_{\mu 4}|^{2}_{\rm
SES}$, corresponding to the observation of one signal event, are defined in
each HNL hypothesis as
${\cal B}_{\rm SES}(K^{+}\to\mu^{+}N)=\frac{1}{N_{K}\cdot
A_{N}}~{}~{}~{}~{}{\rm and}~{}~{}~{}~{}|U_{\mu 4}|^{2}_{\rm SES}=\frac{{\cal
B}_{\rm SES}(K^{+}\to\mu^{+}N)}{{\cal
B}(K^{+}\to\mu^{+}\nu)\cdot\rho_{\mu}(m_{N})},$
with the kinematic factor $\rho_{\mu}(m_{N})$ given in Eq. (1). They are shown
as functions of the HNL mass in Fig. 4 (right). The expected number of
$K^{+}\to\mu^{+}N$ signal events, $N_{S}$, is written as
$N_{S}={\cal B}(K^{+}\to\mu^{+}N)/{\cal B}_{\rm SES}(K^{+}\to\mu^{+}N)=|U_{\mu
4}|^{2}/|U_{\mu 4}|^{2}_{\rm SES},$
which is used to obtain upper limits at 90% CL of the branching fraction
${\cal B}(K^{+}\to\mu^{+}N)$ and the mixing parameter $|U_{\mu 4}|^{2}$ from
those of $N_{S}$.
The upper limits obtained for $|U_{\mu 4}|^{2}$ are compared with the results
from earlier searches for the $K^{+}\to\mu^{+}N$ decay [15, 18, 19, 20], and
the Big Bang nucleosynthesis (BBN) constraint [21], in Fig. 5. The results of
the current study represent the first HNL production search in the mass range
374–384 MeV/$c^{2}$, and improve on previous NA62 results in the mass range
300–374 MeV/$c^{2}$ [15] by more than an order of magnitude. In the range
200–300 MeV/$c^{2}$, the sensitivity achieved is similar to that of the
BNL-E949 experiment [18].
A comparison of the above upper limits of $|U_{\mu 4}|^{2}$ with the upper
limits of $|U_{e4}|^{2}$ obtained from HNL production searches in $K^{+}\to
e^{+}N$ [15, 16, 20] and $\pi^{+}\to e^{+}N$ [22, 23] decays is shown in Fig.
6. Upper limits of ${\cal O}(10^{-5})$ obtained on $|U_{\mu 4}|^{2}$ in the
mass range 16–34 MeV/$c^{2}$ from searches of the $\pi^{+}\to\mu^{+}N$ process
[24] are not shown. In comparison to the limits of $|U_{\mu 4}|^{2}$ obtained
from direct HNL decay searches [25, 26], the limits from production searches
are weaker but more robust because they are based on fewer theoretical
assumptions.
Figure 5: Upper limits at 90% CL of $|U_{\mu 4}|^{2}$ obtained for each
assumed HNL mass, compared to the upper limits established by earlier HNL
production searches in $K^{+}\to\mu^{+}N$ decays at NA62 [15], BNL-E949 [18],
OKA [19] and KEK [20]. The lower boundary of $|U_{\mu 4}|^{2}$ imposed by the
BBN constraint [21] is shown by a dashed line.
Figure 6: Summary of upper limits at 90% CL of $|U_{e4}|^{2}$ (red solid
lines) and $|U_{\mu 4}|^{2}$ (blue solid lines) obtained from HNL production
searches in $K^{+}$ decays: this analysis, NA62 [15, 16], BNL-E949 [18], OKA
[19], KEK [20]; and in $\pi^{+}$ decays: TRIUMF [22], PIENU [23]. The lower
boundaries of $|U_{e4}|^{2}$ and $|U_{\mu 4}|^{2}$ imposed by the BBN
constraint [21] are shown by the lower and upper dashed lines, respectively.
## 6 Search for $K^{+}\to\mu^{+}\nu X$ and $K^{+}\to\mu^{+}\nu\nu\bar{\nu}$
decays
The $K^{+}\to\mu^{+}\nu X$ process is investigated in the framework of the
scalar and vector mediator models, defined for non-zero mediator mass $m_{X}$
[6]. In total, 37 mass hypotheses equally spaced in the range 10–370
MeV/$c^{2}$ are examined. The $K^{+}\to\mu^{+}\nu\nu\bar{\nu}$ decay is
investigated assuming the SM differential decay rate distribution [7].
The true missing mass spectrum lies in the $m_{X}\leq m_{\rm miss}\leq
m_{K}-m_{\mu}$ range for the $K^{+}\to\mu^{+}\nu X$ decay, and in the $0\leq
m_{\rm miss}\leq m_{K}-m_{\mu}$ range for the $K^{+}\to\mu^{+}\nu\nu\bar{\nu}$
decay (neglecting the neutrino mass). In both cases, a signal would manifest
itself as an excess of data events over the estimated background at large
reconstructed $m_{\rm miss}^{2}$ values as shown in Fig. 2 (top-right).
Therefore the event selection requires that $m_{\rm miss}^{2}>m_{0}^{2}$. The
$m_{0}$ value is optimized to obtain the strongest expected upper limit of the
decay rate in the null hypothesis, considering that signal acceptances and
backgrounds both decrease as functions of $m_{0}^{2}$. The optimization is
performed independently for each of the possible signals listed above.
The numbers of background events, $N_{\rm exp}$, and their uncertainties,
$\delta N_{\rm exp}$, estimated with simulations (Section 4) are shown as
functions of $m_{0}^{2}$ in Fig. 7 (left). Also shown are the expected upper
limits at 90% CL of the number of signal events, $N_{S}$, and their $\pm
1\sigma$ and $\pm 2\sigma$ bands of variation in the null hypothesis, obtained
from $N_{\rm exp}$ and $\delta N_{\rm exp}$ using the ${\rm CL_{S}}$ method
[17] for each $m_{0}^{2}$ value considered.
For the $K^{+}\to\mu^{+}\nu X$ decay in $m_{X}$ hypotheses of 320–370
MeV/$c^{2}$, the signal region is defined by $m_{0}^{2}=m_{X}^{2}$ (rounded up to
the nearest multiple of $0.02~{}{\rm GeV}^{2}/c^{4}$), avoiding a significant
loss of signal acceptance. For the $K^{+}\to\mu^{+}\nu X$ decay in $m_{X}$
hypotheses of 10–310 MeV/$c^{2}$, and for the $K^{+}\to\mu^{+}\nu\nu\bar{\nu}$
decay, the signal region is defined as $m_{0}^{2}=0.1~{}{\rm GeV}^{2}/c^{4}$.
The background composition for this $m_{0}^{2}$ value is reported in Table 1.
Optimal sensitivity is obtained in this case with a reduced signal acceptance.
In particular, the acceptance for the $K^{+}\to\mu^{+}\nu\nu\bar{\nu}$ decay
decreases from $A_{\mu\nu\nu\nu}^{0}=0.277$ to $A_{\mu\nu\nu\nu}=0.103$.
Figure 7: Left: expected background, its uncertainty, and expected $\pm
1\sigma$ and $\pm 2\sigma$ bands of the upper limit at 90% CL on the number of
signal events $N_{S}$ in the null hypothesis, for each lower squared missing
mass cut ($m_{0}^{2}$) considered to optimize the definition of the
$K^{+}\to\mu^{+}\nu X$ and $K^{+}\to\mu^{+}\nu\nu\bar{\nu}$ signal regions.
Observed numbers of events and upper limits of $N_{S}$ are shown for
$m_{0}^{2}$ values found to be optimal for certain $m_{X}$ hypotheses. Right:
upper limits of ${\cal B}(K^{+}\to\mu^{+}\nu X)$ obtained at 90% CL for each
$m_{X}$ hypothesis for the scalar and vector mediator models.
The observed numbers of events and upper limits of $N_{S}$ for the above set
of $m_{0}^{2}$ values are displayed in Fig. 7 (left). Upper limits of ${\cal
B}(K^{+}\to\mu^{+}\nu X)$ in the scalar and vector $X$ models as functions of
the assumed $m_{X}$, obtained from those of $N_{S}$ similarly to the HNL case,
are shown in Fig. 7 (right). The limits obtained in the scalar model are
stronger than those in the vector model due to the larger mean $m_{\rm miss}$
value.
In the search for the $K^{+}\to\mu^{+}\nu\nu\bar{\nu}$ decay, $N_{\rm
obs}=6894$ events are observed in the signal region $m_{\rm
miss}^{2}>0.1~{}{\rm GeV}^{2}/c^{4}$, with an expected background of $N_{\rm
exp}=7549\pm 928$ events. This leads to an observed (expected) upper limit at
90% CL of 1184 (1526) events for the number of signal events $N_{S}$. An upper
limit is established on the decay rate using the relation
$N_{S}=N_{K}\cdot{\cal B}(K^{+}\to\mu^{+}\nu\nu\bar{\nu})\cdot
A_{\mu\nu\nu\nu}$:
${\cal B}(K^{+}\to\mu^{+}\nu\nu\bar{\nu})<1.0\times 10^{-6}~{}~{}~{}{\rm
at~{}90\%~{}CL},$
improving by a factor of 2.4 on the most stringent previous limit obtained by
the BNL-E949 experiment [8]. Both this and the BNL-E949
$K^{+}\to\mu^{+}\nu\nu\bar{\nu}$ results are obtained assuming the SM
differential rate. However the reconstructed missing mass intervals analysed
are complementary: $m_{\rm miss}>316~{}{\rm MeV}/c^{2}$ in this study, and
$230<m_{\rm miss}<300~{}{\rm MeV}/c^{2}$ at BNL-E949.
## Summary
A search for HNL production in $K^{+}\to\mu^{+}N$ decays has been performed
using the data set collected by the NA62 experiment in 2016–2018. Upper limits
of the HNL mixing parameter $|U_{\mu 4}|^{2}$ are established at the level of
${\cal O}(10^{-8})$ over the HNL mass range of 200–384 MeV/$c^{2}$ with the
assumption of mean lifetime exceeding 50 ns, improving on the previous HNL
production searches. The first search for $K^{+}\to\mu^{+}\nu X$ decays has
been performed, where $X$ is a scalar or vector hidden sector mediator in the
mass range 10–370 MeV/$c^{2}$, which decays to an invisible final state. Upper
limits obtained at 90% CL on the decay branching fraction range from ${\cal
O}(10^{-5})$ for low $m_{X}$ values to ${\cal O}(10^{-7})$ for high $m_{X}$
values. An upper limit of $1.0\times 10^{-6}$ is obtained at 90% CL on the
branching fraction of the $K^{+}\to\mu^{+}\nu\nu\bar{\nu}$ decay, assuming the
SM differential decay rate, which improves on the earlier searches for this
process.
## Acknowledgements
It is a pleasure to express our appreciation to the staff of the CERN
laboratory and the technical staff of the participating laboratories and
universities for their efforts in the operation of the experiment and data
processing. We are grateful to Diego Redigolo and Kohsaku Tobioka for fruitful
discussions and for the inputs provided on the $K^{+}\to\mu^{+}\nu X$ decay
phenomenology.
The cost of the experiment and its auxiliary systems was supported by the
funding agencies of the Collaboration Institutes. We are particularly indebted
to: F.R.S.-FNRS (Fonds de la Recherche Scientifique - FNRS), Belgium; BMES
(Ministry of Education, Youth and Science), Bulgaria; NSERC (Natural Sciences
and Engineering Research Council), funding SAPPJ-2018-0017, Canada; NRC
(National Research Council) contribution to TRIUMF, Canada; MEYS (Ministry of
Education, Youth and Sports), Czech Republic; BMBF (Bundesministerium für
Bildung und Forschung) contracts 05H12UM5, 05H15UMCNA and 05H18UMCNA, Germany;
INFN (Istituto Nazionale di Fisica Nucleare), Italy; MIUR (Ministero
dell’Istruzione, dell’Università e della Ricerca), Italy; CONACyT (Consejo
Nacional de Ciencia y Tecnología), Mexico; IFA (Institute of Atomic Physics)
Romanian CERN-RO No.1/16.03.2016 and Nucleus Programme PN 19 06 01 04,
Romania; INR-RAS (Institute for Nuclear Research of the Russian Academy of
Sciences), Moscow, Russia; JINR (Joint Institute for Nuclear Research), Dubna,
Russia; NRC (National Research Center) “Kurchatov Institute” and MESRF
(Ministry of Education and Science of the Russian Federation), Russia; MESRS
(Ministry of Education, Science, Research and Sport), Slovakia; CERN (European
Organization for Nuclear Research), Switzerland; STFC (Science and Technology
Facilities Council), United Kingdom; NSF (National Science Foundation) Award
Numbers 1506088 and 1806430, U.S.A.; ERC (European Research Council)
“UniversaLepto” advanced grant 268062, “KaonLepton” starting grant 336581,
Europe.
Individuals have received support from: Charles University Research Center
(UNCE/SCI/ 013), Czech Republic; Ministry of Education, Universities and
Research (MIUR “Futuro in ricerca 2012” grant RBFR12JF2Z, Project GAP), Italy;
Russian Foundation for Basic Research (RFBR grants 18-32-00072, 18-32-00245),
Russia; Russian Science Foundation (RSF 19-72-10096), Russia; the Royal
Society (grants UF100308, UF0758946), United Kingdom; STFC (Rutherford
fellowships ST/J00412X/1, ST/M005798/1), United Kingdom; ERC (grants 268062,
336581 and starting grant 802836 “AxScale”); EU Horizon 2020 (Marie
Skłodowska-Curie grants 701386, 842407, 893101).
## References
* [1] J. Beacham et al., J. Phys. G47 (2020) 010501.
* [2] T. Asaka and M. Shaposhnikov, Phys. Lett. B620 (2005) 17.
* [3] R. Shrock, Phys. Lett. B96 (1980) 159; Phys. Rev. D24 (1981) 1232.
* [4] P.A. Zyla et al., Prog. Theor. Exp. Phys. 2020 083C01 (2020).
* [5] K. Bondarenko et al., JHEP 1811 (2018) 032.
* [6] G. Krnjaic et al., Phys. Rev. Lett. 124 (2020) 041802.
* [7] D. Gorbunov and A. Mitrofanov, JHEP 1610 (2016) 039.
* [8] A.V. Artamonov et al., Phys. Rev. D94 (2016) 032012.
* [9] E. Cortina Gil et al., JINST 12 (2017) P05025.
* [10] R. Ammendola et al., Nucl. Instrum. Methods A929 (2019) 1.
* [11] E. Cortina Gil et al., JHEP 2011 (2020) 042.
* [12] J. Allison et al., Nucl. Instrum. Methods A835 (2016) 186.
* [13] J. Bijnens, G. Ecker and J. Gasser, Nucl. Phys. B396 (1993) 81.
* [14] E. Cortina Gil et al., arXiv:2010.07644, submitted to JHEP.
* [15] E. Cortina Gil et al., Phys. Lett. B778 (2018) 137.
* [16] E. Cortina Gil et al., Phys. Lett. B807 (2020) 135599.
* [17] A.L. Read, J. Phys. G28 (2002) 2693.
* [18] A. Artamonov et al., Phys. Rev. D91 (2015) 052001.
* [19] A.S. Sadovsky et al., Eur. Phys. J. C78 (2018) 92.
* [20] T. Yamazaki et al., Conf. Proc. C840719 (1984) 262.
* [21] A.D. Dolgov et al., Nucl. Phys. B590 (2000) 562.
* [22] D. Britton et al., Phys. Rev. D46 (1992) R885.
* [23] A. Aguilar-Arevalo et al., Phys. Rev. D97 (2018) 072012.
* [24] A. Aguilar-Arevalo et al., Phys. Lett. B798 (2019) 134980.
* [25] G. Bernardi et al., Phys. Lett. B203 (1988) 332.
* [26] K. Abe et al., Phys. Rev. D100 (2019) 052006.
# New Formulations of Ambiguous Volatility with an Application to Optimal Dynamic Contracting

I acknowledge helpful comments and suggestions from Anne Balter, Hui Chen, Sharada Dharmasankar, Leonid Kogan, Andrey Malenko, Jianjun Miao, Jian Sun, Xiangyu Zhang, three anonymous referees, and participants at the MIT Finance lunch seminar and the Becker Friedman Institute mini-conference on Ambiguity and Robustness. I am especially grateful to Lars Hansen, Andrey Malenko, and Tom Sargent (editor) for their support and feedback, which greatly improved the paper.
Peter G. Hansen
Sloan School of Management, Massachusetts Institute of Technology, 100 Main St, Cambridge, MA 02142. Email: <EMAIL_ADDRESS>
###### Abstract
I introduce novel preference formulations which capture aversion to ambiguity
about unknown and potentially time-varying volatility. I compare these
preferences with Gilboa and Schmeidler’s maxmin expected utility as well as
variational formulations of ambiguity aversion. The impact of ambiguity
aversion is illustrated in a simple static model of portfolio choice, as well
as a dynamic model of optimal contracting under repeated moral hazard.
Implications for investor beliefs, optimal design of corporate securities, and
asset pricing are explored.
JEL Classification: D81, D86, G11, G12, G32
Keywords: ambiguity, stochastic volatility, moral hazard, capital structure,
asset pricing
## 1 Introduction
There is ample evidence that time-varying stochastic volatility exists, has important effects on real macroeconomic variables, and is central to understanding empirical features of financial markets. The empirical evidence
suggests that volatility follows complicated nonlinear dynamics, which often
leads model builders to write down complicated parametric models of the
evolution of volatility as well as its correlation with other economic
quantities of interest. An obvious concern with this approach is whether it is
possible for economic agents to learn or estimate these models precisely, or
through repeated observation develop confidence that a particular parametric
model is correct. While one may argue that this concern is unwarranted in financial markets, where high-frequency observations make volatility effectively observable (even in high-frequency settings, however, direct statistical measurement of volatility may be significantly confounded by microstructure and liquidity effects; see for example Zhang et al. (2005)), there is no convincing reason to dismiss these concerns as they pertain to real variables, which are often observed at low frequencies.
Motivated by these concerns, I propose new preferences that capture
nonparametric model uncertainty about an unknown, possibly time-varying
volatility process. These preference formulations build on existing models of
ambiguity aversion, notably being a special case of the variational
preferences proposed by Maccheroni et al. (2006a); see Maccheroni et al. (2006b) for axiomatic approaches to dynamic variational preferences. These preferences, which I call _moment-constrained variational preferences_, are
first formulated in a static setting of decision-making under uncertainty. I
illustrate the impact of these static preferences in a simple portfolio choice
problem, where I show how the degree of ambiguity aversion affects both the
implied worst-case beliefs of the investor and the optimal portfolio weight.
Then, I explore dynamic counterparts to these preferences and derive a
continuous-time limit in which the decision-maker is uncertain about an
unknown, time-varying volatility process. The impact of these preferences is
then illustrated in a model of repeated moral hazard based on papers by
DeMarzo and Sannikov (2006) and Biais et al. (2007). Using this model, I
explore the implications of ambiguity aversion for optimal security design and
asset prices, and compare and contrast the implications of dynamic variational
preferences with those of $G$-expectations.
Ambiguity aversion leads the principal to design a contract that is robustly
optimal given uncertainty about the volatility process. Under the optimal
contract, belief heterogeneity emerges between the principal and the agent.
The agent trusts the benchmark volatility model, whereas the principal forms
expectations as if volatility is strictly higher and state-dependent. As in
DeMarzo and Sannikov (2006), the optimal contract can be interpreted as
featuring a line of credit between the principal and the agent. I show how
ambiguity aversion increases the optimal credit limit, while reducing the
reliance on long-term debt. This is important since credit lines are a
commonly used corporate security. Additionally, I derive asset pricing
implications of volatility ambiguity under the optimal contract.
### 1.1 Related literature
This paper builds on a large literature on ambiguity aversion and model
uncertainty in the context of economic decisions. The preference formulations
I introduce fit within the framework of variational preferences introduced by
Maccheroni et al. (2006a) which nests the maxmin expected utility of Gilboa
and Schmeidler (1989) and the “multiplier” formulation of ambiguity due to
Hansen and Sargent (2001). Dynamic models of ambiguity and robustness can be
broadly thought of as belonging to one of three categories, namely the
“recursive multiple priors” model proposed by Epstein and Schneider (2003),
the “recursive smooth ambiguity” model proposed by Klibanoff et al. (2009),
and the “dynamic variational preferences” model described in Maccheroni et al.
(2006b) and Hansen and Sargent (2018) as a generalization of the “multiplier
preferences” introduced by Hansen and Sargent (2001). My paper adds to this
literature by proposing a new form of preferences that captures ambiguity or
uncertainty about volatility in continuous time. To my knowledge, the only
other model of volatility ambiguity is the “$G$-expectations” model of Peng
(2007), which can be interpreted as a recursive multiple priors model. The
continuous-time preferences developed in this paper can be thought of as a
particular continuous-time limit of the discrete-time preferences of
Maccheroni et al. (2006b), which nest the $G$-expectations model.
Models of financial contracting typically assume that all economic actors
fully understand the model environment, and that such understanding is common
knowledge. This is similar to (but stronger than) an assumption of rational
expectations, and has been criticized as overly restrictive in models with
strategic interaction by Harsanyi (1967), Wilson (1987), Bergemann and Morris
(2005), Woodford (2010), Hansen and Sargent (2012), and others. This paper
attempts to relax the assumption that economic actors fully understand their
model environment and study the corresponding effect on financial contracting.
In particular, I study a long-term contracting problem where economic actors
have ambiguous beliefs about the possibly time-varying volatility of future
cash flows.
My paper builds on the large literature studying models of long-term financial
contracting. DeMarzo and Fishman (2007), DeMarzo and Sannikov (2006), and
Biais et al. (2007) show that in stationary environments with risk-neutral
economic agents, the optimal long-term financial contract can be implemented
by an interpretable capital structure. I build on these papers by introducing
uncertainty about the volatility of the cash flow process and study how this
affects the optimal contract. As with many of these papers, I rely on the
martingale approach to dynamic contracting problems developed by Sannikov
(2008) and Williams (2008).
Particularly relevant are papers that take robust approaches to incentive
problems, such as Bergemann and Morris (2005), Carroll (2015), Zhu (2016), and
Malenko and Tsoy (2018). The closest paper to this one is Miao and Rivera
(2016) who characterize the optimal contract in continuous time when the
principal faces ambiguity about expected cash flows. As I will demonstrate, my
model produces substantially different optimal security design yet has
qualitatively similar asset pricing implications. Szydlowski (2012) and Prat
and Jovanovic (2014) study related problems where the principal is uncertain
about the details of the agency problem. Adrian and Westerfield (2009)
characterize optimal contracting when the principal and the agent disagree
about the underlying dynamics of the cash flow process and both learn through
time. By focusing on uncertainty about second moments, my paper is similar in
spirit to Wolitzky (2016) who studies a static mechanism design problem.
### 1.2 Outline
The outline of this paper is as follows. Section 2 defines _moment-constrained
variational preferences_ in the context of static decision problems and then
illustrates the impact of these preferences in a static model of portfolio
choice under quadratic utility. Section 3 extends these preferences to dynamic
problems in a discrete-time setting and explores an interesting continuous-
time limit under which the more general ambiguity about the probability
distribution of state evolution reduces to ambiguity about an unknown
stochastic volatility process. I compare the continuous-time limit to the
$G$-expectations model of Peng (2007). Section 4 applies the continuous-time
preferences to a model of optimal contracting under repeated moral hazard and
illustrates the implications of ambiguous volatility for security design and
asset pricing. Section 5 concludes.
## 2 Static preferences
Let $P$ denote the decision-maker’s benchmark probability measure, and let
$\mathbb{E}_{P}[\cdot]$ denote the expectation operator under $P$. Let
$a\in\mathcal{A}$ denote the action of the decision-maker and let $\epsilon$
denote a vector of payoff-relevant shocks. I would like to capture the notion
that the decision-maker is uncertain about the entire distribution $P$, but is
certain about particular moments or functionals of $P$. More
precisely, let $g(\cdot)$ be any function such that
$\mathbb{E}_{P}[g(\epsilon)]=0$. The decision-maker allows for absolutely
continuous probabilistic distortions of $P$, which I parameterize by
likelihood ratio random variables $M$ which satisfy $M\geq 0$ with
$P$-probability 1 and $\mathbb{E}_{P}[M]=1$. The decision-maker’s certainty
about the random variable $g(\epsilon)$ being mean zero is captured by
restricting the set of likelihood ratios to those that satisfy
$\mathbb{E}_{P}[Mg(\epsilon)]=0$.
Define investor utility $V(a;P,\theta)$ as
$V(a;P,\theta)=\underset{M\geq 0,\\\
\mathbb{E}_{P}[M]=1}{\inf}\mathbb{E}_{P}[MU(a,\epsilon)]+\theta\Phi(M)$ (1)
where
$\Phi(M)=\begin{cases}\mathbb{E}_{P}[c(M)]&\text{ if
}\mathbb{E}_{P}[Mg(\epsilon)]=0\\\ \infty&\text{otherwise}\end{cases}$
where $c(\cdot)$ is a convex function with $c(1)=0$.555Note that the
penalization function $\Phi(\cdot)$ is equivalent to an $f$-divergence on the
set of $M$’s which satisfy the moment restriction. If the infimum in equation
(1) is attained, I refer to the $M$ that attains it as the _worst-case_ belief
distortion. Note that the penalty function is infinite if the moment
restriction $\mathbb{E}_{P}\left[Mg(\epsilon)\right]=0$ is violated. This
captures the notion that the investor is certain about this aspect of the
distribution, because the worst-case belief distortion $M$ will never violate
the moment restriction. Since the penalty function $\Phi(\cdot)$ is convex in
$M$, these preferences fit into the variational preference framework
axiomatized by Maccheroni et al. (2006a). I therefore refer to preferences
defined by (1) as moment-constrained variational preferences.
The decision-maker’s problem can then be expressed concisely as
$\max_{a\in\mathcal{A}}V(a;P,\theta).$
### 2.1 Relative entropy penalization
A particularly tractable choice of divergence function $c(\cdot)$ is given by
$c(M)=M\log M$.666This corresponds to an $f$-divergence known as relative
entropy or Kullback-Leibler divergence, and has numerous statistical
interpretations. Certain members of the larger family of power divergences
proposed by Cressie and Read (1984) including Hellinger and logarithmic
divergences, as well as the related notion of Chernoff entropy, can lead to
degenerate solutions to (1). See Chen et al. (2021) for further discussion.
This divergence gives particularly tractable expressions for the worst-case
likelihood ratio $M^{*}$. In particular, by standard convex duality arguments,
it is possible to show that $M^{*}$ has the following exponential tilting form
$M^{*}(a,\theta)=\frac{\exp\left(-\theta^{-1}U(a,\epsilon)-\lambda(a,\theta)g(\epsilon)\right)}{\mathbb{E}_{P}\left[\exp\left(-\theta^{-1}U(a,\epsilon)-\lambda(a,\theta)g(\epsilon)\right)\right]}$
(2)
where the Lagrange multiplier $\lambda(a,\theta)$ is chosen to satisfy the
moment restriction
$\mathbb{E}_{P}\left[M^{*}(a,\theta)g(\epsilon)\right]=0.$
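The exponential-tilting construction above is easy to verify numerically. The sketch below (my own illustration, not from the paper) discretizes a standard normal shock, takes a quadratic utility with illustrative values $a=0.3$ and $\theta=2$, sets $g(\epsilon)=\epsilon$, and solves for the Lagrange multiplier $\lambda$ so that the moment restriction holds:

```python
import numpy as np
from scipy.optimize import brentq

# Discrete approximation of a standard normal shock epsilon
eps = np.linspace(-3.0, 3.0, 61)
p = np.exp(-eps**2 / 2)
p /= p.sum()

a, theta = 0.3, 2.0            # illustrative action and penalty parameter
U = -0.5 * (a + eps)**2        # a quadratic utility, state by state
g = eps                        # g(eps) = eps is mean zero under P

def likelihood_ratio(lam):
    """Exponential tilt exp(-U/theta - lam*g), normalized so E_P[M] = 1."""
    w = p * np.exp(-U / theta - lam * g)
    return (w / w.sum()) / p

# Choose the Lagrange multiplier so the restriction E_P[M g(eps)] = 0 holds
lam_star = brentq(lambda lam: (p * likelihood_ratio(lam) * g).sum(), -10.0, 10.0)
M = likelihood_ratio(lam_star)

print((p * M * g).sum())                            # ~0: moment restriction holds
print((p * M * eps**2).sum(), (p * eps**2).sum())   # tilted dispersion exceeds benchmark
```

The resulting worst-case $M$ leaves the mean of $\epsilon$ at zero but raises its dispersion, matching the intuition that the constrained tilt overweights states far from the mean.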
It is worth comparing the expression for the likelihood ratio in equation (2)
to the corresponding expression with the constraint
$\mathbb{E}_{P}[Mg(\epsilon)]=0$ omitted. This would correspond to
“multiplier” preferences. In this case, the corresponding worst-case
likelihood ratio would be given by
$M^{*}(a,\theta)=\frac{\exp(-\theta^{-1}U(a,\epsilon))}{\mathbb{E}_{P}\left[\exp(-\theta^{-1}U(a,\epsilon))\right]}.$
(3)
In equation (3), we see that the worst-case likelihood ratio distorts
probabilities towards states in which the decision-maker’s utility is low.
Such a distortion will generally violate the moment condition. By contrast,
the likelihood ratio in equation (2) must distort probabilities in such a way
that the moment restriction remains satisfied. In particular, for concave
utility functions, this intuitively corresponds to a likelihood ratio that
increases the dispersion of $\epsilon$ by overweighting states in which
$\epsilon$ takes values that are far from its mean. This will be seen in the
example presented in the next section.
### 2.2 Example: Portfolio choice under quadratic utility
I illustrate the impact of these preferences in a static portfolio choice
problem. Consider an investor with quadratic utility
$U\left(\widetilde{W}\right)=-\frac{1}{2}\left(\widetilde{W}-b\right)^{2}$
over period-1 wealth $\widetilde{W}$ where $b$ is the investor’s bliss-point
wealth level. The investor has initial wealth $W_{0}$, which can be invested at
a gross risk-free rate $R_{f}$ or in a vector of risky assets with excess
return vector $\widetilde{R}$.
While quadratic utility has the well-known and arguably undesirable feature
that the investor’s utility can be decreasing in wealth, I include this
example for two reasons. First, it leads to quasi-analytic solutions which
facilitate intuition. Second, in continuous-time diffusion limits, local
quadratic approximations become exact. Therefore, much of the intuition
obtained from the static linear-quadratic Gaussian model examined in this
section will carry over to more general continuous-time diffusion models
studied later.
#### 2.2.1 Benchmark: No ambiguity
Under the investor’s subjective probability measure $P$, the vector of excess
returns $\widetilde{R}$ is distributed as
$\widetilde{R}\sim\text{Normal}(\mu,\Sigma)$. Formally, the
investor’s portfolio optimization problem can be written as
$\displaystyle\max_{\phi\in\mathbb{R}^{k}}\ $
$\displaystyle\mathbb{E}_{P}\left[U\left(\widetilde{W}\right)\right]$ s.t.
$\displaystyle\widetilde{W}=W_{0}R_{f}+\phi^{\prime}\widetilde{R}.$
One can verify that the optimal portfolio weight vector $\phi^{*}$ is given by
$\phi^{*}=\left[\mu\mu^{\prime}+\Sigma\right]^{-1}\mu\left(b-W_{0}R_{f}\right).$
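As a quick sanity check (a sketch with arbitrary illustrative parameters, not values from the paper), one can verify that this $\phi^{*}$ solves the first-order condition $-\left[(W_{0}R_{f}-b)\mu+(\mu\mu^{\prime}+\Sigma)\phi\right]=0$ of the quadratic objective:

```python
import numpy as np

rng = np.random.default_rng(0)
k = 3
mu = np.array([0.05, 0.03, 0.08])      # illustrative mean excess returns
A = rng.normal(size=(k, k))
Sigma = A @ A.T + 0.1 * np.eye(k)      # arbitrary positive-definite covariance
b, W0, Rf = 1.0, 0.5, 1.02             # illustrative bliss point, wealth, rate

# Candidate optimum from the closed-form expression
phi = np.linalg.solve(np.outer(mu, mu) + Sigma, mu) * (b - W0 * Rf)

# Gradient of E[U] = -0.5*E[(W0*Rf + phi'R - b)^2] with respect to phi:
# -((W0*Rf - b)*mu + (mu mu' + Sigma) phi), which should vanish at phi
grad = -((W0 * Rf - b) * mu + (np.outer(mu, mu) + Sigma) @ phi)
print(np.abs(grad).max())   # ~0
```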
#### 2.2.2 Portfolio choice with ambiguity
Consider the same portfolio choice problem as before, but now the investor
treats their subjective probability measure $P$ under which
$\widetilde{R}\sim\text{Normal}(\mu,\Sigma)$ as an approximation. They are
willing to entertain other probability measures as possible, but treat the
expected return vector $\mathbb{E}[\widetilde{R}]=\mu$ as certain. Then we can
model the investor’s preferences as
$V(\phi;P,\theta)=\inf_{M\geq
0,\mathbb{E}_{P}[M]=1}\mathbb{E}_{P}\left[MU\left(\widetilde{W}\right)\right]+\theta\Phi(M)$
where
$\Phi(M)=\begin{cases}\mathbb{E}_{P}\left[M\log M\right]&\text{ if
}\mathbb{E}_{P}\left[M\left(\widetilde{R}-\mu\right)\right]=0\\\ \infty&\text{
otherwise.}\end{cases}$
The investor’s portfolio choice problem is then
$\max_{\phi\in\mathbb{R}^{k}}\ V(\phi;P,\theta).$
As in the expected utility case, the investor’s problem can be solved in
closed form. First, for fixed $\phi\in\mathbb{R}^{k}$ we can solve the
infimization problem for $V(\phi;P,\theta)$. Using standard duality results,
it can be shown that the worst-case $M$ is unique and given by
$M\left(\widetilde{R};\phi\right)=\frac{\exp\left(-\frac{1}{2\theta}\left(\widetilde{R}-\mu\right)^{\prime}\left[-\phi\phi^{\prime}\right]\left(\widetilde{R}-\mu\right)\right)}{\mathbb{E}_{P}\left[\exp\left(-\frac{1}{2\theta}\left(\widetilde{R}-\mu\right)^{\prime}\left[-\phi\phi^{\prime}\right]\left(\widetilde{R}-\mu\right)\right)\right]}$
for choices of $\phi$ for which the objective function is finite.777This need
not be the case. For some choices of $\phi$, the adversarial nature’s choice
of $M$ can make the investor’s utility arbitrarily negative. This shows that
for any $\phi$ the resulting penalized worst-case probability distribution is
also a multivariate normal distribution. As expected, we have
$\mathbb{E}_{P}\left[M\left(\widetilde{R};\phi\right)\right]=1$ and
$\mathbb{E}_{P}\left[M\left(\widetilde{R};\phi\right)\widetilde{R}\right]=\mu$,
so there is no mean distortion. Additionally, we can see that the worst-case
$M$ distorts the variance-covariance matrix of the returns from $\Sigma$ to
$\left[\Sigma^{-1}-\theta^{-1}\phi\phi^{\prime}\right]^{-1}$.
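The variance distortion can be checked directly in the scalar case: tilting a Normal$(\mu,\sigma^{2})$ density by the worst-case $M$ above yields variance $\left[\sigma^{-2}-\theta^{-1}\phi^{2}\right]^{-1}$. A minimal quadrature sketch with illustrative values (assuming $\phi^{2}/\theta<\sigma^{-2}$ so the tilted density is integrable):

```python
import numpy as np

mu, sigma2, phi, theta = 0.1, 0.5, 0.6, 1.0   # illustrative values
assert phi**2 / theta < 1.0 / sigma2          # tilt must keep the density integrable

r = np.linspace(mu - 20.0, mu + 20.0, 400_001)
dr = r[1] - r[0]
# Benchmark Normal(mu, sigma2) density times the worst-case tilt
# exp(+(phi^2/(2 theta))(r - mu)^2), with the exponents combined for stability
q = np.exp(-0.5 * (1.0 / sigma2 - phi**2 / theta) * (r - mu)**2)
q /= (q * dr).sum()                            # normalized tilted density

var_tilted = (q * (r - mu)**2 * dr).sum()
var_predicted = 1.0 / (1.0 / sigma2 - phi**2 / theta)
print(var_tilted, var_predicted)               # should agree
```

Note that the tilted mean stays at $\mu$, consistent with the moment restriction.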
Due to the regularity of the problem, it can be shown that the orders of
minimization and maximization can be exchanged at the optimal $M^{*}$ and
$\phi^{*}$. Exchanging the order of maximization and minimization, we can
solve for the optimal $\phi$ as a function of $M$. We obtain easily that
$\phi(M)=\mathbb{E}\left[M\widetilde{R}\widetilde{R}^{\prime}\right]^{-1}\mu(b-W_{0}R_{f}).$
Next observe that the optimal $M^{*}$ must satisfy
$M^{*}=M\left(\widetilde{R};\phi(M^{*})\right).$
Write $S=\mathbb{E}\left[M^{*}\widetilde{R}\widetilde{R}^{\prime}\right]$. By
our previous observation, we must have that $S$ is a positive-definite
solution of the equation
$S-\mu\mu^{\prime}=\left[\Sigma^{-1}-\theta^{-1}(b-W_{0}R_{f})^{2}S^{-1}\mu\mu^{\prime}S^{-1}\right]^{-1},$
where this equation can be obtained by simply writing the penalized worst-case
distorted variance of $\widetilde{R}$ in two ways. Unfortunately, this
equation is a third-order matrix polynomial in $S$ so we cannot easily express
$S$ in closed form. Nonetheless, we know that under the penalized worst-case
distribution of $\widetilde{R}$ we have
$\widetilde{R}\sim\text{Normal}\left(\mu,S-\mu\mu^{\prime}\right)$
and that
$\phi^{*}(\theta)=S^{-1}\mu(b-W_{0}R_{f}).$
#### 2.2.3 Scalar risky asset
To facilitate intuition further, I focus on the case where the risky return
$\tilde{R}$ is a scalar random variable, so that the portfolio weight $\phi$
is also a scalar. Assume that $\tilde{R}$ has mean $\mu$ and benchmark
variance $\sigma^{2}$. Under expected utility, the optimal portfolio weight is
$\phi^{*}=\frac{1}{\mu^{2}+\sigma^{2}}\mu(b-W_{0}R_{f}).$
To characterize the solution under ambiguity aversion, it is helpful to define
the scalar $s(\theta)=\mathbb{E}[M^{*}\widetilde{R}^{2}]$. Note that the
optimal portfolio weight $\phi^{*}(\theta)$ can be written in terms of
$s(\theta)$ as
$\phi^{*}(\theta)=\frac{1}{s}\mu(b-W_{0}R_{f}).$
By the arguments in the previous section, we can write $s(\theta)$ as the
solution of the minimization problem of the adversarial nature, which
simplifies to
$\min_{s}-\frac{1}{2}(b-W_{0}R_{f})^{2}+\frac{1}{2}\mu^{2}(b-W_{0}R_{f})^{2}\frac{1}{s}+\frac{\theta}{2}\left[\frac{s-\mu^{2}}{\sigma^{2}}-1-\log\left(\frac{s-\mu^{2}}{\sigma^{2}}\right)\right].$
(4)
Note that the objective function in (4) is strictly convex in $s$, so
$s(\theta)$ is uniquely defined. Additionally, it is easy to see that for
$\theta>0$, we must have $s(\theta)>\mu^{2}+\sigma^{2}$. It follows
immediately that $\phi^{*}(\theta)\in(0,\phi^{*})$.
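These properties are straightforward to confirm numerically. The sketch below solves the minimization (4) with the parameter values used in figure 1 ($b=1$, $W_{0}=0$, $\mu=0.1$, $\sigma^{2}=0.5$) and traces $\phi^{*}(\theta)$ over a grid of $\theta$:

```python
import numpy as np
from scipy.optimize import minimize_scalar

b, W0, Rf, mu, sigma2 = 1.0, 0.0, 1.0, 0.1, 0.5   # figure-1 parameter values
gap = b - W0 * Rf

def s_of_theta(theta):
    # Objective (4), dropping the constant first term
    obj = lambda s: (0.5 * mu**2 * gap**2 / s
                     + 0.5 * theta * ((s - mu**2) / sigma2 - 1.0
                                      - np.log((s - mu**2) / sigma2)))
    return minimize_scalar(obj, bounds=(mu**2 + 1e-6, 1e4), method="bounded",
                           options={"xatol": 1e-10}).x

phi_unambiguous = mu * gap / (mu**2 + sigma2)     # expected-utility benchmark
thetas = [0.01, 0.1, 1.0, 10.0, 1000.0]
phis = [mu * gap / s_of_theta(t) for t in thetas]
print(phis)   # strictly increasing in theta, bounded above by phi_unambiguous
```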
While it is possible to obtain closed-form expressions for $s(\theta)$ and
correspondingly $\phi^{*}(\theta)$ as roots of the cubic polynomial under
appropriate inequality restrictions, these expressions are complicated and
convey little intuition. Instead I characterize the comparative statics of
these quantities in propositions 2.1 and 2.2 below. The solutions $s(\theta)$
and $\phi^{*}(\theta)$ are characterized more explicitly in the appendix in
terms of the unique positive root of a particular cubic polynomial.
###### Proposition 2.1.
Assume that $b>W_{0}R_{f}$ and $\mu>0$. Then the optimal portfolio weight
$\phi^{*}(\theta)$ has the following properties:
1. (i)
$\phi^{*}(\theta)>0$.
2. (ii)
$\phi^{*}(\theta)$ is strictly increasing in $\theta$.
3. (iii)
As $\theta\to\infty$ we have $\phi^{*}(\theta)\to\phi^{*}$.
4. (iv)
As $\theta\to 0$ we have $\phi^{*}(\theta)\to 0$.
Figure 1: $\phi^{*}(\theta)$ as a function of $\theta$ plotted in red.
Horizontal line at $\phi^{*}$ shown in dashed blue. Parameter values are
$b=1$, $W_{0}=0$, $\mu=0.1$, $\sigma^{2}=0.5$. Note that $\phi^{*}(0)=0$.
The results of proposition 2.1 are illustrated in figure 1. Result $(i)$ shows
that the investor will always invest a strictly positive amount of their
wealth in the risky asset. This makes intuitive sense because $\mu>0$. Result
$(ii)$ shows that the investor’s portfolio weight on the risky asset is
increasing in their model confidence $\theta$ or equivalently
decreasing in their ambiguity aversion $1/\theta$. Result $(iii)$ shows that
as the investor’s model confidence becomes infinite, their optimal portfolio
weight converges to the optimal portfolio weight without ambiguity aversion.
Result $(iv)$ shows that as the investor becomes infinitely ambiguity averse,
they will invest none of their wealth in the risky asset. This is because they
perceive the risky asset as becoming infinitely risky.
Define $\nu^{2}(\theta)$ as the ratio of the worst-case variance to the
benchmark variance; formally,
$\nu^{2}(\theta)\equiv\frac{s(\theta)-\mu^{2}}{\sigma^{2}}=\frac{\mathbb{E}[M^{*}(\theta)\widetilde{R}^{2}]-\mu^{2}}{\mathbb{E}[\widetilde{R}^{2}]-\mu^{2}}$
Then we have the following continuity result:
###### Proposition 2.2.
Under the conditions of proposition 2.1 we have the following limits:
1. (i)
As $\theta\to\infty$ we have $\nu^{2}(\theta)\to 1$.
2. (ii)
As $\theta\to 0$ we have $\nu^{2}(\theta)\to\infty$.
Figure 2: Worst-case volatility ratio $\nu^{2}(\theta)$ as a function of
$\theta$.
The results of proposition 2.2 can be seen in figure 2. This result shows that
the investor’s implied beliefs converge to the benchmark model as their model
confidence becomes infinite. Together with result $(iii)$ of the previous
proposition, this formally establishes the subjective expected utility model
without ambiguity aversion as a limit of the problem with ambiguity aversion.
#### 2.2.4 Comparison with “unconstrained” or “multiplier” preferences
It is natural to compare the solution to the portfolio choice problem in the
previous section to the corresponding “unconstrained” problem where we ignore
the mean restriction $\mathbb{E}_{P}[M(\widetilde{R}-\mu)]=0$. This
corresponds to the portfolio choice of an ambiguity-averse investor with
quadratic utility and “multiplier” preferences.
Section A.2 of the appendix describes the solution to the portfolio choice
problem in the previous section ignoring the mean restriction. The
unrestricted problem has similar features to the restricted problem. In
particular, under the implied worst-case $M$, the return on wealth is still
normally distributed. However, both the mean and variance are distorted and
will depend on the penalty parameter $\theta$. For regions of the penalty
parameter $\theta$ where a solution exists, the distorted mean
$\mu_{u}(\theta)$ is increasing in $\theta$, while the distorted variance
$\sigma_{u}^{2}(\theta)$ is decreasing in $\theta$. The optimal portfolio
weight on the risky asset is
$\phi_{u}(\theta)=\frac{1}{\mu_{u}(\theta)^{2}+\sigma_{u}^{2}(\theta)}\mu_{u}(\theta)(b-W_{0}R_{f}).$
Results analogous to proposition 2.1 hold for $\phi_{u}(\theta)$, albeit for
slightly different reasons. Under moment-constrained preferences, increasing $\theta$
decreases the implied variance of the return under the worst-case
distribution. In the unconstrained problem, both the mean and variance change
with $\theta$.
## 3 Dynamic preferences
Next, I present a dynamic extension to the static preferences defined by
equation (1). Consider a $J+1$ period discrete-time setting. Denote the time
between periods by $\Delta$ so time $t$ is discrete and satisfies
$t\in\\{0,...,J\Delta\\}$. I assume that the decision-maker’s utility in
period $t+1$ depends only on the time-$t$ and $t+1$ realizations of a
stochastic process $\\{X_{t}\\}_{t\in\\{0,...,J\Delta\\}}$. For simplicity, I
abstract from modelling how the decision-maker’s choices affect $X$ and simply
define utility taking $X$ to be an exogenous, time homogeneous Markov process
with Gaussian increments under the benchmark probability measure $P$, i.e.
$X_{t+\Delta}-X_{t}=\mu(X_{t})\Delta+\sigma(X_{t})\epsilon_{t+\Delta}$ (5)
where $\epsilon_{j\Delta}\overset{iid}{\sim}\text{Normal}(0,\Delta)$ under $P$
for all $j\in\\{1,...,J\\}$. As in the previous section, we consider
absolutely continuous changes of measure parameterized by positive random
variables $M$ with unit expectation. For each $M$, define
$M_{t}=\mathbb{E}_{t}\left[M\right]$
so that $M_{t}$ is a positive martingale relative to the filtration generated
by the process $\\{X_{t}\\}_{t=0}^{T}$. Additionally, define
$M_{t,t+\Delta}=\frac{M_{t+\Delta}}{M_{t}}.$
Note that $M_{t,t+\Delta}$ is a positive random variable with time-$t$
conditional expectation 1. Observe that $M_{t,t+\Delta}$ can be interpreted as
a conditional change-of-measure or conditional likelihood ratio.
While the decision maker does not have full confidence in $P$, she is certain
about specific conditional moments of the data-generating process. Analogous
to the moment restriction in the static model, the decision maker will only
consider models generated by martingale distortions $M$ which satisfy the
moment restriction
$\mathbb{E}_{t}\left[M_{t,t+\Delta}\left(X_{t+\Delta}-X_{t}-\mu(X_{t})\Delta\right)\right]=0,\
\forall t.$ (6)
Equation (6) captures the decision-maker’s certainty about the expected one-
period change in the process $X_{t}$. Thus, the decision-maker’s uncertainty
is limited to uncertainty about higher moments of the distribution, such as
second moments or variances. Note that equation (6) is satisfied by the
constant random variable $M=1$.
Write the time-$0$ utility of the decision-maker as
$V_{0}(X_{0};P,\theta)=\inf_{M\geq
0,\mathbb{E}[M]=1}\mathbb{E}_{0}\left[\sum_{j=1}^{J}e^{-\rho
j\Delta}M_{(j+1)\Delta}u(X_{j\Delta},X_{(j+1)\Delta})\Delta+c(M)\right]$ (7)
where
$c(M)=\begin{cases}\theta(\Delta)\sum_{j=1}^{J}e^{-\rho
j\Delta}M_{(j+1)\Delta}\log(M_{j\Delta,(j+1)\Delta})&\text{ if }(6)\text{
holds for all }j\in\\{1,...,J\\}\\\ \infty&\text{ otherwise }\end{cases}$
It can easily be shown that the ambiguity index $c(\cdot)$ is convex in $M$.
Therefore, this preference specification fits into the very general framework
of dynamic variational preferences proposed and axiomatized by Maccheroni et
al. (2006b). The particular form of the ambiguity index considered here is the
same as the _discounted relative entropy_ penalty used in the multiplier
preferences of Hansen and Sargent (2001) and followup papers but with an added
sequence of conditional moment restrictions on $M$.
Chen et al. (2020) study a mathematically similar problem that arises when an
econometrician is interested in bounding the implied subjective expectations of
economic agents subject to a vector of conditional moment conditions, meant to
convey partial information about asset prices or survey data on subjective
expectations. In particular, their problem can be thought of as an ergodic
control problem that arises in the limit as $\rho\to 0$.
### 3.1 Continuous-time limit
Next, I give a heuristic derivation of a continuous-time limit of (7). The
corresponding limit will be used as preferences in subsequent sections where I
consider models of repeated moral hazard. In these settings, the use of
continuous-time diffusion models greatly improves tractability of the optimal
contracting problems.
We will take the limit of the discrete-time problem in the previous section as
$\Delta\to 0$. Note that the discrete-time process $X_{t}$ will converge in
law to a continuous-time diffusion process with evolution equation
$dX_{t}=\mu(X_{t})dt+\sigma(X_{t})dZ_{t}$
where $Z_{t}$ is a standard Brownian motion.
Let $V_{t}(X_{t})$ denote the time-$t$ continuation value function of the
decision-maker in the discrete-time problem. Note that we have the following
Bellman-type equation for $V_{t}$,
$\displaystyle V_{t}(X_{t})=$ $\displaystyle\inf_{m\geq
0,\mathbb{E}_{t}[m]=1}\mathbb{E}_{t}\left[m\,u(X_{t},X_{t+\Delta})\Delta+\theta(\Delta)m\log
m+e^{-\rho\Delta}V_{t+\Delta}(X_{t+\Delta})\right]$ $\displaystyle\text{
subject to
}\mathbb{E}_{t}[m\left(X_{t+\Delta}-X_{t}-\mu(X_{t})\Delta\right)]=0$
where the choice variable $m$ is the conditional one-period likelihood ratio
chosen by the adversarial nature. Under mild regularity conditions, the
functions $u(\cdot,\cdot)$ and $V_{t}(\cdot)$ will be twice continuously
differentiable. Therefore, we can approximate them as local quadratic
functions in the increment $X_{t+\Delta}-X_{t}$. Under this approximation, we
then have that the worst-case one-period likelihood ratio $m$ will have the
form
$m^{*}=m(\epsilon_{t+\Delta},X_{t})=\frac{\exp\left(-\frac{1}{2}(\epsilon_{t+\Delta})^{2}\omega(X_{t})\right)}{\mathbb{E}_{t}\left[\exp\left(-\frac{1}{2}(\epsilon_{t+\Delta})^{2}\omega(X_{t})\right)\right]}$
where $\omega(\cdot)$ is a function of $X_{t}$ that depends implicitly on
$\Delta,\theta,\rho$, the curvature of $u(\cdot,X_{t})$, and
$V_{t+\Delta}(\cdot)$. As before, this is easily obtained from convex duality
results. We see that the implied $m^{*}$ will rescale the variance of the
Gaussian shock $\epsilon_{t+\Delta}$ by some unknown function
$\nu^{2}(X_{t})$. We can then equivalently think of the choice of likelihood
ratio $m$ as being a choice of change-of-volatility $\nu$.
As a function of $\nu$, the one-period relative entropy of a volatility
distortion is given by
$\mathbb{E}[m(\nu)\log
m(\nu)]=\frac{1}{2}\left\\{\nu^{2}-1-\log\nu^{2}\right\\}$
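This entropy formula is the Kullback-Leibler divergence of Normal$(0,\nu^{2})$ from Normal$(0,1)$ and can be confirmed by quadrature (a sketch of my own, not from the paper):

```python
import numpy as np

def entropy_numeric(nu2):
    """E_P[m log m] for the Gaussian variance change 1 -> nu2, by quadrature."""
    e = np.linspace(-12.0, 12.0, 240_001)
    de = e[1] - e[0]
    q = np.exp(-e**2 / (2 * nu2)) / np.sqrt(2 * np.pi * nu2)      # distorted density
    log_m = -0.5 * e**2 * (1.0 / nu2 - 1.0) - 0.5 * np.log(nu2)   # log likelihood ratio
    return (q * log_m * de).sum()          # E_P[m log m] = E_Q[log m]

for nu2 in [0.5, 1.0, 2.0, 4.0]:
    closed_form = 0.5 * (nu2 - 1.0 - np.log(nu2))
    print(nu2, entropy_numeric(nu2), closed_form)   # numeric and closed form agree
```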
Taking the limit as $\Delta\to 0$, and the number of periods $J\to\infty$ so
that $J\Delta\to T$ and letting $\theta(\Delta)=\theta\Delta$, we obtain that
time-0 lifetime utility will be given by
$V_{0}(X_{0};P,\theta)=\inf_{\\{\nu_{t}\\}_{t=0}^{T}}\mathbb{E}_{0}\left[\int_{0}^{T}e^{-\rho
t}\left(u(X_{t})+\frac{\theta}{2}\left\\{\nu_{t}^{2}-1-\log\nu_{t}^{2}\right\\}\right)dt\right]$
(8)
where the infimum is subject to the constraint
$dX_{t}=\mu(X_{t})dt+\sigma(X_{t})\nu_{t}dZ_{t}.$
I make two important observations. First, the infimum problem in equation (8)
is a different problem from the infimization in the discrete-time problem in
equation (7). In discrete-time, preferences were defined as an infimum over
probability distributions, whereas now we are representing preferences as an
infimum over a controlled process. Thus the continuous-time limit here is best
thought of as an equivalent problem that produces the same value
function.888Similar considerations arise for continuous-time multiplier
preferences. Second, the linear scaling $\theta(\Delta)=\theta\Delta$ is
important for the limit. The standard continuous-time limit for multiplier
preferences would let $\theta(\Delta)$ be constant as $\Delta\to 0$. This
imposes absolute continuity of measures in the continuous-time limit, which by
Girsanov’s theorem restricts all probabilistic distortions to conditional
drift distortions. By contrast, the limit I consider allows violations of
absolute continuity in the continuous-time setup. If it weren’t for the
conditional moment restrictions, this would allow the adversarial nature to
choose arbitrarily large drift distortions at zero cost. I refer to any
process $\\{\nu_{t}\\}_{t=0}^{T}$ that attains the infimum in (8) as a _worst-
case_ change-of-volatility process.
Additionally, observe that for the function $V_{t}(X_{t};P,\theta)$ we will
have the following PDE,
$0=\min_{\nu}\
u(X)+\frac{\theta}{2}\left\\{\nu^{2}-1-\log\nu^{2}\right\\}+\frac{\partial
V}{\partial t}-\rho V+\frac{\partial V}{\partial
X}\mu(X)+\frac{1}{2}\frac{\partial^{2}V}{\partial X^{2}}\sigma^{2}(X)\nu^{2}.$
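Minimizing the right-hand side over $\nu$ pointwise gives, under the assumption $\theta+\sigma^{2}(X)\frac{\partial^{2}V}{\partial X^{2}}>0$ (a derivation of mine, not stated explicitly here), the interior solution $\nu^{2}=\theta/\big(\theta+\sigma^{2}(X)\frac{\partial^{2}V}{\partial X^{2}}\big)$, so a concave value function implies a worst-case volatility above the benchmark. A numerical check with illustrative values:

```python
import numpy as np
from scipy.optimize import minimize_scalar

theta, sigma2, Vxx = 2.0, 0.5, -1.5    # illustrative values; Vxx < 0 (concave V)
assert theta + sigma2 * Vxx > 0        # needed for an interior minimum in nu^2

# nu-dependent part of the HJB minimand, as a function of v = nu^2
obj = lambda v: 0.5 * theta * (v - 1.0 - np.log(v)) + 0.5 * Vxx * sigma2 * v
v_numeric = minimize_scalar(obj, bounds=(1e-6, 100.0), method="bounded",
                            options={"xatol": 1e-10}).x
v_closed = theta / (theta + sigma2 * Vxx)   # from the first-order condition
print(v_numeric, v_closed)                  # worst-case nu^2 = 1.6 > 1
```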
For simplicity, I will consider the infinite-horizon limit as $T\to\infty$ so
that the problem will become stationary and therefore $\frac{\partial
V}{\partial t}=0$.
#### 3.1.1 Observability of volatility and calibration of $\theta$
The penalty parameter $\theta$ captures the degree of confidence that the
economic actor has in their benchmark model of volatility, with
$\theta=\infty$ corresponding to complete model confidence, i.e. expected
utility.
It is well-known that continuous-time diffusion models imply that volatility
is directly observable via the quadratic variation. One might conclude that
this implies that the only reasonable value for $\theta$ is infinity. I give
several reasons why this is not the case in many domains of interest:
1. (i)
Across empirical domains, despite numerous proposed models of stochastic
volatility, there is generally no accepted consensus on the “correct”
parametric model for stochastic volatility. It seems sensible therefore that
economic actors would take any parametric model of volatility as an
approximation.
2. (ii)
Many economically important quantities (such as accounting variables,
macroeconomic growth rates, inflation, values of illiquid assets) are not
observed at high frequencies. Diffusion models applied to these settings are
approximations with desirable tractability features, and thus the implication
of observable volatility should not be taken literally.
3. (iii)
Even in asset pricing settings where high-frequency data is readily available,
the literature on high-frequency econometrics (see for instance Zhang et al.
(2005), Bandi and Russell (2006), Hansen and Lunde (2006), and Bandi and
Russell (2008)) finds significant statistical evidence of “microstructure
noise”, i.e. that the latent price process (assumed to be a semimartingale) is
contaminated by weakly-dependent measurement error due to liquidity or trading
frictions. This implies that the integrated quadratic variation is different
from the integral of the true latent volatility process, and that volatility
is not directly observable.
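As a point of reference for the discussion above, the idealized identification of volatility through realized quadratic variation can be sketched with a simple Euler simulation (illustrative values of my choosing): a change of volatility $\nu$ scales the realized quadratic variation by $\nu^{2}$, which is also why such distortions violate absolute continuity in continuous time.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 1.0, 100_000
dt = T / n
sigma, nu = 0.3, 1.5                       # benchmark volatility and distortion

dZ = rng.normal(0.0, np.sqrt(dt), size=n)  # shared Brownian increments
dX_bench = 0.05 * dt + sigma * dZ          # benchmark model (nu = 1), drift 0.05
dX_dist = 0.05 * dt + sigma * nu * dZ      # volatility-distorted model

qv_ratio = (dX_dist**2).sum() / (dX_bench**2).sum()
print(qv_ratio)   # ~ nu^2 = 2.25: the distortion is revealed by quadratic variation
```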
If we accept that economic actors do not observe volatility directly for
reasons (ii) or (iii), then we may follow the robust control literature and
apply detection error probabilities to calibrate “reasonable” values for
$\theta$ by either adopting a fixed $\Delta>0$ convention or appending a
measurement error process to a continuous-time volatility specification (see
for instance Anderson et al. (2003) and Hansen et al. (2002) for calibrations
based on detection error probabilities). Alternatively, we may apply the
subjective reasoning of Good (1952) and consider a value of $\theta$ as
“reasonable” if the implied worst-case model is subjectively reasonable.
### 3.2 Comparison with $G$-expectations
An alternate approach to modelling ambiguous volatility in continuous-time was
introduced by Peng (2007). This approach, known as $G$-expectations, can be
thought of as a continuous-time counterpart to the max-min expected utility of
Gilboa and Schmeidler (1989) applied to an unknown volatility parameter. The
implications of this approach are explored by Epstein and Ji (2013) in an
asset pricing setting. As will be demonstrated, the $G$-expectations approach
is closely related to the approach described in the previous section.
Consider preferences defined by the following generalization of the
continuous-time preferences defined by (8)
$U(X_{0})=\inf_{\\{\nu_{t}\\}_{t=0}^{\infty}}\mathbb{E}_{0}\left[\int_{0}^{\infty}e^{-\rho
t}\left(u(X_{t})+\xi(\nu_{t})\right)dt\right]$
where $\xi(\cdot)$ is a convex function of $\nu$ which I refer to as the
penalty function. Note that these preferences imply the following PDE for $U$,
$0=\min_{\nu}\ u(X)+\xi(\nu)-\rho U+\frac{\partial U}{\partial
X}\mu(X)+\frac{1}{2}\frac{\partial^{2}U}{\partial X^{2}}\sigma^{2}(X)\nu^{2}.$
Of course, the preferences in (8) can be seen to be a special case by taking
$\xi(\nu)=\frac{\theta}{2}\left\\{\nu^{2}-1-\log(\nu^{2})\right\\}$.
$G$-expectations can also be thought of as a special case, but now by taking
$\xi(\nu)$ to be a convex indicator function. In the case where the benchmark
volatility $\sigma(X)=\sigma$ is constant, this corresponds to
$\xi(\nu)=\begin{cases}0&\text{ if
}\nu\in\left[\underline{\sigma}/\sigma,\overline{\sigma}/\sigma\right]\\\
\infty&\text{ otherwise. }\end{cases}$
Note that here the convex indicator function restricts the instantaneous
volatility $\sigma\nu_{t}$ under the worst-case model to be in the interval
$[\underline{\sigma},\overline{\sigma}]$. The differences between these two
approaches are analogous to the differences between max-min expected utility
and variational or multiplier formulations of ambiguity aversion.
These two approaches are clearly mathematically similar. Nonetheless, there
will be differences in the implications of the two models, some of which will be
explored in the subsequent application. In particular, while the two models
will have qualitatively similar implications for the optimal contract, they
will have markedly different implications for the worst-case volatility model
and asset prices. For the optimal contracting problem I consider in the
following section, the $G$-expectations model implies a constant worst-case
volatility of $\sigma\nu_{t}=\overline{\sigma}$ whereas the relative entropy
model will imply a worst-case volatility process that is state-dependent and
higher in states that are worse for the decision-maker.
###### Remark 3.1.
As Peng (2007) and Nutz (2013) demonstrate, ambiguity about volatility in
diffusion environments can be represented via a mathematically convenient
control theory implementation. The implied value function and HJB equation are
the same as one in which the unknown volatility is treated as a controlled
process. This paper does not formally extend their equivalence results, and
sidesteps this by simply studying the equivalent control problem. Rigorous
development of nonlinear expectation theory extending the equivalence results
of Peng (2007) and Nutz (2013) to cover the applications considered here is
well beyond the scope of this paper.
## 4 Application: Optimal security design under repeated moral hazard
To illustrate the effect of the continuous-time preferences derived in the
previous section, I apply them to a model of optimal contracting under
repeated moral hazard based on papers by DeMarzo and Sannikov (2006) and Biais
et al. (2007).
### 4.1 Setup
I first describe a benchmark model without ambiguity based on DeMarzo and
Sannikov (2006) and Biais et al. (2007). At each instant $t$, the agent chooses an
effort level $a_{t}\in[0,1]$. Given an effort choice, the cumulative cash-flow
process $\\{Y_{t}\\}$ obeys the law of motion
$dY_{t}=\mu a_{t}dt+\sigma dZ_{t}$ (9)
where $\mu,\sigma>0$ and $Z_{t}$ is a standard Brownian motion.
The agent can derive private benefits $\lambda\mu(1-a_{t})$ from the action
$a_{t}$ where $\lambda\in(0,1)$. Due to linearity, it is without loss of
generality to take $a_{t}\in\\{0,1\\}$. At any time $t\geq 0$ the project can
be liquidated, producing a liquidation value of $L$. The principal and the
agent are both assumed to be risk neutral. The principal discounts cash flows
at a rate $r>0$ while the agent discounts cash flows at a rate
$\gamma>r$.999This assumption means that the agent is impatient relative to
the principal, and avoids degeneracy.
In selecting an optimal contract, the principal chooses a cumulative
compensation process $C$ for the agent, a liquidation stopping time $\tau$,
and a suggested effort process $a$ for the agent. The benchmark model optimal
contracting problem is given as follows.
###### Problem 4.1 (benchmark model).
$\max_{(C,\tau,a)}\mathbb{E}^{P^{a}}\left[\int_{0}^{\tau}e^{-rs}(dY_{s}-dC_{s})+e^{-r\tau}L\right]$
(10)
subject to
$\displaystyle\mathbb{E}^{P^{a}}\left[\int_{0}^{\tau}e^{-\gamma
s}(dC_{s}+\lambda\mu(1-a_{s})ds)\right]$
$\displaystyle\geq\mathbb{E}^{P^{\widehat{a}}}\left[\int_{0}^{\tau}e^{-\gamma
s}(dC_{s}+\lambda\mu(1-\widehat{a}_{s})ds)\right]$ (11)
$\displaystyle\mathbb{E}^{P^{a}}\left[\int_{0}^{\tau}e^{-\gamma
s}(dC_{s}+\lambda\mu(1-a_{s})ds)\right]$ $\displaystyle=W_{0}.$ (12)
### 4.2 Optimal Contract
Assume for simplicity that only the principal is ambiguity-averse. This will
turn out to be without loss of generality. The principal takes the evolution
equation (9) as an approximate benchmark model, but allows for alternate
models where the cumulative cash flow process evolves as
$dY_{t}=\mu a_{t}dt+\sigma\nu_{t}dZ_{t}.$ (13)
To avoid a degenerate effect of ambiguous volatility, it is necessary to
assume that realized volatility is not directly contractible. Otherwise it
would be possible for the agent to fully insure the principal’s uncertainty
about volatility without any subjective welfare loss. I formalize this with
the following restriction.
###### Restriction 4.2.
Under any feasible contract $(C,\tau,a)$, the process
$M_{t}=\mathbb{E}_{t}^{P,a}\left[\int_{0}^{\tau}e^{-\gamma
s}(dC_{s}+\lambda\mu(1-a_{s})ds)\right]$ (14)
admits the martingale representation
$M_{t}=M_{0}+\int_{0}^{t}\phi_{s}(dY_{s}-\mu a_{s}ds).$ (15)
under $P$.
If there were no uncertainty about volatility, then equation (15) in
restriction 4.2 would simply follow from the martingale representation
theorem. However, since the principal and agent have potentially different
beliefs about volatility, the principal and agent could derive subjective
welfare improvements from writing contracts in which the agent’s compensation
is contingent on the realized volatility. This is disallowed by restriction
4.2.
The optimal contracting problem is given by:
###### Problem 4.3.
$\sup_{(C,\tau,a)}\inf_{\nu}\mathbb{E}^{\nu}\left[\int_{0}^{\tau}e^{-rt}(dY_{t}-dC_{t})+e^{-r\tau}L\right]+\mathbb{E}^{\nu}\left[\int_{0}^{\tau}e^{-rt}\xi(\nu_{t})dt\right]$
(16)
subject to equations (11), (12), (13), and restriction 4.2.
Problem 4.3 can be thought of as a two-player, zero-sum stochastic
differential game101010See Fleming and Souganidis (1989) for further
discussion. between the principal and an adversarial nature. Nature chooses
the time-varying change-of-volatility process $\nu_{t}$ to minimize the
welfare of the principal, but choosing $\nu_{t}$ different from one carries a
cost proportional to the instantaneous relative entropy.
Let $W_{t}$ denote the time-$t$ continuation payoff of the agent. It follows
from restriction 4.2 that
$dW_{t}=\gamma W_{t}dt-dC_{t}-\lambda\mu(1-a_{t})dt+\phi_{t}(dY_{t}-\mu a_{t}dt)$ (17)
Observe that in view of equation (13), the principal and the agent perceive
the evolution of $W_{t}$ differently. The principal perceives it as
$dW_{t}=\gamma W_{t}dt-dC_{t}-\lambda\mu(1-a_{t})dt+\phi_{t}\sigma\nu_{t}dZ_{t}$ (18)
whereas the agent perceives it as
$dW_{t}=\gamma W_{t}dt-dC_{t}-\lambda\mu(1-a_{t})dt+\phi_{t}\sigma
dZ_{t}.$ (19)
#### 4.2.1 First-Best Contract
The first-best contract is the same as the first-best contract with no
ambiguity aversion in DeMarzo and Sannikov (2006). This is intuitively obvious:
the first-best value function is linear, so there are no volatility effects.
The first-best HJBI equation is
$rF(W)=\sup_{c\geq 0,\phi}\inf_{\nu}\mu-c+\xi(\nu)+\left(\gamma
W-c\right)F^{\prime}(W)+\frac{1}{2}\phi^{2}\nu^{2}\sigma^{2}F^{\prime\prime}(W).$
(20)
It is easy to verify that at the optimum, we have $\phi=0$ and therefore the
principal’s value function under the (stationary) first-best contract is
$F(W)=\frac{\mu}{r}-\frac{\gamma}{r}W,$
which can be implemented by the principal paying the agent a constant wage of
$c=\gamma W$. Of course, this can be improved if we allow time-0 lump sum
transfers in which case the principal can simply give a one-time transfer of
$W$ to the agent which gives
$F(W)-W=\frac{\mu}{r}.$
Thus with no moral hazard, volatility ambiguity produces no reduction in
welfare.
#### 4.2.2 Optimal Contract with Moral Hazard
It is a simple extension of lemma 3 of DeMarzo and Sannikov (2006) to show
that for any change-of-volatility process $\nu_{t}$, the agent’s incentive
compatibility constraint can be written as
$\phi_{t}\geq\lambda$ (21)
The HJBI equation for the optimal contract with agency is given by
$rF(W)=\sup_{c\geq
0,\phi\geq\lambda}\inf_{\nu}\mu-c+\frac{\theta}{2}\left\\{\nu^{2}-1-\log(\nu^{2})\right\\}+\left(\gamma
W-c\right)F^{\prime}(W)+\frac{1}{2}\phi^{2}\nu^{2}\sigma^{2}F^{\prime\prime}(W).$
(22)
A simple calculation shows that the worst-case change of volatility $\nu$ is
given by
$\nu^{2}=\frac{\theta}{\theta+\phi^{2}\sigma^{2}F^{\prime\prime}(W)}.$ (23)
Plugging in our expression for $\nu^{2}$, the HJBI reduces to the
following nonlinear HJB equation
$rF(W)=\sup_{c\geq
0,\phi\geq\lambda}\mu-c-\frac{\theta}{2}\log(\theta)+\left(\gamma
W-c\right)F^{\prime}(W)+\frac{\theta}{2}\log(\theta+\phi^{2}\sigma^{2}F^{\prime\prime}(W)).$
(24)
Consider the region $[0,\overline{W})$ for which $F^{\prime}(W)>-1$ so that
$c=0$ is optimal. Rearranging (24) gives
$\sup_{\phi\geq\lambda}\frac{\theta}{2}\log\left(1+\frac{\phi^{2}\sigma^{2}}{\theta}F^{\prime\prime}(W)\right)=rF(W)-\mu-\gamma
WF^{\prime}(W).$
Now I apply $rF(W)-\mu\leq-\gamma W$, which comes from the second-best value
function being less than or equal to the first-best value function without
lump-sum transfers, and $F^{\prime}(W)>-1$ to obtain
$\sup_{\phi\geq\lambda}\frac{\theta}{2}\log\left(1+\frac{\phi^{2}\sigma^{2}}{\theta}F^{\prime\prime}(W)\right)<0$
which is a contradiction unless $F^{\prime\prime}(W)<0$. This shows that $F$
is strictly concave on $[0,\overline{W}]$. Intuitively this holds because
unnecessarily exposing the agent to cash flow shocks is costly to the
principal, since it increases the probability of inefficient liquidation.
Thus we have shown the following. On the interval $[0,\overline{W}]$, the
principal’s value function satisfies the ODE
$rF(W)=\mu+\gamma
WF^{\prime}(W)+\frac{\theta}{2}\log\left(1+\frac{\lambda^{2}\sigma^{2}}{\theta}F^{\prime\prime}(W)\right).$
$F$ is strictly concave so the worst-case change of volatility given by
$\nu^{*}(W)^{2}=\frac{\theta}{\theta+\lambda^{2}\sigma^{2}F^{\prime\prime}(W)}$
(25)
is strictly greater than 1. Additionally, the strict concavity of the value
function implies that the incentive constraint always binds, i.e.
$\phi^{*}(W)=\lambda.$
While this is the same as DeMarzo and Sannikov (2006), it stands in contrast
to Miao and Rivera (2016).
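The worst-case formula (25) and the claim $\nu^{*}(W)^{2}>1$ can be verified numerically from the inner minimization in (22); the parameter values below, including the stand-in value of $F^{\prime\prime}(W)$, are illustrative (any $F^{\prime\prime}<0$ with $\theta+\lambda^{2}\sigma^{2}F^{\prime\prime}>0$ works).

```python
import math

def inner_objective(nu, theta, lam, sigma, Fpp):
    """nu-dependent part of the minimand in (22): the entropy penalty plus
    the volatility term (1/2)*phi^2*nu^2*sigma^2*F''(W), with phi = lambda."""
    return (0.5 * theta * (nu ** 2 - 1.0 - math.log(nu ** 2))
            + 0.5 * lam ** 2 * sigma ** 2 * Fpp * nu ** 2)

theta, lam, sigma, Fpp = 5.0, 0.2, 5.0, -2.0  # illustrative; F'' < 0
nu_star_sq = theta / (theta + lam ** 2 * sigma ** 2 * Fpp)  # equation (25)

# Concavity of F pushes the worst-case variance strictly above one.
assert nu_star_sq > 1.0

# A grid search over nu confirms that the closed form minimizes the objective.
grid = [0.5 + 0.001 * k for k in range(3000)]
nu_grid = min(grid, key=lambda nu: inner_objective(nu, theta, lam, sigma, Fpp))
assert abs(nu_grid ** 2 - nu_star_sq) < 1e-2
```

The first-order condition $\theta\nu-\theta/\nu+\lambda^{2}\sigma^{2}F^{\prime\prime}\nu=0$ delivers (25) directly, which is what the grid search recovers.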
The next proposition characterizes the optimal contract under the assumption
that high effort is always optimal. The optimal contract with partial shirking
can be described using methods similar to Zhu (2013), but such a
characterization is beyond the scope of this paper.
###### Proposition 4.4.
Assume that $L<\frac{\mu}{r}$ and that implementing high effort is optimal.
Assume further that there exists a unique twice differentiable solution $F$ to
the ODE
$rF(W)=\mu+\gamma
WF^{\prime}(W)+\frac{\theta}{2}\log\left(1+\frac{\lambda^{2}\sigma^{2}}{\theta}F^{\prime\prime}(W)\right)$
(26)
with boundary conditions
$F(0)=L,\ F^{\prime}(\overline{W})=-1$
and $F^{\prime\prime}(W)<0$ for all $W\in[0,\overline{W})$ where
$\overline{W}$ is defined by $F^{\prime\prime}(\overline{W})=0$. Then:
1. (i)
When $W\in[0,\overline{W}]$, $F(W)$ is the principal’s value function for
problem 4.3, the optimal cash flow sensitivity is $\phi^{*}(W)=\lambda$ and
the worst case change of volatility $\nu^{*}(W)$ is given by (25). The
contract delivers value $W$ to the agent whose continuation value $W_{t}$
evolves according to
$dW_{t}=\gamma W_{t}dt-dC_{t}^{*}+\phi^{*}(W_{t})\sigma\nu^{*}(W_{t})\ dZ_{t}$
where $dC_{t}^{*}$ is 0 in $[0,\overline{W})$ and causes $W_{t}$ to reflect at
$\overline{W}$. The contract terminates at time $\tau=\inf\\{t\geq
0:W_{t}=0\\}$ when the project is liquidated.
2. (ii)
When $W>\overline{W}$, the principal’s value function is
$F(W)=F(\overline{W})-(W-\overline{W})$. The principal immediately pays
$W-\overline{W}$ to the agent and contracting continues with the agent’s new
initial value $\overline{W}$.
Observe that as the degree of model confidence $\theta\to\infty$, the quantity
$\frac{\theta}{2}\log\left(1+\frac{\lambda^{2}\sigma^{2}}{\theta}F^{\prime\prime}(W)\right)$
converges to $\frac{1}{2}\lambda^{2}\sigma^{2}F^{\prime\prime}(W)$ for any
value of $F^{\prime\prime}(W)$. Hence (26) converges to the ordinary
differential equation of DeMarzo and Sannikov (2006), i.e. the benchmark model
with no ambiguity aversion.
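As a rough numerical sketch, the ODE (26) can be solved by backward shooting: for a candidate $\overline{W}$, the boundary conditions $F^{\prime}(\overline{W})=-1$ and $F^{\prime\prime}(\overline{W})=0$ pin down $F(\overline{W})=(\mu-\gamma\overline{W})/r$, and $\overline{W}$ is then adjusted until the backward solution satisfies $F(0)=L$. The parameter values are those used in the figures; the discretization and bracketing choices are mine, and this is a sketch rather than a validated solver.

```python
import math

# Parameter values from the figures; theta = 5 is the ambiguity level used there.
MU, R, GAMMA, LAM, SIGMA, L, THETA = 10.0, 0.1, 0.15, 0.2, 5.0, 90.0, 5.0
K = LAM ** 2 * SIGMA ** 2  # lambda^2 * sigma^2

def Fpp(W, F, Fp):
    """Solve (26) for F'': F'' = (theta/K)*(exp(2*(r*F - mu - gamma*W*F')/theta) - 1)."""
    expo = min(2.0 * (R * F - MU - GAMMA * W * Fp) / THETA, 50.0)  # clamp for safety
    return (THETA / K) * (math.exp(expo) - 1.0)

def shoot(Wbar, h=0.01):
    """Integrate (26) backward from Wbar to 0 with a midpoint (RK2) scheme.
    F'(Wbar) = -1 and F''(Wbar) = 0 force F(Wbar) = (mu - gamma*Wbar)/r."""
    W, F, Fp = Wbar, (MU - GAMMA * Wbar) / R, -1.0
    while W > 0.0:
        step = min(h, W)
        k1F, k1p = Fp, Fpp(W, F, Fp)
        Fm, Fpm = F - 0.5 * step * k1F, Fp - 0.5 * step * k1p
        k2F, k2p = Fpm, Fpp(W - 0.5 * step, Fm, Fpm)
        F, Fp, W = F - step * k2F, Fp - step * k2p, W - step
    return F  # value of the backward solution at W = 0

# Bisect on Wbar until the backward solution hits the liquidation value F(0) = L.
lo, hi = 0.5, 30.0
assert shoot(lo) > L and shoot(hi) < L  # bracket for these parameter values
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if shoot(mid) > L:
        lo = mid
    else:
        hi = mid
Wbar = 0.5 * (lo + hi)
assert abs(shoot(Wbar) - L) < 2.0
```

Along the backward pass, $F^{\prime\prime}$ stays in $(-\theta/\lambda^{2}\sigma^{2},0)$, which is the quantitative counterpart of the concavity argument above.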
###### Proposition 4.5.
Let $F(\cdot)$, $\overline{W}$ be defined as in proposition 4.4. Then high
effort is optimal if and only if
$\min_{W\in[0,\overline{W}]}rF(W)-F^{\prime}(W)(\gamma W-\lambda\mu)\geq 0.$
The argument is simple, and is consistent with proposition 8 of DeMarzo and
Sannikov (2006). Next, I show how the optimal contract changes with the level
of ambiguity aversion.
###### Proposition 4.6.
For any promised wealth level to the agent, the principal’s value function
$F(W)$ strictly increases in $\theta$.
Figure 3: Value functions $F(W)$ for the contracting problem. The value function with no ambiguity ($\theta=\infty$) is shown in blue; the value function with $\theta=5$ is shown in dashed red. Parameter values are $\mu=10,r=0.1,\gamma=0.15,\lambda=0.2,\sigma=5,L=90$. Observe that the value function with no ambiguity is strictly higher than the value function with $\theta=5$, consistent with proposition 4.6.
Figure 4: Worst-case change-of-variance $\nu^{2}(W)$ for $\theta=5$ shown in red; $\nu^{2}=1$, i.e. the change-of-variance when $\theta=\infty$, shown in blue. Parameter values are $\mu=10,r=0.1,\gamma=0.15,\lambda=0.2,\sigma=5,L=90$.
Figure 5: Value function derivative $F^{\prime}(W)$ for $\theta=5$ shown in dashed red; the derivative with no ambiguity ($\theta=\infty$) shown in blue. Parameter values are $\mu=10,r=0.1,\gamma=0.15,\lambda=0.2,\sigma=5,L=90$. Points for which $F^{\prime}(W)=-1$ correspond to the upper boundary $\overline{W}$.
Proposition 4.6 confirms the obvious intuition that the principal’s value
function is increasing in $\theta$, i.e. decreasing in the level of ambiguity
aversion. This is illustrated in figure 3. While this result is unsurprising,
it is nonetheless useful in establishing subsequent comparative static results
for the optimal contract. The following proposition shows how the payoff
boundary $\overline{W}$ changes with $\theta$.
###### Proposition 4.7.
The payoff boundary $\overline{W}$ is strictly decreasing in $\theta$.
Thus, higher levels of ambiguity aversion lead to a higher payoff boundary
for the agent. This result is illustrated in figure 6.
Figure 6: Upper payoff boundary $\overline{W}$ as a function of ambiguity
aversion $1/\theta$. Parameter values are
$\mu=10,r=0.1,\gamma=0.15,\lambda=0.2,\sigma=5,L=90$.
#### 4.2.3 What happens if the agent is ambiguity-averse?
Up until this point, I have assumed that only the principal was ambiguity
averse. It is natural to ask what happens if the agent is ambiguity averse as
well. As it turns out, so long as the agent has the same form of variational
ambiguity with a strictly convex penalty function, the agent’s ambiguity
aversion will not affect the optimal contract.
###### Proposition 4.8.
Assume that the agent is ambiguity averse. Then the contract described in
proposition 4.4 remains optimal. Moreover, the agent’s implied worst-case
belief is $\nu(W)=1$.
Even when the agent is ambiguity averse, the optimal contract is unaffected,
and the agent forms expectations as if fully trusting that volatility is
constant at the level $\sigma$.
#### 4.2.4 Bellman-Isaacs condition
The optimal contract characterized by proposition 4.4 is the solution of a
particular max-min problem between the principal and nature. A natural
question to ask is whether the optimal contract would remain optimal if the
worst-case volatility process $\\{\nu_{t}\\}$ were specified exogenously.
Formally, this corresponds to what is known as a Bellman-Isaacs condition. As
discussed in Hansen et al. (2006), this condition is important for the
interpretation of the solution to a robust control problem. In particular, it
allows for an ex-post Bayesian interpretation of the robust control problem.
For the robust contracting problem described in this paper, the value function
is in fact globally concave, and the optimal control of nature has no binding
inequality constraints. One can verify (see Fan (1953), Hansen et al. (2006))
that the Bellman-Isaacs condition is satisfied. Therefore, the optimal
contract described in proposition 4.4 is optimal in an ex-post Bayesian sense
where the principal believes that volatility evolves according to (25), in a
restricted space of contracts where changes in the agent’s continuation payoff
are locally linear in project cash flows. As such, it is reasonable to
interpret my model as a model of endogenous belief formation about the
volatility process.
### 4.3 Implementation and Asset Pricing Implications
#### 4.3.1 Credit Line Implementation
Following DeMarzo and Sannikov (2006), I show how to implement the optimal
contract with a capital structure of equity, debt, and a credit line. The
implementation is as follows:
* •
_Equity:_ The agent holds inside equity for a fraction $\lambda$ of the firm.
Dividend payments are at the discretion of the agent.
* •
_Long-term debt:_ Long term debt is a consol bond that pays coupons at a rate
$x=\mu-\frac{\gamma}{\lambda}\overline{W}$. If the firm ever defaults on a
coupon payment, debt holders force liquidation.
* •
_Credit line:_ The firm has a revolving credit line with credit limit
$C^{L}=\frac{\overline{W}}{\gamma}$. Balances on the credit line are subject
to an interest rate $\gamma$. The firm borrows and repays funds on the credit
line at the discretion of the agent. If the balance ever exceeds $C^{L}$, the
project is terminated.
The following proposition characterizes how this implementation changes with
the level of ambiguity aversion.
###### Proposition 4.9.
As the level of ambiguity aversion $1/\theta$ increases
* •
The optimal credit limit strictly increases.
* •
The face value of the optimal long-term debt strictly decreases.
Note that the fraction of equity held by the agent is determined by the
incentive compatibility constraints, and does not change with $\theta$.
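The comparative statics in proposition 4.9 follow mechanically from the implementation formulas. A small sketch, using hypothetical values of $\overline{W}$ (in practice $\overline{W}$ must be computed from the ODE, and by proposition 4.7 it is larger when the principal is more ambiguity averse):

```python
def capital_structure(mu, r, gamma, lam, Wbar):
    """Credit-line implementation: coupon x = mu - (gamma/lam)*Wbar, face value
    x/r for the consol bond, and credit limit C^L = Wbar/gamma."""
    x = mu - (gamma / lam) * Wbar
    return {"coupon": x, "face_value": x / r, "credit_limit": Wbar / gamma}

mu, r, gamma, lam = 10.0, 0.1, 0.15, 0.2
# Hypothetical payoff boundaries under less / more ambiguity aversion.
low_ambiguity = capital_structure(mu, r, gamma, lam, Wbar=4.0)
high_ambiguity = capital_structure(mu, r, gamma, lam, Wbar=6.0)

# Proposition 4.9: more ambiguity aversion means a larger credit limit and
# less long-term debt, while the agent's equity share is pinned down by (21).
assert high_ambiguity["credit_limit"] > low_ambiguity["credit_limit"]
assert high_ambiguity["face_value"] < low_ambiguity["face_value"]
```

Since both the coupon and the credit limit are monotone in $\overline{W}$, proposition 4.9 is an immediate corollary of proposition 4.7.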
I consider asset prices in a representative agent setting where the principal
is the representative investor who trades debt and equity, whereas the agent
is an insider who is restricted from trading in either security. I take $r$ as
the risk-free rate. Then I price securities under the worst-case belief
measure of the principal. This approach is analogous to those taken in
Anderson et al. (2003), Biais et al. (2007), and Miao and Rivera (2016).
The value of equity is given by
$S_{t}=\mathbb{E}_{t}^{\nu^{*}}\left[\int_{t}^{\tau}e^{-r(s-t)}\frac{1}{\lambda}dC_{t}^{*}\right]$
It is straightforward to obtain that the stock price is given by
$S_{t}=S(W_{t})$ where the function $S(\cdot)$ satisfies the ODE
$rS(W)=\gamma
WS^{\prime}(W)+\frac{1}{2}\lambda^{2}\sigma^{2}\nu^{*}(W)^{2}S^{\prime\prime}(W)$
with boundary conditions $S(0)=0$ and $S^{\prime}(\overline{W})=1$. A simple
argument now shows that the equity premium is given by
$\mathbb{E}_{t}\left[\frac{dS_{t}}{S_{t}}\right]-r=-\frac{1}{2}\lambda^{2}\sigma^{2}\left[\nu^{*}(W_{t})^{2}-1\right]\frac{S^{\prime\prime}(W_{t})}{S(W_{t})}$
(27)
which is strictly positive in numerical computations. Note also that in the
no-ambiguity benchmark, the equity premium is identically zero.
Define the credit yield spread $\Delta_{t}$ by
$\int_{t}^{\infty}e^{-(r+\Delta_{t})(s-t)}ds=\mathbb{E}_{t}^{\nu^{*}}\left[\int_{t}^{\tau}e^{-r(s-t)}ds\right]$
(28)
which when solved yields $\Delta_{t}=\frac{rT_{t}}{1-T_{t}}$ where
$T_{t}=\mathbb{E}_{t}^{\nu^{*}}\left[e^{-r(\tau-t)}\right]$ is the time-$t$
price of one unit of consumption at the time of default. $T_{t}=T(W_{t})$
satisfies the ODE
$rT(W)=\gamma
WT^{\prime}(W)+\frac{1}{2}\lambda^{2}\sigma^{2}\nu^{*}(W)^{2}T^{\prime\prime}(W)$
with boundary conditions $T(0)=1$ and $T^{\prime}(\overline{W})=0$.
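Solving the definition (28) for $\Delta_{t}$ gives $\Delta_{t}=rT_{t}/(1-T_{t})$; a quick sketch of this mapping, with hypothetical values of $T$:

```python
def credit_spread(r, T):
    """Solve (28): 1/(r + Delta) = (1 - T)/r, so Delta = r*T/(1 - T), where T is
    the price of one unit of consumption paid at the default time."""
    return r * T / (1.0 - T)

r = 0.1
assert credit_spread(r, 0.0) == 0.0      # default never priced in: zero spread
assert credit_spread(r, 0.5) == r        # T = 1/2 doubles the effective discount rate
assert credit_spread(r, 0.9) > credit_spread(r, 0.5)  # spread rises with default risk
```

The spread is increasing and convex in $T$, so states with a higher default-claim price (low $W$, under the worst-case belief) carry disproportionately higher spreads.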
Figure 7: Credit yield spread and equity premium as a function of $W$. Asset
prices without ambiguity $(\theta=\infty)$ shown in blue. Asset prices with
ambiguity $(\theta=5)$ shown in red.
#### 4.3.2 Cash-based implementation
I briefly describe an alternate capital structure implementation of the
optimal contract, similar to Biais et al. (2007), using equity, debt, and cash
reserves. The firm holds cash reserves $M_{t}=\frac{W_{t}}{\lambda}$ which
earn the risk-free interest rate $r$. The project payoffs $dY_{t}$ are put
into the firm’s cash account. Outside investors hold a fraction $1-\lambda$ of
equity, and debt which pays coupons at a state-dependent rate
$[\mu-(\gamma-r)M_{t}]dt$, while the agent holds a fraction $\lambda$ of
equity. Then the cash reserves evolve according to
$dM_{t}=\underset{\text{interest}}{\underbrace{rM_{t}dt}}+\underset{\text{project
cash
flows}}{\underbrace{dY_{t}}}-\underset{\text{dividends}}{\underbrace{\frac{1}{\lambda}dC_{t}}}-\underset{\text{coupon}}{\underbrace{[\mu-(\gamma-r)M_{t}]dt}}$
(29)
with $M_{0}=W_{0}/\lambda$. One can easily verify that equation (29) agrees
with the evolution for $W_{t}/\lambda$. Under the cash-based implementation,
proposition 4.7 implies that higher levels of ambiguity aversion increase the
amount of cash the firm will hold before it is willing to pay dividends.
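The consistency claim for equation (29) can be checked with a single discretized step; the step size, shock, and state values below are illustrative.

```python
# One discretized (Euler) step of the cash-reserve dynamics (29) against W_t/lambda.
gamma, r, lam, mu, sigma = 0.15, 0.1, 0.2, 10.0, 5.0
dt, dZ, dC = 0.01, 0.03, 0.0   # illustrative step, Brownian increment, dividend
W = 8.0                         # agent's continuation value
M = W / lam                     # firm's cash reserves
dY = mu * dt + sigma * dZ       # cash flow under high effort (a = 1)

# Continuation value under high effort: dW = gamma*W*dt - dC + lambda*(dY - mu*dt).
W_next = W + gamma * W * dt - dC + lam * (dY - mu * dt)

# Cash reserves (29): dM = r*M*dt + dY - dC/lambda - [mu - (gamma - r)*M]*dt.
M_next = M + r * M * dt + dY - dC / lam - (mu - (gamma - r) * M) * dt

# The two laws of motion agree: M_t tracks W_t/lambda exactly.
assert abs(M_next - W_next / lam) < 1e-12
```

Algebraically, the interest and coupon terms in (29) combine to $\gamma M\,dt-\mu\,dt$, which is exactly the drift of $W_{t}/\lambda$.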
We see in both the credit line implementation and the cash-based
implementation that higher levels of ambiguity aversion increase the maximum
“financial slack” that the firm is given under the optimal contract. In the
credit line implementation this corresponds to a higher maximum credit limit,
whereas in the cash-based implementation this corresponds to a higher cash
buffer that the firm accumulates before paying dividends to equity holders.
### 4.4 The role of commitment
The optimal contract described in proposition 4.4 is not generically
renegotiation-proof. For small values of $W$, the principal’s value function
$F$ under the optimal contract is increasing in $W$, so the principal and the
agent can both be made better off by a one-off increase in the continuation
value of the agent. To be renegotiation-proof, the principal’s value function
$F(W)$ must not have positive slope. However, it is possible to modify the
contract described in proposition 4.4 to obtain the optimal renegotiation-
proof contract, which I describe in this section. Additionally, I show that
the implied worst-case volatility under the optimal renegotiation-proof
contract is strictly decreasing in the agent’s continuation value.
Renegotiation effectively raises the minimum payoff of the agent to a point
$R$ such that $F^{\prime}(R)=0$. The agent’s promised value evolves on the
interval $[R,\overline{W}]$ according to
$dW_{t}=\gamma W_{t}dt-dC_{t}-\lambda\mu
dt+\lambda\sigma\nu(W_{t})dZ_{t}+dP_{t}$ (30)
where the processes $C$ and $P$ reflect $W_{t}$ at endpoints $\overline{W}$
and $R$ respectively. The project is terminated stochastically whenever
$W_{t}$ is reflected at $R$. The probability that the project continues at
time $t$ is
$Pr(\tau\geq t)=\exp\left(-\frac{P_{t}}{R}\right).$ (31)
The optimal contract can still be implemented with equity, long-term debt, and
a credit line, though the level of long-term debt and the size of the credit
line will be different.
###### Proposition 4.10.
Under the optimal renegotiation-proof contract, the worst-case volatility
$\nu^{*}(W)^{2}$ is strictly decreasing in $W$.
The renegotiation-proof contract is in a sense more robust than the contract
described in proposition 4.4, in that it eliminates the incentive for the
principal to renegotiate the contract with
the agent. However, it still requires the principal to commit to a stochastic
(unverifiable) liquidation policy. Without such commitment, there will
generally be welfare loss to the principal. In particular, if the principal
can only commit to deterministic liquidation policies, then the Pareto
frontier is generally characterized by a solution to the same differential
equation as before, but now with boundary conditions $F(0)=L$ and
$F^{\prime}(0)=0$. Under this implementation, it is possible to show similar
comparative statics as for the optimal contract with full commitment.
### 4.5 Comparison with alternative models
#### 4.5.1 Comparison with $G$-expectations
Consider the “interval uncertainty” or $G$-expectations formulation of
ambiguity aversion. Assume that the adjustment cost function $\xi(\nu)$ faced
by nature is given by
$\xi(\nu)=\begin{cases}0&\text{ if
}\nu\in[\underline{\sigma}/\sigma,\overline{\sigma}/\sigma]\\\ \infty&\text{
otherwise. }\end{cases}$ (32)
This is equivalent to assuming that nature is free to choose any level of
volatility $\sigma_{t}\in[\underline{\sigma},\overline{\sigma}]$ with no
adjustment cost. This formulation of volatility ambiguity is precisely the
$G$-expectation formulation of Peng (2007), and is similar to the
$\kappa$-ignorance specification of Chen and Epstein (2002).
###### Proposition 4.11.
Consider the optimal contracting problem in which both the principal and the
agent have interval uncertainty of the form (32). Assume that
$L<\frac{\mu}{r}$ and implementing high effort is optimal. Then the optimal
contract is the same as that of the optimal contract without ambiguity
aversion where both the principal and the agent believe the volatility level
is $\overline{\sigma}$.
###### Proposition 4.12.
The payoff boundary $\overline{W}$ of the optimal contracting problem with
interval uncertainty is strictly increasing in $\overline{\sigma}$.
#### 4.5.2 Comparison with drift ambiguity
This paper is closely related to Miao and Rivera (2016) who study a similar
dynamic contracting problem where the principal is uncertain about the
expected cash flows and is ambiguity-averse. They obtain similar asset pricing
implications as I do; time-varying risk-premia that are generally higher for
financially distressed firms. However, there are some key differences.
Firstly, the optimal contracts are quite different. In my model, the incentive
compatibility constraint always binds because the principal fears inefficient
liquidation and therefore does not want the agent to bear any more risk than
necessary. This preserves the optimality of the simple contractual form of
DeMarzo and Sannikov (2006) and Biais et al. (2007). In their model however,
the principal does not like drift ambiguity, and thus the optimal contract
will sometimes force the agent to bear more cash-flow sensitivity than
necessary. As a result, the incentive compatibility constraint is an
occasionally binding constraint, and their optimal contract is much more
challenging to interpret. Second, the value function in my model is globally
concave, so the Bellman-Isaacs condition holds. This means that it is valid to
interpret my model as a model of endogenous belief formation. This is not the
case in Miao and Rivera (2016). Thirdly, my model can accommodate ambiguity
aversion on the part of the agent, without any reduction in the impact of
ambiguity aversion. Miao and Rivera (2016) do not model ambiguity aversion on
the part of the agent, and in their framework, it would produce an offsetting
effect which reduces the impact of ambiguity aversion on the optimal contract.
### 4.6 Empirical implications
Credit lines, also known as revolving credit facilities, are an extremely
important form of firm financing. Empirically, credit lines account for more
than a quarter of outstanding corporate debt of publicly traded firms and an
even larger fraction for smaller, non-publicly traded firms.111111See Berger
and Udell (1995), Sufi (2007) and DeMarzo and Sannikov (2006). To the extent
that smaller firms have more ambiguous riskiness of their cash flows, this is
consistent with the predictions of my model.
Under ambiguity aversion, the optimal contract may be interpreted as one that
would emerge under a particular form of belief heterogeneity. As a reflection
of the ambiguity aversion, the principal acts as if she believes that
volatility is time-varying and strictly higher than the benchmark volatility.
From the principal’s point of view, the agent’s beliefs appear to have too
little uncertainty. This is consistent with empirical evidence on managerial
overconfidence as in Landier and Thesmar (2009) and Ben-David et al. (2013).
In terms of asset prices, my model predicts that the equity premium and credit
yield spread are state-dependent and generally higher for firms closer to
default. This is consistent with the literature on characteristic-based asset
pricing (Daniel and Titman (1997), Daniel and Titman (1998)) as well as
Friewald et al. (2014) who find that firm’s equity premium and credit spread
are positively correlated.
## 5 Conclusion
This paper developed new preference formulations which capture ambiguity
aversion towards unknown volatility. These _moment-constrained variational
preferences_ were introduced in a static setting and their impact illustrated
in a simple model of portfolio choice under quadratic utility. In this model,
I showed how the degree of ambiguity aversion impacted the implied worst-case
volatility as well as the optimal portfolio of the investor. I then derived a
continuous-time limit in which ambiguity aversion towards unknown, potentially
time-varying, volatility was not degenerate. These new continuous-time
preferences were then compared with the $G$-expectations model of Peng (2007).
The impact of these new continuous-time preferences was illustrated in a model
of optimal security design under repeated moral hazard. I showed how the
worst-case volatility of the principal depended on the distance to
liquidation, whereas the agent always has full confidence in their benchmark
model. I showed how ambiguity aversion increased the dividend “hurdle” in the
optimal contract, and further showed how in a credit line implementation this
corresponded to increasing the maximum draw on the credit line available to the agent.
Finally, I numerically illustrated some of the asset pricing implications of
ambiguous volatility.
For pedagogical simplicity and clarity, this paper focused exclusively on
ambiguity aversion towards unknown volatility. However, there is no deep
theoretical reason for this exclusivity. Future work could apply moment-
constrained variational preferences to modelling uncertainty about other
distributional features, and the volatility penalization could be used in
conjunction with other approaches for modelling ambiguity aversion.
The preference formulations developed in this paper can potentially be applied
to a variety of other settings. One possibility is to examine their effect in
a model with moral hazard and endogenous investment, similar to DeMarzo et al.
(2012) or Bolton et al. (2013), and derive simultaneous implications for
corporate investment and asset pricing. Another possibility would be to apply
them to the problem of stress testing, where a bank regulator attempts to
control the risk-taking of a bank without full confidence in a particular risk
model. A third possibility would be to study a consumption-savings problem and
investigate the impact of volatility ambiguity on precautionary
savings.121212I am grateful to an anonymous referee for this suggestion. I
leave these and other extensions to future research.
## References
* Adrian and Westerfield (2009) Adrian, Tobias and Mark M Westerfield. 2009. Disagreement and Learning in a Dynamic Contracting Model. _Review of Financial Studies_ 22 (10):3873–3906.
* Anderson et al. (2003) Anderson, Evan W, Lars Peter Hansen, and Thomas J Sargent. 2003. A Quartet of Semigroups for Model Specification, Robustness, Prices of Risk, and Model Detection. _Journal of the European Economic Association_ 1 (1):68–123.
* Bandi and Russell (2006) Bandi, Federico M and Jeffrey R Russell. 2006. Separating microstructure noise from volatility. _Journal of Financial Economics_ 79 (3):655–692.
* Bandi and Russell (2008) ———. 2008. Microstructure noise, realized variance, and optimal sampling. _The Review of Economic Studies_ 75 (2):339–369.
* Ben-David et al. (2013) Ben-David, Itzhak, John R Graham, and Campbell R Harvey. 2013. Managerial Miscalibration. _The Quarterly Journal of Economics_.
* Bergemann and Morris (2005) Bergemann, Dirk and Stephen Morris. 2005. Robust Mechanism Design. _Econometrica_ 73 (6):1771–1813.
* Berger and Udell (1995) Berger, Allen N and Gregory F Udell. 1995. Relationship lending and lines of credit in small firm finance. _Journal of Business_ 351–381.
* Biais et al. (2007) Biais, Bruno, Thomas Mariotti, Guillaume Plantin, and Jean-Charles Rochet. 2007. Dynamic Security Design: Convergence to Continuous Time and Asset Pricing Implications. _The Review of Economic Studies_ 74 (2):345–390.
* Bolton et al. (2013) Bolton, Patrick, Hui Chen, and Neng Wang. 2013. Market Timing, Investment, and Risk Management. _Journal of Financial Economics_ 109 (1):40–62.
* Carroll (2015) Carroll, Gabriel. 2015. Robustness and linear contracts. _The American Economic Review_ 105 (2):536–563.
* Chen et al. (2020) Chen, Xiaohong, Lars Peter Hansen, and Peter G Hansen. 2020. Robust identification of investor beliefs. _Proceedings of the National Academy of Sciences_ 117 (52):33130–33140.
* Chen et al. (2021) ———. 2021. Robust Estimation and Inference when Beliefs are Subjective.
* Chen and Epstein (2002) Chen, Zengjing and Larry Epstein. 2002. Ambiguity, Risk, and Asset Returns in Continuous Time. _Econometrica_ 70 (4):1403–1443.
* Cressie and Read (1984) Cressie, Noel and Timothy RC Read. 1984. Multinomial goodness-of-fit tests. _Journal of the Royal Statistical Society: Series B (Methodological)_ 46 (3):440–464.
* Daniel and Titman (1997) Daniel, Kent and Sheridan Titman. 1997. Evidence on the characteristics of cross sectional variation in stock returns. _The Journal of Finance_ 52 (1):1–33.
* Daniel and Titman (1998) ———. 1998. Characteristics or covariances. _Journal of Portfolio Management_ 24 (4):24–33.
* DeMarzo and Fishman (2007) DeMarzo, Peter M and Michael J Fishman. 2007. Optimal long-term financial contracting. _The Review of Financial Studies_ 20 (6):2079–2128.
* DeMarzo and Sannikov (2006) DeMarzo, Peter M and Yuliy Sannikov. 2006. Optimal Security Design and Dynamic Capital Structure in a Continuous-Time Agency Model. _The Journal of Finance_ 61 (6):2681–2724.
* DeMarzo et al. (2012) DeMarzo, Peter M, Michael J Fishman, Zhiguo He, and Neng Wang. 2012. Dynamic Agency and the q Theory of Investment. _The Journal of Finance_ 67 (6):2295–2340.
* Epstein and Ji (2013) Epstein, Larry G and Shaolin Ji. 2013. Ambiguous Volatility and Asset Pricing in Continuous Time. _Review of Financial Studies_ 26 (7):1740–1786.
* Epstein and Schneider (2003) Epstein, Larry G and Martin Schneider. 2003. Recursive multiple-priors. _Journal of Economic Theory_ 113 (1):1–31.
* Fan (1953) Fan, Ky. 1953. Minimax Theorems. _Proceedings of the National Academy of Sciences_ 39 (1):42–47.
* Fleming and Souganidis (1989) Fleming, Wendell H and Panagiotis E Souganidis. 1989. On the Existence of Value Functions of Two-Player, Zero-Sum Stochastic Differential-Games. _Indiana University Mathematics Journal_ 38 (2):293–314.
* Friewald et al. (2014) Friewald, Nils, Christian Wagner, and Josef Zechner. 2014. The Cross-Section of Credit Risk Premia and Equity Returns. _The Journal of Finance_ 69 (6):2419–2469.
* Gilboa and Schmeidler (1989) Gilboa, Itzhak and David Schmeidler. 1989. Maxmin Expected Utility with Non-unique Prior. _Journal of Mathematical Economics_ 18 (2):141–153.
* Good (1952) Good, IJ. 1952. Rational Decisions. _Journal of the Royal Statistical Society. Series B (Methodological)_ 14 (1):107–114.
* Hansen and Sargent (2001) Hansen, Lars Peter and Thomas J. Sargent. 2001. Robust Control and Model Uncertainty. _The American Economic Review_ 91 (2):60–66.
* Hansen and Sargent (2012) Hansen, Lars Peter and Thomas J Sargent. 2012. Three types of ambiguity. _Journal of Monetary Economics_ 59 (5):422–445.
* Hansen and Sargent (2018) ———. 2018. Structured Uncertainty and Model Misspecification. _University of Chicago, Becker Friedman Institute for Economics Working Paper_ (2018-77).
* Hansen et al. (2002) Hansen, Lars Peter, Thomas J Sargent, and Neng E Wang. 2002. Robust permanent income and pricing with filtering. _Macroeconomic Dynamics_ 6 (1):40–84.
* Hansen et al. (2006) Hansen, Lars Peter, Thomas J Sargent, Gauhar Turmuhambetova, and Noah Williams. 2006. Robust Control and Model Misspecification. _Journal of Economic Theory_ 128 (1):45–90.
* Hansen and Lunde (2006) Hansen, Peter R and Asger Lunde. 2006. Realized variance and market microstructure noise. _Journal of Business & Economic Statistics_ 24 (2):127–161.
* Harsanyi (1967) Harsanyi, John C. 1967. Games with incomplete information played by “Bayesian” players, I–III Part I. The basic model. _Management Science_ 14 (3):159–182.
* Klibanoff et al. (2009) Klibanoff, Peter, Massimo Marinacci, and Sujoy Mukerji. 2009. Recursive smooth ambiguity preferences. _Journal of Economic Theory_ 144 (3):930–976.
* Landier and Thesmar (2009) Landier, Augustin and David Thesmar. 2009. Financial Contracting with Optimistic Entrepreneurs. _Review of Financial Studies_ 22 (1):117–150.
* Maccheroni et al. (2006a) Maccheroni, Fabio, Massimo Marinacci, and Aldo Rustichini. 2006a. Ambiguity Aversion, Robustness, and the Variational Representation of Preferences. _Econometrica_ 74 (6):1447–1498.
* Maccheroni et al. (2006b) ———. 2006b. Dynamic Variational Preferences. _Journal of Economic Theory_ 128 (1):4–44.
* Malenko and Tsoy (2018) Malenko, Andrey and Anton Tsoy. 2018. Asymmetric information and security design under Knightian uncertainty. _Available at SSRN 3100285_.
* Miao and Rivera (2016) Miao, Jianjun and Alejandro Rivera. 2016. Robust Contracts in Continuous Time. _Econometrica_ 84 (4):1405–1440.
* Nutz (2013) Nutz, Marcel. 2013. Random $G$-expectations. _The Annals of Applied Probability_ 23 (5):1755–1777.
* Peng (2007) Peng, Shige. 2007. G-Expectation, G-Brownian Motion and Related Stochastic Calculus of Itô type. _Stochastic Analysis and Applications_ 541–567.
* Prat and Jovanovic (2014) Prat, Julien and Boyan Jovanovic. 2014. Dynamic Contracts when the Agent’s Quality is Unknown. _Theoretical Economics_ 9 (3):865–914.
* Sannikov (2008) Sannikov, Yuliy. 2008. A Continuous-Time Version of the Principal-Agent Problem. _The Review of Economic Studies_ 75 (3):957–984.
* Sufi (2007) Sufi, Amir. 2007. Bank lines of credit in corporate finance: An empirical analysis. _The Review of Financial Studies_ 22 (3):1057–1088.
* Szydlowski (2012) Szydlowski, Martin. 2012. Ambiguity in Dynamic Contracts. Working Paper, University of Minnesota.
* Williams (2008) Williams, Noah. 2008. On Dynamic Principal-Agent Problems in Continuous Time. Working Paper, University of Wisconsin at Madison.
* Wilson (1987) Wilson, Robert. 1987. Game-Theoretic Analyses of Trading Processes.
* Wolitzky (2016) Wolitzky, Alexander. 2016. Mechanism design with maxmin agents: Theory and an application to bilateral trade. _Theoretical Economics_ 11 (3):971–1004.
* Woodford (2010) Woodford, Michael. 2010. Robustly Optimal Monetary Policy with Near-Rational Expectations. _The American Economic Review_ 274–303.
* Zhang et al. (2005) Zhang, Lan, Per A Mykland, and Yacine Aït-Sahalia. 2005. A tale of two time scales: Determining integrated volatility with noisy high-frequency data. _Journal of the American Statistical Association_ 100 (472):1394–1411.
* Zhu (2013) Zhu, John Y. 2013. Optimal Contracts with Shirking. _The Review of Economic Studies_ 80 (2):812–839.
* Zhu (2016) ———. 2016. Renegotiation of Dynamically Incomplete Contracts.
## Appendix A Proofs and derivations for section 2
### A.1 Proofs for subsection 2.2.3
###### Lemma A.1.
$s(\theta)$ can be expressed as $v(\theta)+\mu^{2}+\sigma^{2}$ where
$v(\theta)$ is the unique positive minimizer of
$\frac{1}{2}\mu^{2}C^{2}\frac{1}{v+\mu^{2}+\sigma^{2}}+\frac{\theta}{2}\frac{1}{\sigma^{2}}(v+\mu^{2}+\sigma^{2})-\frac{\theta}{2}\log(v+\sigma^{2})+D.$
where
$\displaystyle C$ $\displaystyle=(b-W_{0}R_{f})$ $\displaystyle D$
$\displaystyle=-\frac{1}{2}(b-W_{0}R_{f})^{2}-\frac{\theta}{2}.$
Additionally, $v(\theta)$ is strictly decreasing in $\theta$ and we have the
limits $\lim_{\theta\to\infty}v(\theta)=0$ and $\lim_{\theta\to
0}v(\theta)=\infty$.
### Proof of lemma A.1
Write $C=(b-W_{0}R_{f})$. Then the objective function in (4) can be written as
$\frac{1}{2}\mu^{2}C^{2}\frac{1}{s}+\frac{\theta}{2}\frac{1}{\sigma^{2}}s-\frac{\theta}{2}\log(s-\mu^{2})+D$
where $D$ is a constant that does not depend on $s$. We know that the
minimizing $s$ is unique and satisfies $s>\mu^{2}+\sigma^{2}$. Now, perform
the following change-of-variables. Define $v=s-\mu^{2}-\sigma^{2}$. The
objective function, written now as a function of $v$ is given by
$\frac{1}{2}\mu^{2}C^{2}\frac{1}{v+\mu^{2}+\sigma^{2}}+\frac{\theta}{2}\frac{1}{\sigma^{2}}(v+\mu^{2}+\sigma^{2})-\frac{\theta}{2}\log(v+\sigma^{2})+D$
Clearly the minimizing $v$ is unique and satisfies $v>0$. Since the objective
function is differentiable, the first-order condition gives a necessary
condition that the optimal $v$ must satisfy. Note that this is not a
sufficient condition unless there is a unique positive solution. The first-
order condition for $v$ simplifies to the following cubic polynomial equation
for $v$,
$0=\frac{\theta}{2}\frac{1}{\sigma^{2}}v^{3}+\frac{\theta}{\sigma^{2}}(\mu^{2}+\sigma^{2})v^{2}+\left[\frac{\theta}{2\sigma^{2}}(\mu^{2}+\sigma^{2})^{2}-\frac{1}{2}\mu^{2}C^{2}\right]v-\frac{\mu^{2}C^{2}\sigma^{2}}{2}.$
To see that this equation has a unique positive solution, note that the terms
of the cubic equation are in descending order, that the first two coefficients
are positive and that the last is negative. (The third coefficient has
ambiguous sign.) Hence, by Descartes' rule of signs, the cubic equation has
a unique positive root.
It follows from Topkis’s theorem that $v(\theta)$ is decreasing in $\theta$.
The limits follow immediately. ∎
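The lemma's characterization can be checked numerically. The sketch below is illustrative only: `mu`, `sigma2`, and `C` are placeholder values, not quantities from the paper. It minimizes the objective from the proof directly in $v$ and verifies that the minimizer is positive and decreasing in $\theta$:

```python
import numpy as np
from scipy.optimize import minimize_scalar

mu, sigma2, C = 0.5, 1.0, 2.0   # placeholder parameters
A = mu**2 + sigma2              # the recurring constant mu^2 + sigma^2

def objective(v, theta):
    # Objective from the proof of lemma A.1; the constant D is dropped
    # since it does not affect the minimizer.
    return (0.5 * mu**2 * C**2 / (v + A)
            + theta / (2.0 * sigma2) * (v + A)
            - theta / 2.0 * np.log(v + sigma2))

def v_star(theta):
    # Unique positive minimizer v(theta).
    res = minimize_scalar(lambda v: objective(v, theta),
                          bounds=(1e-12, 1e6), method="bounded")
    return res.x

thetas = [0.1, 0.5, 1.0, 5.0, 25.0]
vs = [v_star(t) for t in thetas]
# v(theta) > 0 and strictly decreasing, consistent with the limits
# v -> infinity as theta -> 0 and v -> 0 as theta -> infinity.
assert all(v > 0 for v in vs)
assert all(a > b for a, b in zip(vs, vs[1:]))
```

Minimizing the objective directly, rather than solving the cubic first-order condition, avoids having to select among the cubic's roots.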
Note that proposition 2.2 follows immediately from lemma A.1 and the
observation that
$\nu^{2}(\theta)=\frac{v(\theta)}{\sigma^{2}}+1.$
Proposition 2.1 follows from lemma A.1 and the observation that
$\phi^{*}(\theta)=\frac{1}{v(\theta)+\mu^{2}+\sigma^{2}}\mu(b-W_{0}R_{f}).$
### A.2 Portfolio choice with “multiplier” preferences
We consider the following portfolio choice problem with “unconstrained” or
“multiplier” preferences:
$\sup_{\phi\in\mathbb{R}}\inf_{M\geq
0,\mathbb{E}_{P}[M]=1}\mathbb{E}_{P}[MU(\widetilde{W})]+\theta\Phi^{u}(M)$
(33)
where
$\displaystyle\Phi^{u}(M)$ $\displaystyle=\mathbb{E}_{P}[M\log M]$
$\displaystyle U(\widetilde{W})$
$\displaystyle=-\frac{1}{2}(\widetilde{W}-b)^{2}$ $\displaystyle\widetilde{W}$
$\displaystyle=W_{0}R_{f}+\phi\widetilde{R}$ $\displaystyle\widetilde{R}$
$\displaystyle\overset{P}{\sim}\text{Normal}(\mu,\sigma^{2}).$
As in the text, I assume that $\mu>0$ and $b>W_{0}R_{f}$. Then the solution to
(33) has the following properties:
* •
The minimizing $M$ implies a Normal distribution for $\widetilde{W}$.
* •
The optimal portfolio weight $\phi_{u}(\theta)$ is increasing in $\theta$.
* •
The worst-case mean $\mu_{u}(\theta)$ is increasing in $\theta$, and
$\mu_{u}(\infty)=\mu$.
* •
The worst-case variance $\sigma^{2}_{u}(\theta)$ is decreasing in $\theta$,
and $\sigma_{u}^{2}(\infty)=\sigma^{2}$.
I give the following argument: It follows from equation (3) that the
minimizing $M$ has an exponential tilting form. This implies that
$\widetilde{W}$ will have a normal distribution under the change-of-measure
induced by $M$, with distorted mean and variance $\widetilde{\mu}$ and
$\widetilde{\sigma}^{2}$ respectively. $\Phi^{u}(M)$ can then be expressed in
terms of $\widetilde{\mu}$ and $\widetilde{\sigma}^{2}$ as
$\Phi^{u}(M)=\frac{1}{2}\left[\frac{\widetilde{\sigma}^{2}}{\sigma^{2}}+\frac{1}{\sigma^{2}}(\widetilde{\mu}-\mu)^{2}-\log\left(\frac{\widetilde{\sigma}^{2}}{\sigma^{2}}\right)-1\right]$
Treating $\phi,\widetilde{\mu}$, and $\widetilde{\sigma}^{2}$ as fixed, we see
that
$\mathbb{E}\left[MU(\widetilde{W});\phi\right]=-\frac{1}{2}\phi^{2}(\widetilde{\mu}^{2}+\widetilde{\sigma}^{2})+\phi\widetilde{\mu}(b-W_{0}R_{f})-\frac{1}{2}(b-W_{0}R_{f})^{2}$
Maximizing over $\phi$, we see that
$\phi(M)=\frac{1}{\widetilde{\mu}^{2}+\widetilde{\sigma}^{2}}\widetilde{\mu}(b-W_{0}R_{f})$
Now, substituting in the optimized value of $\phi(M)$ we obtain
$\mathbb{E}\left[MU(\widetilde{W})\right]=\frac{1}{2}\frac{\widetilde{\mu}^{2}}{\widetilde{\mu}^{2}+\widetilde{\sigma}^{2}}(b-W_{0}R_{f})^{2}-\frac{1}{2}(b-W_{0}R_{f})^{2}.$
Define
$\displaystyle
L(\widetilde{\mu},\widetilde{\sigma}^{2};\theta)=\frac{1}{2}\frac{\widetilde{\mu}^{2}}{\widetilde{\mu}^{2}+\widetilde{\sigma}^{2}}(b-W_{0}R_{f})^{2}-\frac{1}{2}(b-W_{0}R_{f})^{2}+\frac{\theta}{2}\left[\frac{\widetilde{\sigma}^{2}}{\sigma^{2}}+\frac{1}{\sigma^{2}}(\widetilde{\mu}-\mu)^{2}-\log\left(\frac{\widetilde{\sigma}^{2}}{\sigma^{2}}\right)-1\right].$
Observe that
$L(\widetilde{\mu},\widetilde{\sigma}^{2};\theta)=\max_{\phi}\mathbb{E}\left[MU(\widetilde{W})\right]+\theta\Phi^{u}(M)$
when $M$ is restricted to imply that
$\widetilde{W}\sim\text{Normal}(\widetilde{\mu},\widetilde{\sigma}^{2})$. The
solution for $\mu_{u}(\theta)$ and $\sigma^{2}_{u}(\theta)$ can thus be
obtained by solving
$\min_{\widetilde{\mu},\widetilde{\sigma}^{2}\geq
0}L(\widetilde{\mu},\widetilde{\sigma}^{2};\theta).$ (34)
It follows directly from Topkis’ monotonicity theorem that $\mu_{u}(\theta)$
is strictly increasing in $\theta$, or equivalently, strictly decreasing in
$1/\theta$. Since $\mu_{u}(\infty)=\mu$, we see that $\mu_{u}(\theta)<\mu$.
The first-order condition for $\sigma^{2}_{u}(\theta)$ implies that
$\frac{1}{2}\frac{\mu_{u}(\theta)^{2}}{\left(\mu_{u}(\theta)^{2}+\sigma^{2}_{u}(\theta)\right)^{2}}(b-W_{0}R_{f})^{2}=\frac{\theta}{2}\left[\frac{1}{\sigma^{2}}-\frac{1}{\sigma^{2}_{u}(\theta)}\right]$
from which we see that $\widetilde{\sigma}^{2}$ must be strictly greater than
$\sigma^{2}$. It follows from Topkis’ theorem that $\sigma^{2}_{u}(\theta)$ is
strictly decreasing in $\theta$.
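These comparative statics can be illustrated numerically. The sketch below uses placeholder parameter values (not calibrated to anything in the paper): it minimizes $L(\widetilde{\mu},\widetilde{\sigma}^{2};\theta)$ over the distorted mean and variance and checks that the worst case distorts the mean down and the variance up, with the mean distortion shrinking as $\theta$ grows:

```python
import numpy as np
from scipy.optimize import minimize

mu, sigma2 = 0.3, 1.0     # reference mean and variance (placeholders)
K = 2.0                   # stands in for b - W_0 R_f > 0

def L(x, theta):
    m, s2 = x             # distorted mean and variance
    payoff = 0.5 * m**2 / (m**2 + s2) * K**2 - 0.5 * K**2
    penalty = 0.5 * theta * (s2 / sigma2 + (m - mu)**2 / sigma2
                             - np.log(s2 / sigma2) - 1.0)
    return payoff + penalty

def worst_case(theta):
    res = minimize(L, x0=[mu, sigma2], args=(theta,),
                   bounds=[(None, None), (1e-8, None)])
    return res.x          # (mu_u(theta), sigma2_u(theta))

m_lo, s_lo = worst_case(1.0)
m_hi, s_hi = worst_case(10.0)
# The worst case pulls the mean below mu and pushes the variance above
# sigma2; a larger penalty theta moves the worst-case mean back toward mu.
assert 0 < m_lo < m_hi < mu
assert s_lo > sigma2 and s_hi > sigma2
```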
## Appendix B Proofs for section 4
### Proof of proposition 4.4
Note that $F(W)$ is necessarily a concave solution to the HJBI equation
$rF(W)=\sup_{c\geq 0,\phi\geq\lambda}\ \inf_{\nu\geq 0}\
\mu-c+\xi(\nu)+(\gamma
W-c)F^{\prime}(W)+\frac{1}{2}\phi^{2}\sigma^{2}\nu^{2}F^{\prime\prime}(W)$
on $(0,\overline{W})$ with optimal controls $c=0$, $\phi=\lambda$, and
$\nu^{2}=\nu^{*}(W)^{2}$. It follows that for any $\phi\geq\lambda$ we have
$\inf_{\nu\geq 0}\ \mu+\xi(\nu)+\gamma
WF^{\prime}(W)+\frac{1}{2}\phi^{2}\sigma^{2}\nu^{2}F^{\prime\prime}(W)-rF(W)\leq
0$
Define
$G_{t}=\int_{0}^{t}e^{-rs}\left(dY_{s}-dC_{s}+\xi(\nu_{s})ds\right)+e^{-rt}F(W_{t})$
Then note that
$\displaystyle e^{rt}dG_{t}=\left(\mu+\xi(\nu_{t})+\gamma
W_{t}F^{\prime}(W_{t})+\frac{1}{2}\phi_{t}^{2}\sigma^{2}\nu_{t}^{2}F^{\prime\prime}(W_{t})-rF(W_{t})\right)dt$
$\displaystyle-(1+F^{\prime}(W_{t}))dC_{t}+(1+\phi_{t}F^{\prime}(W_{t}))\sigma\nu_{t}dZ_{t}$
Note that $F^{\prime}(W_{t})\geq-1$ so that $-(1+F^{\prime}(W_{t}))\leq 0$.
Additionally, under the worst-case $\nu_{t}$ we have
$\mu+\xi(\nu_{t})+\gamma
W_{t}F^{\prime}(W_{t})+\frac{1}{2}\phi_{t}^{2}\sigma^{2}\nu_{t}^{2}F^{\prime\prime}(W_{t})-rF(W_{t})\leq
0.$
Thus $G_{t}$ is a supermartingale. It is a martingale only if
$\phi_{t}=\lambda,W_{t}\leq\overline{W}$ for $t\geq 0$ and $C_{t}$ is
increasing only when $W_{t}\geq\overline{W}$.
Now, we can bound the principal’s time-0 payoff for an arbitrary incentive
compatible contract. Note that $F(W_{\tau})=L$. We have
$\displaystyle\inf_{\nu\geq
0}\mathbb{E}\left[\int_{0}^{\tau}e^{-rs}\left\\{dY_{s}-dC_{s}+\xi(\nu_{s})ds\right\\}+e^{-r\tau}L\right]$
$\displaystyle=\inf_{\nu\geq
0}\mathbb{E}\left[G_{t\wedge\tau}+\mathbf{1}_{t\leq\tau}\left(\int_{t}^{\tau}e^{-rs}\left\\{dY_{s}-dC_{s}+\xi(\nu_{s})ds\right\\}+e^{-r\tau}L-e^{-rt}F(W_{t})\right)\right]$
$\displaystyle\leq\underset{\leq
G_{0}=F(W_{0})}{\underbrace{\mathbb{E}\left[G_{t\wedge\tau}\right]}}+e^{-rt}\inf_{\nu\geq
0}\mathbb{E}\left[\mathbf{1}_{t\leq\tau}\underset{\leq\mu/r-W_{t}}{\underbrace{\mathbb{E}_{t}\left[\int_{t}^{\tau}e^{-r(s-t)}\left\\{dY_{s}-dC_{s}+\xi(\nu_{s})ds\right\\}+e^{-r(\tau-t)}L\right]}}-F(W_{t})\right]$
where the second inequality follows from the first-best bound. Since
$F^{\prime}(W)\geq-1$ we have $\mu/r-W-F(W)\leq\mu/r-L$. Letting $t\to\infty$
we see that
$\inf_{\nu\geq
0}\mathbb{E}\left[\int_{0}^{\tau}e^{-rs}\left\\{dY_{s}-dC_{s}+\xi(\nu_{s})ds\right\\}+e^{-r\tau}L\right]\leq
F(W_{0}).$
∎
### Proof of proposition 4.6
Applying Dynkin’s formula to write the value function as an integral of the
differential generator and then differentiating under the integral sign and
applying the envelope theorem gives
$\displaystyle\frac{\partial}{\partial\theta}F(W)=\mathbb{E}\left[\int_{0}^{\tau}e^{-rt}\frac{1}{2}\left(\nu^{*}(W_{t})^{2}-1-\log(\nu^{*}(W_{t})^{2})\right)\
dt\bigg{|}W_{0}=W\right]>0.$
∎
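The sign of the integrand rests on the elementary inequality $x-1-\log x\geq 0$ for $x>0$, with equality only at $x=1$ (here $x=\nu^{*}(W_{t})^{2}$); a quick numerical check:

```python
import math

def g(x):
    # Penalty integrand with x = nu^2: zero at the reference model
    # (nu = 1), strictly positive for any distorted volatility.
    return x - 1.0 - math.log(x)

assert g(1.0) == 0.0
assert all(g(x) > 0 for x in [0.1, 0.5, 0.9, 1.1, 2.0, 10.0])
```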
### Proof of proposition 4.7
Differentiate the boundary condition $rF(\overline{W})+\gamma\overline{W}=\mu$
and use the smooth pasting condition $F^{\prime}(\overline{W})=-1$ to obtain
$r\left[\frac{\partial}{\partial\theta}F(\overline{W})-\frac{\partial\overline{W}}{\partial\theta}\right]+\gamma\frac{\partial\overline{W}}{\partial\theta}=0$
which gives
$\frac{\partial\overline{W}}{\partial\theta}=-\frac{r}{\gamma-r}\frac{\partial}{\partial\theta}F(\overline{W})<0.$
∎
### Proof of proposition 4.8
Let $h(W)$ denote the agent’s value function under the contract described in
proposition 4.4. The HJBI equation for the agent is given by
$\gamma h(W)=\lambda\mu(1-a)+h^{\prime}(W)(\gamma
W+\lambda\mu(a-1))+\frac{1}{2}\lambda^{2}\sigma^{2}\nu^{2}h^{\prime\prime}(W)+\frac{\tilde{\theta}}{2}\left\\{\nu^{2}-1-\log\nu^{2}\right\\}$
on $[0,\overline{W}]$ with boundary conditions $h(0)=0$ and
$h^{\prime}(\overline{W})=1$. Now, guess and verify that $h(W)=W$ is a
solution with optimal controls $\nu(W)=1$ and $a(W)=1$. It is easy to show
that this solution must be unique. ∎
### Proof of proposition 4.9
This follows immediately from proposition 4.7. ∎
### Proof of proposition 4.10
Differentiating (26) w.r.t. $W$ we obtain
$0=(\gamma-r)F^{\prime}(W)+\gamma
WF^{\prime\prime}(W)+\frac{\theta}{2}\frac{\lambda^{2}\sigma^{2}F^{\prime\prime\prime}(W)}{\theta+\lambda^{2}\sigma^{2}F^{\prime\prime}(W)}$
Note that the first term is negative since $\gamma>r$ and $F^{\prime}(W)<0$ on
the interval $(R,\overline{W}]$. The second term is negative since
$F^{\prime\prime}(W)<0$ for $W<\overline{W}$. Thus the third term must be
strictly positive. This can only happen if $F^{\prime\prime\prime}(W)$ is
strictly positive. The result now follows from (25). ∎
### Proof of proposition 4.12
This follows immediately from proposition 4.11 and Appendix B of DeMarzo and
Sannikov (2006). ∎
# Magnetoresistive Sensor Detectivity: A Comparative Analysis
J. E. Davies<EMAIL_ADDRESS>J. D. Watts J. Novotny D. Huang P. G. Eames
NVE Corporation, Eden Prairie, MN 55344, USA
###### Abstract
We report on the noise performance characteristics of magnetic sensors using
both magnetic tunnel junction (MTJ) and giant magnetoresistance (GMR)
elements. Each sensor studied has a notably different noise and detectivity.
Of the sensors we measured, those based on GMR multilayers have the lowest
noise and detectivity. However, the GMR sensor also has a significantly
smaller linear range. To make a direct comparison between sensors we scale the
linear operating ranges of each sensor to be the same. This is the
phenomenological equivalent of modifying the flux concentration. Upon scaling
the low frequency detectivity of the TMR sensors becomes essentially equal to
that of the GMR sensor. Using the scaling approach we are able to place the
detectivity in the context of other key parameters, namely size and power
consumption. Lastly, we use this technique to examine the upper limit for
magnetoresistive sensor performance based on a notional MTJ sensor using
present record setting TMR values.
Magnetoresistive (MR) technologies have been the fundamental building block
for spintronic devices over the last three decades.Baibich _et al._ (1988);
Binasch _et al._ (1989); Dieny _et al._ (1991); Parkin _et al._ (1999) This
is due to their ability to serve as highly sensitive transducers that readily
respond to magnetic fields and spin currents.Slonczewski (2002); Ralph and
Stiles (2008); Sankey _et al._ (2008); Wang, Anderson, and Daughton (1997);
Ikeda _et al._ (2008); Wang _et al._ (2009); Wolf _et al._ (2001) Their
small size and high sensitivity are utilized as sensor components across
several industries including the biomedical, navigation and industrial
automation markets.Graham, Ferreira, and Freitas (2004); Caruso (1997); Schewe
and Schelter (1997); Fujiwara _et al._ (2018) They are also at the core of
the magnetic data storage industry, being utilized as read heads in hard disk
drives and the storage medium for magnetic random access memory (MRAM)
bits.McFadyen, Fullerton, and Carey (2006); Parkin _et al._ (1999) From the
sensor perspective, the MR sensor’s CMOS compatible fabrication process,
robust performance, low power operation and low cost are clear advantages over
other technologies. These advantages poise MR sensors to play a key role in
technologies benefiting from advanced, compact, and cost effective sensing,
such as the internet of things (IoT), smart grids and electric cars. Tahoori
_et al._ (2018); Liu _et al._ (2019); Ouyang _et al._ (2019)
With many types of MR sensors available it is often difficult to determine the
best sensor for a particular application. Typically, application engineers
have used maximization of the % MR and the minimization of noise as key
metrics.Nowak _et al._ (1998); Ikeda _et al._ (2008); Stutzke _et al._
(2005) These metrics are straightforward for the researcher to focus on with
much work having gone into the development of MR sensor noise
phenomenology.Nowak _et al._ (1998); Ingvarsson _et al._ (2000); Klaassen,
Xinzhi Xing, and van Peppen (2004); Zheng _et al._ (2019) Experimental noise
characterization has shown that there can be substantial variation in the
noise characteristics among sensors.Stutzke _et al._ (2005); Egelhoff _et
al._ (2009); Deak, Zhou, and Shen (2017) It is important to realize that such
comparisons between specific sensors can be misleading in that they do not
place the comparison in the context of other important specifications such as
sensor size, hysteresis, power consumption, linear range and cost. Performing
this more rigorous contextual comparison is non-trivial, but more telling.
This work places magnetoresistive sensor noise characterization in a more
parameterized context. We start with a noise and performance study of three
sensor variations. Each sample has a different field response and noise
spectrum. Normalizing the noise data to a key parameter, namely the linear
range, we obtain a more contextual picture of each sensor’s performance and
demonstrate that such normalization enables a more fundamental comparison
between different sensors. Finally, we look ahead to what is likely the
ultimate limit of MR sensor detectivity.
The noise measurement setup is shown in Fig.1. The sensors, configured as
Wheatstone bridges, are soldered to circuit boards and placed in a triple
layered $\mu$-metal container for shielding from environmental magnetic
fields. The sensors were all battery powered to 3 V and, in the case of the
unipolar sensor, a 0.5 mT field was applied along its sense direction to put
the sensor in the middle of its operate range. The outputs of the bridges,
$+V_{out}$ and $-V_{out}$ were input to a two-channel preamplifier. The
differential output was then captured by a spectrum analyzer.
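For reference, within the linear range the differential bridge signal follows directly from the sensitivity: $V_{out}=S\cdot V_{bias}\cdot B$. A minimal sketch using the 3 V bias of this setup and an illustrative sensitivity of 25 mV/V/mT:

```python
def bridge_output_mV(sens_mV_per_V_mT, v_bias_V, field_mT):
    # Differential Wheatstone-bridge output within the linear range.
    return sens_mV_per_V_mT * v_bias_V * field_mT

# 25 mV/V/mT at a 3 V bias in a 1 mT field -> 75 mV differential output
out_mV = bridge_output_mV(25.0, 3.0, 1.0)
assert out_mV == 75.0
```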
Figure 1: Illustration of the noise measurement setup. The sensor resides in
a mu-metal can and is battery powered to 3 V. Helmholtz coils are available
for field biasing of the sensor. Sensor outputs are fed to a low-noise
preamplifier. The differential signal is then captured on a spectrum analyzer.
We limit this study to three often utilized sensor compositions (layer
thicknesses are in nm):
1. Bipolar MTJ sensor with superparamagnetic free layer (SPMTJ) with layer
composition: Ta(5)/Ru(5)/IrMn(11)/CoFe(2)/Ru(0.9)/CoFeB(2.5)/
MgO(2)/CoFeB(1.2)/Ta(2)/Ru(10)
2. Bipolar MTJ sensor with full film free layer (FFMTJ) with layer composition:
Ta(5)/Ru(5)/IrMn(11)/CoFe(2)/Ru(0.9)/CoFeB(2.5)/
MgO(2)/CoFeB(2)/NiFe(6)/Ta(2)/Ru(10)
3. Omnipolar GMR multilayer sensorZou _et al._ (2001) (GMRML) with layers grown
on a Ta(5) seed and comprised of four CoFe(1)/NiFeCo(2)/CoFe(1) ferromagnetic
trilayers separated by CuAgAu(1.5) GMR spacers and capped with Ta(20)
All films were grown by magnetron sputtering onto Si wafers coated with 200 nm
vapor deposited SiNx. The SPMTJ and FFMTJ films were annealed ex-situ in a 400
mT field at 350 °C in a forming gas environment to both crystallize the MgO layer
and set the pinning layer magnetization. No annealing was performed on the
GMRML films.
The sensor elements were fabricated using standard photolithography processes.
SPMTJ and FFMTJ resistor elements are series connected arrays of forty 5
$\mu$m diameter MTJs. Individual resistors were then rotated and co-packaged
to form a push-pull sensor configuration.
The GMRML sensor was monolithically fabricated with four serpentine-style
resistors. Thick NiFe layers were then used for the dual purpose of shielding
two resistors and providing flux concentration for the active resistors.
Table 1: Nominal sensor parameters
Sensor Type | MR (%) | $R_{0}$ ($k\Omega$) | $B_{sat}$ (mT) | $Sens_{max}$ (mV/V/mT)¹ | Area² ($mm^{2}$)
---|---|---|---|---|---
SPMTJ | 60 | 50 | 5 | 25 | 6.25
FFMTJ | 120 | 10 | 15 | 25 | 6.25
GMRML | 15 | 5 | 1 | 12 | 20
¹ 1 mV/V/mT = 0.1 mV/V/Oe.
² Area is the circuit board area occupied by the sensor. SPMTJ and FFMTJ sensors are in TDFN-6 packages; the GMRML is in an SOIC-8.
The MR ratio, bridge resistance (at $\mu_{0}H$ = 0 mT), saturation field
($B_{sat}$) and sensitivity for each sensor are listed in Table 1. Sensor
outputs and sensitivities as a function of applied field for the three sensors
are shown in Fig. 2 as black and red curves, respectively.
Figure 2: (left axes, black symbols) Bridge outputs and (right axes, red
symbols) sensitivities for (a) SPMTJ sensor, (b) FFMTJ and (c) GMRML sensors.
Noise measurements were performed and detectivity evaluated at H = 0 mT for
the SPMTJ and FFMTJ and H = 0.5 mT for the GMRML.
Fig. 2a shows the typical performance of SPMTJ sensors. The bridge output has
near zero hysteresis due to the thermal demagnetizing effects of the
superparamagnetic free layer. There is no offset in the output. This is due to
the fast randomization of the free layer’s magnetization and granular
structure of the film negating any of the traditional coupling effects. The
peak sensitivity of 25 mV/V/mT (Fig. 2a, red) occurs at $H$ = 0 mT. The
sensitivity decreases precipitously to either side of $H$ = 0 mT. The full
width at half maximum (FWHM) of the sensitivity "peak" is approximately 10 mT.
The sharpness of the sensitivity peak limits the linearity to +/- 2 mT.
The FFMTJ sensor response is shown in Fig. 2b. The FFMTJ was tailored to
minimize hysteresis and offset while extending the linear range through the
use of shape anisotropy. Interestingly, the maximum sensitivity is comparable
to the SPMTJ sensor (Fig.2b, red), however the sensitivity "peak" is much
broader with a FWHM of 30 mT. This results in the linear range of the sensor
being +/-10 mT, a factor of five increase over the SPMTJ films while
preserving the sensitivity.
The GMRML sensor response (Fig. 2c) is exotic compared to the bipolar films.
The multilayer structure results in a unipolar sensor response. Permalloy flux
concentrators are used in order to maximize the sensitivity and tailor the
linear range without modifying the underlying film properties. In this case,
the flux concentrators employed allow 10x reduction in the $B_{sat}$ (note the
field axis compared to the MTJ sensors). The flux concentration allows the
maximum sensitivity (Fig. 2c, red) to be comparable to the MTJ sensors (Table 1).
This peak sensitivity occurs near $H$ = 0 mT. Without a bias field or magnet
in place, the omnipolar device produces a discontinuity in the bridge output
at $H$ = 0 mT. The usable range of the unbiased sensor shown is between 0.1 mT
and 1.5 mT, which has an average sensitivity of 12 mV/V/mT.
Figure 3: Noise spectra for the GMRML (black), SPMTJ (red) and FFMTJ (blue)
sensor elements in Fig. 2. Each sensor was biased to 3 V for the measurement
as illustrated in Fig. 1.
Noise data for the three sensors at their average operating fields (0 mT
for the SPMTJ and FFMTJ and 0.5 mT for the GMRML) is shown in Fig. 3. At low
frequencies the noise is dominated by the magnetic $1/f$ contribution. The
SPMTJ (Fig. 3, red) and FFMTJ (Fig. 3, blue) sensors have a similar $1/f$
character (slope). However, the SPMTJ sensor is roughly 10x less noisy. The
GMRML sensor’s spectrum (Fig. 3, black) has a different character with more
curvature and overall lower noise throughout the frequency range.
Around 1 kHz, the SPMTJ spectrum (Fig. 3, red) flattens out as the output noise
approaches the Johnson noise floor. The frequency at which this transition
takes place is increased for lower resistance devices. Thus, by 20 kHz the
difference in noise between the high resistance SPMTJ and the FFMTJ has been
reduced, and at still higher frequencies, the noise of the SPMTJ likely
surpasses that of the FFMTJ.
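The resistance dependence of the thermal noise floor follows from the Johnson noise of the bridge, $v_{n}=\sqrt{4k_{B}TR}$. A short sketch (room temperature assumed; resistances taken from Table 1) compares the floors of the three bridges:

```python
import math

kB, T = 1.380649e-23, 300.0      # Boltzmann constant (J/K), room temp (K)

def johnson_nV(R_ohm):
    # Johnson noise voltage density in nV/sqrt(Hz).
    return math.sqrt(4.0 * kB * T * R_ohm) * 1e9

floors = {"SPMTJ": johnson_nV(50e3),   # ~29 nV/sqrt(Hz)
          "FFMTJ": johnson_nV(10e3),   # ~13 nV/sqrt(Hz)
          "GMRML": johnson_nV(5e3)}    # ~9 nV/sqrt(Hz)
# A lower-resistance bridge has a lower Johnson floor, so its 1/f noise
# dominates up to a higher frequency before the floor is reached.
assert floors["GMRML"] < floors["FFMTJ"] < floors["SPMTJ"]
```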
The GMRML sensor also begins to saturate at high frequency, but does so more
gradually (Fig. 3, black). Previous noise measurements of this particular make
of GMRML sensor (an NVE Corp. AA002) by Stutzke, et al. show the noise floor
is reached around 100 Hz. The GMRML sensor from this study appears to have
pushed the floor to higher frequencies, which is likely due to our sensors being
biased to 3 V compared to the 1.2 V in the Stutzke study, resulting in a
larger electric $1/f$ contribution.Stutzke _et al._ (2005); Jiang _et al._
(2004)
Figure 4: Detectivity versus frequency for the GMRML (black), SPMTJ (red) and
FFMTJ (blue) sensor elements in Fig. 2. The 2.5x difference between the SPMTJ
and GMRML sensors allows for the SPMTJ sensor to have the lowest detectivity
at 10 kHz.
Dividing the noise spectra by the sensitivity yields the detectivity, i.e. the
lowest detectable field. This is shown in Fig. 4 for each sensor. The GMRML
sensor (Fig. 4, black triangles) has the lowest detectivity. The SPMTJ film
has the second lowest detectivity; still 67% larger than the GMRML’s
throughout the frequency range. The detectivity for both the SPMTJ and GMRML
drops below 1 nT/$\sqrt{Hz}$ at 10 kHz with values of 0.8 nT/$\sqrt{Hz}$ and
0.53 nT/$\sqrt{Hz}$, for the SPMTJ and GMRML sensors, respectively.
It is important to note that while the GMRML sensor has the lowest
detectivity, it also has the smallest linear range with $B_{sat}$ = 1 mT.
According to the $B_{sat}$ values in Table I, a flux concentration of 5x and
15x for the SPMTJ and FFMTJ films, respectively, would result in sensors with
the same linear range. This provides for a more direct comparison of the
sensors.
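The rescaling relies on the fact that a flux concentration factor $k$ multiplies the field at the element: sensitivity scales by $k$, $B_{sat}$ by $1/k$, and, with the voltage noise unchanged, the detectivity (noise divided by sensitivity) by $1/k$. A sketch of the normalization, using the $B_{sat}$ and sensitivity values from Table 1; the noise densities are illustrative placeholders, not the measured spectra:

```python
# B_sat (mT) and sensitivity (mV/V/mT) from Table 1; the noise
# densities (nV/sqrt(Hz)) are illustrative placeholders.
sensors = {
    "SPMTJ": {"b_sat": 5.0,  "sens": 25.0, "noise_nV": 400.0},
    "FFMTJ": {"b_sat": 15.0, "sens": 25.0, "noise_nV": 4000.0},
    "GMRML": {"b_sat": 1.0,  "sens": 12.0, "noise_nV": 100.0},
}
V_BIAS, TARGET_BSAT = 3.0, 1.0    # volts; normalize to the GMRML's 1 mT

def detectivity_nT(s, k=1.0):
    # Concentration factor k boosts sensitivity by k;
    # detectivity = noise / sensitivity, in nT/sqrt(Hz).
    sens_V_per_T = s["sens"] * k * V_BIAS    # mV/mT is numerically V/T
    return s["noise_nV"] / sens_V_per_T

for name, s in sensors.items():
    k = s["b_sat"] / TARGET_BSAT  # 5x for SPMTJ, 15x for FFMTJ, 1x for GMRML
    print(name, "k =", k, "scaled detectivity:",
          round(detectivity_nT(s, k), 2), "nT/sqrt(Hz)")
```

With real measured noise densities in place of the placeholders, this reproduces the construction of Fig. 5 from Fig. 4.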
Figure 5: Detectivity versus frequency for the GMRML (black), SPMTJ (red) and
FFMTJ (blue) sensor elements. This time the detectivity curves are normalized
to have the same linear range. The normalization allows for a more direct
comparison of the sensor materials.
Fig. 5 shows the detectivity of each of the three sensors as if they were flux
concentrated to have the same $B_{sat}$. The GMRML and SPMTJ films now have
comparable detectivities of 6 nT/$\sqrt{Hz}$ and 8 nT/$\sqrt{Hz}$ at 1 Hz,
respectively. The SPMTJ detectivity drops below that of the GMRML for $f$ >
100 Hz. In contrast, the FFMTJ still has a detectivity of 30 nT/$\sqrt{Hz}$ at
1 Hz, only becoming comparable to the other sensors above 1 kHz.
This comparative analysis relies on the ability to manipulate the flux
concentration to create equivalent sensors. In practice, adding flux
concentration results in two main performance trade-offs. First, even modest
amounts of flux concentration (e.g. 5x or higher) will drastically increase
the sensor size. Second, adding flux concentration reduces and limits the
operating field range of the sensor.
The second trade-off presents the dilemma for magnetoresistive sensors of
choosing detectivity minimization versus having a robust operating field
range. This is an important problem to address in applications such as
surgical navigation and precision automation where fields on the order of 1 mT
are used, but sub-1 nT/$\sqrt{Hz}$ resolution is required. Surgical
navigation sensors also need to be very small; ideally less than 2 mm in any
direction.
Figure 6: Plot of sensitivity versus $B_{sat}$ for the (red) SPMTJ, (blue)
FFMTJ and (black) GMRML sensors. (purple) A notional sensor, assuming the
ability to create a device with the record MgO TMR experimentally achieved by
Ikeda, et al., is also shown. Ikeda _et al._ (2008); Zheng _et al._ (2019).
Lines take each sensor type from 1x to 20x flux concentration. The coincident
dashed line represents the potential ultimate limit of TMR sensor performance.
One option to address such requirements is to maximize the TMR ratio. It has
been shown that TMR ratios beyond 100% won’t significantly improve sensor
noise, but will allow for a reduced need for flux concentration, and hence a
higher linear range. Egelhoff _et al._ (2009) Fig. 6 shows the possible linear
ranges, i.e. $(B_{sat})$ versus sensitivities for various GMR and TMR sensors.
Each line shows when the sensor is flux concentrated between 1x (highest
$B_{sat}$ value) and 20x (lowest $B_{sat}$ value). Sensors at higher $B_{sat}$
and sensitivity correlate with larger TMR ratios.
The record room temperature TMR ratio is 604%, demonstrated by Ikeda et
al. Ikeda _et al._ (2008) The long-standing nature of this record makes it something
of an upper limit for MTJ devices. That particular MTJ is not practical for a
sensor for several reasons, most specifically it being in a pseudo-spin-valve
configuration and having large hysteresis. However, it can notionally be used
to show the limits of TMR sensor detectivity. Thus, included in Fig. 6 is a
hypothetical sensor with 604% TMR. The dashed line is used to delineate the
ultimate limits of TMR sensitivity versus $B_{sat}$. Comparing measured
literature values from the MR Sensor roadmap, it appears that MR sensor
detectivity will minimize around 10 pT/$\sqrt{Hz}$ at 1 Hz without significant
material changes or technology shifts. Zheng _et al._ (2019)
Fig. 7 shows models of the detectivity versus frequency for the SPMTJ, FFMTJ,
GMRML and notional film. The models constrain parameters such that the sensor
footprint and magnetic response are as identical as possible. We assume an
"earth’s field anomaly" application where $B_{sat}$ = 0.1 mT and there is a
reasonably small component with an active resistor area of 3 mm$^2$. The small
$B_{sat}$ and large resistor area allows for a significant drop in the
detectivity compared to the measured values in Fig. 4.
At low frequencies (1 Hz), where $1/f$ noise dominates, the detectivity of the
MTJ materials is comparable with the FFMTJ and notional films having the
lowest detectivity at roughly 10 pT/$\sqrt{Hz}$. This reaffirms the
diminishing impact of TMR > 100% as shown by other groups. Egelhoff _et al._
(2009) The SPMTJ sensor has the next lowest detectivity of 20 pT/$\sqrt{Hz}$.
This performance degradation is primarily attributed to the lower TMR ratio,
although this could also be hampered by active area (which is generally an
inaccessible parameter to an end user). The GMRML sensor has the highest
detectivity at 70 pT/$\sqrt{Hz}$.
Figure 7: Modeled ultimate detectivity versus frequency for the (red) SPMTJ,
(blue) FFMTJ, (black) GMRML and (purple) notional sensor reaching 604% TMR.
Ikeda _et al._ (2008) All four plots assume $B_{sat}$ = 0.1 mT, no additional
flux concentration and an active resistor area of 3 mm$^2$.
As is known, the $1/f$ contribution subsides at high frequencies, leaving
Johnson and shot noise as the remaining detectivity contributions. These are
dictated by the device resistance. With an RA product of nearly 400
k$\Omega$-$\mu$m$^2$, the FFMTJ film approaches 1 pT/$\sqrt{Hz}$ at 1 kHz. The GMRML
film’s low resistance results in 700 fT/$\sqrt{Hz}$ detectivity around 100
kHz. The Ikeda film, with an RA = 10 $\Omega$-$\mu$m$^2$, allows for the detectivity
to drop to 200 fT/$\sqrt{Hz}$ at 30 kHz.
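A rough model of the detectivity curves in Fig. 7 combines a $1/f$ term with a white (Johnson plus shot) floor, the two power spectral densities adding in quadrature. The coefficients below are illustrative placeholders, not fitted values:

```python
import numpy as np

def detectivity_model(freq, a_oneoverf, white_floor):
    """Simple MR-sensor detectivity model, in T/sqrt(Hz): a 1/f term
    plus a frequency-independent (Johnson + shot) floor, with the
    underlying power spectral densities added in quadrature."""
    return np.sqrt(a_oneoverf**2 / freq + white_floor**2)

freq = np.logspace(0, 5, 100)
# Placeholder coefficients giving ~10 pT/sqrt(Hz) at 1 Hz and a ~1 pT floor
d = detectivity_model(freq, a_oneoverf=10e-12, white_floor=1e-12)
```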
Much of the work on reaching fT detectivities involves some form of field or
signal modulation. Among these are microelectromechanical system-based (MEMS)
flux concentrators that serve to modulate the field at high frequencies where
Johnson noise is the only consideration. Edelstein _et al._ (2006) Another
approach is to use specialized flux concentrators. Pannetier _et al._ (2004)
Indeed, the utilization of other effects such as modulation by voltage
controlled magnetic anisotropy or spin transfer torques may also help to drop
the detectivity.
Interestingly, going to high frequencies can result in an increased
detectivity. Recent work by He et al. has shown that the detectivity in flux
concentrated MTJs can actually increase at large frequencies, as the
permeability of the flux concentrators decreases, limiting the minimum
detectivity in their sensor to 30 pT/$\sqrt{Hz}$. He _et al._ (2018) Thus, the
technique of sensor/field modulation may also be limited to a particular
frequency.
In conclusion, we have performed a comparative study of the noise and
detectivity in three classes of sensors. We have found that, while on first
inspection the GMRML sensor has the lowest detectivity, it also has the
largest size and power consumption. When the sensors are normalized to
$B_{sat}$, the differences are no longer evident. Extending the parameterized
comparisons to other materials, including the best demonstration Ikeda films,
shows that there is likely an ultimate performance limit for magnetoresistive
sensors in the 100s of fT range. However, normalizing a sensor’s noise
performance to a tunable parameter, such as the linear operating range,
provides a much clearer and more useful comparison. The hope is that this
work will
serve as a guide to MR sensor design, illustrating the present limits of the
technology.
We would like to acknowledge Cathy Nordman and Maria Torija for fruitful
discussions.
The data that support the findings of this study are available from the
corresponding author upon reasonable request.
## References
* Baibich _et al._ (1988) M. N. Baibich, J. M. Broto, A. Fert, F. N. Van Dau, and F. Petroff, “Giant Magnetoresistance of (001)Fe/(001)Cr Magnetic Superlattices,” Physical Review Letters 61, 2472–2475 (1988).
* Binasch _et al._ (1989) G. Binasch, P. Grünberg, F. Saurenbach, and W. Zinn, “Enhanced magnetoresistance in layered magnetic structures with antiferromagnetic interlayer exchange,” Physical Review B 39, 4828–4830 (1989).
* Dieny _et al._ (1991) B. Dieny, V. Speriosu, S. Parkin, B. Gurney, D. Wilhoit, and D. Mauri, “Giant magnetoresistive in soft ferromagnetic multilayers,” Physical Review B 43, 1297–1300 (1991).
* Parkin _et al._ (1999) S. S. P. Parkin, K. P. Roche, M. G. Samant, P. M. Rice, R. B. Beyers, R. E. Scheuerlein, E. J. O’Sullivan, S. L. Brown, J. Bucchigano, D. W. Abraham, Y. Lu, M. Rooks, P. L. Trouilloud, R. A. Wanner, and W. J. Gallagher, “Exchange-biased magnetic tunnel junctions and application to nonvolatile magnetic random access memory (invited),” Journal of Applied Physics 85, 5828 (1999).
* Slonczewski (2002) J. Slonczewski, “Currents and torques in metallic magnetic multilayers,” Journal of Magnetism and Magnetic Materials 247, 324–338 (2002).
* Ralph and Stiles (2008) D. Ralph and M. Stiles, “Spin transfer torques,” Journal of Magnetism and Magnetic Materials 320, 1190–1216 (2008).
* Sankey _et al._ (2008) J. C. Sankey, Y.-T. Cui, J. Z. Sun, J. C. Slonczewski, R. A. Buhrman, and D. C. Ralph, “Measurement of the spin-transfer-torque vector in magnetic tunnel junctions,” Nature Physics 4, 67–71 (2008).
* Wang, Anderson, and Daughton (1997) D. Wang, J. Anderson, and J. Daughton, “Thermally stable, low saturation field, low hysteresis, high GMR CoFe/Cu multilayers,” IEEE Transactions on Magnetics 33, 3520–3522 (1997).
* Ikeda _et al._ (2008) S. Ikeda, J. Hayakawa, Y. Ashizawa, Y. M. Lee, K. Miura, H. Hasegawa, M. Tsunoda, F. Matsukura, and H. Ohno, “Tunnel magnetoresistance of 604% at 300 K by suppression of Ta diffusion in CoFeB$/$MgO$/$CoFeB pseudo-spin-valves annealed at high temperature,” Applied Physics Letters 93, 082508 (2008).
* Wang _et al._ (2009) C. Wang, Y.-T. Cui, J. Z. Sun, J. A. Katine, R. A. Buhrman, and D. C. Ralph, “Sensitivity of spin-torque diodes for frequency-tunable resonant microwave detection,” Journal of Applied Physics 106, 053905 (2009).
* Wolf _et al._ (2001) S. A. Wolf, D. D. Awschalom, R. A. Buhrman, J. M. Daughton, S. von Molnár, M. L. Roukes, A. Y. Chtchelkanova, and D. M. Treger, “Spintronics: a spin-based electronics vision for the future.” Science (New York, N.Y.) 294, 1488–95 (2001).
* Graham, Ferreira, and Freitas (2004) D. L. Graham, H. A. Ferreira, and P. P. Freitas, “Magnetoresistive-based biosensors and biochips,” Trends in Biotechnology 22, 455 – 462 (2004).
* Caruso (1997) M. J. Caruso, “Applications of magnetoresistive sensors in navigation systems,” in _SAE Technical Paper_ (SAE International, 1997).
* Schewe and Schelter (1997) H. Schewe and W. Schelter, “Industrial applications of magnetoresistive sensors,” Sensors and Actuators A: Physical 59, 165 – 167 (1997), 1st European magnetic sensors and actuators conference.
* Fujiwara _et al._ (2018) K. Fujiwara, M. Oogane, A. Kanno, M. Imada, J. Jono, T. Terauchi, T. Okuno, Y. Aritomi, M. Morikawa, M. Tsuchida, N. Nakasato, and Y. Ando, “Magnetocardiography and magnetoencephalography measurements at room temperature using tunnel magneto-resistance sensors,” Applied Physics Express 11, 023001 (2018).
* McFadyen, Fullerton, and Carey (2006) I. R. McFadyen, E. E. Fullerton, and M. J. Carey, “State-of-the-art magnetic hard disk drives,” MRS Bulletin 31, 379–383 (2006).
* Tahoori _et al._ (2018) M. Tahoori, S. M. Nair, R. Bishnoi, S. Senni, J. Mohdad, F. Mailly, L. Torres, P. Benoit, A. Gamatie, P. Nouet, F. Ouattara, G. Sassatelli, K. Jabeur, P. Vanhauwaert, A. Atitoaie, I. Firastrau, G. Di Pendina, and G. Prenat, “Using multifunctional standardized stack as universal spintronic technology for iot,” in _2018 Design, Automation Test in Europe Conference Exhibition (DATE)_ (2018) pp. 931–936.
* Liu _et al._ (2019) X. Liu, K. H. Lam, K. Zhu, C. Zheng, X. Li, Y. Du, C. Liu, and P. W. T. Pong, “Overview of spintronic sensors with internet of things for smart living,” IEEE Transactions on Magnetics 55, 1–22 (2019).
* Ouyang _et al._ (2019) Y. Ouyang, Z. Wang, G. Zhao, J. Hu, S. Ji, J. He, and S. X. Wang, “Current sensors based on gmr effect for smart grid applications,” Sensors and Actuators A: Physical 294, 8 – 16 (2019).
* Nowak _et al._ (1998) E. R. Nowak, R. D. Merithew, M. B. Weissman, I. Bloom, and S. S. P. Parkin, “Noise properties of ferromagnetic tunnel junctions,” Journal of Applied Physics 84, 6195–6201 (1998), https://doi.org/10.1063/1.368936 .
* Stutzke _et al._ (2005) N. A. Stutzke, S. E. Russek, D. P. Pappas, and M. Tondra, “Low-frequency noise measurements on commercial magnetoresistive magnetic field sensors,” Journal of Applied Physics 97, 10Q107 (2005), https://doi.org/10.1063/1.1861375 .
* Ingvarsson _et al._ (2000) S. Ingvarsson, G. Xiao, S. S. P. Parkin, W. J. Gallagher, G. Grinstein, and R. H. Koch, “Low-frequency magnetic noise in micron-scale magnetic tunnel junctions,” Phys. Rev. Lett. 85, 3289–3292 (2000).
* Klaassen, Xinzhi Xing, and van Peppen (2004) K. B. Klaassen, Xinzhi Xing, and J. C. L. van Peppen, “Signal and noise aspects of magnetic tunnel junction sensors for data storage,” IEEE Transactions on Magnetics 40, 195–202 (2004).
* Zheng _et al._ (2019) C. Zheng, K. Zhu, S. C. de Freitas, J.-Y. Chang, J. E. Davies, P. Eames, P. P. Freitas, O. Kazakova, C. Kim, C.-W. Leung, S.-H. Liou, A. Ognev, S. N. Piramanayagam, P. Ripka, A. Samardak, K.-H. Shin, S.-Y. Tong, M.-J. Tung, S. X. Wang, S. Xue, X. Yin, and P. W. T. Pong, “Magnetoresistive Sensor Development Roadmap (Non-Recording Applications),” IEEE Transactions on Magnetics 55, 1–30 (2019).
* Egelhoff _et al._ (2009) W. Egelhoff, P. Pong, J. Unguris, R. McMichael, E. Nowak, A. Edelstein, J. Burnette, and G. Fischer, “Critical challenges for picotesla magnetic-tunnel-junction sensors,” Sensors and Actuators A: Physical 155, 217 – 225 (2009).
* Deak, Zhou, and Shen (2017) J. G. Deak, Z. Zhou, and W. Shen, “Tunneling magnetoresistance sensor with pt level 1/f magnetic noise,” AIP Advances 7, 056676 (2017), https://doi.org/10.1063/1.4978465 .
* Zou _et al._ (2001) W. Zou, H. Wadley, X. Zhou, R. Johnson, and D. Brownell, “Composition-morphology-property relations for giant magnetoresistance multilayers grown by rf diode sputtering,” MRS Proceedings 674, T1.5 (2001).
* Jiang _et al._ (2004) L. Jiang, E. R. Nowak, P. E. Scott, J. Johnson, J. M. Slaughter, J. J. Sun, and R. W. Dave, “Low-frequency magnetic and resistance noise in magnetic tunnel junctions,” Phys. Rev. B 69, 054407 (2004).
* Edelstein _et al._ (2006) A. S. Edelstein, G. A. Fischer, M. Pedersen, E. R. Nowak, S. F. Cheng, and C. A. Nordman, “Progress toward a thousandfold reduction in 1/f noise in magnetic sensors using an ac microelectromechanical system flux concentrator (invited),” Journal of Applied Physics 99, 08B317 (2006), https://doi.org/10.1063/1.2170067 .
* Pannetier _et al._ (2004) M. Pannetier, C. Fermon, G. Le Goff, J. Simola, and E. Kerr, “Femtotesla magnetic field measurement with magnetoresistive sensors,” Science 304, 1648–1650 (2004), https://science.sciencemag.org/content/304/5677/1648.full.pdf .
* He _et al._ (2018) G. He, Y. Zhang, L. Qian, G. Xiao, Q. Zhang, J. C. Santamarina, T. W. Patzek, and X. Zhang, “Picotesla magnetic tunneling junction sensors integrated with double staged magnetic flux concentrators,” Applied Physics Letters 113, 242401 (2018), https://doi.org/10.1063/1.5052355 .
# Lippmann-Schwinger-Lanczos algorithm for inverse scattering problems
V. Druskin111Worcester Polytechnic Institute, Department of Mathematical
Sciences, Stratton Hall, 100 Institute Road, Worcester MA, 01609
<EMAIL_ADDRESS>S. Moskow222Department of Mathematics, Drexel
University, Korman Center, 3141 Chestnut Street, Philadelphia, PA 19104
<EMAIL_ADDRESS>M. Zaslavsky333Schlumberger-Doll Research Center, 1
Hampshire St., Cambridge, MA 02139-1578<EMAIL_ADDRESS>
###### Abstract
Data-driven reduced order models (ROMs) are combined with the Lippmann-
Schwinger integral equation to produce a direct nonlinear inversion method.
The ROM is viewed as a Galerkin projection and is sparse due to Lanczos
orthogonalization. Embedding into the continuous problem, a data-driven
internal solution is produced. This internal solution is then used in the
Lippmann-Schwinger equation, thus making further iterative updates
unnecessary. We show numerical experiments for spectral domain data for
which our inversion is far superior to the Born inversion and works as well as
when the true internal solution is known.
## 1 Introduction
This work extends the reduced order model (ROM) approach to inverse impedance,
scattering and diffusion problems that was developed in the sequence of papers
[5, 7, 14, 11, 8, 9, 6, 2]. This approach to solve multidimensional inverse
problems using a ROM framework can be summarized by the following:
1. 1.
The boundary data (discrete partial Dirichlet-to-Neumann maps as in [5] or
their transient variants [14, 11, 8, 9, 6]) are matched by data-driven ROMs. The
ROMs are sparse networks that can be written as a tridiagonal or block-
tridiagonal matrix for one-dimensional and multi-dimensional problems
respectively. Matching is performed via a direct layer stripping algorithm or
a sequence of such algorithms.
2. 2.
The data-driven networks implicitly embed boundary data back into the
interior. The main property of such embeddings is that the network’s
coefficients can be approximated by localized averages of the PDE
coefficients. Approximate linear maps between these averages and the network
coefficients can be learned via so-called ”optimal grid” or ”‘finite-
difference Gaussian quadrature” [5, 14], or this map can be used as a
preconditioner in optimization algorithms [7, 2].
In short, the main nonlinearity of the inverse problem is absorbed during the
first stage thanks to the layer stripping, which is intrinsically nonlinear.
This allows one to treat the transformed data as almost linear with respect to
PDE coefficients. The layer stripping in essence is just the Euclidean
polynomial division algorithm, which was further developed in the seminal
works of Stieltjes and Lanczos, and later in the electrical circuit community.
It is also related to seminal works from the Soviet school (Marchenko, Gelfand,
Levitan and Krein) on inverse spectral problems. The origin of the embedding
concept can be traced to Krein [15], and to the idea of the spectrally
matched second order staggered finite-difference grids first introduced in
[13]. These “optimal grids” or “finite-difference Gaussian quadrature” rules
provided a clear geometric interpretation of the data driven network’s
coefficients [4].
It is well known that a data-driven ROM can be equivalently rewritten in
projection form, for example as a Galerkin system, with the same matrix $T$
for state variable realization in the interior, see for example [1]. The
relationship between these projections and finite-difference Gaussian
quadrature rules was first studied in [12] and further developed in [6]. To
see the idea, let $A_{q}$ be the PDE operator, for example
$A_{q}=-\Delta+q,$ (1)
and let us denote by $V_{q}$ the row vector of orthonormal basis functions
${v_{q}}_{j}$, $j=1,\ldots,m$, such that
$T_{q}=\int V^{*}_{q}A_{q}V_{q}$ (2)
where $T_{q}$ is our sparse data driven network. A critical property of the
projection solution in the interior, which was first noticed in [14], is that
$V_{q}\approx V_{0},$ (3)
even for large $q$, where $V_{0}$ is the row vector of orthogonalized basis
corresponding to $q=0$. This property of the sparse realization $T_{q}$ would
not be possible, for example, if one were using the full stiffness and mass
matrices appearing in the Loewner product formulation normally used for data-
driven ROM construction. Because of the sparse structure of $T_{q}$,
(tridiagonal finite-difference in 1D problems), the basis functions
${v_{q}}_{j}$ are localized, that is, they behave somewhat similarly to
piecewise linear finite element basis functions, and depend only weakly on the
media perturbations. Rigorous analysis of this property can be obtained using
the already mentioned Marchenko, Gelfand, Levitan and Krein approach and is
currently in progress [3].
Numerical experiments have demonstrated that the weak dependence of the basis
functions on $q$ also holds in the multidimensional setting, and this gave
rise to the so-called nonlinear back-projection imaging algorithm [11], which uses
the approximation
$A_{q}w\approx V_{0}T_{q}\int V_{0}^{*}w.$ (4)
The back-projection algorithm worked very well for large $m$, yielding a
high-resolution basis for time-domain wave problems, but failed in diffusion
problems, which allow only small $m$ due to inherent ill-posedness. To overcome
this problem, in [6] the authors with collaborators introduced the idea of
generating internal solutions. To see the idea of this method, we first
consider the one dimensional problem with source term $g$. Then the frequency
domain solution can be written as
$u_{q}(x,s)=(A_{q}+sI)^{-1}g.$ (5)
Then using (4) we obtain
$u_{q}(x,s)\approx\tilde{u}_{q}(x,s)=V_{0}(T_{q}+sI)^{-1}\int V_{0}^{*}g.$ (6)
Thanks to (4), $\tilde{u}_{q}-u_{q}\ll u_{q}-u_{0}$, even when $q$ is rather
large. Then the approximate solution of the inverse problem can be computed as
$q\approx\frac{\Delta\tilde{u}_{q}(x,s)-s\tilde{u}_{q}(x,s)+g(x)}{\tilde{u}_{q}(x,s)}.$
(7)
This formulation worked better for small $m$ where back-projection failed,
however, image quality was not high for large $q$. The reason was that
multiplication by $\Delta$ in (7) amplified the approximation error of the
internal solution $u_{q}(x,s)-\tilde{u}_{q}(x,s)$.
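To make formula (7) concrete in one dimension (where $\Delta=d^{2}/dx^{2}$), here is a minimal finite-difference sketch that recovers $q$ pointwise from an internal solution; the grid, source, and manufactured solution are synthetic placeholders rather than data-generated quantities:

```python
import numpy as np

# Synthetic 1D setup: manufacture u and g so that -u'' + q*u + s*u = g,
# then recover q via eq. (7): q = (u'' - s*u + g) / u.
n, s = 200, 1.0
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
q_true = 5.0 * np.exp(-((x - 0.5) ** 2) / 0.01)

u = np.cos(np.pi * x) + 2.0                 # smooth and nonvanishing
upp = -np.pi**2 * np.cos(np.pi * x)         # exact second derivative
g = -upp + (q_true + s) * u                 # matching source term

# Second-order finite difference for u'' on interior points
upp_fd = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
q_rec = (upp_fd - s * u[1:-1] + g[1:-1]) / u[1:-1]
```

The recovery is exact up to the $O(h^{2})$ discretization error; the point of the sketch is that any error in the internal solution is amplified by the second-derivative operator, which is the artifact discussed above.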
To overcome this artifact here we consider the Lippmann-Schwinger integral
equation framework. Consider data
$F_{q}(s)=\int gu_{q}=\int g(A_{q}+sI)^{-1}g,$
and similarly
$F_{0}(s)=\int gu_{0}=\int g(A_{0}+sI)^{-1}g.$
Then the Lippmann-Schwinger integral equation with respect to the unknown $q$
can be written as
$F_{q}(s)-F_{0}(s)=-\langle u_{0},qu_{q}\rangle$ (8)
where $\langle,\rangle$ is the $L_{2}$ inner product on the PDE domain.
Because of the dependence of $u_{q}$ on $q$, equation (8) is generally
nonlinear. The standard linearization is given by the famous Born
approximation in which one assumes that $u_{q}\approx u_{0}$, which is
accurate only for small $q$. In this paper we suggest to use the more accurate
approximation (6). This yields
$F_{q}(s)-F_{0}(s)\approx-\langle u_{0},q\tilde{u}_{q}\rangle.$ (9)
Although the internal solution $\tilde{u}_{q}$ depends on $q$, it can
be precomputed directly from the data without knowing $q$. After that, (9)
becomes linear with respect to $q$. As we shall see, formula (9) relies on the
Lanczos algorithm for the computation of $T_{q}$, which is why we call (9) the
Lippmann-Schwinger-Lanczos equation.
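After discretization on a grid, (9) becomes a small linear system for $q$. The sketch below is schematic: the background and data-generated solutions are random stand-ins, and the "data" are manufactured to be consistent with a chosen $q$, just to show the shape of the least-squares problem:

```python
import numpy as np

n, m = 100, 6                               # grid points, Laplace frequencies
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
rng = np.random.default_rng(0)
u0 = rng.standard_normal((m, n))            # stand-ins for u_0(x, s_j)
u_tilde = rng.standard_normal((m, n))       # stand-ins for u~_q(x, s_j)
q_true = np.exp(-((x - 0.4) ** 2) / 0.02)

# Discretized (9): F_q(s_j) - F_0(s_j) = -dx * sum_k u0[j,k] u_tilde[j,k] q[k]
A = -dx * u0 * u_tilde                      # m x n system matrix
rhs = A @ q_true                            # manufactured data misfit

# Minimum-norm least-squares solution (the system is underdetermined)
q_ls, *_ = np.linalg.lstsq(A, rhs, rcond=None)
```

In practice one would regularize this system; the point of the sketch is only that, once $\tilde{u}_{q}$ is fixed, the unknown $q$ enters linearly.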
As a by-product, the Lippmann-Schwinger-Lanczos approach resolves a number of
problems arising in the data-driven ROM framework. Recently in [2], the map
between $T_{q}-T$ and $q$ was computed via training and required multiple
solutions of forward problems for different $q$. The Lippmann-Schwinger-
Lanczos approach yields an explicit expression for this map. Furthermore, this
approach also allows for a more rigorous derivation of the back-projection
algorithm, and suggests natural extensions to more general data sets.
Finally, we should point out that there are known approaches using internal
solutions in the Lippmann-Schwinger framework, of which Marchenko redatuming
is the closest to the method suggested here; see, for example, [10]. The main
difficulty with Marchenko redatuming lies in the accurate approximation of the
inverse scattering transform in the continuous setting, and in the evaluation
of some integrals. The Lanczos based ROM approach here can be interpreted as a
linear-algebraic realization of the Marchenko-Gelfand-Levitan method [14] with
a data-driven spectral discretization, and as such promises the best possible
accuracy for a given data-set. As we shall see in our numerical experiments,
with as little as 6 Laplace frequencies and 8 source-receiver positions, our
approach produces results which are indistinguishable from the Lippmann-
Schwinger formulation using the exact internal solutions.
This paper is organized as follows. In Section 2 we describe the entire
process in detail for a one dimensional, single input single output (SISO)
problem. This includes the construction of the ROM from the data, the Lanczos
orthogonalization process, the generation of the internal solution and its use
in the Lippmann-Schwinger equation. The generalization of this process to
multiple input multiple output (MIMO) problems in higher dimensions is
described in Section 3. Section 4 contains numerical experiments, and in the
appendix we describe how the Lippmann-Schwinger Lanczos algorithm is related
to other approaches to inversion using the data driven ROM.
## 2 One dimensional SISO problem
We begin this work with the one dimensional problem, since the presentation is
simpler, and the ideas extend naturally to higher dimensions. In the first
subsection we describe the problem setup and in the second subsection we show
how one constructs the ROM from the data. In the third and fourth subsections
we discuss tridiagonalization of the ROM and generation of the internal
solution from data only. In the last subsection we show how to use the
internal solutions in the Lippmann-Schwinger equation in order to solve the
fully nonlinear inverse problem.
### 2.1 Description of the SISO problem
We start by considering the single input single output (SISO) inverse problem
in one dimension
$-\frac{d^{2}u(x,\lambda)}{dx^{2}}+q(x)u(x,\lambda)+\lambda
u(x,\lambda)=g(x)\quad\frac{du}{dx}|_{x=0}=0,\ \frac{du}{dx}|_{x=L}=0,$ (10)
where $0<L\leq\infty$. The source $g(x)$ is assumed to be a compactly
supported real distribution localized near the origin, for example, roughly
speaking, $g=\delta(x-\epsilon)$ with small $\epsilon>0$. (Note that when $g$
is a delta function at the origin this corresponds to an inhomogeneous Neumann
boundary condition.) We can write the solution formally as
$u=\left(-\frac{d^{2}}{dx^{2}}+{q}I+\lambda I\right)^{-1}g$ (11)
where the inverse is understood to correspond to Neumann boundary conditions.
Consider $\lambda_{j}\in{\mathbb{C}}\setminus{\mathbb{R}}_{-}$,
$j=1,\ldots,m$, with $\Im\lambda_{j}\geq 0$. The SISO transfer function is
then
$F(\lambda)=\int_{0}^{L}g(x)u(x,\lambda)dx=\langle g,u\rangle=\langle
g,\left(-\frac{d^{2}}{dx^{2}}+{q}I+\lambda I\right)^{-1}g\rangle$ (12)
where
$\langle w,v\rangle=\int_{0}^{L}\bar{w}(x)v(x)dx$
is the Hermitian inner product on $L^{2}(0,L)$. For $2m$ real data points,
that is, for $\Im\lambda_{j}=0$, we consider the data
$F(\lambda)|_{\lambda=\lambda_{j}}\in{\mathbb{R}},\ \ \
\frac{dF(\lambda)}{d\lambda}|_{\lambda=\lambda_{j}}\in{\mathbb{R}}\ \ \
\mbox{for}\ \ j=1,\ldots,m.$ (13)
For complex data points, we consider just
$F(\lambda)|_{\lambda=\lambda_{j}}\in{\mathbb{C}}\ \ \ \mbox{for}\ \
j=1,\ldots,m.$ (14)
Note that the complex case is equivalent to also having data at
$\bar{\lambda}_{j}$, since $F(\overline{\lambda})=\overline{F}(\lambda)$ from
the fact that $g$ and $q$ are real. From this one can see that for
$\lambda_{j}$ close to real,
$\frac{dF(\lambda)}{d\lambda}\Big|_{\lambda=\Re{\lambda_{j}}}=\lim_{\Im\lambda_{j}\to 0}\frac{F(\lambda_{j})-F(\overline{\lambda_{j}})}{\lambda_{j}-\overline{\lambda_{j}}}=\lim_{\Im\lambda_{j}\to 0}\frac{\Im F(\lambda_{j})}{\Im{\lambda_{j}}}$ (15)
and hence the real data can be viewed as the natural extension of complex
data. The SISO inverse problem is then to determine $q(x)$ in (10) from the
data (14) or (13).
### 2.2 Construction of the data-driven ROM
We will treat $u(x,\lambda_{j})$ as a continuous basis function (or it can be
viewed as an infinite dimensional vector column) and consider the projection
subspace
${\bf V}=\mbox{span}\{u_{1}(x)=u(x,\lambda_{1}),\ldots,u_{m}(x)=u(x,\lambda_{m})\}.$
We define the data-driven ROM as the Galerkin system for this subspace
$Sc(\lambda)+\lambda Mc(\lambda)=b$ (16)
where $S,M\in{\mathbb{C}}^{m\times m}$ are Hermitian positive definite
matrices with the stiffness matrix $S$ given by
$S_{ij}=\langle{u}^{\prime}_{i},u^{\prime}_{j}\rangle+\langle
q{u}_{i},u_{j}\rangle$
and mass matrix $M$ given by
$M_{ij}=\langle{u}_{i},u_{j}\rangle.$
The right hand side $b\in{\mathbb{C}}^{m}$ has components
$b_{j}=\langle{u}_{j},g\rangle,$
and the Galerkin solution for the system is determined by the vector valued
function of $\lambda$, $c(\lambda)\in{\mathbb{C}}^{m}$. Note that $c(\lambda)$
corresponds to coefficients of the solution with respect to the above basis of
exact solutions. The matrices $S$ and $M$ are obtained from matching
conditions that ensure that the ROM transfer function
$\tilde{F}(\lambda)=b^{*}c$
matches the data. For real $\lambda_{i}$ this is the same as in [6]. For
complex $\{\lambda_{j}\}^{m}_{j=1}$, multiplying (10) for
$\lambda=\lambda_{i}$ by $\bar{u}_{j}$ in the inner product
$\langle\cdot,\cdot\rangle$ and integrating by parts we obtain
$\bar{S}_{ij}+\lambda_{i}\bar{M}_{ij}=\bar{F}(\lambda_{j}).$ (17)
Similarly, multiplying the conjugate of (10) for $\lambda=\bar{\lambda}_{j}$
by $u_{i}$ in the inner product $\langle\cdot,\cdot\rangle$, we obtain
$S_{ij}+\lambda_{j}M_{ij}=b_{i},$ (18)
where $b_{i}=\bar{F}(\lambda_{i})$. Subtracting the complex conjugate of (17)
from (18) we obtain the expression for the mass matrix
$M_{ij}=\frac{\bar{F}(\lambda_{i})-F(\lambda_{j})}{\lambda_{j}-\bar{\lambda}_{i}}.$
(19)
Similarly, the elements of stiffness matrix have the form
$S_{ij}=\frac{F(\lambda_{j})\lambda_{j}-\bar{F}(\lambda_{i})\bar{\lambda}_{i}}{\lambda_{j}-\bar{\lambda}_{i}}.$
(20)
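Formulas (19) and (20) assemble the ROM matrices directly from transfer-function samples. A sketch, with $F(\lambda_{j})$ generated from a tiny diagonal surrogate operator purely to exercise the formulas:

```python
import numpy as np

def rom_matrices(lam, F):
    """Build the mass and stiffness matrices from data F(lambda_j)
    via (19) and (20); requires Im(lambda_j) != 0."""
    m = len(lam)
    M = np.empty((m, m), dtype=complex)
    S = np.empty((m, m), dtype=complex)
    for i in range(m):
        for j in range(m):
            denom = lam[j] - np.conj(lam[i])
            M[i, j] = (np.conj(F[i]) - F[j]) / denom
            S[i, j] = (F[j] * lam[j] - np.conj(F[i]) * np.conj(lam[i])) / denom
    return M, S

# Surrogate data: F(lam) = b^*(A + lam I)^{-1} b for a diagonal A
lam = np.array([1 + 1j, 2 + 1j, 3 + 2j])
poles = np.array([0.5, 1.5, 4.0])
w = np.array([1.0, 0.5, 0.25])              # |b_k|^2 weights
F = np.array([np.sum(w / (poles + l)) for l in lam])
M, S = rom_matrices(lam, F)
```

For this surrogate, $M$ reproduces the exact Gram matrix $\langle u_{i},u_{j}\rangle$ of the resolvent solutions, and both matrices come out Hermitian as required.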
Furthermore, the solution to (10) is close to its Galerkin projection
$u(\lambda)\approx\tilde{u}(\lambda)={V}c(\lambda)={V}(S+\lambda M)^{-1}b$
where ${V}$ represents the row vector of basis functions $u_{i}$,
${V}=(u_{1},\ldots,u_{m}).$
Then, using that $\bar{\tilde{F}}(\lambda)=\tilde{F}(\bar{\lambda})$, the
following proposition follows immediately.
###### Proposition 1.
Assume that $\Im\lambda_{j}\neq 0$ for $j=1,\ldots,m$. Then the Galerkin
projection of the solution of (10)
$\tilde{u}(\lambda)=Vc(\lambda)=V(S+\lambda M)^{-1}b$
is exact at $\lambda=\lambda_{j}$ ,
$\tilde{u}(\lambda_{j})=u(\lambda_{j}),$
and hence
$\tilde{F}(\lambda_{j})=b^{*}(S+\lambda_{j}M)^{-1}b=F(\lambda_{j})$
and
$\tilde{F}(\bar{\lambda}_{j})=b^{*}(S+\bar{\lambda}_{j}M)^{-1}b=F(\bar{\lambda}_{j})$
for $j=1,\ldots,m$.
The corresponding proposition for $\lambda_{j}$ real was shown in [6].
### 2.3 Lanczos tridiagonalization
In the next step, we orthogonalize the above basis of exact solutions by using
the Lanczos algorithm. More precisely, we run $m$ steps of the $M$-symmetric
Lanczos algorithm corresponding to matrix $A=M^{-1}S$ and initial vector
$M^{-1}b$. This yields a tridiagonal matrix $T\in{\mathbb{R}}^{m\times m}$ and
$M$-orthonormal Lanczos vectors $q_{i}\in{\mathbb{R}}^{m}$, such that
$AQ=QT,\qquad Q^{*}MQ=I,$ (21)
where
$Q=[q_{1},q_{2},\ldots,q_{m}]\in{\mathbb{R}}^{m\times m},$
and
$q_{1}=M^{-1}b/\sqrt{b^{*}M^{-1}b}.$
This resulting new basis $VQ$ will be orthonormal with respect to the
$L^{2}(0,L)$ inner product. The Galerkin projection of (10) and its transfer
function can then be written in Lanczos coordinates as
$\tilde{u}(\lambda)=\sqrt{b^{*}M^{-1}b}VQ(T+\lambda I)^{-1}e_{1},$ (22)
$\tilde{F}(\lambda)=(b^{*}M^{-1}b)e_{1}^{*}(T+\lambda I)^{-1}e_{1}.$ (23)
It is known that when $\Sigma\subset{\mathbb{C}}\setminus{\mathbb{R}}_{-}$ is
compact, for any sequence $\lambda_{j}\in\Sigma$, $j=1,\ldots,m$, and any
$\lambda\in\Sigma$, the corresponding Galerkin solution converges
exponentially, $\tilde{u}\rightarrow u$ as $m\rightarrow\infty$, with a
uniform linear rate.
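The $M$-symmetric Lanczos step admits a compact three-term recurrence. The sketch below is a plain implementation (no reorthogonalization) run on random symmetric positive-definite stand-ins for $S$ and $M$, not on actual ROM data:

```python
import numpy as np

def m_symmetric_lanczos(S, M, b, m):
    """m steps of Lanczos for A = M^{-1} S in the M-inner product,
    started from M^{-1} b. Returns tridiagonal T and M-orthonormal Q."""
    n = len(b)
    Q = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(max(m - 1, 0))
    q = np.linalg.solve(M, b)
    q /= np.sqrt(q @ (M @ q))
    q_prev = np.zeros(n)
    for k in range(m):
        Q[:, k] = q
        w = np.linalg.solve(M, S @ q)       # w = A q
        alpha[k] = w @ (M @ q)              # = q^T S q
        w = w - alpha[k] * q - (beta[k - 1] * q_prev if k > 0 else 0.0)
        if k < m - 1:
            beta[k] = np.sqrt(w @ (M @ w))
            q_prev, q = q, w / beta[k]
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1), Q

# Random SPD placeholders standing in for the data-driven S and M
rng = np.random.default_rng(1)
X = rng.standard_normal((8, 8)); M = X @ X.T + 8 * np.eye(8)
Y = rng.standard_normal((8, 8)); S = Y @ Y.T + 8 * np.eye(8)
b = rng.standard_normal(8)
T, Q = m_symmetric_lanczos(S, M, b, 5)
```

In exact arithmetic the output satisfies $Q^{*}MQ=I$ and $Q^{*}SQ=T$, matching (21); a production implementation would add reorthogonalization.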
### 2.4 Internal solutions
We assume that we do not know $q$, yet we want to compute $u(x,\lambda)$
directly from the data. Recall that the output of the Lanczos algorithm,
tridiagonal $T$ and change of basis $Q$, were obtained from data only.
However, we do not know the original basis of exact solutions $V$. We propose
to approximate $u$ internally, as was done in [6], by replacing the unknown
orthogonalized internal solutions $VQ$ with orthogonalized background
solutions $V_{0}Q_{0}$ corresponding to background $q_{0}=0$. Here $V_{0}$ is
the row vector containing the basis of background solutions
$V_{0}=(u^{0}_{1},\ldots,u^{0}_{m})$
to (10) corresponding to $q=q_{0}=0$ and the same spectral points
$\lambda=\lambda_{1},\ldots\lambda_{m}$, and $Q_{0}$ is computed from Lanczos
orthogonalization of the background ROM. That is, one can compute an
approximation to $u(x,\lambda)$ using
$u\approx\sqrt{b^{*}M^{-1}b}V_{0}Q_{0}(T+\lambda I)^{-1}e_{1},$
which is obtained from data only.
###### Conjecture 2.
Assume that $V_{0}$ and $Q_{0}$ are the solution basis and Lanczos matrix
corresponding to the background $q_{0}=0$. Then for all $\lambda\in\Sigma$ we
have that
$u=\lim_{m\to\infty}{\mathbf{u}}=\sqrt{b^{*}M^{-1}b}V_{0}Q_{0}(T+\lambda I)^{-1}e_{1},$ (24)
that is, for any fixed $\lambda\in\Sigma$, as the number of data points $m$
approaches infinity, the data-generated internal solution
${\mathbf{u}}$ converges to the true solution $u$.
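Given the data-driven quantities, evaluating the right-hand side of (24) at a new $\lambda$ reduces to one small tridiagonal solve. A minimal sketch, in which the function name and the dense storage of the sampled background basis are our assumptions:

```python
import numpy as np

def internal_solution(P0, T, bMb, lam):
    """Evaluate u(x, lam) ~ sqrt(b* M^{-1} b) V0 Q0 (T + lam I)^{-1} e1, eq. (24).

    P0  : (n, m) array whose columns sample the orthogonalized background
          basis V0 Q0 on a spatial grid
    T   : (m, m) tridiagonal Lanczos matrix computed from the (perturbed) data
    bMb : scalar b^* M^{-1} b, also obtained from the data
    lam : spectral point
    """
    m = T.shape[0]
    e1 = np.zeros(m)
    e1[0] = 1.0
    coef = np.linalg.solve(T + lam * np.eye(m), e1)  # Lanczos coordinates
    return np.sqrt(bMb) * (P0 @ coef)                # map back to the grid
```

Note that only $T$ and $b^{*}M^{-1}b$ come from the perturbed data; the spatial factor $V_{0}Q_{0}$ is computed entirely from the background problem.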
### 2.5 Nonlinear inverse problem
We now use the Lippmann-Schwinger formulation for the solutions $u$ to solve
the nonlinear inverse problem by using the internal solutions described above.
From (12) we obtain
$\displaystyle F_{0}(\lambda_{j})-F(\lambda_{j})=\langle
g,\left(-\frac{d^{2}}{dx^{2}}+\lambda I\right)^{-1}g\rangle-\langle
g,\left(-\frac{d^{2}}{dx^{2}}+{q}I+\lambda I\right)^{-1}g\rangle$
$\displaystyle=\langle\left(-\frac{d^{2}}{dx^{2}}+{q}I+\bar{\lambda}I\right)^{-1}g,q\left(-\frac{d^{2}}{dx^{2}}+\lambda
I\right)^{-1}g\rangle$ (25)
that is,
$F_{0}(\lambda_{j})-F(\lambda_{j})=\int
u_{0}^{*}(x,\lambda_{j})u(x,\lambda_{j})q(x)dx,\qquad j=1,\ldots,m$ (26)
which can also be seen as a direct consequence of the usual Lippmann-Schwinger
formulation with the Green’s function. Here $u_{0}$ corresponds to the
solution to the background problem with $q_{0}=0$. If all $\lambda_{j}$ are
complex, then (26) yields $2m$ real equations for $q$, which is the same as
the number of data points. For real $\lambda_{j}\in\mathbb{R}$ we also have
$2m$ real equations
$\displaystyle F_{0}(\lambda_{j})-F(\lambda_{j})=\langle
u_{0},{q}u\rangle=\int u^{*}_{0}(x,\lambda_{j})u(x,\lambda_{j})q(x)dx,$ (27)
$\displaystyle\frac{d}{d\lambda}(F_{0}-F)|_{\lambda=\lambda_{j}}=\int\frac{d}{d\lambda}[u^{*}_{0}(x,\lambda)u(x,\lambda)]_{\lambda=\lambda_{j}}q(x)dx$
for $j=1,\ldots,m$. Of course, the internal solutions $u(x,\lambda_{j})$ and
their derivatives with respect to $\lambda$ are unknown, and they depend on
$q$, so the system (26)-(27) is nonlinear with respect to $q$. A Born
linearization replaces $u$ with $u_{0}$; however, it is only accurate for
small $q$. Using Conjecture 2, we replace $u(x,\lambda)$ in (26)-(27) with its
approximation
$u\approx{\mathbf{u}}=\sqrt{b^{*}M^{-1}b}V_{0}Q_{0}(T+\lambda I)^{-1}e_{1}.$
For example, for the case of complex $\lambda_{j}$ we can write the new system
for $q$ as
$\delta{\mathbf{F}}=\langle W,q\rangle$ (28)
where
$\delta{\mathbf{F}}=[F_{0}(\lambda_{1})-F(\lambda_{1}),\ldots,F_{0}(\lambda_{m})-F(\lambda_{m})]^{*}\in{\mathbb{C}}^{m},$
and
$W=[{\mathbf{u}}^{*}(x,\lambda_{1})u_{0}(x,\lambda_{1}),\ldots,{\mathbf{u}}^{*}(x,\lambda_{m})u_{0}(x,\lambda_{m})]$
is an $m$-dimensional vector of complex-valued functions on $(0,L)$. By
construction, ${\mathbf{u}}$ can be computed directly from the data without
knowing $q$, by using (24), thus making the nonlinear system (28) linear. A
reasonable regularization of (28) is to restrict $q$ to the dominant left
singular vectors of $W$. We will refer to (28) as a Lippmann-Schwinger-Lanczos
system.
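To make the regularization step concrete, here is a minimal sketch of solving a discretized version of (28) by restricting $q$ to the dominant singular vectors (truncated SVD). The uniform grid, quadrature weight $h$, and function name are our assumptions; the paper's actual discretization may differ.

```python
import numpy as np

def solve_lsl_system(W, dF, h, k):
    """Solve the discretized Lippmann-Schwinger-Lanczos system h*W q = dF.

    W  : (m, n) array; row j samples the kernel u*(x, l_j) u0(x, l_j) on a grid
    dF : (m,) data misfit F0(l_j) - F(l_j)
    h  : grid spacing (quadrature weight for the integral in (28))
    k  : number of dominant singular triplets kept (the regularization)
    Returns grid values of the regularized reconstruction of q.
    """
    U, s, Vh = np.linalg.svd(h * W, full_matrices=False)
    coef = (U[:, :k].conj().T @ dF) / s[:k]   # project data, invert spectrum
    return (Vh[:k].conj().T @ coef).real      # q in span of top-k right singular vectors
```

With noisy data one would take $k<m$; with $k=m$ and consistent data the truncated solve simply reproduces the measurements, which is what the synthetic check below exercises.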
## 3 Multidimensional MIMO problem
### 3.1 Description of the MIMO problem
We consider the boundary value problem on $\Omega\subset{\mathbb{R}}^{d}$ for
$-\Delta u^{(r)}+qu^{(r)}+\lambda
u^{(r)}=g^{(r)},\quad\frac{du^{(r)}}{d\nu}\large|_{\partial\Omega}=0,\ r=1,\ldots,K,$ (29)
where $g^{(r)}$ are localized sources, e.g., boundary charge distributions,
supported near or at an accessible part $S$ of $\partial\Omega$. Let
$G=[g^{(1)},g^{(2)},\ldots,g^{(K)}]\in{\mathbb{R}}^{\infty\times K}$
and
$U=[u^{(1)},u^{(2)},\ldots,u^{(K)}]\in{\mathbb{R}}^{\infty\times K},$
again understood as vectors of continuous functions. Then the multiple-input
multiple-output (MIMO) transfer function is a matrix-valued function of
$\lambda$:
$F(\lambda)=\langle G,U\rangle=\langle
G,\left(-\Delta+qI+\lambda
I\right)^{-1}G\rangle\in{\mathbb{C}}^{K\times K}.$ (30)
For real positive $\lambda$, $F(\lambda)$ is symmetric positive definite. As
in the SISO case, we consider the inverse problem with data
given by $2m$ real symmetric $K\times K$ matrices, that is, for
$\Im\lambda_{j}=0$ our data are
$F(\lambda)|_{\lambda=\lambda_{j}}\in{\mathbb{R}}^{K\times K}$
and
$\frac{dF(\lambda)}{d\lambda}|_{\lambda=\lambda_{j}}\in{\mathbb{R}}^{K\times K},$
and $F(\lambda)|_{\lambda=\lambda_{j}}\in{\mathbb{C}}^{K\times K}$ otherwise.
### 3.2 Construction of the MIMO data-driven ROM
We consider the $mK$ dimensional projection subspace
$\bf{V}=\mbox{span}\{U_{1}(x)=U(x,\lambda_{1}),\ldots,U_{m}(x)=U(x,\lambda_{m})\}.$
We then define the MIMO data-driven ROM as
$(S+\lambda M)C(\lambda)=B$ (31)
where $S,M\in{\mathbb{C}}^{mK\times mK}$ are Hermitian positive definite
matrices, $B\in{\mathbb{R}}^{mK\times K}$, and $C\in{\mathbb{C}}^{mK\times K}$
is a matrix valued function of $\lambda$, again corresponding to coefficients
of the solution with respect to the above basis of exact solutions. Stiffness
and mass matrices can be written as block extensions of their counterparts in
(16), with blocks given by
$S=(S_{ij}=\langle{U^{\prime}}_{i},U^{\prime}_{j}\rangle+\langle
q{U}_{i},U_{j}\rangle)$
and
$M=(M_{ij}=\langle{U}_{i},U_{j}\rangle).$
Once again $S$ and $M$ are obtained by imposing the conditions that the ROM
transfer function
$\tilde{F}(\lambda)=B^{*}C(\lambda)$
matches the data.
### 3.3 Block-Lanczos tridiagonalization
We then run $m$ steps of the $M$-symmetric block-Lanczos algorithm with matrix
$A=M^{-1}S$
and initial block vector $M^{-1}B$. From this we obtain block-tridiagonal
matrix
$T\in{\mathbb{R}}^{Km\times Km}$
with $K\times K$ blocks, and $M$-orthonormal Lanczos block-vectors
$q_{i}\in{\mathbb{R}}^{mK\times K}$. This gives the block counterpart
of (21),
$Q=[q_{1},q_{2},\ldots,q_{m}]\in{\mathbb{R}}^{mK\times mK},$
where
$q_{1}=M^{-1}B(B^{*}M^{-1}B)^{-1/2}.$
The state solution can then be written in Lanczos coordinates as
$\tilde{u}(\lambda)=\sqrt{B^{*}M^{-1}B}VQ(T+\lambda I)^{-1}E_{1},$ (32)
where $E_{1}\in{\mathbb{R}}^{mK\times K}$ consists of the first $K$ columns of
$I\in{\mathbb{R}}^{mK\times mK}$.
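By analogy with (23), a natural block form of the ROM transfer function in Lanczos coordinates is $\tilde{F}(\lambda)=R\,E_{1}^{*}(T+\lambda I)^{-1}E_{1}\,R$ with $R=(B^{*}M^{-1}B)^{1/2}$. The sketch below evaluates this expression; the symmetric matrix square root and the function name are our choices, not prescribed by the paper.

```python
import numpy as np

def mimo_transfer(T, BMB, lam):
    """Evaluate a block analogue of (23):
    F(lam) ~ R E1^T (T + lam I)^{-1} E1 R, where R = (B^* M^{-1} B)^{1/2}.

    T   : (m*K, m*K) block-tridiagonal Lanczos matrix
    BMB : (K, K) symmetric positive definite matrix B^* M^{-1} B
    lam : real spectral point with T + lam I invertible
    """
    K = BMB.shape[0]
    mK = T.shape[0]
    # symmetric square root of BMB via its eigendecomposition
    w, V = np.linalg.eigh(BMB)
    R = (V * np.sqrt(w)) @ V.T
    E1 = np.eye(mK)[:, :K]                    # first K columns of the identity
    X = np.linalg.solve(T + lam * np.eye(mK), E1 @ R)
    return R @ (E1.T @ X)
```

For symmetric $T$ and real $\lambda$ this returns a symmetric $K\times K$ matrix, consistent with the symmetry of the data $F(\lambda_{j})$.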
### 3.4 Internal solutions
The SISO result is generalizable to the MIMO case, and can be loosely
formulated as the following conjecture.
###### Conjecture 3.
Assume we have a sequence of input sets
$G_{k}=[g_{k}^{(1)},g_{k}^{(2)},\ldots,g_{k}^{(K)}],$
such that as $k\rightarrow\infty$ the range of $G_{k}$ approximates surface
distributions at the observable boundary $S\subset\partial\Omega$. Then for
all $\lambda\in\Sigma$
$U=\lim_{m,K\to\infty}{\mathbf{U}}=\sqrt{B^{*}M^{-1}B}V_{0}Q_{0}(T+\lambda
I)^{-1}E_{1}.$ (33)
### 3.5 Nonlinear MIMO inverse problem
From (12) we obtain
$F_{0}(\lambda_{j})-F(\lambda_{j})=\langle G,\left(-\Delta+\lambda
I\right)^{-1}G\rangle-\langle G,\left(-\Delta+qI+\lambda
I\right)^{-1}G\rangle\\\ =\langle\left(-\Delta+{q}I+\lambda
I\right)^{-1}G,q\left(-\Delta+\lambda I\right)^{-1}G\rangle\\\ =\int
U^{*}_{0}(x,\lambda_{j})q(x)U(x,\lambda_{j})dx,$ (34)
that is,
$F_{0}(\lambda_{j})-F(\lambda_{j})=\int
U^{*}_{0}(x,\lambda_{j})q(x)U(x,\lambda_{j})dx,\qquad j=1,\ldots,m.$ (35)
Here again the subscript $0$ corresponds to background solutions with $q=0$.
If all $\lambda_{j}$ are complex, then (35) gives $2m$ real equations for $q$,
equal to the number of data points. For $\lambda_{j}\in{\mathbb{R}}$ we will have
$\displaystyle(F_{0}-F)|_{\lambda_{j}}$ $\displaystyle=$ $\displaystyle\int
U^{*}_{0}(x,\lambda_{j})q(x){U}(x,\lambda_{j})dx,$
$\displaystyle\frac{d}{d\lambda}(F_{0}-F)|_{\lambda_{j}}$ $\displaystyle=$
$\displaystyle\int\frac{d}{d\lambda}(U^{*}_{0}(x,\lambda){U}(x,\lambda))|_{\lambda=\lambda_{j}}q(x)dx.$
(36)
Similar to the SISO case, by using Conjecture 3 we replace $U(x,\lambda)$ with
its approximation ${\mathbf{U}}$ in (35) and (36). Again, as in the SISO
case, precomputing ${\mathbf{U}}$ via the data-driven algorithm (33) will
yield the linear system for $q$:
$\displaystyle(F_{0}-F)|_{\lambda_{j}}$ $\displaystyle=$ $\displaystyle\int
U^{*}_{0}(x,\lambda_{j})q(x)\mathbf{U}(x,\lambda_{j})dx,$
$\displaystyle\frac{d}{d\lambda}(F_{0}-F)|_{\lambda_{j}}$ $\displaystyle=$
$\displaystyle\int\frac{d}{d\lambda}(U^{*}_{0}(x,\lambda)\mathbf{U}(x,\lambda))|_{\lambda=\lambda_{j}}q(x)dx.$
(37)
## 4 Numerical Experiments
In this section we present numerical results for reconstructing a 2D two-bump
medium. For simplicity, we considered the noiseless case only. In the presence of
noise, constructing the data-driven ROM may become unstable and requires
regularization. This problem was resolved in [9] by optimal SVD-based
truncation of the unstable part. However, to avoid additional complications,
we skipped that part in this work. In the first experiment we considered two
low contrast bump inclusions (see Fig.1, top left). The measurement setup
mimics one from medical imaging, i.e. we had $K=8$ sources located on the
boundary, two on each side (see red crosses at Fig.1, top left). We used $s=6$
positive spectral values. Here and below, to obtain $q(x)$, we solve the ill-
conditioned linear system (37) by projecting it onto its dominant
eigenvectors. On the top right of Fig. 1 we plotted the reconstruction when
the actual true internal solution ${U}(x,\lambda)$ is available. This is
typically not the case, so we call it ’Cheated IE’. The usual Born
linearization result (obtained by replacing ${U}(x,\lambda)$ in (37) with
${U}_{0}(x,\lambda)$) is shown on the bottom left of Fig. 1. Finally, the
reconstruction using our approach is plotted on the bottom right of Fig.1. As
one can observe, the Born approximation results in multiple artifacts and
produces a significantly worse reconstruction compared to our approach and
’Cheated IE’.
In our second experiment we considered the same two bumps but with increased
contrast (see Fig.2, top left). The source locations were the same as in
previous experiment, however, out of $s=6$ spectral values we took one
negative and five positive. Similar to our first experiment, we plotted the
results obtained using the ’Cheated IE’, the Born approximation and our
approach (see on the top right, bottom left and bottom right of Fig.2,
respectively). Because of higher contrast, the Born linearization failed to
produce a meaningful reconstruction. However, both the ’Cheated IE’ and our
approach still performed well.
Fig. 1: Experiment 1: True medium (top left) and its reconstructions using ’Cheated IE’ (top right), Born linearization (bottom left) and our approach (bottom right).
Fig. 2: Experiment 2: True medium (top left) and its reconstructions using ’Cheated IE’ (top right), Born linearization (bottom left) and our approach (bottom right).
In our third experiment we again considered a medium with 2 bumps with
increased contrast (see Fig.3, top left), however, with data acquisition
similar to surface geophysics or radars. That is, we probed the medium with
$K=12$ sources located on one (upper) side $y=-1$ only (see red crosses at
Fig.3, top left). For better aperture, the lateral extent of the acquisition
boundary was three times larger than the depth of the domain. In this example
we considered $s=6$ positive frequencies. We compared the results of the
’Cheated IE’, the Born approximation and our approach (see on the top right,
bottom left and bottom right of Fig.3, respectively). Similar to the previous
example, the reconstruction via the Born approximation is totally unsatisfactory
because of the high contrast of the inclusions. At the same time, our approach,
along with ’Cheated IE’, performs well.
Fig. 3: Experiment 3: True medium (top left) and its reconstructions using ’Cheated IE’ (top right), Born linearization (bottom left) and our approach (bottom right).
## Appendix A Generalizations and extensions
In this appendix we show how the system (28) is connected to other algorithms
for extracting the unknown $q$ from the ROM, and describe how it lends itself
naturally to extensions to other data sets.
### A.1 Connection to the back-projection algorithm
The back-projection algorithm is based on the idea that the perturbed and
background operators differ only in the lower order term, so that their
difference should approximate $q$. Using the reduced models and replacing $VQ$
with $V_{0}Q_{0}$, the difference
$A_{q}-A_{0}=qI$
is approximated by the operator $\mathcal{Q}$
$A_{q}-A_{0}\approx\mathcal{Q}$ (38)
where
$\mathcal{Q}w=V_{0}{Q}_{0}(T-T_{0})\int(V_{0}{Q}_{0})^{*}w.$ (39)
Recall that $V_{0}Q_{0}$ is the row vector of orthogonalized background
solutions. Here $T$ and $T_{0}$ are obtained via data matching conditions, for
example via time-domain matching, or the frequency matching as described
above.
To see how this relates to Lippmann-Schwinger, suppose that one uses the
operator approximation (39) for $q$ and replaces the right hand side of (28)
with $\langle u_{0},\mathcal{Q}{\mathbf{u}}\rangle$. We get that
$\langle
u_{0},\mathcal{Q}{\mathbf{u}}\rangle=\int(V_{0}Q_{0})(x)(T-T_{0})\int(V_{0}Q_{0})^{*}(x^{\prime}){\mathbf{u}}(x^{\prime})dx^{\prime}u^{*}_{0}(x)dx\\\
=(b_{0}^{*}M_{0}^{-1}b_{0})^{1/2}(b^{*}M^{-1}b)^{1/2}\int\int
e_{1}^{*}(T_{0}+\lambda_{j}I)^{-1}\cdot\\\
\cdot(V_{0}Q_{0})^{*}(x){(V_{0}Q_{0})}(x)(T-T_{0})(V_{0}Q_{0})^{*}(x^{\prime})(V_{0}Q_{0})(x^{\prime})(T+\lambda_{j}I)^{-1}e_{1}dx^{\prime}dx\\\
=(b_{0}^{*}M_{0}^{-1}b_{0})^{1/2}(b^{*}M^{-1}b)^{1/2}e_{1}^{*}(T_{0}+\lambda_{j}I)^{-1}(T-T_{0})(T+\lambda_{j}I)^{-1}e_{1}$
(40)
from orthogonality of the basis. At the same time, the left hand side of (28)
is
$\displaystyle F_{0}(\lambda_{j})-F(\lambda_{j})$ $\displaystyle=$
$\displaystyle
g^{T}(-\Delta+\lambda_{j}I)^{-1}g-g^{T}(-\Delta+{q}I+\lambda_{j}I)^{-1}g$
$\displaystyle=$
$\displaystyle{b_{0}^{*}M_{0}^{-1}b_{0}}e_{1}^{*}(T_{0}+\lambda_{j}I)^{-1}e_{1}-{b^{*}M^{-1}b}e_{1}^{*}(T+\lambda_{j}I)^{-1}e_{1}$
since the ROM matches the data exactly. By definition,
$b_{0}^{*}M_{0}^{-1}b_{0}$ and $b^{*}M^{-1}b$ are the norms of projections of
$g$ on $V_{0}$ and $V$ respectively, which should become close as the
dimension grows, that is, we expect that
$\lim_{m\to\infty}b_{0}^{*}M_{0}^{-1}b_{0}=b^{*}M^{-1}b.$ (41)
We then obtain
$\displaystyle F_{0}(\lambda_{j})-F(\lambda_{j})$ $\displaystyle\approx$
$\displaystyle(b^{*}M^{-1}b)e_{1}^{*}(T_{0}+\lambda_{j}I)^{-1}(T-T_{0})(T+\lambda_{j}I)^{-1}e_{1}$
(42) $\displaystyle\approx$ $\displaystyle\langle
u_{0},\mathcal{Q}{\mathbf{u}}\rangle$ (43)
from (40) and (41). Thus we have that, modulo (41), (39) satisfies the part of
system (28) corresponding to $F_{0}(\lambda_{j})-F(\lambda_{j})$ for
$j=1,\ldots,m$. Hence the system (28) would be valid for
$\lambda_{j},\bar{\lambda}_{j}$, $j=1,\ldots,m$ if we replace $q$ with its
operator approximation (39). The extension to the derivative data in the real
case can be obtained via a limiting transition. In summary, the operator
$\mathcal{Q}$ essentially satisfies the Lippmann-Schwinger-Lanczos equation.
The back-projection algorithm extracts a scalar version of (39) in a natural
way. Indeed, the simplest back-projection reconstruction step is to take
$q(x)=\int
q(x^{\prime})\delta(x-x^{\prime})dx^{\prime}\approx\int\mathcal{Q}\delta(x-x^{\prime})dx^{\prime}\\\
=\int V_{0}(x^{\prime}){Q}_{0}(T-T_{0})(V_{0}(x){Q}_{0})^{*}dx^{\prime}.$ (44)
It can be shown that the probing function (44) uses approximations of
$\delta(x-x^{\prime})$ on the solution snapshot space. Thanks to Conjecture 2,
this can be defined via projection
$\tilde{\delta}(x-x^{\prime})=(V_{0}(x){Q}_{0})(V_{0}(x^{\prime}){Q}_{0})^{*},$
and is related to a known approach of imaging diagonal operators, e.g., [16].
The back-projection approach of [11] instead uses a sharper point spread
function, obtained by the squaring and re-normalization of the delta function:
$\frac{\tilde{\delta}(x-x^{\prime})^{2}}{\sqrt{\int\tilde{\delta}(x-x^{\prime})^{2}dx^{\prime}}},$
which yields
$q(x)\approx\frac{\int\tilde{\delta}(x-x^{\prime})\mathcal{Q}\tilde{\delta}(x-x^{\prime})dx^{\prime}}{\int\tilde{\delta}(x-x^{\prime})^{2}dx^{\prime}}=\frac{V_{0}(x){Q}_{0}(T-T_{0})(V_{0}(x){Q}_{0})^{*}}{\int\tilde{\delta}(x-x^{\prime})^{2}dx^{\prime}}.$
(45)
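On a grid, the simplest probing step (44) is a single matrix product once the orthogonalized background basis and the two ROM matrices are in hand. A discrete sketch, in which the grid, the quadrature weight $h$, and the function name are our assumptions:

```python
import numpy as np

def back_projection(P0, T, T0, h):
    """Discrete version of eq. (44):
    q(x) ~ [int (V0 Q0)(x') dx'] (T - T0) (V0 Q0)(x)^*.

    P0    : (n, m) real array; row i samples the orthogonalized background
            basis (V0 Q0)(x_i)
    T, T0 : (m, m) perturbed and background Lanczos matrices
    h     : grid spacing (quadrature weight)
    """
    w = h * P0.sum(axis=0)        # integral of each basis function over the grid
    return P0 @ ((T - T0).T @ w)  # q(x_i) = w^T (T - T0) P0[i, :]^T
```

As expected from (38), the output vanishes identically when the perturbed and background ROMs coincide ($T=T_{0}$).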
### A.2 Linear mapping from ROM to potential
From (40), (42) and Conjecture 2 we obtain
$s_{0}(\lambda_{j})^{*}(T-T_{0})s(\lambda_{j})\approx
F_{0}(\lambda_{j})-F(\lambda_{j})\approx\int
u^{*}_{0}(x,\lambda_{j}){\mathbf{u}}(x,\lambda_{j})q(x)dx,$ (46)
where
$s_{0}(\lambda_{j})=(b_{0}^{*}M_{0}^{-1}b_{0})^{1/2}e_{1}^{*}(T_{0}+\lambda_{j}I)^{-1}\in{\mathbb{C}}^{m},$
$s(\lambda_{j})=(b^{*}M^{-1}b)^{1/2}e_{1}^{*}(T+\lambda_{j}I)^{-1}\in{\mathbb{C}}^{m},$
and where ${\mathbf{u}}$ is given by (24). Note that $s$ and $s_{0}$ are data
driven.
###### Theorem 4.
Assuming that $s,s_{0}$ and ${\mathbf{u}}$ are precomputed from the data, (46)
yields a linear relationship between $q$ and $(T-T_{0})$ that uniquely defines
the latter for complex $\lambda_{j}$, $j=1,\ldots,m$. For real $\lambda_{j}$,
uniqueness additionally requires the derivative data at $\lambda_{j}$.
###### Proof.
The result follows from uniqueness of the [m-1/m] Padé approximant given $2m$ matching
conditions. ∎
### A.3 Extension to general ROMs
The linear mapping approach is not limited to the ROM obtained via frequency
matching. For any reduced order model we can assume the relationship
$s_{0}(\lambda)^{*}(T-T_{0})s(\lambda)\approx(F_{0}-F)(\lambda)\approx\\\ \int
u^{*}_{0}(x,\lambda){\mathbf{u}}(x,\lambda)q(x)dx,$ (47)
for any $\lambda\in\Sigma\subset\mathbb{C}\setminus{\mathbb{R}}_{-}$, where
$\Sigma$ is a compact complex domain. Then $(T-T_{0})$ can be uniquely defined
via matching (47) for any $m$ complex $\lambda_{j}$ in that domain, or $m$
real frequencies and their derivatives. Furthermore, (47) can be transformed
to the time domain where one will obtain convolution equations.
Acknowledgements. V. Druskin was partially supported by AFOSR grant FA
955020-1-0079. S. Moskow was partially supported by NSF grants DMS-1715425 and
DMS-2008441.
## References
* [1] Christopher Beattie and Serkan Gugercin. Model Reduction and Approximation, chapter 7: Model Reduction by Rational Interpolation, pages 297–334. SIAM, 2017.
* [2] L. Borcea, V. Druskin, A. Mamonov, M. Zaslavsky, and J. Zimmerling. Reduced order model approach to inverse scattering. SIAM Journal on Imaging Sciences, 13(2):685–723, 2020.
* [3] L. Borcea and J. Zimmerling. Personal communication.
* [4] Liliana Borcea and Vladimir Druskin. Optimal finite difference grids for direct and inverse Sturm-Liouville problems. Inverse Problems, 18(4):979–1001, 2002.
* [5] Liliana Borcea, Vladimir Druskin, Fernando Guevara Vasquez, and Alexander V. Mamonov. Resistor network approaches to electrical impedance tomography. Inverse Problems and Applications: Inside Out II, Math. Sci. Res. Inst. Publ, 60:55–118, 2011.
* [6] Liliana Borcea, Vladimir Druskin, Alexander V. Mamonov, Shari Moskow, and Mikhail Zaslavsky. Reduced order models for spectral domain inversion: embedding into the continuous problem and generation of internal data. Inverse Problems, 36(5), 2020.
* [7] Liliana Borcea, Vladimir Druskin, Alexander V. Mamonov, and Mikhail Zaslavsky. A model reduction approach to numerical inversion for a parabolic partial differential equation. Inverse Problems, 30(12):125011, 2014.
* [8] Liliana Borcea, Vladimir Druskin, Alexander V Mamonov, and Mikhail Zaslavsky. Untangling nonlinearity in inverse scattering with data-driven reduced order models. Inverse Problems, 34(6):065008, 2018.
* [9] Liliana Borcea, Vladimir Druskin, Alexander V. Mamonov, and Mikhail Zaslavsky. Robust nonlinear processing of active array data in inverse scattering via truncated reduced order models. Journal of Computational Physics, 381:1–26, 2019.
* [10] Leon Diekmann and I. Vasconcelos. Imaging with the exact linearised Lippmann-Schwinger integral by means of redatumed in-volume wavefields. SEG Technical Program Expanded Abstracts, 2020.
* [11] V. Druskin, A.V. Mamonov, and M. Zaslavsky. A nonlinear method for imaging with acoustic waves via reduced order model backprojection. SIAM Journal on Imaging Sciences, 11(1):164–196, 2018.
* [12] V. Druskin and S. Moskow. Three-point finite-difference schemes, Padé and the spectral Galerkin method. I. One-sided impedance approximation. Mathematics of Computation, 71(239):995–1019, 2002.
* [13] Vladimir Druskin and Leonid Knizhnerman. Gaussian spectral rules for the three-point second differences: I. A two-point positive definite problem in a semi-infinite domain. SIAM Journal on Numerical Analysis, 37(2):403–422, 1999.
* [14] Vladimir Druskin, Alexander V. Mamonov, Andrew E. Thaler, and Mikhail Zaslavsky. Direct, nonlinear inversion algorithm for hyperbolic problems via projection-based model reduction. SIAM Journal on Imaging Sciences, 9(2):684–747, 2016.
* [15] IS Kac and MG Krein. On the spectral functions of the string. Amer. Math. Soc. Transl, 103(2):19–102, 1974.
* [16] Howard W. Levinson and Vadim A. Markel. Solution of the nonlinear inverse scattering problem by T-matrix completion. I. Theory. Physical Review E, 94(4):043317, 2016.
# Topological transitions during grain growth on a finite element mesh
Erdem Eren<EMAIL_ADDRESS>, Department of Materials Science and Engineering, University of California at Davis, Davis, CA 95616, USA
Jeremy K. Mason<EMAIL_ADDRESS>, Department of Materials Science and Engineering, University of California at Davis, Davis, CA 95616, USA
###### Abstract
The topological transitions that occur to the grain boundary network during
grain growth in a material with uniform grain boundary energies are believed
to be known. The same is not true for more realistic materials, since more
general grain boundary energies in principle allow many more viable grain
boundary configurations. A simulation of grain growth in such a material
therefore requires a procedure to enumerate all possible topological
transitions and select the most energetically favorable one. Such a procedure
is developed and implemented here for a microstructure represented by a
volumetric finite element mesh. As a specific example, all possible
transitions for a typical configuration with five grains around a junction
point are enumerated, and some exceptional transitions are found to be
energetically similar to the conventional ones even for a uniform boundary
energy. A general discrete formulation to calculate grain boundary velocities
is used to simulate grain growth for an example microstructure. The method is
implemented as a C++ library based on SCOREC, an open source massively
parallelizable library for finite element simulations with adaptive meshing.
## I Introduction
One of the overarching goals of integrated computational materials engineering
(ICME) Council (2008) is to accurately predict microstructure and property
evolution during thermomechanical processing. At a minimum this would require
a simulation incorporating crystal plasticity and grain boundary motion, and
ideally interactions involving multiple phases and additional material
physics. Such simulations would benefit from recent advances in three-
dimensional microscopy Zaefferer (2005), and specifically three-dimensional
X-ray diffraction microscopy (3DXRD) that enables non-destructive three-
dimensional imaging of millimeter-sized samples Li and Suter (2013); Li et al.
(2014). These could both provide initial conditions for and allow verification
of the output of predictive simulations of microstructure evolution.
Historically, one major difficulty with simulations of microstructure
evolution has been the use of unrealistic grain boundary energy (GBE)
functions. Such functions are difficult to determine experimentally due to the
number of independent variables, but Morawiec recently suggested a procedure
to estimate the GBE from distributions of grain boundary angles around triple
junctions Morawiec (2000). Saylor et al. subsequently used a related technique
to estimate the GBE from EBSD analysis of the surface of aluminum samples
Saylor et al. (2003a, b). While explicit functions for the grain boundary
energy are not yet widely available (with a few exceptions Bulatov et al.
(2014); Runnels et al. (2016)), this will likely change in the near future.
When that happens, a code for microstructure evolution that is able to make
full use of them would ideally already be available.
Existing simulations of microstructure evolution include Monte Carlo (MC)
Potts, cellular automata (CA), phase field (PF) and front tracking models. The
Monte Carlo Potts Srolovitz et al. (1986); Holm et al. (1991) and cellular
automata Raabe (2002); Ding et al. (2006); Janssens (2010) methods are popular
partly because of their low computational complexity and ease of
implementation, but suffer from two relevant shortcomings. First, the
underlying voxel lattice introduces an artificial anisotropy that can be
difficult to eliminate Mason et al. (2015); Mason (2015), and a predictive
model requires kinetics relatively independent of any underlying grid. The
second limiting property of MC Potts and CA models is the difficulty of
connecting the model with physical units of measure. Zhang et al. scaled
quantities defining characteristic time, length and energy but observed that
the grid size affected the bulk energy driving force Zhang et al. (2012).
Mason established spatial and temporal dimensions in a CA model using the
Turnbull relation and a uniform grain boundary energy, but the technique is
not easily generalized to other situations Mason (2015).
The phase field method is an implicit boundary approach that was initially
developed to study phase transitions Steinbach and Pezzolla (1999), and can be
modified to include small deformations and mildly anisotropic interface
energies Moelans et al. (2008). One drawback is the high memory and
computational demand associated with representing grains by continuous fields,
since numerical instabilities associated with steep gradients limit the time
step. Modern implementations often use sparse data structures Gruber et al.
(2006); Vedantam and Patnaik (2006); Miyoshi et al. (2017) and adaptive
meshing Dorr et al. (2010) to address this issue. Still, finite deformations
and arbitrary boundary energies that can depend on the grain boundary plane
pose difficulties. Moreover, the use of diffuse boundaries can complicate the
study of topological aspects of the grain boundary network and can introduce
subtle numerical errors. Jin et al. compared the accuracy of level set and
phase field methods coupled with the Finite Element Method (FEM) in
representing the motion of triple lines during isotropic and anisotropic grain
growth Jin et al. (2015). They observed that under proper grid and time
refinement, both methods performed similarly for the isotropic case. For
anisotropic grain growth though they observed 14.2% error in triple junction
velocity for the level set method and as much as 68.7% error for the PF
method. Some recent functional methods allow for anisotropic grain boundary
properties Ribot et al. (2019), but modeling of finite mechanical deformation
is still not addressed.
Early front tracking methods had the advantage of concentrating computational
resources just on the boundaries, and were often used to study mean curvature
flow Kawasaki et al. (1989); Nagai et al. (1990). FEM-based approaches are a
natural extension of these that can support additional physics, e.g., boundary
energies can be explicitly defined, and volumetric meshes allow for crystal
plasticity Roters et al. (2010). However, FEM-based methods introduce
additional challenges with scalability and require explicit handling of the
topology and mesh. The complexity of the latter has encouraged use of an MC
Potts, CA or PF method in conjunction with a FEM solver. These hybrid schemes
use an implicit boundary representation to model grain growth, and transfer
the resulting microstructure to the FEM to model deformation. Sequential
evolution is achieved by transferring the microstructure back and forth Log et
al. (2008); Tonks et al. (2012); Raabe and Becker (2000). This does not
resolve accuracy concerns though, since transferring the solution potentially
introduces information loss and increases computational complexity.
Of the purely FEM-based approaches, Kuprat developed a three-dimensional
adaptation of the gradient weighted moving finite element (GWFE) method and
implemented GRAIN3D, a serial finite element framework for microstructure
modeling of grain growth Kuprat (2000). The code had an element regularization
scheme to improve the quality of low-quality elements, handled changes in the
microstructure as boundaries evolved, and supported volumetric physics. While
the initial implementation only supported constant grain boundary energies,
more general energies were investigated by Gruber et al. Gruber et al. (2005).
There are two main concerns with using this for general purpose simulations of
microstructure evolution though. First, Kuprat implemented the topological
transitions by switching the last remaining set of elements of a collapsing
boundary segment or volume to the appropriate neighboring volumes Kuprat
(2000). This is not necessarily physical, and the relabeling can cause a
substantial and artificial perturbation of the boundaries. Although the likely
changes to the overall evolution are limited for an isotropic grain boundary
energy, this could substantially affect microstructure trajectory for the
anisotropic case. Second, the existing implementation of the implicit finite
element solver is serial. This prohibits simulating microstructures on
physically relevant scales, such as the $1$ mm3 cylindrical copper sample
imaged using 3DXRD by Li et al. Li et al. (2014).
Using a surface mesh representation, Syha and Weygand studied the effects of
an anisotropic grain boundary energy Syha and Weygand (2010). They proposed to
decompose topological transitions into simpler sequential operations and used
a force-based criterion to select changes to the grain boundary network. While
this could accommodate anisotropic grain boundary energies, decomposing a
topological transition into a sequence of simpler ones could alter the
eventual trajectory of microstructure evolution. Moreover, the implementation
is not volumetric and therefore cannot support volumetric physics.
Lazar et al. studied ideal grain growth by using a surface mesh
representation, a fixed set of topological transitions applicable for uniform
grain boundary energy, and evolving the microstructure with a discretized
formulation satisfying the MacPherson-Srolovitz relation Lazar et al. (2011);
MacPherson and Srolovitz (2007). Although this approach provided insight into
ideal grain growth, it is not applicable to general microstructure evolution
for two reasons. First, the boundary evolution formulation assumes that the
microstructure is composed of quadruple points and triple junctions at all
times except for the moments where transitions occur. While this is generally
applicable for ideal grain growth, it does not hold for experimental
microstructures. For instance, highly twinned microstructures often contain
junction lines joining four grain boundaries, and accommodating such
configurations would require implementing more general topological
transitions. Second, the implementation does not support volumetric physics,
and is only intended to model ideal grain growth.
A FEM code to be used for ICME would ideally be able to handle substantial
volumes of material since many grains are required to accurately reflect
variations in the local deformation response and to model stochastic processes
like recrystallization. Tucker et al. studied convergence of large scale crack
propagation simulations as a function of the number of grains and mesh
refinement in microstructures with abnormal grains Tucker et al. (2015). They
observed that the overall damage response was not significantly affected by
mesh resolution, but that more than 200 grains were required in the sample
microstructure for the convergence of the local response. This shows that a
scalable framework is necessary to accurately capture the local response
during deformation.
To summarize, existing implementations of FEM-based grain growth codes are
limited in several respects. First, they are generally serial, prohibiting
large scale simulations Kuprat (2000); Syha and Weygand (2010); Lazar et al.
(2011). Second, topological transitions are achieved by merging mesh entities
with one of the neighboring grains Kuprat (2000), by sequentially splitting
points Syha and Weygand (2010), or selecting from a restricted set of
operations Lazar et al. (2011), all of which could substantially change the
trajectory of the microstructure evolution. That is, a general FEM framework
to study grain growth and deformation at physically relevant scales does not
appear to exist.
The main contributions of this paper are four-fold. First, a method for
finding all possible topological transitions that can occur around junction
points during grain growth is proposed. Second, operations on the simplicial
mesh have been developed to modify the mesh corresponding to these topological
transitions. Third, a criterion based on the rate of energy dissipation is used
to compare different topological transitions, providing an unambiguous
selection criterion. Fourth, a discrete formulation to simulate grain boundary
motion has been implemented that allows for effectively arbitrary grain
boundary properties Mason (2017). The formulation is explicit and solves for
the motion of each vertex individually, reducing the computational load
greatly compared to the weak formulation of the FEM at the cost of increased
error. A C++ library called VDLIB implements all these operations. VDLIB
interfaces with SCOREC, a massively parallel mesh management library with
local adaptive re-meshing sco ; Seol et al. (2012). The intention is to
provide the foundations for large scale simulations of microstructure
evolution within the framework of ICME.
## II Microstructure representation
Figure 1: (a) The rectangular prism example is comprised of seven grains. A
central rectangular grain is surrounded by six grains, with examples of a
volume, surface, and line outlined in red. (b) A finite element representation
of this microstructure where tetrahedra, triangles, edges and vertices are
used to discretize volumes, surfaces, lines and points. Examples of a
tetrahedron, triangle and edge are outlined in red.
Our intention is to simulate the evolution of a microstructure at a scale that
resolves the grain structure. It will be useful in the following to introduce
specific terminology to identify the various microstructure components. A
grain will be called a volume, a boundary a surface, a boundary junction line
a line, and a boundary junction point a point. A microstructure where each of
these components is outlined in red is shown in Figure 1a. The volumes,
surfaces, lines, and points composing the microstructure formally comprise a
stratified space, and for that reason the microstructure components will
occasionally be referred to as $d$-strata where $d$ is the dimension of the
stratum. The connectivity of the topological components of the microstructure
is defined by the adjacencies of $d$-strata and $(d-1)$-strata; that is, a
volume is bounded by surfaces, surfaces by lines, and lines by points.
Figure 2: Examples indicating adjacency rules. (a) A point should bound at
least three lines. This point bounds three lines, two conical volumes on the
left and right, and two volumes above and below the page. (b) A line should
bound at least three surfaces. (c) A surface separating a top and a bottom
volume and ball embedded in the surface. The line of intersection has no
bounding points. (d) A sphere inside another volume, with a surface that has
no bounding lines.
A point is required to bound at least three lines (Figure 2a), a line at least
three surfaces (Figure 2b), and a surface at least two volumes. One can show
that any topological component not satisfying these relationships is spurious
in the sense that it can be removed by merging the adjacent components of the
next higher dimension. There are no constraints imposed on the number of
adjacent components of the next lower dimension; this allows e.g., a small
spherical volume to be embedded in the middle of a surface (Figure 2c), or a
ball to be embedded in the interior of a volume (Figure 2d).
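These adjacency rules can be checked mechanically. The following is a minimal Python sketch (VDLIB itself is C++, and these names are illustrative rather than its API); a $d$-stratum is flagged as spurious when it bounds fewer than the minimum number of $(d+1)$-strata:

```python
# point -> lines, line -> surfaces, surface -> volumes
MIN_UPWARD_ADJACENCY = {0: 3, 1: 3, 2: 2}

def spurious_strata(dim, adjacency):
    """Strata of dimension `dim` violating the minimum-adjacency rule; these
    can be removed by merging their higher-dimensional neighbors.
    `adjacency` maps a stratum id to the set of strata it bounds."""
    need = MIN_UPWARD_ADJACENCY[dim]
    return [s for s, ups in adjacency.items() if len(ups) < need]

# A line bounding only two surfaces is spurious: the two surfaces can merge.
lines = {"L1": {"S1", "S2", "S3"}, "L2": {"S1", "S2"}}
print(spurious_strata(1, lines))  # prints ['L2']
```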
## III Operations on the microstructure
During the course of grain growth, grain boundaries move to reduce the energy
of the microstructure. Occasionally a surface or volume will shrink to a point
or will expand from a point to participate in the subsequent evolution; such
events are called topological transitions. From the standpoint of the finite
element mesh the corresponding operations are either collapses, where
disappearing boundary segments or volumes are removed, or insertions, where
new boundary segments are introduced to allow the microstructure evolution to
continue.
### III.1 Stratum collapses
Figure 3: The cases of collapse shown on the rectangular prism example. (a)
The initial microstructure. (b) Line collapse. (c) Surface collapse. (d)
Volume collapse.
The average grain size increases during grain growth, meaning components of
the grain boundary network should generally vanish. The criterion for this
topological transition in practice is that the length of a line, area of a
surface, or volume of a grain is shrinking and passes below a threshold tied
to the overall scale of the microstructure. The collapse is effected by
merging all of the bounding points and adjusting the adjacency lists of the
surrounding components as appropriate. Examples of this operation are shown in
Figure 3, with several specifics of the algorithm given in Section II of the
supplementary material (SM).
### III.2 Stratum insertions
Figure 4: (a) Consider the point at the bottom left corner of the central
volume. (b) The neighborhood of the point shows the relationships with the
surrounding surfaces and volumes. (c) The volumes in an exploded view. (d) The
adjacency graph showing the volumes as squares and the surfaces as disks. In this figure, each volume and its corresponding square share the same color.
Often the configuration resulting from a stratum collapse is unstable and the
energy could be lowered by splitting the point to insert a line or a surface.
There are usually many such possible insertions, and the identification of the
most likely one necessarily involves enumerating these possibilities. This
analysis can be performed using the adjacency graph of surfaces and volumes.
The adjacency graph is constructed by placing a node for each volume and
surface and an edge between adjacent volumes and surfaces. The steps involved
are shown in Figure 4 for a particular point. Formally, for non-periodic
microstructures, there is a volume surrounding the simulation cell that is
connected to the surfaces bounding the simulation cell. For the purpose of
enumerating the possible insertions, this is treated similarly to the volumes
within the simulation cell, with the specifics given in Section VIII of the
SM.
Figure 5: A line insertion corresponds to a circuit on the adjacency graph.
(a) A five grain configuration and a circuit going around the point. (b) Every
surface punctured by the circuit is extended by adding the inserted line to
their adjacency lists. (c) The adjacency graph around the point. Edges along
the circuit are dashed.
#### III.2.1 Line insertions
Every possible line insertion corresponds to a circuit on the associated
adjacency graph with one example shown in Figure 5. This configuration
frequently occurs for isotropic grain boundary energies, e.g., when a triple
line collapses and two quadruple points are merged. The circuit shown in
Figure 5a passes through the front, left and right volumes, and every surface
that is punctured by the circuit is adjacent to the inserted line. The circuit
in Figure 5a precisely corresponds to the circuit in Figure 5c, and
enumerating all possible line insertions is equivalent to enumerating all
circuits on the adjacency graph. Algorithms to identify the circuits on a
graph are available in the literature Paton (1969); Gibbs (1969). Not all
possible circuits need to be considered though; if removing the nodes and
edges of the circuit from the adjacency graph leaves only a single connected
component, then the line insertion would create a spurious line and point that
would subsequently be removed. The resulting algorithm is described in detail
in Section III of the SM.
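The enumeration and filtering steps can be sketched in a few lines of Python on a small undirected adjacency graph `{node: set(neighbors)}`. The graph below is a hypothetical configuration (three volumes V1-V3 around an axis, capped by volumes T and B, with one surface node between each adjacent pair), not a configuration from the paper, and the code is illustrative rather than VDLIB's implementation:

```python
def simple_cycles(adj):
    """Simple cycles as frozensets of nodes. Each cycle is enumerated with
    its smallest node as the start; the two traversal directions coincide
    under the frozenset."""
    cycles = set()
    def dfs(start, node, visited, path):
        for nxt in adj[node]:
            if nxt == start and len(path) >= 3:
                cycles.add(frozenset(path))
            elif nxt not in visited and nxt > start:
                dfs(start, nxt, visited | {nxt}, path + [nxt])
    for s in adj:
        dfs(s, s, {s}, [s])
    return cycles

def n_components(adj, removed):
    """Connected components of the graph after deleting `removed` nodes."""
    nodes, seen, comps = set(adj) - removed, set(), 0
    for n in nodes:
        if n in seen:
            continue
        comps, stack = comps + 1, [n]
        while stack:
            u = stack.pop()
            if u not in seen:
                seen.add(u)
                stack.extend(v for v in adj[u] if v in nodes)
    return comps

def admissible_circuits(adj):
    """Discard circuits whose removal leaves a single connected component:
    those insertions would only create a spurious line and point."""
    return [c for c in simple_cycles(adj) if n_components(adj, set(c)) > 1]

edges = [("V1", "S12"), ("V2", "S12"), ("V2", "S23"), ("V3", "S23"),
         ("V3", "S31"), ("V1", "S31"),
         ("T", "ST1"), ("V1", "ST1"), ("T", "ST2"), ("V2", "ST2"),
         ("T", "ST3"), ("V3", "ST3"),
         ("B", "SB1"), ("V1", "SB1"), ("B", "SB2"), ("V2", "SB2"),
         ("B", "SB3"), ("V3", "SB3")]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

good = admissible_circuits(adj)
# The circuit around the axis separates T from B, so it survives the filter;
# a circuit looping through T does not disconnect the graph and is dropped.
```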
Figure 6: A surface insertion corresponds to a set of paths on the adjacency
graph. (a) A five grain configuration, showing a set of three non-intersecting
paths connecting the disconnected (top and bottom) volumes. (b) A surface is
inserted between the disconnected volumes with one bounding line for each
path. Each line is added to the adjacency lists of the surfaces punctured by
the corresponding path. (c) The adjacency graph around the point. The color of
punctured surfaces and edges on the graph match on (a) and (c).
#### III.2.2 Surface insertions
Around a point a surface can only be inserted between two disconnected
volumes. Given a pair of such volumes, the inserted surface is connected to
the surrounding surfaces by some set of inserted lines. Each line corresponds
to a path that starts on one of the disconnected volumes and ends on the
other, as in Figure 6a. A set of such paths completely specifies the topology
around the inserted surface. Every surface punctured by a path is adjacent to
the corresponding inserted line, as in Figure 6b. The set of all possible
surface insertions can be found by constructing all possible sets of non-
intersecting paths between the nodes of the adjacency graph corresponding to
the disconnected volumes. These paths can be found using a standard depth
first search algorithm on the adjacency graph. Unlike line insertions, paths
along surfaces that share a common edge are still acceptable, as the newly
inserted line will bound the inserted surface and will be topologically
different from any preexisting line. The resulting algorithm is described in
detail in Section IV of the SM.
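The two building blocks, simple-path enumeration by depth-first search and grouping of internally vertex-disjoint paths, can be sketched as follows. The example graph is a hypothetical minimal configuration (disconnected volumes T and B joined through three volumes V1-V3), and the names are illustrative:

```python
from itertools import combinations

def simple_paths(adj, src, dst):
    """All simple paths from src to dst by depth-first search."""
    out, stack = [], [(src, [src])]
    while stack:
        node, path = stack.pop()
        if node == dst:
            out.append(path)
            continue
        for nxt in adj[node]:
            if nxt not in path:
                stack.append((nxt, path + [nxt]))
    return out

def disjoint_path_sets(paths, k):
    """Sets of k paths sharing only their endpoints; each set specifies the
    bounding lines of one candidate surface insertion."""
    sets = []
    for combo in combinations(paths, k):
        interiors = [set(p[1:-1]) for p in combo]
        if all(a.isdisjoint(b) for a, b in combinations(interiors, 2)):
            sets.append(combo)
    return sets

# Each path T-STi-Vi-SBi-B punctures two surfaces and yields one bounding
# line of the surface inserted between T and B.
edges = [("T", "ST1"), ("ST1", "V1"), ("V1", "SB1"), ("SB1", "B"),
         ("T", "ST2"), ("ST2", "V2"), ("V2", "SB2"), ("SB2", "B"),
         ("T", "ST3"), ("ST3", "V3"), ("V3", "SB3"), ("SB3", "B")]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

paths = simple_paths(adj, "T", "B")      # three paths, one per Vi
triples = disjoint_path_sets(paths, 3)   # one admissible set of three paths
```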
### III.3 Other considerations
Figure 7: Topological transitions not considered here. (a) Two lines bounding
the same surface meet to form a new point. (b) Two bounding surfaces of a
volume meet to form a new point. (c) The cross section of a cylindrical volume
is reduced to a point.
The algorithms described in this section are conjectured to result in sets of
topological transitions that include all those that occur during grain growth
for a generic initial condition, even with anisotropic grain boundary
energies. A generic initial condition is one for which the type of topological
transition shown in Figure 7 does not occur. That is, the only allowed
topological transitions are those for which the length of a line, the area of
a surface, or the volume of a grain passes through zero. This is not believed
to be a serious constraint though, since the topological transitions in Figure
7 are not expected to occur during grain growth in a generic physical system.
Figure 8: A point connected to two spherical grains, and two grains above and
below the page. The neighborhood of the point is outlined by a dashed line.
The surface in the page is represented twice in the neighborhood of the point.
There are some situations where the adjacency graph of the strata does not
accurately reflect the topology around a point. For example, a single point
could appear on the boundary of a surface more than once, as in Figure 8. This
is the reason that the adjacency graph is constructed from the microstructure
components in a small neighborhood of the point. This can allow spurious
insertions (in the sense of Section II) that are nevertheless required by the
physical system, and any spurious strata can easily be removed after the
topological transition is complete. The detection algorithm for spurious
strata is provided in Section V of the SM. The construction of a small
neighborhood necessarily involves the mesh, and will be considered further in
Section IV.3.
## IV Operations on the mesh
Since the SCOREC library does not natively support changes to the topology of
the finite element mesh, a set of fundamental and localized operations are
proposed and implemented. Given that the microstructure is represented by
means of a finite element mesh, the individual microstructure components are
comprised of sets of simplicial mesh elements. These mesh elements will be
referred to as tetrahedra, triangles, edges, and vertices, or occasionally as
$d$-simplices when that is simpler. The distinction between the topological
and geometric components of the microstructure is reinforced in Figure 1.
Applying the stratum collapse and insertion operations described in Section
III on a simplicial finite element mesh requires some mesh modifications, both
to prepare the mesh for these changes and to execute them. The two basic
operations are lens collapse and lens expansion, associated with stratum
collapse and insertion, respectively. The lens split is an additional
operation used to prepare the mesh around a stratum before stratum collapse or
in the neighborhood of a point before stratum insertion. While the actual
collapse and insertion operations are more complex than those described below,
the underlying approach is the same.
Recalling that the set of volumes, surfaces, lines, and points together with their connections comprises a topological structure called a stratified space, microstructural components will be called strata in this section, i.e., a
volume will be called a $3$-stratum, a surface will be called a $2$-stratum, a
line will be called a $1$-stratum, and a point will be called a $0$-stratum.
For brevity, $S^{d}$ will denote a $d$-stratum and $S^{d}_{i}$ more
specifically the $i$th $d$-stratum.
### IV.1 Stratum collapse
An $S^{d}$ with $d>0$ is represented by a collection of $e$-dimensional mesh
entities with $e=0,1,\dots,d$. Collapsing an $S^{d}$ is equivalent to
collapsing its constituents onto a single vertex. This can be further
simplified to collapsing all edges within the $S^{d}$ and its bounding strata,
giving the central idea of stratum collapse. For simplicity, this section
describes the procedure for a single collapsing edge. This is extended in
Section VII of the SM to stratum collapses involving multiple collapsing
edges.
Figure 9: Lens collapse operation. Left, the lens composed of tetrahedra and
triangles bounded by the collapsing dashed edge. Right, the disc obtained by
collapsing the lens.
On a simplicial mesh, an edge bounds a collection of tetrahedra and triangles
forming a lens around that edge. As shown in Figure 9, the entities that are
bounded by the collapsing edge will also collapse and need to be removed. For
each collapsing triangle, the other two bounding edges form a merging couple.
For each collapsing tetrahedron, the two triangles that are not collapsing
form a merging couple. After the collapse, a new entity is generated for each
merging couple. Such an entity belongs to the lower dimensional stratum of the
merging couple, assuming the merging entities belong to the same or adjacent
strata.
Figure 10: Edge split operation during preconditioning. The thicker edges in
red and blue belong to strata $S^{d}_{i}$ and $S^{e}_{j}$, respectively. If
$S^{d}_{i}$ and $S^{e}_{j}$ are not the same and one does not bound the other,
collapse of the dashed vertical edge is not allowed. Splitting the red edge
and all entities that are bounded by that edge into two creates new entities
which by construction either belong to $S^{d}_{i}$ or strata bounded by
$S^{d}_{i}$.
During the stratum collapse, three issues could arise that would invalidate
the mesh. First, stratum collapse could cause an additional topological
transition if any of the merging entities do not belong to the same or
adjacent strata. Applying the edge split operation shown in Figure 10 to one
of the edges of the problematic couple resolves this situation. Second, it is
possible that two $d$-dimensional entities could unintentionally merge. This
could occur even if they do not belong to the collapsing lens, but
requires that they share $d-1$ vertices and that the remaining vertex of each
be a distinct merging vertex as in Figure 11a. The edge split procedure can
also resolve this by isolating the collapsing entity, as shown in Figure 11. A
third issue that would invalidate the mesh is inversion of one of the
surrounding entities during a collapse. This could occur if the initial and
final positions of a merging vertex lie on distinct sides of the plane
containing the opposite triangle of an adjacent tetrahedron.
Figure 11: The effect of preconditioning for an $S^{1}$ collapse on a two-
dimensional mesh. (a) Collapsing the blue $S^{1}$ and moving the vertices to
the blue node would invert the red triangle and merge it with the purple
triangle. The resulting triangle is shown in dashed lines. (b) The splitting
procedure resolves this problem, but yields the red triangle that could invert
during collapse. (c) Relaxation allows the $S^{1}$ to be collapsed without
inverting any elements.
The three-dimensional equivalent of the preconditioning operation in Figure 11
is applied to edges that are adjacent to a single merging vertex to avoid all
three situations. First, the midpoints of all edges emanating from the merging
vertices are collected to compute their convex hull, and the emanating edges
are split where they intersect the convex hull. This resolves the first two
issues and yields a hull of triangles surrounding the collapsing stratum.
While it is still possible for a surrounding tetrahedron to invert during the
collapse, a relaxation procedure analogous to that in Figure 11c and described
in Section VI of the SM is applied to vertices on the hull to avoid such an
event. After preconditioning, the stratum memberships of the new entities
associated with the merging entities are found. A new entity belongs to the
lowest dimensional stratum that owns one of the merging entities; the
preconditioning certifies that there is a single stratum of the lowest
dimension.
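The inversion check described above reduces to a sign test on the signed volume of a tetrahedron: the element inverts exactly when the moving vertex crosses the plane of the opposite triangle. A minimal sketch with illustrative names:

```python
def signed_volume(a, b, c, d):
    """Signed volume of tetrahedron (a, b, c, d): det[b-a, c-a, d-a] / 6."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    w = [d[i] - a[i] for i in range(3)]
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0])) / 6.0

def would_invert(opposite_triangle, v_old, v_new):
    """True if moving a vertex from v_old to v_new flips the orientation of
    the tetrahedron it forms with the opposite triangle."""
    a, b, c = opposite_triangle
    return signed_volume(a, b, c, v_old) * signed_volume(a, b, c, v_new) < 0

tri = ((0, 0, 0), (1, 0, 0), (0, 1, 0))
print(would_invert(tri, (0, 0, 1), (0, 0, -1)))  # crossing the plane: True
```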
During the course of microstructure evolution, the criterion for collapsing a
stratum is decided at the mesh level with a two step algorithm. First, the
diameter of a stratum is approximated as that of an edge, square or cube with
the same length, area or volume, respectively. If the diameter of a $S^{d}$ is
smaller than a threshold, then the time rate of change of the total length,
area, or volume of the collapsing stratum is calculated using the velocities
associated with the bounding vertices. If this is negative, then the stratum
is collapsed.
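The two-step criterion can be sketched as follows; the effective-diameter convention is taken from the text, while the function names and the single-edge rate helper are illustrative assumptions:

```python
import math

def effective_diameter(measure, dim):
    """Diameter of an edge, square, or cube (dim = 1, 2, 3) with the same
    length, area, or volume as the stratum."""
    return measure ** (1.0 / dim)

def length_rate(p1, p2, v1, v2):
    """d|p2 - p1|/dt for one edge, from the endpoint velocities."""
    d = [b - a for a, b in zip(p1, p2)]
    norm = math.sqrt(sum(x * x for x in d))
    return sum(dx * (vb - va) for dx, va, vb in zip(d, v1, v2)) / norm

def should_collapse(measure, dim, rate, threshold):
    """Collapse only strata that are both small and shrinking."""
    return effective_diameter(measure, dim) < threshold and rate < 0

# An edge of unit length whose far endpoint moves toward the near one.
rate = length_rate((0, 0, 0), (1, 0, 0), (0, 0, 0), (-1, 0, 0))
print(should_collapse(1.0, 1, rate, 2.0))  # small and shrinking: True
```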
### IV.2 Stratum insertion
As described in Section III.2, the insertion of a $S^{1}$ or $S^{2}$ around a
central $S^{0}$ initially involves finding circuits or paths in the adjacency
graph of surfaces and volumes. For this to work on the mesh level, there
should be at least one internal edge in each of the surrounding $S^{2}$ and
$S^{3}$. This is ensured by two operations. First, a lens expansion is applied
to each connected set of tetrahedra belonging to the same $S^{3}$. The $S^{2}$
triangles bounding such a set and adjacent to the $S^{0}$ form a disc that can
be expanded. The expansion forms a new vertex, a new edge and a set of new
triangles and tetrahedra corresponding to the disc triangles, all belonging to
the specified $S^{3}$. Second, if there are any sets of connected triangles
belonging to a $S^{2}$ that consist of a single triangle, the edge opposite
the $S^{0}$ is split. Next, the split operation is applied to the edges
bounded by the central vertex belonging to the $S^{0}$. The vertices created
by these split operations are positioned on a sphere centered at the $S^{0}$
vertex location. The radius $\rho$ of the sphere is smaller than the distance
to the closest triangle opposite the central vertex in any surrounding
tetrahedron.
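The bound on $\rho$ is a point-to-plane computation over the opposite triangles of the surrounding tetrahedra. A sketch, where the `safety` shrink factor in $(0,1)$ is an assumption introduced here to keep $\rho$ strictly below the minimum distance:

```python
import math

def point_plane_distance(p, tri):
    """Distance from point p to the plane of triangle tri = (a, b, c)."""
    a, b, c = tri
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    w = [p[i] - a[i] for i in range(3)]
    return abs(sum(ni * wi for ni, wi in zip(n, w))) / math.sqrt(sum(x * x for x in n))

def projection_radius(center, opposite_triangles, safety=0.5):
    """A radius rho strictly smaller than the distance from the central
    vertex to the closest opposite triangle."""
    return safety * min(point_plane_distance(center, t) for t in opposite_triangles)

# One opposite triangle in the plane x = 1; the admissible radius is 0.5.
print(projection_radius((0, 0, 0), [((1, 0, 0), (1, 1, 0), (1, 0, 1))]))
```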
Preconditioning achieves three things. First, it ensures that corresponding
sets of triangles and edges can be found for each circuit associated with a
$S^{1}$ insertion and each path associated with a $S^{2}$ insertion. These
sets of triangles and edges form disc- or fin-like structures. Second, it
forms a convex cavity of triangles, preventing element inversion after the
insertion. Third, it reduces the size disparity of the surrounding triangles
and the associated bias in the numerical scheme for vertex velocities.
Stratum insertion requires expansion of a disc/fin, creation of triangles and
tetrahedra with the same stratum membership as the edges and triangles on the
disc/fin, and creation of edges and triangles belonging to the new strata. In
the case of a $S^{1}$ insertion, a new $S^{0}$ vertex and a new $S^{1}$ vertex
to be positioned at the interior of the new line are created. The disc
associated with the circuit is used to create three discs, one for the old
$S^{0}$ vertex, one for the new $S^{1}$ vertex, and one for the new $S^{0}$
vertex such that the disc entities belong to the same strata as in the initial
disc. Two new $S^{1}$ edges are created to connect the $S^{1}$ vertex to the
bounding $S^{0}$ vertices. The volume between the discs and around the new
$S^{1}$ edges is filled by triangles and tetrahedra corresponding to edges and
triangles on the discs. In the case of a $S^{2}$ insertion, the entities
bounded by the new $S^{2}$ entities need to be generated. A triangle belonging
to the new $S^{2}$ is generated for each new $S^{1}$ edge, and a new
tetrahedron belonging to the adjoining $S^{3}$ is generated for each new
$S^{2}$. When inserting strata on a $S^{0}$ on the boundary of the simulation,
the algorithm skips the creation of entities for the exterior $S^{3}$. The
final step of the insertion is the relaxation described in Section V.
### IV.3 Spurious stratum detection and insertion
Figure 12: Steps of spurious line insertion. (a) A point, connected to two
volumes top and bottom, and two grains above and below the page. (b) Insertion
of the spurious line, adjacent to two surfaces both separating the same
volumes. (c) The spurious line is removed and the two surfaces are merged.
If an inserted stratum has fewer than the minimum number of higher-dimensional
adjacencies, it is spurious and is removed by merging the higher-dimensional
adjacencies. An example is given in Figure 12. This operation is sometimes
necessary, e.g., when a $S^{0}$ is connected to multiple disjoint sets of
triangles belonging to the same $S^{2}$ or disjoint sets of tetrahedra
belonging to the same $S^{3}$. In this situation, the global connectivity of
the stratification is not representative of the possible local insertions
around the vertex. A local stratification of disjoint sets of entities
belonging to the same stratum is generated, and the set of all possible
insertions is found with the same circuit and path detection method as in the
generic case.
## V Boundary evolution and energy criteria
Figure 13: The choice of insertion can change the overall trajectory of the
system. (a) A two-dimensional degenerate configuration with four grains could
transition to either (b) or (c) since they are energetically equivalent. For (d), both (e) and (f) lower the energy, but (e) more so.
When inserting a new stratum, it is important that the geometry of the stratum
maximizes the energy dissipation rate as the stratum expands. This is
especially important when there is more than one possible stable insertion, as
shown in Figure 13. Even for a constant grain boundary energy, inaccurate
calculations of the geometry could change the selected insertion and
drastically alter the evolution of the system.
Figure 14: The steps of mesh level insertion and reorientation for a digon
insertion. (a) Fins of triangles along paths, shown in bold black. (b)
Insertion of the new digon, where $S^{1}$ edges are shown as green lines and
$S^{2}$ edges are shown as blue lines. (c) The vertices are allowed to move
until one of the ending criteria is reached. (d) The digon is scaled to be
within the projection sphere, and relaxation continues until the energies
converge.
The calculation of the geometry of an inserted stratum begins by isolating the
mesh around the old $S^{0}$ vertex and applying the relaxation algorithm shown
in Figure 14 for a digon insertion. The bold black lines in Figure 14a
represent the fins of triangles on the paths. A new digon is inserted by
expanding the two selected fins, changing the topology as shown in Figure 14b.
The projection sphere of radius $r$ is represented by the black dot-dashed
circle and the inner (one for each $S^{1}$ and $S^{2}$ vertex) and outer
bounding spheres are represented by red dashed circles. The vertices are then
allowed to move according to the equations of motion (Figure 14c) until a
minimum energy is reached or one of the moving vertices intersects an inner or
outer bounding sphere. If one of the inner spheres is intersected, the
insertion is discarded. If the outer sphere is intersected, the inserted
stratum is scaled to be contained within the projection sphere. The steps in
Figure 14c and 14d are repeated until both the energy at the intersection and
the energy after the scaling converge to the final and initial energies
$E_{f}$ and $E_{i}$.
Since the thermodynamically-driven system follows a gradient flow of the
energy, the physical system will transition to the state with the highest
energy dissipation rate. After the process converges, the energy dissipation
rate is calculated for the expanding insertions at the singular configuration
where all the new vertices are positioned at the old vertex position. Assuming
the contributions of the newly generated strata to the forces acting on the
vertices are vanishingly small in this configuration, the dissipation rate of
initial expansion is given by
$W=-\sum_{i}\vec{F}_{i}\cdot\vec{v}_{i}$
where $\vec{F}_{i}$ and $\vec{v}_{i}$ are the force acting on and the velocity of vertex
$i$ and the sum is over all newly inserted bounding vertices.
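The selection rule over the candidate insertions can be sketched as follows, using the expression for $W$ above; the candidate labels and vectors are illustrative:

```python
def dissipation_rate(forces, velocities):
    """W = -sum_i F_i . v_i over the newly inserted bounding vertices."""
    return -sum(f[0] * v[0] + f[1] * v[1] + f[2] * v[2]
                for f, v in zip(forces, velocities))

def select_insertion(candidates):
    """Pick the insertion with the highest positive dissipation rate; return
    None if none dissipates energy, leaving a stable high-valency junction."""
    name, best = None, 0.0
    for label, forces, velocities in candidates:
        w = dissipation_rate(forces, velocities)
        if w > best:
            name, best = label, w
    return name

candidates = [
    ("trigon insertion", [(1.0, 0, 0)], [(-1.0, 0, 0)]),  # W = 1.0
    ("digon insertion",  [(1.0, 0, 0)], [(-0.5, 0, 0)]),  # W = 0.5
]
print(select_insertion(candidates))  # prints trigon insertion
```

Returning `None` when no candidate has a positive rate mirrors the discarding of non-expanding insertions described below.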
Our energy dissipation rate criterion is similar to the depinning force which
Syha and Weygand use to repeatedly split a node by edge insertions Syha and
Weygand (2010). The difference is that our approach instead compares all
possible single stratum insertions at once using the energy dissipation rate
criterion, presumably more closely following the evolution of the physical
system. Moreover, the relaxation algorithm discards insertions that do not
expand, allowing for stable high valency junctions that could form, e.g., at
intersecting deformation twins in TWIP steels.
## VI Modified MacPherson-Srolovitz relation
All numerical approaches should be benchmarked against experimental or
analytical results. One benchmark for polycrystalline microstructures evolving
under constant grain boundary energy is the MacPherson-Srolovitz relation
MacPherson and Srolovitz (2007), the three-dimensional extension of the von
Neumann-Mullins relation von Neumann (1952); Mullins (1956). For a constant
grain boundary energy, this relation should be satisfied by each grain at
every moment in time except for when a topological transition occurs.
The MacPherson-Srolovitz MacPherson and Srolovitz (2007) relation governing
the rates of change of volumes is given by:
$\frac{dV(D)}{dt}=-2\pi\mu\gamma\left[\mathcal{L}(D)-\frac{1}{6}\mathcal{M}(D)\right],$ (1)
where $\mu$ is the constant grain boundary mobility, $\gamma$ is the constant
grain boundary energy, $\mathcal{L}(D)$ is the mean width which measures the
the total mean curvature of grain $D$, and $\mathcal{M}(D)$ is the total
length of the triple lines of grain $D$. Lazar et al. describe a discretized
form of the MacPherson-Srolovitz relation that can be used to calculate the
rate of volume change for grains composed of discretized linear elements Lazar
et al. (2011). For this case, $\mathcal{L}(D)$ and $\mathcal{M}(D)$ reduce to
$\mathcal{L}(D)=\frac{1}{2\pi}\sum_{i}e_{i}\alpha_{i},\qquad\mathcal{M}(D)=\sum_{j}l_{j},$
where $e_{i}$ is the length of the $i$th boundary edge, $\alpha_{i}$ is the
exterior angle around the $i$th boundary edge with respect to the grain $D$,
and $l_{j}$ is the length of the $j$th triple line edge.
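The discretized rate is a one-line computation once the edge data are collected; a sketch with illustrative inputs:

```python
import math

def dV_dt(mu, gamma, boundary_edges, triple_edge_lengths):
    """Discretized MacPherson-Srolovitz rate of volume change, Eq. (1).
    boundary_edges: (length e_i, exterior angle alpha_i) pairs;
    triple_edge_lengths: lengths l_j of the triple-line edges."""
    L = sum(e * a for e, a in boundary_edges) / (2 * math.pi)  # mean width
    M = sum(triple_edge_lengths)                               # triple-line length
    return -2 * math.pi * mu * gamma * (L - M / 6.0)

# A single boundary edge of length 1 with exterior angle pi and no triple
# lines gives L = 1/2, so dV/dt = -pi for mu = gamma = 1.
print(dV_dt(1.0, 1.0, [(1.0, math.pi)], []))
```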
The coefficient of $\mathcal{M}(D)$ is related to the equilibrium exterior
angle of $\pi/3$. For periodic boundary conditions and when all junctions are
composed of triple junctions and quadruple points, this is the expected
exterior angle everywhere. As will be further discussed in Section VII though,
when using an exterior boundary or allowing higher valency junctions due to
the discretized mesh, the MS relation needs to be modified to include more
general exterior angle conditions. Eq. (1) is reformulated as
$\frac{dV(D)}{dt}=-\mu\gamma\left[2\pi\mathcal{L}(D)-\mathcal{N}(D)\right],$ (2)
$\mathcal{N}(D)=\sum_{j}\beta_{j}l_{j},$ (3)
where $\beta_{j}$ is the equilibrium exterior angle around the $j$th junction
line edge. This is determined from the geometric relation
$(\pi-\beta_{j})n=\xi_{j}$
where $n$ is the number of grains and $\xi_{j}$ is the total interior angle
available for all grains around the $j$th junction line edge. For a stable
interior $S^{1}$, $\xi_{j}=2\pi$, $n=3$, $\beta_{j}=\pi/3$ and Eq. (2) reduces
to Eq. (1). Assuming a cubic simulation cell, the stable configuration for an $S^{1}$ on a simulation cell edge has $n=1$, $\xi_{j}=\pi/2$, and $\beta_{j}=\pi/2$, while the stable configuration for an $S^{1}$ on a simulation cell face has $n=2$, $\xi_{j}=\pi$, and $\beta_{j}=\pi/2$. It is possible to have
unstable junctions with $n$ larger than that for the stable configurations.
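The modified relation can be sketched directly from Eqs. (2)-(3) and the geometric relation for $\beta_{j}$; for a stable interior triple line it recovers $\beta=\pi/3$ and hence Eq. (1). Inputs are illustrative:

```python
import math

def equilibrium_exterior_angle(n, xi):
    """beta_j from (pi - beta_j) * n = xi_j."""
    return math.pi - xi / n

def dV_dt_modified(mu, gamma, mean_width, junction_edges):
    """Eq. (2): dV/dt = -mu*gamma*(2*pi*L - N), with N = sum(beta_j * l_j).
    junction_edges: (length l_j, n grains, total interior angle xi_j)."""
    N = sum(l * equilibrium_exterior_angle(n, xi) for l, n, xi in junction_edges)
    return -mu * gamma * (2 * math.pi * mean_width - N)

# Interior triple line: n = 3, xi = 2*pi gives beta = pi/3; a simulation
# cell edge: n = 1, xi = pi/2 gives beta = pi/2, matching the text.
print(equilibrium_exterior_angle(3, 2 * math.pi))  # pi/3
```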
## VII Results and Discussion
We consider some example configurations to enumerate the possible insertions,
and show the effect of local geometry on the selection criterion and the
inserted stratum shape. Then we demonstrate the importance of enumerating all
possible insertions with a relatively simple microstructure that could lead to
transitions not often considered in previous FEM-based methods. Finally, we
show the evolution of a trial microstructure as a demonstration of the
capabilities of our implementation.
Figure 15: All possible insertions for the canonical configuration, classified
by symmetry groups. Observe that digon insertions are obtained by decomposing
circuits containing disconnected $3$-stratum couples into two paths connecting
the couples and using these to insert a $2$-stratum. Digon insertion type-I is
related to petal removal type-I and digon insertion type-II is related to
mixed removal.
To verify that all insertions are considered, consider the five grain
configuration previously described in Figure 5a. All possible insertions can
be found by applying the circuit and path detection algorithms, and these are
shown in Figure 15 (grouped by their symmetries). There are four classes of
$S^{1}$ insertions and three classes of $S^{2}$ insertions. The volume removal
and trigon insertion are generally handled by all grain growth codes, but the
other insertions are usually not since a $S^{1}$ collapse is always followed
by a trigon insertion for a uniform boundary energy. Digons can also be
inserted, with the two types shown in Figure 15.
To be specific, there are one volume removal, three petal removals of type-I, six petal removals of type-II, and six mixed removals possible, all of which are found by circuit analysis. There are also three type-I digon insertions, six type-II digon insertions, and one trigon insertion. Note that digon insertion type-I and
type-II use paths that can be constructed by decomposing the circuits of petal
removal type-I or mixed removal, respectively. When discussing the energy
dissipation rates, it will be shown that these additional operations could be
relevant depending on the grain boundary energy function.
Depending on the geometry of the boundaries, each insertion has a different
energy dissipation rate associated with the subsequent evolution. The energy
dissipation rate criterion states that the insertion with the highest positive
dissipation rate is the one that will be realized. To test this criterion, a
mesh was generated for the configuration in Figure 15. If the geometry is such
that the three $S^{1}$s on top and three $S^{1}$s on the bottom are separated
by the tetrahedral angle, a degenerate configuration is created where any
insertion results in an unstable configuration with increased energy. If the
angles between the $S^{1}$s are instead larger than the tetrahedral angle, a
trigon insertion is favored. Conversely, if the angles between the $S^{1}$s
are smaller than the tetrahedral angle, a volume removal is favored.
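The selection criterion above reduces to a simple maximization. As a hedged sketch (the candidate names and rates below are illustrative, not taken from the paper's simulations):

```python
def select_insertion(candidates):
    """Return the candidate with the highest positive energy dissipation
    rate, or None if every candidate would increase the energy."""
    best = max(candidates, key=lambda c: c[1])
    return best[0] if best[1] > 0 else None

# Example: trigon insertion favored for a compressed configuration.
candidates = [
    ("volume removal", -0.8),
    ("trigon insertion", 1.3),
    ("digon insertion type-I", 0.4),
]
print(select_insertion(candidates))  # trigon insertion
```

In the degenerate configuration described above, every rate is non-positive and no insertion is performed.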
Figure 16: (a) The variation in energy change of insertion with changing
configuration. Blue triangles show the energies for the configuration when the
$S^{1}$ angles in (c) are tetrahedral angles. Red squares denote the energies
for the stretched case, and the green pentagons show the compressed case. (b)
The dissipation rates for the expanding insertions at the singular
configuration, where the volume removal and the trigon insertion are
energetically favorable for the stretched and compressed cases, respectively.
The changes in energy for each insertion are shown in Figure 16. The energies
in Figure 16a are calculated with the new vertices on the outer projection
sphere. For the compressed case, where trigon insertion is favored, it is
notable that the digon insertion is also energy-decreasing and the petal
removal type-I is nearly energy-neutral. The dissipation rates associated with
the expanding insertions are compared in Figure 16b to select the most
energetically favorable insertion.
In the current scheme the inserted triangles apply lower forces than the
surrounding triangles due to the discretized equations of motion, and there is
a small bias towards trigon insertions in the degenerate configuration as is
visible in Figure 16. The bias depends on the selection of the ratio of the
radii of the inner and outer spheres in Figure 14. By increasing the ratio,
smaller radius insertions are discarded, effectively creating a range of $d$
around the value corresponding to the degenerate case where no insertion is
valid. However, this can also cause high-aspect-ratio $S^{2}$ insertions to
hit the inner sphere and be discarded until their aspect ratio decreases over
subsequent time steps.
Figure 17: The effect of orthogonal stretching on the trigon shape. (b)
Starting configuration, where dihedral angles between surfaces separating the
surrounding $S^{3}$ are equal. (a)-(c) After stretching (compressing) the
configuration in the lateral direction, running the relaxation yields a
laterally stretched (compressed) $S^{2}$.
Whereas the vertical stretch changes which insertion is energetically favored,
lateral stretches change the energy-minimizing shape of the inserted stratum
and are reflected in the relaxation scheme. Without this, insertions of
equilateral $S^{2}$ could increase the energy artificially and cause a
physical insertion to be overlooked. Relaxation mitigates the problem, and as
shown in Figure 17, the shape of the inserted $S^{2}$ changes depending on the
geometry.
Figure 18: Simulation of a microstructure composed of $100$ grains under
isotropic grain boundary energy. (a) Initial configuration. (b) The number of
grains is about one half of the starting number.
Finally, we simulate the evolution of some artificial microstructures
generated using Neper Quey et al. (2011). These microstructures are not
periodic, and their evolution requires imposing a local volume preservation
constraint on the exterior vertices. This relaxes the connectivity constraint
on grain surfaces on the exterior, and requires some additional operations
described in Section VIII of the SM.
Figure 19: The rates of volume change for example grains as calculated by the
modified MacPherson-Srolovitz (MS) relation, and first-order approximation
using the equations of motion (EoM).
To demonstrate the capabilities of VDLIB, a trial microstructure composed of
$100$ grains is generated as a Voronoi tessellation using Neper Quey et al.
(2011). The simulation cell is a cube with unit edge length. The mesh is
adaptively refined, with a target edge length set to a fraction of the median
edge length of cubes with equivalent volumes to the grains. In addition, the
$S^{1}$ are required to contain at least two edges to provide sufficient
degrees of freedom. The microstructure is evolved using equations of motion by
Mason Mason (2017) with unit surface drag coefficient and grain boundary
energy. The volume constraint is implemented by the method described in
Section VIII of the SM. The time iteration is implemented by a second order
Runge-Kutta scheme with the time step at each iteration given by
$\text{min}(t_{\text{inv}}/20,t_{\text{fixed}})$, where $t_{\text{inv}}$ is
the shortest time step to invert any element and $t_{\text{fixed}}$ is the
maximum fixed time step of $5.0\times 10^{-5}$. One iteration loop involves
nine sub-iterations of the equations of motion, checking for and implementing
collapses, followed by checking for and implementing insertions. Some
snapshots from the resulting system evolution are shown in Figure 18.
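The time-stepping rule above can be sketched as follows. The velocity function and positions here are placeholders, and Heun's method is used as one common second-order Runge-Kutta variant, since the text does not specify which; only the step-size rule $\min(t_{\text{inv}}/20, t_{\text{fixed}})$ follows the description directly.

```python
import numpy as np

T_FIXED = 5.0e-5  # maximum fixed time step from the text

def rk2_step(x, v_func, t_inv):
    """One Heun (second-order Runge-Kutta) step with the adaptive time
    step min(t_inv / 20, T_FIXED). x: vertex positions, v_func: velocity
    from the equations of motion, t_inv: shortest time step to invert any
    element (all hypothetical placeholders here)."""
    dt = min(t_inv / 20.0, T_FIXED)
    v0 = v_func(x)
    v1 = v_func(x + dt * v0)
    return x + 0.5 * dt * (v0 + v1), dt

# Toy velocity field for illustration only; t_inv/20 exceeds T_FIXED,
# so the fixed step is taken.
x, dt = rk2_step(np.array([1.0, 2.0]), lambda x: -x, t_inv=2.0e-3)
```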
The modified MacPherson-Srolovitz relation in Section VI can be used to
calculate the rate of volume change for grains composed of discretized linear
elements. The resulting actual rates of volume change for a select number of
grains and the predictions of the modified MacPherson-Srolovitz relation are
given in Figure 19. The initial discrepancy is mainly due to the deviation
from the equilibrium angle conditions in the initial condition. The
discrepancy falls as the initial microstructure evolves and the angles around
the junction lines approach the equilibrium values. Topological transitions
can also cause temporary deviations (e.g., grain 78 around $t=0.003$ in Figure
19) which decrease in time. Despite using linear elements and an explicit time
integration scheme, there is overall good agreement with the MacPherson-
Srolovitz relation.
## VIII Conclusion
A computational framework with an explicit grain boundary representation is
proposed to predict grain growth for anisotropic grain boundary energies and
mobilities. This establishes the foundations of a massively parallelizable
general-purpose framework to model microstructure evolution during, e.g.,
high-temperature and finite-strain processes. There does not appear to be any
other software with these capabilities that uses an explicit boundary
representation and supports general changes to the grain boundary network.
Predictive simulations of microstructure evolution during thermomechanical
processing require the ability to represent features such as stable quadruple
junction lines in low stacking-fault energy metals. This in turn requires the
ability to handle anisotropic properties and more general topologies than
usually assumed in the literature. Moreover, the mesh should be partitioned
across multiple processing units to reach physically relevant scales, and the
equations of motion should be local to keep the computational cost linearly
proportional to the number of grains. The discrete equations of motion
proposed by Mason Mason (2017) can accommodate anisotropic grain boundary
energies and drag coefficients. They are local and scalable, and have been
implemented to describe the boundary motion.
A generic method to enumerate the singular transitions is proposed and
implemented. An energy-based insertion selection criterion is proposed and
implemented. The method can utilize models for anisotropic energies, and once
experimental grain boundary energy functions are available, the framework will
be used to simulate grain growth under these conditions. Finally, the work is
done in the context of a massively parallelizable finite element based library
that can support volumetric physics.
## IX Acknowledgments
E.E. was partially supported by the Takamura and Erhardt Family Fellowship.
## References
* Council (2008) N. R. Council, _Integrated Computational Materials Engineering: A Transformational Discipline for Improved Competitiveness and National Security_ (The National Academies Press, Washington, DC, 2008), ISBN 978-0-309-11999-3.
* Zaefferer (2005) S. Zaefferer, in _Textures of Materials - ICOTOM 14_ (Trans Tech Publications, 2005), vol. 495 of _Materials Science Forum_ , pp. 3–12.
* Li and Suter (2013) S. F. Li and R. M. Suter, Journal of Applied Crystallography 46, 512 (2013).
* Li et al. (2014) S. Li, J. Mason, J. Lind, and M. Kumar, Acta Materialia 64, 220 (2014), ISSN 1359-6454.
* Morawiec (2000) A. Morawiec, Acta Materialia 48, 3525 (2000), ISSN 1359-6454.
* Saylor et al. (2003a) D. M. Saylor, A. Morawiec, and G. S. Rohrer, Acta Materialia 51, 3663 (2003a), ISSN 1359-6454.
* Saylor et al. (2003b) D. M. Saylor, A. Morawiec, and G. S. Rohrer, Acta Materialia 51, 3675 (2003b), ISSN 1359-6454.
* Bulatov et al. (2014) V. Bulatov, B. Reed, and M. Kumar, Acta Materialia 65, 161 (2014).
* Runnels et al. (2016) B. Runnels, I. J. Beyerlein, S. Conti, and M. Ortiz, Journal of the Mechanics and Physics of Solids 94, 388 (2016), ISSN 0022-5096.
* Srolovitz et al. (1986) D. Srolovitz, G. Grest, and M. Anderson, Acta Metallurgica 34, 1833 (1986), ISSN 0001-6160.
* Holm et al. (1991) E. A. Holm, J. A. Glazier, D. J. Srolovitz, and G. S. Grest, Phys. Rev. A 43, 2662 (1991).
* Raabe (2002) D. Raabe, Annual Review of Materials Research 32, 53 (2002).
* Ding et al. (2006) H. Ding, Y. He, L. Liu, and W. Ding, Journal of Crystal Growth 293, 489 (2006), ISSN 0022-0248.
* Janssens (2010) K. Janssens, Mathematics and Computers in Simulation 80, 1361 (2010), ISSN 0378-4754, Multiscale modeling of moving interfaces in materials.
* Mason et al. (2015) J. Mason, J. Lind, S. Li, B. Reed, and M. Kumar, Acta Materialia 82, 155 (2015), ISSN 1359-6454.
* Mason (2015) J. Mason, Acta Materialia 94, 162 (2015), ISSN 1359-6454.
* Zhang et al. (2012) L. Zhang, A. D. Rollett, T. Bartel, D. Wu, and M. T. Lusk, Acta Materialia 60, 1201 (2012), ISSN 1359-6454.
* Steinbach and Pezzolla (1999) I. Steinbach and F. Pezzolla, Physica D: Nonlinear Phenomena 134, 385 (1999), ISSN 0167-2789.
* Moelans et al. (2008) N. Moelans, B. Blanpain, and P. Wollants, Calphad 32, 268 (2008), ISSN 0364-5916.
* Gruber et al. (2006) J. Gruber, N. Ma, Y. Wang, A. D. Rollett, and G. S. Rohrer, Modelling and Simulation in Materials Science and Engineering 14, 1189 (2006).
* Vedantam and Patnaik (2006) S. Vedantam and B. S. V. Patnaik, Phys. Rev. E 73, 016703 (2006).
* Miyoshi et al. (2017) E. Miyoshi, T. Takaki, M. Ohno, Y. Shibuta, S. Sakane, T. Shimokawabe, and T. Aoki, npj Computational Materials 3, 25 (2017), ISSN 2057-3960.
* Dorr et al. (2010) M. Dorr, J.-L. Fattebert, M. Wickett, J. Belak, and P. Turchi, Journal of Computational Physics 229, 626 (2010), ISSN 0021-9991.
* Jin et al. (2015) Y. Jin, N. Bozzolo, A. Rollett, and M. Bernacki, Computational Materials Science 104, 108 (2015), ISSN 0927-0256.
* Ribot et al. (2019) J. G. Ribot, V. Agrawal, and B. Runnels, Modelling and Simulation in Materials Science and Engineering 27, 084007 (2019).
* Kawasaki et al. (1989) K. Kawasaki, T. Nagai, and K. Nakashima, Philosophical Magazine B 60, 399 (1989).
* Nagai et al. (1990) T. Nagai, S. Ohta, K. Kawasaki, and T. Okuzono, Phase Transitions 28, 177 (1990).
* Roters et al. (2010) F. Roters, P. Eisenlohr, L. Hantcherli, D. Tjahjanto, T. Bieler, and D. Raabe, Acta Materialia 58, 1152 (2010), ISSN 1359-6454.
* Logé et al. (2008) R. Logé, M. Bernacki, H. Resk, L. Delannay, H. Digonnet, Y. Chastel, and T. Coupez, Philosophical Magazine 88, 3691 (2008).
* Tonks et al. (2012) M. R. Tonks, D. Gaston, P. C. Millett, D. Andrs, and P. Talbot, Computational Materials Science 51, 20 (2012), ISSN 0927-0256.
* Raabe and Becker (2000) D. Raabe and R. C. Becker, Modelling and Simulation in Materials Science and Engineering 8, 445 (2000).
* Kuprat (2000) A. Kuprat, SIAM Journal on Scientific Computing 22, 535 (2000).
* Gruber et al. (2005) J. Gruber, D. C. George, A. P. Kuprat, G. S. Rohrer, and A. D. Rollett, Scripta Materialia 53, 351 (2005), ISSN 1359-6462.
* Syha and Weygand (2010) M. Syha and D. Weygand, Modelling and Simulation in Materials Science and Engineering 18, 015010 (2010).
* Lazar et al. (2011) E. A. Lazar, J. K. Mason, R. D. MacPherson, and D. J. Srolovitz, Acta Materialia 59, 6837 (2011), ISSN 1359-6454.
* MacPherson and Srolovitz (2007) R. D. MacPherson and D. J. Srolovitz, Nature 446, 1053 (2007).
* Tucker et al. (2015) J. C. Tucker, A. R. C. III, A. R. Ingraffea, and A. D. Rollett, Modelling and Simulation in Materials Science and Engineering 23, 035003 (2015).
* Mason (2017) J. K. Mason, Acta Materialia 125, 286 (2017), ISSN 1359-6454.
* (39) _Scientific Computation Research Center at Rensselaer Polytechnic Institute. http://scorec.rpi.edu/ (accessed 1 November 2020)_.
* Seol et al. (2012) S. Seol, C. W. Smith, D. A. Ibanez, and M. S. Shephard, in _2012 SC Companion: High Performance Computing, Networking Storage and Analysis_ (2012), pp. 1124–1132.
* Paton (1969) K. Paton, Commun. ACM 12, 514 (1969), ISSN 0001-0782.
* Gibbs (1969) N. E. Gibbs, J. ACM 16, 564 (1969), ISSN 0004-5411.
* von Neumann (1952) J. von Neumann, in _Metal Interfaces_ (American Society for Metals, Cleveland, Ohio, 1952), pp. 108–110.
* Mullins (1956) W. W. Mullins, Journal of Applied Physics 27, 900 (1956).
* Quey et al. (2011) R. Quey, P. Dawson, and F. Barbe, Computer Methods in Applied Mechanics and Engineering 200, 1729 (2011), ISSN 0045-7825.
Supplemental Materials: Topological transitions during grain growth on a
finite element mesh
## I Notation
For brevity, let $\mathbf{S}^{d}$ be the set of $d$-dimensional strata and
$S^{d}_{i}$ the $i$th $d$-stratum. $\left|A\right|$ is the number of elements
in the set $A$, $A^{e}(S^{d}_{i})$ is the collection of $S^{e}$ adjacent to
$S^{d}_{i}$, and $A^{e}_{j}(S^{d}_{i})$ the $j$th $S^{e}$ adjacent to
$S^{d}_{i}$. $A^{f,e}(S^{d}_{i})$ are the $f$-dimensional strata adjacent to
the $e$-dimensional adjacencies of $S^{d}_{i}$. $\tilde{S}^{d}_{i}$ is a newly
inserted stratum.
The adjacency graph of $S^{2}$ and $S^{3}$ in the neighborhood of a 0-stratum
$S^{0}_{i}$ will be used to enumerate the possible changes to the local
microstructure. Let the adjacency graph around $S^{0}_{i}$ be $G_{i}$, and
have nodes corresponding to the set $A^{2}(S^{0}_{i})\cup A^{3}(S^{0}_{i})$
and edges for each incidence of an $S^{2}$ and $S^{3}$. Similarly,
$G^{\prime}_{i}$ is the adjacency graph with nodes corresponding to the set
$A^{1}(S^{0}_{i})\cup A^{2}(S^{0}_{i})$ and edges for each incidence of an
$S^{1}$ and $S^{2}$. Paths and circuits on $G_{i}$ will be used to find
possible $S^{2}$ and $S^{1}$ insertions around $S^{0}_{i}$, where a path is a
sequence of non-repeating nodes connected by edges and a circuit is a path
that begins and ends at the same node.
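As a hedged, stdlib-only illustration of the path detection, the following sketch enumerates simple paths between two nodes of an adjacency graph given as a dict; circuits can be found analogously by searching for paths that return to the start node. The graph and node names are hypothetical.

```python
def simple_paths(adj, start, end):
    """Enumerate all simple paths (non-repeating nodes) from start to end
    in an undirected graph given as an adjacency dict."""
    paths, stack = [], [(start, [start])]
    while stack:
        node, path = stack.pop()
        if node == end:
            paths.append(path)
            continue
        for nxt in adj[node]:
            if nxt not in path:
                stack.append((nxt, path + [nxt]))
    return paths

# Toy graph: two 3-strata ("A", "B") joined through two 2-strata ("f1", "f2").
adj = {"A": ["f1", "f2"], "B": ["f1", "f2"], "f1": ["A", "B"], "f2": ["A", "B"]}
print(sorted(simple_paths(adj, "A", "B")))
# [['A', 'f1', 'B'], ['A', 'f2', 'B']]
```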
Figure S1: A representation of how the circuit $Q^{i}_{j}$ divides nodes into
disjoint graphs $H_{1}^{i;j}$ and $H_{2}^{i;j}$, which are connected over
$S^{1}$. (a) The initial microstructure consisting of six grains. (b) Removal
of the red dashed circuit $Q^{i}_{j}$ going through (T)op, (B)ack, (L)eft,
(F)ront and (R)ight grains leaves two disjoint graphs. $H_{1}^{i;j}$ consists
of the $S^{2}$ connecting the grains T-F and T-L. $H_{2}^{i;j}$ consists of
the b(O)ttom grain, the $S^{2}$ bounding grain O in the neighborhood of
$S^{0}_{i}$, and the $S^{2}$ connecting the grains R-B. (c) At the
microstructure level, it is easy to see how the components of $H_{1}^{i;j}$
are connected by $S^{1}$s. (d) The final configuration after the insertion
with the associated $S^{1}$ colored red. (e) $G^{\prime}_{i}$, where
$Q^{i}_{j}$ is shown superposed and the dotted blue edges correspond to
$S^{3}-S^{2}-S^{3}$ components of $Q^{i}_{j}$. The nodes of the two subgraphs
of $G^{\prime}_{i}$ can be seen to be connected by solid edges.
Let $Q^{i}$ be the set of circuits on $G_{i}$ and $Q^{i}_{j}$ be the $j$th
such circuit. Removing the circuit $Q^{i}_{j}$ from the graph leaves two
disjoint graphs of nodes which will be denoted as $H_{1}^{i;j}$ and
$H_{2}^{i;j}$. The $H_{k}^{i;j}$ could be disconnected over $G_{i}$, but the
corresponding strata around $S^{0}_{i}$ can always be connected through shared
$S^{1}$, as shown in Figure S1. If one of the $H_{k}^{i;j}$ is empty, that
implies that the circuit $Q^{i}_{j}$ is associated with an existing $S^{1}$
and should be discarded.
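A minimal sketch of this splitting: remove the circuit's nodes from the adjacency graph and collect the connected components that remain (the $H_{1}^{i;j}$ and $H_{2}^{i;j}$ above). The toy graph and labels are hypothetical.

```python
def components_after_removal(adj, circuit):
    """Connected components that remain after removing the circuit's
    nodes from the adjacency graph."""
    removed = set(circuit)
    remaining = [n for n in adj if n not in removed]
    seen, comps = set(), []
    for start in remaining:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            comp.add(n)
            stack.extend(m for m in adj[n] if m not in removed and m not in seen)
        comps.append(comp)
    return comps

# Toy graph: circuit {c1, c2, c3} separating node {a} from {b1, b2}.
adj = {
    "a": ["c1", "c2"],
    "b1": ["c2", "c3", "b2"],
    "b2": ["b1", "c1"],
    "c1": ["a", "b2", "c2"],
    "c2": ["a", "b1", "c1", "c3"],
    "c3": ["b1", "c2"],
}
print([sorted(c) for c in components_after_removal(adj, ["c1", "c2", "c3"])])
# [['a'], ['b1', 'b2']]
```

An empty component signals an existing $S^{1}$, so the circuit would be discarded as described above.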
Let $P^{i;j,k}$ be the set of paths between $S^{3}_{j}$ and $S^{3}_{k}$ in the
vicinity of $S^{0}_{i}$, and $P^{i;j,k}_{l}$ be the $l$th such path. Let
$\raisebox{1.79993pt}{\Large$\wp$}(P^{i;j,k})$ be the set of sets of paths
between $S^{3}_{j}$ and $S^{3}_{k}$, where every set contains at least two
paths and none of the paths in the same set intersect. If $l$ is the index for
this set, then subgraph $H_{m}^{i;j,k;l}$ is the $m$th connected component
remaining when the $l$th set of paths with the end points $S^{3}_{j}$ and
$S^{3}_{k}$ is subtracted from $G_{i}$.
The mesh is composed of simplicial finite elements, including the
$0$-dimensional vertices, $1$-dimensional edges, $2$-dimensional triangles and
$3$-dimensional tetrahedra. An $n$-dimensional simplicial element belongs to
the stratum of lowest dimension in which it is contained, i.e., a vertex may
belong to an $S^{0}$, $S^{1}$, $S^{2}$, or $S^{3}$, an edge may belong to an
$S^{1}$, $S^{2}$, or $S^{3}$, etc. Similar to the notation for strata, we
denote the $i$th member of the set of $d$-dimensional simplicial entities as
$\Delta_{i}^{d}$ and use the adjacency operator $A^{e}(\cdot)$ in the same way
to obtain the set of adjacent mesh entities of dimension $e$. Additionally,
the stratum membership of a simplicial entity is indicated as
$\Delta_{i}^{d}\in S^{e}_{j}$, or $\Delta_{i}^{d}$ belongs to $S^{e}_{j}$. The
set of $e$-dimensional simplicial entities belonging to $S^{d}_{i}$ is
obtained by the membership operator $M^{e}(S^{d}_{i})$. $S(\Delta^{d}_{i})$ is
the stratum that owns the simplicial entity $\Delta^{d}_{i}$. A sample
microstructure showing the simplicial entities outlined in red is provided in
Figure 1b.
## II Stratum collapse
Given a stratum $S^{d}_{i}$ to collapse and a final point $\hat{S}^{0}$,
recursively collapse the bounding lower dimensional strata and then remove
$S^{d}_{i}$. If $\hat{S}^{0}$ is not specified, it is always possible to pick
the first bounding $S^{0}$ (otherwise there are no $S^{0}$ remaining after
collapse). Update the adjacency lists of the surrounding strata.
Implement changes in the stratification when collapsing $S^{d}_{i}$.
if $d=0$ then
    return
if $\hat{S}^{0}=\emptyset$ then $\triangleright$ Assign $\hat{S}^{0}$ if not specified.
    if $A^{0}(S^{d}_{i})\neq\emptyset$ then
        $\hat{S}^{0}\coloneqq A^{0}_{1}(S^{d}_{i})$
if $d=1$ then $\triangleright$ Replace the merging $S^{0}$ with $\hat{S}^{0}$.
    for $S^{1}_{j}\in A^{1,0}(S^{1}_{i})$ do
        for $S^{0}_{k}\in A^{0}(S^{1}_{j})$ do
            if $S^{0}_{k}\in A^{0}(S^{1}_{i})$ then
                $A_{k}^{0}(A^{1,0}_{j}(S^{d}_{i}))\coloneqq\hat{S}^{0}$
        if $|A^{0}(S^{1}_{j})|=2$ and $A_{1}^{0}(S^{1}_{j})=A_{2}^{0}(S^{1}_{j})=\hat{S}^{0}$ then $\triangleright$ If $\hat{S}^{0}$ is repeated, remove one.
            $A^{0}(S^{1}_{j})\coloneqq\\{\hat{S}^{0}\\}$
else $\triangleright$ Collapse the bounding strata.
    for $S^{d-1}_{j}\in A^{d-1}(S^{d}_{i})$ do
        Collapse $(S^{d-1}_{j},\hat{S}^{0})$
if $d<3$ then $\triangleright$ Remove the collapsing stratum.
    for $S^{d+1}_{j}\in A^{d+1}(S^{d}_{i})$ do
        $A^{d}(S^{d+1}_{j})\coloneqq A^{d}(S^{d+1}_{j})\ \backslash\ \\{S^{d}_{i}\\}$
Algorithm 1 Collapse $(S^{d}_{i},\hat{S}^{0}\coloneqq\emptyset)$
## III 1-stratum insertion
Given a candidate $S^{0}_{i}$ and a circuit $Q^{i}_{j}$ on $G_{i}$ insert the
new stratum $\tilde{S}^{1}$ corresponding to $Q^{i}_{j}$. Add the new strata
$\tilde{S}^{0}$ and $\tilde{S}^{1}$ to the stratification and set the $S^{0}$
adjacencies of $\tilde{S}^{1}$ as $\\{S^{0}_{i},\tilde{S}^{0}\\}$. Update the
adjacency lists of the surrounding strata.
Implement changes in the stratification when inserting $\tilde{S}^{1}$ using $Q^{i}_{j}$.
Create new strata $\tilde{S}^{1}$, $\tilde{S}^{0}$.
$A^{0}(\tilde{S}^{1})\coloneqq\\{S^{0}_{i},\tilde{S}^{0}\\}$ $\triangleright$ Adjacency of $\tilde{S}^{1}$.
for $S^{2}_{k}\in Q^{i}_{j}$ do $\triangleright$ Add $\tilde{S}^{1}$ to $A^{1}(S^{2}_{k})$, for $S^{2}$ on $Q^{i}_{j}$.
    $A^{1}(S^{2}_{k})\coloneqq A^{1}(S^{2}_{k})\cup\\{\tilde{S}^{1}\\}$
for $S^{2}_{k}\in H_{2}^{i;j}$ do $\triangleright$ Replace $S^{0}_{i}$ with $\tilde{S}^{0}$.
    for $S^{1}_{l}\in A^{1}(S^{2}_{k})$ do
        if $S^{0}_{i}\in A^{0}(S^{1}_{l})$ then
            $A^{0}(S^{1}_{l})\coloneqq A^{0}(S^{1}_{l})\cup\\{\tilde{S}^{0}\\}\ \backslash\ \\{S^{0}_{i}\\}$
Algorithm 2 $S^{1}$ insertion $(S^{0}_{i},Q^{i}_{j})$.
## IV 2-stratum insertion
Given a candidate $S^{0}_{i}$ and a set of paths
$\raisebox{1.79993pt}{\Large$\wp$}_{l}(P^{i;j,k})$ on $G_{i}$, insert the
corresponding new stratum $\tilde{S}^{2}$. Add the new strata $\tilde{S}^{2}$,
$\tilde{S}^{0}_{m}$ for
$m=1:|\raisebox{1.79993pt}{\Large$\wp$}_{l}(P^{i;j,k})|-1$, and
$\tilde{S}^{1}_{m}$ for
$m=1:|\raisebox{1.79993pt}{\Large$\wp$}_{l}(P^{i;j,k})|$ to the
stratification. Set the $(d-1)$-dimensional adjacency lists of the new strata
$\tilde{S}^{2}$ and $\tilde{S}^{1}_{m}$ for
$m=1:|\raisebox{1.79993pt}{\Large$\wp$}_{l}(P^{i;j,k})|$. Update the adjacency
lists of the surrounding strata.
Implement changes in the stratification when inserting a $\tilde{S}^{2}$ using $\raisebox{1.79993pt}{\Large$\wp$}_{l}(P^{i;j,k})$.
Create new stratum $\tilde{S}^{2}$.
Create new strata $\tilde{S}^{0}_{m}$ for $m\coloneqq 1:|\raisebox{1.79993pt}{\Large$\wp$}_{l}(P^{i;j,k})|-1$. $\triangleright$ In addition to $S^{0}_{i}$.
Create new strata $\tilde{S}^{1}_{m}$ for $m\coloneqq 1:|\raisebox{1.79993pt}{\Large$\wp$}_{l}(P^{i;j,k})|$.
$A^{2}(S^{3}_{j})\coloneqq A^{2}(S^{3}_{j})\cup\\{\tilde{S}^{2}\\}$ $\triangleright$ Add $\tilde{S}^{2}$ to $A^{2}(S^{3}_{j})$.
$A^{2}(S^{3}_{k})\coloneqq A^{2}(S^{3}_{k})\cup\\{\tilde{S}^{2}\\}$ $\triangleright$ Add $\tilde{S}^{2}$ to $A^{2}(S^{3}_{k})$.
$A^{1}(\tilde{S}^{2})\coloneqq\\{\tilde{S}^{1}_{1},\tilde{S}^{1}_{2},\dots,\tilde{S}^{1}_{m}\\}$ with $m\coloneqq|\raisebox{1.79993pt}{\Large$\wp$}_{l}(P^{i;j,k})|$ $\triangleright$ Set the adjacency of $\tilde{S}^{2}$.
$A^{0}(\tilde{S}^{1}_{1})\coloneqq\\{S^{0}_{i},\tilde{S}^{0}_{1}\\}$ $\triangleright$ Set the adjacencies of the $\tilde{S}^{1}$.
for $m\coloneqq 2:|\raisebox{1.79993pt}{\Large$\wp$}_{l}(P^{i;j,k})|-1$ do
    $A^{0}(\tilde{S}^{1}_{m})\coloneqq\\{\tilde{S}^{0}_{m-1},\tilde{S}^{0}_{m}\\}$
$A^{0}(\tilde{S}^{1}_{m})\coloneqq\\{\tilde{S}^{0}_{m-1},S^{0}_{i}\\}$ with $m\coloneqq|\raisebox{1.79993pt}{\Large$\wp$}_{l}(P^{i;j,k})|$
for $P_{m}^{i;j,k}\in\raisebox{1.79993pt}{\Large$\wp$}_{l}(P^{i;j,k})$ do $\triangleright$ Add $\tilde{S}^{1}$ to the adjacency lists of the $S^{2}$ on path $P_{m}^{i;j,k}$.
    for $S^{2}_{o}\in P_{m}^{i;j,k}$ do
        $A^{1}(S^{2}_{o})\coloneqq A^{1}(S^{2}_{o})\cup\\{\tilde{S}^{1}_{m}\\}$
for $m\coloneqq 2:|\raisebox{1.79993pt}{\Large$\wp$}_{l}(P^{i;j,k})|$ do $\triangleright$ For $S^{1}$ adjacent to $S^{2}\in H_{m}^{i;j,k;l}$, replace $S^{0}_{i}$ with $\tilde{S}^{0}_{m-1}$.
    for $S^{2}_{o}\in H_{m}^{i;j,k;l}$ do
        for $S^{1}_{p}\in A^{1}(S^{2}_{o})$ do
            if $S^{0}_{i}\in A^{0}(S^{1}_{p})$ then
                $A^{0}(S^{1}_{p})\coloneqq A^{0}(S^{1}_{p})\cup\\{\tilde{S}^{0}_{m-1}\\}\backslash\\{S^{0}_{i}\\}$
Algorithm 3 $S^{2}$ insertion $(\raisebox{1.79993pt}{\Large$\wp$}_{l}(P^{i;j,k}))$.
## V Check spurious strata
Spurious stratum insertions can occur for a $S^{1}$ insertion if there are two
$S^{2}$ on the circuit that bound the same $S^{3}$, or for a $S^{2}$ insertion
between two components of the same $S^{3}$. An example configuration leading
to such an event is shown in Figure 8 of the main text.
Compare the upper adjacencies of $S^{d}_{i}$ to check if it is spurious.
if $d=0$ or $d=1$ then $\triangleright$ Higher adjacency rule for valid $S^{0}$ and $S^{1}$.
    return $|A^{d+1}(S^{d}_{i})|<3$
else
    if $d=2$ and $|A^{d+1}(S^{d}_{i})|=2$ then $\triangleright$ If the $S^{2}$ bounds the same $S^{3}$ twice, it is spurious.
        return $A^{d+1}_{1}(S^{d}_{i})=A^{d+1}_{2}(S^{d}_{i})$
return FALSE
Algorithm 4 Check spurious $(S^{d}_{i})$.
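The spurious-stratum check translates almost directly into code. The following Python transcription is a sketch, with stratum identifiers represented as plain strings:

```python
def is_spurious(d, upper_adjacencies):
    """Sketch of the spurious-stratum check: d is the stratum dimension,
    upper_adjacencies the list of (d+1)-strata adjacent to the stratum
    (identifiers may repeat)."""
    if d in (0, 1):
        # A valid S^0 or S^1 needs at least three higher adjacencies.
        return len(upper_adjacencies) < 3
    if d == 2 and len(upper_adjacencies) == 2:
        # An S^2 bounding the same S^3 on both sides is spurious.
        return upper_adjacencies[0] == upper_adjacencies[1]
    return False

assert is_spurious(1, ["S3_a", "S3_b"])      # only two upper adjacencies
assert is_spurious(2, ["S3_a", "S3_a"])      # bounds the same grain twice
assert not is_spurious(2, ["S3_a", "S3_b"])
```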
## VI Relaxation during collapse
Figure S2: The flow chart for the calculation of the volumes and the update of
the parameters $w$, $v_{th}$ required for the conjugate gradient descent
calculations.
The preconditioning operation relaxes the positions of the surrounding
vertices to prevent inversions of the surrounding tetrahedra during a
collapse. More specifically, the vertices connected to the collapsing strata
by an edge form a hull. A conjugate gradient search over the hull vertex
positions minimizes the positive definite potential $\phi$ so that the
surrounding tetrahedra do not invert during the collapse of the stratum. The
potential is defined as
$\displaystyle\phi$ $\displaystyle=\sum_{i\in\Delta^{0}}\phi_{i},$
$\displaystyle\phi_{i}$
$\displaystyle=\frac{1}{w}\ln\left\\{\sum_{\Delta^{3}_{j}\in
A^{3}(\Delta_{i}^{0})}\exp\left[-wV_{j}(\bar{v})/V_{t}\right]+1\right\\},$
where $\Delta^{0}$ is the set of vertices on the hull, $\bar{v}$ is the
position vector of all vertices, $V_{j}$ is the volume of $j$th tetrahedron,
and $w$ is a weight for scaling the exponent. $w$ is defined as
$80\frac{V_{t}}{\mbox{abs}(V_{n})+\epsilon}$, where $V_{t}$ is the total
starting volume of all tetrahedra surrounding the hull vertices, and
$V_{n}=\mathrm{min}(V_{j})$. In practice $\epsilon$ is $2.22507\times
10^{-298}$, $10^{10}$ times the smallest representable double. Using the
algorithm shown in Figure S2, the volumes are updated until $V_{n}>v_{th}$,
where $v_{th}$ is the desired volume ratio of the surrounding tetrahedra at
the collapsed configuration defined as $v_{th}=(V_{m}+\epsilon)\times 10^{-5}$
where $V_{m}=\mathrm{min}(\mathrm{abs}(V_{j}))$. After each time the positions
of the hull vertices are updated, $V_{n}$ and $V_{m}$ (but not $w$ or
$v_{th}$) are updated. If the smallest volume $V_{n}$ is smaller than twice
the starting most negative volume $V_{n,0}$, or $V_{n}<0$ and
$\mathrm{abs}(V_{n})<\mathrm{abs}(V_{n,0}/10)$, $w$ is updated and the
conjugate gradient is reinitialized to increase the convergence rate.
The negative of the gradient of the potential is given by
$\displaystyle-\overline{\nabla}_{i}\phi$
$\displaystyle=-\overline{\nabla}_{i}\phi_{i}-\sum_{\Delta^{0}_{j}\in
A^{0,1}(\Delta^{0}_{i})}\overline{\nabla}_{i}\phi_{j},$
$\displaystyle-\overline{\nabla}_{i}\phi_{i}$
$\displaystyle=\sum_{k}\left[\frac{\exp(-wA_{k})}{\sum_{l}\exp(-wA_{l})+1}\overline{\nabla}_{i}V_{k}\right],$
where $k,l\in A^{3}(\Delta^{0}_{i})$ and $A_{k}=V_{k}(\bar{v})/V_{t}$. The
form of $-\overline{\nabla}_{i}\phi_{j}$ is the same but $k,l\in
A^{3}(\Delta^{0}_{i})\cap A^{3}(\Delta^{0}_{j})$. It is possible that due to
the starting geometry a non-inverting configuration with a minimum volume of
$v_{th}$ cannot be found. The relaxation continues until either a
non-inverting configuration is found or the iteration limit is reached.
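The potential above is a smoothed barrier: as any tetrahedron volume $V_{j}$ approaches zero or becomes negative, $\phi_{i}$ grows. A small numeric sketch, with made-up volumes and the $w$ scaling from the text (the epsilon used here is illustrative):

```python
import math

def phi_i(volumes, v_total, w):
    """Soft-minimum barrier potential phi_i = (1/w) ln(sum_j exp(-w V_j/V_t) + 1),
    which penalizes tetrahedra whose volume approaches zero or goes negative."""
    return math.log(sum(math.exp(-w * v / v_total) for v in volumes) + 1.0) / w

v_total = 4.0                                  # total starting volume V_t
w = 80.0 * v_total / (abs(0.05) + 1e-12)       # w ~ 80 V_t / |V_n|, as in the text
healthy = phi_i([1.0, 1.0, 1.0, 1.0], v_total, w)
near_inverting = phi_i([1.0, 1.0, 1.0, 0.001], v_total, w)
assert near_inverting > healthy  # a shrinking volume raises the potential
```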
## VII Generalized collapse of multiple lenses
Figure S3: The generalized lens collapse corresponding to the triangle
$(0,1,2)$. Merging entities form sets rather than couples and an entity might
be merging in one lens and collapsing in another, and will collapse during the
stratum collapse. The lens corresponding to edge (a) $(0,1)$, (b) $(1,2)$, (c)
$(0,2)$. In the lenses corresponding to edges $(0,2)$ and $(1,2)$ the triangle
$(0,1,a)$ is merging, but since it is collapsing in the lens of edge $(0,1)$,
it ultimately collapses. Edges $(0,a)$, $(1,a)$, and $(2,a)$ form a merging
set. (d) The final configuration after the collapse.
The generalized stratum collapse follows the same procedure described in
Section IV.1 for lens collapse, i.e., the main steps are preconditioning the
mesh, finding the stratum memberships of the remaining entities, destroying
old entities, and regenerating the entities using the last remaining vertex.
When there is more than one edge in the collapsing stratum, it is possible
that some merging entities form sets rather than couples to form a new entity.
Furthermore, an entity can be a merging or a collapsing entity in different
lenses, in which case all merging entities in the associated set will collapse
as shown in Figure S3. Similar to lens collapse, for each set of merging
entities a new entity will be regenerated by replacing the merging vertex with
the final vertex and using the new stratum membership.
## VIII Using an exterior shell for volume preservation
Evolving down the gradient of surface energy, a non-periodic mesh will not
preserve volume without additional constraints. Volume preservation is
achieved by creating a stratification composed of the simulation cell corners,
edges and surfaces. These strata are called 0-, 1-, and 2-shells,
respectively. The shells are determined at the start of the simulation. In
this section, surface strata will indicate strata on the simulation cell
boundary. Each $S^{0}$ is first tested to identify those on the simulation
cell corners, edges or surfaces, and ones on the corners are attached to
$0$-shells. Next, the $S^{1}$ on the simulation cell boundaries are tested to
identify those on the simulation cell edges by a depth first search, and ones
on the edges are attached to 1-shells. The 2-shells are constructed similarly.
During the simulation the motions of the vertices on these shells are
projected onto the corresponding shell to preserve the total volume.
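The projection step can be sketched as keeping only the velocity components tangential to the shell; the cell-aligned unit axes below are assumptions for a cubic simulation cell, not the implementation itself.

```python
import numpy as np

def project_to_shell(velocity, shell_dim, axes):
    """Project an exterior vertex's velocity onto its shell so the vertex
    stays on the simulation cell boundary: no axes for a 0-shell (corner),
    one unit direction for a 1-shell (edge), two for a 2-shell (face)."""
    v = np.zeros_like(velocity)
    for a in axes[:shell_dim]:
        v += np.dot(velocity, a) * a
    return v

vel = np.array([0.3, -0.2, 0.5])
x, y = np.eye(3)[0], np.eye(3)[1]
assert np.allclose(project_to_shell(vel, 0, []), 0.0)              # corner: fixed
assert np.allclose(project_to_shell(vel, 2, [x, y]), [0.3, -0.2, 0.0])
```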
Figure S4: Exterior $S^{0}$ insertions require additional exterior $S^{2}$ and
$S^{3}$ in the adjacency graph. (a) The red, blue and green $S^{3}$ at the
corner of the simulation cell, having two, two, and one surface $S^{2}$,
respectively. (b) The adjacency graph obtained by including a node for the
exterior $S^{3}$ which doesn’t allow any new surface $S^{2}$ insertion. (c)
Instead of a single exterior $S^{3}$, $S^{2}$ and $S^{3}$ are included in the
adjacency graph for each surface $S^{1}$ and $S^{2}$, respectively. (d) The
augmented adjacency graph in (c) can be used to detect surface $S^{2}$
insertions, e.g., a trigon insertion using the dashed, dotted and dash-dotted
paths between the exterior and the red grain. (e) The corresponding change in
the microstructure.
Any newly inserted strata during stratum insertions around exterior $S^{0}$
are associated with the appropriate shells. Since the exterior can be multiply
connected to the volumes touching the exterior surface, one artificial
exterior $S^{3}$ is created for each disconnected mesh component of the
surface $S^{2}$, as shown in Figure S4. The paths and circuits detected on
this augmented adjacency graph contain multiplicities as there is actually a
single exterior $S^{3}$. These are removed by replacing all artificial strata
on the paths and circuits with the only exterior $S^{3}$ and only allowing
uninterrupted segments of the artificial strata on a single circuit or path.
Finally, computing the convex hull for collapses of strata touching the
exterior shell requires that the positions of vertices belonging to the
collapsing stratum be added to the set of points.
## IX Six grain configuration
As a further demonstration of the insertion detection, a more complicated
configuration with six grains is generated and some of the possible $S^{1}$
insertions are shown in Figure S5. This list is not exhaustive, but
demonstrates the capability of the detection algorithm. In addition to these,
digon, trigon, and rectangle $S^{2}$ insertions are possible. Similar to
Figure 16, the dependence of the energy change for different insertions on the
dihedral angle is shown in Figure S6.
Figure S5: To demonstrate the capabilities of the method, a configuration
formed by six grains meeting at the center of a cube is generated. The
possible classes of insertions are more numerous than in Figure 15. Some
examples are shown, denoted by the number of $2$-strata on
$H_{1}^{i;j}/Q_{j}^{i}/H_{2}^{i;j}$, though this is not a complete descriptor
(e.g., there are three $3/6/3$ type insertions). In addition to these, digon,
trigon, and tetragon insertions between each of the three disconnected
$S^{3}$ pairs are possible.
Figure S6: The variation of energy with changing dihedral angle. The
degenerate configuration is the one whose outer dimensions correspond to a
cube. Red squares denote the case of the cube stretched in one direction and
green pentagons denote the compressed case.
|
Position, Padding and Predictions:
A Deeper Look at Position Information in CNNs
Md Amirul Islam,
Matthew Kowal,
Sen Jia,
Konstantinos G. Derpanis,
and Neil D. B. Bruce
M. A. Islam, M. Kowal, and K. G. Derpanis are with Ryerson University, Canada. Email: {mdamirul.islam, matthew.kowal<EMAIL_ADDRESS>
S. Jia is with the University of Waterloo, Canada. Email<EMAIL_ADDRESS>
N. Bruce is with the University of Guelph, Canada. Email<EMAIL_ADDRESS>
M. A. Islam, K. G. Derpanis, and N. Bruce are also with the Vector Institute for Artificial Intelligence, Toronto, Canada.
K. G. Derpanis is also with the Samsung AI Centre Toronto, Canada.
In contrast to fully connected networks, Convolutional Neural Networks (CNNs) achieve efficiency by learning weights associated with local filters with a finite spatial extent. An implication of this is that a filter may know what it is looking at, but not where it is positioned in the image. In this paper, we first test this hypothesis and reveal that a surprising degree of absolute position information is encoded in commonly used CNNs. We show that zero padding drives CNNs to encode position information in their internal representations, while a lack of padding precludes position encoding. This gives rise to deeper questions about the role of position information in CNNs: (i) What boundary heuristics enable optimal position encoding for downstream tasks?; (ii) Does position encoding affect the learning of semantic representations?; (iii) Does position encoding always improve performance? To provide answers, we perform the largest case study to date on the role that padding and border heuristics play in CNNs. We design novel tasks which allow us to quantify boundary effects as a function of the distance to the border. Numerous semantic objectives reveal the effect of the border on semantic representations. Finally, we demonstrate the implications of these findings on multiple real-world tasks to show that position information can both help or hurt performance.
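The padding mechanism described above can be illustrated with a toy 1-D example (a hypothetical NumPy sketch, not the paper's experiments): with zero padding, even a spatially uniform input produces border-dependent responses, giving later layers a signal about absolute position.

```python
import numpy as np

def conv1d_same(x, w):
    """1-D 'same' correlation with zero padding (odd kernel length)."""
    k = len(w) // 2
    xp = np.pad(x, k)                     # zero padding at both borders
    return np.array([np.dot(xp[i:i + len(w)], w) for i in range(len(x))])

x = np.ones(8)                            # spatially uniform input: no position signal
w = np.array([1.0, 1.0, 1.0])             # uniform averaging kernel
y = conv1d_same(x, w)
print(y)  # border outputs differ from interior ones: [2. 3. 3. 3. 3. 3. 3. 2.]
```

Although the input carries no positional information, units near the border respond differently from interior units, so a stack of such layers can read out absolute position.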
Absolute Position Information, Padding, Boundary Effects, Canvas, Location Dependent Classification and Segmentation.
[]Md Amirul Islam
is currently a Ph.D. student at the Department of Computer Science at Ryerson University, Canada. He is also a Postgraduate Affiliate at the Vector Institute for AI, Toronto. He received his M.Sc. in Computer Science from University of Manitoba, Canada in 2017 and his B.Sc. in Computer Science and Engineering from North South University, Bangladesh in 2014. He has worked as a research intern at Noah’s Ark Labs, Huawei Canada, Toronto in summer 2019 and 2020. His research interests are in the area of computer vision, with a focus on exploring various mechanisms which allow for humans to understand the different properties of CNNs and semantic understanding of a scene.
[]Matthew Kowal
received a B.A.Sc. in Applied Mathematics and Engineering from Queen's University, Canada in 2017, and a M.Sc. in Computer Science from Ryerson University, Canada in 2020. He is currently pursuing his Ph.D. in Computer Science from Ryerson University, Canada. In 2020, he joined NextAI as a Scientist in Residence. His research interests include computer vision and more specifically designing interpretable deep learning algorithms for various visual tasks.
[]Sen Jia
is a postdoctoral researcher in the Vision and Image Processing lab at the University of Waterloo. His research spans a wide range of areas, including saliency detection in computer vision and uncertainty in neural networks. Prior to this, he worked as a postdoctoral researcher at Ryerson University (Canada) and the University of Bristol (UK) in 2018 and 2016, respectively. He received his PhD degree from the University of Bristol in 2017 under the supervision of Prof. Nello Cristianini. He received his Master of Science (MSc) degree with distinction from the University of Newcastle in 2010 and his Bachelor of Engineering (BE) from Beijing University of Technology in 2008.
[]Konstantinos G. Derpanis
received the Honours Bachelor of Science (BSc) degree in computer science from the University of Toronto, Canada, in 2000, and the MSc (supervisors John Tsotsos and Richard Wildes) and PhD (supervisor Richard Wildes) degrees in computer science from York University, Canada, in 2003 and 2010, respectively. For his dissertation work, he received the Canadian Image Processing and Pattern Recognition Society (CIPPRS) Doctoral Dissertation Award 2010 Honourable Mention. Subsequently, he was a postdoctoral researcher in the GRASP Laboratory at the University of Pennsylvania under the supervision of Kostas Daniilidis. In 2012, he joined the Department of Computer Science at Ryerson University, Toronto, where he is an associate professor. He is a Faculty Affiliate at the Vector Institute for AI, Toronto. In 2019, Kosta joined the Samsung AI Centre in Toronto as a Research Scientist.
He currently serves as an AE for TPAMI and is an AC for CVPR 2021 and ICCV 2021. His main research field of interest is computer vision with emphasis on motion analysis and human motion understanding, and related aspects in image processing and machine learning.
[]Neil D. B. Bruce
Dr. Neil Bruce graduated from the University of Guelph with a B.Sc. double major in Computer Science and Pure Mathematics. He then attended the University of Waterloo for an M.A.Sc. in System Design Engineering and York University for a Ph.D. in Computer Science. Prior to joining Guelph, he worked in the Department of Computer Science at Ryerson University, and before that at the University of Manitoba as Assistant and then Associate Professor. Dr. Bruce has postdoctoral experience at INRIA (France) and Epson Canada. He is the recipient of the Falconer Rh Young Researcher Award and is a Faculty Affiliate at the Vector Institute for AI, Toronto. His research has explored solutions to issues in computer vision, deep learning, human perception, neuroscience and visual computing.
|
# The Optimal Dynamic Treatment Rule SuperLearner: Considerations,
Performance, and Application
Lina Montoya (email address for correspondence: <EMAIL_ADDRESS>), University
of North Carolina, Chapel Hill, Department of Biostatistics
Mark van der Laan, University of California, Berkeley, Division of
Biostatistics
Alexander Luedtke, University of Washington, Department of Statistics
Jennifer Skeem, University of California, Berkeley, Departments of Social
Welfare and Public Policy
Jeremy Coyle, University of California, Berkeley, Division of Biostatistics
Maya Petersen, University of California, Berkeley, Division of Biostatistics
(January 2021)
###### Abstract
The optimal dynamic treatment rule (ODTR) framework offers an approach for
understanding which kinds of patients respond best to specific treatments – in
other words, treatment effect heterogeneity. Recently, there has been a
proliferation of methods for estimating the ODTR. One such method is an
extension of the SuperLearner algorithm – an ensemble method to optimally
combine candidate algorithms extensively used in prediction problems – to
ODTRs. Following the “causal roadmap,” we causally and statistically define
the ODTR and provide an introduction to estimating it using the ODTR
SuperLearner. Additionally, we highlight practical choices when implementing
the algorithm, including choice of candidate algorithms, metalearners to
combine the candidates, and risk functions to select the best combination of
algorithms. Using simulations, we illustrate how estimating the ODTR using
this SuperLearner approach can uncover treatment effect heterogeneity more
effectively than traditional approaches based on fitting a parametric
regression of the outcome on the treatment, covariates and treatment-covariate
interactions. We investigate the implications of choices in implementing an
ODTR SuperLearner at various sample sizes. Our results show the advantages of:
(1) including a combination of both flexible machine learning algorithms and
simple parametric estimators in the library of candidate algorithms; (2) using
an ensemble metalearner to combine candidates rather than selecting only the
best-performing candidate; (3) using the mean outcome under the rule as a risk
function. Finally, we apply the ODTR SuperLearner to the “Interventions”
study, an ongoing randomized controlled trial, to identify which justice-
involved adults with mental illness benefit most from cognitive behavioral
therapy (CBT) to reduce criminal re-offending.
## 1 Introduction
The primary objective of a clinical trial is often to evaluate the overall,
average effect of a treatment on an outcome in a given population [1, 2, 3].
To accomplish this objective in the point treatment setting, baseline
covariate, treatment, and outcome data are often collected and the average
treatment effect (ATE) is estimated, quantifying the average impact of the
treatment in a population. Researchers may then interpret the impact of the
treatment as beneficial, neutral, or harmful. In this interpretation, the
treatment’s impact is one-size-fits-all; in other words, the effect of the
treatment is interpreted as the same for everyone in the study population.
But, it may be the case that an intervention tends to yield better outcomes
for certain kinds of people but not for others. For example, because justice-
involved people with mental illness are a heterogeneous group with diverse
symptoms, risk factors, and other treatment-relevant characteristics [4, 5],
assigning Cognitive Behavioral Therapy (CBT) may decrease the probability of
recidivism for individuals with high risk of recidivism but not low risk of
recidivism [6]. An ATE analysis may lead one to conclude that there is no
treatment effect in a given population when there is, in fact, a differential
treatment effect across levels of certain covariates.
Precision health aims to shift the question from “which treatment works best”
to “which treatment works best _for whom_?” (sometimes, it further asks: at
what time? And/or at what dose? [3]). The aim of asking this more refined
question is to achieve better subject outcomes. While a range of novel
study designs can help to address these questions by generating data in which
individualized treatment effects are unconfounded [3, 7, 8], data from classic
randomized controlled trials also provide a rich data source for discovering
treatment effect heterogeneity. Under the assumption of no unmeasured
confounding, the same methods can be applied to observational data.
One way of learning which treatment works best for whom is to estimate effects
within subgroups. Following our above example within the field of criminal
justice, one could split the sample into subjects who are likely versus
unlikely to re-offend, and look at the average effect of CBT on recidivism
within these two risk categories. Such a classic subgroup analysis helps to
move a step closer to understanding the treatment that works best for whom.
However, the need to restrict the number of tests performed and to pre-specify
analyses limits traditional subgroup analyses to comparing intervention
effects in a small set of subgroups in which heterogeneous treatment effects
are expected [2, 9]. In practice, the subject characteristics that are most
important for determining the best-suited intervention may not be clear based
on background knowledge. Further, effectively predicting the type of
intervention that a subject will best respond to may require accounting for a
wide range of subject characteristics and complex interactions between them.
For instance, identifying the subjects most likely to respond to CBT versus,
for example, treatment as usual (TAU) may require considering not only risk
level, but also age, educational attainment, sex, substance abuse,
psychological distress, and internal motivation to adhere to treatment – as
well as various interactions between these. In summary, the challenge is to
take a wide range of subject characteristics and flexibly learn how to best
combine them into a strategy or rule that assigns to each subject the specific
intervention that works best for him or her.
Estimating the optimal dynamic treatment rule (ODTR) for a given population
offers a formal approach for learning about heterogeneous treatment effects
and developing such a strategy. A dynamic treatment rule can be thought of as
a rule or algorithm where the input is subject characteristics and the output
is an individualized treatment choice for each subject [10, 11, 12, 13]. An
optimal dynamic treatment rule (also known as an optimal treatment regime,
optimal strategy, individualized treatment rule, optimal policy, etc.) is the
dynamic treatment rule that yields the best overall subject outcomes [14, 15].
In our criminal justice example, a dynamic treatment rule takes as input
subject characteristics such as age, criminal history, and education level and
outputs a treatment decision – either CBT or TAU. The ODTR is the dynamic
treatment rule under which the highest proportion of patients are not re-
arrested. It is the most effective and, if one incorporates cost or
constraints on resources [16], efficient way of allocating the interventions
at our disposal based on measured subject characteristics.
There have been major advances in estimating the ODTR within the fields of
statistics and computer science, with important extensions to the case where
treatment decisions are made at multiple points in time. Regression-based
approaches, such as Q-learning, learn the ODTR by modeling the outcome
regression (i.e., the expected outcome given treatment and covariates)
directly [14, 17, 18, 19, 20]. Robins and Murphy developed methods of
estimating the ODTR by modeling blip-to-reference functions (i.e., the strata-
specific effect of the observed treatment versus control) and regret functions
(i.e., the strata-specific loss incurred when given the optimal treatment
versus the observed treatment), respectively [14, 15, 21]. Direct-estimation
approaches to learning the ODTR, such as outcome weighted learning (OWL), aim
to search among a large class of candidate rules for the one that yields the
best expected outcome [22, 23, 24]. These are examples of broad classes of
ODTR estimators; within and outside of them there has been a proliferation of
methods to estimate the ODTR (see [3, 25, 26] for reviews of the state of the
art in estimating ODTRs and precision medicine).
Given the vast number of methods available for estimating the ODTR, the
question becomes: which approach to use? In some settings, some algorithms may
work better than others. SuperLearning [27] (or, in the prediction context,
stacked regression [28]) was originally proposed as a method for data-
adaptively choosing or combining prediction algorithms. The basic idea is to
define a library of candidate algorithms and choose the candidate or the
combination of candidates that gives the best performance based on V-fold
cross-validation. This requires defining: (1) the algorithms to include in the
library, (2) a parametric family of weighted combinations of these algorithms,
the “metalearning” step [29], and (3) the choice of performance metric (i.e.,
risk) as the criterion for selecting the optimal combination of algorithms.
Given these three requirements, one can estimate the risk for each
combination of algorithms using V-fold cross-validation, and choose the
combination with the lowest cross-validated risk. The SuperLearner framework
has been implemented extensively for prediction problems [30, 31, 32, 33], and
has been extended to the ODTR setting [34, 35]. In particular, Luedtke and van
der Laan showed that in the randomized controlled trial (RCT) and sequential
multiple assignment randomized trial (SMART) [25, 36, 7] settings, under the
assumption that the loss function is bounded, the ODTR SuperLearner estimator
will be asymptotically equivalent to the ODTR estimator chosen by the oracle
selector (that is, the ODTR estimator, among the candidate ODTR estimators,
that yields the lowest risk under the true data distribution [27]). This
implies that the ODTR SuperLearner will asymptotically do as well as or better
than any single candidate estimator in the library, provided that none of the
candidate algorithms are correctly specified parametric models. If there is a
well-specified parametric model in the library, the ODTR SuperLearner
estimator of the ODTR will achieve near parametric rates of convergence to the
true rule.
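The V-fold selection step described above can be sketched in miniature. The following is a hypothetical discrete-selector example in Python (the paper's implementation is in R, and the full SuperLearner uses a weighted combination of candidates chosen by the metalearner rather than picking a single one):

```python
import numpy as np

def cv_select(candidates, X, y, V=5, rng=None):
    """Discrete selector: return the candidate with the lowest V-fold CV risk (MSE)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n = len(y)
    folds = rng.permutation(n) % V          # balanced random fold assignment
    risks = []
    for fit_predict in candidates:
        sq_err = []
        for v in range(V):
            train, valid = folds != v, folds == v
            preds = fit_predict(X[train], y[train], X[valid])
            sq_err.append(np.mean((y[valid] - preds) ** 2))
        risks.append(np.mean(sq_err))       # cross-validated risk of this candidate
    return int(np.argmin(risks)), risks

# Two toy candidates: the training mean, and a least-squares line.
mean_learner = lambda Xtr, ytr, Xva: np.full(len(Xva), ytr.mean())
ols_learner = lambda Xtr, ytr, Xva: np.polyval(np.polyfit(Xtr, ytr, 1), Xva)

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, 500)
y = 2 * X + rng.normal(scale=0.1, size=500)
best, risks = cv_select([mean_learner, ols_learner], X, y)
print(best)  # -> 1: the linear learner wins on this linear truth
```

The ODTR extension replaces the MSE risk with a rule-specific risk, such as the (negative) mean outcome under the candidate rule.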
These theoretical results lay important groundwork for understanding the
asymptotic benefits to using the algorithm; however, less has been published
on how the ODTR SuperLearner performs in finite samples, the practical
implications of key choices when implementing the algorithm, and illustrations
of implementing this algorithm on real RCT data. In this paper, we provide an
introduction to the implementation of the ODTR SuperLearner in the point
treatment setting, and use simulations to investigate the tradeoffs inherent
in these user-supplied choices and how they may differ with varying sample
sizes. In particular, for sample sizes 1,000 and 300, we examine: (1) how to
select the candidate algorithms for estimating the ODTR; specifically, the
costs and benefits to expanding the library to include a wider set of diverse
ODTR algorithms, including simple parametric models versus more data adaptive
algorithms, and blip-based versus direct estimation algorithms; (2)
implications of the choice of parametric family for creating weighted
combinations for candidate ODTR learners (i.e., choice of metalearner); and,
(3) implications of the choice of risk function used to judge performance and
thereby select the optimal weighted combination of candidate learners.
Finally, we apply the ODTR SuperLearner to real data generated from the
Correctional Intervention for People with Mental Illness, or “Interventions,”
trial, an ongoing RCT in which justice-involved adults with mental illness
were either randomized to CBT or TAU. In applying the ODTR SuperLearner to
this sample, we aim to identify which people benefit most from CBT versus TAU,
in order to reduce recidivism.
The organization of this article is as follows. First, we step through the
causal roadmap (as described in [37]) for defining the true ODTR for a given
population. We focus on the case in which baseline covariates are measured, a
single binary treatment is randomized, and an outcome is measured. We then
give a brief introduction to some estimators of the ODTR, and in particular,
describe the SuperLearner approach for estimating the optimal rule that builds
on Luedtke and van der Laan’s work [34]. We investigate the implications of
the three sets of implementation choices outlined above in finite samples
using simulations (with corresponding R code illustrating implementation of
all estimators considered), and the performance under such options. Lastly, we
show results for the ODTR SuperLearner algorithm applied to the
“Interventions” Study. We close with concluding remarks and future directions.
## 2 Causal Roadmap and ODTR Framework
### 2.1 Data and Causal Model
Consider point-treatment data where $W\in\mathcal{W}$ are baseline covariates,
$A\in\\{0,1\\}$ is the treatment, and $Y\in\mathbb{R}$ is the outcome measured
at the end of the study. Our data can be described by the following structural
causal model (SCM), $\mathcal{M}^{F}$ [38]:
$\displaystyle W$ $\displaystyle=f_{W}(U_{W})$ $\displaystyle A$
$\displaystyle=f_{A}(W,U_{A})$ $\displaystyle Y$
$\displaystyle=f_{Y}(W,A,U_{Y})\text{ ,}$
where the full data $X=(W,A,Y)$ are endogenous nodes,
$U=(U_{W},U_{A},U_{Y})\sim P_{U}$ are unmeasured exogenous variables, and
$f=(f_{W},f_{A},f_{Y})$ are structural equations. If it is known that data
were generated from an RCT using simple randomization with equal probability
to each arm, then the above structural causal model would state that $Y$ may
be affected by both $W$ and $A$, but that $W$ does not affect $A$ (as in the
“Interventions” trial); this can be represented in the above model by letting
$U_{A}\sim Bernoulli(p=0.5)$ and $A=U_{A}$. In this point treatment setting, a
dynamic treatment rule is a function $d$ that takes as input some function $V$
of the measured baseline covariates $W$ and outputs a treatment decision:
$V\rightarrow d(V)\in\\{0,1\\}$. For the remainder of the paper, we consider
the case where $V=W$; in other words, we consider treatment rules that
potentially respond to all measured baseline covariates. However,
consideration of dynamic rules based on a more restrictive set of baseline
covariates is also of frequent practical interest, allowing, for example, for
consideration of dynamic rules based on measurements that can be more readily
attained; all methods described extend directly to this case. We denote the
set of all dynamic treatment rules as $\mathcal{D}$.
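The SCM and randomized treatment assignment can be simulated directly. Below is a minimal Python sketch under a hypothetical data-generating process (the structural equations and coefficients are illustrative assumptions, not the "Interventions" distribution):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Structural equations for a hypothetical RCT data-generating process:
W = rng.normal(size=n)                      # W = f_W(U_W)
A = rng.binomial(1, 0.5, size=n)            # A = U_A ~ Bernoulli(0.5): W does not affect A
U_Y = rng.normal(size=n)
Y = 0.2 * W + A * (W > 0) + U_Y             # Y = f_Y(W, A, U_Y): treatment helps only when W > 0

# A dynamic treatment rule d: V -> {0, 1}; here V = W and d(W) = I(W > 0)
d = (W > 0).astype(int)
print(f"P(A = d(W)) ~ {np.mean(A == d):.2f}")   # randomization agrees with d about half the time
```

Because $A=U_{A}$ here, treatment is independent of $W$ by design, which is exactly what the randomization assumption later requires.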
The “Interventions” data consist of baseline covariates $W$, which include
intervention site, sex, ethnicity, age, Colorado Symptom Index (CSI) score (a
measure of psychiatric symptoms), level of substance use, Level of Service
Inventory (LSI) score (a risk score to predict future recidivism that
summarizes risk factors like criminal history, educational and employment
problems, and attitudes supportive of crime), number of prior adult
convictions, most serious offense, and Treatment Motivation Questionnaire
(TMQ) score (a measure of internal motivation for undergoing treatment); the
randomized treatment $A$, either a manualized Cognitive Behavioral
Intervention for people in the criminal justice system (abbreviated CBT;
$A=1$) or treatment as usual (TAU), primarily psychiatric or correctional
services ($A=0$); and a binary outcome $Y$, an indicator that the person was
not re-arrested within one year after study enrollment. Table 2 shows the
distribution of the data.
### 2.2 Target Causal Parameter
Let $d(W)$ be a deterministic function that takes as input a vector of
baseline covariates, and gives as output a treatment assignment (in this case,
either $0$ or $1$). For a given rule $d$, we intervene on the above SCM to
derive counterfactual outcomes:
$\displaystyle W$ $\displaystyle=f_{W}(U_{W})$ $\displaystyle A$
$\displaystyle=d(W)$ $\displaystyle Y_{d(W)}$
$\displaystyle=f_{Y}(W,d(W),U_{Y})\text{ .}$
Here, $Y_{d(W)}$ is the counterfactual outcome for a subject if his/her
treatment $A$ were assigned using the dynamic treatment rule $d(W)$; to
simplify notation we refer to this counterfactual outcome as $Y_{d}$. The
counterfactual outcomes for a person if he/she were assigned treatment or
given control are denoted $Y_{1}$ and $Y_{0}$, respectively. Together, the
distribution of the exogenous variables $P_{U}$ and structural equations $f$
imply a distribution of the counterfactual outcomes, and the SCM provides a
model for the set of possible counterfactual distributions:
$P_{U,X}\in\mathcal{M}^{F}$.
Our target parameter of interest in this paper is the ODTR, defined as the
rule that, among all candidate rules $\mathcal{D}$, yields the best expected
outcomes. Using the convention that larger values of $Y$ correspond to better
outcomes, an ODTR is defined as a maximizer of $E_{P_{U,X}}[Y_{d}]$ over all
candidate rules
$\displaystyle d^{*}$
$\displaystyle\in\operatorname*{arg\,max}_{d\in\mathcal{D}}E_{P_{U,X}}[Y_{d}]\text{
.}$ (1)
Any such ODTR can be defined in terms of the conditional additive treatment
effect (CATE), namely $E_{P_{U,X}}[Y_{1}-Y_{0}|W]$, which is the effect of
treatment for a given value of covariates $W$. Any ODTR assigns treatment 1
and 0 to all strata of covariates for which the CATE is positive and negative,
respectively. If the CATE is 0 for a particular $W$ (i.e., there is no
treatment effect for that strata of $W$), the ODTR as defined above may have
more than one maximizing rule and therefore may be non-unique; this is why the
RHS of equation 1 above is a set [39]. An ODTR can take an arbitrary value for
strata at which the CATE is 0. If we assume that assigning treatment 0 is
preferable to assigning treatment 1 in the absence of a treatment effect, then
we would prefer the following ODTR as a function of the CATE:
$d^{*}(W)\equiv\mathbb{I}\Big{[}E_{P_{U,X}}[Y_{1}-Y_{0}|W]>0\Big{]}\text{ .}$
In other words, if a subject’s expected counterfactual outcome is better under
treatment versus no treatment given his or her covariate profile, then assign
treatment; otherwise, assign control. A subject’s counterfactual outcome under
the ODTR is $Y_{d^{*}}$, and the expected outcome had everyone received the
treatment assigned by the ODTR is $E_{P_{U,X}}[Y_{d^{*}}]$.
Following our applied example, $Y_{1}$, $Y_{0}$, and $Y_{d}$ are the
counterfactual outcomes for a person if he/she were given CBT, TAU, and either
CBT or TAU based on the rule $d$, respectively; here, $d^{*}$ is the rule for
assigning CBT versus TAU using subjects’ covariates that would yield the
highest probability of no re-arrest, $E_{P_{U,X}}[Y_{d^{*}}]$.
### 2.3 Identification and Statistical Parameter
We assume that our observed data were generated by sampling $n$ independent
observations $O_{i}\equiv(W_{i},A_{i},Y_{i})$, $i=1,\ldots,n$, from a data
generating system described by $\mathcal{M}^{F}$ above (e.g., the
“Interventions” study consists of 441 i.i.d. observations of $O$). The
distribution of the observed data can be written as:
$P_{0}(O)=P_{W,0}(W)g_{0}(A|W)P_{Y,0}(Y|A,W)\text{ ,}$
where $P_{W,0}$ is the true distribution of $W$; $g_{0}$ is the true
conditional distribution of $A$ given $W$, or the treatment mechanism;
$P_{Y,0}$ is the true conditional distribution of $Y$ given $A$ and $W$. The
distribution of the data $P_{0}$ is an element of the statistical model
$\mathcal{M}$, which in our RCT example is semi-parametric. Further, if the
data are generated from an RCT design, as in the “Interventions” study, then
the true $g_{0}$ is known, and the backdoor criteria (with the implied
randomization assumption [38, 40]), $Y_{d}\perp A\,|\,W\quad\forall
d\in\mathcal{D}$, and the positivity assumption,
$Pr\Big{(}\min_{a\in\\{0,1\\}}g_{0}(A=a|W)>0\Big{)}=1$, hold by
design; in an observational data setting the randomization assumption requires
measurement of a sufficient set of baseline covariates, and the positivity
assumption may also pose greater challenges [41].
Define $Q(a,w)\equiv E[Y|A=a,W=w]$. Under the above assumptions,
$E_{P_{U,X}}[Y_{d}]$ (a parameter of the counterfactual distribution) is
identified as $E_{0}[Q_{0}(A=d,W)]$ (a parameter of the observed distribution)
for any candidate rule $d$. Thus, the ODTR is identified by
$d_{0}^{*}\in\operatorname*{arg\,max}_{d\in\mathcal{D}}E_{0}[Q_{0}(A=d,W)]\text{
.}$
In addition, the CATE is identified as $Q_{0}(1,W)-Q_{0}(0,W)$; this
difference is sometimes referred to as the blip function $B_{0}(W)$. Then, the
true optimal rule can also be defined as a parameter of the observed data
distribution using the blip function:
$d_{0}^{*}(W)\equiv\mathbb{I}[B_{0}(W)>0]\text{ .}$
Analogous to the definition of the ODTR as a function of the CATE, in words,
this rule says that if treatment is effective for a type of subject $W=w$
(i.e., the blip is greater than 0), treat that type of person; if not, do not
treat him/her. If all subjects were assigned treatment in this way, the
expected outcome would be maximized, which is the goal.
## 3 Estimation of the ODTR
We denote estimators with a subscript $n$, so that, for example, an estimator
of the true ODTR $d^{*}_{0}$ is $d^{*}_{n}$. Estimates are functions of
$P_{n}$, which is the empirical distribution that gives each observation
weight $\frac{1}{n}$; $P_{n}\in\mathcal{M}_{NP}$, and $\mathcal{M}_{NP}$ is a
non-parametric model. In what follows, we briefly describe examples of common
methods for estimating the ODTR. We first describe methods that estimate the
ODTR via an estimate of the blip function. We then describe methods that
directly estimate a rule that maximizes the mean outcome.
### 3.1 Blip-based Approaches
Blip-based approaches aim to learn the blip, which implies an ODTR. A benefit
of doing this is that one can look at the distribution of the predicted
estimates of the blip for a given sample. Having the blip distribution allows
one to identify the patients in a sample who benefit most (or least, or
little) from treatment. Additionally, estimating the blip function can allow
for estimating the ODTR under resource constraints; for example, an ODTR in
which only $k$% of the population can receive treatment [16]. Below we
illustrate two methods of estimating the ODTR by way of the blip function
(i.e., blip-based estimators of the ODTR).
##### Single stage Q-learning
A plug-in estimator naturally follows from the above definition of the optimal
rule. One can estimate $Q_{n}(A,W)$ using any regression-based approach for
estimating an outcome regression and predict at $Q_{n}(1,W)$ and $Q_{n}(0,W)$.
This provides an estimate of the blip: $B_{n}(W)=Q_{n}(1,W)-Q_{n}(0,W)$, which
implies an estimate of the optimal rule: $d_{n}^{*}=\mathbb{I}[B_{n}(W)>0]$
[3, 18, 25, 42].
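A minimal sketch of this plug-in (Q-learning) estimator, assuming a linear working model with a treatment-covariate interaction (the method allows any regression approach, and the paper's accompanying code is in R; Python and the simulated data here are illustrative assumptions):

```python
import numpy as np

def q_learning_rule(W, A, Y):
    """Plug-in ODTR estimate via a linear outcome regression with a
    treatment-covariate interaction: E[Y|A,W] ~ b0 + b1*W + b2*A + b3*A*W."""
    X = np.column_stack([np.ones_like(W), W, A, A * W])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    Q1 = np.column_stack([np.ones_like(W), W, np.ones_like(W), W]) @ beta
    Q0 = np.column_stack([np.ones_like(W), W, np.zeros_like(W), np.zeros_like(W)]) @ beta
    blip = Q1 - Q0                       # B_n(W) = Q_n(1,W) - Q_n(0,W)
    return (blip > 0).astype(int), blip  # d_n*(W) = I[B_n(W) > 0]

rng = np.random.default_rng(0)
n = 5_000
W = rng.normal(size=n)
A = rng.binomial(1, 0.5, size=n)
Y = W + A * W + rng.normal(scale=0.5, size=n)   # true blip B_0(W) = W
d_n, blip_n = q_learning_rule(W, A, Y)
# The estimated rule should closely track the true rule I(W > 0):
print(np.mean(d_n == (W > 0)))
```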
##### Estimating the blip function
Consider the double-robust pseudo-outcome [43]:
$D(Q,g)=\frac{2A-1}{g(A|W)}[Y-Q(A,W)]+Q(1,W)-Q(0,W)\text{ .}$
Importantly, $E_{0}[D(Q,g)|W]=B_{0}(W)$ if $Q=Q_{0}$ or $g=g_{0}$. Using this
result, one could estimate the blip by regressing the pseudo-outcome
$D_{n}(Q_{n},g_{n})$ (which we abbreviate from here on as $D_{n}$) on $W$
using any regression-based approach. As in the previous method, this estimate
of the blip implies an estimate of the optimal rule
$d_{n}^{*}=\mathbb{I}[B_{n}(W)>0]$ [15, 34, 44].
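This pseudo-outcome regression can be sketched as follows. In the example, the outcome regression $Q$ is deliberately misspecified (set to zero) while the known RCT $g_{0}$ is used, illustrating the double-robustness property; the simple linear regression of $D_{n}$ on $W$ and the simulated data are illustrative assumptions:

```python
import numpy as np

def dr_pseudo_outcome(W, A, Y, Q1, Q0, g1):
    """D(Q, g) = (2A-1)/g(A|W) * (Y - Q(A,W)) + Q(1,W) - Q(0,W)."""
    QA = np.where(A == 1, Q1, Q0)
    gA = np.where(A == 1, g1, 1.0 - g1)     # g(A|W), with g1 = P(A=1|W)
    return (2 * A - 1) / gA * (Y - QA) + Q1 - Q0

rng = np.random.default_rng(0)
n = 5_000
W = rng.normal(size=n)
A = rng.binomial(1, 0.5, size=n)            # RCT: g_0(1|W) = 0.5 known
Y = W + A * W + rng.normal(scale=0.5, size=n)   # true blip B_0(W) = W

# Misspecify Q (set it to zero): D still has conditional mean B_0(W),
# because g is correct (double robustness).
D = dr_pseudo_outcome(W, A, Y, Q1=np.zeros(n), Q0=np.zeros(n), g1=np.full(n, 0.5))
blip_n = np.polyval(np.polyfit(W, D, 1), W)  # regress the pseudo-outcome on W
d_n = (blip_n > 0).astype(int)               # implied rule I[B_n(W) > 0]
print(np.mean(d_n == (W > 0)))               # close to 1
```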
### 3.2 Direct Estimation Approaches for Maximizing the Expected Outcome
Instead of estimating the blip function, which implies an ODTR, one could
estimate the ODTR directly by selecting a rule $d$ that maximizes the
estimated $E_{U,X}[Y_{d}]$. Below we illustrate outcome weighted learning
(OWL) – one example of a direct-estimation method for the ODTR.
##### Single stage outcome weighted learning
We briefly describe the general concept of outcome weighted learning here, but
refer to zhao2012estimating [22] and rubin2012statistical [45] for a more
thorough explanation. The optimal rule defined above as a function of $P_{0}$
could equivalently be written as an inverse probability of treatment weighted
(IPTW) estimand:
$d_{0}^{*}\in\operatorname*{arg\,max}_{d\in\mathcal{D}}E_{0}[Q_{0}(A=d,W)]=\operatorname*{arg\,max}_{d\in\mathcal{D}}E_{0}\Bigg{[}\frac{Y}{g_{0}(A|W)}\mathbb{I}[A=d]\Bigg{]}\text{
.}$
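The IPTW representation above can be estimated empirically by plugging in the sample mean. The sketch below evaluates this value estimate over a hypothetical one-parameter class of threshold rules and picks the maximizer, a brute-force stand-in for the OWL optimization discussed next (the data-generating process is an illustrative assumption):

```python
import numpy as np

def iptw_value(d, W, A, Y, g1):
    """Empirical IPTW estimate of E[Y_d]: mean of Y/g(A|W) * I(A = d(W))."""
    gA = np.where(A == 1, g1, 1.0 - g1)
    return np.mean(Y / gA * (A == d(W)))

rng = np.random.default_rng(0)
n = 5_000
W = rng.normal(size=n)
A = rng.binomial(1, 0.5, size=n)                  # RCT with known g_0(1|W) = 0.5
Y = 1.0 + A * W + rng.normal(scale=0.5, size=n)   # treating only when W > 0 is optimal

# Candidate rules over a grid of thresholds c: d_c(W) = I(W > c)
grid = np.linspace(-2, 2, 81)
values = [iptw_value(lambda w, c=c: (w > c).astype(int), W, A, Y, 0.5) for c in grid]
best_c = grid[int(np.argmax(values))]
print(best_c)  # near the true optimal threshold 0
```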
Written this way, estimating $d_{0}^{*}$ could be regarded as a classification
problem, where the weighted outcome $\frac{Y}{g(A|W)}$ helps us learn what
kind of patients should get treatment: if a certain kind of patient $W=w$ has
large weighted outcomes and they were treated according to candidate rule $d$,
future patients with that covariate profile should be treated using that rule.
Conversely, the smaller the weighted outcome among patients $W$ who were
treated according to $d$, the larger the “misclassification error” and the
less likely those kinds of patients should be treated according to $d$. This
maximization problem is equivalent to the following minimization problem:
$\displaystyle d_{0}^{*}$
$\displaystyle\in\operatorname*{arg\,min}_{d\in\mathcal{D}}E_{0}\Bigg{[}\frac{Y}{g_{0}(A|W)}\mathbb{I}[A\neq
d]\Bigg{]}\text{ .}$ (2)
Now, if patients $W=w$ who did not follow the rule $d$ have large weighted
outcomes (and thus larger “misclassification error”), those kinds of patients
should be given the opposite treatment that $d$ proposes. Note that in the RCT
setting, if one uses the known $g_{0}$ and if treatments are given with equal
probability, then this reduces to finding the rule that minimizes the mean
outcome among patients who did not follow the rule. Equation 2 could
alternatively be written as a minimization problem for, instead of a rule $d$,
a function $f$:
$\displaystyle f_{0}^{*}$
$\displaystyle\in\operatorname*{arg\,min}_{f\in\mathcal{F}}E_{0}\Bigg{[}\frac{Y}{g_{0}(A|W)}\mathbb{I}\Big{[}A\neq\frac{sign(f(W))+1}{2}\Big{]}\Bigg{]}\text{
,}$ (3)
where $sign(x)=-1$ if $x\leq 0$ and $sign(x)=1$ if $x>0$. Under the true data
distribution $P_{0}$, one such minimizer $f_{0}^{*}$ is the blip function, $B_{0}$. In order to solve
this minimization problem using data, we can use a plug-in estimator of (3);
however, since it is a 0-1 function (i.e., it is discontinuous and non-
convex), one could use a convex surrogate function to approximate it, to
instead minimize:
$\displaystyle f_{n}^{*}$
$\displaystyle\in\operatorname*{arg\,min}_{f\in\mathcal{F}}\frac{1}{n}\sum_{i=1}^{n}\frac{Y_{i}}{g_{n}(A_{i}|W_{i})}\Phi(A_{i}f(W_{i}))+\lambda_{n}\left\lVert
f\right\rVert^{2}\text{ ,}$ (4)
where $\Phi(t)$ is the surrogate loss function (e.g., hinge loss, exponential
loss, logistic loss), $\left\lVert f\right\rVert$ is the norm of $f$, and
$\lambda_{n}$ is the estimated penalization parameter on $f$ to avoid
overfitting of the rule. This can also be generalized with the IPTW function
replaced by the augmented IPTW [34, 45]. Once $f_{n}^{*}$ is found as the
solution to equation (4), the estimated ODTR is:
$d_{n}^{*}=sign(f_{n}^{*}(W))\text{ .}$
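One way the surrogate minimization in (4) might look, sketched with a linear $f$, a logistic surrogate loss $\Phi$, and an L2 penalty (all choices here, including the data-generating step and the optimizer, are illustrative assumptions rather than the paper's implementation):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 500
W = rng.normal(size=(n, 2))
A = rng.binomial(1, 0.5, size=n)
# Outcome favors treatment exactly when W[:, 0] > 0
Y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + W[:, 0] * (2 * A - 1)))))

g_A = np.full(n, 0.5)          # known randomization probability
A_pm = 2 * A - 1               # recode treatment to +/-1 for A * f(W)

def owl_objective(beta, lam=0.01):
    """Weighted logistic surrogate for the 0-1 OWL loss, plus an L2 penalty."""
    f = beta[0] + W @ beta[1:]
    phi = np.logaddexp(0.0, -A_pm * f)     # logistic surrogate Phi(t)
    return np.mean(Y / g_A * phi) + lam * np.sum(beta[1:] ** 2)

beta_n = minimize(owl_objective, x0=np.zeros(3)).x
f_n = beta_n[0] + W @ beta_n[1:]
d_n = (np.sign(f_n) + 1) // 2              # estimated rule in {0, 1}
```

In practice a hinge loss with a kernelized $f$ (as in the original OWL proposal) is common; the logistic surrogate above is simply a smooth stand-in that makes the optimization problem straightforward.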
### 3.3 SuperLearner to Estimate ODTR
The overarching goal of SuperLearner is to let a user-supplied library of
candidate algorithms, such as specific implementations of the general
approaches described above, “team up” to improve estimation of the ODTR. In
order to implement the ODTR SuperLearner, there are three user-supplied
decisions one must make. First, one must consider the library of candidate
algorithms to include. These could include algorithms for estimating the blip
function (which imply a rule), algorithms that search for the ODTR directly
(such as OWL estimators), static rules that determine treatment regardless of
covariates, or combinations of the above classes of algorithms. Second, in
what is sometimes referred to as the metalearning step, one can either
implement a SuperLearner that chooses one algorithm out of the library of
candidate algorithms to include (i.e., “discrete” SuperLearner), or a
SuperLearner that is a combination of the candidate algorithms (i.e.,
“continuous” SuperLearner). For the latter, one again has a choice of
metalearner; we consider weighted convex combinations of candidate estimators
of the blip and combinations of estimates of the rules themselves (through a
weighted “majority vote”). Finally, one must choose the risk function used to
judge the performance of the weighted combinations of algorithms (estimated
using V-fold cross validation). Here, we consider two risk functions: the
mean-squared error (MSE) and the mean outcome under the candidate rule.
The steps for implementing the ODTR SuperLearner are as follows; they closely
follow the implementation of the canonical SuperLearner for regression [46]:
1. 1.
Choose $J$ candidate algorithms for estimating the optimal rule $d_{n,j}(W)$
for $j=1,...,J$. Candidates can include approaches based on estimating the
blip $B_{n,j}(W)$, e.g., candidate regressions of $D_{n}$ on $W$, where the
candidate regressions might consist of a parametric linear model
(corresponding to a classic approach of fitting a parametric outcome
regression on $A$ and $W$) as well as more flexible machine learning type
approaches such as neural networks [47], multivariate adaptive regression
splines [48], or recursive partitioning and regression trees [49]. Candidate
algorithms might also include approaches for estimating the optimal rule
directly, such as an OWL estimator. Finally, the static treatment rules that
treat all patients or treat no patients, regardless of their covariate values,
can also be included as candidates. Inclusion of both simple parametric model
estimators, as well as static rules such as treating all and treating none, is
important to allow for the possibility that the underlying true ODTR may in
fact be simple (or well-approximated by a simple rule), and providing less
aggressive candidates in the SuperLearner library can help protect against
overfitting in finite samples.
2. 2.
Split the data into $V$ exhaustive and mutually exclusive folds. Let each fold
in turn serve as the validation set and the complement data as the training
set.
3. 3.
Fit each of the $J$ candidate algorithms on the training set. Importantly,
candidate algorithms might depend on nuisance parameters, and those nuisance
parameters should be fit on the training set, as well. For example, if a
candidate algorithm regresses $D_{n}$ on $W$ to estimate the blip (which
implies an ODTR), then $Q$ and $g$ should be fit and predicted on the training
set, and then plugged into $D$ to fit that candidate algorithm on the same
training set (this is also called “nested” cross-validation, described in
detail by [35]).
4. 4.
Predict the estimated blip or the treatment assigned under the estimated ODTR
for each observation in the validation set for each algorithm, based on the
corresponding training set fit.
5. 5.
Choose to either implement the discrete SuperLearner, which selects one
algorithm out of the candidate algorithms, or the continuous SuperLearner,
which creates a weighted average of the candidate algorithms.
1. (a)
Continuous SuperLearner. Create different convex combinations of the candidate
blip or treatment rule estimates that were predicted on the validation set
(i.e., convex combinations of the predictions from the previous step).
Formally, define an estimator of $B_{n,\alpha}(W)$ or $d_{n,\alpha}(W)$ as a
convex combination of the candidate algorithms (indexed by $j$); each convex
combination of algorithms is indexed by a weight vector $\alpha$. A given
convex combination of blip estimates is denoted as:
$B_{n,\alpha}(W)=\sum_{j}\alpha_{j}B_{n,j}(W),\alpha_{j}\geq 0\forall
j,\sum_{j}\alpha_{j}=1\text{ .}$
Alternatively, the predicted treatments under the candidate ODTRs can be
combined as a weighted “majority vote” of the convex combination of the
candidate rules:
$d_{n,\alpha}(W)=\mathbb{I}\Big{[}\sum_{j}\alpha_{j}d_{n,j}(W)>\frac{1}{2}\Big{]},\alpha_{j}\geq
0\forall j,\sum_{j}\alpha_{j}=1\text{ .}$
2. (b)
Discrete SuperLearner. The discrete SuperLearner, which chooses only one
candidate algorithm, can be thought of as a special case of the continuous
SuperLearner, where algorithms are still combined using a convex combination,
but each algorithm weight $\alpha_{j}$ must be either $0$ or $1$. Such an
approach may be particularly advantageous when sample size is small:
$B_{n,\alpha}(W)=\sum_{j}\alpha_{j}B_{n,j}(W),\alpha_{j}\in\{0,1\}\forall
j,\sum_{j}\alpha_{j}=1$
$d_{n,\alpha}(W)=\mathbb{I}\Big{[}\sum_{j}\alpha_{j}d_{n,j}(W)>\frac{1}{2}\Big{]},\alpha_{j}\in\{0,1\}\forall
j,\sum_{j}\alpha_{j}=1\text{ .}$
6. 6.
Calculate an estimated risk within each validation set for each combination of
algorithms (i.e., for each convex combination indexed by a particular value
for $\alpha$). Here, we discuss two choices of risk functions for step (6)
above. First, mean-squared error risk targeting the blip function, which we
will refer to as $R_{MSE}$:
$R_{MSE}=E[(D(Q,g)-B(W))^{2}]\text{ .}$
Because the MSE compares $D_{n}$ to the predicted blip, the candidate
estimators under the MSE risk function must output estimated blip values. Of
note, $E_{P_{U,X}}[(Y_{1}-Y_{0}-B(W))^{2}]$ is identified as $R_{MSE_{0}}(B)$
if either $Q=Q_{0}$ or $g=g_{0}$. The second risk function, which we call
$R_{E[Y_{d}]}$, uses the expected rule-specific outcome as criterion:
$R_{E[Y_{d}]}=-E[E[Y|A=d,W]]\text{ .}$
Intuitively, SuperLearner aims to choose the combination of treatment rule
algorithms that minimizes a cross-validated empirical risk, so it makes sense
to have that risk be the negative of the expected outcome, such that
SuperLearner chooses the combination of algorithms that maximizes the expected
outcome, since that is the ultimate goal of the ODTR. Candidate estimators for
the SuperLearner that use the expected mean outcome under the rule as the risk
function can include both blip estimators that imply treatment rules as well
as direct estimators of the treatment rules. When the expected rule specific
outcome is chosen as the risk function, a further practical choice is how to
estimate this quantity; we focus here on a cross-validated targeted maximum
likelihood estimator (TMLE) due to its favorable theoretical properties
(double robustness, semi-parametric efficiency, and greater protection against
overfitting through the use of sample splitting [46]); however, one can use
any estimator of treatment specific mean outcomes to estimate this quantity
[10, 50, 11].
7. 7.
Average the risks across the validation sets, resulting in one estimated
cross-validated risk for each candidate convex combination (i.e., each possible
choice of $\alpha$).
8. 8.
Choose the estimator (i.e., the convex combination $\alpha$) that yields the
smallest cross-validated empirical risk. Call this “best” weighting of the
algorithms $\alpha_{n}$.
9. 9.
Fit each candidate estimator $B_{n,j}(W)$ of the blip or $d_{n,j}(W)$ of the
optimal rule on the entire data set. Generate predictions for each candidate
algorithm individually, and then combine them using the weights $\alpha_{n}$
obtained in the previous step. This is the SuperLearner estimate of the ODTR,
where $d_{n,B}^{*}=\mathbb{I}[B_{n,\alpha_{n}}(W)>0]$ or
$d_{n,d}^{*}=d_{n,\alpha_{n}}(W)$ directly.
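The steps above can be sketched in miniature, here with two blip candidates (a linear regression of $D_{n}$ on $W$ and the mean of $D_{n}$, an analogue of SL.mean), a blip-based metalearner, and the $R_{MSE}$ risk over a grid of convex weights. This is a hypothetical Python translation; the study's implementation is in R, and the tiny library and grid search are simplifying assumptions:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(3)
n = 600
W = rng.normal(size=(n, 1))
A = rng.binomial(1, 0.5, size=n)
Y = rng.binomial(1, 1 / (1 + np.exp(-(W[:, 0] * (2 * A - 1)))))

# Pseudo-outcome D_n under known g (RCT) and a deliberately crude Q = 0.5;
# E[D | W] still equals the blip because g is correct
gA = np.full(n, 0.5)
D = (2 * A - 1) / gA * (Y - 0.5)

# Step 1: two candidate blip algorithms, fit on a training set
def fit_candidates(W_tr, D_tr):
    lin = LinearRegression().fit(W_tr, D_tr)
    mu = D_tr.mean()
    return lambda Wn: np.column_stack([lin.predict(Wn), np.full(len(Wn), mu)])

# Steps 2-7: V-fold CV of the MSE risk for each convex weight (a, 1 - a)
alphas = np.linspace(0, 1, 21)
cv_risk = np.zeros_like(alphas)
for train, valid in KFold(5, shuffle=True, random_state=0).split(W):
    preds = fit_candidates(W[train], D[train])(W[valid])   # n_valid x 2
    for k, a in enumerate(alphas):
        B_alpha = preds @ np.array([a, 1 - a])
        cv_risk[k] += np.mean((D[valid] - B_alpha) ** 2) / 5

# Steps 8-9: pick the best weights, refit candidates on all data, combine
a_star = alphas[np.argmin(cv_risk)]
B_n = fit_candidates(W, D)(W) @ np.array([a_star, 1 - a_star])
d_n = (B_n > 0).astype(int)            # the ODTR SuperLearner rule
```

With a richer library the weight vector lives on a higher-dimensional simplex and is usually found by constrained optimization rather than a grid, but the fit/cross-validate/combine structure is the same.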
We summarize the practical implications of 3 key choices for implementing ODTR
SuperLearner here and in Table 1. In the first dimension, selection of the
candidate algorithms, for illustration we consider having a library with only
blip function estimators (called “Blip only” library) or a library with blip
estimators, direct-estimation estimators, and static treatment rules that
treat everyone or no one (called “Full” library). If one chooses to include
direct-search estimators of the ODTR or static rules (i.e., functions that do
not output a blip estimate in the process), then one is constrained to using
the vote-based metalearner and $R_{E[Y_{d}]}$ risk function, because the blip-
based metalearner and $R_{MSE}$ risk function both rely on estimates of the
blip for combining and choosing the algorithms, respectively. For the second
dimension, the choice of how to combine algorithms, we consider either the
metalearner that (a) selects only one candidate algorithm (called “Discrete”),
(b) uses a weighted average to combine predicted blip estimates and directly
plugs those into the risk (called “Blip-based”), or (c) uses a weighted average
to combine predicted treatments under the candidate combinations of rules and
creates a weighted majority vote of these candidate rules as input into the
risk (called “Vote-based”). The third dimension is the choice of performance
measure, risk function – either the MSE ($R_{MSE}$) or the mean outcome under
the candidate rule ($R_{E[Y_{d}]}$). For the second and third dimensions, if
one uses the vote-based metalearner, then the $R_{MSE}$ risk cannot be used
because $R_{MSE}$ requires an estimate of the blip to choose the best
algorithm, and the vote-based metalearner does not output an estimate of the
blip.
| Choice 1: Library | Choice 2: Metalearner | Choice 3: Risk | Possible? |
|---|---|---|---|
| Blip only | Discrete | $R_{MSE}$ | ✓ |
| Blip only | Discrete | $R_{E[Y_{d}]}$ | ✓ |
| Blip only | Continuous (Blip-based) | $R_{MSE}$ | ✓ |
| Blip only | Continuous (Blip-based) | $R_{E[Y_{d}]}$ | ✓ |
| Blip only | Continuous (Vote-based) | $R_{MSE}$ | ✗ |
| Blip only | Continuous (Vote-based) | $R_{E[Y_{d}]}$ | ✓ |
| Full | Discrete | $R_{MSE}$ | ✗ |
| Full | Discrete | $R_{E[Y_{d}]}$ | ✓ |
| Full | Continuous (Blip-based) | $R_{MSE}$ | ✗ |
| Full | Continuous (Blip-based) | $R_{E[Y_{d}]}$ | ✗ |
| Full | Continuous (Vote-based) | $R_{MSE}$ | ✗ |
| Full | Continuous (Vote-based) | $R_{E[Y_{d}]}$ | ✓ |
Table 1: Summary of the possible ODTR SuperLearner configurations across the
library, metalearner, and risk dimensions. The “Possible?” column indicates
whether the configuration can be implemented: a checkmark (✓) means it can be
constructed; an x-mark (✗) means it cannot.
## 4 Simulation: Comparisons and Considerations of SuperLearner ODTR
Estimators
We use simulations to: (1) illustrate the potential benefit to estimating the
ODTR using a SuperLearner approach, as compared to a more traditional approach
to studying treatment effect heterogeneity based on fitting an outcome
regression with interaction terms on covariates and treatment, as is often
standard practice [1, 51, 52, 53]; and (2) investigate the implications of
practical choices when implementing an ODTR SuperLearner in finite samples,
including specification of candidate algorithms in the library, choice of
metalearner, and choice of risk function.
### 4.1 Data Generating Processes
All simulations were implemented in R [54], and the code, simulated data, and
results can be found at https://github.com/lmmontoya/SL.ODTR. We examine these
comparisons using two types of data generating processes (DGPs). Each
simulation consists of 1,000 iterations of either $n=1,000$ or $n=300$, to
assess the impacts of the different configurations as a function of sample
size. Both DGPs generate the covariates, treatment, and outcome as follows:
$\displaystyle W_{1},W_{2},W_{3},W_{4}$ $\displaystyle\sim
Normal(\mu=0,\sigma^{2}=1)$ $\displaystyle A$ $\displaystyle\sim
Bernoulli(p=0.5)$ $\displaystyle Y$ $\displaystyle\sim Bernoulli(p)\text{ .}$
The probability of having a successful outcome differs for the two DGPs,
which, in this case, means that the blip functions differ as well. The first
DGP is an example of one with a complex blip function, and the second DGP is
one with a blip function that is a simpler function of one variable. The first
DGP is directly from work by Luedtke and van der Laan [34, 16, 44], and the
second is modified from the first. The probability of a successful outcome for
DGP 1 is:
$\displaystyle p$
$\displaystyle=0.5logit^{-1}(1-W_{1}^{2}+3W_{2}+5W_{3}^{2}A-4.45A)+0.5logit^{-1}(-0.5-W_{3}+2W_{1}W_{2}+3|W_{2}|A-1.5A)\text{
,}$
then the true blip function is:
$\displaystyle B_{0}(W)=$ $\displaystyle
0.5[logit^{-1}(1-W_{1}^{2}+3W_{2}+5W_{3}^{2}-4.45)+logit^{-1}(-0.5-W_{3}+2W_{1}W_{2}+3|W_{2}|-1.5)$
$\displaystyle-
logit^{-1}(1-W_{1}^{2}+3W_{2})-logit^{-1}(-0.5-W_{3}+2W_{1}W_{2})]\text{ .}$
For DGP 1, $E_{P_{U,X}}[Y_{d^{*}}]\approx 0.5626$ and the true optimal
proportion treated $E_{P_{U,X}}[d^{*}]\approx 55.0\%$.
$E_{P_{U,X}}[Y_{1}]\approx 0.4638$ and $E_{P_{U,X}}[Y_{0}]\approx 0.4643$.
DGP 2’s probability of the outcome’s success is:
$\displaystyle p$ $\displaystyle=logit^{-1}(W_{1}+0.1A+W_{1}A)\text{ .}$
Thus the true blip function is:
$\displaystyle B_{0}(W)=$ $\displaystyle
logit^{-1}(W_{1}+0.1+W_{1})-logit^{-1}(W_{1})\text{ .}$
For DGP 2, $E_{P_{U,X}}[Y_{d^{*}}]\approx 0.5595$ and
$E_{P_{U,X}}[d^{*}]\approx 54.0\%$; $E_{P_{U,X}}[Y_{1}]\approx 0.5152$ and
$E_{P_{U,X}}[Y_{0}]\approx 0.5000$.
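The reported true values for both DGPs can be checked by Monte Carlo simulation (a sketch; the seed, sample size, and helper names are arbitrary choices, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(4)
expit = lambda x: 1 / (1 + np.exp(-x))     # logit^{-1}
n = 400_000
W1, W2, W3, W4 = rng.normal(size=(4, n))

# DGP 1: P(Y = 1 | A = a, W), evaluated under both treatments
def p1(a):
    return 0.5 * expit(1 - W1**2 + 3 * W2 + 5 * W3**2 * a - 4.45 * a) \
         + 0.5 * expit(-0.5 - W3 + 2 * W1 * W2 + 3 * np.abs(W2) * a - 1.5 * a)

q1_treat, q1_ctrl = p1(1), p1(0)
B1 = q1_treat - q1_ctrl                    # true blip for DGP 1
EYd1 = np.mean(np.where(B1 > 0, q1_treat, q1_ctrl))

# DGP 2: same quantities
def p2(a):
    return expit(W1 + 0.1 * a + W1 * a)

q2_treat, q2_ctrl = p2(1), p2(0)
B2 = q2_treat - q2_ctrl
EYd2 = np.mean(np.where(B2 > 0, q2_treat, q2_ctrl))

# Reported truths: DGP 1 is approx. (55.0%, 0.5626); DGP 2 approx. (54.0%, 0.5595)
print(np.mean(B1 > 0), EYd1, np.mean(B2 > 0), EYd2)
```

For DGP 2 the rule also has a closed form: the blip is positive exactly when $2W_{1}+0.1>W_{1}$, i.e., $W_{1}>-0.1$, so the optimal proportion treated is $\Phi(0.1)\approx 54.0\%$.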
### 4.2 ODTR Estimators
For each data generating process, we consider a number of estimators of the
ODTR. First, mirroring epidemiologic practice, we model the outcome as an
additive function of the treatment and covariates, and interactions with the
treatment and all covariates) [1, 51, 53, 52]. Such an approach translates to
using the following parametric model for the outcome regression:
$h(E[Y|A,W])=\beta_{0}+\sum_{i=1}^{p}\beta_{i}W_{i}+\left(\gamma_{0}+\sum_{i=1}^{p}\gamma_{i}W_{i}\right)A\text{
,}$
where $h(.)$ denotes a link function, and $p$ is the number of baseline
covariates in $W$. Using a linear link, the following parametric model for the
blip function is implied:
$B(W)=\gamma_{0}+\sum_{i=1}^{p}\gamma_{i}W_{i}\text{ .}$
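A sketch of this standard parametric approach under a logistic link, using DGP 2 above (augmented with noise covariates) as an illustrative data source; recovering the blip by predicting under both treatment values is an implementation detail assumed here, not taken from the paper's code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 2000
W = rng.normal(size=(n, 4))
A = rng.binomial(1, 0.5, size=n)
Y = rng.binomial(1, 1 / (1 + np.exp(-(W[:, 0] + 0.1 * A + W[:, 0] * A))))

# Outcome model: main terms for W, plus A and all A x W interactions
X = np.column_stack([W, A[:, None], A[:, None] * W])
fit = LogisticRegression(C=1e6, max_iter=1000).fit(X, Y)  # near-unpenalized MLE

# With a linear link the blip would be gamma_0 + sum_i gamma_i W_i; with the
# logistic link used here, recover it by predicting under A = 1 and A = 0
X1 = np.column_stack([W, np.ones((n, 1)), W])
X0 = np.column_stack([W, np.zeros((n, 1)), np.zeros_like(W)])
B_n = fit.predict_proba(X1)[:, 1] - fit.predict_proba(X0)[:, 1]
d_n = (B_n > 0).astype(int)
```

Because this outcome model happens to be correctly specified for DGP 2, the implied rule should closely track the true rule $\mathbb{I}[W_{1}>-0.1]$; under DGP 1 the same model is misspecified, which is the contrast the simulations exploit.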
Next, we examine the finite sample implications of the aforementioned user-
supplied choices in implementing a SuperLearner estimator of the ODTR,
providing guidance for practical data analysis. First, we examine the choice
of library. We consider the library that only combines candidate blip
estimators (“Blip only” library; i.e., a library with candidate algorithms
suited for regressing $D_{n}$ on $W$) versus a library that has blip
estimators, direct-estimation algorithms, and static treatment rules (“Full”
library). The “Blip only” libraries consist of either:
1. (a)
Simple parametric models only (denoted “Parametric blip models”). These
consist of univariate GLMs, one for each covariate.
2. (b)
Machine learning algorithms only (denoted “ML blip models”), such as SL.glm
(generalized linear models), SL.mean (the average), SL.glm.interaction
(generalized linear models with interactions between all pairs of variables),
SL.earth (multivariate adaptive regression splines [48]), SL.nnet (neural
networks [47]), SL.svm (support vector machines [55]), and SL.rpart (recursive
partitioning and regression trees [49]) from the SuperLearner package [56].
3. (c)
A combination of (a) and (b) above, denoted “Parametric + ML blip models”
The “Full” library includes other ODTR algorithms like direct-estimation
methods, static rules, and other blip-based methods. Specifically, the “Full”
library includes either the “ML blip models” or “Parametric + ML blip models”
from the “Blip only” libraries above, in addition to Q-learning [25], OWL
[22], residual weighted learning (RWL) [57], efficient augmentation and
relaxation learning (EARL) [58], optimal classification algorithms [59] (the
latter 4 are from the DynTxRegime package [60], with function names owl, rwl,
earl, and optimalClass, respectively), and static rules that treat all
patients and no patients, regardless of the patient covariate profiles. For
the algorithms from the DynTxRegime package, we use the default parameters
apart from supplying the nuisance estimates $Q_{n}$ and $g_{n}$, and we
estimate the rule as a function of all covariates. Additionally, for the
optimal classification algorithm, the
solver method is recursive partitioning for regression trees (rpart). Thus,
the possible “Full” libraries are:
1. (d)
Algorithms from the “ML blip models” library, plus direct maximizers and
static rules, denoted “ML blip models and $E[Y_{d}]$ maximizers”
2. (e)
All possible algorithms – that is, algorithms from the “Parametric + ML blip
models” library, plus direct maximizers and static rules, denoted “All blip
models and $E[Y_{d}]$ maximizers”
Second, we examine the performance of different metalearners for combining the
candidate ODTR algorithms. We examine the blip-based metalearner using the
“Blip only” libraries, and the discrete and vote-based metalearners using both
the “Blip only” libraries and “Full” libraries.
Third, we examine the performance of using either the MSE $R_{MSE}$ or the
expected outcome under the candidate rule $R_{E[Y_{d}]}$ as risk criteria for
choosing the optimal linear combination of candidate ODTR algorithms. In
particular, CV-TMLE is used for estimating $R_{E[Y_{d}]}$.
We fully estimate the ODTR SuperLearner by additionally estimating nuisance
parameters (as opposed to using the true nuisance parameter functions) in a
nested fashion [35] as described above. Specifically, we estimate $Q_{n}$ and
$g_{n}$ using the canonical SuperLearner [27, 56] and a correctly specified
parametric model (i.e., a logistic regression of $A$ on the intercept),
respectively. We use 10-fold cross-validation throughout.
### 4.3 Performance Metrics
We measure performance by computing the percent accuracy of the algorithm;
that is, in a sample, the proportion of times the treatment assigned by the
estimated ODTR matches the true optimal treatment (i.e., the treatment that
would have been assigned under the true ODTR) for each observation in the
sample, averaged across simulation repetitions. We also evaluate the
difference between the true conditional expected outcome under the estimated
rule, averaged across the sample, and the true mean outcome under the true
optimal rule
$E_{n}[Q_{0}(Y|A=d_{n}^{*},W)]-E_{0}[Y_{d_{0}^{*}}]$ (as an approximation to
the regret $E_{0}[Q_{0}(Y|A=d_{n}^{*},W)]-E_{0}[Y_{d_{0}^{*}}]$). We compute
the mean and variance of this difference across the simulation repetitions.
Instead of presenting the raw variance of the regret, we present a variance
relative to the regret yielded by estimating the blip, and thus the optimal
rule, using a parametric GLM that models the blip as an additive function
of all covariates. Additionally, we compute $2.5^{th}$ and $97.5^{th}$
quantiles of the distribution of $E_{n}[Q_{0}(Y|A=d_{n}^{*},W)]$ across the
simulation repetitions.
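These two metrics can be sketched for a hypothetical estimated rule under DGP 2 (the candidate rule, seed, and names are illustrative assumptions):

```python
import numpy as np

expit = lambda x: 1 / (1 + np.exp(-x))
rng = np.random.default_rng(6)
n = 1000
W1 = rng.normal(size=n)

# Truth for DGP 2: conditional mean Q_0, true rule, and reported E_0[Y_{d_0*}]
Q0 = lambda a, w1: expit(w1 + 0.1 * a + w1 * a)
d_true = (W1 > -0.1).astype(int)     # blip > 0  <=>  W1 > -0.1
EYd_true = 0.5595                    # true value reported for this DGP

# A hypothetical estimated rule to evaluate: treat when W1 > 0
d_hat = (W1 > 0).astype(int)

# Percent accuracy: agreement between estimated and true optimal treatment
accuracy = np.mean(d_hat == d_true)

# Approximated regret: E_n[Q_0(d_n(W), W)] - E_0[Y_{d_0*}],
# negative up to sampling error since d_hat is suboptimal
regret = np.mean(Q0(d_hat, W1)) - EYd_true
```

In the simulations these quantities are then averaged (and their variance taken) across the 1,000 simulation repetitions.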
### 4.4 Simulation Results
Figure 1 displays simulation results. Below we discuss results specific to
each DGP, configuration dimension, and sample size. In general, results within
each DGP (i.e., across sample sizes) follow generally similar patterns;
however, any differences in performance between libraries, metalearners, or
risks are more pronounced for the smaller sample size.
#### 4.4.1 DGP 1 Results - “Complex” Blip Function
Above, we showed that DGP 1 yields a blip function that is a complex function
of all of the available covariates. Here, for a larger sample size, we would
expect a benefit to more aggressive approaches to estimating the ODTR, such as
including more flexible machine learning-based approaches in the library of
candidates, as well as use of more aggressive metalearners (either vote- or
blip-based) over a discrete SuperLearner due to the better ability of these
approaches to approximate the true underlying function. That said, for smaller
sample sizes, this benefit might be attenuated, or even result in worse
performance than simpler alternatives. For this DGP, at sample size of 1,000,
indeed we find a benefit to the use of both more aggressive metalearners and
larger libraries. Interestingly, however, this benefit is maintained for
sample size 300. Specifically, libraries that included data adaptive, machine
learning algorithms (as opposed to incorrectly specified parametric models
alone) more accurately and precisely approximated the rule, even for sample
size of 300. Results also show that for both sample sizes, within the discrete
metalearner, the $R_{E[Y_{d}]}$ risk performed better than $R_{MSE}$ risk, and
more saliently, the blip-based and vote-based metalearners performed better
than the discrete SuperLearner. Finally, as predicted by theory, all
SuperLearner approaches evaluated substantially outperformed a traditional
parametric regression approach at both sample sizes. Below we describe results
specific to each sample size.
##### $n$ = 1,000
Libraries that contain machine learning algorithms (i.e., “ML blip models,”
“Parametric + ML blip models,” “ML blip models and $E[Y_{d}]$ maximizers,” and
“All blip models and $E[Y_{d}]$ maximizers”) overall perform better than libraries with
parametric models only (i.e., “Parametric blip models”) and the standard GLM
(i.e., “GLM”), across all performance metrics. For example, the percent match
between the true ODTR and the estimated ODTR ranges from 72.0% to 77.7% for
libraries with machine learning algorithms, whereas the percent match for
libraries with only parametric models ranges from 56.4% to 58.0%.
There are no stark differences within the libraries that contain machine
learning algorithms across the metalearner and risk dimensions, except when
using a discrete metalearner and $R_{MSE}$ risk. Specifically, the discrete
metalearner that uses $R_{MSE}$ has a higher average regret and relative
variance than all other algorithms that use machine learning (e.g., for the
“Parametric + ML blip models” (“Blip only”) library that uses a discrete
metalearner, the average regret when using $R_{MSE}$ versus $R_{E[Y_{d}]}$ is
-0.0389 versus -0.0284, respectively, and the relative variance when using
$R_{MSE}$ versus $R_{E[Y_{d}]}$ is 2.137 versus 0.7781, respectively).
##### $n$ = 300
As expected, given the limited data available to estimate a complex underlying
function, both accuracy of treatment assignment and approximated regret (the
extent to which the expected outcome under the estimated rule fell short of
the best outcomes available) deteriorated with smaller sample sizes. That
said, even in this challenging situation of a complex true pattern of
treatment effect heterogeneity and limited data with which to discover it, the
ODTR SuperLearner would have improved the expected outcome by approximately
4.5% relative to the static rule that treats everyone, an approach that would
have been suggested based on estimation of the ATE.
Libraries with only parametric models perform worse than libraries that
contain machine learning algorithms in terms of average regret and accuracy.
For example, SuperLearner ODTRs that contain libraries with parametric blip
models match 54%-55.4% of the time with the true ODTR, while the SuperLearners
that contain libraries with machine learning algorithms match 60.9%-66.1% of
the time. These results parallel those found with sample size 1,000, except
the discrepancy between libraries with machine learning models and parametric
models is not as pronounced.
Similar to the $n=1,000$ case, among the libraries that used machine learning
algorithms, using the performance of the rule as risk is better across all
performance metrics than using MSE as risk for the discrete metalearner. As
long as machine learning methods were included in the library, performance was
similar across risk functions and choice of metalearners, with the exception
of the MSE risk combined with the discrete metalearner.
#### 4.4.2 DGP 2 Results - “Simple” Blip Function
As shown above, DGP 2 has a true blip function that is a simple function of
one covariate (referred to here as a “simple” blip function). Here, the true
optimal rule is described by a simple parametric model for the blip; thus, we
expect this approach to perform well. However, in practice one is unlikely to
be sure that the truth can be well approximated by a simple rule; it is thus
of interest to evaluate what price is paid for expanding the library to
include more aggressive machine learning algorithms and metalearners. In
particular, one might expect that, for smaller sample size, adding machine
learning-based candidates and more complex metalearners risks substantial
drop-off in performance. However, specifying a library that includes simpler
parametric models, in addition to machine learning approaches, may help
mitigate this risk. In fact, for this particular DGP, we see, across
metalearners and risks, only a small price in performance from adding machine
learning algorithms to a library including only simple parametric models. In
short, having an ODTR SuperLearner library that also includes parametric
models is better than having a library that only consists of data adaptive,
machine learning algorithms. Within the libraries that did contain parametric
models, particularly for the discrete metalearner, $R_{MSE}$ risk performs
slightly better than $R_{E[Y_{d}]}$ risk; for other metalearners there is
little difference in performance in terms of risk. Performance of the
metalearners varies slightly by sample size. Below we describe results
specific to each sample size.
##### $n$ = 1,000
In terms of accuracy, the libraries that only contain parametric models
perform the best, followed closely by libraries that contain parametric models
and machine learning models, followed by the library with only machine
learning models. This pattern is evident in the percent match with the true
ODTR: for example, within the discrete metalearner with $R_{MSE}$ risk, the
percent accuracy is 90.7% for the library with parametric models only
(“Parametric blip models” library), 88.8% for the library that contain both
parametric models and machine learning models (“Parametric + ML blip models”),
and 81.9% for the library that contains machine learning algorithms only (“ML
blip models”). This same pattern is apparent in terms of average regret; that
is, the libraries that contain parametric blip models or a combination of
parametric blip models and machine learning models have the lowest average
regret (-0.0041 to -0.0095), while the libraries that only contain machine
learning models have the highest average regret (-0.0100 to -0.0138). Modeling
the blip with a single, parametric model often used in standard
epidemiological analysis (which, in this case, is incorrectly specified)
yields an average regret of -0.0100 (higher than the libraries with a
combination of parametric models and/or machine learning algorithms, and at
the same level as having machine learning algorithms only).
Within the libraries that contain parametric models and use a discrete
metalearner, $R_{MSE}$ performs better than $R_{E[Y_{d}]}$. For example, the
mean regret and relative variance for the discrete metalearner that only used
parametric models in the library is -0.0041 and 1.0267, respectively, when
using $R_{MSE}$ risk, and -0.0046 and 1.3174, respectively, when using
$R_{E[Y_{d}]}$ risk. Otherwise, there were no apparent differences in
performance by risk.
For libraries that contain parametric models and use $R_{MSE}$, the discrete
ODTR SuperLearner performs better than the blip-based ODTR SuperLearner, with
regards to all performance metrics. For example, the average regret and
relative variance for the library with only parametric models that uses
$R_{MSE}$ was -0.0041 and 1.0267, respectively, when using a discrete
metalearner versus -0.0059 and 1.1855, respectively, when using the blip-based
metalearner. This pattern is also evident for the library that has both
parametric models and machine learning algorithms.
##### $n$ = 300
As in the case where $n=1,000$, the library with only parametric models
performs best in terms of accuracy, followed by libraries with parametric
models and machine learning models, and finally libraries with only machine
learning algorithms; again, however, DGP 1 illustrates the risks of such a
strategy. Moreover, even at this small sample size, there is only a small
price to pay for adding machine learning-based learners to a library that also
includes simple parametric models. For example, for the discrete metalearner
that uses $R_{MSE}$, the percent accuracy for the library that uses only
parametric models is 78%, followed by a 75.5% accuracy when there is a
combination of parametric models and machine learning, while the library with
only machine learning models had a 61.7% match with the true ODTR. While this
pattern parallels that of the $n=1,000$ case, the dropoff in accuracy between
the library that uses only parametric models and the library that uses only
machine learning algorithms is larger in the smaller sample size (16.3%
difference) than in the larger sample size (8.8% difference).
Similar to the $n=1,000$ case, among libraries that contain parametric models
and in the discrete metalearner case, $R_{MSE}$ yields slightly better
performance results than $R_{E[Y_{d}]}$. In contrast to the $n=1,000$ case,
for libraries that contain parametric models and use $R_{MSE}$, the
blip-based metalearner performs slightly better than the discrete metalearner,
with regard to all performance metrics. For example, the average regret and
relative variance for the library with only parametric models that uses
$R_{MSE}$ is -0.0188 and 1.6102, respectively, when using a blip-based
metalearner versus -0.0190 and 1.8109, respectively, when using the discrete
metalearner.
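The regret and relative-variance metrics reported throughout these comparisons can be sketched in a few lines. The definitions below are illustrative only; the paper's exact conventions (for example, the sign of regret) may differ, and the toy values are not taken from the simulations:

```python
from statistics import mean, pvariance

def regret(mean_outcome_estimated_rule, mean_outcome_true_optimal_rule):
    # Illustrative sign convention: negative means the estimated rule's
    # mean outcome falls short of the true optimal rule's mean outcome.
    return mean_outcome_estimated_rule - mean_outcome_true_optimal_rule

def relative_variance(candidate_outcomes, reference_outcomes):
    # Variance of a configuration's mean-outcome estimates across
    # simulation repetitions, relative to a reference configuration.
    return pvariance(candidate_outcomes) / pvariance(reference_outcomes)

# Toy repetitions of the mean outcome under the estimated ODTR
vals = [0.610, 0.620, 0.605, 0.625]
true_opt = 0.625  # approximated mean outcome under the true optimal rule
print(round(regret(mean(vals), true_opt), 4))  # -> -0.01
```

Under this convention, a regret closer to zero and a relative variance closer to one both indicate a configuration that tracks the reference closely.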
## 5 Application of ODTR SuperLearner to “Interventions” Study
In the “Interventions” study, 230 (52.2%) participants received CBT and 211
(47.8%) TAU. Out of the 441 participants, 271 (61.5%) were not re-arrested
within the year. The estimated probability of no re-arrest had everyone been
assigned CBT is 62.2%, and the estimated probability of no re-arrest had everyone
been assigned TAU is 60.7%; there was no significant difference between these
two probabilities (risk difference: 1.51%, CI: [-8.03%,11.06%]). After
adjusting for covariates using TMLE to improve the precision on this ATE
estimate [61], the risk difference is, similarly, 1.53% (CI: [-7.31%,
10.37%]).
Figure 2 shows subgroup plots for each covariate – that is, the unadjusted
difference in probability of no re-arrest between those who received CBT
versus TAU, within each covariate group level. One might begin to identify
trends towards differential treatment effects; for example, participants may
have benefited more from CBT at the San Francisco site, or if they had
offended three or more times. Accurate interpretation of any such subgroup
analyses, however, requires variance estimates and hypothesis tests with
appropriate correction for multiple testing. In addition, as mentioned before,
it may be the case that the best way to assign treatment is by using
information on more than one variable at a time, and even interactions between
those variables.
Thus, we estimated the ODTR on the “Interventions” data to determine which
justice-involved adults with mental illness should receive CBT. Specifically,
we implemented the ODTR SuperLearner with a blip-only library, a continuous,
blip-based metalearner, and $R_{E[Y_{d}]}$ risk function. We chose a blip-
based library in order to generate estimates of the blip, which themselves can
be informative. The library for $d^{*}_{n}$ consisted of a combination of
simple parametric models (univariate GLMs with each covariate) and machine
learning algorithms (SL.glm, SL.mean, SL.glm.interaction, SL.earth, and
SL.rpart). As in the simulations, the outcome regression $Q_{n}$ was estimated
using the canonical SuperLearner, $g_{n}$ was estimated as an intercept-only
logistic regression, and we used 10-fold cross validation.
Interestingly, despite implementing a continuous metalearner, the ODTR
SuperLearner algorithm assigned all weight to a GLM that modeled the blip as a
linear function of only substance use. As shown in Figure 3, which depicts
the distribution of the predicted blip estimates for all participants, the
algorithm can be interpreted as follows: if a justice-involved person with mental
illness has a low substance use score, give him/her CBT; otherwise, give
him/her TAU. Under this ODTR estimate, for this sample, 52.38% of the
participants would receive CBT.
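The fitted rule admits a one-line sketch. The intercept and slope below are hypothetical placeholders (the paper reports the selected model, not its coefficients); they are chosen only so that the blip is positive at low substance-use scores and negative at the highest score, matching the rule's description:

```python
def estimated_blip(substance_use, intercept=0.10, slope=-0.08):
    # Hypothetical linear blip E[Y|A=1,W] - E[Y|A=0,W] in substance use;
    # the intercept and slope are illustrative, not the fitted values.
    return intercept + slope * substance_use

def odtr(substance_use):
    # Assign CBT (1) when the predicted blip is positive, else TAU (0).
    return 1 if estimated_blip(substance_use) > 0 else 0

# Substance use is scored 0, 1, or 2 in the "Interventions" data
print([odtr(s) for s in (0, 1, 2)])  # -> [1, 1, 0]
```

With these placeholder coefficients, participants at substance-use levels 0 and 1 are assigned CBT and those at level 2 are assigned TAU, reproducing the shape of the estimated rule.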
## 6 Conclusions
We described the ODTR SuperLearner and illustrated its performance for sample
DGPs under different configurations of the algorithm and finite sample sizes.
These results build on existing work [34, 35] by fully estimating the ODTR and
including an expanded SuperLearner library with not only blip-based regression
estimators, but also direct-estimation methods and static interventions. We
highlighted the practical choices one must consider when implementing the ODTR
SuperLearner across three dimensions: (1) the ODTR SuperLearner library of
candidate algorithms, namely, whether to include parametric models, machine
learning algorithms, or both; whether to include only estimators that output a
predicted blip or include a combination of blip estimators, direct estimators,
and static treatment rules, (2) the metalearner that either chooses a single
algorithm or combines the algorithms and (3) the risk function that chooses
the “best” estimator or combination of estimators of the candidate ODTR
algorithms.
Simulation-based results illustrate the shortcomings of an approach to
treatment effect heterogeneity based on approximating the blip as an additive
function of the available covariates, or equivalently, modeling the outcome as
an additive function of the treatment and covariates, and interactions between
the treatment and all covariates, which is common practice in epidemiologic
analyses for heterogeneous treatment effects [1, 51, 53, 52]. With respect to
choice of library, we recommend specifying a library with both simple
parametric models and more aggressive data adaptive algorithms, as well as
static rules such as treat all or treat none, allowing for flexible estimation
of both simple and complex underlying rules. Inclusion of a full range of
algorithms from simple to aggressive was particularly important for small
sample sizes. In terms of the choice of metalearner, both vote- and blip-based
ensemble learners performed well; a vote-based metalearner has the advantage,
however, of allowing for the integration of a larger library of candidate
algorithms (including direct estimation approaches) and ease of integration of
static rules. Of note, in these simulations, vote- and blip-based metalearners
outperformed the discrete ODTR SuperLearner approach, even for sample size
300. However, we caution that this may not always be the case and when sample
size is small, a discrete SuperLearner approach may provide benefits – in
fact, one could include a convex metalearner as a candidate algorithm.
Finally, with respect to choice of risk function, both MSE and the expected
outcome under the rule performed well; in practice one might prefer
$R_{E[Y_{d}]}$ because it allows for the use of a larger library of candidate
algorithms.
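The metalearner distinction can be made concrete with a small sketch. This is illustrative only: the actual metalearners select weights by minimizing cross-validated risk, whereas here the weights are supplied by hand. A discrete metalearner puts all weight on the lowest-risk candidate, while a vote-based metalearner combines the candidate rules' 0/1 treatment assignments:

```python
def discrete_weights(cv_risks):
    # Discrete SuperLearner: all weight on the candidate algorithm with
    # the lowest cross-validated risk.
    best = min(range(len(cv_risks)), key=lambda i: cv_risks[i])
    return [1.0 if i == best else 0.0 for i in range(len(cv_risks))]

def vote_based_rule(candidate_assignments, weights):
    # Vote-based ensemble: assign treatment when the weighted average of
    # the candidate rules' 0/1 assignments exceeds 1/2.
    n_subjects = len(candidate_assignments[0])
    out = []
    for j in range(n_subjects):
        vote = sum(w * rule[j] for w, rule in zip(weights, candidate_assignments))
        out.append(1 if vote > 0.5 else 0)
    return out

# Three candidate rules' assignments for four subjects, plus ensemble weights
rules = [[1, 1, 0, 0], [1, 0, 0, 1], [1, 1, 1, 0]]
print(discrete_weights([0.21, 0.18, 0.25]))     # -> [0.0, 1.0, 0.0]
print(vote_based_rule(rules, [0.2, 0.3, 0.5]))  # -> [1, 1, 0, 0]
```

The vote-based form makes it easy to mix blip estimators, direct estimators, and static rules in one library, since every candidate only needs to emit a 0/1 assignment.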
Additionally, as an illustration of how to apply the ODTR SuperLearner to real
data, we estimated the ODTR using the “Interventions” study to determine which
types of justice-involved adults with mental illness should be assigned CBT
versus TAU, to yield the highest probability of no re-arrest. Preliminary
results show that the ODTR SuperLearner placed all weight on a simple
parametric model with only substance use; thus, the algorithm suggests that,
in this sample, participants with lower levels of substance use may benefit
more from CBT. We note that this is an example of a case in which the ODTR
SuperLearner generated an ODTR estimate that was fully interpretable – although
we used a continuous metalearner and thus the SuperLearner could have allowed
for combinations of algorithms, the SuperLearner happened to only choose one
algorithm: a GLM with substance use as a regressor. To guarantee
interpretability in the SuperLearner (for example, if working with
practitioners who may want a treatment decision rule that could be easily
written down [3, 62]), one could implement the ODTR SuperLearner with a
discrete metalearner and a simple parametric library only.
Importantly, in a companion paper, we _evaluate_ this ODTR – that is, we ask
the causal question: what would have been the probability of no re-arrest had
participants been assigned CBT according to the ODTR SuperLearner (i.e., using
only substance use)? Further, is assigning CBT according to the ODTR
SuperLearner significantly better than assigning CBT to everyone or no one? In
this way, we can determine if it is of clinical significance to assign CBT
according to this rule – namely, if assigning CBT using only substance use
scores results in a statistically significant reduction of recidivism, and if
so, how much better one does with this ODTR compared to a non-individualized
rule (such as giving CBT to all). Under the appropriate causal assumptions,
one could use any of the methods for estimating treatment specific means to
interpret this estimate as the expected outcome under the true optimal rule or
the estimated optimal rule.
Future work could extend these simulations to the multiple treatment (i.e.,
more than 2 treatment levels) [35] and multiple timepoint setting (i.e.,
estimating a sequential ODTR from, for example, a SMART design [7, 25, 36]
instead of an RCT design). We also wish to apply the ODTR SuperLearner on the
full “Interventions” dataset (n = 720), once data collection is finished.
This work contributes to understanding the toolbox of methods for analyzing
the heterogeneity in how patients respond to treatment. By learning which
patients respond best to what treatment in a flexible manner, we can improve
patient outcomes – moving us closer to the goals of precision health.
| | TAU ($A=0$) | CBT ($A=1$) | $p$ |
|---|---|---|---|
| $n$ | 211 | 230 | |
| No re-arrest ($Y=1$) (%) | 128 (60.7) | 143 (62.2) | 0.820 |
| Site = San Francisco (%) | 87 (41.2) | 104 (45.2) | 0.455 |
| Gender = Female (%) | 38 (18.0) | 37 (16.1) | 0.682 |
| Ethnicity = Hispanic (%) | 50 (23.7) | 42 (18.3) | 0.198 |
| Age (mean (SD)) | 38.08 (11.05) | 37.01 (11.22) | 0.317 |
| CSI (mean (SD)) | 32.35 (11.13) | 33.46 (11.27) | 0.300 |
| LSI (mean (SD)) | 5.59 (1.33) | 5.50 (1.48) | 0.472 |
| SES (mean (SD)) | 3.81 (1.89) | 3.81 (2.12) | 0.995 |
| Prior adult convictions (%) | | | 0.156 |
| Zero to two times | 74 (35.1) | 93 (40.4) | |
| Three or more times | 134 (63.5) | 129 (56.1) | |
| Missing | 3 (1.4) | 8 (3.5) | |
| Most serious offense (mean (SD)) | 5.29 (2.54) | 5.09 (2.52) | 0.415 |
| Motivation (mean (SD)) | 3.22 (1.36) | 3.27 (1.37) | 0.720 |
| Substance use (%) | | | 0.184 |
| 0 | 53 (25.1) | 76 (33.0) | |
| 1 | 47 (22.3) | 55 (23.9) | |
| 2 | 109 (51.7) | 98 (42.6) | |
| Missing | 2 (0.9) | 1 (0.4) | |
Table 2: Distribution of “Interventions” data by treatment assignment.

Figure 1: Performance of $E_{n}[Q_{0}(Y|A=d_{n}^{*},W)]$ (an approximation of the true mean outcome under the estimated ODTR) for DGP 1 (top two) and DGP 2 (bottom two). The horizontal black line depicts $E_{P_{U,X}}[Y_{d_{0}^{*}}]$; the horizontal red line depicts $E_{P_{U,X}}[Y_{1}]$; the horizontal blue line depicts $E_{P_{U,X}}[Y_{0}]$. We compare the SuperLearner algorithm to an incorrectly specified GLM (in gray, with N/A as the metalearner and a diamond with no fill). We also compare (1) having a SuperLearner library with (a) only algorithms that estimate the blip (i.e., “Blip only” libraries) that only have parametric blip models (blue), only have machine-learning blip models (red), or both (purple), versus (b) an expanded or “Full” library with blip function regressions estimated via machine learning only (orange-yellow) or machine learning and parametric models (green), together with methods that directly estimate the ODTR and static rules; (2) having a metalearner (depicted on the x-axis) that either chooses one algorithm (i.e., the “discrete” SuperLearner) or combines blip predictions/treatment predictions (i.e., the “continuous” SuperLearner); and (3) using the MSE risk function ($R_{MSE}$ as a square) versus the mean outcome under the candidate rule risk function ($R_{E[Y_{d}]}$ as a triangle).

Figure 2: Subgroup plots for each of the covariates for the “Interventions” data. The x-axis for each of the plots is the different levels of the covariates; the y-axis is the difference in proportion of people who were not re-arrested between those who received CBT versus TAU, in that covariate subgroup.

Figure 3: Distribution of predicted blip estimates from the ODTR SuperLearner. The frequencies are divided into three groups because the ODTR SuperLearner allocated all coefficient weights to a GLM using substance use, a variable with only 3 levels. One can interpret the ODTR SuperLearner for this sample as follows: CBT may reduce the probability of re-arrest among justice-involved adults with low levels of substance use. Estimation and inference of the value of the ODTR SuperLearner compared to, for example, treating everyone or no one, informs us if there is, in fact, a differential effect by substance use, and thus a benefit to assigning CBT in this individualized way.
## References
* [1] Issa J Dahabreh, Rodney Hayward, and David M Kent. Using group data to treat individuals: understanding heterogeneous treatment effects in the age of precision medicine and patient-centred evidence. International journal of epidemiology, 45(6):2184–2193, 2016.
* [2] David M Kent, Ewout Steyerberg, and David van Klaveren. Personalized evidence based medicine: predictive approaches to heterogeneous treatment effects. Bmj, 363:k4245, 2018.
* [3] Michael R Kosorok and Eric B Laber. Precision medicine. Annual review of statistics and its application, 6:263–286, 2019.
* [4] Jennifer L Skeem, Sarah Manchak, and Jillian K Peterson. Correctional policy for offenders with mental illness: Creating a new paradigm for recidivism reduction. Law and human behavior, 35(2):110–126, 2011.
* [5] Jennifer L Skeem, Eliza Winter, Patrick J Kennealy, Jennifer Eno Louden, and Joseph R Tatar II. Offenders with mental illness have criminogenic needs, too: Toward recidivism reduction. Law and human behavior, 38(3):212, 2014.
* [6] Mark W Lipsey, Nana A Landenberger, and Sandra J Wilson. Effects of cognitive-behavioral programs for criminal offenders. Campbell systematic reviews, 3(1):1–27, 2007.
* [7] Huitian Lei, Inbal Nahum-Shani, K Lynch, David Oslin, and Susan A Murphy. A “SMART” design for building individualized treatment sequences. Annual review of clinical psychology, 8:21–48, 2012.
* [8] Feifang Hu and William F Rosenberger. The theory of response-adaptive randomization in clinical trials, volume 525. John Wiley & Sons, 2006.
* [9] Ilya Lipkovich, Alex Dmitrienko, and Ralph B D’Agostino Sr. Tutorial in biostatistics: data-driven subgroup identification and analysis in clinical trials. Statistics in medicine, 36(1):136–196, 2017.
* [10] Oliver Bembom and Mark J van der Laan. A practical illustration of the importance of realistic individualized treatment rules in causal inference. Electronic journal of statistics, 1:574, 2007.
* [11] Mark J van der Laan and Maya L Petersen. Causal effect models for realistic individualized treatment and intention to treat rules. The international journal of biostatistics, 3(1), 2007.
* [12] James Robins. A new approach to causal inference in mortality studies with a sustained exposure period—application to control of the healthy worker survivor effect. Mathematical modelling, 7(9-12):1393–1512, 1986.
* [13] Bibhas Chakraborty and Erica EM Moodie. Statistical methods for dynamic treatment regimes. Springer, 2013.
* [14] Susan A Murphy. Optimal dynamic treatment regimes. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 65(2):331–355, 2003.
* [15] James M Robins. Optimal structural nested models for optimal sequential decisions. In Proceedings of the second seattle Symposium in Biostatistics, pages 189–326. Springer, 2004.
* [16] Alexander R Luedtke and Mark J van der Laan. Optimal individualized treatments in resource-limited settings. The international journal of biostatistics, 12(1):283–303, 2016.
* [17] Eric B Laber, Kristin A Linn, and Leonard A Stefanski. Interactive model building for q-learning. Biometrika, 101(4):831–847, 2014.
* [18] Phillip J Schulte, Anastasios A Tsiatis, Eric B Laber, and Marie Davidian. Q-and a-learning methods for estimating optimal dynamic treatment regimes. Statistical science: a review journal of the Institute of Mathematical Statistics, 29(4):640, 2014.
* [19] Erica EM Moodie, Bibhas Chakraborty, and Michael S Kramer. Q-learning for estimating optimal dynamic treatment rules from observational data. Canadian Journal of Statistics, 40(4):629–645, 2012.
* [20] Min Qian and Susan A Murphy. Performance guarantees for individualized treatment rules. Annals of statistics, 39(2):1180, 2011.
* [21] Erica EM Moodie, Thomas S Richardson, and David A Stephens. Demystifying optimal dynamic treatment regimes. Biometrics, 63(2):447–455, 2007.
* [22] Yingqi Zhao, Donglin Zeng, A John Rush, and Michael R Kosorok. Estimating individualized treatment rules using outcome weighted learning. Journal of the American Statistical Association, 107(499):1106–1118, 2012.
* [23] Baqun Zhang, Anastasios A Tsiatis, Marie Davidian, Min Zhang, and Eric Laber. Estimating optimal treatment regimes from a classification perspective. Stat, 1(1):103–114, 2012.
* [24] Ying-Qi Zhao, Donglin Zeng, Eric B Laber, and Michael R Kosorok. New statistical learning methods for estimating optimal dynamic treatment regimes. Journal of the American Statistical Association, 110(510):583–598, 2015.
* [25] Michael R Kosorok and Erica EM Moodie. Adaptive Treatment Strategies in Practice: Planning Trials and Analyzing Data for Personalized Medicine, volume 21. SIAM, 2015.
* [26] Ying-Qi Zhao and Eric B Laber. Estimation of optimal dynamic treatment regimes. Clinical Trials, 11(4):400–407, 2014.
* [27] Mark J van der Laan, Eric C Polley, and Alan E Hubbard. Super learner. Statistical applications in genetics and molecular biology, 6(1), 2007.
* [28] Leo Breiman. Stacked regressions. Machine learning, 24(1):49–64, 1996.
* [29] Erin LeDell, Mark J van der Laan, and Maya Petersen. Auc-maximizing ensembles through metalearning. The international journal of biostatistics, 12(1):203–218, 2016.
* [30] Eric C Polley and Mark J van der Laan. Super learner in prediction. 2010.
* [31] Romain Pirracchio, Maya L Petersen, and Mark van der Laan. Improving propensity score estimators’ robustness to model misspecification using super learner. American journal of epidemiology, 181(2):108–119, 2014.
* [32] Romain Pirracchio, Maya L Petersen, Marco Carone, Matthieu Resche Rigon, Sylvie Chevret, and Mark J van der Laan. Mortality prediction in intensive care units with the super icu learner algorithm (sicula): a population-based study. The Lancet Respiratory Medicine, 3(1):42–52, 2015.
* [33] Maya L Petersen, Erin LeDell, Joshua Schwab, Varada Sarovar, Robert Gross, Nancy Reynolds, Jessica E Haberer, Kathy Goggin, Carol Golin, Julia Arnsten, et al. Super learner analysis of electronic adherence data improves viral prediction and may provide strategies for selective hiv rna monitoring. Journal of acquired immune deficiency syndromes (1999), 69(1):109, 2015.
* [34] Alexander R Luedtke and Mark J van der Laan. Super-learning of an optimal dynamic treatment rule. The international journal of biostatistics, 12(1):305–332, 2016.
* [35] Jeremy Robert Coyle. Computational Considerations for Targeted Learning. PhD thesis, UC Berkeley, 2017.
* [36] Daniel Almirall, Inbal Nahum-Shani, Nancy E Sherwood, and Susan A Murphy. Introduction to smart designs for the development of adaptive interventions: with application to weight loss research. Translational behavioral medicine, 4(3):260–274, 2014.
* [37] Maya L Petersen and Mark J van der Laan. Causal models and learning from data: integrating causal modeling and statistical estimation. Epidemiology (Cambridge, Mass.), 25(3):418, 2014.
* [38] Judea Pearl. Causality: models, reasoning and inference, volume 29. Springer, 2000.
* [39] Alexander R Luedtke and Mark J van der Laan. Statistical inference for the mean outcome under a possibly non-unique optimal treatment strategy. Annals of statistics, 44(2):713, 2016.
* [40] James M Robins and Miguel A Hernán. Estimation of the causal effects of time-varying exposures. Longitudinal data analysis, 553:599, 2009.
* [41] Maya L Petersen, Kristin E Porter, Susan Gruber, Yue Wang, and Mark J van der Laan. Diagnosing and responding to violations in the positivity assumption. Statistical methods in medical research, 21(1):31–54, 2012.
* [42] Inbal Nahum-Shani, Min Qian, Daniel Almirall, William E Pelham, Beth Gnagy, Gregory A Fabiano, James G Waxmonsky, Jihnhee Yu, and Susan A Murphy. Q-learning: A data analysis method for constructing adaptive interventions. Psychological methods, 17(4):478, 2012.
* [43] Daniel Rubin and Mark J van der Laan. A doubly robust censoring unbiased transformation. The international journal of biostatistics, 3(1), 2007.
* [44] Mark J van der Laan and Alexander R Luedtke. Targeted learning of the mean outcome under an optimal dynamic treatment rule. Journal of causal inference, 3(1):61–95, 2015.
* [45] Daniel B Rubin and Mark J van der Laan. Statistical issues and limitations in personalized medicine research with clinical trials. The international journal of biostatistics, 8(1), 2012.
* [46] Mark J van der Laan and Sherri Rose. Targeted learning: causal inference for observational and experimental data. Springer Science & Business Media, 2011.
* [47] Brian D Ripley and NL Hjort. Pattern recognition and neural networks. Cambridge university press, 1996.
* [48] Jerome H Friedman et al. Multivariate adaptive regression splines. The annals of statistics, 19(1):1–67, 1991.
* [49] Leo Breiman. Classification and regression trees. Routledge, 2017.
* [50] Mark J van der Laan and Susan Gruber. Targeted minimum loss based estimation of an intervention specific mean outcome. 2011.
* [51] Ravi Varadhan, Jodi B Segal, Cynthia M Boyd, Albert W Wu, and Carlos O Weiss. A framework for the analysis of heterogeneity of treatment effect in patient-centered outcomes research. Journal of clinical epidemiology, 66(8):818–825, 2013.
* [52] David van Klaveren, Yvonne Vergouwe, Vasim Farooq, Patrick W Serruys, and Ewout W Steyerberg. Estimates of absolute treatment benefit for individual patients required careful modeling of statistical interactions. Journal of clinical epidemiology, 68(11):1366–1374, 2015.
* [53] Salim Yusuf, Janet Wittes, Jeffrey Probstfield, and Herman A Tyroler. Analysis and interpretation of treatment effects in subgroups of patients in randomized clinical trials. Jama, 266(1):93–98, 1991.
* [54] R Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria, 2018.
* [55] Chih-Chung Chang and Chih-Jen Lin. Libsvm: A library for support vector machines. ACM transactions on intelligent systems and technology (TIST), 2(3):27, 2011.
* [56] Eric Polley, Erin LeDell, Chris Kennedy, and Mark van der Laan. SuperLearner: Super Learner Prediction, 2018. R package version 2.0-24.
* [57] Xin Zhou, Nicole Mayer-Hamblett, Umer Khan, and Michael R Kosorok. Residual weighted learning for estimating individualized treatment rules. Journal of the American Statistical Association, 112(517):169–187, 2017.
* [58] Ying-Qi Zhao, Eric B Laber, Yang Ning, Sumona Saha, and Bruce E Sands. Efficient augmentation and relaxation learning for individualized treatment rules using observational data. Journal of Machine Learning Research, 20(48):1–23, 2019.
* [59] Baqun Zhang, Anastasios A Tsiatis, Marie Davidian, Min Zhang, and Eric Laber. Estimating optimal treatment regimes from a classification perspective. Stat, 1(1):103–114, 2012.
* [60] S. T. Holloway, E. B. Laber, K. A. Linn, B. Zhang, M. Davidian, and A. A. Tsiatis. DynTxRegime: Methods for Estimating Optimal Dynamic Treatment Regimes, 2019. R package version 4.1.
* [61] Kelly L Moore and Mark J van der Laan. Covariate adjustment in randomized trials with binary outcomes: targeted maximum likelihood estimation. Statistics in medicine, 28(1):39–64, 2009.
* [62] Zachary D Cohen and Robert J DeRubeis. Treatment selection in depression. Annual Review of Clinical Psychology, 14, 2018.
# A Pub-Sub Architecture to Promote Blockchain Interoperability

This project has been supported by _The Linux Foundation_ as part of the _Hyperledger Summer Internships_ program under the _Towards Blockchain Interoperability with Hyperledger_ project.
Sara Ghaemi1, Sara Rouhani2, Rafael Belchior3, Rui S. Cruz3, Hamzeh Khazaei4,
Petr Musilek1 1University of Alberta, Edmonton, Canada, {sghaemi,
<EMAIL_ADDRESS>2University of Saskatchewan, Saskatoon, Canada,
<EMAIL_ADDRESS>3Instituto Superior Técnico, Universidade de Lisboa,
Lisboa, Portugal, {rafael.belchior<EMAIL_ADDRESS>4York
University, Toronto, Canada<EMAIL_ADDRESS>
###### Abstract
The maturing of blockchain technology leads to heterogeneity, where multiple
solutions specialize in a particular use case. While the development of
different blockchain networks shows great potential for blockchains, the
isolated networks have led to data and asset silos, limiting the applications
of this technology. Blockchain interoperability solutions are essential to
enable distributed ledgers to reach their full potential. Such solutions allow
blockchains to support asset and data transfer, resulting in the development
of innovative applications.
This paper proposes a novel blockchain interoperability solution for
permissioned blockchains based on the publish/subscribe architecture. We
implemented a prototype of this platform to show the feasibility of our
design. We evaluate our solution by implementing examples of the different
publisher and subscriber networks, such as Hyperledger Besu, which is an
Ethereum client, and two different versions of Hyperledger Fabric. We present
a performance analysis of the whole network that indicates its limits and
bottlenecks. Finally, we discuss the extensibility and scalability of the
platform in different scenarios. Our evaluation shows that our system can
handle a throughput on the order of hundreds of transactions per second.
###### Index Terms:
Distributed Ledger Technology, Blockchain, Interoperability, Publish/Subscribe
## I Introduction
The distributed ledger technology (DLT) enables a set of independent untrusted
nodes to establish an agreement on the state of a shared ledger. Blockchain, a
type of DLT, is mostly known for its use cases in cryptocurrencies such as
Bitcoin [1], Ethereum [2], and XRP [3], among others. However, the technology
can be used for more diverse applications and industries. Some examples are
biomedical and health care [4, 5], Internet of Things (IoT) [6, 7], public
administration [8, 9], and cloud computing [10, 11]. Since each industry has
its own unique sets of requirements, many isolated permissioned and
permissionless blockchains have been introduced, posing limits regarding their
interoperability.
Currently, developers commit to a single blockchain solution, and they cannot
use the capabilities of more than one blockchain (vendor lock-in). These
isolated, incompatible networks have resulted in silos of data and assets,
which cannot be used from other networks. Blockchain interoperability
solutions are needed to enable asset and information transfer from one
blockchain to another. However, interoperability for blockchains encounters
challenges that make it different from interoperability for other software
networks.
First, each interoperability solution should take into account the differences
in the architecture of blockchain networks. Although all blockchains have an
immutable ledger that stores the history of assets, they usually reach a
consensus on the order of transactions using different algorithms. As a
result, their underlying network and their validation mechanisms can be
different from other blockchains. Each blockchain network that participates in
the interoperation is independent and in full control of its assets and
information. Moreover, an interoperability solution should not require
significant changes in the underlying blockchain networks, and it should be
usable with minimal effort for existing blockchains.
This paper aims to tackle this problem by proposing a blockchain
interoperability solution based on the publish/subscribe architecture across
permissioned blockchains. We have implemented a broker blockchain that acts as
a middleman in the interoperability process between the source and the
destination networks. It is worth noting that since the broker is itself a
blockchain network, it is not a central authority and peers from the source
and destination blockchains can also participate in the governance of this
network. The _broker_ blockchain keeps a copy of the information that needs to
be shared in the form of a _topic_. A topic has a name, message, publisher,
and a set of subscribers. The _publisher_ is the source blockchain network
that wants to share the information. It is responsible for creating the topic
on the broker and publishing it to the corresponding topic whenever the
information needs an update. The _subscribers_ are the destination networks
that need some information from the source network. As soon as the subscriber
network subscribes to a topic, the broker network notifies it whenever a
change happens. This solution enables interoperability between blockchains
with minimal effort.
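The topic abstraction described above can be sketched as follows. This is an in-memory stand-in for illustration only: in the actual system the topic registry and notification logic are implemented on the broker blockchain itself, and the class and method names here are hypothetical, not the platform's API:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

Callback = Callable[[str, str], None]

@dataclass
class Topic:
    # A topic has a name, message, publisher, and a set of subscribers.
    name: str
    message: str = ""
    publisher: str = ""
    subscribers: List[Callback] = field(default_factory=list)

class Broker:
    """In-memory stand-in for the broker blockchain's topic registry."""

    def __init__(self) -> None:
        self.topics: Dict[str, Topic] = {}

    def create_topic(self, name: str, publisher: str) -> None:
        # The source (publisher) network registers the topic on the broker.
        self.topics[name] = Topic(name=name, publisher=publisher)

    def subscribe(self, name: str, callback: Callback) -> None:
        # A destination network registers interest in a topic.
        self.topics[name].subscribers.append(callback)

    def publish(self, name: str, message: str) -> None:
        # The publisher updates the topic; the broker notifies subscribers.
        topic = self.topics[name]
        topic.message = message
        for notify in topic.subscribers:
            notify(name, message)

# A source network publishes; a destination network is notified on change
received = []
broker = Broker()
broker.create_topic("asset-state", publisher="source-net")
broker.subscribe("asset-state", lambda t, m: received.append((t, m)))
broker.publish("asset-state", "asset-42:transferred")
```

Because the broker stores the latest message per topic, a late subscriber can also read the current state directly rather than waiting for the next publish.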
We used a performance benchmark tool to analyze the performance of the
implemented prototype of the platform. The throughput and average latency for
different functionalities of the broker network were investigated. The results
indicate that our network can handle hundreds of transactions per second.
Moreover, the evaluations identified the PublishToTopic functionality to be
the broker network’s bottleneck.
The rest of this paper is organized as follows. Section II gives a summary of
the related work on blockchain interoperability and blockchain-based
publish/subscribe protocols. Section III introduces the system design details
for the proposed interoperability solution. Section IV demonstrates the
implementation and deployment details of the platform, while section V
presents its performance evaluation. Section VI outlines some discussions on
the design and evaluation of the platform and section VII concludes the paper.
## II Related Work
In this section, we discuss the related work in the field of blockchain
interoperability, as well as blockchain-based publish/subscribe protocols.
### II-A Blockchain Interoperability
The emergence of the blockchain interoperability research area has brought
attention from both the industry and academia. Blockchain interoperability
projects have been tackled as early as 2016 [12].
A recent survey classifies blockchain interoperability studies in three
categories: Cryptocurrency-directed interoperability approaches, Blockchain
Engines, and Blockchain Connectors [12]. Cryptocurrency-directed approaches
are mostly industry solutions that provide interoperability across public
blockchains. This category has a focus on asset interoperability and is
divided into sidechains, hash lock time contracts, notary schemes, and hybrid
solutions. Sidechains allow for offloading transactions to a secondary chain,
enhance performance, and provide features that the main chain would not
provide. Sidechains also enable the representation of a token from the
mainchain at the secondary chain. Some sidechain solutions include the BTC
Relay [13], Zendoo [14], and RSK [15]. Hash lock time contract solutions
enable cross-chain atomic operations using smart contracts. Wanchain uses this
scheme and provides loan services with cryptocurrencies [16]. Notary schemes
are centralized or decentralized entities that mediate token exchange (e.g.,
cryptocurrency exchanges). Finally, hybrid solutions combine characteristics
from previous approaches. The cryptocurrency-directed approaches only work for
transferring different types of cryptocurrencies between blockchain networks.
As a result, these approaches cannot be used for permissioned blockchains with
arbitrary assets and smart contracts, which are the focus of this paper.
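The hash lock time contract mechanism mentioned above can be sketched as follows. This is a simplified, illustrative model, not the contract logic of any named platform: locked funds are claimable with the secret preimage before a deadline, and refundable to the sender afterwards, which is what makes cross-chain swaps atomic:

```python
import hashlib

class HTLC:
    """Toy hash lock time contract: claim with the secret preimage before
    the deadline, or refund to the sender afterwards (illustrative only)."""

    def __init__(self, hashlock: bytes, deadline: float) -> None:
        self.hashlock = hashlock    # sha256 digest of the secret preimage
        self.deadline = deadline    # cutoff time for claiming
        self.settled = None         # None, "claimed", or "refunded"

    def claim(self, preimage: bytes, now: float) -> bool:
        # Recipient claims by revealing the preimage before the deadline.
        if (self.settled is None and now < self.deadline
                and hashlib.sha256(preimage).digest() == self.hashlock):
            self.settled = "claimed"
            return True
        return False

    def refund(self, now: float) -> bool:
        # Sender recovers the funds only after the deadline has passed.
        if self.settled is None and now >= self.deadline:
            self.settled = "refunded"
            return True
        return False

secret = b"cross-chain-secret"
lock = HTLC(hashlib.sha256(secret).digest(), deadline=100.0)
assert not lock.claim(b"wrong-secret", now=10.0)  # wrong preimage fails
assert lock.claim(secret, now=10.0)               # correct preimage succeeds
assert not lock.refund(now=200.0)                 # already claimed, no refund
```

Revealing the preimage on one chain lets the counterparty reuse it to claim on the other chain before its own deadline, which is how the two legs of the swap stay coupled.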
The second category is blockchain engines, which enable creating customized
blockchains that can interoperate by providing reusable data, network,
consensus, and contract layers. Platforms like Polkadot [17] and Cosmos [18]
provide such infrastructure, with “free interoperability” among the instances
they allow to create. These approaches are fundamentally different from what
has been proposed in this paper. Instead of enabling blockchain
interoperability for currently running blockchains, blockchain engines propose
blockchain networks that are interoperable by design. As a result, these
solutions cannot be used for currently running permissioned blockchains.
The Blockchain Connector category is composed of interoperability solutions
that are not cryptocurrency-directed or blockchain engines. They include
trusted relays, blockchain agnostic protocols, blockchain of blockchains
solutions, and blockchain migrators. Each of these categories responds to a
particular set of use cases. Trusted relays allow discovering the target
blockchains, often appearing in a permissioned blockchain environment where
trusted escrow parties route cross-blockchain transactions. An interesting
trusted relay approach is Hyperledger Cactus [19], the most recent Hyperledger
project aiming to connect a client to several blockchains, whereby
transactions are endorsed by trusted validators. Cactus focuses on providing
multiple use case scenarios via a trusted consortium. Another trusted relay is
proposed by Abebe et al. [20], which is an interoperability solution between
Fabric networks using Hyperledger Fabric chaincode and a protocol-buffer based
communication protocol. Blockchain agnostic protocols enable cross-blockchain
communication between arbitrary distributed ledger technologies, including
refactoring and making changes to each blockchain. Blockchain of blockchains
are approaches that allow users to build decentralized applications using
multiple blockchains. Finally, blockchain migrators enable the state of one
blockchain to be migrated to another blockchain. Simple blockchain migrations
have already been performed, paving the direction for complex blockchain
migrations [12].
Our pub-sub solution is considered a trusted relay (although decentralized),
as it contains a blockchain mediating communication across heterogeneous
blockchains [12]. While trusted relays can provide a straightforward way of
achieving interoperability, most of them are not trustless (e.g., contain a
centralization point). Our solution is a decentralized trusted relay that
implements a pub/sub system, anchored on the trust that underlying blockchains
offer.
### II-B Blockchain-based Publish/Subscribe Protocol
Blockchain technology has been applied to the pub/sub paradigm in a few
previous studies. However, those studies adopt blockchain to address the
existing problems in other areas, such as IoT [21], supply chain [22], multi-
tenant edge cloud [23], and digital trading [24].
Huang et al. [23] exploit blockchain technology to enhance the security of
pub/sub communications in multi-tenant edge clouds. Most topic-based and
broker-enabled pub/sub streaming systems use centralized cloud servers to
store sensitive metadata and access control lists (ACL), which can compromise
the confidentiality, anonymity, and integrity of tenants’ data. Alternatively,
critical data such as ACL and identity information, as well as the hash of raw
messages, and operation logs, can be stored on the blockchain to guarantee
data security and integrity. Their smart contracts implement access control
mechanisms to authorize requests from publishers and subscribers.
Trinity [22] proposes a blockchain-based distributed publish/subscribe broker
to solve the existing flaws of centralized brokers in IoT and supply chain
monitoring applications. Trinity has three main components: blockchain
network, broker, and clients. The blockchain network is responsible for
consensus in the system and persistent storage. The broker handles the
communications between the blockchain network and clients. The clients are the
publishers and subscribers of the topics. The blockchain network is pluggable,
and the authors have used Tendermint, Hyperledger Fabric, Ethereum, and IOTA.
The broker component uses the Mosquitto MQTT broker, which introduces a single
point of failure.
Zhao et al. [25] have proposed Secure Pub-Sub (SPS), which provides fair
payment based on a reputation for publishers and subscribers in cyber-physical
systems. They use the Bitcoin network to enable payments between the
entities, and they propose a reputation mechanism that helps calculate the
price of data.
Lv et al. [21] present a decentralized privacy-preserving pub/sub model for
IoT systems to solve the problems of centralized brokers, such as a single
point of failure, data tampering by corrupted brokers, and heavy encryption
algorithms. The presented model applies public-key encryption with equality
test [26] and ElGamal [27] to protect the privacy of participants (both
publishers and subscribers). A system prototype is implemented and evaluated
to demonstrate the feasibility of the proposed model.
Bu et al. [24] and Zupan et al. [28] have proposed blockchain-based pub/sub
brokers to address the drawbacks of traditional pub/sub systems such as
privacy and accountability. However, they do not describe the implementation
or evaluation of their systems.
All these studies adopt blockchain to address the predicaments of centralized
brokers in traditional pub/sub systems in distinct application domains, whereas our paper
focuses on establishing robust and practical interoperability between
permissioned blockchains with different architectures and infrastructures. To
the best of our knowledge, our paper is the first study that enhances
blockchain interoperability utilizing the pub/sub communication model across
heterogeneous blockchains (Hyperledger Fabric/Hyperledger Besu).
## III System Design
In this section, we first discuss the design principles that a blockchain
interoperability solution should follow. We then propose our interoperability
solution and explain its components and message flow.
Figure 1: Architecture of the platform and the message flow.
### III-A Design Principles
Blockchain interoperability aims to enable applications to use the assets and
information available on blockchains other than their main blockchain network
[12]. This allows for a greater range of applications. A blockchain
interoperability solution should take into account the following design
principles:
* The blockchain networks are independent, and they may have different architectures.
* The blockchain networks are in full control of their assets and information.
* The transfer protocol should be technology agnostic.
* The interoperability solution should not require significant changes in the source and destination networks.
* The blockchain networks should be able to incorporate the solution with minimal effort.
Following the mentioned principles, we present our solution, which allows
interoperability based on a publish/subscribe architecture.
### III-B Components
The platform proposed in this paper aims to solve the interoperability problem
of permissioned blockchains using the publish/subscribe pattern. When a
blockchain wants to use the data from another blockchain, there needs to be a
way to fetch and transfer this data between the networks securely. Moreover,
if the data changes in the source network, the destination network should be
notified of the change. Figure 1 shows the architecture of the platform and
the message flow.
The publisher blockchain is the blockchain network that sends the data, also
referred to as the source network. For a publisher to participate in this
platform, it needs to run the appropriate connector smart contract on its
blockchain and enroll as a publisher in the broker. The publisher can then
create as many topics as they want and use the connector smart contract to
publish the changes to the topic.
The subscriber blockchain is the blockchain network that receives the data,
also referred to as the destination network. Similar to the publisher, the
subscriber also needs to run the appropriate connector smart contract and
enroll as a subscriber. It can then subscribe to any topic available on the
broker blockchain. Every time the topic changes, the broker notifies the
subscriber by invoking the connector smart contract. There can exist many
subscribers for a topic.
The broker blockchain is the core component of the platform. It is a
blockchain network that stores all the information about the topics and the
blockchains that participate in the interoperability process. Since the broker
is a blockchain, operations regarding interoperation are immutable and
traceable, providing a robust basis for accountability. The broker has two
different smart contracts, the connector smart contract and the topics smart
contract. The connector smart contract stores the information about the
participating blockchain networks and how the broker can interact with them.
The topics smart contract is responsible for storing the information about the
topics such as their publisher, subscribers, and the latest message.
### III-C Message Flow
The complete interoperation process in the platform is shown in Figure 1. For
simplicity, only one publisher and one subscriber blockchain are shown in this
figure. However, for each topic, there can be an unlimited number of
subscribers and, in general, there is no limit on the number of publisher and
subscriber blockchains. A detailed explanation of each step in Figure 1
follows:
1. For any blockchain network to interact with the broker blockchain, it must
enroll in the connector smart contract. During this process, the information
that the broker needs to be able to interact with the blockchain is registered
in the connector smart contract. As a result, the publisher is required to
enroll in the connector smart contract as a publisher. This step only needs to
be performed once, when the publisher wants to create its first topic. It can
then create topics or publish to existing ones without the need to be enrolled
again.
2. A publisher that is registered in the connector smart contract can create a
new topic. In this step, the publisher needs to specify a name for the topic
and the initial topic message. This step only needs to be executed once for
each topic.
3. Similar to the publisher blockchain, the subscriber blockchain should also
enroll in the connector smart contract. This step only needs to be done once,
when the subscriber wants to subscribe to a topic for the first time.
4. To receive a notification when a topic has been changed, the subscriber
should subscribe to the topic by sending a request to the topics smart
contract. This results in the subscriber being added to the list of all
subscribers for the topic. This step only needs to be performed once for each
topic.
5. Whenever needed, the application in the publisher blockchain can update the
topic by sending a request to the connector smart contract.
6. The connector smart contract sends a publish request to the topics smart
contract for the existing topic.
7. As soon as a publish request is received by the topics smart contract, the
smart contract fetches the information about all the subscribers of the topic
from the connector smart contract. It includes information on how the broker
can interact with each of the subscribers.
8. The topics smart contract then uses the data fetched from the connector
smart contract to notify all the subscribers of the change in the topic. This
happens by sending an update request to the connector smart contract in each
of the subscriber networks.
9. In each subscriber network, the connector smart contract receives the
update for the topic and notifies the application.
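The enrollment and subscription steps (1-4) can be illustrated with a minimal in-memory sketch. The class and method names below are ours for illustration only; the actual logic runs as chaincode on the broker blockchain.

```typescript
// Illustrative in-memory model of steps 1-4 (enroll, create topic, subscribe).
// All names here are assumptions for this sketch, not the real chaincode API.
type ChainType = "Fabric" | "Besu";

class Broker {
  private blockchains = new Map<string, { name: string; type: ChainType }>();
  private topics = new Map<
    string,
    { name: string; publisher: string; subscribers: string[]; message: string }
  >();

  // Steps 1 and 3: a network enrolls once in the connector contract.
  enroll(id: string, name: string, type: ChainType): void {
    this.blockchains.set(id, { name, type });
  }

  // Step 2: an enrolled publisher creates a topic with an initial message.
  createTopic(topicId: string, name: string, publisherId: string, initialMessage: string): void {
    if (!this.blockchains.has(publisherId)) throw new Error("publisher not enrolled");
    this.topics.set(topicId, { name, publisher: publisherId, subscribers: [], message: initialMessage });
  }

  // Step 4: an enrolled subscriber registers for a topic (once per topic).
  subscribe(topicId: string, subscriberId: string): void {
    if (!this.blockchains.has(subscriberId)) throw new Error("subscriber not enrolled");
    const topic = this.topics.get(topicId);
    if (!topic) throw new Error("unknown topic");
    if (!topic.subscribers.includes(subscriberId)) topic.subscribers.push(subscriberId);
  }

  getTopic(topicId: string) {
    return this.topics.get(topicId);
  }
}

const broker = new Broker();
broker.enroll("pub1", "Publisher Fabric network", "Fabric");
broker.enroll("sub1", "Subscriber Besu network", "Besu");
broker.createTopic("t1", "price-feed", "pub1", "init");
broker.subscribe("t1", "sub1");
```

Note that enrollment happens once per network, while subscription happens once per topic, matching the "only needs to be performed once" constraints in the steps above.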
## IV Implementation and Deployment
A prototype of the proposed platform has been implemented as a proof-of-
concept to demonstrate the feasibility of the design. This project is a
Hyperledger Labs open-source project (https://github.com/hyperledger-labs/pubsub-interop),
under the Apache License, Version 2.0. To show the
interoperability capabilities of the platform, we implemented two example
subscriber networks, as well as an example publisher network. The broker and
the example publisher are implemented using two different Hyperledger Fabric
V2 [29] networks. The two example subscribers are implemented using
Hyperledger Fabric V1.4 [30, 31], and Hyperledger Besu [32]. In this section,
the implementation and deployment details of the broker and the example
networks are discussed.
### IV-A Broker Blockchain
The broker blockchain acts as a messaging broker between other blockchains to
enable interoperability. When choosing the blockchain solution to implement
the broker network, we had to ensure that the solution fits well with the
needs of this platform. First, since we aim to address interoperability in
permissioned blockchains, the broker also needs to be permissioned. Moreover,
many permissioned blockchains are enterprise-level, and they may have privacy
and governance concerns, which our broker aims to address by using a blockchain
that provides immutability and traceability. We need a blockchain solution for
the broker network that considers these needs. Finally, the smart contracts
that need to be implemented on the broker blockchain are complicated, and the
blockchain needs to support this kind of smart contract.
Hyperledger Fabric is an open-source permissioned blockchain that has been
designed for enterprise use cases. Unlike the open permissionless blockchains
that have scalability issues, Fabric enables high transaction throughput and
low transaction confirmation latency. The architecture of Hyperledger Fabric
is highly modular and configurable, which enables customization for each
specific use case. Many blockchains only support smart contracts written in
non-standard and domain-specific programming languages, making it impossible
to implement smart contracts that require a Turing-complete language. On the
other hand, Hyperledger Fabric supports smart contracts in general-purpose
programming languages such as Go, Node.js, and Java [29].
In the broker network, we leverage the capabilities of Hyperledger Fabric V2.2
to implement a messaging broker. We implement two chaincodes called the topics
and the connector to support the features needed for the broker. The topics
chaincode is responsible for keeping all the topics and their corresponding
details. In Hyperledger Fabric, everything is stored as a key-value pair. In
the topics smart contract, the key is a unique topic ID. The value is an
object that includes the following properties: name, publisher, subscribers,
and message. The name of a topic is a string value set by the publisher when
creating the topic. Each topic has one publisher, the blockchain network that
has registered the topic on the broker, which is the only blockchain that can
make changes to the topic. The subscribers’ property stores a list of all the
blockchains that have subscribed to the topic. It is worth mentioning that the
publisher and the subscribers’ properties only accept objects stored by the
connector chaincode.
The connector chaincode is responsible for storing the connection details of
other blockchain networks. Similar to the topics chaincode, the key in the
key-value pair used in this chaincode is a unique ID for each blockchain. The
value is an object that has the following properties: name, type, server IP,
port, extra information. The name is a string value that can be selected when
enrolling in the network. Type shows what kind of blockchain technology this
network is using. Currently, support for Fabric and Besu has been implemented,
and other blockchains will be added in future versions. The server IP and port
are used by the broker blockchain to access the publisher or subscriber using
an HTTP request. The extra information property stores network-specific
details that may be needed when interacting with the blockchains. For
instance, for a Hyperledger Besu network, this includes the private key,
address, and the contract application binary interface (ABI) that the broker
should use to send a request to the Besu network. This kind of implementation
allows our solution to be extendable to other blockchains, independent of
their underlying network.
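As a rough sketch, the two value schemas can be expressed as follows. The field names approximate the properties listed above; the exact keys used in the chaincodes may differ.

```typescript
// Sketch of the two key-value schemas described in the text.
// Field names are our approximation, not the exact chaincode keys.
interface TopicAsset {
  name: string;          // set by the publisher when creating the topic
  publisher: string;     // ID of the one blockchain allowed to change the topic
  subscribers: string[]; // IDs of the blockchains subscribed to the topic
  message: string;       // the latest published message
}

interface BlockchainAsset {
  name: string;                      // chosen when enrolling in the network
  type: "Fabric" | "Besu";           // currently supported technologies
  serverIp: string;                  // used by the broker for HTTP requests
  port: number;
  extraInfo: Record<string, string>; // network-specific details, e.g. Besu key, address, ABI
}

// Example connector entry for a Besu subscriber (placeholder values only).
const besuSubscriber: BlockchainAsset = {
  name: "example-besu",
  type: "Besu",
  serverIp: "10.0.0.5",
  port: 8545,
  extraInfo: { address: "0x0", abi: "[]" },
};
```

Keeping the network-specific details inside `extraInfo` is what makes the connector extendable: adding support for a new blockchain type only requires a new `type` value and its corresponding invocation logic.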
To better understand how the topics and connector chaincodes work, we need to
discuss their implemented functionalities. Figure 2 shows the UML class
diagram of the implemented chaincodes. The Hyperledger Fabric contract API
provides an interface for developing smart contracts and applications. Each
developed chaincode should extend the contract class from this API and then
implement the required logic. In each smart contract, the InitLedger function
is used to create a set of initial assets on the ledger when the chaincode is
deployed. In the topics chaincode, the CreateTopic function is used to create
a new asset of type topic. The QueryTopic and the QueryAllTopics functions can
be used to query one specific topic and all the existing topics, respectively.
The connector chaincode implements the same initialize, create, and query
functionalities but for assets of type blockchain.
Figure 2: UML class diagram of the implemented chaincodes.
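The create and query functions can be sketched against a minimal key-value store standing in for Fabric's world state. The `putState`/`getState` helpers below are simplified stand-ins for the Fabric contract API, and the function bodies are our approximation of the described behavior.

```typescript
// Minimal key-value ledger standing in for Fabric's world state.
const ledger = new Map<string, string>();
const putState = (key: string, value: object) => ledger.set(key, JSON.stringify(value));
const getState = (key: string) => {
  const raw = ledger.get(key);
  return raw ? JSON.parse(raw) : undefined;
};

// CreateTopic: store a new topic asset under a unique topic ID.
function createTopic(topicId: string, name: string, publisher: string, message: string): void {
  if (getState(topicId)) throw new Error("topic already exists");
  putState(topicId, { name, publisher, subscribers: [], message });
}

// QueryTopic: read back one specific topic by its ID.
function queryTopic(topicId: string) {
  const topic = getState(topicId);
  if (!topic) throw new Error("topic not found");
  return topic;
}

// QueryAllTopics: read back every stored topic.
function queryAllTopics() {
  return [...ledger.keys()].map(queryTopic);
}

createTopic("t1", "orders", "pub1", "init");
```

The connector chaincode follows the same create/query shape, only with `BlockchainAsset` values instead of topics.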
Other than the mentioned functions, the topics blockchain also implements
SubscribeToTopic, UnsubscribeFromTopic, and PublishToTopic functionalities.
When a destination blockchain wants to get notified of a topic’s change, it
subscribes to that topic by invoking the SubscribeToTopic function. The
subscriber can also unsubscribe from the topic at any time by invoking the
UnsubscribeFromTopic function. Finally, the PublishToTopic function is used by
the source blockchain network when they want to update the topic’s message. An
invoke request to this function triggers update requests to all the
subscribers of the topic. Algorithm 1 shows the detailed implementation of the
PublishToTopic method. First, the broker retrieves the latest version of the
topic from the ledger. In the case that no topic was found, it immediately
throws an error. Next, the topic’s message is updated with the new message
value and the topic’s state is put to the ledger. The next step is for the
broker to notify all the subscribers. For each of the subscribers of the
topic, the blockchain object is queried from the connector contract. This
inter-chaincode communication is also shown in Figure 2. Then, given the type
of subscriber blockchain, the steps to invoke the remote network are followed.
Input: topicID, newMessage
Result: Subscribers are notified of the new message

    topicState ← getState(topicID)
    if !topicState then
        throw error
    end if
    topicState.message ← newMessage
    putState(topicID, topicState)
    for subID in topicState.subscribers do
        subState ← query subID from connector contract
        if subState.type = Fabric then
            follow steps to invoke a remote Fabric network
        else if subState.type = Besu then
            follow steps to invoke a remote Besu network
        end if
    end for

Algorithm 1: PublishToTopic method
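Algorithm 1 translates to the following runnable sketch, in which state access and the remote-network invocations are stubbed out; in the actual chaincode these are `getState`/`putState` calls and HTTP requests to the subscriber networks.

```typescript
// Runnable sketch of Algorithm 1. State access and remote invocation are
// stubbed in-memory; names other than publishToTopic are our assumptions.
type Subscriber = { id: string; type: "Fabric" | "Besu" };

const topicsState = new Map<string, { message: string; subscribers: string[] }>();
const connectorState = new Map<string, Subscriber>();
const notified: string[] = []; // records which networks would be invoked

function invokeRemote(sub: Subscriber, message: string): void {
  // Stand-in for "follow steps to invoke a remote Fabric/Besu network".
  notified.push(`${sub.type}:${sub.id}:${message}`);
}

function publishToTopic(topicId: string, newMessage: string): void {
  const topicState = topicsState.get(topicId);  // getState(topicID)
  if (!topicState) throw new Error("topic not found");
  topicState.message = newMessage;              // update the topic's message
  topicsState.set(topicId, topicState);         // putState(topicID, topicState)
  for (const subId of topicState.subscribers) {
    const subState = connectorState.get(subId); // query the connector contract
    if (!subState) continue;
    // Dispatch on the subscriber's blockchain type, as in Algorithm 1.
    if (subState.type === "Fabric" || subState.type === "Besu") {
      invokeRemote(subState, newMessage);
    }
  }
}

connectorState.set("s1", { id: "s1", type: "Fabric" });
connectorState.set("s2", { id: "s2", type: "Besu" });
topicsState.set("t1", { message: "init", subscribers: ["s1", "s2"] });
publishToTopic("t1", "v2");
```

The loop over subscribers is why publish latency grows with the number of subscribers, a point the evaluation in Section V returns to.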
### IV-B Subscriber Blockchains
The subscriber, or destination blockchain, is the blockchain that requires
information from another blockchain to run a task. For the subscriber to be
able to participate in the platform, it needs to have the appropriate
connector smart contract deployed on it. We have implemented subscriber
connector contracts for Hyperledger Fabric V1.4 and Hyperledger Besu. However,
the connector is a simple smart contract that can also be developed by the
owners of the subscriber blockchain. This smart contract needs to keep track
of the topics that the subscriber has subscribed to and store their latest
version for other smart contracts to access at any time. Two example
subscriber networks have been implemented to demonstrate the interoperability
capabilities of the platform.
The first example subscriber is implemented using Hyperledger Fabric V1.4. The
second example subscriber is implemented using Hyperledger Besu, an open-
source Ethereum client that supports private and permissioned blockchains.
Besu can create networks that work based on a proof of work (PoW) or a proof
of authority (PoA) consensus algorithm. In this work, we implemented a PoW
network using Besu, which can be thought of as a private Ethereum network. We
then implemented a connector smart contract in Solidity to keep a record of
the subscribed topics.
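A subscriber-side connector can be sketched as follows. The Besu connector is actually written in Solidity; this TypeScript sketch only illustrates the logic, and the method names are ours.

```typescript
// Sketch of a subscriber-side connector: it records subscribed topics and
// keeps the latest message the broker pushed for each one.
class SubscriberConnector {
  private latest = new Map<string, string>();

  // Called locally when this network subscribes to a topic on the broker.
  trackTopic(topicId: string): void {
    if (!this.latest.has(topicId)) this.latest.set(topicId, "");
  }

  // Invoked by the broker (step 9 of the message flow) when a topic changes.
  updateTopic(topicId: string, message: string): void {
    if (!this.latest.has(topicId)) throw new Error("not subscribed to topic");
    this.latest.set(topicId, message);
  }

  // Other smart contracts on this network read the latest value at any time.
  readTopic(topicId: string): string {
    const msg = this.latest.get(topicId);
    if (msg === undefined) throw new Error("not subscribed to topic");
    return msg;
  }
}

const connector = new SubscriberConnector();
connector.trackTopic("t1");
connector.updateTopic("t1", "hello");
```

Because the connector only stores and serves the latest topic state, it is deliberately simple enough for subscriber network owners to implement themselves.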
### IV-C Publisher Blockchains
The publisher, or the source blockchain, is the blockchain network that needs
to send information to other blockchains. Similar to what we have in the
subscriber blockchain, a connector smart contract is also required for the
publishers. However, the connector is slightly different in the publisher. The
publisher connector should not only keep track of the topics, but it should
also connect to the broker blockchain to publish the topics. We implemented an
example publisher network using Hyperledger Fabric V2.2.
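The difference can be sketched as follows; `BrokerClient` is a hypothetical stand-in for the cross-network call from the publisher to the broker's topics contract.

```typescript
// Sketch of a publisher-side connector: unlike the subscriber connector,
// it must also call out to the broker when a topic is updated.
// BrokerClient is a hypothetical interface for this sketch.
type BrokerClient = { publishToTopic(topicId: string, message: string): void };

class PublisherConnector {
  private topics = new Map<string, string>();
  constructor(private broker: BrokerClient) {}

  createTopic(topicId: string, initialMessage: string): void {
    this.topics.set(topicId, initialMessage);
  }

  // Steps 5-6 of the message flow: the local application updates the topic,
  // and the connector forwards the publish request to the broker.
  publish(topicId: string, message: string): void {
    if (!this.topics.has(topicId)) throw new Error("unknown topic");
    this.topics.set(topicId, message);
    this.broker.publishToTopic(topicId, message);
  }
}

const calls: string[] = [];
const pub = new PublisherConnector({ publishToTopic: (id, m) => calls.push(`${id}:${m}`) });
pub.createTopic("t1", "init");
pub.publish("t1", "v2");
```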
## V Experimental Evaluation
In this section, we focus on evaluating the performance of the implemented
prototype of the broker blockchain. The goal is to see how the throughput and
latency of the system change in different scenarios. We have conducted two
series of experiments to achieve this goal. The first set of experiments aims
to show the performance metrics of different functionalities in the broker
blockchain. The second set of experiments focuses on the publish function,
which is the most important and time-consuming component of the broker
blockchain.
We have used Hyperledger Caliper [33] to run the experiments. Hyperledger
Caliper is an open-source blockchain performance benchmark tool that allows
performance measurement for different blockchains, such as Hyperledger Fabric,
Ethereum, and Hyperledger Besu. In Hyperledger Caliper, the workloads or
benchmarks are responsible for generating the content of each transaction that
is sent to the blockchain network. Given the network and benchmark
configurations, Caliper uses a set of independent workers to send scheduled
requests to the blockchain network and monitor the response. When the tests
are finished, Caliper generates a performance report consisting of the average
throughput and minimum, maximum, and average latency throughout the test. The
throughput shows the number of transactions that were processed in the system
in a given time. The latency shows the amount of time it takes for a
transaction to be finished and added to the ledger.
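These two definitions can be made concrete with a small calculation over per-transaction timestamps. This is a simplified reading of the metrics, not Caliper's exact implementation.

```typescript
// Simplified throughput/latency computation over per-transaction timestamps
// (in seconds), mirroring the definitions above rather than Caliper's code.
type Tx = { submitted: number; committed: number };

// Throughput: transactions processed per unit of observed time.
function throughputTps(txs: Tx[]): number {
  const start = Math.min(...txs.map(t => t.submitted));
  const end = Math.max(...txs.map(t => t.committed));
  return txs.length / (end - start);
}

// Average latency: mean time from submission to commitment on the ledger.
function avgLatency(txs: Tx[]): number {
  const total = txs.reduce((sum, t) => sum + (t.committed - t.submitted), 0);
  return total / txs.length;
}

const txs: Tx[] = [
  { submitted: 0.0, committed: 0.2 },
  { submitted: 0.5, committed: 0.8 },
  { submitted: 1.0, committed: 2.0 },
];
```

For the three sample transactions, the observed window is 2.0 seconds, giving a throughput of 1.5 TPS and an average latency of 0.5 seconds.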
Table I summarizes the specifications of each component in the experimental
evaluation. We have set up Hyperledger Caliper on a separate machine to ensure
that its process does not affect the performance of the broker network. We use
five workers, a fixed rate controller, and a test duration of 60 seconds for
each benchmark round. The broker network is implemented using Hyperledger
Fabric V2.2 with two peer organizations and an orderer organization, each with
an independent certificate authority. Each of the peer organizations hosts one
peer node, and the orderer uses the Raft implementation. Two chaincodes have been
implemented that run on one channel. The Fabric subscriber, implemented using
Fabric V1.4, has two organizations, each hosting two peers. The whole
subscriber network uses one Solo orderer and one Fabric certificate authority.
The Besu subscriber implements a private Ethereum network with PoW consensus
algorithm. The publisher has been implemented using Hyperledger Fabric V2.2
with the same configurations as the broker network.
TABLE I: Experimental evaluation setup

| Component | Type | CPU | RAM | Disk |
| --- | --- | --- | --- | --- |
| Caliper Benchmark | N/A | 8 vCPU | 30 GB | 288 GB |
| Broker | Fabric V2.2 | 8 vCPU | 30 GB | 288 GB |
| Fabric Subscriber | Fabric V1.4 | 2 vCPU | 7.5 GB | 36 GB |
| Besu Subscriber | Besu | 2 vCPU | 7.5 GB | 36 GB |
| Publisher | Fabric V2.2 | 2 vCPU | 7.5 GB | 36 GB |
The first set of experiments focuses on the performance evaluation of the
broker blockchain. In these experiments, we conduct a series of tests using
Hyperledger Caliper for each functionality that the broker blockchain offers.
Figure 2 summarizes all these functionalities. Each type of transaction goes
through a specific set of steps in Hyperledger Fabric, which highly influences
the response time for that transaction. For instance, an invoke transaction
goes through endorse, order and commit steps. On the other hand, a query
transaction is not transferred to the orderer, and the response is immediately
sent back by the peer. The create actions in the connector and topics smart
contract are invoke actions that have very similar implementations. The same
goes for the query actions in the two smart contracts. As a result, it would
be repetitive to run performance evaluation experiments for both smart
contracts. Therefore, we run the experiments on the topics smart contract.
The topics smart contract has five important functionalities: create a topic,
query a topic, publish to a topic, subscribe to a topic, and unsubscribe from
a topic. For each of these actions, we run a set of experiments by changing
the transaction send rate in the Hyperledger Caliper benchmark. The goal is to
see how the system’s throughput and average latency change when the send rate
is changed. Figure 3 shows the details of these experiments. It can be seen
that the send rate follows the same pattern for all the actions except for
PublishToTopic. The reason for this difference is that the PublishToTopic
action takes more time and needs more resources to run compared to other
actions. Consequently, broker blockchain’s hardware limits are reached when
the network receives more than roughly 100 publish transactions per second. We
discuss the behaviour of the network with different PublishToTopic
requests in the second set of experiments shown in Figure 4. As a result of
this limitation, we lowered the send rate for the PublishToTopic action in our
experiments.
Figure 3: The trend of system throughput and average latency for various
functionalities throughout time with the change of request send rate. The
words publish, sub, unsub, query, and create in the plots stand for
PublishToTopic, SubscribeToTopic, UnsubscribeFromTopic, QueryTopic, and
CreateTopic functions, respectively.
It can be seen in Figure 3 that the SubscribeToTopic, UnsubscribeFromTopic,
and CreateTopic have similar behaviours under the same send rate. These three
actions are of type invoke. Since an invoke transaction proposes a change in
the blockchain, it needs to go through the consensus algorithm, which can be
time-consuming. Since the three actions are of the same type, and none need
heavy computations in execution, the system throughput and latency for all of
them are similar. As shown in the experimentation results, when the send rate
is lower than a threshold (around 160 TPS in this case), the throughput is the
same as the send rate, and the average latency is only a few hundred
milliseconds (around 100 to 300 milliseconds). This shows that with send rates
below the threshold, all transactions are processed immediately. When the
number of create, subscribe, or unsubscribe transactions sent per second
exceeds the threshold, the broker network’s processing limit is reached.
The throughput is limited to the broker’s maximum capacity (around 160 TPS),
and the transactions are queued before being processed, which results in an
increase in the latency. Figure 3 shows that when the send rate for the
create, subscribe, or unsubscribe transactions is around 210 TPS, the average
latency increases to about 11 seconds. The latency keeps increasing with
higher send rates and reaches approximately 50 seconds with a send rate of 360
TPS.
The QueryTopic action is different from the previous ones. Since a query
transaction does not go through the consensus protocol, its process is much
faster. The send rate pattern used for the query is similar to that of create,
subscribe, and unsubscribe. However, the throughput and average latency act
very differently. The throughput follows the same pattern as the send rate,
and the average latency is around 10 milliseconds throughout the whole
experiment. These results show that this experiment does not reach the process
limit for QueryTopic.
Finally, the PublishToTopic is similar to create, subscribe, and unsubscribe
because they are all invoke transactions. However, the publish action requires
heavier computation. As mentioned earlier, since the publish action needs
more time and computational resources, we use a different send rate pattern.
If we were to use the same send rate, the broker blockchain’s hardware limits
would be reached, resulting in the experiments being halted. We discuss this
in more detail in the second set of experiments shown in Figure 4. To ensure
that the performance of the remote source and destination networks does not
influence the performance evaluation of the broker network, we only send dummy
requests to the subscriber networks during the experiments. It can be observed
from Figure 3 that the publish action reaches the processing limit of the
broker network much faster than the other invoke transactions. With send rates
of about 70 TPS and more, the throughput is limited to 65 TPS. The average
latency for the publish action has more fluctuations compared to other invoke
actions. The main reason for this fluctuation is that in the publish method,
depending on the number of subscribers that the topic has, the processing time
can vary. In this experiment, the average latency gets as high as 80 seconds,
with the send rate of 110 TPS.
Figure 4: The trend of system throughput, average latency, and request success
rate throughout time with the change of send rate.
Given the limits of the PublishToTopic action, we decided to run some
additional experiments on this type of transaction. This experiment aims to
find the broker network’s limits and discover what happens when the limit is
reached. In the previous experiment, we discovered that the processing limit
for the publish transactions is reached at the send rate of around 70 TPS. We
also observed that the latency increases and the throughput is limited for
send rates above 70 TPS and below 110 TPS. However, we would like to know what
happens if the send rate is more than 110 TPS. In this experiment, we linearly
increase the send rate from 50 to 150 TPS and observe the throughput, latency,
and transaction success rate. Figure 4 shows the results of this experiment.
Similar to the previous experiment, we see that the throughput is limited, and
the latency is increased when the send rate reaches 70 TPS. Nevertheless, the
interesting change happens at the 120 TPS send rate. At this point, a
significant drop in the throughput and a significant rise of latency are
observed. Moreover, the transaction success rate is not 100% anymore. From
this point on, a portion of the transactions fail since the broker network has
reached its hardware limits.
## VI Discussion and Future Work
To enable blockchain interoperability, we have proposed the use of a broker
blockchain as a middleman. The broker blockchain acts as a decentralized
trusted relay between the source and destination network. Using a relay
enables the interoperating networks to transfer data with minimal effort.
Verifying the data and handling the communications between different
blockchain networks can be delegated to the relay. As a result, there is no
need for source and destination networks to make fundamental changes to their
underlying structure. The relay network is also a blockchain network; while
exploiting all desirable features offered by blockchain, it runs smart
contracts implementing the interoperability functionality. Therefore, the
broker blockchain allows the interoperation to be seamless, transparent, and
secure.
The platform proposed in this paper stores the destination and source
blockchains as assets on the distributed ledger. As a result, a large number
of blockchains can be supported as there are no limits on the number of
assets. A study on the performance of Hyperledger Fabric V1.1 shows that the
network can scale to 100+ peers [34, 35]. As the broker network has been
implemented using Fabric V2.2, we expect this number to be even higher in our
network. Therefore, at least 100 peers can participate in the governance of
the broker blockchain.
In the current prototype of our platform, every participant can subscribe to
all existing topics, and there is no access control mechanism implemented. The
private data feature presented by Hyperledger Fabric can be utilized to
establish private channels in the broker blockchain and keep data topics
separate from other organizations. Furthermore, an access control mechanism
can be added to our pub/sub system to control the flow of data at a more
granular level, for instance, the decentralized access control proposed by
Rouhani et al. [36].
It is also possible to conduct authorization processes with minimal
information disclosure if one uses the novel self-sovereign based access
control model [37]. This way, we could model each blockchain as an agent to
prove that they own specific credentials needed for the access control
process. Moreover, the publisher network may choose only to make a topic
available to a subset of subscribers. Access control can be used to manage the
blockchains that can access each topic.
## VII Conclusion
With blockchain technology gaining popularity in academia and industry, many
blockchain networks are being introduced worldwide. These networks are highly
isolated and incompatible with each other, resulting in silos of data and
assets. Blockchain interoperability solutions can revolutionize this
technology by enabling data and asset transfers between homogeneous and
heterogeneous blockchains. In this paper, we proposed a blockchain
interoperability solution based on the publish/subscribe architecture. Our
solution consists of a broker blockchain that keeps a record of the data being
transferred between blockchain networks. The blockchains that aim to
participate in the interoperability can connect to the broker network as
publishers or subscribers, depending on their role. A prototype of the broker
blockchain has been implemented using Hyperledger Fabric. Moreover, an example
publisher and two example subscribers have been implemented using Hyperledger
Besu and two versions of Hyperledger Fabric to show that the design works for
heterogeneous blockchains. The network’s performance has been analyzed using a
benchmark tool to identify the platform’s limits and bottlenecks. The
implementation and evaluations indicate the feasibility of the idea with
satisfactory performance, and the bottleneck is identified to be the process
of publishing a new message to a topic. Finally, a discussion on the
extensibility, scalability, and possible improvements of the system is
presented.
## References
* [1] S. Nakamoto, “Bitcoin: A peer-to-peer electronic cash system.” https://bitcoin.org/bitcoin.pdf, 2008. Last accessed 2020-07-17.
* [2] G. Wood et al., “Ethereum: A secure decentralised generalised transaction ledger,” Ethereum project yellow paper, vol. 151, no. 2014, pp. 1–32, 2014.
* [3] B. Chase and E. MacBrough, “Analysis of the xrp ledger consensus protocol,” arXiv preprint arXiv:1802.07242, 2018.
* [4] T.-T. Kuo, H.-E. Kim, and L. Ohno-Machado, “Blockchain distributed ledger technologies for biomedical and health care applications,” Journal of the American Medical Informatics Association, vol. 24, no. 6, pp. 1211–1220, 2017.
* [5] S. Rouhani, L. Butterworth, A. D. Simmons, D. G. Humphery, and R. Deters, “Medichain tm: a secure decentralized medical data asset management system,” in 2018 IEEE International Conference on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData), pp. 1533–1538, IEEE, 2018.
* [6] T. M. Fernández-Caramés and P. Fraga-Lamas, “A review on the use of blockchain for the internet of things,” IEEE Access, vol. 6, pp. 32979–33001, 2018.
* [7] C. Fan, H. Khazaei, Y. Chen, and P. Musilek, “Towards a scalable dag-based distributed ledger for smart communities,” in 2019 IEEE 5th World Forum on Internet of Things (WF-IoT), pp. 177–182, IEEE, 2019.
* [8] R. Belchior, M. Correia, and A. Vasconcelos, “JusticeChain: Using Blockchain To Protect Justice Logs,” in CoopIS 2019: 27th International Conference on Cooperative Information Systems, 2019.
* [9] R. Belchior, A. Vasconcelos, and M. Correia, “Towards Secure, Decentralized, and Automatic Audits with Blockchain,” in European Conference on Information Systems, 2020.
* [10] S. Ghaemi, H. Khazaei, and P. Musilek, “Chainfaas: An open blockchain-based serverless platform,” IEEE Access, vol. 8, pp. 131760–131778, 2020.
* [11] J. H. Park and J. H. Park, “Blockchain security in cloud computing: Use cases, challenges, and solutions,” Symmetry, vol. 9, no. 8, p. 164, 2017.
* [12] R. Belchior, A. Vasconcelos, S. Guerreiro, and M. Correia, “A survey on blockchain interoperability: Past, present, and future trends,” arXiv preprint arXiv:2005.14282, 2020.
* [13] Ethereum Foundation and Consensys, “BTC-relay: Ethereum contract for Bitcoin SPV,” 2015.
* [14] A. Garoffolo, D. Kaidalov, and R. Oliynykov, “Zendoo: a zk-SNARK Verifiable Cross-Chain Transfer Protocol Enabling Decoupled and Decentralized Sidechains,” tech. rep., 2020.
* [15] S. Lerner tech. rep., RSK, 2015.
* [16] J. Lu, B. Yang, Z. Liang, Y. Zhang, S. Demmon, E. Swartz, and L. Lu, 2017.
* [17] G. Wood, “Polkadot: Vision for a Heterogeneous Multi-Chain Framework,” Whitepaper, 2017.
* [18] J. Kwon and E. Buchman, “Cosmos Whitepaper,” tech. rep., 2016.
* [19] H. Montgomery, H. Borne-Pons, J. Hamilton, M. Bowman, P. Somogyvari, S. Fujimoto, T. Takeuchi, T. Kuhrt, and R. Belchior, “Hyperledger Cactus Whitepaper.” https://github.com/hyperledger/cactus/blob/master/whitepaper/whitepaper.md, 2020. Last accessed 2020-09-28.
* [20] E. Abebe, D. Behl, C. Govindarajan, Y. Hu, D. Karunamoorthy, P. Novotny, V. Pandit, V. Ramakrishna, and C. Vecchiola, “Enabling enterprise blockchain interoperability with trusted data transfer (industry track),” in Proceedings of the 20th International Middleware Conference Industrial Track, pp. 29–35, 2019.
* [21] P. Lv, L. Wang, H. Zhu, W. Deng, and L. Gu, “An iot-oriented privacy-preserving publish/subscribe model over blockchains,” IEEE Access, vol. 7, pp. 41309–41314, 2019.
* [22] G. S. Ramachandran, K.-L. Wright, L. Zheng, P. Navaney, M. Naveed, B. Krishnamachari, and J. Dhaliwal, “Trinity: A byzantine fault-tolerant distributed publish-subscribe system with immutable blockchain-based persistence,” in 2019 IEEE International Conference on Blockchain and Cryptocurrency (ICBC), pp. 227–235, IEEE, 2019.
* [23] B. Huang, R. Zhang, Z. Lu, Y. Zhang, J. Wu, L. Zhan, and P. C. Hung, “Bps: A reliable and efficient pub/sub communication model with blockchain-enhanced paradigm in multi-tenant edge cloud,” Journal of Parallel and Distributed Computing, 2020.
* [24] G. Bu, T. S. L. Nguyen, M. P. Butucaru, and K. L. Thai, “Hyperpubsub: Blockchain based publish/subscribe,” in 2019 38th Symposium on Reliable Distributed Systems (SRDS), pp. 366–3662, IEEE, 2019.
* [25] Y. Zhao, Y. Li, Q. Mu, B. Yang, and Y. Yu, “Secure pub-sub: Blockchain-based fair payment with reputation for reliable cyber physical systems,” IEEE Access, vol. 6, pp. 12295–12303, 2018.
* [26] G. Yang, C. H. Tan, Q. Huang, and D. S. Wong, “Probabilistic public key encryption with equality test,” in Cryptographers’ Track at the RSA Conference, pp. 119–131, Springer, 2010.
* [27] T. ElGamal, “A public key cryptosystem and a signature scheme based on discrete logarithms,” IEEE transactions on information theory, vol. 31, no. 4, pp. 469–472, 1985.
* [28] N. Zupan, K. Zhang, and H.-A. Jacobsen, “Hyperpubsub: a decentralized, permissioned, publish/subscribe service using blockchains,” in Proceedings of the 18th ACM/IFIP/USENIX Middleware Conference: Posters and Demos, pp. 15–16, 2017.
* [29] Hyperledger, “Hyperledger fabric v2.2 documentation.” https://hyperledger-fabric.readthedocs.io/en/release-2.2/, 2020. Last accessed 2020-10-22.
* [30] Hyperledger, “Hyperledger fabric v1.4 documentation.” https://hyperledger-fabric.readthedocs.io/en/release-1.4/, 2019. Last accessed 2020-10-22.
* [31] E. Androulaki, A. Barger, V. Bortnikov, C. Cachin, K. Christidis, A. De Caro, D. Enyeart, C. Ferris, G. Laventman, Y. Manevich, et al., “Hyperledger fabric: a distributed operating system for permissioned blockchains,” in Proceedings of the thirteenth EuroSys conference, pp. 1–15, 2018.
* [32] Hyperledger, “Hyperledger besu documentation.” https://besu.hyperledger.org, 2020. Last accessed 2020-10-22.
* [33] T. L. Foundation, “Hyperledger Caliper.” https://www.hyperledger.org/use/caliper, 2020. Last accessed 2020-11-5.
* [34] C. Fan, S. Ghaemi, H. Khazaei, and P. Musilek, “Performance evaluation of blockchain systems: A systematic survey,” IEEE Access, vol. 8, pp. 126927–126950, 2020.
* [35] E. Androulaki, A. Barger, V. Bortnikov, C. Cachin, K. Christidis, A. De Caro, D. Enyeart, C. Ferris, G. Laventman, Y. Manevich, S. Muralidharan, C. Murthy, B. Nguyen, M. Sethi, G. Singh, K. Smith, A. Sorniotti, C. Stathakopoulou, M. Vukolić, S. W. Cocco, and J. Yellick, “Hyperledger fabric: A distributed operating system for permissioned blockchains,” in Proceedings of the Thirteenth EuroSys Conference, EuroSys ’18, (New York, NY, USA), Association for Computing Machinery, 2018.
* [36] S. Rouhani, R. Belchior, R. S. Cruz, and R. Deters, “Distributed Attribute-Based Access Control System Using a Permissioned Blockchain,” arXiv preprint, 2020.
* [37] R. Belchior, B. Putz, G. Pernul, M. Correia, A. Vasconcelos, and S. Guerreiro, “SSIBAC : Self-Sovereign Identity Based Access Control,” in The 3rd International Workshop on Blockchain Systems and Applications, IEEE, 2020.
# Atomic Swaps between Bitcoin and Monero
Philipp Hoenisch and Lucas Soriano del Pino
Affiliations: COMIT (email: <EMAIL_ADDRESS>); CoBloX Pty Ltd (email: <EMAIL_ADDRESS>)
###### Abstract
Due to the ever-growing blockchain ecosystem, interoperability has become a
matter of great importance. Atomic swaps allow connecting otherwise isolated
blockchains while adhering to the core principles of censorship resistance and
permissionlessness. Up until recently, atomic swap protocols have mostly relied
on complex script support, excluding certain types of blockchains. With
advances in cryptography, it is now possible to build a bridge between almost
any two blockchains. In this work, we explain one such protocol, which applies
adaptor signatures on Bitcoin to enable atomic swaps between Monero and
Bitcoin. We dive into the cryptographic details, discuss its limitations and
give an outlook on our current work, in which we apply adaptor signatures to
the Monero signature scheme.
###### Keywords:
Blockchain · Atomic Swap · Bitcoin · Monero · Adaptor Signatures.
## 1 Introduction
Since the birth of Bitcoin in 2008[11], many other cryptocurrencies have been
introduced. It is without a doubt that this flourishing ecosystem has evolved
into an enormous financial market. Cryptocurrencies are traded against fiat
(e.g. USD, AUD, EUR) or against each other. However, due to the lack of
interoperability between different blockchains, most of the trades are
executed on centralized exchanges. Due to regulations, these centralized
exchanges have integrated complex KYC (Know Your Customer) procedures where
traders have to go through lengthy processes to prove their identity. In
addition, traders give up control over their hard-earned coins by depositing
them in the exchange so that they can execute trades. Traders then have to
trust the exchange to manage their funds according to the highest standards,
to protect them against theft, and not to lose them otherwise. This trust has
been misused more than once in the past, and billions of dollars in user funds
have been lost[5].
One could say that these centralized exchanges are now a relic of the past. A
new era of decentralized exchanges has started, adhering to the core idea of
Bitcoin: censorship resistance at all levels.
Decentralized exchanges powered by atomic swaps, first introduced in 2015 by
TierNolan[17], can now promise more guarantees in terms of security and
privacy to traders.
The original idea of atomic swaps uses HTLCs (Hash Time-Lock Contracts),
imposing certain requirements on the underlying blockchains: (1) they must
support scripts so that one can build hash locks; and (2) they must support
timelocks.
Technology has evolved and, with advances in cryptography, a new way of cross-
chain atomic swaps using adaptor signatures is gaining traction.
Atomic swaps using adaptor signatures (also referred to as Scriptless Scripts)
have several advantages over traditional atomic swaps using HTLCs: (1)
contrary to HTLCs where the same hash has to be used on each chain,
transactions involved in an atomic swap using adaptor signatures cannot be
linked; and (2) since no script is involved, the on-chain footprint is reduced
which makes the atomic swap cheaper.
Within this work we present our current efforts on cross-chain atomic swaps
using adaptor signatures. In particular, we show how adaptor signatures can be
employed to swap between Monero and Bitcoin. Notably, the former does not
support scripts or timelocks.
## 2 HTLC-based Atomic Swaps
Replacing centralized exchanges by decentralized ones is not new. The idea of
using HTLCs for atomically swapping two assets across two chains has been
around for a while[17]. Various companies have used this technology in their
products and protocols for cross-chain trading[1, 6]. Moreover, HTLCs are also
used in the Lightning Network for multi-hop payments[15].
In a nutshell, an HTLC-based atomic swap protocol works like this: we assume
that two parties, Alice and Bob, have found each other somehow and agreed on
the amounts and assets (e.g. bitcoin and ether) which they want to exchange.
Alice generates a random secret $s$ and uses a cryptographic hash function to
generate the hash $h$. She then creates an HTLC using $h$ and locks up the
bitcoin. These coins can either be redeemed (spent) using the secret $s$ or
are returned to her after time $t$ has passed. Bob does the same thing on the
other chain: he locks up his ether in an HTLC using the same hash $h$.
Since Alice knows the original secret $s$ that was used to produce the hash
$h$, she can redeem the ether from Bob’s HTLC. By doing so, she reveals the
secret $s$ to Bob, who can then take the bitcoin, completing the swap.
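The hash-lock mechanism described above can be sketched in a few lines. This is a minimal illustration only; SHA-256 stands in for whatever hash function the chains' scripting languages support:

```python
import hashlib
import secrets

# Alice generates a random secret s and its hash h (the "hash lock").
s = secrets.token_bytes(32)
h = hashlib.sha256(s).digest()

# Both HTLCs are locked with the same hash h: redeeming requires a preimage
# that hashes to h, and revealing that preimage on-chain hands it to Bob.
def can_redeem(preimage: bytes, hash_lock: bytes) -> bool:
    return hashlib.sha256(preimage).digest() == hash_lock

assert can_redeem(s, h)             # Alice redeems Bob's HTLC with s
assert not can_redeem(b"wrong", h)  # without s, the coins stay locked
```

Note that the same $h$ appearing verbatim on both chains is exactly what makes the two transactions linkable by an onlooker.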
This apparently simple process has a few drawbacks:
* •
The requirements on the underlying blockchains are high. A certain script
capability is required in order to support a hash function as well as
timelocks. While many blockchains support these two features, some lack either
one (e.g. Grin has no script support and hence no support for hash functions)
or both (e.g. Monero).
* •
By definition, the same hash has to be used on both chains. This allows an
independent third party to link those two transactions. Worse yet, since
blockchain transactions are publicly available to everyone, this onlooker can
now track where the parties move their newly acquired funds.
* •
The use of scripts (e.g. on Bitcoin, Litecoin, etc) or smart contracts (e.g.
on Ethereum) results in an increased on-chain footprint and higher transaction
fees in general.
With recent advancements in cryptography and the application of adaptor
signatures to atomic swaps, it is now possible to overcome almost all of the
aforementioned drawbacks. For example, Grin-Bitcoin swaps can be realized
despite Grin’s lack of a scripting language. Using Schnorr adaptor signatures
and timelocks, an atomic swap protocol can be executed[4].
Recently, Gugger (aka h4sh3d) proposed a protocol which enables atomic
swaps between Monero and Bitcoin[10]. In the next section we discuss this
protocol in detail; in Section 4, we present our current work, motivated by
some of the limitations of [10].
## 3 BTC to XMR atomic swaps
Figure 1: Transaction schema for BTC to XMR atomic swaps. Top: transaction
schema for Bitcoin ($tx_{\textsf{lock}}^{\textsf{btc}}$,
$tx_{\textsf{redeem}}^{\textsf{btc}}$, $tx_{\textsf{cancel}}^{\textsf{btc}}$,
$tx_{\textsf{refund}}^{\textsf{btc}}$, $tx_{\textsf{punish}}^{\textsf{btc}}$).
Bottom: transaction schema for Monero ($tx_{\textsf{lock}}^{\textsf{xmr}}$,
$tx_{\textsf{redeem}}^{\textsf{xmr}}$, $tx_{\textsf{refund}}^{\textsf{xmr}}$).
Note: Monero view keys are omitted for clarity.
The protocol described in this section is largely based on the work of
Gugger[10]. We highlight key differences between the original and our
instantiation of it[2] throughout.
### 3.1 Situation
Alice and Bob have agreed to a trade in which Alice will send
$\textsf{amt}_{\textsf{xmr}}$ to Bob, and Bob will send
$\textsf{amt}_{\textsf{btc}}$ to Alice. They require this exchange to be
atomic, i.e. the change of ownership of one asset should effectively imply the
change of ownership of the other. Additionally, should the exchange not come
to fruition, they expect any committed assets to be returned to them.
### 3.2 Overview
#### 3.2.1 Happy path
After exchanging a set of addresses, keys, zero-knowledge proofs and
signatures, Bob locks up $\textsf{amt}_{\textsf{btc}}$ in a Point Time Locked
Contract (PTLC)[14] locked using point $S_{a}^{btc}$ by publishing
$tx_{\textsf{lock}}^{\textsf{btc}}$. Being a PTLC, the output is also
spendable in an alternative manner after time $t_{1}$.
Alice subsequently locks up $\textsf{amt}_{\textsf{xmr}}$ in a shared output
with public spend key $S_{a}^{xmr}+S_{b}^{xmr}$ and public view key
$V_{a}+V_{b}$ by publishing $tx_{\textsf{lock}}^{\textsf{xmr}}$. The relationship
between $S_{a}^{xmr}$ and $S_{a}^{btc}$ is that they share the same secret key
$s_{a}$ despite being points on different elliptic curve groups. The same
relationship applies to $S_{b}^{xmr}$ and $S_{b}^{btc}$. This output will be
owned by the party with knowledge of both $s_{a}$ and $s_{b}$.
Bob notices the publication of $tx_{\textsf{lock}}^{\textsf{xmr}}$ and sends
to Alice an adaptor signature[8]
$\hat{\sigma}_{\textsf{redeem}}^{S_{a}^{btc},B}$ which she is able to combine
with $s_{a}$ producing $\sigma_{\textsf{redeem}}^{B}$. Provided there is
enough time until $t_{1}$, she then publishes a
$tx_{\textsf{redeem}}^{\textsf{btc}}$ signed with
$\sigma_{\textsf{redeem}}^{B}$ and her own $\sigma_{\textsf{redeem}}^{A}$.
Broadcasting this transaction moves $\textsf{amt}_{\textsf{btc}}$ to an
address owned by Alice.
Finally, Bob sees $tx_{\textsf{redeem}}^{\textsf{btc}}$ on the blockchain. He
finds $\sigma_{\textsf{redeem}}^{B}$ in the witness stack and combines it with
$\hat{\sigma}_{\textsf{redeem}}^{S_{a}^{btc},B}$ to learn $s_{a}$. With
knowledge of both $s_{a}$ and $s_{b}$, Bob is the de facto owner of
$\textsf{amt}_{\textsf{xmr}}$, which he is able to move to a different address
of his at any time.
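The combine-and-recover arithmetic behind the adaptor signatures used above can be illustrated with a toy Schnorr-style construction over a small multiplicative group. This is a sketch only: the actual protocol uses ECDSA adaptor signatures on Bitcoin, and the group parameters here are far too small to be secure.

```python
import hashlib
import secrets

# Toy Schnorr group: p = 2q + 1 with p, q prime; g = 4 generates the
# order-q subgroup. Tiny parameters, for illustration only.
p, q, g = 467, 233, 4

def H(*vals) -> int:
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

# Bob's signing key x, Alice's secret s_a with public point S_a = g^s_a.
x = secrets.randbelow(q - 1) + 1
X = pow(g, x, p)
s_a = secrets.randbelow(q - 1) + 1
S_a = pow(g, s_a, p)

m = "tx_redeem_btc"

# Bob produces an adaptor ("encrypted") signature: by itself it is not a
# valid signature, but anyone knowing s_a can complete it.
r = secrets.randbelow(q - 1) + 1
R = pow(g, r, p)
c = H(R * S_a % p, m)        # challenge binds the combined nonce R * S_a
s_pre = (r + c * x) % q      # pre-signature, safe to share with Alice

# Alice "decrypts" with s_a, yielding a full signature (R * S_a, s_full).
s_full = (s_pre + s_a) % q
assert pow(g, s_full, p) == (R * S_a % p) * pow(X, c, p) % p

# Bob sees s_full on-chain and recovers s_a = s_full - s_pre.
recovered = (s_full - s_pre) % q
assert recovered == s_a
```

The last two steps mirror the protocol: publishing the completed signature is what leaks $s_{a}$ to Bob.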
#### 3.2.2 Cancel
Once Bob has published $tx_{\textsf{lock}}^{\textsf{btc}}$, if time $t_{1}$ is
reached, either party can elect to publish
$tx_{\textsf{cancel}}^{\textsf{btc}}$, diverging from the “happy path”. The
transaction $tx_{\textsf{cancel}}^{\textsf{btc}}$ was constructed in such a
way that it will only be mined after time $t_{1}$. The use of transaction-
level timelocks is one of the ways in which this protocol deviates from the
original[10].
##### Refund:
With $tx_{\textsf{cancel}}^{\textsf{btc}}$ confirmed on the blockchain, Bob
should immediately publish $tx_{\textsf{refund}}^{\textsf{btc}}$ to reclaim
his $\textsf{amt}_{\textsf{btc}}$ (minus some fees).
Alice would then spot $tx_{\textsf{refund}}^{\textsf{btc}}$ on the Bitcoin
blockchain, giving her access to $\sigma_{\textsf{refund}}^{A}$. Combining
$\sigma_{\textsf{refund}}^{A}$ with
$\hat{\sigma}_{\textsf{refund}}^{S_{b}^{btc},A}$ would leak $s_{b}$ to her.
Knowing $s_{a}$ and $s_{b}$, Alice would effectively reclaim control over
$\textsf{amt}_{\textsf{xmr}}$, which she could eventually move back to one of
her wallet addresses.
##### Punish:
Should Bob remain inactive after $tx_{\textsf{cancel}}^{\textsf{btc}}$ is
published, Alice still has a way to get compensation for the failed swap.
After time $t_{2}$, Alice can punish Bob for not triggering the refund path in
time by publishing $tx_{\textsf{punish}}^{\textsf{btc}}$. With this
transaction Alice claims $\textsf{amt}_{\textsf{btc}}$. The
$\textsf{amt}_{\textsf{xmr}}$ remains locked forever, but from Alice’s
perspective it is as if the trade went through.
The existence of $tx_{\textsf{punish}}^{\textsf{btc}}$ therefore incentivises
Bob to publish $tx_{\textsf{refund}}^{\textsf{btc}}$ as soon as possible.
Either way, Alice, the party who has no agency on whether refund will occur or
not, remains protected.
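The three possible resolutions of the BTC side described above can be summarised as a small decision function. The helper and its flags are hypothetical, used only to restate the case analysis:

```python
# Illustrative decision logic for how the BTC side of the swap resolves,
# given which deadlines were met. Hypothetical helper, not part of the
# protocol implementation.
def btc_outcome(redeemed_before_t1: bool, bob_refunds_before_t2: bool) -> str:
    if redeemed_before_t1:
        return "redeem: Alice gets BTC, Bob learns s_a and takes the XMR"
    if bob_refunds_before_t2:
        return "refund: Bob reclaims BTC, Alice learns s_b and reclaims the XMR"
    return "punish: Alice claims BTC, the XMR stays locked forever"

assert btc_outcome(True, False).startswith("redeem")
assert btc_outcome(False, True).startswith("refund")
assert btc_outcome(False, False).startswith("punish")
```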
### 3.3 Off-chain preparation
As hinted at in the previous section, before Alice and Bob can go on-chain
they must exchange some data.
For simplicity we assume a fixed fee for all Bitcoin transactions that must be
signed by both parties. In practice, the best way to handle transaction fees
would be to adopt a Child-pays-for-parent (CPFP)[13] strategy, so that the
parties do not have to commit to a particular fee rate ahead of time.
#### 3.3.1 Key generation
Firstly, they engage in a key generation protocol, as shown in Fig. 2.
Alice sends to Bob a Bitcoin public key $A$; a Monero private view key
$v_{a}$; a Monero public spend key $S_{a}^{xmr}$; a Bitcoin public key
$S_{a}^{btc}$; and a Discrete Logarithm Equality (DLEQ) $\pi_{s_{a}}$ proof
between $S_{a}^{xmr}$ and $S_{a}^{btc}$. The characteristics of this kind of
proof will be explained in greater detail in Section 3.5.
Similarly, Bob sends to Alice a Bitcoin public key $B$; a Monero private view
key $v_{b}$; a Monero public spend key $S_{b}^{xmr}$; a Bitcoin public key
$S_{b}^{btc}$; and a DLEQ proof $\pi_{s_{b}}$ between $S_{b}^{xmr}$ and
$S_{b}^{btc}$.
If either party receives an invalid DLEQ proof, they must abort the protocol.
Figure 2: Key generation protocol. Alice samples $a,v_{a},s_{a}$, computes
$A=aG$, $S_{a}^{btc}=s_{a}G$, $S_{a}^{xmr}=s_{a}H$ and the proof
$\pi_{s_{a}}=\mathsf{P}_{\textsf{DLEQ}((G,S_{a}^{btc}),(H,S_{a}^{xmr}),s_{a})}$;
Bob does the same for $b,v_{b},s_{b}$. Each party verifies the other's DLEQ
proof and aborts if verification fails.
#### 3.3.2 Address exchange
Additionally, Alice sends to Bob two Bitcoin addresses
$\textsf{addr}_{\textsf{redeem}}^{\textsf{A}}$ and
$\textsf{addr}_{\textsf{punish}}^{\textsf{A}}$; and Bob sends to Alice one
Bitcoin address $\textsf{addr}_{\textsf{refund}}^{\textsf{B}}$. The
$\textsf{amt}_{\textsf{btc}}$ will end up in one of these depending on the
protocol execution.
#### 3.3.3 Expiries
The value of the two timelocks $t_{1}$ and $t_{2}$ must be confirmed before
Alice and Bob can sign any transactions. Timelock $t_{1}$ determines how long
Alice will have to publish and confirm $tx_{\textsf{lock}}^{\textsf{xmr}}$,
and safely redeem $tx_{\textsf{lock}}^{\textsf{btc}}$. Timelock $t_{2}$
determines how long Bob has to refund his bitcoin after
$tx_{\textsf{cancel}}^{\textsf{btc}}$ is published by either party.
In this protocol we only use relative timelocks because they create consistent
windows of action no matter when $tx_{\textsf{lock}}^{\textsf{btc}}$ and
$tx_{\textsf{cancel}}^{\textsf{btc}}$ are included in a block.
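The effect of relative timelocks can be sketched as simple block-height arithmetic. Helper names and the block counts are illustrative assumptions, not values from the protocol:

```python
# Relative timelocks count from the confirmation height of the output they
# spend, so each window has a fixed length no matter when the previous
# transaction was mined. T1 and T2 are illustrative values in blocks.
T1, T2 = 72, 72

def cancel_spendable_from(lock_btc_height: int) -> int:
    # tx_cancel_btc becomes minable T1 blocks after tx_lock_btc confirms.
    return lock_btc_height + T1

def punish_spendable_from(cancel_btc_height: int) -> int:
    # tx_punish_btc becomes minable T2 blocks after tx_cancel_btc confirms.
    return cancel_btc_height + T2

# The window lengths are independent of the confirmation heights themselves,
# which is the consistency property the protocol relies on.
assert cancel_spendable_from(700_000) - 700_000 == T1
assert cancel_spendable_from(712_345) - 712_345 == T1
assert punish_spendable_from(700_100) - 700_100 == T2
```

An absolute timelock, by contrast, would shrink a party's window if the preceding transaction confirmed late.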
#### 3.3.4 Signing phase
This phase is a pre-requisite to Bob being able to lock up the bitcoin safely.
It also ensures that Alice can safely lock up the monero herself, once she has
confirmed that the bitcoin is on the blockchain.
Before either party can start signing the Bitcoin transactions, Bob must
define what $tx_{\textsf{lock}}^{\textsf{btc}}$ looks like. Given what they
both already know, they can construct the PTLC output: one which can be spent
by providing signatures for $A$ and $B$. Bob builds the rest of the
transaction using a Bitcoin wallet which will contribute the necessary inputs
and outputs. This is the first step of the signing protocol, which is depicted
in Fig. 3. Bob sends the unsigned $tx_{\textsf{lock}}^{\textsf{btc}}$
to Alice, alongside the signatures $\sigma_{\textsf{cancel}}^{B}$ and
$\sigma_{\textsf{punish}}^{B}$. He can safely share these signatures with
Alice because $tx_{\textsf{lock}}^{\textsf{btc}}$ remains unpublished and
unsigned.
With $tx_{\textsf{lock}}^{\textsf{btc}}$ Alice computes the signatures
$\sigma_{\textsf{cancel}}^{A}$ and $\sigma_{\textsf{punish}}^{A}$. She also
computes the adaptor signature
$\hat{\sigma}_{\textsf{refund}}^{S_{b}^{btc},A}$, which Bob would need to
decrypt if he ever wants to refund his bitcoin. Using the corresponding
decrypted signature $\sigma_{\textsf{refund}}^{A}$ to publish
$tx_{\textsf{refund}}^{\textsf{btc}}$ would leak $s_{b}$ to Alice, allowing
her to refund her own monero. Alice sends back $\sigma_{\textsf{cancel}}^{A}$
and $\hat{\sigma}_{\textsf{refund}}^{S_{b}^{btc},A}$ to Bob.
All that remains is for Bob to compute his own $\sigma_{\textsf{cancel}}^{B}$.
Figure 3: Signing protocol. Both parties must verify the signatures received,
but this is left out for clarity.
### 3.4 On-chain protocol
In Section 3.3 we have explained how Alice and Bob set the stage for the swap
to take place. The sequence diagram in Fig. 4 shows the rest of the steps
towards a successful atomic swap. With the ability to broadcast signed
versions of $tx_{\textsf{cancel}}^{\textsf{btc}}$ and
$tx_{\textsf{refund}}^{\textsf{btc}}$ to take his coins back, Bob can now
proceed by publishing $tx_{\textsf{lock}}^{\textsf{btc}}$. He uses his Bitcoin
wallet to sign each input and broadcasts it to the network. Alice finds
$tx_{\textsf{lock}}^{\textsf{btc}}$ on the blockchain by using the transaction
ID which can be deterministically computed from
$tx_{\textsf{lock}}^{\textsf{bitcoin}}$. With enough confirmations on it to
consider it irreversible and sufficient time until $t_{1}$, Alice publishes
$tx_{\textsf{lock}}^{\textsf{xmr}}$. The only requirement on this transaction
is that it must pay $\textsf{amt}_{\textsf{xmr}}$ to the address corresponding
to the public spend key $S_{a}^{xmr}+S_{b}^{xmr}$ and the public view key
$V_{a}+V_{b}$. Bob does not need to know any other details, because the
parties do not need to sign transactions depending on
$tx_{\textsf{lock}}^{\textsf{xmr}}$ ahead of time. Bob finds
$tx_{\textsf{lock}}^{\textsf{xmr}}$ on the blockchain by leveraging his
knowledge of the private view key $v_{a}+v_{b}$. In Monero, only parties with
knowledge of the private view key are privy to transactions involving the
matching address. Once Bob considers that $tx_{\textsf{lock}}^{\textsf{xmr}}$
has garnered enough confirmations, he proceeds by sending
$\hat{\sigma}_{\textsf{redeem}}^{S_{a}^{btc},B}$ to Alice. This adaptor
signature can be decrypted by Alice to grant her the ability to redeem
$\textsf{amt}_{\textsf{btc}}$. On receiving
$\hat{\sigma}_{\textsf{redeem}}^{S_{a}^{btc},B}$ Alice first verifies that
what she has received is useful to her by executing
$\textsf{ECDSA}.\mathsf{Enc}\textsf{Vrfy}(B,S_{a}^{btc},tx_{\textsf{redeem}}^{\textsf{btc}},\hat{\sigma}_{\textsf{redeem}}^{S_{a}^{btc},B})$.
This ensures that the adaptor signature commits to a valid signature on $B$
for the transaction $tx_{\textsf{redeem}}^{\textsf{btc}}$, encrypted by
$S_{a}^{btc}$. With knowledge of $s_{a}$ Alice decrypts it by calling
$\textsf{ECDSA}.\mathsf{Dec}(s_{a},\hat{\sigma}_{\textsf{redeem}}^{S_{a}^{btc},B})$,
obtaining $\sigma_{\textsf{redeem}}^{B}$. Alice now has the means to publish
$tx_{\textsf{redeem}}^{\textsf{btc}}$, but she must only do so if there is
enough time to confirm the transaction before $t_{1}$. Otherwise, Bob could
front-run her transaction with $tx_{\textsf{cancel}}^{\textsf{btc}}$, ensuring
his refund of $\textsf{amt}_{\textsf{btc}}$ and still finding
$tx_{\textsf{redeem}}^{\textsf{btc}}$ in the mempool, with which he would be
able to also claim the $\textsf{amt}_{\textsf{xmr}}$. Assuming there is enough
time, she goes ahead and publishes $tx_{\textsf{redeem}}^{\textsf{btc}}$,
claiming $\textsf{amt}_{\textsf{btc}}$. Finally, Bob can use the information
obtained by the publication of $tx_{\textsf{redeem}}^{\textsf{btc}}$ to claim
the $\textsf{amt}_{\textsf{xmr}}$. He takes the transaction from the
blockchain, extracts the signature $\sigma_{\textsf{redeem}}^{B}$ from it and
calls
$\textsf{ECDSA}.\textsf{Rec}(\sigma_{\textsf{redeem}}^{B},\hat{\sigma}_{\textsf{redeem}}^{S_{a}^{btc},B})$
to obtain $s_{a}$. As the sole owner of both $s_{a}$ and $s_{b}$, Bob is the
only one capable of moving $\textsf{amt}_{\textsf{xmr}}$ to a different
address. He does so at his own convenience, so that he can safely forget
$s_{a}+s_{b}$.

Figure 4: Happy path on-chain protocol. (Sequence diagram: Bob publishes
$tx_{\textsf{lock}}^{\textsf{btc}}$; Alice publishes
$tx_{\textsf{lock}}^{\textsf{xmr}}$; Bob sends
$\hat{\sigma}_{\textsf{redeem}}^{S_{a}^{btc},B}$; Alice decrypts it and
publishes $tx_{\textsf{redeem}}^{\textsf{btc}}$; Bob recovers $s_{a}$ via
$\textsf{ECDSA}.\textsf{Rec}$ and redeems using $s_{a}+s_{b}$.)
### 3.5 Cross-chain DLEQ proof
The key generation phase depicted in Fig. 2 shows both Alice and Bob producing
a so-called cross-curve DLEQ proof. This construct is used to non-
interactively prove in zero-knowledge that the public key pair
$(S_{a}^{btc},S_{a}^{xmr})$ has a common secret key $s_{a}$, and that the
public key pair $(S_{b}^{btc},S_{b}^{xmr})$ has a common secret key $s_{b}$.
Without these proofs, there would be no guarantee that the adaptor signatures
$\hat{\sigma}_{\textsf{redeem}}^{S_{a}^{btc},B}$ and
$\hat{\sigma}_{\textsf{refund}}^{S_{b}^{btc},A}$ which they later exchange
actually commit to the expected signature-secret pairs. Conventionally, this
kind of proof can only be constructed for points on the same elliptic curve.
The idea of proving discrete logarithm equality across different groups comes
from [12]. The algorithm proposed in [12] is used in the original protocol by
Gugger, but we elect to use something simpler based on the original idea so
that it can be argued secure by the composition of sigma protocols[16]. We
built an experimental implementation of this proof[3] to support our proof-of-
concept implementation of this protocol[2]. We also contributed to a more
general and efficient implementation of the same proof[9] which we intend to
use in the future.
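As an illustration of the underlying building block, the classic single-group DLEQ proof (Chaum–Pedersen, made non-interactive via Fiat–Shamir) can be sketched as follows. This is our own toy sketch, not the cross-curve construction of [12]: the cross-curve variant additionally requires a bitwise decomposition of the secret across two group orders, and real deployments use secp256k1 and ed25519 rather than the deliberately tiny Schnorr group below.

```python
import hashlib
import secrets

# Toy Schnorr group (NOT cryptographically sized): p = 2q + 1 with q prime.
p, q = 1019, 509
g, h = 4, 9  # two generators of the order-q subgroup of Z_p^*

def challenge(*elems):
    """Fiat-Shamir challenge: hash the transcript, reduce mod the group order."""
    data = b"|".join(str(e).encode() for e in elems)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def dleq_prove(x):
    """Prove knowledge of x such that A = g^x and B = h^x (Chaum-Pedersen)."""
    A, B = pow(g, x, p), pow(h, x, p)
    k = secrets.randbelow(q)                  # commitment nonce
    t1, t2 = pow(g, k, p), pow(h, k, p)
    c = challenge(g, h, A, B, t1, t2)
    s = (k + c * x) % q
    return (A, B), (c, s)

def dleq_verify(A, B, c, s):
    # Recompute the commitments: t1 = g^s * A^{-c}, t2 = h^s * B^{-c}.
    t1 = pow(g, s, p) * pow(A, -c, p) % p
    t2 = pow(h, s, p) * pow(B, -c, p) % p
    return c == challenge(g, h, A, B, t1, t2)
```

A full cross-curve proof composes statements like this with a range/bit-decomposition argument so that the same scalar is attested on both curves.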
## 4 XMR to BTC atomic swaps
In the section above we described an atomic swap protocol between Bitcoin and
Monero. That protocol is appropriate for a use case in which the Service
Provider (SP) is in the role of Alice, i.e. she offers buying XMR for BTC to
her customers. Following the protocol as defined in Section 3, offering that
kind of trade allows the SP to lock up $\textsf{amt}_{\textsf{xmr}}$ (by
publishing $tx_{\textsf{lock}}^{\textsf{xmr}}$) only after the other party has
locked up $\textsf{amt}_{\textsf{btc}}$ (by publishing
$tx_{\textsf{lock}}^{\textsf{btc}}$). This is safe for the SP as she knows
that she will either be able to redeem (publish
$tx_{\textsf{redeem}}^{\textsf{btc}}$) or refund. However, using that protocol
to swap in the opposite direction is not feasible, i.e. an SP should not offer
buying BTC for XMR. The problem is that an SP (in the role of Bob) could be
easily attacked: the taker (in the role of Alice) could agree on a trade with
the SP, make him lock up funds on Bitcoin and then bail out at no cost. The SP
could always refund his locked up BTC after some time, but he would have to
pay for transaction fees to do so. The taker’s ability to make the SP incur
transaction fees without penalty would expose the SP to running out of funds
over time, which is why we refer to this as a draining attack. We need a
different protocol to allow an SP to offer BTC/XMR buy trades, since the
original makes it a hard requirement for the party holding BTC to move first.
In the following sections we propose a new protocol which instead requires the
party holding XMR to move first. This depends on the development of adaptor
signatures based on Monero’s ring signature scheme, which is a work-in-
progress and whose details are left out of the scope of this work.
### 4.1 Protocol definition
Figure 5: Monero transaction schema.
Figure 6: Bitcoin transaction schema.
Fig. 5 and Fig. 6 show the transaction
schema for Monero and Bitcoin respectively. These diagrams are used to
illustrate the relationships between different transactions on the same
blockchain. Transactions are represented as rectangles with rounded corners.
Transaction outputs are depicted as boxes inside transactions. The value of
the output is written inside the output box and their spending conditions are
written above and below arrows coming out of the output. For example, $x_{A}$
means the output holds $x$ coins owned by party $A$ and $(x_{A}\land x_{B})$
means the output of amount $x$ is controlled by party $A$ and $B$. With regard
to the spending conditions, we define the following convention: the public
keys of all required signatures are listed below the arrow; other conditions,
such as timelocks, appear above the arrow.
### 4.2 Creating Monero transactions
The transaction schema for Monero can be found in Figure 5. Below we describe
the naive 5-step protocol to create all transactions. An optimized
implementation could reduce the number of steps by combining messages, but we
refrain from doing so in the interest of clarity. Step 1: To construct the
locking transaction $\mathtt{XMR_{l}}{}$ the parties need to exchange some
keys: Alice shares with Bob a public spend key $S_{A}$ and her private view
key $v_{A}$, as well as her funding source $tid_{A}$. Bob shares with Alice
his public spend key $S_{B}$ and his private view key $v_{B}$. They can now
create $\mathtt{XMR_{l}}{}$ locally with input $tid_{A}$ and an output with
public spend key $S_{A}+S_{B}$, private view key $v_{A}+v_{B}$. Notably, Alice
does not sign the transaction yet. Both parties now have a local copy of an
unsigned $\mathtt{XMR_{l}}{}$ which requires one signature from each party to
spend its output. Step 2: Both parties create the refund transaction
$\mathtt{XMR_{c}}{}$ which spends from $\mathtt{XMR_{l}}{}$ and returns the
funds back to Alice. Notably, they do not create the redeem transaction
$\mathtt{XMR_{r}}{}$ in the same way, because they do not have to exchange any
signatures on it. The key idea is that Bob will learn $s_{A}$ later on if
Alice publishes $\mathtt{BTC_{r}}{}$, allowing him to construct, sign and
publish the redeem transaction $\mathtt{XMR_{r}}{}$ by himself. Step 3: Like
in Section 3.3.4 for the old protocol, adaptor signatures are used but this
time on Bitcoin and Monero. Alice generates a keypair $(r_{A},R_{A})$ and
constructs a DLEQ proof for it for the same reasons presented in Section 3.5.
She sends $R_{A}$ to Bob, which he uses as the encryption key to generate an
adaptor signature on the refund transaction $\mathtt{XMR_{c}}{}$. Bob sends
this adaptor signature to Alice. If she were to ever publish
$\mathtt{XMR_{c}}{}$ she would need to use this adaptor signature, leaking
$r_{A}$ to Bob. This would allow him to execute an emergency refund on Bitcoin
if Alice were misbehaving by attempting to take both the bitcoin and the
monero. Steps 4+5: Alice could now sign the locking transaction
$\mathtt{XMR_{l}}{}$ and publish it on the Monero blockchain with the
assurance that she could get her funds back at any point by publishing
$\mathtt{XMR_{c}}{}$. But these steps are not carried out until the two
parties have collaborated on creating the Bitcoin transactions.
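The $\mathsf{EncSig}$/$\mathsf{DecSig}$/$\mathsf{Rec}$ interface used in step 3 can be sketched with Schnorr signatures. This is a toy illustration in the spirit of [8] only: the on-chain protocol actually relies on ECDSA adaptor signatures on Bitcoin and a ring-signature variant on Monero, and the group parameters below are deliberately tiny.

```python
import hashlib
import secrets

# Toy Schnorr group (NOT cryptographically sized): p = 2q + 1, q prime.
p, q, g = 1019, 509, 4

def H(*elems):
    data = b"|".join(str(e).encode() for e in elems)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def enc_sig(x, Y, msg):
    """Adaptor (pre-)signature on msg under secret key x, encrypted to Y = g^y."""
    X = pow(g, x, p)
    k = secrets.randbelow(q)
    R = pow(g, k, p)
    c = H(R * Y % p, X, msg)          # challenge binds the *shifted* nonce R*Y
    return R, (k + c * x) % q         # pre-signature (R, s_hat)

def dec_sig(y, R, s_hat):
    """Decrypt the pre-signature into a valid signature (R*Y, s)."""
    return R * pow(g, y, p) % p, (s_hat + y) % q

def verify(X, msg, R_full, s):
    """Standard Schnorr verification of the decrypted signature."""
    c = H(R_full, X, msg)
    return pow(g, s, p) == R_full * pow(X, c, p) % p

def rec(s, s_hat):
    """Recover the encryption secret y from a published signature."""
    return (s - s_hat) % q
```

Publishing the decrypted signature $s$ therefore necessarily leaks $y$ to anyone holding the pre-signature $\hat{s}$, which is exactly the mechanism that makes the swap atomic.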
### 4.3 Creating transactions for Bitcoin
The transaction schema for Bitcoin can be found in Figure 6. Step 1: To
prepare the locking transaction $\mathtt{BTC_{l}}{}$, Alice shares a public
key $pk_{A}$ with Bob. Bob shares his funding source $tid_{B}$ with Alice as
well as a public key $pk_{B}$. Both parties can now create
$\mathtt{BTC_{l}}{}$ which spends from $tid_{B}$ into a multisignature output
requiring two signatures: one for $pk_{A}$ and another one for $pk_{B}$. Step
2: Knowing $\mathtt{BTC_{l}}{}$, both parties can construct
$\mathtt{BTC_{c}}{}$, a transaction which returns the bitcoin back to Bob
after time $t_{1}$. They also construct $\mathtt{BTC_{r}}{}$. This transaction
sets the stage for Alice to be able to take the bitcoin. It can be spent in
two ways: (1) Alice can claim the coins after time $t_{2}$ by providing
signatures for $pk_{A}$ and $pk_{B}$, and (2) Bob can still refund if he
learns Alice’s refund secret $r_{A}$ and uses it with his own public key
$pk_{B}$. Bob would learn $r_{A}$ if Alice publishes $\mathtt{BTC_{c}}{}$,
using the adaptor signature generated in step 3 of the Monero transaction
creation protocol above. Step 3: Having constructed $\mathtt{BTC_{r}}{}$,
both parties can create $\mathtt{BTC_{t}}{}$, which spends from it and can be
published after time $t_{2}$ giving the funds to Alice. Step 4: For safety
purposes, transactions are signed in reverse order of publication. To that
end, Alice and Bob collaboratively sign $\mathtt{BTC_{t}}{}$. Only Bob sends
his signature to Alice because she is the one that would care to publish this
transaction, since it benefits her. There is no need to create
$\mathtt{BTC_{e}}$ which would require a signature from $R_{A}$ and $pk_{B}$.
Bob will be able to create and sign this transaction by himself if the
situation allows. Step 5: Alice and Bob sign $\mathtt{BTC_{c}}$
collaboratively. Only Alice shares her signature with Bob because he is the
only one interested in ever being able to take his bitcoin back. Bob also
generates an adaptor signature on $\mathtt{BTC_{r}}$ for his public key
$pk_{B}$ encrypted under $S_{A}$ and sends it to Alice. This adaptor signature
ensures the atomicity of the swap: if Alice publishes $\mathtt{BTC_{r}}$ she
will need to decrypt and use the adaptor signature, leaking $s_{A}$, which Bob
would use to take the monero. Steps 6+7: Bob is now ready to sign and publish
$\mathtt{BTC_{l}}$. He still must wait for Alice to lock her monero first by
publishing $\mathtt{XMR_{l}}$, finishing steps 4 and 5 of Section 4.2. Once
Alice has committed her funds to the Monero blockchain, Bob is safe to do the
same on Bitcoin.
### 4.4 Protocol execution
The content of this section is still work-in-progress. Hence we do not delve
deeper into the cryptography which is needed to create adaptor signatures on
Monero. Instead, we continue describing things on a high level.
#### 4.4.1 Scenario
The motivation behind this protocol is to allow the party holding XMR and
wanting BTC to move first. In this scenario, Alice holds XMR and wants to
receive BTC. Conversely, Bob holds BTC and wants to receive XMR. After
successfully building and signing transactions following the steps outlined in
Section 4.2 and Section 4.3, Alice and Bob are ready to go on-chain.
#### 4.4.2 Happy path
Alice publishes her locking transaction $\mathtt{XMR_{l}}$ knowing that she
can always receive her funds back by cancelling the swap and publishing
$\mathtt{XMR_{c}}$. Once Bob is happy with the amount of confirmations on
$\mathtt{XMR_{l}}$ he follows suit and publishes the locking transaction on
Bitcoin $\mathtt{BTC_{l}}$. Given sufficient confirmations on
$\mathtt{BTC_{l}}$ and enough time until $t_{1}$, Alice publishes the redeem
transaction $\mathtt{BTC_{r}}$. In doing so, she leaks $s_{A}$ to Bob. Alice
cannot immediately claim the bitcoin for herself but has to wait until time
$t_{2}$. In the meantime, Bob has until time $t_{2}$ to safely take the monero
by using $s_{A}$ to create and sign $\mathtt{XMR_{r}}$, and publishing it on
the blockchain. Once time $t_{2}$ is reached, Alice can finally take the
bitcoin by publishing $\mathtt{BTC_{t}}$, completing the atomic swap.
#### 4.4.3 One party is unresponsive
At any point in time during the execution phase either party could become
inactive. In order to prevent money loss, both parties have mechanisms at
their disposal to refund. For instance, Alice could publish her locking
transaction $\mathtt{XMR_{l}}$ and then see that Bob never moves forward with
the publication of his locking transaction $\mathtt{BTC_{l}}$. As depicted in
Fig. 5, $\mathtt{XMR_{c}}$ requires signatures on $S_{A}$ and $S_{B}$. Alice
can use her own secret key $s_{A}$ to produce one of the signatures, and
decrypt Bob’s adaptor signature using $r_{A}$ to produce the other. She would
then publish $\mathtt{XMR_{c}}$, taking back her monero. Similarly, if Bob
does publish $\mathtt{BTC_{l}}$, but Alice fails to continue by publishing
$\mathtt{BTC_{r}}$ before time $t_{1}$, Bob can then take his bitcoin by
publishing $\mathtt{BTC_{c}}$, since he either has or can produce the
signatures needed for it to be valid.
#### 4.4.4 Alice tries to cheat
There exists an edge case in which Alice can attempt to take both assets. This
is possible after both parties have published their respective locking
transactions $\mathtt{XMR_{l}}$ and $\mathtt{BTC_{l}}$. Alice may attempt to
redeem the bitcoin by publishing $\mathtt{BTC_{r}}$ and refund the monero by
publishing $\mathtt{XMR_{c}}$. Fortunately, the publication of
$\mathtt{XMR_{c}}$ would leak $s_{A}$ to Bob which would allow him to create,
sign and publish $\mathtt{BTC_{e}}$ to execute an emergency refund, at least
until time $t_{2}$. The result would be equivalent to having executed a normal
refund. Bob therefore remains protected, but this possibility imposes a strong
requirement for him to stay online at all times.
## 5 Conclusion
Atomic swaps constitute the main mechanism to bridge the gap between unrelated
blockchains without violating the core principles of censorship resistance,
permissionlessness and pseudonymity originally championed by Bitcoin. Up until
recently, their application was believed to be exclusive to blockchains with
very particular characteristics. Advances in cryptography have lowered the
barrier to entry, allowing for new protocols to be devised in order to connect
blockchains that were originally thought to be incompatible. One such example
is the Bitcoin–Monero Cross-chain Atomic Swap by Gugger [10], which has
inspired the development of applications such as [7] and [2]. In this work, we
give a high-level sketch of a new protocol which expands on the ideas of the
original to serve a new use case. In particular, by applying adaptor
signatures to the Monero signature scheme, we make possible atomic swaps in
which the party holding BTC is no longer the one vulnerable to draining
attacks. A real-world service provider could therefore leverage both protocols
to put up buy and sell BTC/XMR offers as a market maker. This proposal hinges
on the viability of using adaptor signatures on Monero, a topic which we do
not discuss here, but one which is being researched at the time of writing.
## References
* [1] COMIT: Comit. https://github.com/comit-network/comit-rs (2018), accessed: 2021-01-13
* [2] COMIT: Bitcoin-monero cross-chain atomic swap. https://github.com/comit-network/xmr-btc-swap/tree/91fe18a79657e7d8ee100c931a2b2fcce0f1cd0f (2020), accessed: 2021-01-27
* [3] COMIT: Cross-curve dleq. https://github.com/comit-network/cross-curve-dleq/tree/eddcdea1d1f16fa33ef581d1744014ece535c920 (2020), accessed: 2021-01-27
* [4] COMIT: Grin-bitcoin atomic swap. https://github.com/comit-network/grin-btc-poc/tree/38cf690fa65d115db7354bee1905cb2c694308fc (2020), accessed: 2021-01-27
* [5] CryptoSec: Crypto exchange hacks. https://cryptosec.info/exchange-hacks/ (2021), accessed: 2021-01-13
* [6] ExchangeUnion: Opendex. https://opendex.network/ (2020), accessed: 2021-01-13
* [7] Farcaster: Farcaster project. https://github.com/farcaster-project/RFCs (2020), accessed: 2021-01-27
* [8] Fournier, L.: One-time verifiably encrypted signatures a.k.a adaptor signatures. https://github.com/LLFourn/one-time-VES/blob/master/main.pdf (2019)
* [9] Fournier, L.: Sigmafun! https://github.com/LLFourn/secp256kfun/tree/fa25e7ee0b7bcc5d6d12550b8def9ab798dbadca (2020), accessed: 2021-01-27
* [10] Gugger, J.: Bitcoin–monero cross-chain atomic swap. https://eprint.iacr.org/2020/1126.pdf (2020)
* [11] Nakamoto, S.: Bitcoin: A peer-to-peer electronic cash system. https://bitcoin.org/bitcoin.pdf (2008), accessed: 2021-01-13
* [12] Noether, S.: Discrete logarithm equality across groups. https://web.getmonero.org/es/resources/research-lab/pubs/MRL-0010.pdf (2018)
* [13] Optech, B.: Child-pays-for-parent (cpfp). https://bitcoinops.org/en/topics/cpfp, accessed: 2021-01-27
* [14] Optech, B.: Point time locked contracts (ptlcs). https://bitcoinops.org/en/topics/ptlc, accessed: 2021-01-27
* [15] Poon, J., Dryja, T.: The bitcoin lightning network: Scalable off-chain instant payments (2016)
* [16] Schoenmakers, B.: Lecture notes cryptographic protocols. https://www.win.tue.nl/~berry/CryptographicProtocols/LectureNotes.pdf (2020)
* [17] TierNolan: Atomic swaps - bitcointalk forum. https://bitcointalk.org/index.php?topic=193281.msg2224949#msg2224949 (2013), accessed: 2021-01-13
# Performance and Application of Estimators for the Value of an Optimal
Dynamic Treatment Rule
Lina Montoya Email address for correspondence<EMAIL_ADDRESS>University of
North Carolina, Chapel Hill, Department of Biostatistics Jennifer Skeem
University of California, Berkeley, Departments of Social Welfare and Public
Policy Mark van der Laan University of California, Berkeley, Division of
Biostatistics Maya Petersen University of California, Berkeley, Division of
Biostatistics
(January 2021)
###### Abstract
Given an (optimal) dynamic treatment rule, it may be of interest to evaluate
that rule – that is, to ask the causal question: what is the expected outcome
had every subject received treatment according to that rule? In this paper, we
study the performance of estimators that approximate the true value of: 1) an
a priori known dynamic treatment rule; 2) the true, unknown optimal dynamic
treatment rule (ODTR); 3) an estimated ODTR, a so-called “data-adaptive
parameter,” whose true value depends on the sample. Using simulations of
point-treatment data, we specifically investigate: 1) the impact of
increasingly data-adaptive estimation of nuisance parameters and/or of the
ODTR on performance; 2) the potential for improved efficiency and bias
reduction through the use of semiparametric efficient estimators; and, 3) the
importance of sample splitting based on CV-TMLE for accurate inference. In the
simulations considered, there was very little cost and many benefits to using
the cross-validated targeted maximum likelihood estimator (CV-TMLE) to
estimate the value of the true and estimated ODTR; importantly, and in
contrast to non cross-validated estimators, the performance of CV-TMLE was
maintained even when highly data-adaptive algorithms were used to estimate
both nuisance parameters and the ODTR. In addition, we apply these estimators
for the value of the rule to the “Interventions” Study, an ongoing randomized
controlled trial, to identify whether assigning cognitive behavioral therapy
(CBT) to criminal justice-involved adults with mental illness using an ODTR
significantly reduces the probability of recidivism, compared to assigning CBT
in a non-individualized way.
## 1 Introduction
There is an interest across disciplines in using both experiments and
observational data to uncover treatment effect heterogeneity and understand
better ways of responding to it [1, 2]. Various methods aimed at estimating
heterogenous treatment effects (HTEs) wish to answer the question, “who
benefits from which treatment?” One way to uncover HTEs is by using the
dynamic treatment rule framework. A dynamic treatment rule is any rule that
assigns treatment based on covariates [3, 4, 5, 6, 7]. An optimal dynamic
treatment rule (ODTR) is the dynamic treatment rule that yields the highest
expected outcome (if higher outcomes are better) [8, 9, 10]. Using data
generated from an experiment in which treatment is randomized makes
identification of the ODTR more straightforward due to elimination of
confounding. In recent years, there has been an increase in the literature
describing methods to estimate the ODTR, from regression-based techniques to
direct-search techniques; see, for example, [11], [12], and [13] for recent
overviews of the ODTR literature. One example of a data-adaptive method for
estimating the ODTR is the SuperLearner algorithm, an ensemble machine
learning approach that aims to best combine a library of candidate treatment
rule estimators to work in tandem to yield the ODTR [14, 15, 16, 17].
Once one knows or estimates a rule, it may be of interest to _evaluate_ it,
which translates to asking the causal question: what is the expected outcome
had every person received the treatment assigned to him or her by the
(optimal) rule? The causal parameter that answers this question is sometimes
referred to as the _value_ of the rule. It may be of relevance to learn this
quantity in order to determine the benefit of assigning treatment in a more
complex way compared to, for example, simply giving everyone treatment (an
intervention that is straightforward to implement without the cost or
complexity of measuring covariates and personalizing treatment assignment).
In this paper, we examine the following causal parameters, which we identify
as statistical estimands, corresponding to the value of an (optimal) rule: 1)
the true expected outcome of a given a priori known dynamic treatment rule; 2)
the true expected outcome under the true, unknown ODTR; and 3) the true
expected outcome under the _estimated_ ODTR, a so-called “data-adaptive
parameter.” The latter parameter can be further split into the true expected
outcome under a) an ODTR estimated on the entire data at hand, or b) a sample-
split specific ODTR, in which, under a cross-validation scheme, the ODTR is
estimated on each training set and evaluated, under the true data-generating
distribution, on the complementary validation set, with the data-adaptive
parameter defined as an average across sample splits.
We discuss several estimators for these estimands. Specifically, we consider
the following estimators suited for estimating a treatment-specific mean: the
simple substitution estimator of the G-computation formula [18], the inverse
probability of treatment weighted (IPTW) estimator [19, 20], the double-robust
IPTW estimator (IPTW-DR) [21, 22, 23], the targeted maximum likelihood
estimator (TMLE) [3, 24, 25], and the cross-validated TMLE (CV-TMLE) [26, 27,
28].
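To make the simplest of these estimators concrete, the following is an illustrative sketch (ours, not the authors' implementation) of the substitution and IPTW estimators of the value of an a priori known rule in a simulated point-treatment RCT with known $g_{0}(A|W)=0.5$. The data-generating process and variable names are invented for illustration; under it, the true value of the rule $d(W)=\mathbb{I}(W\leq 0)$ is $0.25$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Simulated point-treatment RCT: W ~ N(0,1), A ~ Bern(0.5).
W = rng.normal(size=n)
A = rng.binomial(1, 0.5, size=n)
Y = 0.3 * W + A * (0.5 - 0.4 * (W > 0)) + rng.normal(scale=0.5, size=n)

d = (W <= 0).astype(int)  # an a priori known rule: treat only if W <= 0

# Substitution (G-computation): fit Qbar(A, W) by least squares,
# then average its predictions under A = d(W).
X = np.column_stack([np.ones(n), A, W, A * W, A * (W > 0)])
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
Xd = np.column_stack([np.ones(n), d, W, d * W, d * (W > 0)])
psi_gcomp = (Xd @ beta).mean()

# IPTW: reweight subjects whose observed A matches d(W) by 1 / g0(A|W) = 2.
psi_iptw = np.mean((A == d) / 0.5 * Y)

# True value of this rule: E[0.3 W] + 0.5 * P(W <= 0) = 0.25.
```

Both estimates should land near $0.25$ up to Monte Carlo error; in an RCT the IPTW estimator is consistent without any outcome-model assumptions, while the substitution estimator here relies on the (correctly specified) regression.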
First, we review the conditions under which asymptotic linearity is achieved
for these estimators in the scenario where one wants to evaluate an a priori
known rule. This provides insight into the common scenario in which one wishes
to evaluate the value of a dynamic treatment rule that is pre-specified (based
on investigator knowledge or external data sources), rather than learned from
the data at hand. Estimators for this parameter require fast enough
convergence rates and smoothness assumptions on nuisance parameters, though
smoothness assumptions can be relaxed when employing CV-TMLE.
Second, we examine the more ambitious goal of estimating the expected outcome
under the true, unknown ODTR, which additionally requires fast enough
convergence of the estimate of the ODTR to the true ODTR, and for non cross-
validated estimators, smoothness assumptions on ODTR estimators. We refer the
reader to [14] and [16] for a discussion of considerations and best practices
when implementing the ODTR SuperLearner. Obtaining inference for the mean
outcome under the ODTR has been shown to be difficult due to its lack of
smoothness [6, 9, 29]; however, several methods have been proposed for
constructing valid confidence intervals for this parameter, such as re-
sampling techniques [30, 31, 6]. One approach to inference is to rely on
parametric models; however, misspecification of these models can bias results.
CV-TMLE relaxes the smoothness assumptions needed for inference, allowing one
to use a single data set to safely estimate relevant parts of the data
distribution (e.g., estimate nuisance parameters and/or the ODTR) and retain
valid inference for the target parameter itself (e.g., the mean outcome under
the ODTR) [32, 33]. Such internal sample splitting is particularly important
if the nuisance parameters or ODTR depend on a high dimensional covariate set
or make use of data-adaptive methods [27].
Finally, it may instead be of interest to estimate the true outcome under an
estimated ODTR (a data-adaptive parameter) because, in practice, it is the
estimated rule that will likely be employed in the population, not the true
rule, which is likely unknown [27]. In this case, the only rate condition
needed on the estimate of the ODTR is that it converges to a fixed rule. Non-
cross-validated estimators of this data-adaptive parameter additionally
require smoothness assumptions on the estimate of the ODTR for asymptotic
linearity; the CV-TMLE eliminates these requirements, which means that, in a
randomized experiment, achievement of asymptotic linearity for CV-TMLE with
respect to this data-adaptive parameter only requires that the estimated ODTR
converges to a fixed rule [27].
Previous simulation experiments have studied the performance of different
estimators for the aforementioned statistical estimands in the setting in
which a binary treatment is randomly assigned at a single time point. [27]
demonstrated the importance of using an estimator of the value of the rule
that uses a targeted bias reduction, such as TMLE and CV-TMLE, in order to
improve performance. Of note, when evaluating the estimated rule, the authors
used the true treatment mechanism and, as an initial estimate of the outcome
regression, either the true outcome regression or a constant value (i.e., an
incorrectly specified outcome regression) when employing the (CV-)TMLE. [17]
extended these results by “fully” estimating the value of the optimal rule,
meaning the nuisance parameters were additionally estimated for both the
optimal rule and the value of the rule, using the ensemble machine learning
approach SuperLearner [15]. Both [27] and [17] found that, indeed, there
exists a positive finite sample bias when using TMLE versus CV-TMLE when
estimating the value of the ODTR; in other words, with the rule learned and
evaluated on the same data, estimates of the value of the rule may be
optimistic, and CV-TMLE corrects this bias. More recently, [34]
showed that cross-validation techniques for estimating the value of the rule,
and in particular CV-TMLE, yielded a smaller difference between the true
expected value under the true rule and its estimate, versus, for example,
bootstrap techniques for evaluating a rule.
The current paper builds on previous work by illustrating, through a
simulation study, how the degree of overfitting when estimating the optimal
rule and/or nuisance parameters affects the performance of the estimators used
for evaluating a rule. We also explore the potential for efficiency
improvement and bias reduction through the use of semiparametric efficient
estimators, with and without targeting. Finally, we show the importance of
sample splitting using CV-TMLE when estimating the aforementioned statistical
parameters.
Additionally, we apply these estimators of the value of the rule to the
Correctional Intervention for People with Mental Illness, or “Interventions,”
trial, an ongoing study in which criminal justice-involved adults with mental
illness – a heterogeneous group with diverse symptoms, risk factors, and other
treatment-relevant characteristics [35, 36] – are either randomized to
cognitive behavioral therapy (CBT) or treatment as usual (TAU), and re-arrest
is collected one year after randomization occurs, as a measure of recidivism.
In a companion paper, we estimated the ODTR using the ODTR SuperLearner
algorithm [14] to identify which patients should receive CBT versus TAU. In
this paper, we use CV-TMLE to determine whether administering CBT using the
estimated ODTR is more effective in reducing recidivism than assigning CBT in
a non-individualized way (for example, giving CBT to all offenders).
This article steps through the causal roadmap for answering causal questions
[37], and is thus organized as follows. In the first section, we define the
data and causal model, define the causal parameters as functions of the
counterfactual distribution, and identify the causal estimands as functions of
the observed data distribution. In section 2 we discuss estimation, and in
section 3 we discuss inference procedures and conditions for asymptotic
linearity. In section 4 we present a simulation study illustrating the
performance of these estimators. In section 5 we evaluate the ODTR
SuperLearner algorithm that was applied to the “Interventions” Study. Finally,
we close with a discussion and future directions.
## 2 Causal Roadmap
In this section, we follow the first steps of the roadmap [37] for answering
the causal questions: what would have been the expected outcome had everyone
been given treatment according to: 1) any given rule; 2) the true ODTR; and 3)
an estimate of the ODTR, which could either be a) a sample-specific estimate
of the ODTR (i.e., an ODTR estimated on the entire data), or b) a sample-
split-specific estimate of the ODTR?
### 2.1 Data and Models
Structural causal models (SCM) will be used to describe the process that gives
rise to variables that are observed (endogenous) and not observed (exogenous).
The random variables in the SCM (denoted $\mathcal{M}^{F}$) follow the joint
distribution $P_{U,X}$. The endogenous variables are the covariates
$W\in\mathcal{W}$, binary treatment $A\in\mathcal{A}=\\{0,1\\}$, and outcome
$Y\in\mathbb{R}$. Exogenous variables are denoted $U=(U_{W},U_{A},U_{Y})$. The
following structural equations illustrate dependency between the variables:
$W=f_{W}(U_{W}),\qquad A=f_{A}(U_{A},W),\qquad Y=f_{Y}(U_{Y},A,W).$
Because we will be focusing on data where treatment is randomly assigned (as
in the “Interventions” trial), the above model can be modified by letting
$U_{A}\sim Bernoulli(p=0.5)$ and $A=U_{A}$.
We assume the observed data $O_{i}\equiv(W_{i},A_{i},Y_{i})\sim
P_{0}\in\mathcal{M}$, $i=1,\ldots,n$, where $P_{0}$ is the observed data
distribution and $\mathcal{M}$ is the statistical model, were generated by
sampling $n$ i.i.d. times from a data-generating system contained in the SCM
$\mathcal{M}^{F}$ above.
The true distribution of $O$ can be factorized as
$P_{0}(O)=P_{W,0}(W)g_{0}(A|W)P_{Y,0}(Y|A,W),$ where $P_{W,0}$ is the true
distribution of $W$, $g_{0}(A|W)$ is the true conditional distribution of the
treatment $A$ given $W$ (otherwise known as the treatment mechanism), and
$P_{Y,0}$ is the true conditional distribution of $Y$ given $A$ and $W$.
The empirical distribution $P_{n}$ gives each observation weight
$\frac{1}{n}$; $P_{n}\in\mathcal{M}_{NP}$, where $\mathcal{M}_{NP}$ is a non-
parametric statistical model. Estimates from this empirical distribution are
denoted with a subscript $n$. If $V$-fold cross-validation is employed, the
empirical data are uniformly and at random split into $V$ mutually exclusive
sets. For each $v\in\\{1,...,V\\}$, the $v$th set serves as a validation
set; its complement is the corresponding training set. Let $P_{n,v}$ be the empirical
distribution of the validation sample $v$, and $P_{n,-v}$ be the empirical
distribution of the complementary training set.
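The splitting scheme above can be sketched as follows (an illustrative helper; the function name is ours). Each pair corresponds to the index sets underlying $P_{n,-v}$ and $P_{n,v}$.

```python
import numpy as np

def v_fold_splits(n, V, rng):
    """Randomly partition indices 0..n-1 into V mutually exclusive validation
    sets; return (training indices, validation indices) pairs for v = 1..V."""
    fold = rng.permutation(n) % V
    return [(np.flatnonzero(fold != v), np.flatnonzero(fold == v))
            for v in range(V)]
```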
#### 2.1.1 Data and Models - Application to “Interventions” Study
The “Interventions” Study is an RCT consisting of 441 i.i.d. observations of
the following data generated by the causal model described above: covariates
$W$, which includes intervention site, sex, ethnicity, age, Colorado Symptom
Index (CSI) score (a measure of psychiatric symptoms), level of substance use,
Level of Service Inventory (LSI) score (a measure of risk for future re-
offending), number of prior adult convictions, most serious offense, Treatment
Motivation Questionnaire (TMQ) score (a measure of internal motivation for
undergoing treatment), and substance use level; the randomized treatment $A$,
which is either a manualized Cognitive Behavioral Intervention for people
criminal justice system (abbreviated CBT; $A=1$) or treatment as usual (TAU),
which is mostly psychiatric or correctional services ($A=0$); and a binary
outcome $Y$ of recidivism, an indicator that the person was not re-arrested
over a minimum period of one year. Table 2 shows the distribution of the data.
### 2.2 Causal Estimands
In this point treatment setting, a dynamic treatment rule in the set of rules
$\mathcal{D}$ is a function $d$ that takes as input some function $V$ of the
measured baseline covariates $W$ and outputs a treatment decision:
$V\rightarrow d(V)\in\\{0,1\\}$. It could be the case that $V=W$; in other
words, the dynamic treatment rule may potentially respond to all measured
baseline covariates.
Counterfactual outcomes under a treatment rule $d$ – or a subject’s outcome
if, possibly contrary to fact, the subject received the treatment that would
have been assigned by the treatment rule $d$ – are derived by intervening on
the above SCM. Specifically, in parallel with our causal questions above,
counterfactual outcomes are generated by setting $A$ equal to the following
treatment rules, all in the set $\mathcal{D}$: 1) the true ODTR $d_{0}^{*}$;
and, 2) an estimate of the ODTR, either: a) the sample-specific estimate of
the ODTR $d^{*}_{n}$; or b) the training sample-specific estimate of the ODTR
$d^{*}_{n,v}$.
The expectation of each of these counterfactual outcomes under the
distribution $P_{U,X}$ are the causal parameters of interest in this paper.
Each causal estimand is a mapping $\mathcal{M}^{F}\rightarrow\mathbb{R}$.
The target causal parameter corresponding to the value of a given treatment
rule $d$ (from the set of rules $\mathcal{D}$) is:
$\Psi^{F}_{d}(P_{U,X})\equiv\mathbb{E}_{P_{U,X}}[Y_{d}].$
The true ODTR $d_{0}^{*}$ is defined as the rule that maximizes the expected
counterfactual outcome:
$d_{0}^{*}\in\operatorname*{arg\,max}_{d\in\mathcal{D}}\Psi^{F}_{d}(P_{U,X}).$
Here, the target causal parameter of interest is the expected outcome under
the true ODTR $d_{0}^{*}$:
$\Psi^{F}_{d_{0}^{*}}(P_{U,X})\equiv\mathbb{E}_{P_{U,X}}[Y_{d_{0}^{*}}].$
Let $d^{*}_{n}:\mathcal{M}_{NP}\rightarrow\mathcal{D}$ be an ODTR estimated on
the entire sample, and
$d^{*}_{n,v}=d^{*}(P_{n,-v}):\mathcal{M}_{NP}\rightarrow\mathcal{D}$ be an
ODTR estimated on the $v^{th}$ training set. The data-adaptive causal
parameters are: a) the expected outcome under a sample-specific estimate of
the ODTR:
$\Psi^{F}_{d^{*}_{n}}(P_{U,X})\equiv\mathbb{E}_{P_{U,X}}[Y_{d^{*}_{n}}],$
noting that the expectation here is not over $d_{n}^{*}$, i.e., this is
$\mathbb{E}_{P_{U,X}}[Y_{d}]$, with $d=d_{n}^{*}$, and b) the average of the
expected validation set outcomes under training-set specific estimates of the
ODTR:
$\Psi^{F}_{d^{*}_{n,CV}}(P_{U,X})\equiv\frac{1}{V}\sum_{v=1}^{V}\mathbb{E}_{P_{U,X}}[Y_{d^{*}_{n,v}}].$
One might also be interested in comparing the above causal quantities to, for
example, the expected outcome had everyone been assigned the treatment
$\mathbb{E}_{P_{U,X}}[Y_{1}]$ or had no one been assigned the treatment
$E_{P_{U,X}}[Y_{0}]$.
#### 2.2.1 Causal Estimands - Application to “Interventions” Study
Analogous to the above causal questions, for the “Interventions” Study, we are
interested in asking: what would have been the probability of no re-arrest had
everyone been given CBT according to: 1) some pre-specified rule $d$ (for
example, the simple dynamic treatment rule that gives CBT to those with high
levels of prior education and TAU to those with low levels of prior
education), where the causal parameter is $\Psi_{d}^{F}(P_{U,X})$; 2) the true
ODTR $d^{*}_{0}$ (the unknown dynamic treatment rule for assigning CBT that
yields the highest probability of no re-arrest), where the causal parameter is
$\Psi_{d^{*}_{0}}^{F}(P_{U,X})$; and 3) an estimate of the ODTR specific to
the 441 participants in the trial, which could either be a) a sample-specific
estimate $d^{*}_{n}$ (e.g., the ODTR estimated in [14]) or b) a sample-split-
specific estimate of the ODTR $d^{*}_{n,CV}$? The causal parameters for a) and
b) are $\Psi_{d^{*}_{n}}^{F}(P_{U,X})$ and $\Psi^{F}_{d^{*}_{n,CV}}(P_{U,X})$,
respectively.
### 2.3 Identification
Two assumptions are necessary for identification; that is, for determining
that the causal estimands (a function of our counterfactual distribution) are
equal to the statistical estimands (a function of our observed data
distribution): 1) the randomization assumption, $Y_{a}\perp A|W$ for all
$a\in\\{0,1\\}$; and 2) the positivity assumption:
$Pr(\min_{a\in\\{0,1\\}}g_{0}(A=a|W)>0)=1.$ Both hold if, for example, data
are generated from an experiment in which treatment is randomized (as in the
“Interventions” trial); for data generated in an observational setting, the
randomization assumption requires measurement of all unmeasured confounders,
and the positivity assumption should be examined [38].
### 2.4 Statistical Estimands
We describe statistical estimands corresponding to each of the causal
parameters outlined above – each is identified via the G-computation formula
which corresponds to a mapping $\mathcal{M}\rightarrow\mathbb{R}$.
The statistical estimand of the mean outcome under any rule $d\in\mathcal{D}$
is
$\psi_{0,d}\equiv\Psi_{d}(P_{0})=\mathbb{E}_{0}[Q_{0}(d(W),W)],$
where $Q(A,W)=\mathbb{E}[Y|A,W]$ denotes the outcome regression and $Q_{0}$
its true value.
The true optimal rule, as a function of the observed data distribution, is
then:
$d^{*}_{0}\in\operatorname*{arg\,max}_{d\in\mathcal{D}}\Psi_{d}(P_{0}).$
Note that the RHS of this equation is a set because there may be more than one
optimal rule for a certain kind of subject (e.g., if certain kinds of subjects
neither benefit from nor are harmed by a treatment) [39, 40, 41]. Here, we
adopt the convention that when there is no treatment effect for a given
covariate level (i.e., the blip defined below is zero), treatment 0 is
assigned. Then, the optimal rule can be written as a function of the so-called
“blip function”, $B_{0}(W)=Q_{0}(1,W)-Q_{0}(0,W)$:
$d^{*}_{0}(W)=\mathbb{I}[B_{0}(W)>0].$
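The blip-based rule above can be sketched in a couple of lines of Python (the paper's implementation is in R); the blip function supplied here is a hypothetical illustration:

```python
# Sketch of d*(w) = I[B(w) > 0], where B(w) = Q(1, w) - Q(0, w) is the blip.
# Under the stated convention, a zero blip (no treatment effect) assigns 0.
def optimal_rule(blip):
    return lambda w: 1 if blip(w) > 0 else 0

# Hypothetical blip for illustration: treatment helps only when w > 0.5.
d_star = optimal_rule(lambda w: w - 0.5)
decisions = [d_star(w) for w in (0.0, 0.5, 1.0)]  # zero blip at w = 0.5 -> 0
```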
The true mean outcome under the true optimal rule $d_{0}^{*}$ is then
identified by
$\psi_{0,d^{*}_{0}}\equiv\Psi_{d^{*}_{0}}(P_{0})=\mathbb{E}_{0}[Q_{0}(d^{*}_{0}(W),W)].$
The first data-adaptive parameter we consider, as a function of the observed
data, is the true expected outcome under the ODTR estimated on the entire
sample $d^{*}_{n}$:
$\psi_{0,d^{*}_{n}}\equiv\Psi_{d^{*}_{n}}(P_{0})=\mathbb{E}_{0}[Q_{0}(d^{*}_{n}(W),W)].$
The second data-adaptive parameter is the average of the validation-set true
mean outcomes under the training-set estimated ODTRs $d^{*}_{n,v}$:
$\psi_{0,d^{*}_{n,CV}}\equiv\Psi_{d^{*}_{n,CV}}(P_{0})=\frac{1}{V}\sum_{v=1}^{V}\mathbb{E}_{0}[Q_{0}(d^{*}_{n,v}(W),W)].$
## 3 Estimation
We describe estimators for each of the statistical parameters above: a simple
substitution estimator based on the G-computation formula, an IPTW estimator,
a double-robust IPTW estimator (IPTW-DR), a TMLE, and a CV-TMLE. Each of these
estimators can be used for estimating $\psi_{0,d}$ and $\psi_{0,d_{0}^{*}}$.
We use the non-cross-validated estimators (G-computation, IPTW, IPTW-DR, and
TMLE) to estimate $\psi_{0,d_{n}^{*}}$; we estimate $\psi_{0,d^{*}_{n,CV}}$
with CV-TMLE.
Estimators of these parameters are mappings
$\hat{\Psi}:\mathcal{M}_{NP}\rightarrow\mathbb{R}$. For all estimators, let
$Q_{n}$ be an estimator of the outcome regression, which could be estimated
with, for example, SuperLearner [15]. In a randomized experiment, the
treatment mechanism $g_{0}$ is known; thus, one could use this known $g_{0}$,
or $g_{n}$ could be a maximum likelihood estimator (MLE) based on a correctly
specified model.
We first illustrate each of the non-cross-validated estimators suited for
estimating a treatment-specific mean at an arbitrary $d\in\mathcal{D}$, which,
for example, could be an a priori known rule or an optimal rule estimated on
the entire sample $d^{*}_{n}$ (see papers from [16] and [14] for a description
on how to estimate the optimal rule using, for example, the ODTR
SuperLearner). Here, $\hat{\Psi}_{d}$ is an estimator of $\psi_{0,d}$; the
estimate is denoted $\hat{\Psi}_{d}(P_{n})\equiv\hat{\psi}$. We further
subscript by each estimator name.
One can use a simple substitution estimator based on the above G-computation
formula [18]:
$\hat{\psi}_{gcomp,d}=\frac{1}{n}\sum_{i=1}^{n}Q_{n}(d(W_{i}),W_{i}),$
an IPTW estimator [19, 20]:
$\hat{\psi}_{IPTW,d}=\frac{1}{n}\sum_{i=1}^{n}\frac{\mathbb{I}[A_{i}=d(W_{i})]}{g_{n}(A_{i}|W_{i})}Y_{i},$
a double-robust IPTW estimator [21, 22, 23]:
$\hat{\psi}_{IPTW-
DR,d}=\frac{1}{n}\sum_{i=1}^{n}\frac{\mathbb{I}[A_{i}=d(W_{i})]}{g_{n}(A_{i}|W_{i})}(Y_{i}-Q_{n}(A_{i},W_{i}))+Q_{n}(d(W_{i}),W_{i}),$
or a TMLE [24, 25, 3, 27]. We briefly describe one possible TMLE procedure.
First, estimate the clever covariate for each person:
$H_{n,i}=\frac{\mathbb{I}[A_{i}=d(W_{i})]}{g_{n}(A_{i}|W_{i})}.$
Then, update the initial fit of $Q_{n}(d(W),W)$ by running a logistic
regression of $Y$ (which should be transformed between $0$ and $1$ if the
outcome is continuous [42]) on offset $Q_{n}(d(W),W)$ with weights $H_{n}$,
predicting at $A=d(W)$. Denote the updated fit as $Q_{n}^{*}(d(W),W)$. Then,
the TMLE estimator is:
$\hat{\psi}_{TMLE,d}=\frac{1}{n}\sum_{i=1}^{n}Q_{n}^{*}(d(W_{i}),W_{i}).$
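The four estimators above can be sketched in pure Python on hypothetical simulated trial data (the paper's implementation is in R; the rule $d$, the outcome regression $Q_{n}$, and the data-generating mechanism below are illustrative assumptions):

```python
# Pure-Python sketches of the G-computation, IPTW, IPTW-DR, and TMLE
# estimators of the value of a rule d, on hypothetical randomized-trial data
# with a correctly specified outcome regression Qn and known g.
import math
import random

def expit(x): return 1.0 / (1.0 + math.exp(-x))
def logit(p): return math.log(p / (1.0 - p))

rng = random.Random(1)
n, g = 4000, 0.5                               # g: known treatment mechanism
W = [rng.gauss(0, 1) for _ in range(n)]
A = [int(rng.random() < g) for _ in range(n)]
Y = [int(rng.random() < expit(-0.2 + 0.8 * A[i] * W[i])) for i in range(n)]

d = lambda w: int(w > 0)                       # an a priori specified rule
Qn = lambda a, w: expit(-0.2 + 0.8 * a * w)    # outcome regression estimate

# G-computation: plug-in mean of Qn(d(W), W)
psi_gcomp = sum(Qn(d(w), w) for w in W) / n

# IPTW
psi_iptw = sum((A[i] == d(W[i])) / g * Y[i] for i in range(n)) / n

# Double-robust IPTW
psi_dr = sum((A[i] == d(W[i])) / g * (Y[i] - Qn(A[i], W[i]))
             + Qn(d(W[i]), W[i]) for i in range(n)) / n

# TMLE: the clever covariate H enters as a weight in an intercept-only
# logistic update with offset logit(Qn(A, W)); solve the weighted score
# equation for epsilon by Newton's method, then plug in at A = d(W).
H = [(A[i] == d(W[i])) / g for i in range(n)]
eps = 0.0
for _ in range(25):
    mu = [expit(logit(Qn(A[i], W[i])) + eps) for i in range(n)]
    score = sum(H[i] * (Y[i] - mu[i]) for i in range(n))
    hess = -sum(H[i] * mu[i] * (1.0 - mu[i]) for i in range(n))
    eps -= score / hess
psi_tmle = sum(expit(logit(Qn(d(w), w)) + eps) for w in W) / n
```

Because $Q_{n}$ is correctly specified here, the fitted fluctuation $\epsilon$ is close to zero and the four estimates should be close to one another.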
As previously mentioned, the CV-TMLE can estimate $\psi_{0,d}$,
$\psi_{0,d_{0}^{*}}$, and $\psi_{0,d^{*}_{n,CV}}$ [32, 28, 27, 26, 33].
Instead of illustrating this estimator at $d$ as in the above estimators, we
illustrate one type of CV-TMLE procedure for evaluating the mean outcome under
sample-split-specific estimates of the ODTR $d^{*}_{n,v}$ to show on which
parts of the data one needs to estimate or predict the ODTR, if estimating
$\psi_{0,d_{0}^{*}}$ or $\psi_{0,d^{*}_{n,CV}}$. The same procedure holds for
a $d$ that is known, except that the rule need not be estimated on each of the
training samples and is simply applied to the validation sets:
1. 1.
Split the data into $V$ folds. Let each fold be the validation set and the
complement data be the training set.
2. 2.
Generate initial estimators of $g_{0}$, $Q_{0}$, and $d_{0}^{*}$ based on the
training sample $P_{n,-v}$.
3. 3.
Predict the training-set specific fits from the previous step on the
validation sample $P_{n,v}$.
4. 4.
Using the predictions from the previous step, in each corresponding validation
set, update the initial estimator $\hat{\Psi}_{d^{*}_{n,v}}(P_{n,-v})$ using
the TMLE procedure described above to generate
$\hat{\Psi}_{d^{*}_{n,v}}(P^{*}_{n,-v})$, a TMLE of
$\mathbb{E}_{0}[Q_{0}(d^{*}_{n,v}(W),W)]$.
5. 5.
Average over all validation folds to obtain the CV-TMLE, i.e., the estimated
mean outcome under the training-sample-split specific estimates of the rules:
$\hat{\psi}_{CV-
TMLE,d^{*}_{n,v}}=\frac{1}{V}\sum_{v=1}^{V}\hat{\Psi}_{d^{*}_{n,v}}(P^{*}_{n,-v}).$
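The five steps above can be sketched as follows in pure Python on hypothetical simulated data (the paper's implementation is in R; the crude stratified-mean estimator of $Q_{0}$ and the sign-of-the-empirical-blip rule estimator below are illustrative stand-ins for SuperLearner):

```python
# Pure-Python sketch of the five-step CV-TMLE procedure above on hypothetical
# randomized-trial data. The training-set estimators are deliberately crude:
# a stratified-mean Qn and a rule given by the sign of a stratum-specific
# empirical blip.
import math
import random

def expit(x): return 1.0 / (1.0 + math.exp(-x))
def logit(p): return math.log(p / (1.0 - p))

rng = random.Random(2)
n, V, g = 3000, 5, 0.5
W = [rng.gauss(0, 1) for _ in range(n)]
A = [int(rng.random() < g) for _ in range(n)]
Y = [int(rng.random() < expit(-0.2 + 0.8 * A[i] * W[i])) for i in range(n)]

def stratum(w): return int(w > 0)

# Step 1: split the data into V folds.
idx = list(range(n)); rng.shuffle(idx)
folds = [idx[v::V] for v in range(V)]

fold_estimates = []
for v in range(V):
    val = folds[v]
    train = [i for u in range(V) if u != v for i in folds[u]]

    # Step 2: training-set estimates of Q0 and the ODTR.
    Qhat = {}
    for s in (0, 1):
        for a in (0, 1):
            cell = [Y[i] for i in train if stratum(W[i]) == s and A[i] == a]
            mean = sum(cell) / max(len(cell), 1)
            Qhat[(a, s)] = min(max(mean, 0.01), 0.99)  # keep logit finite
    d_hat = {s: int(Qhat[(1, s)] - Qhat[(0, s)] > 0) for s in (0, 1)}

    # Steps 3-4: predict on the validation fold and run the TMLE update there.
    eps = 0.0
    for _ in range(25):
        score = hess = 0.0
        for i in val:
            s = stratum(W[i])
            if A[i] == d_hat[s]:                    # clever covariate weight
                mu = expit(logit(Qhat[(A[i], s)]) + eps)
                score += (Y[i] - mu) / g
                hess -= mu * (1.0 - mu) / g
        eps -= score / hess
    psi_v = sum(expit(logit(Qhat[(d_hat[stratum(W[i])], stratum(W[i]))]) + eps)
                for i in val) / len(val)
    fold_estimates.append(psi_v)

# Step 5: average over the validation folds.
psi_cv_tmle = sum(fold_estimates) / V
```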
## 4 Inference
We first discuss the conditions necessary for each of the above estimators to be
asymptotically linear for $\psi_{0,d}$, $\psi_{0,d_{n}^{*}}$, and
$\psi_{0,d^{*}_{n,CV}}$ in a randomized experiment. Under these conditions,
using influence-curve based inference, we describe how to construct 95%
confidence intervals with nominal to conservative coverage for the
aforementioned statistical estimands of interest.
We do not discuss inference on the G-computation estimator, because in order
for it to be asymptotically linear, $Q_{n}$ must either be equal to $Q_{0}$ or
be an estimator that converges fast enough to $Q_{0}$, neither of which we
assume here.
### 4.1 Asymptotic Linearity Conditions for Estimators
We give a brief overview of the conditions needed for asymptotic linearity for
each of the estimators with respect to each statistical estimand in the
randomized trial setting, and provide a summary of these conditions in Table
1. For more details and proofs, we refer the reader to [25] and [27].
An estimator $\hat{\Psi}$ is asymptotically linear for its true value
$\psi_{0}$ if it can be written in the following form:
$\hat{\psi}-\psi_{0}=\frac{1}{n}\sum_{i=1}^{n}IC(O_{i})+R_{n},$
where $IC$ is the estimator’s influence curve and $R_{n}$ is a remainder term
that is $o_{P}(1/\sqrt{n})$. An asymptotically linear estimator $\hat{\Psi}$
thus has the following properties: 1) its bias converges to 0 in sample size
at a rate faster than $\frac{1}{\sqrt{n}}$; 2) for large $n$, its distribution
is normal, $n^{1/2}(\hat{\psi}-\psi_{0})\overset{d}{\to}N(0,\sigma^{2}_{0})$,
allowing an estimate of $\sigma^{2}_{0}$ to be used to construct Wald-type
confidence intervals; and, 3) the asymptotic variance of
$n^{1/2}(\hat{\psi}-\psi_{0})$ (i.e., $\sigma^{2}_{0}$) can be well-
approximated by the sample variance of its estimated influence curve (or
equivalently, $\sigma^{2}_{n}=\frac{1}{n}\sum_{i=1}^{n}IC^{2}_{n}(O_{i})$, since
the mean of an influence curve is 0).
The current randomized experiment scenario guarantees that $g_{0}$ is known;
here, we consider the case where $g_{n}$ is an estimate of $g_{0}$ based on a
correctly specified parametric model. Given this, for an estimand defined as
the value of an a priori specified rule $d$, the IPTW estimator is guaranteed
to be asymptotically linear for $\psi_{0,d}$; however, it will not be
asymptotically efficient. Under a Donsker class assumption on the estimator
$Q_{n}$, IPTW-DR and TMLE are guaranteed to be asymptotically linear for
$\psi_{0,d}$ (because $R_{n}$ involves a second-order term that is the product
of the differences between $Q_{n}$ and $Q_{0}$ and between $g_{n}$ and
$g_{0}$); if $Q_{n}$ is a consistent estimator of $Q_{0}$ with a rate of
convergence faster than $1/\sqrt{n}$, IPTW-DR and TMLE are asymptotically
efficient. This is also true for CV-TMLE, except Donsker class conditions can
be relaxed (in effect allowing for an overfit in the initial estimate of
$Q_{0}$) [27].
Construction of nominal to conservative confidence intervals around each
estimator with respect to the true expected outcome under the true, unknown
$d^{*}_{0}$ requires additional assumptions. For all estimators, statistical
inference for $\psi_{0,d^{*}_{0}}$ relies on a second-order difference in
$R_{n}$ between $\psi_{0,d^{*}_{n}}$ and $\psi_{0,d^{*}_{0}}$ going to 0 at a
rate faster than $1/\sqrt{n}$. In practice, how hard it is to make this
condition hold depends on the extent to which the blip function has density at
zero. If the absolute value of the blip is always larger than some $\delta>0$,
then consistency of $Q_{n}$ is sufficient; however, if the treatment effect is
zero for some covariate levels, then stronger assumptions are required. The
non-cross-validated estimators additionally require Donsker conditions on
$d^{*}_{n}$. In practice, these conditions on the data-adaptivity of
$d^{*}_{n}$ hold if, for example, the optimal rule is a function of one
covariate, or, if a higher-dimensional covariate set is used, one is willing
to make strong smoothness assumptions on, for example, the blip function [27,
16, 43]. CV-TMLE relaxes these Donsker conditions on $d^{*}_{n}$. Thus, in a
randomized trial, if employing CV-TMLE for this estimand, the only condition
needed is that $d^{*}_{n}$ converges fast enough to $d_{0}^{*}$.
For the data-adaptive parameters, the estimators no longer require the strong
assumption that $d^{*}_{n}$ converges to $d^{*}_{0}$ at a certain rate;
rather, they only require that $d^{*}_{n}$ converges to some fixed rule
$d\in\mathcal{D}$ at any rate [27]. This means that, for randomized trial
data, the CV-TMLE estimator for $\psi_{0,d^{*}_{n,CV}}$ is asymptotically
linear under essentially no conditions [27].
| Estimands | Estimators | $g_{n}=g_{0}$ or $g_{n}\overset{p}{\to}g_{0}$ | $Q_{n}=Q_{0}$ or $Q_{n}\overset{p}{\to}Q_{0}$ | $\psi_{0,d^{*}_{n}}-\psi_{0,d^{*}_{0}}=o_{P}(\frac{1}{\sqrt{n}})$ | $Q_{n}$ not overfit | $d^{*}_{n}$ not overfit |
|---|---|---|---|---|---|---|
| Value of known rule $\psi_{0,d}$ | $\hat{\Psi}_{IPTW,d}$ | Satisfied by randomized experiment | Not required | Not required, $d$ known | Not required | Not required, $d$ known |
| | $\hat{\Psi}_{IPTW-DR,d}$ | Satisfied by randomized experiment | Not required | Not required, $d$ known | Required | Not required, $d$ known |
| | $\hat{\Psi}_{TMLE,d}$ | Satisfied by randomized experiment | Not required | Not required, $d$ known | Required | Not required, $d$ known |
| | $\hat{\Psi}_{CV-TMLE,d}$ | Satisfied by randomized experiment | Not required | Not required, $d$ known | Not required | Not required, $d$ known |
| Value of true ODTR $\psi_{0,d^{*}_{0}}$ | $\hat{\Psi}_{IPTW,d^{*}_{n}}$ | Satisfied by randomized experiment | Not required | Required | Not required | Required |
| | $\hat{\Psi}_{IPTW-DR,d^{*}_{n}}$ | Satisfied by randomized experiment | Not required | Required | Required | Required |
| | $\hat{\Psi}_{TMLE,d^{*}_{n}}$ | Satisfied by randomized experiment | Not required | Required | Required | Required |
| | $\hat{\Psi}_{CV-TMLE,d^{*}_{n,v}}$ | Satisfied by randomized experiment | Not required | Required | Not required | Not required |
| Value of sample-specific ODTR estimate $\psi_{0,d^{*}_{n}}$ | $\hat{\Psi}_{IPTW,d^{*}_{n}}$ | Satisfied by randomized experiment | Not required | Not required; require $d^{*}_{n}\overset{p}{\to}d\in\mathcal{D}$ | Not required | Required |
| | $\hat{\Psi}_{IPTW-DR,d^{*}_{n}}$ | Satisfied by randomized experiment | Not required | Not required; require $d^{*}_{n}\overset{p}{\to}d\in\mathcal{D}$ | Required | Required |
| | $\hat{\Psi}_{TMLE,d^{*}_{n}}$ | Satisfied by randomized experiment | Not required | Not required; require $d^{*}_{n}\overset{p}{\to}d\in\mathcal{D}$ | Required | Required |
| Value of sample-split-specific ODTR estimate $\psi_{0,d^{*}_{n,CV}}$ | $\hat{\Psi}_{CV-TMLE,d^{*}_{n,v}}$ | Satisfied by randomized experiment | Not required | Not required; require $d^{*}_{n}\overset{p}{\to}d\in\mathcal{D}$ | Not required | Not required |
Table 1: Summary of the conditions needed for asymptotic linearity in the
randomized treatment setting for each of the estimators corresponding to each
of the estimands.
### 4.2 Construction of Confidence Intervals
Below, we list conservative working influence curves for each estimator at
$P_{n}$ and $d\in\mathcal{D}$. The actual estimators’ influence curves when an
MLE of $g_{n}$ based on a correctly specified parametric model is used (as can
be guaranteed when treatment is randomized) are the working influence curves
presented below minus a tangent space projection term [25, 43]. Thus, under
the conditions stated above, the sample variance of the following working
influence curves at a correctly specified $g_{n}$ yield conservative estimates
of the asymptotic variance of the estimators, which yields conservative
confidence interval coverage.
The IPTW estimator’s working influence curve estimate is:
$\widehat{IC}_{IPTW,d}=\frac{\mathbb{I}[A=d(W)]}{g_{n}(A|W)}Y-\hat{\psi}_{IPTW,d}.$
The influence curve of the TMLE and double-robust IPTW estimator is the
_efficient_ influence curve for the treatment-specific mean [43, 44, 45]; the
corresponding working influence curve estimates are:
$\widehat{IC}_{IPTW-DR,d}=\frac{\mathbb{I}[A=d(W)]}{g_{n}(A|W)}(Y-Q_{n}(A,W))+Q_{n}(d(W),W)-\hat{\psi}_{IPTW-DR,d},$
$\widehat{IC}_{TMLE,d}=\frac{\mathbb{I}[A=d(W)]}{g_{n}(A|W)}(Y-Q^{*}_{n}(A,W))+Q^{*}_{n}(d(W),W)-\hat{\psi}_{TMLE,d}.$
As stated above, for these non-cross-validated estimators, the asymptotic
variance can be conservatively estimated with the sample variance of the
estimated influence curve:
$\sigma^{2}_{n}=\frac{1}{n}\sum_{i=1}^{n}\widehat{IC}^{2}(O_{i})$.
For the IPTW-DR and TMLE estimators, one can underestimate the estimator’s
variance if $Q_{0}$ is estimated data-adaptively on the same data on which the
sample variance of the estimated influence curve is evaluated. Through sample
splitting, CV-TMLE confidence intervals protect against overfitting incurred
by using the data twice – for both estimation and evaluation [25]. Then the
fold-specific estimate of the working influence curve for CV-TMLE at the
training-set-specific estimated ODTR is:
$\widehat{IC}_{v,d^{*}_{n,v}}=\frac{\mathbb{I}[A_{-v}=d^{*}_{n,v}(W_{-v})]}{g_{n}(A_{-v}|W_{-v})}(Y_{-v}-Q^{*}_{n,v}(A_{-v},W_{-v}))+Q^{*}_{n,v}(d^{*}_{n,v}(W_{-v}),W_{-v})-\hat{\Psi}(P^{*}_{n,v}),$
and the fold-specific estimate of the variance of the fold-specific estimator
is:
$\sigma^{2}_{n,v}=\frac{1}{n_{v}-1}\sum_{i=1}^{n_{v}}\widehat{IC}_{v,d^{*}_{n,v}}^{2}(O_{i});$
thus, the asymptotic variance of the CV-TMLE $\hat{\psi}_{CV-
TMLE,d^{*}_{n,v}}$ can be conservatively estimated with:
$\sigma^{2}_{n,CV-TMLE}=\frac{1}{V}\sum_{v=1}^{V}\sigma^{2}_{n,v}.$
In sum, for each estimator $\hat{\Psi}$ and its corresponding working
influence curve estimate $IC_{n}$, we obtain conservative inference on the
value of the rule by constructing confidence intervals in the following way:
$\hat{\psi}\pm\Phi^{-1}(0.975)\frac{\sigma_{n}}{\sqrt{n}}.$
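A minimal sketch of this interval construction, assuming influence-curve values have already been estimated (Python for illustration; the paper's analyses use R):

```python
# Sketch of the Wald-type interval above: psi_hat +/- z_{0.975} * sigma_n /
# sqrt(n), where sigma_n^2 is the sample variance of the estimated
# (mean-zero) influence curve.
import math

def wald_ci(psi_hat, ic_values, z=1.959964):
    n = len(ic_values)
    var_n = sum(ic ** 2 for ic in ic_values) / n   # IC has mean zero
    half_width = z * math.sqrt(var_n / n)
    return psi_hat - half_width, psi_hat + half_width

# Toy influence-curve values with sample variance 1:
lo, hi = wald_ci(0.5, [1.0, -1.0, 1.0, -1.0])
```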
## 5 Simulation Study
Using simulations, we evaluate the performance of various estimators of the
value of the rule in finite samples. In particular, we investigate: 1) the
impact of increasingly data-adaptive estimation of nuisance parameters and (if
applicable) the ODTR; 2) the potential for efficiency and bias improvement
through the use of semiparametric efficient estimators; and, 3) the importance
of sample splitting, in particular via a cross-validated-targeted maximum
likelihood estimator (CV-TMLE).
### 5.1 Data Generating Process
All simulations were implemented in R [46], and the code, simulated data, and
results can be found at https://github.com/lmmontoya/SL.ODTR. We examine these
comparisons using the following data generating process (DGP) (also used in
[14, 27, 16]). Each simulation consists of 1,000 iterations of samples of size
$n$=1,000.
Mimicking a randomized experiment, the covariates, treatment and outcome are
generated as follows:
$W_{1},W_{2},W_{3},W_{4}\sim Normal(\mu=0,\sigma^{2}=1),\quad A\sim Bernoulli(p=0.5),\quad Y\sim Bernoulli(p),$
where
$p=0.5\,logit^{-1}(1-W_{1}^{2}+3W_{2}+5W_{3}^{2}A-4.45A)+0.5\,logit^{-1}(-0.5-W_{3}+2W_{1}W_{2}+3|W_{2}|A-1.5A),$
then the true blip function is:
$B_{0}(W)=0.5[logit^{-1}(1-W_{1}^{2}+3W_{2}+5W_{3}^{2}-4.45)+logit^{-1}(-0.5-W_{3}+2W_{1}W_{2}+3|W_{2}|-1.5)-logit^{-1}(1-W_{1}^{2}+3W_{2})-logit^{-1}(-0.5-W_{3}+2W_{1}W_{2})].$
Here, the true expected outcome under the true ODTR
$\Psi^{F}_{d_{0}^{*}}(P_{U,X})\approx 0.5626$ and the true optimal proportion
treated $\mathbb{E}_{P_{U,X}}[d_{0}^{*}]\approx 55.0\%$. The mean outcome had
everyone and no one been treated is, respectively,
$\mathbb{E}_{P_{U,X}}[Y_{1}]\approx 0.4638$ and
$\mathbb{E}_{P_{U,X}}[Y_{0}]\approx 0.4643$.
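As a check on the description above, the following pure-Python sketch reproduces the DGP and Monte-Carlo approximates $\mathbb{E}_{P_{U,X}}[Y_{1}]$ and $\mathbb{E}_{P_{U,X}}[Y_{0}]$ by averaging $p$ with $A$ fixed to 1 and 0 (the simulations in the paper were implemented in R):

```python
# Pure-Python sketch of the DGP above: Monte-Carlo approximation of the mean
# counterfactual outcomes E[Y_1] and E[Y_0].
import math
import random

def expit(x): return 1.0 / (1.0 + math.exp(-x))

def p_outcome(w1, w2, w3, a):
    # W4 is drawn in the DGP but does not enter the outcome probability.
    return (0.5 * expit(1 - w1 ** 2 + 3 * w2 + 5 * (w3 ** 2) * a - 4.45 * a)
            + 0.5 * expit(-0.5 - w3 + 2 * w1 * w2 + 3 * abs(w2) * a - 1.5 * a))

rng = random.Random(0)
n_mc = 200_000
draws = [(rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1))
         for _ in range(n_mc)]
EY1 = sum(p_outcome(w1, w2, w3, 1) for w1, w2, w3 in draws) / n_mc
EY0 = sum(p_outcome(w1, w2, w3, 0) for w1, w2, w3 in draws) / n_mc
```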
### 5.2 Estimator Configurations
We estimate each of the statistical estimands using the following estimators
with inference based on the conservative working influence curves described
above: IPTW, IPTW-DR, TMLE, and CV-TMLE. The G-computation estimator is also
employed, but confidence intervals are not generated.
A correctly specified logistic regression is used to estimate the nuisance
parameter $g_{0}$. SuperLearner is used to estimate $Q_{0}$ and the ODTR [15,
16]. The ODTR is estimated using a “blip-only” library, using a blip-based
metalearner (i.e., an approach to creating an ensemble of candidate ODTR
algorithms), and selecting the mean outcome under the candidate rule as the
risk function [14]. Three libraries are considered that correspond to varying
levels of data-adaptiveness, or potential for overfitting.
1. 1.
“GLMs - least data adaptive”
* •
$Q_{n}$ library: four logistic regressions, each with a main terms $W_{j}$ and
$A$, and with an interaction $W_{j}$ times $A$, for $j\in\\{1,..,4\\}$
* •
$d^{*}_{n}$ library: univariate linear regressions with each covariate
2. 2.
“ML + GLMs - moderately data adaptive”
* •
$Q_{n}$ and $d^{*}_{n}$ library: all algorithms in the “GLMs - least data
adaptive” $Q_{n}$ and $d^{*}_{n}$ libraries, respectively, in addition to the
algorithms SL.glm (generalized linear models), SL.mean (the average),
SL.glm.interaction (generalized linear models with interactions between all
pairs of variables), SL.earth (multivariate adaptive regression splines [47]),
SL.nnet (neural networks [48]), SL.svm (support vector machines [49]), and
SL.rpart (recursive partitioning and regression trees [50]) from the
SuperLearner package [51]
3. 3.
“ML + GLMs - most data adaptive”
* •
$Q_{n}$ and $d^{*}_{n}$ library: all algorithms in the “ML + GLMs - moderately
data adaptive” $Q_{n}$ and $d^{*}_{n}$ libraries, respectively, in addition to
SL.randomForest [52]
### 5.3 Performance Metrics
Using measures of bias, variance, mean squared error (MSE) and 95% confidence
interval coverage, we evaluate the ability of each of the estimators to
approximate: 1) the true expected outcome under an a priori known rule $d$,
i.e., $\psi_{0,d}$; 2) the true expected outcome under the true, unknown ODTR
$\psi_{0,d_{0}^{*}}$; 3) the true expected outcome under an ODTR estimated on:
a) the entire sample and evaluated on the entire sample $\psi_{0,d^{*}_{n}}$;
or b) estimated on each of the training sets, evaluated and averaged over each
of the validation sets $\psi_{0,d^{*}_{n,CV}}$.
First, we estimate the target parameter $\psi_{0,d}$. This illustrates the
performance of these estimators of the value of a rule when the rule is known
a priori, either because the rule is known to be of interest or it was
estimated on other data not included in the current sample. In this case, we
choose $d$ to be the true ODTR, that is, $d=d^{*}_{0}$. We note that it is
highly unlikely that in practice $d^{*}_{0}$ is known a priori, and stress
that the only reason we examine the performance of estimators
$\hat{\psi}_{d=d^{*}_{0}}$ with respect to $\psi_{0,d^{*}_{0}}$ is to
illustrate how well these estimators evaluate a given pre-specified rule.
However, illustrating this using the true rule $d^{*}_{0}$ in a simulation
facilitates comparison of estimator performance across estimands, showing, for
example, the price in performance one pays for targeting the more ambitious
parameter that seeks to estimate both the rule itself and its true value. Said
another way, if we see that estimator performance for
$\hat{\psi}_{d=d^{*}_{0}}$ with respect to $\psi_{0,d^{*}_{0}}$ is good, then
the only issue left with estimating $\psi_{0,d^{*}_{0}}$ is estimating
$d^{*}_{0}$ well.
Next, we estimate the same target parameter $\psi_{0,d^{*}_{0}}$ in the more
realistic scenario where the true ODTR $d^{*}_{0}$ is unknown. We therefore
first estimate the ODTR and then apply each of the estimators of the value of
the rule under the estimated ODTR (where the rule is either estimated on the
entire sample $\hat{\psi}_{d^{*}_{n}}$ or, for CV-TMLE, estimated on each
sample split $\hat{\psi}_{d^{*}_{n,v}}$). Performance of the estimators with
respect to $\psi_{0,d^{*}_{0}}$ reflects how well both the rule and its value
are estimated.
Finally, we treat as target parameter the true expected outcome under the
estimated optimal rule, i.e., the data-adaptive parameters
$\psi_{0,d^{*}_{n}}$ or, for CV-TMLE, $\psi_{0,d^{*}_{n,CV}}$. This
illustrates estimator performance for data-adaptive parameters whose true
values depend on the sample, and for which it is of interest to estimate their
value using the same sample on which the rule was learned. Note that the
target parameter value in this case is specific to the sample at hand (the
“truth” will vary from sample to sample); thus, performance calculations are
calculated with respect to the true sample-specific or sample-split specific
mean outcome. For example, for confidence interval coverage, across the 1,000
simulations, we calculated the proportion of times the confidence interval
around the estimated value of the estimated rule covered the true value of the
estimated rule – where both the confidence interval around the estimate and
the true value of the estimated rule are _specific to each sample_.
Furthermore, the data-adaptive parameter will vary between the non-cross-
validated estimators (whose data-adaptive parameter is the sample-specific
parameter $\psi_{0,d^{*}_{n}}$) and CV-TMLE (whose data-adaptive parameter is
the sample-split specific parameter $\psi_{0,d^{*}_{n,CV}}$), and as such, is
not only a function of the sample, but also of the split.
### 5.4 Simulation Results
#### 5.4.1 Results - Value of a Known Dynamic Treatment Regime
Bias, variance, MSE, and confidence interval coverage metrics for estimating
$\psi_{0,d}$ in the scenario where $d$ is known a priori illustrate the
performance of each of the estimators for estimating the value of a given pre-
specified rule; for illustration, we use the true optimal rule $d^{*}_{0}$.
Thus, only estimation of nuisance parameters $g$ and/or $Q$ were needed for
this parameter.
The untargeted G-computation formula exhibited considerable bias if either
misspecified parametric models or a SuperLearning approach was used to
estimate the outcome regression – regardless of the degree of data-
adaptiveness in estimating this nuisance parameter $Q$. For example, when the
$Q_{n}$ library consisted of only parametric regressions, the mean difference
between the G-computation estimate and the truth was $-9.09\%$ (i.e., 104.44
to 940.00 times the bias of alternative estimators). We note that
this result is in contrast to that of estimating the treatment specific mean
for any static regime, in which treatment assignment is not a function of
covariates (e.g., $\mathbb{E}_{0}[Q_{0}(A=1,W)]$) from data generated from a
randomized experiment; in this case, the G-computation estimator under certain
misspecified parametric models is a TMLE, and is therefore unbiased [53].
As expected, the IPTW estimator, although unbiased, was less efficient than
the double robust estimators – specifically, throughout, the IPTW estimator’s
variance was 1.33-1.80 times that of the variance of alternative estimators.
Additionally, the IPTW-DR and TMLE were unbiased (as expected, given the
double-robustness of these estimators) if the outcome regression was estimated
using either a misspecified parametric model or a SuperLearner with a less
data-adaptive library. However, both estimators were biased (i.e., $-0.84\%$
and $-0.75\%$ bias for IPTW-DR and TMLE, respectively) with less than nominal
confidence interval coverage (i.e., $90.1\%$ and $90.6\%$ coverage for IPTW-DR
and TMLE, respectively) when a more data-adaptive library was used to estimate
the outcome regression – a result likely due to overfitting $Q_{n}$.
Sample-splitting via CV-TMLE removed the non-cross-validated estimators’ bias
($-0.01\%$, or 0.001-0.17 times the bias relative to alternative estimators)
and generated better confidence interval coverage ($93.6\%$) under the
presence of overfitting for $Q_{n}$, at no cost to variance.
#### 5.4.2 Results - Value of the True, Unknown ODTR
No estimator performed well when both the ODTR itself and its value were
estimated using the same sample (i.e., estimators $\hat{\psi}_{d^{*}_{n}}$ or
$\hat{\psi}_{d^{*}_{n,v}}$ for $\psi_{0,d^{*}_{0}}$). This was evident
particularly in terms of increased bias when a less data-adaptive library was
used to estimate $Q_{0}$ and $d^{*}_{0}$, and in terms of both increased bias
and variance when a more aggressive library was used to estimate $Q_{0}$ and
$d^{*}_{0}$. Notably, however, CV-TMLE performed the best with respect to all
performance metrics under the most data-adaptive approaches. A large component
of the bias in this case was due to the slow rate of convergence of $d^{*}_{n}$
to $d^{*}_{0}$ for any SuperLearner library, and therefore confidence interval
coverage of the true value under the true ODTR around any estimated value of
the estimated rule did not approach 95% (confidence interval coverage under
the least, moderately, and most data adaptive libraries ranged from
14.70%-45.0%, 66.50%-76.10%, and 31.00%-68.60%, respectively).
Although the focus of these simulations was not optimizing estimation of the
ODTR, we note that, consistent with results from [14], the least biased
estimators of the true value of the true ODTR are ones that use a combination
of parametric models and machine learning algorithms in the estimation of
$Q_{0}$ and $d^{*}_{0}$.
#### 5.4.3 Results - Value of an Estimated ODTR
We evaluated the performance of the non-cross-validated estimators (IPTW,
IPTW-DR, and TMLE, i.e., $\hat{\psi}_{d^{*}_{n}}$) of the data-adaptive
parameter (i.e., $\psi_{0,d^{*}_{n}}$) – a parameter that depends on the
optimal rule specific to the sample at hand. All non-cross-validated
estimators overestimated the value of the rule (i.e., positive bias),
regardless of the SuperLearner library. In addition, the bias increased as the
library for estimating the ODTR became more data-adaptive. For example, for
the most data-adaptive SuperLearner library configuration, TMLE exhibited a
bias of 13.46%, variance of 0.0108, MSE of 0.0307, and 15.7% confidence
interval coverage.
The CV-TMLE (i.e., $\hat{\psi}_{CV-TMLE,d^{*}_{n,v}}$) with respect to the
data-adaptive parameter $\psi_{0,d^{*}_{n,CV}}$ removed the bias incurred by
estimating and evaluating the ODTR on the same sample, at little to no cost in
variance. For example, for the most data-adaptive SuperLearner library
configuration, CV-TMLE had a bias of 0.04% (0.0006-0.001 times that of
alternative estimators), variance of 0.0007 (0.07-1.00 times that of
alternative estimators), MSE of 0.0005 (0.01-0.06 times that of alternative
estimators), and 94.8% confidence interval coverage.
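The optimism that cross-validation removes can be reproduced in a toy simulation. The sketch below is not the paper's estimator: it uses a single binary covariate, a deliberately overfit rule, and a plain IPTW value estimate under known randomization probabilities, all invented for illustration. Fitting and evaluating the rule on the same sample inflates the estimated value, while a split-sample analogue of cross-validated evaluation does not.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n):
    # One binary covariate, randomized treatment, and an outcome whose
    # true mean is 0.6 under every rule (no real effect heterogeneity).
    W = rng.integers(0, 2, n)
    A = rng.integers(0, 2, n)
    Y = rng.binomial(1, 0.6, n)
    return W, A, Y

def fitted_rule(W, A, Y):
    # "Estimate" a rule: within each stratum of W, pick the arm with the
    # higher observed mean outcome (a deliberately overfit decision).
    rule = {}
    for w in (0, 1):
        m1 = Y[(W == w) & (A == 1)].mean()
        m0 = Y[(W == w) & (A == 0)].mean()
        rule[w] = int(m1 > m0)
    return rule

def value_iptw(rule, W, A, Y):
    # Unadjusted IPTW value estimate under randomization P(A=a) = 0.5.
    follow = A == np.array([rule[int(w)] for w in W])
    return (follow * Y / 0.5).mean()

naive, cv = [], []
for _ in range(500):
    W, A, Y = simulate(400)
    # Naive: fit and evaluate the rule on the same sample.
    naive.append(value_iptw(fitted_rule(W, A, Y), W, A, Y))
    # Split-sample analogue of CV: fit on one half, evaluate on the other.
    h = 200
    r = fitted_rule(W[:h], A[:h], Y[:h])
    cv.append(value_iptw(r, W[h:], A[h:], Y[h:]))

# The true value of any rule is 0.6; the naive estimate is optimistic.
print(np.mean(naive), np.mean(cv))
```

The gap between the two averages is the same-sample optimism bias that motivates CV-TMLE.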
## 6 Evaluating the Estimated ODTR for the “Interventions” Study
In our companion paper, we estimated the ODTR on the “Interventions” data (n =
441) using the ODTR SuperLearner [14]. The library for $d^{*}_{n}$ consisted
of a combination of simple parametric models and machine learning algorithms
(SL.glm, SL.mean, SL.glm.interaction, SL.earth, and SL.rpart), and we used the
same library for $Q_{n}$. The ODTR algorithm allocated all coefficient weight
to a simple GLM with only substance use; this means that the estimated ODTR
can be interpreted as: give CBT to those with low substance use scores and TAU
to those with high substance use scores.
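As a concrete reading of that fitted rule, a minimal decision function might look like the following. The cutoff is a hypothetical stand-in (substance use is coded 0-2 in Table 2); the fitted GLM's actual coefficients are not reproduced here.

```python
def odtr_cbt(substance_use, cutoff=1):
    """Assign CBT (1) to low substance-use scores, TAU (0) otherwise.

    The cutoff is illustrative only; in the fitted rule it is implied by
    the GLM's coefficients, which are not shown here.
    """
    return 1 if substance_use <= cutoff else 0

# Substance use is coded 0, 1, 2 in the "Interventions" data.
assignments = [odtr_cbt(s) for s in (0, 1, 2)]
```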
In this paper, we _evaluate_ this estimated ODTR using CV-TMLE. Specifically,
we aim to determine if administering CBT under this individualized rule is
better than administering CBT in a non-individualized way – i.e., simply
giving all participants CBT or no participants CBT.
The CV-TMLE estimate of the probability of no re-arrest under the ODTR
SuperLearner is 61.37% (CI: [54.82%, 67.93%]). However, this probability is
not significantly different from the CV-TMLE estimates under the static rules
in which everyone receives CBT (difference: -0.35%, CI: [-6.40%, 5.71%]) or no
one receives CBT (difference: -0.18%, CI: [-7.06%, 6.68%]). These estimates
and their confidence intervals are illustrated in Figure 2.
Thus, there is insufficient evidence to conclude that assigning CBT using the
ODTR SuperLearner is better than assigning CBT in a non-individualized way.
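The "not significantly different" conclusion corresponds to 95% Wald-style intervals that contain zero. A minimal sketch of that check follows; the point estimate and standard error are hypothetical stand-ins (the study's actual intervals also account for the covariance between the two CV-TMLE estimates).

```python
def wald_ci(est, se, z=1.96):
    # Two-sided 95% Wald interval: est +/- z * se.
    return est - z * se, est + z * se

# Hypothetical difference (ODTR minus "CBT for all") and standard error,
# on the probability scale; illustrative only.
diff, se = -0.0035, 0.0309
lo, hi = wald_ci(diff, se)
significant = not (lo <= 0.0 <= hi)  # does the interval exclude zero?
```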
## 7 Conclusions
The aim of this paper was to illustrate the performance of different
estimators that can be used to evaluate dynamic treatment rules, and in
particular, the ODTR. At sample size 1,000, we saw a small price and many
benefits to using CV-TMLE in order to estimate the following parameters: 1)
the true value of a given a priori known rule; 2) the true value of the true,
unknown ODTR; and, 3) the true value of an estimated ODTR (a data-adaptive
parameter). In addition, we illustrated how to implement the CV-TMLE estimator
to evaluate the ODTR using the “Interventions” data as an applied example.
When evaluating estimators’ performance for the value of a known rule, CV-TMLE
performed well, irrespective of how data-adaptive the algorithms used for
estimating nuisance parameters were. Although no estimator under an estimated
ODTR yielded satisfactory performance for a target parameter corresponding to
the true value of the true ODTR, CV-TMLE performed the best when nuisance
parameters and ODTRs were estimated using the most data-adaptive algorithms,
while non-cross-validated estimators yielded overly optimistic and highly
variable results. Finally, no estimator other than CV-TMLE performed well
when estimating a data-adaptive parameter – a parameter that may be of
interest if: 1) one believes one’s estimate of the ODTR will not converge
appropriately to its truth (as was the case for these estimators of the ODTR
under the current DGP); and 2) one cares more about the performance of the
estimated ODTR that is generated by the sample at hand (as opposed to the
true, but unknown, ODTR).
Future directions for simulations should evaluate results under varying sample
sizes. In particular, for small sample sizes and thus less support in the
data, it may be the case that we pay a price in performance by sample
splitting. Additionally, future work could extend these simulations to the
multiple time-point setting to evaluate the _sequential_ ODTR that could be
generated from, for example, a SMART design [54, 12, 55] instead of a single
time-point experiment.
As an illustration of how to apply the ODTR SuperLearner to real data, we
estimated the ODTR using the “Interventions” Study to determine which types of
criminal justice-involved adults with mental illness should be assigned CBT
versus TAU, to yield the highest probability of no re-arrest. In our applied
example using the “Interventions” data, preliminary results suggest the
probability of recidivism if treatment were assigned using the ODTR algorithm
(i.e., in an individualized way) is not significantly different from
the probability of recidivism if all had been assigned treatment or no treatment
(i.e., in a non-individualized way). This may indicate an absence of strong
heterogeneous treatment effects by the measured variables, or it may reflect
limitations in power to detect such effects due to preliminary sample sizes.
In future work, we will apply the ODTR SuperLearner and evaluate it on the
full sample size (n = 720).
This work contributes to statistical methods for understanding treatment
effect heterogeneity, and in particular, how much improvement we might make in
outcomes if interventions are assigned according to an ODTR. It is of great
practical relevance to study estimators of these parameters, which allow us to
determine the benefit of assigning treatment in a more individualized way
compared to, for example, simply giving all subjects treatment.
## References
* [1] Muin J Khoury, Michael F Iademarco, and William T Riley. Precision public health for the era of precision medicine. American journal of preventive medicine, 50(3):398, 2016.
* [2] Eric Laber and Marie Davidian. Dynamic treatment regimes, past, present, and future: A conversation with experts. Statistical methods in medical research, 26(4):1605–1610, 2017.
* [3] O. Bembom and M. J. van der Laan. A practical illustration of the importance of realistic individualized treatment rules in causal inference. Electron J Stat, 1:574–596, 2007.
* [4] Mark J. van der Laan and Maya L. Petersen. Causal effect models for realistic individualized treatment and intention to treat rules. Int J Biostat, 3(1):Article 3, 2007.
* [5] James Robins. A new approach to causal inference in mortality studies with a sustained exposure period - application to control of the healthy worker survivor effect. Mathematical Modelling, 7(9-12):1393–1512, 1986.
* [6] B. Chakraborty, E. B. Laber, and Y. Zhao. Inference for optimal dynamic treatment regimes using an adaptive m-out-of-n bootstrap scheme. Biometrics, 69(3):714–23, 2013.
* [7] Bibhas Chakraborty and Susan A Murphy. Dynamic treatment regimes. Annual review of statistics and its application, 1:447–464, 2014.
* [8] Susan A Murphy. Optimal dynamic treatment regimes. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 65(2):331–355, 2003.
* [9] James M Robins. Optimal structural nested models for optimal sequential decisions. In Proceedings of the second seattle Symposium in Biostatistics, pages 189–326. Springer, 2004.
* [10] E. E. Moodie, T. S. Richardson, and D. A. Stephens. Demystifying optimal dynamic treatment regimes. Biometrics, 63(2):447–55, 2007.
* [11] M. R. Kosorok and E. B. Laber. Precision medicine. Annu Rev Stat Appl, 6:263–286, 2019.
* [12] Michael R Kosorok and Erica EM Moodie. Adaptive Treatment Strategies in Practice: Planning Trials and Analyzing Data for Personalized Medicine, volume 21. SIAM, 2015.
* [13] Anastasios A Tsiatis. Dynamic Treatment Regimes: Statistical Methods for Precision Medicine. CRC Press, 2019.
* [14] Lina Montoya, Mark J. van der Laan, Alexander R. Luedtke, Jennifer L. Skeem, Jeremy R. Coyle, and Maya L. Petersen. The optimal dynamic treatment rule superlearner: Considerations, performance, and application. The International Journal of Biostatistics, submitted.
* [15] Mark J van der Laan, Eric C Polley, and Alan E Hubbard. Super learner. Statistical applications in genetics and molecular biology, 6(1), 2007.
* [16] Alexander R. Luedtke and Mark J. van der Laan. Super-learning of an optimal dynamic treatment rule. Int J Biostat, 12(1):305–32, 2016.
* [17] Jeremy Robert Coyle. Computational Considerations for Targeted Learning. PhD thesis, UC Berkeley, 2017.
* [18] James Robins. A new approach to causal inference in mortality studies with a sustained exposure period—application to control of the healthy worker survivor effect. Mathematical modelling, 7(9-12):1393–1512, 1986.
* [19] Miguel A. Hernan and James M. Robins. Estimating causal effects from epidemiological data. J Epidemiol Community Health, 60(7):578–86, 2006.
* [20] Paul R Rosenbaum and Donald B Rubin. The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1):41–55, 1983.
* [21] James M Robins, Andrea Rotnitzky, and Lue Ping Zhao. Estimation of regression coefficients when some regressors are not always observed. Journal of the American statistical Association, 89(427):846–866, 1994.
* [22] Daniel O Scharfstein, Andrea Rotnitzky, and James M Robins. Theory and methods-rejoinder-adjusting for nonignorable drop-out using semiparametric nonresponse models. Journal of the American Statistical Association, 94(448):1135–1146, 1999.
* [23] James M Robins. Robust estimation in sequentially ignorable missing data and causal inference models. In Proceedings of the American Statistical Association, volume 1999, pages 6–10. Indianapolis, IN, 2000.
* [24] Michael Rosenblum and Mark J van der Laan. Targeted maximum likelihood estimation of the parameter of a marginal structural model. The international journal of biostatistics, 6(2), 2010.
* [25] Mark J van der Laan and Sherri Rose. Targeted learning: causal inference for observational and experimental data. Springer Science & Business Media, 2011.
* [26] Wenjing Zheng and Mark J van der Laan. Asymptotic theory for cross-validated targeted maximum likelihood estimation. 2010.
* [27] Mark J van der Laan and Alexander R Luedtke. Targeted learning of the mean outcome under an optimal dynamic treatment rule. Journal of causal inference, 3(1):61–95, 2015.
* [28] Mark J van der Laan and Sherri Rose. Targeted Learning in Data Science. Springer, 2018.
* [29] E. Laber and M. Qian. Evaluating Personalized Treatment Regimes, book section 15, pages 483–497. CRC Press LLC : Chapman and Hall/CRC, Boca Raton, FL, 2017.
* [30] Bibhas Chakraborty, Susan Murphy, and Victor Strecher. Inference for non-regular parameters in optimal dynamic treatment regimes. Statistical methods in medical research, 19(3):317–343, 2010.
* [31] A. Sies and I. Van Mechelen. Estimating the quality of optimal treatment regimes. Stat Med, 2019.
* [32] A. E. Hubbard, S. Kherad-Pajouh, and M. J. van der Laan. Statistical inference for data adaptive target parameters. Int J Biostat, 12(1):3–19, 2016.
* [33] Mark J van der Laan and Alexander R Luedtke. Targeted learning of an optimal dynamic treatment, and statistical inference for its mean outcome. 2014.
* [34] Aniek Sies and Iven Van Mechelen. Estimating the quality of optimal treatment regimes. Statistics in medicine, 38(25):4925–4938, 2019.
* [35] Jennifer L Skeem, Sarah Manchak, and Jillian K Peterson. Correctional policy for offenders with mental illness: Creating a new paradigm for recidivism reduction. Law and human behavior, 35(2):110–126, 2011.
* [36] Jennifer L Skeem, Eliza Winter, Patrick J Kennealy, Jennifer Eno Louden, and Joseph R Tatar II. Offenders with mental illness have criminogenic needs, too: Toward recidivism reduction. Law and human behavior, 38(3):212, 2014.
* [37] Maya L Petersen and Mark J van der Laan. Causal models and learning from data: integrating causal modeling and statistical estimation. Epidemiology (Cambridge, Mass.), 25(3):418, 2014.
* [38] Maya L Petersen, Kristin E Porter, Susan Gruber, Yue Wang, and Mark J van der Laan. Diagnosing and responding to violations in the positivity assumption. Statistical methods in medical research, 21(1):31–54, 2012.
* [39] Alexander R Luedtke and Mark J van der Laan. Statistical inference for the mean outcome under a possibly non-unique optimal treatment strategy. Annals of statistics, 44(2):713, 2016.
* [40] James M Robins. Optimal structural nested models for optimal sequential decisions. In Proceedings of the second seattle Symposium in Biostatistics, pages 189–326. Springer, 2004.
* [41] James Robins, Andrea Rotnitzky, et al. Discussion of “dynamic treatment regimes: Technical challenges and applications”. Electronic Journal of Statistics, 8(1):1273–1289, 2014.
* [42] Susan Gruber and Mark J van der Laan. A targeted maximum likelihood estimator of a causal effect on a bounded continuous outcome. The International Journal of Biostatistics, 6(1), 2010.
* [43] Mark J van der Laan, MJ Laan, and James M Robins. Unified methods for censored longitudinal data and causality. Springer Science & Business Media, 2003.
* [44] Aad W Van der Vaart. Asymptotic statistics, volume 3. Cambridge university press, 2000.
* [45] Peter J Bickel, Chris AJ Klaassen, Peter J Bickel, Ya’acov Ritov, J Klaassen, Jon A Wellner, and YA’Acov Ritov. Efficient and adaptive estimation for semiparametric models, volume 4. Johns Hopkins University Press Baltimore, 1993.
* [46] R Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria, 2018.
* [47] Jerome H Friedman et al. Multivariate adaptive regression splines. The annals of statistics, 19(1):1–67, 1991.
* [48] Brian D Ripley and NL Hjort. Pattern recognition and neural networks. Cambridge university press, 1996.
* [49] Chih-Chung Chang and Chih-Jen Lin. Libsvm: A library for support vector machines. ACM transactions on intelligent systems and technology (TIST), 2(3):27, 2011.
* [50] Leo Breiman. Classification and regression trees. Routledge, 2017.
* [51] Eric Polley, Erin LeDell, Chris Kennedy, and Mark van der Laan. SuperLearner: Super Learner Prediction, 2018. R package version 2.0-24.
* [52] Leo Breiman. Random forests. Machine learning, 45(1):5–32, 2001.
* [53] Michael Rosenblum and Mark J van der Laan. Using regression models to analyze randomized trials: Asymptotically valid hypothesis tests despite incorrectly specified models. Biometrics, 65(3):937–945, 2009.
* [54] Huitian Lei, Inbal Nahum-Shani, K Lynch, David Oslin, and Susan A Murphy. A” smart” design for building individualized treatment sequences. Annual review of clinical psychology, 8:21–48, 2012.
* [55] Daniel Almirall, Inbal Nahum-Shani, Nancy E Sherwood, and Susan A Murphy. Introduction to smart designs for the development of adaptive interventions: with application to weight loss research. Translational behavioral medicine, 4(3):260–274, 2014.
| TAU ($A=0$) | CBT ($A=1$) | $p$
---|---|---|---
$n$ | 211 | 230 |
No re-arrest ($Y=1$) (%) | 128 (60.7) | 143 (62.2) | 0.820
Site = San Francisco (%) | 87 (41.2) | 104 (45.2) | 0.455
Gender = Female (%) | 38 (18.0) | 37 (16.1) | 0.682
Ethnicity = Hispanic (%) | 50 (23.7) | 42 (18.3) | 0.198
Age (mean (SD)) | 38.08 (11.05) | 37.01 (11.22) | 0.317
CSI (mean (SD)) | 32.35 (11.13) | 33.46 (11.27) | 0.300
LSI (mean (SD)) | 5.59 (1.33) | 5.50 (1.48) | 0.472
SES (mean (SD)) | 3.81 (1.89) | 3.81 (2.12) | 0.995
Prior adult convictions (%) | | | 0.156
Zero to two times | 74 (35.1) | 93 (40.4) |
Three or more times | 134 (63.5) | 129 (56.1) |
Missing | 3 (1.4) | 8 (3.5) |
Most serious offense (mean (SD)) | 5.29 (2.54) | 5.09 (2.52) | 0.415
Motivation (mean (SD)) | 3.22 (1.36) | 3.27 (1.37) | 0.720
Substance use (%) | | | 0.184
0 | 53 (25.1) | 76 (33.0) |
1 | 47 (22.3) | 55 (23.9) |
2 | 109 (51.7) | 98 (42.6) |
Missing | 2 (0.9) | 1 (0.4) |
Table 2: Distribution of “Interventions” data by treatment assignment.

Figure 1: Performance of the value of the rule for 3 SuperLearner library
configurations with increasing (left to right) levels of data-adaptivity used
for estimating $Q_{0}$ and/or $d^{*}_{0}$ (“GLM - least data adaptive”, “ML +
GLMs - moderately data adaptive”, “ML + GLMs - most data adaptive”). The
horizontal black line depicts the true mean outcome under the true ODTR
$\psi_{0,d^{*}_{0}}$; the blue and red lines are the data-adaptive parameters
$\psi_{0,d^{*}_{n}}$ and $\psi_{0,d^{*}_{n,CV}}$, respectively, averaged over
each of the 1,000 simulated samples. Points with error bars show the
distribution of the estimators across the 1,000 simulated samples
(G-computation estimator, IPTW estimator, TMLE, and CV-TMLE); the points
(circles and triangles) show the estimates averaged over the samples, and
error bars show the $2.5^{th}$ and $97.5^{th}$ quantiles of the distribution
of each estimator across the simulation repetitions. The circles depict the
estimators under a known rule $\hat{\psi}_{d=d_{0}^{*}}$ and the triangles
illustrate the estimators under an estimated rule, either
$\hat{\psi}_{d^{*}_{n}}$ or $\hat{\psi}_{d_{n,v}^{*}}$ (for CV-TMLE).
Library | Estimator | Bias | Variance | MSE | Coverage
---|---|---|---|---|---
GLMs | G-comp. | -0.0940 | 0.0003 | 0.0091 | -
 | IPTW | 0.0009 | 0.0008 | 0.0008 | 95.3%
 | IPTW-DR | 0.0001 | 0.0005 | 0.0005 | 93.7%
 | TMLE | 0.0002 | 0.0005 | 0.0005 | 93.7%
 | CV-TMLE | 0.0004 | 0.0005 | 0.0005 | 93.7%
ML + GLMs not aggressive | G-comp. | -0.1298 | 0.0006 | 0.0175 | -
 | IPTW | 0.0002 | 0.0008 | 0.0008 | 94.7%
 | IPTW-DR | -0.0009 | 0.0006 | 0.0006 | 94.0%
 | TMLE | -0.0011 | 0.0005 | 0.0005 | 93.6%
 | CV-TMLE | -0.0009 | 0.0005 | 0.0005 | 93.2%
ML + GLMs aggressive | G-comp. | -0.1180 | 0.0006 | 0.0146 | -
 | IPTW | -0.0006 | 0.0009 | 0.0009 | 94.0%
 | IPTW-DR | -0.0084 | 0.0005 | 0.0006 | 90.1%
 | TMLE | -0.0075 | 0.0005 | 0.0006 | 90.6%
 | CV-TMLE | -0.0001 | 0.0005 | 0.0005 | 93.6%
Table 3: Performance metrics (bias, variance, MSE, confidence interval
coverage) of each estimator $\hat{\psi}_{d=d^{*}_{0}}$ for
$\psi_{0,d^{*}_{0}}$, for each library configuration of $Q_{n}$.
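The metrics reported in Tables 3-5 can be computed from per-sample estimates in a few lines. The sketch below assumes bias is reported on the raw (proportion) scale and coverage uses 95% Wald intervals built from per-sample standard errors; the simulated numbers are fabricated for illustration.

```python
import numpy as np

def performance(estimates, ses, truth):
    # Summarize a vector of estimates (one per simulated sample) against
    # the true parameter value, as in Tables 3-5.
    estimates = np.asarray(estimates)
    bias = estimates.mean() - truth
    variance = estimates.var(ddof=1)
    mse = np.mean((estimates - truth) ** 2)
    # 95% Wald-interval coverage from per-sample standard errors.
    lo = estimates - 1.96 * np.asarray(ses)
    hi = estimates + 1.96 * np.asarray(ses)
    coverage = np.mean((lo <= truth) & (truth <= hi))
    return bias, variance, mse, coverage

# Tiny fabricated example: an unbiased estimator of a truth of 0.62
# with standard error 0.02, over 1,000 simulated samples.
rng = np.random.default_rng(1)
est = rng.normal(0.62, 0.02, 1000)
b, v, m, c = performance(est, np.full(1000, 0.02), truth=0.62)
```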
Library | Estimator | Bias | Variance | MSE | Coverage
---|---|---|---|---|---
GLMs | G-comp. | -0.0773 | 0.0004 | 0.0064 | -
 | IPTW | -0.0558 | 0.0008 | 0.0039 | 45.0%
 | IPTW-DR | -0.0565 | 0.0006 | 0.0038 | 30.1%
 | TMLE | -0.0565 | 0.0006 | 0.0038 | 29.8%
 | CV-TMLE | -0.0764 | 0.0009 | 0.0067 | 14.7%
ML + GLMs not aggressive | G-comp. | -0.1306 | 0.0007 | 0.0178 | -
 | IPTW | 0.0334 | 0.0010 | 0.0021 | 76.1%
 | IPTW-DR | 0.0327 | 0.0008 | 0.0019 | 66.5%
 | TMLE | 0.0298 | 0.0008 | 0.0016 | 71.3%
 | CV-TMLE | -0.0308 | 0.0007 | 0.0017 | 69.0%
ML + GLMs aggressive | G-comp. | -0.1161 | 0.0007 | 0.0142 | -
 | IPTW | 0.1236 | 0.0109 | 0.0262 | 31.0%
 | IPTW-DR | 0.1010 | 0.0092 | 0.0194 | 33.0%
 | TMLE | 0.1031 | 0.0108 | 0.0214 | 33.6%
 | CV-TMLE | -0.0316 | 0.0007 | 0.0017 | 68.6%
Table 4: Performance metrics (bias, variance, MSE, confidence interval
coverage) of each estimator $\hat{\psi}_{d^{*}_{n}}$ (G-computation, IPTW,
IPTW-DR, TMLE) or $\hat{\psi}_{d^{*}_{n,v}}$ (CV-TMLE) for
$\psi_{0,d^{*}_{0}}$, for each library configuration of $Q_{n}$ and
$d^{*}_{n}$.
Library | Estimator | Bias | Variance | MSE | Coverage
---|---|---|---|---|---
GLMs | G-comp. | -0.0033 | 0.0004 | 0.0006 | -
 | IPTW | 0.0183 | 0.0008 | 0.0009 | 94.3%
 | IPTW-DR | 0.0175 | 0.0006 | 0.0007 | 90.6%
 | TMLE | 0.0175 | 0.0006 | 0.0007 | 90.7%
 | CV-TMLE | -0.0002 | 0.0009 | 0.0005 | 94.3%
ML + GLMs not aggressive | G-comp. | -0.1027 | 0.0007 | 0.0114 | -
 | IPTW | 0.0614 | 0.0010 | 0.0046 | 43.8%
 | IPTW-DR | 0.0607 | 0.0008 | 0.0044 | 28.9%
 | TMLE | 0.0578 | 0.0008 | 0.0040 | 30.4%
 | CV-TMLE | 0.0002 | 0.0007 | 0.0005 | 94.0%
ML + GLMs aggressive | G-comp. | -0.0846 | 0.0007 | 0.0081 | -
 | IPTW | 0.1551 | 0.0109 | 0.0366 | 16.3%
 | IPTW-DR | 0.1325 | 0.0092 | 0.0283 | 15.8%
 | TMLE | 0.1346 | 0.0108 | 0.0307 | 15.7%
 | CV-TMLE | 0.0001 | 0.0007 | 0.0005 | 94.8%
Table 5: Performance metrics (bias, variance, MSE, confidence interval
coverage) of each estimator $\hat{\psi}_{d^{*}_{n}}$ (G-computation, IPTW,
IPTW-DR, TMLE) for $\psi_{0,d^{*}_{n}}$ or $\hat{\psi}_{d^{*}_{n,v}}$ (CV-
TMLE) for $\psi_{0,d^{*}_{n,CV}}$, for each library configuration of $Q_{n}$
and $d^{*}_{n}$.

Figure 2: CV-TMLE estimates of the probability of no re-
arrest under the following treatment rules: give CBT to all, give CBT to none,
give CBT according to the ODTR SuperLearner algorithm. The squares are the
point estimates and the error bars are 95% confidence intervals on these point
estimates. There is no significant difference in the estimated probability of
no re-arrest under a treatment regime in which all are given CBT, none are
given CBT, and CBT is given using this ODTR.
# Enabling Robots to Draw and Tell:
Towards Visually Grounded Multimodal Description Generation
Ting Han <EMAIL_ADDRESS>, Artificial Intelligence Research Center, AIST, Tokyo, Japan; Sina Zarrieß <EMAIL_ADDRESS>, Friedrich Schiller University Jena, Jena, Germany
###### Abstract.
Socially competent robots should be equipped with the ability to perceive the
world that surrounds them and communicate about it in a human-like manner.
Representative skills that exhibit such ability include generating image
descriptions and visually grounded referring expressions. In the NLG
community, these generation tasks are largely investigated in non-interactive
and language-only settings. However, in face-to-face interaction, humans often
deploy multiple modalities to communicate, forming seamless integration of
natural language, hand gestures and other modalities like sketches. To enable
robots to describe what they perceive with speech and sketches/gestures, we
propose to model the task of generating natural language together with free-hand sketches/hand gestures to describe visual scenes and real-life objects,
namely, visually-grounded multimodal description generation. In this paper, we
discuss the challenges and evaluation metrics of the task, and how the task
can benefit from progress recently made in the natural language processing and
computer vision realms, where related topics such as visually grounded NLG,
distributional semantics, and photo-based sketch generation have been
extensively studied.
Conference: The 2nd Workshop on NLG for HRI, December 18, 2020.
## 1. Introduction
A key way for robots to exhibit social intelligence is to show the ability of
communicating what they perceive in the surrounding world as humans do, with
both verbal and non-verbal means such as drawing a trajectory in shared space
or on a piece of paper (i.e., gesturing and sketching) to illustrate the
contour of an object. Research in human-robot interaction has a long-standing
interest in embodied multimodal interaction. A lot of work has been done on
studying the complex interplay between speech and gestures, as well as
understanding and generating speech and symbolic gestures in interaction (Kopp
et al., 2006; Kopp et al., 2008; Sowa and Wachsmuth, 2009; Lücking et al.,
2010; Kranstedt and Wachsmuth, 2005; Han et al., 2018a, 2015). In comparison,
much less work has focused on generating iconic gestures/sketches together
with natural language to describe real-life objects, even though such
descriptions are multimodal in nature.
Generating visually grounded multimodal descriptions is a challenging task:
1) It requires the generation of both natural language and gesture/sketch,
each of which is a standalone challenging task. Given a photo, the NLG
component generates a sequence of words to convey the content symbolically;
the gesture/sketch generation component translates the content into a sequence
of sketch strokes to communicate iconically. It decides what, where and in
which order to draw. 2) The two modalities are not independent, but bear
close semantic and temporal relations (McNeill, 1992). Iconic gestures
complement or supplement the semantic content of the accompanying speech
through their formal relevance to referents in the speech; thus they do not
exhibit meanings on their own. Instead, their meanings are largely determined
by the accompanying speech, making multimodal alignment reasoning a critical
component of the generation task. 3) Moreover, in situated interaction, the
generated descriptions unfold over time. The durations of a gesture/sketch and
the accompanying utterance must fit each other in order to be temporally
aligned. Therefore, generating aligned descriptions requires reasoning about
both the semantics and the timing of speech and gestures/sketches, making the
generation task extremely challenging.
Figure 1. A pseudo multimodal description of an elephant, composed of an
utterance and a sketch stroke selected from a full sketch. The red arrows
indicate the sketch direction.
For instance, in Figure 1, the utterance “ _elephant, facing right, trunk
coiled towards mouth_ ” and the sketch compose a multimodal description of the
elephant in the photo. While the utterance describes the object symbolically,
the sketch illustrates the shape of the trunk by visualizing its contour (Han
et al., 2018b, 2017), which is difficult to convey symbolically with the word
“coiled”. Yet, the meaning of the sketch is ambiguous on its own. It only
becomes evident when jointly interpreting the utterance and the sketch,
grounding the iconic concepts encoded in both modalities to the same region of
the photo (i.e., the trunk).
With the above observation in mind, we propose to decompose the multimodal
generation task into three sub-tasks: visually-grounded NLG, visually-grounded
gesture/sketch generation, and multimodal alignment reasoning. The first two
sub-tasks generate utterances and full sketches respectively. The multimodal
alignment sub-task selects representative sketch strokes from full sketches,
infers optimal semantic and temporal relations between the utterances and the
sketches, then outputs orchestrated descriptions. Finally, we convert the
utterances to speech and pair the utterances with sketches displayed on a
screen or projected to a gesture space together and duly to obtain aligned
multimodal descriptions.
Such a formulation also allows the proposed task to benefit from available
pre-trained large-scale language models, sketch generation models, and
large-scale datasets, e.g., language and vision datasets such as MSCOCO and Visual Genome
(Kazemzadeh et al., 2014; Lin et al., 2014), and free-hand sketching dataset
such as QuickDraw (Sangkloy et al., 2016). To our knowledge, no large scale
multimodal description datasets have been publicly available yet. Utilizing
existing datasets is a good starting point for the proposed task.
## 2. The Challenges
### 2.1. Visually Grounded NLG
The visually grounded NLG sub-task not only generates verbal object
descriptions but also provides important information for multimodal alignment.
As mentioned above, sketches/gestures convey meaning through their formal
relevance to referents in the accompanying speech. Thus, we assume that words
and sketches that describe the same photo region are likely to align at the
semantic level. Grounding generated words to visual inputs enables
identification of the region they describe.
Attention-based image captioning models are a promising solution to our
generation task. With an attention mechanism, image captioning models focus on
the relevant part of a given photo when generating each word. The “focus” is
represented as a weight matrix: the region a word describes receives higher
attention weights (Xu et al., 2015; Lu et al., 2017). With the attention
matrix of each word, we identify the region an utterance describes, then
reason out sketch strokes that can be aligned with the utterance (see
discussion in Section 2.3).
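To make the grounding step concrete, here is a toy sketch of reading off regions from attention weights. The words, the 3x3 grid, and all weight values are invented; real models would typically use softer criteria than a hard argmax.

```python
import numpy as np

# Toy attention matrix: one row per generated word, one column per image
# region (a 3x3 grid flattened to 9 cells). All values are invented.
words = ["elephant", "trunk", "coiled"]
attention = np.array([
    [0.05, 0.05, 0.05, 0.10, 0.40, 0.10, 0.05, 0.10, 0.10],  # elephant
    [0.02, 0.02, 0.02, 0.02, 0.05, 0.70, 0.05, 0.06, 0.06],  # trunk
    [0.02, 0.02, 0.02, 0.02, 0.05, 0.72, 0.05, 0.05, 0.05],  # coiled
])

# Ground each word to the region with the highest attention weight; words
# attending to the same region become candidates for alignment with
# sketch strokes drawn in that region.
regions = attention.argmax(axis=1)
aligned = {w: int(r) for w, r in zip(words, regions)}
print(aligned)  # "trunk" and "coiled" both ground to region 5
```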
### 2.2. Photo-based Sketch Generation
The sketch generation sub-task generates a sequence of sketch strokes that
exhibit contours of objects in a given photo (as shown in Figure 1), mimicking
the human sketching process at various levels of sophistication and
abstractness.
In computer vision, photo-based sketch generation is posed as a weakly
supervised or unsupervised photo-to-sequence translation task (Zhang et al.,
2015; Xu, 2020; Song et al., 2018; Eitz et al., 2012); photo-sketch pairs are
not strictly required for training sketch generation models. In recent works,
neural sketching models with an encoding-decoding architecture typically learn
an encoder network and a decoder network: the former encodes photos into
feature vectors; the latter decodes photo vectors into sketch sequences of
target objects.
Note that existing neural sketching models generate detailed sketches of
objects, as they are trained on sketching datasets collected without timing
constraints; such sketches might be too elaborate to draw in situated
interaction. In our task, we aim to accompany utterances with sketch strokes
in human-robot interaction, where communication is often under time pressure.
Hence, a challenge to be addressed here is to constrain the sophistication of
sketch strokes, either during model training or via post-editing, so that they
are suitable for situated communication.
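One simple post-editing strategy can be sketched as follows, under invented assumptions: strokes are represented as point lists, stroke length serves as a crude salience proxy, and a fixed point budget stands in for drawing time.

```python
def select_strokes(strokes, max_points):
    # Greedily keep the longest strokes (a crude salience proxy) until
    # adding another would exceed the point budget.
    order = sorted(range(len(strokes)), key=lambda i: -len(strokes[i]))
    kept, total = [], 0
    for i in order:
        if total + len(strokes[i]) <= max_points:
            kept.append(i)
            total += len(strokes[i])
    return [strokes[i] for i in sorted(kept)]  # preserve drawing order

# Toy sketch: three strokes as (x, y) point lists, values invented.
sketch = [
    [(0, 0), (1, 0), (2, 1)],                  # medium stroke
    [(0, 2), (1, 3), (2, 3), (3, 4), (4, 4)],  # long stroke (body contour)
    [(5, 0), (5, 1)],                          # short stroke
]
reduced = select_strokes(sketch, max_points=8)
```

A learned selection criterion would be preferable; this heuristic only illustrates where a budget constraint could enter the pipeline.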
### 2.3. Composing Multimodal Descriptions
To produce orchestrated descriptions, the multimodal alignment sub-task first
selects sketch strokes that represent salient iconic concepts of the target
object, then determines which words are semantically and temporally compatible
with the selected strokes. Several candidate descriptions might be initially
proposed, from which the one that best suits the communicating purpose (e.g.,
emphasize a particular concept) is selected as the optimal description.
Selecting representative sketch strokes. Given a full sketch of a photo, we
select one or two sketch strokes to form an optimal multimodal description.
The selection strategy here depends on the communication intention. For
instance, to supplement a particular utterance, sketch strokes in the region
the utterance describes should be selected; to be most informative, sketch
strokes of the most salient part of the target object should be selected.
Multimodal alignment reasoning. Although word attention weights indicate which
words correlate with selected strokes, not every word in natural language
correlates with visual features, so the attention weights only provide a rough
estimate of the alignment. To achieve more accurate alignment, we propose to
reason about the alignment according to the semantics of words and sketches,
as accompanying speech and gestures bear close semantic relations (Han and
Schlangen, 2017). Work in _distributional semantics_ has shown that content
from different modalities can be mapped to a joint embedding space, where
vectors within shorter distances share similar semantic meaning (Bruni et al.,
2014). Although iconic gestures and sketches differ, they both convey
iconicity, so this approach could also be applied to measure the semantic
similarity of utterances and sketches.
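A minimal sketch of this matching in a shared embedding space follows. The 4-dimensional vectors are fabricated; a real system would learn such embeddings from data (e.g., with a multimodal distributional model).

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two vectors in the joint embedding space.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical embeddings of words and one sketch stroke; values invented.
emb = {
    "trunk":        np.array([0.9, 0.1, 0.0, 0.2]),
    "coiled":       np.array([0.6, 0.3, 0.3, 0.1]),
    "ear":          np.array([0.0, 0.9, 0.3, 0.1]),
    "stroke_trunk": np.array([0.85, 0.15, 0.05, 0.15]),
}

# Align the stroke to the word whose embedding it is most similar to.
scores = {w: cosine(emb[w], emb["stroke_trunk"])
          for w in ("trunk", "coiled", "ear")}
best = max(scores, key=scores.get)
```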
In a pilot study, we generated several multimodal descriptions with a
simplified approach based on the above framework, and evaluated them under two
settings: 1) display speech (converted from text with a TTS software) and
sketches on a monitor, 2) enable a Nao robot to speak and gesture by
projecting generated sketch strokes into Nao’s gesture space. We found that,
with correct alignment, sketches/gestures are helpful in conveying iconicity.
Iconic gestures transformed from shorter and less sophisticated sketches are
easier to interpret and more helpful. Generating iconic gestures based on
sketches would be an interesting topic for future research.
### 2.4. Evaluation
An informative evaluation should reflect the quality of individual modalities
and how well the two modalities are aligned. We propose to combine human
evaluation with automatic evaluation to assess the overall generation quality
(Belz and Reiter, 2006).
Automatic evaluation is suitable for fast checks during model development. We
suggest evaluating NLG quality with standard metrics such as BLEU (Papineni et
al., 2002; Vedantam et al., 2015), while evaluating the informativeness of
sketches and multimodal descriptions with sketch-based and multimodal image
retrieval tasks: a higher retrieval accuracy indicates better description
quality. Note that these automatic evaluations cannot measure temporal
alignment between modalities; therefore, we resort to human evaluation, which
offers insights into multimodal alignment quality, as misalignment impairs the
overall interpretation of the descriptions, especially the interpretation of
sketches/gestures.
Interactive referring games between humans and robots that reveal how humans
understand the generated descriptions, are good testbeds for such evaluation
(Kazemzadeh et al., 2014).
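The retrieval-based automatic metric suggested above can be sketched as a top-1 accuracy computation over embedded queries and gallery images; the embeddings here are random placeholders standing in for real encoder outputs:

```python
import numpy as np

def top1_retrieval_accuracy(query_vecs, gallery_vecs, targets):
    """Fraction of queries whose nearest gallery item (by cosine
    similarity) is the intended target image."""
    q = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
    g = gallery_vecs / np.linalg.norm(gallery_vecs, axis=1, keepdims=True)
    nearest = np.argmax(q @ g.T, axis=1)
    return float(np.mean(nearest == np.asarray(targets)))

# Toy example: 3 sketch queries against a gallery of 4 images, where each
# query embedding is a slightly perturbed copy of its target image embedding.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(4, 8))
targets = [2, 0, 3]
queries = gallery[targets] + 0.01 * rng.normal(size=(3, 8))

acc = top1_retrieval_accuracy(queries, gallery, targets)
```

A higher accuracy under this metric corresponds to more informative generated descriptions, as argued in the text.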
## 3\. Conclusion and Future work
We proposed the task of visually grounded multimodal description generation.
Along with discussions on the challenges regarding NLG, free-hand sketch
generation, multimodal alignment, and evaluation methods, we pointed out how
existing works of visually grounded NLG, photo-based sketch generation,
distributional semantics, and their respective evaluation metrics benefit the
proposed task. We believe that solving this task will lead to more natural
multimodal human-robot interaction in scenarios where robots can communicate
what they perceive in the world that surrounds them.
## Acknowledgments
This work is supported by the New Energy and Industrial Technology Development
Organization (NEDO) of Japan.
## References
* Belz and Reiter (2006) Anja Belz and Ehud Reiter. 2006. Comparing automatic and human evaluation of NLG systems. In _11th Conference of the European Chapter of the Association for Computational Linguistics_.
* Bruni et al. (2014) Elia Bruni, Nam-Khanh Tran, and Marco Baroni. 2014\. Multimodal distributional semantics. _Journal of Artificial Intelligence Research_ 49 (2014), 1–47.
* Eitz et al. (2012) Mathias Eitz, James Hays, and Marc Alexa. 2012. How do humans sketch objects? _ACM Transactions on graphics (TOG)_ 31, 4 (2012), 1–10.
* Han et al. (2017) Ting Han, Julian Hough, and David Schlangen. 2017. Natural Language Informs the Interpretation of Iconic Gestures. A Computational Approach. In _The 8th International Joint Conference on Natural Language Processing. Proceedings of the Conference. Vol. 2: Short Papers_.
* Han et al. (2015) Ting Han, Casey Kennington, and David Schlangen. 2015\. Building and Applying Perceptually-Grounded Representations of Multimodal Scene Descriptions. In _Proceedings of the 19th SemDial Workshop on the Semantics and Pragmatics of Dialogue (goDIAL)_.
* Han et al. (2018a) Ting Han, Casey Kennington, and David Schlangen. 2018a. Placing Objects in Gesture Space: Toward Real-Time Understanding of Spatial Descriptions. In _Proceedings of the thirty-second AAAI conference on artificial intelligence (AAAI18)_.
* Han and Schlangen (2017) Ting Han and David Schlangen. 2017. Draw and tell. multimodal descriptions outperform verbal-or sketch-only descriptions in an image retrieval task. In _The 8th International Joint Conference on Natural Language Processing. Proceedings of the Conference. Vol. 2: Short Papers_.
* Han et al. (2018b) Ting Han, Sina Zarrieß, Kazunori Komatani, and David Schlangen. 2018b. Learning to describe multimodally from parallel unimodal data? A pilot study on verbal and sketched object descriptions. In _Proceedings of the 22nd Workshop on the Semantics and Pragmatics of Dialogue (AixDial)_.
* Kazemzadeh et al. (2014) Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara Berg. 2014. Referitgame: Referring to objects in photographs of natural scenes. In _Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)_. 787–798.
* Kopp et al. (2008) Stefan Kopp, Kirsten Bergmann, and Ipke Wachsmuth. 2008\. Multimodal communication from multimodal thinking—towards an integrated model of speech and gesture production. _International Journal of Semantic Computing_ 2, 01 (2008), 115–136.
* Kopp et al. (2006) Stefan Kopp, Brigitte Krenn, Stacy Marsella, Andrew N Marshall, Catherine Pelachaud, Hannes Pirker, Kristinn R Thorisson, and Hannes Vilhjalmsson. 2006. Towards a common framework for multimodal generation: The behavior markup language. _Procedings Intelligent Virtual Agents 2006_ 4133 (2006).
* Kranstedt and Wachsmuth (2005) Alfred Kranstedt and Ipke Wachsmuth. 2005. Incremental generation of multimodal deixis referring to objects. In _Proceedings of the Tenth European Workshop on Natural Language Generation (ENLG-05)_.
* Lin et al. (2014) Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In _European conference on computer vision_. Springer, 740–755.
* Lu et al. (2017) Jiasen Lu, Caiming Xiong, Devi Parikh, and Richard Socher. 2017\. Knowing When to Look: Adaptive Attention via A Visual Sentinel for Image Captioning. _CVPR_.
* Lücking et al. (2010) Andy Lücking, Kirsten Bergmann, Florian Hahn, Stefan Kopp, and Hannes Rieser. 2010. The Bielefeld speech and gesture alignment corpus (SaGA). In _LREC 2010 workshop: Multimodal corpora–advances in capturing, coding and analyzing multimodality_.
* McNeill (1992) David McNeill. 1992\. _Hand and mind: What gestures reveal about thought_. University of Chicago press.
* Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002\. BLEU: a method for automatic evaluation of machine translation. In _Proceedings of the 40th annual meeting of the Association for Computational Linguistics_. 311–318.
* Sangkloy et al. (2016) Patsorn Sangkloy, Nathan Burnell, Cusuh Ham, and James Hays. 2016\. The sketchy database: learning to retrieve badly drawn bunnies. _ACM Transactions on Graphics (TOG)_ 35, 4 (2016), 1–12.
* Song et al. (2018) Jifei Song, Kaiyue Pang, Yi-Zhe Song, Tao Xiang, and Timothy M Hospedales. 2018. Learning to sketch with shortcut cycle consistency. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_. 801–810.
* Sowa and Wachsmuth (2009) Timo Sowa and Ipke Wachsmuth. 2009. A computational model for the representation and processing of shape in coverbal iconic gestures. In _Spatial Language and Dialogue_.
* Vedantam et al. (2015) Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015\. Cider: Consensus-based image description evaluation. In _Proceedings of the IEEE conference on computer vision and pattern recognition_. 4566–4575.
* Xu et al. (2015) Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015\. Show, attend and tell: Neural image caption generation with visual attention. In _International conference on machine learning_. 2048–2057.
* Xu (2020) Peng Xu. 2020. Deep learning for free-hand sketch: A survey. _arXiv preprint arXiv:2001.02600_ (2020).
* Zhang et al. (2015) Liliang Zhang, Liang Lin, Xian Wu, Shengyong Ding, and Lei Zhang. 2015. End-to-end photo-sketch generation via fully convolutional representation learning. In _Proceedings of the 5th ACM on International Conference on Multimedia Retrieval_. 627–634.
# Are Top School Students More Critical of Their Professors?
Mining Comments on RateMyProfessor.com
Ziqi Tang, Yutong Wang, Jiebo Luo
University of Rochester
<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
## Abstract
Student reviews and comments on RateMyProfessor.com reflect realistic learning
experiences of students. Such information provides a large-scale data source
to examine the teaching quality of the lecturers. In this paper, we propose an
in-depth analysis of these comments. First, we partition our data into
different comparison groups. Next, we perform exploratory data analysis to
delve into the data. Furthermore, we employ Latent Dirichlet Allocation and
sentiment analysis to extract topics and understand the sentiments associated
with the comments. We uncover interesting insights about the characteristics
of both college students and professors. Our study demonstrates that student reviews
and comments contain crucial information and can serve as essential references
for enrollment in courses and universities.
## Introduction
Since 1983, the U.S. News & World Report has been publishing rankings for the
colleges and universities in the United States each fall. These rankings have
remarkable impacts on applications, admissions, enrollment decisions, as well
as tuition pricing policies (Monks and Ehrenberg 1999). It is an important
reference for not only students and parents, but also institutions and
professors. The ranking methodology measures and calculates a variety of
factors, and has been continuously refined over time based on user feedback,
discussions with institutions and education experts, literature reviews and
their own data (Morse and Brooks 2020). The current ranking methodology
considers the following factors, along with indicator weights: Graduation and
Retention Rates (22%), Undergraduate Academic (20%), Faculty resources (20%),
Financial Resources (10%), student selectivity for entering class (7%),
Graduation Rate performance (8%), Social Mobility (5%), Graduate Indebtedness
(5%), and Alumni Giving Rate (3%). This measurement takes a good number of
objective factors into consideration. However, the learning experiences of
students are subjective and personal, which cannot readily be represented by
the ranking scores. In this regard, a professor rating website such as
RateMyProfessors.com is a great resource to uncover the hidden knowledge about
the learning experience that the U.S. News Rankings cannot account for.
Rate My Professor is a website that allows students to anonymously rate their
professors and write comments. The website claims that users have added more
than 19 million ratings, 1.7 million professors, and over 7,500 schools to the
website, and there are more than 4 million college students using this website
each month (Rat 2020). Such massive text data is a great resource to study the
following topics: features of different universities, learning experiences of
students, and course and lecture qualities. Past literature has primarily
examined the usefulness and validity of these ratings (Otto, Jr, and Ross
2008), and the correlation levels between easiness, clarity and helpfulness of
lecturers (Otto, Sanford, and Wagner 2011). Yet the rich data on Rate My
Professor contain more hidden information to discover. A unique feature of
Rate My Professor is that it has professor reviews from different tiers of
universities, such as Ivy League schools, Big Ten schools, and community
colleges. These reviews discuss the same topic, which is the experiences of
taking a course from a college professor. This provides an opportunity to
conduct a plausible control variable experiment to learn about the
characteristics of students and professors in different universities or
colleges.
In summary, this study makes several contributions:
1. 1.
We conduct a large-scale study of the course learning experiences across the
broad spectrum of universities and colleges in the United States.
2. 2.
We employ exploratory data analysis, topic modeling, and sentiment analysis to
mine the behaviors and characteristics of different segments of colleges,
students and professors.
3. 3.
We uncover interesting and useful insights that can be used to understand and
improve the learning experience.
## Data Collection and Preprocessing
Rate My Professors data was scraped from the website. We selected about 75
universities based on the U.S. News college rankings of 2020. The rationale of
our selection was the following: The eight Ivy League schools represent the
top ranked private universities, ten Big Ten Academic Alliance Member
universities represent the top ranked public universities, and the top 15
ranked community colleges in the United States represent the community
colleges. In addition, we selected the top 25 ranked universities and those
ranked in [100 - 125] in the United States.
For each university in our selections, we picked the 60 most-rated professors
(not highest rated), and for each professor page, we scraped the most recent
20 comments. In total, we collected 87,436 data records, containing the
following attributes: “Professor ID”, “Professor Name”, “University”,
“Department”, “Course ID”, “Quality score”, “Difficulty score”, “Comments”.
Each data record represents a review by a student on a course.
We partitioned the collected data into several datasets. The rationale was the
following:
1. 1.
Based on the school type, we partition the data into three categories: private
(Ivy League), public (Big Ten), and community colleges.
2. 2.
Based on the average rating scores of the professors, we calculate the average
quality score of each professor, and selected those professors with an average
score above 4.0 and below 2.0 (the full score range is from 1.0 to 5.0), as
the high-rating professor and low-rating professor groups, respectively.
3. 3.
Based on the quality score of each comment, we also create datasets for three
categories: comments with a score above 4.0, comments with a score below 2.0,
and comments with a score in between.
In the end, we have 11 datasets for three types of comparison. Note that these
datasets may overlap with each other.
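Assuming the scraped records are loaded into a pandas DataFrame with the attributes listed above, the three partitioning rules might be implemented as follows. This is an illustrative sketch, not the authors' code, and the `School Type` column is an added helper rather than a scraped attribute:

```python
import pandas as pd

# Toy records with the scraped attributes (column names as in the text).
df = pd.DataFrame({
    "Professor ID": [1, 1, 2, 2, 3],
    "University": ["Cornell", "Cornell", "Ohio State", "Ohio State", "De Anza"],
    "School Type": ["ivy", "ivy", "big_ten", "big_ten", "community"],
    "Quality score": [4.5, 5.0, 1.5, 2.0, 3.0],
})

# 1) Partition by school type (private/Ivy, public/Big Ten, community).
by_type = {t: g for t, g in df.groupby("School Type")}

# 2) High- and low-rating professor groups by average quality score.
avg = df.groupby("Professor ID")["Quality score"].mean()
high_profs = avg[avg > 4.0].index
low_profs = avg[avg < 2.0].index

# 3) Partition individual comments by their quality score.
high_comments = df[df["Quality score"] > 4.0]
low_comments = df[df["Quality score"] < 2.0]
mid_comments = df[df["Quality score"].between(2.0, 4.0)]
```

Because rules 2 and 3 are applied independently of rule 1, the resulting datasets can overlap, as noted above.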
## Exploratory Data Analysis
We perform exploratory analysis with the following initial findings:
Figure 1: Distributions of Word Counts in different groups. Figure 2:
Punctuation usage distributions.
* •
The word counts shown in Figure 1 indicate that most of the comments contain
around 60 words. All groups have similar distributions. The only difference is
that Ivy League students use short phrases more often than other groups.
* •
The punctuation usage shown in Figure 2 demonstrates that the most commonly
used punctuation mark is the period. The distributions for all groups are similar as well.
The only difference is that community college students use commas, dashes and
exclamation marks more frequently than other groups.
* •
Figure 3 shows the distribution of average quality ratings of all professors,
which is left-skewed.
* •
Figure 4 is the distribution of quality ratings from all schools (75 schools).
* •
From Figure 5, the proportions of quality ratings of the three different
groups of schools are different. Community college students give more high
(5/5) ratings, while Ivy League students give fewer low (1/5) ratings. This
answers our initial question in the title – top school students are not
critical when rating their professors and course quality.
* •
Figure 6 shows the proportions of difficulty ratings of the three different
groups of schools, which are very similar.
* •
The correlations between quality ratings and difficulty ratings for Ivy
League, Big Ten and community colleges are [-0.178, -0.424, -0.515],
respectively. All groups have negative correlation values that imply the
quality rating decreases when difficulty rating increases, and vice versa. Ivy
League’s correlation is closer to zero which means there is little
relationship between quality ratings and difficulty ratings. Moreover,
students from Big Ten schools and community colleges are more likely to give a
higher quality rating when the course is easy.
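The per-group correlations quoted above can be reproduced with a direct Pearson computation over paired rating lists; the ratings below are toy values for illustration, not the paper's data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two rating lists."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm**2).sum() * (ym**2).sum()))

# Toy ratings where quality tends to drop as difficulty rises,
# mirroring the negative correlations reported in the text.
difficulty = [1, 2, 3, 4, 5]
quality = [5, 4, 4, 2, 1]

r = pearson_r(difficulty, quality)
```

A value near -1, as here, corresponds to the community-college pattern of rewarding easier courses with higher quality ratings; a value near 0 corresponds to the Ivy League pattern.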
Figure 3: Distribution of average quality ratings of all professors. Figure 4:
Distribution of quality ratings of all schools. Figure 5: Quality rating
distributions of Ivy League, Big Ten, and community colleges. Figure 6:
Difficulty rating distributions of Ivy League, Big Ten, and community
colleges.
## Quality Rating Analysis Using Topic Modeling
In order to find out what factors influence the quality ratings, we perform
Latent Dirichlet Allocation (LDA) to extract topics of the comments. We
implement a few types of topic modeling methods: LDA, BiGram LDA and TriGram
LDA using the Gensim library. Also, we apply traditional LDA and Multi-Grain
LDA using the Tomotopy library. Gensim is a well-known python library for
topic modeling, and Tomotopy is a new library that provides functions for
topic modeling. The advantages of Tomotopy are the capability of dealing with
large-scale datasets, a significantly faster running time (5 to 10
times faster than Gensim), and its availability for implementing Multi-Grain
LDA. Multi-Grain LDA takes both local topics and global topics into
consideration when performing topic modeling. Therefore, we decide to examine
the Tomotopy Multi-Grain LDA model for our study. BiGram, TriGram and Multi-
Grain LDA models are similar algorithms to the traditional LDA. However, they
have an additional step that adds N-gram phrases to increase the model’s
complexity, which could be useful in boosting the model’s performance. In our
case, the BiGram model has phrases like: “easy-A”, “office-hour”, “online-
course”, etc. For the TriGram model, there are phrases like: “extra-credit-
opportunity”, “attendance_isn_mandatory”, etc.
In order to evaluate the performance of all of these models, we use coherence
score, pyLDAvis visualization, log-likelihood, and manual checking as our
evaluation metrics. For LDA, BiGram LDA and TriGram LDA models using Gensim,
their coherence score comparison is shown in Figure 7. Furthermore, we use the
pyLDAvis topic modeling visualization tool to analyze the performance of
models. For the Multi-Grain LDA model using Tomotopy, the library does not
generate coherence scores, which is a downside of this library. Therefore, we
decide to manually check all the topics these models generate and choose the
one that makes more sense to us. Figure 8 shows the resulting topics we create
using BiGram LDA, TriGram LDA and Multi-Grain LDA methods. They are all
generated from the same dataset (community college lower quality rating
comments) and have the same number of top topics selected (nine topics). A
major portion of the topics are similar. However, the TriGram LDA model covers
the most topics. For instance, we see the key word “online” in the result
of TriGram LDA. Since this is a community college dataset, we can infer that
community colleges tend to offer more online classes than other groups, which
could be a factor that students consider when they rate the course quality.
Moreover, we also see “accent” for the first time in the result of TriGram
LDA. This is an interesting factor to include because many students
actually have a difficult time understanding their professors’ accents. The
communication experience is an important aspect of course quality rating.
Figure 7: Coherence score comparison between BiGram LDA and TriGram LDA
models. Figure 8: Comparison of topic results from Multi-Grain LDA, BiGram LDA
and TriGram LDA models.
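The N-gram phrase step described above can be illustrated with a minimal pointwise-mutual-information bigram detector, a simplified stand-in for library routines such as Gensim's `Phrases`:

```python
import math
from collections import Counter

def detect_bigrams(docs, min_count=2, threshold=0.0):
    """Merge adjacent word pairs whose PMI exceeds a threshold,
    mimicking the phrase step before LDA (e.g., 'office_hour')."""
    words = Counter(w for d in docs for w in d)
    pairs = Counter((a, b) for d in docs for a, b in zip(d, d[1:]))
    total = sum(words.values())
    phrases = set()
    for (a, b), c in pairs.items():
        if c >= min_count:
            pmi = math.log((c * total) / (words[a] * words[b]))
            if pmi > threshold:
                phrases.add((a, b))
    merged = []
    for d in docs:
        out, i = [], 0
        while i < len(d):
            if i + 1 < len(d) and (d[i], d[i + 1]) in phrases:
                out.append(d[i] + "_" + d[i + 1])
                i += 2
            else:
                out.append(d[i])
                i += 1
        merged.append(out)
    return merged

# Toy tokenized comments; "office hour" recurs and gets merged.
docs = [
    ["great", "office", "hour", "help"],
    ["office", "hour", "every", "week"],
    ["easy", "grading", "great", "help"],
]
merged = detect_bigrams(docs)
```

The merged tokens such as `office_hour` are then treated as single vocabulary items in the subsequent LDA run, which is what gives the BiGram and TriGram models their richer topic keywords.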
### Ivy League vs. Big Ten vs. Community Colleges
#### Higher Ratings (4-5)
The key words of the topics of higher quality ratings for the three groups are
listed in Figure 9. There are many factors that students mentioned in the
comments when giving higher ratings. For example, school works (homework,
test, exam), extra help (office_hour), professor’s personality (friendly,
humor, entertaining), and so on. Meanwhile, some unexpected words stand out in
the table: “tough”, “boring”, “strict”, implying that these are not negatively
affecting Ivy League and community college’s quality ratings. In addition,
both Big Ten and community college students mention “extra_credit”, “grade”
more often. The word “friend” appears in Big Ten’s topics, perhaps implying
students in Big Ten schools are more likely to get along with their professors
like friends.
#### Lower Ratings (1-2)
The key words of the topics of lower quality ratings for the three groups are
listed in Figure 10. A number of factors are mentioned by students in the
comments with lower ratings. For example, school works (homework, test, exam),
organization of the content (unclear, disorganized, useless), professor’s
attitude (manner, rude, arrogant), and so on. One thing to point out is that
“cost” is a common factor through all schools as the cost of textbooks,
supplies and software has significantly negative effects on quality ratings.
#### Middle Ratings (2-4)
The topic key words of middle quality ratings for the three groups are listed
in Figure 11. The middle rating comments are usually not too extreme. We note
that “accent” appears under Big Ten’s topics, and in community college’s
topics for lower ratings. This suggests that Big Ten school students may have
a higher tolerance for professors’ accents than community college students.
Figure 9: Topic key words of higher ratings (4-5) of Ivy League vs. Big Ten
vs. community colleges. Figure 10: Topic key words of lower ratings (1-2) of
Ivy League vs. Big Ten vs. community colleges. Figure 11: Topic key words of
middle ratings (2-4) of Ivy League vs. Big Ten vs. community colleges.
### High-Rating Professors vs. Low-Rating Professors
The key words in the comments for professors with an average quality rating
higher than 4 and lower than 2 are listed in Figure 12. One thing to notice is
that the coherence score of higher rating professors is lower, which means the
topics of these comments are more dissimilar. Factors that affect the average
ratings of professors are: grade, difficulty, organization of the contents,
personality, extra help, passion, knowledge, fairness, and so on. In contrast,
“voice” and “recitation” appear in the lower rating professors category, and
this is the only time they appear. This implies communication is critical to
students’ experience in classes, and professors teaching science classes
(Physics, Chemistry, Biology) that have recitation sections tend to get lower
average ratings.
Figure 12: Topic key words of professors with high average ratings vs.
professors with low average ratings.
## Sentiment Analysis Using A Lexical Approach
The LIWC2015 toolkit includes the main text analysis module along with a group
of predefined internal lexicons. The text analysis module compares each word
in the text against the dictionary and then identifies which words are
associated with which psychologically-relevant categories (Pennebaker et al.
2015). It has been used on previous studies for sentiment analysis on text
data from social media (e.g., Chen et al. 2020). LIWC2015 provides about a
hundred psychologically-relevant categories, from which we select around 20
categories for our analysis.
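LIWC's word-counting mechanism can be sketched as follows. The tiny lexicons here are invented placeholders, not the proprietary LIWC2015 dictionaries, and the score is the LIWC-style percentage of words in a comment that fall into each category:

```python
import re

# Tiny illustrative lexicons (NOT the actual LIWC2015 dictionaries).
LEXICONS = {
    "positive_emotion": {"great", "love", "helpful", "amazing"},
    "anxiety": {"stress", "worried", "anxious"},
    "achievement": {"goal", "learn", "succeed", "skills"},
}

def category_scores(comment):
    """Percentage of words matching each category lexicon."""
    tokens = re.findall(r"[a-z']+", comment.lower())
    n = len(tokens) or 1
    return {cat: 100.0 * sum(t in lex for t in tokens) / n
            for cat, lex in LEXICONS.items()}

scores = category_scores("Great professor, helpful and I learn new skills!")
```

Averaging such per-comment percentages within each comparison group yields the group means compared in Figure 13.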
### Ivy League vs. Big Ten vs. Community Colleges
Figure 13: LIWC results of two comparison groups. Group A: Ivy League vs. Big
Ten vs. community colleges. Group B: professors with average ratings above 4.0
vs. professors with average ratings below 2.0. Grand Mean is the unweighted
mean of the six genres, and Mean SD refers to the unweighted mean of the
standard deviations across the six genre categories.
After we obtain the LIWC scores for each data record, we calculate the average
scores and standard deviations. Figure 13 shows the LIWC results for our first
comparison group (Group A). Some interesting categories stand out: positive
emotion, anxiety, achievement, sexual, and gender. We run the t-test on these
categories and LIWC grand average scores. The two-tailed P values for Positive
Emotion, Achievement and Male Reference were all below 0.001 ($P<0.001$). By
conventional criteria, the differences are considered to be statistically
significant.
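The group comparisons above rest on a two-sample t statistic, which can be sketched as follows (Welch's unequal-variance form; the scores are toy values, not the paper's data):

```python
import numpy as np

def welch_t(a, b):
    """Welch's t statistic for two independent samples of LIWC scores."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    return float((a.mean() - b.mean()) / se)

# Toy per-comment Positive Emotion scores for two school groups.
ivy = [6.1, 5.8, 6.4, 6.0, 5.9]
community = [4.9, 5.1, 4.8, 5.2, 5.0]

t = welch_t(ivy, community)
```

A large |t| (here well above typical critical values) yields a two-tailed P value below the conventional significance thresholds used in the text; in practice `scipy.stats.ttest_ind` returns the statistic and P value together.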
We make the following observations:
1. 1.
The positive emotion scores for college students are overall higher than the
average. The Ivy League students score is not only higher than the grand
average, but also higher than the other groups. It indicates that students
from top ranked private schools do not tend to criticize professors more,
instead they praise the professors more often than other groups.
2. 2.
The Achievement score for community college students is higher than other
groups. Our interpretation is that the community college students may have had
jobs previously and they decide to attend community college because they want
to receive more education and learn more skills. They possibly have clearer
motivation and goals than other groups. Therefore they tend to talk about
achievement-related topics more often in their comments.
3. 3.
The Female Reference score for community colleges and Male Reference score for
Ivy League schools stand out. The gender reference scores are measured when
the students mention gender-related phrases, such as he or she. Because the
Rate My Professor website does not record the gender of the professor, and we
collect a fixed number of comments from each professor, the scores generated
from gender reference words are the most practical way to infer the gender of
professors. Our analysis indicates that there are more male
professors in Ivy League schools and more female lecturers in the community
colleges. Our interpretation is that for research-oriented institutions, like
Ivy League schools and Big Ten schools, the professors are required to perform
research and teaching full-time. Community colleges, on the other hand, have
more part-time lecturer positions. For female professors who might have to
take care of family and children at the same time, teaching part-time at
community college seems to be a good option.
4. 4.
The anxiety scores are considered to be statistically insignificant. Based on
the literature, our expectation was that students attending top ranked private
colleges have a higher likelihood to feel depression and pressure (Deresiewicz
2014). However, the LIWC results show that the students did not express
pressure and anxiety in their reviews. Our interpretation is that these
comments were mostly written after the final exams or projects. The students
no longer feel anxious at the time they post the comments.
5. 5.
The sexual scores are considered to be statistically insignificant. The sexual
category contains phrases that describe the appearance of the professors. This
could indicate whether the appearance could affect the student ratings and
comments. Our study showed there is no evidence to prove the existence of
connection between appearance and student ratings.
### High-Rating Professors vs. Low-Rating Professors
Similarly, after we obtain the LIWC scores for each data record, we calculate
the average scores and standard deviations. Figure 13 also shows the LIWC
results for our second comparison group (Group B). Some interesting categories
stand out: Achievement and Gender. We run the t-test on these categories and
LIWC grand average scores. The two-tailed P values for Achievement and Gender
Reference were both below 0.01 ($P<0.01$). By conventional criteria, the
differences were considered to be statistically significant.
The specific findings are:
1. 1.
The Achievement score for high-rating professors is higher than the low-rating
professors. This may indicate that apart from the general impressions people
have for a good professor, students think a good professor also needs to know
how to motivate the students.
2. 2.
The Female Reference score for low-rating professors is higher, while the Male
Reference score for high-rating professors is higher. This shows that there
are more low-rating female professors and more high-rating male professors. It
may imply that students are more critical of female professors than male
professors.
## Conclusion and Future Work
In this paper, we have presented a framework of evaluating the learning
experiences of college students from a more subjective perspective. We first
partition the scraped data from RateMyProfessor.com into different groups and
apply several LDA models to understand the topics of the comments.
Furthermore, we perform sentiment analysis using LIWC2015. We discover a
number of interesting findings that may be helpful for improving the college
learning experience for all parties involved, including students, professors
and administrators.
There are three possible directions for future work. First, we can investigate
a fine-grained partition strategy to divide the data by departments, subjects
or courses. Second, we can track the comments over time. Our current dataset
contain comments from 2018 to 2020, while most of the comments are posted in
May and December, which are the ends of spring and fall semesters. With more
data over time, we may study how individual professors’ teaching styles change
and examine this problem from a temporal point of view. Lastly, many
in-person lectures were switched to online lectures due to COVID-19 and
quarantine. A valuable study would be to first identify the courses that moved
from in-person to online and then understand the changes in students’
experiences.
## References
* Rat (2020) 2020\. Rate My Professors About Page. URL https://www.ratemyprofessors.com/About.jsp.
* Abulaish et al. (2009) Abulaish, M.; Jahiruddin; Doja, M. N.; and Ahmad, T. 2009. Feature and Opinion Mining for Customer Review Summarization. In Chaudhury, S.; Mitra, S.; Murthy, C. A.; Sastry, P. S.; and Pal, S. K., eds., _Pattern Recognition and Machine Intelligence_ , 219–224. Berlin, Heidelberg: Springer Berlin Heidelberg. ISBN 978-3-642-11164-8.
* Arbaugh (2001) Arbaugh, J. 2001. How Instructor Immediacy Behaviors Affect Student Satisfaction and Learning in Web-Based Courses. _Business Communication Quarterly_ 64(4): 42–54.
* Blei, Ng, and Jordan (2003) Blei, D.; Ng, A.; and Jordan, M. 2003. Latent Dirichlet Allocation. _Journal of Machine Learning Research_ 3: 993–1022.
* Blei and Lafferty (2006) Blei, D. M.; and Lafferty, J. D. 2006. Dynamic Topic Models. In _Proceedings of the 23rd International Conference on Machine Learning_ , ICML ’06, 113–120. New York, NY, USA: Association for Computing Machinery. ISBN 1595933832.
# Simple prediction of immiscible metal alloying based on metastability
analysis
Shota Ono<EMAIL_ADDRESS>Department of Electrical, Electronic and
Computer Engineering, Gifu University, Gifu 501-1193, Japan Junji Yuhara
Department of Energy Engineering, Nagoya University, Nagoya 464-8603, Japan
Jun Onoe Department of Energy Science and Engineering, Nagoya University,
Furo-cho, Chikusa-ku, Nagoya, Aichi 464-8603, Japan
###### Abstract
It has been known that even though two elemental metals, $X$ and $Y$, are
immiscible, they can form alloys on the surface of another metal $Z$. To
understand such surface alloying of immiscible metals, we study the energetic
stability of binary alloys, $XZ$ and $YZ$, in several structures with various
coordination numbers (CNs). By analyzing a formation energy modified to
enhance the subtle energy differences between metastable structures, we find
that $XZ$ and $YZ$ in the B2-type structure (CN$=$8) become energetically stable
when the $X$ and $Y$ metals form an alloy on the $Z$ metal surface. This is
consistent with the experimental results for Pb-Sn alloys on metal surfaces
such as Rh(111) and Ru(0001). Suitable metal substrates for forming Pb-Sn
alloys are also predicted.
## I Introduction
Characterizing the structure of alloys is an important issue in materials
science. In general, alloys can be classified into two groups: ordered alloys
having regular lattices and disordered alloys (or solid solutions). On the
other hand, some metals are immiscible with each other in the bulk. Therefore,
many attempts have been made to understand the structure of alloys in a
unified manner. For example, the use of the Mendeleev number has enabled us to
categorize the structure of binary alloys pettifor . More recently, in order
to predict the ground-state structures, the density-functional theory (DFT)
approach combined with machine learning methods ceder ; schleder , high-
throughput calculations wolverton ; hart , and cluster expansion methods
nelson_CE ; seko has been proved to be useful.
Surface alloying has been observed in systems whose bulk alloys can be synthesized
overbury ; dhaka ; sad . Counterintuitively, surface alloying has also
been reported between immiscible elements nielsen ; roder ; nagl ; steve ;
tober ; sadigh ; chen ; yuhara_Rh ; yuhara_Ru ; yuhara_Ag . One of the authors
has investigated the alloying of immiscible metals (Pb and Sn) on various
surfaces including Rh(111) yuhara_Rh , Ru(0001) yuhara_Ru , Ag(111) yuhara_Ag
, and Al(111) yuhara_Al . On the Rh and Ru surfaces, the Pb-Sn films form
ordered structures; on the Ag surface, they form disordered
structures; and on the Al surface, the Pb and Sn atoms remain immiscible. These
observations indicate that the substrate plays a crucial role in determining the structure
of Pb-Sn thin films. While the formation of surface alloys can, in principle,
be studied by using the DFT approach yang ; marathe2009 ; marathe2013 ,
treating surfaces with many atoms is a very complex task.
The coordination number (CN) would be a key to understanding the alloying of
immiscible metals. Nielsen et al. have reported the growth of Au on Ni(110),
irrespective of their immiscibility in the bulk nielsen . Within the effective
medium theory, they have shown that the cohesive energy of Au as a function of
the number of Ni neighbors takes its minimum when an Au atom is surrounded by
eight Ni atoms. This CN is smaller than the twelve realized in the face-centered
cubic lattice structure, but is close to the seven realized at the
Ni(110) surface, yielding an increase in the energy gain when alloying occurs near the
surface. This study implies that the substrate effect can be incorporated
through the CN. We expect that this can also be modeled by the energetic stability of
metastable (or hypothetical) structures: the total energies of alloys in
various structures would provide useful information for understanding the
alloying.
In this paper, we explore a simple scheme for predicting the alloying of
immiscible metals on another metal surface based on DFT calculations with low
computational cost. In order to understand the formation of alloys $XY$ on the
$Z$ metal substrate, we study the energetic stability of binary alloys $XZ$
and $YZ$ in five structures: buckled honeycomb (bHC), buckled square
(bSQ), B2, L10, and Bh (see Fig. 1). These have different CNs, allowing us to
study the substrate effect. We propose a modified formation energy that
identifies the effect of different CNs on the alloying, and demonstrate that
if the alloy $XY$ on the surface $Z$ has been synthesized experimentally, the
modified formation energy of the B2 structure (CN$=$8) is negatively large.
Using this fact and our high-throughput calculations ono2020 , we predict
suitable substrates for synthesizing Pb-Sn alloys. We also demonstrate that
the metastability of strain-induced alloys behaves in a different manner, in which the
B2 structure does not take the minimum value of the modified formation energy.
Figure 1: The crystal structures of (a) bHC (top and side), (b) bSQ (top and
side), (c) B2, (d) L10, and (e) Bh alloys.
The advantage of the present approach is that it predicts suitable substrates for
surface alloying without performing structure optimization of slab
models. The physics behind this simplification is that the B2 structure model
partly accounts for the interaction between the surface alloy and the
substrate, in the sense that the CNs in the two systems take similar values. On the
other hand, it is difficult to predict which surface orientations are suitable
for alloying. We therefore consider that the present approach is best used
for screening combinations of elemental metals. It would be desirable to
determine the complex structure of the surface alloys predicted below and study
their dynamical stability by performing phonon dispersion calculations; however,
this is beyond the scope of the present work.
## II Computational details
We calculated the total energy of alloys based on DFT as implemented in the Quantum
ESPRESSO (QE) code qe . The computational details were the same as in our
high-throughput calculations ono2020 . We used the Perdew-Burke-Ernzerhof
exchange-correlation functional within the generalized gradient approximation
pbe and ultrasoft pseudopotentials (pslibrary.1.0.0) dalcorso . The
cutoff energies for the wavefunction and the charge density were set to 80 and
800 Ry, respectively. The self-consistent calculations within the spin-restricted
approximation were performed using 20$\times$20$\times$1 and
20$\times$20$\times$20 $k$ grids for two-dimensional (2D) and three-dimensional
(3D) structures, respectively MK . The smearing parameter of the Marzari-
Vanderbilt smearing was set to $\sigma=0.02$ Ry. For 2D structures, we set
the size of the unit cell along the $c$ axis to 14 Å, which is large enough to
avoid interlayer coupling between different unit cells. The total energy
and forces were converged within $10^{-4}$ Ry and $10^{-3}$ a.u.,
respectively.
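For reference, the numerical settings quoted above can be collected in a small helper. This is an illustrative sketch only: the function and key names are our own, and the structural and pseudopotential parts of a real QE input are omitted.

```python
# Collect the SCF settings stated in the text: 80/800 Ry cutoffs,
# Marzari-Vanderbilt smearing with sigma = 0.02 Ry, 20x20x1 (2D) or
# 20x20x20 (3D) k grids, and a 14 Angstrom vacuum for 2D cells.

def scf_settings(two_dimensional: bool) -> dict:
    """Return the calculation settings used in the paper (as quoted)."""
    settings = {
        "ecutwfc": 80.0,            # wavefunction cutoff (Ry)
        "ecutrho": 800.0,           # charge-density cutoff (Ry)
        "occupations": "smearing",
        "smearing": "marzari-vanderbilt",
        "degauss": 0.02,            # smearing width sigma (Ry)
    }
    settings["kpoints"] = (20, 20, 1) if two_dimensional else (20, 20, 20)
    if two_dimensional:
        settings["vacuum_c_angstrom"] = 14.0  # interlayer separation
    return settings

def format_kpoints(kpts) -> str:
    """Render a Monkhorst-Pack grid as a K_POINTS card."""
    kx, ky, kz = kpts
    return f"K_POINTS automatic\n{kx} {ky} {kz} 0 0 0\n"

if __name__ == "__main__":
    print(format_kpoints(scf_settings(two_dimensional=False)["kpoints"]))
```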
The standard formation energy of the binary alloy $XY$ in the structure $j$ is
defined as
$\displaystyle
F_{j}(XY)=\varepsilon_{j}(XY)-\frac{1}{2}\left[\min_{j}\varepsilon_{j}(X)+\min_{j}\varepsilon_{j}(Y)\right],$
(1)
where $\varepsilon_{j}(XY)$ is the total energy of the alloy $XY$ in
the structure $j$, and $\min_{j}\varepsilon_{j}(X)$ is the minimum value of
$\varepsilon_{j}(X)$ among the $j$s, with
$\varepsilon_{j}(X)=\varepsilon_{j}(XX)$. A negative value of $F_{j}(XY)$
indicates that alloying yields an energetically more stable structure. However,
with Eq. (1) it would be difficult to distinguish the subtle energy
differences (a few meV per atom) among 3D (and 2D) structures. Alternatively,
by adding and subtracting the total energies of the structure $j$ for the elements
$X$ and $Y$, Eq. (1) can be rewritten as
$\displaystyle F_{j}(XY)=E_{j}(XY)+\frac{1}{2}\left[S_{j}(X)+S_{j}(Y)\right],$
(2)
where $E_{j}(XY)$ is the modified formation energy defined as
$\displaystyle
E_{j}(XY)=\varepsilon_{j}(XY)-\frac{1}{2}\left[\varepsilon_{j}(X)+\varepsilon_{j}(Y)\right],$
(3)
and $S_{j}(X)$ is the structure energy defined as
$\displaystyle S_{j}(X)=\varepsilon_{j}(X)-\min_{j}\varepsilon_{j}(X).$ (4)
When the element $X$ has the structure $j=G$ as its ground state, $S_{G}(X)$
is exactly zero, yielding $F_{G}(XY)=E_{G}(XY)$. For a metastable structure
$M$, $S_{M}(X)>0$ by definition. Therefore, for $F_{M}(XY)$ to be negative, the
positive values of $S_{M}(X)$ and $S_{M}(Y)$ must be cancelled out by a negative
value of $E_{M}(XY)$. This is because $E_{M}(XY)$ is measured from the energy
of the metastable structure of $X$ and/or $Y$, as in Eq. (3). The energy
differences between 3D (and 2D) structures in Eq. (3) become much larger than
those in Eq. (1), which is useful for identifying the CN dependence of the
formation energy. For example, if the elements $X$ and $Z$ in the B2 (i.e., the
body-centered cubic) structure are less stable but the alloy $XZ$ in the B2
structure is energetically stable, $E_{\rm B2}(XZ)$ will be negatively large,
implying that surface alloying with CN$=$8 is preferred. Below, we use Eq. (3)
as the formation energy.
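Eqs. (1)-(4) can be sketched numerically as follows. The total energies below are illustrative placeholders, not DFT results, and the helper names are our own; the final loop checks the identity of Eq. (2), $F_{j}=E_{j}+[S_{j}(X)+S_{j}(Y)]/2$.

```python
# Toy implementation of Eqs. (1)-(4) with made-up energies per atom (eV).
STRUCTURES = ["bHC", "bSQ", "B2", "L10", "Bh"]

def structure_energy(eps_elem):
    """S_j(X) = eps_j(X) - min_j eps_j(X), Eq. (4)."""
    e0 = min(eps_elem.values())
    return {j: e - e0 for j, e in eps_elem.items()}

def modified_formation(eps_alloy, eps_x, eps_y):
    """E_j(XY) = eps_j(XY) - [eps_j(X) + eps_j(Y)]/2, Eq. (3)."""
    return {j: eps_alloy[j] - 0.5 * (eps_x[j] + eps_y[j]) for j in eps_alloy}

def standard_formation(eps_alloy, eps_x, eps_y):
    """F_j(XY) = eps_j(XY) - [min eps(X) + min eps(Y)]/2, Eq. (1)."""
    ref = 0.5 * (min(eps_x.values()) + min(eps_y.values()))
    return {j: eps_alloy[j] - ref for j in eps_alloy}

# hypothetical total energies, chosen so the alloy favors B2
eps_X = {"bHC": -3.1, "bSQ": -3.3, "B2": -3.6, "L10": -3.8, "Bh": -3.7}
eps_Y = {"bHC": -2.9, "bSQ": -3.0, "B2": -3.2, "L10": -3.5, "Bh": -3.4}
eps_XY = {"bHC": -3.0, "bSQ": -3.2, "B2": -3.9, "L10": -3.6, "Bh": -3.5}

E = modified_formation(eps_XY, eps_X, eps_Y)
F = standard_formation(eps_XY, eps_X, eps_Y)
S_X, S_Y = structure_energy(eps_X), structure_energy(eps_Y)

# Eq. (2) identity: F_j = E_j + [S_j(X) + S_j(Y)]/2 for every structure
for j in STRUCTURES:
    assert abs(F[j] - (E[j] + 0.5 * (S_X[j] + S_Y[j]))) < 1e-12
```

With these placeholder numbers, $E_{\rm B2}$ is the lowest of the five structures, mimicking the B2-favored situation discussed in the text.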
The CNs of the bHC, bSQ, B2, L10, and Bh structures are three, four, eight,
twelve, and twelve, respectively. The latter two values may depend on the
ratio $c/a$ of the lattice parameters: when $c/a=\sqrt{2}$ and 1.63 in L10 and
Bh, respectively, the CNs are exactly twelve. On the other hand, when $c/a=1$
exactly in L10, the structure is identical to the B2 structure.
## III Results and discussion
Figure 2: The formation energy of AuNi and PbSn in the bHC, bSQ, B2, L10, and
Bh structures. Figure 3: Same as Fig. 2 but for PbAg, PbAl, SnAg, and SnAl.
Figure 4: Same as Fig. 2 but for PbRh, PbRu, SnRh, and SnRu.
As mentioned above, Au is immiscible with Ni in the bulk but forms alloys on the Ni
surface nielsen . In order to demonstrate how our approach captures this
alloying, we first consider the stability of Au-Ni systems. Figure 2 shows the
values of $E_{j}({\rm AuNi})$ for $j=$bHC, bSQ, B2, L10, and Bh structures
(blue circles). Among the five structures, the B2 structure has the lowest
$E_{j}$. Since CN$=$8 for B2, this is consistent with the effective medium
theory analysis nielsen , indicating that surface alloying is related to
the relative stability of the B2 structure.
Figure 5: $E_{j}$ as a function of the atomic number of $X$ for (a) Pb$X$ and
(b) Sn$X$ alloys in the B2 (square) and Bh (triangle) structures. The vertical
dashed lines indicate the atomic number of alkali metals (Li, Na, K, Rb, and
Cs).
### III.1 Pb-Sn alloys
We next consider the immiscible Pb-Sn alloys. As shown in Fig. 2 (orange
circles), the value of $E_{j}({\rm PbSn})$ is insensitive to the structure
and is slightly above zero. We thus expect that these metals can form alloys
when placed on appropriate substrates: the energy gain due to the
electrostatic interaction with the substrate overcomes the energy loss due to
alloying of the immiscible metals.
To study the effect of the Ag and Al substrates on the Pb-Sn alloying, we show
$E_{j}({\rm PbAg})$, $E_{j}({\rm PbAl})$, $E_{j}({\rm SnAg})$, and $E_{j}({\rm
SnAl})$ in Fig. 3. The B2 structure shows the lowest value of $E_{j}$ (except for
SnAl), while all $E_{j}$s are positive. In particular, PbAl has a relatively
high value of $E_{j}\simeq 0.45$ eV. These results indicate that Pb and Sn are also
immiscible with Al. Therefore, the presence of the Al surface will not allow the
alloying of Pb and Sn metals, which is consistent with recent observations
yuhara_Al . Compared with the Al alloys, the Ag alloys have lower values of
$E_{j}$. This may explain the disordered phase of Pb-Sn on Ag surfaces yuhara_Ag .
Let us move on to the Rh and Ru substrates. Figure 4 shows $E_{j}({\rm
PbRh})$, $E_{j}({\rm PbRu})$, $E_{j}({\rm SnRh})$, and $E_{j}({\rm SnRu})$.
For all four alloys, the B2 structure shows the lowest $E_{j}$. Except for PbRu, the
value of $E_{\rm B2}(XY)$ is negative, which favors alloying of the elements $X$
and $Y$ with a CN of eight. This would allow the alloying of Pb-Sn on both Rh
and Ru surfaces, where the CN is reduced compared with that in the bulk. These
findings are consistent with the experimental syntheses of Pb-Sn alloys on these surfaces
yuhara_Rh ; yuhara_Ru . The present study shows that the values of $E_{\rm
B2}$ of SnRh and SnRu are negatively large compared to those of PbRh and PbRu.
This indicates that the creation of Pb-Sn alloys on the Rh and Ru surfaces
yuhara_Rh ; yuhara_Ru is mainly due to the strong bonding between the Sn atoms
and the Rh or Ru atoms in the substrate.
We next try to distinguish the ordered and disordered Pb-Sn alloys created on
different surfaces. This can be understood by comparing the structures of
three-dimensional alloys that have already been synthesized. The information
on materials synthesis is extracted from the Materials Project database
materialsproject . For the Sn-Ag system, only SnAg3, i.e., the Ag-rich alloy,
has been synthesized. On the other hand, for the Pb-Rh, Sn-Rh, and Sn-Ru
systems, Pb- and Sn-rich structures have been reported: for example, Pb5Rh4,
Pb2Rh, Sn4Rh, Sn2Rh, Sn7Ru3, and Sn3Ru2. This means that for the former system
the concentration of Sn atoms on the Ag surface must be small, rendering it
difficult to produce an ordered phase.
We predict appropriate substrates for the Pb-Sn alloying by using the alloy
energies obtained from our previous calculations ono2020 . Figures 5(a) and 5(b)
show the atomic number dependence of $E_{j}({\rm Pb}X)$ and $E_{j}({\rm
Sn}X)$, respectively, for the B2 and Bh structures, where the dashed lines
indicate the atomic numbers of the alkali metals. The Li-based alloys have
negatively large $E_{j}$, which may be because Li is the lightest metallic element.
When mixed with heavier alkali metals (K, Rb, and Cs), the values of $E_{j}$
are nearly zero or positive. For elements heavier than K, $E_{j}$ behaves
periodically: the Pb-based and Sn-based alloys with the group 2 or 3 elements
have the minimum $E_{j}$, while those with the group 6 elements have the
maximum $E_{j}$, followed by a decrease in $E_{j}$ with increasing atomic
number of $X$. While the difference in $E_{j}$ between $j=$ B2 and Bh is small
compared to the absolute value of $E_{j}$, the B2 phase seems to be preferable
to the Bh phase when $E_{j}<0$. Table 1 lists the Pb$X$ and Sn$X$ alloys
that have negative $E_{\rm B2}$. Note that PbCo (+0.87 eV), PbIr (+0.63 eV), PbNi
(+0.53 eV), and PbTi (+0.31 eV) have positively large values of $E_{\rm B2}$ that
overcome the negative $E_{\rm B2}$ of the corresponding Sn-based alloys. We thus
conclude that Au, Ba, Ca, Hf, Li, Lu, Mg, Na, Pd, Pt, Rh, Ru, Sc, Sr, Y, and Zr
are suitable substrates for synthesizing Pb-Sn alloys. It is noteworthy that the
growth of Sn on Pt(111) has already been reported overbury , so it would be
interesting to study how the addition of Pb atoms influences the structure of
the Sn-Pt surface alloy. We also note that some impurities may be needed to
stabilize the surfaces of the alkali metals (Li and Na).
Table 1: Formation energy (in units of eV) of Pb-based and Sn-based alloys in the B2 structure.

| | Au | Ba | Ca | Co | Hf | Ir | Li | Lu |
|---|---|---|---|---|---|---|---|---|
| Pb | -0.12 | -1.03 | -1.10 | +0.87 | +0.27 | +0.63 | -0.63 | -0.74 |
| Sn | -0.19 | -1.13 | -1.25 | -0.09 | -0.38 | -0.41 | -0.72 | -1.16 |

| | Mg | Na | Ni | Pd | Pt | Rh | Ru | Sc |
|---|---|---|---|---|---|---|---|---|
| Pb | -0.17 | -0.31 | +0.53 | -0.40 | -0.09 | -0.11 | +0.39 | -0.65 |
| Sn | -0.29 | -0.29 | -0.13 | -0.84 | -0.61 | -1.00 | -0.76 | -1.07 |

| | Sr | Ti | Y | Zr |
|---|---|---|---|---|
| Pb | -1.07 | +0.31 | -0.98 | -0.08 |
| Sn | -1.20 | -0.31 | -1.35 | -0.66 |
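One plausible reading of the selection rule behind Table 1 is that a substrate $Z$ is suitable when the Sn-$Z$ energy gain is not cancelled by a Pb-$Z$ penalty, i.e., $E_{\rm B2}({\rm Pb}Z)+E_{\rm B2}({\rm Sn}Z)<0$. The short sketch below, using the Table 1 values, reproduces the sixteen substrates listed in the text; the summed-energy criterion itself is our assumption, not a rule stated explicitly in the paper.

```python
# E_B2 values (eV) transcribed from Table 1.
E_B2_Pb = {"Au": -0.12, "Ba": -1.03, "Ca": -1.10, "Co": 0.87, "Hf": 0.27,
           "Ir": 0.63, "Li": -0.63, "Lu": -0.74, "Mg": -0.17, "Na": -0.31,
           "Ni": 0.53, "Pd": -0.40, "Pt": -0.09, "Rh": -0.11, "Ru": 0.39,
           "Sc": -0.65, "Sr": -1.07, "Ti": 0.31, "Y": -0.98, "Zr": -0.08}
E_B2_Sn = {"Au": -0.19, "Ba": -1.13, "Ca": -1.25, "Co": -0.09, "Hf": -0.38,
           "Ir": -0.41, "Li": -0.72, "Lu": -1.16, "Mg": -0.29, "Na": -0.29,
           "Ni": -0.13, "Pd": -0.84, "Pt": -0.61, "Rh": -1.00, "Ru": -0.76,
           "Sc": -1.07, "Sr": -1.20, "Ti": -0.31, "Y": -1.35, "Zr": -0.66}

def suitable_substrates():
    # keep Z when the summed B2 formation energy is negative, i.e. the
    # Sn-Z energy gain is not cancelled by a Pb-Z penalty
    return sorted(z for z in E_B2_Pb if E_B2_Pb[z] + E_B2_Sn[z] < 0.0)

print(suitable_substrates())  # 16 substrates, matching the list in the text
```

Note that Ti sits exactly at the boundary (+0.31 - 0.31 = 0) and is excluded by the strict inequality, consistent with the text.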
It must be noted that our approach relies on a negative $E_{j}$ between the
surface metals and the substrate: in the present case, $E_{j}({\rm SnRh})$ and
$E_{j}({\rm SnRu})$ are negative. The present approach can also be applied to
understand the LiMg alloying on Cu(001) chen . While Li-Mg alloys have not been
synthesized in bulk form, the negative values of $E_{\rm B2}({\rm LiCu})$
($-0.05$ eV) and $E_{\rm B2}({\rm MgCu})$ ($-0.27$ eV), taken from
Ref. ono2020 , explain the LiMg alloying on the Cu surface.
### III.2 Anomalous surface alloys
Figure 6: Same as Fig. 2 but for AgCu, AgRu, and CuRu. Figure 7: Same as Fig.
2 but for AgCo, AgMo, and CoMo.
Within the present approach, it is difficult to understand surface alloying
when all three elements are immiscible steve ; sadigh . For example, consider
AgCu on Ru(0001), where Ag, Cu, and Ru are immiscible with each other
steve . In this alloy, the lattice constants of Ag and Cu in the alloy
structure are similar to that of the Ru substrate. The strain energy is thus
reduced significantly, providing the energetic stability of this surface alloy
steve . For an alloy driven by strain relief, the structure dependence of
$E_{j}$ differs from that of the Pb-Sn alloys, as shown in Fig. 6. The value
of $E_{\rm bSQ}$ is the lowest in the Ag-Ru and Cu-Ru alloys, implying that a
small CN (or a small interatomic distance) is preferred.
A different profile of $E_{j}$ is also obtained for the immiscible Ag-Co alloys
on the Mo(110) surface tober , where Ag is immiscible with both Co and Mo. Tober et
al. have observed a stripe structure of Ag and Co on Mo(110) with a
period of a few nanometers and interpreted the presence of the stripes as a way
to reduce the misfit strain (or the net area of the surface that
consists of Ag and Co) tober . As shown in Fig. 7, $E_{\rm B2}({\rm AgMo})$
and $E_{\rm B2}({\rm CoMo})$ have the maximum values among the five structures.
Therefore, the surface alloying observed in the experiment may be related to
the stability of the bHC or Bh phases, because $E_{\rm bHC}({\rm CoMo})$ and
$E_{{\rm B}_{h}}({\rm CoMo})$ are negative.
In Ref. tersoff , Tersoff has demonstrated, using a simplified model taking
into account only the strain energy, that clusters or stripe structures at the
surface are created in the presence of a large interface energy. This is
consistent with the observations in the AgCu alloy on Ru(0001) at large Cu
concentration steve , where pure Ag domain walls are present, and with the
observed stripe structure in the AgCo alloy on Mo(110) tober . In order to
treat strain effects effectively, the metastability analysis used in the
present work should be further extended to other structures with relatively
large unit cells, which will provide a more comprehensive understanding of the
immiscibility of metals.
## IV Conclusion
We have proposed the modified formation energy, Eq. (3), to understand the
effect of different CNs on alloying. By analyzing the formation energies
of binary alloys in various structures with different CNs, we have explained
the alloying properties of the immiscible metals Pb and Sn on other metal
surfaces (Rh, Ru, Ag, and Al) yuhara_Rh ; yuhara_Ru ; yuhara_Ag ; yuhara_Al .
A negatively large formation energy of the B2 structure (CN$=$8) appears to be
important for predicting surface alloying of immiscible metals. The synthesis
of Pb-Sn alloys on the predicted substrates is left for future work. We have
also identified anomalous surface alloying within our metastability perspective.

In future work, we will search for appropriate descriptors that predict surface
alloying accurately, beyond Eq. (3). It would be fundamentally important to
understand the relative stability between metastable structures (i.e., B2,
L10, Bh, and other complex structures) under different conditions (i.e.,
pressure and temperature). For the ground-state structures of alloys, it has
been known that the combination of pseudopotential orbital radii is a good
descriptor for distinguishing complex structures but is not suitable for
distinguishing the B2 and L10 structures zunger . For binary compounds in the
zincblende and rocksalt structures, many descriptors have been proposed
recently with the help of materials informatics methods ghi ; pilania . We
expect that further research along these lines will allow us to identify surface
alloying in more detail.
###### Acknowledgements.
The computation was carried out using the facilities of the Supercomputer
Center, the Institute for Solid State Physics, the University of Tokyo, and
the supercomputer “Flow” at the Information Technology Center, Nagoya
University.
## References
* (1) D. Pettifor, Bonding and Structure of Molecules and Solids (Oxford University Press, New York, 2002).
* (2) S. Curtarolo, D. Morgan, K. Persson, J. Rodgers, and G. Ceder, Predicting Crystal Structures with Data Mining of Quantum Calculations, Phys. Rev. Lett. 91, 135503 (2003).
* (3) G. R. Schleder, A. C. M. Padilha, C. M. Acosta, M. Costa, and A. Fazzio, From DFT to machine learning: recent approaches to materials science-a review, J. Phys.: Mater. 2, 032001 (2019).
* (4) C. Wolverton and V. Ozoliņš, First-principles aluminum database: Energetics of binary Al alloys and compounds, Phys. Rev. B 73, 144104 (2006).
* (5) G. L. W. Hart, S. Curtarolo, T. B. Massalski, and O. Levy, Comprehensive Search for New Phases and Compounds in Binary Alloy Systems Based on Platinum-Group Metals, Using a Computational First-Principles Approach, Phys. Rev. X 3, 041035 (2013).
* (6) L. J. Nelson, G. L. W. Hart, and S. Curtarolo, Ground-state characterizations of systems predicted to exhibit $L1_{1}$ or $L1_{3}$ crystal structures, Phys. Rev. B 85, 054203 (2012).
* (7) A. Seko, K. Shitara, and I. Tanaka, Efficient determination of alloy ground-state structures, Phys. Rev. B 90, 174104 (2014).
* (8) S. H. Overbury and Yi-sha Ku, Formation of stable, two-dimensional alloy-surface phases: Sn on Cu(111), Ni(111), and Pt(111), Phys. Rev. B 46, 7868 (1992).
* (9) R. S. Dhaka, A. K. Shukla, K. Horn, and S. R. Barman, Photoemission study of Al adlayers on Mn, Phys. Rev. B 84, 245404 (2011).
* (10) P. Sadhukhan, S. Barman, T. Roy, V. K. Singh, S. Sarkar, A. Chakrabarti, and S. R. Barman, Electronic structure of Au-Sn compounds grown on Au(111), Phys. Rev. B 100, 235404 (2019).
* (11) L. P. Nielsen, F. Besenbacher, I. Stensgaard, E. Laegsgaard, C. Engdahl, P. Stoltze, K. W. Jacobsen, and J. K. Nørskov, Initial growth of Au on Ni(110): Surface alloying of immiscible metals, Phys. Rev. Lett. 71, 754 (1993).
* (12) H. Röder, R. Schuster, H. Brune, and K. Kern, Monolayer-confined mixing at the Ag-Pt(111) interface, Phys. Rev. Lett. 71, 2086 (1993).
* (13) C. Nagl, M. Pinczolits, M. Schmid, P. Varga, and I. K. Robinson, $p$($n$×1) superstructures of Pb on Cu(110), Phys. Rev. B 52, 16796 (1995).
* (14) J. L. Stevens and R. H. Hwang, Strain Stabilized Alloying of Immiscible Metals in Thin Films, Phys. Rev. Lett. 74, 2078 (1995).
* (15) E. D. Tober, R. F. C. Farrow, R. F. Marks, G. Witte, K. Kalki, and D. D. Chambliss, Self-Assembled Lateral Multilayers from Thin Film Alloys of Immiscible Metals, Phys. Rev. Lett. 81, 1897 (1998).
* (16) B. Sadigh, M. Asta, V. Ozoliņš, A. K. Schmid, N. C. Bartelt, A. A. Quong, and R. Q. Hwang, Short-Range Order and Phase Stability of Surface Alloys: PdAu on Ru(0001), Phys. Rev. Lett. 83, 1379 (1999).
* (17) M.-S. Chen, S. Mizuno, and H. Tochihara, Ordered mixed surface structures formed on Cu(001) by coadsorption of dissimilar metals: (2$\sqrt{2}\times\sqrt{2}$)R45∘ by Mg and Li, and ($\sqrt{5}\times\sqrt{5}$)R26.7∘ by Mg and K(Cs), Surf. Sci. 486, L480 (2001).
* (18) J. Yuhara, M. Schmid, and P. Varga, Two-dimensional alloy of immiscible metals: Single and binary monolayer films of Pb and Sn on Rh(111), Phys. Rev. B 67, 195407 (2003).
* (19) J. Yuhara, Y. Ishikawa, and T. Matsui, Two-dimensional alloy of immiscible Pb and Sn atoms on Ru(0001), Surf. Sci. 616, 131 (2013).
* (20) J. Yuhara and T. Ako, Two-dimensional Pb-Sn alloy monolayer films on Ag(111), Appl. Surf. Sci. 351, 83 (2015).
* (21) J. Yuhara and Y. Shichida, Epitaxial growth of two-dimensional Pb and Sn films on Al(111), Thin Solid Films 616, 618 (2016).
* (22) B. Yang, T. Muppidi, V. V. Ozoliņš, and M. Asta, First-principles theory of nanoscale pattern formation in ultrathin alloy films: A comparative study of Fe-Ag on Ru(0001) and Mo(110) substrates, Phys. Rev. B 77, 205408 (2008).
* (23) M. Marathe, M. Imam, and S. Narasimhan, Elastic and chemical contributions to the stability of magnetic surface alloys on Ru(0001), Phys. Rev. B 79, 085413 (2009).
* (24) M. Marathe, A. Díaz-Ortiz, and S. Narasimhan, Ab initio and cluster expansion study of surface alloys of Fe and Au on Ru(0001) and Mo(110): Importance of magnetism, Phys. Rev. B 88, 245442 (2013).
* (25) S. Ono and H. Satomi, arXiv:2012.04790.
* (26) P. Giannozzi et al., Advanced capabilities for materials modelling with Quantum ESPRESSO, J. Phys.: Condens. Matter 29, 465901 (2017).
* (27) J. P. Perdew, K. Burke, and M. Ernzerhof, Generalized Gradient Approximation Made Simple, Phys. Rev. Lett. 77, 3865 (1996).
* (28) A. Dal Corso, Pseudopotentials periodic table: From H to Pu, Computational Materials Science 95, 337 (2014).
* (29) H. J. Monkhorst and J. D. Pack, Special points for Brillouin-zone integrations, Phys. Rev. B 13, 5188 (1976).
* (30) N. Marzari, D. Vanderbilt, A. De Vita, and M. C. Payne, Thermal Contraction and Disordering of the Al(110) Surface, Phys. Rev. Lett. 82, 3296 (1999).
* (31) A. Jain, S. P. Ong, G. Hautier, W. Chen, W. D. Richards, S. Dacek, S. Cholia, D. Gunter, D. Skinner, G. Ceder, K. A. Persson, The Materials Project: A materials genome approach to accelerating materials innovation, APL Materials, 1, 011002 (2013).
* (32) J. Tersoff, Surface-Confined Alloy Formation in Immiscible Systems, Phys. Rev. Lett. 74, 434 (1995).
* (33) A. Zunger, Structural Stability of 495 Binary Compounds, Phys. Rev. Lett. 44, 582 (1980).
* (34) L. M. Ghiringhelli, J. Vybiral, S. V. Levchenko, C. Draxl, and M. Scheffler, Big Data of Materials Science: Critical Role of the Descriptor, Phys. Rev. Lett. 114, 105503 (2015).
* (35) G. Pilania, J. E. Gubernatis, and T. Lookman, Classification of octet AB-type binary compounds using dynamical charges: A materials informatics perspective, Sci. Rep. 5, 17504 (2015).
# The Analysis of Discrete-Event System in Autonomous Package Delivery using
Legged Robot and Conveyor Belt
Garen Haddeler∗ ∗Department of Mechanical Engineering, National University of
Singapore, 117575, Singapore<EMAIL_ADDRESS>
###### Abstract
In this paper, the supervisory control of a discrete-event system (DES) is
used to analyse states and events and construct an autonomous package delivery
system. The delivery system includes a legged robot, which autonomously
navigates uneven indoor terrain, and a conveyor belt, which transports the
package to the legged robot. The aim of the paper is to use the theory of
supervisory control of DES to supervise and control each machine's states and
events and to ensure that the robots collaborate autonomously. By applying the
theory, we show the collaboration of two individual robots delivering goods in
a multi-floor environment. The results obtained from supervisory control
theory are implemented and verified in a simulation environment.
## I INTRODUCTION
Delivering packages over indoor and uneven terrain can be challenging, since
today's robots cannot fully represent and navigate multi-storey terrain.
Compared with wheeled robots, legged robots are well suited to navigating
uneven terrain, since they can overcome obstacles larger than their body
frame [1]. Inspired by the capability of such robots, we developed an
autonomous navigation framework for the legged robot to fulfil the desired
behaviour, which in our case is reaching the goal, avoiding obstacles, climbing
stairs, and delivering goods. Moreover, a conveyor belt is used to initially
transfer goods onto the robot's body frame. Thus, the robot and the conveyor need
to communicate with each other autonomously and perform the delivery together.
To do so, we developed a supervisory control strategy that controls each
machine's states and events and ensures that the robot and conveyor work
together safely and efficiently. The obtained supervisory control strategy is
verified in the simulation environment with the legged navigation framework.
The code is available at 111https://github.com/hgaren/aliengo$\\_$delivery.
Several autonomous robot projects have been used for package delivery.
Amazon's warehouses actively use autonomous swarm robots that
deliver goods from one point to another using conveyor belts and wheeled
robots [2]. However, the main drawback of wheeled vehicles is that they cannot
deliver goods in multi-storey facilities. To overcome this issue, legged robots
can be used instead of wheeled ones. A bipedal robot from Agility Robotics is
used to deliver goods over multi-storey outdoor terrain [3]. Similarly, Boston
Dynamics' wheeled-bipedal robot named Handle can perform deliveries in indoor
facilities [4]. However, these approaches do not include collaboration with a
conveyor belt or a multi-robot aspect. Using supervisory control theory for
DES, we propose an autonomous collaboration between the legged robot
and the conveyor belt.
The remainder of this paper is arranged as follows: Section II
introduces the general concepts of autonomous legged robot navigation and
supervisory control theory. Section III then presents the autonomous
system structure of a task-based delivery scenario. The proposed
supervisory control structure is verified through simulation in Section
IV-A. Finally, Section V concludes the work and gives an outlook on future
research.
## II Preliminaries
In this section, the concept of legged robots and their autonomous navigation is
presented. Moreover, some core methodologies of supervisory control and
automata are introduced. These methodologies are used to construct our package
delivery scenario.
### II-A Legged Robot
Autonomous navigation is needed to deliver goods from a starting position to
the desired goal position. In legged robots, self-driving behaviour is
generally carried out by three sub-modules: robot localization, path planning,
and a body controller.
#### II-A1 Robot localization
Localization is one of the major steps in performing autonomous navigation, path
tracking, obstacle avoidance, and mapping. In this article, a ready-made
localization solution is therefore assumed for navigation purposes. Since
localization is not the main focus of this work, we obtained the robot's pose
directly from the simulation environment.
#### II-A2 Path Planning
A path planning algorithm is used to generate a path between the start and
goal positions while avoiding static obstacles. We used the 2D NavfnROS path
planning algorithm to plan paths along the pre-defined route.
#### II-A3 Body Controller
Compared with wheeled robots, a legged robot's body controller has higher
complexity due to the robot's dynamics and kinematics. One of the main reasons is
that legged robots can become unstable while moving and need a robust
controller to balance themselves. We used the MIT Cheetah body controller to
balance the body frame and perform foot placements from the gait generator
according to the commanded reference velocity. Note that the legged robot needs
to climb stairs; therefore, we use a local awareness system to re-plan foot
placements if the location of a planned foothold is not traversable [1].
The local awareness perception is obtained from a traversability grid map, in
which we evaluate the terrain's slope and roughness to detect steppable areas [5].
### II-B Concepts of automata and supervisory control
Supervisory control theory allows us to observe the free behaviour of a
system, i.e., all possible sequences of events, and to disable controllable
events so that the agents fulfil given specifications [6].
An automaton, the standard representation of a discrete-event system (DES),
consists of a set of states and a set of events. An event causes a DES to move
from one state to another, and these events are assumed to occur
instantaneously. In the formulation below, states $Q$ are represented by
numbers and events $\Sigma$ by symbols:
$Q=\\{0,1,2\\},\text{ }\Sigma=\\{\alpha,\beta,\gamma\\}$ (1)
An automaton is then constructed as in Eq. 2, where $Q$ is the set of states,
$\Sigma$ the set of events, $\delta$ the transition function (the relation
between current state, event, and next state), $q_{0}$ the initial state, and
$Q_{m}$ the set of marked states (which usually refer to the system's
equilibrium states):
$G=(Q,\Sigma,\delta,q_{0},Q_{m})$ (2)
To represent a sequence of events, a string is defined as a sequence of one or
more event symbols. The language $L(G)$ of an automaton is the set of all
strings it can generate, and the marked language $L_{m}(G)$ is the subset of
strings that end in a marked state; it satisfies $L_{m}(G)\subseteq L(G)$.
Examples of a language and a marked language are shown in Eqs. 3 and 4,
respectively.
$L(G)=\\{\epsilon,\alpha\beta,\alpha\beta\gamma,\alpha\beta\alpha\gamma,\ldots\\}$ (3)
$L_{m}(G)=\\{\alpha\beta,\alpha\beta\gamma,\ldots\\}$ (4)
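The automaton of Eqs. (1)-(2) and its languages can be sketched in a few lines of Python; the transition function and marked states below are illustrative choices consistent with the example strings of Eqs. (3)-(4), not values taken from the paper.

```python
# A minimal sketch of the automaton G = (Q, Sigma, delta, q0, Qm) of Eq. (2).
# The transition function and marked states are illustrative choices that
# accept the example strings of Eqs. (3)-(4).
Q = {0, 1, 2}                                  # states
SIGMA = {"alpha", "beta", "gamma"}             # events
delta = {                                      # partial transition function
    (0, "alpha"): 1,
    (1, "beta"): 0,
    (0, "gamma"): 2,
    (1, "gamma"): 2,
}
q0, Qm = 0, {0, 2}                             # initial state and marked states

def run(string):
    """Follow a string of events from q0; return the final state, or None."""
    q = q0
    for event in string:
        if (q, event) not in delta:
            return None                        # string is not in L(G)
        q = delta[(q, event)]
    return q

def in_language(string):                       # string in L(G)?
    return run(string) is not None

def in_marked_language(string):                # string in Lm(G)?
    q = run(string)
    return q is not None and q in Qm

# Lm(G) ⊆ L(G): every marked string is, in particular, a string of G.
assert in_marked_language(["alpha", "beta"]) and in_language(["alpha", "beta"])
```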
Supervisory control theory defines a supervisor function $S$, also called a
control policy, that controls all controllable events of the plant $G$ so that
it follows a given specification $E$. The controlled behaviour is denoted
$S/G$, and its marked language must satisfy $L_{m}(S/G)\subseteq L_{m}(E)$ [6].
Briefly, given the free behaviour of the plant $G$ and the defined
specifications $E$, we can extract a supervisor function $S$ that generates
the controlled DES and its language $L_{m}(S/G)$. In Section III, we show how
we obtained our scenario's free behaviour and its specifications.
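The supervisor idea can be sketched as a function from the observed event history to the set of enabled events; the event names below are invented for illustration and are not the paper's.

```python
# Sketch of a supervisor S acting on a plant G: after each generated string,
# S disables a subset of the *controllable* events so the controlled
# behaviour S/G stays inside the specification E. Event names are invented.
CONTROLLABLE = {"start", "dock"}
UNCONTROLLABLE = {"fail"}              # can never be disabled by S

def supervisor(history):
    """Return the set of events enabled after the observed event history."""
    enabled = set(UNCONTROLLABLE)      # uncontrollable events always enabled
    if "start" not in history:
        enabled.add("start")           # specification: 'start' before 'dock'
    elif "dock" not in history:
        enabled.add("dock")
    return enabled

assert "dock" not in supervisor([])    # S disables 'dock' before 'start'
assert "fail" in supervisor([])        # S cannot disable uncontrollable events
assert "dock" in supervisor(["start"])
```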
## III Automation and System Structure
Our system initially has two real machines: one robot and one conveyor. The
delivery scenario starts with a legged robot spawning on the first floor. The
robot's first state is initialized by the operator, and it navigates toward
the first goal, which is in front of the conveyor belt. Next, the robot docks
to the belt, and the conveyor starts moving to transfer the package onto the
robot's designated rectangular area. After the conveyor belt finishes placing
the package, the robot stands up. Then, by climbing the stairs, the robot
tries to reach the second goal, which is one floor above. Lastly, if the robot
reaches its second goal, it changes its state to the success state and
finishes its task until the operator resends the first goal.
To manage the task's complexity in the finite-state-machine representation, we
divided the legged robot's task into two machines; thus, the whole system
consists of three sub-machines: the First Task Robot, the Conveyor Belt, and
the Second Task Robot.
### III-A Machine-1: First Task Robot
Machine-1 is defined as the legged robot reaching the first goal. Figure 1
shows the states and events of this machine as an automaton, and Table I
defines the states, events, and their numeric representations.
According to our delivery scenario, the robot is initially in the idle state,
meaning it is in a stable stopping position. An operator starts the delivery
scenario by giving the first goal to the robot. The walking state then starts,
and the legged robot uses the proposed autonomous navigation framework to
reach its first goal. After reaching it, the robot changes its state to
docking and docks to the conveyor belt; in this state, the robot slowly
approaches the belt and descends to collect the package. After docking
finishes, the robot returns to the idle state and waits for the conveyor belt
to transfer the package. As an uncontrollable event, if the robot fails while
walking or docking, it changes its state to fail, and an error-flag event
returns the robot's state to idle.
Figure 1: The automaton model of Machine-1 (First Task Robot): (Red) controllable, (Green) uncontrollable events
TABLE I: States and Events of Machine-1 (First Task Robot)
States | Events
---|---
Robot Idle:0 | First goal started:1
Walking:1 | First goal reached:3, Robot failed:0
Docking:2 | Docking finished:5, Robot failed:0
Fail:3 | Error flag:17
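Table I can be encoded directly as a transition table; the state and event numbers follow the table, while the target of each transition is inferred from the scenario description above.

```python
# Machine-1 from Table I as a transition table. State and event numbers are
# the table's; the target of each transition is inferred from the text.
IDLE, WALKING, DOCKING, FAIL = 0, 1, 2, 3
delta1 = {
    (IDLE, 1): WALKING,      # first goal started
    (WALKING, 3): DOCKING,   # first goal reached
    (WALKING, 0): FAIL,      # robot failed (uncontrollable)
    (DOCKING, 5): IDLE,      # docking finished
    (DOCKING, 0): FAIL,      # robot failed (uncontrollable)
    (FAIL, 17): IDLE,        # error flag
}

def step(state, event):
    """Take one transition; ignore events undefined in this state."""
    return delta1.get((state, event), state)

# Nominal run: idle -> walking -> docking -> idle.
s = IDLE
for e in (1, 3, 5):
    s = step(s, e)
assert s == IDLE
```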
### III-B Machine-2: Conveyor Belt
Machine-2 is defined as the conveyor belt transferring goods to the robot.
Figure 2 shows the states and events of this machine as an automaton, and
Table II defines the states, events, and their numeric representations.
According to our delivery scenario, the conveyor belt is in the idle state
until the robot's docking finishes. Thereafter, the conveyor belt changes its
state to working, and the package is moved on top of the robot. After the
package transfer finishes, the conveyor returns to idle. As an uncontrollable
event, if the conveyor belt drops a package to the ground while working, it
changes its state to fail. The box is then re-spawned on the conveyor belt,
Machine-2 returns to idle, and the whole process can start again.
Figure 2: The automaton model of Machine-2 (Conveyor Belt): (Red) controllable, (Green) uncontrollable events
TABLE II: States and Events of Machine-2 (Conveyor Belt)
States | Events
---|---
Conveyor Idle:0 | Moving Box:19
Working:1 | Stopping box:7, Box dropped:2
Fail:2 | Spawn box:15
### III-C Machine-3: Second Task Robot
Machine-3 is defined as the legged robot reaching the second goal. Compared
with Machine-1, similar states and events are represented by different numeric
values. Figure 3 shows the states and events of this machine as an automaton,
and Table III defines the states, events, and their numeric representations.
As with Machine-1, the robot is initially in the idle state, meaning it is in
a stable stopping position. Machine-3 starts the stand-up state after the
package is on the robot's base. The walking state then starts, and the legged
robot uses the proposed autonomous navigation framework to reach its second
goal, which is one level above the pick-up position. After reaching the second
goal, the robot changes its state to success and later, via a success flag,
returns to the idle state to finish its task and stop until the operator gives
the first goal again. As an uncontrollable event, similarly to Machine-1, if
the robot fails while walking or standing up, it changes its state to fail,
and an error-flag event returns the robot's state to idle.
Figure 3: The automaton model of Machine-3 (Second Task Robot): (Red) controllable, (Green) uncontrollable events
TABLE III: States and Events of Machine-3 (Second Task Robot)
States | Events
---|---
Robot Idle:0 | Box is on the robot:21
Walking:1 | Second goal reached:11, Robot failed:3
Stand up:2 | Second goal started:9, Robot failed:3
Fail:3 | Error flag:23
Success:4 | Success flag:13
### III-D Specifications
According to supervisory control theory, specifications are needed to restrict
the free behavior of the system. To this end, we defined the following eight
specifications:
1. After docking finished (5), the moving box (19) event must start.
2. After moving box (19), the stopping box (7) event must start, or, as an uncontrollable event, the box can be dropped (2).
3. After stopping box (7), the box is on the robot (21) event must start.
4. After box is on the robot (21), the second goal started (9) event must start, or, as an uncontrollable event, the robot can fail (4).
5. After second goal started (9), the second goal reached (11) event must start, or, as an uncontrollable event, the robot can fail (4).
6. After second goal reached (11), the success flag (13) event must start.
7. After the uncontrollable robot-fail events (0 and 4), the error flag (23) event must start.
8. After the uncontrollable box dropped (2) event, the spawn box (15) event must start.
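These eight specifications can be encoded as a required-successor relation on event numbers (copied from the list above) and checked against an event trace; this is our own sketch of the idea, not the TCT representation.

```python
# The eight specifications as a "required successor" relation: after the key
# event, the next constrained event must be one of the listed alternatives.
# Event numbers are copied from the specification list.
REQUIRED_NEXT = {
    5:  {19},           # docking finished -> moving box
    19: {7, 2},         # moving box -> stopping box, or box dropped
    7:  {21},           # stopping box -> box is on the robot
    21: {9, 4},         # box on robot -> second goal started, or robot failed
    9:  {11, 4},        # second goal started -> second goal reached, or failed
    11: {13},           # second goal reached -> success flag
    0:  {23}, 4: {23},  # robot failed -> error flag
    2:  {15},           # box dropped -> spawn box
}

def satisfies_spec(trace):
    """Check that every constrained event in the trace is followed correctly."""
    for prev, nxt in zip(trace, trace[1:]):
        if prev in REQUIRED_NEXT and nxt not in REQUIRED_NEXT[prev]:
            return False
    return True

assert satisfies_spec([5, 19, 7, 21, 9, 11, 13])  # nominal delivery sequence
assert not satisfies_spec([5, 7])                 # box must move before stopping
```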
## IV Experiments
We obtained our DES supervisor using the TCT software and executed the
autonomous delivery scenario in the physical simulator.
### IV-A TCT Software
This program allows us to synthesize supervisory controls for discrete-event
systems. First, we combined the eight specifications into one specification,
represented as $E$, using the TCT software's "MEET" function. Figure 4 shows
the resulting finite-state representation.
Figure 4: The automaton model of Specifications: (Red) controllable, (Green)
uncontrollable events
Second, the synchronized combination of Machine-1, Machine-2, and Machine-3 is
obtained using the TCT software's "SYNC" function. The resulting sequences,
which constitute the free behavior of the package delivery scenario, are shown
in Figure 8. As can be observed, the state space is large, comprising 60
states and 254 transitions. These states and events contain all possible
control sequences, which we call the free behavior, or plant $G$.
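The "SYNC" operation corresponds to the synchronous product of automata: shared events must occur jointly, while private events interleave. A minimal worklist construction (our own sketch, not TCT's implementation) is:

```python
# Minimal synchronous product ("SYNC") of two automata given as transition
# tables (state, event) -> state. Shared events move both machines at once;
# private events move only their owner.
def sync(d1, d2, q1, q2):
    """Worklist construction of the product transition table."""
    e1 = {e for (_, e) in d1}
    e2 = {e for (_, e) in d2}
    product, frontier, seen = {}, [(q1, q2)], {(q1, q2)}
    while frontier:
        s1, s2 = frontier.pop()
        for e in e1 | e2:
            # A machine that does not own the event stays put; a machine that
            # owns it but has no transition blocks the shared event.
            t1 = d1.get((s1, e), s1 if e not in e1 else None)
            t2 = d2.get((s2, e), s2 if e not in e2 else None)
            if t1 is None or t2 is None:
                continue
            product[((s1, s2), e)] = (t1, t2)
            if (t1, t2) not in seen:
                seen.add((t1, t2))
                frontier.append((t1, t2))
    return product

# Two toy machines sharing event "b": states multiply in the worst case.
a = {(0, "a"): 1, (1, "b"): 0}
b = {(0, "b"): 1, (1, "c"): 0}
g = sync(a, b, 0, 0)
assert g[((1, 0), "b")] == (0, 1)  # shared event moves both machines
```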
Figure 5: The automaton model of Controlled behavior: (Red) controllable,
(Green) uncontrollable events
Lastly, to obtain the controlled behavior, the supervisor function ($S$) is
calculated using the TCT software's "SUPCON" function, which uses the given
specifications ($E$) to control the plant ($G$) and returns a non-blocking,
minimally restrictive supervisor $S/G$. The controlled behavior of the
autonomous package delivery scenario is shown in Figure 5. One of its state
sequences (0-1-3-4-5-7-8-10-11) confirms that the controlled behavior meets
the specifications and performs the task accordingly.
### IV-B ROS-Gazebo Simulator
We modelled our autonomous package delivery scenario in the ROS-Gazebo
environment. The simulation environment likewise includes high walls and
stairs, as shown in Figure 6 (up-left). The controlled behaviour's DES is
implemented in ROS-Smach and visualized in Figure 6 (up-right). The
quadrupedal robot is shown in Figure 6 (bottom-left), and the conveyor belt in
Figure 6 (bottom-right).
Figure 6: Simulation environment in Gazebo: (Up-Left) indoor uneven
environment including stairs, (Up-Right) states and events in the ROS-Smach
implementation, (Bottom-Left) quadrupedal robot with the package carrier
mounted on its back, (Bottom-Right) conveyor belt that moves the box onto the
robot's back
The execution of the state machine based on supervisory control theory is
shown in Figure 7. The snapshots, read right to left, show the package
delivery scenario in the designed world.
Figure 7: Snapshots of the autonomous package delivery scenario: (Up-Right)
initial position, (Up-Left) first goal reached, (Bottom-Left) climbing the
stairs, (Bottom-Right) second goal reached
Furthermore, we used the ROS-Smach library to represent the controlled
behaviour as a finite state-and-event system. The leg locomotion, path
planning, and Smach algorithms run separately within one ROS environment. All
events are subscribed from the ROS environment to perform actions, and all
events are published to the ROS environment to change the current state.
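A library-free stand-in for this execution loop can be sketched as follows; the state names, outcome strings, and handler functions are our own illustrations, not the actual Smach labels or ROS callbacks.

```python
# Plain-Python stand-in for the state-machine execution loop: each state is a
# callable returning the next event, and the controlled transition table maps
# (state, event) pairs to successor states. Names are illustrative.
def execute(transitions, handlers, state, goal="DONE", max_steps=20):
    trace = [state]
    for _ in range(max_steps):
        if state == goal:
            break
        event = handlers[state]()            # state body returns an event
        state = transitions[(state, event)]  # controlled transition
        trace.append(state)
    return trace

transitions = {
    ("IDLE", "first_goal"): "WALKING",
    ("WALKING", "goal_reached"): "DOCKING",
    ("DOCKING", "docked"): "DONE",
}
handlers = {
    "IDLE": lambda: "first_goal",
    "WALKING": lambda: "goal_reached",
    "DOCKING": lambda: "docked",
}
assert execute(transitions, handlers, "IDLE") == ["IDLE", "WALKING", "DOCKING", "DONE"]
```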
### IV-C Result
As a result, we obtained the controlled DES behaviour of the autonomous
package delivery scenario and a fully functional simulation environment in
which a conveyor belt moves the box and the legged robot navigates through the
desired goal positions. We observed that, by implementing the controlled
finite state-and-event behaviour, the robot can take a package from the
conveyor and deliver it, climbing to a specific location one floor above.
## V Conclusion
In this paper, supervisory control of DES is applied to extract the acceptable
control sequences from the free behaviour of an autonomous package delivery
scenario using a legged robot and a conveyor belt. After defining the DES for
each machine and their specifications, the automaton model of the controlled
behaviour is obtained. To show the effectiveness of supervisory control
theory, the controlled-behaviour DES is then implemented in the simulation
environment, where it successfully performs autonomous package delivery over
multi-storey terrain. For future work, we plan to obtain controlled DES models
of multiple legged robots and conveyors and test them in simulation and the
real world.
Figure 8: The automaton model of Free behavior
## References
* [1] W. Bosworth, S. Kim, and N. Hogan, “The mit super mini cheetah: A small, low-cost quadrupedal robot for dynamic locomotion,” _2015 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)_, 2015.
* [2] M. Schranz, M. Umlauft, M. Sende, and W. Elmenreich, “Swarm robotic behaviors and current applications,” _Frontiers in Robotics and AI_, vol. 7, 04 2020.
* [3] Y. Gong, R. Hartley, X. Da, A. Hereid, O. Harib, J. Huang, and J. W. Grizzle, “Feedback control of a cassie bipedal robot: Walking, standing, and riding a segway,” _CoRR_ , vol. abs/1809.07279, 2018. [Online]. Available: http://arxiv.org/abs/1809.07279
* [4] M. Bjelonic, C. D. Bellicoso, Y. D. Viragh, D. Sako, F. D. Tresoldi, F. Jenelten, and M. Hutter, “Keep rollin’—whole-body motion control and planning for wheeled quadrupedal robots,” _IEEE Robotics and Automation Letters_ , vol. 4, no. 2, p. 2116–2123, 2019.
* [5] M. Wermelinger, P. Fankhauser, R. Diethelm, P. Krusi, R. Siegwart, and M. Hutter, “Navigation planning for legged robots in challenging terrain,” _2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_ , 2016.
* [6] P. J. G. Ramadge and W. M. Wonham, “The control of discrete event systems,” _Proceedings of the IEEE_ , vol. 77, no. 1, pp. 81–98, 1989.
|
# A Fast Template Periodogram for Detecting Non-Sinusoidal Fixed-Shape Signals
in Irregularly Sampled Time Series
J. Hoffman11affiliation: Xaxis, LLC, 3 World Trade Center, 175 Greenwich
Street, 30th Floor, New York, NY 10007 , J. Vanderplas22affiliation: eScience
Institute, University of Washington, Seattle, WA 98195 , J. D.
Hartman33affiliation: Department of Astrophysical Sciences, Princeton
University, Princeton NJ 08540 , G. Á Bakos33affiliation: Department of
Astrophysical Sciences, Princeton University, Princeton NJ 08540
44affiliation: Institute for Advanced Study, 1 Einstein Drive, Princeton, NJ
<EMAIL_ADDRESS>
###### Abstract
Astrophysical time series often contain periodic signals. The large and
growing volume of time series data from photometric surveys demands
computationally efficient methods for detecting and characterizing such
signals. The most efficient algorithms available for this purpose are those
that exploit the $\mathcal{O}(N\log N)$ scaling of the Fast Fourier Transform
(FFT). However, these methods are not optimal for non-sinusoidal signal
shapes. Template fits (or periodic matched filters) optimize sensitivity for
_a priori_ known signal shapes but at a significant computational cost.
Current implementations of template periodograms scale as
$\mathcal{O}(N_{f}N_{\rm obs})$, where $N_{f}$ is the number of trial
frequencies and $N_{\rm obs}$ is the number of lightcurve observations, and
due to non-convexity, they do not guarantee the best fit at each trial
frequency, which can lead to spurious results. In this work, we present a non-
linear extension of the Lomb-Scargle periodogram to obtain a template-fitting
algorithm that is both accurate (globally optimal solutions are obtained
except in pathological cases) and computationally efficient (scaling as
$\mathcal{O}(N_{f}\log N_{f})$ for a given template). The non-linear
optimization of the template fit at each frequency is recast as a polynomial
zero-finding problem, where the coefficients of the polynomial can be computed
efficiently with the non-equispaced fast Fourier transform. We show that our
method, which uses truncated Fourier series to approximate templates, is an
order of magnitude faster than existing algorithms for small problems
($N\lesssim 10$ observations) and 2 orders of magnitude faster for long base-
line time series with $N_{\rm obs}\gtrsim 10^{4}$ observations. An open-source
implementation of the fast template periodogram is available at
github.com/PrincetonUniversity/FastTemplatePeriodogram.
## 1\. Introduction
Astronomical systems exhibit a wide range of time-dependent variability. By
measuring and characterizing this variability, astronomers are able to infer a
variety of important astrophysical properties about the underlying system.
Periodic signals in noisy astronomical timeseries can be detected with a
number of techniques, including Gaussian process regression (Foreman-Mackey et
al., 2017; Rasmussen & Williams, 2005), least-squares spectral analysis (Lomb,
1976; Scargle, 1982; Barning, 1963; Vaníček, 1971), and information-theoretic
methods (Graham et al., 2013a; Huijse et al., 2012; Cincotta et al., 1995).
For an empirical comparison of some of these techniques applied to several
astronomical survey datasets, see Graham et al. (2013b).
For stationary periodic signals, least-squares spectral analysis — also known
as the Lomb-Scargle (LS) periodogram (Lomb, 1976; Scargle, 1982; Barning,
1963; Vaníček, 1971) — is perhaps the most sensitive and computationally
efficient method of detection. The LS periodogram can be made to scale as
$\mathcal{O}(N_{f}\log N_{f})$, where $N_{f}$ is the number of trial
frequencies, by utilizing the non-equispaced fast Fourier transform (Keiner et
al., 2009; Dutt & Rokhlin, 1993, NFFT) to evaluate frequency-dependent sums,
or by “extirpolating” irregularly spaced observations to a regular grid with
Lagrange polynomials (Press & Rybicki, 1989).
The LS periodogram fits the following model to a set of observations:
$\hat{y}_{\rm LS}(t|\theta,\omega)=\theta_{0}\cos{\omega
t}+\theta_{1}\sin{\omega t}.$ (1)
where $\omega=2\pi/P$ is the (angular) frequency of the underlying signal, and
$\theta_{0}$ and $\theta_{1}$ are the amplitudes of the signal. When the data
$y_{i}$ is composed of a sinusoidal component and a white noise component
(i.e., when the measurement uncertainties are uncorrelated and Gaussian), the
LS periodogram provides a maximum likelihood estimate for the model parameters
($\omega,\theta_{0},$ and $\theta_{1}$).
The LS “power” $P_{LS}(\omega)$ has several definitions in the literature
(Zechmeister & Kürster, 2009), but we adopt the following definition
throughout the paper:
$P(\omega)=\frac{\chi^{2}_{0}-\chi^{2}(\omega)}{\chi^{2}_{0}}$ (2)
where $\chi^{2}_{0}$ is the weighted sum of squared residuals for a constant
fit:
$\chi^{2}_{0}=\sum_{i}w_{i}(y_{i}-\bar{y})^{2}$ (3)
where $\bar{y}=\sum_{i}w_{i}y_{i}$ is the weighted mean of the observations,
and $w_{i}\propto\sigma_{i}^{-2}$ are the normalized weights for each
observation ($\sum_{i}w_{i}=1$), and $\chi^{2}(\omega)$ is the weighted sum of
squared residuals for the best-fit model $\hat{y}$ at a given trial frequency
$\omega=2\pi/P$ where $P$ is the period:
$\chi^{2}(\omega)=\min_{\theta}\sum_{i}w_{i}(y_{i}-\hat{y}(t_{i}|\omega,\theta))^{2}$
(4)
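Equations (2)-(4) translate directly into a brute-force $\mathcal{O}(N_{f}N_{\rm obs})$ periodogram; the sketch below (NumPy, our own illustration, not the paper's implementation) fits the model of Eq. (1) by weighted least squares at each trial frequency.

```python
# Direct O(Nf * Nobs) sketch of the periodogram of Eqs. (2)-(4): at each trial
# frequency, fit the two-parameter sinusoid of Eq. (1) by weighted least
# squares and convert the residual chi^2 into a power.
import numpy as np

def ls_power(t, y, sigma, freqs):
    w = 1.0 / sigma**2
    w = w / w.sum()                            # normalized weights, sum(w) = 1
    ybar = np.dot(w, y)
    chi2_0 = np.dot(w, (y - ybar)**2)          # Eq. (3): constant-fit residuals
    power = np.empty_like(freqs)
    for k, omega in enumerate(2 * np.pi * freqs):
        X = np.column_stack([np.cos(omega * t), np.sin(omega * t)])
        sw = np.sqrt(w)
        theta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
        chi2 = np.dot(w, (y - X @ theta)**2)   # Eq. (4): best-fit residuals
        power[k] = (chi2_0 - chi2) / chi2_0    # Eq. (2)
    return power

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 40, 120))           # irregular sampling
y = np.sin(2 * np.pi * 0.5 * t) + 0.1 * rng.normal(size=t.size)
freqs = np.linspace(0.1, 1.0, 181)
p = ls_power(t, y, np.full(t.size, 0.1), freqs)
assert abs(freqs[np.argmax(p)] - 0.5) < 0.02   # peak at the injected frequency
```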
### 1.1. Bayesian interpretation
We note that, while this formalism captures many data and modeling scenarios,
it is not completely general. For example, correlated uncertainties are not
handled here. A more general Bayesian treatment of periodic models in
astronomical timeseries is better handled by expressing a posterior over the
model parameters.
Assuming a Gaussian likelihood for the observations
$y_{i}\sim\mathcal{N}(\hat{y}(t_{i}|\omega,\theta),\sigma^{2}_{i})$, and
uniform priors on both the frequency parameter $\omega$ and the non-frequency
parameters $\theta$, the posterior is
$\displaystyle p(\omega,\theta|X)$
$\displaystyle=\frac{p(X|\omega,\theta)p(\omega,\theta)}{p(X)}$ (5)
$\displaystyle\propto
p(\omega,\theta)\prod_{i=1}^{N_{\mathrm{obs}}}\mathcal{N}(\hat{y}(t_{i}|\omega,\theta),\sigma^{2}_{i})$
(6)
where we use $X=\\{(t_{i},y_{i},\sigma_{i})|0<i<N_{\mathrm{obs}}\\}$ as
shorthand for the lightcurve observations. The logarithm of the posterior is
$\displaystyle\log p(\omega,\theta|X)$
$\displaystyle=-\frac{1}{2}\sum_{i=1}^{N_{\mathrm{obs}}}\left(\frac{y_{i}-\hat{y}(t_{i}|\omega,\theta)}{\sigma_{i}}\right)^{2}+\mathrm{const.}$
(7)
$\displaystyle=-\frac{W}{2}\sum_{i=1}^{N_{\mathrm{obs}}}w_{i}(y_{i}-\hat{y}(t_{i}|\omega,\theta))^{2}+\mathrm{const.}$
(8)
where $W=\sum_{i}\sigma_{i}^{-2}$. Thus,
$\displaystyle\chi^{2}(\omega)$
$\displaystyle=\min_{\theta}\sum_{i}w_{i}(y_{i}-\hat{y}(t_{i}|\omega,\theta))^{2}$
(9) $\displaystyle=\min_{\theta}\left(-\frac{2}{W}\log
p(\omega,\theta|X)+\mathrm{const.}\right)$ (10)
$\displaystyle=-\frac{2}{W}\max_{\theta}\log
p(\omega,\theta|X)+\mathrm{const.}$ (11)
and therefore, since $P(\omega)=1-\chi^{2}(\omega)/\chi_{0}^{2}$,
$P(\omega)=\frac{2}{W\chi_{0}^{2}}\max_{\theta}\log p(\omega,\theta|X)+\mathrm{const.}$
(12)
The Lomb-Scargle power is a linear transformation of the maximum of the log
posterior over the non-frequency parameters, with the frequency parameter held
fixed. Thus, choosing the frequency that maximizes the periodogram value
corresponds to finding a MAP estimate of the frequency parameter.
A MAP interpretation is more general, and is applicable to scenarios not
considered in this paper (e.g. correlated uncertainties, multi-dimensional
timeseries, etc.), since all of these problems are amenable to MAP estimation
of their model parameters. However, in order to maintain consistency with
notation in the Lomb-Scargle literature (e.g. Zechmeister & Kürster, 2009;
VanderPlas, 2018), we keep our definition of the periodogram restricted as
above and only consider one-dimensional timeseries with heteroscedastic but
uncorrelated uncertainties.
### 1.2. Extending Lomb-Scargle
The LS periodogram has numerous extensions to account for, e.g., biased
estimates of the mean brightness (Zechmeister & Kürster, 2009), non-sinusoidal
signals (Schwarzenberg-Czerny, 1996; Palmer, 2009), multi-band observations
(VanderPlas & Ivezić, 2015), and to mitigate overfitting of more flexible
models via regularization (VanderPlas & Ivezić, 2015). For a detailed review
of the LS periodogram and its extensions, see VanderPlas (2018).
The multi-harmonic LS periodogram (MHLS; Bretthorst et al., 1988;
Schwarzenberg-Czerny, 1996; Palmer, 2009) provides a more flexible model by
adding harmonic components to the fit:
$\hat{y}_{\rm
MHLS}(t|\theta,\omega)=\theta_{0}+\sum_{n=1}^{H}\left(\theta_{2n}\cos{n\omega
t}+\theta_{2n-1}\sin{n\omega t}\right).$ (13)
Additional harmonics are important for modeling signals that, while stationary
and periodic, are non-sinusoidal (e.g. RR Lyrae, eclipsing binaries, etc.).
However, the MHLS periodogram contains $2H+1$ free parameters while the
original LS periodogram contains only $2$ ($3$ if the mean brightness is
considered a free parameter). Including higher order harmonics adds model
complexity, which can degrade the sensitivity of the MHLS periodogram to
sinusoidal or approximately sinusoidal signals.
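The model of Eq. (13) is linear in the $2H+1$ amplitudes, so at a fixed frequency it reduces to ordinary least squares on a harmonic design matrix. A minimal sketch (the column ordering is our own choice, and the signal is a made-up test case):

```python
# Sketch of the multi-harmonic model of Eq. (13): the design matrix gains two
# columns per harmonic, and the 2H+1 amplitudes are free parameters solved by
# least squares (contrast with the template fit, where they are fixed).
import numpy as np

def mhls_design_matrix(t, omega, H):
    """Columns ordered [1, cos(wt), sin(wt), cos(2wt), sin(2wt), ...]."""
    cols = [np.ones_like(t)]                   # theta_0: mean term
    for n in range(1, H + 1):
        cols += [np.cos(n * omega * t), np.sin(n * omega * t)]
    return np.column_stack(cols)               # shape (Nobs, 2H + 1)

t = np.linspace(0, 10, 200)
omega = 2 * np.pi * 0.3
y = 0.5 + np.cos(omega * t) + 0.3 * np.sin(2 * omega * t)  # non-sinusoidal
X = mhls_design_matrix(t, omega, H=2)
theta, *_ = np.linalg.lstsq(X, y, rcond=None)
# The signal lies exactly in the column space, so the amplitudes are recovered.
assert np.allclose(theta, [0.5, 1.0, 0.0, 0.0, 0.3], atol=1e-8)
```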
Tikhonov regularization ($L_{2}$ regularization) is one tool for mitigating
overfitting of the higher-order harmonics. However, adding an $L_{2}$ penalty
to the Fourier amplitudes biases the model, and that bias should be weighed
against the reduction in model variance (VanderPlas & Ivezić, 2015).
### 1.3. Computational scaling
The LS periodogram naively scales as $\mathcal{O}(N_{f}N_{\rm obs})$ where
$N_{f}$ is the number of trial frequencies and $N_{\rm obs}$ is the number of
observations. However, the limiting computations of the LS periodogram involve
sums of trigonometric functions over the observations. When the observations
are regularly sampled, the fast Fourier transform (FFT) (Cooley & Tukey, 1965)
can evaluate such sums efficiently and the LS periodogram scales as
$\mathcal{O}(N_{f}\log N_{f})$.
When the data are not regularly sampled, as is the case for most astronomical
time series, the LS periodogram can be evaluated quickly in one of two popular
ways. The first, by Press & Rybicki (1989), involves “extirpolating”
irregularly sampled data onto a regularly sampled mesh and then performing
FFTs to evaluate the necessary sums. The second, as pointed out by Leroy
(2012), is to use the non-equispaced FFT (Keiner et al., 2009; Dutt & Rokhlin,
1993, NFFT) to evaluate the sums; this provides roughly an order-of-magnitude
speedup over the Press & Rybicki (1989) algorithm, and both algorithms scale
as $\mathcal{O}(N_{f}\log N_{f})$ (Leroy, 2012).
There is a growing population of alternative methods for detecting periodic
signals in astrophysical data. Some of these methods can reliably outperform
the LS periodogram, especially for non-sinusoidal signal shapes (see Graham et
al. (2013b) for a recent empirical review of period finding algorithms).
However, a key advantage of the LS periodogram and its extensions is speed.
Virtually all other “phase-folding” methods scale as $\mathcal{O}(N_{\rm
obs}\times N_{f})$, where $N_{\rm obs}$ is the number of observations and
$N_{f}$ is the number of trial frequencies, while the Lomb-Scargle periodogram
scales as $\mathcal{O}(N_{f}\log N_{f})$. The virtual independence of Lomb-
Scargle’s computation time with respect to the number of observations
(assuming $N_{f}\gtrsim N_{\rm obs}$) is especially valuable for lightcurves
with $N_{\rm obs}\gg\log N_{f}\sim 50$.
Algorithmic efficiency will become increasingly important as the volume of
data produced by astronomical observatories continues to grow larger. The
HATNet survey (Bakos et al., 2004), for example, has already made
$\mathcal{O}(10^{4})$ observations of $\mathcal{O}(10^{6}-10^{7})$ stars. The
Gaia telescope (Gaia Collaboration et al., 2016) is set to produce
$\mathcal{O}(10-100)$ observations of $\mathcal{O}(10^{9})$ stars. The Large
Synoptic Survey Telescope (LSST; LSST Science Collaboration et al. (2009))
will make $\mathcal{O}(10^{2}-10^{3})$ observations of $\mathcal{O}(10^{10})$
stars during its operation starting in 2023.
### 1.4. Template periodograms
When the shape of a stationary periodic signal is known a priori, then the
number of degrees of freedom is the same as the original LS periodogram (with
a floating mean component):
$\hat{y}(t|\theta,\omega)=\theta_{0}+\theta_{1}\mathbf{M}(\omega
t-\theta_{2}),$ (14)
where $\mathbf{M}:[0,2\pi)\rightarrow\mathbb{R}$ is a predefined periodic
template. We refer to the periodogram corresponding to this model as the
“template periodogram.”
As is the case for the LS periodogram, the template periodogram is equivalent
to a maximum-likelihood estimate of the model parameters under the assumption
that measurement uncertainties are Gaussian and uncorrelated (i.e. white
noise).
This paper develops new extensions of least-squares spectral analysis for
arbitrary signal shapes. For non-periodic signals this method is known as
matched filter analysis, and can be extended to search for periodic signals
by, e.g., phase folding the data at different trial periods.
An analysis by Sesar et al. (2016) found that template fitting significantly
improved period and amplitude estimation for RR Lyrae in Pan-STARRS DR1
photometry (Chambers et al., 2016). Since the signal shapes for RR Lyrae in
various bandpasses are known _a priori_ (see Sesar et al. (2010)), template
fitting provides an optimal estimate of amplitude and period, given that the
object is indeed an RR Lyrae star well modeled by at least one of the
templates. Templates were especially crucial for Pan-STARRS data, since there
are typically only 35 observations per source over 5 bands (Hernitschek et
al., 2016), not enough to obtain accurate amplitudes empirically by phase-
folding. By including domain knowledge (i.e. knowledge of what RR Lyrae
lightcurves look like), template fitting allows for accurate inferences of
amplitude even for undersampled lightcurves.
However, the improved accuracy comes at substantial computational cost: the
template fitting procedure took 30 minutes per CPU per object, and Sesar et
al. (2016) were forced to limit the number of fitted lightcurves ($\lesssim
1000$) in order to keep the computational costs to a reasonable level. Several
cuts were made before the template fitting step to reduce the more than 1
million Pan-STARRS DR1 objects to a small enough number, and each of these
steps removes a small portion of RR Lyrae from the sample. Though this number
was reported by Sesar et al. (2016) to be small ($\lesssim 2\%$), it may be
possible to further improve the completeness of the final sample by applying
template fits to a larger number of objects, which would require either more
computational resources, more time, or, ideally, a more efficient template
fitting procedure.
The paper is organized as follows. Section 2 poses the problem of template
fitting in the language of least squares spectral analysis and derives the
fast template periodogram. Section 3 describes a freely available
implementation of the new template periodogram. Section 4 summarizes our
results, addresses caveats, and discusses possible avenues for improving the
efficiency of the current algorithm.
## 2\. Derivations
We define a template $\mathbf{M}$
$\mathbf{M}:[0,2\pi)\rightarrow\mathbb{R},$ (15)
as a mapping from the interval $[0,2\pi)$ to the real numbers. We
restrict our discussion to sufficiently smooth templates such that
$\mathbf{M}$ can be adequately described by a truncated Fourier series
$\hat{\mathbf{M}}(\omega t|H)=\sum_{n=1}^{H}\left(c_{n}\cos{n\omega
t}+s_{n}\sin{n\omega t}\right)$ (16)
for some finite $H>0$.
That the $c_{n}$ and $s_{n}$ values are _fixed_ (i.e., they define the
template) is the crucial difference between the template periodogram and the
multi-harmonic Lomb-Scargle (Palmer, 2009; Bretthorst et al., 1988), where
$c_{n}$ and $s_{n}$ are _free parameters_.
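A truncated Fourier template as in Eq. (16) can be evaluated as follows; the sawtooth-like coefficients are an illustrative choice, not one of the paper's templates.

```python
# Evaluating the truncated Fourier template of Eq. (16): the coefficients
# (c_n, s_n) are fixed and define the template shape.
import numpy as np

def template(phase, c, s):
    """M-hat(phase | H) for phase in [0, 2*pi), with H = len(c) harmonics."""
    n = np.arange(1, len(c) + 1)[:, None]      # harmonic indices 1..H
    return (c[:, None] * np.cos(n * phase) +
            s[:, None] * np.sin(n * phase)).sum(axis=0)

H = 3
c = np.zeros(H)
s = 1.0 / np.arange(1, H + 1)                  # sawtooth-like coefficients
phase = np.linspace(0, 2 * np.pi, 8, endpoint=False)
vals = template(phase, c, s)
# The truncated series is 2*pi-periodic by construction.
assert np.allclose(template(phase + 2 * np.pi, c, s), vals)
```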
We now construct a periodogram for this template. The periodogram assumes that
an observed time series $S=\\{(t_{i},y_{i},\sigma_{i})\\}_{i=1}^{N}$ can be
modeled by a scaled, phase-shifted template that repeats with period
$2\pi/\omega$, i.e.,
$y_{i}\approx\hat{y}(\omega
t_{i}|\theta,\mathbf{M})=\theta_{1}\mathbf{M}(\omega
t_{i}-\theta_{2})+\theta_{3},$ (17)
where $\theta=(\theta_{1},\theta_{2},\theta_{3})\in\mathbb{R}^{3}$ is the set
of model parameters.
The optimal parameters are the location of a local minimum of the (weighted)
sum of squared residuals,
$\chi^{2}(\theta,S)\equiv\sum_{i}w_{i}(y_{i}-\hat{y}(\omega
t_{i}|\theta))^{2},$ (18)
and thus the following condition must hold for all three model parameters at
the optimal solution $\theta=\theta_{\rm opt}$:
$\left.\frac{\partial\chi^{2}}{\partial\theta_{j}}\right|_{\theta=\theta_{\rm
opt}}=0~{}~{}\forall\theta_{j}\in\theta.$ (19)
Note that we have implicitly assumed $\chi^{2}(\theta,S)$ is a $C^{1}$
differentiable function of $\theta$, which requires that $\mathbf{M}$ is a
$C^{1}$ differentiable function. Though this assumption could be violated if
we considered a more complete set of templates (e.g., a box function), our
restriction to truncated Fourier series ensures $C^{1}$ differentiability.
Note that we also implicitly assume $\sigma_{i}>0$ for all $i$ and we will
later assume that the variance of the observations $y_{i}$ is non-zero. If
there are no measurement errors, i.e. $\sigma_{i}=0$ for all $i$, then uniform
weights (setting $\sigma_{i}=1$) should be used. If the variance of the
observations $y$ is zero, the periodogram (as defined in Equation 2) is
undefined for all frequencies. We do not consider the case where
$\sigma_{i}=0$ for some observations $i$ and $\sigma_{j}>0$ for some
observations $j$.
We can derive a system of equations for $\theta_{\rm opt}$ from the condition
given in Equation 19. The explicit condition that must be met for each
parameter $\theta_{j}$ is simplified below, using
$\hat{y}_{i}=\hat{y}(\omega t_{i}|\theta)$ (20)
and
$\partial_{j}\hat{y}_{i}=\left.\frac{\partial\hat{y}(\omega
t|\theta)}{\partial\theta_{j}}\right|_{t=t_{i}}$ (21)
for brevity:
$\begin{split}0&=\left.\frac{\partial\chi^{2}}{\partial\theta_{j}}\right|_{\theta=\theta_{\rm
opt}}\\\
&=-2\sum_{i}w_{i}\left(y_{i}-\hat{y}_{i}\right)(\partial_{j}\hat{y})_{i}\\\
\sum_{i}w_{i}y_{i}(\partial_{j}\hat{y})_{i}&=\sum_{i}w_{i}\hat{y}_{i}(\partial_{j}\hat{y})_{i}.\end{split}$
(22)
The above is a general result that extends to all least squares periodograms.
To simplify derivations, we adopt the following notation:
$\displaystyle\left<X\right>\equiv\sum_{i}w_{i}X_{i}$ (23)
$\displaystyle\left<XY\right>\equiv\sum_{i}w_{i}X_{i}Y_{i}$ (24)
$\displaystyle{\rm Cov}(X,Y)\equiv\left<XY\right>-\left<X\right>\left<Y\right>$ (25)
$\displaystyle{\rm Var}(X)\equiv{\rm Cov}(X,X)$ (26)
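This notation maps directly onto small helper functions; a sketch assuming weights normalized to sum to one:

```python
import numpy as np

def wmean(w, X):
    """<X> = sum_i w_i X_i (weights assumed normalized to sum to 1)."""
    return np.sum(w * X)

def wcov(w, X, Y):
    """Cov(X, Y) = <XY> - <X><Y>."""
    return wmean(w, X * Y) - wmean(w, X) * wmean(w, Y)

def wvar(w, X):
    """Var(X) = Cov(X, X)."""
    return wcov(w, X, X)
```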
In addition, the phase-shifted template
$\mathbf{M}_{\theta_{2}}(\omega t)=\mathbf{M}(\omega t-\theta_{2})$ can be expressed
as
$\displaystyle\mathbf{M}_{\theta_{2}}(\omega t)$
$\displaystyle=\sum_{n}c_{n}\cos n\left(\omega t-\theta_{2}\right)$ (27)
$\displaystyle\qquad+s_{n}\sin{n\left(\omega t-\theta_{2}\right)}$ (28)
$\displaystyle=\sum_{n}\left(c_{n}\cos{n\theta_{2}}-s_{n}\sin{n\theta_{2}}\right)\cos{n\omega
t}+$ (29)
$\displaystyle\qquad\left(s_{n}\cos{n\theta_{2}}+c_{n}\sin{n\theta_{2}}\right)\sin{n\omega
t}$ (30)
$\displaystyle=\sum_{n}\left(\alpha_{n}e^{in\theta_{2}}+\alpha_{n}^{*}e^{-in\theta_{2}}\right)\cos{n\omega
t}+$ (31)
$\displaystyle\qquad\left(-i\left[\alpha_{n}e^{in\theta_{2}}-\alpha_{n}^{*}e^{-in\theta_{2}}\right]\right)\sin{n\omega
t}$ (32)
$\displaystyle=\sum_{n}\left(\alpha_{n}\psi^{n}+\alpha_{n}^{*}\psi^{-n}\right)\cos{n\omega
t}+$ (33)
$\displaystyle\qquad\left(-i\left[\alpha_{n}\psi^{n}-\alpha_{n}^{*}\psi^{-n}\right]\right)\sin{n\omega
t}$ (34) $\displaystyle=\sum_{n}A_{n}(\psi)\cos{n\omega
t}+B_{n}(\psi)\sin{n\omega t}$ (35)
where $\alpha_{n}=(c_{n}+is_{n})/2$, and $\psi\equiv e^{i\theta_{2}}$ is a
convenient change of variable.
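The identity above can be spot-checked numerically. The sketch below draws arbitrary Fourier coefficients $c_{n},s_{n}$, and verifies that $\sum_{n}A_{n}(\psi)\cos n\omega t+B_{n}(\psi)\sin n\omega t$ reproduces the shifted template $\mathbf{M}(\omega t-\theta_{2})$:

```python
import numpy as np

rng = np.random.default_rng(0)
H = 3
c, s = rng.normal(size=H), rng.normal(size=H)   # c_n, s_n for n = 1..H
theta2 = 0.7
psi = np.exp(1j * theta2)
t = np.linspace(0, 2 * np.pi, 50)               # omega*t samples

# Direct evaluation of the shifted template M(omega*t - theta2)
M_shift = sum(c[n - 1] * np.cos(n * (t - theta2))
              + s[n - 1] * np.sin(n * (t - theta2))
              for n in range(1, H + 1))

# Evaluation through A_n(psi), B_n(psi) with alpha_n = (c_n + i s_n)/2
alpha = (c + 1j * s) / 2
A = np.array([alpha[n - 1] * psi**n + np.conj(alpha[n - 1]) * psi**-n
              for n in range(1, H + 1)]).real
B = np.array([-1j * (alpha[n - 1] * psi**n - np.conj(alpha[n - 1]) * psi**-n)
              for n in range(1, H + 1)]).real
M_psi = sum(A[n - 1] * np.cos(n * t) + B[n - 1] * np.sin(n * t)
            for n in range(1, H + 1))

assert np.allclose(M_shift, M_psi)
```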
We also define the following terms:
$\displaystyle\widehat{YM}=\left<y\mathbf{M}_{\theta_{2}}\right>$ (36)
$\displaystyle\widehat{MM}=\left<\mathbf{M}_{\theta_{2}}^{2}\right>$ (37)
$\displaystyle\overline{M}=\left<\mathbf{M}_{\theta_{2}}\right>$ (38)
$\displaystyle MM={\rm Var}(\mathbf{M}_{\theta_{2}})=\widehat{MM}-\overline{M}^{2}$ (39)
$\displaystyle YM={\rm Cov}(\mathbf{M}_{\theta_{2}},y)=\widehat{YM}-\bar{y}\overline{M}$ (40)
For a given phase shift $\theta_{2}$, the optimal amplitude and offset are
obtained from requiring the partial derivatives of the sum of squared
residuals, $\chi^{2}$, to be zero.
Namely, we obtain that
$\displaystyle 0=\frac{\partial\chi^{2}}{\partial\theta_{1}}$
$\displaystyle=2\sum_{i}w_{i}(y_{i}-\hat{y}_{i})\left(-\frac{\partial\hat{y}}{\partial\theta_{1}}\right)_{i}$
(41)
$\displaystyle=\sum_{i}w_{i}(y_{i}-\theta_{1}\mathbf{M}_{\theta_{2}}-\theta_{3})\mathbf{M}_{\theta_{2}}$
(42)
$\displaystyle=\widehat{YM}-\theta_{1}\widehat{MM}-\theta_{3}\overline{M}$
(43)
and
$\displaystyle 0=\frac{\partial\chi^{2}}{\partial\theta_{3}}$
$\displaystyle=2\sum_{i}w_{i}(y_{i}-\hat{y}_{i})\left(-\frac{\partial\hat{y}}{\partial\theta_{3}}\right)_{i}$
(44)
$\displaystyle=\sum_{i}w_{i}(y_{i}-\theta_{1}\mathbf{M}_{\theta_{2}}-\theta_{3})$
(45) $\displaystyle=\bar{y}-\theta_{1}\overline{M}-\theta_{3}$ (46)
This system of equations can then be rewritten as
$\begin{pmatrix}\widehat{MM}&\overline{M}\\\
\overline{M}&1\end{pmatrix}\begin{pmatrix}\theta_{1}\\\
\theta_{3}\end{pmatrix}=\begin{pmatrix}\widehat{YM}\\\ \bar{y}\end{pmatrix}$
(47)
which reduces to
$\displaystyle\begin{pmatrix}\theta_{1}\\\ \theta_{3}\end{pmatrix}$
$\displaystyle=\frac{1}{\widehat{MM}-\overline{M}^{2}}\begin{pmatrix}1&-\overline{M}\\\
-\overline{M}&\widehat{MM}\end{pmatrix}\begin{pmatrix}\widehat{YM}\\\
\bar{y}\end{pmatrix}$ (48)
$\displaystyle=\frac{1}{\widehat{MM}-\overline{M}^{2}}\begin{pmatrix}\widehat{YM}-\bar{y}\overline{M}\\\
\widehat{MM}\bar{y}-\widehat{YM}\overline{M}\end{pmatrix}$ (49)
Letting $MM=\widehat{MM}-\overline{M}^{2}$ and
$YM=\widehat{YM}-\bar{y}\overline{M}$, we have
$\begin{pmatrix}\theta_{1}\\\ \theta_{3}\end{pmatrix}=\begin{pmatrix}YM/MM\\\
\bar{y}-\overline{M}(YM/MM)\end{pmatrix}$ (50)
This means we can rewrite the model
$\hat{y}=\theta_{1}\mathbf{M}_{\theta_{2}}+\theta_{3}$ as
$\hat{y}_{i}=\bar{y}+\left(\frac{YM}{MM}\right)(M_{i}-\overline{M})$ (51)
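The closed-form solution of Equation 50 can be cross-checked against a general weighted least-squares solve. A sketch with hypothetical data and a fixed template evaluation $M_{i}$ (i.e. fixed $\theta_{2}$):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 40
M = np.sin(np.linspace(0, 4 * np.pi, N))      # template values M_i at fixed theta2
y = 2.5 * M + 0.3 + rng.normal(0, 0.1, N)     # data with true theta1=2.5, theta3=0.3
w = np.full(N, 1.0 / N)                       # uniform normalized weights

ybar, Mbar = np.sum(w * y), np.sum(w * M)
YM = np.sum(w * y * M) - ybar * Mbar          # Cov(M, y)
MM = np.sum(w * M * M) - Mbar ** 2            # Var(M)
theta1 = YM / MM                              # Equation 50
theta3 = ybar - Mbar * theta1

# Cross-check against a direct weighted least-squares solve
A = np.column_stack([M, np.ones(N)]) * np.sqrt(w)[:, None]
b = y * np.sqrt(w)
(t1, t3), *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose([theta1, theta3], [t1, t3])

# The periodogram value at this theta2 (Equation 56) lies in (0, 1]
YY = np.sum(w * y * y) - ybar ** 2
P = YM ** 2 / (YY * MM)
assert 0.0 < P <= 1.0
```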
To obtain an expression for the periodogram, $P=1-\chi^{2}/\chi^{2}_{0}$, we
first compute $\chi^{2}$
$\displaystyle\chi^{2}$ $\displaystyle=\sum_{i}w_{i}(y_{i}-\hat{y}_{i})^{2}$
(52)
$\displaystyle=\sum_{i}w_{i}(y_{i}^{2}-2y_{i}\hat{y}_{i}+\hat{y}_{i}^{2})$
(53) $\displaystyle=YY-2\frac{(YM)^{2}}{MM}+\frac{(YM)^{2}}{MM}$ (54)
$\displaystyle=YY-\frac{(YM)^{2}}{MM}$ (55)
Since, $\chi^{2}_{0}=YY$, we have
$P(\omega)=\frac{(YM)^{2}}{YY\cdot MM}$ (56)
We wish to maximize $P(\omega)$ with respect to the phase shift parameter
$\theta_{2}$,
$\displaystyle\partial_{\theta_{2}}P=0$ $\displaystyle=\frac{YM}{YY\cdot
MM}\left(2\partial_{\theta_{2}}(YM)-\frac{YM}{MM}\partial_{\theta_{2}}(MM)\right)$
(57) $\displaystyle=2MM\partial_{\theta_{2}}(YM)-YM\partial_{\theta_{2}}(MM).$
(58)
The final expression is the non-linear condition that must be satisfied by the
optimal phase shift parameter $\theta_{2}$. However, satisfying Equation 57 is
not _sufficient_ to guarantee that $\theta_{2}$ is optimal. The value of the
periodogram at each $\theta_{2}$ satisfying Equation 57 must be computed, and
the globally optimal solution chosen from this set.
We seek a more explicit form for Equation 57. We derive expressions for $MM$
and $YM$, defining
$\displaystyle CC_{nm}\equiv{\rm Cov}(\cos{n\omega t},\cos{m\omega t})$ (59)
$\displaystyle CS_{nm}\equiv{\rm Cov}(\cos{n\omega t},\sin{m\omega t})$ (60)
$\displaystyle SS_{nm}\equiv{\rm Cov}(\sin{n\omega t},\sin{m\omega t})$ (61)
$\displaystyle YC_{n}\equiv\left<(y-\bar{y})\cos{n\omega t}\right>$ (62)
$\displaystyle YS_{n}\equiv\left<(y-\bar{y})\sin{n\omega t}\right>,$ (63)
all of which can be evaluated efficiently using the NFFT.
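For reference, these sums can always be evaluated by direct summation at a single trial frequency (the NFFT accelerates the same computation over a grid of many frequencies). A direct-sum sketch, with an illustrative function name:

```python
import numpy as np

def template_sums(t, y, w, omega, H):
    """Direct evaluation of CC, CS, SS (H x H) and YC, YS (length H).

    These are the weighted covariances of Equations 59-63, with weights w
    normalized to sum to one; the NFFT computes the same quantities
    efficiently across a grid of trial frequencies.
    """
    n = np.arange(1, H + 1)
    C = np.cos(np.outer(n, omega * t))            # shape (H, N)
    S = np.sin(np.outer(n, omega * t))
    Cb, Sb = C @ w, S @ w                         # <cos n wt>, <sin n wt>
    CC = (C * w) @ C.T - np.outer(Cb, Cb)
    CS = (C * w) @ S.T - np.outer(Cb, Sb)
    SS = (S * w) @ S.T - np.outer(Sb, Sb)
    yres = y - np.sum(w * y)                      # y - ybar
    YC = C @ (w * yres)
    YS = S @ (w * yres)
    return CC, CS, SS, YC, YS
```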
The autocovariance of the template values $MM$, is given by
$\displaystyle MM$
$\displaystyle\equiv\sum_{i}w_{i}M_{i}^{2}-\left(\sum_{i}w_{i}M_{i}\right)^{2}$
(64) $\displaystyle=\sum_{i}w_{i}\left(\sum_{n}A_{n}\cos{\omega
nt_{i}}+B_{n}\sin{\omega nt_{i}}\right)^{2}$ (65)
$\displaystyle\qquad-\left(\sum_{n}A_{n}C_{n}+B_{n}S_{n}\right)^{2}$ (66)
$\displaystyle=\sum_{n,m}A_{n}A_{m}CC_{nm}$
$\displaystyle\qquad+(A_{n}B_{m}CS_{nm}+B_{n}A_{m}(CS^{T})_{nm})$
$\displaystyle\qquad+B_{n}B_{m}SS_{nm}$ (67)
$\displaystyle=\sum_{n,m}A_{n}A_{m}CC_{nm}+2A_{n}B_{m}CS_{nm}$
$\displaystyle\qquad+B_{n}B_{m}SS_{nm},$ (68)
using
$\sum_{n,m}A_{n}B_{m}CS_{nm}=\sum_{n,m}A_{m}B_{n}CS_{mn}.$ (69)
We also derive the products $A_{n}A_{m}$, $A_{n}B_{m}$, $B_{n}B_{m}$:
$\displaystyle\begin{split}A_{n}A_{m}&=\left(\alpha_{n}\psi^{n}+\alpha^{*}_{n}\psi^{-n}\right)\left(\alpha_{m}\psi^{m}+\alpha^{*}_{m}\psi^{-m}\right)\\\
&=\alpha_{n}\alpha_{m}\psi^{n+m}+\alpha^{*}_{n}\alpha_{m}\psi^{m-n}\\\
&\qquad+\alpha_{n}\alpha^{*}_{m}\psi^{n-m}+\alpha^{*}_{n}\alpha^{*}_{m}\psi^{-n-m}\end{split}$
(70)
$\displaystyle\begin{split}A_{n}B_{m}&=-i\left(\alpha_{n}\psi^{n}+\alpha^{*}_{n}\psi^{-n}\right)\left(\alpha_{m}\psi^{m}-\alpha^{*}_{m}\psi^{-m}\right)\\\
&=-i\left\\{\alpha_{n}\alpha_{m}\psi^{n+m}+\alpha^{*}_{n}\alpha_{m}\psi^{m-n}\right.\\\
&\qquad\left.-\alpha_{n}\alpha^{*}_{m}\psi^{n-m}-\alpha^{*}_{n}\alpha^{*}_{m}\psi^{-n-m}\right\\}\end{split}$
(71)
$\displaystyle\begin{split}B_{n}B_{m}&=-\left(\alpha_{n}\psi^{n}-\alpha^{*}_{n}\psi^{-n}\right)\left(\alpha_{m}\psi^{m}-\alpha^{*}_{m}\psi^{-m}\right)\\\
&=-\left\\{\alpha_{n}\alpha_{m}\psi^{n+m}-\alpha^{*}_{n}\alpha_{m}\psi^{m-n}\right.\\\
&\qquad\left.-\alpha_{n}\alpha^{*}_{m}\psi^{n-m}+\alpha^{*}_{n}\alpha^{*}_{m}\psi^{-n-m}\right\\}\end{split}$
(72)
Now we have that
$\displaystyle\begin{split}MM_{nm}&=A_{n}A_{m}CC_{nm}+2A_{n}B_{m}CS_{nm}+B_{n}B_{m}SS_{nm}\\\
&=\alpha_{n}\alpha_{m}\widetilde{CC}_{nm}\psi^{n+m}+2\alpha_{n}\alpha^{*}_{m}\widetilde{CS}_{nm}\psi^{n-m}\\\
&\qquad+\alpha^{*}_{n}\alpha^{*}_{m}\widetilde{SS}_{nm}\psi^{-(n+m)}\end{split}$
(73)
where
$\displaystyle\widetilde{CC}_{nm}$
$\displaystyle=(CC_{nm}-SS_{nm})-i\left(CS+CS^{T}\right)_{nm}$ (74)
$\displaystyle\widetilde{CS}_{nm}$ $\displaystyle=(CC_{nm}+SS_{nm})+i\left(CS-
CS^{T}\right)_{nm}$ (75) $\displaystyle\widetilde{SS}_{nm}$
$\displaystyle=(CC_{nm}-SS_{nm})+i\left(CS+CS^{T}\right)_{nm}$ (76)
and for $YM$:
$\displaystyle\begin{split}YM_{k}&=A_{k}YC_{k}+B_{k}YS_{k}\\\
&=\alpha_{k}YC_{k}\psi^{k}+\alpha_{k}^{*}YC_{k}\psi^{-k}\\\
&\qquad-i\left(\alpha_{k}YS_{k}\psi^{k}-\alpha^{*}_{k}YS_{k}\psi^{-k}\right)\\\
&=(YC_{k}-iYS_{k})\alpha_{k}\psi^{k}+(YC_{k}+iYS_{k})\alpha^{*}_{k}\psi^{-k}\\\
&=\alpha_{k}\widetilde{YC}_{k}\psi^{k}+\alpha^{*}_{k}\widetilde{YC}_{k}^{*}\psi^{-k},\end{split}$
(77)
where $\widetilde{YC}_{k}\equiv YC_{k}-iYS_{k}$.
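Equation 77 can be spot-checked against the direct covariance ${\rm Cov}(\mathbf{M}_{\theta_{2}},y)$. A numerical sketch with randomly drawn coefficients and data (here $\omega t$ is folded into the sample array `t`):

```python
import numpy as np

rng = np.random.default_rng(2)
H, N = 3, 60
c, s = rng.normal(size=H), rng.normal(size=H)
t = np.sort(rng.uniform(0, 2 * np.pi, N))   # omega*t samples
y = rng.normal(size=N)
w = np.full(N, 1.0 / N)
theta2 = 1.1
psi = np.exp(1j * theta2)

# Direct covariance Cov(M_theta2, y)
M = sum(c[k - 1] * np.cos(k * (t - theta2)) + s[k - 1] * np.sin(k * (t - theta2))
        for k in range(1, H + 1))
YM_direct = np.sum(w * M * y) - np.sum(w * M) * np.sum(w * y)

# Via the YC_k, YS_k sums and alpha_k = (c_k + i s_k)/2
yres = y - np.sum(w * y)
alpha = (c + 1j * s) / 2
YM_poly = 0.0
for k in range(1, H + 1):
    YC = np.sum(w * yres * np.cos(k * t))
    YS = np.sum(w * yres * np.sin(k * t))
    YCt = YC - 1j * YS                      # \widetilde{YC}_k
    YM_poly += (alpha[k - 1] * YCt * psi**k
                + np.conj(alpha[k - 1]) * np.conj(YCt) * psi**-k).real

assert np.allclose(YM_direct, YM_poly)
```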
We also define $YM^{\prime}=\psi^{H}YM$ and $MM^{\prime}=\psi^{2H}MM$, both of
which are polynomials in $\psi$. Their derivatives are
$\displaystyle\partial YM^{\prime}$
$\displaystyle=H\psi^{H-1}YM+\psi^{H}\partial YM$ (78) $\displaystyle\partial
MM^{\prime}$ $\displaystyle=2H\psi^{2H-1}MM+\psi^{2H}\partial MM.$ (79)
A new polynomial condition can then be expressed in terms of $MM^{\prime}$,
$YM^{\prime}$ and their derivatives.
$\displaystyle 0$ $\displaystyle=2MM\partial(YM)-YM\partial(MM)$ (80)
$\displaystyle=\psi^{3H+1}\left(2MM\partial(YM)-YM\partial(MM)\right)$ (81)
$\displaystyle=2\psi^{2H}MM\left(\psi^{H+1}\partial(YM)\right)$
$\displaystyle\qquad-\psi^{H}YM\left(\psi^{2H+1}\partial(MM)\right)$ (82)
$\displaystyle=2MM^{\prime}\left(\psi\partial(YM^{\prime})-H(YM^{\prime})\right)$
$\displaystyle\qquad-
YM^{\prime}\left(\psi\partial(MM^{\prime})-2H(MM^{\prime})\right)$ (83)
$\displaystyle=\psi\left(2MM^{\prime}\partial(YM^{\prime})-YM^{\prime}\partial(MM^{\prime})\right)$
$\displaystyle\qquad-2H\left(MM^{\prime}YM^{\prime}-YM^{\prime}MM^{\prime}\right)$
(84)
$\displaystyle=2MM^{\prime}\partial(YM^{\prime})-YM^{\prime}\partial(MM^{\prime})$
(85)
The last step assumes that $\psi\neq 0$, which is a valid assumption since
$\psi=e^{i\theta_{2}}$ lies on the unit circle for all real $\theta_{2}$.
We solve for the zeros of the polynomial condition defined by Equation 80
using the numpy.polynomial.polynomial.polyroots function, which computes the
eigenvalues of the polynomial's companion matrix.
Solving for the zeros of a polynomial from a set of coefficients can be
numerically unstable in certain cases, since the coefficients are represented
as finite-precision floating-point numbers. Thus, we scale the roots by their
modulus to ensure they lie on the unit circle. Alternatively, we could use
iterative schemes such as Newton's method to refine the root estimates more
robustly; however, this requires more computation, and the accuracy of the
roots was not a problem for any of the cases the authors have tested.
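The root-finding and unit-circle projection step can be sketched as follows. The helper name is illustrative, not from the released implementation; the coefficients are assumed to be those of the polynomial condition in increasing powers of $\psi$:

```python
import numpy as np
from numpy.polynomial import polynomial as npoly

def candidate_phase_shifts(coeffs):
    """Return candidate theta_2 values from the polynomial condition.

    `coeffs` holds polynomial coefficients in increasing powers of psi, the
    ordering expected by numpy.polynomial.polynomial.polyroots. Following the
    procedure described above, each root is scaled by its modulus so that it
    lies on the unit circle, and theta_2 = arg(psi) is returned. The globally
    optimal theta_2 is then the candidate that maximizes the periodogram.
    """
    roots = npoly.polyroots(np.asarray(coeffs, dtype=complex))
    roots = roots / np.abs(roots)   # project each root onto the unit circle
    return np.angle(roots)

# Example: the condition psi^2 - 1 = 0 has roots +1 and -1,
# i.e. candidate phase shifts theta_2 = 0 and pi
thetas = candidate_phase_shifts([-1, 0, 1])
assert np.allclose(np.sort(np.abs(thetas)), [0.0, np.pi])
```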
### 2.1. Negative amplitude solutions
The model $\hat{y}=\theta_{1}\mathbf{M}_{\theta_{2}}+\theta_{3}$ allows for
$\theta_{1}<0$ solutions. In the original formulation of Lomb-Scargle and in
linear extensions involving multiple harmonics, negative amplitudes translate
to phase differences, since $-\cos{x}=\cos(x-\pi)$ and $-\sin{x}=\sin(x-\pi)$.
However, for non-sinusoidal templates, $\mathbf{M}$, negative amplitudes do
not generally correspond to a phase difference. For example, a detached
eclipsing binary template $\mathbf{M}_{\rm EB}(x)$ cannot be expressed in
terms of a phase-shifted, negated eclipsing binary template; i.e.,
$\mathbf{M}_{\rm EB}(x)\neq-\mathbf{M}_{\rm EB}(x-\phi)$ for any
$\phi\in[0,2\pi)$.
Negative-amplitude solutions found by the fast template periodogram are
usually undesirable: they may produce false positives for lightcurves that
resemble flipped versions of the desired template, and allowing $\theta_{1}<0$
increases the effective number of free parameters of the model, which lowers
the signal-to-noise ratio, especially for weak signals.
One possible remedy for this problem is to set $P_{\rm FTP}(\omega)=0$ if the
optimal solution for $\theta_{1}$ is negative, but this complicates the
interpretation of $P_{\rm FTP}$. Another possible remedy is, for frequencies
that have a $\theta_{1}<0$ solution, to search for the optimal parameters
while enforcing that $\theta_{1}>0$, e.g. via non-linear optimization, but
this likely will eliminate the computational advantage of FTP over existing
methods.
Thus, we allow for negative amplitude solutions in the model fit and caution
the user to check that the best fit $\theta_{1}$ is positive.
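The first remedy mentioned above (zeroing the periodogram at frequencies whose best-fit amplitude is negative) is a one-line post-processing step; a sketch with hypothetical per-frequency arrays (the function name is illustrative):

```python
import numpy as np

def zero_negative_amplitudes(power, theta1):
    """Set P(omega) = 0 wherever the best-fit amplitude theta_1 < 0.

    `power` and `theta1` are per-frequency arrays from a periodogram scan.
    Note the caveat above: this complicates the interpretation of P_FTP.
    """
    power = np.asarray(power, dtype=float).copy()
    power[np.asarray(theta1) < 0] = 0.0
    return power

p = zero_negative_amplitudes([0.9, 0.4, 0.7], [1.2, -0.3, 0.5])
assert list(p) == [0.9, 0.0, 0.7]
```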
### 2.2. Extending to multi-band observations
#### 2.2.1 Multi-phase model
As shown in VanderPlas & Ivezić (2015), the multi-phase periodogram (their
$(N_{\rm base},N_{\rm band})=(0,1)$ periodogram), for any model can be
expressed as a linear combination of single-band periodograms:
$P^{(0,1)}(\omega)=\frac{\sum_{k=1}^{K}\chi^{2}_{0,k}P_{k}(\omega)}{\sum_{k=1}^{K}\chi^{2}_{0,k}}$
(86)
where $K$ denotes the number of bands, $\chi^{2}_{0,k}$ is the weighted sum of
squared residuals between the data in the $k$-th band and its weighted mean
$\left<y\right>$, and $P_{k}(\omega)$ is the periodogram value of the $k$-th
band at the trial frequency $\omega$.
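Equation 86 is simply a $\chi^{2}_{0,k}$-weighted average of the single-band periodograms; a sketch assuming per-band arrays:

```python
import numpy as np

def multiband_periodogram(chi2_0, P):
    """Combine K single-band periodograms (Equation 86).

    chi2_0 : length-K array of reference chi^2 values, one per band.
    P      : (K, N_f) array of single-band periodogram values.
    """
    chi2_0 = np.asarray(chi2_0, dtype=float)
    P = np.atleast_2d(P)
    return (chi2_0[:, None] * P).sum(axis=0) / chi2_0.sum()

# Two bands; the band with larger chi2_0 dominates the weighted average
P_comb = multiband_periodogram([3.0, 1.0], [[0.8, 0.2], [0.4, 0.6]])
assert np.allclose(P_comb, [0.7, 0.3])
```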
With Equation 86, the template periodogram is readily applicable to multi-band
time series, which is crucial for experiments like LSST, SDSS, Pan-STARRS, and
other current and future photometric surveys.
Other multi-band extensions of the template periodogram are provided in
Appendix A.
### 2.3. Computational requirements
For a given number of harmonics $H$, the task of deriving the polynomial given
in Equation 80 requires $\mathcal{O}(H^{2})$ computations, and finding the
roots of this polynomial requires $\mathcal{O}(H^{3})$ computations. The
degree of the final polynomial is $6H-1$.
When considering $N_{f}$ trial frequencies, the polynomial computation and
root-finding step scales as $\mathcal{O}(H^{3}N_{f})$. The computation of the
sums (Equations 59 – 63) scales as $\mathcal{O}(HN_{f}\log HN_{f})$.
Therefore, the entire template periodogram scales as
$\mathcal{O}(HN_{f}\log HN_{f}+H^{3}N_{f}).$ (87)
However, an important consideration is that the peak width scales inversely
with the number of harmonics $\delta f_{\mathrm{peak}}\propto 1/H$ and so the
number of trial frequencies needed to resolve a peak increases linearly with
the number of harmonics in the template.
Factoring in this extra power of $H$, the computational scaling is therefore
$\mathcal{O}(H^{2}N_{f}\log HN_{f}+H^{4}N_{f}).$ (88)
Figure 1.— Computation time of FTP scaled by $NH$ for different numbers of
harmonics. For $H\lesssim 3$, FTP scales sublinearly in $H$ (possibly due to a
constant overhead per trial frequency, independent of $H$). When $3\lesssim
H\lesssim 11$, FTP scales approximately linearly in $H$, and when $H\gtrsim
11$ FTP approaches the $\mathcal{O}(H^{3})$ scaling limit.
For a fixed number of harmonics $H$, the template periodogram scales as
$\mathcal{O}(N_{f}\log N_{f})$. However, for a constant number of trial
frequencies $N_{f}$, the template algorithm scales as $\mathcal{O}(H^{3})$,
and computational resources alone limit $H$ to reasonably small numbers
$H\lesssim 15$ (see Figure 1).
## 3\. Implementation
Figure 2.— Template periodograms performed on a simulated eclipsing binary
lightcurve (shown phase-folded in the left-hand plots). The top-most plot uses
only one harmonic, equivalent to a Lomb-Scargle periodogram. Subsequent plots
use an increasing number of harmonics, which produces a narrower and higher
peak height around the correct frequency. For comparison, the multi-harmonic
extension to Lomb-Scargle is plotted in blue, using the same number of
harmonics as the FTP. The Box Least-Squares (Kovács et al., 2002) periodogram
is shown in the final plot.
An open-source implementation of the template periodogram in Python is
available.111https://github.com/PrincetonUniversity/FastTemplatePeriodogram
Polynomial algebra is performed using the numpy.polynomial module (Jones et
al., 2001–). The nfft Python module, 222https://github.com/jakevdp/nfft which
provides a Python implementation of the non-equispaced fast Fourier transform,
is used to compute the necessary sums for a particular time series.
No explicit parallelism is used anywhere in the current implementation;
however, certain linear algebra operations in Scipy use OpenMP via calls to
BLAS libraries that have OpenMP enabled.
All timing tests were run on a quad-core 2.6 GHz Intel Core i7 MacBook Pro
laptop (mid-2012 model) with 8GB of 1600 MHz DDR3 memory. The Scipy stack
(version 0.18.1) was compiled with multi-threaded MKL libraries.
### 3.1. Comparison with non-linear optimization
In order to evaluate the accuracy and speed of the template periodogram, we
have included slower alternative solvers within the Python implementation of
the FTP that employ non-linear optimization to find the best fit parameters.
Periodograms computed in Figures 2, 3, and 4 used simulated data with
uniformly random observation times and Gaussian-random, homoskedastic,
uncorrelated uncertainties. The eclipsing binary template was generated by
fitting a well-sampled, high signal-to-noise eclipsing binary in the HATNet
dataset (BD+56 603) with a 10-harmonic truncated Fourier series.
#### 3.1.1 Accuracy
Figure 3.— Comparing accuracy between previous methods that rely on non-
linear optimization at each trial frequency with the fast template periodogram
described in this paper. Both methods are applied to the same simulated data
as shown in Figure 2. The FTP consistently finds more optimal template fits
than those found with non-linear optimization, which do not guarantee
convergence to a globally optimal solution. The FTP solves for the optimal fit
parameters directly, and therefore is able to achieve greater accuracy than
template fits done via non-linear optimization. Figure 4.— Comparing the
template periodogram calculated with $H=10$ harmonics to the template
periodogram using a smaller number of harmonics $H<10$. The template and data
used to perform the periodogram calculations are the same as those shown in
Figure 2.
For weak signals or signals folded at the incorrect trial period, there may be
a large number of local $\chi^{2}$ minima in the parameter space, and thus
non-linear optimization algorithms may have trouble finding the global
minimum. The FTP, on the other hand, solves for the optimal parameters
directly, and thus is able to recover optimal solutions even when the signal
is weak or not present.
Figure 3 illustrates the accuracy improvement with FTP. Many solutions found
via non-linear optimization are significantly suboptimal compared to the
solutions found by the FTP.
Figure 4 compares FTP results obtained using the full template $(H=10)$ with
those obtained using smaller numbers of harmonics. The left-most plot compares
the $H=1$ case (weighted Lomb-Scargle), which, as also demonstrated in Figure
2, illustrates the advantage of the template periodogram for known, non-
sinusoidal signal shapes.
#### 3.1.2 Computation time
Figure 5.— Computation time of FTP compared with alternative techniques that
use non-linear optimization at each trial frequency. _Left_ : timing for the
case when $N_{f}=~{}12N_{\rm obs}$, i.e. the cadence of the observations is
constant. _Right_ : timing for the case when $N_{f}$ is fixed, i.e. the
baseline of the observations are constant. Non-linear optimization techniques
scale as $\mathcal{O}(N_{f}N_{\rm obs})$ while the FTP scales as
$\mathcal{O}(HN_{f}\log HN_{f}+N_{f}H^{3})$, where $H$ is the number of
harmonics needed to approximate the template.
FTP scales asymptotically as $\mathcal{O}(N_{f}H\log N_{f}H)$ with respect to
the number of trial frequencies, $N_{f}$ and as $\mathcal{O}(N_{f}H^{3})$ with
respect to the number of harmonics in which the template is expanded, $H$. For
a given resolving power, however, there is an additional factor of $H$ in each
of these terms due to the number of trial frequencies necessary to resolve a
periodogram peak being proportional to $H$. However, for reasonable cases
($N_{f}\lesssim 10^{120}$ when $H=5$) the computation time is dominated by
computing polynomial coefficients and root finding, both of which scale
linearly in $N_{f}$.
The number of trial frequencies needed for finding astrophysical signals in a
typical photometric time series is
$N_{f}=1.75\times 10^{6}\left(\frac{H}{1}\right)\left(\frac{{\rm
baseline}}{10~{}{\rm
yrs}}\right)\left(\frac{\alpha}{5}\right)\left(\frac{15~{}{\rm mins}}{P_{\rm
min}}\right)$ (89)
where $\alpha$ represents the “oversampling factor,” $\Delta f_{\rm
peak}/\Delta f$, where $\Delta f_{\rm peak}\sim 1/{\rm baseline}$ is the
typical width of a peak in the periodogram and $\Delta f$ is the frequency
spacing of the periodogram.
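Equation 89 in code form; a short sketch reproducing the fiducial count (the function name is illustrative):

```python
def n_trial_frequencies(H=1, baseline_yr=10.0, alpha=5.0, p_min_min=15.0):
    """Number of trial frequencies (Equation 89).

    N_f = alpha * H * baseline / P_min, with the oversampling factor alpha
    and minimum period P_min given in minutes.
    """
    baseline_min = baseline_yr * 365.25 * 24 * 60
    return alpha * H * baseline_min / p_min_min

# Fiducial case of Equation 89: roughly 1.75e6 trial frequencies
nf = n_trial_frequencies()
assert 1.7e6 < nf < 1.8e6
```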
Extrapolating from the timing of a test case (500 observations, 5 harmonics,
15,000 trial frequencies), the summations account for approximately 5% of the
computation time when $N_{f}\sim 10^{6}$. If polynomial computations and root-
finding can be improved to the point where they no longer dominate the
computation time, this would provide an order of magnitude speedup over the
current implementation.
Figure 5 compares the timing of the FTP with that of previous methods that
employ non-linear optimization. For the case when $N_{f}\propto N_{\rm obs}$,
FTP achieves a factor of 3 speedup for even the smallest test case (15
datapoints), while for larger cases ($N\sim 10^{4}$) FTP offers 2-3 orders of
magnitude speed improvement. For the constant baseline case, FTP is a factor
of $\sim 2$ faster for the smallest test case and a factor of $\sim 20$ faster
for $N_{\rm obs}\sim 10^{4}$. Future improvements to the FTP implementation
could further improve speedups by 1-2 orders of magnitude over non-linear
optimization.
## 4\. Discussion
Template fitting is a powerful technique for accurately recovering the period
and amplitude of objects with _a priori_ known lightcurve shapes. It has been
used in the literature by, e.g. Stringer et al. (2019); Sesar et al. (2016,
2010), to analyze RR Lyrae in the SDSS, PS1, and DES datasets, where it has
been shown to produce purer samples of RR Lyrae at a given completeness. The
computational cost of current template fitting algorithms, however, limits
their application to larger datasets or larger sets of templates.
We have presented a novel template fitting algorithm that extends the Lomb-
Scargle periodogram (Lomb, 1976; Scargle, 1982; Barning, 1963; Vaníček, 1971)
to handle non-sinusoidal signals that can be expressed in terms of a truncated
Fourier series with a reasonably small number of harmonics ($H\lesssim 10$).
The fast template periodogram (FTP) asymptotically scales as
$\mathcal{O}(N_{f}H^{2}\log N_{f}H^{2}+N_{f}H^{4})$, while previous template
fitting algorithms such as the one used in the gatspy library (VanderPlas,
2016), scale as $\mathcal{O}(N_{f}N_{\rm obs})$. However, the FTP effectively
scales as $\mathcal{O}(N_{f}H^{4})$, since the time needed to compute
polynomial coefficients and perform zero-finding dominates the computational
time for all practical cases ($N_{f}\lesssim 10^{120}$). The $H^{4}$ scaling
effectively restricts templates to those that are sufficiently smooth to be
explained by a small number of Fourier terms.
FTP also improves the accuracy of previous template fitting algorithms, which
rely on non-linear optimization at each trial frequency to minimize the
$\chi^{2}$ of the template fit. The FTP routinely finds superior fits over
non-linear optimization methods.
An open-source Python implementation of the FTP is available at
GitHub.333https://github.com/PrincetonUniversity/FastTemplatePeriodogram The
current implementation could likely be improved by:
1. 1.
Improving the speed of the polynomial coefficient calculations and the zero-
finding steps. This could potentially yield a speedup of $\sim 1-2$ orders of
magnitude over the current implementation.
2. 2.
Exploiting the embarrassingly parallel nature of the FTP using GPUs.
For a constant baseline, the current implementation improves on existing
methods by a factor of a few for lightcurves with $\mathcal{O}(100)$
observations, and by an order of magnitude or more for objects with more than
1,000 observations. These improvements, taken at face value, are not enough to
make template fitting feasible on LSST-sized datasets. However, optimizing the
polynomial computations could yield a factor of $\sim 25-100$ speedup over the
current implementation, which would make the FTP 1-3 orders of magnitude
faster than alternative techniques.
## Appendix A Shared-phase multi-band template periodogram
We derive a multi-band extension for the template periodogram for data taken
in $K$ filters, with $N_{k}$ observations in the $k$-th filter. We use the
same model as the one described in Sesar et al. (2016) in order to illustrate
the applicability of the template periodogram to more sophisticated scenarios.
The $i$-th observation in the $k$-th filter is denoted $y^{(k)}_{i}$. We wish
to fit a periodic, multi-band template
$\mathbf{M}=(\mathbf{M}^{(1)},\mathbf{M}^{(2)},...,\mathbf{M}^{(K)}):[0,2\pi)\rightarrow\mathbb{R}^{K}$
to all observations. We assume the same model used by Sesar et al. (2016),
which assumes the relative amplitudes, phase shifts, and offsets are shared
across bands:
$\hat{y}^{(k)}(t|\theta)=\theta_{1}\mathbf{M}^{(k)}(\omega
t-\theta_{2})+\theta_{3}+\lambda^{(k)}$ (A1)
where $\lambda^{(k)}$ is a fixed relative offset for band $k$. The $\chi^{2}$
for this model is
$\chi^{2}=\sum_{k=1}^{K}\sum_{i=1}^{N_{k}}w^{(k)}_{i}\left(y^{(k)}_{i}-\hat{y}^{(k)}_{i}\right)^{2}$
(A2)
Without loss of generality, we can set the $\lambda^{(k)}$ values to zero by
subtracting them from the observations, i.e., by taking all
$y^{(k)}_{i}\rightarrow y^{(k)}_{i}-\lambda^{(k)}$. We have that
$\chi^{2}=\sum_{k=1}^{K}W^{(k)}\chi^{2}_{k}$ (A3)
where $W^{(k)}\equiv\sum_{i=1}^{N_{k}}w^{(k)}_{i}$ and
$\sum_{k=1}^{K}W^{(k)}=1$. This means that the system of equations reduces to:
$0=\frac{\partial\chi^{2}}{\partial\theta_{1}}=\sum_{k=1}^{K}W^{(k)}\left(\widehat{YM}^{(k)}-\theta_{1}\widehat{MM}^{(k)}-\theta_{3}\overline{M}^{(k)}\right)$
(A4)
for the $\theta_{1}$ parameter, where $\widehat{YM}^{(k)}$,
$\widehat{MM}^{(k)}$, $\overline{M}^{(k)}$ are values from the single band
case computed for each band individually, holding $W^{(k)}=1$ for each band.
That is:
$\displaystyle\widehat{YM}^{(k)}$
$\displaystyle\equiv\frac{1}{W^{(k)}}\sum_{i=1}^{N_{k}}w^{(k)}_{i}y^{(k)}_{i}\mathbf{M}_{\theta_{2}}^{(k)}(\omega
t_{i})$ (A5) $\displaystyle\widehat{MM}^{(k)}$
$\displaystyle\equiv\frac{1}{W^{(k)}}\sum_{i=1}^{N_{k}}w^{(k)}_{i}\left(\mathbf{M}_{\theta_{2}}^{(k)}(\omega
t_{i})\right)^{2}$ (A6) $\displaystyle\overline{M}^{(k)}$
$\displaystyle\equiv\frac{1}{W^{(k)}}\sum_{i=1}^{N_{k}}w^{(k)}_{i}\mathbf{M}_{\theta_{2}}^{(k)}(\omega
t_{i})$ (A7)
For the offset $\theta_{3}$, we have
$0=\frac{\partial\chi^{2}}{\partial\theta_{3}}=\sum_{k=1}^{K}W^{(k)}\left(\bar{y}^{(k)}-\theta_{1}\overline{M}^{(k)}-\theta_{3}\right)$
(A8)
where $\bar{y}^{(k)}$ is the weighted mean for the $k$-th band
($\bar{y}^{(k)}\equiv(1/W^{(k)})\sum_{i=1}^{N_{k}}w^{(k)}_{i}y^{(k)}_{i}$),
again with $y^{(k)}_{i}\rightarrow y^{(k)}_{i}-\lambda^{(k)}$.
So if we redefine the quantities $\widehat{YM}$, $\widehat{MM}$,
$\overline{M}$, and $\bar{y}$ as weighted averages across the bands, i.e.
$\widehat{YM}=\sum_{k=1}^{K}W^{(k)}\widehat{YM}^{(k)}$, etc., the solution for
the optimal parameters has the same form:
$\displaystyle\theta_{1}$ $\displaystyle=\left(\frac{YM}{MM}\right)$ (A9)
$\displaystyle\theta_{3}$
$\displaystyle=\bar{y}-\overline{M}\left(\frac{YM}{MM}\right)$ (A10)
which means that the model for the $k$-th band is
$\hat{y}^{(k)}=\bar{y}+\left(\frac{YM}{MM}\right)\left(M^{(k)}_{i}-\overline{M}\right)$
(A11)
The periodogram has the same form as in the single-band case:
$\displaystyle\chi^{2}$
$\displaystyle=\sum_{k=1}^{K}\sum_{i=1}^{N_{k}}w^{(k)}_{i}\left(y^{(k)}_{i}-\hat{y}^{(k)}_{i}\right)^{2}$
(A12)
$\displaystyle=\sum_{k=1}^{K}\sum_{i=1}^{N_{k}}w^{(k)}_{i}\left((y^{(k)}_{i}-\bar{y})-\left(\frac{YM}{MM}\right)(M_{i}^{(k)}-\overline{M})\right)^{2}$
(A13)
$\displaystyle=\sum_{k=1}^{K}\sum_{i=1}^{N_{k}}w^{(k)}_{i}\left((y^{(k)}_{i}-\bar{y})^{2}-2\left(\frac{YM}{MM}\right)(M_{i}^{(k)}-\overline{M})(y^{(k)}_{i}-\bar{y})+\left(\frac{YM}{MM}\right)^{2}(M_{i}^{(k)}-\overline{M})^{2}\right)$
(A14) $\displaystyle=YY-2\frac{(YM)^{2}}{MM}+\frac{(YM)^{2}}{MM}$ (A15)
$\displaystyle=YY-\frac{(YM)^{2}}{MM}.$ (A16)
Since there is a single shared offset between the bands (i.e. we assume the
mean magnitude is the same in all bands after subtracting $\lambda^{(k)}$),
the variance for the signal, $YY$, is not
$\sum_{k=1}^{K}\sum_{i=1}^{N_{k}}w^{(k)}_{i}(y^{(k)}_{i}-\bar{y}^{(k)})^{2}$
but $\sum_{k=1}^{K}\sum_{i=1}^{N_{k}}w^{(k)}_{i}(y^{(k)}_{i}-\bar{y})^{2}$.
Construction of the polynomial for the multi-band case can be performed by
first computing the polynomial expression for
$\psi^{2H}\widehat{MM}=\sum_{k=1}^{K}\sum_{i=1}^{N_{k}}w^{(k)}_{i}\left(\psi^{H}M^{(k)}_{i}\right)^{2}$
and a separate polynomial expression for $\psi^{H}\overline{M}$, which can
then be squared and subtracted from $\psi^{2H}\widehat{MM}$ to find
$MM^{\prime}=\psi^{2H}MM$.
For $YM^{\prime}=\psi^{H}YM$, we merely compute $\psi^{H}\widehat{YM}^{(k)}$
for each band, take the weighted average of the polynomial coefficients for
all bands
($\psi^{H}\widehat{YM}=\sum_{k}^{K}W^{(k)}\psi^{H}\widehat{YM}^{(k)}$) and
subtract the $\psi^{H}\bar{y}\overline{M}$ polynomial to get $YM^{\prime}$.
After computing the polynomials $MM^{\prime}$ and $YM^{\prime}$, the
polynomial in Equation 80 can be computed quickly, and the remaining steps for
finding the optimal model parameters are the same as in the single-band case.
The computational complexity of this model scales as $\mathcal{O}(KN_{f}H\log
N_{f}H+N_{f}H^{3})$ where $K$ is the number of filters. Since the polynomial
zero-finding step is the limiting computation in most real-world applications,
the multi-band template periodogram corresponding to the Sesar et al. (2016)
model should not be significantly more computationally intensive than the
single-band case.
Joel Hartman and GB acknowledge support from NASA grant NNX17AB61G. JTV is
supported by the University of Washington eScience Institute, with funding
from the Alfred P. Sloan Foundation, the Gordon and Betty Moore Foundation,
and the Washington Research Foundation.
## References
* Bakos et al. (2004) Bakos, G., Noyes, R. W., Kovács, G., et al. 2004, PASP, 116, 266
* Barning (1963) Barning, F. J. M. 1963, Bull. Astron. Inst. Netherlands, 17, 22
* Bretthorst et al. (1988) Bretthorst, G. L., Hung, C.-C., D’Avignon, D. A., & Ackerman, J. J. H. 1988, Journal of Magnetic Resonance, 79, 369
* Chambers et al. (2016) Chambers, K. C., Magnier, E. A., Metcalfe, N., et al. 2016, ArXiv e-prints, arXiv:1612.05560
* Cincotta et al. (1995) Cincotta, P. M., Mendez, M., & Nunez, J. A. 1995, ApJ, 449, 231
* Cooley & Tukey (1965) Cooley, J. W., & Tukey, J. W. 1965, Math. Comput., 19, 297
* Dutt & Rokhlin (1993) Dutt, A., & Rokhlin, V. 1993, SIAM J. Sci. Comput., 14, 1368
* Foreman-Mackey et al. (2017) Foreman-Mackey, D., Agol, E., Ambikasaran, S., & Angus, R. 2017, AJ, 154, 220
* Gaia Collaboration et al. (2016) Gaia Collaboration, Prusti, T., de Bruijne, J. H. J., et al. 2016, A&A, 595, A1
* Graham et al. (2013a) Graham, M. J., Drake, A. J., Djorgovski, S. G., Mahabal, A. A., & Donalek, C. 2013a, MNRAS, 434, 2629
* Graham et al. (2013b) Graham, M. J., Drake, A. J., Djorgovski, S. G., et al. 2013b, MNRAS, 434, 3423
* Hernitschek et al. (2016) Hernitschek, N., Schlafly, E. F., Sesar, B., et al. 2016, ApJ, 817, 73
* Huijse et al. (2012) Huijse, P., Estevez, P. A., Protopapas, P., Zegers, P., & Principe, J. C. 2012, IEEE Transactions on Signal Processing, 60, 5135
* Jones et al. (2001–) Jones, E., Oliphant, T., Peterson, P., et al. 2001–, SciPy: Open source scientific tools for Python, [Online; accessed 2017-01-19]
* Keiner et al. (2009) Keiner, J., Kunis, S., & Potts, D. 2009, ACM Trans. Math. Softw., 36, 19:1
* Kovács et al. (2002) Kovács, G., Zucker, S., & Mazeh, T. 2002, A&A, 391, 369
* Leroy (2012) Leroy, B. 2012, A&A, 545, A50
* Lomb (1976) Lomb, N. R. 1976, Ap&SS, 39, 447
* LSST Science Collaboration et al. (2009) LSST Science Collaboration, Abell, P. A., Allison, J., et al. 2009, ArXiv e-prints, arXiv:0912.0201
* Palmer (2009) Palmer, D. M. 2009, ApJ, 695, 496
* Press & Rybicki (1989) Press, W. H., & Rybicki, G. B. 1989, ApJ, 338, 277
* Rasmussen & Williams (2005) Rasmussen, C. E., & Williams, C. K. I. 2005, Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning) (The MIT Press)
* Scargle (1982) Scargle, J. D. 1982, ApJ, 263, 835
* Schwarzenberg-Czerny (1996) Schwarzenberg-Czerny, A. 1996, ApJ, 460, L107
* Sesar et al. (2010) Sesar, B., Ivezić, Ž., Grammer, S. H., et al. 2010, ApJ, 708, 717
* Sesar et al. (2016) Sesar, B., Hernitschek, N., Mitrović, S., et al. 2016, ArXiv e-prints, arXiv:1611.08596
* Stringer et al. (2019) Stringer, K. M., Long, J. P., Macri, L. M., et al. 2019, AJ, 158, 16
* VanderPlas (2016) VanderPlas, J. 2016, gatspy: General tools for Astronomical Time Series in Python, Astrophysics Source Code Library, ascl:1610.007
* VanderPlas (2018) VanderPlas, J. T. 2018, ApJS, 236, 16
* VanderPlas & Ivezić (2015) VanderPlas, J. T., & Ivezić, Ž. 2015, ApJ, 812, 18
* Vaníček (1971) Vaníček, P. 1971, Ap&SS, 12, 10
* Zechmeister & Kürster (2009) Zechmeister, M., & Kürster, M. 2009, A&A, 496, 577
# DNN-Life: An Energy-Efficient Aging Mitigation Framework for Improving the
Lifetime of On-Chip Weight Memories in Deep Neural Network Hardware
Architectures
Muhammad Abdullah Hanif1, Muhammad Shafique2 1Faculty of Informatics,
Technische Universität Wien (TU Wien), Vienna, Austria
2Division of Engineering, New York University Abu Dhabi (NYUAD), Abu Dhabi,
United Arab Emirates
<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
Negative Biased Temperature Instability (NBTI)-induced aging is one of the
critical reliability threats in nano-scale devices. This paper makes the first
attempt to study the NBTI aging in the on-chip weight memories of deep neural
network (DNN) hardware accelerators, subjected to complex DNN workloads. We
propose DNN-Life, a specialized aging analysis and mitigation framework for
DNNs, which jointly exploits hardware- and software-level knowledge to improve
the lifetime of a DNN weight memory with reduced energy overhead. At the
software-level, we analyze the effects of different DNN quantization methods
on the distribution of the bits of weight values. Based on the insights gained
from this analysis, we propose a micro-architecture that employs low-cost
memory-write (and read) transducers to achieve an optimal duty-cycle at run
time in the weight memory cells, thereby balancing their aging. As a result,
our DNN-Life framework enables efficient aging mitigation of weight memory of
the given DNN hardware at minimal energy overhead during the inference
process.
## I Introduction
DNN accelerators have already become an essential part of various machine
learning systems [1][2]. DNNs usually require a large number of parameters to
offer high accuracy, which comes at the cost of high memory requirements; see
Fig. 1a. Dedicated memory hierarchies are designed to tradeoff between the
low-cost storage offered by the off-chip DRAMs and the energy-/performance-
efficient access offered by the on-chip SRAMs [1]; see Fig. 1b for access
energy. This has led to an increasing trend towards the use of larger on-chip
memory in the state-of-the-art DNN accelerators [3][4], with the recent wafer-
scale chips having up to 18 GB of on-chip memory [5]. However, due to
continuous technology scaling, the on-chip SRAM-based memories are becoming
increasingly vulnerable to different reliability threats, for example, soft
errors and aging [6][7][8]. Studies have shown that even a single fault in
weights of critical neurons can result in significant degradation of
application-level accuracy [9]. State-of-the-art works have focused on
analyzing and mitigating the effects of faults in DNN accelerators w.r.t. DNN
accuracy [10]. However, to the best of our knowledge, no prior works have
analyzed and optimized the aging of the on-chip weight memories of DNN
accelerators, especially when considering diverse dataflows of different DNNs
and the impact of different types of quantizations on the weight
distributions.
Aging due to NBTI: When a negative gate-to-source voltage is applied to a PMOS transistor, it can break the Si-H bonds at the oxide interface, thereby causing a gradual increase in the threshold voltage ($V_{th}$) over the device lifetime, which results in poor drive current and a reduced noise margin [11].111A similar phenomenon, called PBTI, occurs in NMOS transistors, though NBTI is considered more serious than PBTI [6].
To overcome this $V_{th}$ shift, the operating frequency of the device has to
be reduced by more than 20% over its entire lifetime [12]. However, due to
strict performance and energy constraints (specifically for embedded
applications), the $V_{th}$ shift cannot be addressed just by design-time
delay margins or adaptive operating frequency adjustments [13], as this leads
to a significant loss in the system’s performance and energy efficiency.
Therefore, in traditional computing systems, alternate opportunities have to
be exploited to overcome this challenge [12]. One such opportunity lies in the
fact that the NBTI aging phenomenon is partially reversed by removing the
stress.
Figure 1: (a) Accuracy and size comparison of a few state-of-the-art DNNs
(b) Access energy comparison of SRAM with DRAM (data source: [1]).
NBTI Aging of On-chip Memories: On-chip memories are typically built using
6T-SRAM cells to achieve high area and power efficiency. A 6T-cell is composed
of two inverters coupled with two access transistors (see Fig. 2a). The
inverters store complementary values to store a single bit. Each inverter has
a PMOS transistor and an NMOS transistor. Depending on whether the cell is
storing ‘0’ or ‘1’, one of the two PMOS transistors is always under stress while it is on. Since the aging of a cell is determined by its most-aged transistor, the lowest aging is achieved when both PMOS transistors receive, on average, the same amount of stress over the entire lifetime of the device, i.e., when the percentage of the lifetime for which the cell stores a ‘1’ (the duty-cycle) is 50%, as shown in Fig. 2b. Note that NBTI aging strongly
depends on average long-term stress and weakly on short-term statistics [14].
Therefore, the key challenge in aging mitigation of on-chip memories is to
balance their duty-cycle over the entire lifetime without affecting system-
level performance.
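The duty-cycle notion above can be made concrete with a small sketch. This is an illustrative helper, not from the paper: a cell's history is modeled as (bit, duration) pairs, and the duty-cycle is the fraction of total time spent storing a '1'.

```python
# Illustrative sketch (assumed model, not the paper's): a cell's lifetime is a
# list of (bit, duration) pairs; duty-cycle = fraction of time spent at '1'.
def duty_cycle(history):
    """history: iterable of (bit, duration) pairs; returns fraction of time at '1'."""
    total = sum(d for _, d in history)
    ones = sum(d for b, d in history if b == 1)
    return ones / total

# A cell storing '1' for 30 of 100 time units has duty-cycle 0.3 -- as far
# from the ideal 0.5 (and hence as aged) as a cell with duty-cycle 0.7.
print(duty_cycle([(1, 30), (0, 70)]))  # 0.3
```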
Figure 2: (a) A 6T-SRAM Cell; and (b) its SNM degradation after 7 years [15]
State-of-the-art techniques and their limitations: Various techniques have
been proposed at circuit-level and at architecture-level. At circuit-level,
the structure of SRAM-cells is modified to reduce the aging rate [16][13]. For
example, Ricketts et al. [16] proposed an asymmetric SRAM structure for
workloads having biased bit distribution, but due to their high data
dependence, they are applicable only in specific scenarios. Recovery boosting
through a dedicated recovery-accelerating circuit is another method for
enhancing the lifetime of the SRAM cells [17], but it increases power/energy
consumption due to additional transistors per cell, and therefore cannot be
used in energy-constrained large-sized memories [18]. At architecture-level,
periodic inversion of data is used to reduce the aging rate of on-chip caches
[19]. However, it cannot guarantee optimal duty-cycle, specifically in cases
where the same data is periodically reused, e.g., in DNN-based systems where
the same set of parameters are reused for processing each input sample.
Calimera et al. [20] improved the recovery of unutilized portions of memory, but at the high area and energy cost of expensive online monitoring; the technique also suffers from serious performance degradation under dynamic workloads. Another set of techniques uses bit rotations to counter NBTI aging in registers [15], but they work only in cases where the overall distribution of bits is relatively balanced. Moreover, they use barrel shifters that incur
high area and power overheads. The work in [21] proposed a configurable micro-
architecture for reducing aging rate of video memories, but only works for
streaming video applications.
In summary, the state-of-the-art techniques either incur high overheads in
terms of area and power/energy or rely on certain specific workloads, and thus
cannot be employed in DNN accelerators due to the unique properties of DNN
hardware and workloads, as we will illustrate later in this paper.
Additional Challenges from the Deep Learning Perspective: The dataflow (i.e.,
computation scheduling) for a given DNN on a specific hardware is defined as
per the DNN architecture and the hardware implementation to achieve maximum
energy-/performance-efficiency. Altering the dataflow to balance the duty-
cycle in on-chip SRAM cells can result in significant degradation of system-
level efficiency. Therefore, an aging mitigation technique that does not
require any alteration to the dataflow or the mapping of the data in on-chip
SRAM is desired.
Our Novel Contributions: Towards this, we propose DNN-Life, an aging analysis
and mitigation framework for on-chip memories of DNN hardware (see Fig. 3).
Our framework employs two key features:
1.
Aging Analysis [Section III]: We analyze the impact of using different data
representation formats and quantization methods for weights of a DNN on the
probability distribution of weight-bits, as this can provide useful insights
for designing an effective and low-overhead aging mitigation technique.
2.
Aging Mitigation [Section IV]: We propose a scheme and supporting micro-
architecture for mitigating the NBTI-aging of 6T-SRAM-based on-chip weight
memory of DNN accelerators with minimal energy overhead. Noteworthy, our
scheme does not require any alteration to the dataflow of DNN inference or on-
chip data mapping, and thereby maintains the energy and performance benefits
of the system. The micro-architectural extensions for aging mitigation are
integrated in the DNN accelerator before and after the on-chip weight memory
in the form of aging-aware write and read transducers, as shown in Fig. 4.
Figure 3: Overview of the design-time and run-time steps involved in our DNN-
Life framework. Our novel contributions are highlighted in colored boxes.
## II Overview of Our DNN-Life Framework
Figure 4: (a) Architecture of the baseline DNN accelerator. The highlighted
boxes, i.e., Write Data Encoder (WDE), Read Data Decoder (RDD) and Aging
Controller, are the proposed modules for mitigating NBTI aging of weight
memory. (b) A detailed view of the processing array and the accumulation unit.
In this work, we propose DNN-Life, a novel aging analysis and mitigation
framework for weight memories of DNN hardware accelerators. It employs a low-
cost data encoding scheme that accounts for diverse DNN workloads and adapts over time to balance the duty-cycle of each on-chip weight memory cell, thereby alleviating NBTI-aging effects. Towards this, the two key features of our
framework are:
1.
Analysis: We analyze the probability distribution of weight-bits of different
pre-trained DNNs to find key insights that help in developing a low-cost
aging-mitigation scheme. To consider the variations in the distribution across
number representation formats and the methods used to transform the weights to
those formats, we consider different number representation formats and
different commonly used conversion methods. The detailed analysis and insights
are presented in Section III.
2.
Architecture: Based on the gathered insights, we design a data encoding module
and an aging controller. The encoder is responsible for encoding the weights
before writing the values to the weight memory, and the aging controller is
responsible for generating encoding information required to encode the data
such that the duty-cycle is balanced. The encoding information is then stored
to be used by the corresponding decoder module. The data encoder is deployed
inside the DNN hardware accelerator right before the weight memory, and the
corresponding decoder is installed after the memory, to decode the weights
before passing them for computations. The integration of the encoder and the
decoder modules in a DNN accelerator is illustrated in Fig. 4a. The details of
the micro-architecture are presented in Section IV.
### II-A DNN Hardware Architecture
Our DNN hardware architecture is based on well-established DNN accelerator
models, such as [22] for dense DNNs. Our accelerator is composed of an
Activation Buffer, a Weight Buffer, a Processing Array, and an Accumulation
Unit; see Fig. 4a. Our proposed weight-memory aging mitigation modules
integrated in the architecture are also shown in the figure (see details in
Section IV). The activation and weight buffers provide intermediate storage
for the activations and weights, respectively, to reduce the costly off-chip
memory accesses. The buffers provide data to the processing array for
performing the computations. For this work, we assume a memory hierarchy
similar to Bit-Tactical [22], DaDianNao [3] and TPU [4], according to which:
1) the activation buffer is large enough to store the activations of a single
layer of a DNN; 2) the activation memory can provide $N$ activation
values to the processing array at a time; and 3) the weight memory can provide
$f\times N$ weights to the processing array simultaneously. The processing
array (see Fig. 4b) is composed of $f$ Processing Elements (PEs) that share the activations and can therefore perform $N$ multiplications for $f$ different filters at the same time. Each PE has an
adder tree to compute the sum of the multiplications. The computed sum is
passed to the accumulation unit where it is added with the corresponding
partial sums to generate the output activation value. Note, as the filters can
be significantly large, the computation of each output activation can take
several cycles, depending on the filter size.
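The per-cycle work of the processing array can be sketched as follows. This is a hedged model of the description above, not the paper's implementation: the array sizes and names are assumptions, and NumPy's matrix product stands in for the $f$ PEs each summing $N$ products through an adder tree.

```python
import numpy as np

# Assumed toy sizes: f PEs, each with N multipliers sharing one activation vector.
f, N = 8, 8
activations = np.random.rand(N)       # N activations broadcast to all PEs
weights = np.random.rand(f, N)        # f x N weights, one row per filter

# Each PE multiplies N activation/weight pairs; its adder tree sums them.
# A matrix-vector product models all f PEs in one cycle:
pe_sums = weights @ activations       # one partial sum per filter

# The accumulation unit adds each PE's output to that filter's running partial sum:
partial_sums = np.zeros(f)
partial_sums += pe_sums               # repeated over cycles for large filters
```

For a large filter, the last two lines repeat over many cycles, each contributing one chunk of $N$ products per filter, which matches the note that computing one output activation can take several cycles.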
Figure 5: Division of filters of a CONV layer of a DNN into smaller blocks
that can be accommodated in the on-chip weight memory. Different colors
correspond to different sets of filters/blocks. The gray colored boxes define
one block of $r\times c\times ch\times f$ size. The steps show the sequence in
which the blocks are moved to the on-chip fabric for scheduling their
computations.
### II-B Dataflow in the DNN Accelerator
To perform the computations of a DNN layer using the above accelerator, the
weights have to be partitioned into blocks that can be accommodated in the on-
chip memory. The goal of partitioning is to maximize the use of available PEs.
The input/output feature maps and the filters/neurons all are divided into so-
called tiles, depending on the available on-chip storage for the corresponding
data type. Works like SmartShuttle [23] provide methods to find an optimal
tiling configuration and computation scheduling policy for a layer of a DNN
for a given memory hierarchy.
Fig. 5 illustrates the policy that we employ for partitioning the filters of a
CONV layer. Note, we support the well-established tiling technique so that we
can demonstrate that our technique can benefit a wide-range of existing DNN
hardware accelerators. The figure also illustrates the sequence in which the
blocks are moved to the on-chip weight memory and the corresponding
computations are scheduled. The filters are first divided into sets, where
each set contains $f$ filters. Note, $f$ is mainly defined based on the
number of filters that the hardware accelerator can process in parallel.
Afterwards, a chunk of data (grey boxes in Fig. 5) from a set is selected to
be moved to the on-chip memory. The selected chunk contains a block of data of
size $r\times c\times ch$ from the same location of each filter in the set.
The sequence in which the grey boxes are traversed in the filters defines the rest of the dataflow; this sequence is shown as steps in Fig. 5.
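The partitioning just described can be sketched as a generator of on-chip block mappings. This is an illustrative assumption: the function name and the exact nesting of the loops (i.e., the step order of Fig. 5, which is not fully specified here) are hypothetical; what is taken from the text is that filters are grouped into sets of $f$ and each block covers an $r\times c\times ch$ region from the same location of every filter in a set.

```python
# Hedged sketch of the block traversal (loop order is one plausible choice).
def block_sequence(num_filters, R, C, CH, f, r, c, ch):
    """Yield (filter_set_start, row, col, channel) offsets, one per on-chip block."""
    for set_start in range(0, num_filters, f):   # sets of f filters
        for ch0 in range(0, CH, ch):             # then traverse channel,
            for r0 in range(0, R, r):            # row, and column offsets
                for c0 in range(0, C, c):
                    yield (set_start, r0, c0, ch0)

# 16 filters of 4x4x8 weights, f=8, blocks of 2x2x4 -> 2*2*2*2 = 16 mappings
print(len(list(block_sequence(16, 4, 4, 8, 8, 2, 2, 4))))  # 16
```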
## III Analysis of the Distribution of Weight-Bits for Different DNNs & their
Impact on Duty-Cycle
Before presenting the design of the proposed aging mitigation modules in
Section IV, here we first present an analysis which highlights the rationale
behind the proposed design.
### III-A Analyzing the Distribution of Weight-Bits
For this analysis, we consider the AlexNet and the VGG-16 networks, trained on
the ImageNet dataset. As different data representations for weights, we
consider 32-bit floating point representation (IEEE 754 standard) and 8-bit
integer format achieved using range-linear symmetric and asymmetric
quantization techniques [24]. Fig. 6 illustrates the ratio of observing a ‘1’
to the total number of observations (which corresponds to probability of
observing a ‘1’) at each bit-location of a word for all three data
representation formats for both the networks. By analyzing the distributions,
the following key observations are made:
1.
The probability of getting a ‘1’ value at a particular bit-location of a
randomly selected weight depends on the network, the data representation
format, and the method used to transform the data to the particular data
representation format. For example, the probability of getting a ‘1’ at a
particular bit-location in symmetric 8-bit representation is almost the same
across bit-locations within a network for both the considered DNNs, however,
it varies across networks. Similarly, the probability of getting a ‘1’ at the
lower bit-locations in 32-bit floating-point representation is around 0.5,
however, the distribution of bits at higher bit-locations varies across bit-
locations as well as across DNNs.
2.
Representation of weights using a specific format cannot guarantee a
distribution that offers 0.5 probability at each bit-location, i.e., a
distribution that can potentially lead to a balanced duty-cycle. For example,
out of all the studied cases, only the distribution of the AlexNet when
represented using 8-bit integer format achieved using symmetric range-linear
quantization offers close to 0.5 probability for all the bit-locations.
3.
The average probability of getting a ‘1’ across bit-locations in a specific
format is also not guaranteed to be equal to 0.5. For example, see the
distributions of 8-bit asymmetrically quantized DNNs. Therefore, barrel
shifter-based balancing techniques would not produce desirable results in such
cases.
Figure 6: Distribution of the bits of the weights of different DNNs when represented in different data representation formats. Symmetric and asymmetric indicate which post-training quantization method was used to transform the data for the corresponding distribution.
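The per-bit statistic behind this analysis can be sketched in a few lines. The helper below is an illustrative assumption (name and interface are not from the paper): given integer-encoded weight words, it estimates the probability of observing a '1' at each bit-location; real inputs would be the quantized weights of a pre-trained DNN.

```python
import numpy as np

# Hedged sketch: probability of a '1' at each bit-location of integer words.
def bit_one_probability(words, width=8):
    """Return, for bits 0..width-1, the fraction of words with a '1' at that bit."""
    words = np.asarray(words, dtype=np.uint64)
    return np.array([np.mean((words >> b) & 1) for b in range(width)])

# All-ones words give probability 1.0 at every bit-location:
print(bit_one_probability([0xFF, 0xFF]))
```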
### III-B A Probabilistic Model-based Analysis for Aging of 6T-SRAM
On-chip Weight Memory of a DNN Accelerator
In the following, we develop a probabilistic model to analyze the
effectiveness of different aging mitigation techniques.
#### III-B1 Probabilistic Model
Assume the on-chip memory of a given DNN accelerator is composed of $I\times
J$ cells. For mapping the weights of a DNN onto the memory, we assume: (a) the
same dataflow as presented in Fig. 5; (b) each block of weights is kept in the
on-chip memory for equal amount of time, and it is fetched only once during a
single inference (similar to the dataflow for the DNN accelerator proposed in
[22]); (c) each block of data mapped onto the on-chip memory fits perfectly to
it. Based on the aforementioned conditions and the given DNN size, we can
divide the DNN into $K$ blocks, which translates to $K$ data mappings
onto the on-chip weight memory. Now, if the same DNN is used repeatedly for
inferencing with the same dataflow, a single on-chip memory cell is mapped
with only $K$ different bits. If the probability of getting a ‘1’ for all the
bits is given by $\rho$, the probability of getting a duty-cycle less than or equal to $b/K$, or greater than or equal to $1-b/K$, can be computed using
the following equation, except when $b/K=0.5$, where the probability is 1.
$\footnotesize
P_{b/K}=\sum_{i=0}^{b}\binom{K}{i}\rho^{i}\times(1-\rho)^{K-i}+\sum_{i=K-b}^{K}\binom{K}{i}\rho^{i}\times(1-\rho)^{K-i}$
(1)
Here, $b$ is an arbitrary variable with the range from $0$ to $\lfloor
K/2\rfloor$. Note that we combine (i) the cases in which the duty-cycle is less than or equal to $b/K$ and (ii) the cases in which it is greater than or equal to $1-b/K$, because in a symmetric 6T-SRAM cell both cases cause the same level of stress in one of the two PMOS transistors. Assuming this probability to be the same for all cells of the on-chip memory, the probability of at least $n$ cells (out of $I\times J$) experiencing a duty-cycle less than or equal to $b/K$, or greater than or equal to $1-b/K$, can be computed using the following equation.
$\footnotesize P_{n}=\sum_{i=n}^{I\times J}\binom{I\times J}{i}P_{b/K}^{i}\times(1-P_{b/K})^{I\times J-i}$ (2)
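Eqs. (1) and (2) translate directly into code. The sketch below follows the formulas above (function names are assumptions); as the text notes, the two tails of Eq. (1) overlap at $i=K/2$ when $b/K=0.5$, where the probability is simply 1.

```python
from math import comb

# Eq. (1): probability that a cell's duty-cycle is <= b/K or >= 1 - b/K,
# when each of its K written bits is '1' with probability rho.
# (For b/K = 0.5 the two sums overlap at i = K/2; the paper sets P = 1 there.)
def p_bk(b, K, rho):
    low  = sum(comb(K, i) * rho**i * (1 - rho)**(K - i) for i in range(0, b + 1))
    high = sum(comb(K, i) * rho**i * (1 - rho)**(K - i) for i in range(K - b, K + 1))
    return low + high

# Eq. (2): probability that at least n of `cells` cells fall in that range.
def p_n(n, cells, p_b):
    return sum(comb(cells, i) * p_b**i * (1 - p_b)**(cells - i)
               for i in range(n, cells + 1))

# Case study of Sec. III-B2: K = 20, rho = 0.5, b/K = 0.3 (b = 6) gives P > 0.1
print(p_bk(6, 20, 0.5))
```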
#### III-B2 An Example Case-Study
Let us consider a scenario where $K=20$ and $\rho=0.5$ (i.e., the best-case
with balanced bit distribution), and $I\times J=8192$. Fig. 7a shows the
probability for each possible value of $b$ computed using Eq. 1. Note, even
for $b/K=0.3$, the probability is over 0.1, i.e., more than 10% of the cells
are expected to experience a duty-cycle of at most 0.3 or at least 0.7.
Figure 7: Probability of occurrence of duty-cycle $\leq b/K$ or $\geq 1-b/K$ when (a) $K=20$, and (b) $K=160$
Now, if we employ a given aging mitigation technique that offers up to 7 shifts
to increase the number of different bits that are mapped to a single cell, we
can theoretically increase the value of $K$ to 160, assuming the bits to be
independent from each other and an ideal shifting policy. Putting $K=160$ into the above-mentioned example, Fig. 7b shows the probabilities for different
$b/K$ values. As can be seen from Fig. 7b, the probabilities at lower $b/K$
values have dropped significantly. The above analysis implies that by
significantly increasing $K$ and having $\rho=0.5$, we can achieve close to
ideal duty-cycle for all the cells.
Now, instead of a barrel shifter, if we employ an inversion-based duty-cycle
balancing technique where every other write to the same location is inverted,
for the given scenario, the value of $K$ remains the same, as it is even.
Moreover, as $\rho$ is defined to be 0.5, the inversion-based policy has no
impact on $\rho$ either. Therefore, we get the same probabilities as presented
in Fig. 7a. However, note that the inversion-based policy is mainly useful for
achieving $\rho=0.5$ in cases where the distribution of bits is biased either
towards ‘0’ or ‘1’.
### III-C Challenges in Designing an Efficient Aging Mitigation System
Based on the above analysis, we outline the following key challenges in
designing a generic aging mitigating system.
1.
The probability of occurrence of non-ideal duty-cycle is considerable even
with the state-of-the-art fixed aging mitigation techniques. Therefore, a more
robust method has to be designed by exploiting the fact that NBTI-aging is
more dependent on the average duty-cycle over the lifetime of the device [14].
2.
The distribution of bits and the duty-cycle is significantly affected by the
datatype used for representing the weights. Therefore, the mitigation
technique should be generic and independent of the datatype used so that it is
beneficial for various DNN accelerators.
Moreover, in practical scenarios, each layer of a DNN can have a different size, so its processing time can vary significantly across layers; different DNNs can also have different numbers of layers. A method that keeps track of all these factors at a fine granularity could significantly reduce aging rates, but such methods are prohibitively costly. This makes it very challenging to develop a generic method that offers effective aging mitigation at reasonable overhead.
## IV A Micro-architecture for Mitigating Aging of the On-Chip Weight Memory
of DNN Accelerators
To address the above challenges, we propose a Write Data Encoder (WDE) for
encoding the weights before writing them to the on-chip weight memory, and a
Read Data Decoder (RDD) which performs the inverse function of the WDE while
reading the data from the on-chip memory and before passing it to the
processing array. The integration of the proposed modules in the DNN
accelerator is shown in Fig. 4a. Moreover, we propose an aging mitigation
controller which generates the control signals (metadata) for the write (and
read) transducer. The proposed micro-architectures of the WDE and the aging
mitigation controller is shown in Fig. 8.
Figure 8: Proposed micro-architecture for effective aging mitigation of
6T-SRAM weight memory of DNN accelerators.
Write Data Encoder (WDE): It leverages inversion logic which, besides its low overhead222Low overhead compared to other techniques such as shifting, which requires costly barrel shifters (as shown later in Section V)., also makes it possible to perfectly balance the distribution of bits in the memory cells when the distribution is originally biased towards either ‘0’ or ‘1’, as highlighted in Section III. The inversion logic in the proposed micro-architecture is implemented using XOR gates, as they allow the aging mitigation controller to enable or disable it using just a 1-bit enable ($E$) signal.
Another key advantage of this design is that the micro-architecture of the RDD
is the same as WDE, where the same $E$ signal (metadata) that is used to
encode the weights is used (at a later point in time) for decoding them before
passing them to the processing array. Moreover, the proposed WDE and RDD
modules are highly scalable, as increasing their width requires only a linear increase in the number of XOR gates. Therefore, the widths of
these modules can be defined directly based on the DNN accelerator
configuration without affecting the energy-efficiency of the system.
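The XOR-based encode/decode path can be sketched functionally. The word width and names below are illustrative assumptions; what is taken from the text is that a 1-bit enable signal $E$ conditionally inverts every bit, and that applying the same operation twice recovers the original word, which is why the RDD reuses the WDE design with the stored metadata.

```python
# Hedged sketch of the WDE/RDD: conditional bitwise inversion driven by E.
WIDTH = 8                      # assumed word width
MASK = (1 << WIDTH) - 1        # all-ones mask for WIDTH bits

def wde(word, enable):
    """Invert all bits of `word` when enable == 1 (XOR with an all-ones mask)."""
    return word ^ (MASK if enable else 0)

rdd = wde  # the decoder is the identical circuit, driven by the stored E bit

w = 0b1011_0010
assert rdd(wde(w, 1), 1) == w  # inversion is its own inverse
assert wde(w, 0) == w          # E = 0 leaves the word unchanged
```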
Aging Mitigation Controller: The controller is the core part of the proposed
micro-architecture, as it is responsible for generating the enable signal
($E$) that enables/disables the inversion logic in WDE. The design is based on
the observation made in Section III that the higher the number of different bits written to an SRAM cell during its lifetime (i.e., $K$ in Eq. 1), provided they are drawn from a uniform distribution, the lower the chance of observing a deviation of its duty-cycle from 0.5 (see Fig. 7), i.e., from the ideal point shown in Fig. 2b. Therefore, to increase the number of different bits written to an SRAM cell, we employ a True Random Bit Generator (TRBG) to generate the enable signal and decide whether the upcoming data should be written to the memory cell with or without inversion. The TRBG adds randomness to the bits written to the memory and thereby leads to a larger $K$ value and lower aging.
Note in practical scenarios, the output of TRBGs can be biased towards either
‘0’ or ‘1’, which can eventually affect the duty-cycle. Therefore, to mitigate
this, we periodically invert the output of the TRBG after a defined number of
iterations with the help of an $M$-bit register before using it as the enable
signal, which balances the bias.
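The bias-balancing idea can be demonstrated numerically. The sketch below is a software stand-in (a seeded PRNG modeling a biased TRBG) and assumes one particular interpretation of the periodic inversion, namely flipping the output in every other window of $2^{M}$ writes; the exact period in the hardware is not specified here.

```python
import random

# Hedged sketch: a 0.7-biased "TRBG" whose output is inverted in every other
# window of 2**M writes, pushing the long-run fraction of 1s toward 0.5.
def balanced_stream(n, bias=0.7, M=4, seed=0):
    rng = random.Random(seed)
    period = 2 ** M
    for i in range(n):
        bit = 1 if rng.random() < bias else 0
        if (i // period) % 2 == 1:   # invert output in alternate windows
            bit ^= 1
        yield bit

bits = list(balanced_stream(200_000))
print(sum(bits) / len(bits))  # close to 0.5 despite the 0.7-biased source
```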
Figure 9: SNM degradation of 6T-SRAM on-chip weight memory cells of the
baseline DNN accelerator when used for performing inferences only using the
AlexNet network. Each bar graph shows the percentage of the number of cells
(Y-axis) experiencing different levels of SNM degradation (X-axis). Figure 10:
Overall experimental setup used for evaluation.
## V Results and Discussion
### V-A Experimental Setup
Fig. 10 illustrates the overall experimental setup used for evaluation. The
setup consists of hardware synthesis for estimating the power, area and delay
characteristics of the proposed modules, and simulations for aging estimation
of the 6T-SRAM on-chip weight memory of different DNN hardware accelerators.
For hardware synthesis, we implemented different aging balancing circuits and
our DNN-Life architecture in Verilog. The circuits are synthesized for the
TSMC 65nm technology using Cadence Genus.
For aging estimation, we use Static Noise Margin (SNM) to quantify the NBTI-
aging of 6T-SRAM cells, similar to [21][25]. The SNM defines the tolerance to
noise that directly affects the read stability of a cell [26], i.e., if the
SNM of a cell is low, the cell is highly susceptible to read failures. As per
[15][21][25], SNM mainly depends on the duty-cycle over the entire lifetime of
the cell, and the least SNM degradation is achieved at 50% duty-cycle. To
obtain SNM results, we employ a similar device aging model as used in state-
of-the-art studies like [21][25]. However, due to its duty-cycle optimization
focus, our proposed technique is orthogonal to the given device aging models,
and other device-level models can easily be integrated in our framework. Based
on the models, the SNM degradation of a 6T-SRAM cell can be computed using the
duty-cycle. From the analysis, the best SNM degradation for 6T-SRAM cell after
7 years is 10.82% (at 50% duty-cycle), and the worst is 26.12% (at 0% and 100%
duty-cycle).
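Only the two endpoints of the aging model are reported here (10.82% SNM degradation at 50% duty-cycle, 26.12% at 0% or 100% after 7 years). The sketch below is therefore an illustrative placeholder, not the actual device model: it linearly interpolates between the two reported points, symmetric about the ideal 50% duty-cycle.

```python
# Assumed placeholder model: linear interpolation between the reported
# endpoints of 7-year SNM degradation; NOT the paper's device-level model.
def snm_degradation_pct(duty_cycle):
    """Map duty-cycle in [0, 1] to an SNM degradation percentage."""
    deviation = abs(duty_cycle - 0.5) / 0.5   # 0 at the ideal point, 1 at the extremes
    return 10.82 + (26.12 - 10.82) * deviation

print(snm_degradation_pct(0.5))  # 10.82 (best case, balanced duty-cycle)
print(snm_degradation_pct(1.0))  # 26.12 (worst case, fully biased cell)
```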
For large-scale simulations, we integrated the output of these models into a
memory simulator of the baseline DNN hardware (described in Section II-A). The
simulator takes the DNN hardware configuration, dataflow, pre-trained DNN
architecture and test samples as inputs. We also built a memory simulator for
a TPU-like hardware architecture [4] to validate the proposed aging-mitigation
technique across DNN hardware accelerators. The hardware configurations used
for the evaluation are presented in Table I. The DNNs used are the AlexNet and
the VGG-16 with the ImageNet dataset and a custom network with MNIST dataset.
The custom network is composed of two CONV layers and two FC layers, i.e.,
CONV(16,1,5,5), CONV(50,16,5,5), FC(256,800) and FC(10,256). For each setting, the duty-cycles are estimated based on the values observed over 100 inferences. The bias-balancing register is a 4-bit register (i.e., $M=4$) in all corresponding cases.
TABLE I: Hardware configurations and settings used in evaluation
| | Baseline Accelerator (Section II-A) | TPU-like NPU [4] |
| --- | --- | --- |
| Weight memory size | 512KB | 256KB |
| Activation memory size | 4MB | 24MB |
| PE array size | 8 PEs (1 PE = 8 Multipliers) | 256 x 256 PEs (1 PE = 1 MAC) |
| Networks | AlexNet | AlexNet, VGG-16 and Custom |
### V-B Aging Estimation Results and Comparisons
In this subsection, we analyze the impact of using different aging mitigation
policies on the SNM degradation of the 6T-SRAM on-chip weight memory cells
after 7 years. We mainly considered four different policies: (1) No aging
mitigation, (2) Inversion-based, (3) Barrel shifter-based, and (4) DNN-Life.
For the proposed DNN-Life, we consider three different cases: (i) the TRBG is
unbiased and generates 0s and 1s with equal probability (referred to in the
results as Bias=0.5); (ii) the TRBG is biased and generates 1s with 0.7
probability, and the aging controller does not have a bias balancing register
(referred to as without bias balancing with Bias=0.7); and (iii) the TRBG is
biased and generates 1s with 0.7 probability, and the aging controller has a
4-bit bias balancing register (referred to as with bias balancing with
Bias=0.7).
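One plausible reading of the M-bit bias balancing register is a small saturating counter that tracks the running excess of 1s in the TRBG stream and inverts an incoming bit whenever that would reduce the excess. The sketch below is our own interpretation for illustration only; the actual DNN-Life controller logic may differ. It shows why even a 4-bit register suffices to restore a ~0.5 duty cycle from a 0.7-biased source.

```python
class BiasBalancer:
    """Hedged sketch of an M-bit bias-balancing register: a saturating
    signed counter tracks the disparity (#1s - #0s) of the emitted stream,
    and the incoming TRBG bit is inverted whenever passing it through would
    increase that disparity. Illustrative only, not the paper's exact design."""

    def __init__(self, M=4):
        self.limit = 2 ** (M - 1)   # saturation bound of the M-bit counter
        self.count = 0              # running disparity of emitted bits

    def balance(self, bit):
        out = bit
        # invert the bit if it would push the disparity further from zero
        if (bit == 1 and self.count > 0) or (bit == 0 and self.count < 0):
            out = 1 - bit
        self.count = max(-self.limit,
                         min(self.limit, self.count + (1 if out else -1)))
        return out

bb = BiasBalancer(M=4)
stream = [1] * 7 + [0] * 3          # a 0.7-biased source, repeated
out = [bb.balance(b) for b in stream * 1000]
assert abs(sum(out) / len(out) - 0.5) < 0.01
```

Because the counter bounds the emitted disparity, the output duty cycle converges to 0.5 regardless of the input bias.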
Moreover, we performed experiments considering three different data
representation formats for weights: (1) 32-bit floating point format; (2)
8-bit integer format when weights are quantized using symmetric quantization
method; and (3) 8-bit integer format when weights are quantized using
asymmetric quantization method.
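The two 8-bit integer formats can be sketched as follows. These are minimal illustrations under standard conventions (symmetric: a single scale mapping onto [-127, 127]; asymmetric: scale plus zero-point mapping [min, max] onto [0, 255]); the helper names and the exact clipping/rounding choices are ours and may differ in detail from the quantization used in the paper.

```python
import numpy as np

def quantize_symmetric(w):
    """8-bit symmetric quantization sketch: zero maps to integer 0,
    one scale, no zero-point. Assumes w is not all zeros."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def quantize_asymmetric(w):
    """8-bit asymmetric quantization sketch: [min, max] maps onto [0, 255]
    via a scale and an integer zero-point. Assumes max(w) > min(w)."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0
    zero_point = int(np.round(-lo / scale))
    q = np.clip(np.round(w / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

rng = np.random.default_rng(1)
w = rng.normal(size=1000).astype(np.float32)
q, s = quantize_symmetric(w)
assert np.max(np.abs(q.astype(np.float32) * s - w)) <= s  # error within one step
```

The relevance to aging is that the two formats produce different bit patterns (and hence different per-cell duty cycles) for the same weights.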
Fig. 9 shows the distributions of SNM degradation in the memory cells obtained
using different aging mitigation policies and a pre-trained AlexNet model. The
Y-axis of each bar graph shows the percentage of cells and the X-axis shows
the SNM degradation levels. Note that, for these experiments, we assumed the
baseline DNN accelerator configuration presented in Table I and the dataflow
shown in Fig. 5 with $f=8$. We also assumed that only a single DNN (i.e., the
AlexNet) is used for data inference throughout the lifetime of
the device. As can be seen in the figure, the inversion-based and barrel
shifter-based aging balancing reduce the SNM degradation of the SRAM cells;
however, they do not offer the minimum SNM degradation (see 2 and 3 in
comparison with 1 in Fig. 9). This behavior is consistent across all the data
representation formats (see 2 through 7 in comparison with their respective
without-aging-mitigation graphs in Fig. 9). Specifically, the inversion-based
aging balancing offers sub-optimal aging mitigation in the case of the 32-bit
floating point format (see 2 in Fig. 9), where most of the cells experience
around 10.8% SNM degradation (see a in Fig. 9). However, this is not the ideal
scenario, as 4% of the cells experience the highest level of SNM degradation
(see b in Fig. 9) and a few experience a moderate level of SNM degradation
(see c in Fig. 9). In contrast, the proposed DNN-Life with bias balancing
offers maximum aging mitigation (i.e., all the cells experience around 10.8%
SNM degradation) in all the cases (see 8, 9 and 10 in Fig. 9).
Impact of biased TRBG on aging balancing of 6T-SRAM on-chip weight memory:
Fig. 9 also illustrates the impact of using the proposed design without bias
correction when the duty-cycle of the TRBG is 0.7. As can be seen in the
figure, having a biased TRBG without bias correction leads to a smaller
reduction in the SNM degradation of the 6T-SRAM cells (e.g., see 11 in
comparison with 8 in Fig. 9). This behavior is consistent across all the data
representation formats.
Figure 11: SNM degradation of 6T-SRAM on-chip weight memory cells of a TPU-
like NPU when used for performing inferences using the AlexNet, the VGG-16 and
the custom DNN, individually. The networks are quantized to 8-bit format using
symmetric range-linear quantization method.
Impact across different hardware accelerators: Fig. 11 shows the impact of
using the proposed aging-mitigation technique for a TPU-like [4] Neural
Processing Unit (NPU) architecture that has an on-chip weight FIFO which is
four tiles deep, where one tile is equivalent to weights for $256\times 256$
PEs. Each PE has a single MAC unit that can perform 8-bit multiplication. For
our implementation, we assumed the weight FIFO to be a circular buffer-based
design. We performed analysis using the three different networks mentioned
earlier. All the DNNs are quantized to 8-bits using post-training symmetric
quantization. Considering the dataflow of the NPU, the parameter $f$ was set
to 256. As can be seen in Fig. 11, the inversion-based aging mitigation policy
offers optimal results for the AlexNet and the VGG-16 networks (see 1 and 2 in
Fig. 11). However, when used for the custom DNN, almost all the memory cells
experience significant SNM degradation (see 3 in Fig. 11). The barrel
shifter-based approach also offers sub-optimal results (see 4 through 6 in
Fig. 11). However, the proposed DNN-Life with bias balancing offers maximum
aging mitigation (see 7 through 9 in Fig. 11). This shows that DNN-Life can be
used for a wide range of DNN accelerators.
### V-C Area and Power Results
The area, power and delay characteristics of three different WDEs composed of
different aging balancing units are shown in Table II. All three WDEs are
designed for a 64-bit width. The barrel shifter-based WDE consumes the most
area and power. The proposed design consumes slightly more power and area than
the inversion-based WDE. However, as shown in the previous subsection, it
offers the best aging mitigation in all possible scenarios, regardless of the
size of the given DNN, the data representation format and the on-chip weight
memory size. Note that, at the hardware level, we realized the TRBG using a
5-stage ring oscillator.
TABLE II: Hardware results of different Write Data Encoders (WDEs)

| | Delay [ps] | Power [nW] | Area [cell area] |
|---|---|---|---|
| Barrel shifter-based WDE | 977.7 | 345190 | 9035 |
| Inversion-based WDE | 811.6 | 10716 | 195 |
| Proposed WDE with Aging Mitigation Controller | 581.8 | 13747 | 295 |
## VI Conclusion
In this paper, we proposed DNN-Life, an aging-mitigation framework that
employs read and write transducers to reduce NBTI-induced aging of 6T-SRAM on-
chip weight memory in DNN hardware accelerators. We analyzed different DNN
data representation formats at the software-level and their potential for
balancing the duty-cycle in SRAM cells. Based on the analysis, we proposed a
micro-architecture that makes use of a True Random Bit Generator (TRBG) to
ensure optimal duty-cycle at runtime, thereby balancing the aging of
complementary parts in 6T-SRAM cells of the weight memory. As a result, our
DNN-Life enables efficient aging mitigation of weight memory of a given DNN
hardware with minimal energy overhead.
## Acknowledgment
This work is partially supported by Intel Corporation through Gift funding for
the project "Cost-Effective Dependability for Deep Neural Networks and Spiking
Neural Networks."
## References
* [1] V. Sze et al., “Efficient processing of deep neural networks: A tutorial and survey,” _Proceedings of IEEE_ , vol. 105, no. 12, pp. 2295–2329, 2017.
* [2] M. Capra et al., “Hardware and software optimizations for accelerating deep neural networks: Survey of current trends, challenges, and the road ahead,” _IEEE Access_ , 2020.
* [3] Y. Chen et al., “Dadiannao: A machine-learning supercomputer,” in _IEEE/ACM MICRO Symposium_, 2014, pp. 609–622.
* [4] N. P. Jouppi et al., “In-datacenter performance analysis of a tensor processing unit,” in _ACM/IEEE ISCA_, 2017, pp. 1–12.
* [5] P. McLellan. (2019) Hot chips: The biggest chip in the world. Accessed: 2019-09-10. [Online]. Available: https://community.cadence.com/cadence_blogs_8/b/breakfast-bytes/posts/the-biggest-chip-in-the-world
* [6] J. Henkel et al., “Reliable on-chip systems in the nano-era: Lessons learnt and future trends,” in _ACM/ESDA/IEEE DAC_, 2013, p. 99.
* [7] M. Shafique et al., “Robust machine learning systems: Challenges, current trends, perspectives, and the road ahead,” _IEEE Design & Test_, vol. 37, no. 2, pp. 30–57, 2020.
* [8] J. Henkel et al., “Thermal management for dependable on-chip systems,” in _IEEE ASP-DAC_, 2013, pp. 113–118.
* [9] M. A. Hanif et al., “Robust machine learning systems: Reliability and security for deep neural networks,” in _IEEE IOLTS_ , 2018, pp. 257–260.
* [10] S. Kim et al., “Matic: Learning around errors for efficient low-voltage neural network accelerators,” in _IEEE DATE_, 2018, pp. 1–6.
* [11] K. Kang et al., “Nbti induced performance degradation in logic and memory circuits: How effectively can we approach a reliability solution?” in _IEEE ASP-DAC_, 2008, pp. 726–731.
* [12] D. Gnad et al., “Hayat: Harnessing dark silicon and variability for aging deceleration and balancing,” in _2015 52nd ACM/EDAC/IEEE DAC_ , 2015.
* [13] J. Shin et al., “A proactive wearout recovery approach for exploiting microarchitectural redundancy to extend cache sram lifetime,” in _ACM/IEEE SIGARCH Computer Arch. News_ , vol. 36, no. 3, 2008, pp. 353–362.
* [14] J. Abella et al., “Penelope: The nbti-aware processor,” in _IEEE/ACM MICRO Symposium_. IEEE Computer Society, 2007, pp. 85–96.
* [15] S. Kothawade et al., “Analysis and mitigation of nbti aging in register file: An end-to-end approach,” in _IEEE ISQED_, 2011, pp. 1–7.
* [16] A. Ricketts et al., “Investigating the impact of nbti on different power saving cache strategies,” in _IEEE DATE_, 2010, pp. 592–597.
* [17] T. Siddiqua et al., “Enhancing nbti recovery in sram arrays through recovery boosting,” _IEEE TVLSI_, vol. 20, no. 4, pp. 616–629, 2011.
* [18] B. Zatt et al., “A low-power memory architecture with application-aware power management for motion disparity estimation in multiview video coding,” in _IEEE/ACM ICCAD_, 2011, pp. 40–47.
* [19] T. Jin et al., “Aging-aware instruction cache design by duty cycle balancing,” in _IEEE IVLSI_, 2012, pp. 195–200.
* [20] A. Calimera et al., “Partitioned cache architectures for reduced nbti-induced aging,” in _IEEE DATE_, 2011, pp. 1–6.
* [21] M. Shafique et al., “Enaam: Energy-efficient anti-aging for on-chip video memories,” in _ACM/IEEE DAC_, 2015, pp. 101:1–101:6.
* [22] A. Delmas et al., “Bit-tactical: Exploiting ineffectual computations in convolutional neural networks: Which, why, and how,” _preprint arXiv:1803.03688_ , 2018.
* [23] J. Li et al., “Smartshuttle: Optimizing off-chip memory accesses for deep learning accelerators,” in _IEEE DATE_, 2018, pp. 343–348.
* [24] D. Lin et al., “Fixed point quantization of deep convolutional networks,” in _ICML_ , 2016, pp. 2849–2858.
* [25] M. Shafique et al., “Content-aware low-power configurable aging mitigation for sram memories,” _IEEE Transactions on Computers_ , vol. 65, no. 12, pp. 3617–3630, 2016.
* [26] K. Agarwal et al., “Statistical analysis of sram cell stability,” in _ACM/IEEE DAC_, 2006, pp. 57–62.
# Auxetic behavior on demand: a three steps recipe for new designs
Daniel Acuna1,3<EMAIL_ADDRESS>Francisco Gutiérrez1,3 Alvaro S. Nunez1,3,4
<EMAIL_ADDRESS>Gustavo Düring2,3<EMAIL_ADDRESS>1Departamento de Física,
Facultad de Ciencias Físicas y Matemáticas, Universidad de Chile 2Instituto
de Física, Pontificia Universidad Católica de Chile, Casilla 306, Santiago,
Chile 3ANID - Millenium Nucleus of Soft Smart Mechanical Metamaterials,
Santiago, Chile 4CEDENNA, Avda. Ecuador 3493, Santiago, Chile
###### Abstract
Auxetic behavior is a fascinating mechanical property which probably
represents the paradigm of mechanical metamaterials. Although it is nowadays a
widely known phenomenon, a fundamental micro-mechanical understanding that
allows the proper control and design of new auxetic materials remains elusive.
In this letter we take an important step forward by setting a unified
framework for a large class of auxetic materials composed of 2D rotating rigid
units. In addition, a simple pathway for the design of new auxetic materials
is established, based on three simple rules. In particular, we construct for
the first time exotic crystals, quasi-crystals and isotropic materials with a
negative Poisson ratio, which in an ideal design reach the perfect auxetic
limit. At the core of this behavior is a low-energy mode reminiscent of a
non-trivial floppy mode that exists in the idealized scenario. A natural
connection between 2D rotating rigid-unit auxetics and an antiferromagnetic
spin system is established, which allows us to identify general properties of
the material. Finally, we show that the auxetic response is robust under small
perturbations of the design.
## I Introduction
The design of new materials with unusual mechanical properties and advanced
functionalities has become a very active field of research in soft matter
physics. These so-called mechanical metamaterials acquire their unusual
behavior from their particular inner architecture and not from the properties
of the constituent materials. In recent years, metamaterials have been
engineered to display topological protection [1, 2, 3, 4, 5, 6], programmable
shapes [7, 8, 9, 10, 11], nonlinear response [10, 12, 11, 13, 14, 2] and
negative elastic constants [15, 13, 16, 12, 17], among others. Auxetic
materials are probably the epitome of mechanical metamaterials; they were
first intentionally designed by Lakes in 1987 [15]. Unlike common elastic
materials, an auxetic material compressed (expanded) in a given direction
responds by contracting (expanding) in the perpendicular direction. This
unusual property is characterized by a negative Poisson's ratio $\nu$, minus
the ratio between the strain in one direction and the strain in the
perpendicular direction.
A negative Poisson’s ratio has been found in natural bioauxetics [18, 19] and
molecular auxetics [20, 21]. Nowadays, with the onset of 3D printing, a wide
range of auxetic materials are being developed [22, 23, 16, 13, 11, 24], with
interest in their enhanced mechanical properties, like increased energy
absorption[25], enhanced indentation resistance[26], high fracture
toughness[27], synclastic curvature in bending[28], and variable
permeability[29], with applications in bio-medicine[30] and textiles[31] as
some examples.
Figure 1: Auxetic behavior on demand. Our proposed algorithm generated three
instances of auxetic materials, shown in the top row: a) a random lattice
(isotropic lattice), b) a Penrose quasicrystal, and c) an exotic crystal. The
structures were simulated using the commercial software Ansys Mechanical [32]
with a finite element method. Their uniaxial compression results are depicted
in the bottom row, where the auxetic behavior is apparent; the color shows the
intensity of the stress in the material. The basic mechanism of this unusual
property is the coordination and synchronization of the buckling instability
at each of the weak links that provide the structure with its stability. The
effective collective pattern that emerges is analogous to an antiferromagnetic
arrangement: each of the two interconnected lattices that form the bipartite
system rotates in the opposite sense, as illustrated in the inset of figure
c). Out of this analogy, we infer that several properties of the anisotropic
XY antiferromagnet [33] carry over to the context of auxetic systems. Finally,
in d), we display the Poisson's ratios calculated from the simulations.
A variety of shapes and geometries have been identified as prototypical
auxetics. The list ranges from re-entrant structures [34] to rotating units
[35, 36]; chiral structures [37] and others [38] complete the list. Despite
the extensive literature and the enormous progress in describing different
types of auxetic materials, no fundamental microscopic principles for a
unified description exist. The distinction between types of auxetics relies
mainly on empirical observation rather than on fundamental principles, and no
general prescription exists to build them. In this letter, we present a
unified framework for the description of two-dimensional auxetic materials
with rotating units. These structures are generically made out of polygons
connected through their vertices (see Fig. 1 for examples). Under external
loads the stresses focus on the vertices, leaving the bulk of the polygons
almost undeformed [22]. The auxetic behavior arises because neighboring
polygons tend to rotate in opposite directions along a particular low-energy
mode. This mode is reminiscent of a non-trivial floppy mode, or mechanism,
that exists in the ideal case with zero bending energy (i.e. the polygons are
connected through ideal hinges). The emergence of this “auxetic” floppy mode
is due to the particular topology of the network and exists even for
overconstrained networks. In this limit the bulk modulus vanishes while the
shear modulus remains finite, implying a perfect auxetic behavior, i.e. a
Poisson ratio $\nu=-1$ over a finite range. Other auxetics have different
origins; for instance, re-entrant materials typically have an
under-constrained internal structure stabilized by bending or angular forces
[39]. Therefore, both the shear and the bulk moduli vanish in the zero-bending
limit, excluding a priori the existence of a perfect auxetic behavior.
There are several models for specific rotating-unit systems [40, 30, 35, 41],
but no general conditions for the emergence of this floppy mode exist, except
for certain sets of periodic lattices [42, 43]. Here we show a natural mapping
between rotating units and a classical antiferromagnetic model. Out of this
mapping it is possible to create a large variety of auxetic structures,
including the somewhat elusive isotropic auxetic material [44, 39]. In Fig. 1
one can see three different examples: a new type of auxetic crystal, a
quasicrystal and an isotropic (disordered) structure. Considering a
constituent elastic material with a Poisson ratio $\nu=0.5$ and a shear
modulus $G=0.15\,$MPa, the mechanical responses of the designed materials were
obtained using the software Ansys (for more details see Appendix D). After a
finite initial load, they display a clear auxetic behavior with a Poisson's
ratio reaching values between $-0.65$ and $-0.2$ depending on the design. In
addition, our theory generalizes and characterizes domain wall formation,
previously observed for rectangular [23] and square [41] periodic auxetics,
for any rigid-unit auxetic (see Appendix A).
The design of these materials emerges once we understand the origin of the
“auxetic” floppy mode in the ideal case with zero bending. This limit can be
easily achieved for the materials of Fig. 1 by replacing the elastic vertices
with ideal hinges connecting the polygons. For a system of rigid polygons
connected through ideal hinges, an extended version of Maxwell's
degree-of-freedom counting argument [45] can be applied. Each polygon has 3
degrees of freedom in two dimensions, and one hinge between two polygons
suppresses two degrees of freedom. For a network that has $N$ polygons and
$N_{h}$ hinges between polygons, the system is jammed when the number of
degrees of freedom $3N$ is less than the number of constraints $2N_{h}$.
Expressing $N_{h}$ as a function of the coordination (the average number of
hinges per polygon), $N_{h}=\frac{zN}{2}$, one gets a critical coordination
$z_{c}=3$.
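The counting argument above fits in a few lines. This is a hedged illustration: the function name is ours, and treating the marginal case $3N=2N_h$ (i.e. $z=z_c$) as jammed is our own convention for the sketch.

```python
def is_jammed(n_polygons, n_hinges):
    """Maxwell counting for rigid polygons in 2D: each polygon carries 3
    degrees of freedom and each hinge removes 2, so the network has no
    trivial floppy modes once 3N <= 2*N_h, i.e. coordination
    z = 2*N_h/N >= z_c = 3."""
    return 3 * n_polygons <= 2 * n_hinges

assert is_jammed(100, 160)       # z = 3.2 > z_c
assert not is_jammed(100, 140)   # z = 2.8 < z_c
```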
Above the critical coordination, polygon networks typically have guaranteed
mechanical stability, due to the absence of trivial floppy modes. Therefore,
the existence of an “auxetic” floppy mode must be related to a very precise
geometrical construction, which also implies the appearance of a non-trivial
self-stress mode following the rank theorem [46]. The starting point is the
observation that in an ideal polygon network the free hinges connecting
different rigid units only allow the rotation of the polygons. Any floppy mode
then requires that all the neighbors of each polygon have the same rotation
rate (as a function of strain). If the neighbors of a given polygon are also
neighbors of one another, the system will jam. This observation sets the key
ingredient of the theory of rotating-unit auxetics, which requires the system
to be bipartite. Although most bipartite polygon networks show some level of
auxetic response under various conditions, as we will discuss later,
additional conditions are necessary to obtain a perfect auxetic behavior.
## II A simple model for perfect auxetics
A minimal bi-dimensional model for rotating-unit auxetics consists of a series
of polygons connected by springs of zero natural length [41]. The springs act
as ideal hinges as long as they are not compressed or stretched. Replacing
ideal hinges with springs not only simplifies construction in terms of energy,
but also introduces elasticity into our materials, allowing us to study
non-ideal scenarios. Each rigid unit has three degrees of freedom, two
translational $\vec{x}_{i}=\left(x_{i},y_{i}\right)$ and one rotational
$\theta_{i}$, not necessarily measured from the centroid of each polygon.
We are interested in the behavior of perfect auxetics; to build them we
establish three requirements.

1. The network must be bipartite. This allows the units to counter-rotate
with respect to each other, like cogs in a machine. The units arranged in a
bipartite network can be separated into two sets $A$ and $B$, i.e. each
connected to the other but not to itself, see Fig. 2.

2. Initially and at rest, the positions of every pair of neighboring polygons
($\vec{x}_{i}$, $\vec{x}_{j}$) and the vertex between them have to be
collinear. This initial setting matches a maximum-extension configuration.
Furthermore, it establishes a relationship between the internal angles of
every pair of neighboring polygons, $|\alpha_{ij}+\beta_{ji}|=\pi$, see Fig.
2.

3. The ratio between the distance of a polygon to one of its vertices and the
distance of its neighbor to the same vertex must be constant over the network.
Each vertex of a rigid unit is characterized by a vector
$\vec{a}_{ij}=a_{ij}\left(\cos(\theta_{i}+\alpha_{ij}),\sin(\theta_{i}+\alpha_{ij})\right)$
or
$\vec{b}_{ji}=b_{ji}\left(\cos(-\theta_{j}-\beta_{ji}),\sin(-\theta_{j}-\beta_{ji})\right)$,
corresponding to sets $A$ or $B$ respectively. The index $i$ will be used for
polygons in the set $A$ and the index $j$ for polygons in the set $B$. Vectors
$\vec{a}_{ij}$ ($\vec{b}_{ji}$) point from the position $\vec{x}$ of polygon
$i(j)$ to the vertex connecting it with polygon $j(i)$, as seen in Fig. 2.
Therefore, the ratio $C=b_{ji}/a_{ij}$ must be constant throughout the
network.
Creating a polygon network that fulfills these rules is quite simple. Starting
from a planar bipartite graph one can always build a perfect auxetic. To
understand the origin of this behavior we turn to the energy of the polygon
network
$\displaystyle
V=\frac{k}{2}\sum_{<ij>}\left((\vec{x}_{i}+\vec{a}_{ij})-(\vec{x}_{j}+\vec{b}_{ji})\right)^{2},$
(1)
where the sum is over all the pairs of interacting neighbors and all springs
have an equal elastic coefficient $k$.
Figure 2: The three-step recipe. a) First step: we start with a bipartite
network; the blue and red colors stand for the A and B sets, respectively. We
showcase a random bipartite network to demonstrate the versatility of this
recipe. The zoom-in of the network illustrates the second step. Every node of
the graph becomes the position of a polygon, and we place each polygon's
vertices on top of the corresponding segments of the network, so that
neighboring polygons share a vertex position. At rest, each polygon's position
is collinear with those of its neighbors and the vertex between them. b) All
the polygons are then connected by zero-natural-length springs at the common
vertices. The zoom-in displays the distance $a_{ij}$ from the node to each
vertex for the A-set polygons, and $b_{ji}$, which is the analogue for the B
set. The third step then consists of moving the vertices of each polygon such
that the ratio $C=b_{ji}/a_{ij}$ remains constant over the network. In
addition, the angles $\alpha_{ij}$ and $\beta_{ji}$ are displayed at the
undeformed initial stage. Note that $|\alpha_{ij}+\beta_{ji}|=\pi$. c) A
compression shows that the polygon network is a perfect auxetic. In the
zoom-in we see how each polygon counter-rotates with respect to its neighbors.
$\theta_{i}$ and $\theta_{j}$ show the rotation of the A and B sets,
respectively.
We shall consider the energy change under an isotropic compression, which is
equivalent to increasing the size of all polygons while keeping the distance
between polygons constant. From the second requirement, as the neighboring
polygons are initially collinear with their vertex, the constant distance
between polygons is
$\vec{x}_{i}-\vec{x}_{j}=-(a_{ij}+b_{ji})\left(\cos(\alpha_{ij}),\,\sin(\alpha_{ij})\right)$.
Increasing the size of the polygons is achieved by rescaling the vectors
characterizing the vertices by $\lambda$, such that
$\vec{a}_{ij}\rightarrow\lambda\vec{a}_{ij}$ and
$\vec{b}_{ji}\rightarrow\lambda\vec{b}_{ji}$. Then we can expand and rearrange
Eq. 1 as
$V=V_{0}+\sum_{<ij>}\left[J_{ij}\cos(\theta_{i}+\theta_{j})-H_{ij}^{A}\cos(\theta_{i})-H_{ji}^{B}\cos(\theta_{j})\right],$
(2)

Strikingly, the structure of this equation and the counter-rotation of the
neighboring angles resemble those of an anisotropic antiferromagnetic spin
system. Properties such as domain walls arise from this analogy, see more in
Appendix A. In this equation
$V_{0}=\frac{k}{2}\sum_{<ij>}\left[(a_{ij}+b_{ji})^{2}+\lambda^{2}(a_{ij}^{2}+b_{ji}^{2})\right]$,
$J_{ij}=k\lambda^{2}a_{ij}b_{ji}$, $H_{ij}^{A}=k\lambda(a_{ij}+b_{ji})a_{ij}$
and $H_{ji}^{B}=k\lambda(b_{ji}+a_{ij})b_{ji}$.
Using the third requirement, setting $C=b_{ji}/a_{ij}$ constant, two solutions
can be found. The first one is the trivial solution
$\theta_{i}=\theta_{j}=0$, which is a minimum for $0<\lambda\leq 1$. The
second one is found when all the polygons of each set rotate at the same rate,
$\theta_{i}=\theta^{0}_{A}$ and $\theta_{j}=\theta^{0}_{B}$, where

$\cos(\theta^{0}_{A})=\frac{1+C+\lambda^{2}(1-C)}{2\lambda},$ (3)

and

$\cos(\theta^{0}_{B})=\frac{1+C-\lambda^{2}(1-C)}{2\lambda C}.$ (4)

These are a minimum in the range $1<\lambda<\big{|}\frac{1+C}{1-C}\big{|}$
only if both $\theta^{0}_{A}$ and $\theta^{0}_{B}$ have the same sign, i.e.
the polygons counter-rotate with respect to each other. Evaluating the
potential energy at this minimum we find that
$V(\theta_{i}=\theta^{0}_{A},\theta_{j}=\theta^{0}_{B})=0$ (for a detailed
derivation see Appendix E); thus this solution describes a zero-energy mode of
the system. This floppy mode corresponds to a system with zero bulk modulus,
meaning that the material expands and contracts equally in all directions (for
a direct calculation of the bulk modulus see Appendix F). As the Poisson's
ratio is defined as $\nu=-\frac{d\epsilon_{x}}{d\epsilon_{y}}$, with
$\epsilon$ being the strain in each direction, if the system expands equally
in both directions then $\epsilon_{x}=\epsilon_{y}$ and $\nu=-1$. This perfect
auxetic behavior can be seen in the numerical simulations of periodic and
isotropic materials, in Fig. 3 and Fig. 4 respectively.
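The zero-energy mode of Eqs. (3)-(4) can be checked numerically on a single hinge: with the polygon positions held fixed and the vertex vectors rescaled by $\lambda$, the spring term of Eq. (1) vanishes for any hinge size $a$ and orientation $\alpha$. The parameter values below ($C=0.6$, $\lambda=1.3$) are arbitrary choices of ours within the allowed range $1<\lambda<(1+C)/(1-C)$.

```python
import numpy as np

def link_energy(theta_A, theta_B, a, C, alpha, lam, k=1.0):
    """Spring energy of one hinge (one term of Eq. 1) with the vertex
    vectors rescaled by lambda and the polygon positions held fixed."""
    b = C * a
    beta = np.pi - alpha                     # requirement 2: alpha + beta = pi
    xij = -(a + b) * np.array([np.cos(alpha), np.sin(alpha)])   # x_i - x_j
    a_vec = lam * a * np.array([np.cos(theta_A + alpha), np.sin(theta_A + alpha)])
    b_vec = lam * b * np.array([np.cos(-theta_B - beta), np.sin(-theta_B - beta)])
    d = xij + a_vec - b_vec                  # spring extension vector
    return 0.5 * k * d @ d

C, lam = 0.6, 1.3                            # 1 < lam < (1+C)/(1-C) = 4
theta_A = np.arccos((1 + C + lam**2 * (1 - C)) / (2 * lam))      # Eq. (3)
theta_B = np.arccos((1 + C - lam**2 * (1 - C)) / (2 * lam * C))  # Eq. (4)
# Every hinge, whatever its size a and orientation alpha, is unstretched
# along this mode, so the isotropic compression costs zero energy:
for alpha in np.linspace(0.0, 2.0 * np.pi, 7):
    assert link_energy(theta_A, theta_B, 1.0, C, alpha, lam) < 1e-10
```

By contrast, holding the trivial configuration $\theta_i=\theta_j=0$ at the same $\lambda$ stretches every spring, confirming that the counter-rotated branch is the floppy one.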
## III Random perfect auxetics
Recently, several isotropic auxetic materials with a Poisson's ratio close to
$-1$ over an infinitesimal strain range have been proposed [44, 39]. However,
none of them display a perfect auxetic behavior. The theory presented in the
previous section is not restricted to lattices; it can be applied
straightforwardly to disordered networks, which allows us to build for the
first time isotropic perfect auxetic materials. The only complication resides
in building a planar bipartite disordered network. We achieved this by two
different methods, a pruning algorithm and a graph transformation, both
applied to an isotropic amorphous contact network [47, 14]; more in Appendix
B. Once we have our bipartite network, we only need to place polygons at each
node of the graph, taking care that each vertex of a polygon is placed on a
segment of the graph, dividing it in a $C:1$ ratio, see Fig. 2.
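The vertex placement step is a one-liner: by requirement 2 the shared vertex lies on the segment joining the two polygon positions, and requirement 3 fixes where on that segment. The function name below is ours, for illustration.

```python
import numpy as np

def hinge_position(x_i, x_j, C):
    """Place the shared vertex on the segment between neighboring polygon
    positions x_i (set A) and x_j (set B), dividing it in a C:1 ratio so
    that b_ji / a_ij = C everywhere (requirement 3); collinearity
    (requirement 2) holds by construction."""
    x_i, x_j = np.asarray(x_i, float), np.asarray(x_j, float)
    return x_i + (x_j - x_i) / (1.0 + C)

v = hinge_position([0, 0], [3, 0], C=2.0)
# a_ij = |v - x_i| = 1 and b_ji = |x_j - v| = 2, so b_ji / a_ij = C
assert np.allclose(v, [1.0, 0.0])
```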
## IV Beyond perfect auxetics
Figure 3: a) In blue, an unperturbed perfect auxetic lattice; in red, the same
lattice perturbed by an angle $\delta$. The perturbation is applied such that
each pair of polygons is no longer collinear with its shared vertex. The angle
$\delta$ measures how much the vertex has been displaced. Notice that this
turns the previously parallelogram-shaped holes into trapezoidal holes. b) The
Poisson's ratio $\nu$ of the perturbed network as a function of a vertical
compression, measured by the Cauchy strain $\epsilon_{y}$. Each curve has a
different perturbation angle $\delta$. Each perturbed system starts with
$\nu=0$ until a point where it suddenly recovers its auxetic behavior. For
more information about the measurements see Appendix C.
The bipartite condition seems to be fundamental for an auxetic material
composed of rotating units. Introducing defects into the network will
frustrate the rotation of the units, affecting the auxetic behavior; this
effect will be addressed elsewhere. Here we consider the effect of breaking
the other conditions by modifying the angle that connects two polygons, thus
making $|\alpha_{ij}+\beta_{ji}|=\pi+\delta_{ij}$. For the sake of simplicity
we used a periodic polygon network and a fixed $\delta$ that changes sign from
one link to the next, see Fig. 3. By perturbing the polygon network the floppy
mode is destroyed and the polygon network jams. Under compression the system
now increases its energy, initially keeping a Poisson's ratio close to zero
(see Fig 3). However, at a finite strain an elastic instability occurs and the
system jumps to a rotated configuration, recovering its auxetic behavior as
seen in Fig. 3.

Although the latter behavior seems to be generic under small perturbations of
the perfect auxetic network, this is not always the case. It is known that
rectangular networks [23], which are not perfect auxetics, preserve a floppy
mode and their Poisson's ratio moves continuously from positive to negative
values. To understand the condition for the existence of this floppy mode we
need a more general description of bipartite polygon networks.
## V Floppy modes on bipartite polygon networks
Bipartite graphs have only even-sided cycles, i.e. closed paths that start and
end at the same node. The simplest cycles are the faces of the graph, which
are the regions bounded by edges. When transforming a bipartite graph into a
polygon network, even cycles are reflected in the geometry of the empty spaces
between the polygons, which are enclosed by the same number of sides. We will
refer to these empty spaces as holes, and to the vertices between polygons as
hinges. Notice that no odd-sided holes can exist in bipartite systems. In the
case of our perfect auxetic materials, one can use Varignon's quadrilateral
theorem [48] and Thales' theorem to show that the constant ratio
$C=b_{ji}/a_{ij}$ directly implies that all 4-sided holes will be
parallelograms.

For a system to deform at zero energy, the opposite angles at each hinge must
deform at opposite rates. This condition is extremely difficult to fulfill if
the inner angles of a hole behave non-linearly as a function of the
deformation, as is the case for trapezoidal 4-sided holes. Parallelograms, in
contrast, have the special property that when deformed, all of their inner
angles share the same deformation rate, up to a sign. All 4-sided holes in
perfect auxetics are parallelograms; this allows every inner angle in each
hole to be linearly related to the deformation, fulfilling the restriction at
each hinge. This mechanism suggests that any bipartite polygon network with
only parallelogram holes will have a non-trivial floppy mode; however, it will
not necessarily behave like a perfect auxetic.

In particular, when geometrically perturbing the polygons of a perfect auxetic
with only 4-sided holes, while preserving their parallelogram shape, the
floppy mode persists. The simplest example is the rectangular network studied
in [23], where the Poisson's ratio was observed to change its sign depending
on the strain. Similar results are observed in a more general case where
4-sided holes remain parallelograms after a perturbation of an isotropic
perfect auxetic, showing a continuous change from positive to negative
Poisson's ratios under strain, as seen in Fig. 4.
Figure 4: The Poisson's ratio $\nu$ of a geometrically perturbed random
auxetic with parallelogram 4-sided holes (orange circles) and the original
unperturbed perfect auxetic (green triangles), both as a function of the
Cauchy strain $\epsilon_{y}$. Both systems display a non-trivial floppy mode.
The perfect auxetic behaves as expected, with $\nu=-1$. The perturbed auxetic
starts with a positive $\nu$, which changes sign as the deformation increases.
The blue curve shows the theoretical Poisson's ratio of a rectangular auxetic
whose rectangles have a $1.1$ ratio between their sides. Surprisingly, the
Poisson's ratio of the rectangular auxetic and that of the perturbed random
auxetic with parallelogram cycles match perfectly. For more information about
the measurements see Appendix C.
## VI Conclusion
We have presented a simple model within this work that builds the necessary
framework to create, design, and characterize rotating unit auxetics. This
framework is built upon a simple analogy between rotating unit auxetics and
an anisotropic XY antiferromagnetic system. As shown in Fig. 1, we have
applied those ideas to generate novel auxetic structures in the form of a
crystal, a quasicrystal, and a random lattice. Within our theory, each design
can be represented by a minimal model, based upon polygons and springs, that
captures its essential collective response to external loads. These models can
be simulated straightforwardly to test material properties while ignoring
bending forces; however, if needed, bending can easily be added to the model.
As we have seen, this model correctly describes the behavior of all rotating-
unit systems and could be used to predict new behaviors. In particular, we
have generalized the behavior of auxetic domain walls, which are natural
textures of these systems owing to the analogy with magnetic systems.
More phenomena related to this analogy remain to be seen and encourage further
investigation. As a major tangible result, our work establishes the ground
rules for creating never-before-seen isotropic perfect auxetics. With
current 3D printing technology, it should be relatively simple to realize any
of these systems. This model still lacks dynamic and vibrational analysis
that, hopefully, will shed light on the general topological properties of
these materials. Furthermore, since this theory is written mainly in vectorial
form, it has the potential to be extended into 3D; we are excited to see
where this takes us.
## VII Acknowledgments
A.S.N. thanks Fondecyt Regular 1190324 and Financiamiento Basal para Centros
Científicos y Tecnológicos de Excelencia FB 0807. D.A. acknowledges funding by
the National Agency for Research and Development (ANID) / Scholarship Program
/ DOCTORADO NACIONAL/2019 - 21192070.
## References
* Huber [2016] S. D. Huber, Topological mechanics, Nature Physics 12, 621 (2016).
* Coulais _et al._ [2017] C. Coulais, D. Sounas, and A. Alù, Static non-reciprocity in mechanical metamaterials, Nature 542, 461 (2017).
* Kane and Lubensky [2014] C. Kane and T. Lubensky, Topological boundary modes in isostatic lattices, Nature Physics 10, 39 (2014).
* Chen _et al._ [2014] B. G. Chen, N. Upadhyaya, and V. Vitelli, Nonlinear conduction via solitons in a topological mechanical insulator, Proceedings of the National Academy of Sciences 111, 13004 (2014).
* Rocklin _et al._ [2017] D. Z. Rocklin, S. Zhou, K. Sun, and X. Mao, Transformable topological mechanical metamaterials, Nature communications 8, 1 (2017).
* Souslov _et al._ [2017] A. Souslov, B. C. Van Zuiden, D. Bartolo, and V. Vitelli, Topological sound in active-liquid metamaterials, Nature Physics 13, 1091 (2017).
* Florijn _et al._ [2014a] B. Florijn, C. Coulais, and M. van Hecke, Programmable mechanical metamaterials, Physical review letters 113, 175503 (2014a).
* Coulais _et al._ [2016] C. Coulais, E. Teomy, K. De Reus, Y. Shokef, and M. Van Hecke, Combinatorial design of textured mechanical metamaterials, Nature 535, 529 (2016).
* Overvelde _et al._ [2017] J. T. Overvelde, J. C. Weaver, C. Hoberman, and K. Bertoldi, Rational design of reconfigurable prismatic architected materials, Nature 541, 347 (2017).
* Mullin _et al._ [2007] T. Mullin, S. Deschanel, K. Bertoldi, and M. C. Boyce, Pattern transformation triggered by deformation, Physical review letters 99, 084301 (2007).
* Shim _et al._ [2012] J. Shim, C. Perdigou, E. R. Chen, K. Bertoldi, and P. M. Reis, Buckling-induced encapsulation of structured elastic shells under pressure, Proceedings of the National Academy of Sciences 109, 5978 (2012).
* Nicolaou and Motter [2012] Z. G. Nicolaou and A. E. Motter, Mechanical metamaterials with negative compressibility transitions, Nature materials 11, 608 (2012).
* Coulais _et al._ [2015] C. Coulais, J. T. Overvelde, L. A. Lubbers, K. Bertoldi, and M. van Hecke, Discontinuous buckling of wide beams and metabeams, Physical review letters 115, 044301 (2015).
* Wyart _et al._ [2008] M. Wyart, H. Liang, A. Kabla, and L. Mahadevan, Elasticity of floppy and stiff random networks, Phys. Rev. Lett. 101, 215501 (2008).
* Lakes [1987] R. Lakes, Foam structures with a negative poisson’s ratio, Science 235, 1038 (1987).
* Bertoldi _et al._ [2010] K. Bertoldi, P. M. Reis, S. Willshaw, and T. Mullin, Negative poisson’s ratio behavior induced by an elastic instability, Advanced materials 22, 361 (2010).
* Lakes _et al._ [2001] R. S. Lakes, T. Lee, A. Bersie, and Y. Wang, Extreme damping in composite materials with negative-stiffness inclusions, Nature 410, 565 (2001).
* Williams and Lewis [1982] J. Williams and J. Lewis, Properties and an anisotropic model of cancellous bone from the proximal tibial epiphysis, Journal of Biomechanical Engineering 104, 50 (1982).
* Pagliara _et al._ [2014] S. Pagliara, K. Franze, C. R. McClain, G. W. Wylde, C. L. Fisher, R. J. Franklin, A. J. Kabla, U. F. Keyser, and K. J. Chalut, Auxetic nuclei in embryonic stem cells exiting pluripotency, Nature materials 13, 638 (2014).
* Baughman _et al._ [1998] R. H. Baughman, J. M. Shacklette, A. A. Zakhidov, and S. Stafström, Negative poisson’s ratios as a common feature of cubic metals, Nature 392, 362 (1998).
* Grima _et al._ [2005] J. N. Grima, R. Gatt, A. Alderson, and K. E. Evans, On the origin of auxetic behaviour in the silicate $\alpha$-cristobalite, Journal of Materials Chemistry 15, 4003 (2005).
* Coulais _et al._ [2018] C. Coulais, C. Kettenis, and M. van Hecke, A characteristic length scale causes anomalous size effects and boundary programmability in mechanical metamaterials, Nature Physics 14, 40 (2018).
* Florijn _et al._ [2014b] B. Florijn, C. Coulais, and M. van Hecke, Programmable mechanical metamaterials, Phys. Rev. Lett. 113, 175503 (2014b).
* Babaee _et al._ [2013] S. Babaee, J. Shim, J. C. Weaver, E. R. Chen, N. Patel, and K. Bertoldi, 3d soft metamaterials with negative poisson’s ratio, Advanced Materials 25, 5044 (2013).
* Chen and Pugno [2012] Q. Chen and N. M. Pugno, In-plane elastic buckling of hierarchical honeycomb materials, European Journal of Mechanics - A/Solids 34, 120 (2012).
* Lakes and Elms [1993] R. Lakes and K. Elms, Indentability of conventional and negative poisson’s ratio foams, Journal of Composite Materials 27, 1193 (1993).
* Yang _et al._ [2017] S. Yang, V. B. Chalivendra, and Y. K. Kim, Fracture and impact characterization of novel auxetic kevlar®/epoxy laminated composites, Composite Structures 168, 120 (2017).
* Lakes and Witt [2002] R. Lakes and R. Witt, Making and characterizing negative poisson’s ratio materials, International Journal of Mechanical Engineering Education 30, 50 (2002).
* Alderson _et al._ [2007] A. Alderson, J. Rasburn, and K. E. Evans, Mass transport properties of auxetic (negative poisson’s ratio) foams, physica status solidi (b) 244, 817 (2007).
* Gatt _et al._ [2015] R. Gatt, L. Mizzi, J. I. Azzopardi, K. M. Azzopardi, D. Attard, A. Casha, J. Briffa, and J. N. Grima, Hierarchical auxetic mechanical metamaterials, Scientific reports 5, 8395 (2015).
* Alderson _et al._ [2012] K. Alderson, A. Alderson, S. Anand, V. Simkins, S. Nazare, and N. Ravirala, Auxetic warp knit textile structures, physica status solidi (b) 249, 1322 (2012).
* Ansys® [2019] Ansys®, (2019), ansys Mechanical, Release 19 R2.
* Mattis [2006] D. C. Mattis, _Theory of Magnetism Made Simple: An Introduction to Physical Concepts and to Some Useful Mathematical Methods_ (2006).
* Masters and Evans [1996] I. Masters and K. Evans, Models for the elastic deformation of honeycombs, Composite Structures 35, 403 (1996).
* Grima and Evans [2000] J. N. Grima and K. E. Evans, Auxetic behavior from rotating squares, Journal of Materials Science Letters 19, 1563 (2000).
* Alderson and Evans [2001] A. Alderson and K. Evans, Rotation and dilation deformation mechanisms for auxetic behaviour in the $\alpha$-cristobalite tetrahedral framework structure, Physics and Chemistry of Minerals 28, 711 (2001).
* Ha _et al._ [2016] C. S. Ha, M. E. Plesha, and R. S. Lakes, Chiral three-dimensional isotropic lattices with negative poisson’s ratio, physica status solidi (b) 253, 1243 (2016).
* Rens and Lerner [2019] R. Rens and E. Lerner, Rigidity and auxeticity transitions in networks with strong bond-bending interactions, The European Physical Journal E 42, 114 (2019).
* Reid _et al._ [2019] D. R. Reid, N. Pashine, A. S. Bowen, S. R. Nagel, and J. J. de Pablo, Ideal isotropic auxetic networks from random networks, Soft Matter 15, 8084 (2019).
* Grima _et al._ [2012] J. N. Grima, E. Chetcuti, E. Manicaro, D. Attard, M. Camilleri, R. Gatt, and K. E. Evans, On the auxetic properties of generic rotating rigid triangles, Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 468, 810 (2012).
* Deng _et al._ [2020] B. Deng, S. Yu, A. E. Forte, V. Tournat, and K. Bertoldi, Characterization, stability, and application of domain walls in flexible mechanical metamaterials, Proceedings of the National Academy of Sciences 117, 31002 (2020).
* Guest and Hutchinson [2003] S. Guest and J. Hutchinson, On the determinacy of repetitive structures, Journal of the Mechanics and Physics of Solids 51, 383 (2003).
* Mitschke _et al._ [2013] H. Mitschke, G. Schröder-Turk, K. Mecke, P. Fowler, and S. Guest, Symmetry detection of auxetic behaviour in 2d frameworks, EPL (Europhysics Letters) 102, 66005 (2013).
* Grima _et al._ [2016] J. N. Grima, L. Mizzi, K. M. Azzopardi, and R. Gatt, Auxetic perforated mechanical metamaterials with randomly oriented cuts, Advanced Materials 28, 385 (2016).
* Maxwell [1864] J. C. Maxwell, L. on the calculation of the equilibrium and stiffness of frames, The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 27, 294 (1864).
* Calladine [1978] C. Calladine, Buckminster Fuller’s “tensegrity” structures and Clerk Maxwell’s rules for the construction of stiff frames, International Journal of Solids and Structures 14, 161 (1978).
* Lerner _et al._ [2014] E. Lerner, E. DeGiuli, G. Düring, and M. Wyart, Breakdown of continuum elasticity in amorphous solids, Soft Matter 10, 5085 (2014).
* Coxeter and Greitzer [1967] H. Coxeter and S. L. Greitzer, Collinearity and concurrence, in _Geometry Revisited_ (Mathematical Association of America, 1967) p. 51–79.
* Kardar [2007] M. Kardar, _Statistical Physics of Fields_ (Cambridge University Press, 2007) p. 19–34.
* [50] Ansys®, _Ansys Mechanical APDL Theory Reference-Element Library_ , release 15.0.
## Appendix A Domain Walls
Figure 5: An isotropic perfect auxetic with periodic boundary conditions and
a domain wall. The system is compressed with $\delta\lambda=0.2$. The color of
each polygon represents its normalized angle of rotation. The system has two
domains, one in red and the other in blue, separated by a yellow domain
wall. Figure 6: The normalized rotational angle of the polygons as a function
of distance, revealing the existence of a domain wall in the middle. Each
curve represents a polygon network with different levels of compression,
measured by $\delta\lambda$. In the inset the x-axis has been rescaled by
$\delta\lambda^{-1/2}$, all the curves collapse into a single one showing that
it’s the correct scaling. The average distance from a polygon’s position to
its vertices is $a=1/2$. The normalization coefficient $\theta_{0}$
corresponds to the maximum rotation angle of the system. We used a periodic
polygon network as the one in Fig. 3 with a vertical domain wall.
We can see that Eq. 2 is analogous to the energy of an antiferromagnetic spin
system. In that case, $\theta$ represents the spin direction, $J_{ij}$ is a
symmetric coupling constant, and the sum over neighbours $H_{i}=\sum_{n}H_{in}$
acts as a spatially varying magnetic field.
Therefore, we can find magnetism-related phenomena, like domain walls, in
polygon networks. These appear between two stable solutions of the system
that differ in the turning direction of the polygons, as seen in Fig. 5.
Such domain walls have been previously studied in auxetics [23, 41].
The polygon angle defines the state of the polygon network: when
$0<\lambda\leq 1$ the angle is zero, and when
$1<\lambda<\big{|}\frac{1+C}{1-C}\big{|}$ the angle is given by Eq. 3 and Eq.
4. Thus $\theta$ can be used as the order parameter of this system, which is
controlled by the external parameter $\delta\lambda=\lambda-1$. We can now
build a simple Ginzburg-Landau energy for the system and carry out the usual
domain-wall analysis [49].
$H=\int
dx\left(\frac{t}{2}\theta(x)^{2}+u\theta(x)^{4}+\frac{K}{2}\left(\nabla\theta(x)\right)^{2}\right)$
(5)
Here $t$, $u$ and $K$ are analytical functions of $\delta\lambda$. To keep the
system stable near the critical point, $K$ and $u$ are positive constants,
and $t=-t_{0}\delta\lambda+O(\delta\lambda^{2})$.
The stable solution for a homogeneous system is given by:
$\displaystyle\bar{\theta}=\sqrt{\frac{t_{0}}{4u}}\delta\lambda^{1/2}.$ (6)
To determine the length scale of a domain wall, we search for the
minimum-energy configuration of a system with $\theta(\infty)=\bar{\theta}$,
$\theta(-\infty)=-\bar{\theta}$. The differential equation for such a system is:
$\displaystyle K\frac{d^{2}\theta}{dx^{2}}=t\theta+4u\theta^{3},$ (7)
whose well-known solution is:
$\theta(x)=\bar{\theta}\tanh\left(\frac{x}{\Delta}\right),$ (8)
where the domain wall width is defined as:
$\Delta=\sqrt{\frac{2K}{t_{0}}}\delta\lambda^{-1/2}.$ (9)
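With illustrative parameter values (not ones taken from the simulations), a quick finite-difference check confirms that the profile of Eq. 8 with the width of Eq. 9 solves Eq. 7:

```python
import math

# Illustrative Ginzburg-Landau parameters (not values from the paper).
K, u, t0, dlam = 1.0, 1.0, 1.0, 0.04
t = -t0 * dlam

theta_bar = math.sqrt(t0 / (4 * u)) * dlam ** 0.5   # Eq. (6)
Delta = math.sqrt(2 * K / t0) * dlam ** -0.5        # Eq. (9)

def theta(x):
    """Domain-wall profile, Eq. (8)."""
    return theta_bar * math.tanh(x / Delta)

# Central-difference check of Eq. (7): K theta'' = t theta + 4 u theta^3.
h = 1e-3
max_residual = 0.0
for x in (-2.0 * Delta, -0.5 * Delta, 0.3 * Delta, 1.7 * Delta):
    second = (theta(x + h) - 2.0 * theta(x) + theta(x - h)) / h ** 2
    max_residual = max(max_residual,
                       abs(K * second - (t * theta(x) + 4.0 * u * theta(x) ** 3)))

print(max_residual)  # numerically tiny: the profile solves the wall equation
```

The residual is limited only by the finite-difference step, confirming the $\Delta\sim\delta\lambda^{-1/2}$ width.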
Thus we see that the domain wall width scales like
$\Delta\sim\delta\lambda^{-1/2}$. To check this, we performed numerical
simulations in which we minimized the energy of a polygon network with periodic
boundary conditions until it had a couple of stable domain walls; the results
can be seen in Fig. 6.
In Fig. 6, if we write the polygon angle as a function of position as
$\theta(x)=\theta_{0}f(x)$ with $|f(x)|\leq 1$, the expansion of this function
around the origin is $f(x)=mx+O(x^{3})$, as $f$ is an odd function. Now
through simple trigonometry we can relate the slope at the origin $m$ to the
domain wall length, approximately $\Delta=2/m$, and from the rescaled inset in
Fig. 6 we can determine that $m\sim\delta\lambda^{1/2}$; we therefore obtain
the same scaling as predicted, $\Delta\sim\delta\lambda^{-1/2}$.
## Appendix B Building Random Bipartite Planar Graphs
Figure 7: Pruning Algorithm. a) Isotropic contact amorphous network [47, 14]
with a high coordination $z=6$. b) Pruned network, the bonds between pairs of
odd cycles have been removed, adding them together into even cycles. The
result is a bipartite planar graph. Figure 8: Planar Bipartite
Transformation. a) Start with a random planar contact network, in this case
the coordination is $z=5$. b) Compute the dual graph of the network (blue
nodes and dashed lines), place each node of the dual graph inside its
corresponding cycle. c) Disconnect each node of the dual graph (blue to blue
nodes), and reconnect them to the nodes corresponding to the vertices of its
cycle in the original graph (red and blue nodes connected by dashed lines). d)
Remove the connections of the original graph (red to red nodes) and keep the
connections between the dual graph and the original graph. As both initial
graphs are independent sets that don’t connect to themselves, the final result
is a random planar bipartite graph.
One of the main problems in creating a random perfect auxetic material is the
construction of a random bipartite planar graph, from which we can construct
the polygon network. The graph must be bipartite so that the polygons can
counter rotate with respect to each other, and it must be planar so that we
can build the polygon network without overlapping them.
We propose two methods to create these graphs. The first is a heuristic
pruning algorithm, which takes advantage of the property that a graph with
only even cycles will be bipartite. The second is a general transformation
that can quickly create bipartite graphs by combining a graph with its dual
graph.
Pruning Algorithm
A bipartite graph has only even cycles, where by a cycle we mean the shortest
closed path from a node back to itself; we call a cycle even or odd depending
on the number of bonds in its path. Here we prune a graph in such a way that
all the cycles of the resulting graph are even, transforming it into a
bipartite graph.
If we have two neighboring cycles that share a bond, and we prune this bond,
we end up with a single cycle. We can think of this operation as an addition
of cycles: if the starting cycles are both even or both odd, the resulting
cycle will be even, and if one cycle is even and the other is odd, the
resulting cycle is odd. We can extend this property to a pair of separate odd
cycles with only even cycles between them: if we remove a line of bonds
between the odd cycles, we end up with a single even cycle. Then, if we prune
the bonds between all pairs of odd cycles, we end up with a bipartite graph
with only even cycles.
The pruning algorithm we implemented follows some simple steps. We start with
a random planar graph with an even number of odd cycles, ideally with a high
coordination number, e.g. $z=6$. Next, we place a marker at an odd cycle and
use a breadth-first search algorithm on the dual graph to find the path to
its closest odd cycle. We remove the bonds in this path, transforming both odd
cycles into a single even cycle. Finally, we move the marker to another odd
cycle and repeat the procedure until all cycles are even. An example of the
initial and final graphs is shown in Fig. 7. While removing bonds, we prefer
paths that leave each node with at least three bonds, to give the system more
stability and to avoid generating big holes. If a node is left with fewer than
two links, we eliminate the node.
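The bond-removal bookkeeping can be sketched on the dual graph alone, where each face carries its cycle length and removing a shared bond fuses two faces into one of length $l_{1}+l_{2}-2$. The face layout below is invented for illustration; a real implementation would also track which primal bonds to delete:

```python
from collections import deque

# Hypothetical face layout: cycle length of each face and the dual-graph
# edges between faces that share a bond (invented for illustration).
lengths = {0: 5, 1: 4, 2: 6, 3: 7, 4: 4}
dual_edges = [(0, 1), (1, 2), (2, 3), (1, 4), (3, 4)]

adj = {f: set() for f in lengths}
for u, v in dual_edges:
    adj[u].add(v)
    adj[v].add(u)

def bfs_path(start):
    """Shortest dual-graph path from `start` to the nearest other odd face."""
    prev, queue, seen = {start: None}, deque([start]), {start}
    while queue:
        f = queue.popleft()
        if f != start and lengths[f] % 2 == 1:
            path = [f]
            while prev[path[-1]] is not None:
                path.append(prev[path[-1]])
            return path[::-1]
        for g in adj[f] - seen:
            seen.add(g)
            prev[g] = f
            queue.append(g)
    return None

while any(l % 2 for l in lengths.values()):
    start = next(f for f, l in lengths.items() if l % 2)
    path = bfs_path(start)
    # Removing the shared bonds along the path fuses its faces into one cycle
    # of length sum(l_i) - 2 * (bonds removed), which is even.
    merged = sum(lengths[f] for f in path) - 2 * (len(path) - 1)
    new_neighbors = set().union(*(adj[f] for f in path)) - set(path)
    keep = path[0]
    for f in path[1:]:
        del lengths[f]
        for g in adj.pop(f):
            adj[g].discard(f)
    lengths[keep] = merged
    adj[keep] = set(new_neighbors)
    for g in new_neighbors:
        adj[g].add(keep)

print(lengths)  # every remaining cycle length is even
```

Here the two odd faces (lengths 5 and 7) are merged with the even faces between them into a single even cycle, matching the parity argument above.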
This algorithm works well on small graphs, but on bigger ones it may eliminate
a huge number of bonds, leaving big holes. To avoid this problem, a more
sophisticated path-optimization algorithm is needed, in which the paths between
all pairs of odd cycles are calculated beforehand, minimizing the length of
each path and making sure the paths avoid each other. Once all of the paths are
computed, the bond-elimination process can be performed, minimizing the size
of the holes. Furthermore, this algorithm does not necessarily work for graphs
with periodic boundary conditions: having only even cycles guarantees
bipartiteness when the graph has free boundary conditions, but not when it has
periodic boundary conditions.
Bipartite Transformation
A bipartite graph is made of two independent sets, each one connected to
the other but not to itself. Here we connect two independent sets, a graph
and its dual graph, transforming both into a single planar bipartite graph.
Given a graph, we first determine its dual graph. Then we connect each node of
the graph with each neighboring node in the dual graph; by a neighboring node
we mean the node in the dual graph that represents a face in contact with the
node in the original graph. Finally, we eliminate all the starting bonds of
the graph and of its dual graph, leaving only the new bonds connecting both
graphs. The resulting graph will be bipartite, and if the original graph was
planar, the resulting graph will be planar too. To further understand this
procedure, see Fig. 8.
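A minimal sketch of this transformation, with a toy planar graph whose faces are listed by hand (a real implementation would extract the faces from a planar embedding):

```python
from collections import deque

def is_bipartite(nodes, edges):
    """Two-color the graph by BFS; True iff no edge joins equal colors."""
    adj = {n: [] for n in nodes}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = {}
    for src in nodes:
        if src in color:
            continue
        color[src] = 0
        queue = deque([src])
        while queue:
            n = queue.popleft()
            for m in adj[n]:
                if m not in color:
                    color[m] = 1 - color[n]
                    queue.append(m)
                elif color[m] == color[n]:
                    return False
    return True

# Toy planar graph, a square with one diagonal; the diagonal creates
# triangles, so the original graph is NOT bipartite. Faces are invented
# by hand for illustration.
vertices = [0, 1, 2, 3]
primal_edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
faces = {"f0": [0, 1, 2], "f1": [0, 2, 3], "outer": [0, 1, 2, 3]}

# Transformation: connect each dual node (face) to the vertices on its
# cycle, and drop every original vertex-vertex bond.
new_nodes = vertices + list(faces)
new_edges = [(face, v) for face, cycle in faces.items() for v in cycle]

print(is_bipartite(vertices, primal_edges), is_bipartite(new_nodes, new_edges))
# False True -- the transformed graph is bipartite by construction
```

The result is bipartite regardless of the starting graph, because every new bond joins a face-node to a vertex-node.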
This transformation can be performed on any kind of graph, and applying it
several times in a row creates a graph with more nodes each time. We can also
reverse the transformation, though when performing the inverse transformation
we may not know whether we obtained the original graph or its dual, unless we
keep track of at least one node from the starting graph.
## Appendix C Numerical Methods
To obtain the data shown in Fig. 3 and Fig. 4, we performed numerical
simulations of polygon networks. To model these networks we used the potential
energy in Eq. 1. Depending on the problem and the boundary conditions we used
different methods to compress and test the polygon networks. Regardless of the
boundary conditions the Poisson’s ratio was calculated using
$\nu=-\frac{d\epsilon_{x}}{d\epsilon_{y}}=-\frac{dL_{x}}{dL_{y}}\frac{L_{y}}{L_{x}}$,
where $L_{x}$ and $L_{y}$ are the approximate dimensions of the system in each
axis.
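The incremental formula can be applied directly to a series of measured box dimensions; the values below are invented for illustration (a perfect auxetic gives $\nu=-1$ at every step):

```python
# Hypothetical box dimensions recorded during compression (invented for
# illustration): Lx shrinks in proportion to Ly, the perfect-auxetic signature.
Lx = [10.0, 9.8, 9.6, 9.4]
Ly = [20.0, 19.6, 19.2, 18.8]

def poisson_ratio(Lx0, Lx1, Ly0, Ly1):
    """nu = -(dLx/dLy) * (Ly/Lx), using midpoint values for the prefactor."""
    dLx, dLy = Lx1 - Lx0, Ly1 - Ly0
    return -(dLx / dLy) * (0.5 * (Ly0 + Ly1)) / (0.5 * (Lx0 + Lx1))

nus = [poisson_ratio(Lx[i], Lx[i + 1], Ly[i], Ly[i + 1])
       for i in range(len(Lx) - 1)]
print(nus)  # all close to -1 for this proportionally shrinking toy data
```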
Periodic Boundary Conditions
The polygon networks in Fig. 3 were set in a periodic boundary box. To
compress the material uniaxially we shrank the box in the vertical direction,
and we let the system relax in the horizontal direction by minimizing its
energy. At each step we measured the dimensions of the box, $L_{x}$ and
$L_{y}$.
Free Boundary Conditions
The system in Fig. 4 has free boundary conditions and an internal floppy mode.
To efficiently deform the material, we applied a deformation in the direction
of the floppy mode and minimized its energy afterwards. To obtain the floppy
mode, we fixed 3 degrees of freedom in the system and found a non-trivial
solution for $M\dot{\vec{q}}=0$, where $M$ is the Hessian and $\dot{\vec{q}}$
is the floppy mode. At each step we approximated the system by a rectangle of
dimension $L_{x}$ and $L_{y}$.
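The null-space step can be sketched in plain Python; the 3x3 matrix below is a toy stand-in for the reduced Hessian (a chain of two unit springs, whose zero mode is a rigid translation), not the actual Hessian of a polygon network:

```python
def null_vector(M, tol=1e-9):
    """Return a non-trivial solution of M q = 0 (Gauss-Jordan elimination),
    or None if M has full column rank."""
    rows = [row[:] for row in M]
    n = len(rows[0])
    pivots = {}  # column -> pivot row
    r = 0
    for c in range(n):
        p = max(range(r, len(rows)), key=lambda i: abs(rows[i][c]), default=None)
        if p is None or abs(rows[p][c]) < tol:
            continue  # no pivot here: column c is a free variable
        rows[r], rows[p] = rows[p], rows[r]
        rows[r] = [v / rows[r][c] for v in rows[r]]
        for i in range(len(rows)):
            if i != r:
                rows[i] = [a - rows[i][c] * b for a, b in zip(rows[i], rows[r])]
        pivots[c] = r
        r += 1
    free = [c for c in range(n) if c not in pivots]
    if not free:
        return None
    q = [0.0] * n
    q[free[0]] = 1.0  # set one free variable, back-substitute the pivots
    for c, rr in pivots.items():
        q[c] = -rows[rr][free[0]]
    return q

# Toy stand-in for the reduced Hessian: two unit springs in a chain, whose
# zero mode is the rigid translation q = (1, 1, 1).
M = [[2.0, -1.0, -1.0],
     [-1.0, 1.0, 0.0],
     [-1.0, 0.0, 1.0]]
q = null_vector(M)
residual = max(abs(sum(m * x for m, x in zip(row, q))) for row in M)
print(q, residual)
```

In practice the reduced Hessian comes from fixing 3 rigid-body degrees of freedom, exactly as described above.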
## Appendix D Finite Element Simulations
For the simulations in Fig. 1, we used three bipartite networks created with
the methods described in Appendix B: for the exotic crystal, we modified a
tetrakis tiling by skewing it and applying the bipartite transformation; for
the quasi-crystal, we used a Penrose tiling cut into a suitable square shape;
lastly, the random network was created using the pruning method. All the
materials were built with the same ratio between average polygon size and bond
thickness, so that they exhibit similar behavior.
For our static finite element simulations, we used the commercial software
ANSYS [32] with a Neo-Hookean energy density as the material model, with an
initial shear modulus $G=0.15$ MPa and Poisson’s ratio $\nu=0.5$, under
plane-strain conditions with hybrid quadratic triangular elements (ANSYS type
PLANE183 [50]). We carried out a mesh optimization to ensure that the bonds,
where most of the strain and stress is localized, are meshed with at least
three elements; in this way, the material has approximately $10^{5}$ elements.
To compress the material uniaxially, we applied a vertical displacement to the
top row of polygons and fixed the position of the bottom row of polygons. We
imposed a free boundary condition in the horizontal direction and a
no-penetration condition, so that the material can come into contact with
itself without interpenetrating.
To measure the Poisson’s ratio, we approximated the whole system as a
rectangle with dimensions $L_{x}$ and $L_{y}$. Then the strains are
$\Delta\epsilon_{x}=\frac{L_{x}-L^{(0)}_{x}}{L^{(0)}_{x}}$ and
$\Delta\epsilon_{y}=\frac{L_{y}-L^{(0)}_{y}}{L^{(0)}_{y}}$ [16], where
$L_{x}^{(0)}$ and $L_{y}^{(0)}$ are the dimensions of the material at rest.
Finally we used the engineering strain Poisson’s ratio
$\nu=-\frac{\Delta\epsilon_{x}}{\Delta\epsilon_{y}}.$ (10)
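For instance, with a hypothetical rest and deformed state (values invented for illustration), Eq. 10 evaluates as:

```python
# Toy rest and deformed dimensions (invented for illustration).
Lx0, Ly0 = 10.0, 20.0   # at rest
Lx, Ly = 9.0, 18.0      # after compression

eps_x = (Lx - Lx0) / Lx0
eps_y = (Ly - Ly0) / Ly0
nu = -eps_x / eps_y      # Eq. (10), engineering-strain Poisson's ratio
print(nu)  # -1.0 for this proportionally shrinking toy state
```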
## Appendix E Perfect Auxetic Detailed Derivation
In this section we will find the zero energy auxetic mode of a perfect auxetic
polygon network, that is, a polygon network that satisfies 3 requirements.
1.- The underlying graph of the connected polygons must be bipartite; this
means that we can split the graph into two sets, where each set connects to
the other but not to itself. We will call these sets $A$ and $B$.
2.- At rest and without prestress, the system must have at least one
configuration where every pair of neighboring polygon positions
($\vec{x}_{i}$, $\vec{x}_{j}$) is collinear with the vertex between them.
3.- If we set a point in space representing each polygon position
$\vec{x}_{i}$, not necessarily the centroid of each polygon, the distance from
this position to each vertex will be $a_{ij}$ in polygons of set $A$, and
$b_{ji}$ in polygons of set $B$, where the first index indicates the origin
polygon and the second index the neighboring polygon. Finally, the ratio of
these distances between neighboring polygons must remain constant, so
$C=\frac{b_{ji}}{a_{ij}}$ is constant.
The following demonstration applies to underconstrained and overconstrained
systems, though the latter are of higher interest as this zero mode is non-
trivial in them.
To show the existence of this zero mode, we will expand the potential energy
and find its minimum under the assumption that the distance between polygons
remains constant and that all polygons in each set rotate equally. Finally, we
will show that the minimum of the energy is zero for any value of the
expansion coefficient $\lambda$.
Using restriction 1, as the system is bipartite, we can write its potential
energy as an interaction of each set $A$ and $B$.
$\displaystyle
V=\frac{k}{2}\sum_{<ij>}\left((\vec{x}_{i}+\lambda\vec{a}_{ij})-(\vec{x}_{j}+\lambda\vec{b}_{ji})\right)^{2}$
(11)
Here $\vec{x}_{i}$ is the position of polygon $i$, $\lambda$ is a homogeneous
expansion coefficient, and $\vec{a}_{ij}$ and $\vec{b}_{ji}$ are the vectors of
sets $A$ and $B$, respectively, that point from the position of a polygon to
its vertex associated with the pair $<ij>$. They are defined as
$\vec{a}_{ij}=a_{ij}\begin{pmatrix}\cos(\theta_{i}+\alpha_{ij})\\\
\sin(\theta_{i}+\alpha_{ij})\end{pmatrix}$
and
$\vec{b}_{ji}=b_{ji}\begin{pmatrix}\cos(-\theta_{j}-\beta_{ji})\\\
\sin(-\theta_{j}-\beta_{ji})\end{pmatrix}.$
Note that we have set up the variables $\theta_{i}$ and $\theta_{j}$ such that
they naturally counter-rotate with respect to each other. We will use the
index $i$ for the polygons in set $A$ and $j$ for the polygons in set $B$.
Expanding Eq. 11,
$\displaystyle
V=\frac{k}{2}\sum_{<ij>}(\vec{x}_{i}-\vec{x}_{j})^{2}+\lambda^{2}(\vec{a}_{ij}-\vec{b}_{ji})^{2}$
(12)
$\displaystyle+2\lambda(\vec{x}_{i}-\vec{x}_{j})(\vec{a}_{ij}-\vec{b}_{ji}).$
We will now review each term of the expansion of Eq. 12.
Taking into account condition 2, if the rest state of the system is at
$\lambda=1$ and $\theta_{i}=0$, then, as the polygon positions are collinear
with the vertex between them, the angles pointing to this vertex must add up
to half of a full rotation, $|\alpha_{ij}+\beta_{ji}|=\pi$. Then,
$(\vec{a}_{ij}-\vec{b}_{ji})^{2}=a_{ij}^{2}+b_{ji}^{2}+2a_{ij}b_{ji}\cos(\theta_{i}+\theta_{j}).$
(13)
Moreover, if we assume that the distance between polygon positions remains
constant as $\lambda$ is increased, we can express it as
$\vec{x}_{i}-\vec{x}_{j}=-(a_{ij}+b_{ji})\begin{pmatrix}\cos(\alpha_{ij})\\\
\sin(\alpha_{ij})\end{pmatrix}.$
Thus we can determine the other two terms of Eq. 12.
$(\vec{x}_{i}-\vec{x}_{j})^{2}=(a_{ij}+b_{ji})^{2}$ (14)
and
$(\vec{x}_{i}-\vec{x}_{j})(\vec{a}_{ij}-\vec{b}_{ji})=-(a_{ij}+b_{ji})(a_{ij}\cos(\theta_{i})+b_{ji}\cos(\theta_{j})).$
(15)
Using Eq. 13, Eq. 14 and Eq. 15 to expand Eq. 12, we arrive at
$\displaystyle
V=\frac{k}{2}\sum_{<ij>}(a_{ij}+b_{ji})^{2}+\lambda^{2}(a_{ij}^{2}+b_{ji}^{2}+2a_{ij}b_{ji}\cos(\theta_{i}+\theta_{j}))$
(16)
$\displaystyle-2\lambda(a_{ij}+b_{ji})(a_{ij}\cos(\theta_{i})+b_{ji}\cos(\theta_{j})).$
We can rewrite this as:
$V=V_{0}+\sum_{<ij>}J_{ij}\cos(\theta_{i}+\theta_{j})-H^{A}_{ij}\cos(\theta_{i})-H^{B}_{ji}\cos(\theta_{j}).$
(17)
Here
$V_{0}=\frac{k}{2}\sum_{<ij>}(a_{ij}+b_{ji})^{2}+\lambda^{2}(a_{ij}^{2}+b_{ji}^{2})$,
$J_{ij}=k\lambda^{2}a_{ij}b_{ji}$, $H^{A}_{ij}=k\lambda(a_{ij}+b_{ji})a_{ij}$
and $H^{B}_{ji}=k\lambda(b_{ji}+a_{ij})b_{ji}$.
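As a consistency check of this mapping, one can verify numerically that the closed form of Eq. 17 reproduces the direct spring energy of Eq. 11 for a single bond; the parameter values below are arbitrary illustrations:

```python
import math

# Arbitrary illustrative parameters for one bond <ij>.
k, lam = 1.3, 1.2
a0, b0 = 0.7, 0.5          # |a_ij|, |b_ji|
alpha = 0.9
beta = math.pi - alpha     # collinearity condition |alpha + beta| = pi
ti, tj = 0.31, -0.17       # theta_i, theta_j

a = (a0 * math.cos(ti + alpha), a0 * math.sin(ti + alpha))
b = (b0 * math.cos(-tj - beta), b0 * math.sin(-tj - beta))
xij = (-(a0 + b0) * math.cos(alpha), -(a0 + b0) * math.sin(alpha))  # x_i - x_j

# Direct energy of this bond, as in Eq. (11).
rx = xij[0] + lam * (a[0] - b[0])
ry = xij[1] + lam * (a[1] - b[1])
V_direct = 0.5 * k * (rx * rx + ry * ry)

# Expanded energy, Eq. (17): V0 + J cos(ti+tj) - H_A cos(ti) - H_B cos(tj).
V0 = 0.5 * k * ((a0 + b0) ** 2 + lam ** 2 * (a0 ** 2 + b0 ** 2))
J = k * lam ** 2 * a0 * b0
HA = k * lam * (a0 + b0) * a0
HB = k * lam * (a0 + b0) * b0
V_expanded = V0 + J * math.cos(ti + tj) - HA * math.cos(ti) - HB * math.cos(tj)

print(abs(V_direct - V_expanded))  # agreement to machine precision
```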
We can search for a minimum of Eq. 16 if we assume that all polygons in each
set rotate equally, so that $\theta_{i}=\theta_{A}$ and
$\theta_{j}=\theta_{B}$. Using requirement 3, which fixes the ratio
$C=\frac{b_{ji}}{a_{ij}}$ to a constant, the energy becomes
$\displaystyle
V=((C+1)^{2}+\lambda^{2}(C^{2}+1+2C\cos(\theta_{A}+\theta_{B}))$ (18)
$\displaystyle-2\lambda(C+1)(\cos(\theta_{A})+C\cos(\theta_{B})))\frac{k}{2}\sum_{<ij>}a_{ij}^{2}.$
Differentiating with respect to $\theta_{A}$ and $\theta_{B}$, we have
$-2\lambda^{2}C\sin(\theta_{A}^{0}+\theta_{B}^{0})+2\lambda(1+C)\sin(\theta_{A}^{0})=0$
(19)
and
$-2\lambda^{2}C\sin(\theta_{A}^{0}+\theta_{B}^{0})+2\lambda(1+C)C\sin(\theta_{B}^{0})=0.$
(20)
From Eq. 19 and Eq. 20, we see that
$\sin(\theta_{A}^{0})=C\sin(\theta_{B}^{0})$. Plugging this back into Eq. 19,
we find either $\sin(\theta_{A}^{0})=0$, which gives the trivial minimum
$\theta_{A}^{0}=\theta_{B}^{0}=0$ valid for $0<\lambda\leq 1$, or a
non-trivial branch for which Eq. 19 reduces to
$C\cos(\theta_{B}^{0})+\cos(\theta_{A}^{0})=\frac{(1+C)}{\lambda}.$ (21)
We simplify this equation using $\sin(\theta_{A}^{0})=C\sin(\theta_{B}^{0})$.
$\cos(\theta_{A}^{0})=\frac{1+C+\lambda^{2}(1-C)}{2\lambda}$ (22)
$\cos(\theta_{B}^{0})=\frac{1+C+\lambda^{2}(C-1)}{2\lambda C}$ (23)
This solution is a minimum of the energy for
$1<\lambda<\big{|}\frac{1+C}{1-C}\big{|}$. We now substitute it into the
potential energy in Eq. 18; selecting the solutions where both $\theta_{A}$
and $\theta_{B}$ have the same sign and using
$\sin(\theta_{A}^{0})=C\sin(\theta_{B}^{0})$, we can write Eq. 18 as
$\displaystyle
V=((C+1)^{2}+\lambda^{2}(1-C^{2}+2C\cos(\theta_{B}^{0})(\cos(\theta_{A}^{0})+$
(24) $\displaystyle
C\cos(\theta_{B}^{0})))-2\lambda(C+1)(\cos(\theta_{A}^{0})+C\cos(\theta_{B}^{0})))\frac{k}{2}\sum_{<ij>}a_{ij}^{2}.$
Now, using that
$\cos(\theta_{A}^{0})+C\cos(\theta_{B}^{0})=\frac{1+C}{\lambda}$,
$\displaystyle
V=((C+1)^{2}+(C+1)^{2}-2(C+1)^{2})\frac{k}{2}\sum_{<ij>}a_{ij}^{2},$ (25)
which is clearly zero. Therefore we find that
$V(\theta_{i}=\theta^{0}_{A},\theta_{j}=\theta^{0}_{B})=0$ for
$1<\lambda<\big{|}\frac{1+C}{1-C}\big{|}$. This proves that the system has a
zero energy mode that expands the polygon network isotropically, i.e. it is a
perfect auxetic.
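The algebra above can be checked numerically: for illustrative values of $C$ and $\lambda$ in the allowed range, the angles of Eqs. 22-23 satisfy Eq. 21 and make the energy of Eq. 18 vanish. The prefactor `S` below stands in for $(k/2)\sum_{<ij>}a_{ij}^{2}$:

```python
import math

# Illustrative parameters; must satisfy 1 < lam < |(1+C)/(1-C)|.
C, lam = 0.6, 1.5
S = 2.3   # stands in for (k/2) * sum of a_ij^2

cosA = (1 + C + lam ** 2 * (1 - C)) / (2 * lam)          # Eq. (22)
cosB = (1 + C + lam ** 2 * (C - 1)) / (2 * lam * C)      # Eq. (23)
tA, tB = math.acos(cosA), math.acos(cosB)

# Consistency: sin(tA) = C sin(tB), and Eq. (21).
assert abs(math.sin(tA) - C * math.sin(tB)) < 1e-9
assert abs(cosA + C * cosB - (1 + C) / lam) < 1e-9

# Energy of Eq. (18) evaluated at the minimum: it vanishes identically.
V = ((C + 1) ** 2
     + lam ** 2 * (C ** 2 + 1 + 2 * C * math.cos(tA + tB))
     - 2 * lam * (C + 1) * (math.cos(tA) + C * math.cos(tB))) * S

print(V)  # 0 to machine precision: the expansion is a zero-energy mode
```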
## Appendix F Zero Bulk Modulus
If an elastic material has a zero bulk modulus, it means that it has a floppy
mode that allows the material to freely deform isotropically, i.e. it has a
Poisson’s ratio $\nu=-1$.
Here we will show that a polygon network that follows our three rules has a
bulk modulus $K=0$ and is therefore a perfect auxetic. Furthermore, its shear
modulus will remain finite, as the system is overconstrained.
Starting from the potential energy, we write it as a function of a small
isotropic expansion $\delta\lambda$ of each polygon,
$V=\sum_{\langle i~{}j\rangle}\frac{k}{2}{\vec{r}_{ij}}^{~{}2},$ (26)
with
$\vec{r}_{ij}=\vec{x}_{i}-\vec{x}_{j}+(1+\delta\lambda)(\vec{a}_{ij}-\vec{b}_{ji})$.
The bulk modulus is then given by the second-order coefficient of the
expansion of the energy in $\delta\lambda$,
$\Omega K=\frac{d^{2}V}{d\delta\lambda^{2}}\bigg{\rvert}_{\delta\lambda=0},$
(27)
where $\Omega$ is the area of the system. If we expand this last equation we
obtain,
$\Omega K=k\sum_{\langle
i~{}j\rangle}\left(\left(\frac{d\vec{r}_{ij}}{d\delta\lambda}\right)^{2}+\left(\frac{d^{2}\vec{r}_{ij}}{d{\delta\lambda}^{2}}\right)\cdot\vec{r}_{ij}\right)\bigg{\rvert}_{\delta\lambda=0}.$
(28)
If we assume that the system is unstressed, then the length of each spring
must be zero, $\vec{r}_{ij}=0$, and from Eq. 28, if $K=0$, we see that
$\frac{d\vec{r}_{ij}}{d\delta\lambda}=0$, which means that the spring lengths
must remain zero as functions of $\delta\lambda$. This is quite reasonable, as
a zero bulk modulus implies that there is a zero mode of the system that does
not deform the springs. Expanding $\frac{d\vec{r}_{ij}}{d\delta\lambda}=0$,
we find that
$\dot{\theta}_{i}\vec{a}^{\prime}_{ij}-\dot{\theta}_{j}\vec{b}^{\prime}_{ji}+\dot{\vec{x}}_{i}-\dot{\vec{x}}_{j}=-\vec{a}_{ij}+\vec{b}_{ji}.$
(29)
Here dots denote derivatives with respect to $\delta\lambda$ and primes denote
derivatives with respect to the corresponding angle $\theta$. If we now assume
that the polygons only rotate and have no displacements while expanding, i.e.
$\dot{\vec{x}}=0$, and that our network is bipartite, with each independent
set rotating by the same angle, ${\theta}_{i}=\theta_{A}$ and
${\theta}_{j}=\theta_{B}$, our equation becomes
$\dot{\theta}_{A}\vec{a}^{\prime}_{ij}-\dot{\theta}_{B}\vec{b}^{\prime}_{ji}=-\vec{a}_{ij}+\vec{b}_{ji}.$
(30)
Taking the projection of this equation along $\vec{a}^{\prime}_{ij}$ and
$\vec{b}^{\prime}_{ji}$, and using
$\vec{a}_{ij}\cdot\vec{a}^{\prime}_{ij}=\vec{b}_{ji}\cdot\vec{b}^{\prime}_{ji}=0$,
we obtain closed-form expressions for the angular velocities.
$\dot{\theta}_{A}=\frac{\vec{b}_{ji}\cdot\vec{a}^{\prime}_{ij}\,\vec{b}^{\prime 2}_{ji}+\vec{a}_{ij}\cdot\vec{b}^{\prime}_{ji}\,\vec{a}^{\prime}_{ij}\cdot\vec{b}^{\prime}_{ji}}{\vec{a}^{\prime 2}_{ij}\vec{b}^{\prime 2}_{ji}-\left(\vec{a}^{\prime}_{ij}\cdot\vec{b}^{\prime}_{ji}\right)^{2}}$
(31)
$\dot{\theta}_{B}=\frac{\vec{a}_{ij}\cdot\vec{b}^{\prime}_{ji}\,\vec{a}^{\prime 2}_{ij}+\vec{b}_{ji}\cdot\vec{a}^{\prime}_{ij}\,\vec{a}^{\prime}_{ij}\cdot\vec{b}^{\prime}_{ji}}{\vec{a}^{\prime 2}_{ij}\vec{b}^{\prime 2}_{ji}-\left(\vec{a}^{\prime}_{ij}\cdot\vec{b}^{\prime}_{ji}\right)^{2}}$
(32)
Following the three rules for perfect auxetics in Appendix E, we can use the
following representation of the vectors $\vec{a}_{ij}$ and $\vec{b}_{ji}$ to
manipulate them easily:
$\vec{a}_{ij}=a_{ij}\begin{pmatrix}\cos(\theta_{i}+\alpha_{ij})\\ \sin(\theta_{i}+\alpha_{ij})\end{pmatrix}$
$\vec{b}_{ji}=b_{ji}\begin{pmatrix}\cos(-\theta_{j}-\beta_{ji})\\ \sin(-\theta_{j}-\beta_{ji})\end{pmatrix}$
With $b_{ji}/a_{ij}=C$ and $|\alpha_{ij}+\beta_{ji}|=\pi$, Eqs. 31 and 32
reduce to
$\dot{\theta}_{A}=\frac{C+\cos(\theta_{A}+\theta_{B})}{\sin(\theta_{A}+\theta_{B})}$
(33)
$\dot{\theta}_{B}=\frac{1+C\cos(\theta_{A}+\theta_{B})}{C\sin(\theta_{A}+\theta_{B})}$
(34)
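These closed forms can be checked numerically (an illustrative sketch, not the paper's code; all numbers are arbitrary test values) by solving Eq. 30 directly as a $2\times 2$ linear system and comparing with Eqs. 33 and 34:

```python
import numpy as np

# Verify Eqs. (33)-(34) for one edge by solving the constraint Eq. (30),
# theta_A' * a' - theta_B' * b' = -a + b, as a 2x2 linear system.
a_len, C = 1.3, 0.7                   # |a_ij| and the ratio C = b_ji / a_ij
alpha, beta = 0.4, np.pi - 0.4        # enforces |alpha_ij + beta_ji| = pi
theta_A, theta_B = 0.8, 0.5
b_len = C * a_len

# Vectors a_ij, b_ji and their derivatives with respect to theta_A, theta_B
a_vec = a_len * np.array([np.cos(theta_A + alpha), np.sin(theta_A + alpha)])
a_p   = a_len * np.array([-np.sin(theta_A + alpha), np.cos(theta_A + alpha)])
b_vec = b_len * np.array([np.cos(-theta_B - beta), np.sin(-theta_B - beta)])
b_p   = b_len * np.array([np.sin(-theta_B - beta), -np.cos(-theta_B - beta)])

# Solve Eq. (30) for (theta_A', theta_B')
M = np.column_stack([a_p, -b_p])
dtheta = np.linalg.solve(M, -a_vec + b_vec)

# Closed forms, Eqs. (33)-(34)
S = theta_A + theta_B
dA = (C + np.cos(S)) / np.sin(S)
dB = (1 + C * np.cos(S)) / (C * np.sin(S))
print(np.allclose(dtheta, [dA, dB]))  # True
```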
As $\dot{\theta}$ depends only on general properties of the whole system,
this is a valid solution for small deformations in a system with $K=0$.
Therefore, a system that follows the stipulated rules will be a perfect
auxetic.
# Adaptive Inference for Change Points in High-Dimensional Data
Yangfan Zhang, Runmin Wang and Xiaofeng Shao 111 Yangfan Zhang is a Ph.D.
student and Xiaofeng Shao is Professor at the Department of Statistics,
University of Illinois at Urbana-Champaign. Runmin Wang is Assistant Professor
at the Department of Statistical Science, Southern Methodist University.
Emails: <EMAIL_ADDRESS>, <EMAIL_ADDRESS>, and <EMAIL_ADDRESS>. We would
like to thank two anonymous referees for constructive comments, which led to
substantial improvements. We are also grateful to Dr. Farida Enikeeva for
sending us the code used in Enikeeva and Harchaoui (2019). Shao’s research is
partially supported by NSF-DMS-1807023 and NSF-DMS-2014018.
Abstract: In this article, we propose a class of test statistics for a change
point in the mean of high-dimensional independent data. Our test integrates
the U-statistic based approach in a recent work by Wang et al. (2019) and the
$L_{q}$-norm based high-dimensional test in He et al. (2020), and inherits
several appealing features, such as being tuning-parameter free and having
asymptotically independent test statistics for even $q$s. A simple
combination of test statistics corresponding to several different $q$s leads
to a test with adaptive power property, that is, it can be powerful against
both sparse and dense alternatives. On the estimation front, we obtain the
convergence rate of the maximizer of our test statistic standardized by sample
size when there is one change-point in mean and $q=2$, and propose to combine
our tests with a wild binary segmentation (WBS) algorithm to estimate the
change-point number and locations when there are multiple change-points.
Numerical comparisons using both simulated and real data demonstrate the
advantage of our adaptive test and its corresponding estimation method.
Keywords: asymptotically pivotal, segmentation, self-normalization, structural
break, U-statistics
## 1 Introduction
Testing and estimation of change points in a sequence of time-ordered data is
a classical problem in statistics. There is a rich literature for both
univariate and multivariate data of low dimension; see Csörgö and Horváth
(1997), Chen and Gupta (2011) and Tartakovsky et al. (2014), for some book-
length introductions and Perron (2006), Aue and Horváth (2013), and
Aminikhanghahi and Cook (2017) for recent reviews of the subject. This paper
addresses the testing and estimation for change points of high-dimensional
data where the dimension $p$ is high and can exceed the sample size $n$.
As high-dimensional data becomes ubiquitous due to technological advances in
science, engineering and other areas, change point inference under the high-
dimensional setting has drawn great interest in recent years. When the
dimension $p$ is greater than sample size $n$, traditional methods are often
no longer applicable. Among recent work that addresses change point inference
for the mean of high-dimensional data, we mention Horváth and Hušková (2012),
Chan et al. (2013), Jirak (2015), Cho (2016), Wang and Samworth (2018),
Enikeeva and Harchaoui (2019), Wang et al. (2019) and Yu and Chen (2020). In
most of these papers, the proposed methods are powerful either when the
alternative is sparse and strong, i.e., there are a few large non-zero values
in the components of mean difference, or when the alternative is weak and
dense, i.e., there are many small values in the components of mean difference.
Among some of these papers, the sparsity appeared either explicitly in the
assumptions, e.g. Wang and Samworth (2018), who proposed to project the data
to some informative direction related to the mean change, to which a
univariate change point detection algorithm can be applied, or implicitly in the
methodology, e.g. Jirak (2015), who took the maximal CUSUM statistic and
therefore essentially targeted at the sparse alternative. Yu and Chen (2020)
recently introduced a Gaussian multiplier bootstrap to calibrate critical
values of the sup norm of CUSUM test statistics in high dimensions, and their
test is also specifically designed for the sparse alternative. On the contrary, Horváth and
Hušková (2012) aggregated the univariate CUSUM test statistics using the sum
and their test is supposed to capture the dense alternative, but the validity
of their method required the cross-sectional independence assumption. Wang et
al. (2019) aimed at dense alternatives by extending the U-statistic based
approach pioneered by Chen and Qin (2010) in the two-sample testing problem.
An exception is the test developed in Enikeeva and Harchaoui (2019), which was
based on a combination of a linear statistic and a scan statistic, and can be
adaptive to both sparse and dense alternatives. However, its critical values
were obtained under strong Gaussian and independent components assumptions and
they do not seem to work when these assumptions are not satisfied; see Section
4 for numerical evidence.
In practice, it is often unrealistic to assume a particular type of
alternative and there is little knowledge about the type of changes if any.
Thus there is a need to develop a new test that is adaptive to different
types of alternatives and has good power against a broad range of
alternatives. In this article, we shall propose a new class of tests that can
have this adaptive power property, which holds without the strong Gaussian and
independent components assumptions. Our test is built on two recent advances
in the high-dimensional testing literature: Wang et al. (2019) and He et al.
(2020). In Wang et al. (2019), they developed a mean change point test based
on a U-statistic that is an unbiased estimator of the squared $L_{2}$ norm of
the mean difference. They further used the idea of self-normalization [see
Shao (2010), Shao and Zhang (2010), Shao (2015)] to eliminate the need of
estimating the unknown nuisance parameter. He et al. (2020) studied both one
sample and two sample high-dimensional testing problem for the mean and
covariance matrix using the $L_{q}$ norm, where $q\in[2,\infty]$ is an integer or $\infty$.
They showed that the corresponding U-statistics at different $q$s are
asymptotically independent, which facilitates a simple combination of the
tests based on several values of $q$ (say $2$ and $\infty$) and their
corresponding $p$-values, and that the resulting combined test is adaptive to
both dense and sparse alternatives.
Building on these two recent advances, we shall propose a new $L_{q}$ norm
based test for a change point in the mean of high-dimensional independent
data. Our contributions to the literature are threefold. On the methodological
front, we develop a new class of test statistics (as indexed by $q\in
2\mathbb{N}$) based on the principle of self-normalization in the high-
dimensional setting. Our test is tuning parameter free when testing for a
single change point. A simple combination of tests corresponding to different
$q$s can be easily implemented due to the asymptotic independence and results
in an adaptive test that has well-rounded power against a wide range of
alternatives. On the theory front, whereas He et al. (2020) proved the asymptotic
independence of one-sample and two-sample U-statistics corresponding to
different $q$s, we derive the asymptotic independence for several stochastic
processes corresponding to different $q$s under significantly weaker
assumptions. More precisely, we can define two-sample test statistics on
different sub-samples for each $q\in 2\mathbb{N}$. These statistics can be
viewed as smooth functionals of stochastic processes indexed by the starting
and ending points of the sub-samples, which turn out to be asymptotically
independent for different $q$s. Compared to the adaptive test in Enikeeva and
Harchaoui (2019), which relied on the Gaussian and independent components
assumptions, our technical assumptions are much weaker, allowing non-
Gaussianity and weak dependence among components. Furthermore, we obtained the
convergence rate of the argmax of our SN-based test statistic standardized by
sample size when there is one change point and $q=2$. Lastly, in terms of
empirical performance, we show in the simulation studies that the adaptive
test can have accurate size and high power for both sparse and dense
alternatives. Its power is always close to the highest power attained by a
single statistic under both dense and sparse alternatives.
The rest of the paper is organized as follows. In Section 2, we define our
statistic, derive the limiting null distribution and analyze the asymptotic
power when there is one change point. We also propose an adaptive procedure
combining several tests of different $q\in 2\mathbb{N}$. In Section 3, we
study the asymptotic behavior of change-point location estimators when there
is a single change-point and combine the WBS algorithm with our test to
estimate the location when there are multiple change points. In Section 4, we
present some simulation results for both testing and estimation and apply the
WBS-based estimation method to a real data set. Section 5 concludes. All
technical details and some additional simulation results are gathered in the
supplemental material.
## 2 Test Statistics and Theoretical Properties
Mathematically, let $\\{Z_{t}\\}_{t=1}^{n}\in\mathbb{R}^{p}$ be i.i.d. random
vectors with mean 0 and covariance $\Sigma$. Our observed data is
$X_{t}=Z_{t}+\mu_{t}$, where $\mu_{t}=E(X_{t})$ is the mean at time $t$. The
null hypothesis is that there is no change point in the mean vector $\mu_{t}$
and the alternative is that there is at least one change point, the location
of which is unknown, i.e., we want to test
$\mathcal{H}_{0}:\mu_{1}=\mu_{2}=\cdots=\mu_{n}\quad
\text{vs.}\quad\mathcal{H}_{1}:\mu_{1}=\cdots=\mu_{k_{1}}\neq\mu_{k_{1}+1}=\cdots=\mu_{k_{s}}\neq\mu_{k_{s}+1}=\cdots=\mu_{n},$
where $k_{1}<k_{2}<\cdots<k_{s}$ and $s$ are unknown. Note that we assume
temporal independence, which seems to be commonly adopted in change point
analysis for genomic data; see Zhang et al. (2010), Jeng et al. (2010), and
Zhang and Siegmund (2012) among others.
In this section, we first construct our two-sample U-statistic for a single
change point alternative, which is the cornerstone for the estimation method
we will introduce later. Then we derive the theoretical size and power results
for our statistic. We also form an adaptive test that combines tests
corresponding to different $q$s. Throughout the paper, we assume $p\wedge
n\rightarrow+\infty,$ and we may use $p=p_{n}$ to emphasize that $p$ can
depend on $n$. For a vector or matrix $A$ and $q\in 2\mathbb{N}$, we use
$\|A\|_{q}$ to denote $\big{(}\sum_{i,j}A_{ij}^{q}\big{)}^{1/q}$, and in
particular, for $q=2,\|\cdot\|_{q}=\|\cdot\|_{F}$ equals the Frobenius norm.
We use $\|\Sigma\|_{s}$ to denote the spectral norm. Denote the number of
permutations $P_{q}^{k}=k!/(k-q)!$, and define $\sum^{*}$ to be the summation
over all pairwise distinct indices. If $\lim_{n}a_{n}/b_{n}=0$, we denote
$a_{n}=o(b_{n})$, and if
$0<\liminf_{n}a_{n}/b_{n}\leq\limsup_{n}a_{n}/b_{n}<+\infty$, we denote
$a_{n}\asymp b_{n}$. Throughout the paper, we use “$\stackrel{{\scriptstyle
D}}{{\rightarrow}}$” to denote convergence in distribution,
“$\stackrel{{\scriptstyle P}}{{\rightarrow}}$” for convergence in probability,
and “$\leadsto$” for process convergence in some suitable function space. We
use $\ell_{\infty}\left([0,1]^{3}\right)$ to denote the set of bounded
functions on $[0,1]^{3}$.
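For concreteness, the entrywise norm $\|A\|_{q}$ defined above can be sketched as follows (an illustrative helper, not from the paper; for even $q$, taking absolute values changes nothing):

```python
import numpy as np

def lq_norm(A, q):
    """Entrywise L_q norm (sum over all entries); q = 2 gives the Frobenius norm."""
    return float((np.abs(A) ** q).sum() ** (1.0 / q))

A = np.array([[3.0, 4.0], [0.0, 0.0]])
print(lq_norm(A, 2))  # 5.0
```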
### 2.1 U-statistic and Self-normalization
In this subsection, we shall develop our test statistics for one change point
alternative, i.e.,
$\mathcal{H}_{1}:\mu_{1}=\cdots=\mu_{k_{1}}\neq\mu_{k_{1}+1}=\cdots=\mu_{n},$
where $k_{1}$ is unknown. In Wang et al. (2019), a U-statistic based approach
was developed and their test targets the dense alternative since the power
is a monotone function of $\sqrt{n}\|\Delta\|_{2}/\|\Sigma\|_{F}^{1/2}$, where
$\Delta$ is the difference between pre-break and post-break means, i.e.,
$\Delta=\mu_{n}-\mu_{1}$, and $\Sigma$ is the covariance matrix of $X_{i}$.
Thus their test may not be powerful if the change in mean is sparse and
$\|\Delta\|_{2}$ is small. Note that several tests have been developed to
capture sparse alternatives as mentioned in Section 1. In practice, when there
is no prior knowledge of the alternative for a given data set at hand, it
would be helpful to have a test that can be adaptive to different types and
magnitudes of the change. To this end, we shall adopt the $L_{q}$ norm-based
approach, as initiated by Xu et al. (2016) and He et al. (2020), and develop a
class of test statistics indexed by $q\in 2\mathbb{N}$, and then combine these
tests to achieve the adaptivity.
Denote $X_{i}=(X_{i,1},\ldots,X_{i,p})^{T}$. For any positive even number
$q\in 2\mathbb{N}$, consider the following two-sample U-statistic of order
$(q,q)$,
$T_{n,q}(k)=\frac{1}{P_{q}^{k}P_{q}^{n-k}}\sum_{l=1}^{p}\sum^{*}_{1\leq
i_{1},\ldots,i_{q}\leq k}\sum^{*}_{k+1\leq j_{1},\ldots,j_{q}\leq
n}\left(X_{i_{1},l}-X_{j_{1},l}\right)\cdots\left(X_{i_{q},l}-X_{j_{q},l}\right),$
for any $k=q,\cdots,n-q$. Simple calculation shows that
$\mathbb{E}\left[T_{n,q}(k)\right]=0$ for any $k=q,\cdots,n-q$ under the null
hypothesis, and $\mathbb{E}\left[T_{n,q}(k_{1})\right]=\|\Delta\|_{q}^{q}$
under the alternative. When $q\in 2\mathbb{N}+1$ (i.e., $q$ is odd) and under
the alternative,
$\mathbb{E}\left[T_{n,q}(k_{1})\right]=\sum_{j=1}^{p}\delta_{j}^{q}\not=\|\Delta\|_{q}^{q}$
where $\Delta=(\delta_{1},\cdots,\delta_{p})^{T}$. This is the main reason we
focus on the statistics corresponding to even $q$s since for an odd $q$,
$\sum_{j=1}^{p}\delta_{j}^{q}=0$ does not imply $\Delta=0$.
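A brute-force implementation of $T_{n,q}(k)$ straight from its definition (a sketch, not the authors' code; feasible only for tiny $n$, as the cost grows like $n^{2q}$) makes the unbiasedness concrete: with a pure mean shift and no noise, the statistic returns $\|\Delta\|_{q}^{q}$ exactly for even $q$.

```python
import itertools
from math import factorial

import numpy as np

def T_nq(X, k, q=2):
    """Brute-force two-sample U-statistic T_{n,q}(k); X is (n, p), split at k."""
    n, p = X.shape
    perm = lambda m: factorial(m) // factorial(m - q)  # P_q^m = m!/(m-q)!
    total = 0.0
    for l in range(p):
        for ii in itertools.permutations(range(k), q):         # distinct i's <= k
            for jj in itertools.permutations(range(k, n), q):  # distinct j's > k
                prod = 1.0
                for i, j in zip(ii, jj):
                    prod *= X[i, l] - X[j, l]
                total += prod
    return total / (perm(k) * perm(n - k))

# Noiseless mean shift Delta after time k: T_{n,q}(k) = ||Delta||_q^q (even q)
n, k = 6, 3
delta = np.array([0.5, -1.0, 0.0, 2.0])
X = np.zeros((n, len(delta)))
X[k:] += delta
print(np.isclose(T_nq(X, k, q=2), np.sum(delta ** 2)))  # True
```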
If the change point location $k_{1}=\lfloor\tau_{1}n\rfloor$,
$\tau_{1}\in(0,1)$ is known, then we would use $T_{n,q}(k_{1})$ as our test
statistic. As implied by the asymptotic results shown later, we have that
under the null,
$\left(\frac{\tau_{1}(1-\tau_{1})}{n\|\Sigma\|_{q}}\right)^{q/2}\frac{T_{n,q}(k_{1})}{\sqrt{q!}}\stackrel{{\scriptstyle
D}}{{\rightarrow}}N(0,1),$
under suitable moment and weak dependence assumptions on the components of
$X_{t}$. In practice, a typical approach is to replace $\|\Sigma\|_{q}$ by a
ratio-consistent estimator, which is available for $q=2$ [see Chen and Qin
(2010)], but not for general $q\in 2\mathbb{N}$. In practice, the location
$k_{1}$ is unknown, which adds additional complexity to the variance
estimation and motivates Wang et al. (2019) to use the idea of self-
normalization [Shao (2010), Shao and Zhang (2010)] in the case $q=2$. Self-
normalization is a nascent inferential method [Lobato (2001), Shao (2010)]
that has been developed for low and fixed-dimensional parameter in a low
dimensional time series. It uses an inconsistent variance estimator to yield
an asymptotically pivotal statistic, and does not involve any tuning parameter
or involves less number of tuning parameters compared to traditional
procedures. See Shao (2015) for a comprehensive review of recent developments
for low dimensional time series. There have been two recent extensions to the
high-dimensional setting: Wang and Shao (2019) adopted a one sample
U-statistic with trimming and extended self-normalization to inference for the
mean of high-dimensional time series; Wang et al. (2019) used a two sample
U-statistic and extended the self-normalization (SN)-based change point test
in Shao and Zhang (2010) to high-dimensional independent data. Both papers are
$L_{2}$ norm based, and this seems to be the first time that an $L_{q}$-norm
based approach is extended to the high-dimensional setting via self-normalization.
Following Wang et al. (2019), we consider the following self-normalization
procedure. Define
$\displaystyle U_{n,q}(k;s,m)$ $\displaystyle=\sum_{l=1}^{p}\sum^{*}_{s\leq
i_{1},\ldots,i_{q}\leq k}\sum^{*}_{k+1\leq j_{1},\ldots,j_{q}\leq
m}\left(X_{i_{1},l}-X_{j_{1},l}\right)\cdots\left(X_{i_{q},l}-X_{j_{q},l}\right),$
which is an un-normalized version of $T_{n,q}$ applied to the subsample
$(X_{s},\cdots,X_{m})$. Let
$W_{n,q}(k;s,m):=\frac{1}{m-s+1}\sum_{t=s+q-1}^{k-q}U_{n,q}(t;s,k)^{2}+\frac{1}{m-s+1}\sum_{t=k+q}^{m-q}U_{n,q}(t;k+1,m)^{2}.$
The self-normalized statistic is given by
$\widetilde{T}_{n,q}:=\max_{k=2q,\ldots,n-2q}\frac{U_{n,q}(k;1,n)^{2}}{W_{n,q}(k;1,n)}.$
###### Remark 2.1.
If we want to test for multiple change points, we can use the scanning idea
presented in Zhang and Lavitas (2018) and Wang et al. (2019) and construct the
following statistic:
$T_{n,q}^{*}:=\max_{2q\leq l_{1}\leq
l_{2}-2q}\frac{U_{n,q}\left(l_{1};1,l_{2}\right)^{2}}{W_{n,q}\left(l_{1};1,l_{2}\right)}+\max_{m_{1}+2q-1\leq
m_{2}\leq
n-2q}\frac{U_{n,q}\left(m_{2};m_{1},n\right)^{2}}{W_{n,q}\left(m_{2};m_{1},n\right)}.$
We shall skip further details as the asymptotic theory and computational
implementation are fairly straightforward.
### 2.2 Limiting Null Distribution
Before presenting our main theorem, we need to make the following assumptions.
###### Assumption 2.1.
Suppose $Z_{1},\ldots,Z_{n}$ are i.i.d. copies of $Z_{0}$ with mean 0 and
covariance matrix $\Sigma$, and the following conditions hold.
1. 1.
There exists $c_{0}>0$ not depending on $n$ such that
$\inf_{i=1,\ldots,p_{n}}\operatorname{Var}\left(Z_{0,i}\right)\geq c_{0}$.
2. 2.
$Z_{0}$ has up to $8$-th moments, with $\sup_{1\leq j\leq
p}\mathbb{E}[Z_{0,j}^{8}]\leq C,$ and for $h=2,\ldots,8$ there exist constants
$C_{h}$ depending on $h$ only and a constant $r>2$ such that
$\left|\operatorname{cum}\left(Z_{0,l_{1}},\ldots,Z_{0,l_{h}}\right)\right|\leq
C_{h}\left(1\vee\max_{1\leq i,j\leq h}\left|l_{i}-l_{j}\right|\right)^{-r}.$
###### Remark 2.2 (Discussion of Assumptions).
The above cumulant assumption is implied by geometric moment contraction [cf.
Proposition 2 of Wu and Shao (2004)] or physical dependence measure proposed
by Wu (2005) [cf. Section 4 of Shao and Wu (2007)], or $\alpha$-mixing
[Andrews (1991), Zhurbenko and Zuev (1975)] in the time series setting. It
basically imposes weak dependence among the $p$ components in the data. Our
theory holds as long as a permutation of $p$ components satisfies the cumulant
assumption, since our test is invariant to the permutation within the
components.
To derive the limiting null distribution for $\widetilde{T}_{n,q}$, we need to
define some useful intermediate processes. Define
$\displaystyle D_{n,q}(r;[a,b])$ $\displaystyle=U_{n,q}(\lfloor
nr\rfloor;\lfloor na\rfloor+1,\lfloor nb\rfloor)$
$\displaystyle=\sum_{l=1}^{p}\sum^{*}_{\lfloor na\rfloor+1\leq
i_{1},\ldots,i_{q}\leq\lfloor nr\rfloor}\sum^{*}_{\lfloor nr\rfloor+1\leq
j_{1},\ldots,j_{q}\leq\lfloor
nb\rfloor}\left(X_{i_{1},l}-X_{j_{1},l}\right)\cdots\left(X_{i_{q},l}-X_{j_{q},l}\right),$
for any $0\leq a<r<b\leq 1$. Note that under the null, $X_{i}$’s have the same
mean. Therefore, we can rewrite $D_{n,q}$ as
$\displaystyle D_{n,q}(r;[a,b])=$
$\displaystyle\sum_{l=1}^{p}\sum^{*}_{\lfloor na\rfloor+1\leq
i_{1},\ldots,i_{q}\leq\lfloor nr\rfloor}\sum^{*}_{\lfloor nr\rfloor+1\leq
j_{1},\ldots,j_{q}\leq\lfloor
bn\rfloor}\left(X_{i_{1},l}-X_{j_{1},l}\right)\cdots\left(X_{i_{q},l}-X_{j_{q},l}\right)$
$\displaystyle=$ $\displaystyle\sum_{l=1}^{p}\sum^{*}_{\lfloor na\rfloor+1\leq
i_{1},\ldots,i_{q}\leq\lfloor nr\rfloor}\sum^{*}_{\lfloor nr\rfloor+1\leq
j_{1},\ldots,j_{q}\leq\lfloor
bn\rfloor}\left(Z_{i_{1},l}-Z_{j_{1},l}\right)\cdots\left(Z_{i_{q},l}-Z_{j_{q},l}\right)$
$\displaystyle=$ $\displaystyle\sum_{c=0}^{q}(-1)^{q-c}\binom{q}{c}P^{\lfloor
nr\rfloor-\lfloor na\rfloor-c}_{q-c}P^{\lfloor nb\rfloor-\lfloor
nr\rfloor-q+c}_{c}S_{n,q,c}(r;[a,b]).$
In the above expression, considering the summand for each $c=0,1,\ldots,q,$ we
can define, for any $0\leq a<r<b\leq 1$,
$S_{n,q,c}(r;[a,b])=\sum_{l=1}^{p}\sum^{*}_{\lfloor na\rfloor+1\leq
i_{1},\cdots,i_{c}\leq\lfloor nr\rfloor}\sum^{*}_{\lfloor nr\rfloor+1\leq
j_{1},\cdots,j_{q-c}\leq\lfloor
nb\rfloor}\left(\prod_{t=1}^{c}Z_{i_{t},l}\prod_{s=1}^{q-c}Z_{j_{s},l}\right),$
if $\lfloor nr\rfloor\geq\lfloor na\rfloor+1$ and $\lfloor
nb\rfloor\geq\lfloor nr\rfloor+1,$ and 0 otherwise.
###### Theorem 2.1.
If Assumption 2.1 holds, then under the null and for a finite set $I$ of
positive even numbers, we have that
$\Big{\\{}a_{n,q}^{-1}S_{n,q,c}(\cdot;[\cdot,\cdot])\Big{\\}}_{q\in I,0\leq
c\leq q}\leadsto\Big{\\{}Q_{q,c}(\cdot;[\cdot,\cdot])\Big{\\}}_{q\in I,0\leq
c\leq q}$
in $\ell_{\infty}\left([0,1]^{3}\right)$ jointly over $q\in I,0\leq c\leq q$,
where
$a_{n,q}=\sqrt{n^{q}\sum^{p}_{l_{1},l_{2}=1}\Sigma^{q}_{l_{1},l_{2}}}=\sqrt{n^{q}\|\Sigma\|_{q}^{q}}$,
and $Q_{q,c}$ are centered Gaussian processes. Furthermore, the covariance of
$Q_{q,c_{1}}$ and $Q_{q,c_{2}}$ is given by
$\operatorname{cov}\left(Q_{q,c_{1}}(r_{1};[a_{1},b_{1}]),Q_{q,c_{2}}(r_{2};[a_{2},b_{2}])\right)=\binom{C}{c}c!(q-c)!(r-A)^{c}(R-r)^{C-c}(b-R)^{q-C},$
where
$(r,R)=(\min\\{r_{1},r_{2}\\},\max\\{r_{1},r_{2}\\})$, $(a,A)=(\min\\{a_{1},a_{2}\\},\max\\{a_{1},a_{2}\\})$, $(b,B)=(\min\\{b_{1},b_{2}\\},\max\\{b_{1},b_{2}\\})$, and
$(c,C)=(\min\\{c_{1},c_{2}\\},\max\\{c_{1},c_{2}\\})$. Additionally,
$Q_{q_{1},c_{1}}$ and $Q_{q_{2},c_{2}}$ are mutually independent if $q_{1}\neq
q_{2}\in 2\mathbb{N}$.
For illustration, consider the case when $a_{1}<a_{2}<r_{1}<r_{2}<b_{1}<b_{2}$
and $c_{1}\leq c_{2}$. We have
$\operatorname{cov}\left(Q_{q,c_{1}}(r_{1};[a_{1},b_{1}]),Q_{q,c_{2}}(r_{2};[a_{2},b_{2}])\right)=\binom{c_{2}}{c_{1}}c_{1}!(q-c_{1})!(r_{1}-a_{2})^{c_{1}}(r_{2}-r_{1})^{c_{2}-c_{1}}(b_{1}-r_{2})^{q-c_{2}},$
which implies, for example,
$\operatorname{var}\left[Q_{q,c}(r;[a,b])\right]=c!(q-c)!(r-a)^{c}(b-r)^{q-c}.$
The proof of Theorem 2.1 is long and is deferred to the supplement.
###### Theorem 2.2.
Suppose Assumption 2.1 holds. Then for a finite set $I$ of positive even
numbers,
$\Big{\\{}n^{-q}a_{n,q}^{-1}D_{n,q}(\cdot;[\cdot,\cdot])\Big{\\}}_{q\in
I}\leadsto\Big{\\{}G_{q}(\cdot;[\cdot,\cdot])\Big{\\}}_{q\in I}$
in $\ell_{\infty}\left([0,1]^{3}\right)$ jointly over $q\in I$, where
$G_{q}=\sum_{c=0}^{q}(-1)^{q-c}\binom{q}{c}(r-a)^{q-c}(b-r)^{c}Q_{q,c}$
and $Q_{q,c}$ is given in Theorem 2.1. Furthermore, for $q_{1}\not=q_{2}\in
2\mathbb{N}$, $G_{q_{1}}$ and $G_{q_{2}}$ are independent. Consequently, we
have that under the null,
$\widetilde{T}_{n,q}\stackrel{{\scriptstyle\mathcal{D}}}{{\longrightarrow}}\widetilde{T}_{q}=\sup_{r\in[0,1]}\frac{G_{q}(r;0,1)^{2}}{\int_{0}^{r}G_{q}(u;0,r)^{2}du+\int_{r}^{1}G_{q}(u;r,1)^{2}du}.$
It can be derived that the $G_{q}(\cdot;[\cdot,\cdot])$ is a Gaussian process
with the following covariance structure:
$\operatorname{var}[G_{q}(r;[a,b])]=\sum^{q}_{c=0}\binom{q}{c}^{2}c!(q-c)!(r-a)^{2q-c}(b-r)^{q+c}=q!(r-a)^{q}(b-r)^{q}(b-a)^{q}.$
When $r_{1}=r_{2}=r$,
$\displaystyle\operatorname{cov}(G_{q}(r;[a_{1},b_{1}]),G_{q}(r;[a_{2},b_{2}]))=q!(r-A)^{q}(b-r)^{q}(B-a)^{q}.$
When $r_{1}\not=r_{2}$,
$\operatorname{cov}(G_{q}(r_{1};[a_{1},b_{1}]),G_{q}(r_{2};[a_{2},b_{2}]))=q![(r-A)(b-R)(B-a)-(A-a)(R-r)(B-b)]^{q},$
where $(r,R,a,A,b,B,c,C)$ is defined in Theorem 2.1. The limiting null
distribution $\widetilde{T}_{q}$ is pivotal and its critical values can be
simulated as done in Wang et al. (2019) for the case $q=2$. The simulated
critical values and their corresponding realizations for $q=2,4,6$ are
available upon request. For a practical reason, we did not pursue the larger
$q$, such as $q=8,10$, since larger $q$ corresponds to more trimming on the
two ends and the finite sample performance when $q=6$ is already very
promising for detecting sparse alternatives, see Section 4. An additional
difficulty with larger $q$ is the associated computation cost and complexity
in its implementation.
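The closed form for $\operatorname{var}[G_{q}(r;[a,b])]$ given above rests on the combinatorial identity $\sum_{c=0}^{q}\binom{q}{c}^{2}c!(q-c)!\,x^{2q-c}y^{q+c}=q!\,x^{q}y^{q}(x+y)^{q}$ with $x=r-a$ and $y=b-r$; a quick numerical sanity check (illustrative, with arbitrary test values):

```python
from math import comb, factorial

# Check: sum_c C(q,c)^2 c!(q-c)! x^(2q-c) y^(q+c) = q! x^q y^q (x+y)^q,
# where x plays the role of r-a and y of b-r.
q, x, y = 4, 0.3, 0.5
lhs = sum(comb(q, c) ** 2 * factorial(c) * factorial(q - c)
          * x ** (2 * q - c) * y ** (q + c) for c in range(q + 1))
rhs = factorial(q) * x ** q * y ** q * (x + y) ** q
print(abs(lhs - rhs) < 1e-12)  # True
```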
###### Remark 2.3.
Compared to He et al. (2020), we assume the $8$th moment conditions, which is
weaker than the uniform sub-Gaussian type conditions in their condition
A.4(2), although the latter condition seems to be exclusively used for
deriving the limit of the test statistic corresponding to $q=\infty$.
Furthermore, since their strong mixing condition with exponential decay rate
[cf. condition A.4(3) of He et al. (2020)] implies our cumulant assumption 2.1
[see Andrews (1991), Zhurbenko and Zuev (1975)], our overall assumption is
weaker than condition A.4 in He et al. (2020). Despite the weaker assumptions,
our results are stronger, as we derived the asymptotic independence of several
stochastic processes indexed by $q\in 2\mathbb{N}$, which implies the asymptotic
independence of U-statistics indexed by $q\in 2\mathbb{N}$.
Note that our current formulation does not include the $q=\infty$ case, which
corresponds to $L_{\infty}$ norm of mean difference $\|\Delta\|_{\infty}$. The
$L_{\infty}$-norm based test was developed by Yu and Chen (2020) and their
test statistic is based on CUSUM statistics
$Z_{n}(s)=\sqrt{\frac{s(n-s)}{n}}\left(\frac{1}{s}\sum_{i=1}^{s}X_{i}-\frac{1}{n-s}\sum_{i=s+1}^{n}X_{i}\right)$
and takes the form $T_{n}=\max_{s_{0}\leq s\leq
n-s_{0}}\|Z_{n}(s)\|_{\infty}$, where $s_{0}$ is the boundary removal
parameter. They did not obtain the asymptotic distribution of $T_{n}$ but
showed that a bootstrap CUSUM test statistic is able to approximate the finite
sample distribution of $T_{n}$ using a modification of Gaussian and bootstrap
approximation techniques developed by Chernozhukov et al. (2013, 2017). Given
the asymptotic independence between $L_{q}$-norm based U statistic and
$L_{\infty}$-norm based test statistic [He et al. (2020)] in the two-sample
testing context, we would conjecture that the $T_{n}$ test statistic in Yu and Chen
(2020) is asymptotically independent of our $\widetilde{T}_{n,q}$ for any
$q\in 2\mathbb{N}$ under suitable moment and weak componentwise dependence
conditions. A rigorous investigation is left for future work.
### 2.3 Adaptive Test
Let $I$ be a set of $q\in 2\mathbb{N}$ (e.g. {2,6}). Since
$\widetilde{T}_{n,q}$s are asymptotically independent for different $q\in I$
under the null, we can combine their corresponding $p$-values and form an
adaptive test. For example, we may use $p_{ada}=\min_{q\in I}p_{q}$, where
$p_{q}$ is the $p$-value corresponding to $\widetilde{T}_{n,q}$, as a new
statistic. Its $p$-value is equal to $1-(1-p_{ada})^{|I|}$. If we want to
perform a level-$\alpha$ test, it is equivalent to conducting tests based on
$\widetilde{T}_{n,q},\forall q\in I$, each at level $1-(1-\alpha)^{1/|I|}$, and
rejecting the null if any one of the statistics exceeds its critical value.
Therefore, we only need to compare each $\widetilde{T}_{n,q}$ with its
$(1-\alpha)^{1/|I|}$-quantile of the corresponding limiting null distribution.
As we explained before, a smaller $q$ (say $q=2$) tends to have higher power
under the dense alternative, which is also the main motivation for the
proposed method in Wang et al. (2019). On the contrary, a larger $q$ has a
higher power under the sparse alternative, as
$\lim_{q\rightarrow\infty}\|\Delta_{n}\|_{q}=\|\Delta_{n}\|_{\infty}$.
Therefore, with the adaptive test, we can achieve high power under both dense
and sparse alternatives with asymptotic size still equal to $\alpha$. This
adaptivity will be confirmed by our asymptotic power analysis presented in
Section 2.4 and simulation results presented in Section 4.
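The combination rule above can be sketched in a few lines (illustrative code, not from the paper; the input $p$-values are made-up numbers):

```python
# Adaptive combination of single-q p-values; valid because the statistics
# \tilde{T}_{n,q} are asymptotically independent across q under the null.
def adaptive_test(p_vals, alpha=0.05):
    """p_vals: dict mapping q to the p-value of \tilde{T}_{n,q}.
    Returns the combined p-value 1 - (1 - p_ada)^|I| and the decision."""
    p_ada = min(p_vals.values())
    p_combined = 1 - (1 - p_ada) ** len(p_vals)
    return p_combined, p_combined < alpha

p_comb, reject = adaptive_test({2: 0.30, 6: 0.01})
print(round(p_comb, 4), reject)  # 0.0199 True
```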
### 2.4 Power Analysis
###### Theorem 2.3.
Assume that the change point location is at $k_{1}=\lfloor\tau_{1}n\rfloor$
with the change in the mean equal to
$\Delta_{n}=(\delta_{n,1},\ldots,\delta_{n,p})^{T}$. Suppose Assumption 2.1,
and the following conditions on $\Delta_{n}$ hold. We have
1. 1.
If
$n^{q/2}\left\|\Delta_{n}\right\|_{q}^{q}/\|\Sigma\|_{q}^{q/2}\rightarrow\infty,\text{
then
}\widetilde{T}_{n,q}\stackrel{{\scriptstyle\mathcal{P}}}{{\longrightarrow}}\infty$;
2. 2.
If $n^{q/2}\left\|\Delta_{n}\right\|_{q}^{q}/\|\Sigma\|_{q}^{q/2}\rightarrow
0,\text{ then
}\widetilde{T}_{n,q}\stackrel{{\scriptstyle\mathcal{D}}}{{\longrightarrow}}\widetilde{T}_{q}$;
3. 3.
If
$n^{q/2}\left\|\Delta_{n}\right\|_{q}^{q}/\|\Sigma\|_{q}^{q/2}\rightarrow\gamma\in(0,+\infty)$,
then
$\widetilde{T}_{n,q}\stackrel{{\scriptstyle\mathcal{D}}}{{\longrightarrow}}\sup_{r\in[0,1]}\frac{\\{G_{q}(r;0,1)+\gamma
J_{q}(r,0,1)\\}^{2}}{\int_{0}^{r}\\{G_{q}(u;0,r)+\gamma
J_{q}(u,0,r)\\}^{2}du+\int_{r}^{1}\\{G_{q}(u;r,1)+\gamma
J_{q}(u,r,1)\\}^{2}du},$
where
$J_{q}(r,a,b):=\left\\{\begin{array}[]{lr}{\left(\tau_{1}-a\right)^{q}(b-r)^{q}}&{a<\tau_{1}\leq
r<b}\\\ {(r-a)^{q}\left(b-\tau_{1}\right)^{q}}&{a<r<\tau_{1}<b}\\\
{0}&{\tau_{1}<a\text{ or }\tau_{1}>b}\end{array}\right..$
###### Remark 2.4.
The following example illustrates the power behavior using different $q\in
2\mathbb{N}$. For simplicity, we assume $\Sigma=I_{p}$ and consider a change
in the mean equal to $\Delta_{n}=\delta\cdot(\bm{1}_{d},\bm{0}_{p-d})^{T}$. In
addition to demonstrating that large (small) $q$ is favorable to the sparse
(dense) alternatives, our local asymptotic power results stated in Theorem 2.3
also allow us to provide a rule to classify an alternative, which is given by
$\left\\{\begin{array}[]{lr}{sparse}&{d=o(\sqrt{p})}\\\
{in~{}between}&{d\asymp\sqrt{p}}\\\
{dense}&{\sqrt{p}=o(d)}\end{array}\right..$
To have a nontrivial power, it suffices to have
$n^{q/2}\left\|\Delta_{n}\right\|_{q}^{q}/\|\Sigma\|_{q}^{q/2}=dn^{q/2}\delta^{q}/\sqrt{p}=\gamma\in(0,+\infty)$,
which implies $\delta\asymp(\sqrt{p}/d)^{1/q}n^{-1/2}$. Therefore, when
$d=o(\sqrt{p})$, a smaller $\delta$ corresponds to a larger $q$. On the
contrary, when $\sqrt{p}=o(d)$, a smaller $q$ that yields a larger $\delta$ is
preferable to have higher power. A similar argument still holds for more general
$\Delta_{n}$ and $\Sigma$, as long as we have a similar order for
$\|\Delta_{n}\|_{q}^{q}$ and $\|\Sigma\|_{q}^{q}$, and the latter one is
guaranteed by Assumption 2.1.
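The scaling $\delta\asymp(\sqrt{p}/d)^{1/q}n^{-1/2}$ from Remark 2.4 can be made concrete with a small computation (illustrative, with made-up $n$, $p$, $d$); a smaller threshold means that $q$ detects weaker signals:

```python
import numpy as np

# Minimal detectable signal size per Remark 2.4 (Sigma = I_p):
# delta ~ (sqrt(p)/d)^(1/q) * n^(-1/2)
def delta_threshold(n, p, d, q):
    return (np.sqrt(p) / d) ** (1.0 / q) / np.sqrt(n)

n, p = 200, 1000
for d in (5, 1000):   # sparse (d << sqrt(p)) vs dense (sqrt(p) << d)
    t2 = delta_threshold(n, p, d, 2)
    t6 = delta_threshold(n, p, d, 6)
    # sparse regime favors q = 6, dense regime favors q = 2
    print(d, "q=2 better" if t2 < t6 else "q=6 better")
```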
We can summarize the asymptotic powers of the tests under different
alternatives in the following table. Note that when at least one single-$q$
based test obtains asymptotically nontrivial power (power 1), our adaptive
test can also achieve nontrivial power (power 1).
| Alternative | $\delta$ | $I=\\{2\\}$ | $I=\\{q\\}$ | $I=\\{2,q\\}$ |
|---|---|---|---|---|
| Dense $\sqrt{p}=o(d)$ | $\delta=o(p^{1/4}d^{-1/2}n^{-1/2})$ | $\alpha$ | $\alpha$ | $\alpha$ |
| | $\delta\asymp p^{1/4}d^{-1/2}n^{-1/2}$ | $\beta_{1}\in(\alpha,1)$ | $\alpha$ | $(\alpha,\beta_{1})$ |
| | $p^{1/4}d^{-1/2}n^{-1/2}=o(\delta)$ and $\delta=o(p^{1/2q}d^{-1/q}n^{-1/2})$ | 1 | $\alpha$ | 1 |
| | $\delta\asymp p^{1/2q}d^{-1/q}n^{-1/2}$ | 1 | $(\alpha,1)$ | 1 |
| | $p^{1/2q}d^{-1/q}n^{-1/2}=o(\delta)$ | 1 | 1 | 1 |
| Sparse $d=o(\sqrt{p})$ | $\delta=o(p^{1/2q}d^{-1/q}n^{-1/2})$ | $\alpha$ | $\alpha$ | $\alpha$ |
| | $\delta\asymp p^{1/2q}d^{-1/q}n^{-1/2}$ | $\alpha$ | $\beta_{2}\in(\alpha,1)$ | $(\alpha,\beta_{2})$ |
| | $p^{1/2q}d^{-1/q}n^{-1/2}=o(\delta)$ and $\delta=o(p^{1/4}d^{-1/2}n^{-1/2})$ | $\alpha$ | 1 | 1 |
| | $\delta\asymp p^{1/4}d^{-1/2}n^{-1/2}$ | $(\alpha,1)$ | 1 | 1 |
| | $p^{1/4}d^{-1/2}n^{-1/2}=o(\delta)$ | 1 | 1 | 1 |

Table 1: Asymptotic powers of single-$q$ and adaptive tests
Liu et al. (2020) recently studied the detection of a sparse change in the
high-dimensional mean vector under the Gaussian assumption as a minimax
testing problem. Let $\rho^{2}=\min(k_{1},n-k_{1})\|\Delta_{n}\|_{2}^{2}$. In
the fully dense case, i.e., when $\|\Delta_{n}\|_{0}=p$, where
$\|\Delta_{n}\|_{0}$ denotes the $L_{0}$ norm, Theorem 8 in Liu et al. (2020)
stated that the minimax rate is given by
$\rho^{2}\asymp\|\Sigma\|_{F}\sqrt{\log\log(8n)}\vee\|\Sigma\|_{s}\log\log(8n)$.
Thus, under the assumption that $k_{1}/n=\tau_{1}\in(0,1)$, the $L_{2}$-norm
based test in Wang et al. (2019) achieves rate optimality up to a logarithmic
factor. Consequently, any adaptive test based on $I$ is rate optimal (up to a
logarithmic factor) as long as $2\in I$.
In the special case $\Sigma=I_{p}$, the minimax rate is given by
$\rho^{2}\asymp\left\\{\begin{array}[]{lr}\sqrt{p\log\log(8n)}&\text{ if
}d\geq\sqrt{p\log\log(8n)}\\\
d\log\left(\frac{ep\log\log(8n)}{d^{2}}\right)\vee\log\log(8n)&\text{ if
}d<\sqrt{p\log\log(8n)}\end{array}\right..$
Recall that $\Delta_{n}=\delta\cdot(\bm{1}_{d},\bm{0}_{p-d})^{T}$. In the
sparse setting $d=o(\sqrt{p})$ and under the assumptions that $d\asymp p^{v}$,
$v\in(0,1/2)$, and $d>\log\log(8n)$, the minimax rate is $d$ (up to a
logarithmic factor), which corresponds to $\delta\asymp n^{-1/2}$. Our
$L_{q}$-norm based test is not minimax rate optimal since its detection
boundary is $(\sqrt{p}/d)^{1/q}n^{-1/2}$, which gets closer to $n^{-1/2}$ as
$q\in 2\mathbb{N}$ gets larger. In the dense setting $\sqrt{p}=o(d)$ and under
the assumptions that $d\asymp p^{v}$, $v\in(1/2,1)$, and
$d>\sqrt{p\log\log(8n)}$, the minimax rate is $\sqrt{p}$ (up to a logarithmic
factor), which corresponds to $\delta\asymp p^{1/4}/\sqrt{nd}$. Therefore, the
$L_{2}$-norm based test in Wang et al. (2019) is again rate optimal (up to a
logarithmic factor).
## 3 Change-point Estimation
In this section, we investigate change-point location estimation based on the
change-point test statistics proposed in Section 2. Specifically, Section
3.1 presents the convergence rate for the argmax of the SN-based test statistic
upon suitable standardization. Section 3.2 proposes a combination of the wild
binary segmentation (WBS, Fryzlewicz (2014)) algorithm with our SN-based test
statistics, for both the single-$q$ and adaptive tests, to estimate multiple
change points.
### 3.1 Single Change-point Estimation
In this subsection, we propose to estimate the location of a change point
assuming that the data is generated from the following single change-point
model,
$X_{t}=\mu_{1}+\Delta_{n}{\bf 1}(t>k^{*})+Z_{t},~{}t=1,\cdots,n,$
where $k^{*}=k_{1}=\lfloor\tau^{*}n\rfloor$ is the location of change point.
In the literature, it is common to focus on the convergence rate of the
estimators of the relative location $\tau^{*}\in(0,1)$, that is, we shall
focus on the convergence rate of $\hat{\tau}=\hat{k}/n$, where $\hat{k}$ is an
estimator for $k^{*}$.
Given the discussions about size and power properties of the SN-based test
statistic in Section 2, it is natural to use the argmax of the test statistic
as the estimator for $k^{*}$. That is, we define
$\hat{k}=\operatorname{argmax}_{k=2q,\ldots,n-2q}\frac{U_{n,q}(k;1,n)^{2}}{W_{n,q}(k;1,n)}.$
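A schematic version of this argmax estimator is sketched below. The exact U-statistics $U_{n,q}$ and $W_{n,q}$ of Section 2 are not reproduced here; for $q=2$ we use the plain CUSUM contrast and a self-normalizer built from sub-sample contrasts as a stand-in, so this is an illustrative sketch only, not the paper's implementation.

```python
import numpy as np

def cusum(X, k, s, e):
    """CUSUM contrast of rows s..e (1-based, inclusive) of X, split at k."""
    n1, n2 = k - s + 1, e - k
    diff = X[s - 1:k].mean(axis=0) - X[k:e].mean(axis=0)
    return n1 * n2 / (e - s + 1) * float(np.sum(diff ** 2))

def estimate_k(X, q=2):
    n = X.shape[0]

    def stat(k):
        # self-normalizer: contrasts computed before and after the split k
        W = sum(cusum(X, t, 1, k) for t in range(2, k)) + \
            sum(cusum(X, t, k + 1, n) for t in range(k + 2, n))
        return cusum(X, k, 1, n) ** 2 / max(W, 1e-12)

    # trimmed search range, mirroring k = 2q, ..., n - 2q in the definition
    return max(range(2 * q, n - 2 * q + 1), key=stat)

rng = np.random.default_rng(0)
n, p, k_star = 120, 20, 60
X = rng.standard_normal((n, p))
X[k_star:] += 1.0                 # mean shift of 1 in every coordinate after k* = 60
print(estimate_k(X))              # typically lands very close to k* = 60
```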
To present the convergence rate for $\hat{\tau}$, we shall introduce the
following assumptions.
###### Assumption 3.1.
1. $tr(\Sigma^{4})=o(\|\Sigma\|_{F}^{4})$;
2. $\sum_{l_{1},...,l_{h}=1}^{p}cum(Z_{0,l_{1}},...,Z_{0,l_{h}})^{2}\leq C\|\Sigma\|_{F}^{h}$, for $h=2,...,6$;
3. $\|\Sigma\|_{F}=o(n\|\Delta_{n}\|_{2}^{2})$.
Let $\gamma_{n,q}=n^{q/2}\|\Delta_{n}\|_{q}^{q}/\|\Sigma\|_{q}^{q/2}$ so
$\gamma_{n,2}=n\|\Delta_{n}\|^{2}/\|\Sigma\|_{F}$. We have the following
convergence rate of $\hat{\tau}$ for the case $q=2$.
###### Theorem 3.1.
Suppose Assumption 3.1 holds and $q=2$. It holds that
$\hat{\tau}-\tau^{*}=o_{p}(\gamma_{n,2}^{-1/4+\kappa})$ as $n\wedge
p\rightarrow\infty$, for any $0<\kappa<1/4$.
###### Remark 3.1.
Assumption 3.1 (1) and (2) have been assumed in Wang et al. (2019), and they
are implied by Assumption 2.1; see Remark 3.2 in Wang et al. (2019).
Assumption 3.1(3) is equivalent to $\gamma_{n,2}\rightarrow\infty$, which
implies that $\hat{\tau}$ is a consistent estimator of $\tau^{*}$. Note that
even in the low-dimensional setting, no convergence rate for the argmax of the
SN-based statistic (standardized by the sample size) was obtained in Shao and
Zhang (2010). Thus, this is the first time the asymptotic rate for the argmax
of an SN-based test statistic is studied. On the other hand, the proof for the
more general case $q\in 2\mathbb{N}$ is considerably more involved than the
special case $q=2$ and is deferred to future investigation.
### 3.2 Multiple Change-point Estimation
In practice, the interest is often in the change point estimation or
segmentation, when the presence of change points is confirmed by testing or
based on prior knowledge. In the high-dimensional context, the literature on
change point estimation is relatively scarce; see Cho (2016), Wang and
Samworth (2018) and Wang et al. (2019). Here we shall follow the latter two
papers and use the wild binary segmentation [Fryzlewicz (2014)] coupled with
our test developed for a single $q$ or adaptive test to estimate the number
and location of change points. Note that the standard binary segmentation
procedure may fail when the change in means is not monotonic, as shown in Wang
et al. (2019) via simulations.
For any integers $s,e$ satisfying $2q\leq s+2q-1\leq e-2q\leq n-2q$, define
$Q_{n,q}(s,e):=\max_{b=s+2q-1,...,e-2q}\frac{U_{n,q}^{2}(b;s,e)}{W_{n,q}(b;s,e)}.$
Note that $Q_{n,q}(s,e)$ is essentially the statistic $T_{n,q}$ based on the
sub-sample $(X_{s},...,X_{e})$. Let $F_{n}^{M}$ denote a random sample of $M$
pairs $(s_{m},e_{m})$ satisfying $2q\leq s_{m}+2q-1\leq e_{m}-2q\leq n-2q$,
drawn independently with replacement. In practice, we may require the segments
to be slightly longer to reduce unnecessary fluctuations of the critical
values. Then define $\hat{\xi}_{n,M,q}=\max_{m=1,\cdots,M}Q_{n,q}(s_{m},e_{m})$;
we stop the algorithm if $\hat{\xi}_{n,M,q}\leq\xi_{n,q}$, where $\xi_{n,q}$ is
a threshold to be specified below, and estimate a change point otherwise; see
Algorithm 1 for details.
One anonymous reviewer asked whether it is possible to derive the limiting
distribution of $\hat{\xi}_{n,M,q}$ under the null, which turns out to be
challenging for two reasons: (1) The SN-based test statistic for different
intervals could be highly dependent, especially when the two intervals overlap
substantially; (2) the number of such randomly generated intervals is usually large,
and it would be more valuable to develop an asymptotic distribution under the
assumption that both sample size and number of intervals go to infinity. It
seems difficult to use the classical argument for this problem, and we shall
leave this for future investigation.
To obtain the threshold value $\xi_{n,q}$ as needed in the Algorithm 1, we
generate $R$ standard Gaussian samples each of which has sample size $n$ and
dimension $p$. For the $r$-th sample ($r=1,\ldots,R)$, we calculate
$\hat{\xi}^{(r)}_{n,M,q}=\max_{m=1,\cdots,M}Q_{n,q}^{(r)}(s_{m},e_{m}),$
where $Q_{n,q}^{(r)}(s_{m},e_{m})$ is the SN-based test statistic applied to
the $r$th Gaussian simulated sample. We can take $\xi_{n,q}$ to be the 95%
quantile of $\\{\hat{\xi}^{(r)}_{n,M,q}\\}_{r=1}^{R}$. Since the self-
normalized test statistic is asymptotically pivotal, the above threshold
$\xi_{n,q}$ is expected to approximate the 95% quantile of the finite sample
distribution of maximized SN-based test statistic applied to $M$ randomly
drawn sub-samples from the original data.
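The threshold simulation just described can be sketched as follows. Here `Q_nq` is a placeholder callable (a hypothetical name, not from the paper's code) standing for the SN statistic $Q_{n,q}(s,e)$ applied to rows $s,\ldots,e-1$ of a data matrix; the same set of pre-drawn intervals must be reused across all replicates and on the real data.

```python
import numpy as np

# Monte Carlo threshold xi_{n,q}: simulate R standard Gaussian data sets of
# the same dimensions (n, p), evaluate the maximized statistic over the SAME
# M pre-drawn intervals, and return the chosen (e.g., 95%) quantile.

def simulate_threshold(Q_nq, n, p, intervals, R=200, level=0.95, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    xi_samples = []
    for _ in range(R):                       # r = 1, ..., R Gaussian replicates
        Z = rng.standard_normal((n, p))
        xi_samples.append(max(Q_nq(Z, s, e) for s, e in intervals))
    return float(np.quantile(xi_samples, level))
```

Because the self-normalized statistic is asymptotically pivotal, this Gaussian calibration is expected to approximate the finite-sample quantile for non-Gaussian data as well.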
1: function WBS($S,E$)
2:   if $E-S<4q-1$ then
3:     STOP
4:   else
5:     $\mathcal{M}_{s,e}\leftarrow$ set of those $1\leq m\leq M$ s.t. $S\leq s_{m},e_{m}\leq E$, $e_{m}-s_{m}\geq 4q-1$
6:     $m_{q}\leftarrow\operatorname{argmax}_{m\in\mathcal{M}_{s,e}}Q_{n,q}(s_{m},e_{m})$
7:     if $Q_{n,q}(s_{m_{q}},e_{m_{q}})>\xi_{n,q}$ then
8:       add $b_{0}\leftarrow\operatorname{argmax}_{b}U_{n,q}^{2}(b;s_{m_{q}},e_{m_{q}})/W_{n,q}(b;s_{m_{q}},e_{m_{q}})$ to the set of estimated change points
9:       WBS($S,b_{0}$)
10:      WBS($b_{0}+1,E$)
11:    else
12:      STOP
Algorithm 1 WBS Algorithm for a given $q\in 2\mathbb{N}$
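Algorithm 1's recursion can be transcribed almost line by line. In the sketch below, `Q_nq(s, e)` and `argmax_b(s, e)` are placeholder callables (hypothetical names) for the SN statistic and its maximizing split, `intervals` holds the $M$ pre-drawn pairs $(s_m,e_m)$, and `xi` is the simulated threshold $\xi_{n,q}$.

```python
def wbs(S, E, intervals, Q_nq, argmax_b, xi, q, found=None):
    """Recursive WBS segmentation; returns the list of estimated change points."""
    found = [] if found is None else found
    if E - S < 4 * q - 1:                # segment too short: STOP
        return found
    cand = [(s, e) for (s, e) in intervals
            if S <= s and e <= E and e - s >= 4 * q - 1]
    if not cand:
        return found
    s, e = max(cand, key=lambda se: Q_nq(*se))
    if Q_nq(s, e) > xi:                  # significant: record split and recurse
        b0 = argmax_b(s, e)
        found.append(b0)
        wbs(S, b0, intervals, Q_nq, argmax_b, xi, q, found)
        wbs(b0 + 1, E, intervals, Q_nq, argmax_b, xi, q, found)
    return found
```

The intervals are drawn once and shared by every recursive call, exactly as in the pseudocode, so the recursion only ever narrows the candidate set.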
To apply the adaptive test, we calculate $\hat{\xi}_{n,M,q}^{(r)}$ from the
$r$-th sample for each $q\in I$. Denote $q_{I}:=\max_{q\in I}q$. We calculate a
$p$-value for each single-$q$ based statistic and select the most significant
one for location estimation, which gives the adaptive version; see Algorithm
2.
1: function WBS($S,E$)
2:   if $E-S<4q_{I}-1$ then
3:     STOP
4:   else
5:     $p_{0}=0.05$
6:     for $q$ in $I$ do
7:       $\mathcal{M}_{s,e}\leftarrow$ set of those $1\leq m\leq M$ s.t. $S\leq s_{m},e_{m}\leq E$, $e_{m}-s_{m}\geq 4q_{I}-1$
8:       $m_{q}\leftarrow\operatorname{argmax}_{m\in\mathcal{M}_{s,e}}Q_{n,q}(s_{m},e_{m})$
9:       $p_{q}=R^{-1}\#\{Q_{n,q}(s_{m_{q}},e_{m_{q}})>\xi^{(r)}_{n,M,q}\}_{r=1}^{R}$
10:      if $p_{q}<p_{0}$ for the current $q$ then
11:        $b_{0}\leftarrow\operatorname{argmax}_{b}U_{n,q}^{2}(b;s_{m_{q}},e_{m_{q}})/W_{n,q}(b;s_{m_{q}},e_{m_{q}})$
12:        $p_{0}\leftarrow p_{q}$
13:      NEXT
14:    add $b_{0}$ to the set of estimated change points
15:    WBS($S,b_{0}$)
16:    WBS($b_{0}+1,E$)
Algorithm 2 Adaptive WBS Algorithm
## 4 Numerical Studies
In this section, we present numerical results to examine the finite sample
performance of our testing and estimation method in comparison with the
existing alternatives. Section 4.1 shows the size and power for the single
change point tests; Section 4.2 presents the estimation result when there is
one single change-point; Section 4.3 compares several WBS-based estimation
methods for multiple change point estimation, including the INSPECT method in
Wang and Samworth (2018). Finally, we apply our method to a real data set in
Section 4.4.
### 4.1 Single Change Point Testing
In this subsection, we examine the size and power property of our single-$q$
and adaptive tests in comparison with the one in Enikeeva and Harchaoui (2019)
(denoted as EH), which seems to be the only adaptive method in the literature.
The data are $X_{i}\sim N(\mu_{i},\Sigma)$, where $\mu_{i}=0$ for $i=1,\cdots,n$
under the null. We set $(n,p)=(200,100)$ and $(400,200)$ and perform 2000
Monte Carlo replications. We consider four different configurations of
$\Sigma=(\sigma_{ij})$ as follows,
$\Sigma=(\sigma_{ij})$ as follows,
$\sigma_{ij}=\left\\{\begin{array}[]{lr}{\mathds{1}_{i=j}}&{\text{Id}}\\\
{0.5^{|i-j|}}&{\text{AR(0.5)}}\\\ {0.8^{|i-j|}}&{\text{AR(0.8)}}\\\
{\mathds{1}_{i=j}+0.25\cdot\mathds{1}_{i\not=j}}&{\text{CS}}\\\
\end{array}\right..$
They correspond to independent components (Id), auto-regressive model with
order $1$ (AR(0.5) and AR(0.8)) and compound symmetric (CS), respectively. The
first three configurations imply weak dependence among components and thus
satisfy Assumption 2.1, whereas the compound symmetric covariance matrix corresponds
to strong dependence among components and violates our assumption. The size of
our tests, including $\widetilde{T}_{n,q}$ at a single $q=2,4,6$ and combined
tests with $\mathcal{I}=(2,4)$, $(2,6)$ and $(2,4,6)$ are presented in Table
2. It appears that all tests are oversized when $\Sigma$ is compound
symmetric, which is somewhat expected since the strong dependence among
components brings non-negligible errors in asymptotic approximation. As a
matter of fact, we conjecture that our limiting null distribution
$\widetilde{T}_{q}$ no longer holds in this case. Below we shall focus our
comments on the first three configurations (Id, AR(0.5) and AR(0.8)).
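The four covariance configurations above translate directly into matrices; the helper `make_sigma` and the sampling step below are our own illustrative scaffolding, not code from the paper.

```python
import numpy as np

# The four covariance configurations of Section 4.1 as explicit matrices.
def make_sigma(p: int, kind: str) -> np.ndarray:
    i, j = np.indices((p, p))
    if kind == "Id":                      # independent components
        return np.eye(p)
    if kind == "AR(0.5)":                 # AR(1) with coefficient 0.5
        return 0.5 ** np.abs(i - j)
    if kind == "AR(0.8)":                 # AR(1) with coefficient 0.8
        return 0.8 ** np.abs(i - j)
    if kind == "CS":                      # compound symmetric
        return np.where(i == j, 1.0, 0.25)
    raise ValueError(kind)

# Null-hypothesis data X_i ~ N(0, Sigma) for one configuration:
rng = np.random.default_rng(0)
Sigma = make_sigma(100, "AR(0.5)")
X = rng.multivariate_normal(np.zeros(100), Sigma, size=200)   # (n, p) = (200, 100)
```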
The size for $q=2$ (i.e., the test in Wang et al. (2019)) appears quite
accurate except for some degree of under-rejection in the Id case. For $q=4$,
it is oversized and its size seems inferior to the case $q=6$, which also
shows some over-rejection for the AR(1) models when $(n,p)=(200,100)$, but the
size distortion improves quite a bit when we increase $(n,p)$ to $(400,200)$.
Among the three combined tests, there are apparent over-rejections for
$\mathcal{I}=(2,4)$, $(2,4,6)$ for the AR(1) models, and the test
corresponding to $\mathcal{I}=(2,6)$ exhibits the most accurate size overall.
By contrast, the EH shows serious size distortions in all settings with some
serious over-rejection when the componentwise dependence is strong (e.g.,
AR(0.8) and CS), which is consistent with the fact that its validity strongly
relies on the Gaussian and componentwise independence assumptions. We also
checked the sensitivity of the size with respect to non-Gaussian data and
observed serious distortion for EH when the data are generated from a
non-Gaussian distribution (results not shown). Overall, the adaptive test with
$\mathcal{I}=(2,6)$ seems preferred to all other tests (including the adaptive
test with $\mathcal{I}=(2,4,6)$) in terms of size accuracy.
Please insert Table 2 here!
To investigate the power, we let $\mu_{i}=0$ for $i\leq n/2$ and
$\mu_{i}=\sqrt{\delta/d}\cdot(\bm{1}_{d},\bm{0}_{p-d})^{T}$ for $i>n/2$. We
take $\delta=1,2$ and $d=3$, which corresponds to a sparse alternative; and
let $d=p$ to examine the power under the dense alternative; see Table 3. In
the case of sparse alternative, we can see that the powers corresponding to
$q=4$ and $q=6$ are much higher than that for $q=2$, which is consistent with
our intuition. When $q=4$, the power is slightly higher than that for $q=6$,
which might be explained by the over-rejection with $q=4$ (in the case of
AR(1) models), and we expect no power gain as we increase $q$ to $8,10$ etc,
so the results for these larger $q$ are not included. Also for larger $q$,
there is more trimming involved as the maximum runs from $2q$ to $n-2q$ in our
test statistics, so if the change point occurs outside of the range
$[2q,n-2q]$, our test has little power. In the dense alternative case, the
power for $q=2$ is the highest as expected, and the power for $q=4$ is again
slightly higher than that for $q=6$.
The power of the combined tests (i.e., ${\mathcal{I}}=(2,4)$ or $(2,6)$ or
$(2,4,6)$) is always fairly close to the best single one within the set. For
example, the power for $(2,6)$ is very close to the power for $q=6$ in the
sparse case and is quite close to the power for $q=2$ in the dense case,
indicating the adaptiveness of the combined test. In the sparse case, the
powers for ${\mathcal{I}}=(2,4)$ and $(2,4,6)$ are slightly higher than that
for $(2,6)$, which could be related to the over-rejection of the tests with
${\mathcal{I}}=(2,4)$ and $(2,4,6)$, especially when the data is generated
from AR(1) models. Overall, the adaptive tests (i.e., (2,4), (2,6) or (2,4,6))
have a good all-around power behavior against both sparse and dense
alternatives and are preferred choices when there is no prior knowledge about
the type of alternative the data falls into. Since the size for $(2,6)$ is
more accurate than that for (2,4) and (2,4,6), we slightly favor the $(2,6)$
combination. EH exhibits high power for all settings, but it is at the cost of
serious size distortion. We shall not present size-adjusted power, as the
distortion is too great to recommend its use when there is componentwise
dependence in the data.
Please insert Table 3 here!
### 4.2 Estimation for Single Change-point
In this subsection, we present the square root of mean-square-error (RMSE,
multiplied by 1000 for readability) of SN-based location estimators and
compare with the EH-based estimator under the same settings as we used in
Section 4.1.
For both dense and sparse alternatives, the proposed estimators (i.e., SN(2),
SN(4) and SN(6)) perform better than the EH method when the signal is
relatively weak (i.e., $\delta=1,2$). However, as the signal becomes stronger
(i.e., $\delta=4$), the EH method can outperform ours in the identity
covariance matrix case. On the other hand, the performance of the EH estimator
apparently deteriorates as the cross-sectional dependence gets stronger,
indicating its strong reliance on the componentwise independence assumption. It
is interesting to note that the SN-based method performs fairly well, even in
the case of a compound symmetric covariance matrix, and SN(6) outperforms the
other two in all settings. A theoretical justification for the latter
phenomenon would be intriguing.
Please insert Table 4 here!
### 4.3 Estimation for Multiple Change Points
In the following simulations, we compare our WBS-based method with the INSPECT
method proposed by Wang and Samworth (2018). Following Wang et al. (2019), we
generate 100 samples of i.i.d. standard normal variables
$\\{Z_{t}\\}_{t=1}^{n}$ with $n=120,p=50$. The 3 change points are located at
$30,60$ and $90$. Denote the changes in mean by
$\bm{\theta_{1}},\bm{\theta_{2}},\bm{\theta_{3}},$ with
$\bm{\theta_{1}}=-\bm{\theta_{2}}=2\sqrt{k_{1}/d_{1}}\cdot(\bm{1}_{d_{1}},\bm{0}_{p-d_{1}}),\bm{\theta_{3}}=2\sqrt{k_{2}/d_{2}}\cdot(\bm{1}_{d_{2}},\bm{0}_{p-d_{2}})$.
We use, e.g., Dense(2.5) to denote dense changes with $d_{i}=p=50,k_{i}=2.5$
for $i=1,2,3$ and Sparse(4) to denote sparse changes with $d_{i}=5,k_{i}=4$
for $i=1,2,3$. In particular, Sparse($2.5$) & Dense($4$) refers to
$k_{1}=2.5,k_{2}=4,d_{1}=5,d_{2}=50$, where we have a mixture of sparse and
dense changes.
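The mean structure of this simulation can be built directly from the formulas above; the helpers `theta` and `make_means` are our own names, and we index time from 0 so that a change at time point $t_0$ shifts rows $t_0,\ldots,n-1$.

```python
import numpy as np

# theta(delta, d, p) = 2*sqrt(delta/d) * (1_d, 0_{p-d}): one change in mean.
def theta(delta: float, d: int, p: int) -> np.ndarray:
    v = np.zeros(p)
    v[:d] = 2.0 * np.sqrt(delta / d)
    return v

def make_means(n, p, delta1, d1, delta2, d2, cps=(30, 60, 90)):
    """Accumulate the changes theta1, theta2 = -theta1, theta3 at the change points."""
    th1 = theta(delta1, d1, p)
    shifts = [th1, -th1, theta(delta2, d2, p)]
    mu = np.zeros((n, p))
    for cp, th in zip(cps, shifts):
        mu[cp:] += th
    return mu

n, p = 120, 50
mu = make_means(n, p, 2.5, 5, 4.0, 50)        # Sparse(2.5) & Dense(4)
rng = np.random.default_rng(0)
X = mu + rng.standard_normal((n, p))          # Z_t i.i.d. standard normal
```

Note that the mean returns to zero on the third segment because $\bm{\theta_{2}}=-\bm{\theta_{1}}$, which is exactly the non-monotonic pattern that defeats standard binary segmentation.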
We compare WBS with INSPECT, for which we use the default parameters of the
"InspectChangepoint" package in R. We use two different metrics to evaluate the
performance of the different methods. One is the mean square error (MSE) of the
estimated number of change points. The other metric takes the accuracy of
location estimation into account: we utilize the correlated Rand index (CRI),
which measures the accuracy of change-point location estimation; see Rand
(1971), Hubert and Arabie (1985) and Wang et al. (2019) for more details. For
perfect estimation, the CRI equals 1; in general, it is a number between 0 and
1, and the more precisely we estimate the change-point locations, the higher
the CRI. We average the CRI over all Monte Carlo replications and record the
average Rand index (ARI). We report the MSE and ARI of the different methods
based on 100 replications in Table 5.
When there are only sparse changes and $\delta=2.5$, the performance of
adaptive procedure (WBS(2,6)) is similar to WBS(6), whose estimation is much
more accurate than WBS(2) and INSPECT. When we strengthen the signal by
increasing $\delta$ from $2.5$ to $4$, the detection power of all methods
increases, but now INSPECT has the best estimation accuracy, closely followed
by WBS(6) and WBS(2,6). In the case of purely dense changes
with $\delta=2.5$, the performance of WBS(2) dominates the others and WBS(2,6)
is the second best. When we increase $\delta$ from $2.5$ to $4$ in this
setting, the adaptive test slightly outperforms INSPECT, and its performance
is comparable to WBS(2). For both dense settings, the performance of WBS(6) is
rather poor. We can see that under all these four settings, the performance of
WBS(2,6) is always close to the best, indicating its adaptiveness to different
types of change points. Moreover, when there is a mixture of dense and sparse
changes, the adaptive method outperforms all the others. In practice, the type
of changes is often unknown, and therefore our adaptive procedure could be
appealing for practitioners.
Please insert Table 5 here!
### 4.4 Real data illustration
In this subsection, we study the genomic micro-array data set that contains
log intensity ratios of 43 individuals with bladder tumor, measured at 2215
different loci. The data are available in the R package ecp and were also
studied by Wang et al. (2019) and Wang and Samworth (2018). We compare our results
with theirs.
We take the first 200 loci for our study. For the WBS algorithm, we generate
10000 samples from i.i.d. standard normal distributions with $(n,p)=(200,43)$,
and draw 5000 random intervals to calculate the supremum statistics and get
the 98%-quantile as our critical value. The change points detected by WBS at
0.98 level and the 20 most significant points detected by INSPECT are given as
follows.
$\begin{array}[]{ll}q=2&33,39,46,74,97,102,135,155,173,191\\\
q=6&15,32,44,59,74,91,116,134,158,173,186\\\
q=2,6&15,32,38,44,59,74,91,97,102,116,134,158,173,186,191\\\
\text{INSPECT}&15,26,28,33,36,40,56,73,91,97,102,119,131,134,135,146,155,174,180,191\end{array}$
We can see that the set of change points detected by the adaptive WBS method
is roughly a union of the sets corresponding to two single WBS methods (that
is, for a single $q$), which suggests that the adaptive WBS method captures
both sparse and dense alternatives as expected. In particular,
32(33),44(46),74,134(135), 158(155),173 are detected by both single methods,
38(39),97,102,191 are detected only by $q=2$, and 15, 59, 91,116,186 only by
$q=6$. The set of the change points detected by adaptive WBS method overlaps
with the set for INSPECT by a lot, including 15, 32(33), 38(36), 74(73), 91,
97, 102, 116(119), 134, 158(155), 173(174), and 191. It is worth noting that
the change points at locations 91, 97, and 191 were detected by only one of
the two single-$q$ WBS methods and by INSPECT, whereas the adaptive WBS method
captures all of them, thanks to its good all-round power against a broad range
of alternatives. In Figure 1, we plot the log intensity ratios of the first 10
individuals at the first 200 loci, together with the locations of the change
points estimated by the adaptive method.
Please insert Figure 1 here!
This example clearly demonstrates the usefulness of the proposed adaptive test
and corresponding WBS-based estimation method. An important practical choice
is the threshold, which can be viewed as a tuning parameter in the
implementation of WBS algorithm. We shall leave its choice for future
investigation.
## 5 Conclusion
In this paper, we propose a class of asymptotically pivotal statistics for
testing a mean change in high-dimensional independent data. The test
statistics are formed on the basis of an unbiased estimator of $q$-th power of
the $L_{q}$ norm of the mean change via U-statistic and self-normalization.
They are asymptotically independent for different $q\in 2\mathbb{N}$, and
therefore, we can form an adaptive test by taking the minimum of $p$-values
corresponding to test statistics indexed by $q\in 2\mathbb{N}$. The resulting
test is shown to have good overall power against both dense and sparse
alternatives via theory and simulations. On the estimation front, we obtain
the convergence rate for the argmax of SN-based test statistic standardized by
sample size under the one change-point model and $q=2$. We also combine our
tests with the WBS algorithm to estimate multiple change points. As demonstrated
by our simulations, the WBS-based estimation method inherits the advantage of
the adaptive test, as it outperforms other methods under the setting where
there is a mixture of dense and sparse change points, and has close-to-best
performance for purely dense and purely sparse cases.
To conclude, we mention that it would be interesting to extend our adaptive
test to the high-dimensional time series setting, for which a trimming
parameter seems necessary to accommodate weak temporal dependence in view of
recent work by Wang and Shao (2019). In addition, the focus of this paper is
on mean change, whereas in practice the interest could be on other high-
dimensional parameters, such as vector of marginal quantiles, variance-
covariance matrix, and even high-dimensional distributions. It remains to be
seen whether some extensions to these more general parameters are possible in
the high-dimensional environment. We shall leave these open problems for
future research.
DGP | $(n,p)$ | $\alpha=$5%
---|---|---
$q=2$ | $q=4$ | $q=6$ | $q=2,4$ | $q=2,6$ | $q=2,4,6$ | EH
Id | (200,100) | 0.028 | 0.065 | 0.056 | 0.052 | 0.032 | 0.045 | 0.01
(400,200) | 0.036 | 0.068 | 0.051 | 0.055 | 0.041 | 0.045 | 0
AR(0.5) | (200,100) | 0.049 | 0.109 | 0.077 | 0.087 | 0.063 | 0.085 | 0.111
(400,200) | 0.043 | 0.089 | 0.058 | 0.081 | 0.056 | 0.074 | 0.093
AR(0.8) | (200,100) | 0.051 | 0.12 | 0.079 | 0.097 | 0.063 | 0.086 | 0.613
(400,200) | 0.045 | 0.094 | 0.046 | 0.082 | 0.047 | 0.069 | 0.66
CS | (200,100) | 0.095 | 0.103 | 0.081 | 0.109 | 0.088 | 0.098 | 0.729
(400,200) | 0.11 | 0.085 | 0.061 | 0.116 | 0.099 | 0.104 | 0.898
Table 2: Size for the one change-point test

$\mathcal{H}_{1}$ | DGP | $\delta$ | $(n,p)$ | $\alpha=$5%
---|---|---|---|---
$q=2$ | $q=4$ | $q=6$ | $q=2,4$ | $q=2,6$ | $q=2,4,6$ | EH
Sparse | Id | 1 | (200,100) | 0.742 | 0.981 | 0.962 | 0.982 | 0.967 | 0.981 | 0.844
(400,200) | 0.94 | 1 | 1 | 1 | 1 | 1 | 0.998
2 | (200,100) | 0.995 | 1 | 1 | 1 | 1 | 1 | 1
(400,200) | 1 | 1 | 1 | 1 | 1 | 1 | 1
AR(0.5) | 1 | (200,100) | 0.566 | 0.947 | 0.91 | 0.933 | 0.894 | 0.93 | 0.809
(400,200) | 0.82 | 1 | 1 | 1 | 0.996 | 1 | 0.994
2 | (200,100) | 0.94 | 1 | 1 | 1 | 0.999 | 1 | 0.999
(400,200) | 0.998 | 1 | 1 | 1 | 1 | 1 | 1
AR(0.8) | 1 | (200,100) | 0.298 | 0.887 | 0.84 | 0.883 | 0.82 | 0.876 | 0.912
(400,200) | 0.522 | 0.99 | 0.994 | 0.99 | 0.988 | 0.992 | 1
2 | (200,100) | 0.703 | 0.997 | 0.995 | 0.997 | 0.994 | 0.997 | 0.994
(400,200) | 0.928 | 1 | 1 | 1 | 1 | 1 | 1
CS | 1 | (200,100) | 0.231 | 0.971 | 0.937 | 0.966 | 0.927 | 0.962 | 0.964
(400,200) | 0.226 | 1 | 1 | 1 | 1 | 1 | 1
2 | (200,100) | 0.592 | 1 | 1 | 1 | 1 | 1 | 1
(400,200) | 0.656 | 1 | 1 | 1 | 1 | 1 | 1
Dense | Id | 1 | (200,100) | 0.718 | 0.326 | 0.292 | 0.677 | 0.661 | 0.645 | 0.444
(400,200) | 0.94 | 0.348 | 0.282 | 0.912 | 0.896 | 0.89 | 0.82
2 | (200,100) | 0.995 | 0.567 | 0.612 | 0.991 | 0.99 | 0.985 | 0.978
(400,200) | 1 | 0.682 | 0.628 | 1 | 1 | 1 | 1
AR(0.5) | 1 | (200,100) | 0.589 | 0.357 | 0.312 | 0.581 | 0.554 | 0.552 | 0.566
(400,200) | 0.808 | 0.36 | 0.338 | 0.762 | 0.754 | 0.732 | 0.832
2 | (200,100) | 0.927 | 0.616 | 0.573 | 0.917 | 0.909 | 0.899 | 0.93
(400,200) | 0.994 | 0.68 | 0.608 | 0.99 | 0.988 | 0.984 | 0.998
AR(0.8) | 1 | (200,100) | 0.385 | 0.33 | 0.262 | 0.41 | 0.358 | 0.376 | 0.831
(400,200) | 0.502 | 0.32 | 0.242 | 0.474 | 0.436 | 0.422 | 0.912
2 | (200,100) | 0.693 | 0.537 | 0.474 | 0.699 | 0.656 | 0.667 | 0.94
(400,200) | 0.872 | 0.564 | 0.51 | 0.866 | 0.848 | 0.84 | 0.986
CS | 1 | (200,100) | 0.345 | 0.277 | 0.235 | 0.363 | 0.33 | 0.338 | 0.84
(400,200) | 0.36 | 0.284 | 0.214 | 0.368 | 0.334 | 0.348 | 1
2 | (200,100) | 0.544 | 0.455 | 0.414 | 0.551 | 0.526 | 0.532 | 0.919
(400,200) | 0.588 | 0.474 | 0.424 | 0.584 | 0.55 | 0.562 | 1
Table 3: Power for the one change-point test under different alternatives

$\delta$ | Method | Sparse | Dense
---|---|---|---
Id | AR(0.5) | AR(0.8) | CS | Id | AR(0.5) | AR(0.8) | CS
1 | SN(2) | 38.7 | 53.7 | 72.3 | 84.5 | 41.1 | 50.6 | 72.6 | 91.4
SN(4) | 20.3 | 24.5 | 26.7 | 20.9 | 44.6 | 43.0 | 49.1 | 49.6
SN(6) | 18.5 | 22.0 | 22.3 | 19.6 | 33.1 | 31.6 | 32.6 | 31.4
EH | 150.3 | 214.9 | 300.6 | 326.8 | 155.6 | 216.9 | 291.3 | 332.3
2 | SN(2) | 26.0 | 33.5 | 41.7 | 44.3 | 27.5 | 36.6 | 54.8 | 76.8
SN(4) | 14.4 | 17.5 | 19.6 | 16.3 | 37.9 | 38.3 | 45.8 | 48.8
SN(6) | 12.1 | 14.1 | 14.9 | 11.9 | 29.7 | 28.6 | 31.9 | 30.5
EH | 41.4 | 90.2 | 196.8 | 272.7 | 40.1 | 110.8 | 210.2 | 286.6
4 | SN(2) | 21.8 | 24.8 | 29.5 | 30.4 | 20.7 | 26.1 | 39.2 | 64.2
SN(4) | 12.1 | 14.7 | 16.5 | 14.1 | 22.8 | 27.8 | 39.0 | 44.3
SN(6) | 9.9 | 10.8 | 11.6 | 10.1 | 26.1 | 26.5 | 29.1 | 33.6
EH | 8.7 | 16.7 | 66.8 | 153.2 | 9.7 | 29.9 | 109.4 | 209.6
Table 4: RMSE (multiplied by $10^{3}$) for one change-point location estimation under different alternatives

Figure 1: ACGH data of the first 10 individuals at the first 200 loci. The dashed lines represent the locations of the detected change points.

 | | $\hat{N}-N$ | MSE | ARI
---|---|---|---|---
-3 | -2 | -1 | 0 | 1 | 2 | 3
Sparse(2.5) | WBS-SN(2) | 0 | 1 | 11 | 75 | 13 | 0 | 0 | 0.28 | 0.8667
WBS-SN(4) | 0 | 0 | 0 | 98 | 2 | 0 | 0 | 0.02 | 0.958
WBS-SN(6) | 0 | 0 | 0 | 94 | 5 | 1 | 0 | 0.09 | 0.9552
WBS-SN(2,6) | 0 | 0 | 0 | 90 | 10 | 0 | 0 | 0.1 | 0.9489
INSPECT | 0 | 26 | 0 | 69 | 5 | 0 | 0 | 1.09 | 0.7951
Sparse(4) | WBS-SN(2) | 0 | 0 | 0 | 86 | 14 | 0 | 0 | 0.14 | 0.9188
WBS-SN(4) | 0 | 0 | 0 | 98 | 2 | 0 | 0 | 0.02 | 0.9684
WBS-SN(6) | 0 | 0 | 0 | 94 | 5 | 1 | 0 | 0.09 | 0.9707
WBS-SN(2,6) | 0 | 0 | 0 | 90 | 10 | 0 | 0 | 0.1 | 0.9678
INSPECT | 0 | 0 | 0 | 91 | 8 | 1 | 0 | 0.12 | 0.9766
Dense(2.5) | WBS-SN(2) | 0 | 2 | 10 | 74 | 13 | 1 | 0 | 0.35 | 0.8662
WBS-SN(4) | 94 | 4 | 2 | 0 | 0 | 0 | 0 | 8.64 | 0.0263
WBS-SN(6) | 70 | 20 | 7 | 3 | 0 | 0 | 0 | 7.17 | 0.1229
WBS-SN(2,6) | 5 | 5 | 7 | 64 | 9 | 0 | 0 | 0.91 | 0.7809
INSPECT | 0 | 40 | 0 | 46 | 13 | 0 | 1 | 1.82 | 0.6656
Dense(4) | WBS-SN(2) | 0 | 0 | 0 | 85 | 13 | 2 | 0 | 0.21 | 0.9186
WBS-SN(4) | 47 | 33 | 14 | 6 | 0 | 0 | 0 | 5.69 | 0.2748
WBS-SN(6) | 46 | 28 | 21 | 5 | 0 | 0 | 0 | 5.47 | 0.2642
WBS-SN(2,6) | 0 | 0 | 0 | 87 | 13 | 0 | 0 | 0.13 | 0.9214
INSPECT | 0 | 7 | 0 | 68 | 22 | 2 | 1 | 0.67 | 0.9027
| WBS-SN(2) | 0 | 1 | 12 | 73 | 14 | 0 | 0 | 0.3 | 0.8742
Sparse(2.5) | WBS-SN(4) | 0 | 0 | 62 | 37 | 1 | 0 | 0 | 0.63 | 0.7855
& | WBS-SN(6) | 0 | 0 | 60 | 38 | 2 | 0 | 0 | 0.62 | 0.7743
Dense(4) | WBS-SN(2,6) | 0 | 0 | 0 | 91 | 9 | 0 | 0 | 0.09 | 0.9439
| INSPECT | 0 | 21 | 1 | 70 | 6 | 1 | 1 | 1.04 | 0.8198
Table 5: Multiple change-point estimation
## References
* Aminikhanghahi and Cook (2017) Aminikhanghahi, S. and Cook, D. J. (2017), “A survey of methods for time series change point detection,” Knowledge and Information Systems, 51, 339–367.
* Andrews (1991) Andrews, D. (1991), “Heteroskedasticity and autocorrelation consistent covariance matrix estimation,” Econometrica, 59, 817–858.
* Aue and Horváth (2013) Aue, A. and Horváth, L. (2013), “Structural breaks in time series,” Journal of Time Series Analysis, 34, 1–16.
* Chan et al. (2013) Chan, J., Horváth, L., and Hušková, M. (2013), “Darling–Erdős limit results for change-point detection in panel data,” Journal of Statistical Planning and Inference, 143, 955–970.
* Chen and Zhang (2015) Chen, H. and Zhang, N. (2015), “Graph-based change-point detection,” The Annals of Statistics, 43, 139–176.
* Chen and Gupta (2011) Chen, J. and Gupta, A. K. (2011), Parametric Statistical Change Point Analysis: with Applications to Genetics, Medicine, and Finance, Springer Science & Business Media.
* Chen and Qin (2010) Chen, S. X. and Qin, Y.-L. (2010), “A two-sample test for high-dimensional data with applications to gene-set testing,” The Annals of Statistics, 38, 808–835.
* Chernozhukov et al. (2013) Chernozhukov, V., Chetverikov, D., and Kato, K. (2013), “Gaussian approximations and multiplier bootstrap for maxima of sums of high-dimensional random vectors,” Annals of Statistics, 41, 2786–2819.
* Chernozhukov et al. (2017) — (2017), “Central limit theorems and bootstrap in high dimensions,” Annals of Probability, 45, 2309–2352.
* Cho (2016) Cho, H. (2016), “Change-point detection in panel data via double CUSUM statistic,” Electronic Journal of Statistics, 10, 2000–2038.
* Csörgö and Horváth (1997) Csörgö, M. and Horváth, L. (1997), Limit Theorems in Change-Point Analysis. Wiley Series in Probability and Statistics., Wiley.
* Enikeeva and Harchaoui (2019) Enikeeva, F. and Harchaoui, Z. (2019), “High-dimensional change-point detection with sparse alternatives,” The Annals of Statistics, 47, 2051–2079.
* Fryzlewicz (2014) Fryzlewicz, P. (2014), “Wild binary segmentation for multiple change-point detection,” The Annals of Statistics, 42, 2243–2281.
* He et al. (2020) He, Y., Xu, G., Wu, C., and Pan, W. (2020), “Asymptotically independent U-Statistics in high-dimensional testing,” The Annals of Statistics, forthcoming.
* Horváth and Hušková (2012) Horváth, L. and Hušková, M. (2012), “Change-point detection in panel data,” Journal of Time Series Analysis, 33, 631–648.
* Hubert and Arabie (1985) Hubert, L. and Arabie, P. (1985), “Comparing partitions,” Journal of Classification, 2, 193–218.
* Jeng et al. (2010) Jeng, X., Cai, T., and Li, H. (2010), “Optimal sparse segment identification with application in copy number variation analysis,” Journal of the American Statistical Association, 105, 1156–1166.
* Jirak (2015) Jirak, M. (2015), “Uniform change point tests in high dimension,” The Annals of Statistics, 43, 2451–2483.
* Kley et al. (2016) Kley, T., Volgushev, S., Dette, H., and Hallin, M. (2016), “Quantile spectral processes: asymptotic analysis and inference,” Bernoulli, 22, 1770–1807.
* Liu et al. (2020) Liu, H., Gao, C., and Samworth, R. (2020), “Minimax Rates in Sparse High-Dimensional Changepoint Detection,” Annals of Statistics, to appear.
* Lobato (2001) Lobato, I. N. (2001), “Testing that a dependent process is uncorrelated,” Journal of the American Statistical Association, 96, 1066–1076.
* Perron (2006) Perron, P. (2006), “Dealing with structural breaks,” Palgrave Handbook of Econometrics, 1, 278–352.
* Rand (1971) Rand, W. M. (1971), “Objective criteria for the evaluation of clustering methods,” Journal of the American Statistical Association, 66, 846–850.
* Shao (2010) Shao, X. (2010), “A self-normalized approach to confidence interval construction in time series,” Journal of the Royal Statistical Society, Series B, 72, 343–366.
* Shao (2015) — (2015), “Self-normalization for time series: a review of recent developments,” Journal of the American Statistical Association, 110, 1797–1817.
* Shao and Wu (2007) Shao, X. and Wu, W. B. (2007), “Local whittle estimation of fractional integration for nonlinear processes,” Econometric Theory, 23, 899–929.
* Shao and Zhang (2010) Shao, X. and Zhang, X. (2010), “Testing for change points in time series,” Journal of the American Statistical Association, 105, 1228–1240.
* Tartakovsky et al. (2014) Tartakovsky, A., Nikiforov, I., and Basseville, M. (2014), Sequential Analysis: Hypothesis Testing and Changepoint Detection, Chapman and Hall/CRC.
* Wang et al. (2020) Wang, D., Yu, Y., and Rinaldo, A. (2020), “Optimal change point detection and localization in sparse dynamic networks,” forthcoming at Annals of Statistics, arXiv preprint arXiv:1809.09602.
* Wang and Shao (2019) Wang, R. and Shao, X. (2019), “Hypothesis testing for high-dimensional time series via self-normalization,” The Annals of Statistics, to appear.
* Wang et al. (2019) Wang, R., Volgushev, S., and Shao, X. (2019), “Inference for change points in high dimensional data,” arXiv preprint arXiv:1905.08446.
* Wang and Samworth (2018) Wang, T. and Samworth, R. J. (2018), “High dimensional change point estimation via sparse projection,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), 80, 57–83.
* Wu (2005) Wu, W. B. (2005), “Nonlinear system theory: Another look at dependence,” Proceedings of the National Academy of Sciences USA, 102, 14150–14154.
* Wu and Shao (2004) Wu, W. B. and Shao, X. (2004), “Limit theorems for iterated random functions,” Journal of Applied Probability, 41, 425–436.
* Xu et al. (2016) Xu, G., Lin, L., Wei, P., and Pan, W. (2016), “An adaptive two-sample test for high-dimensional means,” Biometrika, 103, 609–624.
* Yu and Chen (2020) Yu, M. and Chen, X. (2020), “Finite sample change point inference and identification for high-dimensional mean vectors,” Journal of Royal Statistical Society, Series B, to appear.
* Zhang and Siegmund (2012) Zhang, N. R. and Siegmund, D. O. (2012), “Model selection for high-dimensional multi-sequence change-point problems,” Statistica Sinica, 22, 1507–1538.
* Zhang et al. (2010) Zhang, N. R., Siegmund, D. O., Ji, H., and Li, J. Z. (2010), “Detecting simultaneous changepoints in multiple sequences,” Biometrika, 97, 631–645.
* Zhang and Lavitas (2018) Zhang, T. and Lavitas, L. (2018), “Unsupervised self-normalized change-point testing for time series,” Journal of the American Statistical Association, 113, 637–648.
* Zhao et al. (2019) Zhao, Z., Chen, L., and Lin, L. (2019), “Change-point detection in dynamic networks via graphon estimation,” arXiv preprint arXiv:1908.01823.
* Zhurbenko and Zuev (1975) Zhurbenko, I. and Zuev, N. (1975), “On higher spectral densities of stationary processes with mixing,” Ukrainian Mathematical Journal, 27, 364–373.
Supplement to “Adaptive Inference for Change Points in High-Dimensional Data”
The supplement contains all the technical proofs in Section 6 and some
additional simulation results on network change-point detection in Section 7.
## 6 Technical Appendix
In the following, we will denote $a_{n}\lesssim b_{n}$ and $b_{n}\succsim
a_{n}$ if $\limsup_{n}a_{n}/b_{n}<\infty$.
###### Proof of Theorem 2.2.
Recall that under the null, as $X_{i}$’s have the same mean,
$\displaystyle D_{n,q}(r;[a,b])=\sum_{c=0}^{q}(-1)^{q-c}\binom{q}{c}P^{\lfloor
nr\rfloor-\lfloor na\rfloor-c}_{q-c}P^{\lfloor nb\rfloor-\lfloor
nr\rfloor-q+c}_{c}S_{n,q,c}(r;[a,b]).$
Therefore, we can calculate the covariance structure of $G_{q}$ based on that
of $Q_{q,c}$ given in Theorem 2.1.
$\operatorname{var}[G_{q}(r;[a,b])]=\sum^{q}_{c=0}\binom{q}{c}^{2}c!(q-c)!(r-a)^{2q-c}(b-r)^{q+c}=q!(r-a)^{q}(b-r)^{q}(b-a)^{q}.$
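As a throwaway sanity check (not part of the original argument), the combinatorial identity behind this variance formula, $\sum_{c=0}^{q}\binom{q}{c}^{2}c!(q-c)!\,x^{2q-c}y^{q+c}=q!\,x^{q}y^{q}(x+y)^{q}$ with $x=r-a$ and $y=b-r$, can be verified symbolically; the sketch below is illustrative only.

```python
# Symbolic check (illustrative) of the identity
#   sum_{c=0}^{q} C(q,c)^2 c!(q-c)! x^{2q-c} y^{q+c} = q! x^q y^q (x+y)^q,
# which yields var[G_q(r;[a,b])] = q!(r-a)^q (b-r)^q (b-a)^q.
import sympy as sp

x, y = sp.symbols('x y', positive=True)

def check(q):
    lhs = sum(sp.binomial(q, c)**2 * sp.factorial(c) * sp.factorial(q - c)
              * x**(2*q - c) * y**(q + c) for c in range(q + 1))
    rhs = sp.factorial(q) * x**q * y**q * (x + y)**q
    # expand() of a polynomial difference is exactly 0 when the identity holds
    return sp.expand(lhs - rhs) == 0

assert all(check(q) for q in range(1, 7))
print("identity verified for q = 1,...,6")
```

The identity follows from $\binom{q}{c}^{2}c!(q-c)!=q!\binom{q}{c}$ and the binomial theorem.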
When $r_{1}<r_{2}$,
$\displaystyle\operatorname{cov}(G_{q}(r_{1};[a_{1},b_{1}]),G_{q}(r_{2};[a_{2},b_{2}]))$
$\displaystyle=$ $\displaystyle\sum_{0\leq c_{1}\leq c_{2}\leq
q}\Big{(}(-1)^{c_{1}+c_{2}}\binom{q}{c_{1}}\binom{q}{c_{2}}\binom{C}{c}c!(q-C)!\mathds{1}_{r_{1}>a_{2},r_{2}<b_{1}}$
$\displaystyle\cdot(r_{1}-a_{1})^{q-c_{1}}(b_{1}-r_{1})^{c_{1}}(r_{2}-a_{2})^{q-c_{2}}(b_{2}-r_{2})^{c_{2}}(r-A)^{c}(R-r)^{C-c}(b-R)^{q-C}\Big{)}.$
When $r_{1}>r_{2}$,
$\displaystyle\operatorname{cov}(G_{q}(r_{1};[a_{1},b_{1}]),G_{q}(r_{2};[a_{2},b_{2}]))$
$\displaystyle=$ $\displaystyle\sum_{0\leq c_{2}\leq c_{1}\leq
q}\Big{(}(-1)^{c_{1}+c_{2}}\binom{q}{c_{1}}\binom{q}{c_{2}}\binom{C}{c}c!(q-C)!\mathds{1}_{r_{2}>a_{1},r_{1}<b_{2}}$
$\displaystyle\cdot(r_{1}-a_{1})^{q-c_{1}}(b_{1}-r_{1})^{c_{1}}(r_{2}-a_{2})^{q-c_{2}}(b_{2}-r_{2})^{c_{2}}(r-A)^{c}(R-r)^{C-c}(b-R)^{q-C}\Big{)}.$
When $r_{1}=r_{2}=r$,
$\displaystyle\operatorname{cov}(G_{q}(r;[a_{1},b_{1}]),G_{q}(r;[a_{2},b_{2}]))$
$\displaystyle=$
$\displaystyle\sum^{q}_{c=0}\binom{q}{c}^{2}c!(q-c)!(r-a_{1})^{q-c}(b_{1}-r)^{c}(r-a_{2})^{q-c}(b_{2}-r)^{c}(r-A)^{c}(b-r)^{q-c}$
$\displaystyle=$ $\displaystyle q!(r-A)^{q}(b-r)^{q}(B-a)^{q}.$
For $r_{1}\not=r_{2}$, we have
$\operatorname{cov}(G_{q}(r_{1};[a_{1},b_{1}]),G_{q}(r_{2};[a_{2},b_{2}]))=q![(r-A)(b-R)(B-a)-(A-a)(R-r)(B-b)]^{q}.$
For $q_{1}\not=q_{2}$, since the covariance between $Q_{q_{1},c_{1}}$ and $Q_{q_{2},c_{2}}$ is 0, the covariance between $G_{q_{1}}$ and $G_{q_{2}}$ is also 0. Since arbitrary linear combinations of these processes are Gaussian by the previous arguments, they are jointly Gaussian, so zero correlation implies independence. The rest follows from an application of the continuous mapping theorem. $\diamondsuit$
Note that our Assumption 2.1 is a counterpart to the assumption made in Remark 3.2 of Wang et al. (2019). Their results are derived under a weaker assumption (Assumption 3.1 therein), whose $L_{q}$-norm based counterpart is given as follows.
###### Assumption 6.1.
For any $q\in 2\mathbb{N}$, the following statements hold:
A.1
$\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}(\Sigma_{l_{1}l_{2}}\Sigma_{l_{2}l_{3}}\Sigma_{l_{3}l_{4}}\Sigma_{l_{4}l_{1}})^{q/2}=o\left(\|\Sigma\|_{q}^{2q}\right)$.
A.2 $Z_{0}$ has finite moments up to order $8$ and there exists a constant $C$ independent of $n$ such that
$\sum_{l_{1},...,l_{h}=1}^{p}|{\mbox{cum}}(Z_{0,l_{1}},...,Z_{0,l_{h}})|^{q}\leq
C\|\Sigma\|_{q}^{qh/2},$
for $h=2,\ldots,8$.
We claim that Assumption 6.1 is implied by Assumption 2.1.
###### Proof of the claim.
Define
$S_{m,h}\left(l_{1}\right):=\left\\{1\leq l_{2},\ldots,l_{h}\leq
p_{n}:\max_{1\leq i,j\leq h}\left|l_{i}-l_{j}\right|=m\right\\}.$
By the triangle inequality, $|l_{1}-l_{2}|+|l_{2}-l_{3}|+|l_{3}-l_{4}|+|l_{4}-l_{1}|\geq 2\max_{1\leq i,j\leq 4}|l_{i}-l_{j}|$, and therefore,
$\displaystyle\sum_{l_{1},\cdots,l_{4}=1}^{p}(\Sigma_{l_{1}l_{2}}\Sigma_{l_{2}l_{3}}\Sigma_{l_{3}l_{4}}\Sigma_{l_{4}l_{1}})^{q/2}=$
$\displaystyle\sum_{l_{1}=1}^{p_{n}}\sum_{m=0}^{p_{n}}\sum_{l_{2},\ldots,l_{4}\in
S_{m,4}\left(l_{1}\right)}(\Sigma_{l_{1}l_{2}}\Sigma_{l_{2}l_{3}}\Sigma_{l_{3}l_{4}}\Sigma_{l_{4}l_{1}})^{q/2}$
$\displaystyle\leq$
$\displaystyle\sum_{l_{1}=1}^{p_{n}}\sum_{m=0}^{p_{n}}\left|S_{m,4}\left(l_{1}\right)\right|C_{2}^{2q}(1\vee
m)^{-qr}$ $\displaystyle\lesssim$ $\displaystyle p_{n}\sum_{m=0}^{p_{n}}(1\vee
m)^{4-2-qr}.$
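The cyclic bound invoked above can be brute-force checked on a small grid of indices; the snippet below is a throwaway numerical check, not part of the proof.

```python
# Brute-force check (illustrative) of the cyclic triangle-inequality bound
#   |l1-l2| + |l2-l3| + |l3-l4| + |l4-l1| >= 2 * max_{i,j} |l_i - l_j|.
from itertools import product

def holds(l1, l2, l3, l4):
    cycle = abs(l1 - l2) + abs(l2 - l3) + abs(l3 - l4) + abs(l4 - l1)
    ls = (l1, l2, l3, l4)
    return cycle >= 2 * max(abs(a - b) for a in ls for b in ls)

# exhaustive over a small grid
assert all(holds(*t) for t in product(range(1, 8), repeat=4))
print("cyclic bound verified on the grid")
```

The bound holds because, for any pair $(l_i,l_j)$, each of the two paths around the cycle from $l_i$ to $l_j$ has total length at least $|l_i-l_j|$.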
On the other hand,
$\displaystyle\sum_{l_{1},\cdots,l_{h}=1}^{p}\operatorname{cum}^{q}\left(X_{0,l_{1},n},\cdots,X_{0,l_{h},n}\right)$
$\displaystyle=\sum_{l_{1}=1}^{p_{n}}\sum_{m=0}^{p_{n}}\sum_{l_{2},\ldots,l_{h}\in
S_{m,h}\left(l_{1}\right)}\operatorname{cum}^{q}\left(X_{0,l_{1},n},\cdots,X_{0,l_{h},n}\right)$
$\displaystyle\leq\sum_{l_{1}=1}^{p_{n}}\sum_{m=0}^{p_{n}}\left|S_{m,h}\left(l_{1}\right)\right|C_{h}^{q}(1\vee
m)^{-qr}$ $\displaystyle\lesssim p_{n}\sum_{m=0}^{p_{n}}(1\vee m)^{h-2-qr}.$
The right-hand side has order $O\left(p_{n}^{h-qr}\right)$ if $h-qr-1>0$. Now a simple computation shows that Assumption 6.1 is satisfied if $h-qr<h/2$ for $h=2,\ldots,8$ and $q=2,4,\ldots$, which is equivalent to $r>2$. $\diamondsuit$
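The equivalence at the end of the proof is elementary: $h-qr<h/2$ rewrites as $r>h/(2q)$, and over $h=2,\ldots,8$ and even $q\geq 2$ the binding case is $h=8$, $q=2$. A quick illustrative check:

```python
# The condition h - q*r < h/2 rewrites as r > h/(2q); over h = 2,...,8 and
# even q >= 2 the largest threshold is attained at h = 8, q = 2, giving r > 2.
from fractions import Fraction

thresholds = [Fraction(h, 2 * q) for h in range(2, 9) for q in range(2, 11, 2)]
assert max(thresholds) == 2
print("worst-case threshold:", max(thresholds))
```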
We are now ready to introduce the following lemma, which is vital in proving
the main result.
###### Lemma 6.1.
Under Assumption 2.1, for any $i_{1}^{(h)},i_{2}^{(h)},...,i_{q}^{(h)}$ that
are all distinct, $h=1,...,8$, and $c=1,2,...,q$,
$\left|\sum_{l_{1},...,l_{8}=1}^{p}\delta_{n,l_{1}}^{q-c}\cdots\delta_{n,l_{8}}^{q-c}\mathbb{E}[Z_{i_{1}^{(1)},l_{1}}\cdots
Z_{i_{c}^{(1)},l_{1}}\cdots Z_{i_{1}^{(8)},l_{8}}\cdots
Z_{i_{c}^{(8)},l_{8}}]\right|\lesssim\|\Delta_{n}\|_{q}^{8(q-c)}\|\Sigma\|_{q}^{4c}\quad(1)$
In particular, for $c=q$, we have
$\left|\sum_{l_{1},...,l_{8}=1}^{p}\mathbb{E}[Z_{i_{1}^{(1)},l_{1}}\cdots
Z_{i_{c}^{(1)},l_{1}}\cdots Z_{i_{1}^{(8)},l_{8}}\cdots
Z_{i_{c}^{(8)},l_{8}}]\right|\lesssim\|\Sigma\|_{q}^{4q}.\quad(2)$
In addition, for any $c=1,2,...,q-1$,
$\left|\sum_{l_{1},l_{2}=1}^{p}\delta_{n,l_{1}}^{q-c}\delta_{n,l_{2}}^{q-c}\Sigma_{l_{1},l_{2}}^{c}\right|=o\left(\|\Delta_{n}\|_{q}^{2(q-c)}\|\Sigma\|_{q}^{c}\right).\quad(3)$
###### Proof of Lemma 6.1.
Applying the generalized Hölder’s Inequality, we obtain
$\displaystyle\left|\sum_{l_{1},...,l_{8}=1}^{p}\delta_{n,l_{1}}^{q-c}\cdots\delta_{n,l_{h}}^{q-c}\mathbb{E}[Z_{i_{1}^{(1)},l_{1}}\cdots
Z_{i_{c}^{(1)},l_{1}}\cdots Z_{i_{1}^{(8)},l_{8}}\cdots
Z_{i_{c}^{(8)},l_{8}}]\right|=\left|\mathbb{E}\left(\prod_{u=1}^{8}\left[\sum_{l_{u}=1}^{p}\delta_{n,l_{u}}^{q-c}Z_{i_{1}^{(u)},l_{u}}\cdots
Z_{i_{c}^{(u)},l_{u}}\right]\right)\right|$ $\displaystyle\leq$
$\displaystyle\prod_{u=1}^{8}\left\\{\mathbb{E}\left(\left[\sum_{l_{u}=1}^{p}\delta_{n,l_{u}}^{q-c}Z_{i_{1}^{(u)},l_{u}}\cdots
Z_{i_{c}^{(u)},l_{u}}\right]^{8}\right)\right\\}^{1/8}=\mathbb{E}\left(\left[\sum_{l_{1}=1}^{p}\delta_{n,l_{1}}^{q-c}Z_{i_{1}^{(1)},l_{1}}\cdots
Z_{i_{c}^{(1)},l_{1}}\right]^{8}\right)$ $\displaystyle=$
$\displaystyle\sum_{l_{1},...,l_{8}=1}^{p}\delta_{n,l_{1}}^{q-c}...\delta_{n,l_{8}}^{q-c}\mathbb{E}\left[Z_{i_{1}^{(1)},l_{1}}...Z_{i_{1}^{(1)},l_{8}}\cdots
Z_{i_{c}^{(1)},l_{1}}...Z_{i_{c}^{(1)},l_{8}}\right]=\sum_{l_{1},...,l_{8}=1}^{p}\delta_{n,l_{1}}^{q-c}...\delta_{n,l_{8}}^{q-c}\left(\mathbb{E}\left[Z_{i_{1}^{(1)},l_{1}}...Z_{i_{1}^{(1)},l_{8}}\right]\right)^{c},$
since $i_{1}^{(1)},i_{2}^{(1)},...,i_{c}^{(1)}$ are all different, and
$\\{Z_{i}\\}$ are i.i.d. Again by Hölder’s Inequality,
$\displaystyle\sum_{l_{1},...,l_{8}=1}^{p}\delta_{n,l_{1}}^{q-c}...\delta_{n,l_{8}}^{q-c}\left(\mathbb{E}\left[Z_{i_{1}^{(1)},l_{1}}...Z_{i_{1}^{(1)},l_{8}}\right]\right)^{c}$
$\displaystyle\leq$
$\displaystyle\left\\{\sum_{l_{1},...,l_{8}=1}^{p}(\delta_{n,l_{1}}^{q-c}...\delta_{n,l_{8}}^{q-c})^{q/(q-c)}\right\\}^{(q-c)/q}\left\\{\sum_{l_{1},...,l_{8}=1}^{p}\left(\mathbb{E}\left[Z_{i_{1}^{(1)},l_{1}}...Z_{i_{1}^{(1)},l_{8}}\right]\right)^{cq/c}\right\\}^{c/q}$
$\displaystyle\lesssim$
$\displaystyle\|\Delta_{n}\|_{q}^{8(q-c)}\left\\{\sum_{l_{1},...,l_{8}=1}^{p}\sum_{\pi}\prod_{B\in\pi}cum(Z_{0,l_{i}},i\in
B)^{q}\right\\}^{c/q}.$
The last line in the above inequalities is due to the $C_{r}$ inequality and the definition of joint cumulants, where $\pi$ runs through the list of all partitions of $\\{1,...,8\\}$ and $B$ runs through the list of all blocks of the partition $\pi$. As all blocks in a partition are disjoint, we can further bound it as
$\displaystyle\|\Delta_{n}\|_{q}^{8(q-c)}\left\\{\sum_{l_{1},...,l_{8}=1}^{p}\sum_{\pi}\prod_{B\in\pi}cum(Z_{0,l_{i}},i\in
B)^{q}\right\\}^{c/q}$ $\displaystyle=$
$\displaystyle\|\Delta_{n}\|_{q}^{8(q-c)}\left\\{\sum_{\pi}\prod_{B\in\pi}\sum_{l_{i}=1,i\in
B}^{p}cum(Z_{0,l_{i}},i\in
B)^{q}\right\\}^{c/q}\lesssim\|\Delta_{n}\|_{q}^{8(q-c)}\left\\{\sum_{\pi}\|\Sigma\|_{q}^{q\sum_{B\in\pi}|B|/2}\right\\}^{c/q}$
$\displaystyle\lesssim$
$\displaystyle\|\Delta_{n}\|_{q}^{8(q-c)}\|\Sigma\|_{q}^{4c},$
where the first inequality in the above is due to Assumption 6.1, A.2, which is a consequence of Assumption 2.1, and the fact that there are only finitely many distinct partitions of $\\{1,...,8\\}$. This completes the proof of the first result.
For the second result, we first define $A^{\circ n}$ as the notation for the
element-wise $n$-th power of any real matrix $A$, i.e. $A^{\circ
n}_{i,j}=A_{i,j}^{n}$. Then we have
$\displaystyle\left|\sum_{l_{1},l_{2}=1}^{p}\delta_{n,l_{1}}^{q-c}\delta_{n,l_{2}}^{q-c}\Sigma_{l_{1},l_{2}}^{c}\right|=\left(\Delta_{n}^{\circ(q-c)}\right)^{T}\Sigma^{\circ c}\Delta_{n}^{\circ(q-c)}\leq\|\Delta_{n}^{\circ(q-c)}\|_{2}^{2}\sigma_{\max}(\Sigma^{\circ c}),$
where $\sigma_{\max}$ is the largest eigenvalue. First observe that
$\|\Delta_{n}^{\circ(q-c)}\|_{2}^{2}=\sum_{l=1}^{p}\delta_{n,l}^{2(q-c)}=\|\Delta_{n}\|_{2(q-c)}^{2(q-c)}$.
By properties of the $L_{q}$ norm, $\|\Delta_{n}\|_{2(q-c)}\leq\|\Delta_{n}\|_{q}$ if $q\leq 2(q-c)$, and $\|\Delta_{n}\|_{2(q-c)}\leq p^{\frac{1}{2(q-c)}-\frac{1}{q}}\|\Delta_{n}\|_{q}$ if $q>2(q-c)$. This implies $\|\Delta_{n}\|_{2(q-c)}^{2(q-c)}\leq\max(p^{(2c-q)/q}\|\Delta_{n}\|_{q}^{2(q-c)},\|\Delta_{n}\|_{q}^{2(q-c)})$.
Next, for any symmetric matrix $A$, $\sigma_{\max}(A)\leq\|A\|_{\infty}=\max_{i=1,...,p}\sum_{j=1}^{p}|A_{i,j}|$. This, together with Assumption 2.1 (A.2), implies
$\displaystyle\sigma_{\max}(\Sigma^{\circ c})\leq\max_{i=1,...,p}\sum_{j=1}^{p}|\Sigma^{\circ c}_{i,j}|\lesssim\max_{i=1,...,p}\sum_{j=1}^{p}(1\vee|i-j|)^{-cr}\leq 1+\sum_{m=1}^{p}m^{-cr}<\infty,$
for some $r>2$. This is equivalent to $\sigma_{\max}(\Sigma^{\circ c})=O(1)$.
Note that $\|\Sigma\|_{q}^{q}\geq tr(\Sigma^{\circ q})\gtrsim p$, which leads
to $p^{c/q}\lesssim\|\Sigma\|_{q}^{c}$. So
$\left|\sum_{l_{1},l_{2}=1}^{p}\delta_{n,l_{1}}^{q-c}\delta_{n,l_{2}}^{q-c}\Sigma_{l_{1},l_{2}}^{c}\right|\lesssim\max(p^{(2c-q)/q}\|\Delta_{n}\|_{q}^{2(q-c)},\|\Delta_{n}\|_{q}^{2(q-c)})=o(\|\Delta_{n}\|_{q}^{2(q-c)}\|\Sigma\|_{q}^{c}),$
since $(2c-q)/q=c/q+(c-q)/q<c/q$, for $c=1,2,...,q-1$. This completes the
proof for the second result.
$\diamondsuit$
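The $L_{q}$-norm comparisons used in the proof of the second result can be spot-checked numerically on a random vector; this is an illustrative sketch only, with `p` and `q` chosen arbitrarily.

```python
# Numerical spot-check (illustrative) of the L_q-norm comparisons used above:
#   ||x||_{2(q-c)} <= ||x||_q                        when q <= 2(q-c),
#   ||x||_{2(q-c)} <= p^{1/(2(q-c)) - 1/q} ||x||_q   when q >  2(q-c).
import numpy as np

rng = np.random.default_rng(0)

def lq(x, q):
    """L_q norm of a vector."""
    return (np.abs(x) ** q).sum() ** (1.0 / q)

p, q = 50, 6                      # arbitrary illustrative choices
x = rng.standard_normal(p)
for c in range(1, q):
    s = 2 * (q - c)
    if q <= s:                    # L_q norms decrease in the exponent
        assert lq(x, s) <= lq(x, q) + 1e-9
    else:                         # Hoelder comparison with the dimension factor
        assert lq(x, s) <= p ** (1.0 / s - 1.0 / q) * lq(x, q) + 1e-9
print("norm comparisons hold for a random vector")
```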
This lemma is a generalization of its counterpart in Wang et al. (2019), in which only $q=2$ is needed. To prove Theorem 2.1, we need the following lemmas to establish tightness and finite-dimensional convergence.
###### Lemma 6.2.
Under Assumption 2.1 and for any $c=0,1,\ldots,q$, define the 3-dimensional index set
$\mathcal{G}_{n}:=\\{(i/n,j/n,k/n):i,j,k=0,1,...,n\\},$
$\mathbb{E}[a_{n}^{-8}(S_{n,q,c}(r_{1};[a_{1},b_{1}])-S_{n,q,c}(r_{2};[a_{2},b_{2}]))^{8}]\leq
C\|(a_{1},r_{1},b_{1})-(a_{2},r_{2},b_{2})\|^{4},$
for some constant $C$, any
$(a_{1},r_{1},b_{1}),(a_{2},r_{2},b_{2})\in\mathcal{G}_{n}$ such that
$\|(a_{1},r_{1},b_{1})-(a_{2},r_{2},b_{2})\|>\delta/n^{4}$.
###### Proof of Lemma 6.2.
By the $C_{r}$ inequality,
$\displaystyle\mathbb{E}[(S_{n,q,c}(r_{1};[a_{1},b_{1}])-S_{n,q,c}(r_{2};[a_{2},b_{2}]))^{8}]\leq$
$\displaystyle
C\Big{\\{}\mathbb{E}[(S_{n,q,c}(r_{1};[a_{1},b_{1}])-S_{n,q,c}(r_{1};[a_{1},b_{2}]))^{8}]$
$\displaystyle+\mathbb{E}[(S_{n,q,c}(r_{1};[a_{1},b_{2}])-S_{n,q,c}(r_{1};[a_{2},b_{2}]))^{8}]$
$\displaystyle+\mathbb{E}[(S_{n,q,c}(r_{1};[a_{2},b_{2}])-S_{n,q,c}(r_{2};[a_{2},b_{2}]))^{8}]\Big{\\}}.$
We shall only analyze $\mathbb{E}[(S_{n,q,c}(r;[a,b])-S_{n,q,c}(r;[a,b^{\prime}]))^{8}]$; the analysis of the other two terms is similar.
Note that for any $a,r,b,b^{\prime}\in[0,1]$ and $b<b^{\prime}$,
$\displaystyle\mathbb{E}[(S_{n,q,c}(r;[a,b])-S_{n,q,c}(r;[a,b^{\prime}]))^{8}]$
$\displaystyle=$
$\displaystyle\mathbb{E}\left[\left((q-c)\sum_{l=1}^{p}\sum_{\lfloor
nb\rfloor+1\leq j\leq\lfloor nb^{\prime}\rfloor}\sum_{\lfloor na\rfloor+1\leq
i_{1}\not=\cdots\not=i_{c}\leq\lfloor nr\rfloor}\sum_{\lfloor nr\rfloor+1\leq
j_{1}\not=\cdots\not=j_{q-c-1}\leq
j-1}\left(\prod_{t=1}^{c}Z_{i_{t},l}\cdot\prod_{s=1}^{q-c-1}Z_{j_{s},l}\cdot
Z_{j,l}\right)\right)^{8}\right]$ $\displaystyle=$ $\displaystyle
C\sum_{j^{(\cdot)},i_{t}^{(\cdot)},j_{s}^{(\cdot)}}\sum_{l_{1},\ldots,l_{8}=1}^{p}\prod_{h=1}^{8}\left(\mathbb{E}\left[\prod_{t=1}^{c}Z_{i_{t}^{(h)},l_{h}}\right]\mathbb{E}\left[\prod_{s=1}^{q-c-1}Z_{j_{s}^{(h)},l_{h}}\right]\mathbb{E}\left[Z_{j^{(h)},l_{h}}\right]\right)$
$\displaystyle\lesssim$ $\displaystyle n^{4(q-1)}(\lfloor nb^{\prime}\rfloor-\lfloor nb\rfloor)^{4}\|\Sigma\|_{q}^{4q}\lesssim n^{4q}\left[(b^{\prime}-b)^{4}+\frac{1}{n^{4}}\right]\|\Sigma\|_{q}^{4q},$
where we have applied Lemma 6.1-(2) to
$i_{1}^{(h)},\ldots,i_{c}^{(h)},j_{1}^{(h)},\ldots,j_{q-c-1}^{(h)},j^{(h)}$,
and the summation $\sum_{j^{(\cdot)},i_{t}^{(\cdot)},j_{s}^{(\cdot)}}$ is over
$\lfloor nb\rfloor+1\leq j^{(h)}\leq\lfloor nb^{\prime}\rfloor,\lfloor
na\rfloor+1\leq i_{1}^{(h)}\not=\cdots\not=i_{c}^{(h)}\leq\lfloor
nr\rfloor,\lfloor nr\rfloor+1\leq
j_{1}^{(h)}\not=\cdots\not=j_{q-c-1}^{(h)}\leq j^{(h)}-1$ for $h=1,\ldots,8$.
Therefore, we have
$a_{n}^{-8}\mathbb{E}[(S_{n,q,c}(r;[a,b])-S_{n,q,c}(r;[a,b^{\prime}]))^{8}]\lesssim(b^{\prime}-b)^{4}+\frac{1}{n^{4}}.$
$\diamondsuit$
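The final display combines two elementary facts: $\lfloor nb^{\prime}\rfloor-\lfloor nb\rfloor\leq n(b^{\prime}-b)+1$ for $b\leq b^{\prime}$, and the $C_{r}$ inequality $(u+v)^{4}\leq 8(u^{4}+v^{4})$. Both are easy to spot-check numerically; the snippet below is illustrative only.

```python
# Spot-check (illustrative) of the two elementary bounds behind the display:
#   (i)  floor(n*b') - floor(n*b) <= n*(b'-b) + 1   for b <= b',
#   (ii) (u + v)^4 <= 8*(u^4 + v^4)                 for u, v >= 0 (C_r inequality).
import math
import random

def floor_gap_ok(n, b, bp):
    # small tolerance guards against floating-point rounding
    return math.floor(n * bp) - math.floor(n * b) <= n * (bp - b) + 1 + 1e-9

def cr_ok(u, v):
    return (u + v) ** 4 <= 8 * (u ** 4 + v ** 4) + 1e-9

random.seed(1)
for _ in range(10_000):
    n = random.randint(1, 200)
    b = random.random()
    bp = b + random.random() * (1 - b)          # ensures b <= bp <= 1
    assert floor_gap_ok(n, b, bp)
    assert cr_ok(random.random() * 5, random.random() * 5)
print("discretization bounds hold on random draws")
```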
###### Lemma 6.3.
Fix $q$ and $c$. For any $0\leq a_{1}<r_{1}<b_{1}\leq 1$, $0\leq a_{2}<r_{2}<b_{2}\leq 1$, and any $\alpha_{1},\alpha_{2}\in\mathbb{R}$, we have
$\frac{\alpha_{1}}{a_{n}}S_{n,q,c}\left(r_{1};\left[a_{1},b_{1}\right]\right)+\frac{\alpha_{2}}{a_{n}}S_{n,q,c}\left(r_{2},\left[a_{2},b_{2}\right]\right)\stackrel{{\scriptstyle\mathcal{D}}}{{\longrightarrow}}\alpha_{1}Q_{q,c}\left(r_{1};\left[a_{1},b_{1}\right]\right)+\alpha_{2}Q_{q,c}\left(r_{2};\left[a_{2},b_{2}\right]\right),$
where
$\operatorname{cov}\left(Q_{q,c}(r_{1};[a_{1},b_{1}]),Q_{q,c}(r_{2};[a_{2},b_{2}])\right)=c!(q-c)!(r-A)^{c}(b-R)^{q-c}.$
###### Proof of Lemma 6.3.
Without loss of generality, we can assume $a_{1}<a_{2}<r_{1}<r_{2}<b_{1}<b_{2}$; the other cases are similar. Define
$\displaystyle\xi_{1,i}=$
$\displaystyle\frac{q-c}{a_{n}}\sum_{l=1}^{p}\sum^{*}_{\lfloor
na_{1}\rfloor+1\leq i_{1},\cdots,i_{c}\leq\lfloor
nr_{1}\rfloor}\sum^{*}_{\lfloor nr_{1}\rfloor+1\leq j_{1},\cdots,j_{q-c-1}\leq
i-1}\left(\prod_{t=1}^{c}Z_{i_{t},l}\cdot\prod_{s=1}^{q-c-1}Z_{j_{s},l}\cdot
Z_{i,l}\right)$ $\displaystyle\xi_{2,i}=$
$\displaystyle\frac{q-c}{a_{n}}\sum_{l=1}^{p}\sum^{*}_{\lfloor
na_{2}\rfloor+1\leq i_{1},\cdots,i_{c}\leq\lfloor
nr_{2}\rfloor}\sum^{*}_{\lfloor nr_{2}\rfloor+1\leq j_{1},\cdots,j_{q-c-1}\leq
i-1}\left(\prod_{t=1}^{c}Z_{i_{t},l}\cdot\prod_{s=1}^{q-c-1}Z_{j_{s},l}\cdot
Z_{i,l}\right),$
and
$\widetilde{\xi}_{n,i}=\left\\{\begin{array}[]{ll}{\alpha_{1}\xi_{1,i}}&{\text{if }\lfloor nr_{1}\rfloor+q-c\leq i\leq\lfloor nr_{2}\rfloor+q-c-1,}\\\ {\alpha_{1}\xi_{1,i}+\alpha_{2}\xi_{2,i}}&{\text{if }\lfloor nr_{2}\rfloor+q-c\leq i\leq\lfloor nb_{1}\rfloor,}\\\ {\alpha_{2}\xi_{2,i}}&{\text{if }\lfloor nb_{1}\rfloor+1\leq i\leq\lfloor nb_{2}\rfloor.}\end{array}\right.$
Define $\mathcal{F}_{i}=\sigma\left(Z_{i},Z_{i-1},\cdots\right)$. Since $\mathbb{E}[Z_{1}]=0$ under the null, $\widetilde{\xi}_{n,i}$ is a martingale difference sequence with respect to $\mathcal{F}_{i}$, and
$\frac{\alpha_{1}}{a_{n}}S_{n,q,c}\left(r_{1};\left[a_{1},b_{1}\right]\right)+\frac{\alpha_{2}}{a_{n}}S_{n,q,c}\left(r_{2},\left[a_{2},b_{2}\right]\right)=\sum_{i=\lfloor
nr_{1}\rfloor+q-c}^{\lfloor nb_{2}\rfloor}\widetilde{\xi}_{n,i}.$
To apply the martingale CLT (Theorem 35.12 in Billingsley (2008)), we need to
verify the following two conditions
$\displaystyle(1)\quad\forall\epsilon>0,\quad\sum_{i=\lfloor nr_{1}\rfloor+q-c}^{\lfloor nb_{2}\rfloor}\mathbb{E}\left[\widetilde{\xi}_{n,i}^{2}\textbf{1}\left\\{\left|\widetilde{\xi}_{n,i}\right|>\epsilon\right\\}\Big{|}\mathcal{F}_{i-1}\right]\stackrel{{\scriptstyle p}}{{\rightarrow}}0;$
$\displaystyle(2)\quad V_{n}=\sum_{i=\lfloor nr_{1}\rfloor+q-c}^{\lfloor nb_{2}\rfloor}\mathbb{E}\left[\widetilde{\xi}_{n,i}^{2}|\mathcal{F}_{i-1}\right]\stackrel{{\scriptstyle p}}{{\rightarrow}}\sigma^{2}.$
To prove (1), it suffices to show that
$\sum_{i=\lfloor nr_{1}\rfloor+q-c}^{\lfloor nb_{2}\rfloor}\mathbb{E}\left[\widetilde{\xi}_{n,i}^{4}\right]\rightarrow 0.$
Observe that
$\displaystyle\sum_{i=\lfloor nr_{1}\rfloor+q-c}^{\lfloor
nb_{2}\rfloor}\mathbb{E}\left[\widetilde{\xi}_{n,i}^{4}\right]$
$\displaystyle=$ $\displaystyle\alpha_{1}^{4}\sum_{i=\lfloor
nr_{1}\rfloor+q-c}^{\lfloor
nr_{2}\rfloor+q-c-1}\mathbb{E}\left[\xi_{1,i}^{4}\right]+\sum_{i=\lfloor
nr_{2}\rfloor+q-c}^{\lfloor
nb_{1}\rfloor}\mathbb{E}\left[\left(\alpha_{1}\xi_{1,i}+\alpha_{2}\xi_{2,i}\right)^{4}\right]+\alpha_{2}^{4}\sum_{i=\lfloor
nb_{1}\rfloor+1}^{\lfloor nb_{2}\rfloor}\mathbb{E}\left[\xi_{2,i}^{4}\right]$
$\displaystyle\leq$ $\displaystyle 8\alpha_{1}^{4}\sum_{i=\lfloor
nr_{1}\rfloor+q-c}^{\lfloor
nb_{1}\rfloor}\mathbb{E}\left[{\xi}_{1,i}^{4}\right]+8\alpha_{2}^{4}\sum_{i=\lfloor
nr_{2}\rfloor+q-c}^{\lfloor
nb_{2}\rfloor}\mathbb{E}\left[{\xi}_{2,i}^{4}\right].$
Straightforward calculations show that
$\displaystyle\mathbb{E}\left[{\xi}_{1,i}^{4}\right]$
$\displaystyle=\frac{C}{n^{2q}\|\Sigma\|_{q}^{2q}}\sum_{i_{t}^{(h)},j_{s}^{(h)}}\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}\prod_{h=1}^{4}\left(\mathbb{E}\left[\prod_{t=1}^{c}Z_{i_{t}^{(h)},l_{h}}\right]\mathbb{E}\left[\prod_{s=1}^{q-c-1}Z_{j_{s}^{(h)},l_{h}}\right]\mathbb{E}\left[Z_{i,l_{h}}\right]\right)$
$\displaystyle\lesssim\frac{1}{n^{2q}\|\Sigma\|_{q}^{2q}}n^{2(q-1)}\|\Sigma\|_{q}^{2q}=O(\frac{1}{n^{2}}).$
The same result holds for $\xi_{2,i}$. Therefore,
$\sum_{i=\lfloor nr_{1}\rfloor+q-c}^{\lfloor
nb_{2}\rfloor}\mathbb{E}\left[\widetilde{\xi}_{n,i}^{4}\right]\lesssim\sum_{i=\lfloor
nr_{1}\rfloor+q-c}^{\lfloor
nb_{1}\rfloor}\mathbb{E}\left[{\xi}_{1,i}^{4}\right]+\sum_{i=\lfloor
nr_{2}\rfloor+q-c}^{\lfloor
nb_{2}\rfloor}\mathbb{E}\left[{\xi}_{2,i}^{4}\right]=O(\frac{1}{n})\rightarrow
0.$
As regards (2), we decompose $V_{n}$ as follows,
$\displaystyle\sum_{i=\lfloor nr_{1}\rfloor+q-c}^{\lfloor
nb_{2}\rfloor}\mathbb{E}\left[\widetilde{\xi}_{n,i}^{2}|\mathbb{F}_{i-1}\right]$
$\displaystyle=$ $\displaystyle\alpha_{1}^{2}\sum_{i=\lfloor
nr_{1}\rfloor+q-c}^{\lfloor
nr_{2}\rfloor+q-c-1}\mathbb{E}\left[\xi_{1,i}^{2}|\mathbb{F}_{i-1}\right]+\sum_{i=\lfloor
nr_{2}\rfloor+q-c}^{\lfloor
nb_{1}\rfloor}\mathbb{E}\left[\left(\alpha_{1}\xi_{1,i}+\alpha_{2}\xi_{2,i}\right)^{2}|\mathbb{F}_{i-1}\right]+\alpha_{2}^{2}\sum_{i=\lfloor
nb_{1}\rfloor+1}^{\lfloor
nb_{2}\rfloor}\mathbb{E}\left[\xi_{2,i}^{2}|\mathbb{F}_{i-1}\right]$
$\displaystyle=$ $\displaystyle\alpha_{1}^{2}\sum_{i=\lfloor
nr_{1}\rfloor+q-c}^{\lfloor
nb_{1}\rfloor}\mathbb{E}\left[{\xi}_{1,i}^{2}|\mathbb{F}_{i-1}\right]+\alpha_{2}^{2}\sum_{i=\lfloor
nr_{2}\rfloor+q-c}^{\lfloor
nb_{2}\rfloor}\mathbb{E}\left[{\xi}_{2,i}^{2}|\mathbb{F}_{i-1}\right]+2\alpha_{1}\alpha_{2}\sum_{i=\lfloor
nr_{2}\rfloor+q-c}^{\lfloor
nb_{1}\rfloor}\mathbb{E}\left[\xi_{1,i}\xi_{2,i}|\mathbb{F}_{i-1}\right]$
$\displaystyle=$
$\displaystyle:\alpha_{1}^{2}V_{1,n}+\alpha_{2}^{2}V_{2,n}+2\alpha_{1}\alpha_{2}V_{3,n}.$
We still focus on the case $a_{1}<a_{2}<r_{1}<r_{2}<b_{1}<b_{2}$. Note that
$\displaystyle\sum_{i=\lfloor nr_{1}\rfloor+q-c}^{\lfloor
nb_{1}\rfloor}\mathbb{E}\left[{\xi}_{1,i}^{2}|\mathbb{F}_{i-1}\right]$
$\displaystyle=$
$\displaystyle\frac{(q-c)^{2}}{n^{q}\|\Sigma\|_{q}^{q}}c!(q-c-1)!\sum_{i=\lfloor
nr_{1}\rfloor+q-c}^{\lfloor
nb_{1}\rfloor}\sum_{i_{t}^{(h)},j_{s}^{(h)}}\sum_{l_{1},l_{2}=1}^{p}\Sigma_{l_{1}l_{2}}\prod_{h=1}^{2}\left(\prod_{t=1}^{c}Z_{i_{t}^{(h)},l_{h}}\cdot\prod_{s=1}^{q-c-1}Z_{j_{s}^{(h)},l_{h}}\right)$
$\displaystyle=$
$\displaystyle\frac{(q-c)^{2}}{n^{q}\|\Sigma\|_{q}^{q}}c!(q-c-1)!\sum_{i=\lfloor
nr_{1}\rfloor+q-c}^{\lfloor
nb_{1}\rfloor}\sum_{i_{t}^{(h)},j_{s}^{(h)}}^{(1)}\sum_{l_{1},l_{2}=1}^{p}\Sigma_{l_{1}l_{2}}\prod_{h=1}^{2}\left(\prod_{t=1}^{c}Z_{i_{t}^{(h)},l_{h}}\cdot\prod_{s=1}^{q-c-1}Z_{j_{s}^{(h)},l_{h}}\right)$
$\displaystyle+\frac{(q-c)^{2}}{n^{q}\|\Sigma\|_{q}^{q}}c!(q-c-1)!\sum_{i=\lfloor
nr_{1}\rfloor+q-c}^{\lfloor
nb_{1}\rfloor}\sum_{i_{t}^{(h)},j_{s}^{(h)}}^{(2)}\sum_{l_{1},l_{2}=1}^{p}\Sigma_{l_{1}l_{2}}\prod_{h=1}^{2}\left(\prod_{t=1}^{c}Z_{i_{t}^{(h)},l_{h}}\cdot\prod_{s=1}^{q-c-1}Z_{j_{s}^{(h)},l_{h}}\right)$
$\displaystyle=$ $\displaystyle:V_{1,n}^{(1)}+V_{1,n}^{(2)},$
where $\displaystyle\sum_{i_{t}^{(h)},j_{s}^{(h)}}^{(1)}$ denotes the
summation over terms s.t.
$i_{t}^{(1)}=i_{t}^{(2)},j_{s}^{(1)}=j_{s}^{(2)},\forall t,s$, and
$\displaystyle\sum_{i_{t}^{(h)},j_{s}^{(h)}}^{(2)}$ is over the other terms.
It is straightforward to see that $\mathbb{E}[V_{1,n}^{(2)}]=0$ as $Z_{i}$’s
are independent, and
$\displaystyle\mathbb{E}[V_{1,n}^{(1)}]$
$\displaystyle=\frac{(q-c)^{2}}{n^{q}\|\Sigma\|_{q}^{q}}c!(q-c-1)!n^{c}(r_{1}-a_{1})^{c}\sum_{k=1}^{\lfloor nb_{1}\rfloor-\lfloor nr_{1}\rfloor}k^{q-c-1}\sum_{l_{1},l_{2}=1}^{p}\Sigma_{l_{1}l_{2}}^{q}+o(1)$
$\displaystyle=c!(q-c)!(r_{1}-a_{1})^{c}(b_{1}-r_{1})^{q-c}+o(1).$
Note that
$\displaystyle\mathbb{E}[(V_{1,n}^{(1)})^{2}]=$
$\displaystyle\frac{(q-c)^{4}}{n^{2q}\|\Sigma\|_{q}^{2q}}[c!(q-c-1)!]^{2}\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}\sum_{i=\lfloor
nr_{1}\rfloor+q-c}^{\lfloor nb_{1}\rfloor}\sum_{j=\lfloor
nr_{1}\rfloor+q-c}^{\lfloor
nb_{1}\rfloor}\sum_{i_{t}^{(h)},j_{s}^{(h)}}^{*}\Bigg{[}\Sigma_{l_{1}l_{2}}\Sigma_{l_{3}l_{4}}$
$\displaystyle\prod_{h=1}^{4}\left(\prod_{t=1}^{c}Z_{i_{t}^{(h)},l_{h}}\cdot\prod_{s=1}^{q-c-1}Z_{j_{s}^{(h)},l_{h}}\right)\Bigg{]}+o(1),$
where the summation $\sum_{i_{t}^{(h)},j_{s}^{(h)}}^{*}$ is over the range of
$i_{t}^{(h)},j_{s}^{(h)},h=1,2,3,4$ s.t.
$i_{t}^{(1)}=i_{t}^{(2)},j_{s}^{(1)}=j_{s}^{(2)},i_{t}^{(3)}=i_{t}^{(4)},j_{s}^{(3)}=j_{s}^{(4)},\forall
t,s.$ Note that the RHS can be further decomposed into two parts. The first part corresponds to the summation of the terms s.t. $\\{i^{(h)}_{t},j^{(h)}_{s}\\}$ for $h=1$ has no intersection with that for $h=3$, which has order
$\displaystyle\frac{(q-c)^{4}}{n^{2q}\|\Sigma\|_{q}^{2q}}[c!(q-c-1)!]^{2}n^{2c}(r_{1}-a_{1})^{2c}\sum_{i=1}^{\lfloor
nb_{1}\rfloor-\lfloor nr_{1}\rfloor}i^{q-c-1}\sum_{j=1}^{\lfloor
nb_{1}\rfloor-\lfloor
nr_{1}\rfloor}j^{q-c-1}\sum_{l_{1},l_{2},l_{3},l_{4}}^{p}\Sigma_{l_{1}l_{2}}^{q}\Sigma_{l_{3}l_{4}}^{q}$
$\displaystyle=$
$\displaystyle[c!(q-c)!(r_{1}-a_{1})^{c}(b_{1}-r_{1})^{q-c}]^{2}+o(1)=\mathbb{E}^{2}[V_{1,n}^{(1)}]+o(1).$
The second part corresponds to the summation of the terms s.t. $\\{i^{(h)}_{t},j^{(h)}_{s}\\}$ for $h=1$ has at least one intersection with that for $h=3$. Since at least one “degree of freedom” for $n$ is lost, the summation still has the form
$\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}\mathbb{E}\left[Z_{i_{1}^{(1)},l_{1}}\cdots
Z_{i_{q}^{(1)},l_{1}}\cdots Z_{i_{1}^{(h)},l_{h}}\cdots
Z_{i_{q}^{(h)},l_{h}}\right]$ as in Lemma 6.1-(2), which has order
$O(\|\Sigma\|_{q}^{2q})$. We can conclude that the second part has order
$O(\frac{1}{n})$, and hence goes to 0.
Therefore,
$\limsup\left(\mathbb{E}[(V_{1,n}^{(1)})^{2}]-\mathbb{E}^{2}[V_{1,n}^{(1)}]\right)\leq
0$, which implies $\lim{\mbox{var}}(V_{1,n}^{(1)})=0$. Therefore, we can
conclude that $V_{1,n}^{(1)}\stackrel{{\scriptstyle
p}}{{\rightarrow}}\lim\mathbb{E}[V_{1,n}^{(1)}]=c!(q-c)!(r_{1}-a_{1})^{c}(b_{1}-r_{1})^{(q-c)}$.
It remains to show that $V_{1,n}^{(2)}\stackrel{{\scriptstyle
p}}{{\rightarrow}}0$.
It suffices to show that
$\mathbb{E}\left[(V_{1,n}^{(2)})^{2}\right]\rightarrow 0$. Based on the same
argument as before, by applying Lemma 6.1-(2) we know that every kind of
summation has the same order $O(\frac{1}{n})$ no matter how
$i_{t}^{(h)},j_{s}^{(h)},i,j$ intersect with each other. Therefore, the terms in the expansion of $\mathbb{E}\left[(V_{1,n}^{(2)})^{2}\right]$ for which $n$ has the highest degree of freedom dominate. For these terms, each index in
$i_{t}^{(h)},j_{s}^{(h)},i,j$ should have exactly one pair. The number of
these terms is of order $O(n^{2q})$. The summation takes the form
$\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}(\Sigma_{l_{1}l_{2}}^{d}\Sigma_{l_{3}l_{4}}^{d}\Sigma_{l_{1}l_{4}}^{e}\Sigma_{l_{2}l_{3}}^{e}\Sigma_{l_{1}l_{3}}^{f}\Sigma_{l_{2}l_{4}}^{f})$,
s.t. $d>0$, $e+f>0$ and $d+e+f=q$. We need to show that it is of order
$o(\|\Sigma\|_{q}^{2q})$ to complete the proof. By symmetry, we can assume
$e>0$, and therefore $d,e\geq 1$. Note that for $q>2$,
$\displaystyle\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}(\Sigma_{l_{1}l_{2}}^{d}\Sigma_{l_{3}l_{4}}^{d}\Sigma_{l_{1}l_{4}}^{e}\Sigma_{l_{2}l_{3}}^{e}\Sigma_{l_{1}l_{3}}^{f}\Sigma_{l_{2}l_{4}}^{f})$
$\displaystyle=$
$\displaystyle\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}(\Sigma_{l_{1}l_{2}}\Sigma_{l_{2}l_{3}}\Sigma_{l_{3}l_{4}}\Sigma_{l_{4}l_{1}})(\Sigma_{l_{1}l_{2}}^{d-1}\Sigma_{l_{3}l_{4}}^{d-1}\Sigma_{l_{1}l_{4}}^{e-1}\Sigma_{l_{2}l_{3}}^{e-1}\Sigma_{l_{1}l_{3}}^{f}\Sigma_{l_{2}l_{4}}^{f})$
$\displaystyle\leq$
$\displaystyle\left[\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}|\Sigma_{l_{1}l_{2}}\Sigma_{l_{2}l_{3}}\Sigma_{l_{3}l_{4}}\Sigma_{l_{4}l_{1}}|^{q/2}\right]^{2/q}\left[\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}|\Sigma_{l_{1}l_{2}}^{d-1}\Sigma_{l_{3}l_{4}}^{d-1}\Sigma_{l_{1}l_{4}}^{e-1}\Sigma_{l_{2}l_{3}}^{e-1}\Sigma_{l_{1}l_{3}}^{f}\Sigma_{l_{2}l_{4}}^{f}|^{q/(q-2)}\right]^{1-2/q}$
$\displaystyle\lesssim$ $\displaystyle
o(\|\Sigma\|_{q}^{4})\cdot\|\Sigma\|_{q}^{2q-4}=o(\|\Sigma\|_{q}^{2q}),$
where we have used Hölder’s inequality, along with A.1 and the fact that
$\displaystyle\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}|\Sigma_{l_{1}l_{2}}^{d-1}\Sigma_{l_{3}l_{4}}^{d-1}\Sigma_{l_{1}l_{4}}^{e-1}\Sigma_{l_{2}l_{3}}^{e-1}\Sigma_{l_{1}l_{3}}^{f}\Sigma_{l_{2}l_{4}}^{f}|^{q/(q-2)}$
$\displaystyle\lesssim$
$\displaystyle\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}(\Sigma_{l_{1}l_{2}}^{q}\Sigma_{l_{3}l_{4}}^{q}+\Sigma_{l_{1}l_{3}}^{q}\Sigma_{l_{2}l_{4}}^{q}+\Sigma_{l_{1}l_{4}}^{q}\Sigma_{l_{2}l_{3}}^{q})=3\|\Sigma\|_{q}^{2q}.$
When $q=2$, it must be the case that $d=e=1$; the term becomes $\sum_{l_{1},l_{2},l_{3},l_{4}=1}^{p}|\Sigma_{l_{1}l_{2}}\Sigma_{l_{2}l_{3}}\Sigma_{l_{3}l_{4}}\Sigma_{l_{4}l_{1}}|$, and directly applying A.1 yields the desired order.
We can then conclude that $\mathbb{E}[(V_{1,n}^{(2)})^{2}]\rightarrow 0$ and hence $V_{1,n}^{(2)}\stackrel{{\scriptstyle p}}{{\rightarrow}}0$. Combining what we have proved so far, we obtain $V_{1,n}\stackrel{{\scriptstyle p}}{{\rightarrow}}c!(q-c)!(r_{1}-a_{1})^{c}(b_{1}-r_{1})^{q-c}$.
A similar argument shows that
$V_{2,n}\stackrel{{\scriptstyle
p}}{{\rightarrow}}c!(q-c)!(r_{2}-a_{2})^{c}(b_{2}-r_{2})^{q-c},\quad
V_{3,n}\stackrel{{\scriptstyle
p}}{{\rightarrow}}c!(q-c)!(r_{1}-a_{2})^{c}(b_{1}-r_{2})^{q-c}.$
Therefore, we conclude that
$\displaystyle V_{n}\stackrel{{\scriptstyle p}}{{\rightarrow}}$
$\displaystyle\alpha_{1}^{2}c!(q-c)!(r_{1}-a_{1})^{c}(b_{1}-r_{1})^{q-c}+\alpha_{2}^{2}c!(q-c)!(r_{2}-a_{2})^{c}(b_{2}-r_{2})^{q-c}$
$\displaystyle+2\alpha_{1}\alpha_{2}c!(q-c)!(r_{1}-a_{2})^{c}(b_{1}-r_{2})^{q-c},$
which completes the proof. $\diamondsuit$
We can generalize the above lemma to the case where the $c_{i}$ and $q_{i}$ are
not identical.
###### Lemma 6.4.
Fix $q_{1},c_{1},q_{2},c_{2}$. For any $0\leq a_{1}<r_{1}<b_{1}\leq 1$, $0\leq
a_{2}<r_{2}<b_{2}\leq 1$, and any $\alpha_{1},\alpha_{2}\in\mathbb{R}$, we have
$\frac{\alpha_{1}}{a_{n}}S_{n,q_{1},c_{1}}\left(r_{1};\left[a_{1},b_{1}\right]\right)+\frac{\alpha_{2}}{a_{n}}S_{n,q_{2},c_{2}}\left(r_{2},\left[a_{2},b_{2}\right]\right)\stackrel{{\scriptstyle\mathcal{D}}}{{\longrightarrow}}\alpha_{1}Q_{q_{1},c_{1}}\left(r_{1};\left[a_{1},b_{1}\right]\right)+\alpha_{2}Q_{q_{2},c_{2}}\left(r_{2};\left[a_{2},b_{2}\right]\right),$
where $Q_{q_{1},c_{1}}$ and $Q_{q_{2},c_{2}}$ are independent Gaussian
processes if $q_{1}\not=q_{2}$, or $(c_{1}-c_{2})(r_{1}-r_{2})<0$, or
$r_{1}=r_{2}$ with $c_{1}\not=c_{2}$. When
$q_{1}=q_{2}=q$ and $(c_{1}-c_{2})(r_{1}-r_{2})\geq 0$, we have
$\operatorname{cov}\left(Q_{q,c_{1}}(r_{1};[a_{1},b_{1}]),Q_{q,c_{2}}(r_{2};[a_{2},b_{2}])\right)=\binom{C}{c}c!(q-C)!(r-A)^{c}(R-r)^{C-c}(b-R)^{q-C}.$
###### Proof of Lemma 6.4.
We use the same notation as in the proof of the previous lemma; the argument is
similar and again applies the martingale CLT after decomposing $V_{n}$ into two
parts. Since the argument there carries over directly, the only additional work
is computing the mean.
To prove the second statement, we take
$c_{1}<c_{2}$ and $a_{1}<a_{2}<r_{1}<r_{2}<b_{1}<b_{2}$ as a representative
case, since the proofs for the other cases are similar. With the same technique
as before, it can be shown that
$\displaystyle\mathbb{E}[V_{n}]\rightarrow$
$\displaystyle\alpha_{1}^{2}c_{1}!(q-c_{1})!(r_{1}-a_{1})^{c_{1}}(b_{1}-r_{1})^{q-c_{1}}+\alpha_{2}^{2}c_{2}!(q-c_{2})!(r_{2}-a_{2})^{c_{2}}(b_{2}-r_{2})^{q-c_{2}}$
$\displaystyle+2\alpha_{1}\alpha_{2}\binom{c_{2}}{c_{1}}c_{1}!(q-c_{1})!(r_{1}-a_{2})^{c_{1}}(r_{2}-r_{1})^{c_{2}-c_{1}}(b_{1}-r_{2})^{q-c_{2}}.$
To derive the convergence in the statement, we can follow the same argument as
before to show that the variance tends to 0; therefore we obtain convergence in
distribution with the desired covariance structure.
As for the first statement, it is straightforward to see that the expectation
of the cross term (corresponding to $\alpha_{1}\alpha_{2}$) is 0 in each
of the cases in the first statement, which, by asymptotic normality, implies
that the Gaussian processes are independent. $\diamondsuit$
Now we are ready to complete the proof of Theorem 2.1.
###### Proof of Theorem 2.1.
The tightness is guaranteed by Lemma 6.2 and applying Lemma 7.1 in Kley et al.
(2016) with
$\Phi(x)=x^{4},T=T_{n},d(u,u^{\prime})=\|u-u^{\prime}\|^{3/4},\bar{\eta}=n^{-3/4}/2$.
We omit the detailed proof as the argument is similar to the tightness proof
in Wang et al. (2019). Lemma 6.4 provides the finite-dimensional convergence
of $S_{n,q,c}$, whose asymptotic covariance structure matches that of $Q_{q,c}$
after normalization. Therefore, the desired process convergence follows.
$\diamondsuit$
###### Proof of Theorem 2.3.
Let $(s,k,m)=(\lfloor an\rfloor+1,\lfloor rn\rfloor,\lfloor bn\rfloor)$ and
define
$D_{n,q}^{Z}(r;a,b)=\sum_{l=1}^{p}\sum^{*}_{s\leq i_{1},\ldots,i_{q}\leq
k}\sum^{*}_{k+1\leq j_{1},\ldots,j_{q}\leq
m}\left(Z_{i_{1},l}-Z_{j_{1},l}\right)\cdots\left(Z_{i_{q},l}-Z_{j_{q},l}\right).$
Recall that Theorem 2.2 holds for $D^{Z}_{n,q}$ since under the null
$D_{n,q}^{Z}=D_{n,q}$.
Now consider the alternative, with change-point location $k_{1}=\lfloor
n\tau_{1}\rfloor$ and change of mean $\Delta_{n}$. Suppose without loss of
generality that $s<k_{1}<k<m$.
$\displaystyle D_{n,q}(r;a,b)=$ $\displaystyle\sum_{l=1}^{p}\sum^{*}_{s\leq
i_{1},\ldots,i_{q}\leq k}\sum^{*}_{k+1\leq j_{1},\ldots,j_{q}\leq
m}\left(X_{i_{1},l}-X_{j_{1},l}\right)\cdots\left(X_{i_{q},l}-X_{j_{q},l}\right)$
$\displaystyle=$ $\displaystyle q!\sum_{l=1}^{p}\sum_{s\leq
i_{1}<\ldots<i_{q}\leq k}\sum^{*}_{k+1\leq j_{1},\ldots,j_{q}\leq
m}\left(X_{i_{1},l}-X_{j_{1},l}\right)\cdots\left(X_{i_{q},l}-X_{j_{q},l}\right)$
$\displaystyle=$ $\displaystyle q!\sum_{l=1}^{p}\sum_{s\leq
i_{1}<\ldots<i_{q}\leq k_{1}}\sum^{*}_{k+1\leq j_{1},\ldots,j_{q}\leq
m}\left(Z_{i_{1},l}+\delta_{n,l}-Z_{j_{1},l}\right)\cdots\left(Z_{i_{q},l}+\delta_{n,l}-Z_{j_{q},l}\right)$
$\displaystyle+q!\sum_{l=1}^{p}\sum_{c=1}^{q-1}\Big{[}\sum_{s\leq
i_{1}<\ldots<i_{c}\leq k_{1}<i_{c+1}<\ldots<i_{q}\leq k}\sum^{*}_{k+1\leq
j_{1},\ldots,j_{q}\leq m}$
$\displaystyle\quad(Z_{i_{1},l}+\delta_{n,l}-Z_{j_{1},l})\cdots(Z_{i_{c},l}+\delta_{n,l}-Z_{j_{c},l})(Z_{i_{c+1},l}-Z_{j_{c+1},l})\cdots(Z_{i_{q},l}-Z_{j_{q},l})\Big{]}$
$\displaystyle+q!\sum_{l=1}^{p}\sum_{k_{1}+1\leq i_{1}<\ldots<i_{q}\leq
k}\sum^{*}_{k+1\leq j_{1},\ldots,j_{q}\leq
m}\left(Z_{i_{1},l}-Z_{j_{1},l}\right)\cdots\left(Z_{i_{q},l}-Z_{j_{q},l}\right)$
$\displaystyle=$ $\displaystyle
D_{n,q}^{Z}+P^{k_{1}-s+1}_{q}P^{m-k}_{q}\|\Delta_{n}\|_{q}^{q}+R_{n,q}.\quad\quad(*)$
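As a sanity check on the decomposition $(*)$: in the noiseless case $Z_{i}\equiv 0$ with even $q$, both $D^{Z}_{n,q}$ and $R_{n,q}$ vanish, so $D_{n,q}$ should equal $P^{k_{1}-s+1}_{q}P^{m-k}_{q}\|\Delta_{n}\|_{q}^{q}$ exactly. The following brute-force Python snippet (ours, not part of the paper) verifies this identity on a tiny example:

```python
import itertools
import numpy as np

def falling(n, q):
    # P_q^n: number of ordered q-tuples of distinct indices from n items
    out = 1
    for t in range(q):
        out *= n - t
    return out

def D_nq(X, s, k, m, q):
    # brute-force D_{n,q}: sum over distinct i's in [s,k] and distinct j's
    # in [k+1,m] of prod_t (X_{i_t,l} - X_{j_t,l}), then summed over l
    p = len(next(iter(X.values())))
    total = 0.0
    for l in range(p):
        for iis in itertools.permutations(range(s, k + 1), q):
            for jjs in itertools.permutations(range(k + 1, m + 1), q):
                prod = 1.0
                for i, j in zip(iis, jjs):
                    prod *= X[i][l] - X[j][l]
                total += prod
    return total

# noiseless alternative: X_i = Delta_n * 1(i > k1), with s < k1 < k < m
q, s, k1, k, m = 2, 1, 3, 5, 9
delta = np.array([0.5, -0.2])
X = {i: (delta if i > k1 else np.zeros(2)) for i in range(s, m + 1)}
lhs = D_nq(X, s, k, m, q)
rhs = falling(k1 - s + 1, q) * falling(m - k, q) * np.sum(np.abs(delta) ** q)
assert abs(lhs - rhs) < 1e-10
```

Only tuples with all $i_{t}\leq k_{1}$ contribute here, each giving $(-\delta_{n,l})^{q}=\delta_{n,l}^{q}$, which is exactly the mean term in $(*)$.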
First suppose $\gamma_{n,q}\rightarrow\gamma\in[0,\infty)$, which is
equivalent to
$n^{q/2}\left\|\Delta_{n}\right\|_{q}^{q}\lesssim\|\Sigma\|_{q}^{q/2}$. It
suffices to show that in this case,
$\Big{\\{}n^{-q}a_{n,q}^{-1}D_{n,q}(\cdot;[\cdot,\cdot])\Big{\\}}\leadsto\Big{\\{}G_{q}(\cdot;[\cdot,\cdot])+\gamma
J_{q}(\cdot;[\cdot,\cdot])\Big{\\}}\text{ in
}\ell_{\infty}\left([0,1]^{3}\right).$
Since $n^{-q}a_{n,q}^{-1}D^{Z}_{n,q}(r;[a,b])$ converges to a non-degenerate
process, and
$n^{-q}a_{n,q}^{-1}P^{k_{1}-s+1}_{q}P^{m-k}_{q}\|\Delta_{n}\|_{q}^{q}=\gamma(\tau_{1}-a)^{q}(b-r)^{q}+o(1),$
it remains to show that $n^{-q}a_{n,q}^{-1}R_{n,q}\leadsto 0$.
Note that $R_{n,q}$ consists of terms, each of which is ratio-consistent with
$Cn^{2(q-c)}\sum_{l=1}^{p}\delta_{n,l}^{q-c}D_{n,c,l}(r;a,b),$
for some constant $C$ depending on $q,a,b,r$, and for $c=1,\ldots,q-1$, where
$D_{n,c,l}(r;a,b)=\sum_{s\leq i_{1}<\ldots<i_{c}\leq k}\sum^{*}_{k+1\leq
j_{1},\ldots,j_{c}\leq
m}\left(Z_{i_{1},l}-Z_{j_{1},l}\right)\cdots\left(Z_{i_{c},l}-Z_{j_{c},l}\right),$
which can be further decomposed as
$\displaystyle D_{n,c,l}(r;a,b)\asymp$
$\displaystyle\sum_{d=0}^{c}C_{d}n^{c}\sum^{*}_{s\leq i_{1}\ldots,i_{d}\leq
k}\sum^{*}_{k+1\leq j_{1},\ldots,j_{c-d}\leq
m}\left(\prod_{t=1}^{d}Z_{i_{t},l}\prod_{s=1}^{c-d}Z_{j_{s},l}\right),$
for some constants $C_{d}$ depending on $d,c,q$. Therefore, it suffices to show
$\displaystyle
n^{q-c}a_{n,q}^{-1}\sum_{l=1}^{p}\delta_{n,l}^{q-c}\sum^{*}_{s\leq
i_{1},\ldots,i_{d}\leq k}\sum^{*}_{k+1\leq j_{1},\ldots,j_{c-d}\leq
m}\left(\prod_{t=1}^{d}Z_{i_{t},l}\prod_{s=1}^{c-d}Z_{j_{s},l}\right)$
$\displaystyle=$ $\displaystyle
n^{q/2-c}\|\Sigma\|_{q}^{-q/2}\sum_{l=1}^{p}\delta_{n,l}^{q-c}\sum^{*}_{s\leq
i_{1},\ldots,i_{d}\leq k}\sum^{*}_{k+1\leq j_{1},\ldots,j_{c-d}\leq
m}\left(\prod_{t=1}^{d}Z_{i_{t},l}\prod_{s=1}^{c-d}Z_{j_{s},l}\right)\leadsto
0.$
The arguments for showing tightness and finite-dimensional convergence in the
proof of Theorem 2.1 can be applied here. More precisely, we can obtain a
moment bound similar to that in Lemma 6.2 and follow the argument there to
establish tightness, since we have
$\displaystyle
n^{4q-8c}\|\Sigma\|_{q}^{-4q}n^{4c}\sum_{l_{1},\cdots,l_{8}=1}^{p}\mathbb{E}\left[\delta_{n,l_{1}}^{q-c}\cdots\delta_{n,l_{8}}^{q-c}Z_{i_{1}^{(1)},l_{1}}\cdots
Z_{i_{c}^{(1)},l_{1}}\cdots Z_{i_{1}^{(8)},l_{8}}\cdots
Z_{i_{c}^{(8)},l_{8}}\right]$ $\displaystyle=$ $\displaystyle
n^{4(q-c)}\|\Sigma\|_{q}^{-4q}\sum_{l_{1},\cdots,l_{8}=1}^{p}\mathbb{E}\left[\delta_{n,l_{1}}^{q-c}\cdots\delta_{n,l_{8}}^{q-c}Z_{i_{1}^{(1)},l_{1}}\cdots
Z_{i_{c}^{(1)},l_{1}}\cdots Z_{i_{1}^{(8)},l_{8}}\cdots
Z_{i_{c}^{(8)},l_{8}}\right]$ $\displaystyle\lesssim$
$\displaystyle\|\Delta_{n}\|_{q}^{-8(q-c)}\|\Sigma\|_{q}^{-4c}\sum_{l_{1},\cdots,l_{8}=1}^{p}\mathbb{E}\left[\delta_{n,l_{1}}^{q-c}\cdots\delta_{n,l_{8}}^{q-c}Z_{i_{1}^{(1)},l_{1}}\cdots
Z_{i_{c}^{(1)},l_{1}}\cdots Z_{i_{1}^{(8)},l_{8}}\cdots
Z_{i_{c}^{(8)},l_{8}}\right]\lesssim 1,$
by Lemma 6.1-(1).
Furthermore, following the proof of Lemma 6.3, Lemma 6.1-(3) implies
finite-dimensional convergence to 0, since
$\displaystyle
n^{q-2c}\|\Sigma\|_{q}^{-q}n^{c}\sum_{l_{1},l_{2}=1}^{p}\delta_{n,l_{1}}^{q-c}\delta_{n,l_{2}}^{q-c}\Sigma_{l_{1}l_{2}}^{c}$
$\displaystyle=$ $\displaystyle
n^{q-c}\|\Sigma\|_{q}^{-q}\sum_{l_{1},l_{2}=1}^{p}\delta_{n,l_{1}}^{q-c}\delta_{n,l_{2}}^{q-c}\Sigma_{l_{1}l_{2}}^{c}$
$\displaystyle\lesssim$
$\displaystyle\|\Delta_{n}\|_{q}^{-2(q-c)}\|\Sigma\|_{q}^{-c}\sum_{l_{1},l_{2}=1}^{p}\delta_{n,l_{1}}^{q-c}\delta_{n,l_{2}}^{q-c}\Sigma_{l_{1}l_{2}}^{c}\rightarrow
0.$
This establishes the desired process convergence when
$\gamma_{n,q}\rightarrow\gamma<\infty$, which, together with the continuous
mapping theorem, implies the convergence of the test statistic.
When $\gamma=+\infty$, note that
$\tilde{T}_{n,q}\geq\frac{U_{n,q}(k_{1};1,n)^{2}}{W_{n,q}(k_{1};1,n)}$. Since
$k_{1}$ is the location of the change point, the denominator takes the same
value as under the null. In contrast, the numerator diverges to infinity after
normalization (with $n^{-q}a_{n,q}^{-1}$). Therefore,
we have $\tilde{T}_{n,q}\rightarrow+\infty$. $\diamondsuit$
Before proving the convergence rate of the SN-based estimator, we state the
following useful propositions.
###### Proposition 6.1.
For any $1\leq l<k<m\leq n$, $k\geq l+1$ and $m\geq k+2$, we have:
1. 1.
if $k^{*}<l$ or $k^{*}\geq m$, $U_{n,2}(k;l,m)=U_{n,2}^{Z}(k;l,m)$;
2. 2.
if $l\leq k\leq k^{*}<m$,
$\displaystyle U_{n,2}(k;l,m)=$ $\displaystyle
U_{n,2}^{Z}(k;l,m)+(k-l+1)(k-l)(m-k^{*})(m-k^{*}-1)\|\Delta_{n}\|_{2}^{2}$
$\displaystyle-2(k-l)(m-k^{*})(m-k)\sum_{i=l}^{k}\Delta_{n}^{T}Z_{i}+2(k-l)(k-l+1)(m-k^{*})\sum_{i=k+1}^{m}\Delta_{n}^{T}Z_{i}$
$\displaystyle+2(k-l)(m-k^{*})\sum_{i=l}^{k}\Delta_{n}^{T}Z_{i}-2(k-l)(k-l+1)\sum_{i=k^{*}+1}^{m}\Delta_{n}^{T}Z_{i};$
3. 3.
if $l\leq k^{*}\leq k<m$,
$\displaystyle U_{n,2}(k;l,m)=$ $\displaystyle
U_{n,2}^{Z}(k;l,m)+(k^{*}-l+1)(k^{*}-l)(m-k)(m-k-1)\|\Delta_{n}\|_{2}^{2}$
$\displaystyle-2(k^{*}-l+1)(m-k)(m-k-1)\sum_{i=l}^{k}\Delta_{n}^{T}Z_{i}+2(m-k-1)(k^{*}-l+1)(k-l+1)\sum_{i=k+1}^{m}\Delta_{n}^{T}Z_{i}$
$\displaystyle+2(m-k-1)(m-k)\sum_{i=l}^{k^{*}}\Delta_{n}^{T}Z_{i}-2(m-k-1)(k^{*}-l+1)\sum_{i=k+1}^{m}\Delta_{n}^{T}Z_{i}.$
Let $\epsilon_{n}=n\gamma_{n,2}^{-1/4+\kappa}$. We have the following result.
###### Proposition 6.2.
Under Assumption 3.1,
1. 1.
$P\left(\sup_{k\in\Omega_{n}}U_{n,2}(k;1,n)^{2}-U_{n,2}(k^{*};1,n)^{2}\geq
0\right)\rightarrow 0$;
2. 2.
$P\left(W_{n,2}(k^{*};1,n)-\inf_{k\in\Omega_{n}}W_{n,2}(k;1,n)\geq
0\right)\rightarrow 0$,
where $\Omega_{n}=\\{k:|k-k^{*}|>\epsilon_{n}\\}$.
Now we are ready to prove the convergence rate for SN-based statistic
$\hat{\tau}$.
###### Proof of Theorem 3.1.
Due to the fact that $\hat{k}$ is the global maximizer, we have
$\displaystyle 0$
$\displaystyle\leq\frac{U_{n,2}(\hat{k};1,n)^{2}}{W_{n,2}(\hat{k};1,n)}-\frac{U_{n,2}(k^{*};1,n)^{2}}{W_{n,2}(k^{*};1,n)}$
$\displaystyle=\frac{U_{n,2}(\hat{k};1,n)^{2}}{W_{n,2}(\hat{k};1,n)}-\frac{U_{n,2}(k^{*};1,n)^{2}}{W_{n,2}(\hat{k};1,n)}+\frac{U_{n,2}(k^{*};1,n)^{2}}{W_{n,2}(\hat{k};1,n)}-\frac{U_{n,2}(k^{*};1,n)^{2}}{W_{n,2}(k^{*};1,n)}$
$\displaystyle=\frac{1}{W_{n,2}(\hat{k};1,n)}(U_{n,2}(\hat{k};1,n)^{2}-U_{n,2}(k^{*};1,n)^{2})+\frac{U_{n,2}(k^{*};1,n)^{2}}{W_{n,2}(\hat{k};1,n)W_{n,2}(k^{*};1,n)}(W_{n,2}(k^{*};1,n)-W_{n,2}(\hat{k};1,n)).$
Since $U_{n,2}(k^{*};1,n)^{2}$, $W_{n,2}(\hat{k};1,n)$ and
$W_{n,2}(k^{*};1,n)$ are all strictly positive almost surely, we can then
conclude that $U_{n,2}(\hat{k};1,n)^{2}-U_{n,2}(k^{*};1,n)^{2}\geq 0$ or
$W_{n,2}(k^{*};1,n)-W_{n,2}(\hat{k};1,n)\geq 0$. Define
$\Omega_{n}=\\{k:|k-k^{*}|>\epsilon_{n}\\}$. If $\hat{k}\in\Omega_{n}$, then
there exists at least one $k\in\Omega_{n}$ such that
$U_{n,2}(k;1,n)^{2}-U_{n,2}(k^{*};1,n)^{2}\geq 0$ or
$W_{n,2}(k^{*};1,n)-W_{n,2}(k;1,n)\geq 0$. This implies
$P(\hat{k}\in\Omega_{n})\leq
P\left(\sup_{k\in\Omega_{n}}U_{n,2}(k;1,n)^{2}-U_{n,2}(k^{*};1,n)^{2}\geq
0\right)+P\left(W_{n,2}(k^{*};1,n)-\inf_{k\in\Omega_{n}}W_{n,2}(k;1,n)\geq
0\right).$
By Proposition 6.2, it is straightforward to see that
$P(\hat{k}\in\Omega_{n})\rightarrow 0$, and this completes the proof.
$\diamondsuit$
###### Proof of Proposition 6.1.
If $k^{*}<l$ or $k^{*}\geq m$, then the $\mathbb{E}[X_{i}]$, $i=l,\ldots,m$,
are all identical. This implies that $U_{n,2}(k;l,m)=\sum_{l\leq i_{1}\neq
i_{2}\leq k}\sum_{k+1\leq j_{1}\neq j_{2}\leq
m}(X_{i_{1}}-X_{j_{1}})^{T}(X_{i_{2}}-X_{j_{2}})=\sum_{l\leq i_{1}\neq
i_{2}\leq k}\sum_{k+1\leq j_{1}\neq j_{2}\leq
m}(Z_{i_{1}}-Z_{j_{1}})^{T}(Z_{i_{2}}-Z_{j_{2}})=U_{n,2}^{Z}(k;l,m)$.
When $l\leq k^{*}<m$, there are two scenarios depending on the value of $k$.
If $k\leq k^{*}$, note that $\mathbb{E}[X_{i}]=\Delta_{n}$ for any $i>k^{*}$
and zero otherwise, then by straightforward calculation we have
$\displaystyle U_{n,2}(k;l,m)=\sum_{l\leq i_{1}\neq i_{2}\leq k}\sum_{k+1\leq
j_{1}\neq j_{2}\leq m}(X_{i_{1}}-X_{j_{1}})^{T}(X_{i_{2}}-X_{j_{2}})$
$\displaystyle=$ $\displaystyle\sum_{l\leq i_{1}\neq i_{2}\leq k}\sum_{k+1\leq
j_{1}\neq j_{2}\leq
m}(Z_{i_{1}}-Z_{j_{1}}-\mathbb{E}[X_{j_{1}}])^{T}(Z_{i_{2}}-Z_{j_{2}}-\mathbb{E}[X_{j_{2}}])$
$\displaystyle=$ $\displaystyle
U_{n,2}^{Z}(k;l,m)+(k-l+1)(k-l)(m-k^{*})(m-k^{*}-1)\|\Delta_{n}\|_{2}^{2}-2(k-l)(m-k^{*})\sum_{i=l}^{k}\sum_{j=k+1}^{k^{*}}\Delta_{n}^{T}(Z_{i}-Z_{j})$
$\displaystyle-2(k-l)(m-k^{*}-1)\sum_{i=l}^{k}\sum_{j=k^{*}+1}^{m}\Delta_{n}^{T}(Z_{i}-Z_{j})$
$\displaystyle=$ $\displaystyle
U_{n,2}^{Z}(k;l,m)+(k-l+1)(k-l)(m-k^{*})(m-k^{*}-1)\|\Delta_{n}\|_{2}^{2}-2(k-l)(m-k^{*})(m-k)\sum_{i=l}^{k}\Delta_{n}^{T}Z_{i}$
$\displaystyle+2(k-l)(m-k^{*})(k-l+1)\sum_{i=k+1}^{m}\Delta_{n}^{T}Z_{i}+2(k-l)(m-k^{*})\sum_{i=l}^{k}\Delta_{n}^{T}Z_{i}$
$\displaystyle-2(k-l)(k-l+1)\sum_{i=k^{*}+1}^{m}\Delta_{n}^{T}Z_{i}.$
Similarly if $k\geq k^{*}$ we have
$\displaystyle U_{n,2}(k;l,m)=\sum_{l\leq i_{1}\neq i_{2}\leq k}\sum_{k+1\leq
j_{1}\neq j_{2}\leq m}(X_{i_{1}}-X_{j_{1}})^{T}(X_{i_{2}}-X_{j_{2}})$
$\displaystyle=$ $\displaystyle\sum_{l\leq i_{1}\neq i_{2}\leq k}\sum_{k+1\leq
j_{1}\neq j_{2}\leq
m}(Z_{i_{1}}-Z_{j_{1}}+\mathbb{E}[X_{i_{1}}]-\Delta_{n})^{T}(Z_{i_{2}}-Z_{j_{2}}+\mathbb{E}[X_{i_{2}}]-\Delta_{n})$
$\displaystyle=$ $\displaystyle
U_{n,2}^{Z}(k;l,m)+(k^{*}-l+1)(k^{*}-l)(m-k)(m-k-1)\|\Delta_{n}\|_{2}^{2}-2(m-k-1)(k^{*}-l)\sum_{i=l}^{k^{*}}\sum_{j=k+1}^{m}\Delta_{n}^{T}(Z_{i}-Z_{j})$
$\displaystyle-2(m-k-1)(k^{*}-l+1)\sum_{i=k^{*}+1}^{k}\sum_{j=k+1}^{m}\Delta_{n}^{T}(Z_{i}-Z_{j})$
$\displaystyle=$ $\displaystyle
U_{n,2}^{Z}(k;l,m)+(k^{*}-l+1)(k^{*}-l)(m-k)(m-k-1)\|\Delta_{n}\|_{2}^{2}-2(k^{*}-l+1)(m-k)(m-k-1)\sum_{i=l}^{k}\Delta_{n}^{T}Z_{i}$
$\displaystyle+2(m-k-1)(k^{*}-l+1)(k-l+1)\sum_{i=k+1}^{m}\Delta_{n}^{T}Z_{i}+2(m-k-1)(m-k)\sum_{i=l}^{k^{*}}\Delta_{n}^{T}Z_{i}$
$\displaystyle-2(m-k-1)(k^{*}-l+1)\sum_{i=k+1}^{m}\Delta_{n}^{T}Z_{i}.$
$\diamondsuit$
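As a quick sanity check on the mean term in case 2 of Proposition 6.1, one can evaluate $U_{n,2}(k;1,n)$ by brute force in the noiseless case $Z_{i}\equiv 0$, where the decomposition predicts exactly $k(k-1)(n-k^{*})(n-k^{*}-1)\|\Delta_{n}\|_{2}^{2}$. The following Python snippet (ours, not from the paper) confirms this:

```python
import itertools
import numpy as np

def U_n2(X, k, l, m):
    # brute-force U_{n,2}(k; l, m): sum over ordered pairs l <= i1 != i2 <= k
    # and k+1 <= j1 != j2 <= m of (X_{i1} - X_{j1})^T (X_{i2} - X_{j2});
    # indices are 1-based, matching the text
    total = 0.0
    for i1, i2 in itertools.permutations(range(l, k + 1), 2):
        for j1, j2 in itertools.permutations(range(k + 1, m + 1), 2):
            total += (X[i1] - X[j1]) @ (X[i2] - X[j2])
    return total

n, k_star, k = 8, 5, 4            # split point k = 4 <= k* = 5 (case 2)
delta = np.array([0.7, -0.3])     # mean shift Delta_n
# noiseless data: X_i = Delta_n * 1(i > k*), i.e. Z_i = 0
X = {i: (delta if i > k_star else np.zeros(2)) for i in range(1, n + 1)}
lhs = U_n2(X, k, 1, n)
rhs = k * (k - 1) * (n - k_star) * (n - k_star - 1) * (delta @ delta)
assert abs(lhs - rhs) < 1e-10
```

With $Z\equiv 0$ all the cross terms in the decomposition vanish, so only the mean term survives, which is what the assertion checks.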
###### Proof of Proposition 6.2.
To show the first result, we first assume $k<k^{*}-\epsilon_{n}$. Then
according to Proposition 6.1,
$\displaystyle U_{n,2}(k;1,n)$
$\displaystyle=U_{n,2}^{Z}(k;1,n)+k(k-1)(n-k^{*})(n-k^{*}-1)\|\Delta_{n}\|_{2}^{2}-2(k-1)(n-k^{*})(n-k)\sum_{i=1}^{k}\Delta_{n}^{T}Z_{i}$
$\displaystyle+2k(k-1)(n-k^{*})\sum_{i=k+1}^{n}\Delta_{n}^{T}Z_{i}+2(k-1)(n-k^{*})\sum_{i=1}^{k}\Delta_{n}^{T}Z_{i}-2k(k-1)\sum_{i=k^{*}+1}^{n}\Delta_{n}^{T}Z_{i}.$
Similarly we have
$\displaystyle U_{n,2}(k^{*};1,n)$
$\displaystyle=U_{n,2}^{Z}(k^{*};1,n)+k^{*}(k^{*}-1)(n-k^{*})(n-k^{*}-1)\|\Delta_{n}\|_{2}^{2}-2(k^{*}-1)(n-k^{*})(n-k^{*}-1)\sum_{i=1}^{k^{*}}\Delta_{n}^{T}Z_{i}$
$\displaystyle+2k^{*}(k^{*}-1)(n-k^{*}-1)\sum_{i=k^{*}+1}^{n}\Delta_{n}^{T}Z_{i}.$
It is easy to verify that
$\mathbb{E}[U_{n,2}(k;1,n)]=k(k-1)(n-k^{*})(n-k^{*}-1)\|\Delta_{n}\|_{2}^{2}$,
for $k\leq k^{*}$. Furthermore, by Theorem 2.1 in Wang et al. (2019) and the
argument therein, we have
$\sup_{k=2,...,n-2}|U_{n,2}^{Z}(k;1,n)|=O_{p}(n^{3}\|\Sigma\|_{F})=o_{p}(n^{3.5}\sqrt{\|\Sigma\|_{F}}\|\Delta_{n}\|_{2}),$
since $\sqrt{\|\Sigma\|_{F}}=o(\sqrt{n}\|\Delta_{n}\|_{2})$ by Assumption 3.1
(3), and
$\sup_{1\leq a\leq b\leq
n}\left|\sum_{i=a}^{b}\Delta_{n}^{T}Z_{i}\right|=O_{p}(\sqrt{n}\sqrt{\Delta_{n}^{T}\Sigma\Delta_{n}})\leq
O_{p}(\sqrt{n\|\Sigma\|_{2}}\|\Delta_{n}\|_{2})\leq
O_{p}(\sqrt{n\|\Sigma\|_{F}}\|\Delta_{n}\|_{2}).$
These imply that
$\displaystyle U_{n,2}(k^{*};1,n)=$ $\displaystyle
k^{*}(k^{*}-1)(n-k^{*})(n-k^{*}-1)\|\Delta_{n}\|_{2}^{2}+O_{p}(n^{3.5}\|\Delta_{n}\|_{2}\sqrt{\|\Sigma\|_{F}})$
$\displaystyle=$ $\displaystyle
k^{*}(k^{*}-1)(n-k^{*})(n-k^{*}-1)\|\Delta_{n}\|_{2}^{2}+o_{p}(n^{4}\|\Delta_{n}\|_{2}^{2}),$
since $\sqrt{\|\Sigma\|_{F}}=o(\sqrt{n}\|\Delta_{n}\|_{2})$ by Assumption 3.1
(3). Therefore, we have
$P(\sup_{k<k^{*}-\epsilon_{n}}|U_{n,2}(k;1,n)|+U_{n,2}(k^{*};1,n)>0)\rightarrow
1.$
In addition,
$\displaystyle\sup_{k<k^{*}-\epsilon_{n}}|U_{n,2}(k;1,n)|-U_{n,2}(k^{*};1,n)$
$\displaystyle\leq$
$\displaystyle\sup_{k<k^{*}-\epsilon_{n}}k(k-1)(n-k^{*})(n-k^{*}-1)\|\Delta_{n}\|_{2}^{2}+O_{p}(n^{3.5}\|\Delta_{n}\|_{2}\sqrt{\|\Sigma\|_{F}})$
$\displaystyle-k^{*}(k^{*}-1)(n-k^{*})(n-k^{*}-1)\|\Delta_{n}\|_{2}^{2}-O_{p}(n^{3.5}\|\Delta_{n}\|_{2}\sqrt{\|\Sigma\|_{F}})$
$\displaystyle=$
$\displaystyle-\epsilon_{n}(2k^{*}-\epsilon_{n}-1)(n-k^{*})(n-k^{*}-1)\|\Delta_{n}\|_{2}^{2}+O_{p}(n^{3.5}\|\Delta_{n}\|_{2}\sqrt{\|\Sigma\|_{F}})$
$\displaystyle=$
$\displaystyle-\epsilon_{n}(2k^{*}-\epsilon_{n}-1)(n-k^{*})(n-k^{*}-1)\|\Delta_{n}\|_{2}^{2}+O_{p}(n^{4}\|\Delta_{n}\|_{2}^{2}/\sqrt{\gamma_{n,2}}).$
Since
${n/\sqrt{\gamma_{n,2}}}=o(n\gamma_{n,2}^{-1/4+\kappa})=o(\epsilon_{n})$, we
have
$\displaystyle
P(\sup_{k<k^{*}-\epsilon_{n}}|U_{n,2}(k;1,n)|-U_{n,2}(k^{*};1,n)<0)$
$\displaystyle\geq$ $\displaystyle
P\Big{(}-\epsilon_{n}(2k^{*}-\epsilon_{n}-1)(n-k^{*})(n-k^{*}-1)\|\Delta_{n}\|_{2}^{2}+O_{p}(n^{4}\|\Delta_{n}\|_{2}^{2}/\sqrt{\gamma_{n,2}})<0\Big{)}\rightarrow
1.$
Finally, it is straightforward to see that
$\displaystyle\sup_{k\leq
k^{*}-\epsilon_{n}}U_{n,2}(k;1,n)^{2}-U_{n,2}(k^{*};1,n)^{2}\leq\left(\sup_{k\leq
k^{*}-\epsilon_{n}}|U_{n,2}(k;1,n)|\right)^{2}-U_{n,2}(k^{*};1,n)^{2}$
$\displaystyle=$ $\displaystyle\left(\sup_{k\leq
k^{*}-\epsilon_{n}}|U_{n,2}(k;1,n)|-U_{n,2}(k^{*};1,n)\right)\left(\sup_{k\leq
k^{*}-\epsilon_{n}}|U_{n,2}(k;1,n)|+U_{n,2}(k^{*};1,n)\right).$
And
$\displaystyle P\left(\left(\sup_{k\leq
k^{*}-\epsilon_{n}}|U_{n,2}(k;1,n)|-U_{n,2}(k^{*};1,n)\right)\left(\sup_{k\leq
k^{*}-\epsilon_{n}}|U_{n,2}(k;1,n)|+U_{n,2}(k^{*};1,n)\right)<0\right)$
$\displaystyle\geq$ $\displaystyle P\left(\left\\{\sup_{k\leq
k^{*}-\epsilon_{n}}|U_{n,2}(k;1,n)|-U_{n,2}(k^{*};1,n)<0\right\\}\bigcap\left\\{\sup_{k\leq
k^{*}-\epsilon_{n}}|U_{n,2}(k;1,n)|+U_{n,2}(k^{*};1,n)>0\right\\}\right)\rightarrow
1,$
since both
$P(\sup_{k<k^{*}-\epsilon_{n}}|U_{n,2}(k;1,n)|+U_{n,2}(k^{*};1,n)>0)$ and
$P(\sup_{k<k^{*}-\epsilon_{n}}|U_{n,2}(k;1,n)|-U_{n,2}(k^{*};1,n)<0)$ converge
to 1. This is equivalent to
$P\left(\left(\sup_{k\leq
k^{*}-\epsilon_{n}}|U_{n,2}(k;1,n)|-U_{n,2}(k^{*};1,n)\right)\left(\sup_{k\leq
k^{*}-\epsilon_{n}}|U_{n,2}(k;1,n)|+U_{n,2}(k^{*};1,n)\right)\geq
0\right)\rightarrow 0,$
and it implies that
$P\left(\sup_{k<k^{*}-\epsilon_{n}}U_{n,2}(k;1,n)^{2}-U_{n,2}(k^{*};1,n)^{2}\geq
0\right)\rightarrow 0$. A similar argument applies to the case
$k>k^{*}+\epsilon_{n}$, and combining the two parts yields
$P\left(\sup_{k\in\Omega_{n}}U_{n,2}(k;1,n)^{2}-U_{n,2}(k^{*};1,n)^{2}\geq
0\right)\rightarrow 0$. This completes the proof of the first
result.
It remains to show the second part. Let us again assume $k<k^{*}-\epsilon_{n}$
first. By Proposition 6.1 we have
$\displaystyle W_{n,2}(k^{*};1,n)$
$\displaystyle=\frac{1}{n}\sum_{t=2}^{k^{*}-2}U_{n,2}(t;1,k^{*})^{2}+\frac{1}{n}\sum_{t=k^{*}+2}^{n-2}U_{n,2}(t;k^{*}+1,n)^{2}$
$\displaystyle=\frac{1}{n}\sum_{t=2}^{k^{*}-2}U_{n,2}^{Z}(t;1,k^{*})^{2}+\frac{1}{n}\sum_{t=k^{*}+2}^{n-2}U_{n,2}^{Z}(t;k^{*}+1,n)^{2},$
and
$\displaystyle W_{n,2}(k;1,n)$
$\displaystyle=\frac{1}{n}\sum_{t=2}^{k-2}U_{n,2}(t;1,k)^{2}+\frac{1}{n}\sum_{t=k+2}^{n-2}U_{n,2}(t;k+1,n)^{2}$
$\displaystyle=\frac{1}{n}\sum_{t=2}^{k-2}U_{n,2}^{Z}(t;1,k)^{2}+\frac{1}{n}\sum_{t=k+2}^{n-2}U_{n,2}(t;k+1,n)^{2}.$
When $t$ is between $k+2$ and $k^{*}$, by Proposition 6.1 we have
$\displaystyle U_{n,2}(t;k+1,n)=$ $\displaystyle
U_{n,2}^{Z}(t;k+1,n)+(t-k)(t-k-1)(n-k^{*})(n-k^{*}-1)\|\Delta_{n}\|_{2}^{2}$
$\displaystyle-2(t-k-1)(n-k^{*})(n-t)\sum_{i=k+1}^{t}\Delta_{n}^{T}Z_{i}+2(t-k-1)(n-k^{*})(t-k)\sum_{i=t+1}^{n}\Delta_{n}^{T}Z_{i}$
$\displaystyle+2(t-k-1)(n-k^{*})\sum_{i=k+1}^{t}\Delta_{n}^{T}Z_{i}-2(t-k-1)(t-k)\sum_{i=k^{*}+1}^{n}\Delta_{n}^{T}Z_{i},$
and from the above decomposition we observe that
$\mathbb{E}[U_{n,2}(t;k+1,n)]=(t-k)(t-k-1)(n-k^{*})(n-k^{*}-1)\|\Delta_{n}\|_{2}^{2}$,
which is the second term in the above equality. Then
$\displaystyle U_{n,2}(t;k+1,n)^{2}$
$\displaystyle=(U_{n,2}(t;k+1,n)-\mathbb{E}[U_{n,2}(t;k+1,n)]+\mathbb{E}[U_{n,2}(t;k+1,n)])^{2}$
$\displaystyle\geq\mathbb{E}[U_{n,2}(t;k+1,n)]^{2}+2\mathbb{E}[U_{n,2}(t;k+1,n)](U_{n,2}(t;k+1,n)-\mathbb{E}[U_{n,2}(t;k+1,n)])$
$\displaystyle\geq\mathbb{E}[U_{n,2}(t;k+1,n)]^{2}-2\mathbb{E}[U_{n,2}(t;k+1,n)]\sup_{t=k+2,...,n-2}|U_{n,2}(t;k+1,n)-\mathbb{E}[U_{n,2}(t;k+1,n)]|,$
since $\mathbb{E}[U_{n,2}(t;k+1,n)]>0$. Furthermore,
$\displaystyle\sup_{t=k+2,...,n-2}|U_{n,2}(t;k+1,n)-\mathbb{E}[U_{n,2}(t;k+1,n)]|$
$\displaystyle\leq$
$\displaystyle\sup_{t=k+2,...,n-2}|U_{n,2}^{Z}(t;k+1,n)|+8n^{3}\sup_{a<b,a,b=1,...,n}\left|\sum_{i=a}^{b}\Delta_{n}^{T}Z_{i}\right|$
$\displaystyle=$ $\displaystyle
O_{p}(n^{3}\|\Sigma\|_{F})+O_{p}(n^{3.5}\sqrt{\Delta_{n}^{T}\Sigma\Delta_{n}})=o_{p}(n^{4}\|\Delta_{n}\|^{2}/\sqrt{\gamma_{n,2}}),$
due to Assumption 3.1, Theorem 2.1 and the argument in Wang et al. (2019).
Similarly when $t$ is between $k^{*}$ and $n-2$, we have
$\displaystyle
U_{n,2}(t;k+1,n)^{2}\geq\mathbb{E}[U_{n,2}(t;k+1,n)]^{2}-2\mathbb{E}[U_{n,2}(t;k+1,n)]\sup_{t=k+2,...,n-2}|U_{n,2}(t;k+1,n)-\mathbb{E}[U_{n,2}(t;k+1,n)]|,$
where
$\mathbb{E}[U_{n,2}(t;k+1,n)]=(k^{*}-k)(k^{*}-k-1)(n-t)(n-t-1)\|\Delta_{n}\|_{2}^{2}>0$,
and
$\displaystyle\sup_{t=k+2,...,n-2}|U_{n,2}(t;k+1,n)-\mathbb{E}[U_{n,2}(t;k+1,n)]|$
$\displaystyle\leq$ $\displaystyle
O_{p}(n^{3}\|\Sigma\|_{F})+O_{p}(n^{3.5}\sqrt{\Delta_{n}^{T}\Sigma\Delta_{n}})=o_{p}(n^{4}\|\Delta_{n}\|^{2}/\sqrt{\gamma_{n,2}}).$
Therefore by combining the above results we obtain that
$\displaystyle W_{n,2}(k;1,n)$ $\displaystyle\geq$
$\displaystyle\frac{1}{n}\sum_{t=2}^{k-2}U_{n,2}^{Z}(t;1,k)^{2}+\frac{1}{n}\sum_{t=k+2}^{k^{*}}\mathbb{E}[U_{n,2}(t;k+1,n)]^{2}+\frac{1}{n}\sum_{t=k^{*}+1}^{n-2}\mathbb{E}[U_{n,2}(t;k+1,n)]^{2}$
$\displaystyle-\frac{2}{n}\sup_{t=k+2,...,n-2}|U_{n,2}(t;k+1,n)-\mathbb{E}[U_{n,2}(t;k+1,n)]|\sum_{t=k+2}^{k^{*}}\mathbb{E}[U_{n,2}(t;k+1,n)]$
$\displaystyle-\frac{2}{n}\sup_{t=k+2,...,n-2}|U_{n,2}(t;k+1,n)-\mathbb{E}[U_{n,2}(t;k+1,n)]|\sum_{t=k^{*}+1}^{n-2}\mathbb{E}[U_{n,2}(t;k+1,n)]$
$\displaystyle\gtrsim$
$\displaystyle(k^{*}-k)^{5}n^{3}\|\Delta_{n}\|_{2}^{4}-(k^{*}-k)^{3}n\|\Delta_{n}\|_{2}^{2}\sup_{t=k+2,...,n-2}|U_{n,2}(t;k+1,n)-\mathbb{E}[U_{n,2}(t;k+1,n)]|$
$\displaystyle+(k^{*}-k)^{4}n^{4}\|\Delta_{n}\|_{2}^{4}-(k^{*}-k)^{2}n^{2}\|\Delta_{n}\|_{2}^{2}\sup_{t=k+2,...,n-2}|U_{n,2}(t;k+1,n)-\mathbb{E}[U_{n,2}(t;k+1,n)]|$
$\displaystyle-\left(\sup_{k}\sup_{t=2,...,k-2}|U_{n,2}^{Z}(t;1,k)|\right)^{2}$
$\displaystyle=$
$\displaystyle(k^{*}-k)^{3}n^{3}\|\Delta_{n}\|_{2}^{4}[(k^{*}-k)^{2}-o_{p}(n^{2}/\sqrt{\gamma_{n,2}})]+(k^{*}-k)^{2}n^{4}\|\Delta_{n}\|_{2}^{4}[(k^{*}-k)^{2}-o_{p}(n^{2}/\sqrt{\gamma_{n,2}})]-O_{p}(n^{6}\|\Sigma\|_{F}^{2})$
$\displaystyle\geq$
$\displaystyle(k^{*}-k)^{3}n^{3}\|\Delta_{n}\|_{2}^{4}[\epsilon_{n}^{2}-o_{p}(n^{2}/\sqrt{\gamma_{n,2}})]+(k^{*}-k)^{2}n^{4}\|\Delta_{n}\|_{2}^{4}[\epsilon_{n}^{2}-o_{p}(n^{2}/\sqrt{\gamma_{n,2}})]-O_{p}(n^{6}\|\Sigma\|_{F}^{2})$
$\displaystyle=$
$\displaystyle((k^{*}-k)^{3}n^{3}+(k^{*}-k)^{2}n^{4})\|\Delta_{n}\|_{2}^{4}\epsilon_{n}^{2}(1-o_{p}(1))-O_{p}(n^{6}\|\Sigma\|_{F}^{2}),$
since $\epsilon_{n}=n\gamma_{n,2}^{-1/4+\kappa}$. Hence
$\displaystyle\inf_{k<k^{*}-\epsilon_{n}}W_{n,2}(k;1,n)\gtrsim(\epsilon_{n}^{3}n^{3}+\epsilon_{n}^{2}n^{4})\|\Delta_{n}\|_{2}^{4}\epsilon_{n}^{2}(1-o_{p}(1))-O_{p}(n^{6}\|\Sigma\|_{F}^{2})=\epsilon_{n}^{4}n^{4}\|\Delta_{n}\|^{4}(1-o_{p}(1)),$
since $\epsilon_{n}=o(n)$ and
$\epsilon_{n}^{4}n^{4}\|\Delta_{n}\|^{4}/(n^{6}\|\Sigma\|_{F}^{2})=\gamma_{n,2}^{1+4\kappa}\rightarrow\infty$.
By very similar arguments, we can obtain the same bound for
$\inf_{k>k^{*}+\epsilon_{n}}W_{n,2}(k;1,n)$, and hence
$\inf_{k\in\Omega_{n}}W_{n,2}(k;1,n)\gtrsim\epsilon_{n}^{4}n^{4}\|\Delta_{n}\|^{4}(1-o_{p}(1))$.
On the other hand, Theorem 2.1 implies that
$W_{n,2}(k^{*};1,n)=\frac{1}{n}\sum_{t=2}^{k^{*}-2}U_{n,2}^{Z}(t;1,k^{*})^{2}+\frac{1}{n}\sum_{t=k^{*}+2}^{n-2}U_{n,2}^{Z}(t;k^{*}+1,n)^{2}=O_{p}(n^{6}\|\Sigma\|_{F}^{2})$.
This indicates that
$W_{n,2}(k^{*};1,n)=\epsilon_{n}^{4}n^{4}\|\Delta_{n}\|^{4}o_{p}(1)$, and
consequently,
$P\left(W_{n,2}(k^{*};1,n)-\inf_{k\in\Omega_{n}}W_{n,2}(k;1,n)\geq
0\right)\leq
P\Big{(}\epsilon_{n}^{4}n^{4}\|\Delta_{n}\|^{4}o_{p}(1)-\epsilon_{n}^{4}n^{4}\|\Delta_{n}\|^{4}(1-o_{p}(1))\geq
0\Big{)}\rightarrow 0.$
This completes the whole proof. $\diamondsuit$
## 7 Application to network change-point detection
Our change-point testing and estimation methods are applicable to network
change-point detection in the following sense. Suppose we observe $n$
independent networks $\\{A_{t}\\}_{t=1}^{n}$ over time with $m$ nodes. Here
$A_{t}$ is the $m\times m$ adjacency matrix at time $t$. We assume the edges
in each network are generated from Bernoulli random variables and are
undirected. That is,
$A_{ij,t}=1~{}\mbox{if nodes $i$ and $j$ are connected at time $t$
and}~{}0~{}\mbox{otherwise}.$
Let $A_{t}=(A_{ij,t})_{i,j=1}^{m}$ and assume $E(A_{ij,t})=p_{ij,t}$. Let
$E(A_{t})=\Theta_{t}=(p_{ij,t})_{i,j=1}^{m}$.
Suppose that we are interested in testing
$H_{0}:\Theta_{1}=\cdots=\Theta_{n}$
versus certain change point alternatives. Here we can convert the adjacency
matrix into a high-dimensional vector, and apply our test and estimation
procedures. Note that a mean shift in $vech(\Theta_{t})$ implies a shift in
variance matrix of $vech(A_{t})$, so the variance matrix is not constant under
the alternative. However, the asymptotic distribution of our SN-based test
statistics still holds under the null, and our change-point detection method
is applicable. Note that our method allows the edges to be weakly dependent,
which can be satisfied by many popular network models; see Wang et al. (2020).
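To make the vectorization step concrete, the following Python sketch (ours, not from the paper) converts an adjacency matrix into the half-vectorized vector fed to the test, assuming $vech$ stacks the below-diagonal entries, which suffices for a simple undirected graph with zero diagonal:

```python
import numpy as np

def vech(A):
    # half-vectorization: stack the strictly lower-triangular entries
    # (the diagonal is zero for a simple undirected graph)
    idx = np.tril_indices(A.shape[0], k=-1)
    return A[idx]

# toy example: 3 nodes, edges (0,1) and (1,2)
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
v = vech(A)   # entries (1,0), (2,0), (2,1) -> array([1, 0, 1])
```

Each network $A_{t}$ thus becomes a vector of dimension $m(m-1)/2$, to which the high-dimensional testing and estimation procedures apply directly.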
To examine the finite sample performance of our change-point testing and
estimation in the network framework, we consider the following stochastic
block model as in Wang et al. (2020). We generate $A_{t}$ as a matrix with
independent Bernoulli entries with mean matrix
$\Theta_{t}=\mu_{t}ZQZ^{T}-{\mbox{diag}}(\mu_{t}ZQZ^{T})$, where
$Z\in\mathbb{R}^{m\times r}$ is the membership matrix and $Q\in[0,1]^{r\times
r}$ is the connectivity matrix. We set $Z$ to be the first $r$ columns of the
identity matrix $I_{m}$, so that $\operatorname{rank}(Z)=r$, and let
$Q=\bm{1}_{r}\bm{1}_{r}^{T}$ be a matrix of ones.
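The mean structure above can be sketched as follows (a hypothetical NumPy implementation of the simulation setup; the function name `sample_network` is ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_network(m, r, mu, rng):
    # Theta = mu * Z Q Z^T - diag(...), with Z the first r columns of I_m
    # and Q = 1_r 1_r^T, as in the simulation setup
    Z = np.eye(m)[:, :r]
    Theta = mu * Z @ np.ones((r, r)) @ Z.T
    np.fill_diagonal(Theta, 0.0)
    # sample independent Bernoulli edges on the upper triangle, then symmetrize
    U = rng.random((m, m))
    A = (np.triu(U, 1) < np.triu(Theta, 1)).astype(int)
    return A + A.T   # undirected adjacency matrix with zero diagonal

A = sample_network(m=10, r=2, mu=0.1, rng=rng)
assert (A == A.T).all() and np.diag(A).sum() == 0
```

Under this design only the first $r$ nodes can form edges, each with probability $\mu$, matching the rank-$r$ block structure described above.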
Table 6 presents the empirical size based on 1000 Monte Carlo repetitions. We
take $r=cm$ and $\mu_{t}\equiv 0.1/c$ with $c=0.2,1$.
$c$ | $(n,m)$ | 5%: $q=2$ | 5%: $q=4$ | 5%: $q=6$ | 5%: $q=2,4$ | 5%: $q=2,6$ | 10%: $q=2$ | 10%: $q=4$ | 10%: $q=6$ | 10%: $q=2,4$ | 10%: $q=2,6$
---|---|---|---|---|---|---|---|---|---|---|---
1 | (200,10) | 0.035 | 0.096 | 0.068 | 0.08 | 0.048 | 0.075 | 0.152 | 0.135 | 0.124 | 0.096
1 | (400,20) | 0.054 | 0.084 | 0.049 | 0.071 | 0.048 | 0.097 | 0.142 | 0.094 | 0.135 | 0.099
0.2 | (200,10) | 0.065 | 0.117 | 0.08 | 0.116 | 0.062 | 0.095 | 0.153 | 0.151 | 0.147 | 0.121
0.2 | (400,20) | 0.05 | 0.101 | 0.043 | 0.09 | 0.047 | 0.099 | 0.153 | 0.096 | 0.137 | 0.083
Table 6: Size for testing one change point of network time series
For the power simulation, we generate the network data with a change point
located at $\lfloor n/2\rfloor$, which leads to
$\mu_{t}=\mu+\delta\mathbb{I}(t>n/2)\cdot\mu$. We take $\mu=0.1/c$ and $r=cm$
with $c=0.2,1$ and $\delta=0.2,0.5$. We obtain the empirical power based on
1000 Monte Carlo repetitions.
$(\delta,c)$ | $(n,m)$ | 5%: $q=2$ | 5%: $q=4$ | 5%: $q=6$ | 5%: $q=2,4$ | 5%: $q=2,6$ | 10%: $q=2$ | 10%: $q=4$ | 10%: $q=6$ | 10%: $q=2,4$ | 10%: $q=2,6$
---|---|---|---|---|---|---|---|---|---|---|---
(0.2,1) | (200,10) | 0.152 | 0.172 | 0.116 | 0.19 | 0.145 | 0.223 | 0.254 | 0.225 | 0.265 | 0.222
(0.2,1) | (400,20) | 0.83 | 0.309 | 0.238 | 0.787 | 0.775 | 0.908 | 0.411 | 0.364 | 0.865 | 0.85
(0.5,1) | (200,10) | 0.93 | 0.628 | 0.527 | 0.917 | 0.904 | 0.963 | 0.723 | 0.666 | 0.952 | 0.937
(0.5,1) | (400,20) | 1 | 0.995 | 0.97 | 1 | 1 | 1 | 0.997 | 0.99 | 1 | 1
(0.2,0.2) | (200,10) | 0.804 | 0.677 | 0.61 | 0.798 | 0.755 | 0.866 | 0.75 | 0.708 | 0.86 | 0.829
(0.2,0.2) | (400,20) | 1 | 0.994 | 0.991 | 1 | 1 | 1 | 0.997 | 0.999 | 1 | 1
(0.5,0.2) | (200,10) | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
(0.5,0.2) | (400,20) | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
Table 7: Power for testing one change point of network time series
We can see that our method exhibits size behavior similar to that in the
Gaussian setting of Section 4.1. The power also appears to be quite good and
increases with the signal strength. We are not aware of any testing method
tailored to single network change-point detection, so we did not include other
methods in the comparison.
To estimate the change points in the network time series, we also combine our
method with WBS. We generate 100 samples of networks with connection
probability $\mu_{t}$ and sparsity parameter $r$. The 3 change points are
located at $30,60$ and $90$. We take
$\mu_{t}=\mu+\delta\cdot\mathbb{I}(30<t\leq 60\text{ or }t>90)\cdot\mu$. We
report the MSE and ARI over 100 Monte Carlo simulations as before. We compare
our method with the modified neighborhood smoothing (MNBS) algorithm of Zhao et
al. (2019) and the graph-based test of Chen and Zhang (2015) combined with
binary segmentation (denoted CZ). We do not include a comparison with Wang
et al. (2020) as their method requires two i.i.d. samples. We can see that CZ
performs worse than the other two methods, as our simulation involves
non-monotonic changes in the mean that do not favor binary segmentation. When
the network becomes sparse, i.e., $c=0.3$, our method also outperforms MNBS.
Overall, the performance of our method (e.g., WBS-SN(2), WBS-SN(2,6)) seems
quite stable. Of course, the scope of this simulation is quite limited, and we
leave a more in-depth investigation of network change-point estimation to
future work.
$(\mu,\delta,c)$ | Method | $\hat{N}-N=-3$ | $-2$ | $-1$ | $0$ | $1$ | $2$ | $3$ | MSE | ARI
---|---|---|---|---|---|---|---|---|---|---
(0.2,1,1) | WBS-SN(2) | 0 | 1 | 14 | 74 | 10 | 1 | 0 | 0.32 | 0.865
(0.2,1,1) | WBS-SN(4) | 90 | 9 | 1 | 0 | 0 | 0 | 0 | 8.47 | 0.0373
(0.2,1,1) | WBS-SN(6) | 32 | 23 | 24 | 16 | 4 | 1 | 0 | 4.12 | 0.278
(0.2,1,1) | WBS-SN(2,6) | 1 | 2 | 18 | 39 | 32 | 8 | 0 | 0.99 | 0.728
(0.2,1,1) | CZ | 46 | 50 | 4 | 0 | 0 | 0 | 0 | 6.18 | 0.165
(0.2,1,1) | MNBS | 0 | 2 | 17 | 55 | 23 | 3 | 0 | 0.6 | 0.847
(0.1,1,0.3) | WBS-SN(2) | 0 | 0 | 4 | 82 | 14 | 0 | 0 | 0.18 | 0.893
(0.1,1,0.3) | WBS-SN(4) | 12 | 17 | 38 | 33 | 0 | 0 | 0 | 2.14 | 0.604
(0.1,1,0.3) | WBS-SN(6) | 28 | 27 | 27 | 14 | 4 | 0 | 0 | 3.91 | 0.383
(0.1,1,0.3) | WBS-SN(2,6) | 0 | 1 | 8 | 60 | 29 | 2 | 0 | 0.49 | 0.852
(0.1,1,0.3) | CZ | 55 | 33 | 6 | 4 | 1 | 1 | 0 | 6.38 | 0.156
(0.1,1,0.3) | MNBS | 97 | 0 | 2 | 1 | 0 | 0 | 0 | 8.75 | 0.019
Table 8: Multiple change point location estimations for network time series
# Deep Generative SToRM model for dynamic imaging
###### Abstract
We introduce a novel generative smoothness regularization on manifolds (SToRM)
model for the recovery of dynamic image data from highly undersampled
measurements. The proposed generative framework represents the image time
series as a smooth non-linear function of low-dimensional latent vectors that
capture the cardiac and respiratory phases. The non-linear function is
represented using a deep convolutional neural network (CNN). Unlike the
popular CNN approaches that require extensive fully-sampled training data that
is not available in this setting, the parameters of the CNN generator as well
as the latent vectors are jointly estimated from the undersampled measurements
using stochastic gradient descent. We penalize the norm of the gradient of the
generator to encourage the learning of a smooth surface/manifold, while
temporal gradients of the latent vectors are penalized to encourage the time
series to be smooth. The main benefits of the proposed scheme are (a) a
significant reduction in memory demand compared to the analysis-based
SToRM model, and (b) the spatial regularization brought in by the CNN model.
We also introduce efficient progressive approaches to minimize the
computational complexity of the algorithm.
## 1 Introduction
The quest for high spatial and temporal resolution is central to several
dynamic imaging problems, ranging from MRI, video imaging, to microscopy. A
popular approach to improve spatio-temporal resolution is self-gating, where
cardiac and respiratory information is estimated from navigator or central
k-space using bandpass filtering or clustering, followed by binning and
reconstruction [1, 2]. Several authors have also introduced smooth manifold
regularization, which models the images in the time series as points on a high
dimensional manifold [3, 4, 5]. This approach may be viewed as an implicit
soft-gating alternative to self-gating methods. Manifold methods, including our
smoothness regularization on manifolds (SToRM) approach, have been demonstrated
in a variety of dynamic imaging applications with good performance [3, 4, 5].
Since the data is not explicitly binned into a specific phase, manifold
methods are not vulnerable to potential errors in clustering the time series
based on navigators. Despite the benefits, a key challenge with current
manifold methods is the high memory demand. Unlike self-gating methods that
only recover the specific phases, manifold schemes recover the entire time
series. This approach restricts the extension of the framework to higher
dimensional problems. The high memory demand also makes it difficult to use
additional spatial and temporal regularization.
The main focus of this work is to exploit the power of deep convolutional
neural networks (CNN) to introduce an improved and memory efficient
generative/synthesis formulation of SToRM. Unlike current manifold and self-
gating methods, this approach does not require k-space navigators to estimate
the motion states. Besides, unlike traditional CNN based approaches, the
proposed scheme does not require extensive training data, which is challenging
to acquire in free-breathing applications. We note that current manifold
methods can be viewed as an analysis formulation. Specifically, a non-linear
injective mapping is applied on the images such that the mapped points of the
alias-free images lie on a low-dimensional subspace. When recovering from
undersampled data, the nuclear norm prior is applied in the transform domain
to encourage their non-linear mappings to lie in a subspace. Unfortunately,
this analysis approach requires the storage of all the image frames in the
time series. In this work, we model the images in the time series as non-
linear mappings
$\mathbf{x}_{t}=\mathcal{G}_{\theta}\left(\mathbf{z}_{t}\right)$, where
$\mathbf{z}_{t}$ are vectors that live in a very low-dimensional subspace. The
dimension of the subspace can be very small (e.g., 2-4) in practical
applications. We represent the non-linear mapping using a convolutional neural
network with weights $\theta$. The memory footprint of the algorithm depends
on the number of parameters $\theta$ and $z$, which is orders of magnitude
smaller than that of traditional manifold methods.
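To make the compression argument concrete, the following sketch uses a toy two-layer generator (our own illustration, not the authors' CNN architecture; all sizes are hypothetical) that maps a 2-D latent vector to a small image and compares the parameter count against storing the frames directly:

```python
import numpy as np

# Toy stand-in for the generator x_t = G_theta(z_t): a tiny two-layer
# network mapping a 2-D latent vector to a 32x32 image (illustrative only).
rng = np.random.default_rng(0)
d, h, npix = 2, 64, 32 * 32                 # latent dim, hidden width, pixels
W1, b1 = rng.standard_normal((h, d)), np.zeros(h)
W2, b2 = rng.standard_normal((npix, h)), np.zeros(npix)

def generator(z):
    """G_theta(z) = W2 @ relu(W1 @ z + b1) + b2, reshaped to an image."""
    hidden = np.maximum(W1 @ z + b1, 0.0)
    return (W2 @ hidden + b2).reshape(32, 32)

frame = generator(np.array([0.1, -0.3]))    # one frame from one latent vector

# Memory footprint: weights theta plus one latent vector per frame, versus
# the raw pixels of all frames.
n_frames = 500
n_params = W1.size + b1.size + W2.size + b2.size + n_frames * d
n_pixels = n_frames * npix
```

Even at these toy sizes the model stores fewer numbers than the raw frames; at the 320 mm FOV matrix sizes used later in the paper, the savings grow to orders of magnitude.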
Fig. 1: (a) Analysis SToRM and (b) Generative SToRM. The analysis formulation
[4, 6] in (a) minimizes the nuclear norm of the non-linear mappings
$\varphi(\mathbf{x}_{i})$ of the images $\mathbf{x}_{i}$ to encourage them to
be in a subspace. By contrast, the proposed formulation expresses the images
as non-linear mappings $\mathcal{G}_{\theta}(\mathbf{z}_{i})$ of the low-
dimensional latent vectors $\mathbf{z}_{i}$. The main benefit of the
generative model is its ability to compress the data, thus offering a memory
efficient algorithm.
We propose to jointly optimize for the network parameters $\theta$ and the
latent vector $\mathbf{z}$ such that the cost
$\sum_{i}\|\mathcal{A}_{i}(\mathcal{G}_{\theta}(\mathbf{z}_{i}))-\mathbf{b}_{i}\|^{2}$
is minimized during image reconstruction. The smoothness of the manifold
generated by $\mathcal{G}_{\theta}(\mathbf{z})$ depends on the gradient of
$\mathcal{G}_{\theta}$ with respect to its input. To obtain a smooth manifold,
we regularize the gradient of the mapping
$\|\nabla_{z}\mathcal{G}_{\theta}\|^{2}$. Similarly, the images in the time
series are expected to vary smoothly in time. Hence, we also use a Tikhonov
smoothness penalty on the latent vectors $\mathbf{z}_{t}$ to further constrain
the solutions. Unlike traditional CNN methods that are fast during
testing/inference, the direct application of this scheme to the dynamic MRI
setting is computationally expensive. We use a three-step progressive-in-time
approach to significantly reduce the computational complexity of the
algorithm. Specifically, we grow the number of frames in the datasets during
the optimization process. The latent vectors from the previous iteration are
linearly interpolated to initialize the latent vectors. We observe that the
use of the progressive-in-time approach significantly reduces the
computational complexity of the algorithm.
The proposed approach is inspired by deep image prior (DIP) [7], which was
introduced for static imaging problems. We note that the extension of DIP to
dynamic imaging was considered in [8]. The key difference of the proposed
formulation from that work is the joint optimization of the latent
variables $\mathbf{z}$, whereas the method in [8] chooses $\mathbf{z}$ as
random vectors or interpolated versions of random vectors. Another key distinction is
the use of regularization priors on the network parameters and latent vectors,
which ensures that the scheme learns meaningful latent vectors and the
performance of the network does not degrade with iterations as in traditional
DIP methods.
## 2 Methods
Smooth manifold methods model images $\mathbf{x}_{i}$ in the dynamic time
series as points on a smooth manifold. In SToRM, the exponential (injective)
mappings $\varphi(\mathbf{x}_{i})$ of the alias-free images are assumed to lie
on a low-dimensional subspace. See Fig. 1.(a). The
joint recovery of the images denoted by the matrix
$\mathbf{X}=\left[\mathbf{x}_{1},..\mathbf{x}_{N}\right]$ from undersampled
data is posed as a nuclear norm minimization problem
$\mathbf{X}^{*}=\arg\min_{\mathbf{X}}\|\mathcal{A}(\mathbf{X})-\mathbf{B}\|^{2}+\lambda~{}\|\left[\varphi(\mathbf{x}_{1}),\ldots,\varphi(\mathbf{x}_{N})\right]\|_{*}$
(1)
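The nuclear norm prior above is simply the sum of singular values of the mapped matrix. A minimal numerical check with a random stand-in for $[\varphi(\mathbf{x}_{1}),\ldots,\varphi(\mathbf{x}_{N})]$:

```python
import numpy as np

# Random stand-in for the mapped matrix [phi(x_1), ..., phi(x_N)].
rng = np.random.default_rng(0)
Phi = rng.standard_normal((8, 5))

nuclear = np.linalg.norm(Phi, ord='nuc')          # nuclear norm ||.||_*
singvals = np.linalg.svd(Phi, compute_uv=False)   # singular values (descending)
# The penalty equals the sum of singular values; minimizing it encourages the
# mapped points to lie in (or near) a low-dimensional subspace.
```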
To overcome the challenges with the above analysis scheme, we propose to model
the images in the time series as
$\mathbf{x}_{i}=\mathcal{G}_{\theta}(\mathbf{z}_{i}),$ (2)
where $\mathcal{G}_{\theta}$ is a non-linear mapping. We realize
$\mathcal{G}_{\theta}$ using a deep convolutional neural network, inspired by
the extensive work on generative image models. Here, $\mathbf{z}_{i}$ are
latent vectors that lie in a low-dimensional subspace. As $\mathbf{z}_{i}$
vary in the subspace, their non-linear mappings vary on the image manifold.
The mapping $\mathcal{G}_{\theta}$ may be viewed as the inverse of the
injective mapping $\varphi$ considered in analysis SToRM; rather than mapping
the images to a low-dimensional subspace as in classical SToRM methods we now
propose to express the images as non-linear functions of latent variables
living in a low-dimensional subspace. See Fig. 1.(b).
The smoothness of the manifold is determined by the gradient of the non-linear
mapping, denoted by $\nabla_{\mathbf{z}}\mathcal{G}_{\theta}$. A mapping with
high gradient values can result in very similar latent vectors being mapped to
very different images. To minimize this risk, we propose to penalize the
$\ell_{2}$ norm of the gradients of the network, denoted by
$\|\nabla_{\mathbf{z}}\mathcal{G}_{\theta}\|^{2}$. We term this prior the
network regularizer. We expect adjacent time frames in the time series to
be similar; hence, we propose to add a temporal smoothness regularizer on the latent
vectors. The parameters of the network $\theta$ as well as the low-dimensional
latent vector $\mathbf{z}$ are estimated from the measured data by minimizing
$\displaystyle\mathcal{C}(\mathbf{z},\theta)$ $\displaystyle=$
$\displaystyle\sum_{i=1}^{N}\|\mathcal{A}_{i}\left(\mathcal{G}_{\theta}[\mathbf{z}_{i}]\right)-\mathbf{b}_{i}\|^{2}+\lambda_{1}\underbrace{\|\nabla_{\mathbf{z}}\mathcal{G}_{\theta}\|^{2}}_{\scriptsize\mbox{network
regularization}}$ (3)
$\displaystyle\qquad+\lambda_{2}\underbrace{\|\nabla_{t}\mathbf{z}_{t}\|^{2}}_{\scriptsize\mbox{temporal
regularization}}$
with respect to $\mathbf{z}$ and $\theta$. We initialize the network
parameters and latent vectors to be random variables.
We use ADAM optimization to determine the optimal parameters. Note that the
first and second terms in the expression are separable over $i$. To keep the
memory demand of the algorithm low, we propose to choose mini-batches
consisting of random subsets of frames. A key benefit of this framework over
conventional neural network schemes is that it does not require any training
data. Note that it is often impossible to acquire fully-sampled training data
in dynamic imaging applications.
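A minimal numerical sketch of the cost in (3), with a toy linear generator and random sampling masks standing in for the MRI encoding operators $\mathcal{A}_i$ (all names and sizes here are our illustrative assumptions): for a linear map, the network regularizer $\|\nabla_{\mathbf{z}}\mathcal{G}_{\theta}\|^{2}$ is exactly the squared Frobenius norm of the weight matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
npix, d, N = 16, 2, 10
W = rng.standard_normal((npix, d))              # toy linear "generator" weights

def gen(z):
    return W @ z                                 # G_theta(z); Jacobian is W

Z_true = rng.standard_normal((N, d))             # latent vectors z_1..z_N
masks = rng.integers(0, 2, size=(N, npix))       # A_i: random sampling masks
B = np.array([m * gen(z) for m, z in zip(masks, Z_true)])  # "measurements" b_i

def cost(Z, lam1=1e-3, lam2=2.0):
    data = sum(np.sum((m * gen(z) - b) ** 2)     # sum_i ||A_i(G(z_i)) - b_i||^2
               for z, b, m in zip(Z, B, masks))
    net_reg = np.sum(W ** 2)                     # ||grad_z G||^2 = ||W||_F^2 here
    temporal = np.sum(np.diff(Z, axis=0) ** 2)   # ||grad_t z_t||^2
    return data + lam1 * net_reg + lam2 * temporal
```

The data term vanishes at the true latents, so the remaining cost is just the two penalties, mirroring how the regularizers shape the solution when the data term is (nearly) satisfied.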
The main benefit of this model is the compression offered by the
representation; the number of parameters of the model in (2) is orders of
magnitude smaller than the number of pixels in the dataset. The dramatic
compression offered by the representation, together with the mini-batch
training provides a memory efficient alternative to analysis SToRM [3, 4].
Although our focus is on establishing the utility of the scheme in 2-D
settings in this paper, the approach can be readily translated to higher
dimensional applications. Another benefit is the implicit spatial
regularization brought in by the generative CNN. Specifically, CNNs are
ideally suited to represent images rather than noise-like alias artifacts [7].
Fig. 2: Reconstruction performance with progressive training in time and
without progressive training in time. The plot shows that progressive
training in time produces better results with much less running time
compared to training without the progressive-in-time strategy.
Fig. 3: Impact of network regularization and latent variable regularization.
The SER vs epoch plots are shown above, while two of the reconstructed images,
their time profiles, and recovered latent variables are shown. We note that
the blue curve captures respiratory motion, while the orange one captures
cardiac motion.
### 2.1 Progressive in time training
While the generative SToRM approach significantly reduces the memory demand, a
challenge with this approach is the increased computational complexity. To
minimize the complexity, we propose to use a progressive optimization
strategy. Specifically, we solve for a sequence of vectors $\mathbf{z}_{0}$,
$\mathbf{z}_{1}$, .., $\mathbf{z}_{M}$, each corresponding to an increasing number
of time frames. For instance, in this work we choose $\mathbf{z}_{0}$ to be a
$2\times 1$ vector, where we consider the recovery of an average image
$\mathcal{G}_{\theta}(\mathbf{z}_{0})=\mathbf{x}_{0}$ from the entire data. We
solve for the optimal $\theta_{0}$ and $\mathbf{z}_{0}$ by minimizing (3).
Since we are solving for a single image, this optimization is fast. Following
convergence, the latent vector $\\{\mathbf{z}_{0}\\}$ is linearly interpolated
to the size of $\mathbf{z}_{1}$ and used along with $\theta_{0}$ as
initialization, while solving for
$\left\\{\theta_{1},\mathbf{z}_{1}\right\\}$. This approach significantly
reduces the computational complexity, as seen in our experiments.
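The interpolation step can be sketched as follows (a minimal numpy version; the function name is ours, not the authors'):

```python
import numpy as np

def grow_latents(Z, n_new):
    """Linearly interpolate an (n_old, d) latent sequence to n_new frames,
    used to warm-start the latent vectors of the next progressive stage."""
    n_old, d = Z.shape
    t_old = np.linspace(0.0, 1.0, n_old)
    t_new = np.linspace(0.0, 1.0, n_new)
    return np.stack([np.interp(t_new, t_old, Z[:, k]) for k in range(d)],
                    axis=1)
```

A single latent vector (the average-image stage) is simply replicated, while a longer sequence is stretched with its endpoints preserved.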
Fig. 4: Comparison of Generative SToRM, Analysis SToRM, time dependent deep
image prior.
## 3 Experiments
### 3.1 Dataset and imaging experiments
All the experiments in this paper are based on a whole-heart multi-slice
dataset collected in the free-breathing mode using a golden angle spiral
trajectory. The acquisition of the data was performed on a GE 3T scanner. The
sequence parameters were: TR = 8.4 ms, FOV = 320 mm $\times$ 320 mm, flip
angle = 18 degrees, and slice thickness = 8 mm.
Results were generated using an Intel Xeon CPU at 2.40 GHz and a Tesla
P100-PCIE 16GB GPU. Results in §3.2 and §3.3 were based on the first slice in the
dataset, and results in §3.4 were based on the second slice in the
dataset. We binned the data from six spiral interleaves, corresponding to 50 ms
temporal resolution. The entire dataset corresponds to 522 frames. We omit the
first 22 frames and used the remaining 500 frames for SToRM reconstructions,
which is used as ground truth for comparisons. In all the studies, we assumed
the latent variables to be two-dimensional, since the main sources of
variability in the data correspond to cardiac and respiratory motion.
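The quoted temporal resolution follows directly from the sequence parameters: six interleaves per frame at TR = 8.4 ms give roughly 50 ms per frame.

```python
tr_ms = 8.4        # repetition time per spiral interleave (ms)
interleaves = 6    # interleaves binned into each frame
frame_ms = tr_ms * interleaves   # ~50.4 ms temporal resolution per frame
```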
### 3.2 Benefit of progressive in time approach
We demonstrate the quite significant reduction in running time offered by the
progressive training strategy described in Section 2.1 in Fig. 2. Here, we
consider the recovery from 150 frames with and without the progressive
strategy. We plot the reconstruction performance, measured by the Signal-to-
Error Ratio (SER) with respect to the running time. The results show that the
proposed scheme can offer good reconstructions in $\approx 200$ seconds, which
is better than the direct approach that takes more than 2000 seconds.
### 3.3 Impact of regularization priors
We study the impact of network regularization priors in Fig. 3.(a), where we
show the reconstruction performance with respect to the number of epochs. The
recovered latent variables are also shown in the plots. We chose
$\lambda_{2}=2$ in this experiment. We note that unlike the case without
network regularization, the SER of the regularized reconstruction increases
with iterations. The case without regularization starts to fit the noise
with iterations, as in the case of deep image prior. We note that with
regularization, the latent variables capture cardiac (orange curve) and
respiratory (blue curve) motion, even though no explicit priors or
additional information (e.g navigators) about cardiac or respiratory rates
were used. Without network regularization, we observe increased mixing of the
cardiac and respiratory patterns in the latent vectors.
In the cost function (3), we also have the temporal smoothness regularization
of the latent variables. We compare $\lambda_{2}=2$ against $\lambda_{2}=0$,
while $\lambda_{1}$ was fixed as $0.001$. Similar to the network
regularization setting, we observe that the performance of the un-regularized
algorithm falls with iterations, while the performance of the regularized
approach increases or plateaus with iterations. We also observed significant
mixing between cardiac and respiratory patterns in the latent variables when
no regularization is used.
### 3.4 Comparison with existing methods
We compare the proposed generative SToRM approach with analysis SToRM [6] and
time dependent deep image prior algorithm [8]. We use the k-space data of 150
frames for the reconstructions. The reconstruction results are shown in Fig.
4. The results show that the generative SToRM approach is able to reduce noise
and alias artifacts compared to analysis SToRM, offering around 1dB
improvement in performance. We attribute the improved performance to spatial
regularization offered by the CNN generator, which is absent in the analysis
SToRM formulation. The reconstruction times of the two algorithms are
comparable. The Time-DIP scheme, which assumes the latent variables to be
fixed random values, results in increased artifacts and blurring of motion
details. We note that unlike the analysis schemes, the proposed scheme does
not use k-space navigators to estimate the motion states; the latent variables
are estimated from the measured k-space data itself.
## 4 Conclusion
We introduce a generative manifold representation for the recovery of dynamic
image data from highly undersampled measurements. The deep CNN generator is
used to lift low-dimensional latent vectors to the smooth image manifold, and
the proposed scheme does not require fully-sampled training data. We jointly
optimize the CNN generator parameters and the latent vectors based on the
undersampled data. We also proposed a progressive-in-time approach to minimize
the computational complexity of the algorithm. During training, the norm
of the gradients of the generator is penalized to encourage the learning of a
smooth surface/manifold, while temporal gradients of the latent vectors are
penalized to encourage the time series to be smooth. Comparisons with existing
methods suggest the utility of the proposed scheme in dynamic imaging.
## 5 Compliance with Ethical Standards
This research study was conducted using human subject data. The institutional
review board at the local institution approved the acquisition of the data,
and written consent was obtained from the subject.
## 6 Acknowledgments
This work is supported by grants NIH 1R01EB019961-01A1 and R01EB019961-02S.
The authors declare that there are no conflicts of interest.
## References
* [1] Li Feng, Robert Grimm, Kai Tobias Block, Hersh Chandarana, Sungheon Kim, Jian Xu, Leon Axel, Daniel K Sodickson, and Ricardo Otazo, “Golden-angle radial sparse parallel mri: combination of compressed sensing, parallel imaging, and golden-angle radial sampling for fast and flexible dynamic volumetric mri,” Magnetic resonance in medicine, vol. 72, no. 3, pp. 707–717, 2014\.
* [2] Anthony G Christodoulou, Jaime L Shaw, Christopher Nguyen, Qi Yang, Yibin Xie, Nan Wang, and Debiao Li, “Magnetic resonance multitasking for motion-resolved quantitative cardiovascular imaging,” Nature biomedical engineering, vol. 2, no. 4, pp. 215–226, 2018\.
* [3] Sunrita Poddar and Mathews Jacob, “Dynamic mri using smoothness regularization on manifolds (storm),” IEEE transactions on medical imaging, vol. 35, no. 4, pp. 1106–1115, 2015.
* [4] Sunrita Poddar, Yasir Q Mohsin, Deidra Ansah, Bijoy Thattaliyath, Ravi Ashwath, and Mathews Jacob, “Manifold recovery using kernel low-rank regularization: Application to dynamic imaging,” IEEE Transactions on Computational Imaging, vol. 5, no. 3, pp. 478–491, 2019.
* [5] Ukash Nakarmi, Yanhua Wang, Jingyuan Lyu, Dong Liang, and Leslie Ying, “A kernel-based low-rank (klr) model for low-dimensional manifold recovery in highly accelerated dynamic mri,” IEEE transactions on medical imaging, vol. 36, no. 11, pp. 2297–2307, 2017.
* [6] Abdul Haseeb Ahmed, Ruixi Zhou, Yang Yang, Prashant Nagpal, Michael Salerno, and Mathews Jacob, “Free-breathing and ungated dynamic mri using navigator-less spiral storm,” IEEE Transactions on Medical Imaging, 2020.
* [7] Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky, “Deep image prior,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 9446–9454.
* [8] Kyong Hwan Jin, Harshit Gupta, Jerome Yerly, Matthias Stuber, and Michael Unser, “Time-dependent deep image prior for dynamic mri,” arXiv preprint arXiv:1910.01684, 2019.
|
# Adversarial Learning with Cost-Sensitive Classes
Haojing Shen, Sihong Chen, Ran Wang, , Xizhao Wang Haojing Shen, Sihong Chen,
and Xizhao Wang are with Big Data Institute, College of Computer Science and
Software Engineering, Guangdong Key Lab. of Intelligent Information
Processing, Shenzhen University, Shenzhen 518060, Guangdong, China (Email:
<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>).Ran Wang is
with the College of Mathematics and Statistics, Shenzhen University, Shenzhen
518060, China and also with the Shenzhen Key Laboratory of Advanced Machine
Learning and Applications, Shenzhen University, Shenzhen 518060, China.
(e-mail: wangran@szu.edu.cn).Corresponding author: Xizhao Wang.
###### Abstract
It is necessary to improve the performance of some special classes or to
particularly protect them from attacks in adversarial learning. This paper
proposes a framework combining cost-sensitive classification and adversarial
learning together to train a model that can distinguish between protected and
unprotected classes, such that the protected classes are less vulnerable to
adversarial examples. We find in this framework an interesting phenomenon
during the training of deep neural networks, called the Min-Max property: the
absolute values of most parameters in the convolutional layers approach
zero, while the absolute values of a few parameters become significantly
larger. Based on this Min-Max property, which is formulated and
analyzed from the viewpoint of random distributions, we further build a new defense
model against adversarial examples to improve adversarial robustness. An
advantage of the built model is that it no longer needs adversarial
training, and thus has higher computational efficiency than most existing
models that require adversarial training. It is experimentally confirmed that,
regarding the average accuracy of all classes, our model is almost the same as
the existing models when no attack occurs and is better than the
existing models when an attack occurs. Specifically, regarding the accuracy of
protected classes, the proposed model is much better than the existing models
when an attack occurs.
###### Index Terms:
Adversarial examples, adversarial training, robustness, cost-sensitive, attack
and defense.
## I Introduction
(a) The framework of adversarial training (b) The framework of CSA and CSE
algorithms
Figure 1: A framework for adversarial training and an overview of CSA and CSE
algorithms. (a) The clean examples and adversarial examples are fed into the
network alternately during training. For example, clean examples are first
fed into the network in Stage 1. Then adversarial examples are generated from
clean examples in Stage 2. Finally, in Stage 3, the adversarial examples are
applied to train the network. (b) A framework for the CSA and CSE algorithms.
We take a LeNet [1] network as an example. Note that, for the CSE algorithm, the
convolutional parameters will be applied to the loss function with some
formulation to attain the Min-Max property.
Recent studies have shown that deep neural networks are relatively fragile
[2] and vulnerable to adversarial examples, which are formed by adding
slight perturbations to clean samples. The methods used to generate adversarial
examples are called adversarial attacks [2, 7, 10, 11, 6, 12]. Simultaneously,
to defend against adversarial attacks, many adversarial defenses have been
proposed recently [3, 4, 5, 2, 6, 7, 8, 9]. There is an arms race between
adversarial attacks and adversarial defenses.
Adversarial training [2, 6, 7] is a simple and effective adversarial defense
method. Many studies [6, 7] have shown that adversarial training can
effectively defend against white-box attacks. However, it can only defend
against a given attack method, which may fail for other stronger or unknown
attacks. Meanwhile, some works have found that adversarial training can fall
into a trap called obfuscated gradients [3, 13]. Obfuscated gradients give a
false sense that the model has good adversarial robustness by preventing the
attacker from computing the gradient of the model. However, an attacker can
successfully attack a model with
adversarial training by using a transfer attack method, building a new
approximation function, or using a non-gradient attack.
The main purpose of existing defense methods [2, 4] is to improve the model’s
robustness across all classes. These defense methods treat every class in the
dataset equally, i.e., the adversarial robustness of each class needs to be
improved simultaneously. However, constructing such an ideal model, which
improves the adversarial robustness of every class, is challenging. Actually,
in practical applications, not all classes are equally important in some
tasks, i.e., samples of certain classes are more important than those of
others. For example, there are various denominations of the dollar, such as $1, $2,
$10, $20, $50, $100, and so on. Obviously, in the task of identifying dollar
bills, we hope that the model can be more accurate on large denominations. As
for the adversary, of course, they prefer to attack the large denomination of
the bill to obtain the maximum profit. From this perspective, we hope to
propose a method to particularly improve the adversarial robustness of more
important classes.
Some related works can be retrieved from the literature [14, 15]. Zhang et al.
[14] point out that in practical application, the cost of adversarial
transformation depends on specific classes. They combine cost-sensitive
learning with robust learning [16] and advocate using the cost-sensitive
robustness to measure the classifier’s performance for some tasks where the
adversarial transformation is more important than other transformations. They
code the cost of each type of adversarial transformation into a cost matrix,
combine it with the robust learning method [16], and propose a new objective
function. However, they do not explain why cost-sensitive robustness learning
affects the robustness of each class, and the performance of the model is
heavily dependent on the robust learning method in [16]. Based on optimal
transport theory and the Wasserstein distance, Terzi et al. [15] propose the
optimal cost of transferring from one distribution to another to
improve the model’s cost-sensitive robustness. Their proposed WPGD method can
be used to solve the cost-sensitive problem, data imbalance problem, or the
balance problem of robustness and accuracy.
In this paper, we focus on a new problem, i.e., protecting a particular class
under adversarial attacks. Unlike traditional defense strategies, we consider
the accuracy rate of specific categories to minimize the impact of adversarial
examples. Of course, the accuracy rate of other categories may be sacrificed
to some extent, but it is expected that the overall performance is similar to
that of the standard training model. We find that this problem has one thing
in common with the cost-sensitive problem [17, 18], i.e., the cost of
misclassification depends on different labels. This is different from the
traditional classification problem, which assumes that all misclassification
cases have the same cost. In real life, the misclassification cost of many
problems is related to the individual categories. For example, in medical
diagnosis, the cost of misdiagnosing a cancer patient as a flu patient is much
higher than misdiagnosing a flu patient as a cancer patient. It is noteworthy
that traditional cost-sensitive learning, which is different from the
proposed problem, does not consider the adversarial robustness of the model or
particularly protect a certain class against adversarial attacks.
This paper proposes a cost-sensitive adversarial learning model (CSA), which
can well resist the adversarial attacks against special classes. We point out
that the good robustness of the model is due to a special property of the
convolutional layer parameters in the model, which is reflected in LeNet
networks as the fact that the absolute values of most parameters in the
convolutional layer approach zero while the absolute values of a few
parameters are significantly larger than others. This property indicates that
the absolute values of a major part of weight parameters of the convolutional
layer in the model attain the minimum (zero), and the absolute values of a
minor part of weight parameters go to maximum. Thus we name it as Min-Max
property of weight parameters. Actually, the Min-Max property refers to a kind
of approximate sparseness of weight parameters in convolutional layers, which
reflects the essential of convolution from low level to high level features.
Furthermore, when the model makes predictions for the samples of a specific
class, only some of the parameters play a key role. Therefore, we explain why
the CSA model could improve the adversarial robustness of special classes: the
adversarial training brings the Min-Max property to the model parameters,
while the cost matrix can locate the parameters that play a decisive role in
the prediction results of the target class. When applying the cost matrix to
adversarial training, the model endows part of the positioned
parameters with a more pronounced Min-Max property than other parameters during
training, thus improving the adversarial robustness of the target class.
According to this explanation, we build a new learning model,
called cost-sensitive adversarial extension (CSE), that does not depend on
adversarial training. CSE is an end-to-end learning method that can improve
the adversarial robustness of the model without adversarial training.
An overview of the proposed model is shown in Fig. 1.
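The Min-Max property can be quantified with a simple sparsity statistic. The sketch below is our own illustrative measure (not the formulation used in the paper): it reports the fraction of weights whose magnitude is small relative to the largest one, which is close to 1 for a Min-Max-like weight tensor.

```python
import numpy as np

def min_max_fraction(w, tau=0.05):
    """Fraction of weights with |w| below tau * max|w|; a value near 1 means
    most weights are near zero while a few are much larger (Min-Max pattern)."""
    a = np.abs(np.asarray(w)).ravel()
    return float(np.mean(a < tau * a.max()))

# Synthetic conv-layer weights: many tiny values plus a handful of large ones.
rng = np.random.default_rng(0)
w = np.concatenate([rng.normal(0.0, 0.01, 990), rng.normal(0.0, 5.0, 10)])
frac = min_max_fraction(w)   # close to 1 for this Min-Max-like tensor
```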
The contributions of this paper are listed as follows:
* •
By incorporating cost-sensitivity into adversarial training, we
provide an algorithm (CSA), which can specifically protect an important class
and improve its adversarial robustness.
* •
We give a new explanation of the robustness of models against adversarial
attacks and propose a novel robust learning algorithm (CSE), which can make
the model resist adversarial examples without adversarial training. It is
noted that most state-of-the-art models for adversarial robustness
learning do need adversarial training.
* •
We also verify the validity of the CSA model and CSE model experimentally.
Compared with the traditional adversarial training models, our model has
better overall robustness and can effectively improve the adversarial
robustness of the protected class.
## II Related Work
Since the first discovery of adversarial examples [2] in deep learning, many
works [3, 6, 7, 12, 13, 4] have studied adversarial example generation.
An adversarial example is indistinguishable to humans but can
easily fool machines into misclassification. This imperceptibility, which
depends largely on human perception, is not directly measurable; instead, a
small perturbation bounded in the $l_{p}$-norm is usually used to evaluate
imperceptibility when generating adversarial examples. In this way, we can
represent the set of adversarial examples in the following form:
$A(\mathbf{x})=\\{\mathbf{x}^{\prime}\mid f(\mathbf{x}^{\prime})\neq
f(\mathbf{x}),\;\|\mathbf{x}^{\prime}-\mathbf{x}\|_{p}\leq\epsilon\\},$
where $A(\mathbf{x})$ is the set of adversarial examples, $\mathbf{x}$ is a
clean example, $\mathbf{x}^{\prime}$ is an adversarial example,
$\|\mathbf{x}^{\prime}-\mathbf{x}\|_{p}$ measures the
size of the perturbation, and $\epsilon$ is the maximum perturbation. Some
examples and corresponding adversarial examples are represented in Fig. 2.
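In code, membership in the set $A(\mathbf{x})$ is a two-part check (a minimal sketch; the classifier `f` and the numbers below are placeholders of our own):

```python
import numpy as np

def is_adversarial(x, x_adv, f, eps, p=np.inf):
    """x_adv is in A(x) iff it is misclassified relative to f(x) and lies
    inside the l_p ball of radius eps around x."""
    within_ball = np.linalg.norm((x_adv - x).ravel(), ord=p) <= eps
    return bool(within_ball and f(x_adv) != f(x))

# Placeholder linear classifier for illustration.
f = lambda v: int(v.sum() > 0)
x = np.array([0.2, -0.1])                 # f(x) = 1
x_adv = x + np.array([-0.15, -0.15])      # flips the sign of the sum
```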
### II-A Adversarial attack
According to whether we can know the model’s structure and weights, the
adversarial attacks can be divided into two types: white-box attack [7, 6, 19,
11] and black-box attack [20, 21]. In a white-box attack, the adversary knows
the model structure and weights, as in FGSM [7], DeepFool [11],
PGD [6], and CW [12]. FGSM is a gradient-based, one-step attack
method that updates along the sign of the pixel-wise gradient
to obtain adversarial examples. DeepFool looks for the minimum
distance from a clean sample to an adversarial example and defines this
distance as the model’s adversarial robustness. DeepFool takes advantage of a
linear approximation method to generate adversarial examples iteratively. CW
[12] is a non-gradient based adversarial attack, one of the most powerful
attacks. Carlini et al. [12] propose several objective functions to generate
adversarial examples, among which the most effective attack method can
effectively attack Distillation Defense [4]. From the perspective of robust
optimization, Madry et al. [6] study the model’s adversarial robustness,
formulate the adversarial training process of the model as a saddle point
problem, and use projected gradient descent to search for more aggressive
adversarial examples. They prove that the PGD method is the strongest attack
method among the first-order attack methods.
In a black-box attack, the attacker knows nothing about the model except its
output. Generally, one can attack such models by exploiting the
transferability of adversarial examples [20, 22] or by generating adversarial
examples with a GAN [21]. The transferability of adversarial examples means
that adversarial examples crafted for one network can attack neural networks
with different structures, even when the two networks are trained on different
datasets [22]. Liu et al. [20] study the transferability of adversarial
examples on large-scale network models and large-scale datasets. They find
that non-targeted adversarial examples transfer easily, while targeted
adversarial examples can hardly be transferred to the target label. A
GAN-based algorithm is proposed in [21] to generate adversarial examples; the
combination of the image and text domains allows the generated adversarial
examples to be more natural and interpretable, which helps in understanding
the black-box model’s local behaviour.
Figure 2: Clean examples (left) and adversarial examples (right) generated by
FGSM [7]. (a) Examples from MNIST and their corresponding adversarial examples
generated by FGSM with $\epsilon=0.3$. (b) Examples from CIFAR10 and their
corresponding adversarial examples generated by FGSM with $\epsilon=0.03$.
### II-B Adversarial training
Adversarial training is one of the most effective defense methods. Since the
model is required to classify the adversarial examples correctly, the
adversarial examples can be generated and added to the training set to retrain
the model. The framework of adversarial training is shown in Fig. 1. Many
defense works are based on this framework. For example, Goodfellow et al. [7]
use FGSM to generate adversarial examples to retrain the model. They point out
that the adversarial training procedure can be seen as minimizing the worst-
case error when data is perturbed by an adversary. After adversarial training,
the adversarial robustness of the model is significantly improved. Some
conclusions are drawn in [23]. However, both [7] and [23] conduct their
experiments only on the MNIST dataset. More complicated experiments on
ImageNet are conducted in [24], where both adversarial and clean examples are
used in each training step. They show that adversarial training increases deep
neural networks’ robustness against one-step attacks but does not help much
under iterative attacks. To search for a more powerful attack method, Madry et
al. [6] formulate adversarial training as a min-max optimization problem and
propose the PGD attack based on projected gradient descent, which they show to
be the strongest among first-order attack methods. A model adversarially
trained with PGD achieves high performance against adversarial examples [13].
Nowadays, many defense methods are based on the min-max optimization problem
formulated in [6]. Liu et al. [25] propose a novel single-step adversarial
training method that can resist both single-step and iterative adversarial
examples. Zhang et al. [26] propose TRADES, which minimizes a theoretically
principled upper bound on the robust error, and won first place in the NeurIPS
2018 Adversarial Vision Challenge. Moreover, some recent works [27, 28, 29,
30] propose to reduce the computation of adversarial examples; for example,
[30] proposes a linear regularization to address the prohibitive growth of the
computational cost of adversarial training as the model size and the number of
input dimensions increase. In addition, some works [3, 13] point out that many
defenses rely on obfuscated gradients. Athalye et al. [13] identify that
obfuscated gradients cause a false sense of security in defenses against
adversarial examples; exploiting this weakness, they successfully attack eight
defense methods proposed at ICLR 2018.
### II-C Cost-sensitive learning
Cost-sensitive learning [17] is a common approach to unequal misclassification
costs and learning from imbalanced data. The main idea is that, although a
model can improve its overall performance, some errors bring more serious
consequences than others; that is, different misclassifications may incur
different levels of cost. If an important requirement cannot be guaranteed by
only considering the overall performance, the consequences can be severe. For
example, in medical practice, the cost of misdiagnosing a person who has
cancer as healthy is much higher than the cost of misdiagnosing a healthy
person as a cancer patient. Since cancer patients are a minority in real life,
a cost-insensitive method tends to diagnose patients as healthy.
Cost-sensitive learning is an algorithm for solving such problems, and its
core element is the cost matrix. Taking medical diagnosis as an example, the
cost matrix may be written in the following form:
$C=\begin{bmatrix}0&100\\ 1&0\end{bmatrix}$
where the column represents the actual label and the row represents the
predicted label; for example, $C(1,2)$ represents the cost of diagnosing a
cancer patient as a healthy person.
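To make the matrix concrete, the following sketch (illustrative, not from the
paper) computes the expected misclassification cost of a model’s predictive
distribution against this matrix, with index 0 standing for "healthy" and
index 1 for "cancer":

```python
import numpy as np

# Rows: predicted label, columns: actual label (0 = healthy, 1 = cancer).
# C[0, 1] = 100 is the high cost of calling a cancer patient healthy.
C = np.array([[0.0, 100.0],
              [1.0,   0.0]])

def expected_cost(probs, actual):
    # Expected cost sum_i P(predict i | x) * C(i, actual) of the model's
    # predictive distribution `probs` when the true class is `actual`.
    return float(probs @ C[:, actual])
```

For a cancer patient on whom the model outputs 90% "healthy", the expected
cost is dominated by the 100-cost entry, which is exactly the asymmetry a
cost-sensitive learner exploits.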
Many existing works on cost-sensitive learning [17, 18, 31, 32, 33, 34, 35,
36] fall roughly into two categories. One is to adjust the sample distribution
[17, 32, 33, 35, 36], rescaling the frequencies of the categories according to
the cost of misclassification; a side effect is that changing the sample
distribution may affect the performance of the algorithm. The other is
meta-cost learning [18, 31, 34], which transforms a general classification
model into a cost-sensitive one. Kukar et al. [31] first applied
cost-sensitive learning to neural networks. Although there are many works on
cost-sensitive learning, there are few studies on cost-sensitive adversarial
learning. In [37], cost-sensitive learning is first applied to adversarial
training, and a min-max method for generating robust classifiers is proposed;
this method can directly minimize the misclassification cost via convex
optimization, but it is only applicable to linear classifiers. Asif et al.
[37] encode each adversarial transformation into a matrix called the cost
matrix, and then use the adversarially robust learning method in [16] to
propose a new objective function for training cost-sensitive robust
classifiers. Terzi et al. [15] propose the WPGD method and use it in
adversarial training; it provides a simple way to make the model
cost-sensitive and to control the accuracy-robustness balance.
## III Adversarial learning with cost-sensitive classes
Section III-A gives basic definitions for empirical risk minimization,
adversarial training, and cost-sensitive learning, and then formally states
the problem this paper solves. Section III-B details the CSA algorithm, which
combines cost-sensitive learning and adversarial training to improve the
model’s adversarial robustness. By analyzing the parameters of the
convolutional layers, we find some characteristics of models obtained by
adversarial training; for brevity, we refer to these characteristics as the
Min-Max property. Based on the Min-Max property, we propose a new algorithm,
called CSE, that can improve the model’s adversarial robustness without
adversarial training. The CSE algorithm is described in Section III-C.
### III-A Symbol definition and problem description
##### Empirical Risk Minimization
Given a training set with $N$ samples
$D=\{\mathbf{x}^{(i)},y^{(i)}\}^{N}_{i=1}\subset R^{n}\times Y$, where
$Y=\{0,1,\dots,m-1\}$ and $m$ is the number of classes, we can train a
classifier $f:R^{n}\rightarrow[0,1]^{m}$, where
$\hat{y}=\mathop{\arg\max}_{i}f_{i}(\mathbf{x})$ is the prediction of the
classifier on a sample $(\mathbf{x},y)\in D$. For the $m$-class classification
problem, a neural network is used as the classifier, and its training process
can be described as the following optimization problem:
$\min\mathop{E}_{(\mathbf{x},y)\backsim
D}[L(f(\mathbf{x}),y)]=\min\frac{1}{N}\sum_{i=1}^{N}L(f(\mathbf{x}^{(i)}),y^{(i)})$ (1)
where $L$ represents the loss function.
##### Adversarial Training
The set of adversarial examples for a sample $(\mathbf{x},y)$ is defined as
$B_{l}(\mathbf{x},y,f)=\{\mathbf{x}_{adv}\mid\|\mathbf{x}_{adv}-\mathbf{x}\|_{l}\leq\epsilon\ \text{and}\ \arg\max_{j}f_{j}(\mathbf{x}_{adv})\neq y\}$,
where $l$ denotes the norm used. The adversarial training process can then be
formulated as follows:
$\min\mathop{E}_{(\mathbf{x},y)\backsim D}[\max_{\mathbf{x}_{adv}\in
B_{l}}L(f(\mathbf{x}_{adv}),y)]$ (2)
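A minimal numpy sketch of one step of this min-max procedure on a toy logistic
model (illustrative names, with a one-step FGSM inner solver standing in for
the inner max; not the paper’s implementation):

```python
import numpy as np

def adv_train_step(w, x, y, epsilon, lr):
    # One step of Eq. 2 for a toy logistic model p(y=1|x) = sigmoid(w.x).
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    # Inner max (approximated by one FGSM step): for the logistic loss,
    # dL/dx = (p - y) * w, so push x along the sign of that gradient.
    x_adv = x + epsilon * np.sign((sigmoid(w @ x) - y) * w)
    # Outer min: gradient descent on the loss at the adversarial point,
    # using dL/dw = (p - y) * x_adv.
    return w - lr * (sigmoid(w @ x_adv) - y) * x_adv
```

The key design choice is that the weight gradient is evaluated at the
adversarial point $\mathbf{x}_{adv}$, not at the clean input.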
##### Cost-Sensitive Learning
The cost of classification is usually represented by a cost matrix $C$ as
below:
$C(i,j)=\begin{cases}e_{ij}\quad&i\neq j\\ 0\quad&i=j\end{cases}$
where $e_{ij}$ represents the cost of misclassifying an example from class $j$
into class $i$.
For any sample $\mathbf{x}$, choosing the predicted class $j$ optimally is
equivalent to minimizing the following loss function:
$L(\mathbf{x},j)=\sum_{i}P(i|\mathbf{x})C(i,j)$
That is, $L(\mathbf{x},j)$ is the expected cost of the model predicting class
$j$ for the given sample $\mathbf{x}$.
##### The Proposed Problem Formulation
Given a training set with $N$ samples,
$D=\{\mathbf{x}^{(i)},y^{(i)}\}^{N}_{i=1}$, suppose the $p$-th class is a
special class we need to protect. Our goal is to train a classifier
$f:X\rightarrow Y$ that correctly classifies any clean sample
$(\mathbf{x},y)\in D$ while having stronger robustness against adversarial
examples $\mathbf{x}_{adv}\in B_{l}(\mathbf{x},y_{p},f)$ under adversarial
attack. In short, while improving the overall robustness of the classifier
$f$, category $p$ is given priority so that its robustness is ensured.
### III-B Cost-Sensitive Adversarial Model (CSA)
This subsection introduces cost-sensitive learning and adversarial training,
and then proposes a cost-sensitive adversarial model (hereinafter referred to
as the CSA model). The CSA model can effectively improve the robustness of a
given class $p$ against adversarial attacks.
Note that we want to protect class $p$ and improve the robustness of the model
regarding class $p$. To solve this problem, two subquestions need to be
answered:
* •
How to improve the overall robustness of the model?
* •
How to prioritize the robustness of class $p$ in the model?
Intuitively, we can improve the overall robustness of the model through
adversarial training. Simultaneously, to particularly improve the robustness
of class $p$, we add a cost matrix under the framework of adversarial training
and use it to indicate which classes of the classifier need special
protection. Concretely, the cost of misclassifying class $p$ as non-$p$, or
non-$p$ as $p$, is higher than that of other misclassifications. We therefore
define the following cost matrix:
$C(i,j)=\begin{cases}0\quad&i=j,\\ c\quad&i=p\ \text{or}\ j=p,\\
1\quad&\text{otherwise}.\end{cases}$ (3)
where $p$ is the protected class, $j$ represents the true label, and $i$
represents the class predicted by the classifier. The cost is zero when the
classifier identifies the sample correctly, $c$ when it misclassifies class
$p$ as a non-$p$ class or a non-$p$ class as class $p$, and 1 otherwise. The
CSA training objective is then formulated as:
$\min\mathop{E}_{(\mathbf{x},y)\backsim D}[\max_{\mathbf{x}_{adv}\in
B_{l}}L(f(\mathbf{x}_{adv}),y)+\sum_{i}f_{i}(\mathbf{x}_{adv})C(i,y)]$ (4)
The first term in Eq. 4 is a traditional loss function, which can be
cross-entropy, squared error, or any existing loss function; it mainly
improves the overall performance of the model. The second term represents the
expected output cost of the model. When $c=1$, the CSA model degenerates to a
standard adversarial training model.
Eq. 4 is a non-convex problem, so finding a solution is challenging. A natural
interpretation of adversarial training is given in [2]: an attack method is a
solution to the inner max part of Eq. 4, while the outer min part works over
the whole data set, minimizing the model’s loss. For example, Eq. 5 is the
attack method called FGSM [7]:
$(\mathbf{x}_{adv})_{FGSM}=\mathbf{x}+\epsilon\,\mathrm{sign}(\nabla_{\mathbf{x}}L(f(\mathbf{x}),y))$ (5)
Then, Eq. 4 can be solved as follows:
$\min\mathop{E}_{(\mathbf{x},y)\backsim
D}[L(f(\mathbf{x}_{fgsm}),y)+\sum_{i}f_{i}(\mathbf{x}_{fgsm})C(i,y)]$ (6)
The sensitivity of the model to the protected category depends on the cost
matrix $C$: the cost of the model misclassifying the protected category is
higher than that of the other categories. When $C(i,j)=1$ for all $i\neq j$,
the model penalizes every misclassification equally. The implementation of the
CSA algorithm is presented in Algorithm 1.
Algorithm 1 CSA Algorithm
1: Initialize parameters of network $\mathbf{\theta}$ with random weights
2: Initialize dataset $D=\\{\mathbf{x}^{(i)},y^{(i)}\\}^{N}_{i=1}$
3: Initialize the batch size $B$ and the number of epochs $T$
4: for i=1 to $T$ do
5: Initialize cost-sensitive matrix $C$
6: Sample $B$ examples $Q_{1}=\\{(\mathbf{x}^{(i)},y^{(i)})\\}_{1}^{B}$
7: Generate adversarial examples
$Q_{2}=\{(\mathbf{x}_{adv},y)\mid(\mathbf{x},y)\in Q_{1}\}$
8: Stage 1: Update $\mathbf{\theta}$ with clean examples $({\mathbf{x}},y)\in
Q_{1}$
9: Update $\mathbf{\theta}$ by minimizing the loss function in Eq. (4)
10: Stage 2: Update $\mathbf{\theta}$ with adversarial examples
$({\mathbf{x}_{adv}},y)\in Q_{2}$
11: Update $\mathbf{\theta}$ by minimizing the loss function in Eq. (4)
12: end for
13: Output $\mathbf{\theta}^{*}$
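For reference, a small numpy sketch of the per-example loss minimized in steps
9 and 11 (the cost matrix of Eq. 3 plus the cost term of Eq. 4); the function
names here are illustrative, not from the paper:

```python
import numpy as np

def cost_matrix(m, p, c):
    # Eq. 3: 1 for ordinary confusions, c for any confusion involving
    # the protected class p, and 0 on the diagonal.
    C = np.ones((m, m))
    C[p, :] = c
    C[:, p] = c
    np.fill_diagonal(C, 0.0)
    return C

def csa_example_loss(probs, y, C):
    # Eq. 4 for one example: cross-entropy on the (possibly adversarial)
    # prediction plus the expected cost sum_i f_i(x) * C(i, y).
    return float(-np.log(probs[y]) + probs @ C[:, y])
```

With $c=1$ the cost term reduces to the total probability mass placed on wrong
classes, which is why the CSA model then behaves like standard adversarial
training.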
### III-C Cost-Sensitive Adversarial Extension (CSE)
This subsection first gives an explanation of the adversarial robustness of
the CSA model and then presents an extension of the CSA model, called the CSE
model. Compared with the CSA model, a significant feature of the CSE model is
that it does not need adversarial training to become robust.
#### III-C1 Empirical Observations on CSA
Figure 3: CSA & ADV: The parameter’s size in the first convolutional layer of
the LeNet network (Sort from left to right, top to bottom).
To further study the principle of the CSA model, we analyze the difference
between the CSA model and the ADV model based on the parameters of the
convolutional layer (ADV denotes the model trained with standard adversarial
learning; see Section IV for details). Fig. 3a is the parameter distribution
diagram of the first convolutional layer of the CSA and ADV models. Fig. 3b
arranges the convolution parameters of the CSA and ADV models from left to
right and top to bottom. Fig. 3a shows that, among the first-layer convolution
parameters of the CSA model, most values are relatively small, and the number
of values close to 0 is several times that of the ADV model. Fig. 3b shows
that, in both the CSA and ADV models, a subset of the parameters in the first
convolutional layer is relatively large, and these values are higher in the
CSA model than in the ADV model.
To further observe the influence of the parameter distribution of the
convolutional layer, we add an L2-norm term to the original CSA model,
expecting the absolute parameter values of the resulting CSA+L2 model to
become slightly smaller, and observe what happens to the model’s adversarial
robustness. Eq. 7 is the new optimization objective.
$\min\mathop{E}_{(\mathbf{x},y)\backsim
D}[L(f(\mathbf{x}_{fgsm}),y)+\sum_{i}f_{i}(\mathbf{x}_{fgsm})C(i,y)+\|\theta_{f}\|_{2}]$
(7)
Figure 4: CSA+L2 & CSA & ADV: The parameter’s size in the first convolutional
layer of the LeNet network (Sort from left to right, top to bottom).
In these experiments, the protective effect of the three models on category
$p$ is ordered as $ADV<CSA+L2<CSA$; Section IV gives the experimental details.
Fig. 4a shows the distribution of the first-convolutional-layer parameters of
the CSA, CSA+L2, and ADV models. From Figs. 4a and 4b, we find that the
parameters of the first convolutional layer have the following features:
* •
The number of parameter values approaching zero: $ADV<CSA+L2<CSA$;
* •
Among the many parameters, the absolute values of a small number are
relatively large, and for the parameter values of the three models at the
corresponding positions, we have $ADV<CSA+L2<CSA$.
Therefore, we give the following two propositions based on the above
experimental findings:
###### Proposition 1
The model’s adversarial robustness is possibly due to a Min-Max property of
its parameters, i.e., the absolute values of most parameters in the
convolutional layer approach zero while the absolute values of a few
parameters are significantly larger than the others.
###### Proposition 2
When the model predicts a sample, only a part of the parameters has a real
impact on the prediction result.
The above observations and analysis suggest why the CSA model can improve the
robustness of category $p$. Adversarial training gives the Min-Max property to
the parameters of the model, while the cost matrix locates the parameters that
play a decisive role in the prediction of category $p$. When adversarial
training is combined with the cost matrix, the locating effect of the cost
matrix drives these parameters toward the Min-Max property during training,
improving the robustness of category $p$.
#### III-C2 Extension of CSA model (CSE)
This section further mines the information of Proposition 1 and uses this
information to design the CSE model.
Proposition 1 suggests that if the model parameters have the Min-Max property,
then the model has stronger adversarial robustness. From the analysis in the
previous section, the Min-Max property has two features:
###### Feature 1
Most of the parameters in the convolutional layer of the model tend to zero.
###### Feature 2
In the convolutional layer parameters of the model, a few are relatively very
large.
Now, we analyze Features 1 and 2 in convolutional layers from the viewpoint of
uncertainty. Let $\mathbf{W_{conv}}$ represent the parameters of the
convolutional layers. According to [38, 39], the fuzziness of this vector can
be defined as
$\displaystyle\begin{split}Fuzziness(\mathbf{W_{conv}})=-\frac{1}{N}\sum_{i=1}^{N}[\rho_{i}\log\rho_{i}+\\
(1-\rho_{i})\log(1-\rho_{i})],\end{split}$ (8)
where $N$ is the size of the vector $\mathbf{W_{conv}}$ and $\rho_{i}$ is
defined as follows:
$\displaystyle\rho_{i}=\frac{2}{1+e^{|w_{i}|}}$ (9)
Eq. 8 is minimized when each $\rho_{i}$ tends to $0$ or $1$, and by Eq. 9,
$w_{i}\rightarrow 0\Rightarrow\rho_{i}\rightarrow 1,$
$|w_{i}|\rightarrow\infty\Rightarrow\rho_{i}\rightarrow 0.$
Therefore, we can train a model with Features 1 and 2 by minimizing Eq. 8.
Then, we have
$\displaystyle\min\mathop{E}_{(\mathbf{x},y)\backsim D}[$ $\displaystyle
L(f(\mathbf{x}),y)+\sum_{i}f_{i}(\mathbf{x})C(i,y)+$ (10) $\displaystyle\gamma
Fuzziness(\mathbf{W_{conv}})]$
where $\gamma$ is a hyperparameter. Compared with Eq. 4, Eq. 10 omits the
computation of the inner max but adds a regularization term
$Fuzziness(\mathbf{W_{conv}})$. The implementation of the CSE algorithm is
presented in Algorithm 2.
In a word, CSE is a novel method of defense against adversarial attacks. The
difference between the CSE model and traditional adversarial defense methods
(such as adversarial training) is that the CSE model does not need to compute
the inner max of the optimization formula, which greatly reduces the training
time of the model.
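The fuzziness regularizer of Eqs. 8-9 (step 9 of Algorithm 2) can be sketched
as follows in numpy. We assume the normalization $\rho_i=2/(1+e^{|w_i|})$, so
that $\rho_i\to 1$ as $w_i\to 0$ and $\rho_i\to 0$ as $|w_i|\to\infty$,
matching the limits stated after Eq. 9; the small `eps` guard on the
logarithms is our addition.

```python
import numpy as np

def fuzziness(w, eps=1e-12):
    # Eq. 8: average binary entropy of rho_i over the N weights.
    # Minimizing it pushes each rho_i toward 0 or 1, i.e. each weight
    # toward zero or toward a large magnitude (the Min-Max property).
    rho = 2.0 / (1.0 + np.exp(np.abs(w)))
    rho = np.clip(rho, eps, 1.0 - eps)  # guard the logarithms
    return float(np.mean(-(rho * np.log(rho)
                           + (1.0 - rho) * np.log(1.0 - rho))))
```

Weights near zero and weights of large magnitude both yield near-zero
fuzziness, while intermediate magnitudes are penalized.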
Algorithm 2 CSE Algorithm
1: Initialize parameters of network
$f(\mathbf{x};\mathbf{\theta}=[\mathbf{\theta}_{conv},\mathbf{\theta}_{other}])$
with random weights
2: Initialize dataset $D=\\{\mathbf{x}^{(i)},y^{(i)}\\}^{N}_{i=1}$
3: Initialize the batch size $B$, learning rate $\alpha$, number of epochs
$T$, and hyperparameter $\gamma$
4: for i=1 to $T$ do
5: Initialize cost-sensitive matrix $C$
6: Sample $B$ examples $Q=\\{(\mathbf{x}^{(i)},y^{(i)})\\}_{1}^{B}$
7: Calculate entropy loss $l_{1}=\sum_{1}^{B}{L(f(x),y)}$
8: Calculate cost-sensitive loss
$l_{2}=\sum_{1}^{B}{\sum_{i}{f_{i}(x)C(i,y)}}$
9: $l_{3}=\gamma Fuzziness(\mathbf{\theta}_{conv})$
10: Then, $L=l_{1}+l_{2}+l_{3}$
11: Update parameters of network
$\mathbf{\theta}\leftarrow\mathbf{\theta}-\alpha\nabla_{\mathbf{\theta}}L$
12: end for
13: Output $\mathbf{\theta}^{*}$
### III-D Min-Max property
The Min-Max property was first discovered by Shen et al. [38], who propose
that neural network models with the Min-Max property have stronger adversarial
robustness. The Min-Max property means that, after using a combination of L1
and L2 regularization in the loss function, training results in the weights of
the convolutional layers tending towards zero (the minimum) or a maximum
value. This paper considers more complex neural network architectures to
enhance the representation ability and advocates measuring the Min-Max
property by minimizing the fuzziness of the convolutional layers.
It is worth noting that the Min-Max property is similar to an off-center
technique proposed in metric learning [40]. The off-center technique achieves
a better representation in feature space based on the idea that, given the
center 0.5, the similarity after transformation is required to tend towards
zero (the minimum) or one (the maximum) if the similarity before the
transformation is less than or greater than 0.5, respectively. It is observed
that the parameters of LeNet are close to zero or far away from zero after
several rounds of adversarial training. This interesting observation confirms,
from the off-center viewpoint, that convolution can indeed summarize features
from low to high levels, while high-level features have stronger
representative abilities.
Training neural networks so that the convolutional-layer weights go to
extremes can be implemented in different ways, e.g., by minimizing the
uncertainty of the convolutional layer [38] or by adding a regularization term
to the loss function. Furthermore, a DNN with strong robustness against
adversarial examples may not have the Min-Max property. However, based on
observations from a considerable number of simulations, a DNN with the Min-Max
property usually has strong robustness against adversarial examples, where
adversarial robustness means a tolerance to adversarial noise.
We recall some results in [38] where two neural network models with simple
architectures are considered.
* •
$Model\#1$:
$L_{a}(\mathbf{x};\mathbf{W_{a}})=-\mathbf{y}^{T}\log(softmax(\mathbf{W_{a}}\mathbf{x}))$,
where $W_{a}=(a_{ij})$ and $a_{ij}\sim U(0,1)$.
* •
$Model\#2$:
$L_{b}(\mathbf{x};\mathbf{W_{b}})=-\mathbf{y}^{T}\log(softmax(\mathbf{W_{b}}\mathbf{x}))$,
where $W_{b}=(b_{ij})$ and $b_{ij}\sim B(1,p_{1})$. Here $B(1,p_{1})$ is the
Bernoulli distribution and $p_{1}$ is the probability that $b_{ij}=1$, with
$0<p_{1}\ll 1$.
Based on both models, a theorem regarding the uniform and Bernoulli
distributions of the weight parameters was given as follows:
###### Theorem 1
Suppose $\mathbf{x}$ is a vector and $\mathbf{x}_{i}$ is its $i$-th component.
Then, $\forall i$, the following inequality holds:
$\left\lvert\frac{\partial\,\mathbb{E}_{b_{ij}\sim
B(1,p_{1})}[L_{b}(\mathbf{x};\mathbf{W_{b}})]}{\partial\mathbf{x}_{i}}\right\rvert\leq\left\lvert\frac{\partial\,\mathbb{E}_{a_{ij}\sim
U(0,1)}[L_{a}(\mathbf{x};\mathbf{W_{a}})]}{\partial\mathbf{x}_{i}}\right\rvert$
According to this theorem, we have the following inequality:
$\displaystyle\begin{split}\lvert
L_{b}(\mathbf{x};\mathbf{W_{b}})-L_{b}(\mathbf{x}+\Delta;\mathbf{W_{b}})\rvert\leq\\
\lvert
L_{a}(\mathbf{x};\mathbf{W_{a}})-L_{a}(\mathbf{x}+\Delta;\mathbf{W_{a}})\rvert\end{split}$
(11)
which indicates that $Model\#2$ (with the Min-Max property) indeed has
stronger adversarial robustness than $Model\#1$ (without the Min-Max
property).
## IV Experiments
In this section, we implement our algorithms in Python 3.6 with Jupyter
Notebook. Using Advertorch [41], we adopt several adversarial attacks (FGSM
[7], PGD [6], CW [12], MIA [42], L2BIA [42] and LinfBIA [42]) to evaluate the
adversarial robustness of the models. We compare the performance of the
following four models:
* •
Standard model (STD): This model is trained with clean examples by adopting
cross-entropy as the loss function.
* •
The extension of the cost-sensitive adversarial model (CSE): a model trained
with clean examples by adopting the loss function in Eq. 10.
* •
Cost-sensitive adversarial model (CSA): This is an adversarial training model
trained with Eq. 4. The adversarial examples are generated by PGD.
* •
A model combining CSE and adversarial training (CSE+ADV): an adversarial
training model trained with Eq. 10. The adversarial examples are generated by
PGD.
We evaluate these models on three standard datasets: MNIST [43], CIFAR10 [44],
and CIFAR100 [44]:
* •
The MNIST dataset consists of 60,000 training samples and 10,000 test samples,
each of which is a $28\times 28$ pixel handwritten digit image.
* •
The CIFAR10 dataset consists of 60,000 $32\times 32$ colour images in 10
classes, with 6,000 images per class. There are 50,000 training images and
10,000 test images.
* •
CIFAR100 is just like the CIFAR10, except it has 100 classes containing 600
images each. There are 500 training images and 100 testing images per class.
Some examples are shown in Fig. 2.
In the experiments, all adversarial perturbations are limited to an
$l_{\infty}$-norm ball. Let $M_{p}$ denote the model $M$ that particularly
protects category $p$. Before training the network, samples from the datasets
are normalized. The preprocessing is described below:
$\mathbf{x}=\frac{\mathbf{x}-\mathbf{\mu}}{\mathbf{\sigma}}$
where the $\mathbf{\mu}$ and $\mathbf{\sigma}$ for the different datasets are
shown in Table I.
TABLE I: The mean value and standard deviation for MNIST, CIFAR10 and CIFAR100 | $\mathbf{\mu}$ | $\mathbf{\sigma}$
---|---|---
MNIST | [0.1307] | [0.3081]
CIFAR10 | [0.4914, 0.4822, 0.4465] | [0.2023, 0.1994, 0.2010]
CIFAR100 | [0.5070, 0.4865, 0.4409] | [0.2673, 0.2564, 0.2761]
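As a sketch, the preprocessing with the CIFAR10 row of Table I looks like this
in numpy (channel-first images with pixel values already scaled to [0, 1] are
assumed):

```python
import numpy as np

# Per-channel mean and standard deviation for CIFAR10 (Table I).
MU = np.array([0.4914, 0.4822, 0.4465]).reshape(3, 1, 1)
SIGMA = np.array([0.2023, 0.1994, 0.2010]).reshape(3, 1, 1)

def normalize(x):
    # x: a CHW image; broadcasting applies the per-channel statistics
    # to every pixel of that channel.
    return (x - MU) / SIGMA
```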
### IV-A MNIST
TABLE II: MNIST: the accuracy of the protected category under various attack methods. $p$ denotes the protected category, and $O_{i}$ denotes the accuracy of category $i$. | | | The protected class: $p$. Only showing the accuracy of the protected class.
---|---|---|---
Attacks | ADV Training | Models | $O_{0}$, p=0 | $O_{1}$, p=1 | $O_{2}$, p=2 | $O_{3}$, p=3 | $O_{4}$, p=4 | $O_{5}$, p=5 | $O_{6}$, p=6 | $O_{7}$, p=7 | $O_{8}$, p=8 | $O_{9}$, p=9
FGSM | No | STD | 0.8771 | 0.9491 | 0.7667 | 0.8401 | 0.7230 | 0.8573 | 0.8381 | 0.6053 | 0.8227 | 0.6830
CSE | 0.9448 | 0.9647 | 0.9109 | 0.7696 | 0.8849 | 0.8574 | 0.8424 | 0.9052 | 0.9048 | 0.7099
Yes | CSA | 0.8894 | 0.9679 | 0.9363 | 0.9099 | 0.9562 | 0.8827 | 0.9437 | 0.9057 | 0.9254 | 0.9287
CSE+ADV | 0.9566 | 0.9792 | 0.9426 | 0.9229 | 0.9013 | 0.9311 | 0.9642 | 0.9443 | 0.9406 | 0.8953
PGD | No | STD | 0.7157 | 0.7749 | 0.4019 | 0.6155 | 0.2730 | 0.5787 | 0.6061 | 0.1425 | 0.3767 | 0.2096
CSE | 0.8740 | 0.7954 | 0.7341 | 0.6293 | 0.7497 | 0.7284 | 0.5535 | 0.7187 | 0.5696 | 0.2096
Yes | CSA | 0.8336 | 0.9066 | 0.7978 | 0.8023 | 0.7398 | 0.7094 | 0.8664 | 0.7841 | 0.8023 | 0.7404
CSE+ADV | 0.8508 | 0.9553 | 0.9099 | 0.8667 | 0.9225 | 0.8961 | 0.9367 | 0.9137 | 0.8805 | 0.9069
CW | No | STD | 0.5837 | 0.9036 | 0.4403 | 0.6052 | 0.5519 | 0.6539 | 0.5500 | 0.5192 | 0.2413 | 0.3317
CSE | 0.7497 | 0.9118 | 0.6314 | 0.4859 | 0.6105 | 0.6614 | 0.4966 | 0.7477 | 0.4196 | 0.3317
Yes | CSA | 0.4524 | 0.6934 | 0.6194 | 0.6362 | 0.7059 | 0.6969 | 0.7442 | 0.7296 | 0.4248 | 0.4982
CSE+ADV | 0.7070 | 0.8541 | 0.7428 | 0.6548 | 0.5446 | 0.6247 | 0.7448 | 0.7926 | 0.5427 | 0.4781
MIA | No | STD | 0.7509 | 0.8342 | 0.4107 | 0.6717 | 0.3211 | 0.6280 | 0.6180 | 0.1795 | 0.4031 | 0.2629
CSE | 0.8455 | 0.5360 | 0.6835 | 0.5440 | 0.7150 | 0.6397 | 0.4971 | 0.6592 | 0.5031 | 0.2629
Yes | CSA | 0.8403 | 0.9171 | 0.8085 | 0.8271 | 0.7706 | 0.7162 | 0.8656 | 0.8089 | 0.8294 | 0.7680
CSE+ADV | 0.8771 | 0.9542 | 0.9049 | 0.8737 | 0.9305 | 0.8964 | 0.9462 | 0.9141 | 0.8811 | 0.9073
L2BIA | No | STD | 0.9926 | 0.9795 | 0.9647 | 0.9532 | 0.9467 | 0.9504 | 0.9672 | 0.9613 | 0.9477 | 0.9729
CSE | 0.9738 | 0.9927 | 0.9762 | 0.9859 | 0.9760 | 0.9789 | 0.9693 | 0.9810 | 0.9827 | 0.9729
Yes | CSA | 0.9834 | 0.9934 | 0.9918 | 0.9879 | 0.9935 | 0.9936 | 0.9800 | 0.9827 | 0.9884 | 0.9916
CSE+ADV | 0.9893 | 0.9935 | 0.9881 | 0.9854 | 0.9819 | 0.9852 | 0.9911 | 0.9872 | 0.9925 | 0.9813
LinfBIA | No | STD | 0.7485 | 0.7075 | 0.3370 | 0.6562 | 0.2400 | 0.5700 | 0.5774 | 0.1222 | 0.3371 | 0.1734
CSE | 0.8845 | 0.8519 | 0.7626 | 0.6330 | 0.7651 | 0.7375 | 0.5640 | 0.7542 | 0.5900 | 0.2010
Yes | CSA | 0.8375 | 0.8940 | 0.7771 | 0.8029 | 0.7197 | 0.7129 | 0.8531 | 0.7704 | 0.7952 | 0.7182
CSE+ADV | 0.8388 | 0.9533 | 0.9031 | 0.8670 | 0.9229 | 0.8868 | 0.9350 | 0.9142 | 0.8650 | 0.9097
For the MNIST dataset, we adopt LeNet [45] as our network structure. LeNet is
composed of two convolutional layers, two pooling layers, and two fully
connected layers. The two convolutional layers contain 5 and 16 convolution
kernels, respectively; each convolutional layer is followed by a pooling
layer, and the padding is set to 2. The two fully connected layers have 120
and 84 hidden nodes, respectively. The ReLU [46] function is adopted as the
activation function in the network. During the training stage, the batch size
is 256, the learning rate is 0.01, the number of epochs is 20, and the
momentum is 0.95. For the CSA and CSE+ADV models, the first 10 epochs are
trained with clean samples, and the last 10 epochs are trained with
adversarial examples generated by PGD. The maximum perturbation of the attack
is $\epsilon=0.3$, and we set the constant $c=10$.
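A quick sanity check of the feature-map sizes implied by this configuration
(padding 2 on both convolutions, as stated, with 2x2 pooling of stride 2
assumed):

```python
def conv_out(size, kernel, stride=1, padding=0):
    # Standard output-size formula for a convolution or pooling layer.
    return (size + 2 * padding - kernel) // stride + 1

s = conv_out(28, 5, padding=2)      # conv1 on 28x28 MNIST -> 28
s = conv_out(s, 2, stride=2)        # 2x2 pool             -> 14
s = conv_out(s, 5, padding=2)       # conv2                -> 14
s = conv_out(s, 2, stride=2)        # 2x2 pool             -> 7
flat = 16 * s * s                   # 16 feature maps flattened
```

Under these assumptions the first fully connected layer (120 nodes) receives a
16·7·7 = 784-dimensional vector.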
We conduct 10 experiments each for CSA, CSE, and CSE+ADV, protecting each
class (0-9) in turn. Table II reports the results of the various models on the
test set with $\epsilon=0.3$; we only show the accuracy of the particularly
protected class. For example, ”$O_{1},p=1$” represents the accuracy of class
”one” in a model trained while protecting class ”one”. The STD model, as a
baseline, does not adopt any protection strategy. The table shows that CSA,
CSE, and CSE+ADV all improve the adversarial robustness of the model over STD.
Comparing CSE with STD, we find that CSE achieves high adversarial robustness
even though neither model uses adversarial training. When adversarial training
is added, it is worth noting that CSE+ADV achieves better performance than CSA
in most scenarios. These results experimentally validate that the Min-Max
property can improve a model’s adversarial robustness, and that reducing the
fuzziness of the parameters in the convolutional layers is an effective way to
obtain it.
Figs. 5a, 5b, and 5c show how the loss function, the fuzziness of the
convolutional-layer parameters, and the robustness evolve. The protected label
is set to the first class. We find that CSE is harder to converge than STD, so
training CSE takes more time. However, as shown in Fig. 5b, the fuzziness of
the convolutional layer decreases gradually in CSE as the number of iterations
increases, while remaining stable in STD. This demonstrates that our method
effectively reduces the uncertainty of the parameters, giving the model the
Min-Max property; this is reflected in the stronger adversarial robustness of
CSE compared with STD shown in Fig. 5c. Besides, we note that the fuzziness of
the convolutional layer in both STD and CSE is around 0.7 at the beginning,
because the parameters are randomly initialized in the same way.
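The fuzziness statistic plotted in Fig. 5b can be illustrated with the following sketch. The sigmoid membership and the linear index of fuzziness used here are one common choice and may differ in detail from the exact measure of [38]:

```python
import math

def fuzziness(weights):
    """Linear index of fuzziness of a list of parameters (a sketch).

    Each weight is squashed to a membership value mu in [0, 1] by a
    sigmoid; the index is 1.0 when every mu sits at 0.5 (maximally
    uncertain) and 0.0 when every mu is crisp at 0 or 1, i.e., when the
    weights exhibit the Min-Max property the paper associates with
    adversarial robustness.
    """
    mu = [1.0 / (1.0 + math.exp(-w)) for w in weights]
    return 2.0 / len(mu) * sum(min(m, 1.0 - m) for m in mu)

# Weights near zero are maximally fuzzy; large-magnitude weights are not.
assert fuzziness([0.0, 0.0]) == 1.0
assert fuzziness([8.0, -8.0]) < 0.01
```

Under any such measure, randomly initialized small weights start with high fuzziness, consistent with the observed initial value of about 0.7.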
(a) The loss on MNIST.
(b) The fuzziness of first convolutional layer on MNIST.
(c) The robustness on MNIST. Evaluation under PGD.
(d) The loss on CIFAR10.
(e) The fuzziness of first convolutional layer on CIFAR10.
(f) The robustness on CIFAR10. Evaluation under PGD.
Figure 5: The changing trend of the loss function, fuzziness and robustness on
MNIST and CIFAR10.
### IV-B CIFAR10
TABLE III: CIFAR10: the accuracy of the protected class under various attack methods. $p$ denotes the protected class and $O_{i}$ denotes the accuracy of class $i$; only the accuracy of the protected class is shown.
Attacks | ADV Training | Models | $O_{0}$, p=0 | $O_{1}$, p=1 | $O_{2}$, p=2 | $O_{3}$, p=3 | $O_{4}$, p=4 | $O_{5}$, p=5 | $O_{6}$, p=6 | $O_{7}$, p=7 | $O_{8}$, p=8 | $O_{9}$, p=9
---|---|---|---|---|---|---|---|---|---|---|---|---
FGSM | No | STD | 0.3093 | 0.3258 | 0.1539 | 0.0697 | 0.1169 | 0.1561 | 0.2877 | 0.3170 | 0.3320 | 0.2823
CSE | 0.4269 | 0.4399 | 0.2212 | 0.0862 | 0.1325 | 0.2136 | 0.3580 | 0.4029 | 0.4566 | 0.3748
Yes | CSA | 0.5671 | 0.6010 | 0.3371 | 0.2204 | 0.3082 | 0.3644 | 0.5726 | 0.5746 | 0.6173 | 0.5467
CSE+ADV | 0.5582 | 0.5904 | 0.3423 | 0.1982 | 0.2817 | 0.3539 | 0.5937 | 0.5497 | 0.6284 | 0.5260
PGD | No | STD | 0.2190 | 0.2584 | 0.0918 | 0.0270 | 0.0439 | 0.1065 | 0.1804 | 0.2586 | 0.2424 | 0.2144
CSE | 0.3628 | 0.4059 | 0.1975 | 0.0682 | 0.0810 | 0.1770 | 0.3118 | 0.3881 | 0.4039 | 0.3561
Yes | CSA | 0.5485 | 0.6015 | 0.3344 | 0.2154 | 0.2883 | 0.3531 | 0.5699 | 0.5493 | 0.6123 | 0.5614
CSE+ADV | 0.5677 | 0.5602 | 0.3186 | 0.1904 | 0.2770 | 0.3431 | 0.5850 | 0.5509 | 0.6115 | 0.5104
CW | No | STD | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
CSE | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
Yes | CSA | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
CSE+ADV | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
MIA | No | STD | 0.2041 | 0.2546 | 0.0988 | 0.0322 | 0.0404 | 0.1049 | 0.1752 | 0.2446 | 0.2255 | 0.2051
CSE | 0.3702 | 0.4069 | 0.1961 | 0.0618 | 0.0756 | 0.1815 | 0.3011 | 0.3703 | 0.3837 | 0.3463
Yes | CSA | 0.5449 | 0.5917 | 0.3260 | 0.1997 | 0.3000 | 0.3370 | 0.5492 | 0.5455 | 0.6236 | 0.5554
CSE+ADV | 0.5636 | 0.5791 | 0.3101 | 0.1849 | 0.2851 | 0.3367 | 0.5845 | 0.5501 | 0.6105 | 0.5116
L2BIA | No | STD | 0.6796 | 0.7365 | 0.4834 | 0.4313 | 0.5427 | 0.4984 | 0.6787 | 0.6666 | 0.7128 | 0.6816
CSE | 0.7044 | 0.7411 | 0.4936 | 0.3861 | 0.5114 | 0.4915 | 0.6854 | 0.6891 | 0.7761 | 0.6783
Yes | CSA | 0.7020 | 0.7439 | 0.4935 | 0.3987 | 0.5277 | 0.5175 | 0.7424 | 0.6842 | 0.7612 | 0.7137
CSE+ADV | 0.7016 | 0.7445 | 0.4893 | 0.3505 | 0.5079 | 0.5184 | 0.7641 | 0.6845 | 0.7620 | 0.6875
LinfBIA | No | STD | 0.2732 | 0.3134 | 0.1448 | 0.0470 | 0.0956 | 0.1449 | 0.2326 | 0.2938 | 0.3195 | 0.2508
CSE | 0.4128 | 0.4447 | 0.2070 | 0.0819 | 0.1175 | 0.2010 | 0.3329 | 0.3940 | 0.4555 | 0.3709
Yes | CSA | 0.5407 | 0.5871 | 0.3393 | 0.2026 | 0.3009 | 0.3536 | 0.5586 | 0.5548 | 0.5995 | 0.5586
CSE+ADV | 0.5699 | 0.5830 | 0.3425 | 0.1870 | 0.2902 | 0.3556 | 0.5903 | 0.5504 | 0.6289 | 0.5136
In the CIFAR10 experiments, the neural network is still a LeNet. The
difference from the network used on MNIST is that the first convolutional
layer has three input channels and no padding is needed. The hyperparameters
are as follows: 50 epochs, a batch size of 256, the Adam optimizer, and a
learning rate of 0.01 that is halved every 10 epochs. When training CSA and
CSE+ADV, both clean and adversarial examples are used as inputs in every
iteration; the adversarial examples are generated by PGD.
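The step schedule above (learning rate halved every 10 epochs) can be written as a one-line function; counting epochs from zero is our assumption:

```python
def lr_at(epoch, base_lr=0.01, drop_every=10):
    """Step learning-rate schedule: halve the rate every `drop_every`
    epochs (epochs counted from 0)."""
    return base_lr * 0.5 ** (epoch // drop_every)

# The first 10 epochs run at 0.01, the next 10 at 0.005, and so on.
```

The same function with `base_lr=0.1` and `drop_every=60` describes the CIFAR100 schedule used later.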
To test the models' adversarial robustness, assuming the adversarial
perturbation is limited to an $l_{\infty}$-norm ball with $\epsilon=8/255$, we
evaluate their performance under the FGSM, PGD, CW, MIA, L2BIA, and LinfBIA
attack methods. The results are shown in Table III. As on MNIST, we only show
the accuracy of the protected class in each scenario. Under all attack methods
except CW, CSE is more adversarially robust than STD, and the CSA and CSE+ADV
models are much more robust than STD.
However, it is worth noting that all the models are vulnerable to adversarial
examples generated by CW. CW is a strong attack method that does not depend on
gradients. It has been found that adversarial training can cause gradient
obfuscation, which gives one a false sense that the model is adversarially
robust [13]. We therefore conclude that the Min-Max property may cause
gradient obfuscation in another way.
The CIFAR10 dataset is much more complex than MNIST, so the number of
iterations required is much larger, as shown in Figs. 5a and 5d. Given enough
iterations, both STD and CSE eventually converge. This issue is not apparent
when training CSE on MNIST, which does not require many iterations. Similar to
the MNIST experiments, the fuzziness of the convolutional layer decreases
gradually as the number of iterations increases, i.e., the Min-Max property of
the convolution becomes more pronounced, and the model becomes more
adversarially robust.
### IV-C CIFAR100
In this section, we test the proposed methods on a larger, more realistic
dataset. On CIFAR100, the simple LeNet network is no longer adequate, so
ResNet18 is adopted as the model architecture. The hyperparameters are as
follows: the learning rate is 0.1 and is halved every 60 epochs; there are 180
epochs; the batch size is 256; the optimizer is SGD; and the constant in the
cost matrix is 10. When training CSA and CSE+ADV, clean and adversarial
examples are used in alternation, with all adversarial examples generated by
PGD.
Since CIFAR100 has 100 classes, training $100\times 3$ models (CSE, CSA, and
CSE+ADV) to protect each class once would be prohibitively time-consuming.
Therefore, as shown in Table IV, we only report partial results. We find that
our methods improve the adversarial robustness only insignificantly on deeper
networks and more complicated datasets. To find the reason for this failure,
we further explore the fuzziness of the convolutional layers in ResNet18.
Fig. 6 shows the fuzziness of the convolutional-layer parameters in ResNet18
on CIFAR100; for the CSE model, the protected label is 21. We discover that
the fuzziness of the convolutional-layer parameters hardly decreases on
CIFAR100: the fuzziness of each convolutional layer in the STD model is almost
identical to that of the CSE model. We attribute this to the depth of the
network; the proposed optimization algorithm cannot adequately control the
fuzziness of the convolutional-layer parameters in a deep network, so the
method fails to improve the model's robustness. In summary, in shallow neural
networks we can add a regular term ($Fuzziness(\mathbf{W_{conv}})$) to control
the fuzziness of the convolutional-layer parameters, but this approach may not
be effective for deep networks. If some technique can make the
convolutional-layer parameters of a deep network follow the Min-Max property,
we speculate that the network should likewise be more adversarially robust.
TABLE IV: CIFAR100: the accuracy of the protected class under various attack methods. $p$ denotes the protected class and $O_{i}$ denotes the accuracy of class $i$.
Attacks | ADV Training | Models | $O_{9}$, p=9 | $O_{21}$, p=21 | $O_{36}$, p=36
---|---|---|---|---|---
FGSM | No | Std | 0.0262 | 0.0000 | 0.0111
CSE | 0.0200 | 0.0000 | 0.0000
Yes | CSA | 0.0135 | 0.0000 | 0.0000
CSE+Adv | 0.0251 | 0.0068 | 0.0110
PGD | No | Std | 0.0000 | 0.0000 | 0.0042
CSE | 0.0043 | 0.0000 | 0.0000
Yes | CSA | 0.0241 | 0.0000 | 0.0000
CSE+Adv | 0.0211 | 0.0000 | 0.0044
CW | No | Std | 0.0381 | 0.0130 | 0.0242
CSE | 0.0000 | 0.0000 | 0.0000
Yes | CSA | 0.0405 | 0.0000 | 0.0000
CSE+Adv | 0.0135 | 0.0000 | 0.0000
MIA | No | Std | 0.0044 | 0.0000 | 0.0035
CSE | 0.0042 | 0.0048 | 0.0072
Yes | CSA | 0.0203 | 0.0000 | 0.0035
CSE+Adv | 0.0047 | 0.0000 | 0.0108
L2BIA | No | Std | 0.0313 | 0.0141 | 0.0164
CSE | 0.0063 | 0.0000 | 0.0000
Yes | CSA | 0.0197 | 0.0000 | 0.0000
CSE+Adv | 0.0227 | 0.0000 | 0.0188
LinfBIA | No | Std | 0.0000 | 0.0000 | 0.0072
CSE | 0.0000 | 0.0000 | 0.0000
Yes | CSA | 0.0348 | 0.0000 | 0.0000
CSE+Adv | 0.0190 | 0.0000 | 0.0067
Figure 6: The fuzziness of convolutional layers of parameters in ResNet18.
## V Conclusion
A great deal of work has been done to improve the overall adversarial
robustness of a model. However, in some specific problems, the cost of an
adversarial attack often varies greatly across classes. We therefore consider
protecting the adversarial robustness of specific classes: while improving the
overall adversarial robustness of a model, we prioritize the robustness of
specific classes. We show experimentally that cost-sensitive training can
effectively protect specific classes from adversarial attacks. Moreover, we
find that the robustness of the model is closely related to the distribution
of the convolutional-layer parameters in LeNet networks: the more pronounced
the Min-Max property of the convolutional-layer parameters is, the stronger
the adversarial robustness of the model.
There are cases in which our method fails to improve the adversarial
robustness of the model, namely when we apply it to a more complicated
dataset; such cases may require deeper network architectures. How to
effectively control the uncertainty of the convolutional-layer parameters in a
deep network, which has a significant impact on adversarial robustness,
remains to be studied further.
However, this failure on CIFAR100 does not negate our theoretical results: the
Min-Max property can still be verified on small datasets. How to control the
Min-Max property of the convolutional layers in a large-scale network
therefore remains a challenging task.
## Acknowledgment
This work was supported in part by Key Project of Natural Science Foundation
of China (Grant 61732011), in part by the National Natural Science Foundation
of China (Grants 61976141, 61772344 and 61732011), in part by the Natural
Science Foundation of SZU (827-000230), and in part by the Interdisciplinary
Innovation Team of Shenzhen University.
## References
* [1] M. Cheng, Q. Lei, P.-Y. Chen, I. Dhillon, and C.-J. Hsieh, “Cat: Customized adversarial training for improved robustness,” _arXiv preprint arXiv:2002.06789_ , 2020.
* [2] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, “Intriguing properties of neural networks,” _arXiv preprint arXiv:1312.6199_ , 2013.
* [3] A. Kurakin, D. Boneh, F. Tramèr, I. Goodfellow, N. Papernot, and P. McDaniel, “Ensemble adversarial training: Attacks and defenses,” 2018.
* [4] N. Papernot, P. McDaniel, X. Wu, S. Jha, and A. Swami, “Distillation as a defense to adversarial perturbations against deep neural networks,” in _2016 IEEE Symposium on Security and Privacy (SP)_. IEEE, 2016, pp. 582–597.
* [5] A. Raghunathan, J. Steinhardt, and P. Liang, “Certified defenses against adversarial examples,” _arXiv preprint arXiv:1801.09344_ , 2018.
* [6] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, “Towards deep learning models resistant to adversarial attacks,” _arXiv preprint arXiv:1706.06083_ , 2017.
* [7] I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” _arXiv preprint arXiv:1412.6572_ , 2014.
* [8] H. Zhang, Y. Yu, J. Jiao, E. P. Xing, L. E. Ghaoui, and M. I. Jordan, “Theoretically principled trade-off between robustness and accuracy,” _arXiv preprint arXiv:1901.08573_ , 2019.
* [9] A. Athalye, N. Carlini, and D. A. Wagner, “Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples,” in _Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018_ , ser. Proceedings of Machine Learning Research, J. G. Dy and A. Krause, Eds., vol. 80. PMLR, 2018, pp. 274–283. [Online]. Available: http://proceedings.mlr.press/v80/athalye18a.html
* [10] N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami, “The limitations of deep learning in adversarial settings,” in _2016 IEEE European symposium on security and privacy (EuroS &P)_. IEEE, 2016, pp. 372–387.
* [11] S.-M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard, “Deepfool: a simple and accurate method to fool deep neural networks,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2016, pp. 2574–2582.
* [12] N. Carlini and D. Wagner, “Towards evaluating the robustness of neural networks,” in _2017 ieee symposium on security and privacy (sp)_. IEEE, 2017, pp. 39–57.
* [13] A. Athalye, N. Carlini, and D. Wagner, “Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples,” _arXiv preprint arXiv:1802.00420_ , 2018.
* [14] X. Zhang and D. Evans, “Cost-sensitive robustness against adversarial examples,” _arXiv preprint arXiv:1810.09225_ , 2018.
* [15] M. Terzi, G. A. Susto, and P. Chaudhari, “Directional adversarial training for cost sensitive deep learning classification applications,” _Engineering Applications of Artificial Intelligence_ , vol. 91, p. 103550, 2020.
* [16] E. Wong and J. Z. Kolter, “Provable defenses against adversarial examples via the convex outer adversarial polytope,” _arXiv preprint arXiv:1711.00851_ , 2017.
* [17] C. Elkan, “The foundations of cost-sensitive learning,” in _International joint conference on artificial intelligence_ , vol. 17, no. 1. Lawrence Erlbaum Associates Ltd, 2001, pp. 973–978.
* [18] Z.-H. Zhou and X.-Y. Liu, “On multi-class cost-sensitive learning,” _Computational Intelligence_ , vol. 26, no. 3, pp. 232–257, 2010.
* [19] M. Abbasi and C. Gagné, “Robustness to adversarial examples through an ensemble of specialists,” _arXiv preprint arXiv:1702.06856_ , 2017.
* [20] Y. Liu, X. Chen, C. Liu, and D. Song, “Delving into transferable adversarial examples and black-box attacks,” _arXiv preprint arXiv:1611.02770_ , 2016.
* [21] Z. Zhao, D. Dua, and S. Singh, “Generating natural adversarial examples,” _arXiv preprint arXiv:1710.11342_ , 2017.
* [22] N. Papernot, P. McDaniel, and I. Goodfellow, “Transferability in machine learning: from phenomena to black-box attacks using adversarial samples,” _arXiv preprint arXiv:1605.07277_ , 2016.
* [23] R. Huang, B. Xu, D. Schuurmans, and C. Szepesvári, “Learning with a strong adversary,” _arXiv preprint arXiv:1511.03034_ , 2015.
* [24] A. Kurakin, I. Goodfellow, and S. Bengio, “Adversarial machine learning at scale,” _arXiv preprint arXiv:1611.01236_ , 2016.
* [25] G. Liu, I. Khalil, and A. Khreishah, “Using single-step adversarial training to defend iterative adversarial examples,” _arXiv preprint arXiv:2002.09632_ , 2020.
* [26] H. Zhang, Y. Yu, J. Jiao, E. P. Xing, L. E. Ghaoui, and M. I. Jordan, “Theoretically principled trade-off between robustness and accuracy,” _arXiv preprint arXiv:1901.08573_ , 2019.
* [27] A. Shafahi, M. Najibi, M. A. Ghiasi, Z. Xu, J. Dickerson, C. Studer, L. S. Davis, G. Taylor, and T. Goldstein, “Adversarial training for free!” in _Advances in Neural Information Processing Systems_ , 2019, pp. 3358–3369.
* [28] D. Zhang, T. Zhang, Y. Lu, Z. Zhu, and B. Dong, “You only propagate once: Accelerating adversarial training via maximal principle,” in _Advances in Neural Information Processing Systems_ , 2019, pp. 227–238.
* [29] E. Wong, L. Rice, and J. Z. Kolter, “Fast is better than free: Revisiting adversarial training,” _arXiv preprint arXiv:2001.03994_ , 2020.
* [30] C. Qin, J. Martens, S. Gowal, D. Krishnan, K. Dvijotham, A. Fawzi, S. De, R. Stanforth, and P. Kohli, “Adversarial robustness through local linearization,” in _Advances in Neural Information Processing Systems_ , 2019, pp. 13 847–13 856.
* [31] M. Kukar, I. Kononenko _et al._ , “Cost-sensitive learning with neural networks.” in _ECAI_ , vol. 98, 1998, pp. 445–449.
* [32] N. Abe, B. Zadrozny, and J. Langford, “An iterative method for multi-class cost-sensitive learning,” in _Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining_ , 2004, pp. 3–11.
* [33] L. Jiang, C. Li, and S. Wang, “Cost-sensitive bayesian network classifiers,” _Pattern Recognition Letters_ , vol. 45, pp. 211–216, 2014.
* [34] P. Domingos, “Metacost: A general method for making classifiers cost-sensitive,” in _Proceedings of the fifth ACM SIGKDD international conference on Knowledge discovery and data mining_ , 1999, pp. 155–164.
* [35] W.-Y. Loh, “Fifty years of classification and regression trees,” _International Statistical Review_ , vol. 82, no. 3, pp. 329–348, 2014.
* [36] B. Zadrozny, J. Langford, and N. Abe, “Cost-sensitive learning by cost-proportionate example weighting,” in _Third IEEE international conference on data mining_. IEEE, 2003, pp. 435–442.
* [37] K. Asif, W. Xing, S. Behpour, and B. D. Ziebart, “Adversarial cost-sensitive classification.” in _UAI_ , 2015, pp. 92–101.
* [38] H. Shen, S. Chen, and R. Wang, “A study on the uncertainty of convolutional layers in deep neural networks,” _CoRR_ , vol. abs/2011.13719, 2020. [Online]. Available: https://arxiv.org/abs/2011.13719
* [39] J. Basak, R. K. De, and S. K. Pal, “Unsupervised feature selection using a neuro-fuzzy approach,” _Pattern Recognition Letters_ , vol. 19, no. 11, pp. 997–1006, 1998.
* [40] D. Yan, X. Zhou, X. Wang, and R. Wang, “An off-center technique: Learning a feature transformation to improve the performance of clustering and classification,” _Inf. Sci._ , vol. 503, pp. 635–651, 2019. [Online]. Available: https://doi.org/10.1016/j.ins.2019.06.068
* [41] G. W. Ding, L. Wang, and X. Jin, “AdverTorch v0.1: An adversarial robustness toolbox based on pytorch,” _arXiv preprint arXiv:1902.07623_ , 2019.
* [42] Y. Dong, F. Liao, T. Pang, H. Su, J. Zhu, X. Hu, and J. Li, “Boosting adversarial attacks with momentum,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2018, pp. 9185–9193.
* [43] L. Deng, “The mnist database of handwritten digit images for machine learning research [best of the web],” _IEEE Signal Processing Magazine_ , vol. 29, no. 6, pp. 141–142, 2012.
* [44] A. Krizhevsky, G. Hinton _et al._ , “Learning multiple layers of features from tiny images,” 2009.
* [45] S. Haykin and B. Kosko, _Gradient-Based Learning Applied to Document Recognition_ , 2001, pp. 306–351.
* [46] X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier neural networks,” in _Proceedings of the fourteenth international conference on artificial intelligence and statistics_ , 2011, pp. 315–323.
# The Evolutionary Pathways of Disk-, Bulge-, and Halo-dominated Galaxies
Min Du Kavli Institute for Astronomy and Astrophysics, Peking University,
Beijing 100871, China Luis C. Ho Kavli Institute for Astronomy and
Astrophysics, Peking University, Beijing 100871, China Department of
Astronomy, School of Physics, Peking University, Beijing 100871, China Victor
P. Debattista Jeremiah Horrocks Institute, University of Central Lancashire,
Preston PR1 2HE, UK Annalisa Pillepich Max-Planck-Institut f$\ddot{u}$r
Astronomie, K$\ddot{o}$nigstuhl 17, D-69117 Heidelberg, Germany Dylan Nelson
Max-Planck-Institut f$\ddot{u}$r Astrophysik, Karl-Schwarzschild-Str. 1, 85741
Garching, Germany Lars Hernquist Harvard–Smithsonian Center for
Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA Rainer Weinberger
Harvard–Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA
02138, USA
###### Abstract
We have recently developed a method to kinematically decompose simulated
galaxies that helps to break the degeneracy among galactic stellar structures.
For example, the concept of stellar halos is generalized to weakly-rotating
structures that are composed of loosely bound stars, which can hence be
associated with both disk- and elliptical-type morphologies. By applying this
method to about 500 central galaxies with stellar mass $10^{10-11.5}\
M_{\odot}$ from the TNG50 simulation at $z=0$, we identify three broadly-
defined types of galaxies: ones dominated by disk, by bulge, or by stellar
halo structures. We then use the simulation to infer the underlying connection
between the growth of structures and physical processes over cosmic time.
Tracing galaxies back in time, we recognize three fundamental regimes: an
early phase of evolution ($z\gtrsim 2$), and internal and external (mainly
mergers) processes that act at later times. We find that disk- and bulge-
dominated galaxies are not significantly affected by mergers since $z\sim 2$;
the difference in their present-day structures originates from two distinct
evolutionary pathways, extended vs. compact, that are likely determined by
their parent dark matter halos in the early phase; i.e., nature. On the other
hand, normal elliptical galaxies are typically halo-dominated, forming by
external processes (e.g. major mergers) in the later phase, i.e., nurture.
This picture challenges the general idea that elliptical galaxies are the same
objects as classical bulges in disk galaxies. In observations, both bulge- and
halo-dominated galaxies are likely to be classified as early-type galaxies
with compact morphology and quiescent star formation. However, here we find
them to have very different evolutionary histories.
Galaxy structure (622); Galaxy evolution (594); Galaxy formation (595); Galaxy
bulges (578); Spiral galaxies (1560); Star formation (1569)
## 1 Introduction
An accurate decomposition and classification of galaxies is required to
uncover the causal link between galaxies' formation histories and their
properties. In observations, galaxies are generally decomposed using the
limited information provided by their morphologies and kinematics. The
presence of spirals, the bulge-to-total ratio (e.g., the Hubble (1936)
sequence), and rotation are widely used. These parameters permit us to infer
certain aspects of a galaxy's evolution over cosmic time, but the connection
between these quantities and the galaxy formation history is still quite
uncertain. For example, early-type galaxies
(ETGs) exhibit featureless morphologies. For many years, this simple
appearance was thought to reflect a straightforward formation via mergers that
erases the diversity in both morphology and kinematics. More recent
observations have shown that, while in many ways the structure of ETGs is
intrinsically simple, there is a rich diversity of properties. It is well
established now that ETGs can be separated into fast and slow rotators by
their kinematics, thanks to the development of the integral-field unit (IFU)
technique (e.g., Emsellem et al., 2007, 2011; Cappellari et al., 2011a, b),
which indicates very different formation and evolution histories. By applying
the orbit-superposition Schwarzschild method (e.g., Schwarzschild, 1979;
Valluri et al., 2004; van den Bosch et al., 2008) to reconstruct stellar
orbits, Zhu et al. (2018a) was able to make remarkable progress in decomposing
observed galaxies. Zhu et al. (2018b, a) showed that the kinematic structures
they found exhibit several differences from the general expectation of
morphological decompositions. Kinematics help to break the degeneracy in the
morphology of different stellar structures to a certain degree. However, it is
still very challenging, if not impossible, to decompose galaxies accurately
from observations, as galaxy formation histories are deeply entangled with
complex physical processes, while the information observations can provide is
limited.
A significant degeneracy exists between bulges and stellar halos defined
traditionally by morphological methods (Du et al., 2020), making several
interpretations difficult. Within a $\Lambda$CDM hierarchical growth of
structure scenario, there is no doubt that the formation of stellar halos is
associated with mergers that disperse stars into large volumes or with the
stellar stripping of low-mass orbiting satellites. Generally, however, bulges
are also considered to be correlated with mergers (e.g. Toomre, 1977; Aguerri
et al., 2001; Hopkins et al., 2010; Wellons et al., 2015), i.e., external
processes. In fact, a variety of internal processes that conspire to produce
gas-rich inflows are possibly also important in bulge formation (Dekel &
Burkert, 2014; Zolotov et al., 2015; Tacchella et al., 2016). Such processes
include disk instabilities, clump migration, and misaligned gas streams (e.g.
Dekel et al., 2009; Parry et al., 2009; Bournaud et al., 2011; Sales et al.,
2012; Ceverino et al., 2015; Wellons et al., 2015; Park et al., 2019), which
are closely associated with the underlying dark matter halos that galaxies
inhabit. In such a picture, galaxy sizes and angular momenta are expected to
be controlled by halo angular momenta, as gas cools out of gaseous halos that
are initially coupled with their parent dark matter haloes (Mo et al., 1998;
Bullock et al., 2001; Zolotov et al., 2015).
A complete theory of galaxy formation can essentially only be achieved with numerical
simulations and, in particular, with cosmological simulations that self-
consistently evolve the dark matter and baryonic components of the Universe
from cosmologically-motivated initial conditions. In such simulations,
galaxies naturally emerge in a great diversity under the influence of internal
and external processes (Vogelsberger et al., 2014b; Schaye et al., 2015). In
the first few billion years of cosmic evolution, young galaxies form from
efficient gas accretion and then rapid star formation (SF), i.e., the epoch
known as “cosmic noon”, at $z\sim 2-3$. Because of the difference in the
early-phase evolution, a bimodality in galaxy type can begin to occur (Dekel
et al., 2009), which will possibly lead to long-lasting differences at
subsequent cosmic epochs. In such later phases, galaxies move into a secular
evolution period driven by internal processes in the absence of significant
merger activity. Gas and stellar velocity dispersions decrease toward low
redshifts as star formation declines and the galaxy potential well deepens
(e.g. Law et al., 2009; Daddi et al., 2010; Geach et al., 2011; Genzel et al.,
2011; Swinbank et al., 2012; Dessauges-Zavadsky et al., 2015; Girard et al.,
2018). Rich structures, e.g., bars, rings, and pseudo-bulges (reviewed by
Kormendy & Kennicutt, 2004), generated largely by internal instabilities,
partially account for the rich diversity of galaxies.
However, though mergers are progressively rarer at lower redshifts, they can
dramatically change the morphology and kinematics of galaxies once they
happen, especially major ones. It is well known that dissipationless dry
minor/major mergers can disrupt galaxy spin, generating ETGs (Khochfar & Silk,
2006; Naab et al., 2006; van der Wel et al., 2009; Bezanson et al., 2009). It
has been suggested that the cumulative effect of many dry minor mergers can
explain the size growth of ETGs from $z=2$ to the present via the build-up of
a diffuse envelope. However, recent analyses of the Illustris(TNG) and EAGLE
simulations do not find a clear cumulative effect of minor mergers (Penoyre et
al., 2017; Lagos et al., 2018; Pulsoni et al., 2020). Instead, dry major
mergers generally lead to the formation of massive slow-rotating ETGs,
especially for central ones (Lagos et al., 2018; Pulsoni et al., 2020). Even
more interestingly, $\approx 30\%$ of the ETGs in EAGLE have not had any
mergers with mass ratios $\geq 0.1$ during their past 10 Gyr. This fraction is
smaller in more massive galaxies. Similarly, Penoyre et al. (2017) and Pulsoni
et al. (2020) also suggested that low-mass ($M_{\rm s}<10^{11}M_{\odot}$) ETGs
have a very different assembly history from high-mass ones.
In recent years, significant progress has been made in reproducing realistic
galaxy morphologies particularly in large-volume hydrodynamical simulations
like Illustris (Genel et al., 2014; Vogelsberger et al., 2014b, a; Nelson et
al., 2015; Sijacki et al., 2015), EAGLE (Schaye et al., 2015; Crain et al.,
2015), and Horizon-AGN (Dubois et al., 2016). The IllustrisTNG simulations
(Nelson et al., 2018, 2019b; Naiman et al., 2018; Marinacci et al., 2018;
Pillepich et al., 2018a, 2019; Springel et al., 2018) are the successor to
Illustris. They reproduce galaxies that successfully emulate real galaxies in
many respects, thanks to a well-designed galaxy physics model (Weinberger et
al., 2017; Pillepich et al., 2018b). In simulations, we are able to extract
intrinsic structures in a physical way, as well as to track their formation
processes and evolutionary histories. This not only takes full advantage of
the simulations but also provides insight into the formation history of real
galaxies, which display a great diversity.
Understanding the evolution of galaxies in numerical simulations is required
to help recover the comparable evolution of real galaxies. As a first step in
this process, we developed a fully automatic Gaussian mixture model, called
auto-GMM, that can decompose simulated galaxies in a non-parametric, accurate,
and efficient way (Du et al., 2019). This method makes full use of the 6D
information of position and velocity for every star (i.e., stellar
particle). By applying auto-GMM to about 4000 disk galaxies from the TNG100
run of the IllustrisTNG suite, we uncovered rich kinematic structures that
statistically cluster well in the 3D space of structural kinematic moments (Du
et al., 2020). The structural kinematic moments are composed of dimensionless
binding energy, circularity parameter, and non-azimuthal angular momentum that
quantify the compactness, circular rotation in the disk aligned with the
global spin, and the mis-aligned rotation, respectively, of each structure. We
define the structures with strong to moderate rotation as cold and warm disks,
respectively. Spheroidal structures dominated by random motions are classified
as bulges or stellar halos, depending on how tightly bound they are. Du et al.
(2020) suggested that the morphological decomposition widely used in
observations can barely represent the kinematic structures found in the
simulations, which likely correspond to intrinsic structures. We showed that
morphologically-derived bulges are largely composites of kinematic bulges,
stellar halos, and even disky bulges in their inner regions, which may lead to
serious biases in the physical interpretation of such structures. Our
kinematic decomposition method thus has the potential to provide great insight
into the evolutionary histories of real galaxies.
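For contrast with the non-parametric auto-GMM, the traditional threshold-based decomposition by circularity that is common in the literature can be sketched as below. The cut values are illustrative assumptions, and this is not the auto-GMM procedure itself:

```python
def classify_star(circularity, cold_cut=0.8, warm_cut=0.5):
    """Assign a star to a structure from its circularity parameter
    epsilon = j_z / j_c(E): the ratio of its angular momentum about the
    galaxy's spin axis to that of a circular orbit with the same binding
    energy. Fixed cuts like these are a common literature convention;
    auto-GMM instead fits Gaussian mixtures in the space of kinematic
    moments, so no hard thresholds are imposed.
    """
    if circularity >= cold_cut:
        return "cold disk"
    if circularity >= warm_cut:
        return "warm disk"
    return "spheroid"

# A nearly circular prograde orbit lands in the cold disk; an orbit with
# epsilon near 0 or negative (pressure-supported or counter-rotating)
# is assigned to the spheroidal component.
```

Note that a threshold scheme like this cannot by itself separate bulges from stellar halos; that distinction additionally requires the binding energy, which is precisely the degeneracy the kinematic-moment space is designed to break.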
In a series of works (including this one), we aim to understand the formation
history of galaxies using a framework based on kinematically-derived intrinsic
structures. In this paper, we apply auto-GMM to the TNG50 simulation. This
enables us to study realistically-simulated galaxies in unprecedented detail
and statistics. We regard all processes at $z\gtrsim 2$, which are quite
chaotic, as the early-phase evolution of galaxies. At $z\lesssim 2$, galaxy
evolution can be influenced by internal and external (mainly but not
exclusively mergers) processes. This description of galaxy formation is
similar to the ‘two-phase’ picture: an early phase of dissipative collapse and
a later phase of dissipationless mergers (Oser et al., 2010). However, here we
separate the physical processes of the later phase into internal/in-situ and
external/ex-situ ones. The rich diversity in kinematic structures will be
interpreted in the context of these three regimes.
The paper is organized as follows. Sections 2 and 3 introduce the sample
selection and our kinematic decomposition method, respectively. Some basic
properties of galaxies with various kinematic structures are shown in Section
4. Galaxies are quantified, even classified, by the mass fractions of their
kinematic structures in Section 5. In Sections 6 and 7, we then study the
formation history of three kinds of typical galaxies that are dominated by
disk, bulge, and stellar halo structures, respectively. Section 8 discusses
the results, then the main conclusions are summarized in Section 9.
## 2 The TNG50 simulation
The IllustrisTNG suite comprises three runs using different simulation volumes
and resolutions, namely TNG50, TNG100, and TNG300. The simulations are run
with gravo-magnetohydrodynamics (MHD) and incorporate a comprehensive galaxy
model (see Weinberger et al., 2017; Pillepich et al., 2018b, for details).
This study uses the smallest volume run, TNG50, which provides a large enough
number of galaxies for statistical analyses and a “zoom”-like resolution. The
TNG50 data is now publicly available at https://www.tng-project.org. For
comparison, we also include some results of TNG100 in Appendix A. First
results from this simulation focusing on galactic outflows and the formation
of rotationally supported disks are presented in Nelson et al. (2019a) and
Pillepich et al. (2019). TNG50 includes $2\times 2160^{3}$ initial resolution
elements in a $\sim 50$ comoving Mpc box, corresponding to a baryon mass
resolution of $8.5\times 10^{4}M_{\odot}$ with a gravitational softening
length for stars and dark matter of about $0.3$ kpc at $z=0$. Meanwhile, the
minimum gas softening reaches 74 comoving parsec. TNG50 thus has roughly 15
times better mass resolution, and 2.5 times better spatial resolution, than
TNG100 (also publicly available, see Nelson et al., 2019b).
As in other cosmological simulations, galactic outflows in TNG50 driven
by feedback from both supernovae and supermassive black holes are key
ingredients in generating galaxies with realistic morphologies over a broad
mass range (Nelson et al., 2019a). The unprecedented resolution allows us to
study a large sample of galaxies with the details that were previously
achieved only in zoom-in simulations. Pillepich et al. (2019) showed that star
forming galaxies in TNG50 have a typical thickness of a few hundred parsecs,
in much better agreement with observations than TNG100 at lower resolution.
Both the thickness and kinematics of galaxies above $M_{\rm
s}=10^{9}M_{\odot}$ are now reasonably converged. Moreover, TNG50 can resolve
many physical processes down to small scales, for instance, cold gas clouds in
the circumgalactic medium (CGM) with sizes $\sim$ a few hundred parsecs that
are stabilized by magnetic fields (Nelson et al., 2020), fine-grained galaxy
stellar morphological structures (Zanisi et al., 2020), stellar halo mocks
similar to Dragonfly galaxies (Merritt et al., 2020), and metallicity
gradients (Hemler et al., 2020).
TNG galaxies are identified and characterised with the Friends-of-Friends (FoF;
Davis et al., 1985) and SUBFIND (Springel et al., 2001) algorithms. Resolution
elements (gas, stars, dark matter, and black holes) belonging to an individual
galaxy are gravitationally bound to its host subhalo. In this work, we focus
on TNG50 galaxies with total stellar mass of $M_{\rm
s}=10^{10}-10^{11.5}M_{\odot}$, including both spirals and ellipticals. There
are 873 galaxies satisfying this criterion at $z=0$ in TNG50. Our main
conclusions are based on central galaxies to avoid possible environmental
effects (e.g. Joshi et al., 2020; Engler et al., 2021), resulting in a sample
of 542 galaxies.
## 3 Extracting kinematic structures: methodology
We identify kinematic structures in galaxies from TNG50 with the framework
introduced in Du et al. (2019, 2020). We here give only a brief overview of
the method; all that follows applies exclusively to the stellar component of
galaxies.
The first step is to physically characterize stars in the phase space of any
individual galaxy. In this series of works, we use the kinematic phase
space111Part of the code from Obreja et al. (2018) is used to build the
kinematic phase space for stars gravitationally bound to a galaxy. comprised
of the circularity parameter $\epsilon=j_{z}/j_{c}(e)$ (Abadi et al., 2003),
the non-azimuthal angular momentum $j_{p}/j_{c}(e)$, and the binding energy
normalized by the minimum value $e/|e|_{\rm max}$, as proposed by Doménech-
Moral et al. (2012), of each stellar particle. Thus, $j_{z}/j_{c}$ and
$j_{p}/j_{c}$ are physical parameters that quantify the aligned and misaligned
rotation with the overall angular momentum, respectively, and $e/|e|_{\rm
max}$ describes how tightly bound a stellar particle is.
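The phase-space construction above can be sketched in a few lines. This is an illustrative sketch, not the published code: the input names (`pos`, `vel`, `mass`, `energy`) are hypothetical, and $j_{c}(e)$ is approximated by the running maximum of $|j_{z}|$ in narrow energy bins, a common shortcut.

```python
import numpy as np

def kinematic_phase_space(pos, vel, mass, energy, n_ebins=100):
    """Sketch of the 3D kinematic phase space used for the decomposition.

    Assumed inputs (hypothetical names): pos, vel are (N, 3) arrays of
    star-particle positions/velocities relative to the galaxy centre,
    mass is (N,), and energy is the (N,) specific binding energy
    (negative for bound stars), precomputed from the potential.
    Returns (j_z/j_c, j_p/j_c, e/|e|_max) per particle.
    """
    # Rotate so the z-axis points along the total stellar angular momentum.
    j = np.cross(pos, vel)                              # specific ang. momenta
    J = (mass[:, None] * j).sum(axis=0)
    z = J / np.linalg.norm(J)
    jz = j @ z                                          # aligned component
    jp = np.linalg.norm(j - np.outer(jz, z), axis=1)    # misaligned component

    # Approximate j_c(e), the circular-orbit angular momentum at energy e,
    # by the running maximum of |j_z| in narrow energy bins (a common shortcut).
    order = np.argsort(energy)                          # most bound first
    jc = np.empty_like(jz)
    jmax = 0.0
    for idx in np.array_split(order, n_ebins):
        if len(idx) == 0:
            continue
        jmax = max(jmax, np.abs(jz[idx]).max())
        jc[idx] = jmax

    e_norm = energy / np.abs(energy).max()              # e/|e|_max in [-1, 0)
    return jz / jc, jp / jc, e_norm
```

By construction $|j_{z}/j_{c}|\leq 1$, and a star on a circular orbit in the disk plane has circularity close to unity.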
Secondly, an automatic Gaussian mixture model222Gaussian mixture models (GMM)
are unsupervised machine learning algorithms that are widely used to model
discrete points with multidimensional Gaussian distributions. Here we use the
GaussianMixture module of the Python package scikit-learn. auto-GMM is used to model
the kinematic phase space. Stars are classified into multiple Gaussian
components with “soft” probabilistic assignment. As recommended by Du et al.
(2019), auto-GMM allows the number of Gaussian components to be determined
automatically by setting the modified Bayesian information criterion
$\Delta{\rm BIC}<0.1$, which corresponds to a Bayes factor $0.95-1$ with
respect to the ideal model using numerous Gaussian components. In this case,
we consider the model to perform as well as the ideal model from a statistical
point of view. Because the number of Gaussian components is inferred directly
from the data, typically 4-9 prominent components are found to model any
individual galaxy properly. For each component, its kinematics can be
quantified by the mass-
weighted average values of $j_{z}/j_{c}$, $j_{p}/j_{c}$, and $e/|e|_{\rm
max}$, defined as $\langle j_{z}/j_{c}\rangle$, $\langle j_{p}/j_{c}\rangle$,
and $\langle e/|e|_{\rm max}\rangle$. This method not only successfully avoids
overfitting due to the use of too many components, but also minimizes the
possibility of human bias, which makes it possible to identify intrinsic
structures in galaxies.
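The component-selection loop can be illustrated with scikit-learn's GaussianMixture. This is a simplified sketch of the auto-GMM idea, not the published implementation: the published method uses a modified BIC, whereas here we simply stop once the BIC improvement per data point drops below a tolerance, mimicking the $\Delta{\rm BIC}<0.1$ rule.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def auto_gmm(X, n_max=15, dbic_tol=0.1, seed=0):
    """Illustrative sketch of the auto-GMM idea using scikit-learn.

    X is the (N, 3) kinematic phase space [jz/jc, jp/jc, e/|e|_max].
    Components are added until the BIC improvement per data point
    falls below dbic_tol (an assumption standing in for the modified
    BIC criterion of the published method).
    """
    best, bics = None, []
    for n in range(1, n_max + 1):
        gmm = GaussianMixture(n_components=n, covariance_type="full",
                              random_state=seed).fit(X)
        bics.append(gmm.bic(X) / len(X))         # BIC per data point
        if n > 1 and bics[-2] - bics[-1] < dbic_tol:
            break                                # negligible improvement: stop
        best = gmm
    return best
```

The "soft" probabilistic assignment mentioned above corresponds to `predict_proba` of the fitted model, which gives each star a membership probability in every component.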
Finally, the intrinsic structures of galaxies are objectively inferred
from statistical results. In Du et al. (2020), via stacking all components
together in thousands of disk galaxies from TNG100, we found that the stellar
components also cluster in the kinematic-moment space composed of $\langle
j_{z}/j_{c}\rangle$, $\langle j_{p}/j_{c}\rangle$, and $\langle e/|e|_{\rm
max}\rangle$. We have thus identified the following useful classification:
* •
clusters of stars having strong ($\langle j_{z}/j_{c}\rangle\geq 0.8$) to
moderate ($0.5\leq\langle j_{z}/j_{c}\rangle<0.8$) rotation are defined as
cold and warm disks, respectively;
* •
clusters of stars dominated by random motions and tightly bound ($\langle
e/|e|_{\rm max}\rangle\leq-0.75$) are classified as bulges;
* •
clusters of stars dominated by random motions but that are loosely bound
($-0.75<\langle e/|e|_{\rm max}\rangle$) are defined as stellar halos.
Such criteria have been heuristically inferred from the statistical analysis
on the disk galaxies from the TNG100 simulation, as presented in Du et al.
(2020). This classification method is the simplest and physically clearest
classification of galaxy intrinsic structures, referred to as classification 1
in Du et al. (2020).
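The criteria above reduce to a few threshold tests per Gaussian component; a minimal sketch (argument names are hypothetical):

```python
def classify_component(mean_jz_jc, mean_e):
    """Assign a Gaussian component to a structure type following the
    'classification 1' criteria quoted above, given its mass-weighted
    mean circularity <jz/jc> and normalized binding energy <e/|e|_max>."""
    if mean_jz_jc >= 0.8:
        return "cold disk"
    if mean_jz_jc >= 0.5:
        return "warm disk"
    # random-motion dominated: split by how tightly bound the component is
    return "bulge" if mean_e <= -0.75 else "stellar halo"
```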
It is worth emphasizing that kinematically-defined disky structures in such a
classification generally do not follow a simple exponential profile, in
contrast with what has been typically and widely used in morphological
decompositions (Du et al., 2020). The overall kinematic disks obtained by
summing stars of cold and warm disks commonly have extra mass in their central
regions, where our method can further isolate disky bulges that have bulge-
like compact morphology but moderate rotation, as defined in classification 2
of Du et al. (2020). Cold disks, on the other hand, are often truncated in
their inner regions (see also arguments based on observations in Zhu et al.
(2018a) and Breda et al. (2020))333The relation between such inner truncations
in kinematic cold disks and the inner break found in purely photometric
decomposition (Gao & Ho, 2017) is unclear.. In this paper, we adopt throughout
the simpler classification 1 (cold and warm disks, bulges, stellar halos):
using a more complex methodology such as classification 2 does not affect our
results in this paper.
Following the three steps above, we decompose all galaxies in the TNG50 sample
into kinematic stellar cold disk, warm disk, bulge, and halo structures which
qualitatively correspond to thin disks, thick disks, classical bulges, and
stellar halos in observations, respectively. They will be used in the
subsequent analysis. All stars bound to the galaxy are counted, in order to
measure mass fractions of these kinematic structures accurately. It is worth
mentioning that this mass fraction cannot be directly compared with
observations where, generally, stellar light is probed only out to a few
effective radii or less.
Figure 1: Edge-on views of a randomly selected sample of $z=0$ TNG50 central
galaxies in the mass range $M_{\rm s}=10^{10.5}-10^{11}M_{\odot}$, in the
bulge-to-total vs. stellar halo-to-total stellar mass fraction plane. For each
galaxy, the edge-on surface density maps are shown in a region of $40\times
40$ kpc. The surface density is normalized by the maximum value for each
galaxy. The dashed line marks the position where the mass fraction of
spheroidal components is equal to 0.5, i.e., $f_{\rm b}+f_{\rm h}=0.5$: it can
be used to separate disk galaxies from elliptical ones. A massive central
concentration commonly exists in galaxies with massive bulges, while the
galaxies with massive halos are generally surrounded by diffuse envelopes. The
red squares mark four TNG50 analogues of the Sombrero Galaxy.
Figure 2: As in Figure 1 but for the relative importance of rotation $|v_{\rm
los}|/\sqrt{v_{\rm los}^{2}+3\sigma_{\rm los}^{2}}$, estimated from the edge-on view
of the same TNG50 galaxies, where $v_{\rm los}$ and $\sigma_{\rm los}$ are the
mean velocity and velocity dispersion in the line-of-sight view, respectively.
The rotation becomes weaker and weaker with increasing spheroidal fraction
towards the top-right corner.
Figure 3: The edge-on spatial distributions of disk, bulge, and stellar halo
particles selected by our kinematic decomposition method for the same TNG50
galaxies as in Figures 1 and 2. For each galaxy, $10^{5}$ stellar particles
are selected randomly. Bulges are generally concentrated in the central
regions of galaxies, while halos typically follow a diffuse distribution that
extends from the center to an extended envelope. It is worth emphasizing that
there is a severe degeneracy between bulges and halos in the central regions
of galaxies. Here we plot bulge particles last in order to make them more
visually prominent.
Figure 4: Distribution of the normalized surface density profiles in the
midplane for the same TNG50 galaxies as Figures 1, 2, and 3. The results of
all stars, and those of kinematic disk, bulge, and halo structures are shown
in black, blue, yellow, and red, respectively. For each panel, the $x$\-
$y$-axes represent $R$/kpc and log$\Sigma_{\star}/\Sigma_{\rm max}$,
respectively, covering the range of [0, 20] kpc and [-5, 0] (top-right
corner). $\Sigma_{\rm max}$ is the maximum value of $\Sigma_{\star}$. Neither
bulges nor halos are necessarily the direct counterparts of morphological
bulges described by the Sérsic function.
## 4 Relation between kinematic structures and global properties
### 4.1 A physical definition of bulges and stellar halos
Cosmologically-motivated models suggest that stars tend to conserve their
binding energy during galaxy mergers; accreted stars can thus be loosely bound
and hence populate galaxies across a broad range of galactocentric distances (e.g. Barnes,
1988; Hopkins et al., 2009; Amorisco, 2017). The constituent stars of stellar
halos are fossil records of the hierarchical merging process (e.g. Deason et
al., 2016; D’Souza & Bell, 2018; Monachesi et al., 2019): mergers with larger
satellites produce more massive, higher-metallicity stellar halos, and can
thus reproduce the recently observed stellar halo metallicity-mass relation
(discovered by an HST imaging survey of nearby galaxies, GHOSTS, Harmsen et
al., 2017). Studies of the hierarchical growth of structures have reached
similar conclusions using large-scale, hydrodynamic cosmological simulations
(e.g. Illustris Pillepich et al., 2014; Rodriguez-Gomez et al., 2016; Pop et
al., 2018).
It is often argued that massive, compact classical bulges formed at early
cosmic epochs via various pathways such as early gas-rich accretions, violent
disk instabilities, or misaligned inflows of gas. Such classical bulges, thus,
are largely composed of stars formed in-situ and characterized by low binding
energy. Bell et al. (2017) showed that galaxies with massive classical bulges
have diverse merger histories, and no clear correlation with properties of the
stellar halos has been found. It is, thus, plausible that bulges are indeed
dominated by in-situ chaotic processes. The classical conception that bulges
are produced by mergers may therefore not hold in all cases.
Figures 1 and 2 show the distributions of the morphology and relative
importance of rotation for a selection of TNG50 galaxies when viewed edge-on.
They are characterized by the mass fraction of kinematic bulge ($f_{\rm b}$,
$y$-axis) and halo ($f_{\rm h}$, $x$-axis) derived by auto-GMM. We normalize
the surface density map of each galaxy by its maximum value to gain equal
contrast for galaxies with different stellar masses. Obviously, disk galaxies
have strong rotation, thus located at the bottom-left corner. Elliptical
galaxies mainly lie above the dashed line where the overall mass fraction of
spheroids $f_{\rm sph}=f_{\rm b}+f_{\rm h}$ is larger than 0.5. However,
galaxies with massive bulges do not seem to be clearly distinguishable from those
having massive halos in observations, even when taking kinematics into
account. This issue becomes more serious at lower inclinations. Therefore,
a severe degeneracy exists in classical morphological decompositions of bulge
vs. halo stars, even though the central massive concentration indeed becomes
more prominent with the increase of bulge mass fraction.
Figure 5: The Sombrero Galaxy (M104), top left panel, and analogues from the
TNG50 simulation. M104 is an example of a galaxy with a large, classical
central “bulge” that is possibly a stellar halo according to our kinematic
definition. The top left is a composite image of V, R, I-bands that was
obtained with the FORS1 multi-mode instrument at VLT ANTU [ESO]. The other
panels showcase idealized synthetic images of TNG50 galaxies (using HST/ACS
F435W, F606W and F775W filters) generated with the radiative transfer code
SKIRT as in Rodriguez-Gomez et al. (2019). Unlike the real Sombrero galaxy,
the simulated objects are seen perfectly edge-on and across a larger field of
view of about 70 kpc. The kinematically-defined stellar halos of these
galaxies indeed occupy 35-50 percent of their total stellar masses.
As discussed in Section 3, the stars of bulges and stellar halos defined by
our kinematic method are separated by their binding energies, which is
consistent with our physical expectation. Both bulges and halos have similarly
weak rotation; however, as shown by the spatial distribution of their stellar
particles in Figure 3 and 1D surface density profiles in Figure 4, bulge stars
(yellow) are tightly bound around the galactic central regions. Halo stars
(red) are loosely bound, composing the diffuse envelopes. Halo stars, that
move on highly elliptical orbits, are able to pass through the central regions
that are dominated by bulge stars. The half-mass radii of bulges are generally
less than 2 kpc, while those of the stellar halos vary in a broad range of
2-10 kpc.
The fact that stellar halos approximately follow the Sérsic
function, see Figure 4, may induce a serious difficulty in making accurate
morphological decompositions of galaxies, and hence in advancing
interpretations for their formation processes. In order to illustrate this
issue clearly, we take the famous Sombrero Galaxy (M104/NGC 4594, see Figure
5) as an example. The Sombrero Galaxy is regarded as one of the most unusual
galaxies, having a disk embedded in an extremely large “bulge”. Gadotti &
Sánchez-Janssen (2012) argued that the bulge mass fraction can be reduced from
$77\%$ to $<10\%$ in the Sombrero Galaxy if an outer spheroidal component,
i.e., a stellar halo, is considered. We highlight four Sombrero visual
analogues with red squares in Figures 1 and 2. As we can see in Figures 3 and
4, the huge “bulges” of Sombrero analogues are largely contributed by
kinematic halos. The mass fraction of their bulges can vary from 0 to 0.5.
Idealized synthetic images of Sombrero-like galaxies from TNG50 are also shown
in Figure 5, following the procedure described in Rodriguez-Gomez et al.
(2019).
Our method allows us to break the degeneracy between bulges and halos even in
the central regions of galaxies. In this picture, kinematic bulges
qualitatively correspond to classical bulges. Normal elliptical galaxies are
largely dominated by kinematic stellar halos, which challenges the general
idea that elliptical galaxies are the same objects as classical bulges in disk
galaxies, obeying the Kormendy relation (e.g. Kormendy, 1977; Gadotti, 2009)
and the $M_{\rm bh}-\sigma_{\rm s}$ relation (e.g. Kormendy & Ho, 2013). It is
worth emphasizing that although stellar halos have generally much lower
surface density than other structures on the midplane, their overall mass
fractions can be large due to their wide extent reaching tens, if not
hundreds, of kpc distance.
Figure 6: Relation between global properties and the mass fractions of
kinematically-derived structures for TNG50 central galaxies at $z=0$. From
left to right, we show the distributions of the stellar half-mass radius
$r_{\rm e}$, global rotation parameter $K_{\rm rot}$, and global instantaneous
star formation rate as a function of total stellar mass. The same set of
galaxies are coloured by the mass fractions of their spheroidal, bulge, and
halo structures, respectively, from top to bottom. The cases of zero SFR are
set to -4.5 in the right panels.
### 4.2 Global properties
It is expected that intrinsic structures in galaxies are reflected by their
morphological and kinematic properties, but possibly in a non-linear way. In
this section, we discuss the relation between galaxies classified by our
kinematic method and their global morphological and kinematic properties.
Figure 6 shows the relations between some basic properties (stellar half-mass
radius $r_{\rm e}$, global rotation $K_{\rm rot}=\langle
v_{\phi}^{2}/v^{2}\rangle$ (Sales et al., 2010), and star formation rate, SFR
hereafter) and the mass fractions of kinematic structures (see Figure 26 for
TNG100 galaxies in appendix A). Each data point is coloured by the mass
fraction of its spheroid, bulge, and halo, respectively, from top to bottom.
Clearly, galaxies with different kinematic structures have very different
properties. This confirms the soundness of both the galaxy formation model
underlying TNG50 and our kinematically-motivated stellar decomposition method.
The galaxies that are dominated by spheroidal structures
(red points in the top panels) generally have relatively compact morphologies,
quiescent SF and weak rotation. The blue points, mainly corresponding to
galaxies dominated by kinematic disks, are generally extended galaxies with
active SF and strong rotation, and preferentially populate the low-mass end
($M_{\rm s}<10^{10.6}M_{\odot}$) of the distribution.
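For reference, $K_{\rm rot}=\langle v_{\phi}^{2}/v^{2}\rangle$ can be computed directly from star-particle data. The sketch below is illustrative and assumes the coordinates are centred on the galaxy with the $z$-axis already aligned with its spin.

```python
import numpy as np

def k_rot(pos, vel, mass):
    """Global rotation parameter K_rot = <v_phi^2 / v^2> (Sales et al. 2010),
    mass-weighted over star particles. Assumes pos/vel are centred on the
    galaxy and the z-axis is aligned with the global spin."""
    R = np.hypot(pos[:, 0], pos[:, 1])
    # Azimuthal velocity: v_phi = (x*v_y - y*v_x) / R
    vphi = (pos[:, 0] * vel[:, 1] - pos[:, 1] * vel[:, 0]) / np.maximum(R, 1e-12)
    v2 = (vel ** 2).sum(axis=1)
    return np.average(vphi ** 2 / np.maximum(v2, 1e-12), weights=mass)
```

A pure circular-rotation disk gives $K_{\rm rot}\simeq 1$, while an isotropic dispersion-dominated system gives $K_{\rm rot}\simeq 1/3$, so the $K_{\rm rot}>0.5$ disk criterion sits between the two limits.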
It is clearly shown in the top-left panel of Figure 6 that galaxies dominated
by spheroidal structures are common in massive galaxies producing the well-
known mass-size relation, where disk galaxies are rare. Systematic comparisons
between the mass-size relation in observations and that in IllustrisTNG are
given in Rodriguez-Gomez et al. (2019); Genel et al. (2018) for TNG100 and in
Pillepich et al. (2019) for TNG50. As shown in the middle panels of Figure 6,
galaxies with more massive bulges are generally more compact (smaller size and
larger central density), while those with massive halos (bottom panels) are
not that dramatically different in $r_{\rm e}$ from galaxies dominated by
disky structures. Interestingly, we can see that many galaxies with massive
stellar halos are as extended as disk galaxies in less massive cases of
$M_{\rm s}\lesssim 10^{10.6}$. Galaxies with massive bulges are generally the
most compact objects over a broad mass range.
The mass fraction of spheroidal components decomposed by auto-GMM is tightly
correlated with $K_{\rm rot}$ that is almost independent of galaxy stellar
mass. $K_{\rm rot}>0.5$ has been widely used as a criterion to select disk
galaxies in simulations. This criterion selects almost the same group of
galaxies as $f_{\rm sph}<0.5$ does. Both galaxies with massive bulges and halos
are thus generally classified as elliptical/early-type galaxies, while they
are clearly different types of galaxies. The galaxies with massive bulges have
somewhat stronger rotation ($K_{\rm rot}\sim 0.4-0.6$) and more disky
morphology (Figure 2) than those with massive halos. This suggests that
galaxies with massive bulges are analogues of fast rotator ETGs from both
morphological and kinematic points of view. But there is no clear dividing
line in $K_{\rm rot}$ that can separate them from galaxies with massive halos
that are slow rotator analogues.
It is worth mentioning that, at the low-mass end, many galaxies dominated by
spheroidal components are still actively forming stars, falling on the main
sequence of disk galaxies (blue dots in the right panels of Figure 6). This
result suggests that quenching is unlikely to be directly correlated with the
growth of either bulges or halos in central galaxies.
Figure 7: Mass fractions of kinematic structures in TNG50 central galaxies and
definition of disk-, bulge, and stellar halo-dominated galaxies. The color
represents the mass fraction of stars formed ex situ. The rectangles in the
left panel highlight the three groups of galaxies we selected, which are
dominated by disks, bulges, and halos. Disk-dominated galaxies are further
divided into two sub-groups. In the right panel, two branches appear as the
mass fraction of disky components decreases: those in the lower
branch have been significantly affected by mergers, while mergers rarely
happen for those in the upper one. The dashed line marks the criterion of disk
and elliptical galaxies $f_{\rm sph}=0.5$. It is worth mentioning that the
cases of $f_{\rm b}=0$ have no bulges that are prominent enough to be
identified by the kinematic decomposition method. The gap around $f_{\rm
b}\sim 0.03$ is thus not physically meaningful.
## 5 A kinematic selection of galaxies dominated by disks, bulges, and halos
### 5.1 Definition of disk-, bulge-, and halo-dominated galaxies
Figure 7 shows the mass fractions of kinematic structures for all central
galaxies from TNG50 in the $10^{10-11.5}\ M_{\odot}$ stellar mass range. The
ratios of stellar halo, bulge, and disk mass, respectively, to the total
stellar mass are denoted with $f_{\rm h}$, $f_{\rm b}$, and $f_{\rm d}$. In
the left panel of Figure 7, we select three groups of galaxies:
* •
Disk-dominated 1 and 2: $f_{\rm h}<0.2$. For groups 1 (blue rectangle) and
2 (green rectangle), $f_{\rm b}$ is $<0.1$ and $0.1-0.2$, respectively.
* •
Bulge-dominated: $f_{\rm b}\geq 0.2$ and $f_{\rm h}<0.2$, orange rectangle.
* •
Halo-dominated: $f_{\rm b}<0.2$ and $f_{\rm h}\geq 0.4$, red rectangle.
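The rectangle selection above amounts to simple cuts in the $(f_{\rm b}, f_{\rm h})$ plane; a minimal sketch (function name hypothetical):

```python
def galaxy_group(f_b, f_h):
    """Map a galaxy's kinematic bulge (f_b) and stellar-halo (f_h) mass
    fractions onto the groups highlighted in the left panel of Figure 7;
    returns None for galaxies outside the rectangles."""
    if f_h < 0.2:
        if f_b < 0.1:
            return "disk-dominated 1"
        if f_b < 0.2:
            return "disk-dominated 2"
        return "bulge-dominated"      # f_b >= 0.2 and f_h < 0.2
    if f_h >= 0.4 and f_b < 0.2:
        return "halo-dominated"
    return None
```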
From left to right in the right panel, the mass fraction of disky structures
increases, thus galaxies change from spheroidal early-type to disky late-type
ones. A large group of galaxies dominated by disks clusters at the bottom-
right corner in the right panel of Figure 7 (also the bottom-left corner in
the left panel). These galaxies are akin to pure-disk/bulgeless galaxies in
observations (see their edge-on view in Figure 2). Note that the mass fraction
of disks $f_{\rm d}$ is obtained by summing all stars in their cold and warm
disks (which includes any disky/pseudo bulge). The mass fraction decrease of
disky structures leads to the increase of either a bulge or a halo, thus two
branches. Galaxies on the lower branch have relatively more massive halos.
Figure 8: Merger frequency (upper panels) in galaxies ($10^{10}\ M_{\odot}\leq
M_{\rm s}\leq 10^{11.5}\ M_{\odot}$) with different kinematic structures. It
measures the fraction of galaxies that have experienced at least one merger
above a given mass ratio since a given redshift. Here mergers of mass ratio $\geq 0.1$
(1:10 minor mergers) at $z<1$ and $\geq 0.25$ (1:4 major mergers) at $z<2$ are
taken into account. $f_{\rm d}$ is equal to $f_{\rm cd}+f_{\rm wd}$. The lower
panels show the number counts of galaxies in each bin. The number distribution
of TNG100 galaxies is divided by three to compare with that of the TNG50
galaxies.
### 5.2 Connection to merger history
Dissipationless “dry” mergers in the later phase are expected to be
destructive for disky structures. In Figure 7, individual simulated galaxies
are color-coded by the amount of stellar mass that was formed ex situ,
estimated by the method of Rodriguez-Gomez et al. (2016). It is clear that the
mass fraction of ex-situ stars is generally $<0.1$ in both disk- and bulge-
dominated galaxies, while it is much larger in halo-dominated galaxies.
In the upper panels of Figure 8, we show the merger frequency of galaxies with
different kinematic structures. The lower panels show number counts of
galaxies. Clearly, the mass fraction of a kinematic stellar halo has a strong
positive correlation with mergers. About 80 per cent of the central galaxies
with massive stellar halos of mass fraction $f_{\rm h}>0.4$ have experienced
at least one merger of stellar mass ratio $\geq 0.1$ since $z=1$ (see blue
curves for TNG50 and gray histograms for TNG100). More than 50 per cent of
such galaxies have experienced major mergers of mass ratio $\geq 0.25$ (red profiles for TNG50
and black histogram for TNG100), consistent with Penoyre et al. (2017) and
Lagos et al. (2018).
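The merger-frequency statistic plotted in Figure 8 can be sketched as a simple cut on per-galaxy merger lists. The function below is an illustrative sketch; it assumes the stellar mass ratios and redshifts of each galaxy's mergers have already been extracted from the merger trees (e.g. SubLink).

```python
import numpy as np

def merger_frequency(merger_ratios, merger_redshifts, ratio_min, z_max):
    """Fraction of galaxies that experienced at least one merger of stellar
    mass ratio >= ratio_min since redshift z_max. merger_ratios and
    merger_redshifts are per-galaxy lists (assumed inputs from merger trees)."""
    hits = [
        np.any((np.asarray(r) >= ratio_min) & (np.asarray(z) < z_max))
        for r, z in zip(merger_ratios, merger_redshifts)
    ]
    return float(np.mean(hits))
```

The two curves in Figure 8 then correspond to `merger_frequency(..., 0.1, 1.0)` and `merger_frequency(..., 0.25, 2.0)` evaluated in bins of structure mass fraction.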
Despite the somewhat artificial classification, both disk- and bulge-dominated
galaxies have rarely been affected by mergers in the past 10 Gyr ($z<2$), thus
they are likely to be two fundamentally-different types of galaxies in
comparison to halo-dominated ones. If mergers have been infrequent for
bulge-dominated galaxies in the late phase of cosmic evolution, then such
bulges must have been generated either by internal processes or in an earlier
phase of the Universe. Bulge- and disk-dominated galaxies may be the two ends of a
continuous distribution. The properties of such galaxies are able to record
important information of their “initial conditions” after the early phase,
which is likely lost in mergers. The formation of halo-dominated galaxies, in
contrast, is tightly correlated with mergers. They may form via major mergers
between the two fundamental types of galaxies, or via ex-situ stellar
accumulation through minor mergers with low-mass satellites. In order to gain
new insights into galaxy properties discussed here and their evolutionary
histories, we trace different types of galaxies back to high redshifts in the
next two sections.
## 6 Evolution of disk- and bulge-dominated galaxies driven by the early-phase and internal processes
Given the mass fractions of kinematically-derived structures, we are able to
study the difference in their evolutionary histories in detail for galaxies
with different structures. The most fundamental physical processes are
separated into three parts: the early phase evolution, and internal and
external processes in the later phase. The rich diversity in kinematic
structure will be interpreted in the context of these three origins. In this
work, we regard all processes at $z>2$ as the early-phase evolution of
galaxies. At $z<2$, galaxy evolution can be influenced by internal and
external (mainly but not exclusively mergers) processes. The SubLink galaxy
merger tree of the IllustrisTNG simulation (Rodriguez-Gomez et al., 2015) is
used to trace galaxy evolution back in time.
Figure 9: Mass growth ($y$-axis, in logarithmic scale) of stellar and dark
matter components in TNG50 central galaxies classified by the kinematic
method. From left to right, galaxies are separated into four bins of total
stellar mass at $z=0$, i.e., $10^{10-10.2}\ M_{\odot}$, $10^{10.2-10.5}\
M_{\odot}$, $10^{10.5-10.8}\ M_{\odot}$, and $10^{10.8-11.5}\ M_{\odot}$. The
shaded regions represent the stacked 1D profiles ($1\sigma$ envelope) for the
disk-dominated (groups 1 and 2) and bulge-dominated galaxies in each mass bin,
where the dashed profiles correspond to their median values. The solid
profiles in the right-most panels show
each individual galaxy when the statistics are poor in that mass bin. The
number of galaxies is listed at the bottom-left corner. Figure 10: Evolution
of total stellar mass $M_{\rm s}$, normalized by the values at $z=0$. This
image uses the same convention as Figure 9.
Figure 11: Evolution of half-mass radius $r_{\rm e}$, measured in the three-
dimensional space. This image uses the same convention as Figure 9.
Figure 12: Evolution of total SFR. This image uses the same convention as
Figure 9.
Figure 13: Evolution of the SFR within one $r_{\rm e}$. This image uses the
same convention as Figure 9. Massive bulge-dominated galaxies are quenched in
an inside-out manner in comparison with Figure 12.
Figure 14: Evolution of the spin parameter $\lambda$, derived by the mass-
weighted sum of angular momenta of all member particles/cells. This image uses
the same convention as Figure 9. Figure 15: The mass-size diagram of the disk-
and bulge-dominated galaxies that are the two fundamental types of galaxies
selected in Figure 7. The color represents the bulge-to-total mass fraction
$f_{\rm b}$ derived by our kinematic method. The gray dots are their
progenitors at $z=1.5$. Dashed lines mark the evolutionary pathway during
$z=0-1.5$ for all bulge-dominated galaxies. The arrows highlight the extended
(blue) and compact (red) evolutionary pathways that form disk and bulge-
dominated galaxies, respectively. Prototype galaxies shown in Figures 16, 17,
19, and 20 are marked by squares.
### 6.1 Extended and compact evolutionary pathways: Evolution of mass, size,
SFR, and spin
#### 6.1.1 Mass and size
In Figure 9, we trace the mass growth of the stellar and dark matter
components in each galaxy. The evolution of disk- and bulge-dominated galaxies
generally follows smooth evolutionary pathways without experiencing any violent
mergers, as shown in Figure 7. In each mass range, we thus stack their
profiles together. Galaxies dominated by disks (blue and green shaded regions)
form later, but grow faster, compared with those dominated by bulges (cyan
shaded regions), thus reaching a similar stellar mass at $z=0$. At $z>1.0$,
the difference in median stellar mass between the group 1 disk-dominated
galaxies and bulge-dominated galaxies is about 0.2-0.4 dex over a wide mass
range; this difference decreases gradually toward low redshifts. The
difference is more significant in more massive galaxies (e.g., $M_{\rm
s}=10^{10.5-10.8}\ M_{\odot}$), shown also clearly in Figure 10 where we
normalize the stellar masses of galaxy progenitors with the value at $z=0$.
About 30-55% of the stellar mass has already been assembled at $z\sim 1.7$ in massive bulge-
dominated galaxies with $M_{\rm s}=10^{10.5-10.8}\ M_{\odot}$, while only
about 5-25% of stars exist in the progenitors of the group 1 disk-dominated
galaxies.
Galaxy size is another crucial parameter that reflects various physical
processes in the evolutionary history of galaxies. Pillepich et al. (2019)
showed that TNG50 successfully reproduces the mass-size relation with respect
to the observations of both gaseous (e.g. van der Wel et al., 2014) and
stellar components across cosmic time. Figure 11 exhibits the growth of galaxy
size. At $z=0$, the disk-dominated galaxies in group 1 have roughly 2-3 times
larger stellar half-mass radius $r_{\rm e}$ than the bulge-dominated galaxies.
At high redshifts, the difference in their sizes is smaller, but bulge-
dominated galaxies are generally a few times more massive than disk-dominated
ones. Thus, bulge-dominated galaxies are much more compact objects than disk-
dominated galaxies. These compact and extended types of galaxies follow very different evolutionary pathways, generating bulge- and disk-dominated galaxies, respectively. Consistently, Genel et al. (2018) showed that the
sizes of star-forming and quiescent galaxies from TNG100 evolve in similar
extended and compact pathways.
#### 6.1.2 Star formation
In the later phase, bulge-dominated galaxies start to be quenched gradually,
especially the more massive systems. During $z\lesssim 0.5$, the most massive ($M_{\rm s}\gtrsim 10^{10.5}\ M_{\odot}$) bulge-dominated galaxies gain less than 30% of their $z=0$ stellar mass, while disk-dominated galaxies nearly double their stellar masses (see Figure 10). Figure 12 shows clearly that massive bulge-dominated galaxies are quenched significantly from $z\lesssim 1.0$ onward, and thus offset from their disk-dominated counterparts. Quenching happens later and is less significant in less massive
galaxies. Massive bulge-dominated galaxies are likely quenched by AGN
feedback, as a consequence of activating the low accretion (kinetic) mode of
AGN feedback (Weinberger et al., 2017, 2018; Nelson et al., 2019a; Terrazas et
al., 2020). In this case, mass outflow rates increase rapidly, which pushes
gas out and then quenches SF gradually. This mechanism is insufficient in
quenching less massive bulge-dominated galaxies (the left-most panels of
Figure 9), where the black hole mass is generally smaller. Moreover, in
comparison with the SFR measured within $r_{\rm e}$ (Figure 13), it is clear that SF is quenched in an inside-out manner (see also Nelson et al. (2019a)
and E. Nelson et al., in preparation) in massive bulge-dominated galaxies.
Disk- and bulge-dominated galaxies follow two distinguishable evolutionary
pathways: extended and compact. A massive bulge forms either earlier or more easily in bulge-dominated galaxies, which are thus more compact than disk-dominated ones. Such a difference can only be attributed to intrinsic (nature) properties that
are largely determined by the dark matter halos they inhabit and underlying
internal dynamical instabilities (see more discussions in Section 8.1).
#### 6.1.3 Spin
The mass and angular momenta of dark matter halos are two crucial factors that
may significantly affect galaxy properties. It is clear that bulge-dominated
galaxies generally reside in systems with significantly lower spins $\lambda=\frac{j}{\sqrt{2}V_{\rm vir}R_{\rm vir}}$ (defined by Bullock et al., 2001; see Figure 14), where $V_{\rm vir}$ and $R_{\rm vir}$ are the virial velocity and radius estimated from the total mass bound to the subhalo, and $j$ is the mass-weighted angular momentum of all member particles/cells, though the dark matter halos (lower panels of Figure 9) that bulge-dominated galaxies inhabit are also somewhat more massive. This is consistent with the straightforward theoretical picture that gas initially coupled with the dark matter haloes cools down to the center while conserving its angular momentum, possibly up to a certain factor (Mo et al., 1998; Bullock et al., 2001). This finally leads to the bimodality in galaxy compactness (see Dekel & Burkert, 2014), where compact galaxies assemble via dissipative processes in the early phase. Bulge- and disk-dominated galaxies in TNG100 follow a similar evolution to those in TNG50, as shown in Appendix A (Figure 27).
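The spin parameter above is straightforward to evaluate from particle data. The following is a minimal NumPy sketch, assuming arrays of member-particle masses, positions, and velocities in the halo rest frame; the function name, inputs, and units are illustrative assumptions, not taken from the paper's pipeline:

```python
import numpy as np

def bullock_spin(m, pos, vel, m_vir, r_vir, G=4.30091e-6):
    """Bullock et al. (2001) spin parameter lambda = j / (sqrt(2) V_vir R_vir).

    m      : (N,) particle masses [Msun]
    pos    : (N, 3) positions relative to the halo center [kpc]
    vel    : (N, 3) velocities relative to the halo bulk velocity [km/s]
    m_vir  : virial mass [Msun]; r_vir : virial radius [kpc]
    G      : gravitational constant in kpc (km/s)^2 / Msun
    """
    # Mass-weighted specific angular momentum of all member particles/cells.
    j_vec = np.sum(m[:, None] * np.cross(pos, vel), axis=0) / np.sum(m)
    j = np.linalg.norm(j_vec)
    # Circular velocity at the virial radius.
    v_vir = np.sqrt(G * m_vir / r_vir)
    return j / (np.sqrt(2.0) * v_vir * r_vir)
```

As a sanity check, a single particle on a circular orbit at $R_{\rm vir}$ with speed $V_{\rm vir}$ yields $\lambda=1/\sqrt{2}$, as the definition implies.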
Figure 16: D1, a disk-dominated galaxy (ID 580035) with $M_{\rm s}\approx
10^{10.5}M_{\odot}$. From top to bottom, we show the evolution of total, cold
disk, warm disk, bulge, and halo stellar structures, decomposed by our
kinematic method, in both face-on and edge-on views. Their mass fractions are
given on the left side. The 3D half-mass radius $r_{\rm e}$ at each snapshot
is marked by the dashed circle. In each face-on panel, the top textbox gives
the fraction of stellar particles that already exist in this galaxy at this
snapshot for each structure. In the bottom textbox, we estimate the
contributions of in-situ SF and ex-situ mergers to the mass growth of each
structure in a time span between two snapshots. In this galaxy, the kinematic
halo and bulge are small. In-situ SF overall dominates the evolution of all
structures. The contribution from ex-situ processes is negligible since
$z=1.5$. Figure 17: B1, a bulge-dominated galaxy (ID 563732) with $M_{\rm
s}\approx 10^{10.5}M_{\odot}$. This image uses the same convention as Figure
16. The kinematic bulge forms early with no influence from mergers since
$z=1.5$, then a disk assembles, possibly through gas accretion in the later
phase. The mis-aligned disk at $z\sim 1.5$ is likely due to mis-aligned cold
gas accretion or tiny mergers. This galaxy can be a S0 galaxy with a compact
classical-like bulge by visual classification.
Figure 18: Spatial distributions of SFRs in D1 (top) and B1 (bottom), averaged
within 0.5 Gyr. The black contours represent the stellar surface density maps.
Clearly, B1 is quenched inside-out. The size increases of D1 and B1 are mainly
driven by the assembly of their disky structures via SF in the outer regions.
### 6.2 Disk growth in massive cases: gas accretion and inside-out quenching
Figure 15 shows the extended and compact pathways on the mass-size diagram.
The color represents the mass fraction of bulges in both disk- and bulge-
dominated galaxies at $z=0$; the gray data points correspond to their
progenitors at $z=1.5$. Disk-dominated galaxies generally follow the extended
pathway, highlighted by the blue arrow. Bulge-dominated galaxies follow the
compact pathway that has two phases: (1) a compact phase during which the mass
grows significantly while the size changes little, thus forming bulges; (2)
the size increases significantly while the mass grows relatively little, thus
building up disky structures. Massive cases with $M_{\rm s}\gtrsim 10^{10.3}\
M_{\odot}$ have almost passed through phase (1) by $z\sim 1.0$, while the compact phase seems to last until $z=0$ for less massive cases.
The progenitors of massive bulge-dominated galaxies (linked by the dashed-gray
lines) with $M_{\rm s}\gtrsim 10^{10.5}\ M_{\odot}$ have already been rather
massive and compact at $z=1.5$. They then evolve into phase (2) of the
compact pathway, during which diffuse disk structures are assembled gradually,
thus their sizes increase in a similar way to disk-dominated ones. Figures 16
and 17 show two prototypes of massive disk- and bulge-dominated galaxies,
named D1 and B1 (marked by squares in Figure 15), respectively, to illustrate
the dramatically different evolutionary pathways between them. Stellar
particles are classified into the structure that has the largest likelihood at
$z=0$ via applying our kinematic decomposition algorithm. From top to bottom,
we show the surface density maps in both face-on and edge-on views for total,
cold disk, warm disk, bulge, and halo.
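The decomposition algorithm itself (auto-GMM; Du et al. 2019, 2020) is described elsewhere. As a rough illustration of the underlying idea only, stellar particles can be clustered in a kinematic feature space, e.g., orbital circularity and normalized binding energy, with a Gaussian mixture and assigned to the component of largest posterior likelihood; the feature choice and scikit-learn usage below are our assumptions, not the paper's actual implementation:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def kinematic_labels(circularity, energy, n_components=4, seed=0):
    """Assign each stellar particle to the Gaussian component with the
    largest posterior likelihood in a toy kinematic space (orbital
    circularity, normalized binding energy). Illustrative only: the
    paper's auto-GMM uses its own feature space and automated model
    selection rather than a fixed number of components.
    """
    X = np.column_stack([circularity, energy])
    gmm = GaussianMixture(n_components=n_components, random_state=seed).fit(X)
    return gmm.predict(X)  # hard assignment = largest-likelihood component
```

In practice, components with mean circularity near 1 would be interpreted as cold disks, and components with circularity near 0 as bulges or halos, depending on binding energy.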
Three quantities (red words in the top and bottom textboxes of Figures 16 and
17) are used to characterize the number fractions of stars that originate from
an earlier-phase evolution, external/ex-situ mergers, and internal/in-situ SF
(from top to bottom), respectively, for each kinematic structure. Their definitions are as follows: $N_{i,z}$ is the total stellar particle number of a certain structure $i$ at redshift $z$. Tracing back from $z_{1}$ to an earlier time point $z_{2}$, the new stellar particle members of each structure during $z_{1}-z_{2}$ are classified into two origins: ex-situ accretion and in-situ SF. The ex-situ part is estimated from the stars that did not belong to this galaxy at $z_{2}$; the in-situ part comprises the stars newly formed between the two snapshots from $z_{2}$ to $z_{1}$.
For example, the kinematic cold disk of D1 contributes $54.4\%$ of its total
stellar mass at $z=0$. At $z=1.5$, only $3.5\%$ of cold disk stars, i.e.,
$3.5\%N_{i,0}$, have already existed in this galaxy, where $N_{i,0}$ is the
total stellar particle number of a certain structure $i$ that is cold disk
here. During $z=1.5-1.0$, it increases by $\approx 11.0\%N_{i,0}$, where SF
and ex-situ accretion contribute $10.97\%N_{i,0}$ and $0\%N_{i,0}$,
respectively.
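The bookkeeping in this example can be sketched with particle-ID sets. The helper below is a toy illustration under our own assumptions (ID-set inputs rather than the simulation's merger trees); dividing the returned counts by the structure's $z=0$ particle number $N_{i,0}$ gives percentages like those quoted above:

```python
def growth_budget(members_z1, members_z2, galaxy_z2, formed_between):
    """Split the growth of one kinematic structure between two snapshots
    (z2 earlier than z1) into in-situ SF and ex-situ accretion. Toy
    particle-ID-set version; the paper's analysis follows merger trees.

    members_z1, members_z2 : IDs of the structure's (z=0-defined) members
                             already present in the galaxy at z1 and z2
    galaxy_z2      : IDs of all stars bound to the galaxy at z2
    formed_between : IDs of stars formed between z2 and z1
    """
    new = members_z1 - members_z2           # members gained during z2 -> z1
    in_situ = new & formed_between          # stars formed inside the galaxy
    # Ex-situ: gained members that neither belonged to the galaxy at z2
    # nor formed during the interval, i.e. accreted from outside.
    ex_situ = (new - formed_between) - galaxy_z2
    return len(in_situ), len(ex_situ)
```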
Clearly, the properties at $z=1.5$ are largely dominated by their early-phase
evolution. At $z=1.5$, the B1 object (Figure 17) has already assembled $53.8\%$ of its stars found at $z=0$, while D1 (Figure 16) has only had
$20.9\%$ of its stars. The half-mass radius of B1, marked by dashed circles,
is dramatically smaller than that of D1. A massive central concentration,
i.e., a bulge, is clearly visible in B1. The overall properties of the kinematic bulge in B1 change only mildly after $z=1.5$. Without experiencing mergers, the
halo masses of both D1 and B1 also change little.
During $z=0-1.5$, an extended cold disk forms gradually in both D1 and B1,
which leads to the increase of their galaxy sizes. The growth of cold disks
coincides well with the SF shown in Figure 18. At $z<1$, B1 is gradually
quenched inside-out (see Figures 12 and 13 for statistical results). An
extended SF ring that is also gas-rich is formed. Zolotov et al. (2015)
suggested that such a ring is a natural result of the accretion of cold gas
with high angular momentum from the cosmic web into this node. Moreover, Dekel
et al. (2020) showed that the existence of a massive central concentration,
i.e., a bulge, can suppress inward gas transport, which possibly also gives rise to an SF ring. In conclusion, the existence of a massive bulge formed in the early
phase evolution makes bulge-dominated galaxies more compact than disk-
dominated ones. Both of them are able to generate similar disky structures by
internal SF in the later phase. Massive bulge-dominated galaxies, however, are
likely to be quenched at low redshifts, thus classified as fast-rotator ETGs
with massive bulges.
Figure 19: Gas distributions in prototype galaxies D2-D4 and B2-B3, viewed
face-on and edge-on. From D2 (top) to B3 (bottom), galaxies become more and
more compact, which is likely determined by the gas that they accreted.
### 6.3 Size growth in less massive cases: undergoing bulge formation via gas
accretion
There is a dramatic difference in the size growth of disk- and bulge-dominated
galaxies with $M_{\rm s}\lesssim 10^{10.3}\ M_{\odot}$, shown in Figures 11 and 15. The sizes of bulge-dominated galaxies are nearly flat or even decrease slightly while their stellar masses grow fast at $z\gtrsim 1$, whereas the $r_{\rm e}$ of disk-dominated galaxies increases significantly, by a factor of
$\sim 2.5$. However, it is surprising that the overall SFRs of such bulge- and
disk-dominated galaxies follow a similar trend, though bulge-dominated
galaxies have slightly higher SFR at $z>1$, but lower at $z<1$, than disk-
dominated ones. The difference in SFR is not as significant as that in size
between disk- and bulge-dominated galaxies. Therefore, the difference between the compact and extended pathways must be due to the spatial distribution of their
SF. Considering that both bulge- and disk-dominated galaxies are weakly
affected by mergers, we speculate that their SF may be correlated with the
cold gas inflows that are determined by the dark matter halos and
environments.
Figure 19 shows the face-on and edge-on views of gaseous mass in the disk-
dominated galaxies D2-D4 and bulge-dominated galaxies B2-B3 (marked by squares
in Figure 15). They have similar stellar masses of $M_{\rm s}\simeq 10^{10.2}\
M_{\odot}$, but their sizes vary from $\sim 1.5$ kpc to nearly 10 kpc. There
is a clear signature that the dramatic difference in their sizes originates
from the difference in angular momentum of accreted gas. More compact galaxies
are likely to be fuelled by gas inflows whose angular momentum is lower or
removed sufficiently, thus generating a smaller gaseous disk (Figure 19),
which is consistent with the theoretical expectation (e.g. Mo et al., 1998;
Bullock et al., 2001). Moreover, the galactic wind driven by supernova and AGN
feedback may also play a role, as suggested by Genel et al. (2015).
Gas seems to be accreted directly into the central regions of B2, where it is then able to contribute directly to the growth of the bulge, as shown
in Figure 20. B2’s bulge keeps growing until $z=0$, thus the galaxy size
changes little. It is plausible that low-mass bulge-dominated galaxies evolve
along a similar compact pathway to massive ones, while more massive bulge-
dominated galaxies evolve naturally either earlier or faster.
Figure 20: B2, a low-mass ($M_{\rm s}\approx 10^{10.2}M_{\odot}$) bulge-
dominated galaxy (ID 590926). This image uses the same convention as Figure
16. The bulge forms stars until $z=0$, during which $r_{\rm e}$ (marked by the
dashed circles) changes little. Figure 21: Evolution of halo-dominated
galaxies (red dots) on the mass-size diagram. The disk/bulge-dominated
galaxies (black dots) and their progenitors (gray squares) are overlaid. The
cyan squares are the progenitors of halo-dominated galaxies at $z=1.5$. One-
third of halo-dominated galaxies are linked with their progenitors using gray
dashed lines, which indicate the compact and extended pathways highlighted by
red and blue arrows, respectively. Figure 22: Tracing back to 10 Gyr ago, the
accumulated fractions of galaxies having major (top, mass ratio of $\geq
0.25$) and minor (bottom, mass ratio of $0.1-0.25$) mergers. For each type of
galaxy in a certain mass range, the $y$-axis gives the fraction of galaxies
that have been affected by mergers since that time.
## 7 Evolution of halo-dominated galaxies: the role of mergers
Halo-dominated galaxies generally have weak rotation and elliptical
morphology, and thus qualitatively correspond to slow rotator ETGs or typical
elliptical galaxies. In Figure 21, we show the mass-size diagram of halo-
dominated galaxies and their progenitors at $z=1.5$. The bulge/disk-dominated
galaxies (black dots) and their progenitors (gray squares) are overlaid for
comparison. It is clear that halo-dominated galaxies preferentially populate
the high-mass end, while their progenitors are distributed in a broad range of
both size and mass. It is natural that the progenitors of halo-dominated
galaxies are either extended or compact young galaxies before mergers happen.
The evolution of halo-dominated galaxies, thus, can also be divided into
extended and compact pathways (highlighted by arrows), as suggested by the
gray dashed lines that link halo-dominated galaxies and their progenitors at
$z=1.5$.
Halo-dominated galaxies that evolve on the compact pathway originate mainly
from compact progenitors that are qualitatively similar to those of bulge-
dominated galaxies, but generally more massive and compact. Such compact
objects are likely to be the so-called “nuggets” observed in many high
redshift observations (e.g. van Dokkum et al., 2008, 2009; Newman et al.,
2010; Damjanov et al., 2011; Whitaker et al., 2012; Barro et al., 2013). A
large fraction of the progenitors of halo-dominated galaxies have similar
properties to those of disk-dominated galaxies, falling on the extended
pathway.
Mergers can disrupt galactic spins and are thus destructive for galaxies along both the extended and compact pathways, building up stellar halos in the second phase. As shown in Figure 22, about 50% of massive halo-dominated galaxies have had at least one major merger (mass ratio $\geq 0.25$; top panel) in the past 10
Gyr, which is consistent with the results of Penoyre et al. (2017) and Lagos
et al. (2018). This fraction is smaller ($\sim 30$%) in less massive ($M_{\rm
s}\leq 10^{10.5}M_{\odot}$) galaxies, suggesting that less violent mergers are
required to form a halo-dominated galaxy, possibly due to their weaker potential wells. Mergers of mass ratio 0.1-0.25 (bottom panel) play a
relatively less important, but non-negligible, role. In comparison, less than
20% of bulge-dominated galaxies have been affected by any mergers during this
time period.
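The accumulated fractions in Figure 22 reduce to simple counting over per-galaxy merger histories. Below is a minimal sketch, assuming each history is a list of (lookback time in Gyr, stellar mass ratio) pairs, which is a hypothetical layout rather than the TNG merger-tree format:

```python
def merger_fraction(histories, t_max, ratio_min, ratio_max=1.0):
    """Fraction of galaxies with at least one merger of mass ratio in
    [ratio_min, ratio_max] within the past t_max Gyr.

    histories : list of per-galaxy merger lists, each entry a
                (lookback_time_Gyr, mass_ratio) tuple
    """
    hit = sum(
        any(t <= t_max and ratio_min <= r <= ratio_max for t, r in events)
        for events in histories
    )
    return hit / len(histories)
```

With this convention, the "major" curve of Figure 22 would correspond to `ratio_min=0.25` and the "minor" curve to the 0.1-0.25 range.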
Figure 23: H1 (ID 398784), a prototype halo-dominated galaxy that evolves along the
compact pathway. This massive elliptical galaxy forms by a major merger at
$z\sim 0.2$ from a rather compact object at $z=1.5$. The stellar halo is built
up by the initial disk of the primary galaxy and the satellite galaxy
destroyed during the merger. This image uses the same convention as Figure 16.
Figure 24: A halo-dominated galaxy (H2, ID 559386) evolves along the extended
pathway. Its disk is rebuilt after the merger at $z\sim 0.7$, forming a
Sombrero analogue.
### 7.1 Two prototypes evolve along the compact and extended pathways
Two prototype halo-dominated galaxies H1-H2 (marked by squares in Figure 21)
are discussed in this section. They evolve along extended and compact pathways
that are shaped by mergers, especially major ones.
H1 (Figure 23) is a typical massive elliptical galaxy. At $z=1.5$, the
progenitor of this galaxy has about $10^{10.6}\ M_{\odot}$ of stellar mass and
a compact morphology of $r_{\rm e}\sim 0.7$ kpc, thus a typical nugget. It
evolves in a similar way to bulge-dominated galaxies such that a disk is
assembled via in-situ SF during $z=0.2-1.5$ before the major merger happens at
$z\sim 0.2$. If no merger were involved in the evolution of this galaxy, it would become a galaxy of $M_{\rm s}\sim 10^{11.0}M_{\odot}$ and $r_{\rm e}\sim 4$
kpc, i.e., a massive bulge-dominated galaxy, according to its properties at
$z=0.5$. The major merger transforms it to a halo-dominated elliptical galaxy
with $M_{\rm s}=10^{11.3}M_{\odot}$ and $r_{\rm e}\simeq 10$ kpc. The growth
of galaxy size is likely a natural outcome combining mergers and extended SF
via gas accretion. The most dramatic increase of the galaxy size is driven by
the final major merger, especially for galaxies along the compact pathway in
Figure 21, generating a massive elliptical galaxy with a diffuse envelope.
About $50\%$ of the stellar halo particles come from the satellite galaxy that merged in and from SF during the merger event; the other $50\%$ come from the stars
previously existing in the central galaxy. Although the progenitor bulge of such a galaxy is already quite massive at $z=1.5$, reaching $\sim 10^{10.5}\ M_{\odot}$, it contributes only a small fraction of the total stellar mass
at $z=0$ because of the contribution from ex-situ stars accreted during the
merger.
As shown in Figure 21, most halo-dominated galaxies that evolve along the compact pathway become more massive by a factor of $\sim 2$ during $z<2$. Meanwhile, their sizes become $\sim 10$ times larger. Such an evolutionary
pathway is consistent with “nuggets” that are believed to eventually become
more extended elliptical galaxies. In contrast, the stellar masses of the
galaxies following the extended pathway increase significantly by a factor of
$\sim 10$, while their sizes become only a few times larger. Even for the most
massive cases, there is a non-negligible fraction of halo-dominated galaxies
formed from the extended pathway. This fraction increases significantly toward
the low-mass end.
Figure 24 shows the evolution of a prototype halo-dominated galaxy that forms
along the extended pathway. It is clear that an initially extended disk is
destroyed by a major merger at $z\sim 0.7$, but a new disk with $32.4\%\
M_{\rm s}$ is regenerated at later times, thus forming a Sombrero analogue.
In such galaxies, a significant merger can disrupt the secular evolution at
a certain time, but it cannot shut it down completely. We therefore suggest that
galaxies cannot be quenched directly by either mergers (e.g. Hopkins et al.,
2006, 2008) or the growth of halos.
Figure 25: Illustration of galaxy evolution suggested by IllustrisTNG, based
on our kinematic decomposition. The growth of disks, bulges, and halos are
physically linked with nature, including early-phase evolution and internal
SF, and nurture, mainly merger, processes. Their possible analogues in
observations in the Local Universe are suggested in the right panels.
## 8 Discussion: bimodality in galaxy types, nature and/or nurture
Galaxies exhibit a bimodality in many aspects, such as colour, SFR, stellar
age, and morphology, thus they are generally divided into two main classes:
star-forming late-type and quiescent early-type galaxies. A similar bimodal
distribution of extended SF and compact SF galaxies is also found in the early phase of the Universe in the CANDELS survey (Barro et al., 2013, 2014; van Dokkum et al., 2015). The massive compact galaxies are well known as “red/blue
nuggets”, which are expected to be the most likely progenitors of quiescent
early-type ones at low redshifts. In morphology, late- and early-type galaxies
are classified by the bulge-disk decomposition, i.e., the Hubble (1926)
sequence (Sandage & Tammann, 1981). In Du et al. (2020), we show clearly that
the morphologically-defined bulge suffers a severe degeneracy between the bulges and stellar halos derived by kinematics. In this paper, we further show that bulges form through a very different mechanism than stellar halos. This indicates that both nature and nurture should be taken into
account to interpret such a bimodal distribution.
### 8.1 Bimodality in nature
The compact-extended evolutionary pathway can be explained by the long-
standing concept of the spin parameter (Fall & Efstathiou, 1980; Blumenthal et
al., 1984; Mo et al., 1998; Dutton et al., 2007). The basic idea is that
galaxy stellar disks form as a consequence of gas slowly cooling from a hot
gaseous halo while maintaining its specific angular momentum. A
remarkable scaling relation is found between the half-mass radius of a galaxy's stellar distribution and the virial radius, as well as the spin, of its parent halo out to high redshifts (Kravtsov, 2013; van der Wel et al., 2014;
Shibuya et al., 2015; Somerville et al., 2018). Consistently, we also find a
clear signature that galaxy sizes are somewhat controlled by halo angular
momentum. Compact galaxies with a more massive bulge generally have a smaller
spin than extended disk-dominated galaxies at fixed stellar mass, without
being affected by mergers. This difference finally leads to different evolution
along either a compact or extended evolutionary pathway in nature, as
illustrated in Figure 25. Chaotic, violent instabilities, i.e., the so-called
“compaction” phase (Dekel & Burkert, 2014), are likely to be involved in the
compact evolutionary pathway, which may facilitate the growth of bulges.
Moreover, there is a natural downsizing in the compact-extended evolutionary
pathway: a compact phase occurs earlier in more massive galaxies. At $z=2$,
this phase is already over for massive galaxies, forming massive compact
galaxies, while it is still underway in low-mass ones even at $z=0$.
We suggest that less massive quiescent central galaxies are bulge-dominated
galaxies that commonly exist in the mass range of $10^{10.5}M_{\odot}<M_{\rm
s}<10^{11}M_{\odot}$. Similarly, Lagos et al. (2018) also reported a quiescent
population in less massive galaxies that have not had any mergers in the EAGLE
simulation. Therefore, not all compact galaxies (nuggets) evolve into
elliptical galaxies. By breaking the degeneracy between bulges and halos, we
showed that many massive compact galaxies become the bulges in bulge-dominated
galaxies, as proposed in Dullo & Graham (2013); Graham (2013); Graham et al.
(2015). A consistent conclusion is reached in Wellons et al. (2015, 2016)
using the Illustris simulation.
### 8.2 Bimodality in nurture
In the later phase, relatively dry mergers start to be destructive for disk
structures in galaxies; i.e., the nurture effect in Figure 25. In the general
picture, compact galaxies are believed to eventually become more extended
quiescent galaxies via the cumulative effect of minor mergers, which drive the size increase of massive quiescent galaxies by building up a diffuse envelope (Naab
et al., 2009; Hopkins et al., 2010; Oser et al., 2010; Porter et al., 2014;
Huang et al., 2016). This picture is partially motivated by the fact that the number density
of quiescent galaxies increases by a factor of $\sim 10$ during $z<2$ in
observations (Brammer et al., 2011), which cannot be sufficiently explained by
the major merger rate during this time (Robaina et al., 2010; Brammer et al.,
2011). Genel et al. (2018) found a similar trend in the star-forming and quiescent galaxies from TNG100, in good agreement with observations (Shen et
al., 2003; van der Wel et al., 2014). Mergers, however, are unable to account
for the density evolution of less massive quiescent galaxies. Other primary
processes, e.g., the formation of bulge-dominated galaxies suggested in
Section 8.1, are required to quench the star-forming galaxies in the low-mass
end to explain the remaining growth of quiescent galaxies since $z\sim 2$.
In our picture, classical bulges are compact structures formed mainly in the
early phase, while elliptical galaxies are diffuse objects that are dominated
by halos formed in the later phase. Both the classical bulges and the cores of
massive elliptical galaxies, however, are likely to be formed through similarly fast SF at high redshifts, as evidenced by recent observations of red
spiral galaxies (Hao et al., 2019; Guo et al., 2020; Zhou et al., 2020). The
difference between the bulge- and halo-dominated galaxies increases in their
subsequent evolution, largely due to major mergers that lead to a sharp
increase in both mass and size of halo-dominated galaxies.
Therefore, our results suggest that quiescent ETGs are composed of halo- and
bulge-dominated galaxies. The bimodality in galaxy types is, thus, contributed
by both nature and nurture processes. An accurate decomposition of bulges and
halos is required to understand the formation and evolution of galaxies and
their structures.
## 9 Summary
In this work we have studied the origin of galactic stellar structures on the
basis of a physically-motivated kinematic decomposition of galaxies from the
TNG50 simulation at $z=0$. In particular, we have selected about 500 central
galaxies in the $10^{10-11.5}\ M_{\odot}$ stellar mass range and we have
applied the auto-GMM method to isolate disks, bulges, and stellar halos in
each of them. We have identified three typical kinds of galaxies – namely,
those dominated by disk, bulge, and stellar halo structures, respectively –
and have studied their evolution through cosmic time.
We find that the growth of structures is characterized and connected by three
fundamental regimes: an early-phase evolution ($z\gtrsim 2$), followed by
late-phase internal processes as well as late-phase external interactions. Our
findings motivate an overall framework that can be illustrated as in Figure
25.
Galaxies that have massive bulges or disks, but low-mass or negligible stellar
halos, have been weakly affected by mergers since $z\sim 2$. We find clear
indications that bulge- and disk-dominated galaxies evolve along distinct
evolutionary pathways, one compact and one extended, respectively, where
galaxy sizes are likely to be controlled by the angular momentum obtained by
their parent dark matter (proto)halos at early times. In this picture, in the
case of low angular momentum, galaxies form stars efficiently in a compact
way, by forming bulge-dominated galaxies and by building up massive bulge
structures fast in their early-phase evolution. For high angular momentum dark
matter halos, disk-dominated galaxies form: stellar disks form as a
consequence of gas cooling, during which its specific angular momentum is
somewhat maintained. In the late phase, both bulge- and disk-dominated
galaxies can assemble disky structures that drive the increase of their sizes.
This picture suggests that galaxies without diffuse stellar envelopes, i.e.,
without stellar halo structures, can be used as clean fossil records of their
early-phase evolution and properties.
There is a natural downsizing in the compact-vs.-extended evolutionary
picture: more massive galaxies form their bulges earlier. In the case of
$M_{\rm s}>10^{10.5}$ at $z=0$, progenitors of bulge-dominated galaxies
generally have already been rather massive and compact objects that have
similar properties to “nuggets” observed at high redshifts. This also suggests
that some nuggets are likely to become the bulges of massive galaxies in the
local Universe. In the later phase, such massive bulge-dominated galaxies are
quenched inside-out, at least according to TNG50. In less-massive bulge-
dominated galaxies, star formation occurs within the disk via gas
accretion until recent times.
Galaxies with massive halos are normal elliptical galaxies whose formation is
dominated by major mergers at recent times, i.e. in the later phase. The
progenitors of halo-dominated galaxies can therefore be either compact nuggets
or extended disk galaxies. Mergers, especially major ones, are able to destroy
the stellar disks of galaxies, which in turn contribute to the formation of
massive stellar halos. However, mergers alone cannot quench star formation,
and disky structures can also regenerate after major mergers.
In Du et al. (2020), we showed that stellar halos are significantly mixed up with kinematically-derived bulges, so that it is hard for the two structures to be properly decomposed based on morphology, i.e., photometry. In
this paper, we have further shown that the inaccurate classification and
definition of bulges and stellar halos is destined to cause further
difficulties in our understanding of the formation histories of galaxies. This
work provides an initial framework for future attempts to link galactic
structures to galaxy formation physics in detail.
This work was supported by the National Natural Science Foundation of China (11721303,
11991052) and the National Key R&D Program of China (2016YFA0400702). The
authors thank Vicente Rodriguez-Gomez for his contribution with running SKIRT
to generate the synthetic images of Figure 5. M.D. is also supported by the
grants “National Postdoctoral Program for Innovative Talents” (#8201400810)
and “Postdoctoral Science Foundation of China” (#8201400927) from the China
Postdoctoral Science Foundation. V.P.D. was supported by STFC Consolidated
grant ST/R000786/1. The TNG50 simulation used in this work, one of the
flagship runs of the IllustrisTNG project, has been run on the HazelHen Cray
XC40-system at the High Performance Computing Center Stuttgart as part of
project GCS-ILLU of the Gauss Centre for Supercomputing (GCS). This work is
also strongly supported by the High-performance Computing Platform of Peking
University, China. The analysis was performed using Pynbody (Pontzen et al.,
2013).
## Appendix A
For comparison, we perform the same analysis on central galaxies in the same
mass range from the TNG100 run. Figure 26 shows that the galaxies from TNG100
follow a similar trend to those from TNG50 shown in Figure 6. Galaxies with
massive spheroidal components are relatively compact and quiescent objects. It
is worth mentioning that, at the low-mass ($M_{\rm s}\lesssim
10^{10.3}M_{\odot}$) end, many galaxies dominated by spheroidal components are
still actively forming stars, which is consistent with the blue spheroid issue
reported in Rodriguez-Gomez et al. (2019). The increase of bulge mass fraction
in TNG100 may be due to the overheating of disk stars in central regions where
the dynamical time is shortest. TNG50 produces much more realistic low-mass
galaxies, possibly due to the dramatic increase of the numerical resolution
that resolves disk thicknesses well (Pillepich et al., 2019). Figure 27 shows
the evolution of some basic properties of disk- and bulge-dominated galaxies
from TNG100. We can see similar compact and extended evolutionary pathways in bulge- and disk-dominated galaxies from TNG100.
Figure 26: Relation between global properties and the mass fractions of
kinematically-derived structures for TNG100 central galaxies. This image uses
the same convention as Figure 6. In low-mass galaxies, TNG100 cannot resolve
disks well, thus significantly overproducing kinematic bulges.
Figure 27: From top to bottom: evolution of stellar and dark matter masses,
half-mass radius $r_{\rm e}$, and spin parameter for bulge- and disk-dominated
galaxies selected in TNG100. This image uses the same convention as Figure 9.
Bulge-dominated galaxies generally have lower spins than disk-dominated ones,
which may lead to evolution along a more compact evolutionary pathway.
## References
* Abadi et al. (2003) Abadi, M. G., Navarro, J. F., Steinmetz, M., & Eke, V. R. 2003, ApJ, 597, 21
* Aguerri et al. (2001) Aguerri, J. A. L., Balcells, M., & Peletier, R. F. 2001, A&A, 367, 428
* Amorisco (2017) Amorisco, N. C. 2017, MNRAS, 464, 2882
* Barnes (1988) Barnes, J. E. 1988, ApJ, 331, 699
* Barro et al. (2013) Barro, G., Faber, S. M., Pérez-González, P. G., et al. 2013, ApJ, 765, 104
* Barro et al. (2014) —. 2014, ApJ, 791, 52
* Bell et al. (2017) Bell, E. F., Monachesi, A., Harmsen, B., et al. 2017, ApJ, 837, L8
* Bezanson et al. (2009) Bezanson, R., van Dokkum, P. G., Tal, T., et al. 2009, ApJ, 697, 1290
* Blumenthal et al. (1984) Blumenthal, G. R., Faber, S. M., Primack, J. R., & Rees, M. J. 1984, Nature, 311, 517
* Bournaud et al. (2011) Bournaud, F., Chapon, D., Teyssier, R., et al. 2011, ApJ, 730, 4
* Brammer et al. (2011) Brammer, G. B., Whitaker, K. E., van Dokkum, P. G., et al. 2011, ApJ, 739, 24
* Breda et al. (2020) Breda, I., Papaderos, P., & Gomes, J.-M. 2020, A&A, 640, A20
* Bullock et al. (2001) Bullock, J. S., Dekel, A., Kolatt, T. S., et al. 2001, ApJ, 555, 240
* Cappellari et al. (2011a) Cappellari, M., Emsellem, E., Krajnović, D., et al. 2011a, MNRAS, 413, 813
* Cappellari et al. (2011b) —. 2011b, MNRAS, 416, 1680
* Ceverino et al. (2015) Ceverino, D., Dekel, A., Tweed, D., & Primack, J. 2015, MNRAS, 447, 3291
* Crain et al. (2015) Crain, R. A., Schaye, J., Bower, R. G., et al. 2015, MNRAS, 450, 1937
* Daddi et al. (2010) Daddi, E., Bournaud, F., Walter, F., et al. 2010, ApJ, 713, 686
* Damjanov et al. (2011) Damjanov, I., Abraham, R. G., Glazebrook, K., et al. 2011, ApJ, 739, L44
* Davis et al. (1985) Davis, M., Efstathiou, G., Frenk, C. S., & White, S. D. M. 1985, ApJ, 292, 371
* Deason et al. (2016) Deason, A. J., Mao, Y.-Y., & Wechsler, R. H. 2016, ApJ, 821, 5
* Dekel & Burkert (2014) Dekel, A., & Burkert, A. 2014, MNRAS, 438, 1870
* Dekel et al. (2009) Dekel, A., Sari, R., & Ceverino, D. 2009, ApJ, 703, 785
* Dekel et al. (2020) Dekel, A., Lapiner, S., Ginzburg, O., et al. 2020, MNRAS, 496, 5372
* Dessauges-Zavadsky et al. (2015) Dessauges-Zavadsky, M., Zamojski, M., Schaerer, D., et al. 2015, A&A, 577, A50
* Doménech-Moral et al. (2012) Doménech-Moral, M., Martínez-Serrano, F. J., Domínguez-Tenreiro, R., & Serna, A. 2012, MNRAS, 421, 2510
* D’Souza & Bell (2018) D’Souza, R., & Bell, E. F. 2018, MNRAS, 474, 5300
* Du et al. (2020) Du, M., Ho, L. C., Debattista, V. P., et al. 2020, ApJ, 895, 139
* Du et al. (2019) Du, M., Ho, L. C., Zhao, D., et al. 2019, ApJ, 884, 129
* Dubois et al. (2016) Dubois, Y., Peirani, S., Pichon, C., et al. 2016, MNRAS, 463, 3948
* Dullo & Graham (2013) Dullo, B. T., & Graham, A. W. 2013, ApJ, 768, 36
* Dutton et al. (2007) Dutton, A. A., van den Bosch, F. C., Dekel, A., & Courteau, S. 2007, ApJ, 654, 27
* Emsellem et al. (2007) Emsellem, E., Cappellari, M., Krajnović, D., et al. 2007, MNRAS, 379, 401
* Emsellem et al. (2011) —. 2011, MNRAS, 414, 888
* Engler et al. (2021) Engler, C., Pillepich, A., Joshi, G. D., et al. 2021, MNRAS, 500, 3957
* Fall & Efstathiou (1980) Fall, S. M., & Efstathiou, G. 1980, MNRAS, 193, 189
* Gadotti (2009) Gadotti, D. A. 2009, MNRAS, 393, 1531
* Gadotti & Sánchez-Janssen (2012) Gadotti, D. A., & Sánchez-Janssen, R. 2012, MNRAS, 423, 877
* Gao & Ho (2017) Gao, H., & Ho, L. C. 2017, ApJ, 845, 114
* Geach et al. (2011) Geach, J. E., Smail, I., Moran, S. M., et al. 2011, ApJ, 730, L19
* Genel et al. (2015) Genel, S., Fall, S. M., Hernquist, L., et al. 2015, ApJ, 804, L40
* Genel et al. (2014) Genel, S., Vogelsberger, M., Springel, V., et al. 2014, MNRAS, 445, 175
* Genel et al. (2018) Genel, S., Nelson, D., Pillepich, A., et al. 2018, MNRAS, 474, 3976
* Genzel et al. (2011) Genzel, R., Newman, S., Jones, T., et al. 2011, ApJ, 733, 101
* Girard et al. (2018) Girard, M., Dessauges-Zavadsky, M., Schaerer, D., et al. 2018, A&A, 613, A72
* Graham (2013) Graham, A. W. 2013, T.D.Oswalt & W.C.Keel (Eds.), Springer Publishing (arXiv:1108.0997), 6, 91
* Graham et al. (2015) Graham, A. W., Dullo, B. T., & Savorgnan, G. A. D. 2015, ApJ, 804, 32
* Guo et al. (2020) Guo, R., Hao, C.-N., Xia, X., et al. 2020, ApJ, 897, 162
* Hao et al. (2019) Hao, C.-N., Shi, Y., Chen, Y., et al. 2019, ApJ, 883, L36
* Harmsen et al. (2017) Harmsen, B., Monachesi, A., Bell, E. F., et al. 2017, MNRAS, 466, 1491
* Hemler et al. (2020) Hemler, Z. S., Torrey, P., Qi, J., et al. 2020, arXiv e-prints, arXiv:2007.10993
* Hopkins et al. (2006) Hopkins, P. F., Hernquist, L., Cox, T. J., et al. 2006, ApJS, 163, 1
* Hopkins et al. (2008) Hopkins, P. F., Hernquist, L., Cox, T. J., & Kereš, D. 2008, ApJS, 175, 356
* Hopkins et al. (2009) Hopkins, P. F., Lauer, T. R., Cox, T. J., Hernquist, L., & Kormendy, J. 2009, ApJS, 181, 486
* Hopkins et al. (2010) Hopkins, P. F., Bundy, K., Croton, D., et al. 2010, ApJ, 715, 202
* Huang et al. (2016) Huang, S., Ho, L. C., Peng, C. Y., Li, Z.-Y., & Barth, A. J. 2016, ApJ, 821, 114
* Joshi et al. (2020) Joshi, G. D., Pillepich, A., Nelson, D., et al. 2020, MNRAS, 496, 2673
* Khochfar & Silk (2006) Khochfar, S., & Silk, J. 2006, ApJ, 648, L21
* Kormendy (1977) Kormendy, J. 1977, ApJ, 218, 333
* Kormendy & Ho (2013) Kormendy, J., & Ho, L. C. 2013, ARA&A, 51, 511
* Kormendy & Kennicutt (2004) Kormendy, J., & Kennicutt, Jr., R. C. 2004, ARA&A, 42, 603
* Kravtsov (2013) Kravtsov, A. V. 2013, ApJ, 764, L31
* Lagos et al. (2018) Lagos, C. d. P., Stevens, A. R. H., Bower, R. G., et al. 2018, MNRAS, 473, 4956
* Law et al. (2009) Law, D. R., Steidel, C. C., Erb, D. K., et al. 2009, ApJ, 697, 2057
* Marinacci et al. (2018) Marinacci, F., Vogelsberger, M., Pakmor, R., et al. 2018, MNRAS, 480, 5113
* Merritt et al. (2020) Merritt, A., Pillepich, A., van Dokkum, P., et al. 2020, MNRAS, 495, 4570
* Mo et al. (1998) Mo, H. J., Mao, S., & White, S. D. M. 1998, MNRAS, 295, 319
* Monachesi et al. (2019) Monachesi, A., Gómez, F. A., Grand , R. J. J., et al. 2019, MNRAS, 485, 2589
* Naab et al. (2009) Naab, T., Johansson, P. H., & Ostriker, J. P. 2009, ApJ, 699, L178
* Naab et al. (2006) Naab, T., Khochfar, S., & Burkert, A. 2006, ApJ, 636, L81
* Naiman et al. (2018) Naiman, J. P., Pillepich, A., Springel, V., et al. 2018, MNRAS, 477, 1206
* Nelson et al. (2015) Nelson, D., Pillepich, A., Genel, S., et al. 2015, Astronomy and Computing, 13, 12
* Nelson et al. (2018) Nelson, D., Pillepich, A., Springel, V., et al. 2018, MNRAS, 475, 624
* Nelson et al. (2019a) —. 2019a, MNRAS, 490, 3234
* Nelson et al. (2019b) Nelson, D., Springel, V., Pillepich, A., et al. 2019b, Computational Astrophysics and Cosmology, 6, 2
* Nelson et al. (2020) Nelson, D., Sharma, P., Pillepich, A., et al. 2020, arXiv e-prints, arXiv:2005.09654
* Newman et al. (2010) Newman, A. B., Ellis, R. S., Treu, T., & Bundy, K. 2010, ApJ, 717, L103
* Obreja et al. (2018) Obreja, A., Macciò, A. V., Moster, B., et al. 2018, MNRAS, 477, 4915
* Oser et al. (2010) Oser, L., Ostriker, J. P., Naab, T., Johansson, P. H., & Burkert, A. 2010, ApJ, 725, 2312
* Park et al. (2019) Park, M.-J., Yi, S. K., Dubois, Y., et al. 2019, ApJ, 883, 25
* Parry et al. (2009) Parry, O. H., Eke, V. R., & Frenk, C. S. 2009, MNRAS, 396, 1972
* Penoyre et al. (2017) Penoyre, Z., Moster, B. P., Sijacki, D., & Genel, S. 2017, MNRAS, 468, 3883
* Pillepich et al. (2014) Pillepich, A., Vogelsberger, M., Deason, A., et al. 2014, MNRAS, 444, 237
* Pillepich et al. (2018a) Pillepich, A., Nelson, D., Hernquist, L., et al. 2018a, MNRAS, 475, 648
* Pillepich et al. (2018b) Pillepich, A., Springel, V., Nelson, D., et al. 2018b, MNRAS, 473, 4077
* Pillepich et al. (2019) Pillepich, A., Nelson, D., Springel, V., et al. 2019, MNRAS, 490, 3196
* Pontzen et al. (2013) Pontzen, A., Roškar, R., Stinson, G. S., et al. 2013, pynbody: Astrophysics Simulation Analysis for Python, , , astrophysics Source Code Library, ascl:1305.002
* Pop et al. (2018) Pop, A.-R., Pillepich, A., Amorisco, N. C., & Hernquist, L. 2018, MNRAS, 480, 1715
* Porter et al. (2014) Porter, L. A., Somerville, R. S., Primack, J. R., et al. 2014, MNRAS, 445, 3092
* Pulsoni et al. (2020) Pulsoni, C., Gerhard, O., Arnaboldi, M., et al. 2020, arXiv e-prints, arXiv:2009.01823
* Robaina et al. (2010) Robaina, A. R., Bell, E. F., van der Wel, A., et al. 2010, ApJ, 719, 844
* Rodriguez-Gomez et al. (2015) Rodriguez-Gomez, V., Genel, S., Vogelsberger, M., et al. 2015, MNRAS, 449, 49
* Rodriguez-Gomez et al. (2016) Rodriguez-Gomez, V., Pillepich, A., Sales, L. V., et al. 2016, MNRAS, 458, 2371
* Rodriguez-Gomez et al. (2019) Rodriguez-Gomez, V., Snyder, G. F., Lotz, J. M., et al. 2019, MNRAS, 483, 4140
* Sales et al. (2010) Sales, L. V., Navarro, J. F., Schaye, J., et al. 2010, MNRAS, 409, 1541
* Sales et al. (2012) Sales, L. V., Navarro, J. F., Theuns, T., et al. 2012, MNRAS, 423, 1544
* Sandage & Tammann (1981) Sandage, A., & Tammann, G. A. 1981, A Revised Shapley-Ames Catalog of Bright Galaxies
* Schaye et al. (2015) Schaye, J., Crain, R. A., Bower, R. G., et al. 2015, MNRAS, 446, 521
* Schwarzschild (1979) Schwarzschild, M. 1979, ApJ, 232, 236
* Shen et al. (2003) Shen, S., Mo, H. J., White, S. D. M., et al. 2003, MNRAS, 343, 978
* Shibuya et al. (2015) Shibuya, T., Ouchi, M., & Harikane, Y. 2015, ApJS, 219, 15
* Sijacki et al. (2015) Sijacki, D., Vogelsberger, M., Genel, S., et al. 2015, MNRAS, 452, 575
* Somerville et al. (2018) Somerville, R. S., Behroozi, P., Pandya, V., et al. 2018, MNRAS, 473, 2714
* Springel et al. (2001) Springel, V., White, S. D. M., Tormen, G., & Kauffmann, G. 2001, MNRAS, 328, 726
* Springel et al. (2018) Springel, V., Pakmor, R., Pillepich, A., et al. 2018, MNRAS, 475, 676
* Swinbank et al. (2012) Swinbank, A. M., Smail, I., Sobral, D., et al. 2012, ApJ, 760, 130
* Tacchella et al. (2016) Tacchella, S., Dekel, A., Carollo, C. M., et al. 2016, MNRAS, 457, 2790
* Terrazas et al. (2020) Terrazas, B. A., Bell, E. F., Pillepich, A., et al. 2020, MNRAS, 493, 1888
* Toomre (1977) Toomre, A. 1977, in Evolution of Galaxies and Stellar Populations, ed. B. M. Tinsley & R. B. G. Larson, D. Campbell, 401
* Valluri et al. (2004) Valluri, M., Merritt, D., & Emsellem, E. 2004, ApJ, 602, 66
* van den Bosch et al. (2008) van den Bosch, R. C. E., van de Ven, G., Verolme, E. K., Cappellari, M., & de Zeeuw, P. T. 2008, MNRAS, 385, 647
* van der Wel et al. (2009) van der Wel, A., Rix, H.-W., Holden, B. P., Bell, E. F., & Robaina, A. R. 2009, ApJ, 706, L120
* van der Wel et al. (2014) van der Wel, A., Franx, M., van Dokkum, P. G., et al. 2014, ApJ, 788, 28
* van Dokkum et al. (2009) van Dokkum, P. G., Kriek, M., & Franx, M. 2009, Nature, 460, 717
* van Dokkum et al. (2008) van Dokkum, P. G., Franx, M., Kriek, M., et al. 2008, ApJ, 677, L5
* van Dokkum et al. (2015) van Dokkum, P. G., Nelson, E. J., Franx, M., et al. 2015, ApJ, 813, 23
* Vogelsberger et al. (2014a) Vogelsberger, M., Genel, S., Springel, V., et al. 2014a, MNRAS, 444, 1518
* Vogelsberger et al. (2014b) —. 2014b, Nature, 509, 177
* Weinberger et al. (2017) Weinberger, R., Springel, V., Hernquist, L., et al. 2017, MNRAS, 465, 3291
* Weinberger et al. (2018) Weinberger, R., Springel, V., Pakmor, R., et al. 2018, MNRAS, 479, 4056
* Wellons et al. (2015) Wellons, S., Torrey, P., Ma, C.-P., et al. 2015, MNRAS, 449, 361
* Wellons et al. (2016) —. 2016, MNRAS, 456, 1030
* Whitaker et al. (2012) Whitaker, K. E., Kriek, M., van Dokkum, P. G., et al. 2012, ApJ, 745, 179
* Zanisi et al. (2020) Zanisi, L., Huertas-Company, M., Lanusse, F., et al. 2020, arXiv e-prints, arXiv:2007.00039
* Zhou et al. (2020) Zhou, S., Li, C., Hao, C.-N., et al. 2020, arXiv e-prints, arXiv:2011.13749
* Zhu et al. (2018a) Zhu, L., van den Bosch, R., van de Ven, G., et al. 2018a, MNRAS, 473, 3000
* Zhu et al. (2018b) Zhu, L., van de Ven, G., van den Bosch, R., et al. 2018b, Nature Astronomy, 2, 233
* Zolotov et al. (2015) Zolotov, A., Dekel, A., Mandelker, N., et al. 2015, MNRAS, 450, 2327
Novel Non-Invasive In-house Fabricated Wearable System with a Hybrid Algorithm
for Fetal Movement Recognition
Upekha Delay1,2, Thoshara Nawarathne1, Sajan Dissanayake1,3¤, Samitha
Gunarathne1, Thanushi Withanage1, Roshan Godaliyadda1, Chathura Rathnayake2,
Parakrama Ekanayake1, Janaka Wijayakulasooriya1,
1 Department of Electrical and Electronic Engineering, Faculty of Engineering,
University of Peradeniya, Peradeniya [20400], Sri Lanka.
2 Department of Obstetrics and Gynacology, Faculty of Medicine, University of
Peradeniya, Peradeniya [20400], Sri Lanka.
These authors contributed equally to this work.
* Corresponding author<EMAIL_ADDRESS>
## Abstract
Fetal movement count monitoring is one of the most commonly used methods of
assessing fetal well-being. While a few methods are available to monitor fetal movements, they suffer from several adverse qualities, such as unreliability and the inability to be used in a non-clinical setting. Therefore,
this research was conducted to design a complete system that will enable
pregnant mothers to monitor fetal movement at home. This system consists of a
non-invasive, non-transmitting sensor unit that can be fabricated at a low
cost. An accelerometer was utilized as the primary sensor, and a microcontroller-based circuit was implemented. Clinical testing was conducted
utilizing this sensor unit. Two phases of clinical testing procedures were
done, and readings from more than 120 pregnant mothers were taken. Validation
was done by conducting an abdominal ultrasound scan which was utilized as the
ground truth during the second phase of the clinical testing procedure. A
clinical survey was also conducted in parallel with clinical testings in order
to improve the sensor unit as well as to improve the final system. Four
different signal processing algorithms were implemented on the data set, and their performances were compared. Consequently, the most
feasible as well as the best performing algorithm was determined and it was
utilized in the final system. Furthermore, a mobile application was also
developed to be used with the sensor unit by pregnant mothers. Finally, a
complete end-to-end method to monitor fetal movement in a non-clinical setting was presented by the proposed system.
## Introduction
During pregnancy, the main aim of the parents, as well as the obstetricians,
is to maintain the health of the fetus as well as the mother. Obstetricians
use different methods to assess fetal health. Among them, monitoring fetal movement is the most commonly used method. It is also the simplest and the
most economical method available[1].
Several studies have been carried out to classify the different types of fetal
movements [2]. It was mentioned that fetal movements can be classified based
on the amplitude and the speed of the specific activity. The amplitude could
be weak vs strong and the length of the activity could be short vs sustained.
In a study [3], seven different fetal movement types were identified
using Ultrasound imaging and Doppler Ultrasound. They were Startles, General
movements, Hiccups, Fetal breathing movements, Isolated arm or leg movement,
Twitches and Clonic movements. It was also noted that there is a striking
similarity between the observed movements and the movements observed in a baby
after birth.
In a survey conducted 99.9% of pregnant mothers reported that it was important
to feel and count baby movements[4]. Fetal movement patterns may differ between trimesters of the pregnancy as well as between individual fetuses and mothers. Studies have shown that the perception of decreased fetal movement is
associated with stillbirth[5]. Any difference in the fetal movement pattern
can indicate that the fetus is unwell. The change in the pattern could be
reduced fetal movement, weak fetal movement or intense fetal movement. Hence,
change in fetal movement pattern may be a sign of an unhealthy fetus [3]. It
was also shown that timely reporting to health care providers when experiencing decreased fetal movement may prevent perinatal morbidity and mortality[2]. A study carried out on 305 women who experienced reduced fetal
movement after 28 weeks of gestation showed that 22.1% pregnancies ended in
complications such as small-for-gestational-age infants[6]. Another study
reported that 54.7% of stillbirth cases involved women who presented with reduced fetal movements[7]. In another study, 20 out of 23 growth-restricted
fetuses were identified prior to birth by monitoring fetal movement counting
whereas only 12 out of 20 growth-restricted fetuses were identified with
standard antenatal care without fetal movement counting [8]. Also, several
studies have been conducted to identify fetal movement patterns in each
trimester as well as fetal movement patterns of mothers[6][7][8]. Another
study conducted by the Royal College of Obstetricians and Gynecologists has
highlighted the lack of studies of fetal movement patterns[9]. Therefore, it
can be concluded that monitoring fetal movement patterns plays a major role in fetal well-being. Further studies need to be conducted on how fetal movement patterns affect the well-being of the fetus.
Currently, fetal movements can be quantified by conducting an ultrasound scan
or an MRI scan[10][11]. These can only be conducted in a clinical setup and
can only be done by a trained technician [10]. In a study, an automated method
of analysing fetal growth using 2D ultrasound was developed [11]. This was
done due to the lack of trained sonographers in developing countries. This indicates that there is a shortage of trained technicians to conduct these tests.
As a result, these tests are expensive and can only be conducted in a short
window of time.
Few research studies were conducted on monitoring fetal movement via non-
invasive techniques. A study conducted using a single accelerometer placed on
the mother’s abdomen employed a threshold signal processing method [12]. This
study concluded that the thresholding method employed to identify fetal
movement performed poorly. Also, it was found that the acceleration signals
are corrupted by maternal movements such as laugh, cough and hiccup.
Therefore, a more complex signal analysis method needs to be employed. Another
study [13] employed a capacitive accelerometer to monitor fetal movements
while the mother was asleep. In the initial part of the experiment, an
ultrasonographer was used parallel to the device to validate data acquired.
During this experiment, three types of fetal movements were recorded and their
positive hit rates were calculated. Gross movements had a positive hit rate of 38.5% to 23.5%, depending on the fetal age. Similarly, isolated limb movements had a positive hit rate of 5% to 13%, and breathing movements had a positive hit rate of 4.3% to 22.8%. Another study [14] was conducted using
acoustic sensors and accelerometer sensors. This was also conducted in a
clinical setup with ultrasound validation. They were able to discriminate
fetal startle movements from general movements with 72.1% accuracy. Several
other research studies also have been conducted on this topic[15, 16, 17].
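A minimal amplitude-threshold detector of the kind described in [12] can be sketched as follows. This is an illustrative assumption about how such a detector might work; the function name, the baseline removal via the median, and the threshold and gap parameters are not details taken from that study:

```python
import numpy as np

def threshold_detect(z_accel, fs, threshold, min_gap_s=1.0):
    """Flag candidate movement events where |z - baseline| exceeds a threshold.

    z_accel: 1-D array of Z-axis acceleration samples
    fs: sampling rate in Hz
    threshold: amplitude threshold, in the same units as z_accel
    min_gap_s: crossings closer than this are merged into one event
    Returns a list of (start_s, end_s) pairs in seconds.
    """
    deviation = np.abs(z_accel - np.median(z_accel))  # remove DC/gravity offset
    above = np.flatnonzero(deviation > threshold)
    if above.size == 0:
        return []
    events = [[above[0]]]
    for idx in above[1:]:
        if idx - events[-1][-1] <= min_gap_s * fs:   # same event, extend it
            events[-1].append(idx)
        else:                                        # a new, separate event
            events.append([idx])
    return [(e[0] / fs, e[-1] / fs) for e in events]
```

As [12] itself concluded, a detector this simple is easily confused by maternal movements such as laugh, cough and hiccup, which is why more complex signal analysis methods are needed.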
In this study, we investigate whether an accelerometer sensor can be used to
monitor the fetal movement count in a non-clinical setting.
## Materials and methods
### Hardware
Few studies were conducted on wearable sensor-based fetal monitoring devices
[12][13][18]. The most common sensors used in previous studies were
accelerometers and acoustic sensors. Furthermore, it was mentioned in a study
conducted that it is possible to monitor fetal movements as well as heartbeat
using an accelerometric sensor. However, it was also mentioned that this is
realizable only after the 30th week of gestation. This is due to the lack of
strength of the signals generated when the fetus is in the early gestational
stages[19]. Therefore, it was decided that an accelerometer should be used to capture data.
A device was initially developed to acquire signals from pregnant mothers.
When selecting a sensor for this device, several factors were considered. One of the main considerations was using a non-invasive sensor to acquire
data. This was to reduce the impact of the device on the fetus as well as on
the mother. Furthermore, the main objective of this was to come up with a
wearable device which can be used by mothers daily. Therefore the ergonomics
of the sensor also played a major role. The sensor to be selected should be
light in weight and small in size. Also, it should be easy to wear on the
mother’s abdomen. Since this device is intended to be sold commercially, cost also plays a major role.
Considering all these factors our research team decided to use the sensor MPU
9250. It is a multi-chip module which houses a 3-Axis accelerometer and a
3-Axis gyroscope. It has inbuilt analogue to digital converters to convert the
signals received from the accelerometer and the gyroscope. The data received
from the sensor is transferred to a removable micro SD card via a
microcontroller. This enables the device to operate independently of a
computer and also eliminates the need for a wireless transfer method which may
have adverse effects on the fetus[18].
A previous study was conducted using accelerometers to decide the optimum
number of sensors to be used and the sensor positioning. During this study,
five accelerometers were used and they were positioned on the abdomen with the
navel as the centre mark. Additionally, a reference sensor was placed on the
back. Readings of 6 pregnant mothers whose gestational age was 30 weeks or more
were taken. The maternal perception was considered to be the ground truth
while only the fetal kicks were taken into consideration. In this test, the
increase in the number of sensors did not have a considerable effect on the positive predictive value: although the value increased with more sensors, the difference was not notable[20].
Therefore, it was decided that a single sensor should be used in the setup.
This idea was further supported by the notion that this will reduce the
computational capacity required as well as the size of the data recorded which
in turn will improve the speed of analysis as well as data transfer.
The clinical testing procedure was conducted in two phases. During the initial phase, the mother’s perception of fetal movement was considered to be the ground truth. However, during the second phase, ultrasound readings were utilized for
validation and as the ground truth. During both of these phases, a belt-like
device was used to collect data. This can be seen in Fig 1.
Fig 1: The in-house designed and fabricated device.
Initially, the MPU 9250 sensor was embedded into a rubber sole. This was to adhere the sensor to the abdomen securely as well as to prevent any
discomfort to the mother due to the sharp edges of the sensor. Then this
rubber sole was stitched into a fabric belt. The material of the fabric belt was also chosen with the mother’s comfort in mind. Several factors, such as the material’s ability to absorb perspiration, its flexibility, how well it moulds to the mother’s abdomen, and its colour, were considered when choosing the material. This selection was made by considering
the responses of several pregnant mothers who were interviewed during the
design process. The dimensions of this belt were designed in such a way that it could be worn for an extended time period. Furthermore, the sensor was
placed at the centre of the belt in order to obtain a uniform sensor
positioning on every mother.
During the clinical testing period, the microcontroller as well as the SD card
were housed in a separate box. Two buttons were also included in this setup.
This housing can be seen in Fig 1. One of the buttons was used to obtain
the mother’s perception of fetal movements. When taking readings, the mother was
advised to press the button on the device whenever a fetal movement is felt.
The other button was used to record maternal movements such as laugh, cough
and hiccups.
#### Ethical clearance
This study was approved by the Ethics Review Committee, Faculty of Medicine,
University of Peradeniya. Approval was granted to conduct research project No. 2018/EC/43, entitled “Fetal movement analysis for condition monitoring”, at the
Teaching Hospital, Peradeniya.
### Clinical Tests
The clinical testing procedure was conducted in two phases. During phase one, the mother’s perception of fetal movements was considered to be the ground truth.
During this phase, the design and development of the device were also done.
Some adjustments were made to the device during this period according to the
feedback received from pregnant mothers. At the conclusion of phase one, a
proper device was designed and a proper method of taking readings was
developed. Then at phase two ultrasound readings were considered to be the
ground truth. The device which was finalized in phase one was utilized in this
phase to obtain readings.
Phase 1: During this phase, readings from 127 women who were inpatients at Peradeniya Teaching Hospital, Sri Lanka were taken. The gestational age of the group
varied from 28 weeks to 40+ weeks and most were singleton pregnancies. A twin
pregnancy and a quadruplet pregnancy were also recorded. However, only
the readings from singleton pregnancies were initially considered.
Furthermore, the occurrence of fetal movements and the occurrence of maternal
movements were recorded utilizing the button system mentioned above.
Phase 2: During this phase, readings from 15 mothers were taken, all of whom were inpatients at Peradeniya Teaching Hospital, Sri Lanka. Similar to phase
one the gestation period of these mothers varied from 28 weeks to 40+ weeks
and all the pregnancies were singleton. Furthermore, during this phase, each
mother underwent an abdominal ultrasound, and the fetal movements were
monitored and recorded by a trained technician while the developed device
recorded accelerometric data. Other maternal movements were also recorded.
Moreover, the mother’s perception was also recorded.
During both these phases, each pregnant mother’s written consent was obtained prior to acquiring data, and a thorough explanation of the data acquisition process as well as the final target of the research was given. Then basic details
about the mother as well as the fetus were recorded. The data collected were,
mother’s age, fetal age, fetal gender, number of previous pregnancies,
expected delivery method and additional comments. After that the belt
containing the sensor was placed around the mother’s abdomen and secured to
reduce the movements of the belt relative to the mother’s abdomen. Then the
mother was advised to stay in a comfortable position. In phase one, most mothers chose to lie down while a few sat up. However, during phase two, every mother had to lie down in order to accommodate the ultrasound probe
as well as the device. Each session was approximately 20 minutes long and for
every mother, a single session was conducted. Therefore, more than 5 hours of
readings were acquired.
#### Class Identification
When considering the maternal movements such as laugh, cough and hiccup, it
was observed that maternal laugh is the most frequent maternal movement.
Furthermore, it has the most similarities to the fetal movement signal
observed. Therefore, during phase one three classes were introduced. They were
fetal movement, maternal laugh and mothers respiratory movements. During phase
two, three types of fetal movements were observed. They are limb movements,
rotations and whole body movements. From these three types, only the fetal
limb movements were considered as fetal movements when conducting the
analysis. Therefore, the three classes identified during phase two are fetal
limb movements, maternal laugh and mother’s respiratory movements. The
frequency of each class occurrence can be observed in Table 1.
Table 1: Number of occurrences of the three classes with gestation age.
$\bf Fetal\ Age\ (Weeks)$ | Class 1 | Class 2 | Class 3
---|---|---|---
$\bf 27-31$ | 174 | 35 | 263
$\bf 32-35$ | 265 | 78 | 360
$\bf 36-40+$ | 583 | 163 | 954
$\bf Total$ | 1022 | 276 | 1563
The total number of occurrences of each class can be observed. Fetal movement realizations are classified as Class 1, maternal laugh realizations as Class 2, and maternal respiratory movement realizations as Class 3.
#### Clinical Survey
The end goal of this research was to design and fabricate a fetal movement
monitoring system which can be used by pregnant mothers at home. Therefore, a
survey was conducted alongside clinical tests. The results obtained from this
was utilized when designing the end system. Following observations were made
during the survey. Every mother interviewed kept track of fetal movement
during the pregnancy as advised by their obstetrician. However, this was done
by keeping track of whether a fetal movement occurred during each hour of the
day. The general opinion of the mothers was that this method was unreliable
and a nuisance. Furthermore, they were advised to observe fetal movements immediately after consuming food. On average, a mother with a healthy fetus
undergoes three ultrasound scans during the gestational period and undergoes a
cardiotocography (CTG) scan daily when an inpatient at the hospital. More than
80% of the mothers interviewed had a favourable opinion about the proposed
system of fetal movement monitoring, while approximately 15% were undecided. Fewer than 5% of mothers had an unfavourable opinion about the proposed
system. Moreover, almost every mother interviewed reacted favourably to the
idea of including a mobile application in the proposed system.
### Observations
The data obtained from the sensor were stored on the micro SD card and later transferred and analysed. Prior to conducting the analysis, several
important observations were made. The accelerometer in the sensor measures the
acceleration along the three axes: X-Axis, Y-Axis and Z-Axis. Z-Axis records
the acceleration variation normal to the mother’s abdomen while X and Y axes
record the acceleration variation along the plane of the abdomen. Raw time-
domain data obtained can be observed in Fig 2.
Fig 2: The accelerometric data along the three axes, X-Axis, Y-Axis and
Z-Axis.
From Fig 2 it can be observed that the variation along the Z-Axis is more
prominent than along the other two axes. This is further supported by Fig 3,
which shows the variation of the data points in three-dimensional space during
a fetal movement realization; again, the most prominent variation is along the
Z-Axis. Therefore, only Z-Axis data were utilized in the analysis. This also
reduces the required computational capacity, which is favourable when running
complex algorithms.
Fig 3: The variation of the data points in three-dimensional space during a
fetal movement realization.
During both phases of clinical testing, the realizations were segmented into
three classes: fetal movement, maternal laugh, and maternal respiratory
movements. Visually, the time-domain representations of the fetal movement and
the maternal laugh are similar, while a clear distinction can be made between
these two and maternal respiratory movements. This can be observed in Fig 4.
Fig 4: The accelerometric data of realizations from the three classes.
During the second phase of the clinical testing procedure, an abdominal
ultrasound was conducted to validate the results. The time-domain signal of a
singleton pregnancy observed by the sensor and the fetal movement identified
by each method can be observed in Fig 5. Furthermore, a similar diagram of the
time-domain signal of a breech pregnancy can be observed in Fig 6.
Fig 5: The time-domain signal of a singleton pregnancy observed by the sensor
and the fetal movement identified by each method. Fig 6: The time-domain
signal of a singleton breech pregnancy observed by the sensor and the fetal
movement identified by each method.
If the abdominal ultrasound is taken as the ground truth, it can be observed
that some fetal movements were neither felt by the mother nor detected by the
device. Fig 5 shows that in the normal singleton pregnancy most fetal movements
identified by the ultrasound were also observed by the mother and the device.
However, when the fetus is in the breech position, most of the fetal movements
identified by the ultrasound were not observed by the mother but were observed
by the device. Therefore, it can be inferred that the proposed system can be
utilized to identify and monitor fetal movements that cannot be felt by the
mother. A similar analysis was conducted for the entire data set obtained
during the second phase. It was observed that 65.56% of realizations were
identified by all three methods, 15.56% were identified by the ultrasound and
the device, and 18.89% were identified only by the ultrasound. A summary of
these data can be seen in Table 2. Furthermore, the mothers were not able to
identify fetal movements that were not captured by the ultrasound scan, and all
the movements felt by the mothers were also captured by the device.
Table 2: Number of realizations identified and observed by each method during
the second phase of clinical testing.
Ultrasound | Device | Mother's Response | Number of Detected Kicks | Total Percentage (%)
---|---|---|---|---
1 | 0 | 0 | 17 | 18.89
1 | 1 | 0 | 14 | 15.56
1 | 1 | 1 | 59 | 65.56
During the second phase of clinical testing, three methods were utilized to
identify the occurrence of a fetal movement: the abdominal ultrasound, the
in-house fabricated device, and the mother's response. In the table, '1'
indicates that the relevant method was able to identify the fetal movement and
'0' indicates that it was not. For instance, the first data row gives the
number of realizations captured only by the abdominal ultrasound, the second
row gives the number captured by the ultrasound and the in-house fabricated
device, and the final row gives the number captured by all three methods.
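The percentages in Table 2 follow directly from the realization counts; a quick arithmetic check (counts taken from the table) confirms them:

```python
# Counts of ultrasound-detected realizations from Table 2, second clinical phase.
counts = {"ultrasound only": 17, "ultrasound + device": 14, "all three methods": 59}
total = sum(counts.values())  # 90 ultrasound-detected realizations in total

# Percentage of the total detected by each combination of methods.
percentages = {name: round(100 * n / total, 2) for name, n in counts.items()}
print(percentages)
# {'ultrasound only': 18.89, 'ultrasound + device': 15.56, 'all three methods': 65.56}
```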
### Signal Analysis
Several combinations of signal processing algorithms were evaluated in order to
obtain an optimum algorithm. The main objectives of the algorithm were to
compute the number of fetal movement occurrences in a single session and to
minimize the probability of false positives, as these could have serious
adverse consequences. The entire algorithm was implemented in MATLAB and later
ported to Android Studio so that it could run on a smartphone.
Pre-processing: Initially, the signals obtained were inspected, and it was
noticed that maternal movements had shifted the signal baseline, as seen in Fig
7. Furthermore, maternal breathing had introduced a periodic noise component,
which can be observed in Fig 8. Therefore, the raw time-domain signal was first
passed through a high-pass filter, as in a similar previous study [2]. As can
be observed in Fig 8, when the filter is applied, both the maternal movements
and the maternal breathing noise are eliminated.
Fig 7: The raw time-domain data captured from the sensor vs. the time-domain
data after the custom high-pass filter was applied. Fig 8: The raw time-domain
data captured from the sensor vs. the time-domain data after the custom
high-pass filter was applied (zoomed in).
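The exact filter used in the cited study is not specified here, so as a minimal sketch the high-pass step can be approximated by subtracting a moving-average baseline from the signal; the 101-sample window length is an illustrative assumption, not the study's parameter:

```python
def moving_average(x, window=101):
    """Centered moving average; windows shrink near the edges of the signal."""
    half = window // 2
    out = []
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out

def high_pass(x, window=101):
    """Remove the slow baseline drift (maternal movement, breathing) from x."""
    baseline = moving_average(x, window)
    return [xi - bi for xi, bi in zip(x, baseline)]

# A constant offset (a crude stand-in for a baseline shift) is removed entirely,
# while a short spike survives the filtering.
signal = [5.0] * 300
signal[150] += 2.0
filtered = high_pass(signal)
print(max(filtered))  # the spike remains prominent after filtering
```

Away from the spike the filtered signal is exactly zero, since the moving average of a constant equals that constant.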
Realization Segmentation: The output signal from the pre-processor was then
segmented into realizations. These segments are non-overlapping and have a
width of 200 samples, a size selected by observing the average length of fetal
movements and maternal laughs. The realizations were classified into three
classes: fetal movement, maternal laugh, and maternal respiratory movements.
The class of each realization was determined from the ultrasound scan, the
pregnant mother's response, and the data recorded via the button system.
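The segmentation step reduces to slicing the filtered signal into fixed-width, non-overlapping windows; a minimal sketch follows (the 200-sample width is the value stated above; discarding a trailing partial window is an assumption):

```python
def segment(signal, width=200):
    """Split a 1-D signal into non-overlapping realizations of `width` samples.
    A trailing window shorter than `width` is discarded."""
    return [signal[i:i + width] for i in range(0, len(signal) - width + 1, width)]

realizations = segment(list(range(1050)))
print(len(realizations))     # 5 complete 200-sample realizations
print(len(realizations[0]))  # 200
```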
Short Time Fourier Transform: The raw accelerometric data are segmented into
three classes: fetal movements, maternal respiratory movements, and maternal
laugh. Each signal category contains unique features; however, a purely
time-domain analysis would inhibit the ability to extract these features in
order to classify realizations into each class. Therefore, a more descriptive
representation was required. Furthermore, the input to the subsequent algorithm
is required to be two-dimensional. To achieve this, the Short Time Fourier
Transform (STFT) was utilized to generate a magnitude spectrogram from the
time-domain accelerometric readings [21]. The magnitude spectrogram is one of
the most expressive signal representations because it displays the intensity of
the frequencies of a signal as they vary with time [22].
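A magnitude spectrogram of the kind described can be sketched in a few lines; the frame and hop sizes below are illustrative assumptions, not the study's exact values:

```python
import cmath
import math

def stft_magnitude(x, frame=32, hop=16):
    """Magnitude spectrogram: |DFT| of Hann-windowed frames advanced by `hop`.
    Frame and hop sizes here are illustrative, not the study's exact values."""
    hann = [0.5 - 0.5 * math.cos(2 * math.pi * n / (frame - 1)) for n in range(frame)]
    spectrogram = []
    for start in range(0, len(x) - frame + 1, hop):
        windowed = [x[start + n] * hann[n] for n in range(frame)]
        column = [abs(sum(windowed[n] * cmath.exp(-2j * math.pi * k * n / frame)
                          for n in range(frame)))
                  for k in range(frame // 2 + 1)]  # keep non-negative frequencies
        spectrogram.append(column)
    return spectrogram  # one magnitude column per time frame

# A pure tone concentrates its energy in a single frequency bin of every column.
tone = [math.sin(2 * math.pi * 4 * n / 32) for n in range(128)]
spec = stft_magnitude(tone)
print(spec[0].index(max(spec[0])))  # peak at frequency bin 4
```

Plotting the columns side by side, with magnitude mapped to colour, gives exactly the intensity-vs-time-and-frequency picture the text describes.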
Non-Negative Matrix Factorization: Non-Negative Matrix Factorization (NNMF) is
widely used in signal processing for dimensionality reduction [23].
Applications of NNMF range from simple text document clustering to advanced
biological data mining [24], [25]. It is an advanced signal processing
technique employed to identify and extract hidden features from raw signals.
The input to the NNMF algorithm is a non-negative matrix, and the algorithm
generates two low-rank non-negative matrices. This basic NNMF algorithm can be
stated mathematically as follows [26]:
$V\approx WH$ (1)
where $V,W,H\geqslant 0$.
In this fetal movement identification application, the magnitude spectrogram
was utilized as the input non-negative matrix $V$. The NNMF algorithm then
generates two matrices: the Basis Matrix $W$ and the Activation Coefficient
Matrix (Abundance Matrix) $H$. These two matrices are generated such that the
Basis Matrix contains the features of the given magnitude spectrogram while the
Activation Coefficient Matrix contains the proportional weights of the bases.
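Equation (1) is typically solved with the multiplicative update rules of Lee and Seung, the method cited as [26]; a minimal pure-Python sketch (matrix sizes, iteration count, and random initialization are illustrative):

```python
import random

def matmul(A, B):
    """Plain list-of-lists matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def nnmf(V, rank, iters=300, seed=0):
    """Lee-Seung multiplicative updates minimising the Frobenius error ||V - WH||."""
    rng = random.Random(seed)
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(rank)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(rank)]
    eps = 1e-9  # guards against division by zero
    for _ in range(iters):
        WH, Wt = matmul(W, H), transpose(W)
        num, den = matmul(Wt, V), matmul(Wt, WH)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)]
             for i in range(rank)]
        WH, Ht = matmul(W, H), transpose(H)
        num, den = matmul(V, Ht), matmul(WH, Ht)
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(rank)]
             for i in range(m)]
    return W, H

# A rank-1 non-negative matrix is recovered almost exactly.
V = [[4.0, 5.0], [8.0, 10.0], [12.0, 15.0]]  # outer product of (1, 2, 3) and (4, 5)
W, H = nnmf(V, rank=1)
WH = matmul(W, H)
error = sum((V[i][j] - WH[i][j]) ** 2 for i in range(3) for j in range(2))
print(error < 1e-6)  # True
```

The updates preserve non-negativity by construction, which is why both output matrices can be visualized directly as intensity images, as done for Figs 12 and 13.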
Convolutional Neural Network: The Convolutional Neural Network (CNN) is one of
the most common classification methods utilized in biomedical signal processing
[27], [28]. In order to classify the three classes, the output from the above
algorithms was fed into a CNN. 80% of the data set, selected at random, was
utilized for training. Each realization fed into the CNN was a 64x26-pixel RGB
image. The following parameters of the CNN were varied and optimum values for
each layer were obtained; the values for the individual layers can be observed
in Table 3. The ReLU activation function was utilized when training the
network, three fully connected layers were implemented for the classification,
and stochastic gradient descent was used to train the network.
Table 3: Parameters of the three fully connected layers implemented in the
Convolutional Neural Network.
Parameter | Layer 1 | Layer 2 | Layer 3
---|---|---|---
Neuron Size | $5\times 3$ | $5\times 2$ | $5\times 2$
Number of Neurons | 60 | 50 | 40
Learning Rate | 0.0001 | 0.0001 | 0.0001
Epochs | 80 | 150 | 300
This table lists the parameters varied while implementing the convolutional
neural network; the optimum parameters of each layer are given.
### Mobile Application
The final aim of this research study is to provide pregnant mothers with a
wearable, non-invasive device that can count and monitor fetal movement
reliably. Therefore, a wearable device was first designed and fabricated.
Further studies were then carried out to identify the best digital
implementation method, for which the digital literacy of the country was
analysed using data released by the Department of Census and Statistics, Sri
Lanka [29]. The following observations were made. The overall computer literacy
of the female population during 2019 was 28.3%; however, in the 20 to 35 age
group, into which most pregnant mothers fall, computer literacy was around 50%.
Digital literacy among females is higher, at around 40% overall and around 75%
in the 20 to 35 age group. Therefore, it can be concluded that pregnant mothers
in Sri Lanka have better digital than computer literacy, and the smartphone was
chosen as the digital implementation method. Moreover, the survey indicated
that most mothers would prefer to use a mobile application along with the
device.
In the final system, therefore, the data capturing is done via the device in
Fig 1 and the data are stored on a micro SD card. After a session, the mother
can transfer the data on the SD card to her smartphone. Although an attempt was
initially made to conduct the analysis within the smartphone, it was not
successful, as the common smartphones used by pregnant mothers lacked the
computational capacity to run the algorithm. Since the data file of a single
session is compact, the data are instead uploaded to an online cloud and the
analysis is conducted remotely. At the end of the analysis, the number of kicks
recorded within the session is sent back to the mother as well as to her
obstetrician. The mobile application was designed with Android Studio, and the
remote analysis uses the WampServer software. The application was designed to
be attractive and user friendly, and it also stores and records the fetal
movement patterns. The user interface of the mobile application can be seen in
Fig 9.
Fig 9: Designed mobile application interface
## Results
During the clinical testing, readings from more than 120 mothers were taken.
The distribution of gestational age and fetal gender in the test group can be
observed in Fig 10. The gestational age of the test group varied from 27 weeks
to 40+ weeks, where 40+ implies the fetus is beyond 40 weeks old. The test
group mostly contained fetuses whose gestation periods were beyond 37 weeks,
and the gender distribution was approximately uniform; however, the group
contained a higher number of younger male fetuses than female fetuses and a
higher number of older female fetuses than male fetuses. For a few fetuses the
gender was not stated.
Fig 10: Distribution of gestational age and the fetal gender of the test group
The algorithms stated above were implemented on this data set and the results
were observed. With the aim of obtaining an optimum algorithm, several
combinations of algorithms were implemented. They are:
1. Algorithm 1 : Segmentation – STFT – CNN
2. Algorithm 2 : High pass filter – segmentation – STFT – CNN
3. Algorithm 3 : High pass filter – segmentation – STFT – NNMF(W) – CNN
4. Algorithm 4 : High pass filter – segmentation – STFT – NNMF(H) – CNN
In each algorithm, a short time Fourier transform was implemented immediately
after the segmentation. The resulting spectrograms can be seen in Fig 11. It
can be observed that the spectrogram of the maternal respiratory movements is
concentrated in the lower frequency bands, and this is constant throughout the
samples, whereas in the spectrogram of fetal movements this concentration
occurs only in a smaller range of samples. The spectrogram of the maternal
laugh signal differs from the other two as well. Therefore, it can be stated
that implementing a standard short time Fourier transform can aid the
discrimination process.
Fig 11: The resulting spectrograms of the three classes
A standard Non-Negative Matrix Factorization algorithm was then applied to the
spectrogram images. This factorized each spectrogram image into two images: the
abundance matrix and the basis matrix. As stated in the previous section, the
abundance matrix is denoted H and the basis matrix is denoted W for easy
reference. The matrices were then visualized as figures in a grid format: as
with the figures generated by the STFT algorithm, the amplitudes of the
elements of the output matrices were converted into colours. The generated
figures for the W matrix of three different realizations can be observed in Fig
12, and the generated figures for the H matrix of the same three realizations
can be observed in Fig 13.
Fig 12: The resulting basis matrices (W matrices) for the three classes. Fig
13: The resulting abundance matrices (H matrices) for the three classes.
The spectrograms, basis matrices, and abundance matrices were then fed into the
standard convolutional neural network described above, and a confusion matrix
was obtained for each. In these matrices, Class 1 represents the fetal movement
realizations, Class 2 the maternal laugh realizations, and Class 3 the maternal
respiratory movement realizations. The confusion matrices obtained by
implementing the four algorithms are shown in Fig 14.
Fig 14: The confusion matrices generated for each algorithm
## Discussion
From the confusion matrices in Fig 14, the following observations can be made.
The first confusion matrix is obtained by implementing Algorithm 1, where the
raw time-domain signal is fed into the STFT algorithm. It can be observed that,
while the accuracy of the algorithm is not the best, reasonable discrimination
among the classes is obtained. The second confusion matrix is obtained by
implementing Algorithm 2, in which a high-pass filter is first applied to
remove large maternal movements as well as the maternal breathing pattern from
the signal; the filtered signal is then fed into the STFT algorithm, and the
remaining process is identical to Algorithm 1. Comparing the results of these
two algorithms clearly shows the effect of the high-pass filter: the true
positive accuracy of Algorithm 2 is higher than that of Algorithm 1. The most
important observation, however, is the reduction of the false positive rate
with the introduction of the high-pass filter. In this application of fetal
movement monitoring, particular attention is paid to the false positive rate.
When a false positive occurs, the observer will identify a fetal movement when
one has not actually occurred. Therefore, if the false positive rate is high,
the application will indicate a higher fetal movement rate than the actual one,
and mothers and health care providers will not be able to identify a
significant reduction in fetal movement rates early, which may lead to dire
consequences. The false positive rate of Algorithm 2 is comparatively lower
than that of Algorithm 1. From these observations, it can be concluded that
introducing the high-pass filter as a pre-processing step improved the results
of the algorithm.
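The two rates discussed here are the standard binary-classification quantities; a small sketch with illustrative per-100-realization counts (the raw counts are assumed, chosen to match the 86% true-positive and 7% false-positive rates the text quotes for Algorithm 2):

```python
def rates(tp, fn, fp, tn):
    """True-positive rate (sensitivity) and false-positive rate from counts."""
    tpr = tp / (tp + fn)  # detected fetal movements / actual fetal movements
    fpr = fp / (fp + tn)  # false alarms / realizations with no fetal movement
    return tpr, fpr

# Illustrative counts per 100 realizations of each kind (assumed, not measured).
tpr, fpr = rates(tp=86, fn=14, fp=7, tn=93)
print(tpr, fpr)  # 0.86 0.07
```

A high false-positive rate inflates the reported kick count, which is exactly why this application weights `fpr` more heavily than `tpr`.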
The confusion matrices obtained by implementing Algorithm 3 and Algorithm 4 can
likewise be observed in Fig 14. In Algorithm 2, the entire spectrogram was fed
into the CNN, whereas in Algorithms 3 and 4 the spectrogram was factorized
using Non-Negative Matrix Factorization and the resulting basis matrices and
abundance matrices, respectively, were fed into the CNN. The main reason to
factorize the spectrogram is to reduce the computational capacity required to
run the CNN algorithm. The following observations were made from the confusion
matrices. The performance of Algorithm 3 is weak: its true positive rate is the
lowest, at 65%, and its false positive rate the highest, at 9%. The best
results were obtained when the spectrograms were fed into the CNN rather than
any factorized matrix: the true positive rate of Algorithm 2 is 86% and its
false positive rate is around 7%. The next best result is observed when the
Abundance Matrix H is fed into the CNN (Algorithm 4), with a true positive rate
of approximately 85% and a false positive rate of around 7.2%. Comparing these
two algorithms, the decrease in the true positive rate is negligible, but the
false positive rate increases due to the factorization. For the reason
explained above, in this application more attention needs to be paid to the
false positive rate.
The better performance of Algorithm 2 is likely due to the properties of the
factorization. When the spectrogram matrix is factorized into the abundance
matrix and the basis matrix, the abundance matrix mainly contains data
pertaining to individual mothers. Therefore, when abundance matrices are fed
into a CNN trained on data obtained from several mothers, the performance is
low; if the network were trained on the abundance matrices of an individual
mother, the performance might be better. This can be observed in Table 4, where
Algorithms 2 to 4 were applied to six individual mothers and the true positive
rates compared: for some mothers, the true positive rate obtained with
Algorithm 4 is higher, which supports the argument above. However, training a
network for each individual mother is not feasible for the intended
application, since a large number of samples would need to be collected from
each mother, which conflicts with the goal of devising a universal system to
monitor fetal movements. Furthermore, the performance of Algorithm 3 remains
weak even when applied to individual mothers.
Table 4: True positive rate of Algorithms 2 to 4 on individual mothers.
Mother Index | A2 (%) | A4 (%) | A3 (%)
---|---|---|---
1 | 93.55 | 83.87 | 41.94
2 | 88.89 | 88.89 | 63.49
3 | 78.57 | 71.43 | 64.24
4 | 71.43 | 71.43 | 42.86
5 | 71.43 | 71.43 | 57.14
6 | 68.75 | 75.00 | 43.75
This table shows the true positive rates observed when the three algorithms
were applied to six individual mothers; 'A2' refers to Algorithm 2, 'A4' to
Algorithm 4, and 'A3' to Algorithm 3. The results of Algorithm 2 and Algorithm
4 are juxtaposed for easier comparison, and the results of Algorithm 3 are also
included.
Considering all the observations above, it can be concluded that the best
algorithm is Algorithm 2, in which a high-pass filter is first applied, a
spectrogram is then computed, and the spectrogram is fed into a convolutional
neural network. Furthermore, all computations are carried out in an online
cloud, which provides the higher computational capacity needed when the
spectrogram is fed directly into the CNN.
## Conclusion
A proper system to monitor fetal movements in a non-clinical setup is of
paramount importance for maintaining fetal well-being. Although some previous
research studies addressed this problem, several crucial and novel concepts
were introduced in this study. A complete system was introduced for pregnant
mothers to monitor the fetal movement count in a non-clinical setting. While a
significant amount of effort was spent on developing the algorithm and the
sensing system, an equal amount of effort was invested in designing proper
ergonomics and a user-friendly interface. The feasibility of the proposed
system was ensured by studying the preferences and habits of pregnant mothers,
and its user-friendliness was aided by the extensive surveys conducted during
the clinical testing procedures.
In the initial phases of the research, a proper non-invasive device was
designed and fabricated, and it was further modified based on the feedback
received from pregnant mothers. Subsequently, a mobile application was
developed for the mothers. Finally, four different algorithms were applied to
the data set to identify the most acceptable one. Algorithm 2 gave the most
promising results, with the performance of Algorithm 4 close behind;
considering also factors such as data file size and the limited computational
capacity of smartphones, Algorithm 2 was selected as the algorithm best fitting
the system.
In conclusion, in this research a low-cost, non-transmitting wearable system
was designed to monitor fetal movement patterns. The system consists of a
non-invasive sensing unit, a mobile application for pregnant mothers, and an
algorithm that extracts the required information from the captured data. This
system would be of immense use to pregnant mothers as well as to researchers
who need to collect data to analyse fetal movement patterns.
## Acknowledgments
We would like to express our gratitude to all the participants of the clinical
testings for volunteering as well as for providing feedback on the proposed
system. We also would like to thank the healthcare professional at the
Peradeniya Teaching Hospital for all the help and guidance provided. We also
express our gratitude to our reviewers for their constructive feedback.
## References
* 1. Kamalifard M, Abbasalizadeh S, Ghojazadeh M, Samani F, Rabiei L. Diagnostic Value of Fetal Movement Counting by Mother and the Optimal Recording Duration. Journal of caring sciences. 2013;2:89–95. doi:10.5681/jcs.2013.011.
* 2. Saastad E, Winje B, Stray-Pedersen B, Frøen JF. Fetal Movement Counting Improved Identification of Fetal Growth Restriction and Perinatal Outcomes – a Multi-Centre, Randomized, Controlled Trial. PloS one. 2011;6:e28482. doi:10.1371/journal.pone.0028482.
* 3. Christensen F, Rayburn W. Obstetrics and gynecology clinics of North America;. p. 607–621.
* 4. Saastad E, Ahlborg T, Frøen JF. Low Maternal Awareness of Fetal Movement is Associated With Small For Gestational Age Infants. Journal of midwifery and women’s health. 2008;53:345–52. doi:10.1016/j.jmwh.2008.03.001.
* 5. Stacey T, Thompson J, Mitchell E, Ekeroma A, Zuccollo J, Mccowan L. Maternal Perception of Fetal Activity and Late Stillbirth Risk: Findings from the Auckland Stillbirth Study. Birth (Berkeley, Calif). 2011;38:311–6. doi:10.1111/j.1523-536X.2011.00490.x.
* 6. Dutton P, Warrander L, Roberts S, Bernatavicius G, Byrd L, Gaze D, et al. Predictors of Poor Perinatal Outcome following Maternal Perception of Reduced Fetal Movements – A Prospective Cohort Study. PloS one. 2012;7:e39784. doi:10.1371/journal.pone.0039784.
* 7. Efkarpidis S, Alexopoulos E, Kean L, Liu D, Fay T. Case–control study of factors associated with intrauterine deaths. MedGenMed : Medscape general medicine. 2004;6:53.
* 8. Saastad E, Winje B, Stray-Pedersen B, Frøen JF. Fetal Movement Counting Improved Identification of Fetal Growth Restriction and Perinatal Outcomes – a Multi-Centre, Randomized, Controlled Trial. PloS one. 2011;6:e28482. doi:10.1371/journal.pone.0028482.
* 9. Reduced Fetal Movements. Royal College of Obstetricians and Gynaecologists; 2011. Available from: https://www.rcog.org.uk/globalassets/documents/guidelines/gtg_57.pdf.
* 10. Andonotopo W, Kurjak A. The assessment of fetal behavior of growth restricted fetuses by 4D sonography. Journal of perinatal medicine. 2006;34:471–8. doi:10.1515/JPM.2006.092.
* 11. Heuvel T, de bruijn D, Korte C, Ginneken B. Automated measurement of fetal head circumference using 2D ultrasound images. PLOS ONE. 2018;13:e0200412. doi:10.1371/journal.pone.0200412.
* 12. Girier T, O’ Toole J, Mesbah M, Boashash B, Clough I, Wilson S, et al. Detecting fetal movements using non-invasive accelerometers: A preliminary analysis; 2010. p. 508–511.
* 13. Ryo E, Nishihara K, Matsumoto S, Kamata H. A new method for long-term home monitoring of fetal movement by pregnant women themselves. Medical engineering and physics. 2011;34:566–72. doi:10.1016/j.medengphy.2011.09.001.
* 14. Lai J, Woodward R, Alexandrov Y, Munnee QA, Lees C, Vaidyanathan R, et al. Performance of a wearable acoustic system for fetal movement discrimination. PLOS ONE. 2018;13:e0195728. doi:10.1371/journal.pone.0195728.
* 15. Avci R, Wilson JD, Escalona-Vargas D, Eswaran H. Tracking Fetal Movement Through Source Localization From Multisensor Magnetocardiographic Recordings. IEEE Journal of Biomedical and Health Informatics. 2018;22(3):758–765. doi:10.1109/JBHI.2017.2690879.
* 16. Abeywardhana SAY, Subhashini HAA, Wasalaarachchi WAWS, Wimalarathna GHI, Ekanayake MPB, Godaliyadda GMRI, et al. Time Domain Analysis for Fetal Movement Detection Using Accelerometer Data. In: 2018 IEEE Region 10 Humanitarian Technology Conference (R10-HTC); 2018. p. 1–5.
* 17. Wasalaarachchi WAWS, Subhashini HAA, Abeywardhana SAY, Gunarathne MSL, Ruwanga WT, Godaliyadda GMRI, et al. Fetal Movements Identification Based on Non-negative Matrix Factorization and Spectral Clustering. In: 2019 14th Conference on Industrial and Information Systems (ICIIS); 2019. p. 266–271.
* 18. Bektas H, Dasdag S. Effect of radiofrequencies emitted from mobile phones and Wi-FI on pregnancy. Journal of International Dental and Medical Research. 2017;10:1084–1095.
* 19. Hu Y, Kim E, Cao G, Liu S, Xu Y. Physiological Acoustic Sensing Based on Accelerometers: A Survey for Mobile Healthcare. Annals of biomedical engineering. 2014;42. doi:10.1007/s10439-014-1111-8.
* 20. Altini M, Mullan P, Rooijakkers M, Gradl S, Penders J, Geusens N, et al. Detection of Fetal Kicks Using Body-Worn Accelerometers During Pregnancy: Trade-offs Between Sensors Number and Positioning. EMBC. 2016;.
* 21. Maputle S, Mothiba T. Mothers’ knowledge of foetal movements monitoring during pregnancy in relation to perinatal outcome. Health SA Gesondheid : Journal of Interdisciplinary Health Sciences. 2006;11. doi:10.4102/hsag.v11i2.219.
* 22. Baumgartner C, Blinowska K, Cichocki A, Dickhaus H, Durka P, McClintock P, et al. Discussion of ”Time-frequency Techniques in Biomedical Signal Analysis: A Tutorial Review of Similarities and Differences”. Methods of information in medicine. 2013;52:297–307. doi:10.3414/ME12-01-0083.
* 23. Mingyu L, Hongbing J, Chunhong Z. Non negative Matrix Factorization and Its Application in Medical Signal and Image Processing. 2nd International Conference on Bioinformatics and Biomedical Engineering, iCBBE 2008. 2008;doi:10.1109/ICBBE.2008.866.
* 24. Lee D, Seung H. Learning the Parts of Objects by Non-Negative Matrix Factorization. Nature. 1999;401:788–91. doi:10.1038/44565.
* 25. Rathnayake B, Weerakoon K, Godaliyadda GMR, Ekanayake MP. Toward Finding Optimal Source Dictionaries for Single Channel Music Source Separation Using Nonnegative Matrix Factorization; 2018.
* 26. Lee D, Seung H. Algorithms for Non-negative Matrix Factorization. Adv Neural Inform Process Syst. 2001;13.
* 27. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation. vol. 9351; 2015. p. 234–241.
* 28. Pang S, Du A, Orgun M, Yu Z. A novel fused convolutional neural network for biomedical image classification. Medical and Biological Engineering and Computing. 2018;57. doi:10.1007/s11517-018-1819-y.
* 29. Computer Literacy Statistics-2019, Department of Census and Statistics, Sri Lanka; 2019. Available from: http://www.statistics.gov.lk/PressReleases/ComputerLiteracystatistics-2019-Firstsixmonths.
# Nearly associative and nearly Hom-associative algebras and bialgebras
Mafoya Landry Dassoundo1 Sergei Silvestrov2
1Chern Institute of Mathematics and LPMC, Nankai University, Tianjin 300071,
China
e-mail<EMAIL_ADDRESS>
2 Division of Mathematics and Physics, School of Education, Culture and
Communication,
Mälardalen University, Box 883, 72123 Västeras, Sweden.
e-mail<EMAIL_ADDRESS>
###### Abstract
Basic definitions and properties of nearly associative algebras are described,
and nearly associative algebras are proved to be Lie-admissible. Two-
dimensional nearly associative algebras are classified, and their main classes
are derived. The bimodules, matched pairs, and Manin triples of nearly
associative algebras are derived, and their equivalence with nearly associative
bialgebras is proved. Basic definitions and properties of nearly Hom-
associative algebras are also described; the related bimodules and matched
pairs are given, and the associated identities are established.
000Corresponding author: Sergei Silvestrov<EMAIL_ADDRESS>
## 1 Introduction
An algebra $A$ with a bilinear product $\cdot:A\times A\rightarrow A$ is called
possibly non-associative if there may exist $x,y,z\in A$ such that $(x\cdot
y)\cdot z-x\cdot(y\cdot z)\neq 0;$ if such $x,y,z\in A$ exist, the algebra is
not associative. The term non-associative algebras is thus often used to mean
all possibly non-associative algebras, including the associative algebras.
Associative algebras, Lie algebras, and Jordan algebras are well-known
subclasses of non-associative algebras in the sense of possibly not associative
algebras [60].
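The failure of associativity referred to above is conveniently measured by the associator, a standard trilinear map; an algebra is associative exactly when it vanishes identically:

```latex
% Associator of x, y, z in an algebra (A, \cdot):
(x, y, z) \;:=\; (x \cdot y) \cdot z \;-\; x \cdot (y \cdot z),
\qquad x, y, z \in A .
% A is associative \iff (x, y, z) = 0 for all x, y, z \in A.
```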
Hom-algebraic structures originated from quasi-deformations of Lie algebras of
vector fields which gave rise to quasi-Lie algebras, defined as generalized
Lie structures in which the skew-symmetry and Jacobi conditions are twisted.
Hom-Lie algebras and more general quasi-Hom-Lie algebras were introduced
first by Silvestrov and his students Hartwig and Larsson in [27], where the
general quasi-deformations and discretizations of Lie algebras of vector
fields using general twisted derivations, $\sigma$-derivations, and a general
method for construction of deformations of Witt and Virasoro type algebras
based on twisted derivations have been developed. The initial motivation came
from examples of $q$-deformed Jacobi identities discovered in $q$-deformed
versions and other discrete modifications of differential calculi and
homological algebra, $q$-deformed Lie algebras and other algebras important in
string theory, vertex models in conformal field theory, quantum mechanics and
quantum field theory, such as the $q$-deformed Heisenberg algebras,
$q$-deformed oscillator algebras, $q$-deformed Witt, $q$-deformed Virasoro
algebras and related $q$-deformations of infinite-dimensional algebras [1, 16,
17, 18, 19, 20, 21, 22, 33, 34, 41, 42, 43].
Possibility of studying, within the same framework, $q$-deformations of Lie
algebras and such well-known generalizations of Lie algebras as the color and
super Lie algebras provided further general motivation for development of
quasi-Lie algebras and subclasses of quasi-Hom-Lie algebras and Hom-Lie
algebras. The general abstract quasi-Lie algebras and the subclasses of quasi-
Hom-Lie algebras and Hom-Lie algebras, as well as their color (graded)
counterparts, color (graded) quasi-Lie algebras, color (graded) quasi-Hom-Lie
algebras and color (graded) Hom-Lie algebras, including in particular the
super quasi-Lie algebras, super quasi-Hom-Lie algebras, and super Hom-Lie
algebras, have been introduced in [27, 37, 38, 39, 63, 64]. In [48], Hom-
associative algebras were introduced, generalizing associative algebras by
twisting the associativity law by a linear map. Hom-associative algebra is a
triple $(A,\cdot,\alpha)$ consisting of a linear space $A$, a bilinear product
$\cdot:A\times A\rightarrow A$ and a linear map $\alpha:A\rightarrow A$,
satisfying $a_{\alpha,\cdot}(x,y,z)=(x\cdot
y)\cdot\alpha(z)-\alpha(x)\cdot(y\cdot z)=0,$ for any $x,y,z\in A$. In [48],
alongside Hom-associative algebras, the Hom-Lie admissible algebras
generalizing Lie-admissible algebras, were introduced as Hom-algebras such
that the commutator product, defined using the multiplication in a Hom-
algebra, yields a Hom-Lie algebra, and also Hom-associative algebras were
shown to be Hom-Lie admissible. Moreover, in [48], more general $G$-Hom-
associative algebras including Hom-associative algebras, Hom-Vinberg algebras
(Hom-left symmetric algebras), Hom-pre-Lie algebras (Hom-right symmetric
algebras), and some other Hom-algebra structures, generalizing $G$-associative
algebras, Vinberg and pre-Lie algebras respectively, have been introduced and
shown to be Hom-Lie admissible, meaning that for these classes of Hom-
algebras, the operation of taking commutator leads to Hom-Lie algebras as
well. Also, flexible Hom-algebras have been introduced, connections to Hom-
algebra generalizations of derivations and of adjoint maps have been noticed,
and some low-dimensional Hom-Lie algebras have been described. The enveloping
algebras of Hom-Lie algebras were considered in [67] using combinatorial
objects of weighted binary trees. In [29], for Hom-associative algebras and
Hom-Lie algebras, the envelopment problem, operads, and the Diamond Lemma and
Hilbert series for the Hom-associative operad and free algebra have been
studied. Strong Hom-associativity yielding a confluent rewrite system and a
basis for the free strongly hom-associative algebra has been considered in
[28]. An explicit constructive way, based on free Hom-associative algebras
with involutive twisting, was developed in [25] to obtain the universal
enveloping algebras and Poincaré-Birkhoff-Witt type theorem for Hom-Lie
algebras with involutive twisting map. Free Hom-associative color algebra on a
Hom-module and enveloping algebra of color Hom-Lie algebras with involutive
twisting and also with more general conditions on the powers of twisting map
was constructed, and Poincaré-Birkhoff-Witt type theorem was obtained in [4,
5]. It is worth noticing here that, in the subclass of Hom-Lie algebras, the
skew-symmetry is untwisted, whereas the Jacobi identity is twisted by a single
linear map and contains three terms as in Lie algebras, reducing to ordinary
Lie algebras when the twisting linear map is the identity map.
Hom-algebra structures include their classical counterparts and open new broad
possibilities for deformations, extensions to Hom-algebra structures of
representations, homology, cohomology and formal deformations, Hom-modules and
hom-bimodules, Hom-Lie admissible Hom-coalgebras, Hom-coalgebras, Hom-
bialgebras, Hom-Hopf algebras, $L$-modules, $L$-comodules and Hom-Lie quasi-
bialgebras, $n$-ary generalizations of biHom-Lie algebras and biHom-
associative algebras and generalized derivations, Rota-Baxter operators, Hom-
dendriform color algebras, Rota-Baxter bisystems and covariant bialgebras,
Rota-Baxter cosystems, coquasitriangular mixed bialgebras, coassociative Yang-
Baxter pairs, coassociative Yang-Baxter equation and generalizations of Rota-
Baxter systems and algebras, curved $\mathcal{O}$-operator systems and their
connections with tridendriform systems and pre-Lie algebras, BiHom-algebras,
BiHom-Frobenius algebras and double constructions, infinitesimal biHom-
bialgebras and Hom-dendriform $D$-bialgebras; Hom-algebras have also been considered
from a category theory point of view [3, 8, 9, 10, 12, 11, 13, 14, 15, 24, 26,
30, 31, 35, 36, 37, 40, 44, 45, 46, 49, 50, 51, 52, 56, 57, 62, 61, 65, 66,
67, 68, 69, 70].
This paper is organized as follows. In Section 2, basic definitions and
fundamental identities and some elementary examples of nearly associative
algebras are given. In Section 3, we derive the classification of the two-
dimensional nearly associative algebras and main classes are provided. In
Section 4, bimodules, duals bimodules and matched pair of nearly associative
algebras are established and related identities are derived and proved. In
Section 5, Manin triple of nearly associative algebras is given and its
equivalence to the nearly associative bialgebras is derived. In Section 6,
Hom-Lie-admissible, $G$-Hom-associative, flexible Hom-algebras, the result on
Lie-admissibility of $G$-Hom-admissible algebras and subclasses of $G$-Hom-
admissible algebras are reviewed. In Section 7, main definitions and
fundamental identities of Hom-nearly associative algebras are given.
Furthermore, the bimodules, and matched pair of the Hom-nearly associative
algebras are derived and related properties are obtained.
## 2 Nearly associative algebras: basic definitions and properties
Throughout this paper, for simplicity of exposition, all linear spaces are
assumed to be over a field $\mathbb{K}$ of characteristic $0$, even though
many results hold for other fields as well, unchanged or with minor
modifications. An algebra is a couple $(A,\mu)$ consisting of a linear space
$A$ and a bilinear product $\mu:A\times A\rightarrow A$.
###### Definition 2.1.
An algebra $(A,\cdot)$ is called nearly associative if, for all $x,y,z\in A$,
$\displaystyle x\cdot(y\cdot z)=(z\cdot x)\cdot y.$ (1)
###### Example 2.2.
Consider a two-dimensional linear space $A$ with basis $\\{e_{1},e_{2}\\}$.
* •
$(A,\cdot)$ is a nearly associative algebra, where $e_{1}\cdot
e_{1}=e_{1}+e_{2}$ and $e_{i}\cdot e_{j}=0$ for all $(i,j)\neq(1,1)$ with
$i,j\in\\{1,2\\}$.
* •
The bilinear product defined on $A$ by $e_{1}\cdot e_{1}=e_{2}$, $e_{1}\cdot
e_{2}=e_{1}=e_{2}\cdot e_{1}$ and $e_{2}\cdot e_{2}=e_{2}$ makes
$(A,\cdot)$ a nearly associative algebra.
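By bilinearity, identity (1) holds on all of $A$ as soon as it holds on basis triples, so examples like these can be checked mechanically from their structure constants. The following Python sketch (our own illustration; the array encoding of structure constants is an assumption, not notation from the text) verifies both algebras of Example 2.2:

```python
import itertools

def mul(c, x, y):
    # multiply coordinate vectors x, y; c[i][j] is the coordinate
    # vector of e_{i+1} * e_{j+1} in the basis {e_1, e_2}
    n = len(c)
    out = [0.0] * n
    for i in range(n):
        for j in range(n):
            for k in range(n):
                out[k] += x[i] * y[j] * c[i][j][k]
    return out

def is_nearly_associative(c):
    # by bilinearity, identity (1) holds iff it holds on basis triples
    n = len(c)
    basis = [[float(i == k) for k in range(n)] for i in range(n)]
    return all(
        mul(c, x, mul(c, y, z)) == mul(c, mul(c, z, x), y)
        for x, y, z in itertools.product(basis, repeat=3))

# first algebra of Example 2.2: e1*e1 = e1 + e2, all other products zero
c1 = [[[1.0, 1.0], [0.0, 0.0]],
      [[0.0, 0.0], [0.0, 0.0]]]
# second algebra: e1*e1 = e2, e1*e2 = e2*e1 = e1, e2*e2 = e2
c2 = [[[0.0, 1.0], [1.0, 0.0]],
      [[1.0, 0.0], [0.0, 1.0]]]

print(is_nearly_associative(c1), is_nearly_associative(c2))
```

The same routine rejects products that violate (1), so it can be used to screen candidate multiplication tables.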
###### Example 2.3.
Consider a three-dimensional linear space $A$ with basis
$\\{e_{1},e_{2},e_{3}\\}$.
* •
The linear space $A$ equipped with the bilinear product defined by
$e_{1}\cdot e_{1}=e_{2}+e_{3}$, $e_{2}\cdot e_{2}=e_{1}+e_{2}-e_{3}$,
$e_{3}\cdot e_{3}=-e_{1}+e_{2}$ and for all $i\neq j,e_{i}\cdot e_{j}=0$,
where $i,j\in\\{1,2,3\\}$, is a nearly associative algebra.
* •
The linear space $A$ equipped with the bilinear product defined by
$e_{1}\cdot e_{1}=e_{2}-e_{3}$, $e_{2}\cdot e_{2}=e_{2}+e_{3}$, $e_{3}\cdot
e_{3}=e_{1}-e_{2}+e_{3}$ and for all $i\neq j,e_{i}\cdot e_{j}=0$, where
$i,j\in\\{1,2,3\\}$, is a nearly associative algebra.
* •
The linear space $A$ equipped with the bilinear product defined by
$e_{2}\cdot e_{2}=e_{1}+e_{3}$, $e_{1}\cdot e_{1}=e_{1}+e_{2}+e_{3}$,
$e_{3}\cdot e_{3}=e_{1}+e_{2}$ and for all $i\neq j,e_{i}\cdot e_{j}=0$, where
$i,j\in\\{1,2,3\\}$, is a nearly associative algebra.
###### Definition 2.4 ([2, 23, 53, 54, 55, 58, 59]).
An algebra $(A,\cdot)$ is called Lie admissible if $(A,[.,.])$ is a Lie
algebra, where $[x,y]=x\cdot y-y\cdot x$ for all $x,y\in A$.
For a Lie admissible algebra $(A,\cdot)$, the Lie algebra
$\mathcal{G}(A)=(A,[.,.])$ is called an underlying Lie algebra of $(A,\cdot)$.
It is known that associative algebras, left-symmetric algebras and anti-
flexible algebras (center-symmetric algebras) are Lie-admissible [6, 7, 30].
###### Proposition 2.5.
Any nearly associative algebra is Lie-admissible.
###### Proof.
For $[.,.]:(v,w)\mapsto v\cdot w-w\cdot v$ and $x,y,z$ in a nearly associative
algebra $(A,\cdot)$,
$\displaystyle[x,[y,z]]+[y,[z,x]]+[z,[x,y]]$ $\displaystyle=[x,y\cdot z-z\cdot
y]+[y,z\cdot x-x\cdot z]+[z,x\cdot y-y\cdot x]$ $\displaystyle=x\cdot(y\cdot
z)-x\cdot(z\cdot y)-(y\cdot z)\cdot x+(z\cdot y)\cdot x$
$\displaystyle\quad+y\cdot(z\cdot x)-y\cdot(x\cdot z)-(z\cdot x)\cdot
y+(x\cdot z)\cdot y$ $\displaystyle\quad+z\cdot(x\cdot y)-z\cdot(y\cdot
x)-(x\cdot y)\cdot z+(y\cdot x)\cdot z$ $\displaystyle=\\{x\cdot(y\cdot
z)-(z\cdot x)\cdot y\\}+\\{(y\cdot x)\cdot z-x\cdot(z\cdot y)\\}$
$\displaystyle\quad+\\{y\cdot(z\cdot x)-(x\cdot y)\cdot z\\}+\\{z\cdot(x\cdot
y)-(y\cdot z)\cdot x\\}$ $\displaystyle\quad+\\{(z\cdot y)\cdot
x-y\cdot(x\cdot z)\\}+\\{(x\cdot z)\cdot y-z\cdot(y\cdot x)\\}=0.$
Therefore, $(A,[.,.])$ is a Lie algebra. ∎
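The proposition can also be illustrated on a noncommutative example, where the bracket is nonzero. The following Python sketch uses a hypothetical three-dimensional algebra of our own (not taken from the text), with $e_{1}\cdot e_{2}=e_{3}$ and all other basis products zero; it satisfies identity (1), is noncommutative, and its commutator is the Heisenberg Lie bracket:

```python
import itertools

# A noncommutative nearly associative algebra, our own illustration
# (not from the text): dim 3, with e1*e2 = e3 and all other basis
# products equal to zero.
C = {(0, 1): (0.0, 0.0, 1.0)}

def mul(x, y):
    out = [0.0, 0.0, 0.0]
    for (i, j), v in C.items():
        for k in range(3):
            out[k] += x[i] * y[j] * v[k]
    return out

def bracket(x, y):
    return [s - t for s, t in zip(mul(x, y), mul(y, x))]

e = [[float(i == k) for k in range(3)] for i in range(3)]

# identity (1) on all basis triples: x*(y*z) = (z*x)*y
for x, y, z in itertools.product(e, repeat=3):
    assert mul(x, mul(y, z)) == mul(mul(z, x), y)

# Jacobi identity for the commutator (Proposition 2.5); here the
# bracket is the Heisenberg Lie bracket [e1, e2] = e3
for x, y, z in itertools.product(e, repeat=3):
    jac = [sum(t) for t in zip(bracket(x, bracket(y, z)),
                               bracket(y, bracket(z, x)),
                               bracket(z, bracket(x, y)))]
    assert all(abs(t) < 1e-12 for t in jac)
print("nearly associative, noncommutative, Lie-admissible")
```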
###### Remark 2.6.
In a nearly associative algebra $(A,\cdot)$, for $x,y\in A$,
$\displaystyle L(x)L(y)$ $\displaystyle=$ $\displaystyle R(y)R(x),$ (2a)
$\displaystyle L(x)R(y)$ $\displaystyle=$ $\displaystyle L({y\cdot x}),$ (2b)
$\displaystyle R(x)L(y)$ $\displaystyle=$ $\displaystyle R(x\cdot y),$ (2c)
where $L,R:A\rightarrow{\rm End}(A)$, $L(x)y=x\cdot y$ and $R(x)y=y\cdot x$,
are the left and right multiplication operators.
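In finite dimensions the operators $L(x)$ and $R(x)$ are matrices, so the identities (2a) - (2c) can be confirmed by matrix multiplication. A NumPy sketch (our own check, run here on the second algebra of Example 2.2):

```python
import numpy as np

# second algebra of Example 2.2: e1*e1 = e2, e1*e2 = e2*e1 = e1, e2*e2 = e2
# C[i, j, k] is the coefficient of e_{k+1} in e_{i+1} * e_{j+1}
C = np.zeros((2, 2, 2))
C[0, 0, 1] = C[0, 1, 0] = C[1, 0, 0] = C[1, 1, 1] = 1.0

def mul(x, y):
    return np.einsum("i,j,ijk->k", x, y, C)

def L(x):                        # left multiplication as a matrix
    return np.einsum("i,ijk->kj", x, C)

def R(y):                        # right multiplication as a matrix
    return np.einsum("j,ijk->ki", y, C)

e = np.eye(2)
for x in e:
    for y in e:
        assert np.allclose(L(x) @ L(y), R(y) @ R(x))   # (2a)
        assert np.allclose(L(x) @ R(y), L(mul(y, x)))  # (2b)
        assert np.allclose(R(x) @ L(y), R(mul(x, y)))  # (2c)
print("operator identities (2a)-(2c) hold")
```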
###### Definition 2.7.
An anti-flexible algebra is a couple $(A,\cdot)$ where $A$ is a linear space,
and $\cdot:A\times A\rightarrow A$ is a bilinear product such that for all
$x,y,z\in A$,
$\displaystyle(x\cdot y)\cdot z-(z\cdot y)\cdot x=x\cdot(y\cdot
z)-z\cdot(y\cdot x).$ (3)
Using associator $a(x,y,z)=(x\cdot y)\cdot z-x\cdot(y\cdot z)$, the equality
(3) is equivalent to
$\displaystyle a(x,y,z)=a(z,y,x).$ (4)
In view of (4), anti-flexible algebras were called center-symmetric algebras
in [30].
###### Proposition 2.8.
Any commutative nearly associative algebra is anti-flexible.
###### Proof.
For all $x,y,z\in A$ in a commutative nearly associative algebra $(A,\cdot)$,
by using nearly associativity, commutativity and again nearly associativity,
$\displaystyle a(x,y,z)=(x\cdot y)\cdot z-x\cdot(y\cdot z)=y\cdot(z\cdot
x)-(z\cdot x)\cdot y=[y,z\cdot x]=[y,x\cdot z]=$ $\displaystyle y\cdot(x\cdot
z)-(x\cdot z)\cdot y=(z\cdot y)\cdot x-z\cdot(y\cdot x)=a(z,y,x)$
proves (4) meaning that $(A,\cdot)$ is anti-flexible. ∎
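The associator symmetry (4) can likewise be checked numerically on a commutative example. A Python sketch (our own illustration, using the second algebra of Example 2.2, which is commutative):

```python
import itertools

# the second (commutative) algebra of Example 2.2:
# e1*e1 = e2, e1*e2 = e2*e1 = e1, e2*e2 = e2
C = {(0, 0): (0.0, 1.0), (0, 1): (1.0, 0.0),
     (1, 0): (1.0, 0.0), (1, 1): (0.0, 1.0)}

def mul(x, y):
    out = [0.0, 0.0]
    for i in range(2):
        for j in range(2):
            for k in range(2):
                out[k] += x[i] * y[j] * C[(i, j)][k]
    return out

def assoc(x, y, z):
    # associator a(x, y, z) = (x*y)*z - x*(y*z)
    return [s - t for s, t in zip(mul(mul(x, y), z), mul(x, mul(y, z)))]

e = [[1.0, 0.0], [0.0, 1.0]]
for x, y, z in itertools.product(e, repeat=3):
    # anti-flexibility (4): a(x, y, z) = a(z, y, x)
    assert assoc(x, y, z) == assoc(z, y, x)
print("anti-flexible")
```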
## 3 Classification of the two-dimensional nearly associative algebras
###### Theorem 3.1.
Any two-dimensional algebra $(A,\cdot)$ is nearly associative if and only if
$\displaystyle e_{1}\cdot(e_{1}\cdot e_{1})=(e_{1}\cdot e_{1})\cdot
e_{1},\qquad e_{1}\cdot(e_{1}\cdot e_{2})=(e_{2}\cdot e_{1})\cdot e_{1},$
$\displaystyle e_{1}\cdot(e_{2}\cdot e_{1})=(e_{1}\cdot e_{1})\cdot
e_{2},\qquad e_{2}\cdot(e_{1}\cdot e_{1})=(e_{1}\cdot e_{2})\cdot e_{1},$
$\displaystyle e_{1}\cdot(e_{2}\cdot e_{2})=(e_{2}\cdot e_{1})\cdot
e_{2},\qquad e_{2}\cdot(e_{1}\cdot e_{2})=(e_{2}\cdot e_{2})\cdot e_{1},$
$\displaystyle e_{2}\cdot(e_{2}\cdot e_{1})=(e_{1}\cdot e_{2})\cdot
e_{2},\qquad e_{2}\cdot(e_{2}\cdot e_{2})=(e_{2}\cdot e_{2})\cdot e_{2},$
where $\\{e_{1},e_{2}\\}$ is a basis of $A.$
###### Theorem 3.2.
Any two-dimensional nearly associative algebra is isomorphic to one of the
following nearly associative algebras:
* •
For all $(\alpha,\beta)\in\mathbb{K}^{2}\backslash\\{(0,0)\\}$, $e_{1}\cdot
e_{1}=\alpha e_{2},e_{1}\cdot e_{2}=\beta e_{1}=e_{2}\cdot e_{1},e_{2}\cdot
e_{2}=\beta e_{2}$.
* •
For all $(\alpha,\beta)\in\mathbb{K}^{2}\backslash\\{(0,0)\\}$, $e_{1}\cdot
e_{1}=\alpha e_{1}+\beta e_{2},e_{1}\cdot e_{2}=\beta e_{1}+\alpha
e_{2}=e_{2}\cdot e_{1},\\\ e_{2}\cdot e_{2}=\alpha e_{1}+\beta e_{2}$.
* •
For all $(\alpha,\beta,\gamma)\in\mathbb{K}^{3}$, such that
$\gamma^{2}+4\alpha\beta\geq 0,$ $e_{1}\cdot e_{1}=\alpha e_{1},e_{2}\cdot
e_{2}=\beta e_{1}+\gamma e_{2},\\\ e_{1}\cdot
e_{2}=\frac{1}{2}\left(\gamma+\sqrt{\gamma^{2}+4\alpha\beta}\right)e_{1}=e_{2}\cdot
e_{1}.$
###### Proof.
Equip the linear space $A$ with the basis $\\{e_{1},e_{2}\\}$, and for all
$i,j\in\\{1,2\\}$, set $e_{i}\cdot e_{j}=a_{ij}e_{1}+b_{ij}e_{2}$, where
$a_{ij}\in\mathbb{K}$ and $b_{ij}\in\mathbb{K}$. In addition, for all
$i,j,k\in\\{1,2\\}$,
$a_{jk}a_{i1}+b_{jk}a_{i2}=a_{ki}a_{1j}+b_{ki}a_{2j},a_{jk}b_{i1}+b_{jk}b_{i2}=a_{ki}b_{1j}+b_{ki}b_{2j}.$
By Theorem 3.1,
$\displaystyle\left\\{\begin{array}[]{lllllllllllllllll}e_{1}\cdot(e_{1}\cdot
e_{1})=(e_{1}\cdot e_{1})\cdot e_{1}\\\ e_{1}\cdot(e_{1}\cdot
e_{2})=(e_{2}\cdot e_{1})\cdot e_{1}\\\ e_{1}\cdot(e_{2}\cdot
e_{1})=(e_{1}\cdot e_{1})\cdot e_{2}\\\ e_{2}\cdot(e_{1}\cdot
e_{1})=(e_{1}\cdot e_{2})\cdot e_{1}\\\ e_{1}\cdot(e_{2}\cdot
e_{2})=(e_{2}\cdot e_{1})\cdot e_{2}\\\ e_{2}\cdot(e_{1}\cdot
e_{2})=(e_{2}\cdot e_{2})\cdot e_{1}\\\ e_{2}\cdot(e_{2}\cdot
e_{1})=(e_{1}\cdot e_{2})\cdot e_{2}\\\ e_{2}\cdot(e_{2}\cdot
e_{2})=(e_{2}\cdot e_{2})\cdot e_{2}\end{array}\right.$
$\displaystyle\Longleftrightarrow\left\\{\begin{array}[]{lllllllllllllllll}a_{11}a_{11}+b_{11}a_{12}=a_{11}a_{11}+b_{11}a_{21},\\\
a_{11}b_{11}+b_{11}b_{12}=a_{11}b_{11}+b_{11}b_{21},\\\
a_{12}a_{11}+b_{12}a_{12}=a_{21}a_{11}+b_{21}a_{21},\\\
a_{12}b_{11}+b_{12}b_{12}=a_{21}b_{11}+b_{21}b_{21},\\\
a_{21}a_{11}+b_{21}a_{12}=a_{11}a_{12}+b_{11}a_{22},\\\
a_{21}b_{11}+b_{21}b_{12}=a_{11}b_{12}+b_{11}b_{22},\\\
a_{11}a_{21}+b_{11}a_{22}=a_{12}a_{11}+b_{12}a_{21},\\\
a_{11}b_{21}+b_{11}b_{22}=a_{12}b_{11}+b_{12}b_{21},\\\
a_{22}a_{11}+b_{22}a_{12}=a_{21}a_{12}+b_{21}a_{22},\\\
a_{22}b_{11}+b_{22}b_{12}=a_{21}b_{12}+b_{21}b_{22},\\\
a_{12}a_{21}+b_{12}a_{22}=a_{22}a_{11}+b_{22}a_{21},\\\
a_{12}b_{21}+b_{12}b_{22}=a_{22}b_{11}+b_{22}b_{21},\\\
a_{21}a_{21}+b_{21}a_{22}=a_{12}a_{12}+b_{12}a_{22},\\\
a_{21}b_{21}+b_{21}b_{22}=a_{12}b_{12}+b_{12}b_{22},\\\
a_{22}a_{21}+b_{22}a_{22}=a_{22}a_{12}+b_{22}a_{22},\\\
a_{22}b_{21}+b_{22}b_{22}=a_{22}b_{12}+b_{22}b_{22}\end{array}\right.$
$\displaystyle\Longleftrightarrow\left\\{\begin{array}[]{lllllllllllllllll}e(b-c)=0,e(f-g)=0,\\\
h(b-c)=0,d(b-c)=0,\\\ d(f-g)=0,a(f-g)=0\\\ (b-c)(b+c)=0,\\\ (f-g)(f+g)=0\\\
e(b-c)+f(g-a)=0\\\ d(a-g)+b(h-c)=0\\\ a(b-c)=0,h(f-g)=0\\\ (bf-
cg)=0,bg=de=fc\end{array}\right.$
$\displaystyle\Longleftrightarrow\left\\{\begin{array}[]{cccccccccccccc}\left\\{\begin{array}[]{llllllllllll}a=r_{1},b=r_{2},c=r_{2},d=r_{1},\\\
e=r_{2},f=r_{1},g=r_{1},h=r_{2}\\\ \end{array}\right.\mbox{or
}\left\\{\begin{array}[]{llllllllllll}a=r_{1},b=r_{2},c=r_{2},d=r_{2},\\\
e=r_{1},f=r_{1},g=r_{1},h=r_{2}\\\ \end{array}\right.\\\ \mbox{or }\\\
\left\\{\begin{array}[]{lllllllllll}a=r_{5},b=0,c=0,d=0,\\\
e=r_{6},f=r_{5},g=r_{5},h=r_{8}\\\ \end{array}\right.\mbox{or
}\left\\{\begin{array}[]{lllllllllll}a=r_{5},b=0,c=0,d=r_{6},\\\
e=0,f=r_{5},g=r_{5},h=r_{8}\\\ \end{array}\right.\\\ \mbox{or }\\\
\left\\{\begin{array}[]{llllllllll}a=r_{9},b=\frac{|r_{12}|+r_{12}}{2},\\\
c=\frac{|r_{12}|+r_{12}}{2},d=0,\\\ e=r_{11},f=0,g=0,h=r_{12}\\\
\end{array}\right.\mbox{or
}\left\\{\begin{array}[]{llllllllll}a=r_{9},b=\frac{\sqrt{4r_{10}\,r_{9}+{{r_{12}}^{2}}}+r_{12}}{2},\\\
c=\frac{\sqrt{4r_{10}\,r_{9}+{{r_{12}}^{2}}}+r_{12}}{2},d=r_{10},\\\
e=0,f=0,g=0,h=r_{12}\end{array}\right.\\\ \mbox{or }\\\
\left\\{\begin{array}[]{lllllllllllllll}a=r_{13},b=\frac{r_{16}-\sqrt{{{r_{16}}^{2}}}}{2},\\\
c=\frac{r_{16}-\sqrt{{{r_{16}}^{2}}}}{2},d=0,\\\ e=r_{14},f=0\\\
,g=0,h=r_{16}\\\ \end{array}\right.\mbox{or
}\left\\{\begin{array}[]{lllllllllllllll}a=r_{13},b=\frac{r_{16}-\sqrt{{{r_{16}}^{2}}+4r_{13}\,r_{14}}}{2},\\\
c=\frac{r_{16}-\sqrt{{{r_{16}}^{2}}+4r_{13}\,r_{14}}}{2},d=r_{14},\\\
e=0,f=0,g=0,h=r_{16}\\\ \end{array}\right.\\\ \mbox{or }\\\
\left\\{\begin{array}[]{llllllllllllllll}a=r_{17},b=0,c=0,d=0,\\\
e=r_{18},f=0,g=0,h=0\\\ \end{array}\right.\mbox{or
}\left\\{\begin{array}[]{llllllllllllllll}a=r_{17},b=\sqrt{r_{17}\,r_{18}},c=\sqrt{r_{17}\,r_{18}},\\\
d=r_{18},e=0,f=0,g=0,h=0\\\ \end{array}\right.\\\ \mbox{or }\\\
\left\\{\begin{array}[]{lllllllllllllll}a=r_{20},b=0,c=0,d=0,\\\
e=r_{21},f=0,g=0,h=0\\\ \end{array}\right.\mbox{or
}\left\\{\begin{array}[]{lllllllllllllll}a=r_{20},b=-\sqrt{r_{20}\,r_{21}},\\\
c=-\sqrt{r_{20}\,r_{21}},d=r_{21},\\\ e=0,f=0,g=0,h=0\\\ \end{array}\right.\\\
\mbox{or }\\\ \left\\{\begin{array}[]{lllllllllllll}a=0,b=0,c=0,d=0,\\\
e=r_{24},f=0,g=0,h=r_{25}\\\ \end{array}\right.\mbox{or
}\left\\{\begin{array}[]{lllllllllllll}a=0,b=0,c=0,d=r_{23},\\\
e=0,f=0,g=0,h=r_{25}\\\ \end{array}\right.\\\ \mbox{or }\\\
\left\\{\begin{array}[]{lllllllllllllll}a=0,b=r_{26},c=r_{26},d=0,\\\
e=r_{28},f=0,g=0,h=r_{26}\\\ \end{array}\right.\mbox{or
}\left\\{\begin{array}[]{lllllllllllllll}a=0,b=r_{26},c=r_{26},d=r_{27},\\\
e=0,f=0,g=0,h=r_{26}\\\ \end{array}\right.\\\ \mbox{or }\\\
\left\\{\begin{array}[]{llllllllllllllll}a=r_{29},b=0,c=0,d=0,\\\
e=0,f=0,g=0,h=0\end{array}\right.\end{array}\right.$
with
$a_{11}=a,a_{12}=b,a_{21}=c,a_{22}=d,b_{11}=e,b_{12}=f,b_{21}=g,b_{22}=h.$
Therefore, the non-isomorphic algebras generated by these structure constants
are:
* •
For all $(\alpha,\beta)\in\mathbb{K}^{2}\backslash\\{(0,0)\\}$,$e_{1}\cdot
e_{1}=\alpha e_{2},e_{1}\cdot e_{2}=\beta e_{1}=e_{2}\cdot e_{1},e_{2}\cdot
e_{2}=\beta e_{2}$.
* •
For all $(\alpha,\beta)\in\mathbb{K}^{2}\backslash\\{(0,0)\\}$,$e_{1}\cdot
e_{1}=\alpha e_{1}+\beta e_{2},e_{1}\cdot e_{2}=\beta e_{1}+\alpha
e_{2}=e_{2}\cdot e_{1},\\\ e_{2}\cdot e_{2}=\alpha e_{1}+\beta e_{2}$.
* •
For all $(\alpha,\beta,\gamma)\in\mathbb{K}^{3}$, such that
$\gamma^{2}+4\alpha\beta\geq 0,$ $e_{1}\cdot e_{1}=\alpha e_{1},e_{2}\cdot
e_{2}=\beta e_{1}+\gamma e_{2},\\\ e_{1}\cdot
e_{2}=\frac{1}{2}\left(\gamma+\sqrt{\gamma^{2}+4\alpha\beta}\right)e_{1}=e_{2}\cdot
e_{1}.$
∎
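The three families in Theorem 3.2 can be spot-checked numerically over $\mathbb{R}$ by sampling random parameter values. A Python sketch (our own verification, not part of the proof):

```python
import itertools
import math
import random

def mul(c, x, y):
    n = len(c)
    out = [0.0] * n
    for i in range(n):
        for j in range(n):
            for k in range(n):
                out[k] += x[i] * y[j] * c[i][j][k]
    return out

def nearly_assoc(c):
    # test identity (1) on basis triples; bilinearity does the rest
    n = len(c)
    e = [[float(i == k) for k in range(n)] for i in range(n)]
    return all(
        all(abs(s - t) < 1e-9 for s, t in
            zip(mul(c, x, mul(c, y, z)), mul(c, mul(c, z, x), y)))
        for x, y, z in itertools.product(e, repeat=3))

random.seed(0)
for _ in range(100):
    a, b, g = (random.uniform(-2.0, 2.0) for _ in range(3))
    fam1 = [[[0.0, a], [b, 0.0]], [[b, 0.0], [0.0, b]]]
    fam2 = [[[a, b], [b, a]], [[b, a], [a, b]]]
    assert nearly_assoc(fam1) and nearly_assoc(fam2)
    if g * g + 4.0 * a * b >= 0.0:       # family 3 needs a real square root
        lam = (g + math.sqrt(g * g + 4.0 * a * b)) / 2.0
        fam3 = [[[a, 0.0], [lam, 0.0]], [[lam, 0.0], [b, g]]]
        assert nearly_assoc(fam3)
print("all three families satisfy identity (1)")
```

Note that $\lambda=\frac{1}{2}\left(\gamma+\sqrt{\gamma^{2}+4\alpha\beta}\right)$ is a root of $t^{2}-\gamma t-\alpha\beta=0$, which is exactly what identity (1) requires of the mixed product in the third family.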
## 4 Bimodules and matched pairs of nearly associative algebras
###### Definition 4.1.
Let $(A,\cdot)$ be a nearly associative algebra. Consider the linear maps
$l,r:A\rightarrow{\rm End}(V)$, where $V$ is a linear space. A triple
$(l,r,V)$ is a bimodule of $(A,\cdot)$ if for all $x,y\in A$, the following
relations
$\displaystyle l(x)l(y)$ $\displaystyle=$ $\displaystyle r(y)r(x),$ (18a)
$\displaystyle l(x)r(y)$ $\displaystyle=$ $\displaystyle l({y\cdot x}),$ (18b)
$\displaystyle r(x)l(y)$ $\displaystyle=$ $\displaystyle r({x\cdot y})$ (18c)
are satisfied.
###### Example 4.2.
Let $(A,\cdot)$ be a nearly associative algebra. The triple $(L,R,A)$ is a
bimodule of $(A,\cdot)$, where for any $x,y\in A$, $L(x)y=x\cdot y=R(y)x$.
###### Proposition 4.3.
Let $(l,r,V)$ be a bimodule of a nearly associative algebra $(A,\cdot)$, where
$l;r:A\rightarrow{\rm End}(V)$ are two linear maps and $V$ a linear space.
There is a nearly associative algebra defined on $A\oplus V$ by, for any
$x,y\in A$ and any $u,v\in V,$
$\displaystyle(x+u)\ast(y+v)=x\cdot y+l(x)v+r(y)u.$ (19)
###### Proof.
Consider the bimodule $(l,r,V)$ of the nearly associative algebra $(A,\cdot)$.
For all $x,y,z\in A$ and $u,v,w\in V$ we have:
$\displaystyle(x+u)\ast((y+v)\ast(z+w))=x\cdot(y\cdot
z)+l(x)l(y)w+l(x)r(z)v+r({y\cdot z})u$ (20a)
$\displaystyle((z+w)\ast(x+u))\ast(y+v)=(z\cdot x)\cdot y+l({z\cdot
x})v+r(y)l(z)u+r(y)r(x)w$ (20b)
Using (18a) - (18c) in (20a) and (20b) we easily deduce that $(A\oplus
V,\ast)$ is a nearly associative algebra. ∎
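For the regular bimodule $(L,R,A)$ of Example 4.2, the construction (19) can be tested concretely. A Python sketch (our own illustration, taking $V=A$ and the second algebra of Example 2.2 as base):

```python
import itertools

# base algebra: the second algebra of Example 2.2
# e1*e1 = e2, e1*e2 = e2*e1 = e1, e2*e2 = e2
C = {(0, 0): (0.0, 1.0), (0, 1): (1.0, 0.0),
     (1, 0): (1.0, 0.0), (1, 1): (0.0, 1.0)}

def mul(x, y):
    out = [0.0, 0.0]
    for i in range(2):
        for j in range(2):
            for k in range(2):
                out[k] += x[i] * y[j] * C[(i, j)][k]
    return out

# semidirect product (19) on A + V with V = A and the regular bimodule
# (L, R, A) of Example 4.2: l(x)v = x*v and r(y)u = u*y
def semi_mul(p, q):
    x, u = p[:2], p[2:]
    y, v = q[:2], q[2:]
    return mul(x, y) + [s + t for s, t in zip(mul(x, v), mul(u, y))]

e = [[float(i == k) for k in range(4)] for i in range(4)]
for x, y, z in itertools.product(e, repeat=3):
    lhs = semi_mul(x, semi_mul(y, z))
    rhs = semi_mul(semi_mul(z, x), y)
    assert all(abs(s - t) < 1e-12 for s, t in zip(lhs, rhs))
print("A + A with the regular bimodule is nearly associative")
```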
###### Corollary 4.4.
Let $(l,r,V)$ be a bimodule of a nearly associative algebra $(A,\cdot)$, where
$l,r:\rm A\rightarrow{\rm End}(V)$ are two linear maps and $V$ a linear space.
Then there is a Lie algebra product on $A\oplus V$ given by
$\displaystyle[x+u,y+v]=[x,y]_{{}_{\cdot}}+(l(x)-r(x))v-(l(y)-r(y))u$ (21)
for all $x,y\in A$ and for any $u,v\in V$.
###### Proof.
It is straightforward to check that the commutator of the product defined in
(19) is the bracket defined in (21). Since $(A\oplus V,\ast)$ is a nearly
associative algebra by Proposition 4.3, Proposition 2.5 implies that the
bracket (21) satisfies the Jacobi identity. ∎
###### Definition 4.5.
Let $(\mathcal{G},[.,.]_{{}_{\mathcal{G}}})$ be a Lie algebra. A
representation of $(\mathcal{G},[.,.]_{{}_{\mathcal{G}}})$ over the linear
space $V$ is a linear map $\rho:\mathcal{G}\rightarrow{\rm End}(V)$ satisfying
$\displaystyle\rho([x,y]_{{}_{\mathcal{G}}})=\rho(x)\circ\rho(y)-\rho(y)\circ\rho(x)$
(22)
for all $x,y\in\mathcal{G}$.
###### Proposition 4.6.
Let $(A,\cdot)$ be a nearly associative algebra and let $V$ be a finite-
dimensional linear space over the field $\mathbb{K}$ such that $(l,r,V)$ is a
bimodule of $(A,\cdot)$, where $l,r:A\rightarrow{\rm End}(V)$ are two linear
maps. Then, the linear map
$l-r:A\rightarrow{\rm End}(V),\quad x\mapsto l(x)-r(x)$
is a representation of the Lie algebra $\mathcal{G}(A)$ underlying
$(A,\cdot)$.
###### Proof.
Let $(l,r,V)$ be a bimodule of the nearly associative algebra $(A,\cdot)$. For
$x,y\in A$,
$(l(x)-r(x))(l(y)-r(y))-(l(y)-r(y))(l(x)-r(x))=\cr
l(x)l(y)-l(x)r(y)-r(x)l(y)+r(x)r(y)-l(y)l(x)+l(y)r(x)+r(y)l(x)-r(y)r(x)\cr=-l(x)r(y)-r(x)l(y)+l(y)r(x)+r(y)l(x)=-l({y\cdot
x})-r({x\cdot y})+l({x\cdot y})+r({y\cdot x})\cr=(l-r)(x\cdot y-y\cdot
x)=(l-r)([x,y]).$
Therefore, (22) is satisfied for $l-r=\rho$. ∎
###### Definition 4.7.
Let $(A,\cdot)$ be a nearly associative algebra and $(l,r,V)$ its associated
bimodule, where $V$ is a finite-dimensional linear space. The dual maps
$l^{*},r^{*}$ of the linear maps $l,r$, respectively, are defined as $\displaystyle
l^{*},r^{*}:A\rightarrow{\rm End}(V^{*})$ such that for any $x\in A,u^{*}\in
V^{*},v\in V,$
$\displaystyle\left<l^{*}(x)u^{*},v\right>=\left<u^{*},l(x)v\right>,$ (23a)
$\displaystyle\left<r^{*}(x)u^{*},v\right>=\left<u^{*},r(x)v\right>.$ (23b)
###### Proposition 4.8.
Let $(A,\cdot)$ be a nearly associative algebra and $(l,r,V)$ be its bimodule.
The following relations are equivalent:
1. (i)
$(r^{*},l^{*},V^{*})$ is a bimodule of $(A,\cdot)$,
2. (ii)
$l(x)r(y)=r(y)l(x)$, for all $x,y\in A$,
3. (iii)
$(l^{*},r^{*},V^{*})$ is a bimodule of $(A,\cdot)$.
###### Proof.
Let $(A,\cdot)$ be a nearly associative algebra and $(l,r,V)$ be its
associated bimodule i.e. the linear maps $l,r:A\rightarrow{\rm End}(V)$
satisfying (18a) - (18c) and $V$ is a finite-dimensional linear space.
* •
Suppose that $(r^{*},l^{*},V^{*})$ is a bimodule of $(A,\cdot)$, i.e. with
correspondences $l\rightarrow r^{*}$ and $r\rightarrow l^{*}$, (18a) - (18c)
are satisfied. For any $x,y\in A$, $v\in V$, $u^{*}\in V^{*}$:
$\langle l(x)r(y)v,u^{*}\rangle=\langle v,r^{*}(y)l^{*}(x)u^{*}\rangle=\langle
v,r^{*}({y\cdot x})u^{*}\rangle\\\ =\langle r({y\cdot
x})v,u^{*}\rangle=\langle r(y)l(x)v,u^{*}\rangle.$
Therefore, the relation $l(x)r(y)=r(y)l(x)$ is satisfied.
* •
Suppose $l(x)r(y)=r(y)l(x)$ for any $x,y\in A$. For any $x,y\in A$, $v\in V$,
$u^{*}\in V^{*}$:
$\left<l^{*}(x)l^{*}(y)u^{*},v\right>=\left<u^{*},l(y)l(x)v\right>=\left<u^{*},r(x)r(y)v\right>=\left<r^{*}(y)r^{*}(x)u^{*},v\right>,$
yields $l^{*}(x)l^{*}(y)=r^{*}(y)r^{*}(x);$
$\left<l^{*}(x)r^{*}(y)u^{*},v\right>=\left<u^{*},r(y)l(x)v\right>=\left<u^{*},l(x)r(y)v\right>=\\\
\left<u^{*},l({y\cdot x})v\right>=\left<l^{*}({y\cdot x})u^{*},v\right>,$
yields $l^{*}(x)r^{*}(y)=l^{*}({y\cdot x})$;
$\left<r^{*}(y)l^{*}(x)u^{*},v\right>=\left<u^{*},l(x)r(y)v\right>=\left<u^{*},r(y)l(x)v\right>=\\\
\left<u^{*},r({y\cdot x})v\right>=\left<r^{*}({y\cdot x})u^{*},v\right>,$
yields $r^{*}(y)l^{*}(x)=r^{*}({y\cdot x}).$ Thus, with correspondences
$r^{*}\rightarrow l$ and $l^{*}\rightarrow r$, (18a) - (18c) are satisfied.
Similarly, one obtains the equivalence between $l(x)r(y)=r(y)l(x)$, for any
$x,y\in A$, and $(l^{*},r^{*},V^{*})$ being a bimodule of $(A,\cdot)$. ∎
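When $V$ is finite-dimensional, the dual maps act by transposed matrices, so the proposition can be checked concretely. A NumPy sketch (our own illustration, using the regular bimodule of the second algebra of Example 2.2, whose operators $L$ and $R$ commute):

```python
import numpy as np

# second algebra of Example 2.2 (commutative, so L = R and L, R commute)
C = np.zeros((2, 2, 2))
C[0, 0, 1] = C[0, 1, 0] = C[1, 0, 0] = C[1, 1, 1] = 1.0

mul = lambda x, y: np.einsum("i,j,ijk->k", x, y, C)
L = lambda x: np.einsum("i,ijk->kj", x, C)     # left multiplication matrix
R = lambda y: np.einsum("j,ijk->ki", y, C)     # right multiplication matrix
# dual maps act on A* by transposed matrices: <l*(x)u, v> = <u, l(x)v>
Ld = lambda x: L(x).T
Rd = lambda x: R(x).T

e = np.eye(2)
for x in e:
    for y in e:
        # commutation condition (ii) of Proposition 4.8
        assert np.allclose(L(x) @ R(y), R(y) @ L(x))
        # (18a)-(18c) for the dual triple (l*, r*, V*) = (Ld, Rd, A*)
        assert np.allclose(Ld(x) @ Ld(y), Rd(y) @ Rd(x))
        assert np.allclose(Ld(x) @ Rd(y), Ld(mul(y, x)))
        assert np.allclose(Rd(x) @ Ld(y), Rd(mul(x, y)))
print("dual bimodule relations hold")
```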
###### Remark 4.9.
It is clear that $(L_{\cdot}^{*},R_{\cdot}^{*},A^{*})$ and
$(R_{\cdot}^{*},L_{\cdot}^{*},A^{*})$ are bimodules of the nearly associative
algebra $(A,\cdot)$ if and only if $L$ and $R$ commute.
###### Theorem 4.10.
Let $(A,\cdot)$ and $(B,\circ)$ be two nearly associative algebras. Suppose
that $(l_{A},r_{A},B)$ and $(l_{B},r_{B},A)$ are bimodules of $(A,\cdot)$ and
$(B,\circ)$, respectively, where $l_{A},r_{A}:A\rightarrow{\rm End}(B)$,
$l_{B},r_{B}:B\rightarrow{\rm End}(A)$ are four linear maps satisfying for all
$x,y\in A$, $a,b\in B$ the following relations
$\displaystyle r_{B}(l_{A}(x)a)y+y\cdot(r_{B}(a)x)-(l_{B}(a)y)\cdot
x-l_{B}(r_{A}(y)a)x=0,$ (24a) $\displaystyle r_{B}(a)(x\cdot
y)-y\cdot(l_{B}(a)x)-r_{B}(r_{A}(x)a)y=0,$ (24b) $\displaystyle
l_{B}(a)(x\cdot y)-(r_{B}(a)y)\cdot x-l_{B}(l_{A}(y)a)x=0,$ (24c)
$\displaystyle r_{A}(l_{B}(a)x)b+b\circ(r_{A}(x)a)-(l_{A}(x)b)\circ
a-l_{A}(r_{B}(b)x)a=0,$ (24d) $\displaystyle r_{A}(x)(a\circ
b)-b\circ(l_{A}(x)a)-r_{A}(r_{B}(a)x)b=0,$ (24e) $\displaystyle
l_{A}(x)(a\circ b)-(r_{A}(x)b)\circ a-l_{A}(l_{B}(b)x)a=0.$ (24f)
Then, $(A\oplus B,\ast)$ is a nearly associative algebra, where
$\displaystyle(x+a)\ast(y+b)=(x\cdot y+l_{B}(a)y+r_{B}(b)x)+(a\circ
b+l_{A}(x)b+r_{A}(y)a).$ (25)
for all $x,y\in A,a,b\in B$.
###### Proof.
For any $x,y,z\in A$, and for any $a,b,c\in B$, we have
$(x+a)\ast((y+b)\ast(z+c))=x\cdot(y\cdot
z)+\\{x\cdot(l_{B}(b)z)+r_{B}(r_{A}(z)b)x\\}+l_{B}(a)(y\cdot
z)\cr+\\{x\cdot(r_{B}(c)y)+r_{B}(l_{A}(y)c)x\\}+l_{B}(a)(l_{B}(b)z)+r_{B}(b\circ
c)x\cr+l_{B}(a)(r_{B}(c)y)+a\circ(b\circ
c)+\\{a\circ(l_{A}(y)c)+r_{A}(r_{B}(c)y)a\\}\cr+\\{a\circ(r_{A}(z)b)+r_{A}(l_{B}(b)z)a\\}+l_{A}(x)(l_{A}(y)c)+l_{A}(x)(b\circ
c)+l_{A}(x)(r_{A}(z)b)+r_{A}(y\cdot z)a;\\\\[8.5359pt]
((z+c)\ast(x+a))\ast(y+b)=(z\cdot x)\cdot y+\\{(l_{B}(c)x)\cdot
y+l_{B}(r_{A}(x)c)y\\}+r_{B}(b)(z\cdot x)\cr+\\{(r_{B}(a)z)\cdot
y+l_{B}(l_{A}(z)a)y\\}+l_{B}(c\circ
a)y+r_{B}(b)(l_{B}(c)x)\cr+r_{B}(b)(r_{B}(a)z)+(c\circ a)\circ
b+\\{(l_{A}(z)a)\circ b+l_{A}(r_{B}(a)z)b\\}\cr+\\{(r_{A}(x)c)\circ
b+l_{A}(l_{B}(c)x)b\\}+r_{A}(y)(r_{A}(x)c)+r_{A}(y)(c\circ
a)+r_{A}(y)(l_{A}(z)a)+l_{A}(z\cdot x)b.$
Using (24a) - (24f) and that $(l_{A},r_{A},B)$ and $(l_{B},r_{B},A)$ are
bimodules of $(A,\cdot)$ and $(B,\circ)$, respectively, we derive that
$(A\oplus B,\ast)$ is a nearly associative algebra. ∎
###### Definition 4.11 ([47]).
Let $(\mathcal{G},[.,.]_{{}_{\mathcal{G}}})$ and
$({\mathcal{H},[.,.]_{{}_{\mathcal{H}}}})$ be two Lie algebras such that
$\rho:\mathcal{G}\rightarrow{\rm End}(\mathcal{H})$ and
$\mu:\mathcal{H}\rightarrow{\rm End}(\mathcal{G})$ are representations of
$\mathcal{G}$ and $\mathcal{H}$, respectively. A matched pair of Lie algebras
$\mathcal{G}$ and $\mathcal{H}$ is $(\mathcal{G},\mathcal{H},\rho,\mu)$ such
that $\rho$ and $\mu$ are satisfying the following relations, for all
$x,y\in\mathcal{G}$ and $a,b\in\mathcal{H}$,
$\displaystyle\rho(x)[a,b]_{{}_{\mathcal{G}}}-[\rho(x)a,b]_{{}_{\mathcal{H}}}-[a,\rho(x)b]_{{}_{\mathcal{H}}}+\rho(\mu(a)x)b-\rho(\mu(b)x)a=0,$
(26a)
$\displaystyle\mu(a)[x,y]_{{}_{\mathcal{G}}}-[\mu(a)x,y]_{{}_{\mathcal{G}}}-[x,\mu(a)y]_{{}_{\mathcal{G}}}+\mu(\rho(x)a)y-\mu(\rho(y)a)x=0.$
(26b)
###### Corollary 4.12.
Let $(A,B,l_{A},r_{A},l_{B},r_{B})$ be a matched pair of the nearly
associative algebras $(A,\cdot)$ and $(B,\circ)$. Then
$(\mathcal{G}(A),\mathcal{G}(B),l_{A}-r_{A},l_{B}-r_{B})$ is a matched pair of
Lie algebras $\mathcal{G}(A)$ and $\mathcal{G}(B)$.
###### Proof.
Let $(A,B,l_{A},r_{A},l_{B},r_{B})$ be a matched pair of the nearly
associative algebras $(A,\cdot)$ and $(B,\circ)$. In view of Proposition 4.6,
the linear maps $l_{A}-r_{A}:A\longrightarrow{\rm End}(B)$ and
$l_{B}-r_{B}:B\longrightarrow{\rm End}(A)$ are representations of the
underlying Lie algebras $\mathcal{G}(A)$ and $\mathcal{G}(B)$, respectively.
Therefore, by direct calculation, (26a) is equivalent to (24a) - (24c),
and similarly, (26b) is equivalent to (24d) - (24f). ∎
###### Proposition 4.13.
Let $(A,\cdot)$ be a nearly associative algebra. Suppose that there is a
nearly associative algebra structure $\circ$ on its dual space $A^{*}$. If,
in addition, the linear maps $L$ and $R$ commute, then
$(A,A^{*},R_{\cdot}^{*},L_{\cdot}^{*},R_{\circ}^{*},L_{\circ}^{*})$ is a
matched pair of the nearly associative algebras $(A,\cdot)$ and
$(A^{*},\circ)$ if and only if the following relations are satisfied for any
$x,y\in A$ and $a\in A^{*}$
$\displaystyle
L_{\circ}^{*}(R_{\cdot}^{*}(x)a)y-y\cdot(L_{\circ}^{*}(a)x)-(R_{\circ}^{*}(a)y)\cdot
x-R_{\circ}^{*}(L_{\cdot}^{*}(y)a)x=0,$ (27a) $\displaystyle
L_{\circ}^{*}(a)(x\cdot
y)-y\cdot(R_{\circ}^{*}(a)x)-L_{\circ}^{*}(L_{\cdot}^{*}(x)a)y=0,$ (27b)
$\displaystyle R_{\circ}^{*}(a)(x\cdot y)-(L_{\circ}^{*}(a)y)\cdot
x-R_{\circ}^{*}(R_{\cdot}^{*}(y)a)x=0.$ (27c)
###### Proof.
Since $L$ and $R$ commute, according to Remark 4.9 and Proposition 4.8, both
$(R_{\cdot}^{*},L_{\cdot}^{*},A^{*})$ and
$(L_{\cdot}^{*},R_{\cdot}^{*},A^{*})$ are bimodules of $(A,\cdot)$. Setting
$l_{A}=R_{\cdot}^{*}$, $r_{A}=L_{\cdot}^{*},l_{B}=R_{\circ}^{*}$ and
$r_{B}=L_{\circ}^{*}$ in Theorem 4.10, the equivalences between (24a) and
(27a), (24b) and (27b), and (24c) and (27c) are straightforward.
Besides, for any $x,y\in A$ and any $a,b\in A^{*}$, we have
$\displaystyle\langle L_{\circ}^{*}(R_{\cdot}^{*}(x)a)y,b\rangle=\langle
y,L_{\circ}(R_{\cdot}^{*}(x)a)b\rangle=\langle y,(R_{\cdot}^{*}(x)a)\circ
b\rangle,$ $\displaystyle\langle y\cdot(L_{\circ}^{*}(a)x),b\rangle=\langle
R_{\cdot}(L_{\circ}^{*}(a)x)y,b\rangle=\langle
y,R_{\cdot}^{*}(L_{\circ}^{*}(a)x)b\rangle,$
$\displaystyle\langle(R_{\circ}^{*}(a)y)\cdot x,b\rangle=\langle
R_{\circ}^{*}(a)y,R_{\cdot}^{*}(x)b\rangle=\langle y,(R_{\cdot}^{*}(x)b)\circ
a\rangle,$ $\displaystyle\langle
R_{\circ}^{*}(L_{\cdot}^{*}(y)a)x,b\rangle=\langle
L_{\circ}^{*}(b)x,L_{\cdot}^{*}(y)a\rangle=\langle
y\cdot(L_{\circ}^{*}(b)x),a\rangle=\langle
y,R_{\cdot}^{*}(L_{\circ}^{*}(b)x)a\rangle,$ $\displaystyle\langle
L_{\circ}^{*}(a)(x\cdot y),b\rangle=\langle R_{\cdot}(y)x,a\circ
b\rangle=\langle x,R_{\cdot}^{*}(y)(a\circ b)\rangle,$ $\displaystyle\langle
y\cdot(R_{\circ}^{*}(a)x),b\rangle=\langle
R_{\circ}^{*}(a)x,L_{\cdot}^{*}(y)b\rangle=\langle x,(L_{\cdot}^{*}(y)b)\circ
a\rangle,$ $\displaystyle\langle
L_{\circ}^{*}(L_{\cdot}^{*}(x)a)y,b\rangle=\langle
R_{\circ}^{*}(b)y,L_{\circ}^{*}(x)a\rangle=\langle
x\cdot(R_{\circ}^{*}(b)y),a\rangle=\langle
x,R_{\cdot}^{*}(R_{\circ}^{*}(b)y)a\rangle,$ $\displaystyle\langle
R_{\circ}^{*}(a)(x\cdot y),b\rangle=\langle L_{\cdot}(x)y,b\circ
a\rangle=\langle y,L_{\cdot}(x)^{*}(b\circ a)\rangle,$
$\displaystyle\langle(L_{\circ}^{*}(a)y)\cdot x,b\rangle=\langle
L_{\circ}^{*}(a)y,R_{\cdot}^{*}(b)\rangle=\langle
y,a\circ(R_{\cdot}^{*}(x)b)\rangle,$ $\displaystyle\langle
R_{\circ}^{*}(R_{\cdot}^{*}(y)a)x,b\rangle=\langle
L_{\circ}^{*}(b)x,R_{\cdot}^{*}(y)a\rangle=\langle(L_{\circ}^{*}(b)x)\cdot
y,a\rangle=\langle y,L_{\cdot}^{*}(L_{\circ}^{*}(b)x)a\rangle$
Then, Eq (24a) holds if and only if (24d) holds, Eq (24b) holds if and only if
(24e) holds, and finally Eq (24c) holds if and only if (24f) holds. ∎
## 5 Manin triple and bialgebra of nearly associative algebras
###### Definition 5.1.
A bilinear form $\mathfrak{B}$ on a nearly associative algebra $(A,\cdot)$ is
called left-invariant if, for all $x,y,z\in A$,
$\mathfrak{B}(x\cdot y,z)=\mathfrak{B}(x,y\cdot z).$ (28)
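Definition 5.1 can be sanity-checked on the simplest example in this class: $\mathbb{R}^{n}$ with componentwise multiplication is commutative and associative, hence in particular nearly associative, and the standard pairing $\mathfrak{B}(x,y)=\sum_{i}x_{i}y_{i}$ is a symmetric nondegenerate left-invariant form on it. A minimal pure-Python sketch of the check (the choice of algebra is illustrative and not taken from the text):

```python
import random

random.seed(3)
n = 4
star = lambda x, y: [a * b for a, b in zip(x, y)]     # componentwise product on R^n
B    = lambda x, y: sum(a * b for a, b in zip(x, y))  # standard pairing

rnd = lambda: [random.randint(-9, 9) for _ in range(n)]
for _ in range(100):
    x, y, z = rnd(), rnd(), rnd()
    # left-invariance (28): B(x*y, z) == B(x, y*z)
    assert B(star(x, y), z) == B(x, star(y, z))
```

Integer entries keep the comparison exact; over floats one would compare up to rounding.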
###### Proposition 5.2.
Let $(A,\cdot)$ be a nearly associative algebra. If there is a nondegenerate
symmetric invariant bilinear form $\mathfrak{B}$ defined on $A$, then as
bimodules of the nearly associative algebra $(A,\cdot)$, $(L,R,A)$ and
$(R^{*},L^{*},A^{*})$ are equivalent. Conversely, if $(L,R,A)$ and
$(R^{*},L^{*},A^{*})$ are equivalent bimodules of a nearly associative algebra
$(A,\cdot)$, then there exists a nondegenerate invariant bilinear form
$\mathfrak{B}$ on $A$.
###### Definition 5.3.
A Manin triple of nearly associative algebras is a triple of nearly
associative algebras $(A,A_{1},A_{2})$ together with a nondegenerate symmetric
invariant bilinear form $\mathfrak{B}$ on $A$ such that the following
conditions are satisfied.
1. (i)
$A_{1}$ and $A_{2}$ are nearly associative subalgebras of $A$;
2. (ii)
as linear spaces, $A=A_{1}\oplus A_{2}$;
3. (iii)
$A_{1}$ and $A_{2}$ are isotropic with respect to $\mathfrak{B}$, i.e. for any
$x_{1},y_{1}\in A_{1}$ and any $x_{2},y_{2}\in A_{2}$,
$\mathfrak{B}(x_{1},y_{1})=\mathfrak{B}(x_{2},y_{2})=0.$
###### Definition 5.4.
Let $(A,\cdot)$ be a nearly associative algebra. Suppose that $\circ$ is a
nearly associative algebra structure on the dual space $A^{*}$ of $A$ and
there is a nearly associative algebra structure on the direct sum $A\oplus
A^{*}$ of the underlying linear spaces of $A$ and $A^{*}$ such that
$(A,\cdot)$ and $(A^{*},\circ)$ are subalgebras and the natural symmetric
bilinear form on $A\oplus A^{*}$ given by $\forall x,y\in A;\forall
a^{*},b^{*}\in A^{*},$
$\mathfrak{B}_{d}(x+a^{*},y+b^{*}):=\langle a^{*},y\rangle+\langle
x,b^{*}\rangle,\;$ (29)
is left-invariant, then $(A\oplus A^{*},A,A^{*})$ is called a standard Manin
triple of nearly associative algebras associated to $\mathfrak{B}_{d}$.
Obviously, a standard Manin triple of nearly associative algebras is a Manin
triple of nearly associative algebras. By the symmetric roles of $A$ and $A^{*}$,
we have
###### Proposition 5.5.
Every Manin triple of nearly associative algebras is isomorphic to a standard
one.
###### Proposition 5.6.
Let $(A,\cdot)$ be a nearly associative algebra. Suppose that there is a
nearly associative algebra structure $\circ$ on the dual space $A^{*}$. There
exists a nearly associative algebra structure on the linear space $A\oplus
A^{*}$ such that $(A\oplus A^{*},A,A^{*})$ is a standard Manin triple of
nearly associative algebras associated to $\mathfrak{B}_{d}$ defined by (29)
if and only if
$(A,A^{*},R_{\cdot}^{*},L_{\cdot}^{*},R_{\circ}^{*},L_{\circ}^{*})$ is a
matched pair of nearly associative algebras.
###### Theorem 5.7.
Let $(A,\cdot)$ be a nearly associative algebra such that the left and right
multiplication operators commute. Suppose that there is a nearly associative
algebra structure $\circ$ on its dual space $A^{*}$ given by
$\Delta^{*}:A^{*}\otimes A^{*}\rightarrow A^{*}$. Then,
$(A,A^{*},R_{\cdot}^{*},L_{\cdot}^{*},R_{\circ}^{*},L_{\circ}^{*})$ is a
matched pair of the nearly associative algebras $(A,\cdot)$ and
$(A^{*},\circ)$ if and only if $\Delta:A\rightarrow A\otimes A$ satisfies the
following relations
$\displaystyle(R_{\cdot}(x)\otimes{\rm id}-\sigma(R_{\cdot}(x)\otimes{\rm
id}))\Delta(y)+({\rm id}\otimes L_{\cdot}(y)-\sigma({\rm id}\otimes
L_{\cdot}(y)))\Delta(x)=0,$ (30a)
$\displaystyle\begin{array}[]{r}(L_{\cdot}(x)\otimes{\rm
id})\Delta(y)+\sigma(L_{\cdot}(y)\otimes{\rm id})\Delta(x)=\Delta(x\cdot y)\\\
=\sigma({\rm id}\otimes R_{\cdot}(x))\Delta(y)+({\rm id}\otimes
R_{\cdot}(y))\Delta(x).\end{array}$ (30d)
###### Proof.
For any $a,b\in A^{*}$ and any $x,y\in A$ we have
$\displaystyle\langle(R_{\cdot}(x)\otimes{\rm id})\Delta(y),a\otimes
b\rangle=\langle y,(R_{\cdot}^{*}(x)a)\circ b\rangle=\langle
L_{\circ}^{*}(R_{\cdot}^{*}(x)a)y,b\rangle,$
$\displaystyle\langle\sigma(R_{\cdot}(x)\otimes{\rm id})\Delta(y),a\otimes
b\rangle=\langle y,(R_{\cdot}^{*}(x)b)\circ a\rangle=\langle
R_{\circ}^{*}(a)y,R_{\cdot}^{*}(x)b\rangle=\langle(R_{\circ}^{*}(a)y)\cdot
x,b\rangle,$ $\displaystyle\langle({\rm id}\otimes
L_{\cdot}(y))\Delta(x),a\otimes b\rangle=\langle
x,a\circ(L_{\cdot}^{*}(y)b)\rangle=\langle
y\cdot(L_{\circ}^{*}(a)x),b\rangle,$ $\displaystyle\langle\sigma({\rm
id}\otimes L_{\cdot}(y))\Delta(x),a\otimes b\rangle=\langle
x,b\circ(L_{\cdot}^{*}(y)a)\rangle=\langle
R_{\circ}^{*}(L_{\cdot}^{*}(y)a)x,b\rangle.$
Hence (27a) is equivalent to (30a).
Similarly, we have for any $x,y\in A$ and any $a,b\in A^{*}$
$\displaystyle\langle\Delta(x\cdot y),a\otimes b\rangle=\langle x\cdot
y,a\circ b\rangle=\langle L_{\circ}^{*}(a)(x\cdot y),b\rangle=\langle
R_{\circ}^{*}(b)(x\cdot y),a\rangle,$
$\displaystyle\langle(L_{\cdot}(x)\otimes{\rm id})\Delta(y),a\otimes
b\rangle=\langle y,(L_{\cdot}^{*}(x)a)\circ b\rangle=\langle
L_{\circ}^{*}(L_{\cdot}^{*}(x)a)y,b\rangle,$
$\displaystyle\langle\sigma(L_{\cdot}(y)\otimes{\rm id})\Delta(x),a\otimes
b\rangle=\langle x,(L_{\cdot}^{*}(y)b)\circ a\rangle=\langle
y\cdot(R_{\circ}^{*}(a)x),b\rangle,$ $\displaystyle\langle\sigma({\rm
id}\otimes R_{\cdot}(x))\Delta(y),a\otimes b\rangle=\langle
y,b\circ(R_{\cdot}^{*}(x)a)\rangle=\langle
R_{\circ}^{*}(R_{\cdot}^{*}(x)a)y,b\rangle,$ $\displaystyle\langle({\rm
id}\otimes R_{\cdot}(y))\Delta(x),a\otimes b\rangle=\langle
x,a\circ(R_{\cdot}^{*}(y)b)\rangle=\langle(L_{\circ}^{*}(a)x)\cdot
y,b\rangle.$
Therefore, (27b) and (27c) together are equivalent to (30d). ∎
###### Remark 5.8.
Obviously, if $L$ and $R$ commute, then $L^{*}$ and $R^{*}$ commute too. If,
in addition, $\gamma:A^{*}\rightarrow A^{*}\otimes A^{*}$ is a linear map such
that its dual $\gamma^{*}:A\otimes A\rightarrow A$ defines a nearly
associative algebra structure $\cdot$ on $A$, then $\Delta$ satisfies (30a)
and (30d) if and only if $\gamma$ satisfies for all $a,b\in A^{*}$,
$\displaystyle(R_{\circ}(a)\otimes{\rm id}-\sigma(R_{\circ}(a)\otimes{\rm
id}))\gamma(b)+({\rm id}\otimes L_{\circ}(b)-\sigma({\rm id}\otimes
L_{\circ}(b)))\gamma(a)=0,$ $\displaystyle(L_{\circ}(a)\otimes{\rm
id})\gamma(b)+\sigma(L_{\circ}(b)\otimes{\rm id})\gamma(a)=$
$\displaystyle\qquad\qquad\qquad\gamma(a\circ b)=\sigma({\rm id}\otimes
R_{\circ}(a))\gamma(b)+({\rm id}\otimes R_{\circ}(b))\gamma(a).$
###### Definition 5.9.
Let $(A,\cdot)$ be a nearly associative algebra in which the left ($L$) and
right ($R$) multiplication operators commute. A nearly associative bialgebra
structure on $A$ is a linear map $\Delta:A\rightarrow A\otimes A$ such that
* •
$\Delta^{*}:A^{*}\otimes A^{*}\rightarrow A^{*}$ defines a nearly associative
algebra structure on $A^{*}$,
* •
$\Delta$ satisfies (30a) and (30d).
###### Theorem 5.10.
Let $(A,\cdot)$ be a nearly associative algebra in which the left and right
multiplication operators commute. Suppose that there is a nearly associative
algebra structure on $A^{*}$, denoted by $\circ$, which defines a linear map
$\Delta:A\rightarrow A\otimes A$. Then the following conditions are
equivalent:
1. (i)
$(A\oplus A^{*},A,A^{*})$ is a standard Manin triple of nearly associative
algebras $(A,\cdot)$ and $(A^{*},\circ)$ such that its associated symmetric
bilinear form $\mathfrak{B}_{d}$ is defined by (29).
2. (ii)
$(A,A^{*},R_{\cdot}^{*},L_{\cdot}^{*},R_{\circ}^{*},L_{\circ}^{*})$ is a
matched pair of nearly associative algebras $(A,\cdot)$ and $(A^{*},\circ)$.
3. (iii)
$(A,A^{*})$ is a nearly associative bialgebra.
## 6 Hom-Lie admissible, G-Hom-associative, flexible and anti-flexible Hom-
algebras
Hom-Lie admissible algebras along with Hom-associative algebras and more
general $G$-Hom-associative algebras were first introduced, and Hom-
associative algebras and $G$-Hom-associative algebras were shown to be Hom-Lie
admissible in [48].
A Hom-algebra is a triple $(A,\mu,\alpha)$ consisting of a linear space $A$ over
a field $\mathbb{K}$, a bilinear product $\mu:A\times A\rightarrow A$ and a
linear map $\alpha:A\rightarrow A$.
###### Definition 6.1 ([48]).
Hom-Lie, Hom-Lie admissible, Hom-associative and $G$-Hom-associative Hom-
algebras (over a field $\mathbb{K}$) are defined as follows:
1. 1)
Hom-Lie algebras are triples $(A,[.,.],\alpha)$, consisting of a linear space
$A$ over a field $\mathbb{K}$, bilinear map (bilinear product) $[.,.]:A\times
A\rightarrow A$ and a linear map $\alpha:A\rightarrow A$ satisfying, for all
$x,y,z\in A$,
$\displaystyle[x,y]=-[y,x],$ (Skew-symmetry) (31)
$\displaystyle[\alpha(x),[y,z]]+[\alpha(y),[z,x]]+[\alpha(z),[x,y]]=0.$ (Hom-
Jacobi identity) (32)
2. 2)
Hom-Lie admissible algebras are Hom-algebras $(A,\mu,\alpha)$ consisting of
possibly non-associative algebra $(A,\mu)$ and a linear map
$\alpha:A\rightarrow A$, such that $(A,[.,.],\alpha)$ is a Hom-Lie algebra,
where $[x,y]=\mu(x,y)-\mu(y,x)$ for all $x,y\in A$.
3. 3)
Hom-associative algebras are triples $(A,\mu,\alpha)$ consisting of a linear
space $A$ over a field $\mathbb{K}$, a bilinear product $\mu:A\times
A\rightarrow A$ and a linear map $\alpha:A\rightarrow A$, satisfying for all
$x,y,z\in A$,
$\mu(\mu(x,y),\alpha(z))=\mu(\alpha(x),\mu(y,z)).\quad\quad\text{\rm(Hom-
associativity)}$ (33)
4. 4)
Let $G$ be a subgroup of the permutations group $\mathcal{S}_{3}$. Hom-algebra
$(A,\mu,\alpha)$ is said to be $G$-Hom-associative if
$\sum_{\sigma\in
G}(-1)^{\varepsilon({\sigma})}\left(\mu(\mu(x_{\sigma(1)},x_{\sigma(2)}),\alpha(x_{\sigma(3)}))-\mu(\alpha(x_{\sigma(1)}),\mu(x_{\sigma(2)},x_{\sigma(3)}))\right)=0,$
(34)
where $x_{i}\in A,i=1,2,3$ and $(-1)^{\varepsilon({\sigma})}$ is the signature
of the permutation $\sigma$.
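A convenient way to manufacture Hom-associative algebras from the associative case is the Yau twist (cf. the twisting generalizations referenced in [12]): if $(A,\mu)$ is associative and $\alpha$ is an algebra morphism, then $\mu_{\alpha}(x,y)=\alpha(\mu(x,y))$ satisfies (33) with respect to $\alpha$, since both sides reduce to $\alpha^{2}(\mu(\mu(x,y),z))$. A small numerical sketch, with $\alpha$ chosen (illustratively, not from the text) as conjugation by a fixed invertible integer matrix:

```python
import random

def mm(A, B):  # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

S    = [[1, 1], [0, 1]]
Sinv = [[1, -1], [0, 1]]

def alpha(X):   # conjugation by S: an algebra morphism of matrix multiplication
    return mm(mm(S, X), Sinv)

def mu(X, Y):   # Yau twist of matrix multiplication
    return alpha(mm(X, Y))

random.seed(0)
rnd = lambda: [[random.randint(-5, 5) for _ in range(2)] for _ in range(2)]
for _ in range(100):
    X, Y, Z = rnd(), rnd(), rnd()
    # Hom-associativity (33): (X*Y)*alpha(Z) == alpha(X)*(Y*Z)
    assert mu(mu(X, Y), alpha(Z)) == mu(alpha(X), mu(Y, Z))
```

Since $S$ and $S^{-1}$ have integer entries, conjugation preserves integrality and the comparison is exact.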
For any Hom-algebra $(A,\mu,\alpha)$, the Hom-associator, called also
$\alpha$-associator of $\mu$, is a trilinear map (ternary product)
$a_{\alpha,\mu}:A\times A\times A\rightarrow A$ defined by
$a_{\alpha,\mu}(x_{1},x_{2},x_{3})=\mu(\mu(x_{1},x_{2}),\alpha(x_{3}))-\mu(\alpha(x_{1}),\mu(x_{2},x_{3}))$
for all $x_{1},x_{2},x_{3}\in A$. The ordinary associator
$a_{\mu}(x_{1},x_{2},x_{3})=a_{{\rm
id},\mu}(x_{1},x_{2},x_{3})=\mu(\mu(x_{1},x_{2}),x_{3})-\mu(x_{1},\mu(x_{2},x_{3}))$
on an algebra $(A,\mu)$ is the $\alpha$-associator for the Hom-algebra
$(A,\mu,\alpha)=(A,\mu,{\rm id})$ with $\alpha={\rm id}:A\rightarrow A$, the
identity map on $A$.
Using the Hom-associator $a_{\alpha,\mu}$ and the notation
$\sigma(x_{1},x_{2},x_{3})=(x_{\sigma(1)},x_{\sigma(2)},x_{\sigma(3)})$, the
Hom-associativity (33) can be written as
$a_{\alpha,\mu}(x,y,z)=\mu(\mu(x,y),\alpha(z))-\mu(\alpha(x),\mu(y,z))=0,\quad\quad\text{\rm(Hom-
associativity)}$ (35)
or as $a_{\alpha,\mu}=0$, and the $G$-Hom-associativity (34) as
$\sum_{\sigma\in G}{(-1)^{\varepsilon({\sigma})}a_{\alpha,\mu}\circ\sigma}=0.$
(36)
If $\mu$ is the multiplication of a Hom-Lie admissible algebra, then (34) with
$G=\mathcal{S}_{3}$ is equivalent to $[x,y]=\mu(x,y)-\mu(y,x)$ satisfying the
Hom-Jacobi identity, or equivalently,
$\sum_{\sigma\in\mathcal{S}_{3}}(-1)^{\varepsilon({\sigma})}\left(\mu(\mu(x_{\sigma(1)},x_{\sigma(2)}),\alpha(x_{\sigma(3)}))-\mu(\alpha(x_{\sigma(1)}),\mu(x_{\sigma(2)},x_{\sigma(3)}))\right)=0,$
(37)
which may be written as
$\sum_{\sigma\in\mathcal{S}_{3}}{(-1)^{\varepsilon({\sigma})}a_{\alpha,\mu}\circ\sigma}=0.$
(38)
Thus, Hom-Lie admissible Hom-algebras are $\mathcal{S}_{3}$-associative Hom-
algebras. In general, for all subgroups $G$ of the permutations group
$\mathcal{S}_{3}$, all $G$-Hom-associative Hom-algebras are Hom-Lie
admissible, or in other words, all Hom-algebras from the six classes of
$G$-Hom-associative Hom-algebras, corresponding to the six subgroups of the
symmetric group $\mathcal{S}_{3}$, are Hom-Lie admissible [48, Proposition
3.4]. All six subgroups of $\mathcal{S}_{3}$ are
$\displaystyle G_{1}=\mathcal{S}_{3}({\rm id})=\\{{\rm
id}\\},G_{2}=\mathcal{S}_{3}(\tau_{12})=\\{{\rm
id},\tau_{12}\\},G_{3}=\mathcal{S}_{3}(\tau_{23})=\\{{\rm id},\tau_{23}\\},$
$\displaystyle G_{4}=\mathcal{S}_{3}(\tau_{13})=\\{{\rm
id},\tau_{13}\\},G_{5}=\mathcal{A}_{3},G_{6}=\mathcal{S}_{3}$
where $\mathcal{A}_{3}$ is the alternating group and $\tau_{ij}$ is the
transposition of $i$ and $j$.
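In the untwisted case $\alpha={\rm id}$, the equivalence of (37) with the Jacobi identity for the commutator can be checked mechanically: for any bilinear product $\mu$, the alternating sum of associators over $\mathcal{S}_{3}$ equals the Jacobiator of $[x,y]=\mu(x,y)-\mu(y,x)$. A pure-Python sketch with a product given by random integer structure constants (an illustrative construction, not from the text):

```python
import random
from itertools import permutations

random.seed(1)
n = 3
# random integer structure constants of a (generally non-associative) product
c = [[[random.randint(-3, 3) for _ in range(n)] for _ in range(n)] for _ in range(n)]

def mul(x, y):
    return [sum(c[i][j][k] * x[i] * y[j] for i in range(n) for j in range(n))
            for k in range(n)]

def assoc(x, y, z):  # ordinary associator (alpha = id)
    return [a - b for a, b in zip(mul(mul(x, y), z), mul(x, mul(y, z)))]

def br(x, y):        # commutator [x, y] = xy - yx
    return [a - b for a, b in zip(mul(x, y), mul(y, x))]

def sign(p):         # signature via inversion count
    return (-1) ** sum(p[i] > p[j] for i in range(3) for j in range(i + 1, 3))

rnd = lambda: [random.randint(-4, 4) for _ in range(n)]
for _ in range(50):
    v = (rnd(), rnd(), rnd())
    # alternating sum of associators over S_3 ...
    alt = [0] * n
    for p in permutations(range(3)):
        alt = [a + sign(p) * b for a, b in zip(alt, assoc(v[p[0]], v[p[1]], v[p[2]]))]
    # ... equals the Jacobiator of the commutator
    jac = [a + b + d for a, b, d in zip(br(br(v[0], v[1]), v[2]),
                                        br(br(v[1], v[2]), v[0]),
                                        br(br(v[2], v[0]), v[1]))]
    assert alt == jac
```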
Table 1: $G$-Hom-associative algebras
Subgroup of $\mathcal{S}_{3}$ | Hom-algebra class names | Defining identity (Notation: $\mu(a,b)=ab$)
---|---|---
$G_{1}=$ $\mathcal{S}_{3}({\rm id})$ | Hom-associative | $\alpha(x)(yz)=(xy)\alpha(z)$
$G_{2}=$ $\mathcal{S}_{3}(\tau_{12})$ | Hom-left symmetric Hom-Vinberg | $\alpha(x)(yz)-\alpha(y)(xz)=(xy)\alpha(z)-(yx)\alpha(z)$
$G_{3}=$ $\mathcal{S}_{3}(\tau_{23})$ | $\mathcal{S}_{3}(\tau_{23})$-Hom-associative Hom-right symmetric Hom-pre-Lie | $\alpha(x)(yz)-\alpha(x)(zy)=(xy)\alpha(z)-(xz)\alpha(y)$
$G_{4}=$ $\mathcal{S}_{3}(\tau_{13})$ | $\mathcal{S}_{3}(\tau_{13})$-Hom-associative Hom-anti-flexible Hom-center symmetric | $\alpha(x)(yz)-\alpha(z)(yx)=(xy)\alpha(z)-(zy)\alpha(x)$
$G_{5}=$ $\mathcal{A}_{3}$ | $\mathcal{A}_{3}$-Hom-associative | $\begin{array}[]{l}\alpha(x)(yz)+\alpha(y)(zx)+\alpha(z)(xy)=\\\ (xy)\alpha(z)+(yz)\alpha(x)+(zx)\alpha(y)\end{array}$
$G_{6}=$ $\mathcal{S}_{3}$ | Hom-Lie admissible | $\begin{array}[]{r}\displaystyle{\sum_{\sigma\in\mathcal{S}_{3}}(-1)^{\varepsilon({\sigma})}}\left((x_{\sigma(1)}x_{\sigma(2)})\alpha(x_{\sigma(3)})\right.\\\ \left.-\alpha(x_{\sigma(1)})(x_{\sigma(2)}x_{\sigma(3)})\right)=0\end{array}$
The skew-symmetric $G_{5}$-Hom-associative Hom-algebras and Hom-Lie algebras
form the same class of Hom-algebras for linear spaces over fields of
characteristic different from $2$, since then the defining identity of
$G_{5}$-Hom-associative algebras is equivalent to the Hom-Jacobi identity of
Hom-Lie algebras when the product $\mu$ is skew-symmetric.
A Hom-right symmetric (Hom-pre-Lie) algebra is the opposite algebra of a Hom-
left-symmetric algebra.
Hom-flexible algebras, introduced in [48], generalize flexible algebras
[2, 53, 55] to the Hom-algebra context.
###### Definition 6.2 ([48]).
A Hom-algebra $(A,\mu,\alpha)$ is called flexible if
$\mu(\mu(x,y),\alpha(x))=\mu(\alpha(x),\mu(y,x))$ (39)
for any $x,y$ in $A$.
Using the $\alpha$-associator
$a_{\alpha,\mu}(x,y,z)=\mu(\mu(x,y),\alpha(z))-\mu(\alpha(x),\mu(y,z)),$ the
condition (39) may be written as
$a_{\alpha,\mu}(x,y,x)=0.$ (40)
Since Hom-associator map $a_{\alpha,\mu}$ is a trilinear map,
$a_{\alpha,\mu}(z-x,y,z-x)=a_{\alpha,\mu}(z,y,z)+a_{\alpha,\mu}(x,y,x)-a_{\alpha,\mu}(x,y,z)-a_{\alpha,\mu}(z,y,x),$
and hence (40) yields
$a_{\alpha,\mu}(x,y,z)=-a_{\alpha,\mu}(z,y,x)$ (41)
in linear spaces over any field, whereas setting $x=z$ in (41) gives
$2a_{\alpha,\mu}(x,y,x)=0$, implying that (40) and (41) are equivalent in
linear spaces over fields of characteristic different from $2$. The equality
(41) written in terms of the Hom-algebra product $\mu$ is
$\mu(\mu(x,y),\alpha(z))-\mu(\alpha(x),\mu(y,z))=\mu(\alpha(z),\mu(y,x))-\mu(\mu(z,y),\alpha(x)).$
(42)
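The polarization step above uses only trilinearity of the Hom-associator, so it can be verified for the ordinary associator ($\alpha={\rm id}$) of an arbitrary, generally non-associative, bilinear product. A small sketch (the random structure constants are illustrative, not from the text):

```python
import random

random.seed(2)
n = 3
# random integer structure constants of a (generally non-associative) product
c = [[[random.randint(-3, 3) for _ in range(n)] for _ in range(n)] for _ in range(n)]

def mul(x, y):
    return [sum(c[i][j][k] * x[i] * y[j] for i in range(n) for j in range(n))
            for k in range(n)]

def assoc(x, y, z):  # ordinary associator, alpha = id
    return [a - b for a, b in zip(mul(mul(x, y), z), mul(x, mul(y, z)))]

rnd = lambda: [random.randint(-4, 4) for _ in range(n)]
for _ in range(100):
    x, y, z = rnd(), rnd(), rnd()
    zx = [a - b for a, b in zip(z, x)]
    # a(z-x, y, z-x) == a(z,y,z) + a(x,y,x) - a(x,y,z) - a(z,y,x)
    lhs = assoc(zx, y, zx)
    rhs = [p + q - r - s for p, q, r, s in zip(assoc(z, y, z), assoc(x, y, x),
                                               assoc(x, y, z), assoc(z, y, x))]
    assert lhs == rhs
```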
###### Definition 6.3.
A Hom-algebra $(A,\mu,\alpha)$ is called anti-flexible if
$\displaystyle\mu(\mu(x,y),\alpha(z))-\mu(\mu(z,y),\alpha(x))=\mu(\alpha(x),\mu(y,z))-\mu(\alpha(z),\mu(y,x))$
(43)
for all $x,y,z\in A$.
The equality (43) can be written as
$\displaystyle a_{\alpha,\mu}(x,y,z)=a_{\alpha,\mu}(z,y,x),$ (44)
in terms of the Hom-associator $a_{\alpha,\mu}(x,y,z)$.
Hom-anti-flexible algebras were first introduced in [48] as
$\mathcal{S}_{3}(\tau_{13})$-Hom-associative algebras, the subclass of
$G$-Hom-associative algebras corresponding to the subgroup
$G=\mathcal{S}_{3}(\tau_{13})\subset\mathcal{S}_{3}$ (see Table 1). In view of
(44), anti-flexible algebras have been called Hom-center symmetric in [32].
Note that (44) differs from (41) by absence of the minus sign on the right
hand side, meaning that for any $y$, the bilinear map $a_{\alpha,\mu}(.,y,.)$
is symmetric on Hom-anti-flexible algebras and skew-symmetric on Hom-flexible
algebras. Unlike (39) and (41) in Hom-flexible algebras, in Hom-anti-flexible
algebras, (44) is generally not equivalent to its restriction to $z=x$, which
is trivially satisfied for any $x$ and $y$.
## 7 Nearly Hom-associative algebras, bimodules and matched pairs
###### Definition 7.1.
A nearly Hom-associative algebra is a triple $(A,\ast,\alpha)$, where $A$ is a
linear space endowed with a bilinear product $\ast:A\times A\rightarrow A$ and
$\alpha:A\rightarrow A$ is a linear map such that for all $x,y,z\in A$,
$\displaystyle\alpha(x)\ast(y\ast z)=(z\ast x)\ast\alpha(y).$ (45)
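Concrete examples of (45) can be produced by twisting: if $(A,\ast)$ is commutative and associative and $\alpha$ is an algebra morphism, then $\mu_{\alpha}(x,y)=\alpha(x\ast y)$ satisfies (45), since both sides reduce to $\alpha^{2}(x\ast y\ast z)$. A minimal sketch with componentwise multiplication on $\mathbb{R}^{3}$ and a coordinate permutation as $\alpha$ (an illustrative choice, not from the text):

```python
import random

random.seed(4)
n = 3
perm  = [1, 2, 0]                                    # a coordinate permutation
alpha = lambda x: [x[perm[i]] for i in range(n)]     # an algebra morphism of *
star  = lambda x, y: [a * b for a, b in zip(x, y)]   # componentwise product
mu    = lambda x, y: alpha(star(x, y))               # Yau twist

rnd = lambda: [random.randint(-5, 5) for _ in range(n)]
for _ in range(100):
    x, y, z = rnd(), rnd(), rnd()
    # nearly Hom-associativity (45): alpha(x)*(y*z) == (z*x)*alpha(y)
    assert mu(alpha(x), mu(y, z)) == mu(mu(z, x), alpha(y))
```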
Nearly Hom-associative algebras are Hom-Lie admissible.
###### Proposition 7.2.
Any nearly Hom-associative algebra $(A,\ast,\alpha)$ is Hom-Lie admissible,
that is $(A,[.,.],\alpha)$ is a Hom-Lie algebra, where $[x,y]=x\ast y-y\ast x$
for all $x,y\in A$.
###### Proof.
Let $(A,\ast,\alpha)$ be a nearly Hom-associative algebra. The commutator is
skew-symmetric since $[x,y]=x\ast y-y\ast x=-(y\ast x-x\ast y)=-[y,x].$ For
all $x,y,z\in A$,
$\displaystyle[\alpha(x),[y,z]]+[\alpha(y),[z,x]]+[\alpha(z),[x,y]]$
$\displaystyle=[\alpha(x),y\ast z-z\ast y]+[\alpha(y),z\ast x-x\ast
z]+[\alpha(z),x\ast y-y\ast x]$ $\displaystyle=\alpha(x)\ast(y\ast
z)-\alpha(x)\ast(z\ast y)-(y\ast z)\ast\alpha(x)$ $\displaystyle+(z\ast
y)\ast\alpha(x)+\alpha(y)\ast(z\ast x)-\alpha(y)\ast(x\ast z)$
$\displaystyle-(z\ast x)\ast\alpha(y)+(x\ast
z)\ast\alpha(y)+\alpha(z)\ast(x\ast y)$ $\displaystyle-\alpha(z)\ast(y\ast
x)-(x\ast y)\ast\alpha(z)+(y\ast x)\ast\alpha(z)$
$\displaystyle=\\{\alpha(x)\ast(y\ast z)-(z\ast x)\ast\alpha(y)\\}$
$\displaystyle+\\{(y\ast x)\ast\alpha(z)-\alpha(x)\ast(z\ast y)\\}$
$\displaystyle+\\{\alpha(y)\ast(z\ast x)-(x\ast y)\ast\alpha(z)\\}$
$\displaystyle+\\{\alpha(z)\ast(x\ast y)-(y\ast z)\ast\alpha(x)\\}$
$\displaystyle+\\{(z\ast y)\ast\alpha(x)-\alpha(y)\ast(x\ast z)\\}$
$\displaystyle+\\{(x\ast z)\ast\alpha(y)-\alpha(z)\ast(y\ast x)\\}=0.$
Therefore, $(A,[.,.],\alpha)$ is a Hom-Lie algebra. ∎
Commutative nearly Hom-associative algebras are Hom-anti-flexible.
###### Proposition 7.3.
If $(A,\ast,\alpha)$ is a commutative nearly Hom-associative algebra, then
$(A,\ast,\alpha)$ is a Hom-anti-flexible algebra.
###### Proof.
In a commutative nearly Hom-associative algebra $(A,\ast,\alpha)$,
$\displaystyle a_{\alpha,\ast}(x,y,z)$ $\displaystyle=(x\ast
y)\ast\alpha(z)-\alpha(x)\ast(y\ast z)$ $\displaystyle=\alpha(y)\ast(z\ast
x)-(z\ast x)\ast\alpha(y)$ (nearly Hom-associativity)
$\displaystyle=\alpha(y)\ast(x\ast z)-(x\ast z)\ast\alpha(y)$ (commutativity)
$\displaystyle=(z\ast y)\ast\alpha(x)-\alpha(z)\ast(y\ast x)$ (nearly Hom-
associativity) $\displaystyle=a_{\alpha,\ast}(z,y,x).$
So any commutative nearly Hom-associative algebra is a Hom-anti-flexible
algebra. ∎
###### Definition 7.4.
A bimodule of a nearly Hom-associative algebra $(A,\ast,\alpha)$ is a
quadruple $(l,r,V,\varphi)$, where $V$ is a linear space,
$l,r:A\rightarrow{\rm End}(V)$ are two linear maps and $\varphi\in{\rm
End}(V)$ satisfying the relations, for all $x,y\in A$,
$\displaystyle\varphi\circ l(x)=l({\alpha(x)})\circ\varphi,$
$\displaystyle\varphi\circ r(x)=r({\alpha(x)})\circ\varphi,$ (46a)
$\displaystyle l({\alpha(x)})\circ l(y)$ $\displaystyle=$ $\displaystyle
r({\alpha(y)})\circ r(x),$ (46b) $\displaystyle l({\alpha(x)})\circ r(y)$
$\displaystyle=$ $\displaystyle l({y\ast x})\circ\varphi,$ (46c)
$\displaystyle r({\alpha(x)})\circ l(y)$ $\displaystyle=$ $\displaystyle
r({x\ast y})\circ\varphi.$ (46d)
###### Proposition 7.5.
Consider a nearly Hom-associative algebra $(A,\ast,\alpha)$. Let $l,r:A\rightarrow{\rm
End}(V)$ be two linear maps such that $V$ is a linear space and
$\varphi\in{\rm End}(V)$. The quadruple $(l,r,V,\varphi)$ is a bimodule of
$(A,\ast,\alpha)$ if and only if there is a structure of a nearly Hom-
associative algebra $\star$ on $A\oplus V$ given by, for all $x,y\in A$ and
all $u,v\in V$,
$\displaystyle(\alpha\oplus\varphi)(x+u)$ $\displaystyle=$
$\displaystyle\alpha(x)+\varphi(u),$ (47) $\displaystyle(x+u)\star(y+v)$
$\displaystyle=$ $\displaystyle(x\ast y)+(l(x)v+r(y)u).$ (48)
###### Definition 7.6.
A representation of a Hom-Lie algebra
$(\mathcal{G},[.,.]_{{}_{\mathcal{G}}},\alpha_{{}_{\mathcal{G}}})$ on a linear
space $V$ with respect to $\psi\in{\rm End}(V)$ is a linear map
$\rho_{{}_{\mathcal{G}}}:\mathcal{G}\rightarrow{\rm End}(V)$ obeying for all
$x,y\in\mathcal{G}$,
$\displaystyle\rho_{{}_{\mathcal{G}}}(\alpha_{{}_{\mathcal{G}}}(x))\circ\psi$
$\displaystyle=$ $\displaystyle\psi\circ\rho_{{}_{\mathcal{G}}}(x),$ (49)
$\displaystyle\rho_{{}_{\mathcal{G}}}([x,y]_{{}_{\mathcal{G}}})\circ\psi$
$\displaystyle=$
$\displaystyle\rho_{{}_{\mathcal{G}}}(\alpha_{{}_{\mathcal{G}}}(x))\circ\rho_{{}_{\mathcal{G}}}(y)-\rho_{{}_{\mathcal{G}}}(\alpha_{{}_{\mathcal{G}}}(y))\circ\rho_{{}_{\mathcal{G}}}(x).$
(50)
###### Proposition 7.7.
Let $(A,\cdot,\alpha)$ be a nearly Hom-associative algebra and $V$ be a
finite-dimensional linear space over the field $\mathbb{K}$ such that
$(l,r,\varphi,V)$ is a bimodule of $(A,\cdot,\alpha)$, where
$l,r:A\rightarrow{\rm End}(V)$ are two linear maps and $\varphi\in{\rm
End}(V)$. Then the linear map $l-r:A\rightarrow{\rm End}(V),x\mapsto
l(x)-r(x)$ is a representation of the underlying Hom-Lie algebra
$(\mathcal{G}(A),\alpha)$ associated to the nearly Hom-associative algebra
$(A,\cdot,\alpha)$.
###### Proof.
Let $(A,\cdot,\alpha)$ be a nearly Hom-associative algebra and $V$ a finite-
dimensional linear space over the field $\mathbb{K}$ such that
$(l,r,\varphi,V)$ is a bimodule of $(A,\cdot,\alpha)$, where
$l,r:A\rightarrow{\rm End}(V)$ are two linear maps and $\varphi\in{\rm
End}(V)$. For all $x,y\in A$,
$\displaystyle(l-r)({\alpha(x)})\circ\varphi=l({\alpha(x)})\circ\varphi-r({\alpha(x)})\circ\varphi=\varphi\circ
l(x)-\varphi\circ r(x)=\varphi\circ(l-r)(x),$
$\displaystyle(l-r)({(\alpha(x))})\circ(l-r)(y)-(l-r)({(\alpha(y))})\circ(l-r)(x)$
$\displaystyle\quad=l({\alpha(x)})\circ l(y)-l({\alpha(x)})\circ
r(y)-r({\alpha(x)})\circ l(y)+r({\alpha(x)})\circ r(y)$
$\displaystyle\quad\quad-l({\alpha(y)})\circ l(x)+l({\alpha(y)})\circ
r(x)+r({\alpha(y)})\circ l(x)-r({\alpha(y)})\circ r(x)$
$\displaystyle\quad=\\{l({\alpha(x)})\circ l(y)-r({\alpha(y)})\circ
r(x)\\}-l({\alpha(x)})\circ r(y)-r({\alpha(x)})\circ l(y)$
$\displaystyle\quad\quad+\\{r({\alpha(x)})\circ r(y)-l({\alpha(y)})\circ
l(x)\\}+r({\alpha(y)})\circ l(x)+l({\alpha(y)})\circ r(x)$
$\displaystyle\quad=r({\alpha(y)})\circ l(x)-l({\alpha(x)})\circ
r(y)+l({\alpha(y)})\circ r(x)-r({\alpha(x)})\circ l(y)$
$\displaystyle\quad=r({y\cdot x})\circ\varphi-l({y\cdot
x})\circ\varphi+l({x\cdot y})\circ\varphi-r({x\cdot
y})\circ\varphi=(l-r)({[x,y]})\circ\varphi.$
Therefore, (49) and (50) are satisfied. ∎
###### Definition 7.8.
Let
$\displaystyle(\mathcal{G},[.,.]_{{}_{\mathcal{G}}},\alpha_{{}_{\mathcal{G}}})$
and
$\displaystyle(\mathcal{H},[.,.]_{{}_{\mathcal{H}}},\alpha_{{}_{\mathcal{H}}})$
be two Hom-Lie algebras. Let
$\displaystyle\rho_{{}_{\mathcal{H}}}:\mathcal{H}\rightarrow{\rm
End}(\mathcal{G})$ and
$\displaystyle\mu_{{}_{\mathcal{G}}}:\mathcal{G}\rightarrow{\rm
End}(\mathcal{H})$ be two Hom-Lie algebra representations, and
$\alpha_{{}_{\mathcal{G}}}:\mathcal{G}\rightarrow\mathcal{G}$ and
$\alpha_{{}_{\mathcal{H}}}:\mathcal{H}\rightarrow\mathcal{H}$ two linear maps
such that for all $x,y\in\mathcal{G},a,b\in\mathcal{H},$
$\displaystyle\begin{array}[]{lll}\mu_{{}_{\mathcal{G}}}(\alpha_{{}_{\mathcal{G}}}(x))\left[a,b\right]_{{}_{\mathcal{H}}}&=&\left[\mu_{{}_{\mathcal{G}}}(x)a,\alpha_{{}_{\mathcal{H}}}(b)\right]_{{}_{\mathcal{H}}}+\left[\alpha_{{}_{\mathcal{H}}}(a),\mu_{{}_{\mathcal{G}}}(x)b\right]_{{}_{\mathcal{H}}}\\\
&&-\mu_{{}_{\mathcal{G}}}(\rho_{{}_{\mathcal{H}}}(a)x)(\alpha_{{}_{\mathcal{H}}}(b))+\mu_{{}_{\mathcal{G}}}(\rho_{{}_{\mathcal{H}}}(b)x)(\alpha_{{}_{\mathcal{H}}}(a)),\end{array}$
(51c)
$\displaystyle\begin{array}[]{lll}\rho_{{}_{\mathcal{H}}}(\alpha_{{}_{\mathcal{H}}}(a))\left[x,y\right]_{{}_{\mathcal{G}}}&=&\left[\rho_{{}_{\mathcal{H}}}(a)x,\alpha_{{}_{\mathcal{G}}}(y)\right]_{{}_{\mathcal{G}}}+\left[\alpha_{{}_{\mathcal{G}}}(x),\rho_{{}_{\mathcal{H}}}(a)y\right]_{{}_{\mathcal{G}}}\\\
&&-\rho_{{}_{\mathcal{H}}}(\mu_{{}_{\mathcal{G}}}(x)a)(\alpha_{{}_{\mathcal{G}}}(y))+\rho_{{}_{\mathcal{H}}}(\mu_{{}_{\mathcal{G}}}(y)a)(\alpha_{{}_{\mathcal{G}}}(x)).\end{array}$
(51f)
Then,
$\displaystyle(\mathcal{G},\mathcal{H},\mu,\rho,\alpha_{{}_{\mathcal{G}}},\alpha_{{}_{\mathcal{H}}})$
is called a matched pair of the Hom-Lie algebras $\displaystyle{\mathcal{G}}$
and $\displaystyle\mathcal{H}$, and denoted by
$\displaystyle\mathcal{H}\bowtie_{\mu_{{}_{\mathcal{G}}}}^{\rho_{{}_{\mathcal{H}}}}{\mathcal{G}}.$
In this case,
$\displaystyle(\mathcal{G}\oplus\mathcal{H},[.,.]_{{}_{\mathcal{G}\oplus\mathcal{H}}},\alpha_{{}_{\mathcal{G}}}\oplus\alpha_{{}_{\mathcal{H}}})$
defines a Hom-Lie algebra, where
$\displaystyle[(x+a),(y+b)]_{{}_{\mathcal{G}\oplus\mathcal{H}}}=[x,y]_{{}_{\mathcal{G}}}+\rho_{{}_{\mathcal{H}}}(a)y-\rho_{{}_{\mathcal{H}}}(b)x+[a,b]_{{}_{\mathcal{H}}}+\mu_{{}_{\mathcal{G}}}(x)b-\mu_{{}_{\mathcal{G}}}(y)a.$
(52)
###### Theorem 7.9.
Let $(A,\cdot,\alpha_{A})$ and $(B,\circ,\alpha_{B})$ be two nearly Hom-
associative algebras. Suppose there are linear maps
$l_{A},r_{A}:A\rightarrow{\rm End}(B)$ and $l_{B},r_{B}:B\rightarrow{\rm
End}(A)$ such that $(l_{A},r_{A},B,\alpha_{B})$ and
$(l_{B},r_{B},A,\alpha_{A})$ are bimodules of the nearly Hom-associative
algebras $(A,\cdot,\alpha_{A})$ and $(B,\circ,\alpha_{B})$, respectively and
satisfying the following conditions for all $x,y\in A$ and $a,b\in B:$
$\displaystyle\alpha_{A}(x)\cdot(r_{B}(a)y)+r_{B}(l_{A}(y)a)\alpha_{A}(x)-(l_{B}(a)x)\cdot\alpha_{A}(y)-l_{B}(r_{A}(x)a)\alpha_{A}(y)=0,$
(53a)
$\displaystyle\alpha_{A}(x)\cdot(l_{B}(a)y)+r_{B}(r_{A}(y)a)\alpha_{A}(x)-r_{B}(\alpha_{B}(a))(y\cdot
x)=0,$ (53b) $\displaystyle l_{B}(\alpha_{B}(a))(x\cdot
y)-(r_{B}(a)y)\cdot\alpha_{A}(x)-l_{B}(l_{A}(y)a)\alpha_{A}(x)=0,$ (53c)
$\displaystyle\alpha_{B}(a)\circ(r_{A}(x)b)+r_{A}(l_{B}(b)x)\alpha_{B}(a)-(l_{A}(x)a)\circ\alpha_{B}(b)-l_{A}(r_{B}(a)x)\alpha_{B}(b)=0,$
(53d)
$\displaystyle\alpha_{B}(a)\circ(l_{A}(x)b)+r_{A}(r_{B}(b)x)\alpha_{B}(a)-r_{A}(\alpha_{A}(x))(b\circ
a)=0,$ (53e) $\displaystyle l_{A}(\alpha_{A}(x))(b\circ
a)-(r_{A}(x)a)\circ\alpha_{B}(b)-l_{A}(l_{B}(a)x)\alpha_{B}(b)=0.$ (53f)
Then, there is a bilinear product defined on $A\oplus B$ for all $x,y\in A$,
and all $a,b\in B$, by
$\displaystyle(x+a)\ast(y+b)=(x\cdot y+l_{B}(a)y+r_{B}(b)x)+(a\circ
b+l_{A}(x)b+r_{A}(y)a)$ (54)
such that $(A\oplus B,\ast,\alpha_{A}\oplus\alpha_{B})$ is a nearly Hom-
associative algebra.
###### Proof.
Let $(A,\cdot,\alpha_{A})$, $(B,\circ,\alpha_{B})$ be two nearly Hom-
associative algebras, $(l_{A},r_{A},B,\alpha_{B})$ a bimodule of
$(A,\cdot,\alpha_{A})$ and $(l_{B},r_{B},A,\alpha_{A})$ a bimodule of
$(B,\circ,\alpha_{B})$. For all $x,y\in A$ and all $a,b\in B$,
$\displaystyle(\alpha_{A}(x)+\alpha_{B}(a))\ast((y+b)\ast(z+c))$
$\displaystyle\quad=\\{(\alpha_{A}(x))\cdot(l_{B}(b)z)+r_{B}(r_{A}(z)b)\alpha_{A}(x)\\}$
$\displaystyle\quad\quad+\\{(\alpha_{A}(x))\cdot(r_{B}(c)y)+r_{B}(l_{A}(y)c)\alpha_{A}(x)\\}$
$\displaystyle\quad\quad+(\alpha_{A}(x))\cdot(y\cdot
z)+l_{B}(\alpha_{B}(a))(y\cdot z)+l_{B}(\alpha_{B}(a))(l_{B}(b)z)$
$\displaystyle\quad\quad+l_{B}(\alpha_{B}(a))(r_{B}(c)y)+r_{B}(b\circ
c)(\alpha_{A}(x))$
$\displaystyle\quad\quad+\\{(\alpha_{B}(a))\circ(l_{A}(y)c)+r_{A}(r_{B}(c)y)\alpha_{B}(a)\\}$
$\displaystyle\quad\quad+\\{(\alpha_{B}(a))\circ(r_{A}(z)b)+r_{A}(l_{B}(b)z)\alpha_{B}(a)\\}$
$\displaystyle\quad\quad+(\alpha_{B}(a))\circ(b\circ c)+r_{A}(y\cdot
z)\alpha_{B}(a)+l_{A}(\alpha_{A}(x))(b\circ c)$
$\displaystyle\quad\quad+l_{A}(\alpha_{A}(x))(l_{A}(y)c)+l_{A}(\alpha_{A}(x))(r_{A}(z)b);$
$\displaystyle((z+c)\ast(x+a))\ast(\alpha_{A}(y)+\alpha_{B}(b))$
$\displaystyle\quad=\\{(l_{B}(c)x)\cdot(\alpha_{A}(y))+l_{B}(r_{A}(x)c)\alpha_{A}(y)\\}$
$\displaystyle\quad\quad+\\{l_{B}(l_{A}(z)a)\alpha_{A}(y)+(l_{B}(c)x)\cdot\alpha_{A}(y)\\}$
$\displaystyle\quad\quad+(z\cdot x)\cdot(\alpha_{A}(y))+l_{B}(c\circ
a)(\alpha_{A}(y))+r_{B}(\alpha_{B}(b))(z\cdot x)$
$\displaystyle\quad\quad+r_{B}(\alpha_{B}(b))(l_{B}(c)x)+r_{B}(\alpha_{B}(b))(r_{B}(a)z)$
$\displaystyle\quad\quad+\\{(l_{A}(z)a)\circ(\alpha_{B}(b))+l_{A}(r_{B}(a)z)\alpha_{B}(b)\\}$
$\displaystyle\quad\quad+\\{(r_{A}(x)c)\circ(\alpha_{B}(b))+l_{A}(l_{B}(c)x)\alpha_{B}(b)\\}$
$\displaystyle\quad\quad+(c\circ a)\circ(\alpha_{B}(b))+l_{A}(z\cdot
x)(\alpha_{B}(b))+r_{A}(\alpha_{A}(y))(c\circ a)$
$\displaystyle\quad\quad+r_{A}(\alpha_{A}(y))(l_{A}(z)a)+r_{A}(\alpha_{A}(y))(r_{A}(x)c)$
Using (53a) - (53f) and the fact that $(l_{A},r_{A},B,\alpha_{B})$ and
$(l_{B},r_{B},A,\alpha_{A})$ are bimodules of the nearly Hom-associative
algebras $(A,\cdot,\alpha_{A})$ and $(B,\circ,\alpha_{B})$, respectively, we
obtain that $(A\oplus B,\ast,\alpha_{A}\oplus\alpha_{B})$ is a nearly
Hom-associative algebra. ∎
###### Definition 7.10.
A matched pair of the nearly Hom-associative algebras $(A,\cdot,\alpha_{A})$
and $(B,\circ,\alpha_{B})$ is the octuple
$(A,B,l_{A},r_{A},\alpha_{B},l_{B},r_{B},\alpha_{A})$, where
$l_{A},r_{A}:A\rightarrow{\rm End}(B)$ and $l_{B},r_{B}:B\rightarrow{\rm
End}(A)$ are linear maps such that $(l_{A},r_{A},B,\alpha_{B})$ and
$(l_{B},r_{B},A,\alpha_{A})$ are bimodules of the nearly Hom-associative
algebras $(A,\cdot,\alpha_{A})$ and $(B,\circ,\alpha_{B})$, respectively, and
satisfying (53a) - (53f).
###### Corollary 7.11.
Let $(A,B,l_{A},r_{A},\alpha_{B},l_{B},r_{B},\alpha_{A})$ be a matched pair of
the nearly Hom-associative algebras $(A,\cdot,\alpha_{A})$ and
$(B,\circ,\alpha_{B}).$ Then,
$(\mathcal{G}(A),\mathcal{G}(B),l_{A}-r_{A},l_{B}-r_{B},\alpha_{A},\alpha_{B})$
is a matched pair of the underlying Hom-Lie algebras $\mathcal{G}(A)$ and
$\mathcal{G}(B)$ of the nearly Hom-associative algebras $(A,\cdot,\alpha_{A})$
and $(B,\circ,\alpha_{B})$.
###### Proof.
Let $(A,B,l_{A},r_{A},\alpha_{B},l_{B},r_{B},\alpha_{A})$ be a matched pair of
nearly Hom-associative algebras $(A,\cdot,\alpha_{A})$ and
$(B,\circ,\alpha_{B})$. In view of Proposition 7.7, the linear maps
$l_{A}-r_{A}:A\longrightarrow{\rm End}(B)$ and
$l_{B}-r_{B}:B\longrightarrow{\rm End}(A)$ are representations of the
underlying Hom-Lie algebras $(\mathcal{G}(A),\alpha_{A})$ and
$(\mathcal{G}(B),\alpha_{B})$, respectively. Therefore, (51c) is equivalent to
(53a) - (53c) and similarly, (51f) is equivalent to (53d) - (53f). ∎
# NeMo: Neural Mesh Models of Contrastive Features for Robust 3D Pose
Estimation
Angtian Wang, Adam Kortylewski, Alan Yuille
Department of Computer Science
Johns Hopkins University
Baltimore, MD 21218, USA
{angtianwang, akortyl1<EMAIL_ADDRESS>
###### Abstract
3D pose estimation is a challenging but important task in computer vision. In
this work, we show that standard deep learning approaches to 3D pose
estimation are not robust when objects are partially occluded or viewed from a
previously unseen pose. Inspired by the robustness of generative vision models
to partial occlusion, we propose to integrate deep neural networks with 3D
generative representations of objects into a unified neural architecture that
we term NeMo. In particular, NeMo learns a generative model of neural feature
activations at each vertex on a dense 3D mesh. Using differentiable rendering
we estimate the 3D object pose by minimizing the reconstruction error between
NeMo and the feature representation of the target image. To avoid local optima
in the reconstruction loss, we train the feature extractor to maximize the
distance between the individual feature representations on the mesh using
contrastive learning. Our extensive experiments on PASCAL3D+, occluded
PASCAL3D+ and ObjectNet3D show that NeMo is much more robust to partial
occlusion and unseen pose compared to standard deep networks, while retaining
competitive performance on regular data. Interestingly, our experiments also
show that NeMo performs reasonably well even when the mesh representation only
crudely approximates the true object geometry with a cuboid, hence revealing
that the detailed 3D geometry is not needed for accurate 3D pose estimation.
The code is publicly available at https://github.com/Angtian/NeMo.
## 1 Introduction
Object pose estimation is a fundamentally important task in computer vision
with a multitude of real-world applications, e.g. in self-driving cars, or
partially autonomous surgical systems. Advances in the architecture design of
deep convolutional neural networks (DCNNs) Tulsiani & Malik (2015); Su et al.
(2015); Mousavian et al. (2017); Zhou et al. (2018) increased the performance
of computer vision systems at 3D pose estimation enormously. However, our
experiments show that current 3D pose estimation approaches are not robust to
partial occlusion or when objects are viewed from a previously unseen pose.
This lack of robustness can have serious consequences in real-world
applications and therefore needs to be addressed by the research community.
In general, recent works follow either of two approaches for object pose
estimation: Keypoint-based approaches detect a sparse set of keypoints and
subsequently align a 3D object representation to the detection result.
However, due to the sparsity of the keypoints, these approaches are highly
vulnerable when the keypoint detection result is affected by adverse viewing
conditions, such as partial occlusion. On the other hand, rendering-based
approaches utilize a generative model, that is built on a dense 3D mesh
representation of an object. They estimate the object pose by reconstructing
the input image in a render-and-compare manner (Figure 1). While rendering-
based approaches can be more robust to partial occlusion Egger et al. (2018),
their core limitation is that they model objects in terms of image
intensities. Therefore, they pay too much attention to object details that are
not relevant for the 3D pose estimation task. This makes them difficult to
optimize Blanz & Vetter (2003); Schönborn et al. (2017), and also requires a
detailed mesh representation for every shape variant of an object class (e.g.
they need several types of sedan meshes instead of using one prototypical type
of sedan).
In this work, we introduce NeMo, a rendering-based approach to 3D pose
estimation that is highly robust to partial occlusion, while also being able
to generalize to previously unseen views. Our key idea is to learn a
generative model of an object category in terms of neural feature activations,
instead of image intensities (Figure 1). In particular, NeMo is composed of a
prototypical mesh representation of the object category and feature
representations at each vertex of the mesh. The feature representations are
learned to be invariant to instance specific details (such as shape and color
variations) that are not relevant for the 3D pose estimation task.
Specifically, we use contrastive learning He et al. (2020); Wu et al. (2018);
Bai et al. (2020) to ensure that the extracted features of an object are
distinct from each other (e.g. the features of the front tire of a car are
different from those of the back tire), while also being distinct from non-
object features in the background. Furthermore, we train a generative model of
the feature activations at every vertex of the mesh representation. During
inference, NeMo estimates the object pose by reconstructing a target feature
map using render-and-compare and gradient-based optimization w.r.t. the
3D object pose parameters.
We evaluate NeMo at 3D pose estimation on the PASCAL3D+ Xiang et al. (2014)
and the ObjectNet3D Xiang et al. (2016) dataset. Both datasets contain a
variety of rigid objects and their corresponding 3D CAD models. Our
experimental results show that NeMo outperforms popular approaches such as
Starmap Zhou et al. (2018) at 3D pose estimation by a wide margin under
partial occlusion, and performs comparably when the objects are not occluded.
Moreover, NeMo is exceptionally robust when objects are seen from a viewpoint
that is not present in the training data. Interestingly, we also find that the
mesh representation in NeMo can simply approximate the true object geometry
with a cuboid, and still perform very well. Our main contributions are:
1.
We propose a 3D neural mesh model of objects that is generative in terms of
contrastive neural network features. This representation combines a
prototypical geometric representation of the object category with a generative
model of neural network features that are invariant to irrelevant object
details.
2.
We demonstrate that standard deep learning approaches to 3D pose estimation
are highly sensitive to out-of-distribution data including partial occlusions
and unseen poses. In contrast, NeMo performs 3D pose estimation with
exceptional robustness.
3.
In contrast to other rendering-based approaches that require instance-specific
mesh representations of the target objects, we show that NeMo achieves a
highly competitive 3D pose estimation performance even with a very crude
prototypical approximation of the object geometry using a cuboid.
Figure 1: Traditional render-and-compare approaches render RGB images and make
pixel-level comparisons. These are difficult to optimize due to the many local
optima in the pixel-wise reconstruction loss. In contrast, NeMo is a Neural
Mesh Model that renders feature maps and compares them with feature maps
obtained via CNN backbone. The invariance of the neural features to nuisance
variables, such as shape and color variations, enables a robust 3D pose
estimation with simple gradient-descent optimization of the neural
reconstruction loss.
## 2 Related Work
Category-Level Object Pose Estimation. Category-Level object pose estimation
has been well explored by the research community. A classical approach as
proposed by Tulsiani & Malik (2015) and Mousavian et al. (2017) was to
formulate object pose estimation as a classification problem. Another common
category-level object pose estimation approach involves a two-step process
Szeto & Corso (2017); Pavlakos et al. (2017): First, semantic keypoints are
detected independently and subsequently a Perspective-n-Point problem is
solved to find the optimal 3D pose of an object mesh Lu et al. (2000); Lepetit
et al. (2009). Zhou et al. (2018) further improved this approach by utilizing
depth information. Recent work Wang et al. (2019); Chen et al. (2020)
introduced render-and-compare for category-level pose estimation. However,
both approaches used pixel-level image synthesis and required detailed mesh
models during training. In contrast, NeMo performs render-and-compare on the
level of contrastive features, which are invariant to intra-category
nuisances, such as shape and color variations. This enables NeMo to achieve
accurate 3D pose estimation results even with a crude prototypical category-
level mesh representation.
Pose Estimation under Partial Occlusion. Keypoint-based pose estimation
methods are sensitive to outliers, which can be caused by partial occlusion
Pavlakos et al. (2017); Sundermeyer et al. (2018). Some rendering-based
approaches achieve satisfactory results on instance-level pose estimation
under partial occlusion Song et al. (2020); Peng et al. (2019); Zakharov et
al. (2019); Li et al. (2018). However, these approaches render RGB images or
use instance-level constraints, e.g. pixel-level voting, to estimate object
pose. Therefore, these approaches are not suited for category-level pose
estimation. To the best of our knowledge, NeMo is the first approach that
performs category-level pose estimation robustly under partial occlusion.
Contrastive Feature Learning. Contrastive learning is widely used in deep
learning research. Hadsell et al. (2006) proposed an intuitive tuple loss,
which was later extended to triplets and N-Pair tuples Schroff et al. (2015);
Sohn (2016). Recent contrastive learning approaches showed high potential in
unsupervised learning Wu et al. (2018); He et al. (2020). Oord et al. (2018);
Han et al. (2019) demonstrated the effectiveness of feature-level contrastive
losses in representation learning. Bai et al. (2020) proposed a keypoint
detection framework by optimizing the feature representations of keypoints
with a contrastive loss. In this work, we use contrastive feature learning to
encourage the feature extraction backbone to extract locally distinct
features. This, in turn, enables 3D pose estimation by simply optimizing the
neural reconstruction loss with gradient descent.
Robust Vision through Analysis-by-Synthesis. In a broader context, our work
relates to a line of work in the computer vision literature which demonstrates
that the explicit modeling of object structure significantly enhances the
robustness of computer vision models, e.g. at 3D pose estimation Zeeshan Zia
et al. (2013), face reconstruction Egger et al. (2018) and human detection
Girshick et al. (2011) under occlusion. More specifically, our work builds on
a recent line of work that introduced a neural analysis-by-synthesis approach
to vision Kortylewski et al. (2020b) and demonstrated its effectiveness in
occlusion-robust image classification Kortylewski et al. (2020a; c) and object
detection Wang et al. (2020). Our work significantly extends neural analysis-
by-synthesis to include an explicit 3D object representation, instead of 2D
template-like object representations. This enables our model to achieve state-
of-the-art robustness at pose estimation through neural render-and-compare.
## 3 NeMo: A 3D generative model of neural features
We denote a feature representation of an input image $I$ as
$\Phi(I)=F^{l}\in\mathbb{R}^{H\times W\times D}$, where $F^{l}$ is the output
of layer $l$ of a DCNN $\Phi$, with $D$ being the number of channels in layer
$l$. $f^{l}_{i}\in\mathbb{R}^{D}$ is a feature vector in $F^{l}$ at position
$i$ on the 2D lattice $\mathcal{P}$ of the feature map. In the remainder of
this section we omit the superscript $l$ for notational simplicity because it
is fixed a-priori in our model.
### 3.1 Neural Rendering of Feature Maps
Similar to other graphics-based generative models, such as e.g. 3D morphable
models Blanz & Vetter (1999); Egger et al. (2018), our model builds on a 3D
mesh representation that is composed of a set of 3D vertices
$\Gamma=\\{r\in\mathbb{R}^{3}|r=1,\dots,R\\}$. For now, we assume the object
mesh to be given at training time but we will relax this assumption in later
sections. Different from standard graphics-based generative models, we do not
store RGB values at each mesh vertex $r$ but instead store feature vectors
$\Theta=\\{\theta_{r}\in\mathbb{R}^{D}|r=1,\dots,R\\}$. Using standard
rendering techniques, we can use this 3D neural mesh model
$\mathfrak{N}=\\{\Gamma,\Theta\\}$ to render feature maps:
$\bar{F}(m)=\Re(\mathfrak{N},m)\in\mathbb{R}^{H\times W\times D},$ (1)
where $m$ are the camera parameters for projecting the neural mesh
representation (Figure 2).
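To make the rendering step of Eq. (1) concrete, here is a minimal pure-Python sketch under simplifying assumptions of our own: an orthographic projection, a single azimuth rotation, nearest-pixel scattering, and no z-buffering. The function and parameter names are illustrative, not from the paper.

```python
import math

def render_feature_map(vertices, thetas, azimuth, H, W, scale=8.0):
    """Sketch of Eq. (1): rotate the mesh vertices by an azimuth angle,
    project them orthographically, and scatter each vertex feature theta_r
    into an H x W x D feature map (zeros everywhere else)."""
    D = len(thetas[0])
    fmap = [[[0.0] * D for _ in range(W)] for _ in range(H)]
    c, s = math.cos(azimuth), math.sin(azimuth)
    for (x, y, z), theta in zip(vertices, thetas):
        xr = c * x + s * z           # rotate around the vertical axis
        u = int(W / 2 + scale * xr)  # drop depth: orthographic projection
        v = int(H / 2 - scale * y)
        if 0 <= u < W and 0 <= v < H:
            fmap[v][u] = list(theta)  # write the vertex feature at its pixel
    return fmap
```

A real implementation would rasterize mesh faces with a differentiable renderer; this sketch only conveys how per-vertex features, rather than RGB values, end up on the image lattice.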
### 3.2 Neural Mesh Models
Neural Mesh Models are probabilistic generative models of neural feature
activations. Hence, our goal is to learn a generative model
$p(F|\mathfrak{N}_{y})$ of the real-valued feature activations $F$ of an
object class $y$ by leveraging a 3D neural mesh representation
$\mathfrak{N}_{y}$. Assuming that the 3D pose $m$ of the object in the input
image is known, we define the likelihood of the feature representation $F$ as:
$p(F|\mathfrak{N}_{y},m,B)=\prod_{i\in\mathcal{FG}}p(f_{i}|\mathfrak{N}_{y},m)\prod_{i^{\prime}\in\mathcal{BG}}p(f_{i^{\prime}}|B).$
(2)
The foreground $\mathcal{FG}$ is the set of all positions on the 2D lattice
$\mathcal{P}$ of the feature map $F$ that are covered by the rendered neural
mesh model. We compute $\mathcal{FG}$ by projecting the 3D vertices of the
mesh model $\Gamma_{y}$ into the image using the ground truth camera pose $m$
to obtain the 2D locations of the visible vertices in the image
$\mathcal{FG}=\\{s_{t}\in\mathbb{R}^{2}|t=1,\dots,T\\}$. We define foreground
feature likelihoods to be Gaussian distributed:
$\displaystyle p(f_{i}|\mathfrak{N}_{y},m)$
$\displaystyle=\frac{1}{\sigma_{r}\sqrt{2\pi}}\exp\left(-\frac{1}{2\sigma_{r}^{2}}\lVert
f_{i}-\theta_{r}\rVert^{2}\right).$ (3)
Note that the correspondence between the feature vector $f_{i}$ in the feature
map $F$ and the vector $\theta_{r}$ on the neural mesh model is given by the
2D projection of $\mathfrak{N}_{y}$ with camera parameters $m$. Those features
that are not covered by the neural mesh model
$\mathcal{BG}=\mathcal{P}\setminus\\{\mathcal{FG}\\}$, i.e. are located in the
background, are modeled by a Gaussian likelihood:
$\displaystyle p(f_{i^{\prime}}|B)$
$\displaystyle=\frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{1}{2\sigma^{2}}\lVert
f_{i^{\prime}}-\beta\rVert^{2}\right),$ (4)
with mixture parameters $B=\\{\beta,\sigma\\}$.
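Both likelihoods (Eqs. 3-4) are isotropic Gaussians that differ only in their mean: a mesh feature $\theta_{r}$ for foreground positions and the clutter mean $\beta$ for background positions. A minimal sketch of the per-position log-likelihood, following the paper's single-term normalizer:

```python
import math

def gaussian_loglik(f, mu, sigma=1.0):
    """Log of Eqs. (3)-(4): an isotropic Gaussian over the feature vector f
    with mean mu, where mu is a vertex feature theta_r for foreground pixels
    or the clutter mean beta for background pixels."""
    d2 = sum((fi - mi) ** 2 for fi, mi in zip(f, mu))
    return math.log(1.0 / (sigma * math.sqrt(2.0 * math.pi))) - d2 / (2.0 * sigma ** 2)
```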
### 3.3 Training using Maximum Likelihood and Contrastive Learning
During training we want to optimize two objectives: 1) The parameters of the
generative model as defined in Equation 2 should be optimized to achieve
maximum likelihood on the training data. 2) The backbone $\Phi$ used for
feature extraction should be optimized to make the individual feature vectors
Maximum likelihood estimation of the generative model. We optimize the
parameters of the generative model to minimize the negative log-likelihood of
our model (Equation 2):
$\displaystyle\mathcal{L}_{ML}(F,\mathfrak{N}_{y},m,B)=$ $\displaystyle-\ln
p(F|\mathfrak{N}_{y},m,B)$ (5) $\displaystyle=$
$\displaystyle-\sum_{i\in\mathcal{FG}}\left[\ln\left(\frac{1}{\sigma_{r}\sqrt{2\pi}}\right)-\frac{1}{2\sigma_{r}^{2}}\lVert
f_{i}-\theta_{r}\rVert^{2}\right]$ (6)
$\displaystyle-\sum_{i^{\prime}\in\mathcal{BG}}\left[\ln\left(\frac{1}{\sigma\sqrt{2\pi}}\right)-\frac{1}{2\sigma^{2}}\lVert
f_{i^{\prime}}-\beta\rVert^{2}\right]$ (7)
If we constrain the variances such that
$\\{\sigma^{2}=\sigma_{r}^{2}=1|\forall r\\}$ then the maximum likelihood loss
reduces, up to an additive constant, to:
$\displaystyle\mathcal{L}_{ML}(F,\mathfrak{N}_{y},m,B)$
$\displaystyle=C\left(\sum_{i\in\mathcal{FG}}\lVert
f_{i}-\theta_{r}\rVert^{2}+\sum_{i^{\prime}\in\mathcal{BG}}\lVert
f_{i^{\prime}}-\beta\rVert^{2}\right),$ (8)
where $C$ is a constant scalar.
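Reading Eq. (8) as a sum of squared reconstruction errors, foreground features against their corresponding mesh features and background features against $\beta$, the unit-variance training objective can be sketched as follows. The data layout (a flattened feature map plus index lists) is our own simplification:

```python
def ml_loss(F, fg_pairs, bg_idx, beta):
    """Unit-variance ML loss of Eq. (8), up to additive constants:
    F        -- flattened feature map, a list of feature vectors
    fg_pairs -- list of (i, theta_r): foreground position i and its mesh feature
    bg_idx   -- positions not covered by the rendered mesh
    beta     -- clutter mean for the background model"""
    sq = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    fg = sum(sq(F[i], theta) for i, theta in fg_pairs)
    bg = sum(sq(F[j], beta) for j in bg_idx)
    return 0.5 * (fg + bg)
```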
Contrastive learning of the feature extractor. The general idea of the
contrastive loss is to train the feature extractor such that the individual
feature vectors on the object are distinct from each other, as well as from
the background:
$\displaystyle\mathcal{L}_{Feature}(F,\mathcal{FG})$
$\displaystyle=-\sum_{i\in\mathcal{FG}}\sum_{i^{\prime}\in\mathcal{FG}\setminus\\{i\\}}\lVert
f_{i}-f_{i^{\prime}}\rVert^{2}$ (9)
$\displaystyle\mathcal{L}_{Back}(F,\mathcal{FG},\mathcal{BG})$
$\displaystyle=-\sum_{i\in\mathcal{FG}}\sum_{j\in\mathcal{BG}}\lVert
f_{i}-f_{j}\rVert^{2}.$ (10)
The contrastive feature loss $\mathcal{L}_{Feature}$ encourages the features
on the object to be distinct from each other (e.g. the feature vectors at the
front tire of a car are different from those of the back tire). The
contrastive background loss $\mathcal{L}_{Back}$ encourages the features on
the object to be distinct from the features in the background. The overall
loss used to train NeMo is:
$\displaystyle\mathcal{L}(F,\mathfrak{N}_{y},m,B)$
$\displaystyle=\mathcal{L}_{ML}(F,\mathfrak{N}_{y},m,B)+\mathcal{L}_{Feature}(F,\mathcal{FG})+\mathcal{L}_{Back}(F,\mathcal{FG},\mathcal{BG})$
(11)
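The two contrastive terms (Eqs. 9-10) are negated pairwise squared distances, so minimizing them pushes foreground features apart from each other and away from background features. A minimal sketch with our own index-list layout:

```python
def contrastive_losses(F, fg_idx, bg_idx):
    """Eqs. (9)-(10): L_Feature separates foreground features from each other,
    L_Back separates them from background features; both are negated so that
    minimizing the total loss maximizes the pairwise distances."""
    sq = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    l_feature = -sum(sq(F[i], F[j]) for i in fg_idx for j in fg_idx if j != i)
    l_back = -sum(sq(F[i], F[j]) for i in fg_idx for j in bg_idx)
    return l_feature, l_back
```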
Figure 2: Overview of pose estimation: For each image, we use the trained CNN
backbone to extract a feature map $F$. Meanwhile, using the trained Neural
Mesh Model and a randomly initialized object pose, we render a feature map
$\bar{F}$. By computing the similarity between $F$ and $\bar{F}$ at each
location, we create a foreground score map, which indicates the object
likelihood at each location. Similarly, we obtain a background score map from
$F$ and the trained clutter model $\beta$. Using these two maps, we perform
occlusion inference to segment the image into foreground and background
regions. Then, we compute the reconstruction loss and optimize the object pose
by minimizing this loss. We also visualize the loss landscape along all three
object pose parameters, and the final pose prediction.
### 3.4 Robust 3D Pose Estimation with Render and Compare
After training the feature extractor and the generative model in NeMo, we
apply the model for estimating the camera pose parameters $m$. In particular,
we aim to optimize the model likelihood from Equation 2 w.r.t. the camera
parameters in a render-and-compare manner. Following related work on robust
inference with generative models Kortylewski (2017); Egger et al. (2018) we
optimize a robust model likelihood:
$\displaystyle
p(F|\mathfrak{N}_{y},m,B,z_{i})=\prod_{i\in\mathcal{FG}}\left[p(f_{i}|\mathfrak{N}_{y},m)p(z_{i}\texttt{=}1)\right]^{z_{i}}\left[p(f_{i}|B)p(z_{i}\texttt{=}0)\right]^{(1-z_{i})}\prod_{i^{\prime}\in\mathcal{BG}}p(f_{i^{\prime}}|B).$
(12)
Here $z_{i}\in\\{0,1\\}$ is a binary variable and $p(z_{i}\texttt{=}1)$ and
$p(z_{i}\texttt{=}0)$ are the prior probabilities of the respective values.
The variable $z_{i}$ allows the background model $p(f_{i}|B)$ to explain
those locations in the feature map $F$ that lie in the foreground region
$\mathcal{FG}$ but which the foreground model $p(f_{i}|\mathfrak{N}_{y},m)$
cannot explain well. A primary purpose of this mechanism is to make the cost
function robust to partial occlusion. Figure 2 illustrates the inference
process. Given an initial camera pose estimate, we use the Neural Mesh Model
to render a feature map $\bar{F}$ and evaluate the reconstruction loss in the
foreground region $\mathcal{FG}$ (foreground score map), as well as the
reconstruction error when using the background model only (background score
map). Pixel-wise comparison of the foreground and background scores yields
the occlusion map $\mathcal{Z}=\\{z_{i}\in\\{0,1\\}|\forall
i\in\mathcal{P}\\}$, which indicates whether each feature vector is explained
by the foreground or the background model.
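The per-pixel occlusion inference described above can be sketched as a simple score comparison. This is our own formulation of the hard assignment, with the priors on $z_{i}$ assumed to be folded into the scores:

```python
def occlusion_map(fg_score, bg_score, fg_mask):
    """Hard assignment of the binary variables z_i: inside the rendered
    foreground region, a pixel is explained by the object model (z_i = 1)
    when its foreground score beats the background score, otherwise it is
    treated as occluded (z_i = 0); pixels outside the region stay 0."""
    H, W = len(fg_score), len(fg_score[0])
    return [[1 if fg_mask[v][u] and fg_score[v][u] > bg_score[v][u] else 0
             for u in range(W)] for v in range(H)]
```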
A fundamental benefit of our Neural Mesh Models is that they are generative
on the level of neural feature activations. This makes the overall
reconstruction loss very smooth compared to related works that are generative
on the pixel level. Therefore, NeMo can be optimized w.r.t. the pose
parameters with standard stochastic gradient descent. We visualize the loss as
a function of the individual pose parameters in Figure 2. Note that the losses
are generally very smooth and contain one clear global optimum. This is in
stark contrast to the optimization of classic generative models at the level
of RGB pixels, which often requires complex hand designed initialization and
optimization procedures to avoid the many local optima of the reconstruction
loss Blanz & Vetter (2003); Schönborn et al. (2017).
## 4 Experiment
We first describe the experimental setup in Section 4.1. Subsequently, we
study the performance of NeMo at 3D pose estimation in Section 4.2 and study
the effect of crudely approximating the object geometry within NeMo, using
either a single 3D cuboid, where one cuboid represents all objects in a
category, or multiple 3D cuboids, where each cuboid represents only one
subtype of objects in a category. We ablate the important modules of our
model in Section 4.4.
(a) Aeroplane, L0
(b) Bus, L1
(c) Sofa, L2
(d) Car, L3
Figure 3: Qualitative results of NeMo on PASCAL3D+ (L0) and occluded PASCAL3D+
(L1 & L2 & L3) for different categories under different occlusion levels. For
each example, we show four subfigures. Top-left: the input image; Top-right: A
mesh superimposed on the input image in the predicted 3D pose. Bottom-left:
The occluder localization result, where yellow is background, green is the
non-occluded area of the object and red is the occluded area as predicted by
NeMo. Bottom-right: The loss landscape for each individual camera parameter.
The colored vertical lines indicate the final prediction, and the
ground-truth parameter is at the center of the x-axis.
### 4.1 Experimental Setup
Evaluation. The task of 3D object pose estimation involves the prediction of
three rotation parameters (azimuth, elevation, in-plane rotation) of an object
relative to the camera. In our evaluation, we follow the protocol proposed
in related work Zhou et al. (2018) to measure the pose estimation error
between the predicted rotation matrix and the ground-truth rotation matrix:
$\Delta\left(R_{pred},R_{gt}\right)=\frac{\left\|\operatorname{logm}\left(R_{pred}^{T}R_{gt}\right)\right\|_{F}}{\sqrt{2}}$.
We report two commonly used evaluation metrics: the median of the rotation
error and the percentage of predicted poses within a given accuracy threshold.
Specifically, we use the thresholds $\frac{\pi}{6}$ and $\frac{\pi}{18}$.
Following Zhou et al. (2018), we assume the centers and scales of the objects
are given in all experiments.
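A minimal implementation of this error metric might look as follows, assuming SciPy's matrix logarithm; the helper `rot_z` is only for illustration and not part of the evaluation protocol:

```python
import numpy as np
from scipy.linalg import logm

def rotation_error(R_pred, R_gt):
    # geodesic distance on SO(3): ||logm(R_pred^T R_gt)||_F / sqrt(2),
    # which equals the rotation angle between the two matrices
    log_rel = logm(R_pred.T @ R_gt)
    return np.linalg.norm(np.real(log_rel), ord="fro") / np.sqrt(2)

def accuracy_at(errors, threshold):
    # fraction of predictions with rotation error below the threshold,
    # e.g. threshold = pi/6 or pi/18 as in the evaluation above
    return float(np.mean(np.asarray(errors) < threshold))

def rot_z(theta):
    # rotation by `theta` around the z-axis (illustration only)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
```

For a single-axis rotation by angle $\theta$, the metric reduces to $\theta$ itself, which makes the thresholds directly interpretable as angular errors.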
Datasets. We evaluate NeMo on both the PASCAL3D+ dataset Xiang et al. (2014)
and the occluded PASCAL3D+ dataset Wang et al. (2020). PASCAL3D+ contains 12
man-made object categories with 3D pose annotations and 3D meshes for each
category. We follow Wang et al. (2020) and Bai et al. (2020) to split
PASCAL3D+ into a training set with 11045 images and a validation set with
10812 images. The occluded PASCAL3D+ dataset is a benchmark to evaluate
robustness under occlusion. This dataset simulates realistic man-made
occlusion by artificially superimposing occluders collected from the MS-COCO
dataset Lin et al. (2014) on objects in PASCAL3D+. The dataset contains all 12
object classes from PASCAL3D+ with three levels of occlusion, where L1:
20-40%, L2: 40-60%, and L3: 60-80% of the object area is occluded.
We further test NeMo on the ObjectNet3D dataset Xiang et al. (2016), which is
also a category-level 3D pose estimation benchmark. ObjectNet3D contains 100
different categories with 3D meshes, with 17101 training samples and 19604
testing samples in total, including 3556 occluded or truncated testing
samples. Following Zhou et al. (2018), we report pose estimation results on 18
categories. Note that, different from StarMap, we use all images during
evaluation, including occluded or truncated samples.
Training Setup. In the training process, we use the 3D meshes (see Section 4.2
for experiments without the mesh geometry), the locations and scales of
objects, and the 3D poses. We use Blender Community (2018) to reduce the
resolution of the meshes, because the meshes provided in PASCAL3D+ have a very
high number of vertices. In order to balance performance and computational
cost, in particular the cost of the rendering process, we limit the size of
the feature map produced by the backbone to $\frac{1}{8}$ of the input image.
To achieve this, we use a ResNet50 with two additional upsampling layers as
our backbone. We train a backbone for each category separately and learn a
Neural Mesh Model for each subtype in a category. We follow the hyperparameter
settings from Bai et al. (2020) for the contrastive loss. We train NeMo for
800 epochs with a batch size of 108, which takes around 3 to 5 hours per
category using 6 NVIDIA RTX Titan GPUs.
Baselines. We compare our model to StarMap Zhou et al. (2018) using their
official implementation and training setup. Following common practice, we also
evaluate a popular baseline that formulates pose estimation as a
classification problem. In particular, we evaluate the performance of a deep
neural network classifier that uses the same backbone as NeMo. We train a
category-specific ResNet50 (Res50-Specific), which formulates the pose
estimation in each category as an individual classification problem.
Furthermore, we train a non-specific ResNet50 (Res50-General), which performs
pose estimation for all categories in a single classification task. We report
the results of both architectures using the implementation provided by Zhou et
al. (2018).
Table 1: Pose estimation results on PASCAL3D+ and the occluded PASCAL3D+ dataset. Occlusion level L0 contains the original images from PASCAL3D+, while occlusion levels L1 to L3 are the occluded PASCAL3D+ images with increasing occlusion ratio. We evaluate both the baselines and NeMo using Accuracy (percentage, higher better) and Median Error (degree, lower better). Note that NeMo is exceptionally robust to partial occlusion. Evaluation Metric | $ACC_{\frac{\pi}{6}}\uparrow$ | $ACC_{\frac{\pi}{18}}\uparrow$ | MedErr $\downarrow$
---|---|---|---
Occlusion Level | L0 | L1 | L2 | L3 | L0 | L1 | L2 | L3 | L0 | L1 | L2 | L3
Res50-General | 88.1 | 70.4 | 52.8 | 37.8 | 44.6 | 25.3 | 14.5 | 6.7 | 11.7 | 17.9 | 30.4 | 46.4
Res50-Specific | 87.6 | 73.2 | 58.4 | 43.1 | 43.9 | 28.1 | 18.6 | 9.9 | 11.8 | 17.3 | 26.1 | 44.0
StarMap | 89.4 | 71.1 | 47.2 | 22.9 | 59.5 | 34.4 | 13.9 | 3.7 | 9.0 | 17.6 | 34.1 | 63.0
NeMo | 84.1 | 73.1 | 59.9 | 41.3 | 60.4 | 45.1 | 30.2 | 14.5 | 9.3 | 15.6 | 24.1 | 41.8
NeMo-MultiCuboid | 86.7 | 77.2 | 65.2 | 47.1 | 63.2 | 49.9 | 34.5 | 17.8 | 8.2 | 13.0 | 20.2 | 36.1
NeMo-SingleCuboid | 86.1 | 76.0 | 63.9 | 46.8 | 61.0 | 46.3 | 32.0 | 17.1 | 8.8 | 13.6 | 20.9 | 36.5
Inference via Feature-level Rendering. We implement the NeMo inference
pipeline (see Section 3.4) using PyTorch3D Ravi et al. (2020). Specifically,
we render the Neural Mesh Models into a feature map $\bar{F}$ using the
feature representations $\Theta$ stored at each mesh vertex. We estimate the
object pose by minimizing the reconstruction loss introduced in Equation 12.
To initialize the pose optimization, we uniformly sample 144 different poses
(12 azimuth angles, 4 elevation angles, and 3 in-plane rotations). We then
pick the initial pose with the minimum reconstruction loss as the starting
point of the optimization (the optimization is conducted only from the chosen
pose; the other candidates are discarded). On average, inference takes about
8s per image on a single GPU. The whole inference process on PASCAL3D+ takes
about 3 hours on an 8-GPU machine.
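The initialization scheme above can be sketched as follows; the 12 × 4 × 3 grid comes from the text, while the concrete elevation and in-plane rotation ranges are our own illustrative assumptions:

```python
import itertools
import math

import numpy as np

def init_pose_candidates(n_azim=12, n_elev=4, n_theta=3):
    # 12 x 4 x 3 = 144 uniformly sampled starting poses; the
    # elevation/in-plane ranges below are illustrative assumptions
    azims = np.linspace(0.0, 2.0 * math.pi, n_azim, endpoint=False)
    elevs = np.linspace(-math.pi / 6, math.pi / 3, n_elev)
    thetas = np.linspace(-math.pi / 6, math.pi / 6, n_theta)
    return list(itertools.product(azims, elevs, thetas))

def pick_start_pose(candidates, loss_fn):
    # evaluate the reconstruction loss once per candidate and keep the
    # argmin as the single starting point for gradient-based refinement;
    # no optimization is run for the discarded candidates
    losses = [loss_fn(np.asarray(p)) for p in candidates]
    return candidates[int(np.argmin(losses))]
```

Because each candidate is only scored once, the cost of initialization is 144 loss evaluations rather than 144 optimization runs.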
### 4.2 Robust 3D Pose Estimation Under Occlusion
Baseline performances. Table 1 (for category-specific scores, see Table 5)
illustrates the 3D pose estimation results on PASCAL3D+ under different levels
of occlusion. In the low-accuracy setting ($ACC_{\frac{\pi}{6}}$), StarMap
performs exceptionally well when the object is non-occluded ($L0$). However,
with increasing levels of partial occlusion, the performance of StarMap
degrades massively, falling even below the basic classification models
Res50-General and Res50-Specific. These results highlight that today’s most
common deep networks for 3D pose estimation are not robust. Similar
generalization patterns can be observed for the high-accuracy setting
($ACC_{\frac{\pi}{18}}$). However, we can observe that the classification
baselines do not perform as well as before, and hence are not well suited for
fine-grained 3D pose estimation. Nevertheless, they outperform StarMap at high
occlusion levels ($L2$ & $L3$).
NeMo. We evaluate NeMo in three different setups: NeMo uses a down-sampled
object mesh as geometry representation, while NeMo-MultiCuboid and
NeMo-SingleCuboid approximate the 3D object geometry crudely using 3D cuboid
boxes.
the cuboid generation and results in detail in the next paragraph. Compared to
the baseline performances, we observe that NeMo achieves competitive
performance at estimating the 3D pose of non-occluded objects. Moreover, NeMo
is much more robust compared to all baseline approaches. In particular, we
observe that NeMo achieves the highest performance at every evaluation metric
when the objects are partially occluded. Note that the training data for all
models is exactly the same.
To further investigate and understand the robustness of NeMo, we qualitatively
analyze the pose estimation and occluder location predictions of NeMo in
Figure 3. Each subfigure shows the input image, the pose estimation result,
the occluder localization map and the loss as a function of the pose angles.
We visualize the loss landscape along each pose parameter (azimuth, elevation
and in-plane rotation) by sampling the individual parameters in a fixed step
size, while keeping all other parameters at their ground-truth value. We
further split the binary occlusion map $\mathcal{Z}$ into three regions to
highlight the occluder localization performance of NeMo. In particular, we
split the region that is explained by the background model into a yellow and a
red region. The red region is covered by the rendered mesh, i.e., it
highlights locations inside the projected region of the mesh that the neural
mesh model cannot explain well. Hence, these mark the locations in the image
that NeMo predicts to be occluded. From the qualitative illustrations, we
observe that
NeMo maintains high robustness even under extreme occlusion, when only a small
part of the object is visible. Furthermore, we can clearly see that NeMo can
approximately localize the occluders. This occluder localization property of
NeMo makes our model not just more robust but also much more human-
interpretable compared to standard deep network approaches.
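The three-region split used in this visualization can be written down directly; `z_fg` and `mesh_mask` are hypothetical boolean maps for the foreground-explained pixels and the projected mesh region:

```python
import numpy as np

def occlusion_regions(z_fg, mesh_mask):
    """Split the binary occlusion map into the three visualization regions.

    z_fg: True where the foreground model explains the feature vector.
    mesh_mask: True inside the projected region of the rendered mesh.
    """
    green = z_fg & mesh_mask    # visible (non-occluded) object area
    red = ~z_fg & mesh_mask     # inside the mesh but background-explained,
                                # i.e. the predicted occluded area
    yellow = ~z_fg & ~mesh_mask # background outside the projected mesh
    return green, red, yellow
```

The red map is what makes the model interpretable: it is a per-location statement about which parts of the object NeMo believes are hidden.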
NeMo without detailed object mesh. We approximate the object geometry in NeMo
by replacing the downsampled mesh with 3D cuboid boxes (see Figure 5). The
vertices of the cuboid meshes are evenly distributed on all six sides of the
cuboid. For generating the cuboids, we use three constraints: 1) the cuboid
should cover all the vertices of the original mesh with minimum volume; 2) the
distances between each pair of adjacent vertices should be similar; 3) the
total number of vertices of each mesh should be around 1200. We generate two
different types of models: NeMo-MultiCuboid uses a separate cuboid for each
object mesh in an object category, while NeMo-SingleCuboid uses one cuboid for
all instances of a category.
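The three constraints can be satisfied by a simple construction: take the tight axis-aligned bounding box of the mesh and sample a roughly uniform grid on its six faces, with the grid step chosen so the total vertex count is near the budget. This is our own sketch of such a procedure, not the authors' exact implementation:

```python
import math

import numpy as np

def cuboid_mesh_vertices(mesh_verts, target=1200):
    # 1) tight axis-aligned box covering every vertex of the original mesh
    lo, hi = mesh_verts.min(axis=0), mesh_verts.max(axis=0)
    size = np.maximum(hi - lo, 1e-6)
    # 2) one grid step for all faces so adjacent sampled vertices are
    #    roughly equidistant: total surface area ~= target * step^2
    area = 2.0 * (size[0] * size[1] + size[1] * size[2] + size[0] * size[2])
    step = math.sqrt(area / target)
    # 3) sample the grid on each of the six faces
    pts = []
    for axis in range(3):
        a, b = [i for i in range(3) if i != axis]
        us = np.arange(lo[a], hi[a] + 1e-9, step)
        vs = np.arange(lo[b], hi[b] + 1e-9, step)
        for val in (lo[axis], hi[axis]):
            for u in us:
                for v in vs:
                    p = np.empty(3)
                    p[axis], p[a], p[b] = val, u, v
                    pts.append(p)
    # drop duplicates along the shared cuboid edges
    return np.unique(np.round(np.array(pts), 6), axis=0)
```

The result covers the original mesh by construction and lands near the 1200-vertex budget up to edge effects.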
We report the pose estimation results with NeMo using cuboid meshes in Table
1. The results show that approximating the detailed mesh representation of a
category with a single 3D cuboid gives surprisingly good results. In
particular, NeMo-SingleCuboid even often outperforms our standard model. This
shows that generative models of neural network feature activations need not
retain the detailed object geometry, because the feature activations are
invariant to detailed shape properties. Moreover, NeMo-MultiCuboid outperforms
the SingleCuboid model significantly. This suggests that for some categories
the size can vary strongly between sub-types (e.g., an airplane could be a
passenger jet or a fighter jet). Therefore, a single mesh may not be
representative enough for some object categories. The MultiCuboid model even
outperforms the model with detailed mesh geometry. This is very likely caused
by difficulties during the down-sampling of the original meshes in PASCAL3D+,
which might remove important parts of the object geometry.
We also conduct experiments on the ObjectNet3D dataset; the results are
reported in Table 2. NeMo outperforms StarMap in 14 out of the 18 categories.
Note that, due to the considerable number of occluded and truncated images in
ObjectNet3D, this dataset is significantly harder than PASCAL3D+;
nevertheless, NeMo still achieves reasonable accuracy.
Table 2: Pose estimation results on ObjectNet3D, evaluated via the pose estimation accuracy for errors under $\frac{\pi}{6}$ (percentage, higher better). Both the baseline and NeMo are evaluated on all images of each category, including occluded and truncated samples. Overall, NeMo has higher accuracy in 14 categories and lower accuracy in 4 categories. $ACC_{\frac{\pi}{6}}\uparrow$ | bed | bookshelf | calculator | cellphone | computer | cabinet | guitar | iron | knife
---|---|---|---|---|---|---|---|---|---
StarMap | 40.0 | 72.9 | 21.1 | 41.9 | 62.1 | 79.9 | 38.7 | 2.0 | 6.1
NeMo-MultiCuboid | 56.1 | 53.7 | 57.1 | 28.2 | 78.8 | 83.6 | 38.8 | 32.3 | 9.8
$ACC_{\frac{\pi}{6}}\uparrow$ | microwave | pen | pot | rifle | slipper | stove | toilet | tub | wheelchair
StarMap | 86.9 | 12.4 | 45.1 | 3.0 | 13.3 | 79.7 | 35.6 | 46.4 | 17.7
NeMo-MultiCuboid | 90.3 | 3.7 | 66.7 | 13.7 | 6.1 | 85.2 | 74.5 | 61.6 | 71.7
### 4.3 Generalization to Unseen Views
Evaluation Metric | $ACC_{\frac{\pi}{6}}\uparrow$ | $ACC_{\frac{\pi}{18}}\uparrow$ | MedErr $\downarrow$
---|---|---|---
Data Split | Seen | Unseen | Seen | Unseen | Seen | Unseen
Res50-General | 91.7 | 37.2 | 47.9 | 5.3 | 10.8 | 45.8
Res50-Specific | 91.2 | 34.7 | 47.9 | 4.0 | 10.8 | 48.5
StarMap | 93.1 | 49.8 | 68.6 | 13.5 | 7.3 | 36.0
NeMo-MultiCuboid | 88.6 | 54.7 | 70.2 | 31.0 | 6.6 | 34.9
NeMo-SingleCuboid | 88.5 | 54.3 | 68.6 | 27.9 | 7.0 | 35.1
Figure 4: Pose estimation results on PASCAL3D+ for objects in seen and unseen
poses. The histogram on the left shows how we separate the PASCAL3D+ test
dataset into subsets based on the azimuth pose of the object. We split the
training dataset analogously and trained all models only on the "seen" subset.
We evaluate on both test sets (Seen & Unseen). Note the strong generalization
performance of NeMo to unseen viewpoints.
To further investigate the robustness of NeMo to out-of-distribution data, we
evaluate its performance when objects are observed from previously unseen
viewpoints. For this, we split the PASCAL3D+ dataset into two sets based on
the ground-truth azimuth angle. In particular, we use the front and rear views
for training. We evaluate all approaches on the full testing set and split the
performance into seen (front and rear) and unseen (side) poses. The histogram
on the left of Table 4 shows the distribution of ground-truth azimuth angles
in the PASCAL3D+ test dataset. The seen test set contains 7305 images, while
the unseen test set contains 3507 images. Table 4 shows that NeMo generalizes
significantly better to novel viewpoints than the baselines. For some
categories, the accuracy of NeMo on the unseen test set is even comparable to
the seen test set (Table 7). These results highlight the importance of
building neural networks with 3D internal representations, which enable them
to generalize exceptionally well to unseen 3D transformations.
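The viewpoint split can be reproduced with a small helper. The source states that front and rear views are "seen" and side views "unseen"; the concrete ±45° azimuth windows below are our assumption for illustration:

```python
def is_seen_view(azimuth_deg, half_width=45.0):
    # front (around 0 deg) and rear (around 180 deg) views are "seen";
    # the +/- 45 deg half-width is an illustrative assumption
    a = azimuth_deg % 360.0
    front = a <= half_width or a >= 360.0 - half_width
    rear = abs(a - 180.0) <= half_width
    return front or rear

def split_by_azimuth(azimuths_deg):
    # partition ground-truth azimuths into seen (front/rear)
    # and unseen (side) view subsets
    seen = [a for a in azimuths_deg if is_seen_view(a)]
    unseen = [a for a in azimuths_deg if not is_seen_view(a)]
    return seen, unseen
```

Applied to both the training and test annotations, the same predicate yields the training subset and the seen/unseen evaluation split.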
### 4.4 Ablation Study
Table 3: Ablation study on PASCAL3D+ and occluded PASCAL3D+. All ablation experiments are conducted with the NeMo-MultiCuboid model. The performance is reported in terms of Accuracy (percentage, higher better) and Median Error (degree, lower better). Evaluation Metric | $ACC_{\frac{\pi}{6}}\uparrow$ | $ACC_{\frac{\pi}{18}}\uparrow$ | MedErr $\downarrow$
---|---|---|---
Occlusion Level | L0 | L1 | L2 | L3 | L0 | L1 | L2 | L3 | L0 | L1 | L2 | L3
NeMo | 86.7 | 77.3 | 65.2 | 47.1 | 63.2 | 49.2 | 34.5 | 17.8 | 8.2 | 13.1 | 20.2 | 36.1
NeMo w/o outlier | 85.2 | 76.0 | 63.2 | 44.4 | 61.8 | 47.9 | 32.4 | 16.2 | 8.5 | 13.5 | 20.7 | 41.6
NeMo w/o contrastive | 69.7 | 58.0 | 44.6 | 26.9 | 40.8 | 27.7 | 14.7 | 5.6 | 18.3 | 27.7 | 37.0 | 61.0
Table 4: Sensitivity of NeMo-MultiCuboid to different numbers of random pose initializations during inference (Init Samples) on PASCAL3D+. Init Samples | $ACC_{\frac{\pi}{6}}\uparrow$ | $ACC_{\frac{\pi}{18}}\uparrow$ | MedErr $\downarrow$
---|---|---|---
144(Std.) | 86.7 | 63.2 | 8.2
72 | 86.3 | 63.0 | 8.3
36 | 84.1 | 61.1 | 8.8
12 | 81.2 | 57.7 | 9.3
6 | 80.4 | 57.7 | 9.6
1 | 54.9 | 38.9 | 35.6
In Table 3, we study the effect of each individual module of NeMo.
Specifically, we remove the clutter feature, background score, and occluder
prediction during inference, and only use the foreground score to compute the
pose loss. This reduces the robustness to occlusion significantly.
Furthermore, we remove the contrastive loss and use neural features extracted
with an ImageNet-pretrained ResNet50 with non-parametric upsampling. This
leads to a massive decrease in performance, and hence highlights the
importance of learning locally distinct feature representations. Table 4 (and
Table 9) study the sensitivity of NeMo to the random pose initialization
before the pose optimization. In this ablation, we evaluate NeMo-MultiCuboid
with 144 down to 1 uniformly sampled initialization poses. Note that we do not
run 144 optimization processes. Instead, we evaluate the reconstruction error
for each initialization and start the optimization from the initialization
with the lowest error. Hence, every experiment involves only one optimization
run. The results demonstrate that NeMo benefits from the smooth loss
landscape. With 6 initial samples, NeMo achieves a reasonable performance,
while 72 initial poses almost yield the maximum performance. This ablation
clearly highlights that, unlike standard render-and-compare approaches Blanz &
Vetter (1999); Schönborn et al. (2017), NeMo does not require complex
hand-designed initialization strategies.
## 5 Conclusion
In this work, we considered the problem of robust 3D pose estimation with
neural networks. We found that standard deep learning approaches do not give
robust predictions when objects are partially occluded or viewed from an
unseen pose. In an effort to resolve this fundamental limitation, we developed
Neural Mesh Models (NeMo), a neural network architecture that integrates a
prototypical mesh representation with a generative model of neural features.
We combine NeMo with contrastive learning and show that this makes it possible
to estimate the 3D pose with very high robustness to out-of-distribution data
using simple gradient-based render-and-compare. Our experiments demonstrate
the superiority of NeMo compared to related work on a range of challenging
datasets.
#### Acknowledgments
We gratefully acknowledge funding support from ONR N00014-18-1-2119, ONR
N00014-20-1-2206, the Institute for Assured Autonomy at JHU with Grant IAA
80052272, and the Swiss National Science Foundation with Grant P2BSP2.181713.
We also thank Weichao Qiu, Qing Liu, Yutong Bai and Jiteng Mu for suggestions
on our paper.
## References
* Bai et al. (2020) Yutong Bai, Angtian Wang, Adam Kortylewski, and Alan Yuille. Coke: Localized contrastive learning for robust keypoint detection. _arXiv preprint arXiv:2009.14115_ , 2020.
* Blanz & Vetter (1999) Volker Blanz and Thomas Vetter. A morphable model for the synthesis of 3d faces. In _Proceedings of the 26th annual conference on Computer graphics and interactive techniques_ , pp. 187–194, 1999.
* Blanz & Vetter (2003) Volker Blanz and Thomas Vetter. Face recognition based on fitting a 3d morphable model. _IEEE Transactions on pattern analysis and machine intelligence_ , 25(9):1063–1074, 2003.
* Chen et al. (2020) Xu Chen, Zijian Dong, Jie Song, Andreas Geiger, and Otmar Hilliges. Category level object pose estimation via neural analysis-by-synthesis. _arXiv preprint arXiv:2008.08145_ , 2020.
* Community (2018) Blender Online Community. _Blender - a 3D modelling and rendering package_. Blender Foundation, Stichting Blender Foundation, Amsterdam, 2018. URL http://www.blender.org.
* Egger et al. (2018) Bernhard Egger, Sandro Schönborn, Andreas Schneider, Adam Kortylewski, Andreas Morel-Forster, Clemens Blumer, and Thomas Vetter. Occlusion-aware 3d morphable models and an illumination prior for face image analysis. _International Journal of Computer Vision_ , 126(12):1269–1287, 2018.
* Girshick et al. (2011) Ross Girshick, Pedro Felzenszwalb, and David McAllester. Object detection with grammar models. _Advances in Neural Information Processing Systems_ , 24:442–450, 2011.
* Hadsell et al. (2006) Raia Hadsell, Sumit Chopra, and Yann LeCun. Dimensionality reduction by learning an invariant mapping. In _2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06)_ , volume 2, pp. 1735–1742. IEEE, 2006.
* Han et al. (2019) Tengda Han, Weidi Xie, and Andrew Zisserman. Video representation learning by dense predictive coding. In _Proceedings of the IEEE International Conference on Computer Vision Workshops_ , pp. 0–0, 2019.
* He et al. (2020) Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 9729–9738, 2020.
* Kortylewski (2017) Adam Kortylewski. _Model-based image analysis for forensic shoe print recognition_. PhD thesis, University of Basel, 2017.
* Kortylewski et al. (2020a) Adam Kortylewski, Ju He, Qing Liu, and Alan L Yuille. Compositional convolutional neural networks: A deep architecture with innate robustness to partial occlusion. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 8940–8949, 2020a.
* Kortylewski et al. (2020b) Adam Kortylewski, Qing Liu, Angtian Wang, Yihong Sun, and Alan Yuille. Compositional convolutional neural networks: A robust and interpretable model for object recognition under occlusion. _International Journal of Computer Vision_ , pp. 1–25, 2020b.
* Kortylewski et al. (2020c) Adam Kortylewski, Qing Liu, Huiyu Wang, Zhishuai Zhang, and Alan Yuille. Combining compositional models and deep networks for robust object classification under occlusion. In _Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision_ , pp. 1333–1341, 2020c.
* Lepetit et al. (2009) Vincent Lepetit, Francesc Moreno-Noguer, and Pascal Fua. Epnp: An accurate o (n) solution to the pnp problem. _International journal of computer vision_ , 81(2):155, 2009.
* Li et al. (2018) Yi Li, Gu Wang, Xiangyang Ji, Yu Xiang, and Dieter Fox. Deepim: Deep iterative matching for 6d pose estimation. In _Proceedings of the European Conference on Computer Vision (ECCV)_ , pp. 683–698, 2018.
* Lin et al. (2014) Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In _European conference on computer vision_ , pp. 740–755. Springer, 2014.
* Lu et al. (2000) C-P Lu, Gregory D Hager, and Eric Mjolsness. Fast and globally convergent pose estimation from video images. _IEEE transactions on pattern analysis and machine intelligence_ , 22(6):610–622, 2000.
* Mousavian et al. (2017) Arsalan Mousavian, Dragomir Anguelov, John Flynn, and Jana Kosecka. 3d bounding box estimation using deep learning and geometry. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pp. 7074–7082, 2017.
* Oord et al. (2018) Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. _arXiv preprint arXiv:1807.03748_ , 2018.
* Pavlakos et al. (2017) Georgios Pavlakos, Xiaowei Zhou, Aaron Chan, Konstantinos G Derpanis, and Kostas Daniilidis. 6-dof object pose from semantic keypoints. In _2017 IEEE international conference on robotics and automation (ICRA)_ , pp. 2011–2018. IEEE, 2017.
* Peng et al. (2019) Sida Peng, Yuan Liu, Qixing Huang, Xiaowei Zhou, and Hujun Bao. Pvnet: Pixel-wise voting network for 6dof pose estimation. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pp. 4561–4570, 2019.
* Ravi et al. (2020) Nikhila Ravi, Jeremy Reizenstein, David Novotny, Taylor Gordon, Wan-Yen Lo, Justin Johnson, and Georgia Gkioxari. Accelerating 3d deep learning with pytorch3d. _arXiv:2007.08501_ , 2020.
* Schönborn et al. (2017) Sandro Schönborn, Bernhard Egger, Andreas Morel-Forster, and Thomas Vetter. Markov chain monte carlo for automated face image analysis. _International Journal of Computer Vision_ , 123(2):160–183, 2017.
* Schroff et al. (2015) Florian Schroff, Dmitry Kalenichenko, and James Philbin. Facenet: A unified embedding for face recognition and clustering. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pp. 815–823, 2015.
* Sohn (2016) Kihyuk Sohn. Improved deep metric learning with multi-class n-pair loss objective. In _Advances in neural information processing systems_ , pp. 1857–1865, 2016.
* Song et al. (2020) Chen Song, Jiaru Song, and Qixing Huang. Hybridpose: 6d object pose estimation under hybrid representations. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , June 2020.
* Su et al. (2015) Hao Su, Charles R Qi, Yangyan Li, and Leonidas J Guibas. Render for cnn: Viewpoint estimation in images using cnns trained with rendered 3d model views. In _Proceedings of the IEEE International Conference on Computer Vision_ , pp. 2686–2694, 2015.
* Sundermeyer et al. (2018) Martin Sundermeyer, Zoltan-Csaba Marton, Maximilian Durner, Manuel Brucker, and Rudolph Triebel. Implicit 3d orientation learning for 6d object detection from rgb images. In _Proceedings of the European Conference on Computer Vision (ECCV)_ , pp. 699–715, 2018.
* Szeto & Corso (2017) Ryan Szeto and Jason J Corso. Click here: Human-localized keypoints as guidance for viewpoint estimation. In _Proceedings of the IEEE International Conference on Computer Vision_ , pp. 1595–1604, 2017.
* Tulsiani & Malik (2015) Shubham Tulsiani and Jitendra Malik. Viewpoints and keypoints. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pp. 1510–1519, 2015.
* Wang et al. (2020) Angtian Wang, Yihong Sun, Adam Kortylewski, and Alan L. Yuille. Robust object detection under occlusion with context-aware compositionalnets. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , June 2020.
* Wang et al. (2019) He Wang, Srinath Sridhar, Jingwei Huang, Julien Valentin, Shuran Song, and Leonidas J Guibas. Normalized object coordinate space for category-level 6d object pose and size estimation. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pp. 2642–2651, 2019.
* Wu et al. (2018) Zhirong Wu, Yuanjun Xiong, Stella X Yu, and Dahua Lin. Unsupervised feature learning via non-parametric instance discrimination. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pp. 3733–3742, 2018.
* Xiang et al. (2014) Yu Xiang, Roozbeh Mottaghi, and Silvio Savarese. Beyond pascal: A benchmark for 3d object detection in the wild. In _IEEE Winter Conference on Applications of Computer Vision (WACV)_ , 2014.
* Xiang et al. (2016) Yu Xiang, Wonhui Kim, Wei Chen, Jingwei Ji, Christopher Choy, Hao Su, Roozbeh Mottaghi, Leonidas Guibas, and Silvio Savarese. Objectnet3d: A large scale database for 3d object recognition. In _European Conference Computer Vision (ECCV)_ , 2016.
* Zakharov et al. (2019) Sergey Zakharov, Ivan Shugurov, and Slobodan Ilic. Dpod: 6d pose object detector and refiner. In _Proceedings of the IEEE International Conference on Computer Vision_ , pp. 1941–1950, 2019.
* Zeeshan Zia et al. (2013) M Zeeshan Zia, Michael Stark, and Konrad Schindler. Explicit occlusion modeling for 3d object class representations. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pp. 3326–3333, 2013.
* Zhou et al. (2018) Xingyi Zhou, Arjun Karpur, Linjie Luo, and Qixing Huang. Starmap for category-agnostic keypoint and viewpoint estimation. In _Proceedings of the European Conference on Computer Vision (ECCV)_ , pp. 318–334, 2018.
## Appendix A Appendix
Table 5: Pose estimation results on PASCAL3D+ (L0) for all categories respectively. Results reported in Accuracy (percentage, higher better) and Median Error (degree, lower better). | aero | bike | boat | bottle | bus | car | chair | table | mbike | sofa | train | tv | Mean
---|---|---|---|---|---|---|---|---|---|---|---|---|---
$\uparrow ACC_{\frac{\pi}{6}}$ Res50-General | 83.0 | 79.6 | 73.1 | 87.9 | 96.8 | 95.5 | 91.1 | 82.0 | 80.7 | 97.0 | 94.9 | 83.3 | 88.1
$\uparrow ACC_{\frac{\pi}{6}}$ Res50-Specific | 79.5 | 75.8 | 73.5 | 90.3 | 93.5 | 95.6 | 89.1 | 82.4 | 79.7 | 96.3 | 96.0 | 84.6 | 87.6
$\uparrow ACC_{\frac{\pi}{6}}$ StarMap | 85.5 | 84.4 | 65.0 | 93.0 | 98.0 | 97.8 | 94.4 | 82.7 | 85.3 | 97.5 | 93.8 | 89.4 | 89.4
$\uparrow ACC_{\frac{\pi}{6}}$ NeMo | 73.3 | 66.4 | 65.5 | 83.0 | 87.4 | 98.8 | 82.8 | 81.9 | 74.6 | 94.7 | 87.0 | 85.5 | 84.1
$\uparrow ACC_{\frac{\pi}{6}}$ NeMo-MultiCuboid | 76.9 | 82.2 | 66.5 | 87.1 | 93.0 | 98.0 | 90.1 | 80.5 | 81.8 | 96.0 | 89.3 | 87.1 | 86.7
$\uparrow ACC_{\frac{\pi}{6}}$ NeMo-SingleCuboid | 82.2 | 78.4 | 68.1 | 88.0 | 91.7 | 98.2 | 87.0 | 76.9 | 85.0 | 95.0 | 83.0 | 82.2 | 86.1
$\uparrow ACC_{\frac{\pi}{18}}$ Res50-General | 31.3 | 25.7 | 23.9 | 35.9 | 67.2 | 63.5 | 37.0 | 40.2 | 18.9 | 62.5 | 51.2 | 24.9 | 44.6
$\uparrow ACC_{\frac{\pi}{18}}$ Res50-Specific | 29.1 | 22.9 | 25.3 | 39.0 | 62.7 | 62.9 | 37.5 | 42.0 | 19.5 | 57.5 | 50.2 | 25.4 | 43.9
$\uparrow ACC_{\frac{\pi}{18}}$ StarMap | 49.8 | 34.2 | 25.4 | 56.8 | 90.3 | 81.9 | 67.1 | 57.5 | 27.7 | 70.3 | 69.7 | 40.0 | 59.5
$\uparrow ACC_{\frac{\pi}{18}}$ NeMo | 39.0 | 31.3 | 29.6 | 38.6 | 83.1 | 94.8 | 46.9 | 58.1 | 29.3 | 61.1 | 71.1 | 66.4 | 60.4
$\uparrow ACC_{\frac{\pi}{18}}$ NeMo-MultiCuboid | 43.1 | 35.3 | 36.4 | 48.6 | 89.7 | 95.5 | 49.5 | 56.5 | 33.8 | 68.8 | 75.9 | 56.8 | 63.2
$\uparrow ACC_{\frac{\pi}{18}}$ NeMo-SingleCuboid | 49.7 | 29.5 | 37.7 | 49.3 | 89.3 | 94.7 | 49.5 | 52.9 | 29.0 | 58.5 | 70.1 | 42.4 | 61.0
$\downarrow$ MedErr Res50-General | 13.3 | 15.9 | 15.6 | 12.1 | 8.9 | 8.8 | 11.5 | 11.4 | 16.6 | 8.7 | 9.9 | 15.8 | 11.7
$\downarrow$ MedErr Res50-Specific | 14.2 | 17.3 | 15.4 | 11.7 | 9.0 | 8.8 | 12.0 | 11.0 | 17.1 | 9.2 | 10.0 | 14.9 | 11.8
$\downarrow$ MedErr StarMap | 10.0 | 14.0 | 19.7 | 8.8 | 3.2 | 4.2 | 6.9 | 8.5 | 14.5 | 6.8 | 6.7 | 12.1 | 9.0
$\downarrow$ MedErr NeMo | 13.8 | 17.5 | 18.3 | 12.8 | 3.4 | 2.7 | 10.7 | 8.2 | 16.1 | 8.0 | 5.6 | 6.6 | 9.3
$\downarrow$ MedErr NeMo-MultiCuboid | 11.8 | 13.4 | 14.8 | 10.2 | 2.6 | 2.8 | 10.1 | 8.8 | 14.0 | 7.0 | 5.0 | 8.1 | 8.2
$\downarrow$ MedErr NeMo-SingleCuboid | 10.1 | 16.3 | 14.9 | 10.2 | 3.2 | 3.2 | 10.1 | 9.3 | 14.1 | 8.6 | 5.4 | 12.2 | 8.8
Table 6: Pose estimation results on occluded PASCAL3D+ occlusion L1 for all categories respectively. Results reported in Accuracy (percentage, higher better) and Median Error (degree, lower better). | aero | bike | boat | bottle | bus | car | chair | table | mbike | sofa | train | tv | Mean
---|---|---|---|---|---|---|---|---|---|---|---|---|---
$\uparrow ACC_{\frac{\pi}{6}}$ Res50-General | 57.3 | 56.8 | 51.4 | 78.3 | 82.5 | 80.0 | 62.3 | 63.1 | 61.1 | 84.9 | 87.8 | 69.8 | 70.4
$\uparrow ACC_{\frac{\pi}{6}}$ Res50-Specific | 54.0 | 59.5 | 48.9 | 84.4 | 86.1 | 84.4 | 67.1 | 64.9 | 65.9 | 87.8 | 92.4 | 74.5 | 73.2
$\uparrow ACC_{\frac{\pi}{6}}$ StarMap | 52.6 | 65.3 | 42.0 | 81.8 | 87.9 | 86.1 | 64.5 | 66.5 | 62.8 | 76.9 | 85.2 | 59.7 | 71.1
$\uparrow ACC_{\frac{\pi}{6}}$ NeMo | 49.0 | 51.4 | 52.9 | 73.5 | 82.2 | 94.3 | 70.2 | 67.9 | 53.8 | 86.7 | 75.0 | 79.4 | 73.1
$\uparrow ACC_{\frac{\pi}{6}}$ NeMo-MultiCuboid | 58.1 | 68.8 | 53.4 | 78.8 | 86.9 | 94.0 | 76.0 | 70.0 | 61.8 | 87.3 | 82.8 | 82.8 | 77.2
$\uparrow ACC_{\frac{\pi}{6}}$ NeMo-SingleCuboid | 61.9 | 63.4 | 52.9 | 81.3 | 84.8 | 92.7 | 78.4 | 68.2 | 68.9 | 87.1 | 80.3 | 76.9 | 76.0
$\uparrow ACC_{\frac{\pi}{18}}$ Res50-General | 11.8 | 12.5 | 12.3 | 26.5 | 45.0 | 40.7 | 14.7 | 22.3 | 10.7 | 24.4 | 34.9 | 13.0 | 25.3
$\uparrow ACC_{\frac{\pi}{18}}$ Res50-Specific | 12.4 | 10.7 | 13.8 | 30.2 | 46.9 | 44.8 | 21.2 | 24.0 | 10.4 | 28.0 | 40.6 | 17.9 | 28.1
$\uparrow ACC_{\frac{\pi}{18}}$ StarMap | 15.6 | 15.1 | 10.8 | 36.2 | 66.6 | 58.1 | 26.6 | 32.0 | 14.4 | 23.8 | 47.4 | 13.0 | 34.4
$\uparrow ACC_{\frac{\pi}{18}}$ NeMo | 18.5 | 19.9 | 19.1 | 24.0 | 72.1 | 82.0 | 25.8 | 35.7 | 12.6 | 44.3 | 54.0 | 49.0 | 45.1
$\uparrow ACC_{\frac{\pi}{18}}$ NeMo-MultiCuboid | 25.4 | 23.3 | 22.9 | 36.7 | 86.9 | 84.8 | 33.1 | 36.8 | 20.8 | 46.5 | 61.0 | 46.3 | 49.9
$\uparrow ACC_{\frac{\pi}{18}}$ NeMo-SingleCuboid | 29.3 | 18.0 | 24.3 | 41.5 | 76.1 | 80.5 | 27.2 | 31.4 | 19.4 | 39.9 | 55.1 | 32.0 | 46.3
$\downarrow$ MedErr Res50-General | 25.3 | 24.5 | 29.0 | 14.9 | 10.6 | 11.2 | 22.4 | 18.1 | 23.3 | 15.5 | 11.7 | 21.1 | 17.9
$\downarrow$ MedErr Res50-Specific | 26.8 | 23.7 | 31.0 | 13.8 | 10.5 | 10.6 | 18.2 | 16.7 | 21.8 | 13.6 | 10.9 | 19.3 | 17.3
$\downarrow$ MedErr StarMap | 27.3 | 22.1 | 38.9 | 12.9 | 7.0 | 8.2 | 19.1 | 17.2 | 21.7 | 16.8 | 10.6 | 24.1 | 17.6
$\downarrow$ MedErr NeMo | 30.8 | 29.0 | 27.3 | 17.6 | 5.9 | 5.1 | 18.6 | 14.7 | 27.4 | 11.3 | 8.8 | 10.2 | 15.6
$\downarrow$ MedErr NeMo-MultiCuboid | 22.6 | 18.6 | 25.8 | 14.1 | 4.7 | 4.6 | 15.1 | 13.8 | 21.2 | 11.0 | 8.0 | 11.3 | 13.0
$\downarrow$ MedErr NeMo-SingleCuboid | 18.9 | 23.2 | 26.7 | 12.6 | 5.2 | 5.4 | 15.6 | 15.4 | 20.1 | 12.1 | 8.6 | 15.3 | 13.6
Table 7: Pose estimation results on occluded PASCAL3D+ occlusion L2 for all categories respectively. Results reported in Accuracy (percentage, higher better) and Median Error (degree, lower better). | aero | bike | boat | bottle | bus | car | chair | table | mbike | sofa | train | tv | Mean
---|---|---|---|---|---|---|---|---|---|---|---|---|---
$\uparrow ACC_{\frac{\pi}{6}}$ Res50-General | 33.3 | 40.2 | 33.6 | 70.6 | 69.5 | 57.0 | 41.8 | 47.4 | 43.3 | 66.8 | 80.4 | 58.1 | 52.8
$\uparrow ACC_{\frac{\pi}{6}}$ Res50-Specific | 36.3 | 44.9 | 36.1 | 76.1 | 73.1 | 65.5 | 53.2 | 49.5 | 45.4 | 72.7 | 88.3 | 65.0 | 58.4
$\uparrow ACC_{\frac{\pi}{6}}$ StarMap | 28.5 | 38.9 | 21.3 | 65.0 | 61.7 | 59.3 | 37.5 | 44.7 | 43.2 | 55.1 | 56.4 | 36.2 | 47.2
$\uparrow ACC_{\frac{\pi}{6}}$ NeMo | 38.2 | 41.2 | 39.6 | 58.3 | 72.6 | 84.7 | 50.7 | 51.1 | 34.9 | 70.1 | 60.0 | 64.6 | 59.9
$\uparrow ACC_{\frac{\pi}{6}}$ NeMo-MultiCuboid | 43.1 | 55.7 | 43.3 | 69.1 | 79.8 | 84.5 | 58.8 | 58.4 | 43.9 | 76.4 | 64.3 | 70.3 | 65.2
$\uparrow ACC_{\frac{\pi}{6}}$ NeMo-SingleCuboid | 43.4 | 49.6 | 43.6 | 76.0 | 71.2 | 83.8 | 61.9 | 55.9 | 50.9 | 78.3 | 63.1 | 68.6 | 63.9
$\uparrow ACC_{\frac{\pi}{18}}$ Res50-General | 6.1 | 4.5 | 7.2 | 20.1 | 25.9 | 21.4 | 9.5 | 13.2 | 6.1 | 14.0 | 23.0 | 8.6 | 14.5
$\uparrow ACC_{\frac{\pi}{18}}$ Res50-Specific | 5.7 | 6.9 | 8.0 | 25.5 | 33.9 | 29.1 | 13.0 | 11.6 | 6.8 | 18.4 | 32.0 | 13.8 | 18.6
$\uparrow ACC_{\frac{\pi}{18}}$ StarMap | 3.8 | 5.8 | 2.4 | 19.7 | 30.5 | 24.5 | 7.7 | 9.6 | 5.1 | 9.6 | 21.5 | 5.8 | 13.9
$\uparrow ACC_{\frac{\pi}{18}}$ NeMo | 10.7 | 10.5 | 11.3 | 13.9 | 55.8 | 60.6 | 9.3 | 20.3 | 6.3 | 26.1 | 34.6 | 32.1 | 30.2
$\uparrow ACC_{\frac{\pi}{18}}$ NeMo-MultiCuboid | 12.8 | 16.6 | 16.8 | 21.9 | 62.3 | 64.6 | 17.2 | 20.3 | 12.3 | 32.4 | 38.2 | 32.7 | 34.5
$\uparrow ACC_{\frac{\pi}{18}}$ NeMo-SingleCuboid | 14.9 | 11.1 | 15.6 | 18.2 | 56.0 | 62.4 | 17.4 | 18.7 | 10.2 | 30.5 | 36.4 | 22.4 | 32.0
$\downarrow$ MedErr Res50-General | 49.3 | 42.5 | 58.5 | 17.7 | 15.9 | 21.3 | 35.4 | 32.0 | 36.1 | 20.3 | 15.2 | 25.3 | 30.4
$\downarrow$ MedErr Res50-Specific | 45.8 | 33.9 | 52.8 | 16.3 | 12.4 | 15.1 | 27.1 | 30.9 | 32.4 | 18.3 | 12.3 | 24.1 | 26.1
$\downarrow$ MedErr StarMap | 55.2 | 37.1 | 69.1 | 20.6 | 19.0 | 21.3 | 39.2 | 34.0 | 35.5 | 27.0 | 24.8 | 40.3 | 34.1
$\downarrow$ MedErr NeMo | 39.8 | 37.7 | 44.2 | 24.8 | 8.8 | 7.7 | 29.7 | 28.5 | 47.5 | 16.9 | 18.2 | 17.0 | 24.1
$\downarrow$ MedErr NeMo-MultiCuboid | 38.5 | 26.4 | 38.2 | 18.8 | 7.0 | 7.3 | 23.0 | 23.0 | 36.0 | 14.0 | 14.9 | 16.1 | 20.2
$\downarrow$ MedErr NeMo-SingleCuboid | 39.9 | 30.6 | 38.8 | 19.5 | 8.3 | 7.8 | 21.3 | 24.8 | 29.5 | 14.2 | 16.9 | 18.5 | 20.9
Table 8: Pose estimation results on occluded PASCAL3D+ occlusion L3 for all categories respectively. Results reported in Accuracy (percentage, higher better) and Median Error (degree, lower better). | aero | bike | boat | bottle | bus | car | chair | table | mbike | sofa | train | tv | Mean
---|---|---|---|---|---|---|---|---|---|---|---|---|---
$\uparrow ACC_{\frac{\pi}{6}}$ Res50-General | 18.3 | 20.8 | 21.2 | 62.1 | 57.0 | 36.9 | 31.1 | 32.2 | 24.3 | 56.2 | 64.5 | 53.4 | 37.8
$\uparrow ACC_{\frac{\pi}{6}}$ Res50-Specific | 20.0 | 33.4 | 25.5 | 67.5 | 57.8 | 42.0 | 40.7 | 33.9 | 30.3 | 56.6 | 82.8 | 56.5 | 43.1
$\uparrow ACC_{\frac{\pi}{6}}$ StarMap | 7.6 | 18.5 | 10.6 | 46.3 | 35.1 | 25.3 | 22.5 | 24.6 | 15.9 | 26.4 | 24.0 | 19.5 | 22.9
$\uparrow ACC_{\frac{\pi}{6}}$ NeMo | 24.0 | 31.3 | 27.4 | 43.3 | 48.8 | 62.8 | 31.8 | 29.7 | 18.4 | 44.2 | 34.5 | 51.4 | 41.3
$\uparrow ACC_{\frac{\pi}{6}}$ NeMo-MultiCuboid | 23.8 | 34.3 | 29.5 | 53.9 | 56.0 | 65.5 | 43.4 | 41.5 | 25.4 | 58.2 | 43.2 | 54.1 | 47.1
$\uparrow ACC_{\frac{\pi}{6}}$ NeMo-SingleCuboid | 20.6 | 33.8 | 27.6 | 61.7 | 49.9 | 61.8 | 44.7 | 41.2 | 35.3 | 62.9 | 47.9 | 50.2 | 46.8
$\uparrow ACC_{\frac{\pi}{18}}$ Res50-General | 1.6 | 2.3 | 2.9 | 11.9 | 14.4 | 7.6 | 3.8 | 5.7 | 3.1 | 7.9 | 12.7 | 8.9 | 6.7
$\uparrow ACC_{\frac{\pi}{18}}$ Res50-Specific | 2.0 | 5.5 | 4.8 | 16.7 | 21.1 | 13.1 | 5.9 | 5.7 | 4.3 | 9.9 | 22.5 | 6.0 | 9.9
$\uparrow ACC_{\frac{\pi}{18}}$ StarMap | 0.8 | 1.7 | 1.1 | 11.8 | 8.3 | 4.8 | 2.1 | 2.6 | 1.6 | 2.8 | 5.2 | 0.7 | 3.7
$\uparrow ACC_{\frac{\pi}{18}}$ NeMo | 4.4 | 6.2 | 6.7 | 6.8 | 26.5 | 31.1 | 3.4 | 6.7 | 2.0 | 9.3 | 13.0 | 16.7 | 14.5
$\uparrow ACC_{\frac{\pi}{18}}$ NeMo-MultiCuboid | 5.5 | 5.2 | 7.9 | 10.8 | 34.2 | 37.4 | 7.4 | 8.2 | 4.5 | 15.8 | 15.1 | 15.9 | 17.8
$\uparrow ACC_{\frac{\pi}{18}}$ NeMo-SingleCuboid | 4.7 | 6.7 | 8.6 | 11.7 | 29.2 | 33.7 | 11.0 | 10.7 | 4.9 | 17.8 | 17.2 | 10.9 | 17.1
$\downarrow$ MedErr Res50-General | 69.8 | 70.9 | 73.2 | 22.7 | 24.9 | 46.7 | 41.5 | 44.4 | 59.8 | 26.3 | 21.3 | 28.4 | 46.4
$\downarrow$ MedErr Res50-Specific | 65.8 | 47.1 | 75.8 | 20.9 | 18.5 | 46.6 | 35.9 | 49.9 | 56.3 | 26.4 | 15.3 | 26.5 | 44.0
$\downarrow$ MedErr StarMap | 87.0 | 67.6 | 90.2 | 32.6 | 51.3 | 64.0 | 60.7 | 53.2 | 73.4 | 51.0 | 52.7 | 54.7 | 63.0
$\downarrow$ MedErr NeMo | 65.3 | 48.4 | 65.2 | 34.5 | 34.9 | 17.2 | 44.6 | 55.7 | 74.3 | 33.7 | 47.6 | 29.3 | 41.8
$\downarrow$ MedErr NeMo-MultiCuboid | 69.8 | 49.6 | 63.0 | 28.2 | 19.4 | 14.9 | 35.4 | 39.9 | 60.0 | 23.7 | 38.1 | 27.2 | 36.1
$\downarrow$ MedErr NeMo-SingleCuboid | 74.8 | 46.1 | 70.1 | 24.5 | 30.2 | 16.3 | 35.2 | 37.5 | 50.5 | 21.5 | 31.7 | 29.9 | 36.5
Table 9: Full table for 4. This table shows category-specific results of NeMo-MultiCuboid pose estimation performance on PASCAL3D+ using different numbers of initialization poses during inference. The Init Samples column shows the total number of initialization poses, e.g., 144 means we uniformly sample 12 (azimuth) * 4 (elevation) * 3 (in-plane rotation) poses. Std. marks the standard setting used in the main experiments. Category | aero | bike | boat | bottle | bus | car | chair | table | mbike | sofa | train | tv | Mean
---|---|---|---|---|---|---|---|---|---|---|---|---|---
$\uparrow ACC_{\frac{\pi}{6}}$ | 144(Std.) | 76.9 | 82.2 | 66.5 | 87.1 | 93.0 | 98.0 | 90.1 | 80.5 | 81.8 | 96.0 | 89.3 | 87.1 | 86.7
72 | 77.1 | 81.9 | 64.6 | 86.5 | 93.0 | 98.0 | 89.2 | 81.3 | 82.2 | 95.8 | 85.9 | 87.0 | 86.3
36 | 74.6 | 79.2 | 60.0 | 86.6 | 89.8 | 94.7 | 88.6 | 79.5 | 80.0 | 95.2 | 86.4 | 86.5 | 84.1
12 | 73.4 | 78.1 | 57.1 | 86.2 | 79.9 | 86.9 | 90.1 | 81.6 | 79.3 | 94.6 | 82.5 | 86.5 | 81.2
6 | 69.7 | 78.1 | 58.3 | 85.7 | 82.5 | 90.9 | 87.8 | 68.6 | 80.3 | 95.0 | 79.4 | 86.0 | 80.4
1 | 38.7 | 33.6 | 34.2 | 86.9 | 54.9 | 40.6 | 77.5 | 68.3 | 27.8 | 89.7 | 78.6 | 84.7 | 54.9
$\uparrow ACC_{\frac{\pi}{18}}$ | 144(Std.) | 43.1 | 35.3 | 36.4 | 48.6 | 89.7 | 95.5 | 49.5 | 56.5 | 33.8 | 68.8 | 75.9 | 56.8 | 63.2
72 | 43.2 | 35.7 | 36.6 | 47.5 | 89.8 | 95.2 | 48.7 | 56.7 | 34.0 | 69.1 | 72.6 | 56.6 | 63.0
36 | 41.3 | 33.2 | 31.6 | 47.8 | 85.5 | 91.8 | 49.5 | 56.1 | 33.0 | 68.4 | 74.1 | 56.3 | 61.1
12 | 41.1 | 32.2 | 26.6 | 47.3 | 73.3 | 84.4 | 49.7 | 57.0 | 33.5 | 67.9 | 67.6 | 55.8 | 57.7
6 | 38.3 | 32.1 | 30.5 | 46.9 | 78.4 | 88.2 | 48.1 | 46.7 | 33.0 | 68.1 | 66.9 | 55.5 | 57.7
1 | 22.7 | 19.7 | 21.6 | 47.4 | 44.7 | 37.9 | 44.6 | 47.3 | 14.6 | 65.5 | 62.1 | 54.5 | 38.9
$\downarrow$ MedErr | 144(Std.) | 11.8 | 13.4 | 14.8 | 10.2 | 2.6 | 2.8 | 10.1 | 8.8 | 14.0 | 7.0 | 5.0 | 8.1 | 8.2
72 | 11.9 | 13.4 | 15.7 | 10.4 | 2.6 | 2.8 | 10.3 | 8.7 | 14.0 | 7.0 | 5.2 | 8.2 | 8.3
36 | 12.4 | 14.5 | 19.1 | 10.4 | 2.8 | 2.9 | 10.1 | 8.8 | 14.2 | 7.0 | 5.0 | 8.2 | 8.8
12 | 12.5 | 14.8 | 22.6 | 10.5 | 3.8 | 3.1 | 10.0 | 8.7 | 14.5 | 7.1 | 5.7 | 8.3 | 9.3
6 | 14.2 | 14.8 | 21.4 | 10.5 | 3.1 | 2.9 | 10.4 | 11.5 | 14.3 | 7.1 | 5.6 | 8.4 | 9.6
1 | 49.0 | 77.0 | 63.9 | 10.4 | 27.7 | 43.6 | 11.1 | 11.3 | 78.0 | 7.4 | 6.0 | 8.7 | 35.6
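The 12 * 4 * 3 sampling described in the caption of Table 9 amounts to a uniform grid over azimuth, elevation, and in-plane rotation. A minimal sketch of such a grid follows; the angle ranges are illustrative assumptions of ours, not NeMo's exact values:

```python
import itertools
import math

def pose_grid(n_azim=12, n_elev=4, n_theta=3):
    """Uniformly sample initialization poses as (azimuth, elevation, theta)
    triples in radians. The angle ranges are assumptions for illustration."""
    azims = [2.0 * math.pi * i / n_azim for i in range(n_azim)]
    elevs = [-math.pi / 6.0 + (2.0 * math.pi / 3.0) * i / (n_elev - 1)
             for i in range(n_elev)]
    thetas = [-math.pi / 6.0 + (math.pi / 3.0) * i / (n_theta - 1)
              for i in range(n_theta)]
    return list(itertools.product(azims, elevs, thetas))

poses = pose_grid()
assert len(poses) == 144  # the standard setting of Table 9
```

Shrinking the grid (e.g., `pose_grid(6, 2, 2)`) mirrors the smaller Init Samples rows of the table.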
Table 10: Experiment for NeMo-MultiCuboid when the subtype is not given during inference. In the w/o subtype experiment we run inference on each image with the NMMs of all subtypes and pick the predicted pose of the subtype with the minimum reconstruction loss. The results demonstrate that distinguishing subtypes is not necessary for pose estimation with NeMo. $\uparrow ACC_{\frac{\pi}{6}}$ | aero | bike | boat | bottle | bus | car | chair | table | mbike | sofa | train | tv | Mean
---|---|---|---|---|---|---|---|---|---|---|---|---|---
L0 | with subtype | 76.9 | 82.2 | 66.5 | 87.1 | 93.0 | 98.0 | 90.1 | 80.5 | 81.8 | 96.0 | 89.3 | 87.1 | 86.7
w/o subtype | 77.7 | 78.3 | 70.9 | 82.6 | 94.5 | 98.7 | 86.4 | 75.5 | 83.7 | 93.8 | 89.3 | 81.0 | 85.8
L1 | with subtype | 58.1 | 68.8 | 53.4 | 78.8 | 86.9 | 94.0 | 76.0 | 70.0 | 61.8 | 87.3 | 82.8 | 82.8 | 77.2
w/o subtype | 59.4 | 63.4 | 56.0 | 74.9 | 90.3 | 96.1 | 73.4 | 63.0 | 64.7 | 83.8 | 84.0 | 77.6 | 76.5
L2 | with subtype | 43.1 | 55.7 | 43.3 | 69.1 | 79.8 | 84.5 | 58.8 | 58.4 | 43.9 | 76.4 | 64.3 | 70.3 | 65.2
w/o subtype | 46.3 | 48.1 | 44.3 | 67.3 | 84.0 | 86.2 | 61.5 | 50.9 | 45.1 | 70.2 | 67.0 | 67.4 | 64.7
L3 | with subtype | 23.8 | 34.3 | 29.5 | 53.9 | 56.0 | 65.5 | 43.4 | 41.5 | 25.4 | 58.2 | 43.2 | 54.1 | 47.1
w/o subtype | 26.1 | 28.6 | 31.5 | 54.9 | 61.2 | 66.9 | 44.3 | 34.4 | 25.0 | 54.0 | 49.2 | 53.8 | 47.2
Figure 5: Using a detailed mesh model we can create all types of mesh models for
NeMo. (a) We use the remesh method in Blender to downsample the original mesh.
The processed mesh contains 1722 vertices. (b) Following the rules in 4.2, we
create a subtype-specific cuboid (one cuboid for each subtype), which is used in
the NeMo-MultiCuboid approach. The cuboid contains 1096 vertices. (c) We create
the subtype-general cuboid by requiring the cuboid to cover the original meshes of
all subtypes. We use this cuboid to represent all objects in the
category, which is reported as NeMo-SingleCuboid. This cuboid contains 1080
vertices.
(a) Chair, L2
(b) Car, L3
(c) Chair, L2
(d) Diningtable, L1
Figure 6: Visualization of failure cases of NeMo on occluded PASCAL3D+. For
each example, we show four subfigures. Top-left: the input image. Top-right: a
mesh superimposed on the input image in the predicted 3D pose. Bottom-left:
the occluder localization result, where yellow is background, green is the
non-occluded area of the object, and red is the occluded area as predicted by
NeMo. Bottom-right: the loss landscape for each individual camera parameter.
The colored vertical lines mark the final prediction, and
the ground-truth parameter is at the center of the x-axis.
Unseen Pose Evaluation. Method | $ACC_{\frac{\pi}{6}}\uparrow$ Seen | $ACC_{\frac{\pi}{6}}\uparrow$ Unseen | $ACC_{\frac{\pi}{18}}\uparrow$ Seen | $ACC_{\frac{\pi}{18}}\uparrow$ Unseen | MedErr $\downarrow$ Seen | MedErr $\downarrow$ Unseen
---|---|---|---|---|---|---
Res50-General | 97.2 | 55.3 | 72.2 | 11.5 | 8.1 | 25.5
Res50-Specific | 97.2 | 52.5 | 70.5 | 11.7 | 8.2 | 27.7
StarMap | 98.2 | 77.6 | 93.7 | 34.2 | 3.4 | 15.5
NeMo-MultiCuboid | 96.8 | 97.0 | 94.8 | 85.4 | 2.6 | 5.2
NeMo-SingleCuboid | 98.0 | 97.8 | 96.3 | 78.8 | 2.9 | 5.9
Figure 7: Pose estimation results on PASCAL3D+ under unseen poses for the car
category. The figure shows the distribution of azimuth angles in the PASCAL3D+
test set for the car category and our seen/unseen split.
# Potential well in Poincaré recurrence
Miguel Abadi Vitor Amorim Sandro Gallo
###### Abstract
From a physical/dynamical-systems perspective, the potential well represents
the proportional mass of points that escape the neighbourhood of a given
point. In the last 20 years, several works have shown the importance of this
quantity for obtaining precise approximations of several recurrence time
distributions in mixing stochastic processes and dynamical systems. Besides
providing a review of the different scaling factors used in the recurrence-time
literature, the present work contributes two new
results: (1) for $\phi$-mixing and $\psi$-mixing processes, we give a new exponential
approximation for hitting and return times using the potential well as scaling
parameter, with explicit and sharp error terms; (2) we analyse the uniform
positivity of the potential well.
###### Contents
1. 1 Introduction
2. 2 Poincaré Recurrence Theory for mixing processes
1. 2.1 The framework of mixing processes
2. 2.2 Recurrence times and exponential approximations
3. 2.3 Potential well: definition and genealogy in PRT
3. 3 Main results
1. 3.1 Second order periodicity
2. 3.2 Type 2 approximations scaled by the potential well
3. 3.3 Uniform positivity of the potential well
4. 4 Proofs of the results
1. 4.1 Preliminary results
2. 4.2 Proof of Theorem 1
1. 4.2.1 Proofs of the statements for small $t$’s
2. 4.2.2 Proof of the statements for large $t$’s
3. 4.3 Proof of Theorem 2
## 1 Introduction
The close relation between Extreme Value Theory (EVT) and the statistical
properties of Poincaré recurrence has recently been explored quite thoroughly. The
starting point is that the exceedances of a stochastic process over a sequence
of barrier values $a_{n}>0,n\in\mathbb{N},$ can be considered as hittings of a
sequence of nested sets. More precisely, if one defines the semi-infinite
intervals
$A_{n}=(a_{n},\infty),$
and considers a sequence of random variables $X_{1},X_{2},\ldots$, one has the
equivalence
$\max\\{X_{1},\dots,X_{t}\\}>a_{n}\ \ \ \ \text{if and only if }\ \ \ \
T_{A_{n}}\leq t,$
where for any measurable set $A$, $T_{A}$ denotes the smallest $k$ such that
$X_{k}\in A$. As the sequence of levels $a_{n}$ diverges, the sets $A_{n}$ are
nested. This equivalence allows one to build a bridge between two historically
independent theories: Extreme Value Theory (EVT) and Poincaré Recurrence
Theory (PRT). While EVT focuses on the existence (and identification) of the
limit of the distribution of the partial maxima, $k$-maxima, among others
([LLR12, FFT10, Fre13, Res13, LFdF+16]), the aim of recent works on PRT is to
understand the statistical properties of the different notions of return
times.
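The equivalence above is elementary but worth making concrete. As a sanity check of our own (not an example from the paper), the sketch below verifies on a simulated i.i.d. Gaussian sequence that $\max\{X_{1},\dots,X_{t}\}>a$ holds exactly when $T_{A}\leq t$ for $A=(a,\infty)$:

```python
import random

def hitting_time(xs, a):
    """Smallest k >= 1 with X_k in A = (a, infinity); None if A is never hit."""
    for k, x in enumerate(xs, start=1):
        if x > a:
            return k
    return None

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(1000)]
a = 1.5
T = hitting_time(xs, a)
for t in range(1, len(xs) + 1):
    # max{X_1, ..., X_t} > a  if and only if  T_A <= t
    assert (max(xs[:t]) > a) == (T is not None and T <= t)
```

The check passes for every $t$, since both sides describe the same event: the first exceedance of the barrier $a$ occurring no later than time $t$.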
The present paper takes the PRT approach: our interest is in the
statistics of visits of a random process $X_{t},t\in\mathbb{N}$, to a given
measurable target set. _Asymptotic_ statistics are obtained by studying sequences
of target sets $A_{n},n\geq 1$, usually with measure shrinking to zero. In this
context, and for certain classes of processes, hitting and return times with
respect to a given sequence of target sets converge to the exponential
distribution, modelling the unpredictability of rare events. However, this
rough statement is full of nuances which need to be established in very
precise terms. It turns out that these details carry much information about the
system.
For instance, for two observables having the same probability, the Ergodic
Theorem says that, macroscopically, their numbers of occurrences are about the
same. However, these occurrences can be scattered along time in very different
ways. Under some strong mixing assumptions, it is a well-known fact in
the literature that, for nested sequences of observables with the same
probability, the asymptotic observation of one of them can be distributed as a
Poisson process while the other follows a compound Poisson process.
Thus, the Poisson/compound Poisson dichotomy in the _same_ system is
determined also by the intrinsic properties of the target sets considered.
In the setting of the present paper, the target sets are finite strings of
symbols (_patterns_). In this case, even if the process is a sequence of
independent random variables, the successive occurrences of a string are not
independent, because the structure of the pattern itself enters the game,
allowing or preventing consecutive observations due to possible overlaps with
itself. This leads to a dichotomy between aperiodic and periodic patterns which
yields, in the limit of long patterns, the Poisson/compound
Poisson dichotomy mentioned before. In passing, let us also mention that this dichotomy
also exists in EVT, where it is referred to as the phenomenon of clustering/non-clustering
of maxima, and it has generated a great deal of research over the last
two decades [LFdF+16].
Let us now be more specific about what we are doing here. First, we stand in
the context of discrete time stochastic processes with countable alphabet
enjoying $\phi$-mixing. Fix any point $x$, that is, any right infinite
sequence of symbols taken from the alphabet, and consider the nested sequence
of neighbourhoods corresponding to the first $n$ symbols of $x$, namely
$A_{n}=(x_{0},\ldots,x_{n-1}),n\geq 1$. The main theorem of the paper, Theorem
1, gives explicit and computable error terms for the approximation of the
hitting time distribution $\mu(T_{A_{n}}>t)$ and return time distribution
$\mu_{A_{n}}(T_{A_{n}}>t)$, by exponential distributions whose parameter is
explicit and depends on $A_{n}$.
The first main advantage of Theorem 1 is that it uses the _potential well_ as
scaling parameter. In words, the potential well is the probability,
conditioned on starting from $A_{n}$, that the pattern $A_{n}$ does not
reappear at the first possible moment it could reappear. The use of this
simple and well-defined quantity as scaling parameter contrasts with previous
works using parameters whose expressions are hardly explicit, and even less
computable. Another advantage of Theorem 1 is that, unlike a whole body
of literature obtaining almost-sure results, our results hold for _all_ $x$.
This allows one to distinguish different limiting distributions, as for example in
the periodic/aperiodic dichotomy described above, which almost-sure results
cannot detect. Last but not least, the error terms of our approximations
are not in total variation distance, but in the stronger pointwise form with
respect to the time scale.
Yet another important point of Theorem 1, with respect to return times, is
that it corrects the exponential approximation obtained by [AV09]. Indeed,
Theorem 4.1 therein contains a mistake in the error term for small $t$’s.
The other main novelty of the present work is Theorem 2, stating that the
potential well is uniformly bounded away from $0$ when we have $\psi$-mixing
or $\phi$-mixing with summable function $\phi(n)$. Naturally, as a conditional
probability, we know that the potential well belongs to the interval $[0,1]$
for any $n\geq 1$ and any pattern $A_{n}$. But it was proved that the
potential well could be arbitrarily close to $0$ for $\beta$-mixing processes,
a slightly weaker mixing assumption than $\phi$-mixing. Indeed, it was shown
in [ACG15] that for the binary renewal process, with specific choices of
transition probabilities and target sets $A_{n},n\geq 1$, the potential well
of $A_{n}$ vanishes as $n$ diverges. Note that the border is thin between this
$\beta$-mixing example and our Theorem 2 holding for $\psi$-mixing and
$\phi$-mixing with summable $\phi$ (see the review of [Bra05] on the distinct
mixing assumptions). We conjecture that the assumption of summability of the
$\phi$ rates can be dropped.
To conclude on the importance of the present work as a whole, let us mention
that our results are fundamental for the study of further recurrence
quantities, such as the return time function [WZ89, OW93] and the waiting time
function [Shi93, MS94], establishing a link with information theory. These
random variables are known to satisfy a counterpart of the famous Shannon-
McMillan-Breiman Theorem (asymptotic equipartition property). In order to
study the fluctuations of these limit theorems, for instance a large deviation
principle, we need to control the return/hitting time exponential
approximations for any point and any $t>0$. This is particularly clear in [CU05]
and [ACG19], which study the fluctuations of the waiting time and return time
respectively. It is also interesting to note that [CGS99] was the first to
point out the importance of seeking exponential approximations for _any_ point
$x$, precisely in order to study the small and large fluctuations of the
return time function.
The paper is organized as follows. We describe in Section 2 the setting of the
paper in the context of PRT, defining carefully the types of exponential
approximations we are interested in and explaining, including through an
extensive bibliography, the role of the potential well as scaling parameter.
Section 3 contains the main results and Section 4 is dedicated to their
proofs.
## 2 Poincaré Recurrence Theory for mixing processes
### 2.1 The framework of mixing processes
Consider a countable set $\mathcal{A}$ that we call alphabet. With
$\mathbb{N}$ we denote the set of nonnegative integers and with
$\mathcal{X}:=\mathcal{A}^{\mathbb{N}}$ the set of right infinite sequences
$x=(x_{0},x_{1},\ldots)$ of symbols taken from $\mathcal{A}$. Given a point
$x\in\mathcal{X}$ and any finite set $I\subset\mathbb{N}$, the cylinder
set with base $I$ is defined as the set
$A_{I}(x):=\\{y\in\mathcal{X}:y_{i}=x_{i},i\in I\\}$. In the particular case
where $I=\\{0,\ldots,n-1\\}$ we will write $A_{n}(x)$, and sometimes abuse
notation writing $x_{0}^{n-1}$. We endow $\mathcal{X}$ with the
$\sigma$-algebra $\mathcal{F}$ generated by the class of cylinder sets
$\\{A_{I}:I\subset\mathbb{N},|I|<\infty\\}$. Further, $\mathcal{F}_{I}$ denotes
the $\sigma$-algebra generated by $A_{I}(x),x\in\mathcal{X}$. In the special case
in which $I=\\{i,\ldots,j\\}$, $0\leq i\leq j\leq\infty$, we use the notation
$\mathcal{F}_{i}^{j}$. We use the shorthand notation
$a_{i}^{j}:=(a_{i},a_{i+1},\ldots,a_{j})$, $0\leq i\leq j<\infty$, for finite
strings of consecutive symbols of $\mathcal{A}$. When necessary, $A_{n}(x)$
will naturally be identified with the sequence $x_{0}^{n-1}$.
The shift operator $\sigma:\mathcal{X}\rightarrow\mathcal{X}$ shifts the point
$x=(x_{0},x_{1},x_{2},\dots)$ to the left by one coordinate, $(\sigma
x)_{i}=x_{i+1}$, $i\geq 0$.
We consider a shift invariant (or stationary) probability measure $\mu$ on
$(\mathcal{X},\mathcal{F})$. For any $A\in\mathcal{F}$ of positive measure,
$\mu_{A}(\cdot):=\frac{\mu(\\{x\in\cdot\cap A\\})}{\mu(A)}$ is the conditional
measure $\mu$ restricted to $A$.
Our results are stated under two mixing conditions that we now define. For all
$n\geq 1$, define
$\displaystyle\phi(n)$
$\displaystyle:=\sup_{i\in\mathbb{N},A\in\mathcal{F}_{0}^{i},B\in\mathcal{F}_{i+n}^{\infty}}\left|\frac{\mu(A\cap
B)}{\mu(A)}-\mu(B)\right|,$ $\displaystyle\psi(n)$
$\displaystyle:=\sup_{i\in\mathbb{N},A\in\mathcal{F}_{0}^{i},B\in\mathcal{F}_{i+n}^{\infty}}\left|\frac{\mu(A\cap
B)}{\mu(A)\mu(B)}-1\right|.$
Note that $\psi(n)$ and $\phi(n)$ are nonincreasing sequences, since
$\mathcal{F}_{i+n+1}^{\infty}\subset\mathcal{F}_{i+n}^{\infty}$ for every $i\geq 0$.
###### Definition 2.1.
We say that the measure $\mu$ on $(\mathcal{X},\mathcal{F})$ is $\phi$-mixing
(_resp._ $\psi$-mixing) if $\phi(n)$ (_resp._ $\psi(n)$) goes to $0$ as $n$
diverges. We will say that $\mu$ is “summable $\phi$-mixing” if it is
$\phi$-mixing with $\sum_{n}\phi(n)<\infty$.
We refer to [Bra05] for an exhaustive review of mixing properties and
examples.
### 2.2 Recurrence times and exponential approximations
The hitting time of a point $y$ to a set $A\in\mathcal{F}$ is defined by
$\displaystyle T_{A}(y)$ $\displaystyle=\inf\\{k\geq 1:\sigma^{k}(y)\in A\\}.$
For sets $A$ of small measure (rare events), and under mixing conditions such
as the ones introduced in the preceding subsection, it is expected that
$\mu(T_{A}>t)$ is approximately exponentially distributed. This is what we
call hitting time exponential approximation. Similarly, when we refer to
return time, we mean that we study the approximation of $\mu_{A}(T_{A}>t)$,
that is, the measure of the same event, conditioned on the points starting in
$A$.
In this paper we are interested in the case where we fix _any_ point $x$ and
consider $A_{n}(x)$ as target set. When $n$ diverges, the measure of
$A_{n}(x)$ vanishes, leading to rare events. The scaling parameter of the
exponential approximation will depend on the point $x$.
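As an illustration of this setting (a sketch of ours, assuming a fair i.i.d. coin, not an example from the paper), one can estimate the tail $\mu(T_{A_{n}}>t)$ by Monte Carlo for a short pattern; for such small $n$ the exponential approximation is only rough, which is precisely what explicit error terms must control:

```python
import random

def hitting_time(seq, pattern, t_max):
    """First k in 1..t_max with (seq[k], ..., seq[k+n-1]) = pattern, else t_max + 1."""
    n = len(pattern)
    for k in range(1, t_max + 1):
        if seq[k:k + n] == pattern:
            return k
    return t_max + 1

random.seed(0)
pattern, t_max, trials = [0, 1, 1], 40, 5000
counts = [0] * (t_max + 1)              # counts[t] = #{trials with T > t}
for _ in range(trials):
    seq = [random.randint(0, 1) for _ in range(t_max + len(pattern))]
    T = hitting_time(seq, pattern, t_max)
    for t in range(t_max + 1):
        counts[t] += T > t

tail = [c / trials for c in counts]     # empirical mu(T_A > t)
assert counts[0] == trials              # T >= 1 always
assert all(counts[t] >= counts[t + 1] for t in range(t_max))
```

The empirical tail is a genuine survival function (nonincreasing, starting at 1); comparing it with $e^{-\mu(A)\theta(A)t}$ for various scaling parameters $\theta$ is exactly the game played by the approximations defined next.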
The two main types of approximations that appeared in the literature when
approximating the hitting/return time distributions around any point $x$ of
the phase space are a total variation distance type and a pointwise type.
* •
_Type 1 : Total variation distance._ For any $x\in\mathcal{X}$,
* –
Hitting times
$\sup_{t>0}\left|\mu(T_{A}>t)-e^{-\mu(A)\theta(A)t}\right|\leq\epsilon(A),$
* –
Return times
$\sup_{t>0}\left|\mu_{A}(T_{A}>t)-\bar{\theta}(A)e^{-\mu(A)\theta(A)t}\right|\leq\epsilon(A).$
* •
_Type 2 : Pointwise._ For any $x\in\mathcal{X}$ and any $t>0$,
* –
Hitting times
$\left|\mu(T_{A}>t)-e^{-\mu(A)\theta(A)t}\right|\leq\epsilon(A,t),$
* –
Return times
$\left|\mu_{A}(T_{A}>t)-\bar{\theta}(A)e^{-\mu(A)\theta(A)t}\right|\leq\epsilon(A,t).$
Note that in the return time approximation, the parameters $\theta$ and
$\bar{\theta}$ need not be equal. However, such an approximation leads to
$\mathbb{E}_{A}(T_{A})\approx\bar{\theta}(A)\frac{1}{\mu(A)\theta(A)}.$
In view of Kac's Lemma, which, we recall, states that
$\mathbb{E}_{A}(T_{A})=\frac{1}{\mu(A)}$, the last display suggests that
$\theta$ and $\bar{\theta}$ must be close.
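Kac's Lemma is exact and easy to check numerically. The sketch below (our illustration, assuming a fair i.i.d. binary source) estimates $\mathbb{E}_{A}(T_{A})$ for the pattern $101$, whose measure is $1/8$, from the gaps between successive occurrences:

```python
import random

def occurrence_positions(seq, pattern):
    """All i with seq[i:i+n] == pattern (occurrences may overlap)."""
    n = len(pattern)
    return [i for i in range(len(seq) - n + 1) if seq[i:i + n] == pattern]

random.seed(0)
pattern = [1, 0, 1]
seq = [random.randint(0, 1) for _ in range(200_000)]
pos = occurrence_positions(seq, pattern)
gaps = [b - a for a, b in zip(pos, pos[1:])]   # return times T_A along the orbit
mean_return = sum(gaps) / len(gaps)
# Kac: E_A(T_A) = 1/mu(A) = 2^3 = 8
assert abs(mean_return - 8) < 0.5
```

Note that overlapping occurrences (gaps of size 2, via $10101$) are counted, as they must be: the return time of $101$ can be as small as $\tau(A)=2$.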
### 2.3 Potential well: definition and genealogy in PRT
As already explained, _potential well_ will be used as scaling parameter in
the exponential approximations of Types 1 and 2 defined above. In order to
define it, we need first to define the _shortest possible return_ of a set
$A\in\mathcal{F}$ (to itself)
$\tau(A):=\inf_{y\in
A}\left\\{T_{A}(y):\mu_{A}\left(\sigma^{-T_{A}(y)}(A)\right)>0\right\\},$
or, equivalently
$\tau(A):=\inf\left\\{k\geq 1:\mu_{A}\left(\sigma^{-k}(A)\right)>0\right\\}.$
In the case where $A=A_{n}(x)$, we can define $\tau_{n}(x)=\tau(A_{n}(x))$,
and the functions $\tau_{n}:\mathcal{X}\rightarrow\mathbb{N}$ constitute a sequence of
simple functions.
The first possible return time $\tau_{n}(x)$ is an object of independent
interest which has been studied from several perspectives in the literature. Let
us mention that its asymptotic concentration was proved by [STV02] and
[ACS03], large deviations in [AV08], [HV10] and [AC15], and fluctuations in
[AL13] and [AGRM17].
Obviously, by definition, $\mu_{A}(T_{A}\geq\tau(A))=1$. If for a point $x\in
A$ we have $T_{A}(x)>\tau(A)$, we say that $x$ _escapes_ from $A$. The
_potential well_ of $A$ (of order $n$ at $x$ when $A=A_{n}(x)$) is precisely the
proportional measure of points of $A$ which escape from $A$:
$\rho(A):=\mu_{A}(T_{A}>\tau(A)).$
Since we are interested in the case where $A=A_{n}(x)$, we may use the
alternative notation $\rho(x_{0}^{n-1})$. Besides being explicitly computable
in many situations, the potential well is physically meaningful and, as
scaling parameter, provides precise exponential approximations for recurrence
times under suitable mixing assumptions.
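For i.i.d. sources the potential well is indeed explicitly computable. The sketch below is our own illustration, assuming a fair i.i.d. binary source with full support: $\tau(A)$ is then the minimal period of the pattern, and since $T_{A}\geq\tau(A)$ almost surely on $A$, one has $\rho(A)=1-\mu_{A}(\sigma^{-\tau(A)}A)$, the conditional probability of the $\tau(A)$ fresh symbols needed to complete the overlap:

```python
from fractions import Fraction

def tau(pattern):
    """Shortest possible return time: the minimal period of the pattern.
    For an i.i.d. measure with full support, mu_A(sigma^{-k} A) > 0 for k < n
    iff the pattern is consistent with its own shift by k."""
    n = len(pattern)
    for k in range(1, n):
        if pattern[k:] == pattern[:n - k]:
            return k
    return n

def potential_well(pattern, p=Fraction(1, 2)):
    """rho(A) = mu_A(T_A > tau(A)) for a fair i.i.d. binary source:
    returning exactly at tau(A) costs the tau(A) symbols beyond the overlap,
    so rho(A) = 1 - 2^{-tau(A)}."""
    k = tau(pattern)
    fresh = pattern[len(pattern) - k:]   # symbols beyond the self-overlap
    return 1 - p ** len(fresh)

assert tau([0, 1, 0]) == 2               # "010" overlaps itself at shift 2
assert tau([0, 1, 1]) == 3               # aperiodic: tau = n
assert potential_well([0, 1, 0]) == Fraction(3, 4)
assert potential_well([0, 1, 1]) == Fraction(7, 8)
```

The two examples show the dichotomy at small scale: the periodic pattern $010$ has a smaller potential well (it is easier to return immediately) than the aperiodic pattern $011$.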
We give below a small genealogy of the scaling parameters that have appeared in the
literature in results holding _for all_ points, used to obtain
approximations for hitting/return times.
* •
As far as we know the first paper to prove exponential approximations for
hitting time statistics _for all points_ is due to Aldous and Brown [AB93].
They obtained Type 1 approximations in the case of reversible Markov chains.
The parameter used there is just the inverse of the expectation, which is
mandatory to use when the approximating law is the exponential distribution.
However, this does not bring information about the value of the expectation.
* •
Galves and Schmitt [GS97] obtained Type 1 approximations for hitting times in
$\psi$-mixing processes. The major breakthrough there was that the authors
provided an explicit formula for the parameter (denoted by $\lambda(A)$). This
quantity could be viewed as the _grandfather_ of $\rho$. Nonetheless, its
explicit significance was not evident.
* •
[Aba01] and [Aba04] gave exponential approximations (Type 1 and Type 2
respectively) of the distribution of hitting time around any point using a
scaling parameter. In [Aba04] however, only its existence and necessity were
proven, the calculation of $\lambda$ being intractable in general. The main
problem is that $\lambda(A)$ depends on the recurrence property of the
cylinder $A$ up to large time scales (usually of the order of $\mu(A)^{-1}$).
* •
In order to circumvent this issue, [Aba01] also provided, in the context of
approximations of Type 1, another scaling parameter, easier to compute, but
with a slightly larger error term as a price to pay. It is defined as follows
$\zeta_{s}(x_{0}^{n-1}):=\mu_{x_{0}^{n-1}}(T_{x_{0}^{n-1}}>n/s).$
This quantity depends on, at most, the $2n$ first coordinates of the process.
$\zeta_{s}(x_{0}^{n-1})$ can be seen as the _father_ of the potential well.
Both works [Aba01] and [Aba04] deal with processes enjoying $\psi$-mixing or
summable $\phi$-mixing.
* •
The use of the potential well $\rho$ as scaling parameter was first proposed
by Abadi in [Aba06], still in the context of an approximation of Type 1 for
hitting and return times. More specifically, it is proved that, for
exponentially $\alpha$-mixing processes, $\lambda$ and $\zeta$ (grandpa and
father of $\rho$) can be well approximated by $\rho$.
* •
The first paper to use $\rho$ directly as scaling parameter was [AV09],
in which a Type 2 approximation for return times was obtained, with
$\bar{\theta}=\theta=\rho$. The process is assumed to be $\phi$-mixing.
* •
Focusing on proving exponential approximations for hitting and return times
under the largest possible class of systems, and still for all points, Abadi
and Saussol [AS11] returned to the approach of Galves and Schmitt. Their
results hold under the $\alpha$-mixing condition, which is the weakest
hypothesis used to date, but the scaling parameter is not explicit.
* •
Focusing on the specific class of binary renewal processes, [ACG15] proved a
Type 1 approximation for hitting and return times using the potential well
$\rho$. One interesting aspect of this work lies in the fact that the
renewal process is $\beta$-mixing (weaker than the $\phi$-mixing assumed in
[AV09]). Moreover, the authors managed to use the renewal property to compute
the limit of $\rho(A_{n}(x))$ for _any_ point $x$. In other words, the
approximating asymptotic law for hitting and return times was explicitly
computed as a function of the parameters of the process. This result shows the
usefulness of the potential well, an “easy to compute” scaling parameter.
## 3 Main results
Theorem 1 below presents Type 2 approximations for hitting and return times
under $\phi$- and $\psi$-mixing conditions, with the potential well as scaling
parameter and an explicit error term.
Before stating this result, we first need to define the second order
periodicity of the string $A_{n}(x)$, which plays a crucial role in the size of
the error term.
### 3.1 Second order periodicity
The short returns that we define here are precisely those that are
difficult to treat as (almost) independent. They depend not only on the
correlation decay of the system but also on the particular properties of the
string itself. Technically, for an $n$-cylinder, _short_ means returning within
a number of steps of order $n$.
Consider the cylinder $A$ and suppose $\tau(A)=k$. Write $n=qk+r$, where
$q\in\mathbb{N}$ and $0\leq r<k$, and note that the cylinder overlaps itself
at all multiples of $k$ smaller than $n$. The set $\mathcal{P}(A):=\\{mk:1\leq
m\leq q\\}$ is the set of indexes of possible returns at multiples of $\tau(A)$, but
returns can also occur at other time indexes after that. Let
$\mathcal{R}(A)=\\{j\in\\{qk+1,...,qk+r-1\\}:\,\mu_{A}(\sigma^{-j}(A))>0\\}.$
A point $y\in A$ can only return to $A$ before time $n$ at indexes in
$\mathcal{P}(A)\cup\mathcal{R}(A)$, but there is a crucial difference between
them. A point that escapes from $A$ cannot return at an index in $\mathcal{P}(A)$, but it
_could_ return at an index in $\mathcal{R}(A)$. Namely,
$\mu_{A}(\sigma^{-\tau(A)}(A^{c})\cap\sigma^{-j}(A))=0,\ \
j\in\mathcal{P}(A),$
while
$\mu_{A}(\sigma^{-\tau(A)}(A^{c})\cap\sigma^{-j}(A))>0,\ \
j\in\mathcal{R}(A).$
We set $n_{A}$ as the first possible return time to $A$ _among those points
$x\in A$ that escape $A$ at $\tau(A)$_:
$n_{A}=\left\\{\begin{array}[]{lc}\min\mathcal{R}(A)&\mathcal{R}(A)\neq\emptyset\\\
\min\\{j:\mu_{A}(\sigma^{-\tau(A)}(A^{c})\cap\sigma^{-j}(A))>0\\},&\mathcal{R}(A)=\emptyset.\\\
\end{array}\right.$
Actually, in the second case, $n_{A}\geq n$. We refer to [AV09] for an
example that illustrates these facts. This definition is slightly more general
than the one therein, since we include the case of a non-complete grammar111We
say $\mu$ has complete grammar if $\mu(A_{n}(x))>0$ for any $x\in\mathcal{X}$
and $n\geq 1$..
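For a concrete feel of these objects, the quantities $\tau(A)$, $\mathcal{P}(A)$ and $\mathcal{R}(A)$ can be computed combinatorially when the measure has complete grammar (e.g. an i.i.d. process with full support), since then, for $j<n$, $\mu_{A}(\sigma^{-j}(A))>0$ exactly when the cylinder overlaps itself at shift $j$. The following sketch is an illustration under this assumption, not part of the formal development; in the case $\mathcal{R}(A)=\emptyset$ it only reports that $n_{A}\geq n$.

```python
def periodicity_data(A):
    """tau(A), P(A), R(A) and (when R(A) is nonempty) n_A for a string A,
    assuming complete grammar: mu_A(sigma^{-j}(A)) > 0 for j < n iff the
    cylinder overlaps itself at shift j, i.e. j is a period of A."""
    n = len(A)

    def is_period(j):                      # A[j:] matches the prefix of A
        return A[j:] == A[:n - j]

    k = next((j for j in range(1, n) if is_period(j)), n)   # tau(A)
    q, r = divmod(n, k)                    # n = q*k + r
    P = [m * k for m in range(1, q + 1)]   # returns at multiples of tau(A)
    R = [j for j in range(q * k + 1, q * k + r) if is_period(j)]
    n_A = min(R) if R else None            # None: second case, n_A >= n
    return k, P, R, n_A

# "aabaabaa": tau = 3, n = 8 = 2*3 + 2, and the shift 7 is a genuine short
# return lying outside the multiples of tau
tau, P, R, n_A = periodicity_data("aabaabaa")
```

For the string of Remark 5, $A=(b,\ldots,b)$, the function returns $\tau=1$, $\mathcal{P}=\{1,\ldots,n\}$ and $\mathcal{R}=\emptyset$, consistent with the discussion above.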
### 3.2 Type 2 approximations scaled by the potential well
For any finite string $A$, let us denote by $A^{(k)}$ the suffix of $A$ of
size $k$. That is, if $A=x_{0}^{n-1}$, then $A^{(k)}=x_{n-k}^{n-1}$. When $A$
is an $n$-cylinder we will use the convention
$\mu(A^{(j)})=\mu(A^{(n)})=\mu(A)$ for $j\geq n$.
By definition, $\phi(g)$ is finite for all $g\geq 1$. This is not the case
for $\psi(g)$. Thus, for $\psi$-mixing measures, we define
$g_{0}=g_{0}(\psi):=\inf\\{g\geq 1:\psi(g)<\infty\\}-1.$ (1)
Now, for the error term, define
$\displaystyle(a)\;\;\epsilon_{\psi}(A):=n\mu\left(A^{(n_{A}-g_{0})}\right)+\psi(n),$
(2) $\displaystyle(b)\;\;\epsilon_{\phi}(A):=\inf_{1\leq w\leq
n_{A}}\left\\{(n+\tau(A))\mu\left(A^{(w)}\right)+\phi(n_{A}-w)\right\\}.$ (3)
Note that cylinders $A$ of size $n$ satisfy $n_{A}\geq n/2$, so
$\epsilon_{\psi}$ is well defined for all $n>2g_{0}$.
We will use $\epsilon$ to denote either $\epsilon_{\psi}$ or $\epsilon_{\phi}$
when the argument/statement is general.
###### Theorem 1.
Consider a stationary measure $\mu$ on $(\mathcal{X},\mathcal{F})$ enjoying
either $\phi$-mixing with
$\sup_{A\in\mathcal{A}^{n}}\mu(A)\tau(A)\stackrel{{\scriptstyle
n}}{{\longrightarrow}}0$, or simply $\psi$-mixing. There exist five positive
constants $C_{i}$, $i=1,\ldots,5$, and $n_{0}\in\mathbb{N}$ such that for all
$n\geq n_{0}$ and all $A\in\mathcal{A}^{n}$, the following inequalities hold.
1. 1.
For all $t\geq 0$:
$\left|\mu(T_{A}>t)-e^{-\rho(A)\mu(A)t}\right|\leq\left\\{\begin{array}[]{lc}C_{1}\left(\tau(A)\mu(A)+t\mu(A)\epsilon(A)\right)&t\leq[2\mu(A)]^{-1}\\\
C_{2}\,\mu(A)t\epsilon(A)e^{-\mu(A)t\left(\rho(A)-C_{3}\epsilon(A)\right)}&t>[2\mu(A)]^{-1}.\\\
\end{array}\right.$
2. 2.
For all $t\geq\tau(A)$:
$\left|\mu_{A}(T_{A}>t)-\rho(A)e^{-\rho(A)\mu(A)(t-\tau(A))}\right|\hskip
113.81102pt$ $\hskip
28.45274pt\leq\left\\{\begin{array}[]{lc}C_{4}\,\epsilon(A)&t\leq[2\mu(A)]^{-1}\\\
C_{5}\,\mu(A)t\epsilon(A)e^{-\mu(A)t\left(\rho(A)-C_{3}\epsilon(A)\right)}&t>[2\mu(A)]^{-1}.\\\
\end{array}\right.$
Theorem 1 (and its proof) is definitely inspired by [AV09] and their Theorem
4.1. However, let us first observe that our result provides the first
statement in the literature of Type 2 hitting time approximations with the
potential well as scaling parameter. Moreover, contrary to [AV09], we assume
neither complete grammar nor a finite alphabet.
Let us make some further important observations concerning this theorem.
###### Remark 1.
Under $\phi$-mixing, the assumption $\sup\mu(A)\tau(A)\stackrel{{\scriptstyle
n}}{{\longrightarrow}}0$ can be dropped under certain circumstances. For
instance, if the measure has complete grammar, we have $\tau(A)\leq n$ and the
assumption is granted using Lemma 1. Another way is to assume that $\mu$ is
_summable_ $\phi$-mixing or $\psi$-mixing, as commented after Lemma 2 in
Section 4.
###### Remark 2.
According to Lemma 1, if $\mu$ is $\phi$-mixing (and _a fortiori_ ,
$\psi$-mixing), there exist constants $C$ and $c$ such that $\mu(A)\leq
Ce^{-cn}$ for all $n\geq 1$ and $A\in\mathcal{A}^{n}$. On the other hand,
since $n_{A}\geq n/2$, we get $\mu\left(A^{(n_{A}-g_{0})}\right)\leq
Ce^{-c(n/2-g_{0})}$ for all $n>2g_{0}$. Therefore,
$\epsilon_{\psi}(A)\stackrel{{\scriptstyle n}}{{\longrightarrow}}0$ uniformly.
Further, if $\tau(A)\leq 2n$ it is enough to take $w=\lceil n/4\rceil$ to
obtain $\phi(n_{A}-w)\leq\phi(\lfloor n/4\rfloor)$ and
$(n+\tau(A))\mu\left(A^{(w)}\right)\leq 3Cne^{-cn/4}$, which ensures
$\epsilon_{\phi}(A)\stackrel{{\scriptstyle n}}{{\longrightarrow}}0$ uniformly.
This is the case, for instance, if one has complete grammar. On the other
hand, notice that $\tau(A)<n_{A}$. Hence, if $\tau(A)>2n$ we take $w=n$ and
get $\phi(n_{A}-w)\leq\phi(n)$. Therefore, since
$\tau(A)\mu(A)\stackrel{{\scriptstyle n}}{{\longrightarrow}}0$, we also have
in this case $\epsilon_{\phi}\stackrel{{\scriptstyle n}}{{\longrightarrow}}0$.
###### Remark 3.
Naturally, the statements under $\psi$-mixing are less general, but have
smaller error terms. The error term is the same for $t>[2\mu(A)]^{-1}$ for
both hitting and return times approximations. The difference is for small
$t$’s, due to the correlation arising from the conditional measure.
###### Remark 4.
For application purposes involving data, it is essential to control all the
constants involved in the statements. These constants can be accessed from the
proof presented in Section 4.2, where we also make explicit the integer
$n_{0}$ from which Theorem 1 holds (see (22)). If $\mu$ is $\psi$-mixing, we
define $M:=\psi\left(g_{0}+1\right)+1$. In this case $C_{1}=8M+9$,
$C_{2}=194M+206$, $C_{3}=66M+89$, $C_{4}=12M+15$ and $C_{5}=197M+220$. On the
other hand, for the $\phi$-mixing case we have $C_{1}=9$, $C_{2}=143$,
$C_{3}=61$, $C_{4}=14$ and $C_{5}=170$.
###### Remark 5.
We show the sharpness of the error term in the return time approximation given
by Theorem 1 with a simple example. Consider an i.i.d. process
$(X_{m})_{m\in\mathbb{N}}$ with alphabet $\mathcal{A}$. Take $b\in\mathcal{A}$
such that $\mu(b)=p$ and
$x=(b,b,\ldots)\in\mathcal{X}=\mathcal{A}^{\mathbb{N}}$. Thus
$A_{n}(x)=x_{0}^{n-1}=(b,b,\ldots,b)$. Direct calculations give
* •
$\mu(A_{n})=p^{n}$
* •
$\tau(A_{n})=1$
* •
$\displaystyle\rho(A_{n})=1-\mu_{A_{n}}(T_{A_{n}}=\tau(A_{n}))=1-\mu_{A_{n}}(X_{n}=b)=1-p$
* •
$n_{A_{n}}=n$.
An i.i.d. process is trivially $\psi$-mixing with function $\psi$ identically
zero. Thus Theorem 1 states that the error for small $t$’s is
$\epsilon_{\psi}(A_{n})=np^{n}$. On the other hand, by direct calculation we
have for each $n\geq 2$
$\displaystyle\left|\mu_{A_{n}}(T_{A_{n}}>n-1)-\rho(A_{n})e^{-\rho(A_{n})\mu(A_{n})((n-1)-\tau(A_{n}))}\right|=(1-p)\left(1-e^{-(1-p)p^{n}(n-2)}\right)$
which implies that the exact error in the approximation for return time at
$n-1$ is of order $p^{n}n$, just as stated by Theorem 1.
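The computation in this example can be checked numerically. The sketch below (illustrative only) evaluates the exact left-hand side, which equals $1-p$ since on $A_{n}$ the event $\{T_{A_{n}}\leq n-1\}$ is decided by the single symbol at position $n$, against the exponential approximation, and confirms that the error is of order $np^{n}$.

```python
import math

def exact_error(p, n):
    """|mu_{A_n}(T > n-1) - rho e^{-rho mu(A_n)((n-1)-tau)}| for the cylinder
    A_n = (b, ..., b) of an i.i.d. process with mu(b) = p."""
    mu_A = p ** n
    rho = 1 - p          # potential well: the return at tau = 1 has prob. p
    tau = 1
    lhs = 1 - p          # mu_{A_n}(T_{A_n} > n-1), computed exactly
    approx = rho * math.exp(-rho * mu_A * ((n - 1) - tau))
    return abs(lhs - approx)

# the error behaves like (1-p)^2 (n-2) p^n, hence is of order n p^n
p = 0.5
errors = {n: exact_error(p, n) for n in (10, 20, 30)}
```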
###### Remark 6.
The reader may notice a difference between Theorem 1 and Theorem 4.1 of [AV09]
concerning the error term for small $t$’s for return time approximation.
Indeed, their statement is incorrect as shown by the preceding example. We
recall that the error term for small $t$’s plays a fundamental role when
studying the return time spectrum, as was done by [ACG19]. Theorem 1, besides
correcting [AV09], is also fundamental to correcting [ACG19], which was based
on the exponential approximations given by [AV09].
### 3.3 Uniform positivity of the potential well
Theorem 1 says that the potential well can be used as scaling parameter to
obtain approximations for recurrence times around _any_ point. We now ask
about the possible values of this scaling parameter in its range $[0,1]$.
Abadi and Saussol [AS16], in the most general case known up to now, proved
that for $\alpha$-mixing processes with at least polynomially decaying
$\alpha$ rates, the distributions of hitting and return times converge,
_almost surely_, to an exponential with parameter $1$. We refer to [Bra05]
for the precise definition of $\alpha$-mixing, but the only important point
for us is to know that summable $\phi$-mixing implies $\alpha$-mixing with at
least polynomially decaying $\alpha$ rates. This fact, combined with Theorem
1, proves, indirectly, that for summable $\phi$-mixing processes the potential
well converges almost surely to $1$, since both theorems must agree on the
limiting distribution under these conditions. Theorem 2 item (a) below states
that the same holds for $\phi$-mixing without any assumption on the rate.
On the other hand, for renewal processes with a certain tail distribution of
the inter-arrival times, Abadi, Cardeño and Gallo [ACG15] proved that for
the point $x=(00000...)$, the sequence of potential wells $\rho(x_{0}^{n-1})$
converges to $0$. In this case, the scaling parameter plays a predominant
role, indicating a drastic change in the scale of occurrence of the events.
For instance, in this case the mean hitting time is much larger than the mean
return time
$\mathbb{E}(T_{x_{0}^{n-1}})\approx\frac{1}{\rho(x_{0}^{n-1})\mu(x_{0}^{n-1})}\gg\frac{1}{\mu(x_{0}^{n-1})}=\mathbb{E}_{x_{0}^{n-1}}(T_{x_{0}^{n-1}}).$
Such renewal processes are $\beta$-mixing (see [Bra05] for the definition).
Theorem 2 item (b) below states that this cannot happen for $\psi$-mixing
processes or summable $\phi$-mixing processes.
###### Theorem 2.
Let $\mu$ be a stationary $\phi$-mixing measure. Then
* (a)
$\rho(x_{0}^{n-1})\stackrel{{\scriptstyle n}}{{\longrightarrow}}1$, almost
surely.
* (b)
If $\mu$ is $\psi$-mixing or summable $\phi$-mixing, there exists $n_{1}\geq
1$ such that:
$\inf_{n\geq
n_{1},x_{0}^{n-1}\in\mathcal{A}^{n}}\rho(x_{0}^{n-1})=\rho_{-}>0\,.$
If the alphabet $\mathcal{A}$ is finite, the set
$\\{\rho\left(A\right):A\in\mathcal{A}^{n},n<n_{1}\\}$ is finite and has a
strictly positive infimum, which implies that the infimum above can be taken
over all $n\geq 1$.
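As an illustration of item (a), the potential well can be computed exactly for an i.i.d. process with full support: the only possible return at time $\tau(A)$ forces the next $\tau(A)$ symbols to reproduce the suffix of $A$ of size $\tau(A)$, so $\rho(A)=1-\prod p(\cdot)$ over that suffix. The sketch below is an illustrative computation under this i.i.d. assumption (a ψ-mixing case), showing $\rho(x_{0}^{n-1})$ approaching $1$ along a sampled point of a Bernoulli$(1/2)$ process.

```python
import random

def potential_well_iid(A, p):
    """Exact rho(A) = 1 - mu_A(T_A = tau(A)) for an i.i.d. process with
    symbol probabilities p (a dict), assuming full support."""
    n = len(A)
    tau = next((j for j in range(1, n) if A[j:] == A[:n - j]), n)
    # T_A = tau(A) on A forces the next tau symbols to repeat the suffix of A
    prob = 1.0
    for s in A[n - tau:]:
        prob *= p[s]
    return 1.0 - prob

# rho(x_0^{n-1}) -> 1 along a typical point: sample one realization of a
# Bernoulli(1/2) process and inspect the wells of its n-cylinders
random.seed(0)
p = {"0": 0.5, "1": 0.5}
x = "".join(random.choice("01") for _ in range(200))
wells = [potential_well_iid(x[:n], p) for n in (5, 10, 20, 50)]
```

For the point $x=(b,b,\ldots)$ of Remark 5 one recovers $\rho(A_{n})=1-p$ for every $n$, which never approaches $1$; but such fully periodic strings are atypical, in line with the almost sure convergence of item (a).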
## 4 Proofs of the results
The statement of Theorem 1 covers both $\phi$- and $\psi$-mixing and both
hitting and return times. The case of return times under $\phi$-mixing was
already done by [AV09], and our proof follows their method. In particular,
some of the auxiliary results listed in the next subsection will not be
proved.
### 4.1 Preliminary results
The following lemma plays a fundamental role in Theorems 1 and 2. It was
originally proved in [Aba01] assuming summability of the function $\phi$, an
assumption which can be dropped.
###### Lemma 1.
Let $\mu$ be a $\phi$-mixing measure. Then, there exist positive constants
$C$ and $c$ such that for all $n\geq 1$ and all $A\in\mathcal{A}^{n}$, one
has:
$\mu(A)\leq Ce^{-cn}.$
###### Proof.
Let $\lambda:=\sup\\{\mu(a):a\in\mathcal{A}\\}<1$. Consider a positive
integer $k_{0}$ and for all $n\geq k_{0}$ write $n=k_{0}q+r$, with $1\leq
q\in\mathbb{N}$ and $0\leq r<k_{0}$. Suppose $A=a_{0}^{n-1}$, and apply the
$\phi$-mixing property to obtain:
$\displaystyle\mu(A)$
$\displaystyle\leq\mu\left(\bigcap_{j=0}^{q-1}\left\\{\sigma^{-jk_{0}}(a_{jk_{0}})\right\\}\right)\leq\mu\left(\bigcap_{j=0}^{q-2}\left\\{\sigma^{-jk_{0}}(a_{jk_{0}})\right\\}\right)(\phi(k_{0})+\mu(a_{(q-1)k_{0}}))$
$\displaystyle\leq\mu\left(\bigcap_{j=0}^{q-2}\left\\{\sigma^{-jk_{0}}(a_{jk_{0}})\right\\}\right)(\phi(k_{0})+\lambda).$
Iterating this argument one concludes
$\mu(A)\leq(\phi(k_{0})+\lambda)^{q}.$
Since $\phi(k)\stackrel{{\scriptstyle k}}{{\longrightarrow}}0$, there exists
$k_{0}\in\mathbb{N}$ such that $\phi(k_{0})+\lambda<1$. Thus, for $n\geq
k_{0}$, and observing that
$q=\frac{n-r}{k_{0}}>\frac{n}{k_{0}}-\frac{k_{0}-1}{k_{0}}$
$\mu(A)\leq(\phi(k_{0})+\lambda)^{-(k_{0}-1)/k_{0}}\left(\left(\phi(k_{0})+\lambda\right)^{1/k_{0}}\right)^{n}.$
This covers the case $n\geq k_{0}$. By eventually enlarging the constant $C$,
one covers the case $n<k_{0}$. This ends the proof.
∎
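The exponential decay of Lemma 1 is easy to observe numerically. The sketch below considers a two-state irreducible aperiodic Markov chain (a ψ-mixing, hence φ-mixing, process); the transition matrix and stationary vector are illustrative choices, not taken from the paper. The supremum of $\mu(A)$ over $n$-cylinders is computed by dynamic programming over most probable paths, and the observed geometric ratio plays the role of $e^{-c}$.

```python
# Illustrative two-state chain: P is the transition matrix, pi its
# stationary distribution (pi P = pi).
P = [[0.9, 0.1],
     [0.2, 0.8]]
pi = [2 / 3, 1 / 3]

def sup_cylinder_measure(n):
    """sup over n-cylinders A of mu(A) = pi(a_0) * prod P[a_i][a_{i+1}],
    via dynamic programming: v[s] = largest measure of a cylinder ending
    in state s."""
    v = list(pi)
    for _ in range(n - 1):
        v = [max(v[s] * P[s][t] for s in range(2)) for t in range(2)]
    return max(v)

# consecutive ratios stabilize at max_s P[s][s] = 0.9, i.e. mu(A) <= C e^{-cn}
# with e^{-c} = 0.9 for this chain
ratios = [sup_cylinder_measure(n + 1) / sup_cylinder_measure(n)
          for n in range(10, 15)]
```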
Under the assumption of complete grammar, we would obviously have, by
definition, $\tau(A_{n}(x))\leq n$. But since we do not assume this, we need
the following lemma, which provides upper bounds for $\tau(A)$ when $\mu$ is
$\psi$-mixing or summable $\phi$-mixing.
###### Lemma 2.
Consider $\mu$ a $\psi$-mixing or summable $\phi$-mixing measure. Then, there
exists $n_{2}\in\mathbb{N}$ such that for all $n\geq n_{2}$ and
$A\in\mathcal{A}^{n}$, we have
* •
$\tau(A)\leq 2n$, for $\psi$;
* •
$\displaystyle\tau(A)\leq-\frac{2}{\mu\left(A\right)\ln\mu\left(A\right)}+n$,
for summable $\phi$.
###### Proof.
We start with the case $\psi$. For $n$ large enough we have $\psi(n)<1$, which
implies:
$\mu\left(A\cap\sigma^{-2n}(A)\right)\geq\mu(A)^{2}(1-\psi(n))>0$
Since $\tau(A)$ is the smallest positive integer such that
$\mu\left(A\cap\sigma^{-\tau(A)}(A)\right)>0$, we must have $\tau(A)\leq 2n$.
Now consider the $\phi$-mixing case. Summability of $\phi$ ensures that for
$g$ large enough we have $\phi(g)\leq 1/(g\ln g)$. Thus
$\mu\left(A\cap\sigma^{-g-n}\left(A\right)\right)\geq\mu(A)\left(\mu\left(A\right)-\phi(g)\right)\geq\mu(A)\left(\mu\left(A\right)-\frac{1}{g\ln
g}\right).$
Take $\displaystyle g=-\frac{2}{\mu\left(A\right)\ln\mu\left(A\right)}$. The
rightmost parenthesis above becomes
$\mu\left(A\right)\left[1-\frac{1}{2}\frac{-\ln\mu\left(A\right)}{(\ln(2)-\ln\mu\left(A\right)-\ln(-\ln\mu\left(A\right)))}\right]$
which is positive for $n$ large enough. ∎
The multiplicative constant $2$ in both cases is technical and was chosen for
the simplicity of the proof. Actually, it can be replaced by any constant
strictly larger than one. An irreducible aperiodic finite state Markov chain
with some entry equal to zero shows that this constant cannot be taken equal
to one in the $\psi$-mixing case. Whether this bound is optimal for the
$\phi$-mixing case is an open question. Note that Lemmas 1 and 2 imply that
$\tau(A)\mu(A)\stackrel{{\scriptstyle n}}{{\longrightarrow}}0$ uniformly.
The remaining results of this subsection hold for $n\geq n^{\prime}$, where
$n^{\prime}=1$ for the case of $\phi$-mixing and
$\displaystyle n^{\prime}:=\inf\\{n>2g_{0}:\psi(n)<1\\}$ (4)
for the $\psi$-mixing case (see (1) for the definition of $g_{0}$).
Let us define
$M:=\psi\left(g_{0}+1\right)+1.$
###### Proposition 1.
Let $\mu$ be a $\psi$-mixing measure. Then for all $n\geq n^{\prime}$,
$A\in\mathcal{A}^{n}$ and $k\geq n$, the following inequalities hold
* (a)
$\mu_{A}({T_{A}}\in\mathcal{R}(A))\leq
M\,|\mathcal{R}(A)|\,\mu\left(A^{(n_{A}-g_{0})}\right)$
* (b)
$\mu_{A}(n\leq{T_{A}}\leq k)\leq M(k-n+1)\mu\left(A^{(n-g_{0})}\right)$.
###### Proof.
(a) We consider the case $\mathcal{R}(A)\not=\emptyset$, otherwise it is
trivial. For all $j\geq 1$ and $n\geq n^{\prime}$, one trivially has
$\\{{T_{A}}=j\\}\subset\sigma^{-j}(A)\subset\sigma^{-j-(n-(n_{A}-g_{0}))}\left(A^{(n_{A}-g_{0})}\right).$
Thus
$\displaystyle\mu_{A}({T_{A}}\in\mathcal{R}(A))\leq\mu_{A}\left(\bigcup_{j\in\mathcal{R}(A)}\sigma^{-j-(n-(n_{A}-g_{0}))}\left(A^{(n_{A}-g_{0})}\right)\right).$
Note that $A$ and the union on the right hand side in the above inequality are
separated by a gap of length $g_{0}+1$. By $\psi$-mixing, the left hand side
is bounded by
$\displaystyle(\psi(g_{0}+1)+1)\mu\left(\bigcup_{j\in\mathcal{R}(A)}\sigma^{-j-(n-(n_{A}-g_{0}))}\left(A^{(n_{A}-g_{0})}\right)\right)\leq
M\,|\mathcal{R}(A)|\,\mu\left(A^{(n_{A}-g_{0})}\right).$
(b) In a similar way to item (a)
$\displaystyle\mu_{A}(n\leq{T_{A}}\leq k)$
$\displaystyle\leq\mu_{A}\left(\bigcup_{n\leq j\leq
k}\sigma^{-j-g_{0}}\left(A^{(n-g_{0})}\right)\right)$ $\displaystyle\leq
M(k-n+1)\mu\left(A^{(n-g_{0})}\right).$
∎
For the next proposition, recall that $\epsilon$ stands either for
$\epsilon_{\psi}$ (2) or for $\epsilon_{\phi}$ (3), according to the mixing
property of the measure under consideration. Further, let us use the notation
${T_{A}}^{[i]}:=T_{A}\circ\sigma^{i}$.
###### Proposition 2.
Let $\mu$ be a $\phi$ or $\psi$-mixing measure. Then for all $n\geq
n^{\prime}$, $A\in\mathcal{A}^{n}$ and $t\geq\tau(A)$
$|\mu_{A}({T_{A}}>t)-\rho(A)\mu({T_{A}}>t)|\leq C\epsilon(A),$
where $C=4$ for $\epsilon_{\phi}$ and $C=4(M+1)$ for $\epsilon_{\psi}$.
###### Proof.
The proof for $\epsilon_{\phi}$ can be found in [AV09, Proposition 4.1 item
(b)]. We observe that the error term defined therein is
$\epsilon^{\prime}(A)=\displaystyle\inf_{1\leq w\leq
n_{A}}\left\\{(2n+\tau(A))\mu\left(A^{(w)}\right)+\phi(n_{A}-w))\right\\}\leq
2\epsilon_{\phi}(A),$
which justifies $C=4$ for this case. Here we prove the case $\epsilon_{\psi}$
in the same way. We start assuming that $t\geq\tau(A)+2n$. By the triangle
inequality
$\displaystyle|\mu_{A}({T_{A}}>t)-\rho(A)\mu({T_{A}}>t)|$
$\displaystyle\leq\left|\mu_{A}\left({T_{A}}>\tau(A);{T_{A}}^{[\tau(A)]}>t-\tau(A)\right)-\right.$
$\displaystyle\hskip
11.38092pt\left.\mu_{A}\left({T_{A}}>\tau(A);{T_{A}}^{[\tau(A)+2n]}>t-\tau(A)-2n\right)\right|$
(5)
$\displaystyle+\left|\mu_{A}\left({T_{A}}>\tau(A);{T_{A}}^{[\tau(A)+2n]}>t-\tau(A)-2n\right)-\rho(A)\mu\left({T_{A}}>t-\tau(A)-2n\right)\right|$
(6)
$\displaystyle+\left|\rho(A)\mu({T_{A}}>t-\tau(A)-2n)-\rho(A)\mu({T_{A}}>t)\right|.$
(7)
For the first modulus, by inclusion of sets we get immediately that
$\displaystyle\eqref{pr1}\,$
$\displaystyle\leq\mu_{A}\left({T_{A}}>\tau(A);{T_{A}}^{[\tau(A)]}\leq
2n\right)$ $\displaystyle\leq\mu_{A}({T_{A}}\in\mathcal{R}(A))+\mu_{A}(n\leq
T_{A}\leq\tau(A)+2n)$ $\displaystyle\leq
M|\mathcal{R}(A)|\mu\left(A^{(n_{A}-g_{0})}\right)+M(\tau(A)+n+1)\mu\left(A^{(n-g_{0})}\right)$
$\displaystyle\leq 4Mn\mu\left(A^{(n_{A}-g_{0})}\right).$
The third inequality follows from Proposition 1 and the last one follows from
Lemma 2.
By $\psi$-mixing, the modulus (6) is bounded by
$\rho(A)\mu({T_{A}}>t-\tau(A)-2n)\psi(n)\leq\psi(n).$
Note that the modulus is not needed for (7), and by inclusion we get
$\displaystyle\eqref{pr3}$
$\displaystyle\leq\rho(A)\mu\left({T_{A}}^{[t-\tau(A)-2n]}\leq\tau(A)+2n\right)$
$\displaystyle=\rho(A)\mu\left({T_{A}}\leq\tau(A)+2n\right)$
$\displaystyle\leq(2n+\tau(A))\mu(A)$ $\displaystyle\leq
4n\mu\left(A^{(n_{A}-g_{0})}\right)$
where the equality and second inequality follow from stationarity of $\mu$.
Therefore, for $t\geq\tau(A)+2n$, the sum of $(\ref{pr1})$, $(\ref{pr2})$ and
$(\ref{pr3})$ is bounded by
$4Mn\mu\left(A^{(n_{A}-g_{0})}\right)+\psi(n)+4n\mu\left(A^{(n_{A}-g_{0})}\right)\leq
4(M+1)\epsilon_{\psi}(A).$
We now consider the case where $\tau(A)\leq t<\tau(A)+2n$. We have
$\displaystyle|\mu_{A}({T_{A}}>t)-\rho(A)\mu({T_{A}}>t)|\quad$
$\displaystyle\quad\leq|\mu_{A}({T_{A}}>t)-\rho(A)|+|\rho(A)-\rho(A)\mu({T_{A}}>t)|$
$\displaystyle\quad\leq\mu_{A}(\tau(A)<{T_{A}}\leq\tau(A)+2n)+t\mu(A)$
$\displaystyle\quad\leq
M\left(|\mathcal{R}(A)|\mu\left(A^{(n_{A}-g_{0})}\right)+(\tau(A)+n+1)\mu\left(A^{(n-g_{0})}\right)\right)+(\tau(A)+2n)\mu\left(A^{(n_{A}-g_{0})}\right)$
$\displaystyle\quad\leq 4(M+1)\epsilon_{\psi}(A).$
The last but one inequality follows again by Proposition 1. The other
inequalities are straightforward. This ends the proof. ∎
The next lemma establishes upper bounds for the tail distribution at the scale
given by Kac’s Lemma, namely $1/\mu(A)$. For technical reasons we actually
choose the scale
$f_{A}:=1/(2\mu(A)).$
###### Lemma 3.
Let $\mu$ be a stationary measure. Then for all $n\geq 1$,
$A\in\mathcal{A}^{n}$, positive integer $k$ and
$B\in\mathcal{F}_{kf_{A}}^{\infty}$ the following inequalities hold
* (a)
$\mu({T_{A}}>kf_{A};B)\leq((\psi(n)+1)\mu({T_{A}}>f_{A}-2n))^{k}\mu(B)$,
* (b)
$\mu({T_{A}}>kf_{A};B)\leq\left(\mu\left({T_{A}}>f_{A}-2n\right)+\phi(n)\right)^{k}(\mu(B)+\phi(n))$,
* (c)
$\mu_{A}({T_{A}}>kf_{A};B)\leq(\psi(n)+1)^{k}\mu({T_{A}}>f_{A}-2n)^{k-1}\mu(B)$.
###### Proof.
We start by observing that
$\\{{T_{A}}>kf_{A}\\}\subset\\{{T_{A}}>kf_{A}-2n\\}\in\mathcal{F}_{0}^{kf_{A}-n}$.
Thus, applying the $\psi$-mixing property we get
$\displaystyle\mu({T_{A}}>kf_{A};B)\leq\mu({T_{A}}>kf_{A}-2n;B)\leq(\psi(n)+1)\mu(T_{A}>kf_{A}-2n)\mu(B)$
(8)
Furthermore
$\\{{T_{A}}>kf_{A}-2n\\}=\left\\{{T_{A}}>(k-1)f_{A};{T_{A}}^{[(k-1)f_{A}]}>f_{A}-2n\right\\}.$
Now one can take in particular
$B=\left\\{{T_{A}}^{[(k-1)f_{A}]}>f_{A}-2n\right\\}\in\mathcal{F}_{(k-1)f_{A}}^{\infty}$,
and then apply (8) with $k-1$ instead of $k$ to get
$\displaystyle\mu({T_{A}}>kf_{A}-2n)$
$\displaystyle\leq(\psi(n)+1)\mu\left({T_{A}}>(k-1)f_{A}-2n\right)\mu\left({T_{A}}^{[(k-1)f_{A}]}>f_{A}-2n\right)$
$\displaystyle=(\psi(n)+1)\mu\left({T_{A}}>(k-1)f_{A}-2n\right)\mu\left({T_{A}}>f_{A}-2n\right).$
The equality follows by stationarity. Iterating this argument one concludes
that
$\mu({T_{A}}>kf_{A}-2n)\leq(\psi(n)+1)^{k-1}\mu(T_{A}>f_{A}-2n)^{k}\;.$ (9)
Applying the resulting inequality in (8), we get statement (a). In a
similar way, $\phi$-mixing gives
$\displaystyle\mu(T_{A}>kf_{A};B)\leq\mu(T_{A}>kf_{A}-2n)(\mu(B)+\phi(n))$
And thus
$\displaystyle\mu({T_{A}}>kf_{A}-2n)$
$\displaystyle\leq\mu({T_{A}}>(k-1)f_{A}-2n)\left(\mu\left({T_{A}}^{[(k-1)f_{A}]}>f_{A}-2n\right)+\phi(n)\right)$
$\displaystyle\leq\left(\mu\left({T_{A}}>f_{A}-2n\right)+\phi(n)\right)^{k}.$
(10)
which ends the proof of (b).
The proof for (c) follows the same lines as item (a), observing that for
$A,B\in\mathcal{F}_{0}^{i}$ and $C\in\mathcal{F}_{i+n}^{\infty}$,
$\psi$-mixing property implies $\mu_{A}(B;C)\leq\mu_{A}(B)\mu(C)(\psi(n)+1)$.
∎
The next proposition is the key to the proof of Theorem 1, and the idea is the
following. We work under the time scale $f_{A}$. When $t=kf_{A},\
k\in\mathbb{N}$, we simply cut $t$ into $k$ pieces of equal size
$f_{A}$. Then, the case of a general $t=kf_{A}+r$, $r<f_{A}$, is approximated
by its “integer part” $kf_{A}$. Technically, this is done in (b) and (a),
respectively.
###### Proposition 3.
Let $\mu$ be a $\phi$ or $\psi$-mixing measure. Then for all $n\geq
n^{\prime}$, $A\in\mathcal{A}^{n}$ and positive integer $k$, the following
inequalities hold:
(a) For $0\leq r\leq f_{A}$
* 1.
$|\mu({T_{A}}>kf_{A}+r)-\mu({T_{A}}>kf_{A})\mu({T_{A}}>r)|\leq
C^{\prime}(\psi(n)+1)^{k-1}\mu({T_{A}}>f_{A}-2n)^{k}\epsilon_{\psi}(A)$
* 2.
$|\mu({T_{A}}>kf_{A}+r)-\mu({T_{A}}>kf_{A})\mu({T_{A}}>r)|\leq
C^{\prime}(\mu({T_{A}}>f_{A}-2n)+\phi(n))^{k}\epsilon_{\phi}(A)$
* 3.
$|\mu_{A}({T_{A}}>kf_{A}+r)-\mu_{A}({T_{A}}>kf_{A})\mu({T_{A}}>r)|\leq
C^{\prime}((\psi(n)+1)\mu({T_{A}}>f_{A}-2n))^{k-1}\epsilon_{\psi}(A)$.
(b) For $k\geq 1$
* 1.
$\left|\mu({T_{A}}>kf_{A})-\mu({T_{A}}>f_{A})^{k}\right|\leq
C^{\prime}\epsilon_{\psi}(A)(k-1)(\psi(n)+1)^{k-2}\mu({T_{A}}>f_{A}-2n)^{k-1}$
* 2.
$\left|\mu({T_{A}}>kf_{A})-\mu({T_{A}}>f_{A})^{k}\right|\leq
C^{\prime}\epsilon_{\phi}(A)(k-1)(\mu({T_{A}}>f_{A}-2n)+\phi(n))^{k-1}$
* 3.
$\left|\mu_{A}({T_{A}}>kf_{A})-\mu_{A}(T_{A}>f_{A})\mu({T_{A}}>f_{A})^{k-1}\right|\leq
C^{\prime}\epsilon_{\psi}(A)(k-1)((\psi(n)+1)\mu({T_{A}}>f_{A}-2n))^{k-2}$
where $C^{\prime}=2(M+1)$ for the cases involving $\psi$ and $C^{\prime}=4$
for $\phi$.
###### Proof.
We will prove items (a)-1 and (a)-2 together. Initially, consider the case in
which $r<2n$. In this case, for all $n\geq n^{\prime}$ we have
$\displaystyle|\mu({T_{A}}>kf_{A}+r)-\mu({T_{A}}>kf_{A})\mu({T_{A}}>r)|$
$\displaystyle\leq\left|\mu\left({T_{A}}>kf_{A},{T_{A}}^{[kf_{A}]}>r\right)-\mu({T_{A}}>kf_{A})\right|+\mu({T_{A}}>kf_{A})|1-\mu({T_{A}}>r)|$
$\displaystyle\leq\mu\left({T_{A}}>kf_{A},{T_{A}}^{[kf_{A}]}\leq
r\right)+\mu({T_{A}}>kf_{A})\mu({T_{A}}\leq r).$ (11)
By Lemma 3-(a) and (9), the last sum is bounded by
$\displaystyle((\psi(n)+1)\mu(T_{A}>f_{A}-2n))^{k}\mu\left({T_{A}}^{[kf_{A}]}\leq
r\right)+\mu({T_{A}}>kf_{A}-2n)r\mu(A)$
$\displaystyle\leq((\psi(n)+1)\mu(T_{A}>f_{A}-2n))^{k}\mu\left({T_{A}}\leq
r\right)+(\psi(n)+1)^{k-1}\mu(T_{A}>f_{A}-2n)^{k}r\mu(A)$
$\displaystyle\leq(\psi(n)+1)^{k-1}\mu(T_{A}>f_{A}-2n)^{k}r\mu(A)(M+1)$
$\displaystyle\leq
2(M+1)(\psi(n)+1)^{k-1}\mu(T_{A}>f_{A}-2n)^{k}\epsilon_{\psi}(A)$ (12)
which gives us (a)-1. To get (a)-2 for $r<2n$, we apply Lemma 3-(b) and (10)
in a similar way. Thus, (11) is bounded by
$\displaystyle(\mu(T_{A}>f_{A}-2n)+\phi(n))^{k}(\mu(T_{A}\leq
r)+\phi(n))+(\mu(T_{A}>f_{A}-2n)+\phi(n))^{k}\mu(T_{A}\leq r)$
$\displaystyle\leq 4(\mu(T_{A}>f_{A}-2n)+\phi(n))^{k}\epsilon_{\phi}(A).$
We now consider the case $r\geq 2n$. The triangle inequality gives us
$\displaystyle|\mu({T_{A}}>kf_{A}+r)-\mu({T_{A}}>kf_{A})\mu({T_{A}}>r)|$
$\displaystyle\leq\left|\mu\left({T_{A}}>kf_{A};{T_{A}}^{[kf_{A}]}>r\right)-\mu\left({T_{A}}>kf_{A};{T_{A}}^{[kf_{A}+2n]}>r-2n\right)\right|$
(13)
$\displaystyle+\left|\mu\left({T_{A}}>kf_{A};{T_{A}}^{[kf_{A}+2n]}>r-2n\right)-\mu\left({T_{A}}>kf_{A}\right)\mu\left({T_{A}}^{[kf_{A}+2n]}>r-2n\right)\right|$
(14)
$\displaystyle+\left|\mu\left({T_{A}}>kf_{A}\right)\mu\left({T_{A}}^{[kf_{A}+2n]}>r-2n\right)-\mu\left({T_{A}}>kf_{A}\right)\mu\left({T_{A}}>r\right)\right|.$
(15)
We proceed as in (5) and use Lemma 3-(a) to get
$\displaystyle\eqref{pr23}$
$\displaystyle\leq\mu\left({T_{A}}>kf_{A};{T_{A}}^{[kf_{A}]}\leq 2n\right)$
$\displaystyle\leq((\psi(n)+1)\mu(T_{A}>f_{A}-2n))^{k}\mu(T_{A}\leq 2n)$
$\displaystyle\leq 2n\mu(A)((\psi(n)+1)\mu(T_{A}>f_{A}-2n))^{k}.$ (16)
For the case $\phi$, we apply Lemma 3-(b) and get
$\displaystyle\eqref{pr23}$
$\displaystyle\leq(\mu(T_{A}>f_{A}-2n)+\phi(n))^{k}(2n\mu(A)+\phi(n)).$ (17)
By $\psi$-mixing and (9)
$\displaystyle\eqref{pr24}$ $\displaystyle\leq\mu({T_{A}}>kf_{A}-2n)\psi(n)$
$\displaystyle\leq(\psi(n)+1)^{k-1}\mu(T_{A}>f_{A}-2n)^{k}\psi(n)$ (18)
And applying $\phi$-mixing and (10)
$\displaystyle\eqref{pr24}$
$\displaystyle\leq(\mu({T_{A}}>f_{A}-2n)+\phi(n))^{k}\phi(n)$ (19)
Finally, using shift-invariance and the same arguments as above
$\displaystyle\eqref{pr25}$
$\displaystyle=\mu({T_{A}}>kf_{A})\mu\left(r-2n<{T_{A}}\leq r\right)$
$\displaystyle\leq 2n\mu(A)\mu({T_{A}}>kf_{A}-2n).$ (20)
Therefore, (9), (16),(18) and (20) give us
$\displaystyle\eqref{pr23}+\eqref{pr24}+\eqref{pr25}\leq
2(M+1)(\psi(n)+1)^{k-1}\mu(T_{A}>f_{A}-2n)^{k}\epsilon_{\psi}(A)$
and from (10), (17), (19) and (20) we get
$\displaystyle\eqref{pr23}+\eqref{pr24}+\eqref{pr25}\leq
4(\mu(T_{A}>f_{A}-2n)+\phi(n))^{k}\epsilon_{\phi}(A)$
which ends the proof for (a)-1 and (a)-2.
For the proof of the (a)-3, we write a similar triangle inequality as above:
$\displaystyle|\mu_{A}({T_{A}}>kf_{A}+r)-\mu_{A}({T_{A}}>kf_{A})\mu({T_{A}}>r)|$
$\displaystyle\leq\left|\mu_{A}\left({T_{A}}>kf_{A};{T_{A}}^{[kf_{A}]}>r\right)-\mu_{A}\left({T_{A}}>kf_{A};{T_{A}}^{[kf_{A}+2n]}>r-2n\right)\right|$
$\displaystyle+\left|\mu_{A}\left({T_{A}}>kf_{A};{T_{A}}^{[kf_{A}+2n]}>r-2n\right)-\mu_{A}\left({T_{A}}>kf_{A}\right)\mu\left({T_{A}}^{[kf_{A}+2n]}>r-2n\right)\right|$
$\displaystyle+\mu_{A}\left({T_{A}}>kf_{A}\right)\left|\mu\left({T_{A}}^{[kf_{A}+2n]}>r-2n\right)-\mu\left({T_{A}}>r\right)\right|.$
Then, we follow the same as we did for (a)-1, but applying item (c) of Lemma 3
and using the $\psi$-mixing property:
$|\mu_{A}(B;C)-\mu_{A}(B)\mu(C)|\leq\mu_{A}(B)\mu(C)\psi(n)$
where $A,B\in\mathcal{F}_{0}^{i}$ and $C\in\mathcal{F}_{i+n}^{\infty}$. For
the case $r<2n$, we use
$\displaystyle|\mu_{A}({T_{A}}>kf_{A}+r)-\mu_{A}({T_{A}}>kf_{A})\mu({T_{A}}>r)|$
$\displaystyle\leq\left|\mu_{A}\left({T_{A}}>kf_{A},{T_{A}}^{[kf_{A}]}>r\right)-\mu_{A}({T_{A}}>kf_{A})\right|+\mu_{A}({T_{A}}>kf_{A})|1-\mu({T_{A}}>r)|$
and proceed as we did in (12), applying again Lemma 3-(c). This ends item (a).
We now come to the proof of items (b)-1 and (b)-2. For $k=1$ we have an
equality. For $k\geq 2$ we get
$\displaystyle\left|\mu({T_{A}}>kf_{A})-\mu({T_{A}}>f_{A})^{k}\right|$
$\displaystyle=\left|\sum_{j=2}^{k}\left(\mu({T_{A}}>jf_{A})-\mu({T_{A}}>(j-1)f_{A})\mu({T_{A}}>f_{A})\right)\mu({T_{A}}>f_{A})^{k-j}\right|$
$\displaystyle\leq\sum_{j=2}^{k}\left|\mu({T_{A}}>jf_{A})-\mu({T_{A}}>(j-1)f_{A})\mu({T_{A}}>f_{A})\right|\mu({T_{A}}>f_{A})^{k-j}.$
(21)
We put $r=f_{A}$ in item (a)-1 to obtain (b)-1:
$\displaystyle\eqref{pr35}$ $\displaystyle\leq
2(M+1)\epsilon_{\psi}(A)\sum_{j=2}^{k}(\psi(n)+1)^{j-2}\mu({T_{A}}>f_{A}-2n)^{j-1}\mu({T_{A}}>f_{A})^{k-j}$
$\displaystyle\leq
2(M+1)\epsilon_{\psi}(A)(k-1)(\psi(n)+1)^{k-2}\mu({T_{A}}>f_{A}-2n)^{k-1}.$
Furthermore, we get the inequality (b)-2, under $\phi$-mixing, proceeding
similarly as above
$\displaystyle\eqref{pr35}$ $\displaystyle\leq
4\epsilon_{\phi}(A)\sum_{j=2}^{k}(\mu({T_{A}}>f_{A}-2n)+\phi(n))^{j-1}(\mu({T_{A}}>f_{A}-2n)+\phi(n))^{k-j}$
$\displaystyle=4\epsilon_{\phi}(A)(k-1)(\mu({T_{A}}>f_{A}-2n)+\phi(n))^{k-1}.$
Finally, we prove (b)-3 by applying (a)-3 as follows
$\displaystyle\left|\mu_{A}({T_{A}}>kf_{A})-\mu_{A}({T_{A}}>f_{A})\mu(T_{A}>f_{A})^{k-1}\right|$
$\displaystyle\leq\sum_{j=2}^{k}\left|\mu_{A}({T_{A}}>jf_{A})-\mu_{A}({T_{A}}>(j-1)f_{A})\mu({T_{A}}>f_{A})\right|\mu({T_{A}}>f_{A})^{k-j}$
$\displaystyle\leq
2(M+1)\epsilon_{\psi}(A)\sum_{j=2}^{k}((\psi(n)+1)\mu(T_{A}>f_{A}-2n))^{j-2}\mu(T_{A}>f_{A})^{k-j}$
$\displaystyle\leq
2(M+1)\epsilon_{\psi}(A)(k-1)((\psi(n)+1)\mu(T_{A}>f_{A}-2n))^{k-2}.$
∎
The next two lemmas are classical results and are stated without proof. The
first one establishes the reversibility of certain sets for stationary
measures, and the second one is a discrete version of the Mean Value Theorem
which follows by a straightforward computation.
###### Lemma 4.
Let $\mu$ be shift-invariant. For all positive $i\in\mathbb{N}$, $n\geq 1$ and
$A\in\mathcal{A}^{n}$ we have
$\mu({T_{A}}=i)=\mu({T_{A}}>i-1;A)$
###### Lemma 5.
Given $a_{1},...,a_{n},b_{1},...,b_{n}$ real numbers such that $0\leq
a_{i},b_{i}\leq 1$, the following inequality holds
$\displaystyle\left|\prod_{i=1}^{n}a_{i}-\prod_{i=1}^{n}b_{i}\right|\leq\sum_{i=1}^{n}\left|a_{i}-b_{i}\right|\left(\max_{1\leq
i\leq
n}\\{a_{i},b_{i}\\}\right)^{n-1}\leq\sum_{i=1}^{n}\left|a_{i}-b_{i}\right|.$
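Though elementary, Lemma 5 is used repeatedly in the proof of Theorem 1, so a quick randomized sanity check of its weaker bound may be worth recording (purely illustrative, not part of the argument):

```python
import random

def products_close(a, b, tol=1e-12):
    """Check |prod(a) - prod(b)| <= sum |a_i - b_i| for numbers in [0, 1]:
    the weaker of the two bounds in Lemma 5 (tol absorbs rounding)."""
    prod_a = prod_b = 1.0
    for x in a:
        prod_a *= x
    for y in b:
        prod_b *= y
    lhs = abs(prod_a - prod_b)
    rhs = sum(abs(x - y) for x, y in zip(a, b))
    return lhs <= rhs + tol

random.seed(1)
ok = all(products_close([random.random() for _ in range(8)],
                        [random.random() for _ in range(8)])
         for _ in range(1000))
```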
### 4.2 Proof of Theorem 1
Theorem 1 contains 8 statements, each statement corresponding to a choice of
* •
recurrence time: hitting or return,
* •
mixing property: $\psi$ or $\phi$,
* •
amplitude of $t$: smaller or larger than $f_{A}$.
Recall the definition of $n^{\prime}$ in (4). The proof of Theorem 1 holds for
all $n\geq n_{0}$, where $n_{0}$ is explicitly given by
$\displaystyle n_{0}:=\inf\left\\{m\geq
n^{\prime};\sup_{A\in\mathcal{A}^{n}}\mu(A)\tau(A)<1/2,\;\forall n\geq
m\right\\}$ (22)
which is finite since
$\sup_{A\in\mathcal{A}^{n}}\mu(A)\tau(A)\stackrel{{\scriptstyle
n}}{{\longrightarrow}}0$. Then, in particular, we have $\tau(A)<f_{A}$ for all
$n\geq n_{0}$ and $A\in\mathcal{A}^{n}$.
#### 4.2.1 Proofs of the statements for small $t$’s
Here we assume that $1\leq t\leq f_{A}:=[2\mu(A)]^{-1}$.
###### Proof of hitting time, $\phi$ and $\psi$ together.
Recall that $\epsilon(A)$ denotes $\epsilon_{\phi}(A)$ or
$\epsilon_{\psi}(A)$, depending on whether the measure is $\phi$ or
$\psi$-mixing. For positive $i\in\mathbb{N}$, define
$p_{i}=\frac{\mu_{A}({T_{A}}>i-1)}{\mu({T_{A}}>i-1)}.$
Then
$\displaystyle\mu({T_{A}}>t)$
$\displaystyle=\prod_{i=1}^{t}\frac{\mu({T_{A}}>i)}{\mu({T_{A}}>i-1)}=\prod_{i=1}^{t}\left(1-\mu({T_{A}}=i|{T_{A}}>i-1)\right)$
$\displaystyle=\prod_{i=1}^{t}\left(1-\mu(\sigma^{-i}(A)|{T_{A}}>i-1)\right)=\prod_{i=1}^{t}\left(1-\mu(A)p_{i}\right)$
(23)
where we used Lemma 4 in the last equality.
Similarly, for $\tau(A)\leq t\leq f_{A}$, we have
$\displaystyle\mu({T_{A}}>t)=\mu({T_{A}}>\tau(A))\prod_{i=\tau(A)+1}^{t}\left(1-\mu(A)p_{i}\right).$
(24)
We apply (23) and Lemma 5 to obtain
$\displaystyle\quad\left|\mu({T_{A}}>t)-e^{-\rho(A)\mu(A)t}\right|$
$\displaystyle=\left|\prod_{i=1}^{t}\left(1-\mu(A)p_{i}\right)-\prod_{i=1}^{t}e^{-\rho(A)\mu(A)}\right|$
$\displaystyle\leq\left|\prod_{i=1}^{\tau(A)}\left(1-\mu(A)p_{i}\right)-\prod_{i=1}^{\tau(A)}e^{-\rho(A)\mu(A)}\right|+\left|\prod_{i=\tau(A)+1}^{t}\left(1-\mu(A)p_{i}\right)-\prod_{i=\tau(A)+1}^{t}e^{-\rho(A)\mu(A)}\right|$
$\displaystyle\leq\left|\mu({T_{A}}>\tau(A))-e^{-\rho(A)\mu(A)\tau(A)}\right|+\sum_{i=\tau(A)+1}^{t}\left|1-\mu(A)p_{i}-e^{-\rho(A)\mu(A)}\right|.$
(25)
Applying the inequality $\left|1-e^{-x}\right|\leq x$ for $x\geq 0$, we have
$\displaystyle\left|\mu({T_{A}}>\tau(A))-e^{-\rho(A)\mu(A)\tau(A)}\right|$
$\displaystyle\leq\left|\mu({T_{A}}>\tau(A))-1\right|+\left|1-e^{-\rho(A)\mu(A)\tau(A)}\right|$
$\displaystyle\leq 2\tau(A)\mu(A).$ (26)
On the other hand, by the triangle inequality
$\displaystyle\left|1-p_{i}\mu(A)-e^{-\rho(A)\mu(A)}\right|$
$\displaystyle\leq\left|p_{i}-\rho(A)\right|\mu(A)+\left|1-\rho(A)\mu(A)-e^{-\rho(A)\mu(A)}\right|.$
(27)
Since $|1-x-e^{-x}|\leq\frac{x^{2}}{2}$ for all $0\leq x\leq 1$, by setting
$x=\rho(A)\mu(A)$ we get
$\displaystyle\left|1-\rho(A)\mu(A)-e^{-\rho(A)\mu(A)}\right|\leq\frac{\rho(A)^{2}\mu(A)^{2}}{2}\leq\frac{\epsilon(A)\mu(A)}{2}.$
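Both elementary bounds used above, $|1-e^{-x}|\leq x$ for $x\geq 0$ and $|1-x-e^{-x}|\leq x^{2}/2$ on $[0,1]$, are easy to confirm numerically; the grid check below is only a sanity test, not part of the argument.

```python
# Sanity check of the two calculus inequalities used in the proof:
#   |1 - e^{-x}| <= x          for x >= 0
#   |1 - x - e^{-x}| <= x^2/2  for 0 <= x <= 1
import math

for k in range(1001):
    x = k / 1000                                   # grid point in [0, 1]
    assert abs(1 - math.exp(-x)) <= x + 1e-12
    assert abs(1 - x - math.exp(-x)) <= x * x / 2 + 1e-12
print("both inequalities hold on the grid")
```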
Furthermore, for $\tau(A)+1\leq i\leq f_{A}+1$, Proposition 2 gives us
$\displaystyle|p_{i}-\rho(A)|=\left|\frac{\mu_{A}({T_{A}}>i-1)}{\mu({T_{A}}>i-1)}-\rho(A)\right|\leq\frac{C\epsilon(A)}{\mu({T_{A}}>i-1)}\leq
2C\epsilon(A),$
where, for the last inequality, we used
$\displaystyle\mu({T_{A}}>i-1)=1-\mu({T_{A}}\leq i-1)\geq 1-(i-1)\mu(A)\geq
1-f_{A}\mu(A)=\frac{1}{2}.$
Thus, applying (27) we obtain for $\tau(A)+1\leq i\leq f_{A}+1$
$\displaystyle\left|1-p_{i}\mu(A)-e^{-\rho(A)\mu(A)}\right|\leq\left(2C+1/2\right)\epsilon(A)\mu(A).$
(28)
Therefore, (25), (26) and (28) give us
$\displaystyle\left|\mu({T_{A}}>t)-e^{-\rho(A)\mu(A)t}\right|$
$\displaystyle\leq
2\tau(A)\mu(A)+\left(2C+1/2\right)(t-\tau(A))\epsilon(A)\mu(A)$
$\displaystyle\leq\left(2C+1/2\right)[\tau(A)\mu(A)+t\mu(A)\epsilon(A)]$ (29)
which concludes the statement of Theorem 1 for hitting time at small $t$’s
(with either $\phi$ or $\psi$).
∎
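For a concrete illustration of this bound (an assumption of this sketch, not an example from the paper), take an i.i.d. Bernoulli($p$) source, which is $\psi$-mixing with $\psi\equiv 0$. For the 1-cylinder $A=\\{x_{0}=1\\}$ one has $\mu(A)=p$, $\tau(A)=1$, the exact tail $\mu({T_{A}}>t)=(1-p)^{t}$, and $\rho(A)=\mu_{A}({T_{A}}>1)=1-p$, so the exponential approximation can be compared with the exact tail directly.

```python
# Compare the exact hitting-time tail of an i.i.d. Bernoulli(p) source with
# the approximation exp(-rho(A) mu(A) t) from Theorem 1 (illustrative only).
import math

p = 0.01                    # mu(A), a rare one-symbol cylinder
rho = 1 - p                 # rho(A) = mu_A(T_A > 1) in the i.i.d. case
max_err = max(
    abs((1 - p) ** t - math.exp(-rho * p * t))
    for t in range(1, 2001)
)
print(f"max_t |mu(T_A > t) - exp(-rho(A) mu(A) t)| = {max_err:.4f}")
assert max_err < 0.02       # the two tails agree up to a small error
```

Shrinking $p$ makes the maximal discrepancy decay roughly linearly in $p$, consistent with an error term controlled by $\epsilon(A)$.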
###### Proof for return time, $\phi$ and $\psi$ together.
By definition we have $\mu_{A}(T_{A}>t)=p_{t+1}\mu(T_{A}>t)$. Then, we use
again the triangle inequality to write
$\displaystyle\quad\left|\mu_{A}({T_{A}}>t)-\rho(A)e^{-\rho(A)\mu(A)(t-\tau(A))}\right|$
$\displaystyle\leq\mu({T_{A}}>t)\left|p_{t+1}-\rho(A)\right|+\rho(A)\left|\mu({T_{A}}>t)-e^{-\rho(A)\mu(A)(t-\tau(A))}\right|.$
(30)
As we saw before, the first modulus above is bounded by $2C\epsilon(A)$. On
the other hand, applying (24) we can write
$\displaystyle\left|\mu({T_{A}}>t)-e^{-\rho(A)\mu(A)(t-\tau(A))}\right|=\left|\mu({T_{A}}>\tau(A))\prod_{i=\tau(A)+1}^{t}(1-\mu(A)p_{i})-\prod_{i=\tau(A)+1}^{t}e^{-\rho(A)\mu(A)}\right|.$
This is bounded, applying Lemma 5, by
$\displaystyle\quad\;|\mu({T_{A}}>\tau(A))-1|+\left|\prod_{i=\tau(A)+1}^{t}(1-\mu(A)p_{i})-\prod_{i=\tau(A)+1}^{t}e^{-\rho(A)\mu(A)}\right|$
$\displaystyle\leq\tau(A)\mu(A)+\left(2C+1/2\right)t\mu(A)\epsilon(A)$
where the last inequality follows from (25) and (28). Finally, notice that
$t\mu(A)\leq f_{A}\mu(A)=1/2$ and $\tau(A)\mu(A)\leq 2\epsilon(A)$ (use Lemma
2 for $\psi$). Therefore, we obtain from (30)
$\displaystyle\left|\mu_{A}({T_{A}}>t)-\rho(A)e^{-\rho(A)\mu(A)(t-\tau(A))}\right|\leq\left(3C+9/4\right)\epsilon(A)$
(31)
This concludes the statement of Theorem 1 for return time at small $t$’s (with
either $\phi$ or $\psi$). ∎
#### 4.2.2 Proof of the statements for large $t$’s
The proof for return time for $t>f_{A}$ is done in [AV09] under $\phi$-mixing,
a finite alphabet, and a complete grammar. The proof still holds if one only
assumes a countable alphabet and an incomplete grammar (recall Remark 2 for
the uniform convergence to zero of the error term $\epsilon_{\phi}$). Thus, we
focus on hitting time under each mixing assumption, and on return time only
under $\psi$-mixing.
###### Proof of Theorem 1 for hitting times, for $t>f_{A}$.
Write $t=kf_{A}+r$ with integer $k\geq 1$ and $0\leq r<f_{A}$. Thus, we have
$\displaystyle\left|\mu({T_{A}}>t)-e^{-\rho(A)\mu(A)t}\right|$
$\displaystyle\leq\left|\mu({T_{A}}>kf_{A}+r)-\mu({T_{A}}>kf_{A})\mu({T_{A}}>r)\right|$
(32)
$\displaystyle\quad+\left|\mu({T_{A}}>kf_{A})-\mu({T_{A}}>f_{A})^{k}\right|\mu({T_{A}}>r)$
(33)
$\displaystyle\quad+\left|\mu({T_{A}}>f_{A})^{k}-e^{-\rho(A)\frac{k}{2}}\right|\mu({T_{A}}>r)$
(34)
$\displaystyle\quad+\left|e^{-\rho(A)\frac{k}{2}}\mu({T_{A}}>r)-e^{-\rho(A)\mu(A)t}\right|.$
(35)
In order to get an upper bound for the sum of (32) and (33), we analyse the
$\psi$ and $\phi$ cases separately, starting with $\psi$-mixing. Applying
items (a)-1 and (b)-1 of Proposition 3, that sum is bounded by
$\displaystyle\leq
C^{\prime}\epsilon_{\psi}(A)(\psi(n)+1)^{k-1}\mu({T_{A}}>f_{A}-2n)^{k}\left(1+(k-1)((\psi(n)+1)\mu({T_{A}}>f_{A}-2n))^{-1}\right)$
$\displaystyle\leq
2(M+1)\epsilon_{\psi}(A)\left((\psi(n)+1)\mu({T_{A}}>f_{A}-2n)\right)^{k}2k$
$\displaystyle\leq
8(M+1)\epsilon_{\psi}(A)\mu(A)t\left((\psi(n)+1)\mu({T_{A}}>f_{A}-2n)\right)^{k}.$
(36)
where the last two inequalities are justified by
$\mu({T_{A}}>f_{A}-2n)^{-1}\leq\mu({T_{A}}>f_{A})^{-1}\leq 2$ and $k\leq
2\mu(A)t$.
On the other hand, applying (29) with $t=f_{A}-2n$ we get
$\displaystyle\left|\mu({T_{A}}>f_{A}-2n)-e^{-\rho(A)\mu(A)(f_{A}-2n)}\right|$
$\displaystyle=\left|\mu({T_{A}}>f_{A}-2n)-e^{-\frac{\rho(A)}{2}+2n\rho(A)\mu(A)}\right|$
$\displaystyle\leq\left(2C+1/2\right)(\tau(A)\mu(A)+(f_{A}-2n)\mu(A)\epsilon(A))$
$\displaystyle\leq\left(5C+5/4\right)\epsilon(A)$
where we use $\tau(A)\mu(A)\leq 2\epsilon(A)$.
Furthermore, by the Mean Value Theorem (MVT)
$\displaystyle\left|e^{-\frac{\rho(A)}{2}+2n\rho(A)\mu(A)}-e^{-\frac{\rho(A)}{2}}\right|$
$\displaystyle\leq 2n\rho(A)\mu(A)e^{-\frac{\rho(A)}{2}+2n\rho(A)\mu(A)}$
$\displaystyle\leq 2n\mu(A)e^{2n\mu(A)}\leq\frac{11}{2}n\mu(A)$
since for $n\geq n_{0}$ we have $2n\mu(A)\leq 2\sup\mu(A)\tau(A)\leq 1$.
Thus, it follows that
$\displaystyle\quad\left|(\psi(n)+1)\mu({T_{A}}>f_{A}-2n)-e^{-\frac{\rho(A)}{2}}\right|$
$\displaystyle\leq\psi(n)+\left|\mu({T_{A}}>f_{A}-2n)-e^{-\frac{\rho(A)}{2}+2n\rho(A)\mu(A)}\right|+\left|e^{-\frac{\rho(A)}{2}+2n\rho(A)\mu(A)}-e^{-\frac{\rho(A)}{2}}\right|$
$\displaystyle\leq\left(5C+27/4\right)\epsilon(A).$
Therefore
$\left((\psi(n)+1)\mu({T_{A}}>f_{A}-2n)\right)^{k}\leq\left(e^{-\frac{\rho(A)}{2}}+\left(5C+27/4\right)\epsilon(A)\right)^{k}.$
Since $e^{x}-1\geq x\;\forall x\in\mathbb{R}$, setting
$K=\left(5C+27/4\right)e^{1/2}$ we get
$\displaystyle\left(e^{K\epsilon(A)}-1\right)\geq
K\epsilon(A)\geq\left(5C+27/4\right)\epsilon(A)e^{\frac{\rho(A)}{2}}$
$\displaystyle\Longrightarrow$
$\displaystyle\;e^{-\frac{\rho(A)}{2}}\left(e^{K\epsilon(A)}-1\right)\geq\left(5C+27/4\right)\epsilon(A)$
$\displaystyle\Longrightarrow$
$\displaystyle\;e^{-\frac{\rho(A)}{2}+K\epsilon(A)}\geq\left(5C+27/4\right)\epsilon(A)+e^{-\frac{\rho(A)}{2}}.$
(37)
Now, using that $k=2\mu(A)(t-r)$, we have
$\displaystyle\left((\psi(n)+1)\mu({T_{A}}>f_{A}-2n)\right)^{k}$
$\displaystyle\leq\left(e^{-\frac{\rho(A)}{2}+K\epsilon(A)}\right)^{k}$
$\displaystyle=e^{-\rho(A)\mu(A)t+\rho(A)\mu(A)r+2K\epsilon(A)\mu(A)t-2K\epsilon(A)\mu(A)r}$
$\displaystyle\leq e^{-\mu(A)t\left(\rho(A)-2K\epsilon(A)\right)}e^{\mu(A)r}$
$\displaystyle\leq e^{1/2}e^{-\mu(A)t\left(\rho(A)-C_{3}\epsilon(A)\right)}$
(38)
where the last inequality follows from $e^{\mu(A)r}\leq e^{\mu(A)f_{A}}$.
Therefore, it follows from (36) that the sum of (32) and (33) is bounded by
$\displaystyle
14(M+1)\epsilon_{\psi}(A)\mu(A)te^{-\mu(A)t\left(\rho(A)-C_{3}\epsilon(A)\right)}.$
We now turn to the case of $\phi$-mixing. We apply items (a)-2 and (b)-2 of
Proposition 3 to get an upper bound for the sum of (32) and (33):
$\displaystyle\left|\mu({T_{A}}>kf_{A}+r)-\mu({T_{A}}>kf_{A})\mu({T_{A}}>r)\right|+\left|\mu({T_{A}}>kf_{A})-\mu({T_{A}}>f_{A})^{k}\right|\mu({T_{A}}>r)$
$\displaystyle\leq
4\epsilon_{\phi}(A)\left(\mu({T_{A}}>f_{A}-2n)+\phi(n)\right)^{k}\left(1+(k-1)\left(\mu({T_{A}}>f_{A}-2n)+\phi(n)\right)^{-1}\right)$
$\displaystyle\leq
4\epsilon_{\phi}(A)\left(\mu({T_{A}}>f_{A}-2n)+\phi(n)\right)^{k}2k$
$\displaystyle\leq
16\epsilon_{\phi}(A)\mu(A)t\left(\mu({T_{A}}>f_{A}-2n)+\phi(n)\right)^{k}.$
Similarly to the $\psi$-mixing case, one obtains
$\displaystyle\left(\mu({T_{A}}>f_{A}-2n)+\phi(n)\right)^{k}\leq
e^{1/2}e^{-\mu(A)t\left(\rho(A)-C_{3}\epsilon(A)\right)}$
which implies in the $\phi$-mixing case that the sum of (32) and (33) is
bounded by
$27\epsilon_{\phi}(A)\mu(A)te^{-\mu(A)t\left(\rho(A)-C_{3}\epsilon(A)\right)}.$
Now, we will treat the cases $\psi$ and $\phi$ together to obtain upper bounds
for (34) and (35). In order to get an upper bound for (34), we apply (29) with
$t=f_{A}$:
$\displaystyle\left|\mu({T_{A}}>f_{A})-e^{-\rho(A)\mu(A)f_{A}}\right|$
$\displaystyle=\left|\mu({T_{A}}>f_{A})-e^{-\frac{\rho(A)}{2}}\right|$
$\displaystyle\leq\left(2C+1/2\right)\left(\tau(A)\mu(A)+f_{A}\mu(A)\epsilon(A)\right)$
$\displaystyle\leq\left(5C+5/4\right)\epsilon(A).$ (39)
Thus, applying Lemma 5 we have
$\displaystyle\quad\left|\mu({T_{A}}>f_{A})^{k}-e^{-\rho(A)\frac{k}{2}}\right|$
$\displaystyle\leq\sum_{i=1}^{k}\left|\mu({T_{A}}>f_{A})-e^{-\frac{\rho(A)}{2}}\right|\left(\max\left\\{\mu({T_{A}}>f_{A}),e^{-\frac{\rho(A)}{2}}\right\\}\right)^{k-1}.$
The max is bounded using (39) by
$\displaystyle e^{-\frac{\rho(A)}{2}}+\left(5C+5/4\right)\epsilon(A).$
The absolute value is likewise bounded using (39), and we get that the
above sum is bounded above by
$\displaystyle
k\,\left(5C+5/4\right)\epsilon(A)\,\left(e^{-\frac{\rho(A)}{2}}+\left(5C+5/4\right)\epsilon(A)\right)^{k-1}.$
(40)
Recalling that $k=2\mu(A)(t-r)$ and proceeding as we did for (37) and (38), we
get the following upper bound for (34)
$2\left(5C+5/4\right)\epsilon(A)\mu(A)t\,e^{-\mu(A)t\left(\rho(A)-C_{3}\epsilon(A)\right)}e^{1}\leq
7\left(4C+1\right)\epsilon(A)\mu(A)te^{-\mu(A)t\left(\rho(A)-C_{3}\epsilon(A)\right)}.$
To conclude the proof for hitting time, we apply (29) with $t=r$ to bound (35)
as follows
$\displaystyle\left|e^{-\rho(A)\frac{k}{2}}\mu({T_{A}}>r)-e^{-\rho(A)\mu(A)t}\right|$
$\displaystyle=e^{-\rho(A)\mu(A)t+\rho(A)\mu(A)r}\left|\mu({T_{A}}>r)-e^{-\rho(A)\mu(A)r}\right|$
$\displaystyle\leq\left(2C+1/2\right)e^{-\rho(A)\mu(A)t+\mu(A)f_{A}}\left(\tau(A)\mu(A)+r\mu(A)\epsilon(A)\right)$
$\displaystyle\leq\left(2C+1/2\right)\left(\tau(A)\mu(A)+f_{A}\mu(A)\epsilon(A)\right)e^{-\rho(A)\mu(A)t}e^{1/2}$
$\displaystyle\leq(17C+5)\epsilon(A)\mu(A)te^{-\mu(A)t\left(\rho(A)-C_{3}\epsilon(A)\right)}$
where the term $\mu(A)t$ follows from $1=2\mu(A)f_{A}\leq 2\mu(A)t$. ∎
###### Proof of Theorem 1 for return time, for $t>f_{A}$ and under
$\psi$-mixing.
We use again the triangle inequality to write
$\displaystyle\left|\mu_{A}({T_{A}}>t)-\rho(A)e^{-\rho(A)\mu(A)(t-\tau(A))}\right|$
$\displaystyle\leq\left|\mu_{A}({T_{A}}>kf_{A}+r)-\mu_{A}({T_{A}}>kf_{A})\mu({T_{A}}>r)\right|$
(41)
$\displaystyle+\left|\mu_{A}({T_{A}}>kf_{A})-\mu_{A}(T_{A}>f_{A})\mu({T_{A}}>f_{A})^{k-1}\right|\mu({T_{A}}>r)$
(42)
$\displaystyle+\left|\mu_{A}(T_{A}>f_{A})\mu({T_{A}}>f_{A})^{k-1}-\rho(A)e^{-\rho(A)\frac{k}{2}}\right|\mu({T_{A}}>r)$
(43)
$\displaystyle+\rho(A)e^{-\rho(A)\frac{k}{2}}\left|\mu({T_{A}}>r)-e^{-\rho(A)\mu(A)(r-\tau(A))}\right|.$
(44)
Applying items (a)-3 and (b)-3 of Proposition 3, the sum of (41) and (42) is
bounded by
$\displaystyle
2(M+1)\epsilon_{\psi}(A)((\psi(n)+1)\mu(T_{A}>f_{A}-2n))^{k-1}(1+(k-1)((\psi(n)+1)\mu(T_{A}>f_{A}-2n))^{-1})$
$\displaystyle\leq
2(M+1)\epsilon_{\psi}(A)((\psi(n)+1)\mu(T_{A}>f_{A}-2n))^{k-1}2k$
$\displaystyle\leq
8(M+1)\epsilon_{\psi}(A)\mu(A)t((\psi(n)+1)\mu(T_{A}>f_{A}-2n))^{k-1}.$
Replacing $k$ by $k-1$ in (38), the last term is bounded above by
$8(M+1)\epsilon_{\psi}(A)\mu(A)t\,e^{1}\,e^{-\mu(A)t\left(\rho(A)-C_{3}\epsilon(A)\right)}\leq
22(M+1)\epsilon_{\psi}(A)\mu(A)t\,e^{-\mu(A)t\left(\rho(A)-C_{3}\epsilon(A)\right)}.$
On the other hand, Lemma 5 gives us
$\displaystyle(43)$
$\displaystyle\leq\left(\max\left\\{\mu_{A}(T_{A}>f_{A}),\mu(T_{A}>f_{A}),e^{-\rho(A)/2}\right\\}\right)^{k-1}\left(\left|\mu_{A}(T_{A}>f_{A})-\rho(A)e^{-\rho(A)/2}\right|\right.$
$\displaystyle\left.+\sum_{i=1}^{k-1}\left|\mu(T_{A}>f_{A})-e^{-\rho(A)/2}\right|\right)$
The last sum is bounded by $(5C+5/4)(k-1)\epsilon(A)$ using (39). On the other
hand, applying (31) with $t=f_{A}$ and the MVT we obtain
$\displaystyle\left|\mu_{A}(T_{A}>f_{A})-\rho(A)e^{-\rho(A)/2}\right|$
$\displaystyle\leq\left|\mu_{A}(T_{A}>f_{A})-\rho(A)e^{-\rho(A)/2+\rho(A)\mu(A)\tau(A)}\right|+\rho(A)\left|e^{-\rho(A)/2+\rho(A)\mu(A)\tau(A)}-e^{-\rho(A)/2}\right|$
$\displaystyle\leq(3C+9/4)\epsilon(A)+\rho(A)\mu(A)\tau(A)e^{-\rho(A)(1/2-\mu(A)\tau(A))}$
$\displaystyle\leq(3C+17/4)\epsilon(A)$
since $e^{-\rho(A)(1/2-\mu(A)\tau(A))}\leq 1$ and $\mu(A)\tau(A)\leq
2\epsilon_{\psi}(A)$ for $n\geq n_{0}$.
Furthermore, the last inequality implies
$\mu_{A}(T_{A}>f_{A})\leq\rho(A)e^{-\rho(A)/2}+(3C+17/4)\epsilon(A)\leq
e^{-\rho(A)/2}+(5C+5/4)\epsilon(A)$
and by (39) we get
$\max\left\\{\mu_{A}(T_{A}>f_{A}),\mu(T_{A}>f_{A}),e^{-\rho(A)/2}\right\\}\leq
e^{-\rho(A)/2}+(5C+5/4)\epsilon(A).$
Therefore, as we saw in (40), we have
$\displaystyle(43)$
$\displaystyle\leq(5C+5/4)\epsilon(A)k\left(e^{-\rho(A)/2}+(5C+5/4)\epsilon(A)\right)^{k-1}$
$\displaystyle\leq
2(5C+5/4)\epsilon(A)\mu(A)t\,e^{1}\,e^{-\mu(A)t\left(\rho(A)-C_{3}\epsilon(A)\right)}$
$\displaystyle\leq(109M+116)\epsilon(A)\mu(A)te^{-\mu(A)t\left(\rho(A)-C_{3}\epsilon(A)\right)}.$
Finally, setting $t=r$ in (29) and applying the MVT once again, we get
$\displaystyle\left|\mu({T_{A}}>r)-e^{-\rho(A)\mu(A)(r-\tau(A))}\right|$
$\displaystyle\leq\left|\mu({T_{A}}>r)-e^{-\rho(A)\mu(A)r}\right|+\left|e^{-\rho(A)\mu(A)r}-e^{-\rho(A)\mu(A)(r-\tau(A))}\right|$
$\displaystyle\leq(2C+1/2)(\tau(A)\mu(A)+r\mu(A)\epsilon(A))+\rho(A)\mu(A)\tau(A)e^{-\rho(A)\mu(A)(r-\tau(A))}$
$\displaystyle\leq(2C+1/2)(2\epsilon(A)+f_{A}\mu(A)\epsilon(A))+(7/2)\epsilon(A)$
$\displaystyle\leq(5C+19/4)\epsilon(A).$
We justify the third inequality in two cases. If $r>\tau(A)$, then
$e^{-\rho(A)\mu(A)(r-\tau(A))}\leq 1$. Otherwise, if $r\leq\tau(A)$, then
$e^{-\rho(A)\mu(A)(r-\tau(A))}\leq e^{\rho(A)\mu(A)\tau(A)}\leq e^{1/2}$,
since $n\geq n_{0}$. Now just note that $\rho(A)\mu(A)\tau(A)\leq
2\epsilon(A)$.
Therefore, we finish the proof obtaining the following upper bound:
$\displaystyle(44)$
$\displaystyle\leq(5C+19/4)\epsilon(A)e^{\rho(A)\mu(A)r}e^{-\rho(A)\mu(A)t}$
$\displaystyle\leq(5C+19/4)\epsilon(A)2\mu(A)t\;e^{f_{A}\mu(A)}e^{-\mu(A)t(\rho(A)-C_{3}\epsilon(A))}$
$\displaystyle\leq(66M+82)\epsilon(A)\mu(A)te^{-\mu(A)t(\rho(A)-C_{3}\epsilon(A))}.$
∎
### 4.3 Proof of Theorem 2
###### Proof of Statement (a).
For each $x\in\mathcal{X}$ we define
$\tau(x):=\sup\\{\tau(x_{0}^{n-1}),n\geq 1\\}.$
Let $\mathcal{B}=\\{x\in\mathcal{X};\tau(x)=\infty\\}$ be the set of aperiodic
points of $\mathcal{X}$. For $x\in\mathcal{B}$, denote $A_{n}=A_{n}(x)$ and
consider the case $\tau(A_{n})<n$. Then, we have
$\displaystyle 1-\rho(A_{n})$
$\displaystyle=\mu_{A_{n}}\left(T_{A_{n}}=\tau(A_{n})\right)$
$\displaystyle=\mu_{A_{n}}\left(\sigma^{-n}\left(A_{n}^{(\tau(A_{n}))}\right)\right)$
$\displaystyle\leq\mu_{A_{n}}\left(\sigma^{-n-\lfloor\tau(A_{n})/2\rfloor}\left(A_{n}^{(\lceil\tau(A_{n})/2\rceil)}\right)\right)$
$\displaystyle\leq\mu\left(A_{n}^{(\lceil\tau(A_{n})/2\rceil)}\right)+\phi\left(\lfloor\tau(A_{n})/2\rfloor+1\right).$
Since $x\in\mathcal{B}$, we have $\tau(A_{n})\stackrel{{\scriptstyle
n}}{{\longrightarrow}}\infty$, which implies that the last expression
converges to zero. For the case $\tau(A_{n})\geq n$, we use the same argument
$\displaystyle 1-\rho(A_{n})$
$\displaystyle=\mu_{A_{n}}\left(\sigma^{-\tau(A_{n})}(A_{n})\right)$
$\displaystyle\leq\mu_{A_{n}}\left(\sigma^{-\tau(A_{n})-\lfloor n/2\rfloor}\left(A_{n}^{(\lceil n/2\rceil)}\right)\right)$
$\displaystyle\leq\mu\left(A_{n}^{(\lceil n/2\rceil)}\right)+\phi\left(\lfloor
n/2\rfloor+1\right)$
which also converges to zero. Therefore, $\rho(A_{n})\stackrel{{\scriptstyle
n}}{{\longrightarrow}}1$. We conclude the proof by noting that
$\mathcal{X}-\mathcal{B}$ is a countable set, and thus $\mu(\mathcal{B})=1$. ∎
###### Proof of Statement (b).
By Lemma 2, for $\psi$-mixing or summable $\phi$-mixing measures, there exists
$n_{0}\geq 1$ such that
$\forall n\geq n_{0},\;\forall A\in\mathcal{C}_{n},\;\;\mu(A)^{-1}>\tau(A)\,.$
Now, since $\mu_{A}(T_{A}>j)$, $j\geq 1$, is a nonincreasing sequence, the
potential well is greater than or equal to the arithmetic mean of the
subsequent terms, up to index $\mu(A)^{-1}-1$:
$\displaystyle\rho(A)=\mu_{A}(T_{A}>\tau(A))$
$\displaystyle\geq\frac{1}{\mu(A)^{-1}-\tau(A)}\sum_{j=\tau(A)}^{\mu(A)^{-1}-1}\mu_{A}(T_{A}>j)$
$\displaystyle\geq\frac{1}{\mu(A)^{-1}}\sum_{j=\tau(A)}^{\mu(A)^{-1}-1}\mu_{A}(T_{A}>j)$
$\displaystyle=\sum_{j=\tau(A)}^{\mu(A)^{-1}-1}\mu(A;T_{A}>j)$
$\displaystyle=\sum_{j=\tau(A)}^{\mu(A)^{-1}-1}\mu(T_{A}=j+1).$ (45)
In the last equality we used Lemma 4. By (45) one obtains
$\displaystyle\rho(A)$
$\displaystyle\geq\mu(T_{A}\leq\mu(A)^{-1})-\mu(T_{A}\leq\tau(A))$
$\displaystyle=\mu(T_{A}\leq\mu(A)^{-1})-\tau(A)\mu(A)$ (46)
where the equality follows by stationarity and the definition of $\tau(A)$.
By Lemmas 1 and 2, we know that $\tau(A)\mu(A)\stackrel{{\scriptstyle
n}}{{\longrightarrow}}0$ uniformly. Thus, it is enough to find a strictly
positive lower bound for $\mu(T_{A}\leq\mu(A)^{-1})$. Let
$N=\sum_{j=1}^{\mu(A)^{-1}}\mathds{1}_{A}\circ\sigma^{j}\,$
which counts the number of occurrences of $A$ up to $\mu(A)^{-1}$. By the so-called _second moment method_,
$\mu(T_{A}\leq\mu(A)^{-1})=\mu(N\geq
1)\geq\frac{\mathds{E}(N)^{2}}{\mathds{E}(N^{2})}.$ (47)
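The bound (47) is the second moment inequality $\mu(N\geq 1)\geq\mathds{E}(N)^{2}/\mathds{E}(N^{2})$, a consequence of Cauchy-Schwarz. A worked numerical instance where everything is computable in closed form (the i.i.d. Bernoulli source is an illustrative assumption, under which $N$ is binomial):

```python
# Second moment method check for an i.i.d. Bernoulli(p) source: here
# N ~ Binomial(m, p) with horizon m = 1/p, so E[N] = 1 and
# E[N^2] = 1 + m p (1 - p), while mu(N >= 1) = 1 - (1 - p)^m exactly.
p = 0.01
m = round(1 / p)
prob_hit = 1 - (1 - p) ** m             # mu(T_A <= mu(A)^{-1}) = mu(N >= 1)
second_moment = 1 + m * p * (1 - p)     # E[N^2] = Var(N) + E[N]^2
assert prob_hit >= 1 / second_moment    # the bound (47)
print(f"mu(N >= 1) = {prob_hit:.4f} >= E[N]^2 / E[N^2] = {1 / second_moment:.4f}")
```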
Stationarity gives $\mathds{E}(N)=1$. It remains to prove that
$\mathds{E}(N^{2})$ is bounded above by a constant. Expanding $N^{2}$, using
stationarity and $\mathds{E}(N)=1$ we obtain
$\mathds{E}(N^{2})=1+2\sum_{j=1}^{\mu(A)^{-1}}(\mu(A)^{-1}-j)\,\mu(A\cap\sigma^{-j}(A))\,.$
(48)
Let us first consider the $\phi$-mixing case. For $j\geq n$, mixing gives
$\mu(A\cap\sigma^{-j}(A))\leq\mu(A)^{2}+\mu(A)\phi(j-n+1)$. Thus,
$\displaystyle\sum_{j=n}^{\mu(A)^{-1}}(\mu(A)^{-1}-j)\,\mu(A\cap\sigma^{-j}(A))\leq\frac{1}{2}+\sum_{\ell=0}^{\mu(A)^{-1}-n}\phi(\ell+1)$
(49)
where we used $\mu(A)^{-1}-j\leq\mu(A)^{-1}$ to get the last term.
For $1\leq j\leq n-1$, as before $A^{(j)}\subset A^{(\lceil j/2\rceil)}$, thus
$\displaystyle\mu\left(A\cap\sigma^{-j}\left(A\right)\right)$
$\displaystyle=\mu\left(A\cap\sigma^{-n}\left(A^{(j)}\right)\right)$
$\displaystyle\leq\mu\left(A\cap\sigma^{-n-\lfloor j/2\rfloor}\left(A^{(\lceil
j/2\rceil)}\right)\right)$
$\displaystyle\leq\mu(A)\left(\mu\left(A^{\left(\lceil j/2\rceil\right)}\right)+\phi(\lfloor j/2\rfloor+1)\right)$
$\displaystyle\leq\mu(A)\left(Ce^{-c\lceil j/2\rceil}+\phi(\lfloor j/2\rfloor+1)\right)\,.$
Therefore,
$\displaystyle\sum_{j=1}^{n-1}(\mu(A)^{-1}-j)\,\mu(A\cap\sigma^{-j}(A))\leq\sum_{j=1}^{n-1}\left(Ce^{-c\lceil j/2\rceil}+\phi(\lfloor j/2\rfloor+1)\right).$ (50)
Hence, by (49) and (50), the summability of $\phi$ concludes the proof in
the $\phi$-mixing case.
If $\mu$ is $\psi$-mixing, we separate the sum in (48) in three parts. First,
recall the definition of $g_{0}$ in Section 3.2. For $1\leq j\leq g_{0}$, we
bound the sum as follows
$\sum_{j=1}^{g_{0}}(\mu(A)^{-1}-j)\,\mu(A\cap\sigma^{-j}(A))\leq\sum_{j=1}^{g_{0}}\mu(A)^{-1}\mu(A)=g_{0}.$
For $g_{0}+1\leq j\leq g_{0}+n-1$, we have by $\psi$-mixing
$(\mu(A)^{-1}-j)\,\mu(A\cap\sigma^{-j}(A))\leq\mu(A)^{-1}\mu\left(A\cap\sigma^{-n-g_{0}}\left(A^{(j-g_{0})}\right)\right)\leq
M\,\mu(A)^{-1}\mu(A)\mu\left(A^{(\ell)}\right)$
where we denoted $\ell=j-g_{0}$. Thus
$\sum_{j=g_{0}+1}^{g_{0}+n-1}(\mu(A)^{-1}-j)\,\mu(A\cap\sigma^{-j}(A))\leq
M\sum_{\ell=1}^{n-1}Ce^{-c\ell}.$
Finally, applying $\psi$-mixing again,
$\sum_{j=g_{0}+n}^{\mu(A)^{-1}}(\mu(A)^{-1}-j)\,\mu(A\cap\sigma^{-j}(A))\leq
M\sum_{j=n+g_{0}}^{\mu(A)^{-1}}\mu(A)^{-1}\,\mu(A)^{2}\leq M,$
concluding the proof of the $\psi$-mixing case.
∎
## References
* [AB93] David J Aldous and Mark Brown. Inequalities for rare events in time-reversible Markov chains II. Stochastic Processes and their Applications, 44(1):15–25, 1993.
* [Aba01] Miguel Abadi. Exponential approximation for hitting times in mixing processes. Math. Phys. Electron. J, 7(2):1–19, 2001.
* [Aba04] Miguel Abadi. Sharp error terms and necessary conditions for exponential hitting times in mixing processes. Ann. Probab., 32(1A):243–264, 2004.
* [Aba06] Miguel Abadi. Hitting, returning and the short correlation function. Bulletin of the Brazilian Mathematical Society, 37(4):593–609, 2006.
* [AC15] Miguel Abadi and Liliam Cardeño. Rényi entropies and large deviations for the first match function. IEEE Trans. Inform. Theory, 61(4):1629–1639, 2015.
* [ACG15] Miguel Abadi, Liliam Cardeño, and Sandro Gallo. Potential well spectrum and hitting time in renewal processes. J. Stat. Phys., 159(5):1087–1106, 2015.
* [ACG19] M Abadi, J-R Chazottes, and S Gallo. The complete $l^{q}$-spectrum and large deviations for return times for equilibrium states with summable potentials. arXiv preprint arXiv:1902.03441, 2019.
* [ACS03] Valentin Afraimovich, Jean René Chazottes, and Benoît Saussol. Pointwise dimensions for Poincaré recurrences associated with maps and special flows. Discrete Contin. Dyn. Syst., 9(2):263–280, 2003.
* [AGRM17] Miguel Abadi, Sandro Gallo, and Erika Alejandra Rada-Mora. The shortest possible return time of $\beta$-mixing processes. IEEE Transactions on Information Theory, 64(7):4895–4906, 2017.
* [AL13] Miguel Abadi and Rodrigo Lambert. The distribution of the short-return function. Nonlinearity, 26(5):1143–1162, 2013.
* [AS11] Miguel Abadi and Benoit Saussol. Hitting and returning to rare events for all alpha-mixing processes. Stochastic Processes and their Applications, 121(2):314–323, 2011.
* [AS16] Miguel Abadi and Benoît Saussol. Almost sure convergence of the clustering factor in $\alpha$-mixing processes. Stochastics and Dynamics, 16(03):1660016, 2016.
* [AV08] Miguel Abadi and Sandro Vaienti. Large deviations for short recurrence. Discrete and Continuous Dynamical Systems-Series A, 21(3):729–747, 2008.
* [AV09] Miguel Abadi and Nicolas Vergne. Sharp error terms for return time statistics under mixing conditions. J. Theoret. Probab., 22(1):18–37, 2009.
* [Bra05] Richard C. Bradley. Basic properties of strong mixing conditions. A survey and some open questions. Probab. Surv., 2:107–144, 2005. Update of, and a supplement to, the 1986 original.
* [CGS99] Pierre Collet, Antonio Galves, and Bernard Schmitt. Repetition times for Gibbsian sources. Nonlinearity, 12(4):1225–1237, 1999.
* [CU05] Jean-René Chazottes and Edgardo Ugalde. Entropy estimation and fluctuations of hitting and recurrence times for Gibbsian sources. Discrete Contin. Dyn. Syst. Ser. B, 5(3):565–586, 2005.
* [FFT10] Ana Cristina Moreira Freitas, Jorge Milhazes Freitas, and Mike Todd. Hitting time statistics and extreme value theory. Probability Theory and Related Fields, 147(3-4):675–710, 2010.
* [Fre13] Jorge Milhazes Freitas. Extremal behaviour of chaotic dynamics. Dynamical Systems, 28(3):302–332, 2013.
* [GS97] A. Galves and B. Schmitt. Inequalities for hitting times in mixing dynamical systems. Random Comput. Dynam., 5(4):337–347, 1997.
* [HV10] Nicolai Haydn and Sandro Vaienti. The Rényi entropy function and the large deviation of short return times. Ergodic Theory Dynam. Systems, 30(1):159–179, 2010.
* [LFdF+16] Valerio Lucarini, Davide Faranda, Jorge Miguel Milhazes de Freitas, Mark Holland, Tobias Kuna, Matthew Nicol, Mike Todd, Sandro Vaienti, et al. Extremes and recurrence in dynamical systems. John Wiley & Sons, 2016.
* [LLR12] Malcolm R Leadbetter, Georg Lindgren, and Holger Rootzén. Extremes and related properties of random sequences and processes. Springer Science & Business Media, 2012.
* [MS94] Katalin Marton and Paul Shields. Almost sure waiting time results for weak and very weak Bernoulli processes. In Proceedings of 1994 IEEE International Symposium on Information Theory, page 180. IEEE, 1994.
* [OW93] Donald Samuel Ornstein and Benjamin Weiss. Entropy and data compression schemes. IEEE Trans. Inform. Theory, 39(1):78–83, 1993.
* [Res13] Sidney I Resnick. Extreme values, regular variation and point processes. Springer, 2013.
* [Shi93] Paul C Shields. Waiting times: positive and negative results on the Wyner-Ziv problem. Journal of Theoretical Probability, 6(3):499–519, 1993.
* [STV02] B. Saussol, S. Troubetzkoy, and S. Vaienti. Recurrence, dimensions, and Lyapunov exponents. J. Statist. Phys., 106(3-4):623–634, 2002.
* [WZ89] Aaron D Wyner and Jacob Ziv. Some asymptotic properties of the entropy of a stationary ergodic data source with applications to data compression. IEEE Transactions on Information Theory, 35(6):1250–1258, 1989.